# EdgeBrain: Ollama Agentic Framework

An open-source agentic framework for building AI agents with Ollama-based models.
A powerful, extensible framework for building autonomous AI agents using Ollama-based language models. This framework provides a complete solution for creating, orchestrating, and managing AI agents that can work independently or collaboratively to solve complex tasks.
## Features

### Core Capabilities
- Multi-Agent Orchestration: Coordinate multiple agents working together on complex tasks
- Flexible Agent Architecture: Create specialized agents with custom roles, capabilities, and behaviors
- Tool Integration: Extensible tool system for web search, file operations, calculations, and custom tools
- Memory Management: Persistent memory system with semantic search capabilities
- Workflow Engine: Define and execute complex multi-step workflows with dependencies
- Inter-Agent Communication: Built-in messaging system for agent collaboration
- Ollama Integration: Native support for all Ollama models with automatic model management
### Advanced Features
- Asynchronous Processing: Full async/await support for high-performance operations
- Vector Memory: Semantic memory storage with embedding-based retrieval
- Task Scheduling: Priority-based task queue with automatic assignment
- Error Handling: Robust error handling and recovery mechanisms
- Extensible Architecture: Plugin-based system for easy customization and extension
- Comprehensive Testing: Full test suite with mock integrations for development
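To make the task-scheduling feature concrete, here is a minimal, self-contained sketch of a priority-based task queue. It is an illustration of the idea only, not the framework's actual scheduler (which also handles automatic agent assignment and async execution); the `TaskQueue` name and its methods are invented for this example.

```python
import heapq
import itertools

class TaskQueue:
    """Minimal priority task queue: lower number = higher priority.

    Illustrative stand-in only; not the framework's real scheduler.
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order among equal priorities

    def push(self, priority: int, description: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), description))

    def pop(self) -> str:
        _, _, description = heapq.heappop(self._heap)
        return description

queue = TaskQueue()
queue.push(2, "summarize findings")
queue.push(1, "research AI trends")
print(queue.pop())  # highest-priority task first: "research AI trends"
```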
## Quick Start

### Prerequisites
- Python 3.11 or higher
- Ollama installed and running
- SQLite (included with Python)
### Installation
1. Clone the repository:

   ```bash
   git clone https://github.com/madnansultandotme/ollama-agentic-framework.git
   cd ollama-agentic-framework
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Install the framework in editable mode:

   ```bash
   pip install -e .
   ```
### Basic Usage
Here's a simple example to get you started:
```python
import asyncio

from src.core.orchestrator import AgentOrchestrator
from src.integration.ollama_client import OllamaIntegrationLayer
from src.tools.tool_registry import ToolRegistry
from src.memory.memory_manager import MemoryManager


async def main():
    # Initialize components
    ollama_integration = OllamaIntegrationLayer()
    await ollama_integration.initialize()

    tool_registry = ToolRegistry()
    memory_manager = MemoryManager()

    # Create orchestrator
    orchestrator = AgentOrchestrator(
        ollama_integration=ollama_integration,
        tool_registry=tool_registry,
        memory_manager=memory_manager
    )

    # Create an agent
    agent = orchestrator.register_agent(
        agent_id="researcher_001",
        role="Research Specialist",
        description="Conducts research and analysis",
        model="llama3.1"
    )

    # Start the orchestrator
    await orchestrator.start()

    # Create and assign a task
    task_id = await orchestrator.create_task(
        description="Research the latest trends in artificial intelligence"
    )
    await orchestrator.assign_task_to_agent(task_id, agent.agent_id)

    # Monitor execution
    # ... (see examples for complete implementation)

    await orchestrator.stop()


if __name__ == "__main__":
    asyncio.run(main())
```
## Documentation

### Core Components

#### Agent Orchestrator
The central control unit that manages agents, tasks, and workflows. It handles:
- Agent lifecycle management
- Task distribution and execution
- Inter-agent communication
- Workflow orchestration
#### Agents
Autonomous entities with specific roles and capabilities. Each agent has:
- Unique identity and role
- Custom capabilities and tools
- Memory and learning systems
- Goal-oriented behavior
#### Tool Registry
Extensible system for managing tools that agents can use:
- Built-in tools (web search, file operations, calculations)
- Custom tool development
- Tool discovery and validation
- Secure tool execution
#### Memory Manager
Persistent storage system for agent knowledge:
- Short-term context memory
- Long-term knowledge storage
- Semantic search capabilities
- Memory importance scoring
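To illustrate what embedding-based semantic retrieval means in practice, here is a tiny self-contained sketch using cosine similarity over toy vectors. The real memory manager works with full-dimension embeddings (e.g. 384-dim, per the configuration below) and persistent storage; the `retrieve` function and the sample vectors here are invented for this example.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors (assumes neither is all zeros)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "memories" with hand-made 3-dim embeddings (real ones come from an embedding model)
memories = {
    "ollama runs models locally": [0.9, 0.1, 0.0],
    "agents communicate via messages": [0.1, 0.8, 0.3],
}

def retrieve(query_vec, k=1):
    # Rank stored memories by similarity to the query embedding
    ranked = sorted(memories, key=lambda m: cosine(query_vec, memories[m]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # -> ['ollama runs models locally']
```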
### Architecture Overview
```
┌───────────────────────────────────────────────────────────┐
│                    Agent Orchestrator                     │
├───────────────────────────────────────────────────────────┤
│  Task Management │ Agent Lifecycle  │ Communication       │
│  Workflow Engine │ Resource Mgmt    │ Event Handling      │
└───────────────────────────────────────────────────────────┘
          │                 │                  │
          ▼                 ▼                  ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│     Agents      │ │  Tool Registry  │ │ Memory Manager  │
│                 │ │                 │ │                 │
│ • Research      │ │ • Web Search    │ │ • Vector Store  │
│ • Writing       │ │ • File Ops      │ │ • Semantic      │
│ • Analysis      │ │ • Calculator    │ │   Search        │
│ • Custom        │ │ • Custom Tools  │ │ • Persistence   │
└─────────────────┘ └─────────────────┘ └─────────────────┘
          │                 │                  │
          └─────────────────┼──────────────────┘
                            ▼
                  ┌─────────────────┐
                  │  Ollama Client  │
                  │                 │
                  │ • Model Mgmt    │
                  │ • Generation    │
                  │ • Tool Calling  │
                  │ • Streaming     │
                  └─────────────────┘
```
## Examples

The framework includes several comprehensive examples:

### 1. Simple Research Agent

A basic agent that conducts research and provides summaries.

```bash
python examples/simple_research_agent.py
```

### 2. Multi-Agent Collaboration

Multiple agents working together to create a technical blog post.

```bash
python examples/multi_agent_collaboration.py
```

### 3. Code Generation Agent

An agent specialized in software development and code generation.

```bash
python examples/code_generation_agent.py
```

### 4. Comprehensive Demo

A full demonstration of all framework capabilities.

```bash
python examples/comprehensive_demo.py
```
## Configuration

### Ollama Configuration
The framework automatically detects and configures Ollama models. You can customize the configuration:
```python
ollama_integration = OllamaIntegrationLayer(
    base_url="http://localhost:11434",  # Ollama server URL
    default_model="llama3.1",           # Default model to use
    timeout=30                          # Request timeout in seconds
)
```
### Memory Configuration
Configure the memory system for your needs:
```python
memory_manager = MemoryManager(
    db_path="agent_memory.db",  # Database file path
    embedding_dim=384           # Embedding vector dimension
)
```
### Tool Configuration
Add custom tools to extend agent capabilities:
```python
from src.tools.tool_registry import BaseTool

class CustomTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="custom_tool",
            description="My custom tool",
            category="custom"
        )

    async def execute(self, param: str) -> dict:
        # Tool implementation
        return {"result": f"Processed: {param}"}

# Register the tool
tool_registry.register_tool(CustomTool())
```
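Because the example above depends on the installed framework, here is a self-contained variant with a stand-in `BaseTool` (its constructor signature is inferred from the example, not taken from the framework's source) so you can exercise the async `execute` contract directly:

```python
import asyncio

class BaseTool:
    """Stand-in for src.tools.tool_registry.BaseTool; interface assumed for illustration."""
    def __init__(self, name: str, description: str, category: str):
        self.name = name
        self.description = description
        self.category = category

class CustomTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="custom_tool",
            description="My custom tool",
            category="custom"
        )

    async def execute(self, param: str) -> dict:
        return {"result": f"Processed: {param}"}

# Run the tool's coroutine outside the orchestrator
result = asyncio.run(CustomTool().execute("hello"))
print(result)  # {'result': 'Processed: hello'}
```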
## Testing
Run the test suite to ensure everything is working correctly:
```bash
# Run all tests
python -m pytest tests/ -v

# Run specific test files
python -m pytest tests/test_ollama_integration.py -v
python -m pytest tests/test_tool_registry.py -v

# Run with coverage
python -m pytest tests/ --cov=src --cov-report=html
```
## Project Structure
```
ollama_agentic_framework/
├── src/                          # Source code
│   ├── core/                     # Core framework components
│   │   ├── agent.py              # Agent implementation
│   │   └── orchestrator.py       # Orchestrator implementation
│   ├── integration/              # External integrations
│   │   └── ollama_client.py      # Ollama integration
│   ├── tools/                    # Tool system
│   │   └── tool_registry.py      # Tool registry and built-in tools
│   ├── memory/                   # Memory management
│   │   └── memory_manager.py     # Memory system implementation
│   └── __init__.py
├── tests/                        # Test suite
│   ├── test_ollama_integration.py
│   ├── test_tool_registry.py
│   └── __init__.py
├── examples/                     # Usage examples
│   ├── simple_research_agent.py
│   ├── multi_agent_collaboration.py
│   ├── code_generation_agent.py
│   └── comprehensive_demo.py
├── docs/                         # Documentation
├── requirements.txt              # Dependencies
├── setup.py                      # Package setup
└── README.md                     # This file
```
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
### Development Setup
- Fork the repository
- Create a virtual environment
- Install development dependencies:

  ```bash
  pip install -r requirements.txt
  pip install -e .
  ```
- Run tests to ensure everything works
- Make your changes
- Add tests for new functionality
- Submit a pull request
### Code Style
- Follow PEP 8 guidelines
- Use type hints for all functions
- Add docstrings for all public methods
- Maintain test coverage above 90%
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Ollama for providing the foundation for local LLM inference
- The open-source AI community for inspiration and best practices
- Contributors and users who help improve this framework
## Support
- Documentation: Full documentation
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: info.adnansultan@gmail.com
## Roadmap
### Version 1.0 (Current)

- Core agent framework
- Ollama integration
- Basic tool system
- Memory management
- Multi-agent orchestration

### Version 1.1 (Planned)

- Enhanced tool ecosystem
- Web interface for agent management
- Advanced workflow templates
- Performance optimizations

### Version 2.0 (Future)

- Multi-modal agent support
- Distributed agent networks
- Advanced learning algorithms
- Enterprise features
Built with ❤️ by Muhammad Adnan Sultan
## Download files
### Source distribution: edgebrain-0.1.1.tar.gz

- Size: 83.6 kB
- Tags: Source
- Uploaded via: twine/6.1.0 on CPython/3.12.3
- Uploaded using Trusted Publishing? No

| Algorithm | Hash digest |
|---|---|
| SHA256 | 422f9312b7a0726023b4aec041048b6099e85495a9fec660391102eb8e0ee99d |
| MD5 | 4df75ae0f9201ca4e31f08a4d32d7af6 |
| BLAKE2b-256 | 387ac2609a155ed62062a05897cdc6df8a9df6060aaed4fa37e8aa8462d927ea |
### Built distribution: edgebrain-0.1.1-py3-none-any.whl

- Size: 28.0 kB
- Tags: Python 3
- Uploaded via: twine/6.1.0 on CPython/3.12.3
- Uploaded using Trusted Publishing? No

| Algorithm | Hash digest |
|---|---|
| SHA256 | b115c2601340cc3b6b4f0ee3a6b24e21d780128b31c465e37e70834768c0e6dd |
| MD5 | f94dcf010b4e66df5359276bf368d82e |
| BLAKE2b-256 | 94665af62bcaf0cf9573d42b59896ca9147d8cf1874e709e7fac3eb64ba7b38a |