An open-source agentic framework for building AI agents with Ollama-based models
EdgeBrain: Ollama Agentic Framework
A powerful, extensible framework for building autonomous AI agents using Ollama-based language models. This framework provides a complete solution for creating, orchestrating, and managing AI agents that can work independently or collaboratively to solve complex tasks.
✨ Features
Core Capabilities
- Multi-Agent Orchestration: Coordinate multiple agents working together on complex tasks
- Flexible Agent Architecture: Create specialized agents with custom roles, capabilities, and behaviors
- Async Ollama Integration: Native support for the official Ollama Python client with async/await patterns
- Code Generation Engine: Specialized agents for software development using qwen2.5:3b and other models
- Tool Integration: Extensible tool system for web search, file operations, calculations, and custom tools
- Memory Management: Persistent memory system with semantic search capabilities
- Workflow Engine: Define and execute complex multi-step workflows with dependencies
- Inter-Agent Communication: Built-in messaging system for agent collaboration
Advanced Features
- Asynchronous Processing: Full async/await support for high-performance operations using AsyncClient
- Real-time Code Generation: Direct integration with qwen2.5:3b for instant code creation
- Vector Memory: Semantic memory storage with embedding-based retrieval
- Task Scheduling: Priority-based task queue with automatic assignment
- Error Handling: Robust error handling and recovery mechanisms with graceful fallbacks
- Extensible Architecture: Plugin-based system for easy customization and extension
- Comprehensive Testing: Full test suite with mock integrations for development
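Some of these features can be illustrated in plain Python. For example, the priority-based task queue behaves roughly like the sketch below (a simplified illustration built on `heapq`, not the framework's actual implementation; `TaskQueue` and its methods are hypothetical names):

```python
import heapq
import itertools

class TaskQueue:
    """Minimal priority-based task queue: lower number = higher priority."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def push(self, priority: int, description: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), description))

    def pop(self) -> str:
        # Returns the highest-priority (lowest number) pending task
        _, _, description = heapq.heappop(self._heap)
        return description

queue = TaskQueue()
queue.push(2, "summarize research notes")
queue.push(1, "generate fibonacci function")
queue.push(2, "write blog outline")
print(queue.pop())  # -> generate fibonacci function
```

Tasks with equal priority come back in insertion order, which is the behavior most schedulers of this kind aim for.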
🚀 Quick Start
Prerequisites
- Python 3.11 or higher
- Ollama installed and running
- Required models: ollama pull qwen2.5:3b (for code generation)
- SQLite (included with Python)
Installation
Quick Install from PyPI
# Install EdgeBrain
pip install edgebrain
# Install official Ollama async client
pip install ollama
# Pull recommended models
ollama pull qwen2.5:3b # Fast code generation
ollama pull llama3.1 # General purpose
Development Installation
- Clone the repository:
git clone https://github.com/madnansultandotme/ollama-agentic-framework.git
cd ollama-agentic-framework
- Install dependencies:
pip install -r requirements.txt
pip install ollama # Official Ollama Python client
- Install the framework:
pip install -e .
Basic Usage
Here's a simple example using EdgeBrain from PyPI:
import asyncio
from edgebrain.core.orchestrator import AgentOrchestrator
from edgebrain.integration.ollama_client import OllamaIntegrationLayer

async def main():
    # Initialize Ollama integration
    ollama_integration = OllamaIntegrationLayer()
    await ollama_integration.initialize()

    # Create orchestrator
    orchestrator = AgentOrchestrator(ollama_integration=ollama_integration)

    # Register an agent
    agent = orchestrator.register_agent(
        agent_id="assistant",
        role="Research Assistant",
        capabilities=["research", "analysis"]
    )

    # Assign a task
    task_id = await orchestrator.assign_task(
        agent_id="assistant",
        task_description="Research the benefits of async programming",
        context={"focus": "Python development"}
    )

    # Get results
    results = await orchestrator.wait_for_completion(task_id)
    print(f"Results: {results}")

    await orchestrator.shutdown()

if __name__ == "__main__":
    asyncio.run(main())
Quick Start: Simple Code Generation
Create a basic code generation agent:
import asyncio
import ollama

async def generate_code():
    client = ollama.AsyncClient()
    response = await client.chat(
        model="qwen2.5:3b",
        messages=[
            {"role": "system", "content": "You are a Python expert."},
            {"role": "user", "content": "Create a function to calculate fibonacci numbers"}
        ]
    )
    print(response['message']['content'])

asyncio.run(generate_code())
Advanced Usage: Full Framework Setup

A fuller setup wires a tool registry and memory manager into the orchestrator. The opening lines of this example were truncated in the original; the setup below is reconstructed from the surrounding calls, and the ToolRegistry/MemoryManager import paths are assumptions:

import asyncio
from edgebrain.core.orchestrator import AgentOrchestrator
from edgebrain.integration.ollama_client import OllamaIntegrationLayer
from edgebrain.tools.tool_registry import ToolRegistry      # assumed import path
from edgebrain.memory.memory_manager import MemoryManager   # assumed import path

async def main():
    ollama_integration = OllamaIntegrationLayer()
    await ollama_integration.initialize()
    tool_registry = ToolRegistry()
    memory_manager = MemoryManager(db_path="agent_memory.db")

    orchestrator = AgentOrchestrator(
        ollama_integration=ollama_integration,
        tool_registry=tool_registry,
        memory_manager=memory_manager
    )

    # Create an agent
    agent = orchestrator.register_agent(
        agent_id="researcher_001",
        role="Research Specialist",
        description="Conducts research and analysis",
        model="llama3.1"
    )

    # Start the orchestrator
    await orchestrator.start()

    # Create and assign a task
    task_id = await orchestrator.create_task(
        description="Research the latest trends in artificial intelligence"
    )
    await orchestrator.assign_task_to_agent(task_id, agent.agent_id)

    # Monitor execution
    # ... (see examples for complete implementation)

    await orchestrator.stop()

if __name__ == "__main__":
    asyncio.run(main())
Direct Code Generation Example
For immediate code generation using the async Ollama client:
import asyncio
from ollama import AsyncClient

async def generate_code():
    client = AsyncClient()

    # Simple code generation
    message = {
        'role': 'user',
        'content': 'Create a Python function to calculate factorial'
    }
    response = await client.chat(model='qwen2.5:3b', messages=[message])
    print(response.message.content)

    # With system prompt for better results
    messages = [
        {
            'role': 'system',
            'content': 'You are a Python expert. Write clean, documented code.'
        },
        {
            'role': 'user',
            'content': 'Create a Fibonacci sequence generator with error handling'
        }
    ]
    response = await client.chat(model='qwen2.5:3b', messages=messages)

    # Save generated code
    with open('generated_fibonacci.py', 'w') as f:
        f.write(response.message.content)

asyncio.run(generate_code())
📚 Documentation
Core Components
Agent Orchestrator
The central control unit that manages agents, tasks, and workflows. It handles:
- Agent lifecycle management
- Task distribution and execution
- Inter-agent communication
- Workflow orchestration
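The workflow-orchestration idea — steps that declare dependencies and run in a valid order — can be sketched with the standard library's graphlib (a toy illustration with made-up step names, not the framework's actual engine):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each step maps to the set of steps it depends on
workflow = {
    "research": set(),
    "outline": {"research"},
    "draft": {"outline"},
    "review": {"draft", "research"},
}

# static_order() yields every step only after all of its dependencies
order = list(TopologicalSorter(workflow).static_order())
print(order)
```

A real engine would also dispatch each ready step to an agent and track completion, but the dependency resolution reduces to exactly this topological sort.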
Agents
Autonomous entities with specific roles and capabilities. Each agent has:
- Unique identity and role
- Custom capabilities and tools
- Memory and learning systems
- Goal-oriented behavior
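Collaboration between such agents usually reduces to message passing. A minimal sketch using an asyncio.Queue as the shared channel (the researcher/writer roles here are hypothetical, not framework classes):

```python
import asyncio

async def main() -> str:
    channel: asyncio.Queue = asyncio.Queue()  # shared message channel

    async def researcher() -> None:
        # Hypothetical agent publishing a finding for a peer agent
        await channel.put({"from": "researcher", "content": "3 key AI trends"})

    async def writer() -> str:
        # Peer agent consumes the message and acts on it
        msg = await channel.get()
        return f"draft based on: {msg['content']}"

    await researcher()
    return await writer()

print(asyncio.run(main()))
```

The framework's built-in messaging system layers addressing and lifecycle management on top of this same producer/consumer pattern.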
Tool Registry
Extensible system for managing tools that agents can use:
- Built-in tools (web search, file operations, calculations)
- Custom tool development
- Tool discovery and validation
- Secure tool execution
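Conceptually, a registry maps tool names to callables and validates lookups before execution. A minimal sketch (an illustration only, not the framework's ToolRegistry; the class and method names are assumptions):

```python
class SimpleToolRegistry:
    """Toy registry: tools register by name; lookups are validated."""

    def __init__(self):
        self._tools = {}

    def register(self, name: str, func) -> None:
        if name in self._tools:
            raise ValueError(f"tool {name!r} already registered")
        self._tools[name] = func

    def execute(self, name: str, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = SimpleToolRegistry()
# A calculator tool; empty __builtins__ restricts eval to plain arithmetic
registry.register("calculator", lambda expression: eval(expression, {"__builtins__": {}}))
print(registry.execute("calculator", expression="2 + 3 * 4"))  # -> 14
```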
Memory Manager
Persistent storage system for agent knowledge:
- Short-term context memory
- Long-term knowledge storage
- Semantic search capabilities
- Memory importance scoring
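Embedding-based retrieval reduces to nearest-neighbor search over vectors. A toy sketch with hand-made 3-dimensional "embeddings" and cosine similarity (a real system would use a proper embedding model; all data here is invented for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy 3-dim "embeddings" standing in for real model output
memories = {
    "agents coordinate via messages": [0.9, 0.1, 0.0],
    "fibonacci is a numeric sequence": [0.0, 0.2, 0.9],
    "orchestrator assigns tasks":      [0.6, 0.5, 0.2],
}

query = [0.85, 0.2, 0.05]  # imagined embedding of "how do agents talk?"
best = max(memories, key=lambda text: cosine(query, memories[text]))
print(best)  # -> agents coordinate via messages
```

Importance scoring then becomes a weighting applied on top of this similarity ranking.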
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│                     Agent Orchestrator                      │
├─────────────────────────────────────────────────────────────┤
│  Task Management  │  Agent Lifecycle  │   Communication     │
│  Workflow Engine  │  Resource Mgmt    │   Event Handling    │
└─────────────────────────────────────────────────────────────┘
          │                  │                  │
          ▼                  ▼                  ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│     Agents      │ │  Tool Registry  │ │ Memory Manager  │
│                 │ │                 │ │                 │
│ • Research      │ │ • Web Search    │ │ • Vector Store  │
│ • Writing       │ │ • File Ops      │ │ • Semantic      │
│ • Analysis      │ │ • Calculator    │ │   Search        │
│ • Custom        │ │ • Custom Tools  │ │ • Persistence   │
└─────────────────┘ └─────────────────┘ └─────────────────┘
          │                  │                  │
          └──────────────────┼──────────────────┘
                             ▼
                   ┌─────────────────┐
                   │  Ollama Client  │
                   │                 │
                   │ • Model Mgmt    │
                   │ • Generation    │
                   │ • Tool Calling  │
                   │ • Streaming     │
                   └─────────────────┘
🛠️ Examples
The framework includes several comprehensive examples:
1. Simple Research Agent
A basic agent that conducts research and provides summaries.
python examples/simple_research_agent.py
# Or specify custom topic:
python examples/simple_research_agent.py "machine learning trends 2025"
2. Code Generation Agent (NEW!)
An agent specialized in software development using qwen2.5:3b model.
# Direct code generation (fast)
python examples/code_generation_agent.py --simple
# Full agent framework integration
python examples/code_generation_agent.py
Features:
- Generates complete Python functions with documentation
- Creates web scrapers, APIs, algorithms
- Includes error handling and best practices
- Saves code to files automatically
- Real-time async generation
3. Async Ollama Testing
Test the direct async integration with various models.
python examples/test_async_ollama.py
python examples/simple_code_test.py
4. Multi-Agent Collaboration
Multiple agents working together to create a technical blog post.
python examples/multi_agent_collaboration.py
5. Enhanced Research Agent
Advanced research capabilities with real web search and file output.
python examples/enhanced_research_agent.py
6. Comprehensive Demo
A full demonstration of all framework capabilities.
python examples/comprehensive_demo.py
🔧 Configuration
Async Ollama Configuration
The framework supports both the custom integration layer and direct async client usage:
Direct AsyncClient (Recommended for Code Generation):
from ollama import AsyncClient

async def setup_direct_ollama():
    client = AsyncClient()
    # Test connection
    response = await client.chat(
        model='qwen2.5:3b',
        messages=[{'role': 'user', 'content': 'Hello'}]
    )
    return client
Custom Integration Layer:
ollama_integration = OllamaIntegrationLayer(
    base_url="http://localhost:11434",  # Ollama server URL
    default_model="llama3.1",           # Default model to use
    timeout=30                          # Request timeout
)
Model Recommendations
- qwen2.5:3b: Best for code generation (fast, lightweight, high quality)
- llama3.1: General purpose tasks, research, analysis
- codellama: Alternative for code tasks (larger, more detailed)
Memory Configuration
Configure the memory system for your needs:
memory_manager = MemoryManager(
    db_path="agent_memory.db",  # Database file path
    embedding_dim=384           # Embedding vector dimension
)
Tool Configuration
Add custom tools to extend agent capabilities:
from src.tools.tool_registry import BaseTool

class CustomTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="custom_tool",
            description="My custom tool",
            category="custom"
        )

    async def execute(self, param: str) -> dict:
        # Tool implementation
        return {"result": f"Processed: {param}"}

# Register the tool
tool_registry.register_tool(CustomTool())
🧪 Testing
Run the test suite to ensure everything is working correctly:
# Run all tests
python -m pytest tests/ -v
# Run specific test files
python -m pytest tests/test_ollama_integration.py -v
python -m pytest tests/test_tool_registry.py -v
# Run with coverage
python -m pytest tests/ --cov=src --cov-report=html
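When Ollama is not running (e.g. in CI), async code can still be exercised by injecting a mocked client. A sketch using unittest.mock.AsyncMock (the summarize function is a made-up example, not part of the framework):

```python
import asyncio
from unittest.mock import AsyncMock

async def summarize(client, text: str) -> str:
    # Function under test: delegates to an Ollama-style async chat client
    response = await client.chat(
        model="qwen2.5:3b",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response["message"]["content"]

def test_summarize_without_server():
    client = AsyncMock()
    # Canned response shaped like the Ollama chat payload
    client.chat.return_value = {"message": {"content": "a short summary"}}
    result = asyncio.run(summarize(client, "long article text"))
    assert result == "a short summary"
    client.chat.assert_awaited_once()

test_summarize_without_server()
print("ok")
```

Passing the client in as a parameter (rather than constructing it inside the function) is what makes this substitution possible.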
📦 Project Structure
edgebrain/
├── src/                          # Source code
│   ├── core/                     # Core framework components
│   │   ├── agent.py              # Agent implementation
│   │   └── orchestrator.py       # Orchestrator implementation
│   ├── integration/              # External integrations
│   │   └── ollama_client.py      # Ollama integration
│   ├── tools/                    # Tool system
│   │   └── tool_registry.py      # Tool registry and built-in tools
│   ├── memory/                   # Memory management
│   │   └── memory_manager.py     # Memory system implementation
│   └── __init__.py
├── tests/                        # Test suite
│   ├── test_ollama_integration.py
│   ├── test_tool_registry.py
│   └── __init__.py
├── examples/                     # Usage examples
│   ├── simple_research_agent.py
│   ├── multi_agent_collaboration.py
│   ├── code_generation_agent.py
│   └── comprehensive_demo.py
├── docs/                         # Documentation
├── requirements.txt              # Dependencies
├── setup.py                      # Package setup
└── README.md                     # This file
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Setup
- Fork the repository
- Create a virtual environment
- Install development dependencies:
pip install -r requirements.txt
pip install -e .
- Run tests to ensure everything works
- Make your changes
- Add tests for new functionality
- Submit a pull request
Code Style
- Follow PEP 8 guidelines
- Use type hints for all functions
- Add docstrings for all public methods
- Maintain test coverage above 90%
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Ollama for providing the foundation for local LLM inference
- The open-source AI community for inspiration and best practices
- Contributors and users who help improve this framework
📞 Support
- Documentation: Full documentation
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: info.adnansultan@gmail.com
🗺️ Roadmap
Version 1.0 (Current)
- ✅ Core agent framework
- ✅ Ollama integration
- ✅ Basic tool system
- ✅ Memory management
- ✅ Multi-agent orchestration
Version 1.1 (Planned)
- 🚧 Enhanced tool ecosystem
- 🚧 Web interface for agent management
- 🚧 Advanced workflow templates
- 🚧 Performance optimizations
Version 2.0 (Future)
- 🔮 Multi-modal agent support
- 🔮 Distributed agent networks
- 🔮 Advanced learning algorithms
- 🔮 Enterprise features
Built with ❤️ by Muhammad Adnan Sultan
Download files
File details

Details for the file edgebrain-0.1.3.tar.gz.

File metadata
- Download URL: edgebrain-0.1.3.tar.gz
- Upload date:
- Size: 107.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a4b8bb77d5045077f5d6d4355603878a770cc49107a71aeeef0c557c9e31abd2 |
| MD5 | c12d8fe997aa19a0ced67282e298d2fc |
| BLAKE2b-256 | 03ebc91082581bd3666d8d00f0b89e23c64cc348280004f74de517aea53ab4e9 |
File details

Details for the file edgebrain-0.1.3-py3-none-any.whl.

File metadata
- Download URL: edgebrain-0.1.3-py3-none-any.whl
- Upload date:
- Size: 36.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 341d8993f12850337eafca171fea841e79889fd79e7bfd010e814caad65dc493 |
| MD5 | 29dff2e4e256e436b80a44a574643b8a |
| BLAKE2b-256 | f07f3e273514ad4c533944ee0a864a5b58039eb19f04e566dfeb125b7b7855b8 |