
An open-source agentic framework for building AI agents with Ollama-based models


EdgeBrain: Ollama Agentic Framework

A powerful, extensible framework for building autonomous AI agents using Ollama-based language models. This framework provides a complete solution for creating, orchestrating, and managing AI agents that can work independently or collaboratively to solve complex tasks.

🌟 Features

Core Capabilities

  • Multi-Agent Orchestration: Coordinate multiple agents working together on complex tasks
  • Flexible Agent Architecture: Create specialized agents with custom roles, capabilities, and behaviors
  • Async Ollama Integration: Native support for the official Ollama Python client with async/await patterns
  • Code Generation Engine: Specialized agents for software development using qwen2.5:3b and other models
  • Tool Integration: Extensible tool system for web search, file operations, calculations, and custom tools
  • Memory Management: Persistent memory system with semantic search capabilities
  • Workflow Engine: Define and execute complex multi-step workflows with dependencies
  • Inter-Agent Communication: Built-in messaging system for agent collaboration

Advanced Features

  • Asynchronous Processing: Full async/await support for high-performance operations using AsyncClient
  • Real-time Code Generation: Direct integration with qwen2.5:3b for instant code creation
  • Vector Memory: Semantic memory storage with embedding-based retrieval
  • Task Scheduling: Priority-based task queue with automatic assignment
  • Error Handling: Robust error handling and recovery mechanisms with graceful fallbacks
  • Extensible Architecture: Plugin-based system for easy customization and extension
  • Comprehensive Testing: Full test suite with mock integrations for development
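
As a concrete illustration of the priority-based task queue idea above, here is a minimal self-contained sketch built on Python's heapq. The TaskQueue class and its methods are hypothetical, for illustration only, and are not EdgeBrain's actual scheduler:

```python
import heapq
import itertools

class TaskQueue:
    """Minimal priority-based task queue (lower number = higher priority)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order among equal priorities

    def push(self, description: str, priority: int = 10) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), description))

    def pop(self) -> str:
        """Return the highest-priority pending task description."""
        _, _, description = heapq.heappop(self._heap)
        return description

queue = TaskQueue()
queue.push("write summary", priority=5)
queue.push("urgent bugfix", priority=1)
queue.push("background cleanup", priority=9)
print(queue.pop())  # urgent bugfix
```

Automatic assignment then amounts to popping from this queue whenever an agent becomes idle.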

🚀 Quick Start

Prerequisites

  • Python 3.11 or higher
  • Ollama installed and running
  • Required models: ollama pull qwen2.5:3b (for code generation)
  • SQLite (included with Python)

Installation

Quick Install from PyPI

# Install EdgeBrain
pip install edgebrain

# Install official Ollama async client
pip install ollama

# Pull recommended models
ollama pull qwen2.5:3b    # Fast code generation
ollama pull llama3.1      # General purpose

Development Installation

  1. Clone the repository:
git clone https://github.com/madnansultandotme/ollama-agentic-framework.git
cd ollama-agentic-framework
  2. Install dependencies:
pip install -r requirements.txt
pip install ollama  # Official Ollama Python client
  3. Install the framework:
pip install -e .

Basic Usage

Here's a simple example using EdgeBrain from PyPI:

import asyncio
from edgebrain.core.orchestrator import AgentOrchestrator
from edgebrain.integration.ollama_client import OllamaIntegrationLayer

async def main():
    # Initialize Ollama integration
    ollama_integration = OllamaIntegrationLayer()
    await ollama_integration.initialize()
    
    # Create orchestrator
    orchestrator = AgentOrchestrator(
        ollama_integration=ollama_integration
    )
    
    # Register an agent
    agent = orchestrator.register_agent(
        agent_id="assistant",
        role="Research Assistant",
        capabilities=["research", "analysis"]
    )
    
    # Assign a task
    task_id = await orchestrator.assign_task(
        agent_id="assistant",
        task_description="Research the benefits of async programming",
        context={"focus": "Python development"}
    )
    
    # Get results
    results = await orchestrator.wait_for_completion(task_id)
    print(f"Results: {results}")
    
    await orchestrator.shutdown()

if __name__ == "__main__":
    asyncio.run(main())

Quick Start: Simple Code Generation

Create a basic code generation agent:

import asyncio
import ollama

async def generate_code():
    client = ollama.AsyncClient()
    
    response = await client.chat(
        model="qwen2.5:3b",
        messages=[
            {"role": "system", "content": "You are a Python expert."},
            {"role": "user", "content": "Create a function to calculate fibonacci numbers"}
        ]
    )
    
    print(response['message']['content'])

asyncio.run(generate_code())
Full Framework Example

For the complete orchestration stack, wire in the tool registry and memory manager alongside the Ollama integration:

import asyncio
from edgebrain.core.orchestrator import AgentOrchestrator
from edgebrain.integration.ollama_client import OllamaIntegrationLayer
from edgebrain.tools.tool_registry import ToolRegistry
from edgebrain.memory.memory_manager import MemoryManager

async def main():
    # Initialize the integration layer and supporting components
    ollama_integration = OllamaIntegrationLayer()
    await ollama_integration.initialize()
    tool_registry = ToolRegistry()
    memory_manager = MemoryManager()

    # Create orchestrator with all components
    orchestrator = AgentOrchestrator(
        ollama_integration=ollama_integration,
        tool_registry=tool_registry,
        memory_manager=memory_manager
    )

    # Create an agent
    agent = orchestrator.register_agent(
        agent_id="researcher_001",
        role="Research Specialist",
        description="Conducts research and analysis",
        model="llama3.1"
    )

    # Start the orchestrator
    await orchestrator.start()

    # Create and assign a task
    task_id = await orchestrator.create_task(
        description="Research the latest trends in artificial intelligence"
    )

    await orchestrator.assign_task_to_agent(task_id, agent.agent_id)

    # Monitor execution
    # ... (see examples for complete implementation)

    await orchestrator.stop()

if __name__ == "__main__":
    asyncio.run(main())


Direct Code Generation Example

For immediate code generation using the async Ollama client:

import asyncio
from ollama import AsyncClient
import asyncio
from ollama import AsyncClient

async def generate_code():
    client = AsyncClient()
    
    # Simple code generation
    message = {
        'role': 'user', 
        'content': 'Create a Python function to calculate factorial'
    }
    
    response = await client.chat(model='qwen2.5:3b', messages=[message])
    print(response.message.content)
    
    # With system prompt for better results
    messages = [
        {
            'role': 'system',
            'content': 'You are a Python expert. Write clean, documented code.'
        },
        {
            'role': 'user',
            'content': 'Create a Fibonacci sequence generator with error handling'
        }
    ]
    
    response = await client.chat(model='qwen2.5:3b', messages=messages)
    
    # Save generated code
    with open('generated_fibonacci.py', 'w') as f:
        f.write(response.message.content)

asyncio.run(generate_code())

📚 Documentation

Core Components

Agent Orchestrator

The central control unit that manages agents, tasks, and workflows. It handles:

  • Agent lifecycle management
  • Task distribution and execution
  • Inter-agent communication
  • Workflow orchestration
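
The distribution loop at the heart of such an orchestrator can be sketched with plain asyncio primitives. This is an illustrative pattern only; agent_worker and main below are hypothetical names, not EdgeBrain's API:

```python
import asyncio

async def agent_worker(name: str, tasks: asyncio.Queue, results: list) -> None:
    """Pull tasks off the shared queue until a None sentinel arrives."""
    while True:
        task = await tasks.get()
        if task is None:
            break
        results.append(f"{name} completed: {task}")

async def main() -> list:
    tasks: asyncio.Queue = asyncio.Queue()
    results: list = []
    # Two concurrent "agents" draining one shared task queue
    workers = [asyncio.create_task(agent_worker(f"agent_{i}", tasks, results))
               for i in range(2)]
    for t in ["research", "write", "review"]:
        await tasks.put(t)
    for _ in workers:
        await tasks.put(None)  # one shutdown sentinel per worker
    await asyncio.gather(*workers)
    return results

results = asyncio.run(main())
print(len(results))  # 3
```

Real orchestrators add priorities, retries, and per-agent capability matching on top of this basic pattern.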

Agents

Autonomous entities with specific roles and capabilities. Each agent has:

  • Unique identity and role
  • Custom capabilities and tools
  • Memory and learning systems
  • Goal-oriented behavior

Tool Registry

Extensible system for managing tools that agents can use:

  • Built-in tools (web search, file operations, calculations)
  • Custom tool development
  • Tool discovery and validation
  • Secure tool execution
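
Conceptually, a plugin-style registry boils down to validated callables looked up by name. The sketch below is illustrative only; the framework's actual registry exposes a richer BaseTool interface (see Tool Configuration below):

```python
from typing import Callable, Dict

class ToolRegistry:
    """Minimal plugin-style registry: named, validated callables."""

    def __init__(self):
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, func: Callable) -> None:
        # Validation step: reject anything that is not callable
        if not callable(func):
            raise TypeError(f"tool {name!r} must be callable")
        self._tools[name] = func

    def execute(self, name: str, **kwargs):
        # Discovery step: unknown tool names fail loudly
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = ToolRegistry()
registry.register("word_count", lambda text: len(text.split()))
print(registry.execute("word_count", text="hello agentic world"))  # 3
```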

Memory Manager

Persistent storage system for agent knowledge:

  • Short-term context memory
  • Long-term knowledge storage
  • Semantic search capabilities
  • Memory importance scoring
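
The embedding-based retrieval can be illustrated with a toy in-memory store ranked by cosine similarity. This sketch (MemoryStore, cosine_similarity) is hypothetical and far simpler than the actual MemoryManager, which persists to SQLite:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class MemoryStore:
    """Toy vector memory: store (embedding, text) pairs, retrieve by similarity."""

    def __init__(self):
        self._items: list[tuple[list[float], str]] = []

    def store(self, embedding: list[float], text: str) -> None:
        self._items.append((embedding, text))

    def search(self, query: list[float], top_k: int = 1) -> list[str]:
        ranked = sorted(self._items,
                        key=lambda item: cosine_similarity(query, item[0]),
                        reverse=True)
        return [text for _, text in ranked[:top_k]]

memory = MemoryStore()
memory.store([1.0, 0.0], "fact about async programming")
memory.store([0.0, 1.0], "fact about databases")
print(memory.search([0.9, 0.1]))  # ['fact about async programming']
```

In practice the embeddings come from a model rather than being hand-written, but the ranking step works the same way.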

Architecture Overview

┌─────────────────────────────────────────────────────────────┐
│                    Agent Orchestrator                       │
├─────────────────────────────────────────────────────────────┤
│  Task Management  │  Agent Lifecycle  │  Communication     │
│  Workflow Engine  │  Resource Mgmt    │  Event Handling    │
└─────────────────────────────────────────────────────────────┘
           │                    │                    │
           ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│     Agents      │  │  Tool Registry  │  │ Memory Manager  │
│                 │  │                 │  │                 │
│ • Research      │  │ • Web Search    │  │ • Vector Store  │
│ • Writing       │  │ • File Ops      │  │ • Semantic      │
│ • Analysis      │  │ • Calculator    │  │   Search        │
│ • Custom        │  │ • Custom Tools  │  │ • Persistence   │
└─────────────────┘  └─────────────────┘  └─────────────────┘
           │                    │                    │
           └────────────────────┼────────────────────┘
                                ▼
                     ┌─────────────────┐
                     │ Ollama Client   │
                     │                 │
                     │ • Model Mgmt    │
                     │ • Generation    │
                     │ • Tool Calling  │
                     │ • Streaming     │
                     └─────────────────┘

๐Ÿ› ๏ธ Examples

The framework includes several comprehensive examples:

1. Simple Research Agent

A basic agent that conducts research and provides summaries.

python examples/simple_research_agent.py
# Or specify a custom topic:
python examples/simple_research_agent.py "machine learning trends 2025"

2. Code Generation Agent (NEW!)

An agent specialized in software development using qwen2.5:3b model.

# Direct code generation (fast)
python examples/code_generation_agent.py --simple

# Full agent framework integration
python examples/code_generation_agent.py

Features:

  • Generates complete Python functions with documentation
  • Creates web scrapers, APIs, algorithms
  • Includes error handling and best practices
  • Saves code to files automatically
  • Real-time async generation

3. Async Ollama Testing

Test the direct async integration with various models.

python examples/test_async_ollama.py
python examples/simple_code_test.py

4. Multi-Agent Collaboration

Multiple agents working together to create a technical blog post.

python examples/multi_agent_collaboration.py

5. Enhanced Research Agent

Advanced research capabilities with real web search and file output.

python examples/enhanced_research_agent.py

6. Comprehensive Demo

A full demonstration of all framework capabilities.

python examples/comprehensive_demo.py

🔧 Configuration

Async Ollama Configuration

The framework supports both the custom integration layer and direct async client usage:

Direct AsyncClient (Recommended for Code Generation):

from ollama import AsyncClient

async def setup_direct_ollama():
    client = AsyncClient()
    # Test connection
    response = await client.chat(
        model='qwen2.5:3b',
        messages=[{'role': 'user', 'content': 'Hello'}]
    )
    return client

Custom Integration Layer:

ollama_integration = OllamaIntegrationLayer(
    base_url="http://localhost:11434",  # Ollama server URL
    default_model="llama3.1",           # Default model to use
    timeout=30                          # Request timeout
)

Model Recommendations

  • qwen2.5:3b: Best for code generation (fast, lightweight, high quality)
  • llama3.1: General purpose tasks, research, analysis
  • codellama: Alternative for code tasks (larger, more detailed)

Memory Configuration

Configure the memory system for your needs:

memory_manager = MemoryManager(
    db_path="agent_memory.db",    # Database file path
    embedding_dim=384             # Embedding vector dimension
)

Tool Configuration

Add custom tools to extend agent capabilities:

from src.tools.tool_registry import BaseTool

class CustomTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="custom_tool",
            description="My custom tool",
            category="custom"
        )
    
    async def execute(self, param: str) -> dict:
        # Tool implementation
        return {"result": f"Processed: {param}"}

# Register the tool
tool_registry.register_tool(CustomTool())

🧪 Testing

Run the test suite to ensure everything is working correctly:

# Run all tests
python -m pytest tests/ -v

# Run specific test files
python -m pytest tests/test_ollama_integration.py -v
python -m pytest tests/test_tool_registry.py -v

# Run with coverage
python -m pytest tests/ --cov=src --cov-report=html

📦 Project Structure

edgebrain/
├── src/                          # Source code
│   ├── core/                     # Core framework components
│   │   ├── agent.py             # Agent implementation
│   │   └── orchestrator.py      # Orchestrator implementation
│   ├── integration/              # External integrations
│   │   └── ollama_client.py     # Ollama integration
│   ├── tools/                    # Tool system
│   │   └── tool_registry.py     # Tool registry and built-in tools
│   ├── memory/                   # Memory management
│   │   └── memory_manager.py    # Memory system implementation
│   └── __init__.py
├── tests/                        # Test suite
│   ├── test_ollama_integration.py
│   ├── test_tool_registry.py
│   └── __init__.py
├── examples/                     # Usage examples
│   ├── simple_research_agent.py
│   ├── multi_agent_collaboration.py
│   ├── code_generation_agent.py
│   └── comprehensive_demo.py
├── docs/                         # Documentation
├── requirements.txt              # Dependencies
├── setup.py                      # Package setup
└── README.md                     # This file

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

  1. Fork the repository
  2. Create a virtual environment
  3. Install development dependencies:
    pip install -r requirements.txt
    pip install -e .
    
  4. Run tests to ensure everything works
  5. Make your changes
  6. Add tests for new functionality
  7. Submit a pull request

Code Style

  • Follow PEP 8 guidelines
  • Use type hints for all functions
  • Add docstrings for all public methods
  • Maintain test coverage above 90%

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Ollama for providing the foundation for local LLM inference
  • The open-source AI community for inspiration and best practices
  • Contributors and users who help improve this framework

📞 Support

For questions, bug reports, or feature requests, please open an issue on the GitHub repository.

🗺️ Roadmap

Version 1.0 (Current)

  • ✅ Core agent framework
  • ✅ Ollama integration
  • ✅ Basic tool system
  • ✅ Memory management
  • ✅ Multi-agent orchestration

Version 1.1 (Planned)

  • 🔄 Enhanced tool ecosystem
  • 🔄 Web interface for agent management
  • 🔄 Advanced workflow templates
  • 🔄 Performance optimizations

Version 2.0 (Future)

  • 🔮 Multi-modal agent support
  • 🔮 Distributed agent networks
  • 🔮 Advanced learning algorithms
  • 🔮 Enterprise features

Built with โค๏ธ by the Muhammad Adnan Sultan

Download files

Source Distribution

edgebrain-0.1.2.tar.gz (107.4 kB)

Built Distribution

edgebrain-0.1.2-py3-none-any.whl (36.1 kB)

File details

Details for the file edgebrain-0.1.2.tar.gz.

File metadata

  • Download URL: edgebrain-0.1.2.tar.gz
  • Upload date:
  • Size: 107.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for edgebrain-0.1.2.tar.gz

  • SHA256: 4e7389b93f3b102fd1ec730be8912f7cd1ede020b7a89740841edd282d58fef0
  • MD5: 5520a6826b56a93cf903bd1b75cc0492
  • BLAKE2b-256: edc584f6abc602276f581213d02d0ae543dfc400aabb33c44737d13d7796300d

File details

Details for the file edgebrain-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: edgebrain-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 36.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for edgebrain-0.1.2-py3-none-any.whl

  • SHA256: f300d9aa91ecd384e80117413068a2d3322d27b95237587ea71a73b9fc1b859a
  • MD5: 6f5a2b9bbc03f3c0468c96bdd3efd675
  • BLAKE2b-256: a2754893551b1842c6007d5df88576b86ad114df4b834762314f060ee5b6a570
