
MAO - Multi Agent Orchestration

Built with FastAPI, Qdrant, DuckDB, and LangChain. Supports Anthropic, OpenAI, Ollama, and MCP.

MAO is a modern framework for orchestrating AI agents. It combines the power of vector databases, LLMs, and the Model Context Protocol (MCP) to enable robust and scalable agent workflows.

Features

  • 🤖 Agent Orchestration - Manage complex multi-agent workflows
  • 🧠 Vector-based Memory - Store and retrieve context information
  • 🔄 MCP Integration - Seamless communication between agents and tools
  • 🛠️ Extensible Tools - Easy integration of new capabilities
  • 📊 DuckDB Analytics - Powerful data analysis and processing
  • 🔍 Semantic Search - Find relevant information across agent memories
  • 🤝 Team Management - Organize agents into collaborative teams
  • 🔒 Secure Configuration - Centralized management of API keys and settings
  • 📤 Import/Export - Backup and restore system configurations
  • 🔄 Supervisor Agents - Coordinate team workflows with supervisor agents
  • 📚 Knowledge & Experience Trees - Structured storage for agent knowledge
  • 🌐 Multi-LLM Support - Works with OpenAI, Anthropic, and Ollama models
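To illustrate what the vector-based memory and semantic-search features do, here is a toy stand-in written in pure Python. MAO delegates this work to Qdrant and a real embedding model; the `ToyMemory` class below is an illustration of the retrieval idea (embed, store, rank by cosine similarity), not MAO's actual API.

```python
from math import sqrt

class ToyMemory:
    """Minimal sketch of vector-based memory: store (text, embedding)
    pairs and retrieve the entries most similar to a query vector."""

    def __init__(self) -> None:
        self._items: list[tuple[str, list[float]]] = []

    def add(self, text: str, embedding: list[float]) -> None:
        self._items.append((text, embedding))

    def search(self, query: list[float], top_k: int = 1) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = sqrt(sum(x * x for x in a))
            nb = sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # Rank all stored items by similarity to the query, best first
        ranked = sorted(self._items, key=lambda it: cos(query, it[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]
```

In MAO, the embeddings come from the configured `EMBEDDING_MODEL` and the ranking happens inside Qdrant, but the retrieval contract is the same.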

API Endpoints

The MAO API provides the following main endpoints:

  • /agents - Agent creation, management, and interaction
  • /teams - Team creation and management
  • /teams/supervisors - Supervisor management for agent teams
  • /mcp - MCP server and tool management
  • /config - Global configuration settings
  • /export, /import - Configuration import/export utilities
  • /health - API health check endpoint

API documentation is available at:

  • Swagger UI: /docs
  • ReDoc: /redoc
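A quick way to verify a running deployment is to poll the /health endpoint listed above. The sketch below uses only the standard library; the base URL assumes a local instance on the default PORT=8000.

```python
import urllib.request

BASE_URL = "http://localhost:8000"  # assumes the PORT=8000 default

def check_health(base_url: str = BASE_URL) -> bool:
    """Return True if GET /health answers with HTTP 200, False on any
    connection error or non-200 status."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```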

Requirements

  • Python 3.11+
  • Qdrant vector database (accessible via HTTP)
  • DuckDB for configuration storage
  • LLM provider API keys (OpenAI, Anthropic, or local Ollama instance)

Installation

# Install uv (recommended) — review the install script or use your package manager if you prefer
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install project dependencies
uv sync

Quick Start

from mao.agents import create_agent
from mao.storage import KnowledgeTree, ExperienceTree

# Initialize storage (the calls below use `await`, so run them inside
# an async function, e.g. via asyncio.run)
knowledge_tree = await KnowledgeTree.create(collection_name="agent-memory")
experience_tree = await ExperienceTree.create(collection_name="agent-experience")

# Create an agent
agent_app = await create_agent(
    provider="anthropic",
    model_name="claude-3-opus-20240229",
    agent_name="assistant",
    knowledge_tree=knowledge_tree,
    experience_tree=experience_tree,
)

# Execute a query
response = await agent_app.ainvoke(
    {"messages": [{"role": "user", "content": "Analyze the latest economic data"}]}
)
if hasattr(response, "content"):
    print(response.content)
elif isinstance(response, dict) and response.get("messages"):
    print(response["messages"][-1].content)
else:
    print(response)

Environment Variables

The following environment variables are supported:

# LLM API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...

# Vector Database
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=your-qdrant-api-key
EMBEDDING_MODEL=text-embedding-3-small

# DuckDB Configuration
MCP_DB_PATH=/path/to/mcp_config.duckdb

# MCP Configuration
MCP_CONFIG_PATH=/path/to/mcp.json
OLLAMA_HOST=http://localhost:11434

# MCP Server API Keys
CONTEXT7_API_KEY=your-context7-api-key  # For up-to-date code documentation

# Server
PORT=8000
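The variables above can be read with `os.getenv`, falling back to the defaults shown in the example values. Note that the helper below is an illustration of that pattern, not MAO's own configuration loader, and the defaults are taken from the example values above.

```python
import os

def load_settings() -> dict:
    """Collect MAO-related settings from the environment, using the
    example defaults above where one exists (hypothetical helper)."""
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY"),
        "anthropic_api_key": os.getenv("ANTHROPIC_API_KEY"),
        "qdrant_url": os.getenv("QDRANT_URL", "http://localhost:6333"),
        "embedding_model": os.getenv("EMBEDDING_MODEL", "text-embedding-3-small"),
        "ollama_host": os.getenv("OLLAMA_HOST", "http://localhost:11434"),
        "port": int(os.getenv("PORT", "8000")),
    }
```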

MCP Servers

MAO includes several MCP (Model Context Protocol) servers configured by default:

  • context7: Provides up-to-date code documentation and examples
    • Requires: CONTEXT7_API_KEY environment variable
    • Get your API key from context7.com
    • Enables AI agents to access current library documentation
  • dockerailabs: Docker-based MCP server via socat
  • perplexity-ask: AI-powered search via Perplexity API

To configure MCP servers, edit mcp.json in the project root.
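As a sketch of what an mcp.json entry can look like, the snippet below writes a fragment following the common MCP "mcpServers" client convention. This layout is an assumption, not the confirmed MAO schema; check the mcp.json shipped with MAO for the format it actually expects.

```python
import json

# Hypothetical mcp.json fragment using the common "mcpServers" layout.
config = {
    "mcpServers": {
        "context7": {
            # context7 needs CONTEXT7_API_KEY, as noted above
            "env": {"CONTEXT7_API_KEY": "${CONTEXT7_API_KEY}"},
        },
    },
}

with open("mcp.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```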

Docker

# Build and start the services with Docker Compose
docker compose up -d

# Or build the Docker image manually
docker build -t mao-api -f docker/Dockerfile.api .

# Start the container
docker run -p 8000:8000 -v ./data:/data -v ./.env:/app/.env mao-api

For development, you can use the following commands:

# Build with BuildKit enabled for better caching
DOCKER_BUILDKIT=1 docker build -t mao-api -f docker/Dockerfile.api .

# Run with the source tree mounted for live development (assuming the
# image's working directory is /app, as the .env mount above suggests)
docker run -p 8000:8000 -v .:/app -v ./data:/data -v ./.env:/app/.env mao-api

# Pass environment variables directly (note: inside the container,
# localhost refers to the container itself; use host.docker.internal
# or a compose service name to reach a host-side Qdrant)
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=sk-... \
  -e ANTHROPIC_API_KEY=sk-... \
  -e QDRANT_URL=http://localhost:6333 \
  mao-api

# Or use the --env-file option
docker run -p 8000:8000 --env-file .env mao-api

Docker Compose with Environment Variables

You can also use Docker Compose to manage environment variables:

services:
  api:
    build:
      context: .
      dockerfile: docker/Dockerfile.api
    ports:
      - "8000:8000"
    volumes:
      - ./data:/data
    env_file:
      - .env

API Example

import httpx

# Run inside an async function (e.g. via asyncio.run)
async with httpx.AsyncClient() as client:
    # Create a new agent
    response = await client.post(
        "http://localhost:8000/agents",
        json={
            "name": "research_assistant",
            "provider": "anthropic",
            "model_name": "claude-3-opus-20240229",
            "system_prompt": "You are a research assistant."
        }
    )
    agent_id = response.json()["id"]
    
    # Send a message to the agent
    response = await client.post(
        f"http://localhost:8000/agents/{agent_id}/chat",
        json={"content": "Summarize the latest developments in AI."}
    )
    print(response.json()["response"])

Team Workflow Example

# Assumes `client` is an httpx.AsyncClient, as in the API example above.

# Create a team with supervisor
team_id = "team_research"
supervisor_id = "supervisor_research_team"

# Add agents to the team
await client.post(
    f"http://localhost:8000/teams/{team_id}/members",
    json={
        "agent_id": "agent_researcher",
        "role": "researcher",
        "order_index": 1
    }
)

await client.post(
    f"http://localhost:8000/teams/{team_id}/members",
    json={
        "agent_id": "agent_writer",
        "role": "writer",
        "order_index": 2
    }
)

# Start the team
await client.post(f"http://localhost:8000/teams/{team_id}/start")

# Send a task to the team
response = await client.post(
    f"http://localhost:8000/teams/{team_id}/chat",
    json={"message": "Research quantum computing advancements and write a report"}
)

AI Agent Capabilities

Agents combine memory, planning, and research with tool use, collaboration, and analytics, backed by security, performance, and integration features.

CI/CD with GitHub Actions

This project uses GitHub Actions for continuous integration and deployment:

Workflows

  • Test and Lint - Runs tests, linting, and type checking on every push and pull request.
  • Docker Build - Builds and publishes Docker images on pushes to the main branch and tags.
  • Docker Multi-Platform Build - Creates Docker images for multiple platforms (amd64, arm64).
  • Dependency Updates - Automatically updates project dependencies weekly.
  • Package Publishing - Publishes the package to PyPI on new releases.

Environment Variables and Secrets

To use environment variables in GitHub Actions workflows, you need to add them as GitHub Secrets:

  1. Go to your GitHub repository
  2. Navigate to Settings > Secrets and variables > Actions
  3. Click on "New repository secret"
  4. Add each environment variable from your .env file:
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
    • QDRANT_URL
    • EMBEDDING_MODEL
    • MCP_DB_PATH
    • MCP_CONFIG_PATH
    • OLLAMA_HOST

These secrets are then passed to the Docker build process as build arguments and set as environment variables in the container.

Workflow Execution

# Manually run the dependency update workflow
gh workflow run dependency-update.yml

# Manually publish a version
gh workflow run publish.yml -f version=0.2.0

# Manually run multi-platform Docker build
gh workflow run docker-multi-platform.yml -f platforms=linux/amd64,linux/arm64,linux/arm/v7

License

This project is licensed under the MIT License - see the LICENSE file for details.
