A reusable foundation for building multi-agent systems with observability, cost tracking, and A2A protocol support

Multi-Agent Base Framework

PyPI version · Python 3.10+ · License: MIT · Tests · Coverage

A production-ready Python framework for building multi-agent AI systems with comprehensive observability, cost tracking, resilience patterns, and A2A protocol support.

🚀 Installation

# Basic installation from PyPI
pip install multi-agent-base

# Or install from GitHub (latest)
pip install git+https://github.com/gokhandiker/multi-agent-base.git

Optional Dependencies

# With specific provider support
pip install multi-agent-base[ollama]      # Ollama support
pip install multi-agent-base[openai]      # OpenAI support
pip install multi-agent-base[anthropic]   # Anthropic support

# With DevUI for debugging
pip install multi-agent-base[devui]

# With Redis support (for distributed caching/rate limiting)
pip install multi-agent-base[redis]

# All features
pip install multi-agent-base[all]

# Development (includes testing tools)
pip install multi-agent-base[all,dev]

⚡ Quick Start

Basic Agent

from multi_agent_base.core import AgentConfig
from multi_agent_base.providers import ModelClientFactory

# Create a simple agent
config = AgentConfig(
    name="assistant",
    model="gpt-4o-mini",
    provider="openai",
    system_prompt="You are a helpful assistant.",
)

# Use with your preferred client
client = ModelClientFactory.create(config)
response = await client.chat("Hello, how are you?")
print(response)

With Observability (Phoenix Tracing)

from multi_agent_base.observability import setup_phoenix, AgentLogger

# Setup Phoenix tracing
tracer = setup_phoenix(project_name="my-agents")

# Create logger
logger = AgentLogger()
logger.log_agent_start("assistant")
logger.log_llm_call(
    agent_name="assistant",
    model="gpt-4o",
    provider="openai",
    input_tokens=50,
    output_tokens=100,
    duration_ms=250.0,
)
logger.log_agent_end("assistant", duration_ms=500.0)

With Memory

from multi_agent_base.memory import BufferMemory, SlidingWindowMemory

# Simple buffer memory
memory = BufferMemory(max_entries=100)
await memory.add("user", "Hello!")
await memory.add("assistant", "Hi there!")
history = await memory.get_history()

# Sliding window (keeps last N messages)
memory = SlidingWindowMemory(window_size=10)
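
A sliding window is conceptually just a bounded queue: once the window is full, adding a new message evicts the oldest one. A minimal, self-contained sketch of that behavior (not the library's implementation):

```python
from collections import deque

class SlidingWindow:
    """Keeps only the last `window_size` messages."""

    def __init__(self, window_size: int):
        # deque with maxlen silently drops the oldest item when full
        self.messages = deque(maxlen=window_size)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def history(self) -> list:
        return list(self.messages)

window = SlidingWindow(window_size=3)
for i in range(5):
    window.add("user", f"message {i}")
# Only the last three messages (2, 3, 4) remain in the window
```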

With Resilience Patterns

from multi_agent_base.resilience import (
    retry_with_backoff,
    CircuitBreaker,
    with_timeout,
    Fallback,
)

# Retry with exponential backoff
@retry_with_backoff(max_attempts=3, base_delay=1.0)
async def call_api():
    return await risky_operation()

# Circuit breaker for external services
breaker = CircuitBreaker(failure_threshold=5, recovery_timeout=30)

async with breaker:
    result = await external_service()

# Timeout wrapper
result = await with_timeout(slow_operation(), timeout=5.0)

# Fallback chain
fallback = Fallback(default="Service unavailable")
result = await fallback.execute(primary_service)
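
The retry decorator above follows the standard exponential-backoff pattern: attempt n waits base_delay * 2**(n-1) before retrying, and the last failure is re-raised. A stripped-down sketch of the idea (not the framework's actual implementation):

```python
import asyncio
import functools

def retry_with_backoff(max_attempts: int = 3, base_delay: float = 1.0):
    """Retry an async function, doubling the delay after each failure."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return await func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: propagate the error
                    await asyncio.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

attempts = 0

@retry_with_backoff(max_attempts=3, base_delay=0.01)
async def flaky():
    """Fails twice, then succeeds — exercises the retry path."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(flaky())
```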

With Cost Tracking

from multi_agent_base.providers import PricingCalculator

# Calculate costs
cost = PricingCalculator.calculate(
    provider="openai",
    model="gpt-4o",
    input_tokens=1000,
    output_tokens=500,
)
print(f"Cost: ${cost:.4f}")

✨ Features

  • 🤖 Multi-Provider LLM Support: Ollama, OpenAI, Anthropic via Microsoft Agent Framework
  • 📊 Full Observability: Agent conversations, tool usage, inputs/outputs via Arize Phoenix
  • 💰 Cost Tracking: Token usage and cost calculation per model/provider
  • 🏗️ Parametric Architectures: SingleAgent, Supervisor, Swarm patterns
  • 🎴 A2A Agent Cards: Agent metadata and capability declaration
  • 🔍 Skill Auto-Discovery: Automatic skill extraction from tool functions
  • 📝 Structured Logging: OpenTelemetry-based tracing
  • 🧠 Conversation Memory: Multiple backends including buffer, sliding window, vector, and Redis
  • 🔄 Resilience Patterns: Retry strategies, circuit breaker, timeout handling, fallbacks
  • ⏱️ Rate Limiting: Token bucket, sliding window, and composite limiters
  • 🗄️ Response Caching: LRU, TTL, and semantic caching strategies
  • 📡 Event System: Pub/sub event bus for inter-agent communication
  • 🔒 Security: Input validation, injection detection, permission management
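
The event system listed above has no example elsewhere in this README; a minimal in-process pub/sub bus captures the idea (the framework's actual EventBus API may differ):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal synchronous pub/sub bus for inter-agent messages."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Register a handler to be called for every event on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        """Deliver the payload to every subscriber of `topic`."""
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("task.done", received.append)
bus.publish("task.done", {"agent": "researcher", "result": "summary ready"})
```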

Usage Patterns

Single Agent

from multi_agent_base import SystemConfig, SingleAgentPattern
from multi_agent_base.providers import ModelClientFactory

# Configure system
config = SystemConfig(
    provider="openai",
    model="gpt-4o-mini",
    observability_enabled=True,
)

# Create agent
pattern = SingleAgentPattern(config)
agent = pattern.create_agent(
    name="assistant",
    system_prompt="You are a helpful assistant.",
)

# Run
response = await agent.run("Hello, how are you?")
print(response)

Supervisor Team

from multi_agent_base import SystemConfig, SupervisorPattern

config = SystemConfig(
    provider="ollama",
    model="llama3.2",
    observability_enabled=True,
)

pattern = SupervisorPattern(config)
team = pattern.create_team(
    supervisor_name="manager",
    worker_configs=[
        {"name": "researcher", "system_prompt": "You research topics."},
        {"name": "writer", "system_prompt": "You write content."},
    ]
)

response = await team.run("Write a blog post about AI agents.")
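
At its core, the supervisor pattern is a routing loop: the manager decides which worker handles each step, then passes intermediate results along. A toy, LLM-free sketch of that control flow (in the real pattern, the supervisor agent does the routing):

```python
# Hypothetical stand-ins for LLM-backed workers — illustration only.
workers = {
    "researcher": lambda task: f"[research notes on: {task}]",
    "writer": lambda notes: f"[draft based on {notes}]",
}

def run_team(task: str) -> str:
    """Supervisor control flow: delegate research, then delegate writing."""
    notes = workers["researcher"](task)   # step 1: gather material
    draft = workers["writer"](notes)      # step 2: turn material into output
    return draft

result = run_team("AI agents")
```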

Agent Cards

from multi_agent_base.a2a import AgentCard, SkillDiscoverer

# Auto-discover skills from tools
discoverer = SkillDiscoverer()
skills = discoverer.discover_from_tools([my_tool_function])

# Create agent card
card = AgentCard(
    name="research-agent",
    description="An agent that researches topics",
    skills=skills,
    capabilities=["text-generation", "web-search"],
)

# Export as JSON
card.to_json("agent_card.json")

Memory System

from multi_agent_base.memory import BufferMemory, MemoryConfig

# Create memory with configuration
memory = BufferMemory(MemoryConfig(max_entries=100))

# Store conversation
await memory.store(role="user", content="Hello!")
await memory.store(role="assistant", content="Hi there!")

# Retrieve history
history = await memory.retrieve(limit=10)

Rate Limiting

from multi_agent_base.ratelimit import RateLimiter, RateLimitConfig

# Create rate limiter
limiter = RateLimiter(RateLimitConfig(
    requests_per_minute=60,
    tokens_per_minute=10000,
))

# Check before making API calls
if await limiter.can_acquire():
    await limiter.acquire(tokens=100)
    # Make API call
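
The token-bucket limiter mentioned in the features works by refilling a budget at a fixed rate and spending from it per request, which allows short bursts up to the bucket's capacity. A compact, self-contained sketch of the algorithm (independent of the library's RateLimiter API):

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`, refilling at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def acquire(self, tokens: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5.0)
allowed = [bucket.acquire() for _ in range(6)]
# The first five requests fit the burst capacity; the sixth is rejected
```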

Security

from multi_agent_base.security import (
    validate_input,
    check_injection,
    SecretManager,
)

# Input validation
validation = validate_input(user_input, max_length=1000)
if not validation.is_valid:
    raise ValueError(validation.errors)

# Injection detection
result = check_injection(user_input)
if not result.is_safe:
    log_security_event(result.threats)

# Secure secrets management
secrets = SecretManager()
api_key = secrets.get("OPENAI_API_KEY")

Architecture

┌──────────────────────────────────────────────────────────────────┐
│                         Multi-Agent Base                         │
├──────────────────────────────────────────────────────────────────┤
│  Patterns                                                        │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐              │
│  │ SingleAgent  │ │  Supervisor  │ │    Swarm     │              │
│  └──────────────┘ └──────────────┘ └──────────────┘              │
├──────────────────────────────────────────────────────────────────┤
│  Cross-Cutting Concerns                                          │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐│
│  │  Memory  │ │Resilience│ │   Rate   │ │  Cache   │ │ Security ││
│  │          │ │          │ │ Limiting │ │          │ │          ││
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘│
├──────────────────────────────────────────────────────────────────┤
│  Core Services                                                   │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐              │
│  │  A2A Cards   │ │     Cost     │ │ Observability│              │
│  │ & Discovery  │ │   Tracker    │ │   (Phoenix)  │              │
│  └──────────────┘ └──────────────┘ └──────────────┘              │
│  ┌──────────────┐                                                │
│  │ Event System │                                                │
│  └──────────────┘                                                │
├──────────────────────────────────────────────────────────────────┤
│  LLM Providers (via Microsoft Agent Framework)                   │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐              │
│  │   Ollama     │ │   OpenAI     │ │  Anthropic   │              │
│  └──────────────┘ └──────────────┘ └──────────────┘              │
└──────────────────────────────────────────────────────────────────┘

Development

# Clone repository
git clone https://github.com/gokhandiker/multi-agent-base.git
cd multi-agent-base

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install with dev dependencies
pip install -e ".[dev,all]"

# Run tests
pytest

# Run linting
ruff check src tests
mypy src

📦 Version History

Version   Date         Changes
0.1.0b1   2026-02-02   Initial beta release with 18 modules, 1243 tests

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

MIT License - See LICENSE for details.
