A reusable foundation for building multi-agent systems with observability, cost tracking, and A2A protocol support
Project description
Multi-Agent Base Framework
A production-ready Python framework for building multi-agent AI systems with comprehensive observability, cost tracking, resilience patterns, and A2A protocol support.
๐ Installation
# Basic installation from PyPI
pip install multi-agent-base
# Or install from GitHub (latest)
pip install git+https://github.com/gokhandiker/multi-agent-base.git
Optional Dependencies
# With specific provider support
pip install multi-agent-base[ollama] # Ollama support
pip install multi-agent-base[openai] # OpenAI support
pip install multi-agent-base[anthropic] # Anthropic support
# With DevUI for debugging
pip install multi-agent-base[devui]
# With Redis support (for distributed caching/rate limiting)
pip install multi-agent-base[redis]
# All features
pip install multi-agent-base[all]
# Development (includes testing tools)
pip install multi-agent-base[all,dev]
โก Quick Start
Basic Agent
from multi_agent_base.core import AgentConfig
from multi_agent_base.providers import ModelClientFactory
# Create a simple agent
config = AgentConfig(
name="assistant",
model="gpt-4o-mini",
provider="openai",
system_prompt="You are a helpful assistant.",
)
# Use with your preferred client
client = ModelClientFactory.create(config)
response = await client.chat("Hello, how are you?")
print(response)
With Observability (Phoenix Tracing)
from multi_agent_base.observability import setup_phoenix, AgentLogger
# Setup Phoenix tracing
tracer = setup_phoenix(project_name="my-agents")
# Create logger
logger = AgentLogger()
logger.log_agent_start("assistant")
logger.log_llm_call(
agent_name="assistant",
model="gpt-4o",
provider="openai",
input_tokens=50,
output_tokens=100,
duration_ms=250.0,
)
logger.log_agent_end("assistant", duration_ms=500.0)
With Memory
from multi_agent_base.memory import BufferMemory, SlidingWindowMemory
# Simple buffer memory
memory = BufferMemory(max_entries=100)
await memory.add("user", "Hello!")
await memory.add("assistant", "Hi there!")
history = await memory.get_history()
# Sliding window (keeps last N messages)
memory = SlidingWindowMemory(window_size=10)
With Resilience Patterns
from multi_agent_base.resilience import (
retry_with_backoff,
CircuitBreaker,
with_timeout,
Fallback,
)
# Retry with exponential backoff
@retry_with_backoff(max_attempts=3, base_delay=1.0)
async def call_api():
return await risky_operation()
# Circuit breaker for external services
breaker = CircuitBreaker(failure_threshold=5, recovery_timeout=30)
async with breaker:
result = await external_service()
# Timeout wrapper
result = await with_timeout(slow_operation(), timeout=5.0)
# Fallback chain
fallback = Fallback(default="Service unavailable")
result = await fallback.execute(primary_service)
With Cost Tracking
from multi_agent_base.providers import PricingCalculator
# Calculate costs
cost = PricingCalculator.calculate(
provider="openai",
model="gpt-4o",
input_tokens=1000,
output_tokens=500,
)
print(f"Cost: ${cost:.4f}")
โจ Features
- ๐ค Multi-Provider LLM Support: Ollama, OpenAI, Anthropic via Microsoft Agent Framework
- ๐ Full Observability: Agent conversations, tool usage, inputs/outputs via Arize Phoenix
- ๐ฐ Cost Tracking: Token usage and cost calculation per model/provider
- ๐๏ธ Parametric Architectures: SingleAgent, Supervisor, Swarm patterns
- ๐ด A2A Agent Cards: Agent metadata and capability declaration
- ๐ Skill Auto-Discovery: Automatic skill extraction from tool functions
- ๐ Structured Logging: OpenTelemetry-based tracing
- ๐ง Conversation Memory: Multiple backends including buffer, sliding window, vector, and Redis
- ๐ Resilience Patterns: Retry strategies, circuit breaker, timeout handling, fallbacks
- โฑ๏ธ Rate Limiting: Token bucket, sliding window, and composite limiters
- ๐๏ธ Response Caching: LRU, TTL, and semantic caching strategies
- ๐ก Event System: Pub/sub event bus for inter-agent communication
- ๐ Security: Input validation, injection detection, permission management
Installation
# Basic installation
pip install multi-agent-base
# With specific provider support
pip install multi-agent-base[ollama]
pip install multi-agent-base[openai]
pip install multi-agent-base[anthropic]
# All providers
pip install multi-agent-base[all]
# Development
pip install multi-agent-base[dev]
Quick Start
Single Agent
from multi_agent_base import SystemConfig, SingleAgentPattern
from multi_agent_base.providers import ModelClientFactory
# Configure system
config = SystemConfig(
provider="openai",
model="gpt-4o-mini",
observability_enabled=True,
)
# Create agent
pattern = SingleAgentPattern(config)
agent = pattern.create_agent(
name="assistant",
system_prompt="You are a helpful assistant.",
)
# Run
response = await agent.run("Hello, how are you?")
print(response)
Supervisor Team
from multi_agent_base import SystemConfig, SupervisorPattern
config = SystemConfig(
provider="ollama",
model="llama3.2",
observability_enabled=True,
)
pattern = SupervisorPattern(config)
team = pattern.create_team(
supervisor_name="manager",
worker_configs=[
{"name": "researcher", "system_prompt": "You research topics."},
{"name": "writer", "system_prompt": "You write content."},
]
)
response = await team.run("Write a blog post about AI agents.")
Agent Cards
from multi_agent_base.a2a import AgentCard, SkillDiscoverer
# Auto-discover skills from tools
discoverer = SkillDiscoverer()
skills = discoverer.discover_from_tools([my_tool_function])
# Create agent card
card = AgentCard(
name="research-agent",
description="An agent that researches topics",
skills=skills,
capabilities=["text-generation", "web-search"],
)
# Export as JSON
card.to_json("agent_card.json")
Memory System
from multi_agent_base.memory import BufferMemory, MemoryConfig
# Create memory with configuration
memory = BufferMemory(MemoryConfig(max_entries=100))
# Store conversation
await memory.store(role="user", content="Hello!")
await memory.store(role="assistant", content="Hi there!")
# Retrieve history
history = await memory.retrieve(limit=10)
Rate Limiting
from multi_agent_base.ratelimit import RateLimiter, RateLimitConfig
# Create rate limiter
limiter = RateLimiter(RateLimitConfig(
requests_per_minute=60,
tokens_per_minute=10000,
))
# Check before making API calls
if await limiter.can_acquire():
await limiter.acquire(tokens=100)
# Make API call
Resilience
from multi_agent_base.resilience import retry_with_backoff, CircuitBreaker
# Retry with exponential backoff
@retry_with_backoff(max_attempts=3, delay=1.0)
async def call_api():
return await risky_operation()
# Circuit breaker pattern
breaker = CircuitBreaker(failure_threshold=5, recovery_timeout=30)
async with breaker:
result = await external_service()
Security
from multi_agent_base.security import (
validate_input,
check_injection,
SecretManager,
)
# Input validation
validation = validate_input(user_input, max_length=1000)
if not validation.is_valid:
raise ValueError(validation.errors)
# Injection detection
result = check_injection(user_input)
if not result.is_safe:
log_security_event(result.threats)
# Secure secrets management
secrets = SecretManager()
api_key = secrets.get("OPENAI_API_KEY")
Architecture
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ Multi-Agent Base โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ Patterns โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โ โ SingleAgent โ โ Supervisor โ โ Swarm โ โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ Cross-Cutting Concerns โ
โ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โ
โ โ Memory โ โResilienceโ โ Rate โ โ Cache โ โSecurity โ โ
โ โ โ โ โ โ Limitingโ โ โ โ โ โ
โ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โโโโโโโโโโโ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ Core Services โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โ โ A2A Cards โ โ Cost โ โ Observabilityโ โ
โ โ & Discovery โ โ Tracker โ โ (Phoenix) โ โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โ โโโโโโโโโโโโโโโโ โ
โ โ Event System โ โ
โ โโโโโโโโโโโโโโโโ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ LLM Providers (via Microsoft Agent Framework) โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โ โ Ollama โ โ OpenAI โ โ Anthropic โ โ
โ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โโโโโโโโโโโโโโโโ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
Documentation
- Getting Started
- Configuration Guide
- Architecture Patterns
- A2A Agent Cards
- Observability
- Cost Tracking
- Memory System
- Resilience Patterns
- Rate Limiting
- Caching
- Event System
- Security
- API Reference
Development
# Clone repository
git clone https://github.com/gokhandiker/multi-agent-base.git
cd multi-agent-base
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install with dev dependencies
pip install -e ".[dev,all]"
# Run tests
pytest
# Run linting
ruff check src tests
mypy src
๐ฆ Version History
| Version | Date | Changes |
|---|---|---|
| 0.1.0b1 | 2026-02-02 | Initial beta release with 18 modules, 1243 tests |
๐ค Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
๐ License
MIT License - See LICENSE for details.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file multi_agent_base-0.1.0b1.tar.gz.
File metadata
- Download URL: multi_agent_base-0.1.0b1.tar.gz
- Upload date:
- Size: 563.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.2
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
8e8068574a65a73772ebe5da9fcefe76a5d578677b0b56031bf501526ecbad60
|
|
| MD5 |
f60aa7266bf72abbb4bfe3947b704e86
|
|
| BLAKE2b-256 |
fe2f0329e4a3294d549cfb55873662acc06ef7e663465d7650934eb8c39c6993
|
File details
Details for the file multi_agent_base-0.1.0b1-py3-none-any.whl.
File metadata
- Download URL: multi_agent_base-0.1.0b1-py3-none-any.whl
- Upload date:
- Size: 324.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.2
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
f0fc686467e598f686c6708e9b4edc52bbcd5e5e607dfb342bd17229b95b3839
|
|
| MD5 |
7441612ae19bd298f3e1e99ae229ab65
|
|
| BLAKE2b-256 |
80e47eca8ea2c4606eb93a4d9f31c7016452dcba29b1aaacc0965c4bbafc5770
|