
🤖 AA Kit

The Universal AI Agent Framework for the MCP Era

AA Kit is a Python framework designed to build AI agents that naturally compose into ecosystems. Every agent is simultaneously a standalone agent, an MCP server, and an MCP client - creating true interoperability across the entire AI landscape.

🎯 Core Philosophy

"Make simple things simple, complex things possible, and everything interoperable"

AA Kit fills the gap left by existing frameworks by being:

  • Simple by default - Create agents in 3 lines of code
  • MCP-native - Universal compatibility with all AI tools and frameworks
  • Composition-first - Agents naturally work together
  • Deploy-ready - Production deployment in one line

🚀 Quick Start

from aakit import Agent

# Create an agent in 3 lines
agent = Agent(
    name="assistant",
    instruction="You are a helpful assistant",
    model="gpt-4o"
)

# Chat synchronously - no async/await needed!
response = agent.chat("Hello! What can you help me with?")
print(response)

# Stream responses
for chunk in agent.stream_chat("Tell me a story"):
    print(chunk, end='', flush=True)

# Deploy as MCP server
agent.serve_mcp(port=8080)  # Now accessible to any MCP client

🎉 New: Simple Synchronous API

No more async/await complexity for basic use cases! AA Kit now provides both sync and async APIs:

# Synchronous (NEW!) - Perfect for scripts and simple use cases
response = agent.chat("Hello")  # Just works!

# Asynchronous - When you need it for advanced use cases  
response = await agent.achat("Hello")  # Async version with 'a' prefix


📦 Installation

pip install aa-kit

Requirements:

  • Python 3.9+
  • At least one LLM API key (OpenAI, Anthropic, etc.)

🧠 Core Concepts

Agents are Simple Constructors

from aakit import Agent

agent = Agent(
    name="my_agent",                    # Unique identifier
    instruction="Your role description", # System prompt
    model="gpt-4",                      # LLM to use
    tools=[],                           # Optional tools
    memory=None,                        # Optional memory backend
    reasoning="simple"                  # Reasoning pattern
)

# Use it immediately - no setup needed!
response = agent.chat("Hello!")  # Synchronous
response = await agent.achat("Hello!")  # Async when needed

Tools are Always MCP

import random

# Define tools as regular Python functions
def search_database(query: str) -> str:
    return f"Results for: {query}"

def create_ticket(issue: str) -> str:
    return f"Ticket #{random.randint(1000, 9999)} created"

# Agent automatically converts them to MCP
agent = Agent(
    name="support",
    instruction="You help customers",
    model="gpt-4",
    tools=[search_database, create_ticket]
)
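The docs don't spell out how this conversion works internally, but turning a plain function into an MCP-style tool descriptor typically means introspecting its signature into a JSON Schema. A plausible sketch using only the standard library (`function_to_mcp_tool` is a hypothetical name, not an AA Kit API):

```python
import inspect
from typing import get_type_hints

def function_to_mcp_tool(fn) -> dict:
    """Build an MCP-style tool descriptor from a function signature (illustrative)."""
    hints = get_type_hints(fn)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": type_map.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value → caller must supply it
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {"type": "object", "properties": properties, "required": required},
    }

def search_database(query: str) -> str:
    """Search the database for matching records."""
    return f"Results for: {query}"

tool = function_to_mcp_tool(search_database)
# tool["name"] == "search_database"; tool["inputSchema"]["required"] == ["query"]
```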

Every Agent IS an MCP Server

# Your agent is automatically an MCP server
agent.serve_mcp(port=8080)

# Other agents can now use it as a tool
other_agent = Agent(
    name="manager", 
    instruction="You coordinate support",
    model="gpt-4",
    tools=["http://localhost:8080"]  # Use the support agent
)

🔥 Key Differentiators

1. MCP-First Architecture

  • Every tool speaks MCP protocol
  • Every agent IS an MCP server
  • Universal compatibility with all AI frameworks

2. Built-in Reasoning Patterns

# Choose how your agent thinks
simple_agent = Agent("chatter", "You chat", model="gpt-4", reasoning="simple")
react_agent = Agent("solver", "You solve problems", model="gpt-4", reasoning="react")
cot_agent = Agent("analyst", "You analyze", model="gpt-4", reasoning="chain_of_thought")

3. Stateless + External Memory

# Memory is injected, not built-in
agent = Agent(
    name="assistant",
    instruction="You remember conversations",
    model="gpt-4",
    memory="redis://localhost"  # Any storage backend
)

4. Zero-Config LLM Management

# Automatic model selection and fallbacks
agent = Agent("assistant", "You help", model="auto")  # OpenAI → Anthropic → Local
agent = Agent("assistant", "You help", model=["gpt-4", "claude-3"])  # Fallback chain

5. True Interoperability

# AA Kit agents work in any framework
my_agent = Agent("helper", "You help with tasks", model="gpt-4")

# Use in LangChain
langchain_tool = Tool.from_mcp(my_agent.mcp_endpoint)

# Use in CrewAI
crewai_tool = MCPTool(my_agent.mcp_endpoint)

๐Ÿ‘จโ€๐Ÿ’ป Developer Experience

Simple Creation

# Minimal agent
agent = Agent(
    name="math_helper",
    instruction="You help with math",
    model="gpt-4"
)

# With tools
calculator = Agent(
    name="calculator",
    instruction="You solve math problems",
    model="gpt-4",
    tools=[add, multiply, divide]
)

# With memory
personal_assistant = Agent(
    name="assistant",
    instruction="You are my personal assistant",
    model="gpt-4", 
    memory="sqlite://assistant.db"
)

Easy Composition

# Agents use other agents naturally
researcher = Agent("researcher", "You research topics", model="gpt-4", tools=[web_search])
writer = Agent("writer", "You write articles", model="claude-3")

def create_content(topic):
    research = researcher.chat(f"Research {topic}")
    article = writer.chat(f"Write an article about: {research}")
    return article

One-Line Deployment

# Local development
agent.serve()  # localhost:8000

# Production
agent.deploy(mode="serverless")  # Auto-scaling cloud deployment

๐Ÿ—๏ธ Architecture

Core Components

┌─────────────────┐
│     Agent       │
├─────────────────┤
│ • Name          │
│ • Instruction   │
│ • Model         │
│ • Tools (MCP)   │
│ • Memory        │
│ • Reasoning     │
└─────────────────┘
         │
         ▼
┌─────────────────┐
│  MCP Server     │
├─────────────────┤
│ • Auto-generated│
│ • Standard API  │
│ • Tool calls    │
│ • Responses     │
└─────────────────┘

Reasoning Patterns

  1. Simple: Direct LLM call, no tool use
  2. ReAct: Reason → Act → Observe loop with tools
  3. Chain of Thought: Think step-by-step before responding
  4. Custom: Define your own reasoning pattern
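The ReAct pattern above can be sketched as a small loop: the model proposes either a tool call or a final answer, tool results are fed back as observations, and the loop repeats. This is a minimal illustration with a scripted stand-in for the model, not AA Kit's implementation:

```python
def react_loop(question, model, tools, max_steps=5):
    """Reason → Act → Observe loop (illustrative sketch)."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = model(transcript)                  # Reason: model proposes next step
        if step["type"] == "final":
            return step["answer"]
        observation = tools[step["tool"]](step["input"])   # Act: run the tool
        transcript += f"\nObservation: {observation}"      # Observe: feed result back

# A scripted "model": first call a tool, then answer.
def scripted_model(transcript):
    if "Observation:" not in transcript:
        return {"type": "act", "tool": "search", "input": "order 12345"}
    return {"type": "final", "answer": "Your order shipped."}

tools = {"search": lambda q: f"found {q}"}
print(react_loop("Where is my order?", scripted_model, tools))
# → Your order shipped.
```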

Memory Backends

  • None: Stateless (default)
  • Local: In-memory for development
  • Redis: Fast external memory
  • SQLite: File-based persistence
  • PostgreSQL: Production database
  • Custom: Bring your own storage
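The docs don't define the interface a custom backend must implement, but since memory is injected rather than built in, a pluggable backend presumably reduces to a small read/append protocol keyed by session. A hypothetical sketch (method names `append`/`history` are assumptions, not AA Kit's documented API):

```python
from typing import Protocol

class MemoryBackend(Protocol):
    """Hypothetical interface a custom backend might satisfy (names assumed)."""
    def append(self, session_id: str, role: str, content: str) -> None: ...
    def history(self, session_id: str) -> list: ...

class InMemoryBackend:
    """Dict-backed store, the kind of thing a 'Local' backend could be."""
    def __init__(self):
        self._sessions = {}

    def append(self, session_id, role, content):
        # Record one conversation turn under the session key
        self._sessions.setdefault(session_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, session_id):
        # Unknown sessions read as empty, matching the stateless default
        return self._sessions.get(session_id, [])

mem = InMemoryBackend()
mem.append("s1", "user", "Hello")
mem.append("s1", "assistant", "Hi!")
# mem.history("s1") now holds two turns
```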

📚 Examples

Customer Support Agent

from aakit import Agent

def search_orders(customer_id: str) -> str:
    return f"Orders for {customer_id}: [Order #1, Order #2]"

def create_ticket(issue: str) -> str:
    return f"Support ticket created: {issue}"

support_agent = Agent(
    name="support",
    instruction="""You are a helpful customer support agent. 
    Help customers with orders and issues. Be empathetic and solution-focused.""",
    model="gpt-4",
    tools=[search_orders, create_ticket],
    reasoning="react"
)

# Use the agent
response = support_agent.chat("I can't find my order #12345")
print(response)

Multi-Agent Content Team

from aakit import Agent

# Define specialized agents
researcher = Agent(
    name="researcher",
    instruction="You research topics thoroughly using web search",
    model="gpt-4",
    tools=[web_search]
)

writer = Agent(
    name="writer", 
    instruction="You write engaging, well-structured articles",
    model="claude-3"
)

editor = Agent(
    name="editor",
    instruction="You review and improve written content",
    model="gpt-4"
)

# Expose team as MCP services
from aakit import serve_mcp

serve_mcp({
    "researcher": researcher,
    "writer": writer, 
    "editor": editor
}, port=8080)

# Now other agents can use the entire team
coordinator = Agent(
    name="coordinator",
    instruction="You coordinate content creation using the research, writing, and editing team",
    model="gpt-4",
    tools=["http://localhost:8080/researcher", 
           "http://localhost:8080/writer",
           "http://localhost:8080/editor"]
)

Code Analysis Agent

def analyze_code(code: str) -> str:
    """Analyze code for potential issues"""
    return f"Analysis of {len(code)} characters of code..."

def suggest_improvements(analysis: str) -> str:
    """Suggest code improvements"""
    return f"Improvements based on: {analysis[:50]}..."

code_agent = Agent(
    name="code_reviewer",
    instruction="""You are a senior code reviewer. 
    Analyze code for bugs, security issues, and best practices.""",
    model="gpt-4",
    tools=[analyze_code, suggest_improvements],
    reasoning="chain_of_thought"
)

# Use with different models for cost optimization
quick_review = Agent(
    name="quick_reviewer",
    instruction="You do quick code reviews",
    model="gpt-3.5-turbo",
    tools=[analyze_code]
)

📖 API Reference

Agent Class

class Agent:
    def __init__(
        self,
        name: str,
        instruction: str,
        model: str | List[str] = "auto",
        tools: List[Callable | str] = None,
        memory: str | MemoryBackend = None,
        reasoning: str = "simple",
        temperature: float = 0.7,
        max_tokens: int = None,
        rate_limit: int = None
    )
    
    def chat(self, message: str) -> str:
        """Send a message to the agent (synchronous)"""

    async def achat(self, message: str) -> str:
        """Async variant of chat"""

    def stream_chat(self, message: str) -> Iterator[str]:
        """Yield the response incrementally as chunks"""
        
    def serve(self, port: int = 8000) -> None:
        """Start REST API + WebSocket server"""
        
    def serve_mcp(self, port: int = 8080) -> None:
        """Start MCP server"""
        
    def deploy(self, mode: str = "serverless") -> str:
        """Deploy to cloud"""
        
    @property
    def mcp_endpoint(self) -> str:
        """Get MCP endpoint URL"""

Utility Functions

from aakit import serve_mcp, discover_mcp_tools

# Serve multiple agents as MCP
serve_mcp({
    "agent1": agent1,
    "agent2": agent2
}, port=8080)

# Discover available MCP tools
tools = discover_mcp_tools("http://localhost:8080")

🚀 Deployment

Local Development

# Start agent with web UI
agent.serve()  # http://localhost:8000

# MCP endpoint available at
# http://localhost:8000/mcp

Production Deployment

# Serverless deployment (auto-scaling)
agent.deploy(mode="serverless")

# Container deployment
agent.deploy(mode="container")

# Kubernetes deployment  
agent.deploy(mode="kubernetes")

Environment Variables

# LLM Configuration
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key

# Memory Configuration
REDIS_URL=redis://localhost:6379
DATABASE_URL=postgresql://user:pass@localhost/db

# AA Kit Configuration
AAKIT_DEFAULT_MODEL=gpt-4
AAKIT_DEBUG=true
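Resolving variables like these typically amounts to `os.environ` lookups with sensible defaults. A sketch of how such a loader might look (the function name and fallback values are assumptions, using only the variable names from the block above):

```python
import os

def load_aakit_config(env=os.environ):
    """Resolve AA Kit-style settings from the environment (illustrative)."""
    return {
        "default_model": env.get("AAKIT_DEFAULT_MODEL", "gpt-4"),
        # Accept common truthy spellings for the debug flag
        "debug": env.get("AAKIT_DEBUG", "false").lower() in ("1", "true", "yes"),
        "redis_url": env.get("REDIS_URL"),  # None means no memory backend configured
    }

cfg = load_aakit_config({"AAKIT_DEBUG": "true"})
# cfg["debug"] is True; cfg["default_model"] falls back to "gpt-4"
```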

🛠️ Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

git clone https://github.com/josharsh/aa-kit
cd aa-kit
pip install -e ".[dev]"
pytest

📄 License

MIT License - see LICENSE for details.



AA Kit - Building the future of AI agent interoperability 🚀



Download files

Source Distribution

  • aa_kit-0.2.0.tar.gz (87.1 kB, Source)

Built Distribution

  • aa_kit-0.2.0-py3-none-any.whl (104.9 kB, Python 3)

File details

Details for the file aa_kit-0.2.0.tar.gz.

File metadata

  • Download URL: aa_kit-0.2.0.tar.gz
  • Size: 87.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.10

File hashes

  • SHA256: 5b1b5fdf1acd3cb9be3cff1808dd6105e8ff1cf15e25f049985772bcb38ba8a0
  • MD5: 954cbbdd5fd6e079e90c5f74aa621ae7
  • BLAKE2b-256: a0077923ccaa8771aed584bbf418c96fd8694d3c048cbb4cd0762b4264282edc

File details

Details for the file aa_kit-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: aa_kit-0.2.0-py3-none-any.whl
  • Size: 104.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.10

File hashes

  • SHA256: 703a4d494aab83f43c0e052d28f363c4953bf08355520715ba1a6da6b7722fec
  • MD5: f919a1766dedaefa7842a409a8bf997e
  • BLAKE2b-256: 43dbedef97c8dad38ab396d66b6c5fcff2182ad75db11875bbbb3762c8dca5c7
