Agents for intelligence and coordination

Calute 🤖

Calute is a powerful, production-ready framework for building and orchestrating AI agents with advanced function calling, memory systems, and multi-agent collaboration capabilities. Designed for both researchers and developers, Calute provides enterprise-grade features for creating sophisticated AI systems.

🚀 Key Features

Core Capabilities

  • 🎭 Multi-Agent Orchestration: Seamlessly manage and coordinate multiple specialized agents with dynamic switching based on context, capabilities, or custom triggers
  • ⚡ Enhanced Function Execution: Advanced function calling with timeout management, retry policies, parallel/sequential execution strategies, and comprehensive error handling
  • 🧠 Advanced Memory Systems: Sophisticated memory management with multiple types (short-term, long-term, episodic, semantic, working, procedural), vector search, caching, and persistence
  • 🔄 Workflow Engine: Define and execute complex multi-step workflows with conditional logic and state management
  • 🌊 Streaming Support: Real-time streaming responses with function execution tracking
  • 🔌 LLM Flexibility: Unified interface supporting OpenAI, Gemini, Anthropic, and custom models

Enhanced Features

  • Memory Store with Indexing: Fast retrieval with tag-based indexing and importance scoring
  • Function Registry: Centralized function management with metrics and validation
  • Error Recovery: Robust error handling with customizable retry policies and fallback strategies
  • Performance Monitoring: Built-in metrics collection for execution times, success rates, and resource usage
  • Context Management: Sophisticated context passing between agents and functions
  • Security Features: Function validation, safe execution environments, and access control
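
The retry behaviour listed above boils down to a backoff loop around the call. A minimal sketch, assuming nothing about Calute's actual retry-policy class:

```python
# Sketch of a retry policy with exponential backoff; names are illustrative.
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise                        # exhausted: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

outcome = with_retries(flaky)  # succeeds on the third attempt
```

A fallback strategy would replace the final `raise` with a call to an alternative function.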

📦 Installation

Core Installation (Lightweight)

# Minimal installation with only essential dependencies
pip install calute

Feature-Specific Installations

# For web search capabilities
pip install "calute[search]"

# For image/vision processing
pip install "calute[vision]"

# For additional LLM providers (Gemini, Anthropic, Cohere)
pip install "calute[providers]"

# For database support (PostgreSQL, MongoDB, etc.)
pip install "calute[database]"

# For Redis caching/queuing
pip install "calute[redis]"

# For monitoring and observability
pip install "calute[monitoring]"

# For vector search and embeddings
pip install "calute[vectors]"

Preset Configurations

# Research-focused installation (search, vision, vectors)
pip install "calute[research]"

# Enterprise installation (database, redis, monitoring, providers)
pip install "calute[enterprise]"

# Full installation with all features
pip install "calute[full]"

Development Installation

git clone https://github.com/erfanzar/calute.git
cd calute
pip install -e ".[dev]"

🎯 Quick Start

Basic Agent Setup

import openai
from calute import Agent, Calute

# Initialize your LLM client
client = openai.OpenAI(api_key="your-key")

# Create an agent with functions
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

def analyze_data(data: str) -> dict:
    """Analyze provided data."""
    return {"summary": data, "insights": ["insight1", "insight2"]}

agent = Agent(
    id="research_agent",
    name="Research Assistant",
    model="gpt-4",
    instructions="You are a helpful research assistant.",
    functions=[search_web, analyze_data],
    temperature=0.7
)

# Initialize Calute and register agent
calute = Calute(client)
calute.register_agent(agent)

# Use the agent (create_response is a coroutine, so await it inside an async function)
response = await calute.create_response(
    prompt="Find information about quantum computing",
    agent_id="research_agent"
)

Advanced Memory-Enhanced Agent

from calute.memory import MemoryStore, MemoryType

# Create memory store with persistence
memory = MemoryStore(
    max_short_term=100,
    max_long_term=1000,
    enable_persistence=True,
    persistence_path="./agent_memory"
)

# Add memories
memory.add_memory(
    content="User prefers technical explanations",
    memory_type=MemoryType.LONG_TERM,
    agent_id="assistant",
    tags=["preference", "user_profile"],
    importance_score=0.9
)

# Attach to Calute
calute.memory = memory
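
Conceptually, the tag-based indexing and importance scoring used above look like this; `TinyMemoryStore` is a hypothetical stand-in, not Calute's `MemoryStore`:

```python
# Plain-Python sketch of tag-indexed retrieval ranked by importance score.
from collections import defaultdict

class TinyMemoryStore:
    def __init__(self):
        self.memories = []                 # list of (content, importance)
        self.tag_index = defaultdict(set)  # tag -> indices into self.memories

    def add(self, content, tags=(), importance=0.5):
        idx = len(self.memories)
        self.memories.append((content, importance))
        for tag in tags:
            self.tag_index[tag].add(idx)

    def retrieve(self, tag, top_k=3):
        # O(1) tag lookup, then rank the matches by importance
        hits = [self.memories[i] for i in self.tag_index.get(tag, ())]
        return [c for c, _ in sorted(hits, key=lambda m: -m[1])[:top_k]]

store = TinyMemoryStore()
store.add("User prefers technical explanations", ["preference"], 0.9)
store.add("User asked about quantum computing", ["preference", "topic"], 0.4)
top = store.retrieve("preference", top_k=1)
```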

Multi-Agent Collaboration

from calute.executors import EnhancedAgentOrchestrator, EnhancedFunctionExecutor

# Create specialized agents
research_agent = Agent(id="researcher", name="Researcher", ...)
analyst_agent = Agent(id="analyst", name="Data Analyst", ...)
writer_agent = Agent(id="writer", name="Content Writer", ...)

# Set up orchestrator
orchestrator = EnhancedAgentOrchestrator(enable_metrics=True)
await orchestrator.register_agent(research_agent)
await orchestrator.register_agent(analyst_agent)
await orchestrator.register_agent(writer_agent)

# Enhanced executor with parallel execution
executor = EnhancedFunctionExecutor(
    orchestrator=orchestrator,
    default_timeout=30.0,
    max_concurrent_executions=5
)

# Execute functions across agents
from calute.types import RequestFunctionCall, FunctionCallStrategy

calls = [
    RequestFunctionCall(name="research_topic", arguments={"topic": "AI"}, id="1"),
    RequestFunctionCall(name="analyze_findings", arguments={"data": "..."}, id="2"),
    RequestFunctionCall(name="write_report", arguments={"content": "..."}, id="3")
]

results = await executor.execute_function_calls(
    calls=calls,
    strategy=FunctionCallStrategy.PARALLEL
)
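
The PARALLEL strategy above is essentially `asyncio.gather` over the pending calls, while SEQUENTIAL awaits them one at a time. A dependency-free sketch (the real executor additionally applies timeouts, retries, and a concurrency cap):

```python
# Parallel vs. sequential function-call execution, reduced to plain asyncio.
import asyncio

async def call_function(name, arguments):
    await asyncio.sleep(0.01)              # stand-in for real work
    return f"{name}({arguments})"

async def execute(calls, strategy="parallel"):
    if strategy == "parallel":
        return await asyncio.gather(*(call_function(n, a) for n, a in calls))
    results = []                           # sequential: one call at a time
    for n, a in calls:
        results.append(await call_function(n, a))
    return results

calls = [("research_topic", "AI"), ("write_report", "draft")]
results = asyncio.run(execute(calls))
```

`asyncio.gather` preserves input order, so results line up with the call list under either strategy.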

📚 Example Scenarios

The examples/ directory contains comprehensive scenarios demonstrating Calute's capabilities:

  1. Conversational Assistant (scenario_1_conversational_assistant.py)

    • Memory-enhanced chatbot with user preference learning
    • Sentiment analysis and context retention
  2. Code Analyzer (scenario_2_code_analyzer.py)

    • Python code analysis with security scanning
    • Refactoring suggestions and test generation
    • Parallel analysis execution
  3. Multi-Agent Collaboration (scenario_3_multi_agent_collaboration.py)

    • Coordinated task execution across specialized agents
    • Dynamic agent switching based on context
    • Shared memory and progress tracking
  4. Streaming Research Assistant (scenario_4_streaming_research_assistant.py)

    • Real-time streaming responses
    • Knowledge graph building
    • Research synthesis and progress tracking

🏗️ Architecture

graph TB
    subgraph "Calute Core"
        A[Client Interface] --> B[Agent Registry]
        B --> C[Orchestrator]
        C --> D[Function Executor]
        D --> E[Memory Store]
    end
    
    subgraph "Enhanced Features"
        F[Retry Policy] --> D
        G[Timeout Manager] --> D
        H[Metrics Collector] --> D
        I[Vector Search] --> E
        J[Cache Layer] --> E
        K[Persistence] --> E
    end
    
    subgraph "Agents"
        L[Agent 1] --> C
        M[Agent 2] --> C
        N[Agent N] --> C
    end

🛠️ Core Components

Memory System

  • MemoryStore: Advanced memory management with indexing and caching
  • MemoryType: SHORT_TERM, LONG_TERM, EPISODIC, SEMANTIC, WORKING, PROCEDURAL
  • Features: Vector search, similarity matching, consolidation, pattern analysis
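
Vector search and similarity matching reduce to nearest-neighbour lookup over embeddings. A dependency-free cosine-similarity sketch (Calute's actual implementation may delegate to a vector store):

```python
# Nearest-neighbour lookup by cosine similarity over stored embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query, embeddings, top_k=1):
    """Return the keys of the top_k stored embeddings most similar to query."""
    ranked = sorted(embeddings, key=lambda k: cosine(query, embeddings[k]),
                    reverse=True)
    return ranked[:top_k]

embeddings = {
    "likes python": [0.9, 0.1, 0.0],
    "prefers go":   [0.1, 0.9, 0.0],
}
match = nearest([0.8, 0.2, 0.0], embeddings)
```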

Executors

  • EnhancedAgentOrchestrator: Multi-agent coordination with metrics
  • EnhancedFunctionExecutor: Parallel/sequential execution with timeout and retry
  • FunctionRegistry: Centralized function management and validation

Configuration

  • CaluteConfig: Centralized configuration management
  • Environment-based settings: Development, staging, production profiles
  • Logging configuration: Structured logging with customizable levels
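
Environment-based profiles can be sketched as a lookup keyed by an environment variable; the `CALUTE_ENV` name and `Profile` fields here are assumptions for illustration, not `CaluteConfig`'s real schema:

```python
# Hypothetical sketch of environment-based configuration profiles.
import os
from dataclasses import dataclass

@dataclass
class Profile:
    log_level: str
    enable_metrics: bool

PROFILES = {
    "development": Profile(log_level="DEBUG", enable_metrics=False),
    "production":  Profile(log_level="WARNING", enable_metrics=True),
}

def load_profile(env=None):
    # Fall back to an environment variable, then to the development profile
    env = env or os.environ.get("CALUTE_ENV", "development")
    return PROFILES[env]

profile = load_profile("production")
```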

📊 Performance & Monitoring

# Access execution metrics
metrics = orchestrator.function_registry.get_metrics("function_name")
print(f"Total calls: {metrics.total_calls}")
print(f"Success rate: {metrics.successful_calls / metrics.total_calls:.0%}")
print(f"Avg duration: {metrics.average_duration:.2f}s")

# Memory statistics
stats = memory.get_statistics()
print(f"Cache hit rate: {stats['cache_hit_rate']:.1%}")
print(f"Total memories: {stats['total_memories']}")

🔒 Security & Best Practices

  • Function validation before execution
  • Timeout protection against hanging operations
  • Secure memory persistence with encryption support
  • Rate limiting and resource management
  • Comprehensive error handling and logging
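
Timeout protection against hanging operations typically looks like `asyncio.wait_for` with a fallback; a minimal sketch, not Calute's internal code:

```python
# A deadline guard: cancel a hanging coroutine and return a fallback value.
import asyncio

async def hanging_function():
    await asyncio.sleep(60)        # simulates an operation that never returns
    return "done"

async def guarded(coro, timeout):
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return "timed out"         # fallback instead of blocking forever

outcome = asyncio.run(guarded(hanging_function(), timeout=0.05))
```

`wait_for` cancels the underlying task on expiry, so the hung operation does not keep consuming resources.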

🔌 MCP Integration

Calute now supports Model Context Protocol (MCP) for connecting agents to external data sources, tools, and APIs!

Quick Start with MCP

import asyncio
from calute.cortex import CortexAgent
from calute.llms import OpenAILLM
from calute.mcp import MCPManager, MCPServerConfig
from calute.mcp.integration import add_mcp_tools_to_agent
from calute.mcp.types import MCPTransportType

async def main():
    # Setup MCP manager
    mcp_manager = MCPManager()

    # Configure MCP server (e.g., filesystem access)
    config = MCPServerConfig(
        name="filesystem",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/data"],
        transport=MCPTransportType.STDIO,
    )

    # Connect to MCP server
    await mcp_manager.add_server(config)

    # Create agent with MCP tools
    agent = CortexAgent(
        role="Data Assistant",
        goal="Help with file operations",
        backstory="Expert with filesystem access",
        model="gpt-4",
        llm=OpenAILLM(api_key="your-key"),
    )

    # Add MCP tools to agent
    await add_mcp_tools_to_agent(agent, mcp_manager)

    # Use the agent with MCP capabilities
    result = agent.execute("List and analyze files in the directory")
    print(result)

    await mcp_manager.disconnect_all()

asyncio.run(main())

Supported MCP Servers

  • Filesystem: Local file operations
  • SQLite: Database queries
  • GitHub: Repository management
  • Brave Search: Web search capabilities
  • Custom: Build your own MCP servers

See the MCP Integration Guide for detailed documentation and examples.

📖 Documentation

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

# Install with dev dependencies
poetry install --with dev

# Run tests
pytest

# Run linting
ruff check .

# Format code
black .

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🙏 Acknowledgments

Built with ❤️ by erfanzar and contributors.

📬 Contact


Note: This is an active research project. APIs may change between versions. Please pin your dependencies for production use.
