Agents for intelligence and coordination

Calute 🤖

Calute is a powerful, production-ready framework for building and orchestrating AI agents with advanced function calling, memory systems, and multi-agent collaboration capabilities. Designed for both researchers and developers, Calute provides enterprise-grade features for creating sophisticated AI systems.

🚀 Key Features

Core Capabilities

  • 🎭 Multi-Agent Orchestration: Seamlessly manage and coordinate multiple specialized agents with dynamic switching based on context, capabilities, or custom triggers
  • ⚡ Enhanced Function Execution: Advanced function calling with timeout management, retry policies, parallel/sequential execution strategies, and comprehensive error handling
  • 🧠 Advanced Memory Systems: Sophisticated memory management with multiple types (short-term, long-term, episodic, semantic, working, procedural), vector search, caching, and persistence
  • 🔄 Workflow Engine: Define and execute complex multi-step workflows with conditional logic and state management
  • 🌊 Streaming Support: Real-time streaming responses with function execution tracking
  • 🔌 LLM Flexibility: Unified interface supporting OpenAI, Gemini, Anthropic, and custom models

Enhanced Features

  • Memory Store with Indexing: Fast retrieval with tag-based indexing and importance scoring
  • Function Registry: Centralized function management with metrics and validation
  • Error Recovery: Robust error handling with customizable retry policies and fallback strategies
  • Performance Monitoring: Built-in metrics collection for execution times, success rates, and resource usage
  • Context Management: Sophisticated context passing between agents and functions
  • Security Features: Function validation, safe execution environments, and access control
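The timeout management and retry policies listed above follow a standard asyncio pattern. A minimal, self-contained sketch of that pattern (illustrative only, not Calute's internals; `call_with_retry` and its parameters are hypothetical names):

```python
import asyncio

async def call_with_retry(func, *args, retries=3, timeout=5.0, backoff=0.5):
    """Run an async function under a per-attempt timeout, retrying
    failures with exponential backoff -- the core of a retry policy."""
    for attempt in range(retries):
        try:
            return await asyncio.wait_for(func(*args), timeout=timeout)
        except (asyncio.TimeoutError, RuntimeError):
            if attempt == retries - 1:
                raise  # exhausted retries: surface the error
            await asyncio.sleep(backoff * 2 ** attempt)

async def flaky(x):
    # Fails on the first call, succeeds afterwards.
    flaky.calls = getattr(flaky, "calls", 0) + 1
    if flaky.calls < 2:
        raise RuntimeError("transient error")
    return x * 2

result = asyncio.run(call_with_retry(flaky, 21))
print(result)  # 42
```

A fallback strategy is the same loop with a secondary function tried after the primary one's retries are exhausted.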

📦 Installation

Core Installation (Lightweight)

# Minimal installation with only essential dependencies
pip install calute

Feature-Specific Installations

# For web search capabilities
pip install "calute[search]"

# For image/vision processing
pip install "calute[vision]"

# For additional LLM providers (Gemini, Anthropic, Cohere)
pip install "calute[providers]"

# For database support (PostgreSQL, MongoDB, etc.)
pip install "calute[database]"

# For Redis caching/queuing
pip install "calute[redis]"

# For monitoring and observability
pip install "calute[monitoring]"

# For vector search and embeddings
pip install "calute[vectors]"

Preset Configurations

# Research-focused installation (search, vision, vectors)
pip install "calute[research]"

# Enterprise installation (database, redis, monitoring, providers)
pip install "calute[enterprise]"

# Full installation with all features
pip install "calute[full]"

Development Installation

git clone https://github.com/erfanzar/calute.git
cd calute
pip install -e ".[dev]"

🎯 Quick Start

Basic Agent Setup

import openai
from calute import Agent, Calute

# Initialize your LLM client
client = openai.OpenAI(api_key="your-key")

# Create an agent with functions
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

def analyze_data(data: str) -> dict:
    """Analyze provided data."""
    return {"summary": data, "insights": ["insight1", "insight2"]}

agent = Agent(
    id="research_agent",
    name="Research Assistant",
    model="gpt-4",
    instructions="You are a helpful research assistant.",
    functions=[search_web, analyze_data],
    temperature=0.7
)

# Initialize Calute and register agent
calute = Calute(client)
calute.register_agent(agent)

# Use the agent (create_response is a coroutine, so run it in an event loop)
import asyncio

async def main():
    response = await calute.create_response(
        prompt="Find information about quantum computing",
        agent_id="research_agent",
    )
    print(response)

asyncio.run(main())

Advanced Memory-Enhanced Agent

from calute.memory import MemoryStore, MemoryType

# Create memory store with persistence
memory = MemoryStore(
    max_short_term=100,
    max_long_term=1000,
    enable_persistence=True,
    persistence_path="./agent_memory"
)

# Add memories
memory.add_memory(
    content="User prefers technical explanations",
    memory_type=MemoryType.LONG_TERM,
    agent_id="assistant",
    tags=["preference", "user_profile"],
    importance_score=0.9
)

# Attach to Calute
calute.memory = memory
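Tag-based indexing with importance scoring, as used in the store above, reduces to a simple inverted index. This standalone sketch shows the idea only; it is not Calute's `MemoryStore` implementation and all names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    tags: list[str]
    importance: float

@dataclass
class SimpleMemoryStore:
    memories: list[Memory] = field(default_factory=list)
    _tag_index: dict[str, list[int]] = field(default_factory=dict)

    def add(self, content: str, tags: list[str], importance: float) -> None:
        idx = len(self.memories)
        self.memories.append(Memory(content, list(tags), importance))
        for tag in tags:  # inverted index: tag -> memory positions
            self._tag_index.setdefault(tag, []).append(idx)

    def by_tag(self, tag: str, min_importance: float = 0.0) -> list[Memory]:
        # Lookup cost scales with matches, not with total stored memories.
        hits = (self.memories[i] for i in self._tag_index.get(tag, []))
        return sorted(
            (m for m in hits if m.importance >= min_importance),
            key=lambda m: m.importance,
            reverse=True,
        )

store = SimpleMemoryStore()
store.add("User prefers technical explanations", ["preference"], 0.9)
store.add("User asked about quantum computing", ["topic"], 0.5)
print([m.content for m in store.by_tag("preference")])
```

Importance scores double as a ranking key at retrieval time and as an eviction signal when the short-term capacity is exceeded.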

Multi-Agent Collaboration

from calute.executors import EnhancedAgentOrchestrator, EnhancedFunctionExecutor

# Create specialized agents
research_agent = Agent(id="researcher", name="Researcher", ...)
analyst_agent = Agent(id="analyst", name="Data Analyst", ...)
writer_agent = Agent(id="writer", name="Content Writer", ...)

# Set up orchestrator
orchestrator = EnhancedAgentOrchestrator(enable_metrics=True)
await orchestrator.register_agent(research_agent)
await orchestrator.register_agent(analyst_agent)
await orchestrator.register_agent(writer_agent)

# Enhanced executor with parallel execution
executor = EnhancedFunctionExecutor(
    orchestrator=orchestrator,
    default_timeout=30.0,
    max_concurrent_executions=5
)

# Execute functions across agents
from calute.types import RequestFunctionCall, FunctionCallStrategy

calls = [
    RequestFunctionCall(name="research_topic", arguments={"topic": "AI"}, id="1"),
    RequestFunctionCall(name="analyze_findings", arguments={"data": "..."}, id="2"),
    RequestFunctionCall(name="write_report", arguments={"content": "..."}, id="3")
]

results = await executor.execute_function_calls(
    calls=calls,
    strategy=FunctionCallStrategy.PARALLEL
)
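Conceptually, a PARALLEL strategy with a `max_concurrent_executions` cap is fan-out through a semaphore with a per-call timeout. A minimal asyncio sketch of that mechanic (not Calute's executor; `run_calls` and the dict-based call specs are stand-ins):

```python
import asyncio

async def run_calls(calls, handler, max_concurrent=5, timeout=30.0):
    """Execute call specs concurrently, capped by a semaphore,
    each guarded by its own timeout."""
    sem = asyncio.Semaphore(max_concurrent)

    async def one(call):
        async with sem:
            return await asyncio.wait_for(handler(call), timeout=timeout)

    # return_exceptions=True keeps one failure from cancelling the rest;
    # gather preserves input order in its result list.
    return await asyncio.gather(*(one(c) for c in calls), return_exceptions=True)

async def handler(call):
    await asyncio.sleep(0.01)  # stand-in for real function work
    return f"done:{call['name']}"

calls = [{"name": "research_topic"}, {"name": "analyze_findings"}, {"name": "write_report"}]
results = asyncio.run(run_calls(calls, handler))
print(results)
```

A SEQUENTIAL strategy is the degenerate case `max_concurrent=1`, which guarantees calls finish in submission order.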

📚 Example Scenarios

The examples/ directory contains comprehensive scenarios demonstrating Calute's capabilities:

  1. Conversational Assistant (scenario_1_conversational_assistant.py)

    • Memory-enhanced chatbot with user preference learning
    • Sentiment analysis and context retention
  2. Code Analyzer (scenario_2_code_analyzer.py)

    • Python code analysis with security scanning
    • Refactoring suggestions and test generation
    • Parallel analysis execution
  3. Multi-Agent Collaboration (scenario_3_multi_agent_collaboration.py)

    • Coordinated task execution across specialized agents
    • Dynamic agent switching based on context
    • Shared memory and progress tracking
  4. Streaming Research Assistant (scenario_4_streaming_research_assistant.py)

    • Real-time streaming responses
    • Knowledge graph building
    • Research synthesis and progress tracking

🏗️ Architecture

graph TB
    subgraph "Calute Core"
        A[Client Interface] --> B[Agent Registry]
        B --> C[Orchestrator]
        C --> D[Function Executor]
        D --> E[Memory Store]
    end
    
    subgraph "Enhanced Features"
        F[Retry Policy] --> D
        G[Timeout Manager] --> D
        H[Metrics Collector] --> D
        I[Vector Search] --> E
        J[Cache Layer] --> E
        K[Persistence] --> E
    end
    
    subgraph "Agents"
        L[Agent 1] --> C
        M[Agent 2] --> C
        N[Agent N] --> C
    end

🛠️ Core Components

Memory System

  • MemoryStore: Advanced memory management with indexing and caching
  • MemoryType: SHORT_TERM, LONG_TERM, EPISODIC, SEMANTIC, WORKING, PROCEDURAL
  • Features: Vector search, similarity matching, consolidation, pattern analysis

Executors

  • EnhancedAgentOrchestrator: Multi-agent coordination with metrics
  • EnhancedFunctionExecutor: Parallel/sequential execution with timeout and retry
  • FunctionRegistry: Centralized function management and validation

Configuration

  • CaluteConfig: Centralized configuration management
  • Environment-based settings: Development, staging, production profiles
  • Logging configuration: Structured logging with customizable levels
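Environment-based profiles like these are typically resolved from environment variables at startup. A generic sketch of that pattern (field names and the `CALUTE_` prefix are hypothetical, not `CaluteConfig`'s actual schema):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    environment: str = "development"
    log_level: str = "DEBUG"
    default_timeout: float = 30.0

    @classmethod
    def from_env(cls, prefix: str = "CALUTE_") -> "AppConfig":
        env = os.environ.get(prefix + "ENV", "development")
        # Production tightens defaults; explicit variables still override.
        log_default, timeout_default = {
            "production": ("WARNING", 10.0),
        }.get(env, ("DEBUG", 30.0))
        return cls(
            environment=env,
            log_level=os.environ.get(prefix + "LOG_LEVEL", log_default),
            default_timeout=float(os.environ.get(prefix + "TIMEOUT", timeout_default)),
        )

os.environ["CALUTE_ENV"] = "production"
cfg = AppConfig.from_env()
print(cfg.log_level)  # WARNING
```

Freezing the dataclass makes the resolved configuration immutable, so profile differences are decided once at load time.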

📊 Performance & Monitoring

# Access execution metrics
metrics = orchestrator.function_registry.get_metrics("function_name")
print(f"Total calls: {metrics.total_calls}")
print(f"Success rate: {metrics.successful_calls / metrics.total_calls:.0%}")
print(f"Avg duration: {metrics.average_duration:.2f}s")

# Memory statistics
stats = memory.get_statistics()
print(f"Cache hit rate: {stats['cache_hit_rate']:.1%}")
print(f"Total memories: {stats['total_memories']}")
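The per-function metrics read above are simple accumulated counters. A standalone sketch of how such stats can be recorded (a conceptual model, not Calute's `FunctionRegistry` metrics class):

```python
from dataclasses import dataclass

@dataclass
class FunctionMetrics:
    total_calls: int = 0
    successful_calls: int = 0
    total_duration: float = 0.0

    def record(self, duration: float, success: bool) -> None:
        """Accumulate one execution's outcome."""
        self.total_calls += 1
        self.successful_calls += success  # bool counts as 0 or 1
        self.total_duration += duration

    @property
    def average_duration(self) -> float:
        # Guard against division by zero before any calls are recorded.
        return self.total_duration / self.total_calls if self.total_calls else 0.0

m = FunctionMetrics()
m.record(0.2, True)
m.record(0.4, True)
m.record(0.6, False)
print(round(m.average_duration, 2))  # 0.4
```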

🔒 Security & Best Practices

  • Function validation before execution
  • Timeout protection against hanging operations
  • Secure memory persistence with encryption support
  • Rate limiting and resource management
  • Comprehensive error handling and logging

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

# Install with dev dependencies
poetry install --with dev

# Run tests
pytest

# Run linting
ruff check .

# Format code
black .

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🙏 Acknowledgments

Built with ❤️ by erfanzar and contributors.

Note: This is an active research project. APIs may change between versions. Please pin your dependencies for production use.
