
DUKE Agents - Advanced AI Agent Framework with IPO Architecture


DUKE Agents is an advanced AI agent framework implementing the IPO (Input-Process-Output) architecture with enriched memory and feedback loops. It provides autonomous agents powered by Mistral LLMs for complex task execution, enabling developers to build sophisticated AI-driven workflows with minimal effort.

🎯 Why DUKE Agents?

  • Production-Ready: Built with enterprise-grade reliability and error handling
  • Memory-Enhanced: Persistent memory across workflow steps enables context-aware processing
  • Self-Correcting: Automatic retry with satisfaction scoring ensures quality outputs
  • Fully Typed: Complete type annotations for better IDE support and fewer runtime errors
  • Extensible: Easy to create custom agents and extend functionality
  • Secure: Sandboxed code execution and configurable security policies
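The self-correction loop described above can be illustrated in plain Python. This is a minimal sketch of the general retry-until-satisfied pattern, not DUKE's internal implementation; the function and variable names are hypothetical:

```python
from typing import Callable, Tuple

def run_with_retries(
    attempt: Callable[[], Tuple[str, float]],
    satisfaction_threshold: float = 0.8,
    max_retries: int = 3,
) -> Tuple[str, float]:
    """Re-run `attempt` until its self-assessed score meets the threshold,
    keeping the best result seen so far."""
    best_result, best_score = "", -1.0
    for _ in range(max_retries):
        result, score = attempt()
        if score > best_score:
            best_result, best_score = result, score
        if score >= satisfaction_threshold:
            break  # good enough -- stop retrying
    return best_result, best_score

# Toy attempt whose score improves on each call
scores = iter([0.5, 0.7, 0.9])
result, score = run_with_retries(lambda: ("draft", next(scores)))
```

The same shape applies whether the score comes from an LLM self-assessment or an external validator; only the `attempt` callable changes.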

🚀 Features

Core Capabilities

  • ๐Ÿ—๏ธ IPO Architecture: Structured Input-Process-Output workflow with memory persistence
  • ๐Ÿค– Multiple Agent Types:
    • AtomicAgent: For discrete, well-defined tasks
    • CodeActAgent: For code generation and execution
    • Custom agents through simple inheritance
  • ๐Ÿง  Mistral Integration: Native support for all Mistral models including Codestral
  • ๐Ÿ’พ Memory Management: Rich workflow memory with feedback loops and context propagation
  • ๐Ÿ”„ Auto-correction: Built-in retry logic with configurable satisfaction thresholds
  • ๐ŸŽญ Flexible Orchestration:
    • Linear workflows for predefined sequences
    • LLM-driven dynamic agent selection
  • โœ… Type Safety: Full Pydantic models for robust data validation
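The IPO pattern behind these capabilities can be sketched in a few lines: each step consumes structured input, processes it, and emits structured output while appending a record to shared memory. This sketch is illustrative only; the real models are Pydantic classes in duke_agents.models, and the names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    """One step's trace: who ran, on what, producing what."""
    agent_name: str
    input_summary: str
    output_summary: str

@dataclass
class WorkflowMemory:
    records: list = field(default_factory=list)

    def add(self, record: MemoryRecord) -> None:
        self.records.append(record)

def run_step(agent_name, process, payload, memory):
    """Input -> Process -> Output, with the result recorded in memory."""
    output = process(payload)
    memory.add(MemoryRecord(agent_name, str(payload), str(output)))
    return output

memory = WorkflowMemory()
out = run_step("upper", str.upper, "hello", memory)
```

Because every step writes to the same memory, later agents can read earlier agents' summaries, which is what enables the context propagation listed above.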

Advanced Features

  • 📊 Workflow Visualization: Export workflows as diagrams
  • 🔍 Debugging Tools: Comprehensive logging and memory inspection
  • ⚡ Async Support: Asynchronous agent execution for better performance
  • 🛡️ Security: Sandboxed execution environment for generated code
  • 📈 Metrics: Built-in performance tracking and optimization hints
  • 🔌 Extensible: Plugin system for custom functionality

📦 Installation

Standard Installation

pip install duke-agents

Development Installation

# Clone the repository
git clone https://github.com/elmasson/duke-agents.git
cd duke-agents

# Install in development mode with all dependencies
pip install -e ".[dev,docs]"

Prerequisites

  • Python 3
  • A Mistral API key, provided via the MISTRAL_API_KEY environment variable (see Quick Start)

🔧 Quick Start

1. Set Up Your Environment

import os

# Set your Mistral API key
os.environ["MISTRAL_API_KEY"] = "your-api-key"

# Or use a .env file:
# MISTRAL_API_KEY=your-api-key

2. Basic Agent Usage

from duke_agents import AtomicAgent, ContextManager, Orchestrator

# Initialize context manager
context = ContextManager("Process customer feedback")

# Create orchestrator
orchestrator = Orchestrator(context)

# Create and register an agent
agent = AtomicAgent("feedback_analyzer")
orchestrator.register_agent(agent)

# Define workflow
workflow = [{
    'agent': 'feedback_analyzer',
    'input_type': 'atomic',
    'input_data': {
        'task_id': 'analyze_001',
        'parameters': {
            'feedback': 'Great product but shipping was slow',
            'analyze': ['sentiment', 'topics', 'actionable_insights']
        }
    }
}]

# Execute workflow
results = orchestrator.execute_linear_workflow(workflow)

# Access results
if results[0].success:
    print(f"Analysis: {results[0].result}")
    print(f"Confidence: {results[0].satisfaction_score}")

3. Code Generation and Execution

from duke_agents import CodeActAgent, ContextManager, Orchestrator

# Create a code generation agent
context = ContextManager("Data Analysis Assistant")
orchestrator = Orchestrator(context)

code_agent = CodeActAgent("data_analyst", model="codestral-latest")
orchestrator.register_agent(code_agent)

# Generate and execute code
workflow = [{
    'agent': 'data_analyst',
    'input_type': 'codeact',
    'input_data': {
        'prompt': '''Create a function that:
        1. Loads sales data from a CSV file
        2. Calculates total revenue by product category
        3. Identifies top 5 performing products
        4. Generates a summary report with visualizations''',
        'context_data': {
            'csv_path': 'sales_data.csv',
            'date_column': 'transaction_date'
        }
    }
}]

results = orchestrator.execute_linear_workflow(workflow)

if results[0].success:
    print(f"Generated Code:\n{results[0].generated_code}")
    print(f"\nExecution Output:\n{results[0].execution_result}")

4. Multi-Agent Workflows

# Create multiple specialized agents
data_agent = AtomicAgent("data_processor")
analysis_agent = CodeActAgent("analyzer")
report_agent = AtomicAgent("report_generator")

# Register all agents
for agent in [data_agent, analysis_agent, report_agent]:
    orchestrator.register_agent(agent)

# Define multi-step workflow
workflow = [
    {
        'agent': 'data_processor',
        'input_type': 'atomic',
        'input_data': {
            'task_id': 'load_data',
            'parameters': {'source': 'database', 'table': 'sales_2024'}
        }
    },
    {
        'agent': 'analyzer',
        'input_type': 'codeact',
        'input_data': {
            'prompt': 'Analyze the sales data and identify trends, anomalies, and opportunities'
        }
    },
    {
        'agent': 'report_generator',
        'input_type': 'atomic',
        'input_data': {
            'task_id': 'create_report',
            'parameters': {'format': 'pdf', 'include_visuals': True}
        }
    }
]

# Execute the complete workflow
results = orchestrator.execute_linear_workflow(workflow)

5. Custom Agent Creation

from duke_agents.agents import BaseAgent
from duke_agents.models import AtomicInput, AtomicOutput
from pydantic import BaseModel

class TranslationOutput(BaseModel):
    translated_text: str
    source_language: str
    target_language: str
    confidence: float

class TranslationAgent(BaseAgent):
    """Custom agent for language translation."""
    
    def __init__(self, name: str, model: str = "mistral-large"):
        super().__init__(name, model)
        self.agent_type = "translator"
    
    def process(self, input_data: AtomicInput, context_data: dict | None = None) -> TranslationOutput:
        # Custom processing logic
        prompt = f"""Translate the following text to {input_data.parameters.get('target_language', 'English')}:
        
        {input_data.parameters['text']}
        
        Also identify the source language."""
        
        response = self.llm_client.complete(prompt)
        
        # Parse response and create output
        return TranslationOutput(
            translated_text=response['translation'],
            source_language=response['source_language'],
            target_language=input_data.parameters['target_language'],
            confidence=0.95
        )

# Use the custom agent
translator = TranslationAgent("translator")
orchestrator.register_agent(translator)

📖 Advanced Usage

Dynamic Workflow with LLM-Driven Orchestration

# Let the LLM decide which agents to use
context = ContextManager("Solve user problem: analyze and visualize climate data")

# Register multiple specialized agents
agents = {
    'data_fetcher': AtomicAgent("data_fetcher"),
    'data_cleaner': AtomicAgent("data_cleaner"),
    'statistician': CodeActAgent("statistician"),
    'visualizer': CodeActAgent("visualizer"),
    'reporter': AtomicAgent("reporter")
}

for agent in agents.values():
    orchestrator.register_agent(agent)

# Execute LLM-driven workflow
results = orchestrator.execute_llm_driven_workflow(
    user_request="Fetch climate data for the last 10 years, clean it, perform statistical analysis, create visualizations, and generate a comprehensive report",
    max_steps=10
)

Memory and Context Management

# Access workflow memory
memory = context.memory

# Inspect memory records
for record in memory.agent_records:
    print(f"Agent: {record.agent_name}")
    print(f"Input: {record.input_summary}")
    print(f"Output: {record.output_summary}")
    print(f"Timestamp: {record.timestamp}")
    print("---")

# Add custom feedback
memory.add_feedback("visualization", "Excellent charts, very clear and informative", 0.95)

# Get memory summary for LLM context
summary = memory.get_summary()

Configuration and Customization

from duke_agents.config import DukeConfig

# Custom configuration
config = DukeConfig(
    mistral_api_key="your-key",
    default_model="mistral-large",
    temperature=0.7,
    max_retries=5,
    satisfaction_threshold=0.8,
    code_execution_timeout=60,  # seconds
    enable_sandboxing=True
)

# Create orchestrator with custom config
orchestrator = Orchestrator(context, config=config)

Error Handling and Debugging

# Enable detailed logging
import logging
logging.getLogger('duke_agents').setLevel(logging.DEBUG)

# Execute with error handling
try:
    results = orchestrator.execute_linear_workflow(workflow)
except Exception as e:
    # Access detailed error information
    print(f"Workflow failed: {e}")
    
    # Inspect partial results
    for i, record in enumerate(context.memory.agent_records):
        if record.error:
            print(f"Step {i} failed: {record.error}")

# Export workflow for debugging
orchestrator.export_workflow("debug_workflow.json")

๐Ÿ—๏ธ Architecture

Component Overview

duke-agents/
├── agents/              # Agent implementations
│   ├── base_agent.py    # Abstract base class
│   ├── atomic_agent.py  # Simple task execution
│   └── codeact_agent.py # Code generation/execution
├── models/              # Data models
│   ├── atomic_models.py  # Input/Output for AtomicAgent
│   ├── codeact_models.py # Input/Output for CodeActAgent
│   └── memory.py         # Memory management
├── orchestration/       # Workflow management
│   ├── context_manager.py # Context and memory
│   └── orchestrator.py    # Workflow execution
├── executors/           # Code execution
│   └── code_executor.py   # Safe code execution
├── llm/                 # LLM integration
│   └── mistral_client.py  # Mistral API client
└── config.py            # Configuration

Design Principles

  1. Separation of Concerns: Each component has a single, well-defined responsibility
  2. Extensibility: Easy to add new agent types and capabilities
  3. Type Safety: Full type hints and runtime validation
  4. Memory-First: All operations consider memory and context
  5. Fail-Safe: Graceful error handling and recovery

🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=duke_agents

# Run specific test file
pytest tests/test_agents.py

# Run with verbose output
pytest -v

📊 Performance Considerations

  • Concurrent Execution: Agents can run in parallel when dependencies allow
  • Caching: LLM responses are cached to reduce API calls
  • Memory Optimization: Automatic memory pruning for long workflows
  • Batch Processing: Support for processing multiple inputs efficiently
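Caching of the kind listed above can be sketched with a response store keyed by a prompt hash. This is a generic memoization pattern, not DUKE's documented cache policy; the class and call names are hypothetical:

```python
import hashlib

class CachedClient:
    """Wraps an expensive completion call and memoizes responses by prompt hash."""

    def __init__(self, complete):
        self._complete = complete  # the underlying (expensive) call
        self._cache = {}
        self.calls = 0             # counts real invocations, for inspection

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = self._complete(prompt)
        return self._cache[key]

client = CachedClient(lambda p: p[::-1])  # stand-in for a real API call
a = client.complete("hello")
b = client.complete("hello")              # served from cache, no second call
```

Hashing the prompt keeps cache keys small and uniform even for long prompts; a production cache would also need an eviction policy and an eye on temperature-dependent nondeterminism.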

🔒 Security

  • Sandboxed Execution: Code runs in isolated environments
  • Input Validation: All inputs are validated before processing
  • API Key Protection: Secure handling of sensitive credentials
  • Rate Limiting: Built-in rate limiting for API calls
  • Audit Logging: Complete audit trail of all operations
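A common isolation technique for generated code is running it in a fresh interpreter process with a hard timeout. The sketch below shows that generic pattern; it is not necessarily how code_executor.py is implemented, and a real sandbox would add resource limits and filesystem/network restrictions on top:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: int = 5) -> str:
    """Execute `code` in a separate Python process with a time limit,
    returning its captured stdout (or a marker on timeout)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout.strip()
    except subprocess.TimeoutExpired:
        return "<timed out>"

out = run_sandboxed("print(2 + 2)")
```

Process isolation means a crash or infinite loop in generated code cannot take down the orchestrator itself, which is the property the bullet list above is after.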

📚 Documentation

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details on:

  • Code style and standards
  • Development setup
  • Testing requirements
  • Pull request process
  • Issue reporting

📈 Roadmap

  • v1.1: Async/await support throughout
  • v1.2: Additional LLM providers (OpenAI, Anthropic)
  • v1.3: Web UI for workflow design
  • v1.4: Distributed agent execution
  • v2.0: Agent marketplace and sharing

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

๐Ÿ“ฌ Support


Made with โค๏ธ by the DUKE Analytics team
