# LlamaAgent

**Advanced AI Agent Framework for Production-Ready Applications**

*Empowering developers to build intelligent, scalable AI agents with enterprise-grade reliability*
## Overview
LlamaAgent is a comprehensive AI agent framework designed for production environments. It provides a robust foundation for building intelligent agents that can reason, use tools, maintain memory, and integrate seamlessly with modern AI providers.
## Key Features
- **Multi-Provider LLM Support**: OpenAI, Anthropic, Cohere, Together AI, and more
- **Advanced Reasoning**: ReAct pattern implementation with chain-of-thought capabilities
- **Tool Integration**: Extensible tool system with calculator, Python REPL, and custom tools
- **Memory Management**: Persistent memory with vector storage capabilities
- **Production Ready**: Comprehensive error handling, logging, and monitoring
- **FastAPI Integration**: RESTful API endpoints for web applications
- **Docker Support**: Containerized deployment with Kubernetes manifests
- **Comprehensive Testing**: 38+ tests with 100% pass rate
## Quick Start

### Installation

```bash
# Install from PyPI
pip install llamaagent

# Or install from source
git clone https://github.com/llamasearchai/llamaagent.git
cd llamaagent
pip install -e ".[dev]"
```
### Basic Usage

```python
from llamaagent.agents.react import ReactAgent
from llamaagent.agents.base import AgentConfig
from llamaagent.llm.providers.openai_provider import OpenAIProvider
from llamaagent.types import TaskInput

# Configure the agent
config = AgentConfig(
    name="MyAgent",
    description="A helpful AI assistant",
    tools_enabled=True,
)

# Initialize the LLM provider
provider = OpenAIProvider(
    model_name="gpt-4",
    api_key="your-api-key",
)

# Create the agent
agent = ReactAgent(config=config, llm_provider=provider)

# Execute a task (run inside an async function, e.g. via asyncio.run)
task = TaskInput(
    id="task-1",
    task="Calculate the square root of 144 and explain the process",
)
result = await agent.arun(task)
print(result.content)
```
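For intuition, the ReAct pattern alternates model reasoning with tool calls until the model decides it is done. The standalone sketch below illustrates that loop with a scripted stand-in for the LLM; none of these names are llamaagent APIs.

```python
# Minimal, self-contained sketch of a ReAct loop (Thought -> Action -> Observation).
# All names here are illustrative, not the llamaagent API.

def react_loop(question, llm_step, tools, max_iterations=5):
    """Alternate reasoning and tool calls until the model emits a final answer."""
    transcript = [f"Question: {question}"]
    for _ in range(max_iterations):
        thought, action, arg = llm_step(transcript)   # model proposes the next step
        transcript.append(f"Thought: {thought}")
        if action == "finish":                        # model decided it is done
            return arg
        observation = tools[action](arg)              # run the chosen tool
        transcript.append(f"Action: {action}({arg}) -> Observation: {observation}")
    return None  # iteration budget exhausted

# A scripted stand-in for the LLM: first call the calculator, then finish.
def scripted_llm(transcript):
    if not any(line.startswith("Action:") for line in transcript):
        return ("I need to compute sqrt(144)", "calculator", "144 ** 0.5")
    return ("I have the result", "finish", "12.0")

tools = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}, {}))}
answer = react_loop("What is the square root of 144?", scripted_llm, tools)
```

A real agent replaces `scripted_llm` with an LLM call that parses the next thought and action out of the model's response.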
## Architecture
LlamaAgent follows a modular architecture designed for scalability and maintainability:
```text
├── agents/       # Agent implementations (ReAct, reasoning chains)
├── llm/          # LLM provider integrations
├── tools/        # Tool system and implementations
├── memory/       # Memory management and storage
├── api/          # FastAPI web interfaces
├── monitoring/   # Observability and metrics
├── security/     # Authentication and validation
└── types/        # Core type definitions
```
## Advanced Features

### Tool System

```python
from llamaagent.tools.calculator import CalculatorTool
from llamaagent.tools.python_repl import PythonREPLTool

# Register built-in tools with the agent
agent.register_tool(CalculatorTool())
agent.register_tool(PythonREPLTool())
```
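The extensible tool system described above boils down to a registry of named callables. Here is a standalone sketch of that pattern; the `Tool` and `ToolRegistry` classes are illustrative, not llamaagent's actual classes.

```python
# Sketch of an extensible tool abstraction: a tool is a named callable with a
# description, and a registry dispatches execution by name.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

@dataclass
class ToolRegistry:
    _tools: Dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        if tool.name in self._tools:
            raise ValueError(f"tool {tool.name!r} already registered")
        self._tools[tool.name] = tool

    def execute(self, name: str, argument: str) -> str:
        return self._tools[name].run(argument)

registry = ToolRegistry()
registry.register(
    Tool("calculator", "Evaluate arithmetic", lambda s: str(eval(s, {"__builtins__": {}}, {})))
)
result = registry.execute("calculator", "2 + 3 * 4")
```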
### Memory Management

```python
from llamaagent.memory.vector_memory import VectorMemory

# Configure persistent memory
memory = VectorMemory(
    embedding_model="text-embedding-3-large",
    storage_path="./agent_memory",
)
agent.set_memory(memory)
```
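Vector memory works by storing (text, embedding) pairs and retrieving the entries closest to a query embedding. A toy, stdlib-only illustration of the retrieval step (real deployments use a proper embedding model and vector store):

```python
# Toy vector memory: cosine-similarity retrieval over in-process entries.
import math

class ToyVectorMemory:
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.entries.append((text, vector))

    def search(self, query, top_k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm
        ranked = sorted(self.entries, key=lambda e: cosine(query, e[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

memory = ToyVectorMemory()
memory.add("user prefers metric units", [1.0, 0.0, 0.2])
memory.add("user lives in Berlin", [0.0, 1.0, 0.1])
best = memory.search([0.9, 0.1, 0.2])
```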
### FastAPI Integration

```python
from llamaagent.api.main import create_app

# Create the web API
app = create_app()

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```
## Configuration

### Environment Variables

```bash
# LLM provider keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
COHERE_API_KEY=your_cohere_key

# Database configuration
DATABASE_URL=postgresql://user:pass@localhost/db
REDIS_URL=redis://localhost:6379

# Monitoring
ENABLE_METRICS=true
LOG_LEVEL=INFO
```
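Reading these variables in application code is straightforward with the standard library. The parsing helpers below are illustrative, not llamaagent internals; the variable names match the list above.

```python
# Stdlib-only sketch of loading settings from environment variables, with
# sensible defaults and boolean parsing for flags like ENABLE_METRICS.
import os

def load_settings(env=os.environ):
    return {
        "openai_api_key": env.get("OPENAI_API_KEY"),
        "database_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "enable_metrics": env.get("ENABLE_METRICS", "false").lower() in ("1", "true", "yes"),
        "log_level": env.get("LOG_LEVEL", "INFO").upper(),
    }

# Pass a plain dict to test parsing without touching the real environment.
settings = load_settings({"ENABLE_METRICS": "true", "LOG_LEVEL": "debug"})
```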
### Configuration File

```yaml
# config/default.yaml
agent:
  name: "ProductionAgent"
  max_iterations: 10
  timeout: 300

llm:
  provider: "openai"
  model: "gpt-4"
  temperature: 0.7
  max_tokens: 2000

tools:
  enabled: true
  timeout: 30

memory:
  enabled: true
  type: "vector"
  max_entries: 10000
```
## Deployment

### Docker

```bash
# Build the image
docker build -t llamaagent:latest .

# Run the container
docker run -p 8000:8000 -e OPENAI_API_KEY=your_key llamaagent:latest
```

### Kubernetes

```bash
# Deploy to Kubernetes
kubectl apply -f k8s/
```

### Docker Compose

```bash
# Full-stack deployment
docker-compose up -d
```
## API Reference

### Core Endpoints

- `POST /agents/execute` - Execute an agent task
- `GET /agents/{agent_id}/status` - Get agent status
- `POST /tools/execute` - Execute a tool directly
- `GET /health` - Health check endpoint
### OpenAI-Compatible API

```bash
# Chat completions
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
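The same request can be built from Python with only the standard library. The URL mirrors the curl example above; the snippet constructs the request but does not send it (uncomment the last lines to call a running server).

```python
# Build the chat-completions request with stdlib urllib; no network call is made here.
import json
import urllib.request

payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```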
## Testing

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=src --cov-report=html

# Run specific test categories
pytest tests/unit/ -v         # Unit tests
pytest tests/integration/ -v  # Integration tests
pytest tests/e2e/ -v          # End-to-end tests
```
## Monitoring and Observability

### Metrics
LlamaAgent provides comprehensive metrics for production monitoring:
- Request/response times
- Success/failure rates
- Token usage and costs
- Agent performance metrics
- Tool execution statistics
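One common way such counters and timings are collected in-process is a decorator that records call counts, failures, and wall-clock latency. The sketch below is illustrative, not llamaagent's metrics implementation; production setups typically export to a system like Prometheus.

```python
# Stdlib-only metrics sketch: a decorator that records calls, failures, and latency.
import time
from collections import defaultdict
from functools import wraps

METRICS = defaultdict(lambda: {"calls": 0, "failures": 0, "total_seconds": 0.0})

def instrumented(name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            METRICS[name]["calls"] += 1
            try:
                return fn(*args, **kwargs)
            except Exception:
                METRICS[name]["failures"] += 1
                raise
            finally:
                METRICS[name]["total_seconds"] += time.perf_counter() - start
        return wrapper
    return decorator

@instrumented("tool.calculator")
def add(a, b):
    return a + b

add(1, 2)
add(3, 4)
```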
### Logging

```python
import logging

from llamaagent.monitoring.logging import setup_logging

# Configure structured logging
setup_logging(level=logging.INFO, format="json")
```
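In practice, `format="json"` structured logging amounts to a `logging.Formatter` that emits one JSON object per record. A stdlib-only sketch of that idea (not llamaagent's implementation):

```python
# Minimal JSON log formatter: one JSON object per log record.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("llamaagent.example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("agent started")

# Format a record directly to inspect the output shape.
record = logging.LogRecord("demo", logging.INFO, "demo.py", 0, "task %s done", ("t1",), None)
line = JsonFormatter().format(record)
```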
### Health Checks

```bash
# Check system health
curl http://localhost:8000/health

# Detailed diagnostics
curl http://localhost:8000/diagnostics
```
## Security

### Authentication

```python
from llamaagent.security.authentication import APIKeyAuth

# Configure API key authentication
auth = APIKeyAuth(api_keys=["your-secret-key"])
app.add_middleware(auth)
```
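At its core, API-key authentication compares the presented key against the configured keys. The check below is a generic, stdlib-only sketch of that step (not the `APIKeyAuth` internals), using `hmac.compare_digest` to avoid timing side channels.

```python
# Constant-time API-key check.
import hmac

def is_authorized(presented_key, valid_keys):
    # hmac.compare_digest compares in constant time, so attackers cannot learn
    # key prefixes from response timing.
    return any(hmac.compare_digest(presented_key, key) for key in valid_keys)

ok = is_authorized("your-secret-key", ["your-secret-key"])
bad = is_authorized("wrong", ["your-secret-key"])
```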
### Input Validation

```python
from llamaagent.security.validator import InputValidator

# Validate and sanitize inputs
validator = InputValidator()
safe_input = validator.sanitize(user_input)
```
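To give a concrete idea of what sanitization can involve, here is a minimal standalone version that strips control characters and caps input length. `InputValidator` may apply different or additional rules; this is only a sketch.

```python
# Minimal sanitizer: drop control characters (keeping newlines/tabs), cap length.
import unicodedata

def sanitize(text, max_length=1000):
    cleaned = "".join(
        ch for ch in text
        if unicodedata.category(ch)[0] != "C" or ch in "\n\t"
    )
    return cleaned[:max_length].strip()

safe = sanitize("hello\x00 world\x07", max_length=50)
```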
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
### Development Setup

```bash
# Clone the repository
git clone https://github.com/llamasearchai/llamaagent.git
cd llamaagent

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```
### Code Quality

```bash
# Format code
black src/ tests/
isort src/ tests/

# Lint code
ruff check src/ tests/

# Type checking
mypy src/

# Security scan
bandit -r src/
```
## Examples

### Basic Agent

```python
# examples/basic_agent.py
import asyncio

from llamaagent.agents.base import AgentConfig
from llamaagent.agents.react import ReactAgent
from llamaagent.llm.providers.mock_provider import MockProvider
from llamaagent.types import TaskInput


async def main():
    config = AgentConfig(name="BasicAgent")
    provider = MockProvider(model_name="test-model")
    agent = ReactAgent(config=config, llm_provider=provider)

    task = TaskInput(
        id="example-1",
        task="Explain quantum computing in simple terms",
    )
    result = await agent.arun(task)
    print(f"Agent Response: {result.content}")


if __name__ == "__main__":
    asyncio.run(main())
```
### Multi-Agent System

```python
# examples/multi_agent.py
import asyncio

from llamaagent.orchestration.adaptive_orchestra import AdaptiveOrchestra
from llamaagent.spawning.agent_spawner import AgentSpawner


async def main():
    spawner = AgentSpawner()
    orchestra = AdaptiveOrchestra()

    # Spawn multiple specialized agents
    research_agent = await spawner.spawn_agent("researcher")
    analysis_agent = await spawner.spawn_agent("analyst")
    writer_agent = await spawner.spawn_agent("writer")

    # Orchestrate a collaborative task
    result = await orchestra.execute_collaborative_task(
        task="Write a comprehensive report on AI safety",
        agents=[research_agent, analysis_agent, writer_agent],
    )
    print(f"Collaborative Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```
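The fan-out/fan-in pattern behind this kind of orchestration can be shown with plain `asyncio`: run several agents concurrently and gather their contributions. The agents below are stand-in coroutines; `AgentSpawner` and `AdaptiveOrchestra` are not used.

```python
# Fan-out/fan-in with stdlib asyncio: run stand-in agents concurrently.
import asyncio

async def fake_agent(role, task):
    await asyncio.sleep(0)  # yield control, as a real agent awaiting an LLM would
    return f"{role}: handled {task!r}"

async def orchestrate(task, roles):
    # asyncio.gather preserves input order, so results line up with roles.
    return await asyncio.gather(*(fake_agent(role, task) for role in roles))

outputs = asyncio.run(orchestrate("AI safety report", ["researcher", "analyst", "writer"]))
```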
## Benchmarks
LlamaAgent includes comprehensive benchmarking against industry standards:
- GAIA Benchmark: General AI Assistant evaluation
- SPRE Evaluation: Structured Problem Reasoning
- Custom Benchmarks: Domain-specific performance testing
```bash
# Run benchmarks
python -m llamaagent.benchmarks.run_all --provider openai --model gpt-4
```
## Roadmap
- Multi-modal agent support (vision, audio)
- Advanced reasoning patterns (Tree of Thoughts, Graph of Thoughts)
- Federated learning capabilities
- Enhanced security features
- Performance optimizations
- Extended tool ecosystem
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Support
- Documentation: https://llamaagent.readthedocs.io
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: nikjois@llamasearch.ai
## Acknowledgments
Built with love by Nik Jois and the LlamaSearch AI team.
Special thanks to the open-source community and all contributors who make this project possible.
## Project details

### Download files

Download the file for your platform. A source distribution and a built distribution (wheel) are available.
### Source distribution: llamaagent-0.2.4.tar.gz

- Size: 747.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9959b60fb82cac8573dacb12a107ca1534da5f7018ceff0bfb43903f5a21a176` |
| MD5 | `a8860bda81f452c72830ac08c70501fc` |
| BLAKE2b-256 | `fc6c16bfe2a47c4f365e7004929383cc44e38ac4b91bcb775df2a24b718aead7` |
### Built distribution: llamaagent-0.2.4-py3-none-any.whl

- Size: 7.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `56f9b05f068bbfd057b17e1d19793fcb6dea3daf6ab0b04a05e11a2115f1b467` |
| MD5 | `1317c64e8dbcfc6d47b9be81c20e2982` |
| BLAKE2b-256 | `1d9a804148070d413d5c1359c835e3d7c93729d4fd45e097fc9c48b0800b050e` |