Agent Framework Library
A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Features automatic multi-provider support (OpenAI, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.
🎉 NEW: PyPI Package - The Agent Framework is now available as a pip-installable package from PyPI, making it easy to integrate into any Python project.
Installation
```bash
# Install from PyPI (recommended)
uv pip install agent-framework-lib

# Install with development dependencies (quoted so shells don't glob the extras)
uv pip install "agent-framework-lib[dev]"

# Install from local source (development)
uv pip install -e .
```
🚀 Features
Core Capabilities
- Multi-Provider Support: Automatic routing between OpenAI and Gemini APIs
- Dynamic System Prompts: Session-based system prompt control
- Agent Configuration: Runtime model parameter adjustment
- Session Management: Persistent conversation handling with structured workflow
- Session Workflow: Initialize/end session lifecycle with immutable configurations
- User Feedback System: Message-level thumbs up/down and session-level flags
- Media Detection: Automatic detection and handling of generated images/videos
- Web Interface: Built-in test application with rich UI controls
- Debug Logging: Comprehensive logging for system prompts and model configuration
Advanced Features
- Model Auto-Detection: Automatic provider selection based on model name
- Parameter Filtering: Provider-specific parameter validation (e.g., Gemini doesn't support frequency_penalty)
- Configuration Validation: Built-in validation and status endpoints
- Correlation & Conversation Tracking: Link sessions across agents and track individual exchanges
- Manager Agent Support: Built-in coordination features for multi-agent workflows
- Persistent Session Storage: MongoDB integration for scalable session persistence (see MongoDB Session Storage Guide)
- Agent Identity Support: Multi-agent deployment support with automatic agent identification in MongoDB (see Agent Identity Guide)
- Reverse Proxy Support: Automatic path prefix detection for deployment behind reverse proxies (see Reverse Proxy Setup Guide)
- Backward Compatibility: Existing implementations continue to work
🚀 Quick Start
Option 1: AutoGen-Based Agents (Recommended for AutoGen)
The fastest way to create AutoGen agents with all boilerplate handled automatically:
```python
from typing import Any, Dict, List

from agent_framework import AutoGenBasedAgent, create_basic_agent_server

class MyAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return "You are a helpful assistant that can perform calculations."

    def get_agent_tools(self) -> List[callable]:
        return [self.add, self.subtract]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "Math Assistant",
            "description": "An agent that helps with basic math"
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create and configure the AutoGen agent."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="math_assistant",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=250,
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def add(a: float, b: float) -> float:
        """Add two numbers together."""
        return a + b

    @staticmethod
    def subtract(a: float, b: float) -> float:
        """Subtract one number from another."""
        return a - b

# Start server with one line - includes AutoGen, MCP tools, streaming, etc.
create_basic_agent_server(MyAgent, port=8000)
```
✨ Benefits:
- 95% less code - No AutoGen boilerplate needed
- Built-in streaming - Real-time responses with tool visualization
- MCP integration - Add external tools easily
- Session management - Automatic state persistence
- 10-15 minutes to create a full-featured agent
- Full control over AutoGen agent type and configuration
Option 2: Generic Agent Interface
For non-AutoGen agents or custom implementations:
```python
from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server

class MyAgent(AgentInterface):
    async def get_metadata(self):
        return {"name": "My Agent", "version": "1.0.0"}

    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        return StructuredAgentOutput(response_text=f"Hello! You said: {agent_input.query}")

# Start server with one line - handles server setup, routing, and all framework features
create_basic_agent_server(MyAgent, port=8000)
```
See docs/autogen_agent_guide.md for the complete AutoGen development guide.
📋 Table of Contents
- Features
- Quick Start
- Configuration
- API Reference
- Client Examples
- Web Interface
- Advanced Usage
- Development
- AutoGen Development Guide
- Authentication
- Contributing
- License
- Support
🛠️ Development
Traditional Development Setup
For development within the AgentFramework repository:
1. Installation
```bash
# Clone the repository
git clone <your-repository-url>
cd AgentFramework

# Install dependencies (extras quoted so shells don't glob them)
uv venv
uv pip install -e ".[dev]"
```
2. Configuration
```bash
# Copy configuration template
cp env-template.txt .env

# Edit .env with your API keys
```
Minimal .env setup:
```env
# At least one API key required
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here

# Set default model
DEFAULT_MODEL=gpt-4

# Authentication (optional - set to true to enable)
REQUIRE_AUTH=false
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=password
API_KEYS=sk-your-secure-api-key-123
```
3. Start the Server
Option A: Using convenience function (recommended for external projects)
```python
# In your agent file
from agent_framework import create_basic_agent_server

create_basic_agent_server(MyAgent, port=8000)
```
Option B: Traditional method
```bash
# Start the development server
uv run python agent.py

# Or using uvicorn directly
export AGENT_CLASS_PATH="agent:Agent"
uvicorn server:app --reload --host 0.0.0.0 --port 8000
```
4. Test the Agent
Open your browser to http://localhost:8000/ui or make API calls:
```bash
# Without authentication (REQUIRE_AUTH=false)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'

# With API key authentication (REQUIRE_AUTH=true)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -H "X-API-Key: sk-your-secure-api-key-123" \
  -d '{"query": "Hello, how are you?"}'

# With Basic authentication (REQUIRE_AUTH=true)
curl -u admin:password -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'
```
Project Structure
```text
AgentFramework/
├── agent_framework/        # Main framework package
│   ├── __init__.py         # Library exports and convenience functions
│   ├── agent_interface.py  # Abstract agent interface
│   ├── base_agent.py       # AutoGen-based agent implementation
│   ├── server.py           # FastAPI server
│   ├── model_config.py     # Multi-provider configuration
│   ├── model_clients.py    # Model client factory
│   └── session_storage.py  # Session storage implementations
├── examples/               # Usage examples
├── docs/                   # Documentation
├── test_app.html           # Web interface
├── env-template.txt        # Configuration template
└── pyproject.toml          # Package configuration
```
Creating Custom Agents
Option 1: AutoGen-Based Agents (Recommended)
For AutoGen-powered agents, inherit from AutoGenBasedAgent for maximum productivity:
```python
from typing import Any, Dict, List

from agent_framework import AutoGenBasedAgent, create_basic_agent_server

class MyAutoGenAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return """You are a specialized agent for [your domain].
        You can [list capabilities]."""

    def get_agent_tools(self) -> List[callable]:
        return [self.my_tool, self.another_tool]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "My AutoGen Agent",
            "description": "A specialized agent with AutoGen superpowers",
            "capabilities": {
                "streaming": True,
                "tool_use": True,
                "mcp_integration": True
            }
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create and configure the AutoGen agent."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="my_agent",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=300,
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def my_tool(input_data: str) -> str:
        """Your custom tool implementation."""
        return f"Processed: {input_data}"

# Start server with full AutoGen capabilities
create_basic_agent_server(MyAutoGenAgent, port=8000)
```
✨ What you get automatically:
- Real-time streaming responses
- MCP tools integration
- Session state management
- Tool call visualization
- Error handling & logging
- Special block parsing (forms, charts, etc.)
Option 2: Generic AgentInterface
For non-AutoGen agents or when you need full control:
```python
from typing import Optional

from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server

class MyCustomAgent(AgentInterface):
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:
        # Implement your logic here
        ...

    async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):
        # Implement streaming logic
        ...

    async def get_metadata(self):
        return {
            "name": "My Custom Agent",
            "description": "A custom agent implementation",
            "capabilities": {"streaming": True}
        }

    def get_system_prompt(self) -> Optional[str]:
        return "Your custom system prompt here..."

# Start server
create_basic_agent_server(MyCustomAgent, port=8000)
```
Testing
The project includes a comprehensive test suite built with pytest. The tests are located in the tests/ directory and are configured to run in a self-contained environment.
For detailed instructions on how to set up the test environment and run the tests, please refer to the README file inside the test directory:
Agent Framework Test Suite Guide
A brief overview of the steps:
1. Navigate to the test directory: `cd tests`
2. Create a virtual environment: `uv venv`
3. Activate it: `source .venv/bin/activate`
4. Install dependencies: `uv pip install -e .. && uv pip install -r requirements.txt`
5. Run the tests: `pytest`
Debug Logging
Enable debug logging to see detailed system prompt and configuration information:

```bash
export AGENT_LOG_LEVEL=DEBUG
uv run python agent.py
```
Debug logs include:
- Model configuration loading and validation
- System prompt handling and persistence
- Agent configuration merging and application
- Provider selection and parameter filtering
- Client creation and model routing
⚙️ Configuration
Session Storage Configuration
Configure persistent session storage (optional):
```env
# === Session Storage ===
# Use "memory" (default) for in-memory storage or "mongodb" for persistent storage
SESSION_STORAGE_TYPE=memory

# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
MONGODB_COLLECTION_NAME=sessions
```
For detailed MongoDB setup and configuration, see the MongoDB Session Storage Guide.
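As a sanity check before deployment, these variables can be validated with a few lines of standalone Python. The helper below is illustrative and not part of the framework; its defaults mirror the template above:

```python
import os

def load_session_storage_config(env=None):
    """Validate the session-storage variables described above (illustrative helper)."""
    env = os.environ if env is None else env
    storage_type = env.get("SESSION_STORAGE_TYPE", "memory")
    if storage_type not in ("memory", "mongodb"):
        raise ValueError(f"Unsupported SESSION_STORAGE_TYPE: {storage_type}")
    config = {"type": storage_type}
    if storage_type == "mongodb":
        conn = env.get("MONGODB_CONNECTION_STRING")
        if not conn:
            raise ValueError("MONGODB_CONNECTION_STRING is required for mongodb storage")
        config.update(
            connection_string=conn,
            database=env.get("MONGODB_DATABASE_NAME", "agent_sessions"),
            collection=env.get("MONGODB_COLLECTION_NAME", "sessions"),
        )
    return config

print(load_session_storage_config({"SESSION_STORAGE_TYPE": "memory"}))
```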
📚 API Reference
Core Endpoints
Send Message
Send a message to the agent and receive a complete response.
Endpoint: POST /message
Request Body:
```json
{
  "query": "Your message here",
  "parts": [],
  "system_prompt": "Optional custom system prompt",
  "agent_config": {
    "temperature": 0.8,
    "max_tokens": 1000,
    "model_selection": "gpt-4"
  },
  "session_id": "optional-session-id",
  "correlation_id": "optional-correlation-id-for-linking-sessions"
}
```
Response:
```json
{
  "response_text": "Agent's response",
  "parts": [
    {
      "type": "text",
      "text": "Agent's response"
    }
  ],
  "session_id": "generated-or-provided-session-id",
  "user_id": "user1",
  "correlation_id": "correlation-id-if-provided",
  "conversation_id": "unique-id-for-this-exchange"
}
```
Session Workflow (NEW)
Initialize Session: POST /init
```jsonc
{
  "user_id": "string",         // required
  "correlation_id": "string",  // optional
  "session_id": "string",      // optional (auto-generated if not provided)
  "data": { ... },             // optional
  "configuration": {           // required
    "system_prompt": "string",
    "model_name": "string",
    "model_config": {
      "temperature": 0.7,
      "token_limit": 1000
    }
  }
}
```
Initializes a new chat session with immutable configuration. Must be called before any chat interactions. Returns the session configuration and generated session ID if not provided.
End Session: POST /end
```json
{
  "session_id": "string"
}
```
Closes a session and prevents further interactions. Persists final session state and locks feedback system.
Submit Message Feedback: POST /feedback/message
{
"session_id": "string",
"message_id": "string",
"feedback": "up" | "down"
}
Submit thumbs up/down feedback for a specific message. Can only be submitted once per message.
Submit/Update Session Flag: POST|PUT /feedback/flag
```json
{
  "session_id": "string",
  "flag_message": "string"
}
```
Submit or update a session-level flag message. Editable while session is active, locked after session ends.
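For illustration, here is how a flag request could be constructed with only the standard library. The helper name and local URL are assumptions, and authentication headers are omitted for brevity:

```python
import json
import urllib.request

def build_flag_request(session_id: str, flag_message: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build (but do not send) the POST /feedback/flag request described above."""
    payload = {"session_id": session_id, "flag_message": flag_message}
    return urllib.request.Request(
        f"{base_url}/feedback/flag",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_flag_request("session-123", "Agent kept looping on tool calls")
# Send with: urllib.request.urlopen(req)
print(req.method, req.full_url)
```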
Session Management
List Sessions: GET /sessions
```bash
curl http://localhost:8000/sessions
# Response: ["session1", "session2", ...]
```
Get History: GET /sessions/{session_id}/history
```bash
curl http://localhost:8000/sessions/abc123/history
```
Find Sessions by Correlation ID: GET /sessions/by-correlation/{correlation_id}
```bash
curl http://localhost:8000/sessions/by-correlation/task-123
# Response: [{"user_id": "user1", "session_id": "abc123", "correlation_id": "task-123"}]
```
Correlation & Conversation Tracking
The framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.
Correlation ID Support
Purpose: Link multiple sessions across different agents that are part of the same larger task or workflow.
Usage:
```python
import requests

# Start a task with a correlation ID (client is the AgentClient shown under Client Examples)
response1 = client.send_message(
    "Analyze this data set",
    correlation_id="data-analysis-task-001"
)

# Continue the task in another session/agent with the same correlation ID
response2 = client.send_message(
    "Generate visualizations for the analysis",
    correlation_id="data-analysis-task-001"  # Same correlation ID
)

# Find all sessions related to this task
sessions = requests.get(
    "http://localhost:8000/sessions/by-correlation/data-analysis-task-001"
).json()
```
Key Features:
- Optional field: Can be set when sending messages or creating sessions
- Persistent: Correlation ID is maintained throughout the session lifecycle
- Cross-agent: Multiple agents can share the same correlation ID
- Searchable: Query all sessions by correlation ID
Conversation ID Support
Purpose: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.
Key Features:
- Automatic generation: Each request/reply pair gets a unique conversation ID
- Shared between request/reply: User message and agent response share the same conversation ID
- Database-ready: Designed for storing individual exchanges in databases
- Analytics-friendly: Enables detailed conversation flow analysis
Example Response with IDs:
```json
{
  "response_text": "Here's the analysis...",
  "session_id": "session-abc-123",
  "user_id": "data-scientist-1",
  "correlation_id": "data-analysis-task-001",
  "conversation_id": "conv-uuid-456-789"
}
```
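Because the request and its reply share one conversation ID, a flat session transcript can be folded into exchanges with a small dictionary grouping. The message shape below is assumed for illustration:

```python
from collections import defaultdict

def group_exchanges(messages):
    """Group messages into request/reply exchanges keyed by conversation_id."""
    exchanges = defaultdict(list)
    for message in messages:
        exchanges[message["conversation_id"]].append(message)
    return dict(exchanges)

history = [
    {"conversation_id": "conv-1", "role": "user", "text": "Analyze this data set"},
    {"conversation_id": "conv-1", "role": "agent", "text": "Here's the analysis..."},
    {"conversation_id": "conv-2", "role": "user", "text": "Now chart the results"},
]
print(len(group_exchanges(history)))  # two distinct exchanges
```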
Manager Agent Coordination
These features enable sophisticated multi-agent workflows:
```python
import uuid

class ManagerAgent:
    def __init__(self):
        self.correlation_id = f"task-{uuid.uuid4()}"

    async def coordinate_task(self, task_description):
        # Step 1: Data analysis agent
        analysis_response = await self.send_to_agent(
            "data-agent",
            f"Analyze: {task_description}",
            correlation_id=self.correlation_id
        )

        # Step 2: Visualization agent
        viz_response = await self.send_to_agent(
            "viz-agent",
            f"Create charts for: {analysis_response}",
            correlation_id=self.correlation_id
        )

        # Step 3: Find all related sessions
        related_sessions = await self.get_sessions_by_correlation(self.correlation_id)

        return {
            "task_id": self.correlation_id,
            "sessions": related_sessions,
            "final_result": viz_response
        }
```
Web Interface Features
The test application includes full support for correlation tracking:
- Correlation ID Input: Set correlation IDs when sending messages
- Session Finder: Search for all sessions sharing a correlation ID
- ID Display: Shows correlation and conversation IDs in chat history
- Visual Indicators: Clear display of tracking information
Configuration Endpoints
Get Model Configuration: GET /config/models
```json
{
  "default_model": "gpt-4",
  "configuration_status": {
    "valid": true,
    "warnings": [],
    "errors": []
  },
  "supported_models": {
    "openai": ["gpt-4", "gpt-3.5-turbo"],
    "gemini": ["gemini-1.5-pro", "gemini-pro"]
  },
  "supported_providers": {
    "openai": true,
    "gemini": true
  }
}
```
Validate Model: GET /config/validate/{model_name}
```json
{
  "model": "gpt-4",
  "provider": "openai",
  "supported": true,
  "api_key_configured": true,
  "client_available": true,
  "issues": []
}
```
Get System Prompt: GET /system-prompt
```json
{
  "system_prompt": "You are a helpful AI assistant that helps users accomplish their tasks efficiently..."
}
```
Returns the default system prompt configured for the agent. Returns 404 if no system prompt is configured.
Response (404 if not configured):
```json
{
  "detail": "System prompt not configured"
}
```
Agent Configuration Parameters
| Parameter | Type | Range | Description | Providers |
|---|---|---|---|---|
| `temperature` | float | 0.0 to 2.0 | Controls randomness | OpenAI, Gemini |
| `max_tokens` | integer | 1+ | Maximum response tokens | OpenAI, Gemini |
| `top_p` | float | 0.0 to 1.0 | Nucleus sampling | OpenAI, Gemini |
| `frequency_penalty` | float | -2.0 to 2.0 | Reduce frequent tokens | OpenAI only |
| `presence_penalty` | float | -2.0 to 2.0 | Reduce any repetition | OpenAI only |
| `stop_sequences` | array | - | Custom stop sequences | OpenAI, Gemini |
| `timeout` | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |
| `max_retries` | integer | 0+ | Retry attempts | OpenAI, Gemini |
| `model_selection` | string | - | Override model for session | OpenAI, Gemini |
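The provider columns imply a filtering step before parameters reach a model client. Below is a minimal sketch of such filtering, with the key sets taken from the table; the function is illustrative, not the framework's actual API:

```python
OPENAI_PARAMS = {
    "temperature", "max_tokens", "top_p", "frequency_penalty",
    "presence_penalty", "stop_sequences", "timeout", "max_retries",
    "model_selection",
}
# Gemini accepts the same set minus the OpenAI-only penalties.
GEMINI_PARAMS = OPENAI_PARAMS - {"frequency_penalty", "presence_penalty"}

def filter_agent_config(config: dict, provider: str):
    """Drop parameters the target provider does not support (illustrative)."""
    allowed = OPENAI_PARAMS if provider == "openai" else GEMINI_PARAMS
    kept = {k: v for k, v in config.items() if k in allowed}
    dropped = sorted(set(config) - set(kept))
    return kept, dropped

kept, dropped = filter_agent_config(
    {"temperature": 0.8, "frequency_penalty": 0.5}, "gemini")
print(dropped)  # ['frequency_penalty']
```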
💻 Client Examples
Python Client
```python
import requests

class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
        # Add basic auth if required
        self.session.auth = ("admin", "password")

    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get the complete response."""
        payload = {
            "query": message,
            "parts": []
        }
        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id

        response = self.session.post(
            f"{self.base_url}/message",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):
        """Initialize a new session with configuration."""
        payload = {
            "user_id": user_id,
            "configuration": configuration
        }
        if correlation_id:
            payload["correlation_id"] = correlation_id
        if session_id:
            payload["session_id"] = session_id
        if data:
            payload["data"] = data

        response = self.session.post(
            f"{self.base_url}/init",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def end_session(self, session_id):
        """End a session."""
        response = self.session.post(
            f"{self.base_url}/end",
            json={"session_id": session_id}
        )
        response.raise_for_status()
        return response.ok

    def submit_feedback(self, session_id, message_id, feedback):
        """Submit feedback for a message."""
        response = self.session.post(
            f"{self.base_url}/feedback/message",
            json={
                "session_id": session_id,
                "message_id": message_id,
                "feedback": feedback
            }
        )
        response.raise_for_status()
        return response.ok

    def get_model_config(self):
        """Get available models and configuration."""
        response = self.session.get(f"{self.base_url}/config/models")
        response.raise_for_status()
        return response.json()

# Usage example
client = AgentClient()

# Initialize session with configuration
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a creative writing assistant",
        "model_name": "gpt-4",
        "model_config": {
            "temperature": 1.2,
            "token_limit": 500
        }
    },
    correlation_id="creative-writing-session-001"
)
session_id = session_data["session_id"]

# Send messages using the initialized session
response = client.send_message(
    "Write a creative story about space exploration",
    session_id=session_id
)
print(response["response_text"])

# Submit feedback on the response
client.submit_feedback(session_id, response["conversation_id"], "up")

# Continue the conversation
response2 = client.send_message("Add more details about the characters", session_id=session_id)
print(response2["response_text"])

# End session when done
client.end_session(session_id)
```
JavaScript Client
```javascript
class AgentClient {
    constructor(baseUrl = 'http://localhost:8000') {
        this.baseUrl = baseUrl;
        this.auth = btoa('admin:password'); // Basic auth
    }

    async sendMessage(message, options = {}) {
        const payload = {
            query: message,
            parts: [],
            ...options
        };
        const response = await fetch(`${this.baseUrl}/message`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Basic ${this.auth}`
            },
            body: JSON.stringify(payload)
        });
        if (!response.ok) {
            throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }
        return response.json();
    }

    async initSession(userId, configuration, options = {}) {
        const payload = {
            user_id: userId,
            configuration,
            ...options
        };
        const response = await fetch(`${this.baseUrl}/init`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Basic ${this.auth}`
            },
            body: JSON.stringify(payload)
        });
        if (!response.ok) {
            throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }
        return response.json();
    }

    async endSession(sessionId) {
        const response = await fetch(`${this.baseUrl}/end`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Basic ${this.auth}`
            },
            body: JSON.stringify({ session_id: sessionId })
        });
        if (!response.ok) {
            throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }
        return response.ok;
    }

    async submitFeedback(sessionId, messageId, feedback) {
        const response = await fetch(`${this.baseUrl}/feedback/message`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Basic ${this.auth}`
            },
            body: JSON.stringify({
                session_id: sessionId,
                message_id: messageId,
                feedback
            })
        });
        return response.ok;
    }

    async getModelConfig() {
        const response = await fetch(`${this.baseUrl}/config/models`, {
            headers: { 'Authorization': `Basic ${this.auth}` }
        });
        return response.json();
    }
}

// Usage example
const client = new AgentClient();

// Initialize session with configuration
const sessionInit = await client.initSession('user123', {
    system_prompt: 'You are a helpful coding assistant',
    model_name: 'gpt-4',
    model_config: {
        temperature: 0.7,
        token_limit: 1000
    }
}, {
    correlation_id: 'coding-help-001'
});

// Send messages using the initialized session
const response = await client.sendMessage('Help me debug this Python code', {
    session_id: sessionInit.session_id
});
console.log(response.response_text);

// Submit feedback
await client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');

// End session when done
await client.endSession(sessionInit.session_id);
```
curl Examples
```bash
# Basic message with correlation ID
curl -X POST http://localhost:8000/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Hello, world!",
    "correlation_id": "greeting-task-001",
    "agent_config": {
      "temperature": 0.8,
      "model_selection": "gpt-4"
    }
  }'

# Initialize session
curl -X POST http://localhost:8000/init \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "correlation_id": "poetry-session-001",
    "configuration": {
      "system_prompt": "You are a talented poet",
      "model_name": "gpt-4",
      "model_config": {
        "temperature": 1.5,
        "token_limit": 200
      }
    }
  }'

# Submit feedback for a message
curl -X POST http://localhost:8000/feedback/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123",
    "message_id": "msg-456",
    "feedback": "up"
  }'

# End session
curl -X POST http://localhost:8000/end \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123"
  }'

# Get model configuration
curl http://localhost:8000/config/models -u admin:password

# Validate model support
curl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password

# Get system prompt
curl http://localhost:8000/system-prompt -u admin:password

# Find sessions by correlation ID
curl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password
```
🌐 Web Interface
- TODO
🔧 Advanced Usage
System Prompt Configuration
The framework supports configurable system prompts both at the server level and per-session:
Server-Level System Prompt
Agents can provide a default system prompt via the get_system_prompt() method:
```python
from typing import Optional

from agent_framework import AgentInterface

class MyAgent(AgentInterface):
    def get_system_prompt(self) -> Optional[str]:
        return """
        You are a helpful coding assistant specializing in Python.
        Always provide:
        1. Working code examples
        2. Clear explanations
        3. Best practices
        4. Error handling
        """
```
Accessing System Prompt via API
```python
import requests

# Get the default system prompt from the server
response = requests.get("http://localhost:8000/system-prompt")
if response.status_code == 200:
    system_prompt = response.json()["system_prompt"]
else:
    print("No system prompt configured")
```
Per-Session System Prompts
```python
# Set a system prompt for a specific use case
custom_prompt = """
You are a creative writing assistant.
Focus on storytelling and narrative structure.
"""

response = client.send_message(
    "Help me write a short story",
    system_prompt=custom_prompt
)
```
Web Interface System Prompt Management
The web interface provides comprehensive system prompt management:
- Auto-loading: Default system prompt loads automatically on new sessions
- Session persistence: Each session remembers its custom system prompt
- Reset functionality: "🔄 Reset to Default" button restores server default
- Manual reload: Refresh system prompt from server without losing session data
🤖 AutoGen Development Guide
The Agent Framework provides a comprehensive base class for AutoGen agents that eliminates 95% of boilerplate code. This allows you to focus on your agent's specific logic rather than AutoGen integration details.
Quick Start with AutoGen
```python
from typing import Any, Dict, List

from agent_framework import AutoGenBasedAgent, create_basic_agent_server

class DataAnalysisAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return """You are a data analysis expert.
        You can analyze datasets, create visualizations, and generate insights.
        Always provide clear explanations and cite your sources."""

    def get_agent_tools(self) -> List[callable]:
        return [self.analyze_data, self.create_chart, self.summarize_findings]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "Data Analysis Agent",
            "description": "Expert in statistical analysis and data visualization",
            "version": "1.0.0",
            "capabilities": {
                "data_analysis": True,
                "visualization": True,
                "statistical_modeling": True
            }
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create an AssistantAgent optimized for data analysis."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="data_analyst",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=400,  # More iterations for complex analysis
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def analyze_data(dataset: str, analysis_type: str = "descriptive") -> str:
        """Analyze a dataset with the specified analysis type."""
        # Your data analysis logic here
        return f"Analysis complete for {dataset} using {analysis_type} methods"

    @staticmethod
    def create_chart(data: str, chart_type: str = "bar") -> str:
        """Create a chart from data."""
        # Return chart configuration
        return f'```chart\n{{"type": "chartjs", "chartConfig": {{"type": "{chart_type}"}}}}\n```'

# Start server - includes AutoGen, streaming, MCP tools, state management
create_basic_agent_server(DataAnalysisAgent, port=8000)
```
What AutoGenBasedAgent Provides
✅ Complete AutoGen Integration:
- AssistantAgent setup and lifecycle management
- Model client factory integration
- AutoGen agent configuration
✅ Advanced Features:
- Real-time streaming with event handling
- MCP (Model Context Protocol) tools integration
- Session management and state persistence
- Special block parsing (forms, charts, tables, options)
- Tool call visualization and debugging
✅ Error Handling:
- Robust error handling and logging
- Graceful degradation for failed components
- Comprehensive debugging information
Adding MCP Tools
```python
from typing import List

from autogen_ext.tools.mcp import StdioServerParams

class AdvancedAgent(AutoGenBasedAgent):
    # ... implement required methods ...

    def get_mcp_server_params(self) -> List[StdioServerParams]:
        """Configure external MCP tools."""
        return [
            # Python execution server
            StdioServerParams(
                command='deno',
                args=['run', '-N', '-R=node_modules', '-W=node_modules',
                      '--node-modules-dir=auto', 'jsr:@pydantic/mcp-run-python', 'stdio'],
                read_timeout_seconds=120
            ),
            # File system access server
            StdioServerParams(
                command='npx',
                args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
                read_timeout_seconds=60
            )
        ]
```
Development Benefits
📉 95% Code Reduction:
- Before: 970+ lines of boilerplate per agent
- After: 40-60 lines for a complete agent
⚡ Faster Development:
- Before: 2-3 hours to create new agent
- After: 10-15 minutes to create new agent
🔧 Better Maintainability:
- Framework updates benefit all agents automatically
- Consistent behavior across all AutoGen agents
- Single source of truth for AutoGen integration
Complete Documentation
For comprehensive documentation, examples, and best practices, see:
- AutoGen Agent Development Guide - Complete tutorial with examples
- AutoGen Refactoring Summary - Architecture and benefits overview
Model-Specific Configuration
```python
# OpenAI-specific configuration
openai_config = {
    "model_selection": "gpt-4",
    "temperature": 0.7,
    "frequency_penalty": 0.5,  # OpenAI only
    "presence_penalty": 0.3    # OpenAI only
}

# Gemini-specific configuration
gemini_config = {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 1000
    # Note: frequency_penalty is not supported by Gemini
}
```
Session Persistence
```python
# Start a conversation with custom settings
response1 = client.send_message(
    "Let's start a coding session",
    system_prompt="You are my coding pair programming partner",
    config={"temperature": 0.3}
)
session_id = response1["session_id"]

# Continue the conversation - settings persist
response2 = client.send_message(
    "Help me debug this function",
    session_id=session_id
)

# Override settings for this message only
response3 = client.send_message(
    "Now be creative and suggest alternatives",
    session_id=session_id,
    config={"temperature": 1.5}  # Temporary override
)
```
Multi-Modal Support
# Send image with message
payload = {
"query": "What's in this image?",
"parts": [
{
"type": "image_url",
"image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
}
]
}
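Building the base64 data URL for an image part takes only the standard library. The helper below is illustrative, with the part shape taken from the example above:

```python
import base64

def image_part_from_bytes(data: bytes, mime: str = "image/jpeg") -> dict:
    """Encode raw image bytes as an image_url part (illustrative helper)."""
    encoded = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}}

part = image_part_from_bytes(b"\x89PNG...", mime="image/png")
print(part["image_url"]["url"][:22])  # data:image/png;base64,
```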
🔒 Authentication
The framework supports two authentication methods that can be used simultaneously:
1. Basic Authentication (Username/Password)
HTTP Basic Authentication using username and password credentials.
Configuration:
```env
# Enable authentication
REQUIRE_AUTH=true

# Basic Auth credentials
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
```
Usage Examples:
```bash
# cURL with Basic Auth
curl -u admin:password http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests
import requests

response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    auth=("admin", "password")
)
```
2. API Key Authentication
More secure option for API clients using bearer tokens or X-API-Key headers.
Configuration:
```env
# Enable authentication
REQUIRE_AUTH=true

# API Keys (comma-separated list of valid keys)
API_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789
```
Usage Examples:
# cURL with Bearer Token
curl -H "Authorization: Bearer sk-your-secure-key-123" \
http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"query": "Hello!"}'
# cURL with X-API-Key Header
curl -H "X-API-Key: sk-your-secure-key-123" \
http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"query": "Hello!"}'
# Python requests with Bearer Token
import requests
headers = {
"Authorization": "Bearer sk-your-secure-key-123",
"Content-Type": "application/json"
}
response = requests.post(
"http://localhost:8000/message",
json={"query": "Hello!"},
headers=headers
)
# Python requests with X-API-Key
headers = {
"X-API-Key": "sk-your-secure-key-123",
"Content-Type": "application/json"
}
response = requests.post(
"http://localhost:8000/message",
json={"query": "Hello!"},
headers=headers
)
Authentication Priority
The framework tries authentication methods in this order:
- API Key via Bearer Token (Authorization: Bearer <key>)
- API Key via X-API-Key Header (X-API-Key: <key>)
- Basic Authentication (username/password)
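The check order above can be sketched in a few lines. The helper names (`authenticate`, `valid_api_keys`, `check_basic`) are assumptions for illustration, not the framework's internals:

```python
# Illustrative sketch of the documented authentication priority.
def authenticate(headers: dict, valid_api_keys: set, check_basic) -> bool:
    # 1. API key via Bearer token
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth.removeprefix("Bearer ") in valid_api_keys
    # 2. API key via X-API-Key header
    if "X-API-Key" in headers:
        return headers["X-API-Key"] in valid_api_keys
    # 3. Fall back to HTTP Basic Authentication
    return check_basic(auth)
```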
Python Client Library Support
from AgentClient import AgentClient
# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")
# Using API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})
# Send authenticated request
response = client.send_message("Hello, authenticated world!")
Web Interface Authentication
The web interface (/testapp) supports both authentication methods. Update the JavaScript client:
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;
// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
Security Best Practices
- Use Strong API Keys: Generate cryptographically secure random keys
- Rotate Keys Regularly: Update API keys periodically
- Environment Variables: Never hardcode credentials in source code
- HTTPS Only: Always use HTTPS in production to protect credentials
- Minimize Key Scope: Use different keys for different applications/users
Generate Secure API Keys:
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"
# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
Disable Authentication
To disable authentication completely:
REQUIRE_AUTH=false
When disabled, all endpoints are publicly accessible without any authentication.
📝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
📄 License
[Your License Here]
🤝 Support
- Documentation: This README and inline code comments
- Examples: See the test_*.py files for usage examples
- Issues: Report bugs and feature requests via GitHub Issues
Quick Links:
- Web Interface - Interactive testing
- API Documentation - OpenAPI/Swagger docs
- Configuration Test - Validate setup