Agent Framework Library
A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Features automatic multi-provider support (OpenAI, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.
NEW: Library Usage - The Agent Framework can now be installed as an external dependency from GitHub repositories. See the Library Usage Guide for details.
Library Installation
# Install from GitHub (HTTPS - works with public/private repos with token)
pip install git+https://github.com/Cinco-AI/AgentFramework.git
# Install from GitHub (SSH - requires SSH key setup)
pip install git+ssh://git@github.com/Cinco-AI/AgentFramework.git
# Install from local source (development)
pip install -e .
Features
Core Capabilities
- Multi-Provider Support: Automatic routing between OpenAI and Gemini APIs
- Dynamic System Prompts: Session-based system prompt control
- Agent Configuration: Runtime model parameter adjustment
- Session Management: Persistent conversation handling with structured workflow
- Session Workflow: Initialize/end session lifecycle with immutable configurations
- User Feedback System: Message-level thumbs up/down and session-level flags
- Media Detection: Automatic detection and handling of generated images/videos
- Web Interface: Built-in test application with rich UI controls
- Debug Logging: Comprehensive logging for system prompts and model configuration
Advanced Features
- Model Auto-Detection: Automatic provider selection based on model name
- Parameter Filtering: Provider-specific parameter validation (e.g., Gemini doesn't support frequency_penalty)
- Configuration Validation: Built-in validation and status endpoints
- Correlation & Conversation Tracking: Link sessions across agents and track individual exchanges
- Manager Agent Support: Built-in coordination features for multi-agent workflows
- Persistent Session Storage: MongoDB integration for scalable session persistence (see MongoDB Session Storage Guide)
- Agent Identity Support: Multi-agent deployment support with automatic agent identification in MongoDB (see Agent Identity Guide)
- Reverse Proxy Support: Automatic path prefix detection for deployment behind reverse proxies (see Reverse Proxy Setup Guide)
- Backward Compatibility: Existing implementations continue to work
Quick Start
Library Usage (Recommended)
The easiest way to use the Agent Framework is with the convenience function:
from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server

class MyAgent(AgentInterface):
    async def get_metadata(self):
        return {"name": "My Agent", "version": "1.0.0"}

    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        return StructuredAgentOutput(response_text=f"Hello! You said: {agent_input.query}")

# Start server with one line - no server.py file needed!
create_basic_agent_server(MyAgent, port=8000)
This automatically handles server setup, routing, and all framework features.
See examples/ for complete examples and docs/library_usage.md for comprehensive documentation.
Table of Contents
- Features
- Quick Start
- Configuration
- API Reference
- Client Examples
- Web Interface
- Advanced Usage
- Development
- Authentication
- Contributing
- License
- Support
Development
Traditional Development Setup
For development within the AgentFramework repository:
1. Installation
# Clone the repository
git clone <your-repository-url>
cd AgentFramework
# Install dependencies
uv venv
uv pip install -e .[dev]
2. Configuration
# Copy configuration template
cp env-template.txt .env
# Edit .env with your API keys
Minimal .env setup:
# At least one API key required
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here
# Set default model
DEFAULT_MODEL=gpt-4
# Authentication (optional - set to true to enable)
REQUIRE_AUTH=false
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=password
API_KEYS=sk-your-secure-api-key-123
3. Start the Server
Option A: Using convenience function (recommended for external projects)
# In your agent file
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyAgent, port=8000)
Option B: Traditional method
# Start the development server
uv run python agent.py
# Or using uvicorn directly
export AGENT_CLASS_PATH="agent:Agent"
uvicorn server:app --reload --host 0.0.0.0 --port 8000
4. Test the Agent
Open your browser to http://localhost:8000/testapp or make API calls:
# Without authentication (REQUIRE_AUTH=false)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'

# With API key authentication (REQUIRE_AUTH=true)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -H "X-API-Key: sk-your-secure-api-key-123" \
  -d '{"query": "Hello, how are you?"}'

# With Basic authentication (REQUIRE_AUTH=true)
curl -u admin:password -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'
Project Structure
AgentFramework/
├── agent_framework/          # Main framework package
│   ├── __init__.py           # Library exports and convenience functions
│   ├── agent_interface.py    # Abstract agent interface
│   ├── base_agent.py         # AutoGen-based agent implementation
│   ├── server.py             # FastAPI server
│   ├── model_config.py       # Multi-provider configuration
│   ├── model_clients.py      # Model client factory
│   └── session_storage.py    # Session storage implementations
├── examples/                 # Usage examples
├── docs/                     # Documentation
├── test_app.html             # Web interface
├── env-template.txt          # Configuration template
└── pyproject.toml            # Package configuration
Creating Custom Agents
- Inherit from AgentInterface:
from typing import Optional

from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput

class MyCustomAgent(AgentInterface):
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:
        # Implement your logic here
        return StructuredAgentOutput(response_text=f"You said: {agent_input.query}")

    async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):
        # Implement streaming logic here, yielding partial outputs
        yield StructuredAgentOutput(response_text=f"You said: {agent_input.query}")

    async def get_metadata(self):
        return {
            "name": "My Custom Agent",
            "description": "A custom agent implementation",
            "capabilities": {"streaming": True}
        }

    def get_system_prompt(self) -> Optional[str]:
        return "Your custom system prompt here..."
- Start the server:
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyCustomAgent, port=8000)
Testing
The project includes a comprehensive test suite built with pytest. The tests are located in the tests/ directory and are configured to run in a self-contained environment.
For detailed instructions on how to set up the test environment and run the tests, please refer to the README file inside the test directory:
Agent Framework Test Suite Guide
A brief overview of the steps:
- Navigate to the test directory: cd tests
- Create a virtual environment: uv venv
- Activate it: source .venv/bin/activate
- Install dependencies: uv pip install -e .. && uv pip install -r requirements.txt
- Run the tests: pytest
Debug Logging
Set debug logging to see detailed system prompt and configuration information:
export AGENT_LOG_LEVEL=DEBUG
uv run python agent.py
Debug logs include:
- Model configuration loading and validation
- System prompt handling and persistence
- Agent configuration merging and application
- Provider selection and parameter filtering
- Client creation and model routing
Configuration
Multi-Provider Setup
The framework automatically routes requests to the appropriate AI provider based on the model name:
# === API Keys ===
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here
# === Default Model ===
DEFAULT_MODEL=gpt-4
# === Model Lists (Optional) ===
OPENAI_MODELS=gpt-4,gpt-4-turbo,gpt-4o,gpt-3.5-turbo,o1-preview,o1-mini
GEMINI_MODELS=gemini-1.5-pro,gemini-1.5-flash,gemini-2.0-flash-exp,gemini-pro
# === Provider Defaults ===
FALLBACK_PROVIDER=openai
OPENAI_DEFAULT_TEMPERATURE=0.7
GEMINI_DEFAULT_TEMPERATURE=0.7
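The name-based routing can be pictured as a simple prefix lookup. The following is an illustrative sketch, not the framework's actual implementation (the real logic lives in model_config.py); the prefixes mirror the model lists above and the fallback mirrors FALLBACK_PROVIDER:

```python
import os

# Hypothetical sketch of name-based provider routing.
def detect_provider(model_name: str) -> str:
    """Pick a provider from the model name, else fall back to FALLBACK_PROVIDER."""
    name = model_name.lower()
    if name.startswith(("gpt-", "o1-")):
        return "openai"
    if name.startswith("gemini"):
        return "gemini"
    return os.environ.get("FALLBACK_PROVIDER", "openai")
```

With this sketch, "gpt-4" and "o1-mini" route to OpenAI, "gemini-1.5-pro" routes to Gemini, and anything unrecognized uses the configured fallback.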
Session Storage Configuration
Configure persistent session storage (optional):
# === Session Storage ===
# Use "memory" (default) for in-memory storage or "mongodb" for persistent storage
SESSION_STORAGE_TYPE=memory
# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
MONGODB_COLLECTION_NAME=sessions
For detailed MongoDB setup and configuration, see the MongoDB Session Storage Guide.
Configuration Validation
Test your configuration:
# Validate configuration
uv run python test_multi_provider.py
# Check specific model support
curl http://localhost:8000/config/validate/gpt-4
API Reference
Core Endpoints
Send Message
Send a message to the agent and receive a complete response.
Endpoint: POST /message
Request Body:
{
  "query": "Your message here",
  "parts": [],
  "system_prompt": "Optional custom system prompt",
  "agent_config": {
    "temperature": 0.8,
    "max_tokens": 1000,
    "model_selection": "gpt-4"
  },
  "session_id": "optional-session-id",
  "correlation_id": "optional-correlation-id-for-linking-sessions"
}
Response:
{
  "response_text": "Agent's response",
  "parts": [
    {
      "type": "text",
      "text": "Agent's response"
    }
  ],
  "session_id": "generated-or-provided-session-id",
  "user_id": "user1",
  "correlation_id": "correlation-id-if-provided",
  "conversation_id": "unique-id-for-this-exchange"
}
Session Workflow (NEW)
Initialize Session: POST /init
{
  "user_id": "string",          // required
  "correlation_id": "string",   // optional
  "session_id": "string",       // optional (auto-generated if not provided)
  "data": { ... },              // optional
  "configuration": {            // required
    "system_prompt": "string",
    "model_name": "string",
    "model_config": {
      "temperature": 0.7,
      "token_limit": 1000
    }
  }
}
Initializes a new chat session with immutable configuration. Must be called before any chat interactions. Returns the session configuration and generated session ID if not provided.
End Session: POST /end
{
  "session_id": "string"
}
Closes a session and prevents further interactions. Persists final session state and locks feedback system.
Submit Message Feedback: POST /feedback/message
{
  "session_id": "string",
  "message_id": "string",
  "feedback": "up" | "down"
}
Submit thumbs up/down feedback for a specific message. Can only be submitted once per message.
Submit/Update Session Flag: POST|PUT /feedback/flag
{
  "session_id": "string",
  "flag_message": "string"
}
Submit or update a session-level flag message. Editable while session is active, locked after session ends.
Session Management
List Sessions: GET /sessions
curl http://localhost:8000/sessions
# Response: ["session1", "session2", ...]
Get History: GET /sessions/{session_id}/history
curl http://localhost:8000/sessions/abc123/history
Find Sessions by Correlation ID: GET /sessions/by-correlation/{correlation_id}
curl http://localhost:8000/sessions/by-correlation/task-123
# Response: [{"user_id": "user1", "session_id": "abc123", "correlation_id": "task-123"}]
Correlation & Conversation Tracking
The framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.
Correlation ID Support
Purpose: Link multiple sessions across different agents that are part of the same larger task or workflow.
Usage:
# Start a task with a correlation ID
response1 = client.send_message(
    "Analyze this data set",
    correlation_id="data-analysis-task-001"
)

# Continue the task in another session/agent with the same correlation ID
response2 = client.send_message(
    "Generate visualizations for the analysis",
    correlation_id="data-analysis-task-001"  # Same correlation ID
)

# Find all sessions related to this task
sessions = requests.get("/sessions/by-correlation/data-analysis-task-001")
Key Features:
- Optional field: Can be set when sending messages or creating sessions
- Persistent: Correlation ID is maintained throughout the session lifecycle
- Cross-agent: Multiple agents can share the same correlation ID
- Searchable: Query all sessions by correlation ID
Conversation ID Support
Purpose: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.
Key Features:
- Automatic generation: Each request/reply pair gets a unique conversation ID
- Shared between request/reply: User message and agent response share the same conversation ID
- Database-ready: Designed for storing individual exchanges in databases
- Analytics-friendly: Enables detailed conversation flow analysis
Example Response with IDs:
{
  "response_text": "Here's the analysis...",
  "session_id": "session-abc-123",
  "user_id": "data-scientist-1",
  "correlation_id": "data-analysis-task-001",
  "conversation_id": "conv-uuid-456-789"
}
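The request/reply pairing can be sketched as follows: both halves of an exchange are stored under one generated conversation ID. This is a hypothetical sketch of a database-ready record shape; the field names follow the example response above, but the storage layout itself is an assumption, not the framework's actual schema:

```python
import uuid

def record_exchange(store: list, session_id: str, user_text: str, agent_text: str) -> str:
    """Store a request/reply pair under one shared conversation ID (illustrative)."""
    conversation_id = f"conv-{uuid.uuid4()}"
    store.append({"session_id": session_id, "conversation_id": conversation_id,
                  "role": "user", "text": user_text})
    store.append({"session_id": session_id, "conversation_id": conversation_id,
                  "role": "agent", "text": agent_text})
    return conversation_id
```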
Manager Agent Coordination
These features enable sophisticated multi-agent workflows:
import uuid

class ManagerAgent:
    def __init__(self):
        self.correlation_id = f"task-{uuid.uuid4()}"

    async def coordinate_task(self, task_description):
        # Step 1: Data analysis agent
        analysis_response = await self.send_to_agent(
            "data-agent",
            f"Analyze: {task_description}",
            correlation_id=self.correlation_id
        )
        # Step 2: Visualization agent
        viz_response = await self.send_to_agent(
            "viz-agent",
            f"Create charts for: {analysis_response}",
            correlation_id=self.correlation_id
        )
        # Step 3: Find all related sessions
        related_sessions = await self.get_sessions_by_correlation(self.correlation_id)
        return {
            "task_id": self.correlation_id,
            "sessions": related_sessions,
            "final_result": viz_response
        }
Web Interface Features
The test application includes full support for correlation tracking:
- Correlation ID Input: Set correlation IDs when sending messages
- Session Finder: Search for all sessions sharing a correlation ID
- ID Display: Shows correlation and conversation IDs in chat history
- Visual Indicators: Clear display of tracking information
Configuration Endpoints
Get Model Configuration: GET /config/models
{
  "default_model": "gpt-4",
  "configuration_status": {
    "valid": true,
    "warnings": [],
    "errors": []
  },
  "supported_models": {
    "openai": ["gpt-4", "gpt-3.5-turbo"],
    "gemini": ["gemini-1.5-pro", "gemini-pro"]
  },
  "supported_providers": {
    "openai": true,
    "gemini": true
  }
}
Validate Model: GET /config/validate/{model_name}
{
  "model": "gpt-4",
  "provider": "openai",
  "supported": true,
  "api_key_configured": true,
  "client_available": true,
  "issues": []
}
Get System Prompt: GET /system-prompt
{
  "system_prompt": "You are a helpful AI assistant that helps users accomplish their tasks efficiently..."
}
Returns the default system prompt configured for the agent. Returns 404 if no system prompt is configured.
Response (404 if not configured):
{
  "detail": "System prompt not configured"
}
Agent Configuration Parameters
| Parameter | Type | Range | Description | Providers |
|---|---|---|---|---|
| temperature | float | 0.0-2.0 | Controls randomness | OpenAI, Gemini |
| max_tokens | integer | 1+ | Maximum response tokens | OpenAI, Gemini |
| top_p | float | 0.0-1.0 | Nucleus sampling | OpenAI, Gemini |
| frequency_penalty | float | -2.0-2.0 | Reduce frequent tokens | OpenAI only |
| presence_penalty | float | -2.0-2.0 | Reduce any repetition | OpenAI only |
| stop_sequences | array | - | Custom stop sequences | OpenAI, Gemini |
| timeout | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |
| max_retries | integer | 0+ | Retry attempts | OpenAI, Gemini |
| model_selection | string | - | Override model for session | OpenAI, Gemini |
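The provider-specific parameter filtering mentioned under Advanced Features can be pictured as dropping the keys a provider does not accept. This is an illustrative sketch based on the "Providers" column above; the framework's actual validation lives in model_config.py and may differ:

```python
# Hypothetical sketch of provider-specific parameter filtering.
UNSUPPORTED_PARAMS = {
    "gemini": {"frequency_penalty", "presence_penalty"},  # OpenAI-only parameters
    "openai": set(),
}

def filter_agent_config(provider: str, config: dict) -> dict:
    """Return a copy of config without parameters the provider rejects."""
    blocked = UNSUPPORTED_PARAMS.get(provider, set())
    return {k: v for k, v in config.items() if k not in blocked}
```

For example, passing `{"temperature": 0.8, "frequency_penalty": 0.5}` through this sketch for Gemini would keep only the temperature.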
Client Examples
Python Client
import requests

class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
        # Add basic auth if required
        self.session.auth = ("admin", "password")

    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get the complete response."""
        payload = {"query": message, "parts": []}
        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id
        response = self.session.post(f"{self.base_url}/message", json=payload)
        response.raise_for_status()
        return response.json()

    def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):
        """Initialize a new session with configuration."""
        payload = {"user_id": user_id, "configuration": configuration}
        if correlation_id:
            payload["correlation_id"] = correlation_id
        if session_id:
            payload["session_id"] = session_id
        if data:
            payload["data"] = data
        response = self.session.post(f"{self.base_url}/init", json=payload)
        response.raise_for_status()
        return response.json()

    def end_session(self, session_id):
        """End a session."""
        response = self.session.post(f"{self.base_url}/end", json={"session_id": session_id})
        response.raise_for_status()
        return response.ok

    def submit_feedback(self, session_id, message_id, feedback):
        """Submit feedback for a message."""
        response = self.session.post(
            f"{self.base_url}/feedback/message",
            json={
                "session_id": session_id,
                "message_id": message_id,
                "feedback": feedback
            }
        )
        response.raise_for_status()
        return response.ok

    def get_model_config(self):
        """Get available models and configuration."""
        response = self.session.get(f"{self.base_url}/config/models")
        response.raise_for_status()
        return response.json()

# Usage example
client = AgentClient()

# Initialize a session with an immutable configuration
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a creative writing assistant",
        "model_name": "gpt-4",
        "model_config": {
            "temperature": 1.2,
            "token_limit": 500
        }
    },
    correlation_id="creative-writing-session-001"
)
session_id = session_data["session_id"]

# Send messages using the initialized session
response = client.send_message(
    "Write a creative story about space exploration",
    session_id=session_id
)
print(response["response_text"])

# Submit feedback on the response
client.submit_feedback(session_id, response["conversation_id"], "up")

# Continue the conversation
response2 = client.send_message("Add more details about the characters", session_id=session_id)
print(response2["response_text"])

# End the session when done
client.end_session(session_id)
JavaScript Client
class AgentClient {
  constructor(baseUrl = 'http://localhost:8000') {
    this.baseUrl = baseUrl;
    this.auth = btoa('admin:password'); // Basic auth
  }

  async sendMessage(message, options = {}) {
    const payload = { query: message, parts: [], ...options };
    const response = await fetch(`${this.baseUrl}/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async initSession(userId, configuration, options = {}) {
    const payload = { user_id: userId, configuration, ...options };
    const response = await fetch(`${this.baseUrl}/init`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async endSession(sessionId) {
    const response = await fetch(`${this.baseUrl}/end`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({ session_id: sessionId })
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.ok;
  }

  async submitFeedback(sessionId, messageId, feedback) {
    const response = await fetch(`${this.baseUrl}/feedback/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({
        session_id: sessionId,
        message_id: messageId,
        feedback
      })
    });
    return response.ok;
  }

  async getModelConfig() {
    const response = await fetch(`${this.baseUrl}/config/models`, {
      headers: { 'Authorization': `Basic ${this.auth}` }
    });
    return response.json();
  }
}

// Usage example
const client = new AgentClient();

// Initialize a session with configuration
const sessionInit = await client.initSession('user123', {
  system_prompt: 'You are a helpful coding assistant',
  model_name: 'gpt-4',
  model_config: {
    temperature: 0.7,
    token_limit: 1000
  }
}, {
  correlation_id: 'coding-help-001'
});

// Send messages using the initialized session
const response = await client.sendMessage('Help me debug this Python code', {
  session_id: sessionInit.session_id
});
console.log(response.response_text);

// Submit feedback
await client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');

// End the session when done
await client.endSession(sessionInit.session_id);
curl Examples
# Basic message with correlation ID
curl -X POST http://localhost:8000/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Hello, world!",
    "correlation_id": "greeting-task-001",
    "agent_config": {
      "temperature": 0.8,
      "model_selection": "gpt-4"
    }
  }'

# Initialize session
curl -X POST http://localhost:8000/init \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "correlation_id": "poetry-session-001",
    "configuration": {
      "system_prompt": "You are a talented poet",
      "model_name": "gpt-4",
      "model_config": {
        "temperature": 1.5,
        "token_limit": 200
      }
    }
  }'

# Submit feedback for a message
curl -X POST http://localhost:8000/feedback/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123",
    "message_id": "msg-456",
    "feedback": "up"
  }'

# End session
curl -X POST http://localhost:8000/end \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123"
  }'

# Get model configuration
curl http://localhost:8000/config/models -u admin:password

# Validate model support
curl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password

# Get system prompt
curl http://localhost:8000/system-prompt -u admin:password

# Find sessions by correlation ID
curl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password
Web Interface
Access the built-in web interface at http://localhost:8000/testapp
Features:
- Model Selection: Dropdown with all available models
- System Prompt Management:
- Dedicated textarea for custom prompts
- Auto-loads default system prompt from server
- Session-specific prompt persistence
- Reset to default functionality
- Manual reload from server option
- Advanced Configuration: Collapsible panel with all parameters
- Parameter Validation: Real-time validation with visual feedback
- Provider Awareness: Disables unsupported parameters (e.g., frequency_penalty for Gemini)
- Session Management: Create, load, and manage conversation sessions with structured workflow
- Session Initialization: Configure sessions with immutable system prompts and model settings
- User Feedback: Thumbs up/down feedback and session-level flags
- Media Detection: Automatic detection and display of generated images/videos
- Correlation Tracking:
- Set correlation IDs to link sessions across agents
- Search for sessions by correlation ID
- Visual display of correlation and conversation IDs
- Manager agent coordination support
Configuration Presets:
- Creative: High temperature, relaxed parameters for creative tasks
- Precise: Low temperature, focused parameters for analytical tasks
- Custom: Manual parameter adjustment
Advanced Usage
System Prompt Configuration
The framework supports configurable system prompts both at the server level and per-session:
Server-Level System Prompt
Agents can provide a default system prompt via the get_system_prompt() method:
from typing import Optional

class MyAgent(AgentInterface):
    def get_system_prompt(self) -> Optional[str]:
        return """
        You are a helpful coding assistant specializing in Python.
        Always provide:
        1. Working code examples
        2. Clear explanations
        3. Best practices
        4. Error handling
        """
Accessing System Prompt via API
import requests

# Get the default system prompt from the server
response = requests.get("http://localhost:8000/system-prompt")
if response.status_code == 200:
    system_prompt = response.json()["system_prompt"]
else:
    print("No system prompt configured")
Per-Session System Prompts
# Set system prompt for specific use case
custom_prompt = """
You are a creative writing assistant.
Focus on storytelling and narrative structure.
"""

response = client.send_message(
    "Help me write a short story",
    system_prompt=custom_prompt
)
Web Interface System Prompt Management
The web interface provides comprehensive system prompt management:
- Auto-loading: Default system prompt loads automatically on new sessions
- Session persistence: Each session remembers its custom system prompt
- Reset functionality: "Reset to Default" button restores server default
- Manual reload: Refresh system prompt from server without losing session data
Model-Specific Configuration
# OpenAI-specific configuration
openai_config = {
    "model_selection": "gpt-4",
    "temperature": 0.7,
    "frequency_penalty": 0.5,  # OpenAI only
    "presence_penalty": 0.3    # OpenAI only
}

# Gemini-specific configuration
gemini_config = {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 1000
    # Note: frequency_penalty is not supported by Gemini
}
Session Persistence
# Start a conversation with custom settings
response1 = client.send_message(
    "Let's start a coding session",
    system_prompt="You are my coding pair programming partner",
    config={"temperature": 0.3}
)
session_id = response1["session_id"]

# Continue the conversation - settings persist
response2 = client.send_message(
    "Help me debug this function",
    session_id=session_id
)

# Override settings for this message only
response3 = client.send_message(
    "Now be creative and suggest alternatives",
    session_id=session_id,
    config={"temperature": 1.5}  # Temporary override
)
Multi-Modal Support
# Send image with message
payload = {
    "query": "What's in this image?",
    "parts": [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
        }
    ]
}
Authentication
The framework supports two authentication methods that can be used simultaneously:
1. Basic Authentication (Username/Password)
HTTP Basic Authentication using username and password credentials.
Configuration:
# Enable authentication
REQUIRE_AUTH=true
# Basic Auth credentials
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
Usage Examples:
# cURL with Basic Auth
curl -u admin:password http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
# Python requests
import requests
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    auth=("admin", "password")
)
2. API Key Authentication
A more secure option for API clients, using bearer tokens or X-API-Key headers.
Configuration:
# Enable authentication
REQUIRE_AUTH=true
# API Keys (comma-separated list of valid keys)
API_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789
Usage Examples:
# cURL with Bearer Token
curl -H "Authorization: Bearer sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'

# cURL with X-API-Key Header
curl -H "X-API-Key: sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
# Python requests with Bearer Token
import requests
headers = {
    "Authorization": "Bearer sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)

# Python requests with X-API-Key
headers = {
    "X-API-Key": "sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)
Authentication Priority
The framework tries authentication methods in this order:
- API Key via Bearer Token (Authorization: Bearer <key>)
- API Key via X-API-Key Header (X-API-Key: <key>)
- Basic Authentication (username/password)
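The order above can be sketched as a single check over the request headers. This is an illustrative sketch, not the framework's actual FastAPI dependency; the keys and credentials mirror the examples in this section:

```python
import base64

# Hypothetical values mirroring API_KEYS and BASIC_AUTH_* from the examples above.
API_KEYS = {"sk-your-secure-key-123"}
BASIC_CREDENTIALS = ("admin", "password")

def authenticate(headers: dict) -> bool:
    """Try Bearer token, then X-API-Key, then Basic auth, in that order."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):] in API_KEYS
    if "X-API-Key" in headers:
        return headers["X-API-Key"] in API_KEYS
    if auth.startswith("Basic "):
        decoded = base64.b64decode(auth[len("Basic "):]).decode()
        username, _, password = decoded.partition(":")
        return (username, password) == BASIC_CREDENTIALS
    return False
```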
Python Client Library Support
from AgentClient import AgentClient
# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")
# Using API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})
# Send authenticated request
response = client.send_message("Hello, authenticated world!")
Web Interface Authentication
The web interface (/testapp) supports both authentication methods. Update the JavaScript client:
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;
// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
Security Best Practices
- Use Strong API Keys: Generate cryptographically secure random keys
- Rotate Keys Regularly: Update API keys periodically
- Environment Variables: Never hardcode credentials in source code
- HTTPS Only: Always use HTTPS in production to protect credentials
- Minimize Key Scope: Use different keys for different applications/users
Generate Secure API Keys:
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"
# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
Disable Authentication
To disable authentication completely:
REQUIRE_AUTH=false
When disabled, all endpoints are publicly accessible without any authentication.
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
License
[Your License Here]
Support
- Documentation: This README and inline code comments
- Examples: See the test_*.py files for usage examples
- Issues: Report bugs and feature requests via GitHub Issues
Quick Links:
- Web Interface - Interactive testing
- API Documentation - OpenAPI/Swagger docs
- Configuration Test - Validate setup