OpenAI Agent Factory
A toolkit for creating and managing OpenAI Agents with MCP server integration.
Features
- Create and manage multiple AI agents with different configurations
- Support for MCP (Model Context Protocol) servers
- Integration with Azure OpenAI models
- Command-line interface for interactive agent communication with conversation history management
- HTTP service support via FastAPI for exposing agents as web endpoints (JSON and streaming responses)
- Async context manager support for easy resource management
- Comprehensive environment variable configuration
- Advanced model settings customization (temperature, tokens, penalties, etc.)
- Metadata support for agents
Installation
Via pip
pip install openai-agent-factory
For development
git clone https://github.com/jiahzhu1989/agent-factory.git
cd agent-factory
pip install -e .
Requirements
- Python 3.10 or higher
- openai >= 1.77.0
- openai-agents >= 0.0.14
- pydantic >= 2.0.0
- fastapi >= 0.109.0 (for HTTP service)
- uvicorn >= 0.27.0 (for running HTTP services)
- pyyaml >= 6.0.1 (for configuration files)
- Additional dependencies in pyproject.toml
Command-line Interface
The package includes a CLI tool for interacting with agents:
agent-cli -c path/to/config.yaml
Options:
- -c, --config: Path to the agent configuration YAML file (required)
- -l, --list: List all available agents
- -a, --agent: Name of the agent to interact with
- -v, --verbose: Enable verbose logging
- --max-history: Maximum number of conversation turns to keep in history (default: 500)
- --max-tokens: Maximum token limit for conversation history (default: 100000)
Example Usage
List all available agents:
agent-cli -c examples/configs/cli_example.yaml -l
Interact with a specific agent:
agent-cli -c examples/configs/cli_example.yaml -a "General Assistant"
Configuration
Agent configuration is defined in YAML format:
agents:
  - name: "General Assistant"
    instructions: |
      You are a helpful, friendly AI assistant.
      Answer questions clearly and concisely.
    model: "gpt-4.1"
    model_settings:
      temperature: 0.7
      max_tokens: 1000
      frequency_penalty: 0.0
      presence_penalty: 0.0
    mcp_servers: ["time"]
    metadata:
      description: "General-purpose AI assistant"
      version: "1.0"
      capabilities: ["answering questions", "casual conversation"]

  - name: "Python Coder"
    instructions: |
      You are a Python coding expert.
      Always provide working code examples.
    model: "gpt-4.1"
    model_settings:
      temperature: 0.5
      max_tokens: 2000
    mcp_servers: ["time"]
    metadata:
      description: "Python programming expert"
      version: "1.0"
      capabilities: ["code generation", "debugging", "code explanation"]

mcp_servers:
  time:
    type: "stdio"
    command: "python"
    args: ["-m", "mcp_server_time"]
    env:
      DEBUG: "true"
    encoding: "utf-8"
  azure:
    type: "stdio"
    command: "npx"
    args: ["-y", "@azure/mcp@latest", "server", "start"]
    env:
      AZURE_MCP_INCLUDE_PRODUCTION_CREDENTIALS: "true"

openai_models:
  - api_key: "${AZURE_OPENAI_API_KEY}"
    endpoint: "${AZURE_OPENAI_ENDPOINT}"
    api_version: "${OPENAI_API_VERSION}"
    model: "gpt-4.1"
Environment Variables
You can override any configuration value by setting environment variables with the prefix AGENT_FACTORY_. For example:
# Override the API key for the first OpenAI model
export AGENT_FACTORY_OPENAI_MODELS_0_API_KEY="your-api-key"
# Override the temperature for the first agent
export AGENT_FACTORY_AGENTS_0_TEMPERATURE="0.5"
# Override the instructions for the second agent
export AGENT_FACTORY_AGENTS_1_INSTRUCTIONS="You are a helpful assistant."
Environment variable overrides take precedence over values defined in the configuration file.
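The variable name encodes a path into the configuration: list indices become numbers, keys become lowercase segments. A simplified sketch of that mapping (apply_override is a hypothetical helper for illustration; it assumes single-word keys, whereas real keys like api_key contain underscores and need smarter matching):

```python
def apply_override(config, name, value, prefix="AGENT_FACTORY_"):
    """Walk the config along the underscore-separated path and set the value.
    AGENT_FACTORY_AGENTS_0_TEMPERATURE -> config["agents"][0]["temperature"]."""
    parts = [int(p) if p.isdigit() else p
             for p in name[len(prefix):].lower().split("_")]
    node = config
    for part in parts[:-1]:
        node = node[part]
    node[parts[-1]] = value
    return config

cfg = {"agents": [{"name": "General Assistant", "temperature": 0.7}]}
apply_override(cfg, "AGENT_FACTORY_AGENTS_0_TEMPERATURE", "0.5")
print(cfg["agents"][0]["temperature"])  # 0.5
```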
Code Examples
Using as an Async Context Manager (Recommended)
# `config` is an AgentFactoryConfig instance (see Configuration above)
async with AgentFactory(config) as factory:
    agent = factory.get_agent("General Assistant")
    response = await agent.generate("Tell me a joke")
    print(response)
Using Explicit Initialization and Shutdown
factory = AgentFactory(config)
await factory.initialize()
try:
    agent = factory.get_agent("General Assistant")
    response = await agent.generate("Tell me a joke")
    print(response)

    # Get a list of all available agents
    all_agents = factory.get_all_agents()
    print(f"Available agents: {list(all_agents.keys())}")
finally:
    await factory.shutdown()
Creating an HTTP Service
Use the AgentServiceFactory to create a FastAPI application that exposes agents as HTTP endpoints:
import os
from contextlib import asynccontextmanager

from fastapi import FastAPI

from agent_factory import (
    AgentConfig,
    AgentFactoryConfig,
    AgentServiceFactory,
    AzureOpenAIConfig,
    ModelSettings,
)

# Manage the service lifecycle with an async context manager
@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create agent configurations with advanced model settings
    weather_agent_config = AgentConfig(
        name="weather-assistant",
        instructions="You are a helpful weather assistant.",
        model="gpt-4.1",
        model_settings=ModelSettings(
            temperature=0.7,
            max_tokens=1000,
            frequency_penalty=0.0,
            presence_penalty=0.0,
        ),
        metadata={
            "description": "Weather information and forecast assistant",
            "version": "1.0",
            "capabilities": ["current weather", "forecasts", "historical data"],
        },
    )

    # Create the factory configuration
    factory_config = AgentFactoryConfig(
        agents=[weather_agent_config],
        openai_models=[
            AzureOpenAIConfig(
                api_key=os.getenv("AZURE_OPENAI_API_KEY"),
                endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
                api_version=os.getenv("OPENAI_API_VERSION"),
                model="gpt-4.1",
            )
        ],
    )

    # Create and initialize the agent service factory
    async with AgentServiceFactory(factory_config) as service_factory:
        # Mount the agent service onto the main app
        service_factory.mount_to(app, prefix="/")
        # Yield control to FastAPI, keeping the service alive
        yield
        # Cleanup happens automatically when exiting the context

# Create the main FastAPI application with the lifespan
app = FastAPI(title="Agent Service Example", lifespan=lifespan)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
This creates the following endpoints for each agent:
- /agents/weather-assistant/ - Get agent information and metadata
- /agents/weather-assistant/chat/stream - Stream agent responses with Server-Sent Events
- /agents - List all available agents with their details
You can use curl to test the streaming endpoint:
curl -X POST -H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":"Tell me about the weather"}]}' \
http://localhost:8000/agents/weather-assistant/chat/stream
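To consume the stream from Python, you only need to pick the data: fields out of the Server-Sent Events lines. A minimal sketch (handles single-line data: fields only; the actual event payload format the service emits is not specified here):

```python
def sse_events(lines):
    """Yield the data payloads from an iterable of SSE lines.
    Ignores comments, event/id fields, and blank event separators."""
    for line in lines:
        if line.startswith("data:"):
            yield line[len("data:"):].strip()

# Against a real server you would feed it an HTTP response stream, e.g.
# (hypothetical, using the requests library):
#   resp = requests.post(url, json=payload, stream=True)
#   for chunk in sse_events(resp.iter_lines(decode_unicode=True)): ...
sample = ["data: Hello", "", "data: world"]
print(list(sse_events(sample)))  # ['Hello', 'world']
```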
Development
Running Tests
bash scripts/run_tests.sh
Building Documentation
cd docs
make html
License
Examples
The examples directory contains complete usage samples:
- agent_service_example.py: Basic setup for exposing agents through HTTP endpoints
- cli_example.py: Command-line interface usage
- config_agent_service.py: Using configuration files with AgentServiceFactory
- kubernetes_server_example.py: Deploying agents in a Kubernetes environment
- model_configuration_example.py: Advanced model settings configuration
- time_server_example.py: Using MCP time server with agents
API Documentation
Key Classes
- AgentFactory: Core factory for creating and managing OpenAI agents
- AgentServiceFactory: Exposes agents as HTTP endpoints using FastAPI
- AgentConfig: Configuration for a single agent
- AgentFactoryConfig: Configuration for the agent factory
- ModelSettings: Advanced model parameter settings
- MCPServerManager: Manages MCP server lifecycle
Configuration
The library provides comprehensive configuration options through both YAML/JSON files and environment variables:
- Agent instructions, model settings, and dependencies
- MCP server configurations
- OpenAI model settings and credentials
- Service endpoints and metadata
Development
Prerequisites
- Python 3.10 or higher
- OpenAI API access
- Required MCP servers for your use case
Testing
pytest