
OpenAI Agent Factory

A toolkit for creating and managing OpenAI Agents with MCP server integration.

Features

  • Create and manage multiple AI agents with different configurations
  • Support for MCP (Model Context Protocol) servers
  • Integration with Azure OpenAI models
  • Command-line interface for interactive agent communication with conversation history management
  • HTTP service support via FastAPI for exposing agents as web endpoints (JSON and streaming responses)
  • Async context manager support for easy resource management
  • Comprehensive environment variable configuration
  • Advanced model settings customization (temperature, tokens, penalties, etc.)
  • Metadata support for agents

Installation

Via pip

pip install openai-agent-factory

For development

git clone https://github.com/jiahzhu1989/agent-factory.git
cd agent-factory
pip install -e .

Requirements

  • Python 3.10 or higher
  • openai >= 1.77.0
  • openai-agents >= 0.0.14
  • pydantic >= 2.0.0
  • fastapi >= 0.109.0 (for HTTP service)
  • uvicorn >= 0.27.0 (for running HTTP services)
  • pyyaml >= 6.0.1 (for configuration files)
  • Additional dependencies in pyproject.toml

Command-line Interface

The package includes a CLI tool for interacting with agents:

agent-cli -c path/to/config.yaml

Options:

  • -c, --config: Path to the agent configuration YAML file (required)
  • -l, --list: List all available agents
  • -a, --agent: Name of the agent to interact with
  • -v, --verbose: Enable verbose logging
  • --max-history: Maximum number of conversation turns to keep in history (default: 500)
  • --max-tokens: Maximum token limit for conversation history (default: 100000)

Example Usage

List all available agents:

agent-cli -c examples/configs/cli_example.yaml -l

Interact with a specific agent:

agent-cli -c examples/configs/cli_example.yaml -a "General Assistant"

Configuration

Agent configuration is defined in YAML format:

agents:
  - name: "General Assistant"
    instructions: |
      You are a helpful, friendly AI assistant.
      Answer questions clearly and concisely.
    model: "gpt-4.1"
    model_settings:
      temperature: 0.7
      max_tokens: 1000
      frequency_penalty: 0.0
      presence_penalty: 0.0
    mcp_servers: ["time"]
    metadata:
      description: "General-purpose AI assistant"
      version: "1.0"
      capabilities: ["answering questions", "casual conversation"]

  - name: "Python Coder"
    instructions: |
      You are a Python coding expert. 
      Always provide working code examples.
    model: "gpt-4.1"
    model_settings:
      temperature: 0.5
      max_tokens: 2000
    mcp_servers: ["time"]
    metadata:
      description: "Python programming expert"
      version: "1.0"
      capabilities: ["code generation", "debugging", "code explanation"]

mcp_servers:
  time:
    type: "stdio"
    command: "python"
    args: ["-m", "mcp_server_time"]
    env:
      DEBUG: "true"
    encoding: "utf-8"
  azure:
    type: "stdio"
    command: "npx"
    args: ["-y", "@azure/mcp@latest", "server", "start"]
    env:
      AZURE_MCP_INCLUDE_PRODUCTION_CREDENTIALS: "true"

openai_models:
  - api_key: "${AZURE_OPENAI_API_KEY}"
    endpoint: "${AZURE_OPENAI_ENDPOINT}"
    api_version: "${OPENAI_API_VERSION}"
    model: "gpt-4.1"

Environment Variables

Any configuration value can be overridden by setting an environment variable with the prefix AGENT_FACTORY_. For example:

# Override the API key for the first OpenAI model
export AGENT_FACTORY_OPENAI_MODELS_0_API_KEY="your-api-key"

# Override the temperature for the first agent
export AGENT_FACTORY_AGENTS_0_TEMPERATURE="0.5"

# Override the instructions for the second agent
export AGENT_FACTORY_AGENTS_1_INSTRUCTIONS="You are a helpful assistant."

Environment variable overrides take precedence over values defined in the configuration file.

Code Examples

Using as an Async Context Manager (Recommended)

from agent_factory import AgentFactory

# `config` is an AgentFactoryConfig instance (see Configuration above)
async with AgentFactory(config) as factory:
    agent = factory.get_agent("General Assistant")
    response = await agent.generate("Tell me a joke")
    print(response)

Using Explicit Initialization and Shutdown

from agent_factory import AgentFactory

# `config` is an AgentFactoryConfig instance (see Configuration above)
factory = AgentFactory(config)
await factory.initialize()

try:
    agent = factory.get_agent("General Assistant")
    response = await agent.generate("Tell me a joke")
    print(response)
    
    # Get a list of all available agents
    all_agents = factory.get_all_agents()
    print(f"Available agents: {list(all_agents.keys())}")
finally:
    await factory.shutdown()

Creating an HTTP Service

Use the AgentServiceFactory to create a FastAPI application that exposes agents as HTTP endpoints:

import os
from fastapi import FastAPI
from contextlib import asynccontextmanager
from agent_factory import AgentConfig, AgentFactoryConfig, AzureOpenAIConfig, AgentServiceFactory, ModelSettings

# Create an asynccontextmanager for the service lifecycle
@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create agent configurations with advanced model settings
    weather_agent_config = AgentConfig(
        name="weather-assistant",
        instructions="You are a helpful weather assistant.",
        model="gpt-4.1",
        model_settings=ModelSettings(
            temperature=0.7,
            max_tokens=1000,
            frequency_penalty=0.0,
            presence_penalty=0.0
        ),
        metadata={
            "description": "Weather information and forecast assistant",
            "version": "1.0",
            "capabilities": ["current weather", "forecasts", "historical data"]
        }
    )
    
    # Create the factory configuration
    factory_config = AgentFactoryConfig(
        agents=[weather_agent_config],
        openai_models=[
            AzureOpenAIConfig(
                api_key=os.getenv("AZURE_OPENAI_API_KEY"),
                endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
                api_version=os.getenv("OPENAI_API_VERSION"),
                model="gpt-4.1"
            )
        ]
    )
    
    # Create and initialize the agent service factory
    async with AgentServiceFactory(factory_config) as service_factory:
        # Mount the agent service to the main app
        service_factory.mount_to(app, prefix="/")
        
        # Yield control to FastAPI, keeping the service alive
        yield
        # Cleanup happens automatically when exiting the context

# Create the main FastAPI application with the lifespan
app = FastAPI(title="Agent Service Example", lifespan=lifespan)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

This creates the following endpoints for each agent:

  • /agents/weather-assistant/ - Get agent information and metadata
  • /agents/weather-assistant/chat/stream - Stream agent responses with Server-Sent Events
  • /agents - List all available agents with their details

You can use curl to test the streaming endpoint:

curl -X POST -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Tell me about the weather"}]}' \
  http://localhost:8000/agents/weather-assistant/chat/stream
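On the client side, the stream arrives as standard Server-Sent Events: each event is a block of `data:` lines terminated by a blank line. A minimal parser for that wire format (the payload schema inside each `data:` field is an assumption; inspect the actual responses from your deployment):

```python
def parse_sse(raw: str) -> list[str]:
    """Extract the data payloads from a Server-Sent Events stream.

    Events are separated by blank lines; multi-line payloads within one
    event are rejoined with newlines, per the SSE specification.
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [line[len("data:"):].lstrip()
                      for line in block.split("\n") if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events

sample = "data: Hello\n\ndata: world\n\n"
print(parse_sse(sample))  # ['Hello', 'world']
```

A production client would instead read the response incrementally (e.g. with `httpx` streaming) and feed chunks into a buffer, but the framing logic is the same.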

Development

Running Tests

bash scripts/run_tests.sh

Building Documentation

cd docs
make html

License

MIT

Examples

The examples directory contains complete usage samples:

  • agent_service_example.py: Basic setup for exposing agents through HTTP endpoints
  • cli_example.py: Command-line interface usage
  • config_agent_service.py: Using configuration files with AgentServiceFactory
  • kubernetes_server_example.py: Deploying agents in a Kubernetes environment
  • model_configuration_example.py: Advanced model settings configuration
  • time_server_example.py: Using MCP time server with agents

API Documentation

Key Classes

  • AgentFactory: Core factory for creating and managing OpenAI agents
  • AgentServiceFactory: Exposes agents as HTTP endpoints using FastAPI
  • AgentConfig: Configuration for a single agent
  • AgentFactoryConfig: Configuration for the agent factory
  • ModelSettings: Advanced model parameter settings
  • MCPServerManager: Manages MCP server lifecycle

Configuration

The library provides comprehensive configuration options through both YAML/JSON files and environment variables:

  • Agent instructions, model settings, and dependencies
  • MCP server configurations
  • OpenAI model settings and credentials
  • Service endpoints and metadata

Prerequisites

  • Python 3.10 or higher
  • OpenAI API access
  • Required MCP servers for your use case

Tests can also be run directly with pytest:

pytest
