# Agents SDK Models 🤖🔌
A collection of model adapters and workflow utilities for the OpenAI Agents SDK, enabling you to use various LLM providers and build practical agent pipelines with a unified interface!
## 🌟 Features

- 🔄 **Unified Factory**: Use the `get_llm` function to easily get model instances for different providers.
- 🧩 **Multiple Providers**: Support for OpenAI, Ollama, Google Gemini, and Anthropic Claude.
- 📊 **Structured Output**: All models instantiated via `get_llm` support structured output using Pydantic models.
- 🏗️ **AgentPipeline Class**: Easily compose generation, evaluation, tool integration, and guardrails in one workflow.
- 🛡️ **Guardrails**: Add input/output guardrails for safe and compliant agent behavior.
- 🛠️ **Simple Interface**: Minimal code, maximum flexibility.
- ✨ **Zero-Code Evaluation & Self-Improvement**: Just specify model names and system prompts to automatically run generation, evaluation, and feedback-driven retries.
- 🔍 **Custom Console Tracing**: Console tracing is enabled by default using `ConsoleTracingProcessor`. While the OpenAI Agents SDK uses OpenAI's Tracing service by default (requiring `OPENAI_API_KEY`), this library provides a lightweight console-based tracer that works with any provider. You can disable tracing entirely with `disable_tracing()`.
## v0.20 Release Notes

- Support the `OLLAMA_BASE_URL` environment variable for Ollama configuration
- Remove the OpenAI Agents SDK standard Trace and use console-only tracing for better compatibility
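Based on the note above, pointing the Ollama provider at a custom host should only require setting an environment variable before creating the model. A minimal sketch (the host and the `llama3` model name are placeholders; the `get_llm` call is shown commented out since it needs a running Ollama server):

```python
import os

# Set before creating the model; assumes get_llm reads OLLAMA_BASE_URL
# when provider is "ollama" (the default is typically http://localhost:11434).
os.environ["OLLAMA_BASE_URL"] = "http://my-ollama-host:11434"

# from agents_sdk_models import get_llm
# llm = get_llm(model="llama3", provider="ollama")  # now targets the custom host
```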
## v0.19 Release Notes

- Add `get_available_models()` and `get_available_models_async()` functions to retrieve available model names from different providers
- Update model lists to the latest versions: Claude 4 (Opus/Sonnet), Gemini 2.5 (Pro/Flash), and the latest OpenAI models (gpt-4.1, o3, o4-mini)
- Support dynamic model discovery for Ollama via the `/api/ps` endpoint
## v0.18 Release Notes

- Support the OpenAI Agents SDK Trace feature, with default console tracing enabled.
- Add an `evaluation_model` parameter to switch the evaluation model separately from the generation model.
## 🛠️ Installation

### From PyPI (Recommended)

```bash
pip install agents-sdk-models
```

### From Source

```bash
git clone https://github.com/kitfactory/agents-sdk-models.git
cd agents-sdk-models
python -m venv .venv
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # Linux/Mac
pip install -e .[dev]
```
## 🧪 Tests & Coverage

Run tests and generate a coverage report:

```bash
pytest --cov=agents_sdk_models --cov-report=term-missing
```

- ✅ All tests currently pass.
- The coverage badge indicates the line coverage percentage for the `agents_sdk_models` package (measured by pytest-cov).
## 🚀 Quick Start: Using `get_llm`

The `get_llm` function accepts both the model and the provider, or just the model (the provider is inferred):

```python
from agents_sdk_models import get_llm

# Specify both model and provider
llm = get_llm(model="gpt-4o-mini", provider="openai")

# Or just the model (provider inferred)
llm = get_llm("claude-3-5-sonnet-latest")
```
### Example: Structured Output

```python
from agents import Agent, Runner
from agents_sdk_models import get_llm
from pydantic import BaseModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str

llm = get_llm("gpt-4o-mini")

agent = Agent(
    name="Weather Reporter",
    model=llm,
    instructions="You are a helpful weather reporter.",
    output_type=WeatherInfo
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)
```
### Example: Tracing

```python
from agents_sdk_models import enable_console_tracing, disable_tracing
from agents_sdk_models.pipeline import AgentPipeline
from agents.tracing import trace

# Enable console tracing (uses ConsoleTracingProcessor)
enable_console_tracing()

pipeline = AgentPipeline(
    name="trace_example",
    generation_instructions="You are a helpful assistant.",
    evaluation_instructions=None,
    model="gpt-4o-mini"
)

# Run the pipeline under a trace context
with trace("MyTrace"):
    result = pipeline.run("Hello, world!")

print(result)
```
### Example: Get Available Models

```python
import asyncio
from agents_sdk_models import get_available_models, get_available_models_async

# Get models from all providers (synchronous)
models = get_available_models(["openai", "google", "anthropic", "ollama"])
print("Available models:", models)

# Get models from specific providers (asynchronous)
async def main():
    models = await get_available_models_async(["openai", "google"])
    for provider, model_list in models.items():
        print(f"{provider}: {model_list}")

asyncio.run(main())

# Custom Ollama URL
models = get_available_models(["ollama"], ollama_base_url="http://custom-host:11434")
```
## 🏗️ AgentPipeline Class: Easy LLM Workflows

The `AgentPipeline` class provides an all-in-one solution for AI agent workflows. It:

- Generates content based on user-defined instructions
- Evaluates the generated content with scoring and comments
- Integrates custom tools (via `function_tool`) for external data or computation
- Applies input/output guardrails (via `input_guardrail`) for safety and compliance
- Manages session history and context
- Supports configurable retries with automatic feedback (via `retry_comment_importance`)

Key initialization parameters:

- `generation_instructions` (str): System prompt for content generation
- `evaluation_instructions` (str, optional): System prompt for content evaluation
- `model` (str, optional): LLM model to use (e.g., "gpt-4o-mini")
- `evaluation_model` (str, optional): LLM model to use for evaluation (overrides `model`)
  - Note: You can specify a different model provider for `evaluation_model`, such as using OpenAI for generation and a local Ollama model for evaluation, to reduce cost and improve performance.
- `generation_tools` (list, optional): Tools for the generation stage
- `input_guardrails`, `output_guardrails` (list, optional): Guardrails for input/output
- `threshold` (int): Minimum score to accept generated content
- `retries` (int): Number of retry attempts on low evaluation scores
- `retry_comment_importance` (list[str], optional): Importance levels (`"serious"`, `"normal"`, `"minor"`) whose comments will be prepended to the prompt on retry
### Basic Usage

```python
from agents_sdk_models.pipeline import AgentPipeline

pipeline = AgentPipeline(
    name="simple_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions=None,  # No evaluation
    model="gpt-4o"
)

result = pipeline.run("A story about a robot learning to paint")
```
### With Evaluation

```python
from agents_sdk_models.pipeline import AgentPipeline

pipeline = AgentPipeline(
    name="evaluated_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions="""
    You are a story evaluator. Please evaluate the generated story based on:
    1. Creativity (0-100)
    2. Coherence (0-100)
    3. Emotional impact (0-100)
    Calculate the average score and provide specific comments for each aspect.
    """,
    model="gpt-4o",
    threshold=70
)

result = pipeline.run("A story about a robot learning to paint")
```
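The evaluation loop can also drive retries. The sketch below is not taken from the project's examples; it combines the retry-related parameters from the list above, assuming that scores below `threshold` trigger up to `retries` regenerations, with evaluator comments of the listed importance levels prepended to the retry prompt:

```python
from agents_sdk_models.pipeline import AgentPipeline

pipeline = AgentPipeline(
    name="self_improving_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    """,
    evaluation_instructions="""
    Score the story 0-100 for creativity and coherence, with specific comments.
    """,
    model="gpt-4o-mini",
    evaluation_model="gpt-4o-mini",  # could be a cheaper or local model instead
    threshold=80,                    # accept only scores of 80 or higher
    retries=2,                       # regenerate at most twice on low scores
    retry_comment_importance=["serious", "normal"],
)
```

Then call `pipeline.run(...)` as in the examples above; this is a configuration sketch, and actual retry behavior depends on the evaluator's scores.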
### With Tools

```python
from agents import function_tool
from agents_sdk_models.pipeline import AgentPipeline

@function_tool
def search_web(query: str) -> str:
    # Implement actual web search here
    return f"Search results for: {query}"

@function_tool
def get_weather(location: str) -> str:
    # Implement actual weather API here
    return f"Weather in {location}: Sunny, 25°C"

tools = [search_web, get_weather]

pipeline = AgentPipeline(
    name="tooled_generator",
    generation_instructions="""
    You are a helpful assistant that can use tools to gather information.
    You have access to the following tools:
    1. search_web: Search the web for information
    2. get_weather: Get current weather for a location
    Please use these tools when appropriate to provide accurate information.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    generation_tools=tools
)

result = pipeline.run("What's the weather like in Tokyo?")
```
### With Guardrails (input_guardrails)

```python
from agents import (
    Agent,
    input_guardrail,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    Runner,
    RunContextWrapper,
)
from agents_sdk_models.pipeline import AgentPipeline
from pydantic import BaseModel

class MathHomeworkOutput(BaseModel):
    is_math_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking you to do their math homework.",
    output_type=MathHomeworkOutput,
)

@input_guardrail
async def math_guardrail(ctx: RunContextWrapper, agent: Agent, input: str):
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_math_homework,
    )

pipeline = AgentPipeline(
    name="guardrail_pipeline",
    generation_instructions="""
    You are a helpful assistant. Please answer the user's question.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    input_guardrails=[math_guardrail],
)

try:
    result = pipeline.run("Can you help me solve for x: 2x + 3 = 11?")
    print(result)
except InputGuardrailTripwireTriggered:
    print("[Guardrail Triggered] Math homework detected. Request blocked.")
```
### With Dynamic Prompt

You can provide a custom function to dynamically build the prompt:

```python
from agents_sdk_models.pipeline import AgentPipeline

def my_dynamic_prompt(user_input: str) -> str:
    # Example: Uppercase the user input and add a prefix
    return f"[DYNAMIC PROMPT] USER SAID: {user_input.upper()}"

pipeline = AgentPipeline(
    name="dynamic_prompt_example",
    generation_instructions="""
    You are a helpful assistant. Respond to the user's request.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    dynamic_prompt=my_dynamic_prompt
)

result = pipeline.run("Tell me a joke.")
print(result)
```
## 🖥️ Supported Environments

- Python 3.9+
- OpenAI Agents SDK 0.0.9+
- Windows, Linux, macOS
## 💡 Why use this?

- **Unified**: One interface for all major LLM providers
- **Flexible**: Compose generation, evaluation, tools, and guardrails as you like
- **Easy**: Minimal code to get started, powerful enough for advanced workflows
- **Safe**: Guardrails for compliance and safety
- **Self-Improving**: Automatic feedback and retry mechanism with minimal configuration
## 📂 Examples

See the `examples/` directory for more advanced usage:

- `pipeline_simple_generation.py`: Minimal generation
- `pipeline_with_evaluation.py`: Generation + evaluation
- `pipeline_with_tools.py`: Tool-augmented generation
- `pipeline_with_guardrails.py`: Guardrails (input filtering)
## 📄 License & Credits

MIT License. Powered by the OpenAI Agents SDK.