# Agents SDK Models 🤖🔌
A collection of model adapters and workflow utilities for the OpenAI Agents SDK, enabling you to use various LLM providers and build practical agent pipelines with a unified interface!
## ⚡ Recommended: Flow/Step Architecture - Super Simple!

🎉 **New in v0.0.22**: We now recommend the Flow/Step architecture with GenAgent. It's incredibly simple and powerful!

### 🚀 Just 3 Lines to Get Started!
```python
from agents_sdk_models import create_simple_gen_agent, Flow
import asyncio

# Step 1: Create a GenAgent (like AgentPipeline, but better!)
gen_agent = create_simple_gen_agent(
    name="simple_gen",
    instructions="You are a helpful assistant. Answer user questions concisely.",
    model="gpt-4o-mini"
)

# Step 2: Create a Flow (even simpler now!)
flow = Flow(steps=gen_agent)  # Single step - that's it!

# Step 3: Run it!
result = asyncio.run(flow.run(input_data="Hello! Tell me about Japanese culture briefly."))
print(result.shared_state["simple_gen_result"])  # Your response is ready!
```
### 🚀 NEW: Ultra-Simple Flow Creation!

You can now create flows in three different ways:
```python
# 1. Single step (NEW!)
flow = Flow(steps=gen_agent)

# 2. Sequential steps (NEW!)
flow = Flow(steps=[step1, step2, step3])  # Auto-connects them!

# 3. Traditional (for complex flows)
flow = Flow(start="step1", steps={"step1": step1, "step2": step2})
```
### 🎯 Why is it SO Much Simpler?
| LangChain/LangGraph (~50-100+ lines) | GenAgent + Flow (3-5 lines) |
|---|---|
| 🔧 Complex imports (10+ modules) | ✨ One import - everything included |
| 📝 Manual prompt templates | 🎯 Simple instruction strings |
| 🧩 Graph/Chain building (20+ lines) | 🔄 Auto-generated workflows |
| ⚙️ Custom error handling | 🛡️ Built-in error recovery |
| 🔁 Manual retry logic | 🔄 Auto-retry with evaluation |
| 🛠️ State management code | 📦 Handled automatically |
### 🌟 Real-World Example: Content Generator with Evaluation
```python
from agents_sdk_models import create_evaluated_gen_agent, Context
import asyncio

# Create a GenAgent with evaluation (replaces complex AgentPipeline setup)
gen_agent = create_evaluated_gen_agent(
    name="eval_gen",
    generation_instructions="Explain the future of AI in about 200 characters, clearly and accurately.",
    evaluation_instructions="Evaluate whether the answer is about 200 characters, clear, and accurate.",
    model="gpt-4o-mini"
)

# Run with evaluation
context = Context()
context.add_user_message("Please explain the future of AI in about 200 characters.")
result = asyncio.run(gen_agent.run("Please explain the future of AI in about 200 characters.", context))
print(result.shared_state["eval_gen_result"])
print("Evaluation:", result.shared_state.get("eval_gen_evaluation"))
# Automatically handles: generation → evaluation → feedback
```
### 🎨 Compared to LangChain/LangGraph - a HUGE Difference!
```python
# LangChain/LangGraph way (~80+ lines, complex setup)
"""
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.schema import BaseOutputParser
from langchain.callbacks import BaseCallbackHandler
from langchain.schema.runnable import RunnablePassthrough
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
# ... (~15 lines of imports) ...

class AgentState(TypedDict):
    input: str
    generation: str
    evaluation: dict
    retry_count: int
# ... (~10 lines of state definitions) ...

def generation_node(state):
    # ... (~15 lines of generation logic) ...

def evaluation_node(state):
    # ... (~20 lines of evaluation logic) ...

def should_retry(state):
    # ... (~10 lines of retry-decision logic) ...

workflow = StateGraph(AgentState)
workflow.add_node("generate", generation_node)
workflow.add_node("evaluate", evaluation_node)
workflow.add_conditional_edges(
    "evaluate",
    should_retry,
    {"retry": "generate", "end": END}
)
# ... (~10 lines of graph construction) ...
"""

# GenAgent way (just a few lines!)
gen_agent = create_simple_gen_agent(
    name="simple_setup",
    instructions="...",
    model="gpt-4o-mini"
)

# Use GenAgent directly - no complex Flow needed!
result = asyncio.run(gen_agent.run("Your input", Context()))  # Done!
```
### 🏗️ Advanced Features Made Simple
```python
# Simple Flow example
from agents_sdk_models import Context, FunctionStep, create_simple_flow
import asyncio

def process_greeting(user_input, ctx):
    """Process the greeting using data stored in the context."""
    name = ctx.shared_state.get("user_name", "Anonymous")
    task = ctx.shared_state.get("task", "something")
    greeting = f"Hello, {name}! I'll help you with {task}."
    ctx.shared_state["greeting"] = greeting
    ctx.finish()
    return ctx

# Create a simple flow
context = Context()
context.shared_state["user_name"] = "Taro"
context.shared_state["task"] = "programming learning"

greeting_step = FunctionStep("greeting", process_greeting)
flow = create_simple_flow([("greeting", greeting_step)], context)
result = asyncio.run(flow.run())
print(result.shared_state.get("greeting"))  # "Hello, Taro! I'll help you with programming learning."
```
#### Example: Conditional Flow

```python
from agents_sdk_models import Context, ConditionStep, FunctionStep, Flow
import asyncio

# Create a context with the user's level
context = Context()
context.shared_state["user_level"] = "beginner"

# Condition function
def is_beginner(ctx):
    return ctx.shared_state.get("user_level") == "beginner"

# Action functions
def beginner_action(user_input, ctx):
    ctx.shared_state["message"] = "Starting beginner tutorial."
    ctx.finish()
    return ctx

def advanced_action(user_input, ctx):
    ctx.shared_state["message"] = "Displaying advanced content."
    ctx.finish()
    return ctx

# Build the conditional flow
condition_step = ConditionStep("condition", is_beginner, "beginner", "advanced")
beginner_step = FunctionStep("beginner", beginner_action)
advanced_step = FunctionStep("advanced", advanced_action)

flow = Flow(
    start="condition",
    steps={
        "condition": condition_step,
        "beginner": beginner_step,
        "advanced": advanced_step
    },
    context=context
)

result = asyncio.run(flow.run())
print(result.shared_state.get("message"))  # "Starting beginner tutorial."
```
✨ Benefits You'll Love:
- 🔄 More Flexibility: Compose complex workflows using modular steps
- 🧩 Better Reusability: Steps can be reused across different flows
- 🎯 Cleaner Architecture: Clear separation of concerns
- 🚀 Future-Proof: Designed for scalability and extensibility
- 💡 Intuitive: If you understand AgentPipeline, you already understand this!
Note: Compared to LangChain/LangGraph's 50-100+ lines of complex setup, GenAgent + Flow achieves the same functionality in just 3-5 lines! AgentPipeline is now deprecated and will be removed in v0.1.0.
## 🌟 Features

- 🔄 **Unified Factory**: Use the `get_llm` function to easily get model instances for different providers.
- 🧩 **Multiple Providers**: Support for OpenAI, Ollama, Google Gemini, and Anthropic Claude.
- 📊 **Structured Output**: All models instantiated via `get_llm` support structured output using Pydantic models.
- 🏗️ **AgentPipeline Class**: Easily compose generation, evaluation, tool integration, and guardrails in one workflow.
- 🛡️ **Guardrails**: Add input/output guardrails for safe and compliant agent behavior.
- 🛠️ **Simple Interface**: Minimal code, maximum flexibility.
- ✨ **Zero-Code Evaluation & Self-Improvement**: Just specify model names and system prompts to automatically run generation, evaluation, and feedback-driven retries.
- 🔍 **Custom Console Tracing**: Console tracing is enabled by default using `ConsoleTracingProcessor`. While the OpenAI Agents SDK uses OpenAI's Tracing service by default (requiring `OPENAI_API_KEY`), this library provides a lightweight console-based tracer that works with any provider. You can disable tracing entirely with `disable_tracing()`.
## v0.22 Release Notes

- 🚀 **Major: New Flow Constructor** - Added ultra-simple Flow creation with 3 modes:
  - Single step: `Flow(steps=gen_agent)`
  - Sequential steps: `Flow(steps=[step1, step2, step3])` (auto-connects)
  - Traditional: `Flow(start="step1", steps={"step1": step1, "step2": step2})`
- 🚀 **Enhanced Flow.run()** - Added an `input_data` parameter (preferred over `initial_input`)
- ✨ **GenAgent + Flow Architecture** - Now recommended over AgentPipeline for new projects
- ⚠️ **AgentPipeline Deprecation** - AgentPipeline is now deprecated and will be removed in v0.1.0
- 📚 **Complete Documentation Update** - All tutorials and examples updated to showcase the new Flow features
## v0.21 Release Notes

- Fixed the synchronous `get_available_models` function to work properly in environments with running event loops (e.g., Jupyter Notebook, IPython)
- Support dynamic model discovery for Ollama via the `/api/tags` endpoint
## v0.20 Release Notes

- Support the `OLLAMA_BASE_URL` environment variable for Ollama configuration
- Removed the OpenAI Agents SDK standard Trace and use console-only tracing for better compatibility
## v0.19 Release Notes

- Added `get_available_models()` and `get_available_models_async()` functions to retrieve available model names from different providers
- Updated model lists to the latest versions: Claude 4 (Opus/Sonnet), Gemini 2.5 (Pro/Flash), latest OpenAI models (gpt-4.1, o3, o4-mini)
## v0.18 Release Notes

- Support the OpenAI Agents SDK Trace feature, with console tracing enabled by default
- Added an `evaluation_model` parameter to switch the evaluation model separately from the generation model
## 🛠️ Installation

### From PyPI (Recommended)

```bash
pip install agents-sdk-models
```

### From Source

```bash
git clone https://github.com/kitfactory/agents-sdk-models.git
cd agents-sdk-models
python -m venv .venv
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # Linux/macOS
pip install -e .[dev]
```
## 🧪 Tests & Coverage

Run the tests and generate a coverage report:

```bash
pytest --cov=agents_sdk_models --cov-report=term-missing
```

- ✅ All tests currently pass.
- The coverage badge indicates the line coverage percentage for the `agents_sdk_models` package (measured by pytest-cov).
## 🚀 Quick Start: Using get_llm

The `get_llm` function accepts either the model and provider, or just the model (the provider is inferred):

```python
from agents_sdk_models import get_llm

# Specify both model and provider
llm = get_llm(model="gpt-4o-mini", provider="openai")

# Or just the model (provider inferred)
llm = get_llm("claude-3-5-sonnet-latest")
```
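Provider inference presumably keys off the model-name prefix. A minimal sketch of the idea (a hypothetical helper, not the library's actual implementation) might look like this:

```python
def infer_provider(model: str) -> str:
    """Guess the provider from a model-name prefix (illustrative only)."""
    if model.startswith(("gpt-", "o1", "o3", "o4")):
        return "openai"
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith("gemini"):
        return "google"
    # Anything else is assumed to be a local Ollama model name
    return "ollama"
```

This keeps the common case to a single positional argument while still allowing an explicit `provider=` override.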
### Example: Structured Output

```python
from agents import Agent, Runner
from agents_sdk_models import get_llm
from pydantic import BaseModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str

llm = get_llm("gpt-4o-mini")
agent = Agent(
    name="Weather Reporter",
    model=llm,
    instructions="You are a helpful weather reporter.",
    output_type=WeatherInfo
)
result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)
```
### Example: Tracing

```python
from agents_sdk_models import enable_console_tracing, disable_tracing
from agents_sdk_models.pipeline import AgentPipeline
from agents.tracing import trace

# Enable console tracing (uses ConsoleTracingProcessor);
# disable_tracing() turns tracing off entirely
enable_console_tracing()

pipeline = AgentPipeline(
    name="trace_example",
    generation_instructions="You are a helpful assistant.",
    evaluation_instructions=None,
    model="gpt-4o-mini"
)

# Run the pipeline under a trace context
with trace("MyTrace"):
    result = pipeline.run("Hello, world!")
print(result)
```
### Example: ClearifyAgent for Ambiguous Requests

```python
from agents_sdk_models import create_simple_clearify_agent, Context
import asyncio

# Create a ClearifyAgent for handling ambiguous requests
agent = create_simple_clearify_agent(
    name="clarify_agent",
    instructions="Ask questions to clarify ambiguous user requests. When the request is clear enough, output the clarified request.",
    model="gpt-4o-mini"
)

# Process an ambiguous request
ambiguous_request = "I want to create an API"
context = Context()
context.add_user_message(ambiguous_request)
result = asyncio.run(agent.run(ambiguous_request, context))
print("Original:", ambiguous_request)
print("Clarified:", result.shared_state.get("clarify_agent_result", "Still clarifying"))
```
### Example: Multi-Provider LLM Access

```python
from agents_sdk_models import get_llm

# Try different providers
providers = [
    ("openai", "gpt-4o-mini"),
    ("anthropic", "claude-3-haiku-20240307"),
    ("google", "gemini-1.5-flash"),
    ("ollama", "llama3.1:8b")
]

for provider, model in providers:
    try:
        llm = get_llm(provider=provider, model=model)
        print(f"✓ {provider}: {model} - Ready")
    except Exception as e:
        print(f"✗ {provider}: {model} - Error: {e}")
```
### Example: Get Available Models

```python
from agents_sdk_models import get_available_models, get_available_models_async
import asyncio

# Get models from all providers (synchronous)
models = get_available_models(["openai", "google", "anthropic", "ollama"])
print("Available models:", models)

# Get models from specific providers (asynchronous)
async def main():
    models = await get_available_models_async(["openai", "google"])
    for provider, model_list in models.items():
        print(f"{provider}: {model_list}")

asyncio.run(main())

# Custom Ollama URL
models = get_available_models(["ollama"], ollama_base_url="http://custom-host:11434")
```
## 🏗️ AgentPipeline Class: Easy LLM Workflows (⚠️ Deprecated)

> ⚠️ **Deprecated**: AgentPipeline is deprecated as of v0.0.22 and will be removed in v0.1.0. Please use GenAgent with the Flow/Step architecture instead.
The AgentPipeline class provides an all-in-one solution for AI agent workflows. It:

- Generates content based on user-defined instructions
- Evaluates the generated content with scoring and comments
- Integrates custom tools (via `function_tool`) for external data or computation
- Applies input/output guardrails (via `input_guardrail`) for safety and compliance
- Manages session history and context
- Supports configurable retries with automatic feedback (via `retry_comment_importance`)
Key initialization parameters:

- `generation_instructions` (str): System prompt for content generation
- `evaluation_instructions` (str, optional): System prompt for content evaluation
- `model` (str, optional): LLM model to use (e.g., "gpt-4o-mini")
- `evaluation_model` (str, optional): LLM model to use for evaluation (overrides `model`). Note: you can specify a different provider for `evaluation_model`, such as using OpenAI for generation and a local Ollama model for evaluation, to reduce cost and improve performance.
- `generation_tools` (list, optional): Tools for the generation stage
- `input_guardrails`, `output_guardrails` (list, optional): Guardrails for input/output
- `threshold` (int): Minimum score to accept generated content
- `retries` (int): Number of retry attempts on a low evaluation score
- `retry_comment_importance` (list[str], optional): Importance levels (`"serious"`, `"normal"`, `"minor"`) whose comments will be prepended to the prompt on retry
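To make the interaction of `threshold`, `retries`, and `retry_comment_importance` concrete, here is a conceptual sketch of the retry loop in plain Python. This is not the library's actual code; the `generate`/`evaluate` callables and the shape of the evaluation dict are illustrative assumptions:

```python
def run_with_retries(generate, evaluate, user_input,
                     threshold=70, retries=2,
                     retry_comment_importance=("serious",)):
    """Conceptual retry loop: regenerate until the evaluation score
    meets the threshold, feeding important comments back into the prompt."""
    prompt = user_input
    for _ in range(retries + 1):
        output = generate(prompt)
        evaluation = evaluate(output)  # assumed shape: {"score": int, "comments": [...]}
        if evaluation["score"] >= threshold:
            return output
        # Prepend evaluator comments of the configured importance levels
        feedback = [c["content"] for c in evaluation["comments"]
                    if c["importance"] in retry_comment_importance]
        prompt = "\n".join(feedback + [user_input])
    return None  # every attempt scored below the threshold
```

The key point is that only comments whose importance matches `retry_comment_importance` are fed back, so minor nitpicks do not pollute the retry prompt.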
### Basic Usage

```python
from agents_sdk_models.pipeline import AgentPipeline

pipeline = AgentPipeline(
    name="simple_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions=None,  # No evaluation
    model="gpt-4o"
)
result = pipeline.run("A story about a robot learning to paint")
```
### With Evaluation

```python
pipeline = AgentPipeline(
    name="evaluated_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions="""
    You are a story evaluator. Please evaluate the generated story based on:
    1. Creativity (0-100)
    2. Coherence (0-100)
    3. Emotional impact (0-100)
    Calculate the average score and provide specific comments for each aspect.
    """,
    model="gpt-4o",
    threshold=70
)
result = pipeline.run("A story about a robot learning to paint")
```
### With Tools

```python
from agents import function_tool

@function_tool
def search_web(query: str) -> str:
    # Implement an actual web search here
    return f"Search results for: {query}"

@function_tool
def get_weather(location: str) -> str:
    # Implement an actual weather API call here
    return f"Weather in {location}: Sunny, 25°C"

tools = [search_web, get_weather]

pipeline = AgentPipeline(
    name="tooled_generator",
    generation_instructions="""
    You are a helpful assistant that can use tools to gather information.
    You have access to the following tools:
    1. search_web: Search the web for information
    2. get_weather: Get current weather for a location
    Please use these tools when appropriate to provide accurate information.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    generation_tools=tools
)
result = pipeline.run("What's the weather like in Tokyo?")
```
### With Guardrails (input_guardrails)

```python
from agents import (
    Agent, input_guardrail, GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered, Runner, RunContextWrapper
)
from agents_sdk_models.pipeline import AgentPipeline
from pydantic import BaseModel

class MathHomeworkOutput(BaseModel):
    is_math_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking you to do their math homework.",
    output_type=MathHomeworkOutput,
)

@input_guardrail
async def math_guardrail(ctx: RunContextWrapper, agent: Agent, input: str):
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_math_homework,
    )

pipeline = AgentPipeline(
    name="guardrail_pipeline",
    generation_instructions="""
    You are a helpful assistant. Please answer the user's question.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    input_guardrails=[math_guardrail],
)

try:
    result = pipeline.run("Can you help me solve for x: 2x + 3 = 11?")
    print(result)
except InputGuardrailTripwireTriggered:
    print("[Guardrail Triggered] Math homework detected. Request blocked.")
```
### With Dynamic Prompt

You can provide a custom function to dynamically build the prompt:

```python
from agents_sdk_models.pipeline import AgentPipeline

def my_dynamic_prompt(user_input: str) -> str:
    # Example: uppercase the user input and add a prefix
    return f"[DYNAMIC PROMPT] USER SAID: {user_input.upper()}"

pipeline = AgentPipeline(
    name="dynamic_prompt_example",
    generation_instructions="""
    You are a helpful assistant. Respond to the user's request.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    dynamic_prompt=my_dynamic_prompt
)
result = pipeline.run("Tell me a joke.")
print(result)
```
## 🖥️ Supported Environments

- Python 3.9+
- OpenAI Agents SDK 0.0.9+
- Windows, Linux, macOS
## 💡 Why use this?
- Unified: One interface for all major LLM providers
- Flexible: Compose generation, evaluation, tools, and guardrails as you like
- Easy: Minimal code to get started, powerful enough for advanced workflows
- Safe: Guardrails for compliance and safety
- Self-Improving: Automatic feedback and retry mechanism with minimal configuration
## 📂 Examples

See the `examples/` directory for more advanced usage:

- `pipeline_simple_generation.py`: Minimal generation
- `pipeline_with_evaluation.py`: Generation + evaluation
- `pipeline_with_tools.py`: Tool-augmented generation
- `pipeline_with_guardrails.py`: Guardrails (input filtering)
## 📄 License & Credits

MIT License. Powered by the OpenAI Agents SDK.