Agents SDK Models 🤖🔌
A collection of model adapters and workflow utilities for the OpenAI Agents SDK, enabling you to use various LLM providers and build practical agent pipelines with a unified interface!
🌟 Features
- 🔄 Unified Factory: Use the get_llm function to easily get model instances for different providers.
- 🧩 Multiple Providers: Support for OpenAI, Ollama, Google Gemini, and Anthropic Claude.
- 📊 Structured Output: All models instantiated via get_llm support structured output using Pydantic models.
- 🏗️ Pipeline Class: Easily compose generation, evaluation, tool integration, and guardrails in one workflow.
- 🛡️ Guardrails: Add input/output guardrails for safe and compliant agent behavior.
- 🛠️ Simple Interface: Minimal code, maximum flexibility.
🛠️ Installation
From PyPI (Recommended)
pip install agents-sdk-models
# For examples with structured output (includes pydantic)
pip install agents-sdk-models[examples]
From Source
git clone https://github.com/kitfactory/agents-sdk-models.git
cd agents-sdk-models
python -m venv .venv
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
pip install -e .[dev]
🚀 Quick Start: Using get_llm
The get_llm function supports specifying the model and provider, or just the model (provider is inferred):
from agents_sdk_models import get_llm
# Specify both model and provider
llm = get_llm(model="gpt-4o-mini", provider="openai")
# Or just the model (provider inferred)
llm = get_llm("claude-3-5-sonnet-latest")
Example: Structured Output
from agents import Agent, Runner
from agents_sdk_models import get_llm
from pydantic import BaseModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str

llm = get_llm("gpt-4o-mini")

agent = Agent(
    name="Weather Reporter",
    model=llm,
    instructions="You are a helpful weather reporter.",
    output_type=WeatherInfo
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)
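Because output_type is a Pydantic model, the Agents SDK returns final_output as a WeatherInfo instance, so the typed fields can be read directly:
weather = result.final_output  # a WeatherInfo instance
print(weather.location, weather.temperature, weather.condition)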
🏗️ Pipeline Class: Easy LLM Workflows
The Pipeline class lets you flexibly build LLM agent workflows by combining generation instructions, evaluation instructions, tools, and guardrails.
Basic Usage
from agents_sdk_models.pipeline import Pipeline

pipeline = Pipeline(
    name="simple_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions=None,  # No evaluation
    model="gpt-4o"
)

result = pipeline.run("A story about a robot learning to paint")
With Evaluation
pipeline = Pipeline(
    name="evaluated_generator",
    generation_instructions="""
    You are a helpful assistant that generates creative stories.
    Please generate a short story based on the user's input.
    """,
    evaluation_instructions="""
    You are a story evaluator. Please evaluate the generated story based on:
    1. Creativity (0-100)
    2. Coherence (0-100)
    3. Emotional impact (0-100)
    Calculate the average score and provide specific comments for each aspect.
    """,
    model="gpt-4o",
    threshold=70
)

result = pipeline.run("A story about a robot learning to paint")
With Tools
from agents import function_tool

@function_tool
def search_web(query: str) -> str:
    # Implement actual web search here
    return f"Search results for: {query}"

@function_tool
def get_weather(location: str) -> str:
    # Implement actual weather API here
    return f"Weather in {location}: Sunny, 25°C"

tools = [search_web, get_weather]

pipeline = Pipeline(
    name="tooled_generator",
    generation_instructions="""
    You are a helpful assistant that can use tools to gather information.
    You have access to the following tools:
    1. search_web: Search the web for information
    2. get_weather: Get current weather for a location
    Please use these tools when appropriate to provide accurate information.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    generation_tools=tools
)

result = pipeline.run("What's the weather like in Tokyo?")
With Guardrails (input_guardrails)
from agents import Agent, input_guardrail, GuardrailFunctionOutput, InputGuardrailTripwireTriggered, Runner, RunContextWrapper
from agents_sdk_models.pipeline import Pipeline
from pydantic import BaseModel

class MathHomeworkOutput(BaseModel):
    is_math_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking you to do their math homework.",
    output_type=MathHomeworkOutput,
)

@input_guardrail
async def math_guardrail(ctx: RunContextWrapper, agent: Agent, input: str):
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_math_homework,
    )

pipeline = Pipeline(
    name="guardrail_pipeline",
    generation_instructions="""
    You are a helpful assistant. Please answer the user's question.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    input_guardrails=[math_guardrail],
)

try:
    result = pipeline.run("Can you help me solve for x: 2x + 3 = 11?")
    print(result)
except InputGuardrailTripwireTriggered:
    print("[Guardrail Triggered] Math homework detected. Request blocked.")
With Dynamic Prompt
You can provide a custom function to dynamically build the prompt.
from agents_sdk_models.pipeline import Pipeline

def my_dynamic_prompt(user_input: str) -> str:
    # Example: Uppercase the user input and add a prefix
    return f"[DYNAMIC PROMPT] USER SAID: {user_input.upper()}"

pipeline = Pipeline(
    name="dynamic_prompt_example",
    generation_instructions="""
    You are a helpful assistant. Respond to the user's request.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    dynamic_prompt=my_dynamic_prompt
)

result = pipeline.run("Tell me a joke.")
print(result)
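Because dynamic_prompt is simply a callable from the user input string to a prompt string, it can also inject extra context such as retrieved notes or few-shot examples before generation. A minimal sketch under that assumption; fetch_relevant_notes is a hypothetical stand-in for your own retrieval logic:
def fetch_relevant_notes(query: str) -> str:
    # Placeholder retrieval step; replace with your own search or vector-store lookup.
    return "The user prefers short, family-friendly jokes."

def contextual_prompt(user_input: str) -> str:
    # Wrap the raw input with whatever context the generator should see.
    notes = fetch_relevant_notes(user_input)
    return f"Context:\n{notes}\n\nUser request: {user_input}"

pipeline = Pipeline(
    name="contextual_prompt_example",
    generation_instructions="""
    You are a helpful assistant. Use the provided context when answering.
    """,
    evaluation_instructions=None,
    model="gpt-4o",
    dynamic_prompt=contextual_prompt
)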
🖥️ Supported Environments
- Python 3.9+
- OpenAI Agents SDK 0.0.9+
- Windows, Linux, MacOS
💡 Why use this?
- Unified: One interface for all major LLM providers
- Flexible: Compose generation, evaluation, tools, and guardrails as you like
- Easy: Minimal code to get started, powerful enough for advanced workflows
- Safe: Guardrails for compliance and safety
📂 Examples
See the examples/ directory for more advanced usage:
- pipeline_simple_generation.py: Minimal generation
- pipeline_with_evaluation.py: Generation + evaluation
- pipeline_with_tools.py: Tool-augmented generation
- pipeline_with_guardrails.py: Guardrails (input filtering)
📄 License & Credits
MIT License. Powered by OpenAI Agents SDK.