# Agents SDK Models 🤖🔌

A collection of model adapters for the OpenAI Agents SDK, allowing you to use various LLM providers through a single, unified interface! 🚀
## 🌟 Features

- 🔄 **Unified Interface**: Use the same OpenAI Agents SDK interface with multiple model providers
- 🧩 **Multiple Models**: Support for OpenAI, Ollama, Google Gemini, and Anthropic Claude
- 📊 **Structured Output**: All models support structured output using Pydantic models
## 🛠️ Installation

### From PyPI (Recommended)

```bash
# Install from PyPI
pip install agents-sdk-models

# For examples with structured output
pip install agents-sdk-models[examples]
```
### From Source

```bash
# Clone the repository
git clone https://github.com/kitfactory/agents-sdk-models.git
cd agents-sdk-models

# Create and activate a virtual environment
python -m venv .venv
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # Linux/macOS

# Install the package in development mode
pip install -e .
```
## 🚀 Quick Start

### LlmModel (Example: OpenAI)

```python
import asyncio
import os

from agents import Agent, Runner
from agents_sdk_models import LlmModel


async def main():
    # Get the API key from an environment variable (if needed)
    # api_key = os.environ.get("OPENAI_API_KEY")  # Uncomment if using OpenAI, Google, or Anthropic

    # Initialize the LlmModel model (this example uses OpenAI's gpt-4o-mini)
    model = LlmModel(
        provider="openai",    # Can be "openai", "google", "anthropic", or "ollama"
        model="gpt-4o-mini",  # Specify the model name for the chosen provider
        temperature=0.7,
        # api_key=api_key     # Uncomment and provide the API key if needed
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
### Ollama

```python
import asyncio

from agents import Agent, Runner
from agents_sdk_models import OllamaModel


async def main():
    # Initialize the Ollama model
    model = OllamaModel(
        model="llama3",  # or any other model available in your Ollama instance
        temperature=0.7
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
### Google Gemini

```python
import asyncio
import os

from agents import Agent, Runner
from agents_sdk_models import GeminiModel


async def main():
    # Get the API key from an environment variable
    api_key = os.environ.get("GOOGLE_API_KEY")

    # Initialize the Gemini model
    model = GeminiModel(
        model="gemini-1.5-pro",
        temperature=0.7,
        api_key=api_key
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
### Anthropic Claude

```python
import asyncio
import os

from agents import Agent, Runner
from agents_sdk_models import ClaudeModel


async def main():
    # Get the API key from an environment variable
    api_key = os.environ.get("ANTHROPIC_API_KEY")

    # Initialize the Claude model
    model = ClaudeModel(
        model="claude-3-sonnet-20240229",
        temperature=0.7,
        api_key=api_key,
        thinking=True  # Enable thinking for complex reasoning
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
## 📊 Structured Output

All models support structured output using Pydantic models:

```python
from typing import List

from pydantic import BaseModel


class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str


class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]


# Create an agent with structured output
agent = Agent(
    name="Weather Reporter",
    model=model,
    instructions="You are a helpful weather reporter.",
    output_type=WeatherReport
)

# Get a structured response (inside an async function, as in the examples above)
response = await Runner.run(agent, "What's the weather like in Tokyo, Osaka, and Sapporo?")
weather_report = response.final_output  # This is a WeatherReport object
```
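To see what this buys you without calling any provider, the same Pydantic schema can validate a raw dictionary directly, much as a structured-output response is validated against `output_type`. The payload below is made up for illustration:

```python
from typing import List

from pydantic import BaseModel


class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str


class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]


# Hypothetical raw payload, shaped like a model's JSON output
raw = {
    "report_date": "2024-05-01",
    "locations": [
        {
            "location": "Tokyo",
            "temperature": 21.5,
            "condition": "Sunny",
            "recommendation": "A light jacket is enough.",
        }
    ],
}

# Pydantic checks types and required fields, raising ValidationError on mismatch
report = WeatherReport.model_validate(raw)
print(report.locations[0].location)     # Tokyo
print(report.locations[0].temperature)  # 21.5
```

Because the result is a typed object rather than free-form text, downstream code can access `report.locations[0].temperature` safely instead of parsing strings.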
## 🔧 Supported Environments

- **Operating Systems**: Windows, macOS, Linux
- **Python Version**: 3.9+
- **Dependencies**:
  - `openai>=1.73.0`
  - `openai-agents==0.0.9`
  - `pydantic>=2.10,<3` (for examples with structured output)
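A minimal sketch for checking the Python requirement before installing; the `(3, 9)` floor mirrors the 3.9+ requirement above:

```python
import sys

# agents-sdk-models requires Python 3.9 or newer
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"Python 3.9+ required, found {sys.version_info.major}.{sys.version_info.minor}"
    )
print("Python version OK:", sys.version.split()[0])
```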
## 📝 License

This project is licensed under the MIT License; see the LICENSE file for details.
## 🙏 Acknowledgements