
Agents SDK Models 🤖🔌


A collection of model adapters for the OpenAI Agents SDK, letting you use various LLM providers through a unified interface! 🚀

🌟 Features

  • 🔄 Unified Interface: Use the same OpenAI Agents SDK interface with multiple model providers
  • 🧩 Multiple Models: Support for Ollama, Google Gemini, and Anthropic Claude
  • 📊 Structured Output: All models support structured output using Pydantic models
  • 🌊 Streaming: Support for streaming responses (where available)
  • 🤔 Thinking: Enable extended thinking capabilities for complex reasoning (Claude)
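The "unified interface" idea behind these adapters can be sketched in plain Python. The names below (`ChatModel`, `complete`, the stand-in adapter classes) are hypothetical and exist only to illustrate the adapter pattern; they are not this package's actual API:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Hypothetical unified interface: one method, many providers."""
    def complete(self, prompt: str) -> str: ...

class EchoOllama:
    """Stand-in for an adapter wrapping a local Ollama model."""
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

class EchoClaude:
    """Stand-in for an adapter wrapping Anthropic Claude."""
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"

def run_agent(model: ChatModel, prompt: str) -> str:
    # Agent code is written once against the interface, so swapping
    # providers means swapping only the model object you pass in.
    return model.complete(prompt)

print(run_agent(EchoOllama(), "hello"))
print(run_agent(EchoClaude(), "hello"))
```

In the real package, the same `Agent`/`Runner` code runs unchanged whether you hand it an `OllamaModel`, `GeminiModel`, or `ClaudeModel`, as the Quick Start examples below show.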

🛠️ Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/agents-sdk-models.git
cd agents-sdk-models

# Create and activate a virtual environment
python -m venv .venv
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # Linux/macOS

# Install the package in development mode
pip install -e .
```

🚀 Quick Start

Ollama

```python
import asyncio
from agents import Agent, Runner
from agents_sdk_models import OllamaModel

async def main():
    # Initialize the Ollama model
    model = OllamaModel(
        model="llama3",  # or any other model available in your Ollama instance
        temperature=0.7
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

Google Gemini

```python
import asyncio
import os
from agents import Agent, Runner
from agents_sdk_models import GeminiModel

async def main():
    # Get the API key from an environment variable
    api_key = os.environ.get("GOOGLE_API_KEY")

    # Initialize the Gemini model
    model = GeminiModel(
        model="gemini-1.5-pro",
        temperature=0.7,
        api_key=api_key
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

Anthropic Claude

```python
import asyncio
import os
from agents import Agent, Runner
from agents_sdk_models import ClaudeModel

async def main():
    # Get the API key from an environment variable
    api_key = os.environ.get("ANTHROPIC_API_KEY")

    # Initialize the Claude model
    model = ClaudeModel(
        model="claude-3-sonnet-20240229",
        temperature=0.7,
        api_key=api_key,
        thinking=True  # Enable extended thinking for complex reasoning
    )

    # Create an agent with the model
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=model
    )

    # Run the agent
    response = await Runner.run(agent, "What is your name and what can you do?")
    print(response.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

📊 Structured Output

All models support structured output using Pydantic models:

```python
from pydantic import BaseModel
from typing import List

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str

class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]

# Create an agent with structured output
agent = Agent(
    name="Weather Reporter",
    model=model,
    instructions="You are a helpful weather reporter.",
    output_type=WeatherReport
)

# Get the structured response (run this inside an async function)
response = await Runner.run(agent, "What's the weather like in Tokyo, Osaka, and Sapporo?")
weather_report = response.final_output  # a WeatherReport instance
```
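Because `final_output` is an ordinary Pydantic model, you can consume it like any other. The provider-free sketch below shows what working with a `WeatherReport` looks like; the JSON payload is made up for illustration and is not real agent output:

```python
from typing import List
from pydantic import BaseModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str

class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]

# A hypothetical payload, shaped like what the agent would return
raw = """
{
  "report_date": "2024-05-01",
  "locations": [
    {"location": "Tokyo", "temperature": 21.5,
     "condition": "sunny", "recommendation": "Light jacket."}
  ]
}
"""

# Pydantic v2 validates the JSON against the schema in one call
report = WeatherReport.model_validate_json(raw)
for info in report.locations:
    print(f"{info.location}: {info.temperature}°C, {info.condition}")
```

Invalid or missing fields raise a `pydantic.ValidationError`, so a malformed model response fails loudly instead of propagating bad data.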

🔧 Supported Environments

  • Operating Systems: Windows, macOS, Linux
  • Python Version: 3.11+
  • Dependencies:
    • openai>=1.66.2
    • openai-agents==0.0.4
    • pydantic>=2.10, <3
    • ollama>=0.4.7 (for Ollama support)

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgements
