
Model adapters for OpenAI Agents SDK


Agents SDK Models 🤖🔌


A collection of model adapters for OpenAI Agents SDK, allowing you to use various LLM providers with a unified interface via the get_llm function! 🚀

🌟 Features

  • 🔄 Unified Factory: Use the get_llm function to easily get model instances for different providers.
  • 🧩 Multiple Providers: Support for OpenAI, Ollama, Google Gemini, and Anthropic Claude.
  • 📊 Structured Output: All models instantiated via get_llm support structured output using Pydantic models.
  • 🏭 Simple Interface: Just specify the provider and optionally the model name.
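Internally, a unified factory like this typically dispatches on the provider name through a registry of adapter constructors, each with a sensible default model. The sketch below is a hypothetical illustration of that pattern, not the library's actual code; the names `make_llm`, `FakeModel`, and `_REGISTRY` are made up:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

# Hypothetical stand-in for the real provider adapter classes.
@dataclass
class FakeModel:
    provider: str
    model: str

# Registry mapping provider names to (constructor, default model name).
_REGISTRY: Dict[str, Tuple[type, str]] = {
    "openai": (FakeModel, "gpt-4o-mini"),
    "ollama": (FakeModel, "llama3"),
}

def make_llm(provider: str, model: Optional[str] = None) -> FakeModel:
    """Dispatch on provider, falling back to that provider's default model."""
    try:
        ctor, default_model = _REGISTRY[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}")
    return ctor(provider=provider, model=model or default_model)
```

This keeps caller code provider-agnostic: adding a new backend means registering one more entry rather than touching every call site.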

🛠️ Installation

From PyPI (Recommended)

# Install from PyPI
pip install agents-sdk-models

# For the structured output examples, install pydantic explicitly
# (the [examples] extra is not currently configured in pyproject.toml)
pip install agents-sdk-models "pydantic>=2.0,<3"

From Source

# Clone the repository
git clone https://github.com/kitfactory/agents-sdk-models.git
cd agents-sdk-models

# Create and activate a virtual environment
python -m venv .venv
.venv\Scripts\activate  # Windows
source .venv/bin/activate  # Linux/Mac

# Install the package in development mode
pip install -e ".[dev]"  # Install with dev dependencies (pytest, etc.)

🚀 Quick Start: Using get_llm

The get_llm function provides a single entry point to obtain model instances for different providers.

import asyncio
import os
from agents import Agent, Runner
# Import the factory function
from agents_sdk_models import get_llm

async def main():
    # --- Example: OpenAI ---
    # Requires OPENAI_API_KEY environment variable
    openai_api_key = os.environ.get("OPENAI_API_KEY")
    if openai_api_key:
        print("\nRunning OpenAI example...")
        # Get the model using get_llm
        model_openai = get_llm(
            provider="openai",      # Specify the provider
            model="gpt-4o-mini",    # Specify the model name (optional, uses default if None)
            temperature=0.7,
            api_key=openai_api_key # Pass API key if required
        )
        agent_openai = Agent(
            name="Assistant",
            instructions="You are a helpful assistant.",
            model=model_openai
        )
        response_openai = await Runner.run(agent_openai, "What is your name and what can you do?")
        print(response_openai.final_output)
    else:
        print("OPENAI_API_KEY not found. Skipping OpenAI example.")

    # --- Example: Ollama ---
    # Assumes Ollama server is running locally
    print("\nRunning Ollama example...")
    try:
        # Get the model using get_llm
        model_ollama = get_llm(
            provider="ollama",
            model="llama3", # Specify the model name available in your Ollama instance
            temperature=0.7
            # base_url="http://localhost:11434" # Optional: specify if not default
        )
        agent_ollama = Agent(
            name="Assistant",
            instructions="You are a helpful assistant.",
            model=model_ollama
        )
        response_ollama = await Runner.run(agent_ollama, "What is your name and what can you do?")
        print(response_ollama.final_output)
    except Exception as e:
        print(f"Could not run Ollama example: {e}")
        print("Ensure the Ollama server is running and the model 'llama3' is available.")


    # --- Example: Google Gemini ---
    # Requires GOOGLE_API_KEY environment variable
    google_api_key = os.environ.get("GOOGLE_API_KEY")
    if google_api_key:
        print("\nRunning Google Gemini example...")
        # Get the model using get_llm
        model_gemini = get_llm(
            provider="google",
            model="gemini-1.5-flash", # Specify the model name
            temperature=0.7,
            api_key=google_api_key
        )
        agent_gemini = Agent(
            name="Assistant",
            instructions="You are a helpful assistant.",
            model=model_gemini
        )
        response_gemini = await Runner.run(agent_gemini, "What is your name and what can you do?")
        print(response_gemini.final_output)
    else:
        print("GOOGLE_API_KEY not found. Skipping Google Gemini example.")


    # --- Example: Anthropic Claude ---
    # Requires ANTHROPIC_API_KEY environment variable
    anthropic_api_key = os.environ.get("ANTHROPIC_API_KEY")
    if anthropic_api_key:
        print("\nRunning Anthropic Claude example...")
        # Get the model using get_llm
        model_claude = get_llm(
            provider="anthropic",
            model="claude-3-haiku-20240307", # Specify the model name
            temperature=0.7,
            api_key=anthropic_api_key,
            thinking=True # Pass provider-specific arguments like 'thinking' for Claude
        )
        agent_claude = Agent(
            name="Assistant",
            instructions="You are a helpful assistant.",
            model=model_claude
        )
        response_claude = await Runner.run(agent_claude, "What is your name and what can you do?")
        print(response_claude.final_output)
    else:
        print("ANTHROPIC_API_KEY not found. Skipping Anthropic Claude example.")


if __name__ == "__main__":
    # Disable tracing for non-OpenAI providers if desired
    # import sys
    # provider = sys.argv[1] if len(sys.argv) > 1 else "openai"
    # if provider != "openai":
    #     from agents import set_tracing_disabled
    #     set_tracing_disabled(True)
    asyncio.run(main())

📊 Structured Output with get_llm

All models obtained via get_llm support structured output using Pydantic models:

import asyncio
import os
from agents import Agent, Runner
from agents_sdk_models import get_llm
from pydantic import BaseModel
from typing import List

# --- Define Pydantic Model ---
class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str

class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]

# --- Get a model instance (e.g., OpenAI) ---
async def run_structured_example():
    openai_api_key = os.environ.get("OPENAI_API_KEY")
    if not openai_api_key:
        print("OPENAI_API_KEY not found. Skipping structured output example.")
        return

    model = get_llm(
        provider="openai",
        model="gpt-4o-mini",
        api_key=openai_api_key
    )

    # --- Create Agent with Structured Output ---
    agent = Agent(
        name="Weather Reporter",
        model=model,
        instructions="You are a helpful weather reporter. Provide the date in YYYY-MM-DD format.",
        output_type=WeatherReport # Specify the Pydantic model
    )

    # --- Run Agent and Get Structured Response ---
    print("\nRunning structured output example...")
    response = await Runner.run(agent, "What's the weather like today in Tokyo, Osaka, and Sapporo?")

    # --- Access the structured output ---
    if response.final_output:
        weather_report: WeatherReport = response.final_output
        print(f"Report Date: {weather_report.report_date}")
        for info in weather_report.locations:
            print(f"- Location: {info.location}, Temp: {info.temperature}, Condition: {info.condition}")
            print(f"  Recommendation: {info.recommendation}")
    else:
        print("Failed to get structured output.")
        print(f"Raw responses: {response.raw_responses}")  # Inspect raw model responses for debugging

if __name__ == "__main__":
    asyncio.run(run_structured_example())
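Under the hood, structured output amounts to validating the model's JSON response against the Pydantic schema. The standalone sketch below shows that validation step in isolation using only pydantic; the JSON payload is a made-up sample shaped like what the agent above would return:

```python
from typing import List
from pydantic import BaseModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    condition: str
    recommendation: str

class WeatherReport(BaseModel):
    report_date: str
    locations: List[WeatherInfo]

# A made-up payload shaped like a model's structured response.
raw_json = """
{
  "report_date": "2024-06-01",
  "locations": [
    {"location": "Tokyo", "temperature": 24.5,
     "condition": "Sunny", "recommendation": "Bring sunglasses."}
  ]
}
"""

# Pydantic v2: parse and validate the JSON in one step.
# Malformed or missing fields raise a ValidationError instead of
# silently producing a partial object.
report = WeatherReport.model_validate_json(raw_json)
print(report.locations[0].location)  # Tokyo
```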

🔧 Supported Environments

  • Operating Systems: Windows, macOS, Linux
  • Python Version: 3.9+
  • Dependencies:
    • openai-agents>=0.0.9 (Core dependency)
    • google-generativeai (Required for Google Gemini)
    • anthropic (Required for Anthropic Claude)
    • httpx (Required for Ollama)
    • pydantic>=2.0,<3 (Required for structured output examples)

Note: The provider-specific dependencies (google-generativeai, anthropic, httpx) are installed automatically when needed by the respective models.
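One common way to keep provider dependencies optional is to import them lazily and raise an actionable error naming the missing package. The helper below is a hedged sketch of that pattern, not the library's actual implementation; `require` is a hypothetical name:

```python
import importlib

def require(module_name: str, pip_name: str):
    """Import a provider dependency on demand, with install advice on failure."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} is required for this provider. "
            f"Install it with: pip install {pip_name}"
        ) from exc

# Stdlib modules import fine; a missing package raises with install advice.
json_mod = require("json", "json")
```

With this approach, users who only ever call the OpenAI adapter never pay for (or need to install) the Anthropic or Google client libraries.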

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.



