
openai-agents-testkit


Testing utilities for openai-agents-python: fake models, providers, and pytest fixtures.

Note: This is an unofficial community library, not affiliated with OpenAI.

Installation

pip install openai-agents-testkit

Quick Start

Basic Usage

from agents import Agent, Runner, RunConfig
from openai_agents_testkit import FakeModelProvider

# Create a fake provider (no API calls!)
provider = FakeModelProvider(delay=0.1)

# Use it with any agent
agent = Agent(
    name="My Agent",
    model="gpt-4",  # Model name is ignored, FakeModel is always used
    instructions="You are a helpful assistant.",
)

result = Runner.run_sync(
    agent,
    "Hello, how are you?",
    run_config=RunConfig(model_provider=provider),
)

print(result.final_output)  # "Fake response #1"

With pytest Fixtures

Fixtures are auto-discovered when you install the package:

# tests/test_my_agent.py
from agents import Agent, Runner, RunConfig

def test_agent_responds(fake_model_provider):
    """fake_model_provider is automatically available!"""
    agent = Agent(name="Test", model="gpt-4", instructions="Be helpful")

    result = Runner.run_sync(
        agent,
        "Hello",
        run_config=RunConfig(model_provider=fake_model_provider),
    )

    assert result.final_output is not None
    assert "Fake response" in result.final_output

Custom Responses

from openai_agents_testkit import FakeModelProvider

def my_response_factory(call_id: int, input) -> str:
    """Generate custom responses based on input."""
    if "hello" in str(input).lower():
        return "Hi there!"
    return f"Response #{call_id}: I processed your request."

provider = FakeModelProvider(
    delay=0.0,  # No delay for fast tests
    response_factory=my_response_factory,
)
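
Because a response factory is just a callable, it can be exercised in isolation before wiring it into a provider. A standalone sketch (plain Python, no library imports; `call_id` is assumed to be 1-based here, matching the "Fake response #1" output in Quick Start):

```python
# Sketch: scripting a fixed sequence of replies via the (call_id, input) -> str
# contract, with a numbered fallback once the script runs out.

def scripted_factory(script):
    def factory(call_id: int, input) -> str:
        if call_id <= len(script):
            return script[call_id - 1]  # call_id assumed 1-based
        return f"Fake response #{call_id}"
    return factory

factory = scripted_factory(["First reply", "Second reply"])
assert factory(1, "anything") == "First reply"
assert factory(2, "anything") == "Second reply"
assert factory(3, "anything") == "Fake response #3"
```

Since the factory is deterministic, scripted multi-turn conversations stay reproducible across test runs.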

Inspecting Calls

def test_agent_tool_usage(fake_model_provider):
    agent = Agent(name="Test", model="gpt-4", instructions="Test")

    Runner.run_sync(
        agent,
        "Do something",
        run_config=RunConfig(model_provider=fake_model_provider),
    )

    # Get the model instance
    model = fake_model_provider.get_model("gpt-4")

    # Inspect call history
    assert model.call_count == 1
    assert model.call_history[0]["system_instructions"] == "Test"

Available Fixtures

Fixture                      Description
fake_model                   A single FakeModel instance
fake_model_provider          A FakeModelProvider with a 0.1 s delay
fake_model_provider_factory  Factory for building providers with custom configuration
no_delay_provider            A FakeModelProvider with zero delay

API Reference

FakeModel

FakeModel(
    delay: float = 0.1,  # Simulated API latency in seconds
    response_factory: Callable[[int, Any], str] | None = None,  # (call_id, input) -> response text
)

Attributes:

  • call_count: int - Number of times the model was called
  • call_history: list[dict] - Details of each call

Methods:

  • reset() - Reset call count and history
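
The recording behavior behind call_count, call_history, and reset() can be pictured with a small standalone class. This is an illustrative sketch of the pattern, not the library's actual implementation:

```python
# Illustrative sketch only -- not openai_agents_testkit's code.
class RecordingModel:
    def __init__(self):
        self.call_count = 0
        self.call_history = []  # one dict per call

    def respond(self, system_instructions, input):
        self.call_count += 1
        self.call_history.append({
            "system_instructions": system_instructions,
            "input": input,
        })
        return f"Fake response #{self.call_count}"

    def reset(self):
        """Reset call count and history."""
        self.call_count = 0
        self.call_history.clear()

model = RecordingModel()
model.respond("Be helpful", "Hello")
assert model.call_count == 1
assert model.call_history[0]["system_instructions"] == "Be helpful"
model.reset()
assert model.call_count == 0 and model.call_history == []
```

Calling reset() between tests keeps assertions on call_count independent when a model instance is shared.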

FakeModelProvider

FakeModelProvider(
    delay: float = 0.1,
    response_factory: Callable[[int, Any], str] | None = None,  # (call_id, input) -> response text
)

Methods:

  • get_model(model_name) - Get/create a FakeModel for the name
  • get_all_models() - Get all created model instances
  • reset_all() - Reset all model instances
  • clear() - Clear all cached models
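
The methods above suggest a per-name caching pattern: get_model returns the same instance for the same name, which is what makes post-run inspection possible. A minimal standalone sketch of that pattern (a dict stands in for a FakeModel; not the library's implementation):

```python
# Illustrative sketch of per-name model caching -- not the library's code.
class CachingProvider:
    def __init__(self):
        self._models = {}

    def get_model(self, model_name):
        # Create on first use, then return the cached instance.
        if model_name not in self._models:
            self._models[model_name] = {"call_count": 0}  # stand-in for a FakeModel
        return self._models[model_name]

    def get_all_models(self):
        return list(self._models.values())

    def reset_all(self):
        for model in self._models.values():
            model["call_count"] = 0

    def clear(self):
        self._models.clear()

provider = CachingProvider()
first = provider.get_model("gpt-4")
assert provider.get_model("gpt-4") is first  # same name -> same cached instance
assert len(provider.get_all_models()) == 1
provider.clear()
assert provider.get_all_models() == []
```

The caching is why `fake_model_provider.get_model("gpt-4")` in the inspection example retrieves the very model the Runner used.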

Use Cases

  • Unit testing agents without API costs
  • Integration testing agent workflows
  • CI/CD pipelines that can't access OpenAI API
  • Development when iterating on agent logic
  • Concurrent testing (FakeModel is thread-safe)
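
The thread-safety point matters because concurrent tests share one model instance, and an unguarded counter can lose updates. A standard-library sketch of the locking pattern such a counter needs (illustrative, not the library's code):

```python
# Sketch: a lock guards the read-modify-write so concurrent calls
# never lose counter increments.
import threading
from concurrent.futures import ThreadPoolExecutor

class ThreadSafeCounter:
    def __init__(self):
        self._lock = threading.Lock()
        self.call_count = 0

    def record_call(self):
        with self._lock:  # serialize the increment
            self.call_count += 1

counter = ThreadSafeCounter()
with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(200):
        pool.submit(counter.record_call)

assert counter.call_count == 200  # no increments lost
```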

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see LICENSE for details.

Download files

Source Distribution

openai_agents_testkit-0.2.0.tar.gz (8.8 kB)

Built Distribution

openai_agents_testkit-0.2.0-py3-none-any.whl (8.7 kB)

File details

Details for the file openai_agents_testkit-0.2.0.tar.gz.

File metadata

  • Download URL: openai_agents_testkit-0.2.0.tar.gz
  • Size: 8.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for openai_agents_testkit-0.2.0.tar.gz
Algorithm Hash digest
SHA256 f50d22556093a5635f03262979aa3503d7627290dbc3477ec911ed689ec37866
MD5 36610d2aaf8ab0ac6776d6007025c999
BLAKE2b-256 91c3ef672b6fd62fb8e4652216fe20a6e9797faa548b7a8bef28f726c55ac5dc


Provenance

The following attestation bundles were made for openai_agents_testkit-0.2.0.tar.gz:

Publisher: release.yml on xncbf/openai-agents-testkit

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file openai_agents_testkit-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for openai_agents_testkit-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 4ba453f05367dbaf8ff01b3e7741b6d67dad9f1e89ba7ef177325e6b1ef7394a
MD5 fe15b283d359597846f92586cfc8bb4e
BLAKE2b-256 8c1fd5a3242331c0df989f8d8cb2ce2eb60ddef9d155dfd73a4c63f70e372877


Provenance

The following attestation bundles were made for openai_agents_testkit-0.2.0-py3-none-any.whl:

Publisher: release.yml on xncbf/openai-agents-testkit

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
