

openai-agents-testkit


Testing utilities for openai-agents-python: fake models, providers, and pytest fixtures.

Note: This is an unofficial community library, not affiliated with OpenAI.

Installation

pip install openai-agents-testkit

Quick Start

Basic Usage

from agents import Agent, Runner, RunConfig
from openai_agents_testkit import FakeModelProvider

# Create a fake provider (no API calls!)
provider = FakeModelProvider(delay=0.1)

# Use it with any agent
agent = Agent(
    name="My Agent",
    model="gpt-4",  # Model name is ignored, FakeModel is always used
    instructions="You are a helpful assistant.",
)

result = Runner.run_sync(
    agent,
    "Hello, how are you?",
    run_config=RunConfig(model_provider=provider),
)

print(result.final_output)  # "Fake response #1"

With pytest Fixtures

Fixtures are auto-discovered when you install the package:

# tests/test_my_agent.py
from agents import Agent, Runner, RunConfig

def test_agent_responds(fake_model_provider):
    """fake_model_provider is automatically available!"""
    agent = Agent(name="Test", model="gpt-4", instructions="Be helpful")

    result = Runner.run_sync(
        agent,
        "Hello",
        run_config=RunConfig(model_provider=fake_model_provider),
    )

    assert result.final_output is not None
    assert "Fake response" in result.final_output

Custom Responses

from openai_agents_testkit import FakeModelProvider

def my_response_factory(call_id: int, input) -> str:
    """Generate custom responses based on input."""
    if "hello" in str(input).lower():
        return "Hi there!"
    return f"Response #{call_id}: I processed your request."

provider = FakeModelProvider(
    delay=0.0,  # No delay for fast tests
    response_factory=my_response_factory,
)
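Because the response factory is an ordinary callable, it can be unit-tested in isolation before wiring it into a provider; no agent or API plumbing is involved:

```python
# Stand-alone check of the factory shown above; it is a plain
# function, so plain asserts are enough.
def my_response_factory(call_id: int, input) -> str:
    """Generate custom responses based on input."""
    if "hello" in str(input).lower():
        return "Hi there!"
    return f"Response #{call_id}: I processed your request."

# Input containing "hello" (case-insensitive) takes the first branch.
assert my_response_factory(1, "Hello world") == "Hi there!"
# Anything else falls through to the numbered default.
assert my_response_factory(2, "Summarize this") == "Response #2: I processed your request."
```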

Inspecting Calls

def test_agent_tool_usage(fake_model_provider):
    agent = Agent(name="Test", model="gpt-4", instructions="Test")

    Runner.run_sync(
        agent,
        "Do something",
        run_config=RunConfig(model_provider=fake_model_provider),
    )

    # Get the model instance
    model = fake_model_provider.get_model("gpt-4")

    # Inspect call history
    assert model.call_count == 1
    assert model.call_history[0]["system_instructions"] == "Test"

Available Fixtures

  • fake_model - A single FakeModel instance
  • fake_model_provider - A FakeModelProvider with a 0.1s delay
  • fake_model_provider_factory - A factory for creating providers with custom configuration
  • no_delay_provider - A FakeModelProvider with zero delay

API Reference

FakeModel

FakeModel(
    delay: float = 0.1,  # Simulated API latency in seconds
    response_factory: Callable[[int, Any], str] | None = None,
)

Attributes:

  • call_count: int - Number of times the model was called
  • call_history: list[dict] - Details of each call

Methods:

  • reset() - Reset call count and history

FakeModelProvider

FakeModelProvider(
    delay: float = 0.1,
    response_factory: Callable[[int, Any], str] | None = None,
)

Methods:

  • get_model(model_name) - Get/create a FakeModel for the name
  • get_all_models() - Get all created model instances
  • reset_all() - Reset all model instances
  • clear() - Clear all cached models

Use Cases

  • Unit testing agents without API costs
  • Integration testing agent workflows
  • CI/CD pipelines that can't access the OpenAI API
  • Fast local iteration on agent logic during development
  • Concurrent testing (FakeModel is thread-safe)
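The thread-safety point means concurrent test threads can share one fake model without corrupting its call counts. The usual way to get that guarantee is a lock around the bookkeeping; the sketch below shows the pattern under that assumption (it is not the library's code, and CountingStub is a hypothetical name):

```python
import threading

# Illustrative only: a call counter guarded by a lock, the kind of
# bookkeeping a thread-safe fake model needs when many test threads
# record calls at once.
class CountingStub:
    def __init__(self):
        self._lock = threading.Lock()
        self.call_count = 0

    def record_call(self):
        # Without the lock, the read-modify-write of call_count
        # could interleave across threads and lose updates.
        with self._lock:
            self.call_count += 1


stub = CountingStub()
threads = [
    threading.Thread(target=lambda: [stub.record_call() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert stub.call_count == 8000  # no updates lost under concurrency
```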

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see LICENSE for details.
