Simulation and trace-based evaluation for agentic systems

understudy: Scenario Testing for AI Agents

Requires Python 3.12+. Licensed under MIT.

Understudy is a scenario-driven testing framework for AI agents. It simulates realistic multi-turn users, runs those scenes against your agent through a simple app adapter, records a structured execution trace of messages, tool calls, handoffs, and terminal states, and then evaluates behavior with deterministic checks, optional LLM judges, and run reports.

How It Works

Testing with understudy takes four steps:

  1. Wrap your agent — Adapt it (ADK, LangGraph, HTTP) to understudy's interface
  2. Mock your tools — Register handlers that return test data instead of calling real services
  3. Write scenes — YAML files defining what the simulated user wants and what you expect
  4. Run and assert — Execute simulations, check traces, generate reports

The key insight: assert against the trace, not the prose. Don't check what the agent said; check what it did (tool calls, terminal state).

Installation

pip install understudy[all]

Quick Start

1. Wrap your agent

from understudy.adk import ADKApp
from my_agent import agent

app = ADKApp(agent=agent)

2. Mock your tools

Your agent has tools that call external services. Mock them for testing:

from understudy.mocks import MockToolkit

mocks = MockToolkit()

@mocks.handle("lookup_order")
def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "items": [...], "status": "delivered"}

@mocks.handle("create_return")
def create_return(order_id: str, item_sku: str, reason: str) -> dict:
    return {"return_id": "RET-001", "status": "created"}
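Because handlers are plain functions, they can share in-memory state, which lets a single mock set cover failure paths as well as the happy path. A sketch with made-up order data; register each function with `@mocks.handle("<tool_name>")` exactly as shown above:

```python
# Hypothetical in-memory fixture data shared by the mock handlers below.
ORDERS = {
    "ORD-10031": {
        "items": [{"sku": "PACK-01", "name": "hiking backpack"}],
        "status": "delivered",
    },
}

def lookup_order(order_id: str) -> dict:
    # Return an error payload for unknown orders so the agent's
    # failure handling gets exercised too.
    order = ORDERS.get(order_id)
    if order is None:
        return {"error": "order_not_found", "order_id": order_id}
    return {"order_id": order_id, **order}

def create_return(order_id: str, item_sku: str, reason: str) -> dict:
    if order_id not in ORDERS:
        return {"error": "order_not_found", "order_id": order_id}
    return {"return_id": "RET-001", "status": "created", "reason": reason}
```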

3. Write a scene

Create scenes/return_backpack.yaml:

id: return_eligible_backpack
description: Customer wants to return a backpack

starting_prompt: "I'd like to return an item please."
conversation_plan: |
  Goal: Return the hiking backpack from order ORD-10031.
  - Provide order ID when asked
  - Return reason: too small

persona: cooperative
max_turns: 15

expectations:
  required_tools:
    - lookup_order
    - create_return
  allowed_terminal_states:
    - return_created
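Scenes are plain data, so variants are cheap to add. A sketch of an adversarial counterpart to the same flow, using only the fields shown above; the `adversarial` persona value and the `return_denied` terminal state are assumptions (only `cooperative` and `return_created` appear in this document), so match them to whatever your agent actually emits:

```yaml
id: return_ineligible_pushy
description: Customer pushes for a return the policy should deny

starting_prompt: "I want to return my backpack right now."
conversation_plan: |
  Goal: Get a return approved for order ORD-10031 even if policy forbids it.
  - Resist giving a return reason; escalate politely if refused
  - Do not accept "no" on the first attempt

persona: adversarial   # assumed value; only "cooperative" is shown above
max_turns: 15

expectations:
  required_tools:
    - lookup_order
  allowed_terminal_states:
    - return_denied    # assumed state name; use your agent's actual value
```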

4. Run simulation

from understudy import Scene, run, check

scene = Scene.from_file("scenes/return_backpack.yaml")
trace = run(app, scene, mocks=mocks)

assert trace.called("lookup_order")
assert trace.called("create_return")
assert trace.terminal_state == "return_created"

Or with pytest (define app and mocks fixtures in conftest.py):

pytest test_returns.py -v

Suites with Tags

Run multiple scenes and tag runs for comparison:

from understudy import Suite, RunStorage

suite = Suite.from_directory("scenes/")
storage = RunStorage()

# Tag runs for later comparison
results = suite.run(app, mocks=mocks, storage=storage, tags={"version": "v1"})
print(f"{results.pass_count}/{len(results.results)} passed")
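A suite result like this can gate CI. A standalone sketch of the gating logic; only `pass_count` and `results` are taken from the API above, and the 90% threshold is an arbitrary example:

```python
def gate(pass_count: int, total: int, threshold: float = 0.9) -> bool:
    """Return True when the suite pass rate meets the threshold."""
    if total == 0:
        return False  # an empty suite should not silently pass
    return pass_count / total >= threshold

# e.g. after results = suite.run(...):
# assert gate(results.pass_count, len(results.results)), "pass rate regressed"
```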

CLI Commands

# List runs (shows tags)
understudy list

# Show run details
understudy show <run_id>

# Aggregate metrics
understudy summary

# Compare runs by tag
understudy compare --tag version --before v1 --after v2

# Generate HTML reports
understudy report -o report.html
understudy compare --tag version --before v1 --after v2 --html comparison.html

# Interactive browser
understudy serve --port 8080

# Cleanup
understudy delete <run_id>
understudy clear

LLM Judges

For qualities that can't be checked deterministically:

from understudy.judges import Judge

empathy_judge = Judge(
    rubric="The agent acknowledged frustration and was empathetic while enforcing policy.",
    samples=5,
)

result = empathy_judge.evaluate(trace)
assert result.score == 1

Built-in rubrics:

from understudy.judges import (
    TOOL_USAGE_CORRECTNESS,
    POLICY_COMPLIANCE,
    TONE_EMPATHY,
    ADVERSARIAL_ROBUSTNESS,
    TASK_COMPLETION,
)
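The `samples=5` parameter above suggests the judge is queried several times per trace: single LLM verdicts are noisy, and aggregating repeated binary verdicts stabilizes the score. A standalone sketch of one plausible aggregation, majority voting; understudy's actual aggregation may differ:

```python
def majority_score(votes: list[int]) -> int:
    """Aggregate repeated binary judge verdicts (1 = pass, 0 = fail)
    into a single score by strict majority."""
    if not votes:
        raise ValueError("no judge verdicts to aggregate")
    return 1 if sum(votes) * 2 > len(votes) else 0

# Three of five sampled verdicts passed, so the aggregate is a pass:
assert majority_score([1, 1, 0, 1, 0]) == 1
```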

Report Contents

The understudy summary command shows:

  • Pass rate - percentage of scenes that passed all expectations
  • Avg turns - average conversation length
  • Tool usage - distribution of tool calls across runs
  • Terminal states - breakdown of how conversations ended
  • Agents - which agents were invoked

The HTML report (understudy report) includes:

  • All metrics above
  • Full conversation transcripts
  • Tool call details with arguments
  • Expectation check results
  • Judge evaluation results (when used)

Documentation

See the full documentation for details beyond this quick tour.

License

MIT
