Simulation and trace-based evaluation for agentic systems


understudy: Scenario Testing for AI Agents

Requires Python 3.12+. Licensed under MIT.

Understudy is a scenario-driven testing framework for AI agents. It simulates realistic multi-turn users, runs those scenes against your agent through a simple app adapter, and records a structured execution trace of messages, tool calls, and handoffs. It then evaluates behavior with deterministic checks, optional LLM judges, and run reports.

How It Works

Testing with understudy takes four steps:

  1. Wrap your agent — Adapt your agent (ADK, LangGraph, HTTP) to understudy's interface
  2. Mock your tools — Register handlers that return test data instead of calling real services
  3. Write scenes — YAML files defining what the simulated user wants and what you expect
  4. Run and assert — Execute simulations, check traces, generate reports

The key insight: assert against the trace, not the prose. Don't check what the agent said; check what it did (its tool calls).
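The idea in miniature, using a toy stand-in for understudy's trace object (illustrative only; the real trace also records messages and handoffs):

```python
from dataclasses import dataclass, field

@dataclass
class ToyTrace:
    """Toy stand-in for understudy's execution trace (illustrative only)."""
    tool_calls: list[str] = field(default_factory=list)

    def called(self, name: str) -> bool:
        # Did the agent actually invoke this tool at any point?
        return name in self.tool_calls

# A run where the agent looked up the order and created a return,
# but never issued a refund:
trace = ToyTrace(tool_calls=["lookup_order", "create_return"])
assert trace.called("create_return")     # behavior we require
assert not trace.called("issue_refund")  # behavior we forbid
```

The agent's wording can vary between runs; which tools it called is stable and machine-checkable, which is why the assertions target the trace.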

Two Evaluation Paradigms

Conversational Agent Evaluation

Simulate multi-turn conversations with personas to test dialogue agents.

  • Use case: Customer service bots, assistants, chatbots
  • Assert on tool calls: trace.called("tool_name")

Agentic Flow Evaluation

Evaluate autonomous agents executing multi-step tasks.

  • Use case: Code agents, research agents, task automation
  • Assert on actions: trace.performed("action")

See examples/README.md for complete examples of both paradigms.


Installation

pip install "understudy[all]"

Quick Start

1. Wrap your agent

from understudy.adk import ADKApp
from my_agent import agent

app = ADKApp(agent=agent)

2. Mock your tools

Your agent has tools that call external services. Mock them for testing:

from understudy.mocks import MockToolkit

mocks = MockToolkit()

@mocks.handle("lookup_order")
def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "items": [...], "status": "delivered"}

@mocks.handle("create_return")
def create_return(order_id: str, item_sku: str, reason: str) -> dict:
    return {"return_id": "RET-001", "status": "created"}
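Because handlers are plain functions, failure paths are just as easy to simulate. A sketch (the error shape below is an assumption; mirror whatever your real service returns). It is shown bare here for readability; register it with @mocks.handle("lookup_order") exactly as above:

```python
def lookup_order_not_found(order_id: str) -> dict:
    # Simulate a missing order so you can test how the agent recovers.
    # The error shape is an assumption -- match your real service's contract.
    return {"error": "order_not_found", "order_id": order_id}

result = lookup_order_not_found("ORD-404")
assert result["error"] == "order_not_found"
```

Pairing a failure handler with a scene whose expectations forbid `create_return` lets you check that the agent does not push ahead after a lookup error.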

3. Write a scene

Create scenes/return_backpack.yaml:

id: return_eligible_backpack
description: Customer wants to return a backpack

starting_prompt: "I'd like to return an item please."
conversation_plan: |
  Goal: Return the hiking backpack from order ORD-10031.
  - Provide order ID when asked
  - Return reason: too small

persona: cooperative
max_turns: 15

expectations:
  required_tools:
    - lookup_order
    - create_return
  forbidden_tools:
    - issue_refund

4. Run simulation

from understudy import Scene, run

scene = Scene.from_file("scenes/return_backpack.yaml")
trace = run(app, scene, mocks=mocks)

assert trace.called("lookup_order")
assert trace.called("create_return")
assert not trace.called("issue_refund")

Or with pytest (define app and mocks fixtures in conftest.py):

pytest test_returns.py -v

Suites and Batch Runs

Run multiple scenes with multiple simulations per scene:

from understudy import Suite, RunStorage

suite = Suite.from_directory("scenes/")
storage = RunStorage()

# Run each scene 3 times and tag for comparison
results = suite.run(
    app,
    mocks=mocks,
    storage=storage,
    n_sims=3,
    tags={"version": "v1"},
)
print(f"{results.pass_count}/{len(results.results)} passed")

Simulation and Evaluation

Understudy separates simulation (generating traces) from evaluation (checking traces). You can use them together or separately:

Combined (most common)

understudy run \
  --app mymodule:agent_app \
  --scene ./scenes/ \
  --n-sims 3 \
  --junit results.xml
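The --junit output slots into any CI system that understands JUnit XML. A sketch as a GitHub Actions step (the workflow wiring is an assumption, not part of understudy):

```yaml
# .github/workflows/agent-tests.yml (sketch)
- name: Run scenario tests
  run: |
    pip install "understudy[all]"
    understudy run \
      --app mymodule:agent_app \
      --scene ./scenes/ \
      --n-sims 3 \
      --junit results.xml
- name: Upload results
  if: always()  # keep the report even when scenes fail
  uses: actions/upload-artifact@v4
  with:
    name: scenario-results
    path: results.xml
```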

Separate workflows

Generate traces only:

understudy simulate \
  --app mymodule:agent_app \
  --scenes ./scenes/ \
  --output ./traces/ \
  --n-sims 3

Evaluate existing traces:

understudy evaluate \
  --traces ./traces/ \
  --output ./results/ \
  --junit results.xml

Python API:

from understudy import simulate_batch, evaluate_batch

# Generate traces
traces = simulate_batch(
    app=agent_app,
    scenes="./scenes/",
    n_sims=3,
    output="./traces/",
)

# Evaluate later
results = evaluate_batch(
    traces="./traces/",
    output="./results/",
)

CLI Commands

# Run simulations
understudy run --app mymodule:app --scene ./scenes/
understudy simulate --app mymodule:app --scenes ./scenes/
understudy evaluate --traces ./traces/

# View results
understudy list
understudy show <run_id>
understudy summary

# Compare runs by tag
understudy compare --tag version --before v1 --after v2

# Generate reports
understudy report -o report.html
understudy compare --tag version --before v1 --after v2 --html comparison.html

# Interactive browser
understudy serve --port 8080

# HTTP simulator server (for browser/UI testing)
understudy serve-api --port 8000

# Cleanup
understudy delete <run_id>
understudy clear

LLM Judges

For qualities that can't be checked deterministically:

from understudy.judges import Judge

empathy_judge = Judge(
    rubric="The agent acknowledged frustration and was empathetic while enforcing policy.",
    samples=5,
)

result = empathy_judge.evaluate(trace)
assert result.score == 1

Built-in rubrics:

from understudy.judges import (
    TOOL_USAGE_CORRECTNESS,
    POLICY_COMPLIANCE,
    TONE_EMPATHY,
    ADVERSARIAL_ROBUSTNESS,
    TASK_COMPLETION,
)

Report Contents

The understudy summary command shows:

  • Pass rate — percentage of scenes that passed all expectations
  • Avg turns — average conversation length
  • Tool usage — distribution of tool calls across runs
  • Agents — which agents were invoked

The HTML report (understudy report) includes:

  • All metrics above
  • Full conversation transcripts
  • Tool call details with arguments
  • Expectation check results
  • Judge evaluation results (when used)

Documentation

See the full documentation.

License

MIT
