
Project description

pytest-prompts

pytest for LLM prompts. Write tests, detect regressions, run in CI.

from pytest_prompts import prompt_test

@prompt_test()
def test_qa_knows_capital_of_france(runner):
    result = runner.run(
        prompt="prompts/qa.txt",
        input="What is the capital of France?",
    )
    assert "Paris" in result.output
    assert result.latency_ms < 5000

$ pytest-prompts run
                                pytest-prompts results
 Test                                           Model             Tokens  Latency  Status
 examples/test_prompts.py::test_summary...      claude-sonnet-4-6    174    1.2s    PASS
 examples/test_prompts.py::test_qa_knows...     claude-sonnet-4-6     48    0.9s    PASS
 examples/test_prompts.py::test_qa_admits...    claude-sonnet-4-6     52    1.1s    PASS

3 passed, 0 failed  274 tokens total  $0.0012

Change a prompt → rerun → pytest-prompts diff shows what regressed.


Install

uv add pytest-prompts
export ANTHROPIC_API_KEY=sk-ant-...

Write a test

Works in any pytest file: decorate the test with @prompt_test(), accept the runner fixture as an argument, and assert on the result.

# tests/test_summarizer.py
from pytest_prompts import prompt_test

@prompt_test(model="claude-sonnet-4-6")
def test_summary_is_concise(runner):
    result = runner.run(
        prompt="prompts/summarizer.txt",
        input="Long text here...",
    )
    assert len(result.output.split()) < 100
    assert result.tokens_used < 500

result exposes output, input_tokens, output_tokens, tokens_used, latency_ms, model, cost_usd.
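Those fields make it easy to share budget checks across tests. A minimal sketch of such a helper; the `FakeResult` stub and the helper name are illustrative, not part of the pytest-prompts API:

```python
from dataclasses import dataclass

# Illustrative stand-in for the result object. The real class comes from
# pytest-prompts; you would never construct it by hand like this.
@dataclass
class FakeResult:
    output: str
    input_tokens: int
    output_tokens: int
    tokens_used: int
    latency_ms: float
    model: str
    cost_usd: float

def assert_within_budget(result, max_tokens=500, max_cost_usd=0.01, max_latency_ms=5000):
    """Reusable budget assertions for any object exposing the fields above."""
    assert result.tokens_used <= max_tokens, f"token budget exceeded: {result.tokens_used}"
    assert result.cost_usd <= max_cost_usd, f"cost budget exceeded: ${result.cost_usd:.4f}"
    assert result.latency_ms <= max_latency_ms, f"too slow: {result.latency_ms}ms"

assert_within_budget(FakeResult(
    output="Paris", input_tokens=30, output_tokens=18, tokens_used=48,
    latency_ms=900, model="claude-sonnet-4-6", cost_usd=0.0002,
))
```

Inside a real test you would call the helper on the `result` returned by `runner.run(...)`.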

Detect regressions

Every run writes snapshots to .pytest-prompts/snapshots/.

# Capture baseline on main
git checkout main
pytest-prompts run --snapshot-dir .pytest-prompts/base

# Run on your branch
git checkout feature/new-prompt
pytest-prompts run --snapshot-dir .pytest-prompts/head

# Compare
pytest-prompts diff .pytest-prompts/base .pytest-prompts/head

Output:

pytest-prompts diff
 Test                                  Base          Head          Status
 test_summary_is_concise               ✓ 342t 1.2s   ✓ 891t 3.1s   REGRESSION
 test_qa_knows_capital_of_france       ✓ 48t 0.9s    ✓ 48t 0.8s    ok

Regressions:
  • test_summary_is_concise — tokens 342 → 891 (+160%)

Exit code 1 on any regression. Wire it into CI and you're done.
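The regression rule itself is simple. A sketch of the kind of check the diff presumably applies; the function is illustrative, and the 0.05 default mirrors the action's threshold input below:

```python
def is_regression(base_tokens: int, head_tokens: int, threshold: float = 0.05) -> bool:
    """Flag when head exceeds base by more than the threshold fraction."""
    if base_tokens == 0:
        return head_tokens > 0
    return (head_tokens - base_tokens) / base_tokens > threshold

# From the diff output above: 342 -> 891 tokens is a +160% jump, far past 5%.
print(is_regression(342, 891))  # True
print(is_regression(48, 48))    # False
```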

CI (GitHub Actions)

- uses: actions/checkout@v4
  with:
    fetch-depth: 0  # required for base-ref diff mode
- uses: chahine-tech/pytest-prompts@v0.1
  with:
    path: tests/prompts
    anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
    base-ref: main
    github-token: ${{ secrets.GITHUB_TOKEN }}

On pull requests, the action runs your tests against both main and your branch, compares them, fails the job on regressions, and posts a summary comment on the PR.

Input               Default  Description
path                .        Test path (file or directory)
anthropic-api-key            Anthropic API key (required)
python-version      3.13     Python version
base-ref                     Base git ref (e.g. main); on PRs, results are compared against this ref
threshold           0.05     Regression threshold as a fraction (5% by default)
github-token                 When set on pull_request events, posts the results as a PR comment
fail-on-regression  true     Fail the job if a regression is detected
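Combining a few of these, a sketch of a looser configuration; the values are illustrative:

```yaml
- uses: chahine-tech/pytest-prompts@v0.1
  with:
    path: tests/prompts
    anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
    base-ref: main
    threshold: 0.10        # tolerate up to +10% before flagging a regression
    fail-on-regression: true
    github-token: ${{ secrets.GITHUB_TOKEN }}
```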

Outputs: passed, failed, total-tokens, total-cost-usd, regressions.
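Later steps can consume these outputs through a step id; the `id` and step name here are illustrative:

```yaml
- uses: chahine-tech/pytest-prompts@v0.1
  id: prompts
  with:
    path: tests/prompts
    anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
- name: Report token spend
  run: |
    echo "passed=${{ steps.prompts.outputs.passed }} failed=${{ steps.prompts.outputs.failed }}"
    echo "cost: $${{ steps.prompts.outputs.total-cost-usd }}"
```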

What's in the POC

  • @prompt_test decorator with pytest integration
  • Runner for the Anthropic API (Claude Sonnet 4.6 default)
  • pytest-prompts run — run tests, summarize tokens/latency/cost
  • pytest-prompts diff — compare two snapshot dirs, flag regressions

Not here yet: OpenAI/Gemini adapters, static prompt analysis, LLM-as-judge, HTML reports. If you want them, open an issue — priorities come from usage, not from a roadmap.

License

MIT.
