Skip to main content

Change-coupled eval probe generation for LLM systems

Project description

Probegen

Probegen detects behaviorally significant pull request changes in LLM systems and proposes targeted evaluation probes for review before writing them to an evaluation platform. Probegen is non-blocking — it runs as a parallel CI job and never prevents PR merges.

What it does

Probegen runs in CI on pull requests. It:

  1. Detects changes to prompts, instructions, guardrails, validators, tool descriptions, classifiers, retry policies, output schemas, and other agent harness artifacts that are likely to alter agent behavior.
  2. Retrieves nearby evaluation coverage from your existing eval stack when mappings exist.
  3. Falls back to starter probe generation when no eval corpus exists yet.
  4. Generates ranked probe proposals tailored to the specific change, including multi-turn conversational probes when the agent is conversational.
  5. Exports those probes as files and, after explicit approval, writes them to the configured platform.

Probegen is not an eval runner. It generates eval inputs that plug into LangSmith, Braintrust, Arize Phoenix, Promptfoo, or file-based workflows.

Probegen works out of the box even if you have no evals yet. In that case it generates plausible starter probes from the diff, system prompt or guardrails, and whatever product context you provide. The more eval coverage and product detail you give it, the sharper its novelty detection and boundary analysis become.

Prerequisites

  • Python 3.11+
  • Node.js 22+ — required in CI by the GitHub Action (installed automatically). Only needed locally if running probegen run-stage directly.
  • An Anthropic API key
  • An eval platform API key only if you want direct platform integration or automatic writeback

Quick Start (GitHub Action)

  1. Install the package: pip install probegen

  2. Run interactive setup: probegen init — generates probegen.yaml, workflow file, and context/ stubs

  3. Fill in context/product.md and context/bad_examples.md (and other context files for best results)

  4. Add GitHub secrets:

    Secret Purpose Where to get it
    ANTHROPIC_API_KEY Required — powers all three stages console.anthropic.com → API Keys
    OPENAI_API_KEY Required for coverage-aware mode platform.openai.com → API Keys
    LANGSMITH_API_KEY If using LangSmith smith.langchain.com → Settings
    BRAINTRUST_API_KEY If using Braintrust braintrust.dev → Settings
    PHOENIX_API_KEY If using Arize Phoenix app.phoenix.arize.com → Settings
  5. Create the approval label in GitHub:

    gh label create "probegen:approve" --color 0075ca --description "Approve Probegen probe writeback"
    
  6. Commit probegen.yaml, .github/workflows/probegen.yml, and context/.

  7. Open a PR that touches a prompt or guardrail.

  8. Run probegen doctor to verify your setup.

Cost control

Each stage has a configurable Anthropic API spend budget (see budgets: in probegen.yaml). Typical costs per PR:

  • Stage 1 (change detection): $0.05–0.30
  • Stage 2 (coverage analysis): $0.10–0.50
  • Stage 3 (probe generation): $0.10–0.60

Increase budget limits if stages time out on large diffs or complex repos.

Advanced Configuration

The full configuration reference is available in probegen.yaml.example.

Real example quickstart

If you want to test Probegen against a real LangGraph repo instead of wiring everything from scratch, use the in-repo demo under examples/langgraph-agentic-rag and follow examples/langgraph-agentic-rag/docs/quickstart.md.

Context pack and trace safety

Probegen works without a context pack, but probe quality drops significantly. At minimum, fill in product context and known failure modes. This matters even more in starter mode, where Probegen has no existing eval corpus to compare against.

Production traces are never sanitized by the tool. If you add files under context/traces/, anonymize them first. Remove names, emails, account IDs, and any other sensitive data before committing them.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

probegen-0.1.6.tar.gz (43.9 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

probegen-0.1.6-py3-none-any.whl (59.6 kB view details)

Uploaded Python 3

File details

Details for the file probegen-0.1.6.tar.gz.

File metadata

  • Download URL: probegen-0.1.6.tar.gz
  • Upload date:
  • Size: 43.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for probegen-0.1.6.tar.gz
Algorithm Hash digest
SHA256 6d20a4b8bd171f214b42968643ba5bab237470ee9ac06aad1336f6afad0a84f3
MD5 05c48f8104e0fb189eb2e32bfec8e699
BLAKE2b-256 14a1c24bdc39f3ff3f757329f2b6067e90d968a1cc3979d0fb14ec3114b7d917

See more details on using hashes here.

File details

Details for the file probegen-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: probegen-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 59.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for probegen-0.1.6-py3-none-any.whl
Algorithm Hash digest
SHA256 7f46eb626a362a3f69e4d016d7935c60f0baee4f066011c2851ef8dcdcf7e208
MD5 6ce4f34b1bff152d2a018ece0ac9a4f5
BLAKE2b-256 d8275205bce6a5c3640ca090fb1b495d0aaa2021a5c4f7e1d8af40342a44251c

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page