
evalwire

Systematic, reproducible evaluation of LangGraph nodes and subgraphs against human-curated testsets, tracked in Arize Phoenix.



What it does

When iterating on a LangGraph agent, it is hard to know whether a change to a specific node improved or degraded its behaviour. Running the full graph end-to-end is expensive and makes it difficult to attribute a score change to a specific component.

evalwire solves this by:

  • Turning a human-curated CSV of queries and expected outputs into versioned Arize Phoenix datasets.
  • Letting you define a task that invokes individual LangGraph nodes in isolation from the rest of the graph.
  • Running those tasks against the stored datasets, scoring each output with one or more evaluators, and recording results in Phoenix — giving you a reproducible, comparable experiment per run.

Installation

pip install evalwire
# With LangGraph node-isolation helpers:
pip install 'evalwire[langgraph]'
# With LLM-as-a-judge evaluator:
pip install 'evalwire[llm-judge]'
# Everything:
pip install 'evalwire[all]'

Quick start

1. Upload your testset

evalwire upload --csv data/testset.csv

The CSV must contain a `tags` column whose values name the target Phoenix dataset; multiple tags can be pipe-delimited (e.g. `es_search|source_router`).
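For reference, a minimal testset CSV might look like the following. Only the `tags` column is documented above; the `query` and `expected_output` column names are illustrative assumptions:

```csv
query,expected_output,tags
"What is the capital of France?","Paris",source_router
"papers on retrieval augmented generation","RAG Survey 2023",es_search|source_router
```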

2. Structure your experiments

experiments/
├── es_search/
│   ├── task.py        # defines: async def task(example) -> Any
│   └── top_k.py       # defines: def top_k(output, expected) -> float
└── source_router/
    ├── task.py
    └── accuracy.py
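To make the evaluator contract concrete, `top_k.py` could contain a self-contained scorer like this sketch. It is a hypothetical implementation, not evalwire's built-in one (use `make_top_k_evaluator` for that), and the `expected["titles"]` key is an assumption:

```python
def top_k(output: list[str], expected: dict, K: int = 5) -> float:
    """Position-weighted hit rate over the top K results.

    `output` is the ranked list returned by the task; `expected["titles"]`
    (an assumed key) holds the ground-truth titles.
    """
    targets = set(expected.get("titles", []))
    if not targets:
        return 0.0
    score = 0.0
    for rank, title in enumerate(output[:K]):
        if title in targets:
            score += 1.0 / (rank + 1)  # reciprocal-rank weighting
    # Normalise by the best achievable score for this many targets.
    best = sum(1.0 / (r + 1) for r in range(min(len(targets), K)))
    return score / best
```

evalwire discovers the module-level callable and applies it to each example's output.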

3. Run experiments

evalwire run --experiments experiments/

Built-in evaluators

All factories are importable from `evalwire.evaluators` and return a callable with signature `(output, expected: dict) -> float | bool`.

| Factory | Returns | Use case |
| --- | --- | --- |
| `make_top_k_evaluator(K=20)` | `float` | Position-weighted retrieval scoring |
| `make_membership_evaluator()` | `bool` | Classification / routing label check |
| `make_exact_match_evaluator()` | `bool` | Extractive QA, single ground-truth string |
| `make_contains_evaluator()` | `bool` | Free-text generation, required phrase present |
| `make_regex_evaluator()` | `bool` | Structured format validation (dates, IDs, …) |
| `make_json_match_evaluator(keys)` | `float` | Tool-call / structured-output key matching |
| `make_schema_evaluator(schema)` | `bool` | JSON Schema conformance |
| `make_numeric_tolerance_evaluator(atol, rtol)` | `bool` | Math / calculation tasks with tolerance |
| `make_llm_judge_evaluator(model, prompt, schema)` | `float` \| `bool` | LLM-as-a-judge with structured output |

Example

from evalwire.evaluators import make_top_k_evaluator, make_exact_match_evaluator

# Drop the factory return value into your experiment directory as the evaluator
top_k = make_top_k_evaluator(K=5)
exact = make_exact_match_evaluator()

LLM judge

from pydantic import BaseModel
from langchain.chat_models import init_chat_model
from evalwire.evaluators import make_llm_judge_evaluator

class Verdict(BaseModel):
    explanation: str
    score: bool  # True = correct

llm_judge = make_llm_judge_evaluator(
    model=init_chat_model("gpt-4o-mini"),
    prompt_template=(
        "Output: {output}\n"
        "Expected: {expected_output}\n"
        "Is the output correct? Think step by step, then set score."
    ),
    output_schema=Verdict,
)

Requires pip install 'evalwire[llm-judge]'.
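For intuition, a plausible shape for such a factory is sketched below, assuming LangChain's `with_structured_output` and an `expected_output` key on the example. This is an illustration of the moving parts, not evalwire's actual implementation:

```python
from typing import Any, Callable

def make_llm_judge_sketch(model, prompt_template: str, output_schema) -> Callable[[Any, dict], bool]:
    """Hypothetical LLM-judge factory: fill the template, ask the model
    for a structured verdict, return its score field."""
    judge = model.with_structured_output(output_schema)

    def evaluate(output: Any, expected: dict) -> bool:
        prompt = prompt_template.format(
            output=output,
            expected_output=expected["expected_output"],  # assumed key
        )
        return judge.invoke(prompt).score

    return evaluate
```

The structured-output schema (like `Verdict` above) is what lets the judge "think step by step" in `explanation` while still returning a machine-readable `score`.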


Node isolation

Use invoke_node to call a single LangGraph node without compiling a full graph:

from evalwire.langgraph import invoke_node

async def task(example) -> list[str]:
    # `retrieve` is your graph's node function and `RAGState` its state
    # schema, both imported from your own graph module.
    result = await invoke_node(retrieve, example.input["user_query"], RAGState)
    return result["retrieved_titles"]
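To make the helper's contract concrete, here is a toy stand-in (hypothetical, not evalwire's implementation): seed a minimal state from the query, await the node on it, and return the node's state update. A real helper would also use the state type to validate or construct the state:

```python
import asyncio
from typing import Awaitable, Callable

async def invoke_node_sketch(
    node: Callable[[dict], Awaitable[dict]],
    user_query: str,
    state_type: type,  # unused here; a real helper would validate with it
) -> dict:
    state = {"user_query": user_query}  # minimal seed state
    return await node(state)

async def retrieve(state: dict) -> dict:
    # Toy node: "retrieves" titles matching the query.
    return {"retrieved_titles": [f"doc about {state['user_query']}"]}

result = asyncio.run(invoke_node_sketch(retrieve, "RAG", dict))
```

Because only the node runs, a failing score points at that node rather than at anything upstream in the graph.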

CLI reference

| Command | Description |
| --- | --- |
| `evalwire upload --csv PATH` | Upload CSV testset to Phoenix |
| `evalwire run --experiments DIR` | Discover and run all experiments |
| `evalwire run --name NAME` | Run a single named experiment |
| `evalwire run --dry-run N` | Run N examples without recording results |
| `evalwire run --concurrency N` | Run N experiments in parallel |

Configuration

Create evalwire.toml in your project root to avoid repeating flags:

[dataset]
csv_path = "data/testset.csv"
on_exist = "skip"

[experiments]
dir = "experiments"
prefix = "eval"
concurrency = 4

Requirements

  • Python >= 3.10
  • arize-phoenix >= 13.0, < 14
  • A running Phoenix instance (local or cloud)
