
Standalone framework to evaluate agent correctness based on portable OpenTelemetry traces


agentevals

Ship Agents Reliably

Benchmark your agents before they hit production.
agentevals scores performance and inference quality from OpenTelemetry traces — no re-runs, no guesswork.


Install · Quick Start · Releases · Contributing · Discord


What is agentevals?

agentevals is a framework-agnostic evaluation solution that scores AI agent behavior directly from OpenTelemetry traces. Record your agent's actions once, then evaluate as many times as you want — no re-runs, no guesswork.

It works with any OTel-instrumented framework (LangChain, Strands, Google ADK, and others), supports Jaeger JSON and OTLP trace formats, and ships with built-in evaluators, custom evaluator support, and LLM-based judges.

  • CLI for scripting and CI pipelines
  • Web UI for visual inspection and local developer experience
  • MCP server so MCP clients can run evaluations from a conversation

Why agentevals?

Most evaluation tools require you to re-execute your agent for every test — burning tokens, time, and money on duplicate LLM calls. agentevals takes a different approach:

  • No re-execution — score agents from existing traces without replaying expensive LLM calls
  • Framework-agnostic — works with any agent framework that emits OpenTelemetry spans
  • Golden eval sets — compare actual behavior against defined expected behaviors for deterministic pass/fail gating
  • Custom evaluators — write scoring logic in Python, JavaScript, or any language
  • CI/CD ready — gate deployments on quality thresholds directly in your pipeline
  • Local-first — no cloud dependency required; everything runs on your machine

How It Works

agentevals follows three simple steps:

  1. Collect traces — Instrument your agent with OpenTelemetry (or export traces from your tracing backend). Point the OTLP exporter at the agentevals receiver, or load trace files directly.
  2. Define eval sets — Create golden evaluation sets that describe expected agent behavior: which tools should be called, in what order, and what the output should look like.
  3. Run evaluations — Use the CLI, Web UI, or MCP server to score traces against your eval sets. Get per-metric scores, pass/fail results, and detailed span-level breakdowns.
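A golden eval set from step 2 might look like the following sketch. The field names here (`cases`, `expected_tools`, `expected_output_contains`) are illustrative assumptions, not the canonical schema — see the Eval Set Format guide for the actual field reference.

```json
{
  "name": "helm-eval",
  "cases": [
    {
      "input": "Install the nginx chart",
      "expected_tools": ["helm_search", "helm_install"],
      "expected_output_contains": "nginx"
    }
  ]
}
```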

> [!IMPORTANT]
> This project is under active development. Expect breaking changes.


Installation

From PyPI (recommended): the published package includes the CLI, REST API, and embedded web UI.

pip install agentevals-cli

Optional extras:

pip install "agentevals-cli[live]"        # MCP server support
pip install "agentevals-cli[openai]"      # OpenAI Evals API graders

GitHub releases also ship core wheels (CLI and API only) and bundle wheels (with the embedded UI) if you need a pinned version or an offline install (pip install ./path/to.whl).

From source with uv or Nix:

uv sync
# or: nix develop .

See DEVELOPMENT.md for build instructions.

Quick Start

Examples use agentevals on your PATH after pip install agentevals-cli. If you are working from a clone of this repo, use uv run agentevals instead.

Run an evaluation against a sample trace:

agentevals run samples/helm.json \
  --eval-set samples/eval_set_helm.json \
  -m tool_trajectory_avg_score

List available evaluators:

agentevals evaluator list

Integration

Zero-Code (Recommended)

Point any OTel-instrumented agent at the receiver. No SDK, no code changes:

# Terminal 1
agentevals serve --dev

# Terminal 2
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_RESOURCE_ATTRIBUTES="agentevals.session_name=my-agent"
python your_agent.py

Traces stream to the UI in real time. Works with LangChain, Strands, Google ADK, or any framework that emits OTel spans (http/protobuf and http/json supported). Sessions are auto-created and grouped by agentevals.session_name. Set agentevals.eval_set_id to associate traces with an eval set.

See examples/zero-code-examples/ for working examples.
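The two shell exports above can equally be set from Python before your agent's OTel SDK initializes; a minimal sketch (the endpoint, session name, and eval set ID mirror the assumptions in the shell example):

```python
import os

# Point the OTLP exporter at the local agentevals receiver (default port 4318)
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

# Group this run's traces under a named session; optionally pin an eval set.
# OTEL_RESOURCE_ATTRIBUTES takes comma-separated key=value pairs.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = (
    "agentevals.session_name=my-agent,agentevals.eval_set_id=my-eval"
)
```

These must be set before the OpenTelemetry SDK is initialized, since resource attributes are read once at startup.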

SDK

For programmatic session lifecycle and decorator API:

from agentevals import AgentEvals

app = AgentEvals()

with app.session(eval_set_id="my-eval"):
    agent.invoke("Roll a 20-sided die for me")

Requires pip install "agentevals-cli[streaming]". See examples/sdk_example/ for framework-specific patterns.

CLI

# Single trace
agentevals run samples/helm.json \
  --eval-set samples/eval_set_helm.json \
  -m tool_trajectory_avg_score

# Multiple traces
agentevals run samples/helm.json samples/k8s.json \
  --eval-set samples/eval_set_helm.json \
  -m tool_trajectory_avg_score

# JSON output
agentevals run samples/helm.json \
  --eval-set samples/eval_set_helm.json \
  --output json

# List available evaluators (builtin + community)
agentevals evaluator list

# List only builtin evaluators
agentevals evaluator list --source builtin

Custom Evaluators

Beyond the built-in metrics, you can write your own evaluators in Python, JavaScript, or any language. An evaluator is any program that reads JSON from stdin and writes a score to stdout.

agentevals evaluator init my_evaluator

This scaffolds a directory with boilerplate and a manifest. You can also list supported runtimes and generate config snippets:

agentevals evaluator runtimes           # show supported languages
agentevals evaluator config my_evaluator --path ./evaluators/my_evaluator.py
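Under the stdin/stdout protocol described above, a minimal Python evaluator might look like the following sketch. The payload and output field names (`spans`, `status`, `score`, `reasoning`) are assumptions for illustration; consult the scaffold generated by `agentevals evaluator init` and the Custom Evaluators guide for the real schema.

```python
import json
import sys


def score(payload: dict) -> dict:
    """Toy scoring logic: fraction of spans that completed without error.

    The payload shape is a hypothetical stand-in for whatever agentevals
    actually sends on stdin.
    """
    spans = payload.get("spans", [])
    if not spans:
        return {"score": 0.0, "reasoning": "no spans in trace"}
    ok = sum(1 for s in spans if s.get("status") != "ERROR")
    return {"score": ok / len(spans), "reasoning": f"{ok}/{len(spans)} spans succeeded"}


if __name__ == "__main__":
    # Protocol: read JSON from stdin, write a JSON score to stdout
    print(json.dumps(score(json.load(sys.stdin))))
```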

Implement your scoring logic, then reference it in an eval config:

# eval_config.yaml
evaluators:
  - name: tool_trajectory_avg_score
    type: builtin

  - name: my_evaluator
    type: code
    path: ./evaluators/my_evaluator.py
    threshold: 0.7

agentevals run trace.json --config eval_config.yaml --eval-set eval_set.json

Community evaluators can be referenced directly from a shared GitHub repository using type: remote. You can also delegate grading to the OpenAI Evals API using type: openai_eval (requires pip install "agentevals-cli[openai]" and OPENAI_API_KEY). See the Custom Evaluators guide for the full protocol reference, SDK usage, and how to contribute evaluators.

Web UI

Installed bundle (port 8001):

agentevals serve

From source (two terminals):

uv run agentevals serve --dev          # Terminal 1
cd ui && npm install && npm run dev    # Terminal 2 → http://localhost:5173

Upload traces and eval sets, select metrics, and view results with interactive span trees. Live-streamed traces appear in the "Local Dev" tab, grouped by session ID.

REST API Reference

While the server is running, interactive API documentation is available at:

| Endpoint | Description |
| --- | --- |
| `/docs` | Swagger UI with interactive request builder |
| `/redoc` | ReDoc reference documentation |
| `/openapi.json` | Raw OpenAPI 3.x schema (for code generation or CI) |

The OTLP receiver (port 4318) serves its own docs at http://localhost:4318/docs.

MCP Server

Exposes evaluation tools to MCP clients. A .mcp.json at the project root lets Claude Code pick it up automatically.

| Tool | Requires `serve` | Description |
| --- | --- | --- |
| `list_metrics` | yes | List available metrics |
| `evaluate_traces` | no | Evaluate local trace files (OTLP or Jaeger) |
| `list_sessions` | yes | List streaming sessions |
| `summarize_session` | yes | Structured summary of a session's tool calls |
| `evaluate_sessions` | yes | Evaluate sessions against a golden reference |

# Custom server URL (requires pip install "agentevals-cli[live]")
AGENTEVALS_SERVER_URL=http://localhost:9000 agentevals mcp

The React UI and MCP server share the same in-memory session state and can run simultaneously.

Claude Code Skills

Two slash-command workflows in .claude/skills/, available automatically in this repo:

| Skill | What it does |
| --- | --- |
| `/eval` | Score traces or compare sessions against a golden reference |
| `/inspect` | Turn-by-turn narrative of a live session with anomaly detection |

Docs

| Guide | Description |
| --- | --- |
| Eval Set Format | Schema, field reference, and examples for golden eval set JSON files |
| Custom Evaluators | Write your own scoring logic in Python, JavaScript, or any language |
| Live Streaming | Real-time trace streaming, dev server setup, and session management |
| OpenTelemetry Compatibility | Supported OTel conventions, message delivery mechanisms, and OTLP receiver |

Development

uv run pytest                      # run tests
uv run agentevals serve --dev      # backend
cd ui && npm run dev               # frontend (separate terminal)

See DEVELOPMENT.md for build tiers, Makefile targets, and Nix setup. To contribute, see CONTRIBUTING.md.

FAQ

How does this compare to ADK's evaluations? Unlike ADK's LocalEvalService, which couples agent execution with evaluation, agentevals only handles scoring: it takes pre-recorded traces and compares them against expected behavior using metrics like tool trajectory matching, response quality, and LLM-based judgments.

However, if you're iterating on your agents locally, you can point them at agentevals to see rich runtime information in your browser. For more details, use the bundled wheel and explore the Local Development option in the UI.

How does this compare to Bedrock AgentCore's evaluation? AgentCore's evaluation integration (via strands-agents-evals) also couples agent execution with evaluation. It re-invokes the agent for each test case, converts the resulting OTel spans to AWS's ADOT format, and scores them against 4 built-in evaluators (Helpfulness, Accuracy, Harmfulness, Relevance) via a cloud API call. This means you need an AWS account, valid credentials, and network access for every evaluation.

agentevals takes a different approach: it scores pre-recorded traces locally without re-running anything. It works with standard Jaeger JSON and OTLP formats from any framework, supports open-ended metrics (tool trajectory matching, LLM-based judges, custom scorers), and ships with a CLI, web UI, and MCP server. No cloud dependency is required, though it does currently include all of ADK's GCP-based evals.
