MCP server for agentic LLM evaluation: jury scoring, agent tracing via OpenTelemetry, document-grounded QA generation, PDF reports.

Agentic AI-Guided Evaluation Platform

An LLM evaluation system where you describe what you want to evaluate in natural language — an expert AI agent handles dataset generation, judge configuration, execution, and analysis end-to-end, and hands you back a PDF report.

Features

  • Expert agent interface — The agent knows evaluation best practices, recommends criteria, and validates configurations before execution. No config files or CLI expertise needed.
  • Jury system — Multiple judges from different model families (e.g. Claude Sonnet, Nova Pro, Nemotron) each evaluate distinct aspects of every response: correctness, reasoning, completeness. Combining diverse judge families reduces self-preference bias, and aggregating weak signals from diverse judges and criteria produces stronger results than any single judge (Verma et al., 2025; Frick et al., 2025). A minimal aggregation sketch follows this list.
  • Adaptable binary scoring — Binary pass/fail per criterion rather than subjective numeric scales, which has been shown to produce more reliable results across judges (Chiang et al., 2025). The agent tailors criteria to what you're evaluating.
  • Document-grounded synthetic data — Upload PDFs, knowledge bases, or product docs and generate QA pairs grounded in your actual content, reflecting real customer scenarios.
  • Agentic eval support — Evaluate any agent calling Bedrock (Strands, LangChain, custom boto3) with zero code modification via OpenTelemetry instrumentation.
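
To make the jury and binary scoring concrete, here is a minimal sketch of majority-vote aggregation over binary verdicts. The judge names, criteria, and votes are hypothetical, and the platform's actual aggregation logic may differ:

# Hypothetical binary verdicts: one pass/fail per judge per criterion.
verdicts = {
    "claude-sonnet": {"correctness": True, "reasoning": True, "completeness": False},
    "nova-pro": {"correctness": True, "reasoning": False, "completeness": True},
    "nemotron": {"correctness": True, "reasoning": True, "completeness": True},
}

def jury_pass(criterion: str) -> bool:
    """A criterion passes when a strict majority of judges vote pass."""
    votes = [judge_votes[criterion] for judge_votes in verdicts.values()]
    return sum(votes) > len(votes) / 2  # True counts as 1

for criterion in ("correctness", "reasoning", "completeness"):
    print(criterion, "PASS" if jury_pass(criterion) else "FAIL")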

Quick Start

Prerequisites

  • AWS credentials with Bedrock model access
  • uv installed
  • Claude Code, Cursor, Kiro, VS Code, or any MCP-compatible IDE

Install

Pick your IDE and paste the snippet (or click the link).

Claude Code — one CLI command:

claude mcp add-json eval '{"type":"stdio","command":"uvx","args":["--from","llm-evaluation-system","eval-mcp"]}' --scope user

Cursor — one-click deeplink: Install eval-mcp in Cursor

Kiro — add to ~/.kiro/settings/mcp.json:

{
  "mcpServers": {
    "eval": {
      "command": "uvx",
      "args": ["--from", "llm-evaluation-system", "eval-mcp"]
    }
  }
}

Codex CLI — add to ~/.codex/config.toml, then restart Codex:

[mcp_servers.eval]
command = "uvx"
args = ["--from", "llm-evaluation-system", "eval-mcp"]

VS Code (with GitHub Copilot MCP) — one CLI command:

code --add-mcp '{"name":"eval","command":"uvx","args":["--from","llm-evaluation-system","eval-mcp"]}'

Using a coding agent to install? Point it at INSTALL.md — it handles the config edit, warms the uvx cache, and asks about optional S3 team sharing.

Use

Ask your AI assistant to evaluate agents, models, or prompts — using a dataset you provide or one generated from your documents or context:

  • "Evaluate my agent at ./my_agent.py"
  • "Compare Claude Sonnet vs Nova Pro on this dataset"
  • "Test these three prompt templates against my golden QA set"
  • "Generate a dataset from this PDF and run an eval"

The agent picks the right mode, auto-generates whatever's missing (dataset, judge, criteria), runs it, opens the results viewer in your browser, and hands you a PDF report.
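
For instance, the ./my_agent.py from the first example could be nothing more than a plain boto3 call to Bedrock, since tracing comes from OpenTelemetry instrumentation rather than code changes. A hypothetical minimal agent (the model ID is only an example):

import boto3

client = boto3.client("bedrock-runtime")

def answer(question: str) -> str:
    """Send one question to Bedrock via the Converse API and return the reply."""
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(answer("What is our refund policy?"))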

Team Sharing (S3)

Share datasets, judges, configs, and eval results across your team via a shared S3 bucket. No servers needed.

Setup

uvx --from llm-evaluation-system eval-mcp init my-team-evals

User identity is auto-detected from your AWS credentials. Projects are auto-discovered from the bucket.
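
As one illustration of what auto-detection could look like (an assumption about the mechanism, not documented behavior), a username can be derived from the STS caller identity:

import boto3

# e.g. arn:aws:iam::123456789012:user/alice -> "alice"
arn = boto3.client("sts").get_caller_identity()["Arn"]
username = arn.split("/")[-1]
print(username)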

How it works

s3://my-team-evals/
  users/alice/            ← Alice's evals, datasets, judges, configs (auto-replicated on every write)
  users/bob/              ← Bob's
  projects/project-alpha/ ← shared team evals
  projects/project-beta/  ← shared team evals

  • Every write (eval result, dataset, judge, config, PDF report) auto-replicates to users/{you}/ in the background (see the sketch after this list)
  • Every list/read auto-pulls from S3 first (debounced), so your local state mirrors S3
  • eval-mcp share my-project → promote your artifacts to a shared project prefix
  • eval-mcp sync → manual reconcile (for long offline periods or a fresh laptop)
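
The replication idea itself is simple. The sketch below is an assumption about how it could work, not the tool's actual code: after each local write, mirror the artifact into your users/ prefix.

import boto3
from pathlib import Path

s3 = boto3.client("s3")
BUCKET = "my-team-evals"
USER = "alice"  # the real tool detects this from your AWS credentials

def replicate(local_path: str) -> None:
    """Mirror a locally written artifact (eval result, dataset, ...) to S3."""
    key = f"users/{USER}/{Path(local_path).name}"
    s3.upload_file(local_path, BUCKET, key)

# replicate("results/eval-001.json")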

Create the bucket

One person on the team runs this once:

git clone https://github.com/awslabs/llm-evaluation-system.git
cd llm-evaluation-system/infra/modules/eval-logs-bucket
terraform init
terraform apply -var="bucket_name=my-team-evals"

Deploy Full Platform on EKS

For a multi-user web app with Cognito auth, chat UI, and per-user isolation, the repo also ships an EKS deployment. This is the heavyweight path; for most users, the MCP server above is enough.

./deploy.sh

The script auto-installs Terraform, kubectl, and Helm, then deploys the complete platform (Cognito auth, CloudFront, WAF, per-user isolation).

User Management

./manage-users.sh create user@example.com
./manage-users.sh list
./manage-users.sh delete user@example.com

Teardown

./destroy.sh

Architecture details, OIDC config, Helm values, and manual deployment steps: docs/DEVELOPMENT.md.

Contributing / Local Development

See docs/DEVELOPMENT.md for how to clone, run from source, rebuild the viewer frontend, and contribute.

Acknowledgments

Built on Inspect AI by the UK AI Security Institute.

Legal Disclaimer

Sample code, software libraries, command line tools, proofs of concept, templates, or other related technology are provided as AWS Content or Third-Party Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content or Third-Party Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content or Third-Party Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content or Third-Party Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
