# CJE - Causal Judge Evaluation
Your LLM judge scores are noisy and biased. CJE calibrates them to what actually matters.
## Quick Start
```bash
pip install cje-eval
```
```python
from cje import analyze_dataset

results = analyze_dataset(
    fresh_draws_data={
        "gpt-4o": [
            {"prompt_id": "eval_001", "judge_score": 0.85, "oracle_label": 0.9},
            {"prompt_id": "eval_002", "judge_score": 0.72, "oracle_label": 0.7},
            {"prompt_id": "eval_003", "judge_score": 0.68},
            {"prompt_id": "eval_004", "judge_score": 0.79},
        ],
        "claude-sonnet": [
            {"prompt_id": "eval_001", "judge_score": 0.78, "oracle_label": 0.82},
            {"prompt_id": "eval_002", "judge_score": 0.81, "oracle_label": 0.79},
            {"prompt_id": "eval_003", "judge_score": 0.75},
            {"prompt_id": "eval_004", "judge_score": 0.83},
        ],
    }
)

results.plot_estimates(save_path="ranking.png")  # requires: pip install "cje-eval[viz]"
```
CJE learns the judge→oracle mapping from labeled samples and applies it everywhere. Label 5–25% of samples with your oracle (human raters, strong model, downstream metric). Any bounded scale works automatically (0–1, 0–100, Likert 1–5).
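CJE's actual two-stage calibration is more involved, but the core idea of learning a monotone judge→oracle mapping from a labeled subset can be sketched with the pool-adjacent-violators (isotonic regression) algorithm. `fit_isotonic` and `calibrate` below are illustrative helpers, not CJE's API; the data reuses the labeled pairs from the Quick Start example:

```python
def fit_isotonic(judge_scores, oracle_labels):
    """Pool Adjacent Violators: learn a non-decreasing judge -> oracle map."""
    pairs = sorted(zip(judge_scores, oracle_labels))
    # Each block: [mean_oracle, weight, rightmost_judge_score]
    blocks = []
    for x, y in pairs:
        blocks.append([y, 1, x])
        # Merge adjacent blocks whenever monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, x2 = blocks.pop()
            m1, w1, _ = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, x2])
    return blocks

def calibrate(score, blocks):
    """Map a raw judge score through the fitted step function."""
    for mean, _, right in blocks:
        if score <= right:
            return mean
    return blocks[-1][0]

# Labeled samples (judge_score, oracle_label) pooled from the example above:
judge = [0.85, 0.72, 0.78, 0.81]
oracle = [0.90, 0.70, 0.82, 0.79]
model = fit_isotonic(judge, oracle)
reward = calibrate(0.68, model)  # calibrated reward for an unlabeled sample
```

Once fitted on the labeled fraction, the same mapping is applied to every unlabeled judge score, which is what lets a small oracle budget cover the full evaluation set.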
Default workflow: If you can generate fresh responses on a shared prompt set, use Direct + two-stage calibration. Use IPS/DR only when you truly need off-policy estimation and overlap diagnostics look healthy enough to trust reweighting.
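For intuition on why overlap matters before trusting IPS/DR: importance sampling reweights logged rewards by the ratio of target-policy to logging-policy probabilities, and a low effective sample size (ESS) relative to n means a few samples dominate the estimate. This is generic off-policy-evaluation arithmetic, not CJE's API:

```python
def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring estimate plus an overlap diagnostic."""
    # Importance weight: how much more likely the target policy is
    # to produce each logged response than the logging policy was.
    weights = [t / b for t, b in zip(target_probs, logging_probs)]
    n = len(rewards)
    estimate = sum(w * r for w, r in zip(weights, rewards)) / n
    # Effective sample size: close to n means healthy overlap;
    # close to 1 means a handful of samples dominate the estimate.
    ess = sum(weights) ** 2 / sum(w * w for w in weights)
    return estimate, ess

rewards = [1, 0, 1, 1]
est, ess = ips_estimate(rewards, [0.5] * 4, [0.5] * 4)
# est = 0.75, ess = 4.0: identical policies, perfect overlap
```

When ESS collapses well below n, reweighting is unreliable and the Direct estimator on fresh draws is the safer default.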
What CJE covers: reward calibration, calibration-aware inference, transport audits, and overlap diagnostics for counterfactual OPE.
## Real-World Validation
We ran CJE on 29,511 physician-labeled HealthBench records with two LLM judges. Both judges were overconfident — by 24.5 pp and 13.0 pp respectively — and disagreed with each other by up to 73 percentage points on specific criteria categories. After calibration with just 5% oracle labels (~1,400 records), both converged to the physician ground truth.
Read the full HealthBench audit →
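The "overconfident by X pp" figures are, in essence, the gap between the mean judge score and the mean oracle label on the labeled subset. A toy illustration with made-up numbers (not the HealthBench data):

```python
# Judge scores and physician (oracle) labels for the same samples:
judge = [0.90, 0.80, 0.95, 0.85]
oracle = [0.70, 0.60, 0.75, 0.65]

# Positive gap => the judge systematically overstates quality.
gap_pp = 100 * (sum(judge) / len(judge) - sum(oracle) / len(oracle))
```

Here the judge runs about 20 pp hot; calibration removes exactly this kind of systematic offset.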
*Example output: calibrated estimates with valid confidence intervals.*
## Documentation
| Resource | Description |
|---|---|
| Interactive Tutorial | Walk through a complete example in Colab — no setup required |
| CJE in 3 Minutes | Video: why raw judge scores mislead and how CJE fixes it |
| Technical Walkthrough | Video: calibration, evaluation, and transport auditing pipeline |
| Operational Playbook | End-to-end runbook: audits, drift correction, label budgeting |
| Planning Notebook | Optimize your evaluation budget with pilot data |
| Full Docs | Installation, assumptions, API reference, research notes |
Bridges: Already running evals in Promptfoo, TruLens, LangSmith, OpenCompass, or Inspect AI? Convert those outputs into CJE format with one command.
Technical deep dives: Calibration methods · Diagnostics · Estimators · Interface/API · Experiments
## Development
```bash
git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test
```
## Citation
If you use CJE in your research, please cite:
```bibtex
@misc{landesberg2025causaljudgeevaluationcalibrated,
  title={Causal Judge Evaluation: Calibrated Surrogate Metrics for LLM Systems},
  author={Eddie Landesberg},
  year={2025},
  eprint={2512.11150},
  archivePrefix={arXiv},
  primaryClass={stat.ME},
  url={https://arxiv.org/abs/2512.11150},
}
```
## License
MIT — See LICENSE for details.