CJE - Causal Judge Evaluation
Your LLM judge scores are lying. CJE calibrates them to what actually matters.
We ran 16,000+ tests on Chatbot Arena data. Without calibration, 95% confidence intervals captured the true value 0% of the time. With CJE: 99% ranking accuracy using just 5% oracle labels, at 14× lower cost.
Quick Start
pip install cje-eval
from cje import analyze_dataset
# Point to your response files (one JSONL per policy)
results = analyze_dataset(fresh_draws_dir="data/responses/")
# Get calibrated estimates with valid confidence intervals
results.plot_estimates(
    policy_labels={"prompt_v1": "Conversational tone", ...},
    save_path="ranking.png"
)
Data format (one JSONL file per policy):
{"prompt_id": "1", "judge_score": 0.85, "oracle_label": 0.9}
{"prompt_id": "2", "judge_score": 0.72}
Only 5-25% of samples need oracle labels. CJE learns the judge→oracle mapping and applies it everywhere.
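For instance, a minimal script for emitting this format might look like the sketch below, where score_with_judge and get_oracle_label are hypothetical stand-ins for your own judge and oracle calls:

import json
import random

def score_with_judge(response): ...   # hypothetical: call your cheap judge
def get_oracle_label(response): ...   # hypothetical: call your expensive oracle

def write_policy_file(path, responses, oracle_fraction=0.10):
    """Write one JSONL record per response; oracle-label a random slice."""
    with open(path, "w") as f:
        for i, response in enumerate(responses):
            record = {"prompt_id": str(i), "judge_score": score_with_judge(response)}
            # Only a small random fraction gets the expensive oracle label.
            if random.random() < oracle_fraction:
                record["oracle_label"] = get_oracle_label(response)
            f.write(json.dumps(record) + "\n")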
Why You Need This
Uncalibrated LLM-as-judge evaluation has two systematic failure modes:
| Failure Mode | What Happens | Evidence |
|---|---|---|
| Invalid confidence intervals | Your error bars don't work | "95%" intervals covered the truth 0% of the time |
| Hidden scale distortion | Judge scores ≠ oracle scores | Calibration cut prediction error by 72% |
With 0% CI coverage, you can't trust any A/B test conclusion. Rankings improve too (91% → 99%), but the uncertainty problem is universal.
CJE fixes both by treating your judge as a sensor that must be calibrated against ground truth, then propagating calibration uncertainty into valid confidence intervals.
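To make the sensor analogy concrete, here is a toy simulation (plain scikit-learn isotonic regression, not CJE's actual estimator): a judge with a compressed, shifted scale badly biases the naive mean, while a judge→oracle mapping fit on a 5% labeled slice recovers it:

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
oracle = rng.beta(8, 2, size=5_000)        # true quality (mean ~0.80)
# The judge sees quality through a compressed, shifted scale plus noise.
judge = np.clip(0.5 * oracle + 0.30 + rng.normal(0, 0.05, size=5_000), 0, 1)

print(f"oracle mean: {oracle.mean():.3f}")       # ~0.80
print(f"raw judge mean: {judge.mean():.3f}")     # ~0.70 -- systematically off

# Fit the judge->oracle mapping on a random 5% labeled slice, apply everywhere.
labeled = rng.choice(5_000, size=250, replace=False)
iso = IsotonicRegression(out_of_bounds="clip").fit(judge[labeled], oracle[labeled])
print(f"calibrated mean: {iso.predict(judge).mean():.3f}")  # ~0.80 again

CJE additionally propagates the calibration uncertainty into the confidence intervals, which is where the valid error bars come from.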
The Results
We tested on 5,000 Chatbot Arena prompts with GPT-5 as the oracle (ground truth) and GPT-4.1-nano as the cheap judge:
| Without CJE | With CJE |
|---|---|
| Rankings correct 91% of the time | Rankings correct 99% of the time |
| Error bars contain truth 0% of the time | Error bars contain truth 87% of the time |
| Need 100% oracle labels | Need only 5% oracle labels |
| Full labeling cost | 14× cheaper |
Label ~250 samples with your oracle (human raters, downstream KPIs, expensive model). CJE learns the judge→oracle mapping and applies it to everything else.
Already using an expensive model for evals? Switch to a 10-30× cheaper judge + CJE calibration. Same accuracy, fraction of the inference cost.
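The arithmetic behind the savings, with illustrative placeholder prices (the exact multiple depends on your oracle and judge price points):

n = 5_000          # evaluation prompts
c_oracle = 0.05    # illustrative $ per oracle label (frontier model / human rater)
c_judge = 0.002    # illustrative $ per cheap-judge score

full_labeling = n * c_oracle                    # oracle-label everything
cje_cost = n * c_judge + 0.05 * n * c_oracle    # judge everything + 5% oracle slice
print(f"{full_labeling / cje_cost:.1f}x cheaper")  # ~11x with these placeholders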
Example output: comparing prompt variants with calibrated confidence intervals
Read the full Arena Experiment → · Paper (Zenodo)
Monitoring Calibration Over Time
Calibration can drift. Periodically verify it still holds with a small probe:
from cje import analyze_dataset
from cje.diagnostics import audit_transportability
# results.calibrator is automatically fitted during analysis
results = analyze_dataset(fresh_draws_dir="responses/")
# Check if calibration still works on this week's data (50+ oracle labels)
diag = audit_transportability(results.calibrator, this_week_samples)
print(diag.summary())
# Status: PASS | Samples: 48 | Mean error: +0.007 (CI: -0.05 to +0.06)
PASS means your calibration is still valid. FAIL means something changed — investigate or recalibrate.
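The probe snippet above leaves this_week_samples undefined. Assuming it accepts records in the same JSONL schema as the training data (a reasonable guess, not confirmed API), assembling it might look like:

import json

def load_jsonl(path):
    """Hypothetical helper: one JSON record per line."""
    with open(path) as f:
        return [json.loads(line) for line in f]

# ~50 freshly oracle-labeled samples from this week's traffic, same schema
# as the training data: {"prompt_id", "judge_score", "oracle_label"}
this_week_samples = load_jsonl("probes/week_08.jsonl")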
Try It Now
Open the interactive tutorial in Google Colab →
Walk through a complete example: compare prompt variants, check if calibration transfers, inspect what's fooling the judge, and monitor drift over time. No setup required.
Documentation
Technical Guides
- Calibration Methods — AutoCal-R, isotonic regression, two-stage
- Diagnostics System — Uncertainty quantification, transportability
- Estimators — Direct, IPS, DR implementations
- Interface/API — analyze_dataset implementation
Examples & Data
- Examples Folder — Working code samples
- Arena Sample Data — Real-world test data
Development
git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test
Support
Questions or bug reports? Open an issue on the GitHub repository.
License
MIT — See LICENSE for details.