
CJE - Causal Judge Evaluation

Your LLM judge scores are noisy and weakly calibrated. CJE calibrates them to what actually matters.


We ran 16,000+ tests on Chatbot Arena data. Without calibration, 95% confidence intervals captured the true value 0% of the time. CJE restores reliable uncertainty and ranking decisions with a small oracle slice.


Quick Start

pip install cje-eval
# Optional (for plotting):
pip install "cje-eval[viz]"

from cje import analyze_dataset

# Compare policies on the same evaluation prompts
# Structure: { policy_name: [samples] }
# Each sample needs: prompt_id, judge_score
# Optional: oracle_label (human ground truth) on 5-25% of samples

results = analyze_dataset(
    fresh_draws_data={
        "gpt-4o": [
            {"prompt_id": "eval_001", "judge_score": 0.85, "oracle_label": 0.9},
            {"prompt_id": "eval_002", "judge_score": 0.72, "oracle_label": 0.7},
            {"prompt_id": "eval_003", "judge_score": 0.68},
            {"prompt_id": "eval_004", "judge_score": 0.79},
        ],
        "claude-sonnet": [
            {"prompt_id": "eval_001", "judge_score": 0.78, "oracle_label": 0.82},
            {"prompt_id": "eval_002", "judge_score": 0.81, "oracle_label": 0.79},
            {"prompt_id": "eval_003", "judge_score": 0.75},
            {"prompt_id": "eval_004", "judge_score": 0.83},
        ],
    }
)

# Or from files: analyze_dataset(fresh_draws_dir="responses/")

# Optional: plotting requires matplotlib (pip install "cje-eval[viz]")
results.plot_estimates(save_path="ranking.png")
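
Point estimates can be read straight off the returned object via results.estimates (used again in the Label Compatibility example below); the per-policy ordering here is an assumption:

print(results.estimates)  # one calibrated estimate per policy (order assumed to follow the input dict)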

CJE learns the judge→oracle mapping from the labeled samples and applies it everywhere. Notation used throughout the docs and playbook: S is the judge score (judge_score); Y is the oracle label (oracle_label).
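
Under the hood, the idea is simple: fit a monotone map from S to Y on the labeled slice, then apply it to every judge score. A minimal sketch of that idea using scikit-learn's isotonic regression (illustrative only, not CJE's actual calibrator):

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Labeled slice: judge scores S with matching oracle labels Y
S = np.array([0.85, 0.72, 0.78, 0.81])
Y = np.array([0.90, 0.70, 0.82, 0.79])

# Fit a monotone judge→oracle map on the slice...
f = IsotonicRegression(out_of_bounds="clip").fit(S, Y)

# ...then apply it to unlabeled judge scores
print(f.predict(np.array([0.68, 0.79])))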

Default recommendation: use Direct mode (the fresh_draws_* arguments) for most evaluation workflows. Advanced note: IPS/DR variants are supported for counterfactual off-policy evaluation (OPE), but they are not part of the default operational loop.


Label Compatibility

CJE automatically handles different label scales without manual preprocessing:

# 0-100 scores work automatically
results = analyze_dataset(
    fresh_draws_data={
        "gpt-4o": [
            {"prompt_id": "1", "judge_score": 85, "oracle_label": 78},  # 0-100 scale
            {"prompt_id": "2", "judge_score": 72, "oracle_label": 65},
        ],
    }
)

# Results are returned in YOUR scale (0-100), not [0,1]
print(results.estimates[0])  # → 73.5 (not 0.735)

Supported: [0,1], 0-100, Likert 1-5, or any bounded range. If values are already in [0,1], no transformation is applied.
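
Conceptually this is just an affine map into [0,1] for calibration and back out for reporting. A sketch of the behavior described above (not CJE's internal code):

def to_unit(x, lo=0.0, hi=100.0):
    return (x - lo) / (hi - lo)

def from_unit(u, lo=0.0, hi=100.0):
    return lo + u * (hi - lo)

print(to_unit(85))       # 0.85
print(from_unit(0.735))  # 73.5, matching the printed estimate above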


Why You Need This

LLM-as-judge gives you rankings. CJE gives you certainty.

Without calibration, you know policy A scored higher than policy B, but you don't know:

  • Is the difference real or noise?
  • How big is the improvement, actually?
  • Have I tested enough samples?
  • Will this hold next week?

CJE answers all of these. Label a small slice with your oracle (human raters, latest SOTA model or AI agent, downstream metric). CJE learns the calibration and applies it everywhere—giving you trustworthy magnitudes, valid confidence intervals, and drift detection.

The result: Make decisions faster, spend less on labeling, and defend your conclusions with real statistics.

Read the full explanation →


The Results

We tested on 5,000 Chatbot Arena prompts with GPT-5 as the oracle (ground truth) and GPT-4.1-nano as the cheap judge:

CJE achieves 99% ranking accuracy using only 5% oracle labels—matching full-oracle performance at 14× lower cost.

Label ~250 samples with your oracle (human raters, downstream KPIs, expensive model). CJE learns the judge→oracle mapping and applies it to everything else. Without calibration, error bars contained the true value 0% of the time. With CJE: ~95%.

Already using an expensive model for evals? Switch to a 10-30× cheaper judge + CJE calibration. Same accuracy, fraction of the inference cost.
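
For intuition on where a figure like 14× comes from, here is a back-of-envelope calculation; the 50:1 oracle-to-judge price ratio is an illustrative assumption, not a measured rate:

n = 5000                   # evaluation prompts, as in the Arena experiment
judge, oracle = 1.0, 50.0  # per-sample prices in arbitrary units (assumption)

full_oracle = n * oracle             # label every sample with the oracle
cje = n * judge + 0.05 * n * oracle  # cheap judge everywhere + 5% oracle slice
print(full_oracle / cje)             # ≈ 14.3× cheaper under these assumptions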

Figure: example CJE output (simulated data, not real model benchmarks).

Read the full Arena Experiment →


Monitoring Calibration Over Time

Calibration can drift. Periodically verify it still holds with a small probe:

from cje import analyze_dataset
from cje.diagnostics import audit_transportability

# results.calibrator is fitted automatically during analysis
results = analyze_dataset(fresh_draws_dir="responses/")

# this_week_samples: this week's samples in the same dict format as
# fresh_draws_data, with both judge_score and oracle_label (50+ labels)
diag = audit_transportability(results.calibrator, this_week_samples)
print(diag.summary())
# Transport: PASS | Group: ... | N=50 | δ̂: +0.012 (CI: [-0.008, +0.032])

PASS means your calibration is still valid. FAIL means something changed — investigate or recalibrate.
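
To act on this automatically, you can gate recalibration on the audit result. A rough sketch that keys off the summary string shown above (the diagnostics object may expose a structured pass/fail field, but only summary() is shown here):

# Recalibrate when the transport audit fails
if "FAIL" in diag.summary():
    # Refit on data that includes this week's oracle labels
    results = analyze_dataset(fresh_draws_dir="responses/")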

Figure: the CJE operational loop (design metrics → sample → fit → precision gate → deploy → monitor → drift gate).

Try It Now

Open the interactive tutorial in Google Colab →

Walk through a complete example: compare policies, check if calibration transfers, inspect what's fooling the judge, and monitor drift over time. No setup required.


Documentation

Planning sample sizes? Use pilot data to optimize your evaluation budget: Planning Notebook

Video Walkthroughs

Technical Guides

Bridges (Promptfoo / TruLens / LangSmith / OpenCompass → CJE)

Note: these bridge converters are repo scripts (they are not installed with pip install cje-eval). Clone the repo to use them:

git clone https://github.com/cimo-labs/cje.git
cd cje

If you already run evals in Promptfoo, TruLens, LangSmith, or OpenCompass, you can convert those outputs into CJE’s fresh_draws_data format.

# Promptfoo
python3 scripts/cje_bridges/convert.py promptfoo results.json \
  --out cje_fresh_draws_data.json \
  --label-template oracle_label_template.csv

# TruLens (install first: pip install trulens)
python3 scripts/cje_bridges/convert.py trulens \
  --database-url sqlite:///default.sqlite \
  --judge-col "Answer Relevance" \
  --out cje_fresh_draws_data.json \
  --label-template oracle_label_template.csv

# LangSmith (install first: pip install langsmith; set LANGSMITH_API_KEY)
python3 scripts/cje_bridges/convert.py langsmith \
  --project "my_model_a_project" \
  --project "my_model_b_project" \
  --feedback-key "correctness" \
  --out cje_fresh_draws_data.json \
  --label-template oracle_label_template.csv

# OpenCompass (LLM-as-judge; run OpenCompass with --dump-eval-details)
python3 scripts/cje_bridges/convert.py opencompass path/to/opencompass_results.json \
  --out cje_fresh_draws_data.json \
  --label-template oracle_label_template.csv

After you label an oracle slice, re-run the converter to populate oracle_label:

  • Promptfoo/TruLens/OpenCompass: pass --oracle-labels <your_labeled_csv_or_jsonl>
  • LangSmith: if labels are stored in LangSmith as feedback, pass --oracle-feedback-key <key>

See: scripts/cje_bridges/README.md
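
The converted file can then be passed straight to analyze_dataset. A minimal sketch, assuming the emitted JSON matches the fresh_draws_data shape from the Quick Start ({policy_name: [samples]}):

import json
from cje import analyze_dataset

# Load the converter output and run the same analysis as in the Quick Start
with open("cje_fresh_draws_data.json") as f:
    fresh = json.load(f)

results = analyze_dataset(fresh_draws_data=fresh)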

Examples & Data


Development

git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test

Support

Citation

If you use CJE in your research, please cite:

@misc{landesberg2025causaljudgeevaluationcalibrated,
  title={Causal Judge Evaluation: Calibrated Surrogate Metrics for LLM Systems},
  author={Eddie Landesberg},
  year={2025},
  eprint={2512.11150},
  archivePrefix={arXiv},
  primaryClass={stat.ME},
  url={https://arxiv.org/abs/2512.11150},
}

License

MIT — See LICENSE for details.
