
Causal Judge Evaluation - Unbiased LLM evaluation framework



CJE - Causal Judge Evaluation

Your LLM judge scores are noisy and biased. CJE calibrates them to what actually matters.



Quick Start

pip install cje-eval
from cje import analyze_dataset

results = analyze_dataset(
    fresh_draws_data={
        "gpt-4o": [
            {"prompt_id": "eval_001", "judge_score": 0.85, "oracle_label": 0.9},
            {"prompt_id": "eval_002", "judge_score": 0.72, "oracle_label": 0.7},
            {"prompt_id": "eval_003", "judge_score": 0.68},
            {"prompt_id": "eval_004", "judge_score": 0.79},
        ],
        "claude-sonnet": [
            {"prompt_id": "eval_001", "judge_score": 0.78, "oracle_label": 0.82},
            {"prompt_id": "eval_002", "judge_score": 0.81, "oracle_label": 0.79},
            {"prompt_id": "eval_003", "judge_score": 0.75},
            {"prompt_id": "eval_004", "judge_score": 0.83},
        ],
    }
)

results.plot_estimates(save_path="ranking.png")  # requires pip install "cje-eval[viz]"

CJE learns the judge→oracle mapping from labeled samples and applies it everywhere. Label 5–25% of samples with your oracle (human raters, strong model, downstream metric). Any bounded scale works automatically (0–1, 0–100, Likert 1–5).
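CJE's actual calibration procedure is described in the paper and docs; as a minimal sketch of the underlying idea only (a monotone judge→oracle map fit on the labeled slice via pool-adjacent-violators, then applied to every judge score), assuming nothing about CJE's internals:

```python
import numpy as np

def fit_isotonic(judge, oracle):
    """Pool-adjacent-violators: fit a nondecreasing map from judge scores
    to oracle labels on the labeled subset."""
    order = np.argsort(judge)
    x = np.asarray(judge, dtype=float)[order]
    y = np.asarray(oracle, dtype=float)[order]
    vals, wts, cnt = [], [], []          # pooled block values, weights, sizes
    for yi in y:
        vals.append(yi); wts.append(1.0); cnt.append(1)
        # Merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-1] + wts[-2]
            v = (vals[-1] * wts[-1] + vals[-2] * wts[-2]) / w
            n = cnt[-1] + cnt[-2]
            vals[-2:], wts[-2:], cnt[-2:] = [v], [w], [n]
    fitted = np.repeat(vals, cnt)        # calibrated value at each knot
    return x, fitted

def apply_map(x_knots, y_knots, scores):
    """Apply the learned map to unlabeled judge scores (clamps outside the range)."""
    return np.interp(scores, x_knots, y_knots)

# Fit on the labeled 5-25% slice, then calibrate every score
x_knots, y_knots = fit_isotonic([0.1, 0.2, 0.3, 0.4], [0.0, 0.5, 0.4, 1.0])
calibrated = apply_map(x_knots, y_knots, [0.25, 0.05, 0.45])
```

Because the map is fit on (judge, oracle) pairs directly, any bounded oracle scale works without manual rescaling; isotonic regression only assumes the oracle is monotone in the judge score.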

Default workflow: If you can generate fresh responses on a shared prompt set, use Direct + two-stage calibration. Use IPS/DR only when you truly need off-policy estimation and overlap diagnostics look healthy enough to trust reweighting.
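CJE ships its own overlap diagnostics; purely as a generic illustration of what "healthy enough to trust reweighting" means (not CJE's implementation), one standard check is the effective sample size of the importance weights:

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum(w^2). Near n means near-uniform weights (good
    overlap); near 1 means a few samples dominate and IPS/DR is fragile."""
    w = np.exp(log_weights - np.max(log_weights))  # shift for numerical stability
    return w.sum() ** 2 / (w * w).sum()

# Healthy overlap: identical policies give uniform weights, so ESS == n
ess_uniform = effective_sample_size(np.zeros(100))

# Poor overlap: one log-weight dominates, so ESS collapses toward 1
ess_skewed = effective_sample_size(np.array([10.0, 0.0, 0.0, 0.0]))
```

If ESS is a small fraction of n, the reweighted estimate rests on a handful of samples, which is exactly when the Direct approach on fresh draws is the safer default.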

What CJE covers: reward calibration, calibration-aware inference, transport audits, and overlap diagnostics for counterfactual OPE.


Real-World Validation

We ran CJE on 29,511 physician-labeled HealthBench records with two LLM judges. Both judges were overconfident — by 24.5 pp and 13.0 pp respectively — and disagreed with each other by up to 73 percentage points on specific criteria categories. After calibration with just 5% oracle labels (~1,400 records), both converged to the physician ground truth.
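The HealthBench numbers above come from the linked audit; to make the metric concrete, "overconfidence" here is the judge's mean score minus the oracle's mean label, expressed in percentage points. A toy computation with made-up numbers (not HealthBench data):

```python
def overconfidence_pp(judge_scores, oracle_labels):
    """Gap in percentage points between mean judge score and mean oracle label.
    Positive values mean the judge is overconfident on average."""
    n = len(judge_scores)
    return 100 * (sum(judge_scores) / n - sum(oracle_labels) / n)

# Hypothetical scores: judge averages 0.85, oracle averages 0.60 -> +25.0 pp
gap = overconfidence_pp([0.9, 0.8, 0.85], [0.6, 0.55, 0.65])
```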

Read the full HealthBench audit →

[Figure: CJE forest plot — example output showing calibrated policy estimates with valid confidence intervals]

Documentation

Interactive Tutorial: Walk through a complete example in Colab, no setup required.
CJE in 3 Minutes: Video on why raw judge scores mislead and how CJE fixes it.
Technical Walkthrough: Video covering the calibration, evaluation, and transport auditing pipeline.
Operational Playbook: End-to-end runbook for audits, drift correction, and label budgeting.
Planning Notebook: Optimize your evaluation budget with pilot data.
Full Docs: Installation, assumptions, API reference, research notes.

Bridges: Already running evals in Promptfoo, TruLens, LangSmith, OpenCompass, or Inspect AI? Convert those outputs into CJE format with one command.

Technical deep dives: Calibration methods · Diagnostics · Estimators · Interface/API · Experiments


Development

git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test

Citation

If you use CJE in your research, please cite:

@misc{landesberg2025causaljudgeevaluationcalibrated,
  title={Causal Judge Evaluation: Calibrated Surrogate Metrics for LLM Systems},
  author={Eddie Landesberg},
  year={2025},
  eprint={2512.11150},
  archivePrefix={arXiv},
  primaryClass={stat.ME},
  url={https://arxiv.org/abs/2512.11150},
}

License

MIT — See LICENSE for details.

