
CJE - Causal Judge Evaluation

Your LLM judge scores are lying. CJE calibrates them to what actually matters.


We ran 16,000+ tests on Chatbot Arena data. Without calibration, 95% confidence intervals captured the true value 0% of the time. With CJE: 99% ranking accuracy using just 5% oracle labels, at 14× lower cost.


Quick Start

pip install cje-eval

from cje import analyze_dataset

# Point to your response files (one JSONL per policy)
results = analyze_dataset(fresh_draws_dir="data/responses/")

# Get calibrated estimates with valid confidence intervals
results.plot_estimates(
    policy_labels={"prompt_v1": "Conversational tone", ...},
    save_path="ranking.png"
)

Data format (one JSONL file per policy):

{"prompt_id": "1", "judge_score": 0.85, "oracle_label": 0.9}
{"prompt_id": "2", "judge_score": 0.72}

Only 5-25% of samples need oracle labels. CJE learns the judge→oracle mapping and applies it everywhere.
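For instance, you can emit these files straight from your existing eval logs. A minimal sketch, assuming a hypothetical in-memory records dict keyed by policy name (the record fields and the data/responses/ layout come from the Quick Start above):

import json
from pathlib import Path

out_dir = Path("data/responses")
out_dir.mkdir(parents=True, exist_ok=True)

# One JSONL file per policy. oracle_label appears only on the labeled slice.
records = {
    "prompt_v1": [
        {"prompt_id": "1", "judge_score": 0.85, "oracle_label": 0.9},
        {"prompt_id": "2", "judge_score": 0.72},
    ],
}

for policy, rows in records.items():
    with open(out_dir / f"{policy}.jsonl", "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")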


Why You Need This

Uncalibrated LLM-as-judge evaluation has two systematic failure modes:

| Failure mode | What happens | Evidence |
| --- | --- | --- |
| Invalid confidence intervals | Your error bars don't work | "95% confident" intervals covered the truth 0% of the time |
| Hidden scale distortion | Judge scores ≠ oracle scores | Calibration cut prediction error by 72% |

With 0% CI coverage, you can't trust any A/B test conclusion. Rankings improve too (91% → 99%), but the uncertainty problem is universal.

CJE fixes both by treating your judge as a sensor that must be calibrated against ground truth, then propagating calibration uncertainty into valid confidence intervals.
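To see the intuition, here is a toy sketch of those two steps using scikit-learn's IsotonicRegression: learn a monotone judge→oracle map on the labeled slice, then bootstrap that slice to propagate calibration uncertainty into the interval. This illustrates the idea only; it is not CJE's actual estimator:

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Toy data: a miscalibrated judge, with oracle labels on a 5% slice (250 of 5000).
judge = rng.uniform(0, 1, 5000)
oracle = np.clip(judge**2 + rng.normal(0, 0.05, 5000), 0, 1)
labeled = rng.choice(5000, size=250, replace=False)

# Step 1: learn the monotone judge -> oracle mapping on the labeled slice,
# then apply it to every sample.
iso = IsotonicRegression(out_of_bounds="clip").fit(judge[labeled], oracle[labeled])
estimate = iso.predict(judge).mean()

# Step 2: propagate calibration uncertainty by refitting the mapping on
# bootstrap resamples of the labeled slice.
boots = []
for _ in range(200):
    idx = rng.choice(labeled, size=labeled.size, replace=True)
    refit = IsotonicRegression(out_of_bounds="clip").fit(judge[idx], oracle[idx])
    boots.append(refit.predict(judge).mean())
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"estimate={estimate:.3f}  95% CI=({lo:.3f}, {hi:.3f})")

The key point: the raw mean of judge scores is on the wrong scale and its naive CI is too narrow; calibrating against the oracle fixes the scale, and resampling the labeled slice widens the interval to account for the mapping itself being estimated.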

Read the full explanation →


The Results

We tested on 5,000 Chatbot Arena prompts with GPT-5 as the oracle (ground truth) and GPT-4.1-nano as the cheap judge:

| Without CJE | With CJE |
| --- | --- |
| Rankings correct 91% of the time | Rankings correct 99% of the time |
| Error bars contain truth 0% of the time | Error bars contain truth 87% of the time |
| Need 100% oracle labels | Need only 5% oracle labels |
| Full labeling cost | 14× cheaper |

Label ~250 samples with your oracle (human raters, downstream KPIs, expensive model). CJE learns the judge→oracle mapping and applies it to everything else.
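If you have not labeled anything yet, a uniform random slice is a reasonable starting point. A sketch (the file path follows the Quick Start layout; my_oracle is a placeholder you would replace with your own labeling step):

import json
import random

random.seed(0)

def my_oracle(row):
    # Placeholder: call your human-rating pipeline or expensive model here.
    ...

# Load one policy's responses and pick ~250 prompts to send to the oracle.
with open("data/responses/prompt_v1.jsonl") as f:
    rows = [json.loads(line) for line in f]

for row in random.sample(rows, k=min(250, len(rows))):
    row["oracle_label"] = my_oracle(row)

with open("data/responses/prompt_v1.jsonl", "w") as f:
    f.writelines(json.dumps(row) + "\n" for row in rows)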

Already using an expensive model for evals? Switch to a 10-30× cheaper judge + CJE calibration. Same accuracy, fraction of the inference cost.

Example output: comparing prompt variants with calibrated confidence intervals

Read the full Arena Experiment → · Paper (Zenodo)


Monitoring Calibration Over Time

Calibration can drift. Periodically verify it still holds with a small probe:

from cje import analyze_dataset
from cje.diagnostics import audit_transportability

# results.calibrator is fitted automatically during analysis
results = analyze_dataset(fresh_draws_dir="responses/")

# this_week_samples: recent records carrying both judge_score and oracle_label
# (aim for 50+ oracle labels in the probe)
diag = audit_transportability(results.calibrator, this_week_samples)
print(diag.summary())
# Status: PASS | Samples: 48 | Mean error: +0.007 (CI: -0.05 to +0.06)

PASS means your calibration is still valid. FAIL means something changed — investigate or recalibrate.
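For recurring checks, you can wrap the probe in a small job that alerts on FAIL. A minimal sketch: notify_oncall is a placeholder for your alerting, the PASS/FAIL check assumes the summary string carries the status as in the example above, and only analyze_dataset, audit_transportability, and summary() are the calls shown there:

from cje import analyze_dataset
from cje.diagnostics import audit_transportability

def notify_oncall(report: str) -> None:
    # Placeholder alert hook: swap in Slack, PagerDuty, email, etc.
    print(f"[ALERT] calibration drift suspected:\n{report}")

def weekly_calibration_check(fresh_draws_dir: str, probe_samples) -> str:
    """Refit on current responses, then probe transportability on fresh labels."""
    results = analyze_dataset(fresh_draws_dir=fresh_draws_dir)
    diag = audit_transportability(results.calibrator, probe_samples)
    report = diag.summary()
    if "FAIL" in report:
        notify_oncall(report)
    return report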


Try It Now

Open the interactive tutorial in Google Colab →

Walk through a complete example: compare prompt variants, check if calibration transfers, inspect what's fooling the judge, and monitor drift over time. No setup required.


Documentation

Technical Guides

Examples & Data


Development

git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test

Support

License

MIT — See LICENSE for details.
