
CJE - Causal Judge Evaluation

Your LLM judge scores are lying. CJE calibrates them to what actually matters.


[Figure: the CJE pipeline]

Quick Start

pip install cje-eval

from cje import analyze_dataset

# Point to your response files (one JSONL per policy)
results = analyze_dataset(fresh_draws_dir="data/responses/")

# Get calibrated estimates with valid confidence intervals
results.plot_estimates(
    policy_labels={"prompt_v1": "Conversational tone", ...},
    save_path="ranking.png"
)

Data format (one JSONL file per policy):

{"prompt_id": "1", "judge_score": 0.85, "oracle_label": 0.9}
{"prompt_id": "2", "judge_score": 0.72}

Only 5-25% of samples need oracle labels. CJE learns the judge→oracle mapping and applies it everywhere.
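
The mapping itself is just a regression from judge scores to oracle labels, fit on the labeled slice and applied everywhere. Here is a minimal sketch of the idea using isotonic regression; this is illustrative only, not CJE's internal implementation (the real pipeline also produces the calibrated confidence intervals), and the file path is borrowed from the Quick Start.

# Illustrative sketch only -- NOT CJE's internals. Shows the concept:
# fit a judge->oracle mapping on the labeled slice, apply it to everything.
import json
from sklearn.isotonic import IsotonicRegression

with open("data/responses/prompt_v1.jsonl") as f:
    records = [json.loads(line) for line in f]

# Fit a monotone judge->oracle mapping on the small labeled subset
labeled = [r for r in records if "oracle_label" in r]
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit([r["judge_score"] for r in labeled],
        [r["oracle_label"] for r in labeled])

# Apply it to all judge scores, labeled or not
calibrated = iso.predict([r["judge_score"] for r in records])
print(f"Mean calibrated score: {calibrated.mean():.3f}")

In practice you just call analyze_dataset and let CJE do this, with honest uncertainty on top.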


Why You Need This

Raw LLM judge scores suffer from systematic biases that make your metrics unreliable:

  • Preference inversion: Higher scores often predict lower real-world quality
  • Invalid confidence intervals: Standard error bars yield 0% coverage
  • Scale arbitrariness: Is "4.2" actually better than "4.0"?

CJE fixes this by treating your judge as a sensor that must be calibrated against ground truth.

Read the full explanation →


The Proof

We benchmarked 14 estimators on 5,000 real Chatbot Arena prompts, using GPT-5 as the oracle:

[Figure: CJE calibration accuracy (illustrative output comparing prompt variants)]

  • Raw judges: 0% CI coverage — error bars were mathematical lies
  • CJE (Direct + Two-Stage): 99% ranking accuracy with just 5% oracle labels

Cost savings: CJE achieves oracle-quality rankings at 14× lower cost by calibrating a cheap judge (~250 labels) instead of labeling everything.
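
As a back-of-envelope sanity check on that figure (the per-label prices below are assumptions for illustration; only the prompt and label counts come from the experiment):

# Back-of-envelope cost sketch. Prices are ASSUMED for illustration;
# only the prompt and label counts come from the experiment above.
n_prompts = 5_000
oracle_price = 0.50   # assumed $/oracle label (frontier model + rubric)
judge_price = 0.01    # assumed $/judge score from a cheap model

label_everything = n_prompts * oracle_price                  # $2,500
cje_pipeline = n_prompts * judge_price + 250 * oracle_price  # $50 + $125 = $175
print(f"{label_everything / cje_pipeline:.0f}x cheaper")     # ~14x

With those assumed prices the ratio lands at roughly 14x; your own savings depend entirely on the oracle/judge price gap.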

Read the full Arena Experiment → · Paper (Zenodo)


Monitoring Calibration Over Time

Calibration can drift. Periodically verify it still holds with a small probe:

from cje import analyze_dataset
from cje.diagnostics import audit_transportability

# results.calibrator is automatically fitted during analysis
results = analyze_dataset(fresh_draws_dir="responses/")

# Probe with this week's data: `this_week_samples` holds 40-60 freshly
# oracle-labeled responses (loading them is up to you)
diag = audit_transportability(results.calibrator, this_week_samples)
print(diag.summary())
# Transport: PASS | N=48 | δ̂: +0.007 (CI: [-0.05, +0.06])

[Figure: temporal monitoring of calibration]

PASS means your calibration is still valid. FAIL means something changed — investigate or recalibrate.
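
If you run the probe on a schedule, the natural next step is to branch on the outcome. A minimal sketch, assuming you can pull roughly 50 freshly labeled samples each week: fetch_labeled_samples is a hypothetical placeholder for your own labeling pipeline, and the PASS/FAIL check keys off the summary() string shown above rather than a confirmed structured field.

# Sketch of a recurring probe. `fetch_labeled_samples` is a HYPOTHETICAL
# placeholder; `audit_transportability` is the real entry point used above.
from cje.diagnostics import audit_transportability

def fetch_labeled_samples(n=50):
    """Hypothetical: pull this week's responses with fresh oracle labels."""
    raise NotImplementedError("wire up your labeling pipeline here")

def weekly_probe(calibrator):
    samples = fetch_labeled_samples(n=50)
    diag = audit_transportability(calibrator, samples)
    summary = diag.summary()
    print(summary)
    if "FAIL" in summary:
        # Drift detected: investigate, or refit on fresh oracle labels
        print("Calibration drifted; recalibrate before trusting estimates.")
    return diag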


Try It Now

Open the interactive tutorial in Google Colab →

Walk through a complete example: compare prompt variants, check if calibration transfers, inspect what's fooling the judge, and monitor drift over time. No setup required.


Documentation

  • Technical Guides
  • Examples & Data


Development

git clone https://github.com/cimo-labs/cje.git
cd cje && poetry install && make test

Support

License

MIT — See LICENSE for details.

