
# autojudge-evaluate

Evaluation tools for the TREC AutoJudge framework. Computes leaderboard correlations, inter-annotator agreement on qrels, and leaderboard statistics, and converts evaluation result files between formats.

## Installation

```shell
uv pip install autojudge-evaluate
```

## CLI Commands

All commands are available via `auto-judge-evaluate <command>`.


### meta-evaluate — Leaderboard correlation

Correlate predicted leaderboards against a ground-truth leaderboard.

```shell
auto-judge-evaluate meta-evaluate \
    --truth-leaderboard truth.eval.jsonl --truth-format jsonl \
    --eval-format tot -i results/*eval.txt \
    --correlation kendall --correlation spearman --correlation tauap_b \
    --truth-measure nugget_coverage --truth-measure f1 \
    --on-missing default \
    --output correlations.jsonl
```

Key options:

| Option | Description |
| --- | --- |
| `--truth-leaderboard FILE` | Ground-truth leaderboard file (required) |
| `--truth-format FMT` | Format: `trec_eval`, `tot`, `ir_measures`, `ranking`, `jsonl` |
| `--eval-format FMT` | Format of input leaderboard files |
| `-i FILE` / positional | Input leaderboard file(s); supports globs. Repeatable |
| `--correlation METHOD` | Correlation method. Repeatable. Supports `kendall`, `pearson`, `spearman`, `tauap_b`, and top-k variants like `kendall@15` |
| `--truth-measure NAME` | Truth measure(s) to correlate against. Repeatable. Omit for all |
| `--eval-measure NAME` | Eval measure(s) to include. Repeatable. Omit for all |
| `--on-missing MODE` | Handle run mismatches: `error`, `warn`, `skip`, `default` (fill 0.0) |
| `--only-shared-topics` | Intersect topics across truth and eval (default: `--all-topics`) |
| `--only-shared-runs` | Intersect runs across truth and eval (default: `--all-runs`) |
| `--truth-drop-aggregate` | Recompute aggregates from per-topic data |
| `--output FILE` | Output `.jsonl` or `.txt` |
| `--out-format FMT` | `jsonl` (default) or `table` |
| `--aggregate` | Report only the mean across all judges |

Output: One row per (Judge, TruthMeasure, EvalMeasure) with correlation values as columns.
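The rank correlations reported here can be illustrated with a small pure-Python sketch of Kendall's tau over two hypothetical run→score leaderboards. This is a simplified tau-a (tied pairs count toward the denominator but neither direction), not necessarily the tie handling the package uses:

```python
from itertools import combinations

def kendall_tau(truth: dict, predicted: dict) -> float:
    """Kendall's tau-a over the runs shared by both leaderboards."""
    runs = sorted(truth.keys() & predicted.keys())
    concordant = discordant = 0
    for a, b in combinations(runs, 2):
        # a pair is concordant when both leaderboards order it the same way
        sign = (truth[a] - truth[b]) * (predicted[a] - predicted[b])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    pairs = len(runs) * (len(runs) - 1) / 2
    return (concordant - discordant) / pairs

# Hypothetical leaderboards: run_id -> mean nugget_coverage
truth = {"run_A": 0.61, "run_B": 0.55, "run_C": 0.40}
predicted = {"run_A": 0.70, "run_B": 0.52, "run_C": 0.58}
print(kendall_tau(truth, predicted))  # run_B/run_C are swapped -> 1/3
```

`meta-evaluate` computes one such correlation per (Judge, TruthMeasure, EvalMeasure) combination; `--on-missing` governs what happens when a run appears on only one side instead of being silently dropped as above.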


### qrel-evaluate — Inter-annotator agreement on qrels

Compare predicted relevance judgments (qrels) against truth qrels. Computes set overlap (precision, recall, F1) and agreement metrics (Cohen's Kappa, Krippendorff's Alpha, Jaccard, ARI).

```shell
auto-judge-evaluate qrel-evaluate \
    --truth-qrels official.qrels \
    --predict-qrels predicted.qrels
```

Key options:

| Option | Description |
| --- | --- |
| `--truth-qrels FILE` | Truth qrels in TREC format |
| `--truth-nugget-docs DIR` | Alternative: truth as nugget-docs directory |
| `--predict-qrels FILE` | Predicted qrels in TREC format |
| `--predict-nugget-docs DIR` | Alternative: predicted as nugget-docs directory |
| `--truth-max-grade N` | Grade scale upper bound for truth (default: 1 = binary) |
| `--predict-max-grade N` | Grade scale upper bound for predicted (default: 1) |
| `--truth-relevance-threshold N` | Binary threshold for truth side (default: 1) |
| `--predict-relevance-threshold N` | Binary threshold for predicted side (default: 1) |
| `--on-missing MODE` | Handle topics present in only one side: `error`, `warn`, `default`, `skip` |
| `--output FILE` | Output `.jsonl` or `.txt` |

Output: Per-topic table with Precision, Recall, F1, Jaccard, Kappa, Krippendorff's Alpha, ARI, plus a MEAN row.
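For intuition, here is a minimal Cohen's Kappa for the binary case, over hypothetical qrels keyed by (topic, doc). The package's actual computation may differ (graded scales, the threshold options above, per-topic grouping):

```python
def cohens_kappa(truth: dict, predicted: dict) -> float:
    """Cohen's kappa for binary judgments over (topic, doc) pairs in both qrels."""
    keys = truth.keys() & predicted.keys()
    n = len(keys)
    p_o = sum(1 for k in keys if truth[k] == predicted[k]) / n  # observed agreement
    # chance agreement from each annotator's marginal label distribution
    p_truth_pos = sum(truth[k] for k in keys) / n
    p_pred_pos = sum(predicted[k] for k in keys) / n
    p_e = p_truth_pos * p_pred_pos + (1 - p_truth_pos) * (1 - p_pred_pos)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary qrels keyed by (topic_id, doc_id)
truth = {("t1", "d1"): 1, ("t1", "d2"): 0, ("t1", "d3"): 1, ("t1", "d4"): 0}
pred  = {("t1", "d1"): 1, ("t1", "d2"): 0, ("t1", "d3"): 0, ("t1", "d4"): 0}
print(cohens_kappa(truth, pred))  # -> 0.5
```

Kappa discounts agreement expected by chance, which is why it can be much lower than raw agreement (0.75 here) when one label dominates.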


### leaderboard — Leaderboard statistics

Compute per-run statistics (mean, stderr, stdev, min, max) from leaderboard files.

```shell
auto-judge-evaluate leaderboard \
    --eval-format tot -i results/*eval.txt --sort
```

Key options:

| Option | Description |
| --- | --- |
| `--eval-format FMT` | Input format (required) |
| `-i FILE` / positional | Input file(s); supports globs. Repeatable |
| `--eval-measure NAME` | Filter to specific measures. Repeatable |
| `--sort` | Sort runs by mean score (descending) |
| `--output FILE` | Output `.jsonl` or `.csv` |

Output: One row per (Judge, RunID, Measure) with Topics, Mean, Stderr, Stdev, Min, Max.
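These columns are standard summary statistics; a sketch over hypothetical per-topic scores might look as follows (assuming Stderr means the sample standard deviation divided by √n, and a zero stdev for a single topic):

```python
import math
import statistics

def run_stats(scores: list[float]) -> dict:
    """Per-run summary statistics, mirroring the leaderboard command's columns."""
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return {
        "Topics": len(scores),
        "Mean": mean,
        "Stderr": stdev / math.sqrt(len(scores)),  # standard error of the mean
        "Stdev": stdev,
        "Min": min(scores),
        "Max": max(scores),
    }

# Hypothetical per-topic scores for one (Judge, RunID, Measure) triple
print(run_stats([0.2, 0.4, 0.6]))
```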


## Analysis Module

Post-hoc analysis of meta-evaluate output: produces correlation tables and bar plots with judge categorization.

```shell
python -m autojudge_evaluate.analysis.correlation_table \
    -d ragtime:ragtime-correlations.jsonl \
    -d rag:rag-correlations.jsonl \
    -d dragun:dragun-correlations.jsonl \
    --judges judges.yml \
    --correlation kendall \
    --truth-measure nugget_coverage \
    --format latex \
    --plot-dir plots/
```

Judge configuration (`judges.yml`) maps cryptic filenames to display names and categories, with optional plot styling:

```yaml
styles:
  colors:
    pointwise: "#4A90D9"
    pairwise:  "#D94A4A"
  hatches:
    gpt-4o:    ""
    llama-3:   "//"

judges:
  my-judge-A.eval:
    name: System A
    method: pointwise     # category column
    model: gpt-4o         # category column
  my-judge-B.eval:
    name: System B
    method: pairwise
    model: llama-3
```

- `styles.colors`: maps category values to fill colors (any matplotlib color string)
- `styles.hatches`: maps category values to hatch patterns (`//`, `..`, `xx`, `\\`, etc.)
- Color is taken from the first matching category value; hatches are combined across all matching categories.
- Without a `styles:` section, bars use a sequential grayscale fallback.
- Judges not in the YAML are excluded unless `--all-judges` is passed.
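The color/hatch resolution rules above can be sketched as plain Python, assuming the YAML has already been parsed into dicts. The function name `bar_style` and the grey fallback color are illustrative, not the module's actual API:

```python
def bar_style(judge: dict, styles: dict) -> tuple[str, str]:
    """Resolve (color, hatch) for one judge entry:
    color from the first matching category value, hatches combined from all."""
    # every key except the display name is treated as a category column
    categories = [v for k, v in judge.items() if k != "name"]
    colors = styles.get("colors", {})
    color = next((colors[c] for c in categories if c in colors), "#808080")
    hatch = "".join(styles.get("hatches", {}).get(c, "") for c in categories)
    return color, hatch

# Parsed form of the judges.yml example above
styles = {"colors": {"pointwise": "#4A90D9", "pairwise": "#D94A4A"},
          "hatches": {"gpt-4o": "", "llama-3": "//"}}
judge = {"name": "System B", "method": "pairwise", "model": "llama-3"}
print(bar_style(judge, styles))  # -> ('#D94A4A', '//')
```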

Key options: `--format` (`github`, `latex`, `tsv`, `plain`, `html`, `pipe`), `--columns` (correlations or measures), `--summary` (add mean/max rows), `--aggregate` (aggregate across datasets), `--same THRESHOLD` (highlight near-equal values).


### eval-result — Format conversion and verification

Clean and convert evaluation result files.

```shell
# Convert tot to jsonl
auto-judge-evaluate eval-result data.txt -if tot -of jsonl -o data.jsonl

# Filter to specific runs and topics
auto-judge-evaluate eval-result data.txt -if tot -of jsonl -o filtered.jsonl \
    --filter-runs system_A --filter-runs system_B \
    --filter-topics topic_1
```

Key options:

| Option | Description |
| --- | --- |
| `-if FMT` | Input format: `trec_eval`, `tot`, `ir_measures`, `ranking`, `jsonl` |
| `-of FMT` | Output format (defaults to input format) |
| `-o FILE` | Output file. Omit for a roundtrip test to a temp file |
| `--filter-runs ID` | Keep only these runs. Repeatable |
| `--filter-topics ID` | Keep only these topics. Repeatable |
| `--filter-measures NAME` | Keep only these measures. Repeatable |
| `--compare-aggregates` | Compare file aggregates vs. recomputed from per-topic data |
| `--drop-aggregates` | Drop existing aggregate rows |
| `--recompute-aggregates` | Recompute from per-topic data (implies `--drop-aggregates`) |
| `--roundtrip` / `--no-roundtrip` | Enable/disable roundtrip verification (default: on) |

Supported formats:

| Format | Columns |
| --- | --- |
| `trec_eval` | `measure topic value` (3 cols; run_id from filename) |
| `tot` | `run measure topic value` (4 cols) |
| `ir_measures` | `run topic measure value` (4 cols) |
| `ranking` | `topic Q0 doc_id rank score run` (6 cols) |
| `jsonl` | JSON lines with `run_id`, `topic_id`, `measure`, `value` |
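The tot-to-jsonl mapping implied by this table is a simple column shuffle; a sketch over hypothetical rows (field names taken from the jsonl row above, not from the package's source):

```python
import json

def tot_to_jsonl(lines):
    """Convert 4-column tot rows (run measure topic value) to jsonl records."""
    for line in lines:
        run, measure, topic, value = line.split()
        yield json.dumps({"run_id": run, "topic_id": topic,
                          "measure": measure, "value": float(value)})

# Hypothetical tot rows
rows = ["system_A nugget_coverage topic_1 0.75",
        "system_A f1 topic_1 0.60"]
for record in tot_to_jsonl(rows):
    print(record)
```

The `eval-result` command additionally verifies such conversions via the roundtrip check and can recompute aggregate rows from the per-topic data.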
