
Manual relevance annotation tools for TREC AutoJudge


Autojudge Annotate

A tool for human annotation of RAG report quality. Generates a self-contained HTML file that annotators open in a browser — no server required.

Annotators highlight relevant passages, rate report quality, and add comments. All work is auto-saved to the browser's localStorage and can be exported as JSONL.

Installation

uv pip install ./auto-judge-annotate

Requires autojudge-base and click.

Usage

autojudge-annotate \
    --rag-responses path/to/runs/ \
    --rag-topics path/to/topics.jsonl \
    --output annotator.html \
    --dataset my-dataset \
    --show-documents

Then open annotator.html in a browser.

Options

Flag              Description
--rag-responses   Directory containing report files (any extension, JSONL format)
--rag-topics      JSONL file with evaluation topics/requests
--output          Output HTML file path
--dataset         Freetext label included in annotation output
--show-documents  Enable citation document popups (increases file size)
--topic ID        Filter to specific topics (repeatable)

Filtering topics

For large datasets, pass only the topics you need:

autojudge-annotate \
    --rag-responses runs/ \
    --rag-topics topics.jsonl \
    --output annotator.html \
    --dataset my-dataset \
    --topic 1101 --topic 1102 --topic 1103
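When the list of topic IDs lives in a file, the flags can be generated rather than typed. A minimal sketch — it assumes each line of topics.jsonl carries a request_id field (that field name appears in the annotation output below, but the topics schema itself is not documented here, so adjust the key to match your file):

```python
import json

def topic_flags(topics_path, wanted_ids):
    """Build a list of --topic flags for the IDs present in the topics file.

    Assumes each JSONL line has a "request_id" field; adjust if your
    topics file uses a different key.
    """
    flags = []
    with open(topics_path) as fh:
        for line in fh:
            tid = str(json.loads(line)["request_id"])
            if tid in wanted_ids:
                flags += ["--topic", tid]
    return flags
```

The returned list can be appended to the autojudge-annotate command line, e.g. via subprocess.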

Annotation workflow

Before you begin, enter your username. It persists across sessions and is included in every exported annotation.

The Mode selector in the top bar switches between three annotation modes:

Reports mode

Annotate full report text per topic/run.

  1. Select a topic from the sidebar, then a run
  2. Read the request (title, problem statement, background) and the report
  3. Highlight relevant passages by selecting text — selections crossing sentence boundaries are automatically split into per-sentence subspans with sentence_idx
  4. Click [DocId] citation markers to view source documents in a popup (when --show-documents is enabled)
  5. Choose a rating and add optional comments
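The splitting in step 3 can be sketched as follows. This is an illustrative reimplementation, not the tool's actual code: given sentence boundary offsets, a selection that crosses sentence boundaries is cut into per-sentence subspans, each tagged with its sentence_idx, matching the span objects in the output format below.

```python
def split_span(start, end, text, boundaries):
    """Split the selection [start, end) into per-sentence subspans.

    boundaries: list of (sent_start, sent_end) character offsets into
    text, one pair per sentence, in document order.
    """
    spans = []
    for idx, (s, e) in enumerate(boundaries):
        lo, hi = max(start, s), min(end, e)
        if lo < hi:  # the selection overlaps this sentence
            spans.append({"start": lo, "end": hi,
                          "text": text[lo:hi], "sentence_idx": idx})
    return spans
```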

Documents mode

Annotate individual source documents per topic.

  1. Select a topic, then a run, then a document from the sidebar
  2. Read the document text (title + body)
  3. Highlight relevant passages
  4. Choose a rating and add optional comments

Citations mode

Step through report sentences and annotate the relationship between each sentence and its cited documents with dual spans (report spans + document spans).

  1. Select a topic, then a run from the sidebar; sentences appear in the sidebar
  2. Use the sentence stepper (Prev/Next buttons) or click a sentence in the sidebar
  3. The current sentence is displayed in a yellow box; highlight text to create report spans
  4. If the sentence has citations, the cited document appears below; highlight text to create document spans
  5. For sentences with multiple citations, use the citation tabs to switch between documents
  6. Choose a rating and add optional comments
  7. The sidebar shows checkmarks on fully annotated sentences (all citations rated)

Common features

  • Auto-save: every change is saved to localStorage immediately
  • Progress tracking: sidebar shows checkmarks on annotated items and completion counts per topic
  • Ratings: Perfect, Mostly Good, So-so, Bad, or Not rated
  • Download: click Download JSONL to export all annotations
  • Clear all: small button at the bottom of the sidebar to reset all annotations (with confirmation)
  • Username: persists across sessions via localStorage

Output format

Each annotation is a JSON line. The format varies by mode:

Report annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "run_id": "run1",
  "team_id": "teamA",
  "topic_id": "1101",
  "username": "alice",
  "rating": "Mostly Good",
  "comment": "Good coverage but missing key detail",
  "spans": [
    {"start": 0, "end": 45, "text": "First relevant passage", "sentence_idx": 0},
    {"start": 46, "end": 120, "text": "Second passage from next sentence", "sentence_idx": 1}
  ],
  "report": { ... }
}
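Exported report annotations can be post-processed directly. A minimal sketch that groups highlighted span texts by sentence, using only the field names shown in the example above:

```python
import json
from collections import defaultdict

def spans_by_sentence(jsonl_path):
    """Map (topic_id, run_id) -> {sentence_idx: [span texts]} from an export."""
    out = defaultdict(lambda: defaultdict(list))
    with open(jsonl_path) as fh:
        for line in fh:
            ann = json.loads(line)
            key = (ann["topic_id"], ann["run_id"])
            for span in ann.get("spans", []):
                out[key][span["sentence_idx"]].append(span["text"])
    return out
```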

Document annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "docid": "doc-abc-123",
  "topic_id": "1101",
  "username": "alice",
  "rating": "So-so",
  "comment": "",
  "spans": [
    {"start": 10, "end": 85, "text": "Relevant passage from document"}
  ],
  "document": { ... }
}

Citation annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "topic_id": "1101",
  "username": "alice",
  "rating": "Perfect",
  "comment": "Sentence accurately reflects source",
  "spans": [
    {"start": 0, "end": 50, "text": "Document passage supporting the claim"}
  ],
  "report_spans": [
    {"start": 0, "end": 30, "text": "Sentence text being verified"}
  ],
  "citation": {
    "report": { ... },
    "sentence_idx": 2,
    "sentence": {"text": "The full sentence text.", "citations": ["doc-abc-123"]},
    "docid": "doc-abc-123",
    "document": { ... }
  }
}
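Since a single export can mix all three record types, a consumer can tell them apart by their distinguishing keys: citation annotations carry a citation object, document annotations a top-level docid, and report annotations a run_id. A sketch based on the three examples above:

```python
import json

def annotation_mode(record):
    """Classify an exported annotation record by its distinguishing keys."""
    if "citation" in record:
        return "citation"
    if "docid" in record:
        return "document"
    if "run_id" in record:
        return "report"
    return "unknown"

def count_modes(jsonl_path):
    """Tally how many annotations of each mode an export contains."""
    counts = {}
    with open(jsonl_path) as fh:
        for line in fh:
            mode = annotation_mode(json.loads(line))
            counts[mode] = counts.get(mode, 0) + 1
    return counts
```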
