Manual relevance annotation tools for TREC AutoJudge

Project description

Autojudge Annotate

A tool for human annotation of RAG report quality. Generates a self-contained HTML file that annotators open in a browser — no server required.

Annotators highlight relevant passages, rate report quality, and add comments. All work is auto-saved to the browser's localStorage and can be exported as JSONL.

Installation

uv pip install ./auto-judge-annotate

Requires autojudge-base and click.

Usage

autojudge-annotate \
    --rag-responses path/to/runs/ \
    --rag-topics path/to/topics.jsonl \
    --output annotator.html \
    --dataset my-dataset \
    --show-documents

Then open annotator.html in a browser.

Options

Flag               Description
--rag-responses    Directory containing report files (any extension, JSONL format)
--rag-topics       JSONL file with evaluation topics/requests
--output           Output HTML file path
--dataset          Free-text label included in the annotation output
--show-documents   Enable citation document popups (increases file size)
--topic ID         Filter to specific topics (repeatable)

Filtering topics

For large datasets, pass only the topics you need:

autojudge-annotate \
    --rag-responses runs/ \
    --rag-topics topics.jsonl \
    --output annotator.html \
    --dataset my-dataset \
    --topic 1101 --topic 1102 --topic 1103

Annotation workflow

Before you begin, enter your username. It persists across sessions and is included in the exported annotation file.

The topbar Mode selector switches between three annotation modes:

Reports mode

Annotate full report text per topic/run.

  1. Select a topic from the sidebar, then a run
  2. Read the request (title, problem statement, background) and the report
  3. Highlight relevant passages by selecting text — selections crossing sentence boundaries are automatically split into per-sentence subspans with sentence_idx
  4. Click [DocId] citation markers to view source documents in a popup (when --show-documents is enabled)
  5. Choose a rating and add optional comments
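
The per-sentence splitting in step 3 happens in the browser, but the idea can be sketched in Python. This is a hypothetical helper (not part of the package), assuming sentences are given as (start, end) character-offset ranges into the report text:

```python
def split_selection(selection, sentences):
    """Split a highlighted (start, end) selection into per-sentence
    subspans, mirroring the tool's sentence_idx bookkeeping.

    `sentences` lists the (start, end) character offsets of each
    sentence in the report text.
    """
    sel_start, sel_end = selection
    spans = []
    for idx, (s_start, s_end) in enumerate(sentences):
        # Clip the selection to this sentence's extent.
        start, end = max(sel_start, s_start), min(sel_end, s_end)
        if start < end:  # the selection overlaps this sentence
            spans.append({"start": start, "end": end, "sentence_idx": idx})
    return spans
```

For example, a selection of (10, 60) over sentences at (0, 45) and (46, 120) yields two subspans, one carrying sentence_idx 0 and one carrying sentence_idx 1.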

Documents mode

Annotate individual source documents per topic.

  1. Select a topic, then a run, then a document from the sidebar
  2. Read the document text (title + body)
  3. Highlight relevant passages
  4. Choose a rating and add optional comments

Citations mode

Step through report sentences and annotate the relationship between each sentence and its cited documents with dual spans (report spans + document spans).

  1. Select a topic, then a run from the sidebar; sentences appear in the sidebar
  2. Use the sentence stepper (Prev/Next buttons) or click a sentence in the sidebar
  3. The current sentence is displayed in a yellow box; highlight text to create report spans
  4. If the sentence has citations, the cited document appears below; highlight text to create document spans
  5. For sentences with multiple citations, use the citation tabs to switch between documents
  6. Choose a rating and add optional comments
  7. The sidebar shows checkmarks on fully annotated sentences (all citations rated)

Common features

  • Auto-save: every change is saved to localStorage immediately
  • Progress tracking: sidebar shows checkmarks on annotated items and completion counts per topic
  • Ratings: Perfect, Mostly Good, So-so, Bad, or Not rated
  • Download: click Download JSONL to export all annotations
  • Clear all: small button at the bottom of the sidebar to reset all annotations (with confirmation)
  • Username: persists across sessions via localStorage

Output format

Each annotation is a JSON line. The format varies by mode:

Report annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "run_id": "run1",
  "team_id": "teamA",
  "topic_id": "1101",
  "username": "alice",
  "rating": "Mostly Good",
  "comment": "Good coverage but missing key detail",
  "spans": [
    {"start": 0, "end": 45, "text": "First relevant passage", "sentence_idx": 0},
    {"start": 46, "end": 120, "text": "Second passage from next sentence", "sentence_idx": 1}
  ],
  "report": { ... }
}
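
Because the export is one JSON object per line, post-processing is a line-by-line parse. A minimal sketch (field names taken from the example above; `rating_counts` is a hypothetical helper, not part of the package):

```python
import json
from collections import Counter

def rating_counts(lines):
    """Tally (run_id, rating) pairs across report-annotation lines.

    `lines` is any iterable of JSON strings, e.g. an open file
    handle over the downloaded JSONL export.
    """
    counts = Counter()
    for line in lines:
        ann = json.loads(line)
        counts[(ann["run_id"], ann["rating"])] += 1
    return counts
```

Typical usage would be `with open("annotations.jsonl", encoding="utf-8") as f: print(rating_counts(f))`.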

Document annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "docid": "doc-abc-123",
  "topic_id": "1101",
  "username": "alice",
  "rating": "So-so",
  "comment": "",
  "spans": [
    {"start": 10, "end": 85, "text": "Relevant passage from document"}
  ],
  "document": { ... }
}
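
The `start`/`end` values appear to be character offsets into the annotated text, with `text` carrying the exact slice. Under that assumed convention, a small sanity check one might run over an export (hypothetical helper):

```python
def validate_spans(ann, full_text):
    """Return True if every span's `text` equals the slice of
    `full_text` named by its `start`/`end` offsets."""
    return all(full_text[s["start"]:s["end"]] == s["text"]
               for s in ann["spans"])
```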

Citation annotation

{
  "dataset": "my-dataset",
  "request_id": "1101",
  "topic_id": "1101",
  "username": "alice",
  "rating": "Perfect",
  "comment": "Sentence accurately reflects source",
  "spans": [
    {"start": 0, "end": 50, "text": "Document passage supporting the claim"}
  ],
  "report_spans": [
    {"start": 0, "end": 30, "text": "Sentence text being verified"}
  ],
  "citation": {
    "report": { ... },
    "sentence_idx": 2,
    "sentence": {"text": "The full sentence text.", "citations": ["doc-abc-123"]},
    "docid": "doc-abc-123",
    "document": { ... }
  }
}
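
Since the three record types share most fields, a consumer can dispatch on the distinguishing keys shown above: `citation` marks a citation annotation, a top-level `docid` (without `citation`) marks a document annotation, and anything else is a report annotation. A sketch of that dispatch (hypothetical helpers, not part of the package):

```python
import json

def annotation_mode(ann):
    """Classify one parsed annotation line by its distinguishing keys."""
    if "citation" in ann:
        return "citation"
    if "docid" in ann:
        return "document"
    return "report"

def group_by_mode(lines):
    """Bucket JSONL export lines into report/document/citation lists."""
    groups = {"report": [], "document": [], "citation": []}
    for line in lines:
        ann = json.loads(line)
        groups[annotation_mode(ann)].append(ann)
    return groups
```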

Download files

Download the file for your platform.

Source Distribution

autojudge_annotate-0.3.3.tar.gz (21.7 kB)

Built Distribution

autojudge_annotate-0.3.3-py3-none-any.whl (23.6 kB)

File details

Details for the file autojudge_annotate-0.3.3.tar.gz.

File metadata

  • Download URL: autojudge_annotate-0.3.3.tar.gz
  • Size: 21.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for autojudge_annotate-0.3.3.tar.gz
Algorithm Hash digest
SHA256 b5a1596585479336d827beacf13abff16711f7393f54805e71e37ab1cfb18b52
MD5 1a55c94bd7eb63acc2c2622bda40e812
BLAKE2b-256 0b522befb773ac3f025d69f7378dd07992bf997ac0627e6b0ed2151ab03cab96


Provenance

The following attestation bundles were made for autojudge_annotate-0.3.3.tar.gz:

Publisher: publish.yml on trec-auto-judge/auto-judge-annotate

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file autojudge_annotate-0.3.3-py3-none-any.whl.

File hashes

Hashes for autojudge_annotate-0.3.3-py3-none-any.whl
Algorithm Hash digest
SHA256 80bebc56ad814660e4e1352b518bd62aac4d010c059c43f55e151441282409a5
MD5 a5fbd9c676cd3b186ff97447d5e6e8de
BLAKE2b-256 6b7c99ffe8e785f38f3623c3a5955345d33d9ccdf24d04e098d1bfa0de6a094b


Provenance

The following attestation bundles were made for autojudge_annotate-0.3.3-py3-none-any.whl:

Publisher: publish.yml on trec-auto-judge/auto-judge-annotate

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
