RadEval

All-in-one metrics for evaluating AI-generated radiology text

TL;DR

pip install RadEval
from RadEval import RadEval
import json

refs = [
    "Mild cardiomegaly with small bilateral pleural effusions and basilar atelectasis.",
    "No pleural effusions or pneumothoraces.",
]
hyps = [
    "Mildly enlarged cardiac silhouette with small pleural effusions and dependent bibasilar atelectasis.",
    "No pleural effusions or pneumothoraces.",
]

evaluator = RadEval(
    do_radgraph=True,
    do_bleu=True
)

results = evaluator(refs=refs, hyps=hyps)
print(json.dumps(results, indent=2))

Output:

{
  "radgraph_simple": 0.72,
  "radgraph_partial": 0.61,
  "radgraph_complete": 0.61,
  "bleu": 0.36
}

Installation

pip install RadEval              # from PyPI
pip install RadEval[api]         # include OpenAI/Gemini for MammoGREEN

Or install from source:

git clone https://github.com/jbdel/RadEval.git && cd RadEval
conda create -n radeval python=3.11 -y && conda activate radeval
pip install -e '.[api]'

Supported Metrics

| Category | Metric | Flag | Modality | Best For | Usage |
| --- | --- | --- | --- | --- | --- |
| Lexical | BLEU | do_bleu | -- | Surface-level n-gram overlap | docs |
| | ROUGE | do_rouge | -- | Content coverage | docs |
| Semantic | BERTScore | do_bertscore | -- | Semantic similarity | docs |
| | RadEval BERTScore | do_radeval_bertscore | -- | Domain-adapted radiology semantics | docs |
| Clinical | F1CheXbert | do_f1chexbert | CXR | CheXpert finding classification | docs |
| | F1RadBERT-CT | do_f1radbert_ct | CT | CT finding classification | docs |
| | F1RadGraph | do_radgraph | CXR | Clinical entity/relation accuracy | docs |
| | RaTEScore | do_ratescore | CXR | Entity-level synonym-aware scoring | docs |
| Specialized | RadGraph-RadCliQ | do_radgraph_radcliq | CXR | Per-pair entity+relation F1 (RadCliQ variant) | docs |
| | RadCliQ-v1 | do_radcliq | CXR | Composite clinical relevance | docs |
| | SRRBert | do_srrbert | CXR | Structured report evaluation | docs |
| | Temporal F1 | do_temporal | CXR | Temporal consistency | docs |
| | GREEN | do_green | CXR | LLM-based overall quality (7B model) | docs |
| | MammoGREEN | do_mammo_green | Mammo | Mammography-specific LLM scoring | docs |
| | CRIMSON | do_crimson | CXR | LLM-based clinical significance scoring | docs |
| | RadFact-CT | do_radfact_ct | CT | LLM-based factual precision/recall | docs |

Modality: CXR = Chest X-Ray, CT = Computed Tomography, Mammo = Mammography, -- = modality-agnostic.

Enable only the metrics you need -- each one is loaded lazily.
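
For example, a configuration that mixes lexical, semantic, and clinical metrics (flags as listed in the table above) only loads the models those three flags require:

# Only the models behind enabled flags are downloaded and loaded.
evaluator = RadEval(
    do_bleu=True,       # lexical
    do_bertscore=True,  # semantic
    do_radgraph=True,   # clinical
)
results = evaluator(refs=refs, hyps=hyps)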

Per-Sample Output

Pass do_per_sample=True to get per-sample scores for every enabled metric. For most metrics the output keeps the same flat keys as the default mode (the F1-classifier metrics are the exception; see the table below), but each value becomes a list[float] of length n_samples instead of a single aggregate.

evaluator = RadEval(do_bleu=True, do_bertscore=True, do_per_sample=True)
results = evaluator(refs=refs, hyps=hyps)
# results["bleu"]      → [0.85, 0.40, ...]   (one per sample)
# results["bertscore"] → [0.95, 0.89, ...]

Per-sample output keys by metric

| Metric | Default keys | do_per_sample keys |
| --- | --- | --- |
| BLEU | bleu | bleu |
| ROUGE | rouge1, rouge2, rougeL | rouge1, rouge2, rougeL |
| BERTScore | bertscore | bertscore |
| RadEval BERTScore | radeval_bertscore | radeval_bertscore |
| F1CheXbert | f1chexbert_5_micro_f1, f1chexbert_all_micro_f1, ... | f1chexbert_sample_acc_5, f1chexbert_sample_acc_all |
| F1RadBERT-CT | f1radbert_ct_accuracy, f1radbert_ct_micro_f1, ... | f1radbert_ct_sample_acc |
| F1RadGraph | radgraph_simple, radgraph_partial, radgraph_complete | radgraph_simple, radgraph_partial, radgraph_complete |
| RaTEScore | ratescore | ratescore |
| RadGraph-RadCliQ | radgraph_radcliq | radgraph_radcliq |
| RadCliQ-v1 | radcliq_v1 | radcliq_v1 |
| SRRBert | srrbert_weighted_f1, srrbert_weighted_precision, srrbert_weighted_recall | srrbert_weighted_f1, srrbert_weighted_precision, srrbert_weighted_recall |
| Temporal F1 | temporal_f1 | temporal_f1 |
| GREEN | green | green |
| MammoGREEN | mammo_green | mammo_green |
| CRIMSON | crimson | crimson |
| RadFact-CT | radfact_ct_precision, radfact_ct_recall, radfact_ct_f1 | radfact_ct_precision, radfact_ct_recall, radfact_ct_f1 |

Note: F1-classifier metrics (F1CheXbert, F1RadBERT-CT) return per-sample accuracy (fraction of labels correct per report) rather than per-sample F1, since micro/macro F1 are corpus-level aggregates.
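
For instance, with F1CheXbert enabled, the per-sample values are accuracies (illustrative numbers; key names as in the table above):

evaluator = RadEval(do_f1chexbert=True, do_per_sample=True)
results = evaluator(refs=refs, hyps=hyps)
# results["f1chexbert_sample_acc_5"]   → [1.0, 0.8, ...]   (accuracy over the 5 key CheXpert findings)
# results["f1chexbert_sample_acc_all"] → [0.93, 0.86, ...] (accuracy over all findings)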

Detailed Output

Pass do_details=True to get per-sample scores, label breakdowns, and entity annotations for every enabled metric. See docs/metrics.md for the full output schema of each metric.
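
A minimal sketch (the detailed output schema is metric-specific):

evaluator = RadEval(do_radgraph=True, do_details=True)
results = evaluator(refs=refs, hyps=hyps)
# Alongside the aggregate scores, results now carries per-sample
# breakdowns such as RadGraph entity/relation annotations.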

Comparing Systems

Use compare_systems to run paired approximate randomization tests between any number of systems:

from RadEval import RadEval, compare_systems

evaluator = RadEval(do_bleu=True)
# baseline_reports / improved_reports: list[str] of system outputs;
# reference_reports: list[str] of ground-truth reports (same length).
signatures, scores = compare_systems(
    systems={
        'baseline': baseline_reports,
        'improved': improved_reports,
    },
    metrics={'bleu': lambda hyps, refs: evaluator(refs, hyps)['bleu']},
    references=reference_reports,
    n_samples=10000,  # number of randomization resamples
)

See docs/hypothesis_testing.md for a full walkthrough and interpretation guide.

Documentation

| Page | Contents |
| --- | --- |
| docs/metrics.md | What each metric measures; do_per_sample / do_details output schemas |
| docs/hypothesis_testing.md | Statistical background, full example, performance notes |
| docs/file_formats.md | Loading data from .tok, .json, and Python lists (example below) |
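
For example, reports stored as JSON lists of strings can be passed straight through (a sketch with hypothetical file names; see docs/file_formats.md for all accepted formats):

import json
from RadEval import RadEval

# refs.json / hyps.json are hypothetical files, each a JSON list[str] of reports.
with open("refs.json") as f:
    refs = json.load(f)
with open("hyps.json") as f:
    hyps = json.load(f)

evaluator = RadEval(do_bleu=True)
results = evaluator(refs=refs, hyps=hyps)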

RadEval Expert Dataset

A curated evaluation set annotated by board-certified radiologists for validating automatic metrics. Available on HuggingFace.

Citation

@inproceedings{xu-etal-2025-radeval,
    title = "{R}ad{E}val: A framework for radiology text evaluation",
    author = "Xu, Justin  and
      Zhang, Xi  and
      Abderezaei, Javid  and
      Bauml, Julie  and
      Boodoo, Roger  and
      Haghighi, Fatemeh  and
      Ganjizadeh, Ali  and
      Brattain, Eric  and
      Van Veen, Dave  and
      Meng, Zaiqiao  and
      Eyre, David W  and
      Delbrouck, Jean-Benoit",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-demos.40/",
    doi = "10.18653/v1/2025.emnlp-demos.40",
    pages = "546--557",
}

Contributors

Jean-Benoit Delbrouck
Justin Xu
Xi Zhang

Acknowledgments

Built on the work of the radiology AI community: CheXbert, RadGraph, BERTScore, RaTEScore, SRR-BERT, GREEN, and datasets like MIMIC-CXR.


If you find RadEval useful, please give us a star!
