
MedHELM

Holistic evaluation of language models for medical applications (HELM for medicine)

License: Apache 2.0 | PyPI

Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford for holistic, reproducible, and transparent evaluation of foundation models, including large language models (LLMs) and multimodal models. The framework includes the following features:

  • Datasets and benchmarks in a standardized format (e.g. MMLU-Pro, GPQA, IFEval, WildBench)
  • Models from various providers accessible through a unified interface (e.g. OpenAI models, Anthropic Claude, Google Gemini)
  • Metrics for measuring various aspects beyond accuracy (e.g. efficiency, bias, toxicity)
  • Web UI for inspecting individual prompts and responses
  • Web leaderboard for comparing results across models and benchmarks

Documentation

Full documentation is available at medhelm.org.

Install & run (MedHELM library)

MedHELM uses the HELM core engine and adds medical benchmarks. Install from PyPI:

Standard (recommended to start)

Scenarios: PubMedQA, MedCalc-Bench, MedicationQA, MedHallu.

pip install medhelm
# or with uv:
uv pip install medhelm

Run a benchmark:

uv run medhelm-run --run-entries "pubmed_qa:model=qwen/qwen2.5-7b-instruct,model_deployment=huggingface/qwen2.5-7b-instruct" --suite my_med_test --max-eval-instances 10
uv run helm-summarize --suite my_med_test
uv run helm-server --suite my_med_test

Then open http://localhost:8000/ in your browser.
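Beyond browsing results in the web UI, you can read them programmatically. The helper below is a minimal sketch, assuming HELM's usual `benchmark_output` layout in which each run directory contains a `stats.json` file holding a list of entries shaped like `{"name": {"name": "exact_match", ...}, "mean": ...}`; the exact paths and fields may vary across HELM versions, and the sample file written here is hypothetical.

```python
import json
from pathlib import Path

def mean_metric(stats_path, metric_name):
    """Return the mean value of a named metric from a HELM-style stats.json,
    or None if the metric is not present."""
    stats = json.loads(Path(stats_path).read_text())
    for entry in stats:
        # Each entry nests the metric name under entry["name"]["name"].
        if entry.get("name", {}).get("name") == metric_name:
            return entry.get("mean")
    return None

# Hypothetical file mirroring the expected structure:
sample = [{"name": {"name": "exact_match", "split": "test"}, "mean": 0.7, "count": 10}]
Path("stats_example.json").write_text(json.dumps(sample))
print(mean_metric("stats_example.json", "exact_match"))  # 0.7
```

In a real run you would point `mean_metric` at a `stats.json` under `benchmark_output/runs/<suite>/`.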

Clinical NLP tier ([summarization])

Adds heavy libraries (bert-score, rouge-score, nltk). Install can take 2–3 minutes.

Scenarios: DischargeMe (hospital course summaries; requires PhysioNet data_path), ACI-Bench (clinical transcripts), Patient-Edu (simplifying medical jargon).

pip install "medhelm[summarization]"
# or: uv pip install "medhelm[summarization]"

Example (ACI-Bench; runs without extra data):

uv run medhelm-run --run-entries "aci_bench:model=qwen/qwen2.5-7b-instruct,model_deployment=huggingface/qwen2.5-7b-instruct" --suite med_summaries --max-eval-instances 5
uv run helm-summarize --suite med_summaries
uv run helm-server --suite med_summaries

Gated / licensing tier ([gated])

Adds gdown for scenarios that download data from Google Drive. This install can also take a few minutes.

Scenarios: MedQA (USMLE/Board exams), MedMCQA (AIIMS/NEET exams).

pip install "medhelm[gated]"
# or: uv pip install "medhelm[gated]"

Example:

uv run medhelm-run --run-entries "med_qa:model=qwen/qwen2.5-7b-instruct,model_deployment=huggingface/qwen2.5-7b-instruct" --suite board_exams --max-eval-instances 10
uv run helm-summarize --suite board_exams
uv run helm-server --suite board_exams

Classic HELM commands

You can still use helm-run, helm-summarize, and helm-server; medhelm-run is an alias for helm-run.

helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 --suite my-suite --max-eval-instances 10
helm-summarize --suite my-suite
helm-server --suite my-suite

Quick Start (summary)

  • Standard: pip install medhelm (or uv pip install medhelm). Scenarios: PubMedQA, MedCalc-Bench, MedicationQA, MedHallu.
  • Summarization (Clinical NLP tier): pip install "medhelm[summarization]". Scenarios: DischargeMe, ACI-Bench, Patient-Edu. 2–3 minute install; adds bert-score, rouge-score, nltk.
  • Gated (licensing tier): pip install "medhelm[gated]". Scenarios: MedQA, MedMCQA. Google Drive downloads via gdown.

Run: uv run medhelm-run --run-entries "<scenario>:model=<model>" --suite <name> --max-eval-instances <n> then helm-summarize and helm-server. See medhelm.org for full docs.
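If you drive many runs from a script, it can help to assemble the invocation once rather than hand-editing shell lines. The function below is an illustrative sketch that builds the same `medhelm-run` argument list shown in the examples on this page (the run-entry syntax `scenario:model=...,model_deployment=...` is taken from those examples); the function name is our own.

```python
def build_run_command(scenario, model, deployment, suite, max_eval_instances):
    """Assemble a medhelm-run invocation as an argument list,
    suitable for subprocess.run(cmd)."""
    entry = f"{scenario}:model={model},model_deployment={deployment}"
    return [
        "medhelm-run",
        "--run-entries", entry,
        "--suite", suite,
        "--max-eval-instances", str(max_eval_instances),
    ]

cmd = build_run_command(
    "pubmed_qa",
    "qwen/qwen2.5-7b-instruct",
    "huggingface/qwen2.5-7b-instruct",
    "my_med_test",
    10,
)
print(" ".join(cmd))
```

Passing the arguments as a list avoids shell-quoting issues with the commas and slashes inside the run entry.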

Goals & roadmap

MedHELM aims to be a public repository with fewer dependencies, easier installation, and public documentation. We welcome feedback on the following:

  • HealthBench: We are considering new subcategories to include HealthBench. Do you see value in adding HealthBench, and how would you use it?
  • Non-gated alternatives: We provide 7 non-gated datasets (e.g. PubMedQA, MedCalc-Bench, MedicationQA, MedHallu, and others in the Standard and Summarization tiers) as free alternatives for the same kinds of tasks as gated benchmarks.
  • Hospital & private data: We want to make it easier for hospital systems to contribute or add their own private datasets. If your institution is interested in running or contributing benchmarks, we’d like to hear from you.
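To make the private-dataset idea concrete, the snippet below sketches converting an institution's question/answer CSV into instance records with the general shape HELM scenarios produce (an input, references tagged as correct, and a split). This is a hedged stand-alone illustration, not the actual integration path: a real contribution would subclass HELM's Scenario class and emit its Instance/Reference objects, and the field names and sample rows here are hypothetical.

```python
import csv
import io

def csv_to_instances(csv_text, split="test"):
    """Convert a CSV with 'question' and 'answer' columns into
    HELM-style instance dicts (illustrative field names)."""
    instances = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        instances.append({
            "input": row["question"],
            "references": [{"output": row["answer"], "tags": ["correct"]}],
            "split": split,
        })
    return instances

# Hypothetical private dataset row:
sample_csv = "question,answer\nWhich vitamin is produced in the skin on sun exposure?,Vitamin D\n"
print(csv_to_instances(sample_csv))
```

Keeping private data in a simple tabular format like this makes it straightforward to swap the loader into a proper Scenario subclass later.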

Leaderboards

We maintain official leaderboards with results from evaluating recent models on notable benchmarks using this framework. These cover a diverse range of domains (e.g. medicine, finance) and aspects (e.g. multilinguality, world knowledge, regulation compliance). Refer to the HELM website for a full list of leaderboards.

Papers

The HELM framework has been used in a number of published papers for evaluating models, and it can be used to reproduce the model evaluation results published in those papers. To get started, refer to the Reproducing Leaderboards documentation on medhelm.org.

Citation

If you use this software in your research, please cite the Holistic Evaluation of Language Models paper as below.

@article{liang2023holistic,
title={Holistic Evaluation of Language Models},
author={Percy Liang and Rishi Bommasani and Tony Lee and Dimitris Tsipras and Dilara Soylu and Michihiro Yasunaga and Yian Zhang and Deepak Narayanan and Yuhuai Wu and Ananya Kumar and Benjamin Newman and Binhang Yuan and Bobby Yan and Ce Zhang and Christian Alexander Cosgrove and Christopher D Manning and Christopher Re and Diana Acosta-Navas and Drew Arad Hudson and Eric Zelikman and Esin Durmus and Faisal Ladhak and Frieda Rong and Hongyu Ren and Huaxiu Yao and Jue WANG and Keshav Santhanam and Laurel Orr and Lucia Zheng and Mert Yuksekgonul and Mirac Suzgun and Nathan Kim and Neel Guha and Niladri S. Chatterji and Omar Khattab and Peter Henderson and Qian Huang and Ryan Andrew Chi and Sang Michael Xie and Shibani Santurkar and Surya Ganguli and Tatsunori Hashimoto and Thomas Icard and Tianyi Zhang and Vishrav Chaudhary and William Wang and Xuechen Li and Yifan Mai and Yuhui Zhang and Yuta Koreeda},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2023},
url={https://openreview.net/forum?id=iO4LZibEqW},
note={Featured Certification, Expert Certification}
}
