
Framework for evaluation of foundation models

Project description

Holistic Evaluation of Language Models (HELM)



Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford for the holistic, reproducible, and transparent evaluation of foundation models, including large language models (LLMs) and multimodal models. The framework includes the following features:

  • Datasets and benchmarks in a standardized format (e.g. MMLU-Pro, GPQA, IFEval, WildBench)
  • Models from various providers accessible through a unified interface (e.g. OpenAI models, Anthropic Claude, Google Gemini)
  • Metrics for measuring various aspects beyond accuracy (e.g. efficiency, bias, toxicity)
  • Web UI for inspecting individual prompts and responses
  • Web leaderboard for comparing results across models and benchmarks

Documentation

Please refer to the documentation on Read the Docs for instructions on how to install and run HELM.

Quick Start

Install the package from PyPI:

pip install crfm-helm
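
HELM also defines optional extras for some integrations; the exact extra names are listed in the installation documentation, and the one below is only an assumed example:

# Assumed example: install an optional extra for a specific model provider
# (see the installation documentation for the actual extra names)
pip install "crfm-helm[openai]"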

Run the following in your shell:

# Run benchmark
helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 --suite my-suite --max-eval-instances 10

# Summarize benchmark results
helm-summarize --suite my-suite

# Start a web server to display benchmark results
helm-server --suite my-suite

Then go to http://localhost:8000/ in your browser.
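
Once the basic workflow runs end to end, additional benchmarks and models can be evaluated by adding more run entries to the same suite. The sketch below assumes that --run-entries accepts multiple space-separated entries; check helm-run --help for the exact interface.

# Sketch (assumed usage): evaluate two MMLU subjects in one suite, then re-summarize
helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 mmlu:subject=anatomy,model=openai/gpt2 --suite my-suite --max-eval-instances 10
helm-summarize --suite my-suite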

Leaderboards

We maintain official leaderboards with results from evaluating recent models on notable benchmarks using this framework. Our current flagship leaderboards are:

We also maintain leaderboards for a diverse range of domains (e.g. medicine, finance) and aspects (e.g. multi-linguality, world knowledge, regulation compliance). Refer to the HELM website for a full list of leaderboards.

Papers

The HELM framework was used to evaluate models in the following papers.

The HELM framework can be used to reproduce the published model evaluation results from these papers. To get started, refer to the documentation links above for the corresponding paper, or the main Reproducing Leaderboards documentation.

Citation

If you use this software in your research, please cite the Holistic Evaluation of Language Models paper as follows.

@article{liang2023holistic,
  title={Holistic Evaluation of Language Models},
  author={Percy Liang and Rishi Bommasani and Tony Lee and Dimitris Tsipras and Dilara Soylu and Michihiro Yasunaga and Yian Zhang and Deepak Narayanan and Yuhuai Wu and Ananya Kumar and Benjamin Newman and Binhang Yuan and Bobby Yan and Ce Zhang and Christian Alexander Cosgrove and Christopher D Manning and Christopher Re and Diana Acosta-Navas and Drew Arad Hudson and Eric Zelikman and Esin Durmus and Faisal Ladhak and Frieda Rong and Hongyu Ren and Huaxiu Yao and Jue WANG and Keshav Santhanam and Laurel Orr and Lucia Zheng and Mert Yuksekgonul and Mirac Suzgun and Nathan Kim and Neel Guha and Niladri S. Chatterji and Omar Khattab and Peter Henderson and Qian Huang and Ryan Andrew Chi and Sang Michael Xie and Shibani Santurkar and Surya Ganguli and Tatsunori Hashimoto and Thomas Icard and Tianyi Zhang and Vishrav Chaudhary and William Wang and Xuechen Li and Yifan Mai and Yuhui Zhang and Yuta Koreeda},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2023},
  url={https://openreview.net/forum?id=iO4LZibEqW},
  note={Featured Certification, Expert Certification}
}

Project details


Download files

Download the file for your platform.

Source Distribution

crfm_helm-0.5.15.tar.gz (8.3 MB)

Built Distribution

crfm_helm-0.5.15-py3-none-any.whl (8.9 MB)

File details

Details for the file crfm_helm-0.5.15.tar.gz.

File metadata

  • Download URL: crfm_helm-0.5.15.tar.gz
  • Upload date:
  • Size: 8.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crfm_helm-0.5.15.tar.gz:

  • SHA256: d84989b623bc69712899d612873442714e5d555dbc0fcd543c375066b31bb160
  • MD5: da0903eb1ef24ffc9c7efa1395c43f5c
  • BLAKE2b-256: 6d421a9d0dcb5452a9fb19094408acce94e7808d1f276365380c35197145a7fc
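
To check a locally downloaded archive against the published digest, a minimal sketch assuming a standard sha256sum utility is available:

# Compare the local file's digest against the SHA256 value published above
sha256sum crfm_helm-0.5.15.tar.gz
# Expected: d84989b623bc69712899d612873442714e5d555dbc0fcd543c375066b31bb160  crfm_helm-0.5.15.tar.gz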


Provenance

The following attestation bundles were made for crfm_helm-0.5.15.tar.gz:

Publisher: publish-pypi.yml on stanford-crfm/helm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file crfm_helm-0.5.15-py3-none-any.whl.

File metadata

  • Download URL: crfm_helm-0.5.15-py3-none-any.whl
  • Upload date:
  • Size: 8.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crfm_helm-0.5.15-py3-none-any.whl:

  • SHA256: 3fdca5a645c3969e3ff1db69dee890b6d5f5cf5238919791f0ca8b9527585dc3
  • MD5: 565c0d1760eef973f3ecba0e68ccddd1
  • BLAKE2b-256: 86e879959af069bc9fc80880180e8c4a757e5d883617747bd1f75d546ebe2a7b
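
The wheel digest can also be pinned in a requirements file so that pip verifies it at install time; a sketch of pip's hash-checking mode (in this mode every requirement pip resolves must carry a hash, hence --no-deps below):

# requirements.txt
crfm-helm==0.5.15 --hash=sha256:3fdca5a645c3969e3ff1db69dee890b6d5f5cf5238919791f0ca8b9527585dc3

# Install with hash checking enabled (--no-deps because the dependencies are not pinned with hashes here)
pip install --require-hashes --no-deps -r requirements.txt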


Provenance

The following attestation bundles were made for crfm_helm-0.5.15-py3-none-any.whl:

Publisher: publish-pypi.yml on stanford-crfm/helm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
