Benchmark for language models
Project description
Holistic Evaluation of Language Models
Welcome! The crfm-helm Python package contains code used in the Holistic Evaluation of Language Models project (paper, website) by Stanford CRFM. This package includes the following features:
- Collection of datasets in a standard format (e.g., NaturalQuestions)
- Collection of models accessible via a unified API (e.g., GPT-3, MT-NLG, OPT, BLOOM)
- Collection of metrics beyond accuracy (efficiency, bias, toxicity, etc.)
- Collection of perturbations for evaluating robustness and fairness (e.g., typos, dialect)
- Modular framework for constructing prompts from datasets
- Proxy server for managing accounts and providing unified interface to access models
To get started, refer to the documentation on Read the Docs for how to install and run the package.
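The package itself is published on PyPI and can be installed with pip:
pip install crfm-helm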
Directory Structure
The directory structure for this repo is as follows
├── docs                  # MD used to generate readthedocs
├── scripts               # Python utility scripts for HELM
│   ├── cache
│   ├── data_overlap      # Calculate train test overlap
│   │   ├── common
│   │   ├── scenarios
│   │   └── test
│   ├── efficiency
│   ├── fact_completion
│   ├── offline_eval
│   └── scale
└── src
    ├── helm              # Benchmarking scripts for HELM
    │   ├── benchmark     # Main Python code for running HELM
    │   │   └── static    # Current JS (jQuery) code for rendering front-end
    │   │       └── ...
    │   ├── common        # Additional Python code for running HELM
    │   └── proxy         # Python code for external web requests
    └── helm-frontend     # New React front-end
Tutorial
This tutorial will explain how to use the HELM command line tools to run benchmarks, aggregate statistics, and visualize results.
We will run two runs using the mmlu scenario on the openai/gpt2 model. The mmlu scenario implements the Massive Multitask Language Understanding (MMLU) benchmark from this paper, and consists of a Question Answering (QA) task using a dataset with questions from 57 subjects such as elementary mathematics, US history, computer science, law, and more. Note that GPT-2 performs poorly on MMLU, so this is just a proof of concept. We will run two runs: the first using questions about anatomy, and the second using questions about philosophy.
Using helm-run
helm-run is a command line tool for running benchmarks.
To run this benchmark using the HELM command-line tools, we need to specify run spec descriptions that describe the desired runs. For this example, the run spec descriptions are mmlu:subject=anatomy,model=openai/gpt2 (for anatomy) and mmlu:subject=philosophy,model=openai/gpt2 (for philosophy).
Next, we need to create a run spec configuration file containing these run spec descriptions. A run spec configuration file is a text file containing RunEntries serialized to JSON, where the description field of each entry is a run spec description. Create a text file named run_specs.conf with the following contents:
entries: [
{description: "mmlu:subject=anatomy,model=openai/gpt2", priority: 1},
{description: "mmlu:subject=philosophy,model=openai/gpt2", priority: 1},
]
We will now use helm-run to execute the runs specified in this run spec configuration file. Run this command:
helm-run --conf-paths run_specs.conf --suite v1 --max-eval-instances 10
The meanings of the additional arguments are as follows:
- --suite specifies a subdirectory under the output directory in which all the output will be placed.
- --max-eval-instances limits evaluation to only the first N inputs (i.e. instances) from the benchmark.
helm-run creates an environment directory and an output directory by default.
- The environment directory is prod_env/ by default and can be set using --local-path. Credentials for making API calls should be added to a credentials.conf file in this directory (see the sketch after this list).
- The output directory is benchmark_output/ by default and can be set using --output-path.
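For reference, a minimal credentials.conf sketch is shown below. The key names depend on the model provider (openaiApiKey is shown purely as an illustration), and no credentials are needed for the local openai/gpt2 runs in this tutorial:
# prod_env/credentials.conf (illustrative entry; key names vary by provider)
openaiApiKey: your-openai-api-key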
After running this command, navigate to the benchmark_output/runs/v1/ directory. It should contain two sub-directories named mmlu:subject=anatomy,model=openai_gpt2 and mmlu:subject=philosophy,model=openai_gpt2. Note that the names of these sub-directories are based on the run spec descriptions we used earlier, but with / replaced by _.
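For example, listing the suite directory should show these two run directories:
ls benchmark_output/runs/v1/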
Each output sub-directory will contain several JSON files that were generated during the corresponding run (a short Python snippet for inspecting them follows this list):
- run_spec.json contains the RunSpec, which specifies the scenario, adapter and metrics for the run.
- scenario.json contains a serialized Scenario, which contains the scenario for the run and specifies the instances (i.e. inputs) used.
- scenario_state.json contains a serialized ScenarioState, which contains every request to and response from the model.
- per_instance_stats.json contains a serialized list of PerInstanceStats, which contains the statistics produced for the metrics for each instance (i.e. input).
- stats.json contains a serialized list of Stat, which contains the statistics produced for the metrics, aggregated across all instances (i.e. inputs).
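As a quick sanity check, these files can be inspected with a few lines of Python. This is a minimal sketch assuming only the output layout described above; the exact fields inside each serialized statistic may vary across crfm-helm versions:

import json
from pathlib import Path

# Directory for one run; the name mirrors the run spec description,
# with "/" replaced by "_" (see above).
run_dir = Path("benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2")

# Load the aggregated statistics written by helm-run.
with open(run_dir / "stats.json") as f:
    stats = json.load(f)

print(f"Loaded {len(stats)} aggregated statistics")
# Pretty-print the first entry to see which fields are available.
print(json.dumps(stats[0], indent=2))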
helm-run provides additional arguments that can be used to filter which runs are executed: --models-to-run, --groups-to-run and --priority. It can be convenient to create a large run_specs.conf file containing every run spec description of interest, and then use these flags to filter down the RunSpecs that are actually run. As an example, the main run_specs.conf file used for the HELM benchmarking paper can be found here.
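For example, assuming the run_specs.conf file from above, a command along the following lines would restrict execution to the openai/gpt2 entries (a sketch; the flags behave as described above):
helm-run --conf-paths run_specs.conf --suite v1 --max-eval-instances 10 --models-to-run openai/gpt2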
Using helm-summarize
The helm-summarize command reads the output files of helm-run and computes aggregate statistics across runs. Run the following:
helm-summarize --suite v1
This reads the pre-existing files in benchmark_output/runs/v1/ that were written by helm-run previously, and writes the following new files back to benchmark_output/runs/v1/:
- summary.json contains a serialized ExecutiveSummary with a date and suite name.
- run_specs.json contains the run spec descriptions for all the runs.
- runs.json contains a serialized list of Run, which contains the run path, run spec, adapter spec and statistics for each run.
- groups.json contains a serialized list of Table, each containing information about groups in a group category.
- groups_metadata.json contains a list of all the groups along with a human-readable description and a taxonomy.
Additionally, for each group and group-relevant metric, it will output a pair of files: benchmark_output/runs/v1/groups/latex/<group_name>_<metric_name>.tex and benchmark_output/runs/v1/groups/json/<group_name>_<metric_name>.json. These files contain the statistics for that metric from each run within the group.
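To see which group/metric tables were produced for your suite, a minimal Python sketch (using only the paths given above) is:

from pathlib import Path

# List the per-group, per-metric JSON tables written by helm-summarize.
for path in sorted(Path("benchmark_output/runs/v1/groups/json").glob("*.json")):
    print(path.name)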
Using helm-server
Finally, the helm-server command launches a web server to visualize the output files of helm-run and helm-summarize. Run:
helm-server
Open a browser and go to http://localhost:8000/ to view the visualization. You should see a view similar to the live website for the paper, but with the data from your own benchmark runs. The website has four main sections:
- Models contains a list of available models.
- Scenarios contains a list of available scenarios.
- Results contains results from the runs, organized into groups and categories of groups.
- Raw Runs contains a searchable list of runs.
Other Tips
- The suite name can be used as a versioning mechanism to separate runs using different versions of scenarios or models.
- Tools such as jq are useful for examining the JSON output files on the command line, as in the example below.
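For example, something along these lines prints the first aggregated statistic from the anatomy run (assuming jq is installed and using the paths from this tutorial):
jq '.[0]' benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2/stats.json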
Download files
File details
Details for the file crfm-helm-0.4.0.tar.gz.
File metadata
- Download URL: crfm-helm-0.4.0.tar.gz
- Upload date:
- Size: 1.4 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 06d49ad3c3c07eae67898e204c856b75e96a20c93e6cf8f20e56bce2c13cdaa3
MD5 | 3db1bcfb4bc098b15dfca221af875886
BLAKE2b-256 | e627ec79036faf027b7af87bfd82988362e7940e5e43a292c8d355c6ecec8d2a
File details
Details for the file crfm_helm-0.4.0-py3-none-any.whl.
File metadata
- Download URL: crfm_helm-0.4.0-py3-none-any.whl
- Upload date:
- Size: 1.6 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3fc9c3721f78f48632cad6dfb04851de7055f83d900125fd8d247b9503a99a27
MD5 | ba7ff8a10701578a6f97e0cacd421f5c
BLAKE2b-256 | a9cd6ad6f58732b1a30236e19fd5a6140910b7cea0626d8dec39bf1eabf93b52