
Benchmark for language models

Project description

Holistic Evaluation of Language Models

Welcome! The crfm-helm Python package contains code used in the Holistic Evaluation of Language Models project (paper, website) by Stanford CRFM. This package includes the following features:

  • Collection of datasets in a standard format (e.g., NaturalQuestions)
  • Collection of models accessible via a unified API (e.g., GPT-3, MT-NLG, OPT, BLOOM)
  • Collection of metrics beyond accuracy (efficiency, bias, toxicity, etc.)
  • Collection of perturbations for evaluating robustness and fairness (e.g., typos, dialect)
  • Modular framework for constructing prompts from datasets
  • Proxy server for managing accounts and providing unified interface to access models

To get started, refer to the documentation on Read the Docs for how to install and run the package.
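If you just want the released package, it can usually be installed from PyPI with pip (see the Read the Docs instructions for optional extras and supported Python versions):

pip install crfm-helm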

Directory Structure

The directory structure for this repo is as follows:

├── docs # MD used to generate readthedocs
│
├── scripts # Python utility scripts for HELM
│   ├── cache
│   ├── data_overlap # Calculate train test overlap
│   │   ├── common
│   │   ├── scenarios
│   │   └── test
│   ├── efficiency
│   ├── fact_completion
│   ├── offline_eval
│   └── scale
│
└── src
    ├── helm # Benchmarking Scripts for HELM
    │   ├── benchmark # Main Python code for running HELM
    │   │   └── static # Current JS (Jquery) code for rendering front-end
    │   │       └── ...
    │   ├── common # Additional Python code for running HELM
    │   └── proxy # Python code for external web requests
    └── helm-frontend # New React Front-end

Holistic Evaluation of Text-To-Image Models

Significant effort has recently been made in developing text-to-image generation models, which take textual prompts as input and generate images. As these models are widely used in real-world applications, there is an urgent need to comprehensively understand their capabilities and risks. However, existing evaluations primarily focus on image-text alignment and image quality. To address this limitation, we introduce a new benchmark, Holistic Evaluation of Text-To-Image Models (HEIM).

We identify 12 different aspects that are important in real-world model deployment:

  • image-text alignment
  • image quality
  • aesthetics
  • originality
  • reasoning
  • knowledge
  • bias
  • toxicity
  • fairness
  • robustness
  • multilinguality
  • efficiency

By curating scenarios encompassing these aspects, we evaluate state-of-the-art text-to-image models using this benchmark. Unlike previous evaluations that focused on alignment and quality, HEIM significantly improves coverage by evaluating all models across all aspects. Our results reveal that no single model excels in all aspects, with different models demonstrating strengths in different aspects.

This repository contains the code used to produce the results on the website and paper.

Tutorial

This tutorial will explain how to use the HELM command line tools to run benchmarks, aggregate statistics, and visualize results.

We will run two runs using the mmlu scenario on the openai/gpt2 model. The mmlu scenario implements the Massive Multitask Language Understanding (MMLU) benchmark from this paper and consists of a Question Answering (QA) task using a dataset with questions from 57 subjects such as elementary mathematics, US history, computer science, law, and more. Note that GPT-2 performs poorly on MMLU, so this is just a proof of concept. The first run will use questions about anatomy, and the second questions about philosophy.

Using helm-run

helm-run is a command line tool for running benchmarks.

To run this benchmark using the HELM command-line tools, we need to specify run spec descriptions that describe the desired runs. For this example, the run spec descriptions are mmlu:subject=anatomy,model=openai/gpt2 (for anatomy) and mmlu:subject=philosophy,model=openai/gpt2 (for philosophy).

Next, we need to create a run spec configuration file containing these run spec descriptions. A run spec configuration file is a text file containing RunEntries serialized to JSON, where the description field of each entry is a run spec description. Create a text file named run_entries.conf with the following contents:

entries: [
  {description: "mmlu:subject=anatomy,model=openai/gpt2", priority: 1},
  {description: "mmlu:subject=philosophy,model=openai/gpt2", priority: 1},
]

We will now use helm-run to execute the runs that have been specified in this run spec configuration file. Run this command:

helm-run --conf-paths run_entries.conf --suite v1 --max-eval-instances 10

The meanings of the additional arguments are as follows:

  • --suite specifies a subdirectory under the output directory in which all the output will be placed.
  • --max-eval-instances limits evaluation to only the first N inputs (i.e. instances) from the benchmark.

helm-run creates an environment directory and an output directory by default.

  • The environment directory is prod_env/ by default and can be set using --local-path. Credentials for making API calls should be added to a credentials.conf file in this directory (see the example below).
  • The output directory is benchmark_output/ by default and can be set using --output-path.
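For reference, a minimal credentials.conf for calling a hosted API might look like the following. The openaiApiKey name is illustrative; check the documentation for the exact key names expected by each model provider. The gpt2 model used in this tutorial is generally run locally, so it should not require any credentials.

openaiApiKey: your-openai-api-key-here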

After running this command, navigate to the benchmark_output/runs/v1/ directory. It should contain two sub-directories named mmlu:subject=anatomy,model=openai_gpt2 and mmlu:subject=philosophy,model=openai_gpt2. Note that the names of these sub-directories are based on the run spec descriptions we used earlier, but with / replaced with _.

Each output sub-directory will contain several JSON files that were generated during the corresponding run:

  • run_spec.json contains the RunSpec, which specifies the scenario, adapter and metrics for the run.
  • scenario.json contains a serialized Scenario, which contains the scenario for the run and specifies the instances (i.e. inputs) used.
  • scenario_state.json contains a serialized ScenarioState, which contains every request to and response from the model.
  • per_instance_stats.json contains a serialized list of PerInstanceStats, which contains the statistics produced for the metrics for each instance (i.e. input).
  • stats.json contains a serialized list of the statistics produced for the metrics, aggregated across all instances (i.e. inputs).
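
As a quick sanity check, you can list a run's output directory and pretty-print one of these files; python -m json.tool is used here purely for readable formatting:

ls "benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2"
python -m json.tool "benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2/stats.json" | head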

helm-run provides additional arguments that can be used to filter which runs are executed: --models-to-run, --groups-to-run and --priority. It can be convenient to create a large run_entries.conf file containing every run spec description of interest, and then use these flags to filter down the RunSpecs to actually run. As an example, the main run_specs.conf file used for the HELM benchmarking paper can be found here.
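For example, with a larger run_entries.conf, a command along the following lines would execute only the openai/gpt2 entries that pass the priority threshold (the flag values shown are illustrative):

helm-run --conf-paths run_entries.conf --suite v1 --max-eval-instances 10 --models-to-run openai/gpt2 --priority 1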

Using model or model_deployment: Some models have several deployments (for example, eleutherai/gpt-j-6b is deployed under huggingface/gpt-j-6b, gooseai/gpt-j-6b and together/gpt-j-6b). Since the results can differ depending on the deployment, we provide a way to specify the deployment instead of the model. Instead of using model=eleutherai/gpt-j-6b, use model_deployment=huggingface/gpt-j-6b. If you do not, a deployment will be chosen arbitrarily. model_deployment can also be used for models that have a single deployment, and doing so is good practice because it avoids any ambiguity.
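For example, a run entry pinned to the Hugging Face deployment of GPT-J would look like the entry below. It follows the same conf format shown earlier and is included only to illustrate the syntax; it is not part of this tutorial's runs.

entries: [
  {description: "mmlu:subject=anatomy,model_deployment=huggingface/gpt-j-6b", priority: 1},
]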

Using helm-summarize

The helm-summarize command reads the output files of helm-run and computes aggregate statistics across runs. Run the following:

helm-summarize --suite v1

This reads the pre-existing files in benchmark_output/runs/v1/ that were written by helm-run previously, and writes the following new files back to benchmark_output/runs/v1/:

  • summary.json contains a serialized ExecutiveSummary with a date and suite name.
  • run_specs.json contains the run spec descriptions for all the runs.
  • runs.json contains a serialized list of Run, which contains the run path, run spec, adapter spec, and statistics for each run.
  • groups.json contains a serialized list of Table, each containing information about groups in a group category.
  • groups_metadata.json contains a list of all the groups along with a human-readable description and a taxonomy.

Additionally, for each group and group-relevant metric, it will output a pair of files: benchmark_output/runs/v1/groups/latex/<group_name>_<metric_name>.tex and benchmark_output/runs/v1/groups/json/<group_name>_<metric_name>.json. These files contain the statistics for that metric from each run within the group.
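You can verify the summarization output by listing these directories; the exact file names depend on the groups and metrics present in your runs:

ls benchmark_output/runs/v1/groups/latex/
ls benchmark_output/runs/v1/groups/json/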

Using helm-server

Finally, the helm-server command launches a web server to visualize the output files of helm-run and helm-summarize. Run:

helm-server

Open a browser and go to http://localhost:8000/ to view the visualization. You should see a view similar to the live website for the paper, but with the data from your benchmark runs. The website has four main sections:

  • Models contains a list of available models.
  • Scenarios contains a list of available scenarios.
  • Results contains results from the runs, organized into groups and categories of groups.
  • Raw Runs contains a searchable list of runs.

Other Tips

  • The suite name can be used as a versioning mechanism to separate runs using different versions of scenarios or models.
  • Tools such as jq are useful for examining the JSON output files on the command line.
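
For example, assuming jq is installed, the following prints the first aggregated statistic from one of the runs (stats.json is a JSON list, as described above):

jq '.[0]' "benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2/stats.json"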

Download files

Download the file for your platform.

Source Distribution

crfm_helm-0.5.1.tar.gz (5.3 MB)

Uploaded Source

Built Distribution

crfm_helm-0.5.1-py3-none-any.whl (5.6 MB)

Uploaded Python 3

File details

Details for the file crfm_helm-0.5.1.tar.gz.

File metadata

  • Download URL: crfm_helm-0.5.1.tar.gz
  • Upload date:
  • Size: 5.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for crfm_helm-0.5.1.tar.gz
  • SHA256: a00c814f654738402de38a8800049de3401095bb1d9b41c506280b70c13d2d18
  • MD5: 1729eadd6c989b176f8d32e6742c3d2b
  • BLAKE2b-256: 1e3186e88995949a6c46c00c99933492be5441efd4a0e0b4a4c37ae4533f97e8

See more details on using hashes here.

File details

Details for the file crfm_helm-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: crfm_helm-0.5.1-py3-none-any.whl
  • Upload date:
  • Size: 5.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for crfm_helm-0.5.1-py3-none-any.whl
  • SHA256: 157632ef3aea876c7d63887efc9e1cbbbe2cb081395cdb2c93beae30069cf6d3
  • MD5: 0221c99f88d88648a7f57f923bc1f050
  • BLAKE2b-256: 8f8f0b97aea2c0f37bdcb566224781c8ad23a0d642249e7c82a76b4a5ce4baef

See more details on using hashes here.
