A framework for evaluating autoregressive language models

Language Model Evaluation Harness

Overview

This project provides a unified framework to test autoregressive language models (GPT-2, GPT-3, GPT-Neo, etc.) on a large number of evaluation tasks.

Features:

  • 200+ tasks implemented. See the task-table for a complete list.
  • Support for GPT-2, GPT-3, GPT-Neo, GPT-NeoX, and GPT-J, with flexible tokenization-agnostic interface.
  • Task versioning to ensure reproducibility.

Install

pip install lm-eval

To install the additional packages for multilingual tokenization and text segmentation, install the package with the multilingual extra:

pip install "lm-eval[multilingual]"

Basic Usage

Note: When reporting results from the eval harness, please include the task versions (shown in results["versions"]) for reproducibility. This allows tasks to receive bug fixes while still making clear which implementation produced previously reported scores. See the Task Versioning section for more info.

To evaluate a model (e.g. GPT-2) on tasks such as LAMBADA and HellaSwag, you can run the following command:

python main.py \
    --model gpt2 \
    --tasks lambada_openai,hellaswag \
    --device 0

This example uses the 117M-parameter gpt2 checkpoint, the Hugging Face default for the gpt2 name.
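
The same run can also be driven from Python. The following is a minimal sketch assuming the v0.3.0 Python API entry point lm_eval.evaluator.simple_evaluate; check your installed version if it differs. It also shows where the task versions mentioned in the note above live in the result dict.

from lm_eval import evaluator

# Mirrors the CLI invocation above; model names match the --model flag.
results = evaluator.simple_evaluate(
    model="gpt2",
    tasks=["lambada_openai", "hellaswag"],
    num_fewshot=0,
)

print(results["results"])   # per-task metrics
print(results["versions"])  # task versions to report, e.g. {"hellaswag": 0}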

Additional arguments can be passed to the model constructor using the --model_args flag. Most importantly, the gpt2 model type can load an arbitrary Hugging Face causal language model. For example, to run GPT-Neo, use the following:

python main.py \
    --model gpt2 \
    --model_args pretrained=EleutherAI/gpt-neo-2.7B \
    --tasks lambada_openai,hellaswag \
    --device 0

If you have access to the OpenAI API, you can also evaluate GPT-3:

export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag

If you want to verify the data integrity of the tasks you're running, in addition to running the tasks themselves, add the --check_integrity flag:

python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity

To evaluate mesh-transformer-jax models that are not available on HF, please invoke the eval harness through this script.

💡 Tip: You can inspect what the LM inputs look like by running the following command:

python write_out.py \
    --tasks all_tasks \
    --num_fewshot 5 \
    --num_examples 10 \
    --output_base_path /path/to/output/folder

This will write out one text file for each task.

Implementing new tasks

To implement a new task in the eval harness, see this guide.
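
As a rough illustration of the shape a task takes (the guide is authoritative), here is a hypothetical loglikelihood-scored task. Method names follow the v0.3.0 Task base class; the dataset id and the "question"/"answer" fields are made up for this sketch.

from lm_eval.base import Task, rf
from lm_eval.metrics import mean

class MyNewTask(Task):
    VERSION = 0                          # see Task Versioning below
    DATASET_PATH = "my_org/my_dataset"   # hypothetical HF dataset id
    DATASET_NAME = None

    def has_training_docs(self):
        return False

    def has_validation_docs(self):
        return True

    def has_test_docs(self):
        return False

    def validation_docs(self):
        return self.dataset["validation"]

    def doc_to_text(self, doc):
        return "Question: " + doc["question"] + "\nAnswer:"

    def doc_to_target(self, doc):
        return " " + doc["answer"]

    def construct_requests(self, doc, ctx):
        # Score the gold answer's loglikelihood under the model.
        return rf.loglikelihood(ctx, self.doc_to_target(doc))

    def process_results(self, doc, results):
        ll, is_greedy = results[0]
        # 1 if the model would greedily decode the gold answer.
        return {"acc": int(is_greedy)}

    def aggregation(self):
        return {"acc": mean}

    def higher_is_better(self):
        return {"acc": True}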

Task Versioning

To help improve reproducibility, all tasks have a VERSION field. When run from the command line, this is reported in a column of the results table, or in the "version" field of the evaluator's return dict. The version exists so that if a task definition changes (e.g., to fix a bug), we know exactly which metrics were computed with the old, buggy implementation and can avoid unfair comparisons. To enforce this, there are unit tests that make sure the behavior of each task remains the same as when it was first implemented. Task versions start at 0, and the version is incremented by one each time a breaking change is made.
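
For intuition, such a version-pinning test can be sketched as below; the fixture path and sample count are hypothetical, and the harness's real checks live in its own test suite.

import json
from lm_eval import tasks

def test_hellaswag_prompts_unchanged():
    # Render a few validation prompts and compare them to a stored snapshot.
    task = tasks.get_task("hellaswag")()
    docs = list(task.validation_docs())[:8]
    prompts = [task.doc_to_text(doc) for doc in docs]
    with open("tests/fixtures/hellaswag_v0.json") as f:  # hypothetical fixture
        expected = json.load(f)
    # An intentional behavior change should bump VERSION rather than
    # silently updating the snapshot.
    assert prompts == expected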

When reporting eval harness results, please also report the version of each task. This can be done either with a separate column in the table, or by appending the version to the task name, e.g. taskname-v0.

Test Set Decontamination

For details on text decontamination, see the decontamination guide.

Note that the directory provided to the --decontamination_ngrams_path argument should contain the n-gram files and info.json. The guide above covers n-gram generation for the Pile; the process can be adapted for other training sets.

python main.py \
    --model gpt2 \
    --tasks sciq \
    --decontamination_ngrams_path path/containing/training/set/ngrams \
    --device 0

Cite as

@software{eval-harness,
  author       = {Gao, Leo and
                  Tow, Jonathan and
                  Biderman, Stella and
                  Black, Sid and
                  DiPofi, Anthony and
                  Foster, Charles and
                  Golding, Laurence and
                  Hsu, Jeffrey and
                  McDonell, Kyle and
                  Muennighoff, Niklas and
                  Phang, Jason and
                  Reynolds, Laria and
                  Tang, Eric and
                  Thite, Anish and
                  Wang, Ben and
                  Wang, Kevin and
                  Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}

Download files

Download the file for your platform.

Source Distribution

  • lm_eval-0.3.0.tar.gz (120.6 kB, Source)

Built Distribution

  • lm_eval-0.3.0-py3-none-any.whl (178.7 kB, Python 3)

File details

Details for the file lm_eval-0.3.0.tar.gz.

File metadata

  • Download URL: lm_eval-0.3.0.tar.gz
  • Size: 120.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/33.0 requests/2.28.1 requests-toolbelt/0.9.1 urllib3/1.26.12 tqdm/4.45.0 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.9

File hashes

Hashes for lm_eval-0.3.0.tar.gz

  • SHA256: 643b12bf9374f4d7c78ce55471b6ad82c130ab1aa0577d97fdfa48875dbc598b
  • MD5: 8a1d2fa73ae48c3e938b47a1d8617d0e
  • BLAKE2b-256: c4f858abc65390a758c8c2e5f1d8bb9b58d7885d02535d5f48de27006453d07e

File details

Details for the file lm_eval-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: lm_eval-0.3.0-py3-none-any.whl
  • Size: 178.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/33.0 requests/2.28.1 requests-toolbelt/0.9.1 urllib3/1.26.12 tqdm/4.45.0 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.9

File hashes

Hashes for lm_eval-0.3.0-py3-none-any.whl

  • SHA256: a1b3cc6c3f1291717cac79a995dc2204547fe086ecfdec0e440ff1cea20b2ac2
  • MD5: ae1830db4e0f5ec746b9cacb14c57709
  • BLAKE2b-256: 61c5bff92e6b61fc2b0c1b7ac769633731910152e5176a404912ce7c07329ba0
