
OpenAI Evals

Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models, as well as the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals that represent the common LLM patterns in your workflow without exposing any of that data publicly.

If you are building with LLMs, creating high-quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time-intensive to understand how different model versions might affect your use case. In the words of OpenAI's President Greg Brockman:

https://x.com/gdb/status/1733553161884127435?s=20

Setup

To run evals, you will need to set up and specify your OpenAI API key. After you obtain an API key, specify it using the OPENAI_API_KEY environment variable. Please be aware of the costs associated with using the API when running evals. You can also run and create evals using Weights & Biases.
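
For example, on macOS or Linux you can export the key in your shell before running any evals (the value below is a placeholder for your own key):

export OPENAI_API_KEY=sk-...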

Minimum Required Version: Python 3.9

Downloading evals

Our evals registry is stored using Git LFS. Once you have downloaded and installed Git LFS, you can fetch the evals (from within your local copy of the evals repo) with:

cd evals
git lfs fetch --all
git lfs pull

This will populate all the pointer files under evals/registry/data.

You may want to fetch data for only a specific eval. You can achieve this via:

git lfs fetch --include=evals/registry/data/${your eval}
git lfs pull
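
For example, to pull only the data for an eval whose data lives under a directory named logic (a hypothetical name; substitute the directory for your own eval):

git lfs fetch --include=evals/registry/data/logic
git lfs pull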

Making evals

If you are going to be creating evals, we suggest cloning this repo directly from GitHub and installing the requirements using the following command:

pip install -e .

With the -e (editable) flag, changes you make to your eval will be reflected immediately without having to reinstall.

Optionally, you can install the formatters used by the pre-commit hooks with:

pip install -e .[formatters]

Then run pre-commit install to install pre-commit into your git hooks. pre-commit will now run on every commit.

If you want to manually run all pre-commit hooks on a repository, run pre-commit run --all-files. To run individual hooks use pre-commit run <hook_id>.
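
For example, assuming the repository configures black as one of its hooks (check .pre-commit-config.yaml for the actual hook ids), you could run just that formatter with:

pre-commit run black --all-files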

Running evals

If you don't want to contribute new evals, but simply want to run them locally, you can install the evals package via pip:

pip install evals

You can find the full instructions to run existing evals in run-evals.md and our existing eval templates in eval-templates.md. For more advanced use cases like prompt chains or tool-using agents, you can use our Completion Function Protocol.
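
As a quick sketch, the package installs an oaieval command-line tool that pairs a completion function (such as a model name) with an eval from the registry. The names below are illustrative; the exact arguments and available evals are documented in run-evals.md:

oaieval gpt-3.5-turbo test-match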

We provide the option for you to log your eval results to a Snowflake database, if you have one or wish to set one up. For this option, you will further have to specify the SNOWFLAKE_ACCOUNT, SNOWFLAKE_DATABASE, SNOWFLAKE_USERNAME, and SNOWFLAKE_PASSWORD environment variables.
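
If you use this option, you can set those variables the same way as the API key, for example (all values below are placeholders):

export SNOWFLAKE_ACCOUNT=...
export SNOWFLAKE_DATABASE=...
export SNOWFLAKE_USERNAME=...
export SNOWFLAKE_PASSWORD=...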

Writing evals

We suggest getting started by:

  • Walking through the process of building an eval in build-eval.md
  • Exploring the example notebooks in the examples folder
  • Reviewing the eval templates described in eval-templates.md

Please note that we are currently not accepting evals with custom code! While we ask you not to submit such evals at the moment, you can still submit model-graded evals with custom model-graded YAML files.

If you think you have an interesting eval, please open a pull request with your contribution. OpenAI staff actively review these evals when considering improvements to upcoming models.

FAQ

Do you have any examples of how to build an eval from start to finish?

  • Yes! These are in the examples folder. We recommend that you also read through build-eval.md in order to gain a deeper understanding of what is happening in these examples.

Do you have any examples of evals implemented in multiple different ways?

  • Yes! In particular, see evals/registry/evals/coqa.yaml. We have implemented small subsets of the CoQA dataset for various eval templates to help illustrate the differences.

When I run an eval, it sometimes hangs at the very end (after the final report). What's going on?

  • This is a known issue, but you should be able to interrupt it safely and the eval should finish immediately after.

There's a lot of code, and I just want to spin up a quick eval. Help? OR,

I am a world-class prompt engineer. I choose not to code. How can I contribute my wisdom?

  • If you follow an existing eval template to build a basic or model-graded eval, you don't need to write any evaluation code at all! Just provide your data in JSON format and specify your eval parameters in YAML. build-eval.md walks you through these steps, and you can supplement these instructions with the Jupyter notebooks in the examples folder to help you get started quickly. Keep in mind, though, that a good eval will inevitably require careful thought and rigorous experimentation!
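
As a rough sketch of what this looks like, a basic match-style eval needs only a JSONL samples file and a registry YAML entry. The file names, eval name, and class path below follow the patterns described in build-eval.md but are illustrative rather than a verified working configuration:

# A sample line in my-eval/samples.jsonl: each record pairs chat-format
# input messages with the ideal answer the completion is matched against.
{"input": [{"role": "system", "content": "Answer concisely."}, {"role": "user", "content": "What is 2 + 2?"}], "ideal": "4"}

# A registry entry in evals/registry/evals/my-eval.yaml pointing the
# basic Match template at that data file.
my-eval:
  id: my-eval.dev.v0
  metrics: [accuracy]
my-eval.dev.v0:
  class: evals.elsuite.basic.match:Match
  args:
    samples_jsonl: my-eval/samples.jsonl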

Disclaimer

By contributing to evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.

Download files


  • Source distribution: evals-3.0.1.post1.tar.gz (44.2 MB)
  • Built distribution: evals-3.0.1.post1-py3-none-any.whl (46.3 MB, Python 3)

File details

Details for the file evals-3.0.1.post1.tar.gz.

  • Size: 44.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

Hashes for evals-3.0.1.post1.tar.gz:

  • SHA256: 0ffd9bed75c273a4a9f0b10ccf11a5e1e1c63c89926f12958375551429a4dc21
  • MD5: ff5259c3d291c969acb690dc89dc662c
  • BLAKE2b-256: b1c7deae74cdbc70ce92a23d3708851770257c487a4d0a668d5aed69916d532e

File details

Details for the file evals-3.0.1.post1-py3-none-any.whl.

  • Size: 46.3 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

Hashes for evals-3.0.1.post1-py3-none-any.whl:

  • SHA256: 0abcb2051303500784b1641a6e4f6b813ed43ad64f879a37d344a6774eb8eb78
  • MD5: f939836dde842bb5d0394c824f7866ec
  • BLAKE2b-256: 7fe4b54a8285cd6bece2722fb2091570b70df061a9c7821aa460f48717bb6bda
