
Project description

FlexEval


Flexible evaluation tool for language models. Easy to extend, highly customizable!


With FlexEval, you can evaluate language models with:

  • Zero/few-shot in-context learning tasks
  • Open-ended text-generation benchmarks such as MT-Bench with automatic evaluation using GPT-4
  • Log-probability-based multiple-choice tasks
  • Computing perplexity of text data

For more use cases, see the documentation.

Key Features

  • Flexibility: flexeval supports a wide range of evaluation setups and language-model backends.
  • Modularity: The core components of flexeval are easily extensible and replaceable.
  • Clarity: Evaluation results are presented clearly, and all details are saved.
  • Reproducibility: Evaluations are reproducible; configurations and results can be saved and reloaded.

Installation

pip install flexeval
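
If you prefer an isolated setup, installing into a fresh virtual environment works just as well. This is standard Python tooling, nothing flexeval-specific:

python -m venv .venv
source .venv/bin/activate
pip install flexeval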

Quick Start

The following minimal example evaluates the Hugging Face model sbintuitions/tiny-lm on the commonsense_qa task.

flexeval_lm \
  --language_model HuggingFaceLM \
  --language_model.model "sbintuitions/tiny-lm" \
  --eval_setup "commonsense_qa" \
  --save_dir "results/commonsense_qa"

(The model used in the example is solely for debugging purposes and does not perform well. Try switching to your favorite model!)

The results saved in --save_dir contain the following files (a quick way to inspect them is shown after the list):

  • config.json: The configuration of the evaluation, which can be used to replicate the evaluation.
  • metrics.json: The evaluation metrics.
  • outputs.jsonl: The outputs of the language model, along with instance-level metrics.
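
As a quick sanity check after the run above, you can peek at these files directly from the shell. This is a minimal sketch assuming the results/commonsense_qa directory from the Quick Start:

# aggregate metrics for the whole run
cat results/commonsense_qa/metrics.json
# first model output together with its instance-level metrics
head -n 1 results/commonsense_qa/outputs.jsonl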

You can flexibly customize the evaluation via command-line arguments or configuration files. Besides Transformers models, you can also evaluate models via OpenAI ChatGPT and vLLM, and other backends can readily be added!
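
For instance, evaluating a different Hugging Face model only requires changing the --language_model.model argument of the Quick Start command. The model name below is just a placeholder; substitute whichever model you want to test:

# example model name only; replace it with your favorite model
flexeval_lm \
  --language_model HuggingFaceLM \
  --language_model.model "meta-llama/Meta-Llama-3-8B-Instruct" \
  --eval_setup "commonsense_qa" \
  --save_dir "results/commonsense_qa_llama3"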

Next Steps

  • Run flexeval_presets to list the off-the-shelf presets available in addition to commonsense_qa (see the command below); details are in the Preset Configs section.
  • See Getting Started for tutorial examples covering other kinds of tasks.
  • See the Configuration Guide to set up your evaluation.
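
The preset listing mentioned above is a single command; its exact output depends on the installed version:

flexeval_presets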

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

flexeval-0.5.12.tar.gz (178.7 kB, details below)


Built Distribution

flexeval-0.5.12-py3-none-any.whl (246.1 kB, details below)


File details

Details for the file flexeval-0.5.12.tar.gz.

File metadata

  • Download URL: flexeval-0.5.12.tar.gz
  • Upload date:
  • Size: 178.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.19 Linux/6.5.0-1025-azure

File hashes

Hashes for flexeval-0.5.12.tar.gz

  • SHA256: 73bfea519421d53c7aea6a2d1fea7c1b7be3b21ac7a5b72a18cabea9c54b8ea6
  • MD5: aaaf110fcf4ffbfb96bd5827140ef3f2
  • BLAKE2b-256: 09984b5a9a987986bfd6e98ba4cab830e2c985e0ad3cbd444ab9651e25442909


File details

Details for the file flexeval-0.5.12-py3-none-any.whl.

File metadata

  • Download URL: flexeval-0.5.12-py3-none-any.whl
  • Upload date:
  • Size: 246.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.19 Linux/6.5.0-1025-azure

File hashes

Hashes for flexeval-0.5.12-py3-none-any.whl

  • SHA256: 8a236b6b450df8e6e6a70645ea61700dd0a7e0c0cce8e369a35213e4bef5a120
  • MD5: f453f2c30c37576f9511d51e2ebf0089
  • BLAKE2b-256: 35ebf808788e91e64c6dc31f8d06581c0f7b9fb8347eca8ed1cb7d3b5ba51630

