HuggingFace community-driven open-source library for evaluation
Project description
🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.
It currently contains:
- implementations of dozens of popular metrics: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, get any of these metrics ready to use for evaluating an ML model in any framework (Numpy/Pandas/PyTorch/TensorFlow/JAX).
- comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets (see the sketch after this list).
- an easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with `evaluate-cli create [metric name]`, which allows you to easily compare different metrics and their outputs for the same sets of references and predictions.
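Comparisons and measurements are loaded the same way as metrics. A minimal sketch, assuming the `mcnemar` comparison and the `word_length` measurement are available on the Hub (individual modules may pull in their own extra dependencies):

```python
import evaluate

# Comparisons quantify how two models' predictions differ on the same references.
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(
    predictions1=[0, 1, 1],
    predictions2=[1, 1, 1],
    references=[0, 1, 0],
))

# Measurements describe properties of a dataset rather than of a model.
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "this is a longer sentence"]))
```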
🔎 Find a metric, comparison, or measurement on the Hub
🤗 Evaluate also has lots of useful features like:
- Type checking: the input types are checked to make sure that you are using the right input formats for each metric
- Metric cards: each metric comes with a card that describes its values, limitations, and ranges, as well as examples of its usage and usefulness.
- Community metrics: Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.
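Community modules are referenced by their Hub identifier. A minimal sketch, where `my-username/my_metric` is a hypothetical identifier used only for illustration:

```python
import evaluate

# Include community-contributed modules when listing what is available.
print(evaluate.list_evaluation_modules(include_community=True))

# Load a module hosted in a user's Space on the Hub via "<namespace>/<name>".
# "my-username/my_metric" is a hypothetical identifier, not a real module.
my_metric = evaluate.load("my-username/my_metric")
```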
Installation
With pip
🤗 Evaluate can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):
pip install evaluate
Usage
🤗 Evaluate's main methods are:
- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons and measurements
- `evaluate.load(module_name, **kwargs)` to instantiate an evaluation module
- `results = module.compute(**kwargs)` to compute the result of an evaluation module
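Putting these together, a minimal sketch using the `accuracy` metric:

```python
import evaluate

# List every metric, comparison, and measurement currently available.
print(evaluate.list_evaluation_modules())

# Instantiate a metric by name and compute it on predictions and references.
accuracy = evaluate.load("accuracy")
results = accuracy.compute(references=[0, 1, 1, 1], predictions=[0, 1, 0, 1])
print(results)  # {'accuracy': 0.75}
```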
Adding a new evaluation module
First install the necessary dependencies to create a new metric with the following command:
pip install evaluate[template]
Then you can get started with the following command, which will create a new folder for your metric and display the necessary steps:
evaluate-cli create "Awesome Metric"
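The generated folder contains a module script that subclasses `evaluate.Metric`. A rough sketch of what such a script looks like (the class name, features, and metric logic below are illustrative, not the generated template verbatim):

```python
import datasets
import evaluate


class AwesomeMetric(evaluate.Metric):
    """Toy metric computing the fraction of exact matches."""

    def _info(self):
        # Describes the module and the input features it expects.
        return evaluate.MetricInfo(
            description="Fraction of predictions that exactly match the references.",
            citation="",
            inputs_description="Lists of integer predictions and references.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int32"),
                    "references": datasets.Value("int32"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Called by `compute()` with the inputs declared in `_info`.
        matches = sum(p == r for p, r in zip(predictions, references))
        return {"exact_match": matches / len(references)}
```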
See this step-by-step guide in the documentation for detailed instructions.
Credits
Thanks to @marella for letting us use the `evaluate` namespace on PyPI, previously used by his library.
Download files
Source Distribution
evaluate-0.1.1.tar.gz (50.3 kB)
Built Distribution
evaluate-0.1.1-py3-none-any.whl (68.8 kB)
File details
Details for the file evaluate-0.1.1.tar.gz.
File metadata
- Download URL: evaluate-0.1.1.tar.gz
- Upload date:
- Size: 50.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.0 CPython/3.9.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | c8c4960f84480b6976e5844885283c884c75e5a3c4437ae57334b2843727bc2c
MD5 | 30a10f2afae42dd14f1eb8eb4375eae7
BLAKE2b-256 | c044db0ef1cd14a44a8e7d146ff98940533a3b9bafc7ee19c4e04a12c678f591
File details
Details for the file evaluate-0.1.1-py3-none-any.whl.
File metadata
- Download URL: evaluate-0.1.1-py3-none-any.whl
- Upload date:
- Size: 68.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.0 CPython/3.9.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | c4ea387acb0075193975e5676ad9c181dcf93fce94bac2b259082b038465fae4
MD5 | 659625db415d9fd6dbfe9259b0624892
BLAKE2b-256 | 88ad8df53ab5cddd04f2d0cba8c72a5e0c3a19a748ff402dedff4b06c1cf7634