
The robust European language model benchmark.


(formerly known as ScandEval)



Installation

To install the package, simply run the following command in your favorite terminal:

$ pip install euroeval[all]

This will install the EuroEval package with all extras. You can also install the minimal version by leaving out the [all], in which case the package will let you know when an evaluation requires a certain extra dependency, and how to install it.

Quickstart

Benchmarking from the Command Line

The easiest way to benchmark pretrained models is via the command line interface. After having installed the package, you can benchmark your favorite model like so:

$ euroeval --model <model-id>

Here <model-id> is the Hugging Face model ID, which can be found on the Hugging Face Hub. By default, this will benchmark the model on all available tasks. If you want to benchmark on a particular task, use the --task argument:

$ euroeval --model <model-id> --task sentiment-classification

We can also narrow down which languages to benchmark on by setting the --language argument. Here we benchmark the model on the Danish sentiment classification task:

$ euroeval --model <model-id> --task sentiment-classification --language da

Multiple models, datasets and/or languages can be specified by simply repeating the corresponding arguments. Here is an example with two models:

$ euroeval --model <model-id1> --model <model-id2>
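The repeated-flag pattern above can be sketched with Python's argparse and its append action; this is an illustrative sketch only, not EuroEval's actual CLI implementation:

```python
import argparse

# Minimal sketch of how repeated flags accumulate into lists.
parser = argparse.ArgumentParser(prog="euroeval")
parser.add_argument("--model", action="append", default=[])
parser.add_argument("--language", action="append", default=[])

args = parser.parse_args(
    ["--model", "model-a", "--model", "model-b", "--language", "da"]
)
print(args.model)     # ['model-a', 'model-b']
print(args.language)  # ['da']
```

Each occurrence of --model appends to the same list, which is why repeating an argument is all it takes to benchmark several models in one run.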

You can also pin a specific model version/revision by appending '@' followed by the revision:

$ euroeval --model <model-id>@<commit>

This can be a branch name, a tag name, or a commit ID, and defaults to 'main', i.e., the latest version.
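The '@' convention amounts to a simple split with a 'main' fallback. Here is a minimal sketch; split_model_spec is a hypothetical helper, not part of EuroEval's code:

```python
def split_model_spec(spec: str) -> tuple[str, str]:
    """Split '<model-id>@<revision>' into (model_id, revision).

    Hypothetical helper: the revision may be a branch name, a tag name,
    or a commit ID, and defaults to 'main' when omitted.
    """
    model_id, _, revision = spec.partition("@")
    return model_id, revision or "main"

print(split_model_spec("org/model@v1.0"))  # ('org/model', 'v1.0')
print(split_model_spec("org/model"))       # ('org/model', 'main')
```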

You can see all available arguments and options for the euroeval command by typing:

$ euroeval --help

Benchmarking from a Script

In a script, the syntax is similar to the command line interface. You simply initialise a Benchmarker object and call it with your favorite model:

>>> from euroeval import Benchmarker
>>> benchmark = Benchmarker()
>>> benchmark(model="<model-id>")

To benchmark on a specific task and/or language, simply specify the task or language arguments, shown here with the same example as above:

>>> benchmark(model="<model-id>", task="sentiment-classification", language="da")

If you want to benchmark a subset of all the models on the Hugging Face Hub, you can simply leave out the model argument. In this example, we're benchmarking all Danish models on the Danish sentiment classification task:

>>> benchmark(task="sentiment-classification", language="da")

Benchmarking in an Offline Environment

If you need to benchmark in an offline environment, you need to download the models, datasets and metrics beforehand. This can be done with the --download-only flag from the command line, or the download_only argument when benchmarking from a script. For example, to download a given model along with all of the Danish sentiment classification datasets:

$ euroeval --model <model-id> --task sentiment-classification --language da --download-only

Or from a script:

>>> benchmark(
... model="<model-id>",
... task="sentiment-classification",
... language="da",
... download_only=True,
... )

Please note: Offline benchmarking of adapter models is not currently supported. An internet connection will be required during evaluation. If offline support is important to you, please consider opening an issue.

Benchmarking from Docker

A Dockerfile is provided in the repo, which can be downloaded and run without needing to clone the repo or install from source. It can be fetched by running the following:

$ wget https://raw.githubusercontent.com/EuroEval/EuroEval/main/Dockerfile.cuda

Next, to build the Docker image, first ensure that the NVIDIA Container Toolkit is installed and configured, and that the CUDA version stated at the top of the Dockerfile matches the CUDA version installed on your machine (which you can check using nvidia-smi). After that, build the image as follows:

$ docker build --pull -t euroeval -f Dockerfile.cuda .

With the Docker image built, we can now evaluate any model as follows:

$ docker run -e args="<euroeval-arguments>" --gpus 1 --name euroeval --rm euroeval

Here <euroeval-arguments> consists of the arguments passed to the euroeval CLI. This could for instance be --model <model-id> --task sentiment-classification.
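How such an invocation is assembled can be sketched in Python; docker_run_command is a hypothetical helper mirroring the docker run command above, not part of EuroEval:

```python
import shlex


def docker_run_command(euroeval_arguments: str) -> list[str]:
    """Build the `docker run` invocation shown above.

    Hypothetical helper: the euroeval CLI arguments are passed into the
    container via the `args` environment variable.
    """
    return [
        "docker", "run",
        "-e", f"args={euroeval_arguments}",
        "--gpus", "1",
        "--name", "euroeval",
        "--rm",
        "euroeval",
    ]


cmd = docker_run_command("--model model-a --task sentiment-classification")
print(shlex.join(cmd))
```

shlex.join quotes the args=... element, since it contains spaces, so the printed line can be pasted straight back into a shell.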

Reproducing the datasets

All datasets used in this project are generated using the scripts located in the src/scripts folder. To reproduce a dataset, run the corresponding script with the following command:

$ uv run src/scripts/<name-of-script>.py

Replace <name-of-script> with the specific script you wish to execute, e.g.:

$ uv run src/scripts/create_allocine.py

Contributors :pray:

A huge thank you to all the contributors who have helped make this project a success!

peter-sk, AJDERS, oliverkinch, versae, KennethEnevoldsen, viggo-gascou, mathiasesn, Alkarex, marksverdhei, Mikeriess, ThomasKluiters, BramVanroy, peregilk, Rijgersberg, duarteocarmo, slowwavesleep

Contribute to EuroEval

We welcome contributions to EuroEval! Whether you're fixing bugs, adding features, or contributing new datasets, your help makes this project better for everyone.

  • General contributions: Check out our contribution guidelines for information on how to get started.
  • Adding datasets: If you're interested in adding a new dataset to EuroEval, we have a dedicated guide with step-by-step instructions.

Special Thanks

  • Thanks to Google for sponsoring Gemini credits as part of their Google Cloud for Researchers Program.
  • Thanks to @Mikeriess for evaluating many of the larger models on the leaderboards.
  • Thanks to OpenAI for sponsoring OpenAI credits as part of their Researcher Access Program.
  • Thanks to UWV and KU Leuven for sponsoring the Azure OpenAI credits used to evaluate GPT-4-turbo in Dutch.
  • Thanks to Miðeind for sponsoring the OpenAI credits used to evaluate GPT-4-turbo in Icelandic and Faroese.
  • Thanks to CHC for sponsoring the OpenAI credits used to evaluate GPT-4-turbo in German.

Citing EuroEval

If you want to cite the framework, feel free to use either of the following:

@article{smart2024encoder,
  title={Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks},
  author={Smart, Dan Saattrup and Enevoldsen, Kenneth and Schneider-Kamp, Peter},
  journal={arXiv preprint arXiv:2406.13469},
  year={2024}
}
@inproceedings{smart2023scandeval,
  author = {Smart, Dan Saattrup},
  booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
  month = may,
  pages = {185--201},
  title = {{ScandEval: A Benchmark for Scandinavian Natural Language Processing}},
  year = {2023}
}
