High-quality Machine Translation Evaluation

Project description




Version 1.0.0 is finally out 🥳! What's new?

  1. comet-compare command for statistical comparison between two models
  2. comet-score with multiple hypotheses/systems
  3. Embedding caching for faster inference (thanks to @jsouza)
  4. Length batching for faster inference (thanks to @CoderPat)
  5. Integration with SacreBLEU for dataset downloading (thanks to @mjpost)
  6. Monte Carlo dropout for uncertainty estimation (thanks to @glushkovato and @chryssa-zrv)
  7. Some code refactoring

Quick Installation

Detailed usage examples and instructions can be found in the Full Documentation.

Simple installation from PyPI

pip install unbabel-comet==1.0.0

To develop locally, install Poetry and run the following commands:

git clone https://github.com/Unbabel/COMET
cd COMET
poetry install

Alternatively, for development, you can run the CLI tools directly, e.g.:

PYTHONPATH=. ./comet/cli/score.py

Scoring MT outputs:

CLI Usage:

Test examples:

echo -e "Dem Feuer konnte Einhalt geboten werden\nSchulen und Kindergärten wurden eröffnet." >> src.de
echo -e "The fire could be stopped\nSchools and kindergartens were open" >> hyp1.en
echo -e "The fire could have been stopped\nSchools and pre-school were open" >> hyp2.en
echo -e "They were able to control the fire.\nSchools and kindergartens opened" >> ref.en
comet-score -s src.de -t hyp1.en -r ref.en

Scoring multiple systems:

comet-score -s src.de -t hyp1.en hyp2.en -r ref.en

WMT test sets via SacreBLEU:

comet-score -d wmt20:en-de -t PATH/TO/TRANSLATIONS

You can select another model/metric with the --model flag; for reference-free (QE-as-a-metric) models you don't need to pass a reference.

comet-score -s src.de -t hyp1.en --model wmt20-comet-qe-da

Following the work on Uncertainty-Aware MT Evaluation, you can use the --mc_dropout flag to get a variance/uncertainty value for each segment score. If this value is high, the metric is less confident in that prediction.

comet-score -s src.de -t hyp1.en -r ref.en --mc_dropout 30

When comparing two MT systems, we encourage you to run the comet-compare command to get statistical significance with paired t-tests and bootstrap resampling (Koehn, 2004).

comet-compare -s src.de -x hyp1.en -y hyp2.en -r ref.en

For even more detailed contrastive MT evaluation, please take a look at our new tool, MT-Telescope.

Multi-GPU Inference:

COMET is optimized to be used on a single GPU by taking advantage of length batching and embedding caching. When using multiple GPUs, the data is spread across devices, so we typically get fewer cache hits, and the length-batching sampler is replaced by a DistributedSampler. Because of that, according to our experiments, using 1 GPU is faster than using 2 GPUs, especially when scoring multiple systems for the same source and reference.

Nonetheless, if your data does not have repetitions and you have more than 1 GPU available, you can run multi-GPU inference with the following command:

comet-score -s src.de -t hyp1.en -r ref.en --gpus 2

Changing Embedding Cache Size:

You can change COMET's cache size with the following environment variable:

export COMET_EMBEDDINGS_CACHE="2048"

By default, the COMET cache size is 1024.
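
If you are scoring from Python (see the next section) rather than the command line, the same variable can be set in-process. A minimal sketch, assuming the variable is read when the comet package is imported, so it should be set before the import:

import os

# Raise the embedding cache from the default 1024 entries to 2048.
# Assumption: COMET reads COMET_EMBEDDINGS_CACHE at import time,
# so set it before importing the package.
os.environ["COMET_EMBEDDINGS_CACHE"] = "2048"

from comet import download_model, load_from_checkpoint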

Scoring within Python:

from comet import download_model, load_from_checkpoint

model_path = download_model("wmt20-comet-da")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)
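
predict returns one score per input triple plus a single corpus-level score. A small sketch of inspecting both, assuming seg_scores is a plain list of floats in input order:

# Print each segment score next to its translation, then the
# corpus-level (system) score.
for item, score in zip(data, seg_scores):
    print(f"{score:.4f}\t{item['mt']}")
print(f"System score: {sys_score:.4f}")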

Languages Covered:

All the above-mentioned models are built on top of XLM-R, which covers the following languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.

Thus, results for language pairs containing uncovered languages are unreliable!

COMET Models:

We recommend the following two models for evaluating your translations:

  • wmt20-comet-da: DEFAULT Reference-based Regression model built on top of XLM-R (large) and trained on Direct Assessments from WMT17 to WMT19. Same as wmt-large-da-estimator-1719 from previous versions.
  • wmt20-comet-qe-da: Reference-FREE Regression model built on top of XLM-R (large) and trained on Direct Assessments from WMT17 to WMT19. Same as wmt-large-qe-estimator-1719 from previous versions.

These two models were developed to participate in the WMT20 Metrics shared task (Mathur et al. 2020) and were among the best metrics that year. In a large-scale study performed by Microsoft Research, these two metrics ranked 1st and 2nd in terms of system-level decision accuracy (Kocmi et al. 2020). At the segment level, they also correlate well with expert evaluations based on MQM data (Freitag et al. 2020).
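
The reference-free model follows the same Python workflow shown in the scoring section above. A minimal sketch, assuming the only change for QE scoring is that the input dicts omit the ref field (mirroring the CLI, where no -r flag is passed):

from comet import download_model, load_from_checkpoint

# wmt20-comet-qe-da scores a translation from the source alone.
model = load_from_checkpoint(download_model("wmt20-comet-qe-da"))

# Assumption: QE inputs carry only "src" and "mt", no "ref".
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
    }
]
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)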

For more information about the available COMET models, read our metrics descriptions here.

Train your own Metric:

Instead of using a pretrained model, you can train your own metric with the following command:

comet-train --cfg configs/models/{your_model_config}.yaml

You can then use your own metric to score:

comet-score -s src.de -t hyp1.en -r ref.en --model PATH/TO/CHECKPOINT
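
A trained checkpoint can also be loaded directly from Python with the same API used above. A minimal sketch, where "PATH/TO/CHECKPOINT" is a placeholder for your comet-train output:

from comet import load_from_checkpoint

# Load a locally trained checkpoint instead of a downloaded model.
model = load_from_checkpoint("PATH/TO/CHECKPOINT")

# `data` uses the same dict format as the scoring example above;
# the "..." strings are placeholders for your own segments.
data = [{"src": "...", "mt": "...", "ref": "..."}]
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)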

Note: Please contact ricardo.rei@unbabel.com if you wish to host your own metric among COMET's available metrics!

unittest:

To run the toolkit tests, run the following commands:

coverage run --source=comet -m unittest discover
coverage report -m

Publications

If you use COMET, please cite our work! Also, don't forget to say which model you used to evaluate your systems.
