High-quality Machine Translation Evaluation
What's new?
- Bumped some requirements to make COMET easier to use on Windows and Apple M1.
- CometKiwi was the winning submission of the 2022 QE shared task 🥳! Code will be released soon!
Quick Installation
COMET requires Python 3.8 or above!
Simple installation from PyPI
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
or
pip install unbabel-comet==1.1.2 --use-feature=2020-resolver
To develop locally, install Poetry (pip install poetry) and run the following commands:
git clone https://github.com/Unbabel/COMET
cd COMET
poetry install
Alternatively, for development, you can run the CLI tools directly, e.g.,
PYTHONPATH=. ./comet/cli/score.py
Scoring MT outputs:
CLI Usage:
Test examples:
echo -e "Dem Feuer konnte Einhalt geboten werden\nSchulen und Kindergärten wurden eröffnet." >> src.de
echo -e "The fire could be stopped\nSchools and kindergartens were open" >> hyp1.en
echo -e "The fire could have been stopped\nSchools and pre-school were open" >> hyp2.en
echo -e "They were able to control the fire.\nSchools and kindergartens opened" >> ref.en
Basic scoring command:
comet-score -s src.de -t hyp1.en -r ref.en
You can set --gpus 0 to test on CPU.
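For example, to score on CPU:
comet-score -s src.de -t hyp1.en -r ref.en --gpus 0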
Scoring multiple systems:
comet-score -s src.de -t hyp1.en hyp2.en -r ref.en
WMT test sets via SacreBLEU:
comet-score -d wmt20:en-de -t PATH/TO/TRANSLATIONS
The default setting of comet-score prints the score for each segment individually. If you are only interested in the score for the whole dataset (computed as the average of the segment scores), you can use the --quiet flag.
comet-score -s src.de -t hyp1.en -r ref.en --quiet
You can select another model/metric with the --model flag and for reference-free (QE-as-a-metric) models you don't need to pass a reference.
comet-score -s src.de -t hyp1.en --model wmt21-comet-qe-mqm
Following the work on Uncertainty-Aware MT Evaluation you can use the --mc_dropout flag to get a variance/uncertainty value for each segment score. If this value is high, it means that the metric is less confident in that prediction.
comet-score -s src.de -t hyp1.en -r ref.en --mc_dropout 30
When comparing multiple MT systems we encourage you to run the comet-compare command to get statistical significance with paired t-tests and bootstrap resampling (Koehn, 2004).
comet-compare -s src.de -t hyp1.en hyp2.en hyp3.en -r ref.en
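The comet-compare command implements these tests for you. For intuition, here is a minimal sketch of the bootstrap-resampling idea over segment-level scores (illustrative only, not comet-compare's actual implementation; the bootstrap_win_rate helper and its parameters are our own):

import numpy as np

def bootstrap_win_rate(scores_a, scores_b, n_boot=1000, sample_ratio=0.5, seed=42):
    """Fraction of resampled subsets on which system A outscores system B
    (bootstrap resampling in the spirit of Koehn, 2004)."""
    rng = np.random.default_rng(seed)
    scores_a = np.asarray(scores_a)
    scores_b = np.asarray(scores_b)
    k = max(1, int(len(scores_a) * sample_ratio))
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores_a), size=k)  # sample segments with replacement
        if scores_a[idx].mean() > scores_b[idx].mean():
            wins += 1
    return wins / n_boot

# seg_scores for each system come from model.predict (see "Scoring within Python" below)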
New: Minimum Bayes Risk Decoding:
Inspired by the work of Amrhein et al. (2022), we have developed a command to perform Minimum Bayes Risk (MBR) decoding. This command receives a text file with source sentences and a text file containing all the MT samples, and writes the best sample according to COMET to an output file.
comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt --num_sample [X] -o [OUTPUT_FILE].txt
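Conceptually, MBR decoding scores each candidate translation against all other samples as pseudo-references and keeps the candidate with the highest expected utility. A minimal sketch of that idea using the Python scoring API (see "Scoring within Python" below); this is illustrative only, not the comet-mbr implementation, and the mbr_best_sample helper is our own:

from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("wmt20-comet-da"))

def mbr_best_sample(src, samples, batch_size=8, gpus=1):
    """Return the sample with the highest average COMET score when all
    other samples are treated as pseudo-references."""
    data = []
    for i, cand in enumerate(samples):
        for j, pseudo_ref in enumerate(samples):
            if i != j:
                data.append({"src": src, "mt": cand, "ref": pseudo_ref})
    seg_scores, _ = model.predict(data, batch_size=batch_size, gpus=gpus)
    n = len(samples) - 1  # pseudo-references per candidate
    utilities = [sum(seg_scores[i * n:(i + 1) * n]) / n for i in range(len(samples))]
    return samples[utilities.index(max(utilities))]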
Multi-GPU Inference:
COMET is optimized to be used on a single GPU by taking advantage of length batching and embedding caching. When using multiple GPUs, the data is spread across GPUs, so we typically get fewer cache hits, and length batching is replaced by a DistributedSampler. Because of that, according to our experiments, using 1 GPU is faster than using 2 GPUs, especially when scoring multiple systems for the same source and reference.
Nonetheless, if your data does not have repetitions and you have more than 1 GPU available, you can run multi-GPU inference with the following command:
comet-score -s src.de -t hyp1.en -r ref.en --gpus 2 --quiet
Warning: Segment-level scores using multi-GPU will be out of order. Multi-GPU inference is only useful for system-level scoring.
Changing Embedding Cache Size:
You can change the embedding cache size of COMET using the following environment variable:
export COMET_EMBEDDINGS_CACHE="2048"
By default, the COMET cache size is 1024.
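The cache stores segment embeddings, presumably keyed by the input text, so repeated segments (e.g. the same source and reference scored against several systems) are only encoded once. A rough illustration of the idea, assuming a simple LRU policy (COMET's internal implementation may differ):

from functools import lru_cache

calls = 0

@lru_cache(maxsize=1024)  # mirrors the default cache size above
def embed(segment: str):
    # Stand-in for an expensive encoder forward pass
    global calls
    calls += 1
    return hash(segment)  # placeholder for an embedding vector

for seg in ["a", "b", "a", "a"]:
    embed(seg)
print(calls)  # 2 -- the repeated segment hits the cache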
Scoring within Python:
from comet import download_model, load_from_checkpoint
model_path = download_model("wmt20-comet-da")
model = load_from_checkpoint(model_path)
data = [
{
"src": "Dem Feuer konnte Einhalt geboten werden",
"mt": "The fire could be stopped",
"ref": "They were able to control the fire."
},
{
"src": "Schulen und Kindergärten wurden eröffnet.",
"mt": "Schools and kindergartens were open",
"ref": "Schools and kindergartens opened"
}
]
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)
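For reference-free (QE) models the same API applies; since no reference is needed (see the --model flag above), each entry should only need "src" and "mt". A minimal sketch, assuming the wmt21-comet-qe-mqm checkpoint accepts this input format:

from comet import download_model, load_from_checkpoint

qe_model = load_from_checkpoint(download_model("wmt21-comet-qe-mqm"))
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped"
    }
]
seg_scores, sys_score = qe_model.predict(data, batch_size=8, gpus=1)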
Languages Covered:
All the above-mentioned models are built on top of XLM-R, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
COMET Models:
We recommend the following models to evaluate your translations:
- wmt20-comet-da: DEFAULT. Reference-based regression model built on top of XLM-R (large) and trained on Direct Assessments from WMT17 to WMT19. Same as wmt-large-da-estimator-1719 from previous versions.
- wmt21-comet-qe-mqm: Reference-FREE regression model built on top of XLM-R (large), trained on Direct Assessments and fine-tuned on MQM.
- eamt22-cometinho-da: Lightweight reference-based regression model distilled from an ensemble of COMET models similar to wmt20-comet-da.
The default model was developed for the WMT20 Metrics shared task (Mathur et al. 2020) and was among the best metrics that year. Also, in a large-scale study performed by Microsoft Research, this metric ranked 1st in terms of system-level decision accuracy (Kocmi et al. 2020).
Our recommended QE system was developed for the WMT21 Metrics shared task and was the best performing QE as a Metric that year (Freitag et al. 2021).
Note: The range of scores between different models can be totally different. To better understand COMET scores, please take a look at our FAQs.
For more information about the available COMET models, read our metrics descriptions here.
Train your own Metric:
Instead of using pretrained models, you can train your own model with the following command:
comet-train --cfg configs/models/{your_model_config}.yaml
You can then use your own metric to score:
comet-score -s src.de -t hyp1.en -r ref.en --model PATH/TO/CHECKPOINT
Note: Please contact ricardo.rei@unbabel.com if you wish to host your own metric among the available COMET metrics!
unittest:
To run the toolkit tests, run the following commands:
coverage run --source=comet -m unittest discover
coverage report -m
Publications
If you use COMET please cite our work! Also, don't forget to say which model you used to evaluate your systems.