Jury

Evaluation toolkit for neural language generation.

A simple toolkit for evaluating NLG (Natural Language Generation) outputs with various automated metrics. Jury offers a smooth, easy-to-use interface. It relies on the huggingface/datasets package for the underlying metric computation, so adding a custom metric is as easy as adopting datasets.Metric. A minimal sketch of such a metric is shown below.
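Since metric computation goes through huggingface/datasets, a custom metric can be written by subclassing datasets.Metric. The sketch below is illustrative only: the ExactMatch class and its feature layout are assumptions, not part of Jury, and how a custom metric is registered with Jury itself may depend on the version.

import datasets

class ExactMatch(datasets.Metric):
    """Illustrative custom metric: fraction of predictions that exactly match their reference."""

    def _info(self):
        return datasets.MetricInfo(
            description="Fraction of predictions that exactly match their reference.",
            citation="",
            inputs_description="predictions: list of str, references: list of str",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("string"),
                    "references": datasets.Value("string"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Count exact string matches between each prediction and its reference.
        matches = sum(p == r for p, r in zip(predictions, references))
        return {"exact_match": matches / len(predictions)}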

Installation

Through pip,

pip install jury

or build from source,

git clone https://github.com/obss/jury.git
cd jury
python setup.py install

Usage

API Usage

It takes only two lines of code to evaluate generated outputs.

from jury import Jury

jury = Jury()
# Microsoft Translator translation of "Yurtta sulh, cihanda sulh." (16.07.2021)
predictions = ["Peace in the dormitory, peace in the world."]
references = ["Peace at home, peace in the world."]
scores = jury.evaluate(predictions, references)

Specify the metrics you want to use at instantiation.

jury = Jury(metrics=["bleu", "meteor"])
scores = jury.evaluate(predictions, references)
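The same call works on a batch of outputs. The extra sentence pair below is made up purely for illustration, and the exact keys in the returned scores depend on the metrics you selected.

from jury import Jury

# Illustrative batch evaluation; the second prediction/reference pair is made up.
jury = Jury(metrics=["bleu", "meteor"])
predictions = [
    "Peace in the dormitory, peace in the world.",
    "The cat sat on the mat.",
]
references = [
    "Peace at home, peace in the world.",
    "The cat is sitting on the mat.",
]
scores = jury.evaluate(predictions, references)
print(scores)  # a dict of scores; keys depend on the chosen metrics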

CLI Usage

Coming soon...

License

Licensed under the MIT License.
