
A library for benchmarking AI/ML applications.

Project description

[Badges: license, Azure DevOps build, tests and coverage]
[Screenshot: docs/img/benchmarksttcli.png, the benchmarkstt command line interface]

About

This is a command line tool for benchmarking Automatic Speech Recognition engines.

It is designed for non-academic production environments, and prioritises ease of use and relative benchmarking over scientific procedure and high-accuracy absolute scoring.

Because of the wide range of languages, algorithms and audio characteristics, no single STT engine can be expected to excel in all circumstances. For this reason, this tool places responsibility on the users to design their own benchmarking procedure and to decide, based on the combination of test data and metrics, which engine is best suited for their particular use case.

Usage

$ benchmarkstt reference.txt hypothesis.txt --wer

Return the Word Error Rate (WER).

$ benchmarkstt reference.txt hypothesis.txt --wer --lowercase

Return the Word Error Rate after lowercasing both reference and hypothesis. This normalization improves the accuracy of the Word Error Rate, as it removes differences that might otherwise be counted as errors.
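For reference, the Word Error Rate is the word-level edit distance (substitutions, insertions and deletions) between reference and hypothesis, divided by the number of reference words. The following is a minimal, self-contained Python sketch of that calculation, for illustration only; it is not benchmarkstt's own implementation.

# Illustrative WER calculation: word-level edit distance divided by the
# number of reference words. Generic sketch, not benchmarkstt's code.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("Hello big World", "hello big world"))                  # ~0.67: case differences count as errors
print(word_error_rate("Hello big World".lower(), "hello big world".lower()))  # 0.0: after lowercasing

Lowercasing before comparison is exactly the kind of normalization the --lowercase flag applies.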

$ benchmarkstt reference.txt hypothesis.txt --worddiffs --config conf

Return a visual diff after applying all the normalization rules specified in the config file.
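To give an idea of what a word-level diff involves conceptually, here is a small sketch using Python's standard difflib. It is not benchmarkstt's implementation, and the actual --worddiffs output format may differ.

# Conceptual word-level diff between tokenized reference and hypothesis,
# using only the Python standard library; not benchmarkstt's code.
import difflib

reference = "the quick brown fox jumps over the lazy dog".split()
hypothesis = "the quick brown box jumps over a lazy dog".split()

matcher = difflib.SequenceMatcher(None, reference, hypothesis)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    # tag is 'equal', 'replace', 'delete' or 'insert'
    print(tag, reference[i1:i2], hypothesis[j1:j2])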

Further information

This is a collaborative project to create a library for benchmarking AI/ML applications. It was created in response to the needs of broadcasters and providers of Access Services to media organisations, but anyone is welcome to contribute. The group behind this project is the EBU’s Media Information Management & AI group.

Currently the group is focussing on Speech-to-Text, but it will consider creating benchmarking tools for other AI/ML services.

For general information about this project, including the motivations and guiding principles, please see the project wiki.

To install and start using the tool, go to the documentation.
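The package is published on PyPI (see the distributions below), so installation will typically look something like the following; refer to the documentation for the authoritative instructions.

$ pip install benchmarkstt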

Download files

Download the file for your platform.

Source Distribution

benchmarkstt-1.0rc2.tar.gz (28.2 kB, Source)

Built Distributions

benchmarkstt-1.0rc2-py3-none-any.whl (42.3 kB, Python 3)

benchmarkstt-1.0rc2-py2.py3-none-any.whl (42.3 kB, Python 2 / Python 3)
