Python module for evaluating ASR (automatic speech recognition) hypotheses, i.e. computing word error rate (WER) and word recognition rate (WRR).
This module depends on the editdistance project, for computing edit distances between arbitrary sequences.
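For illustration, here is how the dependency is typically used on its own (this is the editdistance library's API, not code from this package); it computes the Levenshtein distance between any two sequences:

```python
import editdistance

# Edit distance between two strings (sequences of characters):
# 'banana' -> 'bahama' needs two substitutions (n->h, n->m).
editdistance.eval('banana', 'bahama')   # 2

# It also works on arbitrary sequences, e.g. lists of words:
editdistance.eval('the quick fox'.split(), 'the quack fox'.split())   # 1
```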
The output formatting is loosely based on the align.c program commonly used within the Sphinx ASR community. Evaluation may run a bit faster if neither instances nor confusions are printed.
Please let me know if you have any comments, questions, or problems.
The program outputs three standard measurements:
- Word error rate (WER)
- Word recognition rate (WRR): the number of matched words in the alignment divided by the number of words in the reference.
- Sentence error rate (SER): the number of incorrect sentences divided by the total number of sentences.
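As a rough sketch of how the first and third metrics relate to edit distance (illustrative only, assuming the editdistance dependency; this is not the package's internal code):

```python
import editdistance

def wer(ref_words, hyp_words):
    # WER = (substitutions + deletions + insertions) / reference length;
    # the numerator is the word-level Levenshtein distance.
    return editdistance.eval(ref_words, hyp_words) / len(ref_words)

def ser(ref_sentences, hyp_sentences):
    # SER = sentences containing any error / total sentences.
    wrong = sum(r != h for r, h in zip(ref_sentences, hyp_sentences))
    return wrong / len(ref_sentences)

ref = 'the quick brown fox'.split()
hyp = 'the quick brown box'.split()
print(wer(ref, hyp))  # 0.25: one substitution out of four reference words
```

Word recognition rate additionally requires the count of matched words, which means recovering the full alignment rather than just the distance.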
Installing & uninstalling
The easiest way to install is using pip:
pip install asr-evaluation
Alternatively, you can clone this git repo and install using distutils:
git clone git@github.com:belambert/asr-evaluation.git
cd asr-evaluation
python setup.py install
To uninstall with pip:
pip uninstall asr-evaluation
Command line usage
For command line usage, see:
wer --help
It should display something like this:
usage: wer [-h] [-i | -r] [--head-ids] [-id] [-c] [-p] [-m count] [-a] [-e]
           ref hyp

Evaluate an ASR transcript against a reference transcript.

positional arguments:
  ref                   Reference transcript filename
  hyp                   ASR hypothesis filename

optional arguments:
  -h, --help            show this help message and exit
  -i, --print-instances
                        Print all individual sentences and their errors.
  -r, --print-errors    Print all individual sentences that contain errors.
  --head-ids            Hypothesis and reference files have ids in the first
                        token? (Kaldi format)
  -id, --tail-ids, --has-ids
                        Hypothesis and reference files have ids in the last
                        token? (Sphinx format)
  -c, --confusions      Print tables of which words were confused.
  -p, --print-wer-vs-length
                        Print table of average WER grouped by reference
                        sentence length.
  -m count, --min-word-count count
                        Minimum word count to show a word in confusions.
  -a, --case-insensitive
                        Down-case the text before running the evaluation.
  -e, --remove-empty-refs
                        Skip over any examples where the reference is empty.
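For example, to print every aligned sentence pair along with its errors (ref.txt and hyp.txt are placeholder filenames):
wer -i ref.txt hyp.txt
To also see which words were confused with which, add -c; with -a the comparison is done case-insensitively.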
Contributing and code of conduct
For contributions, it's best to use GitHub issues and pull requests. Proper testing and documentation are encouraged.
Conduct is expected to be reasonable, as specified by the Contributor Covenant.