
Evaluate your speech-to-text system with similarity measures such as word error rate (WER)

Project description

JiWER: Similarity measures for automatic speech recognition evaluation

This repository contains a simple Python package to approximate the Word Error Rate (WER), Match Error Rate (MER), Word Information Lost (WIL) and Word Information Preserved (WIP) of a transcript. It computes the minimum edit distance between the ground-truth sentence and the hypothesis sentence of a speech-to-text API. The minimum edit distance is calculated using the Python C extension module python-Levenshtein.

For a comparison between WER, MER and WIL, see:
Morris, Andrew & Maier, Viktoria & Green, Phil. (2004). From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.
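
As a rough illustration of what the package computes, here is a minimal pure-Python sketch of word-level WER via a dynamic-programming edit distance. This is illustrative only; the package itself delegates the distance computation to the python-Levenshtein C module for speed.

```python
def word_error_rate(truth: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of ground-truth words."""
    ref = truth.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("hello world", "hello duck"))  # 0.5
```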

Installation

You should be able to install this package using pip if you're using Python >= 3.5:

$ pip install jiwer

Usage

The simplest use case is computing the WER between two strings:

from jiwer import wer

ground_truth = "hello world"
hypothesis = "hello duck"

error = wer(ground_truth, hypothesis)

Similarly, to get other measures:

import jiwer

ground_truth = "hello world"
hypothesis = "hello duck"

wer = jiwer.wer(ground_truth, hypothesis)
mer = jiwer.mer(ground_truth, hypothesis)
wil = jiwer.wil(ground_truth, hypothesis)

# faster, because `compute_measures` only needs to perform the heavy lifting once:
measures = jiwer.compute_measures(ground_truth, hypothesis)
wer = measures['wer']
mer = measures['mer']
wil = measures['wil']
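
All four measures can be derived from the alignment counts (hits H, substitutions S, deletions D, insertions I), following the formulas in Morris et al. (2004). The sketch below illustrates the relationships; `measures_from_counts` is not part of the jiwer API.

```python
def measures_from_counts(h: int, s: int, d: int, i: int) -> dict:
    """Derive WER, MER, WIL and WIP from alignment counts (Morris et al. 2004)."""
    wer = (s + d + i) / (h + s + d)
    mer = (s + d + i) / (h + s + d + i)
    wip = (h / (h + s + d)) * (h / (h + s + i)) if h > 0 else 0.0
    return {"wer": wer, "mer": mer, "wil": 1 - wip, "wip": wip}

# "hello world" vs "hello duck": 1 hit, 1 substitution
print(measures_from_counts(h=1, s=1, d=0, i=0))
```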

You can also compute the WER over multiple sentences:

from jiwer import wer

ground_truth = ["hello world", "i like monthy python"]
hypothesis = ["hello duck", "i like python"]

error = wer(ground_truth, hypothesis)

When the number of ground-truth sentences and hypothesis sentences differs, a minimum alignment is done over the merged sentences:

ground_truth = ["i like monthy python", "what do you mean, african or european swallow"]
hypothesis = ["i like", "python", "what you mean" , "or swallow"]

# is equivalent to

ground_truth = "i like monthy python what do you mean african or european swallow"
hypothesis = "i like python what you mean or swallow"
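
Concretely, the merge amounts to joining each list into a single space-separated string before aligning, as this quick sketch shows:

```python
ground_truth = ["i like monthy python", "what do you mean african or european swallow"]
hypothesis = ["i like", "python", "what you mean", "or swallow"]

# Merging is equivalent to joining the sentences with single spaces:
merged_truth = " ".join(ground_truth)
merged_hyp = " ".join(hypothesis)

print(merged_truth)
print(merged_hyp)
```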

Pre-processing

It might be necessary to apply some pre-processing steps on either the hypothesis or ground truth text. This is possible with the transformation API:

import jiwer

ground_truth = "I like  python!"
hypothesis = "i like Python?\n"

transformation = jiwer.Compose([
    jiwer.ToLowerCase(),
    jiwer.RemoveMultipleSpaces(),
    jiwer.RemoveWhiteSpace(replace_by_space=False),
    jiwer.SentencesToListOfWords(word_delimiter=" ")
]) 

jiwer.wer(
    ground_truth, 
    hypothesis, 
    truth_transform=transformation, 
    hypothesis_transform=transformation
)

By default, the following transformation is applied to both the ground truth and the hypothesis. Note that it simply gets the text into the right format for calculating the WER.

default_transformation = jiwer.Compose([
    jiwer.RemoveMultipleSpaces(),
    jiwer.Strip(),
    jiwer.SentencesToListOfWords(),
    jiwer.RemoveEmptyStrings()
])
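
Assuming the listed steps behave like the usual string operations, the default pipeline is roughly equivalent to this plain-Python sketch (illustrative only, not jiwer's code):

```python
def default_like(sentence: str) -> list:
    # RemoveMultipleSpaces + Strip + SentencesToListOfWords + RemoveEmptyStrings
    return [word for word in sentence.strip().split(" ") if word != ""]

print(default_like("  hello   world "))  # ['hello', 'world']
```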

Transformations

Compose

jiwer.Compose(transformations: List[Transform]) can be used to combine multiple transformations.

Example:

jiwer.Compose([
    jiwer.RemoveMultipleSpaces(),
    jiwer.SentencesToListOfWords()
])

SentencesToListOfWords

jiwer.SentencesToListOfWords(word_delimiter=" ") can be used to transform one or more sentences into a list of words. The sentences can be given as a string (one sentence) or a list of strings (one or more sentences).

Example:

sentences = ["hi", "this is an example"]

print(jiwer.SentencesToListOfWords()(sentences))
# prints: ['hi', 'this', 'is', 'an', 'example']

RemoveSpecificWords

jiwer.RemoveSpecificWords(words_to_remove: List[str]) can be used to filter out certain words.

Example:

sentences = ["yhe awesome", "the apple is not a pear", "yhe"]

print(jiwer.RemoveSpecificWords(["yhe", "the", "a"])(sentences))
# prints: ["awesome", "apple is pear", ""]

RemoveWhiteSpace

jiwer.RemoveWhiteSpace(replace_by_space=False) can be used to filter out white space. The whitespace characters are the space, \t, \n, \r, \x0b and \x0c. Note that by default the space character is also removed, which will make it impossible to split a sentence into words using SentencesToListOfWords. This can be prevented by replacing all whitespace with the space character (replace_by_space=True).

Example:

sentences = ["this is an example", "hello\tworld\n\r"]

print(jiwer.RemoveWhiteSpace()(sentences))
# prints: ["thisisanexample", "helloworld"]

print(jiwer.RemoveWhiteSpace(replace_by_space=True)(sentences))
# prints: ["this is an example", "hello world  "]
# note the trailing spaces

RemovePunctuation

jiwer.RemovePunctuation() can be used to filter out punctuation. The punctuation characters are:

'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'

Example:

sentences = ["this is an example!", "hello. goodbye"]

print(jiwer.RemovePunctuation()(sentences))
# prints: ['this is an example', "hello goodbye"]
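
This character set matches Python's built-in string.punctuation, which is presumably the set the transformation strips; a quick check:

```python
import string

# The punctuation characters listed above are Python's string.punctuation.
print(string.punctuation)

example = "this is an example!"
cleaned = example.translate(str.maketrans("", "", string.punctuation))
print(cleaned)  # this is an example
```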

RemoveMultipleSpaces

jiwer.RemoveMultipleSpaces() can be used to filter out multiple spaces between words.

Example:

sentences = ["this is   an   example ", "  hello goodbye  ", "  "]

print(jiwer.RemoveMultipleSpaces()(sentences))
# prints: ['this is an example ', " hello goodbye ", " "]
# note that there are still trailing spaces

Strip

jiwer.Strip() can be used to remove all leading and trailing spaces.

Example:

sentences = [" this is an example ", "  hello goodbye  ", "  "]

print(jiwer.Strip()(sentences))
# prints: ['this is an example', "hello goodbye", ""]
# note that there is an empty string left behind which might need to be cleaned up

RemoveEmptyStrings

jiwer.RemoveEmptyStrings() can be used to remove empty strings.

Example:

sentences = ["", "this is an example", " ",  "                "]

print(jiwer.RemoveEmptyStrings()(sentences))
# prints: ['this is an example']

ExpandCommonEnglishContractions

jiwer.ExpandCommonEnglishContractions() can be used to expand common contractions such as let's into let us.

Currently, this method will perform the following replacements. Note that ␣ is used to indicate a space character, to get around markdown rendering constraints.

Contraction    Transformed into
won't          will not
can't          can not
let's          let us
n't            ␣not
're            ␣are
's             ␣is
'd             ␣would
'll            ␣will
't             ␣not
've            ␣have
'm             ␣am
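
The expansion can be sketched with word-boundary regex substitutions; the rules below are an illustrative subset, not jiwer's exact replacement list:

```python
import re

# A subset of the replacement rules above, applied in order.
rules = [
    (r"won't", "will not"),
    (r"can't", "can not"),
    (r"let's", "let us"),
    (r"n't\b", " not"),
    (r"'re\b", " are"),
    (r"'ll\b", " will"),
]

def expand(sentence: str) -> str:
    for pattern, replacement in rules:
        sentence = re.sub(pattern, replacement, sentence)
    return sentence

print(expand("she'll make sure you can't make it"))
# she will make sure you can not make it
```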

Example:

sentences = ["she'll make sure you can't make it", "let's party!"]

print(jiwer.ExpandCommonEnglishContractions()(sentences))
# prints: ["she will make sure you can not make it", "let us party!"]

SubstituteWords

jiwer.SubstituteWords(dictionary: Mapping[str, str]) can be used to replace a word with another word. Note that the whole word is matched. If the word you're attempting to substitute is a substring of another word, it will not be affected. For example, if you're substituting foo with bar, the word foobar will NOT become barbar.

Example:

sentences = ["you're pretty", "your book", "foobar"]

print(jiwer.SubstituteWords({"pretty": "awesome", "you": "i", "'re": " am", 'foo': 'bar'})(sentences))

# prints: ["i am awesome", "your book", "foobar"]
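
The whole-word matching described above can be sketched with \b word-boundary regexes (an assumption about the semantics, not jiwer's actual implementation):

```python
import re

def substitute_words(sentence: str, table: dict) -> str:
    # \b ensures only whole words match, so substrings of larger words survive.
    for word, replacement in table.items():
        sentence = re.sub(r"\b" + re.escape(word) + r"\b", replacement, sentence)
    return sentence

print(substitute_words("foo foobar", {"foo": "bar"}))
# bar foobar  -- 'foobar' is untouched because only whole words match
```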

SubstituteRegexes

jiwer.SubstituteRegexes(dictionary: Mapping[str, str]) can be used to replace substrings matching a regular expression with another substring.

Example:

sentences = ["is the world doomed or loved?", "edibles are allegedly cultivated"]

# note: the regex "\b(\w+)ed\b" matches every word ending in 'ed',
# and "\1" stands for the first group ((\w+)). It therefore removes 'ed' from every match.
print(jiwer.SubstituteRegexes({r"doom": r"sacr", r"\b(\w+)ed\b": r"\1"})(sentences))

# prints: ["is the world sacr or lov?", "edibles are allegedly cultivat"]

ToLowerCase

jiwer.ToLowerCase() can be used to convert every character to lowercase.

Example:

sentences = ["You're PRETTY"]

print(jiwer.ToLowerCase()(sentences))

# prints: ["you're pretty"]

ToUpperCase

jiwer.ToUpperCase() can be used to convert every character to uppercase.

Example:

sentences = ["You're amazing"]

print(jiwer.ToUpperCase()(sentences))

# prints: ["YOU'RE AMAZING"]

RemoveKaldiNonWords

jiwer.RemoveKaldiNonWords() can be used to remove any word between [] and <>. This can be useful when working with hypotheses from the Kaldi project, which can output non-words such as [laugh] and <unk>.

Example:

sentences = ["you <unk> like [laugh]"]

print(jiwer.RemoveKaldiNonWords()(sentences))

# prints: ["you  like "]
# note the extra spaces
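
The behaviour can be sketched with a single regex that drops bracketed tokens (illustrative, not jiwer's actual pattern):

```python
import re

def remove_kaldi_non_words(sentence: str) -> str:
    # Drop any [...] or <...> token, e.g. [laugh] or <unk>.
    return re.sub(r"\[[^\]]*\]|<[^>]*>", "", sentence)

print(remove_kaldi_non_words("you <unk> like [laugh]"))
# prints: "you  like " (note the leftover spaces)
```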
