Project description

ttstokenizer: Tokenizer for Text to Speech (TTS) models

See the original repository https://github.com/Kyubyong/g2p for more information on English Grapheme to Phoneme conversion.

Other than removing unused dependencies and reorganizing the files, the original logic remains intact.

ttstokenizer makes it easy to feed text into Text to Speech (TTS) models, with minimal dependencies, all of which are Apache 2.0 compatible.

The standard preprocessing logic for many English Text to Speech (TTS) models is as follows:

  • Apply Tacotron text normalization rules
    • This project replicates the logic found in ESPnet
  • Convert Graphemes to Phonemes
  • Build an integer array mapping Phonemes to their integer token positions

This project adds new tokenizers that run the logic above. The output is directly consumable by machine learning models.
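For illustration, below is a minimal sketch of these three steps. The lexicon is a toy stand-in for CMUdict and the normalization is reduced to lowercasing; the real pipeline is implemented by the tokenizers in this package.

import numpy as np

# 1. Tacotron-style text normalization (simplified to lowercasing here)
text = "Text to tokenize".lower()

# 2. Grapheme to phoneme conversion (toy stand-in for a CMUdict lookup)
lexicon = {
    "text": ["T", "EH1", "K", "S", "T"],
    "to": ["T", "AH0"],
    "tokenize": ["T", "OW1", "K", "AH0", "N", "AY2", "Z"],
}
phonemes = [p for word in text.split() for p in lexicon[word]]

# 3. Map phonemes to their integer token positions
tokens = {p: i for i, p in enumerate(sorted(set(phonemes)))}
print(np.array([tokens[p] for p in phonemes]))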

Installation

The easiest way to install is via pip and PyPI.

pip install ttstokenizer

Usage

This project has two supported tokenizers.

  • TTSTokenizer - Tokenizes text to ARPABET phonemes. Word to phoneme definitions are provided by CMUdict. These phonemes are then mapped to token ids using a provided phoneme to token id mapping.

  • IPATokenizer - Tokenizes text to International Phonetic Alphabet (IPA) phonemes. The graphemes for each phoneme are mapped to token ids.

The IPATokenizer is designed to be a drop-in replacement for models that depend on eSpeak to tokenize text into IPA phonemes.

An example of tokenizing text with each tokenizer is shown below.

from ttstokenizer import TTSTokenizer

tokenizer = TTSTokenizer(tokens)
print(tokenizer("Text to tokenize"))

>>> array([ 4, 15, 10,  6,  4,  4, 28,  4, 34, 10,  2,  3, 51, 11])
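The tokens argument above is the phoneme to token id mapping expected by the downstream model. A minimal sketch, assuming a plain dict of ARPABET phoneme strings to integer ids (the exact structure and inventory depend on the target model, so treat this as illustrative only):

# Hypothetical mapping for illustration; real models ship their own token inventory
symbols = ["AH0", "AY2", "EH1", "K", "N", "OW1", "S", "T", "Z"]
tokens = {symbol: index for index, symbol in enumerate(symbols)}

tokenizer = TTSTokenizer(tokens)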
from ttstokenizer import IPATokenizer

tokenizer = IPATokenizer()
print(tokenizer("Text to tokenize"))

>>> array([ 62 156  86  53  61  62  16  62  70  16  62 156  57 135  53  70  56 157 43 102  68])

Debugging

Both tokenizers also support returning raw types to help with debugging.

The following returns ARPABET phonemes instead of token ids for the TTSTokenizer.

from ttstokenizer import TTSTokenizer

tokenizer = TTSTokenizer()  # no token mapping provided, so phonemes are returned
print(tokenizer("Text to tokenize"))

>>> ['T', 'EH1', 'K', 'S', 'T', 'T', 'AH0', 'T', 'OW1', 'K', 'AH0', 'N', 'AY2', 'Z']

The same can be done with the IPATokenizer. The following returns the transcribed IPA string instead of token ids.

from ttstokenizer import IPATokenizer

tokenizer = IPATokenizer(tokenize=False)  # return the IPA transcription instead of token ids
print(tokenizer("Text to tokenize"))

>>> "tˈɛkst tɐ tˈoʊkɐnˌaɪz"

The IPATokenizer can also accept IPA transcriptions directly, skipping the transcription step.

from ttstokenizer import IPATokenizer

tokenizer = IPATokenizer(transcribe=False)  # input is already IPA, only tokenize
print(tokenizer("tˈɛkst tɐ tˈoʊkɐnˌaɪz"))

>>> array([[ 62 156  86  53  61  62  16  62  70  16  62 156  57 135  53  70  56 157 43 102  68]])

Notice how the output is the same as above. When the output doesn't sound right, these methods can help trace what's going on.
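For example, the two modes can be combined to trace a mispronunciation: transcribe first, inspect (or hand-correct) the IPA string, then tokenize it separately. This sketch only uses the flags shown above.

from ttstokenizer import IPATokenizer

# Step 1: transcribe only and inspect the IPA output
transcriber = IPATokenizer(tokenize=False)
ipa = transcriber("Text to tokenize")
print(ipa)

# Step 2: tokenize the (optionally hand-corrected) IPA string directly
tokenizer = IPATokenizer(transcribe=False)
print(tokenizer(ipa))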

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ttstokenizer-1.1.0.tar.gz (4.1 MB, Source)

Built Distribution

ttstokenizer-1.1.0-py3-none-any.whl (4.0 MB, Python 3)

File details

Details for the file ttstokenizer-1.1.0.tar.gz.

File metadata

  • Download URL: ttstokenizer-1.1.0.tar.gz
  • Upload date:
  • Size: 4.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.21

File hashes

Hashes for ttstokenizer-1.1.0.tar.gz

  • SHA256: 6a45e2b1cc39ec2329d890dd26aef0a9e05e30be35c8bc8587b7d36701019550
  • MD5: cac4ecde17bd493ea91cb820422407e0
  • BLAKE2b-256: 995a3d1c774a3bd49d60cd93f2d8d8e30c172100fb4bb93e438ba1413277de80


File details

Details for the file ttstokenizer-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: ttstokenizer-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 4.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.21

File hashes

Hashes for ttstokenizer-1.1.0-py3-none-any.whl

  • SHA256: c5a3caa4019edbca2a24013ae8433e8efb1071d404e628453de61df9b39ae9ba
  • MD5: fa21c55563b0885a241b45cf2c10b7c5
  • BLAKE2b-256: 4807a0d54d1a6b9e59f8e253b6d867853013727c4cfa1b35e6c25cb611fc699c

