
Deep learning for Text to Speech by Coqui.

Project description

🐸TTS is a library for advanced Text-to-Speech generation. It is built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in over 20 languages for products and research projects.

📰 Subscribe to 🐸Coqui.ai Newsletter

📢 English Voice Samples and SoundCloud playlist

📄 Text-to-Speech paper collection

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

Type Platforms
🚨 Bug Reports GitHub Issue Tracker
🎁 Feature Requests & Ideas GitHub Issue Tracker
👩‍💻 Usage Questions GitHub Discussions
🗯 General Discussion GitHub Discussions or Gitter Room

🔗 Links and Resources

Type Links
💼 Documentation ReadTheDocs
💾 Installation TTS/README.md
👩‍💻 Contributing CONTRIBUTING.md
📌 Road Map Main Development Plans
🚀 Released Models TTS Releases and Experimental Models

🥇 TTS Performance

(Performance comparison chart omitted; underlined "TTS*" and "Judy*" entries are 🐸TTS models.)

Features

  • High-performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on the terminal and Tensorboard.
  • Support for Multi-speaker TTS.
  • Efficient, flexible, lightweight but feature complete Trainer API.
  • Released and ready-to-use models.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Utilities to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Implemented Models

Text-to-Spectrogram

End-to-End Models

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog
  • Dynamic Convolutional Attention: paper
  • Alignment Network: paper

Speaker Encoder

Vocoders

You can also help us implement more models.

Install TTS

🐸TTS is tested on Ubuntu 18.04 with Python >= 3.7, < 3.11.

If you are only interested in synthesizing speech with the released 🐸TTS models, installing from PyPI is the easiest option.

pip install TTS

If you plan to code or train models, clone 🐸TTS and install it locally.

git clone https://github.com/coqui-ai/TTS
cd TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras

If you are on Ubuntu (Debian), you can also run the following commands to install:

$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install

If you are on Windows, 👑@GuyPaddock wrote installation instructions here.

Use TTS

Single Speaker Models

  • List provided models:

    $ tts --list_models
    
  • Get model info (for both tts_models and vocoder_models):

    • Query by type/name: model_info_by_name takes the full model name exactly as it appears in the --list_models output.

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      

      For example:

      $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
      
      $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
      
    • Query by type/idx: model_info_by_idx takes the model type and the corresponding index from the --list_models output.

      $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
      

      For example:

      $ tts --model_info_by_idx tts_models/3 
      
  • Run TTS with default models:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    
  • Run a TTS model with its default vocoder model:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    
  • Run with specific TTS and vocoder models from the list:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    
  • Run your own TTS model (Using Griffin-Lim Vocoder):

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    
  • Run your own TTS and Vocoder models:

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav \
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    

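The invocations above get long once a model and vocoder are pinned, so a small wrapper script keeps them reusable. This is a minimal sketch using only the flags documented above; the script name, the model and vocoder choices, and the output path are all examples, and by default it only prints the command for review instead of running it.

```shell
#!/bin/sh
# say.sh (example name): wrap the documented tts flags behind one short command.
# Usage: sh say.sh "Some text" out.wav
TEXT=${1:-"Text for TTS"}
OUT=${2:-speech.wav}
MODEL="tts_models/en/ljspeech/glow-tts"        # example model from the list
VOCODER="vocoder_models/en/ljspeech/univnet"   # example vocoder from the list

cmd="tts --text \"$TEXT\" --model_name \"$MODEL\" --vocoder_name \"$VOCODER\" --out_path \"$OUT\""
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$cmd"   # review first; re-run with DRY_RUN=0 to synthesize
else
  eval "$cmd"
fi
```

Keeping the dry run as the default makes it easy to confirm the quoting and model names before committing to a (potentially slow) first-time model download.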
Multi-speaker Models

  • List the available speakers and choose a <speaker_id> among them:

    $ tts --model_name "<model_type>/<language>/<dataset>/<model_name>" --list_speaker_idxs
    
  • Run the multi-speaker TTS model with the target speaker ID:

    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<model_type>/<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
    
  • Run your own multi-speaker TTS model:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    

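To synthesize many utterances at once, the single-utterance command above can be looped over a text file, one line per output. A minimal POSIX shell sketch; lines.txt, the model name, and the utt_N.wav output names are all examples, and the script prints each command instead of executing it unless RUN=1 is set.

```shell
#!/bin/sh
# Batch synthesis: one wav per line of an input file (all names are examples).
printf 'First sentence.\nSecond sentence.\n' > lines.txt   # sample input

i=0
while IFS= read -r line; do
  i=$((i + 1))
  cmd="tts --text \"$line\" --model_name \"tts_models/en/ljspeech/glow-tts\" --out_path \"utt_$i.wav\""
  if [ "${RUN:-0}" = "1" ]; then
    eval "$cmd"    # actually synthesize
  else
    echo "$cmd"    # dry run: print the commands for review
  fi
done < lines.txt
```

Using input redirection (rather than piping into the loop) keeps the loop in the current shell, so counters like $i survive past `done` in POSIX sh.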
Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- distribute.py              (train your TTS model on multiple GPUs.)
      |- compute_statistics.py      (compute dataset statistics for normalization.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

TTS-0.8.0.tar.gz (1.4 MB, Source)

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

TTS-0.8.0-cp310-cp310-manylinux1_x86_64.whl (559.1 kB, CPython 3.10)

TTS-0.8.0-cp39-cp39-manylinux1_x86_64.whl (559.1 kB, CPython 3.9)

TTS-0.8.0-cp38-cp38-manylinux1_x86_64.whl (559.1 kB, CPython 3.8)

TTS-0.8.0-cp37-cp37m-manylinux1_x86_64.whl (559.1 kB, CPython 3.7m)

File details

Details for the file TTS-0.8.0.tar.gz.

File metadata

  • Download URL: TTS-0.8.0.tar.gz
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for TTS-0.8.0.tar.gz
Algorithm Hash digest
SHA256 12443cbc68584a12953086a6195241ba4987c6d113a536ebbb1b6b9eb53f0fef
MD5 4cf8c77a5153de2c7ce78c03d1098264
BLAKE2b-256 67996c20d7d05ba1b790ecf630c966339575f1e1ba1de99180cfb6cd0cdef613

See more details on using hashes here.
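The digests above can be checked locally before installing a downloaded file. A short Python sketch using only the standard library; the file path in the commented example is the source archive from the table above, so substitute whichever artifact you downloaded.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA256 of a file, reading it in chunks
    so large archives never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest from the table above:
# expected = "12443cbc68584a12953086a6195241ba4987c6d113a536ebbb1b6b9eb53f0fef"
# print(sha256_of("TTS-0.8.0.tar.gz") == expected)
```

Alternatively, pip can enforce this automatically: pinning the package with its hashes in a requirements file and installing with --require-hashes makes pip reject any artifact whose digest does not match.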

File details

Details for the file TTS-0.8.0-cp310-cp310-manylinux1_x86_64.whl.

File metadata

File hashes

Hashes for TTS-0.8.0-cp310-cp310-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 8a9f256d3ae70a34704ed38ff0c6224eb571d1e6f9c55a96cff770fb261a9080
MD5 d2ff533cb977b9cffd8977e189cbf325
BLAKE2b-256 e1931b2bd97e43c15b733c9d8957c5bbb9edae27f028f2ecf48ca5a91936a421


File details

Details for the file TTS-0.8.0-cp39-cp39-manylinux1_x86_64.whl.

File metadata

File hashes

Hashes for TTS-0.8.0-cp39-cp39-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 9d3216aaa459ad4bd77c9e35c8a594980daad18b2ff021eeb73009c77490d537
MD5 6b4208bd183345b317368665590caa30
BLAKE2b-256 fae7d78645cd4869ce62399d698cab4833839a807070d2e2ca3d5f9f6853ba22


File details

Details for the file TTS-0.8.0-cp38-cp38-manylinux1_x86_64.whl.

File metadata

File hashes

Hashes for TTS-0.8.0-cp38-cp38-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 96d4bec76abb78213e64e19005c2d84485839116a946ca75ce6c8c9381d7c9ce
MD5 9c3db2107aa7e3c359ef57d629cc520c
BLAKE2b-256 90553bef1c9b36acda1645119770fe345728c85969d044f4cd144d4e1a4d17b4


File details

Details for the file TTS-0.8.0-cp37-cp37m-manylinux1_x86_64.whl.

File metadata

File hashes

Hashes for TTS-0.8.0-cp37-cp37m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 a8a4ddbfec58e9cbec8d667fa327fe9ff0bb0eaae7fb154cd602d1caf340e525
MD5 57b51f4df4ccef783f445b8ce0720715
BLAKE2b-256 e52798e238b6990a1771925d0b0485beab140004670fae9cf599139c0a2c4690

