Deep learning for Text to Speech by Coqui.

🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in 20+ languages for products and research projects.

📰 Subscribe to 🐸Coqui.ai Newsletter

📢 English Voice Samples and SoundCloud playlist

📄 Text-to-Speech paper collection

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

Type Platforms
🚨 Bug Reports GitHub Issue Tracker
🎁 Feature Requests & Ideas GitHub Issue Tracker
👩‍💻 Usage Questions GitHub Discussions
🗯 General Discussion GitHub Discussions or Discord

🔗 Links and Resources

Type Links
💼 Documentation ReadTheDocs
💾 Installation TTS/README.md
👩‍💻 Contributing CONTRIBUTING.md
📌 Road Map Main Development Plans
🚀 Released Models TTS Releases and Experimental Models

🥇 TTS Performance

[Performance comparison figure; underlined "TTS*" and "Judy*" entries are 🐸TTS models.]

Features

  • High-performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
  • Fast and efficient model training.
  • Detailed training logs on the terminal and Tensorboard.
  • Support for Multi-speaker TTS.
  • Efficient, flexible, lightweight but feature complete Trainer API.
  • Released and ready-to-use models.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Utilities to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Implemented Models

Spectrogram models

  • Tacotron, Tacotron2, Glow-TTS, SpeedySpeech (as listed under Features above)

End-to-End Models

  • VITS and YourTTS (used in the synthesis examples below)

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog
  • Dynamic Convolutional Attention: paper
  • Alignment Network: paper

Speaker Encoder

Vocoders

  • MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN (as listed under Features above)

Voice Conversion

  • FreeVC (used in the voice conversion example below)

You can also help us implement more models.

Install TTS

🐸TTS is tested on Ubuntu 18.04 with python >= 3.7, < 3.11.
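
If you want to confirm your interpreter falls in that range first, here is a minimal sketch (not part of 🐸TTS):

import sys

# 🐸TTS is tested with Python >= 3.7 and < 3.11 (see above)
assert (3, 7) <= sys.version_info[:2] < (3, 11), "untested Python version"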

If you are only interested in synthesizing speech with the released 🐸TTS models, installing from PyPI is the easiest option.

pip install TTS
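
To check that the installation worked, a minimal sketch that prints a few released model names (the same TTS.list_models() call appears in the Python API examples below):

from TTS.api import TTS

# Print the first few entries of the released-models list
print(TTS.list_models()[:5])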

If you plan to code or train models, clone 🐸TTS and install it locally.

git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras

If you are on Ubuntu (Debian), you can also run the following commands to install.

$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install

If you are on Windows, 👑@GuyPaddock wrote installation instructions here.

Docker Image

You can also try TTS without installing it by using the Docker image. Simply run the following commands:

docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
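
Once the server is up, you can also query it programmatically. A minimal client sketch using the requests library, assuming the demo server's /api/tts endpoint accepts a text query parameter and returns WAV bytes on port 5002:

import requests

# Ask the demo server to synthesize a sentence and save the returned WAV.
resp = requests.get(
    "http://localhost:5002/api/tts",
    params={"text": "Hello from the Docker container!"},
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)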

Synthesizing speech by 🐸TTS

🐍 Python API

from TTS.api import TTS

# Running a multi-speaker and multi-lingual model

# List available 🐸TTS models and choose the first one
model_name = TTS.list_models()[0]
# Init TTS
tts = TTS(model_name)

# Run TTS

# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

# Running a single speaker model

# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese

tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")


# Example voice conversion converting speaker of the `source_wav` to the speaker of the `target_wav`

tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")

# Example voice cloning with a single-speaker TTS model combined with the voice conversion model. This way, you
# can clone voices using any model in 🐸TTS.

tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="ouptut.wav"
)

# Example text to speech using [🐸Coqui Studio](https://coqui.ai) models.

# You can use all of your available speakers in the studio.
# [🐸Coqui Studio](https://coqui.ai) API token is required. You can get it from the [account page](https://coqui.ai/account).
# You should set the `COQUI_STUDIO_TOKEN` environment variable to use the API token.
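
# One way to provide the token from Python before constructing TTS
# (placeholder value; replace it with your token from the account page):
import os
os.environ["COQUI_STUDIO_TOKEN"] = "<your_token>"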

# If you have a valid API token set you will see the studio speakers as separate models in the list.
# The name format is coqui_studio/en/<studio_speaker_name>/coqui_studio
models = TTS().list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)
# Run TTS with emotion and speed control
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)


# Example text to speech using **Fairseq models in ~1100 languages** 🤯.

# For these models use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
# You can find the list of language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).

# TTS with on the fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="ouptut.wav"
)

Command line tts

Single Speaker Models

  • List provided models:

    $ tts --list_models
    
  • Get model info (for both tts_models and vocoder_models):

    • Query by type/name: --model_info_by_name uses the model name as it appears in the output of --list_models.

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      

      For example:

      $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
      
      $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
      
    • Query by type/idx: --model_info_by_idx uses the corresponding index (model_query_idx) from the --list_models output.

      $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
      

      For example:

      $ tts --model_info_by_idx tts_models/3
      
  • Run TTS with default models:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    
  • Run a TTS model with its default vocoder model:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    
  • Run with specific TTS and vocoder models from the list:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    
  • Run your own TTS model (Using Griffin-Lim Vocoder):

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    
  • Run your own TTS and Vocoder models:

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    

Multi-speaker Models

  • List the available speakers and choose a <speaker_id> among them:

    $ tts --model_name "<language>/<dataset>/<model_name>"  --list_speaker_idxs
    
  • Run the multi-speaker TTS model with the target speaker ID:

    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>"  --speaker_idx <speaker_id>
    
  • Run your own multi-speaker TTS model:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
