
NeuTTS - a package for text-to-speech generation using Neuphonic's TTS models.

Project description

NeuTTS


NeuTTS-Nano Demo Video

Created by Neuphonic - building faster, smaller, on-device voice AI

State-of-the-art Voice AI has been locked behind web APIs for too long. NeuTTS is a collection of open source, on-device, TTS speech language models with instant voice cloning. Built off of LLM backbones, NeuTTS brings natural-sounding speech, real-time performance, built-in security and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.

Key Features

  • 🗣 Best-in-class realism for their size - produce natural, ultra-realistic voices that sound human, at the sweet spot between speed, size, and quality for real-world applications
  • 📱 Optimised for on-device deployment - provided in GGML format, ready to run on phones, laptops, or even Raspberry Pis
  • 👫 Instant voice cloning - create your own speaker with as little as 3 seconds of audio
  • 🚄 Simple LM + codec architecture - keeping development and deployment straightforward

[!CAUTION] Websites like neutts.com are popping up and they are not affiliated with Neuphonic, our GitHub, or this repo.

We are on neuphonic.com only. Please be careful out there! 🙏

Model Details

NeuTTS models are built from small LLM backbones - lightweight yet capable language models optimised for text understanding and generation - as well as a powerful combination of technologies designed for efficiency and quality:

  • Supported Languages: English
  • Audio Codec: NeuCodec - our 50 Hz neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
  • Context Window: 2048 tokens, enough for processing ~30 seconds of audio (including prompt duration)
  • Format: Available in GGML format for efficient on-device inference
  • Responsibility: Watermarked outputs
  • Inference Speed: Real-time generation on mid-range devices
  • Power Consumption: Optimised for mobile and embedded devices
|                          | NeuTTS Air | NeuTTS Nano             |
|--------------------------|------------|-------------------------|
| # Params (Active)        | ~360M      | ~120M                   |
| # Params (Emb + Active)  | ~552M      | ~229M                   |
| Cloning                  | Yes        | Yes                     |
| License                  | Apache 2.0 | NeuTTS Open License 1.0 |
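As a rough sanity check on the context-window figure above: at the 50 Hz codec rate, and assuming one codec token per frame (single codebook), ~30 seconds of audio consumes about 1500 of the 2048 tokens, leaving the remainder for the text and prompt. A back-of-envelope sketch:

```python
# Back-of-envelope token budget for the 2048-token context window,
# assuming one NeuCodec token per frame at 50 Hz (single codebook).
CODEC_HZ = 50
CONTEXT_TOKENS = 2048

audio_seconds = 30
audio_tokens = CODEC_HZ * audio_seconds      # tokens consumed by ~30 s of audio
text_budget = CONTEXT_TOKENS - audio_tokens  # tokens left for text/prompt

print(audio_tokens)  # 1500
print(text_budget)   # 548
```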

Throughput Benchmarking

The two models were benchmarked using the Q4 quantisations neutts-air-Q4-0 and neutts-nano-Q4-0. CPU benchmarks were run through llama-bench (llama.cpp) to measure prefill and decode throughput at multiple context sizes.

For GPUs (specifically an RTX 4090), we leverage vLLM to maximise throughput, running the standard vLLM benchmark.

We include benchmarks on four devices: Galaxy A25 5G, AMD Ryzen 9 HX 370, iMac M4 16 GB, and NVIDIA GeForce RTX 4090.

|                               | NeuTTS Air     | NeuTTS Nano    |
|-------------------------------|----------------|----------------|
| Galaxy A25 5G (CPU only)      | 20 tokens/s    | 45 tokens/s    |
| AMD Ryzen 9 HX 370 (CPU only) | 119 tokens/s   | 221 tokens/s   |
| iMac M4 16 GB (CPU only)      | 111 tokens/s   | 195 tokens/s   |
| RTX 4090                      | 16194 tokens/s | 19268 tokens/s |

[!NOTE] llama-bench used 14 threads for prefill and 16 threads for decode (as configured in the benchmark run) on the AMD Ryzen 9 HX 370 and iMac M4 16GB, and 6 threads for each on the Galaxy A25 5G. The tokens/s figures are reported with 500 prefill tokens and 250 generated output tokens.

[!NOTE] Please note that these benchmarks only include the Speech Language Model and do not include the Codec which is needed for a full audio generation pipeline.
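To relate these throughputs to real-time playback: at the 50 Hz codec rate, decoding must sustain roughly 50 tokens per second of audio. A small sketch of the resulting real-time factor, assuming one codec token per frame and ignoring the codec's own cost (per the note above); the figures are taken from the table:

```python
# Real-time factor (RTF): tokens needed per second of audio (50 at the
# codec rate) divided by decode throughput. RTF < 1.0 means generation
# runs faster than playback.
CODEC_HZ = 50

decode_tps = {  # (device, model): tokens/s from the benchmark table
    ("Galaxy A25 5G", "Nano"): 45,
    ("AMD Ryzen 9 HX 370", "Nano"): 221,
    ("iMac M4 16GB", "Air"): 111,
}

rtf = {k: CODEC_HZ / tps for k, tps in decode_tps.items()}
for (device, model), r in sorted(rtf.items()):
    print(f"{device} / {model}: RTF ~ {r:.2f}")
```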

Get Started with NeuTTS

[!NOTE] We have added a streaming example using the llama-cpp-python library as well as a finetuning script. For finetuning, please refer to the finetune guide for more details.

  1. Install System Dependencies (required): espeak

    Please refer to the following link for instructions on how to install espeak:

    https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md

    # macOS
    brew install espeak-ng
    
    # Ubuntu/Debian
    sudo apt install espeak-ng
    
    # Windows install
    # via chocolatey (https://community.chocolatey.org/packages?page=1&prerelease=False&moderatorQueue=False&tags=espeak)
    choco install espeak-ng
    # via winget
    winget install -e --id eSpeak-NG.eSpeak-NG
    # via msi (need to add to PATH or follow the "Windows users who installed via msi" steps below)
    # find the msi at https://github.com/espeak-ng/espeak-ng/releases
    

    Windows users who installed via the msi, or who do not have their install on PATH, need to run the following (see https://github.com/bootphon/phonemizer/issues/163):

    $env:PHONEMIZER_ESPEAK_LIBRARY = "c:\Program Files\eSpeak NG\libespeak-ng.dll"
    $env:PHONEMIZER_ESPEAK_PATH = "c:\Program Files\eSpeak NG"
    setx PHONEMIZER_ESPEAK_LIBRARY "c:\Program Files\eSpeak NG\libespeak-ng.dll"
    setx PHONEMIZER_ESPEAK_PATH "c:\Program Files\eSpeak NG"
    
  2. Install NeuTTS

    pip install neutts
    

    Alternatively to get the full install (including onnx and llama-cpp extensions):

    pip install "neutts[all]"  # to get the onnx and llama-cpp dependencies
    

    Or local editable install:

    pip install -e .
    
  3. (Optional) Install Llama-cpp-python to use the GGUF models.

    pip install "neutts[llama]"
    

    Note that this installs llama-cpp without GPU support. To run llama-cpp with GPU support (e.g., CUDA, MPS) please refer to: https://pypi.org/project/llama-cpp-python/

  4. (Optional) Install onnxruntime to use the .onnx decoder.

    pip install "neutts[onnx]"
    

Examples

To get started with the example scripts, clone the repository and navigate into the project directory:

git clone https://github.com/neuphonic/neutts.git
cd neutts

Basic Example

Run the basic example script to synthesize speech:

python -m examples.basic_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_audio samples/jo.wav \
  --ref_text samples/jo.txt

To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS-Air and NeuTTS-Nano Hugging Face collections.

Several examples are available, including a Jupyter notebook in the examples folder.

One-Code Block Usage

from neutts import NeuTTS
import soundfile as sf

tts = NeuTTS(
    backbone_repo="neuphonic/neutts-nano",  # or 'neuphonic/neutts-nano-q4-gguf' with llama-cpp-python installed
    backbone_device="cpu",
    codec_repo="neuphonic/neucodec",
    codec_device="cpu",
)

input_text = "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all."

ref_text_path = "samples/jo.txt"
ref_audio_path = "samples/jo.wav"

# Read the reference transcript and encode the reference audio once
with open(ref_text_path, "r") as f:
    ref_text = f.read().strip()
ref_codes = tts.encode_reference(ref_audio_path)

wav = tts.infer(input_text, ref_codes, ref_text)
sf.write("test.wav", wav, 24000)  # 24 kHz output sample rate

Streaming

Speech can also be synthesised in streaming mode, where audio is generated in chunks and played back as it is generated. Note that this requires pyaudio to be installed. To do this, run:

python -m examples.basic_streaming_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_codes samples/jo.pt \
  --ref_text samples/jo.txt

Again, a particular model repo can be specified with the --backbone argument - note that for streaming the model must be in GGUF format.
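The streaming loop boils down to consuming audio chunks as the backbone yields them and pushing each to an output sink (a pyaudio stream in the bundled example). A minimal sketch of that pattern, with a stand-in generator in place of the real model — `fake_chunks` and the sink callback are hypothetical, not part of the neutts API:

```python
import numpy as np

def play_stream(chunks, sink):
    """Push each audio chunk to the sink as soon as it is produced.
    In the real streaming example, `sink` would be a pyaudio stream's
    write() and `chunks` the model's chunk generator."""
    total_samples = 0
    for chunk in chunks:
        sink(np.asarray(chunk, dtype=np.float32).tobytes())
        total_samples += len(chunk)
    return total_samples

def fake_chunks(n_chunks=5, chunk_len=1200):
    """Stand-in generator: 50 ms chunks at 24 kHz (illustrative sizes)."""
    for _ in range(n_chunks):
        yield np.zeros(chunk_len, dtype=np.float32)

buffered = []
total = play_stream(fake_chunks(), buffered.append)
print(total)  # 6000 samples = 0.25 s of audio at 24 kHz
```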

Preparing References for Cloning

NeuTTS requires two inputs:

  1. A reference audio sample (.wav file)
  2. A text string

The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS models' instant voice-cloning capability.

Example Reference Files

You can find some ready-to-use samples in the examples folder:

  • samples/dave.wav
  • samples/jo.wav

Guidelines for Best Results

For optimal performance, reference audio samples should be:

  1. Mono channel
  2. 16–44 kHz sample rate
  3. 3–15 seconds in length
  4. Saved as a .wav file
  5. Clean — minimal to no background noise
  6. Natural, continuous speech — like a monologue or conversation, with few pauses, so the model can capture tone effectively
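Checks 1–5 above can be automated before encoding a reference. A small stdlib-only sketch (`check_reference` is a hypothetical helper, not part of the neutts API):

```python
import wave

def check_reference(path):
    """Return a list of guideline violations for a reference .wav
    (mono, 16-44 kHz, 3-15 s); an empty list means it looks usable."""
    issues = []
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        duration = w.getnframes() / rate
        if w.getnchannels() != 1:
            issues.append("not mono")
        if not 16_000 <= rate <= 44_100:
            issues.append(f"sample rate {rate} Hz outside 16-44 kHz")
        if not 3.0 <= duration <= 15.0:
            issues.append(f"duration {duration:.1f} s outside 3-15 s")
    return issues
```

Noise and naturalness (points 5–6) still need a listen; this only covers the mechanical checks.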

Guidelines for Minimising Latency

For optimal performance on-device:

  1. Use the GGUF model backbones
  2. Pre-encode references
  3. Use the onnx codec decoder

Take a look at the examples README to get started.
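Point 2 above, pre-encoding references, amounts to running encode_reference once and persisting the result, so the codec encoder never runs on the hot path (the bundled samples/jo.pt used by the streaming example is a pre-encoded reference of this kind). A sketch of the save/load round trip; the random tensor stands in for real codes, and its shape is illustrative rather than the actual neutts output:

```python
import torch

# Stand-in for: codes = tts.encode_reference("samples/jo.wav")
# (the real codes object comes from the NeuCodec encoder).
codes = torch.randint(0, 65_536, (1, 500))

torch.save(codes, "jo_codes.pt")  # do this once, offline

# At inference time, load the cached codes instead of re-encoding:
cached = torch.load("jo_codes.pt")
```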

Responsibility

Every audio file generated by NeuTTS includes the Perth (Perceptual Threshold) Watermarker.

Disclaimer

Don't use this model to do bad things… please.

Developer Requirements

To set up the pre-commit hooks for contributing to this project, run:

pip install pre-commit

Then:

pre-commit install

Running Tests

First, install the dev requirements:

pip install -r requirements-dev.txt

To run the tests:

pytest tests/
