NeuTTS - a package for text-to-speech generation using Neuphonic TTS models.
NeuTTS
Created by Neuphonic - building faster, smaller, on-device voice AI
State-of-the-art voice AI has been locked behind web APIs for too long. NeuTTS is a collection of open-source, on-device TTS speech language models with instant voice cloning. Built on LLM backbones, NeuTTS brings natural-sounding speech, real-time performance, built-in security, and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.
Key Features
- 🗣 Best-in-class realism for their size - produce natural, ultra-realistic voices that sound human, at the sweet spot between speed, size, and quality for real-world applications
- 📱 Optimised for on-device deployment - provided in GGML format, ready to run on phones, laptops, or even Raspberry Pis
- 👫 Instant voice cloning - create your own speaker with as little as 3 seconds of audio
- 🚄 Simple LM + codec architecture - making development and deployment simple
[!CAUTION] Websites like neutts.com are popping up, and they are not affiliated with Neuphonic, our GitHub, or this repo.
We are on neuphonic.com only. Please be careful out there! 🙏
Model Details
NeuTTS models are built from small LLM backbones - lightweight yet capable language models optimised for text understanding and generation - as well as a powerful combination of technologies designed for efficiency and quality:
- Supported Languages: English
- Audio Codec: NeuCodec - our 50 Hz neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
- Context Window: 2048 tokens, enough for processing ~30 seconds of audio (including prompt duration)
- Format: Available in GGML format for efficient on-device inference
- Responsibility: Watermarked outputs
- Inference Speed: Real-time generation on mid-range devices
- Power Consumption: Optimised for mobile and embedded devices
| | NeuTTS Air | NeuTTS Nano |
|---|---|---|
| # Params (Active) | ~360M | ~120M |
| # Params (Emb + Active) | ~552M | ~229M |
| Cloning | Yes | Yes |
| License | Apache 2.0 | NeuTTS Open License 1.0 |
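To make the table above concrete, a quick back-of-the-envelope calculation (not an official figure) shows the embedding-only parameter count and a rough Q4 weight footprint, assuming roughly 4.5 bits per weight after quantisation overhead:

```python
# Rough breakdown derived from the parameter table above. The Q4 size is a
# back-of-the-envelope estimate (~4.5 bits/weight incl. overhead), not a
# measured download size.
def breakdown(active_m, total_m):
    emb_m = total_m - active_m       # embedding-only parameters (millions)
    q4_mb = total_m * 4.5 / 8        # millions of params * bytes/param -> ~MB
    return emb_m, round(q4_mb, 1)

print(breakdown(360, 552))  # NeuTTS Air
print(breakdown(120, 229))  # NeuTTS Nano
```

So roughly a third to a half of each model's total parameters are embeddings, which is typical for small LLM backbones with large vocabularies.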
Throughput Benchmarking
The two models were benchmarked using the Q4 quantisations neutts-air-Q4-0 and neutts-nano-Q4-0. CPU benchmarks were run through llama-bench (llama.cpp) to measure prefill and decode throughput at multiple context sizes.
For GPUs (specifically the RTX 4090), we leverage vLLM to maximise throughput. We run benchmarks using the vLLM benchmark.
We include benchmarks on four devices: Galaxy A25 5G, AMD Ryzen 9 HX 370, iMac M4 16GB, and NVIDIA GeForce RTX 4090.
| | NeuTTS Air | NeuTTS Nano |
|---|---|---|
| Galaxy A25 5G (CPU only) | 20 tokens/s | 45 tokens/s |
| AMD Ryzen 9 HX 370 (CPU only) | 119 tokens/s | 221 tokens/s |
| iMac M4 16 GB (CPU only) | 111 tokens/s | 195 tokens/s |
| RTX 4090 | 16194 tokens/s | 19268 tokens/s |
[!NOTE] llama-bench used 14 threads for prefill and 16 threads for decode (as configured in the benchmark run) on the AMD Ryzen 9 HX 370 and iMac M4 16GB, and 6 threads for each on the Galaxy A25 5G. The tokens/s figures are reported with 500 prefill tokens and 250 generated output tokens.
[!NOTE] Please note that these benchmarks only include the Speech Language Model and do not include the Codec which is needed for a full audio generation pipeline.
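Since NeuCodec is a 50 Hz single-codebook codec, one second of audio corresponds to roughly 50 speech tokens, so dividing decode throughput by 50 gives an approximate real-time factor for the speech LM alone (excluding the codec). This is an interpretation of the numbers above, not an official metric:

```python
# Approximate real-time factor (RTF), assuming ~50 speech tokens per second
# of audio (50 Hz codec, single codebook). RTF > 1 means the speech LM
# generates tokens faster than real time; the codec adds further cost.
TOKENS_PER_AUDIO_SECOND = 50

decode_speeds = {
    ("Galaxy A25 5G", "NeuTTS Nano"): 45,
    ("AMD Ryzen 9 HX 370", "NeuTTS Nano"): 221,
    ("iMac M4 16 GB", "NeuTTS Air"): 111,
}

for (device, model), toks_per_s in decode_speeds.items():
    rtf = toks_per_s / TOKENS_PER_AUDIO_SECOND
    print(f"{device} / {model}: ~{rtf:.2f}x real time")
```

Under this assumption, the desktop and laptop CPUs run comfortably faster than real time, while the Galaxy A25 5G with Nano sits just below it.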
Get Started with NeuTTS
[!NOTE] We have added a streaming example using the `llama-cpp-python` library, as well as a finetuning script. For finetuning, please refer to the finetune guide for more details.
- Clone the Git repo

  ```sh
  git clone https://github.com/neuphonic/neutts.git
  cd neutts
  ```
- Install `espeak` (required dependency)

  Please refer to the following link for instructions on how to install `espeak`: https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md

  ```sh
  # macOS
  brew install espeak-ng

  # Ubuntu/Debian
  sudo apt install espeak-ng

  # Windows: via chocolatey (https://community.chocolatey.org/packages?page=1&prerelease=False&moderatorQueue=False&tags=espeak)
  choco install espeak-ng

  # Windows: via winget
  winget install -e --id eSpeak-NG.eSpeak-NG

  # Windows: via MSI (needs to be added to PATH, or follow "Windows users who installed via MSI" below)
  # find the MSI at https://github.com/espeak-ng/espeak-ng/releases
  ```

  Windows users who installed via the MSI, or who do not have their install on PATH, need to run the following (see https://github.com/bootphon/phonemizer/issues/163):

  ```powershell
  $env:PHONEMIZER_ESPEAK_LIBRARY = "c:\Program Files\eSpeak NG\libespeak-ng.dll"
  $env:PHONEMIZER_ESPEAK_PATH = "c:\Program Files\eSpeak NG"
  setx PHONEMIZER_ESPEAK_LIBRARY "c:\Program Files\eSpeak NG\libespeak-ng.dll"
  setx PHONEMIZER_ESPEAK_PATH "c:\Program Files\eSpeak NG"
  ```
- Install NeuTTS

  ```sh
  pip install neutts
  ```

  Alternatively:

  ```sh
  pip install neutts[all]  # to get the onnx and llama-cpp dependencies
  ```
- (Optional) Install llama-cpp-python to use the `GGUF` models

  ```sh
  pip install "neutts[llama]"
  ```

  To run llama-cpp with GPU (CUDA, MPS) support, please refer to: https://pypi.org/project/llama-cpp-python/
- (Optional) Install onnxruntime to use the `.onnx` decoder

  ```sh
  pip install "neutts[onnx]"
  ```
Running the Model
Run the basic example script to synthesize speech:
```sh
python -m examples.basic_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_audio samples/jo.wav \
  --ref_text samples/jo.txt
```
To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS-Air and NeuTTS-Nano Hugging Face collections.
Several examples are available, including a Jupyter notebook in the examples folder.
One-Code Block Usage
```python
from neutts import NeuTTS
import soundfile as sf

tts = NeuTTS(
    backbone_repo="neuphonic/neutts-nano",  # or 'neuphonic/neutts-nano-q4-gguf' with llama-cpp-python installed
    backbone_device="cpu",
    codec_repo="neuphonic/neucodec",
    codec_device="cpu",
)

input_text = "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all."
ref_audio_path = "samples/jo.wav"

with open("samples/jo.txt") as f:
    ref_text = f.read().strip()

ref_codes = tts.encode_reference(ref_audio_path)
wav = tts.infer(input_text, ref_codes, ref_text)

sf.write("test.wav", wav, 24000)
```
Streaming
Speech can also be synthesised in streaming mode, where audio is generated in chunks and played back as it is generated. Note that this requires pyaudio to be installed. To do this, run:
```sh
python -m examples.basic_streaming_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_codes samples/jo.pt \
  --ref_text samples/jo.txt
```
Again, a particular model repo can be specified with the --backbone argument - note that for streaming the model must be in GGUF format.
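The general pattern behind streaming can be sketched with only the standard library: a producer yields short audio chunks and the consumer handles each one as soon as it arrives, instead of waiting for the full waveform. In the real example the chunks come from the GGUF backbone via llama-cpp-python and go to a pyaudio output stream; here the chunks are faked (silence) and appended to a `.wav` file, so `fake_tts_chunks` is purely illustrative:

```python
# Stdlib-only sketch of the streaming pattern (not the neutts API itself).
import struct
import wave


def fake_tts_chunks(n_chunks=5, chunk_samples=2400):
    """Stand-in for a streaming TTS generator; yields 0.1 s of 16-bit silence."""
    for _ in range(n_chunks):
        yield struct.pack(f"<{chunk_samples}h", *([0] * chunk_samples))


with wave.open("stream_demo.wav", "wb") as out:
    out.setnchannels(1)      # mono
    out.setsampwidth(2)      # 16-bit PCM
    out.setframerate(24000)  # NeuTTS output sample rate
    for chunk in fake_tts_chunks():
        # each chunk is consumed (played/written) before generation finishes
        out.writeframes(chunk)
```

The key point is that the consumer loop starts doing useful work after the first chunk, which is what gives streaming its low time-to-first-audio.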
Preparing References for Cloning
NeuTTS requires two inputs:
- A reference audio sample (a `.wav` file)
- A text string
The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS models' instant voice cloning capability.
Example Reference Files
You can find some ready-to-use samples in the examples folder:
- samples/dave.wav
- samples/jo.wav
Guidelines for Best Results
For optimal performance, reference audio samples should be:
- Mono channel
- 16-44 kHz sample rate
- 3–15 seconds in length
- Saved as a `.wav` file
- Clean, with minimal to no background noise
- Natural, continuous speech — like a monologue or conversation, with few pauses, so the model can capture tone effectively
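The mechanical checks in the guidelines above (channels, sample rate, duration) can be verified before cloning. The helper below is not part of the neutts package; it is a small sketch using only the stdlib `wave` module, and it treats the stated 44 kHz upper bound as 44.1 kHz since that is the common rate:

```python
# Hypothetical helper (not in neutts) that checks a reference clip against
# the guidelines above. Noise and speech naturalness cannot be checked this way.
import wave


def check_reference(path):
    """Return a list of guideline violations for a .wav reference clip."""
    problems = []
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        duration = w.getnframes() / rate
        if w.getnchannels() != 1:
            problems.append("not mono")
        if not 16_000 <= rate <= 44_100:  # "16-44 kHz", allowing 44.1 kHz
            problems.append(f"sample rate {rate} Hz outside 16-44 kHz")
        if not 3 <= duration <= 15:
            problems.append(f"duration {duration:.1f} s outside 3-15 s")
    return problems
```

For a good clip such as `samples/jo.wav`, `check_reference` should return an empty list.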
Guidelines for Minimising Latency
For optimal performance on-device:
- Use the GGUF model backbones
- Pre-encode references
- Use the onnx codec decoder
Take a look at the examples README to get started.
Responsibility
Every audio file generated by NeuTTS includes a Perth (Perceptual Threshold) watermark.
Disclaimer
Don't use this model to do bad things… please.
Developer Requirements
To contribute to this project, install and run the pre-commit hooks:

```sh
pip install pre-commit
```

Then:

```sh
pre-commit install
```
Running Tests
First, install the dev requirements:

```sh
pip install -r requirements-dev.txt
```

To run the tests:

```sh
pytest tests/
```
File details
Details for the file neutts-0.1.2.tar.gz.
File metadata
- Download URL: neutts-0.1.2.tar.gz
- Upload date:
- Size: 19.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | c120460c4d7c7cdd9a28ab53c17f52875fd5f194452539eaef36aeaeaa4a81f7 |
| MD5 | ab095e0a8b07ed13355ddaf762eb4d15 |
| BLAKE2b-256 | 9aa21d8943562e1743ccfabb2fb3a693ccc4eef33a3371f0193a80750fbc7d47 |
File details
Details for the file neutts-0.1.2-py3-none-any.whl.
File metadata
- Download URL: neutts-0.1.2-py3-none-any.whl
- Upload date:
- Size: 14.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | be3011e763b529d2a708f8385b33c33953fdaafb2b8339aca3d716e15648234d |
| MD5 | e00133b2df08dfc86be3a9bbf93bf321 |
| BLAKE2b-256 | 4665bdf2fadf124d2b4453bb085dc833dcd288c6186f7c4fd391761ee1cd4410 |