MLX-TADA

TADA speech synthesis on Apple Silicon via MLX.
Also available on PyPI: pip install mlx-tada
Setup
cd apple
uv venv
uv pip install -e .
For auto-transcription of reference audio (optional):
uv pip install mlx-whisper
Weights
Download a reference audio clip:
curl -O "https://storage.googleapis.com/hume_reference_speakers/ljspeech.wav"
Pre-converted weights are downloaded and cached automatically. You still need gated access to Llama 3.2 for the tokenizer:
from mlx_tada import TadaForCausalLM, save_wav
model = TadaForCausalLM.from_pretrained("HumeAI/mlx-tada-3b", quantize=4)
ref = model.load_reference("ljspeech.wav")
out = model.generate("Hello, this is a test of TADA speech synthesis.", ref)
save_wav(out.audio, "output.wav")
Available models:
- HumeAI/mlx-tada-1b — 1B English-only (~4.3 GB)
- HumeAI/mlx-tada-3b — 3B multilingual (~8.9 GB)
Offline Use
To download weights locally for offline inference:
from huggingface_hub import snapshot_download
snapshot_download("HumeAI/mlx-tada-3b", local_dir="./weights/3b")
Then load from the local path:
model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)
Generate Speech
CLI
uv run python -m mlx_tada.generate \
--weights ./weights/3b \
--audio ljspeech.wav \
--text "The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think." \
--output output.wav
With 4-bit quantization (10x faster, 60% less memory):
uv run python -m mlx_tada.generate \
--weights ./weights/3b \
--audio ljspeech.wav \
--text "The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think." \
--quantize 4 \
--output output.wav
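The memory saving is roughly what you'd expect from shrinking 16-bit weights to 4-bit ones. A back-of-envelope sketch (the group size and the assumption that some layers stay unquantized are mine, which is also why the observed ~60% is lower than the raw weight ratio):

```python
# Rough memory estimate for 4-bit quantization of a 3B-parameter model.
# Assumptions: one 16-bit scale per group of 64 quantized values, and
# some tensors (e.g. embeddings) left at full precision in practice.
params = 3e9
bf16_bytes = params * 2                          # 16-bit baseline, ~6.0 GB
group = 64
q4_bytes = params * 0.5 + (params / group) * 2   # 4-bit payload + scales, ~1.6 GB

print(f"bf16:  {bf16_bytes / 1e9:.1f} GB")
print(f"4-bit: {q4_bytes / 1e9:.1f} GB")
print(f"weight saving: {1 - q4_bytes / bf16_bytes:.0%}")
```

The pure weight saving comes out near 73%; runtime state and unquantized tensors pull the end-to-end figure down toward the ~60% quoted above.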
Python
from mlx_tada import TadaForCausalLM, save_wav
model = TadaForCausalLM.from_pretrained("HumeAI/mlx-tada-3b", quantize=4)
ref = model.load_reference("ljspeech.wav")
out = model.generate("The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think.", ref)
save_wav(out.audio, "output.wav")
# out.audio      - numpy float32 array (24 kHz)
# out.duration   - audio duration in seconds
# out.rtf        - real-time factor
# out.num_tokens - number of generated tokens
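Assuming the usual TTS convention, the real-time factor is wall-clock generation time divided by the duration of the audio produced, so values below 1.0 mean faster than real time. A hypothetical illustration of how the fields relate (`gen_seconds` is an illustrative number, not a real field):

```python
# Made-up numbers showing the relationship between generate() output fields.
num_samples = 72_000                   # e.g. len(out.audio)
sample_rate = 24_000                   # TADA outputs 24 kHz audio
duration = num_samples / sample_rate   # -> out.duration (3.0 s)

gen_seconds = 1.2                      # measured wall-clock time for generate()
rtf = gen_seconds / duration           # -> out.rtf (0.4, i.e. 2.5x real time)
print(duration, rtf)
```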
Inference Options
Control generation behavior with InferenceOptions:
from mlx_tada import TadaForCausalLM, InferenceOptions, save_wav
model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)
ref = model.load_reference("ljspeech.wav")
opts = InferenceOptions(
acoustic_cfg_scale=1.6,
duration_cfg_scale=1.0,
num_flow_matching_steps=10,
time_schedule="logsnr",
cfg_schedule="cosine",
)
out = model.generate(text="Hello world, today is a nice day.", reference=ref, inference_options=opts)
save_wav(out.audio, "output.wav")
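cfg_schedule and time_schedule control how guidance strength and timesteps are distributed across the num_flow_matching_steps. The library's exact formulas aren't shown here, so the sketch below is one plausible reading of a cosine CFG schedule — ramping from 1.0 (unguided) up to acoustic_cfg_scale with a half-cosine — not mlx-tada's actual code:

```python
import math

def cosine_cfg_schedule(cfg_scale, num_steps):
    """One plausible cosine CFG ramp (an assumption, not mlx-tada's code):
    interpolate from 1.0 (no guidance) to cfg_scale with a half-cosine."""
    sched = []
    for i in range(num_steps):
        t = i / (num_steps - 1)                 # 0 -> 1 over the steps
        w = 0.5 * (1 - math.cos(math.pi * t))   # smooth 0 -> 1 ramp
        sched.append(1.0 + w * (cfg_scale - 1.0))
    return sched

sched = cosine_cfg_schedule(1.6, 10)
print([round(s, 3) for s in sched])  # starts at 1.0, ends at 1.6
```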
The following inference options from the PyTorch version are not currently supported in MLX:
- speed_up_factor
- num_acoustic_candidates
- scorer
- negative_condition_source
- text_only_logit_scale
- spkr_verification_weight
Speech Continuation
Use num_extra_steps to let the model generate speech beyond the provided text. The model continues speaking naturally and stops when it produces an end-of-sequence token:
from mlx_tada import TadaForCausalLM, InferenceOptions, save_wav
model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)
ref = model.load_reference("ljspeech.wav")
opts = InferenceOptions(
acoustic_cfg_scale=1.6,
num_flow_matching_steps=10,
time_schedule="logsnr",
)
out = model.generate(
text="The history of artificial intelligence is a fascinating journey that spans decades of research and innovation.",
reference=ref,
inference_options=opts,
num_extra_steps=50,
)
save_wav(out.audio, "output.wav")
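Conceptually, num_extra_steps bounds how far generation may run past the text-conditioned portion before an end-of-sequence token stops it. A toy sketch of that control flow (the token values, the EOS id, and the sampler are all made up):

```python
EOS = -1  # hypothetical end-of-sequence token id

def continue_generation(text_tokens, num_extra_steps, sample_step):
    """Emit the text-conditioned tokens, then keep sampling until EOS
    or until num_extra_steps extra tokens have been produced."""
    out = list(text_tokens)
    for _ in range(num_extra_steps):
        tok = sample_step(out)
        if tok == EOS:
            break
        out.append(tok)
    return out

# Toy sampler: emits the current length, then EOS once 10 tokens exist.
step = lambda ctx: EOS if len(ctx) >= 10 else len(ctx)
tokens = continue_generation([101, 102, 103], 50, step)
print(tokens)  # [101, 102, 103, 3, 4, 5, 6, 7, 8, 9]
```

Note that the budget of 50 extra steps is a ceiling, not a target: the EOS stopped this run after seven.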
Save and Reuse References
from mlx_tada import Reference
ref = model.load_reference("ljspeech.wav")
ref.save("speaker.npz")
ref = Reference.load("speaker.npz")
out = model.generate("Reusing the same voice.", ref)
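A .npz file is just a zip archive of named NumPy arrays, so the save/load round trip can be sketched with NumPy alone (the field names actually stored by Reference are an assumption):

```python
import io
import numpy as np

# Hypothetical contents of a saved reference: a speaker embedding array.
embedding = np.arange(8, dtype=np.float32)

buf = io.BytesIO()  # stands in for "speaker.npz" on disk
np.savez(buf, embedding=embedding, sample_rate=np.int32(24000))
buf.seek(0)

data = np.load(buf)  # arrays come back by the names used at save time
assert np.array_equal(data["embedding"], embedding)
print(int(data["sample_rate"]))  # 24000
```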
Save Audio
from mlx_tada import save_wav
save_wav(out.audio, "output.wav")
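The same result can be approximated with the stdlib wave module: clamp the float32 samples, scale to 16-bit PCM, and write a mono 24 kHz file (a sketch; save_wav's exact dtype and clipping behavior is an assumption):

```python
import math
import struct
import wave

def save_wav_sketch(samples, path, sample_rate=24_000):
    """Write mono float samples in [-1, 1] as 16-bit PCM."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit
        f.setframerate(sample_rate)
        clamped = (max(-1.0, min(1.0, s)) for s in samples)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in clamped)
        f.writeframes(frames)

# 0.1 s of a 440 Hz sine tone as stand-in audio.
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 24_000) for n in range(2_400)]
save_wav_sketch(tone, "tone.wav")
```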
Debug Logging
DEBUG=1 uv run python -m mlx_tada.generate \
--weights ./weights/3b \
--audio ljspeech.wav \
--text "Hello"
Or, from Python:
from mlx_tada import setup_logging
setup_logging()
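A minimal equivalent of an env-gated logging setup (my sketch, not the library's implementation of setup_logging):

```python
import logging
import os

def setup_logging_sketch():
    """Enable DEBUG-level logs when DEBUG=1 is set, INFO otherwise."""
    level = logging.DEBUG if os.environ.get("DEBUG") == "1" else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    return level

os.environ["DEBUG"] = "1"
print(setup_logging_sketch() == logging.DEBUG)  # True
```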
Running Tests
MLX_WEIGHTS=./weights/1b uv run pytest tests/ -v
File details

Details for the file mlx_tada-0.1.1.tar.gz.

File metadata
- Download URL: mlx_tada-0.1.1.tar.gz
- Size: 29.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ee3a9769385efd93ddb539d602ec7a04e06916f7257fcc82e2f056cccf17d243 |
| MD5 | c674068a510e2328cbf851e248b8a367 |
| BLAKE2b-256 | ce23f3e1c3d1d214a1efcbd60a441a8faee20dc8b9d86e93841112fcea882207 |
File details

Details for the file mlx_tada-0.1.1-py3-none-any.whl.

File metadata
- Download URL: mlx_tada-0.1.1-py3-none-any.whl
- Size: 30.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 21419b9a1223271d6318f5aef61b21b8b1e3a3750f1a1fdcc82beeb840cfa235 |
| MD5 | 2c3ee25b2721476a4e4e46f54e7a107b |
| BLAKE2b-256 | 7b921ad386de5054cf075f66cacfeffc3605b8580874eee7adfbe39a9e86767f |