
MLX-TADA

TADA speech synthesis on Apple Silicon via MLX.

Installation

pip install mlx-tada

For auto-transcription of reference audio (optional):

pip install mlx-whisper

To convert weights yourself (requires gated Llama access):

pip install "mlx-tada[convert]"

Quick Start

Pre-converted weights from Hugging Face (recommended)

No gated model access required. Weights are downloaded and cached automatically:

from mlx_tada import TadaForCausalLM, save_wav

model = TadaForCausalLM.from_pretrained("HumeAI/mlx-tada-3b", quantize=4)
ref = model.load_reference("speaker.wav")
out = model.generate("Hello, this is a test of TADA speech synthesis.", ref)
save_wav(out.audio, "output.wav")

Available models: pre-converted 3B weights are published as HumeAI/mlx-tada-3b; a 1B variant can be converted locally (see below).

Convert weights yourself (alternative)

Requires a Hugging Face account with access to the gated Llama 3.2 models.

pip install "mlx-tada[convert]"
huggingface-cli login

# 3B model
python -m mlx_tada.convert_3b ./weights/3b

# 1B model
python -m mlx_tada.convert_1b ./weights/1b

Then load from the local path:

from mlx_tada import TadaForCausalLM, save_wav

model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)

Generate Speech

CLI

python -m mlx_tada.generate \
  --weights ./weights/3b \
  --audio speaker.wav \
  --text "The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think." \
  --output output.wav

With 4-bit quantization (10x faster, 60% less memory):

python -m mlx_tada.generate \
  --weights ./weights/3b \
  --audio speaker.wav \
  --text "The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think." \
  --quantize 4 \
  --output output.wav
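For intuition on where the memory savings come from: 4-bit quantization stores each weight as a 4-bit integer plus a shared scale. The following is a generic symmetric-quantization round trip, not MLX's exact scheme (which uses per-group scales and biases):

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.7, -0.35, 0.1], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)  # approximate reconstruction of w
```

The reconstruction error is bounded by about half the scale, which is why quantization trades a small amount of quality for large speed and memory wins.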

Python

from mlx_tada import TadaForCausalLM, save_wav

model = TadaForCausalLM.from_pretrained("HumeAI/mlx-tada-3b", quantize=4)
ref = model.load_reference("speaker.wav")
out = model.generate("The history of artificial intelligence is a fascinating journey that spans decades of research and innovation. It all began in the 1950s when pioneers like Alan Turing first posed the question of whether machines could think.", ref)
save_wav(out.audio, "output.wav")

# out.audio     - numpy float32 array (24kHz)
# out.duration  - audio duration in seconds
# out.rtf       - real-time factor
# out.num_tokens - number of generated tokens
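The real-time factor is commonly defined as generation time divided by audio duration, so values below 1.0 mean faster-than-real-time synthesis. A small sketch using a stand-in for the result object above (the `Result` class here is illustrative, not part of mlx_tada):

```python
from dataclasses import dataclass

@dataclass
class Result:
    """Stand-in for the generate() output fields listed above."""
    duration: float  # audio duration in seconds
    rtf: float       # real-time factor = generation_time / duration

out = Result(duration=8.0, rtf=0.25)
generation_time = out.duration * out.rtf  # seconds spent generating 8 s of audio
```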

Inference Options

Control generation behavior with InferenceOptions:

from mlx_tada import TadaForCausalLM, InferenceOptions, save_wav

model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)
ref = model.load_reference("speaker.wav")

opts = InferenceOptions(
    acoustic_cfg_scale=1.6,
    duration_cfg_scale=1.0,
    num_flow_matching_steps=10,
    time_schedule="logsnr",
    cfg_schedule="cosine",
)

out = model.generate(text="Hello world, today is a nice day.", reference=ref, inference_options=opts)
save_wav(out.audio, "output.wav")
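For context on the `*_cfg_scale` options: classifier-free guidance extrapolates from an unconditional model prediction toward the conditional one, with the scale as the guidance weight. A generic sketch of that combination rule (not mlx-tada's exact internals):

```python
import numpy as np

def cfg_combine(cond: np.ndarray, uncond: np.ndarray, scale: float) -> np.ndarray:
    """Standard classifier-free guidance: extrapolate from the
    unconditional prediction toward the conditional one."""
    return uncond + scale * (cond - uncond)

# scale = 1.0 reproduces the conditional prediction exactly;
# scale > 1.0 pushes further in the conditional direction.
guided = cfg_combine(np.array([2.0]), np.array([1.0]), scale=1.6)
```

Larger scales adhere more strongly to the conditioning (text/reference) at the cost of naturalness, which is why moderate values like 1.6 are used.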

The following inference options from the PyTorch version are not currently supported in MLX:

  • speed_up_factor
  • num_acoustic_candidates
  • scorer
  • negative_condition_source
  • text_only_logit_scale
  • spkr_verification_weight

Speech Continuation

Use num_extra_steps to let the model generate speech beyond the provided text. The model continues speaking naturally and stops when it produces an end-of-sequence token:

from mlx_tada import TadaForCausalLM, InferenceOptions, save_wav

model = TadaForCausalLM.from_weights("./weights/3b", quantize=4)
ref = model.load_reference("speaker.wav")

opts = InferenceOptions(
    acoustic_cfg_scale=1.6,
    num_flow_matching_steps=10,
    time_schedule="logsnr",
)

out = model.generate(
    text="The history of artificial intelligence is a fascinating journey that spans decades of research and innovation.",
    reference=ref,
    inference_options=opts,
    num_extra_steps=50,
)
save_wav(out.audio, "output.wav")
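Conceptually, continuation keeps sampling past the text-aligned steps until either an end-of-sequence token appears or the extra-step budget is spent. A toy sketch of that stopping rule (the sampler and EOS handling here are illustrative of the general autoregressive pattern, not mlx-tada internals):

```python
def continue_generation(next_token, eos_token: int, num_extra_steps: int) -> list:
    """Sample up to num_extra_steps extra tokens, stopping at EOS."""
    tokens = []
    for _ in range(num_extra_steps):
        tok = next_token()
        if tok == eos_token:
            break
        tokens.append(tok)
    return tokens

# Toy sampler that emits 1, 2, then EOS (0).
stream = iter([1, 2, 0, 3])
extra = continue_generation(lambda: next(stream), eos_token=0, num_extra_steps=50)
```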

Save and Reuse References

from mlx_tada import Reference

ref = model.load_reference("speaker.wav")
ref.save("speaker.npz")

ref = Reference.load("speaker.npz")
out = model.generate("Reusing the same voice.", ref)
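The `.npz` extension suggests the reference is stored as a NumPy archive of arrays. A generic round trip with `np.savez`/`np.load` (the `embedding` field name is an assumption for illustration, not the actual archive layout):

```python
import os
import tempfile
import numpy as np

# A reference is presumably a bundle of arrays; np.savez round-trips them losslessly.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(16).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "speaker_demo.npz")
np.savez(path, embedding=embedding)

restored = np.load(path)["embedding"]  # bit-exact reconstruction
```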

Save Audio

from mlx_tada import save_wav
save_wav(out.audio, "output.wav")
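If you ever need to write the 24 kHz float32 array without the helper, a minimal equivalent using only the standard-library `wave` module might look like this (a sketch; `save_wav`'s actual implementation may differ):

```python
import wave
import numpy as np

def write_wav(audio: np.ndarray, path: str, sample_rate: int = 24_000) -> None:
    """Write a float32 array in [-1, 1] as mono 16-bit PCM WAV."""
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())
```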

Debug Logging

Enable verbose logging from the CLI with the DEBUG environment variable:

DEBUG=1 python -m mlx_tada.generate \
  --weights ./weights/3b \
  --audio speaker.wav \
  --text "Hello"

Or enable it from Python:

from mlx_tada import setup_logging

setup_logging()

Running Tests

MLX_WEIGHTS=./weights/1b pytest tests/ -v
