
kokoro-mlx

License: MIT · Platform: macOS (Apple Silicon) · Python 3.10–3.12

Kokoro TTS inference on Apple Silicon via MLX.

An MLX implementation of the Kokoro-82M text-to-speech pipeline, with no PyTorch or transformers dependency.

This package provides inference code only. Model weights are developed by hexgrad under the Apache 2.0 license and downloaded separately from HuggingFace Hub on first use.


Quick Start

Apple Silicon required. Python 3.10–3.12, MLX 0.31+.

pip install kokoro-mlx

from kokoro_mlx import KokoroTTS

tts = KokoroTTS.from_pretrained()
tts.speak("Hello, world.")

Model weights download automatically from HuggingFace Hub on first use.


Features

  • On-device via MLX. No server, no network during inference.
  • No PyTorch or transformers dependency.
  • 48 kHz output from native 24 kHz via FFT upsampling.
  • Mixed-precision vocoder: bf16 through the network, float32 for waveform reconstruction.
  • Gapless streaming over a single persistent audio stream.
  • 54 voices across American English, British English, and additional languages.
  • Language-aware G2P inferred from the voice prefix, with explicit language override.
  • WAV export in one call.
  • Thread-safe with internal lock for concurrent callers.
  • Context manager for resource cleanup.
  • Speed control via a single multiplier.

API

KokoroTTS.from_pretrained(model_id_or_path)

Load a model from a local directory or the HuggingFace Hub.

tts = KokoroTTS.from_pretrained()
# or a specific repo
tts = KokoroTTS.from_pretrained("mlx-community/Kokoro-82M-bf16")
# or a local directory
tts = KokoroTTS.from_pretrained("/path/to/model")

tts.generate(text, voice, speed, sample_rate, language) -> TTSResult

Synthesize text and return a TTSResult.

Parameter     Type         Default      Description
text          str          required     Input text to synthesize
voice         str          "af_heart"   Voice name (see Available Voices)
speed         float        1.0          Speaking rate multiplier (>1 faster, <1 slower)
sample_rate   int          24000        Output sample rate: 24000 (native) or 48000 (2x upsampled)
language      str or None  None         Optional G2P language code/name; None infers from the voice prefix
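
As a rough illustration of generate()'s contract, the sketch below pairs a consistency check (duration should match len(audio) / sample_rate) with a call that is only defined, not executed, since it needs an Apple Silicon Mac. The check_result() helper is ours, not part of the package:

```python
import numpy as np

def check_result(audio: np.ndarray, sample_rate: int, duration: float) -> bool:
    """Verify that the reported duration matches the audio length at the given rate."""
    return abs(len(audio) / sample_rate - duration) < 0.05

def demo() -> None:
    # Imported lazily: requires kokoro-mlx on an Apple Silicon Mac.
    from kokoro_mlx import KokoroTTS
    tts = KokoroTTS.from_pretrained()
    result = tts.generate("A short test.", voice="af_heart",
                          speed=1.0, sample_rate=48000)
    assert check_result(result.audio, result.sample_rate, result.duration)
```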

tts.generate_stream(text, voice, speed, sample_rate, language) -> Iterator[np.ndarray]

Synthesize text and yield audio chunks sentence by sentence. Lower latency than generate for longer inputs.
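
The iterator contract can be sketched as follows; here a stand-in iterator of numpy chunks replaces the real stream so the pattern is runnable anywhere, and collect_chunks() is an illustrative helper, not a package function:

```python
import numpy as np
from typing import Iterator

def collect_chunks(chunks: Iterator[np.ndarray]) -> np.ndarray:
    """Concatenate streamed audio chunks into one contiguous float32 array."""
    parts = [np.asarray(c, dtype=np.float32) for c in chunks]
    return np.concatenate(parts) if parts else np.zeros(0, dtype=np.float32)

# With the real API this would be:
#   audio = collect_chunks(tts.generate_stream("Long text...", voice="af_heart"))
fake_stream = iter([np.ones(240, dtype=np.float32), np.zeros(120, dtype=np.float32)])
full = collect_chunks(fake_stream)
```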

tts.speak(text, voice, speed, stream, stop_event, sample_rate, language)

Synthesize and immediately play text through the speakers.

Parameter     Type                      Default      Description
text          str                       required     Input text to synthesize
voice         str                       "af_heart"   Voice name
speed         float                     1.0          Speaking rate multiplier
stream        bool                      False        Play chunk-by-chunk for lower latency
stop_event    threading.Event or None   None         Set to interrupt playback
sample_rate   int                       24000        Output sample rate: 24000 or 48000
language      str or None               None         Optional G2P language code/name; None infers from the voice prefix
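
The stop_event interrupt pattern looks like this. The playback loop below is a stand-in (not the package's internals) so the pattern runs without audio hardware; with the real API you would pass the same Event to tts.speak(..., stop_event=stop):

```python
import threading
import time

def fake_playback(chunks: int, stop: threading.Event) -> int:
    """Play up to `chunks` pieces of audio, checking stop between each; return count played."""
    played = 0
    for _ in range(chunks):
        if stop.is_set():
            break
        time.sleep(0.01)  # stand-in for playing one audio chunk
        played += 1
    return played

stop = threading.Event()
timer = threading.Timer(0.2, stop.set)  # interrupt playback after ~200 ms
timer.start()
played = fake_playback(500, stop)
timer.cancel()
```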

tts.save(text, path, voice, speed, sample_rate, language) -> TTSResult

Synthesize text and write audio to a WAV file.

result = tts.save("Hello, world.", "output.wav", sample_rate=48000)

Language Selection

Default behavior: kokoro-mlx infers G2P language from the voice prefix.

Voice prefix Language
af_, am_ American English
bf_, bm_ British English
ef_, em_ Spanish
ff_ French
hf_, hm_ Hindi
if_, im_ Italian
jf_, jm_ Japanese
pf_, pm_ Portuguese
zf_, zm_ Mandarin Chinese

Japanese and Mandarin need their optional G2P extras:

pip install "kokoro-mlx[ja]"
pip install "kokoro-mlx[zh]"

Override language when the text and voice prefix intentionally differ:

tts.generate("Bonjour.", voice="ff_siwis", language="fr")
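
The prefix-based inference can be mirrored as a small lookup. The helper below is purely illustrative; the package performs this mapping internally:

```python
# First character of the voice name selects the G2P language.
PREFIX_LANG = {
    "a": "American English", "b": "British English", "e": "Spanish",
    "f": "French", "h": "Hindi", "i": "Italian", "j": "Japanese",
    "p": "Portuguese", "z": "Mandarin Chinese",
}

def infer_language(voice: str) -> str:
    """Hypothetical helper: map a voice name to its inferred G2P language."""
    return PREFIX_LANG.get(voice[0], "American English")
```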

tts.list_voices() -> list[str]

Return a sorted list of all available voice names.

voices = tts.list_voices()
# ['af_alloy', 'af_aoede', 'af_bella', ...]

tts.close()

Release held resources. Called automatically when using the context manager.

with KokoroTTS.from_pretrained() as tts:
    tts.save("Hello, world.", "output.wav")

TTSResult

@dataclass
class TTSResult:
    audio: np.ndarray   # float32
    sample_rate: int    # 24000 or 48000
    duration: float     # seconds
    voice: str          # voice name used
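
To show what the fields map to on disk, here is a sketch that writes a TTSResult-style payload to WAV with the stdlib wave module, converting float32 samples in [-1, 1] to 16-bit PCM. The package's save() does this for you; write_wav() is our own helper:

```python
import wave
import numpy as np

def write_wav(path: str, audio: np.ndarray, sample_rate: int) -> None:
    """Write mono float32 audio to a 16-bit PCM WAV file."""
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)           # Kokoro output is mono
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# e.g. write_wav("out.wav", result.audio, result.sample_rate)
write_wav("demo.wav", np.zeros(2400, dtype=np.float32), 24000)
```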

Available Voices

Voice names follow a prefix convention: the first two characters identify the accent and gender.

Prefix Description
af_ American English, Female
am_ American English, Male
bf_ British English, Female
bm_ British English, Male
ef_ Spanish, Female
em_ Spanish, Male
ff_ French, Female
hf_ Hindi, Female
hm_ Hindi, Male
if_ Italian, Female
im_ Italian, Male
jf_ Japanese, Female
jm_ Japanese, Male
pf_ Portuguese, Female
pm_ Portuguese, Male
zf_ Mandarin Chinese, Female
zm_ Mandarin Chinese, Male

American English (Female): af_alloy, af_aoede, af_bella, af_heart (default), af_jessica, af_kore, af_nicole, af_nova, af_river, af_sarah, af_sky

American English (Male): am_adam, am_echo, am_eric, am_fenrir, am_liam, am_michael, am_onyx, am_puck, am_santa

British English (Female): bf_alice, bf_emma, bf_isabella, bf_lily

British English (Male): bm_daniel, bm_fable, bm_george, bm_lewis


Architecture

Text Input
  │
  ▼
G2P / Phonemizer (misaki)
  │
  ▼
Phoneme Sequence
  │
  ▼
TextEncoder (PL-BERT / ALBERT, 12 layers, 768 hidden)
  │
  ▼
ProsodyPredictor (duration + pitch)
  │
  ├── Voice Style Vector (per-voice, 256-dim)
  │
  ▼
Decoder (StyleTTS2-style, AdaIN + residual blocks) [bf16]
  │
  ▼
ISTFTNet Vocoder (80-bin mel → waveform) [float32]
  │
  ▼
Optional 2x FFT upsample (24 kHz → 48 kHz)
  │
  ▼
TTSResult { audio float32, duration, voice }

The network runs in bf16 for throughput. At the vocoder output, the signal is promoted to float32 for waveform reconstruction: magnitude recovery, phase extraction, inverse DFT, and overlap-add synthesis. This keeps inference fast while preserving the precision the iSTFT path needs.
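
The 2x FFT upsampling step can be sketched in numpy: transform, zero-pad the upper half of the spectrum, and inverse-transform at twice the length. The package's actual upsampler may differ in edge handling and scaling; this shows only the core idea:

```python
import numpy as np

def fft_upsample_2x(audio: np.ndarray) -> np.ndarray:
    """Double the sample rate by zero-padding the rFFT spectrum (minimal sketch)."""
    n = len(audio)
    spectrum = np.fft.rfft(audio)
    padded = np.zeros(n + 1, dtype=complex)   # rFFT length for 2n output samples
    padded[: len(spectrum)] = spectrum        # keep original band, zeros above it
    # Factor 2 compensates for irfft's 1/N normalization at the doubled length.
    return (np.fft.irfft(padded, n=2 * n) * 2.0).astype(np.float32)

# 440 Hz tone, 0.1 s at 24 kHz -> 0.1 s at 48 kHz
tone = np.sin(2 * np.pi * 440 * np.arange(2400) / 24000).astype(np.float32)
upsampled = fft_upsample_2x(tone)
```

Because the added spectral bins are zero, the original samples reappear at the even output indices: the signal content is unchanged, only the rate doubles.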


Requirements

  • Apple Silicon Mac (M1 or later)
  • macOS 13+
  • Python 3.10–3.12
  • MLX 0.31+

Development

git clone https://github.com/gabrimatic/kokoro-mlx.git
cd kokoro-mlx
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
python -m pytest tests/ -v

Skip the slow model-loading tests with: python -m pytest tests/ -v -m "not slow"


Credits

Kokoro-82M by hexgrad · MLX by Apple · misaki G2P by hexgrad · MLX weights from mlx-community

Legal notices

Model License

This package provides inference code only. It does not include model weights.

The Kokoro-82M model weights are developed by hexgrad and released under the Apache License 2.0. The MLX conversion is hosted by mlx-community under the same license. By downloading and using the model weights, you agree to the terms of the Apache 2.0 license.

Trademarks

"MLX" is a trademark of Apple Inc. "HuggingFace" is a trademark of Hugging Face, Inc.

This project is not affiliated with, endorsed by, or sponsored by Apple, Hugging Face, or any other trademark holder. All trademark names are used solely to describe compatibility with their respective technologies.

Third-Party Licenses

This project depends on:

Package License
mlx MIT
numpy BSD-3-Clause
huggingface-hub Apache-2.0
soundfile BSD-3-Clause
misaki Apache-2.0
sounddevice (optional) MIT

License

This inference code is released under the MIT License. See LICENSE for details.

The model weights have their own license (Apache 2.0). See Model License above.


Created by Soroush Yousefpour

