# Parakeet MLX

An implementation of Nvidia's Parakeet ASR (Automatic Speech Recognition) models for Apple Silicon using MLX.
## Installation

> [!NOTE]
> Make sure you have `ffmpeg` installed on your system first, otherwise the CLI won't work properly.

Using `uv` (recommended):

```shell
uv add parakeet-mlx -U
```

Or, for the CLI:

```shell
uv tool install parakeet-mlx -U
```

Using `pip`:

```shell
pip install parakeet-mlx -U
```
## CLI Quick Start

```shell
parakeet-mlx <audio_files> [OPTIONS]
```

### Arguments

- `audio_files`: One or more audio files to transcribe (WAV, MP3, etc.)
### Options

- `--model` (default: `mlx-community/parakeet-tdt-0.6b-v2`): Hugging Face repository of the model to use
- `--output-dir` (default: current directory): Directory to save transcription outputs
- `--output-format` (default: `srt`): Output format (`txt`/`srt`/`vtt`/`json`/`all`)
- `--output-template` (default: (unknown)): Template for output filenames; (unknown), `{index}`, and `{date}` are supported
- `--highlight-words` (default: `False`): Enable word-level timestamps in SRT/VTT outputs
- `--verbose` / `-v` (default: `False`): Print detailed progress information
- `--chunk-duration` (default: 120 seconds): Chunking duration in seconds for long audio; `0` disables chunking
- `--overlap-duration` (default: 15 seconds): Overlap duration in seconds when chunking
- `--fp32` / `--bf16` (default: `bf16`): Precision to use
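The interplay of `--chunk-duration` and `--overlap-duration` can be pictured with a small boundary calculation. This is an illustrative sketch of overlapped chunking in general, not the package's actual implementation:

```python
def chunk_spans(total_seconds, chunk_duration=120.0, overlap_duration=15.0):
    """Illustrative: split audio of `total_seconds` into overlapping spans."""
    if chunk_duration <= 0:  # 0 disables chunking
        return [(0.0, total_seconds)]
    step = chunk_duration - overlap_duration
    spans = []
    start = 0.0
    while start < total_seconds:
        spans.append((start, min(start + chunk_duration, total_seconds)))
        if start + chunk_duration >= total_seconds:
            break
        start += step
    return spans

# With the defaults, a 300-second file is covered by chunks that start
# every 105 seconds, each sharing 15 seconds with its neighbor:
print(chunk_spans(300.0))  # [(0.0, 120.0), (105.0, 225.0), (210.0, 300.0)]
```

The overlap gives the decoder shared context at chunk boundaries so that words cut in half by one chunk can be recovered from the next.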
### Examples

```shell
# Basic transcription
parakeet-mlx audio.mp3

# Multiple files with word-level timestamps in VTT subtitles
parakeet-mlx *.mp3 --output-format vtt --highlight-words

# Generate all output formats
parakeet-mlx audio.mp3 --output-format all
```
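The `--output-template` placeholders behave like ordinary `str.format` fields. A hypothetical sketch of how such expansion could work (the template string here is made up; only the `{index}` and `{date}` placeholders come from the option list above):

```python
from datetime import date

def render_output_name(template, index):
    # Hypothetical re-creation of template expansion; not the CLI's actual code.
    return template.format(index=index, date=date.today().isoformat())

# "transcript_{index}_{date}" is an example template, not a documented default.
print(render_output_name("transcript_{index}_{date}", 3))
```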
## Python API Quick Start

Transcribe a file:

```python
from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav")

print(result.text)
```
Check timestamps:

```python
from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav")

print(result.sentences)
# [AlignedSentence(text="Hello World.", start=1.01, end=2.04, duration=1.03, tokens=[...])]
```
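Those sentence alignments carry everything a subtitle cue needs. A minimal sketch of turning `(start, end, text)` triples into SRT text (an illustration of the format, not the package's own writer):

```python
def srt_timestamp(seconds):
    # SRT timestamps use the form HH:MM:SS,mmm
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    # cues: iterable of (start_seconds, end_seconds, text)
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(to_srt([(1.01, 2.04, "Hello World.")]))
# 1
# 00:00:01,010 --> 00:00:02,040
# Hello World.
```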
Do chunking:

```python
from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav", chunk_duration=60 * 2.0, overlap_duration=15.0)

print(result.sentences)
```
## Timestamp Result

- `AlignedResult`: Top-level result containing the full text and sentences
  - `text`: Full transcribed text
  - `sentences`: List of `AlignedSentence`
- `AlignedSentence`: Sentence-level alignments with start/end times
  - `text`: Sentence text
  - `start`: Start time in seconds
  - `end`: End time in seconds
  - `duration`: Between `start` and `end`
  - `tokens`: List of `AlignedToken`
- `AlignedToken`: Word/token-level alignments with precise timestamps
  - `text`: Token text
  - `start`: Start time in seconds
  - `end`: End time in seconds
  - `duration`: Between `start` and `end`
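As a rough mental model, the hierarchy can be mirrored with plain dataclasses. The field names follow the list above, but the real classes live in `parakeet_mlx` and differ in detail:

```python
from dataclasses import dataclass, field

@dataclass
class AlignedToken:
    # Word/token-level alignment: text plus start/end in seconds
    text: str
    start: float
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

@dataclass
class AlignedSentence:
    # Sentence-level alignment holding its tokens
    text: str
    start: float
    end: float
    tokens: list = field(default_factory=list)

    @property
    def duration(self) -> float:
        return self.end - self.start

sent = AlignedSentence("Hello World.", 1.01, 2.04,
                       tokens=[AlignedToken("Hello", 1.01, 1.50),
                               AlignedToken("World.", 1.55, 2.04)])
print(round(sent.duration, 2))  # 1.03
```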
## Streaming Transcription

For real-time transcription, use the `transcribe_stream` method, which creates a streaming context:

```python
from parakeet_mlx import from_pretrained
from parakeet_mlx.audio import load_audio

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

# Create a streaming context
with model.transcribe_stream(
    context_size=(256, 256),  # (left_context, right_context) frames
) as transcriber:
    # Simulate real-time audio chunks
    audio_data = load_audio("audio_file.wav", model.preprocessor_config.sample_rate)
    chunk_size = model.preprocessor_config.sample_rate  # 1 second chunks

    for i in range(0, len(audio_data), chunk_size):
        chunk = audio_data[i:i + chunk_size]
        transcriber.add_audio(chunk)

        # Access current transcription
        result = transcriber.result
        print(f"Current text: {result.text}")

        # Access finalized and draft tokens
        # transcriber.finalized_tokens
        # transcriber.draft_tokens
```
### Streaming Parameters

- `context_size`: Tuple of `(left_context, right_context)` for attention windows
  - Controls how many frames the model looks at before and after the current position
  - Default: `(256, 256)`
- `depth`: Number of encoder layers that preserve exact computation across chunks
  - Controls how many layers maintain exact equivalence with the non-streaming forward pass
  - `depth=1`: Only the first encoder layer matches the non-streaming computation exactly
  - `depth=2`: The first two layers match exactly, and so on
  - `depth=N` (N = total layers): Full equivalence to the non-streaming forward pass
  - Higher depth means more computational consistency with non-streaming mode
  - Default: `1`
- `keep_original_attention`: Whether to keep the original attention mechanism
  - `False`: Switches to local attention for streaming (recommended)
  - `True`: Keeps original attention (less suitable for streaming)
  - Default: `False`
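To make `context_size` concrete, here is an illustrative calculation of which frames a local attention window of `(left, right)` covers at each position. This is a general sketch of windowed attention bounds, not the model's MLX implementation:

```python
def attention_window(position, num_frames, left=256, right=256):
    # A frame at `position` attends to frames in
    # [position - left, position + right], clamped to the valid range.
    lo = max(0, position - left)
    hi = min(num_frames - 1, position + right)
    return lo, hi

# With the default (256, 256), frame 1000 of a 4000-frame sequence
# attends to frames 744 through 1256:
print(attention_window(1000, 4000))  # (744, 1256)
```

The bounded right context is what makes streaming possible: the model never needs frames far in the future, so a chunk can be processed as soon as `right` frames of lookahead have arrived.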
## Low-Level API

To transcribe a log-mel spectrogram directly, you can do the following:

```python
import mlx.core as mx
from parakeet_mlx import from_pretrained
from parakeet_mlx.audio import get_logmel, load_audio

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

# Load and preprocess audio manually
audio = load_audio("audio.wav", model.preprocessor_config.sample_rate)
mel = get_logmel(audio, model.preprocessor_config)

# Generate transcription with alignments
# Accepts both [batch, sequence, feat] and [sequence, feat]
# `alignments` is a list of AlignedResult, whether or not you fed a batch dimension
alignments = model.generate(mel)
```
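The note that `generate` accepts both `[batch, sequence, feat]` and `[sequence, feat]` reflects a common normalization pattern: add a leading batch axis when the input is unbatched, then always return a list of results. A stand-alone illustration of the idea (the real method operates on MLX arrays, not nested lists):

```python
def ensure_batch_dim(mel, ndim):
    """Illustrative: normalize an input to batched form.

    `mel` stands in for an array and `ndim` for its number of dimensions.
    Returns (batched_mel, was_batched).
    """
    if ndim == 2:   # [sequence, feat] -> [1, sequence, feat]
        return [mel], False
    if ndim == 3:   # already [batch, sequence, feat]
        return mel, True
    raise ValueError(f"expected 2 or 3 dimensions, got {ndim}")

# An unbatched 4-frame, 2-feature "spectrogram" as nested lists:
mel = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
batched, was_batched = ensure_batch_dim(mel, 2)
print(len(batched), was_batched)  # 1 False
```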
## Todo

- Add CLI for better usability
- Add support for other Parakeet variants
- Streaming input (real-time transcription with `transcribe_stream`)
- Option to enhance chosen words' accuracy
- Chunking with continuous context (partially achieved with streaming)
## Acknowledgments

- Thanks to Nvidia for training these awesome models, writing cool papers, and providing a nice implementation.
- Thanks to the MLX project for providing the framework that made this implementation possible.
- Thanks to audiofile, audresample, numpy, and librosa for audio processing.
- Thanks to dacite for config management.
## License

Apache 2.0