
Parakeet MLX

An implementation of the Parakeet models - Nvidia's ASR (Automatic Speech Recognition) models - for Apple Silicon using MLX.

Installation

[!NOTE] Make sure you have ffmpeg installed on your system first, otherwise the CLI won't work properly.

Using uv (the recommended way):

uv add parakeet-mlx -U

Or, for the CLI:

uv tool install parakeet-mlx -U

Using pip:

pip install parakeet-mlx -U

CLI Quick Start

parakeet-mlx <audio_files> [OPTIONS]

Arguments

  • audio_files: One or more audio files to transcribe (WAV, MP3, etc.)

Options

  • --model (default: mlx-community/parakeet-tdt-0.6b-v2)
    • Hugging Face repository of the model to use
  • --output-dir (default: current directory)
    • Directory to save transcription outputs
  • --output-format (default: srt)
    • Output format (txt/srt/vtt/json/all)
  • --output-template (default: (unknown))
    • Template for output filenames; (unknown), {index}, and {date} are supported
  • --highlight-words (default: False)
    • Enable word-level timestamps in SRT/VTT outputs
  • --verbose / -v (default: False)
    • Print detailed progress information
  • --chunk-duration (default: 120 seconds)
    • Chunking duration in seconds for long audio; 0 disables chunking
  • --overlap-duration (default: 15 seconds)
    • Overlap duration in seconds when chunking is used
  • --fp32 / --bf16 (default: bf16)
    • Determine the precision to use

Examples

# Basic transcription
parakeet-mlx audio.mp3

# Multiple files with word-level timestamps in VTT subtitles
parakeet-mlx *.mp3 --output-format vtt --highlight-words

# Generate all output formats
parakeet-mlx audio.mp3 --output-format all

Python API Quick Start

Transcribe a file:

from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav")

print(result.text)

Check timestamps:

from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav")

print(result.sentences)
# [AlignedSentence(text="Hello World.", start=1.01, end=2.04, duration=1.03, tokens=[...])]

Do chunking:

from parakeet_mlx import from_pretrained

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

result = model.transcribe("audio_file.wav", chunk_duration=60 * 2.0, overlap_duration=15.0)

print(result.sentences)
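The library performs the chunk splitting internally. Purely to illustrate what chunk_duration and overlap_duration mean, overlapping chunk boundaries could be computed along these lines (a minimal sketch, not the library's actual code):

```python
def chunk_bounds(total, chunk=120.0, overlap=15.0):
    """Yield (start, end) times in seconds covering `total` seconds,
    with each chunk overlapping the previous one by `overlap` seconds."""
    step = chunk - overlap  # each chunk advances by chunk minus overlap
    start = 0.0
    while True:
        end = min(start + chunk, total)
        yield (start, end)
        if end >= total:
            break
        start += step

# A 5-minute file with the default 120 s chunks and 15 s overlap:
print(list(chunk_bounds(300.0)))
# [(0.0, 120.0), (105.0, 225.0), (210.0, 300.0)]
```

The overlap gives the model shared context at chunk edges so the transcripts can be merged without losing words at the boundaries.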

Timestamp Result

  • AlignedResult: Top-level result containing the full text and sentences
    • text: Full transcribed text
    • sentences: List of AlignedSentence
  • AlignedSentence: Sentence-level alignments with start/end times
    • text: Sentence text
    • start: Start time in seconds
    • end: End time in seconds
    • duration: Duration in seconds (end minus start)
    • tokens: List of AlignedToken
  • AlignedToken: Word/token-level alignments with precise timestamps
    • text: Token text
    • start: Start time in seconds
    • end: End time in seconds
    • duration: Duration in seconds (end minus start)
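The CLI already writes SRT/VTT for you; as an illustration of how these aligned objects map onto subtitle output, here is a small self-contained sketch. The AlignedSentence below is a hypothetical stand-in with only the fields documented above, not the library's class:

```python
from dataclasses import dataclass

@dataclass
class AlignedSentence:  # hypothetical mirror of the documented fields
    text: str
    start: float
    end: float

def srt_time(seconds: float) -> str:
    # SRT timestamps use the HH:MM:SS,mmm format
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(sentences) -> str:
    # One numbered block per sentence, separated by blank lines
    blocks = [
        f"{i}\n{srt_time(s.start)} --> {srt_time(s.end)}\n{s.text}"
        for i, s in enumerate(sentences, 1)
    ]
    return "\n\n".join(blocks)

print(to_srt([AlignedSentence("Hello World.", 1.01, 2.04)]))
# 1
# 00:00:01,010 --> 00:00:02,040
# Hello World.
```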

Low-Level API

To transcribe a log-mel spectrogram directly, you can do the following:

import mlx.core as mx
from parakeet_mlx import from_pretrained
from parakeet_mlx.audio import get_logmel, load_audio

model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

# Load and preprocess audio manually
audio = load_audio("audio.wav", model.preprocessor_config.sample_rate)
mel = get_logmel(audio, model.preprocessor_config)

# Generate transcription with alignments
# Accepts both [batch, sequence, feat] and [sequence, feat]
# Returns a list of AlignedResult, whether or not the input has a batch dimension
alignments = model.generate(mel)
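get_logmel does this preprocessing for you; for readers unfamiliar with the representation, the sketch below shows the general shape of a log-mel computation in plain numpy. The parameters (16 kHz sample rate, 512-point FFT, hop of 160 samples, 80 mel bands) are typical ASR values assumed for illustration, not necessarily Parakeet's exact preprocessor config:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sample_rate):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)  # rising edge
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)  # falling edge
    return fb

def logmel(audio, sample_rate=16000, n_fft=512, hop=160, n_mels=80):
    # Windowed power spectrum of overlapping frames
    window = np.hanning(n_fft)
    frames = [audio[s:s + n_fft] * window
              for s in range(0, len(audio) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)) ** 2
    # Project onto mel bands, then take the log
    mel = spec @ mel_filterbank(n_mels, n_fft, sample_rate).T
    return np.log(mel + 1e-10)  # shape: [frames, n_mels]

# One second of a 440 Hz tone -> 97 frames of 80 mel bands
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(logmel(tone).shape)  # (97, 80)
```

The model consumes these [sequence, feat] frames, which is why model.generate accepts the mel array rather than raw audio.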

Todo

  • Add CLI for better usability
  • Add support for other Parakeet variants
  • Streaming input (although RTFx is currently much higher than 1, so the current state should be more than sufficient for streaming)
  • Option to enhance chosen words' accuracy
  • Chunking with continuous context (this might be achievable by preserving the decoder state - just a speculation, though)

Acknowledgments

  • Thanks to Nvidia for training these awesome models, writing cool papers, and providing a nice reference implementation.
  • Thanks to MLX project for providing the framework that made this implementation possible.
  • Thanks to audiofile, audresample, numpy, and librosa for audio processing.
  • Thanks to dacite for config management.

License

Apache 2.0
