# Whisper Turbo

High-performance Whisper transcription using MLX for Apple Silicon, featuring Whisper Turbo v3 for fast, high-quality speech-to-text.
## Features
- 🚀 Optimized for Apple Silicon (M1/M2/M3) using MLX framework
- ⚡ Supports Whisper Turbo v3 for fast, high-quality transcription
- 🎯 Automatic language detection
- 📝 Word-level timestamps
- 🔧 Handles various audio formats (AAC, MP3, MP4, WAV, etc.)
- 📡 Optional API posting functionality
- 💾 Save transcriptions to JSON format
## Installation

### From PyPI

```bash
pip install whisper-turbo
```

### From Source

```bash
git clone https://github.com/xbattlax/whisper-turbo.git
cd whisper-turbo
pip install -e .
```
## Quick Start

```bash
# Transcribe with Turbo v3 (fastest, recommended)
whisper-turbo your_audio.mp3 --model turbo-v3 --output result.json
```

Or use it as a Python module:

```python
from whisper_turbo import MLXWhisperTranscriber

transcriber = MLXWhisperTranscriber(model_name="turbo-v3")
text, segments = transcriber.transcribe_file("audio.mp3")
print(text)
```
## Requirements
- Python 3.8+
- Apple Silicon Mac (M1/M2/M3)
- ~2GB disk space for model download
## Usage

### Command Line Usage

```bash
# Basic transcription
whisper-turbo audio.mp3

# Use Whisper Turbo v3 (fastest, high quality)
whisper-turbo audio.mp3 --model turbo-v3

# Use other models
whisper-turbo audio.mp3 --model large-v3
whisper-turbo audio.mp3 --model medium
whisper-turbo audio.mp3 --model base

# Save output to file
whisper-turbo audio.mp3 --model turbo-v3 --output transcription.json

# Disable API posting
whisper-turbo audio.mp3 --model turbo-v3 --no-api

# Custom API endpoint
whisper-turbo audio.mp3 --api-endpoint https://your-api.com/transcript

# Enable SSL verification
whisper-turbo audio.mp3 --verify-ssl
```
### Python API Usage

```python
from whisper_turbo import MLXWhisperTranscriber

# Initialize the transcriber
transcriber = MLXWhisperTranscriber(
    model_name="turbo-v3",
    api_enabled=False  # Disable API posting
)

# Transcribe an audio file
text, segments = transcriber.transcribe_file("path/to/audio.mp3")

# Print the full transcription
print("Full text:", text)

# Print segments with timestamps
for segment in segments:
    print(f"[{segment['start']:.2f}s - {segment['end']:.2f}s] {segment['text']}")
```
## Available Models

- `turbo-v3` / `turbo` - Whisper Large v3 Turbo (fastest, recommended)
- `large-v3` / `large` - Whisper Large v3
- `medium` - Medium model (balanced speed/quality)
- `small` - Small model (faster, good quality)
- `base` - Base model (very fast, decent quality)
- `tiny` - Tiny model (fastest, lower quality)
## API Format

When API posting is enabled, the script posts transcriptions in the following format:

```json
{
  "external_call_ref": "session-uuid",
  "timestamp": "2024-01-15T10:30:45.123Z",
  "segments": [
    {
      "start": 0.0,
      "end": 5.2,
      "is_final": true,
      "speaker_id": "speaker_0"
    }
  ],
  "transcription": [
    {
      "start": 0.0,
      "end": 5.2,
      "text": "Hello, welcome to the meeting",
      "is_final": true
    }
  ]
}
```
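As a sketch of how a client might assemble this payload, the snippet below builds the structure from a list of segments. The `build_payload` helper is hypothetical (not part of the package); only the field names come from the format above:

```python
import json
import uuid
from datetime import datetime, timezone

def build_payload(segments):
    """Assemble a payload matching the API format above.

    `segments` is a list of dicts with start/end/text/is_final keys,
    as produced by the transcriber.
    """
    return {
        "external_call_ref": str(uuid.uuid4()),  # session UUID
        "timestamp": datetime.now(timezone.utc)
            .isoformat(timespec="milliseconds")
            .replace("+00:00", "Z"),
        "segments": [
            {
                "start": s["start"],
                "end": s["end"],
                "is_final": s["is_final"],
                "speaker_id": s.get("speaker_id", "speaker_0"),
            }
            for s in segments
        ],
        "transcription": [
            {
                "start": s["start"],
                "end": s["end"],
                "text": s["text"],
                "is_final": s["is_final"],
            }
            for s in segments
        ],
    }

payload = build_payload([
    {"start": 0.0, "end": 5.2, "text": "Hello, welcome to the meeting", "is_final": True},
])
print(json.dumps(payload, indent=2))
```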
## Output Format

Saved transcriptions include:

```json
{
  "file": "audio.mp3",
  "session_id": "uuid",
  "timestamp": "2024-01-15T10:30:45",
  "model": "turbo-v3",
  "device": "Apple Silicon (MLX)",
  "full_text": "Complete transcription text...",
  "segments": [
    {
      "start": 0.0,
      "end": 5.2,
      "text": "Segment text",
      "is_final": true
    }
  ]
}
```
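Since the output is plain JSON, it is easy to post-process. A minimal sketch of reading a saved transcription back (field names taken from the format above; the inline dict stands in for a real file):

```python
import json

# Stand-in for a saved transcription following the format above;
# in practice you would load it from disk:
#     with open("transcription.json") as f:
#         saved = json.load(f)
saved = {
    "file": "audio.mp3",
    "model": "turbo-v3",
    "full_text": "Segment text",
    "segments": [
        {"start": 0.0, "end": 5.2, "text": "Segment text", "is_final": True},
    ],
}

# Sum the duration covered by all segments
total_speech = sum(seg["end"] - seg["start"] for seg in saved["segments"])
print(f"{saved['file']}: {len(saved['segments'])} segments, {total_speech:.1f}s of speech")
```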
## Performance
On Apple Silicon (M1/M2/M3), MLX Whisper provides:
- Hardware-accelerated transcription using Metal
- Efficient memory usage
- Fast model loading and inference
- Typical processing: ~20-30 seconds for 5-minute audio with Turbo v3
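Put another way, the figures above imply a real-time factor of roughly 10-15x; a quick back-of-the-envelope check, using the midpoint of the quoted range:

```python
# Real-time factor implied by the numbers above
audio_seconds = 5 * 60        # a 5-minute recording
processing_seconds = 25       # midpoint of the ~20-30 s range
rtf = audio_seconds / processing_seconds
print(f"Roughly {rtf:.0f}x faster than real time")
```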
## Example

```bash
# Transcribe a meeting recording with Turbo v3
whisper-turbo ~/Downloads/meeting.mp4 --model turbo-v3 --output meeting_transcript.json
```

Example output:

```text
🔧 Loading MLX Whisper model 'turbo-v3' on Apple Silicon...
✅ MLX Whisper ready with model: turbo-v3
✅ MLX Whisper model loaded successfully
🎧 Processing audio file: ~/Downloads/meeting.mp4
🚀 Using Apple MLX framework for optimal Metal performance
🤖 Using MLX model: mlx-community/whisper-large-v3-turbo
Detected language: English
✅ MLX Transcription complete in 23.45s
📝 Text length: 5420 characters
📊 Segments: 125
🔧 Used device: Apple Silicon (MLX)
💾 Saved transcription to: meeting_transcript.json
```
## Troubleshooting

### Installation Issues

- Ensure you're on an Apple Silicon Mac (M1/M2/M3)
- Update pip: `pip install --upgrade pip`
- Install in a virtual environment if conflicts occur
### Performance Tips

- Use the `turbo-v3` model for the best speed/quality balance
- Close other applications to free up memory
- For very long audio files, consider splitting them
### Common Issues

- **Import Error**: Ensure `mlx-whisper` is installed
- **Memory Error**: Try a smaller model (`base`, `small`)
- **Audio Format Error**: Convert to a supported format (MP3, WAV)
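For format errors, converting to 16 kHz mono WAV first is a reliable workaround. A minimal sketch using `ffmpeg` from Python (assumes `ffmpeg` is on your `PATH`; the `to_wav` helper and file names are hypothetical, not part of the package):

```python
import shutil
import subprocess

def ffmpeg_cmd(src, dst, sample_rate=16000):
    """Build the ffmpeg argument list: -ar sets the sample rate,
    -ac 1 forces mono, -y overwrites the output if it exists."""
    return ["ffmpeg", "-y", "-i", src, "-ar", str(sample_rate), "-ac", "1", dst]

def to_wav(src, dst="converted.wav"):
    """Convert an audio file to 16 kHz mono WAV via ffmpeg."""
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found on PATH")
    subprocess.run(ffmpeg_cmd(src, dst), check=True)
    return dst
```

For example, `to_wav("podcast.m4a")` would produce `converted.wav`, which can then be passed to `whisper-turbo` as usual.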
## Development

### Setting up for development

```bash
# Clone the repository
git clone https://github.com/xbattlax/whisper-turbo.git
cd whisper-turbo

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On macOS/Linux

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black whisper_turbo/

# Lint code
flake8 whisper_turbo/
```
### Building for PyPI

```bash
# Install build tools
pip install build twine

# Build the package
python -m build

# Upload to TestPyPI (for testing)
twine upload -r testpypi dist/*

# Upload to PyPI (for release)
twine upload dist/*
```
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- MLX Whisper team for the excellent MLX implementation
- OpenAI for the original Whisper model
- Apple for the MLX framework
## Citation

If you use this tool in your research or project, please consider citing:

```bibtex
@software{whisper_turbo,
  author = {Nathan Metzger},
  title = {Whisper Turbo: High-performance Whisper transcription for Apple Silicon},
  year = {2024},
  url = {https://github.com/xbattlax/whisper-turbo}
}
```