MLX-powered LLM/VLM/TTS/STT fine-tuning for Apple Silicon - Unsloth-compatible API for Mac
Fine-tune LLMs, Vision, and Audio models on your Mac
SFT, DPO, GRPO, Vision, TTS, and STT fine-tuning — natively on MLX. Unsloth-compatible API.
Documentation · Quick Start · Training Methods · Examples · Status
[!NOTE] **Name Change:** This project was originally called `unsloth-mlx`. Since it is not an official Unsloth project, and to avoid any confusion, it has been renamed to `mlx-tune`. The vision remains the same: bringing the Unsloth experience to Mac users via MLX. If you were using `unsloth-mlx`, simply switch to `pip install mlx-tune` and update your imports from `unsloth_mlx` to `mlx_tune`.
[!NOTE] Why I Built This (A Personal Note)
I rely on Unsloth for my daily fine-tuning on cloud GPUs—it's the gold standard for me. But recently, I started working on a MacBook M4 and hit a friction point: I wanted to prototype locally on my Mac, then scale up to the cloud without rewriting my entire training script.
Since Unsloth relies on Triton (which Macs don't have, yet), I couldn't use it locally. I built `mlx-tune` to solve this specific "context switch" problem. It wraps Apple's native MLX framework in an Unsloth-compatible API.

The goal isn't to replace Unsloth or claim superior performance. The goal is code portability: write your `FastLanguageModel` code once on your Mac, test it, and then push that exact same script to a CUDA cluster. It solves a workflow problem, not just a hardware one.

This is an "unofficial" project built by a fan, for fans who happen to use Macs. It's helping me personally, and if it helps others like me, then I'll have my satisfaction.
Why MLX-Tune?
Bringing the Unsloth experience to Mac users via Apple's MLX framework.
- 🚀 Fine-tune LLMs, VLMs, TTS & STT locally on your Mac (M1/M2/M3/M4/M5)
- 💾 Leverage unified memory (up to 512GB on Mac Studio)
- 🔄 Unsloth-compatible API - your existing training scripts just work!
- 📦 Export anywhere - HuggingFace format, GGUF for Ollama/llama.cpp
- 🎙️ Audio fine-tuning - Orpheus TTS (SNAC codec) + Whisper STT
```python
# Unsloth (CUDA)
from unsloth import FastLanguageModel
from trl import SFTTrainer

# MLX-Tune (Apple Silicon)
from mlx_tune import FastLanguageModel
from mlx_tune import SFTTrainer

# Rest of your code stays exactly the same!
```
What This Is (and Isn't)
This is NOT a replacement for Unsloth or an attempt to compete with it. Unsloth is incredible - it's the gold standard for efficient LLM fine-tuning on CUDA.
This IS a bridge for Mac users who want to:
- 🧪 Prototype locally - Experiment with fine-tuning before committing to cloud GPU costs
- 📚 Learn & iterate - Develop your training pipeline with fast local feedback loops
- 🔄 Then scale up - Move to cloud NVIDIA GPUs + original Unsloth for production training
| Local Mac (MLX-Tune) | Cloud GPU (Unsloth) |
|---|---|
| Prototype & experiment | Full-scale training |
| Small datasets | Large datasets |
| Quick iterations | Production runs |
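The "same script, two backends" workflow above can be sketched with a small environment probe. This is an illustrative helper (the name `pick_backend` is mine, not part of either library), assuming both backends expose the same class names as described:

```python
import importlib.util


def pick_backend():
    """Return the first importable fine-tuning backend, or None.

    Prefers mlx_tune (Apple Silicon), then falls back to unsloth (CUDA),
    so one training script can run unchanged in both environments.
    """
    for name in ("mlx_tune", "unsloth"):
        if importlib.util.find_spec(name) is not None:
            return name
    return None


print("backend:", pick_backend())
```

In practice you would follow the probe with the matching imports; because the APIs mirror each other, everything after the import line stays identical.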
Project Status
🚀 v0.4.7 - Audio fine-tuning (TTS + STT), post-training workflow, track-based docs
| Feature | Status | Notes |
|---|---|---|
| SFT Training | ✅ Stable | Native MLX training |
| Model Loading | ✅ Stable | Any HuggingFace model (quantized & non-quantized) |
| Save/Export | ✅ Stable | HF format, GGUF (see limitations) |
| DPO Training | ✅ Stable | Full DPO loss |
| ORPO Training | ✅ Stable | Full ORPO loss |
| GRPO Training | ✅ Stable | Multi-generation + reward |
| KTO/SimPO | ✅ Stable | Proper loss implementations |
| Chat Templates | ✅ Stable | 15 models (llama, gemma, qwen, phi, mistral) |
| Response-Only Training | ✅ Stable | train_on_responses_only() |
| Multi-turn Merging | ✅ Stable | to_sharegpt() + conversation_extension |
| Column Mapping | ✅ Stable | apply_column_mapping() auto-rename |
| Dataset Config | ✅ Stable | HFDatasetConfig structured loading |
| Vision Models | ✅ Stable | Full VLM fine-tuning via mlx-vlm |
| TTS Fine-Tuning | ✅ NEW | Orpheus-3B via SNAC audio codec |
| STT Fine-Tuning | ✅ NEW | Whisper with encoder-decoder LoRA |
| convert() | ✅ NEW | HF → MLX conversion (LLM, TTS, STT) |
| push_to_hub() | ✅ NEW | Upload to HuggingFace Hub |
| PyPI Package | ✅ Available | uv pip install mlx-tune |
Installation
```bash
# Using uv (recommended - faster and more reliable)
uv pip install mlx-tune

# With audio support (TTS/STT fine-tuning)
uv pip install 'mlx-tune[audio]'

# Or using pip
pip install mlx-tune

# From source (for development)
git clone https://github.com/ARahim3/mlx-tune.git
cd mlx-tune
uv pip install -e .
```
Quick Start
```python
from mlx_tune import FastLanguageModel, SFTTrainer, SFTConfig
from datasets import load_dataset

# Load any HuggingFace model (1B model for quick start)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mlx-community/Llama-3.2-1B-Instruct-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Add LoRA adapters
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Load a dataset (or create your own)
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:100]")

# Train with SFTTrainer (same API as TRL!)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        max_steps=50,
    ),
)
trainer.train()

# Save (same API as Unsloth!)
model.save_pretrained("lora_model")                # Adapters only
model.save_pretrained_merged("merged", tokenizer)  # Full model
model.save_pretrained_gguf("model", tokenizer)     # GGUF (see note below)
```
[!NOTE] GGUF Export: Works with non-quantized base models. If using a 4-bit model (like above), see Known Limitations for workarounds.
Chat Templates & Response-Only Training
```python
from mlx_tune import get_chat_template, train_on_responses_only

# Apply chat template (supports llama-3, gemma, qwen, phi, mistral, etc.)
tokenizer = get_chat_template(tokenizer, chat_template="llama-3")

# Or auto-detect from model name
tokenizer = get_chat_template(tokenizer, chat_template="auto")

# Train only on responses (not prompts) - more efficient!
trainer = train_on_responses_only(
    trainer,
    instruction_part="<|start_header_id|>user<|end_header_id|>\n\n",
    response_part="<|start_header_id|>assistant<|end_header_id|>\n\n",
)
```
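Conceptually, response-only training boils down to masking the prompt tokens out of the loss so gradients flow only through the assistant's reply. Here is a minimal pure-Python sketch of that masking; it is not mlx-tune's actual implementation, and the `-100` ignore index follows the common HuggingFace convention:

```python
def mask_prompt_tokens(token_ids, response_marker_ids, ignore_index=-100):
    """Build labels from token_ids, masking everything up to and including
    the first occurrence of the response marker, so only the response
    tokens contribute to the training loss."""
    labels = list(token_ids)
    n = len(response_marker_ids)
    for i in range(len(labels) - n + 1):
        if labels[i:i + n] == list(response_marker_ids):
            # Mask prompt + marker; keep response tokens as targets.
            for j in range(i + n):
                labels[j] = ignore_index
            return labels
    # No marker found: mask the whole sequence (nothing to learn from).
    return [ignore_index] * len(labels)


# Toy sequence where [42, 43] stands in for the assistant header tokens.
tokens = [1, 5, 9, 42, 43, 7, 8, 2]
labels = mask_prompt_tokens(tokens, [42, 43])  # prompt masked, reply kept
```

The real `instruction_part`/`response_part` strings above play the role of the marker: they are tokenized and located in each example to decide where the loss mask ends.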
Vision Model Fine-Tuning (NEW!)
Fine-tune vision-language models like Qwen3.5 on image+text tasks:
```python
from mlx_tune import FastVisionModel, UnslothVisionDataCollator, VLMSFTTrainer
from mlx_tune.vlm import VLMSFTConfig

# Load a vision model
model, processor = FastVisionModel.from_pretrained(
    "mlx-community/Qwen3.5-0.8B-bf16",
)

# Add LoRA (same params as Unsloth!)
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,
    finetune_language_layers=True,
    r=16, lora_alpha=16,
)

# Train on image-text data
FastVisionModel.for_training(model)
trainer = VLMSFTTrainer(
    model=model,
    tokenizer=processor,
    data_collator=UnslothVisionDataCollator(model, processor),
    train_dataset=dataset,
    args=VLMSFTConfig(max_steps=30, learning_rate=2e-4),
)
trainer.train()
```
See examples/10_qwen35_vision_finetuning.py for the full workflow, or examples/11_qwen35_text_finetuning.py for text-only fine-tuning on Qwen3.5.
TTS Fine-Tuning (NEW!)
Fine-tune text-to-speech models like Orpheus-3B using the SNAC audio codec:
```python
from mlx_tune import FastTTSModel, TTSSFTTrainer, TTSSFTConfig, TTSDataCollator
from datasets import load_dataset, Audio

# Load TTS model + SNAC codec
model, tokenizer = FastTTSModel.from_pretrained("mlx-community/orpheus-3b-0.1-ft-bf16")
model = FastTTSModel.get_peft_model(model, r=16, lora_alpha=16)

# Train on audio data
dataset = load_dataset("MrDragonFox/Elise", split="train[:100]")
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
trainer = TTSSFTTrainer(
    model=model, tokenizer=tokenizer,
    data_collator=TTSDataCollator(model, tokenizer),
    train_dataset=dataset,
    args=TTSSFTConfig(output_dir="./tts_output", max_steps=60),
)
trainer.train()

# Generate speech
FastTTSModel.for_inference(model)
audio = model.generate("Hello, how are you?")
```
See examples/12_orpheus_tts_finetuning.py for the full workflow.
STT Fine-Tuning (NEW!)
Fine-tune Whisper speech-to-text models with encoder-decoder LoRA:
```python
from mlx_tune import FastSTTModel, STTSFTTrainer, STTSFTConfig, STTDataCollator

# Load Whisper model
model, processor = FastSTTModel.from_pretrained("mlx-community/whisper-tiny-asr-fp16")
model = FastSTTModel.get_peft_model(model, r=8, finetune_encoder=True, finetune_decoder=True)

# Train on transcription data
trainer = STTSFTTrainer(
    model=model, tokenizer=processor,
    data_collator=STTDataCollator(model, processor),
    train_dataset=dataset,
    args=STTSFTConfig(output_dir="./stt_output", max_steps=60),
)
trainer.train()

# Transcribe audio
FastSTTModel.for_inference(model)
text = model.transcribe("audio.wav")
```
See examples/13_whisper_stt_finetuning.py for the full workflow.
Post-Training Workflow
All model types (LLM, VLM, TTS, STT) support the full post-training workflow:
```python
# Save LoRA adapters
model.save_pretrained("./adapters")

# Merge LoRA into base model
model.save_pretrained_merged("./merged")

# Convert HF model to MLX format
FastLanguageModel.convert("model-name", mlx_path="./mlx_model")

# Push to HuggingFace Hub
model.push_to_hub("username/my-model")
```
Supported Training Methods
| Method | Trainer | Implementation | Use Case |
|---|---|---|---|
| SFT | SFTTrainer | ✅ Native MLX | Instruction fine-tuning |
| DPO | DPOTrainer | ✅ Native MLX | Preference learning (proper log-prob loss) |
| ORPO | ORPOTrainer | ✅ Native MLX | Combined SFT + odds ratio preference |
| GRPO | GRPOTrainer | ✅ Native MLX | Reasoning with multi-generation (DeepSeek R1 style) |
| KTO | KTOTrainer | ✅ Native MLX | Kahneman-Tversky optimization |
| SimPO | SimPOTrainer | ✅ Native MLX | Simple preference optimization |
| VLM SFT | VLMSFTTrainer | ✅ Native MLX | Vision-Language model fine-tuning |
| TTS SFT | TTSSFTTrainer | ✅ Native MLX | Text-to-Speech (Orpheus + SNAC codec) |
| STT SFT | STTSFTTrainer | ✅ Native MLX | Speech-to-Text (Whisper encoder-decoder) |
Examples
Check examples/ for working code:
- Basic model loading and inference (01–07)
- Complete SFT fine-tuning pipeline (08)
- RL training methods — DPO, GRPO, ORPO (09)
- Vision model fine-tuning — Qwen3.5 (10–11)
- TTS fine-tuning — Orpheus-3B (12)
- STT fine-tuning — Whisper (13)
Requirements
- Hardware: Apple Silicon Mac (M1/M2/M3/M4/M5)
- OS: macOS 13.0+
- Memory: 8GB+ unified RAM (16GB+ recommended)
- Python: 3.9+
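A quick stdlib-only sanity check for the hardware requirement (illustrative, not part of mlx-tune; note that an x86 Python running under Rosetta will report `x86_64` and should be avoided):

```python
import platform


def on_apple_silicon():
    """True when running a native arm64 Python on macOS."""
    return platform.system() == "Darwin" and platform.machine() == "arm64"


print("Apple Silicon:", on_apple_silicon())
```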
Comparison with Unsloth
| Feature | Unsloth (CUDA) | MLX-Tune |
|---|---|---|
| Platform | NVIDIA GPUs | Apple Silicon |
| Backend | Triton Kernels | MLX Framework |
| Memory | VRAM (limited) | Unified (up to 512GB) |
| API | Original | 100% Compatible |
| Best For | Production training | Local dev, large models |
Known Limitations
GGUF Export from Quantized Models
The Issue: GGUF export (save_pretrained_gguf) doesn't work directly with quantized (4-bit) base models. This is a known limitation in mlx-lm, not an mlx-tune bug.
What Works:
- ✅ Training with quantized models (QLoRA): works perfectly
- ✅ Saving adapters (`save_pretrained`): works
- ✅ Saving merged model (`save_pretrained_merged`): works
- ✅ Inference with the trained model: works
- ❌ GGUF export from a quantized base model: mlx-lm limitation
Workarounds:

1. **Use a non-quantized base model** (recommended for GGUF export):

    ```python
    # Use fp16 model instead of 4-bit
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="mlx-community/Llama-3.2-1B-Instruct",  # NOT -4bit
        max_seq_length=2048,
        load_in_4bit=False,  # Train in fp16
    )
    # Train normally, then export
    model.save_pretrained_gguf("model", tokenizer)  # Works!
    ```

2. **Dequantize during export** (results in a large fp16 file):

    ```python
    model.save_pretrained_gguf("model", tokenizer, dequantize=True)
    # Then re-quantize with llama.cpp:
    # ./llama-quantize model.gguf model-q4_k_m.gguf Q4_K_M
    ```

3. **Skip GGUF, use MLX format:** If you only need the model for MLX/Python inference, just use `save_pretrained_merged()`; no GGUF needed.
Related Issues:
- mlx-lm #353 - MLX to GGUF conversion
- mlx-examples #1382 - Quantized to GGUF
Contributing
Contributions welcome! Areas that need help:
- Custom MLX kernels for even faster training
- More comprehensive test coverage (currently 60%, target 70%+)
- Testing on different M-series chips (M1, M2, M3, M4, M5)
- Additional TTS/STT model support (Sesame, Spark-TTS, distil-whisper)
- Batched RL training (currently single-sample)
License
Apache 2.0 - See LICENSE file.
Acknowledgments
- Unsloth - The original, incredible CUDA library
- MLX - Apple's ML framework
- MLX-LM - LLM utilities for MLX
- MLX-VLM - Vision model support
- MLX-Audio - Audio inference (TTS/STT) for MLX
Community project, not affiliated with Unsloth AI or Apple.
⭐ Star this repo if you find it useful!