# 🚀 Unsloth-Candle

High-performance LLM fine-tuning library built with Candle and Rust.
Unsloth-Candle brings the blazing-fast performance of Unsloth to the Candle ecosystem. By leveraging optimized Rust kernels and efficient memory management, it delivers 2x faster training and 70% lower memory usage than standard implementations.

## ✨ Core Advantages
- Zero Learning Curve: 1:1 API compatibility with Unsloth's Python interface.
- Hardware Optimized: Native support for CUDA, Metal (Apple Silicon), and AVX/Neon (CPU).
- Memory Efficient: Native 4-bit NF4 quantization and gradient checkpointing.
- Unified Support: One engine for Llama 3.2, Mistral, Qwen 2.5, DeepSeek-V3, and more.
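The 4-bit NF4 scheme mentioned above stores each weight as a 4-bit index into a fixed 16-value "normal float" codebook, plus one absmax scale per block. A minimal pure-Python sketch of the idea (the codebook values are the standard NF4 levels from the QLoRA paper, rounded; the helper names are illustrative, not this library's API):

```python
# Standard NF4 codebook: 16 quantile-spaced levels of a normal
# distribution, normalized to [-1, 1] (values rounded to 4 decimals).
NF4_LEVELS = [
    -1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
    0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0,
]

def quantize_block(weights):
    """Quantize one block: one float absmax scale + one 4-bit index per weight."""
    absmax = max(abs(w) for w in weights) or 1.0
    idxs = [min(range(16), key=lambda i: abs(w / absmax - NF4_LEVELS[i]))
            for w in weights]
    return absmax, idxs

def dequantize_block(absmax, idxs):
    """Reconstruct approximate weights from the scale and 4-bit indices."""
    return [NF4_LEVELS[i] * absmax for i in idxs]

block = [0.31, -0.12, 0.05, -0.44]
scale, codes = quantize_block(block)
restored = dequantize_block(scale, codes)  # each value within one codebook step
```

Storing 4 bits per weight plus one scale per block is what drives the memory savings quoted above.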
## 📦 Installation

### Via pip (Recommended)

```shell
pip install unsloth-candle
```

### Build from Source

```shell
git clone https://github.com/unslothai/unsloth-candle.git
cd unsloth-candle
pip install -e .
```
To enable GPU acceleration, pass the Cargo feature flags through to the build (assuming a maturin-based build, as is typical for Candle/PyO3 projects; requires pip ≥ 23.1):

- CUDA:

  ```shell
  pip install -e . --config-settings build-args="--features cuda"
  ```

- Metal:

  ```shell
  pip install -e . --config-settings build-args="--features metal"
  ```
## 🛠 Usage

### 1. Load Model & Tokenizer

```python
from unsloth_candle import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)
```
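To see roughly what `load_in_4bit` buys you on a 1B-class model, here is back-of-envelope math for the weight memory (the parameter count and the NF4 block size of 64 are illustrative assumptions, not figures published by this library):

```python
# Rough weight-memory footprint, fp16 baseline vs 4-bit NF4.
# Assumptions: ~1.24B parameters, one fp16 absmax scale per 64-weight block.
params = 1_240_000_000
fp16_gb = params * 2 / 1e9                          # 2 bytes per weight
nf4_gb = (params * 0.5 + (params / 64) * 2) / 1e9   # 4-bit payload + fp16 scales
print(f"fp16: {fp16_gb:.2f} GB, nf4: {nf4_gb:.2f} GB")
# prints: fp16: 2.48 GB, nf4: 0.66 GB
```

Activations, optimizer state, and KV cache come on top of this, which is where gradient checkpointing and the LoRA setup below help.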
### 2. Apply LoRA/DoRA

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha = 16,
    use_gradient_checkpointing = True,
    use_dora = False,  # Set to True for DoRA
)
```
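Each LoRA adapter on a `d_in × d_out` linear layer adds only `r * (d_in + d_out)` trainable weights. A quick count for the config above, assuming Llama-3.2-1B-style attention shapes (hidden size 2048, 16 layers, grouped-query KV projections of width 512; these shapes are assumptions for illustration):

```python
# Back-of-envelope count of trainable LoRA parameters for r=16 on the
# four attention projections listed in target_modules above.
r = 16
layers = 16
shapes = {  # assumed (d_in, d_out) per target module
    "q_proj": (2048, 2048),
    "k_proj": (2048, 512),
    "v_proj": (2048, 512),
    "o_proj": (2048, 2048),
}
per_layer = sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
total = per_layer * layers
print(total)  # 3407872 — ~3.4M trainable parameters, ~0.3% of a ~1.2B model
```

Because the base weights stay frozen (and 4-bit quantized), only these few million adapter parameters need gradients and optimizer state.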
### 3. Fine-Tuning with SFTTrainer

```python
from unsloth_candle import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = SFTConfig(
        max_steps = 60,
        learning_rate = 2e-4,
        logging_steps = 1,
    ),
)
trainer.train()
```
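The `train_dataset` above is expected to expose a `"text"` column (that is what `dataset_text_field` points at). A minimal sketch of turning instruction/response pairs into that shape; the prompt template and the `</s>` end-of-sequence marker here are illustrative choices, not requirements of the library (in practice use your tokenizer's actual EOS token):

```python
# Build a "text" field from instruction/response pairs.
EOS = "</s>"  # assumed EOS marker; use tokenizer.eos_token in practice

def to_text(example):
    return {
        "text": f"### Instruction:\n{example['instruction']}\n"
                f"### Response:\n{example['response']}{EOS}"
    }

raw = [
    {"instruction": "Translate 'hello' to French.", "response": "bonjour"},
    {"instruction": "What is 2 + 2?", "response": "4"},
]
dataset = [to_text(ex) for ex in raw]
```

The same mapping works unchanged as a `.map(to_text)` call on a Hugging Face `datasets.Dataset`.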
### 4. Save & Export

```python
# Save as merged HF weights
model.save_pretrained_merged("output_hf", tokenizer)

# Save as GGUF (for Ollama/llama.cpp)
model.save_pretrained_gguf("output_gguf", tokenizer, quantization_type="q4_k_m")
```
## 🗺️ Model Catalog
| Model | Architecture | 4-bit | LoRA | DoRA |
|---|---|---|---|---|
| Llama 3.2 | LlamaForCausalLM | ✅ | ✅ | ✅ |
| Mistral Nemo | MistralForCausalLM | ✅ | ✅ | ✅ |
| Qwen 2.5 | Qwen2ForCausalLM | ✅ | ✅ | ✅ |
| DeepSeek V3 | DeepSeekV3 (MLA) | ✅ | ✅ | ✅ |
| Gemma 3 | Gemma3 (GeGLU) | ✅ | ✅ | ✅ |
| Phi 4 | Phi4 | ✅ | ✅ | ✅ |
## 📜 License
Licensed under the Apache License, Version 2.0.
## Download files
### `unsloth_candle-0.1.0.tar.gz` (source distribution)

- Download URL: unsloth_candle-0.1.0.tar.gz
- Size: 498.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cf3f905b885dc5f12f93976ee91370f96771dd043cd2f58a92bdd2fee59d0441` |
| MD5 | `edf920798345c6bd4e94e640d94d1715` |
| BLAKE2b-256 | `ac8e4a3dca619cfc10fe1aad65b0d62b6ad946cc021b788742cc4bf9583e1b0a` |
### `unsloth_candle-0.1.0-cp311-cp311-manylinux_2_39_x86_64.whl` (built distribution)

- Download URL: unsloth_candle-0.1.0-cp311-cp311-manylinux_2_39_x86_64.whl
- Size: 2.5 MB
- Tags: CPython 3.11, manylinux: glibc 2.39+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a5c5b7f3e3732e1ab7b10d7357133b39d4a643d78608e484498aefdc825a0e89` |
| MD5 | `da87532fd91c03550be1d62340988baf` |
| BLAKE2b-256 | `36cbd4d833cebacd686fdfd5925c55b50222180ab2d80f4f6208f5148c4f8e84` |