Utils for Unsloth
✨ Finetune for Free
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF, Ollama, or vLLM, or uploaded to Hugging Face.
Unsloth supports | Free Notebooks | Performance | Memory use |
---|---|---|---|
Llama 3.2 (3B) | ▶️ Start for free | 2x faster | 60% less |
Llama 3.1 (8B) | ▶️ Start for free | 2x faster | 60% less |
Phi-3.5 (mini) | ▶️ Start for free | 2x faster | 50% less |
Gemma 2 (9B) | ▶️ Start for free | 2x faster | 63% less |
Mistral Small (22B) | ▶️ Start for free | 2x faster | 60% less |
Ollama | ▶️ Start for free | 1.9x faster | 43% less |
Mistral v0.3 (7B) | ▶️ Start for free | 2.2x faster | 73% less |
ORPO | ▶️ Start for free | 1.9x faster | 43% less |
DPO Zephyr | ▶️ Start for free | 1.9x faster | 43% less |
- Kaggle Notebooks for Llama 3.1 (8B), Gemma 2 (9B), Mistral (7B)
- Run the Llama 3.2 (1B and 3B) notebook and the Llama 3.2 conversational notebook
- Run the Llama 3.1 conversational notebook and the Mistral v0.3 ChatML notebook
- The text completion notebook is for continued pretraining on raw text
- The continued pretraining notebook is for learning another language
- Click here for detailed documentation for Unsloth.
🔗 Links and Resources
Type | Links |
---|---|
📚 Documentation & Wiki | Read Our Docs |
Twitter (aka X) | Follow us on X |
💾 Installation | unsloth/README.md |
🥇 Benchmarking | Performance Tables |
🌐 Released Models | Unsloth Releases |
✍️ Blog | Read our Blogs |
⭐ Key Features
- All kernels written in OpenAI's Triton language. Manual backprop engine.
- 0% loss in accuracy - no approximation methods - all exact.
- No change of hardware. Supports NVIDIA GPUs from 2018 onward, with minimum CUDA Capability 7.0 (V100, T4, Titan V, RTX 20/30/40 series, A100, H100, L40, etc.). Check your GPU! GTX 1070 and 1080 work, but are slow.
- Works on Linux and Windows via WSL.
- Supports 4bit and 16bit QLoRA / LoRA finetuning via bitsandbytes.
- The open-source version trains 5x faster - see Unsloth Pro for up to 30x faster training!
- If you trained a model with 🦥Unsloth, you can use this cool sticker!
💾 Installation Instructions
These are utilities for Unsloth, so install Unsloth as well! For the stable release of Unsloth Zoo, use:

```shell
pip install unsloth_zoo
```

For most installations, though, we recommend the latest version from GitHub:

```shell
pip install "unsloth_zoo @ git+https://github.com/unslothai/unsloth-zoo.git"
```
License
Unsloth Zoo is licensed under the GNU Affero General Public License.
File details
Details for the file unsloth_zoo-2024.11.4.tar.gz.
File metadata
- Download URL: unsloth_zoo-2024.11.4.tar.gz
- Upload date:
- Size: 29.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.4
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 1fe06cec59a516180ba2ca1887c754c4deca63cd5851099c1446283a699bebb2 |
MD5 | 52ff757eb6ee085fbf4dc556989f3b6a |
BLAKE2b-256 | ed5b329f9a523e137dd4632960039cdc3dd64b3147ba963536a62691c6409ea5 |
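If you download a distribution manually, you can check it against the digests above before installing. A minimal sketch using only Python's standard library; `sha256_of_file` and `verify` are hypothetical helper names, and the expected digest is the SHA256 value from the table above:

```python
import hashlib

# SHA256 digest of unsloth_zoo-2024.11.4.tar.gz, taken from the table above.
EXPECTED_SHA256 = "1fe06cec59a516180ba2ca1887c754c4deca63cd5851099c1446283a699bebb2"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected=EXPECTED_SHA256):
    """Raise ValueError if the file's SHA256 does not match the published digest."""
    actual = sha256_of_file(path)
    if actual != expected:
        raise ValueError(f"hash mismatch for {path}: got {actual}")
    return True
```

For example, `verify("unsloth_zoo-2024.11.4.tar.gz")` returns `True` only when the downloaded archive matches the digest published on this page. Note that `pip` already performs this check automatically when installing from PyPI.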
File details
Details for the file unsloth_zoo-2024.11.4-py3-none-any.whl.
File metadata
- Download URL: unsloth_zoo-2024.11.4-py3-none-any.whl
- Upload date:
- Size: 30.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.4
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 495bfd54810f3178fa34e079638f58e9afe3cdac725c2c61a8b8692eb083f2a8 |
MD5 | bae3fd0c5d0921058b46a81213b27773 |
BLAKE2b-256 | edfa45a9a03cbc1beeb0b3e20d1919544f7443359be28e9dc0bcaa2047fbbf9a |