LLM Trainer
🎉 Latest Updates
- 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See examples to start training your own Magistral models with Axolotl!
- 2025/05: Quantization Aware Training (QAT) support has been added to Axolotl. Explore the docs to learn more!
- 2025/04: Llama 4 support has been added to Axolotl. See examples to start training your own Llama 4 models with Axolotl's linearized version!
- 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.
- 2025/03: (Beta) Fine-tuning Multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
- 2025/02: Axolotl has added LoRA optimizations to reduce memory usage and improve training speed for LoRA and QLoRA in single GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
- 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
- 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.
✨ Overview
Axolotl is a tool designed to streamline post-training for various AI models.
Features:
- Multiple Model Support: Train various models like LLaMA, Mistral, Mixtral, Pythia, and more. We are compatible with HuggingFace transformers causal language models.
- Training Methods: Full fine-tuning, LoRA, QLoRA, GPTQ, QAT, Preference Tuning (DPO, IPO, KTO, ORPO), RL (GRPO), Multimodal, and Reward Modelling (RM) / Process Reward Modelling (PRM).
- Easy Configuration: Re-use a single YAML file between dataset preprocess, training, evaluation, quantization, and inference (see the config sketch after this list).
- Performance Optimizations: Multipacking, Flash Attention, Xformers, Flex Attention, Liger Kernel, Cut Cross Entropy, Sequence Parallelism (SP), LoRA optimizations, Multi-GPU training (FSDP1, FSDP2, DeepSpeed), Multi-node training (Torchrun, Ray), and many more!
- Flexible Dataset Handling: Load from local, HuggingFace, and cloud (S3, Azure, GCP, OCI) datasets.
- Cloud Ready: We ship Docker images and also PyPI packages for use on cloud platforms and local hardware.
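To make the single-YAML workflow concrete, here is a minimal sketch of what a LoRA fine-tuning config can look like. The key names follow Axolotl's documented configuration options, but the values are illustrative placeholders rather than a tested recipe; see the Configuration Guide below for the full reference.

# Illustrative Axolotl config sketch (placeholder values, not a tested recipe)
base_model: meta-llama/Llama-3.2-1B      # any HuggingFace transformers causal LM

load_in_8bit: true                       # quantize the frozen base weights for LoRA
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: your-org/your-dataset          # local path, HF Hub id, or cloud URL (e.g. s3://...)
    type: alpaca                         # one of the supported dataset formats

sequence_len: 2048
sample_packing: true                     # multipacking
flash_attention: true

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
lr_scheduler: cosine
bf16: auto

output_dir: ./outputs/lora-out

The same file can then be passed to the preprocess, train, and inference steps of the CLI, which is what the Quick Start below does with the shipped example configs.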
🚀 Quick Start
Requirements:
- NVIDIA GPU (Ampere or newer for `bf16` and Flash Attention) or AMD GPU
- Python 3.11
- PyTorch ≥2.6.0
Installation
Using pip
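# Pin build tooling, then install Axolotl with the flash-attn and deepspeed extras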
pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs # OPTIONAL
Using Docker
Installing with Docker can be less error-prone than installing in your own environment.
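# Start an interactive container from the latest main image with all GPUs visible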
docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
Other installation approaches are described here.
Your First Fine-tune
# Fetch axolotl examples
axolotl fetch examples
# Or, specify a custom path
axolotl fetch examples --dest path/to/folder
# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml
That's it! Check out our Getting Started Guide for a more detailed walkthrough.
📚 Documentation
- Installation Options - Detailed setup instructions for different environments
- Configuration Guide - Full configuration options and examples
- Dataset Loading - Loading datasets from various sources
- Dataset Guide - Supported formats and how to use them
- Multi-GPU Training
- Multi-Node Training
- Multipacking
- API Reference - Auto-generated code documentation
- FAQ - Frequently asked questions
🤝 Getting Help
- Join our Discord community for support
- Check out our Examples directory
- Read our Debugging Guide
- Need dedicated support? Please contact ✉️wing@axolotl.ai for options
🌟 Contributing
Contributions are welcome! Please see our Contributing Guide for details.
❤️ Sponsors
Thank you to our sponsors who help make Axolotl possible:
- Modal - Modal lets you run jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
Interested in sponsoring? Contact us at wing@axolotl.ai
📜 License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.