# easylora

Batteries-included toolkit for LoRA / QLoRA fine-tuning with Hugging Face Transformers.
Fine-tune any causal language model with LoRA in under 20 lines of Python, or with a single CLI command.
## What is LoRA / QLoRA?
LoRA (Low-Rank Adaptation) freezes pre-trained model weights and injects small trainable rank-decomposition matrices, reducing trainable parameters by orders of magnitude.
QLoRA adds 4-bit quantisation on top of LoRA, enabling fine-tuning of large models on consumer GPUs.
easylora wraps transformers + peft with safe defaults, reproducibility, and a clean config-driven API.
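The savings are easy to quantify: for a frozen weight of shape `(d_out, d_in)`, LoRA trains only a pair `B` of shape `(d_out, r)` and `A` of shape `(r, d_in)`. A back-of-the-envelope count in plain Python (not the easylora API) for one typical attention projection:

```python
# Trainable-parameter arithmetic behind LoRA.
# A frozen weight W of shape (d_out, d_in) is adapted as W + (alpha / r) * B @ A,
# where B is (d_out, r) and A is (r, d_in); only A and B are trained.

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters added by one LoRA pair for a single linear layer."""
    return r * (d_out + d_in)

full = 4096 * 4096                            # full fine-tuning of one 4096x4096 projection
lora = lora_trainable_params(4096, 4096, r=16)
print(full, lora, full / lora)                # 16777216 131072 128.0
```

At rank 16 this layer trains 128x fewer parameters than full fine-tuning; the ratio grows with layer width and shrinks with rank.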
## Installation

```bash
pip install easylora
```
Optional extras:

```bash
pip install "easylora[qlora]"  # adds bitsandbytes for 4-bit/8-bit quantisation
pip install "easylora[wandb]"  # adds Weights & Biases logging
pip install "easylora[dev]"    # adds ruff, pyright, pytest, mkdocs, pre-commit
pip install "easylora[all]"    # everything
```
For development from source:

```bash
git clone https://github.com/alexsuw/easylora.git
cd easylora
pip install -e ".[dev]"
```
## Quickstart (Python)
```python
from easylora import train, TrainConfig
from easylora.config import ModelConfig, DataConfig

config = TrainConfig(
    model=ModelConfig(base_model="meta-llama/Llama-3.2-1B"),
    data=DataConfig(
        dataset_name="tatsu-lab/alpaca",
        format="alpaca",
        max_seq_len=2048,
    ),
)

artifacts = train(config)
print(f"Adapter saved to: {artifacts.adapter_dir}")
```
### QLoRA (4-bit)
```python
config = TrainConfig(
    model=ModelConfig(base_model="meta-llama/Llama-3.2-1B", load_in_4bit=True),
    data=DataConfig(dataset_name="tatsu-lab/alpaca", format="alpaca"),
)

artifacts = train(config)
```
## Quickstart (CLI)
```bash
# Generate a starter config
easylora init-config --template sft-lora

# Train
easylora train --config easylora_config.yaml

# Train with autopilot (no config file)
easylora train \
  --autopilot \
  --model meta-llama/Llama-3.2-1B \
  --dataset tatsu-lab/alpaca

# Validate config without training
easylora train --config config.yaml --dry-run

# Plan autopilot choices without training
easylora autopilot plan \
  --model meta-llama/Llama-3.2-1B \
  --dataset tatsu-lab/alpaca \
  --quality balanced

# Evaluate
easylora eval --base-model meta-llama/Llama-3.2-1B --adapter-dir ./output/adapter --dataset eval.jsonl

# Merge adapter into base model
easylora merge --base-model meta-llama/Llama-3.2-1B --adapter-dir ./output/adapter --output-dir ./merged

# Check environment
easylora doctor
```
## Config Reference
Config files use YAML or JSON. See the full reference.
```yaml
model:
  base_model: "meta-llama/Llama-3.2-1B"
  torch_dtype: "auto"

data:
  dataset_name: "tatsu-lab/alpaca"
  format: "alpaca"
  max_seq_len: 2048

lora:
  r: 16
  alpha: 32
  target_modules: "auto"

training:
  epochs: 3
  batch_size: 4
  grad_accum: 4
  gradient_checkpointing: true

output:
  output_dir: "./output"

repro:
  seed: 42
```
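One consequence of the training block above worth noting: the effective batch size is `batch_size * grad_accum`, since gradient accumulation multiplies the per-step batch without extra memory. A quick sanity check using the JSON equivalent of that block (easylora accepts YAML or JSON):

```python
import json

# JSON equivalent of the training block from the config above.
config_json = '{"training": {"epochs": 3, "batch_size": 4, "grad_accum": 4}}'
training = json.loads(config_json)["training"]

# Gradient accumulation multiplies the per-step batch without extra memory.
effective_batch = training["batch_size"] * training["grad_accum"]
print(effective_batch)  # 16
```

So under CUDA OOM pressure you can halve `batch_size` and double `grad_accum` to keep the effective batch, and hence the optimisation behaviour, roughly unchanged.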
## Autopilot Training

Autopilot generates a full `TrainConfig` from minimal inputs, then runs the existing training pipeline unchanged.
```python
from easylora import autopilot_plan, autopilot_train

plan = autopilot_plan(
    model="meta-llama/Llama-3.2-1B",
    dataset="tatsu-lab/alpaca",
    quality="balanced",  # fast | balanced | high
)
print(plan.to_pretty_lines())

artifacts = autopilot_train(
    model="meta-llama/Llama-3.2-1B",
    dataset="tatsu-lab/alpaca",
)
```
When autopilot is used, easylora also saves:

- `resolved_config.yaml`: the full resolved configuration used for training
- `autopilot_report.json`: hardware profile, dataset/model analysis, chosen values, and reasoning
## Model Compatibility

easylora auto-detects LoRA target modules for 16+ architectures:

| Family | Models |
|---|---|
| LLaMA | LLaMA 1/2/3, Code Llama, Vicuna |
| Mistral | Mistral, Mixtral |
| Qwen | Qwen, Qwen2 |
| Google | Gemma, Gemma 2 |
| Microsoft | Phi-2, Phi-3 |
| Others | Falcon, GPT-NeoX, Pythia, MPT, Bloom, OPT, GPT-2, StarCoder |
For unknown architectures, easylora scans the model for `nn.Linear` layers and selects attention-like modules automatically. Use `easylora inspect-targets --model <id>` to preview what would be selected.
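easylora's actual scanning logic is internal, but the idea can be sketched in plain Python: collect module names, keep those whose final component matches a common attention-projection name. The hint list below (`q_proj`, `k_proj`, etc.) is an illustrative assumption, not easylora's real table:

```python
# Hypothetical sketch of attention-like target selection.
# The hint names are assumptions based on common HF architectures;
# easylora's real detection logic may differ.
ATTENTION_HINTS = ("q_proj", "k_proj", "v_proj", "o_proj", "query", "key", "value")

def pick_targets(module_names: list[str]) -> list[str]:
    """Return the attention-like leaf-module names found in a model."""
    return sorted({name.rsplit(".", 1)[-1] for name in module_names
                   if name.rsplit(".", 1)[-1] in ATTENTION_HINTS})

names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
    "model.layers.0.self_attn.v_proj",
    "model.layers.0.mlp.gate_proj",   # MLP layer, not selected
]
print(pick_targets(names))  # ['k_proj', 'q_proj', 'v_proj']
```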
## Merging Adapters
```python
from easylora import merge_adapter

merge_adapter(
    base_model_name_or_path="meta-llama/Llama-3.2-1B",
    adapter_dir="./output/adapter",
    output_dir="./merged_model",
)
```
The merged model loads with `AutoModelForCausalLM.from_pretrained` without PEFT.
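Mathematically, merging folds the low-rank update into the base weight: `W_merged = W + (alpha / r) * B @ A`. A tiny pure-Python illustration with 2x2 matrices and rank 1 (not the easylora implementation, which operates on real model tensors):

```python
# Merge math: W_merged = W + (alpha / r) * B @ A, shown on toy matrices.

def matmul(B, A):
    """Naive matrix multiply for small lists-of-lists."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity for clarity)
B = [[1.0], [2.0]]             # LoRA B, shape (2, 1)
A = [[3.0, 4.0]]               # LoRA A, shape (1, 2)
alpha, r = 32, 16
scale = alpha / r              # 2.0, matching the config defaults above

delta = matmul(B, A)           # rank-1 update [[3, 4], [6, 8]]
W_merged = [[W[i][j] + scale * delta[i][j] for j in range(2)] for i in range(2)]
print(W_merged)  # [[7.0, 8.0], [12.0, 17.0]]
```

After merging, the adapter's contribution is baked in, which is why the result needs no PEFT machinery at inference time.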
## Dataset Formats

| Format | Columns | Description |
|---|---|---|
| `raw` | `text` | Single text field for language modelling |
| `alpaca` | `instruction`, `input` (optional), `output` | Instruction-following with prompt masking |
| `chatml` | `messages` | Chat messages with role/content dicts |
## Output Artifacts

```
output/
  adapter/            # LoRA adapter weights
  train_config.json   # Config used for this run
  train_log.jsonl     # Step-by-step training metrics
  summary.json        # Final loss, steps, runtime
  metadata.json       # Base model, versions, timestamp
  logs.jsonl          # Application logs
```
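Because `train_log.jsonl` is plain JSON Lines, it can be post-processed with the standard library alone. A sketch against a synthetic two-step log (the `step`/`loss` field names are illustrative assumptions; check your own log for the exact keys):

```python
import json

# Synthetic stand-in for output/train_log.jsonl; real field names may differ.
log_lines = [
    '{"step": 1, "loss": 2.31}',
    '{"step": 2, "loss": 1.98}',
]

# Each line is an independent JSON object, one record per training step.
records = [json.loads(line) for line in log_lines]
final_loss = records[-1]["loss"]
print(final_loss)  # 1.98
```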
## Troubleshooting

| Issue | Solution |
|---|---|
| `bitsandbytes` not installed | `pip install bitsandbytes` (CUDA required) |
| CUDA OOM | Reduce `batch_size`, increase `grad_accum`, enable `gradient_checkpointing`, use QLoRA |
| `pad_token` was None | Handled automatically (set to `eos_token`) |
| Output dir exists | Use `--force` or `allow_overwrite: true` |

Run `easylora doctor` for environment diagnostics.
## Running Tests

```bash
pip install -e ".[dev]"
make test        # fast tests
make test-slow   # includes smoke training
```
## Contributing
Contributions are welcome. See CONTRIBUTING.md for dev setup, code style, and the PR process.
## License