Production-ready headless LLM fine-tuning with smart defaults, Windows support, and modular architecture
日本語 | 中文 | Español | Français | हिन्दी | Italiano | Português (BR)
Headless LLM fine-tuning in three lines of code, with export to Ollama in one more: smart defaults, VRAM-aware batch sizing, multi-run SLAO training, and one-click GGUF export.
Quick Start
```bash
pip install backpropagate[standard]
```

```python
from backpropagate import Trainer

trainer = Trainer("unsloth/Qwen2.5-7B-Instruct-bnb-4bit")
trainer.train("my_data.jsonl", steps=100)
trainer.export("gguf", quantization="q4_k_m")  # ready for Ollama
```
Why Backpropagate?
| Problem | Solution |
|---|---|
| Fine-tuning is complex | 3 lines: load, train, save |
| Windows is a nightmare | First-class Windows support |
| VRAM management is hard | Auto batch sizing, GPU monitoring |
| Model export is confusing | One-click GGUF + Ollama registration |
| Long runs cause forgetting | Multi-run SLAO training |
Key Features
- Headless by Design: Built for CI/CD pipelines, automated workflows, and programmatic execution.
- Smart Defaults: Automatically configures optimal hyperparameters based on your hardware and dataset.
- Multi-Run SLAO Training: Advanced training strategies to prevent catastrophic forgetting during long runs.
- First-Class Windows Support: Tested and optimized for Windows environments, avoiding common PyTorch/CUDA pitfalls.
- Seamless Export: One-click export to GGUF format and automatic registration with Ollama.
- Modular Architecture: Install only the dependencies you need (e.g., `[unsloth]`, `[ui]`, `[export]`).
Installation
```bash
pip install backpropagate              # core only (minimal)
pip install backpropagate[unsloth]     # + Unsloth: 2x faster training
pip install backpropagate[ui]          # + Gradio web UI
pip install backpropagate[standard]    # unsloth + ui (recommended)
pip install backpropagate[full]        # everything
```
| Extra | Description | Dependencies |
|---|---|---|
| `unsloth` | 2x faster training, 50% less VRAM | unsloth |
| `ui` | Gradio web interface | gradio>=5.6.0 |
| `validation` | Pydantic config validation | pydantic, pydantic-settings |
| `export` | GGUF export for Ollama | llama-cpp-python |
| `monitoring` | WandB + system monitoring | wandb, psutil |
Requirements: Python 3.10+ · CUDA GPU (8GB+ VRAM) · PyTorch 2.0+
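A quick sanity check of these requirements, using only standard library and PyTorch calls (not part of Backpropagate):

```python
# Environment check for Backpropagate's stated requirements.
import sys
import torch

assert sys.version_info >= (3, 10), "Python 3.10+ required"
assert int(torch.__version__.split(".")[0]) >= 2, "PyTorch 2.0+ required"
assert torch.cuda.is_available(), "CUDA GPU required"

vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
print(f"{torch.cuda.get_device_name(0)}: {vram_gb:.1f} GB VRAM (8+ recommended)")
```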
Usage
Basic Training
```python
from backpropagate import Trainer

trainer = Trainer("unsloth/Qwen2.5-7B-Instruct-bnb-4bit")
trainer.train("my_data.jsonl", steps=100)
trainer.save("./my-model")
trainer.export("gguf", quantization="q4_k_m")
```
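The JSONL schema Backpropagate expects is not spelled out in this README; the chat-style `messages` format used by most SFT stacks is a reasonable assumption, with one JSON object per line:

```jsonl
{"messages": [{"role": "user", "content": "What is LoRA?"}, {"role": "assistant", "content": "A parameter-efficient fine-tuning method that trains small low-rank adapter matrices."}]}
```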
Multi-Run SLAO Training
```python
from backpropagate import Trainer

trainer = Trainer("unsloth/Qwen2.5-7B-Instruct-bnb-4bit")
result = trainer.multi_run(
    dataset="HuggingFaceH4/ultrachat_200k",
    num_runs=5,
    steps_per_run=100,
    samples_per_run=1000,
    merge_mode="slao",  # smart LoRA merging
)
```
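The SLAO merge itself lives in `slao.py` and is not specified in this README. As intuition only, merging adapters from successive runs can be pictured as a weighted average of LoRA tensors, so later runs refine rather than overwrite earlier ones; the sketch below is illustrative and is not Backpropagate's actual algorithm:

```python
# Illustrative only: naive weighted averaging of LoRA adapter state dicts
# from successive runs. Backpropagate's real SLAO merge may differ.
import torch

def average_adapters(adapters: list[dict[str, torch.Tensor]],
                     weights: list[float]) -> dict[str, torch.Tensor]:
    """Merge adapter state dicts as a convex combination, key by key."""
    total = sum(weights)
    return {
        key: sum(w * a[key] for w, a in zip(weights, adapters)) / total
        for key in adapters[0]
    }
```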
Export to Ollama
```python
trainer.export(
    format="gguf",
    quantization="q4_k_m",
    register_ollama=True,
    model_name="my-finetuned-model",
)
# then: ollama run my-finetuned-model
```
CLI
```bash
backprop train --data my_data.jsonl --model unsloth/Qwen2.5-7B-Instruct-bnb-4bit --steps 100
backprop multi-run --data my_data.jsonl --runs 5 --steps 100
backprop export ./output/lora --format gguf --quantization q4_k_m --ollama --ollama-name my-model
backpropagate --ui --port 7862
```
Windows Support
Backpropagate is designed to work on Windows out of the box:
- Pre-tokenization to avoid multiprocessing crashes
- Automatic xformers disable for RTX 40/50 series
- Safe dataloader settings (see the sketch after this list)
- Tested on RTX 5080 (16GB VRAM)
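For context, this is the general shape of Windows-safe dataloader settings, since fork-based multiprocessing is unavailable there (illustrative plain PyTorch; Backpropagate applies its own equivalents internally):

```python
# Windows-safe DataLoader pattern (illustrative, not Backpropagate's code).
from torch.utils.data import DataLoader

dataset = list(range(8))  # stand-in for a pre-tokenized dataset

loader = DataLoader(
    dataset,
    batch_size=4,
    num_workers=0,    # no worker subprocesses: avoids Windows spawn/pickle crashes
    pin_memory=True,  # faster host-to-GPU copies
)
```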
Model Presets
| Preset | VRAM | Speed | Quality |
|---|---|---|---|
| Qwen 2.5 7B | ~12GB | Medium | Best |
| Qwen 2.5 3B | ~8GB | Fast | Good |
| Llama 3.2 3B | ~8GB | Fast | Good |
| Llama 3.2 1B | ~6GB | Fastest | Basic |
| Mistral 7B | ~12GB | Medium | Good |
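These presets presumably resolve to Hugging Face model IDs that can be passed directly to `Trainer`. The 7B ID appears throughout this README; the 3B ID below is inferred from Unsloth's naming convention, not taken from this document, so verify it on the Hub:

```python
from backpropagate import Trainer

# ~8GB VRAM preset. Model ID inferred from Unsloth's naming scheme --
# an assumption, not confirmed by this README.
trainer = Trainer("unsloth/Qwen2.5-3B-Instruct-bnb-4bit")
```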
Architecture
```text
backpropagate/
├── trainer.py      # Core Trainer class
├── multi_run.py    # Multi-run SLAO training
├── slao.py         # SLAO LoRA merging algorithm
├── datasets.py     # Dataset loading & filtering
├── export.py       # GGUF/Ollama export
├── config.py       # Pydantic settings
├── gpu_safety.py   # GPU monitoring & safety
└── ui.py           # Gradio interface
```
Privacy
All training happens locally on your GPU. Backpropagate makes no network requests except to download models from HuggingFace (which you initiate). No telemetry, no cloud dependency.
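For a hard guarantee of zero network traffic after the initial model download, the standard Hugging Face Hub offline switch works with libraries that load through `huggingface_hub`:

```bash
# Standard Hugging Face environment variable: resolve models from the
# local cache only; any attempted network access fails fast.
export HF_HUB_OFFLINE=1
```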
Scorecard
| Category | Score | Notes |
|---|---|---|
| A. Security | 10/10 | SECURITY.md, Bandit+Semgrep+Trivy+TruffleHog in CI, path traversal protection |
| B. Error Handling | 8/10 | Structured errors, GPU safety thresholds, checkpoint recovery |
| C. Operator Docs | 9/10 | README, CHANGELOG, modular install guide, CLI help |
| D. Shipping Hygiene | 9/10 | CI + tests (33 files), PyPI published, Codecov coverage |
| E. Identity | 10/10 | Logo, translations, landing page, PyPI listing |
| Total | 46/50 | |
License
MIT — see LICENSE for details.
Built by MCP Tool Shop
Download files
File details
Details for the file backpropagate-0.1.7.tar.gz.
File metadata
- Download URL: backpropagate-0.1.7.tar.gz
- Upload date:
- Size: 309.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `ff4fc429dd86701f0b2b0b8e403d113f3213e5be1d17a753d5ff13c2ed325b30` |
| MD5 | `cf0229ff83ce2956ac4015595a8075e0` |
| BLAKE2b-256 | `76f4da04a196daa9206504f22d1dba94e661c1ff9c4c26fc4a36a056925e3786` |
|
File details
Details for the file backpropagate-0.1.7-py3-none-any.whl.
File metadata
- Download URL: backpropagate-0.1.7-py3-none-any.whl
- Upload date:
- Size: 133.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `45e3fac62c289983689df3a30fca48742f9bc0f1c5dc60d6f6bb94ceea1abc51` |
| MD5 | `2ed40c1b2b2747470786b58ea894e799` |
| BLAKE2b-256 | `e6cad37e32b98d1ce1337ea9422b0281a8fc60eef96bffd94baeab8cb586265d` |