# Auto-LoRA

🚀 Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning
Auto-LoRA automates parameter-efficient fine-tuning of Large Language Models using LoRA/QLoRA. It reduces trial-and-error by automatically selecting optimal training configurations based on dataset characteristics and available hardware.
## Features
- ⚡ Unsloth Powered — 2x–5x faster training and 70% less VRAM (automatic detection)
- 📊 Scientific Benchmarking — Journal-grade metrics (ROUGE-L, TPS, Latency, Peak VRAM)
- 🎨 Dynamic Feedback — Real-time row-by-row dataset loading and analysis progress bars
- 🎯 Automatic Tuning — Bayesian optimization via Optuna finds the best LoRA configuration
- 🔍 Hardware-Aware — Auto-detects GPU/VRAM and adjusts strategy (LoRA vs QLoRA)
- 📈 Rich Reports — HTML dashboards, publication-quality charts, CSV exports
- 🔧 Presets — Optimized defaults for chatbot, coding, QA, summarization, domain tasks
- 💾 Run History — SQLite tracking of all runs, trials, and artifacts
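The hardware-aware strategy choice can be sketched as a simple VRAM budget check. The thresholds below are illustrative rules of thumb (fp16 weights need roughly 2 bytes per parameter, 4-bit quantized weights roughly 0.5), not Auto-LoRA's actual internal heuristics:

```python
def pick_strategy(vram_gb: float, model_params_b: float) -> str:
    """Pick a fine-tuning strategy from available VRAM (illustrative heuristic).

    LoRA on an fp16 base model needs ~2 bytes/param plus optimizer and
    activation overhead; QLoRA quantizes the base to 4-bit (~0.5 bytes/param),
    roughly quartering the weight memory.
    """
    lora_need = model_params_b * 2 * 1.4    # fp16 weights + ~40% overhead (GB)
    qlora_need = model_params_b * 0.5 * 1.6  # 4-bit weights + overhead (GB)
    if vram_gb >= lora_need:
        return "lora"
    if vram_gb >= qlora_need:
        return "qlora"
    return "cpu-offload"

# A 7B model fits LoRA on a 24 GB GPU but needs QLoRA on an 8 GB card.
print(pick_strategy(24, 7))  # lora
print(pick_strategy(8, 7))   # qlora
```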
## Quick Start

### Installation
```bash
# Clone the repository
git clone https://github.com/shrey1720/alora.git
cd alora

# Install in development mode
pip install -e ".[dev]"

# Optional: install Unsloth for a 2-5x speedup (recommended for NVIDIA GPUs)
pip install unsloth xformers
```
### Usage
```bash
# 1. Check system health
auto-lora doctor

# 2. Initialize project
auto-lora init

# 3. Train with automatic tuning
auto-lora train --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --data datasets/sample_faq.json --max-trials 5

# 4. View results
auto-lora runs list
auto-lora runs show <run_id>

# 5. Generate report
auto-lora report --run <run_id>

# 6. Run a scientific benchmark
# Generates a journal-ready technical profile (TPS, ROUGE-L, VRAM)
auto-lora benchmark --run <run_id> --references gold_answers.txt

# 7. Export adapter
auto-lora export --run <run_id>
```
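The ROUGE-L score that the benchmark step reports is based on the longest common subsequence (LCS) between a generated answer and a gold reference. Here is a minimal stdlib-only sketch of the metric; Auto-LoRA's shipped implementation may tokenize and aggregate differently:

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 via longest common subsequence of whitespace tokens."""
    cand, ref = candidate.split(), reference.split()
    # Dynamic-programming LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand):
        for j, r in enumerate(ref):
            dp[i + 1][j + 1] = dp[i][j] + 1 if c == r else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# LCS of the two sentences is 5 tokens out of 6 on each side: F1 = 5/6.
print(round(rouge_l_f1("the cat sat on the mat", "the cat is on the mat"), 3))  # 0.833
```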
### Using Presets

```bash
# Chatbot preset (conversational focus)
auto-lora train --model mistralai/Mistral-7B-v0.1 --data chat_data.json --preset chatbot

# Coding preset (lower LR, code tokenization)
auto-lora train --model codellama/CodeLlama-7b-hf --data code_data.json --preset coding

# QA preset (fact-oriented)
auto-lora train --model meta-llama/Llama-2-7b-hf --data qa_data.json --preset qa
```
## CLI Commands

| Command | Description |
|---|---|
| `auto-lora init` | Initialize project structure |
| `auto-lora doctor` | System health check |
| `auto-lora train` | Train with automatic tuning |
| `auto-lora runs list` | List all training runs |
| `auto-lora runs show <id>` | Show run details |
| `auto-lora report --run <id>` | Generate HTML report |
| `auto-lora benchmark --run <id>` | Benchmark trained adapter |
| `auto-lora export --run <id>` | Export adapter for deployment |
## Architecture

```text
auto_lora/
├── cli/       # Typer CLI commands
├── core/      # Config, presets, constants
├── hardware/  # GPU/CPU detection
├── dataset/   # Loading, validation, formatting
├── tuner/     # Optuna hyperparameter search
├── trainer/   # LoRA/QLoRA training engine
├── metrics/   # Scoring, benchmarking
├── reports/   # HTML reports, charts, exports
├── db/        # SQLite run/trial storage
└── utils/     # Logging, display, filesystem
```
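The `db/` layer records runs and their Optuna trials in SQLite. The schema below is a hypothetical sketch (the real tables in `auto_lora/db/` may differ), showing how `auto-lora runs show <id>` could recover the best trial from stored history:

```python
import sqlite3

# Illustrative run/trial schema — not Auto-LoRA's actual one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE runs   (id INTEGER PRIMARY KEY, model TEXT, preset TEXT);
    CREATE TABLE trials (id INTEGER PRIMARY KEY,
                         run_id INTEGER REFERENCES runs(id),
                         lora_r INTEGER, learning_rate REAL, eval_score REAL);
""")

# One run with three tuning trials.
run_id = conn.execute("INSERT INTO runs (model, preset) VALUES (?, ?)",
                      ("TinyLlama/TinyLlama-1.1B-Chat-v1.0", "qa")).lastrowid
for r, lr, score in [(8, 1e-4, 0.71), (16, 2e-4, 0.78), (32, 1e-4, 0.74)]:
    conn.execute("INSERT INTO trials (run_id, lora_r, learning_rate, eval_score) "
                 "VALUES (?, ?, ?, ?)", (run_id, r, lr, score))

# Retrieve the best-scoring trial for the run.
best = conn.execute("SELECT lora_r, eval_score FROM trials WHERE run_id = ? "
                    "ORDER BY eval_score DESC LIMIT 1", (run_id,)).fetchone()
print(best)  # (16, 0.78)
```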
## Requirements
- Python >= 3.10
- PyTorch >= 2.1.0
- NVIDIA GPU with CUDA (recommended; CPU-only training is supported)
- 8GB+ VRAM for QLoRA, 16GB+ for LoRA
## License
MIT
## Download files
### Source distribution: `auto_lora-0.1.0.tar.gz`
#### File metadata
- Download URL: auto_lora-0.1.0.tar.gz
- Upload date:
- Size: 61.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `350e335743e90b187ad5a092db2dc67c9b6e65af4751a86a7ed56b77d7c98b1c` |
| MD5 | `82b7867c1588b0c0307d088580d44b77` |
| BLAKE2b-256 | `655b44164d3acec26922bf2cfa2278a300918006e90565a2e3d98ada57508d11` |
### Built distribution: `auto_lora-0.1.0-py3-none-any.whl`
#### File metadata
- Download URL: auto_lora-0.1.0-py3-none-any.whl
- Upload date:
- Size: 72.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b4b710a29aee33470c6cb41d79f227250a21fce7e6fd864e3e1769d7a6e8c8ff` |
| MD5 | `2c2d2e479d14c3ee0a61b9ba9dce7847` |
| BLAKE2b-256 | `d152a7797e3918aa1adef0797a37ee3d01706a46527cba13150238da1e7c983c` |