
🚀 Auto-LoRA

The Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning.

PyPI · MIT License · Python 3.10+

Auto-LoRA is a powerful, scientific framework designed to take the guesswork out of Large Language Model (LLM) fine-tuning. By combining Bayesian Optimization (via Optuna) with High-Performance Training Engines (via Unsloth and PEFT), Auto-LoRA automatically identifies the optimal LoRA (Low-Rank Adaptation) configurations for your specific dataset and hardware constraints.


🌟 Key Features

🎯 Intelligent Hyperparameter Tuning

Stop guessing ranks and learning rates. Auto-LoRA uses Optuna to search for the best combination of:

  • LoRA Rank (r) and Alpha
  • Learning Rate and Scheduler
  • Dropout Rates
  • Target Modules
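
Auto-LoRA's actual search-space definitions live inside the package, but the idea can be sketched without the Optuna dependency. The ranges, choices, and names below are illustrative assumptions, not the package's real defaults; a plain random sampler stands in for Optuna's TPE sampler:

```python
import math
import random

# Illustrative LoRA search space (ranges are assumptions, not Auto-LoRA's defaults)
SEARCH_SPACE = {
    "lora_rank":     [8, 16, 32, 64],
    "lora_alpha":    [16, 32, 64],
    "learning_rate": (1e-5, 5e-4),           # log-uniform range
    "scheduler":     ["cosine", "linear"],
    "lora_dropout":  (0.0, 0.1),
    "target_modules": [
        ["q_proj", "v_proj"],
        ["q_proj", "k_proj", "v_proj", "o_proj"],
    ],
}

def sample_trial(rng: random.Random) -> dict:
    """Draw one hyperparameter configuration from the space above."""
    lo, hi = SEARCH_SPACE["learning_rate"]
    return {
        "lora_rank": rng.choice(SEARCH_SPACE["lora_rank"]),
        "lora_alpha": rng.choice(SEARCH_SPACE["lora_alpha"]),
        # sample the learning rate log-uniformly, as Optuna's
        # suggest_float(..., log=True) would
        "learning_rate": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "scheduler": rng.choice(SEARCH_SPACE["scheduler"]),
        "lora_dropout": rng.uniform(*SEARCH_SPACE["lora_dropout"]),
        "target_modules": rng.choice(SEARCH_SPACE["target_modules"]),
    }
```

In the real tool, each sampled configuration becomes one training trial, and the Bayesian sampler biases later draws toward regions that scored well.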

⚡ Unsloth Integration

Built-in support for Unsloth, providing:

  • 2x–5x faster training speeds.
  • 70% less VRAM usage.
  • Automatic fallback to standard PEFT if hardware is incompatible.
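
The fallback behavior is typically just an import probe. A minimal sketch (the function name is illustrative, not Auto-LoRA's API):

```python
def select_backend() -> str:
    """Prefer Unsloth's fused kernels; fall back to standard PEFT.

    Unsloth raises ImportError when it is not installed or the
    hardware/CUDA stack is unsupported, so a bare import doubles
    as a capability check.
    """
    try:
        import unsloth  # noqa: F401
        return "unsloth"
    except ImportError:
        return "peft"
```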

📊 Scientific Metric Suite

Move beyond simple loss curves. Auto-LoRA generates journal-grade reports including:

  • NLP Quality: ROUGE-L, BLEU, and Semantic Similarity (via Sentence-Transformers).
  • Inference Efficiency: Tokens Per Second (TPS), Latency (ms).
  • Hardware Profile: Peak VRAM usage, System VRAM efficiency.
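
These are standard metrics. ROUGE-L, for instance, is the F-measure over the longest common subsequence (LCS) of tokens between a candidate and a reference; a dependency-free sketch (not the package's internal implementation, which likely delegates to a metrics library):

```python
def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1: F-measure over the longest common subsequence of tokens."""
    cand, ref = candidate.split(), reference.split()
    # classic O(m*n) LCS dynamic program
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand):
        for j, r in enumerate(ref):
            dp[i + 1][j + 1] = dp[i][j] + 1 if c == r else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l("the cat on the mat", "the cat sat on the mat")` has an LCS of 5 tokens, giving precision 5/5, recall 5/6, and F1 ≈ 0.909.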

📈 Dynamic Visualization

Generate stunning HTML dashboards and publication-quality Matplotlib charts with a single command.


🚀 Quick Start

Installation

Standard Installation (Recommended)

pip install auto-lora

From Source (For Developers)

git clone https://github.com/shrey1720/auto-lora.git
cd auto-lora
pip install -e ".[dev]"

Recommended for NVIDIA GPUs

pip install unsloth xformers

🛠 Usage Guide

1. System Health Check

Ensure your GPU and VRAM are ready for training.

auto-lora doctor

2. Basic Training

Train with default settings and automatic tuning.

auto-lora train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 5
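
The schema of `my_dataset.json` is not shown on this page; a common instruction-tuning layout looks like the following (this structure is an assumption — check the project documentation for the actual expected format):

```json
[
  {
    "instruction": "Summarize the following text.",
    "input": "Large language models are neural networks trained on large text corpora...",
    "output": "LLMs are neural networks trained on large amounts of text."
  }
]
```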

3. Using Expert Presets

Auto-LoRA comes with pre-configured settings for specific domains:

  • --preset chatbot: Optimized for conversational flow.
  • --preset coding: Lower learning rate, optimized for logic.
  • --preset summarization: Focuses on context retention.

4. Scientific Benchmarking

Compare your trained adapter against ground truth answers to get a technical profile.

auto-lora benchmark --run <run_id> --references test_set.json
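
The efficiency side of the benchmark (TPS and latency) can be measured with simple wall-clock timing. A minimal sketch, with `generate` standing in for the real model call (the function and its signature are assumptions for illustration):

```python
import time

def profile_inference(generate, prompts):
    """Measure mean per-prompt latency (ms) and aggregate tokens per second.

    `generate` is any callable taking a prompt string and returning the
    list of generated tokens -- a stand-in for the actual model call.
    """
    latencies_ms, total_tokens, total_seconds = [], 0, 0.0
    for prompt in prompts:
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        latencies_ms.append(elapsed * 1000)
        total_tokens += len(tokens)
        total_seconds += elapsed
    return {
        "mean_latency_ms": sum(latencies_ms) / len(latencies_ms),
        "tokens_per_second": total_tokens / total_seconds if total_seconds else 0.0,
    }
```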

📂 Project Architecture

The system is modular by design, making it easy to extend:

auto_lora/
├── tuner/        # Bayesian optimization and search spaces
├── trainer/      # LoRA/QLoRA engine (Unsloth & PEFT)
├── dataset/      # Dynamic loading and scientific validation
├── hardware/     # VRAM analysis and hardware-aware strategy
├── metrics/      # Scorer engine (NLP & Performance)
├── reports/      # HTML Exporters and Chart Generators
├── db/           # SQLite persistence for all runs/trials
└── cli/          # Typer-powered command interface
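
The `db/` layer persists runs and trials in SQLite (part of the Python standard library). A minimal sketch of what that persistence might look like — the table and column names here are assumptions, not Auto-LoRA's actual schema:

```python
import json
import sqlite3

# Hypothetical schema: one row per optimization trial.
SCHEMA = """
CREATE TABLE IF NOT EXISTS trials (
    run_id    TEXT NOT NULL,
    trial_id  INTEGER NOT NULL,
    params    TEXT NOT NULL,   -- JSON-encoded hyperparameters
    eval_loss REAL,
    PRIMARY KEY (run_id, trial_id)
);
"""

def record_trial(conn, run_id, trial_id, params, eval_loss):
    """Persist one finished trial."""
    conn.execute(
        "INSERT INTO trials VALUES (?, ?, ?, ?)",
        (run_id, trial_id, json.dumps(params), eval_loss),
    )
    conn.commit()

def best_trial(conn, run_id):
    """Return (trial_id, params, eval_loss) for the lowest-loss trial in a run."""
    row = conn.execute(
        "SELECT trial_id, params, eval_loss FROM trials "
        "WHERE run_id = ? ORDER BY eval_loss ASC LIMIT 1",
        (run_id,),
    ).fetchone()
    return (row[0], json.loads(row[1]), row[2]) if row else None
```

Keeping every trial in one local database is what lets the `benchmark` and reporting commands compare runs after the fact.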

🔬 Technical Roadmap

  • Multi-GPU Support: DDP and FSDP integration.
  • DPO Tuning: Direct Preference Optimization tuning loop.
  • Custom Scoring Functions: Allow users to define their own success metrics.
  • HuggingFace Hub Integration: Direct upload of tuned adapters.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Source Distribution

auto_lora-0.2.1.tar.gz (61.6 kB)

Built Distribution

auto_lora-0.2.1-py3-none-any.whl (72.4 kB)

File details

Details for the file auto_lora-0.2.1.tar.gz.

  • Download URL: auto_lora-0.2.1.tar.gz
  • Size: 61.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

Hashes for auto_lora-0.2.1.tar.gz:

  • SHA256: 545572a7d909154fc424cb286dff00f4e7473342537649ba32ef9176037effa5
  • MD5: 9d3dcf40c9eb5852da4840e4f3ab94d1
  • BLAKE2b-256: 829e0d326d759cbb5bc79f9e717fd44d782d3283d6e36771f6c203e2bac9b0e9

File details

Details for the file auto_lora-0.2.1-py3-none-any.whl.

  • Download URL: auto_lora-0.2.1-py3-none-any.whl
  • Size: 72.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

Hashes for auto_lora-0.2.1-py3-none-any.whl:

  • SHA256: d01518659d8e43f1961c2c69b71d44f48eebcf7af78743ec34f609810a142a9d
  • MD5: a81207753a69464c31cb688d6cf2e2f0
  • BLAKE2b-256: c18597e87aad4e53a8e49bbddf0fe8f5fab08b7d8c2c863dd3c881c8e5d780de
