🚀 Ellora
The Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning.
Ellora is a powerful, scientific framework designed to take the guesswork out of Large Language Model (LLM) fine-tuning. By combining Bayesian optimization (via Optuna) with high-performance training engines (via Unsloth and PEFT), Ellora automatically identifies the optimal LoRA (Low-Rank Adaptation) configurations for your specific dataset and hardware constraints.
🌟 Key Features
🎯 Intelligent Hyperparameter Tuning
Stop guessing ranks and learning rates. Ellora uses Optuna to search for the best combination of:
- LoRA Rank (r) and Alpha
- Learning Rate and Scheduler
- Dropout Rates
- Target Modules
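To make the search concrete, here is a minimal Optuna sketch over roughly these dimensions. It is an illustration of the technique, not Ellora's actual internals; `mock_train` is a hypothetical stand-in for a real fine-tuning run that would return validation loss.

```python
import optuna

def mock_train(r: int, alpha: int, lr: float, dropout: float) -> float:
    # Hypothetical stand-in so the sketch runs end to end; a real
    # objective would train a LoRA adapter and return validation loss.
    return (lr * 1e4 - 2.0) ** 2 + dropout + 1.0 / r + abs(alpha / r - 2.0)

def objective(trial: optuna.Trial) -> float:
    r = trial.suggest_categorical("lora_r", [8, 16, 32, 64])
    alpha = trial.suggest_categorical("lora_alpha", [16, 32, 64])
    lr = trial.suggest_float("learning_rate", 1e-5, 5e-4, log=True)
    dropout = trial.suggest_float("lora_dropout", 0.0, 0.2)
    return mock_train(r, alpha, lr, dropout)

study = optuna.create_study(direction="minimize")  # TPE sampler by default
study.optimize(objective, n_trials=5)
print(study.best_params)
```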
⚡ Unsloth Integration
Built-in support for Unsloth, providing:
- 2x–5x faster training.
- 70% less VRAM usage.
- Automatic fallback to standard PEFT when Unsloth cannot run on your hardware (a sketch of this pattern follows the list).
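A plausible shape for that fallback, assuming the usual optional-dependency pattern (an assumption, not Ellora's verbatim code):

```python
# Prefer Unsloth's fast kernels when the package (and a supported GPU
# stack) is available; otherwise signal callers to use vanilla PEFT.
try:
    from unsloth import FastLanguageModel
    UNSLOTH_AVAILABLE = True
except ImportError:
    UNSLOTH_AVAILABLE = False
```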
📊 Scientific Metric Suite
Move beyond simple loss curves. Ellora generates journal-grade reports including:
- NLP Quality: ROUGE-L, BLEU, and Semantic Similarity (via Sentence-Transformers).
- Inference Efficiency: Tokens Per Second (TPS), Latency (ms).
- Hardware Profile: Peak VRAM usage, System VRAM efficiency.
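For intuition, here is a minimal sketch of how scores like these are commonly computed with off-the-shelf libraries (rouge-score and sentence-transformers). The example strings and the throughput numbers are made up for illustration.

```python
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

prediction = "The treaty was signed in 1648, ending the war."
reference = "The war ended when the treaty was signed in 1648."

# Lexical overlap: ROUGE-L F-measure between reference and prediction.
rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = rouge.score(reference, prediction)["rougeL"].fmeasure

# Semantic similarity: cosine similarity between sentence embeddings.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
emb = embedder.encode([prediction, reference], convert_to_tensor=True)
semantic = util.cos_sim(emb[0], emb[1]).item()

# Throughput: tokens per second is generated tokens over wall-clock time.
tokens_generated, elapsed_s = 128, 1.7  # made-up numbers for illustration
tps = tokens_generated / elapsed_s

print(f"ROUGE-L={rouge_l:.3f}  semantic={semantic:.3f}  TPS={tps:.1f}")
```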
📈 Dynamic Visualization
Generate stunning HTML dashboards and publication-quality Matplotlib charts with a single command.
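Since the charts are standard Matplotlib figures, they are easy to restyle. A toy example (with made-up trial numbers) of the kind of figure such a report might contain:

```python
import matplotlib.pyplot as plt

trial_ids = [1, 2, 3, 4, 5]
val_losses = [1.92, 1.71, 1.64, 1.80, 1.58]  # made-up values for illustration

fig, ax = plt.subplots(figsize=(6, 4), dpi=150)
ax.plot(trial_ids, val_losses, marker="o")
ax.set_xlabel("Trial")
ax.set_ylabel("Validation loss")
ax.set_title("Optuna trial history")
fig.tight_layout()
fig.savefig("trial_history.png")
```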
🚀 Quick Start
Installation
Standard Installation (Recommended)
pip install ellora
From Source (For Developers)
git clone https://github.com/shrey1720/ellora.git
cd ellora
pip install -e ".[dev]"
Recommended for NVIDIA GPUs
pip install unsloth xformers
🛠 Usage Guide
1. System Health Check
Ensure your GPU and VRAM are ready for training.
ellora doctor
2. Basic Training
Train with default settings and automatic tuning.
ellora train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 5
3. Using Expert Presets
Ellora comes with pre-configured settings for specific domains:
- --preset chatbot: Optimized for conversational flow.
- --preset coding: Lower learning rate, optimized for logic.
- --preset summarization: Focuses on context retention.
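For example, combining a preset with the training command from step 2:
ellora train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --preset coding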
4. Safe Carry-Forward Training
Tune as usual, then continue the best trial for extra epochs from its checkpoint.
ellora train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 3 --continue-best-epochs 2
5. Scientific Benchmarking
Compare your trained adapter against ground truth answers to get a technical profile.
ellora benchmark --run <run_id> --references test_set.json
📂 Project Architecture
The system has a modular design for extensibility:
ellora/
├── tuner/ # Bayesian optimization and search spaces
├── trainer/ # LoRA/QLoRA engine (Unsloth & PEFT)
├── dataset/ # Dynamic loading and scientific validation
├── hardware/ # VRAM analysis and hardware-aware strategy
├── metrics/ # Scorer engine (NLP & Performance)
├── reports/ # HTML Exporters and Chart Generators
├── db/ # SQLite persistence for all runs/trials
└── cli/ # Typer-powered command interface
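For orientation, the trainer/ module presumably maps each trial's sampled values onto a PEFT LoraConfig. A minimal sketch under that assumption; the target modules named here are the usual Llama-style attention projections, not values taken from Ellora's source:

```python
from peft import LoraConfig

# Hypothetical mapping from one trial's sampled values to a PEFT config.
lora_config = LoraConfig(
    r=16,                    # LoRA rank sampled by the tuner
    lora_alpha=32,           # scaling factor, often 1x-2x the rank
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```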
🔬 Technical Roadmap
- Multi-GPU Support: DDP and FSDP integration.
- DPO Tuning: Direct Preference Optimization tuning loop.
- Custom Scoring Functions: Allow users to define their own success metrics.
- HuggingFace Hub Integration: Direct upload of tuned adapters.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
File details
Details for the file ellora-0.2.6.tar.gz.
File metadata
- Download URL: ellora-0.2.6.tar.gz
- Upload date:
- Size: 66.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 738b306b4a77d461b2c1e9aaa4189900c6c2efddc6a8247aad85fe55f9381ffe |
| MD5 | 6ba957d7680df031f811eca3970bc748 |
| BLAKE2b-256 | bc127ce35d99b08131b9ff87fcdffcc291257a3dc8ec7c6d394cf742d5aa64f9 |
File details
Details for the file ellora-0.2.6-py3-none-any.whl.
File metadata
- Download URL: ellora-0.2.6-py3-none-any.whl
- Upload date:
- Size: 75.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6350108c77b0b145d4fb72eebc01d9825caffb1c1bf5c0cb3e9518982afed76d |
| MD5 | 1c691ab67162405599b4f8a4bd2facad |
| BLAKE2b-256 | 0e9162bc5f2fd666add6349f9783cfc125e9ef3d523eb0ef54f9201f3504edd5 |