
🚀 Adyft

The Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning.

PyPI version · License: MIT · Python 3.10+

Adyft is a scientific framework designed to take the guesswork out of Large Language Model (LLM) fine-tuning. By combining Bayesian optimization (via Optuna) with high-performance training engines (via Unsloth and PEFT), Adyft automatically identifies the LoRA (Low-Rank Adaptation) configuration best suited to your dataset and hardware constraints.


🌟 Key Features

🎯 Intelligent Hyperparameter Tuning

Stop guessing ranks and learning rates. Adyft uses Optuna to search for the best combination of:

  • LoRA Rank (r) and Alpha
  • Learning Rate and Scheduler
  • Dropout Rates
  • Target Modules
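
Under the hood, an Optuna search over these knobs reduces to an objective function plus a search space. The block below is a minimal sketch of that pattern, not Adyft's actual code; the parameter ranges and the stubbed `train_and_evaluate` helper are illustrative assumptions:

```python
import optuna

def train_and_evaluate(r, alpha, lr, dropout, scheduler) -> float:
    # Stand-in for one fine-tuning trial; returns a synthetic loss so the
    # sketch runs end to end. A real objective would train and evaluate a model.
    return lr * 100 + dropout

def objective(trial: optuna.Trial) -> float:
    # Illustrative LoRA search space; the ranges are assumptions, not Adyft's.
    r = trial.suggest_categorical("lora_r", [8, 16, 32, 64])
    alpha = trial.suggest_categorical("lora_alpha", [16, 32, 64])
    lr = trial.suggest_float("learning_rate", 1e-5, 5e-4, log=True)
    dropout = trial.suggest_float("lora_dropout", 0.0, 0.2)
    scheduler = trial.suggest_categorical("lr_scheduler", ["cosine", "linear"])
    return train_and_evaluate(r, alpha, lr, dropout, scheduler)

study = optuna.create_study(direction="minimize")  # Bayesian (TPE) by default
study.optimize(objective, n_trials=10)
print(study.best_params)
```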

⚡ Unsloth Integration

Built-in support for Unsloth, providing:

  • 2x–5x faster training speeds.
  • 70% less VRAM usage.
  • Automatic fallback to standard PEFT if hardware is incompatible.
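
A sketch of that Unsloth-first, PEFT-fallback pattern is below. It uses Unsloth's FastLanguageModel and the peft package, but the specifics (sequence length, 4-bit loading, target modules) are illustrative assumptions rather than Adyft's actual logic:

```python
def load_lora_model(model_name: str, r: int = 16, alpha: int = 32):
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"]  # assumed defaults
    try:
        from unsloth import FastLanguageModel
        model, tokenizer = FastLanguageModel.from_pretrained(
            model_name=model_name, max_seq_length=2048, load_in_4bit=True
        )
        model = FastLanguageModel.get_peft_model(
            model, r=r, lora_alpha=alpha, target_modules=target_modules
        )
    except (ImportError, RuntimeError):
        # Unsloth missing or unsupported on this hardware: fall back to PEFT.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model
        model = AutoModelForCausalLM.from_pretrained(model_name)
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        config = LoraConfig(r=r, lora_alpha=alpha, target_modules=target_modules)
        model = get_peft_model(model, config)
    return model, tokenizer
```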

📊 Scientific Metric Suite

Move beyond simple loss curves. Adyft generates journal-grade reports including:

  • NLP Quality: ROUGE-L, BLEU, and Semantic Similarity (via Sentence-Transformers).
  • Inference Efficiency: Tokens Per Second (TPS), Latency (ms).
  • Hardware Profile: Peak VRAM usage, System VRAM efficiency.
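
These scores can be reproduced with standard libraries. The sketch below shows one common way to compute them with Hugging Face's evaluate package and Sentence-Transformers; Adyft's scorer may use different models or settings:

```python
import evaluate
from sentence_transformers import SentenceTransformer, util

predictions = ["LoRA trains a small adapter on top of a frozen model."]
references = ["LoRA trains a small adapter over a frozen base model."]

# Lexical overlap metrics via the evaluate library.
rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
rouge_l = rouge.compute(predictions=predictions, references=references)["rougeL"]
bleu_score = bleu.compute(predictions=predictions, references=references)["bleu"]

# Embedding-based semantic similarity via Sentence-Transformers.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb_pred = encoder.encode(predictions, convert_to_tensor=True)
emb_ref = encoder.encode(references, convert_to_tensor=True)
similarity = util.cos_sim(emb_pred, emb_ref).item()

print(f"ROUGE-L={rouge_l:.3f}  BLEU={bleu_score:.3f}  similarity={similarity:.3f}")
```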

📈 Dynamic Visualization

Generate stunning HTML dashboards and publication-quality Matplotlib charts with a single command.
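
Adyft generates these charts itself, but as a rough illustration of the kind of HPO plot involved, here is a generic Matplotlib sketch (the trial numbers and losses are made-up data, not Adyft output):

```python
import matplotlib.pyplot as plt

# Hypothetical per-trial evaluation losses from an HPO run.
trials = [1, 2, 3, 4, 5]
losses = [1.92, 1.74, 1.81, 1.58, 1.63]
best_so_far = [min(losses[: i + 1]) for i in range(len(losses))]

plt.plot(trials, losses, "o-", label="trial loss")
plt.step(trials, best_so_far, where="post", label="best so far")
plt.xlabel("Trial")
plt.ylabel("Eval loss")
plt.legend()
plt.savefig("hpo_progress.png", dpi=300)
```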


🚀 Quick Start

Installation

Standard Installation (Recommended)

pip install adyft

From Source (For Developers)

git clone https://github.com/shrey1720/adyft.git
cd adyft
pip install -e ".[dev]"

Recommended for NVIDIA GPUs

pip install unsloth xformers

🛠 Usage Guide

1. System Health Check

Ensure your GPU and VRAM are ready for training.

adyft doctor
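
If you want to verify the basics by hand, a plain PyTorch check covers the kind of information a health check like this typically inspects (a sketch; adyft doctor's actual output will differ):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
    print(f"CUDA capability: {props.major}.{props.minor}")
else:
    print("No CUDA device found; training would fall back to CPU.")
```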

2. Basic Training

Train with default settings and automatic tuning.

adyft train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 5
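
The README does not document the expected dataset schema, so the snippet below writes a purely hypothetical instruction-tuning layout; treat the field names (instruction/input/output) as assumptions and check the project docs for the real format:

```python
import json

# Hypothetical instruction-tuning layout; the schema is an assumption,
# not confirmed by Adyft's documentation.
examples = [
    {
        "instruction": "Summarize the following text.",
        "input": "LoRA adapts a frozen base model by training low-rank matrices.",
        "output": "LoRA fine-tunes a model by learning a small low-rank adapter.",
    },
]
with open("my_dataset.json", "w") as f:
    json.dump(examples, f, indent=2)
```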

3. Using Expert Presets

Adyft comes with pre-configured settings for specific domains:

  • --preset chatbot: Optimized for conversational flow.
  • --preset coding: Lower learning rate, optimized for logic.
  • --preset summarization: Focuses on context retention.

4. Safe Carry-Forward Training

Tune as usual, then continue the best trial for extra epochs from its checkpoint.

adyft train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 3 --continue-best-epochs 2

5. Scientific Benchmarking

Compare your trained adapter against ground truth answers to get a technical profile.

adyft benchmark --run <run_id> --references test_set.json
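
The reference-file schema is likewise undocumented here; the snippet below builds a hypothetical test_set.json pairing prompts with ground-truth answers, with field names that are assumptions:

```python
import json

# Hypothetical reference layout for benchmarking; field names are assumptions.
references = [
    {"prompt": "What does LoRA stand for?", "reference": "Low-Rank Adaptation."},
    {"prompt": "Why use rank decomposition?", "reference": "It cuts trainable parameters."},
]
with open("test_set.json", "w") as f:
    json.dump(references, f, indent=2)
```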

📂 Project Architecture

The system is organized into focused modules for extensibility:

adyft/
├── tuner/        # Bayesian optimization and search spaces
├── trainer/      # LoRA/QLoRA engine (Unsloth & PEFT)
├── dataset/      # Dynamic loading and scientific validation
├── hardware/     # VRAM analysis and hardware-aware strategy
├── metrics/      # Scorer engine (NLP & Performance)
├── reports/      # HTML Exporters and Chart Generators
├── db/           # SQLite persistence for all runs/trials
└── cli/          # Typer-powered command interface

🛠 Detailed Command Reference

| Command | Description |
|---|---|
| adyft init | Initialize project structure in the current directory. |
| adyft doctor | Run hardware and dependency health checks. |
| adyft train | Start HPO training (see options below). |
| adyft export | Export the best adapter for deployment. |
| adyft report | Generate an HTML performance dashboard. |
| adyft benchmark | Run scientific NLP and throughput evaluations. |
| adyft runs list | View training history and best scores. |

🚀 Training Options (adyft train)

| Option | Shorthand | Default | Description |
|---|---|---|---|
| --model | -m | - | HuggingFace model name (required). |
| --data | -d | - | Dataset path (.json, .jsonl, .txt) (required). |
| --max-trials | -t | 10 | Maximum tuning iterations. |
| --epochs | -e | 3 | Training epochs per trial. |
| --unsloth | - | True | Use Unsloth for 2x faster training. |
| --wandb | - | False | Enable Weights & Biases logging. |
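
Putting the table together, a fuller invocation might look like the line below (this assumes the boolean --wandb flag toggles on when passed, as is typical for Typer CLIs):

adyft train -m "meta-llama/Llama-3.2-1B" -d "my_dataset.json" -t 20 -e 2 --wandb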

🧩 Key Terminology

  • HPO: Hyperparameter Optimization — automatically finding the best settings.
  • LoRA: Low-Rank Adaptation — efficient fine-tuning that trains only a small adapter instead of the full model (see the parameter-count sketch after this list).
  • Trial: A single training attempt with one hyperparameter set.
  • Run: A complete experiment containing multiple trials.
  • Adapter: The learned weights, typically saved as a small file.
  • Throughput: Training speed measured in Tokens Per Second (TPS).
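
To make the LoRA entry above concrete: LoRA freezes a base weight matrix W and learns a low-rank update ΔW = B·A, where B is (d_out × r), A is (r × d_in), and the rank r is small. A quick back-of-the-envelope sketch of why the adapter stays tiny:

```python
# LoRA adapter size for one weight matrix W of shape (d_out, d_in):
# instead of updating W, train B (d_out x r) and A (r x d_in).
d_out, d_in, r = 4096, 4096, 16          # typical attention projection, rank 16
full_params = d_out * d_in               # 16,777,216 weights in W
lora_params = r * (d_out + d_in)         # 131,072 weights in B and A
print(f"trainable fraction: {lora_params / full_params:.2%}")  # ~0.78%
```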

🔬 Technical Roadmap

  • Multi-GPU Support: DDP and FSDP integration.
  • DPO Tuning: Direct Preference Optimization tuning loop.
  • Custom Scoring Functions: User-defined success metrics.
  • HuggingFace Hub Integration: Direct upload of tuned adapters.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

adyft-0.2.14.tar.gz (69.3 kB)

Built Distribution

adyft-0.2.14-py3-none-any.whl (78.6 kB)

File details

Details for the file adyft-0.2.14.tar.gz.

File metadata

  • Download URL: adyft-0.2.14.tar.gz
  • Size: 69.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 16458aa3475340edf31f6eb62254cdd618d7f7b261455360ff8e54a50869bd0d |
| MD5 | d65de226283a5c68cacef0a233ecffa9 |
| BLAKE2b-256 | c9980121cf66ba9aa248e476b1ae47f46169a11bac01c7aa6464e11e1ad3ae0a |

File details

Details for the file adyft-0.2.14-py3-none-any.whl.

File metadata

  • Download URL: adyft-0.2.14-py3-none-any.whl
  • Size: 78.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 2ba3f22d502d2d64832ca782d283547c870b06eb0160ba32996d436144a77ba8 |
| MD5 | 03324bbb3fc7ae016393ce93424b0cff |
| BLAKE2b-256 | 6959c6c6d63c56130f522ea41dfffabb9bb233fc01b66c813e95dc6c002dcf38 |
