NexusLoRA — Unified LLM Fine-Tuning Engine (FastLoRA × NexusTrain)
Project description
NexusLoRA ⚡🛡️🧠🔮
Unified LLM Fine-Tuning Engine: FastLoRA v4 and NexusTrain v1 merged into a single file.
Installation

```bash
# Basic installation
pip install nexuslora

# With training tools (recommended)
pip install nexuslora[train]

# Full installation (includes flash-attn)
pip install nexuslora[full]

# Developer installation
git clone https://github.com/fastloraoffical/nexuslora
cd nexuslora
pip install -e ".[dev]"
```
Quick Start

```python
from nexuslora import NexusLoRA

# Full feature set: all NexusLoRA modules active
nl = NexusLoRA(
    "meta-llama/Llama-3.2-3B",
    nexus_enabled=True,
    nexus_power=1.0,
)
model, tokenizer = nl.load()
trainer = nl.get_trainer(train_dataset)
nl.train(trainer)
```
Basic LoRA only (NexusTrain modules disabled):

```python
nl = NexusLoRA("meta-llama/Llama-3.2-3B", nexus_enabled=False)
model, tokenizer = nl.load()
trainer = nl.get_trainer(train_dataset)
nl.train(trainer)
```
Selective module activation:

```python
nl = NexusLoRA(
    "meta-llama/Llama-3.2-3B",
    nexus_enabled=True,
    nexus_power=1.0,
    # Conflict avoidance:
    torch_compile=False,              # CrystalCore™ takes over
    nexus_crystal_core=True,
    mixed_precision_optimize=False,   # ChromaticPrecision™ takes over
    nexus_chromatic_precision=True,
    lr_scheduler="constant",          # ResonanceScheduler™ takes over
)
```
Features

FastLoRA Core

| Feature | Description |
|---|---|
| Custom Triton Kernels | Fused RMSNorm, SwiGLU, and RoPE implementations |
| 2-bit / 4-bit / 8-bit Quant | Includes bitsandbytes-independent 2-bit quantization |
| Mixture of Experts | Sparse MoE router + automatic patching |
| CPU/NVMe Offloading | ZeRO-Infinity-style memory management |
| OOM Recovery | Resume from OOM without interrupting training |
| Optuna AutoTune | Automatic lr / rank / batch optimization |
| UnstoppableTrainer | Automatic recovery from every error |
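The OOM Recovery row describes resuming a step after an out-of-memory error instead of crashing the run. The library's internals are not shown here, but the usual pattern can be sketched as a framework-agnostic retry loop that halves the batch and tries again; all names below are illustrative, not nexuslora's actual API:

```python
def run_with_oom_recovery(step_fn, batch, min_batch=1):
    """Retry step_fn on OOM, halving the batch each time (illustrative sketch).

    step_fn takes a list of samples; an OOM is modeled here as MemoryError
    (a real trainer would catch the framework's CUDA OOM exception instead).
    """
    while True:
        try:
            return step_fn(batch)
        except MemoryError:
            if len(batch) <= min_batch:
                raise  # cannot shrink further; give up
            batch = batch[: len(batch) // 2]  # halve and retry

# Toy step that "OOMs" whenever the batch holds more than 4 samples
def toy_step(batch):
    if len(batch) > 4:
        raise MemoryError
    return sum(batch)

print(run_with_oom_recovery(toy_step, list(range(16))))  # → 6 (sum of [0, 1, 2, 3])
```

In a real trainer the skipped samples would be requeued rather than dropped, and the batch size restored after a few successful steps.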
NexusTrain Modules

| Module | Description |
|---|---|
| CrystalCore™ | Runtime kernel crystallization |
| MorphicMemory™ | Markov-predicted tensor reuse |
| SpectraOptimizer™ | FFT-based optimizer beyond AdamW |
| ResonanceScheduler™ | Self-tuning LR from the gradient spectrum |
| ChromaticPrecision™ | Per-layer dynamic dtype assignment |
| GradientHarmonics™ | Wavelet-based gradient processing |
| NeuralProfiler™ | LSTM-based OOM/explosion prediction |
| CrystalPipeline™ | Dynamic grad-accum + async checkpointing |
| ZeroWaste™ | Dead-parameter elimination |
| UniversalAdapter™ | Automatic HF/TRL/PEFT/DeepSpeed patching |
Conflict Warnings

| FastLoRA | NexusTrain | Recommendation |
|---|---|---|
| torch_compile=True | nexus_crystal_core=True | Disable one of the two |
| paged_optimizer=True | nexus_spectra_optimizer=True | Disable spectra on low-VRAM setups |
| mixed_precision_optimize=True | nexus_chromatic_precision=True | Disable one of the two |
| dynamic_batch_scaling=True | nexus_crystal_pipeline=True | Disable one of the two |
| smart_checkpoint=True | nexus_crystal_pipeline=True | Disable one of the two |
| loss_spike_detection=True | nexus_neural_profiler=True | Disable one of the two |
| gradient_noise_monitor=True | nexus_gradient_harmonics=True | Monitor readings become misleading |
| mem_defrag=True | nexus_morphic_memory=True | Harmless; double the defrag interval |
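The hard conflict pairs above can be encoded in plain data and checked before constructing `NexusLoRA`. The helper below is hypothetical (not part of nexuslora); it only assumes the constructor keyword names shown in the table, and leaves out the harmless mem_defrag/morphic_memory pair:

```python
# Hypothetical helper: encodes the conflict table so a kwargs dict can be
# validated before it is passed to NexusLoRA(...).
CONFLICTS = [
    ("torch_compile", "nexus_crystal_core", "disable one"),
    ("paged_optimizer", "nexus_spectra_optimizer", "disable spectra on low VRAM"),
    ("mixed_precision_optimize", "nexus_chromatic_precision", "disable one"),
    ("dynamic_batch_scaling", "nexus_crystal_pipeline", "disable one"),
    ("smart_checkpoint", "nexus_crystal_pipeline", "disable one"),
    ("loss_spike_detection", "nexus_neural_profiler", "disable one"),
    ("gradient_noise_monitor", "nexus_gradient_harmonics", "monitor becomes misleading"),
]

def check_conflicts(config):
    """Return (fastlora_flag, nexus_flag, advice) for every pair enabled together."""
    return [(a, b, advice) for a, b, advice in CONFLICTS
            if config.get(a) and config.get(b)]

config = {"torch_compile": False, "nexus_crystal_core": True,
          "smart_checkpoint": True, "nexus_crystal_pipeline": True}
print(check_conflicts(config))
# → [('smart_checkpoint', 'nexus_crystal_pipeline', 'disable one')]
```

Running the check once at startup turns a silent performance conflict into an explicit warning.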
API Reference

```python
nl = NexusLoRA(model_name, **kwargs)   # Configure
model, tokenizer = nl.load()           # Load model + apply all optimizations
trainer = nl.get_trainer(dataset)      # Build the trainer
nl.train(trainer)                      # Train
nl.save("./output")                    # Save
nl.push_to_hub("user/model")           # Push to the Hugging Face Hub
nl.merge_and_unload()                  # Merge LoRA weights
response = nl.generate("prompt")       # Inference
nl.nexus_async_checkpoint()            # Async checkpoint
nl.profile()                           # Benchmark
nl.stop()                              # Clean shutdown
```
Requirements
- Python ≥ 3.9
- PyTorch ≥ 2.1.0
- Transformers ≥ 4.40.0
- Accelerate ≥ 0.27.0
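The minimum versions above can be verified at startup with the standard library alone. This is an illustrative sketch, not part of nexuslora; the naive parser keeps only the leading numeric components of a version string, which is enough for simple `X.Y.Z` minimums:

```python
# Hypothetical startup check for the minimum versions listed above.
from importlib.metadata import version, PackageNotFoundError

REQUIREMENTS = {"torch": (2, 1, 0), "transformers": (4, 40, 0), "accelerate": (0, 27, 0)}

def parse(v):
    """Naive version parser: keep leading numeric components only."""
    parts = []
    for p in v.split("."):
        if p.isdigit():
            parts.append(int(p))
        else:
            break  # stop at suffixes like "dev0" or "rc1"
    return tuple(parts)

def missing_or_outdated(reqs=REQUIREMENTS):
    """Return (package, installed_version_or_None) for every unmet requirement."""
    problems = []
    for pkg, minimum in reqs.items():
        try:
            installed = version(pkg)
            if parse(installed) < minimum:
                problems.append((pkg, installed))
        except PackageNotFoundError:
            problems.append((pkg, None))
    return problems
```

For production use, the `packaging` library's `Version` class handles pre-release and local version suffixes correctly; the parser here is deliberately minimal.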
Project details
Download files
Source Distribution

- nexuslora-1.0.0.tar.gz (44.9 kB)
Built Distribution
- nexuslora-1.0.0-py3-none-any.whl (44.3 kB)
File details
Details for the file nexuslora-1.0.0.tar.gz.
File metadata
- Download URL: nexuslora-1.0.0.tar.gz
- Upload date:
- Size: 44.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 367c7bd576965991d980f413e72dbbe337ee720c4b6a1f3759f07e47639f4198 |
| MD5 | 9fa2bfb31568c807850607edb9ede7b0 |
| BLAKE2b-256 | 2a60e02721d7b57963f558d9341ed935a0df4629fadd48b5ea7ee0eccbc54337 |
File details
Details for the file nexuslora-1.0.0-py3-none-any.whl.
File metadata
- Download URL: nexuslora-1.0.0-py3-none-any.whl
- Upload date:
- Size: 44.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 551d27fa432b8fb8e83d43fd40d7fba4adc537e5f5fd964ce069e89f92c55862 |
| MD5 | b313a6ca371e0d1fee799d34eb11dde1 |
| BLAKE2b-256 | 01c3871d20855ab55303539c9b8efe48c308afe73fb7b7ebbb4734f21ec887ff |