A PyTorch training engine with a plugin system and advanced model components
Orbit
Orbit is a flexible, plugin-based PyTorch training engine designed to simplify the training loop while providing powerful components for modern deep learning models, including LLMs.
It features a modular design with a rich set of plugins, advanced model building blocks (like MoE, RoPE, GQA), comprehensive LoRA/DoRA support, and cutting-edge optimizers.
Features
🚀 Core Engine
- Plugin System: Decoupled training logic using plugins for callbacks, logging, and training strategies.
- Simplified Loop: Clean `train` and `eval` interfaces.
- Flexible Updates:
  - `auto_update()`: Automatically handles the forward pass, loss calculation, backward pass, optimizer step, and zero grad.
  - `update(loss)`: Allows manual control over the update step if you need custom forward/loss logic.
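If you need a custom forward pass or loss, `update(loss)` replaces `auto_update()`. The sketch below is illustrative only: it assumes the training iterator yields the current batch as `(inputs, targets)` (the Quick Start below iterates with `for _ in ...`, so check the docs for the exact yield), and `my_custom_loss` is a hypothetical function.
# Minimal sketch of a manual update step (assumptions noted above).
for batch in trainer.train(train_loader, num_epochs=10):
    inputs, targets = batch                 # assumed yield structure
    logits = model(inputs)                  # your own forward pass
    loss = my_custom_loss(logits, targets)  # hypothetical custom loss
    trainer.update(loss)                    # Orbit runs backward, step, zero_grad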
🧩 Model Components (orbit.model)
Orbit provides a collection of high-performance, reusable layers:
- Attention: `MultiHeadAttention` with support for GQA (Grouped Query Attention), RoPE (Rotary Positional Embeddings), and FlashAttention.
- LoRA & DoRA: Full support for Low-Rank Adaptation (LoRA) and Weight-Decomposed Low-Rank Adaptation (DoRA) across `Linear`, `Conv2d`, `Conv1d`, and `Embedding` layers. Also supports Gated LoRA.
- MoE: Mixture of Experts block with `TopKGate` routing.
- Gates: A variety of gating mechanisms, including `SigmoidGate`, `TanhGate`, `SoftmaxGate`, `GLUGate`, `ContextGate`, and `TopKGate`.
- Others: `FiLM` (Feature-wise Linear Modulation), `MLP` (with gated support), and `RotaryPositionalEmbedding`.
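As a rough sketch of how these blocks are meant to be used, the snippet below instantiates an attention layer. The constructor arguments shown (`embed_dim`, `num_heads`, `num_kv_heads`, `rope`) are hypothetical placeholders, not the library's confirmed signature:
import torch
from orbit.model import MultiHeadAttention

# Hypothetical constructor arguments -- check orbit.model for the real names;
# this only sketches the intended GQA + RoPE configuration.
attn = MultiHeadAttention(embed_dim=512, num_heads=8, num_kv_heads=2, rope=True)
x = torch.randn(2, 128, 512)   # (batch, seq_len, embed_dim)
out = attn(x)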
🛠️ Utilities & Kit (orbit.utils)
Orbit provides a comprehensive toolkit to speed up development:
🔧 LoRA Utilities
Manual control over LoRA injection and management (alternative to the Plugin approach).
- Injection:
  - `inject_lora(model, r=8, ...)`: Manually inject LoRA/DoRA/Gated LoRA into specific layers.
  - `inject_lora_file(model, path)`: Automatically inject and load a LoRA configuration/weights from a file.
- Management:
  - `merge_lora(model)` / `unmerge_lora(model)`: Merge weights for faster inference, or unmerge to resume training.
  - `save_lora(model, path)` / `load_lora(model, path)`: Efficiently save/load only the LoRA parameters.
  - `freeze_backbone_only(model)`: Helper to freeze the base model while keeping LoRA and specified heads trainable.
- Diagnosis:
  - `LoRADiagnoser`: Check for rank collapse and monitor gradient norms during training.
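Given a `model`, a typical manual workflow with these helpers might look like the sketch below; anything beyond the documented `r=8` argument (e.g., DoRA or target-layer selection) is an assumption and may differ:
from orbit.utils import inject_lora, freeze_backbone_only, save_lora, merge_lora

inject_lora(model, r=8)            # add adapters (LoRA/DoRA/Gated via extra kwargs)
freeze_backbone_only(model)        # train only the adapters and any kept heads
# ... run training ...
save_lora(model, "adapters.pt")    # persist just the LoRA parameters
merge_lora(model)                  # fold adapters into the base weights for inference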
❄️ Model Freezing
- `freeze_layers(model, targets=['encoder'])`: Freeze layers matching the target names (supports wildcards).
- `unfreeze_layers(model, targets)`: Unfreeze specific layers.
- `get_trainable_params(model)`: Get the trainable parameters to pass to the optimizer.
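For example, a common pattern is to freeze an encoder and build the optimizer only from what remains trainable:
import torch
from orbit.utils import freeze_layers, get_trainable_params

freeze_layers(model, targets=['encoder'])   # freeze every layer matching 'encoder'
optimizer = torch.optim.Adam(get_trainable_params(model), lr=1e-3)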
🎭 Masking
- `make_causal_mask`: Create causal masks for autoregressive models.
- `make_padding_mask`, `make_lookahead_mask`, `make_sliding_window_mask`: Additional masking helpers.
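A minimal usage sketch (the exact argument list is an assumption; many implementations take a sequence length plus optional device/dtype arguments):
from orbit.utils import make_causal_mask

# Hypothetical call -- the real signature may differ.
mask = make_causal_mask(128)   # causal (autoregressive) mask for a length-128 sequence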
💾 Layer I/O
- `save_layer(model, layer_name, path)`: Save the weights of a specific sub-module (e.g., just the backbone).
- `load_layer(model, layer_name, path)`: Load weights into a specific sub-module.
- `get_model_by_name(model, name)`: Access sub-modules using dot-notation strings (e.g., "backbone.layer1").
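Using the signatures above, saving and restoring a single sub-module might look like this (the sub-module name "backbone" and the file path are just examples):
from orbit.utils import save_layer, load_layer, get_model_by_name

save_layer(model, "backbone", "backbone.pt")      # save only the backbone weights
load_layer(model, "backbone", "backbone.pt")      # restore them later
backbone = get_model_by_name(model, "backbone")   # fetch a sub-module by dotted name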
📝 SFT Helpers
- `build_sft`: Prepares data for Supervised Fine-Tuning (handles chat templates, tokenization, and label masking).
- `train_sft(engine)`: A specialized training step for SFT that handles the forward pass and loss calculation automatically (see the SFT Quick Start below).
⚙️ Optimization (orbit.optim)
- Muon: MomentUm Orthogonalized by Newton-Schulz optimizer.
- SAM: Sharpness-Aware Minimization wrapper.
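These drop into the Engine in place of a stock torch.optim optimizer. The sketch below assumes Muon follows the usual `(params, lr=...)` constructor convention, which is an assumption rather than a documented signature:
import torch.nn as nn
from orbit.engine import Engine
from orbit.optim import Muon

# Assumed constructor convention (params, lr=...); check orbit.optim for the
# actual arguments, especially for the SAM wrapper.
optimizer = Muon(model.parameters(), lr=1e-3)
trainer = Engine(model=model, criterion=nn.CrossEntropyLoss(), optimizer=optimizer)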
🌱 Initialization & Seeding
- `auto_initialize(model)`: Automatically initializes weights based on layer type (Linear, Conv, Embedding, etc.).
- `seed_everything(seed)`: Sets seeds for Python, NumPy, PyTorch, and CUDA for reproducibility.
🖥️ CUDA
- `cuda_alloc(size)`: Optimizes the PyTorch CUDA memory allocation configuration (e.g., `max_split_size_mb`).
🔌 Plugins (orbit.plugin)
- `EarlyStopping`: Stop training when a metric stops improving.
- `GradientAccumulation`: Simulate larger batch sizes.
- `Warmup`: Learning-rate warmup.
- `Mentor`: Training assistant/logger.
- `MemoryEstimator`: Monitor CUDA memory usage.
- `LoRA`: Easy injection of LoRA layers via the plugin system.
- `Board`: TensorBoard integration.
Installation
pip install orbit-torch
Requirements:
- Python >= 3.8
- PyTorch >= 2.0.0 (required for the FlashAttention backend)
Quick Start
1. Basic Training (CIFAR-10)
import torch
import torch.nn as nn
from orbit.engine import Engine
from orbit.plugin import EarlyStopping, GradientAccumulation, Mentor
from orbit.utils import auto_initialize
# Define your model
model = MyConvNet()
auto_initialize(model)
# Setup Engine
trainer = Engine(
    model=model,
    criterion=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    plugins=[
        Mentor(),
        EarlyStopping(monitor='val_acc', patience=3),
        GradientAccumulation(steps=2)
    ]
)

# Train
for _ in trainer.train(train_loader, num_epochs=10):
    trainer.auto_update()  # Handles forward, backward, step, zero_grad

    # Handle epoch end (e.g., validation)
    if not trainer.is_epoch_end:
        continue
    for _ in trainer.eval(test_loader):
        trainer.auto_update()
2. LLM SFT with LoRA/DoRA
Orbit makes it easy to fine-tune LLMs using LoRA or DoRA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from orbit.engine import Engine
from orbit.plugin import LoRA, GradientAccumulation
from orbit.utils import train_sft, seed_everything
seed_everything(42)
# Load Model
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
# Setup Engine with LoRA Plugin
trainer = Engine(
    model=model,
    optimizer=torch.optim.AdamW(model.parameters(), lr=1e-4),
    plugins=[
        # Inject DoRA into the MLP layers
        LoRA(target_names=['mlp'], dora=True, r=16, alpha=32),
        GradientAccumulation(steps=8)
    ]
)
# Train Loop
# Assuming `dataloader` yields SFT batches (input_ids, attention_mask, labels)
for _ in trainer.train(dataloader, num_epochs=3):
    # train_sft handles the forward pass and loss calculation for CausalLM
    train_sft(trainer)
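After fine-tuning, you can keep just the adapter weights or fold them into the base model using the LoRA utilities described earlier (the file name here is only an example):
from orbit.utils import save_lora, merge_lora

save_lora(model, "qwen_sft_lora.pt")   # save only the LoRA/DoRA parameters
merge_lora(model)                      # merge adapters into the base weights for inference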
3. Chat Interface
Interact with your trained model in the terminal:
from orbit.kit import ChatInterface
chat = ChatInterface(model_id="path/to/model", device="cuda")
chat.interact()
License
MIT License