Wrapper library for training deep neural networks with the PyTorch framework.
deep-ml
deep-ml is a high-level PyTorch training framework that simplifies deep learning workflows for computer vision tasks. It provides easy-to-use trainers with distributed training support, comprehensive task implementations, and seamless experiment tracking.
Key Features
Multiple Training Backends
- FabricTrainer: Lightning Fabric integration for distributed, multi-GPU training
- AcceleratorTrainer: HuggingFace Accelerate integration for distributed, multi-GPU training
- Learner: Classic PyTorch trainer (single-device, notebook-friendly)
Pre-built Task Implementations
- Image Classification (single & multi-label)
- Semantic Segmentation (binary & multiclass)
- Image Regression
- Custom tasks via extensible base classes
Experiment Tracking
- TensorBoard integration (default)
- MLflow support
- Weights & Biases (wandb) integration
- Custom logger interface
Advanced Training Features
- ✅ Automatic Mixed Precision (AMP)
- ✅ Gradient accumulation & clipping
- ✅ Learning rate scheduling with warmup
- ✅ Multi-GPU and distributed training
- ✅ Checkpoint management
- ✅ Progress bars and real-time metrics
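The AMP feature above corresponds to PyTorch's standard autocast pattern. As a frame of reference, here is a minimal CPU-safe sketch in plain PyTorch (a toy model stands in for a real network; this is not deep-ml's internal implementation):

```python
import torch
from torch import nn

# Toy model and batch stand in for a real network and DataLoader
model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 8)
y = torch.tensor([0, 1, 0, 1])

# autocast runs the forward pass in reduced precision where it is safe.
# On CUDA you would use device_type="cuda" with float16 plus a
# torch.amp.GradScaler; bfloat16 on CPU needs no gradient scaler.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = criterion(model(x), y)

loss.backward()   # gradients are accumulated in full precision
optimizer.step()
```

With FabricTrainer, passing `precision="16-mixed"` (as in the Quick Start below) handles this for you.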
Installation
Basic Installation

```shell
pip install deepml
```

With Optional Dependencies

```shell
# For Lightning Fabric
pip install deepml lightning-fabric

# For HuggingFace Accelerate
pip install deepml accelerate

# For MLflow tracking
pip install deepml mlflow

# For Weights & Biases
pip install deepml wandb

# For Albumentations (segmentation)
pip install deepml albumentations
```
Quick Start
Image Classification
```python
from deepml.tasks import ImageClassification
from deepml.fabric_trainer import FabricTrainer
import torch
from torch.optim import Adam
from torchvision.models import resnet18

# 1. Define your model
model = resnet18(num_classes=10)

# 2. Create a task
task = ImageClassification(
    model=model,
    model_dir="./checkpoints",
    classes=['cat', 'dog', 'bird', ...]  # Optional
)

# 3. Set up the optimizer and loss
optimizer = Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# 4. Create the trainer
trainer = FabricTrainer(
    task=task,
    optimizer=optimizer,
    criterion=criterion,
    accelerator="auto",    # Use GPU if available
    devices="auto",        # Use all available devices
    precision="16-mixed"   # Mixed-precision training
)

# 5. Train (assumes train_loader / val_loader DataLoaders are defined)
trainer.fit(
    train_loader=train_loader,
    val_loader=val_loader,
    epochs=50
)

# 6. Visualize predictions
task.show_predictions(loader=val_loader, samples=9)
```
Semantic Segmentation
```python
import torch
from deepml.tasks import Segmentation
from deepml.fabric_trainer import FabricTrainer
from deepml.losses import JaccardLoss

# Define your model (UNet here is a placeholder for your own implementation)
model = UNet(in_channels=3, out_channels=1)

# Create a task
task = Segmentation(
    model=model,
    model_dir="./checkpoints",
    mode="binary",
    num_classes=1,
    threshold=0.5
)

# Set up training (JaccardLoss is an alternative IoU-based criterion)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()
trainer = FabricTrainer(task=task, optimizer=optimizer, criterion=criterion)

# Train
trainer.fit(
    train_loader=train_loader,
    val_loader=val_loader,
    epochs=100
)
```
Documentation
📚 Full documentation is available at: https://deep-ml.readthedocs.io/
Documentation Structure
Getting Started
User Guide
- Trainers - FabricTrainer, AcceleratorTrainer, Learner
- Tasks - Classification, Segmentation, Regression
- Datasets - Data loading utilities
- Loss Functions - Custom losses for CV tasks
- Metrics - Evaluation metrics
- Tracking - MLflow, TensorBoard, Wandb
- Visualization - Result visualization
API Reference
Additional Resources
Tutorials
Available Tutorials
- Image Classification: Train ResNet on CIFAR-10
- Transfer Learning: Fine-tune pre-trained models
- Semantic Segmentation: U-Net for binary segmentation
- Multi-GPU Training: Distributed training across GPUs
- Hyperparameter Tuning: Optimize with Optuna
- Model Deployment: Export to TorchScript/ONNX
📖 See the complete tutorials on ReadTheDocs.
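The deployment tutorial covers exporting trained models; the core TorchScript step is plain PyTorch and independent of deep-ml. A minimal sketch with a toy model:

```python
import os
import tempfile
import torch
from torch import nn

# Any trained nn.Module works; a toy model stands in here
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# torch.jit.trace records the operations executed for an example input,
# producing a self-contained ScriptModule that can be saved and reloaded
example = torch.randn(1, 4)
scripted = torch.jit.trace(model, example)

path = os.path.join(tempfile.mkdtemp(), "model.pt")
scripted.save(path)

# The reloaded module runs without the original Python class definition
reloaded = torch.jit.load(path)
```

ONNX export follows the same shape via `torch.onnx.export(model, example, path)`.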
💡 Advanced Features
Distributed Training
```python
# Multi-GPU training with DDP
trainer = FabricTrainer(
    task=task,
    optimizer=optimizer,
    criterion=criterion,
    accelerator="gpu",
    strategy="ddp",
    devices="auto"  # Use all GPUs
)
```
Gradient Accumulation
```python
# Simulate larger batch sizes
trainer.fit(
    train_loader=train_loader,
    val_loader=val_loader,
    epochs=50,
    gradient_accumulation_steps=4  # Effective batch size = 4x the loader's
)
```
Learning Rate Scheduling
```python
from deepml.lr_scheduler_utils import setup_one_cycle_lr_scheduler_with_warmup

lr_scheduler_fn = lambda opt: setup_one_cycle_lr_scheduler_with_warmup(
    optimizer=opt,
    steps_per_epoch=len(train_loader),
    warmup_ratio=0.1,
    num_epochs=50,
    max_lr=1e-3
)

trainer = FabricTrainer(
    ...,
    lr_scheduler_fn=lr_scheduler_fn
)
```
Experiment Tracking
```python
from deepml.tracking import MLFlowLogger, WandbLogger

# MLflow
logger = MLFlowLogger(
    experiment_name='my-experiment',
    tracking_uri='./mlruns'
)

# Weights & Biases
logger = WandbLogger(
    project='my-project',
    name='experiment-1'
)

trainer.fit(..., logger=logger)
```
Supported Tasks
| Task | Description | Typical Use Cases |
|---|---|---|
| ImageClassification | Single-label classification | CIFAR-10, ImageNet |
| MultiLabelImageClassification | Multi-label classification | Object attributes |
| Segmentation | Pixel-level classification | Medical imaging, autonomous driving |
| ImageRegression | Continuous value prediction | Age estimation, depth prediction |
| NeuralNetTask | Generic task template | Custom tasks |
Custom Loss Functions
- JaccardLoss: IoU loss for segmentation
- RMSELoss: Root mean squared error
- WeightedBCEWithLogitsLoss: Weighted binary cross-entropy
- ContrastiveLoss: For siamese networks
- AngularPenaltySMLoss: ArcFace, SphereFace, CosFace for face recognition
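Of these, RMSELoss is simple enough to sketch in full. A minimal stand-in (the actual deepml implementation may differ, e.g. in reduction options):

```python
import torch
from torch import nn

class RMSELoss(nn.Module):
    """Root mean squared error: sqrt(MSE), useful when the loss
    should be in the same units as the regression target."""

    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps  # keeps the sqrt differentiable at zero error

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return torch.sqrt(self.mse(pred, target) + self.eps)

criterion = RMSELoss()
loss = criterion(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 4.0]))
# MSE = (0 + 4) / 2 = 2, so RMSE is approximately sqrt(2)
```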
Metrics
- Classification: Accuracy, BinaryAccuracy
- Segmentation: IoU, Dice Coefficient, Pixel Accuracy
- Custom: Easy to implement custom metrics
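A custom metric is typically just a function of predictions and targets; the library's exact metric interface isn't shown here, so this pixel-accuracy sketch for segmentation is a generic illustration rather than deepml's API:

```python
import torch

def pixel_accuracy(logits: torch.Tensor, target: torch.Tensor) -> float:
    """Fraction of pixels whose argmax class matches the target.

    logits: (N, C, H, W) raw model outputs
    target: (N, H, W) integer class labels
    """
    pred = logits.argmax(dim=1)
    return (pred == target).float().mean().item()

# Toy check: 2-class logits on a 2x2 image; one pixel disagrees
logits = torch.tensor([[[[5.0, -5.0], [5.0, -5.0]],
                        [[-5.0, 5.0], [-5.0, 5.0]]]])  # shape (1, 2, 2, 2)
target = torch.tensor([[[0, 1], [0, 0]]])
acc = pixel_accuracy(logits, target)
```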
Datasets
- ImageDataFrameDataset: Load from pandas DataFrame
- ImageRowDataFrameDataset: Flattened arrays in DataFrame
- SegmentationDataFrameDataset: Images + masks with Albumentations
- ImageListDataset: Directory of images
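The idea behind ImageRowDataFrameDataset (each DataFrame row is a flattened image plus a label) can be sketched in plain PyTorch; the class name and column conventions below are assumptions for illustration, not the library's API:

```python
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset

class RowDataFrameDataset(Dataset):
    """Treats each DataFrame row as a flattened image plus a label column."""

    def __init__(self, df: pd.DataFrame, label_col: str = "label"):
        self.labels = torch.as_tensor(df[label_col].to_numpy(), dtype=torch.long)
        self.pixels = torch.as_tensor(
            df.drop(columns=[label_col]).to_numpy(), dtype=torch.float32
        )

    def __len__(self) -> int:
        return len(self.labels)

    def __getitem__(self, idx):
        return self.pixels[idx], self.labels[idx]

# Toy 4-sample frame with 6 "pixels" per row
df = pd.DataFrame(np.random.rand(4, 6), columns=[f"p{i}" for i in range(6)])
df["label"] = [0, 1, 0, 1]
ds = RowDataFrameDataset(df)
x, y = ds[0]
```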
Contributing
Contributions are welcome! See our Contributing Guide for guidelines.
Development Setup
```shell
git clone https://github.com/sagar100rathod/deep-ml.git
cd deep-ml
pip install -e ".[dev]"
pytest  # Run tests
```
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- PyTorch team for the amazing framework
- Lightning AI for Lightning Fabric
- HuggingFace for Accelerate
- All contributors to this project
Contact
- Author: Sagar Rathod
- Email: sagar100rathod@gmail.com
- Issues: GitHub Issues
- Discussions: GitHub Discussions
⭐ Star History
If you find this project useful, please consider giving it a star!
Citation
If you use deep-ml in your research, please cite:
```bibtex
@software{deepml2026,
  author  = {Rathod, Sagar},
  title   = {deep-ml: High-level PyTorch Training Framework for Computer Vision},
  year    = {2026},
  version = {3.0.0},
  url     = {https://github.com/sagar100rathod/deep-ml},
  doi     = {10.5281/zenodo.XXXXXXX}
}
```
Download files
Source Distribution
Built Distribution
File details
Details for the file deepml-3.0.1.tar.gz.
File metadata
- Download URL: deepml-3.0.1.tar.gz
- Upload date:
- Size: 178.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.12.13 Linux/6.17.0-1010-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8f6f437c9265729e52ff898342f071d58d48f5bb484bf7a266f01d6c6331320d |
| MD5 | 82ba8d80df81230630e261a12a5f33c3 |
| BLAKE2b-256 | d2078d5cc7f73abe0ab8386a6a12ca3148fe84392d46a069e27c2262995839b4 |
File details
Details for the file deepml-3.0.1-py3-none-any.whl.
File metadata
- Download URL: deepml-3.0.1-py3-none-any.whl
- Upload date:
- Size: 183.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.12.13 Linux/6.17.0-1010-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fef2d5591fbb75ac8dfe58bbe75dfcc89bca4b6759890d08f67dc57adccc1373 |
| MD5 | 7b9d9f088348eac4c6e0f3db4ddd189a |
| BLAKE2b-256 | 6ea30870017ea57170312af1e19d8498cc7f1be9fe747c3301d5d47b5a24a0a9 |