DLRS: Dynamic Learning Rate Scheduler for PyTorch
A PyTorch implementation of the Dynamic Learning Rate Scheduler (DLRS) algorithm from the paper "Improving Neural Network Training using Dynamic Learning Rate Schedule for PINNs and Image Classification" by Veerababu Dharanalakota, Ashwin Arvind Raikar, and Prasanta Kumar Ghosh (arXiv:2507.21749v1, July 2025).
DLRS automatically adjusts learning rates based on loss dynamics during training, eliminating the need for manual learning rate tuning and schedules.
Key Features
- Adaptive learning rate adjustment based on loss slope analysis
- Compatible with any PyTorch optimizer (SGD, Adam, AdamW, etc.)
- Can be used alongside standard PyTorch schedulers
- Minimal configuration required
- Suitable for both image classification and PINNs
Note: This is an independent implementation based on the research paper. It is not part of the official PyTorch library yet.
Links
- GitHub: https://github.com/Thabhelo/pytorch-dlrs
- PyPI: https://pypi.org/project/pytorch-dlrs/
- Paper: https://arxiv.org/abs/2507.21749
Installation
From PyPI
pip install pytorch-dlrs
From Source
git clone https://github.com/Thabhelo/pytorch-dlrs.git
cd pytorch-dlrs
pip install -e .
Requirements
- Python 3.8+
- PyTorch 2.0+
- NumPy 1.21+
Quick Start
import torch
import torch.nn as nn
from dlrs import DLRSScheduler

# Create your model, optimizer, and scheduler
model = nn.Sequential(nn.Linear(10, 2))  # Replace with your model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = DLRSScheduler(optimizer)
criterion = nn.CrossEntropyLoss()

num_epochs = 10  # Set to your training budget

for epoch in range(num_epochs):
    batch_losses = []
    for data, target in train_loader:  # train_loader: your DataLoader
        # Forward pass
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        # Backward pass
        loss.backward()
        optimizer.step()
        # Record loss for DLRS
        batch_losses.append(loss.item())
    # Update learning rate based on the epoch's batch losses
    scheduler.step(batch_losses)
How It Works
DLRS analyzes the trend of batch losses within each epoch to determine whether the model is:
- Converging (ΔL < 0): Increases learning rate to accelerate training
- Diverging (ΔL > 1): Decreases learning rate to stabilize training
- Stagnating (0 ≤ ΔL ≤ 1): Makes minimal adjustments
The algorithm computes:
- Normalized loss slope: ΔL = (L_last - L_first) / L_mean
- Adjustment granularity: n = floor(log10(α))
- Learning rate update: α_new = α - (10^n × δ × ΔL)
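The three formulas above can be sketched in plain Python. This is an illustrative reimplementation, not the library's actual internals (which may differ); the mapping of each δ factor to a regime follows the parameter descriptions in the Parameters section.

```python
import math

def dlrs_update(lr, batch_losses,
                delta_d=0.5, delta_o=1.0, delta_i=0.1, min_lr=1e-8):
    """One DLRS learning-rate update from an epoch's batch losses.

    Illustrative sketch only; the DLRSScheduler internals may differ.
    """
    l_first, l_last = batch_losses[0], batch_losses[-1]
    l_mean = sum(batch_losses) / len(batch_losses)
    slope = (l_last - l_first) / l_mean      # normalized loss slope ΔL

    # Choose the adjustment factor for the current regime.
    if slope < 0:
        delta = delta_i                      # converging: lr increases
    elif slope > 1:
        delta = delta_d                      # diverging: lr decreases
    else:
        delta = delta_o                      # stagnating: small adjustment

    n = math.floor(math.log10(lr))           # adjustment granularity
    new_lr = lr - (10 ** n) * delta * slope  # α_new = α − 10^n · δ · ΔL
    return max(new_lr, min_lr)               # respect the lower bound
```

Note that because the update subtracts a term proportional to ΔL, a negative slope (converging) automatically raises the learning rate, while a positive slope lowers it.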
Parameters
- delta_d (float, default=0.5): Decremental factor for divergence
- delta_o (float, default=1.0): Adjustment factor for stagnation
- delta_i (float, default=0.1): Incremental factor for convergence
- min_lr (float, default=1e-8): Minimum learning rate bound
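The granularity term 10^n (with n = floor(log10(α))) scales each update to the learning rate's order of magnitude, so the same slope and δ produce proportionally sized steps at any scale. A standalone illustration of just this part of the update formula:

```python
import math

def lr_step(lr, slope, delta):
    # One application of the DLRS rule: α_new = α − 10^n · δ · ΔL,
    # where n = floor(log10(α)) scales the step to lr's magnitude.
    n = math.floor(math.log10(lr))
    return lr - (10 ** n) * delta * slope

# The same slope and delta move lr by the same *relative* amount
# (here a 25% decrease) regardless of its scale.
big = lr_step(0.1, 0.5, 0.5)      # ≈ 0.075
small = lr_step(0.001, 0.5, 0.5)  # ≈ 0.00075
```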
Examples
MNIST Classification
Clone the repository to access the examples:
git clone https://github.com/Thabhelo/pytorch-dlrs.git
cd pytorch-dlrs
python examples/mnist_example.py --epochs 10 --device cpu
View the example code on GitHub: examples/mnist_example.py
Results
According to the original paper, DLRS demonstrates:
- Accelerated training and improved stability for neural networks
- Effective performance on Physics-Informed Neural Networks (PINNs)
- Strong results on image classification tasks (MNIST, CIFAR-10)
- Adaptive learning rate adjustment based on loss dynamics
Testing
Run the test suite:
pytest tests/ -v
With coverage:
pytest tests/ --cov=dlrs --cov-report=html
Citation
If you use DLRS in your research, please cite the original paper:
@article{DHARANALAKOTA2025100697,
title = {Improving neural network training using dynamic learning rate schedule for PINNs and image classification},
author = {Veerababu Dharanalakota and Ashwin Arvind Raikar and Prasanta Kumar Ghosh},
journal = {Machine Learning with Applications},
volume = {21},
pages = {100697},
year = {2025},
issn = {2666-8270},
doi = {10.1016/j.mlwa.2025.100697},
url = {https://www.sciencedirect.com/science/article/pii/S2666827025000805},
keywords = {Adaptive learning, Multilayer perceptron, CNN, MNIST, CIFAR-10}
}
Code Author
Implementation by Thabhelo (thabhelo@deepubuntu.com)
Based on the research paper by Veerababu Dharanalakota, Ashwin Arvind Raikar, and Prasanta Kumar Ghosh.
License
MIT License. See LICENSE for details.
Contributing
Contributions are welcome. Please see CONTRIBUTING.md for guidelines.
File details
Details for the file pytorch_dlrs-0.2.1.tar.gz.
File metadata
- Download URL: pytorch_dlrs-0.2.1.tar.gz
- Upload date:
- Size: 23.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 931ae16d374c08ccfe153d9ffb8a5a0309cf0e14f68256d4b45f914a274530ba |
| MD5 | fd1beb84bf17b0319b91acad375d7216 |
| BLAKE2b-256 | 6ee34318bbe9a6bf3b313ecaf551d4ac30d56141809d9259d3926ebaad651a20 |
File details
Details for the file pytorch_dlrs-0.2.1-py3-none-any.whl.
File metadata
- Download URL: pytorch_dlrs-0.2.1-py3-none-any.whl
- Upload date:
- Size: 10.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b31c798df04ed50515ade5f17f469dfa5172d70c1015ec1b5ea3d54343c14f9d |
| MD5 | 8d0ddf7245712f2e9997a0cf467ada47 |
| BLAKE2b-256 | fc5d33a272331a757a77f8f0e81f35e7caa968ecebed8bdb06009f6bbcb2ef49 |