Optimum learning rate finder for PyTorch models
PyTorch Learning Rate Tuner
A Python package that plots loss against a range of learning rates for PyTorch neural network models and finds the optimal learning rate for a given optimizer.
Installation:
pip install pytorch-lr-tuner
Dependencies:
- Python 3.6
- NumPy
- Pandas
- Matplotlib
- PyTorch
Example:
The package includes a LearningRateFinder class, which is instantiated with a PyTorch model reference, an optimizer, a criterion, and a training set. The fit() method searches for the optimal learning rate by increasing the learning rate multiplicatively at each step while smoothing the recorded loss with an exponentially weighted average plus bias correction. The resulting loss log can be visualized by calling the plot() method.
from pytorch_lr_tuner import LearningRateFinder
import torch.nn as nn

# VanillaNet is a user-defined nn.Module class; train_set and val_set are prepared beforehand.
ESTIMATOR_CONFIG = {'input_shape': 21, 'output_shape': 1, 'hidden_units': [32, 64, 16]}
binary_crossentropy = nn.BCELoss()
lr_finder = LearningRateFinder(estimator=VanillaNet, config=ESTIMATOR_CONFIG, optimizer='sgd',
                               criterion=binary_crossentropy, train_set=train_set, val_set=val_set)
lr_finder.fit()
lr_finder.plot()
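The search that fit() performs is a standard learning-rate range test. The sketch below illustrates the general technique described above, not the package's actual implementation: the learning rate grows by a constant factor each step, and the loss is smoothed with a bias-corrected exponentially weighted average (train_step here is a hypothetical hook that returns one batch's loss at a given learning rate).

```python
def lr_range_test(train_step, lr_start=1e-7, lr_end=10.0, num_steps=100, beta=0.98):
    """Sweep the learning rate multiplicatively and record smoothed losses.

    train_step(lr) -> loss for one training batch at that learning rate.
    beta controls the exponentially weighted average; dividing by
    (1 - beta**step) bias-corrects the average for early steps.
    """
    # Constant multiplicative increment so lr_start reaches lr_end in num_steps.
    factor = (lr_end / lr_start) ** (1.0 / (num_steps - 1))
    lr, avg = lr_start, 0.0
    lrs, smoothed = [], []
    for step in range(1, num_steps + 1):
        loss = train_step(lr)
        avg = beta * avg + (1 - beta) * loss       # exponentially weighted average
        smoothed.append(avg / (1 - beta ** step))  # bias correction
        lrs.append(lr)
        lr *= factor
    return lrs, smoothed

# Toy loss landscape (stand-in for real training): loss falls until
# lr is around 0.1, then diverges quadratically.
toy = lambda lr: 1.0 / (1.0 + 10 * lr) + 100 * max(0.0, lr - 0.1) ** 2
lrs, losses = lr_range_test(toy)
```

On the toy landscape the smoothed curve dips and then blows up, mirroring the loss-vs-learning-rate shape the plot() output shows.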
Output:
(plot of smoothed loss against learning rate)
Here, the learning rate at which the loss decreases most steeply can be taken as the optimal one for this specific architecture.
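Reading the steepest-descent point off the plot can also be automated. A small sketch, assuming lrs and losses hold a finished sweep; steepest_lr is a hypothetical helper for illustration, not part of the package API:

```python
import numpy as np

def steepest_lr(lrs, losses):
    """Return the learning rate where the smoothed loss drops fastest.

    The gradient is taken with respect to log10(lr) because the learning
    rates were generated multiplicatively (evenly spaced on a log scale).
    """
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)
    grads = np.gradient(losses, np.log10(lrs))  # slope of loss vs. log10(lr)
    return lrs[np.argmin(grads)]                # most negative slope

# Synthetic smoothed-loss curve: a falling sigmoid whose steepest drop
# sits at log10(lr) = -2, i.e. lr = 1e-2.
lrs = np.logspace(-5, 0, 50)
losses = 1.0 / (1.0 + np.exp(4 * (np.log10(lrs) + 2)))
best = steepest_lr(lrs, losses)  # lands near 1e-2 on this curve
```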
Hashes for pytorch_lr_tuner-0.0.2-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 88f5b2a9de62db880fbb7639b70e92b62e613c2b78c82fd657018a4038eee25e |
| MD5 | 994eaa8dcb901123fd7b5078525af474 |
| BLAKE2b-256 | 71ce2681e7e3247fe362118a9d589f1d9fb13b18abaa358c746b81b72878dd7f |