PyTorch learning rate finder
A PyTorch implementation of the learning rate range test detailed in Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith and the tweaked version used by fastai.
The learning rate range test provides valuable information about the optimal learning rate. During a pre-training run, the learning rate is increased linearly or exponentially between two boundary values. The low initial learning rate allows the network to start converging, and as the learning rate increases it eventually becomes too large and the network diverges.
Typically, a good static learning rate can be found half-way down the descending part of the loss curve. In the plot below that would be lr = 0.002.

For cyclical learning rates (also detailed in Leslie Smith's paper), where the learning rate is cycled between two boundaries (start_lr, end_lr), the author advises picking the point at which the loss starts descending for start_lr and the point at which the loss stops descending or becomes ragged for end_lr. In the plot below, start_lr = 0.0002.
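To make the procedure concrete, here is a minimal sketch of an exponential range test loop, written for illustration only (it is not the library's implementation). It assumes model, criterion, optimizer and trainloader are already defined and live on the same device; the boundary values simply mirror the examples below.

def lr_range_test_sketch(model, criterion, optimizer, trainloader,
                         start_lr=1e-7, end_lr=10, num_iter=100):
    # Multiplicative factor so the learning rate grows from start_lr to
    # end_lr over num_iter iterations.
    gamma = (end_lr / start_lr) ** (1.0 / num_iter)
    lr = start_lr
    history = {"lr": [], "loss": []}
    data_iter = iter(trainloader)

    for _ in range(num_iter):
        # Set the current learning rate on every parameter group.
        for group in optimizer.param_groups:
            group["lr"] = lr

        try:
            inputs, labels = next(data_iter)
        except StopIteration:  # restart the loader if it runs out of batches
            data_iter = iter(trainloader)
            inputs, labels = next(data_iter)

        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

        history["lr"].append(lr)
        history["loss"].append(loss.item())

        # Stop early once the loss clearly diverges.
        if loss.item() > 4 * min(history["loss"]):
            break
        lr *= gamma

    return history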
Installation

Python 2.7 and above:

pip install torch-lr-finder

To install with support for mixed precision training (requires Python 3; see also the Mixed precision training section below):

pip install torch-lr-finder -v --global-option="amp"
Implementation details and usage
Tweaked version from fastai
This version increases the learning rate exponentially and computes the training loss for each learning rate. lr_finder.plot() plots the training loss against the learning rate on a logarithmic scale.
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-7, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=100, num_iter=100)
lr_finder.plot()  # to inspect the loss-learning rate graph
lr_finder.reset()  # to reset the model and optimizer to their initial state
Leslie Smith's approach
This version increases the learning rate linearly and computes the evaluation loss for each learning rate. lr_finder.plot() plots the evaluation loss against the learning rate.

This approach typically produces more precise curves, because the evaluation loss is more susceptible to divergence, but it takes significantly longer to run the test, especially when the evaluation dataset is large.
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, val_loader=val_loader, end_lr=1, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()
Notes

- Examples for CIFAR10 and MNIST can be found in the examples folder.
- The optimizer passed to LRFinder should not have an LRScheduler attached to it.
- LRFinder.range_test() will change the model weights and the optimizer parameters. Both can be restored to their initial state with lr_finder.reset().
- The learning rate and loss history can be accessed through lr_finder.history, which returns a dictionary with lr and loss keys (see the sketch after this list).
- When using step_mode="linear" the learning rate range should be within the same order of magnitude.
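For example, once the range test has finished, the recorded history can be inspected directly. The snippet below is an illustrative sketch (it uses NumPy, which the library does not require) that picks the learning rate at the steepest downward slope of the loss curve; the lr and loss keys are the ones described in the note above.

import numpy as np

history = lr_finder.history
lrs = np.array(history["lr"])
losses = np.array(history["loss"])

# The most negative gradient marks the steepest descent of the loss curve,
# which is one common heuristic for choosing a learning rate.
steepest = np.gradient(losses).argmin()
print(f"Steepest descent at lr = {lrs[steepest]:.2e}")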
Additional support for training
You can set the accumulation_steps parameter in LRFinder.range_test() with a proper value to perform gradient accumulation:
from torch.utils.data import DataLoader
from torch_lr_finder import LRFinder

desired_batch_size, real_batch_size = 32, 4
accumulation_steps = desired_batch_size // real_batch_size

dataset = ...

# Beware of the `batch_size` used by `DataLoader`
trainloader = DataLoader(dataset, batch_size=real_batch_size, shuffle=True)

model = ...
criterion = ...
optimizer = ...

lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp", accumulation_steps=accumulation_steps)
lr_finder.plot()
lr_finder.reset()
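For context, gradient accumulation sums the gradients of several small batches before performing a single optimizer step, emulating training with a larger effective batch size. The following is a minimal sketch of the idea itself, not of LRFinder's internals; model, criterion, optimizer, trainloader and accumulation_steps are assumed to be defined as in the example above.

optimizer.zero_grad()
for i, (inputs, labels) in enumerate(trainloader):
    loss = criterion(model(inputs), labels)
    # Scale the loss so the accumulated gradient matches one large batch.
    (loss / accumulation_steps).backward()
    if (i + 1) % accumulation_steps == 0:
        # Update the weights only once per `accumulation_steps` mini-batches.
        optimizer.step()
        optimizer.zero_grad()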
Mixed precision training
Currently, we use apex as the dependency for mixed precision training. To enable mixed precision training, you just need to call amp.initialize() before running LRFinder:
from torch_lr_finder import LRFinder
from apex import amp

# Add this line before running `LRFinder`
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

lr_finder = LRFinder(model, optimizer, criterion, device='cuda')
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode='exp')
lr_finder.plot()
lr_finder.reset()
Note that the benefit of mixed precision training requires an NVIDIA GPU with Tensor Cores (see also: NVIDIA/apex #297).
Besides, you can try setting torch.backends.cudnn.benchmark = True to improve the training speed, but it does not help in every case, so use it at your own risk.
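If you do enable it, the flag is simply a global PyTorch setting applied before the range test starts; a short sketch of where it would go is shown below, using the same objects as in the examples above.

import torch

# Optional: let cuDNN benchmark and select convolution algorithms. This can
# speed up training when input shapes are fixed, but may not help (or may
# even hurt) when input shapes vary between batches.
torch.backends.cudnn.benchmark = True

# ... then build LRFinder and run the range test as usual.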