PyTorch implementation of the learning rate range test

PyTorch learning rate finder

A PyTorch implementation of the learning rate range test detailed in Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith and the tweaked version used by fastai.

The learning rate range test provides valuable information about the optimal learning rate. During a pre-training run, the learning rate is increased linearly or exponentially between two boundaries. The low initial learning rate allows the network to start converging, and as the learning rate grows it eventually becomes too large and the network diverges.
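
For illustration only (this is not the package's internal implementation, and start_lr, end_lr and num_iter are placeholder values), the two sweep schedules can be written as:

import numpy as np

# Illustration of the two ways the learning rate can be swept from start_lr to
# end_lr over num_iter steps; not the package's internal code.
start_lr, end_lr, num_iter = 1e-7, 10.0, 100
steps = np.arange(num_iter) / (num_iter - 1)

linear_lrs = start_lr + steps * (end_lr - start_lr)        # linear schedule
exponential_lrs = start_lr * (end_lr / start_lr) ** steps  # exponential schedule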

Typically, a good static learning rate can be found halfway down the descending part of the loss curve. In the plot below, that would be lr = 0.002.

For cyclical learning rates (also detailed in Leslie Smith's paper), where the learning rate is cycled between two boundaries (start_lr, end_lr), the author advises choosing start_lr as the point at which the loss starts descending and end_lr as the point at which the loss stops descending or becomes ragged. In the plot below, start_lr = 0.0002 and end_lr = 0.2.

(Plot: learning rate range test, loss versus learning rate)
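
As a rough heuristic only (this is not a documented API of the package), a candidate static learning rate can also be read programmatically from lr_finder.history, described in the usage sections below, by locating the steepest downward slope of the loss curve:

import numpy as np

# Heuristic sketch, not part of the package API: pick the learning rate at the
# point where the loss decreases most steeply. Assumes lr_finder.range_test(...)
# has already been run (see the usage sections below).
lrs = np.array(lr_finder.history["lr"])
losses = np.array(lr_finder.history["loss"])
slopes = np.gradient(losses, np.log10(lrs))  # d(loss) / d(log10(lr))
suggested_lr = lrs[np.argmin(slopes)]        # steepest descent point
print(f"Suggested static learning rate: {suggested_lr:.2e}")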

Installation

Python 3.5 and above:

pip install torch-lr-finder

Install with support for mixed precision training (see also the Mixed precision training section below):

pip install torch-lr-finder -v --global-option="apex"

Implementation details and usage

Tweaked version from fastai

Increases the learning rate in an exponential manner and computes the training loss for each learning rate. lr_finder.plot() plots the training loss versus logarithmic learning rate.

import torch.nn as nn
import torch.optim as optim
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-7, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=100, num_iter=100)
lr_finder.plot() # to inspect the loss-learning rate graph
lr_finder.reset() # to reset the model and optimizer to their initial state

Leslie Smith's approach

Increases the learning rate linearly and computes the evaluation loss for each learning rate. lr_finder.plot() plots the evaluation loss versus learning rate. This approach typically produces more precise curves because the evaluation loss is more susceptible to divergence, but it takes significantly longer to perform the test, especially if the evaluation dataset is large.

import torch.nn as nn
import torch.optim as optim
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, val_loader=val_loader, end_lr=1, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()

Notes

  • Examples for CIFAR10 and MNIST can be found in the examples folder.
  • The optimizer passed to LRFinder should not have an LRScheduler attached to it.
  • LRFinder.range_test() will change the model weights and the optimizer parameters. Both can be restored to their initial state with LRFinder.reset().
  • The learning rate and loss history can be accessed through lr_finder.history. This will return a dictionary with lr and loss keys.
  • When using step_mode="linear" the learning rate range should be within the same order of magnitude.
  • LRFinder.range_test() expects a pair of input, label to be returned from the DataLoader objects passed to it. The input must be ready to be passed to the model and the label must be ready to be passed to the criterion without any further data processing/handling/conversion. If you find yourself needing a workaround, you can make use of the classes TrainDataLoaderIter and ValDataLoaderIter to perform any data processing/handling/conversion in between the DataLoader and the training/evaluation loop; a minimal sketch follows this list, and a full example can be found in examples/lrfinder_cifar10_dataloader_iter.
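
For instance, if your DataLoader yields dictionaries rather than (input, label) tuples, a thin wrapper can unpack each batch. The sketch below is only illustrative: the batch keys "img" and "label" are hypothetical, and it assumes the iterator classes expose the inputs_labels_from_batch() hook used in the examples folder.

from torch_lr_finder import LRFinder, TrainDataLoaderIter, ValDataLoaderIter

class CustomTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        # Hypothetical batch layout: a dict with "img" and "label" entries.
        return batch_data["img"], batch_data["label"]

class CustomValIter(ValDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        return batch_data["img"], batch_data["label"]

custom_train_iter = CustomTrainIter(trainloader)
custom_val_iter = CustomValIter(val_loader)

lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(custom_train_iter, val_loader=custom_val_iter, end_lr=1, num_iter=100, step_mode="linear")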

Additional support for training

Gradient accumulation

You can set the accumulation_steps parameter in LRFinder.range_test() to an appropriate value to perform gradient accumulation:

from torch.utils.data import DataLoader
from torch_lr_finder import LRFinder

desired_batch_size, real_batch_size = 32, 4
accumulation_steps = desired_batch_size // real_batch_size

dataset = ...

# Beware of the `batch_size` used by `DataLoader`
trainloader = DataLoader(dataset, batch_size=real_batch_size, shuffle=True)

model = ...
criterion = ...
optimizer = ...

# (Optional) With this setting, `amp.scale_loss()` will be adopted automatically.
# model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp", accumulation_steps=accumulation_steps)
lr_finder.plot()
lr_finder.reset()

Mixed precision training

Both apex.amp and torch.amp are now supported; here are examples for each:

  • Using apex.amp:

    from torch_lr_finder import LRFinder
    from apex import amp
    
    # Add this line before running `LRFinder`
    model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
    
    lr_finder = LRFinder(model, optimizer, criterion, device='cuda', amp_backend='apex')
    lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode='exp')
    lr_finder.plot()
    lr_finder.reset()
    
  • Using torch.amp:

    import torch
    from torch_lr_finder import LRFinder
    
    amp_config = {
        'device_type': 'cuda',
        'dtype': torch.float16,
    }
    grad_scaler = torch.cuda.amp.GradScaler()
    
    lr_finder = LRFinder(
        model, optimizer, criterion, device='cuda',
        amp_backend='torch', amp_config=amp_config, grad_scaler=grad_scaler
    )
    lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode='exp')
    lr_finder.plot()
    lr_finder.reset()
    

Note that the benefit of mixed precision training requires an NVIDIA GPU with Tensor Cores (see also: NVIDIA/apex #297).

You can also try setting torch.backends.cudnn.benchmark = True to improve training speed, but it does not help in every case, so use it at your own risk.
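
For example (the exact placement is only a suggestion), the flag can be set once at the top of your script before running the range test:

import torch

# Optional: let cuDNN benchmark convolution algorithms and cache the fastest
# ones. This tends to help only when the input sizes are fixed across iterations.
torch.backends.cudnn.benchmark = True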

Contributing and pull requests

All contributions are welcome, but first have a look at CONTRIBUTING.md.
