Keras implementation of One Cycle Policy and LR Finder
Keras-training-tools
Implementations of some very effective tools for training Deep Learning (DL) models that I came across while doing the fastai course, Practical Deep Learning for Coders.
The tools were first presented in the following papers by Leslie N. Smith:
- LR Finder: Cyclical Learning Rates for Training Neural Networks
- One Cycle Scheduler: A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay
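For a concrete sense of what the LR Finder (the "LR range test") does: train for a few hundred mini-batches while the learning rate is increased exponentially from a tiny value, record the loss at each step, and inspect the resulting curve. The sketch below is only illustrative and is not necessarily the API this package exposes; the class name `LRFinder` and its arguments (`min_lr`, `max_lr`, `num_steps`) are placeholders of my own, and it assumes a tf.keras-style optimizer whose learning rate can be updated via `keras.backend.set_value`.

```python
from tensorflow import keras
from tensorflow.keras import backend as K


class LRFinder(keras.callbacks.Callback):
    """Illustrative LR range test: ramp the learning rate exponentially
    over a few hundred batches and record the loss at each step."""

    def __init__(self, min_lr=1e-7, max_lr=1.0, num_steps=200):
        super().__init__()
        self.min_lr, self.max_lr, self.num_steps = min_lr, max_lr, num_steps
        self.lrs, self.losses = [], []
        self.step = 0

    def on_train_begin(self, logs=None):
        K.set_value(self.model.optimizer.lr, self.min_lr)

    def on_batch_end(self, batch, logs=None):
        loss = logs.get("loss")
        self.lrs.append(float(K.get_value(self.model.optimizer.lr)))
        self.losses.append(loss)

        # Exponential ramp from min_lr to max_lr over num_steps batches.
        self.step += 1
        new_lr = self.min_lr * (self.max_lr / self.min_lr) ** (self.step / self.num_steps)
        K.set_value(self.model.optimizer.lr, new_lr)

        # Stop once the ramp is finished or the loss clearly diverges.
        if self.step >= self.num_steps or loss > 4 * min(self.losses):
            self.model.stop_training = True
```

You would run it for roughly one epoch, e.g. `model.fit(x, y, epochs=1, callbacks=[LRFinder()])`, then plot `losses` against `lrs` on a logarithmic x-axis and look for the region where the loss drops fastest.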
My implementations are a port of the fastai library's code (originally written in PyTorch) to Keras, and are heavily inspired by some earlier efforts in this direction.
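To illustrate the one-cycle policy itself, here is a minimal sketch of a Keras callback under my own naming, not necessarily this package's interface: the learning rate rises from `max_lr / div_factor` to `max_lr` over the first portion of training and then anneals back down, while momentum moves in the opposite direction. The cosine interpolation and the default hyper-parameters below follow fastai's conventions but are assumptions on my part.

```python
import math

from tensorflow import keras
from tensorflow.keras import backend as K


class OneCycle(keras.callbacks.Callback):
    """Illustrative 1cycle schedule: LR rises from max_lr / div_factor to
    max_lr over the first `pct_start` of training, then anneals back down,
    while momentum moves in the opposite direction."""

    def __init__(self, max_lr, total_steps, pct_start=0.3, div_factor=25.0,
                 max_mom=0.95, min_mom=0.85):
        super().__init__()
        self.max_lr, self.total_steps = max_lr, total_steps
        self.pct_start, self.div_factor = pct_start, div_factor
        self.max_mom, self.min_mom = max_mom, min_mom
        self.step = 0

    @staticmethod
    def _cosine(start, end, pct):
        # Cosine interpolation from `start` (pct=0) to `end` (pct=1).
        return end + (start - end) / 2.0 * (1.0 + math.cos(math.pi * pct))

    def on_batch_begin(self, batch, logs=None):
        up = max(1, int(self.total_steps * self.pct_start))
        if self.step <= up:
            pct = self.step / up
            lr = self._cosine(self.max_lr / self.div_factor, self.max_lr, pct)
            mom = self._cosine(self.max_mom, self.min_mom, pct)
        else:
            pct = min(1.0, (self.step - up) / max(1, self.total_steps - up))
            lr = self._cosine(self.max_lr, self.max_lr / (self.div_factor * 1e4), pct)
            mom = self._cosine(self.min_mom, self.max_mom, pct)

        K.set_value(self.model.optimizer.lr, lr)
        # Only optimizers with a momentum hyper-parameter (e.g. SGD) are cycled.
        if hasattr(self.model.optimizer, "momentum"):
            K.set_value(self.model.optimizer.momentum, mom)
        self.step += 1
```

`total_steps` would be set to `steps_per_epoch * epochs`, and `max_lr` to a value chosen from the LR Finder curve.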
Here's another article I referred to: How Do You Find A Good Learning Rate by Sylvain Gugger of fastai, which provides an intuitive understanding of how fastai's LR Finder works.
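The rule of thumb described there is roughly: smooth the recorded losses with an exponential moving average and pick a learning rate a bit before the minimum of the curve, e.g. the point where the smoothed loss is still falling fastest. A small sketch of that selection step (the helper name `suggest_lr` is mine; it expects the `lrs`/`losses` lists recorded by a finder like the one above):

```python
import numpy as np


def suggest_lr(lrs, losses, beta=0.98):
    """Smooth the loss curve and return the LR at the steepest descent."""
    avg, smoothed = 0.0, []
    for i, loss in enumerate(losses, start=1):
        avg = beta * avg + (1 - beta) * loss
        smoothed.append(avg / (1 - beta ** i))  # bias-corrected EMA

    # Most negative slope of the smoothed loss w.r.t. log(lr).
    grads = np.gradient(smoothed, np.log(lrs))
    return lrs[int(np.argmin(grads))]
```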
I'll keep updating this repository with new tools I come across that could be practically useful for training DL models.
Hashes for keras_one_cycle_lr-0.0.1-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | e39e058530d3fb0478252fdd2f082ebebfb331c6711d835f05bbd51c7353ddb3 |
| MD5 | 24c6cdce4dab3fee652eab397aabd1b3 |
| BLAKE2b-256 | e1a09a34d6514c11d6507ccd9fcbae65b356b38ec45b4b161a0f36d2f15e0e9b |