
Project description

Implementation of the LAMB optimizer from https://arxiv.org/abs/1904.00962 for large-batch, large-learning-rate training.

The paper doesn't specify clamp values for ϕ (the scaling function applied to each layer's weight norm), so I use 10.
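
A minimal sketch of that clamp (illustrative only, not the package's exact code), assuming ϕ is the scaling function the paper applies to each layer's weight norm before the layer-wise trust ratio is formed:

import torch

# Hypothetical helper names; the real implementation lives in the package source.
def phi(weight_norm: torch.Tensor, clamp_value: float = 10.0) -> torch.Tensor:
    # The paper leaves phi's bounds open; clamp the norm to [0, 10] as noted above.
    return weight_norm.clamp(0, clamp_value)

def trust_ratio(weight: torch.Tensor, adam_step: torch.Tensor) -> torch.Tensor:
    weight_norm = phi(weight.norm())
    step_norm = adam_step.norm()
    if weight_norm == 0 or step_norm == 0:
        return torch.tensor(1.0)  # fall back to the unscaled Adam step
    return weight_norm / step_norm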

Bonus: TensorboardX logging (example below).
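
The logging itself is wired up in test_lamb.py; a bare-bones sketch of the idea (the tag name here is made up), using tensorboardX's SummaryWriter, which writes under ./runs by default:

from tensorboardX import SummaryWriter

writer = SummaryWriter()                    # creates a run directory under ./runs
writer.add_scalar("loss/train", 0.42, 100)  # (tag, scalar value, global step)
writer.close()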

Try the sample

git clone git@github.com:cybertronai/pytorch-lamb.git
cd pytorch-lamb
pip install -e .
python test_lamb.py
tensorboard --logdir=runs
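
Outside the sample script, the optimizer drops into an ordinary training loop. A minimal sketch, assuming the package exposes a Lamb class with an Adam-like signature (lr, betas, weight_decay); check the package source for the exact import path:

import torch
import torch.nn.functional as F
from pytorch_lamb import Lamb  # assumed import path

model = torch.nn.Linear(784, 10)
# Large learning rate plus weight decay, mirroring the sample flags below.
optimizer = Lamb(model.parameters(), lr=0.02, weight_decay=0.01, betas=(0.9, 0.999))

x = torch.randn(512, 784)        # a "large batch" of dummy inputs
y = torch.randint(0, 10, (512,))
for step in range(10):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()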

Sample results

At --lr=.02, the Adam optimizer is unable to train.

Red: python test_lamb.py --batch-size=512 --lr=.02 --wd=.01 --log-interval=30 --optimizer=adam

Blue: python test_lamb.py --batch-size=512 --lr=.02 --wd=.01 --log-interval=30 --optimizer=lamb

Download files

Source Distribution

pytorch_lamb-1.0.0.tar.gz (3.1 kB)

Built Distribution

pytorch_lamb-1.0.0-py3-none-any.whl (4.4 kB)
