
Learning Rate Free Learning for Adam, SGD and AdaGrad

Project description

D-Adaptation


Learning rate free learning for SGD, AdaGrad and Adam!

by Aaron Defazio and Konstantin Mishchenko (arXiv)

pip install dadaptation

NEW: The V3.0 release uses an improved algorithm that may give different results from past versions. The old version is still available under experimental/d_adapt_adam_preprint.

NEW: Prodigy

We have recently released the Prodigy method, which grows the adapted learning rate faster than D-Adaptation in theory and practice. Try it out if D-Adaptation is under-estimating the learning rate.
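
As a hedged illustration only: Prodigy is distributed as a separate package. Assuming it is installed as prodigyopt and exposes a Prodigy optimizer class (neither is documented on this page, so treat both names as assumptions), swapping it in for a D-Adaptation optimizer would look roughly like this; the weight_decay value is a placeholder.

# pip install prodigyopt
from prodigyopt import Prodigy

# As with D-Adaptation, lr=1.0 scales the adapted learning rate estimate.
optimizer = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01)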

How To Cite

If you use D-Adaptation in a publication, please cite our work as

@ARTICLE{defazio2023dadapt,
    author  = {Aaron Defazio and Konstantin Mishchenko},
    title   = {Learning-Rate-Free Learning by D-Adaptation},
    journal = {The 40th International Conference on Machine Learning (ICML 2023)},
    year    = {2023}
}

Details

The provided PyTorch optimizer classes are drop-in replacements: either copy them into your project or install via pip and use dadaptation.DAdaptSGD, dadaptation.DAdaptAdam or dadaptation.DAdaptAdaGrad. A usage sketch follows the list below.

  • Set the LR parameter to 1.0. This parameter is not ignored. Setting it larger or smaller will directly scale up or down the D-Adapted learning rate estimate.
  • Different per-layer learning rates can be achieved by setting the layer_scale value in each parameter-group. It defaults to 1.0, and scales each layer's learning rate relative to the other layers.
  • Use the same learning rate scheduler you would normally use on the problem.
  • The Adam variant supports AdamW-style weight decay; just set decouple=True. It is not turned on by default, so if you are replacing an AdamW implementation, make sure to set decouple=True.
  • It may be necessary to use larger weight decay than you would normally use; try a factor of 2 or 4 larger if you see overfitting. D-Adaptation uses larger learning rates than people typically hand-choose, and in some cases that requires more decay.
  • Use the log_every setting to see the learning rate being used (d*lr) and the current D bound.
  • Only the AdaGrad version supports sparse gradients. It does not adapt as efficiently as the other variants and should be considered experimental.
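
The following is a minimal usage sketch of the points above. The model, data, scheduler choice, weight_decay value, and log_every interval are illustrative placeholders, not recommendations; only the parameter names (lr, decouple, log_every, weight_decay, layer_scale) come from the notes above.

import torch
import dadaptation

# Toy model and data purely for illustration.
model = torch.nn.Linear(10, 1)
data = torch.randn(64, 10)
target = torch.randn(64, 1)
loss_fn = torch.nn.MSELoss()

# lr stays at 1.0; it scales the D-Adapted learning rate estimate.
# decouple=True enables AdamW-style (decoupled) weight decay.
# log_every logs the learning rate in use (d*lr) and the current D bound.
optimizer = dadaptation.DAdaptAdam(
    model.parameters(),
    lr=1.0,
    weight_decay=0.02,   # consider 2-4x your usual value if you see overfitting
    decouple=True,
    log_every=10,
)

# Use the same scheduler you would normally use on the problem.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    scheduler.step()

Per-layer learning rates are set through parameter groups. A short sketch, again using the toy model above and arbitrary scale values:

optimizer = dadaptation.DAdaptAdam(
    [
        {"params": [model.weight], "layer_scale": 1.0},  # default scale
        {"params": [model.bias], "layer_scale": 0.1},    # relatively smaller LR for this group
    ],
    lr=1.0,
)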

Change Log

Version 3.2

  • Added support for layer-wise scaling to DAdaptAdam.

Version 3.0

  • Major improvements to DAdaptAdam, particularly improving performance on Transformer models. This variant may behave differently in practice. The old version is available under experimental/d_adapt_adam_preprint if you wish to continue using it.
  • The IP variant is now the main variant of the method.
  • Added Lion. This is highly experimental. Feedback on its performance is welcome.

Version 2.0

  • Added Adan - should still be considered experimental.
  • Added support for PyTorch's Fully Sharded Data Parallel.
  • Improved support of edge cases such as learning rate zero.
  • Improved logging: uses Python's logging module rather than print statements.

Experimental results

[Figures: experimental results on vision benchmarks]

License

See the License file.
