Learning Rate Free Learning for Adam, SGD and AdaGrad
Project description
D-Adaptation
Learning rate free learning for SGD, AdaGrad and Adam!
Details
The provided PyTorch optimizer classes can be dropped into your project and used as normal; a usage sketch follows the notes below.
- Set the LR parameter to 1.0. This parameter is not ignored; setting it larger or smaller directly scales the D-adapted learning rate up or down.
- If you encounter divergence early on, try changing rho to match a reasonable warmup schedule for your problem.
- Use the same learning rate scheduler you would normally use on the problem.
- The Adam variant supports AdamW-style weight decay; just set decouple=True. It is not enabled by default, so if you are replacing an AdamW implementation, make sure to enable decoupled weight decay where appropriate.
- Use the log_every setting to see the learning rate being used (d*lr) and the current D bound.
- Only the AdaGrad version supports sparse gradients.
- The IP variants implement a tighter D bound.
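The sketch below shows one way these options fit together in a training loop. It assumes the DAdaptAdam class name and the decouple and log_every keyword arguments described above; exact names and defaults may differ between releases, so check the version you have installed.

```python
# Minimal usage sketch for the Adam variant (class name and keyword arguments
# assumed from the notes above; adjust to your installed version).
import torch
import dadaptation

model = torch.nn.Linear(10, 2)

# Keep lr at 1.0: D-Adaptation chooses the step size, and lr only rescales it.
optimizer = dadaptation.DAdaptAdam(
    model.parameters(),
    lr=1.0,
    weight_decay=1e-2,
    decouple=True,   # AdamW-style decoupled weight decay (off by default)
    log_every=100,   # periodically report d*lr and the current D bound
)

# Any scheduler you would normally use can wrap the optimizer as usual.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
```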
Experimental results
License
See the License file.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
dadaptation-1.1.tar.gz (7.5 kB)
File details
Details for the file dadaptation-1.1.tar.gz.
File metadata
- Download URL: dadaptation-1.1.tar.gz
- Upload date:
- Size: 7.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/52.0.0.post20210125 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.7.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8e117d13a1bdbc71d3d1b2f75ee36c0ddf28189d1ca731b585cdb8892987c0d4
MD5 | 953f15b409d0fba63f3ba91d6495c82f
BLAKE2b-256 | ff7da868b0165b9cdb0c9ce307e84046f0208ff8a49a0cfa35b99795e52d0278
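As a quick integrity check, the downloaded source distribution can be compared against the SHA256 digest listed above. This is a minimal sketch using the standard library; the filename is the one listed in this section.

```python
# Verify the downloaded sdist against the SHA256 digest from the table above.
import hashlib

expected = "8e117d13a1bdbc71d3d1b2f75ee36c0ddf28189d1ca731b585cdb8892987c0d4"

with open("dadaptation-1.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "hash mismatch")
```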