RAdam implemented in TensorFlow 1.x
RAdam-Tensorflow
On the Variance of the Adaptive Learning Rate and Beyond
Paper | Official PyTorch code
Usage
from RAdam import RAdamOptimizer

# `loss` is any scalar loss tensor from your TF 1.x graph.
train_op = RAdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, weight_decay=0.0).minimize(loss)
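For context, here is a minimal end-to-end sketch in TF 1.x graph mode, assuming the constructor signature shown above; the toy regression graph and the placeholder/variable names are illustrative, not part of this package:

```python
import tensorflow as tf  # TensorFlow 1.x
from RAdam import RAdamOptimizer

# Toy linear-regression graph (TF 1.x placeholder style).
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([10, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

# Same constructor arguments as the snippet above.
train_op = RAdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999,
                          weight_decay=0.0).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Per training step, feed a batch and run the op, e.g.:
    # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```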
Algorithm
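In brief, RAdam keeps Adam's exponential moment estimates but rectifies the adaptive learning rate while its variance is still intractable in early steps, falling back to an un-adapted momentum update until enough samples have accumulated. A minimal NumPy sketch of one update step, following the paper's pseudocode; the function name and signature here are illustrative, not this repo's API:

```python
import numpy as np

def radam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update as described in the paper (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment

    rho_inf = 2.0 / (1.0 - beta2) - 1.0                       # max length of the SMA
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance is tractable: apply the rectified adaptive step.
        v_hat = np.sqrt(v / (1 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                      ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: fall back to an un-adapted momentum update.
        param = param - lr * m_hat
    return param, m, v
```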
Result
Author
Hashes for tf_1.x_rectified_adam-0.0.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 0541a62722ee99ae62044fcfcd685602c8e2d2dee4419dbcf20b85404cf8ed9d
MD5 | 061d2a6808fded0c60b09b9bffb7f07b
BLAKE2b-256 | 036ee0485e1a6c93bee9105c7dec62ee484dfdd7b1b5e879ca04a2362281d3e9