Package for gradient accumulation in TensorFlow

Project description

GradientAccumulator


This repo contains a TensorFlow 2 compatible implementation of accumulated gradients.

The implementation simply overloads the train_step method of a given tf.keras.Model so that weight updates are applied only after a user-specified number of accumulation steps. This enables gradient accumulation, which reduces memory consumption and makes arbitrarily large effective batch sizes possible (among other things), at the cost of increased training runtime.
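The arithmetic behind this can be illustrated without TensorFlow: summing gradients over accum_steps mini-batches and applying their average reproduces the full-batch gradient, provided the loss averages over samples. A minimal NumPy sketch (all names here are illustrative, not part of the package):

```python
import numpy as np

# Toy linear model: loss = mean((x @ w - y)^2); its gradient wrt w is analytic.
def grad(w, x, y):
    return 2 * x.T @ (x @ w - y) / len(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.normal(size=(8,))
w = np.zeros(3)

accum_steps = 4
minibatches = np.split(np.arange(8), accum_steps)  # 4 mini-batches of 2 samples

# Accumulate gradients over accum_steps mini-batches, then average once.
acc = np.zeros_like(w)
for idx in minibatches:
    acc += grad(w, x[idx], y[idx])
avg_grad = acc / accum_steps

# For a sample-averaged loss, this equals the full-batch gradient.
full_grad = grad(w, x, y)
```

This is why accumulation can mimic a large batch while only ever holding one small mini-batch in memory.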

The implementation is compatible with and has been tested against TF >= 2.2 and Python >= 3.6, and works cross-platform (Ubuntu, Windows, macOS).

Install

Stable release from PyPI:

pip install gradient-accumulator

Or from source:

pip install git+https://github.com/andreped/GradientAccumulator

Usage

from gradient_accumulator.GAModelWrapper import GAModelWrapper
from tensorflow.keras.models import Model

model = Model(...)
model = GAModelWrapper(accum_steps=4, inputs=model.input, outputs=model.output)

Then simply use the model as you normally would!

Mixed precision

Experimental support for mixed precision has also been added:

from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy('mixed_float16')
model = GAModelWrapper(accum_steps=4, mixed_precision=True, inputs=model.input, outputs=model.output)

Adaptive gradient clipping

Support for adaptive gradient clipping has also been added, based on this implementation:

model = GAModelWrapper(accum_steps=4, use_agc=True, clip_factor=0.01, eps=1e-3, inputs=model.input, outputs=model.output)

The values shown here for clip_factor and eps are the defaults.
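The rescaling rule behind adaptive gradient clipping (AGC) is simple to state: a gradient is scaled down whenever its norm exceeds clip_factor times the norm of the corresponding weight, with eps guarding against near-zero weights. A simplified, per-tensor NumPy sketch (the referenced implementation operates unit-wise, per row or column of a weight matrix; this version only illustrates the rule):

```python
import numpy as np

def adaptive_clip(grad, weight, clip_factor=0.01, eps=1e-3):
    """Per-tensor sketch of adaptive gradient clipping (AGC).

    Rescales the gradient whenever its norm exceeds clip_factor times
    the weight norm; eps guards against near-zero weights.
    """
    w_norm = max(np.linalg.norm(weight), eps)
    g_norm = np.linalg.norm(grad)
    max_norm = clip_factor * w_norm
    if g_norm > max_norm:
        grad = grad * (max_norm / g_norm)  # rescale so the ratio equals clip_factor
    return grad
```

After clipping, the gradient-to-weight norm ratio never exceeds clip_factor, which is what makes the clipping "adaptive" to the scale of each parameter.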

Disclaimer

In theory, batch training and gradient accumulation should produce identical results. In practice, one may observe a slight difference. One cause is operations (layers, optimizers, etc.) that update on every step, such as Batch Normalization. Using BN with GA is not recommended, as BN would then update too frequently; however, you could try adjusting the momentum of BN (see here).
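The momentum adjustment can be motivated with a little arithmetic: under GA with accum_steps mini-batches, BN's moving statistics update accum_steps times per effective batch, so raising the momentum to the accum_steps-th root of the desired per-batch momentum roughly compensates. This is a heuristic sketch, not something the package does for you; all names below are illustrative:

```python
import numpy as np

def update_running_mean(running, batch_mean, momentum):
    # Moving-average update rule used by BatchNormalization layers.
    return momentum * running + (1.0 - momentum) * batch_mean

accum_steps = 4
m_effective = 0.99                                # momentum you would use with full batches
m_adjusted = m_effective ** (1.0 / accum_steps)   # heuristic per-mini-batch momentum

# Under GA, BN updates once per mini-batch: accum_steps small steps
# toward the batch statistic (here a constant 1.0 for illustration).
running = 0.0
for _ in range(accum_steps):
    running = update_running_mean(running, 1.0, m_adjusted)
```

After accum_steps small updates with the adjusted momentum, the running mean lands exactly where one full-batch update with the original momentum would have put it (for a constant batch statistic).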

A small difference was also observed when using adaptive optimizers, which I believe may be due to how frequently they update their internal state. Nonetheless, the difference was quite small, and one may approximate batch training quite well using our GA implementation, as rigorously tested here.

TODOs:

  • Add generic wrapper class for adding accumulated gradients to any optimizer
  • Add CI to build wheel and test that it works across different python versions, TF versions, and operating systems.
  • Add benchmarks to verify that accumulated gradients actually work as intended
  • Add class_weight support
  • GAModelWrapper gets expected identical results to batch training!
  • Test method for memory leaks
  • Add multi-input/-output architecture support
  • Add mixed precision support
  • Add adaptive gradient clipping support
  • Add wrapper class for BatchNormalization layer, similar as done for optimizers
  • Add proper multi-GPU support

Acknowledgements

The gradient accumulator model wrapper is based on the implementation presented in this thread on Stack Overflow.

The adaptive gradient clipping method is based on the implementation by @sayakpaul.

This repository serves as an open solution for everyone to use, until TF/Keras integrates a proper solution into their framework(s).

Troubleshooting

Overloading of the train_step method of tf.keras.Model was introduced in TF 2.2; hence, this code is compatible with TF >= 2.2.

Also, note that each TF release supports only a specific range of Python versions. If you are having problems getting TF working, try a different TF or Python version.

For TF 1, I suggest using the AccumOptimizer implementation in the H2G-Net repository instead, which wraps the optimizer rather than overloading the train_step of the Model itself (a new feature in TF 2).
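The optimizer-wrapping approach mentioned above can be sketched generically: buffer incoming gradients and delegate to the wrapped optimizer only every accum_steps calls. The class below is a hypothetical, framework-free illustration of that idea, not the H2G-Net API:

```python
import numpy as np

class AccumulatingOptimizer:
    """Illustrative optimizer wrapper: buffers gradients and delegates
    to the inner optimizer only every accum_steps calls."""

    def __init__(self, inner_apply, accum_steps):
        self.inner_apply = inner_apply  # function (grad, params) -> new params
        self.accum_steps = accum_steps
        self._buf = None
        self._count = 0

    def apply_gradients(self, grad, params):
        self._buf = grad if self._buf is None else self._buf + grad
        self._count += 1
        if self._count == self.accum_steps:
            params = self.inner_apply(self._buf / self.accum_steps, params)
            self._buf, self._count = None, 0
        return params

# Plain SGD (learning rate 0.1) as the inner optimizer.
sgd = lambda g, p: p - 0.1 * g

opt = AccumulatingOptimizer(sgd, accum_steps=2)
p = np.zeros(2)
p = opt.apply_gradients(np.array([1.0, 1.0]), p)  # buffered, no update yet
p = opt.apply_gradients(np.array([3.0, 3.0]), p)  # applies the mean gradient
```

Wrapping the optimizer this way works in any framework version, which is why it was the natural route for TF 1, where train_step could not be overloaded.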

How to cite

If you use this package in your research, please cite this reference:

@software{andre_pedersen_2022_6671449,
  author       = {André Pedersen},
  title        = {andreped/GradientAccumulator: v0.1.5},
  month        = jun,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {v0.1.5},
  doi          = {10.5281/zenodo.6671449},
  url          = {https://doi.org/10.5281/zenodo.6671449}
}

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distribution

gradient_accumulator-0.1.5-py3-none-any.whl (10.0 kB)

File details

Details for the file gradient_accumulator-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: gradient_accumulator-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 10.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.8.1 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.7.9

File hashes

Hashes for gradient_accumulator-0.1.5-py3-none-any.whl
Algorithm Hash digest
SHA256 d9302b01d1d96fd03cc7e248ec48daecb8c1f39ace0d9257cd1418886facfceb
MD5 27b2038f973f9dcec66965f33473671b
BLAKE2b-256 f969a8bd73b0449c2dbe6d0b1d209591db84a2dc2f12d6d024830e6607059add

See more details on using hashes here.
