Package for gradient accumulation in TensorFlow

Project description

gradient-accumulator

GradientAccumulator

Seamless gradient accumulation for TensorFlow 2

GradientAccumulator was developed by SINTEF Health due to the lack of an easy-to-use method for gradient accumulation in TensorFlow 2.

The package is available on PyPI, has been tested against TensorFlow 2.2-2.12 and Python 3.6-3.12, and works cross-platform (Ubuntu, Windows, macOS).

Continuous integration

Code coverage: codecov
Documentation: Read the Docs
Unit tests: CI

Install

Stable release from PyPI:

pip install gradient-accumulator

Or from source:

pip install git+https://github.com/andreped/GradientAccumulator

Quickstart

A simple way to add gradient accumulation to an existing model:

from gradient_accumulator import GradientAccumulateModel
from tensorflow.keras.models import Model

model = Model(...)
model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output)

Then simply use the model as you normally would!

In practice, using gradient accumulation with a custom pipeline might require some extra overhead and tricks to get working.

For more information, see the documentation hosted at gradientaccumulator.readthedocs.io.

What?

Gradient accumulation (GA) reduces GPU memory consumption by dividing a batch into smaller micro-batches and computing their gradients either sequentially on the same GPU or distributed across multiple GPUs. Once the full batch has been processed, the accumulated gradients are combined to produce the full-batch gradient.
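The idea can be illustrated with a minimal pure-Python sketch (toy loss and hypothetical helper names, not the package's implementation): averaging the gradients of the micro-batches recovers exactly the full-batch gradient.

```python
# Toy per-sample loss L(w) = 0.5 * (w - target)^2, so the gradient is w - target.
def sample_grad(w, target):
    """Gradient of 0.5*(w - target)^2 with respect to w."""
    return w - target

def full_batch_grad(w, batch):
    """Gradient computed over the whole batch at once."""
    return sum(sample_grad(w, t) for t in batch) / len(batch)

def accumulated_grad(w, batch, accum_steps):
    """Split the batch into accum_steps micro-batches and average their gradients."""
    size = len(batch) // accum_steps
    total = 0.0
    for step in range(accum_steps):
        micro = batch[step * size:(step + 1) * size]
        micro_grad = sum(sample_grad(w, t) for t in micro) / len(micro)
        total += micro_grad  # accumulate instead of updating the weights
    return total / accum_steps  # identical to the full-batch gradient

w = 3.0
batch = [1.0, 2.0, 4.0, 5.0]
print(full_batch_grad(w, batch))      # 0.0
print(accumulated_grad(w, batch, 2))  # 0.0
```

Only one micro-batch of activations is ever held in memory at a time, which is where the memory saving comes from.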

Why?

In TensorFlow 2, there did not exist a plug-and-play method to use gradient accumulation with any custom pipeline. Hence, we have implemented two generic TF2-compatible approaches:

Method Usage
GradientAccumulateModel model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output)
GradientAccumulateOptimizer opt = GradientAccumulateOptimizer(accum_steps=4, optimizer=tf.keras.optimizers.SGD(1e-2))

Both approaches control how frequently the weights are updated, each in its own way. Approach (1) is single-GPU only, whereas (2) supports both single-GPU and distributed (multi-GPU) training. However, note that (2) does not yet work as intended, so use (1) for most applications.
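The optimizer-wrapper pattern behind approach (2) can be sketched in a few lines of plain Python (hypothetical class, not the package's API): gradients are buffered and a real weight update is only applied every accum_steps calls.

```python
class AccumulatingOptimizer:
    """Toy sketch of an accumulating optimizer wrapper (illustrative only):
    buffers incoming gradients and applies the averaged gradient once
    every `accum_steps` calls."""

    def __init__(self, accum_steps, lr=0.1):
        self.accum_steps = accum_steps
        self.lr = lr
        self._buffer = 0.0  # accumulated gradient
        self._count = 0     # micro-batches seen since last update

    def apply(self, w, grad):
        self._buffer += grad
        self._count += 1
        if self._count == self.accum_steps:
            w -= self.lr * (self._buffer / self.accum_steps)  # one real update
            self._buffer, self._count = 0.0, 0
        return w

opt = AccumulatingOptimizer(accum_steps=4, lr=0.1)
w = 1.0
for g in [0.4, 0.8, 1.2, 1.6]:  # four micro-batch gradients
    w = opt.apply(w, g)
print(w)  # ~0.9: one update with the mean gradient (~1.0) scaled by lr
```

In TensorFlow the same bookkeeping happens per-variable with non-trainable accumulator slots, but the control flow is the same.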

Our implementations enable a theoretically unbounded batch size, with the same memory consumption as a regular mini-batch. On a single GPU this comes at the cost of increased training runtime; multiple GPUs can be used to improve runtime performance.

Technique Usage
Adaptive Gradient Clipping model = GradientAccumulateModel(accum_steps=4, agc=True, inputs=model.input, outputs=model.output)
Batch Normalization layer = AccumBatchNormalization(accum_steps=4)
Mixed precision model = GradientAccumulateModel(accum_steps=4, mixed_precision=True, inputs=model.input, outputs=model.output)

As batch normalization (BN) is not natively compatible with GA, we have implemented a custom BN layer which can be used as a drop-in replacement.
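The incompatibility is easy to see with a small sketch (illustrative only, not the AccumBatchNormalization implementation): a plain BN layer normalizes with the statistics of the *current* micro-batch, so each accumulation step sees different statistics; accumulating the statistics across micro-batches recovers the full-batch mean.

```python
def microbatch_means(batch, accum_steps):
    """Mean of each micro-batch after splitting `batch` into accum_steps parts."""
    size = len(batch) // accum_steps
    return [sum(batch[i * size:(i + 1) * size]) / size for i in range(accum_steps)]

batch = [1.0, 3.0, 5.0, 7.0]
full_mean = sum(batch) / len(batch)               # statistics over the full batch
accum_mean = sum(microbatch_means(batch, 2)) / 2  # accumulated micro-batch statistics
print(full_mean, accum_mean)  # 4.0 4.0

# A plain BN layer would instead normalize micro-batch 1 with mean 2.0 and
# micro-batch 2 with mean 6.0, neither of which matches the full-batch mean.
```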

Support for adaptive gradient clipping has been added as an alternative to BN.
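The core idea of adaptive gradient clipping can be sketched as follows (hypothetical helper and default values, not the package's API): the gradient norm is clipped relative to the parameter norm, so the clipping threshold adapts per layer instead of being a fixed global constant.

```python
def agc_clip(grad_norm, weight_norm, clip_factor=0.01, eps=1e-3):
    """Clip a gradient norm to a fraction of the corresponding weight norm."""
    max_norm = clip_factor * max(weight_norm, eps)  # adaptive threshold
    if grad_norm > max_norm:
        return max_norm  # gradient would be rescaled down to this norm
    return grad_norm

print(agc_clip(grad_norm=5.0, weight_norm=100.0))  # clipped to 0.01 * 100 = 1.0
print(agc_clip(grad_norm=0.5, weight_norm=100.0))  # within threshold: 0.5
```

Because the threshold scales with the weights, large layers tolerate proportionally larger gradients, which is what makes AGC a workable substitute for the stabilizing effect of BN.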

Mixed precision can also be utilized on both GPUs and TPUs.

For more information on usage, supported techniques, and examples, refer to the documentation.

Acknowledgements

The gradient accumulator model wrapper is based on the implementation presented in this thread on Stack Overflow. The adaptive gradient clipping method is based on the implementation by @sayakpaul. The optimizer wrapper is derived from the implementation by @fsx950223 and @stefan-falk.

The documentation hosted here was made possible by the incredible Read the Docs team, which offers free documentation hosting!

How to cite?

If you used this package or found the project relevant in your research, please consider including the following citation:

@software{andre_pedersen_2023_7831244,
  author       = {André Pedersen and
                  Javier Pérez de Frutos and
                  David Bouget},
  title        = {andreped/GradientAccumulator: v0.4.0},
  month        = apr,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.4.0},
  doi          = {10.5281/zenodo.7831244},
  url          = {https://doi.org/10.5281/zenodo.7831244}
}

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gradient-accumulator-0.4.1.tar.gz (12.3 kB view details)

Uploaded Source

Built Distribution

gradient_accumulator-0.4.1-py3-none-any.whl (12.2 kB view details)

Uploaded Python 3

File details

Details for the file gradient-accumulator-0.4.1.tar.gz.

File metadata

  • Download URL: gradient-accumulator-0.4.1.tar.gz
  • Upload date:
  • Size: 12.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gradient-accumulator-0.4.1.tar.gz
Algorithm Hash digest
SHA256 587326bad2148c1369aa7e443192d7d016ea94a3f1e70d1d9f920615825c82a9
MD5 436d1c8d26cc0da04b87432cbbea313f
BLAKE2b-256 c0c5658c17710c98962bcbc954ead2b48ae164a9657c2cadb9d46c29b6b67e86

See more details on using hashes here.

File details

Details for the file gradient_accumulator-0.4.1-py3-none-any.whl.

File metadata

File hashes

Hashes for gradient_accumulator-0.4.1-py3-none-any.whl
Algorithm Hash digest
SHA256 96255367a1fb11e169f82e8cc2330a5e51e3cd85605662b7aca639ed2841f46f
MD5 d958747e6f1aa52b60141b8261c7d1f2
BLAKE2b-256 760aeb75896b81c7f1c8a2f8757fc64933c0de3e0c0e40f8506f06d9e27175bd

See more details on using hashes here.
