

Project description

optimī

Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers

optimi enables accurate low precision training via Kahan summation, integrates gradient release and optimizer accumulation for additional memory efficiency, supports fully decoupled weight decay, and features fast implementations of modern optimizers.

Low Precision Training with Kahan Summation

optimi optimizers can nearly reach or match the performance of mixed precision when training in BFloat16 by using Kahan summation.

Training in BFloat16 with Kahan summation can reduce non-activation training memory usage by 37.5 to 45.5 percent when using an Adam optimizer. BFloat16 training increases single GPU training speed by ~10 percent at the same batch size.
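
The core idea behind Kahan summation is to keep a per-parameter compensation buffer that captures the rounding error lost when a small update is added to a BFloat16 weight, and to fold that error back into the next update. Below is a minimal sketch of a Kahan-compensated in-place addition (illustrative only, not optimi's internal code; the kahan_add_ helper and compensation buffer are named here just for the example):

import torch

def kahan_add_(param, update, compensation):
    # fold the error lost on previous steps back into this step's update
    compensation.add_(update)
    # apply the compensated update to the low precision parameter
    old_param = param.clone()
    param.add_(compensation)
    # keep whatever part of the update did not survive BFloat16 rounding
    compensation.add_(old_param.sub_(param))

# a tiny update that plain BFloat16 addition would lose entirely
param = torch.ones(4, dtype=torch.bfloat16)
compensation = torch.zeros_like(param)
for _ in range(100):
    kahan_add_(param, torch.full_like(param, 1e-4), compensation)
print(param)  # ~1.008 with Kahan summation vs 1.0 with plain BFloat16 addition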

Gradient Release: Fused Backward and Optimizer Step

optimi optimizers can perform the optimization step layer-by-layer during the backward pass, immediately freeing gradient memory.

Unlike the current PyTorch implementation, optimi's gradient release optimizers are a drop-in replacement for standard optimizers and work seamlessly with existing hyperparameter schedulers.
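
The underlying mechanism is the one PyTorch exposes through Tensor.register_post_accumulate_grad_hook: as soon as a parameter's gradient has been accumulated during backward, a per-parameter optimizer step runs and the gradient is freed. A rough sketch of that mechanism follows (not optimi's implementation, which wraps this pattern behind gradient_release=True and prepare_for_gradient_release):

import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

# one small optimizer per parameter so each can be stepped independently
opts = {p: torch.optim.AdamW([p], lr=1e-3) for p in model.parameters()}

def step_and_free(param):
    # update this parameter as soon as its gradient is ready,
    # then free the gradient before backward finishes
    opts[param].step()
    opts[param].zero_grad()

for p in model.parameters():
    p.register_post_accumulate_grad_hook(step_and_free)

loss = model(torch.randn(8, 20)).sum()
loss.backward()  # parameters update layer-by-layer as gradients arrive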

Optimizer Accumulation: Gradient Release and Accumulation

optimi optimizers can approximate gradient accumulation with gradient release by accumulating gradients into the optimizer states.
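
The approximation works by folding each micro-batch gradient straight into the optimizer's moment buffers and skipping the parameter update until the final micro-batch, so gradients never need to be held across micro-batches. A rough, self-contained sketch of the idea for an Adam-style update (bias correction omitted; this is not optimi's implementation):

import torch

torch.manual_seed(0)
param = torch.randn(10)
exp_avg, exp_avg_sq = torch.zeros(10), torch.zeros(10)
beta1, beta2, lr, eps = 0.9, 0.99, 1e-3, 1e-8

micro_batch_grads = [torch.randn(10) for _ in range(4)]  # stand-in gradients
for i, grad in enumerate(micro_batch_grads):
    # accumulate each micro-batch gradient directly into the optimizer states
    exp_avg.lerp_(grad, 1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # only the final micro-batch performs the actual parameter update
    if i == len(micro_batch_grads) - 1:
        param.addcdiv_(exp_avg, exp_avg_sq.sqrt().add_(eps), value=-lr)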

Fully Decoupled Weight Decay

In addition to supporting PyTorch-style decoupled weight decay, optimi optimizers also support fully decoupled weight decay.

Fully decoupled weight decay decouples weight decay from the learning rate, more accurately following Decoupled Weight Decay Regularization. This can help simplify hyperparameter tuning as the optimal weight decay is no longer tied to the learning rate.
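
The difference is easiest to see in the decay term alone. With PyTorch-style decoupled weight decay the per-step decay is multiplied by the learning rate, so retuning the learning rate silently changes the effective decay; with fully decoupled weight decay it is not, which is why much smaller weight_decay values (e.g. 1e-5 instead of 1e-2) are appropriate. A simplified illustration of just the decay term (not optimi's full update):

import torch

param = torch.randn(10)
lr = 1e-3

# PyTorch-style decoupled weight decay (AdamW): decay is scaled by lr
wd = 1e-2
param.mul_(1 - lr * wd)

# fully decoupled weight decay: decay is applied independently of lr
wd = 1e-5
param.mul_(1 - wd)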

Foreach Implementations

All optimi optimizers have fast foreach implementations, which can significantly outperform the for-loop versions. optimi reuses the gradient buffer for temporary variables to reduce foreach memory usage.
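
Foreach implementations replace the Python loop over parameters with horizontally fused torch._foreach_* calls that operate on whole lists of tensors at once, trading a small amount of temporary memory for far fewer dispatches, which is why optimi reuses the gradient buffer for those temporaries. A small illustration using PyTorch's private foreach ops (for illustration only, not optimi's code):

import torch

params = [torch.randn(1_000) for _ in range(10)]
grads = [torch.randn(1_000) for _ in range(10)]
lr = 1e-3

# for-loop version: one op (and on GPU, one kernel launch) per parameter tensor
for p, g in zip(params, grads):
    p.add_(g, alpha=-lr)

# foreach version: a single fused call over the whole list of tensors
torch._foreach_add_(params, grads, alpha=-lr)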

Documentation

https://optimi.benjaminwarner.dev

Install

optimi is available to install from PyPI:

pip install torch-optimi

Usage

To use an optimi optimizer with Kahan summation and fully decoupled weight decay:

import torch
from torch import nn
from optimi import AdamW

# create or cast model in low precision (bfloat16)
model = nn.Linear(20, 1, dtype=torch.bfloat16)

# initialize any optimi optimizer with parameters & fully decoupled weight decay
# Kahan summation is automatically enabled since model & inputs are bfloat16
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)

# forward and backward, casting input to bfloat16 if needed
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()

# optimizer step
opt.step()
opt.zero_grad()

To use PyTorch-style weight decay with float32 or mixed precision:

# create model
model = nn.Linear(20, 1)

# initialize any optimi optimizer with parameters
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

To use with gradient release:

# import the gradient release helpers alongside the optimizer
from optimi import AdamW, prepare_for_gradient_release, remove_gradient_release
from torch.optim.lr_scheduler import CosineAnnealingLR

# initialize any optimi optimizer with `gradient_release=True`
# and call `prepare_for_gradient_release` on model and optimizer
opt = AdamW(model.parameters(), lr=1e-3, gradient_release=True)
prepare_for_gradient_release(model, opt)

# setup a learning rate scheduler like normal
scheduler = CosineAnnealingLR(opt, ...)

# calling backward on the model will perform the optimizer step
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()

# optimizer step and zero_grad are no longer needed, and will
# harmlessly no-op if called by an existing training framework
# opt.step()
# opt.zero_grad()

# step the learning rate scheduler like normal
scheduler.step()

# optionally remove gradient release hooks when done training
remove_gradient_release(model)

To use with optimizer accumulation:

# initialize any optimi optimizer with `gradient_release=True`
# and call `prepare_for_gradient_release` on model and optimizer
opt = AdamW(model.parameters(), lr=1e-3, gradient_release=True)
prepare_for_gradient_release(model, opt)

# update model parameters every four steps after accumulating
# gradients directly into the optimizer states
accumulation_steps = 4

# setup a learning rate scheduler for gradient accumulation
scheduler = CosineAnnealingLR(opt, ...)

# use existing PyTorch dataloader
for idx, batch in enumerate(dataloader):
    # `optimizer_accumulation=True` accumulates gradients into
    # optimizer states. set `optimizer_accumulation=False` to
    # update parameters by performing a full gradient release step
    opt.optimizer_accumulation = (idx+1) % accumulation_steps != 0

    # calling backward on the model will perform the optimizer step
    # either accumulating gradients or updating model parameters
    loss = model(batch)
    loss.backward()

    # optimizer step and zero_grad are no longer needed, and will
    # harmlessly no-op if called by an existing training framework
    # opt.step()
    # opt.zero_grad()

    # step the learning rate scheduler after accumulating gradients
    if not opt.optimizer_accumulation:
        scheduler.step()

# optionally remove gradient release hooks when done training
remove_gradient_release(model)

Differences from PyTorch

optimi optimizers do not support compilation, differentiation, or complex numbers, and do not have capturable versions.

optimi's Adam optimizers do not support AMSGrad, and its SGD does not support Nesterov momentum. Optimizers which debias updates (the Adam optimizers and Adan) calculate the debias term per parameter group, not per parameter.

Optimizers

optimi implements the following optimizers: Adam, AdamW, Adan, Lion, RAdam, Ranger, SGD, & StableAdamW
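
Every optimizer in the list is initialized the same way and accepts the same optimi-specific arguments shown above. An illustrative example with StableAdamW (see the documentation for each optimizer's full argument list):

from torch import nn
from optimi import StableAdamW

model = nn.Linear(20, 1)

# same optimi-specific arguments as the AdamW examples above
opt = StableAdamW(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)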

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

torch_optimi-0.2.1.tar.gz (20.1 kB)

Uploaded Source

Built Distribution

torch_optimi-0.2.1-py3-none-any.whl (37.8 kB)

Uploaded Python 3

File details

Details for the file torch_optimi-0.2.1.tar.gz.

File metadata

  • Download URL: torch_optimi-0.2.1.tar.gz
  • Upload date:
  • Size: 20.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for torch_optimi-0.2.1.tar.gz

  • SHA256: 31bf5f11d4ddd8fd995f3b148411b3a761d1a6e77052e1996a1f97a0dda6dd2b
  • MD5: 44879459b0144040c7538747a678f1b5
  • BLAKE2b-256: df75e1b4d39318abd3c45542d4eed85ef45421f131e1343760e96afd35e6eb71

See more details on using hashes here.

File details

Details for the file torch_optimi-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: torch_optimi-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 37.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for torch_optimi-0.2.1-py3-none-any.whl

  • SHA256: d466b76c849290bc06420b3e555dd6ea95c22189217fc800db080aef77d16e6b
  • MD5: d7f17539a07e5c102783f1b77881523e
  • BLAKE2b-256: d0397139b48fad1c3abe33978ef47fa9735350197055adf4d51b49a481ea44d3

See more details on using hashes here.
