
Efficient optimizers

Project description

HeavyBall

[!IMPORTANT]
The SOAP implementation was broken until 0.9.0. Please upgrade to 0.9.0 or later.

A simple package of efficient optimizers

The goal is not to strive for completeness, full maintenance, or abstraction, but instead to provide a simple, largely static alternative to torch.optim with more and better optimizers.

Currently (2024-11-16, 0.14.3), the recommended stable optimizer is PrecondSchedulePaLMForeachSOAP (see below). The recommended experimental optimizer is ForeachPSGDKron.

Features

  • Stochastic Rounding: FP32 convergence with BF16 parameters (the idea is sketched after this list)
  • Inplace EMA: Same math, but less memory, less compute and higher stability
  • Foreach: Fast multi-tensor application
  • PaLM Beta2: Fast initial convergence, stable late convergence
  • ScheduleFree: No learning rate schedule, but better convergence
  • Preconditioner Schedule: Improved loss-per-step in early convergence, better step-per-second in late convergence (explained below)
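
For reference, the stochastic-rounding idea behind the first feature can be illustrated in a few lines of plain PyTorch. This is a standalone sketch of the concept, not heavyball's internal implementation:

import torch

def stochastic_round_to_bf16(x: torch.Tensor) -> torch.Tensor:
    # BF16 is FP32 with the lowest 16 mantissa bits dropped. Adding uniform
    # random noise to exactly those bits before truncating makes the rounding
    # unbiased, so small parameter updates survive in expectation instead of
    # being swallowed by round-to-nearest.
    bits = x.float().view(torch.int32)
    noise = torch.randint(0, 1 << 16, bits.shape, dtype=torch.int32, device=x.device)
    rounded = (bits + noise) & -65536  # zero out the 16 bits BF16 discards
    return rounded.view(torch.float32).to(torch.bfloat16)

# A 1e-4 update is far below one BF16 ulp at 1.0; stochastic rounding keeps it on average.
param = torch.ones(10_000, dtype=torch.bfloat16)
param = stochastic_round_to_bf16(param.float() + 1e-4)
print(param.float().mean())  # ~1.0001 on average; round-to-nearest would stay at exactly 1.0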

Getting started

pip install heavyball

import torch
import heavyball

# Create a model
model = torch.nn.Linear(16, 1)

# Create an optimizer
optimizer = heavyball.PrecondSchedulePaLMForeachSOAP(model.parameters(), lr=1e-3)

x = torch.randn(128, 16)
y = torch.randn(128, 1)

for _ in range(1000):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
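
The feature list above advertises FP32-quality convergence with BF16 parameters via stochastic rounding. If you want to exercise that path, the same loop should work with a BF16 model; this is an untested sketch based on that feature description, not an official recipe:

# Hypothetical BF16 variant of the loop above (relies on the stochastic-rounding feature)
model = torch.nn.Linear(16, 1).bfloat16()
optimizer = heavyball.PrecondSchedulePaLMForeachSOAP(model.parameters(), lr=1e-3)

x = torch.randn(128, 16, dtype=torch.bfloat16)
y = torch.randn(128, 1, dtype=torch.bfloat16)

for _ in range(1000):
    optimizer.zero_grad()
    torch.nn.functional.mse_loss(model(x), y).backward()
    optimizer.step()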

Optimizers

Each optimizer is listed with a short description, followed by its advantages (+) and disadvantages (-).

ForeachAdamW: More efficient (speed, memory) AdamW
  + Faster than AdamW
  + Possibly more (numerically) stable

ForeachLaProp: More efficient (speed, memory) LaProp
  + Same cost as AdamW
  + Marginally better convergence (better proofs)
  + Higher hyperparameter stability
  - Not a guaranteed win (can be neutral)
  - No "Slingshot"

ForeachADOPT: More efficient (speed, memory) ADOPT
  + Same cost as AdamW
  + Rigorous mathematical convergence proofs, even for challenging models (GANs)
  - Empirically underperforms LaProp
  - No bf16 support

ForeachSFAdamW: More efficient (speed, memory) ScheduleFree AdamW
  + Same cost as AdamW, but better eval performance
  + Full control over hyperparameters

PaLMForeachSFAdamW: ForeachSFAdamW with PaLM's beta2 schedule
  + Same cost as AdamW, but better eval performance
  + Less control, but faster early and more stable late convergence
  + ScheduleFree
  - Slow early convergence

ForeachSOAP: More efficient (speed, memory) SOAP
  + Faster convergence (loss-at-step)
  + Full control over hyperparameters
  - More memory usage
  - More hyperparameters
  - Higher overhead than AdamW (can be amortized; better loss-at-second)

PaLMForeachSOAP: ForeachSOAP with PaLM's beta2 schedule
  + Faster convergence (loss-at-step)
  + Less control, but faster early and more stable late convergence
  - More memory usage
  - More hyperparameters
  - Higher overhead than AdamW (can be amortized; better loss-at-second)

SFPaLMForeachSOAP: ScheduleFree PaLMForeachSOAP
  + Fast convergence (loss-at-step)
  + Less memory usage than PaLMForeachSOAP (more than AdamW)
  - Slower initial convergence than PaLMForeachSOAP (but allows higher LRs)
  - Higher overhead than AdamW (can be amortized)

PrecondScheduleSFPaLMForeachSOAP: SFPaLMForeachSOAP with a preconditioner schedule, matching the error of PrecondEvery=2 at the cost of PrecondEvery=512
  + Better initial convergence than SFPaLMForeachSOAP
  + Significantly faster (sec/it) later
  + Less memory usage than PaLMForeachSOAP (more than AdamW)
  - Slower initial convergence than PaLMForeachSOAP (but allows higher LRs)
  - Higher overhead than AdamW (can be amortized), goes to 0 with an increasing number of steps

PrecondSchedulePaLMForeachSOAP: PrecondScheduleSFPaLMForeachSOAP without schedule-free
  + Best initial convergence
  + Significantly faster (sec/it) later
  + High stability
  - More memory usage than PrecondScheduleSFPaLMForeachSOAP
  - Higher overhead than AdamW (can be amortized), goes to 0 with an increasing number of steps

PrecondScheduleForeachSOAP: PrecondScheduleSFPaLMForeachSOAP without PaLM's beta2 schedule
  + Better initial convergence
  + Significantly faster (sec/it) later
  - More memory usage than PrecondScheduleSFPaLMForeachSOAP
  - Higher overhead than AdamW (can be amortized), goes to 0 with an increasing number of steps
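
All of these classes are constructed the same way as in the getting-started example, so swapping optimizers is a one-line change. A minimal sketch (only lr is shown; each class exposes additional hyperparameters, see its signature):

import torch
import heavyball

model = torch.nn.Linear(16, 1)

# Recommended stable choice (see above)
optimizer = heavyball.PrecondSchedulePaLMForeachSOAP(model.parameters(), lr=1e-3)

# AdamW-like alternatives with the same construction
# optimizer = heavyball.ForeachAdamW(model.parameters(), lr=1e-3)
# optimizer = heavyball.ForeachLaProp(model.parameters(), lr=1e-3)

# Recommended experimental choice
# optimizer = heavyball.ForeachPSGDKron(model.parameters(), lr=1e-3)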

Precond Schedule

The default preconditioner schedule (f) would yield the following update intervals:

Steps       Interval (f)   Total (schedule)   Total (constant, every 2)   Total (constant, every 16)
10          1.00005        10                 5 (0.5x)                    0 (0.0x)
100         1.026          99                 50 (0.5x)                   6 (0.1x)
1,000       2.0            738                500 (0.7x)                  62 (0.1x)
10,000      14.3           2,168              5,000 (2.3x)                625 (0.3x)
100,000     100.2          4,049              50,000 (12.3x)              6,250 (1.5x)
1,000,000   513            7,245              500,000 (69.0x)             62,500 (8.6x)
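
The "Total" columns can be read as "how many preconditioner updates happen within the first N steps". A small sketch of that counting logic (the real default schedule f is defined inside heavyball and is not reproduced here):

def count_precond_updates(f, steps):
    # An update fires whenever the number of steps since the last update
    # reaches the current interval f(step); the "Total" columns count these.
    total, since_last = 0, 0
    for step in range(1, steps + 1):
        since_last += 1
        if since_last >= f(step):
            total += 1
            since_last = 0
    return total

# The constant-interval comparison columns from the table:
print(count_precond_updates(lambda step: 2, 1_000))   # 500, i.e. "every 2"
print(count_precond_updates(lambda step: 16, 1_000))  # 62, i.e. "every 16"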

Memory

Second-order optimizers make it difficult to estimate memory usage, as it depends on tensor shapes and hyperparameters. To estimate your memory usage, you may use test/test_memory.py, which attempts to ensure there are no regressions.
Furthermore, you can find the real-world memory usage of a 300M-parameter video diffusion model in img.png.
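
For a quick point estimate on your own model, the standard CUDA memory counters work as well. A rough sketch (CUDA-only; numbers depend heavily on shapes and hyperparameters):

import torch
import heavyball

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = heavyball.PrecondSchedulePaLMForeachSOAP(model.parameters(), lr=1e-3)

torch.cuda.reset_peak_memory_stats()
for _ in range(3):  # a few steps so the optimizer state is fully materialized
    optimizer.zero_grad()
    model(torch.randn(64, 4096, device="cuda")).square().mean().backward()
    optimizer.step()
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")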

Utils

heavyball.utils has to be imported explicitly (import heavyball.utils); it is not pulled in by import heavyball alone.
It has several handy functions and settings (usage sketched after this list):

  • set_torch() sets pytorch optimization settings (TF32, opt_einsum, benchmark, ...)
  • compile_mode, a string passed as-is to torch.compile(mode=compile_mode) in all compiled heavyball calls
  • zeroth_power_mode, a string selecting how the eigenvectors are approximated: QR, newtonschulz{iterations}, svd, or eigh; eigh has the highest precision and cost
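
A usage sketch; the exact strings accepted by compile_mode and zeroth_power_mode are assumptions here, so check heavyball.utils for the authoritative values:

import heavyball
import heavyball.utils  # not pulled in by `import heavyball` alone

heavyball.utils.set_torch()  # enables TF32, opt_einsum, cuDNN benchmark, ...

# Module-level settings (assumed example values for illustration):
heavyball.utils.compile_mode = "max-autotune"  # forwarded to torch.compile(mode=...)
heavyball.utils.zeroth_power_mode = "qr"       # or e.g. "newtonschulz5", "svd", "eigh"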

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

heavyball-0.14.4.tar.gz (28.1 kB)

Built Distribution

heavyball-0.14.4-py3-none-any.whl (42.4 kB)

File details

Details for the file heavyball-0.14.4.tar.gz.

File metadata

  • Download URL: heavyball-0.14.4.tar.gz
  • Upload date:
  • Size: 28.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for heavyball-0.14.4.tar.gz

  • SHA256: 175c9b7b0ea70277026e5029df501296de7ec2a3b92cc17e193387830bb3788f
  • MD5: 9a7fa4e4cdf457a7fca2e9871d6aafb5
  • BLAKE2b-256: 8291c6f2479cce245de71c8dbe7388d663cf927bdcd83a4fbfcdfdc77c1521ce

See more details on using hashes here.

File details

Details for the file heavyball-0.14.4-py3-none-any.whl.

File metadata

  • Download URL: heavyball-0.14.4-py3-none-any.whl
  • Upload date:
  • Size: 42.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for heavyball-0.14.4-py3-none-any.whl

  • SHA256: 93c959062b34204ddb9232b12fec67b298cd5fe9cd4d89a12133872a2ba9fdc5
  • MD5: df0dabd6c811b6c0cff1cfca4b7c754a
  • BLAKE2b-256: 8f684cba62f344325e56b6c72e8c44f998f8095a6111535c3b0bef002a3d2391

See more details on using hashes here.
