
Library to simplify autograd computations in PyTorch

Project description

autograd_lib

By Yaroslav Bulatov, Kazuki Osawa

Library to simplify gradient computations in PyTorch.

Example 1: per-example gradient norms

An example of computing per-example gradient norms for linear layers, using the trick from https://arxiv.org/abs/1510.01799

See example_norms.py for a runnable example. The important parts:

!pip install autograd-lib

import torch
from autograd_lib import autograd_lib

loss_fn = ...
data = ...
model = ...
autograd_lib.register(model)


activations = {}

def save_activations(layer, A, _):
    activations[layer] = A

with autograd_lib.module_hook(save_activations):
    output = model(data)
    loss = loss_fn(output)

n = len(data)  # number of examples in the batch
norms = [torch.zeros(n)]

def per_example_norms(layer, _, B):
    A = activations[layer]
    norms[0]+=(A*A).sum(dim=1)*(B*B).sum(dim=1)

with autograd_lib.module_hook(per_example_norms):
    loss.backward()

print('per-example gradient norms squared:', norms[0])
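The identity behind this trick can be checked standalone with plain PyTorch; the layer, shapes, and quadratic loss below are illustrative stand-ins, not part of autograd_lib:

```python
# Standalone check of the per-example-gradient-norm identity
# (https://arxiv.org/abs/1510.01799) with plain PyTorch.
# The layer, shapes, and sum-of-squares loss are illustrative only.
import torch

torch.manual_seed(0)
n, d_in, d_out = 4, 3, 2
layer = torch.nn.Linear(d_in, d_out, bias=False)
X = torch.randn(n, d_in)

# Slow reference: one backward pass per example
ref = []
for i in range(n):
    loss_i = layer(X[i:i + 1]).pow(2).sum()
    (g,) = torch.autograd.grad(loss_i, layer.weight)
    ref.append((g * g).sum())
ref = torch.stack(ref)

# Trick: for a linear layer, ||grad_n||^2 = ||A_n||^2 * ||B_n||^2,
# where A is the layer input and B is dloss/d(output) per example.
A = X
B = 2 * layer(X)  # gradient of the sum-of-squares loss w.r.t. the output
fast = (A * A).sum(dim=1) * (B * B).sum(dim=1)

print(torch.allclose(ref, fast, atol=1e-5))  # True
```

The per-example gradient of a linear layer is the outer product of B_n and A_n, so its squared Frobenius norm factorizes into the product of the two squared vector norms, which is what the hooked version above accumulates.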

Example 2: Hessian quantities

This example computes the exact Hessian, the Hessian diagonal, and the KFAC approximation for all linear layers of a ReLU network in a single pass.

See example_hessian.py for a self-contained example. The important parts:

!pip install autograd-lib

import torch
from collections import defaultdict
from attrdict import AttrDefault
from autograd_lib import autograd_lib

autograd_lib.register(model)

hess = defaultdict(float)
hess_diag = defaultdict(float)
hess_kfac = defaultdict(lambda: AttrDefault(float))

activations = {}
def save_activations(layer, A, _):
    activations[layer] = A

    # KFAC left factor
    hess_kfac[layer].AA += torch.einsum("ni,nj->ij", A, A)

with autograd_lib.module_hook(save_activations):
    output = model(data)
    loss = loss_fn(output, targets)

def compute_hess(layer, _, B):
    A = activations[layer]
    BA = torch.einsum("nl,ni->nli", B, A)

    # full Hessian
    hess[layer] += torch.einsum('nli,nkj->likj', BA, BA)

    # Hessian diagonal
    hess_diag[layer] += torch.einsum("ni,nj->ij", B * B, A * A)

    # KFAC right factor
    hess_kfac[layer].BB += torch.einsum("ni,nj->ij", B, B)


with autograd_lib.module_hook(compute_hess):
    autograd_lib.backward_hessian(output, loss='CrossEntropy')

Variations:

  • autograd_lib.backward_hessian for Hessian
  • autograd_lib.backward_jacobian for Jacobian squared
  • loss.backward() for empirical Fisher Information Matrix
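Once the two KFAC factors are accumulated, the approximation of a layer's Hessian block is their Kronecker product. A hedged sketch of assembling it, where AA and BB stand in for hess_kfac[layer].AA and hess_kfac[layer].BB above and the shapes are illustrative:

```python
# Hedged sketch: assembling the KFAC approximation from the accumulated
# factors. AA/BB stand in for hess_kfac[layer].AA and .BB; shapes are
# illustrative, not taken from the library.
import torch

torch.manual_seed(0)
n, d_out, d_in = 8, 2, 3
A = torch.randn(n, d_in)             # layer inputs
B = torch.randn(n, d_out)            # backpropagated values
AA = torch.einsum("ni,nj->ij", A, A)
BB = torch.einsum("nl,nk->lk", B, B)

# With weights flattened (out, in) row-major, the full Hessian block
# hess[layer].reshape(d_out * d_in, -1) is approximated by kron(BB, AA) / n,
# i.e. the expectation over examples factorizes into the two second moments.
H_kfac = torch.kron(BB, AA) / n
print(H_kfac.shape)  # torch.Size([6, 6])
```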

See autograd_lib_test.py for correctness checks against PyTorch autograd.
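The einsum identities in compute_hess can also be sanity-checked directly: the cheap Hessian-diagonal expression equals the diagonal of the full block. A small standalone check (shapes illustrative):

```python
# Check that the cheap Hessian-diagonal einsum from compute_hess matches
# the diagonal of the full Hessian block; shapes are illustrative.
import torch

torch.manual_seed(0)
n, d_out, d_in = 5, 2, 3
A = torch.randn(n, d_in)
B = torch.randn(n, d_out)

BA = torch.einsum("nl,ni->nli", B, A)
hess = torch.einsum("nli,nkj->likj", BA, BA)         # full block
hess_diag = torch.einsum("ni,nj->ij", B * B, A * A)  # cheap diagonal

diag = torch.einsum("lili->li", hess)                # extract the diagonal
print(torch.allclose(diag, hess_diag, atol=1e-5))  # True
```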

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Files for autograd-lib, version 0.0.7:

  • autograd_lib-0.0.7-py3-none-any.whl (9.2 kB, Wheel, Python 3)
  • autograd-lib-0.0.7.tar.gz (8.5 kB, Source)
