autograd_lib
By Yaroslav Bulatov, Kazuki Osawa
Library to simplify gradient computations in PyTorch.
An example of computing the exact Hessian, the Hessian diagonal, and the KFAC approximation for all linear layers of a model in a single pass:
```python
from collections import defaultdict

import torch
from attrdict import AttrDefault
from autograd_lib import autograd_lib

# model, data, targets and loss_fn are assumed to be defined.
autograd_lib.register(model)

hess = defaultdict(float)
hess_diag = defaultdict(float)
hess_kfac = defaultdict(lambda: AttrDefault(float))

activations = {}

def save_activations(layer, A, _):
    activations[layer] = A

    # KFAC left factor
    hess_kfac[layer].AA += torch.einsum("ni,nj->ij", A, A)

with autograd_lib.module_hook(save_activations):
    output = model(data)
    loss = loss_fn(output, targets)

def compute_hess(layer, _, B):
    A = activations[layer]
    BA = torch.einsum("nl,ni->nli", B, A)

    # full Hessian
    hess[layer] += torch.einsum("nli,nkj->likj", BA, BA)

    # Hessian diagonal
    hess_diag[layer] += torch.einsum("ni,nj->ij", B * B, A * A)

    # KFAC right factor
    hess_kfac[layer].BB += torch.einsum("ni,nj->ij", B, B)

with autograd_lib.module_hook(compute_hess):
    autograd_lib.backward_hessian(output, loss='CrossEntropy')
```
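
After the backward pass, each dictionary holds one entry per linear layer. A minimal sketch of reading the results out (the reshape and Kronecker assembly below are our illustration, not part of the library's API):

```python
for layer in hess:
    d_out, d_in = layer.weight.shape

    # Exact Hessian of the loss w.r.t. this layer's weight,
    # flattened to a (d_out*d_in, d_out*d_in) matrix.
    H = hess[layer].reshape(d_out * d_in, d_out * d_in)

    # KFAC approximates the same matrix as a Kronecker product of the
    # two accumulated factors, up to normalization by the number of
    # samples n (both factors are sums over the batch).
    n = len(data)
    H_kfac = torch.kron(hess_kfac[layer].BB, hess_kfac[layer].AA) / n
```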
Variations:

- `autograd_lib.backward_hessian` for Hessian
- `autograd_lib.backward_jacobian` for Jacobian squared
- `loss.backward()` for empirical Fisher Information Matrix (sketched below)
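
For the empirical Fisher case, the hooks stay the same and only the backward call changes. A minimal sketch reusing `save_activations` from the example above (`fisher` and `compute_fisher` are our names, not library API):

```python
fisher = defaultdict(float)

def compute_fisher(layer, _, B):
    A = activations[layer]
    BA = torch.einsum("nl,ni->nli", B, A)
    # Sum of per-example gradient outer products: the empirical Fisher.
    fisher[layer] += torch.einsum("nli,nkj->likj", BA, BA)

with autograd_lib.module_hook(save_activations):
    output = model(data)
    loss = loss_fn(output, targets)

with autograd_lib.module_hook(compute_fisher):
    loss.backward()  # plain backward pass instead of backward_hessian
```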
See autograd_lib_test.py for correctness checks against PyTorch autograd.
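
For an independent sanity check of your own, the exact Hessian can also be obtained from `torch.autograd.functional.hessian`. A minimal sketch on a one-layer model (illustrative only, not the contents of autograd_lib_test.py, and assuming a sum-reduced loss so that the batch accumulation matches):

```python
import torch
import torch.nn.functional as F

W = torch.randn(3, 4, requires_grad=True)  # one linear layer: d_out=3, d_in=4
X = torch.randn(5, 4)                      # batch of 5 inputs
y = torch.randint(0, 3, (5,))              # class labels

def loss_wrt_weight(w):
    return F.cross_entropy(X @ w.t(), y, reduction='sum')

H_ref = torch.autograd.functional.hessian(loss_wrt_weight, W)
# H_ref has shape (3, 4, 3, 4), directly comparable to hess[layer] above.
```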