Library for Jacobian Descent with PyTorch.
TorchJD
TorchJD is a library extending autograd to enable Jacobian descent with PyTorch. It can be used to train neural networks with multiple objectives. In particular, it supports multi-task learning, with a wide variety of aggregators from the literature. It also enables the instance-wise risk minimization paradigm. The full documentation is available at torchjd.org, with several usage examples.
Jacobian descent (JD)
Jacobian descent is an extension of gradient descent supporting the optimization of vector-valued functions. This algorithm can be used to train neural networks with multiple loss functions. In this context, JD iteratively updates the parameters of the model using the Jacobian matrix of the vector of losses (the matrix whose rows are the gradients of the individual losses). For more details, please refer to Section 2.1 of the paper.
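Concretely, writing $\theta \in \mathbb{R}^n$ for the parameters and $f_1, \dots, f_m$ for the losses (our notation here; Section 2.1 of the paper uses its own), the Jacobian stacks the gradients row-wise, and a JD step with aggregator $\mathcal A$ and step size $\eta$ is:

$$
J = \begin{bmatrix} \nabla f_1(\theta)^\top \\ \vdots \\ \nabla f_m(\theta)^\top \end{bmatrix} \in \mathbb{R}^{m \times n},
\qquad
\theta \leftarrow \theta - \eta \, \mathcal A(J).
$$

Gradient descent on the average of the losses is recovered as the special case where $\mathcal A$ simply averages the rows of $J$.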
How does this compare to averaging the different losses and using gradient descent?
Averaging the losses and computing the gradient of the mean is mathematically equivalent to computing the Jacobian and averaging its rows. However, this approach has limitations: if two gradients conflict (i.e. they have a negative inner product), their average can itself conflict with one of them. A gradient descent step on the averaged losses can therefore increase one of the losses.
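As a minimal numeric illustration (reusing the matrix from the aggregation example further below; this snippet is ours, not from the TorchJD docs):

```python
import torch

# Two gradients with a negative inner product: g1 . g2 = -24 + 1 + 1 = -22 < 0.
g1 = torch.tensor([-4.0, 1.0, 1.0])
g2 = torch.tensor([6.0, 1.0, 1.0])

mean = (g1 + g2) / 2        # tensor([1., 1., 1.])
print(torch.dot(mean, g1))  # tensor(-2.): the average conflicts with g1, so a
                            # small step along it increases the first loss
```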
This is illustrated in the following picture, in which the two objectives' gradients $g_1$ and $g_2$ are conflicting, and averaging them gives an update direction that is detrimental to the first objective. Note that in this picture, the dual cone, represented in green, is the set of vectors that have a non-negative inner product with both $g_1$ and $g_2$.
With Jacobian descent, $g_1$ and $g_2$ are computed individually and carefully aggregated using an aggregator $\mathcal A$. In this example, the aggregator is the Unconflicting Projection of Gradients $\mathcal A_{\text{UPGrad}}$: it projects each gradient onto the dual cone, and averages the projections. This ensures that the update will always be beneficial to each individual objective (given a sufficiently small step size). In addition to $\mathcal A_{\text{UPGrad}}$, TorchJD supports more than 10 aggregators from the literature.
Installation
TorchJD can be installed directly with pip:
```bash
pip install torchjd
```
Usage
The main way to use TorchJD is to replace the usual call to `loss.backward()` by a call to `torchjd.backward` or `torchjd.mtl_backward`, depending on the use-case.
The following example shows how to use TorchJD to train a multi-task model with Jacobian descent, using UPGrad.
```python
import torch
from torch.nn import Linear, MSELoss, ReLU, Sequential
from torch.optim import SGD

from torchjd import mtl_backward
from torchjd.aggregation import UPGrad

shared_module = Sequential(Linear(10, 5), ReLU(), Linear(5, 3), ReLU())
task1_module = Linear(3, 1)
task2_module = Linear(3, 1)
params = [
    *shared_module.parameters(),
    *task1_module.parameters(),
    *task2_module.parameters(),
]

loss_fn = MSELoss()
optimizer = SGD(params, lr=0.1)
A = UPGrad()

inputs = torch.randn(8, 16, 10)  # 8 batches of 16 random input vectors of length 10
task1_targets = torch.randn(8, 16, 1)  # 8 batches of 16 targets for the first task
task2_targets = torch.randn(8, 16, 1)  # 8 batches of 16 targets for the second task

for input, target1, target2 in zip(inputs, task1_targets, task2_targets):
    features = shared_module(input)
    output1 = task1_module(features)
    output2 = task2_module(features)
    loss1 = loss_fn(output1, target1)
    loss2 = loss_fn(output2, target2)
    optimizer.zero_grad()
    mtl_backward(
        losses=[loss1, loss2],
        features=features,
        tasks_params=[task1_module.parameters(), task2_module.parameters()],
        shared_params=shared_module.parameters(),
        A=A,
    )
    optimizer.step()
```
> [!NOTE]
> In this example, the Jacobian is only with respect to the shared parameters. The task-specific parameters are simply updated via the gradient of their task's loss with respect to them.
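The example above only demonstrates `torchjd.mtl_backward`. For the single-model case, here is a rough sketch using `torchjd.backward`; the keyword arguments `tensors`, `inputs`, and `A` are assumed by analogy with the `mtl_backward` call above, and the second loss is purely illustrative. Check the documentation at torchjd.org for the exact signature.

```python
import torch
from torch.nn import Linear, MSELoss
from torch.optim import SGD

from torchjd import backward
from torchjd.aggregation import UPGrad

model = Linear(10, 1)
optimizer = SGD(model.parameters(), lr=0.1)
A = UPGrad()

input = torch.randn(16, 10)
target = torch.randn(16, 1)

output = model(input)
loss1 = MSELoss()(output, target)
loss2 = output.abs().mean()  # an illustrative second objective on the same output

optimizer.zero_grad()
# Assumed call: aggregate the Jacobian of [loss1, loss2] with respect to
# model.parameters() using A, and fill the parameters' .grad fields.
backward(tensors=[loss1, loss2], inputs=model.parameters(), A=A)
optimizer.step()
```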
More usage examples can be found in the documentation at torchjd.org.
Supported Aggregators
TorchJD provides many existing aggregators from the literature; the full list is given in the documentation at torchjd.org.
The following example shows how to instantiate `UPGrad` and aggregate a simple matrix `J` with it.
```python
from torch import tensor
from torchjd.aggregation import UPGrad

A = UPGrad()
J = tensor([[-4., 1., 1.], [6., 1., 1.]])

A(J)
# Output: tensor([0.2929, 1.9004, 1.9004])
```
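One can verify that this aggregation conflicts with neither row of `J`, as guaranteed by the dual cone projection described above (the printed values are approximate):

```python
# Continuing the snippet above: both inner products are non-negative.
print(J @ A(J))  # ≈ tensor([2.63, 5.56])
```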
> [!TIP]
> When using TorchJD, you generally don't have to use aggregators directly. You simply instantiate one and pass it to the backward function (`torchjd.backward` or `torchjd.mtl_backward`), which will in turn apply it to the Jacobian matrix that it will compute.
Contribution
Please read the Contribution page.
Citation
If you use TorchJD for your research, please cite:
```bibtex
@article{jacobian_descent,
  title={Jacobian Descent For Multi-Objective Optimization},
  author={Quinton, Pierre and Rey, Valérian},
  journal={arXiv preprint arXiv:2406.16232},
  year={2024}
}
```