cuPyLMA: a Multi-GPU Levenberg-Marquardt Optimizer Powered by NVIDIA cuPyNumeric
Background | Installation | Training | Examples | Performance | Change logs
cuPyLMA is a scalable multi-GPU deep learning optimizer that implements the Levenberg-Marquardt algorithm (LMA). The library is built on PyTorch and NVIDIA cuPyNumeric (a NumPy-like distributed scientific computing framework).
Background
The Levenberg-Marquardt algorithm (LMA) is a second-order optimization algorithm that utilizes the Jacobian matrix of the residuals to compute optimal parameter updates. In contrast, the widely used first-order optimizer Adam relies on the gradient of the loss function to determine these updates.
$$ \large (\mathbf{J}^T\mathbf{J}+\lambda \mathbf{I})\Delta\mathbf{x} = \mathbf{J}^T\mathbf{r} $$
($\mathbf{J}$: Jacobian matrix of the residuals, $\mathbf{r}$: residuals, $\Delta\mathbf{x}$: parameter update to be solved for)
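For intuition, the update above can be sketched in plain NumPy on a toy linear model. Everything here (the model `a * x + b`, the `residuals` and `jacobian` helpers, the fixed damping factor) is illustrative and not part of the cuPyLMA API:

```python
import numpy as np

def residuals(params, x, y):
    # r(params) for the toy model y = a * x + b
    a, b = params
    return a * x + b - y

def jacobian(x):
    # Analytic Jacobian of the residuals w.r.t. (a, b)
    return np.stack([x, np.ones_like(x)], axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)

params = np.zeros(2)
lam = 1e-3  # damping factor lambda (kept constant for simplicity)
for _ in range(20):
    r = residuals(params, x, y)
    J = jacobian(x)
    # Solve (J^T J + lambda I) dx = J^T r, then step downhill.
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    params -= dx

print(params)  # close to [2.0, 1.0]
```

A full LMA implementation would additionally adapt $\lambda$ at each step, shrinking it when an update reduces the loss and growing it otherwise; this sketch only shows the linear solve at the heart of each iteration.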
The LMA has the following advantages and disadvantages compared to Adam:
- Pros
  - Faster convergence.
  - Better solutions, thanks to the use of second-order information.
- Cons
  - Higher memory and computational requirements, due to building the Jacobian matrix and solving the linear system, especially for models with many parameters.
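To see why memory becomes the bottleneck, note that the dense Jacobian has one row per residual and one column per parameter. A back-of-envelope estimate (the numbers below are illustrative, not taken from any particular model):

```python
# Dense Jacobian memory: n_residuals rows x n_params columns.
n_params = 1_000_000   # e.g. a modest neural network
n_residuals = 60_000   # e.g. one residual per training sample
bytes_fp32 = 4

jacobian_gb = n_residuals * n_params * bytes_fp32 / 1e9
print(f"{jacobian_gb:.0f} GB")  # → 240 GB
```

A matrix of this size far exceeds the memory of a single GPU, which is why cuPyLMA shards it across the optimizer component's GPUs.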
cuPyLMA aims to relieve the memory and computation bottlenecks of the LMA by distributing the work across multiple GPUs.
Installation
To install cuPyLMA along with its dependencies, run:
```shell
pip install cupylma
```
Training
It is easy to migrate training code that uses the Adam optimizer to cuPyLMA. cuPyLMA consists of the following two components, each holding a separate set of GPUs:
- Model component stores the model parameters and computes the Jacobian matrix.
- Optimizer component stores the Jacobian matrix and computes the optimal parameter updates.
Creating the model
The model should reside on one of the GPUs held by the model component. The get_available_gpus() function returns the list of GPUs available to the model component.
```python
from cupylma import get_available_gpus

devices = get_available_gpus()
model = MyModel().to(devices[0])
```
Configuring the optimizer
The LMA optimizer requires a residual function rather than a loss function. The devices option specifies the GPUs for the model component.
```python
from cupylma import LMA

residual_fn = lambda a, b: a - b  # for simple regression
lma = LMA(model, devices, residual_fn)
```
To find the residual function for more complex problems, please check examples/mnist.
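As a hint of what such a residual function might look like, here is a plain-NumPy sketch for multi-class classification: the residual is the gap between predicted class probabilities and one-hot targets. This is a hypothetical example, not the function used in examples/mnist (in practice it would be written with PyTorch tensors):

```python
import numpy as np

def classification_residual(logits, targets):
    # Softmax probabilities minus one-hot targets: residuals approach
    # zero as the model assigns probability 1 to the correct class.
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilized softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    onehot = np.eye(logits.shape[-1])[targets]
    return probs - onehot

# Confident, correct prediction: residual entries are all small,
# and the entry for the true class is slightly negative.
r = classification_residual(np.array([[5.0, 0.0, 0.0]]), np.array([0]))
print(r)
```

Because softmax probabilities and one-hot vectors both sum to 1, each row of the residual sums to zero.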
Training
The LMA optimizer is stateless, so there is no need to reset gradients at each step. The loss return value is the average loss, and the terminated return value indicates whether training should be terminated.
```python
loss, terminated = lma.step(x, y)
if terminated:
    break  # exit the training loop and save the model
```
Running the code
The legate command is installed together with cuPyLMA. The number of GPUs for the optimizer component is specified with the --gpus option:
```shell
legate --gpus 3 train.py
```
Examples
- For a curve-fitting example, see examples/curve.
- For an MNIST image classification example, see examples/mnist.
Performance
TODO
References
[1] fabiodimarco/torch-levenberg-marquardt: the repository from which our base code is derived.
[2] H. P. Gavin, “The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems,” 2024: a theoretical explanation of the LMA.
Citation
J. Taylor, W. Wang, B. Bala, and T. Bednarz, “Optimizing the optimizer for data driven deep neural networks and physics informed neural networks,” May 16, 2022, arXiv:2205.07430. doi: 10.48550/arXiv.2205.07430.