
cuPyLMA: a Multi-GPU Levenberg-Marquardt (Deep Learning) Optimizer Powered by NVIDIA cuPyNumeric

Background | Installation | Training | Examples | Performance | Change logs

cuPyLMA is a scalable multi-GPU deep learning optimizer that implements the Levenberg-Marquardt algorithm (LMA). The library is built on PyTorch and NVIDIA cuPyNumeric (a NumPy-like distributed scientific computing framework).

Background

The Levenberg-Marquardt algorithm (LMA) is a second-order optimization algorithm that utilizes the Jacobian matrix of the residuals to compute optimal parameter updates. In contrast, the widely used first-order optimizer Adam relies on the gradient of the loss function to determine these updates.

$$ \large (\mathbf{J}^T\mathbf{J}+\lambda \mathbf{I})\,\Delta\mathbf{x} = \mathbf{J}^T\mathbf{r} $$

($\mathbf{J}$: Jacobian matrix of the residuals, $\mathbf{r}$: residual vector, $\Delta\mathbf{x}$: parameter update to be solved for, $\lambda$: damping factor)
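To make the update concrete, here is one damped step on a toy linear least-squares problem, written as a minimal NumPy sketch with made-up data (cuPyLMA distributes the equivalent computation across GPUs):

```python
import numpy as np

# Toy problem: fit y = a*x + b. Data below is hypothetical, for illustration only.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal(50)

params = np.zeros(2)                         # [a, b], starting guess
r = (params[0] * x + params[1]) - y          # residuals r(params)
J = np.stack([x, np.ones_like(x)], axis=1)   # Jacobian of r w.r.t. [a, b]

lam = 1e-3                                   # damping factor lambda
# Solve (J^T J + lambda*I) dx = J^T r, then update params <- params - dx.
dx = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
params -= dx

new_r = (params[0] * x + params[1]) - y
print(np.linalg.norm(new_r) < np.linalg.norm(r))  # True: the residual shrinks
```

Because this toy problem is linear, a single step already lands near the least-squares solution; for nonlinear models the same update is applied iteratively.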

Compared with Adam, the LMA has the following advantages and disadvantages:

  • Pros
    • Faster convergence.
    • Higher-quality solutions, thanks to second-order information.
  • Cons
    • Much higher memory and computation requirements, since the Jacobian matrix must be computed and the normal equations solved; the cost grows quickly with the number of model parameters.

cuPyLMA aims to relieve the memory and computation bottlenecks of the LMA by distributing the work across multiple GPUs.
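To see why memory becomes the bottleneck, note that for m residuals and n trainable parameters the Jacobian has m × n entries. A quick back-of-the-envelope calculation (hypothetical sizes, float32):

```python
# Jacobian memory footprint: m residuals x n parameters, 4 bytes per float32 entry.
def jacobian_gib(num_residuals: int, num_params: int, bytes_per_entry: int = 4) -> float:
    return num_residuals * num_params * bytes_per_entry / 2**30

# E.g. a batch producing 10,000 residuals against a 1-million-parameter model:
print(round(jacobian_gib(10_000, 1_000_000), 1))  # 37.3 (GiB)
```

A single Jacobian of roughly 37 GiB already exceeds the memory of many individual GPUs, which is why sharding it across devices pays off.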

Installation

To install cuPyLMA along with dependencies, please run:

pip install cupylma

Training

It is easy to migrate training code that uses the Adam optimizer to cuPyLMA. cuPyLMA consists of the following components, each of which holds a separate set of GPUs:

  • Model component stores the model parameters and computes the Jacobian matrix.
  • Optimizer component stores the Jacobian matrix and computes the optimal parameter updates.

Creating the model

The model must be placed on one of the GPUs held by the model component. The get_available_gpus() function returns the list of GPUs available to the model component.

from cupylma import get_available_gpus

devices = get_available_gpus()
model = MyModel().to(devices[0])

Configuring the optimizer

The LMA optimizer requires a residual function rather than a loss function. The devices option specifies the GPUs for the model component.

from cupylma import LMA

residual_fn = lambda a, b: a - b  # For simple regression
lma = LMA(model, devices, residual_fn)

For residual functions for more complex problems, see examples/mnist.
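For classification, one common choice of residual is the difference between the predicted class probabilities and the one-hot targets. The sketch below is a hypothetical NumPy version of such a residual (the library itself works with PyTorch tensors; examples/mnist contains the actual function):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

def classification_residual(logits, targets, num_classes):
    """Residual = predicted probabilities minus one-hot targets, flattened."""
    onehot = np.eye(num_classes)[targets]
    return (softmax(logits) - onehot).ravel()

# A confident, correct prediction yields a near-zero residual:
logits = np.array([[10.0, -10.0], [-10.0, 10.0]])
targets = np.array([0, 1])
print(np.abs(classification_residual(logits, targets, 2)).max() < 1e-6)  # True
```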

Training step

The LMA optimizer is stateless, so there is no need to reset gradients at each step. step() returns the average loss and a terminated flag that indicates whether training should stop.

loss, terminated = lma.step(x, y)
if terminated:
    # Stop training, then save the model
    break
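For intuition, an LMA step of this kind typically solves the normal equations, tries the update, and adapts the damping factor λ depending on whether the loss improved. A minimal single-step NumPy sketch of that accept/reject logic (illustrative only; not cuPyLMA's actual implementation, and the 0.1/10 damping factors are conventional defaults, not the library's):

```python
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, lam):
    """One accept/reject LM step; returns (params, lam, loss, improved)."""
    r = residual_fn(params)
    J = jacobian_fn(params)
    # Solve (J^T J + lam*I) dx = J^T r for the trial update.
    dx = np.linalg.solve(J.T @ J + lam * np.eye(len(params)), J.T @ r)
    trial = params - dx
    old_loss = (r ** 2).mean()
    new_loss = (residual_fn(trial) ** 2).mean()
    if new_loss < old_loss:
        return trial, lam * 0.1, new_loss, True   # accept step, relax damping
    return params, lam * 10.0, old_loss, False    # reject step, increase damping
```

With a large λ the update behaves like small-step gradient descent; with a small λ it approaches the Gauss-Newton step, which is what makes the method robust far from the optimum yet fast near it.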

Running the code

The legate command is installed together with cuPyLMA. Specify the number of GPUs for the optimizer component with the --gpus option:

legate --gpus 3 train.py

Examples

See the examples directory of the repository (e.g. examples/mnist).

Performance

TODO

References

[1] fabiodimarco/torch-levenberg-marquardt: the repository our base code is derived from.

[2] H. P. Gavin, "The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems," 2024: a theoretical explanation of the LMA.

Citation

J. Taylor, W. Wang, B. Bala, and T. Bednarz, “Optimizing the optimizer for data driven deep neural networks and physics informed neural networks,” May 16, 2022, arXiv: arXiv:2205.07430. doi: 10.48550/arXiv.2205.07430.

Download files

Source distribution: cupylma-0.2.0.dev9.tar.gz (10.8 kB)

Built distribution: cupylma-0.2.0.dev9-py3-none-any.whl (9.9 kB, Python 3)

File details

cupylma-0.2.0.dev9.tar.gz (10.8 kB, Source) — uploaded via twine/6.2.0 on CPython/3.13.7, without Trusted Publishing.

Algorithm   Hash digest
SHA256      78ee96c843d671f7259767e355240ce0a1cb2b5ade1dc6914ec8c6c77d315a4a
MD5         eabd1d4b60d0341bb6066c2d2ca9c22d
BLAKE2b-256 663470f92a94cb5c9e8a61427394c2ac0570c739f28dd4f03561efe8f91830fd

cupylma-0.2.0.dev9-py3-none-any.whl (9.9 kB, Python 3) — uploaded via twine/6.2.0 on CPython/3.13.7, without Trusted Publishing.

Algorithm   Hash digest
SHA256      d78abaeb0ff7519299ae5ec4cbd0f312a501689b52a79252f1c0c693f70a4970
MD5         8bddcf35487d7ee617f49026b9003dea
BLAKE2b-256 1b63f9315c0581da0056c09d04f07b4cab30740027a152300363943901541ef9
