
cuPyLMA: a Multi-GPU Levenberg-Marquardt Optimizer powered by cuPyNumeric

Project description


cuPyLMA is a scalable multi-GPU deep-learning optimizer that implements the Levenberg-Marquardt algorithm (LMA). The library is built on PyTorch and powered by NVIDIA cuPyNumeric, a NumPy-like scientific computing framework.

Background

The Levenberg-Marquardt algorithm (LMA) is a second-order optimizer. It solves for the parameter update $\mathbf{v}$ from a linear system involving the Jacobian matrix $\mathbf{J}$ of the batched model outputs with respect to the model parameters, and the residual vector $\mathbf{r}$:

$$ \large (\mathbf{J}^T\mathbf{J}+\lambda \mathbf{I})\mathbf{v} = \mathbf{J}^T\mathbf{r} $$

Computing and storing the Jacobian matrix is expensive in both time and memory. To address this, we take advantage of NVIDIA cuPyNumeric: this NumPy-like scientific computing framework automatically distributes the Jacobian matrix and schedules its computation across multiple GPUs.
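To make the linear system concrete, here is a minimal sketch of the LMA update solve written against the NumPy API (cuPyNumeric mirrors this API, so in principle the same code can be run distributed by importing cuPyNumeric instead; the function and variable names below are illustrative and are not part of the cuPyLMA API):

```python
import numpy as np  # cuPyNumeric exposes a matching NumPy-like API

def lm_update(J, r, lam):
    """Solve (J^T J + lam * I) v = J^T r for the parameter update v."""
    n = J.shape[1]                 # number of model parameters
    A = J.T @ J + lam * np.eye(n)  # damped Gauss-Newton system
    b = J.T @ r
    return np.linalg.solve(A, b)

# Toy example: 5 residuals, 3 parameters.
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))
r = rng.standard_normal(5)
v = lm_update(J, r, lam=1e-2)
print(v.shape)  # (3,)
```

Note that the system matrix is only `n_params x n_params`, so the expensive objects to distribute are the tall Jacobian `J` and the products `J.T @ J` and `J.T @ r`.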

Features

  • cuPyLMA uses LMA as its optimizer, which can converge to a lower loss in less time than the Adam optimizer.

  • cuPyLMA explicitly computes the Jacobian matrix required by LMA, which boosts performance at the cost of additional memory. This differs from most LMA implementations, which use a preconditioned conjugate gradient (PCG) solver instead.

  • cuPyLMA reduces the computation cost and peak memory usage of the Jacobian computation by selecting the most efficient computation strategy.

  • cuPyLMA scalably solves LMA parameter updates via cuPyNumeric.

  • cuPyLMA reduces the number of hyperparameters to tune: typically only the batch size needs tuning.
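The last point follows from how classic LMA adapts its damping factor λ automatically based on whether each step reduces the loss, so λ itself need not be hand-tuned. The sketch below fits a toy exponential model with plain NumPy to illustrate this adaptation; the names (`fit_lm`, `jac`, the halving/doubling schedule) are illustrative and do not reflect the cuPyLMA API:

```python
import numpy as np

def fit_lm(f, jac, p0, x, y, iters=50, lam=1e-3):
    """Classic LMA loop: lambda is adjusted from step acceptance."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)                        # residuals
        J = jac(x, p)                          # Jacobian of outputs wrt p
        A = J.T @ J + lam * np.eye(p.size)
        v = np.linalg.solve(A, J.T @ r)        # candidate update
        if np.sum((y - f(x, p + v)) ** 2) < np.sum(r ** 2):
            p, lam = p + v, lam * 0.5          # accept step, relax damping
        else:
            lam *= 2.0                         # reject step, damp harder
    return p

# Toy problem: fit y = a * exp(b * x) with true (a, b) = (2.0, -1.5).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x)
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.stack(
    [np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)], axis=1)
p = fit_lm(f, jac, [1.0, 0.0], x, y)
```

Because rejected steps only increase λ and accepted steps monotonically decrease the loss, the loop is robust to the initial λ, leaving the batch size as the main knob in a stochastic deep-learning setting.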

Install

Pip

pip install cuPyLMA

Usage

See the curve fitting example.

References

[1] fabiodimarco/torch-levenberg-marquardt: the repository our base code is derived from.

[2] H. P. Gavin, “The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems,” 2024: a theoretical explanation of LMA.

Citation

J. Taylor, W. Wang, B. Bala, and T. Bednarz, “Optimizing the optimizer for data driven deep neural networks and physics informed neural networks,” May 16, 2022, arXiv: arXiv:2205.07430. doi: 10.48550/arXiv.2205.07430.

Download files

Download the file for your platform.

Source Distribution

cupylma-0.1.1.tar.gz (9.4 kB view details)

Uploaded Source

Built Distribution


cupylma-0.1.1-py3-none-any.whl (8.8 kB view details)

Uploaded Python 3

File details

Details for the file cupylma-0.1.1.tar.gz.

File metadata

  • Download URL: cupylma-0.1.1.tar.gz
  • Upload date:
  • Size: 9.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for cupylma-0.1.1.tar.gz
Algorithm Hash digest
SHA256 3ddf1a7e9d40bdd8d82596cac863fa782a1669ffda50d0d5a4cd164d17fc0b6f
MD5 a937c4c1501ea3264ddbaa76eed37044
BLAKE2b-256 7f5db021fa7ee4de198298a55224e77474fe6f7cde81f26d224cd1ef847556cf


File details

Details for the file cupylma-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: cupylma-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 8.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for cupylma-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 ceb569f7ff263ec0d2ff2321ff1e89019fea2e001c41fd11a77d55f982392654
MD5 8b14fb88e8402d75f7120ecb8199f473
BLAKE2b-256 68f7e12f01e26512191d892da055264d9879533b24d2d200f3ca6a09b36a3eee

