
cuPyLMA: a Multi-GPU Levenberg-Marquardt (Deep Learning) Optimizer Powered by NVIDIA cuPyNumeric

cuPyLMA is a scalable multi-GPU deep-learning optimizer that implements the Levenberg-Marquardt algorithm (LMA). The library is built on PyTorch and powered by NVIDIA cuPyNumeric, a NumPy-like distributed scientific-computing framework.

Background

The Levenberg-Marquardt algorithm (LMA) is a second-order optimizer. It solves for the parameter update $\mathbf{v}$ from an equation involving the Jacobian matrix $\mathbf{J}$ of the batched model outputs with respect to the model parameters and the residual vector $\mathbf{r}$:

$$ \large (\mathbf{J}^T\mathbf{J}+\lambda \mathbf{I})\mathbf{v} = \mathbf{J}^T\mathbf{r} $$

Computing and storing the Jacobian matrix is expensive in both time and memory. cuPyLMA addresses this with NVIDIA cuPyNumeric, a NumPy-like scientific-computing framework that automatically partitions the Jacobian matrix and schedules its computation across multiple GPUs.
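The linear solve behind each LMA step can be sketched in a few lines. This is a didactic sketch in plain NumPy, not cuPyLMA's internal code; the function name `lm_step` and the toy data are made up for illustration. Because cuPyNumeric mirrors the NumPy API, swapping the import for `cupynumeric` is how the same solve can be distributed across GPUs.

```python
import numpy as np  # cuPyNumeric is a drop-in replacement: `import cupynumeric as np`

def lm_step(J, r, lam):
    """Solve (J^T J + lam * I) v = J^T r for the parameter update v."""
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)   # damped normal-equations matrix
    g = J.T @ r                     # gradient-like right-hand side
    return np.linalg.solve(A, g)

# Tiny example: 4 residuals, 2 parameters.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
r = np.array([1.0, 2.0, 3.0, 4.0])
v = lm_step(J, r, lam=0.1)
print(v.shape)  # (2,)
```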

Features

  • cuPyLMA uses LMA as the optimizer, which can converge to a lower loss in less time than the Adam optimizer.

  • cuPyLMA explicitly computes the Jacobian matrix required by LMA, trading additional memory for performance. This differs from most LMA implementations, which use a preconditioned conjugate gradient (PCG) solver instead.

  • cuPyLMA reduces the computation cost and peak memory usage of the Jacobian computation by selecting the best computation strategy for the model.

  • cuPyLMA scalably solves LMA parameter updates via cuPyNumeric.

  • cuPyLMA reduces the amount of hyperparameter tuning: the only hyperparameter that needs tuning is the batch size.
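The explicit-Jacobian approach described above can be illustrated with PyTorch's built-in `torch.autograd.functional.jacobian`. This is a minimal sketch of the idea under an assumed toy model, not cuPyLMA's internal code; the model and variable names are invented for illustration.

```python
import torch

# A toy model: batched outputs as a function of a flat parameter vector.
x = torch.linspace(0.0, 1.0, steps=8)

def model(params):
    a, b = params
    return a * torch.exp(b * x)  # batched outputs, shape (8,)

params = torch.tensor([1.0, 0.5])

# Explicit Jacobian of the batched outputs w.r.t. the parameters.
# Shape: (num_outputs, num_parameters) = (8, 2).
J = torch.autograd.functional.jacobian(model, params)
print(J.shape)  # torch.Size([8, 2])
```

Holding the full `(batch, parameters)` matrix in memory is exactly the trade-off named in the second bullet: it avoids the repeated matrix-vector products of a matrix-free PCG solver at the cost of storing `J` explicitly.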

Install

Pip

pip install cupylma

Usage

See the curve-fitting example.
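For a self-contained picture of the kind of curve-fitting problem LMA targets, here is a plain-NumPy Levenberg-Marquardt loop fitting y = a·exp(b·x) with an analytic Jacobian. This is a didactic sketch only; it does not use cuPyLMA's API, and the damping schedule (halve λ on an accepted step, double it on a rejected one) is one common choice among several.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
a_true, b_true = 2.0, -1.0
y = a_true * np.exp(b_true * x) + 0.01 * rng.standard_normal(x.size)

def residuals(p):
    a, b = p
    return y - a * np.exp(b * x)

def jacobian(p):
    # Analytic Jacobian of the residuals w.r.t. (a, b).
    a, b = p
    e = np.exp(b * x)
    return np.stack([-e, -a * x * e], axis=1)

p = np.array([1.0, 0.0])   # initial guess
lam = 1e-3
for _ in range(50):
    J, r = jacobian(p), residuals(p)
    v = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(p + v) ** 2) < np.sum(r ** 2):
        p, lam = p + v, lam * 0.5   # accept the step, reduce damping
    else:
        lam *= 2.0                  # reject the step, increase damping

print(p)  # close to [2.0, -1.0]
```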

References

[1] fabiodimarco/torch-levenberg-marquardt: the repository our base code is derived from.

[2] H. P. Gavin, “The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems,” 2024: provides a theoretical explanation of LMA.

Citation

J. Taylor, W. Wang, B. Bala, and T. Bednarz, “Optimizing the optimizer for data driven deep neural networks and physics informed neural networks,” May 16, 2022, arXiv: arXiv:2205.07430. doi: 10.48550/arXiv.2205.07430.
