Classic numerical optimization methods for PyTorch neural networks
Project description
Torch Numerical Optimization
This package implements classic numerical optimization methods for training Artificial Neural Networks.
These methods are uncommon in deep learning frameworks because of their computational requirements: Newton-Raphson and Levenberg-Marquardt, for example, use second-derivative (Hessian) information about the loss function and therefore require a large amount of memory. For this reason, it is recommended to apply these algorithms only to neural networks with few hidden layers.
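To see why the memory cost grows quickly, note that a full Newton-Raphson step solves a linear system with the n×n Hessian. Below is a minimal pure-Python sketch on a toy two-parameter quadratic with an explicit 2×2 Hessian; it illustrates the idea only and is not this package's API:

```python
def newton_step(grad, hess):
    # Solve H d = -g for a 2x2 system (explicit inverse, for illustration only;
    # for n parameters the Hessian has n*n entries, hence the memory cost)
    (a, b), (c, d) = hess
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [-(inv[0][0] * grad[0] + inv[0][1] * grad[1]),
            -(inv[1][0] * grad[0] + inv[1][1] * grad[1])]

# Toy quadratic loss L(w) = (w0 - 1)^2 + 2*(w1 + 2)^2, starting at the origin
w = [0.0, 0.0]
g = [2 * (w[0] - 1), 4 * (w[1] + 2)]   # gradient at w
H = [[2.0, 0.0], [0.0, 4.0]]           # Hessian (constant for a quadratic)
step = newton_step(g, H)
w = [w[0] + step[0], w[1] + step[1]]
# For a quadratic loss, a single Newton step lands exactly on the minimizer [1, -2]
```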
There are also a couple of methods that do not require that much memory, such as SGD with line search and the Conjugate Gradient method.
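The line-search idea can be sketched with a backtracking (Armijo) search on a 1-D quadratic. The function names and constants here are illustrative, not part of this package's API:

```python
def backtracking_line_search(f, grad, w, d, alpha=1.0, beta=0.5, c=1e-4):
    # Shrink the step until the Armijo sufficient-decrease condition holds:
    # f(w + alpha*d) <= f(w) + c * alpha * grad^T d
    while f(w + alpha * d) > f(w) + c * alpha * grad * d:
        alpha *= beta
    return alpha

f = lambda w: (w - 3.0) ** 2       # toy loss with minimizer at w = 3
df = lambda w: 2.0 * (w - 3.0)     # its derivative

w = 0.0
for _ in range(50):
    g = df(w)
    d = -g                          # steepest-descent direction
    alpha = backtracking_line_search(f, g, w, d)
    w += alpha * d
# w converges to the minimizer at 3.0
```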
References
Numerical Optimization, Jorge Nocedal, Stephen J. Wright
Note: Approximate Greatest Descent is not interesting enough to be included. The author of the method is also an author of the review paper that features it, which makes its inclusion there seem biased. The method can be replicated by applying damping to the Hessian in Newton's method, together with a trust-region scheme to compute $\mu$.
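The damping mentioned above can be sketched in one line: adding $\mu$ to the Hessian interpolates between a pure Newton step ($\mu \to 0$) and a small gradient step ($\mu$ large). A pure-Python 1-D illustration, not this package's API:

```python
def damped_newton_step(g, h, mu):
    """One damped-Newton (Levenberg-style) step for a scalar parameter.

    mu -> 0 recovers the plain Newton step -g/h; a large mu shrinks the
    step toward the scaled gradient step -g/mu.
    """
    return -g / (h + mu)

# Minimize f(w) = (w - 3)^2 from w = 0 with a fixed damping factor
w, mu = 0.0, 1.0
for _ in range(20):
    g, h = 2.0 * (w - 3.0), 2.0   # gradient and (constant) second derivative
    w += damped_newton_step(g, h, mu)
# Damping slows the step, so convergence to 3.0 takes several iterations
```

In the full method, a trust-region rule would adapt $\mu$ per step instead of keeping it fixed.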
Planned optimizers
- Newton-Raphson
- Gauss-Newton
- Levenberg-Marquardt (LM)
- Stochastic Gradient Descent with Line Search
- Conjugate Gradient
- AdaHessian
- Quasi-Newton (L-BFGS is already in PyTorch)
- Hessian-free / truncated Newton
If you feel an algorithm is missing, you can open an issue with the name of the algorithm, some references, and a justification for why it should be included in the package.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file torch_numopt-0.3.1.tar.gz.
File metadata
- Download URL: torch_numopt-0.3.1.tar.gz
- Upload date:
- Size: 12.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f7d7e359021daca999456013221668532e6e7019843d7a407bd36d70be8fa440 |
| MD5 | ddc6a31d0f727b50458efebe4c27424f |
| BLAKE2b-256 | acfb0b78fb821d3df300bebad0b87923c5a7d50a7c27a88d641c444ae91adaf5 |
File details
Details for the file torch_numopt-0.3.1-py3-none-any.whl.
File metadata
- Download URL: torch_numopt-0.3.1-py3-none-any.whl
- Upload date:
- Size: 18.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6e364a934a22ca1cfaaa2b23c20ec098b9d9637041682f83637dae2fa876149a |
| MD5 | 7377501dbf9b75dc87cdff41fba7b916 |
| BLAKE2b-256 | d80bd88b03b414cfe24e12d510bc881bedaa51d9b7c286bb68452f472484a2c7 |