Optax
A gradient processing and optimization library for JAX.
Introduction
Optax is a gradient processing and optimization library for JAX, designed to facilitate research by providing building blocks that can be easily recombined in custom ways.
Our goals are to:
- Provide simple, well-tested, efficient implementations of core components.
- Improve research productivity by enabling researchers to easily combine low-level ingredients into custom optimizers (or other gradient-processing components).
- Accelerate adoption of new ideas by making it easy for anyone to contribute.
We favor small, composable building blocks that can be effectively combined into custom solutions; others may build on these basic components in more complicated abstractions. Whenever reasonable, implementations prioritize readability and structuring code to match standard equations over code reuse.
An initial prototype of this library was made available in JAX's experimental
folder as jax.experimental.optix. Given the wide adoption across DeepMind
of optix, and after a few iterations on the API, optix was eventually moved
out of experimental as a standalone open-source library, and renamed optax.
Documentation on Optax can be found at optax.readthedocs.io.
Installation
You can install the latest released version of Optax from PyPI via:
pip install optax
or you can install the latest development version from GitHub:
pip install git+https://github.com/google-deepmind/optax.git
Quickstart
Optax contains implementations of many popular optimizers and
loss functions.
For example, the following code snippet uses the Adam optimizer from optax.adam
and the mean squared error from optax.l2_loss. We initialize the optimizer
state using the init function and params of the model.
import jax.numpy as jnp
import optax

learning_rate, num_weights = 1e-2, 4  # example values
optimizer = optax.adam(learning_rate)
# Obtain the `opt_state` that contains statistics for the optimizer.
params = {'w': jnp.ones((num_weights,))}
opt_state = optimizer.init(params)
To write the update loop we need a loss function that can be differentiated by JAX (with jax.grad in this example) to obtain the gradients.
compute_loss = lambda params, x, y: optax.l2_loss(params['w'].dot(x), y)
grads = jax.grad(compute_loss)(params, xs, ys)
The gradients are then converted via optimizer.update to obtain the updates
that should be applied to the current parameters to obtain the new ones.
optax.apply_updates is a convenience utility to do this.
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)
You can continue the quick start in the Optax 🚀 Getting started notebook.
Development
We welcome issue reports and pull requests that fix bugs or improve existing functionality. If you are interested in adding a feature such as a new optimizer, please open an issue first! We are focused on making Optax flexible, versatile, and easy to use, so that you can define your own optimizers.
Source code
You can check the latest sources with the following command.
git clone https://github.com/google-deepmind/optax.git
Testing
To run the tests, please execute the following script.
sh test.sh
Documentation
To build the documentation, first ensure that all the dependencies are installed.
pip install -e ".[docs]"
Then, execute the following.
cd docs
make html
Benchmarking
Some benchmarks:
- Benchmarking Neural Network Training Algorithms, Dahl G. et al., 2023.
- Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers, Schmidt R. et al., 2021.
- Making your own benchmark
- Optimizer tuning handbook
Other optimization-adjacent libraries in JAX
- optimistix: nonlinear solvers (root finding, minimisation, fixed points, and least squares).
- matfree: matrix-free methods, useful for studying curvature dynamics in deep learning.
Citing Optax
This repository is part of the DeepMind JAX Ecosystem; to cite Optax, please use the following citation:
@software{deepmind2020jax,
title = {The {D}eep{M}ind {JAX} {E}cosystem},
author = {DeepMind and Babuschkin, Igor and Baumli, Kate and Bell, Alison and Bhupatiraju, Surya and Bruce, Jake and Buchlovsky, Peter and Budden, David and Cai, Trevor and Clark, Aidan and Danihelka, Ivo and Dedieu, Antoine and Fantacci, Claudio and Godwin, Jonathan and Jones, Chris and Hemsley, Ross and Hennigan, Tom and Hessel, Matteo and Hou, Shaobo and Kapturowski, Steven and Keck, Thomas and Kemaev, Iurii and King, Michael and Kunesch, Markus and Martens, Lena and Merzic, Hamza and Mikulik, Vladimir and Norman, Tamara and Papamakarios, George and Quan, John and Ring, Roman and Ruiz, Francisco and Sanchez, Alvaro and Sartran, Laurent and Schneider, Rosalia and Sezener, Eren and Spencer, Stephen and Srinivasan, Srivatsan and Stanojevi\'{c}, Milo\v{s} and Stokowiec, Wojciech and Wang, Luyu and Zhou, Guangyao and Viola, Fabio},
url = {http://github.com/google-deepmind},
year = {2020},
}