
qpax

Differentiable QP solver in JAX.

Paper

This package can be used for solving convex quadratic programs of the following form:

$$ \begin{align*} \underset{x}{\text{minimize}} \quad & \frac{1}{2}x^TQx + q^Tx \\ \text{subject to} \quad & Ax = b, \\ & Gx \leq h \end{align*} $$

where $Q \succeq 0$. This solver can be combined with JAX's jit and vmap functionality, as well as differentiated with reverse-mode grad.

The QP is solved with a primal-dual interior-point algorithm detailed in cvxgen, with the solution to the linear systems computed with reduction techniques from cvxopt. At an approximate primal-dual solution, the primal variable $x$ is differentiated with respect to the problem parameters using the implicit function theorem, as shown in OptNet and its PyTorch-based QP solver qpth.

Installation

To install from PyPI using pip:

$ pip install qpax

Alternatively, to install from source in editable mode:

$ pip install -e .

Usage

🚨 Float32 Warning 🚨

The solver tolerance (solver_tol) should be reasonable given the available precision. With 32-bit precision (the default in JAX), solver_tol should be no smaller than 1e-5.

Precision     Tolerance
jnp.float32   solver_tol ∈ [1e-5, 1e-2]
jnp.float64   solver_tol ∈ [1e-12, 1e-2]

To enable 64-bit precision, run the following at startup:

# again, this only works on startup!
import jax
jax.config.update("jax_enable_x64", True)
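To confirm that 64-bit mode is active, you can check the default dtype of a newly created array (this check is a small sketch, not part of qpax itself):

```python
import jax
# must happen before any arrays are created
jax.config.update("jax_enable_x64", True)

import jax.numpy as jnp

# with x64 enabled, new float arrays default to float64
print(jnp.ones(3).dtype)  # float64
```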

This is taken from JAX - The Sharp Bits.

Solving a QP

We can solve QPs with qpax in a way that plays nicely with JAX's jit and vmap:

import qpax

# solve QP (this can be combined with jit or vmap)
x, s, z, y, converged, iters = qpax.solve_qp(Q, q, A, b, G, h, solver_tol=1e-5)

Solving a batch of QPs

Here let's solve a batch of nonnegative least squares problems as QPs. This outlines two bits of functionality from qpax: first, the ability to solve QPs without any equality constraints; second, the ability to vmap over a batch of QPs.

import numpy as np
import jax
import jax.numpy as jnp
from jax import jit, vmap
import qpax

"""
solve batched non-negative least squares (nnls) problems
 
min_x    |Fx - g|^2 
st        x >= 0 
"""

n = 5   # size of x 
m = 10  # rows in F 

# create data for N_qps random nnls problems  
N_qps = 10000 
Fs = jnp.array(np.random.randn(N_qps, m, n))
gs = jnp.array(np.random.randn(N_qps, m))

@jit
def form_qp(F, g):
  # convert the least squares to qp form 
  n = F.shape[1]
  Q = F.T @ F 
  q = -F.T @ g 
  G = -jnp.eye(n)
  h = jnp.zeros(n)
  A = jnp.zeros((0, n))
  b = jnp.zeros(0)
  return Q, q, A, b, G, h

# create the QPs in a batched fashion 
Qs, qs, As, bs, Gs, hs = vmap(form_qp, in_axes = (0, 0))(Fs, gs)

# create function for solving a batch of QPs 
batch_qp = jit(vmap(qpax.solve_qp_primal, in_axes = (0, 0, 0, 0, 0, 0)))

xs = batch_qp(Qs, qs, As, bs, Gs, hs)

Differentiating a QP

Alternatively, if we only need the primal variable x, we can use solve_qp_primal, which enables automatic differentiation:

import jax
import jax.numpy as jnp
import qpax

# example problem data (any feasible QP with Q ⪰ 0 works here)
n = 5
Q = jnp.eye(n)
q = jnp.zeros(n)
A = jnp.zeros((0, n))
b = jnp.zeros(0)
G = -jnp.eye(n)
h = jnp.zeros(n)

def loss(Q, q, A, b, G, h):
    x = qpax.solve_qp_primal(Q, q, A, b, G, h, solver_tol=1e-4, target_kappa=1e-3) 
    x_bar = jnp.ones(len(q))
    return jnp.dot(x - x_bar, x - x_bar)
  
# gradient of loss function   
loss_grad = jax.grad(loss, argnums = (0, 1, 2, 3, 4, 5))

# compatible with jit 
loss_grad_jit = jax.jit(loss_grad)

# calculate derivatives 
derivs = loss_grad_jit(Q, q, A, b, G, h)
dl_dQ, dl_dq, dl_dA, dl_db, dl_dG, dl_dh = derivs 

where target_kappa is used to determine how much smoothing should be applied to the gradients through solve_qp_primal. For more detail on target_kappa, please refer to the paper.

Citation

Paper

@misc{tracy2024differentiability,
    title={On the Differentiability of the Primal-Dual Interior-Point Method},
    author={Kevin Tracy and Zachary Manchester},
    year={2024},
    eprint={2406.11749},
    archivePrefix={arXiv},
    primaryClass={math.OC}
}
