
Learning parametric convex functions


LPCF

LPCF stands for learning parametrized convex functions. A parametrized convex function, or PCF, depends on a variable and a parameter, and is convex in the variable for any valid value of the parameter.
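As a simple illustration (ours, not taken from the library), f(x, θ) = e^θ x² is a PCF: it is convex in x for every value of θ, which we can spot-check numerically via the midpoint convexity inequality.

```python
import numpy as np

# illustrative PCF (not from LPCF itself): f(x, theta) = exp(theta) * x**2,
# convex in x for every value of the parameter theta
def f(x, theta):
    return np.exp(theta) * x**2

# numeric spot-check of midpoint convexity:
# f((x1 + x2)/2) <= (f(x1) + f(x2))/2, for several random thetas
rng = np.random.default_rng(0)
for theta in rng.normal(size=5):
    x1, x2 = rng.normal(size=2)
    assert f((x1 + x2) / 2, theta) <= (f(x1, theta) + f(x2, theta)) / 2
```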

LPCF is a framework for fitting a parametrized convex function to given data in a way that is compatible with disciplined convex programming. This makes it possible to fit a function directly to observed or simulated data and then use it in a convex optimization formulation.

The PCF is represented as a simple neural network whose architecture is designed to ensure disciplined convexity in the variable, for any valid parameter value. After fitting this neural network to triplets of observed (or simulated) values of the function, the variable, and the parameter, the learned PCF can be exported for use in optimization frameworks like CVXPY or JAX.

LPCF supports learning vector functions that depend on multiple variables and parameters. An overview of LPCF can be found in our manuscript.

Installation

LPCF is available on PyPI, and can be installed with

pip install lpcf

LPCF has the following dependencies:

  • Python >= 3.10, <3.13
  • jax-sysid >= 1.0.6
  • CVXPY >= 1.6.0
  • NumPy >= 1.21.6

Example

The following code fits a PCF to observed function values Y, variable values X, and parameter values Theta, and exports the result to CVXPY.

import cvxpy as cp

from lpcf.pcf import PCF

# observed data
Y = ...      # function values, shape (N, d)
X = ...      # variable values, shape (N, n)
Theta = ...  # parameter values, shape (N, p)

# fit PCF to data
pcf = PCF()
pcf.fit(Y, X, Theta)

# export PCF to CVXPY (n and p as in the data shapes above)
x = cp.Variable((n, 1))
theta = cp.Parameter((p, 1))
pcf_cvxpy = pcf.tocvxpy(x=x, theta=theta)

The CVXPY expression pcf_cvxpy might appear in the objective or the constraints of a CVXPY problem.

Settings

Neural network architecture

The function is approximated as an input-convex main network mapping variables to function values. The weights of the main network are generated by another parameter network, whose inputs are the parameters.
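A minimal sketch of this idea (our own illustration, not LPCF's actual architecture): a one-hidden-layer input-convex main network whose weights are produced by a hypothetical parameter network. Convexity in x follows because relu(affine(x)) is convex, and the output layer takes a nonnegative combination of convex functions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# hypothetical parameter network: maps theta to the weights of the main
# network (a single fixed linear map here, purely for illustration)
def parameter_network(theta, n, hidden):
    rng = np.random.default_rng(0)  # fixed weights for the sketch
    A = rng.normal(size=(hidden * (n + 1) + hidden, theta.size))
    w = A @ theta
    W1 = w[: hidden * n].reshape(hidden, n)
    b1 = w[hidden * n : hidden * (n + 1)]
    w_out = np.abs(w[hidden * (n + 1) :])  # nonnegative output weights
    return W1, b1, w_out

# main network: convex in x because relu(W1 @ x + b1) is convex in x and
# w_out >= 0 makes the output a nonnegative combination of convex functions
def main_network(x, W1, b1, w_out):
    return w_out @ relu(W1 @ x + b1)

n, hidden = 3, 4
theta = np.array([0.5, -1.0])
W1, b1, w_out = parameter_network(theta, n, hidden)

# midpoint convexity spot-check in x, for this fixed theta
x1, x2 = np.ones(n), -np.ones(n)
mid = main_network((x1 + x2) / 2, W1, b1, w_out)
assert mid <= (main_network(x1, W1, b1, w_out) + main_network(x2, W1, b1, w_out)) / 2
```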

When constructing the PCF object, we allow for a number of customizations to the neural network architecture:

| Argument | Description | Type | Default |
| --- | --- | --- | --- |
| widths | widths of the main network's hidden layers | array-like | [2*((n+d)//2), 2*((n+d)//2)] |
| widths_psi | widths of the parameter network's hidden layers | array-like | [2*((p+m)//2), 2*((p+m)//2)] |
| activation | activation function used in the main network | str | 'relu' |
| activation_psi | activation function used in the parameter network | str | 'relu' |
| nonneg | force the PCF to be nonnegative | bool | False |
| increasing | force the PCF to be increasing | bool | False |
| decreasing | force the PCF to be decreasing | bool | False |
| quadratic | include a convex quadratic term in the PCF | bool | False |
| quadratic_r | include a quadratic term with low-rank + diagonal structure | bool | False |
| classification | use the PCF to solve a classification problem | bool | False |

Note that d is the number of components of the function, n the number of variables, p the number of parameters, and m the number of outputs of the parameter network, i.e., the number of weights of the main network.
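For example, with n = 3 variables and a scalar function (d = 1), the default hidden-layer widths of the main network work out as follows:

```python
# default width rule for the main network, per the table above:
# two hidden layers, each of width 2*((n + d)//2)
n, d = 3, 1  # example dimensions
widths = [2 * ((n + d) // 2), 2 * ((n + d) // 2)]
print(widths)  # → [4, 4]
```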

Learning configuration

When fitting the PCF to data with its .fit() method, we provide the following options:

| Argument | Description | Type | Default |
| --- | --- | --- | --- |
| rho_th | regularization on the sum of squared weights of the parameter network | float | 1e-8 |
| tau_th | regularization on the sum of absolute weights of the parameter network | float | 0 |
| zero_coeff | entries smaller (in absolute value) than zero_coeff are zeroed | float | 1e-4 |
| cores | number of cores used for parallel training | int | 4 |
| seeds | random seeds for training from multiple initial guesses | array-like | max(10, cores) |
| adam_epochs | number of epochs for running Adam | int | 200 |
| lbfgs_epochs | number of epochs for running L-BFGS-B | int | 2000 |
| tune | auto-tune tau_th? | bool | False |
| n_folds | number of cross-validation folds when auto-tuning tau_th | int | 5 |
| warm_start | warm-start training? | bool | False |
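For instance, the zero_coeff option corresponds to a simple post-training thresholding step, which might look like this (our illustration, not the library's code; zero_small_entries is a hypothetical helper):

```python
import numpy as np

def zero_small_entries(weights, zero_coeff=1e-4):
    """Zero out entries whose absolute value is below zero_coeff."""
    weights = np.asarray(weights, dtype=float).copy()
    weights[np.abs(weights) < zero_coeff] = 0.0
    return weights

w = np.array([0.3, -5e-5, 1e-4, -0.02])
# the -5e-5 entry is zeroed; 1e-4 is kept (not strictly below the threshold)
print(zero_small_entries(w))
```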

Citing LPCF

Please cite the following paper if you use this software:

@article{SBB25,
    author = {Maximilian Schaller and Alberto Bemporad and Stephen Boyd},
    title = {Learning Parametric Convex Functions},
    note = {available on arXiv at \url{https://arxiv.org/pdf/2506.04183}},
    year = {2025}
}
