MaxFit - A Python Library for Worst-Case Nonlinear Regression
This repository provides a library for solving worst-case nonlinear regression problems. Given a function to approximate, the library fits a model that minimizes the worst-case error over a given set. Optionally, it can also compute upper and lower bound functions on the resulting error.
For more details about the mathematical formulations implemented in the library, see the arXiv preprint 2601.12334.
Installation
pip install maxfit
Overview
MaxFit trains a nonlinear surrogate model $\hat f(x)$ to minimize the worst-case approximation error
$$e^* = \max_{x \in [l_b, u_b]} |f(x) - \hat f(x)|$$
rather than the usual mean-squared error. The algorithm alternates between:
- Training the surrogate on the current dataset using a soft-maximum loss function.
- Active learning: finding the input $x^*$ where the current error is largest via global optimization.
- Augmenting the dataset with $x^*$ and repeating until $e^*$ falls below a tolerance or a maximum number of iterations is reached.
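The soft-maximum loss in the first step can be illustrated with a log-sum-exp smoothing of the worst-case error. The sketch below is in plain NumPy; the sharpness parameter `gamma` plays the same role as the `gamma` argument of `maxfit`, but the exact loss used by the library may differ:

```python
import numpy as np

def soft_max_loss(residuals, gamma=10.0):
    """Smooth (log-sum-exp) approximation of max_i |residuals_i|.

    As gamma grows, the value approaches the true worst-case error,
    while remaining differentiable for gradient-based training."""
    a = gamma * np.abs(residuals)
    m = a.max()  # subtract the max for numerical stability
    return (m + np.log(np.sum(np.exp(a - m)))) / gamma

e = np.array([0.1, -0.4, 0.25, 0.05])
print(soft_max_loss(e, gamma=10.0))    # slightly above max|e| = 0.4
print(soft_max_loss(e, gamma=1000.0))  # essentially max|e| = 0.4
```

For small `gamma` the loss spreads the penalty over all residuals (close to a mean-type objective); for large `gamma` it concentrates on the single worst sample.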
The library provides two main functions:
- `maxfit(model, fun, X, init_fcn, lb, ub, ...)` — runs the active-learning loop and returns the best surrogate model together with convergence statistics. It also supports set approximation: fitting a surrogate $\hat f(x) \leq 0$ to the set $\{x : f(x) \leq 0\}$.
- `uncertainty_bounds(model, fun, X, Y, lb, ub, ...)` — fits input-dependent error envelopes $[\hat f(x) - \varepsilon_\ell(x),\; \hat f(x) + \varepsilon_u(x)]$ that contain $f(x)$ for all $x \in [l_b, u_b]$, using either constant or neural-network-parameterized bound functions. The envelopes are certified bounds, provided the global optimizer used is able to determine the worst-case errors.
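The containment property of such envelopes can be checked on a dense grid. Below is a minimal sketch with toy stand-ins — the `f`, `f_hat`, and constant envelope widths are hypothetical placeholders, not outputs of the library:

```python
import numpy as np

# Toy stand-ins: true function, imperfect surrogate, constant envelope widths
f     = lambda x: np.sin(3 * x)
f_hat = lambda x: np.sin(3 * x) + 0.05 * np.cos(x)  # surrogate with error 0.05*cos(x)
eps_u, eps_ell = 0.06, 0.06                         # hypothetical bound values

x = np.linspace(-1.0, 1.0, 1001)
lower = f_hat(x) - eps_ell
upper = f_hat(x) + eps_u
contained = np.all((lower <= f(x)) & (f(x) <= upper))
print(contained)  # True: |f_hat - f| <= 0.05 < 0.06 everywhere
```

A grid check like this is only a sanity test; the certification discussed above comes from globally optimizing the error, not from sampling.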
Surrogate models are jax-sysid StaticModel objects, so any differentiable JAX architecture can be used. Global optimization is performed via the NLopt library, with DIRECT as the default optimizer.
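The active-learning step of locating the worst-case input can be sketched with a brute-force grid search standing in for the global optimizer (NLopt/DIRECT is the library's actual backend; the functions below are toy examples):

```python
import numpy as np

f     = lambda x: np.exp(-x**2)      # toy target
f_hat = lambda x: 1.0 - x**2 / 2     # toy surrogate (truncated Taylor series)

def worst_case_point(lb, ub, n=10001):
    """Brute-force stand-in for a global optimizer: return the grid point
    where |f - f_hat| is largest, together with that error."""
    xs = np.linspace(lb, ub, n)
    err = np.abs(f(xs) - f_hat(xs))
    i = int(np.argmax(err))
    return xs[i], err[i]

x_star, e_star = worst_case_point(-2.0, 2.0)
print(x_star, e_star)  # worst error occurs at a boundary of [-2, 2]
```

In the library this maximization is done by DIRECT rather than exhaustive sampling, which matters as the input dimension grows.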
Example
Approximate the scalar function $f(x) = \frac{(\sin(x(1-x/10)) + (x/10)^3 - 4x/10)\,e^{-x}}{1+e^{-x}}$ on $[-10, 10]$ starting from 20 random samples, then fit asymmetric input-dependent uncertainty envelopes.
import numpy as np
from maxfit import maxfit, uncertainty_bounds
from jax_sysid.models import StaticModel
import jax
from flax import linen as nn
jax.config.update('jax_platform_name', 'cpu')
jax.config.update("jax_enable_x64", True)
def fun(x):
    x0 = x[0]
    return (np.sin(x0*(1.-x0/10.))+(x0/10.)**3-4.*x0/10.)*np.exp(-x0)/(1.+np.exp(-x0))
# Initial dataset
np.random.seed(0)
N = 20
nx, n1, n2 = 1, 2, 1
xmin, xmax = -10., 10.
X = np.random.uniform(xmin, xmax, N).reshape(-1, 1)
lb, ub = np.array([xmin]), np.array([xmax])
# Neural network model (residual connections, tanh activation)
act = nn.tanh
@jax.jit
def output_fcn_(u, params):
    V1, b1, W2, V2, b2, W3, V3, b3 = params
    u = u.reshape(-1, 1)
    y = W2 @ act(V1 @ u + b1) + V2 @ u + b2
    return (W3 @ act(y) + V3 @ u + b3).reshape(-1)
output_fcn = jax.jit(jax.vmap(output_fcn_, in_axes=(0, None)))
rho_th = 1.e-8
model = StaticModel(1, nx, output_fcn)
model.optimization(adam_epochs=1000, lbfgs_epochs=1000, iprint=-1)
model.loss(rho_th=rho_th, tau_th=0.)
def init_fcn(seed):
    np.random.seed(seed)
    rn = np.random.randn
    return [rn(n1, nx), rn(n1, 1), rn(n2, n1), rn(n2, nx), rn(n2, 1),
            rn(1, n2), rn(1, nx), np.random.rand(1, 1)]
# Worst-case regression
model, results = maxfit(model, fun, X, init_fcn, lb=lb, ub=ub, method='function',
                        maxiter=30, gamma=10., nu=0., model_init='mse',
                        warm_retrain=False, adam_epochs=1000, lbfgs_epochs=1000,
                        rho_th=rho_th, global_optimizer='direct')
print(f"Worst-case error: {results.worst_error:.6f}")
print(f"Actively acquired {results.N_act} corner-case samples")
# Input-dependent asymmetric uncertainty envelopes
x = np.linspace(xmin, xmax, 101).reshape(-1, 1)
y = np.array([fun(xi) for xi in x]).reshape(-1, 1)
X_eps = np.vstack((results.X, x))
Y_eps = np.vstack((results.Y.reshape(-1, 1), y))
model_eps_u, model_eps_ell = uncertainty_bounds(
    model, fun, X_eps, Y_eps, lb, ub,
    worst_error=results.worst_error,
    uncertainty='variable-asymmetric',
    uncertainty_neurons=[n1, n2],
    uncertainty_activation=act,
    rho=1.e-6, alpha=1.e-3, gamma_eps=100.,
    global_optimizer='direct',
    adam_epochs=2000, lbfgs_epochs=2000)
The full example, including plots, is available in examples/example_scalar_fun.py. See the examples/ folder for additional examples.
References
[1] A. Bemporad, "Worst-case Nonlinear Regression with Error Bounds," arXiv preprint 2601.12334, 2026.
[2] A. Bemporad, "An L-BFGS-B approach for linear and nonlinear system identification under ℓ1 and group-Lasso regularization," IEEE Transactions on Automatic Control, vol. 70, no. 7, pp. 4857–4864, 2025. (jax-sysid)
[3] D.R. Jones, M. Schonlau, and W.J. Welch, "Efficient global optimization of expensive black-box functions," Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998. (DIRECT global optimizer)
Citation
@article{MaxFit,
  author  = {A. Bemporad},
  title   = {Worst-case Nonlinear Regression with Error Bounds},
  journal = {arXiv preprint 2601.12334},
  note    = {\url{https://github.com/bemporad/maxfit}},
  year    = {2026}
}
Related packages
jax-sysid: a Python package based on JAX for linear and nonlinear system identification of state-space models, recurrent neural network (RNN) training, and nonlinear regression/classification.
License
Apache 2.0
(C) 2026 A. Bemporad
Acknowledgement
This work was funded by the European Union (ERC Advanced Research Grant COMPACT, No. 101141351). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.