A drop-in replacement for scipy optimize functions with quality of life improvements
Better Optimization!
better_optimize is a friendlier front-end to scipy's optimize.minimize and optimize.root functions. Features
include:
- Progress bar!
- Early stopping!
- Better propagation of common arguments (`maxiter`, `tol`)!
Installation
To install better_optimize, simply use conda:

```bash
conda install -c conda-forge better_optimize
```

Or, if you prefer pip:

```bash
pip install better_optimize
```
What does better_optimize provide over basic scipy?
1. Progress Bars
All optimization routines in better_optimize can display a rich, informative progress bar using the rich library. This includes:
- Iteration counts, elapsed time, and objective values.
- Gradient and Hessian norms (when available).
- Separate progress bars for global (basinhopping) and local (minimizer) steps.
- Toggleable display for headless or script environments.
2. Flat and Generalized Keyword Arguments
- No more nested `options` dictionaries! You can pass `tol`, `maxiter`, and other common options directly as top-level keyword arguments. `better_optimize` automatically sorts and promotes these arguments to the correct place for each optimizer.
- Generalizes argument handling: always provides `tol` and `maxiter` (or their equivalents) to the optimizer, even if you forget.
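To see what the flattened API removes, here is the equivalent call in plain scipy, where solver-specific settings have to be tucked into a nested `options` dict (this snippet uses only scipy, as a point of comparison):

```python
from scipy.optimize import minimize as scipy_minimize

def quadratic(x):
    return (x[0] - 1.0) ** 2

# Plain scipy: the iteration cap hides inside a nested options dict,
# and its spelling (maxiter vs maxfun) varies by method.
res = scipy_minimize(
    quadratic, x0=[0.0], method="L-BFGS-B",
    tol=1e-8, options={"maxiter": 100},
)
```

With better_optimize the same call is flat: `minimize(quadratic, x0=[0.0], method="L-BFGS-B", tol=1e-8, maxiter=100)`.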
3. Argument Checking and Validation
- Automatic checking of provided gradient (`jac`), Hessian (`hess`), and Hessian-vector product (`hessp`) functions.
- Warns if you provide unnecessary or unused arguments for a given method.
- Detects and handles fused objective functions (e.g., functions returning `(loss, grad)` or `(loss, grad, hess)` tuples).
- Ensures that the correct function signatures and return types are used for each optimizer.
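For reference, a fused `(loss, grad)` function looks like the one below. Plain scipy accepts this form too, but only if you remember to set the explicit `jac=True` flag; as described above, better_optimize detects the tuple return for you (this sketch uses plain scipy to stay self-contained):

```python
import numpy as np
from scipy.optimize import minimize as scipy_minimize

def loss_and_grad(x):
    # Fused objective: one pass computes both the loss and its gradient,
    # sharing the intermediate residual between them.
    residual = x - np.array([1.0, 2.0])
    loss = float(residual @ residual)
    grad = 2.0 * residual
    return loss, grad

# Plain scipy needs jac=True to interpret the tuple return.
res = scipy_minimize(loss_and_grad, x0=np.zeros(2), jac=True, method="L-BFGS-B")
```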
4. LRUCache1 for Fused Functions
- Provides an `LRUCache1` utility to cache the results of expensive objective/gradient/Hessian computations.
- Especially useful for triple-fused functions that return value, gradient, and Hessian together, avoiding redundant computation.
- Totally invisible -- just pass a function with three return values; it is seamlessly integrated into the optimization workflow.
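The idea behind a size-1 cache can be sketched in a few lines of plain Python. This is an illustrative toy, not the library's actual `LRUCache1` implementation: optimizers often request the value, gradient, and Hessian back-to-back at the same point, so remembering the last input makes the repeat calls free.

```python
import numpy as np

calls = {"n": 0}

def value_grad_hess(x):
    # Triple-fused objective: value, gradient, and Hessian in one pass.
    calls["n"] += 1
    return float(x @ x), 2.0 * x, 2.0 * np.eye(len(x))

def cache1(fn):
    """Size-1 cache sketch: remember the last input and re-use its outputs."""
    last_x, last_out = None, None

    def wrapped(x):
        nonlocal last_x, last_out
        x = np.asarray(x, dtype=float)
        if last_x is None or not np.array_equal(x, last_x):
            last_x, last_out = x.copy(), fn(x)
        return last_out

    return wrapped

cached = cache1(value_grad_hess)
x = np.array([1.0, 2.0])
cached(x)
cached(x)  # served from the cache -- the fused function runs only once
```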
5. Robust Basin-Hopping with Failure Tolerance
- Enhanced `basinhopping` implementation allows you to continue even if the local minimizer fails.
- Optionally accepts and stores failed minimizer results if they improve the global minimum.
- Useful for noisy or non-smooth objective functions where local minimization may occasionally fail.
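For context, here is the baseline scipy `basinhopping` call on a multi-modal objective (the objective here is an arbitrary bumpy test function, not from the package); better_optimize's enhanced version accepts the same interface while adding the failure tolerance described above:

```python
import numpy as np
from scipy.optimize import basinhopping

def bumpy(x):
    # Many local minima: basin hopping perturbs x and re-runs a local solver.
    return float(np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0])

res = basinhopping(
    bumpy, x0=[1.0], niter=50, seed=0,
    minimizer_kwargs={"method": "L-BFGS-B"},
)
```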
Example Usage
Simple Example
```python
from better_optimize import minimize

def rosenbrock(x):
    return sum(100.0 * (x[1:] - x[:-1] ** 2.0) ** 2.0 + (1 - x[:-1]) ** 2.0)

result = minimize(
    rosenbrock,
    x0=[-1, 2],
    method="L-BFGS-B",
    tol=1e-6,
    maxiter=1000,
    progressbar=True,  # Show a rich progress bar!
)
```
```
Minimizing                                          Elapsed   Iteration   Objective    ||grad||
────────────────────────────────────────────────────────────────────────────────────────────────
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━    0:00:00   721/721     0.34271757   0.92457651
```
The result object is a standard OptimizeResult from scipy.optimize, so there are no surprises there!
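Since the return type is a plain scipy `OptimizeResult`, all the usual fields are there: the solution `x`, the objective value `fun`, the convergence flag `success`, and iteration counters. A quick scipy-only illustration:

```python
from scipy.optimize import minimize as scipy_minimize

# Any OptimizeResult exposes the same attributes regardless of which
# front-end produced it.
res = scipy_minimize(lambda x: (x[0] - 3.0) ** 2, x0=[0.0], method="BFGS")
print(res.x, res.fun, res.success, res.nit)
```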
Triple-Fused Function using Pytensor
```python
from better_optimize import minimize
import pytensor.tensor as pt
from pytensor import function
import numpy as np

x = pt.vector("x")
value = pt.sum(100.0 * (x[1:] - x[:-1] ** 2.0) ** 2.0 + (1 - x[:-1]) ** 2.0)
grad = pt.grad(value, x)
hess = pt.hessian(value, x)
fused_fn = function([x], [value, grad, hess])

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
result = minimize(
    fused_fn,  # No need to set flags separately, `better_optimize` handles it!
    x0=x0,
    method="Newton-CG",
    tol=1e-6,
    maxiter=1000,
    progressbar=True,  # Show a rich progress bar!
)
```
Many sub-computations are repeated between the objective, gradient, and Hessian functions. Scipy allows you to pass a
fused value-and-gradient function, but better_optimize also lets you pass a triple-fused function that returns the
value, gradient, and Hessian together. This avoids redundant computation and speeds up the optimization process.
Parallel Optimization from Multiple Starting Points
Real-world objectives often have multiple local minima. A common workaround is to throw many random starting points at
the optimizer and keep the best result. better_optimize makes this painless with multi_optimize:
```python
import numpy as np
from better_optimize import minimize, multi_optimize

def rosenbrock(x):
    return sum(100.0 * (x[1:] - x[:-1] ** 2.0) ** 2.0 + (1 - x[:-1]) ** 2.0)

result = multi_optimize(
    solver=minimize,
    solver_kwargs=dict(f=rosenbrock, method="L-BFGS-B", tol=1e-10),
    x0=np.zeros(5),
    n_runs=16,
    init_strategy="uniform",
    bounds=(-5, 5),
    backend="loky",
    n_jobs=-1,
    seed=42,
    progressbar=True,
)

print(result.best)      # Best OptimizeResult
print(result.x_best)    # Best parameter vector
print(result.fun_best)  # Best objective value
result.summary()        # Rich table of all runs, ranked
```
`multi_optimize` works with any solver that follows the `(x0, **kwargs) → OptimizeResult` signature — that includes
`minimize`, `root`, `basinhopping`, or your own custom wrapper. It simply calls `solver(x0=x0_i, **solver_kwargs)` for
each starting point; it never inspects the solver's internals.
A few highlights:
- Initialization strategies — `"uniform"`, `"normal"`, `"sobol"`, `"lhs"`, or pass your own callable. Bounded strategies (`uniform`, `sobol`, `lhs`) require a `bounds` argument; `"normal"` perturbs around `x0` with a configurable `init_scale`. Or just pass an explicit `list[np.ndarray]` as `x0` and skip the generation entirely.
- Parallel backends — `"sequential"` (for debugging), `"loky"` (CPU-bound work, default), or `"threading"` (GIL-releasing code). Under the hood this is `joblib`, so the usual `n_jobs=-1` convention works.
- BLAS thread control — When many workers each spawn a full BLAS/OpenMP thread pool, you get noisy-neighbor over-subscription. The `blas_cores` argument (default `"auto"`) caps the total thread budget so workers don't fight over the same cores.
The returned `MultiStartResult` gives you `best`, `top_k(k)`, `ranked()`, `success_rate`, a `summary()` table, and
`to_dataframe()` for further analysis.
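The core idea behind multi-start optimization can be sketched with plain scipy and a simple loop; this is a minimal, sequential stand-in for what `multi_optimize` parallelizes (the bumpy test function here is arbitrary, chosen only because it has several local minima):

```python
import numpy as np
from scipy.optimize import minimize as scipy_minimize

def bumpy(x):
    return float(np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0])

rng = np.random.default_rng(42)
starts = rng.uniform(-5, 5, size=(16, 1))   # "uniform" init strategy over bounds
runs = [scipy_minimize(bumpy, x0=x0, method="L-BFGS-B") for x0 in starts]
best = min(runs, key=lambda r: r.fun)        # keep the best local result
```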
Contributing
We welcome contributions! If you find a bug, have a feature request, or want to improve the documentation, please open an issue or submit a pull request on GitHub.