
A drop-in replacement for scipy optimize functions with quality-of-life improvements


Better Optimization!

better_optimize is a friendlier front-end to scipy's optimize.minimize and optimize.root functions. Features include:

  • Progress bar!
  • Early stopping!
  • Better propagation of common arguments (maxiter, tol)!

Installation

To install better_optimize, simply use conda:

conda install -c conda-forge better_optimize

Or, if you prefer pip:

pip install better_optimize

What does better_optimize provide over basic scipy?

1. Progress Bars

All optimization routines in better_optimize can display a rich, informative progress bar using the rich library. This includes:

  • Iteration counts, elapsed time, and objective values.
  • Gradient and Hessian norms (when available).
  • Separate progress bars for global (basinhopping) and local (minimizer) steps.
  • Toggleable display for headless or script environments.
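This kind of per-iteration reporting can be reproduced in plain scipy through the standard callback hook, which is the sort of mechanism a progress bar builds on. A minimal sketch (plain scipy, without the rich display):

```python
import numpy as np
from scipy.optimize import minimize


def rosenbrock(x):
    return sum(100.0 * (x[1:] - x[:-1] ** 2.0) ** 2.0 + (1 - x[:-1]) ** 2.0)


iterations = []

def report_progress(xk):
    # scipy calls this once per iteration with the current point;
    # a progress bar would update its display here.
    iterations.append(rosenbrock(xk))

result = minimize(rosenbrock, x0=np.array([-1.0, 2.0]),
                  method="L-BFGS-B", callback=report_progress)
print(len(iterations), result.fun)
```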

2. Flat and Generalized Keyword Arguments

  • No more nested options dictionaries! You can pass tol, maxiter, and other common options directly as top-level keyword arguments.
  • better_optimize automatically sorts and promotes these arguments to the correct place for each optimizer.
  • Generalizes argument handling: always provides tol and maxiter (or their equivalents) to the optimizer, even if you forget.
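For comparison, here is the plain scipy calling convention, with the nested options dict that better_optimize flattens away (the flattened call in the comment is illustrative):

```python
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    return np.sum(x ** 2)

# Plain scipy: method-specific settings go in a nested `options` dict,
# and the accepted keys vary by method.
result = minimize(sphere, x0=np.ones(3), method="L-BFGS-B",
                  tol=1e-8, options={"maxiter": 500})

# With better_optimize, the same call flattens to top-level kwargs:
#   minimize(sphere, x0=np.ones(3), method="L-BFGS-B", tol=1e-8, maxiter=500)
print(result.fun)
```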

3. Argument Checking and Validation

  • Automatic checking of provided gradient (jac), Hessian (hess), and Hessian-vector (hessp) functions.
  • Warns if you provide unnecessary or unused arguments for a given method.
  • Detects and handles fused objective functions (e.g., functions returning (loss, grad) or (loss, grad, hess) tuples).
  • Ensures that the correct function signatures and return types are used for each optimizer.
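As a rough illustration of how fused-function detection can work, the objective can be probed once at the initial point and classified by its return shape (a sketch of the general idea, not better_optimize's actual code):

```python
import numpy as np


def detect_fused(fn, x0):
    """Probe fn at x0 and infer whether it returns a bare value,
    a (loss, grad) pair, or a (loss, grad, hess) triple."""
    out = fn(np.asarray(x0, dtype=float))
    if not isinstance(out, tuple):
        return "value"
    return {2: "value_and_grad", 3: "value_grad_and_hess"}[len(out)]


def fused(x):
    value = np.sum(x ** 2)
    grad = 2 * x
    return value, grad


print(detect_fused(fused, [1.0, 2.0]))  # value_and_grad
```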

4. LRUCache1 for Fused Functions

  • Provides an LRUCache1 utility to cache the results of expensive objective/gradient/Hessian computations.
  • Especially useful for triple-fused functions that return value, gradient, and Hessian together, avoiding redundant computation.
  • Totally invisible -- just pass a function with 3 return values. Seamlessly integrated into the optimization workflow.
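The idea behind a size-1 cache can be sketched as follows: the fused function is evaluated once per point, and the separate value/grad/hess accessors that the optimizer calls all read from the same cached result (names and details here are illustrative, not better_optimize's actual implementation):

```python
import numpy as np


class LRUCache1:
    """Size-1 cache around a fused objective: remembers the last x evaluated."""

    def __init__(self, fused_fn):
        self.fused_fn = fused_fn
        self.last_x = None
        self.last_out = None
        self.n_calls = 0

    def _eval(self, x):
        if self.last_x is None or not np.array_equal(x, self.last_x):
            self.last_x = np.array(x, copy=True)
            self.last_out = self.fused_fn(x)
            self.n_calls += 1
        return self.last_out

    def value(self, x):
        return self._eval(x)[0]

    def grad(self, x):
        return self._eval(x)[1]

    def hess(self, x):
        return self._eval(x)[2]


def fused(x):
    # Value, gradient, and Hessian of sum(x**2), computed together
    return np.sum(x ** 2), 2 * x, 2 * np.eye(len(x))


cache = LRUCache1(fused)
x = np.array([1.0, 2.0])
cache.value(x); cache.grad(x); cache.hess(x)
print(cache.n_calls)  # the fused function ran only once
```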

5. Robust Basin-Hopping with Failure Tolerance

  • Enhanced basinhopping implementation allows you to continue even if the local minimizer fails.
  • Optionally accepts and stores failed minimizer results if they improve the global minimum.
  • Useful for noisy or non-smooth objective functions where local minimization may occasionally fail.
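The failure-tolerance idea can be sketched with a simple restart loop around scipy's local minimizer: a candidate result is kept whenever it improves the best objective, regardless of whether the local solve reported success (a simplified illustration, not the library's basinhopping code):

```python
import numpy as np
from scipy.optimize import minimize


def tolerant_basinhopping(fn, x0, n_hops=20, step=0.5, seed=0):
    """Random-restart loop that keeps the best result even when a local
    minimize call fails (result.success is False)."""
    rng = np.random.default_rng(seed)
    best = minimize(fn, x0, method="L-BFGS-B")
    x = best.x
    for _ in range(n_hops):
        candidate = minimize(fn, x + rng.normal(scale=step, size=len(x)),
                             method="L-BFGS-B")
        # Accept an improvement even if the local minimizer reported failure
        if candidate.fun < best.fun:
            best = candidate
        x = best.x
    return best


def wavy(x):
    # A multimodal objective: quadratic bowl plus sinusoidal ripples
    return np.sum(x ** 2) + np.sum(np.sin(5 * x))


result = tolerant_basinhopping(wavy, np.array([3.0, -3.0]))
print(result.fun)
```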

Example Usage

Simple Example

from better_optimize import minimize

def rosenbrock(x):
    return sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

result = minimize(
    rosenbrock,
    x0=[-1, 2],
    method="L-BFGS-B",
    tol=1e-6,
    maxiter=1000,
    progressbar=True,  # Show a rich progress bar!
)
  Minimizing                                         Elapsed   Iteration   Objective    ||grad||
 ──────────────────────────────────────────────────────────────────────────────────────────────────
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0:00:00   721/721     0.34271757   0.92457651

The result object is a standard OptimizeResult from scipy.optimize, so there are no surprises there!

Triple-Fused Function using Pytensor

from better_optimize import minimize
import pytensor.tensor as pt
from pytensor import function
import numpy as np

x = pt.vector('x')
value = pt.sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)
grad = pt.grad(value, x)
hess = pt.hessian(value, x)

fused_fn = function([x], [value, grad, hess])
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

result = minimize(
    fused_fn, # No need to set flags separately, `better_optimize` handles it!
    x0=x0,
    method="Newton-CG",
    tol=1e-6,
    maxiter=1000,
    progressbar=True,  # Show a rich progress bar!
)

Many sub-computations are shared between the objective, gradient, and Hessian. scipy allows you to pass a fused value_and_grad function, but better_optimize also lets you pass a triple-fused value_grad_and_hess function. This avoids redundant computation and speeds up the optimization process.
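For reference, scipy's own fused interface looks like this: passing jac=True tells minimize that the callable returns a (value, gradient) pair:

```python
import numpy as np
from scipy.optimize import minimize


def value_and_grad(x):
    # Rosenbrock value and its analytic gradient, computed together
    value = np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)
    grad = np.zeros_like(x)
    grad[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1 - x[:-1])
    grad[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return value, grad


# jac=True: the objective returns (value, gradient) as one fused call
result = minimize(value_and_grad, x0=np.array([1.3, 0.7, 0.8, 1.9, 1.2]),
                  method="L-BFGS-B", jac=True)
print(result.fun)
```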

Contributing

We welcome contributions! If you find a bug, have a feature request, or want to improve the documentation, please open an issue or submit a pull request on GitHub.
