
PyTorch-native nonlinear optimization toolbox


PTNL

PTNL is a PyTorch-native library for nonlinear optimization, with a current focus on dense nonlinear least squares and constrained nonlinear programs.

It is aimed at researchers and data scientists who want solver logic, diagnostics, and differentiation behavior to remain visible in ordinary PyTorch workflows rather than disappearing behind a black-box wrapper.

Current Scope

PTNL currently includes:

  • dense nonlinear least-squares problems
  • constrained nonlinear programs with equality constraints, inequality constraints, and bounds
  • Gauss-Newton, Levenberg-Marquardt, trust-region, SQP, and interior-point solver paths
  • explicit solver diagnostics, iteration history, and reproducibility metadata
  • shared-structure batching for repeated least-squares solves
  • explicit unrolled and conservative implicit differentiation modes
  • CPU and CUDA benchmark harnesses and example scripts
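As a point of reference for the solver paths listed above, a dense Gauss-Newton iteration can be written in a few lines of plain PyTorch. This is a toy sketch of the general technique, not PTNL's implementation:

```python
import torch


def residual(x):
    # Rosenbrock residuals: the least-squares optimum is x = (1, 1)
    return torch.stack([1.0 - x[0], 10.0 * (x[1] - x[0] ** 2)])


def gauss_newton_step(x):
    # One dense Gauss-Newton step: solve (J^T J) dx = -J^T r
    J = torch.autograd.functional.jacobian(residual, x)
    r = residual(x)
    dx = torch.linalg.solve(J.T @ J, -(J.T @ r))
    return x + dx


x = torch.tensor([0.0, 0.0], dtype=torch.float64)
for _ in range(10):
    x = gauss_newton_step(x)
print(x)  # converges to (1., 1.)
```

Levenberg-Marquardt differs from this sketch only in damping the normal equations; trust-region methods instead constrain the step length.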

Install

Create the development environment with uv:

uv sync

This installs the default dev group, including pytest, and resolves torch from the configured PyTorch wheel index.

If you need an editable install on top of the synced environment:

uv pip install -e . --no-deps

Run the tests:

uv run --group dev python -m pytest

Basic Use

import torch

from pytorch_nonlinear import NonlinearLeastSquaresProblem, SolverConfig, solve


def residual(state, params):
    x = params["x"]
    y = params["y"]
    prediction = state[0] * torch.exp(-state[1] * x)
    return prediction - y


# Synthetic data for the exponential-decay fit
x_data = torch.linspace(0.0, 4.0, 50, dtype=torch.float64)
y_data = 2.5 * torch.exp(-1.3 * x_data)

problem = NonlinearLeastSquaresProblem(residual=residual)
result = solve(
    problem,
    x0=torch.tensor([1.0, 0.1], dtype=torch.float64),
    params={"x": x_data, "y": y_data},
    config=SolverConfig(method="lm"),
)

print(result.x)
print(result.objective_value)
print(result.gradient_norm)

If no device is specified, PTNL follows the device placement of the input tensors.

Common Patterns

Choose a device explicitly:

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", device="cuda"),
)

Run the trust-region least-squares solver:

result = solve(problem, x0=x0, params=params, config=SolverConfig(method="trust_region"))
print(result.history[-1].trust_region_radius)
print(result.history[-1].trust_region_ratio)
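The trust_region_ratio reported in the history is the standard agreement measure between the actual objective decrease and the decrease predicted by the local quadratic model. A plain-PyTorch sketch of how such a ratio is computed (illustrative, not PTNL internals):

```python
import torch


def objective(x):
    # Rosenbrock in least-squares form: 0.5 * ||r(x)||^2
    r = torch.stack([1.0 - x[0], 10.0 * (x[1] - x[0] ** 2)])
    return 0.5 * (r ** 2).sum()


def trust_region_ratio(x, step):
    # rho = actual reduction / reduction predicted by the quadratic model
    g = torch.autograd.functional.jacobian(objective, x)
    H = torch.autograd.functional.hessian(objective, x)
    predicted = -(g @ step + 0.5 * step @ (H @ step))
    actual = objective(x) - objective(x + step)
    return actual / predicted


x = torch.tensor([0.0, 0.0], dtype=torch.float64)
step = torch.tensor([0.1, 0.0], dtype=torch.float64)
print(trust_region_ratio(x, step))  # close to 1: the model is trustworthy here
```

A ratio near 1 lets a trust-region solver grow the radius; a small or negative ratio shrinks it and rejects the step.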

Run a shared-structure batch of least-squares solves:

from pytorch_nonlinear import BatchMode

batch_result = solve(
    problem,
    x0=x0_batch,
    params=params_batch,
    config=SolverConfig(method="lm", batch_mode=BatchMode.SHARED_STRUCTURE),
)

print(batch_result.summary())
print(batch_result.results[0].summary())
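Shared-structure batching exploits the fact that every problem in the batch has the same shapes, so the per-iteration linear algebra can run as a single batched call. A plain-PyTorch illustration of the idea (not PTNL internals):

```python
import torch

# Batched Gauss-Newton linear algebra: identical problem structure,
# different data, all B steps computed in one batched solve.
torch.manual_seed(0)
B, m, n = 4, 10, 3  # batch size, residual count, parameter count
J = torch.randn(B, m, n, dtype=torch.float64)  # per-problem Jacobians
r = torch.randn(B, m, 1, dtype=torch.float64)  # per-problem residuals
JtJ = J.transpose(-1, -2) @ J
Jtr = J.transpose(-1, -2) @ r
dx = torch.linalg.solve(JtJ, -Jtr)  # one call, B independent steps
print(dx.shape)  # torch.Size([4, 3, 1])
```

This is why shared-structure mode pays off on GPU: one large batched kernel replaces B small sequential solves.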

Enable automatic scaling:

from pytorch_nonlinear import ScalingConfig, ScalingMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(
        method="trust_region",
        scaling=ScalingConfig(variable_mode=ScalingMode.AUTO, residual_mode=ScalingMode.AUTO),
    ),
)

print(result.diagnosis)
print(result.reproducibility["scaling"])
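A common automatic-scaling choice for least-squares problems, and the general idea behind modes like ScalingMode.AUTO, is column-norm (Jacobi) scaling of the Jacobian. The following plain-PyTorch sketch shows the technique in general; whether PTNL uses exactly this rule is an assumption:

```python
import torch

# Column-norm (Jacobi) scaling: divide each Jacobian column by its norm
# so badly scaled variables stop dominating the linear algebra.
J = torch.tensor([[1e4, 1.0], [2e4, 3.0]], dtype=torch.float64)
d = J.norm(dim=0)   # per-variable column norms
J_scaled = J / d    # columns now have unit norm
print(torch.linalg.cond(J_scaled) < torch.linalg.cond(J))  # True
```

The scaled system is better conditioned, which is what the diagnosis and reproducibility metadata let you verify after a solve.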

Differentiate through a solve with unrolling:

from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.UNROLL),
)

outer_loss = result.x.square().sum()
outer_loss.backward()
print(result.diff_mode_used, result.diff_valid)
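Unrolled differentiation means the solver's iterations stay on the autograd tape, so backward passes flow through every step. The mechanism can be seen in a toy fixed-point iteration in plain PyTorch (not PTNL code):

```python
import torch

# Unrolled differentiation: keep every iteration on the autograd tape.
theta = torch.tensor(2.0, dtype=torch.float64, requires_grad=True)
x = torch.tensor(0.0, dtype=torch.float64)
for _ in range(50):
    # Gradient step on f(x) = 0.5 * (x - theta)**2; x converges to theta,
    # and each step remains differentiable with respect to theta.
    x = x - 0.5 * (x - theta)
outer_loss = x.square()
outer_loss.backward()
print(theta.grad)  # approximately 2 * theta = 4.0
```

The cost is memory proportional to the number of iterations, which is the usual reason to prefer an implicit mode when it is available.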

Use the conservative implicit differentiation path:

from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.IMPLICIT),
)

print(result.diff_mode_used, result.diff_valid)
print(result.diff_condition_estimate, result.diff_linear_residual)

Implicit differentiation is attached only when PTNL can certify that the returned point is safe for that path.
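That certification matters because the implicit path differentiates the stationarity condition at the returned point rather than the iterates, and is only valid when that condition actually holds and the linearization is well posed. The underlying implicit-function-theorem computation, on a toy problem in plain PyTorch (illustrative, not PTNL internals):

```python
import torch

# Implicit function theorem: at a stationary point x*(theta) where
# grad_x f(x*, theta) = 0, we get dx*/dtheta = -(f_xx)^-1 f_xt.
theta = torch.tensor([1.5], dtype=torch.float64)


def grad_x(x, t):
    # grad_x of f(x, t) = 0.5 * (x - t**2)**2; a solver drives this to zero
    return x - t ** 2


x_star = theta ** 2  # the "solver output": grad_x(x_star, theta) == 0
fxx = torch.autograd.functional.jacobian(lambda x: grad_x(x, theta), x_star)
fxt = torch.autograd.functional.jacobian(lambda t: grad_x(x_star, t), theta)
dx_dtheta = -torch.linalg.solve(fxx, fxt)
print(dx_dtheta)  # 2 * theta = 3.0, matching the closed form d(theta**2)/dtheta
```

Diagnostics like diff_condition_estimate and diff_linear_residual report how well posed this linear solve was, which is what makes the mode "conservative".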

Benchmarks And Examples

Run the least-squares benchmark harness:

python benchmarks/run_benchmarks.py

Run the constrained benchmark harness:

python benchmarks/run_constrained_benchmarks.py --method sqp
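The SQP method selected with --method sqp solves a KKT linear system at every iteration. Here is a toy one-step version for an equality-constrained quadratic in plain PyTorch (illustrative of the general technique, not PTNL code):

```python
import torch

# One Newton-KKT step for: minimize 0.5 * ||x||^2  subject to  x0 + x1 = 1.
x = torch.tensor([0.0, 0.0], dtype=torch.float64)
H = torch.eye(2, dtype=torch.float64)                 # objective Hessian
g = x.clone()                                          # gradient of 0.5*||x||^2
A = torch.tensor([[1.0, 1.0]], dtype=torch.float64)    # constraint Jacobian
c = x.sum() - 1.0                                      # constraint violation

# KKT system: [[H, A^T], [A, 0]] @ [dx, lam] = [-g, -c]
K = torch.cat([torch.cat([H, A.T], dim=1),
               torch.cat([A, torch.zeros(1, 1, dtype=torch.float64)], dim=1)])
rhs = torch.cat([-g, -c.reshape(1)])
sol = torch.linalg.solve(K, rhs)
x = x + sol[:2]
print(x)  # tensor([0.5, 0.5]): the constrained optimum, reached in one step
```

Because this problem is a quadratic with a linear constraint, a single KKT step lands on the optimum; on nonlinear programs SQP repeats this linearize-and-solve cycle.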

Useful example scripts include:

  • python examples/least_squares_curve_fit.py
  • python examples/least_squares_scaling_effect.py
  • python examples/rosenbrock_gn_vs_lm.py
  • python examples/cuda_rosenbrock_gn_lm_tr.py
  • python examples/cuda_robust_loss_comparison_hard.py
  • python examples/learned_range_sensor_fusion.py
  • python examples/cpu_vs_gpu_gn_lm.py
