PyTorch-native nonlinear optimization toolbox
Project description
PTNL is a PyTorch-native library for nonlinear optimization, with a current focus on dense nonlinear least squares and constrained nonlinear programs.
It is aimed at researchers and data scientists who want solver logic, diagnostics, and differentiation behavior to remain visible in ordinary PyTorch workflows rather than disappearing behind a black-box wrapper.
Current Scope
PTNL currently includes:
- dense nonlinear least-squares problems
- constrained nonlinear programs with equality constraints, inequality constraints, and bounds
- Gauss-Newton, Levenberg-Marquardt, trust-region, SQP, and interior-point solver paths
- explicit solver diagnostics, iteration history, and reproducibility metadata
- shared-structure batching for repeated least-squares solves
- explicit unrolled and conservative implicit differentiation modes
- CPU and CUDA benchmark harnesses and example scripts
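For orientation, the damped least-squares update at the heart of a Levenberg-Marquardt solver can be sketched in a few lines of plain PyTorch. This is an illustrative toy, not PTNL's implementation:

```python
import torch

def lm_sketch(residual, x0, n_iter=50, lam=1e-3):
    # Damped Gauss-Newton: solve (J^T J + lam * I) step = J^T r, and accept
    # the step only if it reduces the sum of squared residuals.
    x = x0.clone()
    for _ in range(n_iter):
        J = torch.autograd.functional.jacobian(residual, x)
        r = residual(x)
        H = J.T @ J + lam * torch.eye(x.numel(), dtype=x.dtype)
        x_new = x - torch.linalg.solve(H, J.T @ r)
        if residual(x_new).square().sum() < r.square().sum():
            x, lam = x_new, lam * 0.5   # good step: relax the damping
        else:
            lam = lam * 10.0            # bad step: damp harder and retry
    return x

# Recover (a, b) = (2.0, 1.5) from noiseless y = a * exp(-b * t).
t = torch.linspace(0.0, 2.0, 25, dtype=torch.float64)
y = 2.0 * torch.exp(-1.5 * t)
x_hat = lm_sketch(lambda s: s[0] * torch.exp(-s[1] * t) - y,
                  torch.tensor([1.0, 0.1], dtype=torch.float64))
```

PTNL's solver paths add trust-region management, scaling, diagnostics, and batching on top of this basic structure.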
Install
Create the development environment with uv:
```shell
uv sync
```
This installs the default dev group, including pytest, and resolves torch from the configured PyTorch wheel index.
If you need an editable install on top of the synced environment:
```shell
uv pip install -e . --no-deps
```
Run the tests:
```shell
uv run --group dev python -m pytest
```
Basic Use
```python
import torch

from pytorch_nonlinear import NonlinearLeastSquaresProblem, SolverConfig, solve

def residual(state, params):
    x = params["x"]
    y = params["y"]
    prediction = state[0] * torch.exp(-state[1] * x)
    return prediction - y

# Observed data; here, synthetic samples of y = 2 * exp(-1.5 * x).
x_data = torch.linspace(0.0, 2.0, 25, dtype=torch.float64)
y_data = 2.0 * torch.exp(-1.5 * x_data)

problem = NonlinearLeastSquaresProblem(residual=residual)
result = solve(
    problem,
    x0=torch.tensor([1.0, 0.1], dtype=torch.float64),
    params={"x": x_data, "y": y_data},
    config=SolverConfig(method="lm"),
)
print(result.x)
print(result.objective_value)
print(result.gradient_norm)
```
If no device is specified, PTNL follows the device placement of the input tensors.
Common Patterns
Choose a device explicitly:
```python
result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", device="cuda"),
)
```
Run the trust-region least-squares solver:
```python
result = solve(problem, x0=x0, params=params, config=SolverConfig(method="trust_region"))
print(result.history[-1].trust_region_radius)
print(result.history[-1].trust_region_ratio)
```
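The trust-region ratio reported here is, conventionally, the actual cost reduction divided by the reduction predicted by the local quadratic model; a step is trusted (and the radius grown) when the ratio is near one. A minimal illustration of the quantity itself, not PTNL internals:

```python
import torch

def tr_ratio(f, x, step, g, H):
    # rho = (f(x) - f(x + step)) / (m(0) - m(step)), where
    # m(p) = f(x) + g . p + 0.5 * p . H p is the quadratic model at x.
    actual = f(x) - f(x + step)
    predicted = -(g @ step + 0.5 * step @ (H @ step))
    return actual / predicted

# For an exactly quadratic objective the model is exact, so rho == 1.
H = torch.tensor([[2.0, 0.0], [0.0, 4.0]], dtype=torch.float64)
f = lambda v: 0.5 * v @ (H @ v)
x = torch.tensor([1.0, 1.0], dtype=torch.float64)
g = H @ x                  # gradient of f at x
step = -0.1 * g
rho = tr_ratio(f, x, step, g, H)
```

On a genuinely nonlinear objective the ratio drops below one as the step leaves the region where the model is accurate.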
Run a shared-structure batch of least-squares solves:
```python
from pytorch_nonlinear import BatchMode

batch_result = solve(
    problem,
    x0=x0_batch,
    params=params_batch,
    config=SolverConfig(method="lm", batch_mode=BatchMode.SHARED_STRUCTURE),
)
print(batch_result.summary())
print(batch_result.results[0].summary())
```
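Shared-structure batching exploits the fact that every problem in the batch has the same residual structure, so the per-iteration Jacobian and linear-solve work can run as one batched computation. The idea can be sketched with `torch.func.vmap`; this is an illustrative sketch, not PTNL's batching code:

```python
import torch
from torch.func import vmap, jacrev

def residual(x, t, y):
    # Shared residual structure: y_i = a * exp(-b * t_i), with x = (a, b).
    return x[0] * torch.exp(-x[1] * t) - y

def gn_step(x, t, y):
    # One Gauss-Newton step for a single problem; vmap batches it.
    J = jacrev(residual)(x, t, y)
    r = residual(x, t, y)
    return x - torch.linalg.solve(J.mT @ J, J.mT @ r)

t = torch.linspace(0.0, 2.0, 25, dtype=torch.float64)
a = torch.tensor([2.0, 3.0], dtype=torch.float64)
b = torch.tensor([1.5, 0.5], dtype=torch.float64)
y_batch = a[:, None] * torch.exp(-b[:, None] * t)
x = torch.stack([a, b], dim=1) * 0.9          # start 10% off the truth

for _ in range(10):
    x = vmap(gn_step, in_dims=(0, None, 0))(x, t, y_batch)
```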
Enable automatic scaling:
```python
from pytorch_nonlinear import ScalingConfig, ScalingMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(
        method="trust_region",
        scaling=ScalingConfig(variable_mode=ScalingMode.AUTO, residual_mode=ScalingMode.AUTO),
    ),
)
print(result.diagnosis)
print(result.reproducibility["scaling"])
```
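Automatic variable scaling in least-squares solvers is commonly derived from Jacobian column norms, as in MINPACK's Levenberg-Marquardt: each variable is rescaled so its Jacobian column has unit norm, which makes badly scaled problems behave like well scaled ones. PTNL's exact AUTO rule may differ; a generic sketch of the column-norm approach:

```python
import torch

def auto_column_scaling(J, eps=1e-12):
    # Diagonal scaling d_j = ||J[:, j]||, clamped away from zero, so the
    # scaled Jacobian J / d has unit-norm columns.
    return J.norm(dim=0).clamp_min(eps)

# A badly scaled Jacobian: the two variables differ by six orders of magnitude.
J = torch.tensor([[1e3, 1e-3],
                  [0.0, 2e-3]], dtype=torch.float64)
d = auto_column_scaling(J)
J_scaled = J / d
```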
Differentiate through a solve with unrolling:
```python
from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.UNROLL),
)
outer_loss = result.x.square().sum()
outer_loss.backward()
print(result.diff_mode_used, result.diff_valid)
```
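Unrolled differentiation keeps every solver iteration on the autograd tape and backpropagates through them. A minimal illustration of the mechanism with plain gradient descent, not PTNL's solver:

```python
import torch

# Minimize (x - theta)^2 by gradient descent, keeping every iterate on the
# autograd tape so gradients w.r.t. theta flow through the unrolled loop.
theta = torch.tensor(3.0, requires_grad=True)
x = torch.tensor(0.0)
for _ in range(100):
    x = x - 0.1 * 2.0 * (x - theta)

outer = x.square()   # outer loss on the (approximate) solution x* = theta
outer.backward()
# d((x*)^2)/dtheta = 2 * theta = 6, up to the unrolling error.
```

The cost is that memory and backward time grow with the iteration count, which motivates the implicit mode below.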
Use the conservative implicit differentiation path:
```python
from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.IMPLICIT),
)
print(result.diff_mode_used, result.diff_valid)
print(result.diff_condition_estimate, result.diff_linear_residual)
```
Implicit differentiation is attached only when PTNL can certify that the returned point is safe for that path.
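By contrast with unrolling, implicit differentiation works from the optimality conditions of the returned point rather than the solver iterates, which is why the point must first satisfy those conditions to an acceptable tolerance. For a linear least-squares problem the idea reduces to differentiating through the normal equations; an illustrative sketch, not PTNL internals:

```python
import torch

# For x*(b) = argmin ||A x - b||^2, the stationarity condition is the normal
# equations A^T A x* = A^T b; differentiating this system (rather than any
# solver iterates) gives dx*/db = (A^T A)^{-1} A^T.
A = torch.tensor([[1.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]], dtype=torch.float64)
b = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64, requires_grad=True)

x_star = torch.linalg.solve(A.T @ A, A.T @ b)   # the stationary point itself
loss = x_star.square().sum()
loss.backward()                                 # gradient via the linear system

# Compare against the analytic implicit Jacobian.
dx_db = torch.linalg.inv(A.T @ A) @ A.T
expected = 2.0 * dx_db.T @ x_star.detach()
```

Memory and backward cost are independent of how many iterations the solver took, but the result is only trustworthy at a certified stationary point.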
Benchmarks And Examples
Run the least-squares benchmark harness:
```shell
python benchmarks/run_benchmarks.py
```
Run the constrained benchmark harness:
```shell
python benchmarks/run_constrained_benchmarks.py --method sqp
```
Useful example scripts include:
```shell
python examples/least_squares_curve_fit.py
python examples/least_squares_scaling_effect.py
python examples/rosenbrock_gn_vs_lm.py
python examples/cuda_rosenbrock_gn_lm_tr.py
python examples/cuda_robust_loss_comparison_hard.py
python examples/learned_range_sensor_fusion.py
python examples/cpu_vs_gpu_gn_lm.py
```
Project details
File details
Details for the file ptnl-0.1.0a0.tar.gz.
File metadata
- Download URL: ptnl-0.1.0a0.tar.gz
- Upload date:
- Size: 202.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `282286fac5bb5b9de1159ca94b59fad6ebc33596d9751cb6d53d18dab9426246` |
| MD5 | `0be25a501be691f32c4f1ce5de7e2caf` |
| BLAKE2b-256 | `3b980cf683d68bbdee9531f69b8abfba8ff2efdebe14e242f266cd90e8d77c17` |
File details
Details for the file ptnl-0.1.0a0-py3-none-any.whl.
File metadata
- Download URL: ptnl-0.1.0a0-py3-none-any.whl
- Upload date:
- Size: 100.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `303a4bdb892d411cfa383c1d17d421c287213ff265337da37e2b1a378fc9a18e` |
| MD5 | `1fa88b95fee3d98d0f7757ff5ce2f28f` |
| BLAKE2b-256 | `081902178249c0e1bdef7f6530961afd85f8ddb613912d038f121134309bf0c0` |