
A JAX-based package for projected learning of parametric optimization problems


NLPOpt-Net

NonLinear Parametric Optimization Network (NLPOpt-Net) is a GPU-supported Python package for learning parameter-to-solution mappings for nonlinear parametric optimization problems.

Figure 1: Overview of the NLPOpt-Net framework.

Installation

Currently, NLPOpt-Net requires Python 3.9 or later and supports installation for both CPU-only and GPU-accelerated environments.

  • CPU-only (Linux/macOS/Windows)
pip install nlpoptnet
  • GPU (NVIDIA, CUDA 12)
pip install "nlpoptnet[cuda12]"

Quick Overview

To give a quick sense of what the package does: we want to predict a solution $y$ of the parametrized optimization problem
$$
\begin{aligned}
\min_{y} \quad & f(x, y) \\
\text{s.t.} \quad & h(x, y) = 0, \\
& g(x, y) \le 0, \\
& l(x) \le y \le u(x),
\end{aligned}
$$

for a given set of parameters $x$. The core idea is to train a machine learning model in an unsupervised fashion and use it to predict solutions for new parameter values. To guarantee feasibility, a projection layer is employed: the original problem is approximated by a quadratic program with a specific structure, which is then solved to restore feasibility. Graphically, the projection as training progresses looks like this:

Figure 2: Graphical interpretation of the projection.
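In symbols, one common way to build such a projection (a sketch of the idea only; NLPOpt-Net's exact QP structure is described in the linked article) is to linearize the constraints around the raw network prediction $\hat{y}$ and solve the restoration problem
$$
\begin{aligned}
\min_{y} \quad & \tfrac{1}{2}\, \| y - \hat{y} \|^2 \\
\text{s.t.} \quad & h(x, \hat{y}) + \nabla_y h(x, \hat{y})^\top (y - \hat{y}) = 0, \\
& g(x, \hat{y}) + \nabla_y g(x, \hat{y})^\top (y - \hat{y}) \le 0, \\
& l(x) \le y \le u(x),
\end{aligned}
$$
so the projected output is the feasible (to first order) point closest to the prediction.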

For a detailed idea, we refer readers to this article. Complete user documentation can be found here.

Core API

  • NLPOptNet(config=..., type=...) is the user-facing entry point.
  • model.extract("problem.npz") loads structured matrices and exposes them as model attributes like model.Q, model.A, model.b, and so on.
  • model.dataset(...), model.simplex(...), and model.box(...) define the parameter region.
  • model.constraints.box.add(...) is separate from general inequalities so the bound constraints stay on the dedicated projection path.
  • model.optimize() trains and returns a result dictionary with output_dir, metadata_path, summary, and history.
  • model.load(metadata_path) restores a trained model, and model.predict(x_value) returns projected variable predictions.

General Workflow

These ideas translate into a general workflow for training and inference.

Model Creation and Training

The following pseudocode sketches how to build and train a problem with NLPOpt-Net.

from nlpoptnet import NLPOptNet

# Model configuration (training hyperparameters, solver options, etc.)
CONFIG = {...}

model = NLPOptNet(config=CONFIG, name="problem_name")

# Declare parameters (inputs) and decision variables (outputs)
x = model.add_parameter([...])
y = model.add_variable([...])

# Load structured problem matrices and define the objective
model.extract("path/to/problem_data")
model.objective(...)

# Attach constraints; box constraints use the dedicated projection path
model.constraints.equality.add(...)
model.constraints.inequality.add(...)
model.constraints.box.add(...)

# Parameter samples defining the training region
model.dataset(parameters="path/to/parameters.csv")

model.build()
result = model.optimize()

run_dir = result["output_dir"]
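The contents of CONFIG depend on the problem. A purely illustrative sketch is shown below; every key name here is hypothetical, not a confirmed NLPOpt-Net option, so consult the user documentation for the actual schema.

```python
# Hypothetical configuration dictionary -- key names are illustrative only,
# not confirmed NLPOpt-Net options.
CONFIG = {
    "hidden_layers": [128, 128],  # network architecture
    "learning_rate": 1e-3,        # optimizer step size
    "epochs": 500,                # number of training epochs
    "batch_size": 64,             # samples per gradient step
    "seed": 0,                    # reproducibility
}
```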

Load and Use a Trained Model

After training, a saved model can be reloaded to inspect the run summary, visualize the training history, and make new predictions.

reloaded = NLPOptNet().load(metadata_path)

reloaded.summary()
reloaded.plot_history()

sample_pred = reloaded.predict(sample_x, projection_backend="backend/type/for/prediction")

Example

The following shows an example summary and history plot for a trained model.

model.summary()
📊 NLPOptNet Summary
------------------------------------------------------------
Model Name                        : example_model
No. of Parameters                 : 50
No. of Variables                  : 100
No. of Equalities                 : 50
No. of Inequalities               : 50
No. of Train Samples              : 1000
No. of Validation Samples         : 1000
Maximum Constraint Violation      : 7.5433e-11
Training Time                     : 1518.25 s
Est. JAX Single Inference Time    : 91.49 ms
Est. Native Single Inference Time : 1.99 ms
Est. JAX Batch Inference Time     : 140.02 ms
------------------------------------------------------------
model.plot_history()
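The "Maximum Constraint Violation" line in the summary measures the worst constraint residual of the projected predictions. A minimal NumPy sketch of such a metric for linear constraints (the helper name and signature are our own, not part of the NLPOpt-Net API) could look like:

```python
import numpy as np

def max_constraint_violation(y, A, b, G, d, l, u):
    """Worst-case residual of y over Ay = b, Gy <= d, and l <= y <= u."""
    eq = np.max(np.abs(A @ y - b))             # equality residual |Ay - b|
    ineq = np.max(np.maximum(G @ y - d, 0.0))  # positive part of Gy - d
    box = max(np.max(np.maximum(l - y, 0.0)),  # lower-bound violation
              np.max(np.maximum(y - u, 0.0)))  # upper-bound violation
    return max(eq, ineq, box)

# A feasible point has zero violation
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
G = np.array([[1.0, 0.0]]); d = np.array([1.0])
l = np.zeros(2); u = np.ones(2)
y = np.array([0.5, 0.5])
print(max_constraint_violation(y, A, b, G, d, l, u))  # → 0.0
```

Tiny reported values such as 7.5433e-11 indicate the projection layer is restoring feasibility to numerical precision.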

Reporting a Bug or Error

We understand that running into issues can be frustrating. If you experience any errors while using NLPOpt-Net, please let us know so we can address them in future updates.

When reporting a bug, it is helpful to include:

  • A brief description of the issue
  • Relevant code snippets
  • Error messages or logs
  • Steps to reproduce the problem

Your feedback is greatly appreciated and helps us improve the package.
