This package provides the SOLNP optimization algorithm.
Reason this release was yanked:
Release created by mistake; use 2021.3.8 instead.
Project description
See the full documentation at http://solnp.readthedocs.io.
pysolnp - Nonlinear optimization with the augmented Lagrange method
Description
SOLNP solves the general nonlinear optimization problem of the form:
minimize f(x)
subject to
g(x) = e_x
l_h <= h(x) <= u_h
l_x < x < u_x
where f(x), g(x) and h(x) are smooth functions.
Compatibility
Precompiled wheels are available for CPython:
- Windows: Python 2.7, 3.6+
- Linux: Python 2.7, 3.5+
- Mac OS: Python 2.7, 3.5+
For other systems, or to get BLAS and LAPACK support, build the wheel manually. Building from source is recommended for best results, as BLAS and LAPACK make a noticeable performance difference.
Installation
Simply install the package from PyPI with:
pip install pysolnp
When compiling from source code you will need CMake.
See the README for the C++ code for details.
Usage
Below is the Box example, for the complete example see /python_examples/example_box.py.
import pysolnp

def f_objective_function(x):
    return -1 * x[0] * x[1] * x[2]

def g_equality_constraint_function(x):
    return [4 * x[0] * x[1] + 2 * x[1] * x[2] + 2 * x[2] * x[0]]

x_starting_point = [1.1, 1.1, 9.0]
x_l = [1.0, 1.0, 1.0]
x_u = [10.0, 10.0, 10.0]
e_x = [100]

result = pysolnp.solve(
    obj_func=f_objective_function,
    par_start_value=x_starting_point,
    par_lower_limit=x_l,
    par_upper_limit=x_u,
    eq_func=g_equality_constraint_function,
    eq_values=e_x)

result.solve_value
result.optimum
result.callbacks
result.converged
Output:
>>> result.solve_value
-48.11252206814995
>>> result.optimum
[2.8867750707815447, 2.8867750713194273, 5.773407748939196]
>>> result.callbacks
118
>>> result.converged
True
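As a quick sanity check of the reported optimum, the example's own functions can be re-evaluated at `result.optimum` (a stdlib-only sketch that does not require pysolnp; the function definitions are copied from the example above):

```python
# Re-evaluate the Box example's functions at the optimum reported above.
def f_objective_function(x):
    return -1 * x[0] * x[1] * x[2]

def g_equality_constraint_function(x):
    return [4 * x[0] * x[1] + 2 * x[1] * x[2] + 2 * x[2] * x[0]]

x_opt = [2.8867750707815447, 2.8867750713194273, 5.773407748939196]

# The objective at the optimum matches solve_value ...
print(f_objective_function(x_opt))            # close to -48.1125...
# ... and the equality constraint 4xy + 2yz + 2zx = 100 holds.
print(g_equality_constraint_function(x_opt))  # close to [100.0]
```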
Parameters
The basic signature is:
solve(obj_func: function, par_start_value: List, par_lower_limit: object = None, par_upper_limit: object = None, eq_func: object = None, eq_values: object = None, ineq_func: object = None, ineq_lower_bounds: object = None, ineq_upper_bounds: object = None, rho: float = 1.0, max_major_iter: int = 10, max_minor_iter: int = 10, delta: float = 1e-05, tolerance: float = 0.0001, debug: bool = False) -> pysolnp.Result
Inputs:
Parameter | Type | Default value* | Description |
---|---|---|---|
obj_func | Callable[List, float] | - | The objective function f(x) to minimize. |
par_start_value | List | - | The starting parameter x_0. |
par_lower_limit | List | None | The parameter lower limit x_l. |
par_upper_limit | List | None | The parameter upper limit x_u. |
eq_func | Callable[List, float] | None | The equality constraint function g(x). |
eq_values | List | None | The equality constraint values e_x. |
ineq_func | Callable[List, float] | None | The inequality constraint function h(x). |
ineq_lower_bounds | List | None | The inequality constraint lower limit l_h. |
ineq_upper_bounds | List | None | The inequality constraint upper limit u_h. |
rho | float | 1.0 | Penalty weighting scalar for infeasibility in the augmented objective function.** |
max_major_iter | int | 400 | Maximum number of outer iterations. |
max_minor_iter | int | 800 | Maximum number of inner iterations. |
delta | float | 1e-07 | Step-size for forward differentiation. |
tolerance | float | 1e-08 | Relative tolerance on optimality. |
debug | bool | False | If set to true some debug output will be printed. |
*Defaults for configuration parameters are based on the defaults for Rsolnp.
**Higher values place more weight on bringing the solution into the feasible region. Very high values may cause numerical ill-conditioning or slow down convergence.
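To build intuition for rho, the snippet below sketches a plain quadratic penalty on equality-constraint violation. This is a conceptual illustration only, not pysolnp's actual internals (which use a full augmented Lagrangian with multiplier updates):

```python
# Conceptual sketch: rho scales the penalty on equality-constraint
# violation in a penalized objective. (Not pysolnp's real internals.)
def augmented_objective(f, g, e_x, x, rho):
    violation = sum((gi - ei) ** 2 for gi, ei in zip(g(x), e_x))
    return f(x) + rho * violation

f = lambda x: -x[0] * x[1] * x[2]
g = lambda x: [4 * x[0] * x[1] + 2 * x[1] * x[2] + 2 * x[2] * x[0]]

x_infeasible = [1.1, 1.1, 9.0]  # here g(x) != 100, so the penalty is active
low = augmented_objective(f, g, [100], x_infeasible, rho=1.0)
high = augmented_objective(f, g, [100], x_infeasible, rho=100.0)
print(low, high)  # a larger rho penalizes the same infeasibility more
```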
Output:
The function returns a pysolnp.Result with the properties below.
Property | Type | Description |
---|---|---|
solve_value | float | The value of the objective function at optimum f(x*). |
optimum | List[float] | A list of parameters for the optimum x*. |
callbacks | int | Number of callbacks done to find this optimum. |
converged | bool | Indicates whether the algorithm converged. |
hessian_matrix | List[List[float]] | The final Hessian Matrix used by pysolnp. |
Use-cases and Applications
- NMPC - Nonlinear model predictive controls-case studies using Matlab, REXYGEN and pysolnp NLP solver under Python environment by Štěpán Ožana. [NMPC Overhead Crane (PDF)] [GitHub Source Code] [Štěpán's Homepage]
Authors
- Krister S Jakobsson - Implementation - krister.s.jakobsson@gmail.com
License
This project is licensed under the Boost License - see the license file for details.
Acknowledgments
- Yinyu Ye - Publisher and mastermind behind the original SOLNP algorithm, Original Sources
- Alexios Ghalanos and Stefan Theussl - The people behind RSOLNP, Github repository
- Davis King - The mastermind behind Dlib, check out his Blog
Project details
Download files
Source Distributions
Built Distributions
Hashes for pysolnp-2021.3.7-cp39-cp39-win_amd64.whl
Algorithm | Hash digest |
---|---|
SHA256 | 0bbb148db40f0261c520e6918d8dcda908f5fb31814d7b3792959bd73083e28e |
MD5 | 2869b569d8d9658cf41e4d508e432564 |
BLAKE2b-256 | b447f6c17f3128f03669b627f0d58a328356b2dead94038ff83d59eff447916a |
Hashes for pysolnp-2021.3.7-cp39-cp39-win32.whl
Algorithm | Hash digest |
---|---|
SHA256 | d067cdad92d9bd09be2bfdc84a768a828cdc37812d01aea9c42e055f5c8ab2ed |
MD5 | beec0eed2dedcd6a3b5a553779057346 |
BLAKE2b-256 | ef72187e0a5f107399dc72b215e136a251ef1d56797c6d31d084fb7700324d13 |
Hashes for pysolnp-2021.3.7-cp27-cp27m-win32.whl
Algorithm | Hash digest |
---|---|
SHA256 | 0f5023b6539342a9b7c49c4bba751ad2a8963bd7bdc399c32588b8d698014168 |
MD5 | 0f58827590decc482c54b3ca5a0fce1b |
BLAKE2b-256 | 7a42eb9a563d5c7a5b857abcf21de7e0c00b42c309962c07563fd75b59d84a70 |