QP solvers benchmark
Benchmark for quadratic programming (QP) solvers available in Python.
The goal of this benchmark is to help users compare and select QP solvers. Its methodology is open to discussion. The benchmark ships standard and community test sets, as well as a `qpsolvers_benchmark` command-line tool to run any test set directly on your machine. For instance:

```
qpsolvers_benchmark maros_meszaros/maros_meszaros.py run
```
The outcome of running a test set is a standardized report evaluating all benchmark metrics across all available QP solvers. This repository also distributes results obtained by running the benchmark on all test sets with the same computer.
New test sets are welcome. Each test-set directory is standalone, so the `qpsolvers_benchmark` command can also run test sets from other repositories. Feel free to create ones that better represent the kinds of problems you are working on.
Solvers
Solver | Keyword | Algorithm | Matrices | License
---|---|---|---|---
Clarabel | `clarabel` | Interior point | Sparse | Apache-2.0
CVXOPT | `cvxopt` | Interior point | Dense | GPL-3.0
DAQP | `daqp` | Active set | Dense | MIT
ECOS | `ecos` | Interior point | Sparse | GPL-3.0
Gurobi | `gurobi` | Interior point | Sparse | Commercial
HiGHS | `highs` | Active set | Sparse | MIT
MOSEK | `mosek` | Interior point | Sparse | Commercial
NPPro | `nppro` | Active set | Dense | Commercial
OSQP | `osqp` | Douglas–Rachford | Sparse | Apache-2.0
ProxQP | `proxqp` | Augmented Lagrangian | Dense & Sparse | BSD-2-Clause
qpOASES | `qpoases` | Active set | Dense | LGPL-2.1
qpSWIFT | `qpswift` | Interior point | Sparse | GPL-3.0
quadprog | `quadprog` | Goldfarb–Idnani | Dense | GPL-2.0
SCS | `scs` | Douglas–Rachford | Sparse | MIT
Test sets
The benchmark comes with standard and community test sets to represent different use cases for QP solvers:
Test set | Keyword | Description
---|---|---
GitHub free-for-all | `github_ffa` | Test set built by the community on GitHub; new problems are welcome!
Maros-Meszaros | `maros_meszaros` | Standard set of problems designed to be difficult.
Maros-Meszaros dense | `maros_meszaros_dense` | Subset of the Maros-Meszaros test set restricted to smaller dense problems.
Results
The outcome of running a test set is a standardized report. The results distributed in this repository were obtained by running all test sets on the same computer.
Metrics
We evaluate QP solvers based on the following metrics:
- Success rate: percentage of problems a solver is able to solve on a given test set.
- Computation time: time a solver takes to solve a given problem.
- Optimality conditions: we evaluate all three optimality conditions (see the sketch after this list):
- Primal residual: maximum error on equality and inequality constraints at the returned solution.
- Dual residual: maximum error on the dual feasibility condition at the returned solution.
- Duality gap: value of the duality gap at the returned solution.
- Cost error: difference between the solution cost and the known optimal cost.
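To make these metrics concrete, here is a minimal numpy sketch of the three optimality conditions, assuming a QP of the form $\min_x \frac{1}{2} x^T P x + q^T x$ subject to $G x \leq h$ and $A x = b$ (box constraints omitted for brevity). This is an illustration of the definitions above, not the benchmark's exact implementation:

```python
import numpy as np

def qp_residuals(P, q, G, h, A, b, x, y, z):
    """Optimality residuals at a candidate primal-dual solution (x, y, z),
    where z >= 0 multiplies the inequalities and y the equalities."""
    # Primal residual: worst violation of inequality and equality constraints
    primal = max(
        np.max(np.maximum(G @ x - h, 0.0), initial=0.0),
        np.max(np.abs(A @ x - b), initial=0.0),
    )
    # Dual residual: error on the stationarity condition of the Lagrangian
    dual = np.max(np.abs(P @ x + q + G.T @ z + A.T @ y), initial=0.0)
    # Duality gap: difference between primal and dual objective values
    gap = abs(x @ P @ x + q @ x + h @ z + b @ y)
    return primal, dual, gap
```

All three values are zero at an exact primal-dual solution, so lower is better.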
Shifted geometric mean
Each metric (computation time, primal and dual residuals, duality gap) produces a different ranking of solvers for each problem. To aggregate those rankings into a single number over a whole test set, we use the shifted geometric mean (shm), a standard way to aggregate computation times in benchmarks for optimization software. This mean is dominated neither by large outliers (as the arithmetic mean would be) nor by small outliers (as the plain geometric mean would be). Check out the references below for further details.
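For reference, given values $t_1, \ldots, t_n$ (e.g. runtimes over the problems of a test set) and a shift $sh > 0$, the shifted geometric mean is:

$$\mathrm{shm}(t_1, \ldots, t_n) = \left( \prod_{i=1}^{n} (t_i + sh) \right)^{1/n} - sh$$

A common choice in optimization benchmarks is $sh = 10$: the shift keeps near-zero values from dominating the mean, while the geometric structure keeps large outliers in check.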
Here are some intuitive interpretations:
- A solver with a shifted-geometric-mean runtime of $Y$ is $Y$ times slower than the best solver over the test set.
- A solver with a shifted-geometric-mean primal residual $R$ is $R$ times less accurate on equality and inequality constraints than the best solver over the test set.
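As a sketch, the mean above can be computed in log space for numerical stability; the default shift below is an assumption for illustration, not necessarily the value used by the benchmark:

```python
import numpy as np

def shifted_geometric_mean(values, shift=10.0):
    """Shifted geometric mean of positive values, computed in log space."""
    values = np.asarray(values, dtype=float)
    return np.exp(np.mean(np.log(values + shift))) - shift

# Example: aggregate one solver's runtimes (in seconds) over three problems
print(shifted_geometric_mean([0.1, 0.2, 50.0]))
```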
Limitations
Here are some known areas of improvement for this benchmark:
- Cold start only: we don't evaluate warm-start performance for now.
Check out the issue tracker for ongoing work and future improvements.
Installation
You can install the benchmark and its dependencies in an isolated environment using `conda`:

```
conda env create -f environment.yaml
conda activate qpsolvers_benchmark
```
Alternatively, you can install the benchmark on your system using `pip`:

```
pip install qpsolvers_benchmark
```
By default, the benchmark will run all supported solvers it finds.
Running the benchmark
Once the benchmark is installed, you will be able to run the `qpsolvers_benchmark` command. Provide it with the script corresponding to the test set you want to run, followed by a benchmark command such as `run`. For instance, let's run the "dense" subset of the Maros-Meszaros test set:

```
qpsolvers_benchmark maros_meszaros/maros_meszaros_dense.py run
```
You can also restrict the run to a specific solver, problem, or set of solver settings:

```
qpsolvers_benchmark maros_meszaros/maros_meszaros_dense.py run --solver proxqp --settings default
```
Check out `qpsolvers_benchmark --help` for a list of available commands and arguments.
Plots
The command line ships a `plot` command to compare solver performances over a test set for a specific metric. For instance, run:

```
qpsolvers_benchmark maros_meszaros/maros_meszaros_dense.py plot runtime high_accuracy
```
This generates a plot of solver runtimes over the test set with the `high_accuracy` settings.
Contributing
Contributions to improving this benchmark are welcome. You can for instance propose new problems, or share the runtimes you obtain on your machine. Check out the contribution guidelines for details.
See also
References
- How not to lie with statistics: the correct way to summarize benchmark results: why geometric means should always be used to summarize normalized results.
- Optimality conditions and numerical tolerances in QP solvers: a note written while figuring out the `high_accuracy` settings of this benchmark.
Other benchmarks
- Benchmarks for optimization software by Hans Mittelmann, which includes reports on the Maros-Meszaros test set.
- jrl-qp/benchmarks: benchmark of QP solvers available in C++.
- osqp_benchmark: benchmark examples for the OSQP solver.
- proxqp_benchmark: benchmark examples for the ProxQP solver.