
QP solvers benchmark


Benchmark for quadratic programming (QP) solvers available in Python.

The goal of this benchmark is to help us compare and select QP solvers. Its methodology is open to discussion. New test sets are also welcome: feel free to add one that better represents the kind of problems you are working on.

Solvers

| Solver   | Keyword  | Algorithm            | Matrices       | License      |
|----------|----------|----------------------|----------------|--------------|
| CVXOPT   | cvxopt   | Interior point       | Dense          | GPL-3.0      |
| ECOS     | ecos     | Interior point       | Sparse         | GPL-3.0      |
| Gurobi   | gurobi   | Interior point       | Sparse         | Commercial   |
| HiGHS    | highs    | Active set           | Sparse         | MIT          |
| MOSEK    | mosek    | Interior point       | Sparse         | Commercial   |
| OSQP     | osqp     | Douglas–Rachford     | Sparse         | Apache-2.0   |
| ProxQP   | proxqp   | Augmented Lagrangian | Dense & Sparse | BSD-2-Clause |
| qpOASES  | qpoases  | Active set           | Dense          | LGPL-2.1    |
| qpSWIFT  | qpswift  | Interior point       | Sparse         | GPL-3.0      |
| quadprog | quadprog | Goldfarb–Idnani      | Dense          | GPL-2.0      |
| SCS      | scs      | Douglas–Rachford     | Sparse         | MIT          |

Results

The benchmark has different test sets that represent different use cases for QP solvers. Click on a test set to check out its report.

| Test set             | Keyword              | Description                                                               |
|----------------------|----------------------|---------------------------------------------------------------------------|
| GitHub free-for-all  | github_ffa           | Test set built by the community on GitHub; new problems are welcome!      |
| Maros-Meszaros       | maros_meszaros       | Standard set of problems designed to be difficult.                        |
| Maros-Meszaros dense | maros_meszaros_dense | Subset of the Maros-Meszaros test set restricted to smaller dense problems. |

Metrics

We evaluate QP solvers based on the following metrics:

  • Success rate: percentage of problems a solver is able to solve on a given test set.
  • Computation time: time a solver takes to solve a given problem.
  • Primal error: maximum error on equality and inequality constraints at the returned solution (see the sketch after this list).
  • Cost error: difference between the solution cost and the known optimal cost.
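
For instance, here is a minimal sketch of the primal error metric, assuming a QP with inequality constraints G x <= h and equality constraints A x = b (the exact residual computed by the benchmark may differ):

import numpy as np

def primal_error(x, G, h, A, b):
    # Worst violation of the inequality constraints G x <= h;
    # `initial=0.0` clamps the result to zero when all are satisfied.
    ineq_violation = np.max(G @ x - h, initial=0.0)
    # Worst violation of the equality constraints A x == b.
    eq_violation = np.max(np.abs(A @ x - b), initial=0.0)
    return max(ineq_violation, eq_violation)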

Shifted geometric mean

Problem-specific metrics (computation time, primal error, cost error) produce a different ranking of solvers for each problem. To aggregate those rankings into a single metric over the whole test set, we use the shifted geometric mean (SGM), a standard way to aggregate computation times in benchmarks for optimization software. This mean has the advantage of being skewed neither by large outliers (as the arithmetic mean would be) nor by small outliers (as the plain geometric mean would be). Check out the references below for further details.

Here are some intuitive interpretations:

  • A solver with a shifted-geometric-mean runtime of $Y$ is $Y$ times slower than the best solver over the test set.
  • A solver with a shifted-geometric-mean primal error of $P$ is $P$ times less accurate on equality and inequality constraints than the best solver over the test set.
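
To make this concrete, here is a minimal sketch of an SGM-based comparison, assuming a shift of 10 (a common choice in optimization benchmarks; the shift used by this benchmark may differ) and normalization by the best solver:

import math

def shifted_geometric_mean(values, shift=10.0):
    """Shifted geometric mean of a list of positive metric values.

    The shift keeps tiny values from dominating the mean, while the
    geometric mean itself tempers large outliers.
    """
    n = len(values)
    return math.exp(sum(math.log(v + shift) for v in values) / n) - shift

# Hypothetical runtimes (in seconds) of two solvers over a 4-problem test set:
runtimes = {"solver_a": [0.1, 0.2, 50.0, 0.3], "solver_b": [1.0, 1.2, 0.9, 1.1]}
sgms = {name: shifted_geometric_mean(t) for name, t in runtimes.items()}
best = min(sgms.values())
for name, sgm in sgms.items():
    # Normalized SGM: 1.0 for the best solver, Y for a solver "Y times slower".
    print(name, round(sgm / best, 2))

Note how solver_a's single 50-second outlier inflates its arithmetic mean far more than its SGM, which is exactly the robustness the SGM is chosen for.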

Limitations

Here are some known areas of improvement for this benchmark:

  • Dual feasibility: we don't check the dual multipliers computed by solvers, as the API for them is not yet unified.
  • Optimality conditions: for the same reason, we don't verify complementary slackness or the duality gap, which would provide a full optimality check.
  • Cold start only: we don't evaluate warm-start performance for now.

Check out the issue tracker for ongoing work and future improvements.

Installation

Install the benchmark and all required dependencies by running:

pip install qpsolvers_benchmark

By default, the benchmark runs with any supported solver installed on your system. To install all supported open-source solvers at once, install the package with the open_source_solvers extra.
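
For instance (the argument is quoted so that shells such as zsh do not expand the square brackets):

pip install "qpsolvers_benchmark[open_source_solvers]"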

Running the benchmark

Pick the keyword corresponding to the desired test set, for instance maros_meszaros, and pass it to the run command:

python benchmark.py run maros_meszaros

You can also restrict a run to a specific solver, problem, or set of solver settings:

python benchmark.py run maros_meszaros_dense --solver proxqp --settings default

Check out python benchmark.py --help for all available commands and arguments.

Contributing

Contributions to improve this benchmark are welcome. You can, for instance, propose new problems or share the runtimes you obtain on your machine. Check out the contribution guidelines for details.

See also

References

Other benchmarks
