
A toolbox for efficient global optimization


egobox


Rust toolbox for Efficient Global Optimization algorithms inspired by SMT.

egobox consists of the following sub-packages.

Name | Version   | Documentation | Description
doe  | crates.io | docs          | sampling methods; contains LHS, FullFactorial, Random methods
gp   | crates.io | docs          | Gaussian process regression; contains Kriging and PLS dimension reduction
moe  | crates.io | docs          | mixture of experts using GP models
ego  | crates.io | docs          | efficient global optimization with basic constraints and mixed-integer handling

Usage

Depending on the sub-packages you want to use, you have to add the following declarations to your Cargo.toml:

[dependencies]
egobox-doe = { version = "0.4.0" }
egobox-gp  = { version = "0.4.0" }
egobox-moe = { version = "0.4.0" }
egobox-ego = { version = "0.4.0" }
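
As a quick illustration, a minimal Latin Hypercube sampling sketch with egobox-doe might look like the following (it also needs ndarray as a dependency; see the doe documentation and the samplings example for the actual API):

use egobox_doe::{Lhs, SamplingMethod};
use ndarray::arr2;

fn main() {
    // Design space bounds: one [min, max] row per dimension.
    let xlimits = arr2(&[[0., 1.], [-5., 5.]]);
    // Draw 10 points with Latin Hypercube Sampling.
    let samples = Lhs::new(&xlimits).sample(10);
    println!("{}", samples);
}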

Features

serializable-gp

The serializable-gp feature enables the serialization of GP models using the serde crate.

persistent-moe

The persistent-moe feature enables save() and load() methods for MoE models to/from a JSON file using the serde crate.
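
For example, assuming these features are declared on the egobox-gp and egobox-moe crates respectively (check each crate's documentation for the exact feature names and locations), they might be enabled like this:

[dependencies]
# Assumed feature locations; adjust according to the crates' Cargo.toml
egobox-gp  = { version = "0.4.0", features = ["serializable-gp"] }
egobox-moe = { version = "0.4.0", features = ["persistent-moe"] }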

linfa BLAS/LAPACK backend feature

By default, a pure-Rust implementation is used for linear algebra routines. However, you can choose an external BLAS/LAPACK backend library instead by enabling the blas feature together with a feature corresponding to your BLAS backend.

This relies on the linfa BLAS/LAPACK backend features.

An end-user project using gp, moe and ego can select a BLAS/LAPACK backend depending on its environment; it can be either:

  • Openblas: linfa/openblas-system or linfa/openblas-static
  • Netlib: linfa/netlib-system or linfa/netlib-static
  • Intel MKL: linfa/intel-mkl-system or linfa/intel-mkl-static

where

  • *-system features: try to find the corresponding backend in your installation.
  • *-static features: try to download and compile the corresponding backend.

More information can be found in the linfa features documentation.

For instance, to use gp with the Intel MKL BLAS/LAPACK backend, you have to specify the linfa backend feature:

[dependencies]
egobox-gp = { version = "0.4.0", features = ["blas", "linfa/intel-mkl-static"] }

Note: only end-user projects should specify a backend provider in Cargo.toml (not libraries). When developing a library, the backend is specified on the command line, as in the examples below.

Examples

Examples (in each sub-package's examples/ folder) are run as follows:

$ cd doe && cargo run --example samplings --release
$ cd gp && cargo run --example kriging --release
$ cd moe && cargo run --example clustering --release
$ cd ego && cargo run --example ackley --release

Using the Intel MKL BLAS/LAPACK backend, you can also run, for instance:

$ cd gp && cargo run --example kriging --release --features linfa/intel-mkl-static

Thanks to the PyO3 project, which makes Rust well suited for building Python extensions, the EGO algorithm written in Rust (aka Egor) is bound to Python. You can install the Python package using:

$ pip install egobox

See the tutorial notebook for usage of the optimizer.
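
A rough sketch of what driving the optimizer from Python might look like follows; the Egor constructor arguments and the minimize() call below are assumptions about the API, so refer to the tutorial notebook for the actual usage:

import numpy as np
import egobox

# Objective to minimize: f(x) = sum((x - 3)^2), evaluated on a batch of
# points of shape (n_points, n_dims) and returning shape (n_points, 1).
def f(x):
    return np.sum((x - 3.0) ** 2, axis=1, keepdims=True)

# Hypothetical usage: constructor and minimize() signatures are assumptions.
egor = egobox.Egor(f, np.array([[0.0, 6.0]]))  # objective + design space bounds
res = egor.minimize(n_iter=20)                 # run 20 EGO iterations
print(res)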

Why egobox?

I started this library as a way to learn Rust and to see whether it could be used to implement algorithms like those in the SMT toolbox[^1]. As the first components (doe, gp) emerged, it appeared I could translate Python code almost line by line into Rust (well... after a good deal of fighting with the borrow checker!), thanks to the Rust ndarray library ecosystem.

This library also relies on the linfa project, which aims to be the "scikit-learn-like ML library for Rust". Along the way I was able to contribute to linfa by porting the Gaussian mixture model (linfa-clustering/gmm) and the partial least squares family of methods (linfa-pls), confirming that translating Python algorithms into Rust can be fairly straightforward.

While I did not benchmark my Rust code precisely against the SMT Python implementation, from my debugging sessions I noticed I did not get that great a speed-up. Indeed, algorithms like doe and gp rely extensively on linear algebra, and the famous Python libraries numpy/scipy are strongly optimized, as they call compiled C or Fortran code.

My guess at this point is that the interest lies in Rust algorithms built upon these initial building blocks, hence I went on to implement the mixture-of-experts algorithm (moe) and, on top of it, the surrogate-based EGO optimization algorithm (ego), which gives the library its name[^2][^3]. Aside from performance, such a library can also take advantage of Rust's other selling points, namely reliability and productivity.

Cite

If you happen to find this Rust library useful for your research, you can cite this project as follows:

@Misc{egobox,
  author = {Rémi Lafage},
  title = {Egobox: efficient global optimization toolbox in Rust},
  year = {2020--},
  url = "https://github.com/relf/egobox"
}

[^1]: M. A. Bouhlel and J. T. Hwang and N. Bartoli and R. Lafage and J. Morlier and J. R. R. A. Martins. A Python surrogate modeling framework with derivatives. Advances in Engineering Software, 2019.

[^2]: Bartoli, Nathalie, et al. "Adaptive modeling strategy for constrained global optimization with application to aerodynamic wing design." Aerospace Science and Technology 90 (2019): 85-102.

[^3]: Dubreuil, Sylvain, et al. "Towards an efficient global multidisciplinary design optimization algorithm." Structural and Multidisciplinary Optimization 62.4 (2020): 1739-1765.
