Bayesian optimization for constrained or unconstrained problems with continuous, discrete, or mixed variables

Project description

pyBOWIE (python Bayesian Optimization WIth supErlevel set filtration)

A Python library for Constrained Mixed-Integer Bayesian Optimization with Superlevel Set Filtration

Superlevel set filtration

The library parallelizes the search by exploiting the problem structure: it partitions the design space with level-set partitioning, selects the covariance function automatically, provides a framework for constraint functions when the optimization problem requires them, and supports continuous and/or discrete variables.

The algorithm can be summarized as follows: it begins by estimating the size of the initial training set from the evaluation time of the cost function and generating samples with a space-filling strategy, at which the cost and constraint functions are evaluated. The samples are mapped to the feature space, the covariance function is selected automatically, and the parameters of the optimization algorithm are initialized. The algorithm then searches for new points iteratively: it trains a surrogate model for the cost function and one for each constraint function, evaluates the acquisition function on a grid over the design space, and performs the superlevel set filtration to identify regions of interest. These regions act as constraints on the acquisition function and focus the search for new points, at which the objective and constraint functions are evaluated. The parameters of the optimization algorithm are updated, and the cycle repeats until the maximum number of iterations is reached.
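The loop above can be sketched on a toy problem. Everything below is an illustrative assumption (a 1-D objective, a hand-rolled GP with an RBF kernel, UCB acquisition, and the 90th percentile of the acquisition values as the superlevel threshold) rather than pyBOWIE's actual implementation, which builds on GPy models and richer machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # toy objective (maximize); optimum at x = 2
    return -(x - 2.0) ** 2

def gp_posterior(X, y, Xs, ls=0.5, sn=1e-6):
    """GP posterior mean/std with a unit-variance RBF kernel."""
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)
    K = k(X, X) + sn * np.eye(len(X))       # jitter for numerical stability
    Ks = k(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

X = rng.uniform(0.0, 4.0, 5)          # initial sample (random; pyBOWIE defaults to LHS)
y = f(X)
grid = np.linspace(0.0, 4.0, 200)     # grid layout over the design space

for it in range(10):
    mu, sd = gp_posterior(X, y, grid)
    acq = mu + 2.0 * sd               # UCB acquisition
    # Superlevel set filtration: restrict the search to the region where the
    # acquisition exceeds a threshold (here, its 90th percentile).
    level = np.quantile(acq, 0.9)
    mask = acq >= level
    x_new = grid[mask][np.argmax(acq[mask])]
    X = np.append(X, x_new)
    y = np.append(y, f(x_new))

x_best = X[np.argmax(y)]
```

In the real library the superlevel regions also serve as constraints for a continuous acquisition optimizer; here the grid argmax within the filtered set stands in for that step.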

For tutorial notebooks, check out the examples folder.

Getting started

Installing with pip

pip install pyBOWIE

Notebook tutorials

Read the basic notebook for an introduction to the basic concepts of the library.

For more details, please refer to the advanced tour notebook.

Dependencies:

  • numpy
  • scipy
  • sympy
  • pandas
  • scikit-learn
  • GPy
  • prince
  • properscoring
  • multiprocess
  • matplotlib

Library parameters

The library parameters are:

Required arguments

  • function :
    • Python function with one output
    • Cost or objective function
  • domain :
    • List
    • The kind and bounds of each of the variables. The valid kinds are {'continuous', 'integer', 'categorical'}
  • sense :
    • {'maximize', 'minimize'}
    • Specifies whether to minimize or maximize the objective function

Default arguments

  • surrogate :
    • {'GP', 'SGP'}, default 'GP'
    • Specifies the surrogate model of the Bayesian optimization algorithm
  • acquisition_function :
    • {'UCB', 'PI', 'EI'}, default 'UCB'
    • Specifies the acquisition function of the Bayesian optimization algorithm
  • xi_0 :
    • float, default 2
    • Initial value of the acquisition function hyperparameter
  • xi_f :
    • float, default 0.1
    • Final value of the acquisition function hyperparameter
  • xi_decay :
    • {'yes', 'no'}, default 'yes'
    • Specifies whether the hyperparameter of the acquisition function decays
  • kernel :
    • GPy.kern, default None
    • Kernel function for the surrogate model. If specified, it must be a kernel from the GPy package
  • kern_discovery :
    • {'yes', 'no'}, default 'yes'
    • Specifies whether the kernel function is selected automatically. If 'no', a kernel from GPy must be specified
  • kern_discovery_evals :
    • int, default 2
    • Specifies the number of evaluations used to select the covariance function; only applies if kern_discovery is 'yes'
  • x_0 :
    • numpy array, default None
    • Specifies the initial points at which to evaluate the surrogate model
  • f_0 :
    • numpy array, default None
    • Specifies the values of the objective function evaluated at x_0; only applies if x_0 is specified
  • design :
    • {'LHS', 'Sobol', 'Halton', 'random'}, default 'LHS'
    • Specifies the initial design strategy used to generate x_0 if it is not specified
  • n_p_design :
    • int, default None
    • Specifies the number of points of the initial design, i.e. the size of the initial points matrix. If None, the value is computed from the timing of a random evaluation of the objective function
  • n_jobs :
    • int, default None
    • The number of jobs to run in parallel for evaluations of the objective function. None means one evaluation at a time, and -1 means using all processors
  • n_restarts :
    • int, default 5
    • The number of times the surrogate model optimizer is restarted
  • max_iter :
    • int, default 100
    • Strict limit on the maximum number of iterations of the algorithm
  • constraints :
    • tuple, default None
    • Specifies the constraint functions that define the feasible region
  • constraints_method :
    • {'PoF', 'GPC'}, default 'PoF'
    • Specifies the method used to model the constraint functions
  • reducer :
    • A dimension-reduction module, default None
    • A user-supplied dimension-reduction technique. If None, the technique is chosen by rules that depend on the characteristics of the design variables
  • inverter_transform :
    • {'yes', 'no'}, default 'no'
    • Specifies whether to use an analytic inverse of the dimension-reduction technique. Only valid for PCA
  • alpha :
    • float, default 0.95
    • Confidence level for the chi-square and T-score statistics. Must be between 0 and 1
  • c1_param :
    • int, default 50
    • Value used to calculate the number of points for the matrix of initial estimates
  • c2_param :
    • int, default 10
    • Value used to control the number of grid sampling points
  • verbose :
    • {0, 1}, default 0
    • Enables verbose output
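The textbook forms of the three acquisition functions named above, the probability-of-feasibility idea behind the 'PoF' constraints method, and a linear decay schedule for the hyperparameter xi can be written as follows. These are standard definitions for maximization; pyBOWIE's exact parametrization and decay schedule are assumptions here, not quotes from its source:

```python
import numpy as np
from scipy.stats import norm

def ucb(mu, sd, xi):
    """Upper confidence bound: optimism proportional to predictive std."""
    return mu + xi * sd

def pi(mu, sd, f_best, xi):
    """Probability of improving on the incumbent f_best by at least xi."""
    return norm.cdf((mu - f_best - xi) / sd)

def ei(mu, sd, f_best, xi):
    """Expected improvement over the incumbent f_best."""
    z = (mu - f_best - xi) / sd
    return (mu - f_best - xi) * norm.cdf(z) + sd * norm.pdf(z)

def pof(mu_c, sd_c):
    """Probability of feasibility for a constraint modeled as g(x) >= 0."""
    return norm.cdf(mu_c / sd_c)

def xi_schedule(it, max_iter, xi_0=2.0, xi_f=0.1):
    """Linear decay from xi_0 to xi_f, mirroring xi_0 / xi_f / xi_decay."""
    return xi_0 + (xi_f - xi_0) * it / (max_iter - 1)
```

With the PoF method, the acquisition value is typically multiplied by the product of the constraints' feasibility probabilities before the new point is selected.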

Returns

optimize() returns a dict with:

  • x_best : The values of the variables that yield the best value of the objective function
  • f_best : The best value of the objective function
  • x_init : The initial sampling points at which the objective function is evaluated
  • f_init : The value of the objective function at the initial sampling points
  • x_iters : The new points suggested during the progress of the algorithm
  • f_iters : The value of the objective function at the new suggested points
  • x_l : Lower bounds of the variables
  • x_u : Upper bounds of the variables
  • dims : Dimensions of the optimization problem
  • iters : Maximum number of iterations of the algorithm
  • initial_design : Initial design strategy for the initial sampling points
  • initial_points : Number of points in the initial sampling matrix
  • acquisition_function : Acquisition function employed during the algorithm
  • xi : Last value of the acquisition function hyperparameter
  • regret : Metric to assess the performance of the algorithm
  • constraint_method : Strategy used to handle the constraint functions
  • models_constraints : Surrogate models for the constraint functions
  • model : Surrogate model for the objective function
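A small sketch of consuming the returned dict, for example to trace convergence. Only the key names come from the list above; the numeric values are made up for illustration:

```python
import numpy as np

# Stand-in for the dict returned by optimize(); keys per the Returns list,
# values invented (a maximization run with 3 initial and 4 iteration points).
result = {
    "f_init": np.array([3.1, 2.7, 4.0]),
    "f_iters": np.array([4.2, 4.5, 4.4, 4.8]),
}

# Running best over the whole evaluation history (maximization).
history = np.concatenate([result["f_init"], result["f_iters"]])
running_best = np.maximum.accumulate(history)
f_best = running_best[-1]
```

Plotting `running_best` against the iteration index is a common way to check whether the algorithm has plateaued before `max_iter`.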

Download files

Download the file for your platform.

Source Distribution

pyBOWIE-1.0.2.tar.gz (24.9 kB)

Uploaded Source

File details

Details for the file pyBOWIE-1.0.2.tar.gz.

File metadata

  • Download URL: pyBOWIE-1.0.2.tar.gz
  • Upload date:
  • Size: 24.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.13

File hashes

Hashes for pyBOWIE-1.0.2.tar.gz:

  • SHA256: 4c16c0f4aebcf65d0c560423f5e306412ba53509a2046525f73c970f73b74888
  • MD5: 04b2acdfdcc98a218667041555eeeb52
  • BLAKE2b-256: 34616190935896cf9d812be3837e0334e3bd9934b19c388d18ededaad59db4d9
