Parallel Bayesian optimization over complex search spaces
Mango: A parallel black-box optimization library
Mango is a Python library for parallel optimization over complex search spaces. Currently, Mango is intended to find the optimal hyperparameters for machine learning algorithms.
Check out the quick 12-second demo of Mango approximating a complex decision boundary of an SVM.
Mango internally uses a parallel implementation of a multi-armed bandit Bayesian optimizer based on Gaussian processes. Some of the salient features of Mango are:
- Ability to easily define complex search spaces compatible with the scikit-learn random search and grid search functions.
- Internally uses a state-of-the-art optimizer that can sample a batch of values in parallel for evaluation.
- The objective function can be arbitrarily complex and can be scheduled on local, cluster, or cloud infrastructure.
- Designed for ease of use, with the ability to plug in new distributions for the search space and new optimizer algorithms.
Index
- Installation
- Getting started
- Hyperparameter tuning example
- Search space definitions
- Scheduler
- Optional configurations
1. Installation
Using pip:
pip install arm-mango
From source:
$ git clone https://github.com/ARM-software/mango.git
$ cd mango
$ pip3 install .
2. Getting Started
Mango is straightforward to use. The following example minimizes a quadratic function whose input is an integer between -10 and 10.
from mango import scheduler, Tuner

# Search space
param_space = dict(x=range(-10, 10))

# Quadratic objective function
@scheduler.serial
def objective(x):
    return x * x

# Initialize and run Tuner
tuner = Tuner(param_space, objective)
results = tuner.minimize()

print(f'Optimal value of parameters: {results["best_params"]} and objective: {results["best_objective"]}')
# => Optimal value of parameters: {'x': 0} and objective: 0
3. Hyperparameter Tuning Example
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

from mango import Tuner, scheduler

# search space for KNN classifier's hyperparameters
# n_neighbors can vary between 1 and 50, with different choices of algorithm
param_space = dict(n_neighbors=range(1, 50),
                   algorithm=['auto', 'ball_tree', 'kd_tree', 'brute'])

@scheduler.serial
def objective(**params):
    X, y = datasets.load_breast_cancer(return_X_y=True)
    clf = KNeighborsClassifier(**params)
    score = cross_val_score(clf, X, y, scoring='accuracy').mean()
    return score

tuner = Tuner(param_space, objective)
results = tuner.maximize()

print('best parameters:', results['best_params'])
print('best accuracy:', results['best_objective'])
# => best parameters: {'algorithm': 'auto', 'n_neighbors': 11}
# => best accuracy: 0.931486122714193
Note that the best parameters may differ between runs, but the accuracy should be ~0.9315. More examples are available in the examples directory (Facebook's Prophet, XGBoost, SVM).
4. Search Space
The search space defines the range and distribution of the input parameters to the objective function. Mango's search space is compatible with the scikit-learn parameter space definitions used in RandomizedSearchCV or GridSearchCV, as the sketch below illustrates. The search space is defined as a dictionary whose keys are the parameter names (strings) and whose values are a list of discrete choices, a range of integers, or a distribution.
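Because the formats match, the same dictionary can also drive scikit-learn's own search utilities. Here is a minimal sketch of this interoperability; the SVC model and the particular ranges are illustrative assumptions, not from the Mango documentation:

from scipy.stats import uniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# A Mango-style space: a list for categories, a scipy.stats
# distribution for the continuous parameter
param_space = dict(C=uniform(0.1, 10),  # 0.1 to 10.1
                   kernel=['rbf', 'sigmoid'])

# The same dictionary is accepted by scikit-learn's randomized search
search = RandomizedSearchCV(SVC(), param_space, n_iter=10)

Examples of some common search spaces are: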
Integer
The following space defines x as an integer parameter with values in range(-10, 11) (11 is not included):
param_space = dict(x=range(-10, 11)) #=> -10, -9, ..., 10
# you can use steps for sparse ranges
param_space = dict(x=range(0, 101, 10)) #=> 0, 10, 20, ..., 100
Integers are uniformly sampled from the given range and are assumed to be ordered and treated as continuous variables.
Categorical
Discrete categories can be defined as lists. For example:
# string
param_space = dict(color=['red', 'blue', 'green'])
# float
param_space = dict(v=[0.2, 0.1, 0.3])
# mixed
param_space = dict(max_features=['auto', 0.2, 0.3])
Lists are uniformly sampled and are assumed to be unordered. They are one-hot encoded internally.
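For intuition, each category corresponds to one position of a one-hot vector. The snippet below is only an illustrative sketch of that idea, not Mango's actual implementation:

categories = ['red', 'blue', 'green']

def one_hot(value, categories):
    # map a category to its one-hot vector, e.g. 'blue' -> [0, 1, 0]
    return [int(c == value) for c in categories]

print(one_hot('blue', categories))  # => [0, 1, 0]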
Distributions
All the distributions supported by scipy.stats are supported. In general, a distribution must provide an rvs method for sampling.
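Since only an rvs method is required, a custom sampler can also be plugged in. The class below is a hypothetical example (not part of Mango's API) that samples powers of two:

import numpy as np

class PowersOfTwo:
    """Any object exposing an rvs(size=...) method can act as a distribution."""
    def rvs(self, size=None, random_state=None):
        # random_state is accepted for scipy.stats compatibility; unused in this sketch
        exponents = np.random.randint(3, 11, size=size)  # exponents 3..10
        return 2 ** exponents

param_space = dict(batch_size=PowersOfTwo())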
Uniform distribution
Using uniform(loc, scale) one obtains the uniform distribution on [loc, loc + scale].
from scipy.stats import uniform
# uniformly distributed between -1 and 1
param_space = dict(a=uniform(-1, 2))
Log uniform distribution
We have added loguniform distribution by extending the scipy.stats.distributions
constructs.
Using loguniform(loc, scale)
one obtains the loguniform distribution on [10loc, 10loc + scale]
.
from mango.domain.distribution import loguniform
# log uniformly distributed between 10^-3 and 10^-1
param_space = dict(learning_rate=loguniform(-3, 2))
Hyperparameter search space examples
Example hyperparameter search space for Random Forest Classifier:
param_space = dict(
    max_features=['sqrt', 'log2', .1, .3, .5, .7, .9],
    n_estimators=range(10, 1000, 50),  # 10 to 1000 in steps of 50
    bootstrap=[True, False],
    max_depth=range(1, 20),
    min_samples_leaf=range(1, 10)
)
Example search space for XGBoost Classifier:
from scipy.stats import uniform
from mango.domain.distribution import loguniform
param_space = {
    'n_estimators': range(10, 2001, 100),      # 10 to 2000 in steps of 100
    'max_depth': range(1, 15),                 # 1 to 14
    'reg_alpha': loguniform(-3, 6),            # 10^-3 to 10^3
    'booster': ['gbtree', 'gblinear'],
    'colsample_bylevel': uniform(0.05, 0.95),  # 0.05 to 1.0
    'colsample_bytree': uniform(0.05, 0.95),   # 0.05 to 1.0
    'learning_rate': loguniform(-3, 3),        # 0.001 to 1
    'reg_lambda': loguniform(-3, 6),           # 10^-3 to 10^3
    'min_child_weight': loguniform(0, 2),      # 1 to 100
    'subsample': uniform(0.1, 0.89)            # 0.1 to 0.99
}
Example search space for SVM:
from scipy.stats import uniform
from mango.domain.distribution import loguniform
param_dict = {
    'kernel': ['rbf', 'sigmoid'],
    'gamma': uniform(0.1, 4),  # 0.1 to 4.1
    'C': loguniform(-7, 8)     # 10^-7 to 10
}
5. Scheduler
Mango is designed to take advantage of distributed computing. The objective function can be scheduled to run locally or on a cluster with parallel evaluations, using any distributed computing framework (like Celery or Kubernetes). The scheduler module comes with some pre-defined schedulers.
Serial scheduler
The serial scheduler runs locally with one objective function evaluation at a time:
from mango import scheduler

@scheduler.serial
def objective(x):
    return x * x
Parallel scheduler
The parallel scheduler runs locally and uses joblib to evaluate the objective functions in parallel:
from mango import scheduler

@scheduler.parallel(n_jobs=2)
def objective(x):
    return x * x
n_jobs specifies the number of parallel evaluations. n_jobs = -1 uses all the available CPU cores on the machine. See simple_parallel for a full working example.
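Putting the pieces together, here is a minimal end-to-end sketch reusing the quadratic objective from the Getting Started section; with n_jobs=2, two candidate values are evaluated concurrently in each iteration:

from mango import Tuner, scheduler

param_space = dict(x=range(-10, 10))

# same quadratic objective as before, now evaluated two at a time
@scheduler.parallel(n_jobs=2)
def objective(x):
    return x * x

tuner = Tuner(param_space, objective)
results = tuner.minimize()
print(results['best_params'], results['best_objective'])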
Custom distributed scheduler
Users can define their own distribution strategies using the custom scheduler. To do so, define an objective function that takes a list of parameters and returns the list of results:
from mango import scheduler

@scheduler.custom(n_jobs=4)
def objective(params_batch):
    """ Template for custom distributed objective function
    Args:
        params_batch (list): Batch of parameter dictionaries to be evaluated in parallel
    Returns:
        list: Values of objective function at given parameters
    """
    # evaluate the objective on a distributed framework
    ...
    return results
For example, the following snippet uses Celery:
import celery
from mango import Tuner, scheduler

# connect to celery backend
app = celery.Celery('simple_celery', backend='rpc://')

# remote celery task
@app.task
def remote_objective(x):
    return x * x

@scheduler.custom(n_jobs=4)
def objective(params_batch):
    jobs = celery.group(remote_objective.s(params['x']) for params in params_batch)()
    return jobs.get()

param_space = dict(x=range(-10, 10))
tuner = Tuner(param_space, objective)
results = tuner.minimize()
A working example to tune hyperparameters of KNN using Celery is here.
6. Optional configurations
The default configuration parameters used by Mango are as follows:
{'param_dict': ...,
'userObjective': ...,
'domain_size': 5000,
'initial_random': 1,
'num_iteration': 20,
'batch_size': 1}
The configuration parameters are:
- domain_size: The number of samples drawn from the search space in each iteration for the Gaussian process to evaluate. Generally, a larger size is preferred when optimizing higher-dimensional functions. More on this will be added with details about the internals of Bayesian optimization.
- initial_random: The number of random samples tried.
- num_iteration: The total number of iterations used by Mango to find the optimal value.
- batch_size: The size of args_list passed to the objective function for parallel evaluation. For larger batch sizes, Mango internally uses intelligent sampling to decide the optimal samples to evaluate.
- early_stopping: A callback to specify custom stopping criteria. The callback has the following signature:

def early_stopping(results):
    '''
    results is the same dict as returned by the tuner.
    Keys available: params_tried, objective_values,
        best_objective, best_params
    '''
    ...
    return True/False
For usage see early stopping examples notebook.
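For instance, a hypothetical criterion that stops the search once the best objective found so far is close enough to zero (the threshold below is an illustrative assumption):

# stop early once the best objective drops below a threshold
def early_stopping(results):
    return results['best_objective'] <= 1e-3

conf_dict = dict(early_stopping=early_stopping)
tuner = Tuner(param_space, objective, conf_dict)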
The default configuration parameters can be modified as shown below. Only the parameters whose values need to be adjusted can be passed in the dictionary.
conf_dict = dict(num_iteration=40, domain_size=10000, initial_random=3)
tuner = Tuner(param_dict, objective, conf_dict)
Participate
Paper
More technical details are available in the Mango paper (ICASSP 2020 Conference). Please cite this as:
@inproceedings{sandha2020mango,
  title={Mango: A Python Library for Parallel Hyperparameter Tuning},
  author={Sandha, Sandeep Singh and Aggarwal, Mohit and Fedorov, Igor and Srivastava, Mani},
  booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={3987--3991},
  year={2020},
  organization={IEEE}
}
Slides
Slides explaining Mango's abstractions and design choices are available: Mango Slides.
Contribute
Please take a look at open issues if you are looking for areas to contribute to.
Questions
For any questions, feel free to reach out by creating an issue here.