Gradient-Free-Optimizers
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.
Introduction
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only has to return a numerical score that gets maximized. This makes gradient-free methods capable of solving various optimization problems, including:
- Optimizing arbitrary mathematical functions.
- Fitting multiple Gaussian distributions to data.
- Hyperparameter-optimization of machine-learning methods.
Main features • Installation • Examples • API-info • Citation • License
Main features
- Easy to use:
  - Simple API design
  - Receive prepared information about ongoing and finished optimization runs
- High performance:
  - Modern optimization techniques
  - Lightweight backend
  - Save time with the "short term memory"
- High reliability:
  - Extensive testing
  - Performance test for each optimizer
Optimization strategies:
(In the original README, each strategy is shown with an animation on a convex and a non-convex test function; only the descriptions are reproduced here.)

- Hill Climbing: Evaluates the score of n neighbours in an epsilon environment and moves to the best one.
- Tabu Search: Hill climbing iteration + increases epsilon by a factor if no better neighbour was found.
- Simulated Annealing: Hill climbing iteration + accepts moving to worse positions with decreasing probability over time (transition probability).
- Random Search: Moves to random positions in each iteration.
- Random Restart Hill Climbing: Hill climbing + moves to a random position after n iterations.
- Random Annealing: Hill climbing + large epsilon that decreases over time.
- Parallel Tempering: Population of n simulated annealers, which occasionally swap transition probabilities.
- Particle Swarm Optimization: Population of n particles attracting each other and moving towards the best particle.
- Evolution Strategy: Population of n hill climbers occasionally mixing positional information.
- Bayesian Optimization: Gaussian process fitting to explored positions and predicting promising new positions.
- Tree of Parzen Estimators: Kernel density estimators fitting to good and bad explored positions and predicting promising new positions.
- Decision Tree Optimizer: Ensemble of decision trees fitting to explored positions and predicting promising new positions.
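Each strategy above is available as its own optimizer class (see the API section below), so switching strategies only means swapping the class. A minimal sketch; the objective function and iteration count are just illustrative:

```python
import numpy as np
from gradient_free_optimizers import SimulatedAnnealingOptimizer


def parabola_function(para):
    # score that gets maximized (here: the negative of a simple loss)
    return -(para["x"] * para["x"])


search_space = {"x": np.arange(-10, 10, 0.1)}

# any of the optimizer classes listed in the API section can be used here
opt = SimulatedAnnealingOptimizer(search_space)
opt.search(parabola_function, n_iter=1000)
```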
Installation
The most recent version of Gradient-Free-Optimizers is available on PyPI:
pip install gradient-free-optimizers
Examples
Convex function
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def parabola_function(para):
    loss = para["x"] * para["x"]
    return -loss


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
```
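After search() returns, the run can be inspected through the attributes described in the API section below. Continuing the example above:

```python
# results of the finished run (see "Results from attributes" below)
print(opt.best_score)  # best score found during the run
print(opt.best_para)   # parameter dict of the best score, e.g. close to {"x": 0.0}
print(opt.results)     # dataframe with one row per iteration
```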
Non-convex function
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    score = a1 + a2 + 20

    return -score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}

opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)
```
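For longer runs like this one, the search can also be bounded by wall-clock time. A sketch using the max_time parameter documented in the API section below, with the same setup as above:

```python
# stop after at most 5 seconds (max_time is checked after each completed iteration)
opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000, max_time=5)
```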
Machine learning example
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine
from gradient_free_optimizers import HillClimbingOptimizer

data = load_wine()
X, y = data.data, data.target


def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(gbc, X, y, cv=3)

    return scores.mean()


search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}

opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)
```
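Once the search is finished, the best hyperparameters can be read from opt.best_para and used to fit a final model. This follow-up is plain scikit-learn usage, not part of Gradient-Free-Optimizers, and continues the example above:

```python
# refit a model on the full dataset with the best hyperparameters found
best = opt.best_para
final_gbc = GradientBoostingClassifier(
    n_estimators=best["n_estimators"],
    max_depth=best["max_depth"],
    min_samples_split=best["min_samples_split"],
    min_samples_leaf=best["min_samples_leaf"],
)
final_gbc.fit(X, y)
```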
Basic API-information
Optimization classes
HillClimbingOptimizer
- search_space
- epsilon=0.03
- distribution="normal"
- n_neighbours=3
- rand_rest_p=0.03
RepulsingHillClimbingOptimizer
- search_space
- epsilon=0.03
- distribution="normal"
- n_neighbours=3
- rand_rest_p=0.03
- repulsion_factor=5
SimulatedAnnealingOptimizer
- search_space
- epsilon=0.03
- distribution="normal"
- n_neighbours=3
- rand_rest_p=0.03
- p_accept=0.1
- norm_factor="adaptive"
- annealing_rate=0.975
- start_temp=1
RandomSearchOptimizer
- search_space
RandomRestartHillClimbingOptimizer
- search_space
- epsilon=0.03
- distribution="normal"
- n_neighbours=3
- rand_rest_p=0.03
- n_iter_restart=10
RandomAnnealingOptimizer
- search_space
- epsilon=0.03
- distribution="normal"
- n_neighbours=3
- rand_rest_p=0.03
- annealing_rate=0.975
- start_temp=1
ParallelTemperingOptimizer
- search_space
- n_iter_swap=10
- rand_rest_p=0.03
ParticleSwarmOptimizer
- search_space
- inertia=0.5
- cognitive_weight=0.5
- social_weight=0.5
- temp_weight=0.2
- rand_rest_p=0.03
EvolutionStrategyOptimizer
- search_space
- mutation_rate=0.7
- crossover_rate=0.3
- rand_rest_p=0.03
BayesianOptimizer
- search_space
- gpr=gaussian_process["gp_nonlinear"]
- xi=0.03
- warm_start_smbo=None
- rand_rest_p=0.03
TreeStructuredParzenEstimators
- search_space
- gamma_tpe=0.5
- warm_start_smbo=None
- rand_rest_p=0.03
DecisionTreeOptimizer
- search_space
- tree_regressor="extra_tree"
- xi=0.01
- warm_start_smbo=None
- rand_rest_p=0.03
EnsembleOptimizer
- search_space
- estimators=[GradientBoostingRegressor(n_estimators=5), GaussianProcessRegressor()]
- xi=0.01
- warm_start_smbo=None
- rand_rest_p=0.03
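The parameters listed above are ordinary keyword arguments of the optimizer classes. A hedged sketch with non-default values (the concrete numbers are illustrative):

```python
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer, ParticleSwarmOptimizer

search_space = {"x": np.arange(-10, 10, 0.1)}

# a hill climber that looks at more neighbours in a larger epsilon environment
hill_climber = HillClimbingOptimizer(search_space, epsilon=0.1, n_neighbours=5)

# a particle swarm with a stronger pull towards the best particle
swarm = ParticleSwarmOptimizer(search_space, inertia=0.4, social_weight=0.7)
```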
Input parameters
- search_space
  - Pass the search_space to the optimizer class to define the space where the optimization algorithm can search for the best parameters for the given objective function.
  - example:

    ```python
    {
        "x1": numpy.arange(-10, 31, 0.3),
        "x2": numpy.arange(-10, 31, 0.3),
    }
    ```
Search method parameters
- objective_function
  - (callable)
  - The objective function defines the optimization problem. The optimization algorithm will try to maximize the numerical value that is returned by the objective function by trying out different parameters from the search space.
  - example:

    ```python
    def objective_function(para):
        score = -(para["x1"] * para["x1"] + para["x2"] * para["x2"])
        return score
    ```
- n_iter
  - (int)
  - The number of iterations that will be performed during the optimization run. Each iteration consists of the optimization step, which decides the next parameter that will be evaluated, and the evaluation step, which runs the objective function with the chosen parameter and returns the score.
- initialize={"grid": 8, "random": 4, "vertices": 8}
  - (dict, None)
  - The initialization dictionary automatically determines a number of parameters that will be evaluated in the first n iterations (n is the sum of the values in initialize).
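A minimal sketch of the initialize argument, passed to search() as listed under the search-method parameters, assuming an optimizer opt and an objective_function like the ones in the examples above:

```python
# evaluate 4 grid points, 2 random points and 4 search-space vertices
# during the first 10 iterations, before the optimization strategy takes over
opt.search(
    objective_function,
    n_iter=100,
    initialize={"grid": 4, "random": 2, "vertices": 4},
)
```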
- warm_start=None
  - (list, None)
  - List of parameter dictionaries that marks additional start points for the optimization run.
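A sketch of warm_start with a single additional start point; the keys must match the search-space dimensions and the values here are purely illustrative:

```python
# start the search from a known promising parameter combination
opt.search(
    objective_function,
    n_iter=100,
    warm_start=[{"x1": 0.5, "x2": 0.5}],
)
```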
- max_time=None
  - (float, None)
  - Maximum number of seconds until the optimization stops. The time will be checked after each completed iteration.
- max_score=None
  - (float, None)
  - Maximum score until the optimization stops. The score will be checked after each completed iteration.
- memory=True
  - (bool)
  - Whether or not to use the "memory"-feature. The memory is a dictionary, which gets filled with parameters and scores during the optimization run. If the optimizer encounters a parameter that is already in the dictionary, it just extracts the score instead of re-evaluating the objective function (which can take a long time).
- memory_warm_start=None
  - (pandas dataframe, None)
  - Pandas dataframe that contains score and parameter information that will be automatically loaded into the memory-dictionary.
  - example:

    score | x1 | x2 | x...
    ---|---|---|---
    0.756 | 0.1 | 0.2 | ...
    0.823 | 0.3 | 0.1 | ...
    ... | ... | ... | ...
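A sketch of building such a dataframe by hand in the format shown above; the column names must match the search-space dimensions and the values are illustrative:

```python
import pandas as pd

# previously collected scores and parameters, one row per evaluation
search_data = pd.DataFrame({
    "score": [0.756, 0.823],
    "x1": [0.1, 0.3],
    "x2": [0.2, 0.1],
})

opt.search(objective_function, n_iter=100, memory_warm_start=search_data)
```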
- verbosity={"progress_bar": True, "print_results": True, "print_times": True}
  - (dict, None)
  - The verbosity dictionary determines what part of the optimization information will be printed in the command line.
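For example, to run completely silently (a sketch, using the keys shown above and an optimizer opt and objective_function as in the examples):

```python
opt.search(
    objective_function,
    n_iter=100,
    verbosity={"progress_bar": False, "print_results": False, "print_times": False},
)
```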
- random_state=None
  - (int, None)
  - Random state for random processes in the random, numpy and scipy modules.
Results from attributes
- .results
  - Dataframe that contains information about the score, the value of each parameter and the evaluation and iteration time. Each row shows the information of one optimization iteration.
  - example:

    score | x1 | x2 | x... | eval_time | iter_time
    ---|---|---|---|---|---
    0.756 | 0.1 | 0.2 | ... | 0.000016 | 0.000034
    0.823 | 0.3 | 0.1 | ... | 0.000017 | 0.000032
    ... | ... | ... | ... | ... | ...
- .best_score
  - Numerical value of the best score that was found during the optimization run.
- .best_para
  - Parameter dictionary of the best score that was found during the optimization run.
  - example:

    ```python
    {
        'x1': 0.2,
        'x2': 0.3,
    }
    ```
Gradient Free Optimizers <=> Hyperactive
This package was created as the optimization backend of the Hyperactive package. Separating Gradient-Free-Optimizers from Hyperactive provides several advantages:
- Other developers can easily use GFO as an optimization backend if desired
- Separate and more thorough testing
- Better isolation from the complex information flow in Hyperactive. GFO only uses positions and scores in an N-dimensional search space and returns only the new position after each iteration.
- A smaller and cleaner code base, in case you want to explore the implementation of these optimization techniques.
Citation
@Misc{gfo2020,
author = {{Simon Blanke}},
title = {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
howpublished = {\url{https://github.com/SimonBlanke}},
year = {since 2020}
}
License
Gradient-Free-Optimizers is licensed under the MIT License.