Gradient-Free-Optimizers
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.
Introduction
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only needs to return an arbitrary score that gets maximized.
This makes gradient-free optimization methods well suited for the hyperparameter optimization of machine learning models. The optimizers in this package only require the score of a point to decide which point to evaluate next.
Main features • Installation • Examples • API-info • Citation • License
Main features
- Easy to use:
  - Simple API design
  - Receive prepared information about ongoing and finished optimization runs
- High performance:
  - Modern optimization techniques
  - Lightweight backend
  - Save time with "short term memory"
- High reliability:
  - Extensive testing
  - Performance test for each optimizer
Installation
The most recent version of Gradient-Free-Optimizers is available on PyPI:
pip install gradient-free-optimizers
Examples
Convex function
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer
def parabola_function(para):
    loss = para["x"] * para["x"]
    return -loss
search_space = {"x": np.arange(-10, 10, 0.1)}
opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
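After search() finishes you usually want the best parameters and the best score that were found. A minimal sketch of reading them, assuming the optimizer exposes best_para and best_score attributes after a run (verify against the documentation of your installed version):

# best_para / best_score are assumed attributes set by search()
print(opt.best_para)   # best parameter dict found, e.g. close to {"x": 0.0}
print(opt.best_score)  # objective-function score of that parameter dict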
Non-convex function
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer
def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    score = a1 + a2 + 20

    return -score
search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}
opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)
Machine learning example
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine
from gradient_free_optimizers import HillClimbingOptimizer
data = load_wine()
X, y = data.data, data.target
def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(gbc, X, y, cv=3)

    return scores.mean()
search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}
opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)
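A finished run can also seed a new one through the memory_warm_start argument listed below. The sketch assumes the optimizer collects its evaluated positions and scores in a search_data attribute (a pandas DataFrame); both the attribute name and the exact DataFrame format are assumptions to check against the installed version:

# Hedged sketch: reuse evaluations from the previous run as "short term memory"
opt_new = HillClimbingOptimizer(search_space)
opt_new.search(model, n_iter=25, memory_warm_start=opt.search_data)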
Basic API-information
Optimization classes (a usage sketch follows this list):
- HillClimbingOptimizer
- StochasticHillClimbingOptimizer
- TabuOptimizer
- RandomSearchOptimizer
- RandomRestartHillClimbingOptimizer
- RandomAnnealingOptimizer
- SimulatedAnnealingOptimizer
- StochasticTunnelingOptimizer
- ParallelTemperingOptimizer
- ParticleSwarmOptimizer
- EvolutionStrategyOptimizer
- BayesianOptimizer
- TreeStructuredParzenEstimators
- DecisionTreeOptimizer
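All of these classes share the same basic interface shown in the examples above, so switching the optimization technique is usually a one-line change. A minimal sketch, reusing the parabola example and omitting any optimizer-specific keyword arguments:

from gradient_free_optimizers import SimulatedAnnealingOptimizer

# same constructor and search call as RandomSearchOptimizer above
opt = SimulatedAnnealingOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)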
Search method arguments (a combined usage sketch follows this list):
- objective_function
- n_iter
- initialize
- warm_start
- max_time
- max_score
- memory
- memory_warm_start
- verbosity
- random_state
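The sketch below combines several of these arguments in one call. The value formats (for example whether max_time is interpreted in seconds, or which initialize and verbosity values are accepted) are assumptions to verify in the documentation of the installed version:

opt = RandomSearchOptimizer(search_space)
opt.search(
    parabola_function,
    n_iter=100000,
    max_time=5,         # stop after roughly 5 seconds (assumed unit)
    max_score=-0.0001,  # stop early once this score is reached
    memory=True,        # cache scores of already evaluated positions
    random_state=42,    # reproducible runs
)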
GFOs-design
This package was created as the optimization backend of the Hyperactive package. Separating Gradient-Free-Optimizers from Hyperactive has several advantages:
- Other developers can easily use GFOs as an optimization backend if desired
- Separate and more thorough testing
- Better isolation from the complex information flow in Hyperactive. GFOs only uses positions and scores in an N-dimensional search space and returns only the new position after each iteration.
- A smaller and cleaner code base, if you want to explore my implementation of these optimization techniques
Citing Gradient-Free-Optimizers
@Misc{gfo2020,
  author = {{Simon Blanke}},
  title = {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year = {since 2020}
}
License