Gradient-Free-Optimizers

Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.


Introduction

Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques that require nothing from the objective function except an arbitrary score to maximize.

This makes gradient-free methods well suited for the hyperparameter-optimization of machine learning models: the optimizers in this package require only the score of each evaluated point to decide which point to evaluate next (see the examples below).

Main features

  • Easy to use:

    • Simple API-design
    • Receive prepared information about ongoing and finished optimization runs
  • High performance:

    • Modern optimization techniques
    • Lightweight backend
    • Save time with the "short-term memory" (see the sketch after this list)
  • High reliability:

    • Extensive testing
    • Performance tests for each optimizer
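The "short-term memory" stores the score of every position that has already been evaluated, so revisited positions are looked up instead of recomputed. A minimal sketch of where this pays off, assuming the memory argument of search() (listed in the API section below) accepts a boolean; the sleep only simulates an expensive objective function:

import time

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def expensive_function(para):
    time.sleep(0.01)  # stand-in for a costly evaluation, e.g. training a model
    return -para["x"] * para["x"]


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
# with memory enabled, positions that were already evaluated are read from a
# lookup table instead of triggering another call to the objective function
opt.search(expensive_function, n_iter=1000, memory=True)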

Installation

The most recent version of Gradient-Free-Optimizers is available on PyPI:

pip install gradient-free-optimizers

Examples

Convex function

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def parabola_function(para):
    # the optimizer maximizes the score, so return the negative of the loss
    loss = para["x"] * para["x"]
    return -loss


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
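After search() returns, the optimizer object holds the results of the run. The attribute names in this sketch (best_para, best_score) follow later Gradient-Free-Optimizers releases and are an assumption for the version documented here:

print(opt.best_para)   # parameter dict with the best score found, e.g. {"x": 0.0}
print(opt.best_score)  # the corresponding (maximal) score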

Non-convex function

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    # the + np.e term completes the standard Ackley formula,
    # whose global minimum of 0 lies at (0, 0)
    score = a1 + a2 + np.e + 20
    # negate, because the optimizer maximizes the returned score
    return -score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}

opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)

Machine learning example

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine

from gradient_free_optimizers import HillClimbingOptimizer


data = load_wine()
X, y = data.data, data.target


def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    # the mean 3-fold cross-validation accuracy is the score to maximize
    scores = cross_val_score(gbc, X, y, cv=3)

    return scores.mean()


search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}

opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)

Basic API-information

Optimization classes (all share the same constructor and search() interface; see the sketch after this list):

  • HillClimbingOptimizer
  • StochasticHillClimbingOptimizer
  • TabuOptimizer
  • RandomSearchOptimizer
  • RandomRestartHillClimbingOptimizer
  • RandomAnnealingOptimizer
  • SimulatedAnnealingOptimizer
  • StochasticTunnelingOptimizer
  • ParallelTemperingOptimizer
  • ParticleSwarmOptimizer
  • EvolutionStrategyOptimizer
  • BayesianOptimizer
  • TreeStructuredParzenEstimators
  • DecisionTreeOptimizer
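Because every class shares the constructor and search() interface from the examples above, switching the optimization technique is a one-line change. A sketch using the parabola objective from the first example:

import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer


def parabola_function(para):
    return -para["x"] * para["x"]


search_space = {"x": np.arange(-10, 10, 0.1)}

# same objective and search space as before; only the optimizer class changes
opt = ParticleSwarmOptimizer(search_space)
opt.search(parabola_function, n_iter=1000)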

Search method arguments (see the combined example after this list):

  • objective_function
  • n_iter
  • initialize
  • warm_start
  • max_time
  • max_score
  • memory
  • memory_warm_start
  • verbosity
  • random_state
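A sketch that combines several of these arguments in one call. The argument names come from the list above; the chosen values, and the assumption that max_time is measured in seconds, are illustrative:

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def parabola_function(para):
    return -para["x"] * para["x"]


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(
    parabola_function,   # objective_function
    n_iter=10000,        # iteration budget
    max_time=5,          # time budget; stops at whichever limit is hit first
    max_score=-0.001,    # stop early once the score reaches this threshold
    memory=True,         # re-use scores of already-evaluated positions
    random_state=42,     # make the run reproducible
)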

GFOs-design

This package was created as the optimization backend of the Hyperactive package. Separating Gradient-Free-Optimizers from Hyperactive offers several advantages:

  • Other developers can easily use GFOs as an optimization backend if desired
  • Separate and more thorough testing
  • Better isolation from the complex information flow in Hyperactive. GFOs only uses positions and scores in an N-dimensional search-space and returns only the new position after each iteration (see the sketch after this list).
  • A smaller and cleaner code base, if you want to explore my implementation of these optimization techniques.
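That information flow fits in a few lines of conceptual code. This is not the package's internal API, only a sketch of the position/score loop that every optimizer in GFOs implements (with plain random proposals standing in for the actual techniques):

import numpy as np


def gradient_free_loop(objective, search_space, n_iter):
    # the optimizer sees only positions in the search space and their scores
    def random_position():
        return {dim: np.random.choice(values) for dim, values in search_space.items()}

    best_position = random_position()
    best_score = objective(best_position)

    for _ in range(n_iter - 1):
        # propose exactly one new position per iteration, based only on
        # previously seen positions and their scores
        position = random_position()
        score = objective(position)
        if score > best_score:
            best_position, best_score = position, score

    return best_position, best_score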

Citing Gradient-Free-Optimizers

@Misc{gfo2020,
  author =   {{Simon Blanke}},
  title =    {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year = {since 2020}
}

License

This project is licensed under the MIT License.

Download files

Source Distributions

No source distribution files are available for this release.

Built Distribution

gradient_free_optimizers-0.2.1-py3-none-any.whl (45.3 kB)

Uploaded for Python 3
