Lightweight optimization with local, global, population-based and sequential techniques across mixed search spaces
Gradient-Free-Optimizers is a Python library for gradient-free optimization of black-box functions. It provides a unified interface to 23 optimization algorithms, from simple hill climbing to Bayesian optimization, all operating on mixed search spaces that combine continuous ranges, discrete grids, categorical choices, and SciPy distribution-backed dimensions.
Designed for hyperparameter tuning, simulation optimization, feature selection, engineering design, and any scenario where gradients are unavailable or impractical. The library prioritizes simplicity: define your objective function, specify the search space, and run. All algorithms share one consistent API, so switching from hill climbing to Bayesian optimization is a one-line change. SciPy is optional; GFO requires only pandas, making it suitable as an optimization backend and for minimal environments, containers, and embedded systems.
Installation
```
pip install gradient-free-optimizers
```
Optional dependencies
```
pip install gradient-free-optimizers[progress]  # Progress bar with tqdm
pip install gradient-free-optimizers[sklearn]   # scikit-learn for surrogate models
pip install gradient-free-optimizers[full]      # All optional dependencies
```
Key Features
| Feature | Description |
|---|---|
| 23 Optimization Algorithms | Local, global, population-based, and sequential model-based optimizers. Switch algorithms with one line of code (see the sketch below). |
| Zero Configuration | Sensible defaults for all parameters. Start optimizing immediately without tuning the optimizer itself. |
| Memory System | Built-in caching prevents redundant evaluations. Critical for expensive objective functions like ML models. |
| Mixed Search Spaces | Combine continuous ranges, discrete grids, categorical choices, and SciPy distributions in a single search space. |
| Constraints Support | Define constraint functions to restrict the search space. Invalid regions are automatically avoided. |
| Minimal Dependencies | Only pandas is required. Optional integrations for progress bars (tqdm) and surrogate models (scikit-learn). |
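To illustrate the one-line switch, here is a minimal sketch that runs the same objective through three of the optimizers used in the examples below; only the constructor line differs between algorithms:

```python
# Minimal sketch: the constructor is the only line that changes per algorithm.
import numpy as np
from gradient_free_optimizers import (
    HillClimbingOptimizer,
    RandomSearchOptimizer,
    BayesianOptimizer,
)

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

for OptimizerClass in (HillClimbingOptimizer, RandomSearchOptimizer, BayesianOptimizer):
    opt = OptimizerClass(search_space)  # only this line differs between algorithms
    opt.search(objective, n_iter=30)
    print(OptimizerClass.__name__, opt.best_score)
```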
Quick Start
```python
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

search_space = {
    "x": (-5.0, 5.0),            # continuous range
    "y": np.arange(-5, 5, 0.1),  # discrete grid
}

opt = HillClimbingOptimizer(search_space)
opt.search(objective, n_iter=1000)

print(f"Best score: {opt.best_score}")
print(f"Best params: {opt.best_para}")
```
Core Concepts
```mermaid
flowchart LR
    O["Optimizer<br>23 algorithms"]
    S["Search Space<br>mixed dimensions"]
    F["Objective<br>f(params) → score"]
    D[("Search Data<br>history")]
    O -->|propose| S
    S -->|params| F
    F -->|score| O
    O -.-> D
    D -.->|warm start| O
```
- **Optimizer**: Implements the search strategy. Choose from 23 algorithms across four categories: local search, global search, population-based, and sequential model-based.
- **Search Space**: Defines valid parameter ranges and choices. Each key is a parameter name; each value is a tuple `(min, max)` for continuous, a NumPy array for discrete, a list for categorical, or a SciPy distribution.
- **Objective Function**: The function to maximize. It takes a dictionary of parameters and returns a score; negate the result to minimize.
- **Search Data**: Complete history of all evaluations, accessible via `opt.search_data` for analysis and for warm-starting future searches (see the sketch below).
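For example, a quick look at the recorded history (a minimal sketch, assuming a completed search `opt` as in Quick Start, and that the history is a pandas DataFrame with one column per parameter plus a score column):

```python
# Minimal sketch: inspect the evaluation history after a search.
# Assumes `opt` has finished searching, as in the Quick Start example.
df = opt.search_data                                    # one row per evaluation
print(df.sort_values("score", ascending=False).head())  # best evaluations first
```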
Examples
Hyperparameter Optimization
```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

X, y = load_wine(return_X_y=True)

def objective(params):
    model = GradientBoostingClassifier(
        n_estimators=params["n_estimators"],
        max_depth=params["max_depth"],
        learning_rate=params["learning_rate"],
    )
    return cross_val_score(model, X, y, cv=5).mean()

search_space = {
    "n_estimators": np.arange(50, 300, 10),
    "max_depth": np.arange(2, 10),
    "learning_rate": np.logspace(-3, 0, 20),
}

opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=50)
```
Bayesian Optimization
```python
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def ackley(params):
    x, y = params["x"], params["y"]
    return -(
        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
        + np.e + 20
    )

search_space = {
    "x": np.arange(-5, 5, 0.01),
    "y": np.arange(-5, 5, 0.01),
}

opt = BayesianOptimizer(search_space)
opt.search(ackley, n_iter=100)
```
Particle Swarm Optimization
```python
import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

opt = ParticleSwarmOptimizer(search_space, population=20)
opt.search(rastrigin, n_iter=500)
```
Simulated Annealing
```python
import numpy as np
from gradient_free_optimizers import SimulatedAnnealingOptimizer

def sphere(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = SimulatedAnnealingOptimizer(
    search_space,
    start_temp=1.2,
    annealing_rate=0.99,
)
opt.search(sphere, n_iter=1000)
```
Constrained Optimization
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective(params):
    return params["x"] + params["y"]

def constraint(params):
    # Only positions where x + y < 5 are valid
    return params["x"] + params["y"] < 5

search_space = {
    "x": np.arange(0, 10, 0.1),
    "y": np.arange(0, 10, 0.1),
}

opt = RandomSearchOptimizer(search_space, constraints=[constraint])
opt.search(objective, n_iter=1000)
```
Mixed Search Space
```python
import numpy as np
from scipy import stats
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    x = params["x"]
    n_layers = params["n_layers"]
    lr = params["learning_rate"]
    activation_scores = {"relu": 0.0, "tanh": 0.1, "gelu": 0.3}
    return -(x**2) - 0.1 * n_layers + activation_scores[params["activation"]] - abs(lr - 0.001)

search_space = {
    "x": (-5.0, 5.0),                         # continuous
    "n_layers": np.arange(1, 6),              # discrete
    "activation": ["relu", "tanh", "gelu"],   # categorical
    "learning_rate": stats.loguniform(1e-5, 1),  # distribution
}

opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=100)
```
Memory and Warm Starting
```python
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def expensive_function(params):
    # Simulating an expensive computation
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

# First search
opt1 = HillClimbingOptimizer(search_space)
opt1.search(expensive_function, n_iter=100, memory=True)

# Continue with warm start using previous search data
opt2 = HillClimbingOptimizer(search_space)
opt2.search(expensive_function, n_iter=100, memory_warm_start=opt1.search_data)
```
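Because the search data is a plain pandas DataFrame, the same warm start can also be applied across sessions. A minimal sketch continuing the example above (the CSV path is illustrative):

```python
import pandas as pd

# Persist the first run's history, then reload it in a later session.
opt1.search_data.to_csv("search_history.csv", index=False)  # illustrative path

previous = pd.read_csv("search_history.csv")
opt3 = HillClimbingOptimizer(search_space)
opt3.search(expensive_function, n_iter=100, memory_warm_start=previous)
```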
Ask/Tell Interface
```python
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

# Manual control over the optimization loop
opt = BayesianOptimizer(search_space)
opt.setup_search(objective, n_iter=100)

for _ in range(100):
    params = opt.ask()        # Get next parameters to evaluate
    score = objective(params)
    opt.tell(params, score)   # Report result back
```
Early Stopping
```python
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = BayesianOptimizer(search_space)
opt.search(
    objective,
    n_iter=1000,
    max_time=60,        # Stop after 60 seconds
    max_score=-0.01,    # Stop when score reaches -0.01
    early_stopping={    # Stop if no improvement for 50 iterations
        "n_iter_no_change": 50,
    },
)
```
Ecosystem
GFO is used as the optimization engine in other packages and integrates with the broader Python optimization ecosystem. For updates, follow on GitHub.
| Package | Description |
|---|---|
| Hyperactive | High-level hyperparameter optimization framework, uses GFO as its optimization backend |
| Surfaces | Test functions and benchmark surfaces for optimization algorithm evaluation |
Documentation
| Resource | Description |
|---|---|
| User Guide | Comprehensive tutorials and explanations |
| API Reference | Complete API documentation |
| Optimizers | Detailed description of all 23 algorithms |
| Examples | Code examples for various use cases |
Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.
- Bug reports: GitHub Issues
- Feature requests: GitHub Discussions
- Questions: GitHub Issues
Citation
If you use this software in your research, please cite:
```bibtex
@software{gradient_free_optimizers,
  author = {Simon Blanke},
  title  = {Gradient-Free-Optimizers: Simple and reliable optimization with local, global, population-based and sequential techniques in mixed search spaces},
  year   = {2020},
  url    = {https://github.com/SimonBlanke/Gradient-Free-Optimizers},
}
```
License
MIT License - Free for commercial and academic use.