
An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.


Hyperactive


A unified interface for optimization algorithms and experiments in Python.



Documentation Homepage · User Guide · API Reference · Examples
On this page Key Features · Examples · Core Concepts · Citation


Bayesian Optimization on Ackley Function

Hyperactive provides 31 optimization algorithms across 3 backends (GFO, Optuna, scikit-learn), accessible through a unified experiment-based interface. The library separates optimization problems from algorithms, enabling you to swap optimizers without changing your experiment code.

Designed for hyperparameter tuning, model selection, and black-box optimization. Native integrations with scikit-learn, sktime, skpro, and PyTorch allow tuning ML models with minimal setup. Define your objective, specify a search space, and run.



Installation

pip install hyperactive


Optional dependencies
pip install hyperactive[sklearn-integration]  # scikit-learn integration
pip install hyperactive[sktime-integration]   # sktime/skpro integration
pip install hyperactive[all_extras]           # Everything including Optuna

Key Features

31 Optimization Algorithms
Local, global, population-based, and model-based methods across 3 backends (GFO, Optuna, sklearn).
Experiment Abstraction
Clean separation between what to optimize (experiments) and how to optimize (algorithms).
Flexible Search Spaces
Discrete, continuous, and mixed parameter types. Define spaces with NumPy arrays or lists (see the sketch after this list).
ML Framework Integrations
Native support for scikit-learn, sktime, skpro, and PyTorch with minimal code changes.
Multiple Backends
GFO algorithms, Optuna samplers, and sklearn search methods through one unified API.
Stable & Tested
Actively developed and maintained since 2019, with comprehensive test coverage.
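As a minimal sketch of such a mixed space (the parameter names below are purely illustrative, not part of the API), a search space is simply a dictionary mapping parameter names to candidate values:

import numpy as np

# A hypothetical mixed search space: a continuous grid, discrete
# integers, and categorical strings in one dictionary.
search_space = {
    "learning_rate": np.logspace(-4, -1, num=30),  # continuous grid
    "n_layers": [1, 2, 3, 4],                      # discrete integers
    "activation": ["relu", "tanh", "sigmoid"],     # categorical
}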

Quick Start

import numpy as np
from hyperactive.opt.gfo import HillClimbing

# Define objective function (maximize)
def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)  # Negative paraboloid, optimum at (0, 0)

# Define search space
search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Run optimization
optimizer = HillClimbing(
    search_space=search_space,
    n_iter=100,
    experiment=objective,
)
best_params = optimizer.solve()

print(f"Best params: {best_params}")

Output (hill climbing starts from a random point, so exact results may vary between runs):

Best params: {'x': 0.0, 'y': 0.0}

Core Concepts

Hyperactive separates what you optimize from how you optimize. Define your experiment (objective function) and search space once, then swap optimizers freely without changing your code. The unified interface abstracts away backend differences, letting you focus on your optimization problem.

flowchart TB
    subgraph USER["Your Code"]
        direction LR
        F["def objective(params):<br/>    return score"]
        SP["search_space = {<br/>    'x': np.arange(...),<br/>    'y': [1, 2, 3]<br/>}"]
    end

    subgraph HYPER["Hyperactive"]
        direction TB
        OPT["Optimizer"]

        subgraph BACKENDS["Backends"]
            GFO["GFO<br/>21 algorithms"]
            OPTUNA["Optuna<br/>8 algorithms"]
            SKL["sklearn<br/>2 algorithms"]
            MORE["...<br/>more to come"]
        end

        OPT --> GFO
        OPT --> OPTUNA
        OPT --> SKL
        OPT --> MORE
    end

    subgraph OUT["Output"]
        BEST["best_params"]
    end

    F --> OPT
    SP --> OPT
    HYPER --> OUT

Optimizer: Implements the search strategy (Hill Climbing, Bayesian, Particle Swarm, etc.).

Search Space: Defines valid parameter combinations as NumPy arrays or lists.

Experiment: Your objective function or a built-in experiment (SklearnCvExperiment, etc.).

Best Parameters: The optimizer returns the parameters that maximize the objective.
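Because the experiment and search space are defined independently of the optimizer, swapping strategies is a one-line change. A minimal sketch reusing the Quick Start objective (RandomSearch, like HillClimbing, comes from the GFO backend):

import numpy as np
from hyperactive.opt.gfo import HillClimbing, RandomSearch

def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

search_space = {"x": np.arange(-5, 5, 0.1), "y": np.arange(-5, 5, 0.1)}

# Same experiment and search space; only the search strategy changes.
for Optimizer in (HillClimbing, RandomSearch):
    opt = Optimizer(search_space=search_space, n_iter=100, experiment=objective)
    print(Optimizer.__name__, opt.solve())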


Examples

Scikit-learn Hyperparameter Tuning
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

from hyperactive.integrations.sklearn import OptCV
from hyperactive.opt.gfo import HillClimbing

# Load data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Define search space and optimizer
search_space = {"kernel": ["linear", "rbf"], "C": [1, 10, 100]}
optimizer = HillClimbing(search_space=search_space, n_iter=20)

# Create tuned estimator
tuned_svc = OptCV(SVC(), optimizer)
tuned_svc.fit(X_train, y_train)

print(f"Best params: {tuned_svc.best_params_}")
print(f"Test accuracy: {tuned_svc.score(X_test, y_test):.3f}")
Bayesian Optimization
import numpy as np
from hyperactive.opt.gfo import BayesianOptimizer

def ackley(params):
    x, y = params["x"], params["y"]
    return -(
        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
        + np.e + 20
    )

search_space = {
    "x": np.arange(-5, 5, 0.01),
    "y": np.arange(-5, 5, 0.01),
}

optimizer = BayesianOptimizer(
    search_space=search_space,
    n_iter=50,
    experiment=ackley,
)
best_params = optimizer.solve()
Particle Swarm Optimization
import numpy as np
from hyperactive.opt.gfo import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

optimizer = ParticleSwarmOptimizer(
    search_space=search_space,
    n_iter=500,
    experiment=rastrigin,
    population_size=20,
)
best_params = optimizer.solve()
Experiment Abstraction with SklearnCvExperiment
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

from hyperactive.experiment.integrations import SklearnCvExperiment
from hyperactive.opt.gfo import HillClimbing

X, y = load_iris(return_X_y=True)

# Create reusable experiment
sklearn_exp = SklearnCvExperiment(
    estimator=SVC(),
    scoring=accuracy_score,
    cv=KFold(n_splits=3, shuffle=True),
    X=X,
    y=y,
)

search_space = {
    "C": np.logspace(-2, 2, num=10),
    "kernel": ["linear", "rbf"],
}

optimizer = HillClimbing(
    search_space=search_space,
    n_iter=100,
    experiment=sklearn_exp,
)
best_params = optimizer.solve()
Optuna Backend (TPE)
import numpy as np
from hyperactive.opt.optuna import TPEOptimizer

def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

optimizer = TPEOptimizer(
    search_space=search_space,
    n_iter=100,
    experiment=objective,
)
best_params = optimizer.solve()
Time Series Forecasting with sktime
from sktime.forecasting.naive import NaiveForecaster
from sktime.datasets import load_airline

from hyperactive.integrations.sktime import ForecastingOptCV
from hyperactive.opt.gfo import RandomSearch

y = load_airline()

search_space = {
    "strategy": ["last", "mean", "drift"],
    "sp": [1, 12],
}

optimizer = RandomSearch(search_space=search_space, n_iter=10)
tuned_forecaster = ForecastingOptCV(NaiveForecaster(), optimizer)
tuned_forecaster.fit(y)

print(f"Best params: {tuned_forecaster.best_params_}")
PyTorch Neural Network Tuning
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from hyperactive.opt.gfo import BayesianOptimizer

# Example data
X_train = torch.randn(1000, 10)
y_train = torch.randint(0, 2, (1000,))

def train_model(params):
    learning_rate = params["learning_rate"]
    batch_size = params["batch_size"]
    hidden_size = params["hidden_size"]

    model = nn.Sequential(
        nn.Linear(10, hidden_size),
        nn.ReLU(),
        nn.Linear(hidden_size, 2),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size)

    model.train()
    for epoch in range(10):
        for X_batch, y_batch in loader:
            optimizer.zero_grad()
            loss = criterion(model(X_batch), y_batch)
            loss.backward()
            optimizer.step()

    # Evaluate accuracy on the training data (use a held-out validation set in practice)
    model.eval()
    with torch.no_grad():
        predictions = model(X_train).argmax(dim=1)
        accuracy = (predictions == y_train).float().mean().item()

    return accuracy

search_space = {
    "learning_rate": np.logspace(-5, -1, 20),
    "batch_size": [16, 32, 64, 128],
    "hidden_size": [64, 128, 256, 512],
}

optimizer = BayesianOptimizer(
    search_space=search_space,
    n_iter=30,
    experiment=train_model,
)
best_params = optimizer.solve()

Ecosystem

This library is part of a suite of optimization and machine learning tools. For updates, follow these packages on GitHub.

Hyperactive: Hyperparameter optimization framework with experiment abstraction and ML integrations
Gradient-Free-Optimizers: Core optimization algorithms for black-box function optimization
Surfaces: Test functions and benchmark surfaces for optimization algorithm evaluation

Documentation

User Guide: Comprehensive tutorials and explanations
API Reference: Complete API documentation
Examples: Jupyter notebooks with use cases
FAQ: Common questions and troubleshooting

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.


Citation

If you use this software in your research, please cite:

@software{hyperactive2019,
  author = {Simon Blanke},
  title = {Hyperactive: A hyperparameter optimization and meta-learning toolbox},
  year = {2019},
  url = {https://github.com/SimonBlanke/Hyperactive},
}

License

MIT License - Free for commercial and academic use.
