
A hyperparameter optimization toolbox for convenient and fast prototyping

Project description




An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.



Overview

Hyperactive features a collection of optimization algorithms that can be used for a variety of optimization problems. The following overview shows examples of its capabilities:


Optimization Techniques:
  • Local Search
  • Global Search
  • Population Methods
  • Sequential Methods

Tested and Supported Packages:
  • Machine Learning
  • Deep Learning
  • Parallel Computing

Optimization Applications:
  • Feature Engineering
  • Machine Learning
  • Deep Learning
  • Data Collection
  • Visualization
  • Miscellaneous

The examples above are not necessarily based on realistic datasets or training procedures. Their purpose is to run quickly and to give the user ideas for interesting use cases.


Hyperactive is very easy to use:

Regular training:
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_wine


data = load_wine()
X, y = data.data, data.target


gbr = DecisionTreeRegressor(max_depth=10)
score = cross_val_score(gbr, X, y, cv=3).mean()

Hyperactive:

from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_wine
from hyperactive import Hyperactive

data = load_wine()
X, y = data.data, data.target

def model(opt):
    gbr = DecisionTreeRegressor(max_depth=opt["max_depth"])
    return cross_val_score(gbr, X, y, cv=3).mean()


search_space = {"max_depth": list(range(3, 25))}

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50)
hyper.run()

Installation

The most recent version of Hyperactive is available on PyPI:


pip install hyperactive

Example

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import load_wine
from hyperactive import Hyperactive

data = load_wine()
X, y = data.data, data.target

# define the model in a function
def model(opt):
    # pass the suggested parameter to the machine learning model
    gbr = GradientBoostingRegressor(
        n_estimators=opt["n_estimators"]
    )
    scores = cross_val_score(gbr, X, y, cv=3)

    # return a single numerical value, which gets maximized
    return scores.mean()


# search space determines the ranges of parameters you want the optimizer to search through
search_space = {"n_estimators": list(range(10, 200, 5))}

# start the optimization run
hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50)
hyper.run()

Hyperactive API reference


Basic Usage

Hyperactive(verbosity, distribution, n_processes)
  • verbosity = ["progress_bar", "print_results", "print_times"]

    • Possible parameter types: (list, False)
    • The verbosity list determines what part of the optimization information will be printed in the command line.
  • distribution = "multiprocessing"

    • Possible parameter types: ("multiprocessing", "joblib", "pathos")
    • Determines which distribution service you want to use. Each library uses a different package to pickle objects:
      • multiprocessing uses pickle
      • joblib uses dill
      • pathos uses cloudpickle
  • n_processes = "auto",

    • Possible parameter types: (str, int)
    • The maximum number of processes that are allowed to run simultaneously. If n_processes is an integer, only that many jobs will run simultaneously instead of all at once. So if n_processes=10 and n_jobs_total=35, the schedule would look like this: 10 - 10 - 10 - 5. This saves computational resources if there is a large number of jobs. If "auto", then n_processes is the sum of all n_jobs (from .add_search(...)).
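    • Example (a minimal sketch; the parameter values below are chosen purely for illustration):

      hyper = Hyperactive(
          verbosity=["progress_bar", "print_results"],
          distribution="joblib",
          n_processes=4,
      )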
.add_search(objective_function, search_space, n_iter, optimizer, n_jobs, initialize, max_score, early_stopping, random_state, memory, memory_warm_start, progress_board)
  • objective_function

    • Possible parameter types: (callable)
    • The objective function defines the optimization problem. The optimization algorithm will try to maximize the numerical value that is returned by the objective function by trying out different parameters from the search space.
  • search_space

    • Possible parameter types: (dict)
    • Defines the space where the optimization algorithm can search for the best parameters for the given objective function.
  • n_iter

    • Possible parameter types: (int)
    • The number of iterations that will be performed during the optimization run. Each iteration consists of the optimization step, which decides the next parameter set to evaluate, and the evaluation step, which runs the objective function with the chosen parameters and returns the score.
  • optimizer = "default"

    • Possible parameter types: ("default", initialized optimizer object)

    • Instance of optimization class that can be imported from Hyperactive. "default" corresponds to the random search optimizer. The following classes can be imported and used:

      • HillClimbingOptimizer
      • StochasticHillClimbingOptimizer
      • RepulsingHillClimbingOptimizer
      • RandomSearchOptimizer
      • RandomRestartHillClimbingOptimizer
      • RandomAnnealingOptimizer
      • SimulatedAnnealingOptimizer
      • ParallelTemperingOptimizer
      • ParticleSwarmOptimizer
      • EvolutionStrategyOptimizer
      • BayesianOptimizer
      • TreeStructuredParzenEstimators
      • ForestOptimizer
      • EnsembleOptimizer
    • Example:

      ...
      
      opt_hco = HillClimbingOptimizer(epsilon=0.08)
      hyper = Hyperactive()
      hyper.add_search(..., optimizer=opt_hco)
      hyper.run()
      
      ...
      
  • n_jobs = 1

    • Possible parameter types: (int)
    • Number of jobs to run in parallel. Those jobs are optimization runs that work independently of one another (no information sharing). If n_jobs == -1 the maximum available number of CPU cores is used.
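    • Example (a short sketch, reusing the model and search space from the earlier example):

      hyper = Hyperactive()
      hyper.add_search(model, search_space, n_iter=50, n_jobs=-1)  # use all available CPU cores
      hyper.run()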
  • initialize = {"grid": 4, "random": 2, "vertices": 4}

    • Possible parameter types: (dict)

    • The initialization dictionary automatically determines a number of parameters that will be evaluated in the first n iterations (n is the sum of the values in initialize). The initialize keywords are the following:

      • grid

        • Initializes positions in a grid-like pattern. Positions that cannot be put into a grid are randomly positioned.
      • vertices

        • Initializes positions at the vertices of the search space. Positions that cannot be put into a new vertex are randomly positioned.
      • random

        • Number of randomly initialized positions
      • warm_start

        • List of parameter dictionaries that marks additional start points for the optimization run.

      Example:

      ... 
      search_space = {
          "x1": list(range(10, 150, 5)),
          "x2": list(range(2, 12)),
      }
      
      ws1 = {"x1": 10, "x2": 2}
      ws2 = {"x1": 15, "x2": 10}
      
      hyper = Hyperactive()
      hyper.add_search(
          model,
          search_space,
          n_iter=30,
          initialize={"grid": 4, "random": 10, "vertices": 4, "warm_start": [ws1, ws2]},
      )
      hyper.run()
      
  • max_score = None

    • Possible parameter types: (float, None)
    • Maximum score until the optimization stops. The score will be checked after each completed iteration.
  • early_stopping = None

    • Possible parameter types: (dict, None)

    • Stops the optimization run early if it did not achieve any score improvement within the last iterations. The early_stopping-parameter enables you to set three parameters:

      • n_iter_no_change: Non-optional int-parameter. This marks the last n iterations to look for an improvement over the iterations that came before n. If the best score of the entire run is within those last n iterations the run will continue (until other stopping criteria are met), otherwise the run will stop.
      • tol_abs: Optional float-parameter. The score must have improved by at least this absolute tolerance in the last n iterations over the best score in the iterations before n. This is an absolute value, so 0.1 means an improvement of 0.8 -> 0.9 is acceptable but 0.81 -> 0.9 would stop the run.
      • tol_rel: Optional float-parameter. The score must have improved by at least this relative tolerance (in percent) in the last n iterations over the best score in the iterations before n. This is a relative value, so 10 means an improvement of 0.8 -> 0.88 is acceptable but 0.8 -> 0.87 would stop the run.
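    • Example (a minimal sketch, reusing the model and search space from the earlier examples):

      hyper = Hyperactive()
      hyper.add_search(
          model,
          search_space,
          n_iter=100,
          early_stopping={"n_iter_no_change": 15, "tol_abs": 0.001},
      )
      hyper.run()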
  • random_state = None

    • Possible parameter types: (int, None)
    • Random state for random processes in the random, numpy and scipy modules.

  • memory = True

    • Possible parameter types: (bool, "share")
    • Whether or not to use the "memory"-feature. The memory is a dictionary, which gets filled with parameters and scores during the optimization run. If the optimizer encounters a parameter that is already in the dictionary it just extracts the score instead of reevaluating the objective function (which can take a long time). If memory is set to "share" and there are multiple jobs for the same objective function then the memory dictionary is automatically shared between the different processes.
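    • Example (a sketch of sharing the memory between two parallel jobs of the same objective function):

      hyper = Hyperactive()
      hyper.add_search(model, search_space, n_iter=50, n_jobs=2, memory="share")
      hyper.run()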
  • memory_warm_start = None

    • Possible parameter types: (pandas dataframe, None)

    • Pandas dataframe that contains score and parameter information that will be automatically loaded into the memory-dictionary.

      example:

      score x1 x2 x...
      0.756 0.1 0.2 ...
      0.823 0.3 0.1 ...
      ... ... ... ...
      ... ... ... ...
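    • Example (a sketch of reusing the search data of a finished run as warm-start memory; it assumes the dataframe returned by .search_data(...) can be passed directly):

      # search data from a previous, finished optimization run
      search_data = hyper.search_data(model)

      hyper_new = Hyperactive()
      hyper_new.add_search(
          model,
          search_space,
          n_iter=50,
          memory_warm_start=search_data,
      )
      hyper_new.run()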
  • progress_board = None

    • Possible parameter types: (initialized ProgressBoard object, None)
    • Initialize the ProgressBoard class and pass the object to the progress_board-parameter.
.run(max_time)
  • max_time = None
    • Possible parameter types: (float, None)
    • Maximum number of seconds until the optimization stops. The time will be checked after each completed iteration.
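    • Example (a one-line sketch):

      hyper.run(max_time=60)  # stop the optimization after roughly 60 seconds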

Special Parameters

Objective Function

Each iteration consists of two steps:

  • The optimization step: decides what position in the search space (parameter set) to evaluate next
  • The evaluation step: calls the objective function, which returns the score for the given position in the search space

The objective function has one argument that is often called "para", "params" or "opt". This argument is your access to the parameter set that the optimizer has selected in the corresponding iteration.

def objective_function(opt):
    # get x1 and x2 from the argument "opt"
    x1 = opt["x1"]
    x2 = opt["x1"]

    # calculate the score with the parameter set
    score = -(x1 * x1 + x2 * x2)

    # return the score
    return score

The objective function always needs to return a score, which shows how "good" or "bad" the current parameter set is. But you can also return additional information via a dictionary:

def objective_function(opt):
    x1 = opt["x1"]
    x2 = opt["x1"]

    score = -(x1 * x1 + x2 * x2)

    other_info = {
      "x1 squared" : x1**2,
      "x2 squared" : x2**2,
    }

    return score, other_info

When you take a look at the results (a pandas dataframe with all iteration information) after the run has ended, you will see the additional information in it. The reason a dictionary is needed for this is that Hyperactive must know the names of the additional parameters. The score does not need that, because it is always called "score" in the results. You can run this example script if you want to give it a try.
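A short sketch of inspecting those extra columns after a run (it assumes a search space with "x1" and "x2" dimensions like the one shown in the next section):

hyper = Hyperactive()
hyper.add_search(objective_function, search_space, n_iter=30)
hyper.run()

search_data = hyper.search_data(objective_function)
# besides "score", "x1" and "x2", the dataframe now also contains the
# "x1 squared" and "x2 squared" columns returned by the objective function
print(search_data.columns)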

Search Space Dictionary

The search space defines what values the optimizer can select during the search. These selected values will be inside the objective function argument and can be accessed like in a dictionary. The values in each search space dimension should always be in a list. If you use np.arange you should put it in a list afterwards:

search_space = {
    "x1": list(np.arange(-100, 101, 1)),
    "x2": list(np.arange(-100, 101, 1)),
}

A special feature of Hyperactive is shown in the next example. You can put not only numeric values into the search space dimensions, but also strings and functions. This enables very high flexibility in how you can design your studies.

def func1():
  # do stuff
  return stuff
  

def func2():
  # do stuff
  return stuff


search_space = {
    "x": list(np.arange(-100, 101, 1)),
    "str": ["a string", "another string"],
    "function" : [func1, func2],
}

If you want to put other types of variables (like numpy arrays, pandas dataframes, lists, ...) into the search space you can do that via functions:

def array1():
  return np.array([0, 1, 2])
  

def array2():
  return np.array([3, 2, 1])


search_space = {
    "x": list(np.arange(-100, 101, 1)),
    "str": ["a string", "another string"],
    "numpy_array" : [array1, array2],
}

The functions contain the numpy arrays and return them. This way you can use them inside the objective function.
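For illustration, a hypothetical objective function could use such a dimension like this (it calls the selected search-space function to obtain the actual array):

def objective_function(opt):
    # calling the selected function returns the numpy array behind it
    arr = opt["numpy_array"]()

    # combine it with the numeric dimension to compute a score
    return float(arr.sum()) - opt["x"] ** 2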

Optimizer Classes

Each of the following optimizer classes can be initialized and passed to the "add_search"-method via the "optimizer"-argument. During this initialization the optimizer class accepts additional parameters. You can read more about each optimization strategy and its parameters in the Optimization Tutorial.

  • HillClimbingOptimizer
  • RepulsingHillClimbingOptimizer
  • SimulatedAnnealingOptimizer
  • RandomSearchOptimizer
  • RandomRestartHillClimbingOptimizer
  • RandomAnnealingOptimizer
  • ParallelTemperingOptimizer
  • ParticleSwarmOptimizer
  • EvolutionStrategyOptimizer
  • BayesianOptimizer
  • TreeStructuredParzenEstimators
  • ForestOptimizer
Progress Board

The progress board enables the visualization of search data during the optimization run. This will help you to understand what is happening during the optimization and give an overview of the explored parameter sets and scores.

  • filter_file
    • Possible parameter types: (None, True)
    • If the filter_file-parameter is True, Hyperactive will create a file in the current directory which allows filtering of the parameters or the score by setting an upper or lower bound.
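    • Example (a one-line sketch; it assumes filter_file is passed to the ProgressBoard constructor):

      progress_board = ProgressBoard(filter_file=True)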

The following script provides an example:

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import load_wine

from hyperactive import Hyperactive
# import the ProgressBoard
from hyperactive.dashboards import ProgressBoard

data = load_wine()
X, y = data.data, data.target


def model(opt):
    gbr = GradientBoostingRegressor(
        n_estimators=opt["n_estimators"],
        max_depth=opt["max_depth"],
        min_samples_split=opt["min_samples_split"],
    )
    scores = cross_val_score(gbr, X, y, cv=3)

    return scores.mean()


search_space = {
    "n_estimators": list(range(50, 150, 5)),
    "max_depth": list(range(2, 12)),
    "min_samples_split": list(range(2, 22)),
}

# create an instance of the ProgressBoard
progress_board = ProgressBoard()

hyper = Hyperactive()

# pass the instance of the ProgressBoard to .add_search(...)
hyper.add_search(
    model,
    search_space,
    n_iter=120,
    progress_board=progress_board,
)

# a terminal will open, which opens a dashboard in your browser
hyper.run()

Result Attributes

.best_para(objective_function)
  • objective_function

    • (callable)
  • returns: dictionary

  • Parameter dictionary of the best score of the given objective_function found in the previous optimization run.

    example:

    {
      'x1': 0.2, 
      'x2': 0.3,
    }
    
.best_score(objective_function)
  • objective_function
    • (callable)
  • returns: int or float
  • Numerical value of the best score of the given objective_function found in the previous optimization run.
.search_data(objective_function)
  • objective_function

    • (callable)
  • returns: Pandas dataframe

  • The dataframe contains score, parameter information, iteration times and evaluation times of the given objective_function found in the previous optimization run.

    example:

    score x1 x2 x... eval_times iter_times
    0.756 0.1 0.2 ... 0.953 1.123
    0.823 0.3 0.1 ... 0.948 1.101
    ... ... ... ... ... ...
    ... ... ... ... ... ...
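A short sketch of querying these result attributes after a finished run (it reuses the model and hyper objects from the earlier examples):

best_para = hyper.best_para(model)      # parameter dictionary of the best score
best_score = hyper.best_score(model)    # numerical value of the best score
search_data = hyper.search_data(model)  # pandas dataframe with all iterations

print(best_para, best_score)
print(search_data.head())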

Roadmap

v2.0.0 ✔
  • Change API
v2.1.0 ✔
  • Save memory of evaluations for later runs (long term memory)
  • Warm start sequence based optimizers with long term memory
  • Gaussian process regressors from various packages (gpy, sklearn, GPflow, ...) via wrapper
v2.2.0 ✔
  • Add basic dataset meta-features to long term memory
  • Add helper-functions for memory
    • connect two different model/dataset hashes
    • split two different model/dataset hashes
    • delete memory of model/dataset
    • return best known model for dataset
    • return search space for best model
    • return best parameter for best model
v2.3.0 ✔
  • Tree-structured Parzen Estimator
  • Decision Tree Optimizer
  • add "max_sample_size" and "skip_retrain" parameter for sbom to decrease optimization time
v3.0.0 ✔
  • New API
    • expand usage of objective-function
    • No passing of training data into Hyperactive
    • Removing "long term memory"-support (better to do in separate package)
    • More intuitive selection of optimization strategies and parameters
    • Separate optimization algorithms into other package
    • expand api so that optimizer parameter can be changed at runtime
    • add extensive testing procedure (similar to Gradient-Free-Optimizers)
v3.1.0 ✔
  • Decouple number of runs from active processes (Thanks to PartiallyTyped)
v3.2.0 ✔
  • Dashboard for visualization of search-data at runtime via streamlit (Progress-Board)
v3.3.0 ✔
  • Early stopping
  • Shared memory dictionary between processes with the same objective function
Upcoming Features
  • "long term memory" for search-data storage and usage
  • Data collector tool to use inside the objective function
  • Dashboard for visualization of stored search-data
  • Data collector tool to store data (from inside the objective function) into csv- or sql-files

Experimental algorithms

The following algorithms are of my own design and, to my knowledge, do not yet exist in the technical literature. If any of these algorithms already exists, please share it with me in an issue.

Random Annealing

A combination of simulated annealing and random search.


FAQ

Known Errors + Solutions

Read this before opening a bug-issue
  • Are you sure the bug is located in Hyperactive?

    The error might be located in the optimization-backend. Look at the error message in the command line. If one of the last messages looks like this:

    • File "/.../gradient_free_optimizers/...", line ...

    then you should post the bug report in the Gradient-Free-Optimizers repository.

    Otherwise you can post the bug report in Hyperactive.

  • Do you have the correct Hyperactive version?

    With every major version update (e.g. v2.2 -> v3.0) the API of Hyperactive changes. Check which version of Hyperactive you have. If your major version is older you have two options:

    Recommended: You could just update your Hyperactive version with:

    pip install hyperactive --upgrade
    

    This way you can use all the new documentation and examples from the current repository.

    Or you could continue using the old version and use an old repository branch as documentation. You can do that by selecting the corresponding branch (top right of the repository; the default is "master" or "main"). So if your major version is older (e.g. v2.1.0) you can select the 2.x.x branch to get the repository state for that version.

MemoryError: Unable to allocate ... for an array with shape (...)

This is expected behavior of the current implementation of sequential model-based optimizers. For all sequential model-based algorithms you have to keep an eye on the search space size, which you can compute like this:

search_space_size = 1
for value_ in search_space.values():
    search_space_size *= len(value_)
    
print("search_space_size", search_space_size)

Reduce the search space size to resolve this error.

TypeError: cannot pickle '_thread.RLock' object

This is because you have classes and/or non-top-level objects in the search space. Pickle (used by multiprocessing) cannot serialize them. Setting distribution to "joblib" or "pathos" may fix this problem:

hyper = Hyperactive(distribution="joblib")
Command line full of warnings

These are very often warnings from sklearn or numpy. They do not indicate bad performance from Hyperactive, and your code will most likely run fine. Unfortunately, such warnings are difficult to silence.

It should help to put this at the very top of your script:

def warn(*args, **kwargs):
    pass


import warnings

warnings.warn = warn



Citing Hyperactive

@Misc{hyperactive2021,
  author =   {{Simon Blanke}},
  title =    {{Hyperactive}: An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year = {since 2019}
}

License

LICENSE


