Machine Learning Experiment Hyperparameter Optimization

Lightweight Hyperparameter Optimization 🚂

The mle-hyperopt package provides a simple and intuitive API for hyperparameter optimization of your Machine Learning Experiment (MLE) pipeline. It supports real, integer & categorical search variables and single- or multi-objective optimization.

Core features include the following:

  • API Simplicity: strategy.ask(), strategy.tell() interface & space definition.
  • Strategy Diversity: Grid, random and coordinate search, SMBO, a wrapper around FAIR's nevergrad, Successive Halving, Hyperband, Population-Based Training.
  • Search Space Refinement based on the top performing configs via strategy.refine(top_k=10).
  • Export of configurations to execute via e.g. python train.py --config_fname config.yaml.
  • Storage & reload search logs via strategy.save(<log_fname>), strategy.load(<log_fname>).

For a quickstart, check out the notebook blog 📖.

The API 🎮

from mle_hyperopt import RandomSearch

# Instantiate random search class
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 32,
                                                "end": 128,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]})

# Simple ask - eval - tell API
configs = strategy.ask(5)
values = [train_network(**c) for c in configs]
strategy.tell(configs, values)
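
`train_network` above is a placeholder for your own training routine; any callable that maps the sampled hyperparameters to a scalar score works. A minimal hypothetical stand-in for trying out the ask-eval-tell loop could look like this:

def train_network(lrate, batch_size, arch):
    # Toy objective (not part of mle-hyperopt): pretend smaller learning
    # rates and larger batches yield lower (better) scores
    return lrate / batch_size + (0.1 if arch == "cnn" else 0.2)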

Implemented Search Types 🔭

| Search Type      | Description                                               | search_config                                  |
|------------------|-----------------------------------------------------------|------------------------------------------------|
| GridSearch       | Search over a list of discrete values                     | -                                              |
| RandomSearch     | Random search over variable ranges                        | refine_after, refine_top_k                     |
| CoordinateSearch | Coordinate-wise optimization with fixed defaults          | order, defaults                                |
| SMBOSearch       | Sequential model-based optimization (Hutter et al., 2011) | base_estimator, acq_function, n_initial_points |
| NevergradSearch  | Multi-objective nevergrad wrapper                         | optimizer, budget_size, num_workers            |
| HalvingSearch    | Successive Halving (Karnin et al., 2013)                  | min_budget, num_arms, halving_coeff            |
| HyperbandSearch  | Hyperband (Li et al., 2018)                               | max_resource, eta                              |
| PBTSearch        | Population-Based Training (Jaderberg et al., 2017)        | explore, exploit                               |

Variable Types & Hyperparameter Spaces 🌍

| Variable    | Type           | Space Specification                 |
|-------------|----------------|-------------------------------------|
| real        | Real-valued    | Dict: begin, end, prior/bins (grid) |
| integer     | Integer-valued | Dict: begin, end, prior/bins (grid) |
| categorical | Categorical    | List: values to search over         |

Installation ⏳

A PyPI installation is available via:

pip install mle-hyperopt

If you want to get the most recent commit, please install directly from the repository:

pip install git+https://github.com/mle-infrastructure/mle-hyperopt.git@main

Search Method Highlights 🔎

Grid Search 🟥

from mle_hyperopt import GridSearch

strategy = GridSearch(
    real={"lrate": {"begin": 0.1,
                    "end": 0.5,
                    "bins": 5}},
    integer={"batch_size": {"begin": 1,
                            "end": 5,
                            "bins": 1}},
    categorical={"arch": ["mlp", "cnn"]},
    fixed_params={"momentum": 0.9})  # Add fixed param setting to each config

configs = strategy.ask()
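
Conceptually, a grid search enumerates the Cartesian product of all discretized values. A quick way to sanity-check the size of the grid above, assuming each `bins` entry contributes that many evenly spaced values (an assumption about the discretization, not mle-hyperopt's exact internals):

import itertools

lrates = [0.1, 0.2, 0.3, 0.4, 0.5]  # 5 bins over [0.1, 0.5]
batch_sizes = [3]                   # a single bin collapses to one value (illustrative)
archs = ["mlp", "cnn"]
grid = list(itertools.product(lrates, batch_sizes, archs))
print(len(grid))  # 5 * 1 * 2 = 10 candidate configurations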

Hyperband 🎸

from mle_hyperopt import HyperbandSearch

strategy = HyperbandSearch(
    real={"lrate": {"begin": 0.1,
                    "end": 0.5,
                    "prior": "uniform"}},
    integer={"batch_size": {"begin": 1,
                            "end": 5,
                            "prior": "log-uniform"}},
    categorical={"arch": ["mlp", "cnn"]},
    search_config={"max_resource": 81,
                   "eta": 3},
    seed_id=42,  # Fix randomness for reproducibility
    verbose=True)

configs = strategy.ask()
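
With max_resource=81 and eta=3, the Hyperband schedule of Li et al. (2018) spans s_max + 1 = 5 brackets that trade off the number of sampled configurations against the budget each one receives. The bracket layout follows directly from the paper's formulas (this computes the published schedule, not mle-hyperopt internals):

import math

R, eta = 81, 3  # max_resource & successive-halving coefficient
s_max = int(round(math.log(R) / math.log(eta)))  # 4
for s in range(s_max, -1, -1):
    n = math.ceil((s_max + 1) * eta ** s / (s + 1))  # initial configs in bracket
    r = R / eta ** s                                 # initial budget per config
    print(f"bracket {s}: {n} configs at resource {r:g}")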

Population-Based Training 🦎

from mle_hyperopt import PBTSearch

strategy = PBTSearch(
    real={"lrate": {"begin": 0.1,
                    "end": 0.5,
                    "prior": "uniform"}},
    search_config={
        "exploit": {"strategy": "truncation", "selection_percent": 0.2},
        "explore": {"strategy": "perturbation", "perturb_coeffs": [0.8, 1.2]},
        "steps_until_ready": 4,
        "num_workers": 10,
    },
    maximize_objective=True  # Max score instead of min
)

configs = strategy.ask()
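
The exploit/explore settings mirror the scheme of Jaderberg et al. (2017): truncation selection lets each bottom-fraction worker clone a top-fraction worker, and perturbation then multiplies the copied hyperparameters by a randomly drawn coefficient. A small illustrative sketch of that mechanic (not the package's implementation):

import random

def truncation_exploit(scores, selection_percent=0.2):
    # Workers in the bottom fraction clone a random top-fraction worker
    n = len(scores)
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    k = max(1, int(n * selection_percent))
    return {loser: random.choice(ranked[:k]) for loser in ranked[-k:]}

def perturbation_explore(lrate, perturb_coeffs=(0.8, 1.2)):
    # Jitter the copied hyperparameter by a random coefficient
    return lrate * random.choice(perturb_coeffs)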

Further Options 🚴

Saving & Reloading Logs 🏪

# Storing & reloading of results from .json/.yaml/.pkl
strategy.save("search_log.json")
strategy = RandomSearch(..., reload_path="search_log.json")

# Or manually add info after class instantiation
strategy = RandomSearch(...)
strategy.load("search_log.json")

Search Decorator 🧶

from mle_hyperopt import hyperopt

@hyperopt(strategy_type="Grid",
          num_search_iters=25,
          real={"x": {"begin": 0., "end": 0.5, "bins": 5},
                "y": {"begin": 0, "end": 0.5, "bins": 5}})
def circle(config):
    # Distance from the origin; the grid search minimizes this score
    distance = abs(config["x"] ** 2 + config["y"] ** 2)
    return distance

strategy = circle()
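
Since the decorated call returns the finished strategy after all 25 grid evaluations, the usual inspection utilities apply directly afterwards:

ids, configs, values = strategy.get_best(top_k=1)  # best (x, y) found on the grid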

Storing Configuration Files 📑

# Store 2 proposed configurations - eval_0.yaml, eval_1.yaml
strategy.ask(2, store=True)
# Store with explicit configuration filenames - conf_0.yaml, conf_1.yaml
strategy.ask(2, store=True, config_fnames=["conf_0.yaml", "conf_1.yaml"])
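
On the consuming side, the exported files are plain YAML, so a training script invoked as python train.py --config_fname conf_0.yaml only needs to parse them back into a dictionary. A minimal hypothetical train.py skeleton (the script structure is illustrative):

import argparse
import yaml

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config_fname", type=str)
    args = parser.parse_args()
    with open(args.config_fname) as f:
        config = yaml.safe_load(f)  # e.g. {"lrate": 0.3, "arch": "cnn", ...}
    # ... train with `config` and log the resulting score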

Storing Checkpoint Paths 🛥️

# Ask for 5 configurations to evaluate and get their scores
configs = strategy.ask(5)
values = ...
# Get list of checkpoint paths corresponding to config runs
ckpts = [f"ckpt_{i}.pt" for i in range(len(configs))]
# `tell` parameter configs, eval scores & ckpt paths
# Required for Halving, Hyperband and PBT
strategy.tell(configs, scores, ckpts)
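
How the checkpoints get written is entirely up to your training code; the strategy records the paths so that the multi-fidelity methods can hand them back to the runs they promote. A hedged sketch using pickle (any serialization format works):

import pickle

for config, ckpt in zip(configs, ckpts):
    model_state = {"config": config}  # placeholder for the real run's state
    with open(ckpt, "wb") as f:
        pickle.dump(model_state, f)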

Retrieving Top Performers & Visualizing Results 📉

# Get the top k best performing configurations
ids, configs, values = strategy.get_best(top_k=4)

# Plot timeseries of best performing score over search iterations
strategy.plot_best()

# Print out ranking of best performers
strategy.print_ranking(top_k=3)

Refining the Search Space of Your Strategy 🪓

# Refine the search space after 5 & 10 iterations based on top 2 configurations
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 1,
                                                "end": 5,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]},
                        search_config={"refine_after": [5, 10],
                                       "refine_top_k": 2})

# Or do so manually using `refine` method
strategy.tell(...)
strategy.refine(top_k=2)

Note that the search space refinement is only implemented for random, SMBO and nevergrad-based search strategies.
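
Conceptually, refinement narrows each variable's range to the span covered by the top-k configurations, so that later proposals concentrate on the promising region. A rough illustration of the idea (not the package's internals):

def refined_range(top_configs, var):
    # Clip a real/integer range to the span of the best configs
    vals = [c[var] for c in top_configs]
    return {"begin": min(vals), "end": max(vals)}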

Simple Command Line interface ⌨️

You can also launch a search for your application directly from the command line. This requires a few ingredients: a Python script <script>.py containing a function main(config), which runs your simulation for a given configuration dictionary and returns a single scalar performance score to be logged.

def main(config):
    ...
    return score
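
A complete toy main could, for instance, reuse the circle objective from above; only the main(config) -> score contract comes from the interface, the body is illustrative:

def main(config):
    # Toy scalar objective over the sampled variables
    score = abs(config["x"] ** 2 + config["y"] ** 2)
    return score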

Furthermore, you will need a search configuration file <search>.yaml; default fixed parameter settings can be added via <base>.yaml.

mle-search <script>.py -base <base>.yaml -search <search>.yaml -iters <search_iters>

Have a look at the example, which can be executed via mle-search run_mle_search.py -search search.yaml -base base.yaml. You can reload a previous search log by adding the option -reload.

Citing the MLE-Infrastructure ✏️

If you use mle-hyperopt in your research, please cite it as follows:

@software{mle_infrastructure2021github,
  author = {Robert Tjarko Lange},
  title = {{MLE-Infrastructure}: A Set of Lightweight Tools for Distributed Machine Learning Experimentation},
  url = {http://github.com/mle-infrastructure},
  year = {2021},
}

Development 👷

You can run the test suite via python -m pytest -vv tests/. If you find a bug or are missing your favourite feature, feel free to create an issue and/or start contributing 🤗.
