Hyperparameter optimization for deep learning

Project description

Hyperparameter Optimization for Deep Learning (HPO4DL)

HPO4DL is a framework for multi-fidelity (gray-box) hyperparameter optimization. Its core optimizer is DyHPO, a novel Bayesian optimization approach tailored to deep learning. DyHPO dynamically determines which hyperparameter configurations to train further, using a deep kernel for Gaussian processes that captures the details of the learning curve and an acquisition function that incorporates multi-budget information.
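As a rough, self-contained illustration of the gray-box idea (a deliberately naive placeholder, not the actual DyHPO surrogate or acquisition function), the loop below spends its epoch budget one step at a time on whichever partial learning curve currently looks most promising:

import random

def predicted_gain(curve):
    # Stand-in for DyHPO's deep-kernel GP surrogate plus acquisition function:
    # prefer unexplored configurations, then those whose loss dropped the most
    # in the last step.
    if len(curve) < 2:
        return float("inf")
    return curve[-2] - curve[-1]

random.seed(0)
configs = [{"x": random.uniform(-5.0, 10.0)} for _ in range(8)]
curves = [[] for _ in configs]

for _ in range(40):  # total epoch budget, spent one epoch at a time
    # Dynamically choose the configuration most worth training further ...
    best = max(range(len(configs)), key=lambda i: predicted_gain(curves[i]))
    # ... and advance it by a single epoch (one unit of fidelity).
    x = configs[best]["x"]
    curves[best].append((x - 2.0) ** 2 / (len(curves[best]) + 1))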

Installation

To install the package:

pip install hpo4dl

Getting started

The following is a simple example to get you started:

from typing import List, Dict, Union
from hpo4dl.tuner import Tuner
from ConfigSpace import ConfigurationSpace


def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str
) -> Union[Dict, List[Dict]]:
    # Toy objective: minimize (x - 2)^2, reporting one result per new epoch.
    x = configuration["x"]
    evaluated_info = [
        {'epoch': i, 'metric': (x - 2) ** 2}
        for i in range(previous_epoch + 1, epoch + 1)
    ]
    return evaluated_info


configspace = ConfigurationSpace({"x": (-5.0, 10.0)})

tuner = Tuner(
    objective_function=objective_function,
    configuration_space=configspace,
    minimize=True,
    max_budget=1000,
    optimizer='dyhpo',
    seed=0,
    max_epochs=27,
    num_configurations=1000,
    output_path='hpo4dl_results',
)

incumbent = tuner.run()

Key Parameters Explained:

  • objective_function: The function you aim to optimize.

  • configuration_space: The hyperparameter configuration space over which the optimization is performed. (A richer example follows after this list.)

  • minimize: Boolean flag indicating whether the objective function should be minimized (True) or maximized (False).

  • max_budget: The total number of epochs the tuner will spend overall; this budget is distributed across the hyperparameter configurations it evaluates.

  • optimizer: Specifies the optimization technique employed.

  • seed: Random seed for reproducibility.

  • max_epochs: Maximum number of epochs for which a single configuration is evaluated.

  • num_configurations: Determines the number of configurations DyHPO reviews before selecting the next one for evaluation. Essentially, it guides the balance between exploration and exploitation in the optimization process.

  • output_path: Designates where to save the results and the checkpoint of the best hyperparameter configuration.
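The configuration space in the getting-started example holds a single float. The ConfigSpace dictionary constructor also accepts integer ranges and categorical choices; a minimal sketch (the hyperparameter names are illustrative, not part of the HPO4DL API):

from ConfigSpace import ConfigurationSpace

configspace = ConfigurationSpace({
    "learning_rate": (1e-5, 1e-1),   # float range as a (lower, upper) tuple
    "num_layers": (1, 8),            # integer range
    "activation": ["relu", "tanh"],  # categorical choice as a list
})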

Objective function

def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str
) -> Union[Dict, List[Dict]]

The objective function is designed to support interrupted and resumed training: it should continue training from previous_epoch up to the designated epoch.

The function should return a dictionary or a list of dictionaries upon completion. Every dictionary must include the epoch and metric keys. Here's a sample return value:

{
    "epoch": 5,
    "metric": 0.76
}

For optimal performance with DyHPO, ensure the metric is normalized.
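For instance, a metric without natural bounds (such as a raw loss) can be min-max scaled into [0, 1] before being reported. A minimal sketch, where the worst/best bounds are assumptions you would pick per task:

def normalize(loss: float, worst: float = 10.0, best: float = 0.0) -> float:
    # Min-max scale into [0, 1], clamping values outside the assumed bounds.
    value = (loss - best) / (worst - best)
    return min(max(value, 0.0), 1.0)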

Lastly, checkpoint_path designates where to save any intermediate files produced while training the current configuration, such as model weights and logs, so that training can resume seamlessly. A sketch of such a resumable objective function follows.
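A minimal sketch, assuming checkpoint_path is a writable directory and substituting a toy PyTorch model with random data for a real training loop (the "lr" hyperparameter and the unnormalized loss metric are illustrative only):

import os
from typing import Dict, List, Union

import torch
import torch.nn as nn


def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str,
) -> Union[Dict, List[Dict]]:
    model = nn.Linear(10, 1)  # stand-in for a real network
    optimizer = torch.optim.SGD(model.parameters(), lr=configuration["lr"])

    # Resume model and optimizer state if this configuration was trained before.
    os.makedirs(checkpoint_path, exist_ok=True)
    state_file = os.path.join(checkpoint_path, "state.pt")
    if previous_epoch > 0 and os.path.exists(state_file):
        state = torch.load(state_file)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])

    x = torch.randn(64, 10)  # toy data standing in for a real loader
    y = torch.randn(64, 1)
    results = []
    for i in range(previous_epoch + 1, epoch + 1):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
        results.append({"epoch": i, "metric": loss.item()})

    # Persist state so the tuner can resume this configuration later.
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        state_file,
    )
    return results

Saving both the model and optimizer state is what lets the tuner pause a configuration and later continue it from previous_epoch without retraining earlier epochs.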

Detailed Examples

For a detailed exploration of the HPO4DL framework, we've provided an in-depth example under: examples/timm_main.py

To execute the provided example, use the following command:

python examples/timm_main.py \
    --dataset torch/cifar100 \
    --train-split train \
    --val-split validation \
    --optimizer dyhpo \
    --output-dir ./hpo4dl_results

Citation

@inproceedings{wistuba2022supervising,
    title={Supervising the Multi-Fidelity Race of Hyperparameter Configurations},
    author={Martin Wistuba and Arlind Kadra and Josif Grabocka},
    booktitle={Thirty-Sixth Conference on Neural Information Processing Systems},
    year={2022},
    url={https://openreview.net/forum?id=0Fe7bAWmJr}
}
