
keras-hypetune

A friendly python package for Keras Hyperparameters Tuning based only on NumPy and Hyperopt.

Overview

A very simple wrapper for fast Keras hyperparameters optimization. keras-hypetune lets you use the power of Keras without having to learn a new syntax. All you need to do is create a Python dictionary with the parameter boundaries for the experiments and define your Keras model (in any format: Functional or Sequential) inside a callable function.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

def get_model(param):

    # param is a plain dict holding one concrete combination of hyperparameters
    model = Sequential()
    model.add(Dense(param['unit_1'], activation=param['activ']))
    model.add(Dense(param['unit_2'], activation=param['activ']))
    model.add(Dense(1))
    model.compile(optimizer=Adam(learning_rate=param['lr']),
                  loss='mse', metrics=['mae'])

    return model
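
Each searcher calls this function once per trial with a concrete parameter combination, so you can sanity-check the builder by hand before launching a search:

# Sanity check: build the model with one hand-picked combination.
param = {'unit_1': 128, 'unit_2': 64, 'lr': 1e-3, 'activ': 'relu'}
model = get_model(param)   # a regular compiled tf.keras model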

The optimization process is easily trackable using the callbacks provided by Keras. At the end of the search, you can access everything you need by querying the keras-hypetune searcher. The best solutions can be automatically saved in proper locations.

Installation

pip install --upgrade keras-hypetune

TensorFlow and Keras are not declared as package requirements, so install them separately. keras-hypetune is designed specifically for tf.keras with TensorFlow 2.0. GPU usage works as normal.
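
The examples below assume imports along the following lines (a sketch: the package installs as keras-hypetune but, as is common on PyPI, imports under the name kerashypetune):

# Imports assumed by the examples below.
import numpy as np
from scipy import stats
from hyperopt import hp, Trials
from sklearn.model_selection import KFold
from kerashypetune import (KerasGridSearch, KerasRandomSearch,
                           KerasBayesianSearch, KerasGridSearchCV,
                           KerasRandomSearchCV, KerasBayesianSearchCV)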

Fixed Validation Set

This tuning modality runs the optimization on a fixed validation set: every parameter combination is evaluated on the same data. In this modality, any input data format accepted by Keras is allowed.

KerasGridSearch

Every combination of the passed parameter values is created and evaluated.

param_grid = {
    'unit_1': [128,64], 
    'unit_2': [64,32],
    'lr': [1e-2,1e-3], 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

kgs = KerasGridSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False)
kgs.search(x_train, y_train, validation_data=(x_valid, y_valid))
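
As noted above, standard Keras callbacks pass straight through the search call to fit, and the outcome can be queried from the searcher afterwards. A minimal sketch (best_score, best_params, and best_model reflect my understanding of the searcher's interface; verify against your installed version):

from tensorflow.keras.callbacks import EarlyStopping

# Any Keras fit argument, callbacks included, can be forwarded via search().
es = EarlyStopping(patience=5, min_delta=0.001, monitor='val_loss')
kgs.search(x_train, y_train,
           validation_data=(x_valid, y_valid),
           callbacks=[es])

print(kgs.best_score)    # best monitored score found
print(kgs.best_params)   # the winning parameter combination (a dict)
best_model = kgs.best_model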

KerasRandomSearch

Only a random subset of the parameter combinations is created and evaluated.

The number of parameter combinations that are tried is given by n_iter. If every parameter is given as a list, sampling without replacement is performed. If at least one parameter is given as a distribution (a scipy.stats random variable), sampling with replacement is used. An illustrative sketch of the two regimes follows the example below.

param_grid = {
    'unit_1': [128,64], 
    'unit_2': stats.randint(32, 128),
    'lr': stats.uniform(1e-4, 0.1), 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

krs = KerasRandomSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False, 
                        n_iter=15, sampling_seed=33)
krs.search(x_train, y_train, validation_data=(x_valid, y_valid))
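
To make the with/without replacement distinction concrete, here is an illustrative sketch of how a single candidate value is drawn under each regime (plain scipy/random, not keras-hypetune internals):

import random
from scipy import stats

rng = random.Random(33)

# All-list grid: the space is finite, so n_iter distinct combinations
# can be drawn without replacement.
unit_1 = rng.choice([128, 64])

# A scipy.stats entry makes the space effectively infinite, so every
# entry is re-sampled independently at each iteration (with replacement).
lr = stats.uniform(1e-4, 0.1).rvs(random_state=33)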

KerasBayesianSearch

The parameter values are chosen according to Bayesian optimization, using the algorithms provided by hyperopt (such as the Tree-structured Parzen Estimator).

The number of parameter combinations that are tried is given by n_iter. Tunable parameters must be given as hyperopt distributions; fixed values (such as epochs and batch_size in the example below) can stay plain scalars.

param_grid = {
    'unit_1': 64 + hp.randint('unit_1', 64),
    'unit_2': 32 + hp.randint('unit_2', 96),
    'lr': hp.loguniform('lr', np.log(0.001), np.log(0.02)), 
    'activ': hp.choice('activ', ['elu','relu']),
    'epochs': 100, 
    'batch_size': 512
}

kbs = KerasBayesianSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False, 
                          n_iter=15, sampling_seed=33)
kbs.search(x_train, y_train, trials=Trials(), validation_data=(x_valid, y_valid))
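
Keeping a reference to the Trials object lets you inspect the raw trial history afterwards through hyperopt's standard API:

# Equivalent call that keeps the Trials object for later inspection.
trials = Trials()
kbs.search(x_train, y_train, trials=trials, validation_data=(x_valid, y_valid))

print(trials.losses())    # one minimized loss per evaluated combination
print(trials.best_trial)  # full record of the best trial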

Cross Validation

This tuning modality runs the optimization with a cross-validation approach. The available CV strategies are the ones provided by the scikit-learn splitter classes. Each parameter combination is evaluated on the mean score across the folds. In this modality, only NumPy array data is allowed. For multi-input/output tasks, the arrays can be wrapped into a list or dict, as in plain Keras (see the sketch below).
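
For example, a two-input model would receive its arrays wrapped in a list, exactly as they would be passed to fit (a self-contained sketch with illustrative shapes):

import numpy as np

rng = np.random.default_rng(33)
X_a = rng.normal(size=(1000, 16))   # first model input
X_b = rng.normal(size=(1000, 8))    # second model input
y = rng.normal(size=(1000,))

# One array per model input; a dict keyed by the input layer
# names works as well. Each CV fold slices every array consistently.
X = [X_a, X_b]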

KerasGridSearchCV

Every combination of the passed parameter values is created and evaluated.

param_grid = {
    'unit_1': [128,64], 
    'unit_2': [64,32],
    'lr': [1e-2,1e-3], 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

kgs = KerasGridSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False)
kgs.search(X, y)

KerasRandomSearchCV

Only a random subset of the parameter combinations is created and evaluated.

The number of parameter combinations that are tried is given by n_iter. If every parameter is given as a list, sampling without replacement is performed. If at least one parameter is given as a distribution (a scipy.stats random variable), sampling with replacement is used.

param_grid = {
    'unit_1': [128,64], 
    'unit_2': stats.randint(32, 128),
    'lr': stats.uniform(1e-4, 0.1), 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

krs = KerasRandomSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False,
                          n_iter=15, sampling_seed=33)
krs.search(X, y)

KerasBayesianSearchCV

The parameter values are chosen according to Bayesian optimization, using the algorithms provided by hyperopt (such as the Tree-structured Parzen Estimator).

The number of parameter combinations that are tried is given by n_iter. Tunable parameters must be given as hyperopt distributions; fixed values (such as epochs and batch_size in the example below) can stay plain scalars.

param_grid = {
    'unit_1': 64 + hp.randint('unit_1', 64),
    'unit_2': 32 + hp.randint('unit_2', 96),
    'lr': hp.loguniform('lr', np.log(0.001), np.log(0.02)), 
    'activ': hp.choice('activ', ['elu','relu']),
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

kbs = KerasBayesianSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False,
                            n_iter=15, sampling_seed=33)
kbs.search(X, y, trials=Trials())
