
scikit-nni


Hyperparameter search for scikit-learn components using Microsoft NNI

Features

  • Hyperparameter search for scikit-learn pipelines using Microsoft NNI

  • No code required to define the pipelines

  • Built-in datasource reader for reading npz files for classification

  • Support for using custom datasource readers

  • A single configuration file to define the NNI configuration and search space

I plan to add more datasource readers (e.g. CSV, libSVM format files, etc.). Contributions are always welcome!

Usage

Step 1 - Write specification file

The specification file is essentially a YAML file, but with the extension .nni.yml.

There are four parts (sections) in the specification file.

Datasource Section

This is where you specify the (Python) callable that sknni will invoke to obtain the training and test datasets.

The callable should return two values, each of which is a tuple of two items: the first tuple consists of the training data (X_train, y_train) and the second tuple consists of the test data (X_test, y_test).

An example callable would look like this:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

class ACustomDataSource(object):
    def __init__(self):
        pass

    def __call__(self, test_size: float = 0.25):
        # Load the digits dataset and split it into the train and
        # test tuples described above.
        digits = load_digits()
        X_train, X_test, y_train, y_test = train_test_split(
            digits.data, digits.target, random_state=99, test_size=test_size)

        return (X_train, y_train), (X_test, y_test)

In the above example, the callable generates the train and test datasets. The callable can also take parameters; in this example, for instance, you can optionally pass the fraction of the data to be used for testing.
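For reference, here is a minimal sketch of how such a reader is consumed: the class is instantiated and then called with the params from the specification file (this mirrors the contract described above, not sknni's actual internals):

datasource = ACustomDataSource()

# The reader returns the two tuples described above.
(X_train, y_train), (X_test, y_test) = datasource(test_size=0.25)
print(X_train.shape, X_test.shape)  # (1347, 64) (450, 64) for the digits dataset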

Now let’s see how you would reference this callable in the specification file.

# Datasource is how you specify which callable
# sknni will invoke to get the data
dataSource:
    reader: yourmodule.ACustomDataSource
    params:
        test_size: 0.30

Make sure that during the execution of the experiment your datasource (in this case yourmodule.ACustomDataSource) is available on the PYTHONPATH.

Here is an additional example showing the usage of a built-in datasource reader:

dataSource:
    reader: sknni.datasource.NpzClassificationSource
    params:
        dir_path: /Users/ksachdeva/Desktop/Dev/myoss/scikit-nni/examples/data/multiclass-classification
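To use this built-in reader you need .npz files in the given directory. Below is a hedged sketch of producing such files with numpy; the exact file names and array keys that NpzClassificationSource expects are assumptions here, so check the examples/data folder in the repository for the precise layout:

import numpy as np

# ASSUMPTION: the file names ("train.npz", "test.npz") and the array keys
# ("x", "y") below are illustrative only; consult the repository's examples
# for the layout NpzClassificationSource actually expects.
np.savez("data/multiclass-classification/train.npz", x=X_train, y=y_train)
np.savez("data/multiclass-classification/test.npz", x=X_test, y=y_test)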
Pipeline Definition Section

Below is the example of the section. You simply specify the list of steps of your typical scikit-learn Pipeline.

Note - The sequence of steps is very important.

What you MUST ensure is that the fully qualified names of your scikit-learn preprocessors, transformers, and estimators are correctly specified. sknni uses reflection and introspection to create the instances, so if a name has a typo or the component is not available on your PYTHONPATH, you will get an error at experiment execution time.

sklearnPipeline:
    name: normalizer_svc
    steps:
        normalizer: sklearn.preprocessing.Normalizer
        svc: sklearn.svm.SVC

In the above example there are 2 steps: the first normalizes the data, and the second trains a classifier using a Support Vector Machine.
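For clarity, the YAML above corresponds to the pipeline you would otherwise construct by hand (a sketch of the plain scikit-learn equivalent; sknni builds this instance for you via reflection):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

# Equivalent of the normalizer_svc pipeline defined in the YAML above:
# the step names match the YAML keys and the classes match the
# fully qualified names.
pipeline = Pipeline(steps=[
    ("normalizer", Normalizer()),
    ("svc", SVC()),
])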

Search Space Section

This section corresponds to the search space for your hyperparameters. When you use `nnictl` directly, this is typically specified in the search-space.json file.

Here are the important things to note about this section:

  • The syntax for specifying parameter types and ranges is the same as NNI's (except that we are using YAML here instead of JSON).

  • You MUST specify the parameters under the step names of your scikit-learn pipeline.

  • You MUST use parameter names that match the ones accepted by the scikit-learn components (i.e. preprocessors, estimators, etc.); a quick way to list them is shown below.
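If you are unsure which parameter names a component accepts, scikit-learn's get_params() lists them, which is a quick way to validate your search space keys:

from sklearn.svm import SVC

# Print the valid hyperparameter names for SVC; these are the keys
# usable in the search space section (C, coef0, degree, gamma, kernel, ...).
print(sorted(SVC().get_params().keys()))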

Below is an example of this section.

nniConfigSearchSpace:
    - normalizer:
        norm:
            _type: choice
            _value: [l2, l1]
    - svc:
        C:
            _type: uniform
            _value: [0.1, 1.0]
        kernel:
            _type: choice
            _value: [linear,rbf,poly,sigmoid]
        degree:
            _type: choice
            _value: [1,2,3,4]
        gamma:
            _type: uniform
            _value: [0.01,0.1]
        coef0:
            _type: uniform
            _value: [0.01,0.1]

Note that sklearn.svm.SVC takes C, kernel, degree, gamma, and coef0 as parameters, hence the same names (keys) are used here in the search space specification. You can include as many or as few parameters to search over as you like.

NNI Config Section

This is the simplest of all the sections, as there is nothing new here from the sknni perspective: you simply copy in the content of your NNI config.yml. You do not have to specify the codeDir and command fields in the trial subsection, as these are added by sknni to the generated configuration files.

Here is an example.

# This is exactly the same as NNI's configuration,
# except that you do not have to specify the command
# and codeDir fields. They are automatically added by the sknni generator
nniConfig:
    authorName: default
    experimentName: example_sklearn-classification
    trialConcurrency: 1
    maxExecDuration: 1h
    maxTrialNum: 100
    trainingServicePlatform: local
    useAnnotation: false
    tuner:
        builtinTunerName: TPE
        classArgs:
            optimize_mode: maximize
    trial:
        gpuNum: 0

You can look at the various examples in the repository to learn how to define your own specification file.

Step 2 - Generate your experiment

sknni generate-experiment --spec example/basic_svc.nni.yml --output-dir experiments

The above command will create a directory experiments/svc-classification with the following files:

  • The original specification file, i.e. basic_svc.nni.yml (also used during the experiment run)

  • Generated Microsoft NNI’s config.yml

  • Generated Microsoft NNI’s search-space.json

Step 3 - Run your experiment

This is the same as running nnictl directly:

nnictl create --config experiments/svc-classification/config.yml

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.1 (2019-10-20)

  • First release on PyPI.
