
shap-hypetune

A Python package for simultaneous hyperparameter tuning and feature selection for gradient boosting models.

shap-hypetune diagram

Overview

Hyperparameter tuning and feature selection are two common steps in every machine learning pipeline. Most of the time they are computed separately and independently, which may result in suboptimal performance and a more time-expensive process.

shap-hypetune aims to combine hyperparameter tuning and feature selection in a single pipeline, optimizing the number of features while searching for the optimal parameter configuration. Hyperparameter tuning and feature selection can also be carried out as standalone operations.

shap-hypetune main features:

  • designed for gradient boosting models, such as LGBMModel or XGBModel;
  • developed to integrate with the scikit-learn ecosystem;
  • effective in both classification and regression tasks;
  • customizable training process, supporting early stopping and all the other fitting options available in the standard algorithms' API;
  • ranking feature selection algorithms: Recursive Feature Elimination (RFE), Recursive Feature Addition (RFA), or Boruta;
  • classical boosting-based feature importances or SHAP feature importances (the latter can also be computed on the eval_set);
  • grid-search, random-search, or Bayesian-search (via hyperopt);
  • parallelized computations with joblib.

Installation

pip install --upgrade shap-hypetune

lightgbm and xgboost are not required dependencies: install the boosting library you plan to use separately. The module depends only on NumPy, shap, scikit-learn, and hyperopt. Python 3.6 or above is supported.

Usage

from shaphypetune import BoostSearch, BoostRFE, BoostRFA, BoostBoruta

Hyperparameter Tuning

BoostSearch(
    estimator,                  # LGBMModel or XGBModel
    param_grid=None,            # parameters to be optimized
    greater_is_better=False,    # minimize or maximize the monitored score
    n_iter=None,                # number of sampled parameter configurations
    sampling_seed=None,         # the seed used for parameter sampling
    verbose=1,                  # verbosity mode
    n_jobs=None                 # number of jobs to run in parallel
)
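
For instance, a grid-search run might look like the sketch below. The synthetic data, the XGBClassifier estimator, and the grid values are illustrative placeholders, and the fitted attributes (best_params_, best_score_) are assumed to follow the scikit-learn-style naming the package adopts.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier   # assumes xgboost is installed

from shaphypetune import BoostSearch

# illustrative synthetic data; any tabular X, y works
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

param_grid = {
    'n_estimators': [50, 100],
    'max_depth': [3, 5],
    'learning_rate': [0.1, 0.3],
}

model = BoostSearch(XGBClassifier(random_state=0), param_grid=param_grid)
# extra fit kwargs (eval_set, verbose, ...) are forwarded to the booster's fit;
# the score monitored by the search is computed on the eval_set
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=0)

print(model.best_params_)   # best parameter configuration found
print(model.best_score_)    # corresponding score on the eval_set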

Feature Selection (RFE)

BoostRFE(
    estimator,                              # LGBMModel or XGBModel
    min_features_to_select=None,            # the minimum number of features to be selected
    step=1,                                 # number of features to remove at each iteration
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    importance_type='feature_importances',  # which importance measure to use: default or shap
    train_importance=True,                  # compute shap importances on the train set (True) or on the eval_set (False)
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
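
A sketch of RFE ranked by SHAP importances, reusing the data and estimator from the tuning example above; the 'shap_importances' string for the SHAP option and the transform behavior are assumptions based on the scikit-learn selector convention the package follows.

from shaphypetune import BoostRFE

model = BoostRFE(
    XGBClassifier(random_state=0),
    min_features_to_select=5,
    step=1,
    importance_type='shap_importances',  # SHAP-based ranking instead of the default (assumed value)
    train_importance=False,              # rank features by SHAP importances computed on the eval_set
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=0)

X_train_selected = model.transform(X_train)  # keep only the surviving features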

Feature Selection (BORUTA)

BoostBoruta(
    estimator,                              # LGBMModel or XGBModel
    perc=100,                               # threshold used to compare shadow and real features
    alpha=0.05,                             # p-value level for feature rejection
    max_iter=100,                           # maximum Boruta iterations to perform
    early_stopping_boruta_rounds=None,      # maximum iterations without confirming a feature
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    importance_type='feature_importances',  # which importance measure to use: default or shap
    train_importance=True,                  # compute shap importances on the train set (True) or on the eval_set (False)
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
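
A minimal Boruta sketch on the same data; the support_ attribute (a boolean mask of confirmed features) is assumed from the usual scikit-learn selector convention.

from shaphypetune import BoostBoruta

model = BoostBoruta(
    XGBClassifier(random_state=0),
    perc=100,                          # compare real features against the max shadow importance
    alpha=0.05,
    max_iter=100,
    early_stopping_boruta_rounds=10,   # stop early if no new feature is confirmed for 10 rounds
)
model.fit(X_train, y_train)

print(model.support_)                        # boolean mask of confirmed features (assumed attribute)
X_train_selected = model.transform(X_train)  # keep only the confirmed features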

Feature Selection (RFA)

BoostRFA(
    estimator,                              # LGBMModel or XGBModel
    min_features_to_select=None,            # the minimum number of features to be selected
    step=1,                                 # number of features to add at each iteration
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    importance_type='feature_importances',  # which importance measure to use: default or shap
    train_importance=True,                  # compute shap importances on the train set (True) or on the eval_set (False)
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
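
Each selector also accepts param_grid, so feature selection and hyperparameter tuning can run in a single search. A sketch (grid values illustrative, data as above):

from shaphypetune import BoostRFA

model = BoostRFA(
    XGBClassifier(random_state=0),
    param_grid={'learning_rate': [0.1, 0.3]},   # tuned jointly with the feature subset
    min_features_to_select=3,
    step=2,
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=0)

print(model.best_params_)                    # best configuration across subsets and parameters
X_train_selected = model.transform(X_train)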

Full examples are available in the notebooks folder.

