shap-hypetune
A Python package for simultaneous hyperparameter tuning and feature selection in gradient boosting models.
Overview
Hyperparameter tuning and feature selection are two common steps in every machine learning pipeline. Most of the time they are performed separately and independently, which can lead to suboptimal performance and a more time-consuming process.
shap-hypetune aims to combine hyperparameter tuning and feature selection in a single pipeline, optimizing the number of features while searching for the best parameter configuration. Hyperparameter tuning and feature selection can also be carried out as standalone operations.
shap-hypetune main features:
- designed for gradient boosting models, such as LGBMModel or XGBModel;
- effective in both classification and regression tasks;
- customizable training process, supporting early stopping and all the other fitting options available in the standard algorithms' APIs;
- ranking feature selection algorithms: Recursive Feature Elimination (RFE) or Boruta;
- classical boosting-based feature importances or SHAP feature importances (the latter can also be computed on the eval_set);
- grid search or random search for parameter tuning.
Installation
```
pip install shap-hypetune
```
lightgbm and xgboost are not required dependencies: the module depends only on NumPy and shap. Python 3.6 or above is supported.
Usage
```python
from scipy import stats
from lightgbm import LGBMClassifier
from shaphypetune import BoostSearch, BoostRFE, BoostBoruta
```
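The snippets below assume X_train, y_train, X_valid, and y_valid are already defined. A minimal setup on synthetic data, using standard scikit-learn utilities (purely illustrative, not part of shap-hypetune):
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic classification data; any (X, y) pair works the same way.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0)
```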
Hyperparameter Tuning Only
- GRID-SEARCH
```python
param_grid = {'n_estimators': 150,
              'learning_rate': [0.2, 0.1],
              'num_leaves': [25, 30, 35],
              'max_depth': [10, 12]}
model = BoostSearch(LGBMClassifier(), param_grid=param_grid)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
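Once the search is done, the outcome can be inspected on the fitted object; a quick look, assuming the sklearn-style summary attributes (best_params_, best_score_, estimator_) exposed after fitting (check dir(model) if your version differs):
```python
print(model.best_params_)  # parameter combination with the best validation score
print(model.best_score_)   # validation score achieved by that combination
print(model.estimator_)    # LGBMClassifier refitted with the best parameters
```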
- RANDOM-SEARCH
```python
param_dist = {'n_estimators': 150,
              'learning_rate': stats.uniform(0.09, 0.25),
              'num_leaves': stats.randint(20, 40),
              'max_depth': [10, 12]}
model = BoostSearch(LGBMClassifier(), param_grid=param_dist, n_iter=10, sampling_seed=0)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
Feature Selection Only
- RFE
```python
model = BoostRFE(LGBMClassifier(), min_features_to_select=1, step=1)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
- Boruta
```python
model = BoostBoruta(LGBMClassifier(), max_iter=100, perc=100)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
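Both selectors report which features survived; a minimal inspection, assuming the RFE/Boruta-style attributes (n_features_, support_) and the sklearn transform method:
```python
print(model.n_features_)   # number of selected features
print(model.support_)      # boolean mask over the original columns
X_train_sel = model.transform(X_train)  # data reduced to the selected features
```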
Feature Selection Only with SHAP
- RFE with SHAP
```python
model = BoostRFE(LGBMClassifier(),
                 min_features_to_select=1, step=1,
                 importance_type='shap_importances', train_importance=False)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
- Boruta with SHAP
```python
model = BoostBoruta(LGBMClassifier(),
                    max_iter=100, perc=100,
                    importance_type='shap_importances', train_importance=False)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
Hyperparameter Tuning + Feature Selection
- RANDOM-SEARCH + RFE
```python
param_dist = {'n_estimators': 150,
              'learning_rate': stats.uniform(0.09, 0.25),
              'num_leaves': stats.randint(20, 40),
              'max_depth': [10, 12]}
model = BoostRFE(LGBMClassifier(), param_grid=param_dist, n_iter=10, sampling_seed=0,
                 min_features_to_select=1, step=1)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
- RANDOM-SEARCH + Boruta
```python
param_dist = {'n_estimators': 150,
              'learning_rate': stats.uniform(0.09, 0.25),
              'num_leaves': stats.randint(20, 40),
              'max_depth': [10, 12]}
model = BoostBoruta(LGBMClassifier(), param_grid=param_dist, n_iter=10, sampling_seed=0,
                    max_iter=100, perc=100)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
Hyperparameter Tuning + Feature Selection with SHAP
- GRID-SEARCH + RFE with SHAP
```python
param_grid = {'n_estimators': 150,
              'learning_rate': [0.2, 0.1],
              'num_leaves': [25, 30, 35],
              'max_depth': [10, 12]}
model = BoostRFE(LGBMClassifier(), param_grid=param_grid,
                 min_features_to_select=1, step=1,
                 importance_type='shap_importances', train_importance=False)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
- GRID-SEARCH + Boruta with SHAP
```python
param_grid = {'n_estimators': 150,
              'learning_rate': [0.2, 0.1],
              'num_leaves': [25, 30, 35],
              'max_depth': [10, 12]}
model = BoostBoruta(LGBMClassifier(), param_grid=param_grid,
                    max_iter=100, perc=100,
                    importance_type='shap_importances', train_importance=False)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=6, verbose=0)
```
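After fitting, the returned object wraps the best model refitted on the selected features, so it can serve directly for inference; a short sketch, assuming the sklearn-style predict/transform interface:
```python
y_pred = model.predict(X_valid)         # predictions from the best refitted model
X_valid_sel = model.transform(X_valid)  # validation data reduced to the selected features
```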
All the examples work the same way in regression contexts and with XGBModel.
More examples are available in the notebooks folder.
All the available estimators are fully integrable with scikit-learn.
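For example, since the selectors implement the standard fit/transform interface, they should drop into a scikit-learn Pipeline; a minimal sketch (step names are arbitrary), assuming default train importances so that no eval_set is required:
```python
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ('selection', BoostBoruta(LGBMClassifier(), max_iter=100, perc=100)),
    ('classifier', LGBMClassifier()),
])
pipe.fit(X_train, y_train)             # Boruta selects features, then the classifier trains on them
print(pipe.score(X_valid, y_valid))    # accuracy on the held-out split
```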
File details
Details for the file shap-hypetune-0.1.1.tar.gz.
File metadata
- Download URL: shap-hypetune-0.1.1.tar.gz
- Upload date:
- Size: 12.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.47.0 CPython/3.7.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 468c8cc6c289ce1ac8783e7d69847a80816a4a42ee4a13e0e022c9b00e7e5d90
MD5 | 56d0c9defecad356e632376fbcca4fa6
BLAKE2b-256 | ae6aec49a182447e2ee152459f740a50c87c19c84bf3391e1f087df716baf1c5
File details
Details for the file shap_hypetune-0.1.1-py3-none-any.whl.
File metadata
- Download URL: shap_hypetune-0.1.1-py3-none-any.whl
- Upload date:
- Size: 14.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.47.0 CPython/3.7.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2461cf0f5cab7e1e8253539203df39e64fef0b32a820a34be910aec9b0115b43
MD5 | 1bffea4965e11dad7fe4a0f243577f47
BLAKE2b-256 | 0816be6f8db1635fec1f0a673495e49c0518c8d812572a98baf4d5b89fb5f1e5