Boruta-Shap

A feature selection algorithm.

BorutaShap is a wrapper feature selection method which combines the Boruta feature selection algorithm with Shapley values. This combination has proven to outperform the original permutation importance method in both speed and the quality of the feature subset produced. Not only does this algorithm provide a better subset of features, it can also simultaneously provide the most accurate and consistent global feature rankings, which can be used for model inference too. Unlike the original R package, which limits the user to a Random Forest model, BorutaShap allows the user to choose any tree-based learner as the base model in the feature selection process.

Despite BorutaShap's runtime improvements, the SHAP TreeExplainer scales linearly with the number of observations, making its use cumbersome for large datasets. To combat this, BorutaShap includes a sampling procedure which uses the smallest possible subsample of the available data at each iteration of the algorithm. It finds this sample by comparing the distributions produced by an isolation forest on the sample and on the full data using a Kolmogorov-Smirnov (KS) test. In experiments, this procedure reduced the run time by up to 80% while still producing a valid approximation of the entire data set. Even with these improvements the user may still want a faster solution, so BorutaShap also offers an option to use the mean decrease in gini impurity. This importance measure is independent of the dataset size, as it uses the tree's structure to compute a global feature ranking, making it much faster than SHAP on larger datasets. Although this metric returns somewhat comparable feature subsets, it is not a reliable measure of global feature importance despite its widespread use. Thus, I would recommend using the SHAP metric whenever possible.
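
The idea behind this sampling procedure is easier to see in code. The following is a minimal, hypothetical sketch under stated assumptions (X is a NumPy array; the helper name, fractions, and significance threshold are illustrative), not the package's internal implementation:

import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest

def smallest_valid_sample(X, start_frac=0.05, step=0.05, seed=0):
    """Illustrative: grow a row subsample until its isolation-forest score
    distribution is indistinguishable from the full data's (KS test)."""
    rng = np.random.default_rng(seed)
    iso = IsolationForest(random_state=seed).fit(X)
    full_scores = iso.score_samples(X)

    frac = start_frac
    while frac < 1.0:
        idx = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
        sample_scores = iso.score_samples(X[idx])
        # A high p-value means the two score distributions cannot be told
        # apart, so the subsample is a valid stand-in for the full data.
        if ks_2samp(full_scores, sample_scores).pvalue > 0.05:
            return idx
        frac += step
    return np.arange(len(X))  # fall back to the full data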

Algorithm

  1. Start by creating new copies of all the features in the data set and name them shadow + feature_name. Shuffle these newly added features to remove their correlations with the response variable.

  2. Run a classifier on the extended data with the random shadow features included. Then rank the features using a feature importance metric (the original algorithm used permutation importance as its metric of choice).

  3. Create a threshold using the maximum importance score from the shadow features. Then assign a hit to any feature that exceeds this threshold.

  4. For every unassigned feature, perform a two-sided t-test of equality.

  5. Attributes which have an importance significantly lower than the threshold are deemed 'unimportant' and are removed from the process. Attributes which have an importance significantly higher than the threshold are deemed 'important'.

  6. Remove all shadow attributes and repeat the procedure until an importance has been assigned to each feature, or the algorithm has reached the previously set limit of runs.

If the algorithm has reached its set limit of runs and an importance has not been assigned to each feature, the user has two choices: either increase the number of runs, or use the tentative rough fix function, which compares the median importance values of the unassigned features against the maximum shadow feature to make the decision. A sketch of the core loop (steps 1-3) follows below.
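
To make steps 1-3 concrete, here is a minimal, self-contained sketch of a single Boruta iteration using scikit-learn. It illustrates the algorithm rather than BorutaShap's internal code; the use of RandomForestRegressor and its built-in impurity importances (instead of permutation importance or SHAP) is a simplifying assumption.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def boruta_iteration(X: pd.DataFrame, y, seed=0):
    """One Boruta round: build shadow features, fit a model, and record
    which real features beat the best shadow feature's importance."""
    rng = np.random.default_rng(seed)

    # Step 1: copy every column and shuffle the copies to break any
    # association with the response variable.
    shadows = X.apply(lambda col: rng.permutation(col.values))
    shadows.columns = ['shadow_' + c for c in X.columns]
    extended = pd.concat([X, shadows], axis=1)

    # Step 2: fit a tree-based learner and rank every feature.
    model = RandomForestRegressor(n_estimators=100, random_state=seed)
    model.fit(extended, y)
    importances = pd.Series(model.feature_importances_, index=extended.columns)

    # Step 3: the best shadow importance sets the bar; real features that
    # clear it score a "hit" for this iteration.
    threshold = importances[shadows.columns].max()
    return importances[X.columns] > threshold

# Repeating this over many trials and testing each feature's hit count for
# significance (step 4) yields the important / unimportant / tentative split.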

Installation

Use the package manager pip to install BorutaShap.

pip install BorutaShap

Usage

For more use cases, such as alternative models, sampling, or changing the importance metric, please view the notebooks here.

Using SHAP and a Basic Random Forest

from BorutaShap import BorutaShap, load_data

X, y = load_data(data_type='regression')
X.head()
# No model selected, so the default Random Forest is used;
# classification=False indicates a regression problem.
Feature_Selector = BorutaShap(importance_measure='shap',
                              classification=False)

'''
sample: Boolean
    if True, a row-wise sample of the data will be used to calculate
    the feature importance values

sample_fraction: float
    the fraction of the original data used when calculating the feature
    importance values; only used if sample=True

train_or_test: string
    decides whether the feature importance should be calculated on
    out-of-sample data; see the discussion here:
    https://slds-lmu.github.io/iml_methods_limitations/pfi-data.html

normalize: Boolean
    if True, the importance values will be normalized using the
    z-score formula

verbose: Boolean
    a flag indicating whether to print out all the rejected or
    accepted features
'''
Feature_Selector.fit(X=X, y=y, n_trials=100, sample=False,
                     train_or_test='test', normalize=True,
                     verbose=True)
# Returns Boxplot of features
Feature_Selector.plot(which_features='all')
# Returns a subset of the original data with the selected features
subset = Feature_Selector.Subset()
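
The description above also mentions the faster mean-decrease-in-gini option. Assuming the importance_measure parameter accepts 'gini' for that mode (an assumption here, though the feature itself is described earlier), switching over would look like this:

# Hypothetical variation: use the faster gini-impurity importance described
# above (dataset-size independent, but a less reliable global measure).
Feature_Selector = BorutaShap(importance_measure='gini',
                              classification=False)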

Using BorutaShap with Another Model (XGBoost)

from BorutaShap import BorutaShap, load_data
from xgboost import XGBClassifier

X, y = load_data(data_type='classification')
X.head()
model = XGBClassifier()

# classification=True indicates a classification problem
# (False would indicate a regression problem)
Feature_Selector = BorutaShap(model=model,
                              importance_measure='shap',
                              classification=True)

Feature_Selector.fit(X=X, y=y, n_trials=100, sample=False,
                     train_or_test='test', normalize=True,
                     verbose=True)
# Returns Boxplot of features
Feature_Selector.plot(which_features='all')
# Returns a subset of the original data with the selected features
subset = Feature_Selector.Subset()
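
For large datasets, the KS-test sub-sampling procedure described earlier can be switched on through the sample flag documented above; a sketch, with all other arguments unchanged:

# Hypothetical: enable row-wise sub-sampling to speed up the SHAP
# importance calculation on a large dataset.
Feature_Selector.fit(X=X, y=y, n_trials=100, sample=True,
                     train_or_test='test', normalize=True,
                     verbose=True)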

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

MIT

If you wish to cite this work, please use the DOI from the Zenodo badge in the project repository.

