Extended Boruta -- a flexible, transparent, sklearn-compatible Python Boruta implementation

Project description

eBoruta -- an extended Boruta algorithm

Introduction

Boruta is an "all-relevant" feature selection algorithm originally proposed for Random Forests (RF) [1]. It's categorized as a "wrapper" method since it uses an ML model to filter for features relevant to the model's learning objective.

To decide what's relevant, Boruta first creates a copy of the data and permutes the values within each column, breaking their connection to the response variable. Such "shadow" features serve as a reference point for separating relevant variables from irrelevant ones. Indeed, if a real feature behaves the same way as random noise, it contributes nothing useful, and one should consider discarding it.
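To make the "shadow" idea concrete, here is a minimal, self-contained sketch of how such features can be constructed. It is illustrative only, not eBoruta's actual code; add_shadow_features is a hypothetical helper.

import numpy as np
import pandas as pd

def add_shadow_features(x: pd.DataFrame, rng: np.random.Generator) -> pd.DataFrame:
    """Append an independently permuted ("shadow") copy of each column."""
    # Permuting within each column preserves its marginal distribution
    # but destroys any relationship with the response variable.
    shadows = x.apply(lambda col: rng.permutation(col.to_numpy()))
    shadows.columns = [f"shadow_{c}" for c in x.columns]
    return pd.concat([x, shadows], axis=1)

rng = np.random.default_rng(42)
x = pd.DataFrame({"a": [1, 2, 3, 4], "b": [10.0, 20.0, 30.0, 40.0]})
print(add_shadow_features(x, rng))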

Concretely, Boruta is an iterative procedure. First, it trains the model on the dataset augmented with shadow features and computes feature importance values. It then checks the importance distribution of the shadow features and how each real feature relates to this distribution. If, given these data, Boruta can't tell a real feature apart from the shadow ones, it assigns that feature a "miss" for the current iteration. Otherwise, it assigns a "hit".

Accumulating "hits" and "misses", Boruta uses statistical testing to decide whether their composition after the current number of steps could have occurred by chance. It thus accepts features that consistently yield "hits" and rejects those that consistently yield "misses". However, Boruta is not always confident enough to accept or reject, leaving some features "tentative", i.e., requiring further testing. The loop continues until reaching a set number of steps or accepting/rejecting the whole feature pool.
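For intuition: under the null hypothesis, each iteration is a fair coin flip, so a feature's hit count follows a binomial distribution. Below is a simplified sketch of the accept/reject/tentative decision; the real algorithm also handles details such as multiple-testing correction, and decide is a hypothetical helper, not eBoruta's API.

from scipy.stats import binomtest

def decide(hits: int, trials: int, alpha: float = 0.05) -> str:
    """Classify a feature by testing its hit count against Binomial(trials, 0.5)."""
    if binomtest(hits, trials, p=0.5, alternative="greater").pvalue < alpha:
        return "accepted"   # significantly more hits than chance
    if binomtest(hits, trials, p=0.5, alternative="less").pvalue < alpha:
        return "rejected"   # significantly fewer hits than chance
    return "tentative"      # not enough evidence either way

print(decide(18, 20), decide(2, 20), decide(11, 20))  # accepted rejected tentative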

Why make this library?

Indeed, two Python implementations already exist: BorutaPy and BorutaShap.

The first introduced modifications that improve the original algorithm, while the second allows Boruta to use SHAP importance [2]. Cool! We combine these features into a more user-friendly, transparent, and flexible interface.

So what is extended?

Although BorutaShap has existed for quite a while, not many people seem to have realized the main benefit of using SHAP importance in Boruta: it makes the method model-agnostic. Indeed, the original publication likely picked RF because it's general-purpose and strikes a good balance between speed and accuracy. However, nothing in the algorithm itself justifies a hard dependency on RF. While BorutaShap supports other ensemble models such as XGBoost or LightGBM, we make the algorithm's potential to be fully model-agnostic explicit. Hence, eBoruta supports any model as long as a method to compute feature importance is defined. Want to try an SVM with permutation importance? Go ahead! Neural networks? We've got you covered! To heck with it, why use a single model when it's known that no single one is good for everything? With eBoruta, you can (and probably should) try various approaches.
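For instance, here is how permutation importance can be computed for an SVM with plain scikit-learn. This illustrates that importance need not come from trees; it is not an eBoruta call, and how a custom model plugs into the selector itself is covered in the documentation.

from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

x, y = make_classification(n_features=10, n_informative=3, random_state=0)

# Importance comes from the permutation procedure, not the model class,
# so any fitted estimator with a score method works here.
model = SVC().fit(x, y)
result = permutation_importance(model, x, y, n_repeats=10, random_state=0)
print(result.importances_mean)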

Installation

eBoruta is a pure Python package installable via pip:

pip install eboruta

Usage

Below we present the simplest use case corresponding to the default RF-based algorithm.

from eBoruta import eBoruta
from sklearn.datasets import make_classification

# Generate a synthetic classification dataset
x, y = make_classification()

# Fit the selector with the default RF-based setup
selector = eBoruta()
selector.fit(x, y)

However, you're encouraged to explore the documentation for more usage examples.

Disclaimer

eBoruta is still under development, and not all model types have been tested. We encourage you to raise an issue if you find a bug or an example where it fails, or if you want to propose a new feature, an interface improvement, or whatever else.

References

[1] Miron B. Kursa and Witold R. Rudnicki. "Feature Selection with the Boruta Package". In: Journal of Statistical Software 36.11 (2010), pp. 1–13.

[2] Scott Lundberg and Su-In Lee. "A Unified Approach to Interpreting Model Predictions". In: arXiv (2017). eprint: 1705.07874.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

eboruta-0.2.dev0.tar.gz (3.5 MB)

Uploaded Source

Built Distribution

eboruta-0.2.dev0-py3-none-any.whl (20.3 kB)

Uploaded Python 3

File details

Details for the file eboruta-0.2.dev0.tar.gz.

File metadata

  • Download URL: eboruta-0.2.dev0.tar.gz
  • Upload date:
  • Size: 3.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: python-httpx/0.24.0

File hashes

Hashes for eboruta-0.2.dev0.tar.gz:

  • SHA256: e044ccc24ff7a745e87d647403d1d4f3f8f99f2316caa651c34a178c905fff41
  • MD5: 2ff96a76947e77249fcaf49294510dde
  • BLAKE2b-256: f444e9e4cf645ae0bd99ddad754f0d476857b514eaee3fcd6bfb9164ed674786


File details

Details for the file eboruta-0.2.dev0-py3-none-any.whl.

File metadata

  • Download URL: eboruta-0.2.dev0-py3-none-any.whl
  • Upload date:
  • Size: 20.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: python-httpx/0.24.0

File hashes

Hashes for eboruta-0.2.dev0-py3-none-any.whl:

  • SHA256: beea40fb41a938cd8e44f78773a0f5997ad1ad0d9f41214fbb5d4921d327b1b7
  • MD5: e7f5825346f60e338f68441bff5073ac
  • BLAKE2b-256: 6e3e36d8246c6d8d857f1bb14d1fe47d44466b56632a2c7c52b7f5834c8f6b07

