Python implementation of integrated path stability selection (IPSS)
Integrated path stability selection (IPSS)
Integrated path stability selection (IPSS) is a general method for improving feature selection algorithms. Given an n-by-p data matrix X (n = number of samples, p = number of features), and an n-dimensional response variable y, IPSS applies a base selection algorithm to subsamples of the data to select features (columns of X) that are most related to y. This package includes IPSS for gradient boosting (IPSSGB), random forests (IPSSRF), and L1-regularized linear models (IPSSL). The final outputs are efp scores and q-values for each feature in X.
- The efp score of feature j is the expected number of false positives selected when j is selected.
- The q-value of feature j is the smallest false discovery rate (FDR) at which feature j is selected.
Key attributes
- Error control: IPSS controls the number of false positives and the FDR.
- Generality: IPSSGB and IPSSRF are nonlinear, nonparametric methods. IPSSL is linear.
- Speed: IPSS is efficient. For example, IPSSGB runs in under 20 seconds when `n = 500` and `p = 5000`.
- Simplicity: The only required inputs are `X` and `y`. Users can also specify the base method (IPSSGB, IPSSRF, or IPSSL), and the target number of false positives or the target FDR.
Associated papers
IPSSL: https://arxiv.org/abs/2403.15877
IPSSGB and IPSSRF:
Installation
To install from PyPI:
pip install ipss
Examples
Examples are available in the examples folder. These include:
- A simple example, `simple_example.py` (open in Google Colab).
- Various simulation experiments, `simulation.py` (open in Google Colab).
- IPSS applied to cancer data, `cancer.py` (open in Google Colab).
Usage
```python
from ipss import ipss

# load data X and y

# run ipss:
ipss_output = ipss(X, y)

# select features based on target number of false positives
target_fp = 1
efp_scores = ipss_output['efp_scores']
selected_features = []
for feature_index, efp_score in efp_scores:
    if efp_score <= target_fp:
        selected_features.append(feature_index)
print(f'Selected features (target E(FP) = {target_fp}): {selected_features}')

# select features based on target FDR
target_fdr = 0.1
q_values = ipss_output['q_values']
selected_features = []
for feature_index, q_value in q_values:
    if q_value <= target_fdr:
        selected_features.append(feature_index)
print(f'Selected features (target FDR = {target_fdr}): {selected_features}')
```
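The two loops above differ only in which score list they scan. Since both lists use the documented `(feature_index, score)` tuple format, they can be folded into one small helper (shown here with made-up scores, not output from a real `ipss` run):

```python
def select_features(scored, threshold):
    """Return feature indices whose score is at most `threshold`.

    `scored` is a list of (feature_index, score) tuples, the format this
    package documents for both 'efp_scores' and 'q_values'.
    """
    return [index for index, score in scored if score <= threshold]

# Illustrative scores only; these are not from a real ipss run.
fake_q_values = [(4, 0.01), (9, 0.08), (2, 0.25), (7, 0.60)]
print(select_features(fake_q_values, 0.1))  # prints [4, 9]
```

With real output, `select_features(ipss_output['q_values'], 0.1)` reproduces the FDR loop above.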
Results
`ipss_output = ipss(X, y)` is a dictionary containing:
- `efp_scores`: List of tuples `(feature_index, efp_score)` with features ordered by their efp scores from smallest to largest (list of length `p`).
- `q_values`: List of tuples `(feature_index, q_value)` with features ordered by their q-values from smallest to largest (list of length `p`).
- `runtime`: Runtime of the algorithm in seconds (float).
- `selected_features`: List of indices of features selected by IPSS; empty list if `target_fp` and `target_fdr` are not specified (list of ints).
- `stability_paths`: Estimated selection probabilities at each parameter value (array of shape `(n_alphas, p)`).
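A mock dictionary with the same keys (every value invented for illustration) shows how the pieces fit together:

```python
# Mock ipss output mirroring the documented structure; the values here
# are invented for illustration, not produced by a real run.
mock_output = {
    'efp_scores': [(3, 0.2), (0, 1.1), (1, 2.5)],   # ascending efp scores
    'q_values': [(3, 0.01), (0, 0.12), (1, 0.40)],  # ascending q-values
    'runtime': 4.7,                                  # seconds
    'selected_features': [3],                        # populated when a target is given
    'stability_paths': [[0.0] * 3 for _ in range(100)],  # shape (n_alphas, p)
}

# Because both score lists are sorted, the top-ranked feature is the first tuple:
best_feature, best_efp = mock_output['efp_scores'][0]
print(best_feature, best_efp)  # prints 3 0.2
```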
Full list of `ipss` arguments

Required arguments:
- `X`: Features (array of shape `(n, p)`), where `n` is the number of samples and `p` is the number of features.
- `y`: Response (array of shape `(n,)` or `(n, 1)`). `ipss` automatically detects whether `y` is continuous or binary.

Optional arguments:
- `selector`: Base algorithm to use (str; default `'gb'`). Options are:
  - `'gb'`: Gradient boosting with XGBoost.
  - `'l1'`: L1-regularized linear or logistic regression.
  - `'rf'`: Random forest with scikit-learn.
- `selector_args`: Arguments for the base algorithm (dict; default `None`).
- `target_fp`: Target number of false positives to control (positive float; default `None`).
- `target_fdr`: Target false discovery rate (FDR) (positive float; default `None`).
- `B`: Number of subsampling steps (int; default `100` for IPSSGB, `50` otherwise).
- `n_alphas`: Number of values in the regularization or threshold grid (int; default `100`).
- `ipss_function`: Function to apply to selection probabilities (str; default `'h3'`). Options are:
  - `'h1'`: Linear function, `h1(x) = 2x - 1 if x >= 0.5 else 0`.
  - `'h2'`: Quadratic function, `h2(x) = (2x - 1)**2 if x >= 0.5 else 0`.
  - `'h3'`: Cubic function, `h3(x) = (2x - 1)**3 if x >= 0.5 else 0`.
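Transcribed directly from the definitions above, the three options look like this in Python (a sketch of the formulas only; the package's internal implementation may differ):

```python
def h1(x):
    # Linear: 2x - 1 for x >= 0.5, else 0
    return 2 * x - 1 if x >= 0.5 else 0

def h2(x):
    # Quadratic: (2x - 1)**2 for x >= 0.5, else 0
    return (2 * x - 1) ** 2 if x >= 0.5 else 0

def h3(x):
    # Cubic: (2x - 1)**3 for x >= 0.5, else 0
    return (2 * x - 1) ** 3 if x >= 0.5 else 0

# All three map selection probabilities below 0.5 to 0; higher powers
# shrink mid-range probabilities more sharply.
for h in (h1, h2, h3):
    print(h.__name__, h(0.4), h(0.9))
```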
- `preselect`: Number (if int) or proportion (if float) of features to preselect; `False` for no preselection (default `0.05`).
- `preselect_min`: Minimum number of features to keep in the preselection step (int; default `200`).
- `preselect_args`: Arguments for the preselection algorithm (dict; default `None`).
- `cutoff`: Maximum value of the theoretical integral bound `I(Lambda)` (positive float; default `0.05`).
- `delta`: Defines the probability measure; see Associated papers (float; default `1`).
- `standardize_X`: Scale features to have mean 0 and standard deviation 1 (bool; default `None`).
- `center_y`: Center the response to have mean 0 (bool; default `None`).
- `true_features`: List of true feature indices when known, e.g., in simulations (list; default `None`).
- `n_jobs`: Number of jobs to run in parallel (int; default `1`).
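Putting several of the optional arguments together, here is a hedged end-to-end sketch on synthetic data (the data-generation choices and signal strengths below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
true_features = [0, 1, 2]  # informative columns, known only because the data are simulated
y = X[:, true_features] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(n)

try:
    from ipss import ipss
    ipss_output = ipss(
        X, y,
        selector='rf',                # IPSSRF: random forest base method
        target_fdr=0.1,               # populates 'selected_features'
        true_features=true_features,  # available here only for this simulation
    )
    print(ipss_output['selected_features'])
except ImportError:
    print('install with `pip install ipss` to run the call above')
```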
General observations/recommendations:
- IPSSGB is usually best for capturing nonlinear relationships between features and the response.
- IPSSL is usually best for capturing linear relationships between features and the response.
- `target_fp` and `target_fdr` (specify at most one) are problem specific and left to the user.
- In general, all other parameters should not be changed.
- `selector_args` include, e.g., decision tree parameters for tree-based models.
- Results are robust to `B` provided it is larger than `25`.
- Results are robust to `n_alphas` provided it is larger than `50`.
- `'h3'` yields the most true positives; `'h2'` is more conservative, and `'h1'` even more so.
- Preselection can significantly reduce computation time; results are otherwise robust to it.
- Results are robust to `cutoff` provided it is between `0.025` and `0.1`.
- Results are robust to `delta` provided it is between `0` and `1.5`.
- Standardization is automatically applied for IPSSL; IPSSGB and IPSSRF are unaffected by it.
- Centering `y` is automatically applied for IPSSL; IPSSGB and IPSSRF are unaffected by it.