Confound-isolating cross-validation approach to control for a confounding effect in a predictive model.
Confound_prediction is a Python module for controlling confounding effects in prediction and classification models.
A seemingly successful prediction model may in fact be driven by a confounding effect that is correlated with the effect of interest, so it is important to verify that detected associations are not driven by unwanted effects. This is a common issue in neuroscience, epidemiology, economics, agriculture, and other fields.
This module provides 3 methods to tackle confounding effects in predictive models:
1. Confound-isolating cross-validation
2. Out-of-sample deconfounding
3. Deconfounding test and train jointly (which should not be used, and is provided only for illustration)
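The contrast between methods 2 and 3 is that out-of-sample deconfounding fits the confound-removal model on the training set only and then applies it unchanged to the test set, whereas deconfounding train and test jointly lets test-set information leak into the fit. Below is a minimal sketch of method 2, assuming a linear confound model; the function name and the use of scikit-learn's `LinearRegression` are illustrative, not the package's API:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def out_of_sample_deconfound(X_train, z_train, X_test, z_test):
    """Regress the confound z out of the features X, fitting the removal
    model on the training set only and applying it as-is to the test set."""
    conf_model = LinearRegression().fit(z_train.reshape(-1, 1), X_train)
    X_train_clean = X_train - conf_model.predict(z_train.reshape(-1, 1))
    X_test_clean = X_test - conf_model.predict(z_test.reshape(-1, 1))
    return X_train_clean, X_test_clean
```

After this step the training residuals are (numerically) uncorrelated with the confound, while the test set never influenced the fitted removal model.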
“Confound-isolating cross-validation” is a non-parametric approach to controlling for a confounding effect in a predictive model. It is based on crafting a test set on which the effect of interest is independent of the confounding effect.
What to expect from Confound_prediction?
The framework is based on anti-mutual-information sampling, a novel sampling approach that creates a test set in which the effect of interest is independent of the confounding effect. For a graphical illustration of classic versus confound-isolating cross-validation, see Chyzhyk et al. (2018) below.
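To see the criterion being minimized, scikit-learn's `mutual_info_regression` gives a quick nearest-neighbor estimate of the mutual information between a target and a confound; a small synthetic example (this estimator is used here for illustration and is not necessarily the one used inside the package):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(42)
z = rng.randn(200)                    # confound
y = 0.8 * z + 0.2 * rng.randn(200)   # target strongly driven by the confound

# Estimated mutual information (in nats) between target and confound.
# A value near zero on a crafted test set would indicate that the effect
# of interest has been isolated from the confound.
mi = mutual_info_regression(z.reshape(-1, 1), y, random_state=0)[0]
```

On a confound-isolated test set the same estimate should drop toward zero.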
How does it work?
You provide us with:
- X - data with shape (n_samples, n_features)
- y - target vector with shape (n_samples,)
- z - confound vector with shape (n_samples,)
- min_sample_size - minimum sample size to be reached; default is 10% of the data
- n_remove - number of samples removed at each sampling iteration; default is 4
- prng - seed or state controlling the pseudo-random number generator; default is None
- cv_folds - number of folds in the cross-validation; default is 10
We return:
- x_test, x_train, y_test, y_train, ids_test, ids_train - the test and train splits of X and y, along with the sampled indices
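The sampling loop behind this interface can be sketched in a few lines: starting from the full data, repeatedly drop the `n_remove` samples whose removal most reduces the dependence between `y` and `z`, until `min_sample_size` is reached; what remains becomes the test set. The sketch below is illustrative only: the function names are hypothetical, `cv_folds` is omitted, and the absolute Pearson correlation is used as a cheap stand-in for the mutual-information criterion the method actually minimizes.

```python
import numpy as np

def _loo_abs_corr(y, z):
    """|Pearson correlation| of (y, z) after deleting each sample in turn,
    computed in closed form from running sums (vectorized leave-one-out)."""
    m = len(y)
    sy, sz = y.sum(), z.sum()
    syy, szz, syz = (y * y).sum(), (z * z).sum(), (y * z).sum()
    n1 = m - 1
    cov = (syz - y * z) - (sy - y) * (sz - z) / n1
    var_y = (syy - y * y) - (sy - y) ** 2 / n1
    var_z = (szz - z * z) - (sz - z) ** 2 / n1
    return np.abs(cov) / np.sqrt(var_y * var_z)

def confound_isolating_sampling(X, y, z, min_sample_size=None, n_remove=4,
                                prng=None):
    """Greedy sketch of anti-mutual-information sampling: repeatedly drop
    the n_remove samples whose individual removal most lowers the y-z
    dependence, until min_sample_size is reached."""
    rng = np.random.RandomState(prng)
    n_samples = len(y)
    if min_sample_size is None:
        min_sample_size = int(0.1 * n_samples)  # default: 10% of the data
    ids = np.arange(n_samples)
    while len(ids) - n_remove >= min_sample_size:
        # score[i] = |corr(y, z)| on the current set with sample i removed
        scores = _loo_abs_corr(y[ids], z[ids])
        # tiny random jitter so ties are broken differently for each prng
        scores = scores + 1e-12 * rng.rand(len(ids))
        # drop the n_remove samples whose removal lowers |corr| the most
        keep = np.argsort(scores)[n_remove:]
        ids = ids[np.sort(keep)]
    ids_test = ids
    ids_train = np.setdiff1d(np.arange(n_samples), ids_test)
    return (X[ids_test], X[ids_train], y[ids_test], y[ids_train],
            ids_test, ids_train)
```

The remaining samples form a test set where the target-confound dependence is much weaker than in the full data; all removed samples go to the training set.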
The confound_prediction package requires:
- Python (>= 3.5)
- SciPy (>= 1.1.0)
- scikit-learn (>= 0.21.2)
- NumPy (>= 1.14.2)
- Matplotlib (>= 2.2.2), for the example visualizations
- Seaborn (>= 0.8), for the example visualizations
pip install TBD
Example: create a train set and test set without a confounding effect
Example: compare predictions after different deconfounding methods on data with a direct link between target and confound
Example: compare predictions on data with different confounding effects
Example: evolution of mutual information and correlation at each iteration of the confound-isolating cross-validation method
D. Chyzhyk, G. Varoquaux, B. Thirion and M. Milham, "Controlling a confound in predictive models with a test set minimizing its effect," 2018 International Workshop on Pattern Recognition in Neuroimaging (PRNI), Singapore, 2018, pp. 1-4. doi: 10.1109/PRNI.2018.8423961
| Filename, size | File type | Python version |
| --- | --- | --- |
| confound_prediction-0.0.1a1-py3-none-any.whl (19.7 kB) | Wheel | py3 |
| confound_prediction-0.0.1a1.tar.gz (21.9 kB) | Source | None |