Use FFT-based mutual information estimation and an accelerated gradient method to screen variables and optimize nonconvex sparse learning problems on large CSV files or large genetic bed/bim/fam files. Multiprocessing is now available.
fastHDMI -- fast High-Dimensional Mutual Information estimation
Kai Yang
kai.yang2@mail.mcgill.ca
To be rewritten...
This package uses FFT-based mutual information screening and an accelerated gradient method to select important variables from (potentially very) high-dimensional large datasets. A `share_memory` option is available for multiprocess computing. As a feature, it can be applied to large `.csv` data in parallel in a memory-efficient manner, using FFT-based KDE to estimate mutual information extremely fast. A `tqdm` progress bar is now included, which is useful on cloud computing platforms. The `verbose` option can take the values 0, 1, or 2, with 2 being the most verbose, 1 showing only the progress bar, and 0 being silent. The corresponding paper by Yang et al. is coming soon.
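To give a feel for the FFT/KDE idea, here is a minimal numpy/scipy sketch that estimates mutual information between two continuous variables from a smoothed 2-D histogram. This is only an illustration of the principle, not fastHDMI's implementation; the function name, the Gaussian smoothing step, and the `bandwidth` parameter are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mi_continuous_continuous_sketch(x, y, N=500, bandwidth=3.0):
    # Bin the sample on an N x N grid
    hist, _, _ = np.histogram2d(x, y, bins=N)
    # Gaussian smoothing stands in for the FFT-based KDE step
    p_xy = gaussian_filter(hist, sigma=bandwidth)
    p_xy /= p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of y
    mask = p_xy > 0
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))
```

The grid-based form is what makes the FFT speedup possible: the kernel smoothing becomes a convolution on a regular grid.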
The available functions are:
- `continuous_screening_plink` calculates the mutual information between a continuous outcome and a biallelic SNP using FFT. Missing data in the input variables is acceptable and will be removed per bivariate calculation. The arguments are: `bed_file`, `bim_file`, `fam_file` are the locations of the plink1 files; `outcome` and `outcome_iid` are the outcome values and the iids for the outcome. For genetic data, the order of the SNP iids and the outcome iids usually doesn't match; while the SNP iids can be obtained from the plink1 files, the outcome iids are to be declared separately here. `outcome_iid` should be a list of strings or a one-dimensional numpy string array. `N=500` is the default grid size for FFT.
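As a rough illustration of what such a screen computes, the mutual information between a continuous outcome and a 0/1/2 genotype can be written as a genotype-weighted KL divergence between the conditional and marginal outcome densities. The sketch below uses plain histograms instead of fastHDMI's FFT-based KDE, and the function name and `bins` argument are illustrative only:

```python
import numpy as np

def mi_continuous_012_sketch(outcome, snp, bins=50):
    # Drop pairs with missing values, as the screening functions do
    keep = ~(np.isnan(outcome) | np.isnan(snp))
    outcome, snp = outcome[keep], snp[keep]
    # Shared bin edges so conditional and marginal densities are comparable
    edges = np.histogram_bin_edges(outcome, bins=bins)
    p_y, _ = np.histogram(outcome, bins=edges)
    p_y = p_y / p_y.sum()
    mi = 0.0
    for g in (0, 1, 2):
        sel = snp == g
        p_g = sel.mean()
        if p_g == 0:
            continue
        p_y_g, _ = np.histogram(outcome[sel], bins=edges)
        p_y_g = p_y_g / p_y_g.sum()
        m = (p_y_g > 0) & (p_y > 0)
        # I(Y; G) = sum_g p(g) * KL( p(y|g) || p(y) )
        mi += p_g * float(np.sum(p_y_g[m] * np.log(p_y_g[m] / p_y[m])))
    return mi
```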
- `binary_screening_plink` works similarly.
- `continuous_screening_plink_parallel` and `binary_screening_plink_parallel` are the multiprocessing versions of the above two functions; `core_num` declares the number of cores to be used for multiprocessing.
- `MI_continuous_continuous` and `MI_binary_continuous` calculate the mutual information between two continuous variables, and between a binary and a continuous variable, respectively. `MI_binary_012` and `MI_012_012` are `jit`-compiled functions; the latter can be used for clumping on very large genetic datasets.
- `binary_screening_csv`, `continuous_screening_csv`, `binary_screening_csv_parallel`, and `continuous_screening_csv_parallel` work on large CSV files directly in a memory-efficient manner. Note that the leftmost column is assumed to be the outcome; if it is not, use `_usecols` to set the first element to be the outcome column label. `_usecols` is a list of the column labels to be used, whose first element should be the outcome; the returned mutual information results match the order of `_usecols`. `Pearson_screening_csv_parallel` calculates Pearson's correlation between the outcome and the covariates only, in a similar manner -- this matters because `pandas.DataFrame.corr` calculates pairwise Pearson correlations for the entire dataframe. `csv_engine` can use `dask` for low-memory situations, one of `pandas`'s `read_csv` engines, or the `fastparquet` engine on a previously created `parquet` file for faster speed. If `fastparquet` is chosen, declare `parquet_file` as the filepath to the parquet file; if `dask` is chosen to read a very large CSV, a larger `sample` may need to be specified.
- `continuous_skMIscreening_csv_parallel` uses the MI calculation from `sklearn.feature_selection.mutual_info_regression` to carry out the screening process instead.
- `clump_plink_parallel` and `clump_continuous_csv_parallel` carry out mutual-information-based clumping in parallel at a very fast speed.
- `UAG_LM_SCAD_MCP`, `UAG_logistic_SCAD_MCP`: these functions find a local minimizer for the SCAD/MCP-penalized linear/logistic models. The arguments are: `design_matrix`: the design matrix input, a two-dimensional numpy array; `outcome`: the outcome, a one-dimensional numpy array -- continuous for the linear model, binary for the logistic model; `beta_0`: the starting value; optional, and if not declared it will be calculated from the Gauss-Markov estimator of $\beta$; `tol`: the tolerance parameter, measured as the uniform norm of the difference between two iterations; `maxit`: the maximum number of iterations allowed; `_lambda`: the value of $\lambda$; `penalty`: either `"SCAD"` or `"MCP"`; `a=3.7`, `gamma=2`: `a` for SCAD and `gamma` for MCP; `a` is recommended to be set to $3.7$; `L_convex`: the L-smoothness constant of the convex component; if not declared, it will be calculated automatically; `add_intercept_column`: boolean, whether the function should add an intercept column.
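For reference, the SCAD and MCP penalties these solvers use have standard closed forms. A minimal numpy sketch of the two penalty functions (function names are illustrative; the formulas are the usual Fan-Li SCAD and Zhang MCP definitions):

```python
import numpy as np

def scad_penalty(beta, _lambda, a=3.7):
    """SCAD penalty, elementwise: linear near zero, quadratic transition,
    then constant (so large coefficients are not shrunk)."""
    b = np.abs(beta)
    small = b <= _lambda
    mid = (b > _lambda) & (b <= a * _lambda)
    p = np.empty_like(b, dtype=float)
    p[small] = _lambda * b[small]
    p[mid] = (2 * a * _lambda * b[mid] - b[mid] ** 2 - _lambda ** 2) / (2 * (a - 1))
    p[~small & ~mid] = _lambda ** 2 * (a + 1) / 2
    return p

def mcp_penalty(beta, _lambda, gamma=2.0):
    """MCP penalty, elementwise: quadratically tapered toward a flat tail."""
    b = np.abs(beta)
    inner = b <= gamma * _lambda
    return np.where(inner, _lambda * b - b ** 2 / (2 * gamma),
                    gamma * _lambda ** 2 / 2)
```

Both penalties agree with the lasso near zero but flatten out for large coefficients, which is what removes the lasso's bias on strong signals -- at the price of nonconvexity, hence the accelerated gradient machinery above.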
- `solution_path_LM`, `solution_path_logistic`: calculate the solution path for linear/logistic models; the only difference from the above is that `lambda_` is now a one-dimensional numpy array of the $\lambda$ values to be used.
- `UAG_LM_SCAD_MCP_strongrule`, `UAG_logistic_SCAD_MCP_strongrule` work just like `UAG_LM_SCAD_MCP` and `UAG_logistic_SCAD_MCP`, except that they use the strong rule to screen out many covariates before carrying out the optimization step; the same holds for `solution_path_LM_strongrule` and `solution_path_logistic_strongrule`. The strong rule increases computational speed dramatically.
- `SNP_UAG_LM_SCAD_MCP` and `SNP_UAG_logistic_SCAD_MCP` work similarly to `UAG_LM_SCAD_MCP` and `UAG_logistic_SCAD_MCP`, and `SNP_solution_path_LM` and `SNP_solution_path_logistic` work similarly to `solution_path_LM` and `solution_path_logistic` -- except that they take plink1 files and are therefore more memory-efficient. Since PCA adjustment is usually used to account for population structure, principal components can be supplied via `pca` as a two-dimensional array in which each column is one principal component. The PCA versions are `SNP_UAG_LM_SCAD_MCP_PCA` and `SNP_UAG_logistic_SCAD_MCP_PCA`.
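For intuition on the strong rule: in the sequential strong rule of Tibshirani et al. for $\ell_1$-type problems, covariate $j$ is discarded at a new value $\lambda_{\text{new}}$ when $|x_j^\top r|$ at the previous solution falls below $2\lambda_{\text{new}} - \lambda_{\text{prev}}$. A hypothetical sketch of that screening test (unscaled inner products; not fastHDMI's internal code):

```python
import numpy as np

def strong_rule_keep(X, resid, lam_new, lam_prev):
    """Sequential strong rule check: keep covariate j only when
    |x_j' r| >= 2*lam_new - lam_prev, where r is the residual at the
    previous lambda on the path."""
    scores = np.abs(X.T @ resid)
    return scores >= 2 * lam_new - lam_prev
```

Covariates failing the test are very likely inactive at the new $\lambda$, so the optimizer only has to run on a much smaller design matrix; implementations typically verify the KKT conditions afterwards to catch the rare violations.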