Fast CPU, GPU, and TPU Python implementations of Improved Kernel PLS Algorithm #1 and Algorithm #2 by Dayal and MacGregor [1]. Improved Kernel PLS is both fast [2] and numerically stable [3]. The CPU implementations use NumPy [4] and subclass BaseEstimator from scikit-learn [5], allowing integration into scikit-learn's ecosystem of machine learning algorithms and pipelines; for example, the CPU implementations can be used with scikit-learn's cross_validate. The GPU and TPU implementations use Google's JAX [6]. JAX supports automatic differentiation while allowing CPU, GPU, and TPU execution. Because the PLS fit is differentiable, the JAX implementations can be combined with deep learning approaches.
The documentation is available at https://ikpls.readthedocs.io/en/latest/; examples can be found at https://github.com/Sm00thix/IKPLS/tree/main/examples.
Fast Cross-Validation
In addition to the implementations mentioned above, this package contains the novel, fast cross-validation algorithms introduced in [7], using both IKPLS algorithms. Fast cross-validation benefits both IKPLS algorithms, and especially Algorithm #2. The fast cross-validation algorithms are mathematically equivalent to classical cross-validation but are much quicker when the number of cross-validation splits exceeds 3.

The fast cross-validation algorithms correctly handle (column-wise) centering and scaling of the X and Y input matrices using training-set means and standard deviations, avoiding data leakage from the validation set. Centering and scaling are enabled by setting the center parameter and the scale parameter to True, respectively. The fast cross-validation algorithms also correctly handle row-wise preprocessing, such as (row-wise) centering and scaling of the X and Y input matrices, convolution, or other preprocessing; row-wise preprocessing can safely be applied before passing the data to the fast cross-validation algorithms.
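As an example of leakage-free row-wise preprocessing, SNV (standard normal variate) centers and scales each sample (row) using only that row's own statistics, so it can safely be applied before fast cross-validation. A minimal NumPy sketch; the snv helper is illustrative and not part of the package:

```python
import numpy as np


def snv(X):
    # Standard normal variate: center and scale each row (sample)
    # using only that row's own mean and standard deviation.
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)


X = np.random.uniform(size=(100, 50)).astype(np.float64)
X_preprocessed = snv(X)

# Each row now has mean 0 and standard deviation 1. No statistics were
# shared across samples, so no validation-set information can leak.
```

Column-wise centering and scaling, by contrast, use statistics computed across samples and should therefore be left to the fast cross-validation algorithms themselves via the center and scale parameters.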
Pre-requisites
The JAX implementations support running on CPU, GPU, and TPU. To use a GPU or TPU, follow the instructions in the JAX Installation Guide.
To ensure that the JAX implementations use float64, set the environment variable JAX_ENABLE_X64=True, as described in JAX's Current Gotchas.
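A sketch of the two documented ways to enable 64-bit precision in JAX: set the environment variable before JAX is first imported, or use JAX's config API at startup.

```python
import os

# Option 1: set the environment variable before the first `import jax`.
os.environ["JAX_ENABLE_X64"] = "True"

# Option 2 (equivalent, per JAX's Current Gotchas): use the config API
# at startup, before any JAX computation runs.
# import jax
# jax.config.update("jax_enable_x64", True)
```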
Installation
- Install the package for Python3 using the following command:

  ```shell
  $ pip3 install ikpls
  ```

- Now you can import the NumPy and JAX implementations with:

  ```python
  from ikpls.numpy_ikpls import PLS as NpPLS
  from ikpls.jax_ikpls_alg_1 import PLS as JAXPLS_Alg_1
  from ikpls.jax_ikpls_alg_2 import PLS as JAXPLS_Alg_2
  from ikpls.fast_cross_validation.numpy_ikpls import PLS as NpPLS_FastCV
  ```
Quick Start
Use the ikpls package for PLS modeling
```python
import numpy as np

from ikpls.numpy_ikpls import PLS

N = 100  # Number of samples.
K = 50  # Number of features.
M = 10  # Number of targets.
A = 20  # Number of latent variables (PLS components).

# Using float64 is important for numerical stability.
X = np.random.uniform(size=(N, K)).astype(np.float64)
Y = np.random.uniform(size=(N, M)).astype(np.float64)

# The other PLS algorithms and implementations have the same interface for fit()
# and predict(). The fast cross-validation implementation with IKPLS has a
# different interface.
np_ikpls_alg_1 = PLS(algorithm=1)
np_ikpls_alg_1.fit(X, Y, A)

# Has shape (A, N, M) = (20, 100, 10). Contains a prediction for all possible
# numbers of components up to and including A.
y_pred = np_ikpls_alg_1.predict(X)

# Has shape (N, M) = (100, 10).
y_pred_20_components = np_ikpls_alg_1.predict(X, n_components=20)
(y_pred_20_components == y_pred[19]).all()  # True

# The internal model parameters can be accessed as follows:

# Regression coefficients tensor of shape (A, K, M) = (20, 50, 10).
np_ikpls_alg_1.B

# X weights matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.W

# X loadings matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.P

# Y loadings matrix of shape (M, A) = (10, 20).
np_ikpls_alg_1.Q

# X rotations matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.R

# X scores matrix of shape (N, A) = (100, 20).
# This is only computed for IKPLS Algorithm #1.
np_ikpls_alg_1.T
```
Examples
In the examples directory, you will find examples of how to use the implementations in this package.
Contribute
To contribute, please read the Contribution Guidelines.