Improved Kernel PLS and Fast Cross-Validation.
Fast CPU, GPU, and TPU Python implementations of Improved Kernel PLS Algorithm #1 and Algorithm #2 [1]. Improved Kernel PLS is both fast [2] and numerically stable [3]. The CPU implementations use NumPy [4] and subclass BaseEstimator from scikit-learn [5], allowing integration into scikit-learn's ecosystem of machine learning algorithms and pipelines. For example, the CPU implementations can be used with scikit-learn's cross_validate. The GPU and TPU implementations use Google's JAX [6]. JAX supports automatic differentiation while allowing CPU, GPU, and TPU execution. This means the JAX implementations can be combined with deep learning approaches, as the PLS fit is differentiable.
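To make the algorithm concrete, below is a minimal NumPy sketch of the core recursion of Improved Kernel PLS Algorithm #1, following Dayal and MacGregor [1]. This is an illustrative simplification, not the package's implementation: the package additionally offers Algorithm #2 (which works on X^T X), centering/scaling options, and input validation. The function name `ikpls_alg1` is chosen here for illustration only.

```python
import numpy as np


def ikpls_alg1(X, Y, A):
    """Illustrative sketch of Improved Kernel PLS Algorithm #1 [1].

    X: (N, K) predictors, Y: (N, M) targets, A: number of components.
    Returns B of shape (A, K, M): regression coefficients for 1..A components.
    """
    K = X.shape[1]
    M = Y.shape[1]
    B = np.zeros((A, K, M))
    P = np.zeros((K, A))  # X loadings
    Q = np.zeros((M, A))  # Y loadings
    R = np.zeros((K, A))  # X rotations
    XtY = X.T @ Y  # the only kernel matrix deflated by the algorithm
    for a in range(A):
        if M == 1:
            w = XtY / np.linalg.norm(XtY)
        else:
            # Dominant eigenvector of (XtY)^T (XtY) gives the Y-side direction.
            _, eigvecs = np.linalg.eigh(XtY.T @ XtY)
            w = XtY @ eigvecs[:, -1:]
            w = w / np.linalg.norm(w)
        # Orthogonalize the rotation against the previous loadings.
        r = w.copy()
        for j in range(a):
            r = r - (P[:, j : j + 1].T @ w) * R[:, j : j + 1]
        t = X @ r  # Algorithm #1 computes scores from X itself
        tt = (t.T @ t).item()
        p = (X.T @ t) / tt
        q = (XtY.T @ r) / tt
        XtY = XtY - (p @ q.T) * tt  # deflate the kernel matrix, not X
        P[:, a : a + 1], Q[:, a : a + 1], R[:, a : a + 1] = p, q, r
        B[a] = R[:, : a + 1] @ Q[:, : a + 1].T
    return B
```

A useful sanity check on this sketch: with as many components as X has (full-rank) columns and no early termination, the PLS coefficients coincide with the ordinary least squares solution.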
The documentation is available at https://ikpls.readthedocs.io/en/latest/; examples can be found at https://github.com/Sm00thix/IKPLS/tree/main/examples.
Fast Cross-Validation
In addition to the implementations mentioned above, this package
contains the novel, fast cross-validation algorithms by Engstrøm [7]
using both IKPLS algorithms. The fast cross-validation algorithms
benefit both IKPLS algorithms, and especially Algorithm #2. The fast
cross-validation algorithms are mathematically equivalent to the
classical cross-validation algorithm but much quicker when the number of
cross-validation splits exceeds 3. The fast cross-validation algorithms
correctly handle (column-wise) centering and scaling of the X and Y
input matrices using training set means and standard deviations to avoid
data leakage from the validation set. Centering and scaling can be
enabled or disabled independently of each other, and for X and Y, by setting
the parameters center_X, center_Y, scale_X, and scale_Y, respectively.
The fast cross-validation algorithms correctly handle row-wise preprocessing
such as (row-wise) centering and scaling of the X and Y input matrices,
convolution, or other preprocessing. Row-wise preprocessing can safely be
applied before passing the data to the fast cross-validation algorithms.
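The leakage point above is worth making concrete. The following is a plain NumPy sketch of the statistical convention the fast cross-validation algorithms follow (not the fast algorithms themselves): for each split, column-wise means and standard deviations are computed from the training rows only and then applied to both training and validation rows. The helper name `center_scale_split` is illustrative, not part of the package.

```python
import numpy as np


def center_scale_split(X, train_idx, val_idx):
    """Column-wise center and scale one CV split using training statistics only.

    Computing mean/std from all rows (including validation rows) would leak
    validation information into the model; training-only statistics avoid that.
    """
    mean = X[train_idx].mean(axis=0)
    std = X[train_idx].std(axis=0, ddof=1)
    std[std == 0] = 1.0  # guard against constant columns
    return (X[train_idx] - mean) / std, (X[val_idx] - mean) / std
```

After this transformation, the training block has zero column means and unit column standard deviations, while the validation block generally does not, since it is scaled with the training statistics.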
Prerequisites
The JAX implementations support running on CPU, GPU, and TPU. To use a GPU or TPU, follow the instructions in the JAX Installation Guide.
To ensure that the JAX implementations use float64, set the environment variable JAX_ENABLE_X64=True as described in JAX's Current Gotchas.
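For instance, the flag can be set from Python, provided this happens before JAX is first imported (setting it after `import jax` has no effect on types already defaulted to float32):

```python
import os

# Must be set before the first `import jax`; otherwise JAX keeps its
# default float32 precision, which can hurt PLS numerical stability.
os.environ["JAX_ENABLE_X64"] = "True"
```

Alternatively, export the variable in the shell (`export JAX_ENABLE_X64=True`) before launching Python.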
Installation
- Install the package for Python3 using the following command:

  ```shell
  pip3 install ikpls
  ```

- Now you can import the NumPy and JAX implementations with:

  ```python
  from ikpls.numpy_ikpls import PLS as NpPLS
  from ikpls.jax_ikpls_alg_1 import PLS as JAXPLS_Alg_1
  from ikpls.jax_ikpls_alg_2 import PLS as JAXPLS_Alg_2
  from ikpls.fast_cross_validation.numpy_ikpls import PLS as NpPLS_FastCV
  ```
Quick Start
Use the ikpls package for PLS modeling:

```python
import numpy as np

from ikpls.numpy_ikpls import PLS

N = 100  # Number of samples.
K = 50  # Number of features.
M = 10  # Number of targets.
A = 20  # Number of latent variables (PLS components).

# Using float64 is important for numerical stability.
X = np.random.uniform(size=(N, K)).astype(np.float64)
Y = np.random.uniform(size=(N, M)).astype(np.float64)

# The other PLS algorithms and implementations have the same interface for fit()
# and predict(). The fast cross-validation implementation with IKPLS has a
# different interface.
np_ikpls_alg_1 = PLS(algorithm=1)
np_ikpls_alg_1.fit(X, Y, A)

# Has shape (A, N, M) = (20, 100, 10). Contains a prediction for all possible
# numbers of components up to and including A.
y_pred = np_ikpls_alg_1.predict(X)

# Has shape (N, M) = (100, 10).
y_pred_20_components = np_ikpls_alg_1.predict(X, n_components=20)
(y_pred_20_components == y_pred[19]).all()  # True

# The internal model parameters can be accessed as follows:
np_ikpls_alg_1.B  # Regression coefficients tensor of shape (A, K, M) = (20, 50, 10).
np_ikpls_alg_1.W  # X weights matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.P  # X loadings matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.Q  # Y loadings matrix of shape (M, A) = (10, 20).
np_ikpls_alg_1.R  # X rotations matrix of shape (K, A) = (50, 20).
np_ikpls_alg_1.T  # X scores matrix of shape (N, A) = (100, 20).
                  # This is only computed for IKPLS Algorithm #1.
```
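The (A, N, M) prediction tensor above follows directly from the (A, K, M) coefficient tensor B: under the default configuration with no centering or scaling, the prediction for a components is simply X @ B[a - 1]. The snippet below illustrates this shape convention in pure NumPy with a random placeholder tensor standing in for a fitted B; it does not require a fitted model.

```python
import numpy as np

# Placeholder shapes matching the Quick Start: A components, K features,
# M targets, N samples. B here is random, standing in for a fitted model.
A, K, M, N = 20, 50, 10, 100
rng = np.random.default_rng(0)
B = rng.normal(size=(A, K, M))
X = rng.normal(size=(N, K))

# Stacking X @ B[a] over all a yields the (A, N, M) prediction tensor;
# the prediction for exactly 20 components is the single slice X @ B[19].
y_pred_all = np.einsum("akm,nk->anm", B, X)
y_pred_20 = X @ B[19]
```

When centering or scaling is enabled, the package additionally applies the stored training statistics inside predict(), so the bare matrix product above only matches the default configuration.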
Examples
In examples, you will find:
- Fit and Predict with NumPy.
- Fit and Predict with JAX.
- Cross-validate with NumPy.
- Cross-validate with NumPy and fast cross-validation.
- Cross-validate with JAX.
- Compute the gradient of a preprocessing convolution filter with respect to the RMSE between the target value and the value predicted by PLS after fitting with JAX.
Contribute
To contribute, please read the Contribution Guidelines.
References
1. Dayal, B. S., & MacGregor, J. F. (1997). Improved PLS algorithms. Journal of Chemometrics, 11(1), 73-85.
2. Alin, A. (2009). Comparison of PLS algorithms when the number of objects is much larger than the number of variables. Statistical Papers, 50, 711-720.
3. Andersson, M. (2009). A comparison of nine PLS1 algorithms. Journal of Chemometrics, 23(10), 518-529.
4. NumPy
5. scikit-learn
6. JAX
7. Engstrøm, O.-C. G. (2024). Shortcutting Cross-Validation: Efficiently Deriving Column-Wise Centered and Scaled Training Set $\mathbf{X}^\mathbf{T}\mathbf{X}$ and $\mathbf{X}^\mathbf{T}\mathbf{Y}$ Without Full Recomputation of Matrix Products or Statistical Moments.