py-fastglmpca
Python implementation of fastglmpca (Weine et al., Bioinformatics, 2024) algorithm with PyTorch backend.
The main idea of fastglmpca is to use a fast iterative algorithm ("Alternating Poisson Regression") to find a low-rank approximation of the input matrix X under a Poisson model. It can be used for dimensionality reduction of count data matrices (e.g. scRNA-Seq UMI matrices or nearest-neighbour count matrices in Skip-Gram-like representations).
The original R package is available on GitHub; note that this Python package is an independent implementation and is not the official implementation tested in the paper.
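As a rough illustration of the model being fit (a sketch of the Poisson matrix-factorization idea from the paper, not code from this package; the factor names L and F here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, k = 100, 50, 3

# Low-rank factors: the model assumes X_ij ~ Poisson(exp((L @ F.T)_ij))
L = rng.normal(scale=0.3, size=(n, k))
F = rng.normal(scale=0.3, size=(p, k))
rate = np.exp(L @ F.T)

# Simulate a count matrix from the low-rank Poisson model
X = rng.poisson(rate)

# Poisson log-likelihood (up to the constant log(X!) term);
# this is the objective the alternating updates maximize
loglik = np.sum(X * (L @ F.T) - rate)
print(X.shape)  # (100, 50)
```

The algorithm alternates between updating the row factors with the column factors fixed and vice versa, each step being a set of independent Poisson regressions.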
Installation
fastglmpca can be installed from PyPI via pip:
pip install fastglmpca
or the latest development version can be installed from GitHub using:
pip install git+https://github.com/serjisa/py-fastglmpca
Quick start
fastglmpca works with both sparse and dense matrices. The input matrix X should be a 2D array-like object with shape (n_samples, n_features). The output matrix will have shape (n_samples, n_pcs), where n_pcs is the number of components to compute.
import fastglmpca
# Fitting the model
model = fastglmpca.poisson(X, n_pcs=10, return_model=True)
X_PoiPCA = model.U
# Alternatively, you can run
# X_PoiPCA = fastglmpca.poisson(X, n_pcs=10)
# Fitting new data to existing model
Y_PoiPCA = model.project(Y)
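Conceptually, projecting new data means fitting row factors for the new rows while keeping the learned column factors fixed. A minimal sketch of that idea in plain NumPy (simple gradient ascent on the Poisson log-likelihood, assumed mechanics rather than the package's actual solver):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 30, 2
F = rng.normal(scale=0.2, size=(p, k))      # "learned" column factors, held fixed
Y = rng.poisson(np.exp(rng.normal(scale=0.2, size=(5, k)) @ F.T))  # new count data

# Gradient ascent on the Poisson log-likelihood w.r.t. the new row factors U
U = np.zeros((Y.shape[0], k))
lr = 0.05
for _ in range(200):
    rate = np.exp(U @ F.T)
    grad = (Y - rate) @ F                   # d/dU of sum(Y * (U @ F.T) - exp(U @ F.T))
    U += lr * grad

print(U.shape)  # (5, 2)
```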
Examples of scRNA-Seq dataset processing are available in this notebook and this notebook.
API
The function fastglmpca.poisson has the following parameters:
- X: np.ndarray, torch.Tensor, or scipy.sparse matrix. Input data matrix of shape (n_samples, n_features).
- n_pcs: int, optional. Number of principal components to compute. Default is 30.
- max_iter: int, optional. Maximum number of iterations for the optimization algorithm. Default is 1000.
- tol: float, optional. Tolerance for convergence of the optimization algorithm. Default is 1e-4.
- col_size_factor: bool, optional. Whether to use a column size factor in the model. Default is True.
- row_intercept: bool, optional. Whether to use a row intercept in the model. Default is True.
- verbose: bool, optional. Whether to print verbose output during fitting. Default is False.
- device: str or None, optional. Device to use for computation. If None, uses "cuda" if available, otherwise "mps" if available, otherwise "cpu". Default is None.
- progress_bar: bool, optional. Whether to show a progress bar during fitting. Default is True.
- seed: int or None, optional. Random seed for reproducibility. Default is 42.
- return_model: bool, optional. Whether to return the fitted model object. Default is False.
- learning_rate: float, optional. Step size used in updates. Default is 0.5.
- num_ccd_iter: int, optional. Number of cyclic coordinate descent iterations per main iteration to refine factors. Default is 3.
- batch_size_rows: int or None, optional. Number of rows for batched computation of expectation terms; tunes memory vs. speed. Default uses an adaptive value up to 1024.
- batch_size_cols: int or None, optional. Number of columns for batched computation of expectation terms; tunes memory vs. speed. Default uses an adaptive value up to 1024.
- init: str, optional. Initialization method for the factor matrices. 'svd' (default) uses SVD on log1p(X) to produce a strong starting point; 'random' uses small Gaussian noise for LL and FF, which can be useful for stress-testing convergence or avoiding SVD costs on extremely large inputs.
- adaptive_lr: bool, optional. Whether to use an adaptive learning rate with backtracking. Default is True.
- lr_decay: float, optional. Decay factor for the learning rate. Default is 0.5.
- slowing_loglik: bool, optional. Whether to adaptively reduce the learning rate when the log-likelihood's rate of change increases. Default is True.
- min_learning_rate: float, optional. Minimum learning rate. Default is 1e-5.
- max_backtracks: int, optional. Maximum number of backtracks for line search. Default is 3.
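The adaptive_lr, lr_decay, min_learning_rate, and max_backtracks parameters describe a backtracking line-search scheme. A minimal sketch of that mechanism (assumed mechanics on a toy 1-D Poisson objective, not the package's exact update):

```python
import numpy as np

# Toy objective: Poisson log-likelihood of a count y with rate exp(b)
y = 7.0
loglik = lambda b: y * b - np.exp(b)

b, lr = 0.0, 0.5
lr_decay, min_lr, max_backtracks = 0.5, 1e-5, 3

for _ in range(50):
    grad = y - np.exp(b)          # derivative of the log-likelihood
    step = lr
    # Backtracking: shrink the step until the log-likelihood improves,
    # giving up after max_backtracks shrinks or at the minimum step size
    for _ in range(max_backtracks):
        if loglik(b + step * grad) >= loglik(b) or step <= min_lr:
            break
        step *= lr_decay
    b += step * grad

print(round(np.exp(b), 3))  # the fitted rate approaches y = 7
```

Capping the number of backtracks trades a guaranteed monotone improvement for bounded per-iteration cost, which is the same trade-off the max_backtracks parameter exposes.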