
py-fastglmpca


Python implementation of fastglmpca (Weine et al., Bioinformatics, 2024) algorithm with PyTorch backend.

The core idea of fastglmpca is to use a fast iterative algorithm ("alternating Poisson regression") to find a low-rank approximation of the input matrix X under a Poisson model. It can be used for dimensionality reduction of count-data matrices (e.g. scRNA-Seq UMI matrices, or nearest-neighbour count matrices in Skip-Gram-like representations).
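
Concretely, the model treats each entry of X as a Poisson draw whose rate is the exponential of a low-rank product of factor matrices. A minimal NumPy sketch of this generative model (variable names here are illustrative, not the package's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_components = 100, 50, 5

# Low-rank factors: per-sample scores and per-feature loadings.
L = rng.normal(scale=0.3, size=(n_samples, n_components))
F = rng.normal(scale=0.3, size=(n_features, n_components))

# Poisson rates are the elementwise exponential of the low-rank product.
rates = np.exp(L @ F.T)

# Count matrix sampled from the model; fitting aims to recover L and F.
X = rng.poisson(rates)
print(X.shape)  # (100, 50)
```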

The original R package is available on GitHub; this Python package is not the official implementation that was evaluated in the paper. In contrast to the original implementation, we do not use line search; instead, we use an adaptive learning rate with backtracking.
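
The backtracking idea is: take a gradient step; if the objective gets worse, shrink the learning rate by a decay factor and retry, up to a maximum number of backtracks. A simplified sketch on a scalar problem, illustrating the scheme rather than the package's actual update rule:

```python
def backtracking_step(x, grad, loss_fn, lr, lr_decay=0.5,
                      min_lr=1e-5, max_backtracks=3):
    """Try a gradient step; shrink lr while the loss increases."""
    loss0 = loss_fn(x)
    for _ in range(max_backtracks + 1):
        x_new = x - lr * grad
        if loss_fn(x_new) <= loss0 or lr <= min_lr:
            return x_new, lr
        lr = max(lr * lr_decay, min_lr)
    return x_new, lr

# Minimize f(x) = (x - 3)^2 from x = 0 with an initially too-large step.
loss = lambda x: (x - 3.0) ** 2
x, lr = 0.0, 1.5
for _ in range(50):
    grad = 2.0 * (x - 3.0)
    x, lr = backtracking_step(x, grad, loss, lr)
print(round(x, 3))  # 3.0 -- the oversized step was shrunk, then converged
```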

Installation

fastglmpca can be installed via pip:

pip install fastglmpca

or the latest development version can be installed from GitHub using:

pip install git+https://github.com/serjisa/py-fastglmpca

Quick start

fastglmpca works with both sparse and dense matrices. The input matrix X should be a 2D array-like object with shape (n_samples, n_features). The output matrix Z will have shape (n_samples, n_components), where n_components is the number of components to be computed.

import fastglmpca

# Fitting the model
model = fastglmpca.poisson(X, n_pcs=10, return_model=True)
X_PoiPCA = model.U
# Alternatively, you can run
# X_PoiPCA = fastglmpca.poisson(X, n_pcs=10)

# Fitting new data to existing model
Y_PoiPCA = model.project(Y)

Examples of scRNA-Seq dataset processing are available as notebooks.

API

Function fastglmpca.poisson has the following parameters:

  • X : np.ndarray, torch.Tensor, or scipy.sparse matrix. Input data matrix of shape (n_samples, n_features).
  • n_pcs : int, optional. Number of principal components to compute. Default is 30.
  • max_iter : int, optional. Maximum number of iterations of the optimization algorithm. Default is 1000.
  • tol : float, optional. Convergence tolerance for the optimization algorithm. Default is 1e-4.
  • col_size_factor : bool, optional. Whether to include a column size factor in the model. Default is True.
  • row_intercept : bool, optional. Whether to include a row intercept in the model. Default is True.
  • verbose : bool, optional. Whether to print verbose output during fitting. Default is False.
  • device : str or None, optional. Device to use for computation. If None, uses "cuda" if available, otherwise "mps" if available, otherwise "cpu". Default is None.
  • progress_bar : bool, optional. Whether to show a progress bar during fitting. Default is True.
  • seed : int or None, optional. Random seed for reproducibility. Default is 42.
  • return_model : bool, optional. Whether to return the fitted model object. Default is False.
  • learning_rate : float, optional. Step size used in updates. Default is 0.5.
  • num_ccd_iter : int, optional. Number of cyclic coordinate descent iterations per main iteration, used to refine the factors. Default is 3.
  • batch_size_rows : int or None, optional. Number of rows per batch when computing expectation terms; trades memory for speed. By default an adaptive value up to 1024 is used.
  • batch_size_cols : int or None, optional. Number of columns per batch when computing expectation terms; trades memory for speed. By default an adaptive value up to 1024 is used.
  • init : str, optional. Initialization method for the factor matrices. 'svd' (default) uses SVD on log1p(X) to produce a strong starting point; 'random' uses small Gaussian noise for LL and FF, which can be useful for stress-testing convergence or for avoiding the cost of SVD on extremely large inputs.
  • adaptive_lr : bool, optional. Whether to use an adaptive learning rate with backtracking. Default is True.
  • lr_decay : float, optional. Decay factor applied to the learning rate during backtracking. Default is 0.5.
  • min_learning_rate : float, optional. Minimum learning rate. Default is 1e-5.
  • max_backtracks : int, optional. Maximum number of backtracking steps per update. Default is 3.
