
py-fastglmpca


A Python implementation of the fastglmpca algorithm (Weine et al., Bioinformatics, 2024) with a PyTorch backend.

The core idea of fastglmpca is to use a fast iterative algorithm ("alternating Poisson regression") to find a low-rank approximation of the input matrix X under a Poisson model. It can be used for dimensionality reduction of count data matrices, e.g. scRNA-Seq UMI matrices or nearest-neighbour count matrices in Skip-Gram-like representations.
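Concretely, the model treats each entry of X as a Poisson count whose log-rate is given by a low-rank factorization. A minimal numpy sketch of data generated under such a model (all names here are illustrative, not part of the package API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, k = 100, 50, 3

# Low-rank log-rate matrix: log E[X] = L @ F.T, with k << min(n_samples, n_features)
L = rng.normal(scale=0.3, size=(n_samples, k))
F = rng.normal(scale=0.3, size=(n_features, k))
rates = np.exp(L @ F.T)

# Counts drawn from the Poisson model that fastglmpca fits
X = rng.poisson(rates)
print(X.shape)  # (100, 50)
```

Fitting the model is the inverse problem: given the observed counts X, recover low-rank factors whose exponentiated product best explains X under the Poisson likelihood.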

The original R package is available on GitHub; this Python package is an unofficial implementation and was not tested in the paper.

Installation

fastglmpca can be installed via pip:

pip install fastglmpca

or the latest development version can be installed from GitHub using:

pip install git+https://github.com/serjisa/py-fastglmpca

Quick start

fastglmpca works with both sparse and dense matrices. The input matrix X should be a 2D array-like object with shape (n_samples, n_features). The output matrix Z will have shape (n_samples, n_components), where n_components is the number of components to be computed.
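For example, a scipy sparse count matrix already has the required (n_samples, n_features) layout, so it can be passed to fastglmpca directly. A minimal sketch (names are illustrative):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Sparse count matrix of shape (n_samples, n_features), as produced by
# e.g. a UMI count pipeline; fastglmpca accepts scipy sparse matrices,
# numpy arrays, and torch tensors alike.
dense_counts = rng.poisson(0.2, size=(500, 100))
X = sparse.csr_matrix(dense_counts)
print(X.shape)  # (500, 100)
```

With n_pcs=10, the resulting embedding Z for this input would have shape (500, 10).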

import fastglmpca

# Fitting the model
model = fastglmpca.poisson(X, n_pcs=10, return_model=True)
X_PoiPCA = model.U
# Alternatively, you can run
# X_PoiPCA = fastglmpca.poisson(X, n_pcs=10)

# Fitting new data to existing model
Y_PoiPCA = model.project(Y)

Examples of scRNA-Seq dataset processing are available in this and this notebook.

API

The function fastglmpca.poisson takes the following parameters:

  • X (np.ndarray, torch.Tensor, or scipy.sparse matrix): input data matrix of shape (n_samples, n_features).
  • n_pcs (int, optional): number of principal components to compute. Default is 30.
  • max_iter (int, optional): maximum number of iterations of the optimization algorithm. Default is 1000.
  • tol (float, optional): convergence tolerance for the optimization algorithm. Default is 1e-4.
  • col_size_factor (bool, optional): whether to include a column size factor in the model. Default is True.
  • row_intercept (bool, optional): whether to include a row intercept in the model. Default is True.
  • verbose (bool, optional): whether to print verbose output during fitting. Default is False.
  • device (str or None, optional): device to use for computation. If None, uses "cuda" if available, otherwise "mps" if available, otherwise "cpu". Default is None.
  • progress_bar (bool, optional): whether to show a progress bar during fitting. Default is True.
  • seed (int or None, optional): random seed for reproducibility. Default is 42.
  • return_model (bool, optional): whether to return the fitted model object instead of the embedding. Default is False.
  • learning_rate (float, optional): step size used in updates. Default is 0.5.
  • num_ccd_iter (int, optional): number of cyclic coordinate descent iterations per main iteration used to refine the factors. Default is 3.
  • batch_size_rows (int or None, optional): number of rows per batch when computing expectation terms; trades memory for speed. Default is an adaptive value up to 1024.
  • batch_size_cols (int or None, optional): number of columns per batch when computing expectation terms; trades memory for speed. Default is an adaptive value up to 1024.
  • init (str, optional): initialization method for the factor matrices. 'svd' (default) uses an SVD of log1p(X) to produce a strong starting point; 'random' initializes LL and FF with small Gaussian noise, which can be useful for stress-testing convergence or for avoiding the cost of the SVD on extremely large inputs.
  • adaptive_lr (bool, optional): whether to use an adaptive learning rate with backtracking. Default is True.
  • lr_decay (float, optional): decay factor for the learning rate. Default is 0.5.
  • slowing_loglik (bool, optional): whether to adaptively reduce the learning rate when the rate of change of the log-likelihood increases. Default is True.
  • min_learning_rate (float, optional): minimum learning rate. Default is 1e-5.
  • max_backtracks (int, optional): maximum number of backtracking steps in the line search. Default is 3.
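The adaptive learning-rate parameters (adaptive_lr, lr_decay, min_learning_rate, max_backtracks) follow the usual backtracking line-search pattern: take a gradient step, and shrink the step size until the objective improves. A hypothetical numpy sketch of that pattern for an ascent step on the row factors; this illustrates the general technique, not the package's internal code:

```python
import numpy as np

def poisson_loglik(X, logrates):
    # Poisson log-likelihood, dropping the constant log(X!) term
    return np.sum(X * logrates - np.exp(logrates))

def backtracking_step(X, L, F, grad_L, lr=0.5, lr_decay=0.5,
                      min_lr=1e-5, max_backtracks=3):
    # Take an ascent step on L; if the log-likelihood does not improve,
    # shrink the learning rate, at most max_backtracks times and
    # never below min_lr (mirroring the parameter defaults above).
    base = poisson_loglik(X, L @ F.T)
    candidate = L + lr * grad_L
    for _ in range(max_backtracks):
        if poisson_loglik(X, candidate @ F.T) >= base or lr <= min_lr:
            break
        lr = max(lr * lr_decay, min_lr)
        candidate = L + lr * grad_L
    return candidate, lr

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(6, 5)).astype(float)
L = rng.normal(scale=0.1, size=(6, 2))
F = rng.normal(scale=0.1, size=(5, 2))

# Gradient of the Poisson log-likelihood with respect to L
grad_L = (X - np.exp(L @ F.T)) @ F
L_new, lr = backtracking_step(X, L, F, grad_L)
```

Shrinking the step only on failed improvements keeps the fast default step size for well-behaved iterations while guarding against overshooting when the log-likelihood surface is steep.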
