
PyPsupertime

PyPsupertime is a scalable Python re-implementation of the R package psupertime for the analysis of single-cell RNA sequencing data in which the cells carry an ordinal annotation (e.g. a time series or dosage). It can be used to identify a small subset of genes that contribute to the ordering and to reconstruct a pseudotime.

The original methodology is published in Bioinformatics: https://doi.org/10.1093/bioinformatics/btac227

Getting Started

Install via pip:

pip install pypsupertime

This installs pypsupertime and its dependencies automatically.

We recommend installing inside a virtualenv or pipenv environment.

Description

This package implements a modular API for preprocessing single-cell data, restructuring the input data under different statistical assumptions, and creating and fitting memory-efficient supervised ordinal logistic regression models.

The central idea of the original work remains unchanged: find a linear model that accurately predicts the ordinal labels while being restricted to a sparse set of features (genes), by searching along a path of regularization hyperparameters. To address the memory inefficiencies of the original implementation, coordinate descent is replaced with stochastic gradient descent and online fitting. Additionally, new parametrized penalties are available, allowing finer control over the sparsity.

From a statistical perspective, the ordinal nature of the input data can be modeled under the cumulative proportional-odds, forward continuation-ratio, or backward continuation-ratio assumption.
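
To sketch what such a restructuring means, the forward continuation-ratio factorization P(y = k | y >= k) turns one ordinal problem into a sequence of binary problems (an illustrative sketch, not the package's API; the cumulative and backward variants binarize the labels analogously):

```python
import numpy as np

def forward_continuation_ratio_tasks(y, n_classes):
    """Split ordinal labels into binary tasks: for each threshold k,
    keep the samples with y >= k and ask whether y == k.
    Each task estimates P(y = k | y >= k)."""
    tasks = []
    for k in range(n_classes - 1):  # the last class is determined by the rest
        mask = y >= k
        tasks.append((mask, (y[mask] == k).astype(int)))
    return tasks

y = np.array([0, 1, 2, 2])
tasks = forward_continuation_ratio_tasks(y, 3)
```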

All model and preprocessing classes implement the scikit-learn estimator or transformer interfaces, so they fit seamlessly into the scikit-learn ecosystem. They are wrapped by the Psupertime class at the core of this package, which represents all input and output data as AnnData objects.

A more detailed description is available in the documentation hosted on Read the Docs.

Basic Usage

The code below runs a psupertime analysis with default settings on a data set represented as an AnnData object and stored in an .h5ad file. The data has a numeric ordinal cell annotation stored in the obs dataframe under the key "time".

from pypsupertime import Psupertime

p = Psupertime()
adata = p.run("/path/to/data_sce.h5ad", "time")

This produces progress output similar to:

Input Data: n_genes=24153, n_cells=992
Preprocessing: done. mode='all', n_genes=11305, n_cells=992
Grid Search CV: CPUs=4, n_folds=5
Regularization: done
Refit on all data: done. accuracy=0.5195, n_genes=113
Total elapsed time: 0:01:26.141356

The code loads the single-cell data and performs default preprocessing, then runs a 5-fold cross-validated grid search along the default regularization path to identify the hyperparameter that yields the best-scoring sparse model, and finally refits the model with that regularization on all data. Using the refit model, relevant genes are identified and a psupertime is predicted for every cell.

The following snippets show how to quickly inspect and evaluate the results. The regularization progress, model performance, and selected genes are shown here; many more plots are available.

p.plot_grid_search(title="Grid Search")

[figure: grid search]

p.plot_model_perf((adata.X, adata.obs.time), figsize=(6,5))

[figure: confusion matrix]

p.plot_identified_gene_coefficients(adata, n_top=20)

[figure: identified gene coefficients]

p.plot_labels_over_psupertime(adata, "time")

[figure: labels over psupertime]

It is highly recommended to be aware of the preprocessing steps performed, or to perform key preprocessing manually. For a complete overview, see the documentation.
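
As a rough sketch of the kind of steps involved (library-size normalization followed by a log-transform is a typical scRNA-seq default; the exact defaults used by pypsupertime are described in its documentation):

```python
import numpy as np

def normalize_log1p(counts, target_sum=10_000.0):
    """Scale each cell (row) to a common library size, then apply log1p.
    A common scRNA-seq preprocessing step, shown here for illustration."""
    lib = counts.sum(axis=1, keepdims=True)
    scaled = counts / np.clip(lib, 1e-12, None) * target_sum
    return np.log1p(scaled)

counts = np.array([[1.0, 0.0, 3.0],
                   [0.0, 2.0, 2.0]])  # cells x genes
X = normalize_log1p(counts)
```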

Development Roadmap

  • Extension of the pypsupertime.plots module with further analyses
  • Extension of the Preprocessing to allow custom pipelines (see version 1.1.0)
  • Integration into the scanpy project
  • Unit tests, once the code is stable enough

Changelog

  • Version 1.2.2:

    • Bugfixes
    • Removes the inplace option in run(): it cannot currently be enforced during preprocessing. A copy of the input adata is always created and the processed object returned.
  • Version 1.2.0:

    • Returns an anndata object in Psupertime.run() if a filename is given
  • Versions 1.1.1, 1.1.2, 1.1.3:

    • Bugfixes
  • Version 1.1.0:

    • Adds a preprocessing_class parameter to enable custom or no preprocessing
    • Adds a heuristic for selecting the lowest regularization parameter when none is specified
    • Adds a shorthand for selecting the optimal regularization parameter at 1/2 standard error from the best score
    • Fixes a bug when using smooth with sparse matrices
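
The 1/2-standard-error shorthand mentioned above can be sketched as follows: among the grid points whose mean cross-validation score lies within half a standard error of the best score, pick the largest (most sparsifying) regularization. This is an illustrative implementation with made-up scores, not the package's exact rule:

```python
import numpy as np

def select_half_se(alphas, mean_scores, se_scores):
    """Return the largest regularization strength whose mean CV score is
    within half a standard error of the best score."""
    best = np.argmax(mean_scores)
    threshold = mean_scores[best] - 0.5 * se_scores[best]
    candidates = np.flatnonzero(mean_scores >= threshold)
    return alphas[candidates[np.argmax(alphas[candidates])]]

alphas = np.array([0.001, 0.01, 0.1, 1.0])
mean_scores = np.array([0.80, 0.79, 0.78, 0.60])
se_scores = np.array([0.02, 0.02, 0.02, 0.02])
alpha = select_half_se(alphas, mean_scores, se_scores)
```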

Download files

Download the file for your platform.

Source Distribution

pypsupertime-1.2.2.tar.gz (19.5 kB)

Uploaded Source

Built Distribution


pypsupertime-1.2.2-py3-none-any.whl (20.2 kB)

Uploaded Python 3

File details

Details for the file pypsupertime-1.2.2.tar.gz.

File metadata

  • Download URL: pypsupertime-1.2.2.tar.gz
  • Upload date:
  • Size: 19.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for pypsupertime-1.2.2.tar.gz:

  • SHA256: 6acdcf7884ec429fa831002b9c90228a15481cadc13567b52ad02aea98947288
  • MD5: 4c755e6bce527937e2bcb6146929e1d6
  • BLAKE2b-256: 97acac5f6686fbddb148308b5b91e30323af23e04710288740a3286aac9419f5


File details

Details for the file pypsupertime-1.2.2-py3-none-any.whl.

File metadata

  • Download URL: pypsupertime-1.2.2-py3-none-any.whl
  • Upload date:
  • Size: 20.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for pypsupertime-1.2.2-py3-none-any.whl:

  • SHA256: 38210a27146d19500a5b1a822497894fbda73eef8432431fcbe62f90b8381b44
  • MD5: 732e5d3fce21f9988ce4bd37882698e7
  • BLAKE2b-256: 26f4b70f7067039aac822b0e507c85dc41cc5cc9ecdcf4c4c2304136a3435d32

