
Python implementation of the SCENIC pipeline for transcription factor inference from single-cell transcriptomics experiments.


pySCENIC is a lightning-fast python implementation of the SCENIC pipeline (Single-Cell rEgulatory Network Inference and Clustering) which enables biologists to infer transcription factors, gene regulatory networks and cell types from single-cell RNA-seq data.

The pioneering work was done in R and results were published in Nature Methods [1].

pySCENIC can be run on a single desktop machine but easily scales to multi-core clusters to analyze thousands of cells quickly. The latter is achieved via the dask framework for distributed computing [2].

The pipeline has three steps:

  1. First transcription factors (TFs) and their target genes, together defining a regulon, are derived using gene inference methods which solely rely on correlations between expression of genes across cells. The arboreto package is used for this step.

  2. These regulons are refined by pruning targets that do not have an enrichment for a corresponding motif of the TF, effectively separating direct from indirect targets based on the presence of cis-regulatory footprints.

  3. Finally, the original cells are differentiated and clustered on the activity of these discovered regulons.

Features

All the functionality of the original R implementation is available, and in addition:

  1. You can leverage multi-core and multi-node clusters using dask and its distributed scheduler.

  2. We implemented a version of the recovery of input genes that takes into account the weights associated with these genes.

  3. Regulons (i.e. the regulatory networks that connect a TF with its target genes) whose targets are repressed are now also derived and used for cell enrichment analysis.

Installation

The latest stable release of the package itself can be installed via pip install pyscenic.

You can also install the bleeding edge (i.e. less stable) version of the package directly from the source:

git clone https://github.com/aertslab/pySCENIC.git
cd pySCENIC/
pip install .

To successfully use this pipeline you also need two auxiliary datasets:

  1. Databases ranking the whole genome of your species of interest based on regulatory features (i.e. transcription factors). Ranking databases are typically stored in the feather format and can be downloaded from cisTargetDBs.

  2. Motif annotation database providing the missing link between an enriched motif and the transcription factor that binds this motif. This pipeline needs a TSV text file where every line represents a particular annotation.
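Because the annotation file is plain TSV, it can be inspected with pandas before running the pipeline. A minimal sketch, using hypothetical column names ('#motif_id', 'gene_name', 'description') and motif identifiers purely for illustration; the real files contain more columns:

```python
import io
import pandas as pd

# Hypothetical two-row excerpt of a motif annotation table.
tsv_text = (
    "#motif_id\tgene_name\tdescription\n"
    "jaspar__MA0004.1\tAhr\tdirectly annotated\n"
    "jaspar__MA0006.1\tAhrr\tdirectly annotated\n"
)

# Each line is one motif-to-TF annotation.
annotations = pd.read_csv(io.StringIO(tsv_text), sep="\t")
print(annotations["gene_name"].tolist())  # → ['Ahr', 'Ahrr']
```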

Annotations            Species
-------------------    -----------------------
HGNC annotations       Homo sapiens
MGI annotations        Mus musculus
Flybase annotations    Drosophila melanogaster

Tutorial

For this tutorial 3,005 single cell transcriptomes taken from the mouse brain (somatosensory cortex and hippocampal regions) are used as an example [4]. The analysis is done in a Jupyter notebook.

First we import the necessary modules and declare some constants:

import os
import glob
import pickle
import pandas as pd
import numpy as np

from dask.diagnostics import ProgressBar

from arboreto.utils import load_tf_names
from arboreto.algo import grnboost2

from pyscenic.rnkdb import FeatherRankingDatabase as RankingDatabase
from pyscenic.utils import modules_from_adjacencies, load_motifs
from pyscenic.prune import prune2df, df2regulons
from pyscenic.aucell import aucell

import seaborn as sns

DATA_FOLDER="~/tmp"
RESOURCES_FOLDER="~/resources"
DATABASE_FOLDER = "~/databases/"
SCHEDULER="123.122.8.24:8786"
DATABASES_GLOB = os.path.join(DATABASE_FOLDER, "mm9-*.feather")
MOTIF_ANNOTATIONS_FNAME = os.path.join(RESOURCES_FOLDER, "motifs-v9-nr.mgi-m0.001-o0.0.tbl")
MM_TFS_FNAME = os.path.join(RESOURCES_FOLDER, 'mm_tfs.txt')
SC_EXP_FNAME = os.path.join(RESOURCES_FOLDER, "GSE60361_C1-3005-Expression.txt")
REGULONS_FNAME = os.path.join(DATA_FOLDER, "regulons.p")
MOTIFS_FNAME = os.path.join(DATA_FOLDER, "motifs.csv")

Preliminary work

The scRNA-Seq data is downloaded from GEO: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE60361 and loaded into memory:

ex_matrix = pd.read_csv(SC_EXP_FNAME, sep='\t', header=0, index_col=0).T
ex_matrix.shape
(3005, 19970)

and the list of transcription factors (TFs) for Mus musculus is read from file. The list of known TFs for Mm was prepared from TFCat (cf. notebooks section).

tf_names = load_tf_names(MM_TFS_FNAME)
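The TF list is a plain text file with one gene symbol per line; a hand-rolled equivalent of such a loader, written against a hypothetical two-gene example file, looks like:

```python
# Write a tiny illustrative TF list (one gene symbol per line).
with open("mm_tfs_example.txt", "w") as f:
    f.write("Sox2\nNeurod1\n")

# A plain-Python sketch of reading a one-symbol-per-line TF list.
with open("mm_tfs_example.txt") as f:
    tf_names = [line.strip() for line in f if line.strip()]

print(tf_names)  # → ['Sox2', 'Neurod1']
```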

Finally the ranking databases are loaded:

db_fnames = glob.glob(DATABASES_GLOB)
def name(fname):
    return os.path.basename(fname).split(".")[0]
dbs = [RankingDatabase(fname=fname, name=name(fname)) for fname in db_fnames]
dbs
[FeatherRankingDatabase(name="mm9-tss-centered-10kb-10species"),
 FeatherRankingDatabase(name="mm9-500bp-upstream-7species"),
 FeatherRankingDatabase(name="mm9-500bp-upstream-10species"),
 FeatherRankingDatabase(name="mm9-tss-centered-5kb-10species"),
 FeatherRankingDatabase(name="mm9-tss-centered-10kb-7species"),
 FeatherRankingDatabase(name="mm9-tss-centered-5kb-7species")]

Phase I: Inference of co-expression modules

In the initial phase of the pySCENIC pipeline, co-expression modules are inferred from the single-cell expression profiles.

Run GENIE3 or GRNBoost from arboreto to infer co-expression modules

The arboreto package is used for this phase of the pipeline. For this notebook, only a sample of 1,000 cells is used for co-expression module inference.

adjacencies = grnboost2(ex_matrix, tf_names=tf_names, verbose=True)
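The returned adjacencies form a long-format table of weighted TF-to-target links (in arboreto the columns are 'TF', 'target' and 'importance'). A toy example, with made-up gene names and scores, of filtering and persisting such a table:

```python
import pandas as pd

# Toy adjacency table in the same long format that grnboost2 returns:
# one row per (TF, target) link with an importance score.
adjacencies = pd.DataFrame({
    "TF": ["Sox2", "Sox2", "Neurod1"],
    "target": ["Nes", "Gfap", "Dcx"],
    "importance": [12.5, 3.1, 8.7],
})

# Keep only the strongest links and persist them for later phases.
strong = adjacencies[adjacencies["importance"] > 5.0]
strong.to_csv("adjacencies_example.tsv", sep="\t", index=False)
print(len(strong))  # → 2
```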

Derive potential regulons from these co-expression modules

Regulons are derived from adjacencies based on three methods.

The first method to create the TF-modules is to select the best targets for each transcription factor:

  1. Targets with importance > the 50th percentile.

  2. Targets with importance > the 75th percentile.

  3. Targets with importance > the 90th percentile.

The second method is to select the top targets for a given TF:

  1. Top 50 targets (targets with highest weight)

The third method is to select the best regulators for each gene (this is actually how GENIE3 works internally). These regulators can then be assigned back to each TF to form the TF-modules. In this way we create three more gene-sets:

  1. Targets for which the TF is within its top 5 regulators

  2. Targets for which the TF is within its top 10 regulators

  3. Targets for which the TF is within its top 50 regulators

A distinction is made between modules containing targets that are activated and modules containing targets that are repressed. The relationship between a TF and its target (i.e. activation or repression) is derived from the original expression profiles using the Pearson product-moment correlation coefficient.

In addition, the transcription factor is added to the module and modules that have less than 20 genes are removed.
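The first (percentile-based) method can be sketched in plain pandas on a toy adjacency table; this illustrates only the selection rule, not the actual pySCENIC implementation:

```python
import pandas as pd

# Toy adjacencies for a single TF (made-up gene names and scores).
adjacencies = pd.DataFrame({
    "TF": ["Sox2"] * 4,
    "target": ["Nes", "Gfap", "Dcx", "Olig2"],
    "importance": [12.5, 3.1, 8.7, 1.2],
})

# For each percentile cut-off, keep targets whose importance exceeds it.
modules = {}
for q in (0.50, 0.75, 0.90):
    cutoff = adjacencies["importance"].quantile(q)
    modules[q] = set(adjacencies.loc[adjacencies["importance"] > cutoff, "target"])

print(sorted(modules[0.50]))  # → ['Dcx', 'Nes']
```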

modules = list(modules_from_adjacencies(adjacencies, ex_matrix))

Phase II: Prune modules for targets with cis regulatory footprints (aka RcisTarget)

# Calculate a list of enriched motifs and the corresponding target genes for all modules.
with ProgressBar():
    df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME)

# Create regulons from this table of enriched motifs.
regulons = df2regulons(df)

# Save the enriched motifs and the discovered regulons to disk.
df.to_csv(MOTIFS_FNAME)
with open(REGULONS_FNAME, "wb") as f:
    pickle.dump(regulons, f)

Clusters can be leveraged in the following way:

# The clusters can be leveraged via the dask framework:
df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME, client_or_address=SCHEDULER)

Reloading the enriched motifs and regulons from file should be done as follows:

df = load_motifs(MOTIFS_FNAME)
with open(REGULONS_FNAME, "rb") as f:
    regulons = pickle.load(f)

Phase III: Cellular regulon enrichment matrix (aka AUCell)

We characterize the different cells in a single-cell transcriptomics experiment via the enrichment of the previously discovered regulons. Enrichment of a regulon is measured as the Area Under the recovery Curve (AUC) of the genes that define this regulon.

auc_mtx = aucell(ex_matrix, regulons, num_workers=4)
sns.clustermap(auc_mtx, figsize=(8,8))
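The resulting AUC matrix is a cells-by-regulons DataFrame, so standard pandas operations apply, for instance finding the most active regulon per cell. A sketch on a toy matrix with hypothetical regulon names:

```python
import pandas as pd

# Toy AUC matrix: rows are cells, columns are regulons (values in [0, 1]).
auc_mtx = pd.DataFrame(
    {"Sox2(+)": [0.8, 0.1], "Neurod1(+)": [0.2, 0.7]},
    index=["cell_1", "cell_2"],
)

# The most active regulon for each cell.
top_regulon = auc_mtx.idxmax(axis=1)
print(top_regulon.tolist())  # → ['Sox2(+)', 'Neurod1(+)']
```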

Command Line Interface

A command line version of the tool is included. It is available after installing the package via pip.

$ pyscenic
usage: pySCENIC [-h] {grnboost,ctx,aucell} ...

Single-CEll regulatory Network Inference and Clustering

positional arguments:
  {grnboost,ctx,aucell}
                        sub-command help
    grnboost            Derive co-expression modules from expression matrix.
    ctx                 Find enriched motifs for a gene signature and
                        optionally prune targets from this signature based on
                        cis-regulatory cues.
    aucell              Find enrichment of regulons across single cells.

optional arguments:
  -h, --help            show this help message and exit

Arguments can be read from file using a @args.txt construct.
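For example, commonly used arguments can be collected in a file and passed with the @ prefix (a hypothetical sketch; the file names are placeholders):

```shell
# Store frequently used arguments, one per line.
cat > args.txt <<'EOF'
--num_workers
6
-o
expr_mat.adjacencies.tsv
EOF

# The file can then stand in for those arguments on the command line:
# pyscenic grnboost @args.txt expr_mat.tsv allTFs_hg38.txt
```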

Docker and Singularity Images

pySCENIC is available to use with both Docker and Singularity, and tool usage from a container is similar to that of the command line interface. Note that the feather databases, transcription factors, and motif annotation databases need to be accessible to the container. In the below examples, separate mounts are created for the input, output, and databases directories.

Docker

To build the Docker image:

cd pySCENIC/
docker build -t pyscenic .

To run pySCENIC in Docker (three steps):

docker run \
    -v /path/to/inputdata:/scenic-input \
    -v /path/to/resources:/scenic-db \
    -v /path/to/outputdata:/scenic-output \
    pyscenic grnboost \
        --num_workers 6 \
        -o /scenic-output/expr_mat.adjacencies.tsv \
        /scenic-input/expr_mat.tsv \
        /scenic-db/allTFs_hg38.txt

docker run \
    -v /path/to/inputdata:/scenic-input \
    -v /path/to/resources:/scenic-db \
    -v /path/to/outputdata:/scenic-output \
    pyscenic ctx \
        /scenic-output/expr_mat.adjacencies.tsv \
        /scenic-db/hg19-500bp-upstream-7species.mc9nr.feather \
        /scenic-db/hg19-tss-centered-5kb-7species.mc9nr.feather \
        /scenic-db/hg19-tss-centered-10kb-7species.mc9nr.feather \
        --annotations_fname /scenic-db/motifs-v9-nr.hgnc-m0.001-o0.0.tbl \
        --expression_mtx_fname /scenic-input/expr_mat.tsv \
        --mode "dask_multiprocessing" \
        --output_type csv \
        --output /scenic-output/regulons.csv \
        --num_workers 6

docker run \
    -v /path/to/inputdata:/scenic-input \
    -v /path/to/outputdata:/scenic-output \
    pyscenic aucell \
        /scenic-input/expr_mat.tsv \
        /scenic-output/regulons.csv \
        -o /scenic-output/auc_mtx.csv \
        --num_workers 6

Singularity

To build the Singularity image:

cd pySCENIC/
singularity build pyscenic.sif Singularity

To run pySCENIC in Singularity (three steps):

singularity exec \
    --bind /path/to/inputdata:/scenic-input \
    --bind /path/to/resources:/scenic-db \
    --bind /path/to/outputdata:/scenic-output \
    pyscenic.sif \
        pyscenic grnboost \
            --num_workers 6 \
            -o /scenic-output/expr_mat.adjacencies.tsv \
            /scenic-input/expr_mat.tsv \
            /scenic-db/allTFs_hg38.txt


singularity exec \
    --bind /path/to/inputdata:/scenic-input \
    --bind /path/to/resources:/scenic-db \
    --bind /path/to/outputdata:/scenic-output \
    pyscenic.sif \
        pyscenic ctx \
            /scenic-output/expr_mat.adjacencies.tsv \
            /scenic-db/hg19-500bp-upstream-7species.mc9nr.feather \
            /scenic-db/hg19-tss-centered-5kb-7species.mc9nr.feather \
            /scenic-db/hg19-tss-centered-10kb-7species.mc9nr.feather \
            --annotations_fname /scenic-db/motifs-v9-nr.hgnc-m0.001-o0.0.tbl \
            --expression_mtx_fname /scenic-input/expr_mat.tsv \
            --mode "dask_multiprocessing" \
            --output_type csv \
            --output /scenic-output/regulons.csv \
            --num_workers 6

singularity exec \
    --bind /path/to/inputdata:/scenic-input \
    --bind /path/to/outputdata:/scenic-output \
    pyscenic.sif \
        pyscenic aucell \
            /scenic-input/expr_mat.tsv \
            /scenic-output/regulons.csv \
            -o /scenic-output/auc_mtx.csv \
            --num_workers 6

Frequently Asked Questions

Can I create my own ranking databases?

Yes, you can. The code snippet below shows how to create your own database:

from pyscenic.rnkdb import DataFrameRankingDatabase as RankingDatabase
import numpy as np
import pandas as pd

# Every model in a database is represented by a whole genome ranking. The rankings of the genes must be 0-based.
df = pd.DataFrame(
        data=[[0, 1],
              [1, 0]],
        index=['Model1', 'Model2'],
        columns=['Symbol1', 'Symbol2'],
        dtype=np.int32)
RankingDatabase(df, 'custom').save('custom.db')

Can I draw the distribution of AUC values for a regulon across cells?

import pandas as pd
import matplotlib.pyplot as plt


def plot_binarization(auc_mtx: pd.DataFrame, regulon_name: str, threshold: float, bins: int=200, ax=None) -> None:
    """
    Plot the "binarization" process for the given regulon.

    :param auc_mtx: The dataframe with the AUC values for all cells and regulons (n_cells x n_regulons).
    :param regulon_name: The name of the regulon.
    :param threshold: The threshold to use for binarization.
    :param bins: The number of bins to use in the AUC histogram.
    :param ax: The matplotlib axes to draw on (defaults to the current axes).
    """
    if ax is None:
        ax = plt.gca()
    auc_mtx[regulon_name].hist(bins=bins, ax=ax)

    ylim = ax.get_ylim()
    ax.plot([threshold]*2, ylim, 'r:')
    ax.set_ylim(ylim)
    ax.set_xlabel('AUC')
    ax.set_ylabel('#')
    ax.set_title(regulon_name)

Website

For more information, please visit LCB and SCENIC.

License

GNU General Public License v3

Acknowledgments

We are grateful to all providers of TF-annotated position weight matrices, in particular Martha Bulyk (UNIPROBE), Wyeth Wasserman and Albin Sandelin (JASPAR), BioBase (TRANSFAC), Scot Wolfe and Michael Brodsky (FlyFactorSurvey) and Timothy Hughes (cisBP).

References

[1] Aibar, S. et al. SCENIC: single-cell regulatory network inference and clustering. Nature Methods 14, 1083–1086 (2017).

[2] Rocklin, M. Dask: parallel computation with blocked algorithms and task scheduling. Proceedings of the 14th Python in Science Conference (2015).

[4] Zeisel, A. et al. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science 347, 1138–1142 (2015).