Python implementation of the SCENIC pipeline for transcription factor inference from single-cell transcriptomics experiments.
Project description
pySCENIC is a lightning-fast python implementation of the SCENIC pipeline (Single-Cell rEgulatory Network Inference and Clustering) which enables biologists to infer transcription factors, gene regulatory networks and cell types from single-cell RNA-seq data.
The pioneering work was done in R and results were published in Nature Methods [1].
pySCENIC can be run on a single desktop machine but easily scales to multi-core clusters to analyze thousands of cells in no time. The latter is achieved via the dask framework for distributed computing [2].
The pipeline has three steps:
First, transcription factors (TFs) and their candidate target genes, together defining a regulon, are derived using network inference methods that rely solely on correlations between gene expression across cells. The arboreto package is used for this step.
These regulons are refined by pruning targets that do not show enrichment for a corresponding motif of the TF, effectively separating direct from indirect targets based on the presence of cis-regulatory footprints.
Finally, the original cells are scored for the activity of these discovered regulons and clustered on that basis.
Features
All the functionality of the original R implementation is available and in addition:
You can leverage multi-core and multi-node clusters using dask and its distributed scheduler.
We implemented a version of the recovery of input genes that takes into account weights associated with these genes.
Regulons (i.e. the regulatory networks that connect a TF with its target genes) whose targets are repressed are now also derived and used for cell enrichment analysis.
Installation
The latest stable release of the package can be installed via pip install pyscenic.
You can also install the bleeding edge (i.e. less stable) version of the package directly from the source:
git clone https://github.com/aertslab/pySCENIC.git
cd pySCENIC/
pip install .
To successfully use this pipeline you also need the following auxiliary datasets:
Databases ranking the whole genome of your species of interest based on regulatory features (i.e. transcription factors). Ranking databases are typically stored in the feather format and can be downloaded from cisTargetDBs.
Motif annotation database providing the missing link between an enriched motif and the transcription factor that binds this motif. This pipeline needs a TSV text file where every line represents a particular annotation.
| Species | Annotations |
| --- | --- |
| Homo sapiens | |
| Mus musculus | |
| Drosophila melanogaster | |
Tutorial
For this tutorial 3,005 single cell transcriptomes taken from the mouse brain (somatosensory cortex and hippocampal regions) are used as an example [4]. The analysis is done in a Jupyter notebook.
First we import the necessary modules and declare some constants:
import os
import glob
import pickle
import pandas as pd
import numpy as np
from dask.diagnostics import ProgressBar
from arboreto.utils import load_tf_names
from arboreto.algo import grnboost2
from pyscenic.rnkdb import FeatherRankingDatabase as RankingDatabase
from pyscenic.utils import modules_from_adjacencies, load_motifs
from pyscenic.prune import prune2df, df2regulons
from pyscenic.aucell import aucell
import seaborn as sns
DATA_FOLDER="~/tmp"
RESOURCES_FOLDER="~/resources"
DATABASE_FOLDER = "~/databases/"
SCHEDULER="123.122.8.24:8786"
DATABASES_GLOB = os.path.join(DATABASE_FOLDER, "mm9-*.feather")
MOTIF_ANNOTATIONS_FNAME = os.path.join(RESOURCES_FOLDER, "motifs-v9-nr.mgi-m0.001-o0.0.tbl")
MM_TFS_FNAME = os.path.join(RESOURCES_FOLDER, 'mm_tfs.txt')
SC_EXP_FNAME = os.path.join(RESOURCES_FOLDER, "GSE60361_C1-3005-Expression.txt")
REGULONS_FNAME = os.path.join(DATA_FOLDER, "regulons.p")
MOTIFS_FNAME = os.path.join(DATA_FOLDER, "motifs.csv")
Preliminary work
The scRNA-Seq data is downloaded from GEO: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE60361 and loaded into memory:
ex_matrix = pd.read_csv(SC_EXP_FNAME, sep='\t', header=0, index_col=0).T
ex_matrix.shape
(3005, 19970)
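Optionally, genes that are not detected in any cell can be dropped before network inference to reduce the problem size (a sketch; this filtering step and its threshold are not part of the original tutorial):
# keep only genes with a non-zero count in at least one cell
ex_matrix = ex_matrix.loc[:, (ex_matrix > 0).sum(axis=0) > 0]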
Next, the list of transcription factors (TFs) for Mus musculus is read from file. The list of known TFs for Mm was prepared from TFCat (cf. notebooks section).
tf_names = load_tf_names(MM_TFS_FNAME)
Finally the ranking databases are loaded:
db_fnames = glob.glob(DATABASES_GLOB)
def name(fname):
    return os.path.basename(fname).split(".")[0]
dbs = [RankingDatabase(fname=fname, name=name(fname)) for fname in db_fnames]
dbs
[FeatherRankingDatabase(name="mm9-tss-centered-10kb-10species"), FeatherRankingDatabase(name="mm9-500bp-upstream-7species"), FeatherRankingDatabase(name="mm9-500bp-upstream-10species"), FeatherRankingDatabase(name="mm9-tss-centered-5kb-10species"), FeatherRankingDatabase(name="mm9-tss-centered-10kb-7species"), FeatherRankingDatabase(name="mm9-tss-centered-5kb-7species")]
Phase I: Inference of co-expression modules
In the initial phase of the pySCENIC pipeline, co-expression modules are inferred from the single-cell expression profiles.
Run GENIE3 or GRNBoost from arboreto to infer co-expression modules
The arboreto package is used for this phase of the pipeline. For this notebook, only a sample of 1,000 cells is used for the co-expression module inference (see the subsampling sketch below).
adjacencies = grnboost2(ex_matrix, tf_names=tf_names, verbose=True)
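The call above runs on the full matrix; here is a minimal sketch of the 1,000-cell subsample mentioned earlier (the random seed is arbitrary):
# subsample 1,000 cells to speed up co-expression module inference
ex_matrix_sample = ex_matrix.sample(n=1000, random_state=42)
adjacencies = grnboost2(ex_matrix_sample, tf_names=tf_names, verbose=True)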
Derive potential regulons from these co-expression modules
Regulons are derived from adjacencies based on three methods.
The first method to create the TF-modules is to select the best targets for each transcription factor:
Targets with importance > the 50th percentile.
Targets with importance > the 75th percentile.
Targets with importance > the 90th percentile.
The second method is to select the top targets for a given TF:
Top 50 targets (targets with highest weight)
The third method is to select the best regulators for each gene (this is actually how GENIE3 works internally). The genes are then assigned back to each of their top regulators to form the TF-modules. In this way we create three more gene sets:
Targets for which the TF is within its top 5 regulators
Targets for which the TF is within its top 10 regulators
Targets for which the TF is within its top 50 regulators
A distinction is made between modules whose targets are activated and modules whose targets are repressed. The relationship between a TF and its target, i.e. activating or repressing, is derived from the original expression profiles using the Pearson product-moment correlation coefficient.
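The sign determination can be illustrated with a plain Pearson correlation on the expression matrix (a sketch; 'Dlx1' and 'Gad1' are arbitrary example gene symbols, not pipeline output):
# positive correlation suggests an activating TF-target pair, negative suggests repression
rho = np.corrcoef(ex_matrix['Dlx1'], ex_matrix['Gad1'])[0, 1]
print('activating' if rho > 0 else 'repressing', rho)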
In addition, the transcription factor is added to the module and modules that have less than 20 genes are removed.
modules = list(modules_from_adjacencies(adjacencies, ex_matrix))
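A quick sanity check is to look at how many modules were derived and how large they are (a sketch; it assumes each module object exposes a name attribute and its gene count via len()):
print(len(modules), 'modules derived')
for m in modules[:5]:
    print(m.name, len(m))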
Phase II: Prune modules for targets with cis regulatory footprints (aka RcisTarget)
# Calculate a list of enriched motifs and the corresponding target genes for all modules.
with ProgressBar():
    df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME)
# Create regulons from this table of enriched motifs.
regulons = df2regulons(df)
# Save the enriched motifs and the discovered regulons to disk.
df.to_csv(MOTIFS_FNAME)
with open(REGULONS_FNAME, "wb") as f:
    pickle.dump(regulons, f)
A multi-node compute cluster can be leveraged via the dask distributed scheduler in the following way:
# The clusters can be leveraged via the dask framework:
df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME, client_or_address=SCHEDULER)
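Alternatively, a local dask cluster can be started from within the notebook (a sketch; it assumes client_or_address also accepts a distributed.Client instance, so check the prune2df docstring of your installed version):
from distributed import LocalCluster, Client
# spin up a local cluster sized to the machine at hand
local_cluster = LocalCluster(n_workers=4, threads_per_worker=1)
custom_client = Client(local_cluster)
df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME, client_or_address=custom_client)
custom_client.close()
local_cluster.close()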
Reloading the enriched motifs and regulons from file should be done as follows:
df = load_motifs(MOTIFS_FNAME)
with open(REGULONS_FNAME, "rb") as f:
regulons = pickle.load(f)
Phase III: Cellular regulon enrichment matrix (aka AUCell)
We characterize the different cells in a single-cell transcriptomics experiment via the enrichment of the previously discovered regulons. Enrichment of a regulon is measured as the Area Under the recovery Curve (AUC) of the genes that define this regulon.
auc_mtx = aucell(ex_matrix, regulons, num_workers=4)
sns.clustermap(auc_mtx, figsize=(8,8))
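Because the AUC matrix is a plain cells-by-regulons DataFrame, it can be persisted for downstream analysis with pandas (a sketch; the file name is arbitrary):
auc_mtx.to_csv(os.path.expanduser(os.path.join(DATA_FOLDER, "auc_mtx.csv")))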
Command Line Interface
A command line version of the tool is included. It is available after installing the package via pip.
{ ~ } » pyscenic ~
usage: pySCENIC [-h] {grn,ctx,aucell} ...
Single-CEll regulatory Network Inference and Clustering
positional arguments:
{grn,ctx,aucell}
sub-command help
grn Derive co-expression modules from expression matrix.
ctx Find enriched motifs for a gene signature and
optionally prune targets from this signature based on
cis-regulatory cues.
aucell Find enrichment of regulons across single cells.
optional arguments:
-h, --help show this help message and exit
Arguments can also be read from a file by supplying @args.txt in place of the command-line arguments.
Docker and Singularity Images
pySCENIC is available to use with both Docker and Singularity, and tool usage from a container is similar to that of the command line interface. Note that the feather databases, transcription factors, and motif annotation databases need to be accessible to the container. In the below examples, separate mounts are created for the input, output, and databases directories.
Docker
Docker images are available from Docker Hub.
To run pySCENIC in Docker, use the following three steps. A mount point (or more than one) containing the input data and the necessary resources needs to be specified.
docker run \
-v /path/to/data:/scenicdata \
aertslab/pyscenic grn \
--num_workers 6 \
-o /scenicdata/expr_mat.adjacencies.tsv \
/scenicdata/expr_mat.tsv \
/scenicdata/allTFs_hg38.txt
docker run \
-v /path/to/data:/scenicdata \
aertslab/pyscenic ctx \
/scenicdata/expr_mat.adjacencies.tsv \
/scenicdata/hg19-500bp-upstream-7species.mc9nr.feather \
/scenicdata/hg19-tss-centered-5kb-7species.mc9nr.feather \
/scenicdata/hg19-tss-centered-10kb-7species.mc9nr.feather \
--annotations_fname /scenicdata/motifs-v9-nr.hgnc-m0.001-o0.0.tbl \
--expression_mtx_fname /scenicdata/expr_mat.tsv \
--mode "dask_multiprocessing" \
--output_type csv \
--output /scenicdata/regulons.csv \
--num_workers 6
docker run \
-v /path/to/data:/scenicdata \
aertslab/pyscenic aucell \
/scenicdata/expr_mat.tsv \
/scenicdata/regulons.csv \
-o /scenicdata/auc_mtx.csv \
--num_workers 6
Singularity
Singularity images are available from Singularity Hub.
To run pySCENIC in Singularity, use the following three steps. Note that in Singularity 3.0+, the mount points are automatically overlaid.
singularity exec pySCENIC_latest.sif \
pyscenic grn \
--num_workers 6 \
-o expr_mat.adjacencies.tsv \
expr_mat.tsv \
allTFs_hg38.txt
singularity exec pySCENIC_latest.sif \
pyscenic ctx \
expr_mat.adjacencies.tsv \
hg19-500bp-upstream-7species.mc9nr.feather \
hg19-tss-centered-5kb-7species.mc9nr.feather \
hg19-tss-centered-10kb-7species.mc9nr.feather \
--annotations_fname motifs-v9-nr.hgnc-m0.001-o0.0.tbl \
--expression_mtx_fname expr_mat.tsv \
--mode "dask_multiprocessing" \
--output_type csv \
--output regulons.csv \
--num_workers 6
singularity exec pySCENIC_latest.sif \
pyscenic aucell \
expr_mat.tsv \
regulons.csv \
-o auc_mtx.csv \
--num_workers 6
Frequently Asked Questions
Can I create my own ranking databases?
Yes you can. The code snippet below shows you how to create your own databases:
from pyscenic.rnkdb import DataFrameRankingDatabase as RankingDatabase
import numpy as np
import pandas as pd
# Every model in a database is represented by a whole genome ranking. The rankings of the genes must be 0-based.
df = pd.DataFrame(
    data=[[0, 1],
          [1, 0]],
    index=['Model1', 'Model2'],
    columns=['Symbol1', 'Symbol2'],
    dtype=np.int32)
RankingDatabase(df, 'custom').save('custom.db')
Can I draw the distribution of AUC values for a regulon across cells?
import pandas as pd
import matplotlib.pyplot as plt
def plot_binarization(auc_mtx: pd.DataFrame, regulon_name: str, threshold: float, bins: int=200, ax=None) -> None:
    """
    Plot the "binarization" process for the given regulon.

    :param auc_mtx: The dataframe with the AUC values for all cells and regulons (n_cells x n_regulons).
    :param regulon_name: The name of the regulon.
    :param threshold: The threshold to use for binarization.
    :param bins: The number of bins to use in the AUC histogram.
    """
    if ax is None:
        ax = plt.gca()
    auc_mtx[regulon_name].hist(bins=bins, ax=ax)

    ylim = ax.get_ylim()
    ax.plot([threshold] * 2, ylim, 'r:')
    ax.set_ylim(ylim)
    ax.set_xlabel('AUC')
    ax.set_ylabel('#')
    ax.set_title(regulon_name)
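A usage sketch (the regulon name and threshold are placeholders; pick a regulon that is present in your own auc_mtx):
plot_binarization(auc_mtx, 'Dlx1(+)', threshold=0.1)
plt.show()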
Website
License
GNU General Public License v3
Acknowledgments
We are grateful to all providers of TF-annotated position weight matrices, in particular Martha Bulyk (UNIPROBE), Wyeth Wasserman and Albin Sandelin (JASPAR), BioBase (TRANSFAC), Scot Wolfe and Michael Brodsky (FlyFactorSurvey) and Timothy Hughes (cisBP).
References