There are a number of tutorials available to demonstrate how to use pySCENIC. Some of these are in the form of Jupyter notebooks in the pySCENIC notebooks directory, and contain example analyses.
PBMC 10k dataset (10x Genomics)
Full SCENIC analysis, plus filtering, clustering, visualization, and SCope-ready loom file creation:
Extended analysis post-SCENIC:
Zeisel et al. dataset
If you run this from a Python script instead of a Jupyter notebook, please enclose the code in an if __name__ == '__main__': construct.
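A minimal sketch of that guard (the `run_pipeline` function is a hypothetical placeholder for the analysis steps shown below):

```python
# Hypothetical sketch: wrap the pySCENIC calls in a function and invoke it
# behind a main guard, so that worker processes spawned by dask /
# multiprocessing do not re-execute the pipeline when importing this module.
def run_pipeline():
    # ... place the analysis steps from this tutorial here ...
    return "done"

if __name__ == '__main__':
    result = run_pipeline()
```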
First we import the necessary modules and declare some constants:
```python
import os
import glob
import pickle
import pandas as pd
import numpy as np

from dask.diagnostics import ProgressBar

from arboreto.utils import load_tf_names
from arboreto.algo import grnboost2

from pyscenic.rnkdb import FeatherRankingDatabase as RankingDatabase
from pyscenic.utils import modules_from_adjacencies, load_motifs
from pyscenic.prune import prune2df, df2regulons
from pyscenic.aucell import aucell

import seaborn as sns

DATA_FOLDER = "~/tmp"
RESOURCES_FOLDER = "~/resources"
DATABASE_FOLDER = "~/databases/"
SCHEDULER = "18.104.22.168:8786"
DATABASES_GLOB = os.path.join(DATABASE_FOLDER, "mm9-*.mc9nr.feather")
MOTIF_ANNOTATIONS_FNAME = os.path.join(RESOURCES_FOLDER, "motifs-v9-nr.mgi-m0.001-o0.0.tbl")
MM_TFS_FNAME = os.path.join(RESOURCES_FOLDER, 'mm_tfs.txt')
SC_EXP_FNAME = os.path.join(RESOURCES_FOLDER, "GSE60361_C1-3005-Expression.txt")
REGULONS_FNAME = os.path.join(DATA_FOLDER, "regulons.p")
MOTIFS_FNAME = os.path.join(DATA_FOLDER, "motifs.csv")
```
The scRNA-Seq data is downloaded from GEO: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE60361 and loaded into memory:
```python
ex_matrix = pd.read_csv(SC_EXP_FNAME, sep='\t', header=0, index_col=0).T
ex_matrix.shape
```
and the list of Transcription Factors (TFs) for Mus musculus is read from file. The list of known TFs for Mm was prepared from TFCat (cf. notebooks section).
```python
tf_names = load_tf_names(MM_TFS_FNAME)
```
Finally the ranking databases are loaded:
```python
db_fnames = glob.glob(DATABASES_GLOB)

def name(fname):
    # os.path.splitext returns a (root, ext) tuple; keep only the root.
    return os.path.splitext(os.path.basename(fname))[0]

dbs = [RankingDatabase(fname=fname, name=name(fname)) for fname in db_fnames]
dbs
```
```
[FeatherRankingDatabase(name="mm9-tss-centered-10kb-10species.mc9nr"),
 FeatherRankingDatabase(name="mm9-500bp-upstream-7species.mc9nr"),
 FeatherRankingDatabase(name="mm9-500bp-upstream-10species.mc9nr"),
 FeatherRankingDatabase(name="mm9-tss-centered-5kb-10species.mc9nr"),
 FeatherRankingDatabase(name="mm9-tss-centered-10kb-7species.mc9nr"),
 FeatherRankingDatabase(name="mm9-tss-centered-5kb-7species.mc9nr")]
```
In the initial phase of the pySCENIC pipeline, co-expression modules are inferred from the single-cell expression profiles. The arboreto package is used for this phase of the pipeline. For this notebook, only a sample of 1,000 cells is used for the co-expression module inference.
```python
adjacencies = grnboost2(ex_matrix, tf_names=tf_names, verbose=True)
```
Regulons are derived from adjacencies based on three methods.
The first method to create the TF-modules is to select the best targets for each transcription factor:
- Targets with importance > the 50th percentile.
- Targets with importance > the 75th percentile.
- Targets with importance > the 90th percentile.
The second method is to select the top targets for a given TF:
- Top 50 targets (targets with highest weight)
The third method is to select the best regulators for each gene (this is actually how GENIE3 works internally). These targets can then be assigned back to each TF to form the TF-modules. In this way we create three more gene sets:
- Targets for which the TF is within its top 5 regulators
- Targets for which the TF is within its top 10 regulators
- Targets for which the TF is within its top 50 regulators
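These selections can be sketched on a toy adjacency table. The column names follow grnboost2's output (`TF`, `target`, `importance`), but the gene names, values, and helper functions below are illustrative, not part of the pySCENIC API:

```python
import pandas as pd

# Synthetic adjacency table with the same columns grnboost2 produces.
adj = pd.DataFrame({
    "TF":         ["Sox2", "Sox2", "Sox2", "Pou5f1", "Pou5f1", "Pou5f1"],
    "target":     ["g1", "g2", "g3", "g1", "g2", "g4"],
    "importance": [5.0, 1.0, 3.0, 2.0, 8.0, 0.5],
})

# Method 1: keep targets whose importance exceeds a per-TF percentile.
def targets_above_percentile(df, tf, q=0.75):
    sub = df[df["TF"] == tf]
    cutoff = sub["importance"].quantile(q)
    return set(sub.loc[sub["importance"] >= cutoff, "target"])

# Method 2: keep the top-k targets per TF by importance.
def top_k_targets(df, tf, k=2):
    return set(df[df["TF"] == tf].nlargest(k, "importance")["target"])

# Method 3: keep a TF-target link only if the TF ranks among the
# target gene's top-n regulators.
def tf_in_top_regulators(df, n=1):
    top = df.sort_values("importance", ascending=False).groupby("target").head(n)
    return set(zip(top["TF"], top["target"]))
```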
A distinction is made between modules containing targets that are activated and modules containing targets that are repressed. The relationship between a TF and its target, i.e. activating or repressing, is derived from the original expression profiles using the Pearson product-moment correlation coefficient.
In addition, the transcription factor itself is added to the module, and modules with fewer than 20 genes are removed.
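As a toy illustration of this sign convention (the expression vectors are synthetic, and the correlation threshold is illustrative, not pySCENIC's default):

```python
import numpy as np

# Synthetic expression vectors for a TF and two candidate targets.
tf_expr   = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
activated = np.array([0.1, 1.1, 2.2, 2.9, 4.1])   # rises with the TF
repressed = np.array([4.0, 3.1, 2.0, 1.2, 0.1])   # falls as the TF rises

def regulation_sign(tf, target, threshold=0.03):
    """Label a TF-target link by the sign of the Pearson correlation.
    The threshold is an illustrative cut-off, not pySCENIC's default."""
    rho = np.corrcoef(tf, target)[0, 1]
    if rho > threshold:
        return "activating"
    if rho < -threshold:
        return "repressing"
    return "undetermined"
```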
```python
modules = list(modules_from_adjacencies(adjacencies, ex_matrix))
```
```python
# Calculate a list of enriched motifs and the corresponding target genes for all modules.
with ProgressBar():
    df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME)

# Create regulons from this table of enriched motifs.
regulons = df2regulons(df)

# Save the enriched motifs and the discovered regulons to disk.
df.to_csv(MOTIFS_FNAME)
with open(REGULONS_FNAME, "wb") as f:
    pickle.dump(regulons, f)
```
A compute cluster can be leveraged in the following way:
```python
# The cluster can be leveraged via the dask framework:
df = prune2df(dbs, modules, MOTIF_ANNOTATIONS_FNAME, client_or_address=SCHEDULER)
```
The nodes of the cluster need access to a shared network drive on which the ranking databases are stored.
Reloading the enriched motifs and regulons from file should be done as follows:
```python
df = load_motifs(MOTIFS_FNAME)
with open(REGULONS_FNAME, "rb") as f:
    regulons = pickle.load(f)
```
We characterize the different cells in a single-cell transcriptomics experiment via the enrichment of the previously discovered regulons. Enrichment of a regulon is measured as the Area Under the recovery Curve (AUC) of the genes that define this regulon.
```python
auc_mtx = aucell(ex_matrix, regulons, num_workers=4)
sns.clustermap(auc_mtx, figsize=(8, 8))
```