scPRINT is a Large Cell Model for Gene Network Inference, Denoising and more from scRNAseq data
ℹ️ main place where scprint is built and maintained
scPRINT: Large Cell Model for scRNAseq data
scPRINT is a large transformer model built for the inference of gene networks (connections between genes explaining the cell's expression profile) from scRNAseq data.
It uses novel encoding and decoding of the cell expression profile and new pre-training methodologies to learn a cell model.
scPRINT can be used to perform the following analyses:
- expression denoising: increase the resolution of your scRNAseq data
- cell embedding: generate a low-dimensional representation of your dataset
- label prediction: predict the cell type, disease, sequencer, sex, and ethnicity of your cells
- gene network inference: generate a gene network from any cell or cell cluster in your scRNAseq dataset
Read the manuscript if you would like to know more about scPRINT, and have a look at some of my X-plainers.
Table of Contents
- scPRINT: Large Cell Model for scRNAseq data
- Table of Contents
- Install scPRINT
- Usage
- FAQ
  - I want to generate gene networks from scRNAseq data
  - I want to generate cell embeddings and cell label predictions from scRNAseq data
  - I want to denoise my scRNAseq dataset
  - I want to generate an atlas-level embedding
  - I need to generate gene tokens using pLLMs
  - I want to pre-train scPRINT from scratch on my own data
  - How can I find out if scPRINT was trained on my data?
  - Can I use scPRINT on organisms other than human?
  - How long does scPRINT take? What kind of resources do I need? (or: can I run scPRINT locally?)
  - I have different scRNAseq batches. Should I integrate my data before running scPRINT?
  - Where to find the gene embeddings?
- Documentation
- Model Weights
- Docker
- Development
- Work in progress (PRs welcome)
Install scPRINT
For the moment, scPRINT has been tested on macOS and Linux (Ubuntu 20.04) with Python 3.10. Installation takes about 10 minutes on average.
If you want to use flashattention2, note that it currently only supports triton 2.0 (MLIR version) and torch==2.0.0.
lamin.ai
To use scPRINT, you will need to use lamin.ai. It is used to load biological information such as genes, cell types, organisms, etc.
install
To get started, run:
conda create -n <env-name> python==3.10  # scPRINT might work with python >3.10, but it is not tested
# then install one of:
pip install scprint            # the base package, OR
pip install "scprint[dev]"     # with the dev dependencies (building etc.), OR
pip install "scprint[flash]"   # to use flashattention2 with triton: only if you have a compatible GPU (e.g. not available for Apple GPUs for now, see https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility)
# OR combine extras: pip install "scprint[dev,flash]"
lamin init --storage ./testdb --name test --schema bionty
If you are starting with lamin and had to run lamin init, you will also need to populate your ontologies. This is because scPRINT uses ontologies to define its cell types, diseases, sexes, ethnicities, etc.
You can do it manually or with our function:
from scdataloader.utils import populate_my_ontology

populate_my_ontology()  # to populate everything (recommended) (can take 2-10mns)

# or the minimum for scPRINT to run some inferences (denoising, GRN inference):
populate_my_ontology(
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)
We make use of some additional packages we developed alongside scPRINT.
Please refer to their documentation for more information:
- scDataLoader: a dataloader for training large cell models.
- GRnnData: a package to work with gene networks from single cell data.
- benGRN: a package to benchmark gene network inference methods from single cell data.
pytorch and GPUs
scPRINT can run on machines without GPUs, but it will be slow; a GPU is highly recommended for inference.
Once you have a GPU and have installed the required drivers, you might need to install a specific version of pytorch that is compatible with those drivers (e.g. nvidia 550 drivers lead to nvidia toolkit 11.7 or 11.8, which might mean you need to re-install a different flavor of pytorch for things to work), e.g. using the command:
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
in my case on Linux.
I was able to test it with nvidia toolkit 11.7, 11.8 and 12.2.
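A quick way to check that your pytorch install actually sees the GPU (plain pytorch, nothing scPRINT-specific):
import torch

# the installed torch build and the CUDA version it was compiled against
print(torch.__version__, torch.version.cuda)
# should print True on a working GPU setup
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU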
dev install
If you want to use the latest version of scPRINT and work on the code yourself use git clone
and pip -e
instead of pip install
.
git clone https://github.com/cantinilab/scPRINT
git clone https://github.com/jkobject/scDataLoader
git clone https://github.com/cantinilab/GRnnData
git clone https://github.com/jkobject/benGRN
pip install -e "scPRINT[dev]"
pip install -e "scDataLoader[dev]"
pip install -e "GRnnData[dev]"
pip install -e "benGRN[dev]"
Usage
scPRINT's basic commands
This is the most minimal example of how scPRINT works:
import scanpy as sc
from lightning.pytorch import Trainer

from scprint import scPrint
from scprint.tasks import Denoiser  # Embedder and GNInfer live alongside it (import path assumed)
from scdataloader import DataModule

# to train / fit / test the model
datamodule = DataModule(...)
model = scPrint(...)
trainer = Trainer(...)
trainer.fit(model, datamodule=datamodule)

# to do predictions, use the task classes: Denoiser, Embedder, GNInfer
adata = sc.read_h5ad(...)
denoiser = Denoiser(...)
denoiser(model, adata=adata)
...
or, from a bash command line
$ scprint fit/train/predict/test/denoise/embed/gninfer --config config/[medium|large|vlarge] ...
Find out more about the commands by running scprint --help or scprint [command] --help.
More examples of using the command line are available in the docs.
Notes on GPU/CPU usage with triton
If you do not have triton installed, you will not be able to take advantage of GPU acceleration, but you can still use the model on CPU. In that case, when loading from a checkpoint that was trained with flashattention, you will need to specify transformer="normal" in the load_from_checkpoint function, like so:
model = scPrint.load_from_checkpoint(
    '../data/temp/last.ckpt',
    precpt_gene_emb=None,
    transformer="normal",
)
Simple tests:
An installation of scPRINT and a simple test of the denoiser are performed on each commit to the main branch with a GitHub Actions pytest workflow. This also provides an expected runtime for installing and running scPRINT.
We now explore the different usages of scPRINT:
FAQ
I want to generate gene networks from scRNAseq data:
-> Refer to the gene network inference section in this notebook.
-> More examples in this notebook: ./notebooks/assessments/bench_omni.ipynb.
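As a rough sketch of what this looks like in code (the import path and the GNInfer parameters are assumptions on my part; the notebooks above show the exact, tuned configuration):
import scanpy as sc
from scprint import scPrint
from scprint.tasks import GNInfer  # assumed import path for the task classes

model = scPrint.load_from_checkpoint("path/to/checkpoint.ckpt", precpt_gene_emb=None)
adata = sc.read_h5ad("my_dataset.h5ad")  # raw counts, not integrated or normalized
grn_inferer = GNInfer()  # illustrative: parameters left at their defaults
grn = grn_inferer(model, adata)  # the inferred gene network for the selected cells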
I want to generate cell embeddings and cell label predictions from scRNAseq data:
-> Refer to the embeddings and cell annotations section in this notebook.
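A minimal sketch, with the same caveats (the import path and return values are assumptions; the notebook shows the real settings):
import scanpy as sc
from scprint import scPrint
from scprint.tasks import Embedder  # assumed import path

model = scPrint.load_from_checkpoint("path/to/checkpoint.ckpt", precpt_gene_emb=None)
adata = sc.read_h5ad("my_dataset.h5ad")  # raw counts
embedder = Embedder()  # illustrative defaults
adata, metrics = embedder(model, adata)  # assumed to return the annotated AnnData plus some metrics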
I want to denoise my scRNAseq dataset:
-> Refer to the Denoising of B-cell section in this notebook.
-> More examples in our benchmark notebook: ./notebooks/assessments/bench_denoising.ipynb.
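A minimal sketch following the basic-commands example above (parameters are illustrative; the benchmark notebook has the tuned settings):
import scanpy as sc
from scprint import scPrint
from scprint.tasks import Denoiser  # assumed import path

model = scPrint.load_from_checkpoint("path/to/checkpoint.ckpt", precpt_gene_emb=None)
adata = sc.read_h5ad("my_dataset.h5ad")  # raw counts
denoiser = Denoiser()  # illustrative defaults
denoiser(model, adata=adata)  # same call pattern as in the minimal example above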
I want to generate an atlas-level embedding
-> Refer to the notebook nice_umap.ipynb.
I need to generate gene tokens using pLLMs
scPRINT can define its gene tokens from protein language model embeddings of genes. This is done by providing the path to a parquet file containing the precomputed embedding of each gene name to scPRINT via "precpt_gene_emb".
-> To generate this file, please refer to the notebook generate_gene_embeddings.
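Once you have that parquet file, it is passed through the precpt_gene_emb argument when loading (or creating) the model; a minimal sketch with placeholder paths:
from scprint import scPrint

model = scPrint.load_from_checkpoint(
    "path/to/checkpoint.ckpt",
    precpt_gene_emb="path/to/gene_embeddings.parquet",  # placeholder path to your parquet file
)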
I want to pre-train scPRINT from scratch on my own data
-> Refer to the documentation page pretrain scprint
How can I find out if scPRINT was trained on my data?
If your data is available in cellxgene, scPRINT was likely trained on it. However, some cells and datasets were dropped due to low-quality data, and some were randomly removed to be part of the validation / test sets.
Can I use scPRINT on organisms other than human?
scPRINT has been pretrained on both human and mouse data, and can be used on any organism with a similar gene set. If you want to use scPRINT on a very different organism, you will need to generate gene embeddings for that organism and re-train scPRINT.
How long does scPRINT take? What kind of resources do I need? (or: can I run scPRINT locally?)
Please look at the supplementary tables in the manuscript.
I have different scRNAseq batches. Should I integrate my data before running scPRINT?
scPRINT takes raw counts as input, so please don't use integrated data. Just give the raw counts to scPRINT and it will take care of the rest.
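If you are unsure whether your AnnData still holds raw counts, here is a quick sanity check (assuming the counts live in adata.X):
import numpy as np
import scanpy as sc

adata = sc.read_h5ad("my_dataset.h5ad")
X = adata.X[:1000]
X = X.toarray() if hasattr(X, "toarray") else X  # densify if sparse
# raw counts should be non-negative integers
assert X.min() >= 0 and np.allclose(X, np.round(X)), "adata.X does not look like raw counts"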
Where to find the gene embeddings?
If you think you need the gene embeddings file for loading the model from a checkpoint, you don't, as the embeddings are also stored in the model weights. You just need to load the weights like this:
model = scPrint.load_from_checkpoint(
    '../../data/temp/last.ckpt',
    precpt_gene_emb=None,
)
You can also recreate the gene embedding file through this notebook. Just call the functions and it should recreate the file itself.
The file itself is also available on Hugging Face.
Documentation
For more information on usage, please see the documentation at https://www.jkobject.com/scPRINT/
Model Weights
Model weights are available on Hugging Face.
Docker
By using the scPRINT Docker image, you can bypass the complexities of manual package installation and ensure a consistent deployment environment. This repository includes a Dockerfile that lets you build a container for the project; you can either build the image yourself or pull it from Docker Hub.
Make sure that you have the docker command-line interface installed on your system.
A recommended way to install docker with the correct nvidia drivers on Linux is to use this script.
Building the Docker Image
To build the Docker image from the provided Dockerfile
, run the following command from the root directory of this repository:
docker build -t scprint:latest -f Dockerfile .
Pulling the Docker Image from Docker Hub
If you don't want to build the image yourself, you can pull it directly from Docker Hub:
docker pull jkobject/scprint:1.1.3
docker tag jkobject/scprint:1.1.3 scprint:latest
Running the Docker Container
Once you have the image (either by building it or pulling it), you can start a container with:
docker run --gpus all --rm -it scprint:latest bash
Please note: when running the Docker container, ensure you mount any necessary folders using the -v option so they are accessible inside the container.
Development
Read the CONTRIBUTING.md file.
Read the training runs document to learn more about how pre-training was performed and its behavior.
Code coverage is not accurate as I am using the command-line interface for now; >50% of the code is covered by my current unit tests.
Acknowledgements: python template, laminDB, lightning.
Work in progress (PRs welcome):
- remove the triton dependencies
- add version with additional labels (tissues, age) and organisms (mouse, zebrafish) and more datasets from cellxgene
- version with separate transformer blocks for the encoding part of the bottleneck learning and for the cell embeddings
- improve classifier to output uncertainties and topK predictions when unsure
- setup latest lamindb version
Awesome Large Cell Model created by Jeremie Kalfon.