scdataloader

A dataloader for single-cell data in lamindb.

This single-cell PyTorch dataloader / Lightning datamodule is designed to be used with lamindb and scPRINT.

It allows you to:

  1. load thousands of datasets containing millions of cells in a few seconds.
  2. preprocess the data per dataset and download it locally (normalization, filtering, etc.)
  3. create a more complex single cell dataset
  4. extend it to your needs

Built on top of lamindb and the .mapped() function by Sergey: https://github.com/Koncopd

The package has been designed together with the scPRINT paper and model.
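
For context, here is a minimal sketch of what .mapped() provides, assuming a collection already registered in your lamindb instance (the collection name and obs key are placeholders; see the lamindb docs for the exact signature):

import lamindb as ln

collection = ln.Collection.filter(name="test").one()
# .mapped() exposes the collection's AnnData artifacts as one torch-style dataset
dataset = collection.mapped(obs_keys=["cell_type_ontology_term_id"])
print(len(dataset))  # total number of cells across all artifacts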

More

I created this dataloader for my PhD project, where I use it to load and preprocess thousands of datasets containing millions of cells in a few seconds. I believed that people applying AI to single-cell RNA-seq and other sequencing datasets would want such a tool, which did not exist at the time.

(architecture diagram: scdataloader.drawio.png)

Install it from PyPI

pip install scdataloader
# or, for dev dependencies
pip install "scdataloader[dev]"

lamin init --storage ./testdb --name test --schema bionty

If you are starting with lamin and had to run lamin init, you will also need to populate your ontologies. This is because scPRINT uses ontologies to define its cell types, diseases, sexes, ethnicities, etc.

You can do this manually or with our function:

from scdataloader.utils import populate_my_ontology

# to populate everything (recommended; can take 2-10 minutes)
populate_my_ontology()

# or populate only the minimum required by the tool
populate_my_ontology(
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)

Dev install

If you want to use the latest version of scDataLoader and work on the code yourself, use git clone and pip install -e instead of pip install:

git clone https://github.com/jkobject/scDataLoader.git
pip install -e "scDataLoader[dev]"

Usage

DataModule usage

# initialize a local lamin database
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty
import lamindb as ln

from scdataloader import utils, Preprocessor, DataModule

# adata is an AnnData object you have already loaded (e.g. with scanpy)

# preprocess the dataset
preprocessor = Preprocessor(
    do_postp=False,
    force_preprocess=True,
)
adata = preprocessor(adata)

# register the dataset in lamindb as an artifact inside a collection
art = ln.Artifact(adata, description="test")
art.save()
ln.Collection(art, name="test", description="test").save()

datamodule = DataModule(
    collection_name="test",
    organisms=["NCBITaxon:9606"], #organism that we will work on
    how="most expr", # for the collator (most expr genes only will be selected)
    max_len=1000, # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
)
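
Once built, the datamodule can be handed to a Lightning Trainer or iterated directly. A minimal sketch of direct iteration, assuming the standard LightningDataModule setup() convention:

datamodule.setup()
for batch in datamodule.train_dataloader():
    # each batch holds the collated genes and expression values
    break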

lightning-free usage (Dataset+Collator+DataLoader)

# initialize a local lamin database
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty

from tqdm import tqdm

from scdataloader import utils, Preprocessor, SimpleAnnDataset, Collator, DataLoader

# adata is an AnnData object you have already loaded

# preprocess dataset
preprocessor = Preprocessor(
    do_postp=False,
    force_preprocess=True,
)
adata = preprocessor(adata)

# create dataset
adataset = SimpleAnnDataset(
    adata, obs_to_output=["organism_ontology_term_id"]
)
# create collator
col = Collator(
    organisms="NCBITaxon:9606",
    valid_genes=adata.var_names,
    max_len=2000,  # maximum number of genes to use
    how="most expr",  # one of "some", "most expr", "random_expr"
    # genelist=[geneA, geneB] if how=="some"
)
# create dataloader
dataloader = DataLoader(
    adataset,
    collate_fn=col,
    batch_size=64,
    num_workers=4,
    shuffle=False,
)

# predict with your model (model stands for any model that consumes gene
# positions, expression values and sequencing depth, e.g. scPRINT)
for batch in tqdm(dataloader):
    gene_pos, expression, depth = (
        batch["genes"],
        batch["x"],
        batch["depth"],
    )
    model.predict(
        gene_pos,
        expression,
        depth,
    )
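
If you need a fixed gene panel instead of the most expressed genes, pass how="some" together with a genelist, as the comment above suggests. A hedged sketch (the Ensembl IDs are placeholders):

col = Collator(
    organisms="NCBITaxon:9606",
    valid_genes=adata.var_names,
    how="some",
    genelist=["ENSG00000139618", "ENSG00000141510"],  # placeholder gene IDs
)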

Usage on all of cellxgene

# initialize a local lamin database
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty

import lamindb as ln
from scdataloader import utils
from scdataloader.preprocess import (
    LaminPreprocessor,
    additional_postprocess,
    additional_preprocess,
)

# preprocess datasets
DESCRIPTION='preprocessed by scDataLoader'

cx_dataset = ln.Collection.using(instance="laminlabs/cellxgene").filter(
    name="cellxgene-census", version="2023-12-15"
).one()
print(cx_dataset, len(cx_dataset.artifacts.all()))


do_preprocess = LaminPreprocessor(
    additional_postprocess=additional_postprocess,
    additional_preprocess=additional_preprocess,
    skip_validate=True,
    subset_hvg=0,
)

preprocessed_dataset = do_preprocess(
    cx_dataset, name=DESCRIPTION, description=DESCRIPTION, start_at=6, version="2"
)

# create dataloaders
from scdataloader import DataModule
import tqdm

datamodule = DataModule(
    collection_name="preprocessed dataset",
    organisms=["NCBITaxon:9606"], #organism that we will work on
    how="most expr", # for the collator (most expr genes only will be selected)
    max_len=1000, # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
    test_split=0)

for i in tqdm.tqdm(datamodule.train_dataloader()):
    # do something with the batch
    print(i)
    break

# or hand the datamodule to a Lightning Trainer, as sketched below
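
A minimal sketch of that Lightning step; model stands for any LightningModule (e.g. scPRINT) and is not defined in this README:

import lightning as L

trainer = L.Trainer(max_epochs=1)  # configure as needed
trainer.fit(model, datamodule=datamodule)  # model: your LightningModule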

See the notebooks in the docs:

  1. load a dataset
  2. create a dataset

command line preprocessing

You can use the command line to preprocess a large database of datasets, as shown here for cellxgene. This allows parallelization and easier usage.

scdataloader --instance "laminlabs/cellxgene" --name "cellxgene-census" --version "2023-12-15" --description "preprocessed for scprint" --new_name "scprint main" --start_at 10 >> scdataloader.out
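
For long runs, a simple pattern is to launch the same command in the background and append its output to a log file:

nohup scdataloader --instance "laminlabs/cellxgene" --name "cellxgene-census" \
    --version "2023-12-15" --description "preprocessed for scprint" \
    --new_name "scprint main" --start_at 10 >> scdataloader.out 2>&1 &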

command line usage

The main way to use scDataLoader at scale is through the command line; please refer to the scPRINT documentation and the Lightning documentation for more information on command-line usage.

FAQ

how to update my ontologies?

import bionty as bt
bt.reset_sources()

# Run via CLI: lamin load <your instance>

import lnschema_bionty as lb
lb.dev.sync_bionty_source_to_latest()

how to load all ontologies?

from scdataloader import utils
utils.populate_my_ontology()  # this might take 5-20 minutes

Development

Read the CONTRIBUTING.md file.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Awesome single cell dataloader created by @jkobject
