A minibatch loader for AnnData stores

Project description

annbatch

[!IMPORTANT] Until its first major release, this package will introduce breaking changes only in minor version releases.


A data loader and I/O utilities for mini-batch loading of on-disk AnnData files, co-developed by Lamin Labs and scverse.

Getting started

Please refer to the documentation, in particular the API documentation.

Installation

You need to have Python 3.12 or newer installed on your system. If you don't have Python installed, we recommend installing uv.

To install the latest release of annbatch from PyPI:

pip install "annbatch[zarrs]"
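Or, if you use uv, the equivalent is:

uv pip install "annbatch[zarrs]"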

We provide extras for torch, cupy-cuda12, cupy-cuda13, and zarrs-python. The cupy extras accelerate handling of the data via preload_to_gpu once it has been read off disk, and they do not need to be used in conjunction with torch.
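As a rough sketch of what GPU preloading could look like (note: `preload_to_gpu` as a `Loader` keyword argument is an assumption based on the description above; check the API documentation for the actual name and signature):

from annbatch import Loader

# Assumption: `preload_to_gpu` is a Loader keyword; the real API may differ.
# With a cupy extra installed, data is moved to the GPU right after it is
# read off disk, so downstream batch handling happens on the device.
ds = Loader(batch_size=4096, preload_to_gpu=True)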

[!IMPORTANT] zarrs-python provides the performance boost needed to make the sharded data produced by our preprocessing functions usable when loading off a local filesystem.

To install all optional dependencies:

pip install "annbatch[zarrs,torch,cupy-cuda13]"

(Note: replace cupy-cuda13 with the extra matching your local CUDA version, which you can check by running nvidia-smi.)

Detailed tutorial

For a detailed tutorial, please see the in-depth section of our docs.

Basic usage example

Basic preprocessing:

from annbatch import DatasetCollection

import zarr

# Using zarrs is necessary for local filesystem performance.
# Ensure you installed it via our `[zarrs]` extra, i.e. `pip install "annbatch[zarrs]"`, to get the right version.
zarr.config.set(
    {"codec_pipeline.path": "zarrs.ZarrsCodecPipeline"}
)

# Create a collection at the given path. The subgroups will all be anndata stores.
collection = DatasetCollection("path/to/output/collection.zarr")
collection.add_adatas(
    adata_paths=[
        "path/to/your/file1.h5ad",
        "path/to/your/file2.h5ad"
    ],
    shuffle=True,  # shuffling is needed for chunked access; True is the default
)
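To sanity-check the result, you can open the collection as a plain zarr group and list its subgroups (a minimal sketch; it assumes each subgroup is one of the AnnData stores described in the comment above):

import zarr

g = zarr.open_group("path/to/output/collection.zarr", mode="r")
# One subgroup per AnnData store in the collection
print(list(g.group_keys()))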

Data loading:

[!IMPORTANT] Without custom loading via annbatch.Loader.use_collection, load_adata(s), or load_dataset(s), all columns of the obs pandas.DataFrame will be loaded and yielded, potentially degrading performance.

from annbatch import DatasetCollection, Loader
import anndata as ad
import zarr

# Using zarrs is necessary for local filesystem performance.
# Ensure you installed it via our `[zarrs]` extra, i.e. `pip install "annbatch[zarrs]"`, to get the right version.
zarr.config.set(
    {"codec_pipeline.path": "zarrs.ZarrsCodecPipeline"}
)


# WARNING: Without custom loading *all* obs columns will be loaded and yielded,
# potentially degrading performance.
def custom_load_func(g: zarr.Group) -> ad.AnnData:
    # Placeholder: replace with the obs columns you actually need for training.
    some_subset_of_columns_useful_for_training = ["cell_type"]
    return ad.AnnData(
        X=ad.io.sparse_dataset(g["layers"]["counts"]),
        obs=ad.io.read_elem(g["obs"])[some_subset_of_columns_useful_for_training],
    )


# A non-empty collection created by the preprocessing step above
collection = DatasetCollection("path/to/output/collection.zarr")
# This settings override ensures that your categorical codes are not lost or
# altered when reading the data in!
with ad.settings.override(remove_unused_categories=False):
    ds = Loader(
        batch_size=4096,      # cells per yielded mini-batch
        chunk_size=32,        # contiguous on-disk cells fetched per chunk
        preload_nchunks=256,  # chunks buffered (and shuffled) per preload
        to_torch=True,        # yield torch tensors rather than numpy/scipy
    )
    # `use_collection` automatically uses the on-disk `X` and full `obs` in the `Loader`
    # but the `load_adata` arg can override this behavior
    # (see `custom_load_func` above for an example of customization).
    ds = ds.use_collection(collection, load_adata=custom_load_func)

# Iterate over the loader (a drop-in replacement for torch.utils.data.DataLoader)
for batch in ds:
    x, obs = batch["X"], batch["obs"]
    # Important: for performance reasons, convert to dense on the GPU
    x = x.cuda().to_dense()

[!IMPORTANT] For usage of our loader inside of torch, please see this note for more info. At a minimum, be aware that deadlocking will occur on Linux unless you pass multiprocessing_context="spawn" to the torch.utils.data.DataLoader class.
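A minimal sketch of such a setup (assuming the Loader instance `ds` from above can be passed to the DataLoader as an iterable-style dataset, per the note above):

from torch.utils.data import DataLoader

# `batch_size=None` because the Loader already yields complete mini-batches;
# `multiprocessing_context="spawn"` avoids the Linux deadlock mentioned above.
dl = DataLoader(
    ds,
    batch_size=None,
    num_workers=2,
    multiprocessing_context="spawn",
)
for batch in dl:
    x, obs = batch["X"], batch["obs"]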

Release notes

See the changelog.

Contact

For questions and help requests, you can reach out in the scverse discourse. If you found a bug, please use the issue tracker.

Citation

If you use annbatch in your work, please cite the annbatch publication as follows:

annbatch unlocks terabyte-scale training of biological data in anndata

Gold, I., Fischer, F., Arnoldt, L., Wolf, F. A., & Theis, F. J. (2026). annbatch unlocks terabyte-scale training of biological data in anndata. arXiv. https://doi.org/10.48550/arxiv.2604.01949

Project details


Download files

Download the file for your platform.

Source Distribution

annbatch-0.1.5.tar.gz (262.7 kB)

Uploaded Source

Built Distribution

annbatch-0.1.5-py3-none-any.whl (42.7 kB)

Uploaded Python 3

File details

Details for the file annbatch-0.1.5.tar.gz.

File metadata

  • Download URL: annbatch-0.1.5.tar.gz
  • Size: 262.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for annbatch-0.1.5.tar.gz:

  • SHA256: 6b747c097135af54e8dcfd9e1c6eb5acdd9467ba1cb52376483f08723f42de89
  • MD5: 99f607833653258bd0c8c6b467f6cf9f
  • BLAKE2b-256: e74cf1e182c6397a40a794009d55eacf083bb89fd5a4bd2d3d5e76e0952ec32f

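To verify a downloaded file against the digests above, you can use Python's standard library (a minimal sketch):

import hashlib
from pathlib import Path

# Compare the local file's SHA256 against the published digest above.
digest = hashlib.sha256(Path("annbatch-0.1.5.tar.gz").read_bytes()).hexdigest()
print(digest == "6b747c097135af54e8dcfd9e1c6eb5acdd9467ba1cb52376483f08723f42de89")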

Provenance

The following attestation bundles were made for annbatch-0.1.5.tar.gz:

Publisher: release.yaml on scverse/annbatch

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file annbatch-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: annbatch-0.1.5-py3-none-any.whl
  • Size: 42.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for annbatch-0.1.5-py3-none-any.whl:

  • SHA256: 853c412c82b19d46bf6ce0f8105b517d4195b1b3a54814b00897cab581a58d6d
  • MD5: 8fceb1b06470cb73aab3b57dab441f34
  • BLAKE2b-256: f4a47de8b3d44cc993aa8d5782fe42c810535cd1b83566af7ad8d407385778ba


Provenance

The following attestation bundles were made for annbatch-0.1.5-py3-none-any.whl:

Publisher: release.yaml on scverse/annbatch

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
