GenVarLoader

Pipeline for efficient genomic data processing.

GenVarLoader provides a fast, memory-efficient data loader for training sequence models on genetic variation. For example, it can be used to train a DNA language model on human genetic variation (e.g. Nucleotide Transformer) or to train sequence-to-function models with genetic variation (e.g. BigRNA).

Features

  • Avoids writing any sequences to disk
  • Generates haplotypes up to 1,000 times faster than reading a FASTA file
  • Generates tracks up to 450 times faster than reading a BigWig
  • Supports indels and re-aligns tracks to haplotypes that have them
  • Extensible to new file formats: drop a feature request! Currently supports VCF, PGEN, and BigWig

Tutorial

Installation

pip install genvarloader

PyTorch is not included as a dependency since installing it may require platform-specific instructions.
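For example, a default PyTorch build can typically be installed with pip; consult pytorch.org for platform-specific (e.g. CUDA) instructions:

pip install torch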

Write a gvl.Dataset

GenVarLoader has both a CLI and a Python API for writing datasets. The Python API provides some extra flexibility, for example for a multi-task objective (see the sketch after the next example).

genvarloader cool_dataset.gvl interesting_regions.bed --variants cool_variants.vcf --bigwig-table samples_to_bigwigs.csv --length 2048 --max-jitter 128

Here, samples_to_bigwigs.csv has columns sample and path, mapping each sample to its BigWig file.
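For illustration, a minimal samples_to_bigwigs.csv might look like the following (the sample names match those used later in this tutorial; the paths are hypothetical):

sample,path
David,/data/bigwigs/David.bw
Aaron,/data/bigwigs/Aaron.bw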

This could equivalently be done in Python as:

import genvarloader as gvl

gvl.write(
    path="cool_dataset.gvl",
    bed="interesting_regions.bed",
    variants="cool_variants.vcf",
    bigwigs=gvl.BigWigs.from_table("bigwig", "samples_to_bigwigs.csv"),
    length=2048,
    max_jitter=128,
)
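As for the extra flexibility of the Python API, here is a hypothetical multi-task sketch that writes one named set of BigWig tracks per task. It assumes the bigwigs parameter also accepts a list of gvl.BigWigs, so check the API reference before relying on it:

import genvarloader as gvl

gvl.write(
    path="cool_dataset.gvl",
    bed="interesting_regions.bed",
    variants="cool_variants.vcf",
    # Hypothetical: one gvl.BigWigs per task, each with its own track name.
    bigwigs=[
        gvl.BigWigs.from_table("rna", "samples_to_rna_bigwigs.csv"),
        gvl.BigWigs.from_table("atac", "samples_to_atac_bigwigs.csv"),
    ],
    length=2048,
    max_jitter=128,
)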

Open a gvl.Dataset and get a PyTorch DataLoader

import genvarloader as gvl

dataset = gvl.Dataset.open(path="cool_dataset.gvl", reference="hg38.fa")
train_samples = ["David", "Aaron"]
train_dataset = dataset.subset_to(regions="train_regions.bed", samples=train_samples)
train_dataloader = train_dataset.to_dataloader(batch_size=32, shuffle=True, num_workers=1)

# use it in your training loop
for haplotypes, tracks in train_dataloader:
    ...

Inspect specific instances

dataset[99]  # 100th instance of the raveled dataset
dataset[0, 9]  # first region, 10th sample
dataset.isel(regions=0, samples=9)
dataset.sel(regions=dataset.get_bed()[0], samples=dataset.samples[9])
dataset[:10]  # first 10 instances
dataset[:10, :5]  # first 10 regions and 5 samples

Transform the data on-the-fly

import seqpro as sp
from einops import rearrange

def transform(haplotypes, tracks):
    ohe = sp.DNA.ohe(haplotypes)
    ohe = rearrange(ohe, "batch length alphabet -> batch alphabet length")
    return ohe, tracks

transformed_dataset = dataset.with_settings(transform=transform)

Pre-computing transformed tracks

Suppose we want to return tracks that are a z-scored, log(CPM + 1) version of the originals. It is sometimes better to write these to disk to avoid recomputing them during training or inference.

import numpy as np

# We'll assume we already have an array of total counts for each sample.
# This usually can't be derived from a gvl.Dataset since it only has data for specific regions.
total_counts = np.load('total_counts.npy')  # shape: (samples) float32

# We'll compute the mean and std log(CPM + 1) using the training split
means = np.empty((train_dataset.n_regions, train_dataset.region_length), np.float32)
stds = np.empty_like(means)
just_tracks = train_dataset.with_settings(return_sequences=False, jitter=0)
for region in range(len(means)):
    cpm = np.log1p(just_tracks[region, :] / total_counts[:, None] * 1e6)
    means[region] = cpm.mean(0)
    stds[region] = cpm.std(0)

# Define our transformation
def z_log_cpm(dataset_indices, region_indices, sample_indices, tracks: gvl.Ragged[np.float32]):
    # In the event that the dataset only has SNPs, the full length tracks will all be the same length.
    # So, we can reshape the ragged data into a regular array.
    _tracks = tracks.data.reshape(-1, dataset.region_length)
    
    # Otherwise, we would have to leave `tracks` as a gvl.Ragged array to accommodate different lengths.
    # In that case, we could do the transformation with a Numba-compiled function instead.

    # original tracks -> log(CPM + 1) -> z-score
    _tracks = np.log1p(_tracks / total_counts[sample_indices, None] * 1e6)
    _tracks = (_tracks - means[region_indices]) / stds[region_indices]

    return gvl.Ragged.from_offsets(_tracks.ravel(), tracks.shape, tracks.offsets)

# This can take about as long as writing the original tracks or longer, depending on the transformation.
dataset_with_zlogcpm = dataset.write_transformed_track("z-log-cpm", "bigwig", transform=z_log_cpm)

# The dataset now has both tracks available, "bigwig" and "z-log-cpm", and we can choose to return either one or both.
haps_and_zlogcpm = dataset_with_zlogcpm.with_settings(return_tracks="z-log-cpm")
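
# To return both tracks, we could pass a list instead. (Hypothetical: this
# assumes return_tracks also accepts a list of track names; check the docs.)
haps_and_both = dataset_with_zlogcpm.with_settings(return_tracks=["bigwig", "z-log-cpm"])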

# If we re-opened the dataset after running this, we could write:
dataset = gvl.Dataset.open("cool_dataset.gvl", "hg38.fa", return_tracks="z-log-cpm")

Performance tips

  • GenVarLoader uses multithreading extensively, so it's best to use 0 or 1 workers with your PyTorch DataLoader.
  • A GenVarLoader Dataset is most efficient when given batches of indices rather than one index at a time. The PyTorch DataLoader uses one index at a time by default, so if you want to use a custom PyTorch Sampler, wrap it in a PyTorch BatchSampler before passing it to Dataset.to_dataloader(), as in the sketch below.
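A minimal sketch of the BatchSampler pattern, assuming the Dataset defines __len__ and that to_dataloader() forwards a sampler argument to torch's DataLoader (check the API reference for the exact signature):

from torch.utils.data import BatchSampler, RandomSampler

# Wrap any torch Sampler in a BatchSampler so the Dataset receives whole
# batches of indices at once. RandomSampler stands in for a custom Sampler.
batch_sampler = BatchSampler(
    RandomSampler(range(len(train_dataset))),  # assumes the Dataset supports len()
    batch_size=32,
    drop_last=False,
)
# Assumption: to_dataloader() forwards `sampler` to torch.utils.data.DataLoader.
train_dataloader = train_dataset.to_dataloader(sampler=batch_sampler, num_workers=1)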
