
🧬 Bioinformatics on Flyte


This repo contains tasks, workflows, image definitions, and datatypes used to standardize the orchestration of common bioinformatics tasks using Flyte.

🐳 Container Images

Adding custom dependencies alongside Flytekit

ImageSpecs in the images module build a standard set of OCI-compliant container images for use throughout the different workflows. They can be built via the entry points defined in pyproject.toml.

📈 Datatypes

Leverage dataclasses to keep things organized

Using dataclasses to define your samples provides a clean and extensible data structure to keep your workflows tidy. Instead of writing to directories and keeping track of things manually on the commandline, these dataclasses will capture relevant metadata about your samples and let you know where to find them in object storage.
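As an illustration of the shape such a dataclass might take (the field names and bucket path below are hypothetical; the repo's actual datatypes wrap Flyte file types rather than plain strings):

```python
from dataclasses import dataclass, field

@dataclass
class RawSample:
    sample: str  # sample identifier
    read1: str   # object-store URI of the forward reads
    read2: str   # object-store URI of the reverse reads
    tags: dict = field(default_factory=dict)  # free-form metadata

    def prefix(self) -> str:
        # derive a per-sample prefix so downstream tasks agree on output paths
        return f"s3://my-bucket/processed/{self.sample}"
```

Tasks pass these objects around instead of loose paths, so the metadata and the object-store locations travel together.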

🔍 Quality Control and Pre-processing

FastQC

Run arbitrary shell commands

FastQC is a widely used tool, written in Java, for gathering QC metrics about raw reads. It doesn't have any Python bindings, but luckily Flyte lets us run arbitrary ShellTasks with a clean way of passing in inputs and receiving outputs. Just define a script for what you need to do and ShellTask will handle the rest.

Automatic QC checkpointing

Decide whether to continue workflow execution based on QC metrics via conditionals

FastQC generates a summary file with a simple PASS / WARN / FAIL call across a number of different metrics. We can use conditionals in our workflow to check for any FAIL lines in the summary and automatically halt execution. This can surface an early failure without wasting valuable compute or anyone's time doing manual review.
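The gating logic itself is a one-liner over FastQC's summary file; a minimal sketch (the helper name is hypothetical — in the real workflow its boolean result would feed Flyte's `conditional()` to halt or continue):

```python
def qc_failed(summary_text: str) -> bool:
    # FastQC's summary.txt is tab-separated: STATUS\tMETRIC\tFILENAME,
    # where STATUS is PASS, WARN, or FAIL. Any FAIL line trips the gate.
    return any(line.startswith("FAIL") for line in summary_text.splitlines())
```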

fastp

Specify resources and parallelize via map task

fastp is another common pre-processing tool for filtering out bad reads, trimming, and adapter removal. It can be more memory-hungry than Flyte's default task allocation allows; luckily we can use Resources to request more and let it run efficiently. Additionally, we can use a map task in our workflow to parallelize fastp across all our samples.

👩‍🔬 Human-in-the-Loop Approval

Pause processing while waiting for human input

As a final check before moving on to alignment, we can define an explicit approval right in the workflow. By aggregating reports of all processing done up to this point and visualizing them via Decks (more on that later), a researcher can quickly get a high-level view of the work done so far and approve the analysis for further processing.

📏 Alignment

Generate indices

Leverage caching to save time on successive runs

Index generation can be a very compute-intensive step. Luckily, we can take advantage of Flyte's native caching when building the indices for Bowtie2 and HISAT2. We've also defined a cache_version in the config that relies on a hash of the reference location in the object store. This means that changing the reference will invalidate the cache and trigger a rebuild, while still letting you switch back to your old reference and hit the cache again.
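The versioning scheme can be sketched with a small helper (the function name and hash truncation are illustrative, not the repo's actual implementation):

```python
import hashlib

def reference_cache_version(reference_uri: str) -> str:
    # Hash the reference's object-store location: pointing at a new reference
    # yields a new version string and invalidates the cache, while switching
    # back to the old reference reproduces the old string and hits the cache.
    return hashlib.sha256(reference_uri.encode()).hexdigest()[:16]
```

A task could then declare something like `@task(cache=True, cache_version=reference_cache_version(ref_uri))`.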

Bowtie2 vs HISAT2

Compare aligners across an arbitrary number of inputs via dynamic workflows

When prototyping a new pipeline, it's usually a good idea to evaluate a few different tools to see how they perform with respect to runtime and resource requirements. This is easy with a dynamic workflow, which allows us to pass in an arbitrary number of inputs to be used with whatever tasks we want. In the main workflow you'll pass a list of filtered samples to each tool and be able to capture run statistics in the Alignment dataclass as well as visualize their runtimes in the Flyte console.

📋 Reporting

Visualize performance via Decks

We use MultiQC, an excellent multi-modal visualization tool, for reporting. After gathering all relevant metrics from a workflow, we're able to render that report via Decks, giving us rich run statistics without ever leaving the Flyte console!

Download files

Download the file for your platform.

Source Distribution

unionbio-0.1.1.tar.gz (28.4 kB)

Uploaded Source

Built Distribution


unionbio-0.1.1-py3-none-any.whl (41.0 kB)

Uploaded Python 3

File details

Details for the file unionbio-0.1.1.tar.gz.

File metadata

  • Download URL: unionbio-0.1.1.tar.gz
  • Upload date:
  • Size: 28.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Darwin/23.5.0

File hashes

Hashes for unionbio-0.1.1.tar.gz

  • SHA256: d1e68b4c02de962e36b5cd2d63e9e85614d358e49214a3f3298352f0bc605529
  • MD5: 3094105f3af00a70278ca05c2bc4dda6
  • BLAKE2b-256: 2ebab54ad5a77b614622820d64640fe4ea701d7cad81d6649d699fd1a638060d


File details

Details for the file unionbio-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: unionbio-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 41.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Darwin/23.5.0

File hashes

Hashes for unionbio-0.1.1-py3-none-any.whl

  • SHA256: 08525efd48e63e410ee168a14438048789b80c3e7ff739048aed284c928e73a2
  • MD5: 39e8ed2932d4cc1bd588b55731062bcf
  • BLAKE2b-256: ff9c1554b69d68e8389123f9b0ae213694e453c0d680ad58ab0972b91b0a501d

