Neural network sequence error correction.

Project description

Medaka

medaka is a tool to create consensus sequences and variant calls from nanopore sequencing data. This task is performed using neural networks applied to a pileup of individual sequencing reads against a reference sequence, most commonly either a draft assembly or a database reference sequence. It provides state-of-the-art results, outperforming sequence-graph based methods and signal-based methods, whilst also being faster.

© 2018- Oxford Nanopore Technologies Ltd.

Features

  • Requires only basecalled data (.fasta or .fastq).
  • Improved accuracy over graph-based methods (e.g. Racon).
  • 50X faster than Nanopolish (and can run on GPUs).
  • Includes extras for implementing and training bespoke correction networks.
  • Works on Linux and macOS.
  • Open source (Oxford Nanopore Technologies PLC. Public License Version 1.0).

For creating draft assemblies we recommend Flye.

Installation

Medaka can be installed in one of several ways.

Installation with pip

Official binary releases of medaka are available on PyPI and can be installed using pip:

pip install medaka

On contemporary Linux and macOS platforms this will install a precompiled binary; on other platforms a source distribution may be fetched and compiled.

We recommend using medaka within a virtual environment, viz.:

python3 -m venv medaka
. ./medaka/bin/activate
pip install --upgrade pip
pip install medaka

Using this method requires the user to provide several binaries (samtools, bgzip, tabix, and minimap2) and place these within the PATH. samtools/bgzip/tabix versions >=1.14 and minimap2 version >=2.17 are recommended, as these are the versions used in the development of medaka.
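A quick way to verify the setup is to check each tool from the shell; a minimal sketch, assuming only that the four tools above should resolve on the PATH:

```shell
# Check that the binaries medaka expects are discoverable on the PATH.
# Recommended versions: samtools/bgzip/tabix >=1.14, minimap2 >=2.17.
missing=""
for tool in samtools bgzip tabix minimap2; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found ($(command -v "$tool"))"
    else
        echo "$tool: missing - install it and add it to the PATH"
        missing="$missing $tool"
    fi
done
```

Any tool reported as missing must be installed before medaka_consensus will run end to end.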

The default installation has the capacity to run on a GPU (see Using a GPU below), or on CPU. If you are using medaka exclusively on CPU, and don't need the ability to run on GPU, you may wish to install the CPU-only version with:

pip install medaka --extra-index-url https://download.pytorch.org/whl/cpu

The medaka-cpu package has been deprecated for versions >2.2.0, as these are now identical to the standard medaka package. To upgrade an existing virtual environment, remove the medaka-cpu package and install medaka instead:

pip uninstall medaka-cpu
pip install medaka

Installation with conda

The bioconda medaka packages are not supported by Oxford Nanopore Technologies.

For those who prefer the conda package manager, medaka is available via the anaconda.org channel:

conda create -n medaka -c conda-forge -c nanoporetech -c bioconda medaka

Installations with this method will bundle the additional tools required to run an end-to-end correction workflow.

Installation from source

This method is useful only when the above methods have failed, as it will assist in building various dependencies. It's unlikely that our developers will be able to provide further assistance with your specific circumstances if you install using this method.

Medaka can be installed from its source quite easily on most systems.

Before installing medaka it may be necessary to install some prerequisite libraries, best installed by a package manager. On Ubuntu these are:

bzip2 g++ zlib1g-dev libbz2-dev liblzma-dev libffi-dev libncurses5-dev
libcurl4-gnutls-dev libssl-dev curl make cmake wget python3-all-dev
python-virtualenv

In addition it is required to install and set up git LFS before cloning the repository.

A Makefile is provided to fetch, compile and install all direct dependencies into a python virtual environment. To set up the environment, run:

# Note: certain files are stored in git-lfs, https://git-lfs.github.com/,
#       which must therefore be installed first.
git clone https://github.com/nanoporetech/medaka.git
cd medaka
make install
. ./venv/bin/activate

Using this method, both samtools and minimap2 are built from source and need not be provided by the user.

When building from source, to install a CPU-only version without the capacity to run on GPU, modify the above to:

MEDAKA_CPU=1 make install

Using a GPU

Since version 2.0 medaka uses PyTorch. Prior versions (v1.x) used Tensorflow.

The default version of PyTorch that is installed when building from source or when installing through pip can make immediate use of GPUs via NVIDIA CUDA. However, note that the torch package is compiled against specific versions of the CUDA and cuDNN libraries; users are directed to the torch installation pages for further information. cuDNN can be obtained from the cuDNN Archive, whilst CUDA can be obtained from the CUDA Toolkit Archive.

Installation with conda is a little different. See the [conda-forge](https://conda-forge.org/docs/user/tipsandtricks/#installing-cuda-enabled-packages-like-tensorflow-and-pytorch) documentation. In summary, the conda package should do something sensible bespoke to the computer it is being installed on.

As described above, if the capability to run on GPU is not required, medaka can be installed with a CPU-only version of PyTorch that doesn't depend on the CUDA libraries, as follows:

pip install medaka --extra-index-url https://download.pytorch.org/whl/cpu

if using the prebuilt packages, or

MEDAKA_CPU=1 make install

if building from source.

GPU Usage notes

Depending on your GPU, medaka may show out-of-memory errors when running. To avoid these, the inference batch size can be reduced from the default value by setting the -b option when running medaka_consensus. A value of -b 100 is suitable for 11 GB GPUs.
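Scaling that reference point linearly with available GPU memory gives a rough starting value. This is a heuristic sketch, not an official formula, and GPU_MEM_GB is a hypothetical variable (it could, for instance, be parsed from nvidia-smi output):

```shell
# Heuristic: scale the batch size from the documented reference point
# of -b 100 on an 11 GB GPU; integer division rounds down, which keeps
# the estimate conservative.
GPU_MEM_GB=11   # e.g. from: nvidia-smi --query-gpu=memory.total --format=csv,noheader
BATCH=$(( GPU_MEM_GB * 100 / 11 ))
echo "try: medaka_consensus ... -b ${BATCH}"
```

If out-of-memory errors persist, reduce the value further and retry.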

Usage

medaka can be run using its default settings through the medaka_consensus program. An assembly in .fasta format and basecalls in .fasta or .fastq format are required. The program uses both samtools and minimap2. If medaka has been installed using the from-source method these will be present within the medaka environment; otherwise they will need to be provided by the user.

source ${MEDAKA}  # i.e. medaka/venv/bin/activate
NPROC=$(nproc)
BASECALLS=basecalls.fa
DRAFT=draft_assm/assm_final.fa
OUTDIR=medaka_consensus
medaka_consensus -i ${BASECALLS} -d ${DRAFT} -o ${OUTDIR} -t ${NPROC}

The variables BASECALLS, DRAFT, and OUTDIR in the above should be set appropriately. The -t option specifies the number of CPU threads to use.

When medaka_consensus has finished running, the consensus will be saved to ${OUTDIR}/consensus.fasta.
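One simple sanity check is that the polished consensus contains one record per draft contig; sketched below with hypothetical toy files standing in for a real draft and its output:

```shell
# Toy stand-ins for a real draft assembly and its polished consensus.
printf '>contig1\nACGT\n>contig2\nGGGG\n' > draft.fa
printf '>contig1\nACGTA\n>contig2\nGGGGC\n' > consensus.fasta

# Count FASTA records in each; the totals should normally match.
draft_n=$(grep -c '^>' draft.fa)
cons_n=$(grep -c '^>' consensus.fasta)
echo "draft contigs: ${draft_n}, consensus contigs: ${cons_n}"
```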

Haploid variant calling

Variant calling for haploid samples is enabled through the medaka_variant workflow:

medaka_variant -i <reads.fastq> -r <ref.fasta>

which requires the reads as a .fasta or .fastq and a reference sequence as a .fasta file.

Diploid variant calling

The diploid variant calling workflow that was historically implemented within the medaka package has been surpassed in accuracy and compute performance by other methods; it has therefore been deprecated. Our current recommendation for performing this task is to use Clair3, either directly or through the Oxford Nanopore Technologies provided Nextflow implementation available through EPI2ME Labs.

Models

For best results it is important to specify the correct inference model, according to the basecaller used. Allowed values can be found by running medaka tools list_models.

Recent basecallers

Recent basecaller versions annotate their output with their model version. In such cases medaka can inspect the files and attempt to select an appropriate model for itself. This typically works best in the case of BAM output from basecallers. It will also work for FASTQ input, provided the FASTQ has been created from basecaller output using:

samtools fastq -T '*' dorado.bam | gzip -c > dorado.fastq.gz

The command medaka inference will attempt to automatically determine a correct model by inspecting its BAM input file. The helper scripts medaka_consensus and medaka_variant will make similar attempts from their FASTQ input.

To inspect files for yourself, the command:

medaka tools resolve_model --auto_model <consensus/variant> <input.bam/input.fastq>

will print the model that automatic model selection will use.

Bacterial and plasmid sequencing

For native data with bacterial modifications, such as bacterial isolates, metagenomic samples, or plasmids expressed in bacteria, there is a research model that shows improved consensus accuracy. This model is compatible with several basecaller versions for the R10 chemistries. By adding the flag --bacteria, the bacterial model will be selected if it is compatible with the input basecaller:

medaka_consensus -i ${BASECALLS} -d ${DRAFT} -o ${OUTDIR} -t ${NPROC} --bacteria

A legacy default model will be used if the bacterial model is not compatible with the input files. The model selection can be confirmed by running:

medaka tools resolve_model --auto_model consensus_bacteria <input.bam/input.fastq>

which will display the model r1041_e82_400bps_bacterial_methylation if compatible or the default model name otherwise.

Read-level models

Recently, "read-level" consensus polishing models have been developed that integrate additional read information, such as base quality scores, alignment quality scores, and (optionally) dwell time information from the signal. These models are intended primarily for human genome polishing. For those wishing to test these models, we recommend using dorado polish for the best performance and ease of use.

The same models are also provided in medaka for completeness, indicated by "rl" in the model name, but they will not be selected automatically. Since read-level models are significantly more computationally intensive than previous medaka models, it is highly recommended to run them on GPU.

When automatic selection is unsuccessful, and older basecallers

If the name of the basecaller model used is known, but has been lost from the input files, the basecaller model can be provided to medaka directly. It must, however, be appended with either :consensus or :variant according to whether the user wishes to use the consensus or variant calling medaka model. For example:

medaka inference input.bam output.hdf \
    --model dna_r10.4.1_e8.2_400bps_hac@v4.1.0:variant

will use the medaka variant calling model appropriate for use with the basecaller model named dna_r10.4.1_e8.2_400bps_hac@v4.1.0.

Historically, medaka models followed a nomenclature describing both the chemistry and basecaller versions. These old models are now deprecated; users are encouraged to rebasecall their data with a more recent basecaller version prior to using medaka.

Improving parallelism

The medaka_consensus program is good for simple datasets but perhaps not optimal for running large datasets at scale. A higher level of parallelism can be achieved by running the component steps of medaka_consensus independently. The program performs three tasks:

  1. alignment of reads to the input assembly (via mini_align, which is a thin wrapper around minimap2)
  2. running of inference algorithm across assembly regions (medaka inference)
  3. aggregation of the results of step 2 to create consensus sequences (medaka sequence)

The three steps are discrete, and can be split apart and run independently. In most cases, step 2 is the bottleneck and can be trivially parallelized. The medaka inference program can be supplied a --regions argument which will restrict its action to particular assembly sequences from the .bam file output in step 1. Therefore individual jobs can be run for batches of assembly sequences simultaneously. In the final step, medaka sequence can take as input one or more of the .hdf files output by step 2.

So in summary something like this is possible:

# align reads to assembly
mini_align -i basecalls.fasta -r assembly.fasta -P -m \
    -p calls_to_draft.bam -t <threads>
# run lots of jobs like this:
mkdir results
medaka inference calls_to_draft.bam results/contigs1-4.hdf \
    --region contig1 contig2 contig3 contig4
...
# wait for jobs, then collate results
medaka sequence results/*.hdf assembly.fasta polished.assembly.fasta

It is not recommended to specify a value of --threads greater than 2 for medaka inference, since the compute scaling efficiency is poor beyond this. Note also that medaka inference may be seen to use resources equivalent to <threads> + 4, as an additional 4 threads are used for reading and preparing input data.
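The per-region job generation in step 2 can be scripted. The sketch below uses a hypothetical toy assembly and prints one medaka inference command per batch of four contigs; the printed commands could then be dispatched to a job scheduler rather than run serially:

```shell
# Toy assembly with five contigs, standing in for a real draft.
printf '>contig%s\nACGT\n' 1 2 3 4 5 > assembly.fasta

# Batch contig names four at a time and print one inference job per batch.
i=0
grep '^>' assembly.fasta | sed 's/^>//' | xargs -n 4 | while read -r batch; do
    i=$((i + 1))
    echo "medaka inference calls_to_draft.bam results/batch${i}.hdf --region ${batch}"
done
```

The batch size of four mirrors the contigs1-4 grouping in the example above; larger or smaller batches trade off job count against per-job runtime.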

Origin of the draft sequence

Medaka has been trained to correct draft sequences output from the Flye assembler.

Processing a draft sequence from alternative sources (e.g. the output of canu or wtdbg2) may lead to different results.

Historical correction models in medaka were trained to correct draft sequences output from the canu assembler with racon applied either once, or four times iteratively. For contemporary models this is not the case and medaka should be used directly on the output of Flye.

Acknowledgements

We thank Joanna Pineda and Jared Simpson for providing htslib code samples which greatly aided the development of the optimised feature generation code, and for testing the version 0.4 release candidates.

We thank Devin Drown for working through use of medaka with his RTX 2080 GPU.

Help

Licence and Copyright

© 2018- Oxford Nanopore Technologies Ltd.

medaka is distributed under the terms of the Oxford Nanopore Technologies PLC. Public License Version 1.0

Research Release

Research releases are provided as technology demonstrators to provide early access to features or stimulate Community development of tools. Support for this software will be minimal and is only provided directly by the developers. Feature requests, improvements, and discussions are welcome and can be implemented by forking and pull requests. However, as much as we would like to rectify every issue and piece of feedback users may have, the developers may have limited resources for support of this software. Research releases may be unstable and subject to rapid iteration by Oxford Nanopore Technologies.
