
Neural network sequence error correction.


Medaka

medaka is a tool to create consensus sequences and variant calls from nanopore sequencing data. This task is performed using neural networks applied to a pileup of individual sequencing reads against a reference sequence, most commonly either a draft assembly or a database reference sequence. It provides state-of-the-art results, outperforming sequence-graph-based methods and signal-based methods, whilst also being faster.


Features

  • Requires only basecalled data (.fasta or .fastq).
  • Improved accuracy over graph-based methods (e.g. Racon).
  • 50X faster than Nanopolish (and can run on GPUs).
  • Includes extras for implementing and training bespoke correction networks.
  • Works on Linux and MacOS.
  • Open source (Oxford Nanopore Technologies PLC. Public License Version 1.0)

For creating draft assemblies we recommend Flye.

Installation

Medaka can be installed in one of several ways.

Installation with pip

Official binary releases of medaka are available on PyPI and can be installed using pip:

pip install medaka

On contemporary Linux and macOS platforms this will install a precompiled binary; on other platforms a source distribution may be fetched and compiled.

We recommend using medaka within a virtual environment, viz.:

python3 -m venv medaka
. ./medaka/bin/activate
pip install --upgrade pip
pip install medaka

Using this method requires the user to provide several binaries (samtools, bgzip, tabix, and minimap2) and place these within the PATH. samtools/bgzip/tabix versions >=1.14 and minimap2 version >=2.17 are recommended, as these are the versions used in the development of medaka.
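As a quick check, a small shell loop can confirm whether the expected tools are discoverable. The helper name check_deps below is illustrative only and is not part of medaka:

```shell
# Illustrative helper (not part of medaka): report which required tools
# are discoverable on the PATH.
check_deps() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found: $tool"
        else
            echo "missing: $tool" >&2
        fi
    done
}

# The external tools medaka's helper scripts expect:
check_deps samtools bgzip tabix minimap2
```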

The default installation has the capacity to run on a GPU (see Using a GPU below), or on CPU. If you are using medaka exclusively on CPU, and don't need the ability to run on GPU, you may wish to install the CPU-only version with:

pip install medaka-cpu --extra-index-url https://download.pytorch.org/whl/cpu

Installation with conda

The bioconda medaka packages are not supported by Oxford Nanopore Technologies.

For those who prefer the conda package manager, medaka is available via the anaconda.org channel:

conda create -n medaka -c conda-forge -c nanoporetech -c bioconda medaka

Installations with this method will bundle the additional tools required to run an end-to-end correction workflow.

Installation from source

This method is useful only when the above methods have failed, as it will assist in building various dependencies. It's unlikely that our developers will be able to provide further assistance with your specific circumstances if you install using this method.

Medaka can be installed from its source quite easily on most systems.

Before installing medaka it may be necessary to install some prerequisite libraries, best installed by a package manager. On Ubuntu these are:

bzip2 g++ zlib1g-dev libbz2-dev liblzma-dev libffi-dev libncurses5-dev
libcurl4-gnutls-dev libssl-dev curl make cmake wget python3-all-dev
python-virtualenv

In addition it is required to install and set up git LFS before cloning the repository.

A Makefile is provided to fetch, compile and install all direct dependencies into a python virtual environment. To set-up the environment run:

# Note: certain files are stored in git-lfs, https://git-lfs.github.com/,
#       which must therefore be installed first.
git clone https://github.com/nanoporetech/medaka.git
cd medaka
make install
. ./venv/bin/activate

Using this method, both samtools and minimap2 are built from source and need not be provided by the user.

When building from source, to install a CPU-only version without the capacity to run on GPU, modify the above to:

MEDAKA_CPU=1 make install

Using a GPU

Since version 2.0 medaka uses PyTorch. Prior versions (v1.x) used Tensorflow.

The default version of PyTorch that is installed when building from source or when installing through pip can make immediate use of GPUs via NVIDIA CUDA. However, note that the torch package is compiled against specific versions of the CUDA and cuDNN libraries; users are directed to the torch installation pages for further information. cuDNN can be obtained from the cuDNN Archive, and CUDA from the CUDA Toolkit Archive.
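Before relying on a CUDA-enabled torch build, it can be useful to confirm that an NVIDIA driver is visible at all. A minimal check, using nvidia-smi (which ships with the NVIDIA driver, not with medaka):

```shell
# Check for a visible NVIDIA driver before relying on the CUDA-enabled
# torch build; fall back to a hint about medaka-cpu otherwise.
if command -v nvidia-smi >/dev/null 2>&1; then
    GPU_INFO=$(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)
else
    GPU_INFO="no NVIDIA driver detected; consider medaka-cpu"
fi
echo "${GPU_INFO}"
```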

Installation with conda is a little different. See the conda-forge documentation (https://conda-forge.org/docs/user/tipsandtricks/#installing-cuda-enabled-packages-like-tensorflow-and-pytorch). In summary, the conda package should do something sensible bespoke to the computer it is being installed on.

As described above, if the capability to run on GPU is not required, medaka-cpu can be installed with a CPU-only version of PyTorch that doesn't depend on the CUDA libraries, as follows:

pip install medaka-cpu --extra-index-url https://download.pytorch.org/whl/cpu

if using the prebuilt packages, or

MEDAKA_CPU=1 make install

if building from source.

GPU Usage notes

Depending on your GPU, medaka may show out-of-memory errors when running. To avoid these, the inference batch size can be reduced from the default value by setting the -b option when running medaka_consensus. A value of -b 100 is suitable for 11 GB GPUs.
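A sketch of such an invocation is shown below; the file names are the same placeholders used in the Usage section, and the command is only executed if medaka_consensus is actually installed:

```shell
# Rerun consensus with a smaller inference batch size for a
# memory-limited GPU; BASECALLS, DRAFT and OUTDIR are placeholders.
BASECALLS=basecalls.fa
DRAFT=draft_assm/assm_final.fa
OUTDIR=medaka_consensus
CMD="medaka_consensus -i ${BASECALLS} -d ${DRAFT} -o ${OUTDIR} -b 100"
echo "${CMD}"
# run it only if medaka is actually installed in this environment
command -v medaka_consensus >/dev/null 2>&1 && ${CMD} || true
```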

Usage

medaka can be run using its default settings through the medaka_consensus program. An assembly in .fasta format and basecalls in .fasta or .fastq formats are required. The program uses both samtools and minimap2. If medaka has been installed using the from-source method these will be present within the medaka environment, otherwise they will need to be provided by the user.

source ${MEDAKA}  # i.e. medaka/venv/bin/activate
NPROC=$(nproc)
BASECALLS=basecalls.fa
DRAFT=draft_assm/assm_final.fa
OUTDIR=medaka_consensus
medaka_consensus -i ${BASECALLS} -d ${DRAFT} -o ${OUTDIR} -t ${NPROC}

The variables BASECALLS, DRAFT, and OUTDIR in the above should be set appropriately. The -t option specifies the number of CPU threads to use.

When medaka_consensus has finished running, the consensus will be saved to ${OUTDIR}/consensus.fasta.
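A quick sanity check once a run completes is to count the sequences in the polished FASTA. A two-record file is fabricated here so the snippet is self-contained; in practice you would point it at ${OUTDIR}/consensus.fasta:

```shell
# Fabricated stand-in for ${OUTDIR}/consensus.fasta, purely for illustration.
printf '>contig1\nACGT\n>contig2\nGGCC\n' > consensus.fasta

# Number of consensus sequences (FASTA header lines).
grep -c '^>' consensus.fasta
```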

Haploid variant calling

Variant calling for haploid samples is enabled through the medaka_variant workflow:

medaka_variant -i <reads.fastq> -r <ref.fasta>

which requires the reads as a .fasta or .fastq and a reference sequence as a .fasta file.

Diploid variant calling

The diploid variant calling workflow that was historically implemented within the medaka package has been surpassed in accuracy and compute performance by other methods; it has therefore been deprecated. Our current recommendation for performing this task is to use Clair3, either directly or through the Oxford Nanopore Technologies provided Nextflow implementation available through EPI2ME Labs.

Models

For best results it is important to specify the correct inference model, according to the basecaller used. Allowed values can be found by running medaka tools list_models.

Recent basecallers

Recent basecaller versions annotate their output with their model version. In such cases medaka can inspect the files and attempt to select an appropriate model for itself. This typically works best in the case of BAM output from basecallers. It will also work for FASTQ input provided the FASTQ has been created from basecaller output using:

samtools fastq -T '*' dorado.bam | gzip -c > dorado.fastq.gz

The command medaka inference will attempt to automatically determine a correct model by inspecting its BAM input file. The helper scripts medaka_consensus and medaka_variant will make similar attempts from their FASTQ input.

To inspect files for yourself, the command:

medaka tools resolve_model --auto_model <consensus/variant> <input.bam/input.fastq>

will print the model that automatic model selection will use.

Bacterial and plasmid sequencing

For native data with bacterial modifications, such as bacterial isolates, metagenomic samples, or plasmids expressed in bacteria, there is a research model that shows improved consensus accuracy. This model is compatible with several basecaller versions for the R10 chemistries. By adding the flag --bacteria the bacterial model will be selected if it is compatible with the input basecallers:

medaka_consensus -i ${BASECALLS} -d ${DRAFT} -o ${OUTDIR} -t ${NPROC} --bacteria

A legacy default model will be used if the bacterial model is not compatible with the input files. The model selection can be confirmed by running:

medaka tools resolve_model --auto_model consensus_bacteria <input.bam/input.fastq>

which will display the model r1041_e82_400bps_bacterial_methylation if compatible or the default model name otherwise.

When automatic selection is unsuccessful, and older basecallers

If the name of the basecaller model used is known, but has been lost from the input files, the basecaller model can be provided to medaka directly. It must however be appended with either :consensus or :variant according to whether the user wishes to use the consensus or the variant calling medaka model. For example:

medaka inference input.bam output.hdf \
    --model dna_r10.4.1_e8.2_400bps_hac@v4.1.0:variant

will use the medaka variant calling model appropriate for use with the basecaller model named dna_r10.4.1_e8.2_400bps_hac@v4.1.0.

Historically, medaka models followed a nomenclature describing both the chemistry and basecaller versions. These old models are now deprecated; users are encouraged to rebasecall their data with a more recent basecaller version prior to using medaka.

Improving parallelism

The medaka_consensus program is good for simple datasets but perhaps not optimal for running large datasets at scale. A higher level of parallelism can be achieved by running the component steps of medaka_consensus independently. The program performs three tasks:

  1. alignment of reads to input assembly (via mini_align which is a thin veil over minimap2)
  2. running of consensus algorithm across assembly regions (medaka inference)
  3. aggregation of the results of 2. to create consensus sequences (medaka sequence)

The three steps are discrete, and can be split apart and run independently. In most cases, Step 2. is the bottleneck and can be trivially parallelized. The medaka inference program can be supplied a --regions argument which will restrict its action to particular assembly sequences from the .bam file output in Step 1. Therefore individual jobs can be run for batches of assembly sequences simultaneously. In the final step, medaka sequence can take as input one or more of the .hdf files output by Step 2.

So in summary something like this is possible:

# align reads to assembly
mini_align -i basecalls.fasta -r assembly.fasta -P -m \
    -p calls_to_draft.bam -t <threads>
# run lots of jobs like this:
mkdir results
medaka inference calls_to_draft.bam results/contigs1-4.hdf \
    --region contig1 contig2 contig3 contig4
...
# wait for jobs, then collate results
medaka sequence results/*.hdf polished.assembly.fasta
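One way to generate those per-batch jobs is to chunk the contig names into fixed-size groups. The names would normally be taken from the assembly's .fai index (samtools faidx assembly.fasta); a small list is hard-coded below so the sketch is self-contained:

```shell
# Contig names would normally come from the assembly index, e.g.
#   cut -f1 assembly.fasta.fai > contigs.txt
# a small list is hard-coded here for illustration.
printf '%s\n' contig1 contig2 contig3 contig4 contig5 > contigs.txt

# Group contigs four per job and emit one medaka inference command per group.
n=0
xargs -n 4 < contigs.txt | while read -r regions; do
    n=$((n + 1))
    echo "medaka inference calls_to_draft.bam results/batch${n}.hdf --region ${regions}"
done > jobs.txt

cat jobs.txt   # submit each line as an independent job
```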

It is not recommended to specify a value of --threads greater than 2 for medaka inference since the compute scaling efficiency is poor beyond this. Note also that medaka inference may be seen to use resources equivalent to <threads> + 4, as an additional 4 threads are used for reading and preparing input data.

Origin of the draft sequence

Medaka has been trained to correct draft sequences output from the Flye assembler.

Processing a draft sequence from alternative sources (e.g. the output of canu or wtdbg2) may lead to different results.

Historical correction models in medaka were trained to correct draft sequences output from the canu assembler with racon applied either once, or four times iteratively. For contemporary models this is not the case and medaka should be used directly on the output of Flye.

Acknowledgements

We thank Joanna Pineda and Jared Simpson for providing htslib code samples which greatly aided development of the optimised feature generation code, and for testing the version 0.4 release candidates.

We thank Devin Drown for working through use of medaka with his RTX 2080 GPU.

Licence and Copyright

© 2018- Oxford Nanopore Technologies Ltd.

medaka is distributed under the terms of the Oxford Nanopore Technologies PLC. Public License Version 1.0

Research Release

Research releases are provided as technology demonstrators to provide early access to features or stimulate Community development of tools. Support for this software will be minimal and is only provided directly by the developers. Feature requests, improvements, and discussions are welcome and can be implemented by forking and making pull requests. However, much as we would like to rectify every issue and piece of feedback users may have, the developers may have limited resources for support of this software. Research releases may be unstable and subject to rapid iteration by Oxford Nanopore Technologies.
