PyMIALSRTK: Nipype pipelines for the MIAL Super Resolution Toolkit

Project description


Copyright © 2016-2020 Medical Image Analysis Laboratory, University Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland

This software is distributed under the open-source BSD 3-Clause License. See LICENSE file for details.


The Medical Image Analysis Laboratory Super-Resolution ToolKit (MIALSRTK) provides a set of C++ and Python tools necessary to perform motion-robust super-resolution fetal MRI reconstruction.

The original C++ MIALSRTK library includes all algorithms and methods for brain extraction, intensity standardization, motion estimation, and super-resolution. It uses the CMake build system and depends on the open-source Insight Toolkit (ITK) image-processing library, the TCLAP command-line parser library, and OpenMP for multi-threading.

MIALSRTK has recently been extended with the pymialsrtk Python 3 library, following advances in the standardization of neuroimaging data organization and processing workflows such as the Brain Imaging Data Structure (BIDS) and BIDS App standards. This library has a modular architecture built on top of the Nipype dataflow library and consists of (1) processing nodes that interface with each of the MIALSRTK C++ tools and (2) a processing pipeline that links these interfaces in a common workflow.

The processing pipeline, with all dependencies including the C++ MIALSRTK tools, is encapsulated in a Docker image container, which handles datasets organized following the BIDS standard and is distributed as a BIDS App on Docker Hub. For execution on high-performance computing clusters, a Singularity image is also made freely available on Sylabs Cloud. To facilitate the use of Docker or Singularity, pymialsrtk provides two Python command-line wrappers (mialsuperresolutiontoolkit_docker and mialsuperresolutiontoolkit_singularity) that can generate and run the appropriate container command.

All these design considerations allow us not only to (1) represent the entire processing pipeline as an execution graph in which the MIALSRTK C++ tools are connected, but also to (2) provide a mechanism to record data provenance and execution details, and to (3) easily customize the BIDS App to suit specific needs, as interfaces with new tools can be added with relatively little effort to account for additional algorithms.

Installation

  • Install Docker or Singularity engine

  • In a Python 3.7 environment, install pymialsrtk with pip:

    pip install pymialsrtk
    
  • You are ready to use MIALSRTK BIDS App wrappers!

Usage

The mialsuperresolutiontoolkit_docker and mialsuperresolutiontoolkit_singularity Python wrappers to the MIALSRTK BIDS App accept the following command-line arguments:

$ mialsuperresolutiontoolkit_[docker|singularity] -h

usage: mialsuperresolutiontoolkit_[docker|singularity] [-h]
                                     [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                                     [--param_file PARAM_FILE]
                                     [--openmp_nb_of_cores OPENMP_NB_OF_CORES]
                                     [--nipype_nb_of_cores NIPYPE_NB_OF_CORES]
                                     [--memory MEMORY]
                                     [--masks_derivatives_dir MASKS_DERIVATIVES_DIR]
                                     [-v]
                                     [--codecarbon_output_dir CODECARBON_OUTPUT_DIR]
                                     bids_dir output_dir {participant}

Argument parser of the MIALSRTK BIDS App Python wrapper

positional arguments:
  bids_dir              The directory with the input dataset formatted
                        according to the BIDS standard.
  output_dir            The directory where the output files should be stored.
                        If you are running group level analysis this folder
                        should be prepopulated with the results of the
                        participant level analysis.
  {participant}         Level of the analysis that will be performed. Only
                        participant is available

optional arguments:
  -h, --help            show this help message and exit
  --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
                        The label(s) of the participant(s) that should be
                        analyzed. The label corresponds to
                        sub-<participant_label> from the BIDS spec (so it does
                        not include "sub-"). If this parameter is not provided
                        all subjects should be analyzed. Multiple participants
                        can be specified with a space separated list.
  --param_file PARAM_FILE
                        Path to a JSON file containing subjects' exams
                        information and super-resolution total variation
                        parameters.
  --openmp_nb_of_cores OPENMP_NB_OF_CORES
                        Specify number of cores used by OpenMP threads
                        Especially useful for NLM denoising and slice-to-
                        volume registration. (Default: 0, meaning it will be
                        determined automatically)
  --nipype_nb_of_cores NIPYPE_NB_OF_CORES
                        Specify number of cores used by the Nipype workflow
                        library to distribute the execution of independent
                        processing workflow nodes (i.e. interfaces)
                        (Especially useful in the case of slice-by-slice bias
                        field correction and intensity standardization steps
                        for example). (Default: 0, meaning it will be
                        determined automatically)
  --memory MEMORY       Limit the workflow to using the amount of specified
                        memory [in GB] (Default: 0, the workflow memory
                        consumption is not limited)
  --masks_derivatives_dir MASKS_DERIVATIVES_DIR
                        Use manual brain masks found in
                        ``<output_dir>/<masks_derivatives_dir>/`` directory
  --codecarbon_output_dir CODECARBON_OUTPUT_DIR
                        Directory path in which `codecarbon` saves a CSV file
                        called `emissions.csv` reporting carbon footprint
                        details of the overall run (Defaults to user’s home
                        directory)
  -v, --version         show program's version number and exit
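For a concrete invocation, the command line can also be assembled programmatically before being handed to a process launcher. The sketch below builds such a command from the arguments documented in the help text above; the BIDS directory, output directory, participant label, and parameter-file path are hypothetical placeholders, not paths shipped with MIALSRTK.

```python
# Sketch: assemble a MIALSRTK BIDS App wrapper invocation from the arguments
# documented above. All paths and the participant label are hypothetical
# placeholders to be replaced with your own dataset locations.
import shlex

cmd = [
    "mialsuperresolutiontoolkit_docker",  # or mialsuperresolutiontoolkit_singularity
    "/data/ds-example",                   # bids_dir (positional)
    "/data/ds-example/derivatives",       # output_dir (positional)
    "participant",                        # analysis level (only 'participant' is available)
    "--participant_label", "01",
    "--param_file", "/data/ds-example/code/participants_params.json",
    "--openmp_nb_of_cores", "2",
    "--nipype_nb_of_cores", "4",
]

# Render the command as a single shell-safe string for inspection.
print(shlex.join(cmd))
```

Once Docker (or Singularity) is installed and the BIDS dataset is in place, the same list can be executed directly with subprocess.run(cmd, check=True).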

Credits


Sébastien Tourbier

🎨 ⚠️ 💻 💡 📖 👀

Priscille de Dumast

💡 ⚠️ 💻 📖

hamzake

💡 ⚠️ 💻 📖

Hélène Lajous

🐛 ⚠️

Patric Hagmann

🔣 🔍

Meritxell Bach

🔍

This project follows the all-contributors specification. Contributions of any kind welcome!

Download files

  • Source distribution: pymialsrtk-2.1.0.tar.gz (49.8 kB)

  • Built distribution: pymialsrtk-2.1.0-py3-none-any.whl (60.8 kB)
