
Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC)


👋 Introduction

The Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC) is a toolbox for applying AI methods to accelerated MRI reconstruction (REC), MRI segmentation (SEG), quantitative MR imaging (qMRI), and multitask learning (MTL), i.e., performing multiple tasks simultaneously, such as reconstruction and segmentation. Each task is implemented in a separate collection consisting of data loaders, transformations, models, metrics, and losses. ATOMMIC is designed to be modular and extensible, making it easy to add new tasks, models, and datasets. ATOMMIC uses PyTorch Lightning for high-performance multi-GPU/multi-node mixed-precision training.
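
To make the training backbone concrete, the sketch below shows the generic PyTorch Lightning pattern that ATOMMIC builds on; the module and network are hypothetical placeholders, not ATOMMIC classes.

import torch
import pytorch_lightning as pl

class ToyReconstructionModule(pl.LightningModule):
    """Hypothetical stand-in for a task module bundling model, loss, and optimizer."""

    def __init__(self):
        super().__init__()
        # Placeholder network; ATOMMIC's actual models are far more elaborate.
        self.net = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)

    def training_step(self, batch, batch_idx):
        undersampled, target = batch
        loss = torch.nn.functional.l1_loss(self.net(undersampled), target)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,             # multi-GPU training
    precision="16-mixed",  # mixed-precision training
    max_epochs=10,
)
# trainer.fit(ToyReconstructionModule(), train_dataloaders=...)  # dataloaders omitted here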

ATOMMIC Schematic Overview

The schematic overview of ATOMMIC showcases the main components of the toolbox. First, we need an MRI dataset (e.g., CC359). Next, we define the high-level parameters: the task and the model, the undersampling, the transforms, the optimizer, the scheduler, the loss, the trainer parameters, and the experiment manager. All these parameters are defined in a .yaml file using Hydra and OmegaConf.
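
As a minimal sketch of this mechanism, a .yaml experiment file can be loaded and overridden with OmegaConf, the library underlying Hydra; the path and keys below are hypothetical, not the exact ATOMMIC schema.

from omegaconf import OmegaConf

# Load a hypothetical experiment file and apply command-line-style overrides.
base = OmegaConf.load("configs/rec_unet_cc359.yaml")    # hypothetical path
overrides = OmegaConf.from_dotlist(["trainer.max_epochs=50", "optim.lr=1e-4"])
cfg = OmegaConf.merge(base, overrides)                   # later values take precedence

print(OmegaConf.to_yaml(cfg))                            # resolved configuration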

The trained model is an .atommic module, exported with ONNX and TorchScript support, which can be used for inference. The .atommic module can also be uploaded to HuggingFace. Pretrained models are available on our HF account and can be downloaded and used for inference.
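
For example, a pretrained checkpoint can be fetched with the huggingface_hub client; the repository and file names below are placeholders, so consult the ATOMMIC HF account for the actual model cards.

from huggingface_hub import hf_hub_download

# Download a pretrained .atommic checkpoint; repo_id and filename are placeholders.
ckpt_path = hf_hub_download(
    repo_id="<hf-account>/<pretrained-model>",
    filename="<model>.atommic",
)
print(f"Checkpoint saved to {ckpt_path}")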

🚀 Quick Start Guide

The best way to get started with ATOMMIC is to work through one of the tutorials.

You can also check the projects page to see how to use ATOMMIC for specific tasks and public datasets.

The ATOMMIC paper is fully reproducible. Please check here for more information.

🤖 Training & Testing

Training and testing models in ATOMMIC is intuitive and easy. You only need to properly configure the .yaml file and run the following command:

atommic run -c path-to-config-file

⚙️ Configuration

  1. Choose the task and the model, according to the collections.

  2. Choose the dataset and the dataset parameters, according to the datasets or your own dataset.

  3. Choose the undersampling.

  4. Choose the transforms.

  5. Choose the losses.

  6. Choose the optimizer.

  7. Choose the scheduler.

  8. Choose the trainer parameters.

  9. Choose the experiment manager.

You can also check the projects page to see how to configure the .yaml file for specific tasks.
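
As a hedged sketch of how these nine choices could map onto sections of a single experiment configuration, the skeleton below uses illustrative section and key names only, not the exact ATOMMIC schema.

from omegaconf import OmegaConf

# Illustrative configuration skeleton; every key name here is an assumption.
cfg = OmegaConf.create(
    {
        "model": {"model_name": "UNet"},                                # 1. task and model
        "train_ds": {"data_path": "/data/CC359/train"},                 # 2. dataset
        "mask_args": {"type": "gaussian2d", "accelerations": [4, 8]},   # 3. undersampling
        "transforms": {"normalize_inputs": True},                       # 4. transforms
        "loss": {"name": "l1"},                                         # 5. losses
        "optim": {"name": "adam", "lr": 1e-4},                          # 6. optimizer
        "scheduler": {"name": "cosine"},                                # 7. scheduler
        "trainer": {"max_epochs": 50, "devices": 2},                    # 8. trainer parameters
        "exp_manager": {"exp_dir": "outputs/rec_unet"},                 # 9. experiment manager
    }
)
print(OmegaConf.to_yaml(cfg))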

🗂️ Collections

ATOMMIC is organized into collections, each of which implements a specific task. The following collections are currently available, implementing various models as listed:

MultiTask Learning (MTL)

  1. End-to-End Recurrent Attention Network (SERANet)

  2. Image domain Deep Structured Low-Rank Network (IDSLR)

  3. Image domain Deep Structured Low-Rank UNet (IDSLRUNet)

  4. Multi-Task Learning for MRI Reconstruction and Segmentation (MTLRS)

  5. Reconstruction Segmentation method using UNet (RecSegUNet)

  6. Segmentation Network MRI (SegNet)

Quantitative MR Imaging (qMRI)

  1. Quantitative Recurrent Inference Machines (qRIMBlock)

  2. Quantitative End-to-End Variational Network (qVarNet)

  3. Quantitative Cascades of Independently Recurrent Inference Machines (qCIRIM)

MRI Reconstruction (REC)

  1. Cascades of Independently Recurrent Inference Machines (CIRIM)

  2. Convolutional Recurrent Neural Networks (CRNNet)

  3. Deep Cascade of Convolutional Neural Networks (CascadeNet)

  4. Down-Up Net (DUNet)

  5. End-to-End Variational Network (VarNet)

  6. Independently Recurrent Inference Machines (RIMBlock)

  7. Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet)

  8. KIKINet

  9. Learned Primal-Dual Net (LPDNet)

  10. Model-based Deep Learning Reconstruction (MoDL)

  11. MultiDomainNet

  12. ProximalGradient

  13. Recurrent Inference Machines (RIMBlock)

  14. Recurrent Variational Network (RecurrentVarNet)

  15. UNet

  16. Variable Splitting Network (VSNet)

  17. XPDNet

  18. Zero-Filled reconstruction (ZF)

MRI Segmentation (SEG)

  1. SegmentationAttentionUNet

  2. SegmentationDYNUNet

  3. SegmentationLambdaUNet

  4. SegmentationUNet

  5. Segmentation3DUNet

  6. SegmentationUNetR

  7. SegmentationVNet

MRI Datasets

ATOMMIC supports both public and private datasets. Several public MRI datasets are supported natively; see the documentation for the full list.

🛠️ Installation

ATOMMIC is best installed in a Conda environment.

🐍 Conda

conda create -n atommic python=3.10
conda activate atommic

📦 Pip

Use this installation mode if you want the latest released version.

pip install atommic

From source

Use this installation mode if you are contributing to atommic.

git clone https://github.com/wdika/atommic
cd atommic
bash ./reinstall.sh

🐳 Docker containers

To build an atommic container with Dockerfile from a branch, please run

  DOCKER_BUILDKIT=1 docker build -f Dockerfile -t atommic:latest .

As NeMo suggests, if you choose to work with the main branch, use NVIDIA's PyTorch container version 21.05-py3, then install from GitHub.

    docker run --gpus all -it --rm -v <atommic_github_folder>:/ATOMMIC --shm-size=8g \
    -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 \
    --ulimit stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:21.05-py3

📚 API Documentation


Access the API Documentation here

📄 License

ATOMMIC is licensed under the Apache License 2.0.

📖 Citation

If you use ATOMMIC in your research, please cite as follows:

@misc{atommic,
    author = {Karkalousos, Dimitrios and Isgum, Ivana and Marquering, Henk and Caan, Matthan},
    title = {ATOMMIC: Advanced Toolbox for Multitask Medical Imaging Consistency},
    year = {2023},
    url = {https://github.com/wdika/atommic},
}

🔗 References

The following papers have used ATOMMIC:

  1. Karkalousos, D., Isgum, I., Marquering, H. & Caan, M.W.A.. (2024). MultiTask Learning for accelerated-MRI Reconstruction and Segmentation of Brain Lesions in Multiple Sclerosis. Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 227:991-1005 Available from https://proceedings.mlr.press/v227/karkalousos24a.html.

  2. Zhang, C., Karkalousos, D., Bazin, P. L., Coolen, B. F., Vrenken, H., Sonke, J. J., Forstmann, B. U., Poot, D. H. J., & Caan, M. W. A. (2022). A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine. NeuroImage, 264. DOI

  3. Karkalousos, D., Noteboom, S., Hulst, H. E., Vos, F. M., & Caan, M. W. A. (2022). Assessment of data consistency through cascades of independently recurrent inference machines for fast and robust accelerated MRI reconstruction. Physics in Medicine & Biology. DOI

📧 Contact

For any questions, please contact Dimitris Karkalousos @ d.karkalousos@amsterdamumc.nl.

⚠️🙏 Disclaimer & Acknowledgements

Note: ATOMMIC is built on top of NeMo, which is released under the Apache 2.0 license. The NeMo documentation is reused with citation, and the original baselines are always credited in the code and docs. ATOMMIC also includes implementations of reconstruction methods from fastMRI and DIRECT and segmentation methods from MONAI, as well as methods from other codebases, which are always cited in the corresponding files. All methods in ATOMMIC are reimplemented rather than called from the original libraries, allowing for full reproducibility, support, and easy extension. ATOMMIC is an open-source project under the Apache 2.0 license.
