Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC)
👋 Introduction
The Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC) is a toolbox for applying AI methods to accelerated MRI reconstruction (REC), MRI segmentation (SEG), quantitative MR imaging (qMRI), and multitask learning (MTL), i.e., performing multiple tasks simultaneously, such as reconstruction and segmentation. Each task is implemented in a separate collection consisting of data loaders, transformations, models, metrics, and losses. ATOMMIC is designed to be modular and extensible, making it easy to add new tasks, models, and datasets. ATOMMIC uses PyTorch Lightning to enable high-performance multi-GPU/multi-node mixed-precision training.
The schematic overview of ATOMMIC showcases the main components of the toolbox. First, we need an MRI dataset (e.g., CC359). Next, we need to define the high-level parameters: the task and the model, the undersampling, the transforms, the optimizer, the scheduler, the loss, the trainer parameters, and the experiment manager. All these parameters are defined in a .yaml file using Hydra and OmegaConf.
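To make this concrete, a config file might look roughly like the sketch below. This is a hypothetical illustration of the Hydra/OmegaConf style, not ATOMMIC's actual schema; all field names are assumptions, so check the projects page for real, runnable configs.

```yaml
# Hypothetical sketch of an ATOMMIC-style .yaml config.
# Field names are illustrative, not the toolbox's actual schema.
model:
  model_name: CIRIM          # one of the REC collection models
mask_args:
  type: gaussian1d           # undersampling pattern
  accelerations: [4, 8]
train_ds:
  data_path: /data/CC359/train
optim:
  name: adamw
  lr: 1e-4
  sched:
    name: CosineAnnealing
loss:
  name: ssim
trainer:
  max_epochs: 20
  devices: 1
  precision: 16-mixed
exp_manager:
  exp_dir: outputs/cirim_cc359
```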
The trained model is an .atommic module, exported with ONNX and TorchScript support, which can be used for inference. The .atommic module can also be uploaded to HuggingFace. Pretrained models are available on our HF account and can be downloaded and used for inference.
🚀 Quick Start Guide
The best way to get started with ATOMMIC is to start with one of the tutorials:
- ATOMMIC Primer - demonstrates how to use ATOMMIC.
- ATOMMIC MRI transforms - demonstrates how to use ATOMMIC to apply transforms to MRI data.
- ATOMMIC MRI undersampling - demonstrates how to use ATOMMIC to undersample MRI data.
- ATOMMIC Upload Model on HuggingFace - demonstrates how to upload a model on HuggingFace.
You can also check the projects page to see how to use ATOMMIC for specific tasks and public datasets.
The ATOMMIC paper is fully reproducible. Please check here for more information.
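To give a flavor of what the undersampling tutorial covers, here is a hedged NumPy sketch of a fastMRI-style random 1D Cartesian undersampling mask: a fully sampled low-frequency center band plus randomly chosen phase-encoding lines. This is an illustrative sketch, not ATOMMIC's implementation, and the function name and defaults are assumptions.

```python
import numpy as np

def random_cartesian_mask(num_cols: int, acceleration: int = 4,
                          center_fraction: float = 0.08,
                          seed: int = 0) -> np.ndarray:
    """Illustrative 1D random Cartesian undersampling mask (not ATOMMIC code).

    Fully samples a low-frequency center band and keeps the remaining
    phase-encoding columns at random so that, in expectation, one in
    `acceleration` columns is sampled overall.
    """
    rng = np.random.default_rng(seed)
    num_center = int(round(num_cols * center_fraction))
    # Probability for the non-center columns so the overall rate is 1/acceleration.
    prob = (num_cols / acceleration - num_center) / (num_cols - num_center)
    mask = rng.uniform(size=num_cols) < prob
    pad = (num_cols - num_center) // 2
    mask[pad:pad + num_center] = True  # always keep the fully sampled center
    return mask

mask = random_cartesian_mask(320, acceleration=4)
# In expectation ~25% of the 320 columns are sampled, center band included.
```

In practice the mask is broadcast along the frequency-encoding axis and multiplied with k-space before reconstruction.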
🤖 Training & Testing
Training and testing models in ATOMMIC is intuitive and easy. You only need to configure the .yaml file properly and run the following command:

```shell
atommic run -c path-to-config-file
```
⚙️ Configuration
- Choose the task and the model, according to the collections.
- Choose the dataset and the dataset parameters, according to the supported datasets or your own dataset.
- Choose the undersampling.
- Choose the transforms.
- Choose the losses.
- Choose the optimizer.
- Choose the scheduler.
- Choose the trainer parameters.
- Choose the experiment manager.
You can also check the projects page to see how to configure the .yaml file for specific tasks.
🗂️ Collections
ATOMMIC is organized into collections, each of which implements a specific task. The following collections are currently available, implementing various models as listed:
MultiTask Learning (MTL)
1. End-to-End Recurrent Attention Network (SERANet)
2. Image domain Deep Structured Low-Rank Network (IDSLR)
3. Image domain Deep Structured Low-Rank UNet (IDSLRUNet)
4. Multi-Task Learning for MRI Reconstruction and Segmentation (MTLRS)
5. Reconstruction Segmentation method using UNet (RecSegUNet)
6. Segmentation Network MRI (SegNet)
Quantitative MR Imaging (qMRI)
1. Quantitative Recurrent Inference Machines (qRIMBlock)
2. Quantitative End-to-End Variational Network (qVarNet)
3. Quantitative Cascades of Independently Recurrent Inference Machines (qCIRIM)
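To illustrate what quantitative MR imaging means in practice, here is a hedged NumPy sketch of the classical baseline these models build on: voxel-wise R2* estimation from multi-echo magnitude data via a log-linear least-squares fit of S(TE) = S0 · exp(−TE · R2*). This is not ATOMMIC code; the function name and shapes are illustrative.

```python
import numpy as np

def r2star_loglinear(signal: np.ndarray, tes: np.ndarray) -> np.ndarray:
    """Voxel-wise R2* (1/s) from multi-echo data, S(TE) = S0 * exp(-TE * R2*).

    Taking the log turns the exponential decay into a line whose slope is
    -R2*, fitted here by least squares. `signal` has shape (n_echoes, ...).
    """
    log_s = np.log(np.maximum(signal, 1e-12))
    # Design matrix [TE, 1]; least-squares slope = -R2*, intercept = log S0.
    a = np.vstack([tes, np.ones_like(tes)]).T            # (n_echoes, 2)
    coeffs, *_ = np.linalg.lstsq(a, log_s.reshape(len(tes), -1), rcond=None)
    return (-coeffs[0]).reshape(signal.shape[1:])

# Synthetic check: one voxel decaying with R2* = 50 1/s.
tes = np.array([0.005, 0.010, 0.015, 0.020])             # echo times in seconds
signal = 1.0 * np.exp(-tes * 50.0)[:, None]              # shape (4, 1)
r2star = r2star_loglinear(signal, tes)
```

qRIM-style models learn to recover such maps directly from undersampled data instead of fitting fully sampled reconstructions voxel by voxel.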
MRI Reconstruction (REC)
1. Cascades of Independently Recurrent Inference Machines (CIRIM)
2. Convolutional Recurrent Neural Networks (CRNNet)
3. Deep Cascade of Convolutional Neural Networks (CascadeNet)
4. Down-Up Net (DUNet)
5. End-to-End Variational Network (VarNet)
6. Independently Recurrent Inference Machines (RIMBlock)
7. Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet)
8. KIKINet
9. Learned Primal-Dual Net (LPDNet)
10. Model-based Deep Learning Reconstruction (MoDL)
11. MultiDomainNet
12. ProximalGradient
13. Recurrent Inference Machines (RIMBlock)
14. Recurrent Variational Network (RecurrentVarNet)
15. UNet
16. Variable Splitting Network (VSNet)
17. XPDNet
18. Zero-Filled reconstruction (ZF)
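The simplest entry in the list, zero-filled (ZF) reconstruction, needs no learning at all: unsampled k-space entries are left at zero and the inverse FFT is applied. A minimal NumPy sketch of the idea (illustrative, not ATOMMIC's implementation):

```python
import numpy as np

def zero_filled_recon(kspace: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero-filled reconstruction: mask k-space, inverse 2D FFT, magnitude."""
    undersampled = kspace * mask  # unsampled entries stay zero
    img = np.fft.ifft2(np.fft.ifftshift(undersampled))
    return np.abs(img)

# Synthetic example: a square phantom, ~4x random column undersampling.
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(phantom))           # centered k-space
mask = (rng.uniform(size=64) < 0.25)[None, :]            # sample ~25% of columns
recon = zero_filled_recon(kspace, mask)                  # aliased baseline image
full = zero_filled_recon(kspace, np.ones_like(mask))     # fully sampled recovers phantom
```

The aliased ZF image is the usual input (and lower bound) for the learned reconstruction models above.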
MRI Segmentation (SEG)
1. SegmentationAttentionUNet
2. SegmentationDYNUNet
3. SegmentationLambdaUNet
4. SegmentationUNet
5. Segmentation3DUNet
6. SegmentationUNetR
7. SegmentationVNet
MRI Datasets
ATOMMIC supports public datasets, as well as private datasets. The following public datasets are supported natively:
- AHEAD: Supports the qMRI and REC tasks.
- BraTS 2023 Adult Glioma: Supports the SEG task.
- CC359: Supports the REC task.
- fastMRI Brains Multicoil: Supports the REC task.
- fastMRI Knees Multicoil: Supports the REC task.
- fastMRI Knees Singlecoil: Supports the REC task.
- ISLES 2022 Sub Acute Stroke: Supports the SEG task.
- SKM-TEA: Supports the REC, SEG, and MTL tasks.
- Stanford Knees: Supports the REC task.
🛠️ Installation
ATOMMIC is best installed in a Conda environment.
🐍 Conda
```shell
conda create -n atommic python=3.10
conda activate atommic
```
📦 Pip
Use this installation mode if you want the latest released version.
```shell
pip install atommic
```
From source
Use this installation mode if you are contributing to atommic.
```shell
git clone https://github.com/wdika/atommic
cd atommic
bash ./reinstall.sh
```
🐳 Docker containers
To build an atommic container with the Dockerfile from a branch, please run:

```shell
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t atommic:latest .
```

Note that the trailing `.` is the build context.
As NeMo suggests, if you choose to work with the main branch, use NVIDIA's PyTorch container version 21.05-py3 and then install from GitHub.

```shell
docker run --gpus all -it --rm -v <atommic_github_folder>:/ATOMMIC --shm-size=8g \
  -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 \
  --device=/dev/snd nvcr.io/nvidia/pytorch:21.05-py3
```
📚 API Documentation
Access the API documentation here.
📄 License
ATOMMIC is an open-source project under the Apache 2.0 license.
📖 Citation
If you use ATOMMIC in your research, please cite as follows:
```bibtex
@misc{atommic,
    author = {Karkalousos, Dimitrios and Isgum, Ivana and Marquering, Henk and Caan, Matthan},
    title  = {ATOMMIC: Advanced Toolbox for Multitask Medical Imaging Consistency},
    year   = {2023},
    url    = {https://github.com/wdika/atommic},
}
```
🔗 References
The following papers have used ATOMMIC:
- Karkalousos, D., Isgum, I., Marquering, H., & Caan, M. W. A. (2024). MultiTask Learning for accelerated-MRI Reconstruction and Segmentation of Brain Lesions in Multiple Sclerosis. Medical Imaging with Deep Learning, Proceedings of Machine Learning Research 227:991-1005. Available from https://proceedings.mlr.press/v227/karkalousos24a.html.
- Zhang, C., Karkalousos, D., Bazin, P. L., Coolen, B. F., Vrenken, H., Sonke, J. J., Forstmann, B. U., Poot, D. H. J., & Caan, M. W. A. (2022). A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine. NeuroImage, 264. DOI
- Karkalousos, D., Noteboom, S., Hulst, H. E., Vos, F. M., & Caan, M. W. A. (2022). Assessment of data consistency through cascades of independently recurrent inference machines for fast and robust accelerated MRI reconstruction. Physics in Medicine & Biology. DOI
📧 Contact
For any questions, please contact Dimitris Karkalousos @ d.karkalousos@amsterdamumc.nl.
⚠️🙏 Disclaimer & Acknowledgements
Note: ATOMMIC is built on top of NeMo, which is released under the Apache 2.0 license. The NeMo documentation is used with citation, and the baselines are always credited in both the code and the docs. ATOMMIC also includes implementations of reconstruction methods from fastMRI and DIRECT and segmentation methods from MONAI, as well as methods from other codebases, which are always cited in the corresponding files. All methods in ATOMMIC are reimplemented rather than called from the original libraries, allowing for full reproducibility, support, and easy extension. ATOMMIC is an open-source project under the Apache 2.0 license.