# Multi-Planar UNet for autonomous segmentation of 3D medical images
Implementation of the Multi-Planar U-Net as described in:
Mathias Perslev, Erik Dam, Akshay Pai, and Christian Igel. One Network To Segment Them All: A General, Lightweight System for Accurate 3D Medical Image Segmentation. In: Medical Image Computing and Computer Assisted Intervention (MICCAI), 2019
Pre-print version: https://arxiv.org/abs/1911.01764
Published version: https://doi.org/10.1007/978-3-030-32245-8_4
The Multi-Planar U-Net as implemented here was also used in the following context(s):
- The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge, described in https://arxiv.org/abs/2004.14003. Data supporting our team's contribution (hyperparameter files, parameter files, test-set predictions, etc.) may be found here.
## Installation

```
# From GitHub
git clone https://github.com/perslev/MultiPlanarUNet
pip install -e MultiPlanarUNet
```
This package is still frequently updated, and it is therefore recommended to install it with pip using the -e ('editable') flag, so that the package can be updated with recent changes from GitHub without re-installing:
```
cd MultiPlanarUNet
git pull
```
However, the package is also occasionally updated on PyPI and can be installed with:
```
# Note: renamed MultiPlanarUNet -> mpunet in version 0.2.4
pip install mpunet
```
```
usage: mp [script] [script args...]

Multi-Planar UNet (0.1.0)
-------------------------
Available scripts:
- cv_experiment
- cv_split
- init_project
- predict
- predict_3D
- summary
- train
- train_fusion
...
```
This package implements fully autonomous deep learning based segmentation of any 3D medical image. It uses a fixed hyperparameter set and a fixed model topology, eliminating the need for conducting hyperparameter tuning experiments. No manual involvement is required except for supplying the training data.
The system has been evaluated on a wide range of segmentation tasks covering various organs, pathologies, tissue types, and imaging modalities. The model obtained a top-5 position at the 2018 Medical Segmentation Decathlon (http://medicaldecathlon.com/) despite its simplicity and computational efficiency.
This software may be used as-is and does not require deep learning expertise to get started. It may also serve as a strong baseline method for general purpose semantic segmentation of medical images.
The base model is a slightly modified 2D U-Net (https://arxiv.org/abs/1505.04597) trained under a multi-planar framework. Specifically, the 2D model is simultaneously fed images sampled across multiple views onto the image volume.
At test time, the model predicts along each of the views and recreates a set of full segmentation volumes. These volumes are fused into one using a learned function that weights each class from each view individually to maximise performance.
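To make the fusion step concrete, the following is a minimal NumPy sketch of the idea (not the package's actual implementation); the weight array `w` stands in for the per-view, per-class parameters that `mp train_fusion` learns:

```python
import numpy as np

# Softmax probability volumes predicted along each of 6 views, assumed
# already resampled into a common space: (views, dim1, dim2, dim3, classes)
n_views, n_classes = 6, 3
probs = np.random.rand(n_views, 64, 64, 64, n_classes)
probs /= probs.sum(axis=-1, keepdims=True)

# One learned weight per (view, class) pair -- placeholder values here
w = np.random.rand(n_views, n_classes)

# Weighted sum over views, then argmax over classes -> fused label map
fused = np.einsum('vxyzc,vc->xyzc', probs, w)
segmentation = fused.argmax(axis=-1)  # shape (64, 64, 64)
```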
Project initialization, model training, evaluation, prediction etc. can be performed using the scripts located in MultiPlanarUNet.bin. The script mp.py serves as an entry point to all other scripts and is used as follows:

```
# Invoke the help menu
mp --help

# Launch the train script
mp train [arguments passed to 'train'...]

# Invoke the help menu of a sub-script
mp train --help
```
You only need to specify the training data in the format described below. Training, evaluation and prediction will be handled automatically if using the above scripts.
## Preparing the data
In order to train a model to solve a specific task, a set of manually annotated images must be stored in a folder under the following structure:
```
./data_folder/
|- train/
|--- images/
|------ image1.nii.gz
|------ image5.nii.gz
|--- labels/
|------ image1.nii.gz
|------ image5.nii.gz
|- val/
|--- images/
|--- labels/
|- test/
|--- images/
|--- labels/
|- aug/ <-- OPTIONAL
|--- images/
|--- labels/
```
The names of these folders may be customized in the parameter file (see below), but default to those shown above. The image and corresponding label map files must be identically named.
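As a quick sanity check of this structure, a small standalone snippet (not part of the package) could verify that every image has an identically named label file:

```python
# Standalone sanity check -- not part of mpunet
from pathlib import Path

data_dir = Path("./data_folder")
for sub in ("train", "val", "test"):
    images = {p.name for p in (data_dir / sub / "images").glob("*.nii*")}
    labels = {p.name for p in (data_dir / sub / "labels").glob("*.nii*")}
    for missing in sorted(images - labels):
        print(f"{sub}: no label file found for image '{missing}'")
```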
The optional aug folder may store additional images that can be included during training with a lower weight assigned in optimization.

All images must be stored in the NIfTI format (.nii or .nii.gz file suffix).
It is important that the .nii files store correct 4x4 affines for mapping
voxel coordinates to the scanner space. Specifically, the framework needs to
know the voxel size and axis orientations in order to sample isotropic images
in the scanner space.
Images should be arrays of dimension 4 with the first 3 corresponding to the image dimensions and the last the channels dimension (e.g. [256, 256, 256, 3] for a 256x256x256 image with 3 channels). Label maps should be identically shaped in the first 3 dimensions and have a single channel (e.g. [256, 256, 256, 1]). The label at a given voxel should be an integer representing the class at the given position. The background class is normally denoted '0'.
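For illustration, the requirements above can be checked with nibabel (a standard Python NIfTI library); the file paths are examples and the snippet is not part of the package:

```python
# Illustrative check of the format requirements -- not part of mpunet
import nibabel as nib
import numpy as np

img = nib.load("data_folder/train/images/image1.nii.gz")
lab = nib.load("data_folder/train/labels/image1.nii.gz")

# The 4x4 affine maps voxel indices to scanner (world) coordinates;
# the framework derives voxel sizes and axis orientations from it
assert img.affine.shape == (4, 4)
print("voxel sizes (mm):", nib.affines.voxel_sizes(img.affine))

data = img.get_fdata()
if data.ndim == 3:
    data = data[..., np.newaxis]  # add a trailing channel axis
assert data.ndim == 4, "images must be [dim1, dim2, dim3, channels]"

labels = np.asarray(lab.get_fdata()).squeeze()
assert labels.shape == data.shape[:3], "label map must match image dims"
assert np.all(labels == labels.astype(int)), "labels must be integers"
```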
## Initializing a Project
Once the data is stored under the above folder structure, a Multi-Planar project can be initialized as follows:
```
# Initialize a project at 'my_project'
# The --data_dir flag is optional
mp init_project --name my_project --data_dir ./data_folder
```
This will create a folder at path my_project and populate it with a YAML file named train_hparams.yaml, which stores all hyperparameters. Any parameter in this file may be specified manually, but they can all be set automatically.
NOTE: By default, init_project prepares a project for the Multi-Planar model. However, a 3D model is also supported and can be selected by passing the --model=3D flag (default: --model=MultiPlanar).
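For orientation only, a fragment of such a file might look as follows. This is an illustrative sketch, not the generated file: the `fit` section and the `loss`, `loss_kwargs`, and metric entries are referenced in the version notes at the end of this document, while all other keys and defaults are produced by `mp init_project`:

```yaml
# Illustrative excerpt -- 'mp init_project' generates the complete file
fit:
  loss: SparseCategoricalCrossentropy  # factory-class name (see version notes)
  loss_kwargs: {}                      # optional kwargs forwarded to the loss
  metrics: []                          # metrics must be specified manually
```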
## Training

The model can now be trained as follows:
```
mp train --num_GPUs=2  # Any number of GPUs (or 0)
```
During training various information and images will be logged automatically to the project folder. Typically, after training, the folder will look as follows:
```
./my_project/
|- images/             # Example segmentations through training
|- logs/               # Various log files
|- model/              # Stores the best model parameters
|- tensorboard/        # TensorBoard graph and metric visualization
|- train_hparams.yaml  # The hyperparameters file
|- views.npz           # An array of the view vectors used
|- views.png           # Visualization of the views used
```
## Fusion Model Training
When using the MultiPlanar model, a fusion model must be computed after the base model has been trained. This model learns to map the multiple predictions of the base model through each view to a single, stronger segmentation volume:
```
mp train_fusion --num_GPUs=2
```
## Predict and evaluate
The trained model can now be evaluated on the testing data in data_folder/test by invoking:

```
mp predict --num_GPUs=2 --out_dir predictions
```
This will create a folder my_project/predictions storing the predicted images along with Dice coefficient performance metrics.
The model can also be used to predict on images in the test folder without corresponding label files (using the --no_eval flag), or on single files, as follows:
```
# Predict on all images in the 'test' folder without label files
mp predict --no_eval

# Predict on a single image
mp predict -f ./new_image.nii.gz

# Predict on a single image and evaluate against its label file
mp predict -f ./im/new_image.nii.gz -l ./lab/new_image.nii.gz
```
A summary of the performance can be produced by invoking the following command from inside the my_project folder:

```
mp summary
>>  [***] SUMMARY REPORT FOR FOLDER [***]
>>  ./my_project/predictions/csv/
>>
>>
>>  Per class:
>>  --------------------------------
>>  Mean dice by class  +/- STD  min    max    N
>>  1   0.856  0.060  0.672  0.912  34
>>  2   0.891  0.029  0.827  0.934  34
>>  3   0.888  0.027  0.829  0.930  34
>>  4   0.802  0.164  0.261  0.943  34
>>  5   0.819  0.075  0.552  0.926  34
>>  6   0.863  0.047  0.663  0.917  34
>>
>>  Overall mean: 0.853 +- 0.088
>>  --------------------------------
>>
>>  By views:
>>  --------------------------------
>>  [ 0.8477811   0.50449719  0.16355361]  0.825
>>  [ 0.70659414 -0.35532932  0.6119361 ]  0.819
>>  [ 0.11799461 -0.07137918  0.9904455 ]  0.772
>>  [ 0.95572575 -0.28795306  0.06059151]  0.827
>>  [-0.16704373 -0.96459936  0.20406974]  0.810
>>  [-0.72188903  0.68418977  0.10373322]  0.819
>>  --------------------------------
```
## Cross Validation Experiments
Cross validation experiments may be easily performed. First, invoke the mp cv_split command to split your data_folder into a number of splits:

```
mp cv_split --data_dir ./data_folder --CV=5
```
Here, we prepare for a 5-CV setup. By default, the above command will create a folder data_folder/views/5-CV/ storing, in this case, 5 sub-folders split0, split1, ..., split4, each structured like the main data folder (including the optional aug folder, which can be set with the --aug_sub_dir flag). Inside these sub-folders, images are symlinked to their original files to save storage.
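Assuming the default naming above, the resulting structure would look roughly as follows (illustrative sketch):

```
./data_folder/views/5-CV/
|- split0/
|--- train/
|------ images/    <-- symlinks to the original files
|------ labels/
|--- val/
|--- test/
|- split1/
|- ...
|- split4/
```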
### Running a CV Experiment
A cross-validation experiment can now be performed. On systems with multiple GPUs, each fold can be assigned a given number of GPUs from the total pool. In this case, multiple folds will run in parallel, and new folds start automatically as previous folds terminate.
First, we create a new project folder. This time, we do not specify a data folder yet:
```
mp init_project --name CV_experiment
```
We also create a file named script, giving the following folder structure:

```
./CV_experiment
|- train_hparams.yaml
|- script
```
The train_hparams.yaml file will serve as a template that will be applied to all folds. We can set any parameters we want here, or let the framework decide on proper parameters for each fold automatically. The script file lists the mp commands (and optionally various arguments) to execute on each fold. For instance, a script file may look like:
```
mp train --no_images   # Do not save example segmentations
mp train_fusion
mp predict --out_dir predictions
```
We can now execute the 5-CV experiment by running:
```
mp cv_experiment --CV_dir=./data_folder/views/5-CV \
                 --out_dir=./splits \
                 --num_GPUs=2 --monitor_GPUs_every=600
```
Above, we assign 2 GPUs to each fold. On a system with 8 GPUs, 4 folds will run in parallel. We set --monitor_GPUs_every=600 to scan the system for newly freed GPU resources every 600 seconds (otherwise, only the GPUs that were initially available will be cycled, and newly freed ones will be ignored).
The cv_experiment script will create a new project folder for each split (stored under CV_experiment/splits in this case). For each fold, the commands outlined in the script file will be launched one by one inside the respective project folder of the fold, so that, for instance, the predictions for fold 0 are stored inside the project folder of fold 0, etc.
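With the flags used above, the resulting layout would look roughly as follows (illustrative sketch; exact names may differ):

```
./CV_experiment/
|- train_hparams.yaml     # the shared template
|- script
|- splits/
|--- split0/              # a full project folder for fold 0
|------ model/
|------ predictions/
|------ train_hparams.yaml
|--- split1/
|--- ...
|--- split4/
```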
Afterwards, we may get a CV summary by invoking mp summary from inside the output folder (./splits above).
## Version history

- Project packaged for pip
- Added multi-task learning functionality.
- Many smaller fixes and performance improvements. Also fixes a critical error that could, in some cases, cause the validation callback to consider only a subset of the predicted batch when computing validation metrics, which could make the validation metrics noisy, especially for large batch sizes.
- One-hot encoded targets (set with sparse=False in the fit section of hyperparameter file) are no longer supported. Setting this value no longer has any effect and may not be allowed in future versions.
- The Validation callback has been changed significantly and now computes both the loss and any metrics specified in the hyperparameter file as performed on the training set, to facilitate easier comparison. Note that, as is the case on the training set, these computations are batch-wise averaged metrics. The callback still computes the epoch-wise per-class and average precision, recall and dice.
- Default parameter files no longer have pre-specified metrics. Metrics (such as categorical accuracy, fg_precision, etc.) must be manually specified.
- Minor changes over 0.1.3, including the ability to set a pre-specified set of GPUs to cycle in mp cv_experiment.
- MultiChannelScaler now ignores values equal to or smaller than the 'bg_value' for each channel separately. This value is either set manually by the user (as a list of values of length equal to the number of channels, or a single value applied to all channels). If bg_value='1pct' is specified (the default for most models), or any other percentage following this format ('2pct' for 2 percent, etc.), the 1st percentile is computed for each channel individually and used as the background value for that channel.
- ViewInterpolator similarly now accepts a channel-wise background value specification, so that bg_value=[0, 0.1, 1] will cause out-of-bounds interpolation to generate a pixel of value [0, 0.1, 1] for a 3-channel image. Before, all channels would share a single, global background value (this effect is still obtained if bg_value is set to a single integer or float).
- Note that these changes may affect performance negatively when using the v0.2 software on projects with models trained with versions <0.2.0. Users will be warned when trying to do so.
- v0.2.0 now checks which MultiPlanarUNet version was used to create/run code in a given project. Using a newer version of the software on an older project folder is no longer allowed. This behaviour may, however, be overridden by manually setting the VERSION variable to the current software version in the hyperparameter file of the project (not recommended; instead, downgrade to a previous version by running 'git checkout v<VERSION>' inside the MultiPlanarUNet code folder).
- Various smaller changes and bug-fixes across the code base. Thread pools are now generally limited to a maximum of 7 threads; the cv_experiment script now correctly handles using the 'mp' script entry point in the 'script' file (previously, full paths to the given script had to be passed to the Python interpreter).
- Work has started on refactoring/rewriting the scripts in the bin module to make them clearer, remove deprecated command-line arguments, etc.
- Evaluation results as stored in .csv files are now always saved and loaded with an index column as the first column of the file.
- Simplified the functionality of the Validation callback so that it now only computes the F1/Dice, precision and recall scores, i.e. the callback no longer computes arbitrary validation metrics. This choice was made to increase stability across TensorFlow versions; the callback should now work with most versions of TF, incl. TF 2.0. Future versions of MultiPlanarUNet will re-introduce validation metrics in a TF 2.0-only setting.
- Various smaller changes across the code
- Package was updated to comply with the TensorFlow >= 2.0 API.
- Package was renamed from 'MultiPlanarUNet' to 'mpunet'. This affects imports as well as installs from PyPi (i.e. 'pip install mpunet' now), but not the GitHub repo.
- Now requires the 'psutil' and 'tensorflow-addons' packages.
- Implements a temporary fix to the issue raised at https://github.com/perslev/MultiPlanarUNet/issues/8
- Fixed a number of smaller bugs
- Implements a fix to high memory usage reported during training on some systems
- Now uses tf.distribute for multi-GPU training and prediction.
- Custom loss functions should now be wrapped by tf.python.keras.losses.LossFunctionWrapper, i.e. any loss function must be a class that accepts a tf.keras.losses.Reduction parameter (and potentially other parameters) and returns the compiled loss function.
- Consequently, when setting a loss function for MultiPlanarUNet training in train_hparams.yaml, one must specify the factory-class version of the loss, e.g. 'SparseCategoricalCrossentropy' instead of 'sparse_categorical_crossentropy'. The same naming convention applies to all custom losses.
- Arbitrary parameters may now be passed to a loss function via the 'loss_kwargs' entry in train_hparams.yaml.
- Some (deprecated) custom loss functions have been removed.
- Implemented ability to load training images from a queue of a given max size during training to reduce memory consumption (--max_images flag).
- Updated to work with TensorFlow 2.2
- Minor changes to LearningCurve callback and plot_training_curves function to no longer plot training time in default learning curves.
- Improved Windows compatibility.
- Reduced maximum time spent looking for valid batches which may speed up training on sparsely labelled images at the cost of using samples with few labels on average. Minor changes to logging.
- Fixed logging file path bug