
A framework for developing neural network models for 3D image processing.


Nobrainer


Figure: In the first column are T1-weighted brain scans, in the middle column are a trained model's predictions, and in the right column are binarized FreeSurfer segmentations. Despite being trained on binarized FreeSurfer segmentations, the model outperforms FreeSurfer on the bottom scan, which exhibits motion distortion. The model predicts each brain mask in about three seconds on an NVIDIA GTX 1080 Ti and in about 70 seconds on a recent CPU.

Nobrainer is a deep learning framework for 3D image processing. It implements several 3D convolutional models from recent literature, methods for loading and augmenting volumetric data that can be used with any TensorFlow or Keras model, losses and metrics for 3D data, and simple utilities for model training, evaluation, prediction, and transfer learning.

Nobrainer also provides pre-trained models for brain extraction, brain segmentation, and other tasks. Please see the Nobrainer models repository for more information.

The Nobrainer project is supported by NIH R01 EB020470 and is distributed under the Apache 2.0 license.


Guide Jupyter notebooks

Please refer to the Jupyter notebooks in the guide directory to get started with Nobrainer. Try them out in Google Colaboratory!

Installation

Container

We recommend using the official Nobrainer Docker container, which includes all of the dependencies necessary to use the framework. Please see the available images on DockerHub.

GPU support

The Nobrainer containers with GPU support use CUDA 10, which requires Linux NVIDIA drivers >=410.48. These drivers are not included in the container.

$ docker pull kaczmarj/nobrainer:latest-gpu
$ singularity pull docker://kaczmarj/nobrainer:latest-gpu

CPU only

This container can be used on all systems that have Docker or Singularity and does not require special hardware. This container, however, should not be used for model training (it will be very slow).

$ docker pull kaczmarj/nobrainer:latest
$ singularity pull docker://kaczmarj/nobrainer:latest

pip

Nobrainer can also be installed with pip. Use the extra [gpu] to install TensorFlow with GPU support and the [cpu] extra to install TensorFlow without GPU support. GPU support requires CUDA 10, which requires Linux NVIDIA drivers >=410.48.

$ pip install --no-cache-dir nobrainer[gpu]

Using pre-trained networks

Pre-trained networks are available in the Nobrainer models repository. Prediction can be done on the command-line with nobrainer predict or in Python.

Predicting a brainmask for a T1-weighted brain scan

In the following examples, we will use a 3D U-Net trained for brain extraction and documented in Nobrainer models.

In the base case, we run the T1w scan through the model for prediction.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data kaczmarj/nobrainer \
  predict \
    --model=/models/brain-extraction-unet-128iso-model.h5 \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask.nii.gz

For binary segmentation where we expect one predicted region, as is the case with brain extraction, we can reduce false positives by removing all predictions not connected to the largest contiguous label.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data kaczmarj/nobrainer \
  predict \
    --model=/models/brain-extraction-unet-128iso-model.h5 \
    --largest-label \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-largestlabel.nii.gz
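The idea behind --largest-label can be sketched with SciPy's connected-component labeling. This is an illustrative sketch of the post-processing step, not Nobrainer's implementation; the function name keep_largest_label is ours.

```python
import numpy as np
from scipy import ndimage

def keep_largest_label(mask):
    """Zero out every connected component except the largest one.

    ``mask`` is a binary 3D array, e.g. a thresholded brain-mask prediction.
    """
    labeled, n_components = ndimage.label(mask)
    if n_components == 0:
        return mask
    # Component sizes; index 0 is background, so exclude it.
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0
    largest = sizes.argmax()
    return (labeled == largest).astype(mask.dtype)

# Toy example: a large blob plus a small disconnected false positive.
volume = np.zeros((10, 10, 10), dtype=np.uint8)
volume[2:8, 2:8, 2:8] = 1   # large blob (216 voxels)
volume[0, 0, 0] = 1         # spurious single voxel
cleaned = keep_largest_label(volume)
print(cleaned.sum())  # 216 -- only the large blob remains
```

For brain extraction this removes small disconnected false positives, such as stray voxels predicted in the neck or skull.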

Because the network was trained on randomly rotated data, it should be agnostic to orientation. Therefore, we can rotate the volume, predict on it, undo the rotation in the prediction, and average the prediction with that from the original volume. This can lead to a better overall prediction but will at least double the processing time. To enable this, use the flag --rotate-and-predict in nobrainer predict.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data kaczmarj/nobrainer \
  predict \
    --model=/models/brain-extraction-unet-128iso-model.h5 \
    --rotate-and-predict \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-withrotation.nii.gz
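The rotate-and-predict scheme can be sketched in NumPy. Nobrainer uses random rigid transformations; this sketch substitutes an exact 90-degree rotation (so undoing it is trivial) and a hypothetical dummy_model standing in for a trained network.

```python
import numpy as np

def dummy_model(volume):
    """Stand-in for a trained network: 'predicts' voxels above a threshold.
    A real pipeline would call a Keras model here instead."""
    return (volume > 0.5).astype(np.float32)

def rotate_and_predict(volume, model):
    """Predict on the original and on a rotated copy, undo the rotation
    in the second prediction, and average the two."""
    pred = model(volume)
    rotated = np.rot90(volume, k=1, axes=(0, 1))
    pred_rot = model(rotated)
    # Undo the rotation in the prediction before averaging.
    pred_unrot = np.rot90(pred_rot, k=-1, axes=(0, 1))
    return (pred + pred_unrot) / 2.0

volume = np.random.rand(8, 8, 8).astype(np.float32)
averaged = rotate_and_predict(volume, dummy_model)
print(averaged.shape)  # (8, 8, 8)
```

With a real network the two predictions differ slightly, so the average tends to smooth out orientation-dependent errors, at roughly double the compute cost.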

Combining the above, we can usually achieve the best brain extraction by using --rotate-and-predict in conjunction with --largest-label.

# Get sample T1w scan.
wget -nc https://dl.dropbox.com/s/g1vn5p3grifro4d/T1w.nii.gz
docker run --rm -v $PWD:/data kaczmarj/nobrainer \
  predict \
    --model=/models/brain-extraction-unet-128iso-model.h5 \
    --largest-label \
    --rotate-and-predict \
    --verbose \
    /data/T1w.nii.gz \
    /data/brainmask-maybebest.nii.gz

Transfer learning

The pre-trained models can be used for transfer learning. To avoid forgetting important information in the pre-trained model, you can apply regularization to the kernel weights and also use a low learning rate. For more information, please see the Nobrainer guide notebook on transfer learning.
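As a toy illustration of the idea (not the Nobrainer API), the NumPy sketch below fine-tunes a single linear layer with a small learning rate and an L2 penalty that pulls the weights back toward their pre-trained values, one common way to discourage forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained weights for a single linear "kernel" (toy stand-in).
w_pretrained = rng.normal(size=4)
w = w_pretrained.copy()

# Toy data for the new task.
X = rng.normal(size=(32, 4))
y = X @ rng.normal(size=4)

def task_loss(weights):
    return np.mean((X @ weights - y) ** 2)

lam = 0.1   # strength of the pull toward the pre-trained weights
lr = 0.01   # deliberately small learning rate for fine-tuning

loss_before = task_loss(w)
for _ in range(200):
    grad_task = 2 * X.T @ (X @ w - y) / len(X)
    grad_reg = 2 * lam * (w - w_pretrained)  # penalize drifting from pre-trained values
    w -= lr * (grad_task + grad_reg)
loss_after = task_loss(w)

print(loss_after < loss_before)          # the new task improves...
print(np.linalg.norm(w - w_pretrained))  # ...while w stays near the old weights
```

In Keras the analogous knobs are a kernel regularizer on the layers being fine-tuned and a small optimizer learning rate; the guide notebook shows the full workflow.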

As an example of transfer learning, @kaczmarj re-trained a brain extraction model to label meningiomas in 3D T1-weighted, contrast-enhanced MR scans. The original model is publicly available and was trained on 10,000 T1-weighted MR brain scans from healthy participants. These were all research scans (i.e., non-clinical) and did not include any contrast agents. The meningioma dataset, on the other hand, was composed of relatively few scans, all of which were clinical and used gadolinium as a contrast agent. You can observe the differences in contrast below.

Figure: Brain extraction model prediction (left) and meningioma extraction model prediction (right).

Despite the differences between the two datasets, transfer learning led to a much better model than training from randomly initialized weights. As evidence, see the violin plots below of Dice coefficients on a validation set. The left plot shows Dice coefficients of predictions from the model trained from randomly initialized weights, and the right plot shows Dice coefficients of predictions from the transfer-learned model. Dice coefficients are generally higher on the right, and their variance is lower. Overall, the transfer-learned model is more accurate and more robust.

Data augmentation

Nobrainer provides methods of augmenting volumetric data. Augmentation is useful when the amount of data is low, and it can create more generalizable and robust models. Other packages have implemented methods of augmenting volumetric data, but Nobrainer is unique in that its augmentation methods are written in pure TensorFlow. This allows these methods to be part of serializable tf.data.Dataset pipelines and used for training on TPUs.

In practice, @kaczmarj has found that augmentations improve the generalizability of semantic segmentation models for brain extraction. Augmentation also seems to improve transfer learning models. For example, a meningioma model trained from a brain extraction model that employed augmentation performed better than a meningioma model trained from a brain extraction model that did not use augmentation.

Random rigid transformation

A rigid transformation is one that allows rotations, translations, and reflections. Nobrainer implements rigid transformations in pure TensorFlow. Use nobrainer.transform.warp to apply a transformation matrix to a volume. You can also apply random rigid transformations in the data input pipeline: when creating your tf.data.Dataset with nobrainer.volume.get_dataset, set augment=True, and about 50% of volumes will be augmented with random rigid transformations. To use the function directly, refer to nobrainer.volume.apply_random_transform. Features and labels are transformed in the same way, but features are interpolated linearly, whereas labels are interpolated with nearest neighbor. Below is an example of a random rigid transformation applied to features and labels. The mask in the right-hand column is a brain mask; note that the MRI scan and brain mask are transformed identically.

Figure: Example of rigidly transforming features and labels volumes.
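The interpolation behavior described above can be sketched with SciPy: the same rigid transform is applied to features with linear interpolation and to labels with nearest neighbor. This stands in for, and is not, Nobrainer's pure-TensorFlow implementation; the helper apply_rigid is ours.

```python
import numpy as np
from scipy import ndimage

def apply_rigid(features, labels, matrix, offset):
    """Apply one rigid transform to both volumes: features with linear
    interpolation (order=1), labels with nearest neighbor (order=0) so
    label values stay integral."""
    feat_t = ndimage.affine_transform(features, matrix, offset=offset, order=1)
    lab_t = ndimage.affine_transform(labels, matrix, offset=offset, order=0)
    return feat_t, lab_t

# A 90-degree rotation about the z-axis, centered on the volume,
# as a simple example of a rigid transform.
theta = np.pi / 2
matrix = np.array([
    [np.cos(theta), -np.sin(theta), 0],
    [np.sin(theta),  np.cos(theta), 0],
    [0, 0, 1],
])
center = np.array([3.5, 3.5, 3.5])
offset = center - matrix @ center

features = np.random.rand(8, 8, 8)
labels = (features > 0.5).astype(np.int32)
feat_t, lab_t = apply_rigid(features, labels, matrix, offset)
print(np.unique(lab_t))  # nearest neighbor keeps labels integral (subset of {0, 1})
```

Linear interpolation of a binary mask would instead produce fractional values at region boundaries, which is why labels get nearest-neighbor interpolation.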

Package layout

  • nobrainer.io: input/output methods
  • nobrainer.layers: custom layers, which conform to the Keras API
  • nobrainer.losses: loss functions for volumetric segmentation
  • nobrainer.metrics: metrics for volumetric segmentation
  • nobrainer.models: pre-defined Keras models
  • nobrainer.training: training utilities (supports training on single and multiple GPUs)
  • nobrainer.transform: random rigid transformations for data augmentation
  • nobrainer.volume: tf.data.Dataset creation and data augmentation utilities

Questions or issues

If you have questions about Nobrainer or encounter any issues using the framework, please submit a GitHub issue. If you have a feature request, we encourage you to submit a pull request.
