DeepDRR
A Catalyst for Machine Learning in Fluoroscopy-guided Procedures
Implementation of our early-accepted MICCAI'18 paper "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures" and the subsequent Invited Journal Article in the IJCARS Special Issue of MICCAI "Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation". The conference preprint can be accessed on arXiv here: https://arxiv.org/abs/1803.08606.
DeepDRR aims to provide medical image computing and computer-assisted intervention researchers with state-of-the-art tools to generate realistic radiographs and fluoroscopy from 3D CT, at training-set scale.
Implemented in Python, PyCuda, and PyTorch.
Currently, DeepDRR is in the process of being upgraded with new features and an improved focus on usability. See Releases for the latest alpha.
Getting Started (Version 1.0)
Installation
- Install CUDA 8.0 or higher.
- Ensure that a C compiler is on your PATH.
- Install with one of the following options.
pip
pip install deepdrr==1.0.0a1
From source
Clone this branch or download the latest pre-release at https://github.com/arcadelab/DeepDRR/releases/ and install an editable copy with pip. Using a virtual environment is recommended.
pip install -e /path/to/DeepDRR
Usage
For example usage, run
python example_projector.py
This script contains a typical use-case for projecting over a volume. More detailed tutorials for various use-cases are pending.
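Conceptually, projecting over a volume means integrating attenuation values along rays from the X-ray source to each detector pixel and applying Beer-Lambert attenuation. The following is a minimal, illustrative NumPy sketch of a parallel-beam projection; it is not the DeepDRR API (which uses CUDA and a full cone-beam geometry), and the function name and attenuation values are made up for illustration.

```python
import numpy as np

def parallel_drr(volume, axis=0, spacing=1.0):
    """Toy digitally reconstructed radiograph: integrate attenuation
    along one axis (parallel-beam), then apply Beer-Lambert."""
    line_integrals = volume.sum(axis=axis) * spacing  # mu * path length
    return np.exp(-line_integrals)                    # transmitted intensity

# Synthetic "CT": a uniform soft-tissue block with a dense insert.
vol = np.zeros((64, 64, 64))
vol[16:48, 16:48, 16:48] = 0.02   # soft-tissue-like attenuation [1/mm]
vol[28:36, 28:36, 28:36] = 0.5    # bone/metal-like insert

drr = parallel_drr(vol, axis=0)
print(drr.shape)  # (64, 64) detector image
```

Rays passing through the dense insert arrive darker (lower transmitted intensity) than rays through empty space, which is the basic contrast mechanism a DRR reproduces.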
Method Overview
DeepDRR combines machine learning models for material decomposition and scatter estimation, in 3D and 2D respectively, with analytic models for projection, attenuation, and noise injection to achieve the required performance. The pipeline is illustrated below.
Representative Results
The figure below shows representative radiographs generated using DeepDRR from CT data downloaded from the NIH Cancer Imaging Archive. Please find qualitative results in the Applications section.
Applications - Pelvis Landmark Detection
We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery", also early-accepted at MICCAI'18: https://arxiv.org/abs/1803.08608 and now with quantitative evaluation in the IJCARS Special Issue on MICCAI'18: https://link.springer.com/article/10.1007/s11548-019-01975-5. The ConvNet for prediction was trained on DeepDRRs of 18 CT scans of the NIH Cancer Imaging Archive and then applied to ex vivo data acquired with a Siemens Cios Fusion C-arm machine equipped with a flat panel detector (Siemens Healthineers, Forchheim, Germany). Some representative results on the ex vivo data are shown below.
Applications - Metal Tool Insertion
DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulaters in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which has been accepted at IPCAI'19: https://arxiv.org/abs/1901.06672. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image and detection results are shown below.
This capability has not been tested in version 1.0. We recommend working with Version 0.1 for the time being.
Potential Challenges - General
- Our material decomposition V-net was trained on NIH Cancer Imaging Archive data. In case it does not generalize perfectly to other acquisitions, the use of intensity thresholds (as in conventional Monte Carlo simulation) is still supported. In this case, however, thresholds will likely need to be selected on a per-dataset, or worse, per-region basis, since bone density can vary considerably.
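As a sketch of the threshold-based fallback: material masks can be derived directly from Hounsfield units. The function name and thresholds below are illustrative only and, as noted above, would need per-dataset tuning.

```python
import numpy as np

def threshold_materials(hu_volume, air_max=-800.0, bone_min=350.0):
    """Crude material decomposition from Hounsfield units.
    Thresholds are illustrative; real data may need per-region tuning."""
    materials = {
        "air": hu_volume <= air_max,
        "bone": hu_volume >= bone_min,
    }
    # Everything that is neither air nor bone is treated as soft tissue.
    materials["soft tissue"] = ~(materials["air"] | materials["bone"])
    return materials

hu = np.array([-1000.0, -50.0, 40.0, 700.0])  # air, fat, muscle, bone
masks = threshold_materials(hu)
```

The three masks partition the volume, so each voxel is assigned exactly one material, mirroring what the learned segmentation would produce.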
- Scatter estimation is currently limited to Rayleigh scatter, and we are working on improving this. The scatter model was trained on images of 1240×960 pixels with 0.301 mm pixel size. The real scatter signal is a composite of Rayleigh, Compton, and multi-path scattering. While all scatter sources produce low-frequency signals, Compton and multi-path scatter are more blurred than Rayleigh scatter, suggesting that simple scatter reduction techniques may do an acceptable job. In most clinical products, scatter reduction is applied as pre-processing before the image is displayed and becomes accessible. Consequently, the current shortcoming of not providing full scatter estimation is likely not critical for many applications; in fact, scatter can be turned off completely. Please refer to the Applications section above for preliminary evidence supporting this reasoning.
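To illustrate why the low-frequency nature of scatter makes simple correction schemes viable, here is a toy NumPy sketch. It is not the learned scatter model: scatter is faked as a scaled low-pass copy of the primary image, and the `lowpass` helper, the 0.3 scatter fraction, and the image values are all made up for illustration.

```python
import numpy as np

def lowpass(img, block=16):
    """Very crude low-pass filter: block-average, then upsample.
    Stands in for the smooth, low-frequency shape of scatter."""
    h, w = img.shape
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

rng = np.random.default_rng(0)
primary = rng.uniform(0.2, 1.0, size=(64, 64))       # toy primary image
scatter = 0.3 * lowpass(primary)                     # smooth additive signal
detected = primary + scatter

# Because scatter is low-frequency, a low-pass estimate of the detected
# image recovers it (up to the assumed scatter fraction).
corrected = detected - lowpass(detected) * (0.3 / 1.3)
```

In this idealized setting the correction recovers the primary image exactly; real scatter correction is harder, but the same low-frequency argument applies.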
- Due to the nature of volumetric image processing, DeepDRR consumes a lot of GPU memory. We have successfully tested with 12 GB of GPU memory but cannot comment on 8 GB cards at the moment. The bottleneck is the volumetric segmentation, which can be turned off and replaced by thresholds (see the first point above).
- We currently provide the X-ray source spectra from MC-GPU, which are fairly standard. Additional spectra can be implemented in spectrum_generator.py.
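A spectrum is essentially a table of photon energies and relative counts. The sketch below shows the kind of data structure involved; the energy bins and weights are invented for illustration and are not a validated physical spectrum.

```python
import numpy as np

# Hypothetical ~90 kVp spectrum: energy bins [keV] and relative photon
# counts. Values are illustrative only, not validated physics.
energies_kev = np.arange(20.0, 91.0, 10.0)
rel_counts = np.array([0.5, 2.0, 4.0, 5.0, 4.5, 3.0, 1.5, 0.5])

pdf = rel_counts / rel_counts.sum()            # normalized spectrum
mean_energy = float(np.sum(energies_kev * pdf))  # effective beam energy
```

Sampling photon energies from such a normalized table is what makes the simulation polychromatic rather than monoenergetic.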
- The current detector reading is the average energy deposited by a single photon in a pixel. If you are interested in modeling photon counting or energy resolving detectors, then you may want to take a look at mass_attenuation(_gpu).py to implement your detector.
- Currently we do not support import of full projection matrices. Instead, you will need to define K, R, and T separately, or use camera.py to define the projection geometry online.
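For reference, composing intrinsics K, rotation R, and translation T into a pinhole projection matrix follows the standard computer-vision convention. The NumPy sketch below uses made-up values (focal length, principal point, source distance) purely for illustration.

```python
import numpy as np

# Illustrative intrinsics for a flat-panel detector (values made up):
# focal length in pixels, principal point at the detector center.
K = np.array([[5000.0,    0.0, 620.0],
              [   0.0, 5000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                      # world axes aligned with camera axes
T = np.array([0.0, 0.0, 700.0])    # source 700 mm from the world origin

P = K @ np.hstack([R, T.reshape(3, 1)])   # 3x4 projection matrix

def project(point_3d):
    """Map a 3D world point [mm] to 2D detector pixel coordinates."""
    homog = P @ np.append(point_3d, 1.0)
    return homog[:2] / homog[2]

u, v = project(np.array([0.0, 0.0, 0.0]))  # world origin -> principal point
```

A point on the optical axis projects to the principal point (620, 480), which is a quick sanity check that K, R, and T were composed in the right order.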
- It is important to check for proper import of CT volumes. We have tried to account for many variations (HU scale offsets, slice order, origin, file extensions), but one can never be sure enough, so please double-check your files.
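One common import pitfall is forgetting the DICOM rescale to Hounsfield units. A minimal sketch of that step; the slope/intercept defaults below are typical CT values used for illustration, and real files should take them from the RescaleSlope/RescaleIntercept tags.

```python
import numpy as np

def to_hounsfield(stored_values, slope=1.0, intercept=-1024.0):
    """Apply the DICOM RescaleSlope/RescaleIntercept transform,
    converting raw stored pixel values to Hounsfield units (HU)."""
    return stored_values.astype(np.float64) * slope + intercept

stored = np.array([0, 1024, 2024])   # raw stored pixel values
hu = to_hounsfield(stored)           # -> air ~ -1024, water ~ 0, bone ~ 1000
```

A quick sanity check after import: air outside the patient should sit near -1000 HU; if it sits near 0, the offset was probably not applied.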
Potential Challenges - Tool Modeling
- Currently, the tool/implant model must be represented as a binary 3D volume, rather than a CAD surface model. However, this 3D volume can be of different resolution than the CT volume; particularly, it can be much higher to preserve fine structures of the tool/implant.
- The density of the tool needs to be provided by hard-coding it in the file load_dicom_tool.py (line 127). The pose of the tool/implant with respect to the CT volume requires manual setup. We provide one example origin setting at lines 23-24.
- The tool/implant will supersede the anatomy defined by the CT volume intensities. To this end, we sample the CT materials and densities at the location of the tool in the tool volume, and subtract them from the anatomy forward projections in detector domain (to enable different resolutions of CT and tool volume). Further information can be found in the IJCARS article.
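The subtraction described above can be sketched in a toy parallel-beam, single-ray setting. Everything here (the attenuation values, the single-ray geometry) is illustrative, not the actual CUDA implementation.

```python
import numpy as np

# Toy 1D "volumes": attenuation along a single ray, per material.
anatomy = np.array([0.0, 0.02, 0.02, 0.02, 0.0])    # soft tissue [1/mm]
tool_mask = np.array([0, 0, 1, 0, 0], dtype=bool)   # tool occupies voxel 2
tool_mu = 0.5                                        # metal attenuation

# Forward projections (line integrals) in the detector domain:
p_anatomy = anatomy.sum()                 # full anatomy projection
p_displaced = anatomy[tool_mask].sum()    # anatomy sampled at tool voxels
p_tool = tool_mu * tool_mask.sum()        # tool projection

# The tool supersedes the anatomy it displaces: subtract the displaced
# anatomy in the detector domain, then add the tool contribution.
p_total = p_anatomy - p_displaced + p_tool
```

Working in the detector domain is what allows the CT and tool volumes to have different resolutions: each is forward-projected at its own resolution before the projections are combined.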
Running DeepDRR in Google Colaboratory
The codebase provided here was not developed with Google Colaboratory in mind, but users have found small tweaks that make it work in Colab. Kindly refer to https://github.com/mathiasunberath/DeepDRR/issues/6 and https://github.com/mathiasunberath/DeepDRR/issues/5 for the required changes. More guidance is available in https://github.com/mathiasunberath/DeepDRR/issues/13#issuecomment-614246840.
Reference
We hope DeepDRR proves useful for medical imaging research. If you use our work, we kindly ask that you reference it. The MICCAI article covers the basic DeepDRR pipeline and task-based evaluation:
@inproceedings{DeepDRR2018,
author = {Unberath, Mathias and Zaech, Jan-Nico and Lee, Sing Chun and Bier, Bastian and Fotouhi, Javad and Armand, Mehran and Navab, Nassir},
title = {{DeepDRR--A Catalyst for Machine Learning in Fluoroscopy-guided Procedures}},
year = {2018},
booktitle = {Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI)},
publisher = {Springer},
}
The IJCARS paper describes the integration of tool modeling and provides quantitative results:
@article{DeepDRR2019,
author = {Unberath, Mathias and Zaech, Jan-Nico and Gao, Cong and Bier, Bastian and Goldmann, Florian and Lee, Sing Chun and Fotouhi, Javad and Taylor, Russell and Armand, Mehran and Navab, Nassir},
title = {{Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation}},
year = {2019},
journal = {International Journal of Computer Assisted Radiology and Surgery (IJCARS)},
publisher = {Springer},
}
Version 0.1 Installation Instructions
Download segmentation network weights
- Due to file size limitations, please download the segmentation network weights from https://www.dropbox.com/s/pn4aw4z2i01eoo4/model_segmentation.pth.tar?dl=0.
- Place the file "model_segmentation.pth.tar" in the DeepDRR source folder.
Install CUDA 8.0
conda create -n pytorch python=3.6
activate pytorch
Install packages
- Numpy+MKL from https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
conda install matplotlib
conda install -c conda-forge pydicom
conda install -c anaconda scikit-image
pip install pycuda
pip install tensorboard
pip install tensorboardX
Install pytorch
- Follow peterjc123's scripts to run PyTorch on Windows.
conda install -c peterjc123 pytorch
pip install torchvision
Getting started
- The script example_projector.py implements a complete pipeline for data generation.
PyCuda not working?
- Try adding the C compiler to your PATH. Most likely the path is: "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\".
Acknowledgments
CUDA Cubic B-Spline Interpolation (CI) used in the projector:
https://github.com/DannyRuijters/CubicInterpolationCUDA
D. Ruijters, B. M. ter Haar Romeny, and P. Suetens. Efficient GPU-Based Texture Interpolation using Uniform B-Splines. Journal of Graphics Tools, vol. 13, no. 4, pp. 61-69, 2008.
The projector is a heavily modified and ported version of the implementation in CONRAD:
https://github.com/akmaier/CONRAD
A. Maier, H. G. Hofmann, M. Berger, P. Fischer, C. Schwemmer, H. Wu, K. Müller, J. Hornegger, J. H. Choi, C. Riess, A. Keil, and R. Fahrig. CONRAD—A software framework for cone-beam imaging in radiology. Medical Physics 40(11):111914-1-8. 2013.
Spectra are taken from MC-GPU:
A. Badal, A. Badano, Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med Phys. 2009 Nov;36(11): 4878–80.
The segmentation pipeline is based on the Vnet architecture:
https://github.com/mattmacy/vnet.pytorch
F. Milletari, N. Navab, S-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:1606.04797. 2016.
We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.