
Medical data formatting module

This package provides a set of utilities for extracting data contained in DICOM files into an HDF5 database containing patients' medical images as well as binary label maps obtained from the segmentation of these images (if available). The HDF5 database is then easier to use to perform tasks on the medical data, such as machine learning tasks. It is a higher-level library that builds on the excellent lower-level pydicom library.

Anyone who is willing to contribute is welcome to do so.

Motivation

Digital Imaging and Communications in Medicine (DICOM) is the international standard for medical images and related information. The DICOM working group WG-23 on Artificial Intelligence / Application Hosting is currently working to identify or develop the DICOM mechanisms needed to support AI workflows, concentrating on the clinical context. Moreover, their future roadmap and objectives include addressing the concern that current DICOM mechanisms might not be adequate to cover some use cases, particularly bulk analysis of large repository data, e.g. for training deep learning neural networks. However, no tool currently exists to achieve this goal.

The purpose of this module is therefore to provide the necessary tools to facilitate the use of medical images in an AI workflow. This goal is accomplished by using the HDF file format to create a database containing patients' medical images as well as binary label maps obtained from the segmentation of these images (if available).

Installation

Latest stable version:

pip install dicom2hdf

Latest (possibly unstable) version:

pip install git+https://github.com/MaxenceLarose/dicom2hdf

How it works

Main concepts

There are 4 main concepts:

  1. PatientDataModel : The primary dicom2hdf data structure. It is a named tuple gathering the image and segmentation data available in a patient record.
  2. PatientsDataGenerator : A generator that iterates over several patient folders and creates a PatientDataModel object for each of them.
  3. PatientsDatabase : An object used to create/interact with an HDF5 file (a database!) containing all patients' information (images + label maps). The PatientsDataGenerator object is used to populate this database.
  4. RadiomicsDataset : An object used to create/interact with a csv file (a dataset!) containing radiomics features extracted from images. The PatientsDataGenerator object is used to populate this dataset.
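To fix ideas, the PatientDataModel can be pictured as a simple named-tuple container. The sketch below is illustrative only: the field names and the inner ImageAndSegmentationDataModel container are assumptions for this example, not dicom2hdf's exact API.

```python
from collections import namedtuple

# Hypothetical stand-ins for dicom2hdf's data structures: a named tuple
# pairing a patient ID with the image/segmentation data found in the record.
PatientDataModel = namedtuple("PatientDataModel", ["patient_id", "data"])
ImageAndSegmentationDataModel = namedtuple(
    "ImageAndSegmentationDataModel", ["image", "segmentations"]
)

record = PatientDataModel(
    patient_id="patient1",
    data=[
        ImageAndSegmentationDataModel(
            image="<CT volume>", segmentations=["<Heart label map>"]
        )
    ],
)
print(record.patient_id)  # patient1
```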

A deeper look into the PatientsDataGenerator object

The PatientsDataGenerator has 3 important parameters: path_to_patients_folder (the path to the folder that contains all patient records), series_descriptions (which dictates the images that need to be extracted from the patient records) and transforms (a sequence of transformations to apply to images or segmentations). For each patient folder available in path_to_patients_folder, all DICOM files in that folder are read. If the series description of a given volume matches one of the descriptions present in the series_descriptions dictionary, this volume and its segmentation (if available) are automatically added to the PatientDataModel. Note that if no series_descriptions dictionary is given (series_descriptions = None), then all images (and associated segmentations) will be added to the database.
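The matching step described above boils down to a membership test against the description lists. The helper below is illustrative, not part of dicom2hdf:

```python
series_descriptions = {
    "TEP": ["TEP WB CORR (AC)", "TEP WB XL CORR (AC)"],
    "CT": ["CT 2.5 WB", "AC CT 2.5 WB"],
}

def matching_keys(volume_series_description, series_descriptions=None):
    """Hypothetical helper: return the dictionary keys whose description
    lists contain this volume's SeriesDescription. With no dictionary
    (series_descriptions=None), everything is kept."""
    if series_descriptions is None:
        return ["<keep all images>"]
    return [
        name for name, descriptions in series_descriptions.items()
        if volume_series_description in descriptions
    ]

print(matching_keys("CT 2.5 WB", series_descriptions))  # ['CT']
print(matching_keys("MR HEAD", series_descriptions))    # [] -> no match
```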

The PatientsDataGenerator can therefore be used to iteratively perform tasks on each patient, such as displaying certain images, transforming images into numpy arrays, or creating an HDF5 database using the PatientsDatabase. This last task is the one highlighted in this package, but the data extraction is performed in a very general manner by the PatientsDataGenerator and is therefore not limited to this single application. For example, someone could easily develop a NumpyDatabase whose creation would be ensured by the PatientsDataGenerator, similar to the current PatientsDatabase based on the HDF5 format.
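As an illustration of that generality, here is a hedged sketch of such a hypothetical NumpyDatabase. The class name and its interface are invented for this example, and the generator is faked with plain arrays standing in for PatientsDataGenerator output:

```python
import os
import tempfile

import numpy as np

class NumpyDatabase:
    """Hypothetical database that persists each patient's images as a
    compressed .npz file instead of an HDF5 group."""

    def __init__(self, path_to_database_folder):
        self.path = path_to_database_folder
        os.makedirs(self.path, exist_ok=True)

    def create(self, patients_data_generator):
        # One .npz file per patient, one array per image name.
        for patient_id, arrays in patients_data_generator:
            np.savez_compressed(
                os.path.join(self.path, f"{patient_id}.npz"), **arrays
            )

# Fake generator output: (patient_id, {image name: array}) pairs.
fake_generator = [
    ("patient1", {"CT": np.zeros((4, 4, 4)), "Heart": np.ones((4, 4, 4))}),
]

folder = tempfile.mkdtemp()
NumpyDatabase(folder).create(fake_generator)
print(sorted(os.listdir(folder)))  # ['patient1.npz']
```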

Organize your data

Since this module requires the use of data, it is important to properly configure the data-related elements before using it.

File format

Image files must be in standard DICOM format and segmentation files must be in DICOM-SEG or RTStruct format.

If your segmentation files are in a research file format (.nrrd, .nii, etc.), you need to convert them into the standardized DICOM-SEG or RTStruct format. You can use the pydicom-seg library to create DICOM-SEG files, or use the itkimage2dicomSEG python module, which provides a complete pipeline for this conversion. You can also use the RT-Utils library to create RTStruct files.

Series descriptions (Optional)

This dictionary is not mandatory for the code to work and therefore its default value is None. Note that if no series_descriptions dictionary is given, i.e. series_descriptions = None, then all images associated with at least one segmentation will be added to the database.

The series descriptions are specified as a dictionary that contains the series descriptions of the images that need to be extracted from the patients' files. Keys are arbitrary names given to the images we want to add and values are lists of series descriptions. The images associated with these series descriptions do not need to have a corresponding segmentation volume. If none of the descriptions match the series in a patient's files, a warning is raised and the patient is added to the list of patients for whom the pipeline has failed.

Note that the series descriptions can be specified as a classic dictionary or as a path to a json file that contains the series descriptions. Both methods are presented below.

Using a json file

Create a json file containing only the dictionary of the names given to the images we want to add (keys) and lists of series descriptions (values). Place this file in your data folder.

Here is an example of a json file configured as expected:

{
    "TEP": [
        "TEP WB CORR (AC)",
        "TEP WB XL CORR (AC)"
    ],
    "CT": [
        "CT 2.5 WB",
        "AC CT 2.5 WB"
    ]
}
Using a Python dictionary

Create the series_descriptions dictionary in your main.py python file.

Here is an example of a python dictionary instantiated as expected:

series_descriptions = {
    "TEP": [
        "TEP WB CORR (AC)",
        "TEP WB XL CORR (AC)"
    ],
    "CT": [
        "CT 2.5 WB",
        "AC CT 2.5 WB"
    ]
}
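Both forms carry the same information: loading the json file yields exactly the dictionary above. A quick round-trip check (the temporary file path is illustrative; in practice the file lives in your data folder):

```python
import json
import tempfile

series_descriptions = {
    "TEP": ["TEP WB CORR (AC)", "TEP WB XL CORR (AC)"],
    "CT": ["CT 2.5 WB", "AC CT 2.5 WB"],
}

# Write the dictionary to a json file, as it would live in the data folder.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(series_descriptions, f)
    path_to_json = f.name

# Loading it back yields the exact same dictionary.
with open(path_to_json) as f:
    assert json.load(f) == series_descriptions
```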

Structure your patients directory

It is important to configure the directory structure correctly to ensure that the module interacts correctly with the data files. The patients folder must be structured as follows. Note that all DICOM files in the patients' folder will be read.

|_📂 Project directory/
  |_📄 main.py
  |_📂 data/
    |_📄 series_descriptions.json
    |_📂 patients/
      |_📂 patient1/
        |_📄 ...
        |_📂 ...
      |_📂 patient2/
        |_📄 ...
        |_📂 ...
      |_📂 ...

Import the package

The easiest way to import the package is to explicitly import the objects from their sub-modules:

from dicom2hdf.databases import PatientsDatabase
from dicom2hdf.generators import PatientsDataGenerator
from dicom2hdf.radiomics import RadiomicsDataset, RadiomicsFeatureExtractor

Use the package

Example using the PatientsDatabase class

The following script can then be executed to obtain an HDF5 database.

from dicom2hdf.databases import PatientsDatabase
from dicom2hdf.generators import PatientsDataGenerator
from dicom2hdf.transforms import (
    PETtoSUVD,
    ResampleD
)
from monai.transforms import (
    CenterSpatialCropD,
    Compose,
    ScaleIntensityD,
    ThresholdIntensityD
)

patients_data_generator = PatientsDataGenerator(
    path_to_patients_folder="data/patients",
    series_descriptions="data/series_descriptions.json",
    transforms=Compose(
        [
            ResampleD(keys=["CT_THORAX", "TEP", "Heart"], out_spacing=(1.5, 1.5, 1.5)),
            CenterSpatialCropD(keys=["CT_THORAX", "TEP", "Heart"], roi_size=(1000, 160, 160)),
            ThresholdIntensityD(keys=["CT_THORAX"], threshold=-250, above=True, cval=-250),
            ThresholdIntensityD(keys=["CT_THORAX"], threshold=500, above=False, cval=500),
            ScaleIntensityD(keys=["CT_THORAX"], minv=0, maxv=1),
            PETtoSUVD(keys=["TEP"])
        ]
    )
)

database = PatientsDatabase(path_to_database="data/patients_database.h5")

database.create(
    patients_data_generator=patients_data_generator,
    tags_to_use_as_attributes=[(0x0008, 0x103E), (0x0020, 0x000E), (0x0008, 0x0060)],
    overwrite_database=True
)

The created HDF5 database will then look something like the structure shown in the patient_dataset figure of the repository's README.
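The resulting file can be inspected with h5py. The sketch below builds a tiny stand-in file first, since the group/dataset layout shown here is illustrative and not necessarily dicom2hdf's exact schema:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny HDF5 file mimicking a one-patient database (layout is hypothetical).
path = os.path.join(tempfile.mkdtemp(), "patients_database.h5")
with h5py.File(path, "w") as f:
    patient = f.create_group("patient1")
    image = patient.create_dataset("CT", data=np.zeros((4, 4, 4)))
    image.attrs["Modality"] = "CT"

# Walk every group/dataset in the file, as you would on the real database.
with h5py.File(path, "r") as f:
    f.visit(print)                             # patient1, patient1/CT
    print(f["patient1/CT"].attrs["Modality"])  # CT
```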

Example using the PatientsDataGenerator class

The following script can then be executed to perform on-the-fly tasks on images.

from dicom2hdf.generators import PatientsDataGenerator
from dicom2hdf.transforms import Compose, CopySegmentationsD, PETtoSUVD, ResampleD
import SimpleITK as sitk

patients_data_generator = PatientsDataGenerator(
    path_to_patients_folder="data/patients",
    series_descriptions="data/series_descriptions.json",
    transforms=Compose(
        [
            ResampleD(keys=["CT_THORAX", "Heart"], out_spacing=(1.5, 1.5, 1.5)),
            PETtoSUVD(keys=["TEP"]),
            CopySegmentationsD(segmented_image_key="CT_THORAX", unsegmented_image_key="TEP")
        ]
    )
)

for patient_dataset in patients_data_generator:
    print(f"Patient ID: {patient_dataset.patient_id}")

    for patient_image_data in patient_dataset.data:
        dicom_header = patient_image_data.image.dicom_header
        simple_itk_image = patient_image_data.image.simple_itk_image
        numpy_array_image = sitk.GetArrayFromImage(simple_itk_image)

        # Perform any tasks on images on-the-fly.
        print(numpy_array_image.shape)

Need more examples?

You can find more in the examples folder.

TODO

  • Generalize the use of arbitrary tags to choose images to extract. At the moment, the only tag available is series_descriptions.

License

This code is provided under the Apache License 2.0.

Citation

@article{dicom2hdf,
  title={dicom2hdf: DICOM to HDF python module},
  author={Maxence Larose},
  year={2022},
  publisher={Université Laval},
  url={https://github.com/MaxenceLarose/dicom2hdf},
}

Contact

Maxence Larose, B. Ing., maxence.larose.1@ulaval.ca
