
This is a consolidation of work from NAMIC efforts primarily at the University of Iowa.

Project description

Introduction

In this work, we developed a robust, easily extensible classification framework that extracts key features from well-characterized DICOM header fields to identify image modality and acquisition plane. The tool eliminates error-prone human interaction and enables automation, increasing the reliability and efficiency of imaging applications. We used Random Forest and Decision Tree algorithms to determine image modality and acquisition plane, trained them on header metadata from over 49,000 scan volumes drawn from multiple studies, and achieved over 99% prediction accuracy for both image modality and acquisition plane classification.
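
As a rough, hypothetical illustration of the approach (not the project's actual feature set or training pipeline), the snippet below fits a scikit-learn Random Forest on a few invented header-derived features; the real framework is trained on many more DICOM header fields and distributes its model in ONNX format.

  # Hypothetical sketch: the features, values, and labels below are invented
  # for illustration and are NOT the actual features used by dcm-classifier.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  # Imagine each row holds header-derived values for one scan volume:
  # [EchoTime, RepetitionTime, FlipAngle, SliceThickness]
  X = np.array([
      [100.0, 6000.0,  90.0, 4.0],   # T2-like acquisition parameters
      [  3.0, 2300.0,   9.0, 1.0],   # T1-like acquisition parameters
      [ 90.0, 9000.0, 150.0, 5.0],   # FLAIR-like acquisition parameters
  ])
  y = np.array(["t2w", "t1w", "flair"])  # modality labels

  clf = RandomForestClassifier(n_estimators=100, random_state=0)
  clf.fit(X, y)

  # Predict the modality of a new volume from its header-derived features
  print(clf.predict([[95.0, 8500.0, 145.0, 5.0]]))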

This project was supported by several funding sources including:

  • UCSF SCOUTS RO1
  • NIH-NINDS R01NS114405 and NINDS R01 NS119896
  • Botimageai.

Paper

The published paper is available at https://doi.org/10.1117/12.3006568.

Citing

Please reference the manuscript:

Michal Brzus, Cavan J. Riley, Joel Bruss, Aaron Boes, Randall Jones, Hans J. Johnson, "DICOM sequence selection for medical imaging applications," Proc. SPIE 12931, Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications, 1293108 (2 April 2024); https://doi.org/10.1117/12.3006568

Additionally, please reference the citations located in the citations directory.

Instructions

Below are instructions for installing and using the package as a user and developer.

Tutorials

Tutorial notebooks for training and using the classifier are provided in the scripts directory, along with all the scripts needed to train a custom model.

User Instructions

Pip install

Users can simply pip install the package, which installs the classifier, all necessary dependencies, and the pretrained model used for classification.

$ pip install dcm-classifier
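
After installation, classifying a DICOM session from Python might look roughly like the minimal sketch below. The module, class, and method names here are assumptions, not a definitive API; consult the tutorial notebooks in the scripts directory for the actual interface.

  # Hypothetical usage sketch: names below are assumptions, not a definitive API;
  # see the tutorial notebooks in the scripts directory for the real interface.
  from dcm_classifier.image_type_inference import ImageTypeClassifierBase
  from dcm_classifier.study_processing import ProcessOneDicomStudyToVolumesMappingBase

  inferer = ImageTypeClassifierBase()  # assumed to load the packaged pretrained model
  study = ProcessOneDicomStudyToVolumesMappingBase(
      study_directory="/path/to/dicom_session", inferer=inferer
  )
  study.run_inference()

  # Assumed accessors for the predicted modality and acquisition plane per volume
  for series_number, series in study.series_dictionary.items():
      for volume in series.get_volume_list():
          print(series_number, volume.get_modality(), volume.get_acquisition_plane())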

Clone the repository

If you prefer to clone the git repository:

$ git clone https://github.com/BRAINSia/dcm-classifier.git

Navigate to the cloned repo

$ cd <repo path>

Setup virtual environment

$ python3 -m venv <venv_path> && source <venv_path>/bin/activate

Install required packages

$ pip install -r requirements.txt

Developer Instructions

For development, clone the repository and install the developer requirements in a virtual environment. A development install also allows training new models using the scripts in the scripts directory.

$ pip install -r requirements_dev.txt

Install pre-commit hooks

$ pre-commit install

Run pre-commit hooks to ensure code quality

$ pre-commit run -a

Run the classify study script. The path to a model can be omitted, in which case the default model provided with the package will be used.

$ python3 <path_to_scripts_directory>/classify_study.py -d <path_to_dicom_session>

or pass the path to a separate model

$ python3 <path_to_scripts_directory>/classify_study.py -m models/ova_rf_classifier.onnx -d <path_to_dicom_session>

Testing

Testing in the dcm-classifier package is done using pytest. To run the tests, navigate to the root directory of the package and run the following command:

  pytest
  # or to fail on warnings
  python3 -Werror::FutureWarning -m pytest

Coverage Analysis

To run coverage analysis, navigate to the root directory of the package and run the following commands:

 coverage run --concurrency=multiprocessing --parallel-mode -m pytest tests --junitxml=tests/pytest.xml
 coverage combine
 coverage report --format=text -m | tee tests/pytest-coverage.txt
 coverage xml -o tests/coverage.xml

Contributing

We welcome contributions from the community! Before getting started, please take a moment to review our Contribution Guidelines for instructions on how to contribute to this project. Whether you're fixing a bug, implementing a new feature, or improving documentation, your contributions are greatly appreciated!

FAQs

  1. What is the purpose of this package?

    The purpose of this package is to provide a tool for classifying DICOM images based on their header information. This tool can be used to automate the classification process and eliminate human error.

  2. What are the key features of this package?

    The key features of this package include:

    • Classification of DICOM images based on header information
    • Automated classification process
    • Elimination of human error

  3. What are the future plans for this package and how can I contribute?

    The future plans for this package include:

    • Adding support for more image modalities
    • Improving the classification accuracy
    • Adding support for more DICOM header fields

    If you would like to contribute to any of these efforts, please see the Contribution Guidelines referenced in the Contributing section above.

Authors

  1. Michal Brzus

    github: mbrzus, email: michal-brzus@uiowa.edu

  2. Hans J. Johnson

    github: BRAINSia, email: hans-johnson@uiowa.edu

  3. Cavan Riley

    github: CavRiley, email: cavan-riley@uiowa.edu

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dcm_classifier-0.8.0.tar.gz (2.1 MB)

Uploaded Source

Built Distribution

dcm_classifier-0.8.0-py3-none-any.whl (71.4 kB)

Uploaded Python 3

File details

Details for the file dcm_classifier-0.8.0.tar.gz.

File metadata

  • Download URL: dcm_classifier-0.8.0.tar.gz
  • Size: 2.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.12

File hashes

Hashes for dcm_classifier-0.8.0.tar.gz:

  • SHA256: 8fcb5e0de49a045d3297756f11a5d16550dc371fe375d5a386909e285f007705
  • MD5: 997e70f35e5165c16d66a73d7ce7b4f7
  • BLAKE2b-256: ae1bec637587b096d768d180cc02a203e1f78ca009408c4c2544a75f178da4c8


File details

Details for the file dcm_classifier-0.8.0-py3-none-any.whl.


File hashes

Hashes for dcm_classifier-0.8.0-py3-none-any.whl:

  • SHA256: 2bad4ab0390e231aa0cc0f6653dca893710e5d0b0c8c95b80127664466e09401
  • MD5: 3f06817a589e410ce8fb077792ce9f0f
  • BLAKE2b-256: 907a979eb10513bda2b65cef3d6f456f0f83a3ce7a72e62036f8ca3e6e0e11d9

