This is a consolidation of work from NAMIC efforts primarily at the University of Iowa.

Project description

Introduction

In this work, we developed a robust, easily extensible classification framework that extracts key features from well-characterized DICOM header fields to identify image modality and acquisition plane. The tool eliminates error-prone human interaction and enables automated processing, increasing the reliability and efficiency of imaging applications. We used Random Forest and Decision Tree algorithms to determine image modality and orientation, training on header metadata from over 49,000 scan volumes across multiple studies, and achieved over 99% prediction accuracy for both image modality and acquisition plane classification.
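To illustrate the general approach (a simplified sketch, not the package's internal pipeline), the example below reads a few well-characterized header fields with pydicom and feeds them to a scikit-learn random forest; the chosen fields, encoding, and file names are illustrative assumptions.

  # Simplified sketch of header-based classification, not the package's actual pipeline.
  # Field selection, encoding, and file names are illustrative assumptions.
  import numpy as np
  import pydicom
  from sklearn.ensemble import RandomForestClassifier

  def header_features(dcm_path):
      """Convert a few DICOM header fields into a numeric feature vector."""
      ds = pydicom.dcmread(dcm_path, stop_before_pixels=True)
      return [
          float(ds.get("EchoTime", 0) or 0),
          float(ds.get("RepetitionTime", 0) or 0),
          float(ds.get("FlipAngle", 0) or 0),
          float(ds.get("SliceThickness", 0) or 0),
      ]

  # Hypothetical training data: one feature vector per scan volume plus known modality labels.
  X = np.array([header_features(p) for p in ["vol_a.dcm", "vol_b.dcm"]])
  y = np.array(["t1w", "flair"])

  clf = RandomForestClassifier(n_estimators=100, random_state=0)
  clf.fit(X, y)
  print(clf.predict(X))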

This project was supported by several funding sources including:

  • UCSF SCOUTS R01
  • NIH NINDS R01 NS114405 and R01 NS119896
  • Botimageai

Paper

The published paper is available via the DOI given in the citation below.

Citing

Please reference the manuscript:

Michal Brzus, Cavan J. Riley, Joel Bruss, Aaron Boes, Randall Jones, Hans J. Johnson, "DICOM sequence selection for medical imaging applications," Proc. SPIE 12931, Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications, 1293108 (2 April 2024); https://doi.org/10.1117/12.3006568

Additionally, please reference the citations located in the citations directory.

Instructions

Below are instructions for installing and using the package as a user and developer.

Tutorials

Tutorial notebooks for training and using the classifier are provided in the scripts directory, along with all the scripts necessary for training a custom model.

User Instructions

Pip install

For ease of use, the package can simply be pip installed, which provides the classifier, all dependencies required to run it, and the pretrained model for classification:

$ pip install dcm-classifier
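After installation, the classifier can also be driven directly from Python. The minimal sketch below follows the usage pattern published in the project repository; the class and argument names (ImageTypeClassifierBase, ProcessOneDicomStudyToVolumesMappingBase, study_directory, inferer) are assumptions that should be verified against the tutorial notebooks for your installed version.

  # Minimal usage sketch; class/argument names are taken from the project
  # repository at the time of writing and should be verified against the
  # tutorial notebooks for the installed version.
  from dcm_classifier.image_type_inference import ImageTypeClassifierBase
  from dcm_classifier.study_processing import ProcessOneDicomStudyToVolumesMappingBase

  inferer = ImageTypeClassifierBase()  # defaults to the bundled pretrained model
  study = ProcessOneDicomStudyToVolumesMappingBase(
      study_directory="/path/to/dicom_session",  # hypothetical session path
      inferer=inferer,
  )
  study.run_inference()  # infers modality and acquisition plane per volume
  # See the tutorial notebooks in the scripts directory for inspecting the results.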

Clone the repository

If you prefer to clone the git repository:

$ git clone https://github.com/BRAINSia/dcm-classifier.git

Navigate to the cloned repo

$ cd <repo path>

Setup virtual environment

$ python3 -m venv <venv_path> && source <venv_path>/bin/activate

Install required packages

$ pip install -r requirements.txt

Developer Instructions

For development, clone the repository and install the developer requirements in a virtual environment. A development install also allows new models to be trained using the scripts directory.

$ pip install -r requirements_dev.txt

Install pre-commit hooks

$ pre-commit install

Run pre-commit hooks to ensure code quality

$ pre-commit run -a

Run the classify_study script. The path to a model can be omitted, in which case the default model provided with the package is used:

$ python3 <path_to_scripts_directory>/classify_study.py -d <path_to_dicom_session>

or pass the path to a custom model:

$ python3 <path_to_scripts_directory>/classify_study.py -m models/ova_rf_classifier.onnx -d <path_to_dicom_session>
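Since the model argument is an ONNX file, a custom classifier can in principle be trained elsewhere and exported to ONNX before being passed via -m. Below is a minimal sketch using scikit-learn and skl2onnx with assumed feature dimensions, labels, and file names; the training scripts in the scripts directory remain the authoritative workflow.

  # Sketch of exporting a custom scikit-learn random forest to ONNX so it can
  # be passed with the -m flag. Feature count, labels, and file name are
  # illustrative assumptions; see the scripts directory for the real workflow.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from skl2onnx import convert_sklearn
  from skl2onnx.common.data_types import FloatTensorType

  n_features = 10  # hypothetical number of header-derived features
  X = np.random.rand(200, n_features).astype(np.float32)
  y = np.random.randint(0, 3, size=200)  # hypothetical modality labels

  clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

  onnx_model = convert_sklearn(
      clf, initial_types=[("input", FloatTensorType([None, n_features]))]
  )
  with open("my_rf_classifier.onnx", "wb") as f:
      f.write(onnx_model.SerializeToString())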

Testing

Testing in the dcm-classifier package is done using pytest. To run the tests, navigate to the root directory of the package and run one of the following commands:

  pytest
  # or, to treat FutureWarnings as errors
  python3 -Werror::FutureWarning -m pytest

Coverage Analysis

To run coverage analysis, navigate to the root directory of the package and run the following commands:

 coverage run --concurrency=multiprocessing --parallel-mode -m pytest tests --junitxml=tests/pytest.xml
 coverage combine
 coverage report --format=text -m |tee tests/pytest-coverage.txt
 coverage xml -o tests/coverage.xml

Contributing

We welcome contributions from the community! Before getting started, please take a moment to review our Contribution Guidelines for instructions on how to contribute to this project. Whether you're fixing a bug, implementing a new feature, or improving documentation, your contributions are greatly appreciated!

FAQs

  1. What is the purpose of this package?

    The purpose of this package is to provide a tool for classifying DICOM images based on their header information. This tool can be used to automate the classification process and eliminate human error.

  2. What are the key features of this package?

    The key features of this package include:

    • Classification of DICOM images based on header information
    • Automated classification process
    • Elimination of human error

  3. What are the future plans for this package and how can I contribute?

    The future plans for this package include:

    • Adding support for more image modalities
    • Improving the classification accuracy
    • Adding support for more DICOM header fields

    Contributions toward any of these goals are welcome; see the Contribution Guidelines referenced in the Contributing section above.

Authors

  1. Michal Brzus

    github: mbrzus, email: michal-brzus@uiowa.edu

  2. Hans J. Johnson

    github: BRAINSia, email: hans-johnson@uiowa.edu

  3. Cavan Riley

    github: CavRiley, email: cavan-riley@uiowa.edu

Download files

Download the file for your platform.

Source Distribution

dcm_classifier-0.8.3.tar.gz (2.1 MB)

Built Distribution

dcm_classifier-0.8.3-py3-none-any.whl (71.7 kB)

File details

Details for the file dcm_classifier-0.8.3.tar.gz.

File metadata

  • Download URL: dcm_classifier-0.8.3.tar.gz
  • Size: 2.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.12

File hashes

Hashes for dcm_classifier-0.8.3.tar.gz
  • SHA256: bf683814d00a688dee79fb77dd7c436d2bfb67ed662d0ff98360d40c76a45572
  • MD5: fc540a554be76de52ab00f8a846c5881
  • BLAKE2b-256: d451db4484e4e2912aaf6a72d33e81673b2b81b2fe7d2ce96f4021b71632b6f1


File details

Details for the file dcm_classifier-0.8.3-py3-none-any.whl.

File hashes

Hashes for dcm_classifier-0.8.3-py3-none-any.whl
  • SHA256: 25b4a942f0ed4f3d5ea08e85e1e6939545451605ffb92d26a39434cbe49c4ca7
  • MD5: 3315903af522d4735a0af75714bb17f7
  • BLAKE2b-256: 8ca0e9b1f7acea702d7699e781cb24ae312dce174a0123ef824e141851cad883

