
Segmentation Pipeline

This package implements a machine learning pipeline for semantic segmentation of medical images. It is a wrapper around MONAI and supports training and inference with UNETR and Swin-UNETR on arbitrary datasets. Development focused on the BTCV (abdomen), MSD, and BraTS datasets.

Set up

  1. Install the segmentation pipeline package:
    pip install 2404-segmentation-pipeline
    
  2. Install PyTorch.
    • On Windows:
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    
    • On Linux:
    pip3 install torch torchvision torchaudio
    
  3. (Optional) When working with the BTCV dataset, a self-supervised pretrained model is available for the Swin-UNETR architecture. Loading the pretrained weights before training allows the model to converge faster. Download the pretrained self-supervised model here

Documentation

Documentation is provided here

Examples

from pipeline import Pipeline
from monai.transforms import (
    AsDiscrete,
    EnsureChannelFirstd,
    Compose,
    CropForegroundd,
    LoadImaged,
    Orientationd,
    RandFlipd,
    RandCropByPosNegLabeld,
    RandShiftIntensityd,
    ScaleIntensityRanged,
    Spacingd,
    RandRotate90d,
    ResizeWithPadOrCropd,
    )

# Initialize the Pipeline object. The code below works for BTCV; parameters must be changed for other datasets.
pipeline = Pipeline(model_type="UNETR", modality=1, num_of_labels=14,
                       model_path="", debug=True)

# Transformations applied on training images
train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(1.5, 1.5, 2.0),
            mode=("bilinear", "nearest"),
        ),
        ScaleIntensityRanged(
            keys=["image"],
            a_min=-175,
            a_max=250,
            b_min=0.0,
            b_max=1.0,
            clip=True,
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            # A non-positive value (-1) falls back to the image size along that axis
            spatial_size=(96, 96, -1),
            pos=1,
            neg=1,
            num_samples=4,
            image_key="image",
            image_threshold=0,
        ),
        ResizeWithPadOrCropd(keys=["image", "label"],
            spatial_size=(96, 96, 96),
            mode='constant'
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[0],
            prob=0.10,
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[1],
            prob=0.10,
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[2],
            prob=0.10,
        ),
        RandRotate90d(
            keys=["image", "label"],
            prob=0.10,
            max_k=3,
        ),
        RandShiftIntensityd(
            keys=["image"],
            offsets=0.10,
            prob=0.50,
        ),
    ]
)
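For intuition, the ScaleIntensityRanged step above windows CT intensities: Hounsfield values in [-175, 250] are mapped linearly onto [0, 1], and everything outside that range is clipped. A minimal pure-Python sketch of the mapping (illustrative only, not the package's code):

```python
def scale_intensity_range(x, a_min=-175.0, a_max=250.0, b_min=0.0, b_max=1.0, clip=True):
    """Linearly map an intensity from [a_min, a_max] to [b_min, b_max], as ScaleIntensityRanged does."""
    y = (x - a_min) / (a_max - a_min) * (b_max - b_min) + b_min
    if clip:
        y = max(b_min, min(b_max, y))
    return y

print(scale_intensity_range(-175.0))  # lower window bound -> 0.0
print(scale_intensity_range(250.0))   # upper window bound -> 1.0
print(scale_intensity_range(1000.0))  # bone-range HU is clipped -> 1.0
```

This window emphasizes soft-tissue contrast, which is why the same parameters appear again in the validation and inference transforms below.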

# Transformations applied on validation images (deterministic; no random augmentations)
val_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(1.5, 1.5, 2.0),
            mode=("bilinear", "nearest"),
        ),
        ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image", "label"], source_key="image"),
    ]
)
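Spacingd resamples every volume to a common voxel spacing of 1.5 × 1.5 × 2.0 mm, which changes the shape of the voxel grid. A rough sketch of the resulting shape, using a hypothetical 512 × 512 × 100 CT volume at 0.8 × 0.8 × 2.5 mm spacing as the example input:

```python
def resampled_shape(shape, old_spacing, new_spacing):
    """Approximate output shape after resampling a volume to a new voxel spacing."""
    return tuple(round(n * old / new) for n, old, new in zip(shape, old_spacing, new_spacing))

# Hypothetical CT volume: 512 x 512 x 100 voxels at 0.8 x 0.8 x 2.5 mm spacing
print(resampled_shape((512, 512, 100), (0.8, 0.8, 2.5), (1.5, 1.5, 2.0)))
```

Resampling to a fixed spacing makes the (96, 96, 96) crop size correspond to the same physical volume across scans from different scanners.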

# Start training
pipeline.train(150, 10)

# Transformations applied to images at inference time; they should mirror val_transforms
inf_transforms = Compose(
    [
        LoadImaged(keys=["image"]),
        EnsureChannelFirstd(keys=["image"]),
        Orientationd(keys=["image"], axcodes="RAS"),
        Spacingd(
            keys=["image"],
            pixdim=(1.5, 1.5, 2.0),
            mode="bilinear",
        ),
        ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image"], source_key="image"),
    ]
)

# Inference
pipeline.inference(data_folder='path/to/inference/data/folder', output_folder='path/to/output/folder', transforms=inf_transforms)
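The example imports AsDiscrete, which is the usual way a multi-class prediction is reduced to a label map: for each voxel, take the argmax over the num_of_labels=14 class channels. A toy sketch of that reduction (illustrative only, not the package's internals; the voxel scores below are made up, and only 4 of the 14 channels are shown for brevity):

```python
def argmax_label(logits_per_voxel):
    """Pick the class index with the highest score, like AsDiscrete(argmax=True)."""
    return max(range(len(logits_per_voxel)), key=lambda c: logits_per_voxel[c])

# Hypothetical per-voxel class scores
voxels = [
    [0.1, 2.3, 0.5, -1.0],  # class 1 wins
    [3.0, 0.2, 0.1, 0.0],   # class 0 (background) wins
    [0.0, 0.1, 0.2, 4.2],   # class 3 wins
]
print([argmax_label(v) for v in voxels])  # -> [1, 0, 3]
```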
