Project description
Segmentation Pipeline
This package implements a machine learning pipeline for semantic segmentation of medical images. It is a wrapper around MONAI and supports training and inference with UNETR and Swin-UNETR on arbitrary datasets. Development focused on the BTCV (abdomen), MSD, and BraTS datasets.
Setup
- Install the segmentation pipeline package:
pip install 2404-segmentation-pipeline
- Install PyTorch.
- If on Windows:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
- If on Linux:
pip3 install torch torchvision torchaudio
- (Optional) When working with the BTCV dataset, a self-supervised pretrained Swin-UNETR model is available for it. Initializing training from these pretrained weights allows the model to converge faster; see the sketch after this list. Download the pretrained self-supervised model here.
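A minimal sketch of how the downloaded checkpoint might be plugged in, assuming the Pipeline constructor's model_path argument accepts a path to pretrained weights; the checkpoint file name and the "SwinUNETR" model_type string are illustrative assumptions, not confirmed by the package:
from pipeline import Pipeline
# Hypothetical: start from the downloaded self-supervised Swin-UNETR checkpoint.
# "model_swinvit.pt" and model_type="SwinUNETR" are assumed names; adjust as needed.
pipeline = Pipeline(model_type="SwinUNETR", modality=1, num_of_labels=14,
                    model_path="model_swinvit.pt", debug=True)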
Documentation
Documentation is provided here
Examples
from pipeline import Pipeline
from monai.transforms import (
AsDiscrete,
EnsureChannelFirstd,
Compose,
CropForegroundd,
LoadImaged,
Orientationd,
RandFlipd,
RandCropByPosNegLabeld,
RandShiftIntensityd,
ScaleIntensityRanged,
Spacingd,
RandRotate90d,
ResizeWithPadOrCropd,
)
# Initialize the Pipeline object. The code below works for BTCV; parameters need to be changed for other datasets.
pipeline = Pipeline(model_type="UNETR", modality=1, num_of_labels=14,
model_path="", debug=True)
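# For other datasets the constructor arguments change. As a rough, untested
# sketch (all values below are illustrative assumptions, not taken from the
# package docs), a BraTS-style setup might pass the four MRI sequences as
# input modalities, e.g.:
# pipeline = Pipeline(model_type="SwinUNETR", modality=4, num_of_labels=4,
#                     model_path="", debug=True)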
# Transformations applied on training images
train_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys=["image", "label"]),
Orientationd(keys=["image", "label"], axcodes="RAS"),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
ScaleIntensityRanged(
keys=["image"],
a_min=-175,
a_max=250,
b_min=0.0,
b_max=1.0,
clip=True,
),
CropForegroundd(keys=["image", "label"], source_key="image"),
RandCropByPosNegLabeld(
keys=["image", "label"],
label_key="label",
# A spatial_size entry of -1 keeps the full extent of that axis;
# ResizeWithPadOrCropd below then pads/crops it to 96.
spatial_size=(96, 96, -1),
pos=1,
neg=1,
num_samples=4,
image_key="image",
image_threshold=0,
),
ResizeWithPadOrCropd(keys=["image", "label"],
spatial_size=(96, 96, 96),
mode='constant'
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[0],
prob=0.10,
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[1],
prob=0.10,
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[2],
prob=0.10,
),
RandRotate90d(
keys=["image", "label"],
prob=0.10,
max_k=3,
),
RandShiftIntensityd(
keys=["image"],
offsets=0.10,
prob=0.50,
),
]
)
# Transformation applied on validation images
val_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys=["image", "label"]),
Orientationd(keys=["image", "label"], axcodes="RAS"),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=["image", "label"], source_key="image"),
]
)
# Start training
pipeline.train(150, 10)
# Transformations applied to images at inference time; these should mirror val_transforms
inf_transforms = Compose(
[
LoadImaged(keys=["image"]),
EnsureChannelFirstd(keys=["image"]),
Orientationd(keys=["image"], axcodes="RAS"),
Spacingd(
keys=["image"],
pixdim=(1.5, 1.5, 2.0),
mode="bilinear",
),
ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=["image"], source_key="image"),
]
)
# Inference
pipeline.inference(data_folder='path/to/inference/data/folder', output_folder='path/to/output/folder', transforms=inf_transforms)
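After inference completes, predictions are written to output_folder. Assuming the results are saved as NIfTI volumes (an assumption based on common MONAI conventions, not confirmed by this package), a quick way to inspect a predicted label map:
import numpy as np
import nibabel as nib
# Hypothetical output file name; adjust to whatever inference actually writes.
pred = nib.load("path/to/output/folder/example_prediction.nii.gz")
labels = pred.get_fdata()
print(labels.shape)      # spatial size of the predicted volume
print(np.unique(labels)) # label indices present in the prediction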
File details
Details for the file 2404-segmentation-pipeline-0.1.2.tar.gz.
File metadata
- Download URL: 2404-segmentation-pipeline-0.1.2.tar.gz
- Upload date:
- Size: 8.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.9.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | f40e7a35e1c155df79f11e2c404de3d92c79395699aa7635c176ca29737a1f91
MD5 | 3fdadabd4bf02fb4ff38c531fa730679
BLAKE2b-256 | 34ba7f91aedf4e11736fb033931bfd6c115ad2050d8edc614b8b43964c53b690
File details
Details for the file 2404_segmentation_pipeline-0.1.2-py3-none-any.whl.
File metadata
- Download URL: 2404_segmentation_pipeline-0.1.2-py3-none-any.whl
- Upload date:
- Size: 8.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.9.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | e8561dbcf39d56e8209101ee6347aaf10a08875cba0a501ce4e019a5a4cc6404
MD5 | b613c348a11a238ba318dd9bfdd3713b
BLAKE2b-256 | cfbbc1dc8cd21be93327aef21066659672d2e9ee9df32c53906f0560c48c2157