Volume sample extractor for Color Doppler ultrasound data using a U-Net neural network architecture.
Project description
Volumetric Segmentation of Color Doppler Samples
This repository implements a U-Net-based pipeline to segment and generate masks from volumetric Color Doppler imaging data.
[!NOTE] To see the latest updates and upcoming features, please check the Change Log.
Installation
For uv users (recommended):
git clone https://github.com/serchugar/color-doppler-volume-sample-extractor
# In your uv project, install in editable mode with CUDA support (Nvidia GPU)
uv pip install -e "/path/to/color-doppler-volume-sample-extractor[cuda]"
# Or install in editable mode with CPU support
uv pip install -e "/path/to/color-doppler-volume-sample-extractor[cpu]"
Alternatively, if using standard Python and pip:
git clone https://github.com/serchugar/color-doppler-volume-sample-extractor
# Install with CUDA support
pip install -e "/path/to/color-doppler-volume-sample-extractor[cuda]"
# Or install with CPU support
pip install -e "/path/to/color-doppler-volume-sample-extractor[cpu]"
Training Workflow
If you want to skip training and use a pre-trained model, you can download the weights from the latest release.
1. Prepare Your Data
Organize your training images and masks in a directory with the following naming convention:
- Images: `img<number>.jpg` (e.g., `img1.jpg`, `img2.jpg`, `img123.jpg`)
- Masks: `mask<number>.png` (e.g., `mask1.png`, `mask2.png`, `mask123.png`)
[!IMPORTANT] The number in the filename must match between corresponding image and mask files (e.g., `img42.jpg` should have a corresponding `mask42.png`). This naming convention is required for the pipeline to correctly associate images with their segmentation masks. Mask images must be binary (no antialiasing) and saved as PNG files so the masks stay lossless.
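Before training, a quick sketch like the one below (not part of the package; it only relies on the naming convention above) can verify that every image has a matching mask:
# Check that every img<N>.jpg has a matching mask<N>.png and vice versa
import re
from pathlib import Path

data_dir = Path("path/to/your/labeled/data/dir")

img_ids = {m.group(1) for p in data_dir.glob("img*.jpg")
           if (m := re.fullmatch(r"img(\d+)\.jpg", p.name))}
mask_ids = {m.group(1) for p in data_dir.glob("mask*.png")
            if (m := re.fullmatch(r"mask(\d+)\.png", p.name))}

print("Images without masks:", sorted(img_ids - mask_ids) or "none")
print("Masks without images:", sorted(mask_ids - img_ids) or "none")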
2. Run Training
import random
from pathlib import Path
from dv_extractor import DEVICE, DynamicUNet, train
from dv_extractor.utils import seed_all
# Not mandatory, but recommended for reproducibility
seed = random.getrandbits(32)
seed_all(seed)
print(f"Seed: {seed}")
model = DynamicUNet(in_channels=1, out_channels=1, depth=4, init_features=32)
model.to(DEVICE)
print(f"Model device: {model.device}\n")
labeled_data_dir = Path("path/to/your/labeled/data/dir")
train(
model,
labeled_data_dir,
epochs=100,
lr=0.001,
batch_size=5,
checkpoints_dir=Path("weights"),
)
The trained model weights will be saved in the checkpoints_dir folder.
[!NOTE] Due to the limited amount of labeled data and the time cost of creating each mask, training does not run a validation process.
Inference
To run predictions, first train your model or load pretrained weights, then use the predict() method from the DynamicUNet class.
[!WARNING] The current pretrained weights were trained on images after applying a 95% threshold.
Loading "raw" images directly into the model without this thresholding will result in incorrect segmentations.
The predict() method handles this thresholding automatically, so use it to avoid any issues. To change the threshold, set it in the DynamicUNet constructor when creating the model instance.
from pathlib import Path
import torch
from dv_extractor import DEVICE, DynamicUNet, discover_images, visualize_predictions
from torchvision.io import decode_image
# Initialize model with the correct hyperparameters
model = DynamicUNet(in_channels=1, out_channels=1, depth=4, init_features=32)
model.to(DEVICE)
# Load the weights. Here we use the pretrained ones
model.load_weights(Path("weights/pretrained/unet_depth4_feat32_in1_out1_weights.pt"))
# Load the images
imgs_path: list[Path] = discover_images(Path("path/to/your/images"))
# Run the inference and visualize the results
masks: list[torch.Tensor] = model.predict(imgs_path)
visualize_predictions(imgs_path, masks, metadata=True)
# To save the mask results, use torchvision.utils.save_image()
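# For example (a sketch, not part of the package), assuming each predicted mask is a
# single-channel tensor with values in [0, 1]:
from torchvision.utils import save_image
out_dir = Path("predicted_masks")
out_dir.mkdir(exist_ok=True)
for img_path, mask in zip(imgs_path, masks):
    save_image(mask.float(), out_dir / f"mask_{img_path.stem}.png")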
Pretrained Model Configuration
The following hyperparameters were used to generate the weights available in the latest release. You can use these as a reference for your own training:
| Parameter | Value | Description |
|---|---|---|
| Architecture | U-Net | Base model structure |
| Depth | 4 | Number of downsampling/upsampling blocks |
| Init Features | 32 | Number of filters in the first layer |
| Input Size | 512 x 512 | Spatial resolution of the samples |
| Threshold | 0.95 | Doppler intensity cutoff during preprocessing |
| Learning Rate | 0.001 | Optimizer step size (Adam) |
| Epochs | 2000 | Total passes over the training data |
| Batch Size | 5 | Number of samples per training step |
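As a reference, the sketch below maps this configuration onto the DynamicUNet constructor and the train() call shown in the Training Workflow (directory paths are placeholders):
from pathlib import Path
from dv_extractor import DEVICE, DynamicUNet, train

# Same architecture as the released weights: depth 4, 32 initial features, 1-channel I/O
model = DynamicUNet(in_channels=1, out_channels=1, depth=4, init_features=32)
model.to(DEVICE)

train(
    model,
    Path("path/to/your/labeled/data/dir"),
    epochs=2000,  # value used for the released weights
    lr=0.001,
    batch_size=5,
    checkpoints_dir=Path("weights"),
)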
GPU Acceleration
To enable CUDA support for faster training and inference, ensure you have CUDA and cuDNN installed.
Then, sync the environment with the cuda extra:
uv sync --extra cuda
In your code, don't forget to move your model and weights to the GPU:
from pathlib import Path
from dv_extractor import DEVICE, DynamicUNet
model = DynamicUNet(in_channels=1, out_channels=1, depth=4, init_features=32)
model.to(DEVICE)
model.load_weights(Path("path/to/your/weights.pt"))
...
DEVICE is just a utility constant defined as:
DEVICE: torch.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
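To confirm that CUDA is actually picked up, a standard PyTorch check works:
import torch

if torch.cuda.is_available():
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA not available; running on CPU.")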
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file color_doppler_volume_sample_extractor-0.2.1.tar.gz.
File metadata
- Download URL: color_doppler_volume_sample_extractor-0.2.1.tar.gz
- Upload date:
- Size: 317.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8dece9e64128ad790c366e805a2636cc2bf45e344819d31aafe27e7eb6504dfd |
| MD5 | 54f83463ba6c1ede2acef619733747cc |
| BLAKE2b-256 | 0eb8f84ae1afd26c17e1e39543e16c2ef6859696b4c638d926818b202093d193 |
File details
Details for the file color_doppler_volume_sample_extractor-0.2.1-py3-none-any.whl.
File metadata
- Download URL: color_doppler_volume_sample_extractor-0.2.1-py3-none-any.whl
- Upload date:
- Size: 11.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0de142233a7c98f0dfada350a8fad6b1a7aa3ce7735547069995e14142d991dc |
| MD5 | 7ed4df51dfcb952e6f8a4d6efb961456 |
| BLAKE2b-256 | 45400db231fe0cd5189a79f6915636fdbf0292700f1c2e19bd922d0d7a866d1d |