
A Python package providing the MONet Bundle for nnUNet, extending it with MONAI functionalities and access to the MAIA Segmentation Portal.

Project description

MONet Bundle


This repository contains the implementation of the MONet Bundle, with some instructions on how to use it and how to convert a generic nnUNet model to MONAI Bundle format.

For more details about the MONet Bundle, please refer to the Jupyter notebook MONet_Bundle.ipynb.

2025-06-25 UPDATE: Check out the MONet Bundle for FedBraTS and FedLymphoma!

The MONet Bundle has been used in the Federated Brain Tumor Segmentation (FedBraTS) and Federated Lymphoma Segmentation (FedLymphoma) projects, which are described in an accompanying paper.

Download the MONet Bundle

You can download the MONet Bundle from the following link: MONet Bundle. Alternatively, you can use the following command to download it:

wget https://raw.githubusercontent.com/SimoneBendazzoli93/MONet-Bundle/main/MONetBundle.zip

or, through the provided Python script:

MONet_fetch_bundle.py --bundle_path <FOLDER_PATH>
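The two steps above (download, then unpack) can also be sketched in a few lines of stdlib Python. The URL is the same one used by the wget command above; the extract_bundle and fetch_bundle helpers are illustrative and not part of the package API.

```python
import io
import zipfile
from urllib.request import urlopen

BUNDLE_URL = "https://raw.githubusercontent.com/SimoneBendazzoli93/MONet-Bundle/main/MONetBundle.zip"

def extract_bundle(zip_bytes: bytes, dest: str) -> list:
    """Unpack an in-memory bundle zip into dest and return the extracted entry names."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        zf.extractall(dest)
        return zf.namelist()

def fetch_bundle(url: str = BUNDLE_URL, dest: str = "MONetBundle") -> list:
    """Download the bundle zip and extract it into dest."""
    with urlopen(url) as resp:
        return extract_bundle(resp.read(), dest)
```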

Convert a trained nnUNet model to MONAI Bundle

To convert a trained nnUNet model to MONAI Bundle format, start by exporting the trained model with the nnUNetv2_export_model_to_zip command. This command exports the model to a zip file that is used as the input of the conversion process.

nnUNetv2_export_model_to_zip -d 009 -o Task09_Spleen.zip -c 3d_fullres -tr nnUNetTrainer -p nnUNetPlans -chk checkpoint_final.pth checkpoint_best.pth --not_strict

For testing purposes, you can use the Task09_Spleen.zip file provided in this repository: https://github.com/SimoneBendazzoli93/nnUNet-MONAI-Bundle/releases/download/v1.0/Task09_Spleen.zip. This file contains a trained nnUNet model for the spleen segmentation task, limited to the 3d_fullres configuration and fold 0.
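Before running the conversion, it can help to sanity-check the exported archive. The snippet below simply lists the checkpoint files inside such a zip; the fold_0/checkpoint_final.pth layout shown in the test reflects the usual nnUNetv2 results-folder convention, and the exact paths inside your export may differ.

```python
import zipfile

def list_checkpoints(zip_path: str) -> list:
    """Return the checkpoint entries (.pth files) found in an exported model zip."""
    with zipfile.ZipFile(zip_path) as zf:
        return sorted(n for n in zf.namelist() if n.endswith(".pth"))
```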

Next, you can build the provided Docker image to convert the model to MONAI Bundle format. The Dockerfile is provided in this repository, and you can build the image with the following command:

docker build -t nnunet-monai-bundle-converter .

The converter first converts the nnUNet model to MONAI Bundle format, and then creates the corresponding TorchScript model, which can be used for inference with MONAI Deploy.

To run the conversion, you can use the following command:

wget https://github.com/SimoneBendazzoli93/MONet-Bundle/releases/download/v1.0/Task09_Spleen.zip
python MONet_run_conversion.py --bundle_path <MONAI_BUNDLE_PATH> --nnunet_model <NNUNET_CHECKPOINT_PATH>.zip
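If you need to drive the conversion from Python (for example inside a batch job), a thin wrapper around the command above can look like this. The wrapper only assembles and runs the exact invocation shown; it assumes MONet_run_conversion.py is on the current path.

```python
import subprocess

def conversion_cmd(bundle_path: str, nnunet_model_zip: str) -> list:
    """Assemble the MONet_run_conversion.py invocation shown above."""
    return [
        "python", "MONet_run_conversion.py",
        "--bundle_path", bundle_path,
        "--nnunet_model", nnunet_model_zip,
    ]

def run_conversion(bundle_path: str, nnunet_model_zip: str) -> None:
    """Run the conversion, raising if the script exits with a non-zero status."""
    subprocess.run(conversion_cmd(bundle_path, nnunet_model_zip), check=True)
```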

Package the MONet Bundle with MONAI Deploy

To package the MONet Bundle with MONAI Deploy, you can use the monai-deploy package command. This command will create a deployable bundle that can be used for inference with MONAI Deploy.

monai-deploy package examples/apps/spleen_nnunet_seg_app -c examples/apps/spleen_nnunet_seg_app.yaml -t spleen:1.0 --platform x86_64

Run inference with MONAI Deploy

The resulting Docker context can be found in the deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0 directory. You can use this context to build a Docker image that can be used for inference with MONAI Deploy:

# Copy the TorchScript model to the Docker context
cp nnUNetBundle/models/fold_0/model.ts deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0/models/model/

docker build deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0 --build-arg UID=1000 --build-arg GID=1000 --build-arg UNAME=holoscan -f deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0/Dockerfile -t spleen-x64-workstation-dgpu-linux-amd64:1.0

To test the resulting Docker image, you can run:

MONet_inference_dicom.py

specifying the input and output folders, together with the TorchScript model path. The input folder should contain all the DICOM files of the study you want to process; the output folder will contain the predictions in DICOM SEG format, together with an additional STL file with the 3D mesh of the segmentation.
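The folder contract described above can be sketched as follows: the helpers below just enumerate the DICOM files of a study folder and name the artifacts the app is expected to write. The .dcm extension and the output file names are assumptions for illustration; MONet_inference_dicom.py defines the actual interface.

```python
from pathlib import Path

def collect_dicom_inputs(input_folder: str) -> list:
    """List the DICOM files (.dcm, assumed extension) of the study to be processed."""
    return sorted(str(p) for p in Path(input_folder).glob("*.dcm"))

def expected_outputs(output_folder: str) -> dict:
    """Artifacts the inference app is expected to produce (illustrative names)."""
    out = Path(output_folder)
    return {
        "dicom_seg": out / "segmentation.dcm",  # predictions in DICOM SEG format
        "mesh": out / "segmentation.stl",       # 3D mesh of the segmentation
    }
```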

To create the same Docker image running inference on NIfTI images instead, you can use the Dockerfile provided in the deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0-nifti directory. It is already set up to run inference on NIfTI images and includes the necessary dependencies. To test the resulting Docker image, you can run:

MONet_inference_nifti.py

specifying the input and output folders, together with the TorchScript model path. The input folder should contain all the NIfTI files of the study you want to process (one per modality, with the given suffix identifier), and the output folder will contain the predictions in NIfTI format.
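For the NIfTI flow, inputs follow a per-modality suffix scheme. A minimal sketch of grouping files by case, assuming the nnUNet-style four-digit modality suffix (e.g. _0000); the actual suffix identifier is whatever your dataset uses:

```python
import re
from collections import defaultdict

def group_by_case(filenames):
    """Group NIfTI files like 'case_0000.nii.gz' by case id, keyed by modality index."""
    pattern = re.compile(r"^(?P<case>.+)_(?P<mod>\d{4})\.nii(\.gz)?$")
    cases = defaultdict(dict)
    for name in filenames:
        m = pattern.match(name)
        if m:  # non-matching files (notes, masks, ...) are simply skipped
            cases[m.group("case")][int(m.group("mod"))] = name
    return dict(cases)
```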

Project details


Download files

Download the file for your platform.

Source Distribution

monet_bundle-1.0a0.tar.gz (38.0 kB)

Uploaded Source

Built Distribution


monet_bundle-1.0a0-py3-none-any.whl (19.2 kB)

Uploaded Python 3

File details

Details for the file monet_bundle-1.0a0.tar.gz.

File metadata

  • Download URL: monet_bundle-1.0a0.tar.gz
  • Upload date:
  • Size: 38.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for monet_bundle-1.0a0.tar.gz
  • SHA256: af6b07b02f58b91f9b34b7a45dcda9cc5b9382657b46f933c671a97b4b8d979d
  • MD5: ee8604528373bd56d33110e894fc2add
  • BLAKE2b-256: 23731549ed78e0a70ba7a915d7cc5e34467e1a9a05f4f307d9c4c1801f57c322


File details

Details for the file monet_bundle-1.0a0-py3-none-any.whl.

File metadata

  • Download URL: monet_bundle-1.0a0-py3-none-any.whl
  • Upload date:
  • Size: 19.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for monet_bundle-1.0a0-py3-none-any.whl
  • SHA256: 74639ff61e0473dfc4ad5205111a80f86b47ba9d70f380f4281590177006e6a0
  • MD5: 3a05d77cd02f2aedc0b39fe03bbc833b
  • BLAKE2b-256: 867d88d3d573ec887f69d28afaf2985c733d56ac7878699368c0d7c2e38b338f

