
A Python package providing the MONet Bundle for nnUNet, extending it with MONAI functionalities and access to the MAIA Segmentation Portal.


MONet Bundle


This repository contains the implementation of the MONet Bundle, with some instructions on how to use it and how to convert a generic nnUNet model to MONAI Bundle format.

For more details about the MONet Bundle, please refer to the Jupyter notebook MONet_Bundle.ipynb.

2025-07-14 UPDATE: MAIA Segmentation Portal Released!

Curious to try the models through a user-friendly web interface? Explore the newly launched MAIA Segmentation Portal, where you can quickly upload your medical images and receive predictions in seconds. Prefer to keep your data local? No problem — simply download the models and run inference on your own machine with a single command!

2025-06-25 UPDATE: Check out the MONet Bundle for FedBraTS and FedLymphoma!

The MONet Bundle has been used in the Federated Brain Tumor Segmentation FedBraTS and Federated Lymphoma Segmentation FedLymphoma projects, which are described in the following paper:

Download the MONet Bundle

You can download the MONet Bundle from the following link: MONet Bundle. Alternatively, you can use the following command to download the MONet Bundle:

wget https://raw.githubusercontent.com/SimoneBendazzoli93/MONet-Bundle/main/MONetBundle.zip

or, through the Python Script:

MONet_fetch_bundle.py --bundle_path <FOLDER_PATH>
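If you fetch the archive programmatically, it can be worth verifying its integrity before unpacking. A minimal standard-library sketch (the expected SHA-256 digest is something you supply, e.g. from the release page; it is not hard-coded here):

```python
import hashlib
import zipfile
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large archives do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_and_extract(archive: Path, expected_sha256: str, dest: Path) -> None:
    """Refuse to unpack an archive whose checksum does not match."""
    actual = sha256_of(archive)
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch: {actual} != {expected_sha256}")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
```

This is a generic integrity check, not part of the MONet Bundle package itself.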

Convert a trained nnUNet model to MONAI Bundle

To convert a trained nnUNet model to MONAI Bundle format, start by exporting the trained model with the nnUNetv2_export_model_to_zip command. This command exports the model to a zip file that can then be used in the conversion process.

nnUNetv2_export_model_to_zip -d 009 -o Task09_Spleen.zip -c 3d_fullres -tr nnUNetTrainer -p nnUNetPlans -chk checkpoint_final.pth checkpoint_best.pth --not_strict
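If you drive the export from Python (for example to batch several datasets), one option is to assemble the argument list and hand it to subprocess. The flags mirror the command above; build_export_cmd is a hypothetical helper sketched here, not part of the package:

```python
from typing import List, Sequence

def build_export_cmd(
    dataset_id: str,
    output_zip: str,
    configuration: str = "3d_fullres",
    trainer: str = "nnUNetTrainer",
    plans: str = "nnUNetPlans",
    checkpoints: Sequence[str] = ("checkpoint_final.pth", "checkpoint_best.pth"),
    strict: bool = False,
) -> List[str]:
    """Build the nnUNetv2_export_model_to_zip invocation as an argv list."""
    cmd = [
        "nnUNetv2_export_model_to_zip",
        "-d", dataset_id,
        "-o", output_zip,
        "-c", configuration,
        "-tr", trainer,
        "-p", plans,
        "-chk", *checkpoints,
    ]
    if not strict:
        cmd.append("--not_strict")
    return cmd

# Example (requires nnUNetv2 installed and the dataset available):
# subprocess.run(build_export_cmd("009", "Task09_Spleen.zip"), check=True)
```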

For testing purposes, you can use the Task09_Spleen.zip file provided in this repository: https://github.com/SimoneBendazzoli93/nnUNet-MONAI-Bundle/releases/download/v1.0/Task09_Spleen.zip. This file contains a trained nnUNet model for the Spleen segmentation task, for only the 3d_fullres configuration and the fold 0.

Next, you can build the provided Docker image to convert the model to MONAI Bundle format. The Dockerfile is provided in this repository, and you can build the image with the following command:

docker build -t nnunet-monai-bundle-converter .

The converter will first convert the nnUNet model to MONAI Bundle format, and then create the corresponding TorchScript model, which can be used for inference with MONAI Deploy.

To run the conversion, you can use the following command:

wget https://github.com/SimoneBendazzoli93/MONet-Bundle/releases/download/v1.0/Task09_Spleen.zip
python MONet_run_conversion.py --bundle_path <MONAI_BUNDLE_PATH> --nnunet_model <NNUNET_CHECKPOINT_PATH>.zip
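After conversion, you can sanity-check that the output follows the conventional MONAI Bundle layout (a configs directory with metadata plus a models directory). The exact file set can differ between bundle versions, so treat this as an illustrative check under those assumptions rather than a spec:

```python
from pathlib import Path
from typing import List

def missing_bundle_parts(bundle_dir: Path) -> List[str]:
    """Report which of the conventional MONAI Bundle entries are absent."""
    expected = [
        Path("configs") / "metadata.json",   # bundle metadata (name, version, ...)
        Path("configs") / "inference.json",  # inference workflow definition
        Path("models"),                      # weights / TorchScript exports
    ]
    return [str(p) for p in expected if not (bundle_dir / p).exists()]
```

An empty return value means the basic structure is in place; anything listed points at a conversion step that did not produce its output.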

Package the MONet Bundle with MONAI Deploy

To package the MONet Bundle with MONAI Deploy, you can use the monai-deploy package command. This command will create a deployable bundle that can be used for inference with MONAI Deploy.

monai-deploy package examples/apps/spleen_nnunet_seg_app -c examples/apps/spleen_nnunet_seg_app.yaml -t spleen:1.0 --platform x86_64

Run inference with MONAI Deploy

The resulting Docker context can be found in the deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0 directory. You can use this context to build a Docker image that can be used for inference with MONAI Deploy:

# Copy the TorchScript model to the Docker context
cp nnUNetBundle/models/fold_0/model.ts deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0/models/model/

docker build deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0 --build-arg UID=1000 --build-arg GID=1000 --build-arg UNAME=holoscan -f deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0/Dockerfile -t spleen-x64-workstation-dgpu-linux-amd64:1.0

To test the resulting Docker image, you can run:

MONet_inference_dicom.py --dicom_study_folder <INPUT_FOLDER> --prediction_output_folder <OUTPUT_DIR> --docker-image maiacloud/spleen-x64-workstation-dgpu-linux-amd64:1.0

Here you specify the input and output folders, together with the Docker image to use for inference. The input folder should contain all the DICOM files of the study you want to process, and the output folder will contain the predictions in DICOM SEG format, plus an additional STL file with the 3D mesh of the segmentation.
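Before launching the container, a quick check that the input folder actually contains a DICOM study can save a failed run. This sketch is extension-based only; real DICOM files are not required to carry a .dcm suffix, so adjust it to your archive's conventions:

```python
from pathlib import Path
from typing import List

def list_dicom_files(study_folder: Path) -> List[Path]:
    """Collect candidate DICOM files, searching subdirectories as well."""
    return sorted(p for p in study_folder.rglob("*") if p.suffix.lower() == ".dcm")
```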

To create the same Docker image running inference on NIfTI images, you can use the provided Dockerfile in the deploy/spleen-x64-workstation-dgpu-linux-amd64:1.0-nifti directory. The Dockerfile is already set up to run inference on NIfTI images, and it includes the necessary dependencies. To test the resulting Docker image, you can run:

MONet_inference_nifti.py --study_folder <INPUT_FOLDER> --prediction_output_folder <OUTPUT_DIR> --docker-image maiacloud/spleen-x64-workstation-dgpu-linux-amd64:1.0-nifti

Here you specify the input and output folders, together with the Docker image to use. The input folder should contain all the NIfTI files of the study you want to process (one per modality, with the given suffix identifier), and the output folder will contain the predictions in NIfTI format.
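The modality suffix convention can be checked up front as well. The sketch below assumes the nnU-Net-style four-digit channel suffixes (e.g. case_0000.nii.gz, case_0001.nii.gz); if your bundle uses a different identifier, adjust the pattern accordingly:

```python
import re
from pathlib import Path
from typing import Dict, List

# Matches e.g. "spleen_07_0000.nii.gz" -> case "spleen_07", modality "0000".
SUFFIX_RE = re.compile(r"^(?P<case>.+)_(?P<modality>\d{4})\.nii(\.gz)?$")

def group_by_case(study_folder: Path) -> Dict[str, List[Path]]:
    """Map each case ID to its modality files, ordered by modality index."""
    cases: Dict[str, List[Path]] = {}
    for path in sorted(study_folder.iterdir()):
        m = SUFFIX_RE.match(path.name)
        if m:
            cases.setdefault(m.group("case"), []).append(path)
    return cases
```

Each case should end up with one file per expected modality; a shorter list usually means a missing or misnamed input.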

