
Implementation of NAGINI-3D, a Python package for multi-object segmentation in biological imaging, based on deep learning and active surfaces.

Project description

NAGINI-3D | Prediction of Parametric Surfaces for Multi-Object Segmentation in 3D Biological Imaging

We present NAGINI-3D (N-Active shapes for seGmentINg 3D biological Images), a method dedicated to the segmentation of 3D biological images, based on both deep learning (CNNs) and active surfaces (snakes).

This repository provides the code described in the paper:

Method description

Our approach consists of training a U-Net to:

  1. locate the objects of interest in a 3D image using a predicted probability map $\hat{p}$,
  2. for each object, predict a set of control points $\lbrace\hat{\boldsymbol{f}}_{\boldsymbol{x},i}\rbrace_i$ describing a parametric surface $\hat{\boldsymbol{s}}_{\boldsymbol{x}}$ representing the object located in $\boldsymbol{x}$,
  3. (optional) refine the surfaces with a snake optimisation procedure driven by the image gradient.

To evaluate the loss used to train the network, the Ground-Truth (GT) probability/spot map $p$ and a sampling $S$ representing each object of the training dataset are required. Tools provided in this GitHub repository help you pre-process your data to create them.
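As an illustration of what such a GT spot map can look like, the sketch below places a Gaussian bump at each object's centroid. The exact construction used by the package may differ; the function name and the `sigma` parameter are hypothetical.

```python
import numpy as np

def spot_map_from_masks(labels: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Build an illustrative GT probability/spot map from a 3D label image:
    a Gaussian bump centred on each object's centroid (sketch, not the
    package's actual implementation)."""
    p = np.zeros(labels.shape, dtype=np.float32)
    # Voxel coordinate grids along each axis.
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in labels.shape], indexing="ij")
    for lab in np.unique(labels):
        if lab == 0:  # 0 is background
            continue
        # Centroid of the current object.
        cz, cy, cx = np.argwhere(labels == lab).mean(axis=0)
        d2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        # Combine bumps with a max so overlapping tails stay <= 1.
        p = np.maximum(p, np.exp(-d2 / (2 * sigma ** 2)))
    return p
```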

The training and inference pipelines are summarized in the following figures.

Training pipeline: (figure)

Inference pipeline: (figure)

More details on the method are provided in the paper mentioned above.

Installation

The experiments were run using Python 3.10.8. A list of all the packages installed to run the method is provided in requirements.txt.

To guarantee that the code functions properly and that the experiments are reproducible, we recommend creating a Singularity container (installation guide) from the recipe we used. The recipe nagini3D.def and the corresponding requirements file nagini3D.txt are provided in the singularity directory.

Once Singularity is installed, run the following command to create the Singularity image:

singularity build nagini3D.sif <path to nagini3D.def file>

The image can then be run:

singularity shell --nv -B <path to the directory where the code and data are stored>:<same path> <path to the .sif image file>

If the Docker image used to create the Singularity image (see nagini3D.def) is not compatible with your GPU and CUDA version, pick another one here that matches your requirements and ships PyTorch>=1.13.

Inside the running image, you will have the exact same versions of Python, PyTorch, and the other important packages used to run the code.

Installation using pip

TODO with Arthur

Applying the method

All the scripts are designed to process TIF images.

Preprocessing data for training

The script format_dataset.py pre-processes the GT masks to create the probability maps and the sampling of the objects.

python format_dataset.py -i <str: directory containing the masks> -o <str: directory to store the samplings and spot maps> -n <int (optional, default 101): number of points to sample on the surface> -v <bool: verbose>

Warnings:

  1. The sampling procedure here can produce any positive number of points, but the sampling procedure used for predicted surfaces during training (a Fibonacci lattice) requires an odd number of points. Make sure the sampling size used here is greater than or equal to the one you will use during training.
  2. Make sure that your labels are indexed contiguously (no missing labels, e.g. 1, 2, 4 with no mask corresponding to index 3).
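The odd-count constraint comes from the Fibonacci lattice used to sample predicted surfaces. A minimal numpy sketch of such a lattice on the unit sphere is shown below; the exact parametrisation used by NAGINI-3D may differ, so this is only an illustration of the sampling scheme.

```python
import numpy as np

def fibonacci_sphere(n: int) -> np.ndarray:
    """Sample n points quasi-uniformly on the unit sphere with a Fibonacci
    lattice (illustrative sketch). n must be odd, mirroring the constraint
    on the sampling size used for predicted surfaces during training."""
    if n % 2 == 0:
        raise ValueError("Fibonacci lattice sampling requires an odd number of points")
    golden = (1 + 5 ** 0.5) / 2
    i = np.arange(n)
    z = 1 - 2 * i / (n - 1)           # latitudes swept from +1 down to -1
    theta = 2 * np.pi * i / golden    # longitudes advanced by the golden ratio
    r = np.sqrt(np.clip(1 - z * z, 0, None))
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```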

Formatting dataset for training

The directory containing each dataset (training, validation, test) should be organized as follows:

directory_of_the_set
|--images     (directory containing the images of the set)
|--masks      (directory containing the masks, with the same name as the corresponding image)
|--samplings  (directory containing the output of the "format_dataset.py" script)
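A quick sanity check of this layout can be scripted. The helper below is illustrative and not part of the package; it assumes TIF files and that images and masks share the same file stems, as described above.

```python
from pathlib import Path

def check_dataset_layout(root: str) -> None:
    """Verify the images/masks/samplings layout and that every image
    has a mask with the same name (illustrative helper)."""
    base = Path(root)
    for sub in ("images", "masks", "samplings"):
        if not (base / sub).is_dir():
            raise FileNotFoundError(f"missing sub-directory: {base / sub}")
    images = {p.stem for p in (base / "images").glob("*.tif")}
    masks = {p.stem for p in (base / "masks").glob("*.tif")}
    if images != masks:
        # Symmetric difference = files present on one side only.
        raise ValueError(f"images/masks mismatch: {sorted(images ^ masks)}")
```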

Training a model

Edit the file configs/train.yaml, then launch the script train.py.

python train.py

If the wandb option is activated, you can follow the training logs on wandb.ai.

Running inference on new data

Run the file inference_on_dir.py:

python inference_on_dir.py -i <images directory> -o <directory to store the results> -m <directory containing the trained model and its config file> -s <bool: whether to apply a snake optimisation step after the network prediction>

Optional parameters:

  • -t <(float,float): probability threshold used to extract local maxima and NMS threshold used to remove duplicates>. If training finished correctly, its last step evaluates the best thresholds on the validation set; in that case, you do not need to provide this parameter.
  • -tt <(int,int,int): number of tiles along each dimension>. Set to (1,1,1) by default; useful for splitting images that are too big for your GPU/CPU.
  • -ot <bool: if True, apply an Otsu binarization of the image before snake optimization>. For sparse objects, this option drastically improves the results; for dense objects, keep it False.
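The package presumably relies on a library implementation of Otsu's method (e.g. scikit-image's `threshold_otsu`); the pure-numpy sketch below only illustrates the principle behind the -ot option: choose the threshold that maximizes the between-class variance of the intensity histogram.

```python
import numpy as np

def otsu_threshold(img: np.ndarray, nbins: int = 256) -> float:
    """Otsu's threshold (illustrative sketch): pick the histogram bin that
    maximizes the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    omega = np.cumsum(w)            # probability of the "background" class
    mu = np.cumsum(w * centers)     # cumulative mean of the "background" class
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    # NaNs occur where one class is empty; ignore them.
    return float(centers[np.nanargmax(sigma_b)])
```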

Dataset and pre-trained weights

To test the algorithm, we provide the CAPS dataset described in the article, together with the weights of a network trained on it.

Link to CAPS dataset.

Link to the network config file and weights after being trained on CAPS (+optimal thresholds used for inference).

Download files

Source Distribution

nagini3d-0.0.1.tar.gz (1.7 MB)

Built Distribution

nagini3d-0.0.1-py3-none-any.whl (42.0 kB)

File details

Details for the file nagini3d-0.0.1.tar.gz.

File metadata

  • Download URL: nagini3d-0.0.1.tar.gz
  • Size: 1.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.6

File hashes

Hashes for nagini3d-0.0.1.tar.gz:

  • SHA256: 73e096d794214b9738dfb430a65e312ad38f444e3618bfdda0a71ac2ef3aca99
  • MD5: f6bff9dff58e3f3d462af3255e507c46
  • BLAKE2b-256: d2ba0c53cb1f8faf904947fac87c2be6d7edcafe99ee271426de3960f47c0041

File details

Details for the file nagini3d-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: nagini3d-0.0.1-py3-none-any.whl
  • Size: 42.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.6

File hashes

Hashes for nagini3d-0.0.1-py3-none-any.whl:

  • SHA256: 9c0b8690aaa2e03f68c8b0e93598dbd914616af25222d6a8d22a796912738f46
  • MD5: 3b347c8c3afd23a0cc46daf66af5d870
  • BLAKE2b-256: 4796edab37799ab91c5d2f70b4ceb6a9a7a9e879a2dea0909207b603e4d6c40c
