Segmentation of 3D volumetric image data.

biomedisa

About

Biomedisa (https://biomedisa.info) is a free and easy-to-use open-source application for segmenting large 3D volumetric images such as CT and MRI scans, developed at The Australian National University CTLab. Biomedisa's smart interpolation of sparsely pre-segmented slices enables accurate semi-automated segmentation by considering the complete underlying image data. Additionally, Biomedisa enables deep learning for fully automated segmentation across similar samples and structures. It is compatible with segmentation tools like Amira/Avizo, ImageJ/Fiji, and 3D Slicer.

Lösel, P. D. et al. Introducing Biomedisa as an open-source online platform for biomedical image segmentation. Nat. Commun. 11, 5577 (2020). https://doi.org/10.1038/s41467-020-19303-w

Hardware Requirements

  • One or more NVIDIA, AMD, or Intel GPUs

Installation (command-line based)

Installation (3D Slicer extension)

Installation (browser based)

Download Data

  • Download test data from our gallery

Revisions

Smart Interpolation

Python example

from biomedisa.features.biomedisa_helper import load_data, save_data
from biomedisa.interpolation import smart_interpolation

# load data
img, _ = load_data('Downloads/trigonopterus.tif')
labels, header = load_data('Downloads/labels.trigonopterus_smart.am')

# run smart interpolation; smooth=100 additionally returns a smoothed result
results = smart_interpolation(img, labels, smooth=100)

# get results
regular_result = results['regular']
smooth_result = results['smooth']

# save results
save_data('Downloads/final.trigonopterus.am', regular_result, header=header)
save_data('Downloads/final.trigonopterus.smooth.am', smooth_result, header=header)

Command-line based

python -m biomedisa.interpolation C:\Users\%USERNAME%\Downloads\tumor.tif C:\Users\%USERNAME%\Downloads\labels.tumor.tif

If pre-segmentation is not exclusively in the XY plane:

python -m biomedisa.interpolation C:\Users\%USERNAME%\Downloads\tumor.tif C:\Users\%USERNAME%\Downloads\labels.tumor.tif --allaxis

Deep Learning

Python example (training)

from biomedisa.features.biomedisa_helper import load_data
from biomedisa.deeplearning import deep_learning

# load image data
img1, _ = load_data('Head1.am')
img2, _ = load_data('Head2.am')
img_data = [img1, img2]

# load label data and header information to be stored in the network file (optional)
label1, _ = load_data('Head1.labels.am')
label2, header, ext = load_data('Head2.labels.am',
        return_extension=True)
label_data = [label1, label2]

# load validation data (optional)
img3, _ = load_data('Head3.am')
img4, _ = load_data('Head4.am')
label3, _ = load_data('Head3.labels.am')
label4, _ = load_data('Head4.labels.am')
val_img_data = [img3, img4]
val_label_data = [label3, label4]

# train the network and save it as 'honeybees.h5'
deep_learning(img_data, label_data, train=True, batch_size=12,
        val_img_data=val_img_data, val_label_data=val_label_data,
        header=header, extension=ext, path_to_model='honeybees.h5')

Command-line based (training)

python -m biomedisa.deeplearning C:\Users\%USERNAME%\Downloads\training_heart C:\Users\%USERNAME%\Downloads\training_heart_labels -t

Monitor training progress using validation data:

python -m biomedisa.deeplearning C:\Users\%USERNAME%\Downloads\training_heart C:\Users\%USERNAME%\Downloads\training_heart_labels -t -vi=C:\Users\%USERNAME%\Downloads\val_img -vl=C:\Users\%USERNAME%\Downloads\val_labels

If you run into a ResourceExhaustedError (out of GPU memory, OOM), try a smaller batch size (e.g. -bs=12).

Python example (prediction)

from biomedisa.features.biomedisa_helper import load_data, save_data
from biomedisa.deeplearning import deep_learning

# load data
img, _ = load_data('Head5.am')

# predict the segmentation using the trained network
results = deep_learning(img, predict=True,
        path_to_model='honeybees.h5', batch_size=6)

# save result
save_data('final.Head5.am', results['regular'], results['header'])

Command-line based (prediction)

python -m biomedisa.deeplearning C:\Users\%USERNAME%\Downloads\testing_axial_crop_pat13.nii.gz C:\Users\%USERNAME%\Downloads\heart.h5

Particle Segmentation

Check out the preprint for more information. Download a test dataset from the paper (downsampled by a factor of 4):

wget https://biomedisa.info/media/images/large_particles_rescan_0_x4.tif
wget https://biomedisa.info/media/images/mask.large_particles_rescan_0_x4.tif

Using U-Net for implicit boundary detection:

Download a pretrained model:

wget https://biomedisa.info/media/Quartz/model_svl_step=2.h5

Instance segmentation of individual particles using implicit boundary detection:

python -m biomedisa.deeplearning large_particles_rescan_0_x4.tif model_svl_step=2.h5 --mask mask.large_particles_rescan_0_x4.tif

Using SAM backend (requires Biomedisa installation with PyTorch):

Install SAM:

python -m pip install git+https://github.com/facebookresearch/segment-anything.git

Download a pretrained model:

wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth

Instance segmentation of individual particles:

python -m biomedisa.deeplearning large_particles_rescan_0_x4.tif sam_vit_l_0b3195.pth --mask mask.large_particles_rescan_0_x4.tif

Mesh Generator

Python example

Create an STL mesh from a segmentation (label values are saved as attributes):

from biomedisa.features.biomedisa_helper import load_data, save_data
from biomedisa.mesh import get_voxel_spacing, save_mesh

# load segmentation
data, header, extension = load_data('final.Head5.am', return_extension=True)

# get voxel spacing
x_res, y_res, z_res = get_voxel_spacing(header, extension)
print(f'Voxel spacing: x_spacing, y_spacing, z_spacing = {x_res}, {y_res}, {z_res}')

# save stl file
save_mesh('final.Head5.stl', data, x_res, y_res, z_res, poly_reduction=0.9, smoothing_iterations=15)

Command-line based

python -m biomedisa.mesh 'final.Head5.am'

Biomedisa Features

Load and save data (such as Amira Mesh, TIFF, NRRD, NIfTI or DICOM)

For DICOM, PNG, or similar slice-based formats, the file path must reference either a directory or a ZIP file containing the image slices.

from biomedisa.features.biomedisa_helper import load_data, save_data

# load data as numpy array
data, header = load_data('temp.tif')

# save data (for TIFF, header=None)
save_data('temp.tif', data, header)

Resize data

from biomedisa.features.biomedisa_helper import img_resize

# resize image data
zsh, ysh, xsh = data.shape
new_zsh, new_ysh, new_xsh = zsh//2, ysh//2, xsh//2
data = img_resize(data, new_zsh, new_ysh, new_xsh)

# resize label data
label_data = img_resize(label_data, new_zsh, new_ysh, new_xsh, labels=True)
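The labels=True flag matters because ordinary interpolation would blend neighboring label values into classes that do not exist. A minimal nearest-neighbor sketch in plain NumPy (an illustration of the idea, not Biomedisa's implementation) shows why label data must be resized without interpolation:

```python
import numpy as np

def resize_nearest(data, new_zsh, new_ysh, new_xsh):
    """Resize a 3D array by nearest-neighbor sampling (introduces no new values)."""
    zsh, ysh, xsh = data.shape
    z = np.arange(new_zsh) * zsh // new_zsh
    y = np.arange(new_ysh) * ysh // new_ysh
    x = np.arange(new_xsh) * xsh // new_xsh
    return data[np.ix_(z, y, x)]  # open-mesh indexing picks the sampled voxels

labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[2:, 2:, 2:] = 3  # one label region with value 3

small = resize_nearest(labels, 2, 2, 2)
assert set(np.unique(small).tolist()) <= {0, 3}  # no blended values like 1 or 2
```

Linear interpolation on the same array could produce intermediate values such as 1 or 2, which would be meaningless as labels.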

Remove outliers and fill holes

from biomedisa.features.biomedisa_helper import clean, fill

# delete outliers smaller than 90% of the segment
label_data = clean(label_data, 0.9)

# fill holes
label_data = fill(label_data, 0.9)
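To see what outlier removal does conceptually, here is a rough sketch using scipy.ndimage (an illustration under the assumption that "outliers" are small disconnected islands; this is not Biomedisa's implementation): connected components whose size falls below a fraction of the largest component are discarded.

```python
import numpy as np
from scipy import ndimage

def remove_small_islands(mask, fraction):
    """Remove connected components smaller than `fraction` of the largest one."""
    labeled, n = ndimage.label(mask)  # label connected foreground regions
    if n == 0:
        return mask
    sizes = np.bincount(labeled.ravel())[1:]  # component sizes, background excluded
    keep = np.flatnonzero(sizes >= fraction * sizes.max()) + 1
    return np.isin(labeled, keep).astype(mask.dtype)

mask = np.zeros((1, 10, 10), dtype=np.uint8)
mask[0, 1:6, 1:6] = 1   # large 25-voxel object
mask[0, 8, 8] = 1       # single-voxel outlier

cleaned = remove_small_islands(mask, 0.9)
assert cleaned[0, 8, 8] == 0 and cleaned[0, 3, 3] == 1
```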

Accuracy assessment

from biomedisa.features.biomedisa_helper import Dice_score, ASSD

# Dice similarity coefficient (1 = perfect overlap)
dice = Dice_score(ground_truth, result)

# average symmetric surface distance (lower is better)
assd = ASSD(ground_truth, result)
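For reference, the Dice score of two masks A and B is 2|A∩B| / (|A| + |B|). A minimal NumPy version for binary masks (an illustration of the metric, not Biomedisa's Dice_score implementation) is:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient for two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((4, 4), dtype=np.uint8)
gt[:2] = 1        # 8 foreground pixels
pred = np.zeros((4, 4), dtype=np.uint8)
pred[:2, :2] = 1  # 4 foreground pixels, all inside gt

print(dice(gt, pred))  # 2*4 / (8+4) = 0.666...
```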

Authors

  • Philipp D. Lösel

See also the list of contributors who participated in this project.

FAQ

Frequently asked questions can be found at: https://biomedisa.info/faq/.

Citation

If you use Biomedisa or the data, please cite the following paper:

Lösel, P. D. et al. Introducing Biomedisa as an open-source online platform for biomedical image segmentation. Nat. Commun. 11, 5577 (2020). https://doi.org/10.1038/s41467-020-19303-w

If you use Biomedisa's Deep Learning, you may also cite:

Lösel, P. D. et al. Natural variability in bee brain size and symmetry revealed by micro-CT imaging and deep learning. PLoS Comput. Biol. 19, e1011529 (2023). https://doi.org/10.1371/journal.pcbi.1011529

If you use Biomedisa's Smart Interpolation, you can also cite the initial description of this method:

Lösel, P. & Heuveline, V. Enhancing a diffusion algorithm for 4D image segmentation using local information. Proc. SPIE 9784, 97842L (2016). https://biomedisa.info/media/97842L.pdf

If you use Biomedisa's Particle Separation or Self-Validated Learning, please cite the following preprint:

Lösel, P. D. et al. Self-validated learning for particle separation: A correctness-based self-training framework without human labels. Preprint at https://doi.org/10.48550/arXiv.2508.16224 (2025).

License

This project is covered under the EUROPEAN UNION PUBLIC LICENCE v. 1.2 (EUPL).
