EfficientV2-UNet
This package is a U-Net implementation of the EfficientNetV2, using TensorFlow.
EfficientNetV2 improves training speed and parameter efficiency. This implementation also uses ImageNet pre-trained weights for training new models.
It is intended for segmentation of histological RGB images that are not saved in a pyramidal file format (WSI).
The output segmentation is foreground / background; multi-class segmentation is not (yet) possible.
It works on TIF images (and probably also PNG).
Installation
- Create a python environment (e.g. with conda; python 3.9 and 3.10 work), in a CLI:
  conda create --name ev2unet_env python=3.9
- Activate the environment:
  conda activate ev2unet_env
- GPU support for Windows (non-GPU installations are not extensively tested):
  a. Install the cudatoolkit and cudnn, e.g. with conda:
  conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
  - Windows requires a specific version of TensorFlow (v2.10.1; higher versions are not supported on Windows), which will be installed by this package.
  - Linux GPU support and Apple Silicon support will be resolved by installing this library.
- Install this library:
  pip install efficientv2-unet
- Verify the GPU support:
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
  >> lists your active GPU(s), or
  python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
  >> prints True if the GPU is enabled, or
  ev2unet --version
  >> prints the versions and whether GPU support is on.
Data preparation
Masks should have background values of 0 and foreground values of 1.
At least 3 image/mask TIF pairs are required to train a model; images and masks must be located in separate folders, with matching file names (a small consistency check is sketched after the folder layout below).
Folder structure:
├── images
│   ├── image1.tif
│   ├── image2.tif
│   ├── image3.tif
│   └── ...
└── masks
    ├── image1.tif
    ├── image2.tif
    ├── image3.tif
    └── ...
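Before training, it can help to verify that every image has a matching mask, that the images are RGB, and that the masks contain only 0 and 1. The following is a minimal sketch of such a check, assuming tifffile and numpy are installed; the folder names follow the layout above.

```python
from pathlib import Path

import numpy as np
import tifffile

images_dir = Path("images")
masks_dir = Path("masks")

for image_path in sorted(images_dir.glob("*.tif")):
    mask_path = masks_dir / image_path.name
    if not mask_path.exists():
        print(f"Missing mask for {image_path.name}")
        continue

    image = tifffile.imread(image_path)
    mask = tifffile.imread(mask_path)

    # Images are expected to be RGB (height x width x 3).
    if image.ndim != 3 or image.shape[-1] != 3:
        print(f"{image_path.name}: not an RGB image, shape {image.shape}")

    # Masks should contain only background (0) and foreground (1) values.
    values = np.unique(mask)
    if not np.isin(values, [0, 1]).all():
        print(f"{mask_path.name}: unexpected mask values {values}")
```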
Training a model will split the data into train, validation and test images (by default 70%, 15% and 15%, respectively), and the images will be moved to corresponding sub-folders.
Training is performed not on the full images but on non-overlapping tiles, which will be saved into corresponding sub-folders.
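The exact tile size used by the package is not specified here; the sketch below only illustrates the idea of cutting an image into non-overlapping tiles, using a hypothetical tile size of 256 pixels and skipping incomplete edge tiles.

```python
import numpy as np


def tile_image(image: np.ndarray, tile_size: int = 256) -> list[np.ndarray]:
    """Cut an image into non-overlapping tiles (illustration only).

    The tile size is a hypothetical example; edge regions that do not
    fill a complete tile are skipped in this sketch.
    """
    tiles = []
    height, width = image.shape[:2]
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return tiles
```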
Usage
Command-line:
ev2unet --help
# train example:
ev2unet --train --images path/to/images --masks path/to/masks --basedir . --name myUNetName --basemodel b2 --epochs 50 --train_batch_size 32
# predict example:
ev2unet --predict --dir path/to/images --model ./models/myUnetName/myUNetName.h5 --resolution 1 --threshold 0.5
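The --threshold option binarizes the predicted probabilities into foreground / background. The snippet below is only an illustration of that step, assuming a prediction saved as a floating-point probability map in [0, 1]; the file names are hypothetical.

```python
import numpy as np
import tifffile

# Hypothetical file name; assumes the prediction was saved as a
# floating-point probability map in [0, 1].
probabilities = tifffile.imread("prediction.tif")

# Pixels above the threshold become foreground (1), the rest background (0).
binary_mask = (probabilities > 0.5).astype(np.uint8)
tifffile.imwrite("prediction_binary.tif", binary_mask)
```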
Jupyter notebooks
Examples are also available from this repository.
QuPath extension
Get the qupath-extension-efficientv2unet!
With this QuPath extension you can easily create training data and train a model via the QuPath GUI (or via scripts), and you can also use the GUI or a script to predict.