Pixel and label classification using OpenCL-based Random Forest Classifiers
Project description
napari-accelerated-pixel-and-object-classification (APOC)
clesperanto meets scikit-learn to classify pixels and objects in images, on a GPU using OpenCL in napari.
The processed example image maize_clsm.tif is licensed by David Legland under the CC-BY 4.0 license.
For using the accelerated pixel and object classifiers in python, check out apoc. Training classifiers from pairs of image and label-mask folders is explained in this notebook. For executing APOC classifiers in Fiji using clij2 please read the documentation of the corresponding Fiji plugin.
Usage
Object and Semantic Segmentation
Starting point is napari with at least one image layer and one labels layer (your annotation).
You find Object and Semantic Segmentation in the Tools > Segmentation / labeling menu. When starting these tools, the following graphical user interface will show up.
- Choose one or multiple images to train on. These images will be considered as multiple channels. Thus, they need to be spatially correlated. Training from multiple images showing different scenes is not (yet) supported from the graphical user interface. Check out this notebook if you want to train from multiple image-annotation pairs.
- Select a file where the classifier should be saved. If the file exists already, it will be overwritten.
- Select the ground-truth annotation labels layer.
- Select which label corresponds to foreground (not available in Semantic Segmentation)
- Select the feature images that should be considered for segmentation. If the segmentation appears pixelated, try increasing the selected sigma values and untick Consider original image.
- Tree depth and number of trees allow you to fine-tune how to deal with manifold regions of different characteristics. The higher these numbers, the longer segmentation will take. In case you use many images and many features, a high depth and number of trees might be necessary. (See also max_depth and n_estimators in the scikit-learn documentation of the Random Forest Classifier.)
- The estimation of memory consumption allows you to tune the configuration to your GPU hardware. Also consider the GPU hardware of others who want to use your classifier.
- Click on Run when you're done with configuring. If the segmentation doesn't fit after the first execution, consider fine-tuning the ground-truth annotation and try again.
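The tree depth and number-of-trees settings correspond directly to scikit-learn's `max_depth` and `n_estimators` parameters. As a CPU-only illustration of how these parameters enter pixel classification (using scikit-learn directly rather than APOC's OpenCL implementation, with a simplified feature stack assumed here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

# Synthetic image: bright blob on dark background
image = np.zeros((64, 64), dtype=float)
image[20:40, 20:40] = 1.0
image = gaussian_filter(image, sigma=1)

# Feature stack: original intensity plus a Gaussian blur (one column per feature)
features = np.stack([image.ravel(),
                     gaussian_filter(image, sigma=3).ravel()], axis=1)

# Sparse annotation: a few background (1) and foreground (2) pixels
annotation = np.zeros((64, 64), dtype=int)
annotation[2:5, 2:5] = 1      # background scribble
annotation[28:31, 28:31] = 2  # foreground scribble
mask = annotation.ravel() > 0

# max_depth and n_estimators are the two knobs exposed in the GUI
clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
clf.fit(features[mask], annotation.ravel()[mask])

segmentation = clf.predict(features).reshape(image.shape)
```

Deeper trees and more estimators can separate more heterogeneous regions, at the cost of longer prediction times; APOC applies the same trade-off on the GPU.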
A successful segmentation can for example look like this:
After your classifier has been trained successfully, click on the "Application / Prediction" tab. If you apply the classifier again, python code will be generated. You can use this code for example to apply the same classifier to a folder of images. If you're new to this, check out this notebook.
A pre-trained classifier can be applied from scripts as shown in the example notebook or from the menu Tools > Segmentation / labeling > Object segmentation (apply pretrained, APOC).
The tools for generating semantic segmentations and probability maps (Tools > Filtering menu) work analogously.
Object classification
Click the menu Tools > Segmentation post-processing > Object classification (APOC).
This user interface will be shown:
- The image layer will be used for intensity based feature extraction (see below).
- The labels layer should contain the segmentation of the objects to be classified. You can use the Object Segmenter explained above to create this layer.
- The annotation layer should contain manual annotations of object classes. You can draw lines crossing single or multiple objects of the same kind. For example, draw a line through some elongated objects with label "1" and another line through some rather roundish objects with label "2". Where these lines touch the background, they will be ignored.
- Tree depth and number of trees allow you to fine-tune how to deal with manifold objects of different characteristics. The higher these numbers, the longer classification will take. In case you use many features, a high depth and number of trees might be necessary. (See also max_depth and n_estimators in the scikit-learn documentation of the Random Forest Classifier.)
- Select the right features for training. For example, for differentiating objects according to their shape as suggested above, select "shape". The features are extracted using clEsperanto and are shown by example in this notebook.
- Click on the Run button. If classification doesn't perform well in the first attempt, try changing the selected features.
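The workflow above, training from a sparse annotation of object classes on shape features, can be mimicked on the CPU with scikit-image's `regionprops` and a scikit-learn Random Forest. This is an illustrative analogy with hand-picked shape features, not APOC's OpenCL feature extraction:

```python
import numpy as np
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestClassifier

# Label image with two roundish (1, 2) and two elongated (3, 4) objects
labels = np.zeros((40, 60), dtype=int)
labels[5:13, 5:13] = 1     # roundish
labels[5:13, 20:28] = 2    # roundish
labels[25:27, 5:35] = 3    # elongated
labels[32:34, 5:35] = 4    # elongated

# One row of shape features per object, loosely analogous to APOC's "shape" group
def shape_features(label_image):
    props = regionprops(label_image)
    return np.array([[p.area,
                      p.eccentricity,
                      p.major_axis_length / p.minor_axis_length]
                     for p in props])

features = shape_features(labels)

# Sparse annotation: only object 1 (class 1, round) and object 3 (class 2,
# elongated) were "crossed by a line"; the rest remain unlabeled
X_train = features[[0, 2]]
y_train = [1, 2]

clf = RandomForestClassifier(n_estimators=10, max_depth=2,
                             bootstrap=False, random_state=0)
clf.fit(X_train, y_train)

predicted = clf.predict(features)  # one class per object
```

The trained forest assigns the unlabeled objects 2 and 4 to the classes of the annotated objects they resemble, which is exactly what the Object classification tool does per labeled object on the GPU.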
If classification worked well, it may for example look like this. Note the two thick lines which were drawn to annotate elongated and roundish objects with brown and cyan:
A pre-trained model can later be applied from scripts as shown in the example notebook or using the menu Tools > Segmentation post-processing > Object classification (apply pretrained, APOC).
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
Installation
It is recommended to install the plugin in a conda environment. To do so, install conda first, e.g. Miniconda. If you have never worked with conda before, reading this short introduction might be helpful.
Optional: Setup a fresh conda environment, activate it and install napari:
conda create --name napari_apoc python=3.9
conda activate napari_apoc
conda install napari
If your conda environment is set up, you can install napari-accelerated-pixel-and-object-classification
using pip. Note: you need pyopencl first.
conda install -c conda-forge pyopencl
pip install napari-accelerated-pixel-and-object-classification
Contributing
Contributions, feedback and suggestions are very welcome. Tests can be run with tox, please ensure the coverage at least stays the same before you submit a pull request.
Similar napari plugins
There are other plugins with similar functionality for interactive classification of pixels and objects.
License
Distributed under the terms of the BSD-3 license, "napari-accelerated-pixel-and-object-classification" is free and open source software.
Issues
If you encounter any problems, please open a thread on image.sc along with a detailed description and tag @haesleinhuepf.
Hashes for napari-accelerated-pixel-and-object-classification-0.6.6.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 053924fef4b7b973113f9b60766e2869c7c2c58a501d1bed1b920256f13a80d8
MD5 | 74603a063f29bbea12d49cd4881589ed
BLAKE2b-256 | 274d57a32cea49f9974321960aa52243dcbe7462697a0b7b01601ec128b43c60

Hashes for napari_accelerated_pixel_and_object_classification-0.6.6-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 31b50cc1fc46bc66965f7422a4241b0beae20f2fa1f1155a26a5d0d40a110a15
MD5 | d2c46ce47fdf8da2915ad1e57b8216cd
BLAKE2b-256 | a8b9e22cd1301f3262f025ffd5fec3643dc092542dbd47af8489c74788b2191d