Pixel and label classification using OpenCL-based Random Forest Classifiers
napari-accelerated-pixel-and-object-classification (APOC)
clEsperanto meets scikit-learn
An experimental OpenCL-based Random Forest Classifier for pixel and labeled object classification in napari.
The processed example image maize_clsm.tif is licensed by David Legland under the CC-BY 4.0 license.
For using the accelerated pixel and object classifiers in python, check out apoc.
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
Installation
You can install napari-accelerated-pixel-and-object-classification via pip. Note: you also need pyopencl.
conda install pyopencl
pip install napari-accelerated-pixel-and-object-classification
In case of issues in napari, make sure these dependencies are installed properly:
pip install pyclesperanto_prototype
pip install apoc
Usage
Usage: Object and Semantic Segmentation
Starting point is napari with at least one image layer and one labels layer (your annotation).
You find Object and Semantic Segmentation in the main plugins menu:
When clicking one of the first two, the following graphical user interface will show up.
- Choose one or multiple images to train on. These images will be considered as multiple channels. Thus, they need to be spatially correlated. Training from multiple images showing different scenes is not (yet) supported from the graphical user interface. Check out this notebook if you want to train from multiple image-annotation pairs.
- Select a file where the classifier should be saved. If the file exists already, it will be overwritten.
- Select the ground-truth annotation labels layer.
- Select which label corresponds to foreground (not available in Semantic Segmentation).
- Select the feature images that should be considered for segmentation. If the segmentation appears pixelated, try increasing the selected sigma values and unticking "Consider original image".
- Tree depth and number of trees allow you to fine-tune how to deal with manifold regions of different characteristics. The higher these numbers, the longer segmentation will take. In case you use many images and many features, a high tree depth and number of trees might be necessary. (See also max_depth and n_estimators in the scikit-learn documentation of the Random Forest Classifier.)
- The estimation of memory consumption allows you to tune the configuration to your GPU hardware. Also consider the GPU hardware of others who want to use your classifier.
- Click on Run when you're done with configuring. If the segmentation doesn't fit after the first execution, consider fine-tuning the ground-truth annotation and try again.
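The tree depth and number of trees correspond to scikit-learn's max_depth and n_estimators parameters. The following CPU-only sketch illustrates their role with plain scikit-learn on synthetic pixel features; it shows the underlying concept, not the plugin's GPU implementation, and all data in it is made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic "feature stack": 100 pixels with 3 feature values each,
# where foreground pixels (class 1) are brighter than background (class 0).
rng = np.random.default_rng(42)
background = rng.normal(0.2, 0.05, size=(50, 3))
foreground = rng.normal(0.8, 0.05, size=(50, 3))
X = np.vstack([background, foreground])
y = np.array([0] * 50 + [1] * 50)

# max_depth and n_estimators are the "tree depth" and "number of trees"
# from the plugin's user interface.
clf = RandomForestClassifier(max_depth=2, n_estimators=10, random_state=0)
clf.fit(X, y)

# Predict two unseen pixels: one background-like, one foreground-like.
prediction = clf.predict([[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]])
print(prediction)  # [0 1]
```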
A successful segmentation can for example look like this:
After your classifier has been trained successfully, click on the "Application / Prediction" tab. If you apply the classifier again, python code will be generated. You can use this code for example to apply the same classifier to a folder of images. If you're new to this, check out this notebook.
Usage: Object classification
Click the menu Plugins > Segmentation (Accelerated Pixel and Object Classification) > Object classifier.
This user interface will be shown:
- The image layer will be used for intensity based feature extraction (see below).
- The labels layer should contain the segmentation of objects that should be classified. You can use the Object Segmenter explained above to create this layer.
- The annotation layer should contain manual annotations of object classes. You can draw lines crossing single and multiple objects of the same kind. For example draw a line through some elongated objects with label "1" and another line through some rather roundish objects with label "2". If these lines touch the background, that will be ignored.
- Tree depth and number of trees allow you to fine-tune how to deal with manifold objects of different characteristics. The higher these numbers, the longer classification will take. In case you use many features, a high tree depth and number of trees might be necessary. (See also max_depth and n_estimators in the scikit-learn documentation of the Random Forest Classifier.)
- Select the right features for training. For example, for differentiating objects according to their shape as suggested above, select "shape". The features are extracted using clEsperanto and are shown by example in this notebook.
- Click on the Run button. If classification doesn't perform well in the first attempt, try changing the selected features.
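Conceptually, object classification extracts per-object features and feeds them to a Random Forest. The sketch below imitates this with hand-rolled numpy shape features and scikit-learn; the plugin itself computes its features on the GPU via clEsperanto, so the feature definitions here (area, bounding-box aspect ratio) are illustrative assumptions only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shape_features(label_image):
    """Per-object area and bounding-box aspect ratio (a simple stand-in
    for the clEsperanto shape features the plugin actually uses)."""
    features = []
    for label in range(1, int(label_image.max()) + 1):
        ys, xs = np.nonzero(label_image == label)
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        aspect = max(height, width) / min(height, width)
        features.append([len(ys), aspect])
    return np.array(features)

# Tiny label image: object 1 is roundish, object 2 is elongated.
labels = np.zeros((8, 10), dtype=int)
labels[1:4, 1:4] = 1   # 3x3 blob  -> aspect ratio 1
labels[6:7, 1:9] = 2   # 1x8 line  -> aspect ratio 8

X = shape_features(labels)
y = [0, 1]  # class 0 = roundish, class 1 = elongated (the manual annotation)

# bootstrap=False keeps this tiny two-object example deterministic.
clf = RandomForestClassifier(n_estimators=10, bootstrap=False,
                             random_state=0).fit(X, y)

# A new, somewhat elongated object is assigned the "elongated" class.
print(clf.predict([[6, 6.0]]))  # [1]
```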
If classification worked well, it may for example look like this. Note the two thick lines which were drawn to annotate elongated and roundish objects with brown and cyan:
Usage: Under-the-hood functions
Open an image in napari and add a labels layer. Annotate foreground and background with two different label identifiers. You can also add a third label, e.g. for a membrane-like region in between, to improve segmentation quality.
Click the menu Plugins > Segmentation (Accelerated Pixel and Object Classification) > Train pixel classifier.
Consider changing the featureset. There are three options, for selecting
- small (about 1 pixel sized) objects,
- medium (about 5 pixel sized) objects, and
- large (about 25 pixel sized) objects.

Make sure the right image and annotation layers are selected and click on Run.
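Under the hood, each featureset corresponds to blur-based features at increasing sigma. Here is a rough CPU illustration with scipy; the actual feature images are computed on the GPU via clEsperanto, and the sigma values below are illustrative guesses matched to the three object sizes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A synthetic stand-in for a microscopy image.
image = np.random.default_rng(0).random((64, 64)).astype(np.float32)

# Feature images at sigmas loosely matching the three featureset scales:
# ~1-pixel, ~5-pixel and ~25-pixel objects (illustrative values).
feature_stack = np.stack([
    image,                            # original intensity
    gaussian_filter(image, sigma=1),
    gaussian_filter(image, sigma=5),
    gaussian_filter(image, sigma=25),
])
print(feature_stack.shape)  # (4, 64, 64)
```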
The classifier was saved as temp.cl to disk. You can later re-use it by clicking the menu Plugins > OpenCL Random Forest Classifiers > Predict pixel classifier.
Optional: Hide the annotation layer.
Click the menu Plugins > Segmentation (Accelerated Pixel and Object Classification) > Connected Component Labeling.
Make sure the right labels layer is selected. It is supposed to be the result layer from the pixel classification.
Select the object class identifier you used for annotating objects; that's the intensity you drew on objects in the annotation layer.
Hint: If you want to analyse touching neighbors afterwards, activate the "fill gaps between labels" checkbox.
Click on the Run button.
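Connected component labeling turns the binary class mask into a labels image where every connected blob receives its own integer. The plugin runs this step on the GPU via clEsperanto; the same operation can be sketched on the CPU with scipy:

```python
import numpy as np
from scipy.ndimage import label

# Binary mask with two separate blobs, e.g. a pixel-classification
# result where the chosen object class identifier was 1.
binary = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
])

# Default connectivity: pixels touching along edges belong together.
labeled, count = label(binary)
print(count)  # 2 separate objects
print(labeled)
```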
Optional: Hide the pixel classification result layer. Change the opacity of the connected component labels layer.
Add a new labels layer and annotate different object classes by drawing lines through them. In the following example objects with different size and shape were annotated in three classes:
- round, small
- round, large
- elongated
Click the menu Plugins > Segmentation (Accelerated Pixel and Object Classification) > Train object classifier. Select the right layers for training.
The labels layer should be the result from connected components labeling.
The annotation layer should be the just annotated object classes layer.
Select the right features for training. Click on the Run button.
After training, the classifier will be stored to disk in the file you specified.
You can later re-use it by clicking the menu Plugins > Segmentation (Accelerated Pixel and Object Classification) > Predict label classifier.
This is an experimental napari plugin. Feedback is very welcome!
Contributing
Contributions are very welcome. Tests can be run with tox, please ensure the coverage at least stays the same before you submit a pull request.
License
Distributed under the terms of the BSD-3 license, "napari-accelerated-pixel-and-object-classification" is free and open source software.
Issues
If you encounter any problems, please open a thread on image.sc along with a detailed description and tag @haesleinhuepf.