
oneat


Action classification for TZYX/TYX shaped images, static classification for TYX/YX shaped images


This CAPED package was generated with Cookiecutter using @caped's cookiecutter-template.

Installation

You can install oneat via pip:

pip install oneat

To install the latest development version:

pip install git+https://github.com/Kapoorlabs-CAPED/oneat.git

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure that coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the BSD-3 license, "oneat" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Algorithm and Code for finding mitotic cells in TZYX datasets

Program structure

We use the Hydra library to separate the parameters of the code from the file that contains the runnable code. This minimizes interaction with the actual script/file/interactive code: users do not have to change any lines to specify paths, filenames or parameters, and instead modify them in the configuration file.
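For orientation only, here is a minimal sketch of what a Hydra-driven entry point can look like; the conf/config.yaml location and the field names below are assumptions for this sketch, not necessarily oneat's actual ones:

import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # All paths and parameters come from the configuration file, so the
    # script itself never has to be edited by the user.
    print(OmegaConf.to_yaml(cfg))              # show the resolved configuration
    epochs = cfg.params_train.epochs           # hypothetical field names
    npz_path = cfg.defaults.train_data_path
    # ... build the chosen training class and start training here ...

if __name__ == "__main__":
    main()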

The params_train section contains the training hyperparameters of the network; these parameters are set once before training and are not learned during the training process, hence the name hyperparameters.

The params_predict section contains the parameters needed for model prediction, such as the number of tiles and the event probability and confidence thresholds used to veto events that fall below them.

The trainclass entry specifies the training class used by oneat and is input as a string: for VollNet (ResNet based) the training class is NEATVollNet, and for DenseVollNet (DenseNet based) it is DenseVollNet.

The defaults section provides the filenames and the paths. Depending on where the data lives, you only have to select the path file, which is supplied for local paths, OVH server paths and AWS paths. This may be a bit of gymnastics in the beginning, but once the paths and files are set, only parameters need to be changed during regular usage of the scripts and interactive programs.
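To make the grouping above concrete, here is a hedged sketch of such a configuration built with OmegaConf; the keys and values are illustrative and may not match the configuration files shipped with oneat:

from omegaconf import OmegaConf

# Hypothetical configuration mirroring the sections described above.
cfg = OmegaConf.create({
    "params_train": {"epochs": 100, "learning_rate": 1e-4, "batch_size": 8},        # hyperparameters
    "params_predict": {"n_tiles": [1, 2, 2], "event_threshold": 0.9, "event_confidence": 0.8},
    "trainclass": "NEATVollNet",                                                     # or "DenseVollNet"
    "defaults": {"base_dir": "/path/to/data", "npz_name": "training_data.npz"},      # paths and filenames
})
print(OmegaConf.to_yaml(cfg))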

The training data

The training data for the 3D + time dataset was made by clicking on the ZYX locations of mitotic cells (blue points layer) and non-mitotic cells (red points layer) using an interactive Napari widget.

We also have the segmentation image for the raw data used to create the clicks; the segmentation label at each click location is used to refine the location of the clicked cell and to obtain its height, width and depth, which go into the training label.
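A minimal sketch of this refinement step, using scikit-image region properties and hypothetical variable names (the actual oneat training-data generator may differ):

import numpy as np
from skimage.measure import regionprops

def refine_click(segmentation_zyx, click_zyx):
    # Look up the segmentation label under the clicked voxel.
    z, y, x = (int(round(c)) for c in click_zyx)
    label = segmentation_zyx[z, y, x]
    if label == 0:
        return click_zyx, None                        # click fell on background; keep it as-is
    # Use the labelled region to re-centre the click and measure the cell extent.
    region = regionprops((segmentation_zyx == label).astype(np.uint8))[0]
    zmin, ymin, xmin, zmax, ymax, xmax = region.bbox  # 3D bounding box
    depth, height, width = zmax - zmin, ymax - ymin, xmax - xmin
    return region.centroid, (height, width, depth)    # refined (z, y, x) and cell size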

Using a custom training data creation script

The training data consists of 1 time frame before and after the division/mitosis of the cell, 4 Z planes before and 4 Z planes after the clicked Z location, and 32 pixels around the click location in X and Y. In this fashion the non-mitotic and mitotic cells are always at the center, spatially and temporally, which makes the learning task easier. The shape of the training data in TZYX is hence (3, 8, 64, 64), and the training label consists of the class label followed by 0.5, 0.5, 0.5, 0.5, height, width, depth, confidence. The value 0.5 signifies the spatial and temporal centering of the cell.
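For illustration only, a hedged numpy sketch of how such a patch and label could be assembled around a refined click (boundary handling is omitted; the real script is more careful):

import numpy as np

def make_training_pair(image_tzyx, t, z, y, x, class_label, height, width, depth, confidence=1.0):
    # 1 frame before/after in T, 4 planes before/after in Z, 32 pixels around the click in Y and X.
    patch = image_tzyx[t - 1:t + 2, z - 4:z + 4, y - 32:y + 32, x - 32:x + 32]
    assert patch.shape == (3, 8, 64, 64)
    # Class label, the normalized centre (0.5 in T, Z, Y and X because the cell is
    # centred in the patch), the cell extent and a confidence score.
    label = np.array([class_label, 0.5, 0.5, 0.5, 0.5, height, width, depth, confidence])
    return patch, label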

ResNet and DenseNet based VollNet and DenseVollNet architectures

After the training data is saved as an npz file, the training can be done using a ResNet- or DenseNet-based network; in our experiments one of the two architectures performed better than the other. See the ResNet implementation and the DenseNet implementation.

We have fully convolutional implementations of both architectures, so training can be done on data of our chosen size and shape, while at the prediction stage we benefit from the convolutionalization of the sliding-window operation: the network finds the locations of the mitotic cells using the indices that the prediction function provides to map the predictions to their proper spatial and temporal locations in input data of arbitrary size and shape.
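As a hedged sketch of that idea (the strides and the grid layout are hypothetical, not oneat's actual values), mapping a fully convolutional output grid back to input coordinates could look like this:

import numpy as np

def grid_to_coordinates(prediction_grid, strides=(1, 4, 8, 8), threshold=0.9):
    # prediction_grid holds an event probability per output cell in (T, Z, Y, X) order.
    # Each output cell corresponds to a sliding window in the input, so multiplying the
    # cell index by the stride recovers the window's location in the full-size movie.
    events = []
    for idx in zip(*np.where(prediction_grid >= threshold)):
        t, z, y, x = (int(i) * s for i, s in zip(idx, strides))
        events.append((t, z, y, x, float(prediction_grid[idx])))
    return events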

Program to train the model on a GPU based machine

Using this script and setting the training parameters in the configuration file, we train the model with the chosen hyperparameters.

Visualizing training loss and accuracy with Tensorboard

Oneat supports visualization of the training loss, accuracy and other training metrics using TensorBoard.

TensorBoard can be started from the same directory from which you launched the training script or interactive training program. Inside that folder you will find an outputs directory containing a timestamped directory of TensorBoard logs. If, for example, that directory is named 08-21-02/, launch TensorBoard from inside the outputs directory with: tensorboard --logdir 08-21-02/

TensorBoard will print a localhost URL to copy and paste into the browser, for example http://localhost:6007/. Clicking on the Scalars menu item shows the loss and accuracy plots over the training epochs. You can refresh the page to update the curves if this does not happen automatically.

Model Evaluation and Prediction

Once the model has been trained, we can evaluate its performance with metrics. The metrics measure the model performance on ground truth data, which consists of the raw ground truth image, its corresponding segmentation image, and a csv file containing the ground truth locations of the mitotic cells as TZYX columns.

To evaluate the model performance, we run model prediction on the ground truth raw image, using its segmentation image, in this script. The prediction program generates a csv file containing the locations of mitotic cells along with the probability, confidence score and radius of each cell, which are used to create bounding boxes around the cell locations.

Using the ground truth and prediction csv files, we compute the true positive, false positive and false negative detection rates using this script.
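A minimal sketch of such a comparison, assuming both csv files have T, Z, Y, X columns and using a simple time/distance tolerance for matching (the thresholds and matching rule in the actual evaluation script may differ):

import numpy as np
import pandas as pd

def detection_counts(gt_csv, pred_csv, space_tol=10.0, time_tol=1):
    gt = pd.read_csv(gt_csv)[["T", "Z", "Y", "X"]].to_numpy(dtype=float)
    pred = pd.read_csv(pred_csv)[["T", "Z", "Y", "X"]].to_numpy(dtype=float)
    matched = np.zeros(len(pred), dtype=bool)
    tp = 0
    for t, z, y, x in gt:
        # A ground-truth mitosis counts as detected if an unmatched prediction lies
        # within time_tol frames and space_tol pixels of it.
        close_in_time = np.abs(pred[:, 0] - t) <= time_tol
        close_in_space = np.linalg.norm(pred[:, 1:] - np.array([z, y, x]), axis=1) <= space_tol
        candidates = np.where(close_in_time & close_in_space & ~matched)[0]
        if len(candidates):
            matched[candidates[0]] = True
            tp += 1
    fp = int((~matched).sum())      # predictions with no ground-truth match
    fn = len(gt) - tp               # ground-truth events that were missed
    return tp, fp, fn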
