

Project description

Cellpose

A generalist algorithm for cell and nucleus segmentation.

This code was written by Carsen Stringer and Marius Pachitariu. To learn about Cellpose, read the paper or watch the talk. For support, please open an issue.

You can quickly try out Cellpose on the website first (some features disabled). If you want to improve Cellpose for yourself and for everyone else, please consider contributing manual segmentations for a few of your images via the built-in GUI (see instructions below).

Installation

We recommend installing an Anaconda distribution of Python -- choose Python 3.7 and your operating system. Note that you might need to use an Anaconda prompt if you did not add Anaconda to the path. From your base environment (or from a new environment) in an Anaconda prompt / command prompt, run

pip install cellpose

Alternatively, you can use the included environment file if you'd like a cellpose-specific environment. This is recommended if you have problems with the pip install. Please follow these instructions:

  1. Download the environment.yml file from the repository. You can do this by cloning the repository, or copy-pasting the text from the file into a text document on your local computer.
  2. Open an anaconda prompt / command prompt with conda for python 3 in the path
  3. Change directories to where the environment.yml is and run conda env create -f environment.yml
  4. To activate this new environment, run conda activate cellpose
  5. You should see (cellpose) on the left side of the terminal line. Now run python -m cellpose and you're all set.

To upgrade cellpose, run the following in the environment:

pip install cellpose --upgrade

If you have an older cellpose environment you can remove it with conda env remove -n cellpose before creating a new one.

Note that you will always have to run conda activate cellpose before you run cellpose. If you want to run Jupyter notebooks in this environment, also run conda install jupyter.

Common issues

If you receive the error: Illegal instruction (core dumped), then likely mxnet does not recognize your MKL version. Please uninstall and reinstall mxnet without mkl:

pip uninstall mxnet-mkl
pip uninstall mxnet
pip install mxnet==1.4.0

If you receive the error: No module named PyQt5.sip, then try uninstalling and reinstalling pyqt5:

pip uninstall pyqt5 pyqt5-tools
pip install pyqt5 pyqt5-tools pyqt5.sip

If you receive an error associated with matplotlib, try upgrading it:

pip install matplotlib --upgrade

If you are on Mac OS Yosemite, PyQt does not work and you will not be able to use the graphical interface for cellpose. More recent versions of Mac OS are fine. The software has been heavily tested on Windows 10 and Ubuntu 18.04, and less well tested on Mac OS. Please post an issue if you have installation problems.

CUDA version

If you plan on running many images, you may want to install a GPU version of mxnet. We recommend CUDA 10.0 or greater. Follow the mxnet installation instructions for your CUDA version.

Before installing the GPU version, remove the CPU version:

pip uninstall mxnet-mkl
pip uninstall mxnet

When upgrading cellpose, you will want to ignore dependencies (so that mxnet-mkl does not install):

pip install --no-deps cellpose --upgrade
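
To confirm that the GPU build works, you can run a quick check in Python (a minimal sketch, assuming mxnet imports and a CUDA device is visible to it):

import mxnet as mx

# try to allocate a small array on the first GPU; this raises an MXNetError
# if only the CPU build is installed or no CUDA device is found
try:
    a = mx.nd.zeros((1,), ctx=mx.gpu(0))
    a.wait_to_read()  # force the asynchronous allocation to actually run
    print('GPU mxnet is working')
except mx.MXNetError:
    print('GPU not available -- check the mxnet/CUDA installation')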

Installation of github version

Follow the steps above to install the dependencies. Then, in the cloned github repository, run pip install -e . to install the github version. If you want to go back to the pip version of cellpose, run pip install cellpose.

Running cellpose

The quickest way to start is to open the GUI from a command line terminal. You might need to open an anaconda prompt if you did not add anaconda to the path:

python -m cellpose

The first time cellpose runs, it downloads the latest available trained model weights from the website.

You can now drag and drop any images (*.tif, *.png, *.jpg, *.gif) into the GUI and run Cellpose, and/or manually segment them. When the GUI is processing, you will see the progress bar fill up; during this time you cannot click on anything in the GUI. For more information about what the GUI is doing, look at the terminal/prompt you opened the GUI with. For example data, see the website. For best accuracy and runtime performance, resize images so cells are less than 100 pixels across.

For multi-channel, multi-Z tiffs, the expected format is Z x channels x Ly x Lx.
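
If your stack is stored in a different axis order, you can reorder it with numpy before running cellpose. Here is a minimal sketch, assuming a hypothetical stack saved as Z x Ly x Lx x channels:

import numpy as np

# hypothetical stack: 10 z-planes, 512 x 512 pixels, 2 channels (Z x Ly x Lx x channels)
stack = np.zeros((10, 512, 512, 2), dtype=np.uint16)

# move the channel axis next to Z to get the expected Z x channels x Ly x Lx
stack = np.transpose(stack, (0, 3, 1, 2))
print(stack.shape)  # (10, 2, 512, 512)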

Contributing training data

We are very excited about receiving community contributions to the training data and re-training the cytoplasm model to make it better. Please follow these guidelines:

  1. Run cellpose on your data to see how well it does. Try varying the diameter, which can change results a little.
  2. If there are relatively few mistakes, it won't help much to contribute labelled data.
  3. If there are consistent mistakes, your data is likely very different from anything in the training set, and you should expect major improvements from contributing even just a few manually segmented images.
  4. For images that you contribute, the cells should be at least 10 pixels in diameter, and there should be at least several dozen cells per image, ideally ~100. If your images are too small, consider combining multiple images into a single big one and then manually segmenting that. If they are too big, consider splitting them into smaller crops.
  5. For the manual segmentation, please try to outline the boundaries of the cell, so that everything (membrane, cytoplasm, nucleus) is inside the boundaries. Do not just outline the cytoplasm and exclude the membrane, because that would be inconsistent with our own labelling and we wouldn't be able to use that.
  6. Do not use the results of the algorithm in any way to produce the manual segmentations you contribute. This can reinforce a vicious circle of mistakes and compromise the dataset for further algorithm development.

If you are having problems with the nucleus model, please open an issue before contributing data. Nucleus images are generally much less diverse, and we think the current training dataset already covers a very large set of modalities.

Using the GUI

The GUI serves two main functions:

  1. Running the segmentation algorithm.
  2. Manually labelling data.

Main GUI mouse controls (works in all views):

  • Pan = left-click + drag
  • Zoom = scroll wheel
  • Full view = double left-click
  • Select mask = left-click on mask
  • Delete mask = Ctrl + left-click
  • Start draw mask = right-click
  • End draw mask = right-click, or return to circle at beginning

Overlaps in masks are NOT allowed. If you draw a mask on top of another mask, it is cropped so that it doesn't overlap with the old mask. Masks in 2D should be single strokes (if single_stroke is checked).

If you want to draw masks in 3D (experimental), you can turn the single_stroke option off, draw a stroke on each plane containing the cell, and then press ENTER. 3D labelling will fill in unlabelled z-planes so that you do not have to label as densely.

!NOTE!: The GUI automatically saves after you draw a mask but NOT after segmentation and NOT after 3D mask drawing (too slow). Save in the file menu or with Ctrl+S. The output file is in the same folder as the loaded image with _seg.npy appended.

Keyboard shortcuts:

  • CTRL+H = help
  • CTRL+Z = undo previously drawn mask/stroke
  • CTRL+0 = clear all masks
  • CTRL+L = load image (can alternatively drag and drop image)
  • CTRL+S = SAVE MASKS IN IMAGE to _seg.npy file
  • CTRL+P = load _seg.npy file (note: it will load automatically with image if it exists)
  • CTRL+M = load masks file (must be same size as image with 0 for NO mask, and 1,2,3... for masks; see the sketch after this list)
  • CTRL+N = load numpy stack (NOT WORKING ATM)
  • A/D or LEFT/RIGHT = cycle through images in current directory
  • W/S or UP/DOWN = change color (RGB/gray/red/green/blue)
  • PAGE-UP / PAGE-DOWN = change to flows and cell prob views (if segmentation computed)
  • , / . = increase / decrease brush size for drawing masks
  • X = turn masks ON or OFF
  • Z = toggle outlines ON or OFF
  • C = cycle through labels for image type (saved to _seg.npy)
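
For CTRL+M, the masks file is a label image of the same size as the raw image. Here is a minimal sketch of building one with numpy/scipy; the thresholding step and file names are placeholders, and a single-channel image is assumed:

import numpy as np
import skimage.io
from scipy import ndimage

img = skimage.io.imread('img.tif')        # single-channel image (placeholder name)
binary = img > img.mean()                 # placeholder foreground detection
labels, nmasks = ndimage.label(binary)    # 0 = no mask, 1,2,3,... = mask labels
skimage.io.imsave('img_masks.tif', labels.astype(np.uint16))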

Segmentation options

SIZE: you can manually enter the approximate diameter for your cells, or press "calibrate" to let the model estimate it. The size is represented by a disk at the bottom of the view window (can turn this disk off by unchecking "scale disk on").

use GPU: if you have installed the cuda version of mxnet, then you can activate this, but it won't give huge speedups when running single images in the GUI.

MODEL: there is a cytoplasm model and a nuclei model; choose the one for the structures you want to segment.

CHAN TO SEG: this is the channel in which the cytoplasm or nuclei exist.

CHAN2 (OPT): if the cytoplasm model is chosen, choose the nuclear channel for this option.

In a notebook

See run_cellpose.ipynb.
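
The notebook runs the model directly from Python. Below is a rough sketch of that kind of workflow; the models.Cellpose class and the eval arguments shown here are assumptions based on the notebook, so check run_cellpose.ipynb for the exact interface in this release:

import skimage.io
from cellpose import models, plot

# load an image (file name is a placeholder)
img = skimage.io.imread('img.tif')

# cytoplasm model; channels = [cytoplasm channel, nuclear channel] per image
model = models.Cellpose(model_type='cyto')
masks, flows, styles, diams = model.eval([img], diameter=None, channels=[[2, 3]])

# overlay the predicted masks on the image
overlay = plot.mask_overlay(img, masks[0])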

From the command line

Run python -m cellpose and specify parameters as below. For instance, to run on a folder of images where the cytoplasm is green and the nucleus is blue, and save the output as a png:

python -m cellpose --dir ~/images_cyto/test/ --pretrained_model cyto --chan 1 --chan2 2 --save_png

You can specify the diameter for all the images, or set it to 0 if you want the algorithm to estimate it on an image-by-image basis. Here is how to run on nuclear data (grayscale) where the diameter is automatically estimated:

python -m cellpose --dir ~/images_nuclei/test/ --pretrained_model nuclei --diameter 0. --save_png

Parameters:

usage: __main__.py [-h] [--train] [--dir DIR] [--img_filter IMG_FILTER]
                   [--use_gpu] [--pretrained_model PRETRAINED_MODEL]
                   [--chan CHAN] [--chan2 CHAN2] [--all_channels]
                   [--diameter DIAMETER] [--save_png]
                   [--mask_filter MASK_FILTER] [--test_dir TEST_DIR]
                   [--n_epochs N_EPOCHS] [--batch_size BATCH_SIZE]

cellpose parameters

optional arguments:
  -h, --help            show this help message and exit
  --train               train network using images in dir
  --dir DIR             folder containing data to run or train on
  --img_filter IMG_FILTER
                        end string for images to run on
  --use_gpu             use gpu if mxnet with cuda installed
  --pretrained_model PRETRAINED_MODEL
                        model to use
  --chan CHAN           channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE
  --chan2 CHAN2         nuclear channel (if cyto, optional); 0: NONE, 1: RED,
                        2: GREEN, 3: BLUE
  --all_channels        use all channels in image if using own model and
                        images with special channels
  --diameter DIAMETER   cell diameter, if 0 cellpose will estimate for each
                        image
  --save_png            save masks as png
  --mask_filter MASK_FILTER
                        end string for masks to run on
  --test_dir TEST_DIR   folder containing test data (optional)
  --n_epochs N_EPOCHS   number of epochs
  --batch_size BATCH_SIZE
                        batch size
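
For example, to train on a folder of images with matching *_masks.tif label images (a hypothetical folder layout, using the flags listed above):

python -m cellpose --train --dir ~/images_cyto/train/ --pretrained_model cyto --mask_filter _masks --n_epochs 500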

Outputs

*_seg.npy files have the following fields:

  • filename : filename of image
  • img : image with chosen channels (Z x nchan x Ly x Lx)
  • masks : masks (-1 = NO masks, 0,1,2,... = mask labels)
  • colors : colors for masks
  • outlines : outlines of masks (-1 = NO outline, 0,1,2,... = outline labels)
  • chan_choose : channels that you chose in GUI (0=gray/none, 1=red, 2=green, 3=blue)
  • ismanual : element k = whether or not mask k was manually drawn or computed by the cellpose algorithm
  • flows : flows[0] is XY flow in RGB, flows[1] is Z flow (if it exists), flows[2] is the cell probability in range 0-255 instead of 0.0 to 1.0
  • est_diam : estimated diameter (if run on command line)
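
For example, you can load a _seg.npy file in Python and overlay the masks or outlines on the image:
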
import numpy as np
from cellpose import plot

dat = np.load('_seg.npy', allow_pickle=True).item()

# plot image with masks overlaid
RGB = plot.mask_overlay(dat['img'], dat['masks'],
                        colors=np.array(dat['colors']))

# plot image with outlines overlaid in red (can change color of outline)
RGB = plot.outline_overlay(dat['img'], dat['outlines'],
                           channels=dat['chan_choose'], color=[255,0,0])

Dependencies

cellpose relies on the following excellent packages (which are automatically installed with conda/pip if missing):

