anatomical segmentation algorithm
Project description
Cellpose
A generalist algorithm for cell and nucleus segmentation.
This code was written by Carsen Stringer and Marius Pachitariu. To learn about Cellpose, read the paper or watch the talk. For support, please open an issue.
You can quickly try out Cellpose on the website first (some features disabled). If you want to improve Cellpose for yourself and for everyone else, please consider contributing manual segmentations for a few of your images via the built-in GUI interface (see instructions below).
Installation
We recommend installing an Anaconda distribution of Python -- Choose Python 3.7 and your operating system. Note you might need to use an anaconda prompt if you did not add anaconda to the path. From your base environment (or you can make a new environment) in an anaconda prompt/command prompt, run
pip install cellpose[gui]
If you want to install without the GUI dependencies, run pip install cellpose.
Alternatively you can use the included environment file (if you'd like a cellpose-specific environment). This environment file includes all the dependencies for using the GUI. Using the environment file is recommended if you have problems with the pip installation. Please follow these instructions:
- Download the environment.yml file from the repository. You can do this by cloning the repository, or by copy-pasting the text from the file into a text document on your local computer.
- Open an anaconda prompt / command prompt with conda for python 3 in the path.
- Change directories to where the environment.yml is and run conda env create -f environment.yml
- To activate this new environment, run conda activate cellpose
- You should see (cellpose) on the left side of the terminal line. Now run python -m cellpose and you're all set.
To upgrade cellpose (package here), run the following in the environment:
pip install cellpose --upgrade
If you have an older cellpose environment you can remove it with conda env remove -n cellpose before creating a new one.
Note you will always have to run conda activate cellpose before you run cellpose. If you want to run jupyter notebooks in this environment, then also run conda install jupyter.
Common issues
If you receive the error Illegal instruction (core dumped), then likely mxnet does not recognize your MKL version. Please uninstall and reinstall mxnet without mkl:
pip uninstall mxnet-mkl
pip uninstall mxnet
pip install mxnet==1.4.0
If you receive the error No module named PyQt5.sip, then try uninstalling and reinstalling pyqt5:
pip uninstall pyqt5 pyqt5-tools
pip install pyqt5 pyqt5-tools pyqt5.sip
If you receive an error associated with matplotlib, try upgrading it:
pip install matplotlib --upgrade
If you are on Yosemite Mac OS, PyQt doesn't work and you won't be able to use the graphical interface for cellpose. More recent versions of Mac OS are fine. The software has been heavily tested on Windows 10 and Ubuntu 18.04, and less well tested on Mac OS. Please post an issue if you have installation problems.
CUDA version
If you plan on running many images, you may want to install a GPU version of mxnet. We recommend CUDA 10.0 or greater. Follow the instructions here.
Before installing the GPU version, remove the CPU version:
pip uninstall mxnet-mkl
pip uninstall mxnet
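Then install the CUDA build of mxnet that matches your CUDA toolkit, following the instructions linked above. For example, for CUDA 10.0 the package is typically named as below (this name follows mxnet's usual naming scheme; check the mxnet instructions for your exact CUDA version):
pip install mxnet-cu100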
When upgrading cellpose, you will want to ignore dependencies (so that mxnet-mkl does not install):
pip install --no-deps cellpose --upgrade
Installation of github version
Follow the steps above to install the dependencies. In the github repository, run pip install -e . and the github version will be installed. If you want to go back to the pip version of cellpose, then run pip install cellpose.
Running cellpose
The quickest way to start is to open the GUI from a command line terminal. You might need to open an anaconda prompt if you did not add anaconda to the path:
python -m cellpose
The first time cellpose runs it downloads the latest available trained model weights from the website.
You can now drag and drop any images (*.tif, *.png, *.jpg, *.gif) into the GUI and run Cellpose, and/or manually segment them. When the GUI is processing, you will see the progress bar fill up; during this time you cannot click on anything in the GUI. For more information about what the GUI is doing, you can look at the terminal/prompt you opened the GUI with. For example data, see the website. For best accuracy and runtime performance, resize images so cells are less than 100 pixels across.
For multi-channel, multi-Z tiffs, the expected format is Z x channels x Ly x Lx.
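If your stack was saved with a different axis order, you can rearrange it with numpy before running cellpose. Below is a minimal sketch, assuming a stack saved as channels x Z x Ly x Lx and that the tifffile package is available (it is not a cellpose dependency, and the filenames are hypothetical):
import numpy as np
import tifffile  # assumption: install separately if you do not already have it
# hypothetical stack saved with axes (channels, Z, Ly, Lx)
stack = tifffile.imread('my_stack.tif')
stack = np.transpose(stack, (1, 0, 2, 3))  # reorder to (Z, channels, Ly, Lx)
tifffile.imwrite('my_stack_zcyx.tif', stack)  # now in the layout cellpose expects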
Contributing training data
We are very excited about receiving community contributions to the training data and re-training the cytoplasm model to make it better. Please follow these guidelines:
- Run cellpose on your data to see how well it does. Try varying the diameter, which can change results a little.
- If there are relatively few mistakes, it won't help much to contribute labelled data.
- If there are consistent mistakes, your data is likely very different from anything in the training set, and you should expect major improvements from contributing even just a few manually segmented images.
- For images that you contribute, the cells should be at least 10 pixels in diameter, and there should be at least several dozen cells per image, ideally ~100. If your images are too small, consider combining multiple images into a single big one and then manually segmenting that. If they are too big, consider splitting them into smaller crops (see the sketch after this list).
- For the manual segmentation, please try to outline the boundaries of the cell, so that everything (membrane, cytoplasm, nucleus) is inside the boundaries. Do not just outline the cytoplasm and exclude the membrane, because that would be inconsistent with our own labelling and we wouldn't be able to use that.
- Do not use the results of the algorithm in any way when producing the manual segmentations you contribute. This can reinforce a vicious circle of mistakes and compromise the dataset for further algorithm development.
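If your images are much bigger than needed, a simple way to split them into crops before manual segmentation is to slice the array with numpy-style indexing. A minimal sketch for a 2D image; the filenames and the 512-pixel crop size are arbitrary choices, not cellpose requirements:
import skimage.io  # scikit-image is already a cellpose dependency

img = skimage.io.imread('big_image.tif')  # hypothetical large 2D image (Ly x Lx)
tile = 512  # arbitrary crop size in pixels
for i in range(0, img.shape[0], tile):
    for j in range(0, img.shape[1], tile):
        crop = img[i:i + tile, j:j + tile]
        skimage.io.imsave('crop_%d_%d.tif' % (i, j), crop)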
If you are having problems with the nucleus model, please open an issue before contributing data. Nucleus images are generally much less diverse, and we think the current training dataset already covers a very large set of modalities.
Using the GUI
The GUI serves two main functions:
- Running the segmentation algorithm.
- Manually labelling data.
Main GUI mouse controls (works in all views):
- Pan = left-click + drag
- Zoom = scroll wheel
- Full view = double left-click
- Select mask = left-click on mask
- Delete mask = Ctrl + left-click
- Start draw mask = right-click
- End draw mask = right-click, or return to circle at beginning
Overlaps in masks are NOT allowed. If you draw a mask on top of another mask, it is cropped so that it doesn't overlap with the old mask. Masks in 2D should be single strokes (if single_stroke is checked).
If you want to draw masks in 3D, then you can turn the single_stroke option off, draw a stroke on each plane with the cell, and then press ENTER. 3D labelling will fill in unlabelled z-planes so that you do not have to label as densely.
!NOTE!: The GUI automatically saves after you draw a mask but NOT after segmentation and NOT after 3D mask drawing (too slow). Save in the file menu or with Ctrl+S. The output file is in the same folder as the loaded image with _seg.npy appended.
Keyboard shortcuts | Description |
---|---|
CTRL+H | help |
CTRL+Z | undo previously drawn mask/stroke |
CTRL+0 | clear all masks |
CTRL+L | load image (can alternatively drag and drop image) |
CTRL+S | SAVE MASKS IN IMAGE to _seg.npy file |
CTRL+P | load _seg.npy file (note: it will load automatically with image if it exists) |
CTRL+M | load masks file (must be same size as image with 0 for NO mask, and 1,2,3... for masks) |
CTRL+N | load numpy stack (NOT WORKING ATM) |
A/D or LEFT/RIGHT | cycle through images in current directory |
W/S or UP/DOWN | change color (RGB/gray/red/green/blue) |
PAGE-UP / PAGE-DOWN | change to flows and cell prob views (if segmentation computed) |
, / . | increase / decrease brush size for drawing masks |
X | turn masks ON or OFF |
Z | toggle outlines ON or OFF |
C | cycle through labels for image type (saved to _seg.npy ) |
Segmentation options
SIZE: you can manually enter the approximate diameter for your cells, or press "calibrate" to let the model estimate it. The size is represented by a disk at the bottom of the view window (can turn this disk off by unchecking "scale disk on").
use GPU: if you have installed the cuda version of mxnet, then you can activate this, but it won't give huge speedups when running single images in the GUI.
MODEL: there is a cytoplasm model and a nuclei model; choose the one for what you want to segment.
CHAN TO SEG: this is the channel in which the cytoplasm or nuclei exist.
CHAN2 (OPT): if the cytoplasm model is chosen, then choose the nuclear channel for this option.
In a notebook
See run_cellpose.ipynb.
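The notebook shows the full workflow; below is a minimal sketch of calling cellpose from Python. It assumes the models.Cellpose class and its eval method as demonstrated in run_cellpose.ipynb; the exact constructor and argument names may differ in your version, and the filename and channel choices are just examples (channels follow the command-line convention: 0=grayscale, 1=red, 2=green, 3=blue):
import skimage.io
from cellpose import models  # assumption: API as used in run_cellpose.ipynb

img = skimage.io.imread('img.tif')  # hypothetical image, cyto in green, nuclei in blue
model = models.Cellpose(model_type='cyto')
masks, flows, styles, diams = model.eval([img], channels=[[2, 3]])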
From the command line
Run python -m cellpose and specify parameters as below. For instance, to run on a folder with images where the cytoplasm is green and the nucleus is blue, and save the output as a png:
python -m cellpose --dir ~/images_cyto/test/ --pretrained_model cyto --chan 2 --chan2 3 --save_png
You can specify the diameter for all the images or set to 0 if you want the algorithm to estimate it on an image by image basis. Here is how to run on nuclear data (grayscale) where the diameter is automatically estimated:
python -m cellpose --dir ~/images_nuclei/test/ --pretrained_model nuclei --diameter 0. --save_png
Training
The same channel settings apply for training models. To train on cytoplasmic images (green cyto and red nuclei) starting with a pretrained model from cellpose (cyto or nuclei):
python -m cellpose --train --dir ~/images_cyto/train/ --test_dir ~/images_cyto/test/ --pretrained_model cyto --chan 2 --chan2 1
You can train from scratch as well:
python -m cellpose --train --dir ~/images_nuclei/train/ --pretrained_model None
You can specify the full path to a pretrained model to use:
python -m cellpose --dir ~/images_cyto/test/ --pretrained_model ~/images_cyto/test/model/cellpose_35_0 --save_png
Parameters:
usage: __main__.py [-h] [--train] [--dir DIR] [--img_filter IMG_FILTER]
[--use_gpu] [--pretrained_model PRETRAINED_MODEL]
[--chan CHAN] [--chan2 CHAN2] [--all_channels]
[--diameter DIAMETER] [--save_png]
[--mask_filter MASK_FILTER] [--test_dir TEST_DIR]
[--n_epochs N_EPOCHS] [--batch_size BATCH_SIZE]
cellpose parameters
optional arguments:
-h, --help show this help message and exit
--train train network using images in dir
--dir DIR folder containing data to run or train on
--img_filter IMG_FILTER
end string for images to run on
--use_gpu use gpu if mxnet with cuda installed
--pretrained_model PRETRAINED_MODEL
model to use
--chan CHAN channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE
--chan2 CHAN2 nuclear channel (if cyto, optional); 0: NONE, 1: RED,
2: GREEN, 3: BLUE
--all_channels use all channels in image if using own model and
images with special channels
--diameter DIAMETER cell diameter, if 0 cellpose will estimate for each
image
--save_png save masks as png
--mask_filter MASK_FILTER
end string for masks to run on
--test_dir TEST_DIR folder containing test data (optional)
--n_epochs N_EPOCHS number of epochs
--batch_size BATCH_SIZE
batch size
Outputs
*_seg.npy files have the following fields:
- filename : filename of image
- img : image with chosen channels (nchan x Ly x Lx) (if not multiplane)
- masks : masks (0 = NO masks; 1,2,... = mask labels)
- colors : colors for masks
- outlines : outlines of masks (0 = NO outline; 1,2,... = outline labels)
- chan_choose : channels that you chose in GUI (0=gray/none, 1=red, 2=green, 3=blue)
- ismanual : element k indicates whether mask k was manually drawn or computed by the cellpose algorithm
- flows : flows[0] is XY flow in RGB, flows[1] is the cell probability in range 0-255 instead of 0.0 to 1.0, flows[2] is Z flow in range 0-255 (if it exists)
- est_diam : estimated diameter (if run on command line)
- zdraw : for each mask, which planes were manually labelled (planes in between manually drawn have interpolated masks)
Here is an example of loading in a *_seg.npy file and plotting masks and outlines:
import numpy as np
from cellpose import plot
dat = np.load('_seg.npy', allow_pickle=True).item()
# plot image with masks overlaid
mask_RGB = plot.mask_overlay(dat['img'], dat['masks'],
                             colors=np.array(dat['colors']))
# plot image with outlines overlaid in red (can change color of outline)
outline_RGB = plot.outline_overlay(dat['img'], dat['outlines'],
                                   channels=dat['chan_choose'], color=[255,0,0])
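To display the overlays computed above, you can use matplotlib (already installed as a dependency), for example:
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.imshow(mask_RGB)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(outline_RGB)
plt.axis('off')
plt.show()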
Dependencies
cellpose relies on the following excellent packages (which are automatically installed with conda/pip if missing):
- mxnet_mkl
- pyqtgraph
- PyQt5
- numpy (>=1.16.0)
- numba
- scipy
- scikit-image
- natsort
- matplotlib
Project details
Download files
Download the file for your platform.
Source Distribution: cellpose-0.0.1.25.tar.gz
Built Distribution: cellpose-0.0.1.25-py3-none-any.whl
File details
Details for the file cellpose-0.0.1.25.tar.gz.
File metadata
- Download URL: cellpose-0.0.1.25.tar.gz
- Upload date:
- Size: 94.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/45.2.0.post20200210 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.7.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | be1c283b7112873a636ad72fd287ffe3e0de3633fb0c872c341656289dd17437
MD5 | 921d0600ddab6c088f165ca2500849a4
BLAKE2b-256 | 1502260858eb352629e94c112e82768f39903638af5ee1979848e7318b5427e3
File details
Details for the file cellpose-0.0.1.25-py3-none-any.whl.
File metadata
- Download URL: cellpose-0.0.1.25-py3-none-any.whl
- Upload date:
- Size: 91.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/45.2.0.post20200210 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.7.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3394be2eeaedbe75b879ea3c09b0b7e2c087e5ab0ec7ba0accadca260fc82370
MD5 | fd80952d0f6be47dfc462097b305e167
BLAKE2b-256 | c0ef622f0a42eb390a0dd8aab3131071e7bd5231687bcaff825aad6c436daf97