BABY

Birth Annotator for Budding Yeast

Neural network code for segmenting buds from brightfield stacks.

Installation

BABY requires Python 3 and TensorFlow. For some versions of TensorFlow, you specifically need Python 3.6.

In any case, it is recommended that you install the package into a virtual environment (e.g., conda create if you are using Anaconda, or python3 -m venv otherwise).
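
For example, with venv (the environment name baby-env is just a placeholder):

> python3 -m venv baby-env
> source baby-env/bin/activate

or with Anaconda:

> conda create -n baby-env python=3.6
> conda activate baby-env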

By default, BABY will trigger installation of the latest version of TensorFlow. In our experience, however, performance is best with TensorFlow version 1.14. If you want to use this version, first install it in your virtual environment by running:

> pip install tensorflow==1.14

NB: To make use of a GPU, you should also follow the set-up instructions for installing tensorflow-gpu.
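
For TensorFlow 1.14, the GPU build is distributed as a separate package, so (assuming a compatible CUDA/cuDNN installation) this would be:

> pip install tensorflow-gpu==1.14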

Install BABY by first obtaining this repository (e.g., git clone https://git.ecdf.ed.ac.uk/jpietsch/baby.git), and then using pip:

> pip install baby/

NB: If you are upgrading, then you may instead need to run: pip install -U baby/.

Developers: You may prefer to install an editable version:

> pip install -e baby/

Run using the Python API

Create a new BabyBrain with one of the model sets. The brain contains all the models and parameters for segmenting and tracking cells.

>>> from baby import BabyBrain, BabyCrawler, modelsets
>>> modelset = modelsets()['evolve_brightfield_60x_5z']
>>> brain = BabyBrain(**modelset)
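
The modelsets() function returns a dict keyed by model-set name, so you can check which sets are available in your installation, for example:

>>> available = sorted(modelsets().keys())
>>> 'evolve_brightfield_60x_5z' in available
True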

For each time course you want to process, instantiate a new BabyCrawler. The crawler keeps track of cells between time steps.

>>> crawler = BabyCrawler(brain)

Load an image time series (from the tests subdirectory in this example). Each image should have shape (x, y, z).

>>> from baby.io import load_tiled_image
>>> image_series = [load_tiled_image(
...     'tests/images/evolve_testG_tp{:d}_Brightfield.png'.format(t))
...     for t in range(1,6)]
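
load_tiled_image returns the image together with its metadata, so each element of image_series is an (image, info) pair. A quick, illustrative check that each loaded stack really is a 3D (x, y, z) array:

>>> img0, _ = image_series[0]
>>> img0.ndim
3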

Send images to the crawler in time order (here a batch of size 1). We additionally request that outlines be optimised to edge predictions, and that lineage assignments, binary edge masks and volume estimates (using the conical method) be output at each time point.

>>> segmented_series = [crawler.step(
...     img[None, ...], refine_outlines=True, assign_mothers=True,
...     with_edgemasks=True, with_volumes=True)
...     for img, _ in image_series]
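
Each call to crawler.step returns a list with one result dict per image in the batch (here just one). With the options requested above, each dict should include the fields used in the next step, for example:

>>> result = segmented_series[0][0]  # first time point, single image in the batch
>>> all(k in result for k in ('edgemasks', 'cell_label', 'mother_assign', 'volumes'))
True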

Finally, save the segmentation outlines, labels, volumes and lineage assignments as an annotated tiled png:

>>> from baby.io import save_tiled_image
>>> for t, s in enumerate(segmented_series):
...     save_tiled_image(
...         255 * s[0]['edgemasks'].astype('uint8').transpose((1, 2, 0)),
...         '../segout_tp{:d}.png'.format(t + 1),
...         {k: s[0][k] for k in ('cell_label', 'mother_assign', 'volumes')})
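
As a quick sanity check (assuming save_tiled_image round-trips the metadata it is given), a saved file can be read back with load_tiled_image:

>>> masks, info = load_tiled_image('../segout_tp1.png')
>>> all(k in info for k in ('cell_label', 'mother_assign', 'volumes'))
True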

Run via a server

Once installed, you should be able to start a server to accept segmentation requests using:

> baby-phone

or on Windows:

> baby-phone.exe

The server runs by default on http://0.0.0.0:5101. HTTP requests need to be sent to the correct URL endpoint, but the HTTP API is currently undocumented. The primary client implementation is in MATLAB.

Jupyter notebooks

Training scripts are saved in Jupyter notebooks in the notebooks folder. To maintain the repository in a clean state, it's probably best to copy these to another directory for routine use. If you want to share a notebook, you can then specifically add it back to the repository at a useful checkpoint.

How to retrain the models

As of mid-2022 we aim to transition to TensorFlow 2 (and then to PyTorch). This means re-training all networks. We first fetch our data from skye and regenerate the train-val-test pair sets using TrainValTestPairs:

from pathlib import Path
from baby.io import TrainValTestPairs

training_data_path = Path("/home/alan/Documents/dev/training/training-images/")
tvt = TrainValTestPairs()
tvt.add_from(training_data_path / "traps-prime95b-60x")
# tvt.add_from(training_data_path / "traps-evolve-60x")

Download files

Download the file for your platform.

Source Distribution

aliby-baby-0.1.11.tar.gz (15.2 MB, Source)

Built Distribution

aliby_baby-0.1.11-py3-none-any.whl (15.8 MB, Python 3)

File details

Details for the file aliby-baby-0.1.11.tar.gz.

File metadata

  • Size: 15.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.7.9 Linux/5.4.0-117-generic

File hashes

Hashes for aliby-baby-0.1.11.tar.gz

  Algorithm     Hash digest
  SHA256        2e4703a150182eaa43a0c4d32df127560a6175852cb8cd6fff437bdf4fbf3284
  MD5           67c05fa832f6edc832799fe20914e914
  BLAKE2b-256   910eb7bf5a1f210d9061c60d1374bace59a93c1fb3faf2c3c587c68db209559a

File details

Details for the file aliby_baby-0.1.11-py3-none-any.whl.

File metadata

  • Size: 15.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.7.9 Linux/5.4.0-117-generic

File hashes

Hashes for aliby_baby-0.1.11-py3-none-any.whl

  Algorithm     Hash digest
  SHA256        3d5429021c8819d70c3604e09f216bd5727bbc33b17a5912f012df19e803e47d
  MD5           f2bd4c43f0529185faeb5cb2127531a0
  BLAKE2b-256   9ab4fbdd2b2162ffcd7d3f9147e19017655bc7c8bd305f9339c05cd21b7ee122
