Deep learning tools for digital histology

Project description

Slideflow is a Python package that provides a unified API for building and testing deep learning models for histopathology, supporting both Tensorflow/Keras and PyTorch.

Slideflow includes tools for whole-slide image processing and segmentation, customizable deep learning model training with dozens of supported architectures, explainability tools including heatmaps and mosaic maps, analysis of activations from model layers, uncertainty quantification, and more. Its fast, optimized whole-slide image processing tools cover background filtering, blur/artifact detection, stain normalization, and efficient storage in *.tfrecords format. Model training is highly configurable, with a drop-in API for training custom architectures. For external training loops, Slideflow can serve as an image processing backend, providing an optimized tf.data.Dataset or torch.utils.data.DataLoader that reads and processes slide images and performs real-time stain normalization.
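As a minimal sketch of the external-training-loop use case (assuming a project has already been configured and tiles extracted, and that the dataset() and torch() calls behave as described in the documentation at slideflow.dev; check your installed version for exact signatures):

import slideflow as sf

# Load an existing project and build a Dataset at the desired tile size.
P = sf.Project('/project/path')
dataset = P.dataset(tile_px=299, tile_um=302)

# Serve tiles to an external PyTorch training loop as a DataLoader.
# The structure of each yielded batch depends on configuration;
# see the documentation for details.
dataloader = dataset.torch(batch_size=32)
for batch in dataloader:
    ...  # custom training step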

Full documentation with example tutorials can be found at slideflow.dev.

Requirements

Installation

Slideflow can be installed either via PyPI or run as a Docker container. To install via pip:

pip3 install --upgrade setuptools pip wheel
pip3 install slideflow
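After installation, a quick sanity check (not part of the official instructions) is to confirm the package imports and report its version:

import slideflow as sf

# Print the installed Slideflow version to confirm the installation.
print(sf.__version__)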

Alternatively, pre-configured Docker images are available with OpenSlide/Libvips and the latest version of either Tensorflow or PyTorch. To install with the Tensorflow backend:

docker pull jamesdolezal/slideflow:latest-tf
docker run -it --gpus all jamesdolezal/slideflow:latest-tf

To install with the PyTorch backend:

docker pull jamesdolezal/slideflow:latest-torch
docker run -it --shm-size=2g --gpus all jamesdolezal/slideflow:latest-torch

Getting started

Slideflow experiments are organized into Projects, which supervise storage of whole-slide images, extracted tiles, and patient-level annotations. To create a new project, create an instance of the slideflow.Project class, supplying a pre-configured set of patient-level annotations in CSV format:

import slideflow as sf
P = sf.Project(
  '/project/path',
  annotations="/patient/annotations.csv"
)

Once the project is created, add a new dataset source with paths to whole-slide images, tumor Region of Interest (ROI) files (if applicable), and the directories where extracted tiles/tfrecords should be stored. This only needs to be done once.

P.add_source(
  name="TCGA",
  slides="/slides/directory",
  roi="/roi/directory",
  tiles="/tiles/directory",
  tfrecords="/tfrecords/directory"
)

This step attempts to automatically associate slide names with the patient identifiers in your annotations file. After it completes, double-check that each annotation entry has a slide column containing the filename (without extension) of the corresponding slide.
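For illustration, a patient-level annotations file with the slide column filled in might look like the following (the patient names and the category1 outcome column are examples; category1 matches the outcome used in the training section below, and the exact required columns are described in the full documentation):

patient,slide,category1
patient1,slide1_filename,class_a
patient2,slide2_filename,class_b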

Extract tiles from slides

Next, whole-slide images are segmented into smaller image tiles and saved in *.tfrecords format. Extract tiles from slides at a given magnification (tile width in microns) and resolution (tile width in pixels) using sf.Project.extract_tiles():

P.extract_tiles(
  tile_px=299,  # Tile size, in pixels
  tile_um=302   # Tile size, in microns
)

If slides are stored on a network drive or a spinning HDD, tile extraction can be accelerated by buffering slides to an SSD or ramdisk:

P.extract_tiles(
  ...,
  buffer="/mnt/ramdisk"
)

Training models

Once tiles are extracted, models can be trained. Start by configuring a set of hyperparameters:

params = sf.ModelParams(
  tile_px=299,
  tile_um=302,
  batch_size=32,
  model='xception',
  learning_rate=0.0001,
  ...
)

Models can then be trained with these parameters. Training supports categorical, multi-categorical, continuous, and time-series outcomes, and the training process is highly configurable. For example, to train models in cross-validation to predict the outcome 'category1' stored in the project annotations file:

P.train(
  'category1',
  params=params,
  save_predictions=True,
  multi_gpu=True
)

Evaluation, heatmaps, mosaic maps, and more

Slideflow includes a host of additional tools, including model evaluation and prediction, heatmaps, mosaic maps, analysis of layer activations, and more. See our full documentation for more details and tutorials.
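As a rough sketch of how these tools are invoked (the model path below is a placeholder and argument names may differ between versions; see slideflow.dev for exact signatures), evaluation and heatmap generation follow the same Project-level pattern as training:

# Evaluate a trained model on the outcome used during training.
P.evaluate(
  model='/path/to/trained_model',
  outcomes='category1'
)

# Generate whole-slide prediction heatmaps with the same model.
P.generate_heatmaps(model='/path/to/trained_model')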

License

This code is made available under the GPLv3 License for non-commercial academic purposes.

Reference

The manuscript describing this protocol is in press. In the meantime, if you find our work useful for your research, or if you use parts of this code, please consider citing as follows:

James Dolezal, Sara Kochanny, & Frederick Howard. (2022). Slideflow: A Unified Deep Learning Pipeline for Digital Histology (1.1.0). Zenodo. https://doi.org/10.5281/zenodo.5703792

@software{james_dolezal_2022_5703792,
  author       = {James Dolezal and
                  Sara Kochanny and
                  Frederick Howard},
  title        = {{Slideflow: A Unified Deep Learning Pipeline for
                   Digital Histology}},
  month        = apr,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.1.0},
  doi          = {10.5281/zenodo.5703792},
  url          = {https://doi.org/10.5281/zenodo.5703792}
}
