
Large-Scale 3D Convolutional Network Inference

Project description

chunkflow


Patch-by-patch convolutional network inference with multiple frameworks, including PyTorch and pznet.

Introduction

3D convolutional networks (convnets) are the state-of-the-art approach to segmenting 3D images. Since a single machine has limited computational power and RAM, a large dataset cannot fit into memory for one-pass convnet inference, especially for large, complex networks. Hence, inference must be decomposed into multiple patches that are then stitched back together. The patches can be distributed across machines, exploiting data-level parallelism. However, each patch normally suffers from a boundary effect, since the image context around boundary voxels is missing. To reduce the boundary effect, patches can be blended with some overlap. Overlap and blending are easy to handle on a single shared-memory machine, but not in distributed computation at terabyte or petabyte scale. This package was made to solve that problem. The solution is simply to crop the boundary regions of each patch and stitch the cropped patches together.

The residual boundary effect depends on the cropping size. If the cropping size is half of the patch size, there is no boundary effect at all, but most of the computation is wasted. In practice, we found that cropping about 20%-25% of the patch size is good enough.
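As a rough illustration of this arithmetic (the exact margins are configurable; these numbers are only an example, not package defaults), cropping about 25% of each patch dimension leaves the following usable core:

```python
# Hypothetical illustration of the cropping arithmetic described above;
# the patch size matches the example in the Terminology section.
patch = (32, 256, 256)

# Crop ~25% of each dimension in total, split between the two sides.
crop = tuple(int(0.25 * p) // 2 for p in patch)       # margin per side
core = tuple(p - 2 * c for p, c in zip(patch, crop))  # usable interior

print(crop)  # (4, 32, 32)
print(core)  # (24, 192, 192)
```

Only the core of each patch is kept, so neighboring patches must overlap by twice the per-side margin.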

Supported backends

  • pytorch
  • pznet

Terminology

  • patch: the input/output 3D/4D array of the convnet, with a typical size such as 32x256x256.
  • chunk: the input/output 3D/4D array after blending on each machine, with a typical size such as 116x1216x1216.
  • block: the final output array of each machine, which should be aligned with the storage backend, such as neuroglancer precomputed. A typical size is 112x1152x1152.
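The block is obtained by cropping symmetric margins from the blended chunk. A minimal NumPy sketch of that relationship, using the example sizes above (`crop_chunk_to_block` is a hypothetical helper for illustration, not part of the chunkflow API):

```python
import numpy as np

def crop_chunk_to_block(chunk, block_size):
    """Crop equal margins from every side of a chunk to get the block."""
    margins = [(c - b) // 2 for c, b in zip(chunk.shape, block_size)]
    slices = tuple(slice(m, m + b) for m, b in zip(margins, block_size))
    return chunk[slices]

chunk = np.zeros((116, 1216, 1216), dtype=np.float32)
block = crop_chunk_to_block(chunk, (112, 1152, 1152))
print(block.shape)  # (112, 1152, 1152)
```

With these example sizes the margins are (2, 32, 32) voxels per side, so neighboring blocks tile the volume without gaps.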

Usage

Produce tasks

In the scripts folder:

python produce_tasks.py --help

Launch a worker to consume tasks

In the scripts folder:

python consume_tasks.py --help

Use a specific GPU device

Set an environment variable to restrict the worker to a specific GPU device:

CUDA_VISIBLE_DEVICES=2 python consume_tasks.py
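The same restriction can be applied from inside Python, as long as the variable is set before any CUDA framework first touches the GPU. This is the standard CUDA convention, not anything specific to chunkflow:

```python
import os

# Expose only physical GPU 2 to this process; CUDA frameworks such as
# PyTorch will then see it as logical device 0 (cuda:0).
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 2
```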

Development

Create a new release on PyPI

python setup.py bdist_wheel --universal
twine upload dist/my-new-wheel

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

chunkflow-0.2.0-py2.py3-none-any.whl (6.8 kB)


File details

Details for the file chunkflow-0.2.0-py2.py3-none-any.whl.

File metadata

  • Download URL: chunkflow-0.2.0-py2.py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.5

File hashes

Hashes for chunkflow-0.2.0-py2.py3-none-any.whl

  • SHA256: de765cb707b41165c3c383a5cc85e279a5c38a4750f143713e5954a59d839f36
  • MD5: f501691f8c71d852a33beed3f1517b46
  • BLAKE2b-256: e1b5f5c8947cd0956526339df75bb8bace4b55c7c46b44acfbc34afe35fa1947

