
Fast sampling from large images



Read the docs: https://ndsampler.readthedocs.io
Gitlab (main): https://gitlab.kitware.com/computer-vision/ndsampler
Github (mirror): https://github.com/Kitware/ndsampler
Pypi: https://pypi.org/project/ndsampler

The main webpage for this project is: https://gitlab.kitware.com/computer-vision/ndsampler

Fast random access to small regions in large images.

Random access is amortized by converting images into an efficient backend format (current backends include cloud-optimized GeoTIFFs (cog) and NumPy array files (npy)). If images are already in COG format, no conversion is needed.

The ndsampler module was built with detection, segmentation, and classification tasks in mind, but it is not limited to these use cases.

The basic idea is to ensure your data is in MS-COCO format; the CocoSampler class will then let you sample positive and negative regions.

For classification tasks, the MS-COCO data can be as simple as giving every image a single annotation that takes up the entire image.
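
For instance, here is a minimal sketch of such a classification-style dataset. The file names, labels, and the 512x512 image size are hypothetical placeholders; each image gets exactly one annotation whose bounding box spans the whole image.

import kwcoco

# Hypothetical file names, labels, and a 512x512 image size, purely for illustration.
image_paths = ['cat1.png', 'dog1.png']
labels = ['cat', 'dog']

dataset = {
    'images': [{'id': i, 'file_name': fpath} for i, fpath in enumerate(image_paths)],
    'categories': [{'id': i + 1, 'name': name} for i, name in enumerate(sorted(set(labels)))],
    'annotations': [],
}
name_to_cid = {cat['name']: cat['id'] for cat in dataset['categories']}
for image_id, label in enumerate(labels):
    dataset['annotations'].append({
        'id': image_id + 1,
        'image_id': image_id,
        'category_id': name_to_cid[label],
        'bbox': [0, 0, 512, 512],  # one box covering the entire image
    })
coco_dset = kwcoco.CocoDataset(dataset)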

The aspiration of this module is to let you access your data in-situ (i.e. with no pre-processing), although the cost of doing so depends on how efficiently your data can be accessed. A faster cache can be built at the cost of disk space. Currently there are “cog” and “npy” backends. Help is wanted to integrate backends for HDF5 and other medical / domain-specific formats.
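
The backend is chosen when the sampler is constructed. A minimal sketch, assuming the CocoSampler constructor accepts a backend argument as described above, and using the kwcoco demo data so the snippet is self-contained:

import kwcoco
import ndsampler
import ubelt as ub

# Any kwcoco.CocoDataset works here; the built-in demo data keeps this self-contained.
coco_dset = kwcoco.CocoDataset.demo('shapes8')
workdir = ub.Path.appdir('ndsampler/demo').ensuredir()

# backend='cog' caches large images as cloud-optimized geotiffs (requires GDAL),
# backend='npy' caches uncompressed numpy array files, backend=None disables caching.
sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir, backend='cog')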

Installation

The ndsampler package can be installed via pip:

pip install ndsampler

Note that ndsampler depends on kwimage, which has a known compatibility issue between opencv-python and opencv-python-headless. Please ensure that one or the other (but not both) is installed as well:

pip install opencv-python-headless

# OR

pip install opencv-python

Lastly, to fully leverage ndsampler’s features, GDAL must be installed (although much of ndsampler works without it). Kitware has a pypi index that hosts GDAL wheels for Linux systems, but other systems will need to find some other way of installing GDAL (conda is a safe choice).

pip install --find-links https://girder.github.io/large_image_wheels GDAL

Features

  • CocoDataset for managing and manipulating annotated image datasets

  • Amortized O(1) sampling of N-dimensional space-time data (with respect to a constant window size), e.g. images and video.

  • Hierarchical or mutually exclusive category management.

  • Random negative window sampling.

  • Coverage-based positive sampling.

  • Dynamic toydata generator (sketched below).

Also installs the kwcoco package and CLI tool.
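
For the dynamic toydata generator listed above, a minimal sketch using the bundled kwcoco demo data (the 'shapes8' key name is an assumption about the available demo keys):

import kwcoco

# 'shapes8' dynamically renders 8 toy images with random shapes, categories,
# and annotations; useful for testing pipelines without any real data.
toy_dset = kwcoco.CocoDataset.demo('shapes8')
print(toy_dset)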

Usage

The main pattern of usage is:

  1. Use kwcoco to load a json-based COCO dataset (or create a kwcoco.CocoDataset programmatically).

  2. Pass that dataset to an ndsampler.CocoSampler object, which wraps the JSON structure holding your images and annotations and lets you sample patches from those images efficiently.

  3. Either manually specify an image and region, or specify an annotation id, in which case the sampler loads the region corresponding to that annotation.

Example

This example shows how you can efficiently load subregions from images.

>>> # Imagine you have some images
>>> import kwimage
>>> image_paths = [
>>>     kwimage.grab_test_image_fpath('astro'),
>>>     kwimage.grab_test_image_fpath('carl'),
>>>     kwimage.grab_test_image_fpath('airport'),
>>> ]  # xdoc: +IGNORE_WANT
['~/.cache/kwimage/demodata/KXhKM72.png',
 '~/.cache/kwimage/demodata/flTHWFD.png',
 '~/.cache/kwimage/demodata/Airport.jpg']
>>> # And you want to randomly load subregions of them in O(1) time
>>> import ndsampler
>>> import kwcoco
>>> import ubelt as ub
>>> # First make a COCO dataset that refers to your images (and possibly annotations)
>>> dataset = {
>>>     'images': [{'id': i, 'file_name': fpath} for i, fpath in enumerate(image_paths)],
>>>     'annotations': [],
>>>     'categories': [],
>>> }
>>> coco_dset = kwcoco.CocoDataset(dataset)
>>> print(coco_dset)
<CocoDataset(tag=None, n_anns=0, n_imgs=3, n_cats=0)>
>>> # Now pass the dataset to a sampler and tell it where it can store temporary files
>>> workdir = ub.Path.appdir('ndsampler/demo').ensuredir()
>>> sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir)
>>> # Now you can load arbitrary samples by specifying a target dictionary
>>> # with an image_id (gid), a center location (cx, cy), and a width and height.
>>> target = {'gid': 0, 'cx': 200, 'cy': 200, 'width': 100, 'height': 100}
>>> sample = sampler.load_sample(target)
>>> # The sample contains the image data, any visible annotations, a reference
>>> # to the original target, and params of the transform used to sample this
>>> # patch
>>> print(sorted(sample.keys()))
['annots', 'im', 'params', 'tr']
>>> im = sample['im']
>>> print(im.shape)
(100, 100, 3)
>>> # The load sample function is at the core of what ndsampler does
>>> # There are other helper functions like load_positive / load_negative
>>> # which deal with annotations. See those for more details.
>>> # For random negative sampling see coco_regions.
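
Building on the example above, here is a hedged sketch of the annotation-driven helpers mentioned in the closing comments. The CocoSampler.demo constructor and the load_positive / load_negative calls are assumed to behave as described; the 'shapes8' demo key generates toy images with annotations, which keeps the snippet self-contained.

>>> import ndsampler
>>> demo_sampler = ndsampler.CocoSampler.demo('shapes8')
>>> # load_positive crops a patch centered on one of the dataset's annotations
>>> pos_sample = demo_sampler.load_positive(index=0)
>>> patch = pos_sample['im']        # pixels cropped around that annotation
>>> visible = pos_sample['annots']  # annotations visible inside the patch
>>> # load_negative draws a random window away from the positive annotations
>>> neg_sample = demo_sampler.load_negative()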

A Note On COGs

COGs (cloud-optimized GeoTIFFs) are the backbone of efficient sampling in the ndsampler library.

To perform deep learning efficiently, you need to be able to randomly sample cropped regions from images. When ndsampler.Sampler (more precisely, the FramesSampler belonging to the base Sampler object) is in “cog” mode, it caches all images larger than 512x512 in COG format.

I’ve noticed significant speedups even for “small” 1024x1024 images. I haven’t made effective use of the overviews feature yet, but I plan to in the future, as I want to allow ndsampler to sample in scale as well as in space.

It’s possible to obtain this speedup with the “npy” backend, which supports true random sampling, but it is an uncompressed format and can require a large amount of disk space. Using the “None” backend means that loading a small windowed region requires loading the entire image first (which can be acceptable for some applications).

Using COGs requires that GDAL is installed. Installing GDAL is a pain though.
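
If GDAL may not be present everywhere the code runs, one hedged workaround is to fall back to the “npy” backend when the osgeo import fails; this sketch reuses the backend argument assumed earlier and the kwcoco demo data:

import kwcoco
import ndsampler
import ubelt as ub

# Fall back to the uncompressed "npy" backend when GDAL is not importable.
try:
    from osgeo import gdal  # noqa: F401  (only imported to test availability)
    backend = 'cog'
except ImportError:
    backend = 'npy'

coco_dset = kwcoco.CocoDataset.demo('shapes8')
workdir = ub.Path.appdir('ndsampler/demo').ensuredir()
sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir, backend=backend)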

https://gist.github.com/cspanring/5680334

Using conda is relatively simple

conda install gdal

# Test that this works
python -c "from osgeo import gdal; print(gdal)"

It is also possible to use system packages:

# References:
# https://gis.stackexchange.com/questions/28966/python-gdal-package-missing-header-file-when-installing-via-pip
# https://gist.github.com/cspanring/5680334


# Install GDAL system libs
sudo apt install libgdal-dev

GDAL_VERSION=`gdal-config --version`
echo "GDAL_VERSION = $GDAL_VERSION"
pip install --global-option=build_ext --global-option="-I/usr/include/gdal" GDAL==$GDAL_VERSION


# Test that this works
python -c "from osgeo import gdal; print(gdal)"

Kitware also has a pypi index that hosts GDAL wheels for linux systems:

pip install --find-links https://girder.github.io/large_image_wheels GDAL
