tfAugmentor

An image augmentation library for TensorFlow. The library is designed to be easily used with tf.data.Dataset. The augmentor accepts either a tf.data.Dataset object or a nested tuple of numpy arrays.

Augmentations

(Demo images not rendered here: Original, Flip, Rotation, Translation, Crop, Elastic Deform, Gaussian Blur, Contrast, Gamma.)

Demo

(Demo images not rendered here: Random Rotation, Random Translation, Random Crop, Random Contrast, Random Gamma, Elastic Deform.)

Installation

tfAugmentor is written in Python and can be easily installed via:

pip install tfAugmentor

Required packages:

  • tensorflow (developed under TF 2.4; should work with any 2.x version)
  • numpy (version 1.20 may cause an error in tf.meshgrid; use a different version)
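
If in doubt, you can verify the installed numpy version at runtime (an optional sanity check, not part of tfAugmentor):

import numpy as np
# numpy 1.20.x is reported above to break tf.meshgrid
assert not np.__version__.startswith('1.20'), 'use a numpy version other than 1.20'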

Quick Start

tfAugmentor is implemented to work seamlessly with tf.data. A tf.data.Dataset object can be processed directly by tfAugmentor.

To instantiate an Augmentor object, three arguments are expected:

class Augmentor(object):
    def __init__(self, signature, image=[], label=[]):
        ...

  • signature: represents the structure of the dataset
    • a nested tuple of strings
    • a list/tuple of the dictionary keys, if your dataset is in dictionary form
  • image: list of string items in signature that will be treated as normal images
  • label: list of string items in signature that will be treated as segmentation masks

Note: only the items in 'image' and 'label' will be processed; the others remain untouched.

A simple example

import tensorflow as tf
import tfAugmentor as tfaug

# new tfAugmentor object
aug = tfaug.Augmentor(signature=('image', ('mask1', 'mask2')), 
                      image=['image'], 
                      label=['mask1', 'mask2'])

# add augmentation operations
aug.flip_left_right(probability=0.5)
aug.rotate90(probability=0.5)
aug.elastic_deform(strength=2, scale=20, probability=1)

# assume we have three numpy arrays
X_image = ... # shape [batch, height, width, channel]
Y_mask1 = ... # shape [batch, height, width, 1]
Y_mask2 = ... # shape [batch, height, width, 1]

# create tf.data.Dataset object
tf_dataset = tf.data.Dataset.from_tensor_slices((X_image, (Y_mask1, Y_mask2)))
# do the actual augmentation
ds1 = aug(tf_dataset)

# or you can directly pass the numpy arrays, a tf.data.Dataset object will be returned
ds2 = aug((X_image, (Y_mask1, Y_mask2)), keep_size=True)
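
Both ds1 and ds2 are ordinary tf.data.Dataset objects, so they can be consumed as usual, for example (a minimal sketch):

for image, (mask1, mask2) in ds1.take(1):
    print(image.shape, mask1.shape, mask2.shape)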

If the data is passed as a Python dictionary, the signature should be the list/tuple of its keys. For example:

import tensorflow as tf
import tfAugmentor as tfaug

# new tfAugmentor object
aug = tfaug.Augmentor(signature=('image', 'mask1', 'mask2'), 
                      image=['image'], 
                      label=['mask1', 'mask2'])

# add augmentation operations
aug.flip_left_right(probability=0.5)
aug.rotate90(probability=0.5)
aug.elastic_deform(strength=2, scale=20, probability=1)

# assume we have three numpy arrays
X_image = ... # shape [batch, height, width, channel]
Y_mask1 = ... # shape [batch, height, width, 1]
Y_mask2 = ... # shape [batch, height, width, 1]

ds_dict = {'image': X_image,
           'mask1': Y_mask1,
           'mask2': Y_mask2}
# create tf.data.Dataset object
tf_dataset = tf.data.Dataset.from_tensor_slices(ds_dict)
# do the actual augmentation
ds1 = aug(tf_dataset)

# or directly pass the data
ds2 = aug(ds_dict)
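
Each element of the returned dataset is presumably a dictionary keyed by the signature, so it can be consumed like this (a minimal sketch, assuming the dictionary structure is preserved):

for sample in ds2.take(1):
    print(sample['image'].shape, sample['mask1'].shape)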

Note: all added operations are executed one by one; to realize parallel pipelines, create multiple tfAugmentor instances, as the example below shows.

A more complicated example

import tensorflow as tf
import tfAugmentor as tfaug

# since 'class' is neither in 'image' nor in 'label', it will not be processed
aug1 = tfaug.Augmentor((('image_rgb', 'image_depth'), ('semantic_mask', 'class')), 
                       image=['image_rgb', 'image_depth'], 
                       label=['semantic_mask'])
aug2 = tfaug.Augmentor((('image_rgb', 'image_depth'), ('semantic_mask', 'class')), 
                       image=['image_rgb', 'image_depth'], 
                       label=['semantic_mask'])

# add different augmentation operations to aug1 and aug2
aug1.flip_left_right(probability=0.5)
aug1.random_crop(scale_range=(0.7, 0.9), probability=0.5)
aug2.elastic_deform(strength=2, scale=20, probability=1)

# assume we have 1000 data samples
X_rgb = ...  # shape [1000 x 512 x 512 x 3]
X_depth = ... # shape [1000 x 512 x 512 x 1]
Y_semantic_mask = ... # shape [1000 x 512 x 512 x 1]
Y_class = ... # shape [1000 x 1]

# create tf.data.Dataset object
ds_origin = tf.data.Dataset.from_tensor_slices(((X_rgb, X_depth), (Y_semantic_mask, Y_class)))
# do the actual augmentation
ds1 = aug1(ds_origin)
ds2 = aug2(ds_origin)
# combine them
ds = ds_origin.concatenate(ds1)
ds = ds.concatenate(ds2)
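
A typical follow-up (plain tf.data, not part of tfAugmentor) is to shuffle and batch the combined dataset before training:

ds = ds.shuffle(buffer_size=1000).batch(16)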

Main Features

The argument 'probability' controls the probability of a certain augmentation being applied.
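
For example (a sketch, assuming the random decision is drawn independently per sample):

a.rotate90(probability=1)           # applied to every sample
a.flip_left_right(probability=0.5)  # applied to roughly half of the samples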

Mirroring

# flip the image left right  
aug.flip_left_right(probability=1)
# flip the image up down 
aug.flip_up_down(probability=1) 

Rotation

# rotate by 90 degrees clockwise
a.rotate90(probability=1)
# rotate by 180 degrees clockwise
a.rotate180(probability=1)
# rotate by 270 degrees clockwise
a.rotate270(probability=1)
# rotate by a certain degree, Args: angle - scalar, in degrees
a.rotate(angle, probability=1)
# randomly rotate the image
a.random_rotate(probability=1)

Translation

# translate the image, Args: offset - [x, y]
a.translate(offset, probability=1)
# randomly translate the image
a.random_translate(translation_range=[-100, 100], probability=1)

Crop and Resize

# randomly crop a sub-image and resize to the original image size
a.random_crop(scale_range=[0.5, 0.8], preserve_aspect_ratio=False, probability=1)

Elastic Deformation

# perform elastic deformation
a.elastic_deform(scale=10, strength=200, probability=1)

Photometric Adjustment

# adjust image contrast randomly
a.random_contrast(contrast_range=[0.6, 1.4], probability=1)
# perform gamma correction with random gamma values
a.random_gamma(gamma_range=[0.5, 1.5], probability=1)

Noise

# blur the image with a Gaussian kernel
a.gaussian_blur(sigma=2, probability=1)

Caution

  • If .batch() of tf.data.Dataset is used before augmentation, please set drop_remainder=True. Otherwise, the batch size will be set to None, while the augmentation of tfAugmentor requires the batch dimension to be known.
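
For example (a minimal sketch, reusing the arrays and signature from the simple example above):

import tensorflow as tf
import tfAugmentor as tfaug

aug = tfaug.Augmentor(signature=('image', ('mask1', 'mask2')),
                      image=['image'], label=['mask1', 'mask2'])
aug.flip_left_right(probability=0.5)

ds = tf.data.Dataset.from_tensor_slices((X_image, (Y_mask1, Y_mask2)))
# drop_remainder=True keeps the batch dimension statically known,
# as required by the augmentation operations
ds = ds.batch(8, drop_remainder=True)
ds_aug = aug(ds)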
