
tensorflow easy image augmentation

tfaug package

TensorFlow >= 2 recommends feeding data through tf.data.Dataset. This package supports creating a tf.data.Dataset (generator) and augmenting images.

This package includes the following three classes:

  • DatasetCreator - creates a tf.data.Dataset (generator) from tfrecords or image paths
  • TfrecordConverter - packs images and labels into tfrecord format (the recommended format for better performance)
  • AugmentImg - image augmentation class. It is used implicitly inside DatasetCreator, or you can use it directly.

Features

  • Augments an input image and its label image with the same transformations at the same time.
  • Reduces CPU load by generating all transformation matrices up front (use the input_shape parameter of DatasetCreator() or AugmentImg()).
  • Adjusts sampling ratios across multiple tfrecord files (use the ratio_samples parameter of DatasetCreator().dataset_from_tfrecords). This is effective for class-imbalance problems.
  • Augments per batch, which is more efficient than augmenting each image individually.
  • Uses only TensorFlow operators and built-in functions during augmentation, because other operations or functions (e.g. NumPy functions) can become a training bottleneck.

Dependencies

  • Python >= 3.5
  • tensorflow >= 2.0
  • tensorflow-addons

For the test scripts

  • pillow
  • numpy
  • matplotlib

Supported Augmentations

  • standardize
  • resize
  • random_rotation
  • random_flip_left_right
  • random_flip_up_down
  • random_shift
  • random_zoom
  • random_shear
  • random_brightness
  • random_saturation
  • random_hue
  • random_contrast
  • random_crop
  • random_noise
  • random_blur

Install

python -m pip install git+https://github.com/piyop/tfaug
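If the release published on PyPI matches the repository, a plain pip install should also work; this is an assumption, since the instructions above only mention the GitHub install:

python -m pip install tfaug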

Samples

Simple classification and segmentation usage is shown below. The complete runnable code is in sample_tfaug.py.

Classification Problem

Download the MNIST dataset, convert it to tfrecord format, and train a model. The examples below are part of learn_mnist() in sample_tfaug.py.

Import tfaug and define the directory in which to store the data.

from tfaug import TfrecordConverter, DatasetCreator, AugmentImg
DATADIR = 'testdata/tfaug/'

Load the MNIST dataset using TensorFlow.

import os
import tensorflow as tf

os.makedirs(DATADIR+'mnist', exist_ok=True)
# load the mnist dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
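As a quick sanity check, the loaded arrays have the standard MNIST shapes:

print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)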

Convert Images and Labels to Tfrecord Format by TfrecordConverter()

Convert the training and validation (test) images and labels into tfrecord format with TfrecordConverter. TensorFlow can then load the data from the tfrecord files with minimal overhead and parallel reads.

# save as tfrecord
TfrecordConverter().tfrecord_from_ary_label(
    x_train, y_train, DATADIR+'mnist/train.tfrecord')
TfrecordConverter().tfrecord_from_ary_label(
    x_test, y_test, DATADIR+'mnist/test.tfrecord')
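To verify the conversion, you can count the serialized examples with plain TensorFlow (this check is not part of sample_tfaug.py):

# count the records in each tfrecord file
n_train = sum(1 for _ in tf.data.TFRecordDataset(DATADIR+'mnist/train.tfrecord'))
n_test = sum(1 for _ in tf.data.TFRecordDataset(DATADIR+'mnist/test.tfrecord'))
print(n_train, n_test)  # expected: 60000 10000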

Create Dataset by DatasetCreator()

Create the training and validation datasets from the tfrecords and apply augmentation with DatasetCreator. For a classification problem, pass label_type='class' to the DatasetCreator constructor, along with the image augmentation parameters.

batch_size, shuffle_buffer = 25, 25
# create training and validation dataset using tfaug:
ds_train, train_cnt = (DatasetCreator(shuffle_buffer=shuffle_buffer,
                                      batch_size=batch_size,
                                      label_type='class',
                                      repeat=True,
                                      random_zoom=[0.1, 0.1],
                                      random_rotation=20,
                                      random_shear=[10, 10],
                                      training=True)
                       .dataset_from_tfrecords([DATADIR+'mnist/train.tfrecord']))
ds_valid, valid_cnt = (DatasetCreator(shuffle_buffer=shuffle_buffer,
                                      batch_size=batch_size,
                                      label_type='class',
                                      repeat=True,
                                      training=False)
                       .dataset_from_tfrecords([DATADIR+'mnist/test.tfrecord']))
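Because ds_train is an ordinary tf.data.Dataset, you can pull a single batch to inspect it. The exact image shape depends on how TfrecordConverter serializes the arrays, so the channel dimension below is an assumption:

# peek at one augmented batch
images, labels = next(iter(ds_train))
print(images.shape, labels.shape)  # roughly (25, 28, 28, 1) and (25,)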

Apply a constant rescaling (normalization) to the training and validation datasets.

# constant rescaling to [0, 1]
ds_train = ds_train.map(lambda x, y: (x/255, y))
ds_valid = ds_valid.map(lambda x, y: (x/255, y))

Define and Learn Model Using Defined Datasets

Define Model

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)])

model.compile(optimizer=tf.keras.optimizers.Adam(0.002),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(
                  from_logits=True),
              metrics=['sparse_categorical_accuracy'])

Train the model with model.fit(), which accepts the training and validation tf.data.Dataset objects created by DatasetCreator. model.fit() also needs the number of training and validation iterations per epoch.

# learn model
model.fit(ds_train,
          epochs=10,
          validation_data=ds_valid,
          steps_per_epoch=train_cnt//batch_size,
          validation_steps=valid_cnt//batch_size)

Evaluation

# evaluation result
model.evaluate(ds_valid,
               steps=x_test.shape[0]//batch_size,
               verbose=2)

Segmentation Problem

Download the ADE20k dataset and convert it to tfrecord format. The examples below are part of the ADE20k example in sample_tfaug.py.

First, set the input image size and batch size for the model.

input_size = [256, 256]  # cropped input image size
batch_size = 5

Download and convert the ADE20k dataset to tfrecord format with the function download_and_convert_ADE20k() defined in sample_tfaug.py.

# download and convert
download_and_convert_ADE20k(input_size)

Convert Images and Labels to Tfrecord Format by TfrecordConverter()

In download_and_convert_ADE20k(), the original images are split into patches by TfrecordConverter.get_patch(). ADE20k images do not all have the same size, but the TensorFlow model input must have exactly the same size.

converter = TfrecordConverter()

~~~~~~~~~~~~~~some codes~~~~~~~~~~~~~~~~~~~~~~

img_patches = converter.get_patch(im, input_size, overlap_buffer,
                                  x_borders, y_borders, dtype=np.uint8)
lbl_patches = converter.get_patch(lb, input_size, overlap_buffer,
                                  x_borders, y_borders, dtype=np.uint8)
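For intuition, here is a minimal NumPy sketch of the patching idea. It is only an illustration; the real splitting, including overlap_buffer and border handling, is done by TfrecordConverter.get_patch():

import numpy as np

def naive_patches(img, patch_size):
    # split an image into non-overlapping tiles of patch_size (illustration only)
    ph, pw = patch_size
    h, w = img.shape[:2]
    tiles = [img[y:y+ph, x:x+pw]
             for y in range(0, h - ph + 1, ph)
             for x in range(0, w - pw + 1, pw)]
    return np.stack(tiles)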

Save the images and labels as multiple tfrecord shards.

image_per_shards = 1000

~~~~~~~~~~~~~~some codes~~~~~~~~~~~~~~~~~~~~~~

converter.tfrecord_from_path_label(imgs[sti:sti+image_per_shards],
                                  path_labels,
                                  path_tfrecord)
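The omitted code above loops over the file list in chunks of image_per_shards. A hedged sketch of such a sharding loop is shown below; the names imgs, lbls and the output file pattern are assumptions, not the actual code in sample_tfaug.py:

# assumption: imgs and lbls are parallel lists of image and label file paths
for shard_idx, sti in enumerate(range(0, len(imgs), image_per_shards)):
    path_labels = lbls[sti:sti+image_per_shards]       # labels for this shard (assumed)
    path_tfrecord = f'training_{shard_idx}.tfrecords'  # hypothetical output name
    converter.tfrecord_from_path_label(imgs[sti:sti+image_per_shards],
                                       path_labels,
                                       path_tfrecord)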

Create Dataset by DatasetCreator()

After generating the tfrecord files with TfrecordConverter.tfrecord_from_path_label, create the training and validation datasets from these tfrecords with DatasetCreator. For a segmentation problem, pass label_type='segmentation' to the DatasetCreator constructor.
If you set the input_shape parameter of DatasetCreator() as below, AugmentImg() generates all transformation matrices when __init__() is called, which reduces CPU load during training.

# define training and validation dataset using tfaug:
tfrecords_train = glob(
    DATADIR+'ADE20k/ADEChallengeData2016/tfrecord/training_*.tfrecords')
ds_train, train_cnt = (DatasetCreator(shuffle_buffer=batch_size,
                                      batch_size=batch_size,
                                      label_type='segmentation',
                                      repeat=True,
                                      standardize=True,
                                      random_zoom=[0.1, 0.1],
                                      random_rotation=10,
                                      random_shear=[10, 10],
                                      random_crop=input_size,
                                      dtype=tf.float16,
                                      input_shape=[batch_size]+input_size+[3],#batch, y, x, channel
                                      training=True)
                       .dataset_from_tfrecords(tfrecords_train))

tfrecords_valid = glob(
    DATADIR+'ADE20k/ADEChallengeData2016/tfrecord/validation_*.tfrecords')
ds_valid, valid_cnt = (DatasetCreator(shuffle_buffer=batch_size,
                                      batch_size=batch_size,
                                      label_type='segmentation',
                                      repeat=True,
                                      standardize=True,
                                      random_crop=input_size,
                                      dtype=tf.float16,
                                      training=False)
                       .dataset_from_tfrecords(tfrecords_valid))
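As in the classification example, you can pull one batch to check shapes and dtypes. With the settings above the images should be float16 crops of input_size; the exact label tensor layout depends on how tfaug serializes the segmentation masks, so treat the label shape as an assumption:

# peek at one training batch
imgs, lbls = next(iter(ds_train))
print(imgs.shape, imgs.dtype)  # roughly (5, 256, 256, 3) float16
print(lbls.shape)              # per-pixel class ids, e.g. (5, 256, 256)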

Define and Learn Model Using Defined Datasets

The last step is to define, fit, and evaluate the model.

# define model
model = def_unet(tuple(input_size+[3]), 151)  # 150 classes + padding area

model.compile(optimizer=tf.keras.optimizers.Adam(0.002),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(
                  from_logits=True),
              metrics=['sparse_categorical_accuracy'])

model.fit(ds_train,
          epochs=10,
          validation_data=ds_valid,
          steps_per_epoch=train_cnt//batch_size,
          validation_steps=valid_cnt//batch_size)

model.evaluate(ds_valid,
               steps=valid_cnt//batch_size,
               verbose=2)

Adjust sampling ratios from multiple tfrecord files

If the number of images in each class is significantly imbalanced, you may want to adjust the sampling ratio of each class. DatasetCreator.dataset_from_tfrecords accepts sampling ratios.
In that case, pass a 2-dimensional nested list of tfrecord files as path_records in DatasetCreator.dataset_from_tfrecords() and assign one ratio_samples value for each 1-dimensional list inside path_records. A simple example from test_tfaug.py is shown below:

dc = DatasetCreator(5, 10,
                    label_type='class',
                    repeat=False,
                    **DATAGEN_CONF,  training=True)
ds, cnt = dc.dataset_from_tfrecords([[path_tfrecord_0, path_tfrecord_0],
                                     [path_tfrecord_1, path_tfrecord_1]],
                                    ratio_samples=np.array([1,10],dtype=np.float32))

Use AugmentImg Directly

The examples above create a tf.data.Dataset with DatasetCreator. If you need to control your data flow in some other way, you can use AugmentImg directly.

1. Initialize

from tfaug import AugmentImg 
#set your augment parameters below:
arg_fun = AugmentImg(standardize=False,
                      random_rotation=5, 
                      random_flip_left_right=True,
                      random_flip_up_down=True, 
                      random_shift=(.1,.1), 
                      random_zoom=(.1,.1),
                      random_shear=(.1,.1),
                      random_brightness=.2,
                      random_saturation=None,
                      random_hue=.2,
                      random_contrast=(.2,.5),
                      random_crop=256,
                      interpolation='nearest',
                      clslabel=True,
                      training=True) 

2. Use in tf.data.Dataset.map() after batch()

ds=tf.data.Dataset.zip((tf.data.Dataset.from_tensor_slices(image),
                      tf.data.Dataset.from_tensor_slices(label))) \
                    .shuffle(BATCH_SIZE*10).batch(BATCH_SIZE).map(arg_fun)
model.fit(ds)
