Test Time Augmentations
TTAugment
Perform augmentations during inference and aggregate the results of all applied augmentations to produce a final output.
Installation
pip install git+https://github.com/cypherics/ttAugment.git#egg=ttAugment
Supported Augmentation
The library supports all color, blur, and contrast transformations provided by imgaug, along with custom geometric transformations.
- Mirror: Crop an image to `transform_dimension` and mirror pixels to match the size of `network_dimension`
- CropScale: Crop an image to `transform_dimension` and rescale it to match the size of `network_dimension`
- NoAugment: Keep the input unchanged
- Crop: Crop an image to `transform_dimension`
- Rot: Rotate an image
- FlipHorizontal: Flip an image horizontally
- FlipVertical: Flip an image vertically
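As a conceptual illustration of the `Mirror` strategy (a minimal NumPy sketch, not the library's implementation), reflect-padding can grow a crop to a larger target size by mirroring border pixels:

```python
import numpy as np

def mirror_pad(image: np.ndarray, target_hw: tuple) -> np.ndarray:
    """Reflect-pad a (height, width, channels) image up to target_hw."""
    h, w, _ = image.shape
    pad_h = target_hw[0] - h
    pad_w = target_hw[1] - w
    # Split the padding evenly between both sides of each spatial axis
    pads = ((pad_h // 2, pad_h - pad_h // 2),
            (pad_w // 2, pad_w - pad_w // 2),
            (0, 0))
    return np.pad(image, pads, mode="reflect")

crop = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)  # a 4x4 "crop"
padded = mirror_pad(crop, (6, 6))                          # grown to 6x6
print(padded.shape)  # (6, 6, 3)
```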
Parameters
What if the test image is much bigger than what the model needs as input? The library has it covered: it generates fragments according to the specified dimension, so you can run inference on the desired dimension and still get the output at the size of the original test image.
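The fragment-and-stitch idea can be sketched generically in NumPy (a simplified version with non-overlapping tiles and a stand-in `infer` function; the library's actual fragment generation and merging may differ):

```python
import numpy as np

def tile_and_stitch(image, tile, infer):
    """Split a (H, W, C) image into non-overlapping tiles, run `infer`
    on each tile, and stitch the results back to the original size."""
    H, W, C = image.shape
    th, tw = tile
    out = np.zeros_like(image, dtype=float)
    for y in range(0, H, th):
        for x in range(0, W, tw):
            fragment = image[y:y + th, x:x + tw]
            out[y:y + th, x:x + tw] = infer(fragment)
    return out

img = np.random.rand(1500, 1500, 3)
# A dummy "model" that halves pixel values, applied tile by tile
result = tile_and_stitch(img, (500, 500), infer=lambda t: t * 0.5)
print(result.shape)  # (1500, 1500, 3)
```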
- `image_dimension`: the size of the input image as (batch, height, width, channels), e.g. `image_dimension=(2, 1500, 1500, 3)`
- `transformers`: a list of augmentations to perform. `transform_dimension` specifies the input size the network expects; if it is smaller than `image_dimension`, the library generates fragments of size `transform_dimension` for inference and applies the transformation to each fragment.

```python
transformers = [
    {
        "name": "CLAHE",
        "transform_dimension": (2, 1000, 1000, 3),
    },
]
```
- Dealing with parameters during scaling transformations: two transformations perform scaling on the test images. For these, both `transform_dimension` and `network_dimension` are mandatory parameters.

```python
transformers = [
    {
        "name": "Mirror",
        "transform_dimension": (2, 800, 800, 3),
        "network_dimension": (2, 1000, 1000, 3),
    },
    {
        "name": "ScaleCrop",
        "transform_dimension": (2, 800, 800, 3),
        "network_dimension": (2, 1000, 1000, 3),
    },
]
```
The `network_dimension` parameter tells the library to override the network input: the image is cropped to `transform_dimension` and rescaled to `network_dimension` to meet the network's requirement. Again, the library merges all fragments to form a final output of size `image_dimension`.
- For `Rot` (rotate), add `"param": angle` as an additional argument.
- If the test image already has the same dimensions the network expects, simply remove the `transform_dimension` parameter.
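The crop-then-rescale step used by the scaling transformations can be sketched as follows (a centre crop plus nearest-neighbour resize in plain NumPy; the library's actual cropping and interpolation may differ):

```python
import numpy as np

def crop_and_rescale(image, crop_hw, target_hw):
    """Centre-crop a (H, W, C) image to crop_hw, then nearest-neighbour
    resize the crop to target_hw."""
    h, w = crop_hw
    H, W, _ = image.shape
    y0, x0 = (H - h) // 2, (W - w) // 2
    crop = image[y0:y0 + h, x0:x0 + w]
    # Nearest-neighbour index maps from the target grid back to the crop grid
    rows = np.arange(target_hw[0]) * h // target_hw[0]
    cols = np.arange(target_hw[1]) * w // target_hw[1]
    return crop[rows][:, cols]

img = np.random.rand(1500, 1500, 3)
net_input = crop_and_rescale(img, (800, 800), (1000, 1000))
print(net_input.shape)  # (1000, 1000, 3)
```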
Inference
Define the tta object:

```python
tta = Segmentation.populate_color(
    image_dimension=(2, 1500, 1500, 3),
    transformers=transformers,  # transformers as defined in Parameters
)
```
Calling the generator
The input image is required to be a 4D numpy array of shape (batch, height, width, channels).

```python
for image in images:  # loop over all the images
    for transformation in tta.run():
        # Apply the forward transformation
        forward_image = tta.forward(transformation, image=image)

        # Apply normalization, convert the input to the framework-specific
        # type, and perform inference
        inferred_image = model.predict(forward_image)

        # Make sure to convert inferred_image to a 4D numpy array
        # [batch, height, width, classes]
        reversed_image = tta.reverse(transformation, inferred_image)

        # Add the reversed image to the transformation
        tta.update(transformation, reversed_image)

# Access the output
output = tta.transformations.output
```
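For intuition, the same forward/inference/reverse/aggregate flow can be written without the library, using horizontal and vertical flips and mean averaging (the `model_predict` function here is a toy identity stand-in, not a real model):

```python
import numpy as np

def model_predict(x):
    # Toy stand-in for a real model: identity "segmentation"
    return x

image = np.random.rand(1, 64, 64, 3)  # (batch, height, width, channels)

# Forward transform, inference, reverse transform, then aggregate
predictions = []
for forward, reverse in [
    (lambda x: x, lambda x: x),                          # NoAugment
    (lambda x: x[:, :, ::-1], lambda x: x[:, :, ::-1]),  # FlipHorizontal
    (lambda x: x[:, ::-1], lambda x: x[:, ::-1]),        # FlipVertical
]:
    inferred = model_predict(forward(image))
    predictions.append(reverse(inferred))

output = np.mean(predictions, axis=0)
print(output.shape)  # (1, 64, 64, 3)
```

Because each reverse transform undoes its forward transform before averaging, the aggregated output stays aligned with the original image.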