Torchvision-complementary tool to perform batch and GPU data augmentations.
Project description
Efficient vision data augmentations for per-sample and batched data on CPU and GPU.
Under active development, subject to API change
Torchaug
Introduction
Torchaug is a data augmentation library for the PyTorch ecosystem. It is meant to deal efficiently with tensors that live on either CPU or GPU, whether they are processed per sample or in batches.
It seeks to improve on the performance of Torchvision, which is built on top of PyTorch and Pillow to, among other things, perform data augmentations. However, because Torchvision was designed first with per-sample CPU data augmentations in mind, it has several drawbacks that limit its efficiency:
- For data augmentations on GPU, some CPU/GPU synchronizations cannot be avoided.
- For data augmentations applied to a batch, the randomness is sampled once for the whole batch and not for each sample (illustrated in the sketch below).
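As a rough illustration of the second point (a sketch using Torchvision, not Torchaug code; the tensor shape and parameter values below are arbitrary): when a Torchvision transform such as ColorJitter is called once on a whole batch, its random parameters are drawn a single time, so every sample receives the same augmentation.

import torch
from torchvision.transforms import ColorJitter

jitter = ColorJitter(brightness=0.5)   # brightness factor sampled once per call, in [0.5, 1.5]
batch = torch.rand(4, 3, 8, 8) * 0.5   # batch of 4 small images, kept < 1 to avoid clamping

out = jitter(batch)                    # a single call on the whole batch

# The same factor was applied to every sample: the four ratios below are identical.
print((out / batch).mean(dim=(1, 2, 3)))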
Torchaug removes these issues and is meant to be used complementarily with Torchvision. It follows the same nomenclature as Torchvision, with functional augmentations and transform class wrappers. It is split into two packages:
- transforms for per-sample data augmentations
- batch_transforms for batched data augmentations.
More details can be found in the documentation.
To ensure it produces the same data augmentations as Torchvision, each of its components has been tested to match Torchvision outputs.
How to use
- Install a PyTorch >= 2.0 environment.
- Install Torchaug:
  pip install torchaug
- Import data augmentations from either the torchaug.transforms or the torchaug.batch_transforms package. To ease handling of multiple sequential augmentations, wrappers have been defined.
from torchaug.transforms import (
    ImageWrapper,
    RandomColorJitter,
    RandomGaussianBlur,
)
from torchaug.batch_transforms import (
    BatchImageWrapper,
    BatchRandomColorJitter,
    BatchRandomHorizontalFlip,
)

transform = ImageWrapper(
    [RandomColorJitter(...), RandomGaussianBlur(...)],
)

batch_transform = BatchImageWrapper(
    [BatchRandomColorJitter(...), BatchRandomHorizontalFlip(...)],
    inplace=True,
)
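As a minimal usage sketch (assuming the wrappers above have been constructed with real parameters in place of the ... placeholders, and that, like Torchvision transforms, they are callable on tensors):

import torch

img = torch.rand(3, 224, 224)                # a single CPU image
augmented = transform(img)                   # per-sample wrapper

imgs = torch.rand(16, 3, 224, 224)           # a batch of images
if torch.cuda.is_available():
    imgs = imgs.cuda()                       # batched transforms can also run on GPU
augmented_batch = batch_transform(imgs)      # randomness drawn per sample within the batch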
How to contribute
Feel free to contribute to this library by opening issues and/or pull requests. For each feature you implement, add tests to make sure it works, and update the documentation accordingly.
LICENSE
This project is licensed under the Apache License 2.0.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file torchaug-0.3.3.tar.gz.
File metadata
- Download URL: torchaug-0.3.3.tar.gz
- Upload date:
- Size: 32.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | cd46799d3aefd8487ba8212cd55c18a1dc630e19b639980352c391f3e9ff0190
MD5 | 202f3d0ec1df9b6a7305b3646ef36a97
BLAKE2b-256 | 75180ca13af228b17c84416347290ddcd280de68346a5edcfa71423747ec90e6
File details
Details for the file torchaug-0.3.3-py3-none-any.whl.
File metadata
- Download URL: torchaug-0.3.3-py3-none-any.whl
- Upload date:
- Size: 41.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 40b7b344cd326c21bb76047358fc433208483ca0da4d113538631070de56b52e
MD5 | 3087aa90bdbd180f1c890202e3da470a
BLAKE2b-256 | bcc5111f5038a11456fe51719b3cf7c0b8a1639e19af926c422f5bc1706e5249