PyPatchify
Fast and easy image and n-dimensional volume patchification
Install and Requirements
The package can be installed from PyPI:
pip install pypatchify
It supports both numpy arrays and pytorch tensors. However, pytorch is not strictly required. The only necessary dependency is:
- numpy >= 1.21.5
To install all dependencies (including pytorch and pytest), run:
python -m pip install -r requirements.txt
Hello World
The library is designed to be easy to use while keeping the computational overhead as low as possible. The following simple example shows how to patchify and unpatchify a batch of RGB images:
import pypatchify
import numpy as np
# create 16 random rgb images
imgs = np.random.uniform(0, 1, size=(16, 3, 256, 256))
# patchify into non-overlapping blocks of size 64x64
patched_imgs = pypatchify.patchify(imgs, (64, 64))
# re-create the original images from the patches
imgs = pypatchify.unpatchify(patched_imgs, (256, 256))
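Since unpatchify re-creates the original array from its patches, a quick round-trip check makes a handy sanity test. The following is a minimal sketch based on the example above; it assumes the reconstruction is exact, as the comments suggest:
import pypatchify
import numpy as np
# patchify and immediately reconstruct a batch of images
imgs = np.random.uniform(0, 1, size=(16, 3, 256, 256))
patched_imgs = pypatchify.patchify(imgs, (64, 64))
restored_imgs = pypatchify.unpatchify(patched_imgs, (256, 256))
# the reconstruction should match the original
assert np.allclose(imgs, restored_imgs)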
In case the created patches are processed further, for example by passing them through a neural network, it might make sense to collapse the patches into the batch dimension as follows:
imgs = np.random.uniform(0, 1, size=(16, 3, 256, 256))
# patchify into non-overlapping blocks of size 64x64
patched_imgs = pypatchify.patchify_to_batches(imgs, (64, 64), batch_dim=0)
# re-create the original images from the patches
imgs = pypatchify.unpatchify_from_batches(patched_imgs, (256, 256), batch_dim=0)
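To make the effect of the batch collapse concrete: each 256x256 image splits into a 4x4 grid of 64x64 patches, i.e. 16 patches per image, so the 16 input images presumably yield a collapsed batch of 16 * 16 = 256 patches. The exact output layout below is an assumption for illustration, not documented above:
imgs = np.random.uniform(0, 1, size=(16, 3, 256, 256))
patched_imgs = pypatchify.patchify_to_batches(imgs, (64, 64), batch_dim=0)
# inspect the collapsed batch; presumably (256, 3, 64, 64)
print(patched_imgs.shape)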
Note that the implementations are not restricted to 2D images; they can patchify and unpatchify any multi-dimensional volume:
vols = np.random.uniform(0, 1, size=(16, 32, 32, 64, 64))
# patchify into non-overlapping blocks of size 16x8x32x16
patched_vols = pypatchify.patchify_to_batches(vols, (16, 8, 32, 16), batch_dim=0)
# re-create the original volumes from the patches
vols = pypatchify.unpatchify_from_batches(patched_vols, (32, 32, 64, 64), batch_dim=0)
GPU Acceleration and Differentiability
When working with neural networks, it is usually more convenient to work directly with pytorch tensors. This can be done by simply passing the torch tensors to the function at hand. Note that all implementations accept GPU tensors, which drastically reduces the runtime of the patchification functions; there is also no need to move memory between CPU and GPU.
import torch
import pypatchify
# create a random img tensor and move to cuda
imgs = torch.rand(16, 3, 256, 256).cuda()
# patchify into non-overlapping blocks of size 64x64
patched_imgs = pypatchify.patchify_to_batches(imgs, (64, 64), batch_dim=0)
# re-create the original images from the patches
imgs = pypatchify.unpatchify_from_batches(patched_imgs, (256, 256), batch_dim=0)
Furthermore, all functions are completely differentiable, allowing gradients to propagate through patchification and un-patchification.
# let f and g be differentiable functions
# possibly neural networks
f = torch.tanh # processes the images
g = torch.sigmoid # processes the patched images
# create a random img tensor and move to cuda
imgs = torch.rand(16, 3, 256, 256, requires_grad=True)
# apply functions and patchify
patched_imgs = pypatchify.patchify_to_batches(f(imgs), (64, 64), batch_dim=0)
unpatched_imgs = pypatchify.unpatchify_from_batches(g(patched_imgs), (256, 256), batch_dim=0)
# compute some kind of loss and backpropagate
loss = unpatched_imgs.sum() # dummy loss
loss.backward()
# check gradients in input imgs
grads = imgs.grad # non-zero gradients: the chain through f, patchify, g and unpatchify is differentiable end-to-end
Other Frameworks
The library makes it very easy to support other frameworks besides numpy and pytorch. All that needs to be done is to implement the following few functions:
- shape: get the shape of a given tensor
- strides: get the strides of the underlying memory
- reshape: reshape a given tensor to a given shape
- transpose: permute the dimensions of a given tensor according to a given permutation
- as_strided: apply a given shape and strides to the memory of a given tensor
Note that most frameworks already support these functions. To integrate a new framework, simply inherit from the pypatchify.patch.Patchify class and provide these functions:
from typing import Callable
from pypatchify.patch import Patchify

class NewFramework(Patchify[NewTensorType]):
    # get shape and strides from tensor object
    shape: Callable
    strides: Callable
    # tensor operations
    reshape: Callable
    transpose: Callable
    as_strided: Callable
The class now holds static member functions for all the patchification functionality, including patchify, unpatchify, etc.
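As a purely illustrative example, the sketch below re-registers numpy through this interface (numpy is of course already supported out of the box). The class name is hypothetical, and it assumes Patchify expects numpy-style byte strides and the standard numpy argument order for reshape, transpose and as_strided:
from typing import Callable
import numpy as np
from pypatchify.patch import Patchify

class NumpyPatchify(Patchify[np.ndarray]):
    # get shape and (byte) strides from the array object
    shape: Callable = staticmethod(np.shape)
    strides: Callable = staticmethod(lambda x: x.strides)
    # tensor operations
    reshape: Callable = staticmethod(np.reshape)
    transpose: Callable = staticmethod(np.transpose)
    as_strided: Callable = staticmethod(np.lib.stride_tricks.as_strided)

# the subclass now exposes the patchification functions, e.g.
imgs = np.zeros((16, 3, 256, 256))
patched_imgs = NumpyPatchify.patchify(imgs, (64, 64))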
File details
Details for the file pypatchify-0.1.4.tar.gz.
File metadata
- Download URL: pypatchify-0.1.4.tar.gz
- Upload date:
- Size: 9.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | b13bae9891aeec9768c79eac3f3979b5f2ef91f1dcb006aa186b61003fe7ab38
MD5 | cd60b918cb721006424d06f4b86c0385
BLAKE2b-256 | f45ed11e5649b2f950466e5bbd7ecec540131b8a0182d174e662bab57a5493d0
File details
Details for the file pypatchify-0.1.4-py3-none-any.whl.
File metadata
- Download URL: pypatchify-0.1.4-py3-none-any.whl
- Upload date:
- Size: 11.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | fab74574b8ccda2de5cfa36eec6e02bd22daf8238830ba2a4f8926e842c355c3
MD5 | ad643fdeb55a76ae50471726f170824e
BLAKE2b-256 | 5407b37a22368b689ca5d7e0aa053d17a5f9be57652efa03763f6aed743cc9fc