
A library to create convolutions from any torch network

Project description

A library to compute N-D convolutions, transposed convolutions and recursive convolutions in PyTorch, using either a linear filter or an arbitrary function as the filter. It can also automatically find convolution parameters to match a desired output shape.

Installation

Install with pip3 install torchConvNd

Documentation

convNd

convNd(x, weight, kernel, stride=1, dilation=1, padding=0, bias=None, padding_mode='constant', padding_value=0)

N-Dimensional convolution.

Inputs :

x : torch.tensor of shape (batch_size, C_in, *shape).

weight : torch.tensor of size (C_in * kernel[0] * kernel[1] * ... * kernel[n_dims - 1], C_out).

kernel : array-like or int, kernel size of the convolution.

stride : array-like or int, stride length of the convolution.

dilation : array-like or int, dilation of the convolution.

padding : None, array-like or int, padding size.

bias : None or torch.tensor of size (C_out, ).

padding_mode, padding_value: see pad.

Outputs :

out : torch.tensor of shape (batch_size, C_out, *shape_out).
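Conceptually, this kind of linear N-D convolution can be emulated in plain torch by extracting sliding windows and applying a single matrix multiplication. The sketch below (a 2-D illustration of the idea, not the library's actual implementation) uses the weight shape documented above:

```python
import torch

# Illustrative 2-D case: sliding windows + one matmul.
batch_size, C_in, C_out = 2, 3, 5
kernel = (3, 3)
x = torch.randn(batch_size, C_in, 8, 8)

# Extract all kernel-sized windows: (batch, C_in*k0*k1, n_windows)
windows = torch.nn.functional.unfold(x, kernel_size=kernel)

# weight has shape (C_in * k0 * k1, C_out), as documented above
weight = torch.randn(C_in * kernel[0] * kernel[1], C_out)

# One matmul across all windows, then restore the spatial layout
out = windows.transpose(1, 2).matmul(weight)  # (batch, n_windows, C_out)
out = out.transpose(1, 2).reshape(batch_size, C_out, 6, 6)
```

With an 8x8 input and a 3x3 kernel (stride 1, no padding), shape_out is 6x6, matching the usual convolution arithmetic.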

ConvNd

ConvNd(in_channels, out_channels, kernel, stride=1, dilation=1, padding=0, bias=False, padding_mode='constant', padding_value=0)

Equivalent of convNd as a torch.nn.Module class.

Inputs :

in_channels : int, number of in channels.

out_channels : int, number of out channels.

bias : boolean, whether to include a bias term.

kernel, stride, dilation, padding, padding_mode, padding_value: Same as in convNd.

convTransposeNd

convTransposeNd(x, weight, kernel, stride=1, dilation=1, padding=0, bias=None, padding_mode='constant', padding_value=0)

Transposed convolution (implemented with repeat_interleave).

Inputs :

x : torch.tensor of shape (batch_size, C_in, *shape).

weight : torch.tensor of size (C_in * kernel[0] * kernel[1] * ... * kernel[n_dims - 1], C_out).

kernel : array-like or int, kernel size of the transposed convolution.

stride : array-like or int, stride length of the transposed convolution.

dilation : array-like or int, dilation of the convolution.

padding : None, array-like or int, padding size.

bias : None or torch.tensor of size (C_out, ).

padding_mode, padding_value: see pad.

Outputs :

out : torch.tensor of shape (batch_size, *shape_out).

ConvTransposeNd

ConvTransposeNd(in_channels, out_channels, kernel, stride=1, dilation=1, padding=0, bias=None, padding_mode='constant', padding_value=0)

Equivalent of convTransposeNd as a torch.nn.Module class.

Inputs :

in_channels : int, number of in channels.

out_channels : int, number of out channels.

bias : boolean, whether to include a bias term.

kernel, stride, dilation, padding, padding_mode, padding_value: Same as in convTransposeNd.

convNdFunc

convNdFunc(x, func, kernel, stride=1, padding=0, stride_transpose=1, padding_mode='constant', padding_value=0, *args)

Equivalent of convNd using an arbitrary filter func.

Inputs :

x : torch.tensor of shape (batch_size, C_in, *shape).

func : function that takes a torch.tensor of shape (batch_size, C_in, *kernel) and outputs a torch.tensor of shape (batch_size, C_out).

kernel : array-like or int, kernel size of the convolution.

stride : array-like or int, stride length of the convolution.

dilation : array-like or int, dilation of the convolution.

padding : None, array-like or int, padding size.

stride_transpose : array-like or int, equivalent to stride in convTransposeNd.

padding_mode, padding_value: see pad.

*args: additional arguments to pass to func.

Outputs :

out : torch.tensor of shape (batch_size, *shape_out).

*(additional returns) : any additional returns of func.
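A valid func only needs to map one kernel-sized window to a feature vector, per the contract above. A minimal sketch in plain torch (the names here are illustrative, not part of the library):

```python
import torch

# Illustrative filter for kernel=(3, 3), C_in=3, C_out=5: flatten each
# window and apply a small linear layer. This satisfies the documented
# contract: (batch_size, C_in, *kernel) -> (batch_size, C_out).
linear = torch.nn.Linear(3 * 3 * 3, 5)

def func(window):
    return linear(window.reshape(window.shape[0], -1))

window = torch.randn(4, 3, 3, 3)  # one (batch, C_in, 3, 3) window
out = func(window)                # (4, 5)
```

Any module with this input/output shape, such as a deeper MLP, could presumably serve as the filter in the same way.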

ConvNdFunc

ConvNdFunc(func, kernel, stride=1, padding=0, padding_mode='constant', padding_value=0)

Equivalent of convNdFunc as a torch.nn.Module class.

Inputs :

func, kernel, stride, dilation, padding, stride_transpose, padding_mode, padding_value : Same as in convNdFunc.

torchConvNd.Utils

listify

listify(x, dims=1)

Transforms x into a list of length dims if it is not already iterable.

Inputs :

x : array-like, string, or non-iterable object; the object to listify.

dims : int, length of the list to produce.

Outputs :

out : array-like, listified version of x.
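A plausible pure-Python sketch of this behavior (an assumption about the approach, not the library's source):

```python
def listify(x, dims=1):
    """Return x as a list of length dims; scalars and strings are repeated."""
    if hasattr(x, '__iter__') and not isinstance(x, str):
        return list(x)
    return [x] * dims
```

For example, listify(3, dims=2) gives [3, 3], while listify([1, 2]) is returned unchanged as [1, 2]. This lets every parameter (kernel, stride, ...) be given either per-axis or as a single value.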

convShape

convShape(input_shape, kernel, stride=1, dilation=1, padding=0, stride_transpose=1)

Compute the output shape of a convolution.

Inputs :

input_shape : array-like or int, shape of the input tensor.

kernel : array-like or int, kernel size of the convolution.

stride : array-like or int, stride length of the convolution.

dilation : array-like or int, dilation of the convolution.

padding : None, array-like or int, padding size.

stride_transpose : array-like or int, equivalent to stride in convTransposeNd.

Outputs :

shape : array-like or int, predicted output shape of the convolution.
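The underlying arithmetic presumably follows the standard convolution shape formula; here is a one-dimensional sketch (assuming stride_transpose scales the input length first, as repeat_interleave would in convTransposeNd):

```python
def conv_shape_1d(length, kernel, stride=1, dilation=1, padding=0,
                  stride_transpose=1):
    # Assumed formula: the transposed stride repeats inputs first, then
    # the usual (L + 2p - d*(k - 1) - 1) // s + 1 convolution arithmetic.
    length = length * stride_transpose
    return (length + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
```

For instance, conv_shape_1d(28, 3, padding=1) returns 28 (a "same" convolution), while conv_shape_1d(28, 3) returns 26.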

autoShape

autoShape(input_shape, kernel, output_shape, max_dilation=3)

Compute the optimal parameters stride, dilation, padding and stride_transpose to match output_shape.

Inputs :

input_shape : array-like or int, shape of the input tensor.

kernel : array-like or int, kernel size of the convolution.

output_shape : array-like or int, target shape of the convolution.

max_dilation : array-like or int, maximum value of dilation.

Outputs :

kernel : array-like or int, listified(kernel, len(input_shape)) if input_shape is a list, else kernel.

stride : array-like or int, stride length of the convolution.

dilation : array-like or int, dilation of the convolution.

padding : array-like or int, padding size.

stride_transpose : array-like or int, equivalent to stride in convTransposeNd.
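One way to picture this search (an assumption about the approach, not the library's code) is a brute-force scan over small parameter ranges for each axis, keeping the first combination whose predicted output length matches the target:

```python
def auto_shape_1d(in_len, kernel, out_len, max_dilation=3):
    # Hypothetical brute-force search for a single axis; returns
    # (stride, dilation, padding, stride_transpose) whose predicted
    # output length matches out_len, or None if no combination works.
    for stride_transpose in range(1, 4):
        for stride in range(1, 5):
            for dilation in range(1, max_dilation + 1):
                for padding in range(0, kernel):
                    length = in_len * stride_transpose
                    out = (length + 2 * padding
                           - dilation * (kernel - 1) - 1) // stride + 1
                    if out == out_len:
                        return stride, dilation, padding, stride_transpose
    return None
```

For example, matching a length-28 input to a length-28 output with kernel 3 yields stride 1, dilation 1, padding 1, stride_transpose 1.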

pad

pad(x, padding, padding_mode='constant', padding_value=0)

Based on torch.nn.functional.pad.

Inputs :

x : torch.tensor, input tensor.

padding : array-like or int, size of the padding (identical on each side).

padding_mode : 'constant', 'reflect', 'replicate' or 'circular', see torch.nn.functional.pad.

padding_value : float, value to pad with when padding_mode is 'constant'.

Outputs :

out : torch.tensor, padded tensor.
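Since this builds on torch.nn.functional.pad, symmetric padding on each side can be illustrated directly with that function (a sketch of the presumed behavior of pad(x, padding=1)):

```python
import torch
import torch.nn.functional as F

x = torch.arange(6.0).reshape(1, 1, 6)  # (batch, channel, length)

# Pad one element on each side of the last axis with a constant value,
# mirroring pad(x, padding=1, padding_mode='constant', padding_value=0)
out = F.pad(x, (1, 1), mode='constant', value=0)  # length 6 -> 8
```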

Pad

Pad(padding, padding_mode='constant', padding_value=0)

Equivalent of pad that returns a function with the padding parameters fixed.

Inputs :

padding, padding_mode, padding_value : same as in pad.

view

view(x, kernel, stride=1)

Generate a view (for a convolution) with parameters kernel and stride.

Inputs :

x : torch.tensor, input tensor.

kernel : array-like or int, kernel size of the convolution.

stride : array-like or int, stride length of the convolution.

Outputs :

out : torch.tensor, strided tensor.
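A 1-D analogue of such a strided view can be produced with torch.Tensor.unfold (a sketch of the concept, not the library's implementation):

```python
import torch

x = torch.arange(10)

# Sliding windows of size 4 with stride 2 along the first axis:
# each row of `out` is one kernel-sized window of x.
out = x.unfold(0, 4, 2)  # shape (4, 4)
```

Because this is a view over strided memory, no data is copied; each window shares storage with x.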

View

View(kernel, stride=1)

Equivalent of view that returns a function with kernel and stride fixed.

Inputs :

kernel, stride : same as in view.

Flatten

Flatten()

A torch.nn.Module class that takes a tensor of shape (N, i, j, k, ...) and reshapes it to (N, i*j*k*...).

Reshape

Reshape(shape)

A torch.nn.Module class that takes a tensor of shape (N, i) and reshapes it to (N, *shape).

Inputs :

shape : array-like or int, shape to obtain.

Clip

Clip(shape)

A torch.nn.Module that extracts a centered slice of size shape from a tensor.

Inputs :

shape : array-like or int, shape to obtain (axes where shape is -1 are left unchanged).
