
Dilated convolution with learnable spacings, built on PyTorch.


Dilated-Convolution-with-Learnable-Spacings-PyTorch

This is an official implementation of Dilated Convolution with Learnable Spacings by Ismail Khalfaoui Hassani, Thomas Pellegrini and Timothée Masquelier.

Dilated Convolution with Learnable Spacings (abbreviated to DCLS) is a novel convolution method based on gradient descent and interpolation. It can be seen as an improvement over the well-known dilated convolution, which has been widely explored in deep convolutional neural networks and which inflates the convolutional kernel by inserting spaces between the kernel elements.

In DCLS, the positions of the weights within the convolutional kernel are learned in a gradient-based manner; the inherent non-differentiability caused by the integer nature of the positions in the kernel is solved by taking advantage of an interpolation method.
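To illustrate the idea, here is a minimal sketch of a bilinear kernel construction (the function name and shapes are assumptions for illustration, not the library's actual ConstructKernel implementation): each weight is scattered onto the four integer cells surrounding its continuous position, with bilinear coefficients, so gradients flow to the positions themselves.

```python
import torch

def construct_kernel_bilinear(w, p, K):
    """Scatter n weights w onto a K x K dense kernel at continuous
    positions p (shape (n, 2), values in [0, K-1]), differentiably."""
    p = p.clamp(0, K - 1 - 1e-6)
    lo = p.floor()
    f = p - lo                             # fractional offsets; gradient flows to p
    y, x = lo[:, 0].long(), lo[:, 1].long()
    fy, fx = f[:, 0], f[:, 1]
    kernel = w.new_zeros(K, K)
    # Each weight is split among its 4 neighbouring integer cells.
    kernel = kernel.index_put((y,     x),     w * (1 - fy) * (1 - fx), accumulate=True)
    kernel = kernel.index_put((y + 1, x),     w * fy       * (1 - fx), accumulate=True)
    kernel = kernel.index_put((y,     x + 1), w * (1 - fy) * fx,       accumulate=True)
    kernel = kernel.index_put((y + 1, x + 1), w * fy       * fx,       accumulate=True)
    return kernel

w = torch.tensor([2.0], requires_grad=True)
p = torch.tensor([[1.5, 2.0]], requires_grad=True)
kernel = construct_kernel_bilinear(w, p, K=5)
(kernel ** 2).sum().backward()             # gradients reach both w and p
```

Because the bilinear coefficients are continuous functions of the positions, the same backward pass that updates the weights also updates where the weights sit in the kernel.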

For now, the method has only been implemented in PyTorch.

The method is described in the article Dilated Convolution with Learnable Spacings. The Gaussian and triangle versions are described in the arXiv preprint Dilated Convolution with Learnable Spacings: beyond bilinear interpolation.
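To give the flavour of the Gaussian variant (a sketch under assumed shapes and normalisation, not the preprint's exact formulation), each weight is spread over the whole grid by a separable 2-D Gaussian centred at its learnable position, with a learnable standard deviation:

```python
import torch

def construct_kernel_gauss(w, p, sigma, K):
    """Spread n weights over a K x K grid with separable 2-D Gaussians
    centred at continuous positions p (n, 2); sigma (n, 1) is learnable."""
    grid = torch.arange(K, dtype=w.dtype)
    gy = torch.exp(-0.5 * ((grid[None, :] - p[:, 0:1]) / sigma) ** 2)  # (n, K)
    gx = torch.exp(-0.5 * ((grid[None, :] - p[:, 1:2]) / sigma) ** 2)  # (n, K)
    g = gy[:, :, None] * gx[:, None, :]                                # (n, K, K)
    g = g / g.sum(dim=(1, 2), keepdim=True)  # preserve each weight's total mass
    return (w[:, None, None] * g).sum(dim=0)

w = torch.tensor([2.0], requires_grad=True)
p = torch.tensor([[2.0, 2.0]], requires_grad=True)
sigma = torch.tensor([[0.7]], requires_grad=True)
kernel = construct_kernel_gauss(w, p, sigma, K=5)
```

Compared with the bilinear version, every grid cell receives a (possibly tiny) share of each weight, and the spread itself can be optimised through sigma.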

What's new

Sep 28, 2023:

Sep 22, 2023:

Jun 16, 2023:

Jun 2, 2023:

  • New DCLS version supports Gaussian and triangle interpolations in addition to previous bilinear interpolation. To use it, please do:
pip3 install --upgrade --force-reinstall dcls

or recompile after a git update.

import torch
from DCLS.construct.modules import Dcls2d

# Dcls2d with Gaussian interpolation. Available versions: ["gauss", "max", "v1", "v0"]
m = Dcls2d(96, 96, kernel_count=26, dilated_kernel_size=17, padding=8, groups=96, version="gauss")
input = torch.randn(20, 96, 50, 100)
output = m(input)
loss = output.sum()
loss.backward()
print(output, m.weight.grad, m.P.grad, m.SIG.grad)

Apr 16, 2023:

  • Fixed an important bug in the Dcls1d version. Please reinstall the pip wheel via
pip3 install --upgrade --force-reinstall dcls

or recompile after a git update.

Jan 7, 2023:

  • Important modification to the ConstructKernel{1,2,3}d algorithm that reduces memory use; this enables very large kernel counts. For example:
from DCLS.construct.modules import Dcls2d

m = Dcls2d(96, 96, kernel_count=2000, dilated_kernel_size=7, padding=3, groups=96).cuda()

After installing the new version 0.0.3 of DCLS, usage remains unchanged.

Nov 8, 2022:

  • The previous main branch has been moved to the cuda branch; the main branch now contains fully native torch conv{1,2,3}d implementations.

Sep 27, 2022:

Installation

DCLS is based on PyTorch and CUDA. Please make sure that you have installed all the requirements before you install DCLS.

Requirements:

  • PyTorch version torch>=1.6.0.

Preferred versions:

pip3 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html

Install the latest developing version from the source codes:

From GitHub:

git clone https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch.git
cd Dilated-Convolution-with-Learnable-Spacings-PyTorch
python3 -m pip install --upgrade pip
python3 -m build 
python3 -m pip install dist/dcls-0.0.5-py3-none-any.whl 

Install the last stable version from PyPI:

pip3 install dcls

Usage

Dcls modules can easily be used as a substitute for PyTorch's classical nn.Convnd convolution:

import torch
from DCLS.construct.modules import Dcls2d

# With square kernels, equal stride and dilation
m = Dcls2d(16, 33, kernel_count=3, dilated_kernel_size=7)
input = torch.randn(20, 16, 50, 100)
output = m(input)
loss = output.sum()
loss.backward()
print(output, m.weight.grad, m.P.grad)

A typical use case is depthwise separable convolution:

import torch
from DCLS.construct.modules import Dcls2d

m = Dcls2d(96, 96, kernel_count=34, dilated_kernel_size=17, padding=8, groups=96)
input = torch.randn(128, 96, 56, 56)
output = m(input)
loss = output.sum()
loss.backward()
print(output, m.weight.grad, m.P.grad)

Dcls with different dimensions

import torch
from DCLS.construct.modules import Dcls1d

# Will construct kernels of size 7 with 3 elements inside each kernel
m = Dcls1d(3, 16, kernel_count=3, dilated_kernel_size=7)
input = torch.rand(8, 3, 32)
output = m(input)
loss = output.sum()
loss.backward()
print(output, m.weight.grad, m.P.grad)

import torch
from DCLS.construct.modules import Dcls3d

m = Dcls3d(16, 33, kernel_count=10, dilated_kernel_size=(7,8,9))
input = torch.randn(20, 16, 50, 100, 30)
output = m(input)
loss = output.sum()
loss.backward()
print(output, m.weight.grad, m.P.grad)

DepthWiseConv2dImplicitGEMM for 2D-DCLS:

For 2D-DCLS, to install and enable DepthWiseConv2dImplicitGEMM, please follow the instructions of RepLKNet. Otherwise, PyTorch's native Conv2d method will be used.
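The fallback this implies can be sketched with the usual optional-import pattern (the module name below follows RepLKNet's repository and is an assumption here):

```python
# Try the optimized implicit-GEMM depthwise kernel from RepLKNet's setup;
# fall back to PyTorch's native Conv2d path when it is not installed.
try:
    from depthwise_conv2d_implicit_gemm import DepthWiseConv2dImplicitGEMM
    HAS_IMPLICIT_GEMM = True
except ImportError:
    HAS_IMPLICIT_GEMM = False
```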

Device Supports

DCLS currently supports CPU and NVIDIA CUDA GPU devices.

  • Nvidia GPU
  • CPU

When using a GPU, make sure your data and model are on the CUDA device.
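Since DCLS modules follow the usual nn.Module conventions, the standard device-selection idiom applies (a minimal sketch; the commented lines use Dcls2d as a stand-in for any DCLS module):

```python
import torch

# Pick the CUDA device when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A DCLS module and its inputs are then moved the usual way, e.g.:
# m = Dcls2d(96, 96, kernel_count=34, dilated_kernel_size=17, padding=8, groups=96).to(device)
# output = m(input.to(device))
```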

Publications and Citation

If you use DCLS in your work, please consider citing it as follows:

@inproceedings{
hassani2023dilated,
title={Dilated convolution with learnable spacings},
author={Ismail Khalfaoui-Hassani and Thomas Pellegrini and Timoth{\'e}e Masquelier},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=Q3-1vRh3HOA}
}

If you use DCLS with Gaussian or triangle interpolations in your work, please also consider citing:

@inproceedings{
khalfaoui-hassani2023dilated,
title={Dilated Convolution with Learnable Spacings: beyond bilinear interpolation},
author={Ismail Khalfaoui-Hassani and Thomas Pellegrini and Timoth{\'e}e Masquelier},
booktitle={ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators},
year={2023},
url={https://openreview.net/forum?id=j8FPBCltB9}
}

Contribution

This project is open source, so all contributions are welcome: reporting issues, finding and fixing bugs, requesting new features, or sending pull requests.
