Numerical Differentiation Leveraging Convolution (ndc)
What for?
Differentiate signals stored as PyTorch tensors, e.g., measurements obtained from a device or simulation, where automatic differentiation cannot be applied.
Features
- Theoretically any order, any stencils, and any step size (see this Wiki page for details). Be aware that there are numerical limits when computing the filter kernel's coefficients; e.g., small step sizes combined with high orders lead to numerical issues.
- Works for multidimensional signals, assuming that all dimensions share the same step size.
- Computations can be executed on CUDA. However, this has not been tested extensively.
- Straightforward implementation which you can easily adapt to your needs.
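The "any order, any stencils" feature rests on the standard construction of finite-difference coefficients: solve the Vandermonde system defined by the stencil offsets. Here is a minimal sketch of that construction (the function name `fd_coefficients` and the solver choice are illustrative assumptions, not this package's API):

```python
import math

import torch


def fd_coefficients(stencils, order, step_size):
    """Finite-difference coefficients for the given stencil offsets.

    Solves the Vandermonde system A c = b with A[i, j] = s_j**i and
    b[i] = order! for i == order (else 0), then rescales by the step size.
    NOTE: illustrative sketch, not the package's internal function.
    """
    s = torch.tensor(stencils, dtype=torch.float64)
    n = len(stencils)
    A = torch.stack([s**i for i in range(n)])  # (n, n) Vandermonde matrix
    b = torch.zeros(n, dtype=torch.float64)
    b[order] = math.factorial(order)
    return torch.linalg.solve(A, b) / step_size**order


# First-order central difference on [-1, 0, 1] yields [-0.5, 0, 0.5].
print(fd_coefficients([-1, 0, 1], order=1, step_size=1.0))
```

The Vandermonde matrix becomes ill-conditioned for many stencil points, and dividing by a tiny `step_size**order` amplifies rounding errors, which is exactly the numerical limit mentioned above.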
How?
The idea of this small repository is to use the duality between convolution, i.e., filtering, and numerical differentiation to leverage the existing functions for 1-dimensional convolution in order to compute the (time) derivatives.
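The duality can be seen in a few lines of plain PyTorch (a standalone sketch using torch.nn.functional.conv1d directly, not this repository's code): a first-order central difference is simply a cross-correlation with the kernel [-1/(2Δt), 0, 1/(2Δt)].

```python
import torch
import torch.nn.functional as F

dt = 0.01
t = torch.arange(0, 2 * torch.pi, dt)
x = torch.sin(t)  # test signal; its exact derivative is cos(t)

# PyTorch's conv1d computes a cross-correlation, so the central-difference
# coefficients are laid out left-to-right: (x[t+1] - x[t-1]) / (2 * dt).
kernel = torch.tensor([[[-0.5, 0.0, 0.5]]]) / dt  # shape (1, 1, 3)
dx = F.conv1d(x.view(1, 1, -1), kernel).view(-1)

# Without padding the output is two samples shorter, aligned with t[1:-1],
# and matches cos(t) up to the O(dt^2) discretization error.
print(torch.allclose(dx, torch.cos(t[1:-1]), atol=1e-4))  # True
```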
Why PyTorch?
More often than not, I received (recorded) simulation data as PyTorch tensors rather than NumPy arrays.
Thus, I think it is nice to have a function to differentiate measurement signals without switching the data type or computation device.
Moreover, the torch.conv1d function fits this purpose perfectly.
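For signals of shape (num_steps, dim_data), one natural way to apply conv1d is to treat each data dimension as a channel and filter every channel independently via the groups argument. This is a sketch of that idea (the package's internal layout may differ):

```python
import torch
import torch.nn.functional as F

dt = 0.01
num_steps, dim_data = 100, 3
x = torch.randn(num_steps, dim_data)

# conv1d expects (batch, channels, length): transpose to (1, dim_data, num_steps)
# and repeat the central-difference kernel per channel; groups=dim_data makes
# each data dimension get differentiated independently.
kernel = (torch.tensor([-0.5, 0.0, 0.5]) / dt).repeat(dim_data, 1, 1)  # (dim_data, 1, 3)
dx = F.conv1d(x.T.unsqueeze(0), kernel, groups=dim_data)
dx = dx.squeeze(0).T  # back to (num_steps - 2, dim_data)
print(dx.shape)  # torch.Size([98, 3])
```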
Citing
If you use code or ideas from this repository for your projects or research, please cite it.
@misc{Muratore_ndc,
author = {Fabio Muratore},
title = {ndc - Numerical differentiation leveraging convolutions},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/famura/ndc}}
}
Installation
To install the core part of the package, run
pip install ndc
For (local) development install the dependencies with
pip install -e .[dev]
Usage
Consider a signal x, e.g., a measurement you obtained from a device. This package assumes that the signal to differentiate is of shape (num_steps, dim_data).
import torch
import ndc
# Assuming you got x(t) from somewhere.
assert isinstance(x, torch.Tensor)
num_steps, dim_data = x.shape
# Specify the derivative. Here, the first order central derivative.
stencils = [-1, 0, 1]
order = 1
step_size = dt # should be known from your signal x(t), else use 1
padding = True # if true, the initial and final values are repeated as often as necessary to match the length of x
dx_dt_num = ndc.differentiate_numerically(x, stencils, order, step_size, padding)
assert dx_dt_num.device == x.device
if padding:
    assert dx_dt_num.shape == (num_steps, dim_data)
else:
    assert dx_dt_num.shape == (num_steps - sum(s != 0 for s in stencils), dim_data)
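To see what the padding option means in practice, here is the boundary-repeating behavior sketched with plain torch ops (assuming, as described above, that padding=True replicates the initial and final values so the output length matches the input):

```python
import torch
import torch.nn.functional as F

dt = 0.1
t = torch.arange(10, dtype=torch.float32) * dt
x = t.pow(2).view(1, 1, -1)  # x(t) = t^2, so dx/dt = 2t

# Replicate-pad one sample on each side so the central-difference output
# has the same length as the input, mirroring padding=True.
x_padded = F.pad(x, (1, 1), mode="replicate")
kernel = torch.tensor([[[-0.5, 0.0, 0.5]]]) / dt
dx = F.conv1d(x_padded, kernel).view(-1)

print(dx.shape)  # torch.Size([10]), same length as the input
# Interior samples are exact for a quadratic: dx[1:-1] == 2 * t[1:-1];
# only the two padded boundary samples carry extra error.
```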
Contributions
Maybe you want another padding mode, or you found a way to improve the CUDA support. Please feel free to open a pull request or an issue.