Differentiable numerical differentiation for noisy time series in PyTorch

torch-dxdt - Numerical Derivatives of Timeseries Data in PyTorch

torch-dxdt is a PyTorch implementation of numerical differentiation methods for (noisy) time series data. It provides differentiable versions of common differentiation algorithms, allowing them to be used as part of neural network training pipelines, physics-informed neural networks (PINNs), and other gradient-based optimization tasks.

This package is inspired by and aims to be compatible with the derivative package by Andy Goldschmidt.

Features

  • 🔥 Fully Differentiable: All methods support PyTorch autograd for backpropagation
  • 🚀 GPU Accelerated: Leverage PyTorch's GPU support for fast computation
  • 📊 Multiple Methods: Seven differentiation algorithms for different use cases
  • 📈 Higher-Order Derivatives: Support for 2nd-order and multi-order derivative computation
  • 🔧 Easy API: Simple functional and object-oriented interfaces
  • 🧪 Well Tested: Validated against the reference derivative package

Installation

pip install torch-dxdt

Or install from source:

git clone https://github.com/mstoelzle/torch-dxdt.git
cd torch-dxdt
pip install -e .

Quick Start

import torch
import torch_dxdt

# Create sample data
t = torch.linspace(0, 2 * torch.pi, 100)
x = torch.sin(t) + 0.1 * torch.randn(100)

# Compute derivative using functional interface
dx = torch_dxdt.dxdt(x, t, kind="savitzky_golay", window_length=11, polyorder=3)

# Or use object-oriented interface
sg = torch_dxdt.SavitzkyGolay(window_length=11, polyorder=3)
dx = sg.d(x, t)
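
Because every method is differentiable, gradients also flow through the derivative computation itself. A minimal sketch reusing the data above:

# Backpropagate through the derivative estimate (illustration)
x.requires_grad_(True)
dx = torch_dxdt.dxdt(x, t, kind="savitzky_golay", window_length=11, polyorder=3)
dx.pow(2).mean().backward()
print(x.grad.shape)  # torch.Size([100])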

Available Methods

Method              Class             Differentiable?  Best For
Finite Differences  FiniteDifference  ✅ Yes            Fast, simple differentiation
Savitzky-Golay      SavitzkyGolay     ✅ Yes            Noisy data with polynomial smoothing
Spectral            Spectral          ✅ Yes            Smooth, periodic signals
Spline              Spline            ✅ Yes            Smoothing with controllable regularization
Kernel (GP)         Kernel            ✅ Yes            Probabilistic smoothing
Kalman              Kalman            ✅ Yes            State estimation with noise model
Whittaker-Eilers    Whittaker         ✅ Yes            Global smoothing with penalized least squares

Method Details

Finite Difference

# Symmetric finite differences using Taylor series coefficients
# k=1 gives 3-point central difference, k=2 gives 5-point, etc.
dx = torch_dxdt.dxdt(x, t, kind="finite_difference", k=1)

Savitzky-Golay Filter

# Polynomial smoothing filter - great for noisy data
dx = torch_dxdt.dxdt(x, t, kind="savitzky_golay", 
                 window_length=11,  # Must be odd
                 polyorder=3)       # Polynomial order

Spectral Differentiation

# FFT-based differentiation - very accurate for periodic signals
dx = torch_dxdt.dxdt(x, t, kind="spectral")

# With frequency filtering
dx = torch_dxdt.dxdt(x, t, kind="spectral", 
                 filter_func=lambda k: (k < 10).float())
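
The hard cutoff above zeroes every mode with wavenumber 10 or higher; a smoother roll-off (a hypothetical alternative, assuming filter_func receives the wavenumber tensor as in the example above) can reduce ringing:

dx = torch_dxdt.dxdt(x, t, kind="spectral",
                     filter_func=lambda k: torch.exp(-(k / 10.0) ** 2))  # Gaussian low-pass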

Spline Smoothing

# Smoothing spline with regularization
dx = torch_dxdt.dxdt(x, t, kind="spline", s=0.01)  # s controls smoothing

Kernel (Gaussian Process)

# GP-based differentiation with RBF kernel
dx = torch_dxdt.dxdt(x, t, kind="kernel", 
                 sigma=1.0,   # Kernel length scale
                 lmbd=0.1)    # Noise variance

Kalman Smoother

# Kalman smoother assuming Brownian motion derivative
dx = torch_dxdt.dxdt(x, t, kind="kalman", alpha=1.0)

Whittaker-Eilers Smoother

# Global smoother using penalized least squares with Cholesky decomposition
# lmbda controls smoothness (larger = smoother)
dx = torch_dxdt.dxdt(x, t, kind="whittaker", lmbda=100.0)

# With different difference penalty order
dx = torch_dxdt.dxdt(x, t, kind="whittaker", lmbda=1000.0, d_order=2)

Higher-Order Derivatives

Several methods support computing higher-order derivatives:

# Second-order derivative with Savitzky-Golay (order is a constructor parameter)
sg = torch_dxdt.SavitzkyGolay(window_length=11, polyorder=4, order=2)
d2x = sg.d(x, t)  # Second derivative

# Or via functional interface (order passed as kwarg to constructor)
d2x = torch_dxdt.dxdt(x, t, kind="savitzky_golay", 
                  window_length=11, polyorder=4, order=2)

# Whittaker also supports second-order derivatives via d_orders()
wh = torch_dxdt.Whittaker(lmbda=100.0)
derivs = wh.d_orders(x, t, orders=[1, 2])
dx, d2x = derivs[1], derivs[2]

Multi-Order Derivatives

For efficiency, you can compute multiple derivative orders simultaneously:

# Compute smoothed signal, first and second derivatives in one call
sg = torch_dxdt.SavitzkyGolay(window_length=11, polyorder=4)
derivs = sg.d_orders(x, t, orders=[0, 1, 2])

x_smooth = derivs[0]  # Smoothed signal (order 0)
dx = derivs[1]        # First derivative
d2x = derivs[2]       # Second derivative

# Also available via functional interface
derivs = torch_dxdt.dxdt_orders(x, t, kind="savitzky_golay",
                                window_length=11, polyorder=4,
                                orders=[0, 1, 2])

This is more efficient than calling d() multiple times as it avoids redundant computation.
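
As a consistency check (a sketch, assuming d() with its default first-order setting matches the order-1 entry returned by d_orders()):

sg = torch_dxdt.SavitzkyGolay(window_length=11, polyorder=4)
derivs = sg.d_orders(x, t, orders=[1, 2])
assert torch.allclose(derivs[1], sg.d(x, t))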

Using with Neural Networks

The key feature of torch-dxdt is that all operations are differentiable, so you can use them in training loops:

import torch
import torch.nn as nn
import torch_dxdt

class PhysicsInformedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 64),
            nn.Tanh(),
            nn.Linear(64, 1)
        )
        self.diff = torch_dxdt.SavitzkyGolay(window_length=5, polyorder=2)
    
    def forward(self, t):
        x = self.net(t.unsqueeze(-1)).squeeze(-1)
        dx = self.diff.d(x, t)
        return x, dx

# Training loop
model = PhysicsInformedModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

t = torch.linspace(0, 2*torch.pi, 100)
for epoch in range(100):
    optimizer.zero_grad()
    x, dx = model(t)
    
    # Physics loss: dx/dt should equal some target
    physics_loss = (dx - torch.cos(t)).pow(2).mean()
    physics_loss.backward()
    optimizer.step()
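
Here the physics residual backpropagates through both the Savitzky-Golay operator and the network, so the derivative estimate directly shapes the learned weights.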

Smoothing

Some methods also support smoothing without differentiation:

# Get smoothed signal
x_smooth = torch_dxdt.smooth_x(x, t, kind="spline", s=0.1)
x_smooth = torch_dxdt.smooth_x(x, t, kind="kernel", sigma=1.0, lmbd=0.1)
x_smooth = torch_dxdt.smooth_x(x, t, kind="kalman", alpha=1.0)
x_smooth = torch_dxdt.smooth_x(x, t, kind="whittaker", lmbda=100.0)

Batched Processing

All methods support batched inputs:

# Process multiple signals at once
x_batch = torch.stack([torch.sin(t), torch.cos(t)], dim=0)  # Shape: (2, 100)
dx_batch = torch_dxdt.dxdt(x_batch, t, kind="savitzky_golay",
                           window_length=11, polyorder=3)
# dx_batch has shape (2, 100)
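
Batching combines naturally with GPU execution (a sketch, assuming all tensors are moved to the same device):

# Run the batched computation on GPU when available
if torch.cuda.is_available():
    x_gpu, t_gpu = x_batch.to("cuda"), t.to("cuda")
    dx_gpu = torch_dxdt.dxdt(x_gpu, t_gpu, kind="savitzky_golay",
                             window_length=11, polyorder=3)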

Comparison with derivative Package

torch-dxdt is designed to be API-compatible with the derivative package where possible, operating on PyTorch tensors instead of NumPy arrays:

# derivative (NumPy)
from derivative import dxdt as np_dxdt
dx_np = np_dxdt(x_np, t_np, kind="finite_difference", k=1)

# torch-dxdt (PyTorch)
import torch_dxdt
dx_torch = torch_dxdt.dxdt(x_torch, t_torch, kind="finite_difference", k=1)
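
A round-trip agreement check might look like this (a sketch; the derivative package must be installed separately):

import numpy as np
import torch
from derivative import dxdt as np_dxdt
import torch_dxdt

t_np = np.linspace(0, 2 * np.pi, 100)
x_np = np.sin(t_np)
dx_np = np_dxdt(x_np, t_np, kind="finite_difference", k=1)

dx_torch = torch_dxdt.dxdt(torch.from_numpy(x_np), torch.from_numpy(t_np),
                           kind="finite_difference", k=1)
print(np.allclose(dx_np, dx_torch.numpy()))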

API Reference

Functional Interface

torch_dxdt.dxdt(x, t, kind=None, axis=-1, **kwargs)

Compute the derivative of x with respect to t.

torch_dxdt.smooth_x(x, t, kind=None, axis=-1, **kwargs)

Compute the smoothed version of x (only for methods that support it).

Classes

All derivative classes inherit from torch_dxdt.Derivative and implement:

  • d(x, t, axis=-1): Compute derivative
  • d_orders(x, t, orders=[0, 1, 2], axis=-1): Compute multiple derivative orders efficiently
  • smooth(x, t, axis=-1): Compute smoothed signal (if supported)

For higher-order derivatives, pass order=N to the constructor (e.g., SavitzkyGolay(order=2)) or use d_orders().

torch_dxdt.dxdt_orders(x, t, kind=None, orders=(1, 2), axis=-1, **kwargs)

Compute multiple derivative orders simultaneously.

Examples

For a comprehensive comparison of all methods with varying noise levels and computational benchmarks, see the Jupyter notebook:

📓 examples/comparing_methods.ipynb

The notebook includes:

  • Visual comparison of all 7 differentiation methods
  • RMSE accuracy analysis across noise levels (no noise, low noise, high noise)
  • Computational efficiency benchmarks (forward and backward pass timing)
  • Parameter tuning examples for noisy data
  • Smoothing method comparisons

Requirements

  • Python >= 3.9
  • PyTorch >= 1.10.0
  • NumPy >= 1.20.0
  • SciPy >= 1.7.0

Development

# Clone the repository
git clone https://github.com/mstoelzle/torch-dxdt.git
cd torch-dxdt

# Create conda environment
conda create -n torch-dxdt python=3.13 -y
conda activate torch-dxdt

# Install in development mode with all dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run tests with the reference derivative package
pip install derivative
pytest tests/test_correctness.py -v

Citation

If you use this package in your research, please consider citing:

@software{torch-dxdt,
  author = {Stölzle, Maximilian},
  title = {torch-dxdt: Differentiable Numerical Differentiation in PyTorch},
  url = {https://github.com/mstoelzle/torch-dxdt},
  year = {2025}
}

This package builds upon the work in the derivative package:

@article{kaptanoglu2022pysindy,
  doi = {10.21105/joss.03994},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {69},
  pages = {3994},
  author = {Alan A. Kaptanoglu and others},
  title = {PySINDy: A comprehensive Python package for robust sparse system identification},
  journal = {Journal of Open Source Software}
}

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
