# ddkf

Dual Dynamic Kernel Filtering (DDKF): a minimal package for time-frequency decomposition with arbitrary kernels and learnable parameters. Installable with pip as `ddkf`.
New in v4.0:
- Corrected algorithm - Window-by-window kernel application matching MATLAB reference
- Arbitrary number of kernels (not limited to 2)
- Learnable parameters (alpha, beta, gamma) via PyTorch
- Backpropagatable cubic interpolation - Gradients flow through interpolation
- No scipy dependency (pure NumPy/PyTorch)
Breaking Changes from v3.x:
- Algorithm corrected to apply kernels window-by-window rather than globally (see the sketch below)
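To make the distinction concrete, here is a minimal sketch of the window-by-window pattern, assuming a per-window spectrum and a gamma-weighted kernel mixture. This is illustrative only, not the package's internal code:

```python
import numpy as np

# Illustrative sketch (not the package's internals): each window is processed
# independently, and the gamma-weighted kernel mixture is applied inside the
# window rather than to the whole signal at once.
def window_by_window(signal, kernels, gamma, window_size=20, step_size=4):
    rows = []
    for start in range(0, len(signal) - window_size + 1, step_size):
        window = signal[start:start + window_size]
        spectrum = np.abs(np.fft.rfft(window))  # per-window spectrum (assumption)
        mixed = sum(g * k(spectrum) for g, k in zip(gamma, kernels))
        rows.append(mixed)
    return np.stack(rows)  # shape: (n_windows, n_freqs)
```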
## Quick install

```bash
# Basic install
python3 -m pip install ddkf
```
## ⚠️ Important: Use the Polynomial Kernel

The polynomial kernel performs exceptionally well in combination with any other kernel and should be included in your kernel configuration. It is not set as the default so that users retain the flexibility to configure the polynomial parameters (degree and offset) for their specific use case.

Recommended configuration:

```python
kernel_names=['polynomial', 'gaussian']
gamma=[0.5, 0.5]
```
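As a fuller sketch, the same mixture can be built with the polynomial's parameters set explicitly via `kernel_params` (the values shown are the documented defaults):

```python
from ddkf import DDKF

# Equal-weight polynomial + gaussian mixture, with kernel parameters
# spelled out explicitly (values are the documented defaults).
model = DDKF(
    kernel=["polynomial", "gaussian"],
    gamma=[0.5, 0.5],
    kernel_params=[
        {"degree": 2, "offset": 1.3},   # polynomial parameters
        {"center": 0.7, "sigma": 1.0},  # gaussian parameters
    ],
)
```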
## Examples

The examples/ folder contains complete working examples:

- `run_example.py` - Basic DDKF demonstration showing signal decomposition, time-frequency representation, and the inverse transform on a noisy multi-frequency signal (3 Hz + 7 Hz + noise)
- `DDKFvsDWT.py` - Machine-learning comparison of DDKF vs. DWT (Discrete Wavelet Transform) as feature extractors for time series classification:
  - Extracts 2D time-frequency features using DDKF
  - Extracts 2D wavelet coefficients using DWT
  - Trains separate CNNs on each feature type
  - Compares classification accuracy on the ECG200 dataset
  - Visualizes feature representations and training curves
To run the examples:

```bash
cd examples/
python3 run_example.py   # Basic signal processing demo
python3 DDKFvsDWT.py     # ML classification comparison (requires aeon, pywt)
```
## PyTorch Example (Learnable Parameters)

DDKF v4.0 supports learnable parameters via PyTorch. All parameters (alpha, beta, gamma) can be optimized through backpropagation:

```python
import torch
import torch.nn.functional as F
from ddkf import DDKFLayer

# Create a learnable DDKF layer
layer = DDKFLayer(
    kernel_names=['polynomial', 'gaussian', 'polynomial'],
    gamma=[0.5, 0.3, 0.2],  # Initial weights (learnable)
    alpha=0.15,             # Initial alpha threshold (learnable)
    beta=0.9,               # Initial beta threshold (learnable)
    window_size=20,
    interp_factor=0.25      # Backpropagatable cubic interpolation
)

# Training loop
optimizer = torch.optim.Adam(layer.parameters(), lr=0.001)

for epoch in range(100):
    # Forward pass
    signal_batch = torch.randn(16, 1000)  # Batch of signals
    tfr = layer(signal_batch)

    # Compute loss (placeholder target and MSE loss for illustration;
    # substitute your own task's loss here)
    target = torch.zeros_like(tfr)
    loss = F.mse_loss(tfr, target)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()  # Gradients flow through interpolation!
    optimizer.step()

    # Parameters are updated
    if epoch % 20 == 0:
        print(f"Epoch {epoch}: alpha={layer.alpha.item():.4f}, "
              f"beta={layer.beta.item():.4f}")

# Check final learned parameters
print(f"Learned gamma weights: {layer.gamma.detach().numpy()}")
print(f"Learned alpha: {layer.alpha.item():.4f}")
print(f"Learned beta: {layer.beta.item():.4f}")
```
## API Overview

### Main Class: DDKF

```python
DDKF(
    kernel="gaussian",    # Single kernel or list of kernels
    gamma=None,           # Kernel weights (auto-normalized to sum to 1)
    alpha=0.15,           # Alpha threshold (smoothing coefficient)
    beta=0.9,             # Beta threshold (smart minimum)
    window_size=20,       # Sliding window size
    step_size=4,          # Step between windows
    kernel_params=None    # Parameters for each kernel
)
```
**Methods:**

- `fit(signal)` - Process the signal and compute the TFR
- `get_tfr()` - Get the time-frequency representation
- `inverse_transform(correction_factor=None)` - Reconstruct the signal
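A minimal usage sketch for these methods, using a noisy 3 Hz + 7 Hz test signal like the one in examples/run_example.py:

```python
import numpy as np
from ddkf import DDKF

# Noisy two-tone test signal (3 Hz + 7 Hz + noise)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = (np.sin(2 * np.pi * 3 * t) + np.sin(2 * np.pi * 7 * t)
          + 0.2 * rng.standard_normal(t.size))

model = DDKF(kernel=["polynomial", "gaussian"], gamma=[0.5, 0.5])
model.fit(signal)                           # process and compute the TFR
tfr = model.get_tfr()                       # time-frequency representation
reconstructed = model.inverse_transform()   # back to the time domain
```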
### Convenience Function: denoise

```python
from ddkf import denoise

denoised = denoise(
    signal,
    kernel=["polynomial", "gaussian"],
    gamma=[0.6, 0.4],
    window_size=20,
    alpha=0.15,
    beta=0.9
)
```
### PyTorch Layer: DDKFLayer

```python
DDKFLayer(
    kernel_names=['polynomial', 'gaussian'],
    gamma=[0.5, 0.5],
    alpha=0.15,           # Alpha threshold
    beta=0.9,             # Beta threshold
    window_size=20,
    step_size=4,
    interp_factor=0.25    # Cubic interpolation factor
)
```
All parameters are learnable via backpropagation.
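One way to confirm which tensors will be trained, assuming `DDKFLayer` registers them as standard `nn.Parameter`s (as the training example above suggests):

```python
layer = DDKFLayer(kernel_names=['polynomial', 'gaussian'], gamma=[0.5, 0.5])

# List the registered trainable parameters (alpha, beta, gamma)
for name, param in layer.named_parameters():
    print(name, tuple(param.shape), param.requires_grad)
```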
Forward pass:

```python
tfr = layer(signal)  # Returns time-frequency representation
```

Inverse transform:

```python
recovered = layer.inverse_transform(tfr)  # Phase automatically stored

# Or provide phase explicitly:
recovered = layer.inverse_transform(tfr, tfr_phase)
```
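A round-trip sanity check might look like the following sketch; since edge samples can depend on `window_size` and `step_size`, the comparison is trimmed to the shorter length:

```python
import torch

signal = torch.randn(4, 1000)
tfr = layer(signal)
recovered = layer.inverse_transform(tfr)

# Compare on the overlapping region only, in case lengths differ slightly
n = min(signal.shape[-1], recovered.shape[-1])
mse = torch.mean((signal[..., :n] - recovered[..., :n]) ** 2)
print(f"Round-trip MSE: {mse.item():.6f}")
```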
## Parameter Descriptions

### alpha (default: 0.15)

Alpha threshold for final smoothing. Suppresses weak frequency components in the final time-frequency representation. Higher values result in more aggressive smoothing.

### beta (default: 0.9)

Beta threshold for the smart minimum operation. Controls which frequency components participate in the smart minimum calculation. Higher values (closer to 1.0) make the filter more selective, only including the strongest frequency components.

### gamma (default: equal weights)

Kernel mixing weights. Automatically normalized to sum to 1. For a hybrid kernel with two components, gamma=[0.5, 0.5] gives equal weight to each kernel.
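Because gamma is auto-normalized, only the relative weights matter. A small sketch, assuming normalization happens at construction:

```python
from ddkf import DDKF

# gamma=[2, 1, 1] is normalized internally to [0.5, 0.25, 0.25],
# so unnormalized weights are fine.
model = DDKF(kernel=["polynomial", "gaussian", "matern32"], gamma=[2, 1, 1])
```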
## Available Kernels

- `"polynomial"` - Polynomial kernel: `(x + offset)^degree`. Default params: `degree=2, offset=1.3`
- `"gaussian"` - Gaussian kernel: `exp(-0.5 * ((x - center) / sigma)^2)`. Default params: `center=0.7, sigma=1.0`
- `"matern32"` - Matérn 3/2 kernel: `(1 + √3(x + offset)) * exp(-√3(x + offset))`. Default params: `offset=1.7`
- `"matern52"` - Matérn 5/2 kernel: `(1 + √5(x + offset) + (5/3)(x + offset)²) * exp(-√5(x + offset))`. Default params: `offset=1.7`
- `"rational"` - Rational quadratic kernel: `(1 + scale * x)^(-power)`. Default params: `scale=1/3, power=3`
- `"gamma_rational"` - Gamma rational quadratic kernel: `(1 + scale * (x + offset)²)^(-power)`. Default params: `scale=1/3, offset=1.7, power=3`

**Custom kernels:** You can also pass your own callable functions (see Custom Kernels below).
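For reference, here are the built-in kernels written out as standalone functions, transcribed directly from the formulas above (not the package's internal implementations):

```python
import numpy as np

# Standalone transcriptions of the built-in kernel formulas.
def polynomial(x, degree=2, offset=1.3):
    return (x + offset) ** degree

def gaussian(x, center=0.7, sigma=1.0):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def matern32(x, offset=1.7):
    r = np.sqrt(3) * (x + offset)
    return (1 + r) * np.exp(-r)

def matern52(x, offset=1.7):
    r = np.sqrt(5) * (x + offset)
    return (1 + r + (5 / 3) * (x + offset) ** 2) * np.exp(-r)

def rational(x, scale=1/3, power=3):
    return (1 + scale * x) ** (-power)

def gamma_rational(x, scale=1/3, offset=1.7, power=3):
    return (1 + scale * (x + offset) ** 2) ** (-power)
```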
## Key Features
- **Window-by-window processing**: Kernels applied correctly within each window
- **Arbitrary kernels**: Use 1, 2, 3, or more kernels
- **Learnable parameters**: Optimize alpha, beta, gamma via PyTorch
- **Backpropagatable interpolation**: Gradients flow through cubic interpolation
- **No scipy**: Pure NumPy/PyTorch implementation
- **Flexible**: Works for denoising, TFR, feature extraction
## Advanced Usage
### Feature Extraction for ML
DDKF can extract time-frequency features for machine learning applications. For a complete working example comparing DDKF vs DWT features for time series classification, see `examples/DDKFvsDWT.py`.
```python
import torch
from ddkf import DDKFLayer

# Create feature extractor
ddkf = DDKFLayer(
    kernel_names=['polynomial', 'gaussian'],
    gamma=[0.5, 0.5],
    alpha=0.15,
    beta=0.85
)

# Extract features from time series
time_series = torch.randn(100, 1000)  # 100 samples, 1000 time points
features = ddkf(time_series)          # Shape: (100, n_windows, n_freqs)

# Flatten for ML model
features_flat = features.view(features.size(0), -1)  # Shape: (100, n_windows * n_freqs)
```
### End-to-End Trainable Pipeline

```python
import torch
import torch.nn as nn
from ddkf import DDKFLayer

class DDKFClassifier(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.ddkf = DDKFLayer(
            kernel_names=['polynomial', 'gaussian'],
            alpha=0.15,
            beta=0.9
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            # LazyLinear infers n_windows * n_freqs on the first forward pass
            nn.LazyLinear(128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes)
        )

    def forward(self, x):
        tfr = self.ddkf(x)  # Learnable TFR
        return self.classifier(tfr)

# Train entire pipeline end-to-end
model = DDKFClassifier(num_classes=10)
model(torch.randn(1, 1000))  # dry run to materialize the lazy layer
optimizer = torch.optim.Adam(model.parameters())
# ... training loop (see the sketch below) ...
```
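A minimal training-loop sketch on synthetic data; in practice you would substitute a labeled time-series dataset:

```python
import torch
import torch.nn.functional as F

x = torch.randn(32, 1000)        # synthetic signals
y = torch.randint(0, 10, (32,))  # synthetic labels

for epoch in range(10):
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()   # updates the DDKF parameters and the classifier jointly
    optimizer.step()
```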
### Custom Kernels

You can provide your own kernel functions:

```python
import torch
from ddkf import DDKFLayer

# Define custom kernel
def my_custom_kernel(x, scale=2.0, power=3):
    """Custom kernel function."""
    return (x * scale) ** power

# Use with DDKF
layer = DDKFLayer(
    kernel_names=[my_custom_kernel, 'gaussian'],  # Mix custom + builtin
    kernel_params=[
        {'scale': 1.5, 'power': 2},    # params for custom kernel
        {'center': 0.5, 'sigma': 0.8}  # params for gaussian
    ],
    gamma=[0.6, 0.4]
)

# Or use lambda functions
layer = DDKFLayer(
    kernel_names=[
        lambda x, scale=1.0: torch.exp(-x * scale),
        'polynomial'
    ],
    kernel_params=[
        {'scale': 2.0},
        {'degree': 3, 'offset': 1.0}
    ]
)
```
## Reference

If you use this package or the underlying DDKF technique in your research or software, please cite the original work:

```bibtex
@article{bensegueni2025dual,
  title={Dual Dynamic Kernel Filtering: Accurate Time-Frequency Representation, Reconstruction, and Denoising},
  author={Bensegueni, Skander and Belhaouari, Samir Brahim and Kahalan, Yunis Carreon},
  journal={Digital Signal Processing},
  pages={105407},
  year={2025},
  publisher={Elsevier}
}
```
## License

This project is licensed under the MIT License.

## Authors

- Skander Bensegueni
- Yunis Kahalan
v4.0.0 - Corrected algorithm, updated parameter names, backpropagatable interpolation
## File details

Details for the file `ddkf-4.1.0.tar.gz`.

**File metadata**

- Download URL: ddkf-4.1.0.tar.gz
- Upload date:
- Size: 13.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c60bdbd8f068a1ff7c22e7e6d44b1cf7a993ce3f8f9c37281459e4b1b744f99b` |
| MD5 | `682f0fa153eabfd4662a8368a1d62536` |
| BLAKE2b-256 | `075ba76d92c9d017950c00d18ef67bd7480d5a14e01b5568fa97314edae68604` |
## File details

Details for the file `ddkf-4.1.0-py3-none-any.whl`.

**File metadata**

- Download URL: ddkf-4.1.0-py3-none-any.whl
- Upload date:
- Size: 11.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3f078a7396ef7cdf57e5ddb6d790c12094984a9b874fcd007ce4130b4b43d12c` |
| MD5 | `b692938b413612764908b6fab6135d6c` |
| BLAKE2b-256 | `8f9a3921c3832744f32a98da288be22ec70c09f232d4b9c19b4a50855ad3d909` |
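To check a downloaded file against the digests above, a small verification sketch (the filename and expected hash shown are those of the source distribution listed above):

```python
import hashlib

# Expected SHA256 for ddkf-4.1.0.tar.gz, from the table above
expected = "c60bdbd8f068a1ff7c22e7e6d44b1cf7a993ce3f8f9c37281459e4b1b744f99b"

with open("ddkf-4.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, f"hash mismatch: {digest}"
```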