Fast, efficient, and differentiable time-varying LPC filtering in PyTorch.
TorchLPC
torchlpc provides a PyTorch implementation of the Linear Predictive Coding (LPC) filter, also known as an all-pole filter. It is fast, differentiable, and supports batched inputs with time-varying filter coefficients.
Given an input signal $\mathbf{x} \in \mathbb{R}^T$ and time-varying LPC coefficients $\mathbf{A} \in \mathbb{R}^{T \times N}$ with an order of $N$, the LPC filter is defined as:
$$ y_t = x_t - \sum_{i=1}^N A_{t,i} y_{t-i}. $$
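For clarity, the recursion can be written as a plain, unvectorized loop. The sketch below (with an illustrative helper name `naive_lpc`, not part of the package) assumes zero initial conditions; torchlpc evaluates the same recurrence, just far more efficiently:

```python
import torch

def naive_lpc(x: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """Direct evaluation of y_t = x_t - sum_i A[t, i] * y[t - i].

    x: (T,) input signal, A: (T, N) time-varying coefficients.
    Samples before t = 0 are assumed to be zero (zero initial conditions).
    """
    T, N = A.shape
    y = torch.zeros_like(x)
    for t in range(T):
        acc = x[t]
        for i in range(1, N + 1):
            if t - i >= 0:
                acc = acc - A[t, i - 1] * y[t - i]
        y[t] = acc
    return y
```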
Usage
```python
import torch
from torchlpc import sample_wise_lpc

# Create a batch of 10 signals, each with 100 time steps
x = torch.randn(10, 100)

# Create a batch of 10 sets of LPC coefficients, each with 100 time steps and an order of 3
A = torch.randn(10, 100, 3)

# Apply LPC filtering
y = sample_wise_lpc(x, A)

# Optionally, you can provide initial values for the output signal (default is 0)
zi = torch.randn(10, 3)
y = sample_wise_lpc(x, A, zi=zi)
```
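Both the input and the coefficients are differentiable, so gradients flow through the filter like through any other PyTorch op. A small sketch, reusing the shapes above (the coefficients are scaled down here only to keep the random filter reasonably well-behaved):

```python
x = torch.randn(10, 100, requires_grad=True)
A = (0.1 * torch.randn(10, 100, 3)).requires_grad_()

y = sample_wise_lpc(x, A)
loss = y.pow(2).mean()
loss.backward()

print(x.grad.shape)  # torch.Size([10, 100])
print(A.grad.shape)  # torch.Size([10, 100, 3])
```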
Installation
```bash
pip install torchlpc
```
or from source
```bash
pip install git+https://github.com/yoyololicon/torchlpc.git
```
Derivation of the gradients of the LPC filter
The details of the derivation can be found in our preprint Differentiable All-pole Filters for Time-varying Audio Systems[^1]. We show that, given the instantaneous gradient $\frac{\partial \mathcal{L}}{\partial y_t}$, where $\mathcal{L}$ is the loss function, the gradients of the LPC filter with respect to the input signal $\bf x$ and the filter coefficients $\bf A$ can also be expressed through a time-varying filter:
$$ \frac{\partial \mathcal{L}}{\partial x_t} = \frac{\partial \mathcal{L}}{\partial y_t} - \sum_{i=1}^{N} A_{t+i,i} \frac{\partial \mathcal{L}}{\partial x_{t+i}}, $$
$$ \frac{\partial \mathcal{L}}{\partial \bf A} = -\begin{vmatrix} \frac{\partial \mathcal{L}}{\partial x_1} & 0 & \dots & 0 \\ 0 & \frac{\partial \mathcal{L}}{\partial x_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{\partial \mathcal{L}}{\partial x_T} \end{vmatrix} \begin{vmatrix} y_0 & y_{-1} & \dots & y_{-N+1} \\ y_1 & y_0 & \dots & y_{-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ y_{T-1} & y_{T-2} & \dots & y_{T-N} \end{vmatrix}. $$
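The backward recursion for $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ can be checked against autograd with a plain loop. The following is an illustrative sketch of that check (the loop is written for clarity only and is not how the package computes the gradient internally):

```python
import torch
from torchlpc import sample_wise_lpc

T, N = 50, 3
x = torch.randn(1, T, dtype=torch.double, requires_grad=True)
A = 0.1 * torch.randn(1, T, N, dtype=torch.double)

y = sample_wise_lpc(x, A)
grad_y = torch.randn_like(y)   # an arbitrary dL/dy
y.backward(grad_y)             # autograd's dL/dx lands in x.grad

# dL/dx_t = dL/dy_t - sum_i A[t+i, i] * dL/dx_{t+i}, evaluated backwards in time
grad_x = torch.zeros(T, dtype=torch.double)
for t in reversed(range(T)):
    acc = grad_y[0, t]
    for i in range(1, N + 1):
        if t + i < T:
            acc = acc - A[0, t + i, i - 1] * grad_x[t + i]
    grad_x[t] = acc

print(torch.allclose(grad_x, x.grad[0]))  # expected: True
```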
Gradients for the initial condition $y_t|_{t \leq 0}$
The initial conditions provide an entry point at $t=1$ for filtering, as we cannot evaluate $t=-\infty$.
Let us assume $A_{t,:}|_{t \leq 0} = 0$ so $y_t|_{t \leq 0} = x_t|_{t \leq 0}$, which also means $\frac{\partial \mathcal{L}}{\partial y_t}|_{t \leq 0} = \frac{\partial \mathcal{L}}{\partial x_t}|_{t \leq 0}$.
Thus, the initial condition gradients are
$$ \frac{\partial \mathcal{L}}{\partial y_t} = \frac{\partial \mathcal{L}}{\partial x_t} = -\sum_{i=1-t}^{N} A_{t+i,i} \frac{\partial \mathcal{L}}{\partial x_{t+i}} \quad \text{for } -N < t \leq 0. $$
In practice, we pad $N$ zeros to the beginning of $\frac{\partial \mathcal{L}}{\partial \bf y}$ and $N \times N$ zeros to the beginning of $\mathbf{A}$ before evaluating $\frac{\partial \mathcal{L}}{\partial \bf x}$.
The first $N$ outputs are the gradients to $y_t|_{t \leq 0}$ and the rest are to $x_t|_{t > 0}$.
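Since the initial conditions are passed in as the `zi` tensor, their gradients are available through the usual autograd call. A minimal sketch, following the shapes from the usage example above:

```python
x = torch.randn(10, 100, dtype=torch.double)
A = 0.1 * torch.randn(10, 100, 3, dtype=torch.double)
zi = torch.randn(10, 3, dtype=torch.double, requires_grad=True)

y = sample_wise_lpc(x, A, zi=zi)
y.sum().backward()
print(zi.grad.shape)  # torch.Size([10, 3])
```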
Time-invariant filtering
In the time-invariant setting, $A_{t,i} = A_{1,i} \ \forall t \in [1, T]$ and the filter is simplified to
$$ y_t = x_t - \sum_{i=1}^N a_i y_{t-i}, \quad \mathbf{a} = A_{1,:}. $$
The gradients $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ are obtained by filtering $\frac{\partial \mathcal{L}}{\partial \mathbf{y}}$ with $\mathbf{a}$ backwards in time, the same as in the time-varying case.
$\frac{\partial \mathcal{L}}{\partial \mathbf{a}}$ reduces to a single vector-matrix multiplication:
$$ \frac{\partial \mathcal{L}}{\partial \mathbf{a}^T} = -\frac{\partial \mathcal{L}}{\partial \mathbf{x}^T} \begin{vmatrix} y_0 & y_{-1} & \dots & y_{-N+1} \\ y_1 & y_0 & \dots & y_{-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ y_{T-1} & y_{T-2} & \dots & y_{T-N} \end{vmatrix}. $$
This algorithm is more efficient than [^2] because it needs only one pass of filtering to compute both gradients, while the latter needs two.
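A time-invariant filter is just the special case where the same coefficient row is repeated along the time axis, so the result can be cross-checked against an ordinary IIR filter. A sketch under the assumption that torchaudio (not a dependency of this package) is installed and that `torchaudio.functional.lfilter` with `clamp=False` is used for the reference:

```python
import torch
import torchaudio
from torchlpc import sample_wise_lpc

T, N = 100, 2
x = torch.randn(1, T, dtype=torch.double)
a = torch.tensor([0.5, -0.2], dtype=torch.double)  # a_1 ... a_N (a stable choice)

# Repeat the same coefficients at every time step.
A = a.expand(1, T, N).contiguous()
y_tv = sample_wise_lpc(x, A)

# Equivalent all-pole filter: denominator [1, a_1, ..., a_N], numerator [1, 0, ..., 0].
a_coeffs = torch.cat([torch.ones(1, dtype=torch.double), a])
b_coeffs = torch.zeros(N + 1, dtype=torch.double)
b_coeffs[0] = 1.0
y_ti = torchaudio.functional.lfilter(x, a_coeffs, b_coeffs, clamp=False)

print(torch.allclose(y_tv, y_ti))  # expected: True
```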
[^1]: Differentiable All-pole Filters for Time-varying Audio Systems.
[^2]: Singing Voice Synthesis Using Differentiable LPC and Glottal-Flow-Inspired Wavetables.
TODO
- Use PyTorch C++ extension for faster computation.
- Use native CUDA kernels for GPU computation.
- Add examples.
Citation
If you find this repository useful in your research, please cite our work with the following BibTeX entry:
```bibtex
@misc{ycy2024diffapf,
      title={Differentiable All-pole Filters for Time-varying Audio Systems},
      author={Chin-Yun Yu and Christopher Mitcheltree and Alistair Carson and Stefan Bilbao and Joshua D. Reiss and György Fazekas},
      year={2024},
      eprint={2404.07970},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```