Fast, efficient, and differentiable dynamic range compression/expansion/limiting for PyTorch.
TorchComp
Differentiable dynamic range controller in PyTorch.
Compressor/Expander gain function
This function computes the gain reduction $g[n]$ of a compressor/expander. It takes the RMS of the input signal $x[n]$ and the compressor/expander parameters, and returns the gain $g[n]$ on a linear scale. To use it as a regular compressor/expander, multiply the result $g[n]$ with the signal $x[n]$.
Function signature
```python
def compexp_gain(
    x_rms: torch.Tensor,
    comp_thresh: Union[torch.Tensor, float],
    comp_ratio: Union[torch.Tensor, float],
    exp_thresh: Union[torch.Tensor, float],
    exp_ratio: Union[torch.Tensor, float],
    at: Union[torch.Tensor, float],
    rt: Union[torch.Tensor, float],
) -> torch.Tensor:
    """Compressor-Expander gain function.

    Args:
        x_rms (torch.Tensor): Input signal RMS.
        comp_thresh (torch.Tensor): Compressor threshold in dB.
        comp_ratio (torch.Tensor): Compressor ratio.
        exp_thresh (torch.Tensor): Expander threshold in dB.
        exp_ratio (torch.Tensor): Expander ratio.
        at (torch.Tensor): Attack time.
        rt (torch.Tensor): Release time.

    Shape:
        - x_rms: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - comp_thresh: :math:`(B,)` or a scalar.
        - comp_ratio: :math:`(B,)` or a scalar.
        - exp_thresh: :math:`(B,)` or a scalar.
        - exp_ratio: :math:`(B,)` or a scalar.
        - at: :math:`(B,)` or a scalar.
        - rt: :math:`(B,)` or a scalar.
    """
```
Note: `x_rms` should be non-negative. You can calculate it using $\sqrt{x^2[n]}$ and smooth it with `avg`.
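For orientation, here is a minimal usage sketch. It assumes `compexp_gain` and `avg` are importable from the top-level `torchcomp` package; the `avg_coef`, `at`, and `rt` values are illustrative placeholders, so check the package for their exact scale:

```python
import torch
from torchcomp import avg, compexp_gain  # assumed top-level exports

x = torch.randn(2, 44100)  # (B, T) batch of audio

# Non-negative level estimate, per the note above:
# take sqrt(x^2[n]) = |x[n]| and smooth it with avg.
x_rms = avg(x.abs(), avg_coef=0.01)

g = compexp_gain(
    x_rms,
    comp_thresh=-12.0,  # dB
    comp_ratio=4.0,
    exp_thresh=-48.0,  # dB
    exp_ratio=0.5,
    at=0.01,  # placeholder attack value
    rt=0.05,  # placeholder release value
)
y = x * g  # apply the gain to the signal
```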
Equations
$$ x_{\rm log}[n] = 20 \log_{10} x_{\rm rms}[n] $$
$$ g_{\rm log}[n] = \min\left(0, \left(1 - \frac{1}{CR}\right)\left(CT - x_{\rm log}[n]\right), \left(1 - \frac{1}{ER}\right)\left(ET - x_{\rm log}[n]\right)\right) $$
$$ g[n] = 10^{g_{\rm log}[n] / 20} $$
$$ \hat{g}[n] = \begin{cases} \alpha_{\rm at} g[n] + (1 - \alpha_{\rm at}) \hat{g}[n-1] & \text{if } g[n] < \hat{g}[n-1] \\ \alpha_{\rm rt} g[n] + (1 - \alpha_{\rm rt}) \hat{g}[n-1] & \text{otherwise} \end{cases} $$
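The first three equations are pointwise and can be transcribed directly; a minimal sketch in plain PyTorch (the recursive smoothing in the last equation is the part the package implements efficiently):

```python
import torch

def static_compexp_gain(x_rms, CT, CR, ET, ER, eps=1e-7):
    # x_log[n] = 20 log10(x_rms[n]); the clamp keeps the log finite at silence
    x_log = 20 * torch.log10(x_rms.clamp(min=eps))
    # g_log[n] = min(0, compressor slope term, expander slope term)
    g_log = torch.minimum(
        torch.zeros_like(x_log),
        torch.minimum((1 - 1 / CR) * (CT - x_log), (1 - 1 / ER) * (ET - x_log)),
    )
    return 10 ** (g_log / 20)  # back to linear gain
```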
Block diagram
```mermaid
graph TB
    input((x))
    output((g))
    amp2db[amp2db]
    db2amp[db2amp]
    min[Min]
    delay[z^-1]
    zero( 0 )
    input --> amp2db --> neg["*(-1)"] --> plusCT["+CT"] & plusET["+ET"]
    plusCT --> multCS["*(1 - 1/CR)"]
    plusET --> multES["*(1 - 1/ER)"]
    zero & multCS & multES --> min --> db2amp
    db2amp & delay --> ifelse{<}
    output --> delay --> multATT["*(1 - AT)"] & multRTT["*(1 - RT)"]
    subgraph Compressor
        ifelse -->|yes| multAT["*AT"]
        subgraph Attack
            multAT & multATT --> plus1(("+"))
        end
        ifelse -->|no| multRT["*RT"]
        subgraph Release
            multRT & multRTT --> plus2(("+"))
        end
    end
    plus1 & plus2 --> output
```
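The Attack/Release branches above implement the $\hat{g}[n]$ recursion. A slow but explicit per-sample transcription, assuming an initial gain of 1:

```python
import torch

def smooth_gain(g, at, rt):
    # g: (B, T) linear gain; at, rt: smoothing coefficients
    at = torch.as_tensor(at, dtype=g.dtype, device=g.device)
    rt = torch.as_tensor(rt, dtype=g.dtype, device=g.device)
    g_hat = torch.empty_like(g)
    state = torch.ones(g.shape[0], dtype=g.dtype, device=g.device)  # g_hat[-1] = 1 (assumed)
    for n in range(g.shape[1]):
        coef = torch.where(g[:, n] < state, at, rt)  # attack when the gain is falling
        state = coef * g[:, n] + (1 - coef) * state
        g_hat[:, n] = state
    return g_hat
```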
Limiter gain function
This function calculates the gain reduction $g[n]$ for a limiter. To use it as a regular limiter, multiply the result $g[n]$ with the input signal $x[n]$.
Function signature
```python
def limiter_gain(
    x: torch.Tensor,
    threshold: torch.Tensor,
    at: torch.Tensor,
    rt: torch.Tensor,
) -> torch.Tensor:
    """Limiter gain function.

    This implementation uses the same attack and release times for level
    detection and gain smoothing.

    Args:
        x (torch.Tensor): Input signal.
        threshold (torch.Tensor): Limiter threshold in dB.
        at (torch.Tensor): Attack time.
        rt (torch.Tensor): Release time.

    Shape:
        - x: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - threshold: :math:`(B,)` or a scalar.
        - at: :math:`(B,)` or a scalar.
        - rt: :math:`(B,)` or a scalar.
    """
```
Equations
$$ x_{\rm peak}[n] = \begin{cases} \alpha_{\rm at} |x[n]| + (1 - \alpha_{\rm at}) x_{\rm peak}[n-1] & \text{if } |x[n]| > x_{\rm peak}[n-1] \\ \alpha_{\rm rt} |x[n]| + (1 - \alpha_{\rm rt}) x_{\rm peak}[n-1] & \text{otherwise} \end{cases} $$
$$ g[n] = \min\left(1, \frac{10^{T/20}}{x_{\rm peak}[n]}\right) $$
$$ \hat{g}[n] = \begin{cases} \alpha_{\rm at} g[n] + (1 - \alpha_{\rm at}) \hat{g}[n-1] & \text{if } g[n] < \hat{g}[n-1] \\ \alpha_{\rm rt} g[n] + (1 - \alpha_{\rm rt}) \hat{g}[n-1] & \text{otherwise} \end{cases} $$
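Putting the three equations together, a per-sample reference might look like the following (loop-based and slow, for clarity only; the initial states are assumed to be zero peak and unity gain):

```python
import torch

def limiter_gain_ref(x, T, at, rt):
    at = torch.as_tensor(at, dtype=x.dtype, device=x.device)
    rt = torch.as_tensor(rt, dtype=x.dtype, device=x.device)
    x_abs = x.abs()
    thresh = 10 ** (T / 20)  # threshold, dB -> linear
    peak = torch.zeros(x.shape[0], dtype=x.dtype, device=x.device)  # x_peak[-1] = 0 (assumed)
    g_hat = torch.ones(x.shape[0], dtype=x.dtype, device=x.device)  # g_hat[-1] = 1 (assumed)
    out = torch.empty_like(x)
    for n in range(x.shape[1]):
        # Peak detector: attack when the rectified input exceeds the state.
        coef = torch.where(x_abs[:, n] > peak, at, rt)
        peak = coef * x_abs[:, n] + (1 - coef) * peak
        # Static gain: pull peaks down to the threshold, never amplify.
        g = torch.clamp(thresh / peak.clamp(min=1e-7), max=1.0)
        # Gain smoothing: attack when the gain is falling.
        coef = torch.where(g < g_hat, at, rt)
        g_hat = coef * g + (1 - coef) * g_hat
        out[:, n] = g_hat
    return out
```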
Block diagram
```mermaid
graph TB
    input((x))
    output((g))
    peak((x_peak))
    abs[abs]
    delay[z^-1]
    zero( 0 )
    ifelse1{>}
    ifelse2{<}
    input --> abs --> ifelse1
    subgraph Peak detector
        ifelse1 -->|yes| multAT["*AT"]
        subgraph at1 [Attack]
            multAT & multATT --> plus1(("+"))
        end
        ifelse1 -->|no| multRT["*RT"]
        subgraph rt1 [Release]
            multRT & multRTT --> plus2(("+"))
        end
    end
    plus1 & plus2 --> peak
    peak --> delay --> multATT["*(1 - AT)"] & multRTT["*(1 - RT)"] & ifelse1
    peak --> amp2db[amp2db] --> neg["*(-1)"] --> plusT["+T"]
    zero & plusT --> min[Min] --> db2amp[db2amp] --> ifelse2{<}
    subgraph gain smoothing
        ifelse2 -->|yes| multAT2["*AT"]
        subgraph at2 [Attack]
            multAT2 & multATT2 --> plus3(("+"))
        end
        ifelse2 -->|no| multRT2["*RT"]
        subgraph rt2 [Release]
            multRT2 & multRTT2 --> plus4(("+"))
        end
    end
    output --> delay2[z^-1] --> multATT2["*(1 - AT)"] & multRTT2["*(1 - RT)"] & ifelse2
    plus3 & plus4 --> output
```
Average filter
Function signature
```python
def avg(rms: torch.Tensor, avg_coef: Union[torch.Tensor, float]):
    """Compute the running average of a signal.

    Args:
        rms (torch.Tensor): Input signal.
        avg_coef (torch.Tensor): Coefficient for the average RMS.

    Shape:
        - rms: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - avg_coef: :math:`(B,)` or a scalar.
    """
```
Equations
$$ \hat{x}_{\rm rms}[n] = \alpha_{\rm avg} x_{\rm rms}[n] + (1 - \alpha_{\rm avg}) \hat{x}_{\rm rms}[n-1] $$
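This is a one-pole lowpass (an exponential moving average); a direct loop transcription, assuming a zero initial state:

```python
import torch

def avg_ref(rms, avg_coef):
    out = torch.empty_like(rms)
    state = torch.zeros(rms.shape[0], dtype=rms.dtype, device=rms.device)  # zero initial state (assumed)
    for n in range(rms.shape[1]):
        state = avg_coef * rms[:, n] + (1 - avg_coef) * state
        out[:, n] = state
    return out
```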
TODO
- CUDA acceleration in Numba
- PyTorch CPP extension
- Native CUDA extension
- Forward mode autograd
- Examples