
An Open-Source Library for Frequency-Domain Differentiable Audio Processing


flamo

PyPI | ICASSP25-arXiv

Open-source library for frequency-domain differentiable audio processing.

It contains differentiable implementations of common LTI audio system modules with learnable parameters.


⚙️ Optimization of audio LTI systems

Available differentiable audio signal processors - in flamo.processor.dsp:

  • Gains : Gains, Matrices, Householder Matrices
  • Filters : Biquads, State Variable Filters (SVF), Graphic Equalizers (GEQ), Parametric Equalizers (PEQ - not released yet)
  • Delays : Integer Delays, Fractional Delays

Transforms - in flamo.processor.dsp:

  • Transform : FFT, iFFT, time anti-aliasing enabled FFT and iFFT
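The anti-aliased transforms address the time-aliasing that arises when a recursive filter's frequency response is sampled at a finite number of bins. The idea, shown here as a minimal numpy sketch rather than flamo's actual implementation, is to damp the system by a per-sample factor gamma before the transform (equivalent to evaluating the transfer function slightly outside the unit circle) and undo the damping afterwards, which attenuates the wrapped-around tail:

```python
import numpy as np

fs = 48000
nfft = 2 * fs
a = 0.99999                          # one-pole IIR h[n] = a**n, much longer than nfft
n = np.arange(nfft)
h_true = a ** n

# Sampling H(z) = 1 / (1 - a z^-1) at nfft bins and inverting time-aliases the IR.
w = 2 * np.pi * np.fft.rfftfreq(nfft)
h_plain = np.fft.irfft(1 / (1 - a * np.exp(-1j * w)), nfft)

# Damping by gamma evaluates H outside the unit circle, so the wrapped tail is
# attenuated (here by ~30 dB over nfft samples, an assumed setting); undoing
# the damping afterwards recovers a much cleaner impulse response.
gamma = 10 ** (-30 / 20 / nfft)
h_damp = np.fft.irfft(1 / (1 - a * gamma * np.exp(-1j * w)), nfft) * gamma ** (-n)

err_plain = np.max(np.abs(h_plain - h_true))
err_damp = np.max(np.abs(h_damp - h_true))
```

This is what the `alias_decay_db` argument seen later in the examples controls: 0 disables the damping.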

Utilities and system design - in flamo.processor.system:

  • Series : Serial chaining of differentiable systems
  • Recursion : Closed loop with assignable feedforward and feedback paths
  • Shell: Container class for safe interaction between system, dataset, and loss functions

Optimization - in flamo.optimize:

  • Trainer : Handling of the training and validation steps
  • Dataset : Customizable dataset class and helper methods
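To give a flavor of what a closed loop computes in the frequency domain, here is a plain numpy sketch (using one common sign convention; flamo's Recursion may differ in detail): a delay in the feedforward path and a gain in the feedback path produce geometrically decaying echoes.

```python
import numpy as np

nfft = 1024
w = 2 * np.pi * np.fft.rfftfreq(nfft)

FF = np.exp(-1j * 100 * w)          # feedforward path: a 100-sample delay
FB = np.full_like(FF, 0.5)          # feedback path: broadband gain of 0.5

# Closed-loop transfer function: output = FF * (input + FB * output)
H = FF / (1 - FF * FB)

# Echoes at 100, 200, 300, ... samples with amplitudes 1, 0.5, 0.25, ...
h = np.fft.irfft(H, nfft)
```

Because everything stays in the frequency domain, the loop never has to be unrolled in time, which is what makes recursive structures cheap to differentiate through.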

🛠️ Installation

To install it via pip in a new Python virtual environment flamo-env:

python3.10 -m venv .flamo-env
source .flamo-env/bin/activate
pip install flamo

If you are using conda, you might need to install libsndfile manually:

conda create -n flamo-env python=3.10
conda activate flamo-env
pip install flamo
conda install -c conda-forge libsndfile

For local installation: clone the repository and install the dependencies in a new Python virtual environment flamo-env

git clone https://github.com/gdalsanto/flamo
cd flamo
python3.10 -m venv .flamo-env
source .flamo-env/bin/activate
pip install -e .

Note that flamo requires python>=3.10.


💻 How to use the library

We included a few examples in ./examples that take you through the library's API.

The following example demonstrates how to optimize the parameters of Biquad filters to match a target magnitude response. This is just a toy example; you can create and optimize much more complex systems by cascading modules either serially or recursively.

Import modules

import torch
import torch.nn as nn
from flamo.optimize.dataset import Dataset, load_dataset
from flamo.optimize.trainer import Trainer
from flamo.processor import dsp, system
from flamo.functional import signal_gallery, highpass_filter

Define parameters and target response with randomized cutoff frequency and gains

in_ch, out_ch = 1, 2    # input and output channels
n_sections = 2  # number of cascaded biquad sections
fs = 48000      # sampling frequency
nfft = fs*2     # number of fft points

b, a = highpass_filter(
    fc=torch.tensor(fs/2)*torch.rand(size=(n_sections, out_ch, in_ch)), 
    gain=torch.tensor(-1) + (torch.tensor(2))*torch.rand(size=(n_sections, out_ch, in_ch)), 
    fs=fs)
B = torch.fft.rfft(b, nfft, dim=0)
A = torch.fft.rfft(a, nfft, dim=0)
target_filter = torch.prod(B, dim=1) / torch.prod(A, dim=1)
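The product over the section dimension works because cascading LTI filters convolves their coefficients in time, which corresponds to multiplying their (sufficiently zero-padded) spectra. A small numpy check of that identity:

```python
import numpy as np

nfft = 64
b1 = np.array([1.0, -1.8, 0.81])    # two arbitrary biquad numerators
b2 = np.array([1.0, -0.5, 0.25])

# Cascading sections convolves their coefficients in time...
b_cascade = np.convolve(b1, b2)

# ...which is a plain product of their zero-padded spectra.
lhs = np.fft.rfft(b_cascade, nfft)
rhs = np.fft.rfft(b1, nfft) * np.fft.rfft(b2, nfft)
```

The same holds for the denominators, so the cascade's overall response is prod(B) / prod(A), as computed above.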

Define an instance of learnable Biquads

filt = dsp.Biquad(
    size=(out_ch, in_ch), 
    n_sections=n_sections,
    filter_type='highpass',
    nfft=nfft,
    fs=fs,
    requires_grad=True,
    alias_decay_db=0,
)   
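For intuition about what each learnable section represents: a standard RBJ audio-EQ-cookbook highpass biquad (used here for illustration; flamo's highpass parameterization may differ in detail) is fully determined by its cutoff and Q, with zero magnitude at DC and unity at Nyquist:

```python
import numpy as np

def rbj_highpass(fc, fs, q=1 / np.sqrt(2)):
    # RBJ audio-EQ-cookbook highpass biquad coefficients (illustrative;
    # not necessarily flamo's exact parameterization).
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    c = np.cos(w0)
    b = np.array([(1 + c) / 2, -(1 + c), (1 + c) / 2])
    a = np.array([1 + alpha, -2 * c, 1 - alpha])
    return b, a

b, a = rbj_highpass(fc=1000, fs=48000)

# Response at DC (z = 1) and at Nyquist (z = -1):
H_dc = b.sum() / a.sum()
H_ny = (b * [1, -1, 1]).sum() / (a * [1, -1, 1]).sum()
```

With `requires_grad=True`, the optimizer adjusts such section parameters by gradient descent instead of fixing them by hand.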

Use the Shell class to add input and output layers and to get the magnitude response at initialization. Optimization is done in the frequency domain: the input is an impulse in the time domain, so the input layer performs the Fourier transform; the target is a magnitude response, so the output layer takes the absolute value of the filter's output.

input_layer = dsp.FFT(nfft)
output_layer = dsp.Transform(transform=lambda x : torch.abs(x))
model = system.Shell(core=filt, input_layer=input_layer, output_layer=output_layer)    
estimation_init = model.get_freq_response()

Set up the optimization framework and launch it. The Trainer class bundles the model, training parameters, and training/validation steps in one place.

input = signal_gallery(1, n_samples=nfft, n=in_ch, signal_type='impulse', fs=fs)
target = torch.einsum('...ji,...i->...j', target_filter, input_layer(input))
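The einsum applies the out_ch-by-in_ch filter matrix to the input spectrum independently at every frequency bin, i.e. a batched per-bin matrix-vector product. A small numpy equivalent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, out_ch, in_ch = 5, 2, 1
H = rng.standard_normal((n_bins, out_ch, in_ch)) \
    + 1j * rng.standard_normal((n_bins, out_ch, in_ch))
X = rng.standard_normal((n_bins, in_ch)) + 1j * rng.standard_normal((n_bins, in_ch))

# Batched per-bin matrix-vector product, same subscripts as the torch.einsum above.
Y = np.einsum('...ji,...i->...j', H, X)

# The same computation spelled out bin by bin:
Y_loop = np.stack([H[k] @ X[k] for k in range(n_bins)])
```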

dataset = Dataset(
    input=input,
    target=torch.abs(target),
    expand=100,
)
train_loader, valid_loader = load_dataset(dataset, batch_size=1)

trainer = Trainer(model, max_epochs=10, lr=1e-2, train_dir="./output")
trainer.register_criterion(nn.MSELoss(), 1)

trainer.train(train_loader, valid_loader)

and get the resulting response after optimization!

estimation = model.get_freq_response()

📖 Reference

This work has been submitted to ICASSP 2025. The pre-print is available on arXiv.

Dal Santo, G., De Bortoli, G. M., Prawda, K., Schlecht, S. J., & Välimäki, V. (2024). FLAMO: An Open-Source Library for Frequency-Domain Differentiable Audio Processing. arXiv preprint arXiv:2409.08723.
