
PyTorch implementation of a subset of librosa functions.


TorchLibrosa: PyTorch implementation of Librosa

This codebase provides a PyTorch implementation of some librosa functions. If you previously trained on CPU-extracted librosa features and now want GPU acceleration during training and evaluation, TorchLibrosa provides features that are almost identical to those of the corresponding librosa functions (numerical difference less than 1e-5).

Install

$ pip install torchlibrosa

Example 1

Extract a log mel spectrogram with TorchLibrosa.

import torch
import torchlibrosa as tl

batch_size = 16
sample_rate = 22050
win_length = 2048
hop_length = 512
n_mels = 128

batch_audio = torch.empty(batch_size, sample_rate).uniform_(-1, 1)  # (batch_size, sample_rate)

# TorchLibrosa feature extractor, equivalent to librosa.feature.melspectrogram()
feature_extractor = torch.nn.Sequential(
    tl.Spectrogram(
        hop_length=hop_length,
        win_length=win_length,
    ), tl.LogmelFilterBank(
        sr=sample_rate,
        n_mels=n_mels,
        is_log=False,  # Default is True
    ))
batch_feature = feature_extractor(batch_audio) # (batch_size, 1, time_steps, mel_bins)
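
Because the feature extractor above is an ordinary torch.nn.Sequential module, it can be moved to a GPU like any other PyTorch module to get the acceleration mentioned in the introduction. A minimal sketch, assuming a CUDA device may or may not be available:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
feature_extractor = feature_extractor.to(device)            # move the STFT and mel filter weights
batch_feature = feature_extractor(batch_audio.to(device))   # computed on the GPU when available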

Example 2

Extract the spectrogram, log mel spectrogram, STFT, and ISTFT with TorchLibrosa.

import torch
import torchlibrosa as tl

batch_size = 16
sample_rate = 22050
win_length = 2048
hop_length = 512
n_mels = 128

batch_audio = torch.empty(batch_size, sample_rate).uniform_(-1, 1)  # (batch_size, sample_rate)

# Spectrogram
spectrogram_extractor = tl.Spectrogram(n_fft=win_length, hop_length=hop_length)
sp = spectrogram_extractor(batch_audio)   # (batch_size, 1, time_steps, freq_bins)

# Log mel spectrogram
logmel_extractor = tl.LogmelFilterBank(sr=sample_rate, n_fft=win_length, n_mels=n_mels)
logmel = logmel_extractor(sp)   # (batch_size, 1, time_steps, mel_bins)

# STFT
stft_extractor = tl.STFT(n_fft=win_length, hop_length=hop_length)
(real, imag) = stft_extractor(batch_audio)
# real: (batch_size, 1, time_steps, freq_bins), imag: (batch_size, 1, time_steps, freq_bins)

# ISTFT
istft_extractor = tl.ISTFT(n_fft=win_length, hop_length=hop_length)
y = istft_extractor(real, imag, length=batch_audio.shape[-1])    # (batch_size, samples_num)
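
As a quick sanity check (not part of the original example), the ISTFT output can be compared against the input waveform; with the matching STFT parameters above, the round-trip reconstruction error should be close to zero:

reconstruction_error = torch.max(torch.abs(y - batch_audio))
print(reconstruction_error)  # expected to be tiny (floating-point error only)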

Example 3

Check the compatibility of TorchLibrosa with librosa. The numerical difference should be less than 1e-5.

python3 torchlibrosa/stft.py --device='cuda'    # --device='cpu' | 'cuda'
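
A comparison can also be done directly in Python. The following is a minimal sketch, not the bundled test script: it assumes the default Hann window, centered frames, reflect padding, and a power spectrogram on both sides, and these defaults may differ slightly between librosa and torchlibrosa versions.

import numpy as np
import librosa
import torch
import torchlibrosa as tl

audio = np.random.uniform(-1, 1, 22050).astype(np.float32)

# librosa power spectrogram, shape (freq_bins, time_steps)
sp_librosa = np.abs(librosa.stft(audio, n_fft=2048, hop_length=512,
                                 window='hann', center=True, pad_mode='reflect')) ** 2

# torchlibrosa power spectrogram, output shape (1, 1, time_steps, freq_bins)
extractor = tl.Spectrogram(n_fft=2048, hop_length=512)
with torch.no_grad():
    sp_torch = extractor(torch.from_numpy(audio[None, :]))[0, 0].numpy().T

# Compare over the common number of frames (frame counts can differ slightly
# depending on padding conventions in each library version).
n = min(sp_librosa.shape[1], sp_torch.shape[1])
print(np.max(np.abs(sp_librosa[:, :n] - sp_torch[:, :n])))  # expected to be below 1e-5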

Contact

Qiuqiang Kong, qiuqiangkong@gmail.com

Cite

[1] Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D. Plumbley. "PANNs: Large-scale pretrained audio neural networks for audio pattern recognition." IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2020): 2880-2894.

External links

Other related repos include:

torchaudio: https://github.com/pytorch/audio

Asteroid-filterbanks: https://github.com/asteroid-team/asteroid-filterbanks

Kapre: https://github.com/keunwoochoi/kapre
