PaSST pre-trained model for the HEAR 2021 NeurIPS Competition
PaSST package for the HEAR 2021 NeurIPS Challenge: Holistic Evaluation of Audio Representations.
This is an implementation of PaSST (Efficient Training of Audio Transformers with Patchout) for the HEAR 2021 NeurIPS Challenge: Holistic Evaluation of Audio Representations.
CUDA version
This implementation is tested with CUDA 11.1 and the following torch installation:
pip3 install torch==1.8.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
but it should also work with newer versions of CUDA and torch.
Installation
Install the latest version of this package:
pip install hear21passt
The models follow the common API of HEAR 2021:
hear-validator --model hear21passt.base.pt hear21passt.base
hear-validator --model noweights.txt hear21passt.base2levelF
hear-validator --model noweights.txt hear21passt.base2levelmel
There are three modules available: hear21passt.base, hear21passt.base2level, and hear21passt.base2levelmel. Example usage:
import torch
from hear21passt.base import load_model, get_scene_embeddings, get_timestamp_embeddings

model = load_model().cuda()
seconds = 15
# batch of 3 dummy clips, 15 seconds each, at the models' 32 kHz sample rate
audio = torch.ones((3, 32000 * seconds)) * 0.5
embed, time_stamps = get_timestamp_embeddings(audio, model)
print(embed.shape)  # (3, n_timestamps, embedding_dim)
embed = get_scene_embeddings(audio, model)
print(embed.shape)  # (3, embedding_dim)
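Per the HEAR 2021 common API, the timestamps (in milliseconds) are returned alongside the timestamp embeddings, one row per clip in the batch:

print(time_stamps.shape)  # (3, n_timestamps)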
Getting the Logits/Class Labels
You can get the logits (before the sigmoid activation) for the 527 classes of AudioSet:
from hear21passt.base import load_model
model = load_model(mode="logits").cuda()
logits = model(wave_signal)
The class label indices can be found here.
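Since these are pre-sigmoid logits of a multi-label classifier, a sigmoid turns them into independent per-class probabilities. A minimal sketch (the dummy wave_signal and the top-5 readout are illustrative only; mapping indices to label names requires the AudioSet class-label list):

import torch
from hear21passt.base import load_model

model = load_model(mode="logits").cuda()
model.eval()  # disable the training-time spectrogram augmentations
wave_signal = torch.ones((1, 32000 * 10)).cuda() * 0.5  # dummy 10-second clip
with torch.no_grad():
    logits = model(wave_signal)  # (1, 527)
probs = torch.sigmoid(logits)    # AudioSet is multi-label: sigmoid, not softmax
top5 = torch.topk(probs[0], k=5)
print(top5.indices.tolist())     # indices into the 527 AudioSet classes
print(top5.values.tolist())      # corresponding probabilities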
You can also use different pre-trained models, for example, the model trained with Knowledge Distillation (KD), passt_s_kd_p16_128_ap486:
from hear21passt.base import get_basic_model
model = get_basic_model(mode="logits", arch="passt_s_kd_p16_128_ap486")
logits = model(wave_signal)
Supporting longer clips
For inputs longer than 10 seconds, the get_scene_embeddings method computes the average of the embeddings of overlapping 10-second windows. Depending on the application, it may be useful to use a pre-trained model that can extract embeddings from 20 or 30 seconds of audio without averaging. These variants have time positional encodings pre-trained for 20/30 seconds:
# from version 0.0.18, it's possible to use:
from hear21passt.base20sec import load_model # up to 20 seconds of audio.
# or
from hear21passt.base30sec import load_model # up to 30 seconds of audio.
model = load_model(mode="logits").cuda()
logits = model(wave_signal)
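As a quick sanity check (a sketch with a dummy clip; the 527-class output assumes the logits mode above), these variants accept the longer waveform directly, with no window averaging:

import torch
from hear21passt.base30sec import load_model

model = load_model(mode="logits").cuda()
audio = torch.ones((1, 32000 * 25)).cuda() * 0.5  # 25-second dummy clip at 32 kHz
with torch.no_grad():
    logits = model(audio)
print(logits.shape)  # (1, 527)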
Loading other pre-trained models for logits or fine-tuning
Each pre-trained model has a specific frequency/time positional encoding, so it's necessary to select the correct input shape to be able to load the model. The important variables for loading are input_tdim, fstride, and tstride, which specify the number of spectrogram time frames, the patch stride over frequency, and the patch stride over time, respectively.
import torch
from hear21passt.base import get_basic_model, get_model_passt
model = get_basic_model(mode="logits")
logits = model(some_wave_signal)
# Examples of other pre-trained models using the same spectrograms
# pre-trained on OpenMIC-2018
model.net = get_model_passt(arch="openmic", n_classes=20)
# pre-trained on FSD50K
model.net = get_model_passt(arch="fsd50k", n_classes=200)
# pre-trained on FSD50K without patch overlap (faster)
model.net = get_model_passt(arch="fsd50k-n", n_classes=200, fstride=16, tstride=16)
# these models are trained on 10-second clips from AudioSet but accept longer audio (20 s or 30 s);
# they are trained by sampling a 10-second sequence of the time positional encodings
model.net = get_model_passt("passt_20sec", input_tdim=2000)
model.net = get_model_passt("passt_30sec", input_tdim=3000)
If you provide the wrong spectrograms, the model may fail silently by generating low-quality embeddings and logits. Make sure you use the correct spectrogram config for the selected pre-trained model. Models with higher spectrogram resolution need the matching spectrogram config:
from hear21passt.models.preprocess import AugmentMelSTFT

# high-res model pre-trained on AudioSet (hopsize=160 for this pre-trained model)
model.net = get_model_passt("stfthop160", input_tdim=2000)
model.mel = AugmentMelSTFT(n_mels=128, sr=32000, win_length=800, hopsize=160,
                           n_fft=1024, freqm=48, timem=192, htk=False,
                           fmin=0.0, fmax=None, norm=1,
                           fmin_aug_range=10, fmax_aug_range=2000)

# even higher-res model pre-trained on AudioSet (hopsize=100 for this pre-trained model)
model.net = get_model_passt("stfthop100", input_tdim=3200)
model.mel = AugmentMelSTFT(n_mels=128, sr=32000, win_length=800, hopsize=100,
                           n_fft=1024, freqm=48, timem=192, htk=False,
                           fmin=0.0, fmax=None, norm=1,
                           fmin_aug_range=10, fmax_aug_range=2000)
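A rule of thumb that follows from the examples above: input_tdim is the clip length in samples divided by the spectrogram hop size (e.g., 10 s at 32000 Hz with hopsize=160 gives 2000 frames). A tiny helper to illustrate; expected_input_tdim is hypothetical and not part of the package, and the 320 default is the standard models' hop size:

def expected_input_tdim(seconds: float, sr: int = 32000, hopsize: int = 320) -> int:
    """Approximate number of spectrogram time frames: samples / hopsize."""
    return int(seconds * sr / hopsize)

print(expected_input_tdim(10, hopsize=160))  # 2000, matches "stfthop160" above
print(expected_input_tdim(10, hopsize=100))  # 3200, matches "stfthop100" above
print(expected_input_tdim(20))               # 2000, matches "passt_20sec" above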