
rVADfast - a fast and robust unsupervised VAD

Project description

rVADfast

rVADfast is a Python library implementing an unsupervised, fast method for robust voice activity detection (rVAD), as presented in "rVAD: An Unsupervised Segment-Based Robust Voice Activity Detection Method", Computer Speech & Language, 2020 (also available as an arXiv version). More information is available on the rVAD GitHub page.

The rVAD paper published in Computer Speech & Language won the International Speech Communication Association (ISCA) 2022 Best Research Paper Award.

The rVAD method consists of two passes of denoising followed by a VAD stage. It has been applied as a preprocessor in a wide range of applications, such as speech recognition, speaker identification, language identification, age and gender identification, self-supervised learning, human-robot interaction, and audio archive segmentation, as can be seen on Google Scholar.

The method is unsupervised, which makes it applicable to a broad range of acoustic environments, and it is optimized for both noisy and clean conditions.

Out of the box, rVAD ranks 4th among 27 supervised and unsupervised systems in the Fearless Steps Speech Activity Detection Challenge.

As of 2023, the rVAD paper ranks 6th among the most cited articles published in Computer Speech & Language since 2018.

Usage

The rVADfast library is available as a Python package, installable via:

pip install rVADfast

After installation, you can import the rVADfast class, instantiate a VAD object, and use it to generate VAD labels:

import audiofile
from rVADfast import rVADfast

# Instantiate the VAD with default settings
vad = rVADfast()

path_to_audiofile = "some_audio_file.wav"

# Read the waveform and run rVADfast on it
waveform, sampling_rate = audiofile.read(path_to_audiofile)
vad_labels, vad_timestamps = vad(waveform, sampling_rate)
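
Should you want to keep only the detected speech, a possible continuation of the snippet above is sketched below. It assumes, purely for illustration, that the input file is mono and that vad_labels is a binary (0/1) mask with one label per waveform sample; check the package documentation for the actual label resolution before relying on this.

import numpy as np

# Illustration only: assumes a mono waveform and one 0/1 label per sample.
# If the labels are frame-level instead, they would first need to be expanded
# to sample resolution before masking.
speech_only = waveform[np.asarray(vad_labels) == 1]
audiofile.write("trimmed_audio_file.wav", speech_only, sampling_rate)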

The package also contains functionality for processing folders of audio files, either to generate VAD labels or to trim non-speech segments from the audio files. This is done via the rVADfast.process module, which provides two functions for processing audio files, process.rVADfast_single_process and process.rVADfast_multi_process, the latter utilizing multiple CPUs. Additionally, a processing script can be invoked from the command line by executing:

rVADfast_process --root <audio_file_root> --save_folder <path_to_save_files> --ext <audio_file_extension> --n_workers <number_of_multiprocessing_workers>

For an explanation of the additional arguments available for the command-line tool, you can use:

rVADfast_process --help
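
For illustration only, folder processing can also be hand-rolled on top of the rVADfast callable shown above. The helper below is not part of the package's process module; the function name, its arguments, and the output format (one label per line in a text file) are assumptions made for this sketch.

from pathlib import Path

import audiofile
import numpy as np
from rVADfast import rVADfast

def label_folder(root, save_folder, ext="wav"):
    """Hypothetical helper: run rVADfast on every audio file under root and
    save its labels; the real package provides rVADfast.process for this."""
    vad = rVADfast()
    out_dir = Path(save_folder)
    out_dir.mkdir(parents=True, exist_ok=True)

    for audio_path in sorted(Path(root).rglob(f"*.{ext}")):
        waveform, sampling_rate = audiofile.read(str(audio_path))
        vad_labels, vad_timestamps = vad(waveform, sampling_rate)
        # Assumed output format: one label per line in a plain-text file.
        np.savetxt(str(out_dir / f"{audio_path.stem}_vad.txt"),
                   np.asarray(vad_labels), fmt="%d")

label_folder("audio_file_root", "path_to_save_files")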

A concrete example of how to use the rVADfast package can be found in /notebooks.

Note that the package is still under development; we therefore welcome any feedback or suggestions for changes and additional features.

References

  1. Z.-H. Tan, A. K. Sarkar and N. Dehak, "rVAD: An unsupervised segment-based robust voice activity detection method," Computer Speech & Language, vol. 59, pp. 1-21, 2020.
  2. Z.-H. Tan and B. Lindberg, "Low-complexity variable frame rate analysis for speech recognition and voice activity detection," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 5, pp. 798-807, 2010.

