VAD-Enhanced ASR with Word- and Phoneme-Level Timestamps

Project description

Praasper


Praasper is an Automatic Speech Recognition (ASR) application designed to help researchers transcribe audio files into both word- and phoneme-level text.

Overview

In Praasper, we adopt a simple and straightforward pipeline to extract phoneme-level information from audio files. The pipeline combines Whisper and Praditor.

Praasper currently supports Mandarin (zh). In the near future we plan to add support for Cantonese (yue) and English (en).

For languages that are not yet supported, you can still obtain a word-level annotation with accurate external boundaries, although the inner boundaries may be inaccurate due to the way Whisper assigns timestamps.

How to use

The default model is large-v3-turbo.

I personally recommend using the SOTA model, since processing time is rarely a concern for offline work.

import praasper

model = praasper.init_model(model_name="large-v3-turbo")  
model.annote(input_path="data")  # the folder where your .wav files are stored

# If you want to know what other models are available:

# import whisper
# print(whisper.available_models())

The output should look like this:

[00:00:252] Loading Whisper model: large-v3-turbo
[00:06:745] Model loaded successfully. Current device in use: cuda:0
[00:06:745] 1 valid audio files detected in data/data
[00:06:745] Processing test_audio.wav (1/1)
[00:06:745] VAD processing started...
[00:08:268] Drawing onset(s) (7/7, 100%)
[00:08:540] Drawing offset(s) (7/7, 100%)
[00:08:540] VAD results saved
[00:10:984] Transcribing test_audio.wav into zh...
[00:10:987] Whisper word-level transcription saved
[00:10:987] Trimming word-level annotation...
[00:11:018] Phoneme-level segmentation saved
[00:11:018] Processing completed.

Mechanism

Whisper transcribes the audio file into word-level text. At this stage, speech onsets and offsets can deviate from the true boundaries by up to seconds.

Praditor then applies a Voice Activity Detection (VAD) algorithm to trim the existing word/character-level timestamps down to millisecond precision. It is a Speech Onset Detection (SOD) algorithm we developed for language researchers.
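As an illustrative sketch (not Praasper's actual implementation), the trimming step can be thought of as snapping each word's coarse Whisper boundaries to the VAD segment it overlaps:

```python
def trim_to_vad(words, vad_segments):
    """words: (label, start, end) tuples in seconds from Whisper.
    vad_segments: (onset, offset) tuples from the VAD step.
    Clips each word's boundaries to its best-overlapping VAD segment."""
    trimmed = []
    for label, start, end in words:
        # Pick the VAD segment with the largest overlap with this word.
        best = max(vad_segments, key=lambda s: min(end, s[1]) - max(start, s[0]))
        trimmed.append((label, max(start, best[0]), min(end, best[1])))
    return trimmed

# A word Whisper placed at 0.90-1.60 s, with VAD detecting speech at 1.05-1.55 s,
# gets trimmed to the millisecond-level VAD boundaries:
print(trim_to_vad([("word", 0.90, 1.60)], [(1.05, 1.55)]))
```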

To extract phoneme boundaries, we designed an edge detection algorithm.

  • The audio file is first resampled to 16 kHz so as to remove noise in the high-frequency domain.
  • A kernel, [-1, 0, 1], is then applied in the frequency domain to enhance the edge(s) between phonetic segments.
  • The n most prominent peaks are then selected to match the expected number of phonemes.
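The steps above can be sketched in a simplified, one-dimensional form. This is a toy illustration only, not the actual implementation: the real input would be per-frame spectral values, whereas the frame energies below are made up:

```python
def edge_scores(frames):
    # [-1, 0, 1] kernel: for each frame, the difference between the
    # following and the preceding frame, so sharp changes score high.
    return [frames[i + 1] - frames[i - 1] for i in range(1, len(frames) - 1)]

def top_n_boundaries(frames, n):
    scores = edge_scores(frames)
    # Rank frame indices by absolute edge strength and keep the n strongest,
    # mirroring "select the n most prominent peaks".
    ranked = sorted(range(len(scores)), key=lambda i: -abs(scores[i]))
    return sorted(i + 1 for i in ranked[:n])  # +1: scores start at frame 1

frames = [0.1, 0.1, 0.9, 0.9, 0.2, 0.2]  # toy per-frame energies
print(top_n_boundaries(frames, 2))
```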

Setup

pip installation

pip install -U praasper

If the installation succeeds and you don't need GPU acceleration, you can stop here.

GPU Acceleration (Windows/Linux)

Whisper automatically detects the best available device. However, you still need to install a GPU-enabled build of torch to enable CUDA acceleration.

  • For macOS users, Whisper only supports CPU as the processing device.
  • For Windows/Linux users, the priority order should be: CUDA -> CPU.
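The priority order above amounts to the following toy sketch (illustrative only; torch and Whisper perform this selection internally, so you never call anything like this yourself):

```python
import sys

def pick_device(cuda_available: bool, platform: str = sys.platform) -> str:
    # macOS builds run on CPU only in this setup.
    if platform == "darwin":
        return "cpu"
    # Windows/Linux: prefer CUDA, fall back to CPU.
    return "cuda:0" if cuda_available else "cpu"

print(pick_device(True, "linux"))
print(pick_device(False, "win32"))
```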

If you have no experience in installing CUDA, follow the steps below:

First, open a command line and check the latest CUDA version your system supports:

nvidia-smi

Results should pop up like this (it means that this device supports CUDA up to version 12.9):

| NVIDIA-SMI 576.80                 Driver Version: 576.80         CUDA Version: 12.9     |

Next, go to the NVIDIA CUDA Toolkit page and download the latest version, or whichever version fits your system/needs.

Lastly, install a torch build that fits your CUDA version. Find the correct pip command on the PyTorch website.

Here is an example for CUDA 12.9:

pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu129

(Advanced) uv installation

uv is also highly recommended for much faster installation. First, make sure uv is installed in your default environment:

pip install uv

Then, create a virtual environment (e.g., .venv):

uv venv .venv

You should now see a new .venv folder in your project directory. (You might also need to restart the terminal.)

Lastly, install praasper (by prefixing the pip command with uv):

uv pip install -U praasper

For CUDA support:

uv pip install --reinstall torch --index-url https://download.pytorch.org/whl/cu129
# Or whichever version that matches your CUDA version

Download files

Download the file for your platform.

Source Distribution

praasper-0.1.2.tar.gz (18.1 kB)

Uploaded Source

Built Distribution


praasper-0.1.2-py3-none-any.whl (17.5 kB)

Uploaded Python 3

File details

Details for the file praasper-0.1.2.tar.gz.

File metadata

  • Download URL: praasper-0.1.2.tar.gz
  • Upload date:
  • Size: 18.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for praasper-0.1.2.tar.gz:

  • SHA256: 49198e3dd3624e0b05b7159c949c0660a033e5d9fdc4cd8b77a403335aeede9d
  • MD5: 65eb1a095e011977712ad908a56e9b6a
  • BLAKE2b-256: 2c17af76b5a1deed0df4224d3d79709ce4278d5a5fff0f1d8074d03d5491c20a


File details

Details for the file praasper-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: praasper-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 17.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for praasper-0.1.2-py3-none-any.whl:

  • SHA256: eb47039c5864a96736ccd50c9a12277382ae1c895b20d28401dac6557007d206
  • MD5: 4752c2309d46901ae23f8176e5ceddfb
  • BLAKE2b-256: 5b4c976b087a594147438744a7c83c5c4e80d768ff752a428cbec1b1a6dfbcab

