
A streamlined Speech-to-Text pipeline for Whisper using CTranslate2

Project description

WhisperS2T-Reborn ⚡

An Optimized Speech-to-Text Pipeline for the Whisper Model Using CTranslate2

WhisperS2T-Reborn is a modernized fork of WhisperS2T, an optimized, lightning-fast Speech-to-Text (ASR) pipeline. It is tailored to the Whisper model and uses the CTranslate2 backend for faster transcription, with several heuristics that improve transcription accuracy.

Whisper is a general-purpose speech recognition model developed by OpenAI. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

Installation

pip install -U whisper-s2t-reborn

Quick Start

Transcribe a single file

import whisper_s2t

model = whisper_s2t.load_model(model_identifier="large-v3")

files = ['audio1.wav']
lang_codes = ['en']
tasks = ['transcribe']
initial_prompts = [None]

out = model.transcribe_with_vad(files,
                                lang_codes=lang_codes,
                                tasks=tasks,
                                initial_prompts=initial_prompts,
                                batch_size=32)

print(out[0][0]) # Print first utterance for first file
"""
[Console Output]

{'text': "Let's bring in Phil Mackie who is there at the palace...",
 'avg_logprob': -0.25426941679184695,
 'no_speech_prob': 8.147954940795898e-05,
 'start_time': 0.0,
 'end_time': 24.8}
"""

Batch across multiple files

Passing multiple files allows segments from different files to be batched together, making better use of the GPU:

import whisper_s2t

model = whisper_s2t.load_model(model_identifier="large-v3")

files = ['audio1.wav', 'audio2.wav', 'audio3.wav']
lang_codes = ['en', 'en', 'en']
tasks = ['transcribe', 'transcribe', 'transcribe']
initial_prompts = [None, None, None]

out = model.transcribe_with_vad(files,
                                lang_codes=lang_codes,
                                tasks=tasks,
                                initial_prompts=initial_prompts,
                                batch_size=32)

# out[0] = results for audio1.wav, out[1] = results for audio2.wav, etc.
for file_idx, transcript in enumerate(out):
    print(f"File {files[file_idx]}: {len(transcript)} segments")

Word-level alignment

To enable word-level timestamps, load the model with:

model = whisper_s2t.load_model("large-v3", asr_options={'word_timestamps': True})

Supported Models

Model               Identifier
Tiny                tiny / tiny.en
Base                base / base.en
Small               small / small.en
Medium              medium / medium.en
Large V3            large-v3
Large V3 Turbo      large-v3-turbo
Distil Small        distil-small.en
Distil Medium       distil-medium.en
Distil Large V3     distil-large-v3
Distil Large V3.5   distil-large-v3.5

All models are available in float16, float32, and bfloat16 compute types via CTranslate2-4you on Hugging Face.

Benchmarks

Model: Whisper large-v3 · FP16 · CUDA · RTX 4090
Audio: sam_altman_lex_podcast_367.flac

Comparing openai-whisper (no batch support) against whisper-s2t-reborn.

Backend              Batch Size   Time (s)   Speedup   Inference VRAM (MB)
openai-whisper            1        508.5       1.0×           362
whisper-s2t-reborn        1        372.4       1.4×           560
whisper-s2t-reborn        2        239.6       2.1×           840
whisper-s2t-reborn        4        145.5       3.5×         1,387
whisper-s2t-reborn        8         95.5       5.3×         2,427
whisper-s2t-reborn       16         69.4       7.3×         4,608
whisper-s2t-reborn       32         57.1       8.9×         8,964
whisper-s2t-reborn       64         49.8      10.2×        17,665.75

The increased VRAM usage even at batch size 1 is largely due to the VAD model; OpenAI's implementation does not use voice activity detection. The benchmarks folder contains the scripts used.
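The Speedup column is simply the openai-whisper baseline time divided by each run's time, which can be checked directly from the table:

```python
baseline = 508.5  # openai-whisper, batch size 1, seconds
runs = {1: 372.4, 2: 239.6, 4: 145.5, 8: 95.5, 16: 69.4, 32: 57.1, 64: 49.8}

# Speedup = baseline time / batched time, rounded as in the table
speedups = {bs: round(baseline / t, 1) for bs, t in runs.items()}
print(speedups)
# {1: 1.4, 2: 2.1, 4: 3.5, 8: 5.3, 16: 7.3, 32: 8.9, 64: 10.2}
```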

[Figure: visual of the benchmark results]

Acknowledgements

  • Original WhisperS2T: Thanks to shashikg for the original WhisperS2T project that this fork is based on.
  • OpenAI Whisper Team: Thanks to the OpenAI Whisper Team for open-sourcing the Whisper model.
  • CTranslate2 Team: Thanks to the CTranslate2 Team for providing a faster inference engine for Transformer architectures.
  • NVIDIA NeMo Team: Thanks to the NVIDIA NeMo Team for their contribution of the open-source VAD model used in this pipeline.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

whisper_s2t_reborn-1.6.0.tar.gz (1.4 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

whisper_s2t_reborn-1.6.0-py3-none-any.whl (1.4 MB)

Uploaded Python 3

File details

Details for the file whisper_s2t_reborn-1.6.0.tar.gz.

File metadata

  • Download URL: whisper_s2t_reborn-1.6.0.tar.gz
  • Upload date:
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for whisper_s2t_reborn-1.6.0.tar.gz
Algorithm     Hash digest
SHA256        73f00e77b30409b6f85b7b5a0a501070c43265cadb1934594c349608a23cf1d7
MD5           5f938985c1147d1ec626c9720b49eeb7
BLAKE2b-256   3569793da1bfbe83501b97a356025a3027b842f049b50a5d8179174985388cd6

See more details on using hashes here.
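To verify a downloaded file against the SHA256 digest above, a few lines of standard-library Python suffice (the filename is the one from this release):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

expected = '73f00e77b30409b6f85b7b5a0a501070c43265cadb1934594c349608a23cf1d7'
# Uncomment after downloading the sdist:
# assert sha256_of('whisper_s2t_reborn-1.6.0.tar.gz') == expected
```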

Provenance

The following attestation bundles were made for whisper_s2t_reborn-1.6.0.tar.gz:

Publisher: publish.yml on BBC-Esq/WhisperS2T-reborn

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file whisper_s2t_reborn-1.6.0-py3-none-any.whl.

File metadata

File hashes

Hashes for whisper_s2t_reborn-1.6.0-py3-none-any.whl
Algorithm     Hash digest
SHA256        5e79a7f4c34d9996ebb5a1e17efe800e72b07257650ff2b6be3daadfea2cdeb8
MD5           a14839b0abd470cf7e8725e7b3c14133
BLAKE2b-256   eb30574cd0abb4b68a99b114c7891a9cb90a48d210145adf065268df700666c9

See more details on using hashes here.

Provenance

The following attestation bundles were made for whisper_s2t_reborn-1.6.0-py3-none-any.whl:

Publisher: publish.yml on BBC-Esq/WhisperS2T-reborn

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
