Project description

WhisperLive

A nearly-live implementation of OpenAI's Whisper.

This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from a microphone and pre-recorded audio files.

Installation

  • Install PyAudio and ffmpeg
 bash scripts/setup.sh
  • Install whisper-live from pip
 pip install whisper-live
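
A small optional sanity check after installation (a sketch, assuming python3 and ffmpeg are on your PATH):

# the package should import without errors and ffmpeg should be reachable
python3 -c "import whisper_live"
ffmpeg -version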

Setting up NVIDIA/TensorRT-LLM for TensorRT backend

Getting Started

The server supports two backends: faster_whisper and tensorrt. If running the tensorrt backend, follow the TensorRT_whisper readme.

Running the Server

  • Faster Whisper backend
python3 run_server.py --port 9090 \
                      --backend faster_whisper

# running with a custom model
python3 run_server.py --port 9090 \
                      --backend faster_whisper \
                      -fw "/path/to/custom/faster/whisper/model"
  • TensorRT backend. Currently, we recommend using only the Docker setup for TensorRT. Follow the TensorRT_whisper readme, which works as expected. Make sure to build your TensorRT engines before running the server with the TensorRT backend.
# Run English only model
python3 run_server.py -p 9090 \
                      -b tensorrt \
                      -trt /home/TensorRT-LLM/examples/whisper/whisper_small_en

# Run Multilingual model
python3 run_server.py -p 9090 \
                      -b tensorrt \
                      -trt /home/TensorRT-LLM/examples/whisper/whisper_small \
                      -m

Controlling OpenMP Threads

To control the number of threads used by OpenMP, you can set the OMP_NUM_THREADS environment variable. This is useful for managing CPU resources and ensuring consistent performance. If not specified, OMP_NUM_THREADS is set to 1 by default. You can change this by using the --omp_num_threads argument:

python3 run_server.py --port 9090 \
                      --backend faster_whisper \
                      --omp_num_threads 4
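
The same thing can be done by exporting the environment variable when launching the server; a minimal sketch, assuming the server leaves an already-set OMP_NUM_THREADS untouched:

# equivalent to passing --omp_num_threads 4
OMP_NUM_THREADS=4 python3 run_server.py --port 9090 \
                                        --backend faster_whisper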

Single model mode

By default, when running the server without specifying a model, the server instantiates a new Whisper model for every client connection. The advantage is that the server can serve different model sizes, based on each client's requested size. On the other hand, it also means you have to wait for the model to load on every client connection, and (V)RAM usage increases.

When serving a custom TensorRT model using the -trt or a custom faster_whisper model using the -fw option, the server will instead only instantiate the custom model once and then reuse it for all client connections.

If you don't want this, set --no_single_model.
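
As a sketch using the placeholder model path from above: the first command keeps one shared instance for all clients (the default when -fw is given), while the second opts out with --no_single_model.

# one shared faster_whisper instance reused for every client connection
python3 run_server.py --port 9090 \
                      --backend faster_whisper \
                      -fw "/path/to/custom/faster/whisper/model"

# load a separate model per client connection instead
python3 run_server.py --port 9090 \
                      --backend faster_whisper \
                      -fw "/path/to/custom/faster/whisper/model" \
                      --no_single_model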

Running the Client

  • Initialize the client with the parameters below:
    • lang: Language of the input audio, applicable only if using a multilingual model.
    • translate: If set to True, translates from any language to English (en).
    • model: Whisper model size.
    • use_vad: Whether to use Voice Activity Detection on the server.
    • save_output_recording: Set to True to save the microphone input as a .wav file during live transcription. This option is helpful for recording sessions for later playback or analysis. Defaults to False.
    • output_recording_filename: Specifies the .wav file path where the microphone input will be saved if save_output_recording is set to True.
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
  "localhost",
  9090,
  lang="en",
  translate=False,
  model="small",
  use_vad=False,
  save_output_recording=True,                         # Only used for microphone input, False by Default
  output_recording_filename="./output_recording.wav"  # Only used for microphone input
)

It connects to the server running on localhost at port 9090. With a multilingual model, the transcription language is detected automatically; you can also use the lang option to specify the target language, in this case English ("en"). Set translate to True to translate from the source language into English, or to False to transcribe in the source language.

  • Transcribe an audio file:
client("tests/jfk.wav")
  • To transcribe from microphone:
client()
  • To transcribe from an RTSP stream:
client(rtsp_url="rtsp://admin:admin@192.168.0.1/rtsp")
  • To transcribe from an HLS stream:
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")

Browser Extensions

Whisper Live Server in Docker

  • GPU

    • Faster-Whisper
    docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest
    
    • TensorRT
    docker run -p 9090:9090 --runtime=nvidia --gpus all --entrypoint /bin/bash -it ghcr.io/collabora/whisperlive-tensorrt
    
    # Build small.en engine
    bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en
    
    # Run server with small.en
    python3 run_server.py --port 9090 \
                          --backend tensorrt \
                          --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en"
    
  • CPU

docker run -it -p 9090:9090 ghcr.io/collabora/whisperlive-cpu:latest

Note: By default we use the "small" model size. To build a Docker image for a different model size, change the size in server.py and then build the Docker image.

Future Work

  • Add translation to other languages on top of transcription.
  • TensorRT backend for Whisper.

Contact

We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or vineet.suryan@collabora.com and marcus.edel@collabora.com.

Citations

@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}
@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  email = {hello@silero.ai}
}

Project details


Download files

Download the file for your platform.

Source Distribution

whisper-live-0.5.1.tar.gz (48.6 kB)

Uploaded Source

Built Distribution

whisper_live-0.5.1-py3-none-any.whl (49.1 kB)

Uploaded Python 3

File details

Details for the file whisper-live-0.5.1.tar.gz.

File metadata

  • Download URL: whisper-live-0.5.1.tar.gz
  • Upload date:
  • Size: 48.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.5

File hashes

Hashes for whisper-live-0.5.1.tar.gz
  • SHA256: 1827868461a899f7e83fe2b942425c19926ba67a8fa35670b7eac9caac9a9300
  • MD5: 57338e8aebb157f0f5a3682677fc9fea
  • BLAKE2b-256: c74fc3beef13659957227503f6df19dc8ccb8e838661e8ec8f77a996287216e3

See more details on using hashes here.
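
To check a downloaded archive against the SHA256 digest above, one option (a sketch assuming a Unix shell with sha256sum available) is:

# prints "whisper-live-0.5.1.tar.gz: OK" when the digest matches
echo "1827868461a899f7e83fe2b942425c19926ba67a8fa35670b7eac9caac9a9300  whisper-live-0.5.1.tar.gz" | sha256sum --check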

File details

Details for the file whisper_live-0.5.1-py3-none-any.whl.

File metadata

File hashes

Hashes for whisper_live-0.5.1-py3-none-any.whl
  • SHA256: df00916c86cdb34d0b8c55e29938cbbe59b89afe5dc421e9436ecb4b403976c9
  • MD5: e9913643042f54aaf98a04c80f47b600
  • BLAKE2b-256: 8677836416832e683434ab532940c0be7e90e54e8a51e7327fd533b265bbcba5

See more details on using hashes here.
