
whisper-live

A nearly-live implementation of OpenAI's Whisper.

This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can transcribe both live audio input from a microphone and pre-recorded audio files.

Unlike traditional speech recognition systems that rely on continuous audio streaming, we use voice activity detection (VAD) to detect the presence of speech and send audio data to Whisper only when speech is detected. This reduces the amount of data sent to the Whisper model and improves the accuracy of the transcription output.
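
To make the gating idea concrete, here is a minimal sketch built on the Silero VAD model cited at the end of this README. It is illustrative only, not whisper-live's actual pipeline; the torch.hub entry point and utility names come from the public snakers4/silero-vad repository, and the file path is the test file used later in this README.

  import torch

  # Load Silero VAD from torch.hub (see the citation at the end of this README).
  model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
  get_speech_timestamps, _, read_audio, _, _ = utils

  # Read audio at the 16 kHz rate Whisper expects.
  wav = read_audio("tests/jfk.wav", sampling_rate=16000)

  # Only the regions flagged as speech would be forwarded to the Whisper model.
  for ts in get_speech_timestamps(wav, model, sampling_rate=16000):
      segment = wav[ts["start"]:ts["end"]]
      # ...send `segment` to Whisper here...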

Installation

  • Install PyAudio and ffmpeg
 bash setup.sh
  • Install whisper-live from pip
 pip install whisper-live
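
To verify the installation, import the package (the whisper_live package name matches the imports used in the examples below):

  import whisper_live  # should import without errors after installation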

Getting Started

  • Run the server
 from whisper_live.server import TranscriptionServer
 server = TranscriptionServer()
 server.run("0.0.0.0", 9090)
  • On the client side

    • To transcribe an audio file:
      from whisper_live.client import TranscriptionClient
      client = TranscriptionClient(
        "localhost",
        9090,
        is_multilingual=False,
        lang="en",
        translate=False,
        model_size="small"
      )
    
      client("tests/jfk.wav")
    

    This command transcribes the specified audio file (tests/jfk.wav) using the Whisper model. It connects to the server running on localhost at port 9090. Setting is_multilingual=True enables transcription in multiple languages; the lang option specifies the target language for transcription, in this case English ("en"). Set translate=True to translate from the source language into English, or leave it False to transcribe in the source language.
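
    Whisper models operate on 16 kHz mono audio. If the client has trouble with a particular container or sample rate, converting the file up front with the ffmpeg installed above is a safe workaround; a small sketch, with illustrative file names:

      import subprocess

      # Convert any input to 16 kHz mono WAV using the ffmpeg binary from setup.
      def to_16k_mono_wav(src, dst="converted.wav"):
          subprocess.run(
              ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst],
              check=True,
          )
          return dst

      client(to_16k_mono_wav("interview.mp3"))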

    • To transcribe from microphone:
      from whisper_live.client import TranscriptionClient
      client = TranscriptionClient(
        "localhost",
        9090,
        is_multilingual=True,
        lang="hi",
        translate=True,
        model_size="small"
      )
      client()
    

    This command captures audio from the microphone and sends it to the server for transcription. Here is_multilingual=True enables the multilingual feature, lang="hi" selects Hindi as the target language, and translate=True sets the task to translation into English. We use the Whisper small model by default, but it can be changed to any other size depending on the requirements and the hardware running the server.
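
    For reference, microphone capture of this kind is typically built on the PyAudio installed during setup. The loop below shows the general pattern only, not whisper-live's exact capture code; the chunk size and stream parameters are assumptions:

      import pyaudio

      CHUNK = 4096   # frames per read (assumed value)
      RATE = 16000   # Whisper expects 16 kHz audio

      p = pyaudio.PyAudio()
      stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                      input=True, frames_per_buffer=CHUNK)
      try:
          while True:
              data = stream.read(CHUNK)
              # a real client would run VAD on `data` and stream speech to the server
      finally:
          stream.stop_stream()
          stream.close()
          p.terminate()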

    • To transcribe from a HLS stream:
      from whisper_live.client import TranscriptionClient
      client = TranscriptionClient("localhost", 9090, is_multilingual=True, lang="en", translate=False)
      client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")
    

    This command streams audio into the server from an HLS stream. It uses the same options as the previous example, enabling the multilingual feature and specifying the target language and task.
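
    Conceptually, reading an HLS stream amounts to letting ffmpeg demux and decode the playlist while the client consumes raw PCM from its stdout. The sketch below shows that pattern; it is illustrative only, since whisper-live's client handles this internally:

      import subprocess

      HLS_URL = "http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8"

      # Decode the stream to raw 16-bit little-endian PCM at 16 kHz mono on stdout.
      proc = subprocess.Popen(
          ["ffmpeg", "-i", HLS_URL, "-f", "s16le", "-ar", "16000", "-ac", "1", "-"],
          stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
      )
      while True:
          chunk = proc.stdout.read(8192)
          if not chunk:
              break
          # ...forward `chunk` to the transcription server...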

Transcribe audio from browser

  • Run the server
 from whisper_live.server import TranscriptionServer
 server = TranscriptionServer()
 server.run("0.0.0.0", 9090)

This starts the WebSocket server on port 9090.
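
To sanity-check that the server is reachable before wiring up a browser extension, you can open a raw WebSocket connection to it. This minimal sketch uses the third-party websocket-client package (pip install websocket-client); the actual message protocol is handled by TranscriptionClient:

  import websocket  # pip install websocket-client

  # If the connection opens, the server from the snippet above is listening.
  ws = websocket.create_connection("ws://localhost:9090")
  print("connected:", ws.connected)
  ws.close()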

The following browser extensions capture audio in the browser and stream it to the server:

  • Chrome Extension
  • Firefox Extension

Whisper Live Server in Docker

  • GPU
 docker build . -t whisper-live -f docker/Dockerfile.gpu
 docker run -it --gpus all -p 9090:9090 whisper-live:latest
  • CPU
 docker build . -t whisper-live -f docker/Dockerfile.cpu
 docker run -it -p 9090:9090 whisper-live:latest

Note: By default we use the "small" model size. To build a Docker image for a different model size, change the model size in server.py and then build the Docker image.

Future Work

  • Add translation to other languages on top of transcription.
  • TensorRT backend for Whisper.

Contact

We are available to help you with both open-source and proprietary AI projects. You can reach us via the Collabora website or at vineet.suryan@collabora.com and marcus.edel@collabora.com.

Citations

@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}
@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  commit = {insert_some_commit_here},
  email = {hello@silero.ai}
}
