whisper-live
A nearly-live implementation of OpenAI's Whisper.
This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio from a microphone and pre-recorded audio files.
Unlike traditional speech recognition systems that rely on continuous audio streaming, we use voice activity detection (VAD) to detect the presence of speech and only send audio to Whisper when speech is detected. This reduces the amount of data sent to the Whisper model and improves the accuracy of the transcription output.
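As a minimal sketch of this gating idea (not the project's actual pipeline; the window size, threshold, and wiring are assumptions), using the Silero VAD model cited below:
import numpy as np
import torch

# Load the Silero VAD model from torch.hub (usage here is illustrative).
vad_model, _ = torch.hub.load("snakers4/silero-vad", "silero_vad")

SAMPLE_RATE = 16000   # Whisper expects 16 kHz audio
WINDOW = 512          # samples per VAD window; valid sizes depend on the VAD version
THRESHOLD = 0.5       # assumed speech-probability cutoff

def forward_if_speech(chunk: np.ndarray, send_to_server) -> None:
    # Forward a float32 audio window to the transcription backend only if speech is detected.
    speech_prob = vad_model(torch.from_numpy(chunk), SAMPLE_RATE).item()
    if speech_prob > THRESHOLD:
        send_to_server(chunk)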
Installation
- Install PyAudio and ffmpeg
bash setup.sh
- Install whisper-live from pip
pip install whisper-live
Getting Started
- Run the server
from whisper_live.server import TranscriptionServer
server = TranscriptionServer()
server.run("0.0.0.0", 9090)
- On the client side
- To transcribe an audio file:
from whisper_live.client import TranscriptionClient
client = TranscriptionClient("localhost", 9090, is_multilingual=True, lang="hi", translate=True)
client(audio_file_path)
This command transcribes the specified audio file using the Whisper model. It connects to the server running on localhost at port 9090. It also enables the multilingual feature, allowing transcription in multiple languages. The lang option specifies the language used for transcription, in this case Hindi ("hi"). Set translate to True to translate from the source language to English, or to False to transcribe in the source language.
- To transcribe from microphone:
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(host, port, is_multilingual=True, lang="hi", translate=True)
client()
This command captures audio from the microphone and sends it to the server for transcription. It uses the same options as the previous command, enabling the multilingual feature and specifying the target language and task.
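For example, to keep the output in the source language rather than translating it to English, the same client can be created with translate=False (a minimal variation of the examples above):
from whisper_live.client import TranscriptionClient
# Same parameters as above, but transcribe in the source language instead of translating.
client = TranscriptionClient("localhost", 9090, is_multilingual=True, lang="hi", translate=False)
client()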
Transcribe audio from browser
- Run the server
from whisper_live.server import TranscriptionServer
server = TranscriptionServer()
server.run("0.0.0.0", 9090)
This will start the WebSocket server on port 9090.
Chrome Extension
- Refer to Audio-Transcription-Chrome to use Chrome extension.
Firefox Extension
- Refer to Audio-Transcription-Firefox to use Mozilla Firefox extension.
Whisper Live Server in Docker
- GPU
docker build . -t whisper-live -f docker/Dockerfile.gpu
docker run -it --gpus all -p 9090:9090 whisper-live:latest
- CPU
docker build . -t whisper-live -f docker/Dockerfile.cpu
docker run -it -p 9090:9090 whisper-live:latest
Note: By default, we use the "small" model size. To build a Docker image for a different model size, change the size in server.py and then build the Docker image.
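For illustration only (the exact variable name and location in server.py may differ), the edit amounts to changing the model-size string before building the image:
# Hypothetical excerpt from server.py; the actual line may look different.
# Change "small" to e.g. "base", "medium", or "large-v2", then rebuild the Docker image.
model_size = "small"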
Future Work
- Add translation to other languages on top of transcription.
- TensorRT backend for Whisper.
Contact
We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or vineet.suryan@collabora.com and marcus.edel@collabora.com.
Citations
@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}
@misc{Silero_VAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  commit = {insert_some_commit_here},
  email = {hello@silero.ai}
}