WhisperFlow: Real-Time Transcription Powered by OpenAI Whisper
About The Project
OpenAI Whisper
OpenAI Whisper is a versatile speech recognition model designed for general use. Trained on a vast and varied audio dataset, Whisper can handle tasks such as multilingual speech recognition, speech translation, and language identification. It is commonly used for batch transcription, where you provide the entire audio or video file to Whisper, which then converts the speech into text. This process is not done in real-time; instead, Whisper processes the files and returns the text afterward, similar to handing over a recording and receiving the transcript later.
Whisper Flow
Using Whisper Flow, you can generate real-time transcriptions for your media content. Unlike batch transcriptions, where media files are uploaded and processed, streaming media is delivered to Whisper Flow in real time, and the service returns a transcript immediately.
What is Streaming
Streaming content is sent as a series of sequential data packets, or 'chunks,' which Whisper Flow transcribes on the spot. The benefits of using streaming over batch processing include the ability to incorporate real-time speech-to-text functionality into your applications and achieving faster transcription times. However, this speed may come at the expense of accuracy in some cases.
Stream Windowing
When working with streaming data, it is typical to perform operations on the data that falls within specific time frames, known as temporal windows. One common approach is the tumbling window technique, which gathers events into a segment until a closing condition is met, then starts a new segment.
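Conceptually, a tumbling window can be sketched as follows. The class name and the closing condition (a fixed batch size) are illustrative only, not WhisperFlow's internal API:

```python
class TumblingWindow:
    """Collect items until a condition is met, then emit the batch and start over."""

    def __init__(self, max_items: int):
        self.max_items = max_items  # illustrative closing condition: batch size
        self.items = []

    def add(self, item):
        """Add an item; return the completed window when it closes, else None."""
        self.items.append(item)
        if len(self.items) >= self.max_items:
            window, self.items = self.items, []
            return window
        return None

# Feed ten chunks through a window that closes every four items
window = TumblingWindow(max_items=4)
emitted = [w for w in (window.add(i) for i in range(10)) if w is not None]
print(emitted)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In practice the closing condition would be a pause or a speaker change rather than a fixed count, but the accumulate-emit-reset cycle is the same.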
Streaming Results
Whisper Flow splits the audio stream into segments based on natural speech patterns, like speaker changes or pauses. The transcription is sent back as a series of events, with each response containing more transcribed speech until the entire segment is complete.
| Transcript | EndTime (s) | IsPartial |
|---|---|---|
| Reality | 0.55 | True |
| Reality is created | 1.05 | True |
| Reality is created by the | 1.50 | True |
| Reality is created by the mind | 2.15 | True |
| Reality is created by the mind | 2.65 | False |
| we can | 3.05 | True |
| we can change | 3.45 | True |
| we can change reality | 4.05 | True |
| we can change reality by changing | 4.45 | True |
| we can change reality by changing our mind | 5.05 | True |
| we can change reality by changing our mind | 5.55 | False |
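A consumer typically treats partial results as provisional display updates and commits only the responses where IsPartial is False. A sketch over events like the ones above (the dictionary field names are an illustrative client-side representation, not WhisperFlow's wire format):

```python
# Illustrative stream of transcription events, mirroring the table above
events = [
    {"transcript": "Reality", "end_time": 0.55, "is_partial": True},
    {"transcript": "Reality is created by the mind", "end_time": 2.65, "is_partial": False},
    {"transcript": "we can change", "end_time": 3.45, "is_partial": True},
    {"transcript": "we can change reality by changing our mind", "end_time": 5.55, "is_partial": False},
]

# Partial results overwrite the current display line; final results are committed
final_segments = [e["transcript"] for e in events if not e["is_partial"]]
print(final_segments)
```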
Benchmarking
The evaluation metrics for comparing the performance of Whisper Flow are Word Error Rate (WER) and latency. Latency is measured as the time between two subsequent partial results, with the goal of achieving sub-second latency. We are not starting from scratch: several quality benchmarks have already been performed for different ASR engines, and we rely on the research article "Benchmarking Open Source and Paid Services for Speech to Text" for guidance. For benchmarking the current implementation of Whisper Flow, we use the LibriSpeech dataset.
| Partial | Latency (ms) | Result |
|---|---|---|
| True | 175.47 | when we took |
| True | 185.14 | When we took her. |
| True | 237.83 | when we took our seat. |
| True | 176.42 | when we took our seats. |
| True | 198.59 | when we took our seats at the |
| True | 186.72 | when we took our seats at the |
| True | 210.04 | when we took our seats at the breakfast. |
| True | 220.36 | when we took our seats at the breakfast table. |
| True | 203.46 | when we took our seats at the breakfast table. |
| True | 242.63 | When we took our seats at the breakfast table, it will |
| True | 237.41 | When we took our seats at the breakfast table, it was with |
| True | 246.36 | When we took our seats at the breakfast table, it was with the |
| True | 278.96 | When we took our seats at the breakfast table, it was with the feeling. |
| True | 285.03 | When we took our seats at the breakfast table, it was with the feeling of being. |
| True | 295.39 | When we took our seats at the breakfast table, it was with the feeling of being no |
| True | 270.88 | When we took our seats at the breakfast table, it was with the feeling of being no longer |
| True | 320.43 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked |
| True | 303.66 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon. |
| True | 470.73 | When we took our seats at the breakfast table, it was with the feeling of being no longer |
| True | 353.25 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected. |
| True | 345.74 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected in any way. |
| True | 368.66 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected in any way with the |
| True | 400.25 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected in any way with this case. |
| True | 382.71 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected in any way with this case. |
| False | 405.02 | When we took our seats at the breakfast table, it was with the feeling of being no longer looked upon as connected in any way with this case. |
Running this benchmark on a MacBook Air with an M1 chip and 16 GB of RAM yields strong results: latency stays consistently below 500 ms, ensuring real-time responsiveness, while the word error rate is around 7%.
Latency Stats (ms):
count 26.000000
mean 275.223077
std 84.525695
min 154.700000
25% 205.105000
50% 258.620000
75% 339.412500
max 470.700000
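The WER figure reported above is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch (not the benchmark's exact scoring code, which may also normalize casing and punctuation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[-1][-1] / len(ref)

print(wer("when we took our seats", "when we took her seat"))  # 0.4
```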
Prerequisites
Before installing WhisperFlow, ensure you have the following:
- Python: 3.8 or higher (tested with Python 3.12)
- PortAudio: Required for PyAudio (audio I/O library)
Installing PortAudio
macOS (using Homebrew):
brew install portaudio
Linux (Ubuntu/Debian):
sudo apt-get install portaudio19-dev
Linux (Fedora/RHEL):
sudo dnf install portaudio-devel
Windows: PortAudio is typically bundled with PyAudio wheels on Windows. If you encounter issues, refer to the PyAudio documentation.
How To Use It
Quick Start
Get WhisperFlow running in under 5 minutes:
# Clone the repository
git clone https://github.com/dimastatz/whisper-flow.git
cd whisper-flow
# Setup environment, install dependencies, and run tests
./run.sh -local
# Activate the virtual environment
source .venv/bin/activate
# Start the server on port 8181
./run.sh -run-server
The server will be available at http://localhost:8181. Visit http://localhost:8181/health to verify it's running.
Development Setup
For contributors and developers:
1. Initial Setup
# Clone and enter the directory
git clone https://github.com/dimastatz/whisper-flow.git
cd whisper-flow
# Setup environment: creates .venv, installs dependencies, runs tests
./run.sh -local
What -local does:
- Creates a fresh virtual environment (.venv)
- Installs all dependencies from requirements.txt
- Runs the black formatter on the code
- Runs the pylint linter (requires a 9.9/10 score)
- Runs all unit tests (requires 95% coverage)
2. Running Tests
# Activate environment
source .venv/bin/activate
# Run tests only (formatting + linting + unit tests)
./run.sh -test
3. Running the Server
# Activate environment
source .venv/bin/activate
# Start the FastAPI server on port 8181
./run.sh -run-server
The server provides:
- WebSocket endpoint: ws://localhost:8181/ws (real-time streaming transcription)
- Health check: GET http://localhost:8181/health (server status)
- Batch transcription: POST http://localhost:8181/transcribe_pcm_chunk (process PCM audio files)
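A client typically sends raw PCM to the WebSocket endpoint in small fixed-size chunks. A sketch of the client-side framing; the 3200-byte chunk size is an assumption (100 ms of audio), not a WhisperFlow requirement:

```python
def chunk_pcm(pcm: bytes, chunk_bytes: int = 3200) -> list:
    """Split a PCM byte buffer into fixed-size chunks.

    3200 bytes = 100 ms of 16 kHz mono int16 audio (16000 samples/s * 2 bytes * 0.1 s).
    """
    return [pcm[i:i + chunk_bytes] for i in range(0, len(pcm), chunk_bytes)]

# One second of silence at 16 kHz / int16 yields ten 100 ms chunks
chunks = chunk_pcm(b"\x00" * 32000)
print(len(chunks))  # 10
```

Each chunk would then be sent over the socket (for example with a WebSocket client's send-bytes call) and the transcription events read back as JSON.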
4. Running Benchmarks
# Activate environment
source .venv/bin/activate
# Run benchmark tests (starts server, runs tests, stops server)
./run.sh -benchmark
This measures transcription latency and word error rate (WER) using LibriSpeech test data.
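Since latency is defined as the time between two subsequent partial results, it can be reproduced from arrival timestamps with plain Python. The timestamps below are illustrative, not benchmark output:

```python
from statistics import mean

# Illustrative arrival times (milliseconds) of successive partial results
arrivals_ms = [0, 180, 350, 560, 740]

# Latency = time between two subsequent partial results
latencies = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
print(latencies)        # [180, 170, 210, 180]
print(mean(latencies))  # 185.0
```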
Docker Deployment
Using Docker
# Build and run the Docker container
./run.sh -docker
What -docker does:
- Stops and removes any existing whisperflow-container
- Removes the old whisperflow-image
- Builds a fresh Docker image with all dependencies
- Runs the container on port 8888
Manual Docker Setup
# Build the image
docker build -t whisperflow-image --file Dockerfile.test .
# Run the container
docker run --name whisperflow-container -p 8181:8181 -d whisperflow-image
# Check logs
docker logs whisperflow-container
# Stop the container
docker stop whisperflow-container
Using as a Python Library
Install WhisperFlow as a package to integrate real-time transcription into your own applications:
Installation
pip install whisperflow
Basic Usage
Create a WebSocket endpoint for real-time streaming transcription:
from fastapi import FastAPI, WebSocket
from starlette.websockets import WebSocketDisconnect

import whisperflow.streaming as st
import whisperflow.transcriber as ts

app = FastAPI()

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    # Load the Whisper model (default: tiny.en.pt)
    model = ts.get_model()

    # Define transcription callback
    async def transcribe_async(chunks: list):
        return await ts.transcribe_pcm_chunks_async(model, chunks)

    # Define response callback
    async def send_back_async(data: dict):
        await websocket.send_json(data)

    await websocket.accept()
    # Create transcription session
    session = st.TranscribeSession(transcribe_async, send_back_async)
    try:
        # Process incoming audio chunks
        while True:
            data = await websocket.receive_bytes()
            session.add_chunk(data)
    except WebSocketDisconnect:
        # Client disconnected
        await session.stop()
    except Exception:
        # Handle errors and close the connection
        await session.stop()
        if websocket.client_state.name != "DISCONNECTED":
            await websocket.close()
API Reference
Transcriber Module (whisperflow.transcriber):
- get_model(file_name="tiny.en.pt") - Load a Whisper model
- transcribe_pcm_chunks(model, chunks, lang="en") - Synchronous transcription
- transcribe_pcm_chunks_async(model, chunks, lang="en") - Async transcription
Streaming Module (whisperflow.streaming):
- TranscribeSession(transcribe_fn, send_back_fn) - Create a streaming session
- session.add_chunk(audio_data) - Add an audio chunk for processing
- session.stop() - Stop the transcription session
Audio Format Requirements
WhisperFlow expects PCM audio data with the following specifications:
- Sample Rate: 16 kHz
- Channels: Mono (1 channel)
- Format: 16-bit signed integer (int16)
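For example, a conforming test signal (a one-second 440 Hz tone) can be generated with the standard library alone; numpy would work equally well:

```python
import array
import math

SAMPLE_RATE = 16000  # 16 kHz, mono

def make_tone(freq_hz: float, seconds: float) -> bytes:
    """Generate 16-bit signed mono PCM at 16 kHz (typecode 'h' = int16)."""
    samples = array.array("h", (
        int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE))
        for n in range(int(SAMPLE_RATE * seconds))
    ))
    return samples.tobytes()

pcm = make_tone(440.0, 1.0)
print(len(pcm))  # 32000 bytes: 16000 samples * 2 bytes each
```

Audio captured at other sample rates or in float format must be resampled and converted to int16 before being sent to WhisperFlow.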
Available Commands
All commands are available through ./run.sh:
| Command | Description |
|---|---|
| ./run.sh -local | Setup environment, install dependencies, run tests |
| ./run.sh -test | Run formatter, linter, and unit tests |
| ./run.sh -run-server | Start the FastAPI server on port 8181 |
| ./run.sh -benchmark | Run performance benchmark tests |
| ./run.sh -docker | Build and run Docker container |
Roadmap
- Release v1.0-RC - Includes transcription streaming implementation.
- Release v1.1 - Bug fixes and implementation of the most requested changes.
- Release v1.2 - Prepare the package for integration with the py-speech package.