whisperpipe
Real-time speech-to-text streaming with OpenAI Whisper
Description
whisperpipe is a powerful, easy-to-use Python package for real-time, offline audio transcription using OpenAI's Whisper model. It runs locally, making it a free and private solution for continuous speech-to-text applications. It provides seamless integration with callback functions for LLM processing and supports pause/resume functionality for interactive applications.
Why whisperpipe?
In a world where most ASR (Automatic Speech Recognition) services are cloud-based, whisperpipe offers a refreshing alternative by harnessing the power of OpenAI's Whisper model to run directly on your local machine. This approach provides several key advantages:
- Complete Privacy: Since all transcription is done locally, your voice data never leaves your computer. This is crucial for applications that handle sensitive or private conversations.
- Zero Cost: Say goodbye to recurring subscription fees and per-minute charges. whisperpipe is free to use, making it an economical choice for both hobbyists and commercial projects.
- No Internet Required: Whether you're on a plane, in a remote location, or simply have an unstable internet connection, whisperpipe works flawlessly offline.
- Real-time Performance: Designed for continuous, real-time transcription, whisperpipe is ideal for live applications such as voice-controlled assistants, dictation software, and more.
- Unleash the Power of Whisper: By running the Whisper model locally, you have full control over the transcription process, from model selection to performance tuning.
whisperpipe empowers you to build powerful, private, and cost-effective voice applications with ease.
Features
- Real-time audio transcription using OpenAI Whisper
- Callback system for custom processing (LLM integration, etc.)
- Pause/Resume functionality for interactive applications
- Multiple language support
- Configurable processing parameters
- Thread-safe operation
- Easy installation and usage
Installation
From PyPI

```shell
pip install whisperpipe
```

From GitHub

```shell
pip install git+https://github.com/Erfan-ram/whisperpipe.git
```
Quick Start
```python
from whisperpipe import pipeStream

# Basic usage
transcriber = pipeStream(
    model_name="base",
    language="en",
    finalization_delay=10.0,
    processing_interval=1.0
)

# Start streaming
transcriber.start_streaming()
```
Usage Examples
Basic Transcription
```python
from whisperpipe import pipeStream

# Create transcriber instance
transcriber = pipeStream(
    model_name="base",
    language="en",
    finalization_delay=10.0,
    processing_interval=1.0
)

# Start transcription
transcriber.start_streaming()

# The transcribed text will be printed to the console.
# Press Ctrl+C to stop.
```
With Custom Callback (LLM Integration)
```python
from whisperpipe import pipeStream

def llm_processor(text):
    """Custom function to process transcribed text."""
    print(f"Processing: {text}")
    # Your LLM integration here
    # e.g., send to OpenAI, Claude, a local model, etc.
    response = your_llm_api.chat(text)  # placeholder for your own LLM client
    print(f"Response: {response}")
    return response

# Create transcriber with callback
transcriber = pipeStream(
    model_name="base",
    language="en",
    finalization_delay=10.0,
    processing_interval=1.0
)

# Register callback
transcriber.set_def_callback(llm_processor)

# Start streaming with LLM integration
transcriber.start_streaming()
```
Interactive Mode with Pause/Resume
```python
from whisperpipe import pipeStream

def interactive_processor(text):
    """Process text and pause for a response."""
    # Pause the transcriber while processing
    transcriber.pause_streaming()
    print(f"User said: {text}")

    # Process with your system (placeholder for your own function)
    response = process_with_llm(text)

    # Speak or display the response
    print(f"Assistant: {response}")

    # Resume for the next input
    transcriber.resume_streaming()

transcriber = pipeStream()
transcriber.set_def_callback(interactive_processor)
transcriber.start_streaming()
```
API Reference
Constructor Parameters
- `model_name` (str): Whisper model name (`"tiny"`, `"base"`, `"small"`, `"medium"`, `"large"`). Default: `"base"`
- `language` (str): Language code for transcription (`"en"`, `"es"`, `"fr"`, etc.). Default: `"en"`
- `finalization_delay` (float): Wait time in seconds before finalizing a transcription. Default: `10.0`
- `processing_interval` (float): Interval in seconds between processing cycles. Default: `1.0`
- `buffer_duration_seconds` (float): Time window in seconds to hold audio for processing. Default: `5.0`
- `debug_mode` (bool): Enable debug mode for detailed logging. Default: `True`
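The documented defaults can be summarized in a small config mirror. The `StreamConfig` class below is purely illustrative (it is not part of whisperpipe's API); `pipeStream` takes these as keyword arguments directly:

```python
from dataclasses import dataclass

# Hypothetical, standalone mirror of the documented pipeStream defaults.
@dataclass
class StreamConfig:
    model_name: str = "base"
    language: str = "en"
    finalization_delay: float = 10.0      # seconds before a transcript is finalized
    processing_interval: float = 1.0      # seconds between processing cycles
    buffer_duration_seconds: float = 5.0  # audio window held for processing
    debug_mode: bool = True

cfg = StreamConfig()
# With the defaults, roughly finalization_delay / processing_interval = 10
# processing cycles elapse before a transcript is finalized.
print(int(cfg.finalization_delay / cfg.processing_interval))  # 10
```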
Methods
Core Methods
- `start_streaming()`: Start audio capture and transcription
- `stop_streaming()`: Stop audio capture and transcription
Callback System
- `set_def_callback(callback_function)`: Register a callback function for processing transcribed text
- `set_def_callback(None)`: Clear the callback (use default behavior)
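A minimal sketch of these callback semantics, assuming the documented behavior that passing `None` restores the default (printing the transcript). This is a standalone illustration, not whisperpipe's actual implementation:

```python
# Standalone sketch of the set_def_callback contract described above.
class CallbackHolder:
    def __init__(self):
        self._callback = None

    def set_def_callback(self, callback_function):
        """Register a callback, or pass None to restore the default."""
        self._callback = callback_function

    def _dispatch(self, text):
        if self._callback is not None:
            return self._callback(text)
        print(text)  # default behavior: print the transcript
        return None

holder = CallbackHolder()
holder.set_def_callback(lambda t: t.upper())
print(holder._dispatch("hello"))  # HELLO
holder.set_def_callback(None)     # back to the default: just prints
holder._dispatch("hello")
```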
Pause/Resume Control
- `pause_streaming()`: Pause audio processing temporarily
- `resume_streaming()`: Resume audio processing
- `is_paused()`: Check if the transcriber is paused
- `is_running()`: Check if the transcriber is running
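One way this thread-safe pause/resume surface could be implemented is with a `threading.Event` gating the audio worker. The sketch below is an assumption about the pattern, not whisperpipe's actual internals:

```python
import threading

# Hypothetical pause/resume gate matching the method names documented above.
class PauseGate:
    def __init__(self):
        self._active = threading.Event()  # set while audio is being processed
        self._running = False

    def start_streaming(self):
        self._running = True
        self._active.set()

    def pause_streaming(self):
        self._active.clear()   # worker threads waiting on the event block here

    def resume_streaming(self):
        self._active.set()

    def stop_streaming(self):
        self._running = False
        self._active.set()     # unblock any waiting worker so it can exit

    def is_paused(self):
        return self._running and not self._active.is_set()

    def is_running(self):
        return self._running

gate = PauseGate()
gate.start_streaming()
print(gate.is_running(), gate.is_paused())  # True False
gate.pause_streaming()
print(gate.is_paused())                     # True
gate.resume_streaming()
print(gate.is_paused())                     # False
```

A worker loop would call `self._active.wait()` at the top of each processing cycle, so a paused transcriber blocks cheaply instead of busy-waiting.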
Requirements
- Python 3.8+
- PyAudio
- OpenAI Whisper
- PyTorch
- NumPy
- pynput
License
MIT License
Author
Erfan Ramezani - erfanramezany245@gmail.com
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
For issues and questions, please use the GitHub Issues page.
Citation
If you use whisperpipe in your research, please cite both our paper and the software repository.
Paper (arXiv):
The arXiv preprint link will be available here shortly. Once published, the formal BibTeX will be updated.
**Software / Codebase (Zenodo):**
```bibtex
@software{whisperpipe_code_2026,
  author    = {Erfan Ramezani and Mohammad Mahdi Giahi},
  title     = {WhisperPipe: Source Code and Implementation},
  month     = apr,
  year      = 2026,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.19646625},
  url       = {https://doi.org/10.5281/zenodo.19646625}
}
```
Download files
File details
Details for the file whisperpipe-0.1.1.tar.gz.
File metadata
- Download URL: whisperpipe-0.1.1.tar.gz
- Upload date:
- Size: 23.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `59314f1536220c4c55f54e60e6df602d2ab3b00f955d151bf1ac3cf05739201a` |
| MD5 | `ac5e2797d0f5eda7f69b07b761ec652e` |
| BLAKE2b-256 | `ffff7109d684dfb90906da40d2e4e39af8805a01da7367006f86a69a9a7dee74` |
Provenance
The following attestation bundles were made for whisperpipe-0.1.1.tar.gz:
- Publisher: release.yaml on Erfan-ram/whisperpipe
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: whisperpipe-0.1.1.tar.gz
- Subject digest: 59314f1536220c4c55f54e60e6df602d2ab3b00f955d151bf1ac3cf05739201a
- Sigstore transparency entry: 1339988554
- Permalink: Erfan-ram/whisperpipe@37851a9fb079e6e2c47037f7445015644e8e68ae
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/Erfan-ram
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@37851a9fb079e6e2c47037f7445015644e8e68ae
- Trigger Event: push
File details
Details for the file whisperpipe-0.1.1-py3-none-any.whl.
File metadata
- Download URL: whisperpipe-0.1.1-py3-none-any.whl
- Upload date:
- Size: 21.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7aa5beabdb02d23048dd66796935d51255d1e0b96cc7e81bfcb513fdfea6c9d4` |
| MD5 | `c57acb204ee045a49015a32ff26c11af` |
| BLAKE2b-256 | `6c13c8b8154d98b3508476abd262b13036efa0ea2607bb925b5f7d6311025ecd` |
Provenance
The following attestation bundles were made for whisperpipe-0.1.1-py3-none-any.whl:
- Publisher: release.yaml on Erfan-ram/whisperpipe
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: whisperpipe-0.1.1-py3-none-any.whl
- Subject digest: 7aa5beabdb02d23048dd66796935d51255d1e0b96cc7e81bfcb513fdfea6c9d4
- Sigstore transparency entry: 1339988560
- Permalink: Erfan-ram/whisperpipe@37851a9fb079e6e2c47037f7445015644e8e68ae
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/Erfan-ram
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@37851a9fb079e6e2c47037f7445015644e8e68ae
- Trigger Event: push