pywhispercpp

Python bindings for whisper.cpp with a simple Pythonic API on top of it.

whisper.cpp is:

High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model:

  • Plain C/C++ implementation without dependencies
  • Apple silicon first-class citizen - optimized via Arm Neon and Accelerate framework
  • AVX intrinsics support for x86 architectures
  • VSX intrinsics support for POWER architectures
  • Mixed F16 / F32 precision
  • Low memory usage (Flash Attention)
  • Zero memory allocations at runtime
  • Runs on the CPU
  • C-style API

Installation

First, install ffmpeg

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg

PyPI

Once ffmpeg is installed, install pywhispercpp:

pip install pywhispercpp

If you want to use the examples, you will need to install extra dependencies:

pip install pywhispercpp[examples]
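
To make sure the installation works, you can load a model and print the build info, mirroring the verification snippet from the CoreML section below (the tiny.en model is downloaded automatically on first use):

from pywhispercpp.model import Model

model = Model('tiny.en')
print(Model.system_info())  # shows which instruction sets / backends were compiled in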

From source

You can install the latest dev version from GitHub:

pip install git+https://github.com/abdeladim-s/pywhispercpp

CoreML support

Thanks to @tangm, CoreML is now supported.

To build and install, clone the repository and run the following commands:

export CMAKE_ARGS="-DWHISPER_COREML=1"
python -m build --wheel  # requires the `build` package: pip install build
pip install dist/<generated>.whl

Then download and convert the appropriate model using the original whisper.cpp repository, producing a <model>.mlmodelc directory.

You can now verify that everything is working:

from pywhispercpp.model import Model

model = Model('<model_path>/ggml-base.en.bin', n_threads=6)
print(Model.system_info())  # and you should see COREML = 1

If successful, you should also see the following in your terminal:

whisper_init_state: loading Core ML model from '<model_path>/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded

Quick start

from pywhispercpp.model import Model

model = Model('base.en', n_threads=6)
segments = model.transcribe('file.mp3', speed_up=True)
for segment in segments:
    print(segment.text)
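
Each Segment also carries timestamps. A minimal sketch, assuming the t0/t1 attributes hold whisper.cpp's 10 ms timestamp units (check the Model class documentation to confirm):

from pywhispercpp.model import Model

model = Model('base.en', n_threads=6)
for segment in model.transcribe('file.mp3'):
    # assumption: t0/t1 are whisper.cpp timestamps in units of 10 ms
    start_s, end_s = segment.t0 / 100, segment.t1 / 100
    print(f"[{start_s:.2f}s -> {end_s:.2f}s] {segment.text}")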

You can also assign a custom new_segment_callback:

from pywhispercpp.model import Model

model = Model('base.en', print_realtime=False, print_progress=False)
segments = model.transcribe('file.mp3', new_segment_callback=print)

  • The ggml model will be downloaded automatically.
  • You can pass any whisper.cpp parameter as a keyword argument to the Model class or to the transcribe function (see the sketch after this list).
  • The transcribe function accepts any media file (audio/video), in any format.
  • Check the Model class documentation for more details.
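
For instance, translate and language (both listed in the CLI help below) can be passed straight through as keyword arguments; a sketch, assuming a French input file:

from pywhispercpp.model import Model

model = Model('base', print_progress=False)
# whisper.cpp parameters pass through as keyword arguments
segments = model.transcribe('file.mp3',
                            language='fr',   # source language
                            translate=True)  # translate the output to English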

Examples

The examples folder contains several examples inspired by the original whisper.cpp/examples.

Main

Just a straightforward example with a simple command-line interface (CLI).

Check the source code here, or use the CLI as follows:

pwcpp file.wav -m base --output-srt --print_realtime true

Run pwcpp --help to get the help message:

usage: pwcpp [-h] [-m MODEL] [--version] [--processors PROCESSORS] [-otxt] [-ovtt] [-osrt] [-ocsv] [--strategy STRATEGY]
             [--n_threads N_THREADS] [--n_max_text_ctx N_MAX_TEXT_CTX] [--offset_ms OFFSET_MS] [--duration_ms DURATION_MS]
             [--translate TRANSLATE] [--no_context NO_CONTEXT] [--single_segment SINGLE_SEGMENT] [--print_special PRINT_SPECIAL]
             [--print_progress PRINT_PROGRESS] [--print_realtime PRINT_REALTIME] [--print_timestamps PRINT_TIMESTAMPS]
             [--token_timestamps TOKEN_TIMESTAMPS] [--thold_pt THOLD_PT] [--thold_ptsum THOLD_PTSUM] [--max_len MAX_LEN]
             [--split_on_word SPLIT_ON_WORD] [--max_tokens MAX_TOKENS] [--speed_up SPEED_UP] [--audio_ctx AUDIO_CTX]
             [--prompt_tokens PROMPT_TOKENS] [--prompt_n_tokens PROMPT_N_TOKENS] [--language LANGUAGE] [--suppress_blank SUPPRESS_BLANK]
             [--suppress_non_speech_tokens SUPPRESS_NON_SPEECH_TOKENS] [--temperature TEMPERATURE] [--max_initial_ts MAX_INITIAL_TS]
             [--length_penalty LENGTH_PENALTY] [--temperature_inc TEMPERATURE_INC] [--entropy_thold ENTROPY_THOLD]
             [--logprob_thold LOGPROB_THOLD] [--no_speech_thold NO_SPEECH_THOLD] [--greedy GREEDY] [--beam_search BEAM_SEARCH]
             media_file [media_file ...]

positional arguments:
  media_file            The path of the media file or a list of files separated by spaces

options:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Path to the `ggml` model, or just the model name
  --version             show program's version number and exit
  --processors PROCESSORS
                        number of processors to use during computation
  -otxt, --output-txt   output result in a text file
  -ovtt, --output-vtt   output result in a vtt file
  -osrt, --output-srt   output result in a srt file
  -ocsv, --output-csv   output result in a CSV file
  --strategy STRATEGY   Available sampling strategies: GreedyDecoder -> 0, BeamSearchDecoder -> 1
  --n_threads N_THREADS
                        Number of threads to allocate for the inference; defaults to min(4, available hardware_concurrency)
  --n_max_text_ctx N_MAX_TEXT_CTX
                        max tokens to use from past text as prompt for the decoder
  --offset_ms OFFSET_MS
                        start offset in ms
  --duration_ms DURATION_MS
                        audio duration to process in ms
  --translate TRANSLATE
                        whether to translate the audio to English
  --no_context NO_CONTEXT
                        do not use past transcription (if any) as initial prompt for the decoder
  --single_segment SINGLE_SEGMENT
                        force single segment output (useful for streaming)
  --print_special PRINT_SPECIAL
                        print special tokens (e.g. <SOT>, <EOT>, <BEG>, etc.)
  --print_progress PRINT_PROGRESS
                        print progress information
  --print_realtime PRINT_REALTIME
                        print results from within whisper.cpp (avoid it, use callback instead)
  --print_timestamps PRINT_TIMESTAMPS
                        print timestamps for each text segment when printing realtime
  --token_timestamps TOKEN_TIMESTAMPS
                        enable token-level timestamps
  --thold_pt THOLD_PT   timestamp token probability threshold (~0.01)
  --thold_ptsum THOLD_PTSUM
                        timestamp token sum probability threshold (~0.01)
  --max_len MAX_LEN     max segment length in characters
  --split_on_word SPLIT_ON_WORD
                        split on word rather than on token (when used with max_len)
  --max_tokens MAX_TOKENS
                        max tokens per segment (0 = no limit)
  --speed_up SPEED_UP   speed-up the audio by 2x using Phase Vocoder
  --audio_ctx AUDIO_CTX
                        overwrite the audio context size (0 = use default)
  --prompt_tokens PROMPT_TOKENS
                        tokens to provide to the whisper decoder as initial prompt
  --prompt_n_tokens PROMPT_N_TOKENS
                        number of tokens in the initial prompt
  --language LANGUAGE   for auto-detection, set to None, "" or "auto"
  --suppress_blank SUPPRESS_BLANK
                        common decoding parameters
  --suppress_non_speech_tokens SUPPRESS_NON_SPEECH_TOKENS
                        common decoding parameters
  --temperature TEMPERATURE
                        initial decoding temperature
  --max_initial_ts MAX_INITIAL_TS
                        max_initial_ts
  --length_penalty LENGTH_PENALTY
                        length_penalty
  --temperature_inc TEMPERATURE_INC
                        temperature_inc
  --entropy_thold ENTROPY_THOLD
                        similar to OpenAI's "compression_ratio_threshold"
  --logprob_thold LOGPROB_THOLD
                        logprob_thold
  --no_speech_thold NO_SPEECH_THOLD
                        no_speech_thold
  --greedy GREEDY       greedy
  --beam_search BEAM_SEARCH
                        beam_search

Assistant

This is a simple example showcasing the use of pywhispercpp as an assistant. The idea is to use a voice activity detector (VAD) to detect speech (in this example we used webrtcvad) and, whenever speech is detected, run the transcription.
It is inspired by the whisper.cpp/examples/command example.
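
The gating idea, in a simplified sketch using webrtcvad directly (this is not the Assistant's actual implementation; the frame size and sample rate here are assumptions):

import webrtcvad

vad = webrtcvad.Vad(3)   # aggressiveness 0-3; 3 filters out the most non-speech
SAMPLE_RATE = 16000      # webrtcvad accepts 8, 16, 32, or 48 kHz
FRAME_MS = 30            # frames must be 10, 20, or 30 ms long
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 16-bit mono PCM

def is_speech(frame: bytes) -> bool:
    # only hand audio to whisper.cpp once the VAD flags it as speech
    assert len(frame) == FRAME_BYTES
    return vad.is_speech(frame, SAMPLE_RATE)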

You can check the source code here, or use the class directly to create your own assistant:

from pywhispercpp.examples.assistant import Assistant

my_assistant = Assistant(commands_callback=print, n_threads=8)
my_assistant.start()

Here we set the commands_callback to a simple print, so the commands will just get printed on the screen.
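
You can of course pass your own function instead; a small sketch, assuming the callback simply receives the transcribed command as a string:

from pywhispercpp.examples.assistant import Assistant

def on_command(text):
    # react to the recognized command instead of just printing it
    print(f"Heard: {text}")

my_assistant = Assistant(commands_callback=on_command, n_threads=8)
my_assistant.start()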

You can run this example from the command line as well:

$ pwcpp-assistant --help

usage: pwcpp-assistant [-h] [-m MODEL] [-ind INPUT_DEVICE] [-st SILENCE_THRESHOLD] [-bd BLOCK_DURATION]

options:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Whisper.cpp model, default to tiny.en
  -ind INPUT_DEVICE, --input_device INPUT_DEVICE
                        Id of The input device (aka microphone)
  -st SILENCE_THRESHOLD, --silence_threshold SILENCE_THRESHOLD
                        The duration of silence after which the inference will be running, default to 16
  -bd BLOCK_DURATION, --block_duration BLOCK_DURATION
                        minimum time audio updates in ms, default to 30

Recording

Another simple example to transcribe your own recordings.

You can use it from Python as follows:

from pywhispercpp.examples.recording import Recording

myrec = Recording(5)
myrec.start()

Or from the command line:

$ pwcpp-recording --help

usage: pwcpp-recording [-h] [-m MODEL] duration

positional arguments:
  duration              duration in seconds

options:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Whisper.cpp model, default to tiny.en

Live Stream Transcription

This example is an attempt to transcribe a livestream in realtime, but the results are not quite satisfactory yet: the CPU quickly jumps to 100%, and I cannot use huge models on my decent machine. (Or maybe I am doing something wrong!) :sweat_smile:

If you have a powerful machine, give it a try.

From Python:

from pywhispercpp.examples.livestream import LiveStream

url = ""  # Make sure it is a direct stream URL
ls = LiveStream(url=url, n_threads=4)
ls.start()

From the command line:

$ pwcpp-livestream --help

usage: pwcpp-livestream [-h] [-nt N_THREADS] [-m MODEL] [-od OUTPUT_DEVICE] [-bls BLOCK_SIZE] [-bus BUFFER_SIZE] [-ss SAMPLE_SIZE] url

positional arguments:
  url                   Stream URL

options:
  -h, --help            show this help message and exit
  -nt N_THREADS, --n_threads N_THREADS
                        number of threads, default to 3
  -m MODEL, --model MODEL
                        Whisper.cpp model, default to tiny.en
  -od OUTPUT_DEVICE, --output_device OUTPUT_DEVICE
                        the output device, aka the speaker, leave it None to take the default
  -bls BLOCK_SIZE, --block_size BLOCK_SIZE
                        block size, default to 1024
  -bus BUFFER_SIZE, --buffer_size BUFFER_SIZE
                        number of blocks used for buffering, default to 20
  -ss SAMPLE_SIZE, --sample_size SAMPLE_SIZE
                        Sample size, default to 4

Advanced usage

  • First check the API documentation for more advanced usage.
  • If you are a more experienced user, you can access the C-style API directly; almost all functions from whisper.h are exposed through the binding module _pywhispercpp.

import _pywhispercpp as pwcpp

ctx = pwcpp.whisper_init_from_file('path/to/ggml/model')
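
As a hedged sketch of what that enables (assuming the bindings mirror the whisper.h names; whisper_print_system_info and whisper_free exist in whisper.h, but check dir(_pywhispercpp) for the exact surface):

import _pywhispercpp as pwcpp

ctx = pwcpp.whisper_init_from_file('path/to/ggml/model')
try:
    # assumption: exposed one-to-one from whisper.h
    print(pwcpp.whisper_print_system_info())
finally:
    pwcpp.whisper_free(ctx)  # whisper.h contexts must be freed explicitly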

Discussions and contributions

If you find any bug, please open an issue.

If you have any feedback, or you want to share how you are using this project, feel free to use the Discussions and open a new topic.

License

This project is licensed under the same license as whisper.cpp (MIT License).
