Stabilizing timestamps of OpenAI's Whisper outputs down to word-level.

Stabilizing Timestamps for Whisper

Description

This script modifies methods of Whisper's model to gain access to the predicted timestamp tokens of each word (token) without needing additional inference. It also stabilizes the timestamps down to the word (token) level to ensure chronology. Additionally, it can suppress gaps in speech for more accurate timestamps.

TODO

  • Add function to stabilize with multiple inferences
  • Add word timestamping (previously only token-based)

Dependency

  • Whisper

Setup

  1. Install Whisper
  2. Check if Whisper is installed correctly by running a quick test
import whisper
# load the base model; a successful transcription returns a dict with 'segments'
model = whisper.load_model('base')
assert model.transcribe('audio.mp3').get('segments')
  3. Install stable-ts
pip install stable-ts
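
As a final sanity check, mirroring the Whisper quick test above, the modified model from stable-ts should load and transcribe in the same way (a minimal sketch reusing the same audio.mp3):

from stable_whisper import load_model
# load_model returns a Whisper model with the timestamp-stabilizing modifications applied
model = load_model('base')
assert model.transcribe('audio.mp3').get('segments')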

Executing script

from stable_whisper import load_model

model = load_model('base')
# the modified model runs just like the regular model, but with additional hyperparameters and extra data in its results
results = model.transcribe('audio.mp3')
stab_segments = results['segments']
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']

# or to get token timestamps that adhere more to the top prediction
from stable_whisper import stabilize_timestamps
stab_segments = stabilize_timestamps(results, top_focus=True)
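
To see what the stabilized results contain, you can walk the segments and their word-level timestamps. This is only a sketch; the 'word' and 'timestamp' keys assumed for each whole_word_timestamps entry may differ, so inspect your own results dict if the layout does not match:

for segment in results['segments']:
    # 'start', 'end', and 'text' are the standard Whisper segment fields
    print(f"[{segment['start']:.2f}s -> {segment['end']:.2f}s] {segment['text'].strip()}")
    for word in segment['whole_word_timestamps']:
        # key names are assumed; each timestamp is treated as the end time of its word
        print(f"  {word['word']!r} @ {word['timestamp']:.2f}s")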

Generate .srt with stable timestamps

# word-level 
from stable_whisper import results_to_word_srt
# after you get results from modified model
# this treats each word timestamp as the end time of the word
# and combines words if their timestamps overlap
results_to_word_srt(results, 'audio.srt')  # combine_compound=True will merge words with no prepended space
# sentence/phrase-level
from stable_whisper import results_to_sentence_srt
# after you get results from modified model
results_to_sentence_srt(results, 'audio.srt')
# the demo below is from the large model with default settings

https://user-images.githubusercontent.com/28970749/202782436-0d56140b-5d52-4f33-b32b-317a19ad32ca.mp4

# sentence/phrase-level & word-level
from stable_whisper import results_to_sentence_word_ass
# after you get results from modified model
results_to_sentence_word_ass(results, 'audio.ass')
# the demo below is from the large model with default settings

https://user-images.githubusercontent.com/28970749/202782412-dfa027f8-7073-4023-8ce5-285a2c26c03f.mp4

Additional Info

  • Since the sentence/segment-level timestamps are predicted directly, they are always more accurate and precise than word/token-level timestamps.
  • Although the timestamps are chronological, they can still be out of sync, depending on the model and audio.
  • The unstable_word_timestamps are left in the results, so you may be able to find a better way to utilize them; a minimal way to inspect them is sketched below.
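
For example, a rough sketch for dumping that raw data (the internal layout of each unstable_word_timestamps entry is not assumed here, so entries are printed as-is for manual inspection):

for segment in results['segments']:
    # print each raw, unstabilized timestamp entry so its structure can be examined
    for entry in segment.get('unstable_word_timestamps', []):
        print(entry)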

License

This project is licensed under the MIT License - see the LICENSE file for details

Acknowledgments

Includes slight modification of the original work: Whisper

