Stabilizing Timestamps for Whisper
This library modifies Whisper to produce more reliable timestamps and extends its functionality.
Setup
pip install -U stable-ts
To install the latest commit:
pip install -U git+https://github.com/jianfch/stable-ts.git
Usage
Transcribe
import stable_whisper
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')
CLI
stable-ts audio.mp3 -o audio.srt
Docstrings: load_model(), transcribe(), transcribe_minimal()
faster-whisper
Use with faster-whisper:
model = stable_whisper.load_faster_whisper('base')
result = model.transcribe_stable('audio.mp3')
stable-ts audio.mp3 -o audio.srt -fw
Docstrings: load_faster_whisper(), transcribe_stable()
Output
Stable-ts supports various text output formats.
result.to_srt_vtt('audio.srt') #SRT
result.to_srt_vtt('audio.vtt') #VTT
result.to_ass('audio.ass') #ASS
result.to_tsv('audio.tsv') #TSV
Docstrings: to_srt_vtt(), to_ass(), to_tsv(), to_txt(), save_as_json()
There are word-level and segment-level timestamps. All output formats support them.
All formats except TSV also support both levels simultaneously.
By default, segment_level and word_level are both True for all the formats that support both simultaneously.
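Both flags can be passed directly to the output methods. A minimal sketch with to_srt_vtt() (the same flags apply to the other formats that support them):
# segment-level only (no inline word timestamps)
result.to_srt_vtt('audio.srt', word_level=False)
# word-level only (each word gets its own cue)
result.to_srt_vtt('audio.srt', segment_level=False)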
Examples in VTT.
Default: segment_level=True + word_level=True
CLI: --segment_level true + --word_level true
00:00:07.760 --> 00:00:09.900
But<00:00:07.860> when<00:00:08.040> you<00:00:08.280> arrived<00:00:08.580> at<00:00:08.800> that<00:00:09.000> distant<00:00:09.400> world,
segment_level=True + word_level=False
00:00:07.760 --> 00:00:09.900
But when you arrived at that distant world,
segment_level=False + word_level=True
00:00:07.760 --> 00:00:07.860
But
00:00:07.860 --> 00:00:08.040
when
00:00:08.040 --> 00:00:08.280
you
00:00:08.280 --> 00:00:08.580
arrived
...
JSON
The result can also be saved as a JSON file to preserve all the data for future reprocessing. This is useful for testing different sets of postprocessing arguments without the need to redo inference.
result.save_as_json('audio.json')
CLI
stable-ts audio.mp3 -o audio.json
Processing a JSON file of the result into SRT:
result = stable_whisper.WhisperResult('audio.json')
result.to_srt_vtt('audio.srt')
CLI
stable-ts audio.json -o audio.srt
Alignment
Audio can be aligned/synced with plain text at the word level.
text = 'Machines thinking, breeding. You were to bear us a new, promised land.'
result = model.align('audio.mp3', text, language='en')
When the text is correct but the timestamps need more work, align() is a faster alternative for testing various settings/models.
new_result = model.align('audio.mp3', result, language='en')
CLI
stable-ts audio.mp3 --align text.txt --language en
--align can also be a JSON file of a result.
Docstring: align()
Adjustments
Timestamps are adjusted after the model predicts them.
When suppress_silence=True (default), transcribe()/transcribe_minimal()/align() adjust the timestamps based on silence/non-speech.
The timestamps can be further adjusted based on another result with adjust_by_result(),
which acts as a logical AND operation on the timestamps of both results, further reducing the duration of each word.
Note: both results are required to have word timestamps and matching words.
# the adjustments are in-place for `result`
result.adjust_by_result(new_result)
Docstring: adjust_by_result()
Refinement
Timestamps can be further improved with refine().
This method iteratively mutes portions of the audio based on the current timestamps,
then computes the probabilities of the tokens.
Then by monitoring the fluctuation of the probabilities, it tries to find the most precise timestamps.
"Most precise" in this case means the latest start and earliest end for the word
such that it still meets the specified conditions.
model.refine('audio.mp3', result)
CLI
stable-ts audio.mp3 --refine -o audio.srt
The input can also be a JSON file of a result.
stable-ts result.json --refine -o audio.srt --refine_option "audio=audio.mp3"
Docstring: refine()
Regrouping Words
Stable-ts has a preset for regrouping words into different segments with more natural boundaries.
This preset is enabled by regroup=True (default).
But there are other built-in regrouping methods that allow you to customize the regrouping algorithm.
This preset is just a predefined combination of those methods.
# The following results are all functionally equivalent:
result0 = model.transcribe('audio.mp3', regroup=True) # regroup is True by default
result1 = model.transcribe('audio.mp3', regroup=False)
(
result1
.clamp_max()
.split_by_punctuation([('.', ' '), '。', '?', '？', (',', ' '), '，'])
.split_by_gap(.5)
.merge_by_gap(.3, max_words=3)
.split_by_punctuation([('.', ' '), '。', '?', '？'])
)
result2 = model.transcribe('audio.mp3', regroup='cm_sp=.* /。/?/？/,* /，_sg=.5_mg=.3+3_sp=.* /。/?/？')
# To undo all regrouping operations:
result0.reset()
Any regrouping algorithm can be expressed as a string. Please feel free to share your strings here
Regrouping Methods
- regroup()
- split_by_gap()
- split_by_punctuation()
- split_by_length()
- split_by_duration()
- merge_by_gap()
- merge_by_punctuation()
- merge_all_segments()
- clamp_max()
- lock()
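As a brief sketch, a custom algorithm can also be applied to an existing result with regroup(), assuming it accepts the same algorithm string as the regroup argument of transcribe() ('sg' corresponds to split_by_gap, 'mg' to merge_by_gap):
# split on gaps longer than 0.5s, then merge segments separated by gaps under 0.3s (max 3 words per merge)
result.regroup('sg=.5_mg=.3+3')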
Editing
The editing methods in stable-ts can be chained with Regrouping Methods and used in regroup().
Remove specific instances of words or segments:
# Remove first word of the first segment:
first_word = result[0][0]
result.remove_word(first_word)
# The following also does the same:
del result[0][0]
# Remove the last segment:
last_segment = result[-1]
result.remove_segment(last_segment)
# The following also does the same:
del result[-1]
Docstrings: remove_word(), remove_segment()
Removing repetitions:
# Example 1: "This is is is a test." -> "This is a test."
# The following removes the last two " is":
result.remove_repetition(1)
# Example 2: "This is is is a test this is a test." -> "This is a test."
# The following removes the second " is" and third " is", then removes the last "this is a test"
# The first parameter `max_words` is `4` because "this is a test" consists of 4 words
result.remove_repetition(4)
Docstring: remove_repetition()
Removing specific word(s) by string content:
# Remove all " ok" from " ok ok this is a test."
result.remove_words_by_str('ok')
# Remove all " ok" and " Um..." from " ok this is a test. Um..."
result.remove_words_by_str(['ok', 'um'])
Docstring: remove_words_by_str()
Filling in segment gaps:
# result0: [" How are you?"] [" I'm good."] [" Good!"]
# result1: [" Hello!"] [" How are you?"] [" How about you?"] [" Good!"]
result0.fill_in_gaps(result1)
# After filling in the gaps in `result0` with contents in `result1`:
# result0: [" Hello!"] [" How are you?"] [" I'm good."] [" How about you?"] [" Good!"]
Docstring: fill_in_gaps()
Locating Words
There are two ways to locate words. The first way is by approximating the time at which the words are spoken, then transcribing a few seconds around the approximated time. This is also the faster way to locate words.
matches = model.locate('audio.mp3', 'are', language='en', count=0)
for match in matches:
    print(match.to_display_str())
# verbose=True does the same thing as this for-loop.
Docstring: locate()
CLI
stable-ts audio.mp3 --locate "are" --language en -to "count=0"
The second way allows you to locate words with regular expressions, but it requires the audio to be fully transcribed first.
result = model.transcribe('audio.mp3')
# Find every sentence that contains "and"
matches = result.find(r'[^.]+and[^.]+\.')
# print all matches if there are any
for match in matches:
    print(f'match: {match.text_match}\n'
          f'text: {match.text}\n'
          f'start: {match.start}\n'
          f'end: {match.end}\n')
# Find the word before and after "and" in the matches
matches = matches.find(r'\s\S+\sand\s\S+')
for match in matches:
    print(f'match: {match.text_match}\n'
          f'text: {match.text}\n'
          f'start: {match.start}\n'
          f'end: {match.end}\n')
Docstring: find()
Silence Suppression
While the timestamps predicted by Whisper are generally accurate,
it sometimes predicts the start of a word way before the word is spoken
or the end of a word long after the word has been spoken.
This is where "silence suppression" helps. It is enabled by default (suppress_silence=True).
The idea is to adjust the timestamps based on the timestamps of non-speech portions of the audio.
Note: In V1, "silence suppression" referred to the process of suppressing timestamp tokens of the silent portions,
but it was changed to postprocessing adjustments in V2, which allows stable-ts to be used with other ASR models.
The timestamp token suppression feature is disabled by default, but can still be enabled with suppress_ts_tokens=True.
By default, stable-ts determines the non-speech timestamps based on
how loud a section of the audio is relative to the neighboring sections.
This method is most effective for cases where the speech is significantly louder than the background noise.
The other method is to use Silero VAD (enabled with vad=True).
To visualize the differences between non-VAD and VAD, see Visualizing Suppression.
Besides the parameters for non-speech detection sensitivity (see Visualizing Suppression),
the following parameters are used to combat inaccurate non-speech detection.
min_word_dur is the shortest duration each word is allowed to have after the adjustments.
nonspeech_error is the relative error of the non-speech that appears in between a word.
Note: Before 2.14, nonspeech_error was not available,
and min_word_dur prevented any adjustments that resulted in a word duration shorter than min_word_dur.
For the following example, min_word_dur=0.5 (default: 0.1) and nonspeech_error=0.3 (default: 0.3).
nonspeech_error=0.3 allows each non-speech section to be treated as 1.3 times its actual duration,
either from the start of the corresponding word to the end of the non-speech
or from the start of the non-speech to the end of the corresponding word.
In the case that both conditions are met, the shorter one is used.
If both are equal, then the start of the non-speech to the end of the word is used.
The second non-speech from 1.375s to 1.75s is ignored for 'world.' because it failed both conditions.
The first word, 'Hello', satisfies only the former condition from 0s to 0.625s, thus the new start for 'Hello'
would be 0.625s. However, min_word_dur=0.5 requires the resultant duration to be at least 0.5s.
As a result, the start of 'Hello' is changed to 0.375s instead of 0.625s.
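These parameters can be passed directly to transcription. A minimal sketch using the values from the example above:
result = model.transcribe(
    'audio.mp3',
    suppress_silence=True,  # default; adjust timestamps based on detected non-speech
    vad=True,               # use Silero VAD instead of the loudness-based method
    min_word_dur=0.5,       # each word keeps at least 0.5s after adjustments (default: 0.1)
    nonspeech_error=0.3     # relative error allowed for non-speech sections (default: 0.3)
)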
Tips
- do not disable word timestamps with word_timestamps=False for reliable segment timestamps
- use vad=True for more accurate non-speech detection
- use demucs=True to isolate vocals with Demucs; it is also effective at isolating vocals even if there is no music
- use demucs=True and vad=True for music (see the sketch after this list)
- set the same seed for each transcription (e.g. random.seed(0)) for demucs=True to produce deterministic outputs
- to enable dynamic quantization for inference on CPU, use --dq true for CLI or dq=True for stable_whisper.load_model
- use encode_video_comparison() to encode multiple transcripts into one video for synced comparison; see Encode Comparison
- use visualize_suppression() to visualize the differences between non-VAD and VAD options; see Visualizing Suppression
- refinement can be an effective (but slow) alternative for polishing timestamps if silence suppression isn't effective
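A brief sketch combining several of these tips, isolating vocals with Demucs, using Silero VAD, and fixing the seed so the Demucs output is deterministic:
import random

import stable_whisper

random.seed(0)  # same seed for each run so demucs=True produces deterministic output
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3', demucs=True, vad=True)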
Visualizing Suppression
You can visualize which parts of the audio will likely be suppressed (i.e. marked as silent). Requires: Pillow or opencv-python.
Without VAD
import stable_whisper
# regions on the waveform colored red are where it will likely be suppressed and marked as silent
# [q_levels]=20 and [k_size]=5 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', q_levels=20, k_size=5)
With Silero VAD
# [vad_threshold]=0.35 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', vad=True, vad_threshold=0.35)
Docstring: visualize_suppression()
Encode Comparison
You can encode videos similar to the ones in the doc for comparing transcriptions of the same audio.
stable_whisper.encode_video_comparison(
    'audio.mp3',
    ['audio_sub1.srt', 'audio_sub2.srt'],
    output_videopath='audio.mp4',
    labels=['Example 1', 'Example 2']
)
Docstring: encode_video_comparison()
Multiple Files with CLI
Transcribe multiple audio files then process the results directly into SRT files.
stable-ts audio1.mp3 audio2.mp3 audio3.mp3 -o audio1.srt audio2.srt audio3.srt
Any ASR
You can use most of the features of stable-ts to improve the results of any ASR model/API. Just follow this notebook.
Quick 1.X → 2.X Guide
What's new in 2.0.0?
- updated to use Whisper's more reliable word-level timestamps method.
- the more reliable word timestamps allow regrouping all words into segments with more natural boundaries.
- can now suppress silence with Silero VAD (requires PyTorch 1.12.0+)
- non-VAD silence suppression is also more robust
Usage changes
- results_to_sentence_srt(result, 'audio.srt') → result.to_srt_vtt('audio.srt', word_level=False)
- results_to_word_srt(result, 'audio.srt') → result.to_srt_vtt('output.srt', segment_level=False)
- results_to_sentence_word_ass(result, 'audio.srt') → result.to_ass('output.ass')
- there's no need to stabilize segments after inference because they're already stabilized during inference
- transcribe() returns a WhisperResult object which can be converted to dict with .to_dict(), e.g. result.to_dict()
License
This project is licensed under the MIT License - see the LICENSE file for details
Acknowledgments
Includes slight modification of the original work: Whisper