Transcribes audio files to .srt
Project description
Substream
Transcribes an audio file to .srt subtitle format using word timings from Google's Speech-to-Text API.
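For illustration, this is roughly what the conversion involves; the sketch below is not the package's actual code, and the word/timing structure is an assumption for the example:

# Minimal sketch (not substream's internal code): turn a few word timings, in seconds,
# into a single SRT cue. The dict keys here are assumptions for the example.
words = [
    {"word": "hello", "start": 1.20, "end": 1.55},
    {"word": "world", "start": 1.60, "end": 2.05},
]

def srt_timestamp(seconds):
    # Format seconds as the HH:MM:SS,mmm timestamp that .srt files use.
    millis = int(round(seconds * 1000))
    hours, millis = divmod(millis, 3_600_000)
    minutes, millis = divmod(millis, 60_000)
    secs, millis = divmod(millis, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{millis:03}"

cue = "1\n{} --> {}\n{}\n".format(
    srt_timestamp(words[0]["start"]),
    srt_timestamp(words[-1]["end"]),
    " ".join(w["word"] for w in words),
)
print(cue)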
Requirements:
- A Google account signed up for Google Cloud.
Installing:
pip install substream
Cloud setup:
- Create a new service account for a new project dedicated to your recognition job. It must have the following permissions:
  - Cloud Speech Service Agent
  - Storage Admin, OR
  - Storage Object Viewer if supplying a gs:// URI to the script.
You can set the location of the .json credentials file you downloaded in the current environment like this:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/cloud_credentials.json
(OR) you can set it just before the substream command like:
GOOGLE_APPLICATION_CREDENTIALS=/path/to/cloud_credentials.json substream ...
When the script runs, a temporary bucket is created and the file is uploaded to it; on completion or error, a context manager ensures the bucket is deleted.
Please be careful with these credentials: cloud resources can be expensive, so make sure to store the credentials securely if you store them at all, and manually verify that all project buckets have been deleted even if the app reports they were deleted successfully.
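If you want to double-check for leftover buckets yourself, a minimal sketch using the google-cloud-storage client library (assuming it is installed and GOOGLE_APPLICATION_CREDENTIALS is set as above) looks like this:

# Minimal sketch, assuming the google-cloud-storage package is installed and
# GOOGLE_APPLICATION_CREDENTIALS points at your service account key.
from google.cloud import storage

client = storage.Client()
for bucket in client.list_buckets():
    print(bucket.name)
    # Uncomment to delete a leftover bucket and its contents (use with care):
    # bucket.delete(force=True)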
Full Usage:
usage: substream [-h] -i INPUT -o SRT_FILENAME [--language CODE] [-v]
Transcribes an audio file or .jsonl dump to .srt using the Google Cloud
Speech-to-Text API
optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
mono audio file (flac, opus, 16 bit pcm) (or) gs://
uri to audio file (or) intermediate .jsonl dump
(default: None)
-o SRT_FILENAME, --output SRT_FILENAME
.srt filename (default: None)
--language CODE https://cloud.google.com/speech-to-text/docs/languages
(default: en-US)
-v, --verbose extra logging (default: False)
Sample usage with a local file:
substream -v -i test.flac -o test.srt --language en-US
Sample usage with a URI:
substream -v -i gs://my-bucket/test.flac -o test.srt
Uninstalling:
pip uninstall substream
FAQ
- Why the long-running API rather than the streaming API?
  The long-running API is more accurate.
- What is the .jsonl file?
  Each line in the file is a JSON representation of a single word with its start and end timings (see the sketch below). Later versions of this program may accept the .jsonl file to format the sentences in a better way without having to re-run the audio transcription.
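For illustration only, here is a minimal way to read such a dump; the exact key names inside each JSON object are not specified here, so the code just prints whatever fields are present:

import json

# Illustrative sketch: read a .jsonl dump line by line. Each non-empty line is one JSON object.
with open("test.jsonl") as handle:
    for line in handle:
        line = line.strip()
        if not line:
            continue
        word = json.loads(line)
        print(word)  # a dict describing one word and its start/end timings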
Known Issues:
- 'Walls of text' caused by people speaking without interruption. Some subtitles may have to be manually split using a .srt editor.
- Speaker identification is currently broken in the long-running API for long files, so splitting on speaker is currently disabled. (This exacerbates the point above.)
- Progress reporting is currently unimplemented by the long-running API.
Download files
Source Distribution
File details
Details for the file substream-0.1.1.tar.gz.
File metadata
- Download URL: substream-0.1.1.tar.gz
- Upload date:
- Size: 8.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.32.1 CPython/3.6.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1387585314e257921e8efb9a640ab5548afb66a4624d269380689f2504488ba2
MD5 | 756e99c9458a860de7544ec564884027
BLAKE2b-256 | dd750400c7628e9b411b5cf31114167e313dbc0eacd85fff698752f05af37c2f
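To check a downloaded archive against the digests above, one option is a short Python one-off (the filename assumes the sdist is in the current directory):

import hashlib

# Compute the SHA256 digest of the downloaded sdist and compare it with the table above.
with open("substream-0.1.1.tar.gz", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())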