speechmatics-python
Python client library and CLI for Speechmatics Realtime and Batch ASR v2 APIs.
Getting started
To install from PyPI:
$ pip install speechmatics-python
To install from source:
$ git clone https://github.com/speechmatics/speechmatics-python
$ cd speechmatics-python && python setup.py install
Windows users may need to run the install command with an extra flag:
$ python setup.py install --user
Requirements
- Python 3.7+
API documentation
Please see https://speechmatics.github.io/speechmatics-python/.
The core Speechmatics documentation can be found at https://docs.speechmatics.com.
Example command-line usage
Configuring Auth Tokens
- Setting an auth token for CLI authentication:
  $ speechmatics config set --auth-token $AUTH_TOKEN
  Auth tokens are stored in a TOML config file at HOME_DIR/.speechmatics/config. You may also set the auth token for each CLI command using the --auth-token flag; the flag overrides the value stored in the config file, e.g.
  $ speechmatics transcribe --auth-token $AUTH_TOKEN --generate-temp-token example_audio.wav
- Removing an auth token from the TOML file:
  $ speechmatics config unset --auth-token
- Setting the --generate-temp-token flag globally for CLI authentication:
  $ speechmatics config set --generate-temp-token
- Unsetting the --generate-temp-token flag globally:
  $ speechmatics config unset --generate-temp-token
Realtime ASR
- Starting a real-time session for self-service SaaS customers using a .wav file as the input audio:
  $ speechmatics transcribe --lang en --generate-temp-token example_audio.wav
- Starting a real-time session for enterprise SaaS customers using a .wav file as the input audio:
  # Point URL to an enterprise SaaS runtime
  $ URL=wss://neu.rt.speechmatics.com/v2/en
  $ speechmatics transcribe --url $URL example_audio.wav
- Starting a real-time session for on-prem customers using a .wav file as the input audio:
  # Point URL to the local instance of the realtime appliance
  $ URL=ws://realtimeappliance.yourcompany:9000/v2
  $ speechmatics transcribe --url $URL --lang en --ssl-mode none example_audio.wav
- Show the messages that are going over the websocket connection using verbose output:
  $ speechmatics -v transcribe --url $URL --ssl-mode none example_audio.wav
- The CLI also accepts an audio stream on standard input. Transcribe the piped input audio:
  $ cat example_audio.wav | speechmatics transcribe --url $URL --ssl-mode none -
- Pipe audio directly from the microphone (example uses macOS with ffmpeg).
  List available input devices:
  $ ffmpeg -f avfoundation -list_devices true -i ""
  There needs to be at least one available microphone attached to your computer. The command below gets the microphone input and pipes it to the transcriber. You may need to change the sample rate to match the sample rate that your machine records at. You may also need to replace ":default" with something like ":0" or ":1" if you want to use a specific microphone.
  $ ffmpeg -loglevel quiet -f avfoundation -i ":default" -f f32le -acodec pcm_f32le -ar 44100 - \
    | speechmatics transcribe --url $URL --ssl-mode none --raw pcm_f32le --sample-rate 44100 -
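If ffmpeg is unavailable, raw pcm_f32le audio can also be produced directly in Python. A sketch that synthesises a short sine tone as little-endian 32-bit float PCM (the byte format the --raw pcm_f32le flag names) and writes it to stdout for piping; the script name in the usage line below is hypothetical:

```python
import math
import struct
import sys

SAMPLE_RATE = 44100  # must match the CLI's --sample-rate flag
DURATION_S = 0.01    # short tone, for illustration only
FREQ_HZ = 440.0


def sine_pcm_f32le(n_samples: int) -> bytes:
    """Encode a sine wave as raw little-endian float32 PCM (mono)."""
    samples = (
        math.sin(2 * math.pi * FREQ_HZ * i / SAMPLE_RATE)
        for i in range(n_samples)
    )
    # "<f" = little-endian 32-bit float, one value per mono frame
    return b"".join(struct.pack("<f", s) for s in samples)


pcm = sine_pcm_f32le(int(SAMPLE_RATE * DURATION_S))
assert len(pcm) == int(SAMPLE_RATE * DURATION_S) * 4  # 4 bytes per frame
if hasattr(sys.stdout, "buffer"):  # stdout may be text-only when captured
    sys.stdout.buffer.write(pcm)
```

Piped into the CLI (assuming the sketch is saved as make_tone.py):
$ python make_tone.py | speechmatics transcribe --url $URL --ssl-mode none --raw pcm_f32le --sample-rate 44100 -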
- Transcribe in real-time with partials (example uses Ubuntu with ALSA). In this mode, the transcription engine produces words instantly, which may get updated as additional context becomes available.
  List available input devices:
  $ cat /proc/asound/cards
  Record microphone audio and pipe it to the transcriber:
  $ ffmpeg -loglevel quiet -f alsa -i hw:0 -f f32le -acodec pcm_f32le -ar 44100 - \
    | speechmatics transcribe --url $URL --ssl-mode none --enable-partials --raw pcm_f32le --sample-rate 44100 -
Add the --print-json argument to see the raw JSON transcript messages being sent rather than just the plaintext transcript.

Batch ASR
- Submit a .wav file for batch ASR processing:
  $ speechmatics batch transcribe --lang en example_audio.wav
- Files may be submitted for asynchronous processing:
  $ speechmatics batch submit example_audio.wav
- Enterprise SaaS and on-prem customers can point to a custom runtime:
  # Point URL to a custom runtime (in this case, the trial runtime)
  $ URL=https://trial.asr.api.speechmatics.com/v2/
  $ speechmatics batch transcribe --url $URL example_audio.wav
- Check the processing status of a job:
  # $JOB_ID is from the submit command output
  $ speechmatics batch job-status --job-id $JOB_ID
- Retrieve a completed transcription:
  # $JOB_ID is from the submit command output
  $ speechmatics batch get-results --job-id $JOB_ID
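The submit / job-status / get-results flow above can also be driven programmatically by polling until the job finishes. A minimal polling sketch with the HTTP call abstracted behind a callable; the {"job": {"status": ...}} response shape and the "running"/"done" status values are assumptions for illustration, so consult the Batch API documentation for the authoritative schema:

```python
import time
from typing import Callable


def wait_for_job(
    fetch_status: Callable[[str], dict],
    job_id: str,
    poll_interval_s: float = 0.0,
    max_polls: int = 100,
) -> str:
    """Poll fetch_status(job_id) until the job leaves the 'running' state.

    fetch_status should return the parsed JSON of the job-status
    endpoint; the {"job": {"status": ...}} shape is an assumption.
    """
    for _ in range(max_polls):
        status = fetch_status(job_id)["job"]["status"]
        if status != "running":
            return status  # e.g. "done", or a failure status
        time.sleep(poll_interval_s)
    raise TimeoutError(f"job {job_id} still running after {max_polls} polls")


# Stubbed example: the job finishes on the third poll.
responses = iter(
    [{"job": {"status": "running"}}] * 2 + [{"job": {"status": "done"}}]
)
print(wait_for_job(lambda _job_id: next(responses), "example-job"))  # done
```

In real use, fetch_status would wrap an authenticated HTTP GET of the job-status endpoint; injecting it as a callable keeps the loop itself easy to test.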
Custom Transcription Config File
- Instead of passing all the transcription options via the command line, you can pass a transcription config file: a JSON file containing the transcription options. Pass it to the CLI with the --config-file option:
  $ speechmatics transcribe --config-file transcription_config.json example_audio.wav
- The format of this JSON file is described in detail in the Batch API documentation and RT API documentation.
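A config file like the one above can be generated from Python with the standard json module. The keys shown (language, enable_partials) mirror the --lang and --enable-partials CLI flags used earlier; treat the full key set as defined by the API documentation, not by this sketch:

```python
import json
from pathlib import Path

# Minimal illustrative config; see the Batch/RT API documentation
# for the authoritative list of supported keys.
transcription_config = {
    "language": "en",
    "enable_partials": True,
}

path = Path("transcription_config.json")
path.write_text(json.dumps(transcription_config, indent=2))
print(path.read_text())
```

The resulting file can then be passed straight to the CLI:
$ speechmatics transcribe --config-file transcription_config.json example_audio.wav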
Testing
To install development dependencies and run tests:
$ pip install -r requirements-dev.txt
$ make test
Support
If you have any issues with this library or encounter any bugs then please get in touch with us at support@speechmatics.com.
License: MIT