
Python library and CLI for Speechmatics


Legacy Speechmatics Python Client

⚠️ WARNING: This package (speechmatics-python) has been deprecated.

  • No new features will be added
  • Bug fixes and security patches only until 2025-12-31

Migration Path

New Official SDKs (at speechmatics-python-sdk):

Old Package           New Package          Purpose                   Migration Guide
speechmatics-python   speechmatics-batch   Batch transcription       📖 Batch Migration Guide
speechmatics-python   speechmatics-rt      Real-time transcription   📖 RT Migration Guide
speechmatics.cli      Not available (will be released as a separate package)

Python client library and CLI for Speechmatics Realtime and Batch ASR v2 APIs.

Getting started

To install from PyPI:

pip install speechmatics-python

To use the sm-metrics tool for diarization and speaker identification metrics, install the package with the optional metrics dependencies:

pip install speechmatics-python[metrics]

To install from source:

git clone https://github.com/speechmatics/speechmatics-python
cd speechmatics-python && python setup.py install

Windows users may need to run the install command with an extra flag:

python setup.py install --user
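
Alternatively, on recent Python toolchains where setup.py-based installs are deprecated, installing from the repository root with pip achieves the same result:

pip install .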

Docs

The Speechmatics Python SDK and CLI are documented at https://speechmatics.github.io/speechmatics-python/.

The Speechmatics API and product documentation can be found at https://docs.speechmatics.com.

Real-Time Client Usage

import speechmatics
from speechmatics.models import ServerMessageType, TranscriptionConfig

# Change to your own file
PATH_TO_FILE = "tests/data/ch.wav"
LANGUAGE = "en"

# Generate an API key at https://portal.speechmatics.com/manage-access/
API_KEY = ""

# Create a transcription client from config defaults
sm_client = speechmatics.client.WebsocketClient(API_KEY)

sm_client.add_event_handler(
    event_name=ServerMessageType.AddPartialTranscript,
    event_handler=print,
)

sm_client.add_event_handler(
    event_name=ServerMessageType.AddTranscript,
    event_handler=print,
)

conf = TranscriptionConfig(
    language=LANGUAGE, enable_partials=True, max_delay=5, enable_entities=True,
)

print("Starting transcription (type Ctrl-C to stop):")
with open(PATH_TO_FILE, "rb") as fd:
    try:
        sm_client.run_synchronously(fd, conf)
    except KeyboardInterrupt:
        print("\nTranscription stopped.")

Batch Client Usage

from speechmatics.models import BatchTranscriptionConfig
from speechmatics.batch_client import BatchClient
from httpx import HTTPStatusError

API_KEY = "YOUR_API_KEY"
PATH_TO_FILE = "example.wav"
LANGUAGE = "en"

# Open the client using a context manager
with BatchClient(API_KEY) as client:
    try:
        job_id = client.submit_job(PATH_TO_FILE, BatchTranscriptionConfig(LANGUAGE))
        print(f'job {job_id} submitted successfully, waiting for transcript')

        # Note that in production, you should set up notifications instead of polling.
        # Notifications are described here: https://docs.speechmatics.com/features-other/notifications
        transcript = client.wait_for_completion(job_id, transcription_format='txt')
        # To see the full output, try setting transcription_format='json-v2'.
        print(transcript)
    except HTTPStatusError as e:
        if e.response.status_code == 401:
            print('Invalid API key - Check your API_KEY at the top of the code!')
        elif e.response.status_code == 400:
            print(e.response.json()['detail'])
        else:
            raise e
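
For the richer json-v2 output, the sketch below (run inside the same with/try block, after submit_job) pulls the word-level content back out. It assumes wait_for_completion returns the parsed JSON as a dict for this format and that the transcript follows the documented json-v2 shape (results[].alternatives[0].content); if a string is returned instead, parse it with json.loads first.

# Sketch only: naive reconstruction of the text from json-v2 output.
# Assumes a parsed dict with results[].alternatives[0].content; punctuation
# spacing is ignored for simplicity.
transcript_json = client.wait_for_completion(job_id, transcription_format="json-v2")
words = [
    result["alternatives"][0]["content"]
    for result in transcript_json["results"]
    if result.get("alternatives")
]
print(" ".join(words))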

Example command-line usage

A complete list of commands and flags can be found in the SDK docs at https://speechmatics.github.io/speechmatics-python/.

Configuring Auth Tokens

  • Setting an auth token for CLI authentication:

    speechmatics config set --auth-token $AUTH_TOKEN
    

    Auth tokens are stored in a TOML config file at HOME_DIR/.speechmatics/config. You can also set the auth token for an individual CLI command with the --auth-token flag, which overrides the value stored in the config file, e.g.

    speechmatics transcribe --auth-token $AUTH_TOKEN example_audio.wav
    
  • Removing the auth_token from the TOML config file:

    speechmatics config unset --auth-token
    
  • Setting URLs for connecting to transcribers. These values can be used in place of the --url flag:

    speechmatics config set --rt-url wss://eu2.rt.speechmatics.com/v2 --batch-url https://asr.api.speechmatics.com/v2
    
  • Unsetting transcriber URLs in the TOML config:

    speechmatics config unset --rt-url --batch-url
    

    Realtime ASR

  • Starting a real-time session for self-service SaaS customers using a .wav file as the input audio:

    speechmatics transcribe --lang en example_audio.wav
    
  • Real-time transcription of an online stream (requires ffmpeg):

    ffmpeg -v 0 -i https://cdn.bitmovin.com/content/assets/art-of-motion-dash-hls-progressive/mpds/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.mpd \
    -f s16le -ar 44100 -ac 1 -acodec pcm_s16le - | \
    speechmatics transcribe --raw pcm_s16le --sample-rate 44100 -
    
  • Starting a real-time session for enterprise SaaS customers using a .wav file as the input audio:

    # Point URL to an enterprise SaaS runtime
    URL=wss://neu.rt.speechmatics.com/v2/en
    
    speechmatics transcribe --url $URL example_audio.wav
    
  • Starting a real-time session for on-prem customers using a .wav file as the input audio:

    # Point URL to the local instance of the realtime appliance
    URL=ws://realtimeappliance.yourcompany:9000/v2
    
    speechmatics transcribe --url $URL --lang en --ssl-mode none example_audio.wav
    
  • Show the messages that are going over the websocket connection using verbose output:

    speechmatics -v transcribe --url $URL --ssl-mode none example_audio.wav
    
  • The CLI also accepts an audio stream on standard input. Transcribe the piped input audio:

    cat example_audio.wav | speechmatics transcribe --url $URL --ssl-mode none -
    
  • Pipe audio directly from the microphone (example uses macOS with ffmpeg)

    List available input devices:

    ffmpeg -f avfoundation -list_devices true -i ""
    

    There needs to be at least one available microphone attached to your computer. The command below gets the microphone input and pipes it to the transcriber. You may need to change the sample rate to match the sample rate that your machine records at. You may also need to replace :default with something like :0 or :1 if you want to use a specific microphone.

    ffmpeg -loglevel quiet -f avfoundation -i ":default" -f f32le -acodec pcm_f32le -ar 44100 - \
        | speechmatics transcribe --url $URL --ssl-mode none --raw pcm_f32le --sample-rate 44100 -
    
  • Transcribe in real-time with partials (example uses Ubuntu with ALSA). In this mode, the transcription engine produces words instantly, which may get updated as additional context becomes available.

    List available input devices:

    cat /proc/asound/cards
    

    Record microphone audio and pipe to transcriber:

    ffmpeg -loglevel quiet -f alsa -i hw:0 -f f32le -acodec pcm_f32le -ar 44100 - \
        | speechmatics transcribe --url $URL --ssl-mode none --enable-partials --raw pcm_f32le --sample-rate 44100 -
    

    Add the --print-json argument to see the raw JSON transcript messages being sent rather than just the plaintext transcript.
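
    For example, adding --print-json to the ALSA pipeline above switches the output from plain text to the full JSON messages:

    ffmpeg -loglevel quiet -f alsa -i hw:0 -f f32le -acodec pcm_f32le -ar 44100 - \
        | speechmatics transcribe --url $URL --ssl-mode none --enable-partials --print-json --raw pcm_f32le --sample-rate 44100 -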

    Batch ASR

  • Submit a .wav file for batch ASR processing

    speechmatics batch transcribe --lang en example_audio.wav
    
  • Files may be submitted for asynchronous processing

    speechmatics batch submit example_audio.wav
    
  • Enterprise SaaS and on-prem customers can point to a custom runtime:

    # Point URL to a custom runtime (in this case, the trial runtime)
    URL=https://trial.asr.api.speechmatics.com/v2/
    
    speechmatics batch transcribe --url $URL example_audio.wav
    
  • Check processing status of a job

    # $JOB_ID is from the submit command output
    speechmatics batch job-status --job-id $JOB_ID
    
  • Retrieve completed transcription

    # $JOB_ID is from the submit command output
    speechmatics batch get-results --job-id $JOB_ID
    
  • Submit a job with automatic language identification

    speechmatics batch transcribe --language auto --langid-langs en,es example_audio.wav
    

    If Speechmatics is not able to identify a language with high enough confidence, the job will be rejected. This is to reduce the risk of transcribing incorrectly.

    --langid-langs is optional and specifies what language(s) you expect to be detected in the source files.

  • Submit a job with translation

    speechmatics batch transcribe --translation-langs de,es --output-format json-v2 example_audio.wav
    

    --translation-langs is supported in asynchronous mode as well, and translation output can be retrieved using get-results with --output-format json-v2 set.

    When combining language identification with translation, we can't know if the identified language can be translated to your translation targets. If the translation pair is not supported, the error will be recorded in the metadata of the transcript.

  • Start a real-time transcription session with translation, using microphone audio piped to the transcriber:

    ffmpeg -loglevel quiet -f alsa -i hw:0 -f f32le -acodec pcm_f32le -ar 44100 - \
        | speechmatics rt transcribe --url $URL --ssl-mode none --raw pcm_f32le --sample-rate 44100 \
          --print-json --translation-langs fr -
    
  • Submit a job with summarization

    speechmatics batch transcribe --summarize --output-format json-v2 example_audio.wav
    
  • Submit a job with sentiment analysis

    speechmatics batch transcribe --sentiment-analysis --output-format json-v2 example_audio.wav
    
  • Submit a job with topic detection

    speechmatics batch transcribe --detect-topics --output-format json-v2 example_audio.wav
    
  • Submit a job with auto chapters

    speechmatics batch transcribe --detect-chapters --output-format json-v2 example_audio.wav
    

    Custom Transcription Config File

  • Instead of passing all the transcription options on the command line, you can supply them in a JSON transcription config file and pass it to the CLI with the --config-file option (an illustrative example is shown at the end of this section).

    speechmatics transcribe --config-file transcription_config.json example_audio.wav
    
  • The format of this JSON file is described in detail in the Batch API documentation and RT API documentation.
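
    As an illustration only, a minimal config might look like the JSON below. The nested transcription_config shape mirrors the job config in the API docs and is an assumption here; check the linked documentation for the exact schema that --config-file expects.

    {
      "type": "transcription",
      "transcription_config": {
        "language": "en",
        "enable_entities": true
      }
    }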

SM Metrics

This package includes tooling for benchmarking transcription and diarization accuracy.

For more information, see asr_metrics/README.md.

Testing

To install development dependencies and run the tests:

pip install -r requirements-dev.txt
make test

Support

If you have any issues with this library or encounter any bugs, please get in touch with us at support@speechmatics.com.


License: MIT
