
Speechbox

Project description


🤗 Speechbox offers a set of speech processing tools, such as punctuation restoration and ASR with speaker diarization.

Installation

With pip (official package)

pip install speechbox

Contributing

We ❤️ contributions from the open-source community! If you want to contribute to this library, please check out our Contribution guide, and look out for open issues you'd like to tackle.

Also, say 👋 in our public Discord channel under ML for Audio and Speech. We discuss new trends in machine learning methods for speech, help each other with contributions and personal projects, or just hang out ☕.

Tasks

Task                         | Description                                                                                                                  | Author
Punctuation Restoration      | Predicts capitalized words as well as punctuation using Whisper.                                                             | Patrick von Platen
ASR With Speaker Diarization | Transcribes long audio files, such as meeting recordings, with speaker information (who spoke when) and the transcribed text. | Sanchit Gandhi

Punctuation Restoration

Punctuation restoration relies on the premise that Whisper can understand universal speech. The model is forced to predict the passed words, but is allowed to capitalize letters, remove or add blank spaces, and add punctuation. Punctuation is defined as the official Python string.punctuation characters.
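
For reference, this means punctuation is exactly Python's built-in string.punctuation constant:

import string

# the set of characters treated as punctuation
print(string.punctuation)
# => !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~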

Note: For now, this package has only been tested with Whisper checkpoints, and only on some 80 audio samples of patrickvonplaten/librispeech_asr_dummy.

See some transcribed results here.

Web Demo

If you want to try out the punctuation restoration, you can try out the following 🚀 Spaces:

Hugging Face Spaces

Example

In order to use the punctuation restoration task, you need to install 🤗 Transformers:

pip install --upgrade transformers

For this example, we will additionally make use of 🤗 Datasets to load a sample audio file:

pip install --upgrade datasets

Now we stream a single audio sample, load the punctuation restorer from the "openai/whisper-tiny.en" checkpoint, and add punctuation to the transcription:

from speechbox import PunctuationRestorer
from datasets import load_dataset

streamed_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# get first sample
sample = next(iter(streamed_dataset))

# print out normalized transcript
print(sample["text"])
# => "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"

# load the restoring class
restorer = PunctuationRestorer.from_pretrained("openai/whisper-tiny.en")
restorer.to("cuda")

restored_text, log_probs = restorer(
    sample["audio"]["array"],
    sample["text"],
    sampling_rate=sample["audio"]["sampling_rate"],
    num_beams=1,
)

print("Restored text:\n", restored_text)

See examples/restore for more information.

ASR With Speaker Diarization

Given an unlabelled audio segment, a speaker diarization model is used to predict "who spoke when". These speaker predictions are paired with the output of a speech recognition system (e.g. Whisper) to give speaker-labelled transcriptions.

The combined ASR + Diarization pipeline can be applied directly to long audio samples, such as meeting recordings, to give fully annotated meeting transcriptions.
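
To make the pairing concrete, here is a simplified, illustrative sketch of matching diarization segments to transcribed chunks by temporal overlap. This is not speechbox's actual implementation, and the segment and chunk formats shown are assumptions for the example:

def assign_speakers(segments, chunks):
    """Pair diarization output with ASR output by temporal overlap.

    segments: list of (start, end, speaker) tuples from a diarization model
    chunks:   list of {"timestamp": (start, end), "text": str} dicts from ASR
    """
    labelled = []
    for chunk in chunks:
        c_start, c_end = chunk["timestamp"]
        # pick the speaker whose segment overlaps this chunk the most
        speaker = max(
            segments,
            key=lambda s: max(0.0, min(s[1], c_end) - max(s[0], c_start)),
        )[2]
        labelled.append({"speaker": speaker, **chunk})
    return labelled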

Web Demo

If you want to try out the ASR + Diarization pipeline, you can try out the following Space:

Hugging Face Spaces

Example

In order to use the ASR + Diarization pipeline, you need to install 🤗 Transformers and pyannote.audio:

pip install --upgrade transformers pyannote.audio

For this example, we will additionally make use of 🤗 Datasets to load a sample audio file:

pip install --upgrade datasets

Now we stream a single audio sample, pass it to the ASR + Diarization pipeline, and return the speaker-segmented transcription:

import torch
from speechbox import ASRDiarizationPipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipeline = ASRDiarizationPipeline.from_pretrained("openai/whisper-tiny", device=device)

# load dataset of concatenated LibriSpeech samples
concatenated_librispeech = load_dataset("sanchit-gandhi/concatenated_librispeech", split="train", streaming=True)
# get first sample
sample = next(iter(concatenated_librispeech))

out = pipeline(sample["audio"])
print(out)
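
The pipeline returns speaker-labelled chunks. The exact fields may vary by version, but the output looks roughly like:

# illustrative output structure (fields may differ by version):
# [{"speaker": "SPEAKER_01",
#   "text": " The first speaker's transcribed passage ...",
#   "timestamp": (0.0, 15.5)},
#  {"speaker": "SPEAKER_02",
#   "text": " The second speaker's reply ...",
#   "timestamp": (15.5, 21.2)}]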

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

speechbox-0.2.1.tar.gz (22.4 kB)

Uploaded Source

Built Distribution

speechbox-0.2.1-py3-none-any.whl (20.3 kB)

Uploaded Python 3

File details

Details for the file speechbox-0.2.1.tar.gz.

File metadata

  • Download URL: speechbox-0.2.1.tar.gz
  • Upload date:
  • Size: 22.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.13

File hashes

Hashes for speechbox-0.2.1.tar.gz

Algorithm   | Hash digest
SHA256      | 250de696210e2390af61b7204d84cb9c29a9789919ebbfdf5bebf65c4bf35ce4
MD5         | 71ce5368d9215823cc78a7dd3e91546f
BLAKE2b-256 | bcdfa8e3a1ecd01896f98be8d23dbc2d488e3b06c03ab75d5b9e87014199c1f8

See more details on using hashes here.
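
As a quick sketch of how these digests can be used, a downloaded archive can be checked against the published SHA256 with Python's hashlib (the expected digest below is copied from the table above):

import hashlib

expected = "250de696210e2390af61b7204d84cb9c29a9789919ebbfdf5bebf65c4bf35ce4"
with open("speechbox-0.2.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, "SHA256 mismatch: do not install this file"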

File details

Details for the file speechbox-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: speechbox-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 20.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.13

File hashes

Hashes for speechbox-0.2.1-py3-none-any.whl

Algorithm   | Hash digest
SHA256      | bfd4c63afa57a4dc26179f0143636d1ebc224a8333618bed9c8c971b06500fb5
MD5         | 29a686720b2e3fc9701d3e99fed13a16
BLAKE2b-256 | 32e84cb10f042ea08fd234545e0d386243d2b77f94d3b39ee6432b842242d8c3

See more details on using hashes here.
