A speech machine learning pipeline package that automatically produces transcriptions with speaker labels from audio inputs
SpeechMLPipeline
SpeechMLPipeline is a Python package that lets users run the complete speech machine learning pipeline via one simple function to obtain transcriptions with speaker labels from input audio files. SpeechMLPipeline applies and implements the most widely used and most innovative machine learning models at each step of the pipeline:
- Audio-to-Text Transcription: OpenAI Whisper with timestamp adjustment
- Speaker Change Detection: PyAnnote, Audio-based Spectral Clustering Model, Text-based Llama2-70b Speaker Change Detection Model, Rule-based NLP Speaker Change Detection Model, Ensemble Audio-and-text-based Speaker Change Detection Model
- Speaker Identification: Speechbrain
Audio-to-Text Transcription
- OpenAI Whisper is selected for audio-to-text transcription as it is the most accurate model available for English transcription. Whisper with timestamp adjustment is used to reduce misalignment between timestamps and transcription texts by identifying silent parts and predicting timestamps at the word level.
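As a rough illustration of word-level timestamps (not this package's internal implementation, which additionally adjusts for silence), the open-source openai-whisper library can emit per-word timings directly; the file name below is a placeholder:

```python
# Illustrative only: word-level timestamps via the open-source openai-whisper
# package; the pipeline's internal timestamp adjustment additionally handles
# silent parts. "interview.wav" is a placeholder file name.
import whisper

model = whisper.load_model("base")  # model size chosen arbitrarily for this sketch
result = model.transcribe("interview.wav", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        print(f'{word["start"]:7.2f}s - {word["end"]:7.2f}s {word["word"]}')
```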
Speaker Change Detection
- The PyAnnote model is by far one of the most popular models for speaker diarization. It detects speaker changes by applying clustering methods to audio features; the speaker change detection results are inferred directly from the speaker diarization results.
- The Audio-based Spectral Clustering Model extracts audio features via Librosa and applies spectral clustering to them. This model is one of the most common speaker change detection models used in academic research.
- The Text-based Llama2-70b Speaker Change Detection Model is an innovative LLM-based speaker change detection model. It asks Llama2 whether the speaker changes across two consecutive text segments, relying on the model's grasp of the semantic relationship between the two texts.
- The Rule-based NLP Speaker Change Detection Model detects speaker changes by analyzing text using well-defined rules developed from human comprehension.
- The Ensemble Audio-and-text-based Speaker Change Detection Model is built by ensembling the audio-based and text-based speaker change detection models. Voting methods aggregate the predictions of the models above, except for the Rule-based NLP model, and the aggregated predictions are then corrected by the Rule-based NLP model. It has two variants: the Majority Model, based on majority voting, and the Unanimity Model, based on unanimity voting (see the sketch after this list).
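As a minimal sketch of the two voting schemes, assuming each base model outputs one boolean speaker-change prediction per segment boundary (the package's actual aggregation and rule-based correction may differ in detail):

```python
# Minimal sketch of the two voting schemes; the package's actual aggregation
# and rule-based NLP correction step may differ in detail.
def majority_vote(predictions: list[list[bool]]) -> list[bool]:
    """Flag a speaker change where more than half of the models predict one."""
    return [2 * sum(votes) > len(votes) for votes in zip(*predictions)]

def unanimity_vote(predictions: list[list[bool]]) -> list[bool]:
    """Flag a speaker change only where every model predicts one."""
    return [all(votes) for votes in zip(*predictions)]

# Each inner list holds one model's change/no-change call per segment boundary.
pyannote_preds = [True, False, True, True]
clustering_preds = [True, False, False, True]
llama2_preds = [False, False, True, True]

print(majority_vote([pyannote_preds, clustering_preds, llama2_preds]))   # [True, False, True, True]
print(unanimity_vote([pyannote_preds, clustering_preds, llama2_preds]))  # [False, False, False, True]
```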
Speaker Identification
- The Speechbrain models are used to perform speaker identification by comparing the similarities between the vector embeddings of each input audio segment and those of the labelled speakers' audio segments.
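For illustration, SpeechBrain's pretrained ECAPA-TDNN verification model scores the similarity between two audio files; the file names below are placeholders, and the pipeline's own identification step may wrap a comparable verification differently:

```python
# Illustrative use of SpeechBrain's pretrained ECAPA-TDNN verification model;
# file names are placeholders, and the pipeline's own identification step may
# wrap a comparable verification differently.
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)
# Compare one audio segment against a labelled speaker's reference clip.
score, same_speaker = verification.verify_files("segment.wav", "speaker_A.wav")
print(float(score), bool(same_speaker))
```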
Create a New Python Environment to Avoid Package Version Conflicts (If Needed)
python -m venv <envname>
source <envname>/bin/activate
Dependencies Installation
Please download requirements.txt from the main repo folder and install the package dependencies:
pip install -r <.../requirements.txt>
Package Installation
The speechmlpipeline package can be installed via either PyPI or GitHub.
Install speechmlpipeline via PyPI for the stable version of the package
pip install speechmlpipeline
Install speechmlpipeline via GitHub for the latest version of the package
git lfs install
git clone https://github.com/princeton-ddss/SpeechMLPipeline
cd <.../SpeechMLPipeline>
pip install .
Download Models Offline to Run Them without Internet Connection
Download Spacy NLP Model by Running Commands below in Terminal
python -m spacy download en_core_web_lg
Download Whisper, Llama2, and Speechbrain Models by using the Download Module from the Package
<hf_access_token> is the access token for Hugging Face. Please create a Hugging Face account if you do not have one. A new access token can be created by following the instructions on the Hugging Face website.
<models_list> is the list of names of the models to be downloaded. Typically, models_list should be set to ['whisper', 'llama2-70b', 'speechbrain'].
<download_model_path> is the local path where all the downloaded models will be saved.
from speechmlpipeline.DownloadModels.download_models_main_function import download_models_main_function
download_models_main_function(<download_model_path>, <models_list>, <hf_access_token>)
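For example (every value below is a placeholder to replace with your own):

```python
# All values below are placeholders; substitute your own model directory and token.
from speechmlpipeline.DownloadModels.download_models_main_function import download_models_main_function

download_models_main_function(
    "/models",                                 # download_model_path
    ["whisper", "llama2-70b", "speechbrain"],  # models_list
    "hf_xxxxxxxxxxxxxxxx",                     # hf_access_token (placeholder)
)
```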
Download PyAnnote Models Using the Dropbox Link
To download the PyAnnote models, please download the pyannote3.1 folder from this Dropbox link.
To use the PyAnnote models, please replace <local_path> in pyannote3.1/Diarization/config.yaml and pyannote3.1/Segmentation/config.yaml with the local parent folder of the downloaded pyannote3.1 folder. A hedged sketch of this replacement follows.
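For instance, assuming both config files contain a literal <local_path> placeholder (check the downloaded files first; the directory below is hypothetical):

```python
# Assumes the downloaded config.yaml files contain a literal "<local_path>"
# placeholder; set parent_dir to the folder that contains pyannote3.1.
from pathlib import Path

parent_dir = "/models"  # hypothetical local parent folder of pyannote3.1
for config in [
    Path(parent_dir, "pyannote3.1", "Diarization", "config.yaml"),
    Path(parent_dir, "pyannote3.1", "Segmentation", "config.yaml"),
]:
    config.write_text(config.read_text().replace("<local_path>", parent_dir))
```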
Usage
The complete pipeline can be run using the run_speech_ml_pipeline function, which can be imported directly as
from speechmlpipeline import run_speech_ml_pipeline
The run_speech_ml_pipeline function takes four class instances, corresponding to each step in the speech machine learning pipeline, as inputs:
- transcription: TranscriptionInputs Class to specify inputs to run OpenAI Whisper for Audio-to-Text Transcription with Timestamps Adjustment
- speakerchangedetection: SpeakerChangeDetectionInputs Class to specify inputs to run various models including PyAnnote Model, Spectral Clustering, Llama2, and NLP Rule-Based Analysis for Speaker Change Detection
- ensembledetection: EnsembleDetectionInputs Class to specify inputs to build an Ensemble Model of Speaker Change Detection by considering both audio and textual features
- speakeridentification: SpeakerIdentificationInputs Class to specify inputs to run Speechbrain Verification Model for Speaker Identification
To run the complete pipeline, the function can be called as
run_speech_ml_pipeline(transcription=TranscriptionInputs(...),
                       speakerchangedetection=SpeakerChangeDetectionInputs(...),
                       ensembledetection=EnsembleDetectionInputs(...),
                       speakeridentification=SpeakerIdentificationInputs(...))
To run only particular steps, simply provide the inputs corresponding to those steps.
For instance, to run all remaining steps of the pipeline with existing transcriptions:
run_speech_ml_pipeline(speakerchangedetection=SpeakerChangeDetectionInputs(...),
                       ensembledetection=EnsembleDetectionInputs(...),
                       speakeridentification=SpeakerIdentificationInputs(...))
For instance, to run only speaker change detection (with ensemble detection) using existing transcriptions:
run_speech_ml_pipeline(speakerchangedetection=SpeakerChangeDetectionInputs(...),
                       ensembledetection=EnsembleDetectionInputs(...))
For instance, to run only speaker identification using existing transcriptions and speaker change detection results:
run_speech_ml_pipeline(speakeridentification=SpeakerIdentificationInputs(...))
Please view the descriptions below to specify the attributes of the class instance corresponding to each step of the pipeline; a hedged end-to-end example follows the attribute lists. Please convert the audio files to WAV format to run the whole pipeline or speaker identification.
- TranscriptionInputs
- audio_file_input_path: A path which contains the audio file
- audio_file_input_name: An audio file name ending with .wav
- whisper_model_path: A path where the Whisper model files are saved
- whisper_output_path: A path to save the csv file of transcription outputs
- device: Torch device type on which to run the model; if device is set to None, a GPU is used automatically when available.
- only_run_in_english: True or False, indicating whether Whisper should only be run when the language identified in the audio file is English
- SpeakerChangeDetectionInputs
- audio_file_input_path: A path which contains an input audio file
- audio_file_input_name: An audio file name including the file extension
- min_speakers: The minimal number of speakers in the input audio file
- max_speakers: The maximal number of speakers in the input audio file
- whisper_output_path: A path where a Whisper transcription output csv file is saved
- whisper_output_file_name: A Whisper transcription output csv file name ending with .csv
- detection_models: A list of names of speaker change detection models to be run
- detection_output_path: A path to save the speaker change detection output in csv file
- hf_access_token: Access token to HuggingFace
- llama2_model_path: A path where the Llama2 model files are saved
- pyannote_model_path: A path where the Pyannote model files are saved
- device: Torch device type on which to run the model; if device is set to None, a GPU is used automatically when available.
- detection_llama2_output_path: A path where a pre-run Llama2 speaker change detection output csv file is saved
- temp_output_path: A path to save the current run's Llama2 speaker change detection output, to avoid rerunning it in the future
- EnsembleDetectionInputs
- detection_file_input_path: A path where the speaker change detection output in csv file is saved
- detection_file_input_name: A speaker change detection output csv file name ending with .csv
- ensemble_output_path: A path to save the ensemble detection output in csv file
- ensemble_voting: A list of voting methods to be used to build the final ensemble model
- SpeakerIdentificationInputs
- detection_file_input_path: A path where the speaker change detection output in csv file is saved
- detection_file_input_name: A speaker change detection output csv file name ending with .csv
- audio_speaker_file_input_path: A path which contains a verified audio file of each speaker
- audio_file_input_path: A path which contains an input audio file
- verification_model_path: A path where the speaker verification model files are saved; defaults to None
- speaker_change_col: A column name in the detection output csv file specifying which speaker change detection model's results are used for speaker identification
- verification_score_threshold: A score threshold below which the speaker is identified as "OTHERS"; scores range from negative values to 1
- identification_output_path: A path to save the speaker identification output in csv file
- temp_output_path: A path to save the temporary cut audio file of each segment
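A hypothetical end-to-end example follows. The import path for the Inputs classes, the constructor signatures, the model and column names, and every path and file name are assumptions based on the attribute lists above; adjust them to your setup and to the actual definitions in src/speechmlpipeline/main_pipeline_local_function.py.

```python
# Hypothetical end-to-end call; all values below are placeholders/assumptions.
from speechmlpipeline import run_speech_ml_pipeline
from speechmlpipeline.main_pipeline_local_function import (  # assumed module path
    TranscriptionInputs,
    SpeakerChangeDetectionInputs,
    EnsembleDetectionInputs,
    SpeakerIdentificationInputs,
)

transcription = TranscriptionInputs(
    audio_file_input_path="/data/audio",
    audio_file_input_name="interview.wav",
    whisper_model_path="/models/whisper",
    whisper_output_path="/outputs/transcription",
    device=None,               # None: use a GPU automatically when available
    only_run_in_english=True,
)

detection = SpeakerChangeDetectionInputs(
    audio_file_input_path="/data/audio",
    audio_file_input_name="interview.wav",
    min_speakers=2,
    max_speakers=4,
    whisper_output_path="/outputs/transcription",
    whisper_output_file_name="interview.csv",
    detection_models=["pyannote", "clustering", "nlp"],  # model names assumed
    detection_output_path="/outputs/detection",
    hf_access_token="hf_xxxxxxxxxxxxxxxx",               # placeholder token
    llama2_model_path="/models/llama2",
    pyannote_model_path="/models/pyannote3.1",
    device=None,
    detection_llama2_output_path=None,
    temp_output_path="/outputs/temp",
)

ensemble = EnsembleDetectionInputs(
    detection_file_input_path="/outputs/detection",
    detection_file_input_name="interview.csv",
    ensemble_output_path="/outputs/ensemble",
    ensemble_voting=["majority", "unanimity"],           # voting names assumed
)

identification = SpeakerIdentificationInputs(
    detection_file_input_path="/outputs/ensemble",
    detection_file_input_name="interview.csv",
    audio_speaker_file_input_path="/data/speakers",
    audio_file_input_path="/data/audio",
    verification_model_path=None,
    speaker_change_col="majority",                       # column name assumed
    verification_score_threshold=0.25,                   # threshold chosen arbitrarily
    identification_output_path="/outputs/identification",
    temp_output_path="/outputs/temp",
)

run_speech_ml_pipeline(
    transcription=transcription,
    speakerchangedetection=detection,
    ensembledetection=ensemble,
    speakeridentification=identification,
)
```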
Please view the sample code for running the function in sample_run.py and sample_run_existingllama2output.py in the src/speechmlpipeline folder. For detailed function and class descriptions, please refer to src/speechmlpipeline/main_pipeline_local_function.py.
Evaluation
Please view the summary of the prediction performance of speaker change detection models:
- Audio-based Model: PyAnnote
- Text-based Model: The Llama2 Model
- Audio-and-Text based Models: The Unanimity Model and the Majority Model
VoxConverse is an audio-visual diarization dataset consisting of over 50 hours of multispeaker clips of human speech extracted from YouTube videos, usually in a political debate or news segment context to ensure multi-speaker dialogue. The audio files in the dataset vary widely in their proportion of speaker changes, which makes it an effective evaluation dataset for assessing the models' robustness.
Average Coverage, Purity, Precision, and Recall

|           | PyAnnote | Llama2 | Unanimity | Majority |
|-----------|----------|--------|-----------|----------|
| Coverage  | 86%      | 45%    | 59%       | 84%      |
| Purity    | 83%      | 89%    | 87%       | 70%      |
| Precision | 23%      | 14%    | 24%       | 32%      |
| Recall    | 19%      | 32%    | 41%       | 19%      |
The AMI Meeting Corpus is a multi-modal dataset consisting of 100 hours of meeting recordings. Around two-thirds of the data was elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Unlike the VoxConverse dataset, the AMI dataset is less diverse, as it consists only of meeting recordings. The median and average proportions of speaker change are both around 78%, and the minimum proportion is above 59%. Thus, the evaluation analysis based on AMI is more applicable for measuring the models' performance in a regular conversational setting.
Average Coverage, Purity, Precision, and Recall

|           | PyAnnote | Llama2 | Unanimity | Majority |
|-----------|----------|--------|-----------|----------|
| Coverage  | 89%      | 75%    | 80%       | 92%      |
| Purity    | 60%      | 65%    | 64%       | 46%      |
| Precision | 44%      | 32%    | 40%       | 46%      |
| Recall    | 18%      | 18%    | 25%       | 11%      |
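As a rough, self-contained illustration of how change-point precision and recall can be computed with a time tolerance (the evaluation's exact matching rules are described in the PDF referenced below; the tolerance value here is arbitrary):

```python
# Rough illustration of change-point precision/recall with a time tolerance;
# the exact matching rules used in the evaluation are described in the
# evaluation_analysis PDF referenced below.
def precision_recall(predicted: list[float], reference: list[float],
                     tolerance: float = 0.5) -> tuple[float, float]:
    """Match predicted change times to reference changes within `tolerance` seconds."""
    matched_pred = sum(
        any(abs(p - r) <= tolerance for r in reference) for p in predicted
    )
    matched_ref = sum(
        any(abs(r - p) <= tolerance for p in predicted) for r in reference
    )
    precision = matched_pred / len(predicted) if predicted else 0.0
    recall = matched_ref / len(reference) if reference else 0.0
    return precision, recall

print(precision_recall([1.0, 5.2, 9.9], [1.1, 5.0, 7.5, 10.0]))  # (1.0, 0.75)
```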
For the detailed descriptions of the models, metrics, and analysis, please download the evaluation_analysis pdf file from the AudioAndTextBasedSpeakerChangeDetection repo.