speakerbox
Few-Shot Multi-Recording Speaker Identification Transformer Fine-Tuning and Application
Installation
Stable Release: pip install speakerbox
Development Head: pip install git+https://github.com/CouncilDataProject/speakerbox.git
Documentation
For full package documentation please visit councildataproject.github.io/speakerbox.
Example Usage Video
Link: https://youtu.be/SK2oVqSKPTE
In the example video, we use the Speakerbox library to quickly annotate a dataset of audio clips from the show The West Wing and train a speaker identification model to identify three of the show's characters (President Bartlet, Charlie Young, and Leo McGarry).
Problem
Given a set of multi-speaker recordings:
example/
├── 0.wav
├── 1.wav
├── 2.wav
├── 3.wav
├── 4.wav
└── 5.wav
Where each recording contains some or all of a set of speakers, for example:
- 0.wav -- contains speakers: A, B, C
- 1.wav -- contains speakers: A, C
- 2.wav -- contains speakers: B, C
- 3.wav -- contains speakers: A, B, C
- 4.wav -- contains speakers: A, B, C
- 5.wav -- contains speakers: A, B, C
You want to train a model to classify portions of audio as one of the N known speakers in future recordings not included in your original training set.
f(audio) -> [(start_time, end_time, speaker), (start_time, end_time, speaker), ...]
i.e. f(audio) -> [(2.4, 10.5, "A"), (10.8, 14.1, "D"), (14.8, 22.7, "B"), ...]
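To make this target concrete, here is one illustrative way to write that output type down in Python (the alias name is ours, not part of the library):

from typing import List, Tuple

# One (start_time_seconds, end_time_seconds, speaker_label) tuple per
# classified portion of audio -- matching the example output above.
SpeakerSegments = List[Tuple[float, float, str]]

example_output: SpeakerSegments = [(2.4, 10.5, "A"), (10.8, 14.1, "D"), (14.8, 22.7, "B")]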
The speakerbox library contains methods both for generating datasets for annotation and for utilizing multiple audio annotation schemes to train such a model.
The following table shows model performance results as the dataset size increases:
| dataset_size | mean_accuracy | mean_precision | mean_recall | mean_training_duration_seconds |
|---|---|---|---|---|
| 15-minutes | 0.874 ± 0.029 | 0.881 ± 0.037 | 0.874 ± 0.029 | 101 ± 1 |
| 30-minutes | 0.929 ± 0.006 | 0.940 ± 0.007 | 0.929 ± 0.006 | 186 ± 3 |
| 60-minutes | 0.937 ± 0.020 | 0.940 ± 0.017 | 0.937 ± 0.020 | 453 ± 7 |
All results reported are the average of five model training and evaluation trials for each of the different dataset sizes. All models were fine-tuned using an NVIDIA GTX 1070 TI.
Note: this table can be reproduced in ~1 hour using an NVIDIA GTX 1070 TI by:
Installing the example data download dependency:
pip install speakerbox[example_data]
Then running the following commands in Python:
from speakerbox.examples import (
download_preprocessed_example_data,
train_and_eval_all_example_models,
)
# Download and unpack the preprocessed example data
dataset = download_preprocessed_example_data()
# Train and eval models with different subsets of the data
results = train_and_eval_all_example_models(dataset)
Workflow
Diarization
We quickly generate an annotated dataset by first diarizing (or clustering based on the features of speaker audio) portions of larger audio files and splitting each of the clusters into its own directory that you can then manually clean up (by removing incorrectly clustered audio segments).
Notes
- It is recommended to have each larger audio file named with a unique id that can be used to act as a "recording id".
- Diarization time depends on machine resources and may take a long time -- one potential recommendation is to run a diarization script overnight and clean up the produced annotations the following day.
- During this process audio will be duplicated in the form of smaller audio clips -- ensure you have enough space on your machine to complete this process before you begin (see the disk-usage sketch after these notes).
- Clustering accuracy depends on how many speakers there are, how distinct their voices are, and how often speakers talk over one another.
- If possible, try to find recordings where speakers have a roughly uniform distribution of speaking durations.
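As a rough pre-check for the disk-space note above, you can total the size of the input audio with the standard library (the data/ directory name is a placeholder):

from pathlib import Path

# Diarization roughly duplicates the input audio as smaller clips, so make
# sure at least this much free space is available before starting.
total_bytes = sum(f.stat().st_size for f in Path("data").glob("*.wav"))
print(f"Input audio totals {total_bytes / 1e9:.2f} GB")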
⚠️ To use the diarization portions of speakerbox you need to complete the following steps: ⚠️
1. Visit hf.co/pyannote/speaker-diarization and accept user conditions.
2. Visit hf.co/pyannote/segmentation and accept user conditions.
3. Visit hf.co/settings/tokens to create an access token (only if you had to complete 1.)
Diarize a single file:
from speakerbox import preprocess
# The token can also be provided via the `HUGGINGFACE_TOKEN` environment variable.
diarized_and_split_audio_dir = preprocess.diarize_and_split_audio(
"0.wav",
hf_token="token-from-hugging-face",
)
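As noted in the comment above, the token can also come from the environment, in which case the hf_token argument can presumably be omitted (a sketch; the token value is a placeholder):

import os

from speakerbox import preprocess

# Placeholder -- use your own access token from hf.co/settings/tokens.
os.environ["HUGGINGFACE_TOKEN"] = "token-from-hugging-face"

# With the environment variable set, no explicit hf_token is passed.
diarized_and_split_audio_dir = preprocess.diarize_and_split_audio("0.wav")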
Diarize all files in a directory:
from speakerbox import preprocess
from pathlib import Path
from tqdm import tqdm
# Iterate over all 'wav' format files in a directory called 'data'
for audio_file in tqdm(list(Path("data").glob("*.wav"))):
    # The token can also be provided via the `HUGGINGFACE_TOKEN` environment variable.
diarized_and_split_audio_dir = preprocess.diarize_and_split_audio(
audio_file,
# Create a new directory to place all created sub-directories within
storage_dir=f"diarized-audio/{audio_file.stem}",
hf_token="token-from-hugging-face",
)
Cleaning
Diarization will produce a directory structure organized by unlabeled speakers with the audio clips that were clustered together.
For example, if "0.wav" had three speakers, the produced directory structure may look like the following tree:
0/
├── SPEAKER_00
│ ├── 567-12928.wav
│ ├── ...
│ └── 76192-82901.wav
├── SPEAKER_01
│ ├── 34123-38918.wav
│ ├── ...
│ └── 88212-89111.wav
└── SPEAKER_02
├── ...
└── 53998-62821.wav
We leave it to you as a user to then go through these directories, remove any audio clips that were incorrectly clustered together, and rename the sub-directories to their correct speaker labels. For example, labeled sub-directories may look like the following tree:
0/
├── A
│ ├── 567-12928.wav
│ ├── ...
│ └── 76192-82901.wav
├── B
│ ├── 34123-38918.wav
│ ├── ...
│ └── 88212-89111.wav
└── D
├── ...
└── 53998-62821.wav
Notes
- Most operating systems have an audio playback application to queue an entire directory of audio files as a playlist for playback. This makes it easy to listen to a whole unlabeled sub-directory (i.e. "SPEAKER_00") at a time and pause playback and remove files from the directory which were incorrectly clustered.
- If any clips have overlapping speakers, it is up to you whether to remove those clips or to keep them and label them with the single speaker you wish to associate them with.
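Once you have identified who an unlabeled cluster belongs to, the rename itself is a one-liner with the standard library (directory names are placeholders from the example trees above):

from pathlib import Path

# After listening to 0/SPEAKER_00 and identifying the speaker as "A",
# rename the cluster directory to the correct speaker label.
Path("0/SPEAKER_00").rename("0/A")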
Training Preparation
Once you have annotated what you think is enough recordings, you can try preparing a dataset for training.
The following functions will prepare the audio for training by:
- Finding all labeled audio clips in the provided directories
- Chunking all found audio clips into smaller duration clips (parametrizable)
- Checking that the provided annotated dataset meets the following conditions:
  - There is enough data such that the training, test, and validation subsets all contain different recording ids.
  - There is enough data such that the training, test, and validation subsets each contain all labels present in the whole dataset.
Notes
- During this process audio will be duplicated in the form of smaller audio clips -- ensure you have enough space on your machine to complete this process before you begin.
- Directory names are used as recording ids during dataset construction.
from speakerbox import preprocess
dataset = preprocess.expand_labeled_diarized_audio_dir_to_dataset(
labeled_diarized_audio_dir=[
"0/", # The cleaned and checked audio clips for recording id 0
"1/", # ... recording id 1
"2/", # ... recording id 2
"3/", # ... recording id 3
"4/", # ... recording id 4
"5/", # ... recording id 5
]
)
dataset_dict, value_counts = preprocess.prepare_dataset(
dataset,
# good if you have large variation in number of data points for each label
equalize_data_within_splits=True,
# set seed to get a reproducible data split
seed=60,
)
# You can print the value_counts dataframe to see how many audio clips of each label
# (speaker) are present in each data subset.
value_counts
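For a quick sanity check beyond value_counts, and assuming dataset_dict is dict-like with one sized dataset per split (as the Hugging Face datasets library provides; an assumption worth verifying against your installed version), you can print the size of each split:

# Assumes one entry per split (e.g. "train", "test", "valid"),
# each mapping to a dataset with a length.
for split_name, split in dataset_dict.items():
    print(f"{split_name}: {len(split)} chunked audio clips")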
Model Training and Evaluation
Once you have your dataset prepared and available, you can provide it directly to the training function to begin training a new model.
The eval_model function will store a file called results.md with the accuracy, precision, and recall of the model, and additionally store a file called validation-confusion.png which is a confusion matrix.
Notes
- The model (and evaluation metrics) will be stored in a new directory called trained-speakerbox (parametrizable).
- Training time depends on how much data you have annotated and provided.
- It is recommended to train with an NVIDIA GPU with CUDA available to speed up the training process (see the availability check after these notes).
- Speakerbox has only been tested on English-language audio, and the base model for fine-tuning was trained on English-language audio. We provide no guarantees as to its effectiveness on non-English-language audio. If you try Speakerbox with non-English-language audio, please let us know!
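Before committing to a long run, a minimal check that PyTorch can see a CUDA device (assuming torch is installed alongside speakerbox, which its transformer fine-tuning relies on):

import torch

# Prints True if a CUDA-capable GPU is visible to PyTorch;
# otherwise training falls back to the (much slower) CPU.
print(torch.cuda.is_available())

With that confirmed, kick off training and evaluation: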
from speakerbox import train, eval_model
# dataset_dict comes from previous preparation step
train(dataset_dict)
eval_model(dataset_dict["valid"])
Model Inference
Once you have a trained model, you can use it against a new audio file:
from speakerbox import apply
annotation = apply(
"new-audio.wav",
"path-to-my-model-directory/",
)
The apply function returns a pyannote.core.Annotation.
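To recover (start_time, end_time, speaker) tuples like those in the problem statement, you can iterate over the returned Annotation with pyannote.core's itertracks:

# Each track yields a Segment, a track name, and the predicted speaker label.
for segment, _, speaker in annotation.itertracks(yield_label=True):
    print(f"({segment.start:.1f}, {segment.end:.1f}, {speaker!r})")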
Development
See CONTRIBUTING.md for information related to developing the code.
MIT License