Streaming speaker diarization
Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation
by Juan Manuel Coria, Hervé Bredin, Sahar Ghannay and Sophie Rosset.
We propose to address online speaker diarization as a combination of incremental clustering and local diarization applied to a rolling buffer updated every 500ms. Every single step of the proposed pipeline is designed to take full advantage of the strong ability of a recently proposed end-to-end overlap-aware segmentation to detect and separate overlapping speakers. In particular, we propose a modified version of the statistics pooling layer (initially introduced in the x-vector architecture) to give less weight to frames where the segmentation model predicts simultaneous speakers. Furthermore, we derive cannot-link constraints from the initial segmentation step to prevent two local speakers from being wrongfully merged during the incremental clustering step. Finally, we show how the latency of the proposed approach can be adjusted between 500ms and 5s to match the requirements of a particular use case, and we provide a systematic analysis of the influence of latency on the overall performance (on AMI, DIHARD and VoxConverse).
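To make the pooling modification concrete, here is a minimal sketch of overlap-aware weighted statistics pooling. It is illustrative only, not the exact layer from the paper: the function name and the precise weighting scheme are assumptions, but it shows how per-frame weights (lower where the segmentation model predicts simultaneous speakers) reduce the influence of overlapped frames on the pooled statistics.

```python
import torch

def weighted_stats_pooling(frames: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Hypothetical weighted mean + standard deviation pooling over time.

    frames:  (num_frames, dim) frame-level features
    weights: (num_frames,) weights in [0, 1]; frames where overlapping
             speakers are predicted should receive lower weight
    """
    w = weights / weights.sum().clamp(min=1e-8)               # normalize weights
    mean = (w.unsqueeze(1) * frames).sum(dim=0)               # weighted mean
    var = (w.unsqueeze(1) * (frames - mean) ** 2).sum(dim=0)  # weighted variance
    std = var.clamp(min=1e-8).sqrt()
    return torch.cat([mean, std])                             # (2 * dim,) statistics
```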
Citation
Awaiting paper publication (ASRU 2021).
Installation
- Create the environment:

  ```shell
  conda create -n diarization python=3.8
  conda activate diarization
  ```

- Install the latest PyTorch version following the official instructions.

- Clone this repository and install the package:

  ```shell
  cd StreamingSpeakerDiarization
  pip install -e .
  ```
Usage
CLI
Stream a previously recorded conversation:

```shell
python -m diart.demo /path/to/audio.wav
```

Or use a live audio stream from your microphone:

```shell
python -m diart.demo microphone
```

This will launch a real-time visualization of the diarization outputs as they are produced by the system.

By default, the script uses step = latency = 500ms and sets reasonable values for all hyper-parameters. See `python -m diart.demo -h` for more information.
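For example, using the `--latency` flag documented in the reproducibility section below, one can trade responsiveness for accuracy (the value 2 here is just an illustration):

```shell
python -m diart.demo /path/to/audio.wav --latency=2
```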
API
We provide various building blocks that can be combined to process an audio stream. Our streaming implementation is based on RxPY, but the `functional` module is completely independent.

In this example we show how to obtain overlap-aware speaker embeddings from a microphone stream using Equation 2 of the paper:
```python
import rx
import rx.operators as ops
import diart.operators as myops
from diart.sources import MicrophoneAudioSource
import diart.functional as fn

sample_rate = 16000
mic = MicrophoneAudioSource(sample_rate=sample_rate)

# Initialize independent modules
segmentation = fn.FrameWiseModel("pyannote/segmentation")
embedding = fn.ChunkWiseModel("pyannote/embedding")
osp = fn.OverlappedSpeechPenalty(gamma=3, beta=10)
normalization = fn.EmbeddingNormalization(norm=1)

# Reformat microphone stream. Defaults to 5s duration and 500ms shift
regular_stream = mic.stream.pipe(myops.regularize_stream(sample_rate=sample_rate))
# Branch the microphone stream to calculate segmentation
segmentation_stream = regular_stream.pipe(ops.map(segmentation))
# Join audio and segmentation streams to calculate speaker embeddings
embedding_stream = rx.zip(regular_stream, segmentation_stream).pipe(
    ops.starmap(lambda wave, seg: (wave, osp(seg))),
    ops.starmap(embedding),
    ops.map(normalization),
)

embedding_stream.subscribe(on_next=lambda emb: print(emb.shape))
mic.read()
```
Output:

```
torch.Size([4, 512])
torch.Size([4, 512])
torch.Size([4, 512])
...
```
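Each emitted tensor stacks one embedding per local speaker in the current buffer (here 4 speakers with 512 dimensions). These are the embeddings that the incremental clustering step compares against its running speaker centroids. The sketch below is a deliberately simplified toy version of that comparison, not the diart API: the function name, the empty-centroid handling, and the plain cosine distance are all assumptions.

```python
import torch
import torch.nn.functional as F

def assign_speakers(local_embs, centroids, delta_new):
    """Toy incremental assignment: match each local speaker to the closest
    centroid if it is near enough, otherwise create a new global speaker."""
    assignments = []
    for emb in local_embs:
        if centroids.numel() == 0:
            # The first speaker ever seen becomes the first centroid
            centroids = emb.unsqueeze(0)
            assignments.append(0)
            continue
        # Cosine distance between this embedding and every centroid
        dists = 1 - F.cosine_similarity(emb.unsqueeze(0), centroids)
        best = int(dists.argmin())
        if dists[best] < delta_new:
            assignments.append(best)  # merge with an existing speaker
        else:
            centroids = torch.cat([centroids, emb.unsqueeze(0)])
            assignments.append(centroids.shape[0] - 1)  # new speaker
    return assignments, centroids

# Example with random embeddings in place of real model outputs
centroids = torch.empty(0, 512)
assignments, centroids = assign_speakers(torch.randn(4, 512), centroids, delta_new=1.0)
print(assignments, centroids.shape)
```

A real implementation would also enforce the cannot-link constraints mentioned above, so that two local speakers from the same buffer are never merged into the same centroid.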
Reproducible research
In order to reproduce the results of the paper, use the following hyper-parameters (tau, rho and delta correspond to τ_active, ρ_update and δ_new in the paper):
| Dataset     | latency | tau   | rho   | delta |
|-------------|---------|-------|-------|-------|
| DIHARD III  | any     | 0.555 | 0.422 | 1.517 |
| AMI         | any     | 0.507 | 0.006 | 1.057 |
| VoxConverse | any     | 0.576 | 0.915 | 0.648 |
| DIHARD II   | 1s      | 0.619 | 0.326 | 0.997 |
| DIHARD II   | 5s      | 0.555 | 0.422 | 1.517 |
For instance, for a DIHARD III configuration, one would use:

```shell
python -m diart.demo /path/to/file.wav --latency=5 --tau=0.555 --rho=0.422 --delta=1.517 --output /output/dir
```
And then to obtain the diarization error rate:

```python
from pyannote.metrics.diarization import DiarizationErrorRate
from pyannote.database.util import load_rttm

metric = DiarizationErrorRate()
hypothesis = load_rttm("/output/dir/output.rttm")
hypothesis = list(hypothesis.values())[0]  # extract hypothesis annotation from dictionary
reference = load_rttm("/path/to/reference.rttm")
reference = list(reference.values())[0]  # extract reference annotation from dictionary
der = metric(reference, hypothesis)
print(der)
```
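To score an entire dataset, the same metric instance can accumulate results across files, and `abs(metric)` then returns the aggregated error rate (standard `pyannote.metrics` behavior; the file lists below are placeholders):

```python
metric = DiarizationErrorRate()
for ref_path, hyp_path in zip(reference_files, hypothesis_files):  # placeholder lists
    reference = list(load_rttm(ref_path).values())[0]
    hypothesis = list(load_rttm(hyp_path).values())[0]
    metric(reference, hypothesis)  # accumulates per-file components
print(abs(metric))  # global diarization error rate over all files
```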
For convenience and to facilitate future comparisons, we also provide the expected outputs in RTTM format corresponding to every entry of Table 1 and Figure 5 in the paper. This includes the VBx offline baseline as well as our proposed online approach with latencies 500ms, 1s, 2s, 3s, 4s, and 5s.
License
MIT License
Copyright (c) 2021 Université Paris-Saclay
Copyright (c) 2021 CNRS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.