
Unofficial wespeaker PyPI package

Project description

WeSpeaker


Roadmap | Docs | Paper | Runtime | Pretrained Models | Huggingface Demo | Modelscope Demo

WeSpeaker focuses primarily on speaker embedding learning, with application to the speaker verification task. It supports both online feature extraction and loading pre-extracted features in Kaldi format.
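Verification with speaker embeddings typically reduces to scoring two fixed-size vectors against each other, most commonly with cosine similarity and a tuned decision threshold. A minimal illustrative sketch of that scoring step (not WeSpeaker's internal code; the threshold value is a placeholder):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def same_speaker(emb1, emb2, threshold=0.5):
    """Accept the trial if similarity exceeds a tuned threshold."""
    return cosine_similarity(emb1, emb2) >= threshold
```

In practice the threshold is tuned on a development set rather than fixed in advance.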

Installation

Install python package

pip install git+https://github.com/wenet-e2e/wespeaker.git

Command-line usage (use -h for parameters):

$ wespeaker --task embedding --audio_file audio.wav --output_file embedding.txt
$ wespeaker --task embedding_kaldi --wav_scp wav.scp --output_file /path/to/embedding
$ wespeaker --task similarity --audio_file audio.wav --audio_file2 audio2.wav
$ wespeaker --task diarization --audio_file audio.wav
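The --wav_scp argument expects a Kaldi-style wav.scp file, where each line maps an utterance ID to an audio path (`utt_id /path/to/audio.wav`). An illustrative parser for that format (WeSpeaker reads it internally; this is only a sketch):

```python
def parse_wav_scp(text):
    """Map utterance IDs to audio paths from wav.scp content."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Split only on the first whitespace so paths may contain spaces.
        utt_id, path = line.split(maxsplit=1)
        entries[utt_id] = path
    return entries
```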

Python programming usage:

import wespeaker

model = wespeaker.load_model('chinese')
embedding = model.extract_embedding('audio.wav')
utt_names, embeddings = model.extract_embedding_list('wav.scp')
similarity = model.compute_similarity('audio1.wav', 'audio2.wav')
diar_result = model.diarize('audio.wav')

Please refer to python usage for more command line and python programming usage.

Install for development & deployment

  • Clone this repo
git clone https://github.com/wenet-e2e/wespeaker.git
  • Create a conda env (PyTorch >= 1.12.1 is recommended):
conda create -n wespeaker python=3.9
conda activate wespeaker
conda install pytorch=1.12.1 torchaudio=0.12.1 cudatoolkit=11.3 -c pytorch -c conda-forge
pip install -r requirements.txt
pre-commit install  # for clean and tidy code
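Since PyTorch >= 1.12.1 is recommended, it can help to check the installed version programmatically. A small pure-Python helper (a sketch; in practice you would pass `torch.__version__`):

```python
def meets_minimum(version, minimum="1.12.1"):
    """True if a dotted version string is at least the minimum."""
    def parse(v):
        # Drop any local suffix such as "+cu113" before comparing.
        core = v.split("+")[0]
        return tuple(int(part) for part in core.split("."))
    return parse(version) >= parse(minimum)
```

Note this sketch only handles plain numeric versions; pre-release suffixes would need a fuller parser such as `packaging.version`.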

🔥 News

  • 2024.09.03: Support SimAM_ResNet and its model pretrained on VoxBlink2; check Pretrained Models for the pretrained model, the VoxCeleb Recipe for its strong performance, and python usage for the command-line usage!
  • 2024.08.30: We support whisper_encoder based frontend and propose the Whisper-PMFA framework, check #356.
  • 2024.08.20: Update diarization recipe for VoxConverse dataset by leveraging umap dimensionality reduction and hdbscan clustering, see #347 and #352.
  • 2024.08.18: Support using ssl pre-trained models as the frontend. The WavLM recipe is also provided, see #344.
  • 2024.05.15: Add support for quality-aware score calibration, see #320.
  • 2024.04.25: Add support for the gemini-dfresnet model, see #291.
  • 2024.04.23: Support MNN inference engine in runtime, see #310.
  • 2024.04.02: Release the WeSpeaker documentation, with detailed model-training tutorials, an introduction to the various runtime platforms, and more.
  • 2024.03.04: Support Damo's eres2net-cn-common-200k and campplus-cn-common-200k models #281; check python usage for details.
  • 2024.02.05: Support the ERes2Net #272 and Res2Net #273 models.
  • 2023.11.13: Support CLI usage of wespeaker, check python usage for details.
  • 2023.07.18: Support the kaldi-compatible PLDA and unsupervised adaptation, see #186.
  • 2023.07.14: Support the NIST SRE16 recipe, see #177.

Recipes

  • VoxCeleb: Speaker Verification recipe on the VoxCeleb dataset
    • 🔥 UPDATE 2024.05.15: We support score calibration for Voxceleb and achieve better performance!
    • 🔥 UPDATE 2023.07.10: We support self-supervised learning recipe on Voxceleb! Achieving 2.627% (ECAPA_TDNN_GLOB_c1024) EER on vox1-O-clean test set without any labels.
    • 🔥 UPDATE 2022.10.31: We support deep r-vector up to the 293-layer version, achieving 0.447%/0.043 EER/minDCF on the vox1-O-clean test set.
    • 🔥 UPDATE 2022.07.19: We apply the same setups as the CNCeleb recipe, and obtain SOTA performance considering the open-source systems
      • EER/minDCF on vox1-O-clean test set are 0.723%/0.069 (ResNet34) and 0.728%/0.099 (ECAPA_TDNN_GLOB_c1024), after LM fine-tuning and AS-Norm
  • CNCeleb: Speaker Verification recipe on the CnCeleb dataset
    • 🔥 UPDATE 2024.05.16: We support score calibration for Cnceleb and achieve better EER.
    • 🔥 UPDATE 2022.10.31: 221-layer ResNet achieves 5.655%/0.330 EER/minDCF
    • 🔥 UPDATE 2022.07.12: We migrate the winning system of CNSRC 2022 (report, slides)
      • EER/minDCF reduction from 8.426%/0.487 to 6.492%/0.354 after large margin fine-tuning and AS-Norm
  • NIST SRE16: Speaker Verification recipe for the 2016 NIST Speaker Recognition Evaluation Plan. A similar recipe can be found in Kaldi.
    • 🔥 UPDATE 2023.07.14: We support NIST SRE16 recipe. After PLDA adaptation, we achieved 6.608%, 10.01%, and 2.974% EER on trial Pooled, Tagalog, and Cantonese, respectively.
  • VoxConverse: Diarization recipe on the VoxConverse dataset
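The verification recipes above report EER (equal error rate), the operating point where the false-acceptance rate equals the false-rejection rate, alongside minDCF. A simple threshold-sweep approximation of EER (illustrative only, not the recipes' official scorer):

```python
def equal_error_rate(target_scores, nontarget_scores):
    """Approximate EER by sweeping the decision threshold over all scores."""
    thresholds = sorted(set(target_scores + nontarget_scores))
    best_gap, eer = float("inf"), 1.0
    for thr in thresholds:
        # False rejection: a genuine (target) trial scored below the threshold.
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        # False acceptance: an impostor (nontarget) trial scored at or above it.
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

Production scorers interpolate between thresholds for an exact crossing point; this sketch just keeps the closest one.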

Discussion

For Chinese users, you can scan the QR code on the left to follow the official account of the WeNet Community. We have also created a WeChat group for better discussion and quicker responses; please scan the QR code on the right to join the chat group.

Citations

If you find wespeaker useful, please cite it as

@article{wang2024advancing,
  title={Advancing speaker embedding learning: Wespeaker toolkit for research and production},
  author={Wang, Shuai and Chen, Zhengyang and Han, Bing and Wang, Hongji and Liang, Chengdong and Zhang, Binbin and Xiang, Xu and Ding, Wen and Rohdin, Johan and Silnova, Anna and others},
  journal={Speech Communication},
  volume={162},
  pages={103104},
  year={2024},
  publisher={Elsevier}
}

@inproceedings{wang2023wespeaker,
  title={Wespeaker: A research and production oriented speaker embedding learning toolkit},
  author={Wang, Hongji and Liang, Chengdong and Wang, Shuai and Chen, Zhengyang and Zhang, Binbin and Xiang, Xu and Deng, Yanlei and Qian, Yanmin},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}

Looking for contributors

If you are interested in contributing, feel free to contact @wsstriving or @robin1001.

Project details


Download files

Download the file for your platform.

Source Distribution

wespeaker_unofficial-0.0.1.tar.gz (68.0 kB)

Uploaded Source

Built Distribution

wespeaker_unofficial-0.0.1-py3-none-any.whl (93.8 kB)

Uploaded Python 3

File details

Details for the file wespeaker_unofficial-0.0.1.tar.gz.

File metadata

  • Download URL: wespeaker_unofficial-0.0.1.tar.gz
  • Upload date:
  • Size: 68.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for wespeaker_unofficial-0.0.1.tar.gz
  • SHA256: 5c60ce4f155a6ffcd375afcecf01ba4e5b0f02ca7b048e10d4bd25ab6c1a5c72
  • MD5: 88f446b3a88b809ca8641d69ebac0a98
  • BLAKE2b-256: 92c1adbe89641cbcabb5ebd7c9617f38f37468a12bb415e7e411f337051ee47a

See more details on using hashes here.
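The digests above can be checked against a downloaded file with Python's standard hashlib; a minimal sketch (the filename is taken from the listing above):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 16):
    """Hex SHA256 digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Example: sha256_of("wespeaker_unofficial-0.0.1.tar.gz") should match
# the SHA256 digest shown above.
```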

File details

Details for the file wespeaker_unofficial-0.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for wespeaker_unofficial-0.0.1-py3-none-any.whl
  • SHA256: 6571d739df19b5702ca66e9bb090a5f54d9d68948148d9a08b466bf963092a30
  • MD5: 2925b99679628f23cb487e09de016884
  • BLAKE2b-256: 0f9d6d8d2548779d3cd55be873509885ce45db4ceb3083d3d0e6b764e635cda5

