
This package contains scripts that show how to use the Idiap speaker recognition toolbox, Spear, to reproduce the Idiap results for NIST SRE 2012.

If you use this package and/or its results, please cite the following publications:

  1. The Spear paper published at ICASSP 2014:

      author = {Khoury, E. and El Shafey, L. and Marcel, S.},
      title = {Spear: An open source toolbox for speaker recognition based on {B}ob},
      booktitle = {IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP)},
      year = {2014},
      url = {},
  2. The paper that describes the development set used by the I4U consortium:

       author = {Saeidi, Rahim and others},
       month = {aug},
       title = {I4U Submission to NIST SRE 2012: a large-scale collaborative effort for noise-robust speaker verification},
       booktitle = {INTERSPEECH},
       year = {2013},
       location = {Lyon, France},
       pdf = {}
  3. Bob as the core framework used to run the experiments:

      author = {A. Anjos and L. El Shafey and R. Wallace and M. G\"unther and C. McCool and S. Marcel},
      title = {Bob: a free signal processing and machine learning toolbox for researchers},
      year = {2012},
      month = {oct},
      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
      publisher = {ACM Press},
      url = {},


Just download this package and decompress it locally:

$ wget
$ unzip
$ cd

Use buildout to bootstrap and have a working environment ready for experiments:

$ python bootstrap.py
$ ./bin/buildout

This also requires that bob (>= 1.2.0) is installed.
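
To check that a suitable Bob is visible to your Python interpreter, a quick sanity test is the sketch below (it assumes Bob 1.x, where the version is exposed as the plain string bob.version; adapt if your installation differs):

  # print the version of the installed Bob toolkit
  import bob

  print(bob.version)  # expect '1.2.0' or later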

Reproducing NIST-SRE 2012 experiments

Getting the data

You first need to order the NIST SRE databases (Fisher, Switchboard, MIXER).

Please follow the instructions and the evaluation plan given by NIST.

Getting the file lists

The file lists of the development and evaluation sets are automatically downloaded from this PyPI package.

The file lists of the development set were prepared by the I4U consortium; special thanks to Rahim Saeidi for the good work. The file names were then normalized following the PRISM definition. Please follow the instructions in xbob.db.nist_sre12.
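
As an illustration of where those instructions lead, the file lists can then be queried through the database API. The sketch below assumes the usual xbob.db verification-database interface; the protocol and group names passed to objects() are assumptions, so check the xbob.db.nist_sre12 documentation for the exact values:

  # list the development files of the male protocol
  # (the 'male'/'dev' names below are assumptions; see xbob.db.nist_sre12)
  import xbob.db.nist_sre12

  db = xbob.db.nist_sre12.Database()
  for f in db.objects(protocol='male', groups='dev'):
      print(f.path)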

Setting the database configuration file

Once the SPHERE data are preprocessed, and possibly downsampled to 8 kHz, you should set the paths in the configuration files to point to the data in your own environment, for both Male and Female (a sketch of such a file is given after the list below):

- config/database/nist_sre12/
- config/database/nist_sre12/
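
What goes into these files follows the usual Spear database-configuration pattern; a minimal sketch for the male list could look like the following (all variable names and paths here are assumptions, adapt them to your setup and to the package's actual configuration files):

  # sketch of a database configuration (names and paths are assumptions)
  import xbob.db.nist_sre12

  # the database object and the protocol to run
  db = xbob.db.nist_sre12.Database()
  protocol = 'male'

  # where your (possibly downsampled) SPHERE files live
  wav_input_dir = '/path/to/your/nist_sre12/data'
  wav_input_ext = '.sph'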

Running the experiments

The following commands run the entire experiment on both the development and the evaluation sets, using ISV (Inter-Session Variability modeling), for Male and Female respectively:

$ bin/ -d config/database/nist_sre12/ -T PATH/TO/TEMP_DIR/ -U PATH/TO/RESULTS_DIR/ -p config/preprocessing/ -f config/features/ -t config/tools/isv/ -b male

$ bin/ -d config/database/nist_sre12/ -T PATH/TO/TEMP_DIR/ -U PATH/TO/RESULTS_DIR/ -p config/preprocessing/ -f config/features/ -t config/tools/isv/ -b female

For more details and options, please type:

$ bin/ --help

You may want to change the parameters in the configuration files for the VAD (Energy, 4 Hz modulation energy), the features (MFCC, LFCC), and the tools (UBM-GMM, ISV, I-Vector). Please look at the different configuration settings in (an example sketch follows the list):

- src/spkrec/config/
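
To give an idea of what such a configuration contains, a sketch of an MFCC feature file might look like the following; every parameter name and value below is an illustrative assumption, not the package's defaults:

  # sketch of an MFCC feature configuration (names/values are assumptions)
  win_length_ms = 20       # analysis window length
  win_shift_ms = 10        # window shift
  n_filters = 24           # mel filter-bank size
  n_ceps = 19              # number of cepstral coefficients
  f_min = 0.               # filter-bank lower bound (Hz)
  f_max = 4000.            # upper bound, matching 8 kHz sampled data
  with_energy = True       # append log-energy
  with_delta = True        # append first derivatives
  with_delta_delta = True  # and second derivatives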

Running on the grid

In order to run the experiments on the grid, you need to have gridtk installed on your local network; details can be found in the gridtk package documentation.

Evaluation on the Development set

The EER on the Development sets can be obtained using the evaluation script from the bob library.

For Male, without any score normalization:

$ ./bin/ -d PATH/TO/RESULTS_DIR/male/scores/nonorm/scores-dev -t PATH/TO/RESULTS_DIR/male/scores/nonorm/scores-dev -x
  • EER = 4.68%

For Male, with ZT score normalization:

$ ./bin/ -d PATH/TO/RESULTS_DIR/male/scores/ztnorm/scores-dev -t PATH/TO/RESULTS_DIR/male/scores/ztnorm/scores-dev -x
  • EER = 3.98%

For Female, without any score normalization:

$ ./bin/ -d PATH/TO/RESULTS_DIR/female/scores/nonorm/scores-dev -t PATH/TO/RESULTS_DIR/female/scores/nonorm/scores-dev -x
  • EER = 6.28%

For Female, with ZT score normalization:

$ ./bin/ -d PATH/TO/RESULTS_DIR/female/scores/ztnorm/scores-dev -t PATH/TO/RESULTS_DIR/female/scores/ztnorm/scores-dev -x
  • EER = 5.16%
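
The same numbers can also be reproduced directly in Python with bob.measure. The sketch below assumes Bob 1.x and the standard four-column score-file format; the score-file path is the placeholder used above:

  # compute the EER of a development score file (four-column format)
  import bob

  # split the scores into impostor (negatives) and client (positives) trials
  negatives, positives = bob.measure.load.split_four_column(
      'PATH/TO/RESULTS_DIR/male/scores/nonorm/scores-dev')

  # threshold where FAR and FRR cross, then the error rates at that point
  threshold = bob.measure.eer_threshold(negatives, positives)
  far, frr = bob.measure.farfrr(negatives, positives, threshold)
  print('EER = %.2f%%' % (100. * (far + frr) / 2.))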

Notice that there are different implementations for computing the EER. For example, the default one in Bob differs from the implementation based on the ROC convex hull used in the Bosaris toolkit, so the numbers above may shift slightly depending on the tool.

Please check the NIST evaluation guidelines to see how to evaluate on the SRE 2012 evaluation set. Furthermore, the simple scores should be converted to compound scores; more details are given by Niko Brummer on the webpage of the Bosaris toolkit.