
Running the experiments as given in the paper


This package provides the source code to run the experiments published in the paper Score Calibration in Face Recognition. It relies on the FaceRecLib to execute the face recognition experiments, and on Bob to run the calibration experiments.

Installation

The installation of this package relies on the BuildOut system. By default, the command line sequence:

$ ./python bootstrap.py
$ ./bin/buildout

should download and install all requirements, including the FaceRecLib, the database interfaces xbob.db.scface, xbob.db.mobio and all their required packages. There are a few exceptions, which are not automatically downloaded:

Bob

The face recognition experiments rely on the open source signal-processing and machine learning toolbox Bob. To install Bob, please visit http://www.idiap.ch/software/bob and follow the installation instructions. Please verify that you have at least version 1.2.0 of Bob installed. If you have installed Bob in a non-standard directory, please open the buildout.cfg file from the base directory and set the prefixes directory accordingly.
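For illustration, the change in buildout.cfg amounts to adapting a single line (the path below is a placeholder for your own installation directory):

```ini
[buildout]
# ... all other options stay as shipped ...
prefixes = /path/to/your/bob/installation
```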

Image Databases

The experiments are run on external image databases. We do not provide the images from the databases themselves. Hence, please contact the database owners to obtain a copy of the images. The two databases used in our experiments are SCface and MOBIO.

Important!

After downloading the databases, you will need to tell our software where it can find them by changing the configuration files. In particular, please update the scface_directory in xfacereclib/paper/IET2014/database_scface.py, as well as mobio_image_directory and mobio_annotation_directory in xfacereclib/paper/IET2014/database_mobio.py. Please leave all other configuration parameters unchanged, as changing them might influence the face recognition experiments and, hence, the reproducibility of the results.
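As a sketch, the edits amount to pointing a few variables at your local copies (the paths below are placeholders; the remaining contents of both files stay untouched):

```python
# xfacereclib/paper/IET2014/database_scface.py (excerpt)
scface_directory = "/path/to/your/SCface/images"

# xfacereclib/paper/IET2014/database_mobio.py (excerpt)
mobio_image_directory = "/path/to/your/MOBIO/images"
mobio_annotation_directory = "/path/to/your/MOBIO/annotations"
```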

Getting help

In case anything goes wrong, please feel free to open a new ticket in our GitLab page, or send an email to manuel.guenther@idiap.ch.

Recreating the results of the Paper

After successfully setting up the databases, you are now able to run the face recognition and calibration experiments as explained in the Paper.

The experiment configuration

The face recognition experiments are run using the FaceRecLib, but for convenience a wrapper script exists that sets up the correct parametrization for the call to the FaceRecLib. The configuration files used by the FaceRecLib, which contain all the parameters of the experiments, can be found in the xfacereclib/paper/IET2014/ directory. In particular, the xfacereclib/paper/IET2014/dct_mobio.py and xfacereclib/paper/IET2014/isv_mobio.py files contain the configuration for the DCT block features and the ISV algorithm as described in the Paper.

Running the experiments

The wrapper script can be found at bin/iet2014_face_recog.py. It requires some command line options, which you can list using ./bin/iet2014_face_recog.py --help. Usually, the command line options have a long version (starting with --) and a shortcut (starting with a single -); here we use only the long versions:

  • --temp-directory: Specify a directory where temporary files will be stored (default: temp). This directory can be deleted after all experiments have run successfully.

  • --result-directory: Specify a directory where final result files will be stored (default: results). This directory is required to evaluate the experiments.

  • --databases: Specify a list of databases that you want your experiments to run on. Possible values are scface and mobio. By default, experiments on both databases are executed.

  • --protocols: Specify a list of protocols that you want to run. Possible values are combined, close, medium and far for database scface, and male and female for mobio. By default, all protocols are used.

  • --combined-zt-norm: Execute the face recognition experiments on the SCface database with combined ZT-norm cohort.

  • --verbose: Print out additional information or debug information during the execution of the experiments. The --verbose option can be used several times, increasing the level to Warning (1), Info (2) and Debug (3). By default, only Error (0) messages are printed.

  • --dry-run: Use this option to print the calls to the FaceRecLib without executing them.

Additionally, you can pass options directly to the FaceRecLib, but you should do so with care. Simply use -- to separate options for bin/iet2014_face_recog.py from options for the FaceRecLib. For example, the --force option might be of interest. See ./bin/faceverify.py --help for a complete list of options.

It is advisable to use the --dry-run option before actually running the experiments, just to check that everything is correct. Also, the Info (2) verbosity level prints useful information, e.g., by adding --verbose --verbose (or -vv for short) on the command line. A commonly used command line sequence to execute the face recognition algorithm on both databases could be:

  1. Run the experiments on the MOBIO database:

    $ ./bin/iet2014_face_recog.py -vv --databases mobio
  2. Run the experiments on the SCface database, using protocol-specific files for the ZT-norm:

    $ ./bin/iet2014_face_recog.py -vv --databases scface
  3. Run the experiments on the SCface database, using files from all distance conditions for the ZT-norm:

    $ ./bin/iet2014_face_recog.py -vv --databases scface --combined-zt-norm --protocols close medium far

Evaluating the experiments

After all experiments have finished successfully, the resulting score files can be evaluated. For this, the bin/iet2014_evaluate.py script can be used to create Tables 3, 4, 5 and 6 of the Paper, simply by writing LaTeX-compatible files that can later be used to generate the tables.

Generating output files

Also, all information is written to the console (when using the -vvv option to enable debug information), including:

  1. The \(C^{\mathrm{min}}_{\mathrm{ver}}\) of the development set, the \(C^{\mathrm{min}}_{\mathrm{ver}}\) of the evaluation set and the \(C_{\mathrm{ver}}\) of the evaluation set based on the optimal threshold on the development set.

  2. The \(C_{\mathrm{frr}}\) on both the development and evaluation sets, using the threshold defined at FAR=1% on the development set.

  3. The \(C_{\mathrm{ver}}\) on the development and evaluation set, when applying threshold \(\theta_0=0\) (mainly useful for calibrated scores).

  4. The \(C_{\mathrm{cllr}}\) performance on the development and the evaluation set.

  5. The \(C^{\mathrm{min}}_{\mathrm{cllr}}\) performance on the development and the evaluation set.

All these numbers are computed with and without ZT score normalization, and before and after score calibration.
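As a reference for the last two items, the \(C_{\mathrm{cllr}}\) measure can be computed directly from scores that are (assumed to be) calibrated log-likelihood ratios. The following sketch is not the evaluation script itself, which relies on Bob's measure tools; it merely illustrates the formula:

```python
import math

def cllr(target_scores, nontarget_scores):
    """Cost of the log-likelihood ratio, assuming the scores are
    calibrated log-likelihood ratios (natural logarithm).

    A completely uninformative system (all scores 0) yields 1.0;
    well-calibrated, well-separated scores drive the cost toward 0.
    """
    # average information loss on target (genuine) trials
    c_tar = sum(math.log2(1.0 + math.exp(-s)) for s in target_scores) / len(target_scores)
    # average information loss on non-target (impostor) trials
    c_non = sum(math.log2(1.0 + math.exp(s)) for s in nontarget_scores) / len(nontarget_scores)
    return 0.5 * (c_tar + c_non)
```

For example, cllr([0.0], [0.0]) evaluates to 1.0, the cost of a system that provides no information.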

To run the script, some command line parameters can be specified; see ./bin/iet2014_evaluate.py --help:

  • --result-directory: Specify the directory where final result files are stored (default: results). This should be the same directory as passed to the bin/iet2014_face_recog.py script.

  • --databases: Specify a list of databases that you want to evaluate. Possible values are scface and mobio. By default, both databases are evaluated.

  • --protocols: Specify a list of protocols that you want to evaluate. Possible values are combined, close, medium and far for database scface, and male and female for mobio. By default, all protocols are used.

  • --combined-zt-norm: Evaluate the face recognition experiments on the SCface database with combined ZT-norm cohort.

  • --combined-threshold: Evaluate the face recognition experiments on the SCface database by computing the threshold on the combined development set.

  • --latex-directory: Specify the directory where the final LaTeX-compatible files will be placed (default: latex).

Again, the most common way to compute the resulting tables could be:

  1. Evaluate experiments on MOBIO:

    $ bin/iet2014_evaluate.py -vvv --databases mobio
  2. Evaluate experiments on SCface with distance-dependent ZT-norm:

    $ bin/iet2014_evaluate.py -vvv --databases scface
  3. Evaluate experiments on SCface with distance-independent ZT-norm:

    $ bin/iet2014_evaluate.py -vvv --databases scface --combined-zt-norm --protocols close medium far
  4. Evaluate experiments on SCface with distance-independent threshold (will mainly change the \(C_{\mathrm{ver}}\) of the evaluation set):

    $ bin/iet2014_evaluate.py -vvv --databases scface --combined-threshold --protocols close medium far
  5. The experiments to compare linear calibration with categorical calibration as given in Table 7 of the Paper are run using the bin/iet2014_categorical.py script:

    $ bin/iet2014_categorical.py -vvv

Generate the LaTeX tables

Finally, the LaTeX tables can be regenerated by defining the appropriate \Result and \ResultAtZero LaTeX macros and including the resulting files. E.g., to create Table 3 of the Paper, define:

\newcommand\ResultIII[2]{\\}
\newcommand\ResultII[9]{#1\,\% \ResultIII}
\newcommand\Result[9]{#1\,\% & #4\,\% & #2\,\% & #3\,\% & #5\,\% & #6\,\% & #9\,\% & #7\,\% & #8\,\% &\ResultII}
\newcommand\ResultAtZero[8]{}

set up your tabular environment with 10 columns, and input the result files at the appropriate places:

\input{latex/mobio_male}
\input{latex/mobio_female}
\input{latex/scface_close}
\input{latex/scface_medium}
\input{latex/scface_far}
\input{latex/scface_combined}
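Putting the macro definitions and input files together, the surrounding table might be sketched as follows (the column specification and headers are placeholders; note that \Result is chained into \ResultII and \ResultIII because a single TeX macro accepts at most nine parameters):

```latex
\begin{tabular}{*{10}{c}}
  % ... ten column headers, ending with \\ ...
  \input{latex/mobio_male}
  \input{latex/mobio_female}
  % ... further result files as listed above ...
\end{tabular}
```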

Accordingly, the other tables can be generated from files:

  • Table 4a): latex/scface_close-zt.tex, latex/scface_medium-zt.tex and latex/scface_far-zt.tex

  • Table 4b): latex/scface_close-thres.tex, latex/scface_medium-thres.tex and latex/scface_far-thres.tex

  • Tables 5 and 6: latex/mobio_male.tex, latex/mobio_female.tex, latex/scface_close-zt.tex, latex/scface_medium-zt.tex, latex/scface_far-zt.tex and latex/scface_combined.tex.

  • Table 7: latex/calibration-none.tex, latex/calibration-linear.tex and latex/calibration-categorical.tex

Generate the score distribution plots

Lastly, the score distribution plots shown in Figures 3 and 4 of the Paper can be regenerated. These plots require the face recognition experiments to have finished, and also the categorical calibration to have run. Afterwards, the script bin/iet2014_plot.py can be executed. Again, the script has a list of command line options:

  • --result-directory: Specify the directory where final result files are stored (default: results). This should be the same directory as passed to the bin/iet2014_face_recog.py script.

  • --figure: Specify which figure you want to create. Possible values are 3 and 4.

  • --output-file: Specify the file to which the plot should be written. By default, this is Figure_3.pdf or Figure_4.pdf for --figure 3 or --figure 4, respectively.

Hence, running:

$ ./bin/iet2014_plot.py -vv --figure 3
$ ./bin/iet2014_plot.py -vv --figure 4

should be sufficient to generate the plots.
