## Project Description
This package implements:
• cropping face bounding boxes from the Replay-Attack database
• extracting GLCM features for spoofing detection
• generating classification scores for the features using SVM and LDA
• extracting other types of features using the satellite packages it depends on
• calculating Q-statistics and fusing classification scores at score-level using the satellite packages it depends on.

This satellite package depends on the following satellite packages: antispoofing.lbp, antispoofing.lbptop, antispoofing.motion, antispoofing.fusion and antispoofing.utils. This dependence provides an interface to the scripts in these satellite packages through antispoofing.competition_icb2013, which allows easy spoofing-score generation using different types of features, as well as analysis of the common errors and fusion of the methods at score-level.

The fused system consisting of several of these counter-measures was submitted to The 2nd competition on counter measures to 2D facial spoofing attacks, held in conjunction with ICB 2013.

If you use this package and/or its results, please cite the following publications:

1. Bob as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012,
author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel},
title = {Bob: a free signal processing and machine learning toolbox for researchers},
year = {2012},
month = oct,
booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
publisher = {ACM Press},
}

2. The 2nd competition on counter measures to 2D facial spoofing attacks:

@INPROCEEDINGS{Chingovska_ICB2013_2013,
author = {Chingovska, Ivana and others},
keywords = {Anti-spoofing, Competition, Counter-Measures, face spoofing, presentation attack},
title = {The 2nd competition on counter measures to 2D facial spoofing attacks},
booktitle = {International Conference of Biometrics 2013},
year = {2013}
}


If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

## Raw data

The data used in the paper is publicly available and should be downloaded and installed before trying the programs described in this package. Visit the REPLAY-ATTACK database portal for more information.

## Installation

Note

If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may be unstable, or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.competition_icb2013 to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use an automatic installer like pip (or easy_install), or you can manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

### Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

```
$ pip install antispoofing.competition_icb2013
```


You can also do the same with easy_install:

```
$ easy_install antispoofing.competition_icb2013
```

### Using zc.buildout

Download the latest version of this package from PyPI, unpack it, and then run:

```
$ python bootstrap.py
$ ./bin/buildout
```

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note

The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to consider it uses the default python interpreter. In this case, the above command lines should work as expected. If you have Bob installed somewhere else on a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named external and edit the line egg-directories to point to the lib directory of the Bob installation you want to use. For example:

```
[external]
recipe = xbob.buildout:external
egg-directories=/Users/crazyfox/work/bob/build/lib
```

## User Guide

This section explains how to use the package in order to:

a) crop face bounding boxes from Replay-Attack;
b) calculate the GLCM features on the Replay-Attack database;
c) generate LBP, LBP-TOP and motion correlation features on Replay-Attack;
d) generate classification scores using Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP);
e) calculate common errors and Q-statistics for each of the features;
f) perform fusion at score-level for the different classification scores.

For generation of LBP, LBP-TOP and motion-correlation features, please refer to the corresponding satellite packages (antispoofing.lbp, antispoofing.lbptop, antispoofing.motion respectively). For fusion at score-level, please refer to the corresponding satellite package (antispoofing.fusion).
### Crop face bounding boxes

The features used in the paper are generated over the normalized face bounding boxes of the frames in the videos. The script to be used for face cropping and normalization is ./bin/crop_faces.py. It outputs an .hdf5 file for each video, containing a 3D numpy.array of pixel values of the normalized cropped frames. The first dimension of the array corresponds to the frames of the video file:

```
$ ./bin/crop_faces.py replay
```


To execute this script for the anonymized test-set, please call:

```
$ ./bin/crop_faces.py replay --ICB-2013
```

To see all the options for the script crop_faces.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

```
$ ./bin/crop_faces.py replay --help
```


This script uses the automatic face detections provided alongside the Replay-Attack database. For frames with no detections, we copy the face detection from the previous frame (if there is one). In our work, we consider all face bounding boxes smaller than 50x50 pixels as invalid detections (option --ff). Frames with no detected face or with an invalid detected face (<50x50 pixels) are set to NaN in our .hdf5 files. The face bounding boxes are normalized to 64x64 before storing (option -n).
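Consumers of these per-video arrays typically drop the NaN frames before computing features. The following is a minimal numpy sketch of that filtering; the array here is synthetic, standing in for the data stored in the .hdf5 files written by ./bin/crop_faces.py:

```python
import numpy as np

# Sketch: a video of 5 cropped frames, each normalized to 64x64 pixels.
video = np.random.rand(5, 64, 64)
video[2] = np.nan  # frame 2 had no valid face detection

# Keep only frames where a valid face was detected (no NaN values).
valid = ~np.isnan(video).any(axis=(1, 2))
clean = video[valid]
print(clean.shape)  # → (4, 64, 64)
```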

### Calculate the GLCM features

The first stage of the process is calculating the feature vectors on a per-frame basis. The script operates on the .hdf5 files obtained using ./bin/crop_faces.py, where the first dimension of each stored array corresponds to the frames of the video file.
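For background, a gray-level co-occurrence matrix (GLCM) counts how often pairs of gray levels occur at a fixed pixel offset. The toy pure-numpy sketch below illustrates that idea only; it is not the implementation behind ./bin/calcglcm.py, which can be assumed to support multiple offsets and derived texture measures:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single offset (dx, dy).

    Counts how often a pixel with gray level i has a neighbour with
    gray level j at distance (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

# Quantize one 64x64 frame to 8 gray levels, then accumulate the matrix.
frame = (np.random.rand(64, 64) * 8).astype(int)
m = glcm(frame)
print(m.sum())  # one count per horizontal pixel pair: 64 * 63 = 4032
```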

The program to be used for calculating the GLCM features is ./bin/calcglcm.py:

```
$ ./bin/calcglcm.py replay
```

To execute this script for the anonymized test-set, call:

```
$ ./bin/calcglcm.py replay --ICB-2013
```


To see all the options for the script calcglcm.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

```
$ ./bin/calcglcm.py replay --help
```

### Classification with linear discriminant analysis (LDA)

The classification with LDA is performed using the script ./bin/ldatrain.py. To execute the script with prior normalization and PCA dimensionality reduction as is done in the paper (for Replay-Attack), call:

```
$ ./bin/ldatrain.py -r -n replay
```
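For intuition, the core of a two-class Fisher LDA fits in a few lines: project onto the direction that best separates the class means relative to the within-class scatter. The sketch below uses synthetic data and is not the code behind ./bin/ldatrain.py:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: real accesses vs. spoofing attacks (rows = samples).
real = rng.normal(0.0, 1.0, (100, 4))
attack = rng.normal(2.0, 1.0, (100, 4))

# Fisher LDA: w = Sw^-1 (mu_real - mu_attack), with Sw the
# within-class (pooled) covariance of the training data.
mu_r, mu_a = real.mean(axis=0), attack.mean(axis=0)
sw = np.cov(real, rowvar=False) + np.cov(attack, rowvar=False)
w = np.linalg.solve(sw, mu_r - mu_a)

# Higher score => more "real"; a threshold is then set on a
# separate development set.
scores_real = real @ w
scores_attack = attack @ w
print(scores_real.mean() > scores_attack.mean())  # → True
```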


If you want to normalize the output scores as well, just set the --ns option.

To execute this script for the anonymized test-set, call:

```
$ ./bin/ldatrain.py -r -n replay --ICB-2013
```

This script can be used to calculate the LDA scores not only for GLCM, but also for any other features computed with any of the other satellite packages. To see all the options for this script, just type --help at the command line.

### Classification with support vector machine (SVM)

The classification with SVM is performed using the script ./bin/svmtrain.py. To execute the script with prior normalization of the data in the range [-1, 1] and PCA reduction as in the paper (for Replay-Attack), call:

```
$ ./bin/svmtrain.py -n -r replay
```

To reproduce our results, set the parameters cost=-1 (option -c -1) and gamma=3 (option -g 3) in the training of the SVM.


If you want to normalize the output scores as well, just set the --ns option.
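The [-1, 1] input normalization mentioned above (option -n) can be pictured as a min-max scaling. The sketch below is generic, under the assumption that the feature ranges are estimated on the training set only and then reused for the test data; it is not the package's exact code:

```python
import numpy as np

def minmax_norm(train, test):
    """Scale each feature to [-1, 1] using the training-set range,
    then apply the same mapping to the test data."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)  # guard constant features
    f = lambda x: 2.0 * (x - lo) / scale - 1.0
    return f(train), f(test)

train = np.array([[0.0, 10.0], [4.0, 30.0], [2.0, 20.0]])
test = np.array([[1.0, 15.0]])
tr, te = minmax_norm(train, test)
print(tr.min(), tr.max())  # → -1.0 1.0
print(te)  # → [[-0.5 -0.5]]
```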

To call this script for the anonymized test-set, call:

```
$ ./bin/svmtrain.py -n -r replay --ICB-2013
```
### Q-Statistic

Fusing two or more countermeasures is one way to improve classification performance. Kuncheva and Whitaker [1] showed that combining statistically independent classifiers maximizes the performance of a fusion and, to measure this dependency, they proposed the Q-statistic. For two countermeasures A and B, the Q-statistic is defined as:

\begin{equation*} Q_{A,B} = \frac{N_{11}N_{00} - N_{01}N_{10}}{N_{11}N_{00} +N_{01}N_{10}} \end{equation*}

where \(N_{ab}\) is the number of samples for which countermeasure A classifies correctly (a = 1) or incorrectly (a = 0) and countermeasure B classifies correctly (b = 1) or incorrectly (b = 0). For example, \(N_{11}\) counts the samples that both countermeasures classify correctly.
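The definition above translates directly into code. This standalone sketch computes Q from boolean correctness masks (it is an illustration, not the implementation of ./bin/icb2013_qstatistic.py):

```python
import numpy as np

def q_statistic(correct_a, correct_b):
    """Q-statistic between two countermeasures, given boolean arrays
    marking which samples each one classified correctly."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    n11 = np.sum(a & b)    # both correct
    n00 = np.sum(~a & ~b)  # both incorrect
    n10 = np.sum(a & ~b)   # only A correct
    n01 = np.sum(~a & b)   # only B correct
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

# Two countermeasures that mostly fail on the same samples => Q near 1.
a = [True, True, True, False, False, True, False, True]
b = [True, True, False, False, False, True, True, True]
print(q_statistic(a, b))  # n11=4, n00=2, n10=1, n01=1 → 7/9 ≈ 0.778
```

Q ranges from -1 (classifiers err on disjoint samples, ideal for fusion) to 1 (classifiers err on the same samples, so fusion gains little).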

To run the Q-Statistic script call:

```
$ ./bin/icb2013_qstatistic.py --input-dir [Set of scores of each countermeasure] -v [database]
```


### Generating other types of features

This package depends on other satellite packages for calculating other types of features: LBP, LBP-TOP and motion correlation. To read more details and to generate these types of features, please refer to the corresponding satellite packages (antispoofing.lbp, antispoofing.lbptop, antispoofing.motion respectively). Note that it is possible to call the scripts belonging to these other satellite packages from within the antispoofing.competition_icb2013 satellite package.

To generate classification scores for the other types of features, you can use the methods provided by this or the other corresponding satellite packages.

### Fusion of counter-measures

The classification scores obtained using different features and classification techniques can be fused at score-level. To read about the available fusion techniques, as well as to perform the fusion, please refer to the corresponding satellite package antispoofing.fusion. Note that you can call the scripts belonging to the antispoofing.fusion satellite package from within the antispoofing.competition_icb2013 satellite package.
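As a minimal illustration of the principle of score-level fusion (the real strategies live in antispoofing.fusion), two countermeasures' scores for the same samples can simply be averaged once they are on a comparable scale. The score values below are made up:

```python
import numpy as np

# Per-sample scores from two hypothetical countermeasures,
# already normalized to a comparable range.
scores_glcm = np.array([0.9, -0.2, 0.4])
scores_lbp = np.array([0.7, -0.6, -0.1])

# Mean rule: the simplest score-level fusion.
fused = (scores_glcm + scores_lbp) / 2.0
print(fused)  # → [ 0.8  -0.4   0.15]
```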

### Generating error rates

To calculate the threshold on the classification scores of a single or a fused counter-measure, use ./bin/eval_threshold.py. Note that as an input argument you need to give the file with the development scores used to evaluate the threshold. To calculate the error rates, use ./bin/apply_threshold.py. To see all the options for these two scripts, just type --help at the command line.
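A common way to set such a threshold on development scores is the equal-error-rate (EER) criterion: pick the threshold where the false acceptance rate and the false rejection rate are closest. The sketch below shows that idea on synthetic scores; whether ./bin/eval_threshold.py uses exactly this criterion is an assumption:

```python
import numpy as np

def eer_threshold(dev_real, dev_attack):
    """Threshold on development scores where the false acceptance rate
    (attacks scoring above threshold) and the false rejection rate
    (real accesses scoring below it) are closest."""
    candidates = np.sort(np.concatenate([dev_real, dev_attack]))
    far = np.array([(dev_attack >= t).mean() for t in candidates])
    frr = np.array([(dev_real < t).mean() for t in candidates])
    return candidates[np.argmin(np.abs(far - frr))]

rng = np.random.default_rng(1)
dev_real = rng.normal(1.0, 0.5, 500)     # higher score = more "real"
dev_attack = rng.normal(-1.0, 0.5, 500)
t = eer_threshold(dev_real, dev_attack)
print(t)  # falls between the two score distributions, near 0
```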

## References

[1] L. I. Kuncheva and C. J. Whitaker, “Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy,” Mach. Learn., vol. 51, pp. 181–207, May 2003.
