
Texture (LBP) based counter-measures for the REPLAY-ATTACK database

Project description

This package implements the LBP counter-measure to spoofing attacks against face recognition systems described in the paper On the Effectiveness of Local Binary Patterns in Face Anti-spoofing, by Chingovska, Anjos and Marcel, presented at the IEEE BIOSIG 2012 meeting.

If you use this package and/or its results, please cite the following publications:

  1. The original paper with the counter-measure explained in details:

    @INPROCEEDINGS{Chingovska_BIOSIG_2012,
      author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien},
      keywords = {Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing},
      month = sep,
      title = {On the Effectiveness of Local Binary Patterns in Face Anti-spoofing},
      booktitle = {IEEE BIOSIG 2012},
      year = {2012},
    }
  2. Bob as the core framework used to run the experiments:

        @INPROCEEDINGS{Anjos_ACMMM_2012,
          author = {A. Anjos and L. El Shafey and R. Wallace and M. G{\"u}nther and C. McCool and S. Marcel},
          title = {Bob: a free signal processing and machine learning toolbox for researchers},
          year = {2012},
          month = oct,
          booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
          publisher = {ACM Press},
        }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Raw data

The data used in the paper is publicly available and should be downloaded and installed before you try the programs described in this package. Visit the REPLAY-ATTACK database portal for more information.

This satellite package can also work with the CASIA_FASD database.



If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may be unstable at any given moment.

Go to the PyPI page to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install) or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.lbp

You can also do the same with easy_install:

$ easy_install antispoofing.lbp

This will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible.

This scheme works well with virtual environments by virtualenv or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.


The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above 2 command lines should work as expected. If you have Bob installed somewhere else in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named buildout and edit the line prefixes to point to the directory where Bob is installed or built.


User Guide

This section explains how to use the package in order to: a) calculate the LBP features on the REPLAY-ATTACK or CASIA_FASD database; b) perform classification using the Chi-2 distance, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). At the bottom of the page, you can find instructions on how to reproduce the exact paper results.

It is assumed you have followed the installation instructions for the package, and got the required database downloaded and uncompressed in a directory. After running the buildout command, you should have all required utilities sitting inside the bin directory. We expect that the video files of the database are installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files, if you don’t want to have the database installed on the root of this package:

$ ln -s /path/where/you/installed/the/database database

If you don’t want to create a link, use the --input-dir flag (available in all the scripts) to specify the root directory containing the database files. That would be the directory that contains the sub-directories train, test, devel and face-locations.

Calculate the LBP features

The first stage of the process is calculating the feature vectors, which are essentially normalized LBP histograms. There are two types of feature vectors:

  1. per-video averaged feature-vectors (the normalized LBP histograms for each frame, averaged over all the frames of the video. The result is a single feature vector for the whole video), or
  2. a single feature vector for each frame of the video (saved as a multiple row array in a single file).
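For illustration, here is a minimal numpy sketch of what such a feature vector looks like. The actual scripts rely on Bob's LBP operator, so the helper names below (lbp_codes, lbp_histogram) are hypothetical and the loop-based code is only a didactic stand-in:

```python
import numpy as np

def lbp_codes(gray):
    """Regular 8-neighbour LBP codes for a 2-D grayscale image.

    Each interior pixel is compared against its 8 neighbours; a neighbour
    greater than or equal to the centre contributes one bit to the code.
    """
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray):
    """Normalized 256-bin histogram of the LBP codes (sums to 1)."""
    hist = np.bincount(lbp_codes(gray).ravel(), minlength=256)
    return hist / hist.sum()

# Toy stand-in for the frames of one video.
frames = [np.random.randint(0, 256, (64, 64)).astype(np.uint8)
          for _ in range(10)]
per_frame = np.vstack([lbp_histogram(f) for f in frames])  # one row per frame
per_video = per_frame.mean(axis=0)  # single averaged 256-D feature vector
```

The per-frame matrix corresponds to case 2 (one row per frame in a single file), while the averaged vector corresponds to case 1.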

The program to be used for the first case is ./bin/, and the one for the second case is ./bin/. Both use the utility script spoof/. Depending on the command-line arguments, they can compute different types of LBP histograms over the normalized face bounding box. Furthermore, the normalized face bounding box can be divided into blocks or not.
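The per-block variant can be sketched as follows, assuming the LBP codes of a frame are already computed. The helper name per_block_histogram is hypothetical, not part of this package:

```python
import numpy as np

def per_block_histogram(codes, blocks=3, bins=256):
    """Split a 2-D array of LBP codes into blocks x blocks equal regions,
    compute a normalized histogram per region, and concatenate them.
    Edge rows/columns beyond an even split are dropped."""
    h, w = codes.shape
    bh, bw = h // blocks, w // blocks
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            region = codes[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist = np.bincount(region.ravel(), minlength=bins).astype(float)
            feats.append(hist / hist.sum())
    return np.concatenate(feats)  # blocks * blocks * bins values

codes = np.random.randint(0, 256, (64, 64))
feat = per_block_histogram(codes)  # 3 * 3 * 256 = 2304-dimensional
```

Per-block features trade histogram resolution for spatial information, which is why they appear as a separate configuration in the paper.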

The following command will calculate the per-video averaged feature vectors of all the videos in the REPLAY-ATTACK database and will put the resulting .hdf5 files with the extracted feature vectors in the default output directory ./lbp_features:

$ ./bin/ --ff 50 replay

In the above command, the face size filter is set to 50 pixels (as in the paper): the program will discard as invalid all frames with detected faces smaller than 50 pixels.

To calculate the feature vectors for each frame separately (and save them into a single file for the full video), you have to run:

$ ./bin/ --ff 50 replay

To see all the options for these scripts, just type --help at the command line. Change the default options in order to obtain the various features described in the paper.

If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

$ ./bin/ replay --help

Classification using Chi-2 distance

The classification using the Chi-2 distance consists of two steps. The first is creating the histogram model (the average LBP histogram of all the real-access videos in the training set). The second is comparing the features of the development and test videos to the model histogram and writing the results.
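As a sketch of the comparison step, the Chi-2 distance between a probe histogram and the model histogram can be written as below. This is an illustrative helper, not the package's actual implementation:

```python
import numpy as np

def chi2_distance(hist, model):
    """Chi-square distance between two normalized histograms.

    Bins where both histograms are zero are skipped to avoid 0/0.
    """
    hist = np.asarray(hist, dtype=float)
    model = np.asarray(model, dtype=float)
    denom = hist + model
    mask = denom > 0
    return np.sum((hist[mask] - model[mask]) ** 2 / denom[mask])

# The model is the average histogram over the real-access training videos;
# a probe close to the model yields a low distance, an attack a higher one.
model = np.array([0.25, 0.25, 0.25, 0.25])
real = np.array([0.24, 0.26, 0.25, 0.25])
attack = np.array([0.70, 0.10, 0.10, 0.10])
```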

The script to use for creating the histogram model is ./bin/. It expects the LBP features of the videos to be stored in the folder ./lbp_features. The model histogram will be written to the default output folder ./res. You can change these defaults by setting the input arguments. To execute this script for Replay-Attack, just run:

$ ./bin/ replay

The script for performing the Chi-2 histogram comparison is ./bin/, and it assumes that the model histogram has already been created. It makes use of the utility script spoof/. The default input directory is ./lbp_features, while the default input directory for the histogram model, as well as the default output directory, is ./res. To execute this script for Replay-Attack, just run:

$ ./bin/ -s replay

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for these scripts, just type --help at the command line.

Classification with linear discriminant analysis (LDA)

The classification with LDA is performed using the script ./bin/. The default input and output directories are ./lbp_features and ./res. To execute the script with prior PCA dimensionality reduction, as is done in the paper (for Replay-Attack), call:

$ ./bin/ -r -s replay
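For illustration, the PCA-then-LDA pipeline can be sketched in plain numpy on toy data. The actual script uses Bob's machinery; all names and the toy data below are hypothetical:

```python
import numpy as np

def pca(X, n_components):
    """Mean and projection matrix onto the leading principal components."""
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal directions in Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T

def lda_direction(X, y):
    """Fisher discriminant direction for a two-class problem (y in {0, 1})."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Within-class scatter matrix (class covariance times class size).
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in (0, 1))
    # Small ridge term keeps the solve stable after PCA.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

# Toy data: two Gaussian blobs standing in for real and attack features.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 10) + 2, rng.randn(50, 10) - 2])
y = np.array([0] * 50 + [1] * 50)

mean, W = pca(X, 5)          # reduce dimensionality first, as in the paper
Xp = (X - mean) @ W
w = lda_direction(Xp, y)
scores = Xp @ w              # one LDA score per sample
```

Reducing dimensionality with PCA first keeps the within-class scatter matrix well conditioned, which is the reason for the -r flag.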

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for this script, just type --help at the command line.

Classification with support vector machine (SVM)

The classification with SVM is performed using the script ./bin/. The default input and output directories are ./lbp_features and ./res. To execute the script with prior normalization of the data into the range [-1, 1], as in the paper (for Replay-Attack), call:

$ ./bin/ -n --eval -s replay
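The [-1, 1] normalization applied by -n can be sketched as follows. Note that the per-feature minimum and maximum must be computed on the training set only and then reused on the development and test data; the helper names are hypothetical, not the package code:

```python
import numpy as np

def minmax_params(train):
    """Per-feature min and max, computed on the training set only."""
    return train.min(axis=0), train.max(axis=0)

def minmax_scale(X, lo, hi):
    """Map each feature into [-1, 1] using the training-set range."""
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant features
    return 2.0 * (X - lo) / span - 1.0

train = np.array([[0.0, 10.0],
                  [5.0, 20.0],
                  [10.0, 30.0]])
lo, hi = minmax_params(train)
scaled = minmax_scale(train, lo, hi)  # rows map to -1, 0, 1 per feature
```

Scaling all features into a common range prevents features with large numeric ranges from dominating the SVM kernel.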

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for this script, just type --help at the command line.

Classification with support vector machine (SVM) on a different database or database subset

In the training process, the SVM machine, as well as the normalization and PCA parameters, are saved in an .hdf5 file. They can be used later to classify data from a different database or database subset. This can be done using the script ./bin/. The default input and output directories are ./lbp_features and ./res. To execute the script, call:

$ ./bin/ -n --eval replay

Do not forget the -s option if you want the scores for each video saved in a file. Also, do not forget to specify the right .hdf5 file where the SVM machine and the parameters are saved, using the -i parameter (the default is ./res/svm_machine.hdf5).

To see all the options for this script, just type --help at the command line.

Reproduce paper results

The exact commands to reproduce the results from the paper are given here. First, feature extraction should be done as follows:

$ ./bin/ -d features/regular replay
$ ./bin/ -d features/transitional replay
$ ./bin/ -d features/direction_coded replay
$ ./bin/ -d features/modified replay
$ ./bin/ -d features/per-block -b 3 replay

The results in Table II are obtained with the following commands:

$ ./bin/ -v features/regular -d models/regular replay
$ ./bin/ -v features/regular -m models/regular -d scores/regular -s replay

By changing the -v parameter, you can change the type of features, resulting in the scores for the different columns of the table.

The results in Table III are obtained by the same commands, using the corresponding value for the -v parameter for the per-block computed feature.

The results in Table IV for LDA and SVM classification are obtained by the following two commands, respectively:

$ ./bin/ -v features/regular -d scores/regular -n replay
$ ./bin/ -v features/regular -d scores/regular -n -r replay

The results for the CASIA-FASD database can be obtained in the same way, by specifying the casia parameter at the end of the commands. Note that the results for CASIA-FASD are reported on a per-block basis, using 5-fold cross-validation. This means the results need to be generated 5 times, training with a different fold each time; the fold can be specified as an argument as well.

Important note: the results in the last column of Table V are not straightforwardly reproducible at the moment (in particular, the concatenation of histograms is not directly supported by the scripts in this satellite package). Furthermore, at the present state, the scripts do not support the NUAA database. Work to solve this inconvenience is in progress :)


In case of problems, please contact any of the authors of the paper.

Download files

Source distribution, 45.2 kB, uploaded Mar 26, 2014.
