
Accumulate depth frames of 3DMAD database for better face models and analyze verification and spoofing performances of 2D, 2.5D and 3D samples

Project description

This package implements the baseline verification algorithms and the LBP-based counter-measures against 3D mask spoofing attacks on 2D, 2.5D and 3D face recognition systems, as described in the paper Spoofing Face Recognition with 3D Masks by N. Erdogmus and S. Marcel.

If you use this package and/or its results, please cite the following publications:

  1. The original paper, with the baseline verification and counter-measure algorithms explained in detail:

  2. Bob as the core framework used to run the experiments:

        @inproceedings{Anjos_ACMMM_2012,
          author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel},
          title = {Bob: a free signal processing and machine learning toolbox for researchers},
          year = {2012},
          month = oct,
          booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
          publisher = {ACM Press},
        }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Raw data

The data used in the paper is publicly available and should be downloaded and installed before you try the programs described in this package. Visit the 3D MASK ATTACK database portal for more information.



If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may be unstable or become unstable in a matter of moments.

Go to PyPI to download the latest stable version of this package.

There are 2 options to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or you can manually download and unpack it and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ signal):

$ pip install

You can also do the same with easy_install:

$ easy_install

This will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible.

This scheme works well with virtual environments created by virtualenv, or if you have root access to your machine. Otherwise, we recommend you use the next option.
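As a minimal sketch of the virtual-environment route: the directory name env below is arbitrary, python3 -m venv is the standard-library equivalent of the virtualenv tool, and the install line is left commented out because the package name is omitted on this page:

```shell
# create an isolated environment (directory name "env" is arbitrary)
python3 -m venv env
# sanity-check the interpreter inside the environment
env/bin/python -c "import sys; print(sys.prefix)"
# then install with the environment's own pip, substituting the package
# name shown on this page's PyPI entry:
#   env/bin/pip install <package-name>
```

Installing inside the environment keeps the package and its dependencies out of the system site-packages, so no root access is required.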

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.


The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above 2 command lines should work as expected. If you have Bob installed somewhere else in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named external and edit the line egg-directories to point to the lib directory of the Bob installation you want to use. For example:

[external]
recipe = xbob.buildout:external
egg-directories = /path/to/your/bob/installation/lib

User Guide

This section explains how to use the package in order to: a) accumulate depth frames in the 3DMAD database in order to obtain better 3D face models; b) analyze each mask in the 3DMAD database with 2 different algorithms for each of the 2D, 2.5D and 3D modes; c) test LBP-based anti-spoofing algorithms using 3 different classifiers: Chi-2, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM).

It is assumed you have followed the installation instructions for the package, and got the required database downloaded and uncompressed in a directory. After running the buildout command, you should have all required utilities sitting inside the bin directory. We expect that the data files of the database are installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files, if you don’t want to have the database installed on the root of this package:

$ ln -s /path/where/you/installed/the/database database

If you don’t want to create a link, use the --inputdir flag (available in all the scripts) to specify the root directory containing the database files.
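For example, the link can be created and checked as follows; the /tmp path below is purely illustrative and stands in for wherever you actually unpacked the database:

```shell
# stand-in for the real database location (illustrative path)
mkdir -p /tmp/3dmad_demo/database
# expose it at the package root under the expected name "database"
ln -sf /tmp/3dmad_demo/database database
ls -ld database
```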

Accumulate depth frames and generate pre-processed 2D, 2.5D and 3D files

The first stage of the process is accumulating depth frames for each Kinect video. 30 frames are aligned and accumulated for each 3D model, resulting in 10 models per video (HDF5 file).
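A quick sanity check on those numbers: 10 accumulated models of 30 aligned frames each means 300 depth frames are consumed per video:

```shell
# 10 models per video, 30 aligned frames accumulated per model
frames_per_video=$(( 10 * 30 ))
echo "$frames_per_video frames per video"   # 300 frames per video
```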

First, the depth values in the depth frames have to be projected to their real-world coordinates by running the following command:

$ bin/ -i <database directory>

This creates a subfolder in the output directory (by default ./output/aligned) and saves the aligned coordinates of all frames there in HDF5 files.

Next, the aligned frames are accumulated via the following command:

$ bin/

This creates a subfolder in the output directory (by default ./output/accumulated) and saves the accumulated models there as HDF5 files (10 models per video in the database). The contents of this folder are the 3D data to be used in baseline verification experiments.

In order to generate depth maps from the accumulated models, run the following command:

$ bin/

This creates a subfolder in the output directory (by default ./output/depth) and saves the depth maps obtained from the accumulated models there. The contents of this folder are the 2.5D data to be used in baseline verification and anti-spoofing experiments.

Finally, the corresponding grayscale images are created by taking every 30th frame of each video in the database:

$ bin/ -i <database directory>

This creates a subfolder in the output directory (by default ./output/grayscale) and saves the grayscale images obtained from the video files in the database there. The contents of this folder are the 2D data to be used in baseline verification and anti-spoofing experiments.

To see all the options for these scripts, just type --help at the command line.
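Taken together, the four preprocessing steps leave their results in sibling subfolders of ./output; the sketch below just recreates the default layout named in the text, with nothing in it:

```shell
# default output layout of the four preprocessing steps above
mkdir -p output/aligned output/accumulated output/depth output/grayscale
ls output
```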

You can also view the accumulated models using:

$ bin/ <path to the accumulated HDF5 file>

Analyzing facial masks in 3DMAD via baseline verification experiments

Once the experiment files for the 2D, 2.5D and 3D modes are generated, you can run the baseline verification algorithms, which analyze each mask in the database in a leave-one-out manner.

For LBP-2D, LBP-2.5D, TPS-3D, ISV-2D, ISV-2.5D and ICP-3D methods, the following commands should be used in their respective order:

$ bin/
$ bin/ -t depth
$ bin/
$ bin/
$ bin/ -t depth
$ bin/

The LBP, TPS and ISV algorithms save the extracted features in subfolders, by default under ./feature/<algorithm>. The obtained scores are saved into subfolders by default under ./result/<algorithm>_<data type>.
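That layout can be sketched as follows; the exact subfolder names depend on the algorithm and data type, and the ones below are illustrative examples of the <algorithm> and <algorithm>_<data type> patterns, created empty only to show the shape:

```shell
# features: one subfolder per feature-extracting algorithm
mkdir -p feature/LBP feature/TPS feature/ISV
# scores: one subfolder per algorithm/data-type combination
mkdir -p result/LBP_grayscale result/LBP_depth result/ISV_depth
find feature result -type d | sort
```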

Finally, the plots for each of these experiments are generated via the following commands (saved by default under the ./result folder):

$ bin/
$ bin/ -t depth
$ bin/ -a TPS
$ bin/ -a ISV
$ bin/ -a ISV -t depth
$ bin/ -a ICP

To see all the options for these scripts, just type --help at the command line.

Anti-spoofing experiments

Anti-spoofing experiments with different types of LBP features and classifiers can be run by the following command:

$ bin/ -t <grayscale/depth> -l <regular/transitional/direction-coded/modified/maatta11> -c <chi2/lda/svm>

This runs without block division. The -b flag divides the LBP images into 3x3 blocks and concatenates their histograms:

$ bin/ -t <grayscale/depth> -l <regular/transitional/direction-coded/modified> -c <chi2/lda/svm> -b

Maatta11 has its own specific block division, so it is excluded from the block-based runs.

Once all experiments are completed (2x5x3 + 2x4x3 = 54 in total), the bar plot can be obtained via:
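The count quoted above can be reproduced by enumerating the option grid; <script> below is a placeholder for the anti-spoofing script name, which is omitted on this page:

```shell
total=0
for t in grayscale depth; do
  # non-block runs: all 5 LBP types, including maatta11
  for l in regular transitional direction-coded modified maatta11; do
    for c in chi2 lda svm; do
      echo "bin/<script> -t $t -l $l -c $c"
      total=$((total + 1))
    done
  done
  # block-based runs (-b): maatta11 excluded, it has its own block division
  for l in regular transitional direction-coded modified; do
    for c in chi2 lda svm; do
      echo "bin/<script> -t $t -l $l -c $c -b"
      total=$((total + 1))
    done
  done
done
echo "total: $total"   # total: 54
```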

$ bin/

To see all the options for these scripts, just type --help at the command line.


In case of problems, please contact any of the authors of the paper.
