SPIGA: Shape Preserving Facial Landmarks with Graph Attention Networks
This repository contains the source code of SPIGA, a face alignment and head-pose estimator that takes advantage of the complementary benefits of CNN and GNN architectures, producing plausible face shapes in the presence of strong appearance changes.
It achieves top-performing results on the face alignment and head-pose benchmarks summarised below.
Setup
The repository has been tested on Ubuntu 20.04 with CUDA 11.4, the latest cuDNN release, Python 3.8 and PyTorch 1.12.1. Please install the repository from source:
# Best practices:
# 1. Create a virtual environment.
# 2. Install Pytorch according to your CUDA version.
# 3. Install SPIGA from source code:
git clone https://github.com/andresprados/SPIGA.git
cd SPIGA
pip install -e .
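After installation, a quick environment sanity check can be run from the same shell. The commands below are only a suggested sketch (they are not part of the official instructions) and assume the package is importable as spiga and that PyTorch was installed in the same environment:
# Optional sanity check (suggested commands, not from the official setup):
python -c "import spiga; print('SPIGA imported correctly')"
# Confirm PyTorch and check GPU visibility (prints False on CPU-only machines):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"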
- Models: You can download the model weights from Google Drive. By default, they should be stored at ./models/weights/.
- Datasets: Download the dataset images from the official websites (300W, AFLW, WFLW, COFW). By default, they should be saved following the folder structure below (the default root can be changed; see the path-override sketch after this list):
./data/databases/   # Default path can be updated by modifying 'db_img_path' in ./data/loaders/dl_config.py
|
└───/300w
│   └─── /images
│        |    /private
│        |    /test
│        └    /train
|
└───/cofw
│   └─── /images
|
└───/aflw
│   └─── /data
│        └    /flickr
|
└───/wflw
    └─── /images
- Annotations: For simplicity, the dataset annotations are stored directly in ./data/annotations. We strongly recommend moving them out of the repository if you plan to use it as a git directory.
- Results: Likewise, precomputed results are stored in ./eval/results/<dataset_name>. Remove them if you do not need them.
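If your dataset images live outside the default ./data/databases/ root, they can be pointed at through the 'db_img_path' setting mentioned in the folder tree above. The line below is only a hypothetical illustration of that override; the exact surrounding code in ./data/loaders/dl_config.py may differ, so check the real file before editing:
# ./data/loaders/dl_config.py (hypothetical excerpt; adapt to the actual file contents)
db_img_path = '/mnt/storage/face_databases/'  # custom dataset root replacing ./data/databases/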
Note: All callable scripts provide a detailed argument parser that describes the behaviour of the program and its inputs. Please check the available operational modes by using the --help flag.
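For example, the parsers of the evaluation scripts mentioned below can be inspected with (illustrative invocations):
# Print the available arguments of the evaluation scripts:
python ./eval/results_gen.py --help
python ./eval/benchmark/evaluator.py --help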
Dataloaders and Benchmarks
The alignment dataloaders and their respective benchmark are located at ./data and ./eval/benchmark, respectively.
For more information check the data readme or the benchmark readme.
Evaluation
Model evaluation is divided into two scripts:
Results generation: This script extracts the face alignment and head-pose estimations from the network trained on the desired <dataset_name>, generating a ./eval/results/results_<dataset_name>_test.json file that follows the same data structure defined by the dataset annotations.
python ./eval/results_gen.py <dataset_name>
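For example, assuming the dataset key matches the folder names above (e.g. wflw), python ./eval/results_gen.py wflw would produce ./eval/results/results_wflw_test.json. Since the output is plain JSON mirroring the annotation structure, it can be inspected with a few lines of Python; the snippet below is a generic sketch that makes no assumption about the exact keys inside the file:
import json

# Load the generated results file (path follows the WFLW example above).
with open('./eval/results/results_wflw_test.json') as f:
    results = json.load(f)

# Peek at the overall structure without assuming specific field names.
print(type(results).__name__)
first = results[0] if isinstance(results, list) else next(iter(results.values()))
print(list(first.keys()) if isinstance(first, dict) else first)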
Benchmark metrics: This script generates the desired landmark or head-pose estimation metrics. The benchmark lets you test any model by passing a results file as input.
python ./eval/benchmark/evaluator.py /path/to/<results_file.json> --eval lnd pose -s
Note: You will have to interactively select the NME_norm and other parameters in the terminal window.
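Putting both steps together for a single dataset, a typical session could look like the following (illustrative example using WFLW; the interactive prompts still appear during the second step):
# 1. Generate the results file for WFLW:
python ./eval/results_gen.py wflw
# 2. Compute landmark and head-pose metrics from that file (select NME_norm, etc. when prompted):
python ./eval/benchmark/evaluator.py ./eval/results/results_wflw_test.json --eval lnd pose -s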
Results Sum-up
WFLW Dataset
MERLRAV Dataset
300W Private Dataset
|         | NME_bbox | AUC_7  | FR_7  | NME_P90 | NME_P95 | NME_P99 |
|---------|----------|--------|-------|---------|---------|---------|
| full    | 2.031    | 71.011 | 0.167 | 2.788   | 3.078   | 3.838   |
| indoor  | 2.035    | 70.959 | 0.333 | 2.726   | 3.007   | 3.712   |
| outdoor | 2.027    | 37.174 | 0.000 | 2.824   | 3.217   | 3.838   |
COFW68 Dataset
|      | NME_bbox | AUC_7  | FR_7  | NME_P90 | NME_P95 | NME_P99 |
|------|----------|--------|-------|---------|---------|---------|
| full | 2.517    | 64.050 | 0.000 | 3.439   | 4.066   | 5.558   |
300W Public Dataset
|           | NME_ioc | AUC_8  | FR_8  | NME_P90 | NME_P95 | NME_P99 |
|-----------|---------|--------|-------|---------|---------|---------|
| full      | 2.994   | 62.726 | 0.726 | 4.667   | 5.436   | 7.320   |
| common    | 2.587   | 44.201 | 0.000 | 3.710   | 4.083   | 5.215   |
| challenge | 4.662   | 42.449 | 3.704 | 6.626   | 7.390   | 10.095  |
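As a reference for reading the tables: NME is the mean point-to-point landmark error normalised by a reference distance (bounding-box size for NME_bbox, inter-ocular distance for NME_ioc), AUC_x is the area under the cumulative error distribution up to an NME of x%, FR_x is the percentage of images whose NME exceeds x%, and NME_P90/P95/P99 are percentiles of the per-image NME. The snippet below sketches these standard definitions from a list of per-image NME values; it is a generic illustration, not the repository's benchmark code:
import numpy as np

def auc_and_fr(nme_per_image, threshold):
    # Standard cumulative-error metrics (generic sketch, not SPIGA's benchmark implementation).
    nme = np.asarray(nme_per_image, dtype=float)
    # Failure rate: percentage of images whose NME exceeds the threshold.
    fr = 100.0 * np.mean(nme > threshold)
    # AUC: area under the cumulative error distribution from 0 to the threshold,
    # normalised by the threshold so a perfect model scores 100.
    steps = np.linspace(0.0, threshold, num=1000)
    ced = np.array([np.mean(nme <= s) for s in steps])
    auc = 100.0 * ced.mean()  # uniform steps: the mean approximates the normalised area
    return auc, fr

# Example: per-image NME values (in %) evaluated at the 7% threshold.
print(auc_and_fr([2.1, 2.5, 3.0, 4.2, 8.5], threshold=7.0))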
Coming soon...
- Release evaluation code and pretrained models.
- Project page and demo.
- Training code.
BibTeX Citation
If you find this work or code useful for your research, please consider citing:
@inproceedings{prados22spiga,
  author    = {Andres Prados-Torreblanca and José M. Buenaposada and Luis Baumela},
  title     = {Shape Preserving Facial Landmarks with Graph Attention Networks},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2022},
  url       = {https://arxiv.org/abs/2210.07233}
}