Face analysis PyTorch framework.

Project description

facetorch


Hugging Face Space demo app 🤗

Google Colab notebook demo

User Guide, Documentation, ChatGPT facetorch guide

Docker Hub (GPU)

Facetorch is a Python library designed for facial detection and analysis, leveraging the power of deep neural networks. Its primary aim is to curate open-source face analysis models from the community, optimize them for high performance using TorchScript, and integrate them into a versatile face analysis toolkit. The library offers the following key features:

  1. Customizable Configuration: Easily configure your setup using Hydra and its powerful OmegaConf capabilities.

  2. Reproducible Environments: Ensure reproducibility with tools like conda-lock for dependency management and Docker for containerization.

  3. Accelerated Performance: Enjoy enhanced performance on both CPU and GPU with TorchScript optimization.

  4. Simple Extensibility: Extend the library by uploading your model file to Google Drive and adding a corresponding configuration YAML file to the repository.

Facetorch provides an efficient, scalable, and user-friendly solution for facial analysis tasks, catering to developers and researchers looking for flexibility and performance.

Please use this library responsibly and with caution. Adhere to the European Commission's Ethics Guidelines for Trustworthy AI to ensure ethical and fair usage. Keep in mind that the models may have limitations and potential biases, so it is crucial to evaluate their outputs critically and consider their impact.

Install

PyPI

pip install facetorch

Conda

conda install -c conda-forge facetorch

Usage

Prerequisites

Docker Compose provides an easy way of building a working facetorch environment with a single command.

Run the Docker example

  • CPU: docker compose run facetorch python ./scripts/example.py
  • GPU: docker compose run facetorch-gpu python ./scripts/example.py analyzer.device=cuda

Check data/output for resulting images with bounding boxes and facial 3D landmarks.

(Apple M1 Macs) Use the Rosetta 2 emulator in Docker Desktop to run the CPU version.

Configure

The project is configured by files located in conf, with the main file being conf/config.yaml. Modules can easily be added to or removed from the configuration.
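Because the configuration is composed by Hydra, any value can also be overridden from the command line without editing the files, for example (batch_size as a top-level key is an assumption based on the example script):

  • python ./scripts/example.py analyzer.device=cuda batch_size=8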

Components

FaceAnalyzer is the main class of facetorch. It is the orchestrator responsible for initializing and running the following components:

  1. Reader - reads the image and returns an ImageData object containing the image tensor.
  2. Detector - wrapper around a neural network that detects faces.
  3. Unifier - processor that unifies the sizes of all faces and normalizes them between 0 and 1.
  4. Predictor dict - set of wrappers around neural networks trained to analyze facial features.
  5. Utilizer dict - set of wrappers around any functionality that requires the output of neural networks, e.g. drawing bounding boxes or facial landmarks.
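The following is a minimal usage sketch based on the example script: it composes the Hydra configuration from conf/config.yaml and runs the analyzer on one image (the run parameters mirror scripts/example.py and may differ between versions):

    from hydra import compose, initialize

    from facetorch import FaceAnalyzer

    # Compose the Hydra configuration from conf/config.yaml
    with initialize(config_path="conf", version_base=None):
        cfg = compose(config_name="config")

    analyzer = FaceAnalyzer(cfg.analyzer)

    # Run the full pipeline on one image and save the image with the
    # drawn bounding boxes and landmarks to the output path
    response = analyzer.run(
        path_image="data/input/test.jpg",
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=False,
        include_tensors=True,
        path_output="data/output/test.jpg",
    )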

Structure

analyzer
    ├── reader
    ├── detector
    ├── unifier
    ├── predictor
    │       ├── embed
    │       ├── verify
    │       ├── fer
    │       ├── au
    │       ├── va
    │       ├── deepfake
    │       └── align
    └── utilizer
            ├── align
            ├── draw
            └── save

Models

Detector

|     model     |   source  |   params  |   license   | version |
| ------------- | --------- | --------- | ----------- | ------- |
|   RetinaFace  |  biubug6  |   27.3M   | MIT license |    1    |

Predictor

Facial Representation Learning (embed)

|       model       |   source   |  params |   license   | version |  
| ----------------- | ---------- | ------- | ----------- | ------- |
|  ResNet-50 VGG 1M |  1adrianb  |  28.4M  | MIT license |    1    |
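The embeddings can, for instance, be used to compare two detected faces. A hedged sketch, assuming the analyzer was run with include_tensors=True and that each face exposes its embedding as preds["embed"].logits, following the Response structure used in the repository's examples:

    from torch.nn.functional import cosine_similarity

    # Cosine similarity between the embeddings of the first two detected faces
    sim = cosine_similarity(
        response.faces[0].preds["embed"].logits,
        response.faces[1].preds["embed"].logits,
        dim=0,
    )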

Face Verification (verify)

|       model      |   source    |  params  |      license       | version |  
| ---------------- | ----------- | -------- | ------------------ | ------- |
|    MagFace+UNPG  | Jung-Jun-Uk |   65.2M  | Apache License 2.0 |    1    |
|  AdaFaceR100W12M |  mk-minchul |    -     |     MIT License    |    2    |

Facial Expression Recognition (fer)

|       model       |      source    |  params  |       license      | version |  
| ----------------- | -------------- | -------- | ------------------ | ------- |
| EfficientNet B0 7 | HSE-asavchenko |    4M    | Apache License 2.0 |    1    |
| EfficientNet B2 8 | HSE-asavchenko |   7.7M   | Apache License 2.0 |    2    |

Facial Action Unit Detection (au)

|        model        |   source  |  params |       license      | version |  
| ------------------- | --------- | ------- | ------------------ | ------- |
| OpenGraph Swin Base |  CVI-SZU  |   94M   |     MIT License    |    1    |

Facial Valence Arousal (va)

|       model       |   source   |  params |   license   | version |
| ----------------- | ---------- | ------- | ----------- | ------- |
|  ELIM AL AlexNet  | kdhht2334  |  2.3M   | MIT license |    1    |

Deepfake Detection (deepfake)

|         model        |      source      |  params  |   license   | version |
| -------------------- | ---------------- | -------- | ----------- | ------- |
|    EfficientNet B7   |     selimsef     |   66.4M  | MIT license |    1    |

Face Alignment (align)

|       model       |      source      |  params  |   license   | version |
| ----------------- | ---------------- | -------- | ----------- | ------- |
|    MobileNet v2   |     choyingw     |   4.1M   | MIT license |    1    |

Model download

Models are downloaded during runtime automatically to the models directory. You can also download the models manually from a public Google Drive folder.

Execution time

The image test.jpg (4 faces) is analyzed (including drawing boxes and landmarks, but not saving) in about 486 ms, and test3.jpg (25 faces) in about 1845 ms (batch_size=8), on an NVIDIA Tesla T4 GPU, once the default model configuration (conf/config.yaml) has been initialized and pre-heated to the initial image size of 1080x1080 by the first run. Execution times can be monitored in the logs by using the DEBUG level.

Detailed test.jpg execution times:

analyzer
    ├── reader: 27 ms
    ├── detector: 193 ms
    ├── unifier: 1 ms
    ├── predictor
    │       ├── embed: 8 ms
    │       ├── verify: 58 ms
    │       ├── fer: 28 ms
    │       ├── au: 57 ms
    │       ├── va: 1 ms
    │       ├── deepfake: 117 ms
    │       └── align: 5 ms
    └── utilizer
            ├── align: 8 ms
            ├── draw_boxes: 22 ms
            ├── draw_landmarks: 7 ms
            └── save: 298 ms

Development

Run the Docker container:

  • CPU: docker compose -f docker-compose.dev.yml run facetorch-dev
  • GPU: docker compose -f docker-compose.dev.yml run facetorch-dev-gpu

Add predictor

Prerequisites

  1. TorchScript model file
  2. Google Drive ID of the model file
  3. fork of the facetorch repository

Facetorch works with models that were exported from PyTorch to TorchScript. You can apply the torch.jit.trace function to compile a PyTorch model into a TorchScript module. Please verify that the output of the traced model matches the output of the original model, as in the sketch below.
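A minimal tracing sketch (the stand-in network below is hypothetical; replace it with the trained model you want to export):

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained predictor network
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 7),
    ).eval()

    # Trace the model with a dummy input that matches its expected input shape
    dummy = torch.randn(1, 3, 224, 224)
    traced = torch.jit.trace(model, dummy)

    # Verify that the traced module reproduces the original output
    assert torch.allclose(model(dummy), traced(dummy), atol=1e-6)

    traced.save("model.pt")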

The first models are hosted in my public Google Drive folder. You can either send the new model to me for upload, host the model on your own Google Drive, or host it somewhere else and add your own downloader object to the codebase.

Configuration

Create yaml file
  1. Create a new folder with a short name for the task in the predictor configuration directory /conf/analyzer/predictor/, following the FER example in /conf/analyzer/predictor/fer/
  2. Copy the yaml file /conf/analyzer/predictor/fer/efficientnet_b2_8.yaml to the new folder /conf/analyzer/predictor/<predictor_name>/
  3. Rename the yaml file after the model you want to use: /conf/analyzer/predictor/<predictor_name>/<model_name>.yaml
Edit yaml file (an illustrative sketch follows this list)
  1. Change the Google Drive file ID to the ID of the new model.
  2. Select the preprocessor (or implement a new one based on BasePredPreProcessor) and specify its parameters, e.g. image size and normalization, in the yaml file to match the requirements of the new model.
  3. Select the postprocessor (or implement a new one based on BasePredPostProcessor) and specify its parameters, e.g. labels, in the yaml file to match the requirements of the new model.
  4. (Optional) Add a BaseUtilizer derivative that uses the output of your model to perform additional actions.
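The sketch below only illustrates the general shape of a predictor yaml file using Hydra's _target_ instantiation; copy the exact keys and class paths from the fer example, since this schema may not match the current codebase (the file ID and paths are placeholders):

    _target_: facetorch.analyzer.predictor.FacePredictor
    downloader:
      _target_: facetorch.downloader.DownloaderGDrive
      file_id: <google_drive_file_id>
      path_local: /opt/facetorch/models/torchscript/predictor/<predictor_name>/1/model.pt
    preprocessor:
      # see /conf/analyzer/predictor/fer/ for the full preprocessor schema
      ...
    postprocessor:
      # see /conf/analyzer/predictor/fer/ for the full postprocessor schema
      ...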
Configure tests
  1. Add the new predictor to the main config.yaml and to all tests.config.n.yaml files. Alternatively, create a new config file, e.g. tests.config.n.yaml, and add it to the /tests/conftest.py file.
  2. Write a test for the new predictor in /tests/test_<predictor_name>.py, e.g. along the lines of the sketch below.
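A hypothetical test sketch (the analyzer_response fixture name and the preds layout are assumptions; check /tests/conftest.py and the existing tests for the actual fixtures):

    # tests/test_<predictor_name>.py
    def test_preds_contain_predictor(analyzer_response):
        # Every detected face should carry a prediction from the new predictor
        for face in analyzer_response.faces:
            assert "<predictor_name>" in face.preds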

Test and submit

  1. Run linting: black facetorch
  2. Add the new predictor to the README model table.
  3. Update CHANGELOG and version
  4. Submit a pull request to the repository

Update environment

CPU:

  • Add packages with corresponding versions to environment.yml file
  • Lock the environment: conda lock -p linux-64 -f environment.yml --lockfile conda-lock.yml
  • (Alternative Docker) Lock the environment: docker compose -f docker-compose.dev.yml run facetorch-lock
  • Install the locked environment: conda-lock install --name env conda-lock.yml

GPU:

  • Add packages with corresponding versions to gpu.environment.yml file
  • Lock the environment: conda lock -p linux-64 -f gpu.environment.yml --lockfile gpu.conda-lock.yml
  • (Alternative Docker) Lock the environment: docker compose -f docker-compose.dev.yml run facetorch-lock-gpu
  • Install the locked environment: conda-lock install --name env gpu.conda-lock.yml

Run tests + coverage

  • Run tests and generate coverage: pytest tests --verbose --cov-report html:coverage --cov facetorch

Generate documentation

  • Generate documentation from docstrings using pdoc3: pdoc --html facetorch --output-dir docs --force --template-dir pdoc/templates/

Profiling

  1. Run profiling of the example script: python -m cProfile -o profiling/example.prof scripts/example.py
  2. Open profiling file in the browser: snakeviz profiling/example.prof

Research Highlights Leveraging facetorch

Sharma et al. (2024)

Sharma, Paritosh, Camille Challant, and Michael Filhol. "Facial Expressions for Sign Language Synthesis using FACSHuman and AZee." Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages, pp. 354–360, 2024.

Liang et al. (2023)

Liang, Cong, Jiahe Wang, Haofan Zhang, Bing Tang, Junshan Huang, Shangfei Wang, and Xiaoping Chen. "UniFaRN: Unified transformer for facial reaction generation." Proceedings of the 31st ACM International Conference on Multimedia, pp. 9506–9510, 2023.

Gue et al. (2023)

Gue, Jia Xuan, Chun Yong Chong, and Mei Kuan Lim. "Facial Expression Recognition as markers of Depression." 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 674–680, 2023.

Acknowledgements

I would like to thank the open-source community and the researchers who have shared their work and published models. This project would not have been possible without their contributions.

Citing

If you use facetorch in your work, please make sure to appropriately credit the original authors of the models it employs. Additionally, you may consider citing the facetorch library itself. Below is an example citation for facetorch:

@misc{facetorch,
    author = {Gajarsky, Tomas},
    title = {Facetorch: A Python Library for Analyzing Faces Using PyTorch},
    year = {2024},
    publisher = {GitHub},
    journal = {GitHub Repository},
    howpublished = {\url{https://github.com/tomas-gajarsky/facetorch}}
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

facetorch-0.5.1.tar.gz (1.1 MB)

Built Distribution

facetorch-0.5.1-py3-none-any.whl (40.4 kB)

File details

Details for the file facetorch-0.5.1.tar.gz.

File metadata

  • Download URL: facetorch-0.5.1.tar.gz
  • Size: 1.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for facetorch-0.5.1.tar.gz

| Algorithm   | Hash digest |
| ----------- | ----------- |
| SHA256      | b0c6ec0032d493495be63ca87c44a6558efa39df4d371e2ac764938cd5402359 |
| MD5         | 74d7e094dfc243826c7020e776fb9a94 |
| BLAKE2b-256 | 33ac07cdff2704d49b1b41297d533eeeb2bcfc37acf284a09eeece3a0cb27d14 |


File details

Details for the file facetorch-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: facetorch-0.5.1-py3-none-any.whl
  • Size: 40.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for facetorch-0.5.1-py3-none-any.whl

| Algorithm   | Hash digest |
| ----------- | ----------- |
| SHA256      | bade6ba2967781efa13e85574a4a69aa9da975dd5a894fb24bd87dbf65235ead |
| MD5         | d73431d43c8eecc909458401dca01f63 |
| BLAKE2b-256 | 8d9f400f928b45b515c09c7552e273b8c291dda4325eeb44dc1039c23f7b6e09 |

