Forked Lightweight Face Recognition and Facial Attribute Analysis Framework (Age, Gender, Emotion, Race) for Python
Project description
deepface
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib.
Experiments show that human beings achieve 97.53% accuracy on facial recognition tasks, whereas these models have already reached and passed that level.
Installation
The easiest way to install deepface is to download it from PyPI. This installs the library itself and its prerequisites as well. The library is mainly based on TensorFlow and Keras.
pip install deepface
Then you will be able to import the library and use its functionalities.
from deepface import DeepFace
Facial Recognition - Demo
A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. Deepface handles all these common stages in the background. You can just call its verification, find or analysis function with a single line of code.
Face Verification - Demo
This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy arrays or base64 encoded images is also welcome.
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
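The call returns a dictionary. A minimal sketch of inspecting it is shown below; the key names ("verified", "distance") follow older deepface releases and may vary between versions.
# "verified" and "distance" keys follow older deepface releases; names may vary by version
if result["verified"]:
    print("same person, distance:", result["distance"])
else:
    print("different persons, distance:", result["distance"])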
Face recognition - Demo
Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It looks for the identity of the input image in the database path and returns a pandas data frame as output.
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
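The data frame lists candidate identities sorted by similarity. A minimal sketch of reading the best match is below; the column names are assumptions that depend on the deepface version, model and metric.
# assumes an "identity" column; the distance column name varies by model and metric (e.g. "VGG-Face_cosine")
if len(df) > 0:
    print("best match:", df.iloc[0]["identity"])
else:
    print("no match found in the database")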
Face recognition models - Demo
Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. The default configuration uses the VGG-Face model.
models = ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib"]
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", model_name = models[1])
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_name = models[1])
FaceNet, VGG-Face, ArcFace and Dlib outperform OpenFace, DeepFace and DeepID based on experiments. Supportively, FaceNet with 512d got 99.65%; FaceNet with 128d got 99.2%; ArcFace got 99.41%; Dlib got 99.38%; VGG-Face got 98.78%; DeepID got 97.05%; and OpenFace got 93.80% accuracy on the LFW data set, whereas human beings score just 97.53%.
Similarity
Face recognition models are regular convolutional neural networks and they are responsible for representing faces as vectors. We expect that a face pair of the same person should be more similar than a face pair of different persons.
Similarity can be calculated with different metrics such as cosine similarity, Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity.
metrics = ["cosine", "euclidean", "euclidean_l2"]
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", distance_metric = metrics[1])
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", distance_metric = metrics[1])
L2-normalized Euclidean distance seems to be more stable than cosine similarity and regular Euclidean distance based on experiments.
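For reference, the sketch below shows how these three metrics are commonly computed from two embedding vectors with numpy. It is illustrative only and not deepface's internal implementation.
import numpy as np

def cosine_distance(a, b):
    # 1 minus cosine similarity
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def euclidean_l2_distance(a, b):
    # Euclidean distance between L2-normalized vectors
    a = np.asarray(a) / np.linalg.norm(a)
    b = np.asarray(b) / np.linalg.norm(b)
    return np.linalg.norm(a - b)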
Facial Attribute Analysis - Demo
Deepface also comes with a strong facial attribute analysis module covering age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (asian, white, middle eastern, indian, latino and black) predictions.
obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])
The age model has a mean absolute error of ±4.65, and the gender model achieves 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.
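A minimal sketch of reading the returned object is below; the key names (age, gender, dominant_emotion, dominant_race) follow older deepface releases and may differ in other versions.
# key names follow older deepface releases and may differ in other versions
print("age:", obj["age"])
print("gender:", obj["gender"])
print("emotion:", obj["dominant_emotion"])
print("race:", obj["dominant_race"])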
Streaming and Real Time Analysis - Demo
You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts to analyze a frame once it detects a face in 5 consecutive frames, then shows the results for 5 seconds.
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.
user
├── database
│   ├── Alice
│   │   ├── Alice1.jpg
│   │   ├── Alice2.jpg
│   ├── Bob
│   │   ├── Bob.jpg
Face Detectors - Demo
Face detection and alignment are early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. OpenCV, SSD, Dlib, MTCNN and RetinaFace detectors are wrapped in deepface. OpenCV is the default detector.
backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface']
#face detection and alignment
detected_face = DeepFace.detectFace(img_path = "img.jpg", detector_backend = backends[4])
#face verification
obj = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", detector_backend = backends[4])
#face recognition
df = DeepFace.find(img_path = "img.jpg", db_path = "my_db", detector_backend = backends[4])
#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg", detector_backend = backends[4])
RetinaFace and MTCNN seem to outperform the others in the detection and alignment stages, but they are slower. If the speed of your pipeline matters more, use opencv or ssd; if accuracy matters more, use retinaface or mtcnn.
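To measure this trade-off on your own hardware, a rough timing sketch such as the one below can help. It simply wraps detectFace calls with the standard time module.
import time

backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface']
for backend in backends:
    start = time.time()
    DeepFace.detectFace(img_path = "img.jpg", detector_backend = backend)
    # the first call per backend may include a one-time model weight download
    print(backend, "took", round(time.time() - start, 2), "seconds")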
API - Demo
Deepface serves an API as well. You can clone /api/api.py and run it with the python command. This starts a REST service, so you can call deepface from an external system such as a mobile app or a web client.
python api.py
Face recognition, facial attribute analysis and vector representation functions are covered in the API. You are expected to call these functions as HTTP POST requests. The service endpoints are http://127.0.0.1:5000/verify for face recognition, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. You should pass input images as base64 encoded strings in this case. A Postman project is also available.
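As a rough illustration, the verify endpoint could be called with the requests library as sketched below. The payload field names used here are assumptions and should be checked against the expected schema in api.py.
import base64
import requests

def to_base64(path):
    # the service expects base64 encoded images
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")

# the payload field names below are assumptions; verify them against api.py
payload = {
    "model_name": "VGG-Face",
    "img": [{"img1": to_base64("img1.jpg"), "img2": to_base64("img2.jpg")}],
}
resp = requests.post("http://127.0.0.1:5000/verify", json = payload)
print(resp.json())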
Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that vectors of the same person should be more similar than vectors of different persons. The question is where and how to store facial embeddings in a large-scale system. Herein, deepface offers a represent function to extract vector embeddings from facial images.
embedding = DeepFace.represent(img_path = "img.jpg", model_name = 'Facenet')
The technology stack for storing vector embeddings is vast. To determine the right tool, consider your task (face verification or face recognition), your priority (speed or confidence), and also your data size.
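As a simple starting point before reaching for a dedicated vector database, the sketch below keeps embeddings in an in-memory dictionary and finds the closest identity by cosine distance. The file names and the known_faces structure are illustrative; older deepface versions return the embedding as a plain list, while newer ones return a list of dictionaries.
import numpy as np

# illustrative in-memory index; alice.jpg and bob.jpg are placeholder file names
known_faces = {
    "alice": DeepFace.represent(img_path = "alice.jpg", model_name = 'Facenet'),
    "bob": DeepFace.represent(img_path = "bob.jpg", model_name = 'Facenet'),
}

def closest_identity(img_path):
    query = np.asarray(DeepFace.represent(img_path = img_path, model_name = 'Facenet'))
    best_name, best_distance = None, float("inf")
    for name, embedding in known_faces.items():
        embedding = np.asarray(embedding)
        # cosine distance between the query embedding and the stored embedding
        distance = 1 - np.dot(query, embedding) / (np.linalg.norm(query) * np.linalg.norm(embedding))
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name, best_distance

print(closest_identity("img.jpg"))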
Contribution
Pull requests are welcome. You should run the unit tests locally by running test/unit_tests.py. Please share the unit test result logs in the PR. Deepface is currently compatible with both TF 1 and TF 2; change requests should satisfy both of those requirements.
Support
There are many ways to support a project - starring⭐️ the GitHub repo is just one.
Citation
Please cite deepface in your publications if it helps your research. Here are examples of its BibTeX entries:
@inproceedings{serengil2020lightface,
title = {LightFace: A Hybrid Deep Face Recognition Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
organization = {IEEE}
}
@inproceedings{serengil2021lightface,
title = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
year = {2021},
organization = {IEEE}
}
Also, if you use deepface in your GitHub projects, please add deepface in the requirements.txt.
Licence
Deepface is licensed under the MIT License - see LICENSE for more details. However, the library wraps some external face recognition models: VGG-Face, Facenet, OpenFace, DeepFace, DeepID, ArcFace and Dlib. Besides, the age, gender and race/ethnicity models are based on VGG-Face. Licence types will be inherited if you use those models, so please check the license types of those models for production purposes.
The deepface logo was created by Adrien Coquet and is licensed under the Creative Commons Attribution 3.0 License.
Download files
Download the file for your platform.
Source distribution: nvface-0.1.0.tar.gz
Built distribution: nvface-0.1.0-py3-none-any.whl
File details
Details for the file nvface-0.1.0.tar.gz.
File metadata
- Download URL: nvface-0.1.0.tar.gz
- Upload date:
- Size: 58.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.8.2 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.9.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0f7afb67fec60bd34168afaf3442355ee677cbc0f90af6aa68fa062238f8a3ce
MD5 | 0bb76367e4bd01def7df7022c960ae0c
BLAKE2b-256 | f49e7cdac9d808ec5bb7addee5d1f47ea281c3fddb0cef8173de9b922033e3ca
File details
Details for the file nvface-0.1.0-py3-none-any.whl.
File metadata
- Download URL: nvface-0.1.0-py3-none-any.whl
- Upload date:
- Size: 63.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.8.2 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.9.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0ba03ecfd2b0d5c1bd16a51ff0f01bab8d2e5f46d28f1f70cda1110d47007e09
MD5 | 72f0406f7e12ededc709d6c6e9b7d64f
BLAKE2b-256 | 50e31b8fdb295b4a63b0195270189e4b0d3d1a01b6f4b3099226b6ab22067393