
A module to run the FaceXFormer model as a pipeline

Project description

FaceXFormer Pipeline Implementation

Example Image

This repository contains an easy-to-use pipeline implementation of the FaceXFormer, a unified transformer model for comprehensive facial analysis, as described in the paper by Kartik Narayan et al. from Johns Hopkins University.

Here is the official code repository: FaceXFormer Official Repository

What Does This Implementation Do Differently?

The official implementation is excellent, but it primarily focuses on benchmarking and is not yet application-ready. With this implementation:

  • No need to deal with reverse transforms, resizing, or remapping to the original image size.
  • Cropping is handled internally (different crops are used for face parsing and landmarks for better accuracy).
  • It is possible to run only one task or any combination of tasks.
  • You can pass your own face detector's coordinates as arguments, so you are not forced to rerun face detection (see the sketch after this list).
  • Visual debugging is much easier thanks to the use of the visual_debugger package.
  • Results are provided with all the extra information you may need.
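
For example, running a single task and reusing an external face detector might look like the sketch below. This is only an illustration: the face_coordinates keyword and its [x1, y1, x2, y2] format are assumptions, so check the package's source or docstrings for the actual argument name and format.

# Hypothetical sketch: run only the landmark task and reuse an external face detection
# NOTE: the face_coordinates argument name and its [x1, y1, x2, y2] format are
# assumptions for illustration; consult the package for the actual signature
from facexformer_pipeline import FacexformerPipeline

pipeline = FacexformerPipeline(tasks=['landmark'])                 # a single task is allowed
external_box = [120, 80, 360, 340]                                 # coordinates from your own face detector
results = pipeline.run_model(img, face_coordinates=external_box)   # img: an already-loaded image array
print(results['landmarks'])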

What Is It?

You can use FaceXFormer to extract

  • face parsing mask
  • landmarks
  • head pose orientation
  • various attributes
  • visibility
  • age, gender, and race

information from a single unified model, and you can do it really fast (37 FPS).
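
The 37 FPS figure will of course vary with your hardware. If you want to check the throughput on your own machine, a simple timing loop is enough (this assumes a pipeline and an image img set up as in the Usage section below):

import time

# Rough throughput check: time repeated pipeline calls on a single image
n_runs = 100
start = time.time()
for _ in range(n_runs):
    pipeline.run_model(img)
elapsed = time.time() - start
print(f"~{n_runs / elapsed:.1f} FPS on this machine")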

Installation

pip install facexformer_pipeline 
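
The examples below also use two optional helper packages, one for image loading and one for visualization. If you want to run the examples verbatim, install them as well (assuming both are published under the same names as their imports):

pip install image_input_handler visual_debugger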

Usage

To use the FaceXFormer pipeline, follow these steps:

# Import the pipeline class
from facexformer_pipeline import FacexformerPipeline

# Initialize the pipeline with desired tasks
pipeline = FacexformerPipeline(debug=True, tasks=['headpose', 'landmark', 'faceparsing'])

# Put your code for reading an image here, e.g.:
# image_path = "sample_image_head_only.jpg"
# uih = UniversalImageInputHandler(image_path)  # requires "pip install image_input_handler"
# img = uih.img

# Run the model on an image
results = pipeline.run_model(img)

# Access the results from results dictionary
print(results['headpose'])
print(results['landmarks']) 
print(results['faceparsing_mask']) 


# You can also access intermediate results, such as the face region crop and face coordinates
print(results['face_ROI'])
print(results['face_coordinates']) 
print(results['head_coordinates']) 
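
If you want to sanity-check what came back, the entries behave like regular NumPy arrays and Python objects, so inspecting their shapes is straightforward. Exact shapes and dtypes are not documented here, so treat this as an illustrative sketch:

import numpy as np

# Inspect the returned structures; exact shapes and dtypes depend on the pipeline version
for key in ['landmarks', 'faceparsing_mask', 'face_ROI', 'face_coordinates', 'head_coordinates']:
    arr = np.asarray(results[key])
    print(f"{key}: shape={arr.shape}, dtype={arr.dtype}")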

You can visualize the results easily with visual_debugger (the lines below create the image shown above):

# Show the results on the image
from visual_debugger import VisualDebugger, Annotation, AnnotationType

vdebugger = VisualDebugger(tag="facex", debug_folder_path="./", active=True)

annotation_landmarks_face_ROI = [Annotation(type=AnnotationType.POINTS, coordinates=results["landmarks_face_ROI"])]
annotation_landmarks = [Annotation(type=AnnotationType.POINTS, coordinates=results["landmarks"])]
annotation_headpose = [Annotation(type=AnnotationType.PITCH_YAW_ROLL, orientation=[results["headpose"]["pitch"],results["headpose"]["yaw"],results["headpose"]["roll"] ])]
annotation_face_coordinates = [Annotation(type=AnnotationType.RECTANGLE, coordinates=results["face_coordinates"])]
annotation_head_coordinates = [Annotation(type=AnnotationType.RECTANGLE, coordinates=results["head_coordinates"])]
annotation_faceparsing = [Annotation(type=AnnotationType.MASK, mask=results["faceparsing_mask"])]
annotation_faceparsing_head_ROI = [Annotation(type=AnnotationType.MASK, mask=results["faceparsing_mask_head_ROI"])]

vdebugger.visual_debug(img, name="original_image")
vdebugger.visual_debug(img, annotation_face_coordinates, name="", stage_name="face_coor")
vdebugger.visual_debug(results["face_ROI"], name="", stage_name="cropped_face_ROI")
vdebugger.visual_debug(img, annotation_head_coordinates, name="", stage_name="head_coor")
vdebugger.visual_debug(results["head_ROI"], name="", stage_name="cropped_head_ROI")
vdebugger.visual_debug(results["face_ROI"], annotation_landmarks_face_ROI, name="landmarks", stage_name="on_face_ROI")
vdebugger.visual_debug(img, annotation_landmarks, name="landmarks", stage_name="on_image")
vdebugger.visual_debug(results["face_ROI"], annotation_headpose, name="headpose")
vdebugger.visual_debug(results["head_ROI"], annotation_faceparsing_head_ROI, name="faceparsing", stage_name="mask_on_head_ROI")
vdebugger.visual_debug(img, annotation_faceparsing, name="faceparsing", stage_name="mask_on_full_image")
vdebugger.cook_merged_img() # creates merged image
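
If you prefer not to depend on visual_debugger, a minimal OpenCV overlay of the face parsing mask could look like the sketch below. It assumes faceparsing_mask is a 2-D label map aligned with the original image (which matches how it is used above) and that img is a BGR uint8 array:

import cv2
import numpy as np

# Hypothetical overlay: paint the non-background mask pixels green and blend them in
mask = np.asarray(results['faceparsing_mask'])
overlay = img.copy()
overlay[mask > 0] = (0, 255, 0)                       # color all parsed face regions
blended = cv2.addWeighted(img, 0.6, overlay, 0.4, 0)  # 60/40 blend of image and overlay
cv2.imwrite("faceparsing_overlay.png", blended)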

Acknowledgements

This implementation is based on the research conducted by Kartik Narayan and his team at Johns Hopkins University. All credit for the conceptual model and its validation belongs to them.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

facexformer_pipeline-0.2.8.tar.gz (16.1 kB)

Uploaded Source

Built Distribution

facexformer_pipeline-0.2.8-py3-none-any.whl (16.5 kB)

Uploaded Python 3

File details

Details for the file facexformer_pipeline-0.2.8.tar.gz.

File metadata

  • Download URL: facexformer_pipeline-0.2.8.tar.gz
  • Upload date:
  • Size: 16.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for facexformer_pipeline-0.2.8.tar.gz

  • SHA256: e9f2ce9868f719b32aecec8097ef24e0ba59b744cb06696407eb5d58bec4d9ef
  • MD5: aa04280f9f7ef67f7fe43906ca8613df
  • BLAKE2b-256: 4b5ca3280ea53d2d4745622bdecfcb928cb508c388e5b0a3529fc0073dc3da1b

See more details on using hashes here.

File details

Details for the file facexformer_pipeline-0.2.8-py3-none-any.whl.

File metadata

File hashes

Hashes for facexformer_pipeline-0.2.8-py3-none-any.whl

  • SHA256: 899e289c7b434e5089db908967266a48cfac139bd0e0406b1fe59f92fbe10e29
  • MD5: 2dac520607566d93ee5e4c7ccdba5703
  • BLAKE2b-256: 12ae95b2822f721bdcd2aa2c591f63e75f0aaf6efd03eb3a38e3f3cde786cb87

See more details on using hashes here.
