
UniFace: A Comprehensive Library for Face Detection, Recognition, Landmark Analysis, Age, and Gender Detection


uniface is a lightweight face detection library designed for high-performance face localization, landmark detection, and face alignment. It supports ONNX models and provides utilities for bounding-box visualization and landmark plotting. To train the RetinaFace model yourself, see https://github.com/yakhyo/retinaface-pytorch.


Features

  • Age and gender detection (planned).
  • Face recognition (planned).
  • Face alignment (added 2024-11-21).
  • High-speed face detection using ONNX models (added 2024-11-20).
  • Accurate facial landmark localization (e.g., eyes, nose, and mouth) (added 2024-11-20).
  • Easy-to-use API for inference and visualization (added 2024-11-20).

Installation

The easiest way to install UniFace is from PyPI. This installs the library along with its dependencies.

pip install uniface

To use the latest version of UniFace, which may not yet be published on PyPI, install it directly from the repository:

git clone https://github.com/yakhyo/uniface.git
cd uniface
pip install .

Quick Start

To get started with face detection using UniFace, check out the example notebook. It demonstrates how to initialize the model, run inference, and visualize the results.


Examples

Explore the following example notebooks to learn how to use UniFace effectively:

  • Face Detection: Demonstrates how to run face detection and draw bounding boxes and landmarks on an image.
  • Face Alignment: Shows how to align faces using detected landmarks.
  • Age and Gender Detection: Example of detecting age and gender from faces (under development).

Initialize the Model

from uniface import RetinaFace

# Initialize the RetinaFace model
uniface_inference = RetinaFace(
    model="retinaface_mnet_v2",  # Model name
    conf_thresh=0.5,             # Confidence threshold
    pre_nms_topk=5000,           # Pre-NMS Top-K detections
    nms_thresh=0.4,              # NMS IoU threshold
    post_nms_topk=750            # Post-NMS Top-K detections
)

Run Inference

Inference on image:

import cv2
from uniface.visualization import draw_detections

# Load an image
image_path = "assets/test.jpg"
original_image = cv2.imread(image_path)

# Perform inference
boxes, landmarks = uniface_inference.detect(original_image)

# Visualize results
draw_detections(original_image, (boxes, landmarks), vis_threshold=0.6)

# Save the output image
output_path = "output.jpg"
cv2.imwrite(output_path, original_image)
print(f"Saved output image to {output_path}")

Inference on video:

import cv2
from uniface.visualization import draw_detections

# Initialize the webcam
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: Unable to access the webcam.")
    exit()

while True:
    # Capture a frame from the webcam
    ret, frame = cap.read()
    if not ret:
        print("Error: Failed to read frame.")
        break

    # Perform inference
    boxes, landmarks = uniface_inference.detect(frame)

    # Draw detections on the frame
    draw_detections(frame, (boxes, landmarks), vis_threshold=0.6)

    # Display the output
    cv2.imshow("Webcam Inference", frame)

    # Exit if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam and close all OpenCV windows
cap.release()
cv2.destroyAllWindows()

Evaluation results of available models on WiderFace

| Model              | Easy   | Medium | Hard   |
| ------------------ | ------ | ------ | ------ |
| retinaface_mnet025 | 88.48% | 87.02% | 80.61% |
| retinaface_mnet050 | 89.42% | 87.97% | 82.40% |
| retinaface_mnet_v1 | 90.59% | 89.14% | 84.13% |
| retinaface_mnet_v2 | 91.70% | 91.03% | 86.60% |
| retinaface_r18     | 92.50% | 91.02% | 86.63% |
| retinaface_r34     | 94.16% | 93.12% | 88.90% |
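The table gives the accuracy side of the usual speed/accuracy trade-off: the MobileNet variants are faster, while the ResNet variants score higher. As an illustrative sketch (not part of the library), the Hard-set numbers can be encoded in a plain dict to pick a model programmatically:

```python
# WiderFace Hard-set AP (%) per model, taken from the table above.
HARD_AP = {
    "retinaface_mnet025": 80.61,
    "retinaface_mnet050": 82.40,
    "retinaface_mnet_v1": 84.13,
    "retinaface_mnet_v2": 86.60,
    "retinaface_r18": 86.63,
    "retinaface_r34": 88.90,
}

def best_model(candidates):
    """Return the candidate model name with the highest Hard-set AP."""
    return max(candidates, key=HARD_AP.__getitem__)

# Among the MobileNet variants, mnet_v2 scores highest on the Hard subset.
print(best_model(["retinaface_mnet025", "retinaface_mnet_v1", "retinaface_mnet_v2"]))
```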

API Reference

RetinaFace Class

Initialization

RetinaFace(
    model: str,
    conf_thresh: float = 0.5,
    pre_nms_topk: int = 5000,
    nms_thresh: float = 0.4,
    post_nms_topk: int = 750
)

Parameters:

  • model (str): Name of the model to use. Supported models:
    • retinaface_mnet025, retinaface_mnet050, retinaface_mnet_v1, retinaface_mnet_v2
    • retinaface_r18, retinaface_r34
  • conf_thresh (float, default=0.5): Minimum confidence score for detections.
  • pre_nms_topk (int, default=5000): Max detections to keep before NMS.
  • nms_thresh (float, default=0.4): IoU threshold for Non-Maximum Suppression.
  • post_nms_topk (int, default=750): Max detections to keep after NMS.
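The conf_thresh and nms_thresh parameters refer to the standard confidence-filtering and greedy non-maximum-suppression steps of a detector pipeline. The following is a minimal NumPy sketch of greedy NMS to illustrate what nms_thresh controls; it is not UniFace's implementation:

```python
import numpy as np

def nms(dets, iou_thresh=0.4):
    """Greedy NMS over dets of shape (N, 5): [x1, y1, x2, y2, score].
    Returns the indices of the boxes that survive suppression."""
    order = dets[:, 4].argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the top-scoring box with the remaining boxes
        xx1 = np.maximum(dets[i, 0], dets[rest, 0])
        yy1 = np.maximum(dets[i, 1], dets[rest, 1])
        xx2 = np.minimum(dets[i, 2], dets[rest, 2])
        yy2 = np.minimum(dets[i, 3], dets[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (dets[i, 2] - dets[i, 0]) * (dets[i, 3] - dets[i, 1])
        area_r = (dets[rest, 2] - dets[rest, 0]) * (dets[rest, 3] - dets[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes whose overlap with the kept box exceeds the threshold
        order = rest[iou <= iou_thresh]
    return keep

# Two heavily overlapping boxes and one distant box
dets = np.array([
    [10, 10, 50, 50, 0.9],
    [12, 12, 52, 52, 0.8],    # suppressed: IoU with the first box is ~0.82
    [100, 100, 140, 140, 0.7],
])
print(nms(dets, iou_thresh=0.4))
```

A lower nms_thresh suppresses more aggressively; raising it toward 1.0 keeps more overlapping boxes.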

detect Method

detect(
    image: np.ndarray,
    max_num: int = 0,
    metric: str = "default",
    center_weight: float = 2.0
) -> Tuple[np.ndarray, np.ndarray]

Description: Detects faces in the given image and returns bounding boxes and landmarks.

Parameters:

  • image (np.ndarray): Input image in BGR format.
  • max_num (int, default=0): Maximum number of faces to return. 0 means return all.
  • metric (str, default="default"): Metric for prioritizing detections:
    • "default": Prioritize detections closer to the image center.
    • "max": Prioritize larger bounding box areas.
  • center_weight (float, default=2.0): Weight for prioritizing center-aligned faces.

Returns:

  • bounding_boxes (np.ndarray): Array of detections as [x_min, y_min, x_max, y_max, confidence].
  • landmarks (np.ndarray): Array of landmarks as [(x1, y1), ..., (x5, y5)].
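Both return values are plain NumPy arrays, so they can be post-processed directly, e.g. filtered by confidence. A sketch using synthetic detections in the documented shapes (the (N, 5, 2) landmark shape is an assumption based on the five-point format above):

```python
import numpy as np

# Synthetic detections: boxes are (N, 5) rows of
# [x_min, y_min, x_max, y_max, confidence]; landmarks are (N, 5, 2).
boxes = np.array([
    [10.0, 10.0, 60.0, 60.0, 0.95],
    [70.0, 70.0, 120.0, 120.0, 0.40],
])
landmarks = np.zeros((2, 5, 2))

# Keep only confident detections, mirroring a vis_threshold of 0.6
mask = boxes[:, 4] >= 0.6
strong_boxes, strong_landmarks = boxes[mask], landmarks[mask]
print(strong_boxes.shape, strong_landmarks.shape)
```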

Visualization Utilities

draw_detections

draw_detections(
    image: np.ndarray,
    detections: Tuple[np.ndarray, np.ndarray],
    vis_threshold: float
) -> None

Description: Draws bounding boxes and landmarks on the given image.

Parameters:

  • image (np.ndarray): The input image in BGR format.
  • detections (Tuple[np.ndarray, np.ndarray]): A tuple of bounding boxes and landmarks.
  • vis_threshold (float): Minimum confidence score for visualization.
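draw_detections annotates the image in place. If you instead need the face regions themselves, the returned bounding boxes can be used to crop them with plain array slicing. A hypothetical helper (crop_faces is not part of UniFace):

```python
import numpy as np

def crop_faces(image, boxes):
    """Crop each face from a BGR image of shape (H, W, 3), given boxes
    as rows of [x_min, y_min, x_max, y_max, confidence]."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2, _conf in boxes:
        # Clamp coordinates to the image bounds before slicing
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        crops.append(image[y1:y2, x1:x2])
    return crops

image = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in frame
boxes = np.array([[16, 16, 48, 64, 0.9]])
print(crop_faces(image, boxes)[0].shape)  # (48, 32, 3)
```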

Contributing

We welcome contributions to enhance the library! Feel free to:

  • Submit bug reports or feature requests.
  • Fork the repository and create a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.


