
Retinaface implementation in PyTorch.

Project description

Retinaface


https://habrastorage.org/webt/uj/ff/vx/ujffvxxpzixwlmae8gyh7jylftq.jpeg

This repo is built on top of https://github.com/biubug6/Pytorch_Retinaface

Differences

Train loop moved to PyTorch Lightning

This adds a set of functionality (a rough Trainer sketch follows the list):

  • Distributed training
  • fp16
  • Synchronized BatchNorm
  • Support for various loggers like W&B or Neptune.ml
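
The project's train.py wires these up from the config; below is only a rough sketch of the corresponding PyTorch Lightning Trainer flags (Lightning 1.x-style arguments, model and logger names are placeholders), not the exact call this repo makes:

import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# Placeholder model and loaders; the real ones are built inside retinaface/train.py.
# model, train_loader, val_loader = ...

trainer = pl.Trainer(
    gpus=2,                # distributed training across two GPUs
    accelerator="ddp",     # DistributedDataParallel (newer Lightning versions use strategy="ddp")
    precision=16,          # fp16 mixed precision
    sync_batchnorm=True,   # synchronized BatchNorm across processes
    logger=WandbLogger(project="retinaface"),  # W&B; a Neptune logger is also supported
)
# trainer.fit(model, train_loader, val_loader)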

Hyperparameters are defined in the config file

Hyperparameters that were scattered across the code were moved to the config at retinaface/configs
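
For illustration, a minimal sketch of reading such a config with PyYAML; the file name and keys are hypothetical, and the real schema lives in retinaface/configs:

import yaml

# Hypothetical config path; see retinaface/configs for real examples.
with open("retinaface/configs/example.yaml") as f:
    hparams = yaml.safe_load(f)

print(hparams)  # e.g. optimizer, scheduler, and augmentation settings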

Augmentations => Albumentations

Color transforms that were manually implemented are replaced by the Albumentations library.

Todo:

  • Horizontal Flip is not implemented in Albumentations
  • Spatial transforms like rotations or transpose are not implemented yet.

Color transforms defined in the config.
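
As an illustration, a color-only Albumentations pipeline of the kind the config describes; the exact transforms and parameters used by this repo may differ:

import albumentations as A

# Illustrative color-only augmentations; the real set comes from the config file.
color_transform = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
        A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=0.5),
        A.RandomGamma(p=0.3),
    ]
)
# augmented_image = color_transform(image=image)["image"]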

Added mAP calculation for validation

To track training progress, the mAP metric is calculated on the validation set.
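
The matching behind mAP relies on IoU between predicted and ground-truth boxes. A small standalone sketch for boxes in [x_min, y_min, x_max, y_max] form (as in the annotation format below), not the repo's actual implementation:

def iou(box_a, box_b):
    """Intersection over Union for boxes given as [x_min, y_min, x_max, y_max]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0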

Installation

pip install -U retinaface_pytorch

Example inference

import cv2
from retinaface.pre_trained_models import get_model

# Example: read an image with OpenCV and convert BGR -> RGB so that `image`
# is a numpy array with shape (height, width, 3); the file path is illustrative.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)

model = get_model("resnet50_2020-07-20", max_size=2048)
model.eval()
annotation = model.predict_jsons(image)
  • Jupyter notebook with the example: Open In Colab
  • Jupyter notebook with the example on how to combine face detector with mask detector: Open In Colab
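
A hedged sketch of drawing the detections with OpenCV, assuming each entry returned by predict_jsons carries "bbox" ([x_min, y_min, x_max, y_max]), "score", and "landmarks" keys matching the annotation format below:

import cv2

# Uses `image` and `annotation` from the snippet above.
vis = image.copy()
for face in annotation:
    if not face["bbox"] or face["score"] < 0.5:  # skip empty or low-confidence detections
        continue
    x_min, y_min, x_max, y_max = (int(v) for v in face["bbox"])
    cv2.rectangle(vis, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
    for x, y in face["landmarks"]:
        cv2.circle(vis, (int(x), int(y)), 2, (0, 0, 255), -1)

cv2.imwrite("prediction.jpg", cv2.cvtColor(vis, cv2.COLOR_RGB2BGR))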

Data Preparation

The pipeline expects labels in the format:

[
  {
    "file_name": "0--Parade/0_Parade_marchingband_1_849.jpg",
    "annotations": [
      {
        "bbox": [
          449,
          330,
          571,
          720
        ],
        "landmarks": [
          [
            488.906,
            373.643
          ],
          [
            542.089,
            376.442
          ],
          [
            515.031,
            412.83
          ],
          [
            485.174,
            425.893
          ],
          [
            538.357,
            431.491
          ]
        ]
      }
    ]
  },

You can convert the default labels of the WIDER FACE dataset to JSON in the proper format with this script.
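
A quick sanity check for a converted label file, based only on the structure shown above (the file path is illustrative):

import json

# Illustrative path; point this at your converted annotations.
with open("train_labels.json") as f:
    labels = json.load(f)

for item in labels:
    assert "file_name" in item
    for annotation in item["annotations"]:
        assert len(annotation["bbox"]) == 4       # [x_min, y_min, x_max, y_max]
        assert len(annotation["landmarks"]) == 5  # five facial landmarks
        assert all(len(point) == 2 for point in annotation["landmarks"])  # (x, y) pairs

print(f"{len(labels)} images, labels look structurally valid")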

Training

Install dependencies

pip install -r requirements.txt
pip install -r requirements_dev.txt

Define config

Example configs can be found at retinaface/configs

Define environment variables

export TRAIN_IMAGE_PATH=<path to train images>
export VAL_IMAGE_PATH=<path to validation images>
export TRAIN_LABEL_PATH=<path to train annotations>
export VAL_LABEL_PATH=<path to validation annotations>

Run training script

python retinaface/train.py -h
usage: train.py [-h] -c CONFIG_PATH

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_PATH, --config_path CONFIG_PATH
                        Path to the config.

Inference

python retinaface/inference.py -h
usage: inference.py [-h] -i INPUT_PATH -c CONFIG_PATH -o OUTPUT_PATH [-v]
                    [-g NUM_GPUS] [-m MAX_SIZE] [-b BATCH_SIZE]
                    [-j NUM_WORKERS]
                    [--confidence_threshold CONFIDENCE_THRESHOLD]
                    [--nms_threshold NMS_THRESHOLD] -w WEIGHT_PATH
                    [--keep_top_k KEEP_TOP_K] [--world_size WORLD_SIZE]
                    [--local_rank LOCAL_RANK] [--fp16]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_PATH, --input_path INPUT_PATH
                        Path with images.
  -c CONFIG_PATH, --config_path CONFIG_PATH
                        Path to config.
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Path to save jsons.
  -v, --visualize       Visualize predictions
  -g NUM_GPUS, --num_gpus NUM_GPUS
                        The number of GPUs to use.
  -m MAX_SIZE, --max_size MAX_SIZE
                        Resize the largest side to this number
  -b BATCH_SIZE, --batch_size BATCH_SIZE
                        batch_size
  -j NUM_WORKERS, --num_workers NUM_WORKERS
                        num_workers
  --confidence_threshold CONFIDENCE_THRESHOLD
                        confidence_threshold
  --nms_threshold NMS_THRESHOLD
                        nms_threshold
  -w WEIGHT_PATH, --weight_path WEIGHT_PATH
                        Path to weights.
  --keep_top_k KEEP_TOP_K
                        keep_top_k
  --world_size WORLD_SIZE
                        number of nodes for distributed training
  --local_rank LOCAL_RANK
                        node rank for distributed training
  --fp16                Use fp16

To run distributed inference across multiple GPUs:

python -m torch.distributed.launch --nproc_per_node=<num_gpus> retinaface/inference.py <parameters>

Web app

https://retinaface.herokuapp.com/

Code for the web app: https://github.com/ternaus/retinaface_demo

Converting to ONNX

Inference can be sped up on CPU by converting the model to ONNX.

Ex: python -m converters.to_onnx -m 1280 -o retinaface1280.onnx
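
Once exported, the model can be run with onnxruntime. A rough sketch; the preprocessing below (plain resize to the export size, float32, NCHW, no normalization) is a placeholder and may not match what the exported graph actually expects:

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("retinaface1280.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Placeholder preprocessing; align the resize and normalization with the converter.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
blob = cv2.resize(image, (1280, 1280)).astype(np.float32)
blob = np.transpose(blob, (2, 0, 1))[None, ...]  # HWC -> NCHW with batch dimension

outputs = session.run(None, {input_name: blob})
print([output.shape for output in outputs])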

Download files


Source Distribution

retinaface_pytorch-0.0.8.tar.gz (26.1 kB, Source)

Built Distribution

retinaface_pytorch-0.0.8-py2.py3-none-any.whl (26.8 kB, Python 2 / Python 3)

File details

Details for the file retinaface_pytorch-0.0.8.tar.gz.

File metadata

  • Download URL: retinaface_pytorch-0.0.8.tar.gz
  • Upload date:
  • Size: 26.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.10.1 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.9.5

File hashes

Hashes for retinaface_pytorch-0.0.8.tar.gz

  • SHA256: 1236a4a87f4d261eb4838219ace39d96a864731c92aa75f01cee452cc38329a2
  • MD5: c51e3bacc25a7e1770e07476f30f1885
  • BLAKE2b-256: 069101fc6e8c5bc5df5635fcec28c49efdbe7272965b2f0c76d459e4569fdcd1


File details

Details for the file retinaface_pytorch-0.0.8-py2.py3-none-any.whl.

File metadata

  • Download URL: retinaface_pytorch-0.0.8-py2.py3-none-any.whl
  • Upload date:
  • Size: 26.8 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.10.1 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.9.5

File hashes

Hashes for retinaface_pytorch-0.0.8-py2.py3-none-any.whl

  • SHA256: bf64dd6562e7dd0342c7408e086c57b675bd15cf759f51def9737b31cd581de7
  • MD5: fc51978c51ad0cce0c6aaefdeedec381
  • BLAKE2b-256: 16a70d59e76c03d7446ea4d47f17e05bdd1153ce3259c84b883d903d5d2f4da8

