openpifpaf
Continuously tested on Linux, macOS and Windows.
CVPR 2019 paper
PifPaf: Composite Fields for Human Pose Estimation
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
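The Laplace loss replaces the usual L2 regression objective with the negative log-likelihood of a Laplace distribution, so the network predicts a spread b alongside each regression target and thereby expresses its own uncertainty. A minimal PyTorch sketch of the idea (not the exact loss implementation in this repository):

import math
import torch

def laplace_loss(pred_mu, pred_log_b, target):
    # Negative log-likelihood of a Laplace distribution:
    #   -log p(t | mu, b) = |t - mu| / b + log(2b)
    # Predicting log(b) keeps the spread strictly positive.
    b = pred_log_b.exp()
    return (torch.abs(target - pred_mu) / b + pred_log_b + math.log(2.0)).mean()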
Demo
Image credit: "Learning to surf" by fotologic which is licensed under CC-BY-2.0.
Created with
python3 -m openpifpaf.predict docs/coco/000000081988.jpg --show --image-output --json-output
which also produces JSON output.
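The JSON file lists one entry per detected pose. A small sketch for reading it, assuming the output file is named after the input image (the exact path and fields can vary between versions):

import json

with open('docs/coco/000000081988.jpg.predictions.json') as f:
    predictions = json.load(f)

for i, pred in enumerate(predictions):
    # 'keypoints' is expected to be a flat list of (x, y, confidence) triples.
    kps = pred['keypoints']
    print('pose {}: score={:.3f}, {} keypoints'.format(i, pred['score'], len(kps) // 3))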
More demos:
- openpifpafwebdemo project (best performance)
- OpenPifPaf running in your browser: https://vita-epfl.github.io/openpifpafwebdemo/ (experimental)
- the openpifpaf.video command (requires OpenCV)
- Google Colab demo
Install
Python 3 is required. Python 2 is not supported.
Do not clone this repository and make sure there is no folder named openpifpaf in your current directory.
pip3 install openpifpaf
For a live demo, we recommend trying the openpifpafwebdemo project. Alternatively, openpifpaf.video (requires OpenCV) provides a live demo as well.
For development of the openpifpaf source code itself, you need to clone this repository and then:
pip3 install numpy cython
pip3 install --editable '.[train,test]'
The last command installs the Python package in the current directory (signified by the dot) with the optional dependencies needed for training and testing. If you modify functional.pyx, run the last command again to recompile the static code.
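Putting these steps together, a typical development setup could look like this (repository URL assumed from the project's GitHub organization):

git clone https://github.com/vita-epfl/openpifpaf.git
cd openpifpaf
pip3 install numpy cython
pip3 install --editable '.[train,test]'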
Interfaces
- python3 -m openpifpaf.predict --help: help screen
- python3 -m openpifpaf.video --help: help screen
- python3 -m openpifpaf.train --help: help screen
- python3 -m openpifpaf.eval_coco --help: help screen
- python3 -m openpifpaf.logs --help: help screen
Tools to work with models:
- python3 -m openpifpaf.migrate --help: help screen
- python3 -m openpifpaf.export_onnx --help: help screen
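After exporting with openpifpaf.export_onnx, the model can be loaded with any ONNX runtime. A sketch using onnxruntime, where the file name is hypothetical and the input size depends on the export settings:

import numpy as np
import onnxruntime

# 'openpifpaf-resnet50.onnx' is a hypothetical name for an exported model file.
session = onnxruntime.InferenceSession('openpifpaf-resnet50.onnx')
inp = session.get_inputs()[0]
# Dynamic dimensions may be reported as strings; substitute 1 for a dummy batch.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print('number of output fields:', len(outputs))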
Pre-trained Models
Performance metrics with version 0.11 on the COCO val set obtained with a GTX1080Ti:
Backbone | AP | APᴹ | APᴸ | t_{total} [ms] | t_{dec} [ms] |
---|---|---|---|---|---|
resnet50 | 67.7 | 65.1 | 72.6 | 69 | 27 |
shufflenetv2k16w | 67.1 | 62.0 | 75.3 | 54 | 25 |
shufflenetv2k30w | 71.1 | 65.9 | 79.1 | 94 | 22 |
Command to reproduce this table: python -m openpifpaf.benchmark --backbones resnet50 shufflenetv2k16w shufflenetv2k30w.
Pretrained model files are shared in the openpifpaf-torchhub repository and linked from the backbone names in the table above. The pretrained models are downloaded automatically when using the command line option --checkpoint with a backbone name as in the table above.
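For example, to run the demo image with the strongest model from the table (reusing the flags shown above):

python3 -m openpifpaf.predict docs/coco/000000081988.jpg --checkpoint shufflenetv2k30w --show --image-output --json-output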
For comparison, the old v0.10 models:
Backbone | AP | APᴹ | APᴸ | t_{total} [ms] | t_{dec} [ms] |
---|---|---|---|---|---|
shufflenetv2x2 v0.10 | 60.4 | 55.5 | 67.8 | 56 | 33 |
resnet50 v0.10 | 64.4 | 61.1 | 69.9 | 76 | 32 |
resnet101 v0.10 | 67.8 | 63.6 | 74.3 | 97 | 28 |
Train
See datasets for setup instructions.
The exact training command that was used for a model is in the first line of the training log file.
ShuffleNet models are trained without ImageNet pretraining:
time CUDA_VISIBLE_DEVICES=0,1 python3 -m openpifpaf.train \
--lr=0.1 \
--momentum=0.9 \
--epochs=150 \
--lr-warm-up-epochs=1 \
--lr-decay 120 \
--lr-decay-epochs=20 \
--lr-decay-factor=0.1 \
--batch-size=32 \
--square-edge=385 \
--lambdas 1 1 0.2 1 1 1 0.2 0.2 1 1 1 0.2 0.2 \
--auto-tune-mtl \
--weight-decay=1e-5 \
--update-batchnorm-runningstatistics \
--ema=0.01 \
--basenet=shufflenetv2k16w \
--headnets cif caf caf25
# for improved performance, take the epoch150 checkpoint and train with
# extended-scale and 10% orientation invariance:
time CUDA_VISIBLE_DEVICES=0,1 python3 -m openpifpaf.train \
--lr=0.05 \
--momentum=0.9 \
--epochs=250 \
--lr-warm-up-epochs=1 \
--lr-decay 220 \
--lr-decay-epochs=30 \
--lr-decay-factor=0.01 \
--batch-size=32 \
--square-edge=385 \
--lambdas 1 1 0.2 1 1 1 0.2 0.2 1 1 1 0.2 0.2 \
--auto-tune-mtl \
--weight-decay=1e-5 \
--update-batchnorm-runningstatistics \
--ema=0.01 \
--checkpoint outputs/shufflenetv2k16w-200504-145520-cif-caf-caf25-d05e5520.pkl --extended-scale --orientation-invariant=0.1
You can refine an existing model with the --checkpoint option.
To visualize logs:
python3 -m openpifpaf.logs \
outputs/resnet50block5-pif-paf-edge401-190424-122009.pkl.log \
outputs/resnet101block5-pif-paf-edge401-190412-151013.pkl.log \
outputs/resnet152block5-pif-paf-edge401-190412-121848.pkl.log
To produce evaluation metrics every five epochs and check the directory for new checkpoints every 5 minutes:
while true; do \
CUDA_VISIBLE_DEVICES=0 find outputs/ -name "shufflenetv2k16w-200504-145520-cif-caf-caf25.pkl.epoch??[0,5]" -exec \
python3 -m openpifpaf.eval_coco --checkpoint {} -n 500 --long-edge=641 --skip-existing \; \
; \
sleep 300; \
done
Person Skeletons
COCO / kinematic tree / dense:
Created with python3 -m openpifpaf.datasets.constants.
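The same module exposes the keypoint names and skeleton definitions as Python constants. A sketch assuming the constant names used in this version (they may differ between releases):

from openpifpaf.datasets.constants import COCO_KEYPOINTS, COCO_PERSON_SKELETON

# Skeleton edges are 1-indexed into the keypoint list.
for j1, j2 in COCO_PERSON_SKELETON:
    print(COCO_KEYPOINTS[j1 - 1], '--', COCO_KEYPOINTS[j2 - 1])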
Video
Requires OpenCV. The --video-output option also requires matplotlib.
python3 -m openpifpaf.video --source myvideotoprocess.mp4 --checkpoint shufflenetv2k16w --video-output --json-output
Replace myvideotoprocess.mp4 with 0 for webcam 0 or other OpenCV-compatible sources.
Documentation Pages
Related Projects
- monoloco: "Monocular 3D Pedestrian Localization and Uncertainty Estimation" which uses OpenPifPaf for poses.
- openpifpafwebdemo: web front-end.
Citation
@InProceedings{kreiss2019pifpaf,
author = {Kreiss, Sven and Bertoni, Lorenzo and Alahi, Alexandre},
title = {{PifPaf: Composite Fields for Human Pose Estimation}},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}