
PifPaf: Composite Fields for Human Pose Estimation


openpifpaf


We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.

@article{kreiss2019pifpaf,
  title={PifPaf: Composite Fields for Human Pose Estimation},
  author={Kreiss, Sven and Bertoni, Lorenzo and Alahi, Alexandre},
  journal={CVPR, arXiv preprint arXiv:1903.06593},
  year={2019}
}

arxiv.org/abs/1903.06593
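
The Laplace loss mentioned above replaces a plain L1/L2 regression loss with the negative log-likelihood of a Laplace distribution, so the network also predicts a spread b that expresses its uncertainty. A minimal PyTorch sketch of that idea (an illustration only, not the exact implementation in this repository; all tensor names are hypothetical):

import torch

def laplace_regression_loss(pred_xy, pred_logb, target_xy):
    # pred_xy:   predicted 2D offsets, shape (N, 2)
    # pred_logb: predicted log-spread log(b), shape (N,)
    # target_xy: ground-truth 2D offsets, shape (N, 2)
    b = torch.exp(pred_logb)                       # spread b > 0
    norm = torch.norm(pred_xy - target_xy, dim=1)  # |error| per keypoint
    # Laplace negative log-likelihood: large errors cost less
    # when the network also predicts a large spread b.
    return (norm / b + torch.log(2.0 * b)).mean()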

Demo

example image with overlaid pose skeleton

Image credit: "Learning to surf" by fotologic which is licensed under CC-BY-2.0.
Created with: python3 -m openpifpaf.predict --show docs/coco/000000081988.jpg

For more demos, see the openpifpafwebdemo project and the openpifpaf.webcam command. There is also a Google Colab demo.


Install

Python 3 is required; Python 2 is not supported. To install, do not clone this repository, and make sure there is no folder named openpifpaf in your current directory.

pip3 install openpifpaf

For a live demo, we recommend trying the openpifpafwebdemo project. Alternatively, openpifpaf.webcam provides a live demo; it requires OpenCV. To use a globally installed OpenCV from inside a virtual environment, create the virtualenv with the --system-site-packages option and verify that you can do import cv2.
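
To check the OpenCV import from inside the activated virtual environment, a one-line test script is enough (a trivial sketch):

# check_cv2.py -- run inside the activated virtualenv
import cv2
print(cv2.__version__)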

For development of the openpifpaf source code itself, you need to clone this repository and then:

pip3 install numpy cython
pip3 install --editable '.[train,test]'

The last command installs the Python package in the current directory (signified by the dot) with the optional dependencies needed for training and testing. The current changelog and the changelogs for prior releases are in HISTORY.md.

Interfaces

  • python3 -m openpifpaf.predict --help
  • python3 -m openpifpaf.webcam --help
  • python3 -m openpifpaf.train --help
  • python3 -m openpifpaf.eval_coco --help
  • python3 -m openpifpaf.logs --help

Example commands to try:

# live demo
MPLBACKEND=macosx python3 -m openpifpaf.webcam --scale 0.1 --source=0

# single image
python3 -m openpifpaf.predict my_image.jpg --show

Pre-trained Models

Put the files from this Google Drive into your outputs folder. Alternative downloads:

models | Cloudflare | IPFS (gateway to https) | DAT (broken?)
ResNet50 (97MB) | CF R50 | IPFS R50 | DAT repo
ResNet101 (169MB) | CF R101 | IPFS R101 | DAT repo
ResNet152 (229MB) | CF R152 | IPFS R152 | DAT repo

Visualize logs:

python3 -m openpifpaf.logs \
  outputs/resnet50block5-pif-paf-edge401-190424-122009.pkl.log \
  outputs/resnet101block5-pif-paf-edge401-190412-151013.pkl.log \
  outputs/resnet152block5-pif-paf-edge401-190412-121848.pkl.log

Train

See datasets for setup instructions. See studies.ipynb for previous studies.

Train a model:

python3 -m openpifpaf.train \
  --lr=1e-3 \
  --momentum=0.95 \
  --epochs=75 \
  --lr-decay 60 70 \
  --batch-size=8 \
  --basenet=resnet50block5 \
  --head-quad=1 \
  --headnets pif paf \
  --square-edge=401 \
  --regression-loss=laplace \
  --lambdas 30 2 2 50 3 3 \
  --freeze-base=1

You can refine an existing model with the --checkpoint option.

To produce evaluations at every epoch, check the directory for new snapshots every 5 minutes:

while true; do \
  CUDA_VISIBLE_DEVICES=0 find outputs/ -name "resnet101block5-pif-paf-l1-190109-113346.pkl.epoch???" -exec \
    python3 -m openpifpaf.eval_coco --checkpoint {} -n 500 --long-edge=641 --skip-existing \; \
  ; \
  sleep 300; \
done
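
The same watch-and-evaluate cycle can also be scripted in Python, which may be easier to adapt than the shell loop (a sketch only; the snapshot pattern reuses the example name from above, and GPU selection via CUDA_VISIBLE_DEVICES is left to the environment):

import glob
import subprocess
import time

SNAPSHOTS = 'outputs/resnet101block5-pif-paf-l1-190109-113346.pkl.epoch???'

while True:
    for checkpoint in sorted(glob.glob(SNAPSHOTS)):
        # --skip-existing makes re-runs on already evaluated snapshots cheap
        subprocess.run([
            'python3', '-m', 'openpifpaf.eval_coco',
            '--checkpoint', checkpoint,
            '-n', '500', '--long-edge=641', '--skip-existing',
        ], check=True)
    time.sleep(300)  # look for new snapshots every 5 minutes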

Person Skeletons

COCO / kinematic tree / dense:

Created with python3 -m openpifpaf.data.

Video

Processing a video frame by frame from video.avi to video.pose.mp4 using ffmpeg:

export VIDEO=video.avi  # change to your video file

mkdir ${VIDEO}.images
ffmpeg -i ${VIDEO} -qscale:v 2 -vf scale=641:-1 -f image2 ${VIDEO}.images/%05d.jpg
python3 -m openpifpaf.predict --checkpoint resnet152 ${VIDEO}.images/*.jpg
ffmpeg -framerate 24 -pattern_type glob -i ${VIDEO}.images/'*.jpg.skeleton.png' -vf scale=640:-1 -c:v libx264 -pix_fmt yuv420p ${VIDEO}.pose.mp4

In this process, ffmpeg scales the video frames to a width of 641px, which can be adjusted.
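
If ffmpeg is not available for the final assembly step, the skeleton frames can also be stitched into a video with OpenCV (a sketch under the assumption that predict wrote *.jpg.skeleton.png files next to the extracted frames, as in the pipeline above):

import glob
import cv2

frames = sorted(glob.glob('video.avi.images/*.jpg.skeleton.png'))
height, width = cv2.imread(frames[0]).shape[:2]

writer = cv2.VideoWriter('video.avi.pose.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'),
                         24.0, (width, height))
for frame_path in frames:
    writer.write(cv2.imread(frame_path))  # frames are assumed to share one size
writer.release()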

Evaluations

See evaluation logs for a long list. This result was produced with python -m openpifpaf.eval_coco --checkpoint outputs/resnet101block5-pif-paf-edge401-190313-100107.pkl --long-edge=641 --loader-workers=8:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.657
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.866
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.719
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.619
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.718
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.712
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.895
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.768
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.660
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.785
Decoder 0: decoder time = 875.4406125545502s
total processing time = 1198.353811264038s

Profiling Decoder

Run predict with the --profile option:

python3 -m openpifpaf.predict --checkpoint resnet152 \
  docs/coco/000000081988.jpg --show --profile --debug

This will write a stats table to the terminal and also produce a decoder.prof file. You can use flameprof (pip install flameprof) to get a flame graph with flameprof decoder.prof > docs/coco/000000081988.jpg.decoder_flame.svg:

flame graph for decoder on a COCO image
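
Since flameprof reads standard cProfile dumps, the same decoder.prof file can also be inspected directly in the terminal with Python's built-in pstats module (assuming the file was produced by the --profile run above):

import pstats

# Print the 15 most expensive calls by cumulative time.
stats = pstats.Stats('decoder.prof')
stats.sort_stats('cumulative').print_stats(15)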

For a larger image, e.g. from NuScenes:

python3 -m openpifpaf.predict --checkpoint resnet152 \
  docs/nuscenes/test.jpg --show --profile --debug

Then create the flame graph with flameprof decoder.prof > docs/nuscenes/test.jpg.decoder_flame.svg to produce:

flame graph for decoder on a NuScenes image

For a crowded image:

python3 -m openpifpaf.predict --checkpoint resnet152 \
  docs/crowd.png --show --profile --debug

Then create the flame graph with flameprof decoder.prof > docs/crowd.png.decoder_flame.svg to produce:

flame graph for decoder on a crowded image
