Introduction

Continuously tested on Linux, macOS, and Windows.

CVPR 2019 paper:

PifPaf: Composite Fields for Human Pose Estimation

We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
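
As background for the Laplace loss mentioned in the abstract: regressing a location together with a scale under a Laplace likelihood makes the scale act as a learned, per-prediction uncertainty. Below is a minimal sketch of the standard form (the generic Laplace negative log-likelihood; the paper's exact parameterization may differ):

```latex
% Laplace density with location \mu and scale b:
p(x \mid \mu, b) = \frac{1}{2b} \exp\!\left( -\frac{|x - \mu|}{b} \right)
% Its negative log-likelihood, used as a regression loss, is
\mathcal{L}(\mu, b; x) = \frac{|x - \mu|}{b} + \log(2b)
% A large predicted b down-weights the residual |x - \mu| but pays a
% \log(2b) penalty, so b is pushed to match the actual error scale.
```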

Demo

[Example image with overlaid pose predictions]

Image credit: “Learning to surf” by fotologic, licensed under CC-BY-2.0.
Created with python3 -m openpifpaf.predict docs/coco/000000081988.jpg --image-output.
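
A variant of that command, assuming the --json-output option of openpifpaf.predict (check python3 -m openpifpaf.predict --help for the options in your installed version):

```sh
# Render the poses onto the image and also write keypoints as JSON.
# --json-output is an assumption here; see --help for your version's flags.
python3 -m openpifpaf.predict docs/coco/000000081988.jpg \
    --image-output --json-output
```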

More demos:

[Animated demo: wave3.gif]

Install

Python 3 is required; Python 2 is not supported. Do not clone this repository. Also make sure there is no folder named openpifpaf in your current directory.

pip3 install openpifpaf

To produce visual outputs, you also need to install matplotlib.
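
It is a standard PyPI package and can be installed alongside openpifpaf:

```sh
# matplotlib is only needed for visual outputs such as --image-output.
pip3 install matplotlib
```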

To modify openpifpaf itself, please follow Modify Code.

For a live demo, we recommend trying the openpifpafwebdemo project. Alternatively, openpifpaf.video (which requires OpenCV) also provides a live demo.
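
A sketch of launching the video demo on a webcam, assuming openpifpaf.video accepts a --source argument for the camera index (verify with python3 -m openpifpaf.video --help):

```sh
# Live pose estimation from the default webcam (device 0).
# --source is an assumption; check --help for your installed version.
python3 -m openpifpaf.video --source 0
```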

About this Guide

This is an auto-generated book. Many sections, like Prediction, are interactive: they are based on Jupyter Notebooks that can be launched in the cloud by clicking the rocket at the top of the page and selecting a provider like Binder. This is also why some pages contain quite a bit of code: it is all the code required to regenerate that particular page.

Syntax: some code blocks start with an exclamation point “!”. This prefix means that the command is executed on the command line (in a shell) rather than as Python. You can run the same command yourself by omitting the exclamation point.
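
For example, these are the same command, first as it appears in a notebook cell and then as typed in a terminal:

```sh
# In a Jupyter notebook cell:
#   !python3 -m openpifpaf.predict --help
# In a terminal, drop the "!":
python3 -m openpifpaf.predict --help
```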

Pre-trained Models

Performance metrics of version 0.11 on the COCO val set, obtained with a GTX 1080Ti:

| Checkpoint       | AP   | APᴹ  | APᴸ  | t_total [ms] | t_dec [ms] | size     |
|------------------|------|------|------|--------------|------------|----------|
| resnet50         | 67.8 | 65.3 | 72.6 | 70           | 28         | 105.0 MB |
| shufflenetv2k16w | 67.3 | 62.2 | 75.3 | 54           | 25         | 43.9 MB  |
| shufflenetv2k30w | 71.1 | 66.0 | 79.0 | 94           | 22         | 122.3 MB |

Command to reproduce this table: python -m openpifpaf.benchmark --checkpoints resnet50 shufflenetv2k16 shufflenetv2k30.

Pretrained model files are shared in the openpifpaf-torchhub repository and are linked from the checkpoint names in the table above. A pretrained model is downloaded automatically when it is selected with the command line option --checkpoint <checkpoint name as in the table above>.
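
For example, selecting one of the checkpoints from the table for prediction (the model file is fetched on first use):

```sh
# shufflenetv2k16w is a checkpoint name from the table above.
python3 -m openpifpaf.predict docs/coco/000000081988.jpg \
    --checkpoint shufflenetv2k16w --image-output
```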

Citation

@InProceedings{kreiss2019pifpaf,
  author = {Kreiss, Sven and Bertoni, Lorenzo and Alahi, Alexandre},
  title = {{PifPaf: Composite Fields for Human Pose Estimation}},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}

Reference: [KBA19]

[KBA19] Sven Kreiss, Lorenzo Bertoni, and Alexandre Alahi. PifPaf: Composite Fields for Human Pose Estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

Commercial License

This software is available for licensing via the EPFL Technology Transfer Office (https://tto.epfl.ch/, info.tto@epfl.ch).