
Detection and classification of head gestures in videos

Project description


Introduction

The Nodding Pigeon library provides a pre-trained model and a simple inference API for detecting head gestures in short videos. Under the hood, it uses Google MediaPipe to collect the landmark features.

Installation

Tested for Python 3.8, 3.9, and 3.10.

The best way to install this library with its dependencies is from PyPI:

python3 -m pip install --upgrade noddingpigeon

Alternatively, to obtain the latest version from this repository:

git clone git@github.com:bhky/nodding-pigeon.git
cd nodding-pigeon
python3 -m pip install .

Usage

An easy way to try the API and the pre-trained model is to make a short video with your head gesture.

Webcam

The code snippet below will perform the following:

  • Search for the pre-trained weights file in $HOME/.noddingpigeon/weights/; if it does not exist, the file will be downloaded from this repository.
  • Start the webcam.
  • Collect the required number of frames (default 60) for the model.
  • Stop the webcam automatically (or press q to stop earlier).
  • Predict your head gesture and print the result to STDOUT.

from noddingpigeon.inference import predict_video

result = predict_video()
print(result)
# Example result:
# {'gesture': 'nodding',
#  'probabilities': {'has_motion': 1.0,
#   'gestures': {'nodding': 0.9576354622840881,
#    'turning': 0.042364541441202164}}}

Video file

Alternatively, you could provide a pre-recorded video file:

from noddingpigeon.inference import predict_video
from noddingpigeon.video import VideoSegment

result = predict_video(
  "your_head_gesture_video.mp4",
  video_segment=VideoSegment.LAST,  # Optionally change these parameters.
  motion_threshold=0.5,
  gesture_threshold=0.9
)

Note that no matter how long your video is, only a pre-defined number of frames (60 for the current model) is used for prediction. The video_segment enum option controls how the frames are obtained from the video, e.g., VideoSegment.LAST means the last 60 frames will be used.

Thresholds can be adjusted as needed; see the explanation in a later section.

Result format

The result is returned as a Python dictionary.

{
  'gesture': 'turning',
  'probabilities': {
    'has_motion': 1.0,
    'gestures': {
      'nodding': 0.009188028052449226,
      'turning': 0.9908120036125183
    }
  }
}
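
Downstream code can read this dictionary directly, for example as follows (a minimal sketch; the empty-probabilities case is explained in the head-gestures section below):

from noddingpigeon.inference import predict_video

result = predict_video("your_head_gesture_video.mp4")

gesture = result["gesture"]
probabilities = result["probabilities"]

if gesture == "undefined" and not probabilities:
  # No landmarks detected, e.g., no face is shown in the video.
  print("No face detected.")
else:
  print(f"Detected gesture: {gesture}")
  # Per-gesture scores; only meaningful when motion is detected.
  for label, score in probabilities.get("gestures", {}).items():
    print(f"  {label}: {score:.3f}")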

Head gestures

The following gesture types are available:

  • nodding - Repeatedly tilt your head upward and downward.
  • turning - Repeatedly turn your head leftward and rightward.
  • stationary - Not tilting or turning your head; translational motion is still treated as stationary.
  • undefined - Unrecognised gesture or no landmarks detected (usually means no face is shown).

To determine the final gesture (a small sketch of this logic is shown after the list):

  • If the has_motion probability is smaller than motion_threshold (default 0.5), the gesture is stationary. The other probabilities are irrelevant.
  • Otherwise, the largest probability in gestures is considered:
    • If it is smaller than gesture_threshold (default 0.9), the gesture is undefined,
    • else, the corresponding gesture label is selected (e.g., nodding).
  • If no landmarks are detected in the video, the gesture is undefined and the probabilities dictionary is empty.
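
The decision rules above can be written roughly as follows (a minimal sketch for illustration, not the library's internal implementation; the probabilities dictionary is assumed to follow the result format shown earlier):

def decide_gesture(probabilities, motion_threshold=0.5, gesture_threshold=0.9):
  # No landmarks detected: the probabilities dictionary is empty.
  if not probabilities:
    return "undefined"
  # Below the motion threshold, the head is considered stationary.
  if probabilities["has_motion"] < motion_threshold:
    return "stationary"
  # Otherwise, pick the gesture label with the largest probability.
  label, score = max(probabilities["gestures"].items(), key=lambda kv: kv[1])
  return label if score >= gesture_threshold else "undefined"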

API

noddingpigeon.inference

predict_video

Detect the head gesture shown in the input video, either from the webcam or from a file.

  • Parameters:
    • video_path (Optional[str], default None): Path to the video file, or None to start the webcam.
    • model (Optional[tf.keras.Model], default None): A TensorFlow-Keras model instance, or None to use the default model.
    • max_num_frames (int, default 60): Maximum number of frames to be processed by the model. Do not change when using the default model.
    • video_segment (VideoSegment enum, default VideoSegment.BEGINNING): See explanation of VideoSegment.
    • end_padding (bool, default True): If True, and the input video does not have enough frames to form the feature tensor for the model, the tensor is padded at the end using the features detected in the last available frame.
    • drop_consecutive_duplicates (bool, default True): If True, features from a frame are not used to form the feature tensor if they are considered identical to those of the previous frame. This is a mechanism to prevent "fake" videos created from static images.
    • postprocessing (bool, default True): If True, the final result will be presented as the Python dictionary described in the usage section, otherwise the raw model output is returned.
    • motion_threshold (float, default 0.5): See the head gestures section.
    • gesture_threshold (float, default 0.9): See the head gestures section.
  • Return:
    • A Python dictionary if postprocessing is True, otherwise List[float] from the model output.
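
For example, to look at the raw model output instead of the postprocessed dictionary (a usage sketch; the file name is only a placeholder and the meaning of the raw values is not documented here):

from noddingpigeon.inference import predict_video

# Skip postprocessing to obtain the raw List[float] from the model.
raw_output = predict_video(
  "your_head_gesture_video.mp4",
  postprocessing=False
)
print(raw_output)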

noddingpigeon.video

VideoSegment

Enum class for video segment options.

  • VideoSegment.BEGINNING: Collect the required frames for the model from the beginning of the video.
  • VideoSegment.LAST: Collect the required frames for the model toward the end of the video.
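
For example, to compare predictions from the two segment options on the same recording (a usage sketch; the file name is only a placeholder):

from noddingpigeon.inference import predict_video
from noddingpigeon.video import VideoSegment

# Predict using frames from the beginning and from the end of the video.
for segment in (VideoSegment.BEGINNING, VideoSegment.LAST):
  result = predict_video("long_recording.mp4", video_segment=segment)
  print(segment, result["gesture"])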

noddingpigeon.model

make_model

Create an instance of the model used in this library, optionally with pre-trained weights loaded.

  • Parameters:
    • weights_path (Optional[str], default $HOME/.noddingpigeon/weights/*.h5): Path to the weights file in HDF5 format to be loaded by the model. The weights file will be downloaded if it does not exist. If None, no weights will be downloaded or loaded into the model. Users can provide a path if the default is not preferred. The environment variable NODDING_PIGEON_HOME can also be used to indicate where the .noddingpigeon/ directory should be located.
  • Return:
    • tf.keras.Model object.
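
For example, a model loaded from a custom weights location can be reused for several predictions (a sketch; the weights path below is only a placeholder):

from noddingpigeon.inference import predict_video
from noddingpigeon.model import make_model

# Load the pre-trained weights from a non-default location.
model = make_model(weights_path="/path/to/noddingpigeon_weights.h5")

# Reuse the same model instance for multiple calls.
result = predict_video("your_head_gesture_video.mp4", model=model)
print(result["gesture"])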

Download files

Download the file for your platform.

Source Distribution

noddingpigeon-0.5.0.tar.gz (10.7 kB)


Built Distribution

noddingpigeon-0.5.0-py3-none-any.whl (10.0 kB)


File details

Details for the file noddingpigeon-0.5.0.tar.gz.

File metadata

  • Download URL: noddingpigeon-0.5.0.tar.gz
  • Size: 10.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.12

File hashes

Hashes for noddingpigeon-0.5.0.tar.gz

  • SHA256: 739ce5b3fdef0c8f2afa8da669cde29ffeb891c961ccf103370e858701e7c538
  • MD5: b53a2f43290a0307ca6dd88b7a787d69
  • BLAKE2b-256: fad4fb41544b61ff31b0f46630412abb0272fd04fbffb69fe20a5795e02edf1c


File details

Details for the file noddingpigeon-0.5.0-py3-none-any.whl.

File hashes

Hashes for noddingpigeon-0.5.0-py3-none-any.whl

  • SHA256: c0eeb63c16ea38bfdec0da335cead07701ddc1648c6eea57790b554d55f3b6b2
  • MD5: 46d1902308f2ae998999e6164683b8f2
  • BLAKE2b-256: f02c4d9f98a7ddb1050bafa47f9b0a27f3de944b51b24d7e743233e57051bdef

