
An example of deep object detection and tracking with a Raspberry Pi, PiCamera, and Pimoroni Pantilt Hat


Raspberry Pi Deep PanTilt


READ THIS FIRST!

A detailed walk-through is available in Real-time Object Tracking with TensorFlow, Raspberry Pi, and Pan-tilt HAT.

Build List

An example of deep object detection and tracking with a Raspberry Pi

Basic Setup

Before you get started, you should have an up-to-date installation of Raspbian 10 (Buster) running on your Raspberry Pi. You'll also need to configure SSH access into your Pi.

Installation

  1. Install system dependencies:

sudo apt-get update && sudo apt-get install -y \
    cmake python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus

  2. Install TensorFlow 2.0 (community-built wheel):

pip install https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true

  3. Install the rpi-deep-pantilt package:

pip install rpi-deep-pantilt
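The community wheel above is tagged cp37-cp37m-linux_armv7l, so it installs only on Python 3.7 running on 32-bit ARM (which is what Raspbian 10 ships). A minimal sketch of that compatibility check — the function name and simplified tag logic are illustrative, not part of this package:

```python
import platform
import sys

def wheel_compatible(py_version=None, machine=None):
    """Rough check that the cp37-cp37m-linux_armv7l wheel will install here."""
    py_version = py_version or sys.version_info
    machine = machine or platform.machine()
    return (py_version[0], py_version[1]) == (3, 7) and machine.startswith("arm")

# Raspbian Buster ships Python 3.7 on armv7l hardware
print(wheel_compatible(py_version=(3, 7), machine="armv7l"))   # True
print(wheel_compatible(py_version=(3, 8), machine="x86_64"))   # False
```

If this prints False on your Pi, pick the wheel from the Tensorflow-bin repository that matches your Python version instead.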

Example Usage

Object Detection

The following will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it.

Supports Edge TPU acceleration by passing the --edge-tpu option.

rpi-deep-pantilt detect

rpi-deep-pantilt detect --help

Usage: rpi-deep-pantilt detect [OPTIONS]

Options:
  --loglevel TEXT  Run object detection without pan-tilt controls. Pass
                   --loglevel=DEBUG to inspect FPS.
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --help           Show this message and exit.
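Detection models in this family emit bounding boxes as normalized [ymin, xmin, ymax, xmax] coordinates, which the overlay must scale to the preview resolution before drawing. A hedged sketch of that conversion — the function name is illustrative, not this package's API:

```python
def box_to_pixels(box, frame_width, frame_height):
    """Convert a normalized (ymin, xmin, ymax, xmax) box to pixel coordinates."""
    ymin, xmin, ymax, xmax = box
    return (int(xmin * frame_width), int(ymin * frame_height),
            int(xmax * frame_width), int(ymax * frame_height))

# A detection covering the centre of a 640x480 PiCamera preview
print(box_to_pixels((0.25, 0.25, 0.75, 0.75), 640, 480))  # (160, 120, 480, 360)
```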

Object Tracking

The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement with the pan-tilt HAT.

By default, this will track any person in the frame. You can track other objects by passing --label <label>. For a list of valid labels, run rpi-deep-pantilt list-labels.

rpi-deep-pantilt track

Supports Edge TPU acceleration by passing the --edge-tpu option.

rpi-deep-pantilt track --help 
Usage: cli.py track [OPTIONS]

Options:
  --label TEXT     The class label to track, e.g `orange`. Run `rpi-deep-
                   pantilt list-labels` to inspect all valid values
                   [required]
  --loglevel TEXT
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --help           Show this message and exit.
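Tracking works by measuring how far the detected object's centroid sits from the centre of the frame and driving the pan/tilt servos to shrink that offset. A minimal sketch of the error computation, assuming normalized (ymin, xmin, ymax, xmax) detections; the function name is illustrative:

```python
def track_error(box, frame_width, frame_height):
    """Offset of a detection's centroid from the frame centre, in pixels.

    The pan/tilt control loop drives both offsets toward zero.
    """
    ymin, xmin, ymax, xmax = box
    cx = (xmin + xmax) / 2 * frame_width
    cy = (ymin + ymax) / 2 * frame_height
    return cx - frame_width / 2, cy - frame_height / 2

# A person detected in the upper-left quadrant of a 640x480 frame
print(track_error((0.1, 0.1, 0.3, 0.3), 640, 480))  # roughly (-192.0, -144.0)
```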

Valid labels for Object Detection/Tracking

rpi-deep-pantilt list-labels

The following labels are valid tracking targets.

['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
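A --label value outside this list cannot be tracked, so it is worth validating up front. A trivial sketch of such a check (the label list is abbreviated here; the function is illustrative, not this package's API):

```python
# Subset of the COCO label map printed by `rpi-deep-pantilt list-labels`
COCO_LABELS = ['person', 'bicycle', 'car', 'cat', 'dog', 'orange', 'teddy bear']

def validate_label(label):
    """Raise if the requested tracking target is not a known class label."""
    if label not in COCO_LABELS:
        raise ValueError(
            f"{label!r} is not a valid label; run `rpi-deep-pantilt list-labels`")
    return label

print(validate_label('orange'))  # orange
```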

Face Detection (NEW in v1.1.x)

The following command will detect all faces. Supports Edge TPU acceleration by passing the --edge-tpu option.

rpi-deep-pantilt face-detect

rpi-deep-pantilt face-detect --help
Usage: cli.py face-detect [OPTIONS]

Options:
  --loglevel TEXT  Run object detection without pan-tilt controls. Pass
                   --loglevel=DEBUG to inspect FPS.
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --help           Show this message and exit.

Face Tracking (NEW in v1.1.x)

The following command will detect a face and track its movement with the pan-tilt HAT. Supports Edge TPU acceleration by passing the --edge-tpu option.

rpi-deep-pantilt face-track

Model Summary

The following section describes the models used in this project.

Object Detection & Tracking

FLOAT32 model (ssd_mobilenet_v3_small_coco_2019_08_14)

rpi-deep-pantilt detect and rpi-deep-pantilt track perform inferences using this model. Bounding box and class predictions render at roughly 6 FPS on a Raspberry Pi 4.

The model is derived from ssd_mobilenet_v3_small_coco_2019_08_14 in tensorflow/models. I extended the model with an NMS post-processing layer, then converted it to a TensorFlow Lite FlatBuffer compatible with TensorFlow 2.x.

I scripted the conversion steps in tools/tflite-postprocess-ops-float.sh.
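The NMS (non-maximum suppression) post-processing layer mentioned above keeps the highest-scoring detection and discards boxes that overlap it heavily. A minimal pure-Python sketch of the idea — greedy NMS over (box, score) pairs, not the actual TFLite op:

```python
def iou(a, b):
    """Intersection-over-union of two (ymin, xmin, ymax, xmax) boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: visit boxes best-score-first, drop any overlapping a kept box."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections plus one distinct box
boxes = [(0, 0, 1, 1), (0.05, 0.05, 1, 1), (2, 2, 3, 3)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```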

Quantized UINT8 model (ssdlite_mobilenet_edgetpu_coco_quant)

If you specify the --edge-tpu option, rpi-deep-pantilt detect and rpi-deep-pantilt track perform inferences using this model. Bounding box and class predictions render at roughly 24+ FPS (real-time) on a Raspberry Pi 4.

This model REQUIRES a Coral Edge TPU USB Accelerator to run.

This model is derived from ssdlite_mobilenet_edgetpu_coco_quant in tensorflow/models. I reversed the frozen .tflite model into a protobuf graph to add an NMS post-processing layer, quantized the model into a .tflite FlatBuffer, then compiled it with Coral's edgetpu_compiler tool.

I scripted the conversion steps in tools/tflite-postprocess-ops-128-uint8-quant.sh and tools/tflite-edgetpu.sh.
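UINT8 quantization maps each real-valued activation onto 0–255 via a scale and zero-point, which is what makes the model Edge-TPU-compatible. A sketch of that affine mapping — the scale and zero-point values below are illustrative, not the model's actual parameters:

```python
def quantize(x, scale, zero_point):
    """Affine quantization: real value -> uint8, clamped to [0, 255]."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    """Inverse mapping: uint8 -> approximate real value."""
    return scale * (q - zero_point)

# e.g. inputs normalized to [-1, 1] with scale=1/128, zero_point=128
print(quantize(0.0, 1 / 128, 128))    # 128
print(quantize(-1.0, 1 / 128, 128))   # 0
print(dequantize(255, 1 / 128, 128))  # 0.9921875
```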

Face Detection & Tracking

I was able to use the same model architecture for both FLOAT32 and UINT8 input: facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2.

This model is derived from facessd_mobilenet_v2_quantized_320x320_open_image_v4 in tensorflow/models.

Credits

The MobileNetV3-SSD model in this package was derived from TensorFlow's model zoo, with post-processing ops added.

The PID control scheme in this package was inspired by Adrian Rosebrock's tutorial, Pan/tilt face tracking with a Raspberry Pi and OpenCV.
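A PID controller turns the pixel offset between the target and the frame centre into servo updates, as in Rosebrock's tutorial. A minimal textbook sketch — the class and the gain values are illustrative, not this package's tuned implementation:

```python
class PID:
    """Textbook PID controller: output = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a pan servo toward an object 40 px left of frame centre
pan = PID(kp=0.05, ki=0.0, kd=0.01)
print(pan.update(-40.0))  # negative output pans left; ~-2.4 on the first step
```

In a tracking loop, `update` is called once per frame with the current centroid offset, and its output is added to the servo angle.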

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

1.0.0 (2019-12-01)

  • First release on PyPI.
