
QuickTake

Off-the-shelf computer vision ML models: YOLOv5 object detection, gender determination, and age determination.

The goal of this repository is to provide easy-to-use, abstracted APIs to powerful computer vision models.

Models

Three models are currently available:

  • Object detection
  • Gender determination
  • Age determination

Model Engine

Object detection is built on YOLOv5; the gender and age determination models are trained on faces (see Expected Use below).

Getting Started

Install the package with pip:

pip install quicktake

Usage

Build an instance of the class:

from quicktake import QuickTake

qt = QuickTake()

Image Input

Each model is designed to handle three types of input (a short sketch follows the list):

  • raw pixels (torch.Tensor): raw pixels of a single image. Used when streaming video input.
  • image path (str): path to an image. Used when processing a single image.
  • image directory (str): path to a directory of images. Used when processing a directory of images.
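
A hedged sketch of the three input styles, assuming the gender model accepts each of them directly (the tensor shape and the directory path below are placeholders):

import torch

from quicktake import QuickTake

qt = QuickTake()

# 1. raw pixels: a single image as a torch.Tensor (e.g. a frame from a video stream)
frame = torch.zeros(3, 224, 224)  # placeholder tensor standing in for a real frame
qt.gender(frame)

# 2. image path: a single image on disk
qt.gender('./data/random/dave.png')

# 3. image directory: every image in a folder
qt.gender('./data/random/')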

Expected Use

The gender and age determination models are trained on faces. They still work on larger images; however, they will not make multiple predictions when several faces appear in a single image.

The API is currently designed to chain models:

  1. yolo is used to identify objects.
  2. IF a person is detected, the gender and age models are used to make predictions.

This is neatly bundled in the QuickTake.yolo_loop() method.
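
A self-contained sketch of this chained flow, assuming qt.yolo_loop() yields one tuple per detection (the unpacking mirrors the example further down this page; empty age/gender values for non-person detections are an assumption):

from quicktake import QuickTake

qt = QuickTake()
frame = qt.read_image('./data/random/dave.png')

# collect one record per YOLOv5 detection; age_ and gender_ come from the chained models
detections = []
for _label, x0, y0, x1, y1, colour, thickness, results, res_df, age_, gender_ in qt.yolo_loop(frame):
    detections.append({
        'label': _label,
        'box': (x0, y0, x1, y1),
        'age': age_,
        'gender': gender_,
    })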

Getting Started

Launch a webcam stream:

QL = QuickTake()
QL.launchStream()

Note: each model returns its results (results_) as well as its runtime (time_).

Run on a single frame:

from IPython.display import display
from PIL import Image
import cv2

# example image
img = './data/random/dave.png'

# to avoid distractions
import warnings
warnings.filterwarnings('ignore')

# init module
from quicktake import QuickTake
qt = QuickTake()

# extract frame from raw image path
frame = qt.read_image(img)
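
Each call returns its results alongside its runtime. A minimal sketch of running the face models directly on this frame (the two-element return is an assumption based on the results_ / time_ note above):

# run the gender and age models directly on the extracted frame
# (assumption: each call returns its results_ and its runtime time_)
age_results_, age_time_ = qt.age(frame)
gender_results_, gender_time_ = qt.gender(frame)

print('age:', age_results_, 'runtime:', age_time_)
print('gender:', gender_results_, 'runtime:', gender_time_)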

We can run qt.age(<frame>) or qt.gender(<frame>) directly on the frame, as sketched above. Alternatively, we can cycle through the objects detected by YOLOv5 and, if a person is detected, call qt.age() and qt.gender():

# cycle through the YOLOv5 detections; age_ and gender_ are populated when a person is detected
for _label, x0,y0,x1,y1, colour, thickness, results, res_df, age_, gender_ in qt.yolo_loop(frame):
    # build a display label and draw the bounding box + label onto the frame
    _label = QuickTake.generate_yolo_label(_label)
    QuickTake.add_block_to_image(frame, _label, x0,y0,x1,y1, colour=colour, thickness=thickness)

The result is an image annotated with bounding boxes, labels, YOLOv5 confidence scores, and, if a person is detected, the estimated age and gender.
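
To view the annotated frame inline in a notebook, the imports from the example above can be reused (a sketch, assuming frame is a BGR array as produced by OpenCV-style readers; drop the colour conversion if it is already RGB):

# render the annotated frame inline
display(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))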

Example output: a person is detected, so age and gender are estimated.

The staged output is also useful:

Example of the YOLOv5 detection boundaries.

For a more comprehensive example, refer to the examples in the repository.

Future

I have many more models, deployment methods, and applications in the pipeline.

If you wish to contribute, please email me at zachcolinwolpe@gmail.com.
