Off-the-shelf computer vision ML models: YOLOv5 object detection, gender and age determination.
QuickTake
The goal of this repository is to provide easy-to-use, abstracted APIs to powerful computer vision models.
Models
3 models are currently available:
Object detection
Gender determination
Age determination
Model Engine
The models:

- `YOLOv5`: Object detection. This forms the basis of the other models. Pretrained on COCO. Documentation here.
- `Gender`: ResNet18 is used as the model's backbone. Transfer learning is applied to model gender. The additional gender training was done on the gender classification dataset, using code extracted from here.
- `Age`: The age model is an implementation of the SSR-Net paper, "SSR-Net: A Compact Soft Stagewise Regression Network for Age Estimation". The PyTorch model was largely derived from oukohou.
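The soft stagewise regression idea at the heart of SSR-Net can be illustrated with a toy calculation. This is an illustrative sketch only, not the package's code: each stage produces a probability distribution over age bins, and the estimate is a probability-weighted sum of bin centres, with later stages contributing finer refinements.

```python
def soft_stage_age(stage_probs, stage_centres):
    """Toy sketch of soft stagewise regression: each stage contributes a
    probability-weighted sum of its bin centres, and summing the stages
    gives the age estimate. The real SSR-Net additionally learns dynamic
    bin shifts and scales per stage."""
    age = 0.0
    for probs, centres in zip(stage_probs, stage_centres):
        age += sum(p * c for p, c in zip(probs, centres))
    return age

# stage 1: coarse bins centred at 15/45/75; stage 2: a fine offset around 0
coarse_probs, coarse_centres = [0.1, 0.7, 0.2], [15.0, 45.0, 75.0]
fine_probs, fine_centres = [0.5, 0.5], [-5.0, 5.0]
estimate = soft_stage_age([coarse_probs, fine_probs],
                          [coarse_centres, fine_centres])
# coarse stage alone gives 0.1*15 + 0.7*45 + 0.2*75 = 48.0; the fine stage
# here is symmetric and adds 0
```

The bin centres and probabilities above are made-up numbers chosen only to make the weighted sum easy to follow.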
Getting Started
Install the package with pip:
```bash
pip install quicktake
```
Usage
Build an instance of the class:
```python
from quicktake import QuickTake
```
Image Input
Each model is designed to handle 3 types of input:

- `raw pixels (torch.Tensor)`: raw pixels of a single image. Used when streaming video input.
- `image path (str)`: path to an image. Used when processing a single image.
- `image directory (str)`: path to a directory of images. Used when processing a directory of images.
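A helper that dispatches on these three input types might look like the following sketch. `resolve_inputs` is a hypothetical illustration, not part of the package's API:

```python
import os

def resolve_inputs(x):
    """Hypothetical helper: normalise the three accepted input types into a
    list of items to process. Tensor-like raw pixels pass through, a
    directory expands to its image files, and a single path becomes a
    one-element list."""
    if hasattr(x, 'shape'):              # torch.Tensor-like raw pixels
        return [x]
    if os.path.isdir(x):                 # directory of images
        return sorted(
            os.path.join(x, f)
            for f in os.listdir(x)
            if f.lower().endswith(('.png', '.jpg', '.jpeg'))
        )
    return [x]                           # single image path
```

Dispatching on shape rather than importing torch keeps the sketch dependency-free; the real package presumably checks types more strictly.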
Expected Use
`Gender` and `age` determination models are trained on faces. They work fine on a larger image; however, they will fail to make multiple predictions when a single image contains multiple faces.

The API is currently designed to chain models:

- `yolo` is used to identify objects.
- If a person is detected, the `gender` and `age` models are used to make predictions.

This is neatly bundled in the `QuickTake.yolo_loop()` method.
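The chaining logic can be sketched with stub detectors standing in for the real models. This is illustrative only; the actual `QuickTake.yolo_loop()` yields a much richer tuple per detection:

```python
def detect_objects(frame):
    """Stub for the yolo step: returns (label, bounding box) pairs."""
    return [('person', (10, 20, 110, 220)), ('dog', (150, 40, 260, 200))]

def predict_age(frame, box):
    return 30        # stub prediction

def predict_gender(frame, box):
    return 'male'    # stub prediction

def chained_loop(frame):
    """Run the age/gender models only on detections labelled 'person'."""
    for label, box in detect_objects(frame):
        age_ = gender_ = None
        if label == 'person':
            age_ = predict_age(frame, box)
            gender_ = predict_gender(frame, box)
        yield label, box, age_, gender_

detections = list(chained_loop(frame=None))
```

The key point is the guard on the label: the downstream face models are only invoked for person detections, so non-person objects carry `None` for age and gender.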
Getting Started
Launch a webcam stream:
```python
from quicktake import QuickTake

QL = QuickTake()
QL.launchStream()
```
Note: each model returns the results `results_` as well as the runtime `time_`.
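This `(results_, time_)` return convention can be mimicked with a simple wrapper. The sketch below is illustrative, not the package's implementation:

```python
import time

def with_runtime(fn):
    """Return the wrapped function's results_ together with its runtime
    time_ in seconds, mirroring the (results_, time_) convention."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        results_ = fn(*args, **kwargs)
        time_ = time.perf_counter() - start
        return results_, time_
    return wrapper

results_, time_ = with_runtime(sum)([1, 2, 3])
```

Returning the runtime alongside the prediction makes it easy to profile each model in a streaming loop without extra instrumentation.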
Run on a single frame:
```python
from IPython.display import display
from PIL import Image
import cv2

# example image
img = './data/random/dave.png'

# to avoid distractions
import warnings
warnings.filterwarnings('ignore')

# init module
from quicktake import QuickTake
qt = QuickTake()

# extract frame from raw image path
frame = qt.read_image(img)
```
We can now fit `qt.age(<frame>)` or `qt.gender(<frame>)` on the frame. Alternatively, we can cycle through the objects detected by `yolo` and, if a person is detected, fit `qt.age()` and `qt.gender()`:
```python
# generate points
for _label, x0, y0, x1, y1, colour, thickness, results, res_df, age_, gender_ in qt.yolo_loop(frame):
    _label = QuickTake.generate_yolo_label(_label)
    QuickTake.add_block_to_image(frame, _label, x0, y0, x1, y1, colour=colour, thickness=thickness)
```
The result is an image with the bounding boxes and labels, confidence (in the yolo prediction), and age and gender if a person is detected.

The staged output is also useful.

For a more comprehensive example, see the example directory.
Future
I have many more models, deployment methods, and applications in the pipeline.
If you wish to contribute, please email me at zachcolinwolpe@gmail.com.