A human perception library
Terran is a human perception library that provides computer vision techniques and algorithms in order to facilitate building systems that interact with people.
The philosophy behind the library is to focus on tasks and problems instead of models and algorithms. That is, it aims to always have the best possible algorithm for the job given its constraints, and to take the burden of finding which model performs best off you.
The library strives to be friendly and easy to use. It's written fully in Python and PyTorch, avoiding C++ extensions as much as possible in order to avoid installation difficulties. Just `pip install terran` and you're good to go!
We (currently) provide models for: face detection, face recognition and pose estimation. We also offer several utility functions for efficiently reading and visualizing results, which should simplify work a bit.
Example of Terran's face detection and pose estimation capabilities.
Features
- Efficient I/O utilities to read and write videos through ffmpeg. Frames are pre-fetched in a background thread, allowing you to maximize GPU usage when processing videos.
- Utilities to open remote images, recursively find images, and (prettily) visualize results. We also allow reading from video streams and even videos from video platforms supported by youtube-dl.
- Checkpoint management tool, so you don't have to manually download pre-trained model files.
- Face detection provided through the RetinaFace model.
- Face recognition provided through the ArcFace model.
- Pose estimation provided through the OpenPose model (2017 version).
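The frame pre-fetching idea behind the I/O utilities can be sketched independently of Terran's internals: a background thread decodes ahead into a bounded queue while the consumer (e.g. a GPU model) works on the current frame. All names below are illustrative, not Terran's API:

```python
import queue
import threading


def prefetch(frames, buffer_size=8):
    """Iterate over `frames`, reading ahead in a background thread.

    While the consumer processes one frame, the producer thread is
    already fetching the next ones into a bounded buffer.
    """
    buffer = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # Signals the end of the stream.

    def producer():
        for frame in frames:
            buffer.put(frame)  # Blocks when the buffer is full.
        buffer.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()

    while True:
        item = buffer.get()
        if item is sentinel:
            break
        yield item


# Frames come out in the same order as the source iterable.
frames = list(prefetch(range(10)))
print(frames)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The bounded queue is what keeps memory usage constant: the producer blocks once it is `buffer_size` frames ahead of the consumer.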
Getting started
Be sure to read the full documentation here.
Installation
Terran requires Python 3.6 or above, and PyTorch 1.3 or above. It can be used with or without a GPU, though the currently available algorithms require a GPU to run in a reasonable time.
To install, run:

```shell
pip install terran
```
Or, if you want better-looking visualizations and have Cairo installed, you can go with:

```shell
pip install terran[cairo]
```
If you require a particular PyTorch version (e.g. you're using a specific CUDA version), be sure to install it beforehand.
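For instance, pinning a PyTorch build before installing Terran might look like this (the version numbers are only illustrative; pick the wheel matching your CUDA setup from PyTorch's wheel index):

```shell
# Install a specific PyTorch build first (example versions only).
pip install torch==1.4.0 torchvision==0.5.0 -f https://download.pytorch.org/whl/torch_stable.html

# Then install Terran; pip will keep the already-installed torch.
pip install terran
```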
For more information, see the Installation section in the documentation.
Usage
See the Getting started section in the documentation, and the Examples section for more in-depth examples.
You can use the functions under `terran.io.*` for easy reading of media files, and the appropriate algorithm function under the top-level module. If you don't need any customization, just issue the following in an interactive console:
```python
>>> from terran.io import open_image
>>> from terran.vis import display_image, vis_faces
>>> from terran.face import face_detection
>>>
>>> image = open_image('examples/readme/many-faces-raw.jpg')
>>> detections = face_detection(image)
>>> display_image(vis_faces(image, detections))
```
On first use, you should be prompted to download the model files. You can also do it manually, by running `terran checkpoint list` and then `terran checkpoint download <checkpoint-id>` in a terminal.
Or maybe:
```python
>>> from terran.vis import vis_poses
>>> from terran.pose import pose_estimation
>>>
>>> image = open_image('examples/readme/many-poses-raw.jpg')
>>> display_image(vis_poses(image, pose_estimation(image)))
```
Examples
Finding a person in a group of images
You can use Terran's I/O utilities to quickly find a person within all the images present in a directory, offering Google Photos-like functionality. The code is at `examples/match.py`.
```shell
python examples/match.py reference.png images/
```
Here `reference.png` is the path to the image of the reference person, which should contain only one person, while `images/` is the directory containing the images to search in.
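At the core of this kind of matching is the comparison of face embeddings (such as those ArcFace produces) by cosine similarity. A minimal, self-contained sketch of that step with NumPy follows; the embedding extraction itself is left to the library, and the function names and threshold value here are assumptions for illustration, not Terran's API:

```python
import numpy as np


def cosine_similarity(reference, candidates):
    """Cosine similarity between one vector and a batch of vectors."""
    reference = reference / np.linalg.norm(reference)
    candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return candidates @ reference


def find_matches(reference, candidates, threshold=0.5):
    """Indices of candidate embeddings close enough to the reference."""
    similarities = cosine_similarity(reference, np.asarray(candidates, dtype=float))
    return [i for i, s in enumerate(similarities) if s >= threshold]


# Toy 4-D "embeddings": the first and third candidates point roughly
# the same way as the reference, the second is orthogonal to it.
reference = np.array([1.0, 0.0, 0.0, 0.0])
candidates = [
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.2, 0.1, 0.0],
]
print(find_matches(reference, candidates))  # [0, 2]
```

Real face embeddings are higher-dimensional (ArcFace uses 512), but the matching logic is the same: normalize, take dot products, and keep candidates above a similarity threshold.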
Face detection over a video
Terran also provides functions to perform I/O over videos, in order to read them efficiently and in a background thread, as well as to write them. See examples/video.py to see a short example of running face detection over a video.
```shell
python examples/video.py video.mp4 out.mp4
```
Here `video.mp4` is the video to run the face detection over, and `out.mp4` the output location. Note that `video.mp4` could be a YouTube video or the path to your webcam. For instance:
```shell
python examples/video.py 'https://www.youtube.com/watch?v=oHg5SJYRHA0' out.mp4 --duration=30
```
You could also mix this example and the one above to search for a person within a video. We leave it as an exercise for the reader.
Customizing model settings
You might want to customize any of the detection functions (such as `face_detection`) in order to change e.g. the size images are resized to (to make detection run faster). You can do it like so:
```python
from terran.face import Detection
from terran.io import open_image

face_detection = Detection(short_side=208)

image = open_image(...)
detections = face_detection(image)
```
References
Terran doesn't provide training code for the models. As such, pre-trained weights are taken, adapted and re-packaged from the official model repositories of the respective models.
- For OpenPose, the official PyTorch version weights are used.
- For ArcFace and RetinaFace, InsightFace's weights are used (in MXNet, translated to PyTorch by us).
License
Terran is released under the BSD 3-Clause license.
File details
Details for the file `terran-0.1.2.tar.gz`.
File metadata
- Download URL: terran-0.1.2.tar.gz
- Upload date:
- Size: 51.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.1.1 requests-toolbelt/0.9.1 tqdm/4.48.2 CPython/3.8.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | `39509e19507e0783689e578514fb429cd4f2b395c29bee1ea05844202d87ecf2`
MD5 | `a67d63b78d666634518d2ec34e80e609`
BLAKE2b-256 | `562222f2cfe4e9d177b242507d51601865a78e25e5eb7eed3a339d74a165e1d7`