
Project description

Copyright (c) 2019, Justus Schock
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# shapenet

[![Build Status](https://travis-ci.com/justusschock/shapenet.svg?token=GsT2RFaJJMxpqLAN3xuh&branch=master)](https://travis-ci.com/justusschock/shapenet) [![codecov](https://codecov.io/gh/justusschock/shapenet/branch/master/graph/badge.svg?token=gpwVgQjw18)](https://codecov.io/gh/justusschock/shapenet) ![LICENSE](https://img.shields.io/github/license/justusschock/shapedata.svg)

This repository contains the [PyTorch](https://pytorch.org) implementation of [our Paper "SUPER-REALTIME FACIAL LANDMARK DETECTION AND SHAPE FITTING BY DEEP REGRESSION OF SHAPE MODEL PARAMETERS"](#our-paper).

## Contents
* [Installation](#installation)
* [Usage](#usage)
    * [By Scripts](#by-scripts)
    * [From Python](#from-python)
    * [Pretrained Weights](#pretrained-weights)
* [Our Paper](#our-paper)

## Installation

### From Binary:
`pip install shapenet`

### From Source:
`pip install git+https://github.com/justusschock/shapenet`

## Usage
### By Scripts
For simplicity, we provide several scripts to preprocess the data, train networks, predict with trained networks, and export networks via [`torch.jit`](https://pytorch.org/docs/stable/jit.html).
To get a list of the necessary and accepted arguments, run the script with the `-h` flag.

#### Data Preprocessing
* `prepare_all_data`: prepares multiple datasets (you can select the datasets to preprocess via arguments passed to this script)
* `prepare_cat_dset`: Downloads and preprocesses the [Cat-Dataset](https://www.kaggle.com/crawford/cat-dataset)
* `prepare_helen_dset`: Preprocesses an already downloaded ZIP file of the [HELEN Dataset](http://www.ifp.illinois.edu/~vuongle2/helen/) (downloading from [here](https://ibug.doc.ic.ac.uk/download/annotations/helen.zip) is recommended, since that archive already contains the landmarks)
* `prepare_lfpw_dset`: Preprocesses an already downloaded ZIP file of the [LFPW Dataset](https://neerajkumar.org/databases/lfpw/) (downloading from [here](https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip) is recommended, since that archive already contains the landmarks)

#### Training
* `train_shapenet`: Trains the shapenet with the configuration specified in a separate configuration file (example configurations for all available datasets are provided in the [example_configs](example_configs) folder)

#### Prediction
* `predict_from_net`: Predicts landmarks for all images in a given directory (assumes existing ground truths for cropping; otherwise, cropping to the ground truth could be replaced by a detector)

#### JIT-Export
* `export_to_jit`: Traces the given model and saves it as a jit ScriptModule, which can be loaded from both Python and C++ (see the sketch below)
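
The following is a minimal sketch of what such a jit export amounts to in plain PyTorch (tracing a model, then saving and reloading the resulting ScriptModule); the model and file names are placeholders, not the actual internals of `export_to_jit`:

```python
import torch
import torchvision

# Any traceable model works here; a torchvision ResNet-18 serves as a stand-in.
model = torchvision.models.resnet18().eval()
example_input = torch.rand(1, 3, 224, 224)

traced = torch.jit.trace(model, example_input)  # record the forward pass
traced.save("model.ptj")                        # ScriptModule, loadable from Python and C++

restored = torch.jit.load("model.ptj")          # no Python class definition required
print(restored(example_input).shape)
```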

### From Python
This implementation uses the [`delira`-Framework](https://github.com/justusschock/delira) for training and validation handling. It supports mixed precision training and inference via [NVIDIA/APEX](https://github.com/NVIDIA/apex) (must be installed separately). The data-handling is outsourced to [shapedata](https://github.com/justusschock/shapedata).

The following gives a short overview of the subpackages and classes.

#### `shapenet.networks`
The `networks` subpackage contains the actual implementation of the shapenet, with bindings to combine the `ShapeLayer` with different feature extractors (currently the ones registered in `torchvision.models`).
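
As a rough illustration of the underlying idea (this is not the package's actual API; all class names, parameter names, and shape-model statistics below are made up for the sketch): a feature extractor regresses shape-model parameters, and a shape layer turns those parameters into landmark coordinates.

```python
import torch
import torch.nn as nn
from torchvision import models


class ToyShapeLayer(nn.Module):
    """Maps shape-model (PCA) parameters to 2D landmarks: mean shape + weighted components."""

    def __init__(self, mean_shape, components):
        # mean_shape: (num_landmarks, 2); components: (num_params, num_landmarks, 2)
        super().__init__()
        self.register_buffer("mean_shape", mean_shape)
        self.register_buffer("components", components.flatten(1))  # (num_params, num_landmarks * 2)

    def forward(self, params):                     # params: (batch, num_params)
        offsets = params @ self.components          # (batch, num_landmarks * 2)
        return self.mean_shape + offsets.view(-1, *self.mean_shape.shape)


class ToyShapeNet(nn.Module):
    """A feature extractor (here a torchvision ResNet-18) regressing shape parameters."""

    def __init__(self, mean_shape, components):
        super().__init__()
        backbone = models.resnet18()
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # grayscale input
        backbone.fc = nn.Linear(backbone.fc.in_features, components.shape[0])
        self.backbone = backbone
        self.shape_layer = ToyShapeLayer(mean_shape, components)

    def forward(self, x):                          # x: (batch, 1, 224, 224)
        return self.shape_layer(self.backbone(x))  # (batch, num_landmarks, 2)


# Purely illustrative shape-model statistics.
mean = torch.zeros(68, 2)
comps = torch.randn(25, 68, 2)
net = ToyShapeNet(mean, comps)
print(net(torch.rand(2, 1, 224, 224)).shape)  # torch.Size([2, 68, 2])
```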

#### `shapenet.layer`
The `layer` subpackage contains the Python and C++ implementations of the `ShapeLayer` and the affine transformations. These are intended to be used as layers within `shapenet.networks`.

#### `shapenet.jit`
The `jit` subpackage is a less flexible reimplementation of the `shapenet.networks` and `shapenet.layer` subpackages, used to export trained weights as a jit ScriptModule.

#### `shapenet.utils`
The `utils` subpackage contains everything that does not fit into the scope of any other subpackage. Currently, it is mainly responsible for parsing configuration files.

#### `shapenet.scripts`
The `scripts` subpackage contains all scripts described in [Scripts](#by-scripts) and their helper functions.

### Pretrained Weights
Pretrained weights are currently available for [grayscale faces](https://drive.google.com/file/d/1QS2GUZK9xKWvpbDYgUCc-m0qI60TMnLj/view?usp=sharing) and [cats](https://drive.google.com/file/d/13S-4vLmmUBNy2XKJl_yR1u7Z283Iu1zB/view?usp=sharing).

For these networks the image size is fixed to 224, and the pretrained weights can be loaded via `torch.jit.load("PATH/TO/NETWORK/FILE.ptj")`. The inputs have to be of type `torch.Tensor` with dtype `torch.float`, of shape `(BATCH_SIZE, 1, 224, 224)`, and normalized to the range (0, 1).
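
A minimal sketch of running one of these pretrained networks under the constraints above (grayscale input, fixed size 224, values in (0, 1)); the file path is the placeholder from above, and the exact shape and meaning of the output depend on the downloaded model:

```python
import torch

# Load the exported ScriptModule (replace the path with your downloaded file).
net = torch.jit.load("PATH/TO/NETWORK/FILE.ptj")
net.eval()

# Dummy grayscale batch: dtype float, values in (0, 1), shape (BATCH_SIZE, 1, 224, 224).
images = torch.rand(1, 1, 224, 224, dtype=torch.float)

with torch.no_grad():
    prediction = net(images)  # predicted landmarks / shape for each image

print(prediction.shape)
```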


## Our Paper
If you use our code for your own research, please cite our paper:
```
@article{Kopaczka2019,
  title = {Super-Realtime Facial Landmark Detection and Shape Fitting by Deep Regression of Shape Model Parameters},
  author = {Marcin Kopaczka and Justus Schock and Dorit Merhof},
  year = {2019},
  journal = {arXiv preprint}
}
```
A link to the PDF will be provided as soon as the preprint is available online.

Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

shapenet-0.1.0.tar.gz (26.0 kB)

Uploaded Source

Built Distribution

shapenet-0.1.0-py3-none-any.whl (37.3 kB)

Uploaded Python 3

File details

Details for the file shapenet-0.1.0.tar.gz.

File metadata

  • Download URL: shapenet-0.1.0.tar.gz
  • Upload date:
  • Size: 26.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.0 CPython/3.7.1

File hashes

Hashes for shapenet-0.1.0.tar.gz
Algorithm Hash digest
SHA256 926d9870cfaa9f9eef1aef97e8d47e728ded0522c183984d26ce9d2dc30115f7
MD5 280ce556883cbe67870fabcdeaf45314
BLAKE2b-256 98d88eef79f3992ad919f82fd2f61582b9f9bbbfc514d26ddb79beb9dad3e024

See more details on using hashes here.

File details

Details for the file shapenet-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: shapenet-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 37.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.0 CPython/3.7.1

File hashes

Hashes for shapenet-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 56f6b2f47261f5def322e291d0a9aad1ff83d489b1520a481ac45d9630627939
MD5 5bb030b96783c2ca36fbf0ed580a6021
BLAKE2b-256 0eecb70775b5848d69fe41e3b1b10454f6d887f8846cf9e2cdba872c5deaa42b

See more details on using hashes here.
