# Gesture classifying model
## Introduction
A pip package used in the sign language translation system available in the following repository:
***
[https://github.com/kacper1095/translation-system-application](https://github.com/kacper1095/translation-system-application)
***
## Installation
1. `pip install gesture_classifying_model`
2. At the top of `pipeline.py`, insert the line:
```python
from gesture_classifying_model import GestureClassifier
```
3. Instantiate the class in the transformers list, e.g.:
```python
transformers = [
    AlreadyExistingTransformer(),
    ...
    GestureClassifier(),
    ...
    AlreadyExistingTransformer()
]
```
4. The first run downloads the weights (in `*.h5` format, together with a `config.yml`) to `models/gesture_classifier`
5. Run the application and start using the system!
## Classifier input info
* image size = [None, 64, 64, 3] (batch size, height, width, channels)
  - *None* means an arbitrary number of frames; **only the last frame is classified**
* value range = [0, 255] in RGB (values may be float or int)
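The expected input shape can be sketched with a small validation helper. This is a hypothetical utility, not part of the package; it assumes frames are passed as nested lists of RGB pixels:

```python
def validate_batch(frames):
    """Check that a batch matches the expected [None, 64, 64, 3] shape
    with RGB values in [0, 255]. Hypothetical helper for illustration."""
    for frame in frames:                  # arbitrary number of frames
        assert len(frame) == 64           # height
        for row in frame:
            assert len(row) == 64         # width
            for pixel in row:
                assert len(pixel) == 3    # RGB channels
                assert all(0 <= v <= 255 for v in pixel)
    return True

# Two all-black 64x64 RGB frames; the classifier uses only the last one.
black = [[[0, 0, 0] for _ in range(64)] for _ in range(64)]
print(validate_batch([black, black]))  # prints True
```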
## Classifier output info
* only letters are classified
* available letters (both upper and lower case)*:
```
abcdefghiklmnopqrstuvwxy
```
* output size = [None, 24], where *None* is the same number of frames as in the input
<small>*'j' and 'z' are excluded because both require movement, while the classifier operates only on single frames</small>
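Mapping a 24-way output vector back to a letter can be sketched as below. The index-to-letter order is an assumption (alphabetical with 'j' and 'z' removed), and `decode_prediction` is a hypothetical helper, not part of the package:

```python
# Assumed class order: alphabet minus 'j' and 'z' (24 classes).
LETTERS = "abcdefghiklmnopqrstuvwxy"

def decode_prediction(probs):
    """Return the letter with the highest score from a 24-way output vector.
    Hypothetical helper for illustration."""
    if len(probs) != len(LETTERS):
        raise ValueError("expected %d scores, got %d" % (len(LETTERS), len(probs)))
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LETTERS[best]

# Example: a one-hot vector for index 0 decodes to 'a'.
scores = [0.0] * 24
scores[0] = 1.0
print(decode_prediction(scores))  # prints "a"
```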
## Requirements
Environment used:
* Python 3.5
* Theano 0.9.0
* Keras 1.2.2
## Changelog
* v0.1.3:
- documentation on PyPI
* v0.1.2:
- first PyPI availability
    - weights older than one day are re-downloaded on the next run