Translate between spoken/text languages and sign language videos using AI.
Sign Language Translator ⠎⠇⠞
- Overview
- How to install the package
- Usage
- Models
- How to Build a Translator for your Sign Language
- Directory Tree
- How to Contribute
- Research Papers & Citation
- Upcoming/Roadmap
- Credits and Gratitude
- Bonus
- Number of lines of code
- :)
Overview
Sign language consists of gestures and expressions used primarily by the hearing-impaired to communicate. This project is an effort to bridge the communication gap between the hearing and the hearing-impaired community using Artificial Intelligence.
The goal is to provide a user-friendly API for novel sign language translation solutions that can easily adapt to any regional sign language. Unlike most other projects, this Python library can translate full sentences, not just the alphabet.
A bigger hurdle is the lack of datasets and frameworks that deep learning engineers and software developers can use to build useful products for the target community. This project aims to empower sign language translation by providing robust components, tools and models for both sign language to text and text to sign language conversion. It seeks to advance the development of sign language translators for various regions while providing a way towards sign language standardization.
Solution
We have built an extensible rule-based text-to-sign translation system that can be used to generate training data for deep learning models for both sign-to-text and text-to-sign translation.
To create a rule-based translation system for your regional language, you can inherit the TextLanguage and SignLanguage classes and pass them as arguments to the ConcatenativeSynthesis class. To write sample texts of supported words, you can use our language models. Then, you can use that system to fine-tune our AI models.
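As a toy illustration (not the library's API — the real system also handles tokenization, tagging, word-sense disambiguation, and grammar reordering), rule-based concatenative translation is essentially a lookup-and-concatenate over a word-to-clip mapping. All names below are made up for the sketch:

```python
# Toy sketch of rule-based concatenative synthesis (NOT the slt API):
# map each supported word to a sign-clip file, then concatenate the clips.
clip_map = {
    "he": "pk-hfad-1_he.mp4",       # hypothetical clip file names
    "school": "pk-hfad-1_school.mp4",
    "go": "pk-hfad-1_go.mp4",
}

def translate_rule_based(tokens):
    """Return the ordered list of clip files to play back-to-back."""
    return [clip_map[token] for token in tokens if token in clip_map]

clips = translate_rule_based(["he", "school", "go"])
# clips == ["pk-hfad-1_he.mp4", "pk-hfad-1_school.mp4", "pk-hfad-1_go.mp4"]
```

A real translator additionally restructures the token sequence into sign language grammar before the lookup, which is what the SignLanguage subclasses are for.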
Major Components and Goals
- Sign Language to Text
  - Extract pose vectors (2D or 3D) from videos and map them to the corresponding text representations of the performed signs.
  - Fine-tune a neural network, such as a state-of-the-art speech-to-text model, with gradual unfreezing starting from the input layers, to convert pose vectors to text.
- Text to Sign Language
  - This is a relatively easier task if you parse the input text and play an appropriate video clip for each word.
  - Motion Transfer
    - Concatenate pose vectors in the time dimension and transfer the movements onto any given image of a person. This ensures smooth transitions between video clips.
  - Sign Feature Synthesis
    - Condition a pose sequence generation model on a pre-trained text encoder (e.g., fine-tune the decoder of a multilingual T5) to output pose vectors instead of text tokens. This solves challenges related to unknown synonyms or hard-to-tokenize words and phrases.
- Language Processing Utilities
  - Sign Processing
    - 3D world landmark extraction with MediaPipe.
    - Pose visualization with matplotlib and moviepy.
    - Pose transformations (data augmentation) with scipy.
  - Text Processing
    - Normalize text input by substituting unknown characters/spellings with supported words.
    - Disambiguate context-dependent words to ensure accurate translation, e.g. "spring" -> ["spring(water-spring)", "spring(metal-coil)"].
    - Tokenize text (word & sentence level).
    - Classify tokens and mark them with tags.
- Data Collection and Creation
  - Capture variations in signs in a scalable and diversity-accommodating way, and enable advancing sign language standardization efforts.
  - Clip extraction from long videos using timestamps.
  - Multithreaded web scraping.
  - Language models to generate sentences composed of supported words.
Datasets
The sign videos are categorized by:
1. country
2. source organization
3. session number
4. camera angle
5. person code ((d: deaf | h: hearing)(m: male | f: female)000001)
6. equivalent text language word
The files are labeled as follows:
`country_organization_sessionNumber_cameraAngle_personCode_word.extension`
The text data includes:
1. word/sentence mappings to videos
2. spoken language sentences and phrases
3. spoken language sentences & corresponding sign video label sequences
4. preprocessing data such as word-to-numbers, misspellings, named-entities, etc.
See the sign-language-datasets repo and its release files for the actual data & details.
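The file-name convention above can be parsed mechanically. A hypothetical stdlib sketch (the helper, its name, and the example file name are illustrations, not part of the package; it assumes the word is the final underscore-separated field):

```python
from pathlib import Path

def parse_label(filename: str) -> dict:
    """Split a dataset file name of the form
    country_organization_sessionNumber_cameraAngle_personCode_word.extension
    into its labeled components. (Hypothetical helper, not part of slt.)"""
    stem = Path(filename).stem
    country, organization, session, camera, person, word = stem.split("_", 5)
    return {
        "country": country,
        "organization": organization,
        "session_number": session,
        "camera_angle": camera,
        "person_code": person,  # e.g. "dm000001" => deaf, male, id 000001
        "word": word,
    }

info = parse_label("pk_hfad_1_front_dm000001_hello.mp4")
# info["country"] == "pk", info["person_code"] == "dm000001", info["word"] == "hello"
```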
How to install the package
pip install sign-language-translator
Editable mode:
git clone https://github.com/sign-language-translator/sign-language-translator.git
cd sign-language-translator
pip install -e .
pip install -e git+https://github.com/sign-language-translator/sign-language-translator.git#egg=sign_language_translator
Usage
See the test cases or the notebooks repo for detailed use but here is the general API:
Command Line
You can use the following functionalities of the SLT package via the CLI as well. A command entered without any arguments will show the help. The usable model codes are listed in the help.
Note: Objects & models do not persist in memory across commands, so this is a quick but inefficient way to use this package. In production, create a server which uses the python interface.
Download
Download dataset files or models if you need them. The parameters are regular expressions.
slt download --overwrite true '.*\.json' '.*\.mp4'
slt download --progress-bar true '.*/tlm_14.0M.pt'
By default, auto-download is enabled. The default download directory is /install-directory/sign_language_translator/sign-language-resources/. (See slt.config.settings.Settings)
Translate
Translate text to sign language using a rule-based model
slt translate \
--model-code "concatenative" \
--text-lang urdu --sign-lang psl \
--sign-format 'mediapipe-landmarks' \
"وہ سکول گیا تھا۔" \
'مجھے COVID نہیں ہے!'
Complete
Auto-complete a sentence using our language models. This model can write sentences composed of supported words only:
$ slt complete --end-token ">" --model-code urdu-mixed-ngram "<"
('<', 'وہ', ' ', 'یہ', ' ', 'نہیں', ' ', 'چاہتا', ' ', 'تھا', '۔', '>')
These models predict the next character until a specified end token appears, e.g. generating names using a mixture of models:
$ slt complete \
--model-code unigram-names --model-weight 1 \
--model-code bigram-names -w 2 \
-m trigram-names -w 3 \
--selection-strategy merge --beam-width 2.5 --end-token "]" \
"[s"
[shazala]
Embed Videos
Embed videos into a sequence of vectors using selected embedding models.
slt embed videos/*.mp4 --model-code mediapipe-pose-2-hand-1 --embedding-type world --processes 4 --save-format csv
Python
Basics
import sign_language_translator as slt
# download dataset or models (if you need them for personal use)
# (by default, resources are auto-downloaded within the install directory)
# slt.set_resource_dir("path/to/folder") # Helps prevent duplication across environments or use cloud-synced data
# slt.utils.download_resource(".*.json") # downloads into resource_dir
# print(slt.Settings.FILE_TO_URL.keys()) # All downloadable resources
print("All available models:")
print(list(slt.ModelCodes)) # slt.ModelCodeGroups
# print(list(slt.TextLanguageCodes))
# print(list(slt.SignLanguageCodes))
# print(list(slt.SignFormatCodes))
Text to Sign Translation:
# Load text-to-sign model
# deep_t2s_model = slt.get_model("t2s-flan-T5-base-01.pt") # pytorch
# rule-based model (concatenates clips of each word)
t2s_model = slt.get_model(
    model_code="concatenative-synthesis",  # slt.ModelCodes.CONCATENATIVE_SYNTHESIS
    text_language="urdu",  # or an object of any child of the slt.languages.text.text_language.TextLanguage class
    sign_language="pakistan-sign-language",  # or an object of any child of the slt.languages.sign.sign_language.SignLanguage class
    sign_format="video",  # or an object of any child of the slt.vision.sign_wrappers.sign.Sign class
)
text = "HELLO دنیا!" # HELLO treated as an acronym
sign_language_sentence = t2s_model(text)
# sign_language_sentence.show() # class: slt.vision.sign_wrappers.video.Video
# sign_language_sentence.save(f"sentences/{text}.mp4")
Sign to Text Translation
Dummy code (the API will be finalized in v0.8+):
# load sign
video = slt.Video("video.mp4")
# features = slt.extract_features(video, "mediapipe_pose_v2_hand_v1")
# Load sign-to-text model
deep_s2t_model = slt.get_model("gesture_mp_base-01") # pytorch
# translate via single call to pipeline
# text = deep_s2t_model.translate(video)
# translate via individual steps
features = deep_s2t_model.extract_features(video.iter_frames())
encoding = deep_s2t_model.encoder(features)
# logits = deep_s2t_model.decoder(encoding, token_ids = [0])
# logits = deep_s2t_model.decoder(encoding, token_ids = [0, logits.argmax(dim=-1)])
# ...
tokens = deep_s2t_model.decode(encoding) # uses beam search to generate a token sequence
text = "".join(tokens) # deep_s2t_model.detokenize(tokens)
print(features.shape)
# print(logits.shape) # logits is only defined if you run the commented decoder steps above
print(text)
Text Language Processor
Process text strings using language specific classes:
from sign_language_translator.languages.text import Urdu
ur_nlp = Urdu()
text = "hello جاؤں COVID-19."
normalized_text = ur_nlp.preprocess(text)
# normalized_text = 'جاؤں COVID-19.' # replace/remove unicode characters
tokens = ur_nlp.tokenize(normalized_text)
# tokens = ['جاؤں', ' ', 'COVID', '-', '19', '.']
# tagged = ur_nlp.tag(tokens)
# tagged = [('جاؤں', Tags.SUPPORTED_WORD), (' ', Tags.SPACE), ...]
tags = ur_nlp.get_tags(tokens)
# tags = [Tags.SUPPORTED_WORD, Tags.SPACE, Tags.ACRONYM, ...]
# word_senses = ur_nlp.get_word_senses("میں")
# word_senses = [["میں(i)", "میں(in)"]]
Sign Language Processor
This processes the text representation of sign language, which mainly deals with video file names. For video processing, see the vision section.
from sign_language_translator.languages.sign import PakistanSignLanguage
from sign_language_translator.text.tagger import Tags
psl = PakistanSignLanguage()
tokens = ["he", " ", "went", " ", "to", " ", "school", "."]
tags = 3 * [Tags.WORD, Tags.SPACE] + [Tags.WORD, Tags.PUNCTUATION]
tokens, tags, _ = psl.restructure_sentence(tokens, tags) # ["he", "school", "go"]
signs = psl.tokens_to_sign_dicts(tokens, tags)
# signs = [
# {'signs': [['pk-hfad-1_وہ']], 'weights': [1.0]},
# {'signs': [['pk-hfad-1_school']], 'weights': [1.0]},
# {'signs': [['pk-hfad-1_گیا']], 'weights': [1.0]}
# ]
Vision
Dummy code (the API will be finalized in v0.7):
import sign_language_translator as slt
# load video
video = slt.Video("sign.mp4")
print(video.duration(), video.shape)
# extract features
# model = slt.get_model(slt.ModelCodes.MEDIAPIPE_POSE_V2_HAND_V1)
model = slt.models.MediaPipeLandmarksModel() # default args
embedding = model.embed(video.frames(), landmark_type="world") # torch.Tensor
print(embedding.shape) # (n_frames, n_landmarks * 5)
# embed dataset
# slt.models.utils.VideoEmbeddingPipeline(model).process_videos_parallel(
# ["dataset/*.mp4"], n_processes=12, save_format="csv", ...
# )
# transform / augment data
sign = slt.MediaPipeSign(embedding, landmark_type="world")
sign = sign.rotate(60, 10, 90, degrees=True)
sign = sign.transform(slt.vision.transformations.ZoomLandmarks(1.1, 0.9, 1.0))
# plot
video_visualization = sign.video()
image_visualization = sign.image(steps=5)
overlay_visualization = sign.overlay(video)
# display
video_visualization.show()
image_visualization.show()
overlay_visualization.show()
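For intuition, the rotate transformation above boils down to applying a rotation matrix to each landmark coordinate. A 2D pure-Python sketch (the library works on 3D landmarks, reportedly via scipy; this simplified helper is not part of slt):

```python
import math

def rotate_2d(points, degrees):
    """Rotate (x, y) points counter-clockwise about the origin."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

rotated = rotate_2d([(1.0, 0.0)], 90)
# (1, 0) rotated 90 degrees counter-clockwise lands at (0, 1)
```

The 3D case just uses three such angles (one per axis), which is why `sign.rotate(60, 10, 90, degrees=True)` takes three arguments.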
Language models
Simple statistical n-gram model:
from sign_language_translator.models.language_models import NgramLanguageModel
names_data = [
'[abeera]', '[areej]', '[farida]', '[hiba]', '[kinza]',
'[mishal]', '[nimra]', '[rabbia]', '[tehmina]', '[zoya]',
'[amjad]', '[atif]', '[farhan]', '[huzaifa]', '[mudassar]',
'[nasir]', '[rizwan]', '[shahzad]', '[tayyab]', '[zain]',
]
# train an n-gram model (considers previous n tokens to predict)
model = NgramLanguageModel(window_size=2, unknown_token="")
model.fit(names_data)
# inference loop
name = '[r'
for _ in range(10):
    nxt, prob = model.next(name)  # selects the next token randomly from the learnt probability distribution
    name += nxt
    if nxt in [']', model.unknown_token]:
        break
print(name)
# '[rabeej]'
# see ngram model's implementation
print(model.__dict__)
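Conceptually, fitting such an n-gram model just fills a hash table of next-token counts keyed by the previous `window_size` tokens (hence "Hash Tables" in the models table below). A minimal stdlib sketch of the idea, not the library's implementation:

```python
from collections import Counter, defaultdict

def fit_ngram(corpus, window=2):
    """Count next-character frequencies keyed by the previous `window` characters."""
    counts = defaultdict(Counter)
    for text in corpus:
        for i in range(len(text) - window):
            counts[text[i:i + window]][text[i + window]] += 1
    return counts

counts = fit_ngram(['[abeera]', '[areej]'], window=2)
# after "ee" the training data continues with "r" once and "j" once,
# so P("r" | "ee") == P("j" | "ee") == 0.5
```

Sampling then draws the next character from the normalized counts for the current context, which is what `model.next(...)` does above.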
Mash up multiple language models & complete generation through beam search:
from sign_language_translator.models.language_models import MixerLM, BeamSampling, NgramLanguageModel
# using data from previous example
names_data = [...] # or slt.languages.English().vocab.person_names # concat start/end symbols
# train models
SLMs = [
    NgramLanguageModel(window_size=size, unknown_token="")
    for size in range(1, 4)
]
for lm in SLMs:
    lm.fit(names_data)
# randomly select a model and infer through it
mixed_model = MixerLM(
    models=SLMs,
    selection_probabilities=[1, 2, 4],
    unknown_token="",
    model_selection_strategy="choose",  # or "merge"
)
print(mixed_model)
# Mixer LM: unk_tok=""[3]
# ├── Ngram LM: unk_tok="", window=1, params=85 | prob=14.3%
# ├── Ngram LM: unk_tok="", window=2, params=113 | prob=28.6%
# └── Ngram LM: unk_tok="", window=3, params=96 | prob=57.1%
# use Beam Search to find high likelihood names
sampler = BeamSampling(mixed_model, beam_width=3) #, scoring_function = ...)
name = sampler.complete('[')
print(name)
# [rabbia]
Write sentences composed of only those words for which sign videos are available so that the rule-based text-to-sign model can generate training examples for a deep learning model:
from sign_language_translator.models.language_models import TransformerLanguageModel
# model = slt.get_model("ur-supported-gpt")
model = TransformerLanguageModel.load("models/tlm_14.0M.pt")
# sampler = BeamSampling(model, ...)
# sampler.complete(["<"])
# see probabilities of all tokens
model.next_all(["میں", " ", "وزیراعظم", " ",])
# (["سے", "عمران", ...], [0.1415926535, 0.7182818284, ...])
Models
Translation: Text to sign Language
| Name | Architecture | Description | Input | Output |
|---|---|---|---|---|
| Concatenative Synthesis | Rules + Hash Tables | The core rule-based translator, mainly used to synthesize the translation dataset. Initialize it using TextLanguage, SignLanguage & SignFormat objects. | string | slt.SignFile |
Video: Embedding/Feature extraction
| Name | Architecture | Description | Input format | Output format |
|---|---|---|---|---|
| MediaPipe Landmarks (Pose + Hands) | CNN-based pipelines (see: Pose, Hands) | Encodes videos into pose vectors (3D world or 2D image) depicting the movements of the performer. | List of numpy images (n_frames, height, width, channels) | torch.Tensor (n_frames, n_landmarks * 5) |
Data generation: Language Models
| Name | Architecture | Description | Input format | Output format |
|---|---|---|---|---|
| N-Gram Language Model | Hash Tables | Predicts the next token based on learned statistics about the previous N tokens. | List of tokens | (token, probability) |
| Transformer Language Model | Decoder-only Transformer (GPT) | Predicts the next token using query-key-value attention, linear transformations and soft probabilities. | torch.Tensor (batch, token_ids) or List of tokens | torch.Tensor (batch, token_ids, vocab_size) or (token, probability) |
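For intuition, the "query-key-value attention" named in the table reduces to scaled dot-products followed by a softmax. A miniature single-query sketch in pure Python (real models use batched torch tensors, learned projections, and multiple heads; this toy function is only for illustration):

```python
import math

def attention(query, keys, values):
    """Single-query scaled dot-product attention over lists of vectors."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    exps = [math.exp(s) for s in scores]       # softmax numerator
    weights = [e / sum(exps) for e in exps]    # soft probabilities over keys
    dim = len(values[0])
    # weighted average of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

out = attention([1.0, 0.0], keys=[[1.0, 0.0], [1.0, 0.0]], values=[[2.0, 0.0], [4.0, 0.0]])
# equal keys -> equal weights -> output is the mean of the values: [3.0, 0.0]
```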
How to Build a Translator for your Sign Language
To create your own sign language translator, you'll need these essential components:
1. Data Collection
   - Gather a collection of videos featuring individuals performing sign language gestures.
   - Prepare a JSON file that maps video file names to the text language words, phrases, or sentences that represent the gestures.
   - Prepare a parallel corpus containing text language sentences and sequences of sign language video file names.
2. Language Processing
   - Implement a subclass of `slt.languages.TextLanguage`: tokenize your text language and assign appropriate tags to the tokens for streamlined processing.
   - Create a subclass of `slt.languages.SignLanguage`: map text tokens to video file names using the provided JSON data, and rearrange the sequence of video file names to align with the grammar and structure of sign language.
3. Rule-Based Translation
   - Pass instances of your classes from the previous step to the `slt.models.ConcatenativeSynthesis` class to obtain a rule-based translator object.
   - Construct sentences in your text language and use the rule-based translator to generate sign language translations. (You can use our language models to generate such texts.)
4. Model Fine-Tuning
   - Utilize the sign language videos and corresponding text sentences from the previous step.
   - Apply our training pipeline to fine-tune a chosen model for improved accuracy and translation quality.
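The language-processing step can be sketched with stdlib-only stand-ins. The method names follow the examples earlier in this README, but the stand-in base classes, `MyTextLanguage`/`MySignLanguage`, and the mapping data are all hypothetical; the real abstract base classes in `slt.languages` may require more methods:

```python
from abc import ABC, abstractmethod

class TextLanguage(ABC):
    """Stand-in for slt.languages.TextLanguage (hypothetical shape)."""
    @abstractmethod
    def tokenize(self, text: str) -> list: ...

class SignLanguage(ABC):
    """Stand-in for slt.languages.SignLanguage (hypothetical shape)."""
    @abstractmethod
    def tokens_to_sign_dicts(self, tokens: list) -> list: ...

class MyTextLanguage(TextLanguage):
    def tokenize(self, text):
        return text.lower().split()

class MySignLanguage(SignLanguage):
    WORD_TO_VIDEO = {"hello": "xx-org-1_hello"}  # loaded from your JSON mapping

    def tokens_to_sign_dicts(self, tokens):
        # same output shape as the PakistanSignLanguage example above
        return [{"signs": [[self.WORD_TO_VIDEO[t]]], "weights": [1.0]}
                for t in tokens if t in self.WORD_TO_VIDEO]

signs = MySignLanguage().tokens_to_sign_dicts(MyTextLanguage().tokenize("Hello"))
# signs == [{"signs": [["xx-org-1_hello"]], "weights": [1.0]}]
```

Instances of your real subclasses are then what you hand to `slt.models.ConcatenativeSynthesis` in step 3.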
Remember to contribute back to the community:
- Share your data, code, and models by creating a pull request (PR), allowing others to benefit from your efforts.
- Create your own sign language translator (e.g. as your university thesis) and contribute to a more inclusive and accessible world.
Directory Tree
sign-language-translator
├── MANIFEST.in
├── README.md
├── poetry.lock
├── pyproject.toml
├── requirements.txt
├── tests
│   └── *
└── sign_language_translator
    ├── cli.py
    ├── config
    │   ├── enums.py
    │   ├── helpers.py
    │   ├── settings.py
    │   └── urls.yaml
    ├── data_collection
    │   ├── completeness.py
    │   ├── scraping.py
    │   └── synonyms.py
    ├── languages
    │   ├── utils.py
    │   ├── vocab.py
    │   ├── sign
    │   │   ├── mapping_rules.py
    │   │   ├── pakistan_sign_language.py
    │   │   └── sign_language.py
    │   └── text
    │       ├── english.py
    │       ├── text_language.py
    │       └── urdu.py
    ├── models
    │   ├── _utils.py
    │   ├── utils.py
    │   ├── language_models
    │   │   ├── abstract_language_model.py
    │   │   ├── beam_sampling.py
    │   │   ├── mixer.py
    │   │   ├── ngram_language_model.py
    │   │   └── transformer_language_model
    │   │       ├── layers.py
    │   │       ├── model.py
    │   │       └── train.py
    │   ├── sign_to_text
    │   ├── text_to_sign
    │   │   ├── concatenative_synthesis.py
    │   │   └── t2s_model.py
    │   └── video_embedding
    │       ├── mediapipe_landmarks_model.py
    │       └── video_embedding_model.py
    ├── sign-language-resources (auto-downloaded)
    │   └── *
    ├── text
    │   ├── metrics.py
    │   ├── preprocess.py
    │   ├── subtitles.py
    │   ├── tagger.py
    │   ├── tokenizer.py
    │   └── utils.py
    ├── utils
    │   ├── download.py
    │   ├── tree.py
    │   └── utils.py
    └── vision
        └── utils.py
How to Contribute
Datasets:
- Contribute by scraping, compiling, and centralizing video datasets.
- Help with labeling word mapping datasets.
- Establish connections with Academies for the Deaf to collaboratively develop standardized sign language grammar and integrate it into the rule-based translators.
New Code:
- Create dedicated sign language classes catering to various regions.
- Develop text language processing classes for diverse languages.
- Experiment with training models using diverse hyper-parameters.
- Don't forget to integrate the string short codes of your classes and models into `enums.py`, and update functions like `get_model()` and `get_.*_language()` accordingly.
- Enhance the codebase with comprehensive docstrings, exemplary usage cases, and thorough test cases.
Existing Code:
- Optimize the codebase by implementing techniques like parallel processing and batching.
- Strengthen the project's documentation with clear docstrings, illustrative usage scenarios, and robust test coverage.
- Contribute to the documentation for sign-language-translator.readthedocs.io to empower users with comprehensive insights.
Product Development:
Research Papers & Citation
Stay Tuned!
Upcoming/Roadmap
CLEAN_ARCHITECTURE_VISION: v0.7
# class according to feature type:
# landmarks
# video transformations
# landmark augmentation
# concatenative synthesis returns features
# subtitles
# make scraping dependencies optional (beautifulsoup4, deep_translator)
# GUI with gradio
MISCELLANEOUS
# clean demonstration notebooks
# expand reference clip data by scraping everything
# data info table
# https://sign-language-translator.readthedocs.io/en/latest/
# sequence diagram for creating a translator
DEEP_TRANSLATION: v0.8-v1.x
# sign to text with fine-tuned whisper
# pose vector generation with fine-tuned flan-T5
# motion transfer
# pose2video: stable diffusion or GAN?
# speech to text
# text to speech
# LanguageModel: experiment by dropping space tokens
# parallel text corpus
RESEARCH PAPERS
# datasets: clips, text, sentences, disambiguation
# rule based translation: describe entire repo
# deep sign-to-text: pipeline + experiments
# deep text-to-sign: pipeline + experiments
PRODUCT DEVELOPMENT
# ML inference server
# Django backend server
# React Frontend
# React Native mobile app
Credits and Gratitude
This project started in October 2021 as a BS Computer Science final-year project with 3 students and 1 supervisor. After 9 months at university, it became a hobby project for Mudassar, who has continued it until at least 2023-09-18.
Immense gratitude towards:
- Mudassar Iqbal for coding the project so far.
- Rabbia Arshad for help in initial R&D and web development.
- Waqas Bin Abbas for assistance in the initial video data collection process.
- Kamran Malik for setting the initial project scope, the idea of motion transfer, and connecting us with Hamza Foundation.
- Hamza Foundation (especially Ms Benish, Ms Rashda & Mr Zeeshan) for agreeing to collaborate and providing the reference clips, hearing-impaired performers for data creation, and the text2gloss dataset.
- UrduHack (especially Ikram Ali) for their work on Urdu character normalization.
- Telha Bilal for help in designing the architecture of some modules.
Bonus
Count total number of lines of code (Package: 7059 + Tests: 957):
git ls-files | grep '\.py' | xargs wc -l
Just for Fun
Q: What was the deaf student's favorite course?
A: Communication skills