Sign Language Translator

Code: sign-language-translator/sign-language-translator
Help: https://sign-language-translator.readthedocs.io
Demo: https://huggingface.co/sltAI

This project is an effort to bridge the communication gap between the hearing and the hearing-impaired community using Artificial Intelligence. The goal is to provide a user-friendly API for novel sign language translation solutions that can easily adapt to any regional sign language.

Usage

```python
import sign_language_translator as slt

# download dataset or models (if you need them for personal use)
# (by default, resources are auto-downloaded within the install directory)
# slt.Assets.set_root_dir("path/to/folder")  # helps prevent duplication across environments, or lets you use cloud-synced data
# slt.Assets.download(r".*.json")  # downloads into asset_dir
# print(slt.Assets.FILE_TO_URL.keys())  # all downloadable resources

print("All available models:")
print(list(slt.ModelCodes))  # slt.ModelCodeGroups
# print(list(slt.TextLanguageCodes))
# print(list(slt.SignLanguageCodes))
# print(list(slt.SignFormatCodes))
```

```python
# -------------------------- TRANSLATE: text to sign --------------------------

# The core model of the project (rule-based text-to-sign translator)
# which enables us to generate synthetic training datasets
model = slt.models.ConcatenativeSynthesis(
    text_language="urdu", sign_language="pk-sl", sign_format="video"
)

text = "یہ بہت اچھا ہے۔"  # "This is very good."
sign = model.translate(text)  # tokenize, map, download & concatenate
sign.show()

model.text_language = "hindi"  # slt.TextLanguageCodes.HINDI  # slt.languages.text.Hindi()
sign_2 = model.translate("कैसे हैं आप?")  # "How are you?"
sign_2.save("how are you.mp4", overwrite=True)
```
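Conceptually, concatenative synthesis tokenizes the input text, maps each token to a prerecorded sign clip, and joins the clips into one video. A toy sketch of that pipeline (the clip mapping and `synthesize` function below are illustrative assumptions, not the library's actual data or API):

```python
# Toy illustration of concatenative synthesis: tokenize the text, map
# each known token to a prerecorded clip, then concatenate the clips.
# (Hypothetical token-to-clip dictionary, not the library's dataset.)
SIGN_CLIPS = {"this": "this.mp4", "very": "very.mp4", "good": "good.mp4"}

def synthesize(text: str) -> list[str]:
    tokens = text.lower().split()
    # out-of-vocabulary tokens are simply skipped in this sketch
    clips = [SIGN_CLIPS[t] for t in tokens if t in SIGN_CLIPS]
    return clips  # the real model joins these into a single video

print(synthesize("This very good"))  # -> ['this.mp4', 'very.mp4', 'good.mp4']
```

The real translator also handles word-sense disambiguation and grammar rules per sign language; this sketch only shows the token-to-clip lookup at the heart of the approach.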

```python
# -------------------------- TRANSLATE: sign to text --------------------------

# sign = slt.Video("path/to/video.mp4")  # load a local video instead
sign = slt.Video.load_asset("videos/pk-hfad-1_program.mp4")  # downloads & reads a dataset file
sign.show_frames_grid()

# Extract pose vectors for feature reduction
embedding_model = slt.models.MediaPipeLandmarksModel()
embedding = embedding_model.embed(sign.iter_frames())
# slt.Landmarks(embedding, connections="mediapipe-world").show()
```
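A landmark embedding is essentially a matrix with one row per video frame, each row holding the flattened (x, y, z) coordinates of every tracked landmark. A minimal sketch of that layout (the sizes below are illustrative assumptions, not MediaPipe's actual landmark count):

```python
# Illustrative shape of a pose-landmark embedding: one flat feature
# vector per video frame. (Hypothetical sizes; MediaPipe pose models
# track many more landmarks per frame.)
n_frames = 4
n_landmarks = 2   # real models track dozens of landmarks
n_coords = 3      # x, y, z per landmark

embedding = [
    [0.0] * (n_landmarks * n_coords)  # one flattened vector per frame
    for _ in range(n_frames)
]
print(len(embedding), len(embedding[0]))  # 4 frames, 6 features each
```

Reducing each frame to such a fixed-length vector is what makes the video usable as input to a downstream sign-to-text model.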

```python
# Load sign-to-text model (PyTorch) (COMING SOON!)
# translation_model = slt.get_model(slt.ModelCodes.Gesture)
# text = translation_model.translate(embedding)
# print(text)
```

CLI Module

Sign Language Translator (SLT) Command Line Interface

This module provides a command line interface (CLI) for the Sign Language Translator (SLT) library. It allows you to perform various operations such as translating text to sign language (or vice versa), downloading resource files, completing text sequences using language models, and embedding videos into sequences of vectors.

```shell
$ slt
Usage:
    slt [OPTIONS] COMMAND [ARGS]...

Options:
    --help  Show this message and exit.

Commands:
    assets     Assets manager to download & display Datasets & Models.
    complete   Complete a sequence using Language Models.
    translate  Translate text into sign language or vice versa.
    embed      Embed Videos Using Selected Model.
```