Jumla Sign Language
The main objective of this project is to provide researchers and computer engineers with a platform that offers a set of tools and data useful for processing and analyzing Arabic Sign Language (ArSL). Building a labeled database is fundamental to any innovation related to ArSL. As a first step, Mada will build the first labeled and indexed ArSL corpus containing both video and motion capture (Mocap) data. This content is valuable to linguists and computer science researchers studying ArSL and conducting linguistic and scientific research. It can also be used by engineers to build new tools based on the Mada corpus. To build this database, a Sign Language studio will be set up using a dedicated Motion Capture solution that combines handshape, body movement, and facial expression tracking.
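To make the idea of a labeled and indexed corpus concrete, the following is a minimal sketch of what a single corpus entry might look like, pairing multi-angle video files with Mocap data under one index. All field names here are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one indexed corpus entry; the field names
# are assumptions for illustration, not the Jumla project's schema.
@dataclass
class CorpusEntry:
    entry_id: str        # unique index within the corpus
    gloss: str           # Arabic gloss label for the sign
    video_paths: dict    # camera angle -> video file path
    mocap_path: str      # motion-capture file (hands, body, face)
    signer_id: str       # anonymized signer identifier

    def angles(self):
        """Return the camera angles available for this entry."""
        return sorted(self.video_paths)

entry = CorpusEntry(
    entry_id="qsl-000001",
    gloss="مستشفى",  # "hospital"
    video_paths={"front": "front.mp4", "left": "left.mp4"},
    mocap_path="entry1.bvh",
    signer_id="s01",
)
print(entry.angles())  # ['front', 'left']
```

Keeping video and Mocap references in one record is what makes the corpus "indexed": a single identifier retrieves every modality recorded for a sign.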
Video about the project
(in Arabic, translated into Qatari Sign Language by the avatar BuHamad)
Related Publications
This paper proposes the first large-scale, annotated Qatari sign language dataset for continuous sign language processing. The dataset focuses on phrases and sentences commonly used in healthcare settings and contains 6300 records of 900 sentences. The collection process involved diverse participants, including both hearing-impaired individuals and sign language interpreters, to capture variations in signing styles, speeds, and other linguistic nuances. The data collection setup integrates advanced technology, including true depth cameras, to comprehensively record signing movements from various angles. The collected dataset is rich in content, encompassing different signing variations and linguistic intricacies…
Journal: IEEE Access | (Q1, Impact Factor 2020: 3.476) | 2023
The analysis and recognition of sign languages are currently active fields of research. Approaches differ in their analysis methods and in the devices used for sign acquisition. Traditional methods rely on video analysis or on spatial positioning data computed with motion capture tools. In contrast to these conventional recognition and classification approaches, electromyogram (EMG) signals, which measure the electrical activity of muscles, offer a promising technology for gesture detection…
Journal: MDPI Sensors | (Q1, Impact Factor 2022: 3.9) | 2023
Sign languages are the most common mode of communication with and between hearing-impaired individuals. In the Arab world, Arabic sign language is used with different dialects, each supporting a distinct set of rules for its gestures. As research on natural language processing advances, models have been developed to translate sign language to spoken language and vice versa. However, Arabic sign language has rarely been studied due to the scarcity of datasets dealing with it…
Dataset: IEEE Dataport | 2022
In the United States, the National Institute on Deafness and Other Communication Disorders estimates that 90% or more of deaf children have hearing parents. Communication is one of the first aspects of family life affected by having a deaf child. Hearing parents and teachers often have difficulty communicating with deaf children and need to interact with them using sign language. Despite technological advances such as mobile apps, desktop and web applications, and new instructional materials and methods, deaf children and hearing parents still face many challenges in learning sign languages. In Qatar, 13.7% of persons with disabilities have some difficulty hearing, a lot of difficulty, or cannot hear at all, highlighting the need to include and foster…
Conference: EDUCON 2022 | IEEE Global Engineering Education Conference | March 28-31, 2022 | Online and Tunis, Tunisia [hybrid]
The present paper describes an ongoing project on designing and creating the first Qatari Sign Language dataset, called “Jumla Dataset: The Jumla Qatari Sign Language Corpus”, with intra-linguistic and extra-linguistic levels of the written Arabic text. The annotation of videos in Qatari Sign Language (QSL) takes input from signers to identify the components of Arabic glosses, so that QSL can be represented in written form with high accuracy; the annotation output is also used in the development of computational sign language tools. The QSL annotation is based on an input of 4 videos recorded by deaf persons or sign language interpreters from different angles (front, left side, right side, and facial view)…
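The annotation setup described above, where one Arabic sentence is aligned with its ordered glosses and with recordings from four camera angles, can be sketched as a simple data record. This is a hypothetical illustration under assumed field names, not the corpus's actual annotation format.

```python
# Illustrative sketch of a time-aligned gloss annotation for one QSL
# sentence recorded from four angles; all names are assumptions.
ANGLES = ("front", "left", "right", "facial")

def make_annotation(sentence_ar, glosses):
    """Build an annotation record: the written Arabic sentence, its
    ordered glosses with timestamps, and one slot per camera angle."""
    return {
        "sentence_ar": sentence_ar,
        "glosses": [
            {"gloss": g, "start_ms": s, "end_ms": e}
            for g, s, e in glosses
        ],
        "videos": {angle: None for angle in ANGLES},  # paths filled later
    }

ann = make_annotation(
    "أين المستشفى؟",  # "Where is the hospital?"
    [("أين", 0, 600), ("مستشفى", 600, 1400)],
)
print(len(ann["videos"]))  # 4
```

Time-stamped glosses are what allow a written representation of QSL to stay aligned with the video, which is the kind of structure computational sign language tools typically consume.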
Conference: ICTA 2021 | 8th International Conference on Information and Communication Technology and Accessibility | December 8-10, 2021 | [online]