TY - JOUR
T1 - Enhancing Communication Accessibility
T2 - UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
AU - Das, Khushal
AU - Abid, Fazeel
AU - Rasheed, Jawad
AU - Kamlish,
AU - Asuroglu, Tunc
AU - Alsubai, Shtwai
AU - Soomro, Safeeullah
N1 - Publisher Copyright:
Copyright © 2024 The Authors. Published by Tech Science Press.
PY - 2024
Y1 - 2024
AB - Deaf people and people with hearing impairments can communicate using sign language (SL), a visual language. Many approaches have been proposed for resource-rich languages; however, work on low-resource languages is still lacking. Unlike other SLs, Urdu sign language uses distinct visual signs. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture designed specifically for this purpose. Unlike existing works that focus primarily on resource-rich languages, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments on two datasets containing 1500 and 78,000 images, using a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was converted to greyscale and noise-filtered. Comparative analysis with machine learning baselines (support vector machine, Gaussian naive Bayes, random forest, and k-nearest neighbors) on the UrSL alphabet dataset demonstrated the superiority of UrSL-CNN, which achieved an accuracy of 0.95. Our model also exhibited superior performance in precision, recall, and F1-score evaluations. This work not only advances sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
KW - Convolutional neural networks
KW - Pakistan sign language
KW - Visual language
UR - http://www.scopus.com/inward/record.url?scp=85201772854&partnerID=8YFLogxK
DO - 10.32604/cmes.2024.051335
M3 - Article
AN - SCOPUS:85201772854
SN - 1526-1492
VL - 141
SP - 689
EP - 711
JO - CMES - Computer Modeling in Engineering and Sciences
JF - CMES - Computer Modeling in Engineering and Sciences
IS - 1
ER -