Sign Language Recognition for Static and Dynamic Gestures

Jay Suthar, Devansh Parikh, Tanya Sharma, Avi Patel



Abstract

Humans are social animals, which makes communication an essential part of human life. People communicate through verbal and non-verbal forms of language, but not everyone can rely on oral speech: hearing-impaired and mute people cannot. Sign language was consequently developed for them, yet communication barriers with non-signers remain. This paper therefore proposes a system that uses a CNN for the classification of alphabets and numbers. Alphabet and number gestures are static gestures in Indian Sign Language, and a CNN is used because it performs very well on image-classification tasks; hand-masked (skin-segmented) images are used for model training. For dynamic hand gestures, the system uses an LSTM network for classification, since LSTMs are known for accurate prediction on time-distributed (sequential) data. In summary, this paper presents two models for the two types of hand gestures: a CNN for static prediction and an LSTM for dynamic prediction.
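The full method is not reproduced on this page, but the hand-masking (skin segmentation) step mentioned in the abstract can be sketched in a few lines. The sketch below uses a classic fixed-threshold RGB skin heuristic as a stand-in; the function names and the specific thresholds are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels using a classic RGB rule.

    rgb: uint8 array of shape (H, W, 3). Returns a (H, W) boolean array.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    spread = rgb.max(-1).astype(np.int16) - rgb.min(-1)  # channel spread per pixel
    return (
        (r > 95) & (g > 40) & (b > 20)   # bright enough in each channel
        & (spread > 15)                   # not a gray pixel
        & (np.abs(r - g) > 15) & (r > g) & (r > b)  # red-dominant, typical of skin
    )

def mask_hand(rgb):
    """Zero out non-skin pixels so the classifier sees only the hand region."""
    m = skin_mask(rgb)
    return rgb * m[..., None].astype(np.uint8)
```

A masked frame like this would then be fed to the CNN for static gestures, or stacked with neighboring frames into a sequence for the LSTM in the dynamic case.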

References

15 works cited in this article
  1. M. Mohandes, M. Deriche, J. Liu (2014). Image-Based and Sensor-Based Approaches to Arabic Sign Language Recognition.
  2. C. Zhu, W. Sheng (2011). Wearable Sensor-Based Hand Gesture and Daily Activity Recognition for Robot-Assisted Living.
  3. T. Jaya, V. Rajendran (2018). Hand-Talk Assistive Technology for the Dumb.
  4. Lakshman Ramkumar, Sudharsana Premchand, Gokul Vijayakumar (2019). Sign Language Recognition Using Depth Data and CNN.
  5. P. Gupta, A. Agrawal, S. Fatima (2014). Sign Language Problem and Solutions for Deaf and Dumb People.
  6. Ashish Nikam, Aarti Ambekar (2016). Sign Language Recognition Using Image-Based Hand Gesture Recognition Techniques.
  7. N. Lele (2018). Image Classification Using Convolutional Neural Network.
  8. J. Raheja, A. Mishra, A. Chaudhary (2016). Indian Sign Language Recognition Using SVM.
  9. Archana Ghotkar, Gajanan Kharate (2014). Study of Vision-Based Hand Gesture Recognition Using Indian Sign Language.
  10. K. Simonyan, A. Zisserman (2014). Two-Stream Convolutional Neural Network for Video Action Recognition.
  11. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool (2016). Temporal Segment Networks: Towards Good Practices for Deep Action Recognition.
  12. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, Li Fei-Fei (2014). Large-Scale Video Classification with Convolutional Neural Networks.
  13. J. Sun, J. Wang, T. Yeh (2017). Video Understanding: From Video Classification to Captioning.
  14. Yaddula Babu (2015). Adaptive Video Streaming Through Server-Driven Rate Control in MANETs.
  15. Pradip Patel, Narendra Patel (2019). CNN Based Real Time Recognition of Hand Gestures for Indian Sign Language.

Funding

No external funding was declared for this work.

Conflict of Interest

The authors declare no conflict of interest.

Ethical Approval

No ethics committee approval was required for this article type.

Data Availability

Not applicable for this article.

How to Cite This Article

Jay Suthar. 2021. "Sign Language Recognition for Static and Dynamic Gestures". Global Journal of Computer Science and Technology - D: Neural & AI, Volume 21, Issue D2.

Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Keywords
Classification
GJCST-D Classification: I.2.7
Version of record

v1.2

Issue date

September 1, 2021

Language
en

Article Metrics
Total Views: 3663
Total Downloads: 839

