Sign Language Recognition for Static and Dynamic Gestures

Article ID: 7L73I

Enhanced AI models identify static and dynamic gestures efficiently.


Jay Suthar
Devansh Parikh
Tanya Sharma
Avi Patel

Abstract

Humans are social animals, which makes communication an essential part of human life. People use verbal and non-verbal forms of language to communicate, but not everyone is capable of oral speech, such as hearing-impaired and mute people. Sign language was consequently developed for them, yet communication with those who do not know it remains impaired. Therefore, this paper proposes a system that uses a CNN for the classification of alphabet and number gestures. Alphabet and number gestures are static gestures in Indian Sign Language, and a CNN is used because it gives very good results for image classification; hand-masked (skin-segmented) images are used for model training. For dynamic hand gestures, the system uses an LSTM network for the classification task, as LSTMs are known for accurate prediction on time-series data. In summary, this paper presents two models for recognising different types of hand gestures: a CNN for static prediction and an LSTM for dynamic prediction.
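The abstract mentions training the CNN on hand-masked (skin-segmented) images. As a minimal sketch of what such a preprocessing step could look like, the snippet below thresholds pixels in the YCbCr colour space using commonly cited skin-tone ranges; the paper does not specify its exact segmentation method, so the thresholds and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical skin-segmentation sketch: the Cr/Cb ranges below are
# commonly cited skin-tone bounds, not values taken from the paper.
SKIN_CR = (133.0, 173.0)
SKIN_CB = (77.0, 127.0)

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean HxW mask of likely skin pixels for an HxWx3 RGB image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Standard RGB -> YCbCr chroma conversion (ITU-R BT.601 coefficients).
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((SKIN_CR[0] <= cr) & (cr <= SKIN_CR[1]) &
            (SKIN_CB[0] <= cb) & (cb <= SKIN_CB[1]))

def mask_hand(rgb: np.ndarray) -> np.ndarray:
    """Zero out non-skin pixels so only the hand region reaches the classifier."""
    return rgb * skin_mask(rgb)[..., None]
```

The masked images would then be resized to a fixed shape and fed to the CNN; for dynamic gestures, a per-frame feature sequence would instead be passed to the LSTM.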



Jay Suthar, Devansh Parikh, Tanya Sharma, and Avi Patel. 2021. "Sign Language Recognition for Static and Dynamic Gestures." Global Journal of Computer Science and Technology – D: Neural & AI, Volume 21, Issue D2.


Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Classification
GJCST-D Classification: I.2.7
