International Research Journal of Engineering and Technology (IRJET)    e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 08 Issue: 07 | July 2021    www.irjet.net

Gesture Tool – Aiding Disabled Via AI

Abhishek Kumar Saxena1, Krish Shah2, Mayank Soni3, Shravani Vidhate4
1-4 Student, Dept. of Computer Science and Engineering, MIT School of Engineering, MIT ADT University, Pune, Maharashtra, India

Abstract - Communication is one of the most important parts of living in a society. Most people are able to interact easily with those around them, but some are disabled and can neither speak nor hear. Although sign language makes it easier for deaf people to communicate with each other, it also creates a barrier between deaf or mute individuals and hearing individuals: most hearing people do not know how to "speak" sign language and have little patience to learn it, so interacting with them is generally inconvenient for deaf people. The purpose of this project is to build a real-time hand gesture recognition system, using concepts of Deep Learning and NLP, that recognizes hand gestures and translates them into speech as well as text messages, and vice versa.

Key Words: Communication, Deep Learning, NLP, Gesture, Recognition, Sign language

1. INTRODUCTION

Communication between humans is more than just the transfer of knowledge from one person to another. Facial expressions, body language, tone of voice, reactions, thought, meaning and choice of words all shape how we communicate. Some people are born with defects that leave them unable to hear or speak, while others acquire such an impairment through an accident. They do not know what their own speech sounds like, and spoken words carry no meaning for them. The gap between hearing people and deaf and mute people is wide and grows day by day: they cannot understand what other people say, and even when they do understand, they cannot always articulate themselves in a way that others can follow. Over the years, these people have been creative enough to come up with several ways of expressing themselves, most significantly sign language. This was a very important step in improving communication for deaf and mute people, and it gives them their best medium for communicating with others; however, it is difficult to understand for people who are not familiar with it. Gesture is a form of non-verbal communication based on hand movements, which makes it more preferable and easier; the proposed system applies computer vision to this hand gesture method.

2. LITERATURE SURVEY

[1] This paper suggests a system for understanding hand movements that serves not only as a means of communication between deaf or mute individuals and others, but also as an educator. Hand gestures are very common among these individuals and serve as a good means of communication, so the hand is taken as the input to a system that displays the corresponding result as text, voice, or both. The gesture recognition framework consists mainly of image acquisition, segmentation followed by morphological erosion, feature extraction and gesture recognition.

[2] A proposed system that is simple to implement, without complicated feature calculations. The authors measured a high gesture recognition rate, with 90% precision. Their approach maps hand gestures to speech and vice versa, using image acquisition, image processing, feature extraction and a few further principles of artificial intelligence and data mining.
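The acquisition-segmentation-erosion stages described in [1] and [2] can be sketched as follows. This is a minimal illustration, not the authors' implementation: the skin-colour thresholds and the 3x3 structuring element are assumed for the example, and plain NumPy is used in place of a full computer-vision library so the sketch stays self-contained.

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin-colour filter; thresholds are illustrative, not from the papers."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)

def erode(mask, k=3):
    """Binary morphological erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            # A pixel survives only if every neighbour in the k x k window is set.
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# Tiny synthetic "frame": a skin-coloured square on a dark background.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:6, 2:6] = [200, 120, 90]   # skin-like pixels
mask = skin_mask(frame)            # 4x4 block of True pixels
clean = erode(mask)                # erosion shrinks the block to 2x2
print(mask.sum(), clean.sum())     # prints: 16 4
```

In a real pipeline the eroded mask would feed the feature-extraction and recognition stages; erosion is used here, as in [1], to strip boundary noise from the segmented hand region before features are computed.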

© 2021, IRJET | Impact Factor value: 7.529
[3] This paper uses a vision-based technique in which continuous sequences of complex hand gesture frames are captured. The four primary steps for identifying complex hand gestures are pre-processing, segmentation, feature extraction and classification. Skin-colour filtering is used for segmentation, and eigenvectors and eigenvalues are used for feature extraction, with histogram matching for more detailed feature identification. The framework also proposes word sign prediction using one or both hands, a dynamic hand gesture word dataset for Indian Sign Language, and two-way communication.

[4] The framework extracts gestures from real-time video in Indian Sign Language (ISL) and maps them to clear, understandable human speech. Video-to-speech processing involves video frame creation, region-of-interest (ROI) discovery and image mapping in the information domain using a correlation-based approach, followed by audio generation with the Google Text-to-Speech (TTS) API. In the reverse direction, speech is converted to text using the Google Speech-to-Text (STT) API, and the text is mapped to the equivalent Indian Sign Language gestures, rendered as animated gestures from the database. The result is real-time translation between ISL video and its natural-language speech equivalent.
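The eigenvector/eigenvalue feature extraction mentioned in [3] amounts to projecting each frame onto the principal eigenvectors of the dataset covariance (PCA). The sketch below shows that projection on a toy random dataset; the frame size, number of frames and the choice of 5 components are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Toy "dataset": 20 flattened 8x8 hand-region frames (random stand-ins).
rng = np.random.default_rng(0)
frames = rng.random((20, 64))

# Centre the data and form the pixel-wise covariance matrix.
mean = frames.mean(axis=0)
centred = frames - mean
cov = centred.T @ centred / (len(frames) - 1)

# Eigen-decomposition; eigh returns eigenvalues in ascending order,
# so reverse to take the eigenvectors with the largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, ::-1][:, :5]

# Each frame becomes a compact 5-dimensional feature vector.
features = centred @ top
print(features.shape)   # prints: (20, 5)
```

These low-dimensional feature vectors would then go to the classification stage; a separate step such as the histogram matching of [3] can refine ambiguous matches.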

ISO 9001:2008 Certified Journal | Page 1480
