
Deaf Sign Language Interpreter Using Hand Gesture Recognition

Jezan M. Villalon

Abstract
Sign language is a complex language that uses hand gestures and facial expressions to
communicate. Deaf or mute students have many important things to share with society and with
the schools they attend, and a communication barrier keeps them from doing so. The purpose of
this study is to create a tool that lessens or eliminates the communication barrier between a deaf
or mute person and a hearing person (for example, a deaf student and a teacher). An assistive
tool for deaf and mute students was created, enabling them to communicate using hand gestures
or sign language. The study integrates hardware and software to build a tool that improves their
communication with other people. The tool consists of an Arduino microcontroller and a Leap
Motion sensor that together interpret the sign language of the deaf person.

Keywords: Deaf, assistive tool, hand gestures, Leap Motion, Arduino

Introduction
Persons with hearing and speech impairments are not able to communicate verbally.
These people speak in their own language, sign language, specifically ASL (American Sign
Language). Like any language it can be learned by anyone, but it is usually used only within a
relatively small community. In today's society, sign language speakers need a human sign
language interpreter to break this communication barrier.
This study aims to create an assistive tool that helps improve the lives of deaf and mute
people by giving them technology that lets them easily get in touch with people who have no
impairments, using hand gestures. The project is achieved by creating a hardware and software
tool. This integrated tool operates as an interpreter, helping the deaf or mute person
communicate using ASL (American Sign Language).
The Deaf Sign Language Interpreter Using Hand Gesture Recognition uses a Leap Motion
sensor: the deaf person simply performs a hand gesture and the tool displays its interpretation
on the computer screen. This tool helps deaf or mute people advance their social integration, as
it enables them to express themselves to non-sign-language speakers.
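As a rough illustration of the gesture-to-text loop described above, the sketch below assumes, purely for the sake of example, that the sensor driver has already reduced each frame to a pattern of extended fingers; the gesture table and its mappings are hypothetical and not the actual ASL vocabulary handled by the tool.

```python
# Minimal sketch of the interpretation loop, assuming each sensor frame has
# already been reduced to five booleans (one per finger: extended or not).
# Real ASL recognition needs much richer features (palm orientation, motion,
# fingertip positions); this only shows the lookup-and-display idea.

# Hypothetical gesture dictionary: extended-finger pattern -> displayed text.
# Order: (thumb, index, middle, ring, pinky)
GESTURE_TABLE = {
    (False, False, False, False, False): "A",   # closed fist (simplified)
    (True,  True,  True,  True,  True ): "B",   # open palm (simplified)
    (False, True,  False, False, False): "D",   # index finger only
    (True,  False, False, False, True ): "Y",   # thumb and pinky
}

def interpret(extended_fingers):
    """Look up the extended-finger pattern and return the matching text."""
    return GESTURE_TABLE.get(tuple(extended_fingers), "<unrecognized>")

if __name__ == "__main__":
    # Simulated readings standing in for real Leap Motion frames.
    sample_frames = [
        (False, True, False, False, False),
        (True, True, True, True, True),
        (True, False, True, False, False),
    ]
    for frame in sample_frames:
        print(frame, "->", interpret(frame))
```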
Related Works
Human Motion Recognition
Human motion can be detected by taking the difference between pixel values in
successive frames. The kinematic approach represents a motion trajectory as 2-D trajectory
points (X, Y, T) or 3-D trajectory points (X, Y, Z, T), where each point corresponds to an individual
joint value in a frame of the human pose. The image or shape approach represents motion using
optical flow, or using a Motion History Image (MHI) or Motion Energy Image (MEI). Human
motion can either be recognized directly from image sequences, or it can be recognized in a
multi-layer process. Generally, for simple actions, motion is recognized directly from image
sequences, and these can be seen as single-layer approaches. However, complicated activities
can be recognized using multi-layer recognition techniques. Depending on complexity, human
motion is conceptually categorized into gestures, actions, activities, interactions, and group
activities. Complicated motion can be recognized using multiple layers by decomposing it into
simple actions or gestures. Simple actions or gestures recognized at the lower levels are then
used for the recognition of complicated motions at the higher levels [2].
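The sketch below illustrates the two ideas from the paragraph above: differencing pixel values between successive frames, and folding the result into a Motion History Image that fades older motion. The frames are random arrays standing in for grayscale video, and the threshold and decay values are assumed for the example.

```python
# Frame differencing plus a simple Motion History Image (MHI) update.
import numpy as np

THRESHOLD = 30      # minimum pixel difference counted as motion (assumed value)
MHI_DURATION = 10   # how many frames a motion pixel stays visible in the MHI

def update_mhi(mhi, prev_frame, curr_frame):
    """Difference the two frames and fold the motion mask into the MHI."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion_mask = diff > THRESHOLD
    # Moving pixels are reset to full intensity; everything else decays by one.
    return np.where(motion_mask, MHI_DURATION, np.maximum(mhi - 1, 0))

if __name__ == "__main__":
    h, w = 120, 160
    rng = np.random.default_rng(0)
    mhi = np.zeros((h, w), dtype=np.int16)
    prev = rng.integers(0, 256, (h, w), dtype=np.uint8)   # stand-in first frame
    for _ in range(5):
        curr = rng.integers(0, 256, (h, w), dtype=np.uint8)
        mhi = update_mhi(mhi, prev, curr)
        prev = curr
    print("pixels currently marked as moving:", int((mhi == MHI_DURATION).sum()))
```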

Hand Gesture Recognition


Hand gestures are read by an input sensing device such as a mobile device or computer.
The device reads the movements of the human body and communicates with a computer that
uses these gestures as input. The gestures are then interpreted using algorithms based either on
statistical analysis or on artificial intelligence techniques. The primary goal of gesture recognition
research is to create a system that can identify specific human hand gestures and use them to
convey information. Recognizing a person's hand signs helps in communicating with deaf and
mute people and allows prompt action to be taken at that time [3].
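To make the "statistical analysis" route above concrete, here is a minimal nearest-neighbour classifier over gesture feature vectors. The features (imagined as normalized fingertip measurements) and the training samples are invented for illustration; a real system would extract them from the sensor data.

```python
# 1-nearest-neighbour classification of hand-gesture feature vectors.
import numpy as np

# Hypothetical training set: feature vector -> gesture label.
TRAIN_FEATURES = np.array([
    [0.1, 0.1, 0.1, 0.1],   # fist-like handshape
    [0.9, 0.9, 0.9, 0.9],   # open palm
    [0.9, 0.1, 0.1, 0.1],   # index finger extended
])
TRAIN_LABELS = ["A", "B", "D"]

def classify(features):
    """Return the label of the closest training sample."""
    distances = np.linalg.norm(TRAIN_FEATURES - np.asarray(features), axis=1)
    return TRAIN_LABELS[int(np.argmin(distances))]

if __name__ == "__main__":
    print(classify([0.85, 0.15, 0.05, 0.10]))   # expected: "D"
```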

Image Acquisition
According to Pratibha Pandey et al., image acquisition is the first step in any vision system.
In this application it is done using the IP Webcam Android application. The application uses the
phone's camera for continuous image capture and simultaneous display on the screen. The
captured images are streamed over the phone's Wi-Fi connection (or over a WLAN without
internet access, as used here) for remote viewing. The program accesses the image by
connecting to the device's IP address, and the image is then shown in the GUI [3].
The result of image capture is shown in Figure 1.
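A minimal sketch of this acquisition step is shown below: it pulls frames from the phone's stream over the local network and displays them in a window. The IP address and port are placeholders, and the "/video" endpoint is the MJPEG path the IP Webcam app commonly exposes; check the settings reported by your own device.

```python
# Read frames from an IP Webcam MJPEG stream and display them.
import cv2

STREAM_URL = "http://192.168.1.10:8080/video"   # placeholder device IP and port

def main():
    capture = cv2.VideoCapture(STREAM_URL)
    if not capture.isOpened():
        raise RuntimeError("Could not open the IP camera stream: " + STREAM_URL)
    while True:
        ok, frame = capture.read()          # grab the next streamed frame
        if not ok:
            break                           # stream ended or connection dropped
        cv2.imshow("IP Webcam", frame)      # simple GUI display of the frame
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break                           # press 'q' to quit
    capture.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```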

Methods and Algorithms


An architectural description involves modeling and representing the design using
appropriate elements such as architecture description languages and an architectural
framework. The architectural design provides the essential relationships among the major
structural elements [1].

The result of image capture is shown in Figure 2.
References:

[1] Chaudhary, Ankit. n.d. "Android based Portable Hand Sign Recognition System." Machine Vision Lab
19.

[2] Geetanjali Vinayak Kale, Varsha Hemant Patil. 2016. "A Study of Vision based Human Motion
Recognition and Analysis." International Journal of Ambient Computing and Intelligence 18.

[3] Pratibha Pandey, Vinay Jain. March 2015. "Hand Gesture Recognition for Sign Language Recognition:
A Review." International Journal of Science, Engineering and Technology Research (IJSETR) 7.
