
Blind’s Personal Assistance Application

CHAPTER 1

INTRODUCTION

Eyesight is one of the most indispensable human senses, and it plays a supremely important role in how humans perceive their surroundings. Many problems arise when a visually impaired person tries to carry out routine daily activities: identifying objects (especially while shopping), moving about physically, handling cash, withdrawing money from an ATM (which can be very time consuming), reading and writing using Braille, and many more difficulties at every moment of day-to-day life. As we know, many people cannot get their vision back through any kind of medication or surgery; they are referred to as totally blind. Therefore, to overcome as many of these problems as possible at minimum cost and in an effective manner, we are building an Android application called "Blind's Personal Assistant Application". It includes face recognition, text recognition, traffic signal light identification, navigation assistance, message passing, etc. It also includes a web application for the caretaker, used to track the blind person and to train new faces into the application.

Optical character recognition (OCR) is the automatic identification and alphanumeric encoding of printed text by means of an optical scanner and specific software. OCR software lets a machine read static images of text and translate them into editable data. OCR is also a significant tool for making documents, predominantly PDFs, accessible to blind and visually impaired people. Here the basic Android camera is used as the optical scanner to capture an image of a document and convert it into an editable soft copy.

Traffic lights are detected with the TensorFlow Object Detection API, and image processing techniques are then used to classify the state of the lights. If the light is "red" or "yellow", the application outputs the command "stop"; if it is "green", it outputs "go".

Face recognition is done using a machine learning algorithm with feature vectors. There is a pattern here: different faces have different dimensions, and similar faces have similar dimensions. The challenging part is converting a particular face into numbers, since machine learning algorithms only understand numbers. This numerical representation of a "face" (or of an element in the training set) is termed a feature vector. A feature vector consists of various numbers in a specific order.

The voice module records particular commands and helps the blind person access all the functionality of this application. Voice recognizers are used for recording and interpreting the voice commands. Almost all functionality in this application works on voice commands such as "my location", "help", etc. Likewise, instructions and results (from object detection, signboard detection, etc.) are delivered to the blind user as voice output, for which the Android built-in text-to-speech engine is used. The text-to-speech engine may be the device's own engine, Google's Text-to-Speech engine, the device manufacturer's engine, or any third-party text-to-speech engine downloaded from the Google Play Store.

With the rapid development of mobile communication and pervasive computing technology, the demand for location-aware services is rapidly increasing. The Global Positioning System (GPS) can provide accurate and reliable position information for such services: it is a space-based radio navigation system that provides reliable positioning, navigation, and timing. The GPS receiver obtains location details, such as latitude and longitude values, from the satellites. These location details are sent to the server so that the caretaker can track the blind person; caretakers see the person's last updated location. If the blind person faces an emergency, pressing the emergency access button immediately sends the emergency details, mainly the location, to the server.
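The report does not detail the server side; a minimal sketch of the tracking endpoints, assuming a small Flask application behind the caretaker's webpage, is given below. The routes and the in-memory store are hypothetical; a real deployment would use a database.

# Sketch of the server side of the tracking flow for the caretaker's web
# application. Routes and storage are assumptions for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)
last_location = {}  # blind_id -> {"lat": ..., "lon": ...}

@app.route("/location/<blind_id>", methods=["POST"])
def update_location(blind_id):
    # The phone posts its latest GPS fix here.
    last_location[blind_id] = {
        "lat": float(request.form["lat"]),
        "lon": float(request.form["lon"]),
    }
    return jsonify({"status": "ok"})

@app.route("/location/<blind_id>", methods=["GET"])
def get_location(blind_id):
    # The caretaker's tracking page reads the last updated fix from here.
    return jsonify(last_location.get(blind_id, {}))

if __name__ == "__main__":
    app.run(debug=True)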

In the emergency module, a blind person can request emergency help when in need. This works through voice capture, detecting the keyword "help". After detecting this keyword, the system automatically fetches the location details using GPS, and the latitude and longitude, along with the blind person's ID and a message, are sent to the caretaker.
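The transport is not specified in the report; a minimal client-side sketch posting the emergency message to the server over HTTP (using the requests package, with a hypothetical endpoint URL and field names) could look like this:

# Minimal sketch of the emergency upload from the phone to the server.
# The URL and field names are hypothetical.
import requests

def send_emergency(blind_id, latitude, longitude, message="help"):
    payload = {
        "blind_id": blind_id,   # identifies which user raised the alert
        "lat": latitude,        # from the phone's GPS receiver
        "lon": longitude,
        "message": message,
    }
    # The server stores the alert so the caretaker sees it with the location.
    response = requests.post("https://example.com/api/emergency",
                             data=payload, timeout=10)
    response.raise_for_status()

send_emergency("blind-01", 9.85, 76.71, "help")  # example coordinates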


1.1 Existing System

Many technologies are used to assist blind people:

 OrCam
 Guide cane or walking stick
 Braille
 QR life barcode and QR reader
 TalkBack

1.1.1 OrCam:

OrCam harnesses the power of artificial vision to assist people who are visually impaired. OrCam has created a technologically advanced device, unique in its ability to provide visual aid through a discreet wearable platform and a simple, easy-to-use interface, which serves to enhance the daily lives of people with vision loss. OrCam gives independence. Features of this system include reading text, recognizing faces, and identifying products. It converts text to audio, and it has two parts:

i. A lightweight camera that clips onto the glasses frame.

ii. A tiny wearable computer that fits in a pocket.

The disadvantage of OrCam is that it is very expensive.

Fig 1.1 OrCam


1.1.2 Guide cane or walking stick:

The most common cane tip is the basic rubber tip, which provides great traction and often includes a steel insert for increased durability. Another available tip is the tripod or quad cane tip, which attaches to the single tip of the cane but ends in three or four prongs.

Fig 1.2 Guide cane or walking stick

1.1.3 Braille:

Braille is a system of raised dots that can be read with the fingers by people who are blind or who have low vision. Teachers, parents, and others who are not visually impaired ordinarily read braille with their eyes. Braille is not a language; it uses a set of raised dots that can be felt with the fingers.

Fig 1.3 Braille


1.1.4 QR life barcode and QR reader:

It is an Android app for reading barcodes and QR codes. It is helpful while purchasing things.

Fig 1.4 Android App

1.2 Proposed System

Due to eye diseases, uncontrolled diabetes, accidents, and many other causes, the number of visually impaired persons increases every year, and they face a lot of problems. Here we propose an application for visually impaired persons, controlled by the volume key. It uses two components:

i. Mobile Application

The blind user carries this application with them.

ii. Web Application

The caretaker controls this application; the web application is mainly used for tracking the blind person and training new faces into the system.

The functionalities of this application are:

• Character recognition
• Face recognition
• Traffic light detection
• Navigation
• Message passing
• Tracking the blind person
• New face training

Speech and text are the main media of human communication. One of the most significant difficulties for a visually impaired person is reading hard-copy documents, name boards, etc. This application recognizes such text and outputs it as audio, so the user can read hard-copy documents and signboards very easily.

The next difficulty for a visually impaired person is identifying the people around them. This difficulty can be overcome using this app: the application stores images of people, checks for the presence of a known person in the frame, and informs the user via audio.

It is very hard for a visually impaired person to travel alone, so the application includes traffic light detection and navigation through Google Maps. Traffic lights are detected with the TensorFlow Object Detection API, and image processing techniques are then used to classify the state of the lights. If the light is "red" or "yellow", the application outputs the command "stop"; if it is "green", it outputs "go".

With the rapid development of mobile communication and pervasive computing technology, the demand for location-aware services is rapidly increasing. The Global Positioning System (GPS) can provide accurate and reliable position information for such services: it is a space-based radio navigation system that provides reliable positioning, navigation, and timing. The GPS receiver obtains location details, such as latitude and longitude values, from the satellites.

These location details are sent to the server so that the caretaker can track the blind person; caretakers see the person's last updated location. If the blind person faces an emergency, pressing the emergency access button immediately sends the emergency details, mainly the location, to the server.


A blind person can request emergency help when in need. This works through voice capture, detecting the keyword "help". After detecting this keyword, the system automatically fetches the location details using GPS, and the latitude and longitude, along with the blind person's ID and a message, are sent to the caretaker.

1.3 Objective

The objectives of this system are the set of activities that each module should carry out. These are:

A. Face Recognition: Train the system to perform face recognition so that it can identify people whose faces are already stored.
B. Text Recognition: To avoid Braille work, provide a way to read computer-printed documents and text in images using the OCR technique.
C. Voice Interaction: The application must be capable of interacting with the user by voice.
D. Traffic Signal Light Identification: Train the system to detect and identify traffic lights.
E. Navigation Assistance: Using Google Maps, the user can get directions and information about the current place.
F. Message Passing: The user can pass messages to the caretaker via voice; this is done using the panic button.


1.4 Scope

This project is currently designed so that a blind person can, on their own, recognize faces encountered in their surroundings, read printed documents, identify traffic lights, navigate, and pass messages. The faces recognized by the system are limited, because it can identify only trained faces; the caretaker can therefore train new faces into the system and track the blind person through a webpage. The application is controlled by the volume key, and features are accessed using keywords. Reading printed documents is done using optical character recognition, which can read only English as of now, but other languages can be added in the future.


CHAPTER 2

LITERATURE SURVEY

2.1 Proposed Smart Device for Visually Impaired Persons

This system enables blind people to handle an Android phone effectively. Blind people want to use services such as calling, getting battery-level notifications, listening to music, and getting the latest updates on the Android phone. The proposed system makes all these services available through voice commands. Selendroid enables communication between the smartphone and the various web servers.

It has the ability to identify spoken language and convert it into a machine-understandable format. This is done by the speech recognition engine (SRE). The individual speaker's input is read and segmented into vocabulary. The system performs actions that are usually performed by a sighted person. Visually impaired persons (VIPs) who need to use the Android phone give their input by voice to the SRE through a microphone or headset. The SRE converts the speech into text, which is given as input to the command recognition module. The command is recognized and identified using a morphological analyser. The SRE output controls the dialler, the music player, the Selendroid architecture (SA), and Google Maps.

The dialler manager gives the options of dial, hold, and disconnect. The music manager includes play, stop, pause, forward, and rewind for music tracks. A battery notification is given when the battery level falls to a certain range, i.e., 20%. Google Maps is used by the system to retrieve the longitude and latitude of the VIP, which are given to the concerned person, a person known to the VIP. Using the coordinates, the concerned person can easily identify the exact position of the user, which mitigates missing-person situations to some extent. After learning the location, the concerned person calls the user to check on the situation.


Advantage:

Practically implemented, this smart device gives VIPs an inexpensive Android mobile for getting recent information through the Selendroid architecture. It is an effective way to learn the latest news, bus routes, and weather reports. Using voice, people can express themselves across various domains, so the breadth of the application makes it an impressive tool in a ubiquitous environment. The system also notifies the VIP of a low battery level, so that recharging can be done immediately. It also supplies a better way of handling the situation when a person gets lost in an unknown environment.

Disadvantage:

This system concentrates only on how a mobile phone can be used and how its features can be accessed. It does not provide other services the blind want, such as face recognition, text recognition, and navigation.

2.2 An Assistive Mobile System Supporting Blind and VIPs When They Are Outdoors

Portable devices are nowadays widely used, and they have a lot of potential for aiding visually impaired people on a daily basis. This paper presents an Android application for smartphones, made especially to assist these persons. The application uses the smartphone's MEMS sensors and also information received from a few external sensor modules; together, the hardware modules form an assistive portable system. Communication between the smartphone and the external modules is via Bluetooth and Wi-Fi. Because the users are visually impaired, the application interface is designed to meet the necessary requirements. Moreover, communication between the smartphone and its user is handled by a text-to-speech software module. The assistive activities covered by these modules range from making a phone call to indoor and outdoor guidance.

Advantage:

According to tests, the assistive system that uses this Android application proves to be efficient, portable, small, and cost-effective, and it does not require many hours of training.


Disadvantage:

The application interface contains several screen areas that correspond to the various assistive software modules. When the visually impaired user touches the touchscreen in a certain area, the phone announces the area reached, and if the user wants to access that assistive module, they have to double-tap the area. This is very difficult for blind users.

2.3 Text Recognition and Face Detection Aid for Visually Impaired Persons Using Raspberry Pi

The proposed method is a camera-based assistive text reader that helps a blind person read the text present on labels, printed notes, and products. The implemented idea involves text recognition and face detection from an image taken by a camera mounted on spectacles, with the text recognized using OCR. The recognized text file is converted to voice output by the eSpeak engine. The system is good for portability, which is achieved by providing a battery backup; portability allows the user to carry the device anywhere and use it at any time. A prototype was developed that uses a camera on spectacles and a Raspberry Pi, working in real time.

The proposed system has two different modes; the face and text modes are selected using a mode-control switch. The system captures a frame and checks for the presence of text in it. It also checks for the presence of a face in the frame and informs the user via an audio message. If characters are found by the camera, the user is informed that an image with some text was detected; if the user then wants to hear the content of the image, he can use a switch to capture it. eSpeak creates an analog signal corresponding to the text file given as input, and this signal is fed to a headphone to produce the audio output.

Advantage:

The proposed idea solves the portability issue by using a Raspberry Pi. MATLAB is replaced with OpenCV, which results in faster processing; OpenCV, the latest tool for image processing, has more supporting libraries than MATLAB. The device consists of a camera installed on spectacles, and the processor is so small that it can be kept in the user's pocket. A wired connection is provided to the camera for fast access. A power bank lets the device work for about 6 to 8 hours. These features make the device simple, reliable, and more user friendly. The proposed system can be improved by adding various components. Adding GPS would enable the user to get directions and information about their present location. The device could also be used for face recognition: the visually impaired person would not need to guess who people are, since the camera captures their faces. A GSM module could be added to implement a panic button; if the user is in trouble, he can use the panic button to seek help by sending his location to some predefined mobile numbers.

Disadvantage:

It relies on a dedicated hardware device, which the user cannot carry easily, and it is expensive.

2.4 Android Assistant Eyemate for Blind and Blind Tracker

Eyemate is a blind-assistive and tracking embedded system. In this system, the blind person is navigated through a spectacle interfaced with an Android application and is guided by Bengali or English voice commands generated by the application according to the obstacle position. Using voice commands, the blind person can establish a voice call to a predefined number just by pressing the headset button, without touching the phone, and can also control the application by voice. Emergency numbers are saved in the application. The blind-assistive application gets the latitude and longitude using GPS and sends them to the server. The movement of the blind person is tracked through another application that points out the person's current position on Google Maps. The system includes object detection, the Eyemate for Blind Android application, voice commanding, and a blind-tracker application.

Advantages:

 It provides the accurate distance from obstacles.
 It alerts the blind person about the obstacle position.
 It offers voice commands and emergency call establishment.
 It finds the current location of the blind person.

Disadvantages:

 The application is interfaced with a hardware module.
 Two different applications are required: one for guiding the blind person and the other for tracking them.
 The microcontroller that is used may get damaged.
 Bengali and English are the only languages supported by the system.

2.5 Image Recognition for Visually Impaired Person by Voice

This system presents a method by which a blind person can get information about the shape of an image through a speech signal. It proposes an algorithm for image recognition by speech sound, enabling visually impaired people to see with the help of their ears. Edge detection is used to recognize the objects in the image, and sound is generated in MATLAB based on the edge-detection information. Thus, the objects in front of the blind person are recognized and announced to the person by voice.

Advantages:

 The blind person can learn the name of the object in front of them by hearing the voice output from the system.
 It can act as an eye for a visually impaired person.
 It is portable.
 It consumes less power.

Disadvantages:

 Hardware is required to build the system.
 The performance decreases with the complexity of the image.
 It is still not considered an efficient and practical application.

2.6 Comparison

Implementing smart devices for VIPs provides an inexpensive Android mobile for getting recent information through the Selendroid architecture. Using voice, people can express themselves across various domains, so the application is an impressive tool in a ubiquitous environment. The system notifies the user of a low battery level, so that recharging can be done immediately. But the system concentrates only on how to use a mobile phone and how its features can be accessed; it does not provide the other services that the blind want.


The assistive mobile system is efficient, portable, and small. It is cost-effective and does not require a long training time. But this system may be difficult for blind users in some cases.

The text recognition and face detection aid using a Raspberry Pi solves the portability issue. A power bank provided with the system lets it work for about 6 to 8 hours, and the device is simple, reliable, and user friendly. But it depends on a dedicated hardware system that the user cannot carry easily, and it is also very expensive.

Eyemate is a device that provides the accurate distance of obstacles from the user and guides the blind person by voice. It also finds the current location of the blind person, which their family can use to track them. But the system has a hardware module, two applications are needed (one for tracking and another for guiding the blind person), and it supports only two languages, Bengali and English.

Image recognition by voice is a good companion for a blind person, helping them identify the objects in front of them; it is portable and consumes little power. But it also has a hardware part, its performance decreases with the complexity of the captured image, and it has not yet proven to be an efficient and practical application.


CHAPTER 3

PROPOSED SYSTEM

3.1 Architecture

[Fig 3.1 System architecture: the camera captures an image, which passes through image processing to text extraction, face detection (with capture of new faces), or traffic light detection; extracted text goes through a text-to-speech converter, and the result is delivered as voice output.]

3.2 Description

In this system, the camera on the Android smartphone is used for capturing images. The captured image is then processed using several image processing techniques.

For text recognition, after image processing the image is passed to a text extraction unit that extracts the textual content from the image. The result of text extraction is directed to a text-to-speech converter that turns the textual content into voice; any converter can be used, such as Google's Text-to-Speech engine, the device's built-in text-to-speech engine, or any third-party engine. Thus the text in the image is converted into voice that the visually impaired person can hear.

Optical character recognition involves detecting text content in images and translating the images into encoded text that the computer can easily understand. An image containing text is scanned and analyzed in order to identify the characters in it; upon identification, each character is converted to machine-encoded text. To us, text on an image is easily discernible and we can detect characters and read the text, but to a computer it is all a series of dots. The image is first scanned, and the text and graphic elements are converted into a bitmap, essentially a matrix of black and white dots. The image is then pre-processed: the brightness and contrast are adjusted to enhance the accuracy of the process. The image is next split into zones identifying the areas of interest, such as where the images or text are, which kicks off the extraction process. The areas containing text are broken down further into lines, words, and characters, and the software matches the characters through comparison and various detection algorithms. The final result is the text from the image. The process may not be 100% accurate and might need human intervention to correct elements that were not scanned correctly; error correction can also be achieved using a dictionary or natural language processing (NLP). The output can then be converted to other media such as Word documents, PDFs, or even audio content through text-to-speech technologies. For this OCR component, Python-Tesseract (PyTesseract), a wrapper library for Google's Tesseract-OCR Engine, is used.
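As a rough illustration of this pipeline (the report gives no code), a minimal PyTesseract sketch is shown below, with the pre-processing reduced to grayscale conversion and a simple threshold; the input file name is hypothetical.

# Minimal OCR sketch with PyTesseract: load an image, apply simple
# pre-processing (grayscale + threshold, standing in for the brightness and
# contrast adjustment described above), then extract the text. Assumes the
# Tesseract engine and the pytesseract and Pillow packages are installed.
from PIL import Image
import pytesseract

def ocr_image(path):
    image = Image.open(path).convert("L")                  # grayscale bitmap
    image = image.point(lambda p: 255 if p > 127 else 0)   # crude binarization
    return pytesseract.image_to_string(image)              # character matching

print(ocr_image("document.jpg"))  # hypothetical input file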

The Flask web framework is used to create a simple OCR server that can take pictures via the webcam or accept uploaded photos for character recognition. Pipenv is used as well, since it handles the virtual-environment setup and requirements management. Besides those, the Pillow library, a fork of the Python Imaging Library (PIL), handles the opening and manipulation of images in many formats in Python.
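A minimal sketch of such a Flask endpoint is given below; the route and form-field names are assumptions for illustration.

# Minimal Flask OCR server sketch: accepts an uploaded image and returns the
# extracted text as JSON. The route and form-field names are hypothetical.
from flask import Flask, request, jsonify
from PIL import Image
import pytesseract

app = Flask(__name__)

@app.route("/ocr", methods=["POST"])
def ocr():
    uploaded = request.files["image"]   # photo from the webcam or an upload
    text = pytesseract.image_to_string(Image.open(uploaded.stream))
    return jsonify({"text": text})

if __name__ == "__main__":
    app.run(debug=True)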

This system concentrates on PyTesseract, although there are other Python libraries that can help extract text from images, such as:

• Textract, which can extract data from PDFs but is a heavy package.

• PyOCR, which offers more detection options such as sentences, digits, or words.

In the case of traffic light detection, the image is processed using the traffic light detection algorithm. If the light on the traffic light is found to be red or yellow, the application asks the user to "stop"; if the light is green, the application commands the user to "go".
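The report names the TensorFlow Object Detection API for locating the light but gives no code for classifying its state; a sketch of the color-classification step using OpenCV, applied to an already-detected traffic-light crop and using assumed HSV thresholds, could look like this:

# Sketch of the state-classification step on a cropped traffic-light image.
# A detector (e.g. the TensorFlow Object Detection API) is assumed to have
# produced the crop already; the HSV ranges below are illustrative guesses.
import cv2
import numpy as np

def classify_light(crop_bgr):
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    ranges = {
        "red":    [((0, 100, 100), (10, 255, 255)),
                   ((160, 100, 100), (180, 255, 255))],   # red wraps around hue 0
        "yellow": [((18, 100, 100), (35, 255, 255))],
        "green":  [((40, 100, 100), (90, 255, 255))],
    }
    counts = {}
    for color, bounds in ranges.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in bounds:
            mask = cv2.bitwise_or(mask, cv2.inRange(hsv, np.array(lo), np.array(hi)))
        counts[color] = cv2.countNonZero(mask)   # pixels matching this color
    state = max(counts, key=counts.get)          # dominant lit color wins
    return "stop" if state in ("red", "yellow") else "go"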

For face detection, the image from the camera is processed in order to extract faces from the captured image. The extracted faces are compared with the ones already stored in the application's database; if a face matches one in the database, the details of that person are passed to the user through voice. To understand how face recognition works, let us first get an idea of the concept of a feature vector. Every machine learning algorithm takes a dataset as input and learns from this data: it goes through the data and identifies patterns in it. For instance, suppose we wish to identify whose face is present in a given image; there are multiple things we can look at as a pattern:

 Height/width of the face.

 Height and width may not be reliable, since the image could be rescaled to a smaller face. However, even after rescaling, the ratios remain unchanged: the ratio of the height of the face to the width of the face won't change.

 Color of the face.

 Width of other parts of the face, like the lips, nose, etc.

Clearly, there is a pattern here: different faces have different dimensions like the ones above, and similar faces have similar dimensions. The challenging part is converting a particular face into numbers, since machine learning algorithms only understand numbers. This numerical representation of a "face" (or of an element in the training set) is termed a feature vector. A feature vector consists of various numbers in a specific order.


As a simple example, we can map a “face” into a feature vector which can comprise various
features like:

 Height of face (cm)

 Width of face (cm)

 Average color of face (R, G, B)

 Width of lips (cm)

 Height of nose (cm)

So, our image is now a vector that could be represented as (23.1, 15.8, 255, 224, 189, 5.2, 4.4). Of course, countless other features could be derived from the image (for instance, hair color, facial hair, spectacles, etc.). However, for this example, let us consider just these five simple features.

Now, once each image is encoded into a feature vector, the problem becomes much simpler. Clearly, when two faces (images) represent the same person, the derived feature vectors will be quite similar; put another way, the "distance" between the two feature vectors will be quite small.
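To make the distance idea concrete, here is a small worked sketch using the example vector above; the second vector is a made-up near-duplicate standing in for a new photo of the same person.

# Euclidean distance between two hypothetical feature vectors; a small
# distance suggests the two faces belong to the same person.
import numpy as np

face_a = np.array([23.1, 15.8, 255, 224, 189, 5.2, 4.4])  # vector from the text
face_b = np.array([23.0, 15.9, 253, 226, 187, 5.1, 4.5])  # same person, new photo

distance = np.linalg.norm(face_a - face_b)
print(f"distance = {distance:.2f}")  # small value -> likely the same person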

Machine learning can help us here with two things:

1) Deriving the feature vector: it is difficult to manually list all of the features because there are just so many. A machine learning algorithm can intelligently derive many such features. For instance, a complex feature could be the ratio of the height of the nose to the width of the forehead; it would be quite difficult for a human to list all such "second-order" features.

2) Matching algorithms: once the feature vectors have been obtained, a machine learning algorithm needs to match a new image against the set of feature vectors present in the corpus.

The face recognition algorithm is built using some well-known Python libraries.
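The report does not name the libraries; assuming the popular face_recognition package (a wrapper around dlib's face embeddings), a minimal matching sketch could look like the following. The file names are hypothetical, and the known face is one the caretaker has trained through the web application.

# Minimal face-matching sketch, assuming the third-party face_recognition
# package. File names and the printed message are hypothetical.
import face_recognition

# One stored (trained) face and one freshly captured frame.
known_image = face_recognition.load_image_file("trained_faces/alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]  # 128-number feature vector

frame = face_recognition.load_image_file("captured_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    if match:
        print("Alice is in front of you")  # would be spoken via text-to-speech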


CHAPTER 4

PROJECT PLANNING

2019 August - Project initiation and introduction.

2019 December - Collect details of the project.

2020 January - Learn the basics of Python, SQL, HTML, and Android.

2020 February - Start project coding.

2020 March 1 - 50% of the code completed.

2020 March (end) - 100% of the code completed.

2020 April - Testing.

4.1 Cost

Approximately 1000/-


CHAPTER 5

CONCLUSION

People who are completely blind or have impaired vision usually have a difficult time
navigating outside the spaces that they're accustomed to. In fact, physical movement is one of the
biggest challenges for blind people. Traveling or merely walking down a crowded street can be
challenging. Because of this, many people with low vision will prefer to travel with a sighted
friend or family member when navigating unfamiliar places. Also, blind people must memorize
the location of every obstacle or item in their home environment. Another difficulty is identifying their friends and close ones.

The proposed system is a good means for visually impaired people to overcome these difficulties. The system has features such as traffic light detection, text recognition, face recognition and identification, navigation, and an emergency system, all controlled by voice. The proposed system is an Android application along with a webpage: the Android application is for the visually impaired person, and the webpage is for the caretaker.
The android application is for the visually impaired person and the webpage is for the caretaker.
By using the mobile application, the visually impaired person will be able to read hard copies of documents, identify close ones, and so on. Through the webpage, the caretaker can track the blind person; caretakers are also the ones who train new faces into the system for identification.
Therefore, the proposed system will be a good companion for the visually impaired people. The
system will be a substitute for their eyes and can be helpful for guiding them.


REFERENCES

[1] "Text Recognition and Face Detection for Visually Impaired Person Using Raspberry Pi", IEEE International Conference on Information Networking (ICOIN), April 2018.
[2] Pranob K. Charles, V. Harish, M. Swathi, and Ch. Deepthi, "A Review on the Various Techniques used for Optical Character Recognition", International Journal for Engineering and Research Solution.
[3] Shraddha A. Kamble, "An Approach for Object and Scene Detection for Blind Peoples Using Vocal Vision", International Journal of Engineering Research and Applications, ISSN 2248-9622, Vol. 4, Issue 12, pp. 01-03, December 2014.
[4] Qian Lin, "Let Blind People See: Real-Time Visual Recognition with Results Converted to 3D Audio", International Journal of Engineering Research and Applications, Vol. 6, Issue 5, pp. 8-9, June 2015.
[5] Shivaji Sarokar, Seema Udgirkar, Sujit Gore, and Dinesh Kakust, "Object Detection System for Blind People", International Journal of Innovative Research in Computer and Communication Engineering, Vol. 4, Issue 9, pp. 12-20, September 2016.
[6] X. Chen and A. L. Yuille, "AdaBoost Learning for Detecting and Reading Text in City Scenes", Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Vol. 9, pp. 366-373, 2010.
