
IJSTE - International Journal of Science Technology & Engineering | Volume 2 | Issue 10 | April 2016 ISSN (online): 2349-784X

Electronic Eye for Visually Challenged People

Dr. Mrs. T. Yasodha, Associate Professor, Department of Electronics and Communication Engineering, CCET, Oddanchatram, Dindigul, Tamil Nadu

Maria Lean V P, UG Scholar, Department of Electronics and Communication Engineering, CCET, Oddanchatram, Dindigul, Tamil Nadu

Linchu Ann Sabu, UG Scholar, Department of Electronics and Communication Engineering, CCET, Oddanchatram, Dindigul, Tamil Nadu

Saumya Mariam Thomas, UG Scholar, Department of Electronics and Communication Engineering, CCET, Oddanchatram, Dindigul, Tamil Nadu

Sneha Mary Thomas, UG Scholar, Department of Electronics and Communication Engineering, CCET, Oddanchatram, Dindigul, Tamil Nadu

Abstract

Assistive technologies are being developed to help visually impaired people live independently and confidently. This project work proposes a camera-based assistive text reading framework that helps blind persons read text labels and product packaging on hand-held objects in their daily lives. The work is framed in three stages: image capturing, text recognition and speech output, and is intended to assist blind people in their daily life. The entire application runs on a Raspberry Pi B. The Raspberry Pi B is a credit-card-sized single-board computer built around a System on a Chip (SoC), a design that places all the electronics needed to run a computer on a single chip: instead of individual chips for the CPU, GPU, USB controller and RAM, everything is packed into one package. The Raspberry Pi B needs an operating system to start up. To reduce cost, it omits the on-board non-volatile memory used in more traditional embedded systems to store boot loaders, Linux kernels and file systems; instead, an SD/MMC card slot is provided for this purpose. After boot, the Raspberry Pi B executes the application program. Obstacle detection is also provided, so that obstacles in front of the user are recognised and announced as voice output to the blind user through an earphone.
Keywords: Obstacle detection, text recognition, Raspberry Pi, System on a Chip

I. INTRODUCTION

According to the WHO's "10 facts about blindness", 314 million people worldwide are visually impaired, of whom 45 million are blind. Reading is obviously essential in today's society: printed text is everywhere, in the form of reports, receipts, bank statements, product packages, instructions on medicine bottles, and so on. The ability of people who are blind or have significant visual impairments to read printed labels and product packages enhances independent living and social self-sufficiency. We therefore propose a system that helps blind people read such text and that clearly identifies obstacles in front of them, so that they do not trip and fall. The entire application runs on a Raspberry Pi board, a miniature marvel that packs considerable computing power into a footprint no larger than a credit card. Python is a simple programming tool available on the Raspberry Pi. Python is a high-level language: its code is written in largely recognisable English, providing the Pi with commands in a manner that is quick to learn and easy to follow. Python is published under an open-source licence and is freely available for Linux, OS X and Windows computer systems. This cross-platform support means that software written in Python on the Pi can run on computers with almost any other operating system as well, except where the program uses Pi-specific hardware such as the GPIO port. In this work we use OpenCV (Open Source Computer Vision) for image comparison; it is a library of programming functions aimed mainly at real-time computer vision. OpenCV's application areas include 2D and 3D feature toolkits, object identification, segmentation and recognition, and facial recognition systems.

II. EXISTING SYSTEM

Today there are already a few systems that show some promise for portable use, but they cannot handle product labelling. Blind Braille readers often prefer electronic Braille displays, but these are prohibitively expensive; the search is therefore on for a low-cost refreshable display that would go beyond current technologies and deliver graphical content as well as text [1]. Tactile maps are efficient tools for improving the spatial understanding and mobility skills of visually impaired people. Their limited adaptability can be compensated for with haptic devices that display graphical information, but their assessment is frequently limited to performance-based metrics, which can hide potential spatial abilities [2]. Recent research on the perception of tangible pictures in sighted and blind people shows that haptic picture naming accuracy depends on familiarity and access to semantic memory, just as in visual recognition [3]. Portable bar-code readers designed to help blind people identify different products against an extensive product database can give blind users access to product information through speech and Braille, but a major limitation is that it is very hard for blind users to find and correctly point at the bar code. More generally, existing text-reading systems are designed for, and perform best with, document images that have simple backgrounds, standard fonts, a small range of font sizes and well-organised characters, rather than commercial product boxes with multiple decorative patterns. Most state-of-the-art OCR software cannot directly handle scene images with complex backgrounds.


Fig. 1: Refreshable Braille display


Fig. 2: Tactile maps

III. PROPOSED SYSTEM


Fig. 3: Proposed system

This project work proposes a camera-based assistive text reading framework with audio output, built around the Raspberry Pi. Assistive technologies are being developed so that visually impaired people can live confidently; this project helps blind people read text labels and product packaging on hand-held objects in their daily lives. The work is framed in three stages. First, image capturing: using a mini camera, the text that the user needs to read is captured as an image and sent to the image-processing platform. Secondly, text recognition: a text recognition algorithm filters the text out of the image. Finally, speech output: the filtered text is passed to this stage to produce an audio output. The system is intended to assist blind people in their daily life, and the entire application runs on the Raspberry Pi.
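A minimal sketch of the three-stage pipeline is shown below. The camera interface (OpenCV's VideoCapture), the text-recognition engine (pytesseract) and the command-line call to eSpeak are illustrative assumptions; the paper does not prescribe these particular tools.

```python
# Sketch of the capture -> text recognition -> speech pipeline.
# pytesseract stands in for the text-recognition stage; it is NOT the paper's own algorithm.
import subprocess

import cv2
import pytesseract  # hypothetical OCR stand-in


def read_label_aloud(camera_index=0):
    cam = cv2.VideoCapture(camera_index)      # stage 1: image capturing
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()   # stage 2: text recognition

    if text:
        subprocess.run(["espeak", text])      # stage 3: speech output through the earphone


if __name__ == "__main__":
    read_label_aloud()
```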

IV. HARDWARE DESIGN

In our project we use a Raspberry Pi B module, a camera, an ultrasonic sensor, a headphone and a battery. All the components are interfaced with the central Raspberry Pi board through its GPIO pins. The ultrasonic sensor works on the same principle as radar: it detects an obstacle in front of the person and measures the range to that obstacle. The camera captures images continuously and sends them to the Raspberry Pi board. The headphone is used for announcements.


Pi Camera Module

The Pi camera is used to capture images continuously from the scene in front of the user and feed them to the Raspberry Pi board.
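Continuous capture can be sketched as follows, assuming the picamera Python library; the resolution, frame rate and file naming pattern are illustrative choices, not values given in the paper.

```python
# Sketch of continuous image capture with the Raspberry Pi camera (picamera library).
import time

from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)   # assumed resolution
time.sleep(2)                    # let the sensor settle before the first frame

# capture_continuous keeps saving frames and yields each saved filename in turn
for i, filename in enumerate(camera.capture_continuous("frame_{counter:03d}.jpg")):
    print("captured", filename)  # each frame would be handed to the text-reading pipeline
    time.sleep(1)                # roughly one frame per second
    if i >= 9:                   # stop after ten frames in this sketch
        break

camera.close()
```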

Ultrasonic Sensor Module

In this system an ultrasonic sensor is used to detect obstacles in front of the blind person.
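A minimal ranging sketch is given below, assuming an HC-SR04-style sensor wired to two GPIO pins; the sensor model, the pin numbers and the 1 m warning threshold are assumptions, not values from the paper.

```python
# Sketch of obstacle detection with an ultrasonic sensor on the Raspberry Pi's GPIO pins.
import subprocess
import time

import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                  # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)


def distance_cm():
    """Return the distance to the nearest obstacle in centimetres."""
    GPIO.output(TRIG, True)          # a 10 microsecond trigger pulse starts one measurement
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:     # time how long the echo pulse lasts
        end = time.time()

    # pulse width x speed of sound (34300 cm/s), halved for the round trip
    return (end - start) * 34300 / 2


if __name__ == "__main__":
    if distance_cm() < 100:          # assumed 1 m warning threshold
        subprocess.run(["espeak", "obstacle ahead"])   # announcement through the headphone
    GPIO.cleanup()
```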


Headphone

The headphones are used to make announcements to the blind person.


Raspberry Pi Development Board


Fig. 4: Raspberry Pi board

V. BLOCK DIAGRAM


Fig. 5: Block diagram of the proposed system


VI. EXPLANATION

In this block diagram, the whole system is controlled by the ARM11 processor on the Raspberry Pi board. The system consists of the Raspberry Pi, the Pi camera, an SD card and a personal computer, connected through USB adaptors. The Raspberry Pi is the key element of the processing module. First, image capturing: the image is taken with the Pi camera. Secondly, text recognition: this is done by histogram processing. Finally, speech output: the recognised text content is converted into speech.
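The paper does not spell out the histogram-processing step; one plausible reading is a global threshold chosen from the grey-level histogram, for which Otsu's method in OpenCV is a common choice. The sketch below only illustrates that interpretation, and the file names are hypothetical.

```python
# Sketch of histogram-based text filtering via Otsu thresholding (an assumed interpretation).
import cv2

frame = cv2.imread("captured_label.jpg")            # hypothetical frame from the Pi camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold that best separates the two modes of the
# grey-level histogram (text vs. background), giving a binary image for recognition.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("text_mask.png", binary)
```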

VII. SOFTWARE SPECIFICATIONS AND FRAMEWORK

Software Specifications

1) Operating system: Linux
2) Platform: OpenCV (Linux library)

Linux Operating System

The Linux open-source operating system, or Linux OS, is a freely distributable, cross-platform operating system based on Unix that can be installed on PCs, laptops, netbooks, mobile and tablet devices, video game consoles, servers, supercomputers and more.

OpenCV Library

OpenCV is an open-source computer vision library originally developed by Intel. It is free for commercial and research use under a BSD (Berkeley Software Distribution) licence. The library is cross-platform and runs on Linux, Windows and Mac OS X. It focuses mainly on real-time image processing; if it finds Intel's Integrated Performance Primitives on the system, it will use these commercially optimised routines to accelerate itself.

VIII. RESULT

We have thus added an audio output component that informs the blind user of the recognised text in the form of speech. The image is recognised using a content-based image retrieval (CBIR) system that involves 3D colour histogram processing. Features are extracted from each image using an image descriptor, and the dataset is indexed. The extracted features are compared numerically using OpenCV: when a query image is submitted, its features are extracted and similarity functions are applied to compare the query features against the features already indexed, and the corresponding text is produced. Finally, the text is converted to speech using the eSpeak application installed on the Raspberry Pi. This work helps visually challenged people live confidently and independently in society. We also implement obstacle detection using ultrasonic sensors, so that users do not need any manual support.
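The colour-histogram matching described above can be sketched with OpenCV as follows; the bin count, the correlation metric and the dataset file names are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of CBIR matching with a 3D colour histogram as the image descriptor.
import cv2


def describe(image_path, bins=8):
    """Return a flattened, normalised 3D colour histogram for one image."""
    image = cv2.imread(image_path)
    hist = cv2.calcHist([image], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()


def best_match(query_path, index):
    """Compare the query descriptor with every indexed descriptor and return the closest name."""
    query = describe(query_path)
    scores = {name: cv2.compareHist(query, feat, cv2.HISTCMP_CORREL)
              for name, feat in index.items()}
    return max(scores, key=scores.get)   # correlation: higher means more similar


# hypothetical pre-indexed dataset of product images
index = {name: describe(name) for name in ["soap.jpg", "medicine.jpg"]}
print(best_match("query.jpg", index))
```

The text associated with the winning entry could then be passed to eSpeak, as in the pipeline sketch given earlier.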

IX. CONCLUSION AND FUTURE WORK

The project titled "Electronic Eye for Visually Challenged People" has been successfully designed and tested. It was developed by integrating all of the hardware components and the software used; the presence of every module has been reasoned out and placed carefully, contributing to the best working of the unit. Using the highly capable Raspberry Pi board and the help of growing technology, the project has been successfully implemented. In future work we will address the significant human-interface issues associated with reading text for blind users and extend the project to various real-time applications. The current system relies on a stored database; future work will provide audio output in a real-time manner. We can also provide local navigation so that blind people can move from one place to another without depending on others, and the system can be extended to read books, newspapers, magazines, articles and similar material aloud.

REFERENCES

[1] "Efficiently Operating Wireless Nodes Powered by Renewable Energy Sources."
[2] M. Médard and A. Sprintson, Network Coding: Fundamentals and Applications. New York, NY, USA: Academic, 2011.
[3] S. Katti et al., "XORs in the air: Practical wireless network coding," ACM SIGCOMM Comput. Commun. Rev., vol. 36, no. 4, pp. 243–254, Oct. 2006.
[4] M. Effros, T. Ho, and S. Kim, "A tiling approach to network code design for wireless networks," in Proc. IEEE ITW, 2006, pp. 62–66.
[5] X. He and A. Yener, "On the energy-delay trade-off of a two-way relay network," in Proc. 42nd Annu. CISS, 2008, pp. 865–870.
[6] E. Ciftcioglu, Y. Sagduyu, R. Berry, and A. Yener, "Cost-delay tradeoffs for two-way relay networks," IEEE Trans. Wireless Commun., vol. 10, no. 12, pp. 4100–4109, Dec. 2011.
[7] Y.-P. Hsu et al., "Opportunities for network coding: To wait or not to wait," in Proc. IEEE ISIT, 2011, pp. 791–795.
[8] V. S. Borkar, "Control of Markov chains with long-run average cost criterion: The dynamic programming equations," SIAM J. Control Optim., vol. 27, no. 3, pp. 642–657, 1989.
[9] R. Cavazos-Cadena and L. I. Sennott, "Comparing recent assumptions for the existence of average optimal stationary policies," Oper. Res. Lett., vol. 11, no. 1, pp. 33–37, Feb. 1992.
[10] L. I. Sennott, "The average cost optimality equation and critical number policies," Probability Eng. Inf. Sci., vol. 7, no. 1, pp. 47–67, 1993.
