
2017 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), October 23, 2017, Jakarta, Indonesia

Development of a Brain-Controlled Wheelchair Supported by Raspicam Image Processing Based on Raspberry Pi
Arjon Turnip*, Taufik Hidayat, Dwi Esti Kusumandari
Technical Implementation Unit for Instrumentation Development
Indonesian Institute of Sciences
Bandung, Indonesia
*arjon.turnip@lipi.go.id

Abstract—Helping disabled subjects to improve their quality of life and providing them with priority care at the right time is one of the most important responsibilities of the research community. Hence, there is a need to develop a wheelchair that is intelligent, easy to maneuver, and safe to drive. A brain-controlled intelligent wheelchair, which uses signals recorded from the brain and processes them to control the wheelchair, has been designed. However, it has not yet been widely adopted, and any commercial device would need proper safety trials and approval before release. In this paper, a smart wheelchair based on bio-signal and non-bio-signal inputs, which are translated into movement commands by an Arduino microcontroller, is proposed. Four ultrasonic sensors and a Raspicam for obstacle detection, together with an inertial measurement unit for speed control on sloping roads, are integrated with the biosignal-based wheelchair, called the brain-controlled wheelchair. The resulting improvement in obstacle-avoidance performance is examined. The evaluation of the wheelchair movement is performed under real conditions using direction and speed control commands. The results show that misclassification is reduced and accuracy is increased.

Keywords—BCI; Raspberry Pi; Brain-controlled wheelchair; EEG

I. INTRODUCTION

Millions of disabled people around the world, such as stroke patients who suffer from mobility impairments, rely upon powered wheelchairs to get on with their activities of daily living. However, many disabled people are still not supported by appropriate wheelchairs at all, either because they are physically unable to move the chair in the conventional way or because they are unable to drive the chair safely. For that purpose, much research has been conducted, such as using brain signals to directly control a wheelchair. Wheelchairs powered by brain signals have attracted great attention due to their convenience and relatively low cost. The instrument used to measure brain activity and convert it into the control signals required to develop an interface between the human brain and a computer or other device is called a brain-computer interface (BCI) system. Brain activity can be detected with several instruments, but in this paper it is measured using electroencephalography (EEG). A BCI system is used to transform the recorded EEG signals into command signals, which can later be employed to control the wheelchair movement in a desired direction without any physical controls. Currently, many works focus on creating interfaces to control devices such as wheelchairs or other types of mobile systems, and there are also several works that review the existing systems. Recently, several related works on the development and application of BCI-based wheelchairs have been published [1]-[16]. A BCI system that controls the wheelchair movement in the forward direction only was developed in [1]. A BCI that controls the wheelchair in three directions (i.e., turn left, turn right, and move forward) was applied in [2], [3]. Brain control based on the brain-emotional-learning algorithm was employed to control an omnidirectional robot [4]. Combining the BCI with a shared control architecture allows intuitive and smooth trajectories to be produced dynamically [5]. The feature extraction and classification parts are key in BCI development and greatly affect its performance.

The main function of a BCI is to translate and communicate human intentions to external devices (such as wheelchairs, robots, tools, and so forth) in the form of appropriate commands. As shown, the feature extraction and classification algorithms play an important role in the design of brain-based control for obtaining high classification accuracy. In BCI design and development, high classification accuracy and transfer rate are very important; otherwise, misclassification can cause movement in a wrong direction that may lead to dangerous situations. Developing a BCI with high accuracy is therefore mandatory for user safety. Instead of improving the software for feature extraction and classification accuracy, this work proposes the integration of the BCI with other supporting devices.

Driving a BCI-based wheelchair safely along a more complex track can be a highly challenging task for a disabled user, especially because of the need to deliver commands with precise timing. However, shared control methods, in which the designed intelligent controller relieves the user of low-level tasks without sacrificing the prominence of the BCI and the adaptability of human beings, have been shown to sufficiently minimize workload whilst simultaneously improving performance [17].



The shared control paradigm consists of two intelligent agents: the human user (who need only convey intentions) and the wheelchair (which interprets the current conditions, for example for obstacle avoidance). In this paper, the focus is on a wheelchair based on a brain-computer interface (called the brain-controlled wheelchair) integrated with other devices for obstacle avoidance. Therefore, computer vision techniques that work with off-the-shelf webcams are used, and this sensory information is combined with an array of sonars and fed to the shared control algorithm.

From the developed BCI-based wheelchair, the signals are received in the form of serial data and integrated with other sensors through the microcontroller, ultrasonic sensors, etc. The ultrasonic sensors (widely used in mobile devices such as robots and wheelchairs) are employed for obstacle detection. Four sensor modules (two modules in front and two modules on the left and right sides) can easily be interfaced with the microcontroller. These sensors are commonly available on the market at low cost. The intelligent wheelchair is comfortable to drive and provides good efficiency in terms of power requirements and good performance in terms of accuracy by employing the ultrasonic sensors and a webcam to support obstacle avoidance. It has been experimentally proven that a vision-based shared controller increases safety and efficiency. The shared control architecture of the intelligent wheelchair is briefly described below. Currently, the brain-controlled wheelchair is designed with an electronics section consisting of an Arduino as the controller, the Mitsar-EEG system (with a scalp cap to acquire brain-wave information), four ultrasonic sensors as wall-safety mechanisms, a BCI system for EEG signal processing, an inertial measurement unit for the auto-brake system, and an EMG system for performance improvement of the wheelchair. In this work, we add a Raspicam electronics system with a camera module to acquire information in the form of images around the brain-controlled wheelchair and a Raspberry Pi as an image information processor that reports an obstacle to the Arduino controller as a command. The electronic block diagram of the brain-controlled wheelchair is given in Figure 1.

Fig. 1. Electronic block diagram of the brain-controlled wheelchair.

II. METHOD

The image processing procedure applied to the frames acquired from the Raspicam sensor starts with the acquisition of captured images of the hole-table and downstairs obstacles. The captured image is fed into a pre-processing stage in which the RGB color image is converted into grayscale form. This step is important to minimize the calculation time for each image (many images need to be processed simultaneously). The next stage is a filtering process using the bilateral filter method. This process reduces noise in the digital images while preserving the edges of the processed image (needed for detecting object characteristics) so that they remain sharp [18], [19]. The last stage is the extraction of the object or image characteristics with the Canny edge detection technique. Finally, object detection is obtained by classifying the edges of the binary image. The Haar-like feature method is used for this classification.

The Haar-like feature algorithm is one of the main methods used in the image processing procedure. It is used to obtain the image feature values that serve as the primary identity of the object characteristics in the block matching process [20]. The feature value is obtained by applying an integral image algorithm (unlike the conventional method, which calculates the pixel values of an image one by one). The image calculation is concentrated only in the selected square area of the detected object (which is identified as an obstacle) [21], so that faster calculation is achieved. The selected area is identified as positive image data, while the area outside the selection (called the background image) is denoted as negative data. Both of the identified areas are combined into a weak classifier with the AdaBoost method. The weak classifier data are obtained by comparing images with each other until all of the images differ from each other [22], while the strong classifier is obtained when the compared images match each other. The strong classifier is then used as a comparative database between the real-time imagery and the trained images. The last step of the Haar-like feature algorithm is the cascade classifier, which is used to validate the trained images against the real-time imagery according to the established stages [20]. If all stages are matched, an identified object is obtained. Figure 2 represents the block diagram of the image processing procedure.

Fig. 2. The procedure of Raspicam image processing.
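To make the pipeline of Fig. 2 concrete, the following is a minimal Python/OpenCV sketch of the processing applied to a single Raspicam frame. The cascade file name, the sample frame, and the filter and detector parameters are illustrative assumptions, not values taken from the paper.

import cv2

# Haar cascade trained on the obstacle images (hypothetical file name).
cascade = cv2.CascadeClassifier("obstacle_cascade.xml")

def process_frame(frame_bgr):
    """Fig. 2 pipeline: grayscale -> bilateral filter -> Canny edges -> Haar classification."""
    # Pre-processing: convert the color frame to grayscale to reduce calculation time.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Bilateral filter: suppress noise while keeping object edges sharp.
    filtered = cv2.bilateralFilter(gray, 9, 75, 75)
    # Canny edge detection to extract the object characteristics (thresholds are illustrative).
    edges = cv2.Canny(filtered, 50, 150)
    # Haar-like feature classification; each detection is an (x, y, w, h) box.
    obstacles = cascade.detectMultiScale(filtered, scaleFactor=1.1, minNeighbors=5)
    return edges, obstacles

frame = cv2.imread("sample_frame.jpg")  # stand-in for a captured Raspicam frame
_, boxes = process_frame(frame)
print("detected obstacle regions:", list(boxes))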

III. DEVELOPMENT AND EXPERIMENTAL RESULTS
In the developed intelligent brain-controlled wheelchair system, the acquired brain signal is classified using the frequency range as a feature. For the classification, a well-known classifier algorithm, the adaptive neuro-fuzzy inference system (ANFIS) method, is applied. The signals are classified into four features: less than 7 Hz, 7-12 Hz, 12.1-13 Hz, and greater than 17 Hz, indicating stop, left, forward, and right, respectively. The classified signals in the form of these four directions are therefore used for controlling the direction of the wheelchair movement, whilst the captured Raspicam image is processed on the Raspberry Pi and classified using the Haar-like feature method as explained above. The wheelchair is also equipped with four ultrasonic sensors (two on the left and right sides and two in front of the wheelchair) integrated with the Raspicam sensor. The result of the image classification is treated as one type of obstacle-avoidance input for the intelligent brain-controlled wheelchair. If any obstacle is detected, then Data 2, which contains a command for the wheelchair to stop (shown in Figure 3), is executed while all other commands are ignored. The Arduino, developed as the main controller, is employed to identify whether serial data have been sent. If data have been sent, the command in the data variable is read. Finally, the commands in Data 1 and Data 2 are converted into a PWM signal that specifies the DC motor rotation speed of the intelligent brain-controlled wheelchair.

Fig. 3. The mechanism of shared control of the brain-controlled wheelchair.
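The command arbitration described above can be illustrated with a small Python sketch (shown here on a PC for clarity; on the wheelchair this logic runs in the Arduino firmware). The frequency bands follow the text, while the command encoding and duty-cycle values are assumptions made for illustration only.

# Map the ANFIS-classified dominant EEG frequency band to a direction command.
def eeg_band_to_command(freq_hz):
    if freq_hz < 7.0:
        return "stop"
    if 7.0 <= freq_hz <= 12.0:
        return "left"
    if 12.1 <= freq_hz <= 13.0:
        return "forward"
    if freq_hz > 17.0:
        return "right"
    return "stop"  # frequencies outside the listed bands default to stop

# Data 2 (obstacle/safety information) overrides Data 1 (the EEG command).
def arbitrate(data1_command, obstacle_detected):
    return "stop" if obstacle_detected else data1_command

# Assumed duty cycles per command: the sign sets rotation direction, the magnitude sets speed.
DUTY = {"stop": (0, 0), "forward": (60, 60), "left": (-40, 40), "right": (40, -40)}

command = arbitrate(eeg_band_to_command(12.5), obstacle_detected=False)
print(command, DUTY[command])  # -> forward (60, 60)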
The Arduino, as the main controller, periodically receives distance information from the four ultrasonic sensors to maintain a safe distance between the brain-controlled wheelchair and the wall. The Raspicam module (connected to the CSI camera port on the Raspberry Pi) periodically provides information in the form of images of the situation around the brain-controlled wheelchair. The electronic configuration between the Arduino controller and the Raspberry Pi is designed so that both controllers communicate with each other through a serial UART port: the Raspberry Pi uses pin 6 as ground, pin 8 as transmitter, and pin 10 as receiver, while the Arduino uses the TX1, RX1, and ground pins. In order to receive accurate information, a voltage divider built with resistors of about 10 kΩ and 4.7 kΩ is used so that the Arduino voltage level of 5 V is converted to 3.3 V, in accordance with the Raspberry Pi signal level. The configuration of the electronic connections can be seen in Fig. 4.

Fig. 4. Controller communication configuration.
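On the Raspberry Pi side, the UART link to the Arduino can be handled with, for example, pyserial. The port name, baud rate, and line-based message format below are assumptions, since the paper does not specify them.

import serial  # pyserial

# Open the Raspberry Pi hardware UART (port name and baud rate are assumptions).
link = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)

def send_data2(command):
    """Send a Data 2 safety command (e.g. 'stop') to the Arduino as one text line."""
    link.write((command + "\n").encode("ascii"))

def read_reply():
    """Read one status line sent back by the Arduino, if any."""
    return link.readline().decode("ascii", errors="ignore").strip()

send_data2("stop")
print(read_reply())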
The process in the controller, with the coordinate value (x, y) of the detected obstacle (i.e., hole table or downstairs) as the set point, is designed as a closed-loop control process for the Raspberry Pi system, as shown in Fig. 5. The Raspicam camera sensor continuously acquires every image frame in real time and sends the information to the Raspberry Pi. The image information is continuously processed by the Raspberry Pi controller, supported by the OpenCV library, through the preprocessing, segmentation, feature extraction, and classification stages. The obstacle is detected from the grouping of candidate obstacles using the dominant method, i.e., by comparing the (x, y) coordinates of the detected obstacle with the coordinate set point. When the y-axis coordinate is greater than 120, it means that the brain-controlled wheelchair is about 20 cm in front of the obstacle (hole table or downstairs), and the wheelchair must stop.
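A minimal sketch of this decision step is shown below, assuming that the obstacle bounding boxes come from the Haar detection sketched earlier and using the 120-pixel set point from the text; how the stop command is actually encoded on the serial link is an assumption.

# Set point from the text: an obstacle whose y coordinate exceeds 120 pixels means the
# wheelchair is roughly 20 cm away and must stop.
Y_SET_POINT = 120

def check_and_stop(detections, send_command):
    """detections: iterable of (x, y, w, h) boxes from the Haar cascade.
    send_command: callable that pushes a Data 2 command to the Arduino."""
    for (x, y, w, h) in detections:
        if y > Y_SET_POINT:
            send_command("stop")  # safety override: ignore the EEG command
            return True
    return False

# Example with a dummy detection and print() standing in for the serial link.
check_and_stop([(40, 130, 60, 50)], send_command=print)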

The images used in the learning process (to improve the obstacle detection time) comprise about 54 positive images and 27 negative images. The positive image samples are in RGB color format with a resolution of 360 x 240 pixels and a 24-bit bitmap file format, while the negative images are in grayscale format with a resolution of 360 x 240 pixels and a JPEG file format. To determine the ROI position of the training object, objectmarker.exe is used to mark and crop the positive images, and the position information of the cropped object is stored in the info.txt file as the source for creating the vector imagery. The image training process of the Haar-like feature algorithm generally begins by taking a collection of training images. Each image that has passed the mark-and-crop stage is then resized to 24 x 24 pixels. In general, the image training process and the application of the Haar-like feature method for extraction and classification can be explained as in Fig. 6 and Fig. 7, respectively.
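The sample preparation just described (marking, cropping, and resizing the positives to 24 x 24 pixels) can be reproduced with a short Python/OpenCV script. The annotation format assumed below (one line per image: path, object count, then x y w h per object) mirrors the usual OpenCV positive-sample listing and is an assumption about the contents of info.txt.

import cv2

def prepare_positive_samples(info_path="info.txt", size=(24, 24)):
    """Crop the marked objects from each positive image and resize them to 24x24
    grayscale patches, as done before Haar-like feature training."""
    patches = []
    with open(info_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 6:
                continue
            path, count = parts[0], int(parts[1])
            coords = list(map(int, parts[2:2 + 4 * count]))
            image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if image is None:
                continue
            for i in range(count):
                x, y, w, h = coords[4 * i:4 * i + 4]
                patches.append(cv2.resize(image[y:y + h, x:x + w], size))
    return patches

print(len(prepare_positive_samples()), "training patches of 24x24 pixels")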
Fig. 5. Block diagram of the Raspberry Pi controller.

Fig. 6. Training image for the Haar-like feature.

The main focus in designing the Raspberry Pi algorithm is how an image acquired with the Raspicam visual sensor can be processed to detect objects in the form of obstacles. The algorithm is designed to continuously read the serial UART port. Data 1 carries the EEG signal processing result, whilst Data 2 carries the image processing, IMU, and ultrasonic values. These data are processed by the microcontroller so that the PWM signal value can be sent to the DC motor. Data 2, which contains the safety information, sends one of the commands forward, backward, right, left, or stop. The control signal in Data 2 is represented by a duty-cycle percentage, which determines the motor rotation speed: the greater the duty-cycle percentage, the faster the motor rotation. The direction of the motor rotation is determined by the polarity of the duty-cycle value: a positive value makes the motor rotate clockwise, while a negative value makes it rotate counterclockwise. If the program is not stopped, it loops continuously to read Data 1 and Data 2.

To validate the performance of the intelligent brain-controlled wheelchair, the speed, the accuracy, and the distance from the sensor to the selected object are tested. The experimental procedure is divided into three measurement steps: light intensity tolerance, object detection distance, and Raspicam detection performance. As a standard for the detected objects, the database consists of two groups: hole-table and downstairs objects. The hole-table group consists of 40 positive and 44 negative related images, and the downstairs group consists of 51 positive and 35 negative related images. Observation data from the camera sensor are used to determine the relationship between the distance of the object and the pixel value of the image recorded by the Raspicam. A linear regression with the formula y = 133.448 - 1.054x is used to predict the correlation between the position and the distance detections. The recorded data indicate that the wheelchair can detect an obstacle object if the light intensity around the wheelchair is between 58 and 103 lux, with detection times of about 9 ms and 11 ms for the hole-table and downstairs objects, respectively. The maximum and minimum distances at which the wheelchair can reliably detect an obstacle object are also tested.
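The regression and the duty-cycle convention described above are summarized in the small sketch below; interpreting x as the measured pixel value and y as the estimated distance in centimeters follows the text, and the example input value is illustrative.

# Linear regression from the observation data: estimated distance from a pixel value.
def estimate_distance_cm(pixel_value):
    return 133.448 - 1.054 * pixel_value

# Duty-cycle convention: the magnitude sets the speed, the sign sets the rotation direction.
def motor_rotation(duty_percent):
    if duty_percent == 0:
        return "stopped"
    return "clockwise" if duty_percent > 0 else "counterclockwise"

print(estimate_distance_cm(40))  # about 91.3 (cm) for an illustrative pixel value of 40
print(motor_rotation(-60))       # counterclockwise at 60 % duty cycle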

IV. CONCLUSIONS
An intelligent brain-controlled wheelchair with shared control, intended to improve the quality of life of disabled users and to provide them with priority care at the right time, has been developed. Besides the use of brain signals to control the movement direction, the wheelchair is also supported by several sensors to ensure user safety. In the experimental tests, the detection process and the detection accuracy for obstacle objects (i.e., hole table, wall, and downstairs) are highly affected by distance and light intensity. It is found that the light intensity that allows proper indoor object detection is about 58 to 103 lux. This intensity range may change depending on the lighting conditions at the time the background image is captured. The average object detection distance obtained is about 95 cm, and it is not affected by the light intensity of the environment.
Fig. 7. Haar-like feature processing for extraction and classification.

ACKNOWLEDGMENT

This research was supported by the Technical Implementation Unit for Instrumentation Development, Indonesian Institute of Sciences, the Faculty of Medicine, Padjajaran University, and Insentif Riset Sistem Inovasi Nasional (INSINAS), Kementerian Riset, Teknologi, dan Pendidikan Tinggi (Ristekdikti), Indonesia.

REFERENCES
[1] R. Leeb, D. Friedman, G. R. Müller-Putz, R. Scherer, M. Slater, and G. Pfurtscheller, "Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic," Computational Intelligence and Neuroscience, vol. 2007, Article ID 79642, 8 pages, 2007.
[2] R. Scherer, F. Lee, A. Schlögl, R. Leeb, H. Bischof, and G. Pfurtscheller, "Toward self-paced brain-computer communication: navigation through virtual worlds," IEEE Transactions on Biomedical Engineering, vol. 55, no. 2, pp. 675–682, 2008.
[3] Fattouh, O. Horn, and G. Bourhis, "Emotional BCI control of a smart wheelchair," International Journal of Computer Science Issues, vol. 10, no. 3, pp. 32–36, 2013.
[4] M. A. Sharbafi, C. Lucas, and R. Daneshvar, "Motion control of omni-directional three-wheel robots by brain-emotional-learning-based intelligent controller," IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 40, no. 6, pp. 630–638, 2010.
[5] B. Rebsamen, C. Guan, H. Zhang et al., "A brain controlled wheelchair to navigate in familiar environments," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, no. 6, pp. 590–598, 2010.
[6] M. Mazo, F. J. Rodríguez, J. L. Lázaro, J. Ureña, J. C. Gracía, E. Santiso, P. Revenga, and J. J. García, "Wheelchair for physically disabled people with voice, ultrasonic and infrared sensor control," Autonomous Robots, vol. 2, no. 3, pp. 203–224, September 1995.
[7] E. Perez, C. Soria, V. Mut, O. Nasisi, and T. F. Bastos, "Interfaz basada en visión aplicada a una silla de ruedas robótica," Proceedings of III International Congress on Domotics, Robotics and Remote-Assistance for All (DRT4all 2009), Barcelona, Spain, May 2009.
[8] A. Úbeda, J. M. Azorín, E. Iáñez, and J. M. Sabater, "Interfaz de seguimiento ocular basado en visión artificial para personas con discapacidad," Proceedings of III International Congress on Domotics, Robotics and Remote-Assistance for All (DRT4all 2009), Barcelona, Spain, May 2009.
[9] T. F. Bastos, A. Ferreira, W. C. Celeste, D. C. Calieri, M. S. Filho, and C. de la Cruz, "Silla de ruedas robótica multiaccionada inteligente con capacidad de comunicación," Proceedings of III International Congress on Domotics, Robotics and Remote-Assistance for All (DRT4all 2009), Barcelona, Spain, May 2009.
[10] A. Turnip, K. S. Hong, and M. Y. Jeong, "Real-time feature extraction of P300 component using adaptive nonlinear principal component analysis," BioMedical Engineering OnLine, vol. 10, no. 83, 2011.
[11] A. Turnip and K. S. Hong, "Classifying mental activities from EEG-P300 signals using adaptive neural network," Int. J. Innov. Comp. Inf. Control, vol. 8, no. 7, 2012.
[12] A. Turnip, S. S. Hutagalung, J. Pardede, and D. Soetraprawata, "P300 detection using multilayer neural networks based adaptive feature extraction method," International Journal of Brain and Cognitive Sciences, vol. 2, no. 5, pp. 63–75, 2013.
[13] A. Turnip and M. Siahaan, "Adaptive principal component analysis based recursive least squares for artifact removal of EEG signals," Advanced Science Letters, vol. 20, no. 10–12, pp. 2034–2037, October 2014.
[14] A. Turnip and D. E. Kusumandari, "Improvement of BCI performance through nonlinear independent component analysis extraction," Journal of Computer, vol. 9, no. 3, pp. 688–695, March 2014.
[15] A. Turnip, D. Soetraprawata, and D. E. Kusumandari, "A comparison of extraction techniques for the rapid EEG-P300 signals," Advanced Science Letters, vol. 20, no. 1, pp. 80–85, January 2014.
[16] A. Turnip, D. Soetraprawata, and D. E. Kusumandari, "A comparison of EEG processing methods to improve the performance of BCI," International Journal of Signal Processing Systems, vol. 1, no. 1, pp. 63–67.
[17] A. Turnip and D. Soetraprawata, "The performance of EEG-P300 classification using backpropagation neural networks," Journal of Mechatronics, Electrical Power, and Vehicular Technology, vol. 4, no. 2, pp. 81–88.
[18] A. Turnip and D. Soetraprawata, "Electrooculography detection from recorded electroencephalogram signals by extended independent component analysis," Advanced Science Letters, vol. 21, no. 2, pp. 173–176, 2015.
[19] A. Turnip, "Comparison of ICA-based JADE and SOBI methods EOG artifacts removal," Journal of Medical and Bioengineering, vol. 4, no. 6, 2015.
[20] A. Alonso, R. de la Rosa, L. del Val, M. I. Jimenez, and S. Franco, "A robot controlled by blinking for ambient assisted living," in Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living, Lecture Notes in Computer Science, pp. 839–842, Springer-Verlag, Berlin, Germany, 2009.
[21] Q. Zeng, C. L. Teo, B. Rebsamen, and E. Burdet, "A collaborative wheelchair system," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 2, pp. 161–170, April 2008.
[22] T. Carlson and Y. Demiris, "Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 3, pp. 876–888, June 2012.
[23] S. Deswal, S. Gupta, and B. Bhushan, "A survey of various bilateral filtering techniques," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 3, pp. 105–120, 2015.
[24] S. Vijayarani and M. Vinupriya, "Performance analysis of Canny and Sobel edge detection algorithms in image mining," International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, no. 8, 2013.
[25] W. Xu and E. J. Lee, "Eye detection and tracking using rectangle features and integrated eye tracker by web camera," International Journal of Multimedia and Ubiquitous Engineering, vol. 8, no. 4, 2013.
[26] G. Facciolo, N. Limare, and E. Meinhardt, "Integral images for block matching," Image Processing On Line, vol. 4, pp. 344–369, 2014.
[27] P. L. Bartlett and M. Traskin, "AdaBoost is consistent," Journal of Machine Learning Research, vol. 8, pp. 2347–2368, 2007.

