
A

SEMINAR REPORT
ON

Brain Controlled Car for Disabled


In partial fulfillment of requirements for the degree of
Bachelor of Technology
in
MECHANICAL ENGINEERING

Submitted By:

Guided By:

Suchit Bhansali

Prof.Sandeep Jain

M.E. III Year

M.E. Department

(VI Semester)

DEPARTMENT OF MECHANICAL ENGINEERING


JODHPUR INSTITUTE OF ENGINEERING AND TECHNOLOGY

JIET Universe, N.H. 65, New Pali Road, Mogra,


Jodhpur-342002(Raj.)

JODHPUR
INSTITUTE OF
ENGINEERING &
TECHNOLOGY

CERTIFICATE
This is to certify that the seminar titled "Brain Controlled Car for Disabled", being submitted by Suchit Bhansali of B.Tech. final year, Roll No. 12EJIME112, in partial fulfillment for the award of the degree of Bachelor of Technology in Mechanical Engineering at JIET, Jodhpur, affiliated with RTU, Kota, is a record of the student's own work carried out by him under the guidance of the undersigned.
He has not submitted the matter embodied in this seminar in this form for the award of any other degree or diploma.
Signature of HOD

Signature of Guide

(Prof. M.R. Baid)

(Prof. Sandeep Jain)

External Examiner____________
Internal Examiner_____________

Candidate's Declaration
I hereby declare that the work being presented in this seminar, entitled "Brain Controlled Car for Disabled", in partial fulfillment for the award of the Degree of Bachelor of Technology, submitted to the Department of Mechanical Engineering, Jodhpur Institute of Engineering and Technology, Rajasthan Technical University, is a record of my own work carried out under the guidance of Prof. SANDEEP JAIN, Department of Mechanical Engineering, Jodhpur Institute of Engineering and Technology, Jodhpur.
I have not submitted the matter presented in this seminar anywhere for the award of any other degree.

Suchit Bhansali
Mechanical Engineering,
Jodhpur Institute of Engineering and Technology, Jodhpur
Counter Signed By

Prof. Sandeep Jain


Department of Mechanical Engineering,
Jodhpur Institute of Engineering and Technology, Jodhpur.

ACKNOWLEDGEMENT

"It is not the brain that matters the most, but those which guide it: the character, the heart, generous qualities and progressive force."
It is indeed a matter of great pleasure and privilege to present the seminar on "Brain Controlled Car for Disabled" under the valuable guidance of Mr. Sandeep Jain.
I am highly grateful to Prof. M. R. Baid (Head, Department of Mechanical Engineering) for providing us this great opportunity to carry out independent research on this topic.
Furthermore, I would like to thank all others, especially my parents and numerous friends. This seminar would not have been a success without their inspiration, valuable suggestions and moral support throughout its course.

TABLE OF CONTENTS:
Acknowledgment
Abstract

CHAPTER 1
Introduction
Literature Survey

CHAPTER 2
2.1 Artificial Intelligence
2.2 What is AI?

CHAPTER 3
3.1 Brain-Computer Interface
3.1.1 The Evolution of BCIs and the Bridge with Human-Computer Interaction
3.2 Brain Imaging to Directly Control Devices
3.2.1 Bypassing Physical Movement to Specify Intent
3.2.2 Learning to Control Brain Signals
3.2.3 Evaluation of Potential Impact
3.3 Brain Imaging as an Indirect Communication Channel
3.3.1 Exploring Brain Imaging for End-User Applications
3.3.2 Understanding Cognition in the Real World
3.3.3 Cognitive State as an Evaluation Metric
3.4 Conclusions
3.4.1 Test Results Comparing Driver Accuracy With/Without BCI

CHAPTER 4
4.1 Automatic Navigation System
4.1.1 Problem Formulation
4.1.2 Collision Prediction and Avoidance for Mobile Objects
4.1.3 Manoeuvre Planning for an Unmanned Vehicle
4.2 Conclusions

CHAPTER 5
Conclusions
List of Figures
REFERENCES

JIET GROUP OF INSTITUTIONS

Seminar Topic - Brain Controlled Car for Disabled
Student Name - Suchit Bhansali
6H2, Mechanical

Abstract
This report considers the development of a brain-driven car, which would be of great help to physically disabled people. Since such a car relies only on what the individual is thinking, it requires no physical movement on the part of the individual. The car integrates signals from a variety of sensors for video, weather monitoring, anti-collision and so on. It also has an automatic navigation system for use in an emergency. The car works on the asynchronous mechanism of artificial intelligence. It is a great advance in technology, one that will help make the disabled able.
In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton and the Ratio Club in England. Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Guide:- Asst. Prof. Sandeep Jain

Submitted to :- Asst. Prof. Sandeep Jain

Signature:-

Signature:-

Date:-

CHAPTER 1
INTRODUCTION
Autonomous cars play an important role in current robotics and A.I. research. The development of driverless cars started in the late 1970s and 1980s. Ernst Dickmanns' Mercedes-Benz achieved a travel velocity of 100 km/h on restricted highways without traffic. In the DARPA Grand Challenge 2005, autonomous cars drove off-road on desert terrain, several of them reaching the finish line. DARPA's Urban Challenge of 2007 demonstrated that intelligent cars are able to handle urban scenarios and situations with simulated traffic.
Lately, autonomous cars have been driving through real-world traffic for testing purposes in urban and rural areas alike. This research has led to the introduction of various driver assistance systems for street cars. One key aspect for driver assistance

systems is how the interface between human and machine affects usability. This interface question is more important for people without full bodily control. Brain-Computer Interfaces can be a solution here. Recently, BCI systems have become relatively affordable and allow people to interact directly with their environment. Another big field lies in human interaction within computer games, e.g. in the research game Brain Basher. As a sub-field of BCI research, BCI using motor imagination brain patterns has become popular, where the user has to think of a motion instead of performing it physically. In other work, users could control mechanical devices with EEG patterns. This report presents a solution in which a human controls a car using brain signals alone, i.e., without the need for any physical interaction with the car.

Figure 1 - Brain Controlled Car for Disabled
In the first application, computer-aided free driving allows the passenger to claim steering and speed control in special areas. The car prevents traffic-rule violations and accidents by reclaiming control before they happen. The second application implements semi-autonomous path planning, where the car drives autonomously through a road network until it arrives at so-called decision points. Typically located at crossings, decision points require the passenger to choose which way to drive next.

Literature Survey
The following experiments were conducted at the former Tempelhof airport in Berlin:
Experiment 1: At first we measured the accuracy of control. The first task was to keep the car on an infield course, using "left" and "right" patterns for steering only. The velocity was set to 2 meters per second. The driver had to drive the track for three laps to see whether the accuracy remained constant over time.
Result: At the beginning of the first experiment we marked the desired lanes on the airfield. As we found, on a flat surface those lanes are hard to see from greater distances. Moreover, it is difficult for a human driver to estimate his distance to the middle of the lane with centimetre accuracy. Therefore the test person had access to a computer monitor, which displayed a model of the car on the virtual track from a bird's-eye perspective. The test person succeeded in keeping a close distance to the desired trajectory, while only having to steer the car. We performed three tests to observe the variance between different laps. The standard deviation of the lateral error function over time was 1.875 meters for one lap; one lap lasted for about 10 minutes. In the following laps this error did not diverge by more than 0.2 m. The angular error standard deviation was 0.20 rad.
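For concreteness, statistics like these can be reproduced from a logged trajectory with a short sketch. The log format below is hypothetical (pairs of lateral error in metres and heading error in radians); the report only quotes the resulting values.

    import math

    def error_statistics(samples):
        """Standard deviation of lateral and angular errors from a log of
        (lateral_error_m, heading_error_rad) pairs (hypothetical format)."""
        lat = [s[0] for s in samples]
        ang = [s[1] for s in samples]
        def std(xs):
            mean = sum(xs) / len(xs)
            return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
        return std(lat), std(ang)

    # Example with a short synthetic log; the experiment reported
    # roughly 1.875 m and 0.20 rad over a full lap.
    log = [(1.2, 0.10), (-2.0, -0.25), (2.5, 0.30), (-1.5, -0.15)]
    lat_sd, ang_sd = error_statistics(log)
    print(f"lateral std dev: {lat_sd:.3f} m, angular std dev: {ang_sd:.3f} rad")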
Experiment 2: In the second experiment the driver had to control throttle and brake in addition to the steering commands for left and right. The car was now able to accelerate from 0 to 3 meters per second.
Result: The test person managed to control the car, regulating both the velocity and the steering. However, the accuracy of steering control was reduced compared to Experiment 1, resulting in a larger standard deviation of the lateral error, which was 2.765 m. The standard deviation of the orientation error was 0.410 rad and thus larger as well.
Experiment 3: To check the lateral error to the lane at higher speeds, we designed another track with long straight lanes and two sharp corners. The velocity was fixed at 5 meters per second and, as in the first experiment, the driver had to steer left and right only, trying to stay on the reference lane.
Result: The lateral error became even greater on the speedway. At this speed the test person tried to focus on heading in the right direction (keeping the orientation error small) rather than on reducing the lateral distance. This is due to the fact that at higher speeds, the target point for orienting the car is displaced forwards. The standard deviation of the lateral error was 4.484 m; the standard deviation of the orientation error was 0.222 rad.
Experiment 4: We checked the response time of the test person. The test person received different commands, such as "left", "right", "push" or "pull", from another person and had to generate the corresponding brain pattern, which had to be recognized by the control computer. The time from the command until the recognition within the control computer was measured. We also measured falsely classified patterns.


Result: In this experiment we measured the time it takes to generate a pattern with the brain and to classify it. Over 60 percent of the brain commands could be generated within 5 seconds or less, and about 26 percent even within two seconds or less. In 20 percent of all cases the generated pattern was wrong, usually due to concentration problems of the test person. After a while, at the latest after one hour, new training of the brain patterns is necessary. Further, after using the BCI for 90 minutes we observed some tiredness in our test subject, which resulted in longer response times and higher inaccuracies.
Experiment 5: In this experiment we tested the second module, the Brain Chooser. Here, at intersections, the test person had about ten seconds to decide on the left or the right route. This long decision phase helps to filter out noise and ensures that the test person generates the desired pattern over a longer time, reducing the risk of coincidentally generated patterns.
Result: In this experiment the test person achieved correctly classified directions in more than 90 percent of cases.
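The filtering suggested by this long decision phase can be sketched as a simple majority vote. The snippet below is an illustrative assumption, not the actual Brain Chooser implementation: it assumes a classifier emitting 'left', 'right' or nothing at a fixed tick rate, and accepts a direction only when it clearly dominates the window.

    def decide_direction(classifier_outputs, min_votes=30, margin=2.0):
        """Majority-vote filter over one ~10 s decision window.

        classifier_outputs: iterable of 'left', 'right' or None, one per tick.
        Returns 'left'/'right' only when one pattern clearly dominates,
        reducing the risk of acting on coincidentally generated patterns.
        """
        left = sum(1 for c in classifier_outputs if c == 'left')
        right = sum(1 for c in classifier_outputs if c == 'right')
        winner = 'left' if left >= right else 'right'
        votes, loser = max(left, right), min(left, right)
        if votes >= min_votes and votes >= margin * max(loser, 1):
            return winner
        return None  # ambiguous window: keep driving autonomously, ask again

    # Example: 100 ticks (~10 s at 10 Hz), mostly 'left' detections.
    window = ['left'] * 55 + ['right'] * 10 + [None] * 35
    print(decide_direction(window))  # -> 'left'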


CHAPTER 2
2.1 Artificial Intelligence
Humankind has given itself the scientific name Homo sapiens ("man the wise") because our mental capacities are so important to our everyday lives and our sense of self. The field of Artificial Intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike
philosophy and psychology, which are also concerned with intelligence, AI strives to
build intelligent entities as well as understand them. Another reason to study AI is that
these constructed intelligent entities are interesting and useful in their own right. AI
has produced many significant and impressive products even at this early stage in its
development. Although no one can predict the future in detail, it is clear that
computers with human-level intelligence (or better) would have a huge impact on our
everyday lives and on the future course of civilization.


AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain,
whether biological or electronic, to perceive, understand, predict, and manipulate a
world far larger and more complicated than itself? How do we go about making
something with those properties? These are hard questions, but unlike the search for
faster-than-light travel or an antigravity device, the researcher in AI has solid evidence
that the quest is possible. All the researcher has to do is look in the mirror to see an
example of an intelligent system.
AI is one of the newest disciplines. It was formally initiated in 1956, when the name
was coined, although at that point work had been under way for about five years.
Along with modern genetics, it is regularly cited as the "field I would most like to be
in" by scientists in other disciplines. A student in physics might reasonably feel that
all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest,
and that it takes many years of study before one can contribute new ideas. AI, on the
other hand, still has openings for a full-time Einstein.
AI currently encompasses a huge variety of subfields, from general-purpose areas
such as perception and logical reasoning, to specific tasks such as playing chess,
proving mathematical theorems, writing poetry, and diagnosing diseases. Often,
scientists in other fields move gradually into artificial intelligence, where they find the
tools and vocabulary to systematize and automate the intellectual tasks on which they
have been working all their lives. Similarly, workers in AI can choose to apply their
methods to any area of human intellectual endeavour. In this sense, it is truly a
universal field.

2.2 What is AI?


Definitions:
The definitions of AI vary along two main dimensions. The ones on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behaviour. Also, the definitions on the left measure success in terms of human performance, whereas the ones on the right measure against an ideal concept of intelligence, which we will call rationality. A system is rational if it does the right thing. This gives us four possible goals to pursue in artificial intelligence, as seen in Table 2.1.
"The exciting new effort to make
computers think . . . machines with
minds, in the full and literal sense"
(Haugeland, 1985)
"[The automation of] activities that we
associate with human thinking, activities
such as decision-making, problem
solving, learning" (Bellman, 1978)
"The art of creating machines that
perform
functions
that
require
intelligence when performed by people"
(Kurzweil, 1990)
"The study of how to make computers do
things at which, at the moment, people
are better" (Rich and Knight, 1991 )

"The study of mental faculties through


the use of computational models"
(Charniak and McDermott, 1985)
"The study of the computations that make
it possible to perceive, reason, and act"
(Winston, 1992)

"A field of study that seeks to explain and


emulate intelligent behaviour in terms of
computational processes"
(Schalkoff, 1990)
"The branch of computer science that is
concerned with the automation of
intelligent behaviour"
(Luger and Stubblefield, 1993)
Table-2.1- Some definitions of AI.

CHAPTER 3
3.1 Brain-Computer Interface
For generations, humans have fantasized about the ability to
communicate and interact with machines through thought alone or
to create devices that can peer into a person's mind and thoughts.
These ideas have captured the imagination of humankind in the
form of ancient myths and modern science fiction stories. However,
it is only recently that advances in cognitive neuroscience and brain
imaging technologies have started to provide us with the ability to
interface directly with the human brain.
This ability is made possible through the use of sensors that can
monitor some of the physical processes that occur within the brain
that correspond with certain forms of thought.

Primarily driven by growing societal recognition of the needs of people with physical disabilities, researchers have used these technologies to build Brain-Computer Interfaces (BCIs): communication systems that do not depend on the brain's normal output pathways of peripheral nerves and muscles. In these systems, users explicitly manipulate their brain activity instead of using motor movements to produce signals that can be used to control computers or communication devices. The impact of this work is extremely high, especially for those who suffer from devastating neuromuscular injuries and neurodegenerative diseases such as amyotrophic lateral sclerosis, which eventually strips individuals of voluntary muscular activity while leaving cognitive function intact.
Meanwhile, largely independent of these efforts, Human-Computer Interaction (HCI) researchers continually work to increase the communication bandwidth and quality between humans and computers. They have explored visualizations and multimodal presentations so that computers may use as many sensory channels as possible to send information to a human. Similarly, they have devised hardware and software innovations to increase the information a human can quickly input into the computer. Since we have traditionally interacted with the external world only through our physical bodies, these input mechanisms have mostly required performing some form of motor activity, be it moving a mouse, hitting buttons, using hand gestures, or speaking.
Additionally, these researchers have started to consider implicit forms of input, that is, input that is not explicitly performed to direct a computer to do something. In an area of exploration referred to by names such as perceptual computing or contextual computing, researchers attempt to infer information about user state and intent by observing their physiology, behaviour, or even the environment in which they operate. Using this information, systems can dynamically adapt themselves in useful ways in order to better support the user in the task at hand.
It is believed that there exists a large opportunity to bridge the burgeoning research in Brain-Computer Interfaces and Human-Computer Interaction, and this book attempts to do just that. We
believe that BCI researchers would benefit greatly from the body of
expertise built in the HCI field as they construct systems that rely
solely on interfacing with the brain as the control mechanism.
Likewise, BCIs are now mature enough that HCI researchers must
add them to our tool belt when designing novel input techniques
(especially in environments with constraints on normal motor
movement), when measuring traditionally elusive cognitive or
emotional phenomena in evaluating our interfaces, or when trying
to infer user state to build adaptive systems. Each chapter in this
book was selected to present the novice reader with an overview of
some aspect of BCI or HCI, and in many cases the union of the two,
so that they not only get a flavour of work that currently exists, but
are hopefully inspired by the opportunities that remain.

3.1.1 The Evolution of BCIs and the Bridge with Human-Computer Interaction
The evolution of any technology can generally be broken into three
phases. The initial phase, or proof-of-concept, demonstrates the
basic functionality of a technology. In this phase, even trivially
functional systems are impressive and stimulate imagination. They
are also sometimes misunderstood and doubted. As an example,
when moving pictures were first developed, people were amazed by
simple footage shot with stationary cameras of flowers blowing in
the wind or waves crashing on the beach. Similarly, when the
computer mouse was first invented, people were intrigued by the ability to move a physical device small distances on a table-top in order to control a pointer in two dimensions on a computer screen. In brain sensing work, this represents the ability to extract any bit of information directly from the brain without utilizing normal muscular channels.
In the second phase, or emulation, the technology is used to mimic existing technologies. The first movies simply recorded stage plays, and computer mice were used to select from lists of items much as they would have been with the numeric pad on a keyboard. Similarly, early brain-computer interfaces have aimed to emulate the functionality of mice and keyboards, with very few fundamental changes to the interfaces on which they operated. It is in this phase that the technology starts to be driven less by its novelty and starts to interest a wider audience drawn to the science of understanding and developing it more deeply.

Figure 2 - Asynchronous Switch Design
Finally, the technology hits the third phase, in which it attains maturity in its own right. In this phase, designers understand and exploit the intricacies of the new technology to build unique experiences that provide us with capabilities never before available. For example, the flashback and crosscut, as well as the bullet-time effect introduced more recently by the movie The Matrix, have become well-acknowledged idioms of the medium of film. Similarly, the mouse has become so well integrated into our notions of computing that it is extremely hard to imagine using current interfaces without such a device attached. It should be noted that in both these cases, more than forty years passed between the introduction of the technology and the widespread development and usage of these methods.
It is believed that brain-computer interface work is just now coming out of its infancy, and that the opportunity exists to move it from the proof-of-concept and emulation stages into maturity. However, to do this, we will not only have to continue the discovery and invention within the domain itself, but also start to build bridges and leverage researchers and work in other fields. Meanwhile, the human-computer interaction field continues to work toward expanding the effective information bandwidth between human and machine, and more importantly to design technologies that integrate seamlessly into our everyday tasks. Specifically, we believe there are several opportunities, though our views are necessarily constrained, and we hope that this book inspires further crossover and discussion. For example:

While the BCI community has largely focused on the very difficult mechanics of acquiring data from the brain, HCI researchers could add experience designing interfaces that make the most out of the scanty bits of information they have about the user and their intent. They also bring in a slightly different viewpoint, which may result in interesting innovation on the existing applications of interest. For example, while BCI researchers maintain admirable focus on providing patients who have lost muscular control with an alternate input device, HCI researchers might complement these efforts by considering the entire locked-in experience, including such factors as preparation, communication, isolation, and awareness.

Beyond the traditional definition of Brain-Computer Interfaces, HCI researchers have already started to push the boundaries of what we can do if we can peer into the user's brain, even if ever so roughly. Considering how these devices apply to healthy users in addition to the physically disabled, and how adaptive systems may take advantage of them, could push analysis methods as well as application areas.

The HCI community has also been particularly successful at systematically exploring and creating whole new application areas. In addition to thinking about using technology to fix existing pain points, or to alleviate difficult work, this community has sought scenarios in which technology can augment everyday human life in some way. We believe that we have only begun to scratch the surface of the set of applications that brain sensing technologies open, and hope that this book stimulates a much wider audience to begin considering these scenarios.

3.2 Brain Imaging to Directly Control Devices


3.2.1 Bypassing Physical Movement to Specify Intent
Most current brain-computer interface work has grown out of the neuroscience and medical fields, and satisfying patient needs has been a prime motivating force. Much of this work aims to improve the lives of patients with severe neuromuscular disorders such as Amyotrophic Lateral Sclerosis (ALS), also popularly known as Lou Gehrig's disease, brainstem stroke, or spinal cord injury. In the latter stages of these disorders, many patients lose all control of their physical bodies, including simple functions such as eye-gaze. Some even need help with vital functions such as breathing. However, many of these patients retain full control of their higher-level cognitive abilities.
While medical technologies that augment vital bodily functions have drastically extended the lifespan of these patients, these technologies do not alleviate the mental frustration or social isolation caused by having no way to communicate with the external world. Providing these patients with brain-computer interfaces that allow them to control computers directly with their brain signals could dramatically increase their quality of life. The complexity of this control ranges from simple binary decisions, to moving a cursor on the screen, to more ambitious control of mechanical prosthetic devices.

Figure 3 - EEG Transmission
Most current brain-computer interface research has been a logical extension of assistive methods in which one input modality is substituted for another. When users lose the use of their arms, they typically move to eye or head tracking, or even speech, to control their computers. However, when they lose control of their physical movement, the physiological function over which they have the most, and sometimes the only, control is their brain activity.

3.2.2 Learning to Control Brain Signals


To successfully use current direct-control brain-computer interfaces, users have to learn to intentionally manipulate their brain signals. To date, there have been two approaches for training users to control their brain signals (Curran and Stokes 2003). In the first, users are given specific cognitive tasks such as motor imagery to generate measurable brain activity. Using this technique the user can send a binary signal to the computer, for example, by imagining sequences of rest and physical activity such as moving their arms or doing high kicks. The second approach, called operant conditioning, provides users with continuous feedback as they try to control the interface. Users may think about anything (or nothing) so long as they achieve the desired outcome. Over many sessions, users acquire control of the interface without being consciously aware of how they are performing the task. Unfortunately, many users find this technique hard to master.
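To make the first approach concrete: motor imagery typically attenuates the mu rhythm (roughly 8-12 Hz) over the motor cortex, so a crude binary detector can threshold mu-band power against a resting baseline. The following is a minimal illustrative sketch, not the classifier of any cited system; the band, window length and threshold ratio are assumptions.

    import numpy as np

    def mu_band_power(eeg_window, fs=256.0, band=(8.0, 12.0)):
        """Power of the mu band in one EEG window (1-D array of samples)."""
        spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
        freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[mask].sum()

    def imagery_detected(eeg_window, rest_baseline, ratio=0.6):
        """Binary signal: motor imagery suppresses mu power below baseline."""
        return mu_band_power(eeg_window) < ratio * rest_baseline

    # Usage: calibrate the baseline on resting data, then stream 1 s windows.
    rest = np.random.randn(256)       # stand-in for recorded resting EEG
    baseline = mu_band_power(rest)
    live = np.random.randn(256)       # stand-in for a live window
    print(imagery_detected(live, baseline))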
Other researchers have designed interfaces that exploit the specific
affordances of brain control. One such interface presents a grid of
keys, each representing a letter or command (Sutter 1992). Each
row or column of the grid flashes in rapid succession, and the user is
asked to count the number of flashes that occur over the desired
key. The system determines the row and column of interest by
detecting an event-related signal called the P300 response, which
occurs in the parietal cortex about 300 milliseconds after the onset
of a significant stimulus.
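The selection logic of such a P300 speller can be sketched as follows. This is not Sutter's implementation; the epoch layout, sampling rate and scoring window are assumptions, and a real system would use a trained classifier rather than a raw amplitude average.

    import numpy as np

    def pick_target(epochs_by_stimulus, fs=256.0, window=(0.25, 0.40)):
        """P300-style selection sketch.

        epochs_by_stimulus: dict mapping a stimulus id (row or column index)
        to an array of shape (n_flashes, n_samples), each epoch time-locked
        to one flash of that row/column. Returns the stimulus whose averaged
        epoch has the largest mean amplitude in the 250-400 ms window, where
        the P300 response is expected.
        """
        lo, hi = int(window[0] * fs), int(window[1] * fs)
        def score(epochs):
            return epochs.mean(axis=0)[lo:hi].mean()
        return max(epochs_by_stimulus, key=lambda s: score(epochs_by_stimulus[s]))

    # Usage: rows and columns are scored separately; the winning (row, col)
    # pair addresses the key the user was counting flashes over.
    rows = {r: np.random.randn(10, 256) for r in range(6)}
    cols = {c: np.random.randn(10, 256) for c in range(6)}
    print(pick_target(rows), pick_target(cols))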
It is believed that there remains much work to be done in designing
interfaces that exploit our understanding of cognitive neuroscience
and that provide the maximum amount of control using the lowest
possible bit rate. We believe that expertise in human-computer
interaction can be leveraged to design novel interfaces that may be
generally applicable to brain-computer interfaces and low bit rate
interactions.

3.2.3 Evaluation of Potential Impact


We are still at a very early stage in brain-computer interface research. Because current systems require so much cognitive effort and produce such small amounts of control information (the best systems now get 25 bits/minute), they remain useful mainly in carefully controlled scenarios and only to users who have no motor alternatives. Much work has to be done before we are able to successfully replace motor movement with brain signals, even in the simplest of scenarios.
While researchers believe that these interfaces will get good enough
to vastly improve the lives of disabled users, not all are certain that
brain-computer interfaces will eventually be good enough to
completely replace motor movement even for able-bodied users. In
fact, many researchers have mixed feelings on whether or not this is
useful or advisable in many situations. However, we do foresee
niche applications in which brain-computer interfaces might be
useful for able-bodied people.
For example, since these interfaces could potentially bypass the lag
in mentally generating and executing motor movements, they would
work well in applications for which response times are crucial.
Additionally, they could be useful in scenarios where it is physically
difficult to move. Safety mechanisms on airplanes or spacecraft
could benefit from such interfaces. In these scenarios, pilots
experiencing large physical forces do not have much time to react to
impending disasters, and even with limited bandwidth brain control
could be valuable. Also, since brain control is intrinsically less
observable than physical movement, brain-computer interfaces may
be useful for covert operation, such as in command and control or
surveillance applications for military personnel.
Brain-computer interfaces could also be successful in games and entertainment applications. In fact, researchers have already begun to explore this lucrative area to exploit the novelty of such an input device in this large and growing market. One interesting example of such a game is Brainball, developed at the Interactive Institute in Sweden (Hjelm and Browall 2000). In this game, two players equipped with EEG are seated on opposite sides of a table. Players score simply by moving a ball on the table into the opponent's goal. The unusual twist to this game is that users move the ball by relaxing. The more relaxed the EEG senses the user to be, the more the ball moves. Hence, rather than strategic thoughts and intense actions, the successful player must learn to achieve calmness and inactivity.

3.3 Brain Imaging as an Indirect Communication Channel


3.3.1 Exploring Brain Imaging for End-User Applications
As HCI researchers, we are in the unique position to think about the opportunities offered by widespread adoption of brain-computer interfaces. While it is a remarkable endeavour to use brain activity as a novel replacement for motor movement, we think that brain-computer interfaces used in this capacity will probably remain tethered to a fairly niche market. Hence, in this book, we look beyond current research approaches for the potential to make brain imaging useful to the general end-user population in a wide range of scenarios.

Figure 4 - Brain-to-Machine Mechanism
These considerations have led to very different approaches in using brain imaging and brain-computer interfaces. Rather than building systems in which users intentionally generate brain signals to directly control computers, researchers have also sought to passively sense and model some notion of the user's internal cognitive state as they perform useful tasks in the real world. This approach is similar to efforts aimed at measuring emotional state with physiological sensors (e.g. Picard and Klein 2002). Like emotional state, cognitive state is a signal that we would never want the user to intentionally control, either because it would distract them from performing their tasks or because they are not able to articulate the information.
People are notoriously good at modelling the approximate cognitive state of other people using only external cues. For example, most people have little trouble determining that someone is deep in thought simply by looking at them. This ability mediates our social interactions and communication, and is something that is notably lacking in our interactions with computers. While we have attempted to build computer systems that make similar inferences, current models and sensors are not sensitive enough to pick up on the subtle external cues that represent internal cognitive state. With brain imaging, we can now directly measure what is going on in a user's brain, presumably making it easier for a computer to model this state.

3.3.2 Understanding Cognition in the Real World


Early neuroscience and cognitive psychology research was largely built upon case
studies of neurological syndromes that damaged small parts of the brain. By studying
the selective loss of cognitive functions caused by the damage, researchers were able
to understand how specific parts of the brain mediated different functions. More
recently, with improvements in brain imaging technologies, researchers have used
controlled experiments to observe specific brain activations that happen as a result of
particular cognitive activities. In both these approaches, the cognitive activities tested
are carefully constructed and studied in an isolated manner.
While isolating cognitive activities has its merits, we believe that measuring brain
activity as the user operates in the real world could lead to new insights. Researchers
are already building wearable brain imaging systems that are suitable for use outside
of the laboratory. These systems can be coupled with existing sensors that measure
external context so that we can correlate brain activity with the tasks that elicit this
activity. While the brain imaging device can be seen as a powerful sensor that informs
existing context sensing systems, context sensing systems can also be viewed as an
important augmentation to brain imaging devices.
Again, we believe that there are opportunities here that are currently underexplored.
Using this approach, we are able not only to measure cognitive activity in more
complex scenarios than we can construct in the laboratory, but also to study processes
that take long periods of time. This is useful in tasks for which the brain adapts slowly
or for tasks that cannot be performed on demand in sterile laboratory environments,
such as idea generation or the storage of contextual memory cues as information is
learned. Also, while neuroscience studies have focused on the dichotomy between
neurologically disabled and normal patients, we now have the opportunity to study
other individual differences, perhaps due to factors such as gender, expertise on a
given task, or traditional assessment levels of cognitive ability. Finally, we believe
that there exists the opportunity to study people as they interact with one another. This
can be used to explore the neural basis of social dynamics, or to attempt to perform
dynamic workload distribution between people collaborating on a project.
Furthermore, having data from multiple people operating in the real world over long periods of time might allow us to find patterns and build robust cognitive models that bridge the gap between current cognitive science and neuroscience theory.

3.3.3 Cognitive State as an Evaluation Metric


In a more controlled and applied setting, the cognitive state derived from brain
imaging could be used as an evaluation metric for either the user or for computer
systems. Since we can measure the intensity of cognitive activity as a user performs
certain tasks, we could potentially use brain imaging to assess cognitive aptitude
based on how hard someone has to work on a particular set of tasks. With proper task
and cognitive models, we might use these results to generalize performance
predictions in a much broader range of scenarios.
For example, using current testing methods, a user who spends a huge amount of cognitive effort working on test problems may rate similarly to someone who spent half the test time daydreaming, so long as they ended up with the same number of correct answers. However, it might be useful to know that the second user might perform better if the test got harder or if the testing scenario got more stressful. In entertainment scenarios such as games, it may be possible to quantify a user's immersion and attentional load. Some of the work in this book is aimed at validating brain imaging as a cognitive evaluation method and examining how it can be used to augment traditional methods.
Rather than evaluating the human, a large part of human-computer interaction
research is centred on the ability to evaluate computer hardware or software
interfaces. This allows us not only to measure the effectiveness of these interfaces, but
more importantly to understand how users and computers interact so that we can
improve our computing systems. Thus far, researchers have been only partially
successful in learning from performance metrics such as task completion times and
error rates. They have also used behavioural and physiological measures to infer
cognitive processes, such as mouse movement and eye gaze as a measure of attention,

or heart rate and galvanic skin response as measures of arousal and fatigue. However, there remain many cognitive processes that are hard to measure externally. For these, researchers typically resort to clever experimental design or to subjective questionnaires, which give them indirect metrics for specific cognitive phenomena. For example, it is still extremely difficult to accurately ascertain cognitive workload or the particular cognitive strategies used, such as verbal versus spatial memory encoding. Brain sensing promises a measure that more directly quantifies the cognitive utility of our interfaces. This could potentially provide powerful measures that either corroborate external measures or, more interestingly, shed light on interactions that we would never have derived from external measures alone. Various researchers are working to generalize these techniques and provide a suite of cognitive measures based on brain imaging.

Figure 5 - Eye-Ball Tracking

3.4 Conclusion

Brain-computer interfaces will increase acceptance by offering customized, intelligent help and training, especially for the non-expert user. Development of such a flexible interface paradigm raises several challenges in the areas of machine perception and automatic explanation. The teams doing research in this field have developed a single-position, brain-controlled switch that responds to specific patterns detected in spatiotemporal electroencephalograms (EEG) measured from the human scalp. We refer to this initial design as the Low-Frequency Asynchronous Switch Design. The EEG signal is low-frequency filtered and run through a fast Fourier transform before being displayed as a three-dimensional graphic. The data can then be piped into MIDI-compatible music programs. Furthermore, MIDI can be adjusted to control other external processes, such as robotics. The experimental control system is configured for the particular task being used in the evaluation. Real-Time Workshop generates all the control programs from Simulink models and C/C++ using MS Visual C++ 6.0. Analysis of data is mostly done within the MATLAB environment.

Figure 6 - EEG
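The described signal path (low-frequency filtering, a fast Fourier transform, then a hand-off to MIDI-driven processes) can be sketched as below. The filter order, cut-off frequency and window length are illustrative assumptions, since the report does not specify them.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def process_window(eeg_window, fs=256.0, cutoff=4.0):
        """Low-frequency filter an EEG window, then take its FFT magnitude,
        mirroring the described front end (all parameters assumed)."""
        b, a = butter(4, cutoff / (fs / 2.0), btype='low')  # 4th-order low-pass
        filtered = filtfilt(b, a, eeg_window)               # zero-phase filtering
        spectrum = np.abs(np.fft.rfft(filtered))            # magnitude spectrum
        return filtered, spectrum

    # The spectrum could then be rendered as a 3-D graphic or mapped onto a
    # MIDI control value (scaled to 0-127) for downstream processes.
    window = np.random.randn(512)                           # stand-in EEG window
    _, spec = process_window(window)
    midi_value = int(127 * spec[:8].sum() / max(spec.sum(), 1e-9))
    print(midi_value)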

3.4.1 Test Results Comparing Driver Accuracy With/Without BCI
1. Able-bodied subjects using imaginary movements could attain equal or better control accuracies than able-bodied subjects using real movements.
2. Subjects demonstrated activation accuracies in the range of 70-82% with false activations below 2%.
3. Accuracies using actual finger movements were observed in the range 36-83%.
4. The average classification accuracy of imaginary movements was over 99%.

The principle behind the whole mechanism is that the impulses of the human brain can be tracked and even decoded. The Low-Frequency Asynchronous Switch Design traces the motor neurons in the brain. When the driver attempts a physical movement, he or she sends an impulse to the motor neurons, which carry the signal to physical components such as the hands or legs. Hence we decode the message at the motor neuron to obtain maximum accuracy. By observing the sensory neurons we can monitor the eye movement of the driver.
As the eye moves, the cursor on the screen also moves, and it brightens when the driver concentrates on one particular point in the environment. The sensors, which are placed at the front and rear ends of the car, send live feedback of the environment to the computer. The steering wheel is turned through a specific angle by electromechanical actuators. The angle of turn is calibrated from the distance moved by the dot on the screen.
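This calibration can be pictured with a small sketch. The gain, dead zone and angle limit below are assumed values; the report gives no numbers for the actual mapping.

    def steering_angle(dot_dx_pixels, gain_deg_per_px=0.05,
                       dead_zone_px=10, max_angle_deg=30.0):
        """Map the horizontal displacement of the on-screen dot to a
        steering-wheel angle command for the electromechanical actuator."""
        if abs(dot_dx_pixels) < dead_zone_px:
            return 0.0                      # ignore jitter near the centre
        angle = gain_deg_per_px * dot_dx_pixels
        return max(-max_angle_deg, min(max_angle_deg, angle))

    # Example: the dot moved 200 px to the right of centre.
    print(steering_angle(200))              # -> 10.0 degrees to the right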

Figure 7 - Electromechanical Control Unit

CHAPTER 4


4.1 Automatic Navigation System


Advanced vehicle control and safety systems represent an important
and growing segment of the current research in the automation of
highway systems. The most important topics include automatic
vehicle localization, cruise control, traffic management, obstacle
detection, collision avoidance, etc.
The navigation system generates a set of collision avoidance
manoeuvres between the vehicle in consideration and other moving
vehicles as well as static obstacles on the highway.
Planning collision-free and safe manoeuvres for an unmanned
vehicle in dynamic environments such as highways is a difficult
problem and has been addressed by researchers only recently.
Several motion-planning and obstacle-avoidance techniques for
autonomous guided vehicles have been proposed in the literature.
Collision prediction and avoidance are implemented by selecting
vehicle speeds outside a set that would result in collision with a
given obstacle. Of course, it is assumed that the instantaneous state
(position and speed) of each mobile object in the scenery is
measurable. A collision-free trajectory is then obtained by searching
a tree of feasible avoidance manoeuvres, computed at discrete-time
intervals.


Later, this research work was generalized by considering obstacles moving along arbitrary trajectories. For this type of obstacle, the proposed method is able to generate local avoidance manoeuvres based on the current speed and path curvature of the moving obstacle.

Figure 8 - Sensors and Their Range
An obstacle-avoidance method for 2D motion planning, based on aerospace guidance systems, has also been presented in the literature. Collision prediction is achieved by a collision-cone approach. This concept reduces the engagement between two irregularly shaped obstacles to an equivalent engagement between a point and a circle. The position and speed of the obstacle are measurable by on-board sensors.
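The collision-cone concept can be stated compactly: once the engagement is reduced to a point versus a circle, a collision is predicted whenever the relative velocity points inside the cone of tangents from the point to the circle. The following 2-D sketch is one way to express this test; it is a reformulation of the published concept, not the cited implementation.

    import math

    def collision_cone_predicts(rel_pos, rel_vel, radius):
        """Point-vs-circle collision cone test in 2-D.

        rel_pos: (x, y) of the circle centre relative to the point.
        rel_vel: (vx, vy) of the point relative to the circle.
        radius:  radius of the circle (combined size of both objects).
        Returns True when the relative velocity lies inside the cone of
        tangents from the point to the circle, i.e. a collision is predicted.
        """
        dist = math.hypot(*rel_pos)
        if dist <= radius:
            return True                               # already in contact
        half_cone = math.asin(radius / dist)          # half-angle of the cone
        bearing = math.atan2(rel_pos[1], rel_pos[0])  # direction to the obstacle
        heading = math.atan2(rel_vel[1], rel_vel[0])  # direction of closing motion
        off_axis = abs((heading - bearing + math.pi) % (2 * math.pi) - math.pi)
        return off_axis < half_cone

    # Example: obstacle 50 m ahead, closing straight at it, combined radius 3 m.
    print(collision_cone_predicts((50.0, 0.0), (10.0, 0.0), 3.0))  # -> True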
Different collision-free manoeuvres for the unmanned vehicle are generated as different alternatives for avoiding a predicted collision with a moving vehicle (obstacle). The only available information about the mentioned obstacle is its current position and speed obtained by on-board sensors. As soon as new information from the world is provided by the sensors, the manoeuvre planner is intended to be executed, defining a closed-loop approach. The sensors provide the navigation system, at a predetermined sample rate, with the speeds and positions of every vehicle (obstacle) in sight.
Figure 8 shows a vehicle equipped with rings of ultrasonic sensors, infrared sensors, a laser scanner and GPS. Collision is predicted by computing the minimum translational distance between the motions of the unmanned and obstacle vehicles. The unknown motion of the obstacle is assumed to be estimated from the recent position and speed received from the sensors. When a collision is predicted, intermediate temporal positions which avoid the mentioned collision are determined by defining different manoeuvres for the unmanned vehicle.
Vehicles and other unforeseeable obstacles are modelled by spheres. The automatic generation of this type of geometric model, and its extensions, has been shown in the literature. The motions of the involved vehicles are represented by the spherical cylinder that contains the volume swept from their current positions to their expected future ones. Speeds along such estimated motions are assumed constant.

Figure 9 - Unmanned vehicle with on-board sensors

Rotations are not considered in most motion-planning algorithms for autonomous vehicles. In the presented application, it is assumed that vehicles move only by translation in a two-dimensional space. Another characteristic assumed by the proposed manoeuvre planner is that the non-holonomic constraint of the involved vehicles is discarded, owing to the minimum speed and the minimum curvature radius permitted on a highway.

4.1.1 Problem Formulation


A vehicle VU is modelled by a sphere and is formally represented by the following tuple of five parameter functions:

    Vu(t) = (pu(t), lu(t), θu(t), vu(t), ru)        (1)

where pu(t) is the position of the vehicle at time t in the lane identified by lu(t); pu(t) is also the centre of the sphere that envelops vehicle Vu. θu(t) is the steering-angle function, defined between the axes of the vehicle and of lane lu(t). vu(t) is the speed at time t. ru is the invariable radius of the sphere. The model of a vehicle is shown graphically in Figure 10. The sphere model introduces a margin of safety, avoiding an excessive approach in changing-lane manoeuvres.
The minimum time gap τmin is a parameter used for characterizing the minimum clearance, or distance, permitted between two consecutive vehicles located on the same lane. τmin depends on the speed of the following vehicle.

Figure 10/11 - Spherical cylinder representing the motion of a vehicle
Collision-prediction tests between the unmanned vehicle and nearby vehicles or obstacles are applied by considering a temporal-horizon parameter Δt that verifies Δt > τmin.
Using the speed and position of the unmanned vehicle VU at the current instant of time tS, a future position at time tS+Δt is estimated. This extrapolated position is obtained by assuming a constant speed over the time interval [tS, tS+Δt]. With the end sphere positions pU(tS) and pU(tS+Δt), a spherical cylinder, which describes the motion of the vehicle VU at a constant speed in [tS, tS+Δt], is defined. Position pU(tS+Δt) is updated, if necessary, to be located at the centre of the current lane lU(tS). See Figure 10/11 for the graphical description of a motion; for clarity, Figure 11 is represented in two dimensions.
The motion of the obstacle vehicle VB is computed based on a tracking algorithm that provides speed and position at the rate of the sensor sampling. During the inter-sampling time, it is assumed that the obstacle vehicle keeps moving in its current direction. The motion of a given vehicle VM is characterized by providing the two spheres representing the positions of the vehicle at times tS and tS+Δt. Consequently, a motion is defined by the following parameters:

    MVm = {pm(tS), pm(tS+Δt), rm, tS, tS+Δt}        (2)

The proposed motion representation contains all the infinite intermediate temporal positions between the start position pm(tS) and the final position pm(tS+Δt). These intermediate configurations pλm(t), with t ∈ [tS, tS+Δt], are characterized by a parameter λ ∈ [0,1].
Given the unmanned vehicle VU, let pu(tS) = (xu,tS, yu,tS) and pu(tS+Δt) = (xu,tS+Δt, yu,tS+Δt). Then

    pλu(t) = (x, y),   t ∈ [tS, tS+Δt]
    x = xu,tS + λ (xu,tS+Δt − xu,tS)        (3)
    y = yu,tS + λ (yu,tS+Δt − yu,tS)
    with λ ∈ [0,1]
    t = tS + λ ((tS+Δt) − tS) = tS + λ·Δt
This motion definition comes from the polytope and spherically-extended-polytope theories. Finally, for dynamics reasons, it is assumed that two functions amax(vu) and dmax(vu) are available: amax(vu) is the maximum acceleration at the current speed vu and, analogously, dmax(vu) is the maximum deceleration at vu. These functions depend on the dynamics of the unmanned vehicle, and they are used for discarding any generated manoeuvre that is dynamically unfeasible.
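Equations (1)-(3) translate almost directly into code. The sketch below builds the motion of equation (2) by constant-speed extrapolation, interpolates intermediate positions with λ as in equation (3), and applies the amax/dmax feasibility check; the constant acceleration limits are placeholder values.

    import math

    def extrapolate_motion(p, v, heading, r, ts, dt):
        """Build the motion {p(ts), p(ts+dt), r, ts, ts+dt} of eq. (2)
        by assuming constant speed v along the current heading."""
        p_end = (p[0] + v * dt * math.cos(heading),
                 p[1] + v * dt * math.sin(heading))
        return {'p_start': p, 'p_end': p_end, 'r': r, 'ts': ts, 'dt': dt}

    def intermediate_position(motion, lam):
        """Eq. (3): position at t = ts + lam*dt, for lam in [0, 1]."""
        (x0, y0), (x1, y1) = motion['p_start'], motion['p_end']
        return (x0 + lam * (x1 - x0), y0 + lam * (y1 - y0))

    def dynamically_feasible(v_now, v_planned, dt,
                             a_max=lambda v: 2.0, d_max=lambda v: 6.0):
        """Discard manoeuvres exceeding the acceleration/deceleration
        limits a_max(v), d_max(v); constant limits here are assumed."""
        dv = (v_planned - v_now) / dt
        return -d_max(v_now) <= dv <= a_max(v_now)

    m = extrapolate_motion((0.0, 0.0), 20.0, 0.0, 2.5, ts=0.0, dt=2.0)
    print(intermediate_position(m, 0.5), dynamically_feasible(20.0, 17.0, 2.0))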

4.1.2 Collision Prediction and Avoidance for Mobile Objects


The collision-prediction procedure is based on the computation of the Minimum Translational Distance (MTD) between the spherical cylinders representing the motions of the two involved vehicles.
The proposed method for distance computation between two spherical cylinders is based on an extension of the well-known GJK algorithm. This algorithm computes the distance between two polytopes as the separation between the origin point O (of the coordinate reference) and their Minkowski difference set.
In this way, if the origin point is located inside the Minkowski difference set, the involved polytopes are colliding; otherwise, the separation distance between the mentioned objects is computed. Let us consider the following motions corresponding to two vehicles VU and VB:

    MVu = {pu(tS), pu(tS+Δt), ru, tS, tS+Δt}
    MVB = {pB(tS), pB(tS+Δt), rB, tS, tS+Δt}        (4)

with

    pu(tS) = (xu,tS, yu,tS);  pu(tS+Δt) = (xu,tS+Δt, yu,tS+Δt)
    pB(tS) = (xB,tS, yB,tS);  pB(tS+Δt) = (xB,tS+Δt, yB,tS+Δt)        (5)

The Minkowski difference of two spherical cylinders is a spherical plane defined by four spherical vertices {S0, S1, S2, S3}. Each sphere Si = (ci, ri), where ci is its centre and ri its radius, is computed as follows:

    c0 = pu(tS) − pB(tS)
    c1 = pu(tS) − pB(tS+Δt)
    c2 = pu(tS+Δt) − pB(tS)        (6)
    c3 = pu(tS+Δt) − pB(tS+Δt)
    r0 = r1 = r2 = r3 = ru + rB

The definition of the Minkowski difference set of the two involved motions is shown graphically in Figures 12 and 13; both figures are represented in a two-dimensional space for clarity. Note that the origin point is inside the Minkowski difference set and, consequently, the motions are colliding.

Figure 12 - Motions of the unmanned and obstacle vehicles starting an overtaking manoeuvre

Figure 13 - Spherical plane representing the Minkowski difference of the motions depicted in Figure 12

However, it is obvious that a collision between the two motion cylinders does not imply a collision between the associated vehicles.
Considering the spherical cylinder defined by spheres S0 = (c0, r0) and S3 = (c3, r3) in the Minkowski difference set, with r0 = r3, a set of infinite intermediate spheres Sλ03 = (cλ03, rλ03) can be determined by applying (3):

    cλ03 = c0 + λ (c3 − c0),   with λ ∈ [0,1]        (7)
    rλ03 = r0

Substituting in (7) the expressions given by (6), one obtains

    cλ03 = pu(tS) − pB(tS) + λ (pu(tS+Δt) − pB(tS+Δt) − pu(tS) + pB(tS))        (8)
    rλ03 = ru + rB,   with λ ∈ [0,1]

Figure 14 - Time-continuous representation, characterized by λ, of vehicles VU and VB along their respective motions

Equation (8) can also be expressed as

    cλ03 = [pu(tS) + λ (pu(tS+Δt) − pu(tS))] − [pB(tS) + λ (pB(tS+Δt) − pB(tS))]        (9)
    rλ03 = ru + rB,   with λ ∈ [0,1]

In accordance with (3), the expressions

    [pu(tS) + λ (pu(tS+Δt) − pu(tS))]
    [pB(tS) + λ (pB(tS+Δt) − pB(tS))]        (10)

state the motions of vehicles VU and VB respectively. Therefore the expression given by (9) represents the translational difference between both vehicle motions in continuous time and, consequently, it is concluded that the minimum translational distance between the two mobile vehicles (spheres) is the distance from the origin point to the following spherical cylinder (motion):

    {pu(tS) − pB(tS), pu(tS+Δt) − pB(tS+Δt), ru + rB, tS, tS+Δt}        (11)


Additionally, note that the parameter λ ∈ [0,1] acquires the meaning of time. This property is a consequence of the temporal equation given by (3); Figure 14 shows this temporal meaning of λ graphically.
In this way, the positions of the involved vehicles at a given time t ∈ [tS, tS+Δt] are characterized by the following:

    λ = (t − tS) / Δt        (12)

The minimum translational distance (MTD) between the origin point and the spherical cylinder defined by spheres S0 and S3, i.e., between motions MVu and MVB, is obtained as follows:

    MTD(MVu, MVB) = dO − (ru + rB)        (13)

Figure 15 - Computation of the MTD from the origin point O to the spherical cylinder determined by spheres S0 and S3

where dO is the distance from the origin point O to the segment defined by centres c0 and c3. dO is computed by projecting the origin point onto the mentioned segment; see Figure 15 for graphical details. The projection of the origin point O is obtained by means of the computation of the parameter λ, as follows:

    λ = − (Δc03 · Δc0) / (Δc03 · Δc03),   with Δc03 = c3 − c0,  Δc0 = c0 − O        (14)

Consequently, Oλ is the projection of the origin point onto the structure defined by the centres of the mentioned spheres, but only when λ verifies λ ∈ [0,1]. If λ ∉ [0,1], λ is saturated to the corresponding extreme value, zero or one. The projection Oλ is then determined by the following equation:

    Oλ = c0 + λ (c3 − c0)        (15)

and the parameter dO is obtained as follows:

    dO = ‖Oλ − O‖ = ‖Oλ‖        (16)
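Equations (11)-(16) reduce collision prediction to a point-to-segment distance. The following sketch computes the MTD between two motions along exactly those lines; a non-positive result predicts a collision.

    def mtd(motion_u, motion_b):
        """Minimum translational distance between two motions, eqs. (11)-(16).

        Each motion is {'p_start': (x, y), 'p_end': (x, y), 'r': radius}.
        A non-positive return value means a collision is predicted.
        """
        # Relative motion cylinder of eq. (11): centres c0 and c3.
        c0 = tuple(u - b for u, b in zip(motion_u['p_start'], motion_b['p_start']))
        c3 = tuple(u - b for u, b in zip(motion_u['p_end'], motion_b['p_end']))
        d = (c3[0] - c0[0], c3[1] - c0[1])                  # delta_c03 = c3 - c0
        dd = d[0] ** 2 + d[1] ** 2
        # Eq. (14): project the origin onto segment c0-c3 ...
        lam = 0.0 if dd == 0 else -(c0[0] * d[0] + c0[1] * d[1]) / dd
        lam = max(0.0, min(1.0, lam))                       # ... saturating lambda
        proj = (c0[0] + lam * d[0], c0[1] + lam * d[1])     # eq. (15)
        d_o = (proj[0] ** 2 + proj[1] ** 2) ** 0.5          # eq. (16)
        return d_o - (motion_u['r'] + motion_b['r'])        # eq. (13)

    mu = {'p_start': (0.0, 0.0), 'p_end': (40.0, 0.0), 'r': 2.5}
    mb = {'p_start': (30.0, 1.0), 'p_end': (10.0, 1.0), 'r': 2.5}
    print(mtd(mu, mb))   # negative -> the two motions collide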

4.1.3 Manoeuvre Planning for an Unmanned Vehicle
The proposed manoeuvre-planning algorithm is divided into two different stages (a minimal sketch of this two-stage loop follows the list):

Collision test with the immediately preceding vehicle

Collision-free test for a planned overtaking manoeuvre
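The two stages can be pictured as the following schematic loop; the function and argument names are hypothetical, and mtd() stands for the minimum-translational-distance routine of equations (11)-(16).

    def plan_manoeuvre(ego_motion, overtake_motion, preceding_motion,
                       target_lane_motions, mtd):
        """Two-stage manoeuvre planning, run at every sensor update.

        Stage 1: collision test against the immediately preceding vehicle.
        Stage 2: if a collision is predicted, test the planned overtaking
        motion against every vehicle on the lane to be used; if that lane
        is not free either, fall back to braking.
        """
        if mtd(ego_motion, preceding_motion) > 0:
            return 'keep_lane'                # stage 1: no collision predicted
        if all(mtd(overtake_motion, m) > 0 for m in target_lane_motions):
            return 'overtake'                 # stage 2: overtaking lane is free
        return 'brake'                        # last resort: brake in-lane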

4.2 Conclusion
A fast computational method has been developed for generating collision-free manoeuvres for the automated driving of an unmanned vehicle in automated highway systems.
The manoeuvre-planning algorithm is run as frequently as the sensors provide positions and speeds of the vehicles in sight.
The proposed manoeuvre-planning technique is based on the computation of the minimum translational distance between the motions of the unmanned vehicle and the obstacles in sight.
When a collision is predicted, a braking manoeuvre and two overtaking manoeuvres (left-lane and right-lane) are generated. Before an overtaking manoeuvre is started, a possible collision with any vehicle on the lane to be used for the overtaking is tested.

CHAPTER 5
5.1 Conclusion
When the above requirements are satisfied, and if this car becomes cost-effective, we shall witness a revolutionary change in society, where the demarcation between the able and the disabled vanishes. The integration of bioelectronics with automotive systems is thus essential for developing efficient and futuristic vehicles, which we shall soon witness helping the disabled in every manner in the field of transportation.
Brain-computer interfaces pose a great opportunity to interact with highly intelligent systems such as autonomous vehicles. While relying on the car as a smart assistance system, they allow a passenger to gain control of the very essential aspect of driving without the need to use arms or legs. Even while legal issues remain for public deployment, this could already enable a wide range of disabled people to command a vehicle in closed environments such as parks, zoos, or the inside of buildings.
Free driving with the brain and the Brain Chooser give a glimpse of what is already possible with brain-computer interfaces for commanding autonomous cars. Modifying the route of a vehicle with a BCI is already an interesting option for applications that help disabled people become more mobile. It has been shown that free driving with a BCI is possible, but the control is still too inaccurate for letting mind-controlled cars operate in open traffic. The semi-autonomous Brain Chooser overcame this weakness, and decisions were performed with high precision. Improvements to the BCI device could have multiple positive effects. One effect, of course, would be more accurate control of the car, i.e., more accurate steering and velocity control in free-drive mode. Further, it is desirable to be able to distinguish more than four brain patterns in the future. This would enable the driver to give further commands, e.g., switching lights off and on, or setting the on-board navigation system to the desired location by thought alone.
More detailed experiments regarding the decline of concentration over time, within the context of car driving, will be future work as well.

List of Figures

Figure 1 - Brain Controlled Car for Disabled
Figure 2 - Asynchronous Switch Design
Figure 3 - EEG Transmission
Figure 4 - Brain-to-Machine Mechanism
Figure 5 - Eye-Ball Tracking
Figure 6 - EEG
Figure 7 - Electromechanical Control Unit
Figure 8 - Sensors and Their Range
Figure 9 - Unmanned vehicle with on-board sensors
Figure 10 - Spherical cylinder representing the motion of a vehicle
Figure 11 - Spherical cylinder representing the motion of a vehicle
Figure 12 - Motions of the unmanned and obstacle vehicles starting an overtaking manoeuvre
Figure 13 - Spherical plane representing the Minkowski difference of motions
Figure 14 - Time-continuous representation, characterized by λ, of vehicles VU and VB along their respective motions
Figure 15 - Computation of the MTD from the origin point O to the spherical cylinder determined by spheres S0 and S3
Figure 16 - Manoeuvres planned (braking, left-lane and right-lane overtaking) for an unmanned vehicle on a two-lane highway with two vehicles (obstacles)

References

1. Ebook: "A Navigation System for Unmanned Vehicles in Automated Highway Systems" by Enrique J. Bernabeu, Josep Tornero and Masayoshi Tomizuka.
2. Ebook: "Brain-Computer Interfaces and Human-Computer Interaction" by Desney Tan and Anton Nijholt.
3. Ebook: "Semi-Autonomous Car Control Using Brain Computer Interfaces" by Daniel Gohring, David Latotzky, Miao Wang and Raul Rojas.
4. Ebook: "Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig.

