
Chapter 1
INTRODUCTION

1.1 About the Project


We investigate the visual and vocal modalities of interaction with computer systems. We
focus our attention on the integration of visual and vocal interface as possible replacement and/or
additional modalities to enhance human-computer interaction. We present a new framework for
employing eye gaze as a modality of interface. While voice commands, as a means of interaction with
computers, have been around for a number of years, integration of both the vocal interface and the
visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel
and promises to open the horizons for new applications where a hand-mouse interface provides little
or no apparent support to the task to be accomplished. We present an array of applications to illustrate
the new framework and eye-voice integration.

User-computer dialogues are typically one-sided, with the bandwidth from computer to user
far greater than that from user to computer. The movement of a user's eyes can provide a convenient,
natural, and high-bandwidth source of additional user input, to help redress this imbalance. We
therefore investigate the introduction of eye movements as a computer input medium. Our emphasis
is on the study of interaction techniques that incorporate eye movements into the user-computer
dialogue in a convenient and natural way. We also examine such interaction techniques and the broader issues raised by
non-command-based interaction styles. This report discusses some of the human factors and technical
considerations that arise in trying to use eye movements as an input medium, describes our approach
and the first eye movement-based interaction techniques that we have devised and implemented in
our laboratory, and considers eye movement-based interaction as an exemplar of a new, more general
class of non-command-based user-computer interaction.


Chapter 2

LITERATURE SURVEY

The 1968 science fiction epic and classic film 2001: A Space Odyssey was considered a
source of motivation for speech and vision control research, both on the academic and industrial
fronts. Hollywood has also gone as far as producing movies where a computer tracking eyes could tell
what a person is seeing. It was an artist's dream, leading to substantial research and funding to
understand human vision and voice as modalities of control of a computer system. Computer systems
interact with the outside world in the form of specialized input/output subsystems which perform
basic processing in a limited communication band. Humans on the other hand have much wider
bandwidth for communication not only with one another but most notably with machines. A
communication channel between a computer and a human is not isotropic [1]. Unlike humans,
computers have a constrained interface through which they can interact with the outside world.

Human-computer interaction addresses [10, 11] five primary interaction modes: menu
selection, form fill-in, command line, natural language and direct manipulation. In a typical user
environment, it is customary to find one or more of such modes. Direct manipulation, however, by far
has distinguished itself from the rest as it enables users to easily and naturally select and manipulate
an object in isolation or in conjunction with some other object with the least amount of effort. In a
graphical user interface environment, direct manipulation is mostly accomplished via the use of a
pointing device such as the electromechanical mouse.

Direct manipulation requires user interface designers to rethink their strategies in creating
visual representations that are predictable and controllable and including interface modalities that
otherwise might not be available. The success of the use of pointing devices such as the mouse was in
its ability to create the parallelism with human skills such as pointing, grabbing and moving objects.
But as we perform such tasks we employ the visual elements that actually create the mental image of
things that we need to do to accomplish such tasks. In other words, our hands do follow our eyes.


Eye gaze, within the direct manipulation paradigm, could easily afford possibilities of
interface with computers as yet another input device, mimicking a pointing device, such as the
electro-mechanical mouse. Eye gaze draws its strength from manipulating non-textual objects, in an
environment where eye fixations are received as a continuous stream of input data and which are
interpreted on a real-time basis. Unlike pointing devices, where actions take place on the basis of
discrete events, such as pressing a button, eye gaze tracking depends on receiving a continuous stream of data
that needs to be manipulated, processed and interpreted, as processing of other tasks continues. The
programming environment that serves such a paradigm is multi-threaded.

Eye gaze trace data have a relatively simple structure: the x and y coordinates of the point of gaze (or
fixation) on the screen, together with time information. But to make any functional sense of the data, it has to be
processed on a spatio-temporal basis so that events can be modelled to fit those of a pointing device.
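
As a minimal illustration of what such a trace record might look like in C (the structure and field names below are ours, not taken from any particular tracker), together with a simple spatio-temporal test of the kind described above:

    typedef struct {
        unsigned long t_ms;   /* timestamp of the sample, in milliseconds        */
        int x;                /* horizontal point of gaze, in screen pixels      */
        int y;                /* vertical point of gaze, in screen pixels        */
        int fixation;         /* non-zero while the tracker reports a fixation   */
    } GazeSample;

    /* Spatio-temporal test: do two samples belong to the same dwell, given a
       tolerance box (in pixels) and a maximum time gap (in milliseconds)?       */
    static int same_dwell(const GazeSample *a, const GazeSample *b,
                          int box_px, unsigned long gap_ms)
    {
        int dx = a->x - b->x;
        int dy = a->y - b->y;
        if (dx < 0) dx = -dx;
        if (dy < 0) dy = -dy;
        return dx <= box_px && dy <= box_px && (b->t_ms - a->t_ms) <= gap_ms;
    }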


Chapter 3
SYSTEM ANALYSIS

3.1 Need for the Proposed System

The proposed system is needed so that physically disabled persons can control computer activities
through natural means such as voice and eye actions. These gestures are sensed by special sensors,
processed by a controller and passed on to the PC.

Nowadays computer usage is essential for everyone in day-to-day life. An able-bodied person can
use and operate a computer easily, but a physically disabled person who needs a computer for some
of his or her work cannot. To address this problem, the system acts as an additional modality for such
users: it senses eye gestures, recognizes voice and hand gestures, and processes them using a
controller.

Hand gesture recognition provides an intelligent, natural and convenient way of human-computer
interaction (HCI). Sign language recognition (SLR) and gesture-based control are two major
applications of hand gesture recognition technology. SLR aims to interpret sign languages
automatically by computer in order to help the deaf communicate with the hearing society
conveniently, since sign language is a highly structured and large set of symbolic human gestures.

As one of the HCI research areas, a multimodal user interface allows the user to interact with a computer by
using his or her natural communication modalities, such as speech, pen, touch, gestures, eye gaze,
and facial expression.


3.2 Existing System

The existing system supports only voice recognition and is therefore not helpful for all physically
disabled persons; hand gestures and eye gestures are not sensed by it. In addition, the sensors used
are less effective, the hardware is bulky, the cost is high and the power consumption is very high.

3.2.1 Drawbacks of the Existing System

Compatible only with voice recognition.
High power consumption.
Bigger hardware size.
Higher cost.

3.3 Proposed System


Our embedded project presents the hardware and software co-design and implementation of
sensors for recognizing simple hand gestures, eye motion and speech.

Our embedded system uses face, speech and hand detection as a tool to detect and track
gestures (face and hand motion). The detected information is given to an ARM microcontroller.

The sensors are connected to the microcontroller, the signals are processed inside the ARM7,
and finally the information is passed to the PC through a USB device.

We continuously scan for human gestures, voice commands and eye-blink actions; as soon as an
action is detected, the corresponding sensor data are captured. The microcontroller stores all this
data in its internal memory.

As soon as a gesture is detected, the signal is processed in the ARM7; since the required processing
is performed on the controller itself, no additional external hardware blocks are needed. This
controller is very low cost, consumes very little power, and effectively senses and serves the user's
needs in interacting with the computer.
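
A rough sketch of how such a continuous scan could be organised in the controller firmware is given below; every helper function name here is an assumption used only for illustration, not one of the project's actual drivers.

    #include <stdint.h>

    typedef enum { EVT_NONE, EVT_EYE_BLINK, EVT_HAND_GESTURE, EVT_VOICE_CMD } EventType;

    /* Assumed helper routines: these stand in for the real sensor drivers.       */
    extern int  eye_blink_detected(void);      /* 1 when the eye blink sensor fires  */
    extern int  hand_gesture_detected(void);   /* 1 when a hand gesture is sensed    */
    extern int  voice_command_ready(void);     /* 1 when the voice module has a word */
    extern void store_event(EventType e);      /* log the event in internal memory   */
    extern void usb_send_event(EventType e);   /* forward the event to the PC        */

    int main(void)
    {
        for (;;) {                             /* scan the three inputs continuously */
            EventType e = EVT_NONE;
            if (eye_blink_detected())         e = EVT_EYE_BLINK;
            else if (hand_gesture_detected()) e = EVT_HAND_GESTURE;
            else if (voice_command_ready())   e = EVT_VOICE_CMD;

            if (e != EVT_NONE) {
                store_event(e);                /* keep a copy in internal memory     */
                usb_send_event(e);             /* and pass it on to the computer     */
            }
        }
        return 0;
    }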


3.3.1 Block diagram

Fig 3.2 Prototype of the proposed system (block diagram): the hand and eye-blink sensors, the speech recognition module and the power supply connect to the LPC2127 (ARM7) microcontroller, which communicates with the computer through a USB device.

3.3.2 Advantages of the Proposed System

A system that is more efficient, reliable and effective.
Overcomes the limitations of existing systems.
A smart, economical system developed at low cost.
Very few systems have so far been implemented for physically disabled people.
Provides all the useful utilities that help the physically disabled use a computer for their work.
Able-bodied persons can also use this system to avoid health issues.


3.4 Requirements Specification

3.4.1 Hardware Specification

Microcontroller[LPC2127 (ARM7)]
Voice Recognition Module
Eye Blink Sensor
Relay Unit
Power Supply
USB Cable

3.4.2 Software Requirements

Embedded C
Keil C Compiler IDE
Flash Programmer


Chapter 4
SYSTEM DESIGN
4.1. Hardware Design

Fig 4.1 Block diagram: the hand and eye-blink sensors, the speech recognition module and the power supply connect to the LPC2127 (ARM7) microcontroller, which communicates with the computer through a USB device.


4.1.2 Microcontroller [LPC2127 (ARM7)]

The LPC2127/LPC2129 are based on a 16/32 bit ARM7TDMI-S CPU with real-time
emulation and embedded trace support, together with 128/256 kilobytes (kB) of embedded high
speed flash memory. A 128-bit wide memory interface and a unique accelerator architecture enable
32-bit code execution at maximum clock rate. For critical code size applications, the alternative 16-
bit Thumb Mode reduces code by more than 30 % with minimal performance penalty.

With their compact 64 pin package, low power consumption, various 32-bit timers, 4-channel
10-bit ADC, 2 advanced CAN channels, PWM channels and 46 GPIO lines with up to 9 external
interrupt pins these microcontrollers are particularly suitable for automotive and industrial control
applications as well as medical systems and fault-tolerant maintenance buses. With a wide range of
additional serial communications interfaces, they are also suited for communication gateways and
protocol converters as well as many other general-purpose applications.

Key features

16/32-bit ARM7TDMI-S microcontroller in a tiny LQFP64 package.
16 kB on-chip static RAM.
128/256 kB on-chip Flash program memory. The 128-bit wide interface/accelerator enables high-speed 60 MHz operation.
In-System Programming (ISP) and In-Application Programming (IAP) via on-chip boot loader software. Flash programming takes 1 ms per 512-byte line; single sector or full chip erase takes 400 ms.
EmbeddedICE-RT interface enables breakpoints and watchpoints. Interrupt service routines can continue to execute while the foreground task is debugged with the on-chip RealMonitor software.
Embedded Trace Macrocell enables non-intrusive, high-speed, real-time tracing of instruction execution.


Fig 4.3 Pin configuration of LPC2127/29 (ARM7)


Fig 4.4 Block diagram of LPC2127/29 (ARM7)

Pin Description


Symbol, pin number, type and description:

P0.0 to P0.31 (pins 19, 21, 22, 26, 27, 29-31, 33-35, 37-39, 41, 45-47, 53-55, 1-3, 5, 9, 11, 13-15), I/O: Port 0 is a 32-bit bi-directional I/O port with individual direction controls for each bit. The operation of port 0 pins depends upon the pin function selected via the Pin Connect Block. Pins 26 and 31 of port 0 are not available.

P0.0 (pin 19): O TxD0, transmitter output for UART0; O PWM1, Pulse Width Modulator output 1.
P0.1 (pin 21): I RxD0, receiver input for UART0; O PWM3, Pulse Width Modulator output 3; I EINT0, external interrupt 0 input.
P0.2 (pin 22): I/O SCL, I2C clock input/output, open-drain output (for I2C compliance); I CAP0.0, capture input for Timer 0, channel 0.
P0.3 (pin 26): I/O SDA, I2C data input/output, open-drain output (for I2C compliance); O MAT0.0, match output for Timer 0, channel 0; I EINT1, external interrupt 1 input.
P0.4 (pin 27): I/O SCK0, serial clock for SPI0 (SPI clock output from master or input to slave); I CAP0.1, capture input for Timer 0, channel 1.
P0.5 (pin 29): I/O MISO0, Master In Slave Out for SPI0 (data input to SPI master or data output from SPI slave); O MAT0.1, match output for Timer 0, channel 1.
P0.6 (pin 30): I/O MOSI0, Master Out Slave In for SPI0 (data output from SPI master or data input to SPI slave); I CAP0.2, capture input for Timer 0, channel 2.
P0.7 (pin 31): I SSEL0, slave select for SPI0 (selects the SPI interface as a slave); O PWM2, Pulse Width Modulator output 2; I EINT2, external interrupt 2 input.
P0.8 (pin 33): O TxD1, transmitter output for UART1; O PWM4, Pulse Width Modulator output 4.
P0.9 (pin 34): I RxD1, receiver input for UART1; O PWM6, Pulse Width Modulator output 6; I EINT3, external interrupt 3 input.
P0.10 (pin 35): O RTS1, Request To Send output for UART1; I CAP1.0, capture input for Timer 1, channel 0.
P0.11 (pin 37): I CTS1, Clear To Send input for UART1; I CAP1.1, capture input for Timer 1, channel 1.
P0.12 (pin 38): I DSR1, Data Set Ready input for UART1; O MAT1.0, match output for Timer 1, channel 0.
P0.13 (pin 39): O DTR1, Data Terminal Ready output for UART1; O MAT1.1, match output for Timer 1, channel 1.
P0.14 (pin 41): I DCD1, Data Carrier Detect input for UART1; I EINT1, external interrupt 1 input. Note: a LOW on this pin while RESET is LOW forces the on-chip boot loader to take control of the part after reset.
P0.15 (pin 45): I RI1, Ring Indicator input for UART1; I EINT2, external interrupt 2 input.
P0.16 (pin 46): I EINT0, external interrupt 0 input; O MAT0.2, match output for Timer 0, channel 2; I CAP0.2, capture input for Timer 0, channel 2.
P0.17 (pin 47): I CAP1.2, capture input for Timer 1, channel 2; I/O SCK1, serial clock for SPI1 (SPI clock output from master or input to slave); O MAT1.2, match output for Timer 1, channel 2.
P0.18 (pin 53): I CAP1.3, capture input for Timer 1, channel 3; I/O MISO1, Master In Slave Out for SPI1 (data input to SPI master or data output from SPI slave); O MAT1.3, match output for Timer 1, channel 3.
P0.19 (pin 54): O MAT1.2, match output for Timer 1, channel 2; I/O MOSI1, Master Out Slave In for SPI1 (data output from SPI master or data input to SPI slave); I CAP1.2, capture input for Timer 1, channel 2.
P0.20 (pin 55): O MAT1.3, match output for Timer 1, channel 3; I SSEL1, slave select for SPI1 (selects the SPI interface as a slave); I EINT3, external interrupt 3 input.
P0.21 (pin 1): O PWM5, Pulse Width Modulator output 5; I CAP1.3, capture input for Timer 1, channel 3.
P0.22 (pin 2): I CAP0.0, capture input for Timer 0, channel 0; O MAT0.0, match output for Timer 0, channel 0.
P0.23 (pin 3): I RD2, CAN2 receiver input.
P0.24 (pin 5): O TD2, CAN2 transmitter output.
P0.25 (pin 9): I RD1, CAN1 receiver input.
P0.27 (pin 11): I AIN0, A/D converter input 0 (this analog input is always connected to its pin); I CAP0.1, capture input for Timer 0, channel 1; O MAT0.1, match output for Timer 0, channel 1.
P0.28 (pin 13): I AIN1, A/D converter input 1 (always connected to its pin); I CAP0.2, capture input for Timer 0, channel 2; O MAT0.2, match output for Timer 0, channel 2.
P0.29 (pin 14): I AIN2, A/D converter input 2 (always connected to its pin); I CAP0.3, capture input for Timer 0, channel 3; O MAT0.3, match output for Timer 0, channel 3.
P0.30 (pin 15): I AIN3, A/D converter input 3 (always connected to its pin); I EINT3, external interrupt 3 input; I CAP0.0, capture input for Timer 0, channel 0.

P1.0 to P1.31 (pins 16, 12, 8, 4, 48, 44, 40, 36, 32, 28, 24, 64, 60, 56, 52, 20), I/O: Port 1 is a 32-bit bi-directional I/O port with individual direction controls for each bit. The operation of port 1 pins depends upon the pin function selected via the Pin Connect Block. Pins 0 through 15 of port 1 are not available.

P1.16 (pin 16): O TRACEPKT0, trace packet bit 0; standard I/O port with internal pull-up.
P1.17 (pin 12): O TRACEPKT1, trace packet bit 1; standard I/O port with internal pull-up.
P1.18 (pin 8): O TRACEPKT2, trace packet bit 2; standard I/O port with internal pull-up.
P1.19 (pin 4): O TRACEPKT3, trace packet bit 3; standard I/O port with internal pull-up.
P1.20 (pin 48): O TRACESYNC, trace synchronization; standard I/O port with internal pull-up. Note: a LOW on this pin while RESET is LOW enables pins P1.25:16 to operate as a Trace port after reset.
P1.21 (pin 44): O PIPESTAT0, pipeline status bit 0; standard I/O port with internal pull-up.
P1.22 (pin 40): O PIPESTAT1, pipeline status bit 1; standard I/O port with internal pull-up.
P1.23 (pin 36): O PIPESTAT2, pipeline status bit 2; standard I/O port with internal pull-up.
P1.24 (pin 32): O TRACECLK, trace clock; standard I/O port with internal pull-up.
P1.25 (pin 28): I EXTIN0, external trigger input; standard I/O with internal pull-up.
P1.26 (pin 24): I/O RTCK, returned test clock output, an extra signal added to the JTAG port that assists debugger synchronization when the processor frequency varies; bi-directional pin with internal pull-up. Note: a LOW on this pin while RESET is LOW enables pins P1.31:26 to operate as a Debug port after reset.
P1.27 (pin 64): O TDO, test data out for the JTAG interface.
P1.28 (pin 60): I TDI, test data in for the JTAG interface.
P1.29 (pin 56): I TCK, test clock for the JTAG interface.
P1.30 (pin 52): I TMS, test mode select for the JTAG interface.
P1.31 (pin 20): I TRST, test reset for the JTAG interface.

TD1 (pin 10): O TD1, CAN1 transmitter output.

RESET (pin 57): I, external reset input. A LOW on this pin resets the device, causing I/O ports and peripherals to take on their default states and processor execution to begin at address 0. TTL with hysteresis, 5 V tolerant.

XTAL1 (pin 62): I, input to the oscillator circuit and internal clock generator circuits.
XTAL2 (pin 61): O, output from the oscillator amplifier.

VSS (pins 6, 18, 25, 42, 50): I, ground: 0 V reference.
VSSA (pin 59): I, analog ground: 0 V reference. This should nominally be the same voltage as VSS, but should be isolated to minimize noise and error.
VSSA_PLL (pin 58): I, PLL analog ground: 0 V reference. This should nominally be the same voltage as VSS, but should be isolated to minimize noise and error.
V18 (pins 17, 49): I, 1.8 V core power supply: the power supply voltage for internal circuitry.
V18A (pin 63): I, analog 1.8 V core power supply: the power supply voltage for internal circuitry. This should be nominally the same voltage as V18 but should be isolated to minimize noise and error.
V3 (pins 23, 43, 51): I, 3.3 V pad power supply: the power supply voltage for the I/O ports.
V3A (pin 7): I, analog 3.3 V pad power supply: this should be nominally the same voltage as V3 but should be isolated to minimize noise and error.

4.1.2.3 Functional description

4.1.2.3.1 Architectural overview

The ARM7TDMI-S is a general purpose 32-bit microprocessor, which offers high performance and
very low power consumption. The ARM architecture is based on Reduced Instruction Set Computer
(RISC) principles, and the instruction set and related decode mechanism are much simpler than those
of microprogrammed Complex Instruction Set Computers. This simplicity results in a high instruction
throughput and impressive real-time interrupt response from a small and cost-effective processor core.
Pipeline techniques are employed so that all parts of the processing and memory systems can
operate continuously. Typically, while one instruction is being executed, its successor is being
decoded, and a third instruction is being fetched from memory.
The ARM7TDMI-S processor also employs a unique architectural strategy known as Thumb,
which makes it ideally suited to high-volume applications with memory restrictions, or applications
where code density is an issue.
The key idea behind Thumb is that of a super-reduced instruction set. Essentially, the
ARM7TDMI-S processor has two instruction sets:
The standard 32-bit ARM set.
A 16-bit Thumb set.
The Thumb set's 16-bit instruction length allows it to approach twice the density of standard
ARM code while retaining most of the ARM's performance advantage over a traditional 16-bit
processor using 16-bit registers. This is possible because Thumb code operates on the same 32-bit
register set as ARM code.
Thumb code is able to provide up to 65 % of the code size of ARM, and 160 % of the
performance of an equivalent ARM processor connected to a 16-bit memory system.

4.1.2.3.2 On-Chip Flash program memory

The LPC2127/LPC2129 incorporates a 128 kB and 256 kB Flash memory system respectively. This
memory may be used for both code and data storage. Programming of the Flash memory may be
accomplished in several ways. It may be programmed In-System via the serial port. The application
program may also erase and/or program the Flash while the application is running, allowing a great
degree of flexibility for data storage, field firmware upgrades, etc. When the on-chip boot loader is
used, 120/248 kB of Flash memory is available for user code.
The LPC2127/LPC2129 Flash memory provides a minimum of 100,000 erase/write cycles
and 20 years of data retention.
On-chip boot loader (as of revision 1.60) provides Code Read Protection (CRP) for the
LPC2127/LPC2129 on-chip Flash memory. When the CRP is enabled, the JTAG debug port and ISP
commands accessing either the on-chip RAM or Flash memory are disabled. However, the ISP Flash
Erase command can be executed at any time (no matter whether the CRP is on or off). Removal of
CRP is achieved by erasure of full on-chip user Flash. With the CRP off, full access to the chip via
the JTAG and/or ISP is restored.

4.1.2.3.3 On-Chip static RAM

On-Chip static RAM may be used for code and/or data storage. The SRAM may be
accessed as 8-bits, 16-bits, and 32-bits. The LPC2119/LPC2129 provides 16 kB of static RAM.

4.1.2.3.4 Memory map

The LPC2119/LPC2129 memory maps incorporate several distinct regions, as shown in the
following figures.
In addition, the CPU interrupt vectors may be re-mapped to allow them to reside in either
Flash memory (the default) or on-chip static RAM.

4.1.2.3.5 Interrupt controller

The Vectored Interrupt Controller (VIC) accepts all of the interrupt request inputs and
categorizes them as FIQ, vectored IRQ, and non-vectored IRQ as defined by programmable
settings. The programmable assignment scheme means that priorities of interrupts from the various
peripherals can be dynamically assigned and adjusted.
Fast Interrupt request (FIQ) has the highest priority. If more than one request is assigned to
FIQ, the VIC combines the requests to produce the FIQ signal to the ARM processor. The fastest
possible FIQ latency is achieved when only one request is classified as FIQ, because then the FIQ
service routine can simply start dealing with that device. But if more than one request is assigned to
the FIQ class, the FIQ service routine can read a word from the VIC that identifies which FIQ
source(s) is (are) requesting an interrupt.
Vectored IRQs have the middle priority. Sixteen of the interrupt requests can be assigned to
this category. Any of the interrupt requests can be assigned to any of the 16 vectored IRQ slots,
among which slot 0 has the highest priority and slot 15 has the lowest.
Non-vectored IRQs have the lowest priority.
The VIC combines the requests from all the vectored and non-vectored IRQs to produce the
IRQ signal to the ARM processor. The IRQ service routine can start by reading a register from the
VIC and jumping there. If any of the vectored IRQs are requesting, the VIC provides the address of
the highest-priority requesting IRQ's service routine, otherwise it provides the address of a default
routine that is shared by all the non-vectored IRQs. The default routine can read another VIC register
to see what IRQs are active.
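
As a brief illustration of this arrangement, the sketch below installs a Timer 0 handler in vectored IRQ slot 0; it assumes the standard Keil lpc21xx.h register names, and the configuration of the timer itself is not shown.

    #include <lpc21xx.h>                     /* Keil/NXP register definitions        */

    __irq void timer0_isr(void)              /* service routine for vectored slot 0  */
    {
        T0IR = 1;                            /* clear the Timer 0 match interrupt    */
        /* ... handle the event here ...                                             */
        VICVectAddr = 0;                     /* signal end of interrupt to the VIC   */
    }

    void vic_setup_timer0(void)
    {
        VICIntSelect &= ~(1UL << 4);         /* route Timer 0 (source 4) to IRQ      */
        VICVectAddr0  = (unsigned long)timer0_isr;  /* slot 0: highest priority      */
        VICVectCntl0  = 0x20 | 4;            /* enable slot 0 and assign source 4    */
        VICIntEnable  = (1UL << 4);          /* enable the Timer 0 interrupt request */
    }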


4.1.3. EASY VR MODULE


Easy VR is a multi-purpose speech recognition module designed to easily add versatile,
robust and cost-effective speech recognition capabilities to virtually any application.
The Easy VR module can be used with any host with a UART interface powered at 3.3 V - 5 V,
such as PIC and Arduino boards. Some application examples include home automation, such as
voice-controlled light switches, locks or beds, and adding hearing to the most popular robots on the
market.

4.1.3.1 Easy VR Features

A host of built-in Speaker Independent (SI) commands for ready-to-run basic controls, in the
following languages:
o English (US)
o Italian
o German
o French
o Spanish
o Japanese

Supports up to 32 user-defined Speaker Dependent (SD) triggers or commands as well as
Voice Passwords. SD custom commands can be spoken in ANY language.
Easy-to-use and simple Graphical User Interface to program Voice Commands and audio.

Module can be used with any host with an UART interface (powered at 3.3V - 5V)

Simple and robust documented serial protocol to access and program the module through the host board.
3 GPIO lines (IO1, IO2, IO3) that can be controlled by new protocol commands.
PWM audio output that supports 8 Ω speakers.
Sound playback of up to 9 minutes of recorded sounds or speech.

4.1.3.2 Technical Specifications

Connector J1:
  1    GND      -    Ground
  2    VCC      I    Voltage DC input
  3    ERX      I    Serial data receive (TTL level)
  4    ETX      O    Serial data transmit (TTL level)

Connector J2:
  1-2  PWM      O    Differential audio output (can directly drive an 8 Ω speaker)

Connector J3:
  1    MIC_RET  -    Microphone reference ground
  2    MIC_IN   I    Microphone input signal

Connector J4:
  1    /RST     I    Active-low asynchronous reset (internal 100 kΩ pull-up)
  2    /XM      I    Boot select (internal 1 kΩ pull-down)
  3    IO1      I/O  General purpose I/O (3.0 VDC TTL level)
  4    IO2      I/O  General purpose I/O (3.0 VDC TTL level)
  5    IO3      I/O  General purpose I/O (3.0 VDC TTL level)

Note: the GPIO pins (J4.3, J4.4 and J4.5) are at a nominal 3.0 VDC level. Do not connect 5 VDC
directly to these pins!

Recommended Operating Conditions

Symbol  Parameter                            Min   Typ   Max   Unit
VCC     Voltage DC input                     3.3   5.0   5.5   V
Ta      Ambient operating temperature range  0     25    70    °C
ERX     Serial port receive data             0     -     VCC   V
ETX     Serial port transmit data            0     -     VCC   V

Electrical Characteristics

These are applicable to the J4 pins only, including IO1-IO3, /XM and /RST.

Symbol  Parameter                                      Min    Typ   Max    Unit
VIH     Input high voltage                             2.4    3.0   3.3    V
VIL     Input low voltage                              -0.1   0.0   0.75   V
IIL     Input leakage current (0 < VIO < 3 V, Hi-Z)    -      <1    10     µA
RPU     Pull-up resistance (strong)                    -      10    -      kΩ
RPU     Pull-up resistance (weak)                      -      200   -      kΩ
VOH     Output high voltage (IOH = -5 mA)              2.4    -     -      V
VOL     Output low voltage (IOL = 8 mA)                -      -     0.6    V

Power Supply Requirements

Symbol    Parameter                                    Min   Typ   Max   Unit
ISleep    Sleep current                                -     <1    -     mA
IOper     Operating current                            -     12    -     mA
ISpeaker  Audio playback current (with 8 Ω speaker)    -     180   -     mA (RMS)

Serial Interface

The Easy VR is a slave module communicating via an asynchronous serial interface (commonly
known as a UART interface), with the following features:
Baud rate: 9600 (default), 19200, 38700, 57600, 115200
Frame: 8 data bits, no parity, 1 stop bit

The receiver input data line is ERX, while the transmitter output data line is ETX. No handshake
lines are used.

Figure: example serial data frame for the character A (decimal 65 or hexadecimal 41), showing the
idle level, start bit, data bits 1 0 0 0 0 0 1 0 (LSB first), stop bit and return to idle.
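
On the ARM7 host side, a minimal UART0 set-up matching this frame format could look like the sketch below. The register names are the standard LPC21xx ones, and the 14.7456 MHz peripheral clock is an assumption chosen only so that the 9600 baud divisor works out to an exact integer.

    #include <lpc21xx.h>

    /* UART0 at 9600 baud, 8 data bits, no parity, 1 stop bit,
       assuming PCLK = 14.7456 MHz, so divisor = 14745600 / (16 * 9600) = 96.    */
    void uart0_init_9600_8n1(void)
    {
        PINSEL0 |= 0x00000005;    /* P0.0 as TxD0 and P0.1 as RxD0               */
        U0LCR    = 0x83;          /* 8N1 frame, divisor latch access enabled     */
        U0DLM    = 0;
        U0DLL    = 96;            /* baud rate divisor for 9600 baud             */
        U0LCR    = 0x03;          /* divisor latch closed, 8N1                   */
    }

    void uart0_putc(char c)
    {
        while (!(U0LSR & 0x20)) ; /* wait for the transmit holding register      */
        U0THR = c;
    }

    int uart0_getc(void)
    {
        while (!(U0LSR & 0x01)) ; /* wait until a character has been received    */
        return U0RBR;
    }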

4.1.3.3 Microphone

The microphone provided with the Easy VR module is an omnidirectional electret condenser
microphone (Horn EM9745P-382):
Sensitivity: -38 dB (0 dB = 1 V/Pa @ 1 kHz)
Load impedance: 2.2 kΩ
Operating voltage: 3 V
Almost flat frequency response in the range 100 Hz - 20 kHz

If you use a microphone with different specifications, the recognition accuracy may be
adversely affected. No other kind of microphone is supported by the Easy VR.

Note: vocal commands should be given from about 60 cm from the microphone, but you
can try at greater distances by talking louder.

Please note that improper acoustic positioning of the microphone will reduce recognition
accuracy. Many mechanical arrangements are possible for the microphone element, and some will
work better than others. When mounting the microphone in the final device, keep in mind the
following guidelines:

Flush Mounting - The microphone element should be positioned as close to the mounting surface
as possible and should be fully seated in the plastic housing. There must be no airspace between the
microphone element and the housing. Having such airspace can lead to acoustic resonance, which
can reduce recognition accuracy.


No Obstructions, Large Hole - The area in front of the microphone element must be kept clear of
obstructions to avoid interference with recognition. The diameter of the hole in the housing in front
of the microphone should be at least 5 mm. Any necessary plastic surface in front of the
microphone should be as thin as possible, being no more than 0.7 mm, if possible.


Insulation - The microphone should be acoustically isolated from the housing if possible. This can
be accomplished by surrounding the microphone element with a spongy material such as rubber or
foam. The provided microphone has this kind of insulating foam. The purpose is to prevent auditory
noises produced by handling or jarring the device from being picked up by the microphone. Such
extraneous noises can reduce recognition accuracy.


Distance - If the microphone is moved from 15 cm to 30 cm from the speaker's mouth, the signal
power decreases by a factor of four. The difference between a loud and a soft voice can also be
more than a factor of four. Although the internal preamplifier of the Easy VR compensates for a
wide dynamic range of input signal strength, if its range is exceeded, the user application can
provide feedback to the speaker about the voice volume (see the appendix Error codes).
4.1.3.4 Audio Output

The Easy VR audio output interface is capable of directly driving an 8 Ω speaker. It could also
be connected to an external audio amplifier to drive lower impedance loudspeakers.
Note: connecting speakers with lower impedance directly to the module may
permanently damage the EasyVR audio output or the whole module.

It is possible to connect higher impedance loads such as headphones, provided that you scale
down the output power according to the speaker ratings, for example using a series resistor. The
exact resistor value depends on the headphone power ratings and the desired output volume (usually
in the order of 10 kΩ).
Note: connecting headphone speakers directly to the EasyVR audio output may
damage your hearing.

4.1.3.5 General Purpose I/O

Since the Easy VR communication interface takes two pins of the host controller, a few
spare I/O pins are provided that can be controlled with the communication protocol, to get those
pins back for basic tasks, such as lighting an LED.

The three I/O pins IO1-IO3 are connected directly to the embedded microcontroller on the
Easy VR module, so they are referenced to the internal 3.0 V regulated power supply. If you need to
interface to circuits using a different supply, there are a number of solutions you can adopt, some of
which are outlined below (here IOn indicates any one of the three I/O pins of the Easy VR).

All the I/O pins are inputs with a weak internal pull-up after power on. You must explicitly
configure a pin before you can use it as an output (see the example code Use general purpose I/O
pins).
4.1.3.6 Flash Update

The Easy VR module includes a boot loader that allows updating the firmware and
downloading new sound tables to the on-board memory.

The boot mode is activated by keeping the /XM signal to a high logical level at power on or
reset. This can be easily done with a jumper (or switch) taking the signal to a suitable pull-up
resistor.
To download a firmware update or a sound table to the Easy VR, power on the module with
the jumper closed. For normal operation, just leave the jumper open. Do not change the jumper
position while the module is already powered on. It is safe to change /XM level while the module is
reset (/RST low).


Figure: boot mode selection circuit. A jumper connects /XM through an external pull-up resistor to
VCC, working against the module's internal 1 kΩ pull-down.

The pull-up resistor value to use depends on the VCC power supply voltage. When the jumper is
closed (shorted), the external pull-up and the internal 1 kΩ pull-down form a voltage divider, so the
voltage at the /XM pin is V(/XM) = VCC × 1 kΩ / (Rpull-up + 1 kΩ), and this must stay above the
minimum input high level of the pin.
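
As a purely illustrative check (the pull-up value is an assumption, not a recommendation from the module's documentation): with VCC = 5 V and an external pull-up of 680 Ω, V(/XM) = 5 V × 1 kΩ / (0.68 kΩ + 1 kΩ), which is roughly 2.9 V, comfortably above the 2.4 V minimum input high level listed in the electrical characteristics.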
4.1.3.7 Easy VR on Arduino

You can connect the Easy VR module to an Arduino board basically in two ways:

1. Bridge mode You can control the module using a software serial library and connect to
the module with the EasyVR Commander from your PC, with the same pin configuration

2. Adapter mode You can use the Arduino board as a USB/Serial adapter by holding the
microcontroller in reset, but you need to change the connections once you want to control
the module from the microcontroller

Bridge mode

This is the preferred connection mode, since it allows simple communication with both the
microcontroller and the PC. All the provided examples for Arduino automatically manage the
bridge mode when the Easy VR Commander requests a connection.


Automatic bridge mode used to be supported only on Arduino boards with a boot loader
implementing EEPROM programming.

The latest versions of the Easy VR Commander (since 3.1.x) and the Arduino libraries (since 1.1)
no longer rely on that feature, so it should work on all Arduino boards.

Note: bridge mode cannot be used to download a Sound Table or to perform a flash
update. You need to use adapter mode or a true USB/Serial adapter.

Adapter mode

This connection scheme has the advantage of working with any Arduino board that has an on-
board USB/Serial adapter and not needing a spare input pin to enter bridge mode.

Also, it does not rely on the AVR microcontroller to do any software bridge between
communication pins, so it can be used to check your hardware in case of connection problems.

Using this method also allows you to download a Sound Table to the Easy VR module, provided
you also configure the module to start in boot mode (see paragraph Flash Update).


This configuration, with Reset shorted to GND, is for connection with the Easy VR
Commander. To use the module from the Arduino microcontroller, you need to remove the short
(yellow wire) and move the ETX/ERX connection to other pins. The example code uses pin 12
for ETX and pin 13 for ERX, like the above bridge mode.

Arduino software
Follow these few steps to start playing with your Easy VR module and Arduino:
1. Connect the Easy VR module to your Arduino board as outlined before
2. If you want audio output, connect an 8 Ω speaker to the J2 header
3. Connect the supplied microphone to the MIC (J3) connector
4. Copy the Easy VR library to your Arduino libraries folder on your PC
5. Connect your Arduino board to your PC via USB.

Figure 1 Installation folder for the EasyVR Arduino library


4.1.4. Easy VR Programming


Communication Protocol
Introduction

Communication with the Easy VR module uses a standard UART interface compatible
with 3.3-5V TTL/CMOS logical levels, according to the powering voltage VCC.

A typical connection to an MCU-based host:

EasyVR     Host MCU
VCC        3.3 V - 5 V
GND        GND
ERX        TX
ETX        RX

The initial configuration at power on is 9600 baud, 8 bit data, No parity, 1 bit stop. The
baud rate can be changed later to operate in the range 9600 - 115200 baud.

The communication protocol only uses printable ASCII characters, which can be divided in
two main groups:
Command and status characters, respectively on the TX and RX lines, chosen among
lower-case letters.
Command arguments or status details, again on the TX and RX lines, spanning the
range of capital letters.

Each command sent on the TX line, with zero or more additional argument bytes,
receives an answer on the RX line in the form of a status byte followed by zero or more
arguments.

There is a minimum delay before each byte sent out from the Easy VR module to the RX line,
initially set to 20 ms, which can be changed later within the ranges 0 - 9 ms, 10 - 90 ms, and
100 ms - 1 s. This accounts for slower or faster host systems and makes the interface suitable also
for software-based serial communication (bit-banging).

Since the Easy VR serial interface also is software-based, a very short delay might be
needed before transmitting a character to the module, especially if the host is very fast, to allow
the Easy VR to get back listening to a new character.

The communication is host-driven and each byte of the reply to a command has to be
acknowledged by the host to receive additional status data, using the space character. The reply
is aborted if any other character is received and so there is no need to read all the bytes of a reply
if not required.

Invalid combinations of commands or arguments are signaled by a specific status byte, which the
host should be prepared to receive if the communication fails. A reasonable timeout should also be
used to recover from unexpected failures.

If the host does not send all the required arguments of a command, the command is
ignored by the module, without further notification, and the host can start sending another
command.

The module automatically goes into its lowest-power sleep mode after power on. To initiate
communication, send any character to wake up the module.
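
A minimal host-side sketch of this exchange is shown below. The command character used is a placeholder rather than one of the module's documented opcodes, and uart_putc/uart_getc/delay_ms stand for whatever UART and timing routines the host actually provides.

    #define CMD_PLACEHOLDER  'x'   /* stand-in command character (a lower-case letter) */
    #define ARG_ACK          ' '   /* a space acknowledges a status byte and asks for
                                      the next argument byte                            */

    extern void uart_putc(char c);            /* assumed host transmit routine          */
    extern int  uart_getc(void);              /* assumed blocking host receive routine  */
    extern void delay_ms(unsigned int ms);    /* assumed millisecond delay              */

    /* Wake the module, send one command and collect up to max_bytes reply bytes.
       A real host would decide how many argument bytes to request by inspecting
       the status byte; here the count is simply fixed by the caller.                   */
    int easyvr_transaction(char *reply, int max_bytes)
    {
        int n = 0;

        uart_putc('a');                       /* any character wakes the module up      */
        delay_ms(25);                         /* allow for the module's reply delay     */

        uart_putc(CMD_PLACEHOLDER);           /* command byte: a lower-case letter      */
        reply[n++] = (char)uart_getc();       /* status byte: also a lower-case letter  */

        while (n < max_bytes) {               /* pull the argument bytes, if any        */
            uart_putc(ARG_ACK);               /* acknowledge with a space               */
            reply[n++] = (char)uart_getc();   /* argument bytes are capital letters     */
        }
        return n;
    }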

4.1.5. IR BASED EYE BLINK SENSOR


We investigate the visual and vocal modalities of interaction with computer systems. We
focus our attention on the integration of visual and vocal interfaces as possible replacement and/or
additional modalities to enhance human-computer interaction. We present a new framework for
employing eye gaze as a modality of interface. While voice commands, as a means of interaction
with computers, have been around for a number of years, integration of both the vocal interface
and the visual interface, in terms of detecting the user's eye movements through an eye-tracking
device, is novel and promises to open the horizons for new applications where a hand-mouse
interface provides little or no apparent support to the task to be accomplished. We present an
array of applications to illustrate the new framework and eye-voice integration.

User-computer dialogues are typically one-sided, with the bandwidth from computer to
user far greater than that from user to computer. The movement of a user's eyes can provide a
convenient, natural, and high-bandwidth source of additional user input, to help redress this
imbalance. We therefore investigate the introduction of eye movements as a computer input
medium. Our emphasis is on the study of interaction techniques that incorporate eye movements
into the user-computer dialogue in a convenient and natural way. This chapter describes research
at NRL on developing such interaction techniques and the broader issues raised by non-
command-based interaction styles. It discusses some of the human factors and technical
considerations that arise in trying to use eye movements as an input medium, describes our
approach and the first eye movement-based interaction techniques that we have devised and
implemented in our laboratory, reports our experiences and observations on them, and considers
eye movement-based interaction as an exemplar of a new, more general class of non-command-
based user-computer interaction.


4.1.5.1 Eye Gaze Tracking

Eye tracking technology is, to a great extent, limited to laboratory environments because
of the high investment associated with its acquisition. Eye tracking equipment currently has a
voluminous control box which provides all hardware and firmware support that the camera and
the host computer require. But to use such equipment in control, the setup needs an additional
computer to process the data. The eye tracking setup also requires careful calibration with every
new subject. The technology has not yet arrived at the stage where it can serve in a production
environment.

Despite its shortcomings, it has been used successfully in many applications. It has
potential applications such as in operating room activities, with surgery planning, execution,
monitoring, etc. that would materialize as the technology matures. Additionally, eye tracking [6,
7] has been used to address the needs of motor handicapped persons and to increase their
productivity and their empowerment. Furthermore, target tracking and acquisition, and reduction
of fatigue and potential injuries have been discussed in the literature [8, 9] as other potential uses
of eye gaze tracking.

Farid et al. [13] and Murtagh [14] have devised, using C++ and MFC, a methodology
whereby simple single click events can be used to evoke certain contextually based actions such
as controlling multiple video streams in a web browser. The same techniques were used to
decompress, at varied compression rates, large medical (orthopedic) images from 8.4 MB to 568
KB.

Sibert et al. [2, 3] have made side-by-side comparisons between the performance of an
eye tracking device and the mouse under the same experimental settings. They have determined
that object selection is faster using the eye tracker than the mouse. They have also created an
empirical metric to measure the performance of on-screen selection of an object. Furthermore, if
eye gaze tracking does not show observable delay in object selection and breaks even with the
performance of a mouse under the same conditions, then eye tracking performs acceptably well.
We have calculated the velocity components along the x and y directions and found the maximum
values to be around 19.0 × 10^3 and 17.5 × 10^3 pixels per second for vx and vy respectively, which is
considerably larger than the speed components of the electro-mechanical mouse. We additionally
find visual target lock-ins at higher amplitudes and higher speeds to be more accurate than those
using hand-controlled mice, which agrees with Sibert et al. Furthermore, we, verifiably, find that
target over- and under-shoots are much smaller than their counterparts using mice at higher
speeds.

The aforementioned discussion does not make any general case in favor of visually
controlled events over those using the conventional mechanical mouse. In fact the visual
modality of interface cannot fully replace the mouse as an input device despite its superior
performance in certain aspects of its behavior. Therefore we see that the visual modality of
interface could be employed as complementary to the electro-mechanical mouse.

It has been documented by a number of researchers such as Lankford [6, 7] and Zhai et
al. [8, 9] that the use of eye tracking technology by the disabled provides them with the ability to
communicate with the environment that surrounds them. While this and other applications are
examples of valid use of the technology, we think that eye gaze tracking technology could
enhance both the productivity and efficiency of a surgeon in an operating room. As such, one of
the requirements that we had to impose on the architecture of our eye tracking systems is that the
eye should be allowed to manipulate screen objects without resorting to visual menus, which in
effect would distract the subject's attention.

Lankford [6, 7] has used assistive visual decision methods to complete visual tasks.
Lankford, using the ERICA system, has demonstrated that additional visually evoked mouse
events can further be generated and coupled with visual iconic cues to represent mouse
functionality such as dragging and double clicking, etc. The most basic operation in ERICA is a
visual click, where once captured, the subject is then, presented with a set of on-the-screen,
iconic menu choices showing other images of different mouse functions such as Left Double
Click, Left Drag, etc.


Zhai et al. [8, 9] alternatively have asserted that eye gaze has fundamental limitations and
that it is unnatural to overload the visual perceptual channel (eye) with motor control tasks. They
further proposed the Manual And Gaze Input Cascaded (MAGIC) pointing. In MAGIC,
ballistic cursor movements were left to eye control, where the cursor is moved from one
screen location to a region of tolerance in which the final target point is located. Subsequent
compensation for over- or under-shoots, that attempts to lock into the final target, could
manually be controlled.

Similarly, Sibert and Jacob [2, 3, 4, 5] have divided the subject's monitor into two areas:
the first is used for object selection using the eye tracker and the other area displays information
about the selected object.

4.1.5.2 Mouse Emulation: Introductory Remarks

In attempting to address the functionality of an eye-mouse, researchers have tried to mimic all
or most of the range of mouse functionality available in Microsoft Windows and maybe other
operating systems. Obviously, graphical user interfaces within an operating system environment
define a range of mouse functions and leave it to application software to override and/or
overload some of these functions, i.e. to add new features to these applications.

For example, tapping the left mouse button and holding it while moving (or dragging)
over text in Microsoft Word results in marking the text for another operation that needs to follow,
e.g. deletion. Performing the same operation in the desktop environment of MS Windows results
in dragging and dropping the chosen object(s), i.e. moving the object(s) into another folder.

Electromechanical mice or their optical counterparts are convenient, accessible and, more
importantly, are finely tuned to address information at the pixel level. The eyes, as components of
the ocular-motor system, do not have the same range of capabilities, and as such cannot handle
all the motor functions imposed by the implementation of the electromechanical mouse without
being inefficient and/or strenuous. While a mouse can be positioned on a pixel, the eye is unable
to fixate on a point voluntarily. The eye is constantly moving about the object of interest. The
eye's constant movement is termed saccadic motion. Saccadic motion is a rapid two-dimensional
Brownian-like motion. While we can see a pixel, it is not possible to fixate on that pixel in the
same way as we can hold it using a mouse.

Implementations of eye-mouse capabilities are based on capturing eye fixations within
the confines of the monitor boundaries. To implement a comprehensive set of mouse functionalities,
a designer has to resort to one of two general approaches. The first [6, 7] is to capture a visual
mouse click and then display iconic menu entries from which the user can select to complete the
task at hand. In the second approach [8], the subject completes the task by supplementing the
system with a mouse or keyboard tap(s). In either case, the visual continuity of the task is
interrupted by starting another task whose purpose is to control the task that the subject
originally started. Once completed, the user may return to the application domain. This series
of steps may be repeated as many times as deemed necessary. The cost of the first approach is
strained eyes and possibly frustration; in the second, the system requires manual handling of
events, which may not justify the whole effort of using eye tracking for control.

4.1.5.3 Eye Mouse Event Handling

We use the ASL 504 Pan/Tilt Optics eye tracker manufactured by Applied Science
Laboratories. We normally calibrate the system at the beginning of each test run to ensure the
validity and correctness of the data points collected, using a 9-point calibration. All control
experiments are written in Microsoft Visual C++ with MFC, Java and/or Matlab. Data recording
takes place via the external XDAT serial port, using the communication protocol set by ASL, on a
two-monitor workstation running Windows XP. One monitor is viewed by the subject and the
other is used by the experimenter. We set the ASL eye tracking frequency to the default value of
60 Hz. We use the demand mode, where the host workstation requests data from the eye tracker
by sending one byte. The eye tracker, in turn, returns an 8-byte message containing the pupil
diameter and the x-component and y-component of the point of gaze, all in a 16-bit
representation.
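
A sketch of how the host might request and unpack one such demand-mode record follows; the request byte value and the byte ordering are assumptions made only for illustration, since they are fixed by the ASL protocol documentation rather than stated here.

    #include <stdint.h>

    extern void xdat_putc(unsigned char c);   /* write one byte to the XDAT serial port */
    extern int  xdat_getc(void);              /* blocking read of one byte              */

    typedef struct {
        uint16_t pupil_diameter;
        int16_t  gaze_x;                      /* x-component of the point of gaze       */
        int16_t  gaze_y;                      /* y-component of the point of gaze       */
        uint16_t spare;                       /* remaining bytes of the 8-byte message  */
    } AslRecord;

    static uint16_t read_u16(void)
    {
        uint16_t hi = (uint16_t)xdat_getc();  /* high byte first: an assumed ordering   */
        uint16_t lo = (uint16_t)xdat_getc();
        return (uint16_t)((hi << 8) | lo);
    }

    void asl_demand_read(AslRecord *r)
    {
        xdat_putc(0x01);                      /* single request byte; value assumed     */
        r->pupil_diameter = read_u16();
        r->gaze_x         = (int16_t)read_u16();
        r->gaze_y         = (int16_t)read_u16();
        r->spare          = read_u16();
    }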


To construct a visual mouse click, we define an eye fixation counter that counts the
number of collected eye fixations n within a certain amount of time Δt0 and within a certain
bounding square of dimensions Δx and Δy. n, Δt0, Δx and Δy are user configurable. To claim a
visual click, all of the n points must be collected within the designated area Δx·Δy. If the subject
blinks or moves out of the imaginary square during the designated time interval, the counter is
reset to a new start. If the subject successfully completes all n fixations within the designated
area in the designated time interval, then a visual tap is recorded. To calculate the position of the
tap (p0), we compute the moving average of both the x and y components of all successful
fixations.
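
A compact sketch of this fixation-counting rule is given below. The defaults (n = 10 fixations, Δt0 = one second) follow the values discussed later in this section, while the 40-pixel tolerance square and the overall structure are assumptions made only for illustration.

    #include <stdlib.h>

    #define N_FIX    10                  /* fixations needed for one visual click      */
    #define DT0_MS   1000L               /* time window Δt0, one second                */
    #define BOX_PX   40                  /* assumed tolerance square, in pixels        */

    typedef struct { long t_ms; int x, y; int blink; } Fixation;

    /* Feed fixations one at a time; returns 1 and fills (cx, cy) with the averaged
       click position p0 when a visual click is recognised, 0 otherwise.               */
    int visual_click(const Fixation *f, int *cx, int *cy)
    {
        static int  count = 0;
        static int  x0 = 0, y0 = 0;
        static long t_start = 0, sum_x = 0, sum_y = 0;

        if (f->blink) {                  /* a blink resets the counter                 */
            count = 0;
            return 0;
        }
        if (count > 0 &&
            (abs(f->x - x0) > BOX_PX || abs(f->y - y0) > BOX_PX ||
             f->t_ms - t_start > DT0_MS)) {
            count = 0;                   /* left the square or ran out of time: reset  */
        }
        if (count == 0) {                /* start a new candidate click                */
            t_start = f->t_ms;
            x0 = f->x;  y0 = f->y;
            sum_x = 0;  sum_y = 0;
        }
        sum_x += f->x;  sum_y += f->y;  count++;

        if (count >= N_FIX) {            /* n fixations collected inside the square    */
            *cx = (int)(sum_x / count);  /* click position = average of the fixations  */
            *cy = (int)(sum_y / count);
            count = 0;
            return 1;
        }
        return 0;
    }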

In our implementation of the eye mouse, the basic assumption we make is that the subject is, by
default, in a state of an incomplete drag-and-drop. To complete this operation, the system waits for
another click (a visual tap plus position, i.e. D-U-p1) at another position p1 (p1 ≠ p0). The second
click must occur within a user-definable time t1.

When the subject continues to gaze within the same square area of Δx·Δy for a time
period equal to or slightly greater than 2Δt0, the drag-and-drop state changes to a double-click
state (D-U-p0 + D-U-p0), a double click is recorded and a suitable action is carried out.
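
One way to organise these higher-level events is a small state machine driven by successive visual clicks, sketched below; the names, the 40-pixel tolerance and the value chosen for t1 are assumptions for illustration only.

    #define T1_MS         2000L         /* assumed window t1 for the second click      */
    #define SAME_BOX(a,b) ((a) - (b) <= 40 && (b) - (a) <= 40)

    typedef enum { EM_IDLE, EM_CLICK_PENDING } EyeMouseState;

    /* Called each time visual_click() above reports a click at (x, y).                */
    void on_visual_click(long t_ms, int x, int y)
    {
        static EyeMouseState state = EM_IDLE;
        static long t0 = 0;
        static int  x0 = 0, y0 = 0;

        if (state == EM_IDLE || t_ms - t0 > T1_MS) {
            state = EM_CLICK_PENDING;    /* first click D-U-p0, or the previous        */
            t0 = t_ms;  x0 = x;  y0 = y; /* one expired: remember p0 and wait          */
        } else if (SAME_BOX(x, x0) && SAME_BOX(y, y0)) {
            /* dwell in the same square for about 2*Δt0: report a double click         */
            state = EM_IDLE;
        } else {
            /* a second click at p1 != p0: complete the drag-and-drop from p0 to p1    */
            state = EM_IDLE;
        }
    }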

We have made a design decision to make all parameters Δt0, Δx, Δy user configurable to
accommodate both changing user requirements and the application interaction environment. We
have, however, set default values that are based on usability studies we have conducted in our
laboratory. We have set Δt0 to a default of one second in both the Matlab and C++ implementations;
in Java the default value of Δt0 is set to 0.75 second. Additionally, the number of points needed to
define one visual click is user definable but is initially set to a default value of 10. These choices
seem to provide a comfortable environment for the subjects we have used.

All events defined above are reset to their initial values when a blink is encountered or
when the eye is distracted outside the tolerance region x·y. While a click would normally be
followed by a drag-and-drop or by another click to form a double click, there are instances where
the user just visually clicks but never follows this click with any additional action. This renders
the sequence of events incomplete, leading to a reset, where the eye mouse system goes back to
its initial state.
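
A minimal sketch of this event sequencing, assuming visual taps are detected as described above and that positions are compared within the same x·y tolerance box, might look as follows; the state and function names are illustrative and not taken from the implementation.

extern void double_click(int x, int y);                   /* assumed action hooks */
extern void drag_and_drop(int x0, int y0, int x1, int y1);

enum EyeMouseState { IDLE, PENDING };   /* PENDING = incomplete drag-and-drop */

#define T1_MS 2000   /* assumed default for the user-definable time t1 */

static enum EyeMouseState state = IDLE;
static int  p0x, p0y;
static long t_first;

void on_visual_tap(int x, int y, long t_ms)
{
    if (state == IDLE) {
        p0x = x; p0y = y; t_first = t_ms;        /* first tap: D-U-p0 */
        state = PENDING;
    } else if (t_ms - t_first > T1_MS) {
        p0x = x; p0y = y; t_first = t_ms;        /* t1 expired: treat as a new first tap */
    } else if (x == p0x && y == p0y) {
        /* A second tap at the same spot (i.e. continued gaze for about 2*t0)
           becomes a double click (D-U-p0 + D-U-p0).  A real implementation
           would test containment in the x*y box rather than exact equality. */
        double_click(p0x, p0y);
        state = IDLE;
    } else {
        drag_and_drop(p0x, p0y, x, y);           /* second tap D-U-p1 at p1 != p0 */
        state = IDLE;
    }
}

void on_blink_or_distraction(void) { state = IDLE; }   /* global reset */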

In the following section we give examples of how to implement context-based applications that
use the eye mouse as the primary method of interaction.

4.1.5.4 Two Dimensional Image Manipulation

To test our eye mouse capabilities we design a simple visual application launcher (Figure
1), which has a hot spot interface that can easily be invoked visually. The hot spots on the visual
application launcher (VAL) are divided broadly into 2-dimensional display tools, 3-dimensional
display tools and web-browser-based tools.

Figure 1. The Visual Application Launcher. Its hot spots include a 2-D image navigator, a large
image navigator, pipe traversal, 3-D manipulation, compression/decompression, and web
browsing.

Two-dimensional image manipulation tools include image scrolling and translation, and
image zoom-in and zoom-out. The visual application launcher also includes a hot spot for
navigating large images of the order of 16,000x16,000 pixels. Figures 2 and 3 show an example of
navigating a medium-size image (3000x3964 pixels) from the European Southern
Observatory, ESO [15]. If the size of the image is larger than the allowable window size on the
view monitor, image scrolling is activated, allowing the subject to visually scroll through the
image in all directions.
There are four principal directions along which one can move the
image: East, West, North and South. To scroll, one should visually click in
the direction where scrolling should take place. For example, to scroll right to left (i.e. moving
west) one should visually click on the right edge of the image or the right edge of the bounding
box. Likewise, scrolling an image along any of the other principal directions follows the same
pattern. We have allowed another set of four operations where scrolling takes place along the
diagonal and off-diagonal axes. Diagonal and off-diagonal scrolling is possible by continually
visually clicking around any of the four corners. For example, to view the north-eastern section of
an image, one would visually click on the top-right corner of the image or the bounding box.
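
One simple way to realize this edge/corner mapping is to compare the position of each visual click against the borders of the displayed image or its bounding box, as in the sketch below. The margin width that counts as "on the edge" is an assumption, not a value from this project.

#define MARGIN 50   /* pixels from the edge treated as "on the edge" (assumed) */

enum Direction { NONE = 0, NORTH = 1, SOUTH = 2, EAST = 4, WEST = 8 };

/* w, h: size of the visible window; (x, y): click position with the origin at
   the top-left corner.  The return value is the part of the image the subject
   wants to bring into view; corners return a combination of flags, e.g.
   NORTH | EAST for the top-right corner.  Scrolling then moves the image
   content the opposite way (an EAST request scrolls right to left). */
int view_direction(int x, int y, int w, int h)
{
    int dir = NONE;
    if (y < MARGIN)          dir |= NORTH;   /* top edge               */
    else if (y > h - MARGIN) dir |= SOUTH;   /* bottom edge            */
    if (x > w - MARGIN)      dir |= EAST;    /* right edge: view east  */
    else if (x < MARGIN)     dir |= WEST;    /* left edge: view west   */
    return dir;
}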

The same application, in an intuitive way, can zoom in and out of an image. In Figure 2,
the image has 514x719 pixels.

One would visually click around the center of the image to zoom in. The image in Figure
3 shows the zoomed-in image, whose dimensions are 3000x3964 pixels. Zooming out of an
image requires the subject to move his/her eye to the zoom-out button and visually click it.
Pressing a button external to the image to zoom out, in this case, does not impose any
unnecessary eye moves and does not interfere with any visual tasks that the subject may be
performing at the time.

Figure 2. Original image. Image translation and scrolling are visually triggered.


Figure 3. Large image manipulation, with zoom-in and zoom-out functionality.

The application provides a reset button whose purpose is to reset the image to the
initial conditions at load time.

4.1.6 RELAY
A relay is an electrically operated switch. A relay circuit is typically a smaller switch or
device which drives (opens/closes) an electric switch that is capable of carrying much larger
currents, or a circuit which operates a coil or electronic actuator from one power source and
uses a separate power source to drive an isolated device.

The relay switches when the signal coming into the driver is high, so it must be connected to
a transducer driver subsystem. Relays use an electromagnetic coil to move the poles of a switch
when powered. There are three pairs of connections, known as common, normally open and
normally closed. The relay circuit shown uses a DPDT (double-pole, double-throw) relay.


Fig. 4.7 Schematic Diagram of Relay

The centre terminal block is the common (COM) connection and is connected to either the
upper or lower terminal block depending on the state of the relay.

When not switched, the centre terminal block is connected to the normally closed (NC) lower
terminal block. When switched, the centre terminal block is connected to the normally open
(NO) upper terminal block.

In this project the relay switches on as soon as the sensor detects fire, providing safety.
The relay can be used in applications such as controlling emergency doors as soon as
fire is detected.

4.1.7 BUZZER

A buzzer or beeper is an audio signaling device, which may be mechanical,
electromechanical, or piezoelectric. In this project it is used to give out a continuous beep
sound when the flame sensor senses fire.


Fig. 4.8 Buzzer

Typical uses of buzzers and beepers include alarms, timers and confirmation of user input such
as a mouse click or keystroke.

It most commonly consists of a number of switches or sensors connected to a control unit
that determines if and which button was pushed or whether a preset time has lapsed, and usually
illuminates a light on the appropriate button or control panel and sounds a warning in the form
of a continuous or intermittent buzzing or beeping sound. Initially this device was based on an
electromechanical system identical to an electric bell without the metal gong (which
makes the ringing noise). Often these units were anchored to a wall or ceiling and used the
ceiling or wall as a sounding board. Another implementation, used with some AC-connected
devices, was a circuit that turned the AC hum into a noise loud enough to drive a loudspeaker,
hooked up to a cheap 8-ohm speaker. Nowadays it is more popular to use a ceramic-based
piezoelectric sounder such as a Sonalert, which makes a high-pitched tone. Usually these are
hooked up to driver circuits which vary the pitch of the sound or pulse the sound on and off.

4.1.8 Maxim Integrated MAX232 Multi-Channel RS-232 Drivers / Receivers
Maxim Integrated's MAX232 multi-channel RS-232 drivers / receivers are +5V-powered
devices intended for all EIA/TIA-232E and V.28/V.24 communications interfaces, particularly
where ±12V is not available. The Maxim Integrated MAX232 employs four 1.0µF external
capacitors and has a data rate of 120 kbps. The MAX232A features 0.1µF external capacitors
and a 200-kbps data rate. The MAX232E, with ±15kV ESD protection, is designed for RS-232
and V.28 communications in harsh environments. The MAX232 multi-channel RS-232 drivers /
receivers also feature a low-power shutdown mode that reduces power dissipation to less than
5µW. Typical applications include portable computers, low-power modems, interface translation,
battery-powered RS-232 systems, and multidrop RS-232 networks. A minimal 8051 UART
configuration sketch for the microcontroller side of such a link is given after the feature list below.

Features


Power Supply: +3.0V to +5.5V

No. of RS-232 Drivers/Rx: 2/2

No. of Ext. Capacitors: 4

Applications

Portable Computers

Low-Power Modems

Interface Translation

Battery-Powered RS-232 Systems

Multidrop RS-232 Networks
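
As referenced above, the following is a minimal sketch of a standard 8051 UART configuration that could sit behind the MAX232 level shifter, assuming an 11.0592 MHz crystal and 9600 baud, 8N1. It is a generic example and is not taken from the project code.

#include <REG51F.H>

void uart_init(void)
{
    TMOD |= 0x20;   /* Timer 1, mode 2 (8-bit auto-reload)        */
    TH1   = 0xFD;   /* reload value for 9600 baud at 11.0592 MHz  */
    SCON  = 0x50;   /* UART mode 1 (8-bit data), receiver enabled */
    TR1   = 1;      /* start Timer 1                              */
}

void uart_send(unsigned char c)
{
    SBUF = c;       /* load the byte into the serial buffer       */
    while (!TI);    /* wait until transmission completes          */
    TI = 0;         /* clear the transmit-complete flag           */
}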

4.2 Software Design


4.2.1 Keil development tool (IDE)
The Keil development tools offer numerous features and advantages that help you
develop embedded applications quickly and successfully. They are easy to use and are
guaranteed to help you achieve your goals.

4.2.1.1 Keil IDE


The µVision3 IDE from Keil combines project management, make facilities, source code
editing, program debugging, and complete simulation in one powerful environment. The µVision
development platform is easy to use and helps you create embedded programs that work quickly.
The µVision editor and debugger are integrated in a single application that provides a seamless
embedded project development environment.

µVision3 provides unique features, which are listed below:


The Device Database automatically sets the assembler, compiler, and linker options for
the chip selected. This prevents wasting one's time configuring the tools and helps one get started
writing code faster.

A robust Project Manager lets you create several different configurations of the required
target from a single project file. Only the Keil µVision3 IDE allows you to create an output file for
simulating, an output file for debugging with an emulator, and an output file for programming an
EPROM, all from the same project file.

An integrated Make facility with automatic dependency generation. One does not have to
figure out which header files and include files are used by which source files; the Keil
compilers and assemblers do that automatically.

Interactive error correction. As the project compiles, errors and warnings appear in an
output window. Corrections may be made to the files in the project while µVision3 continues to
compile in the background. Line numbers associated with each error or warning are automatically
resynchronized when you make changes to the source.

4.2.2 µVision3 Debugger

The µVision Debugger from Keil supports simulation using only your PC or laptop, and
debugging using the target system and a debugger interface. µVision includes traditional features
like simple and complex breakpoints, watch windows, and execution control, as well as
sophisticated features like trace capture, an execution profiler, code coverage, and a logic analyzer.

4.2.3 Executing code

µVision offers several ways to control and manipulate program execution.

Reset - It is possible to debug reset conditions using the µVision Simulator.

Run/Stop - Buttons and commands make starting and stopping program execution easy.


Single-Stepping - µVision supports various methods of single-stepping through your target
program.

Execution Trace - Execution trace information for each executed instruction is stored by
µVision.

Breakpoints - Both simple and complex breakpoints are supported by the µVision Debugger.

4.2.4 EMBEDDED C
In this project embedded C is used instead of assembly language, which has a few
drawbacks that discourage its use: assembly is hard to learn, hard to read and understand, hard
to debug, hard to maintain, and it is not portable.

Here are some of the features of embedded C programming that have been used in
writing the code for the microcontroller to send the fault to the PC. Embedded C is designed to
bridge the performance mismatch between standard C and the embedded hardware and
application architecture. It extends the C language with primitives that are needed by
signal-processing applications and that are commonly provided by DSP processors and
microcontrollers.

Embedded C supports fixed-point data types and named address spaces. The Embedded C
specification extends the C language to support freestanding embedded processors by
exploiting multiple address space functionality, user-defined named address spaces, and direct
access to processor and I/O registers. These features are common to the small embedded
processors used in most consumer products. Previously it was common practice for each tool
provider to support these features using functionally similar, but syntactically different,
implementations. For the Embedded C specification, the functionality from the various tool
providers was consolidated and a common, extensible syntax was defined. Specific
embedded-systems deficiencies in C have been addressed to reduce application dependence on
assembly code.

Embedded C makes life easier for application programmers: the primitives provided are
the primitives that fit the conceptual model of the application. The Embedded C extensions to C
unlock the high-performance features of embedded processors for C programmers. The
Embedded C specification brings back the roots of C to embedded systems as, primarily, a
high-level language means of accessing the processor.
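
As an example of the kind of vendor-specific syntax mentioned above, the Keil C51 compiler exposes the 8051's separate memory spaces and special function registers through keywords such as data, idata, xdata, code and sbit. The snippet below is purely illustrative; the variable names are arbitrary and not taken from the project code.

#include <REG51F.H>

unsigned char data  fast_counter;      /* directly addressed internal RAM    */
unsigned char idata ring_buf[32];      /* indirectly addressed internal RAM  */
unsigned char xdata log_buf[256];      /* external RAM                       */
unsigned char code  banner[] = "OK";   /* constant placed in code memory     */

sbit status_led = P2^0;                /* direct access to a single port bit */

void toggle_led(void)
{
    status_led = !status_led;          /* compiles to a single bit operation */
}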

We use Keil µVision3 and the Philips flash utility in our project to download the code from the
PC to the microcontroller.

µVision3 adds many new features to the editor, such as text templates, quick function navigation,
syntax coloring with brace highlighting, and a Configuration Wizard for dialog-based startup and
debugger setup. µVision3 is fully compatible with µVision2 and can be used in parallel with µVision2.

4.2.5 FLASH MAGIC PROGRAMMER

Flash Magic is an application for programming flash-based microcontrollers from NXP
using a serial or Ethernet protocol while in the target hardware. To erase and program a device
and set key options, only a few steps have to be followed. It has its own intuitive, user-friendly
graphical interface. With this application, any section of flash memory can be read and saved as
an Intel Hex file. Flash Magic supports half-duplex communications for many devices.


Fig. 4.14 Keil Project View

Fig. 4.15 Philips flash utility main window


4.3 Data Flow Diagram


4.3.1 System Flow Chart


4.4 CODING
4.4.1 Main Program
#include <REG51F.H>

void delay(unsigned int count);

sbit p01 = P0^1;   // RELAY FOR CALCULATOR (relay driver output 1)
sbit p02 = P0^2;   // relay driver output 2
sbit p03 = P0^3;   // relay driver output 3
sbit p04 = P0^4;   // relay driver output 4

int i, p1, p2, p3, p4;   // p1..p4 are software "armed" flags for each channel

//********************************************************
int main(void)
{
    P0 = 0x00;           // all relay outputs off
    P1 = 0x00;           // clear the command input port
    p1 = 0;
    p2 = 0;
    p3 = 0;
    p4 = 0;

    while (1)
    {
        //**************************************************
        //************** welcome message *******************
        //**************************************************
        if (P1 == 0x01)          // command 1: pulse relay 1 if armed
        {
            if (p1 == 1)
            {
                p01 = 1;         // energize relay 1
                delay(500);      // hold it on for a fixed interval
                p01 = 0;         // release relay 1
                //P1 = 0x00;
                p1 = 0;          // disarm until the next 0x05 command
            }
        }
        else if (P1 == 0x02)     // command 2: pulse relay 2 if armed
        {
            if (p2 == 1)
            {
                p02 = 1;
                delay(500);
                p02 = 0;
                //P1 = 0x00;
                p2 = 0;
            }
        }
        else if (P1 == 0x03)     // command 3: pulse relay 3 if armed
        {
            if (p3 == 1)
            {
                p03 = 1;
                delay(500);
                p03 = 0;
                //P1 = 0x00;
                p3 = 0;
            }
        }
        else if (P1 == 0x04)     // command 4: pulse relay 4 if armed
        {
            if (p4 == 1)
            {
                p04 = 1;
                delay(500);
                p04 = 0;
                //P1 = 0x00;
                p4 = 0;
            }
        }
        else if (P1 == 0x05)     // command 5: arm all four channels
        {
            p1 = 1;
            p2 = 1;
            p3 = 1;
            p4 = 1;
            //delay(900000);
        }
        else                     // any other pattern: disarm everything
        {
            p1 = 0;
            p2 = 0;
            p3 = 0;
            p4 = 0;
            //P1 = 0x00;
        }
    }

    return 0;                    // unreachable: the while(1) loop never exits
}

// Software busy-wait delay; the inner loop gives a rough
// millisecond per count at the crystal frequency used on the board.
void delay(unsigned int count)
{
    unsigned int d, c;
    for (d = 0; d <= count; d++)
    {
        for (c = 0; c <= 1275; c++);
    }
}

Chapter 5

ADVANTAGES

Low cost.

Easy to implement.

Automated operation.

Low power consumption.

Reduces common health problems caused by prolonged computer use.

Highly relevant and efficient sensors.

Easy modification.

User friendly.


Chapter 6
FUTURE ENHANCEMENT

Hand gesture operations can be provided by adding a few more sensors.

The entire computer operation can be controlled by enhancing the system.


Chapter 7
CONCLUSION

We continuously scan for various human gestures, human voice and eye-blinking actions; as
soon as an impact is detected, further impact-related sensors come into the picture. The µC
stores all this data in its internal memory.

As soon as the gestures are detected, the signal is processed in the ARM7; since it has image
and video processing operations built in, there is no need for external hardware blocks.

This controller is of very low cost, consumes very little power, and effectively senses and
serves the user's interaction needs with the computer.

It is used for tracking face, eye, voice and hand gestures using smart cameras and sensors.

In today's digitized world, processing speeds have increased dramatically, with computers
being advanced to levels where they can assist humans in complex tasks. Yet input technologies
seem to cause a major bottleneck in performing some of these tasks, under-utilizing the
available resources and restricting the expressiveness of application use. Hand gesture
recognition comes to the rescue here. Computer vision methods for hand gesture interfaces
must surpass current performance in terms of robustness and speed to achieve interactivity
and usability.

Chapter 8
REFERENCES

1. Datasheets and user manuals of the LPC2127.

2. Muhammad Ali Mazidi and Janice Gillispie Mazidi, The 8051 Microcontroller and Embedded
Systems Using Assembly and C, Pearson Education, 2nd edition, 2003.

3. Predko, Microcontrollers: Architecture, Programming, Interfacing and System Design,
Pearson Education.

4. Sakr, Sharif, "ARM co-founder John Biggs", Engadget, retrieved December 23, 2011: "[...]
the ARM7-TDMI was licensed by Texas Instruments and designed into the Nokia 6110, which
was the first ARM-powered GSM phone."

5. Donal Heffernan (2002), 8051 Tutorial, University of Limerick.

6. Mazidi M. (1987), 8051 Assembly Language Programming and Interfacing, Prentice Hall,
USA.

7. Muhammad Ali Mazidi, The 8051 Microcontroller and Embedded Systems Using Assembly
and C, 2nd edition.

8.1 ONLINE RESOURCES


http://www.keil.com
http://www.atmel.com
http://www.google.com
http://www.atmsite.org.in

