
TABLE OF CONTENTS

Abstract
List of Figures
List of Abbreviations
1. Introduction
   1.1 Brain Computer Interface
   1.2 History of the Electroencephalograph
   1.3 Brain Functions & EEG
   1.4 Inner Workings of the EEG
       1.4.1 Hardware
       1.4.2 Processing Software
   1.5 External Applications
2. Literature Review & Project Background
   2.1 Pattern Recognition & Classification
       2.1.1 LDA: How Does it Work?
   2.2 Noise Filtration
       2.2.1 The Learning Filter
       2.2.2 Traditional Vector Quantization
       2.2.3 Learning Vector Quantization
             2.2.3.1 ANNs
             2.2.3.2 Application of ANNs to VQs
   2.3 What's New
       2.3.1 Creating an Unsupervised LVQ
       2.3.2 How Does it Work?
   2.4 Final Hypothesis
3. The Design
   3.1 Purpose
   3.2 Design Criteria
   3.3 Required Materials
   3.4 End User
   3.5 Experimental Background & Design
       3.5.1 Real-time Use
       3.5.2 Training Sequence
       3.5.3 Experimental Variables
   3.6 Methods
   3.7 Results
   3.8 Conclusion
4. Appendix
5. Works Cited

1. Introduction
Human Computer Interaction has been through various phases, progressing in parallel with advancements in computer technology. The earliest methods involved directly manipulating the hardware, but as digital computers grew more complex in multiple aspects, several interaction devices were designed, such as the conventional keyboard, mouse, and touch screen. Most of these devices fall under the same paradigm; hence, any new device or technology introduced is designed to operate within conventional user interfaces. One such recent technology is the brain-computer interface (BCI). While the existing devices are all physical, a brain-computer interface is based mostly on mental processes. For immediate use of a BCI, an efficient approach is to latch it onto the navigation systems of existing user interfaces instead of designing an entirely new user interface. With this intent, a module was designed to latch the cursor navigation system to an EEG device. The module's construction follows that of a typical BCI. It works using the motor paradigm of EEG, i.e., EEG signals produced by the physical or imagined movement of limbs. OpenViBE, a software package dedicated to building BCIs, is used in the module. MATLAB is used for simulation using recorded EEG signals.

1.1 Brain Computer Interface
A Brain-Computer Interface (BCI) is a device that enables communication without movement. People can communicate via thought alone. Since BCIs do not require movement, they may be the only communication system possible for severely disabled users who cannot speak or use keyboards, mice, or other interfaces.

Most BCI research focuses on helping severely disabled users send messages or commands, but this is beginning to change. Some companies have begun offering BCI-based games for healthy users, and other groups are developing or discussing BCIs for new purposes and for new users. There may soon be a substantial increase in the number of people using BCIs. Any BCI or BNCI requires at least four components. At least one sensor must detect brain activity. (In a BNCI, the sensor could detect other signals from the body, which might reflect activity from the eyes, heart, muscles, etc.) Next, a signal processing system must translate the resulting signals into messages or commands. This information must then be sent to an application on a device, such as a web browser on a monitor or a movement system on a wheelchair. Finally, there must be an application interface or operating environment that determines how these components interact with each other and with the user.

There are often many misunderstandings about what BCIs can and cannot do. BCIs do not write to the brain. BCIs do not alter perception or implant thoughts or images. BCIs cannot work from a distance, or without your knowledge. To use a BCI, you must have a sensor of some kind on your head, and you must voluntarily choose to perform certain mental tasks to accomplish goals. For example, there are published videos that show someone moving through a virtual environment by thinking about moving. A device must meet four criteria to be a BCI. First, the device must rely on direct measures of brain activity. (A BNCI is a device that can also rely on indirect measures of brain activity.) Second, the device must provide feedback to the user. Third, the device must operate in real time. Fourth, the device must rely on intentional control. That is, the user must choose to perform a mental task, with the goal of sending a message or command, each time s/he wants to use the BCI.

1.2 History of the Electroencephalograph
This is a technology that, although it has existed since the 1920s, has only now come into the limelight. Electroencephalography, or EEG, was first demonstrated in 1924 by Hans Berger, who made the first recording of brain signals using rudimentary radio equipment to amplify the weak electric signals produced by the brain. His work as a neurologist put forth many early speculations on the device he called an Elektrenkephalogramm and its medical uses. He theorized that it could diagnose brain disorders and diseases, and this proves to be its most useful application even today. As his work was carried on by later scientists such as Edgar Adrian and W. Grey Walter, people realized that not only could this machine diagnose brain tumors, but it could also provide much-needed medical insight into the inner complexities of the brain. This was the first true brain-computer interface ever invented; because it provides direct feedback from the brain, it shows promising results that could address the limitations put forth by EMG.

1.3 Brain Functions & EEG: How Does it Work?
EEG works by analyzing electrical signals present on the scalp that accompany thoughts, and this begins deep within the brain itself. The brain is made up of billions of neurons, specialized cells that can be electrically or chemically stimulated. These neurons are said to be the lowest unit of information processing; everything above them in the hierarchy of the mind is based upon these structures. In terms of electrical circuits, the operation of neurons within the brain can be compared to a feedback loop; information is continually processed by gathering data from external sensors. A neuron's function is to gather information through input from other neurons, process it based on external stimuli, and pass it on to the next neuron through its output. In essence, neurons gather data from a variety of sensory sources and create a low-resolution, representative, simplified output; this is what allows everything from image recognition to sound pinpointing to occur so effortlessly within the brain. Furthermore, this unique processing paradigm also facilitates active learning, since different sensory inputs can be assigned different levels of importance (or weights, as will be discussed later). Because of this, all information that flows through the central nervous system is processed by neurons. Since neurons must process all information, there are many different types of neurons; however, these can be divided into three subgroups: motor neurons, sensory neurons, and interneurons. In order to communicate with other neurons, a neuron has two vital structures that provide this means of communication:

axons and dendrites. A neuron may possess only one axon but may have many dendrites. The junctions at which neurons communicate are called synapses, and the use of chemical or electrical stimulation at axon/dendrite locations is called neurotransmission. Neurons are able to generate electrical currents through the release or intake of ions through the neuronal membrane; this sudden upward or downward flux in current is called the action potential. Interneurons, whose function is to relay and process information between different neurons, communicate using neurotransmission at axon-dendrite (chemical synapse) or dendrite-dendrite (electrical synapse) sites. Both chemical and electrical neurotransmission involve the exchange of ions to transfer information; whereas in an electrical circuit electricity flows through wires, neuronal charges take place at these axon/dendrite sites. Neurons' ability to speak to each other comes from this exchange of electrical currents, which serve to transport information through the neurons until the destination, usually the spinal cord, is reached. Because neurons process all information that flows in or out of the nervous system, enough neurons can process the same information that the combined synapses are sufficient to produce a detectable electric current. This detectable current is usually present by the time the information reaches the scalp, because neuronal firing deep within the brain recruits more and more neurons in a pyramid fashion as information is processed. These currents are known as potentials because they require a reference point within the brain itself in order to be measured properly. Potentials produced by the brain fluctuate many times per second, producing recognizable variants of the sine wave; these waves are similar to those of a heartbeat, but waves produced by the brain are present in several different frequencies at the same time. These frequencies are: Delta (<4 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (12-30 Hz), and Gamma (also known as High Beta, >30 Hz). The real mystery of electroencephalography lies in the differentiation of these frequency bands. For example, alpha waves are usually exhibited only when there is no visual focal point for the user to focus on, such as when the eyes are closed. However, they may also be present during meditation or higher contemplation (colloquially, daydreaming), suggesting that they are not simply scanning or resting waves, like the white noise on a TV. The level of activity in the above frequency ranges reflects the user's state of mind or level of consciousness. For instance, spikes in the Alpha range show a relaxed state of mind, while activity in the Beta range shows the user to be alert or active, and also indicates use of motor control neurons. Use of this feature can be seen everywhere, from the diagnosis of a coma to lie-detector tests.

1.4 Inner Workings of the EEG
The Electroencephalograph measures electrical activity in these frequencies, making it possible to associate certain patterns in the waves with functions of the brain. In order to detect the currents, surface electrodes are placed on the scalp above key regions of the brain. These regions, or lobes, are also responsible for different aspects of the psychological makeup, such as logic and motor control. Electrodes are placed according to the International 10-20 System, a standardized mapping of key scalp areas; the number of electrodes in a medical-grade EEG ranges from 19 to as many as 256. The more electrodes there are, the higher the resolution, and the more accurately thoughts that originate from deeper regions in the brain can be detected. A high-power amplifier then increases the signal strength so that it can be analyzed by a receiver connected to a computer, which analyzes and sorts the data. However, no two people are alike, and even the same person can experience a change in brainwave patterns with age. Together, these problems represent a manufacturing nightmare; if there are one million users, one million versions of the software must be made! These problems are easily overcome, though, with the use of learning software, which automatically tailors itself to the user's thoughts, as described in the following sections.

1.4.1 Hardware
The ability to analyze and recognize patterns in brain activity is, arguably, the cornerstone of the Electroencephalograph. To date, no other machine can achieve this resolution while maintaining a time-signal relationship. That is, recordings can be analyzed in real time, whereas an MRI, for example, would require time on the order of minutes between activity and recognition. Using this unique feature, the EEG uses specialized signal processing software to recognize and differentiate patterns in the brain's electrical activity. Because EEG measures potentials on the scalp, it requires a predefined reference channel in order to gather any data at all. This is because, in terms of electrical circuits, the potentials are the positive source of electricity, while the reference channel acts as a ground to complete the circuit. The reference channel itself is defined as a single electrode placed at a key region according to the 10-20 system that allows completion of the circuit. In most medical EEGs, this is located on the bridge of the nose, but in some cases it is located on the skull just behind the ear lobes.

1.4.2 Processing Software
Here, however, an issue is encountered: thought processing is not the only source of electrical spikes present in the area surrounding the brain; blinks, jaw clenching, and flexing neck muscles produce significantly higher voltages than the original brainwave data, corrupting any data obtained. These false readings, called artifacts, can render a study completely useless by fooling the software into recognizing patterns that were not present originally. Nevertheless, the EEG is one step ahead here as well. In the process of using the signal processor to sort data into the frequency bands, simple limiting algorithms can be used to effectively clean up the data by limiting input values to a reasonable range. Additionally, comparative algorithms can be implemented, which compare the spikes within one frequency band to those within another. This is another effective method of parsing, or using only what is needed from the input data. This method of parsing is achievable because of the unique visible-across-all-frequencies aspect of the brain's electrical activity. In other words, if there is a spike in the Beta band, there will also be a similar spike in the Theta band, however small it may be. This is not because of the brain itself, but because of the fashion of neuronal processing that allows several thoughts to be exhibited at the same time. Despite this cleanup of the data, the brainwaves themselves fluctuate at a rate too fast to be processed and recognized accurately by software, causing all signals to seem identical. To counteract this, an EEG also implements other functions in the software to smooth the signal. Usually, some sort of lossy data compression is implemented to create a representative signal; for instance, LVQ (Learning Vector Quantization, which is discussed later) and several modern audio compressors make use of this concept. Additionally, rudimentary epoching of the signal at either a fixed or weighted interval (weights determined by an interval-calculating function, also discussed in a later section) can be applied. Furthermore, other functions such as linear detrending can be utilized to compensate for fluctuations in hardware readings. Next, various real-time filters and algorithms must be applied in order to make the data easier to read and to map signals to their respective frequency bands. First comes a temporal filter, whose job is to use predetermined algorithms to help parse the data and further remove embedded artifacts. The temporal filter uses known filters such as the Butterworth and Chebyshev filters (whose parameters have presumably been determined in previous experiments carried out by others) to carry out this process. A temporal filter also outputs signals only between predefined frequency parameters, such as only those in the Beta (12-30 Hz) band, thereby further removing artifacts that might have caused higher-than-normal spikes in the data. Additionally, a spatial filter is applied to specify the location of data channels (relative to the reference voltage defined earlier) and channelize them. The filter works by applying matrices in a linear equation to generate output channels that are combinations of input channels. Spatial filters greatly simplify the amount of processing required during pattern classification, as higher-level data and non-sensorimotor regions can be effectively ignored. Finally, an identifier function maintains arbitrary information about the current acquisition scheme, usually for the purposes of experiment replication and subsequent analyses; the only time this is used is when a configuration file is written, as described in the next section. Yet more processing methods rely on FFTs, or Fast Fourier Transforms, to differentiate between various frequency-related constituents; as the transform removes the temporal axis, it is not well suited to online use, and thus will not be discussed further (with the exception of short-term FTs, which are in fact used in several processing modules). The last step in this process involves the training of the software that allows it to recognize patterns in the data. This adaptation to each signal is essential, because no two users possess the same brains or exhibit the same type of electrical activity. Therefore, a profile that is unique to each user must be created in order to tell the software what it must look for. This profile is created through the use of a trainer function, which takes a sample of data elapsed over a small period of time, applies the above filters, and saves it to a configuration file as a control group for later recognition. The configuration file is created by telling the user to imagine or have a thought that would trigger an end result. The software then maps this configuration file, along with others that have been created, to its specific triggered event. For example, when training the software to recognize a "make fist" pattern, the user would imagine closing his/her fingers for, say, 5 seconds, and the software would create a configuration file containing the signal and its mapping. That way, when the user starts an online EEG session and imagines making a fist, the pattern is recognized by comparing the signal against the configuration file, activating a predetermined event. At this point, in order to recognize patterns with the greatest accuracy and speed, the software requires these new functions: a file writer/reader, a trainer, a processor, and an event stimulator (that allows communication with third-party programs).
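To make the trainer/matcher relationship concrete, here is a minimal C sketch of the train-then-match cycle; the configuration-file layout, epoch length, field names, and the raw distance comparison are illustrative assumptions, not the actual software's format:

#include <stdio.h>

#define EPOCH_LEN 128                  /* samples per training epoch (assumed) */

/* Hypothetical user profile: one averaged, filtered epoch per mental task. */
typedef struct {
    char  label[32];                   /* e.g. "make_fist" */
    float pattern[EPOCH_LEN];          /* control-group signal saved by trainer */
} Profile;

/* Trainer: save the recorded, filtered sample as a configuration file. */
int saveProfile(const Profile *p, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(p, sizeof *p, 1, f);
    fclose(f);
    return 0;
}

/* Online matcher: mean squared distance between a live epoch and the
   stored pattern; a small score indicates the pattern was recognized. */
float matchScore(const Profile *p, const float epoch[EPOCH_LEN])
{
    float d = 0.0f;
    for (int i = 0; i < EPOCH_LEN; i++) {
        float e = epoch[i] - p->pattern[i];
        d += e * e;
    }
    return d / EPOCH_LEN;
}

In the actual system the comparison is performed by a trained classifier rather than a raw distance, but the record/save/compare cycle is the same.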

1.5 External Applications
Once the EEG signal processing is complete, the stimulators (in the form of anything from a key press to the execution of other software) can be used to effectively mind-control anything that can be connected to the processing computer.

2. Literature Review & Project Background


Through research, it has been determined that EEG can, in fact, serve as a viable brain-computer interface for recognizing sensorimotor signals, specifically those used for the manipulation of limbs. However, the problem that plagues today's generation of EEG-controlled devices lies in the signal processing portion of the system: hardware and software increase in sophistication and reliability every day, but the basic hurdles of noise reduction, signal integrity, and pattern detection remain ever-present obstacles. The focus of this project, then, is to improve said software in such a way that it a) minimizes the presence of noise in the signal while maintaining integrity, and b) allows any arbitrarily chosen operator to achieve similar pattern detection accuracy by instituting a profiling, trained program to perform the classification.

2.1 Pattern Recognition & Classification
At the heart of an EEG-based brain-computer interface is a pattern detection algorithm that allows a program to recognize certain organizations within a signal that may represent some sort of cognitive thought. In this study, the goal of the developed software was to accurately recognize low-level, intent-based signals in the sensorimotor regions of the brain, with the purpose of using this data to manipulate a robotic limb. Normally, classifiers maintain some sort of class system in which to categorize incoming data and make decisions based upon the location of the data within the system. In other words, two-dimensional classifiers, like the ones discussed in this paper, use dynamic tessellations to separate data and make decisions based upon density and proximity to class boundaries. Although there is an entire field of study devoted to pattern recognition today, the researcher limited the focus to two popular algorithms used in many situations today: Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). Generally speaking, LDA and PCA utilize similar categorization techniques, but the idiosyncrasies of each are what determine overall accuracy. For instance, PCA classifies data by feature trends, whereas LDA better identifies data trends; because of this, LDA generally has greater decision ability due to improved separation of classes. Due to this discrepancy, the researcher decided to implement both PCA and LDA as classifiers in the design software, using PCA as the control.

2.1.1 LDA: How Does it Work?
As described previously, the LDA classifier algorithm implements a system of classes to categorize various objects, or data; the variability of the data is denoted by variance. Thus, the typical goals of any classifier are to maximize variance between classes (resulting in better decision-making ability), to minimize variance within classes (an indication of data uniformity and accurate decisions), and to maximize overall variance. There are two established types of LDA today: class-dependent and class-independent classification. In this study, class-dependent classification was utilized due to its increased decision-making ability and the inherent, unstable nature of EEG signals, which would render class-independent transformations useless. Data, in the form of vectors, are stored in matrices, from which covariance matrices can then be calculated by the following formulas (for a bi-class problem; this can be expanded to incorporate many classes, as seen in the following sections). First, the within-class covariance:

$\mathrm{Cov}_j = (x_j - \mu_j)(x_j - \mu_j)^T$

where the dimensions m and n are given by the order of the input matrix.

The within-class scatter matrix $S_w$, representing the variance within a single class, is then computed as:

$S_w = \sum_j p_j\, \mathrm{Cov}_j$

where $p_j$ is the a priori probability of the given class.

The overall mean is calculated using the a priori probabilities of each set (a priori is a method of deriving event probability; it dictates that, for any M equally likely events, the probability P that a given event occurs is 1/M), and the between-class covariance is calculated:

$C_b = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{x})(Y_i - \bar{y})$

where X and Y are two sets of data, n is the number of data points, i is an index variable, and $\bar{x}$ and $\bar{y}$ represent the means of X and Y.

Finally, the between-class scatter matrix can be computed as follows, substituting the class means into $C_b$:

$S_b = (\bar{x} - \mu)(\bar{y} - \mu)^T$

where $\mu$ is the overall mean.

Class-dependent LDA has a simple goal: to maximize the ratio of the between-class scatter matrix to the within-class scatter matrix:

$\mathrm{criterion} = S_w^{-1} S_b$

For the two-class case, the LDA transform $w$ is the one maximizing

$J(w) = \frac{\det(w^T S_b\, w)}{\det(w^T S_w\, w)}$

(The original figure showed a two-class LDA transform; source: outgraph.org.)
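As a concrete illustration of the formulas above, the following C sketch computes the within-class scatter and the between-class scatter for a two-class, two-dimensional problem; the data values and equal priors are illustrative, and the between-class scatter is written in its common per-class-mean form:

#include <stdio.h>

#define DIM 2

/* Mean of n DIM-dimensional vectors. */
static void mean(float x[][DIM], int n, float mu[DIM])
{
    for (int d = 0; d < DIM; d++) {
        mu[d] = 0.0f;
        for (int i = 0; i < n; i++) mu[d] += x[i][d];
        mu[d] /= n;
    }
}

/* Class covariance: Cov = (1/n) * sum (x - mu)(x - mu)^T. */
static void covariance(float x[][DIM], int n, const float mu[DIM],
                       float cov[DIM][DIM])
{
    for (int r = 0; r < DIM; r++)
        for (int c = 0; c < DIM; c++) {
            cov[r][c] = 0.0f;
            for (int i = 0; i < n; i++)
                cov[r][c] += (x[i][r] - mu[r]) * (x[i][c] - mu[c]);
            cov[r][c] /= n;
        }
}

int main(void)
{
    /* Two tiny illustrative classes of 2-D feature vectors. */
    float c1[3][DIM] = {{1, 2}, {2, 3}, {3, 3}};
    float c2[3][DIM] = {{6, 5}, {7, 8}, {8, 7}};
    float mu1[DIM], mu2[DIM], cov1[DIM][DIM], cov2[DIM][DIM];
    float p1 = 0.5f, p2 = 0.5f;        /* equal a priori probabilities */

    mean(c1, 3, mu1); mean(c2, 3, mu2);
    covariance(c1, 3, mu1, cov1); covariance(c2, 3, mu2, cov2);

    float mu[DIM], sw[DIM][DIM], sb[DIM][DIM];
    for (int d = 0; d < DIM; d++) mu[d] = p1 * mu1[d] + p2 * mu2[d];

    for (int r = 0; r < DIM; r++)
        for (int c = 0; c < DIM; c++) {
            /* Within-class scatter: Sw = p1*Cov1 + p2*Cov2. */
            sw[r][c] = p1 * cov1[r][c] + p2 * cov2[r][c];
            /* Between-class scatter from class-mean deviations. */
            sb[r][c] = (mu1[r] - mu[r]) * (mu1[c] - mu[c])
                     + (mu2[r] - mu[r]) * (mu2[c] - mu[c]);
        }

    printf("Sw = [%f %f; %f %f]\n", sw[0][0], sw[0][1], sw[1][0], sw[1][1]);
    printf("Sb = [%f %f; %f %f]\n", sb[0][0], sb[0][1], sb[1][0], sb[1][1]);
    return 0;
}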

LDA is often used in conjunction with a training sequence, so that classified patterns can be matched to a dictionary of patterns generated by the user. This is implemented simply by initializing a secondary transform to classify training data based on experimental information, then comparing its results to those of the primary transform during on-line use.

2.2 Noise Filtration
Here, noise is defined as any arbitrary signal fluctuation that may or may not lead to the original signal being compromised and to the failure of subsequent processing modules to perform as expected. As such, noise can be treated as a signal in and of itself, and it can be actively isolated from the original signal if its values can be predicted (by an equation of some sort). Therefore, a second signal inversely proportional to the noise can be mathematically combined with the noisy signal to produce a third, filtered signal (a technique referred to here as convolution). Here, signals are represented by simple trigonometric functions; assuming the sine wave is noise, a negative sine wave can be combined with the noisy signal, cancelling the noise component, as shown below.
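The original figure illustrated this with trigonometric curves; a minimal numerical sketch of the same cancellation in C, assuming the noise is a known, predictable sine term (all constants illustrative):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const int N = 8;                        /* samples to demonstrate */
    for (int i = 0; i < N; i++) {
        double t     = i * 0.1;
        double clean = 0.5 * sin(3.0 * t);  /* underlying signal */
        double noise = sin(20.0 * t);       /* predictable noise term */
        double noisy = clean + noise;       /* what the electrode records */
        double filt  = noisy + (-noise);    /* combine with inverse noise */
        printf("t=%.1f  noisy=% .3f  filtered=% .3f\n", t, noisy, filt);
    }
    return 0;                               /* filtered equals clean exactly */
}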


There are several rudimentary, brute-force processing methods implemented in the software of this project; these include temporal filters, spatial filters, and various signal transformations to ease the difficulty of pattern detection. Temporal filters rely upon frequency-attenuating linear filters (that is, some frequencies are rejected without regard to a weight system), allowing some frequencies to pass into the filtered signal while blocking others entirely. Many temporal filters today use either Chebyshev- or Butterworth-type filters; in this study, the Butterworth filter was implemented in order to retain maximum uniformity in sensitivity across the desired frequencies. Spatial filters, normally used in optics, generally comprise a Gaussian filtering algorithm that reduces the filter's sensitivity to noise in the image. Since a signal can also be quantized into a series of vectors, such a filter can also be applied to EEG signals. In this project, the spatial filter is used as a black box, since it does not change values among different operators, i.e., it does not rely upon a system of arbitrary weights. Finally, simple signal transformations (different from transforms, which indicate the literal transformation of axes, e.g. frequency-time analysis) such as squaring the signal improve the accuracy with which later pattern detection can be carried out.

2.2.1 The Learning Filter
The implementation of a system of weights (either fixed or continually changing) to perform signal filtration is a concept that could potentially yield much more accurate results than brute-force filters like those mentioned, and this is the core focus of this study. By improving learning filters, a much higher accuracy of pattern detection can be achieved at later stages due to the absence of signal-compromising noise; one such filter, and the primary focus of this study, is LVQ (Learning Vector Quantization).


2.2.2 Traditional Vector Quantization
Historically, VQ is known as a lossy form of data compression (certain values in the original signal are lost, depending on the rate of quantization), using a series of vectors to represent several values. At its most basic level, it is nothing more than a simple estimator, or rounding device. Incoming signals are split into packets, or quanta of vectors; these are the data that will then be grouped into different categories, or classes, based on the relative distance between their values. A rudimentary explanation of the concepts behind VQ follows.

For the sake of simplicity, only two-dimensional VQs are discussed here. The size and bounds of classes are ever-changing, and for each class, there is a single representative vector that is the result of VQ compression. A simple diagram of a two-dimensional compression:

In this illustration, green dots represent individual points of data while the red points reflect the overall representative vector of each class, also called codevectors (with the codebook representing the set of all codevectors). Therefore, the ratio of codevectors to input vectors determines the compression rate as well as signal integrity, with the goal being to reduce the effect of noise while maintaining an adequate amount of precision. With traditional VQ compressors, the weights used to determine the value of the codevector of a single class (determined by the distance between individual data and class boundaries) may be initialized but are fixed throughout the compression process. Therefore, the location of class boundaries, and by extension the codevectors themselves, can be accurately predicted if the signal is already known. Furthermore, because of this unchanging nature, traditional VQs are best suited to prerecorded signals, since weights cannot be optimized as incoming signals fluctuate. Generally, a VQ design is most easily created through the use of a training sequence, without which many complex integration calculations would have to be performed. The problem of designing an optimal codebook has yet to be solved, as it is an NP-hard problem; suboptimal procedures have been devised to facilitate the creation of codebooks, such as that formulated by Linde, Buzo, & Gray (LBG), with the goal of minimizing mean signal distortion. The training sequence used in the LBG VQ design usually consists of a large sample of the signal, preferably one that encompasses all statistical tendencies of the source (for instance, in the case of EEG signals, muscular artifacts and electrode impedance must be simulated in order to fully train the VQ). The algorithm is an iterative process, solving the problem of VQ design by using an arbitrarily sourced codebook (found by assigning an arbitrary value to a codevector, then splitting until the entire codebook is filled) as the initial training sequence.

2.2.3 Learning Vector Quantization (LVQ) and Artificial Neural Networks
As mentioned earlier, the system of weights is crucial to fine-tuning filtration procedures; however, as the saying goes, no two are alike, and it is inevitable that such a system may fail for some operators while performing well for others, or not at all. Therefore, it becomes necessary that the weights (as mentioned earlier, those assigned to all data in a given class that ultimately determine the value of the representative codevector) be variable, ensuring that the resultant codevectors represent a minimal percentage of the noise in the signal; this is the crux of the entire filtration procedure, as lossy data compression is used for the sole reason of minimizing the representation of noise. Such a system can also be considered a Self-Organizing Map, since it creates a dynamic tessellation of class boundaries that in turn allows the positions of codevectors to be infinitely variable. In the case of LVQ, weights are actively modified by means of an artificial neural network (ANN), normally trained in a supervised procedure (i.e. the user presents a scenario, then gives an example of the best possible response). The point of using this type of approach is to minimize the error between the VQ-reconstructed signal and the original signal, also called distortion.

2.2.3.1 Artificial Neural Networks (ANNs)
In order to understand how the weights of the VQ are modified actively, it is necessary to possess a rudimentary knowledge of ANNs. This unique processing paradigm allows many inputs of different types to be represented by a few low-resolution data that can be used to provide feedback to other programs, in this case the weight array of a VQ. ANNs are based heavily upon the architecture of the human brain itself, and by extension the biological neurons that make up the gray matter. Each neuron in the network (biological or artificial) returns a single value for any number of given input values, using a weight system of its own. A neuron found in the human brain follows this same pattern, using electrochemical neurotransmitters to perform the I/O functions discussed earlier. Neurons can easily be modeled programmatically, since a program can be coded to behave like a neural network, eliminating the need to construct individual digital circuits. The inputs are given respective weights, and are then mathematically combined within the neuron by a simple summation. A simple illustration of an artificial neuron:


Artificial neurons can then be combined into networks, sometimes following the popular feedforward fashion:

Akin to their biological counterparts, artificial neurons are activated by the value of the summation, using a set threshold value to determine their state. As such, a neuron behaves in a binary fashion, giving its output in terms of 0s and 1s, which renders any subsequent neurons practically useless. Therefore, a simple sigmoid function is used to transform the activation value into an analog value that can ultimately be used to set the weights for our VQ:

$a_m = \sum_{i=1}^{n} x_i w_{mi}, \qquad v_m = \frac{1}{1 + e^{-a_m}}$

where m denotes the mth neuron, n is the total number of input neurons, x and w are input and weight, and $a_m$ and $v_m$ are the activation value and return value, respectively.

Hence, the process of calculating the return value of any given neuron is an iterative process, as seen in the following C function:

#include <math.h>

/* Return the sigmoid-activated output of a single artificial neuron. */
float neuronCalc(const float inputs[], const float weights[], int n)
{
    float a = 0.0f;                      /* activation value a_m */
    for (int i = 0; i < n; i++) {
        a += inputs[i] * weights[i];     /* summation of weighted inputs */
    }
    return 1.0f / (1.0f + expf(-a));     /* sigmoid return value v_m */
}

The process of setting weights turns out to be a cyclical one; thus, we simply use arbitrary values as the initial weights for the ANN, input data, and adjust accordingly until correct outputs are realized for every possible combination of inputs. Such a training paradigm is a supervised type of learning, also known as back-propagation. This may suffice under certain circumstances in which inputs are binary, but in most cases it is nearly impossible to compensate for every possible statistical tendency of the input data; this becomes all the more apparent when working with EEG signals. Could the ANN itself be trained in an unsupervised fashion, using output from the resultant VQ as a feedback mechanism? This is a primitive form of genetic algorithm, and is discussed in further detail in the What's New section.

2.2.3.2 Application of an ANN to a VQ
Unlike traditional VQs, which require an extensive training dataset to initialize weights, LVQ weights are actively modified to adapt to the changing statistical tendencies of the input signal, thereby modifying the positions of the codevectors themselves. As previously stated, the goal is to design the class boundaries and the resultant codebook in a way that minimizes the Euclidean distance between any input vector and its corresponding codevector. An LVQ therefore has two primary variables other than the input data itself: the weight system, and the codevector positions determined by the Euclidean distance formula. Also known as a winner-take-all solution, the manipulation process nudges codevectors toward the original datapoint if classification was accurate, and away from it if classification was inaccurate (sketched below). Furthermore, the presence of outliers, such as noise, may be made less conspicuous by the adaptive nature of the weight system. Currently, although genetic algorithms can be used to evolve more optimal ANN weights, initial VQ training must still be supervised using arbitrary values or datasets, ensuring longer evolution periods to attain an LBG-optimal solution. A possible solution to this is also discussed in the following section.
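The winner-take-all adjustment just described is the textbook LVQ1 update rule; the following C sketch illustrates it on two-dimensional data (the learning rate, dimensionality, and struct layout are illustrative assumptions, not the researcher's full system):

#include <stddef.h>

#define DIM 2
#define ALPHA 0.05f                    /* learning rate (illustrative) */

typedef struct {
    float w[DIM];                      /* codevector position */
    int   label;                       /* class this codevector represents */
} Codevector;

/* Basic VQ step: find the codevector nearest to input x. */
static size_t winner(const Codevector cb[], size_t n, const float x[DIM])
{
    size_t best = 0;
    float bestDist = 1e30f;
    for (size_t i = 0; i < n; i++) {
        float d = 0.0f;
        for (int k = 0; k < DIM; k++) {
            float e = x[k] - cb[i].w[k];
            d += e * e;                /* squared Euclidean distance */
        }
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

/* LVQ1 update: nudge the winning codevector toward the input if its
   class matched, and away from the input if it did not. */
void lvq1Update(Codevector cb[], size_t n, const float x[DIM], int label)
{
    size_t j = winner(cb, n, x);
    float sign = (cb[j].label == label) ? 1.0f : -1.0f;
    for (int k = 0; k < DIM; k++)
        cb[j].w[k] += sign * ALPHA * (x[k] - cb[j].w[k]);
}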


2.3 What's New
2.3.1 Creating an Unsupervised LVQ
It was discussed earlier that an ANN must be initialized with an arbitrary sequence of weights in order to begin the training process, and finally obtain the values of the VQ's weights from solutions presented by the ANN. However, this process of initialization and adjustment, despite the fact that it can be automated (and is, in most software nowadays), can take enormous amounts of time to calculate due to the guess-and-check nature of the weight-setting process, especially if the network contains a relatively large hidden layer (or multiple hidden layers). This also contributes to the glaring inefficiency of systems that depend upon VQ compression to function, hence their being phased out within the past decade or so in favor of more efficient Gaussian-filtering algorithms. After significant review of relevant literature, the researcher hypothesized that it was possible to do away with this process entirely by implementing a similar winner-take-all solution in weighting the ANN itself. In other words, it could be possible to allow the network to construct the VQ dynamically (even as its own weights are changing) and use the output signal as a feedback mechanism to nudge the entire ANN's weights in a certain direction. By initiating such a process, the ANN could learn to recognize and underrepresent noise in the resulting signal without any supervision whatsoever. Not only would this be capable of yielding an LBG-optimal VQ design solution, but it would require significantly less time to initialize. Approaching the topic of signal heuristics, the researcher's solution again seems (theoretically) to perform significantly greater noise reduction than traditional LVQs in cases where not all noise is white noise. Traditional LVQs are initialized by choosing an arbitrary input vector, then modifying the weights of the VQ itself to minimize the mean squared distortion, which can be predicted by the following formula:

$D = \int p(x)\,\lVert x - r(x)\rVert^{2}\,dx$

where x is a randomly chosen input vector, p(x) is the probability of choosing said vector, and r(x) is the VQ-reconstructed vector; $\lVert\cdot\rVert$ denotes the Euclidean norm.

Then, the weights in the VQ are optimized to minimize the value of D, and subsequently, individual neurons also have their weights modified to reflect this change. However, this approach is not only inefficient but also does not allow the heuristics of a given signal to be taken into consideration. In other words, traditional LVQs like that described above are oriented more toward retaining signal integrity than toward removing noise: two approaches to signal processing that seem similar but are in fact enormously different. This would not matter if signal integrity could, in fact, be maintained accurately using this method; it is the very fact that signal integrity cannot be maintained without considering specific signal heuristics that prevents today's LVQs from becoming widely used. Again, the researcher's solution provides significant gains in this area; because the learning paradigm utilized does not depend on choosing single arbitrary vectors to optimize individual neurons, entire patterns and artifacts can be detected and compensated for with much greater ease and efficiency. Also, this removes the probability factor p(x) from the distortion calculation, allowing the LVQ to be implemented without respect to the amount of data in the training signal. Because of this unique behavior, all training sessions can be performed live should the need arise; such functionality may be extremely convenient when exact event data in the original signal is not known.

2.3.2 How Does it Work?
To reiterate, the distortion is used to set the weights not of the VQ itself, nor of individual neurons, but of the entire neural network; the coefficient of adjustment (termed here $K_x$ for each codevector x) varies according to the Euclidean space between (i.e. the proximity of) corresponding codevectors and the offending codevectors. The decay of K with regard to codevector proximity is determined by a Gaussian-type distribution, of the general format

$f(d) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(d-\mu)^{2}}{2\sigma^{2}}\right)$

The mean equals 0 for a regular normal distribution, but optimal values can be found experimentally; since the effect is not very great, it is left at 0 in this study. Therefore, we use an approximately Gaussian, memoryless source of data as the original signal, then intentionally convolve it with noise at random intervals characteristic of the target signal (in this case EEG signals, which are known to exhibit both high-frequency long-term noise and short-term high-amplitude artifacts). Following this, we feed it through the LVQ and use the mean distortion between the VQ-rendered signal and the original source, multiplied by $K_x$, to adjust the weights of the neurons automatically. Iterating over this process produces a convenient feedback loop that allows the ANN to learn efficiently and relatively quickly. Here, an example source of original data is shown (a simple sinusoid is used as a visual aid):

Allowing the noisy signal to be the input data for the VQ, the mean distortion is found by measuring the Euclidean space between the vectors and calculating the norm:

$\bar{D} = \frac{1}{N}\sum_{i=1}^{N}\lVert x_i - r(x_i)\rVert^{2}$

Therefore, the overall weight offset (termed here $O_i$, representing the offset for the neuron of index i in the network matrix) for every neuron can be calculated using the following equation, as formulated by the researcher:

$O_i = K_x\,\bar{D}$

where r(x), used in computing $\bar{D}$, is the generated codevector.
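A compact sketch of this feedback computation in C, under the assumptions above (zero-mean Gaussian decay of the coefficient, offset equal to the coefficient times the mean distortion; function names and the σ value are illustrative):

#include <math.h>

/* Gaussian decay of the adjustment coefficient with codevector distance. */
float coeffK(float dist, float sigma)
{
    return expf(-(dist * dist) / (2.0f * sigma * sigma));
}

/* Mean squared distortion between original samples x[] and
   VQ-rendered samples r[], both of length n. */
float meanDistortion(const float x[], const float r[], int n)
{
    float d = 0.0f;
    for (int i = 0; i < n; i++) {
        float e = x[i] - r[i];
        d += e * e;
    }
    return d / n;
}

/* Weight offset for one neuron: O_i = K_x * mean distortion. */
float weightOffset(float dist, float sigma,
                   const float x[], const float r[], int n)
{
    return coeffK(dist, sigma) * meanDistortion(x, r, n);
}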

Finally, optimization success can be computed by a formula scaled in Euclidean distance, evaluated where index i is at its maximum value (the total number of generated codevectors). Following this, to further decrease ANN evolution time, it is possible to implement a control system such as Linear Quadratic Regulation (LQR) to provide a second weight offset to the ANN. This could minimize the time needed to achieve a minimum $D_x$. The total weight offset $w_i$ can then be calculated by combining the two offsets, as also formulated by the researcher.

Application of LDA Classification toward ANN Initialization
As described in the previous section, in order to carry out the novel training process stipulated by the researcher, a source of noise characteristic of the source data is required; however, this can be difficult to achieve without inadvertently presenting several forms of noise simultaneously. This can occur due to hardware failure or arbitrary human error, so it is important that each characteristic noise signal be represented in its entirety so as to properly initialize the ANN weights. Here, a Linear Discriminant Analysis (LDA) classifier can be trained using intentional noise, and the subsequent training files can be convolved with the original signal to separate the noise component. This can then be used to provide accurate, representative noise data to the ANN training process. In this respect, LDA can be used to actively aid in artifact and noise suppression.

Furthermore, because of LDA's class tessellation system, it is possible to use the distance between the classified data and the class boundary (also known as the hyperplane) as a coefficient of matching, thereby yielding an analog value that can be used to control the joint positions of robotic limbs in later experimentation.
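Since this hyperplane distance is what later drives the analog control value, a small C sketch of that computation follows (for a linear discriminant with weight vector w and bias b, both assumed already trained):

#include <math.h>

/* Signed distance from feature vector x (length n) to the hyperplane
   w.x + b = 0; the sign gives the class, the magnitude the confidence. */
float hyperplaneDistance(const float w[], float b, const float x[], int n)
{
    float dot = b, norm = 0.0f;
    for (int i = 0; i < n; i++) {
        dot  += w[i] * x[i];
        norm += w[i] * w[i];
    }
    return dot / sqrtf(norm);
}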


2.4 Final Hypothesis
With the goal of obtaining greater pattern detection accuracy in EEG signals: is it possible to train an LVQ automatically, and more efficiently, in a direct feedback loop from its output signals, in a way that yields greater accuracy of codevector representation even in the presence of significant noise and artifacts? Additionally, can LDA be used as a further weight offset in the LVQ, allowing active noise and artifact suppression?


3. The Design
3.1 Purpose
This project sought to prove that a widely used medical device, the Electroencephalograph, has applications as a suitable brain-computer interface. This device, which is normally used in hospitals to diagnose various brain disorders and abnormal activity, could potentially be applied in the field of HCI as a means of brain-machine interfacing. Electroencephalography requires relatively simple hardware; at its most basic level, it requires electrodes, amplifiers, and a processing unit. The unit, which encompasses complex digital signal processing, is the heart of the system and is what allows signals from such a complex, high-level device as the human brain to be decomposed into low-level sensorimotor commands. As promising as it may sound, EEG-based brain-computer interfaces have suffered greatly in popularity and support due to their immense complexity and hardware-related problems; the two major hurdles that have plagued the EEG for the last few decades are noise filtration and accurate pattern recognition. These two ever-present obstacles have caused scientists and physicians alike to shy away from this technology, which could be the most informative direct brain-computer interface to date. This project seeks to improve the usability of EEG hardware in the field of assistive technologies, 1) by the synthesis of custom software that improves the efficiency and accuracy of existing software through the use of novel processing methods, and 2) by demonstrating that EEG technology can be used as a navigation device in conventional user interfaces.

Goals:
1. To generate a program that can sufficiently parse, sort, analyze, and recognize patterns (based on user configuration files) in data obtained from an Electroencephalograph (EEG), using the novel methods prescribed by the researcher in previous sections.
2. To demonstrate, as a proof of concept, that EEG has real application in Human-Computer Interaction with conventional user interfaces, and that building user interfaces based solely on EEG would be promising.


3.2 Design Criteria
Software Synthesis: The interfacing software, which is written in C and executed in OpenViBE and EEGLAB, must conform to the following design criteria, which have been optimally stated for use in real-world application in the field of prosthetics.
1. Accuracy: although no numerical value is assigned to this criterion, the accuracy of the pattern detection program must be high enough that there are as few false positives as possible; there must also be a noticeable improvement over the control (PCA and temporal filters only).
2. Ease of Use: the analysis of behavioral patterns in the signal must be achievable through several brief training sessions.
3. Applicability: the above criteria must be combined in the optimal configuration that provides a robust, intuitive software interface.

3.3 Required Materials
- A signal-processing environment or GUI for grouping code. In this study, OpenViBE and EEGLAB (with the Neural Network Toolbox, which contains the libraries necessary to perform LVQ), both free, open-source GUIs, were used in conjunction to execute code and perform signal processing.
- A laptop computer with the processing power required to run OpenViBE and EEGLAB.
- An EEG device with at least 10 channels; the Emotiv Epoc, a 14-channel consumer headset, was used here. Saline solution and electrode contacts are also required. The Epoc software is also required for use as a control group later on. It is preferable to use EEG devices that have an OpenViBE driver.
- A source of EEG datasets; these must include subjects performing some sort of motor control, and extensive experimental information from the source is required in order for the training modules to function properly.

3.4 End User
The primary audience for the information gleaned from this project will be those who are interested in making convenient BCI-based user interfaces.

3.5 Experimental Background & Design
OpenViBE is a GUI for grouping several programs together in such a fashion that EEG signals pass through each one before a result is returned. The entities into which code is grouped are called modules within the program; thus, stringing together one or more modules allows the signal to go through various stages of processing. Because of this nature, it was fairly easy to use OpenViBE live; one module within OV that allows devices to be connected and share data is called the Acquisition Server. EEGLAB offers an encapsulated GUI from which files can be executed using buttons or scripts; this functionality is demonstrated later during signal processing. Furthermore, because many programs that execute the mathematical processing functions described earlier already exist within EEGLAB and OV, the synthesis portion of the study consists of manipulating and editing existing code to reflect the researcher's hypothesis (i.e. modifying/replacing the functions used for weight-setting the ANN). As mentioned previously, both OpenViBE and EEGLAB are used to perform processing. The reason for this lies in the requirements of the processing software, as described in The Current Focus. To summarize, the software requires: several spatial and temporal filters to perform preprocessing; file readers and writers to read EEG datasets and training data and to create experimental information; an LDA-based classification system with a custom output for ANN weight-setting, as well as outputs for measuring intra-class Euclidean distances; an LVQ utilizing the novel training paradigms and weight-offset formulas; and finally, methods of exchanging data between the two programs as well as via serial port. The following sections describe the general flow of data between the two programs.

3.5.1 Normal Real-Time Use
Due to OpenViBE's existing implementation of drivers for the Emotiv Epoc headset, OV's Acquisition Server was used to gather data in real time from the device. Then, the string of programming in OV's UI channels the streamed data into various temporal and spatial filters (preprocessing); from here, the data is sent off to EEGLAB to undergo the LVQ compression process. Since there was no way of directly streaming the data between the two programs at the time of writing, OV simply uses a script to dump 16-byte files of streamed data within an allocated folder. A corresponding script in EEGLAB reads each of these files, which contain vital experimental information, in succession, deleting them as they are read by the program. The LVQ-rendered signal from EEGLAB then returns to OpenViBE through the same dumping-and-reading process. Because of this method of transferring data and the classification lag, the projected lag between signal input and classification is about 15 ms. Once the signal is read by OV again, it goes through the LDA classification process, which yields two values: the classification state and the distance from the data to the LDA hyperplane. Finally, these values are transferred via OV's built-in VRPN server to a VRPN client that is used to navigate the mouse cursor.
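The project's exchange is implemented as OV and EEGLAB scripts; purely to illustrate the dump-and-read hand-off, a C sketch of the consuming side might look as follows (the function name, polling behavior, and error handling are assumptions, not the project's actual code):

#include <stdio.h>

#define RECORD_BYTES 16              /* size of each dumped file */

/* Read one dumped record, then delete the file so it is consumed
   exactly once by the receiving program. */
int consumeRecord(const char *path, unsigned char buf[RECORD_BYTES])
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;                        /* not written yet: keep polling */
    size_t got = fread(buf, 1, RECORD_BYTES, f);
    fclose(f);
    if (got < RECORD_BYTES) return 0;        /* partial write: try again later */
    remove(path);                            /* delete after a successful read */
    return 1;
}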

3.5.2 Training Sequence
During the training sequence, the flow of data through the program is significantly different; since the LVQ has not yet been set up or trained, the data never reaches the EEGLAB processing portion of the system. First, the source of noise is collected in order to train the LVQ: the user is requested to remain calm, still, and simply act normally. At this point, the signal only goes through preprocessing (temporal/spatial filtering). Then, the user is asked to simulate a series of possible artifacts, such as muscular artifacts from clenching the jaw or blinking. After a series of successive trials, the rest signals and similar artifacts are averaged to provide representative signals. Then, the rest signal is convolved with the artifacted signal to provide information about the noise only; this noise is then convolved with a sample source of clean, generated data to initiate the training sequence of the LVQ. Following this, the formulas used to calculate the neuronal weight offset, as described in the What's New section, are applied, and the training process runs until the end of the source. A flowchart showing this process:

3.5.3 Experimental Variables
The experimental procedure consists of two portions: simulation testing using prerecorded EEG datasets from various institutions, namely the BCI Competition (see Sources), and real-time testing with the researcher as the subject.

                    Software simulation              Real-time testing
    Control         PCA-based system                 Proprietary software suite
    Manipulated     Custom system                    Custom system
                    (LDA/LVQ-based processing)       (LDA/LVQ-based processing)

As seen above, there are two different controls; this is due to the fact that the PCA-based system returns several errors when used in conjunction with the built-in Epoc driver on the Acquisition Server.


3.6 Methods
For the sake of simplicity, the following section is divided into two parts: Software Synthesis & Observation, and Simulation. Software Synthesis summarizes the creation of the software, as well as several details and justifications that were not mentioned earlier.

Software Synthesis & Observation
1. Start OpenViBE Designer, found in C:\xxxx\Program Files\openvibe\.
2. To begin a processing scenario, a source of data is required. This can be accomplished through the use of the Acquisition Client, the GDF File Reader, or the Generic Stream Reader module. The Acquisition Client is meant for use with real-time online processing and the Generic Stream Reader only works with OpenViBE (.ov) files, so the CSV File Reader will be used for now.

3. Select the source file to be read by double-clicking on the box to open its attributes and clicking the Browse button. In this project, the source of EEG data was found in the public domain (cited at the end of this paper).

4. Next, an Identity module is used in order to mix the streams of data from different sources into a single output signal. In addition, the Identity box also stores redundant experiment information for different sources so that a complete file can be written for later data retention. Connect the output signal stream from the Generic Stream Reader to the Identity box. In order to initialize the visualization in the latter part of this program, an XML stimulation scenario player is required, and is provided with OpenViBE. Open its attributes and browse to select the stimulation scenario, also provided with OpenViBE. Tie its output stimulations to the Identity box as well.

5. A reference channel is needed in order to receive any data, because this serves as the ground (negative) for the electric potentials generated on the scalp. Therefore, drag it into the scenario window and tie its input to Identity's output. Open its attributes and select the appropriate channel (corresponding to the source of EEG data). In this case, channel 2 was used, which is the Nz channel, the electrode that rests on the bridge of the nose. As explained earlier, the reference, or base, channel is usually located on the bridge of the nose or on the skull behind the earlobes.
6. In order to reduce the required processing power and gather data from only the sensorimotor cortex, a channel selector is needed. Drag it into the window and tie its input to the reference channel. Open the attributes and select the channels to be used. Any channels that correspond with electrode placement over the sensorimotor cortex of the brain can be used. However, for simplicity's sake, all channels except the reference channel were selected. The scenario should now consist of what is seen in Fig. 1.1.

7. A spatial filter is now required in order to mix the channels into a number smaller than was given in the input. For this purpose, its function is to arrange the data and compact it from ten channels down to two, so that later processing and feedback become much less complicated to achieve. Additionally, it implements a filter called a surface Laplacian filter that improves the spatial resolution of the signal; in other words, it removes noise relative to the reference channel. The mixing of channels occurs using this equation:

$a_k = \sum_j S_{jk}\, b_j$

where $a_k$ is the kth output channel, $b_j$ is the jth input channel, and $S_{jk}$ is the coefficient for the jth input channel and kth output channel in the spatial filter matrix. Drag the spatial filter into the window and tie its input to the output of the channel selector module. Change its attributes so that they match what is seen in Fig. 1.2. Although any number of output channels less than ten is possible, the processing power requirement increases greatly; also, a signal average module will be implemented later, defeating the purpose of an increased number of channels. On a side note, portability decreases as required processing power increases, so this module is somewhat vital to this program. A sketch of this channel mixing appears below.
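As an illustration of the mixing equation, a minimal C sketch (ten input channels reduced to two outputs; the coefficient values would come from the surface Laplacian matrix, which is not reproduced here):

#define IN_CH  10
#define OUT_CH 2

/* Mix input channels into output channels: a_k = sum_j S[j][k] * b[j]. */
void spatialFilter(const float b[IN_CH],
                   const float S[IN_CH][OUT_CH],
                   float a[OUT_CH])
{
    for (int k = 0; k < OUT_CH; k++) {
        a[k] = 0.0f;
        for (int j = 0; j < IN_CH; j++) {
            a[k] += S[j][k] * b[j];     /* weighted sum over input channels */
        }
    }
}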
8. Next, to parse the data of any unwanted artifacts, such as blinks and muscle twitches, a frequency filter needs to be used to discard high spikes in the signal caused by false data. Also, since this project focuses on motor control rather than other functions of the brain, the data needs to be limited to the Beta band, which is associated with working functions of the brain, namely the motor cortex. This frequency filter is known in OpenViBE as a Temporal Filter, because frequency is, in fact, based on time. Drag it into the scenario window and tie its input to the output of the spatial filter module. Change its attributes so that they match those in Fig. 1.3.
9. In EEG signals, many ripples could signify an event, but it is far easier and faster to analyze a general trend (line of best fit) than the raw amplitude and frequency. Therefore, the incoming signal needs to be split into time-based chunks so that a relative curve can be realized. This process, in signal processing software, is called epoching. OpenViBE contains two different modules for epoching: Time Based Epoching and Stimulation Based Epoching. Since the immediate goal is filtering and not feedback, time based epoching is used. Stimulation based epoching is triggered by an external stimulus, such as a key press, and serves a different purpose whose explanation is outside the scope of this paper. The difference between a signal with epoching and a normal signal is shown in Fig. 1.4. Drag the Time Based Epoching module into the scenario window, and configure its attributes so they match those seen in Fig. 1.5. A sketch of fixed-interval epoching appears below.
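To make the chunking concrete, here is a minimal C sketch of fixed-interval epoching, summarizing each chunk by its mean (the epoch length and the mean summary are illustrative simplifications of what the OpenViBE module does):

#define EPOCH_SAMPLES 64          /* samples per epoch (illustrative) */

/* Split a signal into fixed-length epochs and summarize each one by its
   mean, yielding the general trend described above. Returns the number
   of epochs produced. */
int epochMeans(const float signal[], int n, float means[], int maxEpochs)
{
    int count = 0;
    for (int start = 0; start + EPOCH_SAMPLES <= n && count < maxEpochs;
         start += EPOCH_SAMPLES) {
        float sum = 0.0f;
        for (int i = 0; i < EPOCH_SAMPLES; i++)
            sum += signal[start + i];
        means[count++] = sum / EPOCH_SAMPLES;
    }
    return count;
}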

10.

In order to smooth the output signal further, averages of the epochs are taken, turning the visualization graph from a choppy stream into a smooth, sine-like curve. OpenViBE includes a module to accomplish this, called Epoch Average. Drag it into the scenario window and modify its attributes. Testing on this specific box could not be carried out because of the complexity of the software at this point, so its values were left at the defaults. However, its default connectors carry streamed matrices; this is acceptable for visualization modules (such as a signal display), but the output cannot be connected to modules that require a signal-type input, so the input and output need to be configured to the signal type rather than the streamed-matrix-of-vectors type. This is accomplished by right-clicking the module and configuring the outputs to the signal type in the dialog box that appears, as seen in Fig. 1.6; doing so changes both the input and the output signal type. Open the attributes and change the averaging type to moving epoch average, if this was not already done; this type of averaging is necessary because the signal is not stationary. Tie its input to the output of the Time Based Epoching module.
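A moving epoch average can be sketched as follows; the four-epoch window is an assumption for illustration, not the OpenViBE default.

N = 4;                                        % assumed window: average the last 4 epochs
avgEpochs = zeros(size(epochs));
for k = 1:size(epochs, 2)
    lo = max(1, k-N+1);                       % window start (shorter at the beginning)
    avgEpochs(:, k) = mean(epochs(:, lo:k), 2);  % sample-wise mean across recent epochs
end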

11.

Next, to show a greater differentiation between two different patterns, the signal is squared. This has the effect of making the space between epoch points much clearer than before, as seen in Fig. 1.7. It also has the added benefit of making the streamed signal positive, paving the way for the later equations required to process the signal (since a logarithm of the signal will be taken later, a negative signal could produce an error message or a generally flat signal). There is a generic module in OpenViBE for passing the signal through equations, called Simple DSP. Drag this into the scenario window and, after opening the attributes, enter the squaring equation, F(x) = x*x.

12.

To take an average of everything done so far in terms of processing the signal, drag the Signal Average module into the window and tie its input to the output of the Simple DSP module. This is advantageous because it allows for still greater differentiation between behavioral patterns in the streamed signal and a clearer view of EEG synchrony.

13.

Using another Simple DSP module, the logarithm of the signal is taken. Drag the Simple DSP module into the window, open its attributes, and enter the equation F(x) = log(x+1).
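Taken together, steps 11 through 13 compute a smoothed logarithmic band-power feature. A minimal MATLAB sketch, reusing the placeholder xBeta from above (the 64-sample averaging window is an assumption):

p = xBeta.^2;               % step 11: squaring makes the signal positive
win = ones(64, 1)/64;       % assumed 64-sample moving-average window
pAvg = filter(win, 1, p);   % step 12: running average of the squared signal
feat = log(1 + pAvg);       % step 13: the logarithm compresses the dynamic range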

14.

Now, the matrix stream coming from the Simple DSP box, now a signal, must be converted into feature vectors so that classification can occur. The Feature Aggregator module does this for us.

15.

Implement the CSV File Writer here, using the custom timing script that allows file dumping. [The EEGLAB LVQ processing is not described here, since EEGLAB is only a GUI; for further details, see the attached computer code.]

16.

In the same scenario, implement another CSV File Reader that allows reading of the EEGLAB-dumped files; the experimental information is also brought in through this module.

17.

Here, LDA is instituted. The feature vectors from the previous module allow for easy classification of the streamed signal through the use of a module that compares incoming vectors against a previously constituted user configuration file. This configuration file, stored in the source of the designing platform (OpenViBE), consists of a recording of vectors over a set period of time, creating a control group, as it were. Such a module, called the Classifier Processor in OpenViBE, uses Linear Discriminant Analysis (LDA) as a simple yet efficient method of comparing and discriminating patterns against the user configuration. LDA operates by looking for specific features between the vectors through analysis of class probabilities, assuming that the probability that the signal takes a certain value at any point in time is normally distributed, as seen in Fig. 1.8. In addition, it is also assumed that the covariances of the two classes are equal. From here, Bayes' theorem shows that:

$$\frac{P(i \mid x)}{P(j \mid x)} = \frac{P(x \mid i)\,P(i)}{P(x \mid j)\,P(j)}$$

where $P(i \mid x)$ is the probability that object x belongs to group i (and likewise for j). From here, the LDA formula is derived: under the equal-covariance Gaussian assumption, the logarithm of this ratio is linear in x,

$$\log\frac{P(i \mid x)}{P(j \mid x)} = x^{\top}\Sigma^{-1}(\mu_i - \mu_j) - \tfrac{1}{2}(\mu_i + \mu_j)^{\top}\Sigma^{-1}(\mu_i - \mu_j) + \log\frac{P(i)}{P(j)}$$

where $\mu_i$ and $\mu_j$ are the class mean vectors and $\Sigma$ is the shared covariance matrix; x is assigned to class i whenever this quantity is positive.

The Classifier Processor module in OpenViBE makes use of LDA to recognize patterns in the incoming signal. Drag the module into the scenario window, and load the previously created configuration file into the attributes. Set the class labels and the classifier as shown in Fig. 1.9 .
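For intuition, here is a minimal two-class LDA sketch in MATLAB. The random feature vectors stand in for the recorded configuration data, and equal priors and a pooled covariance are assumed, matching the derivation above.

Xi = randn(50, 2) + 1;                         % class i feature vectors (placeholder)
Xj = randn(50, 2) - 1;                         % class j feature vectors (placeholder)
mui = mean(Xi)'; muj = mean(Xj)';
Sigma = cov([Xi - mean(Xi); Xj - mean(Xj)]);   % pooled (shared) covariance estimate
w = Sigma \ (mui - muj);                       % discriminant direction
b = -0.5*(mui + muj)'*w;                       % decision threshold (equal priors)
x = [0.8; 0.3];                                % an incoming feature vector
if x'*w + b > 0, disp('class i'), else, disp('class j'), end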
18.

Finally, the visualization process begins. First, instantiate the XML Stimulation Scenario Player and tie its output to the stimulation input on the identity module implemented at the beginning of the scenario. This module provides cues by means of left or right arrows and stimulates the last module in the scenario to provide visual feedback. The timing and animation of the cues are provided in the file C:\xxxx\openvibe\share\openvibe-plugins\stimulation\graz_stimulation.xml, courtesy of an earlier experiment conducted by the Graz University of Technology. Lastly, a Graz Visualization module, native to OpenViBE, is used at the end of the scenario; it is stimulated by the XML Stimulation Scenario Player module and receives signals in the form of a streamed matrix from the Classifier Processor module. See Fig. 1.10 for the resulting scenario.
19.

To run the scenario, click the play button located at the top of the scenario window. Owing to the way the earlier Graz University experiment was set up, the Graz Visualization window starts the feedback process at 00:40. The threshold for determining success or failure is the end of the visible x-axis line in either direction.

20.

Perform training as prescribed earlier, and then run the datasets through the trained scenario. Each of the 26 motor-imagery datasets (containing specific experimental information, including goals, event times, etc.) was run 4 times, for a total of 104 trials in simulation. Each dataset was restricted to 15 events, for a total of 1,560 candidate events during the trial process.
21.

Set up the PCA-based processing scenario and repeat the experimentation process.

22.

Later, the VRPN server is employed to communicate with the module and move the cursor.

Fig. 1.1

Fig. 1.2


Fig. 1.3

Fig. 1.4

Fig. 1.5

Fig. 1.6


Fig. 1.7

Fig. 1.8

Fig. 1.9


Fig. 1.10

Simulation
The entire procedure is simulated in MATLAB, following the methods described for usage in OpenViBE. The simulation illustrates each of these procedures individually. The code for the simulation is as follows.

Initiating execution:

function executebci
S.fh = figure('units','pixels',...
'position',[800 500 230 100],...
'menubar','none',...
'numbertitle','off',...
'name','BCI',...
'resize','off','color','w');
S.pb(1) = uicontrol('style','push',...
'units','pixels',...
'position',[10 10 100 30],...
'fontsize',14,...
'string','Training');
S.pb(2) = uicontrol('style','push',...
'units','pixels',...
'position',[120 10 100 30],...
'fontsize',14,...
'string','Simulation');
S.txt1 = uicontrol('style','text',...
'unit','pix',...
'position',[30 70 170 21],...
'string','BCI Simulation Project',...
'backgroundcolor','w',...
'fontsize',12);
set(S.fh,'CloseRequestFcn',{@winclose});
set(S.pb(:),'callback',{@pb_call,S})
csvwrite('flagbits.txt',[0 0 0 0 0 0 0 0 0 0]);
function [] = pb_call(varargin)
if varargin{1}==S.pb(2)
eval('bci_simulation');
elseif varargin{1}==S.pb(1)
eval('bci_training');
end
end
function [] = winclose(varargin)
delete('flagbits.txt');
delete(S.fh);
end
end

Training module:

function []= bci_training flag.input=0; flag.epoch=0; Y_OFFSET=-50;



scr_size=get(0,'screensize'); S.fh = figure('units','pixels',... 'position',scr_size,... 'menubar','none',... 'name','BCI-Training',... 'numbertitle','off',... 'resize','off','color',[0.6 0.8 0.8]); S.dspls = uicontrol('style','list',... 'unit','pix',... 'position',[30 Y_OFFSET+150 180 180],... 'min',0,'max',2,... 'fontsize',14,... 'string',{'Quadratic';'Logarithmic'});

S.dsptx = uicontrol('style','tex',... 'unit','pix',... 'position',[30 Y_OFFSET+350 40 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','DSP'); S.fltls = uicontrol('style','list',... 'unit','pix',... 'position',[270 Y_OFFSET+150 180 180],... 'min',0,'max',2,... 'fontsize',14,... 'string',{'Spatial Filtering'; 'Temporal Filtering'});

S.flttx = uicontrol('style','tex',... 'unit','pix',... 'position',[270 Y_OFFSET+350 80 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',...



'string','Filtering');

S.clsls = uicontrol('style','list',... 'unit','pix',... 'position',[720 Y_OFFSET+150 180 180],... 'min',0,'max',2,... 'fontsize',14,... 'string',{'LVQ'});

S.clstx = uicontrol('style','tex',... 'unit','pix',... 'position',[720 Y_OFFSET+350 110 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Classification'); S.eptx = uicontrol('style','tex',... 'unit','pix',... 'position',[470 Y_OFFSET+340 80 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Epoching'); S.epinput = uicontrol('style','edit',... 'units','pix',... 'position',[470 Y_OFFSET+300 190 30],...%'min',0,'max',2,... % This is the key to multiline edits. 'string','',...0 'fontweight','bold',...%'horizontalalign','left',... 'fontsize',11);

S.eppb = uicontrol('style','push',... 'units','pix',... 'position',[600 Y_OFFSET+275 60 20],...



'fontsize',10,... 'string','Get'); S.avgpb = uicontrol('style','push',... 'units','pix',... 'position',[520 Y_OFFSET+200 150 30],... 'fontsize',10,... 'string','Averaging');

BGCOLOR = get(gcf, 'color');

S.frame1= uicontrol('Units','points', ... 'BackgroundColor',BGCOLOR, ... 'ListboxTop',0, ... 'HorizontalAlignment', 'left',... 'Position',[400 Y_OFFSET+300 600 250], ... 'Style','frame', ... 'Tag','Frame1'); S.bcitext = uicontrol('style','tex',... 'unit','pix',... 'position',[400 Y_OFFSET+730 510 50],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',15,'fontweight','bold',... 'string','Brain Computer Interface Training Module '); S.inputtx = uicontrol('style','tex',... 'unit','pix',... 'position',[10 Y_OFFSET+700 110 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Input : '); S.inputed = uicontrol('style','edit',... 'units','pix',...

'position',[100 Y_OFFSET+700 190 30],...%'min',0,'max',2,... % This is the key to multiline edits. 'string','',... 'fontweight','bold',...%'horizontalalign','left',... 'fontsize',11);

S.inputpb = uicontrol('style','push',... 'units','pix',... 'position',[300 Y_OFFSET+700 150 30],... 'fontsize',14,... 'string','Get'); S.indata = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[10 Y_OFFSET+430 510 200],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Load Data ');

S.bg = uibuttongroup('units','pix',... 'pos',[190 Y_OFFSET+650 260 40]); S.rd(1) = uicontrol(S.bg,... 'style','rad',... 'unit','pix',... 'position',[10 5 70 30],... 'string','Training 1'); S.rd(2) = uicontrol(S.bg,... 'style','rad',... 'unit','pix',... 'position',[90 5 70 30],... 'string','Training 2'); S.rd(3) = uicontrol(S.bg,... 'style','rad',...

'unit','pix',... 'position',[170 5 70 30],... 'string','Training 3'); framepos = get(S.frame1,'position'); framexoffset=framepos(1)+150; frameyoffset=framepos(2)+80;

S.ftext(1) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+10 frameyoffset+300 300 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Tracking the process... '); S.ftext(2) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+10 frameyoffset+210 300 80],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','DSP : none '); S.ftext(3) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+10 frameyoffset+130 300 80],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Filtering : none '); S.ftext(4) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+10 frameyoffset+50 300 80],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Epoching : none ');

S.ftext(5) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+10 frameyoffset+30 300 20],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Averaging : none '); S.ftext(6) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+320 frameyoffset+210 400 100],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string','Classification :'); S.ftext(7) = uicontrol('style','tex','horizontalalign','left',... 'unit','pix',... 'position',[framexoffset+320 frameyoffset+110 400 100],... 'backgroundcolor',get(S.fh,'color'),... 'fontsize',12,'fontweight','bold',... 'string',''); set(S.inputpb,'callback',{@inputpb_call,S,flag}); end

function [] = inputpb_call(varargin) [S,flag]=varargin{[3,4]}; ipstr=get(S.inputed,'string'); if isempty(ipstr) msgbox('Enter valid Input'); elseif strcmp(ipstr,'Subject1_2D.mat') load(ipstr); D.lb1(:,1)=LeftBackward1(:,5);D.lb1(:,2)=LeftBackward1(:,6);D.lb1(:,3)=LeftBackward1(:,13); D.lb1(:,4)=LeftBackward1(:,18); D.lb2(:,1)=LeftBackward2(:,5);D.lb2(:,2)=LeftBackward2(:,6);D.lb2(:,3)=LeftBackward2(:,13); D.lb2(:,4)=LeftBackward2(:,18);

D.lb3(:,1)=LeftBackward3(:,5);D.lb3(:,2)=LeftBackward3(:,6);D.lb3(:,3)=LeftBackward3(:,13); D.lb3(:,4)=LeftBackward3(:,18); D.lf1(:,1)=LeftForward1(:,5);D.lf1(:,2)=LeftForward1(:,6);D.lf1(:,3)=LeftForward1(:,13);D.lf1(: ,4)=LeftForward1(:,18); D.lf2(:,1)=LeftForward2(:,5);D.lf2(:,2)=LeftForward2(:,6);D.lf2(:,3)=LeftForward2(:,13);D.lf2(: ,4)=LeftForward2(:,18); D.lf3(:,1)=LeftForward3(:,5);D.lf3(:,2)=LeftForward3(:,6);D.lf3(:,3)=LeftForward3(:,13);D.lf3(: ,4)=LeftForward3(:,18); D.rb1(:,1)=RightBackward1(:,5);D.rb1(:,2)=RightBackward1(:,6);D.rb1(:,3)=RightBackward1(:, 13);D.rb1(:,4)=RightBackward1(:,18); D.rb2(:,1)=RightBackward2(:,5);D.rb2(:,2)=RightBackward2(:,6);D.rb2(:,3)=RightBackward2(:, 13);D.rb2(:,4)=RightBackward2(:,18); D.rb3(:,1)=RightBackward3(:,5);D.rb3(:,2)=RightBackward3(:,6);D.rb3(:,3)=RightBackward3(:, 13);D.rb3(:,4)=RightBackward3(:,18); D.rf1(:,1)=RightForward1(:,5);D.rf1(:,2)=RightForward1(:,6);D.rf1(:,3)=RightForward1(:,13);D. rf1(:,4)=RightForward1(:,18); D.rf2(:,1)=RightForward2(:,5);D.rf2(:,2)=RightForward2(:,6);D.rf2(:,3)=RightForward2(:,13);D. rf2(:,4)=RightForward2(:,18); D.rf3(:,1)=RightForward3(:,5);D.rf3(:,2)=RightForward3(:,6);D.rf3(:,3)=RightForward3(:,13);D. rf3(:,4)=RightForward3(:,18); set(S.dspls,'callback',{@dsppb_call,S,flag,D}); set(S.fltls,'callback',{@fltpb_call,S,flag,D}); set(S.indata,'string',strrep(readme,' For further information, please contact midhatali@gmail.com .',' '),'fontsize',9); end end function [] = dsppb_call(varargin) S=varargin{3}; D=varargin{5}; L=get(S.dspls,{'string','value'}); switch L{2} case 1

switch findobj(get(S.bg,'selectedobject')) case S.rd(1) sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....'); pause(0.1); for j = 1 : sizelb1(1) DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i); DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i); DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i); DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i); end end %msgbox('training 1');

case S.rd(2) sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....'); pause(0.1); for j = 1 : sizelb1(1) DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i); DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i); DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i); DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i); DSP.lb2(j,i)=D.lb2(j,i)*D.lb2(j,i); DSP.lf2(j,i)=D.lf2(j,i)*D.lf2(j,i); DSP.rb2(j,i)=D.rb2(j,i)*D.rb2(j,i); DSP.rf2(j,i)=D.rf2(j,i)*D.rf2(j,i);

end end % msgbox('training 2'); case S.rd(3)



sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....'); pause(0.1); for j = 1 : sizelb1(1) DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i); DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i); DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i); DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i); DSP.lb2(j,i)=D.lb2(j,i)*D.lb2(j,i); DSP.lf2(j,i)=D.lf2(j,i)*D.lf2(j,i); DSP.rb2(j,i)=D.rb2(j,i)*D.rb2(j,i); DSP.rf2(j,i)=D.rf2(j,i)*D.rf2(j,i); DSP.lb3(j,i)=D.lb3(j,i)*D.lb3(j,i); DSP.lf3(j,i)=D.lf3(j,i)*D.lf3(j,i); DSP.rb3(j,i)=D.rb3(j,i)*D.rb3(j,i); DSP.rf3(j,i)=D.rf3(j,i)*D.rf3(j,i);

end end % msgbox('training 3'); otherwise disp('wrong option') % Very unlikely I think. end set(S.ftext(2),'string',{'DSP : Done';'Function used : Quadratic'}); %msgbox('F(x)=x*x'); case 2 switch findobj(get(S.bg,'selectedobject')) case S.rd(1) sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....');

pause(0.1); for j = 1 : sizelb1(1) DSP.lb1(j,i)=log(D.lb1(j,i)+1); DSP.lf1(j,i)=log(D.lf1(j,i)+1); DSP.rb1(j,i)=log(D.rb1(j,i)+1); DSP.rf1(j,i)=log(D.rf1(j,i)+1); end end %msgbox('training 1');

case S.rd(2) sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....'); pause(0.1); for j = 1 : sizelb1(1) DSP.lb1(j,i)=log(D.lb1(j,i)+1); DSP.lf1(j,i)=log(D.lf1(j,i)+1); DSP.rb1(j,i)=log(D.rb1(j,i)+1); DSP.rf1(j,i)=log(D.rf1(j,i)+1); DSP.lb2(j,i)=log(D.lb2(j,i)+1); DSP.lf2(j,i)=log(D.lf2(j,i)+1); DSP.rb2(j,i)=log(D.rb2(j,i)+1); DSP.rf2(j,i)=log(D.rf2(j,i)+1);

end end % msgbox('training 2'); case S.rd(3) sizelb1=size(D.lb1); for i= 1 : sizelb1(2) set(S.ftext(2),'string','DSP : Applying.....'); pause(0.1); for j = 1 : sizelb1(1)

DSP.lb1(j,i)=log(D.lb1(j,i)+1); DSP.lf1(j,i)=log(D.lf1(j,i)+1); DSP.rb1(j,i)=log(D.rb1(j,i)+1); DSP.rf1(j,i)=log(D.rf1(j,i)+1); DSP.lb2(j,i)=log(D.lb2(j,i)+1); DSP.lf2(j,i)=log(D.lf2(j,i)+1); DSP.rb2(j,i)=log(D.rb2(j,i)+1); DSP.rf2(j,i)=log(D.rf2(j,i)+1); DSP.lb3(j,i)=log(D.lb3(j,i)+1); DSP.lf3(j,i)=log(D.lf3(j,i)+1); DSP.rb3(j,i)=log(D.rb3(j,i)+1); DSP.rf3(j,i)=log(D.rf3(j,i)+1);

end end % msgbox('training 3'); otherwise disp('wrong option') % Very unlikely I think. end

set(S.ftext(2),'string',{'DSP : Done';'Function used : Logarithmic'}); %msgbox('F(x)=log(x+1)'); otherwise msgbox('select correct option'); end set(S.fltls,'callback',{@fltpb_call,S,flag,D,DSP}); assignin('base','dsp',DSP); end function []= fltpb_call(varargin) S=varargin{3}; flag=varargin{4}; D=varargin{5};

DSP=varargin{6}; CO.val=[0.1 0.2 0.3 0.4]; L=get(S.fltls,{'string','value'}); switch L{2} case 1 sizeofdata=size(DSP.lb1); switch findobj(get(S.bg,'selectedobject')) case S.rd(1) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))); FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))); FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))); FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))); end

end case S.rd(2) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))); FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4)));

FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))); FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))); FLT.lb2(j,k)= (1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))); FLT.lf2(j,k)= (1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+... (3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4))); FLT.rb2(j,k)= (1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+... (3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4))); FLT.rf2(j,k)= (1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+... (3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4)));

end

end case S.rd(3) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))); FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))); FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))); FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))); FLT.lb2(j,k)= (1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))); FLT.lf2(j,k)= (1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+...

(3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4))); FLT.rb2(j,k)= (1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+... (3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4))); FLT.rf2(j,k)= (1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+... (3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4))); FLT.lb3(j,k)= (1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))); FLT.lf3(j,k)= (1*(CO.val(k)*DSP.lf3(j,1)))+(2*(CO.val(k)*DSP.lf3(j,2)))+... (3*(CO.val(k)*DSP.lf3(j,3)))+(4*(CO.val(k)*DSP.lf3(j,4))); FLT.rb3(j,k)= (1*(CO.val(k)*DSP.rb3(j,1)))+(2*(CO.val(k)*DSP.rb3(j,2)))+... (3*(CO.val(k)*DSP.rb3(j,3)))+(4*(CO.val(k)*DSP.rb3(j,4))); FLT.rf3(j,k)= (1*(CO.val(k)*DSP.rf3(j,1)))+(2*(CO.val(k)*DSP.rf3(j,2)))+... (3*(CO.val(k)*DSP.rf3(j,3)))+(4*(CO.val(k)*DSP.rf3(j,4)));

end

end end set(S.ftext(3),'string',{'Filtering : Done';'Function used : Spatial filtering'}); %msgbox('spatial filtering'); case 2 sizeofdata=size(DSP.lb1); switch findobj(get(S.bg,'selectedobject')) case S.rd(1) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= ((1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))))/4; FLT.lf1(j,k)= ((1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))))/4;

FLT.rb1(j,k)= ((1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))))/4; FLT.rf1(j,k)= ((1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))))/4; end

end case S.rd(2) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= ((1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))))/4; FLT.lf1(j,k)= ((1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))))/4; FLT.rb1(j,k)= ((1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))))/4; FLT.rf1(j,k)= ((1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))))/4; FLT.lb2(j,k)= ((1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))))/4; FLT.lf2(j,k)= ((1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+... (3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4))))/4; FLT.rb2(j,k)= ((1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+... (3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4))))/4; FLT.rf2(j,k)= ((1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+... (3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4))))/4;

end

end

case S.rd(3) for k = 1 : sizeofdata(2) set(S.ftext(3),'string','Filtering : Applying.....'); pause(0.1); for j = 1 : sizeofdata(1) FLT.lb1(j,k)= ((1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+... (3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))))/4; FLT.lf1(j,k)= ((1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+... (3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))))/4; FLT.rb1(j,k)= ((1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+... (3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))))/4; FLT.rf1(j,k)= ((1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+... (3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))))/4; FLT.lb2(j,k)= ((1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))))/4; FLT.lf2(j,k)= ((1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+... (3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4))))/4; FLT.rb2(j,k)= ((1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+... (3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4))))/4; FLT.rf2(j,k)= ((1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+... (3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4))))/4; FLT.lb3(j,k)= ((1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+... (3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4))))/4; FLT.lf3(j,k)= ((1*(CO.val(k)*DSP.lf3(j,1)))+(2*(CO.val(k)*DSP.lf3(j,2)))+... (3*(CO.val(k)*DSP.lf3(j,3)))+(4*(CO.val(k)*DSP.lf3(j,4))))/4; FLT.rb3(j,k)= ((1*(CO.val(k)*DSP.rb3(j,1)))+(2*(CO.val(k)*DSP.rb3(j,2)))+... (3*(CO.val(k)*DSP.rb3(j,3)))+(4*(CO.val(k)*DSP.rb3(j,4))))/4; FLT.rf3(j,k)= ((1*(CO.val(k)*DSP.rf3(j,1)))+(2*(CO.val(k)*DSP.rf3(j,2)))+... (3*(CO.val(k)*DSP.rf3(j,3)))+(4*(CO.val(k)*DSP.rf3(j,4))))/4;

end


end end set(S.ftext(3),'string',{'Filtering : Done';'Function used : Temporal filtering'}); %msgbox('temporal filtering'); otherwise msgbox('select correct option'); end set(S.eppb,'callback',{@eppb_call,S,flag,FLT}); assignin('base','flt',FLT); end function [] = eppb_call(varargin) S=varargin{3}; flag=varargin{4}; FLT=varargin{5}; ipstr=str2num(get(S.epinput,'string')); if isempty(ipstr) msgbox('Enter valid Input'); else sizeofdata=size(FLT.lb1); n = int16(sizeofdata(1)/ipstr); switch findobj(get(S.bg,'selectedobject')) case S.rd(1) for i = 1 : n set(S.ftext(4),'string','Epoching : Processing.....'); pause(0.1); for j = 1 : ipstr EPO.lb1(i,j,1) = FLT.lb1(((i-1)*ipstr)+j,1);EPO.lb1(i,j,2) = FLT.lb1(((i-1)*ipstr)+j,2); EPO.lb1(i,j,3) = FLT.lb1(((i-1)*ipstr)+j,3);EPO.lb1(i,j,4) = FLT.lb1(((i-1)*ipstr)+j,4); EPO.lf1(i,j,1) = FLT.lf1(((i-1)*ipstr)+j,1);EPO.lf1(i,j,2) = FLT.lf1(((i-1)*ipstr)+j,2); EPO.lf1(i,j,3) = FLT.lf1(((i-1)*ipstr)+j,3);EPO.lf1(i,j,4) = FLT.lf1(((i-1)*ipstr)+j,4); EPO.rb1(i,j,1) = FLT.rb1(((i-1)*ipstr)+j,1);EPO.rb1(i,j,2) = FLT.rb1(((i-1)*ipstr)+j,2); EPO.rb1(i,j,3) = FLT.rb1(((i-1)*ipstr)+j,3);EPO.rb1(i,j,4) = FLT.rb1(((i-1)*ipstr)+j,4);

EPO.rf1(i,j,1) = FLT.rf1(((i-1)*ipstr)+j,1);EPO.rf1(i,j,2) = FLT.rf1(((i-1)*ipstr)+j,2); EPO.rf1(i,j,3) = FLT.rf1(((i-1)*ipstr)+j,3);EPO.rf1(i,j,4) = FLT.rf1(((i-1)*ipstr)+j,4); end end case S.rd(2) for i = 1 : n set(S.ftext(4),'string','Epoching : Processing.....'); pause(0.1); for j = 1 : ipstr EPO.lb1(i,j,1) = FLT.lb1(((i-1)*ipstr)+j,1);EPO.lb1(i,j,2) = FLT.lb1(((i-1)*ipstr)+j,2); EPO.lb1(i,j,3) = FLT.lb1(((i-1)*ipstr)+j,3);EPO.lb1(i,j,4) = FLT.lb1(((i-1)*ipstr)+j,4); EPO.lf1(i,j,1) = FLT.lf1(((i-1)*ipstr)+j,1);EPO.lf1(i,j,2) = FLT.lf1(((i-1)*ipstr)+j,2); EPO.lf1(i,j,3) = FLT.lf1(((i-1)*ipstr)+j,3);EPO.lf1(i,j,4) = FLT.lf1(((i-1)*ipstr)+j,4); EPO.rb1(i,j,1) = FLT.rb1(((i-1)*ipstr)+j,1);EPO.rb1(i,j,2) = FLT.rb1(((i-1)*ipstr)+j,2); EPO.rb1(i,j,3) = FLT.rb1(((i-1)*ipstr)+j,3);EPO.rb1(i,j,4) = FLT.rb1(((i-1)*ipstr)+j,4); EPO.rf1(i,j,1) = FLT.rf1(((i-1)*ipstr)+j,1);EPO.rf1(i,j,2) = FLT.rf1(((i-1)*ipstr)+j,2); EPO.rf1(i,j,3) = FLT.rf1(((i-1)*ipstr)+j,3);EPO.rf1(i,j,4) = FLT.rf1(((i-1)*ipstr)+j,4); EPO.lb2(i,j,1) = FLT.lb2(((i-1)*ipstr)+j,1);EPO.lb2(i,j,2) = FLT.lb2(((i-1)*ipstr)+j,2); EPO.lb2(i,j,3) = FLT.lb2(((i-1)*ipstr)+j,3);EPO.lb2(i,j,4) = FLT.lb2(((i-1)*ipstr)+j,4); EPO.lf2(i,j,1) = FLT.lf2(((i-1)*ipstr)+j,1);EPO.lf2(i,j,2) = FLT.lf2(((i-1)*ipstr)+j,2); EPO.lf2(i,j,3) = FLT.lf2(((i-1)*ipstr)+j,3);EPO.lf2(i,j,4) = FLT.lf2(((i-1)*ipstr)+j,4); EPO.rb2(i,j,1) = FLT.rb2(((i-1)*ipstr)+j,1);EPO.rb2(i,j,2) = FLT.rb2(((i-1)*ipstr)+j,2); EPO.rb2(i,j,3) = FLT.rb2(((i-1)*ipstr)+j,3);EPO.rb2(i,j,4) = FLT.rb2(((i-1)*ipstr)+j,4); EPO.rf2(i,j,1) = FLT.rf2(((i-1)*ipstr)+j,1);EPO.rf2(i,j,2) = FLT.rf2(((i-1)*ipstr)+j,2); EPO.rf2(i,j,3) = FLT.rf2(((i-1)*ipstr)+j,3);EPO.rf2(i,j,4) = FLT.rf2(((i-1)*ipstr)+j,4);

end end

case S.rd(3) for i = 1 : n set(S.ftext(4),'string','Epoching : Processing.....');



pause(0.1); for j = 1 : ipstr EPO.lb1(i,j,1) = FLT.lb1(((i-1)*ipstr)+j,1);EPO.lb1(i,j,2) = FLT.lb1(((i-1)*ipstr)+j,2); EPO.lb1(i,j,3) = FLT.lb1(((i-1)*ipstr)+j,3);EPO.lb1(i,j,4) = FLT.lb1(((i-1)*ipstr)+j,4); EPO.lf1(i,j,1) = FLT.lf1(((i-1)*ipstr)+j,1);EPO.lf1(i,j,2) = FLT.lf1(((i-1)*ipstr)+j,2); EPO.lf1(i,j,3) = FLT.lf1(((i-1)*ipstr)+j,3);EPO.lf1(i,j,4) = FLT.lf1(((i-1)*ipstr)+j,4); EPO.rb1(i,j,1) = FLT.rb1(((i-1)*ipstr)+j,1);EPO.rb1(i,j,2) = FLT.rb1(((i-1)*ipstr)+j,2); EPO.rb1(i,j,3) = FLT.rb1(((i-1)*ipstr)+j,3);EPO.rb1(i,j,4) = FLT.rb1(((i-1)*ipstr)+j,4); EPO.rf1(i,j,1) = FLT.rf1(((i-1)*ipstr)+j,1);EPO.rf1(i,j,2) = FLT.rf1(((i-1)*ipstr)+j,2); EPO.rf1(i,j,3) = FLT.rf1(((i-1)*ipstr)+j,3);EPO.rf1(i,j,4) = FLT.rf1(((i-1)*ipstr)+j,4); EPO.lb2(i,j,1) = FLT.lb2(((i-1)*ipstr)+j,1);EPO.lb2(i,j,2) = FLT.lb2(((i-1)*ipstr)+j,2); EPO.lb2(i,j,3) = FLT.lb2(((i-1)*ipstr)+j,3);EPO.lb2(i,j,4) = FLT.lb2(((i-1)*ipstr)+j,4); EPO.lf2(i,j,1) = FLT.lf2(((i-1)*ipstr)+j,1);EPO.lf2(i,j,2) = FLT.lf2(((i-1)*ipstr)+j,2); EPO.lf2(i,j,3) = FLT.lf2(((i-1)*ipstr)+j,3);EPO.lf2(i,j,4) = FLT.lf2(((i-1)*ipstr)+j,4); EPO.rb2(i,j,1) = FLT.rb2(((i-1)*ipstr)+j,1);EPO.rb2(i,j,2) = FLT.rb2(((i-1)*ipstr)+j,2); EPO.rb2(i,j,3) = FLT.rb2(((i-1)*ipstr)+j,3);EPO.rb2(i,j,4) = FLT.rb2(((i-1)*ipstr)+j,4); EPO.rf2(i,j,1) = FLT.rf2(((i-1)*ipstr)+j,1);EPO.rf2(i,j,2) = FLT.rf2(((i-1)*ipstr)+j,2); EPO.rf2(i,j,3) = FLT.rf2(((i-1)*ipstr)+j,3);EPO.rf2(i,j,4) = FLT.rf2(((i-1)*ipstr)+j,4); EPO.lb3(i,j,1) = FLT.lb3(((i-1)*ipstr)+j,1);EPO.lb3(i,j,2) = FLT.lb3(((i-1)*ipstr)+j,2); EPO.lb3(i,j,3) = FLT.lb3(((i-1)*ipstr)+j,3);EPO.lb3(i,j,4) = FLT.lb3(((i-1)*ipstr)+j,4); EPO.lf3(i,j,1) = FLT.lf3(((i-1)*ipstr)+j,1);EPO.lf3(i,j,2) = FLT.lf3(((i-1)*ipstr)+j,2); EPO.lf3(i,j,3) = FLT.lf3(((i-1)*ipstr)+j,3);EPO.lf3(i,j,4) = FLT.lf3(((i-1)*ipstr)+j,4); EPO.rb3(i,j,1) = FLT.rb3(((i-1)*ipstr)+j,1);EPO.rb3(i,j,2) = FLT.rb3(((i-1)*ipstr)+j,2); EPO.rb3(i,j,3) = FLT.rb3(((i-1)*ipstr)+j,3);EPO.rb3(i,j,4) = FLT.rb3(((i-1)*ipstr)+j,4); EPO.rf3(i,j,1) = FLT.rf3(((i-1)*ipstr)+j,1);EPO.rf3(i,j,2) = FLT.rf3(((i-1)*ipstr)+j,2); EPO.rf3(i,j,3) = FLT.rf3(((i-1)*ipstr)+j,3);EPO.rf3(i,j,4) = FLT.rf3(((i-1)*ipstr)+j,4);

end end end set(S.ftext(4),'string',{'Epoching : Done';['Sampling Rate : ' num2str(ipstr)]}); end



set(S.avgpb,'callback',{@avgpb_call,S,flag,EPO}); assignin('base','epo',EPO); end function [] = avgpb_call(varargin) S=varargin{3}; flag=varargin{4}; EPO=varargin{5}; sizeofdata=size(EPO.lb1); flagbit=[0 1 0 1 1 0 1 0 0 1]; switch findobj(get(S.bg,'selectedobject')) case S.rd(1)

for i = 1 : sizeofdata(1) set(S.ftext(5),'string','Averaging : Processing... '); pause(0.01); for j = 1 : sizeofdata(2) AG.lb1(i,j) = (EPO.lb1(i,j,1)+EPO.lb1(i,j,2)+EPO.lb1(i,j,3)+EPO.lb1(i,j,4))/4; AG.lf1(i,j) = (EPO.lf1(i,j,1)+EPO.lf1(i,j,2)+EPO.lf1(i,j,3)+EPO.lf1(i,j,4))/4; AG.rb1(i,j) = (EPO.rb1(i,j,1)+EPO.rb1(i,j,2)+EPO.rb1(i,j,3)+EPO.rb1(i,j,4))/4; AG.rf1(i,j) = (EPO.rf1(i,j,1)+EPO.rf1(i,j,2)+EPO.rf1(i,j,3)+EPO.rf1(i,j,4))/4; end end case S.rd(2) for i = 1 : sizeofdata(1) set(S.ftext(5),'string','Averaging : Processing... '); pause(0.01); for j = 1 : sizeofdata(2) AG.lb1(i,j) = (EPO.lb1(i,j,1)+EPO.lb1(i,j,2)+EPO.lb1(i,j,3)+EPO.lb1(i,j,4))/4; AG.lf1(i,j) = (EPO.lf1(i,j,1)+EPO.lf1(i,j,2)+EPO.lf1(i,j,3)+EPO.lf1(i,j,4))/4; AG.rb1(i,j) = (EPO.rb1(i,j,1)+EPO.rb1(i,j,2)+EPO.rb1(i,j,3)+EPO.rb1(i,j,4))/4; AG.rf1(i,j) = (EPO.rf1(i,j,1)+EPO.rf1(i,j,2)+EPO.rf1(i,j,3)+EPO.rf1(i,j,4))/4; AG.lb2(i,j) = (EPO.lb2(i,j,1)+EPO.lb2(i,j,2)+EPO.lb2(i,j,3)+EPO.lb2(i,j,4))/4;

AG.lf2(i,j) = (EPO.lf2(i,j,1)+EPO.lf2(i,j,2)+EPO.lf2(i,j,3)+EPO.lf2(i,j,4))/4; AG.rb2(i,j) = (EPO.rb2(i,j,1)+EPO.rb2(i,j,2)+EPO.rb2(i,j,3)+EPO.rb2(i,j,4))/4; AG.rf2(i,j) = (EPO.rf2(i,j,1)+EPO.rf2(i,j,2)+EPO.rf2(i,j,3)+EPO.rf2(i,j,4))/4;

end end

case S.rd(3)

for i = 1 : sizeofdata(1) set(S.ftext(5),'string','Averaging : Processing... '); pause(0.01); for j = 1 : sizeofdata(2) AG.lb1(i,j) = (EPO.lb1(i,j,1)+EPO.lb1(i,j,2)+EPO.lb1(i,j,3)+EPO.lb1(i,j,4))/4; AG.lf1(i,j) = (EPO.lf1(i,j,1)+EPO.lf1(i,j,2)+EPO.lf1(i,j,3)+EPO.lf1(i,j,4))/4; AG.rb1(i,j) = (EPO.rb1(i,j,1)+EPO.rb1(i,j,2)+EPO.rb1(i,j,3)+EPO.rb1(i,j,4))/4; AG.rf1(i,j) = (EPO.rf1(i,j,1)+EPO.rf1(i,j,2)+EPO.rf1(i,j,3)+EPO.rf1(i,j,4))/4; AG.lb2(i,j) = (EPO.lb2(i,j,1)+EPO.lb2(i,j,2)+EPO.lb2(i,j,3)+EPO.lb2(i,j,4))/4; AG.lf2(i,j) = (EPO.lf2(i,j,1)+EPO.lf2(i,j,2)+EPO.lf2(i,j,3)+EPO.lf2(i,j,4))/4; AG.rb2(i,j) = (EPO.rb2(i,j,1)+EPO.rb2(i,j,2)+EPO.rb2(i,j,3)+EPO.rb2(i,j,4))/4; AG.rf2(i,j) = (EPO.rf2(i,j,1)+EPO.rf2(i,j,2)+EPO.rf2(i,j,3)+EPO.rf2(i,j,4))/4; AG.lb3(i,j) = (EPO.lb3(i,j,1)+EPO.lb3(i,j,2)+EPO.lb3(i,j,3)+EPO.lb3(i,j,4))/4; AG.lf3(i,j) = (EPO.lf3(i,j,1)+EPO.lf3(i,j,2)+EPO.lf3(i,j,3)+EPO.lf3(i,j,4))/4; AG.rb3(i,j) = (EPO.rb3(i,j,1)+EPO.rb3(i,j,2)+EPO.rb3(i,j,3)+EPO.rb3(i,j,4))/4; AG.rf3(i,j) = (EPO.rf3(i,j,1)+EPO.rf3(i,j,2)+EPO.rf3(i,j,3)+EPO.rf3(i,j,4))/4;

end end end set(S.ftext(5),'string','Averaging : Done ');

set(S.clsls,'callback',{@clsls_call,S,flag,AG});

assignin('base','ag',AG); end function [] = clsls_call(varargin) S=varargin{3}; flag=varargin{4}; AG=varargin{5}; sizeofdata=size(AG.lb1); flagbit=[0 1 0 1 1 0 1 0 0 1]; switch findobj(get(S.bg,'selectedobject')) case S.rd(1)

tvalue1=mean(AG.lb1,2); tvalue2=mean(AG.lf1,2); tvalue3=mean(AG.rb1,2); tvalue4=mean(AG.rf1,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb1(i,j)) > int32(tvalue1(i)) B.lb1(i,j) = 1; else B.lb1(i,j)=0; end if int32(AG.lf1(i,j)) > int32(tvalue2(i)) B.lf1(i,j) = 1; else B.lf1(i,j)=0; end if int32(AG.rb1(i,j)) > int32(tvalue3(i)) B.rb1(i,j) = 1; else B.rb1(i,j)=0; end

if int32(AG.rf1(i,j)) > int32(tvalue4(i)) B.rf1(i,j) = 1; else B.rf1(i,j)=0; end end end n=size(B.lb1); W.lb1=B.lb1(n(1),:); W.lf1=B.lf1(n(1),:); W.rb1=B.rb1(n(1),:); W.rf1=B.rf1(n(1),:); lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]});

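% LVQ training pass: each binarized chunk B.*(i,:) is compared against the
% class codebook vectors W.* by squared Euclidean distance. The winning
% (nearest) codebook vector is pulled toward the chunk,
%   W = W + lrate(li)*(B - W),
% otherwise it is pushed away, W = W - lrate(li)*(B - W). Each of the ten
% outer passes uses a different learning rate lrate(li).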
for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1 %left if sum((B.lb1(i,:)-W.lb1).^2) < sum((B.lb1(i,:)-W.lf1).^2) W.lb1 = W.lb1 + (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-y'); else W.lb1 = W.lb1 - (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-n'); end if sum((B.lf1(i,:)-W.lf1).^2) < sum((B.lf1(i,:)-W.lb1).^2) W.lf1 = W.lf1 + (lrate(li).*(B.lf1(i,:) - W.lf1));

%disp('lf-y'); else W.lf1 = W.lf1 - (lrate(li).*(B.lf1(i,:) - W.lf1)); %disp('lf-n'); end %right if sum((B.rb1(i,:)-W.rb1).^2) < sum((B.rb1(i,:)-W.rf1).^2) W.rb1 = W.rb1 + (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-y'); else W.rb1 = W.rb1 - (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-n'); end if sum((B.rf1(i,:)-W.rf1).^2) < sum((B.rf1(i,:)-W.rb1).^2) W.rf1 = W.rf1 + (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('rf-y'); else W.rf1 = W.rf1 - (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('lf-n'); end set(S.ftext(6),'string',{'Classification :' ; ['Training 1 with learning rate : ' num2str(lrate(li))]... ;['Chunk ' num2str(i) ' LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' -

RightBackward&RightFarward']}); pause(0.4); end set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]}); end case S.rd(2) tvalue1=mean(AG.lb1,2);

tvalue2=mean(AG.lf1,2); tvalue3=mean(AG.rb1,2); tvalue4=mean(AG.rf1,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb1(i,j)) > int32(tvalue1(i)) B.lb1(i,j) = 1; else B.lb1(i,j)=0; end if int32(AG.lf1(i,j)) > int32(tvalue2(i)) B.lf1(i,j) = 1; else B.lf1(i,j)=0; end if int32(AG.rb1(i,j)) > int32(tvalue3(i)) B.rb1(i,j) = 1; else B.rb1(i,j)=0; end if int32(AG.rf1(i,j)) > int32(tvalue4(i)) B.rf1(i,j) = 1; else B.rf1(i,j)=0; end end end n=size(B.lb1); W.lb1=B.lb1(n(1),:); W.lf1=B.lf1(n(1),:); W.rb1=B.rb1(n(1),:); W.rf1=B.rf1(n(1),:);

lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]});

for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1 %left if sum((B.lb1(i,:)-W.lb1).^2) < sum((B.lb1(i,:)-W.lf1).^2) W.lb1 = W.lb1 + (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-y'); else W.lb1 = W.lb1 - (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-n'); end if sum((B.lf1(i,:)-W.lf1).^2) < sum((B.lf1(i,:)-W.lb1).^2) W.lf1 = W.lf1 + (lrate(li).*(B.lf1(i,:) - W.lf1)); %disp('lf-y'); else W.lf1 = W.lf1 - (lrate(li).*(B.lf1(i,:) - W.lf1)); %disp('lf-n'); end %right if sum((B.rb1(i,:)-W.rb1).^2) < sum((B.rb1(i,:)-W.rf1).^2) W.rb1 = W.rb1 + (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-y'); else W.rb1 = W.rb1 - (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-n');

end if sum((B.rf1(i,:)-W.rf1).^2) < sum((B.rf1(i,:)-W.rb1).^2) W.rf1 = W.rf1 + (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('rf-y'); else W.rf1 = W.rf1 - (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('lf-n'); end set(S.ftext(6),'string',{'Classification :' ; ['Training 1 with learning rate : ' num2str(lrate(li))]... ;['Chunk ' num2str(i) ' LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' -

RightBackward&RightFarward']}); pause(0.4); end set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]}); end

tvalue1=mean(AG.lb2,2); tvalue2=mean(AG.lf2,2); tvalue3=mean(AG.rb2,2); tvalue4=mean(AG.rf2,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb2(i,j)) > int32(tvalue1(i)) B.lb2(i,j) = 1; else B.lb2(i,j)=0; end if int32(AG.lf2(i,j)) > int32(tvalue2(i)) B.lf2(i,j) = 1;

else B.lf2(i,j)=0; end if int32(AG.rb2(i,j)) > int32(tvalue3(i)) B.rb2(i,j) = 1; else B.rb2(i,j)=0; end if int32(AG.rf2(i,j)) > int32(tvalue4(i)) B.rf2(i,j) = 1; else B.rf2(i,j)=0; end end end n=size(B.lb2); W.lb2=B.lb2(n(1),:); W.lf2=B.lf2(n(1),:); W.rb2=B.rb2(n(1),:); W.rf2=B.rf2(n(1),:); lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb2))];... ['Average weight LF :' num2str(mean(W.lf2))];... ['Average weight RB :' num2str(mean(W.rb2))];... ['Average weight RF :' num2str(mean(W.rf2))]});

for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1 %left if sum((B.lb2(i,:)-W.lb2).^2) < sum((B.lb2(i,:)-W.lf2).^2)

W.lb2 = W.lb2 + (lrate(li).*(B.lb2(i,:) - W.lb2)); %disp('lb-y'); else W.lb2 = W.lb2 - (lrate(li).*(B.lb2(i,:) - W.lb2)); %disp('lb-n'); end if sum((B.lf2(i,:)-W.lf2).^2) < sum((B.lf2(i,:)-W.lb2).^2) W.lf2 = W.lf2 + (lrate(li).*(B.lf2(i,:) - W.lf2)); %disp('lf-y'); else W.lf2 = W.lf2 - (lrate(li).*(B.lf2(i,:) - W.lf2)); %disp('lf-n'); end %right if sum((B.rb2(i,:)-W.rb2).^2) < sum((B.rb2(i,:)-W.rf2).^2) W.rb2 = W.rb2 + (lrate(li).*(B.rb2(i,:) - W.rb2)); %disp('rb-y'); else W.rb2 = W.rb2 - (lrate(li).*(B.rb2(i,:) - W.rb2)); %disp('rb-n'); end if sum((B.rf2(i,:)-W.rf2).^2) < sum((B.rf2(i,:)-W.rb2).^2) W.rf2 = W.rf2 + (lrate(li).*(B.rf2(i,:) - W.rf2)); %disp('rf-y'); else W.rf2 = W.rf2 - (lrate(li).*(B.rf2(i,:) - W.rf2)); %disp('lf-n'); end set(S.ftext(6),'string',{'Classification :' ; ['Training 2 with learning rate : ' num2str(lrate(li))]... ;['Chunk ' num2str(i) ' LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' -

RightBackward&RightFarward']}); pause(0.4);

end set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb2))];... ['Average weight LF :' num2str(mean(W.lf2))];... ['Average weight RB :' num2str(mean(W.rb2))];... ['Average weight RF :' num2str(mean(W.rf2))]}); end set(S.ftext(7),'string',{['Average weight LB : ' num2str((mean(W.lb2)+mean(W.lb1))/2)];... ['Average weight LF :' num2str((mean(W.lf2)+mean(W.lf1))/2)];... ['Average weight RB :' num2str((mean(W.rb2)+mean(W.rb1))/2)];... ['Average weight RF :' num2str((mean(W.rf2)+mean(W.rf1))/2)]});

case S.rd(3)

tvalue1=mean(AG.lb1,2); tvalue2=mean(AG.lf1,2); tvalue3=mean(AG.rb1,2); tvalue4=mean(AG.rf1,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb1(i,j)) > int32(tvalue1(i)) B.lb1(i,j) = 1; else B.lb1(i,j)=0; end if int32(AG.lf1(i,j)) > int32(tvalue2(i)) B.lf1(i,j) = 1; else B.lf1(i,j)=0; end if int32(AG.rb1(i,j)) > int32(tvalue3(i)) B.rb1(i,j) = 1; else

B.rb1(i,j)=0; end if int32(AG.rf1(i,j)) > int32(tvalue4(i)) B.rf1(i,j) = 1; else B.rf1(i,j)=0; end end end n=size(B.lb1); W.lb1=B.lb1(n(1),:); W.lf1=B.lf1(n(1),:); W.rb1=B.rb1(n(1),:); W.rf1=B.rf1(n(1),:); lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]});

for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1 %left if sum((B.lb1(i,:)-W.lb1).^2) < sum((B.lb1(i,:)-W.lf1).^2) W.lb1 = W.lb1 + (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-y'); else W.lb1 = W.lb1 - (lrate(li).*(B.lb1(i,:) - W.lb1)); %disp('lb-n'); end

if sum((B.lf1(i,:)-W.lf1).^2) < sum((B.lf1(i,:)-W.lb1).^2) W.lf1 = W.lf1 + (lrate(li).*(B.lf1(i,:) - W.lf1)); %disp('lf-y'); else W.lf1 = W.lf1 - (lrate(li).*(B.lf1(i,:) - W.lf1)); %disp('lf-n'); end %right if sum((B.rb1(i,:)-W.rb1).^2) < sum((B.rb1(i,:)-W.rf1).^2) W.rb1 = W.rb1 + (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-y'); else W.rb1 = W.rb1 - (lrate(li).*(B.rb1(i,:) - W.rb1)); %disp('rb-n'); end if sum((B.rf1(i,:)-W.rf1).^2) < sum((B.rf1(i,:)-W.rb1).^2) W.rf1 = W.rf1 + (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('rf-y'); else W.rf1 = W.rf1 - (lrate(li).*(B.rf1(i,:) - W.rf1)); %disp('lf-n'); end set(S.ftext(6),'string',{'Classification :' ; ['Training 1 with learning rate : ' num2str(lrate(li))]... ;['Chunk ' num2str(i) ' LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' -

RightBackward&RightFarward']}); pause(0.4); end set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb1))];... ['Average weight LF :' num2str(mean(W.lf1))];... ['Average weight RB :' num2str(mean(W.rb1))];... ['Average weight RF :' num2str(mean(W.rf1))]}); end

tvalue1=mean(AG.lb2,2); tvalue2=mean(AG.lf2,2); tvalue3=mean(AG.rb2,2); tvalue4=mean(AG.rf2,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb2(i,j)) > int32(tvalue1(i)) B.lb2(i,j) = 1; else B.lb2(i,j)=0; end if int32(AG.lf2(i,j)) > int32(tvalue2(i)) B.lf2(i,j) = 1; else B.lf2(i,j)=0; end if int32(AG.rb2(i,j)) > int32(tvalue3(i)) B.rb2(i,j) = 1; else B.rb2(i,j)=0; end if int32(AG.rf2(i,j)) > int32(tvalue4(i)) B.rf2(i,j) = 1; else B.rf2(i,j)=0; end end end n=size(B.lb2); W.lb2=B.lb2(n(1),:); W.lf2=B.lf2(n(1),:);

W.rb2=B.rb2(n(1),:); W.rf2=B.rf2(n(1),:); lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb2))];... ['Average weight LF :' num2str(mean(W.lf2))];... ['Average weight RB :' num2str(mean(W.rb2))];... ['Average weight RF :' num2str(mean(W.rf2))]});

for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1 %left if sum((B.lb2(i,:)-W.lb2).^2) < sum((B.lb2(i,:)-W.lf2).^2) W.lb2 = W.lb2 + (lrate(li).*(B.lb2(i,:) - W.lb2)); %disp('lb-y'); else W.lb2 = W.lb2 - (lrate(li).*(B.lb2(i,:) - W.lb2)); %disp('lb-n'); end if sum((B.lf2(i,:)-W.lf2).^2) < sum((B.lf2(i,:)-W.lb2).^2) W.lf2 = W.lf2 + (lrate(li).*(B.lf2(i,:) - W.lf2)); %disp('lf-y'); else W.lf2 = W.lf2 - (lrate(li).*(B.lf2(i,:) - W.lf2)); %disp('lf-n'); end %right if sum((B.rb2(i,:)-W.rb2).^2) < sum((B.rb2(i,:)-W.rf2).^2) W.rb2 = W.rb2 + (lrate(li).*(B.rb2(i,:) - W.rb2)); %disp('rb-y'); else

W.rb2 = W.rb2 - (lrate(li).*(B.rb2(i,:) - W.rb2)); %disp('rb-n'); end if sum((B.rf2(i,:)-W.rf2).^2) < sum((B.rf2(i,:)-W.rb2).^2) W.rf2 = W.rf2 + (lrate(li).*(B.rf2(i,:) - W.rf2)); %disp('rf-y'); else W.rf2 = W.rf2 - (lrate(li).*(B.rf2(i,:) - W.rf2)); %disp('lf-n'); end set(S.ftext(6),'string',{'Classification :' ; ['Training 2 with learning rate : ' num2str(lrate(li))]... ;['Chunk ' num2str(i) ' LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' -

RightBackward&RightFarward']}); pause(0.4); end set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb2))];... ['Average weight LF :' num2str(mean(W.lf2))];... ['Average weight RB :' num2str(mean(W.rb2))];... ['Average weight RF :' num2str(mean(W.rf2))]}); end

tvalue1=mean(AG.lb3,2); tvalue2=mean(AG.lf3,2); tvalue3=mean(AG.rb3,2); tvalue4=mean(AG.rf3,2); for i = 1 :sizeofdata(1) for j= 1 : sizeofdata(2) if int32(AG.lb3(i,j)) > int32(tvalue1(i)) B.lb3(i,j) = 1; else B.lb3(i,j)=0; end

if int32(AG.lf3(i,j)) > int32(tvalue2(i)) B.lf3(i,j) = 1; else B.lf3(i,j)=0; end if int32(AG.rb3(i,j)) > int32(tvalue3(i)) B.rb3(i,j) = 1; else B.rb3(i,j)=0; end if int32(AG.rf3(i,j)) > int32(tvalue4(i)) B.rf3(i,j) = 1; else B.rf3(i,j)=0; end end end n=size(B.lb3); W.lb3=B.lb3(n(1),:); W.lf3=B.lf3(n(1),:); W.rb3=B.rb3(n(1),:); W.rf3=B.rf3(n(1),:); lrate=[0.01 0.02 0.3 0.04 0.05 0.06 0.07 0.08 0.09 0.1]; set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb3))];... ['Average weight LF :' num2str(mean(W.lf3))];... ['Average weight RB :' num2str(mean(W.rb3))];... ['Average weight RF :' num2str(mean(W.rf3))]});

for li = 1 : 10 %set(S.ftext(6),'string',{'Classification:' ; ['Training with learning rate : ' num2str(lrate(li))]}); %pause(0.1); for i = 1 : sizeofdata(1)-1

%left
if sum((B.lb3(i,:)-W.lb3).^2) < sum((B.lb3(i,:)-W.lf3).^2)
W.lb3 = W.lb3 + (lrate(li).*(B.lb3(i,:) - W.lb3)); %disp('lb-y');
else
W.lb3 = W.lb3 - (lrate(li).*(B.lb3(i,:) - W.lb3)); %disp('lb-n');
end
if sum((B.lf3(i,:)-W.lf3).^2) < sum((B.lf3(i,:)-W.lb3).^2)
W.lf3 = W.lf3 + (lrate(li).*(B.lf3(i,:) - W.lf3)); %disp('lf-y');
else
W.lf3 = W.lf3 - (lrate(li).*(B.lf3(i,:) - W.lf3)); %disp('lf-n');
end
%right
if sum((B.rb3(i,:)-W.rb3).^2) < sum((B.rb3(i,:)-W.rf3).^2) % fixed: exponent was mistyped as .^3
W.rb3 = W.rb3 + (lrate(li).*(B.rb3(i,:) - W.rb3)); %disp('rb-y');
else
W.rb3 = W.rb3 - (lrate(li).*(B.rb3(i,:) - W.rb3)); %disp('rb-n');
end
if sum((B.rf3(i,:)-W.rf3).^2) < sum((B.rf3(i,:)-W.rb3).^2)
W.rf3 = W.rf3 + (lrate(li).*(B.rf3(i,:) - W.rf3)); %disp('rf-y');
else
W.rf3 = W.rf3 - (lrate(li).*(B.rf3(i,:) - W.rf3)); %disp('rf-n');
end
set(S.ftext(6),'string',{'Classification :' ; ['Training 3 with learning rate : ' num2str(lrate(li))]...
;['Chunk ' num2str(i) ' - LeftBackward&LeftFarward'];['Chunk ' num2str(i) ' - RightBackward&RightFarward']});
pause(0.4);
end
set(S.ftext(7),'string',{['Average weight LB : ' num2str(mean(W.lb3))];...
['Average weight LF : ' num2str(mean(W.lf3))];...
['Average weight RB : ' num2str(mean(W.rb3))];...
['Average weight RF : ' num2str(mean(W.rf3))]});
end
set(S.ftext(7),'string',{['Average weight LB : ' num2str((mean(W.lb2)+mean(W.lb1)+mean(W.lb3))/3)];...
['Average weight LF : ' num2str((mean(W.lf2)+mean(W.lf1)+mean(W.lf3))/3)];...
['Average weight RB : ' num2str((mean(W.rb2)+mean(W.rb1)+mean(W.rb3))/3)];...
['Average weight RF : ' num2str((mean(W.rf2)+mean(W.rf1)+mean(W.rf3))/3)]});
end
set(S.ftext(6),'string',{'Classification :';'Done'});
assignin('base','B',B);
assignin('base','W',W);
csvwrite('flagbits.txt',flagbit);
end

Simulation module:

function bci_simulation
flagbit = csvread('flagbits.txt');
if flagbit(5)==1
import java.awt.Robot
S.mouse = Robot;
S.mouse.mouseMove(0, 0);

scr_size=get(0,'screensize'); S.processflag=false; S.fh = figure('units','pixels',... 'position',scr_size,... 'menubar','none',... 'name','BCI-Simulation',... 'numbertitle','off',... 'resize','off','color','w'); ax1_size = [5 620 1355 100]; S.ax1 = axes('units','pixels',... 'position',ax1_size); ax2_size = [5 500 1355 100]; S.ax2 = axes('units','pixels',... 'position',ax2_size);

ax3_size = [600 5 750 490]; S.ax3 = axes('units','pixels',... 'position',ax3_size); bgimg=imread('D:\finalyearproject\bgblack.jpg'); image(bgimg);

ax4_size = [5 390 590 100]; S.ax4 = axes('units','pixels',... 'position',ax4_size);

ax5_size = [5 270 590 100]; S.ax5 = axes('units','pixels',... 'position',ax5_size); ax6_size = [5 150 590 100]; S.ax6 = axes('units','pixels',... 'position',ax6_size);

ax7_size = [5 30 590 100]; S.ax7 = axes('units','pixels',... 'position',ax7_size);

S.txt1 = uicontrol('style','text',... 'unit','pix',... 'position',[5 700 150 21],... 'string','Combined wave form',... 'backgroundcolor','w',... 'fontsize',12); S.txt2 = uicontrol('style','text',... 'unit','pix',... 'position',[5 560 150 21],... 'string','Average wave form',... 'backgroundcolor','w',... 'fontsize',12); S.txt3 = uicontrol('style','text','horizontalalign','left',... 'unit','pix',... 'position',[10 460 180 20],... 'string','Signals from channel C3',... 'backgroundcolor','w',... 'fontsize',8); S.txt4 = uicontrol('style','text','horizontalalign','left',... 'unit','pix',... 'position',[10 340 180 20],... 'string','Signals from channel C4',... 'backgroundcolor','w',... 'fontsize',8); S.txt5 = uicontrol('style','text','horizontalalign','left',... 'unit','pix',... 'position',[10 220 180 20],... 'string','Signals from channel T3',...

%channel-1 : C3-5

%channel-2 : C4-6

%channel-3 : T3-13

'backgroundcolor','w',... 'fontsize',8); S.txt6 = uicontrol('style','text','horizontalalign','left',... 'unit','pix',... 'position',[10 100 180 20],... 'string','Signals from channel CZ',... 'backgroundcolor','w',... 'fontsize',8); S.txt7 = uicontrol('style','text',... 'unit','pix',... 'position',[600 460 80 20],... 'string','Unstable',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt8 = uicontrol('style','text',... 'unit','pix',... 'position',[1230 460 100 20],... 'string','Not moving',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.bg = uibuttongroup('units','pix',... 'pos',[1090 720 160 30]); S.rd(1) = uicontrol(S.bg,... 'style','rad',... 'unit','pix',... 'position',[5 5 70 20],... 'string','Left'); S.rd(2) = uicontrol(S.bg,... 'style','rad',... 'unit','pix',... 'position',[80 5 70 20],... 'string','Right');

%channel-4 : CZ-18

simposition = get(S.ax3,'position'); S.mx = simposition(1)+(simposition(3)/2); S.my = scr_size(4)-(simposition(2)+(simposition(4)/2)); S.mouse.mouseMove(S.mx,S.my); S.txt9 = uicontrol('style','text',... 'unit','pix',... 'position',[850 460 200 20],... 'string',['Position : ' num2str(S.mx) ' , ' num2str(S.my)],... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt10 = uicontrol('style','text',... 'unit','pix',... 'position',[610 10 200 20],... 'string','Active Motion : None',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt11 = uicontrol('style','text',... 'unit','pix',... 'position',[1150 70 200 20],... 'string','Left-Fw : None',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt12 = uicontrol('style','text',... 'unit','pix',... 'position',[1150 50 200 20],... 'string','Left-Bw : None',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt13 = uicontrol('style','text',... 'unit','pix',... 'position',[1150 30 200 20],... 'string','Right-Fw : None',...

'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); S.txt14 = uicontrol('style','text',... 'unit','pix',... 'position',[1150 10 200 20],... 'string','Right-Bw : None',... 'backgroundcolor','k','foregroundcolor','w',... 'fontsize',13); set(S.fh,'keypressfcn',{@figclose,S}); set(S.txt8,'string','Not moving'); else msgbox('First process the training module.'); end end

function []= figclose(varargin)

[K,S]= varargin{[2 3]}; set(S.fh,'WindowButtonMotionFcn',{@changesimtext,S}); if strcmp(K.Key,'space') close(S.fh); elseif strcmp(K.Key,'return') simulation_process(S); elseif strcmp(K.Key,'l') set(S.bg,'selectedobject',S.rd(1)); elseif strcmp(K.Key,'r') set(S.bg,'selectedobject',S.rd(2)); end end function []= simulation_process(S) S.processrunflag = true; set(S.fh,'keypressfcn',{@figclose,S});

set(S.txt11,'string','Left-Fw : Mapped'); set(S.txt12,'string','Left-Bw : Mapped'); set(S.txt13,'string','Right-Fw : Not Mapped'); set(S.txt14,'string','Right-Bw : Not Mapped'); load('C:\Users\shaneeth\Desktop\gui-matlab\Subject1_1D.mat'); tp_left=left'; tp_left_req(:,1) = tp_left(:,5); tp_left_req(:,2) = tp_left(:,6); tp_left_req(:,3) = tp_left(:,13); tp_left_req(:,4) = tp_left(:,18); avg_left= sum(tp_left_req,2)./4;

plot(S.ax1,tp_left_req(:,1:4)); plot(S.ax4,tp_left_req(:,1),'color','g'); plot(S.ax5,tp_left_req(:,2),'color','r'); plot(S.ax6,tp_left_req(:,3),'color','b'); plot(S.ax7,tp_left_req(:,4),'color','k'); tp_left_size=size(tp_left); %max_val_left=max(avg_left)-100; %min_val_left=min(avg_left)+100; %ylim(S.ax2,[min_val_left max_val_left]); %set(S.ax1,'xtick',[],'ytick',[]); %set(S.ax2,'xtick',[],'ytick',[]); set(S.ax3,'xtick',[],'ytick',[]); xlimoffset=-500; xlimoffset_ch=-250; chunk_threshold=15; samplecount=1; sample_st_y=[]; sample_st_x=[]; toggle_st=true; mouse_active_lu=50;

mouse_active_lb=10; mouse_active_flag=false; mouseactive_lu=false; mouseactive_lb=false; mx_temp=S.mx; my_temp=S.my; try for j = 1:500:tp_left_size(1) switch findobj(get(S.bg,'selectedobject')) case S.rd(1) set(S.txt11,'string','Left-Fw : Mapped'); set(S.txt12,'string','Left-Bw : Mapped'); set(S.txt13,'string','Right-Fw : Not Mapped'); set(S.txt14,'string','Right-Bw : Not Mapped'); set(S.txt3,'string','Signals from channel C3(Left)'); set(S.txt4,'string','Signals from channel C4(Left)'); set(S.txt5,'string','Signals from channel T3(Left)'); set(S.txt6,'string','Signals from channel CZ(Left)'); plot(S.ax1,tp_left_req(:,1:4)); plot(S.ax4,tp_left_req(:,1),'color','g'); plot(S.ax5,tp_left_req(:,2),'color','r'); plot(S.ax6,tp_left_req(:,3),'color','b'); plot(S.ax7,tp_left_req(:,4),'color','k'); chunk_avg_min = min(avg_left(j:j+500)); chunk_avg_max = max(avg_left(j:j+500)); chunk_avg_val = (chunk_avg_min+chunk_avg_max)/2; %t = timer('StartDelay', 1, 'Period', 1, 'TasksToExecute', 5, ... % 'ExecutionMode', 'fixedRate');

for j2 = j:j+500 if avg_left(j2)>(chunk_avg_val+chunk_threshold) binary_avg_left(j2)=1;



else binary_avg_left(j2)=0; end if avg_left(j2)<(chunk_avg_val-chunk_threshold) binary_avg_left_b(j2)=1; else binary_avg_left_b(j2)=0; end end if toggle_st==true sample_y_temp= -100:100; toggle_st=false; else sample_y_temp=100:-1:-100; toggle_st=true; end sample_y_size=size(sample_y_temp); sample_st_y=[sample_st_y sample_y_temp]; sample_x_temp(1:sample_y_size(2))=j; sample_st_x=[sample_st_x sample_x_temp];

sum_binary_chunk_lu = sum(binary_avg_left(j:j+500));
sum_binary_chunk_lb = sum(binary_avg_left_b(j:j+500));
avg_xaxis=1:(j+500);
plot(S.ax2,avg_xaxis(1:j+500),avg_left(1:j+500),'k',sample_st_x,sample_st_y,'r');

samplecount=samplecount+sample_y_size(2);

if sum_binary_chunk_lu>mouse_active_lu mouse_active_flag=true; mouseactive_lu=true; mouse_lu_disp=(sum_binary_chunk_lu - mouse_active_lu);



elseif sum_binary_chunk_lb>mouse_active_lb
mouse_active_flag=true;
mouseactive_lb=true;
mouse_lb_disp = (sum_binary_chunk_lb - mouse_active_lb);
end
%timer function (disabled; t and mouse are undefined at this point)
%delete(t);
%t = timer('StartDelay', 1, 'Period', 1, 'TasksToExecute', 1, ...
%    'ExecutionMode', 'fixedRate');
%t.TimerFcn = {@bci_mousecontrol,mouse};
%start(t);

                temp_mu = 0;
                temp_mb = 0;
                set(S.txt8,'string','Not moving');
                set(S.txt10,'string','Active Motion : None');

                % Scroll the plots through the chunk; if armed, nudge the
                % cursor one more pixel per step, up to the computed
                % displacement (upper band moves it down, lower band up).
                for i = j:30:j+500
                    xlim(S.ax1,[xlimoffset+i xlimoffset+i+500]);
                    xlim(S.ax2,[xlimoffset+i xlimoffset+i+500]);
                    ylim(S.ax2,[chunk_avg_min chunk_avg_max]);
                    xlim(S.ax4,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax5,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax6,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax7,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    if mouse_active_flag == true
                        if mouseactive_lu == true
                            set(S.txt10,'string','Active Motion : Down');
                            if temp_mu < mouse_lu_disp
                                S.mouse.mouseMove(S.mx,S.my);   % S.mouse: java.awt.Robot
                                temp_mu = temp_mu+1;
                                S.my = S.my+temp_mu;
                                if S.my > my_temp+200   % clamp after 200 px of travel
                                    S.my = my_temp;
                                end
                            end
                        elseif mouseactive_lb == true
                            set(S.txt10,'string','Active Motion : Up');
                            if temp_mb < mouse_lb_disp
                                S.mouse.mouseMove(S.mx,S.my);
                                temp_mb = temp_mb+1;
                                S.my = S.my-temp_mb;
                            end
                            if S.my < my_temp-200
                                S.my = my_temp;
                            end
                        end
                    end
                    pause(0.0001);
                    if strcmp(get(S.fh,'Currentkey'),'quote')   % ' key pauses
                        waitforbuttonpress;
                    end
                end
                mouse_active_flag = false;
                mouseactive_lu = false;
                mouseactive_lb = false;
                %wait(t);

            case S.rd(2)    % right-hand imagery mapped to horizontal cursor motion
                set(S.txt11,'string','Left-Fw : Not Mapped');
                set(S.txt12,'string','Left-Bw : Not Mapped');
                set(S.txt13,'string','Right-Fw : Mapped');
                set(S.txt14,'string','Right-Bw : Mapped');
                set(S.txt3,'string','Signals from channel C3(Right)');
                set(S.txt4,'string','Signals from channel C4(Right)');
                set(S.txt5,'string','Signals from channel T3(Right)');
                set(S.txt6,'string','Signals from channel CZ(Right)');
                plot(S.ax1,tp_left_req(:,1:4));
                plot(S.ax4,tp_left_req(:,1),'color','r');
                plot(S.ax5,tp_left_req(:,2),'color','b');
                plot(S.ax6,tp_left_req(:,3),'color','y');
                plot(S.ax7,tp_left_req(:,4),'color','g');
                chunk_avg_min = min(avg_left(j:j+500));
                chunk_avg_max = max(avg_left(j:j+500));
                chunk_avg_val = (chunk_avg_min+chunk_avg_max)/2;
                %t = timer('StartDelay', 1, 'Period', 1, 'TasksToExecute', 5, ...
                %          'ExecutionMode', 'fixedRate');

                % Same chunk binarization as in the left-hand case.
                for j2 = j:j+500
                    if avg_left(j2) > (chunk_avg_val+chunk_threshold)
                        binary_avg_left(j2) = 1;
                    else
                        binary_avg_left(j2) = 0;
                    end
                    if avg_left(j2) < (chunk_avg_val-chunk_threshold)
                        binary_avg_left_b(j2) = 1;
                    else
                        binary_avg_left_b(j2) = 0;
                    end
                end
                if toggle_st == true
                    sample_y_temp = -100:100;
                    toggle_st = false;
                else
                    sample_y_temp = 100:-1:-100;
                    toggle_st = true;
                end
                sample_y_size = size(sample_y_temp);
                sample_st_y = [sample_st_y sample_y_temp];
                sample_x_temp(1:sample_y_size(2)) = j;
                sample_st_x = [sample_st_x sample_x_temp];
                sum_binary_chunk_lu = sum(binary_avg_left(j:j+500))
                sum_binary_chunk_lb = sum(binary_avg_left_b(j:j+500))
                avg_xaxis = 1:(j+500);
                plot(S.ax2,avg_xaxis(1:j+500),avg_left(1:j+500),'k',sample_st_x,sample_st_y,'r');
                samplecount = samplecount + sample_y_size(2);
                if sum_binary_chunk_lu > mouse_active_lu
                    mouse_active_flag = true;
                    mouseactive_lu = true;
                    mouse_lu_disp = (sum_binary_chunk_lu - mouse_active_lu);
                elseif sum_binary_chunk_lb > mouse_active_lb
                    mouse_active_flag = true;
                    mouseactive_lb = true;
                    mouse_lb_disp = (sum_binary_chunk_lb - mouse_active_lb);
                end
                %timer function
                %delete(t);
                %t = timer('StartDelay', 1, 'Period', 1, 'TasksToExecute', 1, ...
                %          'ExecutionMode', 'fixedRate');
                % Disabled for the same reason as in the left-hand case.
                %t.TimerFcn = {@bci_mousecontrol,mouse};
                %start(t);
                temp_mu = 0;
                temp_mb = 0;
                set(S.txt8,'string','Not moving');


                set(S.txt10,'string','Active Motion : None');
                for i = j:30:j+500
                    xlim(S.ax1,[xlimoffset+i xlimoffset+i+500]);
                    xlim(S.ax2,[xlimoffset+i xlimoffset+i+500]);
                    ylim(S.ax2,[chunk_avg_min chunk_avg_max]);
                    xlim(S.ax4,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax5,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax6,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    xlim(S.ax7,[xlimoffset_ch+i xlimoffset_ch+i+250]);
                    if mouse_active_flag == true
                        if mouseactive_lu == true
                            set(S.txt10,'string','Active Motion : Right');
                            if temp_mu < mouse_lu_disp
                                S.mouse.mouseMove(S.mx,S.my);
                                temp_mu = temp_mu+1;
                                S.mx = S.mx+temp_mu;
                                if S.mx > mx_temp+200
                                    S.mx = mx_temp;
                                end
                            end
                        elseif mouseactive_lb == true
                            set(S.txt10,'string','Active Motion : Left');
                            if temp_mb < mouse_lb_disp
                                S.mouse.mouseMove(S.mx,S.my);
                                temp_mb = temp_mb+1;
                                S.mx = S.mx-temp_mb;
                            end
                            if S.mx < mx_temp-200
                                S.mx = mx_temp;
                            end
                        end
                    end
                    pause(0.0001);
                    if strcmp(get(S.fh,'Currentkey'),'quote')
                        waitforbuttonpress;
                    end
                end
                mouse_active_flag = false;
                mouseactive_lu = false;
                mouseactive_lb = false;
                %wait(t);
        end
    end
    %delete(t);
catch err
    disp(err)   % an indexing error in the final partial chunk lands here
end
end

%function [] = bci_mousecontrol(varargin)
%mouse = varargin{3};
%mouse.mouseMove(0, 0);
%screenSize = get(0, 'screensize');
%for i = 1:screenSize(4)
%    mouse.mouseMove(i, i);
%    pause(0.00001);
%end
%end

% Mouse-motion callback: report the pointer position in the status text.
function [] = changesimtext(varargin)
S = varargin{3};
ssize = get(0,'screensize');
xy = get(0,'pointerlocation');
set(S.txt9,'string',['Position : ' num2str(xy(1)) ' , ' num2str(ssize(4)-xy(2))]);
set(S.txt8,'string','moving');
end
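The listing above moves the pointer through S.mouse.mouseMove, but the mouse object itself is created in the GUI-setup code that precedes this excerpt. For reference, a minimal sketch of how such an object can be obtained through MATLAB's bundled Java runtime; the field names S.mouse, S.mx and S.my are taken from the listing, and everything else is illustrative:

% Sketch only: java.awt.Robot injects pointer movements at the OS level.
S.mouse = java.awt.Robot;
pointer = get(0,'PointerLocation');     % current pointer position
ssize   = get(0,'ScreenSize');
S.mx = pointer(1);
S.my = ssize(4) - pointer(2);           % Java's y axis runs top-down
S.mouse.mouseMove(S.mx, S.my);          % park the cursor at (mx,my)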

Screen shots

Initiating execution: [screenshot]

Training module: [screenshots]

Simulation module: [screenshots]

3.7 Results
The results are organized as follows. Two different experimental procedures were performed. The first was purely in software: as described in the Methods section, the open-source EEG signal-processing program OpenViBE, running custom-written code, used several algorithms (also explained in Methods) to parse, analyze, and recognize patterns in prerecorded EEG datasets from the public domain. To limit the number of variables and keep the margin of error to a minimum, the same EEG datasets were used throughout. In total, 104 trials were recorded across 26 EEG datasets (4 trials per dataset), each trial with 15 possible events, giving 1,560 candidate events. Imagery functions were not regulated during this phase; instead, the original experimental information was used to determine classification accuracy. The second procedure was a proof of concept involving a direct interface between proprietary EEG hardware and the mouse cursor. Here, 52 trials of 5 regulated imagery functions, each with 3 possible patterns, were conducted, giving 780 candidate events.
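The candidate-event totals follow directly from the trial counts; a quick sanity check in MATLAB (variable names are illustrative):

% Simulation phase: 26 datasets x 4 trials each, 15 candidate events per trial.
trials_sim = 26 * 4;              % 104 trials
events_sim = trials_sim * 15;     % 1560 candidate events

% Hardware phase: 52 trials x 5 imagery functions x 3 possible patterns.
events_hw  = 52 * 5 * 3;          % 780 candidate events

fprintf('Simulation: %d events, hardware: %d events\n', events_sim, events_hw);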


3.8 Conclusions

Analysis

One goal of this project was to design, revise, and present a suitable, adaptive program that could be coupled with today's electroencephalography hardware to create a more efficient and affordable brain-computer interface that can be put to practical use in the field of prosthetic devices. At the beginning of the project, the software was assigned certain criteria to meet in order to be considered successful: affordability, user adaptability, accuracy, and applicability. It is the belief of the researcher, after thorough experimentation, that the program met all design criteria and exceeded expectations where affordability was concerned. The proof-of-concept hardware experimentation showed that such a task is, in fact, possible to accomplish. Overall, the project followed its guidelines, exceeded expectations, and, with further experimentation, could be ready to be incorporated into today's assistive technology.

The data obtained from the simulation phase of experimentation show a mean accuracy of 91.35% on prerecorded EEG datasets. This represents a significant gain in accuracy over the oft-used combination of PCA and temporal filters alone: on average, about 15.52% more of the candidate events were detected and classified accurately by the experimental scenarios. Accuracy was recorded using a win/no-record system; that is, it was determined by comparing the number of candidate events against the number of events classified and the number classified correctly. This strongly suggests that the initial goal of creating a more robust, adaptive, intuitive interface was largely achieved. At the same time, there were several occasions when the LVQ-training program would simply hang and take an inordinate amount of time to compute. This implies that there are still bugs to be worked out in the code and that, despite the gains in accuracy, the filtration/classification programs could be improved further toward near-optimal performance. In fact, apart from the LVQ filtering and the LDA-based classification, most parameter values for the filtration/preprocessing stages were obtained through experimentation and the review of current literature.

After experimentation with the EEG device and robotic limb in real-time use, similar results were achieved; the proprietary Epoc software achieved about 13% lower accuracy in pattern detection than the custom processing scenarios did. However, a marked decrease in accuracy between simulation and real-time use was apparent. Since the statistical tendencies of the entire signal were already known to the processing program during simulation, the ANN weights for the LVQ were much better optimized for the task than those used during real-time experimentation. Additionally, the drop could be due to the inherent unpredictability of the subject's thoughts during online use. In other words, whereas the subjects whose data were used for simulation testing were given cues, and thus might have had the telltale "aha" reaction to them, the live user was free to perform the imagery at any point in time, with no element of surprise. This could be one of the primary factors explaining the sudden drop in pattern-detection accuracy between simulation and online testing.

Finally, it is the researcher's opinion that real-world application of the developed software in brain-computer interfaces is very much possible, given the system's flexibility, affordability, open-source nature, and, as previously described, accuracy. The researcher therefore believes that the two goals set at the initiation of the experiment were achieved: 1) to improve on existing software using methods that may increase the accuracy of pattern detection in EEG signals, and 2) to demonstrate, as a proof of concept, that EEG signals gathered from an acquisition device can be used to actuate a robotic arm, giving some insight into how this technology might be applied in the field of prosthetics. Furthermore, the results of the hardware experimentation showed a mean accuracy of 63.61% for the Emotiv software and 76.12% for the custom software, a mean gain in accuracy of 12.51 percentage points. From the presented data, the researcher concluded that the proprietary software used a different method of filtering and learning than the program created with OpenViBE, as illustrated by the clear differences in detection rates. The researcher therefore believes this project demonstrated that EEG technology can be applied in the field of HCI to produce an interface that is affordable, user-friendly, and practical to use.
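The accuracy figures quoted above can be reproduced from raw counts under the win/no-record scheme; a minimal sketch, in which the correct-classification counts are back-computed from the reported percentages and are therefore illustrative:

% Win/no-record bookkeeping for the hardware phase (780 candidate events).
candidate_events = 780;
correct_custom = round(0.7612 * candidate_events);   % ~594 events (illustrative)
correct_epoc   = round(0.6361 * candidate_events);   % ~496 events (illustrative)

acc_custom = 100 * correct_custom / candidate_events;   % ~76.12%
acc_epoc   = 100 * correct_epoc   / candidate_events;   % ~63.61%
gain       = acc_custom - acc_epoc;                     % ~12.51 percentage points
fprintf('Custom: %.2f%%  Epoc: %.2f%%  Gain: %.2f points\n', acc_custom, acc_epoc, gain);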


4. Appendix
Key Terms

Object Oriented Programming (OOP) - a programming paradigm in which related functions are grouped into simulated entities known as classes, which then serve as templates to create objects containing data that can be accessed or mutated using that object's functions

Software Modules/Boxes - the various objects used in OOP that can be manipulated and connected in certain ways to produce custom software

Electroencephalograph (EEG) - a machine that records the electrical activity along the scalp produced by the firing of neurons within the brain, which occurs in recognizable patterns

Biomimicry - a discipline that studies nature's best ideas and then imitates these designs and processes to solve human problems

RAM - a form of computer data storage, in the form of integrated circuits, that allows stored data to be accessed in any order

EEPROM - Electrically Erasable Programmable Read-Only Memory; a type of non-volatile memory used in computers and other electronic devices to store small amounts of data that must be saved when power is removed

Algorithm - an effective method expressed as a finite list of well-defined instructions for calculating a function

Scenario (disambiguation) - a set of test cases that ensures the business process flows are tested from end to end

Electrode - an electrical conductor used to make contact with a nonmetallic part of a circuit

Reference Channel (see Electrode) - the egress for all electric potentials recorded on the scalp during EEG processes; this acts as a ground and as a signal location reference

Cortex - a sheet of neural tissue that is outermost to the cerebrum of the mammalian brain

Lobes - portions of the brain that have been shown to regulate different, specific brain functions

Sensorimotor - of, relating to, or involving both sensory and motor activity

Axon - a long, slender projection of a nerve cell, or neuron, that conducts electrical impulses away from the neuron's cell body

Dendrite - the branched projections of a neuron that conduct the electrochemical stimulation received from other neural cells to the cell body

Neurotransmitter - a chemical released from a nerve cell that transmits an impulse to another nerve, muscle, organ, or other tissue; a messenger of neurologic information from one cell to another

Neuron - an electrically excitable cell that processes and transmits information by electrical and chemical signaling

Synapse - a junction that permits a neuron to pass an electrical or chemical signal to another cell

Electric Potential - the difference in charge between two oppositely charged objects within a predetermined distance

Epoching - splitting the signal into chunks of a predetermined size and frequency limit

Event (disambiguation) - any triggered process on a computer

Signal Processing - an area of electrical engineering and applied mathematics that deals with operations on, or analysis of, signals in either discrete or continuous time, to perform useful operations on those signals

Linear Discriminant Analysis (LDA) - a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events

Bayes' Theorem - a theorem relating two conditional probabilities that are the reverse of each other; it can be used to predict the classification of objects into groups based on certain parameters

Fourier Transform - decomposes a signal into its various frequency components

Microcontroller - a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals

Integrated Development Environment (IDE) - a programming environment packaged as an application program, typically consisting of a code editor, a compiler, a debugger, and a graphical user interface (GUI)

Bootloader - a process that starts the operating system when the user turns on a computer system

Syntax (disambiguation) - correct wording/phrasing in machine language that can be recognized and converted to binary data

Compilation - the translation of source code into object code by a compiler

Console - also used as a command-line interface; a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks

Brain-computer Interface - a direct communication pathway between the brain and an external device

Parse - the process of analyzing a text, made of a sequence of tokens, to determine its grammatical structure with respect to a given (more or less formal) grammar

Qualitative Analysis - analysis that uses subjective judgment based on non-quantifiable information

User Configuration File - in this case, a file that maps a specific user's brainwave pattern data to specific limb movements

Noise (disambiguation) - arbitrary, hardware-caused fluctuations in a signal that have adverse effects on later signal processing

Vector Quantization - the use of randomly selected vectors in a signal to determine the position of other selected vectors, thereby allowing data compression and analysis
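Two of these terms, Epoching and the Fourier Transform, recur throughout the Methods chapter; the short MATLAB illustration below shows both on a synthetic signal (the signal and all sizes are made up for the example):

% Illustrative only: 10 s of synthetic 'EEG' sampled at 128 Hz.
fs  = 128;
sig = randn(1, 10*fs);

% Epoching: split the signal into 1-second chunks (one epoch per row).
epoch_len = fs;
n_epochs  = floor(numel(sig)/epoch_len);
epochs    = reshape(sig(1:n_epochs*epoch_len), epoch_len, n_epochs)';

% Fourier transform: frequency content of the first epoch.
spectrum = abs(fft(epochs(1,:)));
freqs    = (0:epoch_len-1) * fs/epoch_len;
plot(freqs(1:epoch_len/2), spectrum(1:epoch_len/2));
xlabel('Frequency (Hz)'); ylabel('Magnitude');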


5. Works Cited
Smith, S. (n.d.). Digital Signal Processing. In The Scientist and Engineer's Guide to Digital Signal Processing.

WaveMetrics - scientific graphing, data analysis, curve fitting & image processing software. (2004, September 7). Retrieved December 3, 2010, from http://www.wavemetrics.com/products/igorpro/dataanalysis/signalprocessing.htm

Signal Processing. (n.d.). IBM Research. Retrieved November 16, 2010, from http://domino.research.ibm.com

Biomedical signal analysis: EEG analysis using DSP algorithms. (n.d.). Electronics Engineering Herald. Retrieved October 9, 2010, from http://www.eeherald.com/section/design-guide/dg100008.html

EEG / MRI Matlab Toolbox. (n.d.). SourceForge. Retrieved December 5, 2010, from http://eeg.sourceforge.net/

