
Neural Network Classifier and Filtering for EEG Detection in Brain-Computer Interface Device

Mr. CHOI NANG SO


Email: cnso@excite.com

Prof. J GODFREY LUCAS


Email: jglucas@optusnet.com.au

School of Mechatronics, Computer & Electrical Engineering, University of Western Sydney, P.O. Box 10, Kingswood, NSW 2747, Australia
ABSTRACT

In this paper, a neural network classifier and two backpropagation filtering models (fast backpropagation and backpropagation Levenberg-Marquardt) are used to process simulated brain wave signals (electroencephalogram, EEG), which are similar to the signals obtained from in-house developed BCI equipment. The simulated raw data sets are used to train a multi-layer perceptron (MLP) neural network, which then classifies the EEG wave bands. The output of the neural classifier will not only be applied to show the status of the brain, but is also used to classify the filtered EEG signals from the backpropagation neural filtering models investigated in this paper.

Index Terms: Brain, Brain-Computer Interface (BCI), electroencephalogram (EEG), neural network, multilayer perceptron (MLP), recognition, classifier, backpropagation.

1. Introduction

A comprehensive review of research and development into Brain-Computer Interface (BCI) devices has been reported [1], which concludes that the key technical challenge is to reliably and accurately detect and classify features of an electroencephalogram (EEG). Since different BCIs differ greatly in their inputs, translation algorithms, outputs, and other characteristics, it is often difficult to make a direct comparison of results [3]. Nevertheless, in all cases the goal is to maximize performance and practicability for the chosen application. The goal of this paper is to classify and filter raw EEG data with neural network translation algorithms in order to obtain the best performance for the BCI prototype that has been developed. It has been shown [2] that a neural network classifier can be used to distinguish the features of an EEG. A neural network classifier, accompanied by two backpropagation filtering models for processing the simulated brain wave signals (EEG) collected from the developed BCI device prototype, will be presented. The simulated raw data sets generated will be used to train the multi-layer perceptron (MLP) neural network to classify the EEG wave bands. The output of the neural classifier will not only be applied to show the status of the brain's activity, but is also used to classify the filtered EEG signals from the backpropagation neural filtering models investigated in this paper.
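The paper does not specify how its simulated EEG raw data were generated. A minimal sketch of one plausible scheme, using standard textbook frequency ranges for the five bands and the zero-mean Gaussian noise (standard deviation 3×10⁻⁴) described later in the Methods section, is given below; the band frequencies, sampling rate, and sinusoidal model are illustrative assumptions, not taken from the paper:

```python
import math
import random

# Standard EEG band frequency ranges in Hz (textbook values; the paper
# does not state which frequencies were used for its simulated data).
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 100.0),
}

def simulate_band(band, n_samples=30, fs=256.0, noise_std=3e-4):
    """Return one simulated EEG segment: a sinusoid at a frequency drawn
    from the band's range, plus zero-mean Gaussian noise (std 3e-4, the
    value quoted in the paper's Methods section)."""
    lo, hi = BANDS[band]
    f = random.uniform(lo, hi)  # pick a frequency within the band
    return [math.sin(2 * math.pi * f * t / fs) + random.gauss(0.0, noise_std)
            for t in range(n_samples)]

segment = simulate_band("alpha")
```

With 30 samples per segment, such records match the 30-element (R) input vectors fed to the MLP classifier described in Section 2.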

2. Methods

A. Training the Neural Classifier by MLP

The neural classifier model developed is a multilayer perceptron (MLP) structure containing two hidden layers, as shown in Figure 1. There is no precise method for determining the number of neurons per hidden layer, so it must be estimated [6]. If there are too many neurons in the hidden layers, it is hard for the network to generalize; if there are too few, it is hard for the network to encode the significant features of the input data. Following comprehensive trial-and-error testing, it was found that the number of neurons used in the hidden layers in this work performs more efficiently than the number specified by Kolmogorov's theorem [6]. The first layer is fed with 30 (R) sets of simulated EEG raw data covering the delta, theta, alpha, beta and gamma bands. The numbers of neurons in the first and second hidden layers were optimized by trial and error to 150 (S1) and 3 (S2). Since the output of the network is decoded in a binary scheme in the present design, the hard-limit transfer function (f1 and f2) is selected so that each neuron outputs 1 if its result (a1 or a2) is greater than or equal to 0, and 0 otherwise. In this neural classifier model, both hidden layers are evaluated with a hard-limit transfer function. The output (a1) of the first hidden layer is fed to the input of the second hidden layer; the second hidden layer comprises three output neurons, which provide a combination of 3-bit binary values indicating which EEG wave band has been classified. The outputs of the neural classifier corresponding to the EEG delta, theta, alpha, beta and gamma bands are 000, 001, 010, 011 and 110 respectively. For other, unclassified bands, the output is shown as 111. With all the simulated raw data fed into the MLP classifier, the convergence of the network model can be presented as a spectrum result, or more specifically as a good indicator that the network converges to a result within the tolerance error (i.e. the sum-squared network error), as shown in Figures 2a and 2b.

Figure 2a : Result for MLP neural classifier with S1=150 and S2=3

Figure 2b : Result for MLP neural classifier with S1=140 and S2=3

B. Training the Neural Network Filtering of EEG Signals by BP

There are many training algorithms for the backpropagation (BP) neural network; however, at the moment no single algorithm is universally better than the others [4]. For the present BCI prototype, two backpropagation methods are investigated, which filter out Gaussian noise with a standard deviation of 3×10⁻⁴ and a mean of 0 from the different EEG wave bands. The first BP training algorithm is fast backpropagation (trainbpx); the second is Levenberg-Marquardt (trainlm). Both are feedforward algorithms [5]. They are constructed as two-hidden-layer BP models, as shown in Figure 1. However, the selected transfer functions in the first (f1) and second (f2) hidden layers must be differentiable [5]. Also, since most of the nonlinear part of the input features has been resolved in the first hidden layer, the second hidden layer transfer function can be simpler than the first and can provide any continuous value at the output [7]. With these criteria, the tangent-sigmoid function and the pure linear function are assigned to the first (f1) and second (f2) hidden layers respectively. In addition to the transfer functions in each layer of each model, there are 205 (P) input data sets, in which each EEG wave band (embedded with Gaussian noise) is input to the networks. The training efficiency of the two BP networks is investigated and compared through their sum-square error performance and the number of epochs necessary for training to be achieved.

3. Results

A. Results for the Neural Classifier by MLP Training

The sum-square network error indicates whether the classifier can successfully separate the EEG wave bands from each of the simulated EEG raw data sets fed into the two-hidden-layer perceptron neural classifier. It is found that the sum-square network error can be minimized to 10⁻² within 122 epochs if S1=150 and S2=3. The same convergent sum-square network error result can be obtained within 349 epochs if S1=140 and S2=3. These results are shown in Figure 2a and Figure 2b respectively.

B. Results for Neural Network Filtering of EEG Signals by BP Training

For both BP models, the training results clearly show that the BP Levenberg-Marquardt (trainlm) training algorithm, Figures 4a-4e, is much faster than the fast backpropagation (trainbpx) training algorithm, Figures 3a-3e. The target sum-square error and the learning rate are set to 10⁻² in both cases. The graphs show that noise can be filtered out from the delta, theta, alpha, beta and gamma waves after 4862, 7188, 7713, 6251 and 5540 epochs respectively using the fast BP (trainbpx) training model. In the BP Levenberg-Marquardt (trainlm) training model, noise can be dramatically filtered out from the different EEG wave bands within just 2 epochs, as shown in Figure 4.

4. Discussion

It is clear that the number of epochs needed for a successful result decreases as the number of neurons in the first layer of the neural classifier MLP increases. Moreover, long training times can also be caused by the presence of an outlier input vector whose length is much larger or smaller than that of the other input vectors: an input vector with large elements can lead to changes in the weights and biases that take a long time for a much smaller input vector to overcome [5]. The BP Levenberg-Marquardt algorithm has proven that its gradient descent method is at least (S x Q) times faster than the fast BP algorithm, where S is the number of output neurons and Q is the number of input or target vectors [5]. The fast BP training model (trainbpx) has two modes of convergence. When it is in momentum mode, it is used to find the optimal solution.
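The forward pass of the classifier described in Section 2A can be sketched in plain Python; this is an illustrative sketch, not the MATLAB Neural Network Toolbox implementation used in the paper, and since the trained weights are not published, random placeholder weights are used. The layer sizes (R=30, S1=150, S2=3), the hard-limit transfer function, and the 3-bit band codes are taken from the paper:

```python
import random

def hardlim(x):
    """Hard-limit transfer function: 1 if x >= 0, else 0 (as in the paper)."""
    return 1 if x >= 0 else 0

def layer(inputs, weights, biases):
    """One fully connected layer with hard-limit activations."""
    return [hardlim(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Layer sizes from the paper: R=30 inputs, S1=150 and S2=3 hidden neurons.
R, S1, S2 = 30, 150, 3

# Placeholder random weights (the trained weights are not given in the paper).
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(R)] for _ in range(S1)]
b1 = [random.uniform(-1, 1) for _ in range(S1)]
W2 = [[random.uniform(-1, 1) for _ in range(S1)] for _ in range(S2)]
b2 = [random.uniform(-1, 1) for _ in range(S2)]

# 3-bit output codes assigned to each EEG band (from the paper);
# any other code (e.g. 111) means the band is unclassified.
CODES = {(0, 0, 0): "delta", (0, 0, 1): "theta", (0, 1, 0): "alpha",
         (0, 1, 1): "beta", (1, 1, 0): "gamma"}

def classify(segment):
    a1 = layer(segment, W1, b1)   # first hidden layer, 150 neurons
    a2 = layer(a1, W2, b2)        # second hidden layer, 3 output neurons
    return CODES.get(tuple(a2), "unclassified")

band = classify([random.gauss(0.0, 1.0) for _ in range(R)])
```

With untrained weights the output code is arbitrary; training (as reported above, to a sum-square error of 10⁻²) is what makes the 3-bit code track the input band.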
When it is in self-adaptive learning mode, the training time is shortened. However, compared with the BP Levenberg-Marquardt (trainlm) training model, the convergence speed of the fast BP algorithm is much slower, and hence more epochs are required to reach the same target sum-square error.

5. Conclusions

An MLP neural classifier model has been demonstrated with the capability to classify EEG wave bands among the simulated data sets. The output of the MLP neural classifier model can be used in biofeedback training and EEG monitoring for future application-specific development of the in-house BCI prototype. Given the comparison of convergence speed between the fast BP and BP Levenberg-Marquardt training algorithms, it is clearly preferable to choose the BP Levenberg-Marquardt training algorithm to filter out the Gaussian noise from the EEG wave bands in the system under development.

6. Acknowledgements

The authors would like to thank Prof. Godfrey Lucas and Mr. Howard Fu for their support. Special thanks go to Ms. Pamilla Gan for her spiritual support and encouragement during the preparation of this paper.

7. References

[1] Jonathan R. Wolpaw et al., "Brain-Computer Interface Technology: A Review of the First International Meeting," IEEE Trans. on Rehab. Eng., vol. 8, no. 2, pp. 164-173, June 2000.
[2] William D. Penny, Stephen J. Roberts, Eleanor A. Curran, and Maria J. Stokes, "EEG-Based Communication: A Pattern Recognition Approach," IEEE Trans. on Rehab. Eng., vol. 8, no. 2, pp. 214-215, June 2000.
[3] Aleksander Kostov and Mark Polak, "Parallel Man-Machine Training in Development of EEG-Based Cursor Control," IEEE Trans. on Rehab. Eng., vol. 8, no. 2, pp. 203-205, June 2000.
[4] Mingui Sun et al., "The Forward EEG Solutions Can Be Computed Using Artificial Neural Networks," IEEE Trans. on Biomedical Eng., vol. 47, no. 8, pp. 1044-1050, August 2000.
[5] Howard Demuth and Mark Beale, Neural Network Toolbox for Use with MATLAB, User's Guide Version 4, The MathWorks, Inc. [Online]. Available: http://www.mathworks.com
[6] Lefteri H. Tsoukalas and Robert E. Uhrig, Fuzzy and Neural Approaches in Engineering, John Wiley & Sons, Inc., Chapters 7 and 8, 1996.
[7] Martin T. Hagan, Howard B. Demuth and Mark Beale, Neural Network Design, PWS Publishing, 1996.

Figure 3a : Noise filtering in delta wave by fast BP algorithm

Figure 3b : Noise filtering in theta wave by fast BP algorithm

Figure 3c : Noise filtering in alpha wave by fast BP algorithm

Figure 3d : Noise filtering in beta wave by fast BP algorithm

Figure 3e : Noise filtering in gamma wave by fast BP algorithm

Figure 4a : Noise filtering in delta wave by BP-Levenberg-Marquardt algorithm

Figure 4b : Noise filtering in theta wave by BP-Levenberg-Marquardt algorithm

Figure 4c : Noise filtering in alpha wave by BP-Levenberg-Marquardt algorithm

Figure 4d : Noise filtering in beta wave by BP-Levenberg-Marquardt algorithm

Figure 4e : Noise filtering in gamma wave by BP-Levenberg-Marquardt algorithm
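The rapid convergence shown in Figures 4a-4e comes from the Levenberg-Marquardt damped Gauss-Newton update. A minimal one-parameter sketch of that update is given below; it fits a simple exponential model rather than the paper's BP filter networks, and the model, data, and damping schedule are illustrative assumptions, not the toolbox's trainlm implementation:

```python
import math

# Synthetic data from a known model y = exp(0.5 * x); Levenberg-Marquardt
# should recover the parameter a = 0.5 from an initial guess of 0.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]

def residuals(a):
    return [math.exp(a * x) - y for x, y in zip(xs, ys)]

def jacobian(a):
    # d/da exp(a*x) = x * exp(a*x)
    return [x * math.exp(a * x) for x in xs]

def lm_fit(a=0.0, mu=1e-3, iters=50):
    """Levenberg-Marquardt for one parameter: a damped Gauss-Newton step
    a <- a - J'r / (J'J + mu). Large mu behaves like gradient descent,
    small mu like Gauss-Newton, which is what gives the fast convergence."""
    for _ in range(iters):
        r = residuals(a)
        J = jacobian(a)
        g = sum(j * ri for j, ri in zip(J, r))  # J'r (gradient term)
        h = sum(j * j for j in J)               # J'J (curvature term)
        new_a = a - g / (h + mu)
        # Accept the step only if the sum-square error decreases;
        # otherwise raise the damping and retry on the next iteration.
        if sum(ri * ri for ri in residuals(new_a)) < sum(ri * ri for ri in r):
            a, mu = new_a, mu / 10
        else:
            mu *= 10
    return a

a_hat = lm_fit()
```

The accept/reject rule on the sum-square error mirrors how trainlm adapts its damping factor between epochs, which is why it typically needs far fewer epochs than fast backpropagation, at the cost of more work per epoch.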
