
British Journal of Educational Technology Vol 42 No 3 2011 417–440

doi: 10.1111/j.1467-8535.2009.01035.x

Emotion recognition and communication for reducing
second-language speaking anxiety in a web-based
one-to-one synchronous learning environment

Chih-Ming Chen and Tai-Hung Lee

Chih-Ming Chen is an associate professor of the Graduate Institute of Library, Information and Archival
Studies, National Chengchi University. Tai-Hung Lee is a Ph.D. candidate of the Department of Electrical
Engineering, National Central University. Address for correspondence: Chih-Ming Chen, Graduate Insti-
tute of Library, Information and Archival Studies, National Chengchi University, NO. 64, Sec. 2,
ZhiNan Road, Wenshan District, Taipei City 116, Taiwan. Email: chencm@nccu.edu.tw

Abstract
e-Learning is becoming an increasingly popular educational paradigm because
of the rapid growth of the Internet. Recent studies have argued that affective
modelling (ie, considering a learner's emotional or motivational state) should
also be considered while designing learning activities. Many studies indicated
that various learning emotions markedly impact learning outcomes. In the
language education field, many studies have investigated anxiety associated
with learning a second language, noting that anxiety has an adverse effect on
the performance of those speaking English as a second language. Therefore,
how to reduce anxiety associated with learning a second language to increase
learning performance is an important research issue in the language education
field. Accordingly, this study employed a sensor, signal processing, wireless
communication, system-on-chip and machine-learning techniques in develop-
ing an embedded human emotion recognition system based on human pulse
signals for detecting three human emotions (nervousness, peace and joy) to
help teachers reduce language-learning anxiety of individual learners in a
web-based one-to-one synchronous learning environment. The accuracy rate
of the proposed emotion recognition model evaluated by cross-validation is as
high as 79.7136% when filtering out human pulse signals that have bias.
Moreover, this study applied the embedded emotion recognition system to assist
instructors teaching in a synchronous English conversation environment by
immediately reporting variations in individual learner emotions to the teacher
during learning. In this instructional experiment, the teacher can give appro-
priate learning assistance or guidance based on the emotion states of individual
learners. Experimental results indicate that the proposed embedded human
emotion recognition system is helpful in reducing language-based anxiety, thus
promoting instruction effectiveness in English conversation classes.

2010 The Authors. British Journal of Educational Technology 2010 Becta. Published by Blackwell Publishing, 9600 Garsington
Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

1. Introduction
Based on the rapid development and universal use of the Internet, the e-learning
infrastructure has gradually matured and e-learning has become an increasingly
popular educational paradigm (Lin & Hsieh, 2001). Therefore, how to improve the
learning performance of students in the e-learning context by identifying factors that
affect e-learning effectiveness is an important research issue. Many education scholars
have pointed out that emotions are directly related to and affect learning performance
(Goleman, 1995; Piaget, 1989). Emotions can affect attention, the creation of meaning
and formation of memory channels. Hence, emotional status and learning are strongly
related (LeDoux, 1994). Kort, Reilly and Picard (2001) summarised emotion sets that
pair a positive emotion with its negative counterpart, including anxiety-confidence,
boredom-fascination, frustration-euphoria, dispirited-encouraged and terror-
enchantment, as possibly relevant to learning. To interact with students effectively in
an educational context, teachers often try to gain insight into invisible human emo-
tions and thoughts. In the learning scenario, teachers who make correct judgements
about the emotional status of students can improve the effectiveness of their interac-
tions with students.

Additionally, many psychologists and neurologists have indicated that emotions and
motives have important roles in cognitive learning (Izard, 1984). Goleman (1995)
noted that students who are depressed, angry and anxious have trouble learning.
According to Piaget (1989), human emotions can arise from or interfere with learning.
According to Izard's analysis, negative emotions adversely affect the performance of
cognitive activities, whereas positive emotions enhance it. Coles (1998) argued that teachers can
assist and guide students in developing emotions that promote cognitive development.
Identifying the cognitive and emotional states of students during instruction can facili-
tate the development of positive learning experiences for learners (Reilly, 2004).

Particularly, researchers, language teachers and even language learners have been
interested in how anxiety inhibits language learning. Pajares (1996) indicated that
anxiety has the most extensive influence on learning when an individual is uncertain of
his/her own capabilities. Lekkas, Tsianos, Germanakos, Mourlas and Samaras (2008)
argued that anxiety is probably the most indicative emotion affecting learning
performance. These studies claimed that anxiety is an important emotion that
debases learning performance during learning processes. Many studies (Horwitz, 2001;
Horwitz, Horwitz & Cope, 1986) argued that anxiety associated with language learning
is a specific anxiety rather than a trait anxiety. Horwitz et al called this specific anxiety
related to language learning 'foreign language anxiety', which is manifested in
student experiences in language classes. These researchers developed a novel instru-
ment, the Foreign Language Classroom Anxiety Scale (FLCAS) (Horwitz, 1986; Horwitz
et al), to measure this anxiety. Foreign language anxiety is defined by some researchers
as a feeling of tension, apprehension and nervousness associated with the situation of
learning a foreign language (Ozcan, 2008). Several studies (Horwitz, 2001;
Rouhani, 2008) indicated that people who have poor language-learning abilities will
experience foreign language anxiety, and anxiety in some individuals is a cause of poor


language learning. Horwitz also discussed the possible sources of this anxiety, including
difficulty in authentic self-presentation and various language teaching practices.
Woodrow (2006) surveyed a great deal of research into second-language or foreign-
language anxiety over the past two decades. Woodrow claimed that anxiety has an
adverse effect on the language-learning process.

Because of the importance of emotional states to learning, many e-learning scholars
have attempted to identify learner emotions using an artificial intelligence technique
called affective computing. Affective computing is a novel situation-aware technology
that primarily identifies human emotions and builds appropriate human emotion rec-
ognition models. Generally, four methods are used to recognise learner emotions: (1)
voice (prosody) analysis (Kopecek, 2000); (2) observable behaviour, such as user
actions in a system's interface (De Vicente & Pain, 2002); (3) facial expression analysis
(Wehrle & Kaiser, 2000); and (4) analysis of physiological signs (Picard, Healey &
Vyzas, 2001). In previous studies that used affective computing to support learning,
Nosu and Kurokawa (2006) proposed a real-time emotion diagnosis robotic system that
can identify the emotions of an e-learning student based on his/her facial expression
and biometric signals. Their study confirmed the effectiveness of the proposed multi-
modal emotion diagnostic system. Emmanuel, Pierre and Claude (2007) discussed the
use of physiological data for quasi real-time adaptation in intelligent tutoring systems.
Their study analysed learner reactions using physiological signals generated in a game-
like virtual learning environment. These signals were measured by electroencephalo-
graphs (EEGs), galvanic skin response (GSR) and respiration. To support peer-to-peer
e-learning, Mohamed and Mahmoud (2007) proposed the emotional multi-agents
system, which can recognise a learner's emotion based on his/her facial expression.
Shaikh, Hua, Ishizuka and Mostafa (2005) developed an emotion model that has eight
emotional states and four transitional emotion rules to identify the emotional states of
individual learners, thereby enhancing the quality of learning and improving accessi-
bility to education and training in the e-learning context. However, recognising learner
emotions in an e-learning environment is extremely challenging.

Many past studies argued that several physiological signals, such as blood pressure,
GSR, electrocardiogram (ECG), EEG and human pulse, relate to human emotions.
Based on our literature survey, however, no study has shown that combining these
physiological signals yields a high accuracy rate of human emotion recognition.
Indeed, considering more physiological signals does not guarantee a high accuracy
rate of human emotion recognition. The key point is how to extract the main features
that reflect emotion; even a single physiological signal can suffice to construct a
human emotion recognition model. Anxiety
is an unpleasant combination of emotions that includes fear, worry and uneasiness,
and is often accompanied by physical reactions such as high blood pressure, increased
heart rate and other body signals (Barlow, 2002; Kim & Gorman, 2005). Moreover,
Hsieh, Shen, Chao and Chen (2007) captured and analysed physiological signals and
then recognised emotions. Using physiological sensors for electroneuromyography,
ECG, respiration and pulse, the sensed signals were encoded into 33 features. Via their


experiment, the 10 most influential features were selected, and half of them
were pulse features. In other words, human pulse is indeed an essential physiological
signal reflecting human emotion. Therefore, based on human pulse signals, this study
proposes an embedded human emotion recognition system that has an affective display
interface that immediately identifies the emotional states of individual learners to help
English teachers in comprehending language-learning anxiety in a web-based one-to-
one synchronous learning environment. The proposed system uses support vector
machines (SVMs) to construct a human emotion recognition model that can identify
three emotions (peace, nervousness and joy) based on emotion features extracted
from frequency-domain human pulse signals. The experiment for second-language
speaking anxiety confirms that the proposed embedded human emotion recognition
system is helpful in reducing language-learning anxiety when teachers can provide
appropriate learning assistance or guidance based on the emotional states of individual
learners.

2. System design
This section introduces the embedded human emotion recognition system for reducing
learners' second-language speaking anxiety.

2.1. System architecture
The proposed embedded emotion recognition system is composed of three parts: the
pulse-measuring module, the Advanced RISC Machine (ARM) embedded platform and
the human emotion recognition module, which is implemented on a remote computer
server. The pulse-measuring module and the ARM embedded platform together form
an embedded human pulse detection system, which accurately measures human pulse
and transfers it to the human emotion recognition module for identifying learner
emotions.
Figure 1 presents the system architecture of the proposed system. The module that
measures human pulse (bottom left portion of Figure 1) senses pulse signals via a
piezoelectric sensor and performs signal pre-processing to filter out noise. To transfer
these pulse signals to the embedded system, analogue pulse signals must first be trans-
formed into digital signals by an analogue-to-digital converter. These digital signals are
then transmitted to the embedded system through a serial transmission device using a
predetermined baud rate. The upper left portion of Figure 1 presents the embedded
system with serial transmission and wireless local area network interfaces. When the
collected pulse data meet a predetermined amount, the embedded system transmits
pulse data to the remote computer server via wireless communication. The remote
server then stores the pulse data in a human physiology database. The right side of
Figure 1 shows the details of the proposed human emotion recognition module that
assists teachers in reducing the speaking anxiety of individual learners. The emotion
recognition module consists of Fast Fourier Transform (FFT) software (Frigo &
Johnson, 2005), which was developed at MIT by Matteo Frigo and Steven G. Johnson
and can be freely downloaded from http://www.fftw.org/; a library for support vector
machines (LIBSVM) (Chang & Lin, 2001), which is an SVM tool library; and the web
meeting system JoinNet (http://www.webmeeting.com.tw/). JoinNet was employed as
a speaking training system supported by the proposed embedded human emotion recognition


Figure 1: The system architecture of the proposed embedded human emotion recognition system for
supporting teachers in reducing learners' language-speaking anxiety. ADC, analogue-to-digital converter;
ARM, Advanced RISC Machine; FFTW, Fastest Fourier Transform in the West; LIBSVM, library for
support vector machines; WLAN, wireless local area network

system to aid teachers in reducing student language-learning anxiety in a web-based
English speaking instructional environment. Figure 2 shows the implemented
embedded human pulse detection system.

2.2. The proposed human emotion recognition scheme based on human pulse signals
The human emotion recognition module is composed of FFT software (Frigo & Johnson,
2005), LIBSVM (Chang & Lin, 2001) and JoinNet (right side of Figure 1). The FFT is
used to transform time-domain human pulse signals into frequency-domain pulse
signals for extraction of human emotion features. The LIBSVM, which is an integrated


Figure 2: The implemented embedded human pulse detection system

software package for support vector classification and has excellent pattern recognition
performance, was applied to construct the human emotion recognition model that uses
the extracted human emotion features. Moreover, JoinNet supports instructors teach-
ing English conversation online and facilitates communication with students via audio,
video and text chat. Teachers and students can share and discuss slides, figures, docu-
ments, websites and desktops, and even control the PCs or laptops of other students
remotely via this system. This study only employed the JoinNet functionality that allows
teachers to communicate with learners online via an audio channel, and thereby
supports English conversation training with the assistance of learner emotion
recognition.

Notably, the time and amplitude of pulse signals do not map directly onto variations
in human emotion. Consequently, this study employed the FFT to transform the
original time-domain pulse signals into frequency-domain signals, because frequency-
domain signals vary with human emotion. The study thus extracted emotion features
from frequency-domain pulse signals and employed SVMs to construct a human
emotion recognition model based on these extracted features. The following section
describes the FFT and SVMs in further detail.

2.2.1. FFT for human emotion feature extraction
In the FFT, the original pulse signals sensed by the measuring module are
approximately transformed into combinations of many sine waves with corresponding
frequencies and amplitudes. The corresponding amplitude of each sine wave
represents a feature weight. Based on experimental results, the feature weights of
amplitudes vary as human emotions vary; that is, different people experiencing the
same emotion can obtain similar feature weight combinations of sine waves. Therefore,


these feature weight combinations of sine waves derived from an emotion can serve as
emotion features when constructing a human emotion recognition model using
machine-learning models. To promote the identification of human emotions, this study
adopted Fastest Fourier Transform in the West (FFTW) (Frigo & Johnson, 2005), which
is the fastest FFT software and uses C language application programming interfaces, to
transform the original time-domain pulse signals into frequency-domain pulse signals.

Figure 3 presents an example of the extracted human pulse signal associated with the
peaceful emotion. This emotion is used as the basis for recognising the other two emotions.
Figure 4 shows an example of the pulse signal associated with nervousness generated
when playing a computer game. These two signals differ in waveform shape, amplitude
and period (Figures 3 and 4). However, the shape, period and amplitude of time-domain
pulse signals are very difficult to use directly as emotion features. Therefore, this study
utilised the FFTW to transform time-domain
pulse signals into frequency-domain pulse signals, such that the transformed pulse
signal consists of many harmonic waveforms with corresponding frequencies and
amplitudes. To transform an original pulse signal into the frequency domain for
emotion feature extraction, the human pulse signal (Figure 3) is segmented into four
parts according to a predetermined time window. Each segmented part has a region
that overlaps the region of the previous part to ensure that the segmented pulse signal
will not lose useful emotion features. Compared with the pulse signal for a calm emotion
(ie, peace), the part of the pulse signal with abnormal emotion variation (ie, nervous-
ness or joy) will occupy an extremely low proportion if the predetermined time window
is set over a long period. To effectively extract emotion features, one must segment
human pulse signals with overlapping regions using an appropriate time window.
Figure 5 shows each corresponding frequency-domain spectrum of the four segmented
time-domain pulse signals of peace (Figure 3). Similarly, the pulse signal for nervous
emotion (Figure 4) is also segmented into four parts according to a predetermined time
window. Figure 6 shows each frequency spectrum corresponding to the four segmented
time-domain pulse signals for nervousness (Figure 4). Comparing the pulse signals
(Figures 5 and 6) shows that different emotions yield different weight combinations of
amplitudes in frequency-domain analysis.
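The segmentation-and-transform procedure described above can be sketched in Python. The 50 samples-per-second rate matches this study; the 10-second window, 50% overlap and synthetic test signal are illustrative assumptions, as the exact window settings are not published here.

```python
import numpy as np

def extract_emotion_features(pulse, fs=50, win_s=10.0, overlap=0.5, n_features=26):
    """Segment a time-domain pulse signal into overlapping windows and take
    the leading FFT amplitudes of each window as emotion feature weights:
    feature 1 is the DC component, feature 2 the amplitude at the fundamental
    frequency f, feature 3 the amplitude at 2f, and so on."""
    win = int(win_s * fs)                  # samples per window
    step = int(win * (1 - overlap))        # hop size; adjacent windows overlap
    features = []
    for start in range(0, len(pulse) - win + 1, step):
        segment = pulse[start:start + win]
        spectrum = np.abs(np.fft.rfft(segment))   # amplitude spectrum
        features.append(spectrum[:n_features])    # keep DC, f, 2f, ...
    return np.asarray(features)

# Synthetic 60-second "pulse" sampled at 50 Hz with a 1.2 Hz beat plus noise
fs = 50
t = np.arange(0, 60, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
feats = extract_emotion_features(pulse, fs=fs)
print(feats.shape)  # one 26-feature row per overlapping window
```

The overlapping hop corresponds to the overlapping segmentation regions described above, so that short emotion-related episodes are not lost at window boundaries.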

Figure 3: An example of human pulse signal with peaceful emotion

Figure 4: An example of human pulse signal with nervous emotion


Figure 5: The corresponding frequency-domain spectrum of the four segmented time-domain human
pulse signals with peaceful emotion as shown in Figure 3

Figure 6: The corresponding frequency-domain spectrum of the four segmented time-domain human
pulse signals with nervous emotion as shown in Figure 4

2.2.2. SVM for constructing the human emotion recognition model
The extracted emotion features based on the FFT served as input features to train the
human emotion recognition model using SVMs for predicting emotional variations of
individual learners during learning. The main consideration in employing SVMs to
construct an emotion recognition model is that human pulse signals contain large
feature dimensions and a considerable amount of noise. The SVMs are good at solving
such problems and are superior to other statistics-based machine-learning methods
(Noble, 2003). Moreover, many studies have adopted LIBSVM (Chang & Lin, 2001),
which is an integrated software package for support vector classification, as a tool
because it rapidly analyses data and supports multiple programming languages and
platforms. Furthermore, radial basis function (RBF) is generally a reasonable first
choice for model selection in LIBSVM. Two parameters must be set when using an RBF
kernel: C and g. C is the penalty parameter of the error term, and g is the kernel
parameter. Which C and g are best for a given problem is not known beforehand;
consequently, some kind of model selection (parameter search) must be performed.
The goal is to identify good C and g values so that the classifier can accurately predict
unknown data (ie, testing data). Notably, LIBSVM can also automatically determine
these two parameters using the grid parameter search approach (Chang & Lin).
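As an illustration of this grid search, the sketch below uses scikit-learn's SVC, which wraps LIBSVM, rather than the LIBSVM command-line tools; the synthetic three-class feature clusters and the exponential grid ranges (mirroring LIBSVM's grid search defaults) are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the 26-dimensional emotion features: three
# well-separated Gaussian clusters for nervousness, peace and joy.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 26)) for c in range(3)])
y = np.repeat([0, 1, 2], 60)

# Exponentially spaced C and gamma candidates, scored by 5-fold
# cross-validation, in the spirit of LIBSVM's grid parameter search.
param_grid = {"C": 2.0 ** np.arange(-5, 16, 2),
              "gamma": 2.0 ** np.arange(-15, 4, 2)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
best = search.best_params_

# Cross-validated accuracy of the RBF classifier under the chosen parameters
acc = cross_val_score(SVC(kernel="rbf", **best), X, y, cv=5).mean()
print(best, round(acc, 4))
```

Each (C, g) pair is scored by cross-validated accuracy and the best-scoring pair is retained, which is exactly the role the grid parameter search plays here.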


In this study, human pulse signals with corresponding emotion features obtained
through the transformation of the FFTW serve as training data for LIBSVM to build a
recognition model of human emotions. Furthermore, the cross-validation scheme was
used in this study to assess the forecasting accuracy rate of human emotion recogni-
tion. The human emotion recognition model can help teachers immediately offer feed-
back based on the emotions of students during learning processes.

3. Experiments
This section has two parts. First, section 3.1 describes the construction of the human
emotion recognition model based on SVMs. Section 3.2 explains how the developed
embedded human emotion recognition system was applied to aid teachers in reducing
second-language speaking anxiety in a web-based one-to-one synchronous learning
environment.

3.1. Constructing the human emotion recognition model using SVMs
To obtain emotion features for constructing a human emotion recognition model that
can identify the emotional states of peace, joy and nervousness based on SVMs, it is
necessary to use sampled pulse signals associated with these three emotional states as
training data. Therefore, this study utilised online films and PC games to elicit these human
emotions. Ten volunteers took part in this experiment for collecting human pulse
signals for different emotions. Table 1 lists the personal information and health status of
the volunteers. Among these 10 volunteers, eight were male and two were female.
Average volunteer age was 24 ± 2 years. One of the 10 volunteers was sick, another had
particularly good health status and the other eight volunteers had normal health
status.

While collecting training data for modelling a human emotion recognition model,
each volunteer was asked to sit in front of a computer with the human pulse sampling
interface and wear a human pulse sensor on a finger of the left hand. Additionally, a

Table 1: Demographic information and health status of the 10 volunteers
who were invited to serve as samples for gathering human pulse
signals with different emotion variations

Volunteer No. Age Sex Health status

1 24 Male Sick
2 26 Female Normal
3 26 Female Good
4 26 Male Normal
5 23 Male Normal
6 22 Male Normal
7 22 Male Normal
8 22 Male Normal
9 22 Male Normal
10 22 Male Normal


Figure 7: A volunteer's joyous emotion elicited by watching a funny film

camera was installed behind each volunteer to record the pulse signals and behavioural
responses of volunteers in order to aid in extracting the corresponding human pulse
signals with emotion variation. Figure 7 shows the joy emotion of one volunteer
elicited by a funny film. The upper part of Figure 7 shows the
online film, and the bottom part of Figure 7 shows the sampled pulse signal associated
with the joy emotion. The nervousness of 10 volunteers was generated by a computer
game in which subjects shot moving objects. Similarly, the peace experienced by some
volunteers was in response to an online film presenting a clear blue sky accompanied by
soft music. Moreover, the human pulse sampling interface for gathering pulse signals
associated with the joy emotion has a button that volunteers click when they feel joy.
The emotion of peace was extracted after 10 volunteers continuously watched the
online film for a long time. Moreover, nervousness was generated when the 10 volun-
teers played the computer game. This study reasonably assumes that the online film or
computer game elicited specific human emotions. To identify the time an emotion
occurred, the corresponding sampling time of human pulse signals was recorded auto-
matically.

The pulse signals are time-domain data. The sampling rate for each segmented time-
domain pulse signal was 50 samples per second. During the sampling of pulse signals,
the number of valid emotion samples differed across volunteers.
There are a total of 253, 265 and 88 valid records sampled from 10 volunteers for the
nervousness, peace and joy emotions respectively. These records were used to construct
the human emotion recognition model. After collecting pulse data, the data were


transformed by FFTW from the time domain to the frequency domain for extraction of
emotion features. After FFTW transformation, the top 26 harmonic sine waves with
various magnitudes and frequencies are selected as human emotion features. For
example, the first feature is the direct-current (DC) component, the second feature is the
amplitude of the sine wave with frequency f, the third feature is the amplitude of the
sine wave with frequency 2f and so on. After that, each extracted pulse signal that
contained 26 emotion features was normalised to the assigned data format of LIBSVM
for constructing the emotion recognition model. Furthermore, three parameters used
in LIBSVM were set for training the human emotion recognition model and evaluating
forecasting performance by cross-validation. Cross-validation is a technique for
estimating the performance of a predictive model and is sometimes called rotation
estimation. This method randomly splits the data set into training and validation data.
For each such split, the classifier is retrained with the training data and validated on the
remaining data. The results from each split can then be averaged. The first parameter,
-v, is the number of n-fold for cross-validation; the second parameter, -c (i.e. C), sets the
penalty parameter of the error term in the optimisation function of SVM for pattern
classification; the third parameter, -g (i.e. g ), called gamma, sets the kernel parameter
in the default kernel function RBF. To avoid use of the heuristic method to determine the
values of learning parameters -c and -g for modelling the human emotion recognition
model, LIBSVM can automatically determine near-optimal parameters via the grid
parameter search approach (Chang & Lin, 2001). Table 2 lists the corresponding fore-
casting accuracy rates of any two human emotions evaluated by cross-validation under
the automatically determined learning parameters. In this experiment, parameter -v
was set to 5. Based on cross-validation, the forecasting accuracy rate of simultaneously
assessing nervousness and peace was 88.8476%, the forecasting accuracy rate of
simultaneously assessing nervousness and joy was 78.0899% and the forecasting
accuracy rate of simultaneously assessing peace and joy was 90.1639%. The entire
forecasting accuracy rate of simultaneously assessing the three emotions was
76.8254% under the automatically determined parameters (-c = 8 and -g = 8).
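The normalisation of 26-feature records into LIBSVM's data format can be sketched as follows: each feature column is rescaled and each record is written in LIBSVM's sparse '<label> index:value' text format. The scaling range of [-1, 1] is an assumption (svm-scale's default), as the exact range used here is not stated.

```python
import numpy as np

def to_libsvm_lines(features, labels, lo=-1.0, hi=1.0):
    """Scale each feature column to [lo, hi] and emit LIBSVM's sparse text
    format: '<label> 1:<f1> 2:<f2> ... 26:<f26>'."""
    X = np.asarray(features, dtype=float)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)   # guard constant columns
    scaled = lo + (hi - lo) * (X - mins) / span      # column-wise rescaling
    lines = []
    for label, row in zip(labels, scaled):
        pairs = " ".join(f"{i + 1}:{v:.6g}" for i, v in enumerate(row))
        lines.append(f"{int(label)} {pairs}")
    return lines

# Two toy 3-feature records standing in for 26-feature pulse samples
lines = to_libsvm_lines([[1.0, 5.0, 10.0], [3.0, 5.0, 30.0]], [1, 2])
print(lines)  # ['1 1:-1 2:-1 3:-1', '2 1:1 2:-1 3:1']
```

Files in this format can be passed directly to LIBSVM's svm-train tool, with the integer label denoting the emotion class.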

Figure 8 shows the distribution of the 26 emotion features for volunteer no. 2 extracted
from 17 human pulse samples with the joy emotion. The distribution of these 26
extracted emotion features of joy has a particular trend; however, some sampled
emotion features have large differences from most samples and were filtered out based
on statistical analysis because they were biased human emotion features. Table 3 dis-
plays descriptive statistics for the 26 joy features extracted from the 10 volunteers. This
study employed the mean and standard deviation of statistical analysis to determine a
range for filtering out some sampled emotion features that have large differences from
those of most samples. The range was computed via the following formula:
Range = m ± n × s, (1)

where m is the mean, s is the standard deviation and n is a constant.

This study heuristically determined the constant parameter n for the three human
emotions. The constant parameters for peace, nervousness and joy were set at 2, 2.5


Table 2: The forecasting accuracy rates of human emotions evaluated by cross-validation under the
automatically determined learning parameters (nervous: 253 records; peaceful: 265 records;
joyous: 88 records)

Nervous vs. peaceful:             88.8476% (-v = 5, -c = 8, -g = 8)
Nervous vs. joyous:               78.0899% (-v = 5, -c = 128, -g = 2)
Peaceful vs. joyous:              90.1639% (-v = 5, -c = 32768, -g = 0.0001220703125)
Nervous vs. peaceful vs. joyous (606 records): 76.8254% (-v = 5, -c = 8, -g = 8)

Figure 8: The distribution of 26 joy emotion features of volunteer no. 2 extracted from 17 human
pulse samples with the joyous emotion

Table 3: Descriptive statistics of the 26 human joy emotion features extracted from the
10 volunteers

Number of records Mean Standard deviation Variance

f1 265 6034.7200 70.3100 4943.088


f2 265 152.0886 80.0605 6409.677
f3 265 120.5580 70.1209 4916.936
f4 265 184.9866 124.5282 15507.263
f5 265 152.4036 102.9955 10608.064
f6 265 84.8716 55.3627 3065.031
f7 265 58.5143 36.6664 1344.427
f8 265 49.0770 32.3510 1046.618
f9 265 37.0160 24.1110 581.317
f10 265 24.1106 17.4389 304.117
f11 265 18.1554 12.4398 154.747
f12 265 15.4426 11.2027 125.500
f13 265 13.7766 9.6063 92.282
f14 265 12.4355 9.3169 86.804
f15 265 11.1840 8.0172 64.275
f16 265 10.8046 8.2362 67.835
f17 265 10.3540 7.4250 55.135
f18 265 9.7882 6.9710 48.594
f19 265 9.6560 6.4750 41.926
f20 265 9.5380 6.6652 44.425
f21 265 9.0269 6.0987 37.194
f22 265 8.9995 5.9475 35.372
f23 265 8.8839 5.9743 35.692
f24 265 8.5293 5.8508 34.231
f25 265 8.6844 5.5064 30.321
f26 265 7.7500 6.6500 44.258


and 2.5 respectively. In other words, the human emotion feature values of sampled data
that fall within the determined interval are retained, and the remaining samples are
filtered out.
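The bias-filtering step described above can be sketched as follows; the paper's actual interval bounds are defined by its per-feature statistics, so this illustrative version assumes an interval of mean ± k standard deviations per feature:

```python
# Hedged sketch of the bias-filtering step: samples whose feature values fall
# outside a determined interval are discarded. The paper defines its interval
# from per-feature statistics; here we assume mean +/- k standard deviations.
import numpy as np

def filter_bias(samples, k=2.5):
    """Keep rows whose every feature lies within mean +/- k*std of its column."""
    mean, std = samples.mean(axis=0), samples.std(axis=0)
    mask = np.all(np.abs(samples - mean) <= k * std, axis=1)
    return samples[mask]

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 26))   # stand-in for 100 feature records
data[0] += 50                       # inject one obviously biased record
kept = filter_bias(data)
print(len(kept))  # fewer than 100 records remain
```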

After filtering out the biased human emotion features, 180 records for nervousness,
180 records for peace and 59 records for joy were retained to construct the human
emotion recognition model using SVMs. All emotion training data were again evaluated
by cross-validation; Table 4 shows the forecasting accuracy rate for human emotions
with bias filtering, evaluated by cross-validation under automatically determined
learning parameters. Compared with the earlier experimental result (Table 2), the
overall forecasting accuracy rate for the three emotions together increased from
76.8254% to 79.7136% under the automatically determined parameters (-v = 5, -c = 8
and -g = 0.125).
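The parameter notation used above (-v = 5, -c, -g) is LIBSVM's (Chang & Lin, 2001): 5-fold cross-validation over candidate values of the penalty C and the RBF kernel width gamma. A hedged sketch of the equivalent search using scikit-learn, with synthetic stand-in data rather than the authors' pulse features:

```python
# Hedged sketch: selecting SVM parameters (C, gamma) by 5-fold cross-validation,
# analogous to a LIBSVM "-v 5 -c ... -g ..." grid search. The data below are
# synthetic stand-ins, not the paper's 26 pulse-derived emotion features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Three synthetic "emotion" classes in a 26-dimensional feature space
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(60, 26)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)  # 0 = peaceful, 1 = nervous, 2 = joyous

# Coarse grid of penalty (C) and RBF width (gamma) values, as in LIBSVM searches
grid = {"C": [8, 32, 128, 32768], "gamma": [0.125, 0.0078125, 0.0001220703125]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)  # cv=5 mirrors "-v 5"
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```

The best cross-validated parameter pair is then used to train the final recognition model on all retained records.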

The human emotion recognition model was then applied to reduce second-language
speaking anxiety in a web-based one-to-one synchronous learning environment. The
user interface synchronously conveys the real-time learner emotions of peace,
nervousness and joy to teachers during English conversation training via the audio
channel of JoinNet (Figure 9). The teacher can then refer to the emotions of
individual learners and provide appropriate feedback or guidance to reduce their
second-language speaking anxiety.

3.2. Online English speaking training supported by the developed embedded human emotion
recognition system in a web-based one-to-one synchronous learning environment
The developed embedded human emotion recognition system was applied to support
online English speaking training in a web-based one-to-one synchronous learning
environment. This study recruited four students and one English teacher from a senior
high school in Taiwan to take part in the experiment. To track the learning status of
each student, the four students were numbered 1–4. The emotions of learners nos. 1
and 2 were not conveyed to the teacher during their online English speaking training;
however, their emotional variations over time were automatically recorded by the
embedded human emotion recognition system. The teacher therefore had no emotion
reference while teaching English to learners nos. 1 and 2. In contrast, the emotion
variations of learners nos. 3 and 4 were both recorded by the embedded human emotion
recognition system and conveyed to the teacher's computer monitor during the learning
processes. Before the online English speaking instruction started, all learners filled
out two pretest questionnaires about the anxiety and nervousness they experience
during English learning: the FLCAS (Horwitz, 1986; Horwitz et al., 1986) and the
Anxiety Toward In-Class Activities Questionnaire (ATIAQ) (Young, 1990). Students
filled out the same two questionnaires again after finishing the online English
speaking training.

After the pretest, each learner and the teacher were asked to wear an ear microphone,
sit in two different language-learning rooms and speak to each other in English via
the JoinNet audio channel. The English content was planned beforehand by the English
teacher to ensure that individual learner emotions could be elicited during speaking
training. Additionally, learners nos. 3 and 4 wore a human pulse sensor on a finger of
their left hands so that their emotions could be sensed during the learning processes.
Figure 10 shows learner no. 3 wearing the human pulse sensor and ear microphone
during English speaking practice with the teacher via JoinNet. All English speaking
sessions with the teacher were recorded. The speaking processes of learners nos. 3
and 4 were also displayed on the teacher's computer screen, with the interface
displaying learner emotions. The proposed system predicted learner emotion once every
0.5 seconds because emotion variation cannot be correctly predicted in advance and
generally happens over a short time interval. Ekman and Davidson (1994) indicated
that there is no agreement about how long an emotion typically lasts, although most
of those who distinguish emotions from moods recognise that moods last longer.
Rosenberg (1998) identified some useful properties: primarily, that mood has a longer
temporal duration than emotion, and that mood and emotion frequently influence each
other. An emotion can be very brief, typically lasting a matter of seconds or, at
most, minutes. The 0.5-second interval for pulse-based emotion recognition is
therefore short enough to match the typical duration of an emotion.

Table 4: The forecasting accuracy rate of human emotion with noise filtering evaluated by cross-validation under the automatically determined learning parameters

                                        Nervous %   Peaceful %   Joyous %
Nervous (180 records)                   100         90.2778      80.3347
Peaceful (180 records)                  90.2778     100          90.3766
Joyous (59 records)                     80.3347     90.3766      100
Nervous/peaceful/joyous (419 records):  79.7136 (-v = 5, -c = 8, -g = 0.125)

Pairwise learning parameters: nervous vs peaceful (-v = 5, -c = 32, -g = 0.0078125); nervous vs joyous (-v = 5, -c = 32768, -g = 0.0001220703125); peaceful vs joyous (-v = 5, -c = 8, -g = 0.0078125).

Figure 9: The user interface for synchronously conveying real-time learner emotions to teachers in a web-based one-to-one synchronous learning environment

In the proposed human emotion recognition system, the status of learner emotions was
transmitted to the teacher's computer monitor every 2.5 seconds during learning
processes, based on two considerations. First, an emotion lasts only a short time; to
avoid recognising an emotion incorrectly, the proposed system determined a learner's
emotion from the majority vote of the five emotions recognised over each 2.5-second
window. Second, displaying learner emotion every 0.5 seconds would easily confuse the
teacher and interfere with speech training.

Figure 10: Learner no. 3 wearing the human pulse sensor and ear microphone for English speaking practice with a teacher through JoinNet

Figure 11: The teacher guiding learner no. 3's English speaking through JoinNet with the assistance of the human emotion recognition system

Figure 11 shows the teacher instructing learner no. 3 in English speaking through
JoinNet with the assistance of the human emotion recognition system. Table 5 presents
statistics for the emotions of the four learners during an English speaking session
with the English teacher. Learner no. 4 experienced nervousness more often than the
other three learners during learning. Additionally, learners nos. 1 and 3, who had
good English speaking skills, had lower percentages of nervousness than learners
nos. 2 and 4, who had poor English speaking skills.

Table 5: Statistics of the four learners' emotion variations during the English speaking training processes

        Total seconds   Peaceful (%)   Nervous (%)   Counts of nervousness   Frequency of nervousness (cpm)
No. 1   665             79.7           20.3          28                      2.53
No. 2   650             68.5           31.5          37                      3.67
No. 3   612.5           79.7           20.3          25                      2.49
No. 4   800             47.8           52.2          52                      3.9

cpm, counts per minute.

Figure 12: The emotion variations of learner no. 1 with time during the English conversation training

Figure 13: The emotion variations of learner no. 2 with time during the English conversation training

Figures 12–15 show the emotions of the four learners over time during English
speaking training; in these figures, the numbers 1–3 stand for peace, nervousness and
joy respectively. In these experiments, no joy was elicited during any English
speaking training session. Learner no. 2 spent more time being nervous and had a
higher frequency of nervousness than learner no. 1 (Figures 12 and 13). Moreover,
learner no. 4 spent more time being nervous and had a higher count of nervousness per
minute than learner no. 3.
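The 2.5-second display logic described earlier, in which one emotion label is shown to the teacher per five 0.5-second predictions by majority vote, can be sketched as follows; the function names are illustrative, not from the authors' implementation:

```python
# Hedged sketch of the display logic: the recogniser emits one emotion label
# every 0.5 s, and the label shown to the teacher every 2.5 s is the majority
# vote of the last five predictions. Names here are illustrative only.
from collections import Counter

def displayed_emotion(window):
    """Return the majority label of a five-prediction (2.5 s) window."""
    return Counter(window).most_common(1)[0][0]

def display_stream(predictions):
    """Group 0.5 s predictions into 2.5 s windows and vote on each."""
    usable = len(predictions) - len(predictions) % 5  # drop incomplete window
    return [displayed_emotion(predictions[i:i + 5]) for i in range(0, usable, 5)]

# A transient misclassification ("joyous") is smoothed out by the vote
stream = ["peaceful", "peaceful", "joyous", "peaceful", "peaceful",
          "nervous", "nervous", "peaceful", "nervous", "nervous"]
print(display_stream(stream))  # -> ['peaceful', 'nervous']
```

The vote both stabilises the display and filters isolated misclassifications, at the cost of a 2.5-second reporting delay.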

Figure 14: The emotion variations of learner no. 3 with time during the English conversation training

Figure 15: The emotion variations of learner no. 4 with time during the English conversation training

In Figures 14 and 15, the circled regions represent changes from nervousness to
peacefulness brought about by teacher feedback or guidance given in response to
individual learner emotions. For example, the following English conversation shows
how the teacher applied guidance or feedback to reduce the language-learning anxiety
of learner no. 3:

Teacher: When did you start to learn to dance?

Learner no. 3: Four years ago.

Teacher: In the fourth grade in elementary school?

Learner no. 3: No, four years old.

Teacher: When you were four years old you started to dance?

In this conversation, learner no. 3 became nervous when the teacher misunderstood
the meaning he was trying to convey; however, learner no. 3 became peaceful after the
teacher offered appropriate guidance. The following English conversation indicates how
the teacher offered appropriate guidance or feedback to reduce the language-learning
anxiety experienced by learner no. 4:


Teacher: You didn't study enough or you didn't understand the questions well?

Learner no. 4: I didn't understand the questions well.

Teacher: Don't worry about that. You can study hard next time.

In this conversation, the teacher gave choices and support to reduce learner nervous-
ness when the learner did not understand the meaning conveyed by the teacher.
Another example is as follows:

Teacher: Which department? What does he major in?

Teacher: Do you know?

Teacher: You can speak Chinese.

Learner no. 4: (in Chinese).

In this English conversation, the teacher allowed the student to speak Chinese (his first
language) when the learner became nervous after not understanding some words in the
conversation.

Tables 6 and 7 show the pretest and posttest results for the FLCAS and ATIAQ filled
out by the four learners. The test scores on the FLCAS and ATIAQ indicate the degree
of nervousness; thus, a high test score represents a high degree of nervousness.

Table 6: The pretest and posttest FLCAS scores of the four learners

             Learner no. 1       Learner no. 2       Learner no. 3       Learner no. 4
             Pretest  Posttest   Pretest  Posttest   Pretest  Posttest   Pretest  Posttest
Sum          97       81         91       89         89       80         99       95
Difference       -16                 -2                  -9                  -4

FLCAS, Foreign Language Classroom Anxiety Scale.

Table 7: The pretest and posttest ATIAQ scores of the four learners

             Learner no. 1       Learner no. 2       Learner no. 3       Learner no. 4
             Pretest  Posttest   Pretest  Posttest   Pretest  Posttest   Pretest  Posttest
Sum          39       31         47       41         34       27         46       46
Difference       -8                  -6                  -7                   0

ATIAQ, Anxiety Toward In-Class Activities Questionnaire.


Because each learner conversed in English with the teacher for about 10 minutes in
the experiment, the pretest result can be viewed as self-perceived language-learning
anxiety, whereas the posttest result is closer to the language-learning anxiety
actually experienced. The FLCAS and ATIAQ have 25 and 15 items respectively, and
responses to each item are on a 5-point Likert scale. Notably, scores for negatively
worded items must be reversed when computing the total score. Compared with the
FLCAS pretest, all learners had lower scores on the FLCAS posttest. Restated, the
anxiety of every learner when speaking English decreased, with or without the support
of the embedded human emotion recognition system. In particular, learners nos. 1
and 3, who have good English speaking skills, had lower FLCAS scores than learners
nos. 2 and 4, who have poor English speaking skills. Moreover, no difference existed
between the pretest and posttest ATIAQ scores of learner no. 4, whereas the posttest
ATIAQ scores of learners nos. 1, 2 and 3 were lower than their respective pretest
scores.
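The reverse-scoring rule for negatively worded Likert items can be sketched as follows; the item polarity assignment here is illustrative, not the actual FLCAS or ATIAQ scoring key:

```python
# Hedged sketch of Likert reverse scoring: on a 5-point scale, negatively worded
# items are reversed as (6 - response) before summing. The polarity set below is
# illustrative only, not the real FLCAS/ATIAQ key.
def total_score(responses, negative_items):
    """Sum 5-point Likert responses, reversing negatively worded items.

    responses: dict mapping item number -> response in 1..5
    negative_items: set of item numbers that are negatively worded
    """
    return sum((6 - r) if item in negative_items else r
               for item, r in responses.items())

responses = {1: 4, 2: 2, 3: 5, 4: 1}
print(total_score(responses, negative_items={2, 4}))  # item 2 -> 4, item 4 -> 5; total 18
```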

The teacher noted that, based on her experience in traditional classrooms, she only
occasionally attended to the emotions of students when guiding them, and that she had
tended to ignore student emotions in the web-based environment because it is not
face-to-face. She also acknowledged that the embedded human emotion recognition
system could aid her comprehension of student emotions, allowing her to provide
appropriate learning feedback or guidance in a web-based one-to-one synchronous
learning environment.

4. Discussion
4.1. Difficulties constructing a human emotion recognition model based on
physiological signals
Compared with the extracted human emotion features for nervousness and joy, this
study found that the features for peace extracted from the 10 volunteers were the
most uniform. Conversely, the features for nervousness extracted from the 10
volunteers had relatively poor uniformity. As the volunteers played the PC game used
to elicit nervousness, their fingers often tensed up, and the tensed fingers easily
produced indistinguishable nervousness features. Additionally, extracting the
features for joy was the most difficult of the three emotions. To extract the joy
features correctly, the 10 volunteers were instructed to click a button in the
pulse-sampling interface whenever they experienced joy while watching the online
film. However, the click time was sometimes asynchronous with the moment joy actually
occurred, making the joy features difficult to retrieve. Additionally, joy is
frequently accompanied by laughter, and laughing easily generates abnormal human
pulse signals because of finger movement. Moreover, volunteers sometimes forgot to
click the button while experiencing joy; thus, many useful joy features were lost.
These complications adversely affected the accuracy of extracting the features
associated with joy.


4.2. Language speaking anxiety derived from personality traits and different issues
associated with English conversation
This study found that the emotions of the four participants during learning varied
with the subject of the English conversation, such as talking about family,
examinations, hobbies and the future. All participants felt peaceful when talking
about family and felt nervous when talking about examinations, hobbies and the
future. Generally, topics related to family and hobbies are familiar to most English
learners; thus, the emotional responses to these two topics were expected to be mild.
However, talking about hobbies still caused nervousness in the experiment. We
inferred that the four participants worried about the teacher's comments on whether
they had expressed themselves clearly. Additionally, talking about examinations and
the future requires deliberate thought, which easily leads to nervousness. Finally,
this study also found that personality traits contribute to language-learning
anxiety. During an interview, the English teacher stated that learner no. 4 easily
becomes nervous in daily life because of his personality; the experimental results
(Table 5) confirm this viewpoint. However, this study did not consider participant
personality traits when applying the human emotion recognition system to reduce
language-learning anxiety. To further increase language-learning performance,
individual personality traits should be considered in addition to the emotions of
individual learners during learning processes.

5. Conclusions and future work
5.1. Conclusions
This study integrates sensor, signal processing, wireless communication, system-on-
chip and machine-learning technologies to construct an embedded human emotion
recognition system that supports communication tailored to emotions in web-based
learning environments. The forecasting accuracy rate of the proposed human emotion
recognition system evaluated by cross-validation is 79.7136% based on the proposed
emotion feature extraction scheme. The forecasting accuracy rate is sufficient to
support teachers in immediately understanding the emotions of individual learners
during learning processes. This study also applied the proposed embedded
human emotion recognition system to support teachers in reducing the anxiety of
English learners while speaking in a web-based one-to-one synchronous learning envi-
ronment. Experimental results demonstrate that the embedded human emotion recog-
nition system provides benefits such as reducing language-learning anxiety and
increasing the effectiveness of English conversation training.

5.2. Future work
Although the proposed embedded human emotion recognition system helps convey
individual learner emotions to teachers, who can then reduce language-learning
anxiety, several issues warrant further investigation. First, developing non-invasive
physiological signal detection technologies, such as wearable sensing garments and
optical sensors, to support context-aware learning is necessary because wearing a
pulse sensor on a finger to determine student emotions during learning is
inconvenient. Moreover, Sherebrin and Sherebrin (1990) claimed that the shape of the
human pulse varies with age. Therefore, investigating how recognised emotions are
affected by variables such as age, gender, personality and health status is planned
as future work. Additionally, the only physiological signal used in this study is the
pulse; other physiological signals associated with human emotion, such as blood
pressure, ECG and EEG, should be considered in future research, as they may further
improve emotion recognition. Finally, some studies (Parameswaran, 2002; Winston,
O'Doherty & Dolan, 2003) indicated that multiple emotions may co-occur in the same
person; for example, when a person is angry about one thing and sad about another,
the face may show one emotion while the voice shows another. This study focused on
identifying a single emotion during learning processes, but the phenomenon of
multiple emotions is valuable for further investigation.

Acknowledgement
The authors would like to thank the National Science Council of the Republic of China,
Taiwan for financially supporting this research under Contract No. NSC97-2511-S-
004-002-MY3.

References
Barlow, D. H. (2002). Anxiety and its disorders: the nature and treatment of anxiety and panic (2nd ed.). New York: The Guilford Press.
Chang, C. C. & Lin, C. J. (2001). LIBSVM: a library for support vector machines. Retrieved August 19, 2008, from http://www.csie.ntu.edu.tw/~cjlin/libsvm
Coles, G. (1998). Reading lessons: the debate over literacy. New York: Hill & Wang.
De Vicente, A. & Pain, H. (2002). Informing the detection of the student's motivational state: an empirical study. International Conference on Intelligent Tutoring Systems, 933–943.
Ekman, P. & Davidson, R. J. (1994). The nature of emotion. Oxford: Oxford University Press.
Emmanuel, B., Pierre, C. & Claude, F. (2007). Towards advanced learner modeling: discussions on quasi real-time adaptation with physiological data. The Seventh IEEE International Conference on Advanced Learning Technologies (ICALT 2007), 809–813.
Frigo, M. & Johnson, S. G. (2005). The design and implementation of FFTW3. Proceedings of the IEEE, 93, 2, 216–231. Invited paper, Special Issue on Program Generation, Optimization, and Platform Adaptation.
Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.
Horwitz, E. K. (1986). Preliminary evidence for the reliability and validity of a foreign language anxiety scale. TESOL Quarterly, 20, 3, 559–562.
Horwitz, E. K. (2001). Language anxiety and achievement. Annual Review of Applied Linguistics, 21, 112–126.
Horwitz, E. K., Horwitz, M. B. & Cope, J. A. (1986). Foreign language classroom anxiety. The Modern Language Journal, 70, 2, 125–132.
Hsieh, C. W., Shen, C. T., Chao, Y. P. & Chen, J. H. (2007). Study of human affective response on multimedia contents. World Congress on Medical Physics and Biomedical Engineering, 14, 2, 787–791.
Izard, C. E. (1984). Emotion–cognition relationships and human development. In C. E. Izard, J. Kagan & R. B. Zajonc (Eds), Emotions, cognition and behavior (pp. 17–37). New York: Cambridge University Press.
Kim, J. & Gorman, J. (2005). The psychobiology of anxiety. Clinical Neuroscience Research, 4, 335–347.
Kopecek, I. (2000). Emotions and prosody in dialogues: an algebraic approach based on user modelling. Proceedings of the ISCA Workshop on Speech and Emotions, 184–189.
Kort, B., Reilly, R. & Picard, R. W. (2001). An affective model of interplay between emotions and learning: reengineering educational pedagogy–building a learning companion. IEEE International Conference on Advanced Learning Technologies, 43–46.
LeDoux, J. (1994). Emotion, memory, and the brain. Scientific American, 270, 50–57.
Lekkas, Z., Tsianos, N., Germanakos, P., Mourlas, C. & Samaras, G. (2008). The role of emotions in the design of personalized educational systems. The Eighth IEEE International Conference on Advanced Learning Technologies, 886–890.
Lin, B. & Hsieh, C. (2001). Web-based teaching and learner control: a research review. Computers & Education, 37, 3, 377–386.
Mohamed, B. A. & Mahmoud, N. (2007). EMASPEL (emotional multi-agents system for peer to peer e-learning). International Conference on Information Communication Technologies and Accessibility, 201–206.
Noble, W. S. (formerly Grundy, W. N.) (2003). Support vector machine applications in computational biology. A survey article of the Department of Genome Sciences, University of Washington.
Nosu, K. & Kurokawa, T. (2006). A multi-modal emotion-diagnosis system to support e-learning. Proceedings of the First International Conference on Innovative Computing, Information and Control (ICICIC '06), 2, 274–278.
Ozcan, C. (2008). Anxiety in learning a language, part I. Retrieved August 17, 2008, from http://www.eslteachersboard.com/cgi-bin/articles/index.pl?page=2;read=2611
Pajares, F. (1996). Self-efficacy beliefs and mathematical problem-solving of gifted students. Contemporary Educational Psychology, 21, 325–344.
Parameswaran, N. (2002). Emotions in intelligent agents. Proceedings of the Fifteenth International Florida Artificial Intelligence Research Society Conference, 82–86.
Piaget, J. (1989). Les relations entre l'intelligence et l'affectivité dans le développement de l'enfant. In B. Rimé & K. Scherer (Eds), Les émotions. Textes de base en psychologie (pp. 75–95). Paris: Delachaux et Niestlé.
Picard, R. W., Healey, J. & Vyzas, E. (2001). Toward machine emotional intelligence: analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 10, 1175–1191.
Reilly, R. (2004). The science behind the art of teaching science: emotional state and learning. In R. Ferdig et al. (Eds), Proceedings of Society for Information Technology and Teacher Education International Conference (pp. 3021–3026). Chesapeake, VA: AACE.
Rosenberg, E. L. (1998). Levels of analysis and the organization of affect. Review of General Psychology, 2, 247–270.
Rouhani, A. (2008). An investigation into emotional intelligence, foreign language anxiety and empathy through a cognitive-affective course in an EFL context. Linguistik Online, 34, 41–57.
Shaikh, M. A. M., Hua, W., Ishizuka, M. & Mostafa, A. M. (2005). Modeling an affectionate virtual teacher for e-learning underpinning 3-dimensional emotion model. Proceedings of the 8th International Conference on Computer and Information Technology (ICCIT 2005), 280–285.
Sherebrin, M. H. & Sherebrin, R. Z. (1990). Frequency analysis of the peripheral pulse wave detected in the finger with a photoplethysmograph. IEEE Transactions on Biomedical Engineering, 37, 3, 313–317.
Wehrle, T. & Kaiser, S. (2000). Emotion and facial expression. In A. Paiva (Ed.), Affect in interactions: towards a new generation of interfaces (pp. 49–64). Heidelberg: Springer.
Winston, J. S., O'Doherty, J. & Dolan, R. J. (2003). Common and distinct neural responses during direct and incidental processing of multiple facial emotions. NeuroImage, 20, 1, 84–97.
Woodrow, L. (2006). Anxiety and speaking English as a second language. RELC Journal, 37, 3, 308–328.
Young, D. J. (1990). An investigation of students' perspectives on anxiety and speaking. Foreign Language Annals, 23, 6, 539–553.
