
System 42 (2014) 270–287


Measuring syntactic complexity in L2 pragmatic production: Investigating relationships among pragmatics, grammar, and proficiency
Soo Jung Youn
English Department, Northern Arizona University, 700 S. Humphreys Cdr, PO Box 6032, Flagstaff, AZ 86011, USA

Article history: Received 12 November 2012; Received in revised form 10 December 2013; Accepted 20 December 2013

Keywords: Syntactic complexity; L2 pragmatics; Proficiency; Task-based language assessment

Abstract

The study examines relationships among pragmatics, grammar, and proficiency by comparing the syntactic complexity of ESL learners' written pragmatic production across two independent criterion measures: proficiency and pragmatic performance. Participants were 40 ESL learners who completed pragmatic assessment tasks. Pragmatic competence was assessed by three trained raters using task-dependent analytical rating criteria. Syntactic complexity was assessed using three measures: (a) global complexity from mean length of T-unit, (b) phrasal-level complexity from mean length of clause, and (c) subordination complexity from mean number of clauses per T-unit. The results showed that learners did not possess concomitant written pragmatic competence according to their proficiency levels. The global complexity measure was general enough to differentiate levels in both proficiency and pragmatic performance, compared to phrasal-level and subordination complexity. Yet, the magnitudes of the three complexity measures' differences between pragmatic performance levels were more noticeable, compared to those between proficiency levels. Except for phrasal-level complexity, learners' pragmatic performances were more highly correlated with the syntactic complexity of their pragmatic production than with their proficiency levels. Pragmatically advanced learners produced longer utterances, more complex subclausal structures at the phrasal level, and more subordination, suggesting the crucial roles played by syntactically complex structures in expressing pragmatic functions.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Second language (L2) pragmatic development has increasingly received attention from diverse theoretical and method-
ological perspectives with the investigation of the relationships among pragmatics, grammar, and L2 proficiency being a
major area of interest (e.g., Bardovi-Harlig, 2000, 2001; Takahashi, 1996, 2001, 2005). Two seemingly opposing hypotheses of
the developmental trajectories of grammar and pragmatics have been discussed (Kasper & Rose, 2002): (1) pragmatics
precedes grammar and (2) grammar precedes pragmatics. The proposition of the primacy of pragmatics preceding grammar
is supported by the universal pragmatics principle, which posits that adult L2 learners with established first language (L1)
pragmatic knowledge bring their discourse, pragmatic, and sociolinguistic competence to bear when learning L2 pragmatics.
One example of the primacy of pragmatics is ‘Wes’, a Japanese L1 adult described in Schmidt’s (1983) study, who showed

E-mail address: Soo-Jung.Youn@nau.edu.

0346-251X/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.system.2013.12.008

considerable pragmatic and discourse competence despite his limited knowledge of English grammar. On the other hand, a
large body of pragmatics literature supports the primacy of grammar over pragmatics, holding that learners with established grammar still fall short in utilizing their grammatical ability to express appropriate illocutionary force. One example of
this is syntactic complexity in Japanese L2 adult learners’ request production. Takahashi (1996, 2001) showed that Japanese
learners of English as a foreign language (EFL) preferred mono-clausal request expressions (e.g., Please or Would/Will you) to
bi-clausal expressions (e.g., I was wondering if you could) even in highly imposing request situations, regardless of proficiency.
This finding indicates that L2 learners with established grammar and advanced proficiency still fall short in using various syntactic structures in pragmatically appropriate ways, such as the use of past tenses and the progressive aspect with conditional
clauses as mitigation devices.
The two contradicting hypotheses illustrate that grammar and pragmatics do not necessarily develop in a linear manner.
L2 learners with advanced proficiency may not necessarily demonstrate advanced pragmatic competence or a high level of
syntactic complexity in their pragmatic production. The relationships among pragmatics, language proficiency, and grammar
(especially the ability to employ syntactically complex structures) remain an open question. The present study examines
these relationships by measuring levels of syntactic complexity in the written pragmatic production of learners of English as a
Second Language (ESL) with regard to two independent criterion measures: proficiency and pragmatic performance. In order
to elicit learners’ written pragmatic production, four pragmatic assessment tasks and the corresponding rating criteria were
developed drawing on politeness theory (Brown & Levinson, 1987). The notion of face, which consists of positive and negative face, is central to politeness theory. Positive face refers to the desire of the individual to be approved of, while negative face
refers to the desire of the individual not to be imposed on. According to Brown and Levinson (1987), politeness is the
manifestation of respect for an individual’s face. Out of the various strategies employed to maintain politeness, grammar is
particularly pertinent in relation to the focus of this study. Showing positive face includes the use of indirect forms in ut-
terances, such as modal verbs could or would. Saving negative face includes softening or hedging devices such as I was
thinking, we could perhaps, and if that's okay. These devices allow the speaker to avoid imposing on the listener when, for example, making suggestions. In addition to the notion of face, Brown and Levinson identified three sociological variables
which influence politeness strategies: (a) the power difference between interlocutors, (b) the social distance between in-
terlocutors, and (c) the absolute ranking of the face-threatening act. In the next sections, the literature on the relationships
among pragmatics, grammar, and proficiency is reviewed.

1.1. Interlanguage pragmatics and grammar

Building on Kasper and Schmidt’s (1996) call for a stronger acquisitional focus within interlanguage pragmatics research,
Bardovi-Harlig (1999) proposed to broaden the field of inquiry through investigating an interlanguage system of pragmatics.
One of her proposals, which was grounded in consistent previous research findings that “high levels of grammatical
competence do not guarantee concomitant high levels of pragmatic competence” (Bardovi-Harlig, 1999, p. 686), included
examining the relationship between emergent pragmatic competence and interlanguage grammar. Among the wide range of
grammatical aspects that are associated with interlanguage pragmatics, the tense–mood–aspect system, the degree of
grammatical complexity, and the use of formulaic expressions are closely related to pragmalinguistics (i.e., linguistic re-
sources and strategies for conveying communicative action) (Leech, 1983; Thomas, 1983). Previous research findings well
illustrate the importance of these grammatical choices in pragmatics. For instance, L2 learners’ failure to express pragmatic
meaning appropriately using the tense–mood–aspect system is discussed in Bardovi-Harlig and Hartford (1993). The study reported non-native English speakers' use of the modal expression will (as in 'I will take syntax'), which indicates too strong a commitment and differs from what a native English speaker would say in the same situation: 'I was thinking of taking syntax' (using past tense and progressive aspect as a mitigating device). Bardovi-Harlig and Hartford argued that these examples show that non-native English speakers lacked sociopragmatic knowledge (i.e., social perceptions underlying participants' communicative action) (Leech, 1983), and as a result failed to realize that their
production was inappropriate. At the same time, the findings from both Takahashi (1996, 2001) and Bardovi-Harlig and
Hartford (1993) may also illustrate learners’ lack of control over the pragmatic meaning of L2 grammar, despite their well-
established grammatical knowledge of forms.
Several studies also explored how learners’ pragmatic awareness is related to grammatical awareness. Bardovi-Harlig and
Dörnyei (1998) compared how ESL learners and Hungarian EFL learners rated grammatical and pragmatic errors using
carefully designed contextualized pragmatic and grammatical judgment tasks presented in a video format. The results
showed that error recognition for both grammar and pragmatics significantly differed across ESL and EFL contexts. The ESL
learners scored pragmatic errors more severely while the EFL learners were more severe on grammatical errors. In a repli-
cation of Bardovi-Harlig and Dörnyei’s study, Niezgoda and Röver (2001) and Schauer (2006) explored the same issue in the
contexts of Czech EFL and German EFL, respectively, but mixed results were reported. Schauer's results supported Bardovi-Harlig and Dörnyei's finding, namely that ESL learners were more sensitive to pragmatic violations than grammatical ones
while EFL learners were more sensitive to grammatical errors. However, Niezgoda and Röver found that EFL and ESL envi-
ronments had little effect on learners’ pragmatic and grammatical awareness since the Czech EFL group recognized more
pragmatic errors and perceived them to be more serious than the ESL group did. These mixed results might be due to differences in the proficiency levels of the participants. The Czech EFL group in Niezgoda and Röver's study was highly motivated and had a high aptitude for learning English compared with Bardovi-Harlig and Dörnyei's Hungarian EFL group. This potentially points to an L2 proficiency effect on learners' pragmatic competence.
It should be noted that Bardovi-Harlig and Dörnyei’s study and the subsequent replication studies examined grammatical
and pragmatic awareness, rather than pragmatic performance; learners’ production data were not included in these studies.
Analyses of learners’ production make it possible to examine how learners use various grammatical resources to express
pragmatic meaning, and can provide insight into the complex relationship between pragmatics and grammar (Kasper, 2009).
Such research has been rare in the field, however, except for a few notable studies (e.g., Fulcher & Márquez Reiter, 2003;
Taguchi, 2007a). Thus, the current study focuses on the analysis of learners’ pragmatic production to shed new light on
understanding the relationship between pragmatics and grammar.

1.2. Interlanguage pragmatics and L2 proficiency

The theoretical models on L2 competence conceptualize pragmatic competence as a part of language proficiency (e.g.,
Bachman & Palmer, 1996). However, the relationship between L2 proficiency and pragmatics has been unclear in previous
studies, with mixed results being reported. Positive effects of proficiency on pragmatics have been reported (e.g., Kasper &
Roever, 2005; Roever, 2006; Taguchi, 2007a, 2011). For example, Roever (2006) reported that proficiency was a strong fac-
tor for the production of speech acts, which might be due to the study’s focus on pragmalinguistic knowledge (i.e., the
language interface of pragmatics). Taguchi (2007a) also found a significant proficiency effect on the processing dimension of
pragmatic competence. Japanese EFL learners' pragmatic production of requests and refusals was analyzed for overall appropriateness, planning time, and speech rate. A positive proficiency effect was found for overall appropriateness and speech rate, but not for planning time.
However, other studies showed a relatively weak L2 proficiency effect. For instance, Matsumura (2003) found that Japanese EFL learners' pragmatic competence had a stronger cause–effect relationship with L2 exposure than with proficiency.
Compared to Roever’s (2006) finding on the strong proficiency effect on the pragmalinguistic dimension of pragmatics,
Matsumura’s focus on the sociopragmatic dimension of pragmatics (i.e., social perceptions underlying participants’
communicative action) potentially explains the decreased proficiency effect. Takahashi (2005) also reported a low correlation
between pragmalinguistic awareness and proficiency. Instead, motivation subscales, such as intrinsic motivation and personal
relevance toward learning goals, were found to be more correlated with Japanese EFL learners’ L2 pragmalinguistic aware-
ness. These findings support the view that L2 proficiency is not necessarily a primary factor in predicting learners’ L2
pragmatic competence.
The studies discussed above examined various dimensions of pragmatics using a wide range of testing instruments in
order to investigate the relationship between pragmatics and proficiency. Examples include the measurement of pragmatic
competence using a multiple-choice questionnaire (Matsumura, 2003), the investigation of pragmalinguistic knowledge
using discourse completion tasks (Roever, 2006), and the evaluation of pragmalinguistic awareness of request forms using an
immediate retrospective questionnaire (Takahashi, 2005). Despite the attempt to examine various dimensions of pragmatics,
an explicit focus on the relationship between pragmatic performance and proficiency has been rare. For this reason, the findings of previous research have limited application to another aspect of pragmatics, namely learners' pragmatic performance in real-life situations. Although researchers have increasingly attempted to assess pragmatic performance, such as
Hudson, Detmer, and Brown (1992, 1995), Taguchi (2007a), and Grabowski (2009), more research on an explicit relationship
between pragmatic performance and proficiency is needed.
In order to examine the theoretically and empirically unresolved relationships among pragmatics, grammar, and profi-
ciency, the research gaps can be addressed in several ways. First, developing authentic pragmatic tasks based on learners’
needs, along with reliable and valid rating criteria, will enable us to measure pragmatic performance and to investigate its
explicit relationship with L2 proficiency. Secondly, a systematic investigation of syntactic complexity of L2 learners’ pragmatic
production elicited from the authentic pragmatic tasks will help us to examine how pragmatically competent L2 learners
utilize grammar, particularly syntactically complex linguistic resources. The important roles of syntactically complex
grammar in pragmatics have been examined previously (e.g., Bardovi-Harlig, 1999; Bardovi-Harlig & Hartford, 1993; Taka-
hashi, 1996, 2001). However, little has been reported about the types and degrees of syntactic complexity in differing levels of
pragmatic performance and how they vary from those found in different proficiency levels. Thus, the present study proposes
employing three distinct syntactic complexity measures that tap global, phrasal-level, and subordination complexity, which
are discussed in the next section.

1.3. Syntactic complexity measures

Complexity, accuracy, and fluency (e.g., Skehan, 1998) have been fundamental concepts in examining learner language
in the field of Second Language Acquisition (SLA). Among these, complexity has been considered to be the most
complicated and ambiguous construct (Housen & Kuiken, 2009; Norris & Ortega, 2009). Two types of complexity have
been identified. One is cognitive complexity caused by task types and the other is syntactic complexity. The current study
focuses on the latter. Among varying definitions of syntactic complexity, Ortega (2003, p. 492) has defined it as “the range
of forms that surface in language production and the degree of sophistication of such forms”. Three types of sub-
measures of syntactic complexity have been commonly employed: overall complexity, coordination, and subordination. These sub-constructs of syntactic complexity develop differently over proficiency levels (Norris & Ortega,
2009). For example, coordination occurs earlier than subordination, and clause-internal complexification happens in a
later stage of proficiency development. The syntactic complexity of L2 learners’ language production has been widely
measured not only in the field of SLA but also in L2 writing studies to examine learners’ syntactic repertoire, grammatical
development, writing ability, and variations across different writing tasks (e.g., Biber, Gray, & Poonpon, 2011; Larsen-
Freeman, 2006; Ortega, 2003; Wolfe-Quintero, Inagaki, & Kim, 1998). Despite its popularity, the measuring of syntactic
complexity still presents challenges. For instance, as discussed by Norris and Ortega (2009), various complexity measures
entail either distinct or redundant sources of complexification, depending on the way they are calculated. Thus, in an attempt to address this challenge, three distinct measures that tap syntactic complexity multidimensionally without being redundant were selected in this study.

2. Research questions

The present study seeks to examine the relationships among pragmatics, particular aspects of grammar (syntactic
complexity), and proficiency by investigating three syntactic complexity measures of ESL learners’ written pragmatic pro-
duction with regard to two independent criterion measures: proficiency and pragmatic performance. In previous studies on
the relationship between proficiency and syntactic complexity, it has been implicitly assumed that syntactic complexity is
positively related to proficiency; however, the levels of syntactic complexity differ across proficiency levels depending on sub-
constructs in syntactic complexity (Norris & Ortega, 2009). Thus, the sub-constructs in syntactic complexity might differ
across different pragmatic performance levels. If this is true, measuring different types of syntactic complexity of learners’
pragmatic production will enable us to understand how learners with varying degrees of pragmatic competence utilize
syntactic structures in expressing pragmatic meaning. Additionally, a relationship between pragmatic performance and
proficiency will be explicitly and empirically addressed. For this reason, the assessment of ESL learners’ pragmatic perfor-
mance in real-life situations, independent of learners’ proficiency, is also an integral part of the present study. Two research
questions guided this study:

1. How do three measures of syntactic complexity of ESL learners’ written pragmatic production vary across different levels
of L2 proficiency and pragmatic performance?
2. What are the relationships among measures of learners’ proficiency levels, pragmatic performances, and syntactic
complexity?

3. Method

3.1. Participants

The study participants were forty ESL students enrolled in a four-year American university. The language backgrounds of the participants included Japanese (n = 14, 35%), Korean (n = 11, 27.5%), Chinese (n = 7, 17.5%), and others (Indonesian, Marathi, Marshallese, Spanish, Tamil, Thai, Turkish). Given that the target learner population in this study was college-level ESL learners, a mix of different L1s was unavoidable. The examinees were categorized into three levels of language proficiency according to their TOEFL internet-based test (iBT®) scores or, for those without TOEFL iBT® scores, the classes they were taking: (a) low-intermediate proficiency learners studying in a university preparatory ESL program (n = 9), with TOEFL iBT® scores ranging from 33 to 72; (b) intermediate and high-intermediate proficiency international undergraduate or graduate students enrolled in an English for Academic Purposes (EAP) program to fulfill university English language requirements (n = 19), with TOEFL iBT® scores ranging from 73 to 100; and (c) advanced proficiency L2 learners of English studying at the American university who either scored above 100 on the TOEFL iBT® test, took a 100-level expository writing class, or had exited from the required university EAP program (n = 12). Additionally, the examinees were regrouped into three pragmatic performance groups according to their performances on the pragmatic assessment tasks, independent of their proficiency levels. No particular L1 was dominant in any group.

3.2. Pragmatic assessment tasks and task-dependent rating criteria

Originally, seven pragmatic assessment tasks, composed of four written and three spoken tasks, were developed based on
the pragmatic learning needs of 102 ESL students in an EAP context (Youn, 2010), following a task-based assessment
framework (Long & Norris, 2000; Norris, 2009). However, the present study only included the four written tasks as presented
in Table 1. Each task represented an authentic situation that requires pragmatic competence in an EAP setting, such as writing
a recommendation letter request e-mail to a professor and giving suggestions on classmates’ class work. In addition to instructions
and task-specific realia for each task (see Appendix A), task-dependent analytical rating criteria reflecting both socio-
pragmatic and pragmalinguistic features of each task were also developed based on qualitative analyses of examinees’
performance data, as well as input from domain experts such as professors and employers (see Appendix B). Each criterion was associated with one of three levels, from 1 = 'inadequate' through 2 = 'able' to 3 = 'good'. For example, Task 1 writing

Table 1
Written pragmatic assessment tasks.

Task Description
1 Write a recommendation letter request e-mail to a professor
2 Write an e-mail to a potential employer to send your application packet
3 Write an e-mail to refuse a professor's request to help with your classmate's class project
4 Write constructive comments on a cover letter written by a classmate

a recommendation letter request e-mail to a professor was scored based on a detailed description of corresponding quality for
three levels with four rating criteria: (a) tone of the e-mail, (b) ability to deliver a clear message and contextual knowledge, (c)
appropriate use of formulaic linguistic expressions, and (d) appropriate e-mail format. The rating criteria measured the
diverse dimensions of written pragmatic performance of certain writing genres. For example, the third criterion, appropriate
use of formulaic linguistic expressions, measured learners’ pragmalinguistic knowledge. The last criterion, appropriate e-mail
format, measured aspects such as a proper e-mail subject line and appropriate terms of address. The data were collected
in an individual session with each participant, which took approximately one hour. Participants were allowed to either type
their answers on a computer or handwrite their answers on paper. The researcher retyped the handwritten answers to avoid
any effect of handwriting quality on rating.
Three trained raters scored the learners’ pragmatic performance using the task-dependent analytical rating criteria. One
rater was an English native speaker and the two others were advanced L2 users of English who all had at least two years of
ESL/EFL teaching experience and an MA degree in ESL. They were blind to the identity or background of the participants as
well as to the goals of the study. The raters received three consecutive training sessions on using rating criteria consistently,
each taking about one hour. All ratings were completed within a period of two weeks.

3.3. Data analysis

Syntactic complexity was examined using the CLAN (Computerized Language Analysis) computer program
(MacWhinney, 2000). This program made it possible to perform various automatic analyses of coded data, including fre-
quency counts, word searches, co-occurrence analyses, mean length of T-unit, and morphosyntactic analysis. Participants’
written pragmatic production data were converted into CHAT (Codes for the Human Analysis of Transcripts) format to
facilitate various searches and analyses using the CLAN program. Within the CHAT transcript file for each participant’s
response, T-units, independent, and dependent clauses were identified based on guidelines developed by Ortega, Iwashita,
Rabie, and Norris (1999). An additional coding guideline was also developed for the present study. For example, when coding the task of writing a request e-mail to a professor, the salutation line (e.g., Dear Professor) was not coded as an independent T-unit, since this formulaic part of the e-mail genre risked significantly skewing the overall mean length of T-unit.
In order to estimate the intra-rater reliability of the coding, the researcher coded the data three times, one or two months
apart. Intra-rater reliability measures for all units were calculated, and the average reliability for all codings was 0.97. Any
discrepancies in coding, mostly due to simple coding mistakes, were identified and corrected. Following Norris and Ortega's (2009) call for a critical understanding of the multidimensionality of complexity measures, three complexity measures that tap distinct sources of syntactic complexification were computed for each participant's performance on each individual task: global complexity from mean length of T-unit (MLTU), phrasal-level complexity from mean length of clause (MLC), and subordination complexity from mean number of clauses per T-unit (CTU).
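As a concrete illustration, the three measures reduce to simple ratios over the counts identified in the coded transcripts. The sketch below is not the study's actual CLAN procedure; the function name and input structure are hypothetical, assuming per-response counts of words, T-units, and clauses have already been obtained.

```python
# Hypothetical sketch: computing the three syntactic complexity measures
# from per-response counts (words, T-units, and total clauses, where total
# clauses = independent + dependent clauses).

def complexity_measures(n_words: int, n_tunits: int, n_clauses: int) -> dict:
    """Return global (MLTU), phrasal-level (MLC), and subordination (CTU) measures."""
    return {
        "MLTU": n_words / n_tunits,   # mean length of T-unit (words per T-unit)
        "MLC": n_words / n_clauses,   # mean length of clause (words per clause)
        "CTU": n_clauses / n_tunits,  # mean number of clauses per T-unit
    }

# Example: a 60-word response coded as 5 T-units containing 8 clauses in total.
print(complexity_measures(60, 5, 8))  # {'MLTU': 12.0, 'MLC': 7.5, 'CTU': 1.6}
```

Note that MLTU factors as MLC × CTU, which is why the three measures tap distinct sources of complexification: a long T-unit can arise from long clauses (phrasal elaboration) or from many clauses (subordination).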
The Multi-faceted Rasch Measurement (MFRM) approach (Linacre, 1989) using the computer program FACETS, version
3.61.0 (Linacre, 2006) was also employed (a) to examine rater behaviors and task characteristics and (b) to provide a basis for
creating groups of differing pragmatic performance levels. The MFRM approach allowed the analyses of examinees’ abilities in
relation to the raters' severities and tasks' difficulties, which provides more precise estimates of learners' pragmatic performance compared to practices in classical test theory (McNamara, 1996).
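For reference, the many-facet Rasch model underlying a FACETS analysis of this design can be written in its standard form (a generic formulation after Linacre, 1989; the notation below is not taken from the article):

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where $P_{nijk}$ is the probability of examinee $n$ receiving rating $k$ from rater $j$ on task $i$, $B_n$ is the examinee's ability, $D_i$ the task's difficulty, $C_j$ the rater's severity, and $F_k$ the difficulty of the step from category $k-1$ to $k$, all expressed in logits.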

4. Results

The results section starts with descriptive statistics for the four written pragmatic tasks, which are needed for inferential
statistics (Chapelle & Duff, 2003). Results from the MFRM analysis using the FACETS program are then provided, serving as the basis for subsequent analyses. In particular, detailed measurement reports from the FACETS program allow us to examine the examinees' pragmatic abilities, raters' performances, and tasks' difficulties, which provide evidence for the validity and reliability of measuring pragmatic performance in the present study. Finally, results from one-way ANOVA analyses and correlation analyses using SPSS version 20 are presented to answer the first and second research questions
respectively.

4.1. Descriptive statistics for the four pragmatic assessment tasks

Table 2 shows the descriptive statistics for pragmatic performance scores across tasks and proficiency levels. With a
maximum score of ‘3’ for all tasks, high proficiency learners received the highest mean scores on all tasks while the opposite
was true for low proficiency learners. Low proficiency learners showed greater variability in score across all tasks. However,
large variation was found across all proficiency groups particularly for Task 4 giving constructive comments on a cover letter
written by a classmate, which was the most difficult task.

4.2. FACETS summary and measurement reports

A FACETS summary shows the relative status of four facets (examinee, rater, task, and rating category) used in the study in
a single set of relationships, which is shown in Fig. 1. Each column represents a different facet including the relative abilities of
the examinees, the relative harshness of the raters, the relative difficulties of the assessment tasks, and rating category. The
person reliability statistic was 0.93, which means that the pragmatic tasks quite reliably divided the examinees into different
levels of pragmatic ability. Among the four criteria of the rating category, the third criterion used for all tasks, appropriate use
of formulaic linguistic expressions, was the most difficult, indicating learners' difficulty in utilizing linguistic expressions for pragmatic purposes.
Detailed measurement reports for each task’s difficulty from the FACETS analysis are shown in Table 3. The FACETS analysis
employs a true interval scale, the logit scale, and it is expressed with logit values. By convention, tasks of above-average
difficulty are indicated with a positive sign while tasks of below-average difficulty are indicated with a negative sign in
logit values. All fit values of the four written pragmatic tasks were within the range of 0.75–1.3 (Bond & Fox, 2007), indicating
no tasks were misfitting. This also means that the four pragmatic tasks contributed to measuring one construct (i.e., pragmatic
competence), which ensures construct validity of the test instruments. The logit values in Table 3 indicate each written task’s
difficulty. Task 4, giving constructive comments on a cover letter written by a classmate, was identified as the most difficult task
with the highest logit value of 0.67. The examinees’ unfamiliarity with a cover letter as a written genre and the high level of
pragmatic competence required in giving constructive comments made this task potentially challenging. Task 3, writing an e-mail refusing a professor's request to help with a classmate's class project, was the easiest task with the lowest logit value of −0.94. The examinees seemed to manage refusing a professor's request easily, possibly due to familiarity with the situation in an EAP setting or the relatively simple syntactic structures needed for apologies (e.g., I'm sorry) compared to requests.
Detailed measurement reports of the three raters’ performance are presented in Table 4. Here, the logit values represent
relative rater severity. The raters varied in the degree of severity. The higher the logit value, the more severe the rater. Based
on the logit values, Rater 1 was the most severe and Rater 2 was the least severe. The reliability of the measurement for the
three raters was 0.87, indicating that the different degrees of severity among the three raters were relatively reliable. Finally,

Table 2
Descriptive statistics for four tasks across proficiency levels.

Proficiency N Mean SD Min Max Range


All tasks
High 12 2.48 0.20 2.17 2.77 0.60
Mid 19 2.18 0.15 1.88 2.42 0.54
Low 9 1.84 0.39 1.21 2.31 1.10

Task 1
High 12 2.43 0.27 1.83 2.83 1.00
Mid 19 2.05 0.43 1.42 2.92 1.50
Low 9 1.72 0.43 1.17 2.42 1.25

Task 2
High 12 2.56 0.20 2.08 2.83 0.75
Mid 19 2.24 0.23 1.92 2.67 0.75
Low 9 1.69 0.46 1.00 2.17 1.17

Task 3
High 12 2.65 0.24 2.08 2.92 0.83
Mid 19 2.49 0.23 2.08 2.92 0.84
Low 9 2.33 0.47 1.42 2.75 1.33

Task 4
High 12 2.29 0.49 1.25 2.91 1.67
Mid 19 1.92 0.34 1.42 2.58 1.16
Low 9 1.60 0.50 1.00 2.50 1.50

Note. Scores: 1 = inadequate, 2 = able, 3 = good.



Fig. 1. FACETS summary.

Table 3
Difficulty logit values of assessment tasks.

Task Difficulty (logits) Error Infit (mean square)

1 0.30 0.08 1.0
2 −0.03 0.08 0.8
3 −0.94 0.09 1.0
4 0.67 0.08 1.1

Notes. Person separation reliability = 0.93; item separation reliability = 0.98; separation index = 7.51; fixed (all same) chi-square = 212.0; significance = 0.00.

Table 4
Severity logit values of three raters.

Rater Severity (logits) Error Infit (mean square)


1 0.23 0.07 0.8
2 0.23 0.07 1.2
3 0.00 0.07 1.0

Notes. Reliability ¼ 0.87; separation index ¼ 2.62; fixed (all same) chi-square ¼ 23.6; significance ¼ 0.00.

no raters were identified as misfitting since all infit mean square values were within the range 0.75 and 1.3 (Bond & Fox,
2007), which means no raters behaved erratically.

4.3. Research question 1: syntactic complexity measures across proficiency and pragmatic performance levels

This section reports the three syntactic complexity measures on each of the four tasks across different levels of proficiency
and pragmatic performance. Table 5 shows descriptive statistics and results from one-way ANOVA for mean length of T-unit
(MLTU), mean length of clause (MLC), and mean number of clauses (independent and dependent clauses) per T-unit (CTU)
across the three different proficiency groups, based on their academic program levels and standardized proficiency test
scores. Statistically significant differences were observed for MLTU, MLC, and CTU. Bonferroni post-hoc analyses showed
significant differences between Mid-Low and High-Low groups for both MLTU and MLC, and High-Low group for CTU.
S.J. Youn / System 42 (2014) 270–287 277

Regarding differences across the three proficiency levels, the variation in MLTU was quite noticeable (1.14 words between
High and Mid, 2.12 words between Mid and Low), indicating that MLTU distinguished between the three different proficiency
levels well. On the other hand, a scarce difference in MLC between High and Mid was found (0.01 words decrease), although a
larger difference in MLC between Mid and Low (1.45 words) was reported. For CTU, a larger difference between High and Mid
(0.14 clauses) was reported than one between Mid and Low (0.09 clauses).
Next, the degree of syntactic complexity was examined across different levels of pragmatic performance. Using the
estimated ability logit values from the FACETS analysis, the learners were regrouped into three pragmatic performance levels,
independent of their proficiency levels. After sorting the learners by the ability logit values, the top 12 examinees were
categorized as the Prag_High group, the next 19 examinees as the Prag_Mid group, and the last nine examinees as the
Prag_Low group. Examinees were sorted in this manner to match the number of participants in each proficiency level group
with the number in each pragmatic performance group. A moderate Spearman’s rho correlation (r ¼ 0.61) between the
original proficiency categories and pragmatic performance level categories was found indicating learners’ proficiency did not
necessarily guarantee corresponding pragmatic performance. Analyzing this result more specifically, seven learners (18%)
ranked lower on their pragmatic performance than their proficiency levels, which means they did not show corresponding
pragmatic performance despite their established overall language proficiency. Interestingly, at the same time, six learners
(15%) showed pragmatic performance that was superior to their proficiency levels.
Table 6 shows descriptive statistics and results from one-way ANOVA for MLTU, MLC, and CTU across the three different
pragmatic performance groups. Statistically significant differences were observed for MLTU, MLC, and CTU. Bonferroni post-
hoc analyses showed significant differences across all groups for MLTU, Mid-Low and High-Low groups for MLC, and High-
Low group for CTU. Noticeable differences in MLTU for all three pragmatic performance levels were also reported (1.17
words between High and Mid, 2.61 words between Mid and Low), and these differences were larger than those reported in
the differences in MLTU across the proficiency levels. This finding indicates that MLTU was general enough to distinguish
between the three different levels of proficiency and pragmatics. In terms of phrasal elaboration, a greater difference in MLC
was found especially between High and Mid pragmatic performance (0.29 words) compared to the much smaller difference
between High and Mid proficiency levels (0.01 words). This result indicates MLC better distinguished between High and Mid
pragmatic performance, compared to High and Mid proficiency levels. For CTU, in contrast to the small difference between
Mid and Low proficiency levels (0.09 clauses), a larger difference between Mid and Low pragmatic performance (0.22 clauses)
was found, indicating CTU distinguished between Mid and Low pragmatic performance well.

4.4. Research question 2: correlations between complexity measures across proficiency and pragmatics

The syntactic complexity measures were correlated with both proficiency and pragmatic performance. Table 7 shows
Spearman’s rho correlations among the variables across all tasks combined. MLTU, MLC, and CTU showed statistically sig-
nificant relationships across both proficiency and pragmatics, but the relationships were not strong. This result indicates that
global complexity from mean length of T-unit (MLTU), phrasal-level complexity (MLC), and subordination at clause-level
(CTU) were not strongly related to increases in both proficiency and pragmatic performance, potentially suggesting non-
linear relationships. More specifically, the highest correlations were seen with MLTU with both proficiency and pragmatic
performance (0.418 with proficiency, 0.436 with pragmatics), compared to MLC (0.333 with proficiency, 0.301 with prag-
matics) and CTU (0.248 with proficiency, 0.303 with pragmatics). This finding suggests that the increases in MLTU were more
closely related to the different levels of proficiency and pragmatic performance, compared to MLC and CTU. Additionally, the
different magnitudes of the relationships between each of the complexity measures and the two criterion measures confirm
that the three complexity measures are distinct sub-constructs of syntactic complexity. Lastly, except for MLC, the re-
lationships between the two complexity measures (MLTU, CTU) and pragmatic performance were stronger than those with
proficiency, suggesting that the increases in MLTU and CTU were more sensitive to the different pragmatic performance levels
rather than the proficiency levels.

Table 5
Syntactic complexity measures across proficiency levels.

Proficiency N Mean SD Min Max F p h2


Mean length of T-unit (MLTU) High 12 11.75 5.23 6.75 21.00 15.762 0.001 0.17
Mid 19 10.61 5.29 5.94 20.00
Low 9 8.49 5.07 2.00 17.43
All 40 10.57 4.91 2.00 21.00
Mean length of clauses (MLCs) High 12 7.50 1.27 4.88 11.00 14.632 0.001 0.16
Mid 19 7.51 1.49 5.00 12.33
Low 9 6.06 1.43 2.00 9.30
All 40 7.19 1.53 2.00 12.33
Mean number of clauses per T-unit (CTU) High 12 1.61 0.33 1.00 2.50 3.258 0.041 0.04
Mid 19 1.47 0.36 1.00 2.89
Low 9 1.38 0.37 1.00 3.00
All 40 1.50 0.36 1.00 3.00
278 S.J. Youn / System 42 (2014) 270–287

Table 6
Syntactic complexity measures across pragmatic performance levels.

Pragmatic performance N Mean SD Min Max F p h2


Mean length of T-unit (MLTU) Prag_High 12 11.86 5.23 7.50 21.00 21.98 0.001 0.22
Prag_Mid 19 10.69 5.50 5.94 16.00
Prag_Low 9 8.08 5.07 2.00 16.50
All 40 10.57 4.26 2.00 21.00
Mean length of clauses (MLCs) Prag_High 12 7.62 1.26 5.27 11.00 9.312 0.001 0.11
Prag_Mid 19 7.33 1.57 4.88 12.33
Prag_Low 9 6.29 1.43 2.00 8.50
All 40 7.19 1.53 2.00 12.33
Mean number of clauses per T-unit (CTU) Prag_High 12 1.60 0.33 1.00 2.89 6.953 0.001 0.08
Prag_Mid 19 1.51 0.36 1.00 2.55
Prag_Low 9 1.29 0.37 1.00 3.00
All 40 1.50 0.36 1.00 3.00

5. Discussion

The study yielded the following major findings. Firstly, learners’ written pragmatic performance was not closely associated
with their proficiency levels, with a Spearman’s rho correlation of 0.61, indicating some learners did not possess corre-
sponding pragmatic competence according to their proficiency levels. This finding contributes to understanding the rela-
tionship between pragmatics and proficiency both theoretically and empirically. Pragmatics has been theoretically
conceptualized as part of L2 proficiency in previous language competence models (e.g., Bachman & Palmer, 1996) and the
inconclusive relationship between pragmatics and proficiency has been reported in the previous studies (Matsumura, 2003;
Roever, 2006; Taguchi, 2007a, 2011; Takahashi, 2005). The result of the present study suggests that although pragmatics and
proficiency are related to some extent, they are clearly distinct constructs. It is evident that L2 proficiency alone does not
guarantee equivalent written pragmatic performance in an EAP setting.
Secondly, the three syntactic complexity measures functioned distinctly across proficiency and pragmatics. Overall, MLTU
distinguished between the three different levels of proficiency and pragmatic performance well. Yet, MLC (i.e., the phrasal-
level complexity measure) tapped a more specific source of subclausal complexification particularly for different pragmatic
performance levels. The magnitudes of the three complexity measures differences between pragmatic performance levels
were more noticeable, compared to those between proficiency levels. For example, the greater difference in the mean length
of clause (MLC, 0.29 words increase) was found between High and Mid in pragmatic performance than the scarce difference
(0.01 words decrease) between High and Mid in proficiency levels, especially compared to the relatively similar differences in
MLTU between High and Mid in pragmatic performance (1.17 words) and High and Mid in proficiency levels (1.14 words). The
noticeable difference in MLC across different pragmatic performance levels supports the conclusion that pragmatically more
advanced learners produced more words at the phrasal level, similar to the findings reported in Bardovi-Harlig (1999). It is
possible that they utilized a greater variety of linguistic resources to express pragmatic functions at the phrasal level,
including more modal verbs, past tense forms, or progressive aspect forms. Bardovi-Harlig argued that pragmatically more
advanced learners use a more diverse tense–aspect system in relation to diverse pragmatic meaning. This often results in
more words, as seen in two example sentences, I will take the course (5 words and 1 clause) vs. I am thinking of taking the course
(7 words and 1 clause). The subclausal complexity in these particular examples is evident at the phrasal level and can be
measured only via MLC which taps a more narrowly defined source of complexification, rather than the general length-based
complexity measure (MLTU). This finding further supports Norris and Ortega’s (2009) argument of the importance of
measuring syntactic complexity multidimensionally with distinct sources of complexification.
The mean number of clauses per T-unit (CTU) shows complexity via subordination. In this study, pragmatically proficient
learners also produced more clauses per T-unit, possibly due to more bi-clausal or conditional mitigations used to convey
various pragmatic meaning. Interestingly, the greater gap was found between Mid and Low in pragmatic performance in CTU
(0.22 clauses) compared to the difference between Mid and High in pragmatic performance (0.09 clauses). This indicates that
learners with intermediate pragmatic performance used more clauses in each utterance than those with low pragmatic
performance, but this was not the case for the increase from intermediate to high pragmatic performance. This finding
potentially implicates distinct developmental rates of syntactic complexity for different pragmatic performance levels.

Table 7
Correlations between syntactic complexity and two criterion measures.

Proficiency Pragmatic performance


MLTU 0.418* 0.436*
MLC 0.333* 0.301*
CTU 0.248* 0.303*

Note. *p < 0.01.


S.J. Youn / System 42 (2014) 270–287 279

Thirdly, the positive relationships between the three complexity measures and the two criterion measures (proficiency
and pragmatic performance) were found. However, the relationships were not very strong indicating potentially non-linear
relationships among the variables. This explanation is supported by the unequal amount of average increases in the three
syntactic complexity measures across the three levels of the two criterion measures, as discussed above. Taken together, one
can speculate, based on these findings, that each syntactic feature might have a different developmental pattern which is not
necessarily linear (e.g., Wolfe-Quintero et al., 1998). Further research on syntactic complexity, with rich research designs that
include longitudinal data as well as comparisons with L1 baseline groups, will be needed to confirm this speculation.
Slightly different magnitudes of the relationships were found between each complexity measure and the two criterion
measures. Specifically, except for phrasal-level complexity (MLC), MLTU and CTU showed stronger relationships to pragmatics
compared to those with proficiency, which further confirms that pragmatically advanced learners produced more words and
clauses per T-unit. Consistent gaps shown in the correlation coefficients between the complexity measures and the two
criterion measures not only suggest that the three complexity measures indeed tapped distinct sources of syntactic com-
plexification, but also the two criterion measures were somewhat independent.
Lastly, the four written pragmatic assessment tasks varied in terms of difficulty. The task of writing an e-mail refusing a
professor’s request was the easiest task, which supports previous studies’ findings that indirect refusals take less cognitive
demand due to more familiarity with the situation and routinized refusal expressions (e.g., Beebe, Takahashi, & Uliss-Weltz,
1990; Taguchi, 2007b). The most difficult task was writing constructive comments on a cover letter written by a classmate
potentially due to learners’ unfamiliarity with the situation in both L1 and L2, which supports the previous research finding
that familiarity of situations becomes a source of pragmatic task difficulty (e.g., Taguchi, 2007a). Among the four criteria of the
rating category, the third criterion used for all tasks, appropriate use of formulaic linguistic expression, was the most difficult.
Taken together, because L2 proficiency does not automatically guarantee concurrent written pragmatic performance espe-
cially when learners are not familiar with pragmatic situations, explicit pedagogical attention to various pragmatic situations
and linguistic expressions for appropriate pragmatic meaning is needed.

6. Conclusion

The previous research that examined the relationships among pragmatics, grammar, and proficiency lacked in an explicit
focus on L2 learners’ pragmatic production elicited from authentic pragmatic tasks. Addressing such research gap, extensive
efforts in designing authentic pragmatic tasks with valid task-dependent rating criteria to measure learners’ pragmatic
performances were made in this study. As a result, as evidenced by the results from the MFRM analysis, the validity and
reliability of measuring learners’ pragmatic performances were ensured, which enabled us to examine the relationship be-
tween pragmatic performance and proficiency systematically. Furthermore, in order to examine the relationship between
pragmatics and grammar, three distinct complexity measures were employed to tap various aspects of syntactic complexity in
learners’ pragmatic production. Although the focus on syntactic complexity and the written pragmatic production elicited
from the particular EAP pragmatic tasks will limit the generalization of results, the current study’s findings contribute to
understanding how pragmatics, grammar, and proficiency are related. With an emphasis on pragmatic performance
considering both pragmalinguistics and sociopragmatics, the present study showed that pragmatics and proficiency are
distinct constructs, although related to some extent. Pragmatically advanced learners utilized various syntactic features, as
shown by the different degrees of the complexity measures across the pragmatic performance levels that differed from those
across proficiency levels.
Several areas remain for future research. Firstly, the present study employed three types of complexity measures: global,
phrasal-level, and subordination complexity measures. More empirical research will be needed to further examine how other
forms of syntactic complexity are related to pragmatic performance. In addition to syntactic complexity, accuracy is another
widely used global measure to examine learners’ language production, but it was not examined in the study. To what extent
accuracy plays an important role in achieving various pragmatic functions is an empirical question that merits further
attention. Secondly, different results might be found for learners’ spoken pragmatic production or tasks covering different
genres or situational variables that influence politeness, such as interlocutors’ power difference, social distance, and the
degree of imposition (Brown & Levinson, 1987). Furthermore, future research should explore a broader range of resources
utilized in L2 learners’ pragmatic production to clarify the relationships among pragmatics, grammar, and proficiency.
Research into L2 pragmatics in interaction (Kasper, 2006) is one such example, as shown in an increasing body of research on
a wide range of interactional resources utilized in spoken interaction using Conversation Analysis (e.g., Huth, 2006; Ishida,
2009; Ochs, Schegloff, & Thompson, 1996; Ross & Kasper, 2013; Sacks, Schegloff, & Jefferson, 1974).
The current study’s findings have the following pedagogical implications for language teachers in an EAP setting. More
attention needs to be paid for explicit L2 pragmatic instruction regardless of learners’ proficiency. The current study suggests
that learners’ L2 proficiency does not necessarily guarantee concomitant pragmatic performance. Additionally, despite their
established understanding of linguistic forms, learners are not necessarily able to use various syntactic features for pragmatic
meaning. Thus, teaching various aspects of grammar focusing on its form, meaning, and use is essential (Larsen-Freeman,
2014). In addition to grammar, EAP pragmatics involves other issues. For example, students might not have institutional
knowledge of a university setting, such as the understanding of an appropriate timeline when requesting a recommendation
letter, or might not be familiar with certain genres of writing, such as writing a cover letter to apply for a job. Therefore,
280 S.J. Youn / System 42 (2014) 270–287

teaching various aspects of EAP pragmatics will be beneficial for students. The authentic EAP pragmatic tasks and task-
dependent rating criteria developed in this study can serve as useful teaching materials as well.

Acknowledgments

My sincere appreciation to Dr. John M. Norris and Dr. Lourdes Ortega for their critical feedback and guidance throughout
noa.
the various stages of this study. This study was funded by the Graduate Student Organization, University of Hawai‘i at Ma
Partial preliminary results were presented at the conference of the American Association for Applied Linguistics in Atlanta in
2010.

Appendix A. Pragmatic assessment tasks

Task 1: write a recommendation letter request to a professor

Situation: You found out there is a research award opportunity, and you are planning to apply for this award. To apply for
this award, you need a recommendation letter from an academic advisor.
Task: Please read the information about the award below. Then, you will write an e-mail to your academic advisor (Professor
Jack Brown, professor@university.edu) to request a recommendation letter.
Time: You have 10 min to complete the task.
Product: You will write a recommendation letter request e-mail to professor Jack Brown to apply for “Arts & Sciences
Student Research Awards”.
Information about the award:

Task 2: write an e-mail to a potential employer to send your application packet

Situation: You have been preparing to apply for an interpreter job at City Council. Your résumé and cover letters are ready
to send, and all application documents should be sent by an e-mail.
Task: You will send an e-mail to Human Resource Manager, Sarah Brown (citycouncil@gmail.com) to send your job
application documents including a cover letter and your résumé.
Time: You have 5 min to complete the task.
Product: In order to apply for an interpreter job, you will write an e-mail to the Human Resource Manager, Sarah Brown, to
send your application packet.

Task 3: write an e-mail to refuse a professor’s request of helping with your classmate’s class project

Situation: You received an e-mail from Professor Jack Brown (see the e-mail below). But, you have a very busy schedule
these days, so you cannot help your classmate. How would you reply to professor’s email?
Task: Write a reply e-mail to your professor to refuse the request.
Time: You have 5 min to complete the task.
S.J. Youn / System 42 (2014) 270–287 281

Product: You will write an e-mail to Professor Jack Brown to refuse his request.

Task 4: write constructive comments on a cover letter

Situation: You want to apply for a job sometime soon, so you need to know how to write a cover letter.
Task: Now, you have an example cover letter below that is written by your classmate for an internship job at City Council to
Human Resource Director, Harry Johnson. Your task is to write constructive comments on the cover letter to find out how this
cover letter can be improved. Think about important criteria and elements of writing a cover letter, and how you would have
done differently.
Time: You have 15 min to complete the task.
Product: You will write a comment on Jessie’s cover letter that you will give to Jessie.
Example cover letter written by your classmate Jessie:
282
Appendix B. Task-dependent rating criteria

1. Write an e-mail to request a recommendation letter.


Tone of e-mail Clear message delivery/content knowledge Formulaic linguistic expression E-mail format
3 (Good)  Maintain a professional and polite  Include a clear/concise purpose and highlight a  Use polite linguistic expressions for request  Use a clear and informative subject
tone consistently throughout the main point (e.g., provide an appropriate amount (e.g., I was wondering if you can-, would it that indicates a purpose of e-mail
e-mail (e.g., not to deprecate yourself of background information, put a main point in be possible for you to-)  Use appropriate salutation and

S.J. Youn / System 42 (2014) 270–287


for lack of knowledge, not to hurry a its own line/paragraph rather than in the bottom  Use linguistic expressions that can reduce term of address (e.g., Dear, Hello,
professor, not to sound pushy, or last of the message) imposition of request (e.g., if you have time, Dr., Professor)
aggressive, and begging for help  Show evidence of knowledge of a if possible, I know that you’re extremely busy)  Briefly introduce yourself if a
desperately, not to assume that a recommendation letter (e.g., a degree of  Use good/acceptable grammar in general, professor does not know you well.
professor will write a letter for you) imposition of asking for a letter to a professor) good spelling  Use appropriate and courteous
and award applications closing

2 (Able)  Inconsistently maintain a professional  Lack a clear/concise purpose of writing an  Use some and/or simple linguistic expressions  Some elements of the e-mail format
tone throughout the e-mail e-mail (e.g., wordy/unnecessary explanation for request and to reduce imposition, but they are present, but they are not used
 Sound more or less polite in general, of why he/she is qualified for an award) do not sound polite enough or appropriate. appropriately (e.g., use of “sir” to a
but an informal tone is present  Show evidence of inconsistent content  Use some unconventional linguistic expressions professor, an unclear subject, “I’m
knowledge although the e-mail includes a  Occasional grammar errors and misspelling waiting for your reply” as a closing)
clear purpose

1 (Inadequate)  The e-mail sound too casual and  The e-mail does not have a clear purpose.  Use inappropriate linguistic expressions for  Either few elements of the e-mail
informal  Show lack of knowledge of award applications request, and sound very direct or imposing format are present, or none/few
 Lack a professional tone throughout (e.g., do not know an award application (e.g., I need a recommendation letter) of the elements are appropriately
the e-mail (e.g., impose the importance procedure, do not recognize the imposition of  Frequent grammatical errors and misspelling used
of receiving a good letter from a professor) recommendation letter request)
2. Write an e-mail to send an application packet.

Tone of e-mail Clear message delivery/content knowledge Polite formulaic expression E-mail format
3 (Good)  Maintain a professional tone consistently  Include a clear/concise purpose and/or  Use polite/appropriate conventional  Use a clear and informative subject
throughout the e-mail (e.g., not to deprecate highlight a main point (e.g., put a main linguistic expressions for requesting that indicates a purpose of e-mail
yourself for lack of knowledge, not to hurry point in its own line/paragraph rather to look at attachments (e.g., Please  Use appropriate salutation and term
an employer for a reply, not to sound pushy, than in the bottom or last of the message) find the attached files) or statement of address (e.g., Dear, Mr., Ms.)
aggressive, and begging for job desperately,  Show evidence of knowledge of writing of sending an application packet  Briefly introduce yourself.
not to assume that an employer will have an e-mail to apply for a job (e.g., (e.g., I’m sending you-)  Use appropriate and courteous closing

S.J. Youn / System 42 (2014) 270–287


an interview immediately) highlight important background information,  Use good/acceptable grammar in
show interest, specify job category, briefly general, good spelling
introduce yourself)

2 (Able)  Inconsistently maintain a professional tone  Lack a clear/concise purpose of writing an  Use some and/or simple linguistic  Some elements in the e-mail format
throughout the e-mail e-mail (e.g., wordy/unnecessary background expressions for request and statement, are present, but they are not used
 Sound more or less polite in general, but an information, why he/she is qualified for a job) but they do not sound polite enough appropriately (e.g., absence of term
informal tone is present  Show evidence of inconsistent content or appropriate. of address, an unclear subject, use of
knowledge although the e-mail includes a  Use some unconventional linguistic first name – “Sarah”, “I’m waiting for
clear purpose expressions (e.g., I will appreciate your reply” as a closing line)
hearing from you)
 Occasional grammar errors and
misspelling

1 (Inadequate)  Sound too casual and informal  The e-mail does not have a clear purpose.  Use inappropriate linguistic expressions,  Either very few elements of the e-mail
 Lack a professional tone throughout the  Show lack of knowledge of job application and sound very direct or imposing format are present, or none of the
e-mail (e.g., impose to give a work process (e.g., ask directly for an immediate/ (e.g., if you choose me, you will elements are appropriately used
opportunity) prompt reply) not regret)
 Frequent grammatical errors and
misspelling

283
284
3. Write an e-mail to refuse a professor’s request.

Tone of e-mail Clear message delivery/content knowledge Polite formulaic expression E-mail format
3 (Good)  Maintain a professional and polite  Include a clear/concise message  Use polite/appropriate conventional  Use appropriate salutation and term of
tone consistently throughout the (i.e., refusal to professor’s request) linguistic expressions for apology address (e.g., Dear, Professor, Dr., “Jack”
e-mail (e.g., not to sound too apologetic) with an appropriate amount of (e.g., I’m afraid that I cannot help, can be acceptable since a relationship
information I don’t think I can help) or suggestions between a professor and a student is well
 Show evidence of knowledge of refusing

S.J. Youn / System 42 (2014) 270–287


(e.g., What about-, Is that okay if I -) established)
professor’s request (e.g., recognize it is a  Use good/acceptable grammar in general,  Use appropriate and courteous closing
face-threatening situation, provide good spelling
explanation, suggest alternative solutions)

2 (Able)  Inconsistently maintain a professional  State an unclear purpose of writing an  Use some and/or simple linguistic  Some elements of the e-mail format are
and polite tone throughout the e-mail e-mail (e.g., intention of refusal is not clear) expressions for apology/suggestions, present, but they are not used appropriately
 Provide unclear accounts but they do not sound polite enough
 Include wordy and unnecessary explanation or appropriate
 Use some unconventional linguistic
expressions
 Occasional grammar errors and
misspelling

1 (Inadequate)  Sound too casual and informal  The e-mail does not have a clear intention  Use very simple (or none) linguistic  Either very few elements of the e-mail
 Lack a professional and polite tone of refusal. expressions for apology that sound format are present, or none of the elements
throughout the e-mail  Show lack of knowledge of how to quite rude are appropriately used
appropriately refuse  Frequent grammatical errors and
misspelling
4. Give comments/suggestions on classmate’s cover letter.

Tone of giving comments/suggestions Clear message delivery Formulaic linguistic expression Knowledge of writing a cover letter
3 (Good)  Maintain a respectful and polite tone  Deliver comments and suggestions clearly  Use polite linguistic expressions for Show knowledge of following elements:
throughout giving comments to a classmate with an appropriate amount of explanation giving suggestions and comments  Basic format (e.g., term of address)
(e.g., not to sound too opinionated, strong, (e.g., I think you can-, It would be a  Brief introduction
reproachful, and pushy) good idea-, If you do-, You can consider  Emphasize important selling points
to-, You could) that are relevant for the position
 Use good/acceptable grammar in general, concisely

S.J. Youn / System 42 (2014) 270–287


good spelling  Do not assume to have an interview
automatically, but ask for an
interview politely
 Keep a professional and formal tone
throughout the cover letter (e.g., not
using thanks, not to sound desperate)

2 (Able)  Inconsistently maintain a respectful and  Comments are more or less clear, but unclear  Use some, although not frequent,  Comments show knowledge of some
polite tone throughout the e-mail sentences are present. and/or simple linguistic expressions that elements of a cover letter mentioned
 Sound more or less polite in general, but sound rather strong for giving comments/ above, but they are not explained
an inappropriate tone is present suggestions. (e.g., you should-, you must-) enough.
 Use some unconventional linguistic
expressions
 Occasional grammar errors and misspelling

1 (Inadequate)  Sound too strong and directive  Comments are not clear and not easily  Use inappropriate linguistic expressions for  Comments include very few elements
 Lack a respectful and polite tone throughout understandable. giving comments/suggestions frequently of a cover letter, or none of the
the e-mail  Comments include no/few explanation. that sound quite rude and strong elements are appropriately explained
(e.g., You should)
 Frequent grammatical errors and misspelling

285
286 S.J. Youn / System 42 (2014) 270–287

References

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Bardovi-Harlig, K. (1999). Exploring the interlanguage of interlanguage pragmatics: a research agenda for acquisitional pragmatics. Language Learning, 49,
677–713.
Bardovi-Harlig, K. (2000). Tense and aspect in second language acquisition: Form, meaning, and use. Oxford: Blackwell.
Bardovi-Harlig, K. (2001). Empirical evidence of the need for instruction in pragmatics. In K. R. Rose, & G. Kasper (Eds.), Pragmatics in language teaching
(pp. 13–32). New York: Cambridge University Press.
Bardovi-Harlig, K., & Dörnyei, Z. (1998). Do language learners recognize pragmatic violations? Pragmatics vs. grammatical awareness in instructed L2
learning. TESOL Quarterly, 32, 233–259.
Bardovi-Harlig, K., & Hartford, B. S. (1993). Learning the rules of academic talk: a longitudinal study of pragmatic development. Studies in Second Language
Acquisition, 15, 279–304.
Beebe, L. M., Takahashi, T., & Uliss-Weltz, R. (1990). Pragmatic transfer in ESL refusals. In R. Scarcella, D. Andersen, & S. Krashen (Eds.), Developing
communicative competence in a second language (pp. 55–74). New York: Newbury House.
Biber, D., Gray, B., & Poonpon, K. (2011). Should we use characteristics of conversation to measure grammatical complexity in L2 writing development?
TESOL Quarterly, 45, 5–35.
Bond, T. G., & Fox, C. M. (2007). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge: Cambridge University Press.
Chapelle, C. A., & Duff, P. A. (2003). Some guidelines for conducting quantitative and qualitative research in TESOL. TESOL Quarterly, 37, 157–178.
Fulcher, G., & Márquez Reiter, R. (2003). Task difficulty in speaking tests. Language Testing, 20, 321–344.
Grabowski, K. C. (2009). Investigating the construct validity of a test designed to measure grammatical and pragmatic knowledge in the context of speaking.
Unpublished Ph.D. dissertation. Columbia University.
Housen, A., & Kuiken, F. (2009). Complexity, accuracy, and fluency in second language acquisition. Applied Linguistics, 30, 461–473.
Hudson, T., Detmer, E., & Brown, J. D. (1992). A framework for testing cross-cultural pragmatics (Technical report #2). Honolulu, HI: University of Hawai‘i,
Second Language Teaching and Curriculum Center.
Hudson, T., Detmer, E., & Brown, J. D. (1995). Developing prototype measures of cross-cultural pragmatics (Technical report #7). Honolulu, HI: University of
Hawai‘i, Second Language Teaching and Curriculum Center.
Huth, T. (2006). Negotiating structure and culture: L2 learners’ realization of L2 compliment-response sequences in talk-in-interaction. Journal of Prag-
matics, 38, 2025–2050.
Ishida, M. (2009). Development of interactional competence: changes in the use of ne in L2 Japanese during study abroad. In H. T. Nguyen, & G. Kasper
(Eds.), Talk-in-interaction: Multilingual perspectives (pp. 351–385). Honolulu, HI: National Foreign Language Resource Center, University of Hawai‘i.
Kasper, G. (2006). Speech acts in interaction: towards discursive pragmatics. In K. Bardovi-Harlig, C. Félix-Brasdefer, & A. S. Omar (Eds.), Pragmatics and
language learning (Vol. 11); (pp. 281–314). Honolulu, HI: Second Language Teaching and Curriculum Center, University of Hawai‘i.
Kasper, G. (2009). L2 pragmatic development. In W. C. Ritchie, & T. K. Bhatia (Eds.), New handbook of second language acquisition (pp. 259–295). Leeds, UK:
Emerald.
Kasper, G., & Roever, C. (2005). Pragmatics in second language learning. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning
(pp. 317–334). New York: Routledge.
Kasper, G., & Rose, K. R. (2002). Pragmatic development in a second language. Malden: Blackwell Publishing.
Kasper, G., & Schmidt, R. (1996). Developmental issues in interlanguage pragmatics. Studies in Second Language Acquisition, 18, 149–169.
Larsen-Freeman, D. (2006). The emergence of complexity, fluency, and accuracy in the oral and written production of five Chinese learners of English.
Applied Linguistics, 27, 590–619.
Larsen-Freeman, D. (2014). Teaching grammar. In M. Celce-Murcia, D. M. Brinton, & M. A. Snow (Eds.), Teaching English as a second or foreign language
(4th ed.) (pp. 256–270). Boston, MA: National Geographic Learning.
Leech, G. (1983). Principles of pragmatics. Harlow: Longman.
Linacre, J. M. (1989). Many-faceted Rasch measurement. Chicago: MESA.
Linacre, J. M. (2006). Facets Rasch measurement computer program (version 3.61.0) [computer software]. Chicago: Winsteps.com.
Long, M. H., & Norris, J. M. (2000). Task-based teaching and assessment. In M. Byram (Ed.), Routledge encyclopedia of language teaching and learning (pp. 597–
603). London: Routledge.
MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Matsumura, S. (2003). Modelling the relationships among interlanguage pragmatic development, L2 proficiency, and exposure to L2. Applied Linguistics, 24,
465–491.
McNamara, T. F. (1996). Measuring second language performance. New York: Addison Wesley Longman.
Niezgoda, K., & Röver, C. (2001). Pragmatic and grammatical awareness: a function of learning environment? In K. R. Rose, & G. Kasper (Eds.), Pragmatics in
language teaching (pp. 63–79). New York: Cambridge University Press.
Norris, J. M. (2009). Task-based teaching and testing. In M. H. Long, & C. J. Doughty (Eds.), Handbook of language teaching (pp. 578–594). Cambridge:
Blackwell.
Norris, J. M., & Ortega, L. (2009). Towards an organic approach to investigating CAF in instructed SLA: the case of complexity. Applied Linguistics, 30, 555–
578.
Ochs, E., Schegloff, E. A., & Thompson, S. A. (1996). Interaction and grammar. Cambridge: Cambridge University Press.
Ortega, L. (2003). Syntactic complexity measures and their relationship to L2 proficiency: a research synthesis of college-level L2 writing. Applied Linguistics,
24, 492–518.
Ortega, L., Iwashita, N., Rabie, S., & Norris, J. M. (1999). Transcription and coding guidelines for a multilanguage comparison of measures of syntactic complexity.
Honolulu: University of Hawai‘i, National Foreign Language Resource Center. Unpublished paper.
Roever, C. (2006). Validation of a web-based test of ESL pragmalinguistics. Language Testing, 23, 229–256.
Ross, S., & Kasper, G. (Eds.). (2013). Assessing second language pragmatics. Basingstoke, UK: Palgrave Macmillan.
Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50, 696–735.
Schauer, G. (2006). Pragmatic awareness in ESL and EFL contexts: contrast and development. Language Learning, 56, 269–318.
Schmidt, R. (1983). Interaction, acculturation and the acquisition of communicative competence. In N. Wolfson, & E. Judd (Eds.), Sociolinguistics and second
language acquisition (pp. 137–174). Rowley, MA: Newbury House.
Skehan, P. (1998). A cognitive approach to language learning. New York: Oxford University Press.
Taguchi, N. (2007a). Task difficulty in oral speech act production. Applied Linguistics, 28, 113–135.
Taguchi, N. (2007b). Development of speed and accuracy in pragmatic comprehension in English as a foreign language. TESOL Quarterly, 41, 313–338.
Taguchi, N. (2011). Do proficiency and study-abroad experience affect speech act production? Analysis of appropriateness, accuracy, and fluency. Inter-
national Review of Applied Linguistics, 49, 265–293.
Takahashi, S. (1996). Pragmatic transferability. Studies in Second Language Acquisition, 18, 189–223.
Takahashi, S. (2001). The role of input enhancement in developing pragmatic competence. In K. R. Rose, & G. Kasper (Eds.), Pragmatics in language teaching
(pp. 171–199). New York: Cambridge University Press.
Takahashi, S. (2005). Pragmalinguistic awareness: is it related to motivation and proficiency? Applied Linguistics, 26, 90–120.
Thomas, J. (1983). Cross-cultural pragmatic failure. Applied Linguistics, 4, 91–112.
Wolfe-Quintero, K., Inagaki, S., & Kim, H.-Y. (1998). Second language development in writing: Measures of fluency, accuracy, and complexity. Honolulu, HI:
University of Hawai‘i, Second Language Teaching and Curriculum Center.
Youn, S. J. (2010). From needs analysis to assessment: Task-based L2 pragmatics in an English for academic purposes setting. Unpublished manuscript. Honolulu:
University of Hawai‘i.