Educational Research Review 4 (2009) 26–40

Comparing response rates in e-mail and paper surveys:


A meta-analysis
Tse-Hua Shih a,∗ , Xitao Fan b,1
a University of Virginia, 175 Yellowstone Drive, Apt# 203, Charlottesville, VA 22903, United States
b Curry School of Education, University of Virginia, 405 Emmet Street South, Charlottesville, VA 22903-2495, United States
Received 3 April 2007; received in revised form 8 October 2007; accepted 24 January 2008

Abstract
This meta-analysis examined 35 study results within the last 10 years that directly compared the response rates of e-mail versus mail
surveys. Individual studies reported inconsistent findings concerning the response rate difference between e-mail and mail surveys,
but e-mail surveys generally have a lower response rate (about 20% lower on average) than mail surveys. Two study features (population
type and follow-up reminders) could account for some variation in the e-mail and mail survey response rate differences across
the studies. For the studies involving college populations, the response rate difference between e-mail and mail surveys was much
smaller, or even negligible, suggesting that the e-mail survey is reasonably comparable with the mail survey for college populations. The
finding about follow-up reminders as a statistically significant study feature turned out to be something of an anomaly. Other study features
(i.e., article type, random assignment of survey respondents into e-mail and mail survey modes, and use of incentives) did not prove to
be statistically useful in accounting for the variation of response rate differences between mail and e-mail surveys. The findings here
suggest that, in this age of internet technology, the mail survey is still superior to the e-mail survey in terms of obtaining a higher response rate.
© 2008 Elsevier Ltd. All rights reserved.

Keywords: E-mail survey; Mail survey; Electronic mail; Response rate; Meta-analysis

1. Introduction

E-mail survey has become increasingly popular, as manifested in the growing research on e-mail survey methodology
(e.g., Akl, Maroun, Klocke, Montori, & Schünemann, 2005; Bachmann, Elfrink, & Vazzana, 2000; Couper, Blair, &
Triplett, 1999; Dommeyer & Moriarty, 2000; Kittleson, 1995; Metha & Sivadas, 1995; Ranchhod & Zhou, 2001;
Schaefer & Dillman, 1998). E-mail survey has also been used by researchers in a variety of fields, such as medicine
(Hollowell, Patel, Bales, & Gerber, 2000; Jones & Pitt, 1999; Kim et al., 2000), management (Donohue & Fox, 2000),
market research (Metha & Sivadas, 1995; Ranchhod & Zhou, 2001; Smee & Brenna, 2000), policy research (Enticott,
2002), education (Fraze et al., 2002), and telecommunication (Shermis & Lombard, 1999). The emergence of the internet
and of e-mail surveys has led to numerous studies comparing e-mail and mail surveys, especially with respect to the response rates
of these two survey modes. These comparative studies provide us with opportunities to understand the strengths and
weaknesses of different survey modes, and to explore factors that may affect their response rates.

∗ Corresponding author. Tel.: +1 434 566 1464.


E-mail addresses: ts4ex@virginia.edu (T.-H. Shih), xfan@virginia.edu (X. Fan).
1 Tel.: +1 434 243 8906.


1.1. Pros and cons of comparative studies

Survey response rates are often affected by respondent characteristics. Unlike other so-called electronic surveys
(e.g., computer-administered or web surveys), e-mail surveys are either embedded in or attached to e-mail notifications,
and respondents can return the surveys by e-mail. There have been some concerns about the response rate of
e-mail surveys, partially because of the uneven access to e-mail technology across different target populations, and their
respective comfort levels with internet/e-mail technology. Because of such concerns about uneven access to, and uneven
familiarity with, the internet in the general population, an e-mail survey study, unlike a mail survey study, often targets
populations that have internet access (Metha & Sivadas, 1995), such as college students (Bachmann et al., 2000;
Sax, Gilmartin, Lee, & Hagedorn, 2003). Single-mode survey studies (i.e., either e-mail survey or mail survey studies)
typically involve different types of populations at different times; as such, it is very difficult to use such single-mode
studies to make a meaningful comparison between the mail and e-mail survey modes in terms of their respective
response rates. On the other hand, studies in which e-mail and mail surveys are directly compared tend to sample from the
same population, which can be contacted by either e-mail or mail around the same period of time. As a result, for the
purpose of understanding survey mode differences, these comparative studies should provide a better understanding
of the response rate differences between the two survey modes (i.e., e-mail vs. mail surveys) than studies that
implemented e-mail or mail surveys separately, in different experiments at different times.
Research related to response rates has generally produced inconsistent results, because survey response rates are
affected by many different characteristics of individual studies. Our understanding of response rate differences between
e-mail and mail surveys is further hindered by different study designs and study attributes. Two major issues demand our
attention. First, computer-administered, disk-by-mail, e-mail, and web-based survey formats, etc., were often grouped
under the generic name of “electronic surveys” or “internet-based surveys”, and very few studies actually examined
the difference between e-mail and mail surveys. Such generic groupings of “electronic survey” or “internet-based
survey” have made it difficult to understand the relative merits/demerits of e-mail versus mail surveys. Second, in the
past, there was an insufficient number of studies that directly compared the e-mail and mail survey modes; as a result,
we have not been able to develop a good understanding about the response rate differences between the two survey
modes. The present study attempts to synthesize the findings concerning the response rates of e-mail and mail surveys
from the studies that directly compared these two survey modes. Through this synthesis, we attempt to explore the
factors that may have contributed to the variation of response rate differences between the two survey modes across
the studies.

1.2. Importance of survey response rate

A high response rate is viewed not only as desirable, but also as an important criterion by which the quality
of the survey is judged (Hox & DeLeeuw, 1994), because a high response rate reflects less serious potential non-
response bias. Along with the rise of internet and e-mail survey methodology, perhaps the most significant issue
is whether e-mail-based surveys produce comparable or even higher response rates than the traditional mail sur-
veys. Given the mixed findings about response rates between e-mail and mail survey modes, there is a research
need to examine the mode effects on response rates between e-mail and mail survey methods. There have been
some discussions exploring how to increase response rates (e.g., Andersen & Blackburn, 2004; Claycomb, Porter,
& Martin, 2000; Kanuk & Berenson, 1975; Truell, James, & Melody, 2002; Yammarino, Skinner, & Childers,
1991), and how to estimate and/or correct for non-response bias (e.g., Bickart & Schmittlein, 1999; Groves, 2002).
The overall goal is the same, that is, to increase response rate, which in turn, reduces potential non-response
bias.

1.3. Past studies on comparing response rates

There have been meta-analyses focusing only on response rates of internet-based surveys (Cook, Heath, &
Thompson, 2000), or those focusing only on paper surveys (Church, 1993; Fox, Crask, & Kim, 1988; Yammarino
et al., 1991), but there has not been a meta-analysis that quantitatively compared the response rates of e-mail and mail
surveys. As far as the authors of this study know, there were only five papers (i.e. Couper et al., 1999; Dommeyer &
Moriarty, 2000; Ilieva, Baron, & Healey, 2002; McDonald & Adam, 2003; Schaefer & Dillman, 1998) that discussed
response rates of e-mail-and-mail comparative studies as part of their literature review or discussion. These studies only
provided a narrative review or discussion of the comparative studies involving e-mail and mail survey comparisons.
Such a narrative approach, however, could not systematically examine the influence of the study features that might
have contributed to the inconsistent findings across the studies. The present study was designed to provide a quantitative
examination of this issue. This study focuses on (1) what the observed response rate differences are between e-mail
and mail surveys as shown in the comparative studies that directly compared these two survey modes and (2) how
different factors (i.e. study features) could explain the inconsistent response rate differences between e-mail and mail
surveys in different comparative studies.

2. Methods

2.1. Source of meta-analytic sample

Business & Company Resource Center, Business Index, Materials Business File (CSA), ERIC, Expanded Academic
Index (ASAP), WebSM, Factiva, InfoTrac, Ingenta, PsycInfo, Pubmed, SCIRUS, Social Research Methodology, Social
Science Abstract, and Sociological Abstract databases were explored using Boolean search methods with the following
keywords: response rate, return rate, participation rate, survey, questionnaire, postal, paper, mail, electronic, e-mail,
and internet. We also employed the Google search engine to locate some articles (e.g. conference papers) that were
not in these databases. We stopped the literature search on 5 November 2006, and any study published after this date
was not included in this study.
There have been concerns about biased study results because of the "file drawer" problem: studies with higher e-mail survey
response rates are more likely to be accepted as journal articles (e.g. Cook et al., 2000; Rosenthal & Rubin,
1982). This meta-analysis therefore included both published articles and unpublished conference papers. Simply looking at the
title or abstract of a study could not guarantee that it was a comparative study of mail and e-mail surveys with recorded
response rates; we had to read the contents of every single article in order to decide whether or not a study should be
selected. The most important inclusion criterion in this meta-analysis was that a study must have compared the e-mail and
mail survey modes and must have reported the response rates of the two survey modes. For the studies collected in this meta-
analysis, mail survey respondents were contacted by regular mail, and e-mail survey respondents were contacted
by e-mail. In addition, a study should have the information on some study characteristics (i.e. random assignment,
population types, incentives, and follow-up reminders) related to these two survey modes, or such information could
be reasonably inferred from the description of the study or through our contact with the authors. It turned out that,
of all the studies that directly compared e-mail and mail survey response rates, none were excluded for lack of
information on study features. One article almost fulfilled our criteria for inclusion, but we could not
confirm whether the internet survey mentioned in the study was an e-mail or a web survey, and the author's outdated e-mail
address did not allow us to contact the author directly.
Initially, we identified 854 articles/papers (approximately 20% were located through the Google search) spanning
1992 to 2006. Of those 854 articles, 825 were excluded because they either did not compare e-mail and mail surveys
or did not report sufficient information (e.g., it was not clear whether an "internet survey" mentioned in a study was an
e-mail survey or a web survey). Excluded were studies that administered only mail surveys, only e-mail surveys, or only
web surveys; mixed-mode surveys (i.e., respondents could choose among different response options); disk-by-mail
surveys; studies that solicited people on the street to fill out paper surveys or handed paper surveys to people in
classrooms; studies that administered e-mail and paper surveys at different times or to different populations (thus
making the e-mail and mail survey populations less comparable or non-comparable); and studies for which we could
not confirm whether an online survey was an e-mail survey or a web survey. Because the terms "internet-based
survey", "online survey", and "electronic survey" are often
used in many articles, and may not actually mean e-mail survey, we carefully examined each article, and contacted
authors when necessary. Only studies that compared e-mail survey (i.e. survey embedded or attached in an e-mail
notification) and mail survey (i.e. paper surveys sent and returned through mail) were included in our meta-analysis
sample. In total, we had 35 independent comparisons between e-mail and mail survey response rates from 29 articles,
because three studies (Akl et al., 2005; Metha & Sivadas, 1995; Treat, 1997) contained more than one independent
comparison. The survey topics of our collected studies covered a broad spectrum of disciplines, such as education,
business, psychology, policy, medicine/health, etc.

2.2. Computation of response rate

Response rate computation can be tricky because each study may use its own definition of response rate. As Groves
and Lyberg (1988) noted, “there are so many ways of calculating response rates that comparisons across surveys are
fraught with misinterpretations”. In order to make fair comparisons across studies, it is important to standardize the
computation of response rate. This meta-analysis adopts the computation of minimum response rate, which is defined
as RR1 in the Standard Definitions of Outcome Rates for Surveys (AAPOR, 2006):
RR1 = I / [(I + P) + (R + NC + O) + (UH + UO)]    (1)

where RR1 = minimum response rate, I = complete survey, P = partial survey, R = refusal and break-off, NC = non-contact,
O = other, UH = unknown if household/occupied housing unit, and UO = unknown, other.
In the Standard Definitions of Outcome Rates for Surveys (AAPOR, 2006), there are six alternative ways to compute
response rates (i.e. RR1–RR6). According to the definition of RR1 (AAPOR, 2006), minimum response rate (RR1) is
essentially the number of returned (complete) surveys divided by the total number of surveys sent out, regardless of
different reasons for non-returns (e.g., refusal, non-contact, unknown if household occupied, etc.). The computation
of minimum response rate RR1 requires information that is common and can be obtained accurately from all collected
studies. It also gives researchers an estimate of response rate that is not overly optimistic. The use of minimum response
rate as described here not only avoids uncertainties in possible miscalculations about eligible survey cases and cases
of unknown eligibility, but also makes response rates comparable across studies and survey modes.
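As a small illustration (not taken from the original article), the sketch below shows how RR1 as defined in Eq. (1) could be computed; the function simply mirrors the AAPOR disposition codes listed above, and the example counts are invented for illustration only.

```python
def rr1(I, P, R, NC, O, UH, UO):
    """AAPOR minimum response rate (RR1), Eq. (1): completed surveys divided by
    all cases fielded, regardless of the reason for non-return."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

# Illustrative disposition counts for 1,000 surveys sent out (not from any study):
# 330 completed, 20 partial, 50 refusals, 400 non-contacts, 10 other,
# 150 unknown if household, 40 unknown other.
print(rr1(I=330, P=20, R=50, NC=400, O=10, UH=150, UO=40))  # -> 0.33
```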

2.3. Coding of study features as variables

Meta-analysis is a technique that quantitatively synthesizes findings from individual studies and explores the study
features that may explain the inconsistent findings across individual studies. It was originally proposed to make
quantitative sense out of the explosive growth of the research literature of social sciences (Glass, 1977). Our major
outcome variable was response rate difference between e-mail survey and mail survey. To understand what study
features might have contributed to the inconsistent response rate differences between e-mail and mail survey modes in
different studies, we recorded and coded five salient study features: (a) whether a study is a journal article or a conference
paper; (b) whether potential respondents in a study were randomly assigned to receive e-mail or mail surveys; (c) what
type of population was involved; (d) whether incentive was provided; and (e) number of follow-up reminders for
non-respondents. The outcome variable of interest and these five study features were coded as shown in Table 1.
These five independent variables (i.e., study features) were chosen both for their perceived relevance in survey
research, and for the fact that the information for those variables could be obtained from the articles or by contacting
the author(s) directly. In terms of survey designs, follow-up reminders and incentives have long been recognized as
important factors for survey response rates (Armstrong, 1975; Dillman, Christenson, Carpenter, & Brooks, 1974),
and naturally, they were the top candidates for being considered as relevant study features in our meta-analysis. For
meta-analysis in general, and for survey experiments in particular, there has been some concern (e.g. Cook et al., 2000;
Rosenthal & Rubin, 1982) about the “publication bias” (i.e., “file drawer” problem) related to published studies that
may potentially bias the results of response rates between published and unpublished studies. For this reason, we used
a code to indicate if a study result is from a published article or an unpublished source (e.g., conference papers).
Methodologically, for a study comparing mail and e-mail surveys, whether or not random assignment is used for
assigning participants into the two survey modes could potentially play an important role for the validity of the findings
(e.g. Cook, 1999; Fraker & Maynard, 1987; LaLonde, 1986). For this reason, this study feature was coded for each
study used in this meta-analysis. In survey literature, respondent characteristics have often been perceived as playing a
role in survey response rate. In the discussion of online data collection, college population has often been distinctively
separated from non-college populations both in terms of internet coverage (e.g., Couper, 2000), and in terms of whether
survey findings from a college population can be generalized (McCabe, 2004). For these considerations, we included
a study feature to code the respondent type in a particular study. In the present study, "college population" includes
students and faculty members in the college environment. "Professionals" are white-collar professionals such as doctors,
directors, and managers. "Employees" are wage earners such as administrative staff or workers in organizations.
"General population" includes general householders and consumers.

Table 1
Coding of study features

Response rate of e-mail survey: a continuous variable (proportion)
Response rate of paper survey: a continuous variable (proportion)
Total sample size: a continuous variable
Article type: 0 = journal article; 1 = conference paper
Random assignment: 0 = no random assignment into e-mail or mail survey conditions; 1 = random assignment used for assignment into e-mail or mail survey conditions
Population type: 0 = college population (e.g., college/university students and faculty); 1 = professionals (e.g., doctors, directors, managers); 2 = employees (e.g., staff, school teachers, workers); 3 = general population (e.g., general householders)
Incentive: 0 = no incentive provided; 1 = some type of incentive provided
Follow-up reminders: 0 = no follow-up reminder; 1 = one follow-up reminder; 2 = two follow-up reminders; 3 = three follow-up reminders
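As a concrete (purely hypothetical) illustration of this coding scheme, a single comparison could be stored as a record like the one below; all field values are invented and do not describe any particular study in the sample.

```python
from dataclasses import dataclass

@dataclass
class Comparison:
    """One e-mail vs. mail comparison, coded according to Table 1."""
    email_rr: float    # response rate of e-mail survey (proportion)
    mail_rr: float     # response rate of paper survey (proportion)
    n: int             # total sample size
    article_type: int  # 0 = journal article, 1 = conference paper
    rand_assign: int   # 0 = no random assignment, 1 = random assignment used
    pop_type: int      # 0 = college, 1 = professionals, 2 = employees, 3 = general
    incentive: int     # 0 = no incentive, 1 = some type of incentive
    reminders: int     # 0, 1, 2, or 3 follow-up reminders

# Hypothetical record; all values below are invented for illustration only.
example = Comparison(email_rr=0.30, mail_rr=0.55, n=800, article_type=0,
                     rand_assign=1, pop_type=2, incentive=0, reminders=1)
print(example.email_rr - example.mail_rr)  # the outcome variable d
```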

3. Data analysis

3.1. Response rate difference as effect size measure

In a typical meta-analytic study, the outcome variable of interest from individual studies may not be comparable in
terms of their measurement scale. It is necessary to convert the outcome variable of interest into a common effect size
measure (e.g., standardized mean difference) so that findings from different studies can be aggregated and analyzed.
In our meta-analysis of response rate differences between e-mail surveys and mail surveys, however, the outcome
variable (i.e., response rate difference as a proportion) was already comparable across all the comparative studies, and
its measurement scale was already well-understood. Response rate difference itself could be treated as the effect size
measure, and it was not necessary to convert this variable into another form. In fact, any conversion of the variable
would make it more difficult to understand. Throughout the study, we use “d” to represent the outcome variable of
interest, that is, the response rate difference between e-mail and mail surveys from a study.

3.2. Weighting based on study sample size

Sample size typically plays an important role in the stability of a sample statistic: a larger sample size generally
produces a more stable sample estimate. As a result, sample size is routinely taken into consideration in meta-analytic
studies. Procedurally, in a meta-analysis, sample size is used to construct a weighting variable, which is then applied
to an effect size measure in each study. The accumulated effect size measure across studies is a weighted effect size
measure, and it gives more weight to an outcome from a large-sample study, and less weight to an outcome from a
small-sample study. The weighting procedure based on study sample size reduces the potential bias introduced by
unstable estimates derived from small samples (Hunter & Schmidt, 1990; Rosenthal & Rubin, 1982; Yin & Fan,
2003).

The present study adopted such a weighting procedure, and each effect size (i.e., response rate difference between
e-mail and mail surveys) is weighted using the following weight based on the study sample size and the cumulative
sample size:

w_i = n_i / Σ n_i

where n_i is the sample size of the comparative study from which a response rate difference is obtained, and Σ n_i is the
sum of the sample sizes across the studies used in this meta-analysis.
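As a minimal sketch of this weighting scheme (our own illustration, not the authors' computation code), the snippet below applies w_i = n_i / Σ n_i to three (d, n) pairs taken from Appendix A; the full analysis uses all 35 comparisons.

```python
# Sample-size weighting sketch: w_i = n_i / sum(n_i), applied to each study's
# response rate difference d_i. The (d, n) pairs are from Appendix A
# (Treat, 1997; Kittleson, 1995; Sax et al., 2003).
studies = [(-0.34, 4969), (-0.48, 306), (0.16, 4387)]  # (d_i, n_i)

total_n = sum(n for _, n in studies)
weights = [n / total_n for _, n in studies]

weighted_d = sum(w * d for w, (d, _) in zip(weights, studies))
unweighted_d = sum(d for d, _ in studies) / len(studies)
print(round(weighted_d, 3), round(unweighted_d, 3))
```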
In Section 4, we present three complementary subsections: (1) unweighted results of a preliminary descriptive
analysis; (2) results related to the effects of study features from a weighted analysis based on a fixed-effects general
linear model; and (3) results from a random-effects model and its forest plot analysis.

4. Results and discussion

4.1. Preliminary descriptive analysis

Table 2 presents the preliminary descriptive analysis results. Of the 35 comparison results (from 29 studies
containing 35 independent comparisons), the sample sizes varied considerably, with the smallest sample size
being 43 and the largest 8253; the average sample size was 1519. This degree of variation in
sample size suggests that it was potentially important to weight each study effect size by its sample size, because
a response rate from a sample of 43, in general, would not be as stable as a response rate from a sample of 8253.
Across these 35 comparison results, the unweighted average response rate of mail survey was higher than that of
e-mail survey by around 20% (53% for mail survey, and 33% for e-mail survey). The response rate difference also
varied considerably. At one extreme, mail survey response rate was higher than e-mail survey response rate by 48%
(Kittleson, 1995; sample size = 306). At the other extreme, mail survey response rate was lower than e-mail survey
response rate by 31% (Parker, 1992; sample size = 140). The standard deviation of the response rate difference was
0.18, representing considerable variation in the response rate difference between e-mail and mail surveys across these
35 study comparisons. Such variation in the response rate differences between e-mail and mail survey modes across the
comparative studies naturally led to the research question: what study features could have contributed to this variation
of the results across the studies?
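To make this descriptive step concrete, here is a minimal sketch of how the kind of summary in Table 2 could be reproduced; the response rate pairs are transcribed from Appendix A, and only four of the 35 comparisons are shown here as an example.

```python
import statistics as st

# Response rate pairs (e-mail, mail) transcribed from Appendix A; the published
# Table 2 summarizes all 35 comparisons.
rates = {
    "Parker (1992)":     (0.68, 0.38),
    "Kittleson (1995)":  (0.28, 0.76),
    "Couper (1999)":     (0.43, 0.71),
    "Sax et al. (2003)": (0.32, 0.16),
}

d = [email_rr - mail_rr for email_rr, mail_rr in rates.values()]  # d = e-mail - mail
print(round(st.mean(d), 2), round(st.stdev(d), 2),
      round(min(d), 2), round(max(d), 2))
```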

4.2. Weighted analyses of fixed-effects general linear model and the effects of study features

Table 3 presents a summary of the general linear model analysis based on weighted effect size and the results
from analyzing the effects of five study features. In meta-analysis, when there is considerable variation in the effect
size measure across the studies, we would be interested in understanding possible reasons for such variation, and we
would like to explore if any study feature is associated with such variation. For this purpose, we conducted analysis
to investigate if any of the five study features was associated with the variation of the effect sizes (i.e., response rate
difference between e-mail survey and mail survey) across the studies.
We used the general linear model analysis to investigate the extent to which study features could have explained the
variation in the effect sizes across the studies.

Table 2
Preliminary descriptive analysis

Variable                          Mean     STD     Min      Max

Sample size                       1519     2071    43       8253
Response rate
  E-mail survey                   0.33     0.22    0.05     0.85
  Mail survey                     0.53     0.21    0.11     0.85
  Response rate difference (d)a   −0.20    0.18    −0.48    0.31

a d = e-mail survey response rate − mail survey response rate.

Table 3
E-mail and mail survey response rate differences (d) and effects of study features
(Weighted response rate analysis)

Study features              η² a     n b    Weighted d̄ c    Unweighted d̄    Σn_i d

Overall                              35     −.23            −.20            53,158

Article type                .01
  Journal article                    27     −.25            −.19            38,863
  Conference paper                    8     −.17            −.22            14,295
Random assignment           .02
  No random assignment               17     −.22            −.20            40,501
  With random assignment             18     −.26            −.20            12,657
Population types            .36**
  College population                 12     −.01            −.17            10,285
  Professionals                       7     −.28            −.23            22,092
  Employees                          11     −.29            −.23            16,981
  General population                  5     −.23            −.15             3,800
Incentive                   .02
  No incentive                       32     −.23            −.20            49,663
  Some incentive                      3     −.25            −.19             3,495
Follow-up reminders         .09*
  No reminder                        10     −.27            −.20            21,839
  One reminder                        8     −.26            −.23             5,657
  Two reminders                       8      .02            −.13             7,423
  Three reminders                     9     −.27            −.23            18,239

a Proportion of variance in d (i.e., response rate difference between e-mail and paper surveys) associated with this study feature.
b Number of effect sizes involved.
c A positive d indicates that the e-mail survey has a higher response rate than the paper survey (d = e-mail survey response rate − mail survey response rate).
d Cumulative sample size (Σn_i) across the studies grouped under each study characteristic.
* Statistically significant at α = 0.05.
** Statistically significant at α = 0.01.

The dependent variable was the weighted effect size (i.e., the response rate difference between e-mail and mail surveys, weighted by the sample size of the study), and the independent variables
were the five study features described previously (see Table 1). Table 3 presents the summary results of the analyses
about these study features. Overall, when no study feature was considered, e-mail survey response rate was lower than
mail survey response rate (23% lower for weighted average, and 20% lower for unweighted average). The weighted
general linear model analysis revealed that 46% of the variation (R² = .46) in the weighted effect sizes was associated with
the five study features. We partitioned the effect size variation into the unique proportion (i.e., η² = Type III sum of
squares/total sum of squares) associated with each of the five study features; the Type III sum of squares captures the unique
proportion of variance in the outcome variable associated with a factor, given that all other factors are already in the
general linear model. About 36% of the variation in the weighted effect sizes could be explained by population type,
and 9% by follow-up reminders. The other three study features turned out not to be statistically useful in explaining
the variation in the response rate differences between e-mail and mail surveys across the studies.
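For readers who wish to reproduce this kind of analysis, the sketch below shows one possible way to fit a weighted fixed-effects general linear model and partition the variance with Type III sums of squares using statsmodels; the file name and column names are our own assumptions, not the authors' actual data set or code.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data set with one row per comparison, coded as in Table 1;
# the file name and column names are assumptions.
df = pd.read_csv("comparisons.csv")   # d, n, article_type, rand_assign, pop_type, incentive, reminders
df["w"] = df["n"] / df["n"].sum()     # sample-size weights, w_i = n_i / sum(n_i)

# Weighted fixed-effects linear model with the five study features as categorical factors.
model = smf.wls(
    "d ~ C(article_type) + C(rand_assign) + C(pop_type) + C(incentive) + C(reminders)",
    data=df,
    weights=df["w"],
).fit()

# Type III sums of squares; eta-squared = Type III SS / total SS, as described in the text.
aov = anova_lm(model, typ=3)
print(model.rsquared)                                         # overall R^2 for the five features
print(aov.assign(eta_sq=aov["sum_sq"] / model.centered_tss))  # per-feature eta-squared
```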

4.3. Statistically significant study features

4.3.1. Population types


There has been a long history of mail surveys, and most people are familiar with this type of data collection method. The e-mail
survey, however, requires that respondents feel reasonably comfortable with the technology. Such considerations led
us to believe that different populations may react to e-mail surveys differently, depending on their exposure to, and their
comfort level with, e-mail technology. We coded the respondents of each study into one of four categories (college
population, professionals, employees, and general population), as discussed in Section 2. Consistent with our
expectation, this study feature showed a strong association with the variation of effect sizes (i.e., response rate difference
between e-mail and mail surveys) across the studies, accounting for about 36% of variance in the effect sizes across the
studies. The weighted e-mail survey average response rate was lower than the mail survey average response rate (1, 28,
29 and 23% lower, respectively, for the four population categories). However, the response
rate difference between the two survey modes in the studies involving college populations is almost negligible (1%),
very different from the large response rate differences in the studies involving the other three types of populations (23% or more).
Our analysis here showed that, for the comparative studies involving college populations, the weighted average
response rate difference was 1% in favor of mail surveys, and the unweighted average response rate difference was
17%, also in favor of mail surveys. This gap between weighted and unweighted results reveals that, in our collection of
the comparative studies involving college populations, the studies having relatively higher e-mail response rates (i.e.,
smaller response rate difference between e-mail and mail surveys) also have larger sample sizes, thus reducing the
response rate difference based on the weighted average.
When surveying a college population, a high response rate of mail survey is generally expected, because higher
mail survey response rates have been associated with higher education level or higher social status (e.g. Clausen
& Ford, 1947; Dalecki, Ilvento, & Moore, 1988; Donald, 1960). In addition, college populations are also expected
to be the most likely to respond to e-mail surveys, because of their familiarity with e-mail technology, which has
long been widely available in college/university environment. Such familiarity with internet and e-mail can be the
result of their experiences in college teaching (Pitt, 1996; Wild & Winniford, 1993), experience in teacher-student
mentoring (Parnell, 1997), and in academic discussion groups (Berge & Collins, 1995; Huff & Sobiloff, 1993), etc.
Even in a general online environment, education has been found to be highly and positively correlated with internet use
(Hindman, 2000; Hoffman & Novak, 1997; Katz & Aspden, 1997; Sparrow & Vedantham, 1995). Nevertheless, the
fact that the response rate difference was still in favor of mail surveys for college populations supports the point made by
Meinhold and Gleiber (2005, p. 4): “the fact that college freshmen make considerable use of the internet is, by itself,
insufficient to argue its desirability over the traditional and more accepted forms of interviewing such as face-to-face,
telephone, and self-administered questionnaires”.

4.3.2. Follow-up reminders


It has long been suggested that follow-up reminders may help increase survey response rates, so we coded the actual
number of follow-up reminders in these comparative studies (i.e. no reminder, one reminder, two reminders, and three
reminders), as discussed in Section 2. This study feature showed a statistically significant association with the variation
of effect size measures (i.e., response rate difference between e-mail and mail surveys) across the studies, accounting
for about 9% of the variance in the effect sizes across the studies. A closer look at Table 3 reveals that the response rate
difference favors the mail survey mode in general; only under the condition of two reminders was the weighted average
response rate difference in favor of the e-mail survey, and only by 2%. Even under this condition, the unweighted average effect
size still favors the mail survey mode by 13%. This finding is difficult to explain, because in the comparative
studies that did not use two reminders (i.e. no reminder, one reminder, or three reminders), the weighted average
e-mail survey response rate was consistently and considerably lower than the mail survey response rate (27, 26 and 27%
lower for the "no reminder", "one reminder", and "three reminders" categories, respectively).
This anomaly led us to examine why the studies with two reminders were so different from studies with other
numbers of reminders. It turned out that most of the studies in this category also have response rate differences in favor of
the mail survey mode, not the e-mail survey mode. The major reason for the observed anomaly is that one study (Sax et al.,
2003) used a college population, had a response rate difference of 0.16 in favor of the e-mail survey, and had a
large sample size of 4387. This relatively large sample size, combined with a response rate difference favoring the e-mail survey, pulled
the weighted average effect size in the direction of favoring the e-mail survey. In this sense, the finding for the
condition of "two reminders" should be considered suspect, or at least inconclusive, and we would not make a general
statement that two reminders are, for some reason, more beneficial for the e-mail survey than for the mail survey.

4.4. Statistically non-significant study features

4.4.1. Article type


The study feature "article type" (i.e., whether a comparative study is a published journal article or an unpublished
conference paper) showed little effect on the variation of effect sizes across the studies. By itself, this study feature accounted
for 1% of the variation in the response rate differences across the studies. The response rate difference favored mail
survey mode by 25% (i.e., mail survey response rate was 25% higher than e-mail survey response rate, on the average)
when studies were journal articles. In those studies that were unpublished conference papers, the mail survey response rate
was higher than e-mail survey response rate by 17%. But the discrepancy (25% vs. 17%) did not exceed what can
be expected by sampling error. Although there has been well-known concern (e.g. Cook et al., 2000; Rosenthal &
Rubin, 1982) about publication bias (so-called “file drawer” problem), the findings here suggest that the response rate
difference between e-mail and mail surveys as shown in the comparative studies is not statistically related to whether
or not a study was a journal publication or a conference paper.

4.4.2. Random assignment


The study feature “random assignment” showed little effect on the variation of effect size measures. By itself,
this study feature accounted for 2% of the variation in the response rate differences across the studies. The response
rate difference favored mail survey mode by 22% (i.e., mail survey response rate was 22% higher than e-mail survey
response rate, on the average) when studies did not implement random assignment in assigning participants into either
e-mail or mail survey conditions. In those studies that implemented random assignment, the mail survey response rate
was higher than e-mail survey response rate by 26%. But the discrepancy (22% vs. 26%) did not exceed what can
be expected by sampling error. Theoretically, implementation of random assignment in a comparative study should
have made samples in e-mail and mail survey modes more comparable by controlling for other extraneous variables
that might have contributed to response rates. As a result, the effect size from the group of studies that implemented
random assignment should have better internal validity, thus their results should be more trustworthy. However, random
assignment showed little unique contribution to the variation of effect sizes across the studies collected in this meta-
analysis. Findings here suggest that there is no statistical evidence that some comparative studies with design flaws (i.e.,
non-comparable samples for e-mail and mail survey modes due to lack of random assignment) might have confounded
the comparison between e-mail and mail survey modes in their response rates.

4.4.3. Incentives
Little variation (2% of variance) in the response rate differences between e-mail and mail surveys has been accounted
for by the study feature of incentive. Statistically, the use of an incentive did not appear to be associated with the response
rate difference between e-mail and mail surveys, that is, the comparative studies that used some kind of incentive and
those that did not use any incentive were statistically similar in terms of their response rate difference between e-mail
and mail survey modes. Descriptively, in the comparative studies that used incentives, the response rate difference
between e-mail and mail surveys appears slightly larger (25% in favor of mail survey) than those comparative studies
that did not use any incentive (23% also in favor of mail surveys). But the discrepancy did not exceed what could be
expected by sampling error.
The lack of effect of incentive in accounting for the response rate difference between e-mail and mail survey modes,
however, should not be construed to suggest that incentive cannot increase survey response rate in general (e-mail or
mail survey modes). Here, we did not compare e-mail surveys with incentive versus e-mail surveys without incentive;
nor did we compare mail surveys with incentives with those without incentives. Instead, we examined studies that
compared e-mail and mail surveys, and separated them into two groups: those that used incentives versus those that did
not. The e-mail and mail survey response rate differences of these two groups of studies were compared, and it showed
that providing incentive did not statistically affect the response rate difference between e-mail and mail surveys. The
fact that only three studies used incentives might have made this analysis unreliable due to the very small number of
studies in one condition. It was also possible that providing incentive had increased the response rates in both e-mail and
mail survey modes, but the response rate difference between these two survey modes remained statistically the same.
Göritz (2006, p. 59) commented that researchers widely agree that if incentives are to increase response, they need
to be given in advance instead of being made contingent on the return of the questionnaire (e.g., Armstrong, 1975;
Church, 1993; James & Bolstein, 1992; Linsky, 1975). If this were also true for surveys based on e-mail, where it is
not easy to send a dollar bill or some other physical token of appreciation to the respondents together with the survey,
researchers would need to find a way to provide incentives for e-mail survey respondents in advance. We had only
three studies using some types of incentives, which might have made the analysis concerning the use of incentive as a
predictor for response rate differences unreliable. Future researchers should consider the potential effect of incentive
delivery for e-mail surveys.

Fig. 1. Forest plot of the random-effects model analysis of the 35 comparative study results.

4.5. Random-effects model and “forest plot” analysis

Our previous analysis used a fixed-effects model (general linear model) for the weighted effect sizes. Such a weighted
fixed-effects model may be criticized for giving too little weight to studies with small sample sizes, and too
much weight to studies with large samples. For meta-analysis, the random-effects model has also been widely used (e.g.,
Dersimonian & Laird, 1986). Here, we analyzed the same data by applying a random-effects model, using the same
approach as discussed by Dersimonian and Laird (1986). We used a graphic approach (“forest plot”) to present the
summary of this random-effects model analysis. This random-effects model analysis provided a graphic illustration
of pooled study effects (forest plot) and triangulated the findings from the previous fixed-effect general linear model.
Fig. 1 presents the forest plot from this analysis.
In Fig. 1, 35 study results (trials) with a cumulative total sample size of 53,158 were used. There was statistically
significant heterogeneity in the effect sizes of the 35 study results (χ² = 2101.85, d.f. = 34, p < 0.001). The labels on
the left side of the plot are the first author’s name and publication year of each study included in this meta-analysis.
The studies included are ordered by year of publication and author name. The effect of survey mode (e-mail vs. mail
survey) on response rate is represented by “risk ratio” (RR). The solid vertical line corresponds to no effect of survey
mode (RR = 1.0), which means that there is no difference of response rates between mail survey and e-mail survey
modes. For each study, the black dot and its associated short horizontal line represent the study's risk ratio
and its 95% confidence interval. The response rate difference (d) is computed as: d = e-mail survey rate − mail survey
rate. Those studies with higher e-mail survey response rate than mail survey response rate fall on the right side of “no
effect” vertical line, and the studies with lower e-mail survey response rate than mail survey response rate fall on the
left side of “no effect” vertical line. If a study has confidence interval (the short horizontal line) that covers the “no
effect” vertical line, it suggests that the mode difference (e-mail vs. mail survey modes) is statistically non-significant.
The overall mode effect on response rate (e-mail vs. mail survey modes) is represented by the dashed vertical line
and the associated diamond shape (♦). The dashed vertical line represents the combined mode effect (0.52), and the
horizontal tips of the diamond shape represent the 95% CI (0.43–0.64) of this estimated mode effect. Because the
dashed vertical line is on the left side of the “no effect” solid line, it indicates lower response rate in the e-mail survey
mode than in the mail survey mode in general. More specifically, the risk ratio of 0.52 for the overall mode effect on
response rate suggests a 48% reduction in the relative likelihood of survey return for the e-mail survey compared to the mail survey. The
95% confidence interval of this overall mode effect does not touch the “no effect” line, suggesting that the response rate
difference found between e-mail and mail survey modes was statistically significant. A closer look at the “forest plot”
reveals that, out of the 35 study results, only five had e-mail survey response rate higher than mail survey response
rate (on the right side of the “no-effect” vertical line). Of these five study results, only three had statistically significant
difference in favor of e-mail surveys (Donohue & Fox, 2000; Parker, 1992; Sax et al., 2003) with their confidence
intervals not covering the “no-effect” vertical line. This forest plot from random-effects model analysis agreed with
our previous conclusions based on the fixed-effects general linear model that e-mail survey response rate is statistically
lower than mail survey response rate in the comparative studies that directly compared these two survey modes.
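To illustrate the kind of computation behind a forest plot like Fig. 1, the sketch below pools log risk ratios with the DerSimonian and Laird (1986) method for three of the comparisons; the raw counts are back-calculated from the Appendix A response rates under the assumption of an even split of each total sample between the two survey modes, so they are approximations for illustration only, not the authors' data.

```python
import math

# Each tuple: (e-mail returns, e-mail sent, mail returns, mail sent). Counts are
# back-calculated from Appendix A response rates assuming an even split of the
# total sample between the two modes (an assumption made only for illustration).
trials = [
    (43, 153, 116, 153),       # Kittleson (1995): 28% vs. 76%
    (1774, 4126, 2930, 4127),  # Couper (1999): 43% vs. 71%
    (702, 2194, 351, 2193),    # Sax et al. (2003): 32% vs. 16%
]

y, v = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)              # risk ratio, e-mail vs. mail
    y.append(math.log(rr))
    v.append(1/a - 1/n1 + 1/c - 1/n2)     # variance of the log risk ratio

w = [1 / vi for vi in v]                  # fixed-effect (inverse-variance) weights
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)  # DerSimonian-Laird between-study variance

w_re = [1 / (vi + tau2) for vi in v]        # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(round(math.exp(pooled), 2), round(lo, 2), round(hi, 2))  # pooled RR and 95% CI
```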

5. Conclusions and other considerations

Response rate is important in survey research, because a low response rate has the potential to introduce non-
response bias, thus resulting in misleading information about the issues covered in a survey. Some studies (e.g.
Bachmann, Elfrink, & Vazzana, 1996; Mehta and Sivadas, 1995) have indicated that e-mail and mail survey response
rates are comparable, and that the gap may be narrowing between e-mail and mail surveys (Smith, 1997). Numerous
studies have been conducted to compare e-mail and mail survey modes in terms of their respective response rates.
The findings from these individual studies have been inconsistent. This study quantitatively synthesized the findings
of these individual studies by meta-analyzing these comparative studies within the last 10 years.

Our meta-analysis showed that the e-mail survey mode generally has a considerably lower response rate (about 20%
lower on average) than the mail survey mode. For all types of populations coded in this meta-analysis, the response rate
was higher for mail surveys, although the response rate difference between the two survey modes was considerably
smaller or even negligible in the studies involving college populations. Follow-up reminder as a study feature did not
appear to be a reliable factor that could account for the variation in the response rate differences between the two survey
modes across the studies, even though it was a statistically significant predictor. A closer look at this variable led us to
believe that there was some anomaly in this finding, which resulted from one study with a very large sample size
that pulled the response rate difference between e-mail and mail surveys in the direction of favoring the e-mail survey.
As a result, we would consider this finding unreliable.
Other study features (i.e., article type, implementation of random assignment of survey respondents into e-mail
and mail survey modes, and use of incentive) did not prove to be statistically useful in accounting for the variation
of e-mail and mail survey response rate differences across the comparative studies. However, findings here cannot be
interpreted to mean that these study features are not useful for increasing survey response rate in general, because we
never examined that issue in this paper.
For college populations (i.e. college students and faculty), their familiarity with, and their frequent use of, internet
and e-mail technology did not result in a higher e-mail survey response rate than mail survey response rate, although the
response rate difference did become considerably smaller in the studies involving college populations as compared
with the studies involving other populations. This finding suggests that for college populations, as far as response rate
is concerned, e-mail survey is reasonably comparable with mail surveys.
In general, if response rate is the major concern, the findings from this meta-analysis suggest that the traditional mail
survey is superior to e-mail survey, regardless of other survey characteristics (e.g., target population, use of reminders
for non-respondents, use of incentives, etc.). We suspect that the consistently lower response rate in e-mail survey
compared to that of mail survey may partially be the result of prevalent junk/spam e-mails nowadays, which may have
caused many potential respondents to ignore legitimate e-mail surveys. This speculation, although reasonable, is not
testable in the current study.
Lower response rate of e-mail survey, however, does not necessarily mean that e-mail survey should not have its
place in the repertoire of survey researchers. E-mail survey certainly has its own advantages, such as shorter response
time, considerably lower survey cost, capability of reaching a large sample of respondents, knowledge about whether an
e-mail survey has been delivered to the correct e-mail address, etc. These unique characteristics of e-mail survey make
it a viable tool for survey researchers in some research situations, despite its inferiority in terms of survey response
rate currently shown in the recent literature.

5.1. Limitations

This meta-analysis has some obvious limitations. First, the pool of studies used for this meta-analysis was relatively
small. This means that we had a small sample size for statistical analysis (e.g., general linear model analysis). The
small sample size made it very difficult or even impossible to conduct some meaningful analyses (e.g., testing for
interactions among the coded study features), because doing so would result in having too many independent variables
(five study feature variables plus many interaction terms) for the limited sample size (i.e., the number of study results
in the meta-analysis). As a result, we did not examine the potential interactions among the five study features.
Second, it is reasonable to expect that survey response rate is directly related to the salience of the survey for the
respondents. It is difficult to imagine that a survey that has no relevance to the respondents would have a high response
rate. Ideally, the salience of a survey should be coded as a variable to be analyzed; this was suggested by a reviewer,
and we agree. Unfortunately, we found it almost impossible to code this information reliably, either
because the article (paper) of a survey study did not provide sufficient information, or because we found such coding
too subjective on our part. As a result, we did not have this information as a study feature in our analysis.
The findings of this study should be interpreted within the context of these and some other limitations. As more
studies comparing e-mail and mail surveys accumulate in the future, a larger pool of usable studies would emerge that
will permit future synthesis of these studies. More importantly, with the development of internet and e-mail technology,
with younger generations more fully exposed to the digital age, the survey landscape may change considerably in the
direction of favoring digital medium (e.g., e-mail survey, web survey) over more traditional medium (e.g., mail survey)
in the not-so-distant future. For this and other reasons, the findings from this study should be considered tentative at best.

Appendix A. Studies used in the analysis

Study                                                   Sample size   E-mail survey        Mail survey          Population
                                                                      response rate (%)    response rate (%)

Parker (1992)                                           140           68                   38                   Employees of a corporation
Schuldt and Totten (1994)                               543           12                   57                   College faculty
Kittleson (1995)                                        306           28                   76                   College student and faculty
Mehta (1995)                                            262           40                   44                   General population from online news groups
Mehta (1995)                                            229           61                   79                   General population from online news groups
Tse et al. (1995)                                       400           6                    27                   Administrative staff of a university
Bachmann et al. (1996)                                  448           42                   66                   Business school deans and chairpersons
Comley (1996)                                           2221          9                    44                   Magazine subscribers
Treat (1997)                                            4969          37                   71                   Employees of federal agencies
Treat (1997)                                            790           61                   81                   Employees of federal agencies
Treat (1997)                                            531           60                   74                   Employees of federal agencies
Treat (1997)                                            431           55                   76                   Employees of federal agencies
Treat (1997)                                            477           45                   75                   Employees of federal agencies
Tse (1998)                                              500           7                    52                   Administrative staff of a university
Weible and Wallace (1998)                               400           24                   35                   College faculty
Zelwetro (1998)                                         400           35                   33                   Directors and managers from environmental organizations
Couper (1999)                                           8253          43                   71                   Employees of federal agencies
Jones (1999)                                            300           34                   72                   Administrative staff of universities
Shermis (1999)                                          1170          30                   36                   Educational researchers
Bachmann et al. (2000)                                  500           15                   46                   Business school deans and chairpersons
Donohue (2000)                                          1694          19                   11                   Journal editors
Hollowell et al. (2000)                                 4065          14                   60                   Urologists
Kim et al. (2000)                                       4065          6                    42                   Urologists
Paolo, Bonaminio, Gibson, Partridge, & Kallail (2000)   146           24                   41                   Medical college students
Smee (2000)                                             489           12                   49                   College faculty
Harewood, Yacavone, Locke, & Wiersema (2001)            43            70                   85                   Patients
Kim, Gerber, Patel, Hollowell, & Bales (2001)           4065          5                    42                   Urologists
Ranchhod (2001)                                         1000          5                    19                   Marketing executives
Boyer, Olson, Calantone, & Jackson (2002)               1045          37                   42                   Online customers
Fraze (2003)                                            190           27                   60                   Secondary agricultural science teachers
Sax et al. (2003)                                       4387          32                   16                   College students
Seguin, Godwin, MacDonald, & McCall (2004)              1598          34                   48                   Physicians
Akl et al. (2005)                                       119           63                   80                   College student and faculty
Akl et al. (2005)                                       83            85                   81                   College student and faculty
Stewart (2005)                                          6899          7                    21                   Contractors in the flooring business

Note: the multiple entries for Mehta (1995), Treat (1997), and Akl et al. (2005) represent multiple independent comparisons in the same study. Effect
size = e-mail survey response rate − mail survey response rate.

References

*Akl, E. A., Maroun, N., Klocke, R. A., Montori, V., & Schünemann, H. J. (2005). Electronic mail was not better than postal mail for surveying
residents and faculty. Journal of Clinical Epidemiology, 58(4), 425–429.
American Association for Public Opinion Research (2006). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys,
Ann Arbor, Michigan: Author, http://www.aapor.org/pdfs/standarddefs 4.pdf.
Andersen, P. A., & Blackburn, T. R. (2004). An experimental study of language intensity and response rate in E-mail surveys. Communication
Reports, 17(2), 73–84.
Armstrong, J. S. (1975). Monetary incentives in mail surveys. Public Opinion Quarterly, 39, 111–116.
*Bachmann, D., Elfrink, J., & Vazzana, G. (1996). Tracking the progress of e-mail vs. snail mail. Marketing Research, 8(2), 31–35.
*Bachmann, D., Elfrink, J., & Vazzana, G. (2000). E-mail and snail mail face off in rematch. Marketing Research, 11(4), 10–15.
Berge, Z. L., & Collins, M. (1995). Computer mediated scholarly discussion groups. Computers and Education, 24(3), 183–189.
Bickart, B., & Schmittlein, D. (1999). The distribution of survey contact and participation in the United States: Constructing a survey-based estimate.
Journal of Marketing Research, 36(2), 286–294.
*Boyer, K. K., Olson, J. R., Calantone, R. J., & Jackson, C. (2002). Print versus electronic surveys: A comparison of two data collection methodologies.
Journal of Operations Management, 20(4), 357–373.
Church, A. H. (1993). Estimating effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57(1), 62–
79.
Clausen, J., & Ford, R. N. (1947). Controlling bias in mail questionnaires. Journal of the American Statistical Association, 42(1947), 497–
511.
Claycomb, C., Porter, S. S., & Martin, C. L. (2000). Riding the wave: Response rates and the effects of time intervals between successive mail
survey follow-up efforts. Journal of Business Research, 48, 157–162.
*Comley, P. (1996). The use of internet as a data collection method. First ESOMAR paper on Internet data collection. Published at ESOMAR/EMAC
Symposium, November.

* Indicates a primary study used in this meta-analysis.



Cook, T. (1999). Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation
in American schools of education. Paper presented at the Conference on Evaluation of Educational Policies, American Academy of Arts and
Sciences, Cambridge, Mass.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in e-mail- or internet-based surveys. Educational and Psychological
Measurement, 60(6), 821–836.
Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464–494.
*Couper, M. P., Blair, J., & Triplett, T. (1999). A comparison of mail and e-mail for a survey of employees in federal statistical agencies. Journal
of Official Statistics, 15(1), 39–56.
Dalecki, M. G., Ilvento, T. W., & Moore, D. E. (1988). The effects of multi-wave mailings on the external validity of mail surveys. Journal of the
Community Development Society, 19, 51–70.
Dersimonian, R., & Laird, N. M. (1986). Meta-analysis in clinical trials. Journal of Controlled Clinical Trials, 7, 177–188.
Dillman, D. A., Christenson, J. A., Carpenter, E. H., & Brooks, R. (1974). Increasing mail questionnaire response: A four-state comparison. American
Sociological Review, 39, 744–756.
Dommeyer, C. J., & Moriarty, E. (2000). Comparing two forms of an email survey: Embedded vs. attached. International Journal of Market Research,
42(1), 39–50.
Donald, M. N. (1960). Implications for nonresponse for the interpretation of mail questionnaire data. Public Opinion Quarterly, 24, 99–114.
*Donohue, J. M., & Fox, J. B. (2000). A multi-method evaluation of journals in the decision and management sciences by US academics. Omega-
International Journal of Management Science, 28(1), 17–36.
Enticott, G. (2002). Using electronic research methodologies in policy research. CLRGR paper no. 9.
Fox, R. J., Crask, M. R., & Kim, J. (1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52, 467–491.
Fraker, T., & Maynard, R. (1987). Evaluating comparison group designs with employment related programs. Journal of Human Resources, 22,
194–227.
*Fraze, S., Hardin, K., Brashears, T., Smith, J., & Lockaby, J. (2002). The effects of delivery mode upon survey response rate and perceived attitudes
of Texas agri-science teachers. Paper presented at the National Agricultural Education Research Conference, December 11–13, Las Vegas, NV.
Glass, G. V. (1977). Integrating findings: The meta-analysis of research. Review of Research in Education, 5, 351–379.
Göritz, A. S. (2006). Incentives in web studies: Methodological issues and a review. International Journal of Internet Science, 1(1), 58–70.
Groves, R. M. (2002). Survey nonresponse. New York: Wiley.
Groves, R. M., & Lyberg, L. E. (1988). An overview of nonresponse issues in telephone surveys. In R. M. Groves, et al. (Eds.), Telephone survey methodology. New York: John Wiley & Sons.
*Harewood, G. C., Yacavone, R. F., Locke, G. R., & Wiersema, M. J. (2001). Prospective comparison of endoscopy patient satisfaction surveys:
E-mail versus standard mail versus telephone. The American Journal of Gastroenterology, 96(12), 3312–3317.
Hindman, D. B. (2000). The rural-urban digital divide. Journalism & Mass Communication Quarterly, 77, 549–560.
Hoffman, D. L., & Novak, T. P. (1997). A new paradigm for electronic commerce. The Information Society, 13(1), 43–54.
*Hollowell, C. M., Patel, R. V., Bales, G. T., & Gerber, G. S. (2000). Internet and postal survey of endourologic practice patterns among American
urologists. Journal of Urology, 163, 1779–1782.
Hox, J. J., & De Leeuw, E. D. (1994). A comparison of nonresponse in mail, telephone, and face-to-face surveys: Applying multilevel modeling to meta-analysis. Quality and Quantity, 28, 329–344.
Huff, C., & Sobiloff, B. (1993). Macpsych: An electronic discussion list and archive for psychology concerning the Macintosh computer. Behavior Research Methods, Instruments, & Computers, 25(1), 60–64.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park: Sage.
Ilieva, J., Baron, S., & Healey, N. M. (2002). Online surveys in marketing research: Pros and cons. International Journal of Market Research, 44(3), 361–376.
James, J. M., & Bolstein, R. (1992). Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly, 56, 442–453.
*Jones, R., & Pitt, N. (1999). Health surveys in the workplace: Comparison of postal, email and World Wide Web methods. Occupational Medicine,
49(8), 556–558.
Kanuk, L., & Berenson, C. (1975). Mail surveys and response rates: A literature review. Journal of Marketing Research, 12, 440–453.
Katz, J., & Aspden, P. (1997). Motivations for and barriers to Internet usage: Results of a national public opinion survey. Paper presented at the
24th annual Telecommunications Policy Research Conference, Solomons, Maryland.
*Kim, H. L., Hollowell, C. M. P., Patel, R. V., Bales, G. T., Clayman, R. V., & Gerber, G. S. (2000). Use of new technology in endourology and laparoscopy by American urologists: Internet and postal survey. Urology, 56(5).
*Kim, H. L., Gerber, G. S., Patel, R. V., Hollowell, C. M., & Bales, G. T. (2001). Practice patterns in the treatment of female urinary incontinence:
A postal and internet survey. Urology, 57(1), 45–48.
*Kittleson, M. J. (1995). An assessment of the response rate via the postal service and email. Health Values, 18(2), 27–29.
LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. American Economic Review, 76,
604–620.
Linsky, A. S. (1975). Stimulating responses to mailed questionnaires: A review. Public Opinion Quarterly, 39, 82–101.
McCabe, S. E. (2004). Comparison of web and mail surveys in collecting illicit drug use data: A randomized experiment. Journal of Drug Education,
34(1), 61–72.
McDonald, H., & Adam, S. (2003). A comparison of online and postal data collection methods in marketing research. Marketing Intelligence and
Planning, 21(2), 85–95.

Meinhold, S. S., & Gleiber, D. W. (2005). Using the internet to survey college students about their law school plans. A seminar report from Law
School Admission Council (LSAC).
*Metha, R., & Sivadas, E. (1995). Comparing response rates and response content in mail versus electronic mail surveys. Journal of the Market Research Society, 37(4), 429–439.
*Paolo, A. M., Bonaminio, G. A., Gibson, C., Partridge, T., & Kallail, K. (2000). Response rate comparisons of e-mail and mail distributed student
evaluations. Teaching and Learning in Medicine, 12, 81–84.
*Parker, L. (1992). Collecting data the e-mail way. Training and Development, 52–54.
Parnell, J. (1997). A Mannequin Without Clothes. Paper presented to BYTE Conference, Maastricht University, December.
Pitt, M. (1996). The use of electronic mail in undergraduate teaching. British Journal of Educational Technology, 27(1), 45–50.
*Ranchhod, A., & Zhou, F. (2001). Comparing respondents of e-mail and mail surveys: Understanding the implications of technology. Marketing
Intelligence and Planning, 19, 254–262.
Rosenthal, R., & Rubin, D. (1982). Comparing effect sizes of independent studies. Psychological Bulletin, 92, 500–504.
*Sax, L. J., Gilmartin, S. K., Lee, J. J., & Hagedorn, L. S. (2003). Using web surveys to reach community college students: An analysis of response
rates and response bias. Research paper presented at the annual conference of the Association of Institutional Research, Tampa, FL.
Schaefer, D. R., & Dillman, D. A. (1998). Development of a standard email methodology: Results of an experiment. Public Opinion Quarterly,
62(3), 378–397.
*Schuldt, B. A., & Totten, J. W. (1994). Electronic mail versus mail survey response rates. Marketing Research, 6(1), 36–39.
*Seguin, R., Godwin, M., MacDonald, S., & McCall, M. (2004). Email or snail mail? Randomized controlled trial on which works better for surveys.
Canadian Family Physician, 50, 414–419.
*Shermis, M. D., & Lombard, D. (1999). A comparison of survey data collected by regular mail and electronic mail questionnaires. Journal of
Business and Psychology, 14, 341–354.
*Smee, A., & Brenna, M. (2000). Electronic surveys: A comparison of e-mail, web and mail. Paper presented at ANZMAC 2000 Visionary Marketing
for the 21st Century: Facing the Challenge, November 28–December 2, 2000, School of Marketing & Management, Griffith University, Australia,
pp. 1201–1204.
Smith, C. B. (1997). Casting the net: Surveying an internet population. Journal of Computer-Mediated Communication, 3(1). Available online at http://www.ascusc.org/jcmc/vol3/issue1/.
Sparrow, J., & Vedantham, A. (1995). Inner-city networking: Models and opportunities. Journal of Urban Technology, 3(1), 19–28.
*Stewart, A. (2005). B2B study finds lingering concerns about new tech: Retailers fret over loss of personal contact, costs & security. National
Floor Trends, 7(5), 12–13.
*Treat, J. B. (1997). The effects of questionnaire mode on response in a federal employee survey: Mail versus electronic mail. In Proceedings of
the Section on Survey Research Methods (pp. 600–604). Alexandria, VA: American Statistical Association.
Truell, A. D., Bartlett, J. E., & Alexander, M. W. (2002). Response rate, speed, and completeness: A comparison of internet-based and mail surveys. Behavior Research Methods, Instruments, & Computers, 34(1), 46–49.
*Tse, A. C. B. (1998). Comparing the response rate, response speed and response quality of two methods of sending questionnaires: E-mail vs.
mail. Journal of the Market Research Society, 40(4), 353–359.
*Tse, A. C. B., Tse, K. C., Yin, C. H., Ting, C. B., Yi, K. W., Yee, K. P., & Hong, W. C. (1995). Comparing two methods of sending out questionnaires:
E-mail versus mail. Journal of the Market Research Society, 37(4), 441–446.
*Weible, R., & Wallace, J. (1998). Cyber research: The impact of the internet on data collection. Marketing Research, 10(3), 19–31.
Wild, R. H., & Winniford, M. (1993). Remote collaboration among students using electronic mail. Computers and Education, 21(3), 193–203.
Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly,
55, 539–613.
Yin, P., & Fan, X. (2003). Assessing the invariance of self-concept measurement factor structure across ethnic and gender groups: Findings from a
national sample. Educational and Psychological Measurement, 63, 296–318.
*Zelwetro, J. (1998). The politicization of environmental organizations through the Internet. Information Society, 14, 45–56.
