Ongoing Research/Recherche en cours
Thirty Years of Survey Methodology / Thirty Years of BMS
Edith D. de Leeuw
Department of Methodology and Statistics, Faculty of Social Sciences, Utrecht
University, Utrecht, Netherlands
Résumé
Trente ans de méthodologie d'enquête / Trente ans de BMS : Cet article présente un aperçu de l'histoire de la méthodologie de l'enquête et des statistiques de l'enquête sur la base d'une analyse des numéros du BMS depuis le début en octobre 1983. L'autrice, membre du Comité scientifique du BMS, discute des changements dans les modes de collecte de données, de l'évolution de la concentration sur les différentes sources d'erreur dans les enquêtes, et des tendances en matière d'analyse et d'outils d'analyse. Elle réfléchit sur les développements récents dans les enquêtes en mode mixte et par Internet, et termine par quelques réflexions sur l'avenir.
Abstract
This article presents an overview of the history of survey methodology and survey statistics, based on an analysis of BMS issues since the journal's start in October 1983. The author, a member of the BMS Scientific Committee, discusses the changes in modes of data collection, the changing emphasis on different error sources in surveys, and trends in analysis and analysis tools. She reflects on recent developments in mixed-mode and Internet surveys, and ends with some thoughts on the future.
Mots clés
Méthode de collecte de données, Interview, Mode mixte d'enquête, Internet, Non-réponse, Mesure, TSE - Erreur totale de l'enquête
Corresponding Author:
Edith de Leeuw, Department of Methodology and Statistics, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands.
Email: e.d.deleeuw@uu.nl

Bulletin de Méthodologie Sociologique
120: 47-59
© The Author(s) 2013
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0759106313497858
bms.sagepub.com
Keywords
Data Collection, Interview, Mixed-mode Survey, Internet, Nonresponse, Measurement,
TSE - Total Survey Error
Introduction
Traditionally, a thirty-year anniversary is called a pearl jubilee, and it is an honour to write an article on survey methodology for the pearl jubilee of the BMS. That the BMS is now approaching a jubilee is thanks to the enthusiasm and dedication of its founding father and editor, Karl van Meter. When Googling names for jubilees, I discovered a very special 30-year jubilee, the ancient Egyptian Sed-festival. Sed-festivals were jubilees held after a ruler had been on the throne for thirty years, and they were primarily celebrated to rejuvenate the strength and stamina of the ruling pharaoh and ensure his continued success (Wikipedia, 2013).
In the past 30 years, Karl van Meter has made the BMS a success. When I look at my field, survey methodology, BMS articles have been highly original and have often concerned new ideas. In the terms of Rogers' (1962) diffusion of innovations, the BMS was always among the innovators and early adopters of survey methods and statistics. To illustrate this, I would like to point to the Wiley series in survey methodology, which is devoted to current best practice in the field. Important international conferences are regularly organized on topics seen at the time as leading in survey methodology, at which renowned experts present invited lectures. These invited lectures form the basis of a heavily edited monograph that is then published in the Wiley series, a monograph that can later be found in the library of every survey methodologist. Still, the methods described do not always come as a surprise to BMS readers. For instance, in September 1995, the BMS (n. 48) published a special issue on survey nonresponse and measurement error with guest editors Hans van der Zouwen and Edith de Leeuw (1995). This was followed by a special issue of the Journal of Official Statistics (JOS) (De Leeuw, 1999a) and by an international conference on nonresponse in 1999; the resulting monograph, edited by Bob Groves, Don Dillman, John Eltinge, and Rod Little, was published in 2002. In June 1997, Pamela Campanelli guest-edited a special issue of the BMS, n. 55 (Campanelli, 1997), on (pre)testing survey questions; in 2002, an innovative conference on questionnaire development, evaluation, and testing methods was organized in Charleston, South Carolina, and the resulting monograph, edited by Stanley Presser and colleagues, was published in 2004 (Presser et al., 2004).
Let us hope that this special issue on thirty years of BMS will act as a Sed-ceremony and ensure a fruitful and prosperous new era for the BMS. But what should be included in the ceremony? What were the main developments in survey methodology in the past 30 years, and what should be rejuvenated for the future?
Traditional Data Collection Modes
Until the early nineteen-eighties, the survey interview was the leading data collection method, and face-to-face interviews were dominant in the early years of the BMS. This led to methodological research into the role of interviewers and their influence on the resulting data quality, as illustrated by the article on inadequate interviewer behaviour in BMS n. 18 (Van der Zouwen and Dijkstra, 1988).
Due to the rising costs of face-to-face interviews, alternatives became more popular. The publication of Dillman's 1978 book did much to increase the respectability of paper mail surveys, and the eighties also witnessed the growing popularity of telephone interviewing, both in the USA and in Europe. Soon the first methodological mode comparisons appeared, and in 1990 the first meta-analytic overviews of the influence of data collection mode on data quality were published in the BMS (Van der Zouwen and De Leeuw, 1990, 1991; De Leeuw, 1993).
Changes in society and in technology led to changes in data collection. First of all, the survey developed from a simple data collection tool into a sophisticated instrument to investigate special populations (such as children and adolescents; Borgers et al., 2000) and to measure highly sensitive topics (Van Meter, 2000; Lee and Lee, 2012), such as the prevalence of HIV (Pollak and Schlitz, 1991; ASCF, 1992). Technological changes in the eighties (see also Van Meter, 1999; De Leeuw, 1994) made computer-assisted forms of data collection possible. In the USA, CATI (Computer-Assisted Telephone Interviewing) was developed, while in Sweden and Holland the first forms of CAPI (Computer-Assisted Personal (face-to-face) Interviewing) were pioneered; these were soon accepted by both respondents and interviewers, and supplied us with high-quality data (Beckenbach, 1995; De Leeuw et al., 1995). In the Netherlands, an innovative new form of computer-assisted data collection was initiated by Saris (1991): a computer-at-home panel or telepanel, in which respondents received a weekly questionnaire through a (then technologically advanced) telephone modem. This telepanel was the prototype for modern online panels, such as the LISS panel in Holland or Knowledge Networks in the USA (Scherpenzeel, 2011), which brings us to the latest development in social surveys: Web surveys (see Couper, 2000; De Leeuw, 2012). For a detailed historical review of surveys, see De Heer et al. (1999).
Survey Errors
In his classic handbook, Groves (1989) presents an analysis and synthesis of survey errors and differentiates between coverage error, sampling error, nonresponse error, and measurement error. In the past thirty years, the majority of articles on survey methodology in the BMS were concerned with nonresponse and measurement errors, but some articles included coverage error and sampling error as well.
Coverage error arises from the failure to give any chance of sample selection to some persons in the population, because they are not part of the sampling frame or sampling list used to identify members of the population. Telephone surveys may suffer from coverage error, due to unlisted numbers when telephone directories are used, or to the omission of mobile telephones when only landline telephones are sampled and surveyed (see Beck et al., 2005; Díaz de Rada, 2001, 2010). For a review of trend figures on mobile-phone-only households and the resulting coverage bias, see Mohorko et al. (2013).
From the onset of Internet surveys, coverage error has been a source of major concern (De Leeuw, 2012). A major problem with Internet surveys is under-coverage resulting from the digital divide, that is, differences in rates of Internet access among demographic groups (such as the unequal distribution of age and education between those with and without Internet access; see Couper, 2000). For an overview of the digital divide in Europe over time, see Mohorko et al. (2011). A potential solution to this problem is a probability-based Internet panel, where an address-based sample is invited to become panel members through a face-to-face interview, and where all households without a computer or Internet access are provided with the necessary equipment by the survey organization (see Scherpenzeel, 2011).
Sampling error is an error in estimation due to taking a sample instead of measuring every unit in the sampling frame (and, ideally, the whole population). Sampling error originates in the heterogeneity on the survey measures among persons in the population. How persons are sampled for data collection is important, and many methods exist to ensure representativity through random procedures, such as the Kish method to randomly select a person within a randomly selected household (for a critical discussion, see Nemeth, 2004); a minimal sketch of such within-household selection is given below. In practice, more complex sampling schemes than simple random sampling are often used (Blasius and Brandt, 2010), which results in complex or hierarchical data sets. In 1996, a complete special issue of the BMS (n. 51, with Van den Eeden and Hox, 1996) was devoted to multilevel analysis as a tool to correctly analyze such complex, hierarchical data.
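As a minimal sketch of the idea behind within-household random selection (a seeded random draw standing in for the pre-assigned selection tables of a true Kish grid; the member attributes and the ordering rule are illustrative assumptions):

```python
import random

def select_respondent(household, seed=None):
    """Randomly select one eligible household member.

    Sketch of within-household selection in the spirit of the Kish
    method: build a reproducible roster, then draw one member.
    """
    # Kish-style roster: males before females, eldest first, so the
    # listing is reproducible across interviewers.
    roster = sorted(household, key=lambda m: (m["sex"] != "M", -m["age"]))
    return random.Random(seed).choice(roster)

household = [
    {"name": "Anna", "sex": "F", "age": 44},
    {"name": "Ben", "sex": "M", "age": 47},
    {"name": "Carla", "sex": "F", "age": 19},
]
print(select_respondent(household, seed=120))
```

Seeding the draw with, for example, a case number mimics the reproducibility that the printed Kish selection tables were designed to provide.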
Sampling poses an extra problem for Internet surveys when no good sampling frames are available and self-selected volunteers are investigated. For a discussion of the consequences, see Faas (2004). Another important issue is how to sample special or difficult-to-reach populations, such as the homeless or transient workers (Spreen, 1992; Jansson and Spreen, 1998). A related question is how to properly analyze data resulting from these special sampling schemes; Snijders (1992) and TenHouten (1992) addressed analysis methods for snowball samples.
Nonresponse error arises from the failure to collect data on all persons selected in the
sample (e.g., because of non-contact or refusal). Bias will occur if the persons who
respond differ on the variable of interest from those who do not respond. Nonresponse
and nonresponse error have been recurrent topics in the BMS. As mentioned, BMS n.
48 was a special issue on nonresponse, where practical fieldwork strategies to increase
response were discussed (Maas and de Heer, 1995), the abilities and motivation of
difficult-to-interview respondents were investigated (Loosveldt, 1995), and a theoretical
paradigm for survey nonresponse was provided and tested (Hox et al., 1995).
Different sources of nonresponse were explored by Jansen and Hak (1999), while Smit and Dijkstra (1991) and De Leeuw (1999b) investigated interviewer tactics to successfully combat nonresponse (see also Lemay and Durand, 2002; Stocke and Langfeldt, 2004; Durand et al., 2006). Goyder et al. (2006) integrated existing theories on survey nonresponse, which were applied in later studies. Recently, Haunberger (2011) investigated which theoretical principles (saliency, costs) predict participation in Web surveys; Fuchs et al. (2013) discussed response rates and nonresponse bias in the European Social Survey (ESS); and Hall et al. (2013) investigated whether extended fieldwork or weighting is the better strategy to reduce the risk of nonresponse bias (a minimal sketch of such a weighting adjustment follows below).
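As a hedged illustration of the weighting strategy that Hall et al. (2013) compare against extended fieldwork, here is a minimal post-stratification sketch; the age groups and population shares are invented for the example, and real adjustments involve more careful cell construction:

```python
from collections import Counter

# Hypothetical population shares for age groups (e.g., from a census).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Age group of each respondent in a nonresponse-afflicted net sample.
sample = ["35-54", "55+", "55+", "35-54", "55+", "35-54", "18-34", "55+"]

counts = Counter(sample)
n = len(sample)

# Post-stratification weight: population share / realized sample share.
# Underrepresented groups (here the young) receive weights above 1.
weights = {g: population_share[g] / (counts[g] / n) for g in counts}
print(weights)  # {'35-54': 0.93..., '55+': 0.7, '18-34': 2.4}
```

Such weights repair known demographic imbalances, but they reduce nonresponse bias only insofar as the weighting variables are related to the survey outcomes.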
Measurement error, or inaccuracies in the responses, has been the object of many publications in the BMS. According to Groves (1989), measurement error arises from (a) effects of the interviewer on the respondents' answers to survey questions, (b) error due to the respondents themselves, (c) error due to weaknesses in the wording of the questionnaire, and (d) error due to the effect of the mode of data collection. Of course, interactions can exist between these sources, for instance when an interviewer tries to repair a badly worded question or encounters a difficult-to-interview respondent.
Measurement error was already mentioned in the first issue of the BMS, where Marie-Ange Schiltz (1983) discussed factorial analysis as a means to reduce noise from measurements. Ten years later, Scherpenzeel and Saris (1993) brought our knowledge a gigantic step forward by introducing meta-analytic methods to analyze the results of studies with multitrait-multimethod matrices, and showed how this strategy can be used to correct for measurement error in a substantive model of life satisfaction.
How interviewer characteristics may influence respondent behaviour was analyzed in a then new and sophisticated way by Van den Eeden et al. (1996), who introduced the readers of the BMS to multilevel analysis as a tool for the analysis of interviewer effects; a sketch of such a model is given below. Pickery and Loosveldt (1999) took this a step further and showed that significant interviewer variability can influence the data and our conclusions. A year later, Carton (2000) took a constructive approach and described how quality management may reduce interviewer effects and the resulting measurement error. For an overview of studies of interviewer effects which appeared in the first 20 years of the BMS, see Van Meter (2005).
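As a hedged sketch of what such a multilevel analysis of interviewer effects can look like with today's tools (the data file and column names are invented for illustration; the studies above used dedicated multilevel software):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent, with the id of the
# interviewer who conducted the interview and the recorded answer.
df = pd.read_csv("interview_data.csv")

# Random-intercept model: respondents nested within interviewers.
# A non-negligible interviewer-level variance component signals
# interviewer effects on the answers.
model = smf.mixedlm("answer ~ resp_age + resp_education",
                    data=df,
                    groups=df["interviewer_id"])
result = model.fit()
print(result.summary())  # the 'Group Var' row is the interviewer variance
```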
In 2006, several authors described, in a thematic issue of the BMS (n. 89), how a lack of rigor and quality control may enhance interviewer effects, thereby emphasizing the importance of the quality control advocated by Carton (2000) and Japec (2006). Finally, the recent need for cross-cultural and cross-national surveys has introduced new and exciting studies into the role of the interviewer and of the interviewer's background, language, and even dialect, as Renschler and Kleiner (2013) showed in the Swiss multi-language context.
Respondents can be a source of measurement error too. Respondents may misunderstand the questions posed, fail to remember relevant information needed for an answer, experience difficulties in combining different pieces of information into an answer, or encounter problems fitting their answer into the required format when selecting or reporting an answer (for an introduction and overview, see Tourangeau et al., 2000). Loosveldt (1995) describes how respondents may differ in the cognitive skills and motivation necessary for a good performance during the survey interview. To detect and further investigate respondents who contribute to measurement error, qualitative methods (cognitive interviewing, see BMS n. 55), quantitative methods (Loosveldt, 1995; De Leeuw and Hox, 1998), or a combination of the two can be used. Especially when retrospective questions are presented to respondents, memory limitations and problems in the retrieval of relevant information will contribute to measurement error (see Campanelli and Thomas, 1994; Couppié and Demazière, 1995; Dex, 1995; Van der Zouwen et al., 1993).
Sensitive topics pose a special problem for the respondent. Honest responses to questions on sensitive issues may carry intrinsic (threats to self-image) or extrinsic (risk of sanctions) threats for the respondent (Lee, 1993). To reduce these threats and to stimulate more honest responses, several special techniques have been developed, such as the three-card method (Droitcour and Larson, 2002) and randomized response (Boeije and Lensvelt-Mulders, 2002), sketched below. A more private (self-administered) data collection mode also stimulates more honest answers (see De Leeuw, 1993, 2012).
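To make the logic of randomized response concrete, here is a minimal sketch based on Warner's classic design, in which a randomizing device privately tells each respondent whether to answer the sensitive statement or its negation. This textbook estimator is offered as background; it is not necessarily the variant studied in the articles cited above.

```python
def warner_estimate(n_yes, n_total, p=0.75):
    """Estimate the prevalence of a sensitive trait under Warner's
    randomized response design.

    Each respondent answers the sensitive statement with probability
    p and its negation with probability 1 - p (p must not be 0.5),
    so no individual answer reveals the trait.
    """
    lam = n_yes / n_total                   # observed "yes" proportion
    pi_hat = (lam - (1 - p)) / (2 * p - 1)  # Warner's unbiased estimator
    return min(max(pi_hat, 0.0), 1.0)       # clip to a valid proportion

# Example: 360 "yes" answers out of 1,000 respondents with p = 0.75
# yield an estimated prevalence of 0.22.
print(warner_estimate(360, 1000))
```

The privacy protection is bought with extra variance: the closer p is to 0.5, the better respondents are protected and the noisier the estimate becomes.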
Besides the topic, the actual wording of a question itself may introduce measurement error, and writing good questions is a difficult task. For a good introduction to question writing and question testing, see for instance Fowler (1995). In the US, there is a long tradition of experimental research on question wording and context effects (for an early overview, see Schuman and Presser, 1996, and Sudman and Bradburn's 1982 work). This inspired survey researchers in Europe to replicate and extend these experiments, which is reflected in publications in the BMS (see Loosveldt, 1997; Reuband, 2003).
Question evaluation and pretesting always remain necessary, and there is no substitute for good question evaluation: no matter how expert the questionnaire designer, there will always be things to be learned from question testing (Fowler, 1995; Sudman and Bradburn, 1982). The BMS published a very informative thematic issue on pretesting as early as 1997 (n. 55). In later years, too, pretesting procedures received attention, for example computer-assisted pretesting (Faulbaum, 2004). For an overview and critical comparison of pretesting methods, see Rothgeb and Willis (2007).
Data collection mode as a source of measurement error was a popular topic during the past 30 years. Driven by changes in data collection techniques, many empirical mode comparisons were conducted: first between the then-new telephone interview and the traditional face-to-face and mail surveys, and later between the new computer-assisted interview forms (CAI) and the more traditional paper-and-pencil ones (PAPI). This resulted in several overview papers and meta-analyses (see De Leeuw, 1993, 1994; De Leeuw and Nicholls, 1996; De Leeuw et al., 1995).
In general, there is a dichotomy between modes with and modes without an interviewer: self-administered forms result in less socially desirable answers and more openness, while interviewer-administered questionnaires perform better in terms of completeness and produce higher response rates. When telephone and face-to-face interviews are compared, the former produce more random measurement error, which can be attributed to the limited (aural-only) channel of communication over the telephone. Computer-assisted forms of data collection generally have a positive effect on data quality; this is partly due to technological possibilities that prevent errors, such as automatic routings and range and consistency checks. Improvements in data quality are similar for CAPI, CATI, and CASI.
With the growing popularity of online research (De Leeuw, 2012), the quality of Web survey responses and the influence of the Internet as a medium of data collection are gaining attention. Especially the potential danger of professional respondents, and of respondents who satisfice instead of optimize, has been a source of worry. Callegaro et al. (2009) show that both optimizers and satisficers can be distinguished in Web surveys, with optimizers investing more time in answering questions than satisficers; a sketch of a latency-based screen follows below.
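A hedged sketch of how response latency might be used to screen for possible satisficers; Callegaro et al. (2009) use latency as an indicator of optimizing, but the specific cutoff rule below (well below the sample average on the log scale) is an assumption for illustration, not their method:

```python
import math
import statistics

def flag_possible_satisficers(mean_latencies, k=1.5):
    """Return ids of respondents whose mean per-item response time
    is unusually short relative to the rest of the sample.

    mean_latencies maps respondent id -> mean seconds per item; the
    log transform tames the right skew typical of timing data.
    """
    logs = {r: math.log(t) for r, t in mean_latencies.items()}
    mu = statistics.mean(logs.values())
    sd = statistics.stdev(logs.values())
    return [r for r, lt in logs.items() if lt < mu - k * sd]

latencies = {"r1": 6.2, "r2": 5.8, "r3": 1.1, "r4": 7.0, "r5": 6.5}
print(flag_possible_satisficers(latencies))  # flags "r3"
```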
The Present
The present world is far more complex than it was thirty years ago. This is reflected in the survey methods we rely on and in the articles that are published in the BMS. All main data collection methods are still being used, either stand-alone (such as face-to-face interviews in the ESS or the Eurobarometer) or as part of a mixed-mode design (Revilla, 2012). However, there is now no dominant data collection mode, as the face-to-face interview was in the nineteen-fifties and -sixties and the telephone interview was in the nineteen-eighties and -nineties. Technological changes have offered great opportunities, and new methods have evolved, such as Web surveys (De Leeuw, 2012) and online panels (Scherpenzeel, 2011). But these new data collection methods also pose challenges with regard to implementation, data quality, and nonresponse. Never in the Western world have people been so connected technologically, through mobile telephones, the Internet, SMS (text messaging), Twitter, WhatsApp, and Facebook, and never has it been more difficult to reach them than in this twenty-first century. One way out is to try mixed-mode strategies (De Leeuw, 2005): if we cannot get them one way, let us try another. This again poses challenges regarding implementation and analysis (Revilla, 2012).
Furthermore, despite everything we have learned about nonresponse in the past, we have to continue our efforts to reduce nonresponse error (Fuchs et al., 2013). One issue that worries survey methodologists is whether the increased pressure on people to respond negatively influences their motivation and data quality. In other words, if a reluctant respondent, after many attempts, agrees to cooperate, will this be wholehearted and result in good-quality data, or will a decrease in nonresponse error result in an increase in measurement error? Hox et al. (2012) investigated this issue. Their results show that late respondents to a mail questionnaire indeed differ from early respondents on demographic variables. Those who responded after several reminders were older, less educated, and in general had a lower SES. In itself, this is good news: the reluctant respondents, who needed more persuasion to respond, made the total sample more representative! The other good news is that the measurement structure does not differ between eager and reluctant respondents; there were no systematic differences in reliability and validity after correction for differences in background characteristics, such as education. Furthermore, Becker and Mehlkop (2011) reported that measures that increase response rates, such as prepaid incentives, do not negatively influence data quality, even on sensitive topics. Still, we are only scratching the surface here, and fundamental research into this topic is scarce (but see Olsen et al., 2008).
On the other hand, survey statistics are now blooming and within everyone's reach. Sophisticated statistical methods, such as hierarchical modeling, that were once only attainable for experts who could handle a dedicated and usually user-unfriendly computer program (see BMS n. 51, 1996), are now implemented in statistical packages like SPSS. But just as for the data collectors, new challenges are approaching for the data analysts, as our world and our data become more complex and the paradigm of probability-based sampling comes under pressure (AAPOR, 2013).
The Future
What will the future bring? Predicting the future is a hazardous endeavor, especially
when the predictions appear in print for a critical audience of future methodologists to
read. But let me be as bold as the trend watchers in the fashion and design industry, and
extrapolate the present trends to the near future.
Rising costs will make face-to-face interviews something that is rarely affordable (De Leeuw, 1999b), but interviewers will not disappear; they will be necessary in specialized surveys (such as health studies involving physical measures) and for other specialized tasks, such as listing and selecting respondents through random-walk methods when no sampling frame is available, or gathering paradata on respondents and nonrespondents through observation of neighbourhoods and houses (see Couper, 2011; Kreuter, 2013). But face-to-face interviews will become a luxury, and more cost-efficient methods, such as Web and paper mail surveys, will be used whenever possible. Especially the latter is seeing a revival, as the new Postal Delivery Sequence File makes address-based sampling possible for the first time in the US. A good administrative list of addresses or postal delivery points is a great help in implementing quality paper mail surveys; it is also a tool that researchers in many European countries too often take for granted. Perhaps the renewed interest in paper mail surveys in the US will stimulate research on self-administered methods, especially on how to implement and analyze paper-mail and Internet mixed-mode surveys.
Mixed-mode or multiple-mode surveys will certainly be with us in the coming decade. Mixing modes is a necessity of life within a country to get acceptable response rates at affordable costs, but also across countries in Europe. International and cross-cultural surveys are becoming increasingly necessary in our global world. Still, countries differ greatly in economic and technological resources and infrastructure, which makes a mixed-mode strategy almost unavoidable. To quote Bill Blyth (2008), mixed-mode is the only fitness regime in the near future.
Sampling is another issue that will keep our attention and will need serious study in the future. It was only just over a century ago that the concept of systematic sampling was discussed at meetings of the International Statistical Institute by Anders Kiaer. It took statisticians several decades to develop the tools to use and analyze probability-based sampling, before sample surveys became a scientific and accurate substitute for complete enumeration or census-type studies (De Heer et al., 1999). Essential for probability sampling is a good set of instructions (for example, the random walk or RDD) or a good sampling frame. As we move towards more and more mixed-mode and international surveys, we will be forced to use dual or multiple sampling frames. Dual-frame sampling will also be necessary if we want to include mobile telephones in our regular telephone surveys; a standard composite estimator for such designs is sketched below.
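As background, the kind of adjustment a dual-frame telephone design requires can be written with the classic composite estimator; this standard form comes from the dual-frame sampling literature and is given here as an illustration, not taken from the articles cited in this review. With frame A the landline frame and frame B the mobile frame:

```latex
\hat{Y} \;=\; \hat{Y}_a \;+\; \lambda\,\hat{Y}_{ab}^{A} \;+\; (1-\lambda)\,\hat{Y}_{ab}^{B} \;+\; \hat{Y}_b,
\qquad 0 \le \lambda \le 1,
```

where \hat{Y}_a and \hat{Y}_b estimate the landline-only and mobile-only domains, \hat{Y}_{ab}^{A} and \hat{Y}_{ab}^{B} estimate the overlap domain from each frame's sample, and the mixing weight \lambda can be chosen, for example, to minimize the variance of the combined estimate.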
As it will become increasingly difficult to contact potential respondents through probability samples and convince them to participate, we may even gradually leave the all-encompassing paradigm of probability sampling and include non-probability or self-selected volunteer samples in survey methodology. Volunteer samples are already being used by our colleagues in market research (see AAPOR, 2013) and have long been used in other scientific disciplines, such as psychology. It will be up to the statisticians of tomorrow to provide ways of making inferences based on these volunteer samples. Promising paths at present are propensity score matching and Bayesian adjustment methods; a sketch of a simple propensity-score adjustment follows below.
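A hedged sketch of one such adjustment: propensity-score pseudo-weighting of a volunteer Web sample against a probability-based reference sample. The file names, covariates, and inverse-odds weighting rule are assumptions for illustration; serious applications add weight trimming, calibration, and diagnostics.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: a probability-based reference sample and a
# self-selected volunteer Web sample sharing the same covariates.
ref = pd.read_csv("reference_sample.csv")
web = pd.read_csv("volunteer_sample.csv")
covars = ["age", "education_years", "urban"]

# Model membership in the volunteer sample from the stacked data.
stacked = pd.concat([ref.assign(vol=0), web.assign(vol=1)], ignore_index=True)
ps = LogisticRegression(max_iter=1000).fit(stacked[covars], stacked["vol"])

# Pseudo-weight each volunteer by the inverse odds of volunteering,
# so volunteers who resemble the reference sample count for more.
p = ps.predict_proba(web[covars])[:, 1]
web["pseudo_weight"] = (1 - p) / p

# Weighted estimate of a survey outcome from the volunteer sample.
print(np.average(web["outcome"], weights=web["pseudo_weight"]))
```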
Finally, we may be partly moving from a group of askers to a group of observers. More and more data are evolving almost organically and are just waiting to be analyzed. Researchers have always used administrative records, but in the brave new world of the future more and more additional data will be available, both at the individual and at the aggregate level. For instance, customer purchase data are routinely recorded by large companies, search engines track movements and combine these into data sets, and we, the individuals, express our opinions freely in online customer groups, on blogs, and in tweets, and show our private lives on social media. All that needs to be done is to put some order into those endless supplies of big data. We will need smart software and fast computers. But above all, we should learn to ask the right questions to which these data may provide answers: we need to use our heads instead of mere computational power and think about what is relevant and what is not. Of course, not all questions can be answered with available (big) data sets. Just as observational studies and data from administrative records needed, in the old days, to be augmented with survey data, we will continue to need data from specially designed studies to answer special questions.
Note
1. The Web site http://karlvanmeter.wordpress.com/ contains all abstracts from BMS 1983-2013
in both English and French.
References
ASCF Group (1992) Analysis of Sexual Behaviour in France (ACSF): What Kind of Advance
Letter Increases the Acceptance Rate in a Telephone Survey? BMS 35: 46-54.
AAPOR (2013) Report of the AAPOR Task Force on Non-probability Sampling. Available at: https://www.aapor.org/AM/Template.cfm?Section=Reports1&Template=/CM/ContentDisplay.cfm&ContentID=5963 (accessed June 2013).
Beck F, Legleye S and Peretti-Watel P (2005) Aux abonnés absents - Liste rouge et téléphone portable dans les enquêtes en population générale sur les drogues. BMS 86: 5-29.
Becker R and Mehlkop G (2011) Effects of Prepaid Monetary Incentives on Mail Response Rates
and on Self-reporting about Delinquency - Empirical Findings. BMS 111: 5-25.
Beckenbach A (1995) Computer-Assisted Questioning: The New Survey Methods in the Percep-
tion of the Respondents. BMS 48: 82-100.
Blasius J and Brandt M (2010) Representativeness in Online Surveys through Stratified Samples. BMS 107: 5-21.
Blyth B (2008) Mixed-Mode: The Only Fitness Regime? International Journal of Market Research 50(2): 241-66.
Boeije H and Lensvelt-Mulders G (2002) Honest by Chance: A Qualitative Interview Study to Clarify Respondents' (Non)Compliance with Computer-assisted Randomized Response. BMS 75: 24-39.
Borgers N, de Leeuw ED and Hox JJ (2000) Children as Respondents in Survey Research: Cognitive Development and Response Quality. BMS 66: 60-75.
Callegaro M, Yang Y, Bhola DS, Dillman DA and Chin TY (2009) Response Latency as an Indi-
cator of Optimizing in Online Questionnaires. BMS 103: 5-25.
Campanelli P and Thomas R (1994) Practical Issues in Collecting Life-time Work Histories in
Surveys. BMS 42: 114-36.
Campanelli P (1997) Testing Survey Questions: New Directions in Cognitive Interviewing. BMS
55: 5-17.
Carton A (2000) An Interviewer Network: Constructing a Procedure to Evaluate Interviewers. BMS 67: 42-53.
Couper MP (2000) Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly
64(4): 464-94.
Couper M (2011) Future of Data Collection Modes. Public Opinion Quarterly 75(5): 889-908.
Couppié T and Demazière D (1995) Se souvenir de son passé professionnel - Appel à la mémoire dans les enquêtes rétrospectives et construction sociale des données. BMS 49: 23-57.
De Heer W, de Leeuw ED and van der Zouwen J (1999) Methodological Issues in Survey
Research: A Historical Review. BMS 64: 25-48.
De Leeuw E (1999a) Survey Nonresponse: Preface. Guest editor's Introduction to the Special Issue on Nonresponse. Journal of Official Statistics (JOS) 15(2): 127-28. The whole issue is available online at http://www.jos.nu/Contents/issue.asp?vol=15&no=2.
De Leeuw E (1999b) How Do Successful and Less Successful Interviewers Differ in Tactics for Combating Survey Nonresponse? BMS 62: 29-42.
De Leeuw E (2005) To Mix or Not to Mix. Journal of Official Statistics 21(2): 233-55. Available
online at http://www.jos.nu/Articles/article.asp.
De Leeuw E and Hox J (1998) Detecting Aberrant Response Patterns in Multi-item Scales: Some
Nonparametric Indices and a Computer Program. BMS 58: 68-78.
De Leeuw ED (1993) Mode Effects in Survey Research: A Comparison of Mail, Telephone, and
Face-to-Face Surveys. BMS 41: 3-19.
De Leeuw ED (1994) Computer Assisted Data Collection, Data Quality and Costs: A Taxonomy
and Annotated Bibliography. BMS 44: 60-72.
De Leeuw ED (2012) Counting and Measuring Online: The Quality of Internet Surveys. BMS 114:
68-78.
De Leeuw ED and Nicholls WL II (1996) Technological Innovations in Data Collection: Acceptance, Data Quality and Costs. Sociological Research Online. Available at: http://www.socresonline.org.uk/1/4/leeuw.html.
De Leeuw ED, Hox JJ and Snijkers G (1995) The Effect of Computer Assisted Interviewing on
Data Quality; A Review. Journal of the Market Research Society 37: 325-44. Reprinted in
Fielding NG (ed.) (2002) Interviewing, Volume One, Part Two. London: Sage Benchmarks
in Social Research Methods Series and reprinted in de Vaus D (ed.) (2002) Social Surveys, Part
Eight, Measurement Error. London: Sage, Benchmarks in Social Research Methods Series.
Dex S (1995) The Reliability of Recall Data: A Literature Review. BMS 49: 58-89.
Díaz de Rada V (2001) Is It Fitting to Use the Telephone Directory as a Frame Population in Surveys? BMS 69: 5-16.
Díaz de Rada V (2010) Effects (and Defects) of the Telephone Survey in Polling Research: Are We Abusing of the Telephone Survey? BMS 108: 46-66.
Dillman DA (1978) Mail and Telephone Surveys: The Total Design Method. New York: Wiley.
Droitcour JA and Larson EM (2002) An Innovative Technique for Asking Sensitive Questions:
The Three Card Method. BMS 75: 5-23.
Durand C, Gagnon ME, Doucet C and Lacourse E (2006) An Inquiry into the Efficacy of Compli-
mentary Training Sessions for Telephone Interviewers. BMS 92: 5-27.
Faas T (2004) Online or Not Online? A Comparison of Offline and Online Surveys Conducted in
the Context of the 2002 German Federal Election. BMS 82: 42-57.
Faulbaum F (2004) Computer Assisted Pretesting of CATI Questionnaires. BMS 83: 5-17.
56 Bulletin de Methodologie Sociologique 120
Fowler FJ (1995) Improving Survey Questions: Design and Evaluation. Thousand Oaks: Sage.
Fuchs M, Bossert D and Stukowski S (2013) Response Rate and Nonresponse Bias - Impact of the Number of Contact Attempts on Data Quality in the European Social Survey. BMS 117: 26-48.
Goyder J, Boyer L and Martinelli G (2006) Integrating Exchange and Heuristic Theories of Survey
Nonresponse. BMS 92: 28-44.
Groves RM, Dillman DA, Eltinge JL and Little RJA (2002) Survey Nonresponse. New York:
Wiley.
Groves RM (1989) Survey Errors and Survey Costs. New York: Wiley.
Hall J, Brown V, Nicolaas G and Lynn P (2013) Extended Field Efforts to Reduce the Risk of
Non-response Bias: Have the Effects Changed over Time? Can Weighting Achieve the Same
Effects? BMS 117: 5-25.
Haunberger S (2011) To Participate or Not to Participate: Decision Processes Related to Survey Non-response. BMS 109: 39-55.
Hox J, de Leeuw E and Chang HT (2012) Nonresponse versus Measurement Error: Are Reluctant
Respondents worth Pursuing? BMS 113: 5-19.
Hox J, de Leeuw E and Vorst H (1995) Survey Participation as Reasoned Action: A Behavioral
Paradigm for Survey Nonresponse? BMS 48: 52-67.
Jansen H and Hak T (1999) Nonresponse to Mail Surveys in a Lower Class Urban Area: A Two Stage Exploration of Access Failure and Refusal. BMS 62: 5-27.
Jansson I and Spreen M (1998) The Use of Local Networks in a Study of Heroin Users: Assessing
Average Local Networks. BMS 59: 49-61.
Japec L (2006) Quality Issues in Interview Surveys - Some Contributions. BMS 90: 26-42.
Kreuter F (2013) Improving Surveys with Para-data: Analytic Uses of Process Information.
New York: Wiley.
Lee RM (1993) Doing Research on Sensitive Topics. London: Sage.
Lee YO and Lee R (2012) Methodological Research on Sensitive Topics: A Decade Review.
BMS 114: 35-49.
Lemay M and Durand C (2002) The Effect of Interviewer Attitude on Survey Cooperation. BMS
76: 27-44.
Loosveldt G (1995) The Profile of the Difficult to Interview Respondent. BMS 48: 68-81.
Loosveldt G (1997) Interaction Characteristics in Some Question Wording Experiments. BMS 56:
20-30.
Maas C and de Heer W (1995) Response Development and Fieldwork Strategy. BMS 48: 36-41.
Mohorko A, de Leeuw E and Hox J (2011) Internet Coverage and Coverage Bias in Countries
across Europe and over Time: Background, Methods, Question Wording and Bias Tables.
Available at: http://www.joophox.net.
Mohorko A, de Leeuw E and Hox J (2013) Coverage Bias in European Telephone Surveys: Developments of Landline and Mobile Phone Coverage across Countries and over Time. Survey Methods: Insights from the Field. Available at: http://surveyinsights.org/?p=828.
Nemeth R (2004) Representativeness Problems in Address-based Sampling and a Modification of
the Leslie Kish Grid. BMS 83: 43-60.
Olsen K, Feng C and Witt L (2008) When Do Nonresponse Follow-ups Improve or Reduce Data
Quality? A Meta-analysis and Review of the Existing Literature. Paper presented at the Inter-
national Total Survey Error Workshop, Research Triangle Park, NC, June 2008. Available at:
http://www.niss.org/itsew
Pickery J and Loosveldt G (1999) An Evaluation of a Typology of Respondents with a Multilevel
Multinomial Logit Model. BMS 63: 47-61.
Pollak M and Schlitz MA (1991) Six années d'enquêtes sur les homo- et bi-sexuels masculins face au SIDA: Livre des données. BMS 31: 32-48.
Presser S, Rothgeb JM, Couper MP, Lessler JT, Martin E, Martin J and Singer E (2004) Methods
for Testing and Evaluating Survey Questionnaires. New York: Wiley.
Renschler I and Kleiner B (2013) Considering Dialect in Survey Research. BMS 118: 51-59.
Reuband KH (2003) The Allow-Forbid Asymmetry in Question Wording - A New Look at an Old Problem. BMS 80: 25-35.
Revilla M (2012) Impact of Mode of Data Collection on the Quality of Answers to Survey Ques-
tions Depending on Respondent Characteristics. BMS 116: 44-60.
Rogers EM (1962) Diffusion of Innovations. New York: Free Press.
Rothgeb J and Willis G (2007) Questionnaire Pretesting Method: Do Different Techniques and
Different Organizations Produce the Same Results? BMS 96: 5-31.
Saris W (1991) Computer-Assisted Interviewing. Newbury Park: Sage.
Scherpenzeel A (2011) Data Collection in a Probability-based Internet Panel: How the LISS Panel Was Built and How It Can Be Used. BMS 109: 56-61.
Scherpenzeel A and Saris W (1993) The Evaluation of Measurement Instruments by Meta-analysis
of Multitrait-Multimethod Studies. BMS 39: 20-44.
Schiltz MA (1983) L'élimination des modalités non pertinentes d'enquêtes par analyse factorielle. BMS 1: 19-40.
Schuman H and Presser S (1996) Questions and Answers in Attitude Surveys: Experiments on
Question Form, Wording, and Context. Thousand Oaks: Sage.
Smit J and Dijkstra W (1991) Persuasion Strategies for Reducing Refusal Rates in Telephone
Surveys. BMS 33: 3-19.
Snijders TAB (1992) Estimation on the Basis of Snowball Samples: How to Weight? BMS 36: 59-70.
Spreen M (1992) Rare Populations, Hidden Populations, and Link-tracing Designs: What and
Why? BMS 36: 34-58.
Stocke V and Langfeldt B (2004) Effects of Survey Experience on Respondents Attitudes towards
Surveys. BMS 81: 5-32.
Sudman S and Bradburn N (1982) Asking Questions: A Practical Guide to Questionnaire Design.
San Francisco: Jossey-Bass.
TenHouten WD (1992) Generalization and Statistical Inference from Snowball Samples. BMS 37:
25-40.
Tourangeau R, Rips LJ and Rasinski K (2000) The Psychology of Survey Response. Cambridge: Cambridge University Press.
Van den Eeden P and Hox J (1996) Introduction: Multilevel Analysis. BMS 51: 5-9.
Van den Eeden P, Smit JH, Deeg DJH and Beekman ATF (1996) The Effects of Interviewer and Respondent Characteristics on Answer Behaviour in Survey Research: A Multilevel Approach. BMS 51: 74-78.
Van der Zouwen J and de Leeuw E (1990) The Relationship between Mode of Administration and Quality of Data in Survey Research. BMS 29: 3-14.
Van der Zouwen J and de Leeuw E (1991) The Relationship between Mode of Administration and Quality of Data in Survey Research (Final Part). BMS 31: 49-60.
Van der Zouwen J and de Leeuw E (1995) Survey Nonresponse, Measurement Error, and Data Quality: An Introduction. BMS 48: 29-35.
Van der Zouwen J and Dijkstra W (1988) Types of Inadequate Interviewer Behaviour in Survey
Interviews: Their Causes and Effects. BMS 18: 21-30.
Van der Zouwen J, Dijkstra W and van der Vaart W (1993) Effects of Measures Aimed at Increas-
ing the Quality of Recall Data. BMS 39: 3-19.
Van Meter KM (1999) New Technologies in Sociological Research. BMS 63: 63-75.
Van Meter KM (2000) Sensitive Topics - Sensitive Questions: Overview of the Sociological
Research Literature. BMS 68: 58.
Van Meter KM (2005) Studying Survey Interviewers: A Call for Research and an Overview. BMS
88: 51-71.
Wikipedia (2013) Sed festival. Available at: http://en.wikipedia.org/wiki/Sed_festival.