

Designing a web versus a paper questionnaire-some
general and special issues
Vesna Bucevska

Faculty of Economics-Skopje, University “Ss. Cyril and Methodius”, Republic of Macedonia


vesna@eccf.ukim.edu.mk

Abstract

The past decade has seen a tremendous increase in the use of the Internet. As the number of Internet users has
grown, the use of the Web as a data collection format, particularly for surveying specific target groups
and topics, has increasingly become an attractive alternative for the survey industry. The web
questionnaire, as a new method of data collection, shares common features with the “traditional” paper
questionnaire but has brought with it many methodological problems for discussion, ranging from
sampling, non-response and security to questionnaire design. This paper focuses on the last of these,
questionnaire design, and examines some of the important concerns that are common to the construction of
both paper and web questionnaires, as well as specific issues that should be taken into consideration
when designing a web questionnaire. It is not intended as a detailed treatment of web questionnaire
design; several references that provide this information are included. It concludes that although
web questionnaires have a number of advantages over paper questionnaires, associated with time, cost
and efficiency of completion, they also have particular problems in terms of the technical challenges
involved in both designing and accessing the questionnaire, and the populations that can be reached via
the Web.

Key words: paper questionnaire, web questionnaire, questionnaire design

1 Introduction
Today we are witnessing an explosion in the use of the Internet. As the number of Internet users
has grown, the use of the Web as a data collection format, particularly for surveying special
populations, members of organizations and institutions, has increasingly become an attractive
alternative for the survey industry. Only a few years ago the use of web questionnaires for data
collection received little research attention from experts in survey research. Today web surveys
encompass a great variety of methods, purposes and populations.1

This new method of data collection, while sharing common features with more traditional
modes of data collection, raises new issues for discussion. One of them is the design of the
questionnaire.

It is worthwhile to open this discussion by comparing the advantages and
disadvantages of web and paper questionnaires (see Figure 1).

Figure 1: Web versus paper questionnaire

1 Couper, M. (2000) Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly 64, 464-494.

Web

Advantages:
• Saves time (eliminates tedious mail processes);
• Faster transmission;
• Minimal cost (no printing and other costs once the set-up has been completed);
• Greater volume of text is allowed;
• Use of thousands of colours, fonts and formatting options;
• Graphic sophistication (not only colour images and text, but also cinematic and interactive images can be presented);
• Automatic question filtering (irrelevant questions can be automatically hidden based on the answer to a previous question);
• Narrows the scope of a question by providing a large list of acceptable responses;
• More honest answers to sensitive questions;
• Longer answers to open-ended questions;
• A possibility to combine the survey answers with pre-existing information;
• More time to take the survey;
• Automated e-mails and reminder e-mails;
• Advanced security features.

Disadvantages:
• Sample representativeness (not everyone has access to the Internet);
• Lower response rate;
• Reliability and accuracy (the wrong person may respond);
• Validity of answers;
• No control over people responding multiple times to bias the results;
• People can quit in the middle of a questionnaire;
• Technical problems and pitfalls;
• Unreliable network;
• Lack of appropriate equipment or level of computer literacy.

Paper

Advantages:
• More cost effective when surveying small groups;
• Suitable for populations without Internet access;
• Great for point-of-contact cards;
• Can be taken out into the field;
• More conducive format for longer questionnaires;
• Ability to edit - visual image capabilities;
• No special skill or equipment is required to fill them out.

Disadvantages:
• Time consuming;
• Lack of immediate data analysis;
• Significant mailing costs;
• Respondents may discard or lose them;
• Data entry needed to store responses in a database;
• Handwritten responses can be difficult to interpret;
• Questionnaires may become damaged during handling;
• Longer processing turnaround.

2 Questionnaire design
Anyone undertaking data collection is aware of the fact that a questionnaire is a crucial
element in the success of survey research. However, the importance of questionnaire design is
often underestimated. As Sheatsley observes: “Because questionnaires are usually
written by educated persons who have a special interest in and understanding of the topic of
their inquiry, and because these people usually consult with other educated and concerned
persons, it is much more common for questionnaires to be overwritten, overcomplicated, and
too demanding of the respondent than they are to be simpleminded, superficial, and not
demanding enough." 2

This means, in turn, that designing a clear and “simple” questionnaire is a complicated task
for survey researchers.

2.1 Composing the questions

The first rule of questionnaire design is to fit the questionnaire to the medium.
Couper3 suggests that the web is a fundamentally different medium from paper and that
design issues need to take account of this. Web questionnaire design may include a variety of
verbal, visual and interactive elements. However, it is impossible to exploit all of these
opportunities, since respondents use various hardware and software systems and may not
be able to download a “fancy” questionnaire in a reasonable time. Balancing these
opportunities and limitations is a unique challenge for the web survey researcher.

Dillman4 makes the case that the same principles of good design that apply to “traditional”
paper questionnaires also apply to web questionnaires. Taking both opinions into account,
when designing a web questionnaire it is important to apply both the standards that underlie
effective paper questionnaires and the logic of the computer medium.

The first step in designing a questionnaire is to define the objectives of the survey. Only clear
and focused objectives can assure a good questionnaire design. One of the best ways to define
your survey objectives is to decide how the collected information will be used.

After setting down the aims of the survey, determining the sample to be surveyed and
deciding how to collect information, we come to the design of the actual questionnaire. A
good design is critical to success. It will encourage participants to answer the questions fully
and accurately and provide information which can be analyzed to generate real knowledge.

The key step in designing a questionnaire is the composition of the questions. The formulation
of the questions is of greater importance in web and paper questionnaires than in face-to-face
and telephone-administered questionnaires. But what is a good question? A good question is one
that possesses two important qualities: reliability (the question produces consistent answers)
and validity (the question measures the concept of interest). Although there are several
potential sources of error in survey data, the validity of survey results depends primarily on
the composition of the questions.

2 Sheatsley, P. (1983) Questionnaire Construction and Item Writing. In: Handbook of Survey Research (eds. P.H. Rossi, D. Wright & A.B. Anderson), Academic Press, New York, 195-230.
3 See 1.
4 Dillman, D.A. (2000) Mail and Internet Surveys. Wiley, New York.

Regardless of the mode of data collection (a paper or a web questionnaire), Fowler5 suggests five
standards for the design of questions:

1. Questions need to be consistently understood.
2. Questions need to be consistently administered or communicated to respondents.
3. What constitutes an adequate answer should be consistently communicated.
4. Unless measuring knowledge is the goal of the question, all respondents should have
access to the information needed to answer the question accurately.
5. Respondents must be willing to provide the answers called for in the question.

Although there are some standards for question design, a crucial part of a survey is the
empirical evaluation of questions through focus group discussions, cognitive interviews (in
which people’s comprehension of questions and how they go about answering them is probed and
evaluated) and field pretests under realistic conditions.6

Generally, there are two types of questions: open-response and closed-response questions.
Open-response questions give respondents the opportunity to answer the question in their own
words. Closed-response questions give respondents a set of answer choices from which they are
expected to select one.

There are advantages and disadvantages to using one type of question rather than the other.
Open-response questions allow a wide range of possible answers, and the answers obtained are
not influenced by pre-specified options. Their disadvantages are that the clarity and depth of
the responses vary, they are time consuming, the responses are difficult for the researcher to
“code” and they may not be directly comparable. Closed-response questions restrict the
respondent to selecting an answer from the specified response options. For the respondent, a
closed question is easier and faster to answer; for the researcher, closed questions are easier
and less expensive to code and analyse, which makes tabulation and analysis of responses
simpler. Closed questions also reduce the number of refusals, and the responses can be directly
compared. However, they do not allow respondents to express themselves in their own words, and
there is often disagreement among researchers on the set of responses that should be listed.
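
For illustration only, the following sketch (in TypeScript, with purely hypothetical type and function names that do not come from the paper) shows one way a web survey tool might represent the two question types and why closed responses lend themselves to direct tabulation:

    // Hypothetical sketch: a discriminated union for the two question types.
    interface OpenQuestion {
      kind: "open";            // the respondent answers in his or her own words
      id: string;
      text: string;
    }

    interface ClosedQuestion {
      kind: "closed";          // the respondent selects from listed options
      id: string;
      text: string;
      options: string[];       // the pre-specified response categories
    }

    // A questionnaire may mix both kinds of question.
    type Question = OpenQuestion | ClosedQuestion;

    // Closed responses are directly comparable and easy to tabulate;
    // open responses must first be coded by the researcher.
    function tabulate(q: ClosedQuestion, answers: string[]): Map<string, number> {
      const counts = new Map<string, number>(q.options.map(o => [o, 0] as [string, number]));
      for (const a of answers) {
        if (counts.has(a)) counts.set(a, (counts.get(a) ?? 0) + 1);
      }
      return counts;
    }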

Most questionnaires also gather demographic data on the respondents. Typically, demographic
questions are put at the beginning of the questionnaire, since background questions are easier
to answer and can ease the respondent into the questionnaire.

2.2 Ordering the questions

The next issue is putting the questions into a meaningful order and format. There are two broad
issues to keep in mind when considering question order. One is how the order of the questions
and answer choices can encourage people to complete the questionnaire. The other is how the
order of questions or answers could affect the results of the survey.

Dillman7 suggests that a web questionnaire should start with a welcome page. Ideally the opening
questions should be easy and pleasant to answer and not in any way threatening to the
respondents. Questions that require a good deal of thought or those that are sensitive or
personal should be put in the middle or the end of the questionnaire.

5 Fowler, F. (1995) Improving Survey Questions: Design and Evaluation. Sage Publications, Thousand Oaks, California.
6 See 5.
7 See 4.

The questionnaire should flow smoothly and logically from one topic to another. Questions on one
topic, or on one particular aspect of a topic, should be grouped together and physically
separated from the others. Respondents may find it disconcerting to keep shifting from one topic
to another, or to be asked to return to a subject they thought they had already given their
opinion on. A questionnaire should end with a comments section that allows the respondent to
record any issues not covered by the questionnaire. Leaving space for comments will provide
valuable information not captured by the response categories and will also make the
questionnaire look easier, thus increasing the response rate.

2.3 Length of a questionnaire

The optimal length of a questionnaire has always been a controversial topic for both paper
and web questionnaires. As a general rule, with a few exceptions, long questionnaires get a
lower response than short questionnaires. If necessary, rate your questions as “very important”,
“important” and “good to know” and use the groupings to prioritise; if the questionnaire runs
over a few pages, the researcher should try to eliminate the last group of questions. While the
design of the questionnaire should try to minimize the disruption related to the survey, shorter
is not always better: the length of a questionnaire should be based on the minimum required to
cover the necessary topics adequately. A respondent-friendly graphical layout can also make the
questionnaire look shorter than it actually is.
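
By way of illustration, a minimal sketch of the prioritisation rule described above, written in TypeScript with hypothetical names and an arbitrary length threshold:

    // Each question is rated by the researcher, as suggested above.
    type Priority = "very important" | "important" | "good to know";

    interface RatedQuestion {
      id: string;
      text: string;
      priority: Priority;
    }

    // If the questionnaire exceeds the chosen limit, drop the
    // "good to know" group first, keeping the higher-priority questions.
    function trimQuestionnaire(questions: RatedQuestion[], maxQuestions: number): RatedQuestion[] {
      if (questions.length <= maxQuestions) return questions;
      return questions.filter(q => q.priority !== "good to know");
    }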

From the point of view of the researcher, the length of a web questionnaire does not have the
same cost implications as that of a paper questionnaire. Web questionnaires involve minimal cost
once the set-up has been completed; there are no printing or other such costs. Paper
questionnaires, by contrast, involve higher printing, mailing, storage and other costs.

For respondents, on the other hand, longer web questionnaires increase costs (e.g. expensive
dial-up Internet connections, more time needed to download and fill out the questionnaire) and
increase respondent burden through longer download times.

The most common way to measure respondent burden has so far been to register the time it
takes to complete the questionnaire. However, this method of measurement is not based on an
analysis of what is perceived as burdensome. In 1978, Norman Bradburn8 suggested a definition of
respondent burden consisting of four elements: interview length, required respondent effort,
frequency of being interviewed, and the stress of psychologically disturbing questions which may
be asked. Only the first, and in some cases the second, of these factors can be measured in
minutes. Forsman and Varedian9 have shown that it generally takes more time to fill out a web
questionnaire than a paper questionnaire. This type of measurement, however, does not take into
consideration the reasons for the respondent burden or the respondent's own perception of it.
Featherston and Moy10 suggested using unit or item nonresponse as an indicator of response
burden. However, according to Haraldsen11, this type of measurement is better suited to
measuring data quality, and the correlation between response burden and data quality is not
always straightforward.

8 Bradburn, N. (1978) Respondent Burden. Health Survey Research Methods, DHEW Publication no. (PHS) 79-3207, 49-53.
9 Forsman, G. and Varedian, M. (2002) Mail and Web Surveys: A Cost and Response Rate Comparison in a Study on Students' Housing Conditions. Paper presented at the International Conference on Improving Surveys (ICIS), Copenhagen.
10 Featherston, F. and Moy, L. (1990) Item Nonresponse in Mail Surveys. Paper presented at the International Conference on Measurement Errors in Surveys, Tucson, Arizona.
11 Haraldsen, G. (2002) Identifying and Reducing the Response Burden in Internet Business Surveys. Paper presented at the International Conference on Questionnaire Development, Evaluation and Testing Methods, Charleston, South Carolina.

Fisher and Kydoniefs12 define respondent burden as the personality of the respondent plus the
behavioural and attitudinal attributes of the respondent that affect the survey completion task
and are unlikely to be moderated by the survey sponsor. Design burdens are all the burdens
linked to the mode of data collection and to the content and presentation of the questionnaire
used. In their model, the perceived burden is affected by the respondent's ability to respond,
by the design of the survey and by the combination of these elements. After defining respondent
burden, they concentrate on general values and attitudes, which are stable and difficult to
change. Haraldsen13 focuses on the personal prerequisites of the respondent: motivation and
response competence, as well as Internet interest and competence. When a questionnaire is
presented on the web, what matters is no longer only the respondent's interest in the topic of
the survey and his or her competence to answer the questions, but also the interest in using the
Internet for surveys and the perceived Internet competence. In this respect, web questionnaires
may increase the burden of respondents who would not have faced any problems with the paper
version of the questionnaire.

However, the functionality of web survey software that allows questionnaire routing, which is
not possible with a paper questionnaire, can help to reduce respondent burden.
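
The following minimal sketch illustrates such routing (skip) logic; it is written in TypeScript with hypothetical names and is not taken from any particular survey package:

    interface RoutedQuestion {
      id: string;
      text: string;
      // Returns true when the question should be shown, given the answers so far.
      isRelevant: (answers: Record<string, string>) => boolean;
    }

    const questionnaire: RoutedQuestion[] = [
      { id: "usesInternet", text: "Do you use the Internet at home?", isRelevant: () => true },
      {
        id: "connectionType",
        text: "What kind of connection do you use?",
        // Hidden automatically unless the earlier answer was "yes".
        isRelevant: answers => answers["usesInternet"] === "yes",
      },
    ];

    // The survey engine presents only the relevant, not yet answered questions,
    // sparing respondents the irrelevant ones and so reducing respondent burden.
    function nextQuestions(answers: Record<string, string>): RoutedQuestion[] {
      return questionnaire.filter(q => q.isRelevant(answers) && !(q.id in answers));
    }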

2.4 Visual elements of a questionnaire

The visual image of a questionnaire can have a significant impact on the quantity and quality
of the data collected. Unlike the principles that underlie question composition, question
ordering and questionnaire length, which are common to both paper and web questionnaires, the
visual experience of filling out a web questionnaire is very different from that of filling out
a traditional paper questionnaire.

The use of a web questionnaire makes possible entirely new ways of presenting information
to respondents. Not only can multiple colours be applied, but also several additional
navigational features (e.g. drop-down menus) can be added to questionnaires. In the literature
on web questionnaires, there are two main opinions about the use of colour and visual elements in
web questionnaires. Dillman et al.14 claim that, as with paper questionnaires, web
questionnaire design should remain conservative until further research has been undertaken to
establish the effects of colour and graphics. “The cursor, the mouse and the more limited
scope of individual screen displays add dimensions of visual complexity and display that are
different than those presented by the hand-eye coordination aspects involved in completing
paper questionnaires”15. Thus, Dillman et al.16 recommended that colour and graphics should
be used sparingly so that "figure/ground consistency and readability are maintained,
navigational flow is unimpeded, and measurement properties of the questions are
maintained". On the other hand, Couper17 suggested using graphics to supplement questions
or as an incentive for respondents. He warned that although psychological research had found
that verbal and visual information was processed simultaneously, the intended effects might
not be achieved. Vehovar et al.18 also argue against the extensive use of graphics, which can
increase the download time of questionnaires and thus the respondent burden.
12 Fisher, S. and Kydoniefs, L. (2001) Using a Theoretical Model of Response Burden to Identify Sources of Burden in Surveys. Paper presented at the 12th International Workshop on Household Survey Nonresponse, Oslo.
13 See 11.
14 Dillman, D.A., Tortora, R.D. and Bowker, D. (1998) Principles for Constructing Web Surveys. SESRC Technical Report.
15 See 14.
16 See 14.
17 Couper, M. (2001) The Promises and Perils of Web Surveys. In: The Challenge of the Internet, Proceedings of the 2nd ASC International Conference on Survey Research Methods (ed. Westlake, A. et al.), Chatham, UK.
18 Vehovar, V., Batagelj, Z., Lozar Manfreda, K. and Zaletel, M. (2002) Nonresponse in Web Surveys. In: Survey Nonresponse (eds. Groves, R.M., Dillman, D.A., Eltinge, J.L. and Little, R.J.A.), John Wiley, New York.

Schonlau et al.19 use the same argument (longer download times) to suggest that web
questionnaires should be kept short and simple.

Not all the literature discourages “fancy” web questionnaire design. Manfreda et al.20 found
that the use of graphics in the form of logotypes illustrating survey questions increased
abandonment (computer equipment and paying for Internet access both playing a role), but
decreased respondent burden: item nonresponse was lower for questions with logotypes. It also
had a positive effect on measurement error, as respondents without the logotypes were more
likely to opt for a non-committal answer than those with them; the authors suggest this is
because the logotypes clarified whether or not the respondent knew the answer. These conclusions
provide a preliminary indication that visual elements may be a positive addition to web
questionnaires compared with “traditional” paper questionnaires.

3 Problems with web questionnaires

Designing a respondent-friendly questionnaire21 means designing a questionnaire which will
reduce or keep low all four sources of sample survey error (coverage error, sampling error,
measurement error and nonresponse error).22

Couper describes coverage error as representing "the biggest threat to the representativeness
of sample surveys conducted via the Internet"23. Concerns arise from the penetration of the
technology (what percentage of the population has access to the Internet and the web) and from
the type of population that has such access. He pointed out that while web surveys could gather
large numbers of responses, the number of respondents did not equate to statistical validity,
i.e. it did not necessarily permit inference to a population. With further penetration of the
technology, non-coverage will become less of an issue.

Manfreda et al.24 note that in web surveys non-response may occur at any stage of the process
and list various measures to maintain interest in web surveys, e.g. incentives, simple
questionnaires, constant contact with participants, providing a sense of community, giving the
opportunity to e-mail back and keeping promises. In addition, they argue that respondents need
to receive some additional personal satisfaction from answering the web questionnaire: for
example, some web survey participants may see a survey as a form of entertainment or as an
opportunity to gain new knowledge. Manfreda et al. also identify two main sources of measurement
error that stem from the web questionnaire itself. One is the wording of the questions and the
flow of the questionnaire, both of which may affect the quality of the answers. The other is the
questionnaire form, i.e. the visual layout of the questionnaire, which is of particular
importance in self-administered surveys.

However, the cost advantages of web over paper questionnaires appear to encourage the reduction
of only one type of survey error.

19 Schonlau, M., Fricker, R.D. and Elliott, M.N. (2002) Conducting Research Surveys via E-mail and the Web. RAND, Santa Monica, California.
20 Manfreda, K.L., Batagelj, Z. and Vehovar, V. (2002) Design of Web Survey Questionnaires: Three Basic Experiments. Journal of Computer Mediated Communication (JCMC) 7(3).
21 Dillman et al. define respondent-friendly design as the construction of web questionnaires in a manner that increases the likelihood that sampled individuals will respond to the survey request and that they will do so accurately, i.e. by answering each question in the manner intended by the surveyor.
22 Groves, R.M. (1989) Survey Errors and Survey Costs. Wiley, New York.
23 Couper, M. (2000) Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly 64, 464-494.
24 See 20.

4 Conclusion

The advantages of web over paper questionnaires (novelty, speed, low costs, individual
convenience, etc.) make them a practical, cost-effective and time-efficient instrument for data
collection. However, web questionnaires are still a relatively new mode of data collection and
surveyors are still learning what works best. One principle is to apply good web questionnaire
design when creating a web questionnaire. We have shown that most of the principles of web
questionnaire design (with the exception of the visual image) are based to a considerable degree
on the principles that underlie the design of traditional paper questionnaires. Further
improvement of the principles of web questionnaire design will contribute significantly to
increasing the validity of this instrument for data collection and to overcoming the particular
problems associated with web surveys, namely the technical challenges involved in both building
and accessing the questionnaire and the populations that can be reached via the Web.

References

1. Bradburn, N. (1978) Respondent Burden. Health Survey Research Methods, DHEW Publication no. (PHS) 79-3207, 49-53.
2. Couper, M. (2000) Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly 64, 464-494.
3. Couper, M. (2001) The Promises and Perils of Web Surveys. In: The Challenge of the Internet, Proceedings of the 2nd ASC International Conference on Survey Research Methods (ed. Westlake, A. et al.), Chatham, UK.
4. Dillman, D.A. (2000) Mail and Internet Surveys. Wiley, New York.
5. Dillman, D.A., Tortora, R.D. and Bowker, D. (1998) Principles for Constructing Web Surveys. SESRC Technical Report.
6. Featherston, F. and Moy, L. (1990) Item Nonresponse in Mail Surveys. Paper presented at the International Conference on Measurement Errors in Surveys, Tucson, Arizona.
7. Fisher, S. and Kydoniefs, L. (2001) Using a Theoretical Model of Response Burden to Identify Sources of Burden in Surveys. Paper presented at the 12th International Workshop on Household Survey Nonresponse, Oslo.
8. Forsman, G. and Varedian, M. (2002) Mail and Web Surveys: A Cost and Response Rate Comparison in a Study on Students' Housing Conditions. Paper presented at the International Conference on Improving Surveys (ICIS), Copenhagen.
9. Fowler, F. (1995) Improving Survey Questions: Design and Evaluation. Sage Publications, Thousand Oaks, California.
10. Groves, R.M. (1989) Survey Errors and Survey Costs. Wiley, New York.
11. Haraldsen, G. (2002) Identifying and Reducing the Response Burden in Internet Business Surveys. Paper presented at the International Conference on Questionnaire Development, Evaluation and Testing Methods, Charleston, South Carolina.
12. Manfreda, K.L., Batagelj, Z. and Vehovar, V. (2002) Design of Web Survey Questionnaires: Three Basic Experiments. Journal of Computer Mediated Communication (JCMC) 7(3).
13. Sheatsley, P. (1983) Questionnaire Construction and Item Writing. In: Handbook of Survey Research (eds. P.H. Rossi, D. Wright & A.B. Anderson), Academic Press, New York, 195-230.
14. Schonlau, M., Fricker, R.D. and Elliott, M.N. (2002) Conducting Research Surveys via E-mail and the Web. RAND, Santa Monica, California.
15. Vehovar, V., Batagelj, Z., Lozar Manfreda, K. and Zaletel, M. (2002) Nonresponse in Web Surveys. In: Survey Nonresponse (eds. Groves, R.M., Dillman, D.A., Eltinge, J.L. and Little, R.J.A.), John Wiley, New York.

