a Faculty of Management, Leon Recanati School of Business Administration, Tel Aviv University, Tel Aviv 69978, Israel
b Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
Received 16 August 2004; received in revised form 16 October 2004; accepted 23 April 2005
Available online 7 July 2005
Abstract
We empirically investigated the effect of user-based design and Web site usability on user satisfaction across four types of
commercial Web sites: online shopping, customer self-service, trading, and publish/subscribe. To this end, a Web-based survey
questionnaire was assembled, based on previously reported instruments for measuring user satisfaction, usability, and user-based
design. Three hundred and fifty-nine respondents used the questionnaire to rate a collection of 20 popular commercial Web sites.
Data collected were analyzed to test four hypotheses on the relationships among the attributes examined. The Web site
attributes were also plotted on bi-dimensional perceptual maps in order to visualize their interactions. The two techniques
yielded the same result, namely that trading sites are the lowest rated and that online shopping and customer self-service sites
should serve as models for Web site developers. These findings are especially useful for designers of electronic commerce (EC)
Web sites and can aid in the development and maintenance phases of Web site creation.
© 2005 Elsevier B.V. All rights reserved.
Keywords: User satisfaction; User-based design; Usability; World Wide Web
1. Introduction
The rapid development of the World Wide Web has
allowed people, as never before, to access information
and interact globally with new markets and products
[38,75]. This year, the Web is expected to grow to
200 million sites. According to Nielsen [67,69], the
* Corresponding author. Tel.: +972 3 6409671;
fax: +972 3 6407741.
E-mail address: zviran@tau.ac.il (M. Zviran).
0378-7206/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.im.2005.04.002
3. Research constructs
We investigated the relationships among four
constructs: user satisfaction, usability, user-based
design, and Web site type.
3.1. User satisfaction
User satisfaction is a common measure of IS
success [93] for which several standardized instruments have been developed and tested. User satisfaction is a critical construct because it is related to other
important variables in systems analysis and design
[50]. It has been used to assess IS success and
effectiveness [7,60,77], the success of decision
support systems (DSS) [6], office automation success
[90], and the utility of IS in decision-making [70].
Definitions incorporate overarching constructs ranging from IS appreciation [87] and user attitudes [22]
to end-user satisfaction. The end-user computing
instrument (EUCI) comprises five measures of user
satisfaction: end-user trust in the system, presenting
accurate information, using a clear presentation
format, ensuring timeliness of information, and
perceived ease of use.
Recognition of the dominance of user satisfaction
in the success of an EC application [23] has led to an
increased effort on the part of the research community
to explore how to measure and model satisfaction of
users and their preferences [51]. Muylle et al. [63]
empirically validated a standard instrument for
5. Methodology
5.1. Instrument
The questionnaire used to collect the data was
constructed from several instruments used in previous
research.
The user satisfaction construct used the well-known questionnaire developed by Doll et al., which
consists of a 12-item measure of users' reactions to
a specific computer interface. All items had large
(>0.72) and significant loadings on their corresponding factors, indicating good construct validity. R-square values ranged from 0.52 to 0.79, indicating acceptable
reliability for all items.
Usability was tested using the SUS instrument
developed at Digital Equipment Corporation. It has
been extensively used and adapted. For proprietary
reasons, measures of its validity and reliability have
not been published; however, in an independent study,
Lucey [56] demonstrated that this short 10-item scale
has a reliability of 0.85.
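For reference, Brooke's standard SUS scoring rule (not reproduced in the paper) maps the ten 1-5 Likert responses onto a 0-100 scale; a minimal sketch in Python:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, using Brooke's standard scoring rule."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded: contribution r - 1.
        # Even-numbered items are negatively worded: contribution 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral response pattern (all 3s) yields the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```

Per-respondent scores computed this way can then be averaged per site for comparison.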
User-based design has not been used in previous
studies on user satisfaction; to measure it, we merged
three questionnaires that address Web site failures,
Web searching challenges, and the design of transactive
content [32], trimming out redundant items.
The composite preliminary questionnaire then
consisted of 45 questions; four of these collected
demographic details of the respondents. The questionnaire was pre-tested in a pilot study and further
refined and calibrated with the aid of experts,
particularly with respect to the user-based design
constructs. The final questionnaire had 39 questions,
including five demographic items and one question
designed to verify internal consistency. Table 1 depicts
the sources and categories of questions used in the final
questionnaire, which may be obtained from the authors.
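Internal-consistency checks of the kind used in refining such a questionnaire are conventionally computed as Cronbach's alpha; the following is an illustrative sketch, not the authors' analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of
    Likert responses: k/(k-1) * (1 - sum of item variances / variance
    of the summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly parallel items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]]))  # 1.0
```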
Table 1
Constructs, items and sources

Construct (source)               Item              Questions
User satisfaction (Doll et al.)  Content           1-4
                                 Accuracy          5-6
                                 Format            7-8
                                 Ease of use       9-10
                                 Timeliness        11-12
Usability (SUS)                                    13-22
User-based design                Personalization   23-25
                                 Structure         26-27
                                 Navigation        28-29
                                 Layout            30-33
                                 Search            34-36
                                 Performance       37-39
Internal consistency                               40
Demographic characteristics                        41-45

Table 2
Rotated component matrix for user satisfaction
Each of the twelve satisfaction items (Q1-Q12) loads highest on its intended component (Content, Accuracy, Format, Ease of use or Timeliness), with primary loadings between .695 and .879 and cross-loadings below .45.
6. Hypotheses testing
The hypotheses of this study were investigated
using stepwise regression (see Table 4). Since the
number of observations is sufficiently large relative to
the number of independent variables, there is no need
Table 3
First-order factor analysis on user-based design (components extracted: Content, Navigation, Search, Performance)

           Initial eigenvalues              Extraction sums                  Rotation sums
Component  Total  % of variance  Cum. (%)   Total  % of variance  Cum. (%)   Total  % of variance  Cum. (%)
1          4.02   26.8           26.8       4.02   26.8           26.82      2.56   17.0           17.0
2          1.59   10.6           37.4       1.59   10.6           37.42      1.95   13.0           30.1
3          1.18   7.92           45.3       1.18   7.92           45.35      1.91   12.7           42.9
4          1.06   7.12           52.4       1.06   7.12           52.47      1.43   9.56           52.4
5          .97    6.46           58.9
6          .86    5.78           64.7
7          .81    5.43           70.1
8          .73    4.86           75.0
9          .68    4.54           79.5
10         .62    4.13           83.7
11         .53    3.59           87.2
12         .53    3.53           90.8
13         .48    3.21           94.0
14         .46    3.06           97.1
15         .43    2.89           100
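Table 3 retains the four components whose initial eigenvalue exceeds 1 (the Kaiser criterion). As an illustrative sketch, the eigenvalues and the retained-component count can be obtained from the item correlation matrix:

```python
import numpy as np

def kaiser_retain(responses):
    """Eigenvalues of the item correlation matrix for a
    respondents-by-items array, the percentage of variance each
    explains, and the number of components retained under the
    Kaiser criterion (eigenvalue > 1)."""
    corr = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
    pct = 100 * eigvals / eigvals.sum()                 # % of variance each
    return eigvals, pct, int((eigvals > 1).sum())
```

The eigenvalues sum to the number of items, so the percentages behave exactly as in the cumulative column of Table 3.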
Table 4
Correlation summary for constructs (N = 359)
Pairwise Pearson correlations (r) with significance levels (p) among Satisfaction, Usability, Content, Navigation, Search and Performance. Satisfaction correlates most strongly with Content (r = .690, p < .001) and Usability (r = .565, p < .001).
satisfaction = 0.218 + 0.368·usability + 0.485·content + 0.139·search
The results (see also Table 4) indicated that both H1
and H2 are supported. The amount of variance in user
satisfaction explained by these three constructs is
58.6%. An F-test on the final regression equation
confirmed that all constructs contributed to explaining
the variance in user satisfaction at a significance level
of p < 5%.
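The stepwise procedure can be sketched as a backward-elimination loop over ordinary least squares; the variable names in the usage below are illustrative placeholders, not the study's data:

```python
import numpy as np

def backward_eliminate(X, y, names, t_crit=1.96):
    """Backward elimination for ordinary least squares: repeatedly
    drop the predictor with the smallest |t| until every remaining
    predictor has |t| >= t_crit (roughly p < .05)."""
    keep = list(range(X.shape[1]))
    beta = np.array([y.mean()])                          # fallback if no predictors
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])   # intercept first
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        s2 = resid @ resid / (len(y) - A.shape[1])           # residual variance
        se = np.sqrt(s2 * np.diag(np.linalg.inv(A.T @ A)))   # coefficient S.E.
        t = beta / se
        worst = int(np.argmin(np.abs(t[1:])))                # skip the intercept
        if abs(t[1 + worst]) >= t_crit:
            break                                            # all terms significant
        keep.pop(worst)                                      # drop the weakest term
    return [names[j] for j in keep], beta
```

On data shaped like this study's, with usability, content, search, navigation and performance as predictors, such a loop would produce a model progression like that of Table 5.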
Table 5
Backward regression on user satisfaction (without site type)a

Model 1 (R = .769, R2 = .591, adjusted R2 = .585, S.E. of estimate = .447)
             B      S.E.   Beta    t      Significant
(Constant)   .025   .204           .123   .902
USAB         .369   .041   .337    8.98   .000
CONTENT      .460   .045   .453    10.1   .000
SEARCH       .123   .040   .127    3.10   .002
NAV          .053   .033   .062    1.63   .103
PERF         .045   .040   .040    1.12   .260

Model 2 (R = .768, R2 = .589, adjusted R2 = .585, S.E. of estimate = .447)
(Constant)   .139   .178           .781   .435
USAB         .365   .041   .334    8.92   .000
CONTENT      .464   .045   .457    10.2   .000
SEARCH       .131   .039   .135    3.36   .001
NAV          .054   .033   .062    1.64   .101

Model 3 (R = .766, R2 = .586, adjusted R2 = .583, S.E. of estimate = .448)
(Constant)   .218   .172           1.27   .205
USAB         .368   .041   .336    8.96   .000
CONTENT      .485   .043   .478    11.1   .000
SEARCH       .139   .039   .143    3.59   .000

Change statistics: Model 1: R2 change = .591, F change = 101 (d.f. 5, 353), sig. = .000; Model 2: R2 change = .001, F change = 1.27 (d.f. 1, 355), sig. = .260; Model 3: R2 change = .003, F change = 2.71 (d.f. 1, 356), sig. = .101.

Predictors: (1) (Constant), PERF, USAB, NAV, SEARCH, CONTENT; (2) (Constant), USAB, NAV, SEARCH, CONTENT; (3) (Constant), USAB, SEARCH, CONTENT.
a Dependent variable: User satisfaction (SAT).
Table 6
Backward regression on user satisfaction (with site type)

Model 1 (R = .780, R2 = .608, adjusted R2 = .599, S.E. of estimate = .439)
              B      S.E.   Beta    t      Significant
(Constant)    .199   .210           .949   .343
USAB          .363   .041   .332    8.92   .000
CONTENT       .436   .045   .430    9.65   .000
NAV           .045   .033   .053    1.40   .161
SEARCH        .136   .039   .140    3.49   .001
PERF          .026   .040   .023    .672   .502
SHOPPING      .051   .066   .032    .783   .434
SELF-SERVICE  .022   .066   .014    .340   .734
TRADING       -.188  .067   -.117   -2.81  .005

Model 2 (R = .780, R2 = .608, adjusted R2 = .600, S.E. of estimate = .439)
(Constant)    .209   .207           1.01   .313
USAB          .362   .041   .331    8.93   .000
CONTENT       .439   .045   .432    9.80   .000
NAV           .044   .032   .052    1.38   .167
SEARCH        .136   .039   .140    3.49   .001
PERF          .026   .040   .024    .679   .498
SHOPPING      .040   .057   .025    .709   .479
TRADING       -.199  .058   -.124   -3.40  .001

Model 3 (R = .779, R2 = .607, adjusted R2 = .601, S.E. of estimate = .439)
(Constant)    .280   .179           1.56   .119
USAB          .360   .040   .329    8.91   .000
CONTENT       .440   .045   .434    9.86   .000
NAV           .044   .032   .052    1.38   .166
SEARCH        .141   .038   .145    3.69   .000
SHOPPING      .040   .057   .026    .717   .474
TRADING       -.203  .058   -.127   -3.50  .001

Model 4 (R = .779, R2 = .607, adjusted R2 = .601, S.E. of estimate = .438)
(Constant)    .290   .178           1.62   .105
USAB          .363   .040   .331    9.03   .000
CONTENT       .440   .045   .433    9.86   .000
NAV           .043   .032   .050    1.35   .177
SEARCH        .141   .038   .145    3.68   .000
TRADING       -.217  .055   -.135   -3.96  .000

Model 5 (R = .778, R2 = .605, adjusted R2 = .600, S.E. of estimate = .439)
(Constant)    .358   .172           2.08   .038
USAB          .364   .040   .333    9.06   .000
CONTENT       .456   .043   .449    10.61  .000
SEARCH        .148   .038   .152    3.89   .000
TRADING       -.223  .055   -.139   -4.08  .000

Change statistics: Model 1: R2 change = .608, F change = 67.8 (d.f. 8, 350), sig. = .000; Model 2: R2 change = .000, F change = .115 (d.f. 1, 352), sig. = .734; Model 3: R2 change = .001, F change = .461 (d.f. 1, 353), sig. = .498; Model 4: R2 change = .001, F change = .514 (d.f. 1, 354), sig. = .474; Model 5: R2 change = .002, F change = 1.82 (d.f. 1, 355), sig. = .177.

Predictors: (1) (Constant), TRADING, SEARCH, USAB, PERF, SELF-SERVICE, NAV, SHOPPING, CONTENT; (2) (Constant), TRADING, SEARCH, USAB, PERF, NAV, SHOPPING, CONTENT; (3) (Constant), TRADING, SEARCH, USAB, NAV, SHOPPING, CONTENT; (4) (Constant), TRADING, SEARCH, USAB, NAV, CONTENT; (5) (Constant), TRADING, SEARCH, USAB, CONTENT.
variables:

satisfaction = a + b0·usability + b1·content + b2·search + b3·navigation + b4·performance + b5·SITE2 + b6·SITE3 + b7·SITE4

The final model was

satisfaction = 0.358 + 0.364·usability + 0.456·content + 0.148·search - 0.223·SITE4
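The SITE2-SITE4 terms are 0/1 dummy variables for site type, with one type left out as the baseline. A minimal coding sketch (the baseline category chosen here is an assumption for illustration; the paper does not state which type was omitted):

```python
import numpy as np

def site_dummies(site_types, baseline="publish/subscribe"):
    """0/1 dummy coding of the categorical site-type variable with one
    category left out as the baseline, matching the SITE2-SITE4 terms
    of the regression model. The baseline choice is illustrative."""
    levels = [t for t in sorted(set(site_types)) if t != baseline]
    codes = np.array([[1.0 if t == lvl else 0.0 for lvl in levels]
                      for t in site_types])
    return levels, codes

levels, D = site_dummies(
    ["trading", "online shopping", "publish/subscribe", "self-service"])
# levels -> ['online shopping', 'self-service', 'trading']
```

A baseline observation (here publish/subscribe) receives all-zero dummies, so its effect is absorbed by the constant.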
The results (see Table 6) indicated that both H3 and
H4 were supported.
In the case of trading sites, user satisfaction was
significantly lower than that for all other site types, all
coefficients being highly significant. The amount of
variance in user satisfaction explained by the sites'
usability, content, search capability and being of type
trading, was 60.5%. An F-test on the final regression
equation verified that they all contribute to explaining
the variance in user satisfaction.
Table 7
Data scheme for factor analysis of Web sites
For each of the n sites, the s responses (R) to the p questions (V) are reduced by factor analysis (F) to x factor scores (FS1, FS2, ..., FSx) and by discriminant analysis (D) to y discriminant scores (DS1, DS2, ..., DSy); a mean score vector is then computed for each site.
F: factor analysis, x: number of factor scores, FS: factor score, D: discriminant analysis, y: number of discriminant scores, DS: discriminant score, n: number of sites, V: question (variable), R: response, p: question number, s: number of observations per site.
Table 8
Data scheme for discriminant analysis of Web sites
The same reduction is applied with the sites grouped into k site types: factor scores (FS1, ..., FSx) and discriminant scores (DS1, ..., DSy) are computed for each site within each type, and mean score vectors are then computed for each site type.
FS: factor score, D: discriminant analysis, y: number of discriminant scores, DS: discriminant score, p: question number, s: number of observations per site, F: factor analysis, x: number of factor scores, k: number of site types, n: number of observations per site, V: question (variable), R: response.
The procedures were then repeated for observations at the site level and at the site type level.
The non-attribute-based version of the MDS method was used here because it facilitated naming
the dimensions, made it easier to cluster them into groups
with similar characteristics, and was more
easily connected to other computer programs [24].
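For readers unfamiliar with the mechanics, a classical (Torgerson) MDS embedding can be computed from a distance matrix as follows; this generic sketch is not the specific non-attribute-based program used in the study:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: embed objects in k dimensions from a
    symmetric distance matrix by double-centering the squared
    distances and keeping the top-k eigenvectors."""
    dist = np.asarray(dist, dtype=float)
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]             # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

For distances that are exactly Euclidean, the two-dimensional embedding reproduces the original inter-object distances, which is what makes the resulting perceptual maps readable.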
Fig. 4. Perceptual maps using factor analysis (site and site type).
Table 9
Discriminant factors and questions at the Web site level
Discriminant dimensions: information and presentation, search, information completeness, personalization, and error handling. Constituent questions (Q23-Q27, Q29, Q34) address graphics, information and content presentation; search actions ("To what degree does the search engine deal with misspellings and synonyms?"); information actions ("Do you think you have received complete information both on basic facts and on full product details?", "To what degree is categorization of the content logical?", "Is content exposed in logical increments so that people are not overwhelmed?"); personalization actions; and error handling.
Table 10
Discriminant factors and questions at the Web site type level
Discriminant dimensions: presentation, user and administrative tasks, and robustness. Constituent questions (Q23-Q35, Q38-Q39) address graphics, information, content, navigation and search presentation; content (information), error, system-operation, presentation, personalization and search actions; error handling (browsability); and observability.
Table 11
Factor analysis in cognitive mapping: findings
Best-performing site types by feature: navigation and performance favor publish/subscribe and online shopping sites, content favors online shopping, and search is equal across site types; self-service and trading sites complete the comparison.

Table 12
Discriminant dimensions: findings
Dimensions: presentation, user and administrative tasks, and robustness.

Table 13
Discriminant analysis: findings
Findings summarized by scope of analysis and factor.
weakness of trading sites (see Fig. 5). Customer self-service and online shopping, on the other hand, were
quite consistently better on all dimensions. This
finding could be explained by their strong customer
orientation and the fact that the customer was usually
the main source of revenue for most firms.
A similar analysis performed at the Web site type
level elicited the following discriminant dimensions:
robustness,1 presentation, and user and administrative tasks (see Table 10). Perceptual maps based
on these dimensions depicted publish/subscribe sites
as the most robust. The best presentation and user-and-administrative-tasks capabilities were exhibited by
online shopping and customer self-service sites. The
weakest were again trading sites. All findings based on
the discriminant analysis methods, including the best and
worst performers on each dimension, are summarized
in Tables 11-13.
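The discriminant dimensions above come from linear discriminant analysis. As a simplified two-group sketch (the study itself contrasted four site types), Fisher's discriminant direction can be computed as:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-group Fisher linear discriminant: the direction
    w proportional to Sw^-1 (m1 - m0) that maximizes between-group
    separation relative to within-group scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)                              # unit length
```

Projecting site-level scores onto such directions, and naming each direction by its highest-weighted questions, is what yields interpretable dimensions like robustness or presentation.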
8. Conclusions
Our study empirically investigated the effect of
user-based design and Web site usability on user
1 Level of support provided for successful attainment of users'
goals.
Acknowledgement
The authors would like to thank the Editor-in-Chief
and three anonymous reviewers for their valuable and
thorough comments throughout the review process.
References
[1] E. Abels, M.D. White, K. Hahn, A user-based design process for Web sites, Internet Research: Electronic Networking Applications and Policy 8(1), 1998, pp. 39–48.
[2] A.M. Aladwani, P.C. Palvia, Developing and validating an instrument for measuring user-perceived web quality, Information & Management 39, 2002, pp. 467–476.
[3] K. Amoako-Gyampah, A.F. Salam, An extension of the technology acceptance model in an ERP implementation environment, Information & Management 41, 2004, pp. 731–745.
[4] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.
[5] H. Barki, J. Hardwick, Measuring user participation, user involvement, and user attitude, MIS Quarterly 18(1), 1994, pp. 59–79.
[6] H. Barki, S. Huff, Change, attitude to change, and decision support success, Information & Management 9(5), 1985, pp. 261–268.
[7] J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3), 1986, pp. 232–238.
[8] S.J. Barnes, R.T. Vidgen, WebQual: an exploration of Web site quality, in: Proceedings of the Eighth European Conference
for Informatics Usability, Cambridge University Press, Cambridge, 1991, pp. 21–38.
[81] H. Shih, An empirical study on predicting user acceptance of e-shopping on the Web, Information & Management 41, 2004, pp. 351–368.
[82] B. Shneiderman, Designing Information Abundant Websites, 1996. ftp://ftp.cs.umd.edu/pub/hcil/Reports-abstracts-Bibliography/3634.txt.
[83] S.L. Smith, J.N. Mosier, Design guidelines for user-system interface software, Technical Report ESD-TR-84-190, The Mitre Corporation, Bedford, MA, 1984.
[84] M. Spiliopulou, Web usage mining for Web site evaluation: making a site better fit its users, Communications of the ACM 43(8), 2000, pp. 127–134.
[85] A. Srinivasan, Alternative measures of system effectiveness, MIS Quarterly 9(3), 1985, pp. 243–253.
[86] D.W. Straub, D. Hoffman, B. Weber, C. Steinfield, Measuring e-commerce in Net-enabled organizations, Information Systems Research 13(2), 2002, pp. 115–124.
[87] E.B. Swanson, Management information systems: appreciation and involvement, Management Science 21(2), 1974, pp. 178–188.
[88] B.G. Tabachnick, L.S. Fidell, Using Multivariate Statistics, 3rd ed., Harper Collins, New York, 1996.
[89] C. Trepper, E-commerce Strategies, Microsoft Press, Washington, DC, 2000.
[90] B.W. Tan, T.W. Lo, Validation of a user satisfaction instrument for office automation success, Information & Management 18(4), 1990, pp. 203–208.
[91] W.L. Yeung, M. Lu, Functional characteristics of commercial web sites: a longitudinal study in Hong Kong, Information & Management 41(4), 2004, pp. 483–495.
[92] P. Zhang, G.M. Von Dran, User expectations and rankings of quality factors in different Web site domains, International Journal of Electronic Commerce 6(2), 2002, pp. 9–33.
[93] M. Zviran, Z. Erlich, Measuring IS user satisfaction: review and implications, Communications of the AIS 12(5), 2003, pp. 81–104.