
Information & Management 43 (2006) 157–178

www.elsevier.com/locate/dsw

User satisfaction from commercial web sites: The effect of design and use
Moshe Zviran a,*, Chanan Glezer b, Itay Avni a
a Faculty of Management, Leon Recanati School of Business Administration, Tel Aviv University, Tel Aviv 69978, Israel
b Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
Received 16 August 2004; received in revised form 16 October 2004; accepted 23 April 2005
Available online 7 July 2005
* Corresponding author. Tel.: +972 3 6409671; fax: +972 3 6407741. E-mail address: zviran@tau.ac.il (M. Zviran).
doi:10.1016/j.im.2005.04.002

Abstract
We empirically investigated the effect of user-based design and Web site usability on user satisfaction across four types of
commercial Web sites: online shopping, customer self-service, trading, and publish/subscribe. To this end, a Web-based survey
questionnaire was assembled, based on previously reported instruments for measuring user satisfaction, usability, and user-based
design. Three hundred and fifty-nine respondents used the questionnaire to rate a collection of 20 popular commercial Web sites.
Data collected were analyzed to test four hypotheses on the relationships among the attributes examined. The Web site
attributes were also plotted on bi-dimensional perceptual maps in order to visualize their interactions. The two techniques
yielded the same result, namely that trading sites are the lowest rated and that online shopping and customer self-service sites
should serve as models for Web site developers. These findings are especially useful for designers of electronic commerce (EC)
Web sites and can aid in the development and maintenance phases of Web site creation.
© 2005 Elsevier B.V. All rights reserved.
Keywords: User satisfaction; User-based design; Usability; World Wide Web

1. Introduction
The rapid development of the World Wide Web has
allowed people, as never before, to access information
and interact globally with new markets and products
[38,75]. This year, the Web is expected to increase to 200 million sites. According to Nielsen [67–69], the number of Web pages is projected to grow to 50 billion by the end of the year, and in 2007 some 880 million Internet access devices of various kinds may be sold worldwide (www2.cio.com/metrics). Considering the
turbulence and size of these developments, it is not
surprising that there has been growing interest in
identifying design principles and features that can
enhance user satisfaction and loyalty to the proliferating electronic commerce (EC) sites that use the Web as their underlying technological platform [52] and enable the long-term business relationships critical to the success of these ventures. This claim is


further supported by a survey, which found that three of the five main concerns about IT are related to poor user satisfaction [17]. User satisfaction with EC applications has been found to be significantly associated with usability and design features unique to the Web, such as download delay, navigation, content, interactivity, and responsiveness [72]. In addition, online shopping invokes methods of information gathering that are different from those of the traditional shopping experience, raising questions about user satisfaction with the information quality (IQ) and software quality (SQ) of EC applications and resulting in discrepancies between prior expectations and perceived performance. In stock trading sites, other design principles, such as convenience, delightfulness, reliability, and technological advance, have all been found to affect the level of user satisfaction and loyalty.
The literature indicates that measuring user
satisfaction with EC applications is an important
but complex task. Many factors affect users' satisfaction with EC Web sites.
The purpose of this study is therefore to address the
following questions:
1. What are the major factors that drive user
satisfaction from Web sites?
2. Are there differences among different types of Web
sites with regard to user satisfaction?

2. Web success measures


Measuring IS success has received much attention
in the IS literature (e.g. [14,26,31,34,45,74,78,85]).
These studies view user satisfaction in terms of system use
and acceptance as the practical measure of IS success.
User participation, involvement, and attitude have also
been adopted as success measures [5].
For EC there is no way of directly measuring the
success of an application [35]. Measures such as total
business attracted, site usability, design features,
information and Web site quality, user characteristics,
and fundamental objectives appear to be relevant
indicators [20,65]. There is also difficulty in measuring intentions and actual usage of online shopping; this was addressed by developing an exhaustive, literature-derived model of online shopping, classified into: consumer characteristics, Web site and product characteristics, and perceived characteristics of the Web as a sales channel [21].
One approach to coping with the complexity of
the issue has been to estimate the quality of EC Web
sites using Web site ranking methods. Thus, the
Webtango project (webtango.ischool.washington.edu/
papers) proposed and tested a quality ranking system to
profile the Web sites and provide insights for design
improvements [46]. This, however, cannot replace
usability testing but complements it by identifying
aspects to be assessed during application acceptance
tests. An extension of the Kano Quality Model [48]
found that the quality factors seemed to change over
time, and that the same quality factor may have different
quality designations in different domains [92]. WebQual is a popular index calculated on the basis of user perceptions on dimensions of usability, information quality, and service interaction quality; it has evolved via a process of iterative refinement [8–13].
Another alternative exploited automated tools that
analyze logs of Web servers [84]. These were easy to
use and highly effective in capturing the volume of
activity on a Web site in the form of page views, hits,
and even return visits (using cookie technology), but
they did not provide any reliable indication of the
value of the published content to the end-user [62].
This is a serious drawback: user satisfaction is critical
in establishing long-term client relationships [73] and
in increasing profitability [86].
Researchers have investigated various aspects of
success. Aladwani and Palvia [2] reported on the
development of an instrument that captured key
characteristics of Web site quality from a user's perspective. Their 25-item instrument measured four dimensions: specific content, content quality, appearance, and technical adequacy. Shih's [81] extended model to predict acceptance of electronic shopping (e-shopping) indicated that user satisfaction with the WWW and perceptions of information, system and service affected user acceptance significantly. Ranganathan and Ganapathy [76] surveyed online shoppers and found that security, privacy, design, and information content had an impact on the online purchase intent. Liu and Arnett [53] surveyed Webmasters
from Fortune 1000 companies and found four factors
that are critical to success: information and service

quality, system use, playfulness, and system design


quality. Lu [54] proposed a triangular conceptual framework for evaluating Web-based business-to-consumer EC applications: EC cost/benefit, EC functionality, and user satisfaction factor arrays. The study revealed that for B2C, most main benefits were fully dependent on or related to the improvement of relationships with consumers, and that satisfaction was determined by EC functionality and maintenance expense.

Nevertheless, the application functionality categories used by Lu were mainly focused on B2C scenarios (advertising, e-mail ordering, user payment registration, and online shopping). In addition, the user satisfaction construct was measured using only a single item and not by adopting standard instruments [4,30].
Thus, there appears to have been little methodical evaluation of the usability of commercial Web sites [16]. Moreover, the focus on Web site characteristics was on the site as an end-product and did not address the process of its construction for user satisfaction.
In an attempt to fill these gaps, Lu's [55] triangular evaluation framework of EC applications (Fig. 1) was adopted as a reference model, with the aim of zooming in and elaborating on the relationship between Web site capabilities (v1) and customer assessment (v2). This goal was achieved by adopting a prominent and richer instrument for measuring user satisfaction, adopting a commercial typology of EC applications, and introducing usability and user-centered design constructs as moderators.


3. Research constructs
We investigated the relationship among four
constructs: user satisfaction, usability, user-based
design, and Web site type.
3.1. User satisfaction
User satisfaction is a common measure of IS
success [93] for which several standardized instruments have been developed and tested. User satisfaction is a critical construct because it is related to other
important variables in systems analysis and design
[50]. It has been used to assess IS success and
effectiveness [7,60,77], the success of decision
support systems (DSS) [6], office automation success
[90], and the utility of IS in decision-making [70].
Definitions incorporate overarching constructs ranging from IS appreciation [87] and user attitudes [22] to end-user satisfaction. The end-user computing instrument (EUCI) comprises five measures of user satisfaction: end-user trust in the system, presenting accurate information, using a clear presentation format, ensuring timeliness of information, and perceived ease of use.
Recognition of the dominance of user satisfaction
in the success of an EC application [23] has led to an
increased effort on the part of the research community
to explore how to measure and model satisfaction of
users and their preferences [51]. Muylle et al. [63]
empirically validated a standard instrument for

Fig. 1. An evaluation framework for EC applications (Lu [55]).


measuring the Web site user satisfaction construct (WUS). Their instrument consisted of three components: information (relevance, accuracy, comprehensibility, and comprehensiveness), connection (ease-of-use, entry guidance, structure, hyperlink connotation, and speed), and layout. Trepper [89] found that convenient site design and financial security had a significant effect on user assessment of EC applications, but that, while an EC application can be technically successful and meet its financial objectives, it can still be a failure if the customers are
unhappy with the result. McKinney et al. [59] presented evidence that a user's satisfaction with an EC Web site can be modeled as a perceived disconfirmation, resulting from a gap between user expectations and the actual performance of the EC Web site with respect to information and software quality. Khalifa and Liu [49] argued and empirically demonstrated the need to consider the evolutionary nature of satisfaction with Internet-based services.
3.2. Usability
According to ISO 9241 [42,43], usability is the
extent to which intended users of a product achieve
specified goals in an effective, efficient and satisfactory
manner within a specified context of use. Researchers
have adopted different approaches in specifying
usability measures. One approach posits that usability
is promoted if the design method meets a hierarchical set of criteria covering learnability, flexibility, and robustness
[29]. Measuring usability is then based on evaluating
the experience of the user interacting with the system,
which involves a focus on the interface.
Other researchers have viewed usability as dependent on product characteristics such as consistency,
user control, appropriate presentation, error handling,
etc. [58,83]. A different approach adopts clusters of
such factors as speed, errors, time to learn, retention,
flexibility, attitude [80], learnability, efficiency,
retention, errors and pleasing ability [64], or accuracy,
completeness, temporal, human and financial efficiency, comfort and acceptance. Several questionnaires have been developed (www.usabilitynet.org/tools/r_questionnaire.htm).
It is important to note that, while the usability
engineering approach of deriving appropriate design
targets is useful, usability does not fully determine

actual system use. Thus, designers may produce a well


engineered artifact that meets set criteria but still fails
to gain the acceptance of users. In other words,
usability is a necessary but insufficient determinant of
use [28]. To address this problem, the Technology
Acceptance Model (TAM [25]) was tailored to model
user acceptance of IS, in order to explain the behavioral intention to use the system. Perceived usefulness
(PU) and perceived ease of use (PEU) are important in
explaining the behavioral intention to use IS [3]. Thus,
users may express a preference for a system based on
personal judgment, previous experience, aesthetics,
cost, etc., and the final driver must be the user's perception of, or attitude toward, the technology.
Our study adopted the system usability scale (SUS)
questionnaire [18], developed at Digital Equipment
Corporation. Mature, robust, extensively used, and
adapted, it is the most strongly recommended of all
public domain questionnaires. It has a simple, 10-item
scale giving a global view for quick assessment of the
usability of a system in comparison to its competitors
or predecessors.
The technique used in constructing the SUS was
that a pool of 50 potential questionnaire items was first
assembled. Two examples of software systems were
then selected (a linguistic tool aimed at end-users and
a tool for systems programmers) where there was
general agreement that one was "really easy to use" and the other was "almost impossible to use", even for highly technically skilled users. Twenty people from the office systems engineering group, with occupations ranging from secretary to systems programmer, then rated both systems against all 50 items using a 5-point Likert scale ranging from "strongly agree" to "strongly disagree". The items leading to the most extreme responses were then selected (the intercorrelations between all selected items were close: ±0.7 to ±0.9). In addition, items were selected so that the common response to half of them was strong agreement and to the other half strong disagreement (to prevent biases caused by respondents not having to think about each statement) (www.usability.serco.com/trump/documents/Suschapt.doc).
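For concreteness, the standard 10-item SUS scoring rule (which is public, and distinct from the reduced seven-item version used later in this study) can be sketched as follows; the function name and sample responses are illustrative only:

```python
# Standard SUS scoring (Brooke [18]): odd-numbered items contribute
# (response - 1), even-numbered items contribute (5 - response), and the
# sum is scaled by 2.5 to yield a single 0-100 usability score.
def sus_score(responses):
    """responses: ten Likert ratings (1-5), item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = odd items
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```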
3.3. User-based (user-centered) design
In contrast to the usability approach, the user-based design paradigm has a broader scope. It involves the user throughout the whole life cycle of the system: information gathering, development, evaluation, and implementation [1]. User input is gathered at three different times:
(1) early in the project, to determine the evaluative
criteria users apply to the Web sites they use;
(2) after a preliminary design, to elicit feedback and
comments and/or to evaluate aspects of the site;
(3) when the Web site is operational, to elicit
continual feedback and suggestions for additions
and/or modifications to the site.
Fig. 2 depicts the six criteria used for constructing Web sites with the user as the focus. These criteria are operationalized into Web site features that can be measured in order to evaluate Web sites.
The rationale of user-based design is that users who are consulted at early stages have less antagonism towards the new system [36,37,79]. The cultural variation of the Web underscores the need for a tailored design [61], with the initial questions during the design process being "who is the user?" and "what are his or her goals?", though design guidelines are not available at this stage [82]. In another approach, the visitor and the site manager serve as focal points for activating the development process [27].


3.4. Classification of Web sites


The Internet houses Web sites of diverse types
with different target populations, making it difficult
to classify them. In studying the evolution of
functional characteristics of 98 Hong Kong-based
commercial sites, Yeung and Lu [91] showed that
though the content of the sampled Web sites grew
larger, their functions were only marginally enhanced.
This is in contrast to the general impression of fast-growing e-commerce activities. Hoffman et al. [40] proposed a classification of commercial Web sites into six categories: online storefront, Internet presence, content, mall, incentive, and search agent. Cappel and Myerscough [19] classified the business use of the Web into marketplace awareness, customer support, sales, advertising, and electronic information services. Practitioner classifications included, among others: inner-directed, information-oriented, transaction-driven, and relationship-oriented sites (www.businesstown.com/internet/basic-types.asp), and promotional, content, portal, and e-commerce sites (www.home-basedbusinessopportunities.com/library/webdesign101-types.shtml).
In our study, we adopted the compact IBM
classification of Web sites according to volume of
traffic [41]. Based on criteria such as pages retrieved, number of transactions, their complexity, type and number of searches, information stability, and security concerns, this classification proposed five types of
high-volume Web sites: publish/subscribe, online
shopping, customer self-service, trading, and B2B
(see Appendix A for details). Of these, we excluded
the last because of its overlap with others, due to the
nature of procurement activities of businesses.

Fig. 2. User-based design criteria and their relationship (Abels [1]).

4. Research model and hypotheses development

The goal of our effort was to test empirically user satisfaction in different types of Web sites as a
function of two attributes: usability and user-based
design. The independent construct Web site usability
mainly referred to the subjective feeling of the user
towards the Web site that served as a revenue channel
for the merchant [33]. It was expected that the better
the Web site's interface fit the user's preferences, the

higher would be the value and satisfaction attributed to


the Web site. This should result in loyalty and repeat
customers, with potentially increased revenues,
particularly when entering a competitive environment
with well-established brands, where standards are
stringent. Thus, our first hypothesis was
H1. Web sites exhibiting a higher degree of usability
will be associated with greater perceived user satisfaction.
The importance of the user-based design construct
stems from the growing emphasis on design
approaches, with the intention of promoting usability
[44]. A designer should adhere to the following
principles: knowing the user, minimizing memorization, optimizing operations, and engineering for error
[39]. The expectation is that the better the design fits
the user perception, the higher the value and
satisfaction attributed by the user to the Web site.
Thus, our second hypothesis was
H2. Web sites adhering to user-based design principles will result in greater perceived user satisfaction.
The number and heterogeneity of Web sites make it difficult to provide a uniform classification. We believed that Web sites belonging to different
types or domains would possess different characteristics that differentially affected the relationship
between usability, user-based design, and user
satisfaction (the dependent variable). For example,
online shopping sites are usually based on visual
catalogues with a relatively low frequency of updates
and a high volume of transactions and searches, while
publish/subscribe sites (like newspapers) have content
that is modified frequently, but the numbers of transactions and search operations are lower. This
leads to our third and fourth hypotheses.
H3. The type of a Web site influences the relationship between the Web site's usability and perceived user satisfaction.

H4. The type of a Web site influences the relationship between the Web site's user-based design capabilities and perceived user satisfaction.
The research model is presented in Fig. 3.

5. Methodology
5.1. Instrument
The questionnaire used to collect the data was
constructed from several instruments used in previous
research.
The user satisfaction construct used the well-known questionnaire developed by Doll et al. [30], which consists of a 12-item measure of the user's reactions to a specific computer interface. All items had large (>0.72) and significant loadings on their corresponding factors, indicating good construct validity. R-square values ranged from 0.52 to 0.79, indicating acceptable reliability for all items.
Usability was tested using the SUS instrument
developed at Digital Equipment Corporation. It has
been extensively used and adapted. For proprietary
reasons, measures of its validity and reliability have
not been published; however, in an independent study,
Lucey [56] demonstrated that this short 10-item scale
has a reliability of 0.85.
User-based design has not been used in previous studies on user satisfaction; we therefore merged three questionnaires that address Web site failures, Web searching challenges, and the design of transactive content [32] to form the questions, after trimming out redundant items.
The composite preliminary questionnaire then
consisted of 45 questions; four of these collected
demographic details of the respondents. The questionnaire was pre-tested in a pilot study and further
refined and calibrated with the aid of experts,
particularly with respect to the user-based design
constructs. The final questionnaire had 39 questions,
including five demographic items and one question
designed to verify internal consistency. Table 1 depicts
the sources and categories of questions used in the final
questionnaire, which may be obtained from the authors.

5.2. Instrument refinement


Exploratory Factor Analysis (EFA) was employed as a data reduction method on the composite questionnaire. For the user satisfaction items, principal component analysis with varimax rotation using Kaiser's normalization (Table 2) revealed five factors:


Fig. 3. The research model.

content, accuracy, format, ease of use, and timeliness. These explain 81.4% of the user satisfaction variance. In order to test whether a mean for questions Q1 through Q12 can be used to estimate user satisfaction, a second-order analysis was conducted. The first factor, content (the mean of Q1 through Q4), is able to explain 61.6% of the variance in user satisfaction, and the Cronbach's α for the 12-item questionnaire is higher than the 0.7 threshold found in the literature [71].
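As a rough illustration of this extraction step (a sketch under stated assumptions, not the authors' code), principal components of the item correlation matrix can be retained by the Kaiser criterion and then varimax-rotated; the function names and the use of plain numpy are assumptions:

```python
# PCA loadings from a correlation matrix, Kaiser retention (eigenvalue > 1),
# followed by the classic SVD-based varimax rotation.
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    p, k = loadings.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        if (d := s.sum()) < d_old * (1 + tol):
            break
        d_old = d
    return loadings @ R

def pca_varimax(R_corr):
    eigvals, eigvecs = np.linalg.eigh(R_corr)   # ascending eigenvalues
    keep = eigvals > 1.0                        # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return varimax(loadings)                    # rotated component matrix
```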
For the Web site usability construct, principal component analysis using eigenvalues revealed that three items in the original SUS questionnaire (Q13, Q17, Q21) overlapped with items from the other constructs, and these were thus omitted. The Cronbach's α reliability score for the seven-item questionnaire was 0.83.
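The α values above follow the standard formula; the sketch below assumes a respondents-by-items array and is not taken from the study's own tooling:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, one row per respondent, one column per item."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```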
For the user-based design construct, principal component analysis with varimax rotation using Kaiser's normalization produced four factors: content, navigation, search, and performance (see Table 3). These factors explain 52.5% of the variance in the user-based design construct and are congruent with the factors for promoting user-based design of Web-based systems reported by Abels et al. [1]: content, linkage, search capability, and use. A second-order factor analysis yielded one factor that explained 48.4% of the variance.
Finally, the correlation among the user satisfaction, usability, and user-based design constructs is shown in Table 4. All correlations were significant at the 0.01 level, except performance with navigation (p = 0.059) and usability with performance (p = 0.929).
5.3. Data collection
The questionnaire was Web-based, which allowed easier control and quicker processing of data for statistical analysis. The Web site presented each
respondent with a list of commercial Web sites that fell
under the heading of one or other of the four types of
Web sites from IBM's classification (Appendix A):
publish/subscribe (90 respondents), online shopping
(90), customer self-service (90), and trading (89). The
respondents were presented with a quota of Web sites
designating the number of required exposures for each
of the types. Upon logging on, the respondent was first


given an introductory screen with explanations about the research procedure and then presented with a list of sites. After selecting a specific site, the system presented the Web site and the questionnaire in two adjacent windows. Upon completing the questionnaire, the respondent received an acknowledgement from the system.

Most of the 359 respondents were students at a major business school (58% men and 42% women); 47.4% of the respondents were undergraduates, 42.9% were graduate students, and the rest were faculty members. A t-test revealed no significant difference between the groups. Most respondents (81%) were in the 20–30 age-group and seemed to have had significant exposure to the Web. For example, 43% said that they browsed the Web for more than 8 h a week and 53% for over 6 h a week.

One dilemma in setting up the survey was in selecting an optimal number of respondents for detecting usability problems. The recommended number is 3–5 [66], and a single user making the same number of repetitions as a group of users is likely to be biased. Querying more users makes it easier to account for the variance due to individual differences among users.

It should be noted that our goal was not to document the usability problems of a given site, but rather to investigate the relationships across various Web site types between usability, user-based design, and user satisfaction. Accordingly, the sample size was selected to provide approximately 15 responses per site. This size enabled detection of practically all usability problems. The Web sites reviewed by the respondents were almost evenly distributed across all of the four types investigated.

Table 1
Constructs, items and sources

Construct/source                         Item              Comments                                      Questions

User satisfaction, Doll et al. [30]      Content           User trust in site-provided content           1–4
                                         Accuracy          Precision of site-provided information        5–6
                                         Format            Clarity of information presentation           7–8
                                         Ease of use       Subjective impression of user                 9–10
                                         Timeliness        Temporal relevance of information             11–12
Web-site usability, Brooke (SUS) [18]    Usability                                                       13–22
User-oriented design, Abels et al. [1]   Personalization                                                 23–25
                                         Structure         Organization of information in the site       26–27
                                         Navigation                                                      28–29
                                         Layout                                                          30–33
                                         Search                                                          34–36
                                         Performance       Quality of user-site dialogue                 37–39
Internal consistency                                                                                     40
Demographic characteristics                                Gender, marital status, education, average    41–45
                                                           weekly Web surfing time, age

Table 2
Rotated component matrix for user satisfaction

Question   Content   Accuracy   Format   Ease of use   Timeliness
Q1         .807      .257       .120     .142          .168
Q2         .792      .230       .222     .116          .264
Q3         .819      .074       .116     .218          .166
Q4         .695      .300       .331     .049          .116
Q5         .242      .875       .131     .072          .212
Q6         .324      .783       .136     .209          .257
Q7         .231      .165       .800     .280          .126
Q8         .240      .103       .766     .336          .166
Q9         .233      .094       .434     .735          .129
Q10        .136      .142       .244     .879          .114
Q11        .292      .178       .074     .334          .775
Q12        .228      .323       .233     .023          .789

Extraction method: Principal Component Analysis. Rotation method: Varimax with Kaiser normalization.

6. Hypotheses testing
The hypotheses of this study were investigated using stepwise regression (see Tables 5 and 6). Since the number of observations is sufficiently large relative to the number of independent variables, there is no need to use partial least squares regression. Considering the number of observations in each group of sites, normality can be assumed.


Table 3
First-order factor analysis on user-based design

              Initial eigenvalues                  Extraction sums of squared loadings   Rotation sums of squared loadings
Component     Total   % of variance   Cum. (%)     Total   % of variance   Cum. (%)      Total   % of variance   Cum. (%)

Content       4.02    26.8            26.8         4.02    26.8            26.82         2.56    17.0            17.0
Navigation    1.59    10.6            37.4         1.59    10.6            37.42         1.95    13.0            30.1
Search        1.18     7.92           45.3         1.18     7.92           45.35         1.91    12.7            42.9
Performance   1.06     7.12           52.4         1.06     7.12           52.47         1.43     9.56           52.4
5              .97     6.46           58.9
6              .86     5.78           64.7
7              .81     5.43           70.1
8              .73     4.86           75.0
9              .68     4.54           79.5
10             .62     4.13           83.7
11             .53     3.59           87.2
12             .53     3.53           90.8
13             .48     3.21           94.0
14             .46     3.06           97.1
15             .43     2.89           100

Extraction method: Principal Component Analysis.

As noted in the instrument validation section, the factors driving user-based design were identified as: content (Q24 through Q28); navigation (Q29 through Q31); search (Q32 through Q35); and performance (Q23, Q38, and Q39).

Since usability was measured as a single numeric value based on the reduced seven-item SUS scale, the initial regression model was stated as follows:

satisfaction = α + β0·usability + β1·content + β2·search + β3·navigation + β4·performance
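The backward-elimination procedure behind Tables 5 and 6 can be sketched as follows; this is an illustrative implementation assuming a pandas DataFrame `df` whose column names mirror the constructs (hypothetical names, not the authors' code):

```python
# Backward elimination with OLS: repeatedly drop the least significant
# predictor until every remaining p-value is below alpha.
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df, target, predictors, alpha=0.05):
    preds = list(predictors)
    while preds:
        X = sm.add_constant(df[preds])
        model = sm.OLS(df[target], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model           # all remaining predictors are significant
        preds.remove(worst)        # here: performance first, then navigation
    return None

# model = backward_eliminate(df, "satisfaction",
#                            ["usability", "content", "search",
#                             "navigation", "performance"])
```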

Table 4
Correlation summary for constructs (N = 359)

                      Usability   Content   Navigation   Search   Performance
Satisfaction    r     .565        .690      .364         .464     .155
                p     .000        .000      .000         .000     .003
Usability       r                 .413      .201         .222     .005
                p                 .000      .000         .000     .929
Content         r                           .419         .515     .170
                p                           .000         .000     .001
Navigation      r                                        .316     .100
                p                                        .000     .059
Search          r                                                 .241
                p                                                 .000


The final model was

satisfaction = 0.218 + 0.368·usability + 0.485·content + 0.139·search

The results (see also Table 5) indicated that both H1 and H2 are supported. The amount of variance in user satisfaction explained by these three constructs is 58.6%. An F-test on the final regression equation confirmed that all constructs contributed to explaining the variance in user satisfaction at a significance level of p < 5%.

In order to test H3 and H4, three dummy variables were used to denote the type of a Web site:

SITE2 = 1 if the site is of type online shopping; 0 otherwise.
SITE3 = 1 if the site is of type customer self-service; 0 otherwise.
SITE4 = 1 if the site is of type trading; 0 otherwise.

Therefore, if all the SITE variables are 0, the Web site is of type publish/subscribe.

The initial regression model was similar to the one used for testing H1 and H2, except for the dummy variables:

satisfaction = α + β0·usability + β1·content + β2·search + β3·navigation + β4·performance + β5·SITE2 + β6·SITE3 + β7·SITE4
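This coding, with publish/subscribe as the reference category, corresponds to standard dummy coding; a small illustrative sketch (names hypothetical):

```python
# Dummy-code the site type; all three dummies are 0 for publish/subscribe.
import pandas as pd

site_type = pd.Series(["publish/subscribe", "online shopping",
                       "customer self-service", "trading"], name="type")
dummies = pd.get_dummies(site_type, prefix="SITE")
dummies = dummies.drop(columns="SITE_publish/subscribe")  # reference category
print(dummies.astype(int))
```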

Table 5
Backward regression on user satisfaction (without site type)

Model (a)          B        S.E.     Beta     t        Significant

1   (Constant)     .025     .204              .123     .902
    USAB           .369     .041     .337     8.98     .000
    CONTENT        .460     .045     .453     10.1     .000
    SEARCH         .123     .040     .127     3.10     .002
    NAV            .053     .033     .062     1.63     .103
    PERF           .045     .040     .040     1.12     .260

2   (Constant)     .139     .178              .781     .435
    USAB           .365     .041     .334     8.92     .000
    CONTENT        .464     .045     .457     10.2     .000
    SEARCH         .131     .039     .135     3.36     .001
    NAV            .054     .033     .062     1.64     .101

3   (Constant)     .218     .172              1.27     .205
    USAB           .368     .041     .336     8.96     .000
    CONTENT        .485     .043     .478     11.1     .000
    SEARCH         .139     .039     .143     3.59     .000

Model summary

Model   R      R2     Adjusted R2   S.E. of the estimate   R2 change   F change   d.f.1   d.f.2   Significant F change
1       .769   .591   .585          .447                   .591        101        5       353     .000
2       .768   .589   .585          .447                   -.001       1.27       1       355     .260
3       .766   .586   .583          .448                   -.003       2.71       1       356     .101

Predictors: (1) (Constant), PERF, USAB, NAV, SEARCH, CONTENT; (2) (Constant), USAB, NAV, SEARCH, CONTENT; (3) (Constant), USAB, SEARCH, CONTENT.
(a) Dependent variable: user satisfaction (SAT).


Table 6
Backward regression on user satisfaction (with site type)

Model              B        S.E.     Beta     t        Significant

1   (Constant)     .199     .210              .949     .343
    USAB           .363     .041     .332     8.92     .000
    CONTENT        .436     .045     .430     9.65     .000
    NAV            .045     .033     .053     1.40     .161
    SEARCH         .136     .039     .140     3.49     .001
    PERF           .026     .040     .023     .672     .502
    SHOPPING       .051     .066     .032     .783     .434
    SELF-SERVICE   .022     .066     .014     .340     .734
    TRADING        -.188    .067     -.117    -2.81    .005

2   (Constant)     .209     .207              1.01     .313
    USAB           .362     .041     .331     8.93     .000
    CONTENT        .439     .045     .432     9.80     .000
    NAV            .044     .032     .052     1.38     .167
    SEARCH         .136     .039     .140     3.49     .001
    PERF           .026     .040     .024     .679     .498
    SHOPPING       .040     .057     .025     .709     .479
    TRADING        -.199    .058     -.124    -3.40    .001

3   (Constant)     .280     .179              1.56     .119
    USAB           .360     .040     .329     8.91     .000
    CONTENT        .440     .045     .434     9.86     .000
    NAV            .044     .032     .052     1.38     .166
    SEARCH         .141     .038     .145     3.694    .000
    SHOPPING       .040     .057     .026     .717     .474
    TRADING        -.203    .058     -.127    -3.50    .001

4   (Constant)     .290     .178              1.62     .105
    USAB           .363     .040     .331     9.03     .000
    CONTENT        .440     .045     .433     9.86     .000
    NAV            .043     .032     .050     1.35     .177
    SEARCH         .141     .038     .145     3.68     .000
    TRADING        -.217    .055     -.135    -3.96    .000

5   (Constant)     .358     .172              2.08     .038
    USAB           .364     .040     .333     9.06     .000
    CONTENT        .456     .043     .449     10.61    .000
    SEARCH         .148     .038     .152     3.89     .000
    TRADING        -.223    .055     -.139    -4.08    .000

Model summary

Model   R      R2     Adjusted R2   S.E. of the estimate   R2 change   F change   d.f.1   d.f.2   Significant F change
1       .780   .608   .599          .439                   .608        67.8       8       350     .000
2       .780   .608   .600          .439                   .000        .115       1       352     .734
3       .779   .607   .601          .439                   -.001       .461       1       353     .498
4       .779   .607   .601          .438                   -.001       .514       1       354     .474
5       .778   .605   .600          .439                   -.002       1.82       1       355     .177

Predictors: (1) (Constant), TRADING, SEARCH, USAB, PERF, SELF-SERVICE, NAV, SHOPPING, CONTENT; (2) (Constant), TRADING, SEARCH, USAB, PERF, NAV, SHOPPING, CONTENT; (3) (Constant), TRADING, SEARCH, USAB, NAV, SHOPPING, CONTENT; (4) (Constant), TRADING, SEARCH, USAB, NAV, CONTENT; (5) (Constant), TRADING, SEARCH, USAB, CONTENT.
Dependent variable: user satisfaction (SAT).


The final model was

satisfaction = 0.358 + 0.364·usability + 0.456·content + 0.148·search − 0.223·SITE4
The results (see Table 6) indicated that both H3 and H4 were supported. In the case of trading sites, user satisfaction was significantly lower than that for all other sites, all coefficients being highly significant. The amount of variance in user satisfaction explained by the site's usability, content, search capability, and being of type trading was 60.5%. An F-test on the final regression equation verified that they all contribute to explaining the variance in user satisfaction at a significance level of p < 5%.
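To illustrate the size of the trading-site effect implied by the final model, take two hypothetical sites with identical (purely illustrative) scores of 4 on usability, content, and search:

satisfaction(publish/subscribe) = 0.358 + 0.364·4 + 0.456·4 + 0.148·4 ≈ 4.23
satisfaction(trading) = 4.23 − 0.223 ≈ 4.01

That is, other things being equal, the model predicts a trading site to score about 0.22 satisfaction points lower.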
Finally, we tested the data to exclude the possibility of multicollinearity between the independent variables. In the first test, the condition number [15] was calculated for the matrix of coefficients of the sample observations. Applications with experimental and actual datasets suggested that condition numbers higher than 20 indicate serious collinearity problems. Two other tests were used to examine the stability of the regression equations after omitting several observations or several variables [47,57]. The relatively low condition numbers (varying from 5.32 to 7.43), and the low variance in the regression coefficients when the two omission tests were performed (less than 7%), suggested that a high degree of multicollinearity did not exist. Hence, the final regression equations were judged to be stable (Table 6).
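A minimal sketch of the condition-number check, on synthetic data rather than the study's matrices:

```python
# Condition number of a standardized design matrix; values above ~20 are
# taken to signal serious collinearity [15].
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(359, 4))               # synthetic stand-in for predictors
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize columns

cond = np.linalg.cond(X)                    # largest / smallest singular value
print(f"condition number = {cond:.2f}")
```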

7. Visualization of Web site attributes

Perceptual maps presented by multidimensional scaling (MDS) can be considered an alternative to factor analysis. In factor analysis, the similarities between objects (e.g., variables) are those expressed in the correlation matrix, but with MDS one can analyze any kind of similarity or dissimilarity matrix, in addition to correlation matrices. In general, the goal of the MDS analysis is to detect meaningful underlying dimensions that allow the researcher to explain observed similarities or dissimilarities (distances) between the investigated objects. Both factor analysis and MDS reduce the observed complexity of nature, because the distance matrix explains the observations in terms of fewer underlying dimensions (www.statsoftinc.com/textbook/stathome.html).

In our study, an MDS procedure was performed using dimensions and distances based on (a minimal sketch follows the list):

(1) scores of the factor analysis procedure;
(2) a discriminant analysis procedure, which yielded the most powerful discriminant functions across the sample.
Table 7
Data scheme for factor analysis of Web sites

Site number   Questions V1, V2, ..., Vp   Rotated factor scores F1, F2, ..., Fx   Discriminant scores D1, D2, ..., Dy

Site 1        R11, R12, ..., R1p          FS11, FS12, ..., FS1x                   DS11, DS12, ..., DS1y
              R21, R22, ..., R2p          FS21, FS22, ..., FS2x                   DS21, DS22, ..., DS2y
              ...                         ...                                     ...
              Rs1, Rs2, ..., Rsp          FSs1, FSs2, ..., FSsx                   DSs1, DSs2, ..., DSsy
Site 1 mean                               F1(S1), F2(S1), ..., Fx(S1)             D1(S1), D2(S1), ..., Dy(S1)

Site 2        (same arrangement of responses, factor scores, and discriminant scores)
Site 2 mean                               F1(S2), F2(S2), ..., Fx(S2)             D1(S2), D2(S2), ..., Dy(S2)
...
Site n        (same arrangement of responses, factor scores, and discriminant scores)
Site n mean                               F1(Sn), F2(Sn), ..., Fx(Sn)             D1(Sn), D2(Sn), ..., Dy(Sn)

F: factor analysis, x: number of factor scores, FS: factor score, D: discriminant analysis, y: number of discriminant scores, DS: discriminant score, n: number of sites, V: question (variable), R: response, p: question number, s: number of observations per site.


Table 8
Data scheme for discriminant analysis of Web sites

Site type          Site number   Questions V1, ..., Vp   Rotated factor scores F1, ..., Fx   Discriminant scores D1, ..., Dy

Site type 1        Site 1        R11, ..., R1p           FS11, ..., FS1x                     DS11, ..., DS1y
                                 ...                     ...                                 ...
                   Site 1 mean                           F1(S1), ..., Fx(S1)                 D1(S1), ..., Dy(S1)
                   Site 2        (same arrangement as Site 1)
                   ...
                   Site nk       (same arrangement as Site 1)
Site type 1 mean                                         (means over all sites of type 1)

Site type 2        (same arrangement as site type 1)
Site type 2 mean
...
Site type k        (same arrangement as site type 1)
Site type k mean

FS: factor score, D: discriminant analysis, y: number of discriminant scores, DS: discriminant score, p: question number, s: number of observations per site, F: factor analysis, x: number of factor scores, k: number of site types, n: number of observations per site, V: question (variable), R: response.


The procedures were then repeated for observations at the site level and the site type level. The non-attribute-based version of the MDS method was used here because it facilitated the naming of dimensions, made clustering them into groups with similar characteristics easier, and was more easily connected to other computer programs [24].

Tables 7 and 8 depict the arrangement of the data used for the factor and discriminant analysis procedures. As an example, a perceptual map based on factor analysis at both the Web site and Web site type level for performance (Y-axis) versus content (X-axis) is shown in Fig. 4.

Fig. 4. Perceptual maps using factor analysis (site and site type).


Table 9
Discriminant factors and questions at the Web site level

Dimension                      Question #   Question content                                                                Question locus

Information and presentation   27           Is multimedia/graphics used strictly to support the site purpose?              Graphics presentation
                               25           Do you think you have received complete information both on basic facts        Information presentation
                                            and on full product details?
                               26           To what degree is categorization of the content logical?                       Content presentation
Search                         34           To what degree does the search engine deal with misspellings and synonyms?     Search actions
Information completeness       24           Is content exposed in logical increments so that people are not overwhelmed?   Information actions
Personalization                23           Can you personalize the site in order to speed up use?                         Personalization actions
Error handling                 29           Does error handling offer the ability to move forward and not hit dead ends?   Error handling

Fig. 5. Perceptual maps using discriminant functions (by site type).


Table 10
Discriminant factors and questions at the Web site type level

Dimension                       Question #   Question content                                                                Question locus

Presentation                    27           Is multimedia/graphics used strictly to support the site purpose?              Graphics presentation
                                25           Do you think you have received complete information both on basic facts        Information presentation
                                             and on full product details?
                                26           To what degree is categorization of the content logical?                       Content presentation
                                28           Do navigation aids serve as a logical road map?                                Navigation presentation
                                32           Are there update clues (colors, URL or category trail, etc.) to ensure         Navigation presentation
                                             that you know your location on the site?
                                35           To what degree are the results listed in relevant order?                       Search presentation
User and administrative tasks   31           Do multiple navigation bars serve completely separate purposes                 Navigation actions
                                             and not overlap each other?
                                24           Is content exposed in logical increments so that people are not overwhelmed?   Content (information) actions
                                39           Does the site contain errors, such as JavaScript crashes?                      Error actions
                                38           Does the site inform you of browser-specific design requirements?              System operation actions
                                33           Does the site balance scrolling the page with screen layout density            Presentation actions
                                             (the page arrangement)?
                                23           Can you personalize the site in order to speed up use?                         Personalization actions
                                34           To what degree does the search engine deal with misspellings and synonyms?     Search actions
Robustness                      29           Does error handling offer the ability to move forward and not hit dead ends?   Error handling (browsability*)
                                30           Are navigation bars consistent?                                                Observability*

* Mentioned as a feature of robustness.

From the map it is evident that, on average, online shopping sites provided higher content and performance capabilities than all other site types. Trading sites were relatively low on content capabilities, and customer self-service sites were relatively low on performance capabilities. The variance of trading sites was high, both on content and performance dimensions; this may indicate that these sites were developed in a highly dynamic and uncertain environment. Customer self-service sites also had a small variance on both dimensions but in general seemed to be mediocre compared to the other types. Possibly some companies focus their efforts on developing online shopping sites because they generate substantial revenue, whereas customer-service sites are perceived as a burden.

At the individual site level, Barnes and Noble (site number 24) seemed to be a leader with regard to the combination of content and performance, whereas the Virtual Shopping Center (site number 29) lagged significantly behind.
Discriminant analysis was performed at the site and site type level. At the site level it produced 15 possible discriminant functions. Using the SCREE method [88], five functions which explained more than 6.66% (1/15) of the variance were selected. The dimensions based on these functions were named: information and presentation, search, information completeness, personalization, and error handling (see Table 9). Perceptual maps drawn using these dimensions provide additional evidence of the relative weakness of trading sites (see Fig. 5). Customer self-service and online shopping, on the other hand, were quite consistently better on all dimensions. This finding could be explained by their strong customer orientation and the fact that the customer was usually the main source of revenue for most firms.
Table 11
Factor analysis in cognitive mapping: findings

Feature       Highest site type         Lowest site type   Comments
Navigation    Publish/subscribe                            High variance for customer self-service
Performance   Online shopping           Self-service
Content       Online shopping           Trading
Search        Equal across site types


Table 12
Discriminant dimensions: findings

Analysis by Web site type        Analysis by Web site
Presentation                     Level of information and presentation
User and administrative tasks    Search capabilities
Robustness                       Information completeness
                                 Personalization
                                 Error handling

Table 13
Discriminant analysis: findings

Scope of analysis   Factor                                  Highest site type                              Lowest site type

Web site name       Level of information and presentation   Equal across sites
                    Search capabilities                     Online shopping and customer self-service     Trading
                    Information completeness                Customer self-service and publish/subscribe   Trading
                    Personalization                         Online shopping and customer self-service     Trading
                    Error handling                          No significant finding across all sites

Web site type       Presentation                            Online shopping and customer self-service     Trading
                    User and administrative tasks           Online shopping and customer self-service
                    Robustness                              Publish/subscribe

A similar analysis performed at the Web site type level elicited the following discriminant dimensions: robustness(1), presentation, and user and administrative tasks (see Table 10). Perceptual maps based on these dimensions depicted publish/subscribe sites as the most robust. The best presentation and user-and-administrative-tasks capabilities were exhibited in online shopping and customer self-service sites. The weakest were again trading sites. All findings based on the discriminant analysis methods, including best and worst performers on each dimension, are summarized in Tables 11–13.

(1) Level of support provided for successful attainment of users' goals.
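The discriminant step can be sketched as follows, on synthetic data and with illustrative names; scikit-learn's LDA is used here in place of whatever package the authors actually employed:

```python
# Linear discriminant analysis across site types: fit discriminant functions
# on per-response design scores and inspect the between-group variance each
# function explains (the basis for the SCREE-style selection described above).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(359, 15))             # 359 responses x 15 design items
site_type = rng.integers(0, 4, size=359)   # four site types

lda = LinearDiscriminantAnalysis()
scores = lda.fit_transform(X, site_type)   # discriminant scores per response
print(lda.explained_variance_ratio_)       # variance explained per function
```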

8. Conclusions
Our study empirically investigated the effect of user-based design and Web site usability on user satisfaction across four types of commercial Web sites: online shopping, customer self-service, trading,
and publish/subscribe. By investigating the typology
of IBM, this study addressed the increasing differentiation of Web sites according to type and purpose,
an issue that has received little attention. We also
refined recent studies showing that Web site success
was related to usability measures, as well as
incorporating the user-based design construct, which
had not been investigated previously in IS user
satisfaction research.
The significance of the findings was enhanced by
the dual validation design of the study, combining both
hypothesis testing and perceptual mapping supported
by MDS visualization capabilities. These two methods
have not yet been used in combination in the context of
user satisfaction.
Our findings indicated that Web sites have
different, hidden, and subjective factors that stem
from the process of user and system interaction and
affect overall user satisfaction and that they can serve
the development and maintenance phases of Web site
creation.
The items of the questionnaire can be used as a
checklist in the development process, especially for
trading sites, which have consistently been found to be
a problem. Online shopping and customer self-service


exhibited good capabilities and may therefore serve as


a model. The observation that both online shopping
and customer self-service possessed better capabilities
is not surprising in view of the fact that the user is the focus
of commercial ventures and must be satisfied if
profitability is to be attained.
The study had several limitations. First, it focused
on user satisfaction as the dependent variable.
However, as indicated by ISO 13407, there is a relationship between usability and user-centered design, which suggests that alternative models should be evaluated. Second, the IBM framework, while it proved useful, has not been validated to verify that it
is comprehensive and that its categories are mutually
exclusive. Third, Web site users are random Web
surfers who do not participate in the design process
and, therefore, the user-based design instrument
might need to be tested and adapted. Fourth, the
administration of the experiment asked each
respondent to evaluate only two types of sites;
previous experience of respondents with certain sites
and the time of evaluation were not measured; the
classification of Web sites into different categories
was done by a single author and not an expert panel;
Web sites are dynamic and might have changed
during the evaluation sessions, causing measurement
biases; the measurement of usability also could be
misleading; and finally, demographic limitations of
the study are the relatively small size of the sample
and the fact that almost half of the participants were
students. The latter point is somewhat mitigated by
the t-test performed across the four Web site groups,
which showed no significant differences among
them.

Acknowledgement
The authors would like to thank the Editor-in-Chief
and three anonymous reviewers for their valuable and
thorough comments throughout the review process.

Appendix A. Summary of high-volume Web site classifications
Publish/subscribe Web sites provide visitors with
information. Some examples include search engines,

media sites such as newspapers and magazines, and event sites such as those for the Olympics and for the tennis championships at Wimbledon. Site content changes frequently, driving changes to page layouts. While search traffic is low volume, the number of unique items sought is high, resulting in the largest number of page views of all site types. As an example, the Wimbledon site successfully handled a peak volume of 430,000 hits per minute using IBM WebSphere Performance Pack. Security considerations are minor compared to other site types. Data volatility is low. This site type processes the fewest transactions and has little or no connection to any legacy systems.
Online shopping sites let visitors browse and buy.
Examples are typical retail sites where visitors buy
books, clothes, and even cars. Site content can be
relatively static, such as a parts catalog, or dynamic
where items are frequently added and deleted, for
example, as promotions and special discounts come
and go. Search traffic is heavier than the publish/
subscribe site, though the number of unique items
sought is not as large. Data volatility is low.
Transaction traffic is moderate to high, and almost
always grows. The typical daily volumes for many
large retail customers, running on IBM Net.Commerce, range from less than one million hits per day to over 3 million hits per day, and from 100,000 transactions per day to 700,000 transactions per day at the top end; of the total transactions, typically between 1% and 5% are buy transactions.
When visitors buy, security requirements become
significant and include privacy, nonrepudiation,
integrity, authentication, and regulations. Shopping
sites have more connections to legacy systems, such as
fulfillment systems, than the publish/subscribe sites,
but generally less than the other site types.
Customer self-service sites let visitors help
themselves. Sample sites include banking from home,
tracking packages, and making travel arrangements.
Data comes largely from legacy applications, often from multiple sources, thereby exposing data consistency issues. Security considerations are significant for
home banking and purchasing travel services, less so
for other uses. Search traffic is low volume;
transaction traffic is low to moderate, but growing.
Trading sites let visitors buy and sell. Of all site types, trading sites have the most volatile content, the highest transaction volumes (with significant swings), the most complex transactions, and the most time sensitivity. Products like IBM's CICS high-volume transaction processing system play a key role at these sites. Trading sites are tightly connected to the legacy systems, for example, using IBM MQSeries for connectivity. Nearly all transactions interact with the back-end servers. Security considerations are high, equivalent to online shopping, with an even larger number of secure pages. Search traffic is low volume.
Business-to-business sites let businesses buy from
and sell to each other. Many businesses are
implementing a Web site for their purchasing
applications. Such purchasing activity may also be
characteristic of other site types, such as publish/
subscribe sites and self-service sites. Data comes
largely from legacy applications, often from multiple sources, thereby exposing data consistency issues.
Security requirements are equivalent to online
shopping. Transaction volume is low to moderate,
but growing; transactions are typically complex,
connecting multiple suppliers and distributors.


References
[1] E. Abels, M.D. White, K. Hahn, A user-based design process for Web sites, Internet Research: Electronic Networking Applications and Policy 8(1), 1998, pp. 39–48.
[2] A.M. Aladwani, P.C. Palvia, Developing and validating an instrument for measuring user-perceived web quality, Information & Management 39, 2002, pp. 467–476.
[3] K. Amoako-Gyampah, A.F. Salam, An extension of the technology acceptance model in an ERP implementation environment, Information & Management 41, 2004, pp. 731–745.
[4] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.
[5] H. Barki, J. Hartwick, Measuring user participation, user involvement, and user attitude, MIS Quarterly 18(1), 1994, pp. 59–79.
[6] H. Barki, S. Huff, Change, attitude to change, and decision support success, Information & Management 9(5), 1985, pp. 261–268.
[7] J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3), 1986, pp. 232–238.
[8] S.J. Barnes, R.T. Vidgen, WebQual: an exploration of Web site quality, in: Proceedings of the Eighth European Conference on Information Systems, vol. 1, Vienna, 2000, pp. 298–305.

[9] S.J. Barnes, R.T. Vidgen, An evaluation of cyber-bookshops: the WebQual method, International Journal of Electronic Commerce 6, 2001, pp. 6–25.
[10] S.J. Barnes, R.T. Vidgen, Assessing the effect of a web site redesign initiative: an SME case study, International Journal of Management Literature 1, 2001, pp. 113–126.
[11] S.J. Barnes, R.T. Vidgen, Assessing the quality of auction Web sites, in: Proceedings of the Hawaii International Conference on Systems Sciences, CD-ROM, Maui, Hawaii, 2001.
[12] S.J. Barnes, R.T. Vidgen, An integrative approach to the assessment of e-commerce quality, Journal of Electronic Commerce Research 3(3), 2002.
[13] S. Barnes, R.T. Vidgen, Measuring Web site quality improvements: a case study of the forum on strategic management knowledge exchange, Industrial Management & Data Systems 103(5), 2003, pp. 297–309.
[14] C.M. Beise, Assessment of Information Systems Effectiveness through Examination of IS/Organizational Interfaces, unpublished paper, Georgia State University, 1988.
[15] D.A. Belsley, E. Kuh, R.E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, Wiley, New York, NY, 1980.
[16] R. Benbunan-Fich, Using protocol analysis to evaluate the usability of a commercial website, Information & Management 39, 2001, pp. 151–163.
[17] M. Bensaou, M. Earl, The right mind-set for managing information technology, Harvard Business Review, 1998, pp. 118–129.
[18] J. Brooke, SUS: a quick and dirty usability scale, in: P.W. Jordan, B. Thomas, B.A. Weerdmeester, I.L. McClelland (Eds.), Usability Evaluation in Industry, Taylor & Francis, London, UK, 1996, pp. 189–194. Available from www.cee.hw.ac.uk/ph/sus.html.
[19] J.J. Cappel, M.A. Myerscough, World Wide Web uses for electronic commerce: towards a classification scheme, in: Proceedings of the 1996 Second AIS Conference, Phoenix, Arizona, 1996.
[20] J.F. Chang, G. Torkzadeh, G. Dhillon, Re-examining the measurement models of success for Internet commerce, Information & Management 41, 2004, pp. 577–584.
[21] M.K. Chang, W. Cheung, V.S. Lai, Literature derived reference models for the adoption of online shopping, Information & Management 42(4), 2005, pp. 543–559.
[22] P.M. Cheney, G.W. Dickson, Organizational characteristics and information systems: an exploratory investigation, Academy of Management Journal 25(1), 1982, pp. 170–184.
[23] S. Cho, Customer-focused Internet commerce at Cisco Systems, IEEE Communications Magazine 37(9), 1999, pp. 61–63.
[24] G.A. Churchill, Marketing Research: Methodological Foundations, 8th ed., South-Western College Pub., Cincinnati, Ohio, 2001.
[25] F. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13(3), 1989, pp. 318–339.


[26] W.H. DeLone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3(1), 1992, pp. 60–95.
[27] O. De Troyer, C. Leune, WSDM: a user-centered design method for Web sites, in: Proceedings of the Seventh International World Wide Web Conference on Computer Networks and ISDN Systems, Elsevier, 1998, pp. 85–94.
[28] A. Dillon, M. Morris, Power, perception and performance: from usability engineering to technology acceptance with the P3 model of user response, in: Proceedings of the 43rd Annual Conference of the Human Factors and Ergonomics Society, HFES, Santa Monica, CA, 1999, pp. 1017–1021.
[29] A. Dix, J. Finlay, G. Abowd, R. Beale, Human–Computer Interaction, Prentice-Hall, 1993.
[30] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly 18(4), 1994, pp. 453–461.
[31] P. Ein-Dor, E. Segev, Organizational context and success of management information systems, Management Science 24(10), 1978, pp. 1064–1077.
[32] Forrester Research, Must search stink, Report, by P.R. Hagen, 2000. http://www.forrester.com.
[33] Forrester Research, Web sites continue to fail the usability test, IT View and Business View Brief, by B.D. Temkin, 2003. http://www.forrester.com.
[34] R.Y.K. Fung, A.C. Pereira, W.H.R. Yeung, Performance evaluation of a Web-based information system for laboratories and service centres, Logistics Information Management 13(4), 2000, pp. 218–227.
[35] D.F. Galletta, A.L. Lederer, Some cautions on the measurement of user information satisfaction, Decision Sciences 20, 1989, pp. 419–438.
[36] J.D. Gould, C. Lewis, Designing for usability: key principles and what designers think, Communications of the ACM 28(3), 1985, pp. 300–311.
[37] J. Greenbaum, M. Kyng, Design at Work: Cooperative Design of Computer Systems, Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.
[38] G. Hamel, J. Sampler, E-corporation: more than just Web-based, it's building a new industry order, Fortune, 1998, pp. 52–63.
[39] W.J. Hansen, User engineering principles for interactive systems, in: Proceedings of the Fall Joint Computer Conference, AFIPS Press, Montvale, NJ, 1971, pp. 523–532.
[40] D.L. Hoffman, T.P. Novak, P. Chatterjee, Commercial scenarios for the web: opportunities and challenges, Journal of Computer-Mediated Communication, Electronic Commerce 1(3), 1995.
[41] IBM, Summary of high-volume Web site classifications, 1999. www7b.software.ibm.com/wsdd/library/techarticles/hvws/personalize.html#appendixb.
[42] ISO 9241-11, Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on usability, Draft International Standard, 1994.
[43] ISO 9241-10, Ergonomic requirements for office work with visual display terminals (VDTs), Part 10: Dialogue principles, 1996.
[44] ISO 13407, Human-centered design processes for interactive systems, International Organization for Standardization, Geneva, Switzerland, 1999.
[45] B. Ives, M.H. Olson, User involvement and MIS success: a review of research, Management Science 30(5), 1984, pp. 586–603.
[46] M.Y. Ivory, M.A. Hearst, Improving web site design, IEEE Internet Computing, Special Issue on Usability and the World Wide Web 6(2), 2002, pp. 56–63.
[47] J. Johnston, Econometric Methods, 3rd ed., McGraw-Hill, Japan, 1984.
[48] N. Kano, N. Seraku, F. Takahashi, S. Tsuji, Attractive and normal quality, Quality 14(2), 1984.
[49] M. Khalifa, V. Liu, Determinants of satisfaction at different adoption stages of Internet-based services, Journal of AIS 4(5), 2003, pp. 206–232.
[50] K. Klenke, Construct measurement in management information systems: a review and critique of user satisfaction and user involvement instruments, Information Systems and Operations Research 30(4), 1992, pp. 325–348.
[51] S. Kurniawan, Modeling online retailer customer preference and stickiness: a mediated structural equation model, in: Proceedings of the Fourth Pacific Asia Conference on Information Systems, 2000, pp. 238–252.
[52] Y. Lee, J. Kim, From design features to financial performance: a comprehensive model of design principles for online stock trading sites, Journal of Electronic Commerce Research 3(3), 2002, pp. 128–143.
[53] C. Liu, K.P. Arnett, Exploring the factors associated with Web site success in the context of electronic commerce, Information & Management 38, 2000, pp. 23–33.
[54] J. Lu, Assessing web-based electronic commerce applications with customer satisfaction: an exploratory study, in: International Telecommunication Society's Asia-Indian Ocean Regional Conference, Telecommunications and E-Commerce, 2001, pp. 132–144.
[55] J. Lu, A model for evaluating e-commerce based on cost/benefit and customer satisfaction, Information Systems Frontiers 5(3), 2003, pp. 265–277.
[56] N.M. Lucey, More than Meets the I: User-Satisfaction of Computer Systems, unpublished thesis for Diploma in Applied Psychology, University College Cork, Ireland, 1991.
[57] G.S. Maddala, Econometrics, McGraw-Hill, Japan, 1977.
[58] D.J. Mayhew, Principles and Guidelines in Software User Interface Design, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[59] V. McKinney, K. Yoon, F. Zahedi, The measurement of Web-customer satisfaction: an expectation and disconfirmation approach, Information Systems Research 13(3), 2002, pp. 296–315.
[60] N.P. Melone, A theoretical assessment of the user satisfaction construct in information systems research, Management Science 36(1), 1990, pp. 76–91.
[61] M.J. Muller, Defining and designing the Internet: participation by Internet stakeholder constituencies, Social Science Computer Review 14(1), 1996, pp. 30–33.



[62] M. Mulvenna, S. Anand, A. Buchner, Personalization on the Net using Web mining, Communications of the ACM 43(8), 2000, pp. 123–125.
[63] S. Muylle, R. Moenaert, M. Despontin, The conceptualization and empirical validation of web site user satisfaction, Information & Management 41, 2004, pp. 543–560.
[64] J. Nielsen, Usability Engineering, Academic Press Inc., Boston, MA, 1993.
[65] J. Nielsen, Usability as barrier to entry, Jakob Nielsen's Alertbox, 28 November 1999. http://www.useit.com/alertbox/991128.html.
[66] J. Nielsen, Why you only need to test with 5 users, Jakob Nielsen's Alertbox, 19 March 2000. http://www.useit.com/alertbox/20000319.html.
[67] J. Nielsen, Novice vs. expert users, Jakob Nielsen's Alertbox, 6 February 2000. http://www.useit.com/alertbox/20000206.html.
[68] J. Nielsen, Designing Web Usability, New Riders Publishing, Indianapolis, IN, 2000.
[69] J. Nielsen, J. Levy, Measuring usability: preference vs. performance, Communications of the ACM 37(4), 1994, pp. 66–75.
[70] R.L. Nolan, H.H. Seward, Measuring user satisfaction to evaluate information systems, in: R.L. Nolan (Ed.), Managing the Data Resource Function, West Pub., St. Paul, MN, 1974, pp. 253–275.
[71] J. Nunnally, Psychometric Theory, McGraw-Hill, New York, 1978.
[72] J.W. Palmer, Web site usability, design, and performance metrics, Information Systems Research 13(2), 2002, pp. 151–167.
[73] P.G. Patterson, R.A. Spreng, Modeling the relationship between perceived value, satisfaction and repurchase intentions in a business-to-business, services context: an empirical examination, International Journal of Service Industry Management 8(5), 1997, pp. 414–434.
[74] J. Preece, Y. Rogers, H. Sharp, D. Benyon, S. Holland, T. Carey, Human–Computer Interaction, Addison-Wesley, Wokingham, UK, 1994.
[75] J.A. Quelch, L.R. Klein, The Internet and international marketing, Sloan Management Review 3, 1996, pp. 60–75.
[76] C. Ranganathan, S. Ganapathy, Key dimensions of business-to-consumer web sites, Information & Management 39, 2002, pp. 457–465.
[77] L. Raymond, Validating and applying user satisfaction as a measure of MIS success in small organizations, Information & Management 12(3), 1987, pp. 173–179.
[78] D. Robins, S. Kelsey, Analysis of Web-based information architecture in a university library: navigating for known items, Information Technology and Libraries 21(4), 2002, pp. 158–169.
[79] D. Schuler, A. Namioka (Eds.), Participatory Design: Principles and Practices, Lawrence Erlbaum Associates, Hillsdale, NJ, 1993.
[80] B. Shackel, Usability – context, framework, design and evaluation, in: B. Shackel, S. Richardson (Eds.), Human Factors for Informatics Usability, Cambridge University Press, Cambridge, 1991, pp. 21–38.
[81] H. Shih, An empirical study on predicting user acceptance of e-shopping on the Web, Information & Management 41, 2004, pp. 351–368.
[82] B. Shneiderman, Designing Information Abundant Websites, 1996. ftp://ftp.cs.umd.edu/pub/hcil/Reports-abstracts-Bibliography/3634.txt.
[83] S.L. Smith, J.N. Mosier, Design guidelines for user-system interface software, Technical Report ESD-TR-84-190, The Mitre Corporation, Bedford, MA, 1984.
[84] M. Spiliopoulou, Web usage mining for Web site evaluation: making a site better fit its users, Communications of the ACM 43(8), 2000, pp. 127–134.
[85] A. Srinivasan, Alternative measures of system effectiveness, MIS Quarterly 9(3), 1985, pp. 243–253.
[86] D.W. Straub, D. Hoffman, B. Weber, C. Steinfield, Measuring e-commerce in Net-enabled organizations, Information Systems Research 13(2), 2002, pp. 115–124.
[87] E.B. Swanson, Management information systems: appreciation and involvement, Management Science 21(2), 1974, pp. 178–188.
[88] B.G. Tabachnick, L.S. Fidell, Using Multivariate Statistics, 3rd ed., Harper Collins, New York, 1996.
[89] C. Trepper, E-commerce Strategies, Microsoft Press, Washington, DC, 2000.
[90] B.W. Tan, T.W. Lo, Validation of a user satisfaction instrument for office automation success, Information & Management 18(4), 1990, pp. 203–208.
[91] W.L. Yeung, M. Lu, Functional characteristics of commercial web sites: a longitudinal study in Hong Kong, Information & Management 41(4), 2004, pp. 483–495.
[92] P. Zhang, G.M. Von Dran, User expectations and rankings of quality factors in different Web site domains, International Journal of Electronic Commerce 6(2), 2002, pp. 9–33.
[93] M. Zviran, Z. Erlich, Measuring IS user satisfaction: review and implications, Communications of the AIS 12(5), 2003, pp. 81–104.

Moshe Zviran is associate professor of Information Systems in the Faculty of Management, The Leon Recanati Graduate School of Business Administration, Tel Aviv University. He received his B.Sc. degree in mathematics and computer science and the M.Sc. and Ph.D. degrees in information systems from Tel Aviv University, Israel, in 1979, 1982 and 1988, respectively. He held academic positions at the Claremont Graduate University, California, the Naval Postgraduate School, California, and Ben-Gurion University, Israel. His research interests include information systems planning, measurement of IS success and user satisfaction, and information systems security. He is also a consultant in these areas for a number of leading organizations. Prof. Zviran's research has been published in MIS Quarterly, Communications of the ACM, Journal of Management Information Systems, IEEE Transactions on Engineering Management, Information & Management, Omega, Data and Knowledge Engineering, The Computer Journal and other journals. He is also co-author (with N. Ahituv and S. Neumann) of Information Systems for Management (Tel-Aviv, Dyonon, 1996) and Information Systems from Theory to Practice (Tel-Aviv, Dyonon, 2001).
Chanan Glezer is a lecturer at the Department of Information Systems Engineering, Ben-Gurion University of the Negev, Israel. He holds a Ph.D. degree in Information Systems from Texas Tech University. His main areas of interest are electronic commerce, organizational computing and Internet security. His research has been published in journals such as Communications of the ACM, Journal of Organizational Computing and Electronic Commerce, Journal of Strategic Information Systems, Data and Knowledge Engineering, Journal of Information Warfare, International Journal of Electronic Business, and the Journal of Medical Systems.
Itay Avni is a graduate of the M.Sc. program in Information Systems at the Faculty of Management, The Leon Recanati Graduate School of Business Administration, Tel Aviv University.
