
Kettinger & Lee/Alternative Scales for IS SERVQUAL

RESEARCH NOTE

ZONES OF TOLERANCE: ALTERNATIVE SCALES FOR MEASURING INFORMATION SYSTEMS SERVICE QUALITY¹

By: William J. Kettinger
Moore School of Business
University of South Carolina
Columbia, SC 29208
U.S.A.
bill@sc.edu

Choong C. Lee
Graduate School of Information
Yonsei University
Seoul
KOREA
cclee@yonsei.ac.kr

Abstract

The expectation norm of Information Systems SERVQUAL has been challenged on both conceptual and empirical grounds, drawing into question the instrument's practical value. To address the criticism that the original IS SERVQUAL's expectation measure is ambiguous, we test a new set of scales that posits that service expectations exist at two levels that IS customers use as a basis to assess IS service quality: (1) desired service: the level of IS service desired, and (2) adequate service: the minimum level of IS service customers are willing to accept. Separating these two levels is a zone of tolerance (ZOT) that represents the range of IS service performance a customer would consider satisfactory. In other words, IS customer service expectations are characterized by a range of levels, rather than a single expectation point. This research note adapts the ZOT concept and its generic operational definition from marketing to the IS field, assessing its psychometric properties. Our findings support the validity of a four-dimension IS ZOT SERVQUAL measure for desired, adequate, and perceived service quality levels, identifying 18 commonly applicable question items. This measure addresses past criticism while offering a practical diagnostic tool.

Keywords: IS service quality, zones of tolerance, IS management, SERVQUAL, evaluation, user expectations, information services function

¹ V. Sambamurthy was the accepting senior editor for this paper. Christopher L. Carr and Richard T. Watson served as reviewers.

MIS Quarterly Vol. 29 No. 4, pp. 607-623/December 2005

Introduction

Over the past decade, SERVQUAL has garnered considerable scholarly and managerial attention as a diagnostic tool for uncovering areas of information systems service quality strengths and weaknesses. Praised for its practical relevance (e.g., Jiang, Klein, and Carr 2002; Jiang, Klein, and Crampton 2000; Kettinger and Lee 1994, 1997; Pitt et al. 1995, 1997; Watson et al. 1998), it has often been criticized on conceptual and psychometric grounds (Lee and Kettinger 1996; Kohlmeyer and Blanton 2000; Van Dyke et al. 1997, 1999). A primary area of criticism concerns SERVQUAL's reliance on gap scores that are derived by calculating the difference between IS users' perceived levels of service and their expectations for service. Critics both in marketing (e.g., Brown et al. 1993; Cronin and Taylor 1992, 1994; Teas 1993, 1994) and in IS (e.g., Kettinger and Lee 1997; Van Dyke et al. 1997, 1999) point to conceptual and empirical difficulties with the original SERVQUAL instrument and have suggested that alternatives to the original gap-scored IS-adapted SERVQUAL be explored.

In their 1997 article, Kettinger and Lee² called for the further study of an alternative instrument adapted from marketing referred to as the zones of tolerance (ZOT) service quality measure. This zones of tolerance measure is conceptualized to overcome one of the most significant points of criticism with the original SERVQUAL instrument; namely, the need for a more parsimonious conceptualization of service quality expectations, while retaining the practical diagnostic power of understanding service expectation levels. This research note tests the psychometric properties of an IS ZOT service quality instrument. Structured as a context-only extension (Berthon et al. 2002), this study finds that the IS zones of tolerance service quality measure offers significant promise as a diagnostic tool for IS managers and as an alternative to overcome problems with the existing IS SERVQUAL instrument.

² As the work on which the SERVQUAL research focuses originated from that of the marketing scholars A. Parasuraman (P), Valarie Zeithaml (Z), and Leonard Berry (B), and because we will make frequent reference to their various joint publications, we will refer to them as PZB, PBZ, ZBP, BPZ, or ZPB, as the case may be. As we will make frequent reference to the authors Leyland Pitt, Richard Watson, and C. Bruce Kavan, we will refer to them as PWK; likewise, William J. Kettinger and Choong C. Lee will be referred to as K&L.

Research Challenges with SERVQUAL and the Reconceptualization of the Expectation Norm

In marketing consumer research, satisfaction can be broadly characterized as a post-use evaluation of product or service quality given pre-use expectations. SERVQUAL was developed to measure service quality. In developing their SERVQUAL instrument, the intent of Parasuraman, Zeithaml, and Berry (hereafter PZB) was to derive a service quality measure that transcended multiple measurement contexts. Over the years, SERVQUAL has been adapted to the measurement of many service delivery contexts, including IS service delivery. With its widespread application, studies in marketing emerged questioning the conceptual and empirical integrity of the SERVQUAL instrument. These articles focused primarily on two major concerns related to SERVQUAL's use of difference or gap scores. Researchers such as Cronin and Taylor (1992, 1994) questioned the predictive superiority of SERVQUAL's gap measure (perceived service quality minus expected service quality) over a performance-only service quality score (SERVPERF), in essence questioning the need to measure expectations or calculate a gap score. Marketing scholars such as Teas (1993, 1994) questioned the conceptual integrity of SERVQUAL's expectation measure, stating that it suffered from differing interpretations. Considerable debate and subsequent study have occurred both in marketing and in IS concerning these challenges to the expectation measure of the original SERVQUAL conceptualization.

Gap Measure Versus Single Perceived Score

Numerous researchers have shown that the original gap (difference) scored SERVQUAL instrument demonstrates poorer predictive validity than a perceived performance only (SERVPERF) service quality measure. In fact, it was PZB (1991) who first reported that the SERVPERF scores


produced higher adjusted R² values when compared to SERVQUAL's gap scores for each of the five dimensions (e.g., reliability, responsiveness, assurance, empathy, and tangibles). The superior predictive power of the performance-only scores was further confirmed by Babakus and Boller (1992), Cronin and Taylor (1992), Boulding et al. (1993), and PZB (1994b). These same results were later demonstrated with an IS-adapted SERVPERF measure by Lee and Kettinger in 1996, by PWK in 1997, and by Van Dyke et al. in 1999. Given these findings, some researchers in both marketing and information systems have argued for a single perceived score rather than a computed comparison of perception and expectation (Cronin and Taylor 1992, 1994; Peter et al. 1993; Van Dyke et al. 1997).

While conceding the improved predictive power of the perceived performance only instrument, advocates (e.g., PZB 1994b; PWK 1997) of the original gap-scored SERVQUAL measure argue the value of difference scores on both practical and theoretical grounds. PZB (1994b, p. 116) state,

    executives in companies that have switched to a disconfirmation-based measurement approach tell us that the information generated by this approach has greater diagnostic value. Moreover, examining only performance ratings can lead to different actions than examining those ratings relative to expectations.

They rhetorically ask,

    Are managers who use service quality measurements more interested in accurately identifying service shortfalls or explaining variance in an overall measure of perceived service quality? (p. 116)

In the IS context, PWK (1997) argue that the richer information contained in IS SERVQUAL's disconfirmation-based measurements provides IS managers with diagnostic power that typically outweighs the statistical and convenience benefits derived from the use of IS SERVPERF. K&L (1997), while agreeing with PZB (1994b) and PWK (1997) that difference scores offer more meaningful prescriptive insights, take a slightly different tack, stating that the conceptual problems identified with the original SERVQUAL expectation measure push for exploration of alternative configurations of an expectation-based IS SERVQUAL measure.

Rethinking the Expectation Measure

A second major criticism of the original SERVQUAL expectation measure challenges the ideal service (PZB 1988) and excellent service (PZB 1991) expectation norms of comparison used in difference scoring. This criticism highlights an important distinction between the two main standards that represent expectations in the confirmation/disconfirmation literature. One standard represents the expectation as a prediction of future events (Churchill and Surprenant 1982; Miller 1977), defined as an objective calculation of the probability of performance. The other standard is a normative expectation of future events (Miller 1977; PZB 1988; Swan and Trawick 1980; ZBP 1993), operationalized either as a desired or an ideal expectation. Although these two standards use different expectation measures, expectations and perceptions are treated as linked via the disconfirmation of expectation paradigm (Oliver 1980), which states that the higher the expectation in relation to the actual performance, the greater the degree of disconfirmation and the lower the satisfaction. Critics of the normative standards, such as Teas (1993) in marketing and Van Dyke et al. (1997, 1999) in IS, argue that SERVQUAL's expectation measure suffers from multiple interpretations depending on whether a customer bases his or her assessment on a prediction of what will occur in the next IS service encounter or on what ideally should occur.

Recognizing a need for improvement, ZBP (1993, p. 3) acknowledged that their original definition of expectations was broad and did not stipulate the norms of expectations used by customers in assessing service quality. Citing empirical support (e.g., Tse and Wilton 1988) for both a predicted and an ideal expectation standard, ZBP point


to a service quality confirmation/disconfirmation process involving complex, simultaneous interactions that include more than one expectation comparison. Similarly, other confirmation/disconfirmation researchers (e.g., Oliver et al. 1997) indicate that a range of satisfaction exists beyond minimum expectations that may move even beyond desired levels of expectations into a range of service surprise sometimes termed delight. Based on this rethinking of expectations, ZBP offered a reconceptualized model of customer service expectations (see Figure 1). Their revised service quality expectation comparison norm delineates two types of expectations. First, a normative expectation was termed desired expectations (e.g., Spreng and MacKoy 1996; Swan and Trawick 1980), defined as the level of service the customer wanted to be performed. Second, a minimum tolerable expectation (Miller 1977) was defined as the lowest level of performance acceptable to a customer, incorporating the influence of predicted service and situational factors.

Further clarifying their conceptualization, PZB (1994b, p. 112) recommended two different comparison norms for service quality assessment: desired service (the level of service a customer believes can and should be delivered) and adequate (minimum) service (the level of service the customer considers acceptable). Separating these two levels is a zone of tolerance that represents the range of service performance a customer would consider satisfactory. In other words, customer service expectations are characterized by a range of levels (between desired and adequate service), rather than a single point. Even though the zones of tolerance (ZOT) SERVQUAL instrumentation had not been empirically validated, it has subsequently been practically applied in numerous service contexts, such as the assessment of the student service quality of a business school (Caruana et al. 2000) and the service quality of university and research libraries (e.g., Blixrud 2002; Cook et al. 2003).

The previous discussion leads us to the following questions concerning the validity of the ZOT SERVQUAL concept in the IS setting: Is an IS-adapted ZOT SERVQUAL psychometrically sound? More specifically, do the two expectation levels of IS service quality (desired and adequate) and the perceived IS service level possess the same dimensions of SERVQUAL (including common items)? To address these questions, we adapted PZB's (1994a) ZOT SERVQUAL concept to the IS context, whereby the IS-adapted measures were subjected to reliability and validity testing.

Research Methods and Analysis

A preferred method to cross-validate an instrument's dimensionality is to examine the factor structure of one sample within the factor structure of a second sample, commonly referred to as the holdout sample (Chin and Todd 1995). Since the objective of this study requires a refining process to obtain a common set of validated items and factors for all three levels of IS service quality (desired, adequate, and perceived), we followed Chin and Todd's approach to examine the service quality of two different sample groups: a university IS services sample and an industrial IS services sample. The first sample was used to test the factor structure of IS ZOT SERVQUAL using an exploratory factor analysis, and the second sample was used as a holdout sample for a confirmatory factor analysis to cross-validate the derived dimensionality from the first sample.

Using the three-column ZOT format proposed by PZB (1994a) and the IS-adapted items of K&L (1994), the IS ZOT SERVQUAL instrument was pretested through a series of interviews with IS professionals and IS graduate students. Based on the results of pretesting, additional wording adjustments were made in the instruction section; for example, the original wording of adequate service in PZB's scale was changed to minimum service in order to clearly differentiate the desired and the minimally adequate service levels. It should also be noted that while researchers (e.g., K&L 1994; PZB 1991) have questioned the strength of the Tangibles dimension, the Tangibles dimension was


[Figure 1 diagram: Expected Service comprises a Desired Service level above an Adequate Service level; Perceived Service is compared against both, with the gap to Desired Service labeled Perceived Service Superiority and the gap to Adequate Service labeled Perceived Service Adequacy.]

Figure 1. Dual Expectation Comparison Standards: Desired and Adequate IS Service (Adapted from V. Zeithaml, L. L. Berry, and A. Parasuraman, "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science (21:1), 1993, pp. 1-12)

retained in this study given the fact that the original PZB ZOT (1994a) instrument included Tangibles and this dimension has been included in subsequent ZOT operationalizations in other organizational contexts (e.g., Caruana et al. 2000).

After pretesting and refining the instrument, two samples were chosen for the cross-validation: an initial sample from the university setting and a holdout sample from the industry setting. Two U.S. universities formed the initial sample for testing of the IS-adapted ZOT SERVQUAL. Anonymous, self-administered questionnaires containing items from the IS-adapted ZOT SERVQUAL (sample 1) instrument were distributed to approximately 560 upper-level undergraduate and graduate students in several MIS and management science courses at the two universities. Total sample size was 250, with response rates averaging about 45 percent at both organizations. Such a student sample has been used in past research as a general measure of service quality (see Boulding et al. 1993; Ford et al. 1999; Rigotti and Pitt 1992) and in the IS service quality context (K&L 1994; Kettinger et al. 1995), showing high consistency with measures tested in industrial settings in terms of reliability and validity. For example, Jiang, Klein, and Crampton (2000) revalidated the original Kettinger and Lee findings in the industrial context with similar results. In addition, there is ongoing support for ZOT SERVQUAL's continued use in educational institutions (Caruana et al. 2000; Cook et al. 2003). As a holdout sample, another set of data (sample 2) was collected from four large companies in Asia (two banks, one telecommunications company, and one IS consulting firm). Questionnaires were distributed to a total of 500 employees; 188 were returned, for a response rate of 37.6 percent.

In both the university and industry settings, users had access to the full array of IS services (e.g., network access and IDs, application software access on their PCs and shared servers, Web intranet accounts, e-mail accounts, help desk, consulting support, training (both online and tutorial), network dial-in, Internet access, laptop hook-up, online access to records and accounts, etc.). All users had at least 1 year of experience with the provided computing and network services, ensuring a basic level of computer and network access competence. In addition, all users had at least a minimum level of face-to-face interaction


with information service function (ISF) staff at each sample site, whereby they established their network, e-mail, and software user authorizations; many had also taken advantage of help desk and training services. In general, the users from each sample can be described as motivated by either class or work responsibilities to actively make use of the IS resources and to avail themselves of IS support.

Our objective was to determine if there is a validated, common set of factors and items among the three levels of service quality. Therefore, a dual design of statistical approaches (exploratory factor analysis and confirmatory factor analysis) was applied to the two samples, respectively and sequentially. The perceived service scale was selected as the calibration scale for the exploratory factor analysis of sample 1 because, as was discussed previously, it is the one SERVQUAL indicator that has not been the subject of debate concerning its format throughout the long history of criticism concerning SERVQUAL's expectation measurement.

Results

To use the ZOT method, its three IS service quality levels must share the same constructs and the corresponding items. This requires a test to determine whether the dimensions of IS ZOT SERVQUAL are the same for all three levels. This study examined whether the common constructs of perceived service, desired service, and minimum service expectations captured equivalent dimensions with equivalent question items and mapped into a diagnostic method using all three levels of IS service quality.

Given the extent of revision to the IS ZOT SERVQUAL to bring it into the IS context, an exploratory factor analysis³ for items of the original five service quality dimensions on the perceived service level was completed on the sample 1 data. Using commonly accepted factor selection criteria as specified in Table 1, four constructs with 18 items were derived. Three original SERVQUAL constructs emerged from the exploratory factor analysis (tangibles, reliability, and responsiveness). However, two of the original dimensions, empathy and assurance, merged into one dimension. Based on a review of the retained items and the seeming similarity of the constructs when applied in the IS context, the new merged construct was named rapport because the construct items focus on an IS service provider's ability to convey a rapport of knowledgeable, caring, and courteous support. Past researchers using ZOT SERVQUAL in different service contexts have also experienced such a merging of the original five-factor structure (e.g., Caruana et al. 2000).

³ Principal component analysis was applied to the sample 1 data since we wanted to derive the minimum number of factors that account for the maximum portion of the total variance in an exploratory manner.

Table 1. Exploratory Factor Analysis*: Perceived Service Level for Sample 1

Variable   Factor 1   Factor 2   Factor 3   Factor 4   Factor 5   Communality
P1          0.104      0.813      0.037     -0.104     -0.035      0.685
P2          0.283      0.626     -0.052      0.132     -0.028      0.493
P3          0.034      0.759     -0.087      0.098      0.076      0.600
P4          0.008      0.771     -0.001      0.011      0.065      0.598
P5         -0.036      0.745      0.086     -0.068      0.116      0.581
P6          0.038      0.448      0.284      0.232      0.098      0.346   (dropped)
P7          0.044      0.560      0.171      0.322     -0.040      0.461
P8          0.260      0.056      0.092      0.592      0.081      0.437
P9          0.153      0.239      0.066      0.575      0.038      0.417
P10         0.499      0.273      0.106      0.219      0.076      0.388   (dropped)
P11         0.594     -0.080      0.043      0.265      0.137      0.450
P12         0.780      0.099     -0.024     -0.064     -0.008      0.623
P13         0.759      0.165      0.065     -0.190     -0.007      0.644
P14         0.663      0.001     -0.013      0.145      0.070      0.455
P15         0.732     -0.083      0.074      0.168      0.095      0.586
P16         0.680      0.046      0.174      0.102     -0.006      0.505
P17         0.777     -0.017      0.069      0.037      0.037      0.612
P18         0.096      0.040      0.726      0.052      0.003      0.541   (dropped)
P19         0.024      0.202      0.447      0.124      0.321      0.360   (dropped)
P20         0.034      0.045      0.198     -0.064      0.657      0.478
P21         0.423     -0.077     -0.068      0.078      0.525      0.471
P22         0.008      0.297     -0.044      0.122      0.516      0.371
*Principal Components Analysis
Selection criteria: oblique rotation, factor loading ≥ 0.5, no multiple loadings, no single loadings; items not meeting these criteria (shown struck out in the original table) are marked "(dropped)" here.

The derived factor structure and items from the exploratory factor analysis on the perceived scale (refer to Table 1) were then subjected to confirmatory factor analysis (CFA) and reliability testing using the holdout sample for the three different IS service quality levels. Covariance matrices, Mardia's non-normality coefficient, and descriptive statistics for the three sets of service levels of the holdout sample are reported in Appendix A. To overcome the limitation of the maximum likelihood method regarding multivariate non-normality,⁴ the parameters were estimated using the maximum likelihood [ML, Robust] method, which in the first attempt with the holdout sample resulted in good fits of the models for each of the three different levels, as shown in the composite fit indices of Table 2.

⁴ Like most continuous data collected from a questionnaire, our raw data possess a non-normally shaped distribution. Aware of the risk of assuming the data to be multivariate normal, a sandwich parameter covariance estimator (Satorra and Bentler 1990) was implemented to correct potential bias in estimating the standard errors. EQS's maximum likelihood, robust option provides these estimators.

Table 2. CFA Test Results for IS ZOT SERVQUAL Measure on the Holdout Sample

Fit Indices                              Perceived   Desired   Minimum   Recommended Fit Criteria*
Satorra-Bentler Scaled Chi-Square/d.f.     1.947      1.337     1.521     < 3.0
Bollen (IFI)                               0.932      0.973     0.965     > 0.90
LISREL AGFI                                0.796      0.828     0.817     > 0.80
Root Mean Squared Residual (RMSR)          0.120      0.076     0.107     < 1.0
Comparative Fit Index                      0.918      0.943     0.928     > 0.90
Robust CFI                                 0.931      0.972     0.964     > 0.90
*Segars and Grover (1993)

In confirming the validity of the IS ZOT SERVQUAL at the three levels, the guidelines suggested by Anderson and Gerbing (1988) were followed. Significant factor loading coefficients and the satisfactory fits of the three CFA models confirm the convergent validity of the four IS service quality dimensions. Next, formal tests of discriminant validity were performed (Bagozzi and Phillips 1982). The chi-square differences between the all-possible constrained models (each correlation between the four dimensions was subsequently constrained to 1.0) and the final model were tested and showed significant chi-square differences, indicating discriminant validity at all three levels. Reliability tests for the final derived four dimensions with 18 items were conducted with a Cronbach alpha test, resulting in acceptable levels of reliability for all dimensions at the three levels (refer to Table 3). In sum, a total of 18 items loaded onto four dimensions at all three IS ZOT levels, indicating strong support for the construct validity of the measures as well as demonstrating a common structure of four dimensions and items among the three levels of IS service quality. These results provide the statistical legitimacy for use of IS ZOT SERVQUAL in the IS setting. The final version of the items retained is displayed in Table 4.

Implications for Research and Practice

This study introduced and validated the ZOT concept in the IS setting using a dual measure of IS service quality expectations. The findings represent an important step toward addressing past concerns with the original IS SERVQUAL's expectation measure and gap-scoring. As discussed later in this section, the new IS ZOT SERVQUAL instrument has strong practical potential as a diagnostic tool through which managers can quickly visualize their current IS service quality situation and design corrective actions. However, to further establish the instrument's external validity and reliability, additional applications in multiple industrial contexts need to be undertaken.

Future researchers also should attempt to further distinguish the responsiveness dimension of IS ZOT SERVQUAL from the reliability dimension. In this study, two of the original IS SERVQUAL responsiveness items loaded more closely with the reliability factor than with the responsiveness dimension, leaving the derived responsiveness construct with only two items. The authors recognize that such a two-item construct has potential validity problems. Future researchers might consider improving the responsiveness measure around the concept of anticipated preparedness to perform a service, which can be inferred from the two retained responsiveness items (i.e., willingness to help and readiness to respond).

Research should explore the application of this instrument and its diagnostic strength. For example, researchers might examine the meaning of exceeding desired service levels. Some literature suggests that this offers the service provider benefits by bringing customers to a level of surprised satisfaction sometimes called customer delight (Oliver et al. 1997), while a different stream of literature reminds us that there is a cost of quality, and one must be mindful to make sure that IS service quality has a bottom-line impact on the firm. In this regard, future research should investigate the relationships of IS ZOT service quality to overall ISF performance and, ultimately, business performance.

The survey length of the IS ZOT SERVQUAL adds some complexity when compared to a single-point (perception-only) measure. Researchers need to determine the relative diagnostic value of measuring IS customer expectations within a zones of tolerance scheme over the use of a perceived-only (SERVPERF) measure. In cases where brevity, cost, or predictive validity concerns demand, the seemingly less clinical perception-only (SERVPERF) measure might be a better option. Learning how and when managers use the IS ZOT as a diagnostic tool to shape their service delivery strategies needs to be investigated, both qualitatively and quantitatively, in multiple industry contexts before the relative value of this measure is fully understood.

In terms of practice, employing the IS ZOT SERVQUAL in a periodic IS service quality management program is important for two reasons.
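The discriminant validity check described above compares each constrained CFA model (one inter-factor correlation fixed at 1.0) against the unconstrained model through a chi-square difference test. A minimal sketch of that comparison follows; the fit statistics shown are hypothetical illustrations, not values from this study.

```python
from scipy.stats import chi2

def chi_square_difference(chi_constrained, df_constrained, chi_free, df_free):
    """Chi-square difference test for nested CFA models.

    The constrained model fixes one inter-factor correlation to 1.0, so it
    has one more degree of freedom. A significant difference (small p) means
    the constraint worsens fit, supporting discriminant validity of the pair.
    """
    delta_chi = chi_constrained - chi_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chi, delta_df)  # upper-tail probability
    return delta_chi, delta_df, p_value

# Hypothetical fit statistics for one pair of dimensions
delta_chi, delta_df, p = chi_square_difference(231.4, 130, 220.1, 129)
print(delta_chi, delta_df, p)  # a p below .05 supports discriminant validity
```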

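The reliability values reported in Table 3 are Cronbach alpha coefficients computed per dimension and per service level. As a sketch of the computation over an (n respondents x k items) score matrix; the ratings below are illustrative, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Illustrative 1-9 ratings of the two retained responsiveness items
ratings = np.array([[7, 8], [5, 6], [8, 8], [4, 5], [6, 7], [7, 7]])
print(round(cronbach_alpha(ratings), 2))  # prints 0.96
```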

Table 3. Revised IS ZOT SERVQUAL Constructs and Reliabilities of the Holdout Sample

                                                Reliabilities*
Constructs       Original Items Retained**      Perceived   Desired   Minimum
Reliability      1, 2, 3, 4, 5, 7                 0.88       0.84      0.86
Responsiveness   8, 9                             0.78       0.74      0.73
Rapport          11, 12, 13, 14, 15, 16, 17       0.81       0.92      0.84
Tangibles        20, 21, 22                       0.86       0.81      0.85
*Cronbach alpha values for the Perceived, Desired, and Minimum service levels.
**Refer to Table 4 for retained item descriptions.

Table 4. Operationalization of the IS ZOT SERVQUAL Constructs Before and After Factor Analysis

IS ZOT SERVQUAL's anchor questions and format for the item descriptions (below):

Minimum Service Level: the expected minimum level of service performance you consider adequate.
Desired Service Level: the level of service performance you desire.

Each item is rated on three nine-point (Low 1 to 9 High) scales:
My Minimum Service Level is: Low 1 2 3 4 5 6 7 8 9 High
My Desired Level of Service is: Low 1 2 3 4 5 6 7 8 9 High
My Perception of the [Organization's Computer Service Unit's Name] Performance is: Low 1 2 3 4 5 6 7 8 9 High

When it comes to...
1. Providing service as promised

Item Descriptions | Original Construct | Final Construct
1. Providing services as promised | Reliability | Reliability
2. Dependability in handling customers' service problems | Reliability | Reliability
3. Performing service right the first time | Reliability | Reliability
4. Providing services at the promised time | Reliability | Reliability
5. Maintaining reliable technology and systems | Reliability | Reliability
6. Keeping customers informed about when service will be performed | Responsiveness | Dropped
7. Prompt service to customers | Responsiveness | Reliability
8. Willingness to help customers | Responsiveness | Responsiveness
9. Readiness to respond to customers' requests | Responsiveness | Responsiveness
10. IS employees who instill confidence in customers | Assurance | Dropped
11. Making customers feel safe in computer transactions | Assurance | Rapport
12. IS employees who are consistently courteous | Assurance | Rapport
13. IS employees who have the knowledge to answer customers' questions | Assurance | Rapport
14. Giving customers individual attention | Empathy | Rapport
15. IS employees who deal with customers in a caring fashion | Empathy | Rapport
16. Having the customers' best interests at heart | Empathy | Rapport
17. IS employees who understand the needs of customers | Empathy | Rapport
18. Convenient business hours | Empathy | Dropped
19. Up-to-date technology | Tangibles | Dropped
20. Visually appealing facilities | Tangibles | Tangibles
21. IS employees who appear professional | Tangibles | Tangibles
22. Useful support materials (such as documentation, training, videos, etc.) | Tangibles | Tangibles
*Items dropped after the factor analyses are marked "Dropped" in the Final Construct column.
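The three nine-point scales in Table 4 support a simple per-item or per-dimension diagnostic, following the adequacy and superiority comparisons of Figure 1: the perceived score is compared against both the minimum (adequate) and the desired levels. A minimal sketch, with function and label names of our own choosing (they are not part of the instrument):

```python
def zot_position(perceived, desired, minimum):
    """Locate a perceived service score relative to the zone of tolerance.

    adequacy    = perceived - minimum  (negative: below the zone)
    superiority = perceived - desired  (positive: above the zone)
    """
    adequacy = perceived - minimum
    superiority = perceived - desired
    if superiority > 0:
        zone = "above zone (service delight)"
    elif adequacy < 0:
        zone = "below zone (unacceptable)"
    else:
        zone = "within zone of tolerance"
    return adequacy, superiority, zone

# e.g., a dimension rated: minimum 4.0, desired 7.5, perceived 6.2
print(zot_position(6.2, 7.5, 4.0))  # falls within the zone of tolerance
```

Tracking these two scores per dimension over repeated administrations is one way to operationalize the band movements discussed below.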


Kettinger & Lee/Alternative Scales for IS SERVQUAL

First, as a diagnostic tool, it has the potential to measure changes in IS service quality relative to customers' expectations over time. Second, it can be a basis for corrective actions leading to strategies to manage minimum service level expectations, to improve perceived service levels, or to allocate IS resources to specific IS customer segments based on an identified need.

ZOT customer service expectations are characterized by tolerance bands. These tolerance bands, representing the difference between desired service and the level of service considered minimally adequate, can differ in size. Over time these bandwidths may expand, contract, or move up or down as expectations change. Variations can also exist across different customers' tolerance zones. Some customers may have a small zone of tolerance, which may require an IS provider to deliver a consistent level of service within a narrow band, whereas other customers may tolerate a greater range of service quality. The potential for IS managers to learn to identify and manage expectation tolerance bands to improve IS service strategies offers great promise over the single SERVPERF measure.

To illustrate this potential, we examine the actual ZOT results from our sample 1 (two universities). As Figure 2 demonstrates, if University 1 relied solely on the perceived service measure, it might mistakenly identify the rapport dimension as a more problematic area than the reliability area, since the single perception-only indicator shows a lower score for rapport than for reliability. However, by incorporating the ZOT band, one can visually pinpoint the area of deficiency: the reliability dimension, where the perceived performance pointer is furthest outside the ZOT band.

Three criteria help provide the basis for diagnosis and judgments concerning IS service quality deficiencies and service quality management:

(1) Is the perceived service quality pointer outside and below the ZOT? If so, how great is the distance from the ZOT (adequate service level) to the perceived service quality pointer?

(2) What is the relative position of the perceived service pointers within each ZOT band? Are the perceived service pointers closer to the desired expectation level than to the minimum level?

(3) If all the perceived service pointers are within their respective zones, what are the comparative size and relative positioning of the ZOT bands?

These criteria can be examined to determine whether expectation management could extend the band and possibly lower expectation levels. For example, in University 2, the responsiveness dimension would be picked as the second most troubled IS service quality dimension, given the relative positioning of the pointers within the ZOT.

Further diagnosis of this targeted responsiveness dimension for University 2 might be obtained by comparing different customer user segmentations (refer to Figure 3). In this illustration, the responsiveness ratings of different customer groups are compared. The Grad Student Class 1 customer segment shows the most serious deficiency despite a relatively large ZOT band. It is possible that IS service delivery faults have occurred with this group, placing their current perceptions of the actual responsiveness of the information systems function (ISF) far below minimum service quality levels. This group might be prime for a special service recovery activity by the IS provider to win back confidence that the ISF is responsive to this group's IS needs. Looking at Grad Class 2, it is observed that while the band is smaller and the single pointer higher, these graduate students are also unsatisfied with the current level of service, given the distance between the perceived service pointer and the ZOT.

Since the perceived service level pointers for both undergraduate student groups are within their respective ZOT bands, this might point the ISF to offer more specialized services targeted to graduate students. Such a segmentation services strategy could be applied to departmental, divisional, or even company segments. Benchmarking comparisons with external companies could also be made to establish relative service quality levels.
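As a rough sketch, the three diagnostic criteria can be operationalized in code. The function and variable names below are our own illustration, not part of the instrument, and the sample scores are hypothetical values on the instrument's 1-to-9 scale, not the study's data:

```python
def diagnose_zot(desired, minimum, perceived):
    """Classify each dimension's perceived service pointer against its zone of tolerance."""
    results = {}
    for dim in desired:
        band = desired[dim] - minimum[dim]        # ZOT width for this dimension
        if perceived[dim] < minimum[dim]:
            status = "below ZOT"
            gap = minimum[dim] - perceived[dim]   # criterion 1: distance below the adequate level
        elif perceived[dim] > desired[dim]:
            status, gap = "above ZOT", 0.0
        else:
            status = "within ZOT"
            # criterion 2: relative position inside the band (0 = minimum, 1 = desired)
            gap = (perceived[dim] - minimum[dim]) / band if band else 0.0
        results[dim] = (status, round(gap, 2), round(band, 2))
    return results

# Hypothetical dimension scores on the 1-9 scale (illustrative only)
desired   = {"reliability": 7.6, "responsiveness": 7.2}
minimum   = {"reliability": 5.7, "responsiveness": 5.3}
perceived = {"reliability": 5.1, "responsiveness": 5.6}
print(diagnose_zot(desired, minimum, perceived))
# → {'reliability': ('below ZOT', 0.6, 1.9), 'responsiveness': ('within ZOT', 0.16, 1.9)}
```

For a pointer below the band, the reported gap is the distance below the adequate (minimum) service level; for a pointer inside the band, it is the pointer's relative position between the minimum (0) and desired (1) levels; the band widths support the comparisons in criterion 3.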


[Figure 2. Assessments of the Service Quality of Two University IS Departments Using the Zones of Tolerance Approach. For each university, boxes for the reliability, responsiveness, rapport, and tangibles dimensions represent the zone of tolerance between desired service quality levels and minimum acceptable levels on a 1-9 scale; markers show the perceived service pointers.]

[Figure 3. Illustration of Zone of Tolerance Dimensional Assessment Comparing Customer Service Segments. The responsiveness dimension for University 2 across four segments (Undergrad 1, Undergrad 2, Grad Class 1, Grad Class 2); boxes represent the zone of tolerance of each customer segment for the responsiveness dimension, with markers showing the perceived service pointers on a 1-9 scale.]


For example, going back to Figure 2 and comparing these two universities, University 2 shows relatively better IS service quality than University 1, given its larger ZOT bands and with three of the four perceived service pointers inside the ZOT dimensional bands.

A longitudinal study should be carried out to better assess the efficacy of managerial interventions (as discussed above) to manipulate the ZOT's minimum expectation levels. To gain this insight, we need to better understand the antecedents of expectation levels and their possible managerial implications. A customer's level of minimum service is influenced by at least four factors (Berry and Parasuraman 1997; ZBP 1993). First, transitory service intensifiers are temporary, usually short-term, individual factors that lead customers to a heightened sensitivity to service. Second, perceived service alternatives are customers' perceptions of the degree to which they can obtain better service through providers other than the focal IS service provider. A third factor is the customer's self-perceived service role, defined as the customer's perceptions of the degree to which they themselves influence the level of service they receive. Fourth, levels of minimum service adequacy are influenced by situational factors, defined as service-performance contingencies that customers perceive to be beyond the control of the service provider. For IS researchers, these four antecedents of minimum expectations provide a valuable starting point for future study. IS managers should note that these four factors share a common implication for expectation management strategies and customer communication. Specifically, IS providers must convince customers of the benefits of using IS services, inform them of their roles and limits in using IS services, and clarify the lines of responsibility in the case of IS service problems.

Finally, this article focuses on the service quality levels of internal IS service providers, who seemingly would desire IS customers with large ZOT bands. From an external IS service provider's point of view, such as that of an IS outsourcing vendor, a different implication might emerge. Namely, if external customers have relatively large zones of tolerance, establishing customer loyalty through distinguished service delivery that substantially exceeds minimum levels may be more difficult. This raises the question: Would superior IS service vendors be better off attempting to narrow IS customers' tolerance zones by striving to move minimum service levels up, thereby reducing the competitive appeal of mediocre IS providers? As this question suggests, the application of the IS ZOT SERVQUAL and its associated expectation management schemes is flexible to different service contexts and begins to offer ISF providers a more exact tool to shape service quality strategies.

Acknowledgments

The authors would like to acknowledge the excellent direction of the senior editor and the important insights and contributions of the reviewers.

References

Anderson, J. C., and Gerbing, D. W. "Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach," Psychological Bulletin (103:3), 1988, pp. 411-423.
Babakus, E., and Boller, G. W. "An Empirical Assessment of the SERVQUAL Scale," Journal of Business Research (24:2), 1992, pp. 253-268.
Bagozzi, R. P., and Phillips, L. W. "Representing and Testing Organizational Theories: A Holistic Construal," Administrative Science Quarterly (27:3), 1982, pp. 459-489.
Berry, L., and Parasuraman, A. "Listening to the Customer: The Concept of a Service-Quality Information System," Sloan Management Review (38:3), 1997, pp. 65-78.
Berthon, P., Pitt, L., Ewing, M., and Carr, C. L. "Potential Research Space in MIS: A Framework for Envisioning and Evaluating Research Replication, Extension, and Generation," Information Systems Research (13:4), 2002, pp. 416-427.
Blixrud, J. C. "Evaluating Library Service Quality: Use of LibQUAL+™," IATUL Proceedings (New Series), Volume 12, Partnerships, Consortia and 21st Century Library Service, Kansas City, MO, June 2-6, 2002.


Boulding, W., Kalra, A., Staelin, R., and Zeithaml, V. A. "A Dynamic Process Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research (30:1), 1993, pp. 7-27.
Brown, T. J., Churchill, G. A., Jr., and Peter, J. P. "Research Note: Improving the Measurement of Service Quality," Journal of Retailing (69:1), 1993, pp. 127-139.
Caruana, A., Ewing, M. T., and Ramaseshan, B. "Assessment of the Three-Column Format SERVQUAL: An Experimental Approach," Journal of Business Research (49:1), 2000, pp. 57-65.
Chin, W. W., and Todd, P. A. "On the Use, Usefulness, and Ease of Use of Structural Equation Modeling in MIS Research: A Note of Caution," MIS Quarterly (19:2), 1995, pp. 237-246.
Churchill, G. A., Jr., and Surprenant, C. "An Investigation Into the Determinants of Customer Satisfaction," Journal of Marketing Research (19), November 1982, pp. 491-504.
Cook, C., Heath, F., and Thompson, B. "Zones of Tolerance in the Perceptions of Library Service Quality: A LibQUAL+ Study," Libraries and the Academy (3:1), 2003, pp. 113-123.
Cronin, J. J., and Taylor, S. A. "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing (56:3), 1992, pp. 55-68.
Cronin, J. J., and Taylor, S. A. "SERVPERF Versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurements of Service Quality," Journal of Marketing (58:1), 1994, pp. 125-131.
Ford, J. B., Joseph, M., and Joseph, B. "Importance-Performance Analysis as a Strategic Tool for Service Marketers: The Case of Service Quality Perceptions of Business Students in New Zealand and the USA," Journal of Services Marketing (13:2), 1999, pp. 171-186.
Jiang, J. J., Klein, G., and Carr, C. "Measuring Information System Service Quality: SERVQUAL from the Other Side," MIS Quarterly (26:2), 2002, pp. 145-166.
Jiang, J. J., Klein, G., and Crampton, S. "A Note on SERVQUAL Reliability and Validity in Information System Service Quality Measurement," Decision Sciences (31:3), 2000, pp. 725-745.
Kettinger, W. J., and Lee, C. C. "Perceived Service Quality and User Satisfaction with the Information Services Function," Decision Sciences (25:5), 1994, pp. 737-766.
Kettinger, W. J., and Lee, C. C. "Pragmatic Perspectives on the Measurement of Information Systems Service Quality," MIS Quarterly (21:2), 1997, pp. 223-240.
Kettinger, W. J., Lee, C. C., and Lee, S. "Global Measures of Information Service Quality: A Cross-National Study," Decision Sciences (26:5), 1995, pp. 569-588.
Kohlmeyer, J. M., and Blanton, J. E. "Improving IS Service Quality," Journal of Information Technology Theory and Application (2:1), 2000, pp. 1-10 (available online at www.jitta.org).
Lee, C. C., and Kettinger, W. J. "A Test of the Psychometric Properties of the IS Adapted SERVQUAL Measure," paper presented at the 1996 National INFORMS Meeting, Atlanta, Georgia, November 4, 1996.
Miller, J. A. "Studying Satisfaction, Modifying Models, Eliciting Expectations, Posing Problems, and Making Meaningful Measurements," in Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, K. Hunt (Ed.), Report No. 77-103, Marketing Science Institute, Cambridge, MA, 1977, pp. 72-91.
Oliver, R. L. "A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions," Journal of Marketing Research (17), November 1980, pp. 460-469.
Oliver, R. L., Rust, R. T., and Varki, S. "Customer Delight: Foundations, Findings, and Managerial Insight," Journal of Retailing (73:3), 1997, pp. 311-336.
Parasuraman, A., Berry, L. L., and Zeithaml, V. A. "Research Note: More on Improving Service Quality Measurement," Journal of Retailing (69:1), 1993, pp. 140-147.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Alternative Scales for Measuring Service Quality: A Comparative Assessment Based on Psychometric and Diagnostic Criteria," Journal of Retailing (70:3), 1994a, pp. 201-229.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing (49), Fall 1985, pp. 41-50.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing (67:4), 1991, pp. 420-450.


Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research," Journal of Marketing (58), January 1994b, pp. 111-124.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing (64:1), 1988, pp. 12-40.
Peter, J. P., Churchill, G. A., Jr., and Brown, T. J. "Caution in the Use of Difference Scores in Consumer Research," Journal of Consumer Research (19), March 1993, pp. 655-662.
Pitt, L. F., Watson, R. T., and Kavan, C. B. "Measuring Information Systems Service Quality: Concerns for a Complete Canvas," MIS Quarterly (21:2), 1997, pp. 209-221.
Pitt, L. F., Watson, R. T., and Kavan, C. B. "Service Quality: A Measure of Information Systems Effectiveness," MIS Quarterly (19:2), 1995, pp. 173-187.
Rigotti, S., and Pitt, L. F. "SERVQUAL as a Measuring Instrument for Service Provider Gaps in Business Schools," Management Research News (15:3), 1992, pp. 9-17.
Satorra, A., and Bentler, P. M. "Model Conditions for Asymptotic Robustness in the Analysis of Linear Relations," Computational Statistics and Data Analysis (10), 1990, pp. 235-249.
Segars, A., and Grover, V. "Re-examining Perceived Ease of Use and Usefulness: A Confirmatory Factor Analysis," MIS Quarterly (17:4), 1993, pp. 517-527.
Spreng, R. A., and Mackoy, R. D. "An Empirical Examination of a Model of Perceived Service Quality and Satisfaction," Journal of Retailing (72:2), 1996, pp. 201-214.
Swan, J., and Trawick, F. "Satisfaction Related to Predictive vs. Desired Expectation," in Refining Concepts and Measures of Consumer Satisfaction and Complaining Behavior, H. K. Hunt and R. L. Day (Eds.), Indiana University Press, Bloomington, IN, 1980, pp. 7-12.
Teas, R. K. "Expectations as a Comparison Standard in Measuring Service Quality: An Assessment of a Reassessment," Journal of Marketing (58), January 1994, pp. 132-139.
Teas, R. K. "Expectations, Performance Evaluation and Consumers' Perception of Quality," Journal of Marketing (57:4), 1993, pp. 18-34.
Tse, D. K., and Wilton, P. C. "Models of Consumer Satisfaction Formation: An Extension," Journal of Marketing Research (25), May 1988, pp. 204-212.
Van Dyke, T. P., Kappelman, L. A., and Prybutok, V. R. "Cautions on the Use of SERVQUAL Measures to Assess the Quality of Information Systems Services," Decision Sciences (30:3), 1999, pp. 877-891.
Van Dyke, T. P., Kappelman, L. A., and Prybutok, V. R. "Measuring Information Systems Service Quality: Concerns on the Use of the SERVQUAL Questionnaire," MIS Quarterly (21:2), 1997, pp. 195-208.
Watson, R. T., Pitt, L. F., and Kavan, C. B. "Measuring Information Systems Service Quality: Lessons from Two Longitudinal Case Studies," MIS Quarterly (22:1), 1998, pp. 61-79.
Zeithaml, V., Berry, L. L., and Parasuraman, A. "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science (21:1), 1993, pp. 1-12.

About the Authors

William J. Kettinger is an associate professor of Information Systems at the Moore School of Business, University of South Carolina. He teaches and researches in the areas of strategic information management, business process change, and IS performance measurement. He consults domestically and abroad on these topics and has published numerous books and research articles, including five previous articles in MIS Quarterly.

Choong C. Lee is Associate Dean of the Graduate School of Information at Yonsei University, Seoul, Korea. He has a Ph.D. in MIS from the University of South Carolina and previously served as an associate professor at the Perdue School of Business, Salisbury University. Actively involved with research and consulting projects in IS performance measurement, he also works as a Senior Researcher/Consultant for enterpriseIQ in Lausanne, Switzerland. His past research has been published in MIS Quarterly, Decision Sciences, Journal of MIS, Communications of the ACM, and Information and Management.



Appendix A
Covariance Matrix, Univariate, and Multivariate Statistics of the Holdout Sample

Table A1. Perceived Service Level (n = 188)


P1 P2 P3 P4 P5 P7 P8 P9 P11 P12 P13 P14 P15 P16 P17 P20 P21 P22
P1 1.609
P2 1.121 1.93
P3 0.979 1.251 1.864
P4 0.936 1.082 1.185 2.171
P5 0.945 1.026 1.090 1.154 1.899
P7 0.793 1.112 1.059 1.144 0.956 2.065
P8 0.893 1.145 1.186 1.283 0.979 1.526 2.803
P9 0.637 0.840 0.953 1.049 0.903 1.147 1.539 1.951
P11 0.720 0.946 0.988 0.891 0.982 1.206 1.546 1.417 2.604
P12 0.819 0.891 0.957 1.017 0.996 1.002 1.354 1.058 1.720 2.295
P13 0.893 0.997 1.162 1.163 1.063 1.174 1.204 0.918 1.374 1.456 1.879
P14 0.517 0.729 0.844 1.006 0.623 1.073 1.803 1.174 1.338 1.172 1.048 2.377
P15 0.699 0.877 0.852 0.818 0.668 0.916 1.390 0.943 1.496 1.346 1.146 1.360 1.988
P16 0.585 0.746 0.949 0.968 0.798 1.009 1.490 0.953 1.484 1.385 1.239 1.400 1.422 2.049
P17 0.623 0.859 0.889 0.914 0.814 1.047 1.496 1.024 1.344 1.418 1.228 1.405 1.497 1.503 2.168
P20 0.854 0.952 0.952 1.027 0.833 1.066 1.509 1.149 1.261 1.180 1.133 1.431 1.105 1.187 1.448 2.098
P21 0.735 0.815 0.845 0.965 0.756 1.009 1.454 1.078 1.335 1.212 1.173 1.313 1.015 1.106 1.187 1.448 2.098
P22 0.778 1.035 0.950 0.979 0.808 1.036 1.417 0.950 1.233 1.206 1.049 1.245 1.096 1.168 1.265 1.353 1.693 2.590
Item means: 5.5699 5.6183 5.6398 5.7634 5.7849 5.4462 4.8280 5.2742 5.5645 5.7312 5.8656 4.9570 5.4409 5.3656 5.3656 5.2258 5.1237 5.1613
Skewness: -0.0776 0.668 -0.3016 -0.5330 -0.320 -0.1441 0.2247 0.0879 -0.2403 -0.1559 -0.0344 0.1520 0.0339 0.0722 0.3207 0.1967 0.1581 0.2280
Kurtosis: -0.2422 -0.1167 -0.140 0.1727 -0.6730 -0.3227 -0.3919 -0.2505 -0.5108 -0.3734 -0.2413 0.0107 -0.0679 -0.3093 -0.2482 -0.2052 0.3919 -0.0869
Standard deviation: 1.2553 1.3831 1.3653 1.4734 1.3782 1.4369 1.6741 1.3969 1.6137 1.5149 1.3708 1.5416 1.4101 1.4316 1.4725 1.4529 1.4485 1.6094

Multivariate normality index for all 22 items: Mardia's coefficient = 67.04; normalized estimate = 17.04
Table A2. Desired Service Level (n = 188)
D1 D2 D3 D4 D5 D7 D8 D9 D11 D12 D13 D14 D15 D16 D17 D20 D21 D22
D1 1.359
D2 0.895 1.534
D3 0.742 0.901 1.513
D4 0.860 0.853 0.958 1.644
D5 0.727 0.976 0.955 1.064 1.654
D7 0.707 0.827 0.799 1.011 0.953 1.513
D8 0.766 0.980 0.862 0.973 1.011 0.987 1.899
D9 0.640 0.791 0.714 0.849 0.989 0.771 0.952 1.335
D11 0.686 0.801 0.870 0.786 0.811 0.873 0.963 0.738 1.574
D12 0.828 0.914 0.781 0.970 0.932 0.878 1.030 0.859 1.082 1.702
D13 0.669 0.584 0.839 0.911 0.830 0.823 0.797 0.625 0.860 0.921 1.368
D14 0.699 0.821 0.862 0.769 0.790 0.876 1.169 0.853 1.074 1.080 0.803 2.013
D15 0.739 0.806 0.943 0.992 0.848 0.937 0.937 0.876 1.084 1.339 0.947 1.304 1.955
D16 0.849 0.987 0.91 0.831 0.949 0.853 1.105 0.955 1.129 1.157 0.772 1.164 1.023 1.743
D17 0.888 1.033 0.937 0.948 1.100 0.953 1.073 1.009 1.113 1.376 0.903 1.268 1.372 1.358 2.056
D20 0.749 0.829 0.852 0.688 0.771 0.760 0.866 0.792 0.950 0.994 0.721 1.059 1.043 1.108 1.168 1.683
D21 0.632 0.616 0.760 0.652 0.668 0.538 0.819 0.647 0.947 0.896 0.797 1.009 1.070 0.976 1.089 1.037 1.899
D22 0.737 0.723 0.613 0.648 0.680 0.690 0.849 0.638 0.829 0.922 0.830 1.102 0.922 0.977 1.116 1.091 1.127 2.029
Item means: 7.6596 7.5213 7.6064 7.6383 7.7500 7.7074 7.1915 7.2766 7.4415 7.6223 7.5798 6.7606 7.3298 7.3723 7.3138 7.117 7.1915 7.0904
Skewness: -0.8557 -0.8783 -0.6528 -0.8282 -0.9766 -0.7607 -0.6789 -0.5114 -0.7630 -1.0590 -0.6253 -0.5303 -0.7070 -0.8896 -0.9981 -0.8271 -0.4945 -0.7050
Kurtosis: 0.1891 0.1439 -0.2654 -0.0854 0.2242 -0.2391 -0.0017 -0.3235 0.1937 0.9565 -0.4185 -0.0371 0.1186 0.3728 0.7528 0.9417 -0.5125 0.4884
Standard deviation: 1.1659 1.2387 1.2299 1.2821 1.2860 1.2300 1.3780 1.1544 1.2346 1.3044 1.1696 1.4184 1.3982 1.3202 1.4339 1.2972 1.3780 1.4245

Multivariate normality index for all 22 items: Mardia's coefficient = 124.61; normalized estimate = 31.83
Table A3. Minimum Service Level (n = 188)
M1 M2 M3 M4 M5 M7 M8 M9 M11 M12 M13 M14 M15 M16 M17 M20 M21 M22
M1 1.372
M2 0.789 1.616
M3 0.866 0.875 1.781
M4 0.770 1.046 1.099 1.896
M5 0.708 0.890 0.980 0.913 1.850
M7 0.643 0.822 0.936 1.023 0.801 2.074
M8 0.535 0.709 0.926 0.921 0.821 1.118 2.202
M9 0.545 0.667 0.947 0.887 0.946 1.044 1.105 1.718
M11 0.602 0.638 0.956 0.793 0.846 1.128 1.013 1.051 1.997
M12 0.680 0.738 0.740 0.794 0.902 0.884 0.993 0.863 1.397 1.905
M13 0.674 0.673 1.015 0.849 0.885 1.043 1.058 0.919 1.318 1.134 1.725
M14 0.442 0.563 0.929 0.566 0.802 0.865 1.378 1.101 1.186 1.016 1.136 2.176
M15 0.848 0.550 0.830 0.787 0.664 0.930 1.041 0.934 1.325 1.271 1.138 1.151 1.851
M16 0.601 0.754 0.788 0.830 0.739 0.966 1.148 0.962 1.171 1.198 1.076 1.078 1.119 1.816
M17 0.656 0.665 0.882 0.819 0.867 1.054 1.231 1.081 1.516 1.500 1.187 1.339 1.473 1.455 2.314
M20 0.550 0.673 0.793 0.678 0.624 0.805 1.045 0.886 1.233 1.072 0.970 1.163 1.030 1.021 1.136 1.881
M21 0.649 0.580 0.901 0.796 0.687 0.710 1.041 0.865 1.133 0.939 1.069 1.201 1.125 0.963 1.059 1.140 1.944
M22 0.720 0.727 1.079 0.810 0.955 0.976 1.092 1.063 1.321 1.065 1.253 1.512 1.147 1.077 1.284 1.325 1.514 2.239
Item means: 5.6862 5.7394 5.6436 5.8298 5.9894 5.8511 5.2553 5.4043 5.6383 5.7979 5.7128 5.0244 5.5691 5.6220 5.5372 5.3032 5.2766 5.2660
Skewness: 0.6770 0.2000 -0.0053 -0.1479 0.0065 -0.1689 0.1469 -0.0192 0.0648 0.2438 -0.0430 -0.3867 0.1478 0.2679 -0.1943 0.0038 0.1034 0.0198
Kurtosis: 0.0953 -0.2839 -0.1932 -0.0610 -0.4778 0.2850 -0.1758 0.1200 0.0836 -0.2443 -0.1038 0.0503 -0.0421 -0.2193 0.0200 0.0504 0.1341 0.2688
Standard deviation: 1.1711 1.2713 1.3347 1.3770 1.3602 1.4401 1.4839 1.3107 1.4131 1.2804 1.3132 1.4750 1.3602 1.3478 1.5212 1.3714 1.3944 1.4963

Multivariate normality index for all 22 items: Mardia's coefficient = 122.03; normalized estimate = 31.18
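The univariate statistics reported with each table can be reproduced from raw item scores using the standard moment-based formulas sketched below. This is our own illustration; the paper does not state which small-sample corrections its analysis software applied, so results may differ slightly in the later decimals. Mardia's coefficient, reported beneath each table, is the multivariate analogue of kurtosis used to flag departures from multivariate normality.

```python
import math

def univariate_stats(x):
    """Mean, moment-based skewness, excess kurtosis, and sample standard deviation."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n   # third central moment
    m4 = sum((v - mean) ** 4 for v in x) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0                  # excess kurtosis (0 for a normal distribution)
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))  # n-1 denominator
    return mean, skew, kurt, sd
```

Applied column by column to the 188 respondents' item scores, these formulas yield the item means, skewness, kurtosis, and standard deviation rows of Tables A1 through A3.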
