

Customer satisfaction and service quality measurement in Indian call centres

Anand Kumar Jaiswal
Indian Institute of Management, Vastrapur, India

Abstract
Purpose – The purpose of this research is to examine customer satisfaction and service quality
measurement practices followed in call centres.
Design/methodology/approach – The study uses qualitative methodology involving in-depth
interviews. The respondents were senior managers belonging to quality or operation divisions in four
large call centres in India.
Findings – It is found that service quality management in call centres disregards customers.
The study suggests that call centre managers overly depend on operational measures. Customer
orientation in assessing service performance is either low or absent in most call centres.
Research limitations/implications – Since the study has used qualitative methodology,
observations and findings need to be validated with empirical data.
Practical implications – The paper suggests that call centres need to develop systematic and
comprehensive measurement of perceived service quality in order to provide superior call centre
experience to their customers.
Originality/value – The paper is the first systematic study that examines customer satisfaction and
service quality measurement practices in call centres in India, a country which has emerged as a
leading player in the global business process outsourcing industry.
Keywords Call centres, Customer satisfaction, Customer services quality, India
Paper type Research paper

Companies in diverse businesses commit a great deal of time and resources to customer satisfaction. Delivering superior service and ensuring higher customer satisfaction have become strategic necessities for companies to survive in a competitive business environment (Reichheld and Sasser, 1990). Realizing the negative ramifications caused by dissatisfied customers, companies are increasingly making senior management accountable for ensuring a high degree of customer satisfaction (Szymanski and Henard, 2001).
Customer call centres have emerged as an important tool for providing higher customer satisfaction (Anton, 1997). In call centres, human agents and/or automatic voice response machines handle computer-assisted telephonic communications with customers (Moon et al., 2004). Companies use call centres to establish direct communication with their customers. The primary objective of call centre operations is customer care and the achievement of high levels of customer satisfaction.
Call centres are increasingly playing a crucial role in customer relationship management. Most business organizations see call centre services as a potentially effective way of keeping customers happy and satisfied, and gaining competitive advantage. However, it is widely argued that in reality call centres have failed to realize their actual potential in helping organizations achieve the goal of providing high levels of customer satisfaction. Several studies provide ample evidence of severe customer dissatisfaction with call centre services. A study conducted by the Citizens’ Advice Bureaux found that 97 per cent of customers cringed at the thought of using a call centre number, 90 per cent of them had complaints, and 40 per cent were totally dissatisfied (The Times of India, 2004). Customers are less satisfied with call centre services compared with office-based in-person services (Bennington et al., 2000).
The fact that most call centres have failed to contribute effectively towards the aim of achieving customer satisfaction indicates that there is a significant gap in our understanding of just what makes a satisfied customer in call centre operations. Is there a gap between managers’ perceptions of the service quality offered and customers’ perceptions of the service quality received? Do the criteria used by managers and customers to assess call centre service quality vary? Are call centres managed on the basis of naïve assumptions about service quality? This paper is aimed at addressing these questions. Conducted in the context of Indian call centres, it investigates the current practices for measuring customer satisfaction and service quality.

Call centre industry in India


The call centre industry has emerged as one of the fastest growing sectors in India.
According to a NASSCOM (2006) study, the IT enabled services – business process
outsourcing (BPO) industry earned a revenue of US$ 5.2 billion in 2005. Call centres are
most prominent among BPO firms. Different kinds of back-office work such as handling
customer enquiries related to credit card transactions, reconciling accounts, and
transcribing medical prescriptions are undertaken by BPO firms. Indian call centres
provide both inbound and outbound services. Generally, in inbound services, calls
originate at the customer’s end whereas in outbound services call centres initiate
contacts with customers for specific purposes. Inbound call services involve handling
customer calls for receiving orders, making reservations, resolving customer complaints,
and providing after sales service. Outbound services involve direct selling activities,
conducting marketing research surveys, and managing public relations. The Indian call
centre industry is highly competitive. A shortage of workforce with the required skills and high employee attrition are common features. Amidst cutthroat competition, offering superior service quality and achieving high levels of customer satisfaction have become strategic necessities for the survival and growth of call centre organizations.

Customer satisfaction and service quality: literature overview


Extensive literature exists on customer satisfaction and service quality. Customer
satisfaction and service quality have been defined by marketing researchers in
different ways. Oliver (1997, p. 28) defined satisfaction as “the consumer’s fulfillment
response, the degree to which the level of fulfillment is pleasant or unpleasant”.
Zeithaml and Bitner (2000, p. 75) defined customer satisfaction as the “customers’
evaluation of a product or service in terms of whether that product or service has met
their needs and expectations”. Parasuraman et al. (1988) conceptualized customer
evaluations of overall service quality as the gap between expectations and perceptions
of service performance levels. They developed the SERVQUAL instrument for
measuring service quality offered by service firms. SERVQUAL has five dimensions:
reliability, responsiveness, assurance, empathy, and tangibility. However, the gap theory does not account for the asymmetry that confirming high expectations results in a positive evaluation of services, whereas confirming low expectations is less likely to produce a positive assessment of service quality. Opponents of the gap theory suggest that perceptions of service performance directly determine service quality (Cronin and Taylor, 1992; Tse and Wilton, 1988). Consequently, SERVPERF has been proposed for measuring service performance (Cronin and Taylor, 1992).
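Written as simple scoring rules, the contrast between the two approaches is easy to see. The notation below is a generic sketch (P_ij is customer i's perception rating on item j, E_ij the corresponding expectation rating), not the exact operationalization used in the cited studies:

\begin{align}
  SQ_i^{\mathrm{SERVQUAL}} &= \frac{1}{k}\sum_{j=1}^{k}\bigl(P_{ij} - E_{ij}\bigr), \\
  SQ_i^{\mathrm{SERVPERF}} &= \frac{1}{k}\sum_{j=1}^{k} P_{ij},
\end{align}

where k is the number of scale items; SERVQUAL scores quality as the perception-minus-expectation gap, while SERVPERF relies on perceived performance alone.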
Customer satisfaction and service quality are two distinct, though highly correlated,
constructs (Bansal and Taylor, 1997; Dabholkar et al., 2000). In marketing literature
several studies have found positive relationships of service quality and customer
satisfaction with customer behavioral intentions (Anderson and Sullivan, 1993;
Parasuraman et al., 1988). Further, studies have also shown that customer satisfaction
mediates the effect of service quality on behavioral intentions (Gotlieb et al., 1994). It is
recommended that customer satisfaction should be measured separately from service
quality in order to understand how customers evaluate service performance
(Dabholkar et al., 2000).

Research design
As the study is exploratory, I adopted a qualitative methodology. Qualitative research is particularly suitable in situations such as traditional exploration (Baker, 2001). It offers certain advantages, such as bringing researchers close to the marketing situation and providing a wealth of information. It also provides high flexibility in understanding the
phenomenon (Carson et al., 2001). The choice of methodology is consistent with the
objective of the study and is similar to those used by Robinson and Morley (2006) and
Taylor and Bain (1999) in their studies of call centres in Australia and Scotland,
respectively. The in-depth interview technique of qualitative research was chosen. For conducting the in-depth interviews, a semi-structured questionnaire was prepared based on insights obtained from a review of the literature on customer satisfaction and service quality. Existing studies on call centres were given particular attention in the literature review.
About 12 in-depth interviews were conducted with senior managers of four large
call centres in a South Indian city. The interviews were held at the premises of the call
centres and each interview took about 45-60 minutes. All interviews were conducted in
person. A non-probabilistic convenience sampling approach was used in selecting the four call centres. Convenience sampling is regarded as acceptable in
exploratory research (Kinnear and Taylor, 1991). Three call centres were third party
call centres which were either subsidiaries or associate organizations of large Indian
business organizations. The fourth was a captive call centre of a multinational
corporation. The respondents were either managers responsible for quality control
function or managers handling operation function. Most of the interviewed managers
had worked in several call centres before taking up their current position. Hence, their
responses were not specific only to their current organization but reflected the practices
followed in the call centre industry in general.

Results
Quality management in call centres
During my discussions with call centre managers I found that most call centres in India
have a separate quality control department for ensuring superior quality performance.
In the call centre industry, quality control and performance evaluation are frequently
done on the basis of several operational measures, which are also referred to as key performance indicators (KPIs). All operational measures are recorded daily, but analysis is generally done weekly or monthly. Performance on operational measures is collected and recorded at the individual level or sometimes at the team level. The required performance level on these measures is governed by the service level agreement (SLA) between call centres and their clients. It also varies based on the type of service provided, inbound or outbound. Frequently used operational measures are listed below, followed by a brief computational sketch (for further details on operational measures, see Anton (1997) and Feinberg et al. (2000)):
• Average speed of answer (ASA). This is the average time taken to answer customer calls.
• Abandonment rate. This is the ratio of the number of calls abandoned by the customer prior to answer to the number of calls made to the call centre.
• Total calls. This is the total number of calls made to the call centre.
• Longest delay. This is the maximum time elapsed either before a customer call is answered or before it is abandoned by the customer.
• Average talk time. This is the total time the customer was connected to a call centre agent.
• Average work time after-call. This is the average time required to finish the work that must be done immediately after an inbound call. This includes keying in data, filling out forms, and making outbound calls, if required. During this period the agent is unavailable to take another inbound call.
• Average handle time. This is the average time taken to handle a call per customer per agent; in other words, it is the sum of average talk time and average work time after-call.
• Service level. This is the ratio of the number of calls answered within the agreed-upon time interval to the total calls received.
• Queue time. This is the total time the customer is on the telephone line before getting an answer.
• First-call resolution. This is the percentage of customers whose problem is satisfactorily resolved on the first call.
• Percentage of calls blocked. This is the percentage of customers who receive a “number is busy” message and could not even enter the call queue.
• Calls per agent. This is the total number of calls handled per agent in a shift (usually of eight hours).
• Adherence. This is the percentage of call centre agents who are at their seats as scheduled.
• Agent turnover. This is the percentage of agents who quit in a specified period of time.
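To illustrate how such measures are typically derived from raw call logs, the sketch below computes several of them from a simple list of call records. It is a minimal illustration under assumed field names and an assumed 20-second service-level threshold, not the reporting system of any call centre studied here.

```python
# Minimal sketch: deriving common call centre KPIs from per-call records.
# Field names and the 20-second service-level threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class CallRecord:
    wait_seconds: float              # time in queue before answer or abandonment
    abandoned: bool                  # True if the caller hung up before being answered
    talk_seconds: float = 0.0        # time connected to an agent (0 if abandoned)
    after_call_seconds: float = 0.0  # wrap-up work done immediately after the call

def kpis(calls: List[CallRecord], sla_threshold: float = 20.0) -> dict:
    answered = [c for c in calls if not c.abandoned]
    total, n_ans = len(calls), max(len(answered), 1)   # guard against division by zero
    return {
        "total_calls": total,
        "abandonment_rate": sum(c.abandoned for c in calls) / total,
        "average_speed_of_answer": sum(c.wait_seconds for c in answered) / n_ans,
        "longest_delay": max(c.wait_seconds for c in calls),
        "average_talk_time": sum(c.talk_seconds for c in answered) / n_ans,
        "average_work_time_after_call": sum(c.after_call_seconds for c in answered) / n_ans,
        "average_handle_time": sum(c.talk_seconds + c.after_call_seconds for c in answered) / n_ans,
        "service_level": sum(c.wait_seconds <= sla_threshold for c in answered) / total,
    }
```

Measures tied to people rather than calls (calls per agent, adherence, agent turnover) would be computed analogously from staffing and roster data.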
Besides using operational measures, call centres have systems for continuous monitoring of customer service agents (CSAs). During monitoring, calls are often evaluated on aspects like the information provided, quality of information, tone of voice, and enthusiasm of the agent. Broadly, call centres use three types of monitoring. The first is side-by-side monitoring, in which a quality control staff member sits beside an agent to supervise how he/she handles a particular call. This is particularly helpful for overseeing how an agent uses computer support devices to retrieve data or information for handling a call. The second is remote monitoring, in which calls are monitored at remote locations with the help of available technological means. In the third type of monitoring, call centres record a specified number of calls. Of the three types of monitoring, remote monitoring is in general the most frequently used. For example, in a call centre based in South India, I found that remote monitoring was done 60 per cent of the time while side-by-side and recorded monitoring were each done 20 per cent of the time. Monitoring is done with or without the knowledge of agents. Each call centre has its own plans for monitoring calls. For example, in one call centre, three calls per week of each agent were monitored as part of the quality plan.

Customer satisfaction and service quality in call centres


I found customer satisfaction measurement practices varying significantly across Indian call centres. Many call centres do not have an appropriate system for measuring satisfaction at the customer or end-user level. While captive call centres collect satisfaction-related data from customers, this is not done by most third party call centres. Most commonly, overall satisfaction ratings are collected using a single-item scale. Attribute- or dimension-wise satisfaction ratings are not available in many call centres. Generally, “top box” satisfaction is reported. “Top box” satisfaction indicates the proportion of callers who reported that they were “extremely satisfied” (or whatever the highest score was on the scale) with the quality of the call. Call centres use a five-point (or in some cases seven-point) scale for measuring customer satisfaction, so the proportion of respondents giving five out of five (or seven out of seven) on the scale comprises top box satisfaction. Another way of reporting customer satisfaction is through “top two” and “bottom two” figures. In one particular captive call centre of a computer hardware manufacturing company visited by the author, the “top two” figure was typically 65 per cent and the “bottom two” figure 15 per cent.
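The arithmetic behind these headline figures is straightforward; the sketch below computes “top box”, “top two” and “bottom two” proportions from a set of ratings on a five-point scale. The sample ratings are invented for illustration.

```python
# Minimal sketch: "top box", "top two" and "bottom two" shares on a 5-point scale.
def satisfaction_summary(ratings, scale_max=5):
    n = len(ratings)
    return {
        "top_box": sum(r == scale_max for r in ratings) / n,      # highest scale point only
        "top_two": sum(r >= scale_max - 1 for r in ratings) / n,  # two highest scale points
        "bottom_two": sum(r <= 2 for r in ratings) / n,           # two lowest scale points
    }

# Invented example: ten callers rating a transaction from 1 to 5
print(satisfaction_summary([5, 4, 5, 3, 2, 5, 4, 1, 5, 4]))
```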
I found that the call centres covered in the study did not differentiate service quality from overall customer satisfaction. They did not have a mechanism for comprehensive measurement of the service quality offered to their customers. SERVQUAL or other similar instruments for collecting customer-perceived, dimension-wise service performance were not in use. In general, call centres did not measure the entire service experience of customers during their engagement with a call centre.

Using operational variables for customer service assessment


Owing to the absence of direct contact with customers, or the unavailability of data about how customers actually perceive different aspects of service performance, call centres use operational measures as a proxy for customer service assessment. Service evaluation is limited to recording and
monitoring performance across different operational measures or call centre metrics
such as number of calls taken, average speed to answer, queue time, etc. However,
previous studies have shown that these measures are poor predictors of customer
service experience. Miciak and Desmarais (2001) found low level of customer
satisfaction in call centres in spite of their superior performance across operational
variables such as service levels and first-call resolution. Their study found average
service level and first-call resolution for call centres as 90 and 85.7 per cent,
respectively. The corresponding figure for customer satisfaction was only 66 per cent.
Feinberg et al. (2000) studied the relationship between caller satisfaction and 13 operational variables. They found that, out of the 13 operational variables, only the percentage of calls closed on first contact and average abandonment had a significant, though weak, effect on caller satisfaction. Other variables such as ASA, queue time, average talk time, adherence, average work time after call, percentage of calls blocked, time before abandoning, number of inbound calls, agent turnover, and service levels did not have any statistically significant impact on caller satisfaction. Further, the two significant variables together explained only 5 per cent of the variation in caller satisfaction. Their study showed the low predictive validity of the operational measures used for assessing customer satisfaction with call centres. Feinberg et al. (2002) found that in banking/financial services call centres none of the operational measures were significantly related to caller satisfaction. These studies suggest that most operational measures have no effect on customer satisfaction. Even the measures that are significantly related to customer satisfaction have a weak influence on it. Further, they have little explanatory power in terms of R² value.
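The kind of analysis these studies report can be outlined as a regression of caller satisfaction on operational measures, with R² and coefficient p-values as the quantities of interest. The sketch below uses invented variable names and randomly generated data purely to show the mechanics; it does not reproduce the cited datasets or results.

```python
# Minimal sketch: regressing caller satisfaction on operational measures to
# inspect explanatory power (R-squared) and coefficient significance.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({                              # hypothetical data, one row per call centre
    "asa": rng.normal(30, 10, 50),               # average speed of answer (seconds)
    "abandonment_rate": rng.normal(0.05, 0.02, 50),
    "first_call_resolution": rng.normal(0.85, 0.05, 50),
    "caller_satisfaction": rng.normal(0.66, 0.10, 50),
})

X = sm.add_constant(df[["asa", "abandonment_rate", "first_call_resolution"]])
model = sm.OLS(df["caller_satisfaction"], X).fit()
print(model.rsquared)   # share of variation in satisfaction explained by the metrics
print(model.pvalues)    # which operational measures, if any, are significant
```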
It is somewhat surprising to find call centre managers assessing service performance
based on operational measures. However, there are reasons for the prevalent practice of
measuring and producing piles of data on operational measures. Operational measures
are broadly of two types. The first is related to telephone calls such as ASA, queue time,
and average talk time. The second deals with human resource practices such as agent
turnover and adherence. The reason for call centres’ total reliance on these operational measures is the ease of measuring them. With advances in telecommunication-related technologies, telephone-call-related operational measures are automatically recorded. Similarly, measures related to human resource practices are easily available. Hence, reporting these measures is possible without much effort compared to direct measurement of customer perceptions. Listening to customers directly through primary data collection would be time-consuming and resource-demanding. Further, measurement of these operational variables is so prevalent in the call centre industry that everybody assumes they are measured because they are vital indicators. In reality, they are measured only because of automatic recording (Feinberg et al., 2000). The operational variables are unrelated, or at best only remotely related, to customers’ perceptions of service quality. It is obvious that there should be other variables that determine customer satisfaction with call centres.

What constitutes service quality of call centres


For understanding what constitutes service quality offered by call centres, it is
important to understand the difference between service encounters occurring in call
centres and other conventional service organizations such as restaurants, banks and
hospitals. In call centres, service encounters are telephone encounters that happen every time a customer interacts with a call centre, or with a company through its call centre, over the telephone. This is different from face-to-face encounters that occur between employees
and customers in non-call centre service firms. During face-to-face encounters tangible
factors such as physical appearance and dress of employees and characteristics of
place where encounters take place (e.g. air conditioning, ambience) affect service
quality perceptions of customers. In telephonic encounters, tangible factors do not
contribute to service quality evaluations. Customers play a less active role and verbal
cues assume high importance. Because of these inherent characteristics of telephonic
encounters, interpersonal skills of call centre agents directly affect service quality
(Burgers et al., 2000). While tangibility plays no role or a lesser one, reliability, responsiveness, assurance and empathy affect service quality perceptions. Keiningham et al. (2006) have shown that call centre satisfaction has all the dimensions found in SERVQUAL (e.g. reliability, responsiveness, assurance, and empathy) except tangibility. Customers want higher performance on dimensions such as empathy and assurance; call centres frequently ignore these “softer” dimensions (Malhotra and Mukherjee, 2003). Gilmore (2001) found that call centres lack systems for measuring and monitoring the intangible aspects of customer service.
Although call centres use metrics comprising operational measures in their quality
evaluation system, many of these measures can potentially lead to poor service quality
perceptions by customers. For instance, standard measures such as average talk time
and calls per agent can actually restrict the agent’s ability to answer customers’ calls effectively. Agents feel frustrated since these measures do not allow them to satisfactorily handle customers’ queries (Gilmore, 2001). The measures only indicate the efficiency
level in call centres (Marr and Parry, 2004) and could actually lead to unintended
consequences such as lost revenue and annoyed customers (Eric, 2006). Robinson and
Morley (2006) have found that most call centre managers assume operational measures
to be dominant key performance indicators whereas measurement of many of these
variables actually goes against achieving customer service goals.

Relative importance of service attributes


Customers in general place varying importance on different attributes, or in other words dimensions, of service quality. Some attributes are perceived to be critically important
and hence customers expect high performance by service firms on them. For less
important attributes, customers may accept somewhat lower performance. Prior
research has found reliability as the most important dimension of service quality
(Parasuraman et al., 1988). However, call centres are different in this respect. In a study
conducted in Australian call centres, Dean (2004) found consistently high ratings of
adequate expectations for all measured attributes. The scale used for measuring
service quality included attributes such as adaptiveness, assurance, authority given to
agents to solve customers’ problems, lack of queues, empathy, and friendly manner of
agents. The study suggested that call centres do not have options of selecting one or
fewer dimensions on which to offer superior performance. Instead, they need to excel
across all key attributes. Further, very high ratings of adequate service expectation in
the study suggested very narrow “zone of tolerance” in call centres. Zone of tolerance is
the region between adequate (minimum) service and desired service (mix of service
which can and should be provided) (Zeithaml et al., 1996). It means customers
expectations of adequate service performance is almost same as or very close to their
expectations of desired service performance.
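In notation (a paraphrase of the verbal definition above, not a formula from the cited studies), the zone of tolerance for an attribute j is the band between the two expectation levels:

\[
ZOT_j \;=\; E_j^{\text{desired}} - E_j^{\text{adequate}}, \qquad
E_j^{\text{adequate}} \approx E_j^{\text{desired}} \;\Rightarrow\; ZOT_j \text{ is narrow,}
\]

which is the pattern Dean (2004) reports for call centre attributes.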

Discussion
This paper has adopted a qualitative methodology involving in-depth interviews with call centre managers in India. The aim is to understand the current practices of measuring
customer satisfaction and service quality in call centres. I found that call centres in
general overly depend on metrics comprising operational measures for service
performance measurement. Over-reliance on operational measures results in focusing on
calls rather than call outcome as experienced by customers (Robinson and Morley, 2006).
Previous studies have shown that operational measures have no, or only a weak, effect on customer satisfaction with call centre transactions (Feinberg et al., 2000). Further, certain operational measures such as average talk time and calls per agent are indicators of efficiency which actually limit the ability of agents to satisfactorily resolve customers’ problems and hence could lead to customer dissatisfaction.
The paper also shows that quite often the voice of the customer is not captured in quality evaluation. As a result, call centre managers hold false assumptions about service quality. It is customers who ultimately consume the services offered by call centres, hence their perceptions count most. Ignoring the voice of customers goes against the basic tenets of good customer relationship management, the very reason for call centres’ existence. Listening to customers can contribute immensely to making a call centre a customer-centric organization.
As part of listening to customers, call centres need to measure overall as well as
attribute-level satisfaction experiences of customers. Comprehensive measurement of
attribute-level or dimension-wise satisfaction should complement overall customer
satisfaction measurement initiatives similar to reporting “top box” or “top two”
satisfaction. Attribute-level or dimension-wise satisfaction measurement is
recommended over overall satisfaction measurement for several reasons:
• There is a greater possibility that customers would make post-transaction satisfaction judgments at the attribute level than at the product or service level.
• Attribute-level measurement allows capturing customers’ mixed feelings toward a service when customers are satisfied with one attribute but dissatisfied with others. For example, customers may be satisfied with the responsiveness of call centre agents but dissatisfied with their empathy.
• Attribute-level measurement has greater specificity and higher diagnostic capability than overall satisfaction measurement (Mittal et al., 1998).

The economic benefits of ensuring a high level of customer satisfaction are immense.
Several studies have shown the positive relationship of customer satisfaction and
service quality with customer loyalty (Cronin and Taylor, 1992; Oliver, 1980;
Parasuraman et al., 1988; Reichheld and Sasser, 1990). There is also empirical support
for positive association between customer satisfaction and intentions to spread
word-of-mouth (Dabholkar and Thorpe, 1994; Richins, 1983). In the context of call
centres, it has been shown that service quality affects customer loyalty (Dean, 2002).
Providing superior service to customers through call centres can be extremely
important for organizations from the long-term objective of customer retention. As a
short-term payoff, superior service can help call centre managers in minimizing
“phone-rage” – customers losing their temper on the telephone.

Research contributions
The study adds to our understanding of what makes a customer satisfied in a call centre transaction. It shows that there is a gap in the literature, both practical and theoretical. The gap is that we assume operational indices are a valid surrogate for measures of customer service. Further, this is the first systematic study that examines customer satisfaction and service quality measurement practices in call centres in India. India has emerged as a leading player in the global BPO industry. Large numbers of multinational companies have shifted their call centre operations to India: they have either set up their own units or they outsource these services to third-party call centres. The study also assumes importance as it is conducted in an emerging economy. One of the limitations of the existing body of marketing knowledge is that it is based almost entirely on research carried out in high-income developed economies (Burgess and Steenkamp, 2006). Hence, research conducted in emerging economies can make a significant contribution to the literature.

Managerial implications


The study suggests that call centre managers often rely too heavily on operational measures for evaluating the quality of call centre service. The criteria used by customers for quality assessment are different from the ones used by managers. This results in a gap between managers’ perceptions of the service quality offered and customers’ perceptions of the service quality received. Hence, it is imperative for call centre managers to develop a suitable system for the systematic measurement of customers’ perceptions of service quality. Emphasis needs to be given to attributes capturing the quality of customer-agent interactions. Using an instrument with items covering reliability, assurance, empathy, and responsiveness can be a first step in measuring service quality.
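As a starting point, such an instrument could be organized as a small bank of perception items grouped by dimension and averaged into dimension scores. The items below are hypothetical examples written for illustration, not a validated scale.

```python
# Minimal sketch: a hypothetical four-dimension perception instrument for call centres
# (items invented for illustration; a real scale would need psychometric validation).
ITEMS = {
    "reliability":    ["The agent resolved my problem on the first call.",
                       "The information given to me was accurate."],
    "responsiveness": ["My call was answered promptly.",
                       "The agent handled my request without unnecessary delay."],
    "assurance":      ["The agent appeared knowledgeable and competent.",
                       "I felt confident acting on the agent's advice."],
    "empathy":        ["The agent listened to my concerns.",
                       "The agent treated me with courtesy and patience."],
}

def dimension_scores(responses: dict) -> dict:
    """Average each dimension's 1-7 item ratings into a dimension-level score."""
    return {dim: sum(responses[item] for item in items) / len(items)
            for dim, items in ITEMS.items()}
```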
Managers should also be aware that telephonic interactions with customers are
devoid of the richness of face-to-face interactions. During service delivery in call
centres, agents play a critical role: their interpersonal skills and motivation determine
to a large extent excellence in service performance. Therefore, recruiting employees
with required skill set and adopting suitable human resources practices to keep
employees happy and motivated are of paramount importance for superior call centre
service.

Limitations and directions for future research


Like any other research, this study is subject to some limitations. I used a qualitative approach involving in-depth interviews. Qualitative methodology is recommended in exploratory research. However, it has limitations, such as researcher bias and a limited ability to quantify the data and perform statistical analysis. The research findings are based on a study conducted in one country and hence they may
not necessarily be generalizable for call centres in other countries. Future researchers
can corroborate the research findings by undertaking similar studies in different
countries.
Future researchers can develop better instruments to measure service quality
offered by call centres. Multi-item scales with good psychometric properties need to be
developed. Valid and reliable instruments can help immensely in improving current
practices of measurement of customer satisfaction and service quality in the call centre
industry. Future researchers can examine the effect of call centre quality on customer
channel switching in multi-channel environment. The impact of various factors such as
location of call centre (country where a caller resides vs different country) and type of
services (e.g. low involvement such as ticket booking vs high involvement such as
banking) on how customers assess call centre service can be examined by researchers.

Conclusion
The study shows that call centre managers overly depend on metrics comprising operational measures for service quality evaluation. Operational variables cannot provide a true picture of how customers perceive service quality. Most operational measures in fact only act as indicators of efficiency. An efficiency-driven approach can cause several undesirable consequences such as customer defection and loss of market share. Systematic and comprehensive measurement of service quality would be the first step towards providing an exceptional call centre experience to customers.

References
Anderson, E.W. and Sullivan, M.W. (1993), “The antecedents and consequences of customer
satisfaction for firms”, Marketing Science, Vol. 12 No. 2, pp. 125-43.
Anton, J. (1997), Call Center Management by the Numbers, Purdue University Press/Call Center
Press, Annapolis, MD.
Baker, M.J. (2001), “Selecting a research methodology”, The Marketing Review, Vol. 1, pp. 373-97.
Bansal, H.S. and Taylor, S. (1997), “Investigating the relationship between service quality,
satisfaction and switching intentions”, in Elizabeth, J.W. and Joseph, C.H. (Eds),
Developments in Marketing Science, Academy of Marketing Science, Coral Gables, FL,
pp. 304-13.
Bennington, L., Cummane, J. and Conn, P. (2000), “Customer satisfaction and call centers:
an Australian study”, International Journal of Service Industry Management, Vol. 11 No. 2,
pp. 162-73.
Burgers, A., Ruyter, K., Keen, C. and Streukens, S. (2000), “Customer expectation dimensions of
voice-to-voice service encounters: a scale development study”, International Journal of
Service Industry Management, Vol. 11 No. 2, pp. 142-61.
Burgess, S.M. and Steenkamp, J-B.E.M. (2006), “Marketing renaissance: how research in emerging markets advances marketing science and practice”, International Journal of Research in Marketing, Vol. 23, pp. 337-56.
Carson, D., Gilmore, A., Perry, C. and Gronhaug, K. (2001), Qualitative Marketing Research, Sage,
London.
Cronin, J.J. and Taylor, S.A. (1992), “Measuring service quality: a re-examination and extension”,
Journal of Marketing, Vol. 56, pp. 55-68.
Dabholkar, P.A. and Thorpe, D.I. (1994), “Does customer satisfaction predict shopper
intentions?”, Journal of Consumer Satisfaction, Dissatisfaction and Complaining Behavior,
Vol. 7, pp. 161-71.
Dabholkar, P.A., Shepherd, C.D. and Thorpe, D.I. (2000), “A comprehensive framework for
service quality: an investigation of critical conceptual and measurement issues through a
longitudinal study”, Journal of Retailing, Vol. 76 No. 2, pp. 139-73.
Dean, A.M. (2002), “Service quality in call centres: implications for customer loyalty”, Managing
Service Quality, Vol. 12 No. 6, pp. 414-23.
Dean, A.M. (2004), “Rethinking customer expectations of service quality: are call centers
different?”, Journal of Services Marketing, Vol. 18 No. 1, pp. 60-77.
Eric, P.J. (2006), “Operational challenges in the call center industry: a case study and
resource-based framework”, Managing Service Quality, Vol. 16 No. 5, pp. 477-500.
Feinberg, R.A., Hokama, L., Kadam, R. and Kim, I-S. (2002), “Operational determinants of caller
satisfaction in the banking/financial services call center”, International Journal of Bank
Marketing, Vol. 20 No. 4, pp. 174-80.
Feinberg, R.A., Kim, I-S., Hokama, L., de Ruyter, K. and Keen, C. (2000), “Operational
determinants of caller satisfaction in the call center”, International Journal of Service
Industry Management, Vol. 11 No. 2, pp. 31-41.
Gilmore, A. (2001), “Call centre management: is service quality a priority?”, Managing Service Quality, Vol. 11 No. 3, pp. 153-9.
Gotlieb, J.B., Grewal, D. and Brown, S.W. (1994), “Consumer satisfaction and perceived quality:
complementary or divergent constructs?”, Journal of Applied Psychology, Vol. 79 No. 6,
pp. 875-85.
Keiningham, T.L., Aksoy, L., Andreassen, T.W., Cooil, B. and Wahren, B.J. (2006), “Call center satisfaction and customer retention in a co-branded service context”, Managing Service Quality, Vol. 16 No. 3, pp. 269-89.
Kinnear, T.C. and Taylor, J.R. (1991), Marketing Research: An Applied Approach, 4th ed.,
McGraw-Hill, New York, NY.
Malhotra, N. and Mukherjee, A. (2003), “Analysing the commitment-service quality relationship:
a comparative study of retail banking call centres and branches”, Journal of Marketing
Management, Vol. 19 Nos 9/10, pp. 941-72.
Marr, B. and Parry, S. (2004), “Performance management in call centers: lessons, pitfalls and
achievements in Fujitsu services”, Measuring Business Excellence, Vol. 8 No. 4, pp. 55-62.
Miciak, A. and Desmarais, M. (2001), “Benchmarking service quality performance at
business-to-business and business-to-consumer call centers”, The Journal of Business &
Industrial Marketing, Vol. 16 No. 5, pp. 340-53.
Mittal, V., Ross, W.T. Jr and Baldasare, P.M. (1998), “The asymmetric impact of negative and
positive attribute-level performance on overall satisfaction and repurchase intentions”,
Journal of Marketing, Vol. 62 No. 1, pp. 33-47.
Moon, B.K., Lee, J.K. and Lee, K.J. (2004), “A next generation multimedia call center for internet
commerce: IMC”, Journal of Organizational Computing & Electronic Commerce, Vol. 10
No. 4, pp. 227-40.
NASSCOM (2006), “Key highlights of the IT-ITES sector performance”, available at: www.nasscom.in/Nasscom/templates/NormalPage.aspx?id=28485
Oliver, R.L. (1980), “A cognitive model of the antecedents and consequences of satisfaction
decisions”, Journal of Marketing Research, Vol. 17, pp. 460-9.
Oliver, R.L. (1997), Satisfaction: A Behavioral Perspective on the Consumer, McGraw-Hill,
New York, NY.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), “SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality”, Journal of Retailing, Vol. 64, pp. 12-40.
Reichheld, F.F. and Sasser, W.E. Jr (1990), “Zero defections: quality comes to services”, Harvard
Business Review, Vol. 68, pp. 105-11.
Richins, M.L. (1983), “Negative word-of-mouth by dissatisfied consumers: a pilot study”,
Journal of Marketing, Vol. 47, pp. 68-78.
Robinson, G. and Morley, C. (2006), “Call centre management: responsibilities and performance”,
International Journal of Service Industry Management, Vol. 17 No. 3, pp. 284-300.
Szymanski, D.M. and Henard, D.H. (2001), “Customer satisfaction: a meta-analysis of the
empirical evidence”, Journal of the Academy of Marketing Science, Vol. 29 No. 1, pp. 16-35.
Taylor, P. and Bain, P. (1999), “An assembly line in the head: work and employee relations in the
call centre”, Industrial Relations Journal, Vol. 30 No. 2, pp. 101-17.
(The) Times of India (2004), “97 pc customers hate call centres”, The Times of India, September 8.
Tse, D.K. and Wilton, P.C. (1988), “Models of consumer satisfaction formation: an extension”,
Journal of Marketing Research, Vol. 25, pp. 204-12.
Zeithaml, V.A. and Bitner, M.J. (2000), Services Marketing, McGraw-Hill, New York, NY.
Zeithaml, V.A., Berry, L.L. and Parasuraman, A. (1996), “The behavioural consequences of service quality”, Journal of Marketing, Vol. 60 No. 2, pp. 31-46.
Further reading
Hair, J.F., Anderson, R.E., Tatham, R.L. and Black, W.C. (1998), Multivariate Data Analysis,
5th ed., Prentice-Hall, Upper Saddle River, NJ.
About the author
Anand Kumar Jaiswal is Visiting Assistant Professor of Marketing at Indian Institute of
Management Ahmedabad, India. He has published papers in Journal of the Academy of Business
and Economics, Economic and Political Weekly, Decision, etc. He has presented several papers at
international conferences such as INFORMS Marketing Science Conference 2004 and 2007.
He has also written several case studies. His research interests include services management,
customer satisfaction, business-to-consumer e-commerce, and brand extension management.
Anand Kumar Jaiswal can be contacted at: akjaiswal@iimahd.ernet.in

