
Research Note: What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com
Author(s): Susan M. Mudambi and David Schuff
Source: MIS Quarterly, Vol. 34, No. 1 (March 2010), pp. 185-200
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/20721420

What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com¹

By: Susan M. Mudambi
Department of Marketing and Supply Chain Management
Fox School of Business
Temple University
524 Alter Hall
1801 Liacouras Walk
Philadelphia, PA 19122
U.S.A.
Susan.Mudambi@temple.edu

David Schuff
Department of Management Information Systems
Fox School of Business
Temple University
207G Speakman Hall
1810 North 13th Street
Philadelphia, PA 19122
U.S.A.
David.Schuff@temple.edu

Abstract

Customer reviews are increasingly available online for a wide range of products and services. They supplement other information provided by electronic storefronts such as product descriptions, reviews from experts, and personalized advice generated by automated recommendation systems. While researchers have demonstrated the benefits of the presence of customer reviews to an online retailer, a largely uninvestigated issue is what makes customer reviews helpful to a consumer in the process of making a purchase decision. Drawing on the paradigm of search and experience goods from information economics, we develop and test a model of customer review helpfulness. An analysis of 1,587 reviews from Amazon.com across six products indicated that review extremity, review depth, and product type affect the perceived helpfulness of the review. Product type moderates the effect of review extremity on the helpfulness of the review. For experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings. For both product types, review depth has a positive effect on the helpfulness of the review, but the product type moderates the effect of review depth on the helpfulness of the review. Review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods. We discuss the implications of our findings for both theory and practice.

Keywords: Electronic commerce, product reviews, search and experience goods, consumer behavior, information economics, diagnosticity

¹ Carol Saunders was the accepting senior editor for this paper. Both authors contributed equally to this paper.

Introduction

As consumers search online for product information and to evaluate product alternatives, they often have access to dozens or hundreds of product reviews from other consumers. These customer reviews are provided in addition to product descriptions, reviews from experts, and personalized advice generated by automated recommendation systems. Each of these options has the potential to add value for a prospective customer. Past research has extensively examined the role of expert reviews (Chen and Xie 2005), and the role of online recommendation systems (Bakos 1997; Chen et al. 2004; Gretzel and Fesenmaier 2006), and the positive effect feedback mechanisms can have on buyer trust (Ba and Pavlou 2002; Pavlou and Gefen 2004). More recently, research has examined the role of online customer product reviews, specifically looking at the characteristics of the reviewers (Forman et al. 2008; Smith et al. 2005) and self-selection bias (Hu et al. 2008; Li and Hitt 2008). Recent research has also shown that customer reviews can have a positive influence on sales (see Chen et al. 2008; Chevalier and Mayzlin 2006; Clemons et al. 2006; Ghose and Ipeirotis 2006). Specifically, Clemons et al. (2006) found that strongly positive ratings can positively influence the growth of product sales, and Chen et al. (2008) found that the quality of the review as measured by helpfulness votes also positively influences sales. One area in need of further examination is what makes an online review helpful to consumers.

Online customer reviews can be defined as peer-generated product evaluations posted on company or third-party websites. Retail websites offer consumers the opportunity to post product reviews with content in the form of numerical star ratings (usually ranging from 1 to 5 stars) and open-ended customer-authored comments about the product. Leading online retailers such as Amazon.com have enabled consumers to submit product reviews for many years, with other retailers offering this option to consumers more recently. Some other firms choose to buy customer reviews from Amazon.com or other sites and post the reviews on their own electronic storefronts. In this way, the reviews themselves provide an additional revenue stream for Amazon and other online retailers. A number of sites that provide consumer ratings have emerged in specialty areas (Dabholkar 2006) such as travel (www.travelpost.com) and charities (www.charitynavigator.org).

The presence of customer reviews on a website has been shown to improve customer perception of the usefulness and social presence of the website (Kumar and Benbasat 2006). Reviews have the potential to attract consumer visits, increase the time spent on the site ("stickiness"), and create a sense of community among frequent shoppers. However, as the availability of customer reviews becomes widespread, the strategic focus shifts from the mere presence of customer reviews to the customer evaluation and use of the reviews. Online retailers have an incentive to provide online content that customers perceive to be valuable, and sites such as Epinions and Amazon.com post detailed guidelines for writing reviews.

Making a better decision more easily is the main reason consumers use a ratings website (Dabholkar 2006), and the perceived diagnosticity of website information positively affects consumers' attitudes toward shopping online (Jiang and Benbasat 2007).

Online retailers have commonly used review "helpfulness" as the primary way of measuring how consumers evaluate a review. For example, after each customer review, Amazon.com asks, "Was this review helpful to you?" Amazon provides helpfulness information alongside the review ("26 of 31 people found the following review helpful") and positions the most helpful reviews more prominently on the product's information page. Consumers can also sort reviews by their level of helpfulness. However, past research has not provided a theoretically grounded explanation of what constitutes a helpful review. We define a helpful customer review as a peer-generated product evaluation that facilitates the consumer's purchase decision process.

Review helpfulness can be seen as a reflection of review diagnosticity. Interpreting helpfulness as a measure of perceived value in the decision-making process is consistent with the notion of information diagnosticity found in the literature (see Jiang and Benbasat 2004, 2007; Kempf and Smith 1998; Pavlou and Fygenson 2006; Pavlou et al. 2007). Customer reviews can provide diagnostic value across multiple stages of the purchase decision process. The purchase decision process includes the stages of need recognition, information search, evaluation of alternatives, purchase decision, purchase, and post-purchase evaluation (adapted from Kotler and Keller 2005). Once a need is recognized, consumers can use customer reviews for information search and the evaluation of alternatives. The ability to explore information about alternatives helps consumers make better decisions and experience greater satisfaction with the online channel (Kohli et al. 2004). For some consumers, information seeking is itself a source of pleasure (Mathwick and Rigdon 2004). After the purchase decision and the purchase itself, some consumers return to the website in the post-purchase evaluation stage to post comments on the product purchased. After reading peer comments, consumers may become aware of an unfilled product need, thereby bringing the purchase decision process full circle.

This implies that online retail sites with more helpful reviews offer greater potential value to customers. Providing easy access to helpful reviews can create a source of differentiation. In practice, encouraging quality customer reviews does appear to be an important component of the strategy of many online retailers. Given the strategic potential of customer reviews, we draw on information economics theory and on past research to develop a conceptual understanding of the components of helpfulness. We then empirically test the model using actual customer review data from Amazon.com. Overall, the analysis contributes to a better understanding of what makes a customer review helpful in the purchase decision process. In the final section, we conclude with a discussion of the managerial implications.

Theoretical Foundation and Model

The economics of information provides a relevant foundation to address the role of online customer reviews in the consumer decision process. Consumers often must make purchase decisions with incomplete information as they lack full information on product quality, seller quality, and the available alternatives. They also know that seeking this information is costly and time consuming, and that there are trade-offs between the perceived costs and benefits of additional search (Stigler 1961). Consumers follow a purchase decision process that seeks to reduce uncertainty, while acknowledging that purchase uncertainty cannot be totally eliminated.

Therefore, the total cost of a product must include both the product cost and the cost of search (Nelson 1970). Both physical search and cognitive processing efforts can be considered search costs. For a wide range of choices, consumers recognize that there are tradeoffs between effort and accuracy (Johnson and Payne 1985). Those who are willing to put more effort into the decision process expect, at least partially, increased decision accuracy. Consumers can use decision and comparison aids (Todd and Benbasat 1992) and numerical content ratings (Poston and Speier 2005) to conserve cognitive resources and reduce energy expenditure, but also to ease or improve the purchase decision process. One such numerical rating, the star rating, has been shown to serve as a cue for the review content (Poston and Speier 2005).

A key determinant of search cost is the nature of the product under consideration. According to Nelson (1970, 1974), search goods are those for which consumers have the ability to obtain information on product quality prior to purchase, while experience goods are products that require sampling or purchase in order to evaluate product quality. Examples of search goods include cameras (Nelson 1970) and natural supplement pills (Weathers et al. 2007), and examples of experience goods include music (Bhattacharjee et al. 2006; Nelson 1970) and wine (Klein 1998). Although many products involve a mix of search and experience attributes, the categorization of search and experience goods continues to be relevant and widely accepted (Huang et al. 2009). Products can be described as existing along a continuum from pure search goods to pure experience goods.

To further clarify the relevant distinctions between search and experience goods, the starting point is Nelson's (1974, p. 738) assertion that "goods can be classified by whether the quality variation was ascertained predominantly by search or by experience." Perceived quality of a search good involves attributes of an objective nature, while perceived quality of an experience good depends more on subjective attributes that are a matter of personal taste. Several researchers have focused on the differing information needs of various products and on how consumers evaluate and compare their most relevant attributes. The dominant attributes of a search good can be evaluated and compared easily, and in an objective manner, without sampling or buying the product, while the dominant attributes of an experience good are evaluated or compared more subjectively and with more difficulty (Huang et al. 2009). Unlike search goods, experience goods are more likely to require sampling in order to arrive at a purchase decision, and sampling often requires an actual purchase. For example, the ability to listen online to several 30-second clips from a music CD allows the customer to gather pre-purchase information and even attain a degree of "virtual experience" (Klein 1998), but assessment of the full product or the full experience requires a purchase. In addition, Weathers et al. (2007) categorized goods according to whether or not it was necessary to go beyond simply reading information to also use one's senses to evaluate quality.

We identify an experience good as one in which it is relatively difficult and costly to obtain information on product quality prior to interaction with the product; key attributes are subjective or difficult to compare, and there is a need to use one's senses to evaluate quality. For a search good, it is relatively easy to obtain information on product quality prior to interaction with the product; key attributes are objective and easy to compare, and there is no strong need to use one's senses to evaluate quality.

This difference between search and experience goods can inform our understanding of the helpfulness of an online customer review. Customer reviews are posted on a wide range of products and services, and have become part of the decision process for many consumers. Although consumers use online reviews to help them make decisions regarding both types of products, it follows that a purchase decision for a search good may have different information requirements than a purchase decision for an experience good.

In the economics of information literature, a close connection is made between information and uncertainty (Nelson 1970). Information quality is critical in online customer reviews, as it can reduce purchase uncertainty. Our model of customer review helpfulness, as illustrated in Figure 1, starts with the assumption of a consumer's need to reduce purchase uncertainty. Although previous research has analyzed both product and seller quality uncertainty (Pavlou et al. 2007), we examine the helpfulness of reviews that focus on the product itself, not on reviews of the purchase experience or the seller.

In past research on online consumers, diagnosticity has been defined and measured in multiple ways, with a commonality of the helpfulness to a decision process, as subjectively perceived by consumers. Kempf and Smith (1998) assessed overall product-level diagnosticity by asking how helpful the website experience was in judging the quality and performance of the product. Product diagnosticity is a reflection of how helpful a website is to online buyers for evaluating product quality (Pavlou and Fygenson 2006; Pavlou et al. 2007). Perceived diagnosticity has been described as the perceived ability of a Web interface to convey to customers relevant product information that helps them in understanding and evaluating the quality and performance of products sold online (Jiang and Benbasat 2004), and has been measured as whether it is "helpful for me to evaluate the product," "helpful in familiarizing me with the product," and "helpful for me to understand the product" (Jiang and Benbasat 2007, p. 468).

This established connection between perceived diagnosticity and perceived helpfulness is highly relevant to the context of online reviews. For example, Amazon asks, "Was this review helpful to you?" In this context, the question is essentially an assessment of helpfulness during the product decision-making process. A review is helpful if it aids one or more stages of this process. This understanding of review helpfulness is consistent with the previously cited conceptualizations of perceived diagnosticity.

For our study of online reviews, we adapt the established view of perceived diagnosticity as perceived helpfulness to a decision process. We seek to better understand what makes a helpful review. Our model (Figure 1) illustrates two factors that consumers take into account when determining the helpfulness of a review. These are review extremity (whether the review is positive, negative, or neutral), and review depth (the extensiveness of the reviewer comments). Given the differences in the nature of information search across search and experience goods, we expect the product type to moderate the perceived helpfulness of an online customer review. These factors and relationships will be explained in more detail in the following sections.

Review Extremity and Star Ratings

Previous research on extreme and two-sided arguments raises theoretical questions on the relative diagnosticity or helpfulness of extreme versus moderate reviews. Numerical star ratings for online customer reviews typically range from one to five stars. A very low rating (one star) indicates an extremely negative view of the product, a very high rating (five stars) reflects an extremely positive view of the product, and a three-star rating reflects a moderate view. The star ratings are a reflection of attitude extremity, that is, the deviation from the midpoint of an attitude scale (Krosnick et al. 1993). Past research has identified two explanations for a midpoint rating such as three stars out of five (Kaplan 1972; Presser and Schuman 1980). A three-star review could reflect a truly moderate review (indifference), or a series of positive and negative comments that cancel each other out (ambivalence). In either case, a midpoint rating has been shown to be a legitimate measure of a middle-ground attitude.

One issue with review extremity is how the helpfulness of a review with an extreme rating of one or five compares to that of a review with a moderate rating of three. Previous research on two-sided arguments provides theoretical insights on the relative diagnosticity of moderate versus extreme reviews. There is solid evidence that two-sided messages in advertising can enhance source credibility in consumer communications (Eisend 2006; Hunt and Smith 1987), and can enhance brand attitude (Eisend 2006). This would imply that moderate reviews are more helpful than extreme reviews.

Yet, past research on reviews provides findings with conflicting implications for review diagnosticity and helpfulness. For reviews of movies with moderate star ratings, Schlosser (2005) found that two-sided arguments were more credible and led to more positive attitudes about the movie, but in the case of movies with extreme ratings, two-sided arguments were less credible.

Other research on online reviews provides insights on the relationship between review diagnosticity and review extremity. Pavlou and Dimoka (2006) found that the extreme ratings of eBay sellers were more influential than moderate ratings, and Forman et al. (2008) found that for books, moderate reviews were less helpful than extreme reviews. One possible explanatory factor is the consumer's initial attitude. For example, Crowley and Hoyer (1994) found that two-sided arguments are more persuasive than one-sided positive arguments when the initial attitude of the consumer is neutral or negative, but not in other situations.

These mixed findings do not lead to a definitive expectation of whether extreme reviews or moderate reviews are more helpful. This ambiguity may be partly explained by the observation that previous research on moderate versus extreme reviews failed to take product type into consideration. The relative value of moderate versus extreme reviews may differ depending on whether the product is a search good or an experience good. Research in advertising has found that consumers are more skeptical of experience than search attribute claims, and more skeptical of subjective than objective claims (Ford et al. 1990). This indicates resistance to strong or extreme statements when those claims cannot be easily substantiated.

Figure 1. Model of Customer Review Helpfulness
[Figure: review extremity (star rating) and review depth (word count) lead to the helpfulness of the customer review; product type (search or experience good) moderates the extremity path (H1) and the depth path (H3), review depth has a direct effect (H2), and the number of votes on helpfulness is included as a control.]

There may be an interaction between product type and review extremity, as different products have differing information needs. On consumer ratings sites, experience goods often have many extreme ratings and few moderate ratings, which can be explained by the subjective nature of the dominant attributes of experience goods. Taste plays a large role in many experience goods, and consumers are often highly confident about their own tastes and subjective evaluations, and skeptical about the extreme views of others. Experience goods such as movies and music seem to attract reviews from consumers who either love them or hate them, with extremely positive reviews especially common (Ghose and Ipeirotis 2006). Consumers may discount extreme ratings if they seem to reflect a simple difference in taste. Evidence of high levels of cognitive processing typically does not accompany extreme attitudes on experience goods. Consumers are more open to moderate ratings of experience goods, as they could represent a more objective assessment.

For experience goods, this would imply that objective content is favored, and that moderate reviews would be likely to be more helpful than either extremely negative or extremely positive reviews in making a purchase decision. For example, a consumer who has an initial positive perception of an experience good (such as a music CD) may agree with an extremely positive review, but is unlikely to find that an extreme review will help the purchase decision process. Similarly, an extremely negative review will conflict with the consumer's initial perception without adding value to the purchase decision process.

Reviews of search goods are more likely to address specific, tangible aspects of the product, and how the product performed in different situations. Consumers are in search of specific information regarding the functional attributes of the product. Since objective claims about tangible attributes are more easily substantiated, extreme claims for search goods can be perceived as credible, as shown in the advertising literature (Ford et al. 1990). Extreme claims for search goods can provide more information than extreme claims for experience goods, and can show evidence of logical argument. We expect differences in the diagnosticity and helpfulness of extreme reviews across search and experience goods. Therefore, we hypothesize

H1. Product type moderates the effect of review extremity on the helpfulness of the review. For experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings.

Sample reviews from Amazon.com can serve to illustrate the key differences in the nature of reviews of experience and search goods. As presented in Appendix A, reviews with extreme ratings of experience goods often appear very subjective, sometimes go off on tangents, and can include sentiments that are unique or personal to the reviewer. Reviews with moderate ratings of experience goods have a more objective tone, keep more focused, and reflect less idiosyncratic tastes. In contrast, both extreme and moderate reviews of search goods often take an objective tone, refer to facts and measurable features, and discuss aspects of general concern. Overall, we expect extreme reviews of experience goods to be less helpful than moderate reviews. For search goods, both extreme and moderate reviews can be helpful. An in-depth analysis of the text of reviewer comments, while beyond the scope of this paper, could yield additional insights.

Review Depth and Peer Comments

Review depth can increase information diagnosticity, and this is especially beneficial to the consumer if the information can be obtained without additional search costs (Johnson and Payne 1985). A reviewer's open-ended comments offer additional explanation and context to the numerical star ratings and can affect the perceived helpfulness of a review. When consumers are willing to read and compare open-ended comments from peers, the amount of information can matter. We expect the depth of information in the review content to improve diagnosticity and affect perceived helpfulness.

Consumers sometimes expend time and effort to evaluate alternatives, but then lack the confidence or motivation to make a purchase decision and the actual purchase. People are most confident in decisions when information is highly diagnostic. Tversky and Kahneman (1974) found that the increased availability of reasons for a decision increases the decision maker's confidence. Similarly, the arguments of senior managers were found to be more persuasive when they provided a larger quantity of information (Schwenk 1986). A consumer may have a positive inclination toward a product, but have not made the cognitive effort to identify the main reasons to choose a product, or to make a list of the pros and cons. Or, a consumer may be negatively predisposed toward a product, but not have the motivation to search and process information about other alternatives. In these situations, an in-depth review from someone who has already expended the effort is diagnostic, as it will help the consumer make the purchase decision.

The added depth of information can help the decision process by increasing the consumer's confidence in the decision. Longer reviews often include more product details, and more details about how and where the product was used in specific contexts. The quantity of peer comments can reduce product quality uncertainty, and allow the consumers to picture themselves buying and using the product. Both of these aspects can increase the diagnosticity of a review and facilitate the purchase decision process. Therefore, we hypothesize

H2. Review depth has a positive effect on the helpfulness of the review.

However, the depth of the review may not be equally important for all purchase situations, and may differ depending on whether the consumer is considering a search good or an experience good. For experience goods, the social presence provided by comments can be important. According to social comparison theory (Festinger 1954), individuals have a drive to compare themselves to other people. Shoppers frequently look to other shoppers for social cues in a retail environment, as brand choice may be seen as making a statement about the individual's taste and values. Information that is personally delivered from a non-marketer has been shown to be especially credible (Herr et al. 1991).

Prior research has examined ways to increase the social presence of the seller to the buyer (Jiang and Benbasat 2004), especially as a way of mitigating uncertainty in buyer-seller online relationships (Pavlou et al. 2007). Kumar and Benbasat (2006) found that the mere existence of reviews established social presence, and that online, open-ended peer comments can emulate the subjective and social norms of offline interpersonal interaction. The more comments and stories, the more cues for the subjective attributes related to personal taste.

However, reviews for experience products can be highly personal, and often contain tangential information idiosyncratic to the reviewer. This additional content is not uniformly helpful to the purchase decision. In contrast, customers purchasing search goods are more likely to seek factual information about the product's objective attributes and features. Since these reviews are often presented in a fact-based, sometimes bulleted format, search good reviews can be relatively short. The factual nature of search reviews implies that additional content in those reviews is more likely to contain important information about how the product is used and how it compares to alternatives. Therefore, we argue that while additional review content is helpful for all reviews, the incremental value of additional content in a search review is more likely to be helpful to the purchase decision than the incremental value of additional content for experience reviews. This leads us to hypothesize

H3. The product type moderates the effect of review depth on the helpfulness of the review. Review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods.

To summarize, our model of customer review helpfulness (Figure 1) is an application of information economics theory and the paradigm of search versus experience goods. When consumers determine the helpfulness of a review, they take into account review extremity, review depth, and whether the product is a search good or an experience good.

Research Methodology

Data Collection

We collected data for this study using the online reviews available through Amazon.com as of September 2006. Review data on Amazon.com is provided through the product's page, along with general product and price information that may include Amazon.com's own product review. We retrieved the pages containing all customer reviews for six products (see Table 1).

We chose these six products in the study based on several criteria. Our first criterion for selection was that the specific product had a relatively large number of product reviews compared with other products in that category. Secondly, we chose both search and experience goods, building on Nelson (1970, 1974). Although the categorization of search and experience goods continues to be relevant and widely accepted (Huang et al. 2009), for products outside of Nelson's original list of products, researchers have disagreed on their categorizations. The Internet has contributed to blurring of the lines between search and experience goods by allowing consumers to read about the experiences of others, and to compare and share information at a low cost (Klein 1998; Weathers et al. 2007). Given that products can be described as existing along a continuum from pure search to pure experience, we took care to avoid products that fell too close to the center and were therefore too difficult to classify.

We identify an experience good as one in which it is relatively difficult and costly to obtain information on product quality prior to interaction with the product; key attributes are subjective and difficult to compare, and there is a need to use one's senses to evaluate quality. We selected three goods that fit these qualifications well: a music CD, an MP3 player, and a video game. These are also typical of experience goods as classified in previous studies (see Bhattacharjee et al. 2006; Bragge and Storgårds 2007; Nelson 1970; Weathers et al. 2007). Purchase decisions on music are highly personal, based on issues more related to subjective taste than measurable attributes. It is difficult to judge the quality of a melody without hearing it. Even seeing the song's musical notes on a page would not be adequate for judging its overall quality. An MP3 player has several objective, functional features such as storage capacity, size, and weight that can be judged prior to purchase. However, the MP3 player we chose (the iPod) is widely regarded as being popular more due to its image and style than its functionality. Evaluation of the iPod rests heavily on interacting with the product and hearing its sound quality. A video game can also be described with some technical specifications, but the real test of quality is whether the game is entertaining and engaging. The entertainment quality is a subjective judgment that requires playing the game.

We define a search good as one in which it is relatively easy to obtain information on product quality prior to interaction with the product; key attributes are objective and easy to compare, and there is no strong need to use one's senses to evaluate quality. We found three goods that fit these qualifications: a digital camera, a cell phone, and a laser printer. Like our experience goods, these are representative of search goods used in previous research (see Bei et al. 2004; Nelson 1970; Weathers et al. 2007). The product descriptions on Amazon.com heavily emphasized functional features and benefits. Comparison tables and bullet points highlighted objective attributes. Digital cameras were compared on their image resolution (megapixels), display size, and level of optical zoom. Key cell phone attributes included hours of talk time, product dimensions, and network compatibility. Laser printers were compared on print resolution, print speed, and maximum sheet capacity. There is also an assumption that the products take some time to learn how to use, so a quick sampling or trial of the product is not perceived to be a good way of evaluating quality. Although using the product prior to purchase may be helpful, it is not as essential to assess the quality of the key attributes.

For each product, we obtained all of the posted reviews, for a total of 1,608 reviews. Each web page containing the set of reviews for a particular product was parsed to remove the HTML formatting from the text and then transformed into an XML file that separated the data into records (the review) and fields (the data in each review). We collected the following data:

(1) The star rating (1 to 5) the reviewer gave the product.
(2) The total number of people that voted in response to the question, "Was this review helpful to you (yes/no)?"
(3) The number of people who voted that the review was helpful.
(4) The word count of the review.

We excluded from the analysis reviews that did not have anyone vote whether the review was helpful or not. This led us to eliminate 21 reviews, or 1.3 percent of the total, resulting in a data set of 1,587 reviews of the 6 products.
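The paper does not show the parser itself; as a minimal illustration of the field-extraction step, the sketch below pulls the four collected fields out of one review, assuming the helpfulness line follows the "26 of 31 people found the following review helpful" wording quoted earlier. The record layout, regular expressions, and names are hypothetical stand-ins, not the authors' code.

```python
import re

# Hypothetical example of one review's text fields after HTML stripping.
raw_review = {
    "stars_text": "4.0 out of 5 stars",
    "helpful_text": "26 of 31 people found the following review helpful",
    "body": "Great camera for the price. The 7.1 megapixel sensor works well in daylight.",
}

def extract_fields(review):
    """Pull the four fields collected in the study from one parsed review."""
    rating = float(re.match(r"([\d.]+) out of 5 stars", review["stars_text"]).group(1))
    helpful, total = map(int, re.match(r"(\d+) of (\d+) people", review["helpful_text"]).groups())
    return {
        "rating": rating,                            # star rating, 1 to 5
        "helpful_votes": helpful,                    # votes saying the review was helpful
        "total_votes": total,                        # total helpfulness votes cast
        "word_count": len(review["body"].split()),   # review depth
    }

print(extract_fields(raw_review))
```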

Table 1. Products Used in the Study

Product        | Description                              | Type       | Sources
MP3 player     | 5th generation iPod (Apple)              | Experience | Weathers et al. 2007
Music CD       | "Loose" by Nelly Furtado                 | Experience | Bhattacharjee et al. 2006; Nelson 1970; Weathers et al. 2007
PC video game  | "The Sims: Nightlife" by Electronic Arts | Experience | Bragge and Storgårds 2007
Cell phone     | RAZR V3 by Motorola                      | Search     | Bei et al. 2004
Digital camera | PowerShot A620 from Canon                | Search     | Nelson 1970
Laser printer  | HP 1012 by Hewlett-Packard               | Search     | Weathers et al. 2007

Variables

We were able to operationalize the variables of our model using the Amazon data set. The dependent variable is helpfulness, measured by the percentage of people who found the review helpful (Helpfulness %). This was derived by dividing the number of people who voted that the review was helpful by the total votes in response to the "Was this review helpful to you?" question (Total Votes).

The explanatory variables are review extremity, review depth, and product type. Review extremity is measured as the star rating of the review (Rating). Review depth is measured by the number of words of the review (Word Count). Both of these measures are taken directly from the Amazon data for each review. Product type (Product Type) is coded as a binary variable, with a value of 0 for search goods and 1 for experience goods.

We included the total number of votes on each review's helpfulness (Total Votes) as a control variable. Since the dependent variable is a percentage, this could hide some potentially important information. For example, "5 out of 10 people found the review helpful" may have a different interpretation than "50 out of 100 people found the review helpful."

The dependent variable is a measure of helpfulness as obtained from Amazon.com. For each review, Amazon asks the question, "Was this review helpful?" with the option of responding "yes" or "no." We aggregated the dichotomous responses and calculated the proportion of "yes" votes to the total votes cast on helpfulness. The resulting dependent variable is a percentage limited to values from 0 to 100.

The descriptive statistics for the variables in the full data set are included in Table 2, and a comparison of the descriptive statistics for the search and experience goods subsamples is included in Table 3. The average review is positive, with an average star rating of 3.99. On average, about 63 percent of those who voted on a particular review's helpfulness found that review to be helpful in making a purchase decision. This indicates that although people tend to find the reviews helpful, a sizable number do not.

Analysis Method

We used Tobit regression to analyze the model due to the nature of our dependent variable (helpfulness) and the censored nature of the sample. The variable is bounded in its range because the response is limited at the extremes. Consumers may either vote the review helpful or unhelpful; there is no way to be more extreme in their assessment. For example, they cannot vote the review "essential" (better than helpful) or "damaging" (worse than unhelpful) to the purchase decision process. A second reason to use Tobit is the potential selection bias inherent in this type of sample. Amazon does not indicate the number of persons who read the review. They provided only the number of total votes on a review and how many of those voted the review was helpful. Since it is unlikely that all readers of the review voted on helpfulness, there is a potential selection problem. According to Kennedy (1994), if the probability of being included in the sample is correlated with an explanatory variable, the OLS and GLS estimates can be biased.

There are several reasons to believe these correlations may exist. First, people may be more inclined to vote on extreme reviews, since these are more likely to generate an opinion from the reader. Following similar reasoning, people may also be more likely to vote on reviews that are longer because the additional content has more potential to generate a reaction from the reader. Even the number of votes may be correlated with likelihood to vote due to a "bandwagon" effect.

Table 2. Descriptive Statistics for Full Sample

Variable      | Mean   | SD     | N
Rating        | 3.99   | 1.33   | 1587
Word Count    | 186.63 | 206.43 | 1587
Total Votes   | 15.18  | 45.56  | 1587
Helpfulness % | 63.17  | 32.32  | 1587

Table 3. Descriptive Statistics and Comparison of Means for Subsamples

Variable          | Search (N = 559) Mean (SD) | Experience (N = 1028) Mean (SD) | p-value
Rating            | 3.86 (1.422)               | 4.06 (1.277)                    | 0.027†
Word Count        | 173.16 (181.948)           | 193.95 (218.338)                | 0.043*
Helpfulness Votes | 18.05 (27.344)             | 13.61 (52.841)                  | 0.064†
Helpfulness %     | 68.06 (29.803)             | 60.52 (33.321)                  | 0.000†

†Using the Mann-Whitney test
*Using the t-test
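The variable construction and the comparisons in Tables 2 and 3 can be reproduced with a few lines of pandas and SciPy. The sketch below uses a hypothetical toy DataFrame in place of the 1,587-review data set; the column names are illustrative, not taken from the study's materials.

```python
import pandas as pd
from scipy import stats

# Hypothetical toy data standing in for the parsed review records.
df = pd.DataFrame({
    "rating":        [5, 3, 1, 4, 2, 5],
    "helpful_votes": [26, 9, 2, 40, 3, 7],
    "total_votes":   [31, 12, 8, 55, 9, 10],
    "word_count":    [120, 310, 45, 200, 80, 150],
    "product_type":  ["experience", "experience", "experience",
                      "search", "search", "search"],
})

# Dependent variable: percentage of voters who found the review helpful.
df["helpfulness_pct"] = 100 * df["helpful_votes"] / df["total_votes"]

# Product type coded as a binary dummy: 0 = search good, 1 = experience good.
df["experience"] = (df["product_type"] == "experience").astype(int)

# Full-sample descriptive statistics (cf. Table 2).
print(df[["rating", "word_count", "total_votes", "helpfulness_pct"]].describe())

# Subsample comparison (cf. Table 3): Mann-Whitney for Rating, t-test for
# Word Count, matching the test noted for each row of the table.
search = df[df["experience"] == 0]
experience = df[df["experience"] == 1]
print(stats.mannwhitneyu(search["rating"], experience["rating"]))
print(stats.ttest_ind(search["word_count"], experience["word_count"]))
```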

The censored nature of the sample and the potential selection problem indicate a limited dependent variable. Therefore, we used Tobit regression to analyze the data, and measured goodness of fit with the likelihood ratio and Efron's pseudo R-square value (Long 1997).

In H1, we hypothesized that product type moderates the effect of review extremity on the helpfulness of the review. We expect that for experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings. Therefore, we expect a nonlinear relationship between the rating and helpfulness, modeled by including the star rating as both a linear term (Rating) and a quadratic term (Rating²). We expect the linear term to be positive and the quadratic term to be negative, indicating an inverted U-shaped relationship, implying that extreme reviews will be less helpful than moderate reviews. Because we believe that the relationship between rating and helpfulness changes depending on the product type, we include interaction terms between rating and product type.

We include word count to test H2, that review depth has a positive effect on the helpfulness of the review. In H3, we expect that product type moderates the effect of review depth on the helpfulness of the review. To test H3, we include an interaction term for word count and product type. We expect that review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods. The resulting model is²

Helpfulness % = β1 Rating + β2 Rating² + β3 Product Type + β4 Word Count + β5 Total Votes + β6 Rating × Product Type + β7 Rating² × Product Type + β8 Word Count × Product Type + ε

² We thank the anonymous reviewers for their suggestions for additional effects to include in our model. Specifically, it was suggested that we investigate the potential interaction of Rating and Word Count, and model the influence of Total Votes as a quadratic. When we included Rating × Word Count in our model, we found that it was not significant, nor did it meaningfully affect the level of significance or the direction of the parameter estimates. Total Votes² was significant (p < 0.0001), but it also did not affect the level of significance or the direction of the other parameter estimates. Therefore, we left those terms out of our final model.

Results

The results of the regression analysis are included in Table 4. The analysis of the model indicates a good fit, with a highly significant likelihood ratio (p = 0.000), and an Efron's pseudo R-square value of 0.402.³

³ As a robustness check, we reran our analysis using an ordinary linear regression model. We found similar results. That is, the ordinary regression model did not meaningfully affect the level of significance or the direction of the parameter estimates.
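As a rough sketch of the specification above, the following fits it with the statsmodels formula interface, assuming a DataFrame built as in the earlier sketches with the full 1,587-review data (hypothetical column names). It uses OLS only because footnote 3 reports that an ordinary linear regression gave similar results; the paper's estimator is a Tobit with the dependent variable censored at 0 and 100, which this sketch does not implement.

```python
import statsmodels.formula.api as smf

# Full-sample specification: linear and quadratic rating terms, word count,
# total votes as a control, and interactions of Rating, Rating^2, and Word
# Count with the experience-good dummy (cf. the equation and Table 4).
model = smf.ols(
    "helpfulness_pct ~ rating + I(rating**2) + experience + word_count"
    " + total_votes + rating:experience + I(rating**2):experience"
    " + word_count:experience",
    data=df,
).fit()
print(model.summary())
```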

Table 4. Regression Output for Full Sample

Variable                  | Coefficient | Standard Error | t-value
(Constant)                | 61.941      | 9.79           | 5.305
Rating                    | -6.704      | 7.166          | -0.935
Rating²                   | 2.126       | 1.118          | 1.901
Word Count                | 0.067       | 0.010          | 6.936
Product Type              | -45.626     | 12.506         | -3.648
Total Votes               | -0.0375     | 0.022          | -1.697
Rating × Product Type     | 32.174      | 9.021          | 3.567
Rating² × Product Type    | -5.057      | 1.400          | -3.613
Word Count × Product Type | -0.024      | 0.011          | -2.120

Likelihood Ratio = 205.56 (p = 0.000, df = 8, N = 1,587)
Efron's R² = 0.402

To test Hypothesis 1, we examined the interaction of rating and product type. Rating × Product Type (p < 0.000) and Rating² × Product Type (p < 0.000) were statistically significant. Product type moderates the effect of review extremity on the helpfulness of the review. To further examine this relationship, we split the data into two subsamples, search goods and experience goods. This is because in the presence of the interaction effects, the main effects are more difficult to interpret. The output from these two regressions is included in Tables 5 and 6.

For experience goods, there is a significant relationship between both Rating (p < 0.000) and Rating² (p < 0.001) and helpfulness. The positive coefficient for Rating and the negative coefficient for Rating² also indicates our hypothesized "inverted-U" relationship. For experience goods, reviews with extremely high or low star ratings are associated with lower levels of helpfulness than reviews with moderate star ratings. For search goods (Table 6), rating does not have a significant relationship with helpfulness, while Rating² does (p = 0.04). Therefore, we find support for H1. Product type moderates the effect of review extremity on the helpfulness of the review.

In H2, we hypothesize a positive effect of review depth on the helpfulness of the review. We find strong support for Hypothesis 2. For both search and experience products, review depth has a positive, significant effect on helpfulness. Word count is a highly significant (p < 0.000) predictor of helpfulness in both the experience good subsample (Table 5) and in the search good subsample (Table 6).

The results also provide strong support for H3, which hypothesizes that the product type moderates the effect of review depth on the helpfulness of the review. This support is indicated by the significant interaction term Word Count × Product Type (p < 0.034) in the full model (Table 4). The negative coefficient for the interaction term indicates that review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods. A summary of the results of all the hypothesis tests is provided in Table 7.

Discussion

Two insights emerge from the results of this study. The first is that product type, specifically whether the product is a search or experience good, is important in understanding what makes a review helpful to consumers. We found that moderate reviews are more helpful than extreme reviews (whether they are strongly positive or negative) for experience goods, but not for search goods. Further, lengthier reviews generally increase the helpfulness of the review, but this effect is greater for search goods than experience goods.

As with any study, there are several limitations that present opportunities for future research. Although our sample of six consumer products is sufficiently diverse to support our findings, our findings are strictly generalizable only to those products. Future studies could sample a larger set of products in order to confirm that our results hold. For example, including different brands within the same product category would allow for an analysis of the potentially moderating effect of brand perception.

Second, the generalizability of our findings is limited to those consumers who rate reviews. We do not know whether those reviews would be as helpful (or unhelpful) to those who do not vote on reviews at all. Future studies could survey a more general cross-section of consumers to determine if our findings remain consistent.

Table 5. Regression Output for Experience Goods

Variable    | Coefficient | Standard Error | t-value | Sig.
(Constant)  | 5.633       | 8.486          | 0.664   | 0.507
Rating      | 25.969      | 5.961          | 4.357   | 0.000
Rating²     | -2.988      | 0.916          | -3.253  | 0.001
Word Count  | 0.043       | 0.006          | 6.732   | 0.000
Total Votes | -0.028      | 0.026          | -1.096  | 0.273

Likelihood Ratio = 98.87 (p = 0.000, df = 4, N = 1,028)
Efron's R² = 0.361

Table 6. Regression Output for Search Goods

Variable    | Coefficient | Standard Error | t-value | Sig.
(Constant)  | 52.623      | 8.365          | 6.291   | 0.000
Rating      | -6.040      | 6.088          | -0.992  | 0.321
Rating²     | 1.954       | 0.950          | 2.057   | 0.040
Word Count  | 0.067       | 0.008          | 8.044   | 0.000
Total Votes | -0.106      | 0.053          | -2.006  | 0.045
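Under the same assumptions as the earlier sketches, the subsample estimates reported in Tables 5 and 6 correspond to re-fitting the reduced specification (without the product-type terms) separately for each good type, for example:

```python
import statsmodels.formula.api as smf

# Subsample regressions (cf. Tables 5 and 6): the reduced specification,
# estimated separately for experience and search goods. As before, OLS is
# only a stand-in for the paper's Tobit estimator.
for name, subsample in df.groupby("product_type"):
    fit = smf.ols(
        "helpfulness_pct ~ rating + I(rating**2) + word_count + total_votes",
        data=subsample,
    ).fit()
    print(name, fit.params, sep="\n")
```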

Table 7. Summary of Findings

   | Description | Result
H1 | Product type moderates the effect of review extremity on the helpfulness of the review. For experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings. | Supported
H2 | Review depth has a positive effect on the helpfulness of the review. | Supported
H3 | The product type moderates the effect of review depth on the helpfulness of the review. Review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods. | Supported

A third limitation is that our measures for review extremity (star rating) and review depth (word count) are quantitative surrogates and not direct measures of these constructs. Using data from the Amazon.com site has the advantage of being a more objective, data-driven approach than alternative approaches relying on subjective interpretations. For example, subjectivity is required when determining whether comments or reviews are moderate, positive, or negative.

Still, qualitative analysis opens up several avenues for future research. One could analyze the text of the review and compare this to the star rating to determine how well the magnitude of the star rating matches with the review's content. In addition, one could use qualitative analysis to develop a more direct measure to create a more nuanced differentiation between moderate reviews and extreme reviews, as well as to develop a measure of review depth. For example, this type of analysis could differentiate a three-star review that contains conflicting but extreme statements from a three-star review that contains multiple moderate statements.

Qualitative analysis could also be used to obtain additional quantitative data that can be incorporated into future studies. Pavlou and Dimoka (2006) performed a content analysis of comments regarding sellers on eBay and found that this yielded insights beyond what could be explained using purely quantitative measures. The analysis of the model indicates a good fit, with a highly significant likelihood ratio and a high Efron's pseudo R-square value, yet there are additional components of review helpfulness unaccounted for in this study. These data could be operationalized as new quantitative variables that could extend the regression model developed in this paper.

Finally, our regression model could also be extended to include other possible antecedents of review helpfulness, such as reviewer characteristics. This may be particularly relevant since review helpfulness is a subjective assessment, and could be influenced by the perceived credibility of the reviewer. Future studies could apply the search/experience paradigm to whether the reviewer's identity is disclosed (Forman et al. 2008) and the reviewer's status within the site (i.e., Amazon's "top reviewer" designations).

Conclusions

This study contributes to both theory and practice. By building on the foundation of the economics of information, we provide a theoretical framework to understand the context of online reviews. Through the application of the paradigm of search and experience goods (Nelson 1970), we offer a conceptualization of what contributes to the perceived helpfulness of an online review in the multistage consumer decision process. The type of product (search or experience good) affects information search and evaluation by consumers. We show that the type of product moderates the effect of review extremity and depth on the helpfulness of a review. We ground the commonly used measure of helpfulness in theory by linking it to the concept of information diagnosticity (Jiang and Benbasat 2004). As a result, our findings help extend the literature on information diagnosticity within the context of online reviews. We find that review extremity and review length have differing effects on the information diagnosticity of that review, depending on product type.

Specifically, this study provides new insights on the conflicting findings of previous research regarding extreme reviews. Overall, extremely negative reviews are viewed as less helpful than moderate reviews, but product type matters. For experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings, but this effect was not seen for search goods. Extreme reviews for experience products may be seen as less credible. Although we didn't specifically examine the relative helpfulness of negative versus positive reviews, future studies could address this question of asymmetry perceptions.

Our study provides an interesting contrast to the finding by Forman et al. (2008) that moderate book reviews are less helpful than extreme book reviews. In contrast, we found that for experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings, although this effect was not seen for search goods. Although books can be considered experience goods, they are a rather unique product category. Studies by Hu et al. (2008) and Li and Hitt (2008) look at the positive self-selection bias that occurs in early reviews for books. Our analysis of a wider range of experience and search goods indicates that additional insights can be gained by looking beyond one product type. Reviews and their effect on perceived helpfulness differ across product types.

We also found that length increases the diagnosticity of a search good review more than that of an experience good review. This is consistent with Nelson's (1970, 1974) classification of search and experience goods, in that it is easier to gather information on product quality for search goods prior to purchase. In the context of an online retailer, information comes in the form of a product review, and reviews of search goods lend themselves more easily to a textual description than do reviews of experience goods. For experience goods, sampling is required (Huang et al. 2009; Klein 1998). Additional length in the textual review cannot compensate or substitute for sampling.

This study also has implications for practitioner audiences. Previous research has shown that the mere presence of customer reviews on a website can improve customer perception of the website (Kumar and Benbasat 2006). Sites such as Amazon.com elicit customer reviews for several reasons, such as to serve as a mechanism to increase site "stickiness," and to create an information product that can be sold to other online retailers. Reviews that are perceived as helpful to customers have greater potential value to companies, including increased sales (Chen et al. 2008; Chevalier and Mayzlin 2006; Clemons et al. 2006; Ghose and Ipeirotis 2006).

Our study builds on these findings by exploring the antecedents of perceived quality of online customer reviews. Our findings can increase online retailers' understanding of the role online reviews play in the multiple stages of the consumer's purchase decision process. The results of this study can be used to develop guidelines for creating more valuable online reviews. For example, our results imply that online retailers should consider different guidelines for customer feedback, depending on whether that feedback is for a search good or an experience good. For a search good (such as a digital camera), customers could be encouraged to provide as much depth, or detail, as possible. For an experience good (such as a music CD), depth is important, but so is providing a moderate review. For these goods, customers should be encouraged to list both pros and cons for each product, as these reviews are the most helpful to that purchase decision. Reviewers can be incentivized to leave these moderate reviews. Currently, the "top reviewer" designation from Amazon is primarily determined as a function of helpfulness votes and the number of a reviewer's contributions. Qualitative assessments could also be used, such as whether the reviews present pros and cons, in rewarding reviewers.

Our study also shows that online retailers need not always fear negative reviews of their products. For experience goods, extremely negative reviews are viewed as less helpful than moderate reviews. For search goods, extremely negative reviews are less helpful than moderate and positive reviews. Overall, this paper contributes to the literature by introducing a conceptualization of the helpfulness of online consumer reviews, and grounding helpfulness in the theory of information economics. In practice, helpfulness is often viewed as a simple "yes/no" choice, but our findings provide evidence that it is also dependent upon the type of product being evaluated. As customer review sites become more widely used, our findings imply that it is important to recognize that consumers shopping for search goods and experience goods may make different information-consumption choices.

Acknowledgments

The authors would like to thank Panah Mosaferirad for his help with the collection of the data used in this paper.

References

Ba, S., and Pavlou, P. 2002. "Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premiums and Buyer Behavior," MIS Quarterly (26:3), pp. 243-268.
Bakos, J. 1997. "Reducing Buyer Search Costs: Implications for Electronic Marketplaces," Management Science (43:12), pp. 1676-1692.
Bei, L-T., Chen, E. Y. I., and Widdows, R. 2004. "Consumers' Online Information Search Behavior and the Phenomenon of Search and Experience Products," Journal of Family and Economic Issues (25:4), pp. 449-467.
Bhattacharjee, S., Gopal, R. D., Lertwachara, K., and Marsden, J. R. 2006. "Consumer Search and Retailer Strategies in the Presence of Online Music Sharing," Journal of Management Information Systems (23:1), pp. 129-159.
Bragge, J., and Storgårds, J. 2007. "Utilizing Text-Mining Tools to Enrich Traditional Literature Reviews. Case: Digital Games," in Proceedings of the 30th Information Systems Research Seminar in Scandinavia, Tampere, Finland, pp. 1-24.
Chen, P., Dhanasobhon, S., and Smith, M. 2008. "All Reviews Are Not Created Equal: The Disaggregate Impact of Reviews on Sales on Amazon.com," working paper, Carnegie Mellon University (available at SSRN: http://ssrn.com/abstract=918083).
Chen, P., Wu, S., and Yoon, J. 2004. "The Impact of Online Recommendations and Consumer Feedback on Sales," in Proceedings of the 25th International Conference on Information Systems, R. Agarwal, L. Kirsch, and J. I. DeGross (eds.), Washington, DC, December 12-14, pp. 711-724.
Chen, Y., and Xie, J. 2005. "Third-Party Product Review and Firm Marketing Strategy," Marketing Science (24:2), pp. 218-240.
Chevalier, J., and Mayzlin, D. 2006. "The Effect of Word of Mouth on Sales: Online Book Reviews," Journal of Marketing Research (43:3), pp. 345-354.
Clemons, E., Gao, G., and Hitt, L. 2006. "When Online Reviews Meet Hyperdifferentiation: A Study of the Craft Beer Industry," Journal of Management Information Systems (23:2), pp. 149-171.
Crowley, A. E., and Hoyer, W. D. 1994. "An Integrative Framework for Understanding Two-Sided Persuasion," Journal of Consumer Research (20:4), pp. 561-574.
Dabholkar, P. 2006. "Factors Influencing Consumer Choice of a 'Rating Web Site': An Experimental Investigation of an Online Interactive Decision Aid," Journal of Marketing Theory and Practice (14:4), pp. 259-273.
Eisend, M. 2006. "Two-Sided Advertising: A Meta-Analysis," International Journal of Research in Marketing (23), pp. 187-198.
Festinger, L. 1954. "A Theory of Social Comparison Processes," Human Relations (7), pp. 117-140.
Ford, G. T., Smith, D. B., and Swasy, J. L. 1990. "Consumer Skepticism of Advertising Claims: Testing Hypotheses from Economics of Information," Journal of Consumer Research (16:4), pp. 433-441.
Forman, C., Ghose, A., and Wiesenfeld, B. 2008. "Examining the Relationship Between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets," Information Systems Research (19:3), pp. 291-313.
Ghose, A., and Ipeirotis, P. 2006. "Designing Ranking Systems for Consumer Reviews: The Impact of Review Subjectivity on Product Sales and Review Quality," in Proceedings of the 16th Annual Workshop on Information Technology and Systems (available at http://pages.stern.nyu.edu/~panos/publications/wits2006.pdf).
Gretzel, U., and Fesenmaier, D. R. 2006. "Persuasion in Recommendation Systems," International Journal of Electronic Commerce (11:2), pp. 81-100.
Herr, P. M., Kardes, F. R., and Kim, J. 1991. "Effects of Word-of-Mouth and Product-Attribute Information on Persuasion: An Accessibility-Diagnosticity Perspective," Journal of Consumer Research (17:3), pp. 454-462.
Hu, N., Liu, L., and Zhang, J. 2008. "Do Online Reviews Affect Product Sales? The Role of Reviewer Characteristics and Temporal Effects," Information Technology and Management (9:3), pp. 201-214.
Huang, P., Lurie, N. H., and Mitra, S. 2009. "Searching for Experience on the Web: An Empirical Examination of Consumer Behavior for Search and Experience Goods," Journal of Marketing (73:2), pp. 55-69.
Hunt, J. M., and Smith, M. F. 1987. "The Persuasive Impact of Two-Sided Selling Appeals for an Unknown Brand Name," Journal of the Academy of Marketing Science (15:1), pp. 11-18.
Jiang, Z., and Benbasat, I. 2004. "Virtual Product Experience: Effects of Visual and Functional Control of Products on Perceived Diagnosticity and Flow in Electronic Shopping," Journal of Management Information Systems (21:3), pp. 111-147.
Jiang, Z., and Benbasat, I. 2007. "Investigating the Influence of the Functional Mechanisms of Online Product Presentations," Information Systems Research (18:4), pp. 221-244.
Johnson, E., and Payne, J. 1985. "Effort and Accuracy in Choice," Management Science (31:4), pp. 395-415.
Kaplan, K. J. 1972. "On the Ambivalence-Indifference Problem in Attitude Theory and Measurement: A Suggested Modification of the Semantic Differential Technique," Psychological Bulletin (77:5), pp. 361-372.
Kempf, D. S., and Smith, R. E. 1998. "Consumer Processing of Product Trial and the Influence of Prior Advertising: A Structural Modeling Approach," Journal of Marketing Research (35), pp. 325-337.
Kennedy, P. 1994. A Guide to Econometrics (3rd ed.), Oxford, England: Blackwell Publishers.
Klein, L. 1998. "Evaluating the Potential of Interactive Media Through a New Lens: Search Versus Experience Goods," Journal of Business Research (41:3), pp. 195-203.
Kohli, R., Devaraj, S., and Mahmood, M. A. 2004. "Understanding Determinants of Online Consumer Satisfaction: A Decision Process Perspective," Journal of Management Information Systems (21:1), pp. 115-135.
Kotler, P., and Keller, K. L. 2005. Marketing Management (12th ed.), Upper Saddle River, NJ: Prentice-Hall.
Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., and Carnot, C. G. 1993. "Attitude Strength: One Construct or Many Related Constructs?," Journal of Personality and Social Psychology (65:6), pp. 1132-1151.
Kumar, N., and Benbasat, I. 2006. "The Influence of Recommendations and Consumer Reviews on Evaluations of Websites," Information Systems Research (17:4), pp. 425-439.
Li, X., and Hitt, L. 2008. "Self-Selection and Information Role of Online Product Reviews," Information Systems Research (19:4), pp. 456-474.
Long, J. S. 1997. Regression Models for Categorical and Limited Dependent Variables, Thousand Oaks, CA: Sage Publications.
Mathwick, C., and Rigdon, E. 2004. "Play, Flow, and the Online Search Experience," Journal of Consumer Research (31:2), pp. 324-332.
Nelson, P. 1970. "Information and Consumer Behavior," Journal of Political Economy (78:2), pp. 311-329.
Nelson, P. 1974. "Advertising as Information," Journal of Political Economy (82:4), pp. 729-754.
Pavlou, P., and Dimoka, A. 2006. "The Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation," Information Systems Research (17:4), pp. 392-414.
Pavlou, P., and Fygenson, M. 2006. "Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior," MIS Quarterly (30:1), pp. 115-143.
Pavlou, P., and Gefen, D. 2004. "Building Effective Online Marketplaces with Institution-Based Trust," Information Systems Research (15:1), pp. 37-59.
Pavlou, P., Liang, H., and Xue, Y. 2007. "Understanding and Mitigating Uncertainty in Online Exchange Relationships: A Principal-Agent Perspective," MIS Quarterly (31:1), pp. 105-131.
Poston, R., and Speier, C. 2005. "Effective Use of Knowledge Management Systems: A Process Model of Content Ratings and Credibility Indicators," MIS Quarterly (29:2), pp. 221-244.
Presser, S., and Schuman, H. 1980. "The Measurement of a Middle Position in Attitude Surveys," Public Opinion Quarterly (44:1), pp. 70-85.
Schlosser, A. 2005. "Source Perceptions and the Persuasiveness of Internet Word-of-Mouth Communication," in Advances in Consumer Research (32), G. Menon and A. Rao (eds.), Duluth, MN: Association for Consumer Research, pp. 202-203.
Schwenk, C. 1986. "Information, Cognitive Biases, and Commitment to a Course of Action," Academy of Management Review (11:2), pp. 298-310.
Smith, D., Menon, S., and Sivakumar, K. 2005. "Online Peer and Editorial Recommendations, Trust, and Choice in Virtual Markets," Journal of Interactive Marketing (19:3), pp. 15-37.
Stigler, G. J. 1961. "The Economics of Information," Journal of Political Economy (69:3), pp. 213-225.
Todd, P., and Benbasat, I. 1992. "The Use of Information in Decision Making: An Experimental Investigation of the Impact of Computer-Based Decision Aids," MIS Quarterly (16:3), pp. 373-393.
Tversky, A., and Kahneman, D. 1974. "Judgment Under Uncertainty: Heuristics and Biases," Science (185:4157), pp. 1124-1131.

Weathers, D., Sharma, S., and Wood, S. L. 2007. "Effects of Online Communication Practices on Consumer Perceptions of Performance Uncertainty for Search and Experience Goods," Journal of Retailing (83:4), pp. 393-401.

About the Authors

Susan M. Mudambi is an associate professor of Marketing and Supply Chain Management, and an affiliated faculty member of the Department of Management Information Systems, in the Fox School of Business and Management at Temple University. Her research addresses strategic and policy issues in marketing, with special interests in the role of technology in marketing, international business, and business relationships. She has published more than a dozen articles, including articles in Journal of Product Innovation Management, Industrial Marketing Management, and Management International Review. She has a BA from Miami University, an MS from Cornell University, and a Ph.D. from the University of Warwick (UK).

David Schuff is an associate professor of Management Information Systems in the Fox School of Business and Management at Temple University. He holds a BA in Economics from the University of Pittsburgh, an MBA from Villanova University, an MS in Information Management from Arizona State University, and a Ph.D. in Business Administration from Arizona State University. His research interests include the application of information visualization to decision support systems, data warehousing, and the assessment of total cost of ownership. His work has been published in Decision Support Systems, Information & Management, Communications of the ACM, and Information Systems Journal.

Appendix A

Differences in Reviews of Search Versus Experience Goods

We observe that reviews with extreme ratings of experience goods often appear very subjective, sometimes go off on tangents, and can include sentiments that are unique or personal to the reviewer. Reviews with moderate ratings of experience goods have a more objective tone and reflect less idiosyncratic taste. In contrast, both extreme and moderate reviews of search goods often take an objective tone, refer to facts and measurable features, and discuss aspects of general concern. This leads us to our hypothesis (H1) that product type moderates the effect of review extremity on the helpfulness of the review. To demonstrate this, we have included four reviews from Amazon.com. We chose two of the product categories used in this study, one experience good (a music CD), and one search good (a digital camera). We selected one review with an extreme rating and one review with a moderate rating for each product.

The text of the reviews exemplifies the difference between moderate and extreme reviews for search and experience products. For example, the extreme review for the experience good takes a strongly taste-based tone ("the killer comeback R.E.M.'s long-suffering original fans have been hoping for...") while the moderate review uses more measured, objective language ("The album is by no means bad...But there are no classics here..."). The extreme review appears to be more of a personal reaction to the product than a careful consideration of its attributes.

For search goods, both reviews with extreme and moderate ratings refer to specific features of the product. The extreme review references product attributes ("battery life is excellent," "the flash is great"), as does the moderate review ("slow, slow, slow," "grainy images," "underpowered flash"). We learn information about the camera from both reviews, even though the reviewers have reached different conclusions about the product.


Experience Good: Music CD (R.E.M.'s Accelerate)

Excerpts from Extreme Review (5 stars):
This is it. This really is the one: the killer comeback R.E.M.'s long-suffering original fans have been hoping for since the band detoured into electronic introspection in 1998. Peter Buck's guitars are front and centre, driving the tracks rather than decorating their edges. Mike Mills can finally be heard again on bass and backups. Stipe's vocals are as rich and complex and scathing as ever, but for the first time in a decade he sounds like he believes every word...It's exuberant, angry, joyous, wild - everything the last three albums, for all their deep and subtle rewards, were not. Tight, rich and consummately professional, the immediate loose-and-live feel of "Accelerate" is deceptive.... Best of all, they sound like they're enjoying themselves again. And that joy is irresistible...

Excerpts from Moderate Review (3 stars):
There's no doubt that R.E.M. were feeling the pressure to get back to being a rock band after their past three releases. ...Peter Buck's guitar screams and shreds like it hasn't done in years and there is actually a drummer instead of a drum machine and looped beats...But you get the sense while listening to Accelerate, that Stipe and company were primarily concerned with rocking out and they let the songwriting take a back seat. The album is by no means bad. Several tracks are fast and furious as the title indicates. ...But there are no classics here, no songs that are going to return the band to the superstars they were in the 90's. And this album is not even close to their 80's output as some have suggested. ...While Accelerate is a solid rock record, it still ranks near the bottom of the bands canon...

Search Good: Digital Camera (Canon SD1100IS)

Excerpts from Extreme Review (5 stars):
...Cannons are excellent cameras. The only reason i decided to replace my older cannon was because I am getting married, going to Hawaii and I wanted something a little newer/better for the trip of a lifetime. ...Battery life is excellent. I had the camera for over a month, used it a lot (especially playing around with the new features) and finally just had to charge the battery... The flash is great - ...I only had red eyes in about half the shots (which for me is great)...There are a ton of different options as to how you can take your photo, (indoor, outdoor, beach, sunrise, color swap, fireworks, pets, kids etc etc)... Cannons never disappoint in my experience!

Excerpts from Moderate Review (3 stars):
Just bought the SD1100IS to replace a 1-1/2 year old Casio Exilim EX-Z850 that I broke.... But after receiving the camera and using it for a few weeks I am beginning to have my doubts.... 1. Slow, slow, slow - the setup time for every shot, particularly indoor shots, is really annoying...2. Grainy images on screen for any nightime indoor shots. 3. Severely under-powered flash - I thought I read some reviews that gave it an OK rating at 10 feet...try about 8 feet and you might be more accurate. Many flash shots were severely darkened by lack of light...and often the flash only covered a portion of the image leaving faces dark and everything else in the photo bright...
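
The qualitative contrast above can also be explored descriptively. The sketch below is illustrative only: the file name, the column names (star_rating, helpful_votes, total_votes, product_type), and the rating cut-offs used to label reviews as extreme or moderate are assumptions for this example, not the paper's coding scheme.

```python
# Illustrative sketch only: compare mean helpfulness of extreme vs. moderate
# reviews by product type. File name, column names, and rating cut-offs are
# hypothetical assumptions for this example.
import pandas as pd

reviews = pd.read_csv("reviews.csv")  # hypothetical input file
reviews = reviews[reviews["total_votes"] > 0].copy()
reviews["helpfulness"] = reviews["helpful_votes"] / reviews["total_votes"]

# Treat 1- and 5-star ratings as "extreme" and 2- to 4-star ratings as
# "moderate" (an assumed cut-off for illustration).
reviews["rating_group"] = reviews["star_rating"].apply(
    lambda s: "extreme" if s in (1, 5) else "moderate"
)

# Mean helpfulness and review counts for each product type / rating group cell.
summary = (
    reviews.groupby(["product_type", "rating_group"])["helpfulness"]
    .agg(["mean", "count"])
    .round(3)
)
print(summary)
```

A table of this form would show, for each product type, whether extreme reviews tend to receive lower helpfulness ratios than moderate reviews, which is the pattern the appendix examples illustrate qualitatively.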
