
Chapter V

Measuring Price Sensitivity & Estimating Demand


5.1. Aims and Objectives

The aim of this chapter is to focus on tools to measure the price sensitivity of the customer; in other words, we will be trying to elicit the willingness to pay of the customer. While doing that, we will also be able to estimate demand.
The specific objectives are:

1. Understand the use of statistical tools and techniques to measure price sensitivity
2. Understand the use of statistical tools and techniques to estimate demand.

5.2. Introduction
Market researchers face two challenges as they provide market intelligence for managers.
First, they must meet managers' objectives with useful, valid results. Second, they have
to communicate those results effectively. Failure on either of these two points is fatal.
Many managers have limited experience with statistics and can be skeptical of or
intimidated by advanced methods like the ones discussed here. Unfortunately, simpler methods can be unrealistic or useless. In an ideal world, researchers could accurately
measure price sensitivity by manipulating prices in test markets and measuring changes
in demand. While scanner technology has made this sort of analysis more feasible than
ever before for many categories of consumer goods, these real world experiments face
crippling hurdles. Market forces do not remain constant for the duration of the
experiment: macroeconomic forces can alter demand; competitors change their prices
and /or promotions; buyers stock up to take advantage of lower prices; new products are
introduced. Therefore the challenge before any researcher is to manage a real-world scenario under experimental conditions; the trade-off lies in how much realism to sacrifice in favor of control.

Against the background of this need and these challenges, researchers have from time to time suggested various methods for pricing research. These methods fall into two basic categories: Historical Data Modeling and Purchase Simulations. While each method throws up its own challenges and methodological problems, it is important for the manager to choose the method that is most suitable for the problem at hand. For example, if the manager wishes to study the pricing behavior of an existing product, for which a lot of past data is available, historical data modeling might be the best; for new products, however, it is not possible to use this method.
Let us now focus a bit more on each of the methods.

5.3. Historical Data Modeling


The focus of this method is to use the data provided from business records. This is a
method that finds favour with several consumer goods companies since scanner data can
be obtained from the retailer and analyzed to reveal trends and price-quantity relationships. This helps a company having multiple brands in the same product category to check for cross-cannibalization (the effect of price changes in one brand on another brand of the same company), the effects of sales promotion (price and quantity discounts), and so on. Several services companies also use historical data modeling for
solving capacity related problems. For example, airlines use this method for solving yield
management problems. If an airline chooses to reduce prices to increase the number of
people flying, it can use this method to predict fairly accurately the number of customers who would buy at the reduced price. The advantage of this method is that it gives us fairly accurate price data to analyze. These data are available at a fairly low price, and therefore the cost of this research is quite low. However, the most important drawback of this method is that it does not take into account the reasons for purchase. Moreover, this method is useless for new products.

How does one use historical data to know price sensitivity? From a method perspective,
it is quite simple. We observe the price data over large periods and relate it to the
quantity purchased. Both these variables can be obtained from the retailer's computer. Based on this we can estimate the price sensitivity of the customer. For example, P&G connects its main computer to each of its retailers' computers. Whenever a customer purchases a P&G product, the transaction data is uploaded to P&G's main computer. These transaction data are then collected over a long period of time (anything between 2 and 4 years). A simple regression analysis will then tell us the price-quantity relationship.
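As a minimal sketch of what this regression looks like in practice, suppose the scanner data have already been assembled into a table of weekly prices and units sold for one brand. The Python code below is illustrative only; the file name ("scanner_data.csv") and column names are hypothetical. A log-log specification is convenient because the price coefficient can then be read directly as the price elasticity.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical scanner data: one row per week, columns "price" and "units_sold".
data = pd.read_csv("scanner_data.csv")

# Log-log specification: the slope on log(price) is the price elasticity of demand.
X = sm.add_constant(np.log(data["price"]))
y = np.log(data["units_sold"])

model = sm.OLS(y, X).fit()
elasticity = model.params["price"]   # % change in quantity for a 1% change in price

print(model.summary())
print(f"Estimated price elasticity: {elasticity:.2f}")

An elasticity more negative than -1, for instance, would indicate that customers in this category are quite price sensitive.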

The above example shows the limitations of this method. Firstly, the basic assumption that it is only price that determines the quantity purchased may not be true. Secondly, this method does not capture the customer differences that may influence price sensitivity. For example, are women more price sensitive than men? Do purchase occasions influence price sensitivity? Some companies take care of this problem by tying up with the retailer's loyalty programme. When a customer keys in the loyalty card details, the data can be used to profile customers on their price sensitivity. In fact, some companies even sponsor the loyalty programme of the retailer to get such data. Thirdly, this method cannot be used for new products. One has to distinguish between new brands of existing products and new-to-the-market products. For example, one can use Historical Data Modeling for new brands of soap or detergent. But for new-to-the-market products like Sony's AIBO, this method cannot be used. Finally, this method relies on the assumption that the past is a good reflection of the future. So, while the method can be used to evaluate past pricing decisions, one needs to understand current consumer behavior to decide future pricing!

5.4. Purchase Simulations
This involves the creation of artificial market simulations. The main advantage of this
method is that it is more controllable than historical data modeling. Also, the context of the pricing decision is taken into account: customers respond to the price of a product in a certain context of time, space, brand, etc. So, if we are able to create
the context, the responses are more meaningful. In the historical data modeling, this rich
data gets lost simply because it cannot be captured. However, this would also mean that
we need to create an artificial setting for obtaining price information. This might make
the entire study artificial, if the researcher is not careful!

Basically, price simulations have two kinds of measures, the first being explicit measures, where the respondents are questioned directly. Typically, we use the Price Ladder, the Rotated Monadic Design, and the Van Westendorp Price Sensitivity Measurement as the
explicit measures. In the explicit measures, the customer is given a description of the
product and then asked about their willingness to pay. These methods are simple to
implement but data quality is suspect. Customers generally overestimate their
willingness to pay!
In the second kind of simulation, called the derived measures, the specific technique used
is the conjoint analysis. Here the respondents are questioned indirectly. The context of
the pricing decision by the customer is taken into account here and price is viewed as a
part of the entire evaluation of a product. The obvious advantage is the richness of data; the disadvantage is the increased cost and complexity of the research.

Therefore, the utility of the above two measures depends on the cost, complexity, output,
quality and context in which the studies are made. Since price simulations are widely
used even by companies not having sufficient IT infrastructure, subsequent discussions
will elaborate some more on this method.

Explicit Measures: As mentioned earlier, the explicit measures include the price ladder, the rotated monadic design, and the Van Westendorp Price Sensitivity Measurement.
o Price Ladder: The methodology of price ladder is simple. The respondent
is exposed to a product at a set price. He then is asked to give a purchase
likelihood rating at that price. The price is then changed and then the
customer is again asked to give the likelihood rating. This new price is
dependent on whether the customer gave a positive likelihood or a negative
likelihood in the first place. For example, suppose the customer gave a positive purchase likelihood in the first response. The new price is then set higher than the first price. This would go on till the purchase likelihood changes from positive to negative. That is the point at which we can set the willingness to pay of the customer. In case the initial response is negative, the new price is set below the first price, and so on. The advantage of this method is
that it is very easy to administer. As managers, we are not very particular
about the exactness of the measure, but more likely focus on ballpark
measures of Price Sensitivity to help us take decisions. The biggest
disadvantage of this method is that the quality of data is poor. It is not very easy for customers to answer the question on purchase likelihood on a dichotomous scale (purchase/not purchase). If, instead, we use a Likert scale, it is difficult to translate the purchase consideration scale into actual purchase behavior (see the box for an example). The other problem with the price ladder method is that it is limited to a fixed product scenario. In the example given in the box, one could only talk of washing machines with those specific features. (A small sketch of the laddering logic follows the example box.)

Tell me more
Example of the Price Ladder
Step 1: Give details of a product on a placard as follows
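The laddering logic itself can be expressed in a few lines of code. The sketch below is a minimal illustration with a fixed step size; the function ask_purchase_likelihood stands in for the interviewer's question to the respondent and, like the starting price and step, is purely a hypothetical placeholder.

def run_price_ladder(start_price, step, ask_purchase_likelihood, max_rounds=10):
    """Walk the price up (or down) until the stated purchase likelihood flips."""
    price = start_price
    first_answer = ask_purchase_likelihood(price)   # True = would purchase
    direction = 1 if first_answer else -1           # raise price after a "yes", lower it after a "no"
    for _ in range(max_rounds):
        next_price = price + direction * step
        if ask_purchase_likelihood(next_price) != first_answer:
            # The answer flipped between `price` and `next_price`, so the
            # willingness to pay lies somewhere in this interval.
            return (price, next_price) if direction == 1 else (next_price, price)
        price = next_price
    return (price, None)   # the answer never flipped within max_rounds

# Hypothetical usage: simulate a respondent whose true willingness to pay is $340.
simulated_respondent = lambda p: p <= 340
print(run_price_ladder(start_price=300, step=25, ask_purchase_likelihood=simulated_respondent))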

o The Rotated Monadic Design: One problem with the price ladder method is what is called maturity. What this means is that once a respondent is exposed to one price, this influences his response to any future price combination shown to him, which would in turn contaminate the data. To avoid this problem, one can use the rotated monadic design. Here the respondents are divided into groups, each group is shown one price, and their purchase likelihood is observed. The advantage of this method over the price ladder is that the data here are not contaminated, since context effects are eliminated. Each group is given only one price, and the relationship between price and purchase likelihood can be found across the groups. However, since each group is exposed to only one price, the number of such groups needs to be high, and therefore the sample size needs to be much higher than that required for the price ladder. The disadvantages of the price ladder, such as translating a purchase preference scale into actual behavior and being product specific, still persist. Like the price ladder method, this is easy to execute; the cost, however, is higher than that of the price ladder. (A small tabulation sketch follows the example below.)

Tell me more
Example:
Step 1: Give details of a product on a placard as follows, to a group of respondents having income of $4000 to $5000 per month.
Consider the following washing machine specifications
Manufacturer: Samsung
Type: Automatic
Capacity: 4 Kg
Warranty: 1 year
Step 2: Now ask the respondent the following question
How likely would you be to purchase this washing machine for $300?
1. Definitely would not purchase
2. Probably would not purchase
3. Probably would purchase
4. Definitely would purchase
Step 3: Give details of a product on a placard as follows, to a group of respondents having income of $4000 to $4500 per month.
Consider the following washing machine specifications
Manufacturer: BPL
Type: Automatic
Capacity: 4 Kg
Warranty: 1 year
Step 4: Now ask the respondent the following question
How likely would you be to purchase this washing machine for $400?
1. Definitely would not purchase
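Once the group responses are collected, the analysis of a rotated monadic design is a simple tabulation of purchase likelihood by price, since each group saw exactly one price. The sketch below is a minimal illustration; the data frame, the ratings, and the choice to treat ratings of 3 or 4 as "would purchase" are all assumptions made for the example.

import pandas as pd

# Hypothetical responses: each row is one respondent; each respondent saw one price.
responses = pd.DataFrame({
    "price":      [300, 300, 300, 400, 400, 400],
    "likelihood": [4,   3,   2,   3,   2,   1],   # 1 = definitely would not ... 4 = definitely would
})

# Treat ratings of 3 or 4 as "would purchase" and compute the share at each price point.
responses["would_buy"] = responses["likelihood"] >= 3
demand_curve = responses.groupby("price")["would_buy"].mean()
print(demand_curve)   # share of likely purchasers at $300 versus $400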

o Van Westendorp Price Sensitivity Measurement: The difference between the above methods and this one is that instead of giving a price to the customer and then checking the willingness to pay, here we try to elicit the price from the customer. We get a series of prices reflecting customer willingness to pay, which can then be plotted to find the product's optimum price. The advantage of the Van Westendorp PSM is that, like the price ladder, the questionnaire is easy to design and simple to administer. The sample size required is also small, and the method is therefore relatively inexpensive. The output gives a range of prices suitable for a single product. However, this test has some major shortcomings. These include some basic assumptions about the points at which the product/service becomes acceptable or unacceptable. Moreover, it ignores the competitive environment, and the test's external validity is suspect; no academic studies have been made to test it. An example of this method is given in Annexure 5.1.

The steps of this method are as follows. First, give the respondent details of the product on a placard. The second step is to ask four questions:

1. At what price would you consider this washing machine to be so expensive that you would not consider buying it? (Referred to as "too expensive")
2. At what price would you consider this washing machine to be priced so low that you would begin to doubt its quality? (Referred to as "too cheap")
3. At what price would you consider this washing machine to start getting expensive, but still a possible purchase? (Referred to as "expensive")
4. At what price would you consider this washing machine to be a bargain, a great buy for the money? (Referred to as "cheap")
So, essentially we will get 4 distributions, one for each of the 4 questions, namely, too expensive, too cheap, expensive, and cheap. Additionally, we can derive two more distributions. If a group of people give a price in answer to question 3 (i.e., consider it expensive), the total sample minus this group would consider that price not expensive. For example, if my sample size is 100, and for a price of 300 USD, 30 customers answered question 3 as the price being expensive, then there are 70 people (100 - 30) who consider it not expensive. Similarly, we can get a "not cheap" distribution from question 4. Once we plot these on a graph, we can get the price schedule for the product. The graph will look something like Figure 5.1 below. This graph has 6 distributions:
1. Too Expensive: derived from Question 1
2. Too Cheap: derived from Question 2
3. Expensive: derived from Question 3
4. Cheap: derived from Question 4
5. Not Expensive: derived by subtracting the number of respondents answering Question 3 from the total sample
6. Not Cheap: derived by subtracting the number of respondents answering Question 4 from the total sample

Finally, interpret as follows.

1. The intersection of "expensive" and "cheap" denotes the Indifference Price Point (IDP). This is the point at which most people consider the product neither expensive nor cheap; they are indifferent.
2. The intersection of "too expensive" and "too cheap" denotes the Optimum Price Point (OPP). This is the point at which the fewest people reject the product because of price.
3. The intersection of "too cheap" and "not cheap" defines the Point of Marginal Cheapness (PMC). This is the point where an increase in price moves the product from being cheap to not cheap.
4. The intersection of "too expensive" and "not expensive" defines the Point of Marginal Expensiveness (PME). This is the point where a decrease in price moves the product from being too expensive to not expensive.
5. Finally, define the Range of Acceptable Prices (RAP) as lying between the PMC and the PME. The product normally should be priced in this range. (A computational sketch of these intersection points is given below.)
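The curves and intersection points described above can be computed directly from the four answer distributions. The sketch below is a minimal illustration: the five respondents' answers and the price grid are hypothetical, and the crossing rule (the first price at which the falling curve meets or drops below the rising curve) is just one simple way of locating the intersections.

import numpy as np

# Hypothetical answers, one value per respondent, for the four questions.
too_cheap     = np.array([10, 15, 20, 20, 25])   # Question 2
cheap         = np.array([20, 25, 30, 30, 35])   # Question 4 ("bargain")
expensive     = np.array([40, 45, 50, 55, 60])   # Question 3
too_expensive = np.array([60, 65, 70, 75, 80])   # Question 1

prices = np.arange(10, 91, 5)   # candidate price grid

# Cumulative shares: "cheap"/"too cheap" fall as price rises; "expensive"/"too expensive" rise.
curve_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])
curve_cheap         = np.array([(cheap >= p).mean() for p in prices])
curve_expensive     = np.array([(expensive <= p).mean() for p in prices])
curve_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])
curve_not_cheap     = 1 - curve_cheap
curve_not_expensive = 1 - curve_expensive

def first_crossing(prices, falling, rising):
    # First price at which the falling curve meets or drops below the rising curve.
    return prices[np.argmax(falling - rising <= 0)]

idp = first_crossing(prices, curve_cheap, curve_expensive)              # Indifference Price Point
opp = first_crossing(prices, curve_too_cheap, curve_too_expensive)      # Optimum Price Point
pmc = first_crossing(prices, curve_too_cheap, curve_not_cheap)          # Point of Marginal Cheapness
pme = first_crossing(prices, curve_not_expensive, curve_too_expensive)  # Point of Marginal Expensiveness

print(f"IDP={idp}, OPP={opp}, acceptable price range: {pmc} to {pme}")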

As indicated, this is a much richer output than the other methods. Now let us
focus on the derived measures
Derived or Implicit Measures: The major statistical tool used here is conjoint analysis. Conjoint analysis is a simple yet powerful tool, which provides
results that are easy for managers to embrace and understand. It is perhaps one of the
fastest growing and one of the most widely used market research techniques today.

Conjoint aims at greater realism, grounds attributes in concrete descriptions, and results in greater discrimination between attribute importances. Conjoint measurement can be employed for a wide variety of tasks, such as new product development, price sensitivity analysis, and the preparation of marketing and advertising strategies.

The decision to buy is based on a complex mix of factors; the buyer is faced with the
need to trade-off desirable and less desirable product features (for example high
performance vs. high price). Conjoint Measurement can model this decision-making
process in order to single out those product features which most strongly influence the
purchase decision, and identify purchasing thresholds (for example price steps) or the
optimal price for the product or service. The greatest advantage of conjoint analysis is that it can help answer questions about hypothetical scenarios, for example, "What if my main competitor cuts his price by 10%?" This application of conjoint measurement offers management and marketing departments a reliable guide for decision-making.

A precondition for the proper use of conjoint measurement is that the product/service
has definable characteristics which are relevant to the purchase decision, at least one
of which can be varied. Furthermore, conjoint measurement can only work well if the
product or service options can be explained in clear and unambiguous terms and
realistic images, enabling the respondent to make clear choices. The sample size required is also large (always >75). When there is too much heterogeneity, the sample size should be much larger (around 300). This becomes especially important if a behavioral segmentation of the respondents is required. Essentially there
are 3 types of conjoint analysis. These are:
Adaptive Conjoint: Here the respondents are shown a subset of product attributes, which they trade off against one another. New tradeoffs are based on previous answers. This makes the task easy even if there are a large number of attributes, since the respondents consider 2-3 attributes at a time. The cost of this method is moderately high, and the complexity of the design and analysis is also moderately high. The richness of output is high since it assesses alternative product versions; the market simulator allows a wide range of what-if scenarios. The data quality is a little suspect since the method consistently underestimates the importance of price in the model.
Full Profile Conjoint: Respondents are shown complete products, which include all attributes. The respondent can evaluate each profile as a single concept or can treat the task like an adaptive conjoint. The complexity and cost of this method are very high, and the richness of output is also very high. (A small estimation sketch for the full-profile approach is given after this list.)
Choice Based Conjoint: The respondents are shown multiple packages of features of the same product. This method directly measures choice, not purchase consideration or preference. The process allows the respondents to choose "none of the above". The cost and complexity of this method are the highest since the required sample size is the highest. The output of this method is also the best.
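As a concrete illustration of the full-profile approach, a respondent's part-worths can be estimated by regressing his or her ratings on dummy-coded attribute levels. The sketch below is only a minimal example, not a complete conjoint study: the profiles, attribute levels, and ratings are hypothetical, and a real study would use an orthogonally designed set of profiles and many respondents.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical full-profile task: six washing-machine profiles rated by one respondent (1-9 scale).
profiles = pd.DataFrame({
    "brand":    ["Samsung", "Samsung", "BPL", "BPL", "Samsung", "BPL"],
    "capacity": ["4kg", "6kg", "4kg", "6kg", "6kg", "4kg"],
    "price":    [300, 400, 300, 400, 300, 400],
    "rating":   [7, 5, 6, 3, 8, 4],
})

# Part-worths are the coefficients on the dummy-coded attribute levels;
# the price coefficient shows how much utility one extra dollar "costs".
model = smf.ols("rating ~ C(brand) + C(capacity) + price", data=profiles).fit()
print(model.params)

# Dividing any attribute part-worth by the (absolute) price coefficient converts it
# into a monetary value, i.e. an implicit willingness to pay for that feature.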
5.5. Context of Pricing Research


This refers to the format and overall context of the pricing research. The respondents
are questioned in a simulated market condition. The researcher has to be extremely
careful in his/her research design so that errors do not creep in. The most important questions that the researcher has to ask are:
a. How to Present the Product to the Respondent? The complexity and costs
increase in an ascending order as follows:
i. Written Description
ii. Drawing or photograph
iii. Video
iv. Non-working prototype

v. Working Prototype
vi. Fully functional product
b. How realistic is the Context?
i. How do consumers make the purchase decision for the product?
ii. How do they best understand product cost? Is it based on total cost
or monthly payments?
iii. What are the alternatives that compete with the product? Is it a
standalone product or is it a component of a larger product (like a
car stereo)?
5.6. Comparison between the Methods


Researchers must be aware of the advantages and disadvantages of each of the methods enumerated above. The choice criteria are cost, complexity, richness of output, and data quality. Table 5.1 below compares the various methods on a 6-point scale. The researcher must choose the method depending on the issues involved.

Table 5.1: The rating of various methods of price sensitivity study based on the choice criteria (1 = best to 6 = worst)
[Table comparing the methods (Price Ladder, Rotated Monadic Design, and the other measures discussed above) on Cost, Complexity, Richness of Output, and Data Quality]

5.7. Demand Estimation using Research and Statistical Techniques


Marketing researchers employ qualitative and quantitative research techniques to understand consumer behaviour. Some of the more popular techniques are
consumer surveys, market experiments, and consumer clinics. Consumer surveys
involve gathering information about consumer behaviour from a sample of
consumers. The questions are designed to gather information on the sensitivity of
the consumer to price changes, on quantities that will be demanded at different
price levels, on the levels of awareness regarding a particular product and its
substitutes, the effectiveness of advertising, and other related aspects. This
information is collected from a representative sample of consumers and then
analyzed. The results are then projected onto the population. Surveys are conducted to assess consumers' perceptions of various aspects, such as new variations in products, new variations in services provided (such as online shopping), the location of a fast food joint or a book store, etc. The biggest drawback of using this method to estimate demand relationships is that the consumer is forced to respond to hypothetical situations. It may be difficult for the consumer to respond honestly and realistically in hypothetical situations. The reliability of such data is, therefore, suspect. To overcome this, the seller of a product introduces variations and
actually tries it out in a representative market, and gathers information on the
behavior of the consumers. These are called market experiments, wherein a product
is tested out in a representative market and the results are projected onto the
population. This method is very effective when one wants to gather information on
elasticity. This is a high-cost technique and a highly risky one also, because the
situation cannot be controlled fully. Another variant of the technique is the concept
of consumer clinics. The consumers are asked to act in a simulated situation,
wherein they are given some amount of money and made to indulge in buying, and
their behavior is observed. This again suffers from the drawback of surveys, that is,
the consumer may not be responding in a realistic manner. However, the context in
which the consumer is placed when in a consumer clinic is less removed from
reality than consumer surveys.

The drawback of relying on direct methods such as surveys (conducted through interviews and focus groups) and consumer clinics is that the data can be unrealistic. The alternative is to rely on historical data. These data are available from secondary sources: at the firm level, in trade and industry publications, or in government publications. In India, one can depend on the publications of the Centre for Monitoring Indian Economy, the Confederation of Indian Industries, and other such bodies. The set of techniques that uses this historical data and enables us to estimate demand relationships falls under the broad subject area of econometrics. The principal econometric technique for estimation is regression analysis.

5.8. Estimation using Regression Methods


The estimation of a demand function using regression or any other econometric
analysis involves the following steps:
1. Identification of variables
2. Collection of data
3. Mathematical specification of the relationship amongst the variables
4. Estimating the parameters of the model
5. Using these estimates to arrive at an estimate of demand
We will work with the requirement of a domestic automobile manufacturer, who
wishes to estimate the demand for small fuel-efficient cars in the domestic market
in India.
Step 1: Identification of Variables
We are required to identify the variables affecting the demand for small cars. The
following variables may be listed:
n1: The number of small families, defined as consisting of not more than four members
n2: Income of the household
n3: Price
n4: Price of competitors
n5: Advertising by domestic manufacturers
n6: Advertising by importers of foreign cars
n7: Interest rate on car loans
Step 2: Collecting Historical Data
The second step involves collecting past data on each of these variables. These
variables supposedly affect the demand for small cars, and are, therefore, referred
to as independent variables. The demand for small cars, the demand for which is the
subject matter of the study, is called the dependent variable because the demand for
cars is dependent on the variables listed above. Data on the cars sold (which is the
past data on demand), price, price of competitors, and advertisement expenses can
be collected from manufacturers and importers; data on interest rates can be
obtained from credit providers; and data on income and family size can be obtained from the government's databases. The number of data items is governed by the optimal sample size to be used; the larger, the better.
Step 3: Specification of the Model
The third important step is the specification of the model. The choice of specification depends on what the researcher intuitively feels the relationship between the independent variables and the dependent variable to be. More than one
specific form is tried out to arrive at the best possible form that captures the
relationship. The specification could be linear, logarithmic, exponential, or any
other type. However, the most common form of estimation is a linear relationship.
Let us deal with this form first. For the domestic auto manufacturers, a linear
relationship between dependent and independent variables would be specified as

Qd = a n1 + b n2 + c n3 + d n4 + e n5 + f n6 + g n7 + E

where:
Qd is the demand (past sales)
a, b, c, d, e, f, and g are parameters of the equation and need to be estimated
n1 to n7 are the variables defined in Step 1
E is the error term

How does this work? We can estimate the parameters from past data. Once we have
estimated the parameters we can insert the value of the variables and get the
demand.
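A minimal sketch of Steps 4 and 5 in Python is shown below, assuming the historical observations on Qd and n1 to n7 have been assembled in one table. The file name, column names, and scenario values are hypothetical; the point is only to show how the parameters are estimated and then used to forecast demand.

import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("small_car_history.csv")   # hypothetical columns: Qd, n1, n2, ..., n7

X = sm.add_constant(data[["n1", "n2", "n3", "n4", "n5", "n6", "n7"]])
y = data["Qd"]

# Step 4: estimate the parameters a to g (plus an intercept) by ordinary least squares.
fit = sm.OLS(y, X).fit()
print(fit.params)

# Step 5: plug expected future values of n1 to n7 into the fitted equation
# to obtain a demand forecast (the values below are purely illustrative).
scenario = pd.DataFrame([{"const": 1, "n1": 1.2e6, "n2": 45000, "n3": 6000,
                          "n4": 6500, "n5": 50, "n6": 30, "n7": 9.5}])
print(fit.predict(scenario))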
How do we interpret these estimates? Suppose the estimates turned out to be as follows:

QD = 215 + 1.1 n1 + 2 n2 + 0.5 n3 + 0.005 n4 + 2.5 n5 + 1.5 n6

What meaning do we attribute to these? The value of each estimate, which is also a coefficient, gives the amount by which QD will change for a unit change in the respective variable, given that all other variables remain unchanged. In calculus terminology, the coefficients are nothing but partial derivatives of the demand function. This means that each additional unit of every independent variable has a constant effect or impact on the dependent variable, regardless of the level at which the independent variables are. For example, in our equation, the impact of a unit change in household income (n2) on QD will be 2, regardless of the level of n2, be it 50 or 5,000. This is how we would interpret the coefficients of a linear equation; each coefficient is also a slope and is constant throughout.
5.9. Summary
Price Sensitivity Measures form the backbone of any marketing strategy. An idea of
the measure will help the manager to plan and price his products. The researcher
too is very interested in this because it will enable him to measure the elusive
willingness to pay aspect of the consumer. This chapter reviews the methods
available and their pros and cons. It is for the manager to choose among the various
methods depending on their objectives, cost constraints and time. The chapter also
focuses on the regression method of estimating demand.

Annexure 5.1.
Price Sensitivity Measurement by the Van Westendorp Method: An Empirical Example

Summary of Study Features: A survey was done in Bhubaneswar to find the willingness to pay. The study tried to
come as close as possible to the notion of willingness to pay:
1. Our questions were brand specific and for brands the respondents had purchased in the past.
2. To the extent that it was possible, we matched product and brand cues used in the questioning to
those present in the choice context (e.g., through the use of actual photographs of the products
tested).
3. The questioning took place during a purchase occasion to maximize the number of contextual cues
and to ensure the presence of normally available shopping knowledge.

Product Category Selection: The product categories for the survey were chosen so that they would represent high or low (but not medium) levels of involvement. I have chosen detergents and washing machines to represent the low and high involvement categories respectively.

Selection, Instrument and Interview Procedure: The interviews all took place in one hypermarket in Bhubaneswar,
the Bapuji Nagar Market. It was selected because it is the busiest market and both the products are available there.
The objective of our respondent selection was to interview a representative sample of regular purchasers of the
stores in this area. As soon as an interviewer became available he or she had to solicit the third person that entered
the store for participation in the interview. The interview was simply introduced as part of a study on consumer
products. To qualify for the interview, purchasers had to pass filter questions: they had to normally do their shopping themselves and have decision-making power for the products. 275 purchasers were interviewed in
the months of January and February 2002. Because I expected to find different types of purchasers at different times
of the day and the week, the interviews were scheduled such that we covered each relevant time slot (morning,
midday, evening; beginning of the week, normal weekday and weekend).
Products Tested
1. Product 1: A detergent of the following details
a. Manufacturer: Hindustan Lever Limited
b. Color: Green
c. Texture: Flaky
d. Packaging: Plastic Pouch
2. Product 2: A washing machine with the following details
a. Manufacturer: BPL
b. Type: Automatic
c. Capacity: 4 Kg
d. Warranty: 1 year

Results for Detergent:

The following are the responses for detergents.
Table 5.A.1
[Frequency of responses (out of 275 respondents) in each of the four categories - Too Cheap, Too Expensive, Expensive, Bargain - for prices from Rs. 10 to Rs. 90 in steps of Rs. 5; each category column totals 275]

Chart 5.1: VWPSM for Detergents
[Cumulative percentage (%) of respondents plotted against price for the Too Cheap, Too Expensive, Expensive, and Bargain distributions]

Analysis
Detergents: As can be seen from Chart 5.1, the optimal price for a new detergent is Rs. 50. In any case the price should lie between Rs. 40 (lower bound) and Rs. 65 (upper bound). This means any price below Rs. 40 will affect the brand's credibility in terms of quality, and any price above Rs. 65 will overprice the brand. It is now left to the manager's judgement as to how he prices the detergent.

Multiple Choice Questions

1. While doing price sensitivity research one has to keep in mind
a. The costs
b. The quality of data
c. The context of the research
d. All of the above
2. The major disadvantage of Historical Data Modeling is that it
a. Needs a lot of data
b. Is based on past data
c. Does not talk of customer behavior
d. All of the above
3. The context of a price sensitivity measure means
a. How the product is presented to the customer
b. How the researcher behaves
c. Why the manager is interested in the data
d. How the retailer prices the product
4. Explicit measures are more robust than implicit measures
a. True
b. False
5. The major disadvantage of the Van Westendorp method compared to conjoint is
a. Costs
b. Quality of data
c. Number of respondents
d. None of the above
6. The implicit measures are more robust because
a. Price is considered as one variable rather than the only issue
b. Advanced statistical analysis is done for results
c. The simulations are more realistic
d. All of the above
