
Decision Analysis

Example
Consider the following problem with three decision
alternatives and three states of nature with the following
payoff table representing profits:
                 States of Nature
                  s1    s2    s3
            d1     4     4    -2
Decisions   d2     0     3    -1
            d3     1     5    -3

Which decision do you choose?

Problem Formulation
A decision problem is characterized by decision
alternatives, states of nature, and resulting payoffs.
The decision alternatives are the different possible
strategies the decision maker can employ.
The states of nature refer to future events, not under
the control of the decision maker, which may occur.
States of nature should be defined so that they are
mutually exclusive and collectively exhaustive.

Payoff Tables
The consequence resulting from a specific combination
of a decision alternative and a state of nature is a payoff.
A table showing payoffs for all combinations of decision
alternatives and states of nature is a payoff table.
Payoffs can be expressed in terms of profit, cost, time,
distance or any other appropriate measure.

Decision Making without Probabilities


Three commonly used criteria for decision making
when probability information regarding the likelihood
of the states of nature is unavailable are:
the optimistic approach
the conservative approach
the minimax regret approach.

Optimistic Approach
The optimistic approach would be used by an
optimistic decision maker.
The decision with the largest possible payoff is
chosen.
If the payoff table was in terms of costs, the decision
with the lowest cost would be chosen.

Example
Consider the following problem with three decision
alternatives and three states of nature with the following
payoff table representing profits:
                 States of Nature
                  s1    s2    s3
            d1     4     4    -2
Decisions   d2     0     3    -1
            d3     1     5    -3

Example: Optimistic Approach


An optimistic decision maker would use the
optimistic (maximax) approach. We choose the decision
that has the largest single value in the payoff table.

Decision   Maximum Payoff
   d1            4
   d2            3
   d3            5

Maximax decision: d3 (maximax payoff: 5)
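The maximax rule can be sketched in a few lines of Python (variable names like `best_payoff` are my own):

```python
# Maximax (optimistic) choice for the example payoff table.
payoffs = {
    "d1": [4, 4, -2],   # profits under s1, s2, s3
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Best possible payoff of each decision, then the decision with the largest best.
best_payoff = {d: max(row) for d, row in payoffs.items()}
maximax_decision = max(best_payoff, key=best_payoff.get)
print(best_payoff, maximax_decision)  # {'d1': 4, 'd2': 3, 'd3': 5} d3
```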

Conservative Approach
The conservative approach would be used by a
conservative decision maker.
For each decision the minimum payoff is listed and
then the decision corresponding to the maximum of
these minimum payoffs is selected. (Hence, the
minimum possible payoff is maximized.)
If the payoff was in terms of costs, the maximum costs
would be determined for each decision and then the
decision corresponding to the minimum of these
maximum costs is selected. (Hence, the maximum
possible cost is minimized.)

Example
Consider the following problem with three decision
alternatives and three states of nature with the following
payoff table representing profits:
                 States of Nature
                  s1    s2    s3
            d1     4     4    -2
Decisions   d2     0     3    -1
            d3     1     5    -3

Example: Conservative Approach


A conservative decision maker would use the
conservative (maximin) approach. List the minimum
payoff for each decision. Choose the decision with the
maximum of these minimum payoffs.

Decision   Minimum Payoff
   d1           -2
   d2           -1
   d3           -3

Maximin decision: d2 (maximin payoff: -1)
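The maximin rule is the mirror image of maximax; a minimal Python sketch:

```python
# Maximin (conservative) choice for the same payoff table.
payoffs = {"d1": [4, 4, -2], "d2": [0, 3, -1], "d3": [1, 5, -3]}

# Worst payoff of each decision, then the decision whose worst case is best.
worst_payoff = {d: min(row) for d, row in payoffs.items()}
maximin_decision = max(worst_payoff, key=worst_payoff.get)
print(worst_payoff, maximin_decision)  # {'d1': -2, 'd2': -1, 'd3': -3} d2
```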

Minimax Regret Approach


The minimax regret approach requires the construction of a regret (opportunity loss) table.
Regret: for each state of nature (column), the regret of a decision is the difference between the largest payoff in that column and that decision's payoff.
Regret is defined with respect to decisions, not states of nature.
Minimax regret:
Find the maximum regret for each decision.
Choose the decision with the smallest maximum regret.

Example
Consider the following problem with three decision
alternatives and three states of nature with the following
payoff table representing profits:
                 States of Nature
                  s1    s2    s3
            d1     4     4    -2
Decisions   d2     0     3    -1
            d3     1     5    -3

Example: Minimax Regret Approach

For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. In this example, in the first column subtract 4, 0, and 1 from 4; etc. The resulting regret table is:

       s1    s2    s3
d1      0     1     1
d2      4     2     0
d3      3     0     2

Example: Minimax Regret Approach

For each decision list the maximum regret. Choose the decision with the minimum of these values.

Decision   Maximum Regret
   d1             1
   d2             4
   d3             3

Minimax decision: d1 (minimax regret: 1)
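The two-step regret computation translates directly to code; this sketch rebuilds the regret table from the payoffs and then applies minimax:

```python
# Minimax regret for the same payoff table.
payoffs = {"d1": [4, 4, -2], "d2": [0, 3, -1], "d3": [1, 5, -3]}
n_states = 3

# Regret = column best minus the decision's payoff in that column.
col_best = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]
regret = {d: [col_best[j] - row[j] for j in range(n_states)]
          for d, row in payoffs.items()}

# Maximum regret per decision, then the decision minimizing it.
max_regret = {d: max(r) for d, r in regret.items()}
minimax_decision = min(max_regret, key=max_regret.get)
print(max_regret, minimax_decision)  # {'d1': 1, 'd2': 4, 'd3': 3} d1
```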

Decision Making with Probabilities

Decision Making with Probabilities


Expected Value Approach
If probabilistic information regarding the states of
nature is available, one may use the expected
value (EV) approach.
Here the expected return for each decision is
calculated by summing the products of the payoff
under each state of nature and the probability of
the respective state of nature occurring.
The decision yielding the best expected return is
chosen.

Expected Value of a Decision Alternative

The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
The expected value (EV) of decision alternative di is defined as:

    EV(di) = Σ(j = 1 to N) P(sj) Vij

where:
    N     = the number of states of nature
    P(sj) = the probability of state of nature sj
    Vij   = the payoff corresponding to decision alternative di and state of nature sj

Example: Burger Prince


Burger Prince Restaurant is considering opening a new
restaurant on Main Street. It has three different
models, each with a different seating capacity
(A=small, B=medium, C=large).
Burger Prince estimates that the average number of
customers per hour will be 80, 100, or 120, with
respective probabilities 0.4, 0.2, 0.4.

Payoff Table

                Average Number of Customers Per Hour
                s1 = 80    s2 = 100    s3 = 120
Model A          10,000      15,000      14,000
Model B           8,000      18,000      12,000
Model C           6,000      16,000      21,000
probabilities       0.4         0.2         0.4

Expected Value Approach


Calculate the expected value for each decision.

Expected value of each decision

                Average Number of Customers Per Hour
                s1 = 80    s2 = 100    s3 = 120
Model A          10,000      15,000      14,000    EV = 12,600
Model B           8,000      18,000      12,000    EV = 11,600
Model C           6,000      16,000      21,000    EV = 14,000
probabilities       0.4         0.2         0.4

E.g., EV for Model A = .4(10,000) + .2(15,000) + .4(14,000)
                     = 4,000 + 3,000 + 5,600 = 12,600.
Choose Model C: highest EV.
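The EV formula applied to the Burger Prince table, as a short Python sketch:

```python
# Expected value of each Burger Prince model (numbers from the payoff table).
probs = [0.4, 0.2, 0.4]  # P(80), P(100), P(120) customers per hour
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# EV(di) = sum over states of P(sj) * Vij
ev = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
best = max(ev, key=ev.get)
for d in payoffs:
    print(d, round(ev[d]))  # Model A 12600, Model B 11600, Model C 14000
print(best)  # Model C
```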

Expected Value with Decision Trees


The same calculation can be done with a decision tree
(next slide).
Here d1, d2, d3 represent the decision alternatives of
models A, B, C, and s1, s2, s3 represent the states of
nature of 80, 100, and 120.

Decision Trees
A decision tree is a chronological
representation of the decision problem.
Branches leaving round nodes correspond to different states of nature.
Branches leaving square nodes correspond to different decision alternatives.
At the end of each limb of the tree (each leaf) is the
payoff from that series of branches.

Expected Value (EV) for Each Decision

Model A (d1): EV = .4(10,000) + .2(15,000) + .4(14,000) = 12,600
Model B (d2): EV = .4(8,000) + .2(18,000) + .4(12,000) = 11,600
Model C (d3): EV = .4(6,000) + .2(16,000) + .4(21,000) = 14,000

Choose the model with largest EV, Model C.

Sensitivity Analysis
Sensitivity analysis can be used to determine how
changes to the following inputs affect the recommended
decision alternative:
probabilities for the states of nature
values of the payoffs
If a small change in the value of one of the inputs causes
a change in the recommended decision alternative, extra
effort and care should be taken in estimating the input
value.
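One concrete way to run such a sensitivity check (my own illustration, not a calculation from the slides): hold P(100 customers) fixed at 0.2 and vary p1 = P(80), with P(120) = 0.8 - p1, to see where the recommended model changes.

```python
# Sensitivity sketch: vary one state probability and watch the recommendation.
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

def best_model(p1, p2=0.2):
    """Recommended model when P(80) = p1, P(100) = p2, P(120) = 1 - p1 - p2."""
    probs = [p1, p2, 1 - p1 - p2]
    ev = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
    return max(ev, key=ev.get)

print(best_model(0.4))  # Model C -- the base case
print(best_model(0.6))  # Model A -- the recommendation flips for large p1
```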

Expected Value of Perfect Information


You can often (at a cost) obtain further information
which can improve the probability estimates for the
states of nature (e.g., do market research).
The expected value of perfect information (EVPI) is
the increase in the expected profit that would result if
one knew with certainty which state of nature would
occur.
The EVPI provides an upper bound on the expected
value of further information (which is normally less than perfect).

Expected Value of Perfect Information


EVPI Calculation
Determine the best possible return corresponding to
each state of nature.
Compute the expected value of these optimal returns
(what you'd expect to get if you were omniscient).
Compare with the EV of the optimal decision
(knowing what you know now).
The difference between these EVs is the expected
value of perfect information. (It is the [expected]
value added by the information itself).

EVPI example
                Average Number of Customers Per Hour
                s1 = 80    s2 = 100    s3 = 120
Model A          10,000      15,000      14,000    EV = 12,600
Model B           8,000      18,000      12,000    EV = 11,600
Model C           6,000      16,000      21,000    EV = 14,000
probabilities       0.4         0.2         0.4

EV if omniscient = .4(10,000) + .2(18,000) + .4(21,000) = 16,000
EV of Model C (best alternative) = 14,000
Expected value of perfect information = 16,000 - 14,000 = 2,000
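The EVPI calculation above, sketched in Python (picking each column's best payoff, then subtracting the best prior-information EV):

```python
# EVPI = EV with perfect information - EV of the best decision now.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# If omniscient, for each state you'd pick that state's best payoff.
ev_perfect = sum(p * max(row[j] for row in payoffs.values())
                 for j, p in enumerate(probs))
# Best you can do without extra information: the highest-EV decision.
ev_best_now = max(sum(p * v for p, v in zip(probs, row))
                  for row in payoffs.values())
evpi = ev_perfect - ev_best_now
print(round(ev_perfect), round(ev_best_now), round(evpi))  # 16000 14000 2000
```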

EVPI example
EV if omniscient = .4(10,000)+.2(18,000)+.4(21,000) = 16,000
EV of Model C (best alternative) = 14,000
Expected value of perfect information = 2,000
If it cost 3,000 to do a study to clarify whether you were most likely to get 80,
100, or 120 customers/hour, would the study be worthwhile?
- No. Even perfect information could only add an expected value of 2,000.
What if it cost 1,000?
- Maybe. Yes, if your information would be very good; no, if it doesn't
improve your probability estimates enough to justify the cost.

Expected Value of Sample Information

The expected value of sample information (EVSI) is the additional expected
profit possible through knowledge of the sample or survey information.
Similar to the expected value of perfect information (EVPI), only not perfect. ;)

Expected Value of Sample Information


EVSI Calculation
Determine the optimal decision and its expected return for each possible
outcome of the sample, using the posterior probabilities for the states of nature.
Compute the expected value of these optimal returns.
Subtract the EV of the optimal decision based on the information you have now.
This difference is the (expected) value of the (imperfect) information you could gain.
Like EVPI, but with sample information rather than omniscience.

Efficiency of Sample Information


Efficiency of sample information is the ratio EVSI/EVPI.
As the EVPI provides an upper bound for the EVSI,
efficiency is always a number between 0 and 1.

Posterior Probabilities
Suppose you expect the survey to be favorable (high demand) with
probability 0.54, unfavorable w.p. 0.46.
Suppose the posterior probabilities are:
Favorable case:
    Pr( 80 | favorable) = 0.148
    Pr(100 | favorable) = 0.185
    Pr(120 | favorable) = 0.667
    Check: these sum to 1!

Unfavorable case:
    Pr( 80 | unfavorable) = 0.696
    Pr(100 | unfavorable) = 0.217
    Pr(120 | unfavorable) = 0.087
    Check: these sum to 1!

Check that the posteriors match the priors (0.4, 0.2, 0.4)
Pr(80) = Pr(favorable)*Pr(80 | favorable) + Pr(unfavorable)*Pr(80 | unfavorable)
= 0.54*0.148 + 0.46*0.696 = 0.40. Good!
(Otherwise you hold contradictory beliefs.) Check Pr(100), Pr(120) similarly.
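The consistency check is just the law of total probability; a short sketch that verifies all three priors at once:

```python
# Check that the posteriors average back to the priors (0.4, 0.2, 0.4).
p_fav, p_unfav = 0.54, 0.46
post_fav   = {80: 0.148, 100: 0.185, 120: 0.667}   # P(state | favorable)
post_unfav = {80: 0.696, 100: 0.217, 120: 0.087}   # P(state | unfavorable)

# Law of total probability: Pr(s) = Pr(fav)Pr(s|fav) + Pr(unfav)Pr(s|unfav).
prior = {s: p_fav * post_fav[s] + p_unfav * post_unfav[s] for s in post_fav}
print({s: round(p, 2) for s, p in prior.items()})  # {80: 0.4, 100: 0.2, 120: 0.4}
```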

Decision Tree
top half (case where survey is favorable)

[Figure: after a favorable survey result I1 (p = .54), a decision node offers
d1, d2, d3; each leads to a chance node with branches s1 (.148), s2 (.185),
s3 (.667) ending in the corresponding payoffs (d1: 10,000 / 15,000 / 14,000;
d2: 8,000 / 18,000 / 12,000; d3: 6,000 / 16,000 / 21,000).]

Decision Tree
bottom half (case where survey is unfavorable)

[Figure: after an unfavorable survey result I2 (p = .46), a decision node offers
d1, d2, d3; each leads to a chance node with branches s1 (.696), s2 (.217),
s3 (.087) ending in the corresponding payoffs (d1: 10,000 / 15,000 / 14,000;
d2: 8,000 / 18,000 / 12,000; d3: 6,000 / 16,000 / 21,000).]

Decision Tree

Favorable (I1, .54):
    d1: EV = .148(10,000) + .185(15,000) + .667(14,000) = 13,593
    d2: EV = .148(8,000)  + .185(18,000) + .667(12,000) = 12,518
    d3: EV = .148(6,000)  + .185(16,000) + .667(21,000) = 17,855  <-- best

Unfavorable (I2, .46):
    d1: EV = .696(10,000) + .217(15,000) + .087(14,000) = 11,433  <-- best
    d2: EV = .696(8,000)  + .217(18,000) + .087(12,000) = 10,554
    d3: EV = .696(6,000)  + .217(16,000) + .087(21,000) = 9,475

If the outcome of the survey is "favorable", choose C. If "unfavorable", choose A.

Decision Tree
If the outcome of the survey is "favorable", choose C. If "unfavorable", choose A.

Expected value with sample information =
.54(17,855) + .46(11,433) = 14,900.88

This is how much we expect to get if we do the survey, wait for the results, then
choose an alternative.

Without the survey, our best option was Model C. Recall that
EV of Model C = 14,000

This is how much we get if we choose an alternative without the survey.

Expected value of sample information:
EVSI = 14,900.88 - 14,000 = 900.88
Since this is less than the cost of the survey (1,000), the survey should not be
purchased.
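The whole EVSI calculation fits in a few lines (the helper `best_ev` is my own naming):

```python
# EVSI = expected value acting on the survey - EV of the best decision without it.
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
p_fav, p_unfav = 0.54, 0.46
post_fav, post_unfav = [0.148, 0.185, 0.667], [0.696, 0.217, 0.087]

def best_ev(probs):
    """EV of the best decision under the given state probabilities."""
    return max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())

ev_with_sample = p_fav * best_ev(post_fav) + p_unfav * best_ev(post_unfav)
evsi = ev_with_sample - best_ev([0.4, 0.2, 0.4])   # prior EV: Model C, 14,000
efficiency = evsi / 2_000                           # EVPI computed earlier
print(round(evsi, 2), round(efficiency, 4))  # 900.88 0.4504
```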

Efficiency of Sample Information


The efficiency of the survey:
EVSI/EVPI = 900.88/2,000 = .4504

The survey gives 45% of the extra value that perfect information would give.

Utility and multiple objectives

Utility: Risk Attitude

Risk averse (uRA)
Risk neutral (uRN)
Risk seeking (uRS)

[Figure: three utility curves on a 0-1 utility scale plotted against profit in
millions of pounds (about -5 to 15): uRA (risk averse, concave), uRN (risk
neutral, a straight line), uRS (risk seeking, convex).]

Utility
In general, utility (how much you care about something) is not linear.

You own a house worth 1M.
The chance it burns down is 1/2,000 in a year.
Insurance costs 1,000/year.
Is it worth your insuring it?

Expected value of insurance = 1M/2,000 - 1,000 = -500.
Not worth it.

But maybe your utilities are:
Utility of 1,000 = 0.01 (it won't change your life).
Utility of house = 100 (it's hugely important).
Expected utility of insurance = 100/2,000 - .01 = +0.04.
Worth it.
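The insurance example in code; the utilities (0.01 for 1,000 and 100 for the house) are the illustrative values from the text, not derived from anything:

```python
# Expected value vs. expected utility of insuring the house.
p_fire = 1 / 2_000          # yearly chance the house burns down
house_value = 1_000_000
premium = 1_000             # yearly cost of insurance

# In money: you recover the house's value with probability p_fire,
# but pay the premium for sure.
ev_insurance = p_fire * house_value - premium   # -500: not worth it

# The same comparison on the (illustrative) utility scale flips the answer.
u_premium, u_house = 0.01, 100
eu_insurance = p_fire * u_house - u_premium     # +0.04: worth it
```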

Multi-criteria decision analysis (MCDA)

[Value tree: the overall objective, long-term sustainability, splits into
profitability (short term, long term), size of business (market share, growth),
and flexibility.]

Developing Value Functions

[Figure: value function for market share, vMS: value from 0 to 100 plotted
against market share from 5% to 30%.]

Develop a value function (utility) for each attribute of concern.

Compute the value of each decision alternative, for each attribute.

[Figure: value function for market share, vMS: value from 0 to 100 plotted
against market share from 5% to 30%.]

Developing Value Functions

Value Function - Flexibility

Attribute: Degree of flexibility provided by the alternative       Score
Easy to diversify to similar product                                 100
Diversification is possible, but it requires some adaptation          60
Diversification is possible, but hard to implement                    40
Inflexible, very hard to diversify                                     0

Partial Performances

[Slide: repeats the market-share value function, the value tree (long-term
sustainability -> profitability, size of business, flexibility), and the
flexibility score table, showing how each alternative is scored on every
attribute.]

Defining Value Trade-Offs

Best strategy for getting new customers: a reduction from $2,000 mi to $500 mi
in profitability is compensated by an increase from 20% to 32% in market share.

[Figure: strategies S1 and S2 plotted on Max Profitability (profit over 2 years)
vs. Max Market Share (market share at the end of 2 years). Initially S1 is
preferred to S2; after the stated trade-off, S1 is indifferent to S2.]

Adapted from Keeney (2002), "Common Mistakes in Making Value Trade-offs",
Operations Research 50(6), 935-945.

Making Trade-Offs
Associate a swing weight with each attribute.

The value of a decision alternative is the sum of the utilities for each
attribute, weighted by the swing weights.
Suppose alternative A had 30% market share (value 100) and a flexibility
score of 40.
Suppose market share is 2 times as important to you as flexibility,
leading you to choose swing weights of 2 and 1.
The value of decision A would be 2*100 + 1*40 = 240, plus similar contributions
from the other attributes.
The best decision is the one with the largest weighted utility.
If uncertainty is involved, the best decision is the one maximizing the
expected weighted utility.
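The weighted-sum step, with the numbers from the slide's illustration (other attributes omitted):

```python
# Weighted-sum value with swing weights: market share counts twice as much
# as flexibility, per the example in the text.
weights  = {"market share": 2, "flexibility": 1}
scores_A = {"market share": 100, "flexibility": 40}   # alternative A's utilities

value_A = sum(weights[attr] * scores_A[attr] for attr in weights)
print(value_A)  # 240 (plus similar contributions from any other attributes)
```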

Evaluating Options (Making Trade-offs)

Overall Performances

[Figure: the value tree annotated with swing weights on its branches
(profitability 50%, split short term 34% / long term 66%; size of business 44%,
split market share 62% / growth 38%; flexibility 6%) and the resulting overall
scores for the three alternatives: 59, 47, 46 on a 0-100 scale.]

Decision and Risk Analysis

For more, consider OR435, Advanced Decision Sciences (Dr Montibeller):
multiple objectives, multiple decision-makers, intangibles, value trade-offs,
long time horizons, risk and uncertainty, difficulty of identifying good
alternatives.
Applications in nuclear waste disposal, UK DEFRA (Department of Environment,
Food and Rural Affairs), etc.

Other multi-criteria decision-making methods

In mathematical optimization, the goal is to maximize a function f subject to
constraints.
Suppose you wish to maximize two functions, f and g. Their maxima are likely
to occur at different points.
If you give relative weights a and b to the two attributes, you could
maximize a*f + b*g.
If f is more important to you, and you only want to maximize g if it results
in no loss in f, you could try maximizing f + 0.001*g, for example (this is
similar to goal programming in the book's next chapter).
You might try to put everything on a common scale: utility or (more
simple-mindedly) $$.

Other multi-criteria decision-making methods

In mathematical optimization, the goal is to maximize a function f subject to
constraints.
Suppose you wish to maximize two functions, f and g. Their maxima are likely
to occur at different points.
If there are just 2 attributes (2 dimensions), plot the efficient frontier
(the points maximizing f for a given g).
That is, discard points where you can do better in both f and g.
Pick the trade-off you like after seeing the possibilities.
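The "discard dominated points" step can be sketched as a Pareto filter; the sample (f, g) points below are made up for illustration:

```python
# Pareto (efficient-frontier) filter for two maximized attributes f and g:
# keep only points that no other point beats on both attributes.
points = [(1, 9), (3, 7), (2, 6), (5, 4), (4, 4), (6, 1)]  # (f, g) pairs

def dominated(p, q):
    """True if q is at least as good as p on both attributes and differs."""
    return q[0] >= p[0] and q[1] >= p[1] and q != p

frontier = [p for p in points if not any(dominated(p, q) for q in points)]
print(sorted(frontier))  # [(1, 9), (3, 7), (5, 4), (6, 1)]
```

Here (2, 6) is dropped because (3, 7) beats it on both attributes, and (4, 4) because (5, 4) does.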

Game theory
MA402: game theory
OR409: auctions and game theory
