
Decision Analysis

Scott Ferson, scott@ramas.com


25 September 2007, Stony Brook University, MAR 550, Challenger 165

Outline
Risk and uncertainty
Expected utility decisions
St. Petersburg game, Ellsberg Paradox

Decisions under uncertainty


Maximin, maximax, Hurwicz, minimax regret, etc.

Junk science and the precautionary principle


Decisions under ranked probabilities
Extreme expected payoffs

Decisions under imprecision


E-admissibility, maximality, Γ-maximin, Γ-maximax, etc.

Synopsis and conclusions

Decision theory
Formal process for evaluating possible actions
and making decisions
Statistical decision theory is decision theory
using statistical information
Knight (1921)
Decision under risk (probabilities known)
Decision under uncertainty (probabilities not known)

Discrete decision problem


Actions Ai (strategies, decisions, choices)
Scenarios Sj
Payoffs Xij for action Ai in scenario Sj
Probability Pj (if known) of scenario Sj
        S1    S2    S3   ...      Decision criterion
A1     X11   X12   X13   ...
A2     X21   X22   X23   ...
A3     X31   X32   X33   ...
 .       .     .     .
 .       .     .     .
        P1    P2    P3   ...

Decisions under risk


If you make many similar decisions, then you'll
perform best in the long run using
expected utility (EU) as the decision rule
EU = maximize expected utility (Pascal 1670)
Pick the action Ai so that Σj (Pj Xij) is biggest

Example
              Scenario 1   Scenario 2   Scenario 3   Scenario 4
Action A          10            5           15            5
Action B          20           10            0            5
Action C          10           10           20           15
Action D           0            5           60           25
Probability        .5          .25          .15           .1

EU(A) = 10(.5) +  5(.25) + 15(.15) +  5(.1) =  9
EU(B) = 20(.5) + 10(.25) +  0(.15) +  5(.1) = 13
EU(C) = 10(.5) + 10(.25) + 20(.15) + 15(.1) = 12
EU(D) =  0(.5) +  5(.25) + 60(.15) + 25(.1) = 12.75

Maximizing expected utility prefers action B
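As a minimal sketch (my own illustration, not from the slides), the expected-utility rule for this payoff matrix can be computed directly; the dictionary layout and helper name below are assumptions made for the example.

```python
# Expected-utility rule for the example payoff matrix (illustrative sketch)
payoffs = {
    "A": [10,  5, 15,  5],
    "B": [20, 10,  0,  5],
    "C": [10, 10, 20, 15],
    "D": [ 0,  5, 60, 25],
}
probs = [0.5, 0.25, 0.15, 0.1]   # scenario probabilities, sum to 1

def expected_utility(row, p):
    """Sum of probability-weighted payoffs for one action."""
    return sum(x * pj for x, pj in zip(row, p))

eu = {a: expected_utility(row, probs) for a, row in payoffs.items()}
print(eu)                     # {'A': 9.0, 'B': 13.0, 'C': 12.0, 'D': 12.75}
print(max(eu, key=eu.get))    # 'B' -- maximizing expected utility prefers action B
```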

Strategizing possible actions


Office printing

Repair old printer


Buy new HP printer
Buy new Lexmark printer
Outsource print jobs

Protecting Australia's marine resources

Undertake treaty to define marine reserve


Pay Australian fishing vessels not to exploit
Pay all fishing vessels not to exploit
Repel encroachments militarily
Support further research
Do nothing

Scenario development
Office printing
Printing needs stay about the same/decline/explode
Paper/ink/drum costs vary
Printer fails out of warranty

Protecting Australia's marine resources

Fishing varies in response to healthy diet ads/mercury scare


Poaching increases/decreases
Coastal fishing farms flourish/are decimated by viral disease
New longline fisheries adversely affect wild fish populations
International opinion fosters environmental cooperation
Chinese/Taiwanese tensions increase in areas near reserve

How do we get the probabilities?


Modeling, risk assessment, or prediction
Subjective assessment
Asserting A means you'll pay $1 if not-A
If the probability of A is P, then a Bayesian
agrees to assert A for a fee of $(1-P),
and to assert not-A for a fee of $P
Different people will have different Ps for an A

Rationality
Your probabilities must make sense
Coherent if your bets don't expose you to sure loss
i.e., a guaranteed loss no matter what the actual outcome is

Probabilities larger than one are incoherent


Dutch books are incoherent
Let P(X) denote the price of a promise to pay $1 if X
Setting P(Hillary OR Obama) to something other than the
sum P(Hillary) + P(Obama) is a Dutch book
If P(Hillary OR Obama) is smaller than the sum, someone could make
a sure profit by buying it from you and selling you the other two

St. Petersburg game

Pot starts at 1 cent
Pot doubles with every coin toss
Coin tossed until a tail appears
You win whatever's in the pot
What would you pay to play?

First tail on toss    Winnings ($)
 1                        0.01
 2                        0.02
 3                        0.04
 4                        0.08
 5                        0.16
 6                        0.32
 7                        0.64
 8                        1.28
 9                        2.56
10                        5.12
11                       10.24
12                       20.48
13                       40.96
14                       81.92
15                      163.84
 .                         .
 .                         .
28                1,342,177.28
29                2,684,354.56
30                5,368,709.12
 .                         .
 .                         .

What's a fair price?


The expected winnings would be a fair price
The chance of ending the game on the kth toss (i.e.,
the chance of getting k-1 heads in a row) is 1/2^k
If the game ends on the kth toss, the winnings would
be 2^(k-1) cents

EU = 1(1/2) + 2(1/4) + 4(1/8) + 8(1/16) + 16(1/32) + ...
   = 1/2 + 1/2 + 1/2 + 1/2 + 1/2 + ... = ∞

So you should be willing to pay any
price to play this game of chance
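A minimal sketch (my own, not from the slides): each extra allowed toss adds half a cent to the expected winnings, so a truncated version of the game has expected value n/2 cents, which grows without bound as the cap n increases. The function name and the cap are illustrative assumptions.

```python
# Expected winnings (in cents) of a St. Petersburg game cut off after max_tosses tosses.
def truncated_st_petersburg_ev(max_tosses):
    # P(first tail on toss k) = 1/2**k; winnings = 2**(k-1) cents
    return sum((0.5 ** k) * (2 ** (k - 1)) for k in range(1, max_tosses + 1))

for n in (10, 30, 100):
    print(n, truncated_st_petersburg_ev(n))   # 5.0, 15.0, 50.0 -- each term adds half a cent
```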

St. Petersburg paradox


The paradox is that nobody's gonna pay more
than a few cents to play
To see why, and for a good time, call
http://www.mathematik.com/Petersburg/Petersburg.html

No solution really resolves the paradox
Bankrolls are actually finite
Can't buy what's not sold
Diminishing marginal utility of money

Utilities
Payoffs needn't be in terms of money
Probably shouldn't be if the marginal value of
different amounts varies widely
Compare $10 for a child versus Bill Gates
A small profit may be a lot more valuable than the
amount of money it takes to cover a small loss

Use utilities in matrix instead of dollars

Risk aversion
EITHER get $50
OR get $100 if a randomly drawn ball is
red, from an urn with half red and half blue
balls
Which prize do you want?

$50
EU is the same, but most people take the sure $50


Ellsberg Paradox

Balls can be red, black or yellow (probabilities R, B, Y)
A well-mixed urn has 30 red balls and 60 other balls
Don't know how many are black, how many are yellow

Gamble A: Get $100 if draw red         Gamble B: Get $100 if draw black
  (preferring A to B says R > B)
Gamble C: Get $100 if red or yellow    Gamble D: Get $100 if black or yellow
  (preferring D to C says R + Y < B + Y)

Persistent paradox
Most people prefer A to B (so are saying
R > B) but also prefer D to C (saying R < B)
Doesn't depend on your utility function
Payoff size is irrelevant
Not related to risk aversion
Evidence for ambiguity aversion
Can't be accounted for by EU

Ambiguity aversion

Balls can be either red or blue


Two urns, both with 36 balls
Get $100 if a randomly drawn ball is red
Which urn do you wanna draw from?

Assumptions
Discrete decisions
Closed world: Σj Pj = 1
Analyst can come up with Ai, Sj, Xij, Pj
Ai and Sj are few in number
Xij are unidimensional
Ai not rewarded/punished beyond payoff
Picking Ai doesn't influence scenarios
Uncertainty about Xij is negligible

Why not use EU?

Clearly doesn't describe how people act
Needs a lot of information to use
Unsuitable for important unique decisions
Inappropriate if gambler's ruin is possible
Sometimes the Pj are inconsistent
Getting even subjective Pj can be difficult

Decisions under uncertainty

Decisions without probability

Pareto (some action dominates in all scenarios)


Maximin (largest minimum payoff)
Maximax (largest maximum payoff)
Hurwicz (largest average of min and max payoffs)
Minimax regret (smallest of maximum regret)
Bayes-Laplace (max EU assuming equiprobable scenarios)

Maximin
Cautious decision maker
Select Ai that gives the largest minimum payoff
(across Sj)
Important if gambler's ruin (e.g., extinction) is possible

              Scenario 1   Scenario 2   Scenario 3   Scenario 4    min
Action A          10            5           15            5          5
Action B          20           10            0            5          0
Action C          10           10           20           15         10
Action D           0            5           60           25          0

Chooses action C

Maximax

Optimistic decision maker


Loss-tolerant decision maker
Examine the max payoffs across Sj
Select Ai with the largest of these

              Scenario 1   Scenario 2   Scenario 3   Scenario 4    max
Action A          10            5           15            5         15
Action B          20           10            0            5         20
Action C          10           10           20           15         20
Action D           0            5           60           25         60

Prefers action D

Hurwicz

Compromise of maximin and maximax
Index of pessimism h, where 0 ≤ h ≤ 1
Average the min and max payoffs, weighted by
h and (1-h) respectively
Select Ai with the highest weighted average
If h = 1, it's maximin; if h = 0, it's maximax

              Scenario 1   Scenario 2   Scenario 3   Scenario 4    min   max
Action A          10            5           15            5          5    15
Action B          20           10            0            5          0    20
Action C          10           10           20           15         10    20
Action D           0            5           60           25          0    60

Favors D if h = 0.5

Minimax regret

Several competing decision makers
Regret Rij = (max Xij under Sj) - Xij
Replace Xij with regret Rij
Select Ai with the smallest maximum regret

Payoff matrix
      S1   S2   S3   S4
A     10    5   15    5
B     20   10    0    5
C     10   10   20   15
D      0    5   60   25
minuends (column maxima): (20) (10) (60) (25)

Regret matrix
      S1   S2   S3   S4   max regret
A     10    5   45   20       45
B      0    0   60   20       60
C     10    0   40   10       40
D     20    5    0    0       20

Favors action D

Bayes-Laplace
Assume all scenarios are equally likely
Use maximum expected value
Chris Rock's lottery investments

EU(A) = 10(.25) +  5(.25) + 15(.25) +  5(.25) =  8.75
EU(B) = 20(.25) + 10(.25) +  0(.25) +  5(.25) =  8.75
EU(C) = 10(.25) + 10(.25) + 20(.25) + 15(.25) = 13.75
EU(D) =  0(.25) +  5(.25) + 60(.25) + 25(.25) = 22.5

Prefers action D
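A minimal sketch (my own, not from the slides) computing the probability-free criteria introduced so far on the running example matrix; the function names are illustrative, and the Hurwicz call assumes h = 0.5 as on the Hurwicz slide.

```python
# Decision criteria that need no probabilities, applied to the running example matrix.
payoffs = {
    "A": [10,  5, 15,  5],
    "B": [20, 10,  0,  5],
    "C": [10, 10, 20, 15],
    "D": [ 0,  5, 60, 25],
}

def maximin(m):          # largest minimum payoff
    return max(m, key=lambda a: min(m[a]))

def maximax(m):          # largest maximum payoff
    return max(m, key=lambda a: max(m[a]))

def hurwicz(m, h):       # weighted average of min and max; h = index of pessimism
    return max(m, key=lambda a: h * min(m[a]) + (1 - h) * max(m[a]))

def minimax_regret(m):   # smallest maximum regret
    col_max = [max(col) for col in zip(*m.values())]
    regret = {a: [cm - x for cm, x in zip(col_max, row)] for a, row in m.items()}
    return min(regret, key=lambda a: max(regret[a]))

def bayes_laplace(m):    # maximize expected payoff with equiprobable scenarios
    return max(m, key=lambda a: sum(m[a]) / len(m[a]))

print(maximin(payoffs), maximax(payoffs), hurwicz(payoffs, 0.5),
      minimax_regret(payoffs), bayes_laplace(payoffs))   # C D D D D
```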

Pareto
Choose an action if it can't lose
Select Ai if its payoff is the biggest in every scenario Sj

[payoff matrix for this slide: action B's payoff is the largest under every scenario]

Chooses action B

Why not
Complete lack of knowledge of the Pj is rare
Except for Bayes-Laplace, the criteria
depend non-robustly on extreme payoffs
Intermediate payoffs may be more likely
than the extremes (especially when the extremes
don't differ much)

Junk science and the precautionary principle

Junk science (sensu Milloy)


Faulty scientific data and analysis used to further a special agenda
Myths and misinformation from scientists, regulators, attorneys, media, and
activists seeking money, fame or social change, that create alarm about
pesticides, global warming, second-hand smoke, radiation, etc.

Surely not all science is sound


Small sample sizes
Overreaching conclusions

Wishful thinking
Sensationalized reporting

But Milloy has a very narrow definition of science


"The scientific method must be followed or you will soon find yourself
heading into the junk science briar patch. The scientific method [is] just
the simple and common process of trial and error. A hypothesis is tested until
it is credible enough to be labeled a theory. Anecdotes aren't data.
Statistics aren't science." (http://www.junkscience.com/JSJ_Course/jsjudocourse/1.html)

Science is more

Hypothesis testing
Objectivity and repeatability
Specification and clarity
Coherence into theories
Promulgation of results
Full disclosure (biases, uncertainties)
Deduction and argumentation

Classical hypothesis testing


Alpha
probability of Type I error
i.e., accepting a false statement
strictly controlled, usually at the 0.05 level, so false
statements don't easily enter the scientific canon

Beta
probability of Type II error
i.e., rejecting a true statement
one minus the power of the test
traditionally left completely uncontrolled

Decision making
Balances the two kinds of error
Weights each kind of error with its cost
Not anti-scientific, but it has a much broader
perspective than simple hypothesis testing

Why is a balance needed?


Consider arriving at the train station 5 min
before, or 5 min after, your train leaves
Errors of identical magnitude
Grossly different costs

Decision theory = scientific way to make
optimal decisions given risks and costs

Statistics in the decision context


Estimation and inference problems can be
re-expressed as decision problems
Costs are determined by the use that will be
made of the statistic or inference
The question isn't just "whether" anymore,
it's "what are we gonna do"

It's not your father's statistics

Classical statistics
addresses the use of sample information to make
inferences which are, for the most part, made
without regard to the use to which they'll be put

Modern (Bayesian) statistics


combines sample information, plus knowledge of
the possible consequences of decisions, and prior
information, in order to make the best decision

Policy is not science


Policy making may be sound even if it does
not derive specifically from application of the
scientific method
The precautionary principle is a nonquantitative way of acknowledging the
differences in costs of the two kinds of errors

Precautionary principle (PP)


Better safe than sorry
Has entered the general discourse,
international treaties and conventions
Some managers have asked how to defend
against the precautionary principle (!)
Must it mean no risks, no progress?

Two essential elements


Uncertainty
Without uncertainty, what's at stake would be clear
and negotiation and trades could resolve disputes.

High costs or irreversible effects

Without high costs, there'd be no debate. It is these
costs that justify shifting the burden of proof.

Proposed guidelines for using PP


(Science 12 May 2000)

Transparency
Proportionality
Non-discrimination
Consistency
Explicit examination of the costs and
benefits of action or inaction
Review of scientific developments

But consistency is not essential


Managers shouldn't be bound by prior risky
decisions
Irrational risk choices are very common
e.g., driving cars versus pollutant risks

Different risks are tolerated differently


control
scale
fairness

Take-home messages
Guidelines (a lumper's version)
Be explicit about your decisions
Revisit the question with more data
Quantitative risk assessments can overrule the PP

Balancing errors and their costs is essential
for sound decisions

Ranked probabilities

Intermediate approach
Knight's division is awkward
Rare to know nothing about probabilities
But also rare to know them all precisely
We'd like to have some hybrid approach

Kmietowicz and Pearman (1981)

Criteria based on extreme expected payoffs
Can be computed if the probabilities can be ranked
Arrange scenarios so Pj ≥ Pj+1
Extremize partial averages of payoffs, e.g.

    maxj [ (Xi1 + Xi2 + ... + Xij) / j ]

Difficult example
              Scenario 1   Scenario 2   Scenario 3   Scenario 4
Action A         7.5           -5           15            9
Action B         5.5            9           -5           15

Neither action dominates the other
Min and max are the same, so maximin,
maximax and Hurwicz cannot distinguish them
Minimax regret and Bayes-Laplace favor A

If probabilities are ranked


Scenarios ordered from most likely to least likely

              Scenario 1   Scenario 2   Scenario 3   Scenario 4
Action A         7.5           -5           15            9
Action B         5.5            9           -5           15

Partial averages
Action A         7.5          1.25         5.83         6.62
Action B         5.5          7.25         3.17         6.12

Maximin chooses B since 3.17 > 1.25
Maximax slightly prefers A because 7.5 > 7.25
Hurwicz favors B except under strong optimism
Minimax regret still favors A somewhat (7.33 > 7)
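A minimal sketch (my own, not from the slides) of the partial-averaging step; the variable names are illustrative and the scenarios are assumed already ordered from most to least likely.

```python
# Partial averages of payoffs when scenarios are ranked from most to least likely
# (Kmietowicz and Pearman style criteria).
payoffs = {
    "A": [7.5, -5, 15, 9],   # scenarios already ordered so P1 >= P2 >= P3 >= P4
    "B": [5.5,  9, -5, 15],
}

def partial_averages(row):
    """Running means (Xi1 + ... + Xij) / j for j = 1..n."""
    out, total = [], 0.0
    for j, x in enumerate(row, start=1):
        total += x
        out.append(total / j)
    return out

pa = {a: partial_averages(row) for a, row in payoffs.items()}
print(pa["A"])   # [7.5, 1.25, 5.83..., 6.625]
print(pa["B"])   # [5.5, 7.25, 3.16..., 6.125]

# Maximin expected payoff: action whose worst partial average is largest
print(max(pa, key=lambda a: min(pa[a])))   # 'B'  (3.17 > 1.25)
# Maximax expected payoff: action whose best partial average is largest
print(max(pa, key=lambda a: max(pa[a])))   # 'A'  (7.5 > 7.25)
```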

Sometimes appropriate
Uses available information more fully than
criteria based on limiting payoffs
Better than decisions under uncertainty when
several decisions are made (even if actions,
scenarios, payoffs and rankings change)
the number of scenarios is large (because standard
methods ignore intermediate payoffs)

Extreme expected payoffs


Focusing on maximin expected payoff is
more conservative than traditional maximin
Focusing on maximax expected payoff is
more optimistic than the old maximax
Focusing on minimax expected regret will
have less regret than using minimax regret

Generality
Robust to revising scenario ranks
The selected action usually won't change under inversion
of ranks or the introduction of a new scenario

Can easily extend to intervals for payoffs


Max (min) expected values found by applying partial
averaging technique to all upper (lower) limits

When its useful


For difficult payoff matrices
When you can only rank scenarios
When multiple decisions must be made
When the number of scenarios is large
When facing the identical problem a small number
of times (up to 10 or so)

When you shouldnt use it


If probability ranks are rank guesses
If you actually know the risks
When you face the identical problem often
You should be able to estimate probabilities

Imprecise probabilities

Decision making under imprecision


State of the world is a random variable S taking values in a set S of scenarios
Payoff (reward) of an action depends on S
We identify an action a with its reward function fa : S → ℝ
i.e., fa is the action together with its entire row of the payoff matrix

We'd like to choose the decision with the largest
expected reward, but without precisely specifying
1) the probability measure governing scenarios S
2) the payoff from an action a under scenario S

Imprecision about probabilities


Subjective probability
Bayesian rational agents are compelled to either sell
or buy any bet, but rational agents could decline to bet
The interval probability for event A is the range between the
largest P such that, for a fee of $(1-P), you agree to
pay $1 if A doesn't occur, and the smallest Q such
that, for a fee of $Q, you agree to pay $1 if A occurs
(Lloyd Dobler)

Frequentist probability
Incertitude or other uncertainties in simulations may
preclude our getting a precise estimate of a frequency

Comparing actions a and b


Strictly preferred   a > b     Ep(fa) > Ep(fb) for all p ∈ M
Almost preferred     a ≥ b     Ep(fa) ≥ Ep(fb) for all p ∈ M
Indifferent          a ≈ b     Ep(fa) = Ep(fb) for all p ∈ M
Incomparable         a || b    Ep(fa) < Ep(fb) and Eq(fa) > Eq(fb) for some p, q ∈ M

where Ep(f) = Σs∈S p(s) f(s), and
M is the set of possible probability distributions
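A minimal sketch (my own, not from the slides) of these comparisons over a credal set M, here discretized as a finite grid of distributions; the function names are illustrative, and the two gambles anticipate the coin-toss example that follows.

```python
# Comparing two actions over a credal set M (discretized as a list of distributions).
def expectation(p, f):
    """Ep(f) = sum over states of p(s) * f(s)."""
    return sum(p[s] * f[s] for s in p)

def compare(fa, fb, M):
    """Classify the relation between actions a and b over all p in M."""
    diffs = [expectation(p, fa) - expectation(p, fb) for p in M]
    if all(d > 0 for d in diffs):
        return "a strictly preferred to b"
    if all(d == 0 for d in diffs):
        return "indifferent"
    if all(d >= 0 for d in diffs):
        return "a almost preferred to b"
    if any(d > 0 for d in diffs) and any(d < 0 for d in diffs):
        return "incomparable"
    return "b almost (or strictly) preferred to a"

# Coin-toss credal set with p(H) on a grid in [0.28, 0.7]
M = [{"H": i / 100, "T": 1 - i / 100} for i in range(28, 71)]
fa = {"H": 4, "T": 0}       # like gamble 1 in the coin example below
fb = {"H": 0, "T": 4}       # like gamble 2
print(compare(fa, fb, M))   # 'incomparable' -- neither dominates over all of M
```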

E-admissibility
Fix p in M and, assuming it's the correct
probability measure, see which decision
emerges as the one that maximizes EU
The result is then the set of all such decisions
for all p ∈ M

Alternative: maximality
Maximal decisions are undominated over all p
a is maximal if there's no b with Ep(fb) > Ep(fa) for all p ∈ M

Actions cannot be linearly ordered,
but only partially ordered

Another alternative: Γ-maximin

We could take the decision that maximizes the
worst-case expected reward
Essentially a worst-case optimization
Generalizes two criteria from traditional theory
Maximize expected utility
Maximin

Interval dominance

If E̲(fa) > E̅(fb) (the lower expectation of fa exceeds the
upper expectation of fb), then action b is inadmissible
because a interval-dominates b
[diagram: expectation intervals illustrating dominant, overlapping, and inadmissible actions]

Admissible actions are those that are not
inadmissible relative to any other action

Several IP decision criteria


Γ-maximin
maximal
Γ-maximax
E-admissible
interval dominance

Example
(after Troffaes 2004)

Suppose we are betting on a coin toss
Only know the probability of heads is in [0.28, 0.7]
Want to decide among seven available gambles
1: Pays 4 for heads, pays 0 for tails
2: Pays 0 for heads, pays 4 for tails
3: Pays 3 for heads, pays 2 for tails
4: Pays 0.5 for heads, pays 3 for tails
5: Pays 2.35 for heads, pays 2.35 for tails
6: Pays 4.1 for heads, pays -0.3 for tails
7: Pays 0.1 for heads, pays 0.1 for tails

Problem setup
p(H) ∈ [0.28, 0.7]        p(T) ∈ [0.3, 0.72]

f1(H) = 4        f1(T) = 0
f2(H) = 0        f2(T) = 4
f3(H) = 3        f3(T) = 2
f4(H) = 0.5      f4(T) = 3
f5(H) = 2.35     f5(T) = 2.35
f6(H) = 4.1      f6(T) = -0.3
f7(H) = 0.1      f7(T) = 0.1

M
M is all Bernoulli probability distributions (mass at
only two points, H and T) such that 0.28 ≤ p(H) ≤ 0.7

[diagram: points p = 0, 1/3, 2/3, 1 marked along the line of Bernoulli measures]

It's a (one-dimensional) space of probability measures


[plot: expected reward Ep(fa) of Actions 1-7 as a function of p(H), for p(H) from 0.2 to 0.8]

E-admissibility
Probability              Preference
p(H) < 2/5               2
p(H) = 2/5               2, 3 (indifferent)
2/5 < p(H) < 2/3         3
p(H) = 2/3               1, 3 (indifferent)
2/3 < p(H) ≤ 1           1

Criteria yield different answers


Γ-maximin             {5}
Γ-maximax             {2}
maximal               {1, 2, 3, 5}
E-admissible          {1, 2, 3}
interval dominance    {1, 2, 3, 5, 6}
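A minimal sketch (my own, not from the slides) that recomputes these answer sets for the coin-toss example by discretizing p(H) over [0.28, 0.7]; the grid, names, and layout are illustrative assumptions.

```python
# IP decision criteria for the coin-toss example, with M discretized over p(H) in [0.28, 0.7].
gambles = {
    1: (4.0, 0.0), 2: (0.0, 4.0), 3: (3.0, 2.0), 4: (0.5, 3.0),
    5: (2.35, 2.35), 6: (4.1, -0.3), 7: (0.1, 0.1),
}
grid = [i / 10000 for i in range(2800, 7001)]   # p(H) from 0.28 to 0.70

def E(g, p):                       # expected reward of gamble g when P(heads) = p
    h, t = gambles[g]
    return p * h + (1 - p) * t

lower = {g: min(E(g, p) for p in grid) for g in gambles}
upper = {g: max(E(g, p) for p in grid) for g in gambles}

gamma_maximin = [g for g in gambles if lower[g] == max(lower.values())]
gamma_maximax = [g for g in gambles if upper[g] == max(upper.values())]

# Maximality: a is maximal if no b has a strictly larger expectation for every p in M
maximal = [a for a in gambles
           if not any(all(E(b, p) > E(a, p) for p in grid)
                      for b in gambles if b != a)]

# E-admissibility: a is E-admissible if it maximizes expected reward for some p in M
e_admissible = sorted({max(gambles, key=lambda g: E(g, p)) for p in grid})

# Interval dominance: b is inadmissible if some a has lower[a] > upper[b]
interval_adm = [b for b in gambles
                if not any(lower[a] > upper[b] for a in gambles if a != b)]

print(gamma_maximin)   # [5]
print(gamma_maximax)   # [2]
print(maximal)         # [1, 2, 3, 5]
print(e_admissible)    # [1, 2, 3]
print(interval_adm)    # [1, 2, 3, 5, 6]
```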

So many answers
Different criteria are useful in different settings
The more precise the input, the tighter the outputs
The Γ criteria usually yield only one decision
The Γ criteria are not good if many sequential decisions are made
Some argue that E-admissibility is best overall
Maximality is close to E-admissibility, but might be
easier to compute for large problems

Traditional Bayesian answer


Decision allows only one action, unless
we're indifferent between actions
Action 3 (or possibly 2, or even 1); different
people would get different answers
Depends on which prior we use for p(H)
Typically does not express any doubt about
the decision that's made

IP versus traditional approaches


Decisions under IP allow indecision when
your uncertainty entails it
Bayes always produces a single decision
(up to indifference), no matter how little
information may be available
IP unifies the two poles of Knight's division
into a continuum

Comparison to Bayesian approach


Axioms identical except IP doesn't use completeness
Bayesian rationality implies not only avoidance of
sure loss and coherence, but also the idea that an agent
must agree to buy or sell any bet at one price
Uncertainty about a probability is meaningful, and it's
operationalized as the difference between the maximum
buying price and minimum selling price
If you know all the probabilities (and utilities)
perfectly, then IP reduces to Bayes

Why Bayes fares poorly


Bayesian approaches don't distinguish ignorance
from equiprobability
Neuroimaging and clinical psychology show that
humans strongly distinguish uncertainty from risk
Most humans regularly and strongly deviate from Bayes
Hsu et al. (2005) reported that people with brain lesions
at the site believed to handle uncertainty
behave according to the Bayesian normative rules

Bayesians are too sure of themselves (e.g., Clippy)

IP does groups
Bayesian theory does not work for groups
Rationality inconsistent with democratic process

Scientific decisions are not personal
Teams, agencies, collaborators, companies, clients
Reviewers, peers

IP does generalize to group decisions
Can be rational and coherent if indecision is
admitted occasionally

Take-home messages
Antiscientific (or at least silly) to say you
know more than you do
Bayesian decision making always yields one
answer, even if this is not really tenable
IP tells you when you need to be careful and
reserve judgment

Synopsis and conclusions

Decisions under risk


How?
Payoffs and probabilities are known
Select decision that maximizes expected utility

Why?
If you make many similar decisions, then you'll
perform best in the long run using this rule

Why not?
Needs a lot of information to use
Unsuitable for important unique decisions
Inappropriate if gambler's ruin is possible
Getting subjective probabilities can be difficult
Sometimes probabilities are inconsistent

Bayesian (personalist) decisions


Not a good description of how people act
Paradoxes (St. Petersburg, Ellsberg, Allais, etc.)

No such thing as a group decision


Review panels, juries, teams, corporations
Cannot maintain rationality in this context
Unless run as a constant dictatorship

Multi-criteria decision analysis


Used when there are multiple, competing goals
E.g., USFS multiple use (biodiversity, aesthetics, habitat, timber, recreation, ...)
No universal solution; can only rank in one dimension

Group decision based on subjective assessments


Organizational help with conflicting evaluations
Identifying the conflicts
Deriving schemes for a transparent compromise

Several approaches
Analytic Hierarchy Process (AHP); Evidential Reasoning;
Weight of Evidence (WoE)

Analytic Hierarchy Process


Identify possible actions
buy house in Stony Brook / PJ / rent

Identify and rank significant attributes
location > price > school > near bus

For each attribute, and every pair of actions, specify a preference
Evaluate the consistency (transitivity) of the matrix of
preferences by eigenanalysis
Calculate a score for each alternative and rank them
Subject to rank reversals (e.g., without Perot, Bush beat Clinton)
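A minimal sketch (my own, not from the slides) of the eigenvector step in AHP for one attribute: a pairwise comparison matrix is reduced to priority weights via its principal eigenvector, and consistency is checked with a consistency ratio. The comparison values and the three housing options are invented for illustration.

```python
import numpy as np

# AHP eigenvector step (illustrative sketch; the comparison values are invented).
# A[i, j] = how strongly option i is preferred to option j on one attribute.
A = np.array([
    [1.0, 3.0, 5.0],     # Stony Brook vs PJ vs rent, say, on "location"
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                        # priority weights, sum to 1

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)                 # consistency index
RI = 0.58                                    # Saaty's random index for n = 3
CR = CI / RI                                 # consistency ratio; < 0.1 is conventionally acceptable

print(weights)   # roughly [0.65, 0.23, 0.12]
print(CR)        # near 0 here, so these judgments are nearly transitive
```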

Decision under uncertainty


How?
Probabilities are not known
Use a criterion corresponding to your attitude about risk
(Pareto, Maximin, Maximax, Hurwicz, Minimax regret, Bayes-Laplace, etc.)

Select an optimal decision under this criterion

Why?
Answer reflects your attitudes about risk

Why not?
Complete ignorance about probabilities is rare
Results depend on extreme payoffs, except for Bayes-Laplace
Intermediate payoffs may be more likely than extremes
(especially when the extremes don't differ much)

Why IP?

Uses all available information


Doesn't require unjustified assumptions
Tells you when you don't know
Conforms with human psychology
Can make rational group decisions
Better in uncertainty-critical situations
Gains and losses heavily depend on unknowns
Nuclear risk, endangered species, etc.

Policy juggernauts
Precautionary principle is a mantra
intoned by lefty environmentalists
Junk science is an epithet used by right-wing corporatists
Yet claims made under both banners are important,
should be taken seriously, and can be,
via risk assessment and decision analysis

Underpinning for regulation


Narrow definition of science?
Hypothesis testing is clearly insufficient
Need to acknowledge differential costs

Decision theory?
Decision theory only optimal for unitary decision maker
(group decisions are much more tenuous)
Gaming the decision is rampant

Maybe environmental regulation should be modeled


on game theory instead of decision theory

Game-theoretic strategies
Building trust
explicitness
reciprocity
inclusion of all stake holders

Checking
monitoring
adaptive management
renewal licensing

Multiplicity
sovereignty, subsidiarity
some countries take a risk (GMOs, thalidomide)

References

Foster, K.R., P. Vecchia, and M.H. Repacholi. 2000. Science and the precautionary principle.
Science 288 (12 May): 979-981.
Cosmides, L., and
Hsu, M., M. Bhatt, R. Adolphs, D. Tranel, and C.F. Camerer. 2005. Neural systems responding to
degrees of uncertainty in human decision-making. Science 310:1680-1683.
Kikuti, D., F.G. Cozman and C.P. de Campos. 2005. Partially ordered preferences in decision trees:
computing strategies with imprecision in probabilities. Multidisciplinary IJCAI-05 Workshop on
Advances in Preference Handling, R. Brafman and U. Junker (eds.), pp. 118-123.
http://wikix.ilog.fr/wiki/pub/Preference05/WebHome/P40.pdf
Kmietowicz, Z.W. and A.D. Pearman. 1976. Decision theory and incomplete knowledge: maximum
variance. Journal of Management Studies 13: 164-174.
Kmietowicz, Z.W. and A.D. Pearman. 1981. Decision Theory and Incomplete Knowledge. Gower,
Hampshire, England.
Knight, F.H. 1921. Risk, Uncertainty and Profit. L.S.E., London.
Milloy, S. http://www.junkscience.com/JSJ_Course/jsjudocourse/1.html
Plous, S. 1993. The Psychology of Judgment and Decision Making. McGraw-Hill.
Sewell, M. Expected utility theory http://expected-utility-theory.behaviouralfinance.net/
Troffaes, M. 2004. Decision making with imprecise probabilities: a short review. The SIPTA
Newsletter 2(1): 4-7.
Walley, P. 1991. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London.

Exercises

1. If you have a bankroll of $100, how many tosses could you allow in a
finite version of the St. Petersburg game (if you were compelled to pay
the winnings from this bankroll)? What are the expected winnings for
a game limited to this many tosses? How much do you think your
friends might actually pay to play this game? What are the numbers if
the bankroll is $100 million?

2. Demographic simulations of an endangered marine turtle suggest that
conservation strategy Beachhead will yield between 3 and 4.25 extra
animals per unit area if it's a normal year but only 3 if it's a warm year,
and that strategy Longliner will produce between 1 and 3 extra
animals in a normal year and between 2 and 5 extra animals in a warm
year. Assume that the probability of next year being warm is between
0.5 and 0.75. Graph the reward functions for these two strategies as a
function of the probability of next year being warm. Which strategy
would conservationists prefer? Why?

3. How are Γ-maximin and maximin expected payoff related?

End

Dutch book example


Horse           Offered odds      Probability
Danger Spree    Evens             0.5
Windtower       3 to 1 against    0.25
Shoeless Bob    4 to 1 against    0.2
                                  0.95 (total)

A gambler could lock in a profit of 10 by betting
100, 50 and 40 on the three horses respectively
http://en.wikipedia.org/wiki/Dutch_book
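A minimal sketch (my own, not from the slides) verifying the arbitrage: whichever horse wins, the bettor's return exceeds the 190 staked. The payout multiples implied by "evens", "3 to 1" and "4 to 1" are the only assumptions.

```python
# Verifying the Dutch book: a guaranteed profit of 10 no matter which horse wins.
odds = {"Danger Spree": 1, "Windtower": 3, "Shoeless Bob": 4}   # profit per unit stake if that horse wins
stakes = {"Danger Spree": 100, "Windtower": 50, "Shoeless Bob": 40}
total_staked = sum(stakes.values())                              # 190

for winner in odds:
    payout = stakes[winner] * (odds[winner] + 1)                 # stake returned plus winnings
    print(winner, payout - total_staked)                         # profit of 10 in every case
```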

Knight's dichotomy bridged

Decisions under risk
  Probabilities known
  Maximize expected utility
Decisions under imprecision
  Probabilities somewhat known
  E-admissibility, partial averages, et al.
Decisions under uncertainty
  Probabilities unknown
  Several possible strategies

Bayesians
Updating with Bayes' rule
Subjective probabilities (defined by bets)
Decision analysis context
Distribution even for fixed parameter
Allows natural confidence statements
Uses all information, including priors

Prior information
Suppose I claim to be able to distinguish
music by Haydn from music by Mozart
What if the claim were that I can predict the
flips of a coin taken from your pocket?
Prior knowledge conditions us to believe the
former claim but not the latter, even if the
latter were buttressed by sample data of 10
flips at a significance level of 1/2^10

Decision theory paradoxes

St. Petersburg Paradox


Ellsberg Paradox
Allais Paradox
Borel Paradox
Rumsfeld's quandary (unknown unknowns)
Open-worldness
Intransitivity
