OPENING EXERCISE
You are asked to make an investment in ONE of two ventures, given the following information:
- If you invest in Venture A, there is a 5% chance of making a profit of Rs. 80 crores, and a 95% chance of no profit at all.
OPENING EXERCISE
You have Rs. 100 with you, with which you have to purchase either of two lotteries.
Alok and Manoj each choose between Park and Beach (the middle row and column, with payoffs left as "?, ?", are to be filled in):

                     MANOJ
                     Park       ?          Beach
ALOK    Park         100, 80    ?, ?       0, 0
        ?            ?, ?       ?, ?       ?, ?
        Beach        0, 0       ?, ?       60, 120
- From Alok's viewpoint: on 10% of the days, Alok will go to the Park, and on the remaining 90% he will go to the Beach. Thus, when the game is played over a large number of rounds, the two pure strategies are mixed in the proportions indicated by the mixed strategy.
- From Manoj's viewpoint: the mixed strategy captures Manoj's perception of how likely Alok is to choose either place. In this example, Manoj might think there is a 10% chance that Alok will go to the Park and a 90% chance that he will go to the Beach. If so, Manoj will say that Alok is playing the mixed strategy (0.1, 0.9).
- However, on any given day, Alok will either go to the Park or to the Beach. Thus, whenever Alok plays the game, he actually plays only a pure strategy. Manoj also knows this.
This means that the mixed strategy is an abstraction and is never actually played by the player in any particular round of the game.
Prof. J. Ajith Kumar, TAPMI, Manipal
- Alok and Manoj are not particular individuals but represent two different populations. In this example, a member of each type gets value only in interaction with a member of the other type, and no value otherwise. Further, each population has its preferred distribution.
- Thus, "Alok plays (0.1, 0.9)" means that when asked what they prefer, 10% of the members of the Alok population will choose Park and 90% will choose Beach. Thus the mixed strategy represents the preference distribution of the population.
If Alok = (0.1, 0.9) and Manoj = (0.2, 0.8), then what are their respective payoffs?

                     MANOJ
                     Park       Beach
ALOK    Park         100, 80    0, 0
        Beach        0, 0       60, 120
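A quick way to check the answer is to weight each cell's payoff by the probability that the cell is reached, which is the product of the two independent mixing probabilities. A minimal Python sketch, with the matrix and the mixes taken from the slide:

```python
# Expected payoffs when Alok plays (0.1, 0.9) and Manoj plays (0.2, 0.8).
# Rows = Alok (Park, Beach), columns = Manoj (Park, Beach).
alok_payoff = [[100, 0], [0, 60]]
manoj_payoff = [[80, 0], [0, 120]]
alok_mix = [0.1, 0.9]   # (Park, Beach)
manoj_mix = [0.2, 0.8]  # (Park, Beach)

def expected_payoff(matrix, row_mix, col_mix):
    """Sum payoff * P(cell) over all cells; P(cell) = p_row * q_col."""
    return sum(matrix[i][j] * row_mix[i] * col_mix[j]
               for i in range(2) for j in range(2))

u_alok = expected_payoff(alok_payoff, alok_mix, manoj_mix)
u_manoj = expected_payoff(manoj_payoff, alok_mix, manoj_mix)
print(round(u_alok, 6), round(u_manoj, 6))  # 45.2 88.0
```

So Alok expects 0.1·0.2·100 + 0.9·0.8·60 = 45.2, and Manoj expects 0.02·80 + 0.72·120 = 88.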
                     MANOJ
                     Park (q)     Beach (1-q)
ALOK    Park (p)     100, 80      0, 0
        Beach (1-p)  0, 0         60, 120

The probability of reaching each cell is the product of the two independent mixing probabilities:

                     Park (q)     Beach (1-q)
        Park (p)     p.q          p.(1-q)
        Beach (1-p)  (1-p).q      (1-p).(1-q)
Alok's expected utility U_Alok as a function of q (Manoj's probability of Park):
- If Alok plays Park: U_Alok = 100q, rising from 0 (at q = 0) to 100 (at q = 1).
- If Alok plays Beach: U_Alok = 60(1-q), falling from 60 to 0.
- The two lines cross at q = 3/8, where U_Alok = 37.5.
What do the utility functions tell us?
Manoj's expected utility U_Manoj as a function of p (Alok's probability of Park):
- If Manoj plays Park: U_Manoj = 80p, rising from 0 (at p = 0) to 80 (at p = 1).
- If Manoj plays Beach: U_Manoj = 120(1-p), falling from 120 to 0.
- The two lines cross at p = 3/5, where U_Manoj = 48.
gtap
q = BManoj(p)
p=1
park
3/5
p=0
beach
3/8
q=0
beach
q=1
park
16
gtap
gtap
p = BAlok(q)
3/5
p=0
beachq=0
beach
q = BManoj(p)
3/8
q=1
park
This
means
that
the
equilibrium strategies of Alok
and Manoj are:
*Alok = (3/5, 2/5) and
*Manoj
=
(3/8,
5/8)
respectively.
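The same equilibrium falls out directly from the two indifference conditions (100q = 60(1-q) for Alok, 80p = 120(1-p) for Manoj). A small sketch using exact fractions:

```python
from fractions import Fraction

# Manoj's mixing probability q makes Alok indifferent:
# 100*q = 60*(1-q)  =>  q = 60 / (100 + 60)
q = Fraction(60, 100 + 60)
# Alok's mixing probability p makes Manoj indifferent:
# 80*p = 120*(1-p)  =>  p = 120 / (80 + 120)
p = Fraction(120, 80 + 120)

print(p, q)  # 3/5 3/8
```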
Review Questions
Now suppose Alok increases his likelihood of going to the Park to 75%. How should Manoj respond? What is his best response?
As per our model, Manoj should respond by going only to the Park: (1, 0).
For any likelihood of Alok going to the Park greater than 60%, Manoj's payoff is higher from going to the Park than to the Beach.
                 PLAYER 2
             L
PLAYER 1     1, 2    3, 3    1, 1
             2, 1    0, 1    2, 0
             0, 4    5, 1    0, 7
The method of using best responses is easy to apply in 2-player, 2-action games, but becomes difficult when the number of actions is more than two.
In 2-player games with more than 2 actions, we can use a trial-and-error method, in conjunction with the Opponent's Indifference Property, to find the MSNE.
                 PLAYER 2
PLAYER 1     4, 2    0, 0    0, 1
             0, 0    2, 4    1, 3
Player 2 assigns positive probability to only one action; Player 1 to more than one action.
                 PLAYER 2
PLAYER 1     4, 2    0, 0    0, 1
             0, 0    2, 4    1, 3

This game has three MSNEs, of which two are PSNEs and one is not:
[ (1, 0), (1, 0, 0) ],
[ (0, 1), (0, 1, 0) ],
[ (3/4, 1/4), (1/5, 0, 4/5) ]
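The third profile can be checked against the Opponent's Indifference Property: each player's mix must leave the opponent indifferent across the actions in the opponent's support, with unused actions doing no better. A sketch, with the matrices read off the table above:

```python
from fractions import Fraction as F

# Rows = Player 1 (2 actions), columns = Player 2 (3 actions).
u1 = [[4, 0, 0], [0, 2, 1]]  # Player 1's payoffs
u2 = [[2, 0, 1], [0, 4, 3]]  # Player 2's payoffs
p1 = [F(3, 4), F(1, 4)]        # Player 1's mix
p2 = [F(1, 5), F(0), F(4, 5)]  # Player 2's mix

# Player 1's expected payoff from each pure action against p2:
v1 = [sum(u1[i][j] * p2[j] for j in range(3)) for i in range(2)]
# Player 2's expected payoff from each pure action against p1:
v2 = [sum(u2[i][j] * p1[i] for i in range(2)) for j in range(3)]

print(v1)  # both rows give 4/5: Player 1 is indifferent
print(v2)  # columns 1 and 3 give 3/2; column 2 gives only 1, so it gets probability 0
```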
EXERCISE
Find all the NEs in this game.

                     MANOJ
                     Park (q)    Beach (1-q)
ALOK    Park (p)     40, 40      10, 70
        Beach (1-p)  70, 10      20, 20
EXERCISE
Find all the NEs in this game.

                      NIKHIL
                      Red (q)     Black (1-q)
MANISH  Red (p)       1, 1        1, 1
        Black (1-p)   1, 1        1, 1

                      SAILESH
                      Right (q)   Left (1-q)
KAVISH  Right (p)     1, 1        1, 1
        Left (1-p)    1, 1        1, 1
EXERCISE
Find all the NEs in this game.

                     WIFE
                     C (q)     M (1-q)
HUSBAND  C (p)       30, 20    0, 0
         M (1-p)     0, 0      20, 30

- Three NEs:
[(1, 0), (1, 0)], [(0, 1), (0, 1)], [(0.6, 0.4), (0.4, 0.6)].
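Each claimed profile can be verified as a Nash equilibrium by checking that neither player gains from a unilateral deviation (for a finite game it suffices to check deviations to pure strategies). A sketch, with payoffs from the table:

```python
# Husband's and Wife's payoffs; rows = Husband (C, M), columns = Wife (C, M).
u_h = [[30, 0], [0, 20]]
u_w = [[20, 0], [0, 30]]

def payoffs(mix_h, mix_w):
    """Expected payoffs (Husband, Wife) for a pair of mixed strategies."""
    uh = sum(u_h[i][j] * mix_h[i] * mix_w[j] for i in range(2) for j in range(2))
    uw = sum(u_w[i][j] * mix_h[i] * mix_w[j] for i in range(2) for j in range(2))
    return uh, uw

def is_nash(mix_h, mix_w, tol=1e-9):
    """True if no pure-strategy deviation improves either player's payoff."""
    uh, uw = payoffs(mix_h, mix_w)
    pure = [[1, 0], [0, 1]]
    h_ok = all(payoffs(d, mix_w)[0] <= uh + tol for d in pure)
    w_ok = all(payoffs(mix_h, d)[1] <= uw + tol for d in pure)
    return h_ok and w_ok

profiles = [([1, 0], [1, 0]), ([0, 1], [0, 1]), ([0.6, 0.4], [0.4, 0.6])]
print([is_nash(h, w) for h, w in profiles])  # [True, True, True]
```

In the mixed equilibrium both players earn 12, and every pure deviation also yields exactly 12, which is precisely the indifference property at work.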
GENERAL REALIZATIONS
EXERCISE SET C
- Problem 6: Hawk-Dove Game.
- Problem 9: Swimming with Sharks.
- Problem 10: Testing for MSNE.
- Problem 13: Defending Territory.
Do people actually play mixed strategies as predicted by Game Theory?
Palacios-Huerta, I. (2003), "Professionals Play Minimax", Review of Economic Studies, Vol. 70, pp. 395-415.
- Analyzed data from 1,417 penalty kicks from FIFA games: Spain, England, Italy.
The payoff (success) matrix.
                 FRIEND
                 Yes     No
YOU     Yes
        No
- You are considering watching the first-day-first-show of a new movie (there is a certain thrill in doing so, and ideally you wouldn't want to lose that). If you find the movie good, you will surely watch it a second time. But you also realize that there is a probability the movie will not be good, and you associate a pain c with a bad experience. Will you watch it first and then tell your friend, or wait for your friend to watch it and then tell you?
- Two firms from the same country are considering investing in a new country, about whose market there is some uncertainty. Both can benefit from investing in the first year itself; however, there is a possibility of things going wrong. Thus each might benefit by waiting for the other firm to invest, watching the result, and then deciding whether to invest in the second year or not.
- Any situation where being the first mover can sometimes be better and sometimes worse than being a late mover.
Other situations where this can be used?
CREATIVE EXERCISE
- A project of GTAPers of the 2013-15 batch.
                     PRISONER B
                     Quiet     Admit
PRISONER A  Quiet    1, 1      10, 0
            Admit    0, 10     5, 5

Player 1 thinks: "Is there any strategy that Player 2 will surely not play?"
He examines Player 2's payoffs and finds that Player 2's strategy Quiet is strictly dominated by his strategy Admit. Hence, Player 1 concludes that Player 2 will never play Quiet, which means that Player 2 can play only Admit.
If that is so, Player 1 asks, "What should my response be?" Obviously Admit, since for him too, Quiet is strictly dominated by Admit.
He then asks: "Player 2 will think about what I will do. How will he think?" He realizes that Player 2 will reason about him (Player 1) just as he reasoned about Player 2, and will conclude that he (Player 1) would play Admit.
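Player 1's first step, checking that Quiet is strictly dominated by Admit, can be written out mechanically. Here the matrix entries are read as years in prison (an assumption consistent with Admit dominating Quiet; lower is better), so dominance means a strictly lower sentence against every opposing action:

```python
# Entries are read as years in prison, so a lower number is better (assumption).
# cost_b[i][j] = Prisoner B's sentence when A plays i and B plays j (0=Quiet, 1=Admit).
cost_b = [[1, 0], [10, 5]]
# cost_a[j][i] = Prisoner A's sentence when B plays j and A plays i (symmetric game).
cost_a = [[1, 0], [10, 5]]

def strictly_dominates(costs, better, worse):
    """True if `better` yields a strictly lower cost than `worse`
    against every action of the opponent."""
    return all(costs[opp][better] < costs[opp][worse] for opp in range(2))

QUIET, ADMIT = 0, 1
print(strictly_dominates(cost_b, ADMIT, QUIET))  # True: B never plays Quiet
print(strictly_dominates(cost_a, ADMIT, QUIET))  # True: A never plays Quiet
```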
- What is the best strategy for me / others, as well as what is never a best strategy for myself / others?
- What do the other player(s) think about what I think is the best strategy for me / themselves, as well as what is never a best strategy for me / themselves?
- What do the other player(s) think about what I think about their thinking about the best strategy for me / themselves, as well as what is never a best strategy for me / themselves? And so on.
It becomes clear from this way of thinking that each player can eliminate all strategies of himself / others that can never be a best strategy, since those strategies will never be played.
                 PLAYER 2
                 L       C       R
PLAYER 1    U    2, 0    2, 1    0, 0
            M    1, 1    1, 1    5, 0
            D    0, 1    4, 2    0, 1

1. R is strictly dominated by C. Eliminate R and create a smaller game. In the smaller game, we find:
2. M is strictly dominated by U. Eliminate M and create a smaller game. In the smaller game:
3. L is strictly dominated by C. Eliminate L and create a smaller game. In the smaller game:
4. U is strictly dominated by D. Eliminate U.
The single surviving profile is (D, C), with payoffs (4, 2).
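The elimination steps can be automated. A minimal sketch of iterated elimination of strictly dominated strategies (checking domination by pure strategies only), using the payoffs above:

```python
# Payoffs of the 3x3 game: u1 for Player 1 (rows U, M, D), u2 for Player 2 (cols L, C, R).
u1 = {('U','L'): 2, ('U','C'): 2, ('U','R'): 0,
      ('M','L'): 1, ('M','C'): 1, ('M','R'): 5,
      ('D','L'): 0, ('D','C'): 4, ('D','R'): 0}
u2 = {('U','L'): 0, ('U','C'): 1, ('U','R'): 0,
      ('M','L'): 1, ('M','C'): 1, ('M','R'): 0,
      ('D','L'): 1, ('D','C'): 2, ('D','R'): 1}

rows, cols = ['U', 'M', 'D'], ['L', 'C', 'R']

def dominated(actions, others, payoff):
    """Return an action strictly dominated by some other pure action, if any.
    payoff(a, o) is the deciding player's payoff from action a vs opposing action o."""
    for a in actions:
        for b in actions:
            if b != a and all(payoff(b, o) > payoff(a, o) for o in others):
                return a
    return None

# Alternate between the players, shrinking the game until nothing is dominated.
while True:
    r = dominated(rows, cols, lambda a, o: u1[(a, o)])
    if r:
        rows.remove(r)
        continue
    c = dominated(cols, rows, lambda a, o: u2[(o, a)])
    if c:
        cols.remove(c)
        continue
    break

print(rows, cols)  # ['D'] ['C']  -> the surviving profile (D, C), payoffs (4, 2)
```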
If there were other Nash Equilibria, the game would not terminate this way, in a single strategy profile.
Now consider a modified game:

                 PLAYER 2
                 L       C       R
PLAYER 1    U    3, 1    0, 1    0, 0
            M    1, 1    1, 1    5, 0
            D    0, 1    4, 1    0, 0

Here the eliminations involve weak dominance, and the outcome depends on the order of elimination: different orders can leave different profiles, such as (U, L) with payoffs (3, 1) or (D, C) with payoffs (4, 1).
Recall the opening exercises: Venture A vs. Venture B, and Lottery A vs. Lottery B.
In solving for mixed strategies, we used the concept of expected value. But do people actually form their preferences on the basis of expected value?
Lottery
For the decision to purchase Lottery A:
EV(A) = 0.000004 x 100,000,000 + 0.999996 x (-100) = 400 - 99.9996, i.e. EV(A) ~ 300.
For the decision to purchase Lottery B:
EV(B) = 0.00004 x 20,000,000 + 0.99996 x (-100) = 800 - 99.996, i.e. EV(B) ~ 700.
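The arithmetic above is just a probability-weighted sum. A sketch (the prize amounts and the Rs. 100 ticket price are as implied by the slide's formulas):

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Lottery A: win Rs. 100,000,000 with prob. 0.000004, else lose the Rs. 100 price.
ev_a = expected_value([(0.000004, 100_000_000), (0.999996, -100)])
# Lottery B: win Rs. 20,000,000 with prob. 0.00004, else lose the Rs. 100 price.
ev_b = expected_value([(0.00004, 20_000_000), (0.99996, -100)])

print(round(ev_a), round(ev_b))  # 300 700
```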
The expected values of the choices:
- Investment A: 4 (Rs. crores); Investment B: 8 (Rs. crores).
- Lottery A: ~300; Lottery B: ~700.