
2.6 Probability of Reaching a State

2.6.1 Probability of Reaching a State

In the previous section we calculated

the probability that the system will have been in a certain state by a certain time

or

the probability that the system will be in a certain state for the first time at a certain time.

In this section we want to calculate the probability of reaching a state at all in the case where there is a
chance the system might not reach that state.

Example 1. A company manufactures circuit boards. There are four steps involved.
1. Tinning
2. Forming
3. Insertion
4. Soldering

There is a 5% chance that after the forming step a board has to go back to tinning. There is a 20% chance
that after the insertion step the board has to be scrapped. There is a 30% chance that after the soldering step
the board has to go back to the insertion step and a 10% chance that the board has to be scrapped.

Suppose we want to know the probability that a board is completed successfully, i.e. it does not end up as
scrap.

Here is a transition diagram of this process. In addition to the 4 states above we also include
5. Scrapped
6. Completed successfully

[Transition diagram: Tinning (1) → Forming (2) with probability 1; Forming (2) → Insertion (3) with
probability 0.95 and back to Tinning (1) with probability 0.05; Insertion (3) → Solder (4) with probability
0.8 and → Scrap (5) with probability 0.2; Solder (4) → Success (6) with probability 0.6, back to
Insertion (3) with probability 0.3, and → Scrap (5) with probability 0.1.]

The transition matrix is

            0     1     0     0     0     0
            0.05  0     0.95  0     0     0
(1)   P =   0     0     0     0.8   0.2   0
            0     0     0.3   0     0.1   0.6
            0     0     0     0     1     0
            0     0     0     0     0     1
The probability that a board is completed successfully is

(2) Pr{T(6) < ∞ | X0 = 1} = probability of reaching state 6 if we start in state 1

Recall T(6) is the first time we reach state 6. In order to find this probability we consider the more general
problem of finding

(3) Fi = Pr{T(6) < ∞ | X0 = i} = probability of reaching state 6 if we start in state i

for i = 1, 2, 3, 4, 5 and 6. The probability in (2) is F1. Note that F5 = 0 and F6 = 1. The notation (3) is a
special case of the following more general notation.

Definition 1. If i and j are states in a Markov chain, let

(4) Fij = Pr{T(j) < ∞ | X0 = i} = probability of reaching state j if we start in state i

In (3) and the following we will omit the second subscript 6 for convenience. With that in mind, the Fi
satisfy a system of equations, obtained by conditioning on the first transition and using the Markov
property. Note that

Fi = Pr{reach state 6 | start in state i}


= Pr{next state is 1 | start in state i} × Pr{reach state 6 | start in state 1}
+ Pr{next state is 2 | start in state i} × Pr{reach state 6 | start in state 2}
+ Pr{next state is 3 | start in state i} × Pr{reach state 6 | start in state 3}
+ Pr{next state is 4 | start in state i} × Pr{reach state 6 | start in state 4}
+ Pr{next state is 5 | start in state i} × Pr{reach state 6 | start in state 5}
+ Pr{next state is 6 | start in state i} × Pr{reach state 6 | start in state 6}

= pi1F1 + pi2F2 + pi3F3 + pi4F4 + pi5F5 + pi6F6

Since F5 = 0 and F6 = 1 we get

Fi = pi1F1 + pi2F2 + pi3F3 + pi4F4 + pi6

Putting in i = 1, 2, 3 and 4 we get

F1 = p11F1 + p12F2 + p13F3 + p14F4 + p16
(5) F2 = p21F1 + p22F2 + p23F3 + p24F4 + p26
F3 = p31F1 + p32F2 + p33F3 + p34F4 + p36
F4 = p41F1 + p42F2 + p43F3 + p44F4 + p46

This is a system of four equations which we can solve for F1, F2, F3, F4. To see the structure of the
equations we can put them in vector form.

 F1      p11 p12 p13 p14   F1      p16
 F2  =   p21 p22 p23 p24   F2  +   p26
 F3      p31 p32 p33 p34   F3      p36
 F4      p41 p42 p43 p44   F4      p46

or

F = PTF + P●,6

or

(6) (I – PT)F = P●,6

          F1           p11 p12 p13 p14              p16
where F = F2 ,   PT =  p21 p22 p23 p24   and P●,6 = p26
          F3           p31 p32 p33 p34              p36
          F4           p41 p42 p43 p44              p46

Note that PT is just the part of P corresponding to states 1, 2, 3 and 4 and P●,6 contains the probabilities of
going from the states 1, 2, 3 and 4 to state 6 in one time step. Sometimes it is convenient to write the
solution to (6) as

F = (I – PT)^-1 P●,6

although it is usually faster to solve (6) directly using Gaussian elimination.
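To make this concrete, here is a minimal sketch in Python with NumPy (not part of the original notes;
the helper name reach_probabilities is invented). It solves (6) with np.linalg.solve, which performs
Gaussian elimination rather than forming the inverse explicitly.

    import numpy as np

    def reach_probabilities(P, transient, target):
        # PT: the part of P corresponding to the transient states.
        PT = P[np.ix_(transient, transient)]
        # b: one-step probabilities of going from those states to the target.
        b = P[transient, target]
        # Solve (I - PT) F = b by Gaussian elimination.
        return np.linalg.solve(np.eye(len(transient)) - PT, b)

With P the 6-by-6 matrix (1) (states 1-6 stored at indices 0-5), the call
reach_probabilities(P, [0, 1, 2, 3], 5) returns approximately [0.632, 0.632, 0.632, 0.789], which matches
the hand computation that follows. In our example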

 0 1 0 0   1 -1 0 0 
PT =  0
0.05 0 0.95 0   - 0.05 1 - 0.95 0 
I - PT =
 0 0 0.8   0 0 1 - 0.8 
 0 0 0.3 0   0 0 - 0.3 1 

 0 
=  0 
0
P●,6
 
 0.6 

So the equations (6) are

 1 -1 0 0   F1   0 
 - 0.05 1 - 0.95 0   =  0 
F 2
 0 0 1 - 0.8   F3   0 
 0 0 - 0.3 1   F4   0.6 

or

F1 - F2 = 0

- 0.05 F1 + F2 - 0.95 F3 = 0

F3 - 0.8 F4 = 0

- 0.3 F3 + F4 = 0.6

The first equation gives F2 = F1. Putting this in the second equation we get 0.95F1 - 0.95F3 = 0 or F3 = F1.
Putting this in the third equation we get F1 – 0.8F4 = 0 or F4 = 1.25F1. Putting this in the fourth equation
we get – 0.3F1 + 1.25F1 = 0.6 or 0.95F1 = 0.6 or F1 = 0.6/0.95 = 12/19 ≈ 0.63. So the probability of a board
turning out successful is about 63%. We also have F3 = F2 = F1 ≈ 0.63 and F4 = 1.25F1 = 15/19 ≈ 0.79.
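As an independent cross-check of F1 = 12/19, one can simulate many boards and count how often they
finish successfully. A minimal sketch, again in Python with NumPy and not part of the original notes:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Transition matrix (1), states 1-6 at indices 0-5.
    P = np.array([[0, 1, 0, 0, 0, 0],
                  [0.05, 0, 0.95, 0, 0, 0],
                  [0, 0, 0, 0.8, 0.2, 0],
                  [0, 0, 0.3, 0, 0.1, 0.6],
                  [0, 0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 1]], dtype=float)

    def one_board(rng):
        state = 0                      # start in Tinning (state 1)
        while state not in (4, 5):     # run until Scrap (5) or Success (6)
            state = rng.choice(6, p=P[state])
        return state == 5              # True if the board was completed

    trials = 100_000
    print(sum(one_board(rng) for _ in range(trials)) / trials)   # about 0.632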

Transient and recurrent states. States 1 – 4 in Example 1 are examples of transient states and states 5
and 6 are examples of recurrent states.

Definition 2. A state j in a Markov chain is transient if there is a non-zero probability that the system will
never return to the state if it starts in the state, i.e.

Pr{T(j) = ∞ | X0 = j} > 0

A state is recurrent if the probability that the system will return to the state if it starts in the state is one, i.e.

Pr{T(j) < ∞ | X0 = j} = 1

In the notation (4), a state is transient or recurrent according to whether Fjj < 1 or Fjj = 1.

If there is a finite number of states, then there is an easy way to determine if a state is transient or recurrent.

Theorem 1. Suppose there is a finite number of states. Then a state j is recurrent if, whenever it is possible
to go from state j to another state k, it is also possible to go from k back to j. A state j is transient if there is
another state k such that one can go from j to k but cannot go from k back to j.

This theorem is sometimes stated in graph theoretic terms.

Definition 3. A graph is a set of vertices along with a set of edges. Each edge goes from one vertex to
another.

Definition 4. Given a Markov chain, the associated graph has vertices equal to the states and an edge from
one state i to another state j if pij > 0.

Definition 5. A path from one vertex i to another vertex j in a graph is a sequence of vertices i0, i1, …, ip
such that i0 = i, ip = j, and there is an edge from ik to ik+1 for k = 0, 1, …, p-1.

So Theorem 1 says that in a finite state Markov chain a state j is transient if there is a state
k such that there is a path from j to k but no path from k to j. A state is recurrent if
whenever there is a path from j to k then there is a path from k to j.
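Stated this way, Theorem 1 is easy to turn into a computation. The following sketch (plain Python, not
part of the original notes) classifies each state by checking path reachability with a depth-first search.

    def reachable(adj, start):
        # Depth-first search: every vertex that some path from start reaches.
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v])
        return seen

    def classify(P):
        n = len(P)
        # Edge i -> j whenever pij > 0 (Definition 4).
        adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]
        # j is transient iff some k reachable from j has no path back to j.
        return ["transient" if any(j not in reachable(adj, k)
                                   for k in reachable(adj, j))
                else "recurrent"
                for j in range(n)]

    # The chain of Example 1: states 1-4 come out transient, 5 and 6 recurrent.
    P = [[0, 1, 0, 0, 0, 0], [0.05, 0, 0.95, 0, 0, 0], [0, 0, 0, 0.8, 0.2, 0],
         [0, 0, 0.3, 0, 0.1, 0.6], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]]
    print(classify(P))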

Example 2. Consider the Markov process with the following graph. [Figure not reproduced here.]
States 1, 2 and 3 are transient and states 4, 5 and 6 are recurrent.

In the Examples considered in sections 2.1 – 2.4 all the states are recurrent.
Here is another example.

Example 3 (Gambler's ruin). Bill goes to the casino with $2 in his pocket. He plays the slots, betting $1
each time. For simplicity, suppose the probability that he wins a dollar on a particular play is 0.45 and the
probability that he loses a dollar on a particular play is 0.55. If he reaches the point where he has $4 in his
pocket, he will stop and go home $2 richer than when he arrived at the casino. On the other hand, if he
eventually loses all his money then he goes home $2 poorer than when he arrived at the casino. We want
to know the probabilities that he goes home a winner and a loser.

We model this by a Markov chain where Xn = the amount Bill has in his pocket after n plays. The possible
states are 0, 1, 2, 3, and 4. A transition diagram and the transition matrix are as follows.

[Transition diagram: states 0 through 4 in a row; from each of states 1, 2 and 3 an arrow goes up one
dollar with probability 0.45 and down one dollar with probability 0.55; states 0 and 4 loop back to
themselves with probability 1.]

     1     0     0     0     0
     0.55  0     0.45  0     0
P =  0     0.55  0     0.45  0
     0     0     0.55  0     0.45
     0     0     0     0     1
We want to know the probability that Bill ends up in state 4 given he starts in state 2. Following the
procedure of Example 1, we let

Fi = Pr{T(4) < ∞ | X0 = i} = probability of reaching state 4 if we start in state i

for i = 0, 1, 2, 3 and 4. We want to know F2. Note that F0 = 0 and F4 = 1. Arguing as in Example 1, the Fi
satisfy the equations

Fi = pi0F0 + pi1F1 + pi2F2 + pi3F3 + pi4F4

for i = 0, 1, …, 4. Since F0 = 0 and F4 = 1 we get

F1 = p11F1 + p12F2 + p13F3 + p14
F2 = p21F1 + p22F2 + p23F3 + p24
F3 = p31F1 + p32F2 + p33F3 + p34

or

F1 = 0.45F2
F2 = 0.55F1 + 0.45F3
F3 = 0.55F2 + 0.45
Using the first equation to eliminate F1 in the second equation gives (1 – (0.45)(0.55))F2 = 0.45F3.
Multiplying the third equation by 0.45 and using (1 – (0.45)(0.55))F2 = 0.45F3 gives
(1 – (0.45)(0.55))F2 = (0.45)(0.55)F2 + (0.45)^2 or F2 = (0.45)^2/(1 – 2(0.45)(0.55)) ≈ 0.401. So there is
about a 40% chance that Bill goes home a winner. Bill would have had a better chance of going home a
winner if he had just placed a single bet of $2 to begin with. We also have F1 = 0.45F2 ≈ 0.180 and
F3 = (1 – (0.45)(0.55))F2/0.45 ≈ 0.671.
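The same numbers fall out of solving the three equations numerically. A minimal sketch in Python with
NumPy (not part of the original notes):

    import numpy as np

    # F1 = 0.45 F2,  F2 = 0.55 F1 + 0.45 F3,  F3 = 0.55 F2 + 0.45,
    # rewritten in the form A F = b:
    A = np.array([[ 1.00, -0.45,  0.00],
                  [-0.55,  1.00, -0.45],
                  [ 0.00, -0.55,  1.00]])
    b = np.array([0.0, 0.0, 0.45])

    F1, F2, F3 = np.linalg.solve(A, b)
    print(F1, F2, F3)   # about 0.180, 0.401, 0.671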

