Markov Chains
Professor Izhak Rubin
Electrical Engineering Department
UCLA
2014-2015 by Izhak Rubin
Markov Property (given the present state, the conditional distribution of the future states is independent of the past states):
P(X_{k+1} = j | X_0 = i_0, ..., X_{k-1} = i_{k-1}, X_k = i) = P(X_{k+1} = j | X_k = i) = P_k(i, j),
for each time k ≥ 1 and states (i, j, i_0, ..., i_{k-1}) ∈ S.
Assume a time-homogeneous process: its statistical behavior is characterized by the (stationary) transition probability function (TPF)
P_k(i, j) ≡ P(i, j) = P(X_{k+1} = j | X_k = i) = P(X_1 = j | X_0 = i), for i, j ∈ S, k ≥ 1.
[Figure: sample path of X_k versus slot index k, visiting states such as X_2, X_3, X_4.]
Transition Probability Matrix:
P_T = {P(i, j), i, j ∈ S}.
Properties of P_T:
1. P(i, j) ≥ 0, for each i, j ∈ S;
2. Σ_{j∈S} P(i, j) = 1, for each i ∈ S.
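The two TPF properties can be checked mechanically, and a chain with a given TPF can be sampled slot by slot. A minimal Python sketch (the helper names are ours; the 3×3 matrix reuses the three-state example from these notes):

```python
import random

def validate_tpf(P):
    """Check the two TPF properties: P(i, j) >= 0 and each row summing to 1."""
    for i, row in enumerate(P):
        assert all(p >= 0.0 for p in row), f"negative entry in row {i}"
        assert abs(sum(row) - 1.0) < 1e-12, f"row {i} does not sum to 1"

def simulate(P, i0, steps, seed=0):
    """Sample a path X_0, ..., X_steps of a homogeneous chain with TPF P."""
    rng = random.Random(seed)
    path = [i0]
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for j, pij in enumerate(P[path[-1]]):
            acc += pij
            if u < acc:
                break
        path.append(j)   # falls through to the last state only on float round-off
    return path

P = [[0.2, 0.3, 0.5],
     [0.4, 0.2, 0.4],
     [0.6, 0.3, 0.1]]
validate_tpf(P)
path = simulate(P, i0=0, steps=10)
print(path)
```

Each step draws a uniform variate and walks the cumulative row probabilities, which is exactly sampling the next state from row P(i, ·).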
Initial Distribution:
P_0(i) = P(X_0 = i), i ∈ S.
Example: Bernoulli Counting Process
Let N = {N_k, k ≥ 0} with N_0 = 0, where M_k denotes the number of arrivals during the k-th slot (M_k ∈ {0, 1}), so that N_k = M_1 + ⋯ + M_k.
Assume {M_k, k ≥ 1} are i.i.d. RVs with
P(M_k = j) = 1 - p, for j = 0; p, for j = 1, where 0 < p < 1.
Then N is a DT Markov Chain with
P(i, j) = { 1 - p, if j = i; p, if j = i + 1; 0, otherwise }.
With i = n_1 + ⋯ + n_k:
P(N_{k+1} = j | M_1 = n_1, M_2 = n_2, ..., M_k = n_k) = P(Σ_{l=1}^{k+1} M_l = j | M_1 = n_1, ..., M_k = n_k) = P(i + M_{k+1} = j)
= { 1 - p, j = i; p, j = i + 1 } = P(i, j) ⟹ N is a DT MC.
Associated discrete-time point process: A = {A_n, n ≥ 0}, A_0 = 0, where A_n = time (slot) of the n-th occurrence.
Interarrival times: T_n = A_n - A_{n-1}, n ≥ 1.
P(T_n = i) = (1 - p)^{i-1} p, i ≥ 1; hence A is a DT renewal point process with geometrically distributed intervals: a Geometric Point Process.
The distribution of the counting variable is Binomial:
P(N_k = n) = C(k, n) p^n (1 - p)^{k-n}, n = 0, 1, ..., k.
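This binomial law can be checked by direct simulation of the i.i.d. slot variables M_k. A Python sketch (the parameter values p = 0.3, k = 10 are arbitrary choices for illustration):

```python
import math
import random

def binom_pmf(k, n, p):
    """P(N_k = n) = C(k, n) p^n (1-p)^(k-n)."""
    return math.comb(k, n) * p**n * (1.0 - p)**(k - n)

rng = random.Random(1)
p, k, trials = 0.3, 10, 200_000
counts = [0] * (k + 1)
for _ in range(trials):
    n_k = sum(1 for _ in range(k) if rng.random() < p)   # N_k = M_1 + ... + M_k
    counts[n_k] += 1

for n in range(k + 1):
    print(f"n={n:2d}  empirical={counts[n] / trials:.4f}  binomial={binom_pmf(k, n, p):.4f}")
```

With 200,000 trials the empirical frequencies match the binomial probabilities to roughly three decimal places.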
P(X_{m+1} = j | X_0 = i) = Σ_{l∈S} P(X_m = l | X_0 = i) P(l, j) = Σ_{l∈S} P^m(i, l) P(l, j),
so that
P^{m+1}(i, j) = Σ_{l∈S} P^m(i, l) P(l, j).
We can compute the state distribution at time k by using the k-step TPF:
P_k(j) = Σ_{i∈S} P_0(i) P^k(i, j).
Note (the Chapman-Kolmogorov equation):
P^{m+n}(i, j) = Σ_{l∈S} P^m(i, l) P^n(l, j).
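The recursions above say that the k-step TPF is the k-th matrix power of P_T and that the state distribution is a vector-matrix product. A Python sketch with plain-list matrices (helper names are ours; the TPF is the three-state example from these notes):

```python
def mat_mul(A, B):
    """(A B)(i, j) = sum_l A(i, l) B(l, j): the Chapman-Kolmogorov composition."""
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, k):
    """k-step TPF P^k (identity matrix for k = 0)."""
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mat_mul(R, P)
    return R

P = [[0.2, 0.3, 0.5],
     [0.4, 0.2, 0.4],
     [0.6, 0.3, 0.1]]
P0 = [1.0, 0.0, 0.0]                     # initial distribution: start in state 0
P5 = mat_pow(P, 5)                       # 5-step TPF
Pk = [sum(P0[i] * P5[i][j] for i in range(3)) for j in range(3)]  # P_5(j)
lhs = P5                                 # P^(2+3)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
print([round(x, 6) for x in Pk])
```

Comparing `lhs` and `rhs` checks P^{m+n} = P^m P^n numerically for m = 2, n = 3.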
Example: Two-State Markov Chain
For a chain over S = {0, 1} with P(0, 1) = α and P(1, 0) = β (so P(0, 0) = 1 - α, P(1, 1) = 1 - β), we obtain
P_{k+1}(0) = P_k(0)(1 - α) + P_k(1)β
P_{k+1}(1) = P_k(0)α + P_k(1)(1 - β).
Normalization condition:
P_{k+1}(0) + P_{k+1}(1) = 1.
Hence: P_{k+1}(0) = P_k(0)(1 - α - β) + β.
By iteration, we obtain:
P_k(0) = β/(α + β) + (1 - α - β)^k [P_0(0) - β/(α + β)], P_k(1) = 1 - P_k(0).
Note: when |1 - α - β| < 1, as k → ∞: P_k(0) → β/(α + β), P_k(1) → α/(α + β).
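Assuming a two-state chain with P(0, 1) = α and P(1, 0) = β (the values α = 0.3, β = 0.6 below are arbitrary), the closed-form iterate can be checked against the one-step recursion:

```python
alpha, beta = 0.3, 0.6    # assumed example values; require |1 - alpha - beta| < 1
p0 = 1.0                  # P_0(0): chain starts in state 0

def closed_form(k):
    """P_k(0) = b/(a+b) + (1-a-b)^k [P_0(0) - b/(a+b)]."""
    pi0 = beta / (alpha + beta)
    return pi0 + (1.0 - alpha - beta)**k * (p0 - pi0)

pk0 = p0
for k in range(1, 21):
    pk0 = pk0 * (1.0 - alpha - beta) + beta   # P_{k+1}(0) = P_k(0)(1-a-b) + b
    assert abs(pk0 - closed_form(k)) < 1e-12
print(pk0, beta / (alpha + beta))             # P_k(0) approaches beta/(alpha+beta)
```

After 20 steps the geometric term (1 - α - β)^k is negligible and P_k(0) has effectively reached its limit β/(α + β).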
Assume the limiting probabilities P(j) = lim_{k→∞} P_k(j) exist, with P(j) ≥ 0 and Σ_{j∈S} P(j) = 1.
We can write
lim_{k→∞} P_{k+1}(j) = lim_{k→∞} Σ_{i∈S} P_k(i) P(i, j) = Σ_{i∈S} [lim_{k→∞} P_k(i)] P(i, j),
leading to the following set of linear equations:
P(j) = Σ_{i∈S} P(i) P(i, j), j ∈ S    (1.1)
Σ_{j∈S} P(j) = 1    (1.2)
Example
Consider a DTMC X over the state space S = {0, 1, 2} with TPF:
    0.2 0.3 0.5
P = 0.4 0.2 0.4
    0.6 0.3 0.1
Example (Cont.)
For this example we write:
P(0) = 0.2 P(0) + 0.4 P(1) + 0.6 P(2)    (1)
P(1) = 0.3 P(0) + 0.2 P(1) + 0.3 P(2)    (2)
P(2) = 0.5 P(0) + 0.4 P(1) + 0.1 P(2)    (3)
1 = P(0) + P(1) + P(2)    (4)
One of Eqs. (1) - (3) is redundant (these equations are linearly
dependent) and is not used.
We obtain the solution:
P(0)=30/77; P(1)=3/11; P(2)=26/77.
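The quoted solution can be verified exactly with rational arithmetic; the sketch below substitutes it into Eqs. (1.1)-(1.2):

```python
from fractions import Fraction as F

P = [[F(2, 10), F(3, 10), F(5, 10)],
     [F(4, 10), F(2, 10), F(4, 10)],
     [F(6, 10), F(3, 10), F(1, 10)]]
pi = [F(30, 77), F(3, 11), F(26, 77)]   # claimed stationary distribution

# (1.1): P(j) = sum_i P(i) P(i, j);  (1.2): sum_j P(j) = 1
for j in range(3):
    assert pi[j] == sum(pi[i] * P[i][j] for i in range(3))
assert sum(pi) == 1
print([str(x) for x in pi])
```

Using `Fraction` avoids floating-point round-off, so the equalities hold exactly rather than to a tolerance.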
Example: Discrete-Time Birth & Death Markov Chain
A discrete-time Markov chain X = {X_k, k ≥ 0} over the state space S = {0, 1, 2, ...} is said to be a Discrete-Time Birth-and-Death (DTBD) process if its TPF is given by
P(i, j) = λ_i,            for j = i + 1, i ≥ 0
          μ_i,            for j = i - 1, i ≥ 1
          1 - λ_i - μ_i,  for j = i, i ≥ 0
          0,              otherwise
where λ_0 > 0; μ_0 = 0; for i ≥ 1: λ_i > 0, μ_i > 0; and λ_i + μ_i ≤ 1 for i ≥ 0.
[Figure: state-transition diagram, with birth probability λ_i from state i to i+1 and death probability μ_{i+1} from state i+1 to i.]
For j ≥ 1, the stationarity equation (1.1) yields
P(j) = P(j-1)λ_{j-1} + P(j)(1 - λ_j - μ_j) + P(j+1)μ_{j+1}, j ≥ 1.
Rearranging, we obtain the balance equations
P(1)μ_1 - P(0)λ_0 = 0;
P(j+1)μ_{j+1} - P(j)λ_j = P(j)μ_j - P(j-1)λ_{j-1}, j ≥ 1.
Hence, by induction on j,
P(j+1)μ_{j+1} - P(j)λ_j = 0, j ≥ 0.
Define
a_0 = 1; a_j = (λ_0 λ_1 ⋯ λ_{j-1}) / (μ_1 μ_2 ⋯ μ_j), j ≥ 1.
Then P(j) = P(0) a_j, j ≥ 0, and normalization gives
1 = Σ_{j≥0} P(j) = P(0) Σ_{j≥0} a_j.
If a ≜ Σ_{j≥0} a_j < ∞, then
P(j) = a_j / Σ_{i≥0} a_i, j ≥ 0.
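As a sanity check on the product-form solution, take constant rates λ_j = λ and μ_j = μ (an assumed special case, not stated in the notes); then a_j = (λ/μ)^j, the normalizing sum is geometric, and P(j) = (1 - λ/μ)(λ/μ)^j. A Python sketch:

```python
lam, mu = 0.2, 0.5   # assumed constant rates with lam + mu <= 1 and lam < mu
rho = lam / mu       # a_j = (lam/mu)**j, so sum_j a_j = 1/(1 - rho) < infinity

total = 1.0 / (1.0 - rho)                 # closed form of sum_{j>=0} a_j
Pj = [rho**j / total for j in range(50)]  # P(j) = a_j / sum_i a_i (truncated at 50)

# detailed-balance check: P(j+1) mu_{j+1} = P(j) lam_j
for j in range(49):
    assert abs(Pj[j + 1] * mu - Pj[j] * lam) < 1e-12
print(Pj[:5], sum(Pj))
```

The truncated probabilities sum to 1 up to the geometric tail ρ^50, and each consecutive pair satisfies the balance equation P(j+1)μ = P(j)λ.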
Limiting Probabilities
In turn, if a = Σ_{j≥0} a_j = ∞, then P(0) = 0, so P(j) = 0 for all j ∈ S and no steady-state distribution exists.
Example: Finite-State DTBD Chain
Consider a DTBD chain X = {X_k, k ≥ 0} over the finite state space S = {0, 1, ..., N}. The balance equations become
P(j+1)μ_{j+1} - P(j)λ_j = 0, 0 ≤ j ≤ N-1.
Setting
a_0 = 1; a_j = (λ_0 λ_1 ⋯ λ_{j-1}) / (μ_1 μ_2 ⋯ μ_j), 1 ≤ j ≤ N,
and using 1 = Σ_{j=0}^{N} P(j) = P(0) Σ_{j=0}^{N} a_j, we obtain
P(j) = a_j / Σ_{i=0}^{N} a_i, 0 ≤ j ≤ N.
For a DTBD process, when λ_i + μ_i < 1 for some state i, we observe the process to be aperiodic. Then it also has a steady-state distribution, so that
lim_{k→∞} P(X_k = j) = P(j), 0 ≤ j ≤ N.
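The finite-N product-form solution can be cross-checked by iterating the state distribution through the TPF until it settles. A Python sketch with assumed constant rates λ_i = 0.3, μ_i = 0.2 and N = 5 (any rates with λ_i + μ_i ≤ 1 would do):

```python
N = 5
lam = [0.3] * N + [0.0]   # lambda_i, i = 0..N (lambda_N = 0: no birth from state N)
mu = [0.0] + [0.2] * N    # mu_i, i = 0..N (mu_0 = 0)

# a_0 = 1; a_j = (lam_0 ... lam_{j-1}) / (mu_1 ... mu_j), 1 <= j <= N
a = [1.0]
for j in range(1, N + 1):
    a.append(a[-1] * lam[j - 1] / mu[j])
Pstat = [x / sum(a) for x in a]          # P(j) = a_j / sum_i a_i

# cross-check: push the state distribution through the TPF until it settles
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    if i < N: P[i][i + 1] = lam[i]
    if i > 0: P[i][i - 1] = mu[i]
    P[i][i] = 1.0 - lam[i] - mu[i]
dist = [1.0] + [0.0] * N                 # start in state 0
for _ in range(5000):
    dist = [sum(dist[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
print([round(x, 6) for x in Pstat])
```

Because the self-loop probabilities are positive, the chain is aperiodic and the iterated distribution converges to the same vector the product formula gives.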