
Markov chains: Chapman-Kolmogorov equations, classification of states, limiting probabilities

01KRR - STOCHASTIC PROCESSES

October 2010

Markov chains (MC): definition


We consider a stochastic process X = \{X_n, n \in \mathbb{N}\} with discrete time n = 0, 1, 2, \dots that takes on a finite or countable number of possible values: the random variables X_n, n = 0, 1, 2, \dots, are defined on some sample space S and have values in a finite or countable state space E. A trajectory is a sequence of values in the state space. The stochastic process is a Markov chain if

P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0\} = P_{ij}

Markov property: X_{n+1} depends only on X_n. The transition probabilities P_{ij} are nonnegative numbers with \sum_j P_{ij} = 1. The matrix P = [P_{ij}] is the transition matrix. The distribution \mu_0 of X_0 is the initial probability.
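To make the definition concrete, here is a minimal Python sketch (not from the original slides) of a chain on the state space E = \{0, 1, 2\}; the transition matrix P, the initial distribution mu0, and the function simulate are hypothetical choices for illustration, with numpy assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix: each row is a probability vector.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
mu0 = np.array([1.0, 0.0, 0.0])   # initial probability: start in state 0

def simulate(P, mu0, n_steps):
    """Sample a trajectory X_0, ..., X_{n_steps} of the chain."""
    x = rng.choice(len(mu0), p=mu0)         # draw X_0 from mu0
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(P.shape[1], p=P[x])  # X_{n+1} drawn from row P[x]
        path.append(x)
    return path

print(simulate(P, mu0, 10))
```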

Joint distribution in a MC
The joint finite distributions of the stochastic process are computed in terms of \mu_0 and P using the product formula, e.g.

P\{A_0 \cap A_1 \cap A_2 \cap A_3 \cap A_4\} = P\{A_0\} P\{A_1 \mid A_0\} P\{A_2 \mid A_0 \cap A_1\} P\{A_3 \mid A_0 \cap A_1 \cap A_2\} P\{A_4 \mid A_0 \cap A_1 \cap A_2 \cap A_3\}

and the definition of Markov chain. E.g. the probability of the trajectory i_0, i_1, i_2, i_3, i_4 is

P\{X_0 = i_0, X_1 = i_1, X_2 = i_2, X_3 = i_3, X_4 = i_4\}
= P\{X_0 = i_0\} P\{X_1 = i_1 \mid X_0 = i_0\} P\{X_2 = i_2 \mid X_0 = i_0, X_1 = i_1\} P\{X_3 = i_3 \mid X_0 = i_0, X_1 = i_1, X_2 = i_2\} P\{X_4 = i_4 \mid X_0 = i_0, X_1 = i_1, X_2 = i_2, X_3 = i_3\}
= \mu_0(i_0) P_{i_0 i_1} P_{i_1 i_2} P_{i_2 i_3} P_{i_3 i_4}
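With the same assumed P and mu0 as above, the product formula turns a trajectory's probability into a simple product; a sketch:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
mu0 = np.array([1.0, 0.0, 0.0])

def trajectory_prob(path, P, mu0):
    """P{X_0 = i_0, ..., X_n = i_n} = mu0(i_0) * P_{i_0 i_1} * ... * P_{i_{n-1} i_n}."""
    p = mu0[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

print(trajectory_prob([0, 1, 1, 2, 0], P, mu0))   # 1.0 * 0.3 * 0.6 * 0.3 * 0.2
```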

Sufficiency

Theorem
The following conditions are equivalent:
- P\{A \mid B \cap C\} = P\{A \mid B\}: B is sufficient for predicting A given B and C.
- P\{A \cap C \mid B\} = P\{A \mid B\} P\{C \mid B\}: A and C are independent given B.

This applies to the Markov property. E.g. if n = 3 and n + 1 = 4, then

P\{X_4 = i_4 \mid X_3 = i_3, X_2 = i_2, X_1 = i_1, X_0 = i_0\} = P\{X_4 = i_4 \mid X_3 = i_3\}

is to be compared with the first condition with A = \{X_4 = i_4\}, B = \{X_3 = i_3\}, C = \{X_2 = i_2, X_1 = i_1, X_0 = i_0\}.
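For completeness, a one-line derivation of the equivalence (assuming P\{B \cap C\} > 0), using only the chain rule:

P\{A \cap C \mid B\} = \frac{P\{A \cap B \cap C\}}{P\{B\}} = \frac{P\{A \cap B \cap C\}}{P\{B \cap C\}} \cdot \frac{P\{B \cap C\}}{P\{B\}} = P\{A \mid B \cap C\} \, P\{C \mid B\}

so P\{A \mid B \cap C\} = P\{A \mid B\} holds if and only if P\{A \cap C \mid B\} = P\{A \mid B\} P\{C \mid B\}.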

Stationarity

Given the initial point, all pieces of trajectory have the same probability, irrespective of the initial time n:

P\{X_{n+4} = i_4, X_{n+3} = i_3, X_{n+2} = i_2, X_{n+1} = i_1 \mid X_n = i_0\}
= P\{X_{n+1} = i_1 \mid X_n = i_0\} P\{X_{n+2} = i_2 \mid X_{n+1} = i_1, X_n = i_0\} P\{X_{n+3} = i_3 \mid X_{n+2} = i_2, X_{n+1} = i_1, X_n = i_0\} P\{X_{n+4} = i_4 \mid X_{n+3} = i_3, X_{n+2} = i_2, X_{n+1} = i_1, X_n = i_0\}
= P\{X_{n+1} = i_1 \mid X_n = i_0\} P\{X_{n+2} = i_2 \mid X_{n+1} = i_1\} P\{X_{n+3} = i_3 \mid X_{n+2} = i_2\} P\{X_{n+4} = i_4 \mid X_{n+3} = i_3\}
= P_{i_0 i_1} P_{i_1 i_2} P_{i_2 i_3} P_{i_3 i_4}

Chapman-Kolmogorov Equations
The 2-step transition probabilities are

P\{X_{k+2} = j \mid X_k = i\} = \sum_{l \in E} P\{X_{k+2} = j, X_{k+1} = l \mid X_k = i\}
= \sum_{l} P\{X_2 = j, X_1 = l \mid X_0 = i\}   (by stationarity)
= \sum_{l} P\{X_2 = j \mid X_1 = l\} P\{X_1 = l \mid X_0 = i\}   (Markov)
= \sum_{l} P_{il} P_{lj} = P^2_{ij}

This is the (i, j) entry of the second power of the transition matrix, P^2 = PP. The n-th power is defined by P^n = P^{n-1} P for each n > 1, and

P\{X_{n+k} = j \mid X_k = i\} = P^n_{ij},

where P^n_{ij} denotes the (i, j) entry of P^n. The equations

P^{m+n}_{ij} = \sum_k P^m_{ik} P^n_{kj}

are the Chapman-Kolmogorov equations.
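The Chapman-Kolmogorov equations are just matrix multiplication, so they can be checked numerically with the hypothetical P used above:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

m, n = 3, 4
lhs = np.linalg.matrix_power(P, m + n)   # P^{m+n}
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)   # P^m P^n
assert np.allclose(lhs, rhs)             # Chapman-Kolmogorov, entrywise
print(lhs[0, 2])                         # P^{m+n}_{02}
```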

Use of matrix notation


If P is the transition matrix of the MC X_n, n = 0, 1, 2, \dots, E is the state space, and \mu_n is the density of X_n written as a row vector, then

(\mu_n P)(y) = \sum_{x \in E} \mu_n(x) P_{xy} = \sum_{x \in E} P\{X_n = x\} P\{X_{n+1} = y \mid X_n = x\} = P\{X_{n+1} = y\}

for every y \in E, that is, \mu_n P = \mu_{n+1}.

In the same spirit, if f : E \to \mathbb{R} and we denote by the same symbol the column vector [f(x)]_{x \in E}, then

(Pf)(x) = \sum_{y} P\{X_{n+1} = y \mid X_n = x\} f(y) = E[f(X_{n+1}) \mid X_n = x]
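In code, the two operations are a row-vector/matrix product and a matrix/column-vector product; a sketch with the same assumed P (mu_n and f are arbitrary illustrative choices):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
mu_n = np.array([0.2, 0.5, 0.3])   # assumed density of X_n, as a row vector
f = np.array([1.0, -1.0, 2.0])     # an arbitrary function f : E -> R

mu_next = mu_n @ P   # density of X_{n+1}: (mu_n P)(y) = sum_x mu_n(x) P_{xy}
Ef = P @ f           # (P f)(x) = E[f(X_{n+1}) | X_n = x], one entry per state x
print(mu_next, Ef)
```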

Classification of states in a MC
Let X_n, n \geq 0, be a MC with state space E and transition matrix P = \{P_{ij}\}. The elements i, j, k are states in E.

State j is said to be accessible from state i if P^n_{ij} > 0 for some n \geq 0. In such a case we write i \to j. Otherwise, we write i \not\to j; in such a case the event \bigcup_{n=0}^{\infty} \{X_n = j\} has probability 0, given X_0 = i. In fact,

P\{\bigcup_{n=0}^{\infty} \{X_n = j\} \mid X_0 = i\} \leq \sum_{n=0}^{\infty} P\{X_n = j \mid X_0 = i\} = \sum_{n=0}^{\infty} P^n_{ij} = 0

If i \to j and j \to i, we say that i and j communicate, and write i \leftrightarrow j. Communicating couples of states form a graph on the state space.
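For a finite chain, accessibility is reachability in the directed graph with an edge i \to j whenever P_{ij} > 0, and the communicating classes are its strongly connected components. A minimal sketch (function names are hypothetical; no graph library assumed):

```python
import numpy as np

def reachable(P):
    """reach[i, j] is True iff P^n_{ij} > 0 for some n >= 0."""
    k = P.shape[0]
    reach = np.eye(k, dtype=bool) | (P > 0)
    for _ in range(k):   # repeated squaring reaches the transitive closure
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    return reach

def communicating_classes(P):
    """Partition the states into communicating classes."""
    reach = reachable(P)
    comm = reach & reach.T            # comm[i, j] True iff i <-> j
    seen, classes = set(), []
    for i in range(P.shape[0]):
        if i not in seen:
            cls = sorted(np.nonzero(comm[i])[0].tolist())
            classes.append(cls)
            seen.update(cls)
    return classes

# Hypothetical reducible chain: state 2 leads to {0, 1} but is never re-entered.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4]])
print(communicating_classes(P))      # [[0, 1], [2]]
```

A chain is irreducible exactly when this returns a single class; the example above has two.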

Irreducible MC
The relation of communication is an equivalence relation. In fact:
- Each state i communicates with itself, because P^0_{ii} = 1.
- The relation is symmetric, because i \leftrightarrow j is the same as j \leftrightarrow i.
- The relation is transitive. This requires a proof: if i \leftrightarrow j and j \leftrightarrow k, then P^m_{ij} > 0 and P^n_{jk} > 0 for some m and some n. From the Chapman-Kolmogorov equations it follows that

P^{m+n}_{ik} = \sum_r P^m_{ir} P^n_{rk} \geq P^m_{ij} P^n_{jk} > 0

The communication relation divides the state space into a number of classes. In each class the states communicate with each other. A MC is said to be irreducible if there is only one class, that is, if all couples of states communicate.

Recurrence
Let f_i be the probability that, starting in state i, the process will ever reenter state i, i.e.

f_i = P\{\bigcup_{n=1}^{\infty} \{X_n = i\} \mid X_0 = i\}

State i is said to be recurrent if f_i = 1, transient if f_i < 1.
- If state i is recurrent, the Markov property implies that the process will visit i infinitely many times.
- If state i is transient, then there is a positive probability 1 - f_i of never entering that state again. Moreover, the probability of visiting state i exactly n times is f_i^{n-1}(1 - f_i): the distribution of the number of visits is Geometric(1 - f_i), and the mean number of visits is 1/(1 - f_i).

It follows from the previous points that state i is recurrent if and only if the expected number of visits to i is \infty, and transient if and only if the expected number of visits to i is finite.
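The expected-number-of-visits criterion can be explored numerically with the reducible example above, where state 2 is transient (f_2 = 0.4) and state 0 is recurrent; a sketch of the partial sums of P^n_{ii}:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4]])

S = np.zeros_like(P)
Pn = np.eye(3)
for n in range(1, 2001):   # partial sums of P^n, n = 1, ..., 2000
    Pn = Pn @ P
    S += Pn

print(S[0, 0])   # grows like 0.5 * n: state 0 is recurrent
print(S[2, 2])   # converges to 0.4/(1 - 0.4) = 2/3: state 2 is transient
```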

Recurrence theorem
Theorem
A state i is recurrent if \sum_{n=1}^{\infty} P^n_{ii} = \infty; transient if \sum_{n=1}^{\infty} P^n_{ii} < \infty.

Proof. The number of visits is \sum_{n=1}^{\infty} \mathbf{1}(X_n = i), therefore

E[\sum_{n=1}^{\infty} \mathbf{1}(X_n = i) \mid X_0 = i] = \sum_{n=1}^{\infty} E[\mathbf{1}(X_n = i) \mid X_0 = i] = \sum_{n=1}^{\infty} P\{X_n = i \mid X_0 = i\} = \sum_{n=1}^{\infty} P^n_{ii}

Classes and recurrence

Theorem
If i is recurrent and i \leftrightarrow j, then j is recurrent. In each class the states are either all recurrent or all transient.

Proof. Because of the assumption i \leftrightarrow j, P^k_{ij} > 0 and P^m_{ji} > 0 for some k and m. For all n, we have P^{m+n+k}_{jj} \geq P^m_{ji} P^n_{ii} P^k_{ij}. Summing on n,

\sum_{n=1}^{\infty} P^{m+n+k}_{jj} \geq P^m_{ji} P^k_{ij} \sum_{n=1}^{\infty} P^n_{ii} = \infty

Ergodic MC
If j is transient, then \sum_{n=1}^{\infty} P^n_{ij} < \infty for all i, and P^n_{ij} \to 0 as n \to \infty.

Let \mu_{jj} denote the expected number of transitions needed to return to state j:

\mu_{jj} = \infty if j is transient; \mu_{jj} = \sum_{n=1}^{\infty} n f^{(n)}_{jj} if j is recurrent,

where f^{(n)}_{jj} is the probability that the first return to j occurs at the n-th transition.
- State j is said to be positive recurrent if \mu_{jj} < \infty and null recurrent if \mu_{jj} = \infty.
- State j is said to be periodic if there is an integer d > 1 such that P^n_{jj} = 0 whenever d does not divide n. The minimal such d is called the period. State j is aperiodic if there is no such d > 1.

Theorem
If j is aperiodic, then \lim_{n \to \infty} P^n_{ij} = 1/\mu_{jj}.

If j has period d, then \lim_{n \to \infty} P^{nd}_{ij} = d/\mu_{jj}.

A state which is both positive recurrent and aperiodic is said to be ergodic.
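The period of a state can be estimated as the gcd of the times n at which P^n_{jj} > 0, up to a finite horizon N (a numerical sketch, not a proof; the function name is hypothetical):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, j, N=50):
    """gcd of {n <= N : P^n_{jj} > 0}; returns 0 if j is never re-entered by time N."""
    times = []
    Pn = np.eye(P.shape[0])
    for n in range(1, N + 1):
        Pn = Pn @ P
        if Pn[j, j] > 1e-12:
            times.append(n)
    return reduce(gcd, times, 0)

# Hypothetical deterministic 2-cycle: returns to state 0 only at even times.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))   # 2
```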

Stationary distribution
A probability distribution \pi = \{\pi_j, j \geq 0\} is said to be stationary for the MC if

\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}

If the initial probability distribution is stationary, then X_n has the same distribution for all n.
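For a finite state space the stationary distribution can be computed by solving the linear system \pi P = \pi together with \sum_j \pi_j = 1; a sketch with the hypothetical 3-state P used earlier:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

k = P.shape[0]
A = np.vstack([P.T - np.eye(k),   # balance equations: pi (P - I) = 0
               np.ones(k)])       # normalization: sum_j pi_j = 1
b = np.zeros(k + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares solves the consistent system
print(pi)
print(np.allclose(pi @ P, pi))   # stationarity check: pi P = pi
```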

Theorem
An irreducible aperiodic MC belongs to one of the following two classes:
- The states are all transient or all null recurrent: in this case P^n_{ij} \to 0 as n \to \infty for all i, j, and there exists no stationary distribution.
- All states are positive recurrent, that is, \pi_j = \lim_{n \to \infty} P^n_{ij} > 0.

Proof: see Theorem 4.3.3 in Ross. In the second case \pi = \{\pi_j, j = 0, 1, \dots\} is a stationary distribution, and there exist no other stationary distributions.

When the situation is as described in the second case, the MC is said to be ergodic. In the irreducible, positive recurrent, periodic case we have that \pi = \{\pi_j, j = 0, 1, \dots\} is the unique nonnegative solution of

\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \qquad \sum_j \pi_j = 1

Here \pi_j must be interpreted as the long-run proportion of time that the MC is in state j: \pi_j = 1/\mu_{jj}.
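The long-run-proportion interpretation can be checked by simulation: the empirical occupation frequencies of one long trajectory should approach \pi (a Monte Carlo sketch with the same assumed P):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n_steps = 200_000
counts = np.zeros(3)
x = 0                          # start anywhere: the limit does not depend on it
for _ in range(n_steps):
    x = rng.choice(3, p=P[x])
    counts[x] += 1

print(counts / n_steps)        # close to the stationary pi computed above
```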

Sojourn times

Let T_j = \min\{n > 0 : X_n \neq j\}, given X_0 = j: the sojourn time in state j. At any transition, the probability of moving out of state j is 1 - P_{jj}, independently of the past. Thus, T_j has a Geometric distribution with parameter 1 - P_{jj}:

P\{T_j = n\} = P_{jj}^{n-1}(1 - P_{jj}), \qquad n \geq 1,

and

E[T_j] = \sum_{n=1}^{\infty} n P\{T_j = n\} = \frac{1}{1 - P_{jj}}
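A quick simulation check of the geometric sojourn law, with a hypothetical holding probability P_{jj} = 0.6, so that E[T_j] = 1/(1 - 0.6) = 2.5:

```python
import numpy as np

rng = np.random.default_rng(2)
p_stay = 0.6                      # hypothetical P_{jj}

def sojourn_time():
    """Steps until the chain first leaves state j, starting from X_0 = j."""
    t = 1
    while rng.random() < p_stay:  # at each transition, stay with probability P_{jj}
        t += 1
    return t

samples = [sojourn_time() for _ in range(100_000)]
print(np.mean(samples))           # approximately 1/(1 - p_stay) = 2.5
```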
