
Presented by:

Anil Kumar H A (13MVD1002)


Naveen Chaubey (13MVD1037)
Aditya Dwivedi (13MVD1062)

An indexed collection of random variables {Xt}: for each t ∈ T, Xt is a random variable.

T = index set
State space = range (possible values) of all Xt

Stationary Process:

The joint distribution of the X's depends only on their relative positions (it is not affected by a time shift): (Xt1, ..., Xtn) has the same distribution as (Xt1+h, Xt2+h, ..., Xtn+h).
e.g. (X8, X11) has the same distribution as (X20, X23).

Markov Process:

The probability of any future event, given the present, does not depend on the past: for t0 < t1 < ... < tn-1 < tn < t,

P(a ≤ Xt ≤ b | Xtn = xtn, ........., Xt0 = xt0)
     |  future  |   | present |    |   past    |
  = P(a ≤ Xt ≤ b | Xtn = xtn)

Another way of writing this:
P{Xt+1 = j | X0 = k0, X1 = k1, ..., Xt-1 = kt-1, Xt = i} = P{Xt+1 = j | Xt = i}
for t = 0, 1, ... and every sequence i, j, k0, k1, ..., kt-1.
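To make the one-step dependence concrete, here is a minimal simulation sketch in Python (not from the slides; the transition probabilities are those of the two-state defective-rate example that appears later). The next state is drawn from the current state alone; the earlier history is never consulted.

import random

# Transition probabilities of a two-state chain (values taken from the
# defective-rate example later in these slides).
P = {0: [0.25, 0.75],   # P(next = 0 | now = 0), P(next = 1 | now = 0)
     1: [0.50, 0.50]}   # P(next = 0 | now = 1), P(next = 1 | now = 1)

def step(current):
    # The next state is drawn using only the current state (Markov property).
    return random.choices([0, 1], weights=P[current])[0]

state, path = 0, [0]
for t in range(10):
    state = step(state)   # the earlier part of `path` is never used
    path.append(state)
print(path)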

Markov Chains:
State space {0, 1, ...}
Discrete time: T = {0, 1, 2, ...}
Continuous time: T = [0, ∞)
A Markov chain has:
a finite number of states,
the Markovian property,
stationary transition probabilities, and
a set of initial probabilities P{X0 = i} for each state i.

Note:
Pij = P(Xt+1 = j | Xt = i) = P(X1 = j | X0 = i)
Only depends on going ONE step:
stage (t), state i  --->  stage (t + 1), state j  (with prob. Pij)

These are conditional probabilities!
Note that given Xt = i, the process must enter some state at stage t + 1:

state:       0    1    2    ...   j    ...   m
with prob.:  Pi0  Pi1  Pi2  ...   Pij  ...   Pim

so that Σ(j=0..m) Pij = 1.

It is convenient to give the transition probabilities in matrix form. P = [Pij] is an (m+1) × (m+1) matrix; rows are the state you are in at this stage, columns the state you go to:

            to state j
             0     1     2    ...   j    ...   m
from    0 | P00   P01   P02  ...   P0j  ...   P0m |
state   1 | P10   P11   P12  ...   P1j  ...   P1m |
i       2 | P20   P21   P22  ...   P2j  ...   P2m |
       .. |                  ...                  |
        i | Pi0   Pi1   Pi2  ...   Pij  ...   Pim |
       .. |                  ...                  |
        m | Pm0   Pm1   Pm2  ...   Pmj  ...   Pmm |

Rows sum to 1.
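A minimal sketch (Python with numpy assumed; the 3-state matrix is hypothetical, chosen only to illustrate the layout) of storing P as an array and checking that each row sums to 1:

import numpy as np

# Hypothetical 3-state transition matrix:
# row i = current state, column j = next state.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.0, 0.4, 0.6]])

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a conditional distribution
print(P[1, 2])                           # P_12 = P(X_{t+1} = 2 | X_t = 1)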

Example:

t = day index 0, 1, 2, ...
Xt = 0: high defective rate on the t-th day
Xt = 1: low defective rate on the t-th day
Two states (0, 1) ===> m = 1

P00 = P(Xt+1 = 0 | Xt = 0) = 1/4
P01 = P(Xt+1 = 1 | Xt = 0) = 3/4
P10 = P(Xt+1 = 0 | Xt = 1) = 1/2
P11 = P(Xt+1 = 1 | Xt = 1) = 1/2

P = | 1/4  3/4 |
    | 1/2  1/2 |

Note:
Rows sum to 1.
P00 = P(X1 = 0 | X0 = 0) = 1/4 = P(X36 = 0 | X35 = 0)
Also
P(X2 = 0 | X1 = 0, X0 = 1) = P(X2 = 0 | X1 = 0) = P00

What is P(X2 = 0 | X0 = 0)?
This is a two-step transition: from stage 0 to stage 2 (or from stage t to stage t + 2).

[Two-step transition diagram: starting in state 0 at stage t, the chain reaches state 0 at stage t + 2 either through state 0 at stage t + 1 (probability P00·P00) or through state 1 at stage t + 1 (probability P01·P10).]

P(X2 = 0, X1 = 0 | X0 = 0) = P00 · P00
P(X2 = 0 | X0 = 0) = P00^(2) = P00 · P00 + P01 · P10
                             = 1/4 · 1/4 + 3/4 · 1/2 = 7/16 = 0.4375
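The same two-step probability can be read off the (0,0) entry of the matrix product P·P; a short sketch (Python with numpy assumed):

import numpy as np

P = np.array([[0.25, 0.75],    # defective-rate chain from the example above
              [0.50, 0.50]])

P2 = P @ P                     # two-step transition matrix
print(P2[0, 0])                # 0.4375 = 7/16 = P00*P00 + P01*P10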

Properties:
Homogeneous, irreducible, aperiodic.
Limiting state probabilities:

Pj = lim (k→∞) Pj(k),  (j = 0, 1, 2, ...)

exist and are independent of the Pj(0)'s.

If all states of the chain are recurrent and their mean recurrence time is finite, the Pj's are a stationary probability distribution and can be determined by solving the equations

Pj = Σi Pi Pij,  (j = 0, 1, 2, ...)   and   Σi Pi = 1.

Solution ===> equilibrium state probabilities.
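A minimal sketch (Python with numpy assumed) of solving these balance equations numerically, by stacking Pj = Σi Pi Pij with the normalization Σi Pi = 1 into one linear system; the defective-rate chain from the earlier example is used as input:

import numpy as np

def stationary_distribution(P):
    # Solve pi = pi P together with sum(pi) = 1 as one least-squares system;
    # assumes the chain is irreducible and aperiodic so the solution is unique.
    n = P.shape[0]
    A = np.vstack([np.eye(n) - P.T, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.25, 0.75],
              [0.50, 0.50]])
print(stationary_distribution(P))   # [0.4, 0.6] for the defective-rate chain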

Mean recurrence time of Sj:

trj = 1 / Pj

Independence allows us to calculate the time intervals spent in Sj:

Prob(tj = n) = (1 − Pjj) · Pjj^(n−1),  n ∈ {1, 2, ...}

State durations are geometrically distributed with mean 1/(1 − Pjj).
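As a quick check of the geometric-duration claim, a small simulation sketch (Python; the value Pjj = 0.9 is an arbitrary assumption for illustration):

import random

p_jj = 0.9                         # assumed self-transition probability of state j
durations = []
for _ in range(100_000):
    n = 1                          # at least one step is spent in state j
    while random.random() < p_jj:  # stay in j with probability Pjj each step
        n += 1
    durations.append(n)

print(sum(durations) / len(durations))   # close to 1 / (1 - Pjj) = 10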

Example: Consider a communication system which transmits the digits 0 and 1 through several stages. At each stage the probability that the same digit will be received by the next stage, as transmitted, is 0.75. What is the probability that a 0 entered at the first stage is received as a 0 by the 5th stage?

Solution: We want to find P00^(4). The state transition matrix P is given by

P = | 0.75  0.25 |
    | 0.25  0.75 |

Hence

P^2 = | 0.625  0.375 |   and   P^4 = P^2 P^2 = | 0.53125  0.46875 |
      | 0.375  0.625 |                          | 0.46875  0.53125 |

Therefore the probability that a zero will be transmitted through four stages as a zero is P00^(4) = 0.53125.
It is clear that this Markov chain is irreducible and aperiodic.
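A sketch (Python with numpy assumed) reproducing the P^4 computation:

import numpy as np

P = np.array([[0.75, 0.25],
              [0.25, 0.75]])

P4 = np.linalg.matrix_power(P, 4)
print(P4)          # [[0.53125 0.46875], [0.46875 0.53125]]
print(P4[0, 0])    # probability a 0 is still a 0 after four stages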

We have the equations

π0 + π1 = 1,   π0 = 0.75 π0 + 0.25 π1,   π1 = 0.25 π0 + 0.75 π1.

The unique solution of these equations is π0 = 0.5, π1 = 0.5. This means that if data are passed through a large number of stages, the output is independent of the original input and each digit received is equally likely to be a 0 or a 1. This also means that

lim (n→∞) P^n = | 0.5  0.5 |
                | 0.5  0.5 |

Note that

P^8 = | 0.501953125  0.498046875 |
      | 0.498046875  0.501953125 |

(the entries are already close to the limiting value 0.5).

Note also that
π P = (0.5, 0.5) = π,
so π is a stationary distribution.
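Both notes can be verified numerically; a sketch (Python with numpy assumed):

import numpy as np

P = np.array([[0.75, 0.25],
              [0.25, 0.75]])

print(np.linalg.matrix_power(P, 8))   # entries 0.501953125 and 0.498046875
pi = np.array([0.5, 0.5])
print(np.allclose(pi @ P, pi))        # True: pi is a stationary distribution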

Problem:
The CPU of a multiprogramming system is at any time executing instructions from:
a user program ===> problem state (S3)
an OS routine explicitly called by a user program ===> supervisor state (S2)
an OS routine performing a system-wide control task ===> supervisor state (S1)
a wait loop ===> idle state (S0)

Assume the time spent in each state is 50 μs.

Note: S1 should be split into 3 states, (S3, S1), (S2, S1) and (S0, S1), so that a distinction can be made regarding entering S0.

[State transition diagram of the discrete-time Markov model of the CPU: the idle state S0 (wait loop), the supervisor states S1 (system supervisor) and S2 (user supervisor), and the problem state S3 (user programs). The arc probabilities (0.99, 0.01, 0.92, 0.90, 0.98, ...) are listed in the transition probability matrix below.]

Transition Probability Matrix

From \ To    S0     S1     S2     S3
S0           0.99   0.01   0      0
S1           0.02   0.92   0.02   0.04
S2           0      0.01   0.90   0.09
S3           0      0.01   0.01   0.98

P0 = 0.99 P0 + 0.02 P1
P1 = 0.01 P0 + 0.92 P1 + 0.01 P2 + 0.01 P3
P2 =           0.02 P1 + 0.90 P2 + 0.01 P3
P3 =           0.04 P1 + 0.09 P2 + 0.98 P3
1  = P0 + P1 + P2 + P3

The equilibrium state probabilities can be computed by solving this system of equations. So we have:
P0 = 2/9, P1 = 1/9, P2 = 8/99, P3 = 58/99
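A sketch (Python with numpy assumed) that solves the same balance equations numerically and also reads off the utilization figures reported on the next slide:

import numpy as np

# Transition probability matrix from the slide above (rows = from-state).
P = np.array([[0.99, 0.01, 0.00, 0.00],   # S0: idle (wait loop)
              [0.02, 0.92, 0.02, 0.04],   # S1: system supervisor
              [0.00, 0.01, 0.90, 0.09],   # S2: user supervisor
              [0.00, 0.01, 0.01, 0.98]])  # S3: problem state (user programs)

n = P.shape[0]
A = np.vstack([np.eye(n) - P.T, np.ones(n)])   # balance equations + normalization
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)              # [2/9, 1/9, 8/99, 58/99] ~ [0.222, 0.111, 0.081, 0.586]
print(1 - pi[0])       # CPU utilization, ~0.778
print(pi[1] + pi[2])   # fraction of time in the supervisor states, ~0.192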

Utilization of the CPU:
1 − P0 = 7/9 ≈ 77.8%
58.6% of the total time is spent processing user programs (P3)
19.2% of the time (77.8% − 58.6%) is spent in the supervisor states:
11.1% in S1
8.1% in S2
