Sei sulla pagina 1di 208

Lecture Notes

on
Teletrac Engineering
Prof. Tony T. Lee
(Prepared by Cathy Chan)
Department of Information Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
Email: ttlee@ie.cuhk.edu.hk
Fall 1998
Contents
1 Markov Chain 1
1.1 Chapman-Kolmogorov Equation . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Recurrence and Transience . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Periodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Limiting Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6 Branching Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.7 Time Reversible Markov Chain . . . . . . . . . . . . . . . . . . . . . . . . . 41
2 Exponential Distribution and Poisson Process 50
2.1 Exponential Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.2 Counting Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.3 Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.4 Interarrival Time and Waiting Time Distribution . . . . . . . . . . . . . . . 59
2.5 Order Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.6 Combination and Splitting of Poisson Processes . . . . . . . . . . . . . . . . 66
3 Continuous Time Markov Chain 82
i
3.1 Birth-death Process (M/M/1) . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2 The Kolmogorov Dierential Equation . . . . . . . . . . . . . . . . . . . . . 87
3.3 Limiting Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.4 Time Reversibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4 Renewal Process 101
4.1 Useful Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2 Mean Value Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3 Renewal Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.4 Renewal Equations and the Elementary Renewal Theorem . . . . . . . . . . 111
4.5 Limiting Distribution of Residual Lifetime . . . . . . . . . . . . . . . . . . . 117
4.6 The Inspection Paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.7 Life, Age (current life) and Residual Life . . . . . . . . . . . . . . . . . . . . 122
4.8 Alternating Renewal Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.9 Renewal Reward Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.9.1 The Average Current Life (Age) of a Renewal Process . . . . . . . . . 128
4.10 Average Residual Life (Excess) of a Renewal Process . . . . . . . . . . . . . 129
4.11 Semi-Markov Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5 Queueing Theory 139
5.1 Littles Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.2 M/M/1 Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.3 Queueing System with Bulk Service . . . . . . . . . . . . . . . . . . . . . . . 148
5.4 M/M/k, Erlang Loss System . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.5 M/G/1 Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
ii
5.5.1 Pollaczek-Khinchin (P-K) formula . . . . . . . . . . . . . . . . . . . . 152
5.5.2 Busy Periods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.6 M/G/1 Queues with Priority . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.6.1 Non-preemptive Priority . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.6.2 Preemptive Resume Priority . . . . . . . . . . . . . . . . . . . . . . . 174
5.7 Burkes Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.8 Open Queueing Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.8.1 Tandem Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.8.2 Queues with Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.9 Closed Queueing Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.9.1 Arrival Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.9.2 Mean Value Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.10 G/G/1 Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
iii
Chapter 1
Markov Chain
A discrete-time Markov Chain is represented by
{X
n
, n = 0, 1, 2, . . . },
where X
n
{0, 1, 2, . . . }.
X
n
= i : the process is in state i at time n.
Transition Probability
Memoryless property: In a Markov chain, the state of the process de-
pends only on the previous state and is independent of all previous states.
P{X
n+1
= j | X
n
= i, X
n1
= i
n1
, . . . , X
0
= i
0
}
= P{X
n+1
= j | X
n
= i} = P
ij
1
We have
P
ij
0, i, j 0

j=0
P
ij
= 1, i = 0, 1, . . .
One-step transition probability matrix
P =
_

_
P
00
P
01
P
02
. . .
P
10
P
11
P
12
. . .
.
.
.
.
.
.
.
.
.
.
.
.
_

_
n-step transition probability
P
(n)
ij
= P{X
n+l
= j | X
l
= i}, n 0, i, j 0
= P{X
n
= j | X
0
= i}
=

k=0
P{X
n
= j, X
1
= k | X
o
= i}
=

k=0
P{X
1
= k | X
0
= i}P{X
n
= j | X
0
= i, X
1
= k}
=

k=0
P
ik
P
(n1)
kj
The n-step transition probability matrix is the n-th power of the one-step
transition probability matrix.
P
(n)
= [P
(n)
ij
] = P
n
.
2
1.1 Chapman-Kolmogorov Equation
P
(n+m)
ij
=

k=0
P
(n)
ik
P
(m)
kj
for all n, m 0, for all i, j.
Proof.
P
(n+m)
ij
= P{X
n+m
= j | X
0
= i}
=

k=0
P{X
n+m
= j, X
n
= k | X
0
= i}
=

k=0
P{X
n+m
= j | X
n
= k, X
0
= i}P{X
n
= k | X
0
= i}
=

k=0
P{X
n+m
= j | X
n
= k}P{X
n
= k | X
0
= i}
=

k=0
P
(m)
kj
P
(n)
ik
1.2 Accessibility
State j is accessible from state i (i j) if P
n
ij
> 0 for some n 0.
State j is accessible from state i i
starting from state i, it will ever enter state j.
3
If j is not accessible from i, then
P{ever enter j | start in i} = P
_

_
n=0
[X
n
= j | X
0
= i]
_

n=0
P{X
n
= j | X
0
= i}
=

n=0
P
n
ij
= 0
If two state i and j are accessible to each other, we say that i and j
communicate (represented as i j).
Communication is an equivalence relation:
1. i i
2. i j then j i
3. i j and j k then i k.
Proof:
i j: P
(n)
ij
> 0 for some n.
j k: P
(m)
jk
> 0 for some m.
4
Then
P
n+m
ik
=

r=0
P
n
ir
P
m
rk
P
n
ij
P
m
jk
> 0
If two states i j, then i and j are in the same class. (S/

partition)
If there is only one class then the chain is irreducible.
1.3 Recurrence and Transience
For any state i,
Let f
i
be the probability that starting in state i, the process will return
to state i.
_

_
state i is recurrent if f
i
= 1
state i is transient if f
i
< 1
If a state i is recurrent, then it will reenter state i innitely many times.
However, if a state i is transient, it will be in state i exactly n times
with probability f
n1
i
(1f
i
). In other words, state i will be visited only
a nite number of times (mean =
1
1f
i
).
5
Not all states can be transient.
Suppose
A
n
=
_

_
1 if X
n
= i
0 if X
n
= i

n=0
A
n
= no. of time periods that the process is in state i.
E
_

n=0
A
n
| X
0
= i
_
=

n=0
E[A
n
| X
0
= i]
=

n=0
1 P{X
n
= i | X
0
= i} + 0 P{X
n
= i | X
0
= i}
=

n=0
P
n
ii
= 1 +

n=1
P
n
ii
Thus,
_

_
state i is recurrent if

n=1
P
n
ii
=
state i is transient if

n=1
P
n
ii
<
6
Or equivalently,
_

_
state i is recurrent if f
i
= 1
state i is transient if f
i
< 1
Example
A gambler wins with probability p and loses with prob 1-p, where 0 <
p < 1.
-3 -2 -1 0 1 2
p
1-p
p
1-p
i-1 i i+1
There is only one class. Are the states all recurrent or all transient?
Consider

n=1
P
n
00
.
P
2n+1
00
= 0 for n = 1, 2, . . . .
After 2n trials, what is the probability of breaking even? This happens
if we won n trials and lost n trials. Thus,
P
2n
00
=
_
2n
n
_
p
n
(1 p)
n
=
(2n)!
n!n!
_
p(1 p)

n
, n = 1, 2, 3, . . .
By Stirlings approximation n! n
n
e
n

2n, we have
P
2n
00

_
4p(1 p)
_
n

n
7
Now, 4p(1 p) 1 and 4p(1 p) = 1 when p =
1
2
. Thus,
p =
1
2
:

n=1
1

n
= and the chain is recurrent.
p =
1
2
:

n=1
_
4p(1 p)
_
n

n
<
Only one-dimensional and two-dimensional symmetric random walks are
recurrent.
More Examples
1. Markov chain with 3 states, S = {0, 1, 2}.
P =
_

_
1
2
1
2
0
1
2
1
4
1
4
0
1
3
2
3
_

_
0 2 1 1/2
1/2
1/2
1/4
1/4
1/3
2/3
There is only 1 class and the chain is irreducible.
8
2. Markov chain with 4 states, S = {0, 1, 2, 3}.
P =
_

_
1
2
1
2
0 0
1
2
1
2
0 0
1
4
1
4
1
4
1
4
0 0 0 1
_

_
1/2
1/2
1/2
1/2
1/4
1
1/4
1/4
2
3
1 0
1/4
States 0 and 1 are recurrent, state 2 is transient and state 3 is recurrent
and absorbing. We have 3 classes: {0,1}, {2} and {3}.
3. Markov chain with 5 states, S = {0, 1, 2, 3, 4}.
P =
_

_
1
2
1
2
0 0 0
1
2
1
2
0 0 0
0 0
1
2
1
2
0
0 0
1
2
1
2
0
1
4
1
4
0 0
1
2
_

_
9
1/2
1/2
1/2
1/2
1/4
1/2
2
3
1 0
1/2
4
1/4
1/2
1/2
1/2
There are 3 classes: {0,1} (recurrent), {2,3} (recurrent) and {4} (tran-
sient).
Some Remarks on Recurrence and Transience
In a nite Markov chain, not all states can be transient. (i.e. at least
one state must be recurrent.)
Suppose S = {0, 1, . . . , M} and all states are transient. Let T
i
be the
last time the process is in state i.
After T
0
, state 0 will never be visited.
After T
1
, state 1 will never be visited.
.
.
.
After T = max{T
0
, T
1
, . . . , T
M
}, no states will be visited.
But the chain must be in some state contradiction.
Recurrence is a class property. (i.e. every state in the class has the
property.)
10
Suppose i is recurrent and i j. Then P
k
ij
> 0 and P
m
ji
> 0 for some k
and m.
P
m+n+k
jj
=

r
P
m
jr
P
n+k
rj
P
m
ji
P
n+k
ij
P
m
ji
P
n
ii
P
k
ij

n=1
P
m+n+k
jj
P
m
ji
P
k
ij

n=1
P
n
ii
=

n=1
P
n
jj
=
Thus, if i is recurrent and i j, then j must be recurrent.
Transience is a class property.
Suppose S/

= {S
1
, S
2
, . . . }. If any state in the class is transient, then
the whole class is transient.
All states of a nite irreducible Markov chain are recurrent. Otherwise,
the whole class is transient.
1.4 Periodicity
We dene the period of state i, d(i), to be the greatest common divisor
of all integers n for which P
n
ii
> 0.
Example
11
0
n
1
1
p
q
1
p
q
2
p
q
n-1
p
3
p
q
States 1, 2, . . . , n 1 are transient and have period 2. States 1 and n
are recurrent with period 1.
A state with period 1 is said to be aperiodic.
A state i is positive recurrent if starting in i, the expected time until the
process returns to state i is nite.
positive recurrent + aperiodic state = ergodic state.
1.5 Limiting Probability
Examples
1. P =
_

_
1 0
0 1
_

_
0 1
1
1
P
n
= P for all n and the Markov chain has limiting probability depend-
ing on its initial state.
12
2. P =
_

_
0 1
1 0
_

_
0 1
1
1
P
n
=
_

_
P n is odd
I =
_

_
1 0
0 1
_

_
n is even
The Markov chain is periodic and has no limiting probability.
3. P =
_

_
1
2
1
2
0 1
_

_
0 1 1/2
1/2
1
P
n
=
_

_
(
1
2
)
n
1 (
1
2
)
n
0 1
_

_
lim
n
P
n
=
_

_
0 1
0 1
_

0
= 0

1
= 1
State 0 is transient; after the process starts from state 0, there is a
positive probability that it will never return to that state.
13
If the Markov chain is irreducible (i j, i, j C) and ergodic, then
lim
n
P
(n)
ij
=
j
.
lim
n
P{X
n
= i} =
i
P{X
n+1
= j} =

i=0
P{X
n+1
= j | X
n
= i}P{X
n
= i}
=

i=0
P
ij
P{X
n
= i}
As n ,

j
=

i=0
P
ij

i
,

j=0

j
= 1
= P
where = [
0

1
. . . ].
Example
0 1 0.7
0.3
0.4
0.6
One-step transition probability
P =
_

_
0.7 0.3
0.4 0.6
_

_
14
Two-step transition probability
P
(2)
= P
2
=
_

_
0.61 0.39
0.52 0.48
_

_
Four-step transition probability
P
(4)
= P
4
=
_

_
0.5749 0.4251
0.5668 0.4332
_

_
Eight-step transition probability
P
(8)
= P
8
=
_

_
0.572 0.428
0.570 0.430
_

_
The limiting probabilities can be found by
P

=
_

0

1

0

1
_

_
Alternatively, we can solve
= P and
0
+
1
= 1
or
_

0
=
0
0.7 +
1
0.4

1
=
0
0.3 +
1
0.6
and
0
+
1
= 1
to obtain
0
=
4
7
= 0.428 . . . and
1
=
3
7
= 0.571 . . . .
Example
15
P =
_

_
1
1
_

_
0 1
1
1

[
0

1
] = [
0

1
]
_

_
1
1
_

0
+
1
= 1

0
=
0
+
1

1
=
0
(1 ) +
1
(1 )

0
=

1

0
+
1
=

1

1
+
1
= 1
=
1 +
1

1
= 1

1
=
1
1+

0
=

1+
Example (Slotted Aloha System)
Satellite
Ground Stations
16
Multiaccess Channel
1. In each time slot, only one packet can be transmitted successfully.
2. Each backlogged node retransmits with some xed probability p.
2.1 The number of slots j from a collision until the retransmission is a
random variable with geometric distribution.
P
j
= p(1 p)
j1
2.2 If there are k backlogged nodes, the probability that i backlogged
nodes will retransmit a packet is
b
i
=
_
k
i
_
p
i
(1 p)
ki
3. Let a
i
be the probability that i unbacklogged nodes transmit packets in
a given slot.
4. The state of the system is the number of backlogged nodes.
From one slot to the next, the state transition probabilities are depicted
as follows.
17

B
E
r
r
r
r
r
r
r
r
rj
d
d
d
d
d
d
d
d
d
k
k 1 a
0
b
1
k a
1
b
0
+a
0
(1 b
1
)
k + 1 a
1
(1 b
0
)
k +i,
i 2
a
i
Prob. Unbacklogged Backlogged
0 1
1 0
0 0,2,3,. . .
1 1,2,3,. . .
i 0,1,2,. . .
0
k+1
1
2 k-1
k
P
P
P
02
10
21
Probability of a successful transmission is
P
succ
(k) = a
1
b
0
+ a
0
b
1
For any > 0, there is an N such that
P
succ
(N) < , for all n > N.
18
Proof. Let N
0
be the smallest integer such that
N
0
1 + N
0
> 1 p
N
0
> (1 +N
0
)(1 p)
N
0
(1 p)
N
0
1
> (1 +N
0
)(1 p)
N
0
N
0
p(1 p)
N
0
1
> (1 +N
0
)p(1 p)
N
0
b
1
(N
0
) > b
1
(N
0
+ 1)
a
1
(1 p)
N
0
+ a
0
N
0
p(1 p)
N
0
1
> a
1
(1 p)
N
0
+1
+ a
0
(N
0
+ 1)p(1 p
0
)
N
0
P
succ
(N
0
) > P
succ
(N
0
+ 1)
Therefore, P
succ
(k) is monotonically decreasing and
N s.t. P
succ
(N) <
Let
I
k
=
_

_
1
if the rst time the system departs state k
it goes to state k 1
0
if the rst time it departs state k it goes to
other state k + i (i 1), or it has never been in state k
19
Sequence of states Probability
k k 1 P
k,k1
k k k 1 P
k,k
P
k,k1
.
.
.
.
.
.
k k
. .
n
k 1 P
n1
k,k
P
k,k1
P{I
k
= 1 | the state k is ever visited} =
P
k,k1
1 P
k,k
P{I
k
= 0 | the state K is ever visited} = 1
P
k,k1
1 P
k,k
=
1 P
k,k
P
k,k1
1 P
k,k
E
_

k=0
I
k
_
=

k=0
E[I
k
]
=

k=0
P{I
k
= 1}
=

k=0
_
P{I
k
= 1 | k is ever visited}P{k is ever visited}
+P{I
k
= 1 | k is never visited}P{k is never visited}
_

k=0
P{I
k
= 1 | k is ever visited}
=

k=0
P
k,k1
1 P
k,k
20
Thus,
E
_

k=0
I
k
_

k=0
a
0
b
1
1 a
0
(1 b
1
) a
1
b
0
Consider the denominator on the right hand side,
1 b
1
< 1
a
0
(1 b
1
) < a
0
1 a
0
(1 b
1
) a
1
b
0
> 1 a
0
a
1
b
0
> 1 a
0
a
1
> 0
So we have
E
_

k=0
I
k
_

_
a
0
1 a
0
a
1
_

k=0
kp(1 p)
k1
=
a
0
(1 a
0
a
1
)p
<

k=0
I
k
<
Hence, there are only nite number of states such that I
k
= 1.
Let M be the largest state such that I
M
= 1. Then all the states
n > M are transient. Since there is only one class, all the states must
be transient.
Let T
0
, T
1
, . . . , T
k
, . . . , T
M
be the time that state k will never be visited
again. Then after T = max{T
0
, . . . , T
M
} the system will never transmit
any packet successfully.
P
succ
(n) = 0 for n > M
21
Since T is nite, only nite number of packets can be transmitted.
Example (A Markov Chain in Genetics)
Consider a large population, each individual possessing a pair of genes, each
of which could be type A or type a. So, there are three types of individuals:
AA, aa, Aa.
The make-up of the next generation is determined by random reproduc-
tion.
1. Initially,
P(AA) = p
0
P(aa) = q
0
, p
0
+ q
0
+ r
0
= 1
P(Aa) = r
0
22
2. What is the percentage of AA, aa and Aa types in the next generation?
p
1
= P(AA)
= P{AA | both parents are AA} p
2
0
+P{AA | one parent is AA, one is Aa} (p
0
r
0
+ r
0
p
0
)
+P{AA | both parents are Aa} r
2
0
+P{AA | at least one parent is aa} [1 (1 q
0
)
2
]
= p
2
0
+
1
2
2p
0
r
0
+
1
4
r
2
0
=
_
p
0
+
r
0
2
_
2
q
1
= P(aa) =
_
q
0
+
r
0
2
_
2
r
1
= 2
_
p
0
+
r
0
2
__
q
0
+
r
0
2
_
3. What is the percentage of AA, Aa and aa types in the following gener-
ation?
p
2
=
_
p
1
+
r
1
2
_
2
=
_
_
p
0
+
r
0
2
_
2
+
_
p
0
+
r
0
2
__
q
0
+
r
0
2
_
_
2
=
_
_
p
0
+
r
0
2
__
p
0
+
r
0
2
+ q
0
+
r
0
2
_
_
2
=
_
p
0
+
r
0
2
_
2
= p
1
23
q
2
= q
1
r
2
= r
1
p
n
= p
1
, q
n
= q
1
, r
n
= r
1
for n 1
The percentage of gene pairs in the population remains stable after the
rst generation.
Alternatively, the process can be modeled as a Markov chain. Suppose
{X
n
} is the genetic state of the n
th
generation, where X
n
{AA, aa, Aa}.
The transition probability from state AA to state AA is
P{AA | AA} = P{X
1
= AA | X
0
= AA}
= P{AA | AA, married with AA} p
0
+P{AA | AA, married with aa} q
0
+P{AA | AA, married with Aa} r
0
= p
0
+
r
0
2
24
Similarly, the other transition probabilities can be found to obtain the
transition probability matrix
P =
_

_
p
0
+
r
0
2
0 q
0
+
r
0
2
0 q
0
+
r
0
2
p
0
+
r
0
2
p
0
2
+
r
0
4
q
0
2
+
r
0
4
p
0
2
+
q
0
2
+
r
0
2
_

_
where the states are AA, aa and Aa from left to right and from top to bottom.
Letting = p
0
+
r
0
2
, we have
P =
_

_
0 1
0 1

2
1
2
1
2
_

_
AA Aa aa

1
1
/2
(1)/2

1/2
Let = [
0

1

2
] be the limiting probabilities.

0
=
0
+
1
2

1
= (1 )
1
+
1
2
(1 )
2

2
= (1 )
0
+
1
+
1
2

2
and
0
+
1
+
2
= 1
25

0
=
1
2


1

2

1
=
1
2


2
_
1
2
_

1
_
+
1
2
_
1

_
+ 1
_

2
= 1
_

2
+ (1 )
2
+ 2(1 )
_

2
= 2(1 )

2
= 2(1 ) = 2
_
p
0
+
r
0
2
__
q
0
+
r
0
2
_
= r
1

0
=
1
2

1
2(1 ) =
2
=
_
p
0
+
r
0
2
_
2
= p
1

1
=
_
q
0
+
r
0
2
_
2
= q
1
p
n
= p
1
=
_
p
0
+
r
0
2
_
2
, q
n
= q
1
=
_
q
0
+
r
0
2
_
2
,
r
n
= r
1
= 2
_
p
0
+
r
0
2
__
q
0
+
r
0
2
_
Example (Gamblers Ruin Problem)
0 1 2 i-1 i i+1 N-1 N
p
1-p 1-p 1-p 1-p
p p p
There are 3 classes:
{0}, {1, . . . , N 1}, {N}
recurrent transient recurrent
26
Let P
i
, i = 1, . . . , N 1 denote the probability that starting with i the
gamblers fortune will reach N.
P
i
= P{starting with i units, his fortune will reach N}
= P{starting with i-1 units, he reaches N | he loses the initial play} (1 p)
+P{starting with i+1 units, he reaches N | he wins the initial play} p
= pP
i+1
+ qP
i1
, i = 1, 2, . . . , N 1
P
i+1
P
i
=
1
p
[P
i
qP
i1
] P
i
=
q
p
[P
i
P
i1
] , i = 1, 2, . . . , N 1
Since P
0
= 0,
P
2
P
1
=
q
p
(P
1
P
0
) =
q
p
P
1
P
3
P
2
=
q
p
(P
2
P
1
) =
_
q
p
_
2
P
1
.
.
.
P
i
P
i1
=
_
q
p
_
i1
P
1
.
.
.
P
N
P
N1
=
_
q
p
_
N1
P
1
P
i
P
1
= P
1
_
q
p
+
_
q
p
_
2
+ +
_
q
p
_
i1
_
27
P
i
=
_

_
1
_
q
p
_
i
1
q
p
P
1
q
p
= 1
iP
1
q
p
= 1
_
p = q =
1
2
_
Because P
N
= 1,
P
1
=
_

_
1
q
p
1
_
q
p
_
N
if p =
1
2
1
N
if p =
1
2
and
P
i
=
_

_
1
_
q
p
_
i
1
_
q
p
_
N
if p =
1
2
i
N
if p =
1
2
As N ,
P
i

_

_
1
_
q
p
_
i
if p >
1
2
0 if p
1
2
Example
0
1
i
i+1

i+1
1
0
1
1 1
1
i i
1
i+1 i+1
Let T
i
be the time from state i to state i + 1, where i 0.
E[T
0
] =
1

0
28
Dene
I
i
=
_

_
1 i i + 1
0 i i 1
P{I
i
= 1} =

i

i
+
i
P{I
i
= 0} =

i

i
+
i
E[T
i
| I
i
= 1] =
1

i
+
i
E[T
i
| I
i
= 0] =
1

i
+
i
+ E[T
i1
] + E[T
i
]
E[T
i
] =
1

i
+
i

i
+
i
+
_
1

i
+
i
+ E[T
i1
] + E[T
i
]
_

i
+
i
=
1

i
+
i
+

i

i
+
i
_
E[T
i1
] + E[T
i
]
_

i
+
i
E[T
i
] =
1

i
+
i
+

i

i
+
i
E[T
i1
]
E[T
i
] =
1

i
+

i

i
E[T
i1
] , for i > 0
Thus,
E[time from i to j, i < j] = E[T
i
] + . . . + E[T
j1
]
where
E[T
i
] =
_

_
1

i
+

i

i
E[T
i1
] i > 0
1

0
i = 0
29
Suppose
i
= and
i
= . Then
E[T
i
] =
1

_
1 + E[T
i1
]
_
, E[T
0
] =
1

E[T
1
] =
1

_
1 +

_
E[T
2
] =
1

_
1 +

+
_

_
2
_
.
.
.
E[T
i
] =
1

_
1 + + +
i
_
, =

=
1

_
1
i+1
1
_
=
1
_

_
i+1

, i 0.
Example (Independent tosses of a coin)
The outcomes of each independent toss of a coin is
H : head with prob. p
T : tail with prob. 1 p.
What is the expected number of tosses needed for HTHT to appear?
Solution:
First, consider the expected number of tosses needed for HH to appear.
30
T
HT
TT TH
HH
H
T
H
T
T
T
T
T
H
H
H
H
H
(transient) (transient)
p
TT
= q
TH

TH
= q(
HT
+
HH
)
q
HH
= p
HT

HT
= p(
HT
+
TT
)

HH
= p
2
,
TH
= pq ,
HT
= pq ,
TT
= q
2
During T units of time, we have observed j state T
j
times. Then

j
=
T
j
T
=
1
T
T
j
=
1
m
jj
where m
jj
is the time between two consecutive visits of state j.
31
However, the time from the rst toss to the rst time the pattern HH
appears is larger than m
HH,HH
.
HH
two tosses
HHH
it is possible that only 1 toss is required
between two consecutive HH states
Thus,
the rst time HH appears
= the rst time H appears + m
HH,HH
=
1
p
+
1
p
2
Now, consider HTHT. The Markov chain has the following states:
H , T , HH , HT , TH , TT , HHH , . . . , TTT
. .
transient states
,
HHHH , . . . , HTHT , . . . , TTTT
. .
recurrent states
It may take only two more tosses to go from one HTHT to the next HTHT
while it takes at least 4 tosses for the rst HTHT to appear.
the rst time HTHT appears
= the rst time HT appears + m
HT,HT
=
1
pq
+
1
p
2
q
2
32
More examples
E[time until HTHHTHTHH]
= E[HTHH] +
1
p
6
q
3
= E[H] +
1
p
3
q
+
1
p
6
q
3
=
1
p
+
1
p
3
q
+
1
p
6
q
3
Suppose in a sequence, successive values are independently and identi-
cally distributed with p
j
denoting the probability that any given value
is equal to j. Then
E[time until 012301] = E[time until 01] +
1
p
2
0
p
2
1
p
2
p
3
=
1
p
0
p
1
+
1
p
2
0
p
2
1
p
2
p
3
Example (Eciency of Simplex Algorithm)
Minimize cx
subject to Ax = b,
x 0.
33
where
A is an mn matrix, where n > m
c = (c
1
c
n
)
b = (b
1
b
m
)
x = (x
1
x
n
)
The optimal value of x has at least nm zeros or extreme points of the
feasibility region.
Simplex method always moves from an extreme point to a better extreme
point. There are
N =
_
n
m
_
extreme points. We label them from N to 1, where the N
th
is the worst
one and the 1
st
is the best one.
Suppose the algorithm is at the j
th
point, the next extreme point will
be equally likely to be any one of j 1, j 2, . . . , 1. So,
P
ij
=
_

_
1
i1
j = 1, . . . , i 1 , i > 1
0 j i
P
11
= 1
34
1/2
1/2 1/3
1
2
1
3 4
1/3
1/3
Let T
i
be the number of transitions to go from state i to state 1.
E[T
1
] = 0
E[T
i
] =
i1

j=1
E[T
i
| next state is j] P
ij
= 1 +
_
1
i 1
_
i1

j=1
E[T
j
]
iE[T
i+1
] (i 1)E[T
i
] = i (i 1) +E[T
i
]
E[T
i+1
] =
1
i
+ E[T
i
] =
1
i
+
1
i 1
+ + 1 + 0
E[T
i
] =
i1

j=1
1
j
_
N
1
dx
x
< E[T
N
] =
N1

j=1
1
j
< 1 +
_
N1
1
dx
x
log N < E[T
N
] < 1 + log(N 1)
E[T
N
] log N
N =
_
n
m
_
=
n
n+
1
2
(n m)
nm+
1
2
m
m+
1
2

2
35
Let c =
n
m
,
log N m
_
c log
c
c 1
+ log(c 1)

log N m[1 + log(c 1)]


Therefore, in the worst case, the number of transitions required is E[T
N
]
e
m[1+log(c1)]
.
For example, if n = 8000 and m = 1000, then c =
n
m
= 8, N =
_
8000
1000
_
and
E[T
N
] 1000[1 + log 7] 3000.
1.6 Branching Process
g =1
0
g =2
1
g =4
2
g =6
3
Let g
n
be the number of members of the n
th
generation. Each member of the
n
th
generation gives birth to a family of members of the (n+1)
th
generation.
The family size of the individuals are i.i.d. random variables with distribution
36
function F and moment generating function
G
1
(z) = E
_
z
g
1

j=0
P{g
1
= j}z
j
We are interested in the random sequence g
0
, g
1
, . . . of generation sizes, where
g
0
= 1. Let
G
n
(z) = E
_
z
g
n

j=0
P{X = j}z
j
, where g
n
= X
Label the members of the m
th
generation by 1, 2, . . . , g
m
. Let X
i
, 1 i g
m
be the number of descendants of member i after n more generations. Thus, at
the (m+n)
th
generation, the number of members is g
m+n
= X
1
+X
2
+. . . X
g
m
.
X
1
X
2
X
g
m
1 2
g
m
m generation
th
(m+n) generation
th
37
G
m+n
(z) = E
_
z
g
m+n

= E
g
m
E
_
z
X
1
+X
2
++X
k
| g
m
= k
_
=

k
E
_
z
X

P{g
m
= k}
=

k
_
G
n
(z)
_
k
P{g
m
= k}
= G
m
(G
n
(z))
G
n
(z) = G
1
(G
1
( (G
1
(z)) )) , G
1
(z) = G(z) = E
_
z
g
1

If E[g
1
] = , then E[g
n
] =
n
because
G
n
(z) = G(G
n1
(z))
G

n
(z) = G

(G
n1
(z))G

n1
(z)
z = 1 E[g
n
] = E[g
n1
] = =
n
E[g
0
] =
n
.
It can be shown that P{g
n
= 0} , where is the smallest root of the
equation
z = G(z) (1.1)
and
= 1 if E[g
1
] = < 1.
38
Proof. Let
n
= P{g
n
= 0}. Then

n
= G
n
(0) = G(G
n1
(0)) = G(
n1
).
Take the limit as n , the continuity of G(z) gives
= G()
If e is any non-negative root of z = G(z), then e. Since G(z) is non-
decreasing in [0,1], G

(z) 0 for z [0, 1].

1
= G(0) G(e) = e

2
= G(
1
) G(e) = e
.
.
.

n
e for all n
and = lim
n

n
e
Now, z = 1 is another root of (1.1). Observe that
G(0) = g
0
> 0
G(r)

n
g
n
= 1 for r 1
G(r)

n
g
n
= 1 for r 1
G(1) = 1
39
G

(z) = E
_
g
1
z
g
1
1

0
G

(z) = E
_
g
1
(g
1
1)z
g
1
2

0
G(z) is convex and non-decreasing in [0,1]
We have the following plot which illustrates that y = z and y = G(z) have
two intersections in [0,1], namely z = and z = 1. They are coincident i
G

(1) = E[g
1
] = < 1, that is, i the slope of G(z) at z = 1 is less than 1.
In other words,
P{g
n
= 0 for some n} = = 1 i = E[g
1
] < 1
z
y
y=z
y=G(z), G(1)>1
y=G(z), G(1)<1
1
1

45
o
Let
0
denote the probability that, starting with a single individual, the
40
population ever dies out.

0
= P{population dies out}
=

j=0
P{population dies out | X
1
= j} p
j
=

j=0

j
0
p
j
= G(
0
)
Suppose that p
0
> 0 and p
0
+p
1
< 1 then
0
is the smallest positive number
satisfying

0
=

j=0

j
0
p
j
.
1.7 Time Reversible Markov Chain
The process {X
n
, n = 0, 1, 2, . . . } is irreducible and positive recurrent Markov
chain if there exists such that 0
i
1 and

i
= 1. Consider the
reverse sequence
X
n
, X
n1
, . . . , X
m+1
, X
m
, . . . .
41
We are interested in the following probability
P{X
m
= j | X
m+1
= i, X
m+2
= i
m+2
, . . . }
=
P{X
m
= j, X
m+1
= i, X
m+2
, . . . }
P{X
m+1
= i, X
m+2
, . . . }
=
P{X
j
= j, X
m+2
= i
m+2
, . . . | X
m+1
= i} P{X
m+1
= i}
P{X
m+1
= i, X
m+2
= i
m+2
, . . . }
=
P{X
m
= j | X
m+1
= i} P{X
m+2
= i
m+2
| X
m+1
= i} P{X
m+1
= i}
P{X
m+1
= i, X
m+2
= i
m+2
, . . . }
=
P{X
m
= j | X
m+1
= i} P{X
m+1
= i, X
m+2
= i
m+2
, . . . }
P{X
m+1
= i, X
m+2
= i
m+2
, . . . }
= P{X
m
= j | X
m+1
= i} = Q
ij
(1.2)
=
P{X
m
= j, X
m+1
= i}
P{X
m+1
= i}
=
P{X
m
= j} P{X
m+1
= i | X
m
= j}
P{X
m+1
= i}
=

j
P
ji

i
By (1.2), the reverse process is also a Markov chain with transition probability
Q
ij
. Furthermore, the Markov chain is time reversible if Q
ij
= P
ij
for all i, j
or

i
P
ij
=
j
P
ji
.
Theorem 1. An irreducible and positive recurrent Markov chain
P =
_
P
ij

for which P
ij
= 0 whenever P
ji
= 0
is time reversible i starting in state i, any path back to i has the same
42
probability as the reverse path.
P
i,i
1
P
i
k
,i
= P
i,i
k
P
i
1
,i

i
P
ij
=
j
P
ji
i, j
Proof.
( )

i
P
ij
=
j
P
ji

k
P
kj
=
j
P
jk

i
P
ik
=
k
P
ki

i
P
ik
P
kj
P
ji
=
k
P
ki
P
kj
P
ji
= P
ki

j
P
jk
P
ji
= P
ki
P
jk
P
ij

i
P
ik
P
kj
P
ji
= P
ij
P
jk
P
ki
,
i
= 0
i j k i = i k j i
( )

i
1
i
2
i
k
P
ii
1
P
i
1
i
2
P
i
k
j
P
j,i
=

i
1
i
2
i
k
P
ij
P
ji
k
P
i
1
i
P
(k)
ij
P
ji
= P
ij
P
(k)
ji
k ,
j
P
ji
= P
ij

i
43
Example (Random Walk)
0 1 i-1 i i+1 M-1 M
1
M
1
i+1 1
i 1
1
1
0

0
i-1

M-1

1
Between any two transitions from i to i +1, there must be one from i +1
to i.
i, i + 1, , i, i + 1, , i, i + 1
i + 1, i i + 1, i
We must have

T
i,i+1
=

T
i+1,i
=

T
i,i+1
T
i
T

T
i,i+1
T
i
=

T
i+1,i
T
i+1
T
i+1
T
As T ,

i
P
i,i+1
=
i+1
P
i+1,i
.
44
Thus, this process is time reversible and
i
P
ij
=
j
P
ji
for all i, j.

0
=
1
(1
1
)

1
=
2
(1
2
)
.
.
.

i1

i1
=
i
(1
i
) , i = 1, . . . , M

0

i1

0

i1
=
1

i
(1
1
) (1
i
)

i
=

i1

0
(1
i
) (1
1
)

0
, i = 1, . . . , M
Suppose
i
= ,

i
=

i
(1 )
i

0
=
i

0
, =

1
M

i=0

i
= 1 (1 + + +
M
)
0
= 1

0
=
1
1 + + +
M
=
1
1
M+1

i
=

i
(1 )
1
M+1
Example (Model of Two Urns State dependent transition)
45
0 1
i M-i
There are M balls, labelled 1, 2, . . . , M, each of which can be in either urn 0
or urn 1.

i
=
_
M
i
_
_
1
2
_
i
_
1
1
2
_
Mi
=
_
M
i
_
_
1
2
_
M
We can also model this process by a Markov chain.
0 1 i+1
M
1
M
1
1
1
M
2
M
1

M-1
1
1
M
i 2 M-1
1
i+1
M
M-i
M
M-1
M

i
1
i+1
1
2

0
=
_
1 +
M

i=1

0

i1
(1
1
) (1
i
)
_
1
=
_
1 +
M

i=1
_
M
i
__
1
=
_
1
2
_
M
46

i
=

0

i1
(1
1
) (1
i
)

0
=
M
M
M1
M

Mi+1
M
1
M
2
M

i
M

0
=
M!
(Mi)!
i!
=
_
M
i
_

0
=
_
M
i
_
_
1
2
_
M
Example (Ross Ex. 4.28)
Let
L = life of light bulb , P{L = i} = P
i
X
n
= i , the bulb is in its i
th
day of use at day n
{X
n
, n = 0, 1, . . . } is a Markov chain. An example sequence can be
1 2 3 4 5 1 2 3 1 2 3 4 5 6 1 2 3 . . .

B
r
r
r
r
r
r
r
r
rj
L i
L i + 1
L = i
P
i,i+1
=
P{Li+1}
P{Li}
= 1 P
i,1
P
i,1
=
P{L=i}
P{Li}
47
i
i+1
2 1
The transition matrix is of the form
1 2
1
2
P =
Consider the reverse chain
. . . 3 2 1 6 5 4 3 2 1 3 2 1 5 4 3 2 1
2 1
1 1
P
1
P
i
P
i-1
P
2
i-1 i
The transition probabilities are
Q
i,i1
= 1 , i > 1 (1.3)
Q
1,i
= P
i
, i 1 (1.4)
48
Find {
i
} such that
i
P
ij
=
j
Q
ji
. Consider j = 1. From (1.4),

i
P
i,1
=
1
Q
1,i

i
P{L = i}
P{L i}
=
1
P{L = i}

i
=
1
P{L i}
1 =

i=1

i
=
1

i=1
P{L i} =
1
E[L]

i
=
P{L i}
E[L]
This can be veried with (1.3).

i
P
i,i+1
=
i+1
Q
i+1,i

i
P{L i + 1}
P{L i}
=
i+1
P{L i}
E[L]

P{L i + 1}
P{L i}
=
P{L i + 1}
E[L]
=
i+1
49
Chapter 2
Exponential Distribution and Poisson
Process
2.1 Exponential Distribution
A random variable X is memoryless if
P{X > s + t|X > t} = P{X > s} , t, s 0
t t+s 0
t s
life X
50
P{X > s + t, X > t}
P{X > t}
= P{X > s}
P{X > t} = g(t)
g(s + t) = g(s) g(t) , g(0) = 1
g(t + h) g(t)
h
=
g(t) g(h) g(t)
h
= g(t)
g(h) 1
h
=
g(t)
h
_
g(0) +
h
1!
g

(0) +
h
2
2!
g

(0) + 1
_
d g(t)
dt
= g(t) g

(0)
ln g(t) = g

(0)t + c
g(t) = K e
g

(0)t
, g(0) = 1 K = 1
g(t) = e
t
F(t) = 1 e
t
, t 0.
The random variable X is said to have an exponential distribution with pa-
rameter > 0.
51
f(x) =
_

_
e
x
, X 0
0 , X < 0
F(x) =
_
x

f(y)dy =
_

_
1 e
x
, X 0
0 , X < 0
E[X] =
1

(t) = E
_
e
tX

=
_

0
e
tX
e
x
dx
=
_

0
e
(t)x
dx
=

t
for t <
E
_
X
2

=
d
2
dt
2
(t)

t=0
=
2
( t)
3

t=0
=
2

2
var[X] = E
_
X
2

_
E[X]
_
2
=
2

2

1

2
=
1

2
standard deviation = mean
Note: The exponential random variable is the only continuous ran-
dom variable with the memoryless property.
52
2.2 Counting Process
{N(t), t 0} is a counting process if N(t) represents the total number
of events occurred up to time t.
t
N(t)
Examples
N(t) : number of persons entered a store up to time t.
N(t) : total number of persons who were born up to time t.
Properties of counting processes:
(i) N(t) 0
(ii) N(t) is an integer
(iii) s t N(s) N(t)
(iv) s < t N(t) N(s) is the number of events occurred in (s, t)
interval.
53
N(t) has independent increment if
N(t + s) N(s) , (s, t + s) is independent of
N(t + s

) N(s

) , (s

, t + s

)
That is, the number of events that occurred in a time interval is inde-
pendent of the number of events in a disjoint interval.
s t+s s t+s
t + s < s
t t
N(t) has stationary increment if
N(t +s) N(s) only depends on the length of the interval t = t +s s
and not on the starting or ending point of the interval.
2.3 Poisson Process
The counting process N(t) is said to be a Poisson process with rate > 0
if
(i) N(0) = 0,
(ii) N(t) has independent increment,
(iii) N(t) has stationary increment, and
(iv) P{N(t) = n} = P{N(t + s) N(s) = n} = e
t
(t)
n
n!
, n = 0, 1, . . . .
54
Poisson distribution with mean t
E[N(t)] = t
=
E[N(t)]
t
= arrival rate
t
f(t)
N(t)
E[N(t)] = t
Divide a time interval [0,t] into k subintervals.
t
k
0 t
2t
k
kt
k
0 1 1 0
When k is very large, at most one event may occur in a subinterval.
P{N(t) = n} =
_
k
n
_
p
n
(1 p)
kn
where p = P(1) in a subinterval.
k

n=0
P{N(t) = n}z
n
= (1 p + pz)
k
Since p k = t = E[N(t)],

n=0
P{N(t) = n}z
n
=
_
1
(1 z)t
k
_
k
55
As k ,

n=0
P{N(t) = n}z
n
= e
(1z)t
= e
t
e
tz
= e
t

n=0
(t)
n
z
n
n!
P{N(t) = n} = e
t
(t)
n
n!
Thus, the Poisson distribution is an approximation of the binomial dis-
tribution.
1. f(h) is o(h) if
lim
n0
f(h)
h
= 0
f(x) = x
2
is o(h)
f(x) = x is not o(h)
2. If f() and g() are o(h), then f() + g() is also o(h).
{N(t), t 0} is a Poisson process with rate > 0 if
(i) N(0) = 0,
(ii) stationary and independent increments,
(iii) P{N(h) = 1} = h + o(h),
56
(iv) P{N(h) 2} = o(h).
Proof.
P
n
(t) = P{N(t) = n}
P
0
(t + h) = P{N(t + h) = 0}
= P{N(t) = 0, N(t + h) N(t) = 0}
= P{N(t) = 0}P{N(t + h) N(t) = 0}
= P{N(t) = 0}
_
1 h + o(h)

= P
0
(t)[1 h + o(h)

P
0
(t + h) P
0
(t)
h
= P
0
(t) +
o(h)
h
P

0
(t) = lim
h0
P
0
(t) +
o(h)
h
= P
0
(t)
P
0
(t) = ke
t
P
0
(0) = 1 k = 1
= e
t
57
P
n
(t + h) = P{N(t + h) = n}
= P{N(t) = n, N(t + h) N(t) = 0}
+P{N(t) = n 1, N(t + h) N(t) = 1}
+
n

k=2
P{N(t) = n k, N(t + h) N(t) = k}
= (1 h)P
n
(t) + hP
n1
(t) + o(h)
P

n
(t) = P
n
(t) + P
n1
(t)
Taking Laplace transform,
(s + )P
n
(s) = P
n1
(s)
P
n
(s) =

s +
P
n1
(s)
.
.
.
=
_

s +
_
n
_
1
s +
_
=

n
(s + )
n+1
Taking inverse Laplace transform,
P
n
(t) = e
t

n
_
t
n
n!
_
58
2.4 Interarrival Time and Waiting Time Distribution
0 1 2 n-1 n
T
2
T
1
T
n
t+s t
Arrivals
Time
The interarrival times T
1
, T
2
, . . . , T
n
are independent random variables, each
having the exponential probability density function
f
T
k
(t) = e
t
, t 0
Proof.
1. Distribution of T
1
P{T
1
> t} = P{N(t) = 0} = e
t
f
T
1
(t) =
_

_
e
t
t 0
0 t < 0
2. Distribution of T
2
P{T
2
> t} = E
T
1
_
P{T
2
> t|T
1
}
_
P{T
2
> t|T
1
= s} = P{N(t + s) N(s) = 0|T
1
= s}
= P{N(t) = 0} = e
t
P{T
2
> t} =
_

0
e
t
f
T
1
(s)ds = e
t
f
T
2
(t) = e
t
, t 0
59
3. Joint distribution of T
1
, T
2
0 1
T
1
T
2
2
t
1
t
0 1

1
t
2
t
Arrivals
P{t
1
< T
1
< t
1
+ t
1
, t
2
< T
2
< t
2
+ t
2
}
= P{N(t
1
) = 0} P{N(t
1
, t
1
+ t
1
) = 1}
P{N(t
1
+ t
1
, t
1
+ t
2
+ t
1
) = 0}
P{N(t
1
+ t
2
+ t
1
, t
1
+ t
2
+ t
1
+ t
2
) = 1}
= e
t
1
t
1
e
t
1
e
t
2
t
2
e
t
2
= e
t
1
e
t
2
t
1
t
2
+ o(t
1
t
2
)
f
T
1
,T
2
(t
1
, t
2
) = e
t
1
e
t
2
, t
1
, t
2
0
= f
T
1
(t
1
) f
T
2
(t
2
)
Distribution of the arrival time of the n
th
event
S
n
=
n

i=1
T
i
, n 1
F
S
n
(t) = P{S
n
t} = P{N(t) n}
=

j+n
e
t
(t)
j
j!
60
f
S
n
(t) =
dF
S
n
(t)
dt
=

j=n
e
t
(t)
j
j!
+

j=n
e
t
(t)
j1
(j 1)!
= e
t
(t)
n1
(n 1)!

S
n
() = E
_
e
S
n

= E
_
e

n
i=1
T
i
_
= E
_
e
T
1

E
_
e
T
2

E
_
e
T
n

=
_

+
_
n

n
e
t
Theorem 2. If {N(t), t 0} is a Poisson process of rate > 0, then for
0 < < t and 0 k n,
P{N() = k | N(t) = n} =
n!
k!(n k)!
_

t
_
k
_
1

t
_
nk
0 1 k n Arrivals
Time
k+1
t
n+1
61
Proof.
P{N() = k | N(t) = n} =
P{N() = k and N(t) = n}
P{N(t) = n}
=
P{N() = k, N(t) N() = n k}
P{N(t) = n}
=
e

()
k
k!
e
(t)
[(t)]
nk
(nk)!
e
t
(t)
n
n!
=
_
n
k
_
_

t
_
k
_
1

k
_
nk
2.5 Order Statistics
Throw n points randomly at the line segment such that each points landing
position U
1
, . . . , U
n
is uniformly distributed along (0, t].
f
U
(u) =
_

_
1
t
; 0 u t
0 ; elsewhere
0 t
U
1
U
n
U
2
U
3
U
n-1
S
n
S
n-1
S
3
S
2
S
1
Now S
1
S
2
S
n
denote the same positions but ordered along
62
(0, t]. When n = 2,
P{s
1
S
1
s
1
+ s
1
, s
2
S
2
s
2
+ s
2
}
= P{s
1
U
1
s
1
+ s
1
, s
2
U
2
s
2
+ s
2
}
+P{s
1
U
2
s
1
+ s
1
, s
2
U
2
s
2
+ s
2
}
= 2
s
1
t

s
2
t
= 2! t
2
s
1
s
2
= f(s
1
, s
2
)s
1
s
2
f(s
1
, s
2
) = lim
s
1
0
s
2
0
P{s
1
S
1
s
1
+ s
1
, s
2
S
2
s
2
+ s
2
} =
2!
t
2
In general,
f(s
1
, . . . , s
n
) =
n!
t
n
, 0 < s
1
< s
2
< < s
n
t
Theorem 3. Given N(t) = n, the n arrival times S
1
, . . . , S
n
have the same
distribution as the order statistics corresponding to n independent random
variables uniformly distributed on the interval (0, t].
0 t
T = S
1 1
T = S - S
2 2 1
T = S - S
n n n-1
S
1
S
2
S
n-1
S
n
0 arrivals
63
Proof.
f(s
1
, s
2
, . . . , s
n
| N(t) = n)
=
P{T
1
= s
1
}P{T
2
= s
2
s
1
} P{T
n
= s
n
s
n1
}P{N(t) N(s
n
) = 0}
P{N(t) = n}
=
e
s
1
e
(s
2
s
1
)
e
(s
n
s
n1
)
e
(ts
n
)
e
t
(t)
n
n!
=

n
e
t
e
t
(t)
n
n!
=
n!
t
n
, 0 < s
1
< < s
n
< t
Example (Pure and Slotted Aloha)
p
2p
p
vulnerable period
of slotted aloha
vulnerable period
of pure aloha
Time
Let
S =
throughput of the channel (average number of successful
transmission per transmission period p)
G =
average channel trac (number of packet transmissions
attempted per p seconds)
P
0
= P{no packets are generated during the vulnerable period}
We have
G = p , S = GP
0
64
Pure Aloha (t = 2p)
P
0
= (t)
0
e
t
0!
= e
2p
= e
2G
S = Ge
2G
dS
dG
= e
2G
+ G(2) e
2G
= 0 G =
1
2
S
max
=
1
2e
= 0.184
Slotted Aloha (t = p)
P
0
= e
G
S = Ge
G
dS
dG
= e
G
+ G(1) e
G
= 0 G = 1
S
max
=
1
e
= 0.368
65
2.6 Combination and Splitting of Poisson Processes
N (t)
N (t)
1
2
N(t), +
1 2

1
2
E
_
z
N
1
(t)

n=0
P{N
1
(t) = n}z
n
=

n=0
e

1
t
(
1
t)
n
n!
z
n
=

n=0
e

1
t
(
1
tz)
n
n!
= e

1
t(1z)
E
_
z
N
2
(t)

= e

2
t(1z)
E
_
z
N(t)

= E
_
z
N
1
(t)+N
2
(t)

= E
_
z
N
1
(t)

E
_
z
N
2
(t)

= e

1
t(1z)
e

2
t(1z)
= e
(
1
+
2
)t(1z)
=

n=0
P{N(t) = n}z
n
P{N(t) = n} = e
(
1
+
2
)t(1z)

_
(
1
+
2
)t

n
n!
Thus, combining two independent Poisson processes with rates
1
and
2
results in a Poisson process with rate
1
+
2
.
66
N (t)
N (t)
1
2
N(t), +
1 2

1
2
P{N
1
(t) = j} =

n=j
P{N
1
(t) = j | N(t) = n}P{N(t) = n}
=

n=j
_
n
j
_
p
j
(1 p)
nj

(t)
n
n!
e
t
=
(pt)
j
j!
e
t

n=j
_
(1 p)t
_
nj
(n j)!
=
(pt)
j
j!
e
t
e
(1p)t
=
(pt)
j
j!
e
pt
Similarly,
P{N
2
(t) = j} =
_
(1 p)t

j
j!
e
(1p)t
67
Moreover,
P{N
1
(t) = n, N
2
(t) = m}
=

k=0
P{N
1
(t) = n, N
2
(t) = m | N(t) = k}P{N(t) = k}
= P{N
1
(t) = n, N
2
(t) = m | N(t) = n + m}P{N(t) = n + m}
=
_
m + n
n
_
p
n
(1 p)
m
e
t
(t)
n+m
(n + m)!
=
(m + n)!
n! m!
p
n
(1 p)
m
e
(p+1p)t
(t)
n
(t)
m
(n + m)!
= e
pt
(pt)
n
n!
e
(1p)t
_
(1 p)t

m
m!
= P{N
1
(t) = n}P{N
2
(t) = m}
The resulting processes are independent of each other.
Example
a b
s
Time
s+t
0
Suppose that cars enter a highway in accordance with a Poisson process
with rate .
Each car travels at a constant speed, randomly distributed from a dis-
tribution G.
When a faster car encounters a slower one, it passes it with no time
68
being lost.
If your car enters the highway at time s, and you are able to choose
your speed, what speed minimizes the expected number of encounters
you will have with other cars?
encounter occurs
_

_
your car passes another car
or your car is passed by another car
Solution:
Let d = b a be the road distance and x be the speed. Your car will
enter the road at time s and will depart at time s + t
0
, where t
0
=
d
x
is
the travel time.
Other cars enter the road according to a Poisson process with rate .
Each of them choose a speed X from distribution G and the travel time
is T =
d
X
with distribution
F
T
(t) = P{T t}
= P
_
d
X
t
_
= P
_
X
d
t
_
= G
_
d
t
_
69
A car entering the road at time t will encounter your car if its travel
time T is such that
t + T > s + t
0
if t < s
t + T < s + t
0
if s < t < s + t
0
a b
t t+T
Type I
Other type
You pass him
He passes you
He is always
behind you
He is always
ahead of you
s
s+t
0
s
s+t
0
s
s+t
0
s
s+t
0
t
t
t
t+T
t+T
t+T
You
The
other
car
An event occurs at time t if a car enters the road at time t. The event is
a type I event if that car will encounter your car. The probability that
an event is type I is given by
70
p(t) =
_

_
P{t + T > s + t
0
} = F(s + t
0
t) if t < s
P{t + T < s + t
0
} = F(s + t
0
t) if s < t < s + t
0
0 if t > s + t
0
Total number of type I events that ever occurs is Poisson with mean
E[N
1
] =
_

0
p(t)dt
=
_
s
0
F(s + t
0
t)dt +
_
s+t
0
s
F(s + t
0
t)dt
=
_
s+t
0
t
0
F(y)dy +
_
t
0
0
F(y)dy
To choose the t
0
to minimize E[N
1
],
d
dt
E[N
1
] =
_
F(s + t
0
) F(t
0
) + F(t
0
)

= 0
If s , F(s + t
0
) = 0 and
F(t
0
) = F(t
0
) F(t
0
) =
1
2
Since x =
d
t
0
,
F
_
d
x
_
=
1
2
G(x) =
1
2
= G(x)
Thus, the optimal speed is the median of the distribution of speeds.
71
If you drive too fast, you pass too many cars.
If you drive too slow, too many other cars will pass you.
Example (Tracking the Number of HIV Infections)
Suppose that individuals contract HIV virus in accordance with a Pois-
son process with rate which is unknown.
Suppose that the time from when an individual becomes infected until
symptoms of the disease appear is a random variable having a known
distribution G.
N
1
(t) is the number of individuals who have shown symptoms of the
disease by time t.
N
2
(t) is the number of HIV positive individuals who have not yet shown
any symptoms by time t.
s t
Contracts
the virus
Symptoms
appear
t-s
72
E[N
1
(t)] =
_
t
0
G(t s)ds =
_
t
0
G(y)dy
E[N
2
(t)] =
_
t
0
G(t s)ds =
_
t
0
G(y)dy
Suppose the number of individuals who have symptoms by time t is n
1
.
Then
n
1
E[N
1
(t)] =
_
t
0
G(y)dy

=
n
1
_
t
0
G(y)dy

E[N
2
(t)]

_
t
0
G(y)dy =
n
1
_
t
0
G(y)dy
_
t
0
G(y)dy
Suppose G is exponential with mean . Then
G(y) = e
y/

E[N
2
(t)] =
n
1

_
1 e
t/
_
t
_
1 e
t/
_
For instance, if t = 16 years, = 10 years and n
1
= 220K, then

E[N
2
(t)] =
2200
_
1 e
1.6
_
16 10
_
1 e
1.6
_
219K.
Example
Suppose that M = {N(t), t 0} is a non-decreasing right-continuous integer-
valued process with M(0) = 0, having stationary independent increments.
N(t) has only jump discontinuities of size 1.
73
M(t)
T=t
For u, v 0,
E[N(u + v)] = E[N(u)] + E[N(u + v) N(u)]
= E[N(u)] + E[N(v)]
E[N(u)] is non-decreasing. By the assumption of stationary increments,
E[N(u)] = u , u 0
Let T = sup{t : N(t) = 0} be the time of the rst jump of M. From
right-continuity of M,
N(t) = 1 , so that T is a stopping time for M
E[N()] = E
_
E[N() | T]

For < T = t,
E[N() | T = t] = 0,
74
and for T,
E[N() | T = t] = E[N(t) | T = t] + E[N() N(t) | T = t]
= 1 + E[N() N(t) | N(t) = 1 , N(u) = 0 for u < t]
= 1 + E[N( t)]
Thus, we have
E[N()] =
_

0
_
1 + E
_
N( t)

_
dF(t)
where F is the distribution function of T.
Now E[N()] = for all . We have
= F() +
_

0
( t)dF(t)
which is an integral equation for the unknown function F. Solving it using
Laplace transform, we get
F(t) = 1 e
t
So, T has an exponential distribution and hence, M is a Poisson process with
intensity .
Theorem 4. Suppose N(t) is a Poisson process with rate . An event at
time y is classied as type i with probability
P
i
(y) , i = 1, . . . , k
75
where
k

i=1
P
i
(y) = 1.
If N
i
(t), i = 1, . . . , k is the number of type i events occurring up to time t,
then N
i
(t), i = 1, . . . , k are independent Poisson random variables with mean
E[N
i
(t)] =
_
t
0
P
i
(s)ds = P
i
t
where
P
i
=
1
t
_
t
0
P
i
(s)ds
is the probability that an event which occurred in (0, t) is type i.
Proof.
P{N
1
(t) = n
1
, N
2
(t) = n
2
, . . . , N
k
(t) = n
k
}
= P
_
N
1
(t) = n
1
, N
2
(t) = n
2
, . . . , N
k
(t) = n
k
| N(t) =
k

i=1
n
i
_
P
_
N(t) =
k

i=1
n
i
_
76
P
_
N
1
(t) = n
1
, N
2
(t) = n
2
, . . . , N
k
(t) = n
k
| N(t) =
k

i=1
n
i
_
=
_

k
i=1
n
i
_
!
n
1
! n
k
!
P
n
1
1
P
n
k
k
P{N
1
(t) = n
1
, N
2
(t) = n
2
, . . . , N
k
(t) = n
k
}
=
_

k
i=1
n
i
_
!
n
1
! n
k
!
P
n
1
1
P
n
k
k
e
t
_
t
_

n
i
_
n
i
_
!
=
k

i=1
e
tP
i
_
tP
i
_
n
i
n
i
!
Example
A shop consists of M machines and has a single repairman. Assume
1. the amount of time a machine runs before breaking down is exponentially
distributed with rate and
2. the amount of time it takes the repairman to x any broken machine is
exponential with rate .
Let the state of the shop be n whenever there are n machines down. The
state space is {0, 1, . . . , M}.
77
M
M-1 0 1 2
n-1
n
n+1

M (M-n) (M-1) [M-(n-1)]

n
= , n 1

n
= (M n) , n = 0, 1, . . . , M
(M) = P
1
.
.
.
_
M (n 1)

P
n1
= P
n
.
.
.
P
M1
= P
M

n
P
n
= M(M 1)
_
M (n 1)

n
P
0
P
n
=
_
M!
(M n)!
_

_
n
_
P
0
, n = 0, 1, . . . , M
M

n=0
P
n
= 1 P
0
=
1
1 +

M
n=1
_

_
n
M!
(Mn)!
The average number of machines not in use is
M

n=0
nP
n
=

M
n=0
nM!
(Mn)!

n
1 +

M
n=1
M!
(Mn)!

n
, =

Example
78
For X(0) = 1,
P{X(t + h) X(t) = 1 | X(t) = k} =
k
h + o(h)
P{X(t + h) X(t) = 0 | X(t) = k} = 1
k
h + o(h)
P{X(t + h) X(t) < 0 | X(t) = k} = 0 , k 0.
P
n
(t) = P{X(t) = n}
P

0
(t) =
0
P
0
(t)
P

n
(t) =
n
P
n
(t) +
n1
P
n1
(t) , for n 1
P
n
(t + h) =

k=0
P
k
(t)P{X(t + h) = n | X(t) = k}
=

k=0
P
k
(t)P{X(t + h) X(t) = n k | X(t) = k}
= P
n
(t)
_
1
n
h + o(h)

+P
n1
(t)
_

n
h + o(h)

+
n2

k=0
P
k
(t)o(h)
P
n
(t + h) P
n
(t) = P
n
(t)
_
h + o(h)

+ P
n1
(t)
_

n1
(h) + o(h)

+ o(h)
P

n
(t) =
n
P
n
(t) +
n1
P
n1
(t)
P

0
(t) =
0
P
0
(t)
79
For a Yule process, X(0) = 1 and
n
= n,
P
1
(0) = 1 P
n
(0) = 0 , n = 2, 3, . . .
P

n
(t) =
_
nP
n
(t) (n 1)P
n1
(t)

, n = 1, 2, . . .
Taking Laplace transform,
sP
n
(s) P
n
(0) =
_
nP
n
(s) (n 1)P
n1
(s)

which can be solved to obtain


P
n
(t) = e
t
_
1 e
t
_
n1
, n 1
Example
0 t
U
S
2
1
S
2
S
3
S
n
U
1
U
n
U
3
The arrival of electric pulses is a Poisson process with rate . A sequence of
n pulses arrived at S
1
, S
2
, . . . , S
n
with amplitude A
i
(t) = A
i
e
t
at time t.
The total amplitude at time t is
A(t) = A
1
e
(tS
1
)
+ A
2
e
(tS
2
)
+ + A
n
e
(tS
n
)
The expected amplitude at time t is
E[A(t)] =

n=0
E[A(t) | N(t) = n] e
t
(t)
n
n!
80
Given N(t) = n, the n unordered arrival points U
1
U
2
. . . U
n
are distributed
uniformly on (0, t).
E[A(t) | N(t) = n] = E
_
n

i=1
A
i
e
(tS
i
)
| N(t) = n
_
= E
_
n

i=1
A
i
e
(tU
i
)
_
= n E[A] E
_
e
(tU
i
)

E
_
e
(tU
i
)

=
1
t
_
t
0
e
(tu)
du
=
1 e
t
t
E[A(t) | N(t) = n] = n E[A]
_
1 e
t
t
_
E[A(t)] = E[n] E[A]
_
1 e
t
t
_
=

E[A]
_
1 e
t
_
(E[n] = t)

E[A] as t .
81
Chapter 3
Continuous Time Markov Chain
t+s s u
X(s)=i X(t+s)=j
Assume that X(t), t 0, takes on only non-negative values and, for 0 u
s,
P{X(t + s) = j | X(s) = i, X(u) = k(u)} = P{X(t + s) = j | X(s) = i}
If P{X(t + s) = j | X(s) = i} is independent of s, then the transition
probability is stationary.
Let T
i
be the amount of time that the process stays in state i before making
a transition into a dierent state.
P{T
i
> 15 | T
i
> 10} = P{T
i
> 5}.
82
In general,
P{T
i
> s + t | T
i
> s} = P{T
i
> t}.
T
i
is memoryless and exponentially distributed.
Continuous-time Markov Chain - A stochastic process that moves from
state to state in accordance with a Markov Chain, and
1. the amount of time it spends in state i, T
i
, is exponentially distributed
with mean 1/v
i
.
2. P
ii
= 0 and

j
P
ij
= 1 for all i.
3.1 Birth-death Process (M/M/1)
0
1

1
2

2
i-1
i i+1

i+1

i-1
83
Let T and S be the interarrival time and service time respectively. Their
density and distribution functions are
f
T
(t) =
_

_
e
t
; t 0
0 ; t < 0
f
S
(t) =
_

_
e
t
; t 0
0 ; t < 0
F
T
(t) =
_
t

f
T
()d =
_

_
1 e
t
; t 0
0 ; t < 0
F
S
(t) =
_
t

f
S
()d =
_

_
1 e
t
; t 0
0 ; t < 0
We have
P{T > S} = E
_
P{T > t | S = t}

=
_

0
e
t
e
t
dt =

+
and
P
i,i+1
= 1 P{T
i
> S
i
} =

i

i
+
i
P
i,i1
= P{T
i
> S
i
} =

i

i
+
i
84
P{v
0
> t} = P{T
0
> t}
= e

0
t
P{v
i
> t} = P{T
i
> t, S
i
> t}
= P{T
i
> t} P{S
i
> t}
= e

i
t
e

i
t
= e
(
i
+
i
)t
Thus, v
i
, i 1 is exponentially distributed with rate
i
+
i
and mean
1

i
+
i
.
The Poisson Process is represented by having
i
= 0 and
i
= for all i 0.
Example (Linear growth model with immigration)
We have

n
= n , n 1

n
= n + , n 0
Let X(t) be the population size at time t. Let X(0) = i.
M(t) = E[X(t)]
M(t + h) = E[X(t + h)] = E[E[X(t + h) | X(t)]]
85
X(t + h) =
_

_
X(t) + 1 with prob.
n
h + o(h) = ( + X(t))h + o(h)
X(t) 1 with prob.
n
h + o(h) = X(t)h + o(h)
X(t) with prob. 1 ( + X(t) + X(t))h + o(h)
Therefore,
E[X(t + h) | X(t)] = X(t) + + (X(t) X(t))h + o(h).
and
M(t + h) = M(t) + ( )M(t)h + (h) + o(h)
M(t + h) M(t)
h
= ( )(t) + +
o(h)
h
M

(t) = ( )M(t) +
86
Taking Laplace Transform,
sM(s) i = ( )M(s) +

s
_
s ( )

M(s) = i +

s
M(s) =
i
s ( )
+

s(s ( ))
=
i
s ( )
+

1
s ( )


( )

1
s
M(t) = (i +


)e
()t



=


_
e
()t
1
_
+ ie
()t
3.2 The Kolmogorov Dierential Equation
Let {X(t) = 0, 1, 2, . . . t 0} be a continuous-time Markov Chain.
P
ij
(t) = P{X(t + s) = j | X(s) = i} for all s 0
Let P
ij
be the probability when the system is in state i, the next state is j,
if a transition occurs. Ignoring the time, P
ij
is the transition probability of
a discrete time Markov Chain. The time T
i
that the system stays in state i
is an exponentially distributed random variable with mean 1/v
i
.
1 P
ii
(h) = P{T
i
< h} = [1 e
v
i
h
] = v
i
h + o(h)
lim
h0
1 P
ii
(h)
h
= v
i
= transition rate.
87
P
ij
(h) = P{X(h + s) = j | X(s) = i}
= P
ij
P{T
i
< h} = P
ij
[1 e
v
i
h
]
= P
ij
[1 1 + v
i
h + o(h)]
lim
h0
P
ij
(h)
h
= P
ij
v
i
= q
ij
= rate that the process makes a transition from i to j = i.
We have
v
i
=

j
v
i
P
ij
=

j
q
ij
which implies that
P
ij
=
q
ij

j
q
ij
.
Lemma 1 (Chapman Kolmogorov Equation). For all s 0, t 0,
P
ij
(t + s) =

k=0
P
ik
(t)P
kj
(s) (3.1)
0 t t+s
i k j
t s
time
88
Theorem 5 (Kolmogorovs Backward Equation). Consider the time in-
tervals (0, t + h), (0, h) and (h, t + h) as shown in the gure.
0 h t+h
i k j
h t
time
P
ij
(t + h) =

k=0
P
ik
(h)P
kj
(t)
=

k=i
P
ik
(h)P
kj
(t) + P
ii
(h)P
ij
(t)
P
ij
(t + h) P
ij
(t) =

k=i
P
ik
(h)P
kj
(t) [1 P
ii
(h)]P
ij
(t)
lim
h0
P
ij
(t + h) P
ij
(t)
h
=

k=i
P
ik
(h)
h
P
kj
(t)
1 P
ii
(h)
h
P
ij
(t)
P

ij
(t) =

k=i
q
ik
P
kj
(t) v
i
P
ij
(t) (3.2)
Theorem 6 (Kolmogorovs Forward Equation). Consider the time in-
tervals (0, t + h), (0, t) and (t, t + h) as shown in the gure.
0 t t+h
i k j
h t
time
89
P
ij
(t + h) =

k=0
P
ik
(t)P
kj
(h)
=

k=j
P
ik
(t)P
kj
(h) + P
ij
(t)P
jj
(h)
P
ij
(t + h) P
ij
(t) =

k=j
P
ik
(t)P
kj
(h) [1 P
jj
]P
ij
(t)
lim
h0
P
ij
(t + h) P
ij
(t)
h
= P

ij
(t) =

k=j
P
ik
(t)q
kj
v
j
P
ij
(t) (3.3)
Example (Backward equation for pure birth process)
P

ij
(t) =

k=i
q
ik
P
kj
(t) v
i
P
ij
(t)
= q
i,i+1
P
i+1,j
(t) v
i
P
ij
(t)
We have,
q
i,i+1
= lim
h0
P
i,i+1
(h)
h
= lim
h0
P{X(s + h) = i + 1 | X(s) = i}
h
= lim
h0
e

i
h

(
i
h)
1
1!
h
=
i
(Exponential interarrival time implies Poisson Process.)
and
v
i
=
i
(since 1/v
i
is the mean interarrival time.)
90
Therefore,
P

ij
(t) =
i
P
i+1,j
(t)
i
P
ij
(t).
For a Poisson Process,

n
= 0 for all n 0

n
= for all n 0
Example (Backward equation for birth and death process)
P

ij
(t) =

k=i
q
ik
P
kj
(t) v
i
P
ij
(t)
When i = 0,
P

0j
(t) = q
0,1
P
1,j
(t) v
0
P
0,j
(t)
q
0,1
= lim
h0
P
01
(h)
h
= lim
h0
P
01
P{T
0
< h}
h
= lim
h0
1(1 e

0
h
)
h
=
0
In general,
P

ij
(t) = q
i,i+1
P
i+1,j
(t) + q
i,i1
P
i1,j
(t) v
i
P
ij
(t).
q
i,i+1
= v
i
P
i,i+1
= (
i
+
i
)

i

i
+
i
=
i
(i > 0)
q
i,i1
= v
i
P
i,i1
= (
i
+
i
)

i

i
+
i
=
i
(i > 0)
91
Therefore,
P

ij
(t) =
i
P
i+1,j
(t) +
i
P
i1,j
(t) (
i
+
i
)P
i,j
(t) (i > 0)
Example (Forward equation of birth and death process)
P

ij
(t) =

k=j
q
kj
P
ik
(t) v
j
P
ij
(t)
When j = 0,
P

i,0
(t) =

k=0
(v
k
P
k,0
)P
i,k
(t) v
0
P
i,0
(t)
= (
1
+
1
)
_

1

1
+
1
_
P
i,1
(t)
0
P
i,0
(t)
=
1
P
i,1
(t)
0
P
i,0
(t)
When j = 0,
P

ij
(t) = q
j+1,j
P
i,j+1
(t) + q
j1,j
P
i,j1
(t) v
j
P
ij
(t)
= (
j+1
+
j+1
)
_

j+1

j+1
+
j+1
_
P
i,j+1
(t)
+(
j1
+
j1
)
_

j1

j1
+
j1
_
P
i,j1
(t)
(
j
+
j
)P
i,j
(t)
P

ij
(t) =
j1
P
i,j1
(t) +
j+1
P
i,j+1
(
j
+
j
)P
ij
(t)
92
3.3 Limiting Probability
The probability that a continuous-time Markov Chain will be in state j at
time t will converge to a limiting value which is independent of the initial
state.
P
j
= lim
t
P
ij
(t) =

j
/v
j

i
/v
i
and

j
P
j
= 1.
The sucient conditions for P
j
to exist:
1. All state of the Markov Chain communicate; and
2. The Markov Chain is positive recurrent. That is, starting in any state
i, the mean time to return to state i is nite.
If the limit P
j
exists, the process is ergodic. Consider the forward equation
P

ij
(t) =

k=j
q
kj
P
ik
(t) v
i
P
ij
(t).
As t ,
lim
t
P

ij
(t) = 0.
93
Therefore,
v
j
P
j
=

k=j
q
kj
P
k
for all state j.
v
j
P
j
= rate at which the process leaves state j

k=j
q
kj
P
k
= rate at which the process enters state j.
If the embedded discrete-time Markov Chain with transition probability P
ij
is irreducible and positive recurrent, then
P
j
= lim
t
P
ij
(t) are given by P
j
=

j
/v
j

i
/v
i
where
j
are the unique non-negative solution of

j
=

k=j

k
P
kj
,

i
= 1 (P
ii
= 0)
v
j
P
j
=

k=j
v
k
P
k
P
kj
,

j
P
j
= 1
or q
kj
= v
k
P
kj
, k = j,
v
j
P
j
=

k=j
q
kj
P
k
,

j
P
j
= 1
which is same as before.
Example (Birth death process)
94
1 2 0 n n+1 n-1

n-1

n+1
State Rate process leaves Rate process enters
0
0
P
0

1
P
1
n, n > 0 (
n
+
n
)P
n

n+1
P
n+1
+
n1
P
n1

0
P
0

1
P
1
= 0

n
P
n

n+1
P
n+1
=
n1
P
n1

n
P
n
= 0 (by induction)
Therefore,

0
P
0
=
1
P
1

1
P
1
=
2
P
2
.
.
.

n
P
n
=
n+1
P
n+1
95
Solving yields,
P
1
=

0

1
P
0
P
2
=

1

2
P
1
=

1

1
P
0
.
.
.
P
n
=

n1

n
P
n
=

n1

n2

0

n1

1
P
0
Since

i
P
i
= 1,
P
0
+ P
0

n=1

n1

0

n

1
= 1
P
0
=
_
1 +

n=1

n1

0

n

1
_
1
If for all n,
n
= ,
n
= and = /,
P
0
= (1 + +
2
+ )
1
= 1
and
P
n
= (1 )
n
(M/M/1)
3.4 Time Reversibility
Let {X(t), < t < } be a continuous-time Markov Chain having a
stationary distribution.
P{X(t) = j} = P
j
96
Let {Y (t) = X(t)}. {X(t)} is time-reversible if {X(t)} and {Y (t)} has the
same probability law.
P
ij
(t) = P{X(t) = j | X(0) = i}
Q
ij
(t) = P{Y (t) = j | Y (0) = i}
{X(t)} is time-reversible if P
ij
(t) = Q
ij
(t).
Q
ij
(h) = P{Y (h) = j | Y (0) = i}
= P{X(h) = j | X(0) = i}
= P{X(0) = j | X(h) = i} (stationary)
=
P{X(0) = j, X(h) = i}
P{X(h) = i}
=
P{X(h) = i | X(0) = j}P{X(0) = j}
P{X(h) = i}
=
P
ji
(h)P
j
P
i
Since Q
ij
(h) = P
ij
(h),
P
i
P
ij
(h) = P
j
P
ji
(h)
lim
h0
P
i
P
ij
(h)
h
= lim
h0
P
j
P
ji
(h)
h
P
i
q
ij
= P
j
q
ji
The sequence of states visited by the reversed process constitutes a discrete-
97
time Markov Chain with transition probability
Q
ij
=

j
P
ji

i
.
Thus the continuous-time Markov Chain will be time-reversible if the embed-
ded chain is time reversible. That is

i
P
ij
=
j
P
ji
.
Now
P
i
=

i
/v
i

j
(
j
/v
j
)
Therefore, the continuous-time Markov chain is time reversible if
P
i
v
i
P
ij
=
j
v
i
P
ji
for all i = j. Equivalently,
P
i
q
ij
= P
j
q
ji
.
The rate at which the process goes directly from i to j is equal to the rate at
which it goes directly from j to i.
All birth death processes having stationary distribution are time reversible.
q
i,i+1
=
i
, q
i,i1
=
i
, q
ij
= 0 if | i j |> 1.
98
Since
P
i
= P
0
_

1

i1

2

i
_
we have
P
i

i
= P
i+1

i+1
i i+1

i+1
The rate at which the process goes directly from state i to i + 1 is equal
to the rate at which it goes directly from i + 1 to i.
Consider an ergodic continuous-time Markov Chain. Suppose it has been
in operation an innitely long time and suppose it started at t = .
X(t-s)=i X(t)=i
t-s t
time
Given the process is in state i at time t, the probability that the process
99
has been in this state for an amount of time greater than s is
P{process is in state i throughout [t s, t] | X(t) = i}
=
P{process is in state i throughout [t s, t]}
P{X(t) = i}
=
P{X(t s) = i}e
v
i
s
P{X(t) = i}
= e
v
i
s
since P{X(t s) = i} = P{X(t) = i} = P
i
.
Going backward in time, the amount of time the process spends in state i is
also exponentially distributed with rate v
i
.
100
Chapter 4
Renewal Process
A renewal (counting) process {N(t), t 0} is a non-negative integer-valued
stochastic process, where the interarrival times are positive, independent,
identically distributed random variables.
N(t)
t S
0
S
1
S
2
S
3
X
1
X
2
X
3
X
n
denotes the time between (n1)
st
and n
th
event of this process where
n 1.
101
S
n
denotes the time of the n
th
renewal. We have S
0
= 0 and S
n
=

n
i=1
X
i
.
F(t) = P{X
n
t} is the distribution of interarrival times. We assume
that F(0) = P{X
n
0} < 1.
= E[X
n
] is the mean of interarrival time.
N(t) = max{ n : S
n
t} is the number renewals up to time t.
Note that
N(t) n S
n
t. (4.1)
By the Strong Law of Large Numbers,
S
n
n
=

n
i=1
X
i
n
as n (4.2)
1. For any t where t < ,
S
n
t
S
n
n

t
n
= 0
if n . However, since
S
n
n
= > 0 and is nite, N(t) cannot be
innite for nite t.
102
2. For nite n,
P{N(t) = n} = P{N(t) n} P{N(t) n + 1}
= P{S
n
t} P{S
n+1
t}
lim
t
P{N(t) = n} = lim
t
P{S
n
t} P{S
n+1
t}
= 1 1 = 0
Therefore, with probability 1,
lim
t
N(t) = .
Hence we have
lim
n
P{N(t) = n} = 0 for nite t (4.3)
lim
t
P{N(t) = n} = 0 for nite n (4.4)
P{N(t) n} = P{S
n
t}
= F
n
(t) ( n-fold convolution of F.)
= F
n1
(t) F(t)
=
_

0
F
n1
(t ) dF()
=
_
t
0
F
n1
(t ) dF()
103
E
_
e
S
n

= E
_
e

i
X
i

= E
_

i
e
X
i

= E
_
e
X
i

n
(4.5)
Example (Tosses of a coin)
T
1-p
T
1-p
T
1-p
T
1-p
T
1-p
T
1-p
T
1-p
T
1-p
T
1-p
S = 4
1
S = 9
2
S = k
n
H
p
H
p
H
p
P{X
n
= i} = (1 p)
i1
p
E
_
z
X
n

i=1
(1 p)
i1
pz
i
=
pz
1 (1 p)z
104
E
_
z
S
n

= E
_
z
X
i

n
=
_
pz
1 (1 p)z
_
n
= (pz)
n

_
1 (1 p)z
_
n
= (pz)
n

j=0
(n)(n 1) (n j + 1)
j!
(1)
j
(1 p)
j
z
j
= (pz)
n

j=0
(1)
2j
(n)(n + 1) (n + j 1)
j!
(1 p)
j
z
j
= (pz)
n

j=0
(n + j 1)!
j!(n 1)!
(1 p)
j
z
j
= (pz)
n

j=0
_
n 1 + j
n 1
_
(1 p)
j
z
j
=

j=0
_
n 1 + j
n 1
_
p
n
(1 p)
j
z
n+j
=

k=n
_
k 1
n 1
_
(1 p)
kn
p
n
z
k
Therefore,
P{S
n
= k} =
_

_
_
k1
n1
_
(1 p)
kn
p
n
, k n
0 , k < n
4.1 Useful Formulas
If X is a non-negative continuous random variable, then
E[X] =
_

0
[1 F(x)] dx.
105
If X is non-negative and integer valued (X 0), then
E[X] =

k=1
kP{X = k}
= p
1
+ (p
2
+ p
2
) + (p
3
+ p
3
+ p
3
) + . . .
= (p
1
+ p
2
+ . . . ) + (p
2
+ p
3
+ . . . ) + . . .
= P{X 1} + P{X 2} + . . .
=

k=1
P{X n}
4.2 Mean Value Function
m(t) = E[N(t)] =

n=1
P{N(t) n}
=

n=1
P{S
n
t}
=

n=1
F
n
(t)
Example (Poisson Process)
E[N(t)] = m(t)
m(t) =

n=1
ne
t
(t)
n
n!
= t
106
4.3 Renewal Equation
[Figure: the first arrival occurs at time x, 0 <= x <= t.]

    m(t) = E[N(t)] = \int_0^\infty E[N(t) | X_1 = x] f(x) dx
         = \int_0^t (1 + E[N(t - x)]) f(x) dx
    (since E[N(t) | X_1 = x] = 0 for x > t)

    m(t) = F(t) + \int_0^t m(t - x) f(x) dx   (4.6)

Example
Suppose the interarrival time is uniformly distributed on (0, 1). For t <= 1, find m(t).

    m(t) = t + \int_0^t m(t - x) dx
         = t + \int_t^0 m(y) (-dy)   (y = t - x)
         = t + \int_0^t m(y) dy
    m'(t) = 1 + m(t)

Taking the Laplace transform and noting that m(0) = 0, we have
    s m(s) - 0 = 1/s + m(s)
    (s - 1) m(s) = 1/s
    m(s) = 1 / (s(s - 1)) = 1/(s - 1) - 1/s

Taking the inverse transform,
    m(t) = e^t - 1 ,  0 <= t <= 1.

If we substitute t = 1 into m(t), we have m(1) = e - 1 > 1, which means that on average more than one arrival occurs before t = 1.

Example
Consider a light bulb whose life, measured in discrete units, is a random variable X where P{X = k} = P_k for k = 1, 2, .... The number of replacements up to time n is given by N(n).

    m(n) = E[N(n)] = \sum_{k=1}^\infty E[N(n) | X_1 = k] P_k
         = \sum_{k=1}^n (1 + E[N(n - k)]) P_k + \sum_{k=n+1}^\infty 0 \cdot P_k
         = \sum_{k=1}^n P_k + \sum_{k=1}^n m(n - k) P_k

    m(n) = F_X(n) + \sum_{k=1}^{n-1} m(n - k) P_k   (4.7)

The formula for m(n) can be computed recursively:
    m(0) = 0
    m(1) = F_X(1) + P_1 m(0) = P_1
    m(2) = F_X(2) + P_1 m(1)
    m(3) = F_X(3) + P_1 m(2) + P_2 m(1)
    ...

Let
    m(z) = \sum_{n=1}^\infty m(n) z^n   and   P(z) = \sum_{k=1}^\infty P_k z^k.
Then
    \sum_{n=1}^\infty F_X(n) z^n = P(z) / (1 - z)

This can be seen by considering the following:
    z   F_X(1) = z   (P_1)
    z^2 F_X(2) = z^2 (P_1 + P_2)
    z^3 F_X(3) = z^3 (P_1 + P_2 + P_3)
    ...
Summing vertically and collecting terms for P_1, P_2, P_3, ..., we have
    \sum_{n=1}^\infty F_X(n) z^n = P_1 z/(1-z) + P_2 z^2/(1-z) + P_3 z^3/(1-z) + ... = P(z)/(1-z)

Using (4.7),
    m(z) = P(z)/(1-z) + m(z) P(z)   (convolution)
    m(z) = P(z) / [(1-z)(1 - P(z))]

Example
Determine m(n) when the interarrival times have the geometric distribution
    P{X_1 = k} = P_k = p (1-p)^{k-1}   for k >= 1.
    P(z) = \sum_{k=1}^\infty P_k z^k = pz / (1 - (1-p)z)

    m(z) = [pz / (1 - (1-p)z)] / [(1-z)(1 - pz/(1 - (1-p)z))]
         = pz / (1-z)^2
         = pz (\sum_{i=0}^\infty z^i)^2
           (since (\sum_{i=0}^\infty z^i)^k = \sum_{n=0}^\infty \binom{n+k-1}{k-1} z^n)
         = pz \sum_{n=0}^\infty (n+1) z^n
         = p \sum_{n=0}^\infty (n+1) z^{n+1}

    \sum_{n=1}^\infty m(n) z^n = p \sum_{n=1}^\infty n z^n

Therefore, m(n) = pn.
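Recursion (4.7) is easy to evaluate numerically. Below is a small illustrative sketch (function name and test values are mine): it computes m(n) for an arbitrary discrete interarrival pmf and checks the result m(n) = pn for the geometric case.

```python
def discrete_renewal_m(P, n_max):
    """m(n) = F_X(n) + sum_{k=1}^{n-1} m(n-k) P_k, where P[k] = P{X = k}, k >= 1."""
    F = [0.0] * (n_max + 1)
    for k in range(1, n_max + 1):
        F[k] = F[k - 1] + P.get(k, 0.0)          # F_X(k)
    m = [0.0] * (n_max + 1)                       # m(0) = 0
    for n in range(1, n_max + 1):
        m[n] = F[n] + sum(m[n - k] * P.get(k, 0.0) for k in range(1, n))
    return m

p, n_max = 0.3, 10
geometric = {k: p * (1 - p) ** (k - 1) for k in range(1, 200)}   # truncated pmf
print([round(x, 4) for x in discrete_renewal_m(geometric, n_max)[1:]])
# prints approximately [0.3, 0.6, 0.9, ..., 3.0], i.e. m(n) = p*n
```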
4.4 Renewal Equations and the Elementary Renewal Theorem

    m(t) = E[N(t)] = \sum_{n=1}^\infty n P{N(t) = n} = \sum_{n=1}^\infty P{N(t) >= n}
         = \sum_{n=1}^\infty P{S_n <= t} = \sum_{n=1}^\infty F_n(t)

When only F(t) is given,
    m(t) = F(t) + \int_0^t m(t - x) dF(x)
Given a(t) and F(t),
    A(t) = a(t) + \int_0^t A(t - x) dF(x) ,  t >= 0   (4.8)

The solution of the renewal equation is
    A(t) = a(t) + \int_0^t a(t - x) dm(x) ,  t >= 0   (4.9)
where m(t) = \sum_{n=1}^\infty F_n(t).

To verify that (4.8) and (4.9) are equivalent, we see from (4.9),
    A(t) = a(t) + a(t) * m(t)
         = a(t) + a(t) * (\sum_{n=1}^\infty F_n(t))
         = a(t) + a(t) * (F(t) + \sum_{n=2}^\infty F_n(t))
         = a(t) + [a(t) + a(t) * \sum_{n=1}^\infty F_n(t)] * F(t)
         = a(t) + A(t) * F(t)

Theorem 7 (The Basic Renewal Theorem). If
    A(t) = a(t) + \int_0^t A(t - x) dF(x)
then
    lim_{t->\infty} A(t) = (1/\mu) \int_0^\infty a(x) dx   if \mu = E[X] < \infty
                         = 0                               if \mu = \infty

Proof. Taking the Laplace transform of (4.8),
    A*(s) = a*(s) + A*(s) F*(s)
    A*(s) = a*(s) / (1 - F*(s))
    lim_{t->\infty} A(t) = lim_{s->0} s A*(s)   (Final Value Theorem)
                         = lim_{s->0} s a*(s) / (1 - F*(s))
                         = lim_{s->0} a*(s) \cdot lim_{s->0} s / (1 - F*(s))
                         = lim_{t->\infty} \int_0^t a(x) dx \cdot lim_{s->0} 1 / (-(d/ds) F*(s))
                         = (1/\mu) \int_0^\infty a(x) dx

Example
Derive
    A(t) = E[S_{N(t)+1}] = \mu [1 + m(t)]
where E[S_{N(t)+1}] is the expected time of the first renewal after time t and m(t) is the mean number of renewals by time t.

Solution:
    E[S_{N(t)+1} | X_1 = x] = x                      if x > t  (N(t) = 0)
                            = x + E[S_{N(t-x)+1}]    if x <= t (N(t) >= 1)

[Figure: Case 1 (x > t, the first arrival is already beyond t) and Case 2 (x <= t, the process restarts at x).]

    A(t) = E[S_{N(t)+1}] = \int_0^\infty E[S_{N(t)+1} | X_1 = x] dF(x)
         = \int_0^t [x + A(t - x)] dF(x) + \int_t^\infty x dF(x)
         = \int_0^\infty x dF(x) + \int_0^t A(t - x) dF(x)
         = \mu + \int_0^t A(t - x) dF(x)

Thus A(t) satisfies the renewal equation with a(t) = \mu:
    A(t) = \mu + \int_0^t \mu dm(x) = \mu + \mu m(t) = \mu [1 + m(t)]

Theorem 8 (Elementary Renewal Theorem).
    lim_{t->\infty} m(t)/t = 1/\mu
Proof. Since t < S_{N(t)+1},
    t < E[S_{N(t)+1}] = \mu [1 + m(t)]
    m(t)/t > 1/\mu - 1/t
    lim_{t->\infty} m(t)/t >= lim_{t->\infty} (1/\mu - 1/t) = 1/\mu   (4.10)

Now let c > 0 and set the truncated interarrival time
    X_i^c = X_i   if X_i <= c
          = c     if X_i > c

Consider the renewal process having interarrival times {X_i^c}. Let S_n^c and N^c(t) be the n-th arrival time and the counting process of this truncated process.

[Figure: if N^c(t) = n, then S_n^c <= t < S_{n+1}^c, and the next interarrival time X_{n+1}^c is at most c.]

Since X_{N^c(t)+1}^c <= c, we have t + c >= S_{N^c(t)+1}^c, so
    t + c >= E[S_{N^c(t)+1}^c] = \mu_c [1 + m^c(t)]

    \mu_c = E[X_i^c] = \int_0^c x f(x) dx + \int_c^\infty c f(x) dx
          = [x F(x)]_0^c - \int_0^c F(x) dx + c [F(\infty) - F(c)]
          = c F(c) - \int_0^c F(x) dx + c - c F(c)
          = c - \int_0^c F(x) dx
    \mu_c = \int_0^c [1 - F(x)] dx

Since the truncated arrivals are more frequent (X_i^c <= X_i),
    N^c(t) >= N(t) ,   m^c(t) = E[N^c(t)] >= m(t)
Therefore
    t + c >= \mu_c [1 + m^c(t)] >= \mu_c [1 + m(t)]
and
    m(t)/t <= 1/\mu_c + (1/t)(c/\mu_c - 1)   for any c > 0.

Since
    lim_{c->\infty} \mu_c = \int_0^\infty [1 - F(x)] dx = \mu
we have
    lim_{t->\infty} m(t)/t <= 1/\mu_c   for any c > 0
    lim sup_{t->\infty} m(t)/t <= lim_{c->\infty} 1/\mu_c = 1/\mu   (4.11)

Hence, combining (4.10) and (4.11),
    1/\mu <= lim_{t->\infty} m(t)/t <= 1/\mu ,   or   lim_{t->\infty} m(t)/t = 1/\mu
4.5 Limiting Distribution of Residual Lifetime

[Figure: S_{N(t)} <= t < S_{N(t)+1}; \delta_t is the current life (age) and \gamma_t the residual life at time t.]

    \delta_t : current life at time t.
    \gamma_t : residual life at time t.

We have \gamma_t = S_{N(t)+1} - t. For fixed z > 0, define
    A_z(t) = P{\gamma_t > z}

Consider S_1 = x. There are three cases:

1. If x > t + z, then N(t) = 0. Given S_{N(t)+1} - t = S_1 - t = x - t > z,
    P{S_{N(t)+1} - t > z | X_1 = x} = 1

2. If t < x <= t + z, then N(t) = 0. Given S_{N(t)+1} - t = S_1 - t = x - t <= z,
    P{S_{N(t)+1} - t > z | X_1 = x} = 0

3. If 0 < x <= t, then N(t) >= 1 and
    P{\gamma_t > z | X_1 = x} = A_z(t - x)

Therefore,
    A_z(t) = \int_0^\infty P{\gamma_t > z | X_1 = x} dF(x)
           = \int_0^t A_z(t - x) dF(x) + \int_t^{t+z} 0 dF(x) + \int_{t+z}^\infty 1 dF(x)
           = 1 - F(t + z) + \int_0^t A_z(t - x) dF(x)

This is a renewal equation with a(t) = 1 - F(t + z):
    A_z(t) = 1 - F(t + z) + \int_0^t [1 - F(t + z - x)] dm(x)

From the Basic Renewal Theorem,
    lim_{t->\infty} A_z(t) = (1/\mu) \int_0^\infty [1 - F(x + z)] dx
    lim_{t->\infty} P{\gamma_t > z} = (1/\mu) \int_z^\infty [1 - F(y)] dy   (x + z = y)
    P{\gamma <= z} = (1/\mu) [\mu - \int_z^\infty (1 - F(y)) dy] = (1/\mu) \int_0^z [1 - F(y)] dy
    f_\gamma(z) = [1 - F(z)] / \mu

The current life \delta_t >= y if and only if there is no arrival in (t - y, t], for t >= y.

[Figure: \delta_t >= y is equivalent to \gamma_{t-y} > y.]

Thus,
    P{\delta_t >= y} = P{\gamma_{t-y} > y}
    lim_{t->\infty} P{\delta_t >= y} = P{\gamma > y} = P{\delta > y}
If y > t, P{\delta_t >= y} = 0.
4.6 The Inspection Paradox
The length of the renewal interval we inspect is (stochastically) larger than an ordinary renewal interval. In other words,
    P{X_{N(t)+1} > x} >= P{X_i > x}   for all x.

[Figure: the inspected interval X_{N(t)+1} = S_{N(t)+1} - S_{N(t)} covers t; s = t - S_{N(t)}.]

Proof.
    P{X_{N(t)+1} > x} = E[ P{X_{N(t)+1} > x | S_{N(t)} = t - s} ]
But,
    P{X_{N(t)+1} > x | S_{N(t)} = t - s}
      = P{interarrival time > x | interarrival time > s}
      = P{interarrival time > x, interarrival time > s} / P{interarrival time > s}
      = [1 - F(x)] / [1 - F(s)]   if x >= s
      = 1                          if x < s
      >= 1 - F(x)   (in both cases)

Taking the expectation on both sides, the right-hand side is independent of s. Therefore,
    P{X_{N(t)+1} > x} >= 1 - F(x) = P{interarrival time > x} = P{X_i > x}.

Example (Poisson Process)

[Figure: X_{N(t)+1} = A(t) + Y(t), where A(t) is the age and Y(t) the residual life at t.]

    X_{N(t)+1} = A(t) + Y(t)
For a Poisson process, A(t) and Y(t) are independent.
    P{Y(t) <= x} = 1 - e^{-\lambda x}
The distribution of A(t) may be obtained as follows:
    P{A(t) > x} = P{no renewals in [t - x, t]} = e^{-\lambda x}   for x < t
                = 0                                              for x >= t
Therefore,
    P{A(t) <= x} = 1 - e^{-\lambda x}   for x < t
                 = 1                    for x >= t
As t -> \infty, A(t) becomes an exponential random variable.
    E[X_{N(t)+1}] = E[A(t) + Y(t)] = 1/\lambda + E[A(t)]
                  = 1/\lambda + \int_0^\infty P{A(t) > x} dx = 1/\lambda + \int_0^t e^{-\lambda x} dx
                  = 1/\lambda + (1/\lambda)(1 - e^{-\lambda t}) > 1/\lambda = E[X_n]
As t -> \infty,
    lim_{t->\infty} E[X_{N(t)+1}] = 2/\lambda.
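The factor of two in the inspection paradox is easy to reproduce numerically. A minimal sketch (the function name and the parameter values are my own choices):

```python
import random

def inspected_interval(lam, t):
    """Length of the renewal interval of a rate-lam Poisson process covering time t."""
    s = 0.0
    while True:
        x = random.expovariate(lam)
        if s + x > t:
            return x
        s += x

lam, t, runs = 1.0, 50.0, 20000
mean_len = sum(inspected_interval(lam, t) for _ in range(runs)) / runs
print(mean_len)    # approximately 2/lam = 2, twice the mean interarrival time 1/lam
```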
4.7 Life, Age (current life) and Residual Life
Let X be the interarrival time of a renewal process with distribution F_X(x). Let Z be the length of the interarrival interval which is intercepted by a random arrival.

Since long intervals between renewal points occupy larger segments of the time axis than do shorter intervals, it is more likely that the random point t will fall in a long interval. Thus, the probability that an interval of length x is chosen should be proportional to the length x as well as to the relative occurrence of such intervals:
    f_Z(x) dx = k x f_X(x) dx
or
    P{x < Z <= x + dx} = k x P{x < X <= x + dx}
    \int_0^\infty f_Z(x) dx = k \int_0^\infty x f_X(x) dx = k E[X] = 1
Therefore,
    k = 1/E[X] = 1/\mu
and
    f_Z(x) = x f_X(x) / E[X].

Since the randomly selected point t is uniformly distributed within the interval, given that Z = x, the probability that the residual life Y does not exceed the value y is given by
    P{Y <= y | Z = x} = y/x   for 0 <= y <= x.
Thus, the joint density of Z and Y should be
    P{y < Y <= y + dy, x < Z <= x + dx} = (dy/x) (x f_X(x)/E[X]) dx
for 0 <= y <= x. Thus
    f_Y(y) dy = P{y < Y <= y + dy} = \int_{x=y}^\infty (f_X(x)/E[X]) dx dy = ([1 - F_X(y)] / E[X]) dy
    f_Y(y) = [1 - F_X(y)] / E[X]
4.8 Alternating Renewal Process
[Figure: the process alternates between "on" periods Z_1, Z_2, ..., Z_n and "off" periods Y_1, Y_2, ..., Y_n.]

    P_on = E[Z] / (E[Y] + E[Z])
    P_off = 1 - P_on = E[off] / (E[on] + E[off]) = E[Y] / (E[Y] + E[Z])

To compute the proportion of time that the current life (age) is less than c, we set
    Z_i = c     if X_i > c
        = X_i   if X_i <= c

[Figure: each renewal interval X_i is split into an "on" period Z_i = min(X_i, c) and an "off" period Y_i = X_i - Z_i; the age \delta_t is below c exactly during the "on" periods.]

    P{\delta_t < c} = P_on = E[Z] / (E[Y] + E[Z]) = E[min(X, c)] / E[X]
                    = \int_0^\infty P{min(X, c) > x} dx / E[X]
                    = \int_0^c P{X > x} dx / E[X]
                    = \int_0^c [1 - F(x)] dx / E[X]
    f_{\delta_t}(y) = [1 - F_X(y)] / E[X]

[Figure: the same argument applies to the residual life \gamma_t.]

    f_\gamma(y) = [1 - F_X(y)] / E[X]

    \int_0^\infty f_\gamma(y) e^{-sy} dy = (1/E[X]) \int_0^\infty [1 - F_X(y)] e^{-sy} dy
        = (1/E[X]) (-1/s) \int_0^\infty [1 - F_X(y)] d(e^{-sy})
        = (1/(s E[X])) [1 - \int_0^\infty f_X(y) e^{-sy} dy]
    E[e^{-s\gamma}] = (1 - E[e^{-sX}]) / (s E[X])

Taking the Taylor expansion of e^{-s\gamma},
    \sum_{n=0}^\infty ((-1)^n / n!) s^n E[\gamma^n]
        = [1 - \sum_{n=0}^\infty ((-1)^n / n!) s^n E[X^n]] / (s E[X])
        = \sum_{n=1}^\infty ((-1)^{n-1} / n!) s^{n-1} E[X^n] / E[X]
        = \sum_{n=0}^\infty ((-1)^n / (n+1)!) s^n E[X^{n+1}] / E[X]

    E[\gamma^n] = E[X^{n+1}] / ((n+1) E[X])
In particular,
    E[\gamma] = E[X^2] / (2 E[X])
4.9 Renewal Reward Process
Consider a renewal process {N(t), t >= 0}. Suppose we receive a reward R_n at the time of the n-th renewal. The {R_n} are independent and identically distributed random variables. Let
    R(t) = \sum_{n=1}^{N(t)} R_n
be the total reward received by time t. Then the average reward is
    R(t)/t -> E[R]/E[X]   as t -> \infty

Proof.
    R(t)/t = (\sum_{n=1}^{N(t)} R_n) / t
           = [(\sum_{n=1}^{N(t)} R_n) / N(t)] \cdot [N(t)/t]
           -> E[R]/E[X]   as t -> \infty
4.9.1 The Average Current Life (Age) of a Renewal Process

[Figure: the age \delta_t = t - S_{N(t)} grows linearly within each renewal cycle.]

Current life = \delta_t = t - S_{N(t)}. Suppose we are paid at a rate of \delta_t; then
    \int_0^s \delta_t dt
is the total earning by time s.
    average of \delta_t = (\int_0^s \delta_t dt) / s -> E[reward earned in one cycle] / E[length of cycle]

    Reward during a cycle = \int_{S_{N(t)}}^{S_{N(t)+1}} \delta_t dt
                          = \int_{S_{N(t)}}^{S_{N(t)+1}} (t - S_{N(t)}) dt
                          = \int_0^X \tau d\tau   (\tau = t - S_{N(t)})
                          = X^2 / 2

Therefore,
    E[\delta_t] = E[reward earned in one cycle] / E[length of cycle] = E[X^2] / (2 E[X])
4.10 Average Residual Life (Excess) of a Renewal Process

Let the residual life be \gamma_t = S_{N(t)+1} - t. Suppose we are paid at a rate of \gamma_t.

[Figure: the residual life \gamma_t decreases linearly from X to 0 within each renewal cycle.]

    Reward during a cycle = \int_{S_{N(t)}}^{S_{N(t)+1}} \gamma_t dt
                          = \int_{S_{N(t)}}^{S_{N(t)+1}} (S_{N(t)+1} - t) dt
                          = \int_0^X (S_{N(t)} + X - S_{N(t)} - \tau) d\tau   (\tau = t - S_{N(t)})
                          = \int_0^X (X - \tau) d\tau
                          = [X\tau - \tau^2/2]_0^X = X^2 - X^2/2 = X^2/2

Therefore,
    E[\gamma_t] = E[reward during a cycle] / E[length of cycle] = E[X^2] / (2 E[X])
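The renewal-reward argument can be checked directly by integrating the age over a long horizon. A minimal sketch, with my own function name and test distribution (exponential with mean 1, so E[X^2]/(2E[X]) = 1):

```python
import random

def time_average_age(sampler, horizon):
    """Integrate the age (current life) of a renewal process over [0, horizon]
    and divide by horizon; by renewal-reward this tends to E[X^2]/(2 E[X])."""
    area, s = 0.0, 0.0
    while True:
        x = sampler()
        end = min(s + x, horizon)
        area += (end - s) ** 2 / 2.0     # integral of (t - s) over this (possibly cut) cycle
        if s + x >= horizon:
            return area / horizon
        s += x

print(time_average_age(lambda: random.expovariate(1.0), 200000.0))   # approximately 1.0
```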
4.11 Semi-Markov Process
Suppose a process can be in any one of the states 1, 2, ..., N and each time it enters state i, it remains there for a random amount of time having mean \mu_i and then makes a transition into state j with probability P_ij. Such a process is called a semi-Markov process. If \mu_i \equiv 1, then it is a Markov chain.

[Figure: a three-state example; the process stays in state 2 for a mean time \mu_2 before moving on.]

    P_i = \mu_i / (\mu_1 + \mu_2 + \mu_3) ,  i = 1, 2, 3.

Let J_0 be the initial state of the process and J_n denote the state of the process immediately after the n-th transition has occurred. Then {J_n, n = 0, 1, 2, ...} is a Markov chain with transition probabilities P_ij. This is the embedded Markov chain. There is a one-to-one correspondence between the semi-Markov process and its embedded Markov chain.

Let \pi_i be the limiting probability of the embedded chain. That is,
    \sum_{i=1}^N \pi_i = 1
    \pi_i = \sum_{j=1}^N \pi_j P_ji ,  i = 1, ..., N.

The long-run proportion of time the process spends in state i is
    P_i = \pi_i \mu_i / \sum_{j=1}^N \pi_j \mu_j ,  i = 1, 2, ..., N.

Let Q_ij(t) denote the probability that, after making a transition into state i, the process next makes a transition into state j in an amount of time less than or equal to t.
    \sum_{j=0}^\infty Q_ij(\infty) = 1 ,  for i = 0, 1, 2, ...
We have P_ij = Q_ij(\infty).

Let N_i(t) be the number of transitions into state i in the interval (0, t]. Define
    N(t) = \sum_{i=0}^\infty N_i(t).
If we let Z(t) be the state of the process at time t, then
    Z(t) = J_{N(t)} ,  t >= 0
is a semi-Markov process. Setting N(t) = (N_1(t), N_2(t), ...), the stochastic process {N(t), t >= 0} is a Markov renewal process.

Example (M/M/1)
Customers arrive at a Poisson rate \lambda and the service times S are i.i.d. exponential random variables with mean 1/\mu. Let X(t) denote the number of customers in the system at time t. {X(t), t >= 0} is a semi-Markov process.

    P{S <= t} = 1 - e^{-\mu t}
    P{A <= t} = 1 - e^{-\lambda t}
    P{T <= t} = 1 - P{T > t} = 1 - P{min(A, S) > t} = 1 - P{A > t} P{S > t} = 1 - e^{-(\lambda+\mu)t}

    Q_{0,1}(t) = 1 - e^{-\lambda t}

    Q_{i,i-1}(t) = P{S <= A, S <= t}
                 = \int_0^\infty P{S <= \tau, S <= t | A = \tau} \lambda e^{-\lambda\tau} d\tau
                 = \int_0^\infty [1 - e^{-\mu min(t, \tau)}] \lambda e^{-\lambda\tau} d\tau
                 = \int_0^t (1 - e^{-\mu\tau}) \lambda e^{-\lambda\tau} d\tau + \int_t^\infty (1 - e^{-\mu t}) \lambda e^{-\lambda\tau} d\tau
                 = [\mu/(\lambda+\mu)] [1 - e^{-(\lambda+\mu)t}] ,  i >= 1.
Similarly,
    Q_{i,i+1}(t) = [\lambda/(\lambda+\mu)] [1 - e^{-(\lambda+\mu)t}] ,  i >= 1.

The transition probabilities of the embedded Markov chain are
    P_{0,1} = Q_{0,1}(\infty) = 1
    P_{i,i-1} = Q_{i,i-1}(\infty) = \mu/(\lambda+\mu)
    P_{i,i+1} = Q_{i,i+1}(\infty) = \lambda/(\lambda+\mu)

and the limiting probabilities satisfy
    \pi_0 = \sum_{j=0}^\infty P_{j,0} \pi_j = [\mu/(\lambda+\mu)] \pi_1
    \pi_i = \sum_{j=0}^\infty P_{j,i} \pi_j = P_{i-1,i} \pi_{i-1} + P_{i+1,i} \pi_{i+1}
          = [\lambda/(\lambda+\mu)] \pi_{i-1} + [\mu/(\lambda+\mu)] \pi_{i+1} ,  i >= 2
with \pi_1 = \pi_0 + [\mu/(\lambda+\mu)] \pi_2, i.e.
    (\lambda+\mu) \pi_i = \lambda \pi_{i-1} + \mu \pi_{i+1}
from which the \pi_i, i >= 1, can be computed recursively starting from \pi_0.
Example
Consider a renewal process N(t) with interarrival time distribution F(x). Let the Laplace transforms of the distribution function F(x) and of the mean value function m(t) = E[N(t)] be defined by
    F*(\theta) = \int_0^\infty e^{-\theta x} dF(x)
and
    m*(\theta) = \int_0^\infty e^{-\theta t} dm(t)

(a) Show that
    m*(\theta) = F*(\theta) / (1 - F*(\theta))
Hint: m(t) = \sum_{n=1}^\infty F_n(t)

(b) If m(t) = \lambda t, show by using (a) that the interarrival time is exponentially distributed.

Solution:
(a) Since m(t) = \sum_{n=1}^\infty F_n(t),
    m*(\theta) = \sum_{n=1}^\infty [F*(\theta)]^n = F*(\theta) / (1 - F*(\theta))

(b) If m(t) = \lambda t, then m*(\theta) = \lambda/\theta. It follows from (a) that
    F*(\theta) = m*(\theta) / (1 + m*(\theta)) = \lambda / (\lambda + \theta)
    F(x) = 1 - e^{-\lambda x}
Therefore, the interarrival time is exponentially distributed.

Example
The arrival process of vehicles by the side of a road is an ordinary renewal process. You are waiting to cross the road until a gap (time between successive vehicles) greater than T occurs. Let W be the waiting time until you begin to cross the road.

Assume that F(T) = P{X <= T} < 1. (If P{X > T} = 1, then W = 0 and you don't have to wait.)

Let
    P{W <= t} = V(t)
Conditioning on X_1, we have
(i)  P{W <= t | X_1 > T} = 1, i.e., W = 0 if X_1 > T.
(ii) P{W <= t | X_1 = x, x \in [0, T]} = V(t - x).
However,
    V(\tau) = P{W <= \tau} = 0   if \tau < 0,
so we have V(t - x) = 0 if t < x < T. Also, observe that
    P{W <= t | X_1 = x, x \in [t, T]} = 0
because it is impossible to cross the road within the first renewal interval.

Unconditioning on X_1, we have
    V(t) = P{W <= t} = P{X_1 > T} + \int_0^T V(t - x) dF(x)

For t < T,
    V(t) = 1 - F(T) + \int_0^t V(t - x) dF(x) + \int_t^T V(t - x) dF(x)
         = 1 - F(T) + \int_0^t V(t - x) dF(x)   (4.12)
For t > T,
    V(t) = 1 - F(T) + \int_0^T V(t - x) dF(x)   (4.13)

Define
    G(x) = F(x)   for x <= T
         = F(T)   for x > T
Then (4.12) and (4.13) can be combined into
    V(t) = 1 - F(T) + \int_0^t V(t - x) dG(x) ,  for all t >= 0

    P{W <= t} = V(t) = 1 - F(T) + \int_0^t [1 - F(T)] dm(x)
              = 1 - F(T) + [1 - F(T)] m(t) ,  t >= 0
where
    m(t) = \sum_{j=1}^\infty G^{(j)}(t)
    m*(s) = \sum_{j=1}^\infty [G*(s)]^j = G*(s) / (1 - G*(s))

    lim_{t->\infty} m(t) = lim_{s->0} m*(s) = lim_{s->0} G*(s) / (1 - G*(s)) = G*(0) / (1 - G*(0))
                         = F(T) / (1 - F(T))
since
    G*(0) = \int_0^\infty dG(t) = G(\infty) - G(0) = F(T)

Therefore,
    lim_{t->\infty} P{W <= t} = 1 - F(T) + [1 - F(T)] \cdot F(T)/(1 - F(T)) = 1
Chapter 5
Queueing Theory
5.1 Little's Law
[Figure: a queueing system with arrival rate \lambda and a single server.]

Theorem 9 (Little's Formula). Let
    N = average number of customers in the system
    \lambda = arrival rate
    T = average waiting time
We have
    N = \lambda T

Proof. Let
    N(t) = number of customers in the system at time t
    \alpha(t) = number of customers who arrived in the interval [0, t]
    \beta(t) = number of customers who left in the interval [0, t]
    T_i = time that the i-th arriving customer spends in the system

[Figure: \alpha(\tau) and \beta(\tau) as staircase functions; N(\tau) = \alpha(\tau) - \beta(\tau), and the delays T_1, T_2, ... are the horizontal gaps between the two curves.]

The time average of N(\tau) up to time t is
    N_t = (1/t) \int_0^t N(\tau) d\tau
The time average arrival rate over the interval [0, t] is
    \lambda_t = \alpha(t) / t
The time average of customer delay up to time t is
    T_t = \sum_{i=0}^{\alpha(t)} T_i / \alpha(t)

The long-term averages are
    N = lim_{t->\infty} N_t ,  \lambda = lim_{t->\infty} \lambda_t ,  T = lim_{t->\infty} T_t

Consider N(t):
    N_t = (1/t) \int_0^t N(\tau) d\tau
        = (\alpha(t)/t) [ (1/\alpha(t)) \sum_{i=1}^{\alpha(t)} T_i ]
        = \lambda_t T_t
As t -> \infty,
    N = \lambda T
5.2 M/M/1 Queues
Markovian (Poisson) arrivals and departures, 1 server.

[Figure: birth-death transition diagram on states 0, 1, 2, ..., with probability \lambda h of an arrival and \mu h of a departure in a short interval h.]

Let the state i of the system be the number of customers in the system and P_i(t) be the probability that the system is in state i at time t.

    P_n(t + h) = [\lambda h + o(h)] P_{n-1}(t) + [1 - \lambda h - \mu h + o(h)] P_n(t) + [\mu h + o(h)] P_{n+1}(t)
    dP_n(t)/dt = \lambda P_{n-1}(t) - (\lambda + \mu) P_n(t) + \mu P_{n+1}(t)

In the steady state, t -> \infty and P'_n(t) = 0,
    \lambda P_{n-1} - (\lambda + \mu) P_n + \mu P_{n+1} = 0
    \lambda P_n - \mu P_{n+1} = \lambda P_{n-1} - \mu P_n = ... = \lambda P_0 - \mu P_1 = 0
    P_{n+1} = \rho P_n ,  n = 0, 1, 2, ...
            = \rho^{n+1} P_0 ,  n = 0, 1, 2, ...
where
    \rho = \lambda / \mu
Finally,
    \sum_{n=0}^\infty \rho^n P_0 = P_0 / (1 - \rho) = 1
    P_0 = 1 - \rho
    P_n = \rho^n (1 - \rho)

Note that \rho = 1 - P_0 is the utilization of the system, i.e. the fraction of time that the server is working.

The expected number of customers in the system is
    N = \sum_{n=0}^\infty n P_n = \sum_{n=0}^\infty n \rho^n (1 - \rho)
      = \rho (1 - \rho) \sum_{n=0}^\infty n \rho^{n-1}
      = \rho (1 - \rho) d/d\rho (\sum_{n=0}^\infty \rho^n)
      = \rho (1 - \rho) d/d\rho (1/(1 - \rho))
      = \rho (1 - \rho) \cdot 1/(1 - \rho)^2
    N = \rho / (1 - \rho) = \lambda / (\mu - \lambda)
and the average delay is
    T = N / \lambda = 1 / (\mu - \lambda)
The average waiting time is
    W = 1/(\mu - \lambda) - 1/\mu = \rho/(\mu - \lambda) = \lambda / (\mu(\mu - \lambda))

[Figure: N = \rho/(1 - \rho) plotted against \rho = \lambda/\mu; the average number of customers in the system blows up as \rho -> 1.]

Waiting time distribution F_W(t) of M/M/1.
Let X be the service time of a customer.
    E[e^{-sX}] = \mu / (s + \mu)
    E[e^{-s(X_1 + ... + X_n)}] = [\mu / (s + \mu)]^n
    f_{X_1 + ... + X_n}(x) = \mu^n e^{-\mu x} x^{n-1} / (n-1)! = \mu e^{-\mu x} (\mu x)^{n-1} / (n-1)!

    F_W(0) = P{W <= 0} = P{W = 0} = P_0 = 1 - \rho
    F_W(t) = P{W <= t}
           = \sum_{n=1}^\infty P{n service completions in t | arrival found n in system} P_n + F_W(0)
           = (1 - \rho) \sum_{n=1}^\infty \rho^n \int_0^t f_{X_1 + ... + X_n}(x) dx + (1 - \rho)
           = (1 - \rho) \sum_{n=1}^\infty \rho^n \int_0^t \mu (\mu x)^{n-1}/(n-1)! e^{-\mu x} dx + (1 - \rho)
           = (1 - \rho) \rho \int_0^t \mu e^{-\mu x} \sum_{n=1}^\infty (\rho\mu x)^{n-1}/(n-1)! dx + (1 - \rho)
           = (1 - \rho) \rho \int_0^t \mu e^{-\mu x} e^{\rho\mu x} dx + (1 - \rho)
           = (1 - \rho) \rho [1/(1 - \rho)] [1 - e^{-\mu(1-\rho)t}] + (1 - \rho)
           = 1 - \rho e^{-\mu(1-\rho)t} ,  t > 0.
Thus,
    F_W(t) = 1 - \rho                        ;  t = 0
           = 1 - \rho e^{-\mu(1-\rho)t}      ;  t > 0

    E[W] = 0 \cdot (1 - \rho) + \int_0^\infty t dF_W(t)
         = \int_0^\infty t \rho (\mu - \lambda) e^{-(\mu - \lambda)t} dt
         = \rho / (\mu - \lambda) = \lambda / (\mu(\mu - \lambda))

We also have
    E[T] = average delay = 1/\mu + \lambda/(\mu(\mu - \lambda)) = 1/(\mu - \lambda)
    E[N] = \rho/(1 - \rho) = \lambda/(\mu - \lambda)
and hence,
    \lambda E[T] = \lambda/(\mu - \lambda) = E[N]
which is Little's Law.
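The M/M/1 results are easy to sanity-check with the Lindley recursion W_{k+1} = max(0, W_k + X_k - \tau_k). A rough sketch under my own parameter choices (\lambda = 0.8, \mu = 1):

```python
import random

def mm1_mean_wait(lam, mu, n=200000):
    """Average waiting time in queue of an M/M/1 queue via the Lindley recursion."""
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        x = random.expovariate(mu)       # service time of this customer
        tau = random.expovariate(lam)    # time until the next arrival
        w = max(0.0, w + x - tau)
    return total / n

lam, mu = 0.8, 1.0
print(mm1_mean_wait(lam, mu))            # simulated E[W]
print(lam / (mu * (mu - lam)))           # formula from the notes: 4.0
```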
M/M/1 Queue with Finite Capacity
[Figure: birth-death chain truncated at state N; arrivals are blocked when the system already holds N customers.]

    P_1 = \rho P_0
    ...
    P_i = \rho^i P_0
    P_N = \rho^N P_0

    1 = (\sum_{n=0}^N \rho^n) P_0 = [(1 - \rho^{N+1})/(1 - \rho)] P_0
    P_0 = (1 - \rho)/(1 - \rho^{N+1})
    P_n = \rho^n (1 - \rho)/(1 - \rho^{N+1})

The average number of customers in the system is
    L = \sum_{n=0}^N n P_n = \rho [1 + N\rho^{N+1} - (N+1)\rho^N] / [(1 - \rho)(1 - \rho^{N+1})]

Since customers only enter the system if the number of customers already in the system is less than N, the actual arrival rate is
    \lambda_a = \lambda (1 - P_N)
and the expected waiting time is
    W = L / \lambda_a

Note also that
    P_N = \rho^N (1 - \rho)/(1 - \rho^{N+1}) = 1 - (1 - \rho^N)/(1 - \rho^{N+1})
    1 - P_N = (1 - \rho^N)/(1 - \rho^{N+1})

As N -> \infty (with \rho < 1),
    1 - P_N -> 1   and   \lambda_a -> \lambda
5.3 Queueing System with Bulk Service
The single server is capable of serving two customers simultaneously. The service time is exponential with rate \mu regardless of the number of customers being served.
Example: a cable car with two seats.

[Figure: state transition diagram with states 0', 0, 1, 2, 3, ....]

Let the state n, n > 0, of the system denote the number of customers in the queue. The system can be in two different states when there are no customers in the queue:
    0' : queue is empty and no one is in service
    0  : queue is empty and the server is busy

    \lambda P_{0'} = \mu P_0
    (\lambda + \mu) P_0 = \lambda P_{0'} + \mu P_1 + \mu P_2
    (\lambda + \mu) P_n = \lambda P_{n-1} + \mu P_{n+2} ,  n = 1, 2, ...
    ...
Let
    P_n = \alpha^n P_0
then
    (\lambda + \mu) \alpha^n P_0 = \lambda \alpha^{n-1} P_0 + \mu \alpha^{n+2} P_0
    (\lambda + \mu) \alpha = \lambda + \mu \alpha^3
The roots are
    \alpha = 1 ,  \alpha = [-1 - \sqrt{1 + 4\lambda/\mu}]/2 ,  \alpha = [-1 + \sqrt{1 + 4\lambda/\mu}]/2
The first two roots are impossible and thus
    \alpha = [-1 + \sqrt{1 + 4\lambda/\mu}]/2
    P_n = \alpha^n P_0
    \lambda P_{0'} = \mu P_0

    P_{0'} + P_0 + \sum_{n=1}^\infty P_n = 1
    P_0 [\mu/\lambda + 1 + \alpha/(1 - \alpha)] = 1
    P_0 [\mu/\lambda + 1/(1 - \alpha)] = 1
    P_0 = \lambda(1 - \alpha) / [\lambda + \mu(1 - \alpha)]
    P_n = \lambda \alpha^n (1 - \alpha) / [\lambda + \mu(1 - \alpha)] ,  n >= 0
    P_{0'} = \mu(1 - \alpha) / [\lambda + \mu(1 - \alpha)] > 0

We require that \alpha < 1:
    \alpha = [-1 + \sqrt{1 + 4\lambda/\mu}]/2 < 1  <=>  \lambda/\mu < 2  <=>  \lambda < 2\mu
Thus, the arrival rate must be less than 2\mu, which is the maximum service rate.

The average number of customers in the queue is
    L_Q = \sum_{n=1}^\infty n P_n = \lambda \alpha / [(1 - \alpha)(\lambda + \mu(1 - \alpha))]

[Figure: L_Q grows without bound as \lambda approaches 2\mu.]
5.4 M/M/k, Erlang Loss System
[Figure: birth-death diagram for the M/M/k loss system; from state i the total service rate is i\mu, and an arrival finding all k servers busy is lost.]

    \lambda P_0 = \mu P_1
    (\lambda + \mu) P_1 = 2\mu P_2 + \lambda P_0
    ...
    (\lambda + i\mu) P_i = (i+1)\mu P_{i+1} + \lambda P_{i-1}

    \lambda P_0 = \mu P_1         =>  P_1 = (\lambda/\mu) P_0
    \lambda P_1 = 2\mu P_2        =>  P_2 = (\lambda/(2\mu)) P_1
    ...
    \lambda P_{k-1} = k\mu P_k    =>  P_k = (\lambda/(k\mu)) P_{k-1}

    P_n = [(\lambda/\mu)^n / n!] P_0
    P_0 = 1 / \sum_{n=0}^k (\lambda/\mu)^n / n!

    P_k = [(\lambda/\mu)^k / k!] / \sum_{n=0}^k (\lambda/\mu)^n / n!
Erlang Blocking Formula
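For numerical work, the Erlang blocking probability is usually evaluated with the standard recursion B(0, a) = 1, B(n, a) = a B(n-1, a) / (n + a B(n-1, a)), which is algebraically equivalent to the formula above but avoids factorial overflow. A short sketch (function name and test values are my own):

```python
def erlang_b(a, k):
    """Erlang blocking probability for offered load a = lambda/mu and k servers,
    via the stable recursion B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, k + 1):
        b = a * b / (n + a * b)
    return b

print(erlang_b(a=8.0, k=10))   # blocking probability for lambda/mu = 8 with 10 servers
```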
5.5 M/G/1 Queues
5.5.1 Pollaczek-Khinchin (P-K) formula
    E[W] = \lambda E[X^2] / (2(1 - \rho))

Proof of P-K Formula (Method 1).
Let
    W_i = waiting time in queue of the i-th customer
    R_i = residual service time seen by the i-th customer
    X_i = service time of the i-th customer, i.i.d. r.v., E[X_i] = 1/\mu
    N_i = number of customers in queue seen by the i-th customer upon arrival

    W_i = R_i + \sum_{j=i-N_i}^{i-1} X_j = R_i + X_{i-N_i} + X_{i-N_i+1} + ... + X_{i-1}

[Figure: customers i-N_i, ..., i-1 are waiting in queue when customer i arrives; (i-1) - (i-N_i) + 1 = N_i.]

    E[W_i | N_i] = E[R_i] + N_i E[X]
    E[W_i] = E[R_i] + (1/\mu) E[N_i]
           = E[R_i] + (\lambda/\mu) E[W_i]   (by Little's Law)
As i -> \infty,
    E[W] = E[R] / (1 - \rho)   (5.1)

Let r(\tau) be the residual service time at \tau.

[Figure: r(\tau) is a sawtooth: it jumps up to X_i when the service of customer i starts and decreases linearly to 0; M(t) denotes the number of services completed by time t.]

    (1/t) \int_0^t r(\tau) d\tau = (1/2) (M(t)/t) (1/M(t)) \sum_{i=1}^{M(t)} X_i^2
    E[R] = lim_{t->\infty} (1/t) \int_0^t r(\tau) d\tau = (1/2) \lambda E[X^2]
Thus,
    E[W] = \lambda E[X^2] / (2(1 - \rho))

Proof of P-K Formula (Method 2).

[Figure: a new arrival finds the server busy with probability \rho (and then sees the residual service time of the customer in service) or idle with probability 1 - \rho.]

Suppose the service time X has the pdf f_X(x). The interval containing a random observation point has the length-biased pdf x f(x)/E[X] (see Kleinrock, Queueing Systems Vol. 1, p. 169, Sec. 5.2), and the residual service time \hat{X} seen by a random arrival during a busy period has Laplace transform
    F*_{\hat{X}}(s) = [1 - F*_X(s)] / (s E[X])   (5.2)
where F*_X(s) is the Laplace transform of f_X(x).

Let m_n be the n-th moment of the service time X and r_n be the n-th moment of the residual service time \hat{X}; then
    F*_X(s) = \int_0^\infty e^{-sx} f(x) dx
            = \int_0^\infty \sum_{n=0}^\infty (-sx)^n/n! f(x) dx
            = \sum_{n=0}^\infty (-1)^n m_n s^n / n!
Similarly,
    F*_{\hat{X}}(s) = \sum_{n=0}^\infty (-1)^n r_n s^n / n!   (5.3)

From (5.2),
    F*_{\hat{X}}(s) = [1 - F*_X(s)] / (s m_1)
                    = (1/(s m_1)) \sum_{n=1}^\infty (-1)^{n-1} m_n s^n / n!
                    = \sum_{n=1}^\infty (-1)^{n-1} m_n s^{n-1} / (n! m_1)
                    = \sum_{n=0}^\infty (-1)^n m_{n+1} s^n / ((n+1)! m_1)
                    = \sum_{n=0}^\infty (-1)^n [m_{n+1}/((n+1) m_1)] s^n / n!   (5.4)

By comparing coefficients of (5.3) and (5.4),
    r_n = m_{n+1} / ((n+1) m_1)

    E[R] = \rho E[\hat{X} | the server is busy] + 0 \cdot (1 - \rho)
         = r_1 \rho = \rho m_2 / (2 m_1) = \lambda E[X^2] / 2
From (5.1),
    E[W] = E[R] / (1 - \rho)
Thus, we have
    E[W] = \lambda E[X^2] / (2(1 - \rho))

Proof of P-K Formula (Method 3).
Let
    q_n : number of customers in queue at the n-th service completion point
    a_n : number of customers that arrive during the service of the n-th customer
We have
    q_{n+1} = q_n - U(q_n) + a_{n+1}
where
    U(x) = 1   if x > 0
         = 0   if x = 0

Suppose the service time X has pdf f(x). Let
    \alpha_k = P{a_n = k} = \int_0^\infty e^{-\lambda x} (\lambda x)^k / k! f(x) dx

The generating function of a_n is
    A(z) = \sum_{k=0}^\infty \alpha_k z^k
         = \sum_{k=0}^\infty \int_0^\infty e^{-\lambda x} (\lambda x)^k / k! z^k f(x) dx
         = \int_0^\infty e^{-\lambda x} e^{\lambda x z} f(x) dx
         = F*_X(\lambda - \lambda z)   (5.5)

    E[a_n] = dA(z)/dz |_{z=1} = [d/ds F*_X(s)]_{s=0} \cdot d/dz (\lambda - \lambda z) = (-m_1)(-\lambda) = \lambda/\mu = \rho
    E[a_n(a_n - 1)] = (-\lambda)^2 [d^2/ds^2 F*_X(s)]_{s=0} = \lambda^2 m_2 = \lambda^2 E[X^2]
    E[a_n^2] = \rho + \lambda^2 E[X^2]
    var[a_n] = \rho + \lambda^2 E[X^2] - \rho^2 = \rho + \lambda^2 \sigma_X^2 = \rho + \rho^2 c_X^2
where
    c_X^2 = \sigma_X^2 / (1/\mu)^2 = squared coefficient of variation of the service time X.

Next consider
    E[q_{n+1}^2] = E[q_n^2] + E[U(q_n)] + E[a_{n+1}^2] + 2E[q_n a_{n+1}] - 2E[a_{n+1} U(q_n)] - 2E[q_n]
But (in steady state)
    E[q_{n+1}^2] = E[q_n^2]
    E[U(q_n)] = P{q_n > 0} = \rho
    E[a_{n+1} U(q_n)] = E[a_{n+1}] E[U(q_n)] = \rho^2
    E[q_n a_{n+1}] = E[q_n] E[a_{n+1}] = \rho E[q_n]
Therefore,
    \rho + E[a_{n+1}^2] + 2\rho E[q_n] - 2\rho^2 - 2E[q_n] = 0
    2E[q_n](1 - \rho) = \rho + \rho + \rho^2 c_X^2 + \rho^2 - 2\rho^2
    E[q_n] = \rho + \rho^2 (1 + c_X^2) / (2(1 - \rho))

From Little's Law, we have
    E[T] = E[q_n]/\lambda = 1/\mu + \rho^2 (1 + c_X^2) / (2\lambda(1 - \rho))
    E[W] = \rho^2 (1 + c_X^2) / (2\lambda(1 - \rho))
         = [\rho^2 + \lambda^2 var(X)] / (2\lambda(1 - \rho))
         = [\rho^2 + \lambda^2 (E[X^2] - E[X]^2)] / (2\lambda(1 - \rho))
         = \lambda E[X^2] / (2(1 - \rho))

If the service time is exponentially distributed, then c_X^2 = 1 and
    E[q_n] = \rho + 2\rho^2/(2(1 - \rho)) = \rho/(1 - \rho)
    E[W] = \rho^2 / (\lambda(1 - \rho)) = \lambda / (\mu(\mu - \lambda))
If the service time is constant, then c_X^2 = 0 and
    E[q_n] = \rho + \rho^2 / (2(1 - \rho))
    E[W] = \rho^2 / (2\lambda(1 - \rho))
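The Pollaczek-Khinchin formula can be checked against a simulated M/G/1 queue using the same Lindley recursion as before, now with an arbitrary service distribution. A minimal sketch with my own parameter choices (deterministic service, \lambda = 0.8, \mu = 1):

```python
import random

def mg1_mean_wait(lam, service_sampler, n=300000):
    """Average queueing delay of an M/G/1 queue via the Lindley recursion."""
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        w = max(0.0, w + service_sampler() - random.expovariate(lam))
    return total / n

lam, mu = 0.8, 1.0
# Deterministic service X = 1/mu: E[X^2] = 1/mu^2, so the P-K formula gives
# E[W] = lam*E[X^2]/(2(1-rho)) = 0.8/(2*0.2) = 2.0
print(mg1_mean_wait(lam, lambda: 1.0 / mu))
print(lam * (1.0 / mu**2) / (2 * (1 - lam / mu)))
```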
Proof of P-K Formula (Method 4).
Let
    Q_n(z) = \sum_{k=0}^\infty P{q_n = k} z^k
be the generating function of q_n.

    Q_{n+1}(z) = \sum_{k=0}^\infty P{q_{n+1} = k} z^k = E[z^{q_{n+1}}] = E[z^{q_n - U(q_n) + a_{n+1}}]
               = E[z^{a_{n+1}}] E[z^{q_n - U(q_n)}] = A(z) E[z^{q_n - U(q_n)}]

    E[z^{q_n - U(q_n)}] = P{q_n = 0} + \sum_{k=1}^\infty P{q_n = k} z^{k-1}
                        = 1 - \rho + [Q_n(z) - (1 - \rho)] / z

    Q_{n+1}(z) = A(z) [1 - \rho + (Q_n(z) - (1 - \rho))/z]

As n -> \infty, Q_n(z) = Q_{n+1}(z) = Q(z),
    Q(z) = A(z) [1 - \rho + (Q(z) - (1 - \rho))/z]
    z Q(z) = A(z)(1 - \rho) z + Q(z) A(z) - (1 - \rho) A(z)
    [A(z) - z] Q(z) = A(z)(1 - \rho)(1 - z)
    Q(z) = (1 - \rho)(1 - z) A(z) / (A(z) - z)   (5.6)
    E[q_n] = dQ(z)/dz |_{z=1}

If the service time is exponentially distributed with mean 1/\mu, then
    F*_X(s) = \mu / (s + \mu)
    A(z) = F*_X(\lambda - \lambda z) = \mu / (\mu + \lambda - \lambda z)
    Q(z) = (1 - \rho)(1 - z) [\mu/(\mu + \lambda - \lambda z)] / [\mu/(\mu + \lambda - \lambda z) - z]
         = (1 - \rho) / (1 - \rho z)
         = (1 - \rho) \sum_{k=0}^\infty \rho^k z^k
    P{q_n = k} = P_k = (1 - \rho) \rho^k

When \rho < 1, the queue length settles down into an equilibrium distribution. Let W be the waiting time of a customer. Suppose that a customer waits for a period of length W and is then served for a period of length X. On departure, he leaves behind him all those customers who arrived during the period of length W + X during which he was in the system.

Let T(t) be the total waiting time (queueing + service) distribution; then
    Q(z) = \sum_{k=0}^\infty P{q_n = k} z^k
         = \sum_{k=0}^\infty \int_0^\infty z^k e^{-\lambda t} (\lambda t)^k / k! dT(t)
         = \int_0^\infty e^{-\lambda t(1 - z)} dT(t)
         = T*(\lambda - \lambda z)

But from (5.5) and (5.6),
    Q(z) = (1 - \rho)(1 - z) A(z) / (A(z) - z) ,   A(z) = F*_X(\lambda - \lambda z)
Letting s = \lambda - \lambda z, we have
    T*(s) = (1 - \rho) s F*_X(s) / [s - \lambda + \lambda F*_X(s)]

If the service time is exponential, then F*_X(s) = \mu/(s + \mu) and
    T*(s) = (1 - \rho) s [\mu/(s + \mu)] / [s - \lambda + \lambda\mu/(s + \mu)] = (\mu - \lambda)/(\mu - \lambda + s)
    f_T(t) = (\mu - \lambda) e^{-(\mu - \lambda)t} ,  t >= 0
    E[T] = 1/(\mu - \lambda)
    E[W] = 1/(\mu - \lambda) - 1/\mu = \lambda/(\mu(\mu - \lambda))

Another approach (using M/M/1)
    T*(s) = E[e^{-sT}] = E_n E[e^{-sT} | n customers in the system at time of arrival]
          = \sum_{n=0}^\infty [\mu/(s + \mu)]^{n+1} \rho^n (1 - \rho)
          = (\mu - \lambda)/(\mu - \lambda + s)
Proof of P-K Formula (Method 5).
Let
    P_k = P{q_n = k} ,  k >= 0.
There are two cases in which we have q_n = 0:

[Figure: Case 1 - the departing customer leaves the system empty and the next customer arrives while the server is idle; Case 2 - the next customer arrives while the previous one is still in service and departs leaving the system empty.]

Thus, we have
    P_0 = \alpha_0 P_0 + \alpha_0 P_1
    P_k = \sum_{i=0}^k P_{k-i+1} \alpha_i + P_0 \alpha_k ,  k >= 1

    \sum_{k=0}^\infty P_k z^k = P_0 \alpha_0 + P_1 \alpha_0
        + \sum_{k=1}^\infty [\sum_{i=0}^k P_{k-i+1} \alpha_i] z^k + P_0 \sum_{k=1}^\infty \alpha_k z^k

Summing the first and last terms on the right-hand side,
    P_0 \alpha_0 + P_0 \sum_{k=1}^\infty \alpha_k z^k = P_0 A(z)

    P_1 \alpha_0 + \sum_{k=1}^\infty [\sum_{i=0}^k P_{k-i+1} \alpha_i] z^k
      = P_1 \alpha_0 + (P_2 \alpha_0 + P_1 \alpha_1) z
        + (P_3 \alpha_0 + P_2 \alpha_1 + P_1 \alpha_2) z^2
        + (P_4 \alpha_0 + P_3 \alpha_1 + P_2 \alpha_2 + P_1 \alpha_3) z^3 + ...
      = \alpha_0 (P_1 + P_2 z + P_3 z^2 + ...)
        + \alpha_1 (P_1 z + P_2 z^2 + P_3 z^3 + ...)
        + \alpha_2 (P_1 z^2 + P_2 z^3 + P_3 z^4 + ...) + ...
      = (\alpha_0/z) [Q(z) - P_0] + \alpha_1 [Q(z) - P_0] + \alpha_2 z [Q(z) - P_0] + ...
      = [(Q(z) - P_0)/z] [\alpha_0 + \alpha_1 z + \alpha_2 z^2 + ...]
      = [(Q(z) - P_0)/z] A(z)

Thus,
    Q(z) = \sum_{k=0}^\infty P_k z^k = [(Q(z) - P_0)/z] A(z) + A(z) P_0
    z Q(z) = Q(z) A(z) - P_0 A(z) + z A(z) P_0
    Q(z) [z - A(z)] = P_0 (z - 1) A(z)
    Q(z) = (1 - z) A(z) P_0 / (A(z) - z)

Since lim_{z->1} Q(z) = 1, L'Hopital's rule gives
    lim_{z->1} [-A(z) + (1 - z) A'(z)] P_0 / (A'(z) - 1) = P_0 / (1 - \rho) = 1
    P_0 = 1 - \rho
which follows from
    A'(z) |_{z=1} = E[a_n] = \rho
Thus,
    Q(z) = (1 - \rho)(1 - z) A(z) / (A(z) - z)
5.5.2 Busy Periods
[Figure: the time axis alternates between idle periods I_1, I_2, ... and busy periods B_1, B_2, ....]

    P_0 = P{system is idle}
        = lim_{n->\infty} (I_1 + ... + I_n) / (I_1 + ... + I_n + B_1 + ... + B_n)
        = lim_{n->\infty} [(I_1 + ... + I_n)/n] / [(I_1 + ... + I_n)/n + (B_1 + ... + B_n)/n]
        = E[I] / (E[I] + E[B])
        = 1 - \lambda E[S]

Since we have Poisson arrivals,
    E[I] = mean interarrival time = 1/\lambda
As a result,
    E[B] = E[S] / (1 - \lambda E[S]) = 1 / (\mu(1 - \rho))

Let C be the number of customers served in a busy period.
    E[B | C] = C E[S]
    E[B] = E[C E[S]] = E[C] E[S]
    E[C] = 1/(1 - \rho)

[Figure: in each busy period B_i, only the first of the C_i arrivals finds the system empty.]

For the i-th busy period, only one (namely, the first) arrival out of the C_i arrivals finds that the system is idle. Thus,
    P_0 = lim_{n->\infty} n / (C_1 + C_2 + ... + C_n) = 1/E[C] = 1 - \rho
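The busy-period means can be confirmed by simulation. The following is a small illustrative sketch (my own function name and parameters), exploiting the memorylessness of Poisson arrivals to redraw the time to the next arrival at every step:

```python
import random

def busy_period_stats(lam, mu, n_busy=20000):
    """Simulate n_busy M/M/1 busy periods; return (mean length, mean customers served)."""
    tot_len, tot_served = 0.0, 0
    for _ in range(n_busy):
        work = random.expovariate(mu)       # service of the customer opening the busy period
        t, served = 0.0, 1
        while True:
            a = random.expovariate(lam)     # time to the next arrival (memoryless, so redrawing is valid)
            if a >= work:                    # server empties before the next arrival
                t += work
                break
            t += a
            work = work - a + random.expovariate(mu)   # new customer joins; add its service
            served += 1
        tot_len += t
        tot_served += served
    return tot_len / n_busy, tot_served / n_busy

lam, mu = 0.5, 1.0
print(busy_period_stats(lam, mu))   # approx (E[S]/(1-rho), 1/(1-rho)) = (2.0, 2.0)
```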
Busy Period Distribution of M/G/1
Let B be the duration of the busy period and
    B(t) = P{B <= t}
    B*(\theta) = E[e^{-\theta B}]        (Laplace transform of B)
    F*_S(\theta) = E[e^{-\theta S}]      (Laplace transform of the service time S)
We have
    B*(\theta) = F*_S(\theta + \lambda - \lambda B*(\theta))

Proof.
During the service of the first customer, there are A arrivals. Given that S = x and A(x) = n, then n sub-busy periods B_1, B_2, ..., B_n are generated by the n descendants, where
    B = x + B_1 + B_2 + ... + B_n

[Figure: the busy period viewed as a branching process of descendant arrivals; generation sizes such as g_0 = 1, g_1 = 2, g_2 = 4.]

    E[e^{-\theta B} | S = x, A = n] = E[e^{-\theta(x + B_1 + ... + B_n)}]
                                    = e^{-\theta x} E[e^{-\theta(B_1 + ... + B_n)}]
                                    = e^{-\theta x} [B*(\theta)]^n

    E[e^{-\theta B} | S = x] = \sum_{n=0}^\infty E[e^{-\theta B} | S = x, A = n] P{A = n | S = x}
                             = \sum_{n=0}^\infty e^{-\theta x} [B*(\theta)]^n e^{-\lambda x} (\lambda x)^n / n!
                             = e^{-\theta x} e^{-\lambda x} \sum_{n=0}^\infty (\lambda x B*(\theta))^n / n!
                             = e^{-\theta x} e^{-\lambda x} e^{\lambda x B*(\theta)}
                             = e^{-(\theta + \lambda - \lambda B*(\theta)) x}

    E[e^{-\theta B}] = \int_0^\infty e^{-(\theta + \lambda - \lambda B*(\theta)) x} dF_S(x)
    B*(\theta) = F*_S(\theta + \lambda - \lambda B*(\theta))
5.6 M/G/1 Queues with Priority
In a priority queue, customers are divided into n different classes, where class 1 has the highest priority and class n the lowest. Class k customers arrive at rate \lambda_k and are served with average service time E[X_k] = 1/\mu_k.

There are two kinds of priority queues: preemptive and non-preemptive. In a preemptive queue, the system is always serving the customer with the highest priority: if an arriving customer A has higher priority than the customer C currently being served, the service of C is suspended and A is served. In a non-preemptive queue, once a service starts it is completed before the system serves another customer, even if customers with higher priority arrive during the service.

5.6.1 Non-preemptive Priority
Let
    N_k = average number in queue for priority k
    R = mean residual service time

The average waiting times of customers of the different classes are
    W_1 = R + (1/\mu_1) N_1 = R + (\lambda_1/\mu_1) W_1 = R + \rho_1 W_1
    W_1 = R / (1 - \rho_1)

    W_2 = R + (1/\mu_1) N_1 + (1/\mu_2) N_2 + (1/\mu_1)(\lambda_1 W_2)
        = R + \rho_1 W_1 + \rho_2 W_2 + \rho_1 W_2
    W_2 = (R + \rho_1 W_1) / (1 - \rho_1 - \rho_2)
        = R [1 + \rho_1/(1 - \rho_1)] / (1 - \rho_1 - \rho_2)
        = R / [(1 - \rho_1)(1 - \rho_1 - \rho_2)]
    ...
    W_k = R / [(1 - \rho_1 - ... - \rho_{k-1})(1 - \rho_1 - ... - \rho_k)]

[Figure: the residual service time r(\tau) is a sawtooth built from the service times X_{i,j} of customers of all classes.]

Let
    X_{i,j} = service time of the j-th customer of class i
    M_i(t) = number of customers of class i who have been served up to time t
    M(t) = total number of customers who have been served up to time t = \sum_{i=1}^n M_i(t)

    R = lim_{t->\infty} (1/t) \int_0^t r(\tau) d\tau
      = lim_{t->\infty} (1/t) [\sum_{j=1}^{M_1(t)} X_{1,j}^2/2 + \sum_{j=1}^{M_2(t)} X_{2,j}^2/2 + ... + \sum_{j=1}^{M_n(t)} X_{n,j}^2/2]
      = (1/2) [ (M_1(t)/t)(\sum_{j=1}^{M_1(t)} X_{1,j}^2 / M_1(t)) + ... + (M_n(t)/t)(\sum_{j=1}^{M_n(t)} X_{n,j}^2 / M_n(t)) ]
      = (1/2) [ \lambda_1 E[X_1^2] + ... + \lambda_n E[X_n^2] ]
      = (1/2) \sum_{i=1}^n \lambda_i E[X_i^2]

Thus, the average waiting time of class i customers is
    W_i = (1/2) \sum_{j=1}^n \lambda_j E[X_j^2] / [(1 - \rho_1 - ... - \rho_{i-1})(1 - \rho_1 - ... - \rho_i)] ,  i = 1, ..., n
and the total (waiting + service) delay is
    T_i = 1/\mu_i + W_i
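The non-preemptive priority formula is straightforward to evaluate numerically. Below is a rough helper sketch (function name, argument layout and test values are mine, not from the notes):

```python
def nonpreemptive_waits(lams, EX, EX2):
    """Average queueing delays W_k for a non-preemptive M/G/1 priority queue.
    lams[i], EX[i], EX2[i]: arrival rate, mean and second moment of the service
    time of class i, with index 0 being the highest priority."""
    R = 0.5 * sum(l * m2 for l, m2 in zip(lams, EX2))   # mean residual service time
    rhos = [l * m for l, m in zip(lams, EX)]
    W, acc = [], 0.0
    for k in range(len(lams)):
        prev = acc            # rho_1 + ... + rho_{k-1}
        acc += rhos[k]        # rho_1 + ... + rho_k
        W.append(R / ((1 - prev) * (1 - acc)))
    return W

# Two classes with exponential service of mean 1 (so E[X^2] = 2):
print(nonpreemptive_waits([0.3, 0.4], [1.0, 1.0], [2.0, 2.0]))   # [1.0, 3.333...]
```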
5.6.2 Preemptive Resume Priority
For a class k customer, all customers of classes j > k can be ignored. Let T_k be the total time in system of a class k customer.

[Figure: the class k customer's stay is interrupted by the service of classes 1, ..., k-1 customers who arrive while it is in the system.]

    T_k = 1/\mu_k + \sum_{i=1}^{k-1} (\lambda_i/\mu_i) T_k
          + [ (1/\mu_1) N_1 + (1/\mu_2) N_2 + ... + (1/\mu_k) N_k + R_k ]
            (the bracketed part is the same as W_k for M/G/1 without priority)
where the first term is the average service time of the customer, the second term arises from the service of new customers of higher priority who arrive during his stay, and the last term accounts for the service of customers who arrived before the customer under consideration.

    W_k = R_k + (\lambda_1/\mu_1) W_k + (\lambda_2/\mu_2) W_k + ... + (\lambda_k/\mu_k) W_k
    W_k = R_k / (1 - \rho_1 - ... - \rho_k)
        = (1/2) \sum_{i=1}^k \lambda_i E[X_i^2] / (1 - \rho_1 - ... - \rho_k)
Note that in this case, R_k includes only the service of customers of classes 1, ..., k, as opposed to the non-preemptive case in which R takes all classes into consideration.

    T_k = 1/\mu_k + T_k \sum_{i=1}^{k-1} \rho_i + (1/2) \sum_{i=1}^k \lambda_i E[X_i^2] / (1 - \rho_1 - ... - \rho_k)
        = [1/(1 - \rho_1 - ... - \rho_{k-1})] [ 1/\mu_k + (1/2) \sum_{i=1}^k \lambda_i E[X_i^2] / (1 - \rho_1 - ... - \rho_k) ] ,
      k = 1, 2, ..., n
5.7 Burke's Theorem
[Figure: the reversed process Y(t) = X(-t); arrivals of X(.) correspond to departures of Y(.) and vice versa.]

The future arrivals for Y(t) in the interval [-t, 0] are independent of Y(-t) = X(t). Thus, the departures of X(t) in [0, t] are independent of X(t).

In steady state, in any interval [s, t + s],
    P{n arrivals in [s, t + s]} = P{n departures in [s, t + s]} = e^{-\lambda t} (\lambda t)^n / n!

Let {X(t), t >= 0} be a birth and death process with constant birth rate \lambda_k = \lambda, k = 0, 1, 2, ..., and arbitrary death parameters \mu_n for n = 1, 2, .... Suppose there exists a stationary distribution \pi_k >= 0, where \sum_k \pi_k = 1, and that P{X(0) = k} = \pi_k for k = 0, 1, .... Let D(t) denote the number of deaths in [0, t]. Then
    P{X(t) = k, D(t) = j} = P{X(t) = k} P{D(t) = j} = \pi_k \cdot e^{-\lambda t} (\lambda t)^j / j!

Theorem 10 (Burke's Theorem). Consider an M/M/1, M/M/s or M/M/\infty system with arrival rate \lambda. Suppose the system starts in steady state. Then
(a) The departure process is Poisson with rate \lambda;
(b) At each time t, the number of customers in the system is independent of the sequence of departure times prior to t.

Proof.
(a) The forward and reversed systems are statistically indistinguishable in steady state, and the departure process of the forward system is the arrival process of the reversed system.
(b) The departures prior to t in the forward process are also the arrivals after t in the reversed process, which is Poisson. So the future arrival process does not depend on the number in the system.
5.8 Open Queueing Networks
[Figure: an open network of five servers; customers arrive from outside, move between servers, and eventually depart.]

Customers arrive from outside the system to server i, i = 1, ..., k, in accordance with independent Poisson processes at rate r_i. Once a customer is served by server i, he joins the queue of server j with probability P_ij, where
    \sum_{j=1}^k P_ij <= 1
and
    1 - \sum_{j=1}^k P_ij
is the probability that a customer departs the system after being served by i.

Let \lambda_j be the total arrival rate of customers to server j. Then
    \lambda_j = r_j + \sum_{i=1}^k \lambda_i P_ij ,  j = 1, ..., k
    P{n customers at server j} = (\lambda_j/\mu_j)^n (1 - \lambda_j/\mu_j) ,  n >= 0
    P{n_1, ..., n_k} = \prod_{j=1}^k (\lambda_j/\mu_j)^{n_j} (1 - \lambda_j/\mu_j)

Theorem 11 (Jackson's Theorem). Suppose a queueing network consists of K nodes satisfying the following three conditions:
1. Each node k consists of c_k identical exponential servers, each with average service rate \mu_k;
2. Customers arriving at node k from outside the system arrive in a Poisson pattern with the average arrival rate r_k;
3. Once served at node k, a customer goes to node j (j = 1, 2, ..., K) with probability P_kj, or leaves the network with probability
    1 - \sum_{j=1}^K P_kj
Then, for each node k, the average arrival rate to the node, \lambda_k, is given by
    \lambda_k = r_k + \sum_{j=1}^K P_jk \lambda_j

In addition, if we let P(n_1, ..., n_K) denote the steady-state probability that there are n_k customers in node k for k = 1, 2, ..., K, and \lambda_k < c_k \mu_k for k = 1, 2, ..., K, then
    P(n_1, ..., n_K) = P_1(n_1) P_2(n_2) ... P_K(n_K)
where P_k(n_k) is the steady-state probability that there are n_k customers in the k-th node if it is treated as an M/M/c_k queueing system with average arrival rate \lambda_k and average service time 1/\mu_k for each of the c_k servers. Furthermore, each node k behaves as if it were an independent M/M/c_k queueing system with average arrival rate \lambda_k.

For single-server nodes,
    P{n customers at server j} = (\lambda_j/\mu_j)^n (1 - \lambda_j/\mu_j)
    P(n_1, ..., n_K) = \prod_{j=1}^K (\lambda_j/\mu_j)^{n_j} (1 - \lambda_j/\mu_j)
    L = \sum_{j=1}^K (average number at server j) = \sum_{j=1}^K \lambda_j / (\mu_j - \lambda_j)
The average time a customer spends in the system can be obtained from L = \lambda W where \lambda = \sum_{j=1}^K r_j:
    W = \sum_{j=1}^K [\lambda_j / (\mu_j - \lambda_j)] / \sum_{j=1}^K r_j

Example

[Figure: two servers with external arrivals r_1 = 4 at server 1 and r_2 = 5 at server 2; \mu_1 = 8, \mu_2 = 10; after service at server 1 a customer goes to server 2 with probability P_12 = 1/2, and after service at server 2 he returns to server 1 with probability P_21 = 1/4.]

    \lambda_1 = 4 + (1/4) \lambda_2        =>  \lambda_1 = 6 ,  \rho_1 = \lambda_1/\mu_1 = 3/4
    \lambda_2 = 5 + (1/2) \lambda_1        =>  \lambda_2 = 8 ,  \rho_2 = \lambda_2/\mu_2 = 4/5

    P{n at server 1, m at server 2} = (3/4)^n (1/4) (4/5)^m (1/5) = (1/20)(3/4)^n (4/5)^m

    L = 6/(8 - 6) + 8/(10 - 8) = 7
    W = L/9 = 7/9
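The traffic equations of an open network can be solved numerically by fixed-point iteration (or by solving the linear system directly). A rough sketch reproducing the example above (the function name is mine):

```python
def open_network_metrics(r, P, mu, iters=200):
    """Solve lam_j = r_j + sum_i lam_i P[i][j] by iteration, then compute L and W
    assuming every node behaves as an independent M/M/1 queue (Jackson)."""
    k = len(r)
    lam = list(r)
    for _ in range(iters):
        lam = [r[j] + sum(lam[i] * P[i][j] for i in range(k)) for j in range(k)]
    L = sum(lam[j] / (mu[j] - lam[j]) for j in range(k))
    W = L / sum(r)
    return lam, L, W

# The two-server example from the notes: r = (4, 5), mu = (8, 10), P_12 = 1/2, P_21 = 1/4.
lam, L, W = open_network_metrics([4.0, 5.0], [[0.0, 0.5], [0.25, 0.0]], [8.0, 10.0])
print(lam, L, W)   # approximately [6, 8], L = 7, W = 7/9
```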
5.8.1 Tandem Queues

[Figure: two M/M/1 queues in tandem with service rates \mu_1 and \mu_2; external Poisson arrivals at rate \lambda enter the first queue.]

The input to the second queue is the departure process of the first queue and is Poisson with rate \lambda.
    P{n at queue 1} = (\lambda/\mu_1)^n (1 - \lambda/\mu_1)
    P{m at queue 2} = (\lambda/\mu_2)^m (1 - \lambda/\mu_2)
    P_{n,m} = P{n at queue 1, m at queue 2} = P{n at queue 1} P{m at queue 2}
            = (\lambda/\mu_1)^n (1 - \lambda/\mu_1) (\lambda/\mu_2)^m (1 - \lambda/\mu_2)

    L = \sum_{m,n} (m + n) P_{n,m} = L_1 + L_2 = \lambda/(\mu_1 - \lambda) + \lambda/(\mu_2 - \lambda)
    W = L/\lambda = 1/(\mu_1 - \lambda) + 1/(\mu_2 - \lambda)

[Figure: the two-dimensional state transition diagram on states (n, m); transitions (n,m) -> (n+1,m) at rate \lambda, (n,m) -> (n-1,m+1) at rate \mu_1, and (n,m) -> (n,m-1) at rate \mu_2.]

Rate that the process leaves = rate that the process enters:
    \lambda P_{0,0} = \mu_2 P_{0,1}
    (\lambda + \mu_1) P_{n,0} = \mu_2 P_{n,1} + \lambda P_{n-1,0}
    (\lambda + \mu_2) P_{0,m} = \mu_2 P_{0,m+1} + \mu_1 P_{1,m-1}
    (\lambda + \mu_1 + \mu_2) P_{n,m} = \mu_2 P_{n,m+1} + \mu_1 P_{n+1,m-1} + \lambda P_{n-1,m}

The solution is of the form
    P_{n,m} = P_n P_m = K \alpha^n \beta^m

Solving the set of equations, we have
    \alpha = \lambda/\mu_1   and   \beta = \lambda/\mu_2
5.8.2 Queues with Feedback
[Figure: Server 1 (rate \mu_1) receives external Poisson arrivals at rate \lambda; after service a customer leaves the system with probability q = 1 - p or goes to Server 2 (rate \mu_2) with probability p; customers served at Server 2 return to Server 1.]

Denote the state of the system by (m, n), where m and n are the numbers of customers at Server 1 and Server 2 respectively. The possible transitions are listed in the following table.

    Current state   Next state     Rate of transition
    (m, n)          (m+1, n)       \lambda        (arrival of a new customer)
    (m, n)          (m+1, n-1)     \mu_2          (input of a feedback customer)
    (m, n)          (m-1, n)       q \mu_1        (departure of a customer)
    (m, n)          (m-1, n+1)     p \mu_1        (feedback to Server 2)

[Figure: the corresponding two-dimensional state transition diagram on states (m, n), with the rates \lambda, q\mu_1, p\mu_1 and \mu_2 listed above.]

We have the following set of difference equations.
    \lambda P_{0,0} = q\mu_1 P_{1,0}   (5.7)
    (\lambda + \mu_2) P_{0,n} = p\mu_1 P_{1,n-1} + q\mu_1 P_{1,n}   (5.8)
    (\lambda + \mu_1) P_{m,0} = \lambda P_{m-1,0} + q\mu_1 P_{m+1,0} + \mu_2 P_{m-1,1}   (5.9)
    (\lambda + \mu_1 + \mu_2) P_{m,n} = \lambda P_{m-1,n} + p\mu_1 P_{m+1,n-1} + q\mu_1 P_{m+1,n} + \mu_2 P_{m-1,n+1}   (5.10)

The solution is of the form
    P_{m,n} = K \alpha^m \beta^n

From (5.7), we have
    \lambda = q\mu_1 \alpha   =>   \alpha = \lambda/(q\mu_1)
From (5.8),
    (\lambda + \mu_2) \beta = p\mu_1 \alpha + q\mu_1 \alpha \beta
                            = p\mu_1 \lambda/(q\mu_1) + q\mu_1 [\lambda/(q\mu_1)] \beta
    \mu_2 \beta = (p/q) \lambda
    \beta = p\lambda / (q\mu_2)
Thus, we have
    P_{m,n} = K [\lambda/(q\mu_1)]^m [p\lambda/(q\mu_2)]^n
where
    K = [1 - \lambda/(q\mu_1)] [1 - p\lambda/(q\mu_2)]
is the normalization constant.
Example (Computer System with Feedback Loop for I/O)
[Figure: jobs arrive at the CPU (service rate \mu_1) at external rate \lambda; after CPU service a job leaves the system with probability P_1 or goes to the I/O device (service rate \mu_2) with probability P_2 = 1 - P_1 and then returns to the CPU. The total flow into the CPU is \lambda_1 and the feedback flow is \lambda_2.]

    \lambda_1 = \lambda + \lambda_2
    \lambda_2 = P_2 \lambda_1
    =>  \lambda_1 = \lambda/P_1   and   \lambda_2 = P_2 \lambda / P_1

The steady-state distribution of the system is
    P(n_1, n_2) = \rho_1^{n_1} (1 - \rho_1) \rho_2^{n_2} (1 - \rho_2)
    N_1 = \rho_1/(1 - \rho_1)   and   N_2 = \rho_2/(1 - \rho_2)
where
    \rho_1 = \lambda_1/\mu_1   and   \rho_2 = \lambda_2/\mu_2
and
    N = N_1 + N_2 = \rho_1/(1 - \rho_1) + \rho_2/(1 - \rho_2)

The average time in the system is
    T = N/\lambda = \rho_1/(\lambda(1 - \rho_1)) + \rho_2/(\lambda(1 - \rho_2))
      = (\lambda_1/\lambda)/(\mu_1(1 - \lambda_1/\mu_1)) + (\lambda_2/\lambda)/(\mu_2(1 - \lambda_2/\mu_2))
      = [1/(\mu_1 P_1)] / [1 - \lambda/(\mu_1 P_1)] + [P_2/(\mu_2 P_1)] / [1 - \lambda P_2/(\mu_2 P_1)]
      = S_1/(1 - \lambda S_1) + S_2/(1 - \lambda S_2)
      = 1/(1/S_1 - \lambda) + 1/(1/S_2 - \lambda)
where
    S_1 = 1/(\mu_1 P_1) ,   S_2 = P_2/(\mu_2 P_1)

[Figure: the system is therefore equivalent to two independent M/M/1 queues with effective service times S_1 (CPU) and S_2 (I/O), each fed by a Poisson stream of rate \lambda.]
5.9 Closed Queueing Networks

[Figure: a closed network; a fixed population of m customers circulates among k servers.]

m customers moving around k servers.
Let P_ij be the probability that a customer goes from server i to server j when service is completed. Then, we have
    \sum_{j=1}^k P_ij = 1
and [P_ij] is a Markov transition matrix with limiting probabilities \pi_i, i = 1, ..., k:
    \pi_j = \sum_{i=1}^k \pi_i P_ij ,   \sum_{i=1}^k \pi_i = 1
(Note that the \pi_i are independent of m.)

Suppose the average arrival rate at server j is \lambda_m(j), j = 1, ..., k.
    \lambda_m(j) = \sum_{i=1}^k \lambda_m(i) P_ij
    \lambda_m(j) = \lambda_m \pi_j
where
    \lambda_m = \sum_{j=1}^k \lambda_m(j)
is the average service completion rate of the entire system, or the throughput.

The steady-state distribution of the network is
    P_m(n_1, ..., n_k) = K_m \prod_{j=1}^k [\lambda_m(j)/\mu_j]^{n_j}   if \sum_{j=1}^k n_j = m ;  0 otherwise
                       = C_m \prod_{j=1}^k (\pi_j/\mu_j)^{n_j}          if \sum_{j=1}^k n_j = m ;  0 otherwise
where
    C_m = [ \sum_{n_1,...,n_k : \sum n_j = m} \prod_{j=1}^k (\pi_j/\mu_j)^{n_j} ]^{-1}
is a normalization constant.

Note that the number of terms to be summed to obtain C_m is A(m, k) = \binom{m+k-1}{m}.

Alternatively, we can write
    P(n_1, ..., n_k) = (1/g(m, k)) \prod_{i=1}^k (\pi_i/\mu_i)^{n_i}
where
    g(m, k) = \sum_{n_1 + ... + n_k = m} \prod_{i=1}^k (\pi_i/\mu_i)^{n_i}

Let
    \delta(n - n_0) = 1   if n = n_0
                    = 0   otherwise
or
    \delta(n - n_0) = (1/2\pi i) \oint_C z^{n - n_0} dz/z
Then, we have
    g(m, k) = \sum_{n_1=0}^\infty \sum_{n_2=0}^\infty ... \sum_{n_k=0}^\infty \prod_i (\pi_i/\mu_i)^{n_i} \delta(\sum_{i=1}^k n_i - m)
            = (1/2\pi i) \oint_C \sum_{n_1=0}^\infty ... \sum_{n_k=0}^\infty \prod_i (\pi_i/\mu_i)^{n_i} z^{\sum_i n_i - m} dz/z
            = (1/2\pi i) \oint_C \prod_{i=1}^k [1/(1 - (\pi_i/\mu_i) z)] dz/z^{m+1}

Let
    g(m, k, z) = \prod_{i=1}^k 1/(1 - (\pi_i/\mu_i) z)
and rewrite it as a power series,
    g(m, k, z) = \sum_{j=0}^\infty g_j z^j

Then, we have
    g(m, k) = (1/2\pi i) \oint_C (\sum_{j=0}^\infty g_j z^j) dz/z^{m+1}
            = (1/2\pi i) \oint_C \sum_{j=0}^\infty g_j z^{j-m} dz/z
            = g_m   (5.11)
Example
[Figure: a closed network with five servers: server 5 (service rate \mu) routes customers to server 1 with probability r_51 = 0.5 and to server 3 with probability r_53 = 0.5; servers 1 and 2 (service rate 2\mu each) form one tandem branch, servers 3 and 4 (service rate \mu each) form the other, and both branches return to server 5.]

In this example, we have k = 5 and
    \pi_1 = \pi_2 = \pi_3 = \pi_4 = 0.5 \pi_5
    [\mu_1, \mu_2, \mu_3, \mu_4, \mu_5] = [2, 2, 1, 1, 1] \mu
Writing \rho for the relative load \pi_5/\mu_5 of the reference server 5, we get
    \pi_1/\mu_1 = \pi_2/\mu_2 = \rho/4 ,   \pi_3/\mu_3 = \pi_4/\mu_4 = \rho/2 ,   \pi_5/\mu_5 = \rho

We have
    g(m, 5, z) = 1 / [ (1 - (\rho/4)z)^2 (1 - (\rho/2)z)^2 (1 - \rho z) ]
               = -(16/9)/(1 - (\rho/4)z) - (1/3)/(1 - (\rho/4)z)^2 - 4/(1 - (\rho/2)z)^2 + (64/9)/(1 - \rho z)

Now,
    \sum_{i=0}^\infty (\rho/4)^i z^i = 1/(1 - (\rho/4)z)
    d/dz \sum_{i=0}^\infty (\rho/4)^i z^i = d/dz [1/(1 - (\rho/4)z)]
    \sum_{i=1}^\infty i (\rho/4)^{i-1} z^{i-1} = (\rho/4) / (1 - (\rho/4)z)^2
    \sum_{i=0}^\infty (i+1)(\rho/4)^i z^i = 1/(1 - (\rho/4)z)^2
Similarly,
    1/(1 - (\rho/2)z)^2 = \sum_{i=0}^\infty (i+1)(\rho/2)^i z^i
    1/(1 - \rho z) = \sum_{i=0}^\infty \rho^i z^i
Thus,
    g(m, 5, z) = -(16/9) \sum_{i=0}^\infty (\rho/4)^i z^i - (1/3) \sum_{i=0}^\infty (i+1)(\rho/4)^i z^i
                 - 4 \sum_{i=0}^\infty (i+1)(\rho/2)^i z^i + (64/9) \sum_{i=0}^\infty \rho^i z^i

By (5.11), we have
    g(m, 5) = -(16/9)(\rho/4)^m - (1/3)(m+1)(\rho/4)^m - 4(m+1)(\rho/2)^m + (64/9)\rho^m

Suppose \rho = 4; then
    g(4, 5) = -16/9 - (1/3)(5) - 4(5)(2)^4 + (64/9)(4)^4 = 1497
and
    P(n_1, n_2, n_3, n_4, n_5) = (1/1497) 2^{n_3+n_4} 4^{n_5}   if n_1 + n_2 + n_3 + n_4 + n_5 = 4
                               = 0                               otherwise
5.9.1 Arrival Theorem
P{customer observes n_l at server l, l = 1, ..., k | customer goes from i to j}
  = P{state is (n_1, ..., n_i + 1, ..., n_j, ..., n_k), customer goes from i to j} / P{customer goes from i to j}
  = P_m(n_1, ..., n_i + 1, ..., n_j, ..., n_k) \mu_i P_ij
    / \sum_{n'_1 + ... + n'_k = m-1} P_m(n'_1, ..., n'_i + 1, ..., n'_k) \mu_i P_ij
  = (\pi_i/\mu_i) \prod_{j=1}^k (\pi_j/\mu_j)^{n_j}
    / [ (\pi_i/\mu_i) \sum_{n'_1 + ... + n'_k = m-1} \prod_{p=1}^k (\pi_p/\mu_p)^{n'_p} ]
  = P_{m-1}(n_1, ..., n_k) ,   \sum_{i=1}^k n_i = m - 1

Thus, in a closed system with m customers, the system seen by arrivals to server j has the same distribution as the same network where there are only m - 1 customers.
5.9.2 Mean Value Analysis
Let
    L_m(j) = average number of customers at server j (population m)
    W_m(j) = average time a customer spends at server j when he visits server j
           = [1 + L_{m-1}(j)] / \mu_j
By Little's Law,
    L_{m-1}(j) = \lambda_{m-1}(j) W_{m-1}(j)   (5.12)
               = \lambda_{m-1} \pi_j W_{m-1}(j)
    W_m(j) = [1 + \lambda_{m-1} \pi_j W_{m-1}(j)] / \mu_j
    \sum_{j=1}^k L_{m-1}(j) = m - 1

From (5.12),
    m - 1 = \sum_{j=1}^k \lambda_{m-1}(j) W_{m-1}(j) = \lambda_{m-1} \sum_{j=1}^k \pi_j W_{m-1}(j)
    \lambda_{m-1} = (m - 1) / \sum_{j=1}^k \pi_j W_{m-1}(j)
    W_m(j) = 1/\mu_j + (m-1) \pi_j W_{m-1}(j) / [ \mu_j \sum_{j'=1}^k \pi_{j'} W_{m-1}(j') ]

Initially, we have
    W_1(j) = 1/\mu_j
Since {\pi_i} can be obtained from [P_ij], the values of W_2(j), W_3(j), ..., W_m(j) can be found recursively.
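The recursion above (mean value analysis, built on the arrival theorem) is easy to code. A minimal sketch for single-server nodes; the function name and the two-node test values are mine, not from the notes:

```python
def mva(pi, mu, m):
    """Mean value analysis for a closed network of single-server queues.
    pi: visit ratios (limiting probabilities of the routing chain), mu: service rates,
    m: customer population. Returns (per-node W, throughput, per-node L) at population m."""
    k = len(pi)
    L = [0.0] * k                                         # L_0(j) = 0
    for pop in range(1, m + 1):
        W = [(1 + L[j]) / mu[j] for j in range(k)]        # arrival theorem
        lam = pop / sum(pi[j] * W[j] for j in range(k))   # total throughput at this population
        L = [lam * pi[j] * W[j] for j in range(k)]        # Little's law per node
    return W, lam, L

# Hypothetical two-node network: visit ratios (0.5, 0.5), rates (1.0, 2.0), 5 customers.
print(mva([0.5, 0.5], [1.0, 2.0], 5))
```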
Example (A Time-Sharing System)
[Figure: N terminals, each with average reflection (think) time R, submit jobs to a central computer with average job processing time P; \lambda is the rate at which jobs flow from the terminals (point A) through the computer (B) and back to the terminals (C).]

Let
    T = average time a user spends in the system
    D = delay between the time a job is submitted to the computer and the time its execution is completed
    N = number of terminals

Applying Little's Law to the B-C portion,
    \lambda = N/T <= 1/P
(\lambda P is the average number of jobs at the computer and is always <= 1.)
    T = R + D

In the worst case, a particular job is the last among the N jobs to be completed, in which case we have T = R + NP. At the other extreme, a particular job is the first one to be completed and we have T = R + P. Thus,
    R + P <= T <= R + NP
    N/(R + NP) <= \lambda = N/T <= N/(R + P)
    N/(R + NP) <= \lambda <= min{ N/(R + P), 1/P }
    max(NP, R + P) <= T <= R + NP

[Figure: throughput \lambda versus the number of terminals N; the lower bound N/(R + NP) and the upper bound min{N/(R + P), 1/P} meet near the saturation point N = 1 + R/P.]

When N -> \infty,
    N/(R + NP) -> 1/P ,  so \lambda -> 1/P ,  and NP <= T <= R + NP gives T ~ NP.

For N < 1 + R/P, the throughput bottleneck is N, the number of terminals (the computer is mostly idle).
For N > 1 + R/P, the throughput bottleneck is the processing power 1/P (the computer is always busy).

[Figure: average time T a job spends in the system versus N; the lower bound of delay due to limited CPU processing power is T = NP, the upper bound is T = R + NP, and for small N the delay is close to R + P, i.e. no waiting at the CPU.]

The above system is equivalent to the following.

[Figure: a closed network of an M/M/\infty node (the terminals, service rate 1/R) and an M/M/1 node (the computer, service rate 1/P).]

As before, we have \lambda <= 1/P. Let \mu = 1/P. The relative loads of the two nodes are
    x_1 = \mu R = R/P ,   x_2 = \mu P = 1

    P(n, N - n) = x_1^n x_2^{N-n} / (n! G(N)) = (R/P)^n / (n! G(N))

    1 = \sum_{n=0}^N P(n, N - n)
      = (1/G(N)) [ 1 + (R/P) + (R/P)^2/2! + ... + (R/P)^N/N! ]

    G(N) = 1 + R/P + (1/2!)(R/P)^2 + ... + (1/N!)(R/P)^N
         = G(N-1) + (1/N!)(R/P)^N
         = G(N-1) + G(N) P(N, 0)

Utilization of the CPU is
    U(N) = 1 - P(N, 0) = 1 - (R/P)^N / (N! G(N)) = G(N-1)/G(N) = \lambda(N) P
Thus,
    \lambda(N) = G(N-1) / (P G(N))
               = [1 + R/P + (1/2!)(R/P)^2 + ... + (1/(N-1)!)(R/P)^{N-1}]
                 / ( P [1 + R/P + (1/2!)(R/P)^2 + ... + (1/N!)(R/P)^N] )
               -> 1/P   as N -> \infty

as shown by the bold line in the following figure.

[Figure: the exact throughput \lambda(N) = G(N-1)/(P G(N)) versus N, plotted together with the bounds N/(R + NP) and min{N/(R + P), 1/P}; the curve approaches 1/P as N -> \infty.]
Example (Multiprogramming System)
[Figure: a multiprogramming system with multiprogramming level N; jobs circulate between the CPU (Server 1, service rate \mu) and the I/O device (Server 2, service rate \lambda); after a CPU service a job either requests I/O or completes and is immediately replaced by a new program.]

    P_11 = p ,   P_12 = q = 1 - p
    P_21 = 1 ,   P_22 = 0

    \lambda_1 = \lambda_1 P_11 + \lambda_2 P_21 = p \lambda_1 + \lambda_2
    \lambda_2 = \lambda_1 P_12 + \lambda_2 P_22 = q \lambda_1
The relative loads of the CPU (rate \mu) and the I/O device (rate \lambda) are
    \rho_1 = \lambda_1/\mu ,   \rho_2 = \lambda_2/\lambda = q \lambda_1/\lambda ,   and   \rho = \rho_1/\rho_2 = \lambda/(q\mu)

    P(n, N - n) = \rho_1^n \rho_2^{N-n} / G'(N) = \rho^n / G(N)
    G(N) = 1 + \rho + \rho^2 + ... + \rho^N = \sum_{n=0}^N \rho^n
    P(0, N) = 1/G(N) = 1 / \sum_{n=0}^N \rho^n
    P(N, 0) = \rho^N / \sum_{n=0}^N \rho^n

The utilization of the CPU is
    U_1 = 1 - P_0 = 1 - 1/\sum_{n=0}^N \rho^n
and the throughput of completed jobs is
    \lambda(N) = p \mu (1 - P_0) = p \mu [ 1 - 1/\sum_{n=0}^N \rho^n ]
    W(N) = N/\lambda(N) = N / ( p \mu [ 1 - 1/\sum_{n=0}^N \rho^n ] )

Average number of CPU service periods per job = 1/p = m
Average number of I/O periods per job = m - 1 = 1/p - 1 = q/p

    (average CPU time) / (average I/O time) = [(1/p)(1/\mu)] / [(q/p)(1/\lambda)] = \lambda/(q\mu) = \rho
Thus,
    \rho < 1 : the bottleneck is at the I/O
    \rho > 1 : the bottleneck is at the CPU
5.10 G/G/1 Queues
An upper bound for the G/G/1 system
The average waiting time in queue is
    W <= \lambda (\sigma_a^2 + \sigma_b^2) / (2(1 - \rho))
where
    \sigma_a^2 = variance of the interarrival times
    \sigma_b^2 = variance of the service times
    1/\lambda = average interarrival time
    \rho = \lambda/\mu

Proof. Let
    W_k = waiting time of the k-th customer
    X_k = service time of the k-th customer
    \tau_k = interarrival time between the k-th and (k+1)-st customer
Then,
    W_{k+1} = max{0, W_k + X_k - \tau_k}

Let
    Y^+ = max{0, Y} ,   Y^- = -min{0, Y}
    \bar{Y} = E[Y] ,   \sigma_Y^2 = E[Y^2] - \bar{Y}^2
with the following properties:
    Y = Y^+ - Y^-
    Y^+ Y^- = 0
    \bar{Y} = \bar{Y^+} - \bar{Y^-}
    \sigma_Y^2 = \sigma_{Y^+}^2 + \sigma_{Y^-}^2 + 2 \bar{Y^+} \bar{Y^-}

[Figure: two cases - if W_k + X_k >= \tau_k the (k+1)-st customer waits W_{k+1} = W_k + X_k - \tau_k; otherwise the server is idle for I_k before the (k+1)-st service starts and W_{k+1} = 0.]

Let
    V_k = X_k - \tau_k
Then we have
    W_{k+1} = (W_k + V_k)^+
    I_k = (W_k + V_k)^- = idle period between the k-th and (k+1)-st arrivals
and
    \sigma_{W_k + V_k}^2 = \sigma_{(W_k+V_k)^+}^2 + \sigma_{(W_k+V_k)^-}^2 + 2 \bar{(W_k+V_k)^+} \bar{(W_k+V_k)^-}
                         = \sigma_{W_{k+1}}^2 + \sigma_{I_k}^2 + 2 \bar{W}_{k+1} \bar{I}_k   (5.13)

Since W_k and V_k are independent, we also have
    \sigma_{W_k + V_k}^2 = \sigma_{W_k}^2 + \sigma_{V_k}^2 = \sigma_{W_k}^2 + \sigma_a^2 + \sigma_b^2   (5.14)

Combining (5.13) and (5.14), we have
    \sigma_{W_k}^2 + \sigma_a^2 + \sigma_b^2 = \sigma_{W_{k+1}}^2 + \sigma_{I_k}^2 + 2 \bar{W}_{k+1} \bar{I}_k

As k -> \infty,
    \sigma_{W_k}^2 -> \sigma_W^2 ,  \sigma_{I_k}^2 -> \sigma_I^2 ,  \bar{W}_{k+1} -> \bar{W} ,  \bar{I}_k -> \bar{I}
and
    \bar{W} = (\sigma_a^2 + \sigma_b^2) / (2\bar{I}) - \sigma_I^2 / (2\bar{I})

The average idle time between two successive arrivals is
    \bar{I} = E[I_k] = (1 - \rho) / \lambda
Hence,
    \bar{W} = \lambda(\sigma_a^2 + \sigma_b^2) / (2(1 - \rho)) - \lambda \sigma_I^2 / (2(1 - \rho))
            <= \lambda(\sigma_a^2 + \sigma_b^2) / (2(1 - \rho))
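For M/M/1 the exact waiting time is known, so the bound can be checked directly. A rough sketch with my own parameter choices (\lambda = 0.8, \mu = 1, so \sigma_a^2 = 1/\lambda^2 and \sigma_b^2 = 1/\mu^2):

```python
import random

def gg1_mean_wait(arrival_sampler, service_sampler, n=300000):
    """Average wait of a G/G/1 queue via the Lindley recursion."""
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        w = max(0.0, w + service_sampler() - arrival_sampler())
    return total / n

lam, mu = 0.8, 1.0
rho = lam / mu
bound = lam * (1/lam**2 + 1/mu**2) / (2 * (1 - rho))     # upper bound = 5.125 here
exact = lam / (mu * (mu - lam))                          # exact M/M/1 value = 4.0
sim = gg1_mean_wait(lambda: random.expovariate(lam), lambda: random.expovariate(mu))
print(sim, exact, bound)     # simulated wait is close to 4.0 and below the bound
```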