Stochastic Processes
Introduction
Let ξ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ξ) is assigned. The collection of such waveforms forms a stochastic process. The set of {ξ_k} and the time index t can be continuous or discrete (countably infinite or finite) as well.

[Fig. 14.1: an ensemble of waveforms X(t, ξ_1), X(t, ξ_2), …, X(t, ξ_k), …, X(t, ξ_n) plotted against t.]

For fixed ξ_i ∈ S (the set of all experimental outcomes), X(t, ξ_i) is a specific time function. For fixed t = t_1,

X_1 = X(t_1, ξ_i)

is a random variable. The ensemble of all such realizations X(t, ξ) over time represents the stochastic process X(t) (see Fig. 14.1). For example
X(t) = a cos(ω_0 t + φ),

where φ is a uniformly distributed random variable in (0, 2π), represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queueing systems all represent stochastic phenomena.
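As a quick illustration, the random-phase cosine above can be simulated by drawing one phase per realization. This is only a sketch; the values of a, ω_0 and the ensemble size are illustrative choices, not from the text:

```python
import numpy as np

# Sketch: Monte Carlo ensemble for X(t) = a*cos(w0*t + phi), phi ~ U(0, 2*pi).
# The values of a, w0 and the ensemble size are illustrative, not from the text.
rng = np.random.default_rng(0)
a, w0 = 1.0, 2.0
t = np.linspace(0.0, 10.0, 200)                      # time grid
phi = rng.uniform(0.0, 2 * np.pi, size=(5000, 1))    # one phase per realization
X = a * np.cos(w0 * t + phi)                         # rows are realizations X(t, xi)

ensemble_mean = X.mean(axis=0)    # approximates E{X(t)}, which is 0 for every t
print(float(np.abs(ensemble_mean).max()))            # small, shrinking like 1/sqrt(N)
```

Each row of `X` plays the role of one outcome ξ_i; averaging down the rows approximates the ensemble expectation at each fixed t.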
If X(t) is a stochastic process, then for fixed t, X(t) represents
a random variable. Its distribution function is given by
F_X(x, t) = P{X(t) ≤ x}.    (14-1)
Notice that F_X(x, t) depends on t, since for a different t we obtain a different random variable. Further,

f_X(x, t) = dF_X(x, t)/dx    (14-2)

represents the first-order probability density function of the process X(t).
PILLAI/Cha
For t = t_1 and t = t_2, X(t) represents two different random variables X_1 = X(t_1) and X_2 = X(t_2) respectively. Their joint distribution is given by

F_X(x_1, x_2, t_1, t_2) = P{X(t_1) ≤ x_1, X(t_2) ≤ x_2}    (14-3)

and

f_X(x_1, x_2, t_1, t_2) = ∂²F_X(x_1, x_2, t_1, t_2)/∂x_1∂x_2    (14-4)

represents the second-order density function of the process X(t).
Similarly f_X(x_1, x_2, …, x_n, t_1, t_2, …, t_n) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires the knowledge of f_X(x_1, x_2, …, x_n, t_1, t_2, …, t_n) for all t_i, i = 1, 2, …, n and for all n (an almost impossible task in reality).
Mean of a Stochastic Process:

μ_X(t) = E{X(t)} = ∫ x f_X(x, t) dx    (14-5)

represents the mean value of a process X(t). In general, the mean of a process can depend on the time index t.
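A small simulation makes the t-dependence concrete. The process X(t) = Z·t below is an illustrative choice (not from the text): its mean μ_X(t) = E{Z}·t = t grows with t, and the ensemble average tracks it:

```python
import numpy as np

# Sketch: a process whose mean depends on t. Here X(t) = Z*t with
# Z ~ N(1, 1) is an illustrative choice, so mu_X(t) = E{Z}*t = t.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 9)
Z = rng.normal(1.0, 1.0, size=(20000, 1))
X = Z * t                        # each row is one realization X(t, xi)
mu_hat = X.mean(axis=0)          # empirical mean function, tracks mu_X(t) = t
print(np.round(mu_hat, 1))
```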
Properties:

The autocorrelation function R_XX(t_1, t_2) = E{X(t_1)X*(t_2)} is non-negative definite: for any set of constants {a_i},

Σ_i Σ_k a_i a_k* R_XX(t_k, t_i) = E{|Σ_k a_k* X(t_k)|²} ≥ 0.    (14-8)

Similarly, with z = ∫_{−T}^{T} X(t) dt,

E[|z|²] = ∫_{−T}^{T} ∫_{−T}^{T} E{X(t_1)X*(t_2)} dt_1 dt_2 = ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) dt_1 dt_2.    (14-10)
Example 14.2:

X(t) = a cos(ω_0 t + φ),  φ ~ U(0, 2π).    (14-11)

This gives

μ_X(t) = E{X(t)} = a E{cos(ω_0 t + φ)} = a cos(ω_0 t) E{cos φ} − a sin(ω_0 t) E{sin φ} = 0,

since E{cos φ} = E{sin φ} = 0 for φ ~ U(0, 2π),

and hence the above integral reduces to

σ_z² = (1/2T)² ∫_{−2T}^{2T} R_XX(τ)(2T − |τ|) dτ = (1/2T) ∫_{−2T}^{2T} R_XX(τ)(1 − |τ|/2T) dτ.    (14-24)
Systems with Stochastic Inputs

A deterministic system¹ transforms each input waveform X(t, ξ_i) into an output waveform Y(t, ξ_i) = T[X(t, ξ_i)] by operating only on the time variable t. Thus a set of realizations at the input corresponding to a process X(t) generates a new set of realizations {Y(t, ξ)} at the output associated with a new process Y(t).
[Fig. 14.3: input realizations X(t, ξ_i) pass through the system T[·] to produce output realizations Y(t, ξ_i).]
Our goal is to study the output process statistics in terms of the input
process statistics and the system function.
¹A stochastic system, on the other hand, operates on both the variables t and ξ.
Deterministic Systems

Linear Time-Invariant (LTI) systems: the output is the convolution of the input with the impulse response h(t),

Y(t) = h(t) ∗ X(t) = ∫ h(t − τ) X(τ) dτ = ∫ h(τ) X(t − τ) dτ.
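On a discrete grid the convolution integral can be approximated with a Riemann sum. In this sketch the impulse response h(t) = e^{−t} and the particular random-phase input are illustrative assumptions, not from the text:

```python
import numpy as np

# Sketch: an LTI system acting on one input realization, Y(t) = (h * X)(t),
# approximated on a grid with np.convolve. The RC-type impulse response
# h(t) = exp(-t), t >= 0, and the input are illustrative choices.
rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                                    # impulse response samples
x = np.cos(2.0 * t + rng.uniform(0, 2 * np.pi))   # one input realization
y = np.convolve(x, h)[: t.size] * dt              # Riemann sum for the integral
print(y[:3])
```

Applying the same filter to every realization of the input ensemble yields the realizations of the output process Y(t).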
Memoryless Systems:

The output Y(t) in this case depends only on the present value of the input X(t), i.e.,

Y(t) = g{X(t)}.    (14-25)
Proof: With X(t) a zero-mean stationary Gaussian input,

R_XY(τ) = E{X(t)Y(t + τ)} = E[X(t) g{X(t + τ)}] = ∫∫ x_1 g(x_2) f_{X_1X_2}(x_1, x_2) dx_1 dx_2,    (14-27)

where X̄ = (X_1, X_2)^T with X_1 = X(t) and X_2 = X(t + τ) is jointly Gaussian with covariance matrix

A = E{X̄ X̄*} = | R_XX(0)  R_XX(τ) | = L L*,
              | R_XX(τ)  R_XX(0) |
where L is an upper-triangular factor matrix with positive diagonal entries, i.e.,

L = | l_11  l_12 |
    |  0    l_22 |.
Consider the transformation

Z̄ = L⁻¹X̄ = (Z_1, Z_2)^T,  z̄ = L⁻¹x̄ = (z_1, z_2)^T,

so that

E{Z̄ Z̄*} = L⁻¹ E{X̄ X̄*} L*⁻¹ = L⁻¹ A L*⁻¹ = I,

i.e., Z_1 and Z_2 are independent standard Gaussian random variables. The Jacobian of the transformation is given by
|J| = |L⁻¹| = |A|^{−1/2}.

Hence, substituting these into (14-27) and using

f_{X_1X_2}(x_1, x_2) dx_1 dx_2 = (1/(|J| 2π |A|^{1/2})) e^{−(z_1² + z_2²)/2} dz_1 dz_2 = f_{z_1}(z_1) f_{z_2}(z_2) dz_1 dz_2,

we obtain

R_XY(τ) = ∫∫ (l_11 z_1 + l_12 z_2) g(l_22 z_2) f_{z_1}(z_1) f_{z_2}(z_2) dz_1 dz_2
        = l_11 ∫ z_1 f_{z_1}(z_1) dz_1 ∫ g(l_22 z_2) f_{z_2}(z_2) dz_2 + l_12 ∫ z_2 g(l_22 z_2) f_{z_2}(z_2) dz_2.

The first term is zero, since E{Z_1} = 0. With the substitution u = l_22 z_2, the remaining term becomes

R_XY(τ) = l_12 ∫ z_2 g(l_22 z_2) (1/√(2π)) e^{−z_2²/2} dz_2 = (l_12/l_22²) ∫ u g(u) (1/√(2π)) e^{−u²/2l_22²} du
        = l_12 l_22 ∫ g(u) (u/l_22²) f_u(u) du,

where f_u(u) = (1/(√(2π) l_22)) e^{−u²/2l_22²} is the N(0, l_22²) density. Since (u/l_22²) f_u(u) = −df_u(u)/du, integration by parts gives

R_XY(τ) = −l_12 l_22 ∫ g(u) (df_u(u)/du) du = l_12 l_22 ∫ g′(u) f_u(u) du = R_XX(τ) ∫ g′(u) f_u(u) du,

since A = LL* gives l_12 l_22 = A_12 = R_XX(τ) and l_22² = A_22 = R_XX(0), so that f_u is the density of X itself. Thus the input-output cross-correlation is the input autocorrelation scaled by the constant η = E{g′(X)}.
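This result (often called Bussgang's theorem) is easy to check numerically. The sketch below uses an illustrative Gaussian AR(1) input and g(x) = x³, for which η = E{g′(X)} = 3σ²; none of these particular choices come from the text:

```python
import numpy as np

# Sketch: numerical check of R_XY(tau) = eta * R_XX(tau) with
# eta = E{g'(X)}, for a Gaussian input and g(x) = x**3 (so g'(x) = 3x^2
# and eta = 3*sigma^2). The AR(1) shaping and all values are illustrative.
rng = np.random.default_rng(8)
N, a = 400000, 0.7
x = rng.normal(0.0, 1.0, N)
for n in range(1, N):            # turn white noise into a Gaussian AR(1) process
    x[n] += a * x[n - 1]
y = x ** 3                       # memoryless output Y(n) = g{X(n)}
sigma2 = np.mean(x * x)          # input variance R_XX(0)
k = 3                            # an arbitrary lag
rxy = np.mean(x[:-k] * y[k:])
rxx = np.mean(x[:-k] * x[k:])
print(round(rxy / rxx, 1), round(3 * sigma2, 1))   # the two should agree
```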
[Fig. 14.6: an arbitrary input realization X(t) applied to an LTI system produces the output realization Y(t).]

Y(t) = ∫ h(t − τ) X(τ) dτ = ∫ h(τ) X(t − τ) dτ.    (14-31)
Eq. (14-31) follows by expressing X(t) as

X(t) = ∫ X(τ) δ(t − τ) dτ    (14-32)

and applying (14-28) and (14-30) to Y(t) = L{X(t)}. Thus

Y(t) = L{X(t)} = L{∫ X(τ) δ(t − τ) dτ}
     = ∫ X(τ) L{δ(t − τ)} dτ    (by linearity)
     = ∫ X(τ) h(t − τ) dτ = ∫ h(τ) X(t − τ) dτ    (by time-invariance).    (14-33)
Output Statistics: Using (14-33), the mean of the output process is given by

μ_Y(t) = E{Y(t)} = E{∫ X(τ) h(t − τ) dτ} = ∫ μ_X(τ) h(t − τ) dτ = μ_X(t) ∗ h(t).    (14-34)
[Fig. 14.7: (a) μ_X(t) → h(t) → μ_Y(t); (b) R_XX(t_1, t_2) → h*(t_2) → R_XY(t_1, t_2) → h(t_1) → R_YY(t_1, t_2).]
In particular, if X(t) is wide-sense stationary, then we have μ_X(t) = μ_X, so that from (14-34)

μ_Y(t) = μ_X ∫ h(τ) dτ = c, a constant.    (14-38)
[Fig. 14.8: (a) a wide-sense stationary process X(t) into an LTI system h(t) produces a wide-sense stationary process Y(t); (b) a strict-sense stationary input produces a strict-sense stationary output (see Text for proof); (c) a Gaussian process (also stationary) into a linear system produces a Gaussian process (also stationary).]
White Noise Process:

W(t) is said to be a white noise process if

R_WW(t_1, t_2) = q(t_1) δ(t_1 − t_2),    (14-42)

i.e., E[W(t_1)W*(t_2)] = 0 unless t_1 = t_2.

W(t) is said to be wide-sense stationary (w.s.s.) white noise if E[W(t)] = constant, and

R_WW(t_1, t_2) = q δ(t_1 − t_2) = q δ(τ).    (14-43)

[Fig. 14.9]

For w.s.s. white noise input W(t), we have
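In discrete time the analogue of (14-43) is an uncorrelated sequence whose sample autocorrelation is ≈ q at lag 0 and ≈ 0 at every other lag. This sketch (Gaussian samples, illustrative q) verifies that:

```python
import numpy as np

# Sketch: discrete-time w.s.s. white noise W[n] with variance q; its sample
# autocorrelation should be ~q at lag 0 and ~0 elsewhere (cf. (14-43)).
# q and the sample size are illustrative choices.
rng = np.random.default_rng(3)
q, N = 2.0, 100000
w = rng.normal(0.0, np.sqrt(q), N)
r0 = np.mean(w * w)                      # lag-0 autocorrelation, ~ q
r5 = np.mean(w[:-5] * w[5:])             # lag-5 autocorrelation, ~ 0
print(round(r0, 2), round(r5, 2))
```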
E[N(t)] = μ_W ∫ h(τ) dτ, a constant,    (14-44)

and

R_NN(t_1, t_2) = q ρ(t_1 − t_2) = q ρ(τ),    (14-45)

where

ρ(τ) = h(τ) ∗ h*(−τ) = ∫ h(α) h*(α − τ) dα.    (14-46)

We wish to determine the mean rate of zero crossings.
[Fig. 14.10: a downcrossing of the zero level.]

Since X(t) is a stationary Gaussian process, its derivative process X′(t) is also zero-mean stationary Gaussian with autocorrelation function R_X′X′(τ) = −R_XX″(τ) (see (9-101)-(9-106), Text). Further, X(t) and X′(t) are jointly Gaussian stationary processes, and since (see (9-106), Text)

R_X′X(τ) = dR_XX(τ)/dτ,
we have

R_XX′(τ) = dR_XX(τ)/d(−τ) = −dR_XX(τ)/dτ,    (14-47)

which for τ = 0 gives

R_XX′(0) = 0,  i.e.,  E[X(t)X′(t)] = 0.    (14-48)

Thus the jointly Gaussian zero-mean random variables

X_1 = X(t)  and  X_2 = X′(t)    (14-49)

are uncorrelated and hence independent, with variances

σ_1² = R_XX(0)  and  σ_2² = R_X′X′(0) = −R_XX″(0) > 0    (14-50)
respectively. Thus

f_{X_1X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = (1/(2π σ_1 σ_2)) e^{−(x_1²/2σ_1² + x_2²/2σ_2²)}.    (14-51)

To determine λ, the rate of upcrossings,
we argue as follows: in an interval (t, t + Δt), the realization moves from X(t) = X_1 to X(t + Δt) ≈ X(t) + X′(t)Δt = X_1 + X_2Δt, and hence the realization intersects the zero level somewhere in that interval if

X_1 < 0,  X_2 > 0,  and  X(t + Δt) ≈ X_1 + X_2Δt > 0,    (14-52)

i.e., X_1 > −X_2Δt.

[Fig. 14.11: a realization crossing the zero level upward between t and t + Δt.]

Hence the probability of upcrossing in (t, t + Δt) is given by

λΔt ≈ ∫_0^∞ ∫_{−x_2Δt}^{0} f_{X_1X_2}(x_1, x_2) dx_1 dx_2 = ∫_0^∞ f_{X_2}(x_2) ( ∫_{−x_2Δt}^{0} f_{X_1}(x_1) dx_1 ) dx_2.    (14-53)

For small Δt the inner integral is approximately f_{X_1}(0) x_2 Δt, so that

λ = f_{X_1}(0) ∫_0^∞ x_2 f_{X_2}(x_2) dx_2 = (1/(√(2π) σ_1)) (σ_2/√(2π)) = σ_2/(2πσ_1) = (1/2π) √(−R_XX″(0)/R_XX(0))    (14-55)
[where we have made use of (5-78), Text]. There is an equal probability for downcrossings, and hence the total probability of crossing the zero line in an interval (t, t + Δt) equals λ_0 Δt, where

λ_0 = (1/π) √(−R_XX″(0)/R_XX(0)) > 0.    (14-56)
It follows that in a long interval T, there will be approximately λ_0 T crossings of the mean value. If −R_XX″(0) is large, then the autocorrelation function R_XX(τ) decays more rapidly as τ moves away from zero, implying larger random variation around the origin (mean value) for X(t), and the likelihood of zero crossings should increase with −R_XX″(0), agreeing with (14-56).
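The crossing-rate formula (14-56) can be checked by simulation. The sketch below approximates a stationary Gaussian process with R_XX(τ) = e^{−τ²/2} (an illustrative choice) as a sum of random-frequency, random-phase cosines; for this correlation λ_0 = (1/π)√(−R″(0)/R(0)) = 1/π ≈ 0.318:

```python
import numpy as np

# Sketch: empirical zero-crossing rate of an approximately Gaussian
# stationary process with R(tau) = exp(-tau^2/2), built by spectral
# sampling: X(t) = sqrt(2/K) * sum_k cos(w_k t + phi_k), w_k ~ N(0, 1),
# since E{cos(w*tau)} = exp(-tau^2/2). (14-56) then predicts rate 1/pi.
rng = np.random.default_rng(4)
dt, T, K, M = 0.02, 40.0, 100, 300
t = np.arange(0.0, T, dt)
crossings = 0
for _ in range(M):                       # M independent sample paths
    w = rng.normal(0.0, 1.0, K)         # frequencies drawn from the spectrum
    phi = rng.uniform(0.0, 2 * np.pi, K)
    x = np.sqrt(2.0 / K) * np.cos(np.outer(t, w) + phi).sum(axis=1)
    crossings += np.count_nonzero(np.diff(np.sign(x)))   # sign changes = crossings
rate = crossings / (M * T)
print(round(rate, 3), round(1 / np.pi, 3))   # the two should be close
```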
Discrete-Time Stochastic Processes:

A discrete-time stochastic process X_n = X(nT) is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by

μ_n = E{X(nT)},    (14-57)

R(n_1, n_2) = E{X(n_1T) X*(n_2T)}    (14-58)

and

C(n_1, n_2) = R(n_1, n_2) − μ_{n_1} μ*_{n_2}    (14-59)

respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, X(nT) is wide-sense stationary if

E{X(nT)} = μ, a constant,    (14-60)

and

E[X{(k + n)T} X*{(k)T}] = R(n) = r_n = r*_{−n},    (14-61)
i.e., R(n_1, n_2) = R(n_1 − n_2) = R*(n_2 − n_1). The positive-definite property of the autocorrelation sequence in (14-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence {r_n} forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix T_n given by

T_n = | r_0      r_1      r_2   …  r_n     |
      | r_1*     r_0      r_1   …  r_{n−1} |
      | ⋮        ⋮        ⋮        ⋮       |
      | r_n*     r_{n−1}* …  r_1*  r_0     | = T_n*^T    (14-62)

is non-negative (positive) definite for n = 0, 1, 2, …, ∞.
Proof: Let a = [a_0, a_1, …, a_n]^T represent an arbitrary constant vector. Then from (14-62),

a*^T T_n a = Σ_{i=0}^{n} Σ_{k=0}^{n} a_i a_k* r_{k−i},    (14-63)

since the Toeplitz character gives (T_n)_{i,k} = r_{k−i}. Using (14-61), Eq. (14-63) reduces to

a*^T T_n a = Σ_{i=0}^{n} Σ_{k=0}^{n} a_i a_k* E{X(kT) X*(iT)} = E{|Σ_{k=0}^{n} a_k* X(kT)|²} ≥ 0.    (14-64)

From (14-64), if X(nT) is a wide-sense stationary stochastic process, then T_n is a non-negative definite matrix for every n = 0, 1, 2, …, ∞. Similarly, the converse also follows from (14-64). (See section 9.4, Text.)
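As a concrete check of the theorem, the AR(1) autocorrelation sequence r_n = a^{|n|} of (14-77) below should make every T_n positive definite. This sketch (with an illustrative a) verifies that the smallest eigenvalue is positive:

```python
import numpy as np

# Sketch: the Hermitian-Toeplitz test (14-62) applied to the AR(1)
# autocorrelation sequence r_n = a^|n| (cf. (14-77)); a is illustrative.
a, n = 0.6, 5
idx = np.arange(n + 1)
T = a ** np.abs(np.subtract.outer(idx, idx))    # (T_n)_{i,k} = r_{k-i} = a^|k-i|
eigvals = np.linalg.eigvalsh(T)                 # real eigenvalues (T_n is symmetric)
print(eigvals.min() > 0)                        # prints True: T_n is positive definite
```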
or

H(z) = Σ_{k=0}^{∞} h(k) z^{−k} = X(z)/W(z) = (b_0 + b_1 z^{−1} + b_2 z^{−2} + … + b_q z^{−q}) / (1 + a_1 z^{−1} + a_2 z^{−2} + … + a_p z^{−p}) = B(z)/A(z)    (14-70)

represents the transfer function of the associated system response {h(n)} in Fig. 14.12, so that

X(n) = Σ_{k=0}^{∞} h(n − k) W(k).    (14-71)
Notice that the transfer function H(z) in (14-70) is rational with p poles and q zeros that determine the model order of the underlying system. From (14-68), the output undergoes regression over p of its previous values, and at the same time a moving average based on W(n), W(n − 1), …, W(n − q) of the input over (q + 1) values is added to it, thus generating an AutoRegressive Moving Average ARMA(p, q) process X(n). Generally the input {W(n)} represents a sequence of uncorrelated random variables of zero mean and constant variance σ_W², so that

R_WW(n) = σ_W² δ(n).    (14-72)

If, in addition, {W(n)} is normally distributed, then the output {X(n)} also represents a strict-sense stationary normal process.
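A minimal simulation sketch of an ARMA process, assuming scipy is available: `lfilter` implements exactly the rational-transfer-function filtering of (14-70)-(14-71). The coefficient values below are illustrative choices, not from the text:

```python
import numpy as np
from scipy.signal import lfilter

# Sketch: simulating an ARMA(1, 1) process by filtering white noise
# through H(z) = B(z)/A(z) as in (14-70). The coefficients are
# illustrative; in scipy's convention, a = [1, -0.7] realizes the
# recursion X(n) = 0.7*X(n-1) + W(n) + 0.5*W(n-1).
rng = np.random.default_rng(5)
b = [1.0, 0.5]                   # numerator B(z): the MA part (q = 1)
a = [1.0, -0.7]                  # denominator A(z): the AR part (p = 1)
w = rng.normal(0.0, 1.0, 50000)  # white noise input, sigma_W^2 = 1
x = lfilter(b, a, w)             # ARMA(1, 1) output process X(n)
print(round(float(x.var()), 2))
```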
If q = 0, then (14-68) represents an AR(p) process (all-pole process), and if p = 0, then (14-68) represents an MA(q) process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.
AR(1) process: An AR(1) process has the form (see (14-68))

X(n) = aX(n − 1) + W(n)    (14-73)

and from (14-70) the corresponding system transfer function is

H(z) = 1/(1 − a z^{−1}) = Σ_{n=0}^{∞} a^n z^{−n},  |a| < 1,    (14-74)

so that the normalized output autocorrelations are given by

ρ_X(n) = R_XX(n)/R_XX(0) = a^{|n|},  |n| ≥ 0.    (14-77)
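A direct simulation of (14-73) confirms (14-77); a and the sample size below are illustrative:

```python
import numpy as np

# Sketch: empirical check of (14-77), rho_X(n) = a^|n|, by simulating
# the AR(1) recursion (14-73). a and the sample size are illustrative.
rng = np.random.default_rng(6)
a, N = 0.5, 200000
w = rng.normal(0.0, 1.0, N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]          # X(n) = a*X(n-1) + W(n)
r0 = np.mean(x * x)
rho = [np.mean(x[:N - k] * x[k:]) / r0 for k in range(4)]
print(np.round(rho, 2))                 # close to [1, a, a**2, a**3]
```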
It is instructive to compare the AR(1) model discussed above with the one obtained by superimposing a random component on it, which may be an error term associated with observing a first-order AR process X(n). Thus

Y(n) = X(n) + V(n),    (14-78)

where X(n) ~ AR(1) as in (14-73), and V(n) is an uncorrelated random sequence with zero mean and variance σ_V² that is also uncorrelated with {W(n)}. The normalized autocorrelations of the observed process are

ρ_Y(n) = R_YY(n)/R_YY(0) = { 1,          n = 0
                           { c a^{|n|},  n = 1, 2, …    (14-80)

where

c = (σ_W²/(1 − a²)) / (σ_W²/(1 − a²) + σ_V²) < 1.    (14-81)
Eqs. (14-77) and (14-80) demonstrate the effect of superimposing
an error sequence on an AR(1) model. For non-zero lags, the
autocorrelation of the observed sequence {Y(n)} is reduced by a constant
factor compared to the original process {X(n)}.
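This attenuation can be verified numerically; all parameter values in the sketch are illustrative:

```python
import numpy as np

# Sketch: checking (14-80)-(14-81) -- observation noise V(n) scales all
# non-zero-lag autocorrelations of an AR(1) process by the constant c.
# All parameter values are illustrative choices.
rng = np.random.default_rng(7)
a, sw2, sv2, N = 0.8, 1.0, 1.0, 200000
w = rng.normal(0.0, np.sqrt(sw2), N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]                     # the AR(1) process X(n)
y = x + rng.normal(0.0, np.sqrt(sv2), N)           # observed Y(n) = X(n) + V(n)
c = (sw2 / (1 - a**2)) / (sw2 / (1 - a**2) + sv2)  # attenuation constant (14-81)
rho_y1 = np.mean(y[:-1] * y[1:]) / np.mean(y * y)  # empirical rho_Y(1)
print(round(rho_y1, 2), round(c * a, 2))           # the two should agree
```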
From (14-78), the superimposed error sequence V(n) only affects the lag-zero term: for n ≠ 0, R_YY(n) = R_XX(n), while R_YY(0) = R_XX(0) + σ_V².

For the second-order system in (14-83)-(14-85), whose impulse response is a weighted sum of the two pole terms, h(n) = b_1 λ_1^n + b_2 λ_2^n, the output autocorrelation is

R_XX(n) = σ_W² Σ_{k=0}^{∞} h(k) h*(k + n)
        = σ_W² [ |b_1|² λ_1*^n/(1 − |λ_1|²) + b_2 b_1* λ_1*^n/(1 − λ_1*λ_2) + b_1 b_2* λ_2*^n/(1 − λ_1 λ_2*) + |b_2|² λ_2*^n/(1 − |λ_2|²) ],    (14-89)
where we have made use of (14-85). From (14-89), the normalized output autocorrelations may be expressed as

ρ_X(n) = R_XX(n)/R_XX(0) = c_1 λ_1*^n + c_2 λ_2*^n,    (14-90)

where c_1 and c_2 are appropriate constants.
Damped Exponentials: When the second-order system in (14-83)-(14-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives a_1² − 4a_2 < 0 in (14-83). Thus

λ_1 = r e^{jθ},  λ_2 = λ_1*,  r = √a_2 ≤ 1.    (14-91)

In that case c_1 = c_2* = c e^{jδ} in (14-90), so that the normalized autocorrelations take the form

ρ_X(n) = 2c r^n cos(nθ − δ),    (14-92)
so that

ρ_X(1) = 2cr cos(θ − δ) = −a_1/(1 + a_2),    (14-95)

where the latter form is obtained from (14-92) with n = 1. But ρ_X(0) = 1 in (14-92) gives

2c cos δ = 1,  or  c = 1/(2 cos δ).    (14-96)

Substituting (14-96) into (14-92) and (14-95), we obtain the normalized output autocorrelations to be
ρ_X(n) = (a_2)^{n/2} cos(nθ − δ)/cos δ,  a_2 ≤ 1,    (14-97)

where δ satisfies

cos(θ − δ)/cos δ = −a_1/((1 + a_2)√a_2).    (14-98)

Thus the normalized autocorrelations of a damped second-order system with real coefficients subject to random uncorrelated impulses satisfy (14-97).
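The low-lag autocorrelations implied by (14-95), together with the recursion ρ(n) = −a_1ρ(n − 1) − a_2ρ(n − 2) that follows from (14-109) with p = 2, q = 0, can be checked by simulating an AR(2) process directly; a_1, a_2 below are illustrative values satisfying a_1² − 4a_2 < 0:

```python
import numpy as np

# Sketch: simulating a real AR(2) process with complex-conjugate poles
# (a1^2 - 4*a2 < 0, the damped-exponential case of (14-91)) and checking
# rho_X(1) = -a1/(1 + a2) from (14-95). a1, a2 are illustrative.
rng = np.random.default_rng(9)
a1, a2, N = -1.0, 0.5, 300000
w = rng.normal(0.0, 1.0, N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + w[n]   # the AR(2) recursion
r0 = np.mean(x * x)
rho1 = np.mean(x[:-1] * x[1:]) / r0
rho2 = np.mean(x[:-2] * x[2:]) / r0
print(round(rho1, 2), round(-a1 / (1 + a2), 2))    # the two should agree
```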
b_q = h_0 a_q + h_1 a_{q−1} + … + h_q,
0 = h_0 a_{q+i} + h_1 a_{q+i−1} + … + h_{q+i−1} a_1 + h_{q+i},  i ≥ 1,    (14-102)

and in particular

h_{p+1} a_{p+1} + h_{p+2} a_p + … + h_{2p+2} = 0,    (14-104)

and that gives det H_{p+1} = 0, etc. (Notice that a_{p+k} = 0, k ≥ 1.) (For the sufficiency proof, see Dienes.)
It is possible to obtain similar determinantal conditions for ARMA systems in terms of Hankel matrices generated from its output autocorrelation sequence.

Referring back to the ARMA(p, q) model in (14-68), the input white noise process w(n) there is uncorrelated with its own past sample values as well as the past values of the system output. This gives

E{w(n) w*(n − k)} = 0,  k ≥ 1,    (14-105)

E{w(n) x*(n − k)} = 0,  k ≥ 1.    (14-106)
Together with (14-68), we obtain

r_i = E{x(n) x*(n − i)}
    = −Σ_{k=1}^{p} a_k E{x(n − k) x*(n − i)} + Σ_{k=0}^{q} b_k E{w(n − k) x*(n − i)}
    = −Σ_{k=1}^{p} a_k r_{i−k} + Σ_{k=0}^{q} b_k E{w(n − k) x*(n − i)},    (14-107)

so that, using (14-106),

Σ_{k=1}^{p} a_k r_{i−k} + r_i ≠ 0,  i ≤ q,    (14-108)

and

Σ_{k=1}^{p} a_k r_{i−k} + r_i = 0,  i ≥ q + 1.    (14-109)
Here

D_k = | r_0      r_1      r_2      …  r_k     |
      | r_1      r_2      r_3      …  r_{k+1} |
      | ⋮        ⋮        ⋮           ⋮       |
      | r_k      r_{k+1}  r_{k+2}  …  r_{2k}  |    (14-111)

represents the (k + 1) × (k + 1) Hankel matrix generated from r_0, r_1, …, r_k, …, r_{2k}. It follows that for ARMA(p, q) systems, we have

det D_n = 0, for all sufficiently large n.    (14-112)
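The Hankel condition is easy to see on a pure MA example. The sketch below uses an MA(1) process with illustrative coefficients b_0 = 1, b_1 = 0.5 and unit input variance, whose exact autocorrelations are r_0 = 1 + b_1², r_1 = b_1 and r_k = 0 for k ≥ 2; D_1 is nonsingular while det D_2 vanishes, as (14-112) predicts:

```python
import numpy as np

# Sketch: the Hankel-determinant condition (14-111)-(14-112) for an MA(1)
# process with (illustrative) b0 = 1, b1 = 0.5, whose exact autocorrelations
# are r0 = 1 + b1^2, r1 = b1 and r_k = 0 for k >= 2.
r = [1.25, 0.5, 0.0, 0.0, 0.0]                    # r_0 ... r_4
D1 = np.array([[r[0], r[1]],
               [r[1], r[2]]])                     # 2 x 2 Hankel matrix D_1
D2 = np.array([[r[0], r[1], r[2]],
               [r[1], r[2], r[3]],
               [r[2], r[3], r[4]]])               # 3 x 3 Hankel matrix D_2
print(np.linalg.det(D1), np.linalg.det(D2))       # D_1 nonsingular, det D_2 = 0
```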