
Fundamentals of Stochastic Processes

Sachin C. Patawardhan, Department of Chemical Engineering, I.I.T. Bombay. Email: sachinp@iitb.ac.in

Outline


Axiomatic formulation of probability of random events
Random variables
Probability distribution and density functions
Random or stochastic processes
Multivariate Gaussian distributions and their properties
Markov processes
White noise and examples of simple stationary stochastic processes
Summary

Probability (Maybeck, 1979)


The intuitive approach defines the probability of an event of interest in terms of its relative frequency of occurrence. If the event A is observed to occur N(A) times in a total of N trials, then P(A) is defined by

$P(A) = \lim_{N \to \infty} \frac{N(A)}{N}$

provided that this limit in fact exists. Although this is a conceptually appealing basis for probability theory, it does not allow precise treatment of many problems and issues of direct importance. Modern probability theory is more rigorously based on an axiomatic definition of probability.
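A minimal Monte Carlo sketch of this frequency definition (the die-rolling event A used below is an illustrative assumption, not from the slides): the ratio N(A)/N settles near P(A) as N grows.

```python
# Estimate P(A) as the relative frequency N(A)/N for the event
# A = "an even face shows" when rolling a fair six-sided die.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
rolls = rng.integers(1, 7, size=N)          # outcomes 1..6, equally likely
N_A = np.count_nonzero(rolls % 2 == 0)      # number of times A occurred
print(N_A / N)                              # approaches P(A) = 0.5 as N grows
```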

Sample Space (Maybeck, 1979)


$\Omega$: fundamental sample space containing all possible outcomes of the experiment conducted.

$\omega$: single elementary outcome of the experiment (i.e. $\omega \in \Omega$).

A: a specific event of interest, a specific set of outcomes of the experiment. Each such event A is a subset of $\Omega$, i.e. $A \subset \Omega$.

An event A is said to occur if the observed outcome $\omega$ is an element of A, i.e. if $\omega \in A$.

Discrete sample space (with a finite or countably infinite number of elements): e.g. coin toss experiments.
Continuous sample space (with an uncountable number of elements): e.g. values measurement noise can take in a sensor.

Borel Field (Maybeck, 1979)

Sample space $\Omega$ for the purposes of our applications: set of points in n-dimensional Euclidean space $R^n$.

Borel field (B): B denotes the class of sets $A_i$ generated by sets of the form $A = \{\omega : \omega \le a, \omega \in \Omega\}$ (each of which is a subset of $\Omega$) and their complements, unions, and intersections.

Here $\omega \le a$ is to be interpreted componentwise: $\omega \le a$ means $\omega_1 \le a_1, \omega_2 \le a_2, \ldots, \omega_n \le a_n$ for the n components $\omega_i$ and $a_i$ of the vectors $\omega$ and $a$, respectively.

Note: the sets $\Omega$ and $\varnothing$ (null set) are elements of the Borel field.

Borel Field (Maybeck, 1979)

Taking complements, unions, and intersections of the sets $A_i$ leads to finite intervals (open, closed, half-open) and point values along each of the n directions.

Thus, a Borel field is composed of virtually all subsets of $R^n$ of interest in describing probability problems.

Example: $\Omega \equiv R$, i.e. the real line. Let $a_1$ and $a_2$ be points on R such that $a_1 < a_2$. Then sets such as $(-\infty, a_1]$, $(a_1, a_2]$, $[a_1, a_2]$, $(a_1, a_2)$, and the point value $\{a_1\}$ belong to the Borel field.


Probability Function (Maybeck, 1979)

The probability function (or probability measure) $P(\cdot)$ is defined to be a real scalar-valued function on the Borel field B that assigns a value $P(A_i)$ to each $A_i$ which is a member of B ($A_i \in B$) such that:
1. $P(A_i) \ge 0$ for all $A_i \in B$
2. $P(\Omega) = 1$
3. If $A_1, A_2, \ldots, A_N$ are disjoint (mutually exclusive) elements of B, then $P\left(\bigcup_{i=1}^{N} A_i\right) = \sum_{i=1}^{N} P(A_i)$ for all finite and countably infinite N.

Probability space: defined by the triplet $(\Omega, B, P)$ of the sample space, the underlying Borel field, and the probability function, all defined axiomatically.

Example: Rolling of a Die (Jazwinski, 1970)

Consider the experiment of rolling a die once.

We can define the probability (or sample) space as $\Omega_1 \equiv \{1, 2, 3, 4, 5, 6\}$. Now, a Borel field can be defined in multiple ways.

Case 1: Borel field $B_1$ can be taken as the set of all subsets of $\Omega_1$ (i.e. the power set of $\Omega_1$).

Case 2: If we are interested in betting only on the events (odd) and (even), then Borel field $B_2$ can be defined as $B_2 = \{\varnothing, \{1,3,5\}, \{2,4,6\}, \Omega_1\}$.

A probability function can be defined on either $B_1$ or $B_2$.



Example: Rolling of a Die (Jazwinski, 1970)

Now consider the class of sets A defined as $A = \{\varnothing, \{2\}, \{1,3,5\}, \{2,4,6\}, \Omega_1\}$. Note that $\{2\} \cup \{1,3,5\} = \{1,2,3,5\} \notin A$ even though $\{1,2,3,5\} \subset \Omega_1$. Thus, the class A does not qualify as a Borel field.

Note: the definition of a probability space for a given experiment is NOT unique. We can define $\Omega_2 \equiv R$, i.e. the set of all real numbers, with Borel field $B_3 \equiv$ the set of all subsets of R. Then, for example,

$\Pr\{\omega : \omega \le 2.5\} = 2 \cdot \tfrac{1}{6}$ ; $\Pr\{\omega : \omega \le 5.3\} = 5 \cdot \tfrac{1}{6}$

Example: Rolling of a Die (Jazwinski, 1970)

$\Pr\{\omega : \omega < 1\} = 0$ ; $\Pr\{\omega : 1.9 \le \omega \le 5.3\} = 4 \cdot \tfrac{1}{6}$ ; $\Pr\{\omega : \omega \le 10\} = 1$

Note: $\Pr\{A\} = 1$ does NOT imply $A = \Omega$, and $\Pr\{A\} = 0$ does NOT imply $A = \varnothing$.

Note: the triplet $(\Omega, B, P)$ must be 'specified' and is NOT determined by the physical experiment.

Random Variable (Maybeck, 1979)


The concept of a random variable is introduced because we need a mapping from the sample space to the real numbers for carrying out quantitative analysis.

A scalar random variable $x(\omega)$ is a real-valued point 'function' which assigns a real scalar value to each point $\omega$ in $\Omega$, denoted as $x(\omega) = x$, such that every set A of the form $A = \{\omega : x(\omega) \le x\}$ for any value x on the real line ($x \in R$) is an element of the Borel field B.

$x(\omega)$: random variable (function). $x$: a realization of the random variable (i.e. the value that this function assumes for a particular $\omega$).

Random Variable (Papoulis, 1991)


Consider the experiment of rolling a die once. We can define the probability space as $\Omega_3 \equiv \{f_1, f_2, f_3, f_4, f_5, f_6\}$ and define the RV $x(f_i) = 10i$.

Set $\{x \le 35\} = \{f_1, f_2, f_3\}$ because $x(f_i) \le 35$ only if $i = 1, 2, 3$.

Set $\{x \le 6\} = \varnothing$ because there is no outcome with $x(f_i) \le 6$.

Set $\{17.5 \le x \le 46.8\} = \{f_2, f_3, f_4\}$ because $17.5 \le x(f_i) \le 46.8$ for $i = 2, 3, 4$.

Set $\{x = 50\} = \{f_5\}$ because $x(f_i) = 50$ only if $i = 5$. Set $\{x = 28\} = \varnothing$ because there is no outcome such that $x(f_i) = 28$.

Random Variable (Maybeck, 1979)


Scalar random variables are specifically mappings from $\Omega$ into R such that inverse images of half-open intervals of the form $(-\infty, x]$ in R are events in $\Omega$ for which probabilities have been defined through the probability function P.

Vector random variable or random vector


A vector random variable $X(\omega)$ is a real-valued point 'function' which assigns a real vector value to each point $\omega$ in $\Omega$, denoted as $X(\omega) = x$, such that every set A of the form $A = \{\omega : X(\omega) \le x\}$ for any $x \in R^n$ is an element of the Borel field B.

Random Variable (Maybeck, 1979)


The random variable mapping for the systems under consideration: sample space $\Omega \equiv R^n$, with the random variable defined as the identity mapping, i.e. $X(\omega) = \omega$. Thus, each realization $X(\omega)$ is an n-dimensional vector whose components can take on any value in $(-\infty, \infty)$.


Probability Distribution Function


By definition of the random variable $X(\omega)$, every set A of the form $A = \{\omega : X(\omega) \le x\}$ has a probability. Therefore, the probability distribution function $F_X(x)$, a real scalar-valued function $F_X(x) : R^n \to [0, 1]$, always exists, defined by

$F_X(x) = P(\{\omega : X(\omega) \le x\}) = P(\{X \le x\}) = P(\{x_1 \le x_1, x_2 \le x_2, \ldots, x_n \le x_n\})$

Probability Distribution Function


Moreover, $F_X(x)$ is a monotonic nondecreasing function of any component $x_i$ of the vector x:

$F_X(x) \to 1$ as $x_i \to \infty$ for all i; $F_X(x) \to 0$ as $x_i \to -\infty$ for all i.

Marginal probability distribution: if we are interested only in probabilities concerning the first k of the n random variables, then

$F_{x_1,\ldots,x_k}(x_1, \ldots, x_k) = F_{x_1,\ldots,x_k,x_{k+1},\ldots,x_n}(x_1, \ldots, x_k, \infty, \ldots, \infty) = P(\{x_1 \le x_1, \ldots, x_k \le x_k, x_{k+1} \le \infty, \ldots, x_n \le \infty\})$

Coin Tossing Example (Jazwinski, 1970)

[Figure: probability distribution function for the coin tossing experiment.]

Example: Rolling a Die


Consider the experiment of rolling a fair die once. The distribution function of the RV $x(f_i) = 10i$ is a staircase function. In particular,

$F(38) = P\{x \le 38\} = P\{f_1, f_2, f_3\} = 3 \cdot \tfrac{1}{6}$
$F(31.9) = P\{x \le 31.9\} = P\{f_1, f_2, f_3\} = 3 \cdot \tfrac{1}{6}$
$F(29.5) = P\{x \le 29.5\} = P\{f_1, f_2\} = 2 \cdot \tfrac{1}{6}$
$F(6) = P\{x \le 6\} = P\{\varnothing\} = 0$
$F(82) = P\{x \le 82\} = P\{\Omega_3\} = 1$
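A short sketch of this staircase function (assuming the fair-die RV $x(f_i) = 10i$ defined earlier); the printed values reproduce the cases above.

```python
# Staircase distribution function F(x) = P{x(f_i) <= x} for x(f_i) = 10*i.
import numpy as np

def F(x):
    values = 10 * np.arange(1, 7)             # realizations 10, 20, ..., 60
    return np.count_nonzero(values <= x) / 6.0

for x in (38, 31.9, 29.5, 6, 82):
    print(x, F(x))                            # 0.5, 0.5, 1/3, 0.0, 1.0
```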

Example: Telephone Calls


Telephone calls occur at random during the time interval [0, T]. Let $\Omega \equiv [0, T]$, with

$P(t_1 \le \omega \le t_2) = \frac{t_2 - t_1}{T}, \quad t_1, t_2 \in [0, T]$

Define the RV as $x(\omega) = \omega$.

Example: Telephone Calls


Differentiating the distribution $F_x(x) = x/T$ on [0, T], we get

$f_x(x) = \begin{cases} 0 & x < 0 \\ 1/T & 0 < x < T \\ 0 & x > T \end{cases}$

The distribution is not differentiable at 0 and at T. These are the uniform distribution and density functions.

Probability Density Function

If a scalar-valued function $f_X(x)$ exists such that

$F_X(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f_X(\xi_1, \xi_2, \ldots, \xi_n)\, d\xi_1 d\xi_2 \cdots d\xi_n$

holds for all values of x, then this function is the probability density function of $X(\omega)$.

Simpler notation: $F_X(x) = \int_{-\infty}^{x} f_X(\xi)\, d\xi$

If such a density function exists, then $X(\omega)$ is termed a continuous random variable.

Properties of PDF
Using the fundamental theorem of calculus,

$f_X(x) = \frac{\partial^n F_X(x)}{\partial x_1 \partial x_2 \cdots \partial x_n}$

Properties:
1. Since $F_X(x)$ is a monotonic nondecreasing function, $f_X(x) \ge 0$ for all x.
2. $\int_{-\infty}^{\infty} f_X(x)\, dx = 1$
3. $P(a \le X \le b) = \int_{a}^{b} f_X(x)\, dx$, the probability of the event $A = \{a \le X \le b\}$.

Commonly Used Distributions


Two forms of random variables useful for modeling empirically observed phenomena are the uniformly and the Gaussian (or normally) distributed random variables. The uniformly distributed random variable models a situation in which a quantity of interest can take on any value in a specified range and in which there is no reason to believe certain ranges of values to be more probable than others. The Gaussian, or normal, random variable serves as a good model for many observed phenomena.

Univariate PDF Examples

(Ref.: Soderstrom, 2003)

Gaussian / normal distribution:

$f_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - m)^2}{2\sigma^2}\right), \quad \sigma > 0, \; x \in R$

Uniform distribution:

$f_X(x) = \begin{cases} \frac{1}{b - a} & a < x \le b \;(b > a) \\ 0 & \text{elsewhere} \end{cases}$
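A small sketch evaluating these two densities numerically (the parameter values chosen below are arbitrary illustrations):

```python
# Evaluate the Gaussian and uniform densities defined above at a few points.
import numpy as np

def gaussian_pdf(x, m, sigma):
    """N(m, sigma^2) density, sigma > 0."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def uniform_pdf(x, a, b):
    """Uniform density: 1/(b-a) on (a, b], zero elsewhere."""
    return np.where((x > a) & (x <= b), 1.0 / (b - a), 0.0)

x = np.linspace(-3.0, 3.0, 7)
print(gaussian_pdf(x, m=0.0, sigma=1.0))
print(uniform_pdf(x, a=-1.0, b=1.0))
```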



Random Variable: Summary

A random variable is a mapping which assigns outcomes of a random experiment to real numbers.

The occurrence of the outcome follows a certain probability distribution. Therefore, a random variable is completely characterized by its probability density function (PDF).

Ref.: http://www.rslabntu.net/Random_Processes/Chapter1.pdf

Conditional Probability (Maybeck, 1979)

Suppose we have two continuous random variables X and Y mapping from a sample space $\Omega$ into $R^n$ and $R^m$, respectively.

Problem: to find the conditional density of X (event A) conditioned on the knowledge that the random variable Y has assumed the realization y: $Y(\omega) = y$ (i.e. event B has occurred).

Conditional Probability Densities

The conditional probability that $X(\omega)$ lies in set A, conditioned on the fact that $Y(\omega)$ lies in set B, is

$P(X(\omega) \in A \mid Y(\omega) \in B) = \frac{P(X(\omega) \in A \text{ and } Y(\omega) \in B)}{P(Y(\omega) \in B)}$

The conditional probability density is defined as

$f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{f_{Y|X}(y \mid x)\, f_X(x)}{f_Y(y)}$  (Bayes' rule)

where

$f_Y(y) = \int f_{X,Y}(x, y)\, dx = \int f_{Y|X}(y \mid x)\, f_X(x)\, dx$

Note that the integrand in the denominator is of the same form as the numerator.

Independent Random Variables

Through conditional probabilities and densities, we specify interrelationships among random variables.

If two continuous RVs, X and Y, are independent, then

$f_{X,Y}(x, y) = f_X(x) f_Y(y)$, $F_{X,Y}(x, y) = F_X(x) F_Y(y)$, $P(X(\omega) \in A \text{ and } Y(\omega) \in B) = P(X(\omega) \in A)\, P(Y(\omega) \in B)$

Note: if the occurrence of event A implies that B did not occur and vice versa, then A and B are mutually exclusive. If P(A and B) = P(A) P(B), then A and B are independent events.

Functions of Random Variables

Let X be a vector random variable that maps the sample space $\Omega$ into n-dimensional Euclidean space $R^n$. Now consider a continuous mapping $\phi(\cdot) : R^n \to R^m$ and define $Y(\omega) = \phi(X(\omega))$; then $Y(\omega)$ is itself a random variable.

The random variable Y has a distribution

$F_Y(y) = P(\{\omega : Y(\omega) \le y\}) = P(\{\omega : \phi[X(\omega)] \le y\})$

induced by the distribution of X.

The induced probability density function, if it exists, is given by

$f_Y(y) = \frac{\partial^m F_Y(y)}{\partial y_1 \partial y_2 \cdots \partial y_m}$

Expectations and Moments


The expectation of the random variable function $Y(\omega) = \phi(X(\omega))$ is defined as

$E[Y] = \int_{R^n} \phi(x) f_X(x)\, dx = \int y f_Y(y)\, dy$

If the density function $f_X(x)$ does not exist, then E[Y] can still be defined as

$E[Y] = \int_\Omega \phi(x(\omega))\, dP(\omega) = \int \phi(x)\, dF_X(x)$

The first moment of X (or mean) is defined by choosing $\phi(X) = X$:

$m = E[X] = \int_{R^n} x f_X(x)\, dx$


Moments of RV (Maybeck, 1979)


Define a matrix, denoted as $\Psi$, as the n-by-n matrix whose (i, j) component is the correlation of $x_i$ and $x_j$ (and thus the diagonal terms are autocorrelations, or mean squared values, the square roots of which are termed root mean squared, or RMS, values).

This matrix is the second (non-central) moment of X, or the autocorrelation matrix of X, and can be written as

$\Psi = E[X X^T]$

Moments of RV (Maybeck, 1979)


Now consider another function, $\phi(X) = (X - m)(X - m)^T$. This function allows us to define an n-by-n matrix P whose (i, j) component is the covariance of $x_i$ and $x_j$:

$P_{ij} = E[(x_i - m_i)(x_j - m_j)]$

Note specifically that the mean values are not random variables, but are statistics: deterministic numbers.

Moments of RV (Maybeck, 1979)

The matrix P is the second central moment of X, or the covariance matrix of X, and can be written as

$P = E[(X - m)(X - m)^T]$

The matrix P is symmetric and positive semi-definite (its eigenvalues are nonnegative). The variances of the individual components of X are along the diagonal.

The (positive) square root of a variance $P_{ii}$ is termed the standard deviation of $x_i$, and is denoted as $\sigma_i$.


Moments of RV (Maybeck, 1979)


If the correlation coefficient $r_{ij} = P_{ij}/(\sigma_i \sigma_j)$ is zero, then the components $x_i$ and $x_j$ are said to be uncorrelated. Consequently, if P is diagonal, i.e., if $r_{ij} = 0$ for all i and j with $i \ne j$, then X is said to be composed of uncorrelated components.

Relationship between the central and non-central second moments:

$P = \Psi - m m^T$

Cross Correlation Matrix

Second moment relationship between two random vectors X and Y: let X be an n-dimensional random vector and Y an m-dimensional random vector. Then the cross-correlation matrix of X and Y is the n-by-m matrix whose (i, j) component is $E[x_i y_j]$.

This matrix is then expressed notationally as $\Psi_{XY} = E[X Y^T]$.

Cross Covariance Matrix

Similarly, the second central moment generalizes to the cross-covariance matrix between X and Y:

$P_{XY} = E[(X - m_X)(Y - m_Y)^T]$

Two random vectors X and Y are termed uncorrelated if their correlation matrix is equal to the outer product of their first-order moments, i.e., if $E[X Y^T] = m_X m_Y^T$, or equivalently $P_{XY} = [0]$.

Uncorrelatedness and Independence

Uncorrelatedness: a condition under which generalized second moments can be expressed as products of first-order moments.

Independence: a condition under which the entire joint distribution or density function can be expressed as a product of marginal functions.

If X and Y are independent, then they are uncorrelated, but not necessarily vice versa.

Orthogonality: two random vectors X and Y are termed orthogonal if their correlation matrix is the null matrix.

Correlatedness and Independence


Consider two RVs defined as $y_1 = \sin(2\pi x)$ and $y_2 = \cos(2\pi x)$, where x is uniformly distributed over [0, 1]. It can be shown that $E(y_1) = E(y_2) = E(y_1 y_2) = 0$. Thus, $\mathrm{cov}(y_1, y_2) = 0$ and $y_1$ and $y_2$ are uncorrelated.

However, $(y_1)^2 + (y_2)^2 = 1$. Hence, $y_1$ and $y_2$ are NOT independent. (Jazwinski, 1970)
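A quick numerical check of this example (a sketch; the sample size is arbitrary): the sample means and the sample cross-moment vanish, while the deterministic constraint holds exactly.

```python
# y1 = sin(2*pi*x), y2 = cos(2*pi*x) with x ~ Uniform[0, 1]:
# uncorrelated (zero cross-moment) but functionally dependent.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)
y1, y2 = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)

print(y1.mean(), y2.mean(), (y1 * y2).mean())   # all approximately 0
print(np.allclose(y1**2 + y2**2, 1.0))          # True: y1, y2 not independent
```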

Conditional Expectations
Let X and Y be two random vectors mapping $\Omega$ into $R^n$ and $R^m$, respectively. Let Z be a continuous function of X, i.e. $Z(\omega) = \phi(X(\omega))$.

Then the conditional expected value, or conditional mean, of Z, conditioned on the fact that Y has assumed the realization $y \in R^m$, i.e. $Y(\omega) = y$, is

$E[Z \mid Y = y] = \int_{R^n} \phi(x)\, f_{X|Y}(x \mid y)\, dx$

Conditional Mean and Covariance


Choosing $Z = \phi(X) = X$, the conditional mean of X, assuming that $Y(\omega) = y$, is

$E[X \mid Y = y] = \int_{R^n} x\, f_{X|Y}(x \mid y)\, dx$

The conditional covariance of X, assuming that $Y(\omega) = y$, is defined as

$\mathrm{Cov}[X \mid Y = y] = E\{(X - E[X \mid Y = y])(X - E[X \mid Y = y])^T \mid Y = y\}$

Conditional Mean and Covariance

Let X and Y be jointly distributed random vectors. The conditional density $f_{X|Y}(x \mid y)$ depends on the realization of Y; as a result, $f_{X|Y}(x \mid y)$ is itself a random variable, a function of the random vector Y.

Similarly, the conditional expectation $E\{X \mid Y\}$ is also a random vector and a function of the random vector Y, and the conditional covariance is a random matrix.

Note:

$f_X(x) = E\{f_{X|Y}(x \mid y)\} = \int f_{X|Y}(x \mid y)\, f_Y(y)\, dy$

$E\{X\} = E\{E\{X \mid Y\}\}$

Multivariate Gaussian Distributions

Why study the multivariate Gaussian distribution?

From the central limit theorem, it follows that the sum of many independent and identically distributed random variables can be well approximated by a Gaussian distribution. If unknown disturbances are assumed to arise from many independent physical sources, then the Gaussian distribution is appropriate for modeling their behavior.

Attractive mathematical properties: linear transformations of Gaussian distributions are still Gaussian distributed, and for Gaussian distributed random variables, optimal estimates have a simple form.

Multivariate Gaussian Distribution


Consider a random variable $X \in R^n$. Let $m \in R^n$ represent the mean of X and P represent its positive definite covariance matrix. Then

$f_X(x) = \mathcal{N}(m, P) \equiv \frac{1}{(2\pi)^{n/2}\sqrt{\det(P)}} \exp\left(-\frac{1}{2}(x - m)^T P^{-1}(x - m)\right)$

The distribution is characterized completely by the mean (m) and the covariance (P).

Linear transformations of Gaussian random variables are also Gaussian random variables: if $x \sim \mathcal{N}(m, P)$ is a random vector, A is a known $(r \times n)$ matrix, and b is an $(r \times 1)$ vector, then $z = Ax + b$ is also Gaussian distributed, with $z \sim \mathcal{N}(Am + b, A P A^T)$.
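A sampling-based sketch of this property (the particular m, P, A, and b below are illustrative assumptions):

```python
# Verify z = A x + b ~ N(A m + b, A P A^T) by drawing Gaussian samples.
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -2.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])                  # r x n with r = 3, n = 2
b = np.array([0.5, 0.0, -1.0])

x = rng.multivariate_normal(m, P, size=500_000)
z = x @ A.T + b

print(z.mean(axis=0), A @ m + b)             # sample mean vs. A m + b
print(np.cov(z.T), A @ P @ A.T, sep="\n")    # sample covariance vs. A P A^T
```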

Multivariate Gaussian Distribution


Consider two random vectors x and z. If x and z are uncorrelated and jointly Gaussian, then x and z are independent.

[Figure: a two-dimensional Gaussian random vector.]

2-dimensional Gaussian PDF (Maybeck, 1979)

[Figure: two-dimensional Gaussian probability density function.]

Gaussian Random Variables


Linear combinations of jointly Gaussian random vectors are also Gaussian random vectors.


Gaussian Random Variables


Any portion of a Gaussian random vector is itself Gaussian


Conditional Gaussian density


Let X and Y be two jointly Gaussian vectors mapping $\Omega$ into $R^n$ and $R^m$, respectively.

The conditional density of X given Y can be obtained by applying Bayes' theorem:

$f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}$

Conditional Gaussian density

Performing algebraic manipulations yields the conditional mean of X, assuming that $Y(\omega) = y$:

$E[X \mid Y = y] = m_X + P_{XY} P_{YY}^{-1} (y - m_Y)$

Conditional Gaussian Density


The conditional covariance is

$\mathrm{Cov}[X \mid Y = y] = P_{XX} - P_{XY} P_{YY}^{-1} P_{YX}$

Further, it can be shown that $[X - E(X \mid Y = y)]$ and Y are uncorrelated and Gaussian, and therefore independent.

Later, when state estimation is discussed, the conditional Gaussian density will be of primary interest.

Conditional Gaussian Density


This result follows from the fact that the transformed variable $W = X - P_{XY} P_{YY}^{-1} Y$ and Y are uncorrelated, i.e.

$E\{W Y^T\} = E\{(X - P_{XY} P_{YY}^{-1} Y) Y^T\} = E\{X Y^T\} - P_{XY} P_{YY}^{-1} E\{Y Y^T\} = P_{XY} - P_{XY} P_{YY}^{-1} P_{YY} = [0]$

Moreover, W is normally distributed with

$E\{W\} = m_X - P_{XY} P_{YY}^{-1} m_Y, \quad E\{W W^T\} = P_{XX} - P_{XY} P_{YY}^{-1} P_{YX}$

Hence, $W = X - P_{XY} P_{YY}^{-1} Y$ and Y are independent.

Stochastic Process
Foundations of the theory were laid by the Russian mathematician Kolmogorov around 1930.

Let $\Omega$ be a fundamental sample space and T be a subset of the real line denoting a time set of interest. Then a stochastic process can be defined as a real-valued function $X(t, \omega)$ defined on the product space $T \times \Omega$ such that for any fixed $t^* \in T$, $X(t^*, \omega)$ is a random variable.

If we fix the second argument instead of the first, then to each point $\omega = \omega^*$ there can be associated a time function $X(t, \omega^*) = X(t)$, each of which is a sample from the stochastic process.

Stochastic Process

Note: the domain of $\omega$ is the set of all experimental outcomes.

A discrete stochastic process can be regarded as a family of random variables $\{X(k, \omega) : k = \ldots, -1, 0, 1, \ldots\}$. The index k represents sampling instants in the context of control systems.

A random process may be considered as a function of two variables, $X(k, \omega)$.

For a fixed $\omega = \omega^*$, the function $X(\cdot, \omega^*)$ is an ordinary time function called a 'realization' of the stochastic process. For a fixed $k = k^*$, $X(k^*, \cdot)$ is a random variable.

Discrete Stochastic Process

[Figure: realizations of a discrete stochastic process.]

Example: Discrete Random Process

Random walk process: suppose at time instants k = 0, 1, 2, ... we toss a coin. If heads comes up, we take a step $+\Delta x$ forward; if tails comes up, we take a step $-\Delta x$ backward ($\Delta x \ge 0$).

Let $\Omega = \{H, T\}$, $P(H) = p$, $P(T) = q$ with $p + q = 1$. For each time instant k we define a RV

$d(k, \omega) = \begin{cases} +\Delta x & \omega = H \\ -\Delta x & \omega = T \end{cases}$

so that $P\{d(k, \omega) = +\Delta x\} = p$ and $P\{d(k, \omega) = -\Delta x\} = q$.

Let us define a stochastic process $x(k, \omega)$ as the distance from the origin at instant k, and let $x(0, \omega) = 0$.

Example: Discrete Random Process


$x(k, \omega)$ at any instant k is given by

$x(k, \omega) = \sum_{j=0}^{k-1} d(j, \omega), \quad k = 0, 1, \ldots$

and is governed by the difference equation

$x(k, \omega) = x(k-1, \omega) + d(k, \omega)$ (random walk model)

Each realization is a sequence of real numbers that are integer multiples of $\Delta x$, $k = 0, 1, 2, \ldots$
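A minimal simulation of this random walk model (p, the step size, and the horizon are illustrative choices):

```python
# Random walk: d(k) = +dx with probability p, -dx with probability q = 1 - p;
# x(k) = x(k-1) + d(k) with x(0) = 0, i.e. a cumulative sum of the steps.
import numpy as np

rng = np.random.default_rng(3)
p, dx, n_steps, n_paths = 0.5, 1.0, 50, 3

d = np.where(rng.random((n_paths, n_steps)) < p, dx, -dx)   # coin tosses
x = np.cumsum(d, axis=1)                                    # realizations
print(x[:, -1])   # final distance from the origin for each realization
```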

Example: Discrete Random Process

Note: the sample (probability) space $\Omega'$ on which the random walk is defined is the space of sequences of the form $\omega' = HTTHHHT\ldots$

The figure on the previous slide shows one realization of the random walk process. Another element from $\Omega'$, say $\omega'' = TTHTTTHHTHHH\ldots$, will yield another realization of the random walk process.

Example: Temperature Measurements

Consider temperature measurements obtained at one-minute intervals from a water storage tank using a noisy sensor. The true temperature of the water in the tank changes during the day as

$T(t) = \bar{T} + A \sin(\omega t)$

where t represents time. Measurements of the tank temperature are collected in a computer through a sampling device at a sampling interval of T = 1 min. These measurements are corrupted with measurement noise w(k), k = 0, 1, 2, ..., each of which has Gaussian distribution $N(0, \sigma^2)$. Then

$T_m(k) = \bar{T} + A \sin(\omega k T) + w(k)$

can be viewed as a continuous-discrete random process.
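A sketch simulating one realization of this measurement process (the numerical values of the mean temperature, amplitude, frequency, and noise level are assumptions for illustration):

```python
# Tm(k) = Tbar + A*sin(omega*k*T) + w(k), w(k) ~ N(0, sigma^2), T = 1 min.
import numpy as np

rng = np.random.default_rng(4)
Tbar, A, sigma, T = 27.0, 2.0, 0.5, 1.0
omega = 2 * np.pi / (24 * 60)              # one full cycle per day (rad/min)
k = np.arange(24 * 60)                     # one day of 1-minute samples
Tm = Tbar + A * np.sin(omega * k * T) + rng.normal(0.0, sigma, size=k.size)
print(Tm[:5])                              # first few noisy measurements
```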

Discrete Time Stochastic Processes

[Figure: heater-mixer experimental setup, cold water temperature.]

Examples of processes modeled as stochastic time series:
1. Stock market and exchange rate fluctuations
2. Signals such as speech, audio, and video
3. Medical data such as a patient's EKG, EEG, blood pressure, or temperature
4. Random movement such as Brownian motion or random walks

Continuous Stochastic Process

[Figure: realizations of a continuous stochastic process.]

Ref.: http://www.rslabntu.net/Random_Processes/Chapter1.pdf
Realizations of a Stochastic Process

A random process is a family, or an ensemble, of functions $x(t, \omega_i)$.

[Figure: realizations of a stochastic process.]

Example: Satellite Altitude

Consider a satellite in a periodic orbit around the earth. The altitude of the satellite as a function of time over an orbit can be modeled as a stochastic process.

Classification of Random Processes

[Figure: classification of random processes.]

PDF of Stochastic Processes

If T is of the discrete form of a finite sequence of N points along the real line, the set of RVs can be characterized by the joint probability distribution function

$F_{X(t_1), \ldots, X(t_N)}(x_1, \ldots, x_N) = P(\{X(t_1) \le x_1, \ldots, X(t_N) \le x_N\})$

or the joint density function (if it exists).

[Slide equations: joint distribution functions and joint density functions for the process at multiple time points.]

Moments of Random Process

Rather than trying to generate explicit relations for joint distributions (or densities, if they exist), it is often convenient, especially computationally, to describe these functions to some degree by the associated first two moments.

The first two moments of a random process are the mean $m(t) = E\{X(t)\}$, the correlation kernel $E\{X(t_1) X(t_2)^T\}$, and the covariance kernel $P(t_1, t_2) = E\{(X(t_1) - m(t_1))(X(t_2) - m(t_2))^T\}$.


Stationary Stochastic Process

A process is strictly (strict-sense) stationary if all of its joint distribution functions are invariant to a shift of the time origin, and wide-sense stationary if its mean is constant and its covariance kernel depends only on the time difference. In general, a strictly stationary process is wide-sense stationary if and only if it has finite second moments, and wide-sense stationarity does not imply strict-sense stationarity. However, in the special case of Gaussian processes, a wide-sense stationary process is strict-sense stationary as well.

Gaussian or Normal Process

Each variable x(k) has a Gaussian distribution for k = 0, 1, 2, ..., and the joint distribution of any subset $x(k_1), x(k_2), \ldots, x(k_n)$ at finite time points is jointly Gaussian for all n and for all possible subsets.

A Gaussian process is completely characterized by its mean $m(k_i) = E\{x(k_i)\}$ and its covariance kernel

$P_{XX}(k_i, k_j) = E\{(x(k_i) - m(k_i))(x(k_j) - m(k_j))^T\}$

Gaussian Process Example


Example: consider the linear difference equation

$x(k+1) = \Phi x(k) + w(k)$

where $\{w(k)\}$ is a zero-mean Gaussian white noise process such that $w(k) \sim N(0, Q)$, $x(0) \sim N(m_0, P_0)$, and x(0) is uncorrelated with w(k).

Using properties of Gaussian random vectors, the process $\{x(k) : k = 0, 1, \ldots\}$ is a Gaussian process, since $x(1) = \Phi x(0) + w(0)$, $x(2) = \Phi x(1) + w(1)$, ... are all Gaussian random variables. (Linear combinations of Gaussian RVs are Gaussian RVs.)

Gaussian Process Example


Mean propagation: $E[x(k+1)] = E[\Phi x(k) + w(k)] = \Phi E[x(k)]$, i.e. the mean evolves as $m(k+1) = \Phi m(k)$.

Covariance computation: since $x(k+1) - m(k+1) = \Phi[x(k) - m(k)] + w(k)$,

$E\{(x(k+1) - m(k+1))(x(k+1) - m(k+1))^T\} = \Phi E\{(x(k) - m(k))(x(k) - m(k))^T\}\Phi^T + E\{w(k) w(k)^T\}$

or $P(k+1) = \Phi P(k) \Phi^T + Q$.

This follows from $E\{(x(k) - m(k))(w(k))^T\} = [0]$.

Gaussian Process Example


Thus, $\{x(k)\}$ is a Gaussian random process characterized completely by its mean m(k) and covariance P(k):

$f_{X(k)}[x(k)] = \mathcal{N}(m(k), P(k)) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(P(k))}} \exp\left(-\frac{1}{2}(x(k) - m(k))^T P(k)^{-1}(x(k) - m(k))\right)$

The mean m(k) and covariance P(k) are governed by the following difference equations:

$m(k+1) = \Phi m(k), \quad P(k+1) = \Phi P(k) \Phi^T + Q$
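A sketch propagating these two difference equations (the stable Φ, Q, m(0), and P(0) below are illustrative assumptions):

```python
# Propagate m(k+1) = Phi m(k) and P(k+1) = Phi P(k) Phi^T + Q.
import numpy as np

Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Q = 0.01 * np.eye(2)
m, P = np.array([1.0, 0.0]), np.eye(2)       # m(0) and P(0)

for k in range(100):
    m = Phi @ m
    P = Phi @ P @ Phi.T + Q

print(m)   # mean decays toward zero for a stable Phi
print(P)   # covariance approaches its steady-state value
```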

Gaussian Process Example


Moreover, x(k+1) and x(k) are jointly Gaussian, with

$E\begin{bmatrix} x(k+1) \\ x(k) \end{bmatrix} = \begin{bmatrix} m(k+1) \\ m(k) \end{bmatrix}, \quad \mathrm{Cov}\begin{bmatrix} x(k+1) \\ x(k) \end{bmatrix} = \begin{bmatrix} P(k+1) & \Phi P(k) \\ P(k)\Phi^T & P(k) \end{bmatrix}$

In general, x(k+p) and x(k) are jointly Gaussian, with

$E\begin{bmatrix} x(k+p) \\ x(k) \end{bmatrix} = \begin{bmatrix} m(k+p) \\ m(k) \end{bmatrix}, \quad \mathrm{Cov}\begin{bmatrix} x(k+p) \\ x(k) \end{bmatrix} = \begin{bmatrix} P(k+p) & \Phi^p P(k) \\ P(k)(\Phi^p)^T & P(k) \end{bmatrix}$

Gaussian Process Example


In particular, consider the case p = 3:

$x(k+3) = \Phi^3 x(k) + \Phi^2 w(k) + \Phi w(k+1) + w(k+2), \quad m(k+3) = \Phi^3 m(k)$

Combining,

$[x(k+3) - m(k+3)] = \Phi^3 [x(k) - m(k)] + \Phi^2 w(k) + \Phi w(k+1) + w(k+2)$

Post-multiplying both sides of the above equation by $[x(k) - m(k)]^T$ and taking expectations,

$E\{[x(k+3) - m(k+3)][x(k) - m(k)]^T\} = \Phi^3 P(k)$

This follows from the fact that $E\{[w(k+j)][x(k) - m(k)]^T\} = [0]$ for j = 0, 1, 2.

Markov Process

In deterministic systems, the state vector contains all information about the history of the system that has an impact on its future behavior.

Consider the differential equation

$\frac{dx}{dt} = F(x(t))$

which says that the rate of change of x(t) is a function of x at t (now) and not of $x(\tau)$ where $\tau < t$. As a consequence, the solution of the ODE at $t_2 > t_1$, i.e.

$x(t_2) = \phi(t_2; x(t_1), t_1)$

is a function of $x(t_1)$ and does not depend on $x(\tau)$, $\tau < t_1$.

To extend this concept to stochastic systems, it becomes necessary to introduce the concept of a Markov process.

Markov Process
Let $k_1 < k_2 < k_3 < \ldots < k_n < k$. The stochastic process $\{x(k, \omega)\}$ is called a Markov process if for any such set of time points

$P(x(k, \omega) \mid x(k_1), \ldots, x(k_n)) = P(x(k, \omega) \mid x(k_n))$

A continuous-parameter stochastic process $\{x(k)\}$ is called a Markov process if the conditional pdf satisfies

$f(x(k) \mid x(k_1), \ldots, x(k_n)) = f(x(k) \mid x(k_n))$

Thus, if several old values are known, only the most recent one has influence on the future behavior. If the process takes only a finite number of values, then it is called a Markov chain.

Markov Process

Consider the joint density function for a Markov process:

$f(x(k_1), \ldots, x(k_n)) = f(x(k_n) \mid x(k_1), \ldots, x(k_{n-1}))\, f(x(k_1), \ldots, x(k_{n-1})) = f(x(k_n) \mid x(k_{n-1}))\, f(x(k_1), \ldots, x(k_{n-1}))$

where the second equality follows from the Markov property. Continuing this process, we get

$f(x(k_1), \ldots, x(k_n)) = f(x(k_n) \mid x(k_{n-1}))\, f(x(k_{n-1}) \mid x(k_{n-2})) \cdots f(x(k_2) \mid x(k_1))\, f(x(k_1))$

Thus, a Markov process can be completely characterized by specifying the transition probability densities $f(x(k_j) \mid x(k_{j-1}))$ and $f(x(k_1))$.

Gauss Markov Process


Example 1: consider the linear difference equation

$x(k+1) = \Phi x(k) + w(k)$

where $\{w(k)\}$ is a zero-mean Gaussian white noise process such that $w(k) \sim N(0, Q)$, $x(0) \sim N(m_0, P_0)$, and x(0) is uncorrelated with w(k).

The conditional mean and covariance of x(k+1) given the vector x(k) are

$E[x(k+1) \mid x(k)] = \Phi x(k), \quad \mathrm{Cov}[x(k+1) \mid x(k)] = E\{w(k) w(k)^T\} = Q$

Thus $p[x(k+1) \mid x(k)]$ depends only on x(k).

Gauss Markov Process


Thus, we have $f(x(j+1) \mid x(j)) \equiv N(\Phi x(j), Q)$, and it follows that

$f(x(k), x(k-1), \ldots, x(0)) = N(\Phi x(k-1), Q)\, N(\Phi x(k-2), Q) \cdots N(\Phi x(0), Q)\, N(m_0, P_0)$

Note that

$f_{w(k)}[w(k)] = \frac{1}{(2\pi)^{n/2}\sqrt{\det(Q)}} \exp\left(-\frac{1}{2} w(k)^T Q^{-1} w(k)\right)$

and

$f(x(j+1) \mid x(j)) \propto \exp\left(-\frac{1}{2}(x(j+1) - \Phi x(j))^T Q^{-1} (x(j+1) - \Phi x(j))\right)$

Gauss Markov Process


Note that $f(x(j+1) \mid x(j)) = f_{w(k)}[(x(j+1) - \Phi x(j))]$, i.e. the RHS is the density of the RV w(k) evaluated at $w(k) = x(j+1) - \Phi x(j)$. This implies that the joint pdf of $X_k = \{x(k), x(k-1), \ldots, x(0)\}$ is given as

$f(X_k) \propto N(m_0, P_0) \prod_{j=0}^{k-1} \exp\left(-\frac{1}{2}(x(j+1) - \Phi x(j))^T Q^{-1} (x(j+1) - \Phi x(j))\right)$

Gauss Markov Process


Example 2: consider the nonlinear difference equation

$x(k+1) = F[x(k)] + w(k)$

where $\{w(k)\}$ is a zero-mean Gaussian white noise process such that $w(k) \sim N(0, Q)$, $x(0) \sim N(m_0, P_0)$, x(0) is uncorrelated with w(k), and F[x] is a general nonlinear function vector.

The conditional mean and covariance of x(k+1) given the vector x(k) are

$E[x(k+1) \mid x(k)] = E[F[x(k)] \mid x(k)] = F[x(k)], \quad \mathrm{Cov}[x(k+1) \mid x(k)] = E\{w(k) w(k)^T\} = Q$

Thus $p[x(k+1) \mid x(k)]$ depends only on x(k).

Markov Process
Thus, we have $f(x(j+1) \mid x(j)) \equiv N(F[x(j)], Q)$, and it follows that

$f(x(k), x(k-1), \ldots, x(0)) = N(F[x(k-1)], Q)\, N(F[x(k-2)], Q) \cdots N(F[x(0)], Q)\, N(m_0, P_0)$

Thus, the joint pdf of $X_k = \{x(k), x(k-1), \ldots, x(0)\}$ is given as

$f(X_k) \propto N(m_0, P_0) \prod_{j=0}^{k-1} \exp\left(-\frac{1}{2}(x(j+1) - F[x(j)])^T Q^{-1} (x(j+1) - F[x(j)])\right)$

Discrete Time White Noise

Consider a random process {e(k): k = 1, 2, ...} such that the e(k) are independent and identically distributed random vectors: a collection of uncorrelated random variables such that mean{e(k)} = 0 and variance{e(k)} = $\sigma^2$.

Such a sequence is called discrete time white noise. It is the simplest random process.

Gaussian white noise: the e(k) are independent and identically distributed (iid) Gaussian or normal random variables.

White Noise
A white random sequence {e(k)} is a Markov sequence for which

$f(e(k) \mid e(n)) = f(e(k))$ for $k > n$

i.e. all e(k) are mutually independent. As a consequence, knowing the realization of e(n) in no manner helps in predicting e(k). A white noise sequence is completely random, or totally unpredictable.

A white noise source is often used for modeling time-correlated stochastic signals.

Example: Discrete Time White Noise

Consider a scalar discrete time zero-mean white noise process {e(k): k = 1, 2, ...} with standard deviation $\sigma$. Since e(k) and e(j) are uncorrelated when $j \ne k$, it follows that

$E[e(k) e(j)] = 0 \quad \text{when } j \ne k$

Thus, the auto-covariance of the white noise process is

$E[e(k) e(k+\tau)] = \begin{cases} \sigma^2 & \tau = 0 \\ 0 & \tau = \pm 1, \pm 2, \ldots \end{cases}$
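A quick sketch estimating this auto-covariance from a simulated realization (σ = 1 assumed):

```python
# Sample auto-covariance of a zero-mean Gaussian white sequence at small lags.
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
e = rng.normal(0.0, 1.0, size=N)

for tau in range(4):
    r = np.mean(e[:N - tau] * e[tau:])
    print(tau, r)        # ~1.0 at tau = 0, ~0.0 for tau = 1, 2, 3
```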



Example: Gaussian White Noise Process

[Figure: three realizations of a zero-mean white noise process with std = 1, plotted as value vs. time instant (k).]

Example: White Noise


Experimental data: a temperature sensor kept in a beaker containing water at room temperature.

[Figure: measured temperature (deg C) and its mean value vs. sampling instant. Mean = 27.33 °C, std. dev. = 0.633 °C.]

Measurement Errors: Histogram


Estimated errors: e(k) = T(k) - Tmean.

[Figure: histogram of measurement errors, no. of samples vs. measurement error.]

Is this a white noise?

Estimated Autocorrelation Function


[Figure: estimated autocorrelation function of the temperature data vs. lag.]

Measurement errors can be modeled as a Gaussian white noise!

Measurement Errors: Estimated pdf


[Figure: Gaussian density function estimated from the temperature data, probability density vs. deviation temperature. Estimated mean = -2.3e-15, estimated std. dev. = 0.633.]

Moving Average Process


Consider a Gaussian white noise process {e(k): k = 1, 2, ...}.

We can introduce a moving average that smoothes the series,

v(k) = (1/3) { e(k-1) + e(k) + e(k+1) }

{v(k): k = 1, 2, ...} is called a moving average (MA) process.

We can introduce an auto-regressive series,

x(k) = x(k-1) - 0.9 x(k-2) + e(k)

{x(k): k = 1, 2, ...} is called an auto-regressive (AR) process, capable of capturing quasi-periodic behavior in data. Both are simulated in the sketch below.
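A minimal sketch generating both series from one white noise realization (unit-variance noise and zero initial conditions are assumptions):

```python
# Simulate the MA series v(k) and the AR series x(k) defined above.
import numpy as np

rng = np.random.default_rng(6)
N = 200
e = rng.normal(0.0, 1.0, size=N)             # Gaussian white noise e(k)

# Moving average: v(k) = (1/3)(e(k-1) + e(k) + e(k+1))
v = (e[:-2] + e[1:-1] + e[2:]) / 3.0

# Auto-regressive: x(k) = x(k-1) - 0.9 x(k-2) + e(k), started from zero
x = np.zeros(N)
for k in range(N):
    xm1 = x[k - 1] if k >= 1 else 0.0        # x(k-1)
    xm2 = x[k - 2] if k >= 2 else 0.0        # x(k-2)
    x[k] = xm1 - 0.9 * xm2 + e[k]

print(v[:5])
print(x[:5])                                 # note the quasi-periodic swings
```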

Example: Moving Average Process


[Figure: white noise e(k) and moving average process v(k) vs. time instant (k).]

Example: Auto-Regressive Process


[Figure: white noise e(k) and auto-regressive process x(k) vs. time instant (k).]

Mean: MA and AR processes

MA and AR are examples of stationary random processes.

Example 1: Moving average process, v(k) = (1/3){e(k-1) + e(k) + e(k+1)}:

E{v(k)} = (1/3)[E{e(k-1)} + E{e(k)} + E{e(k+1)}] = 0

Example 2: Auto-regressive process, x(k) = x(k-1) - 0.9 x(k-2) + e(k):

E{x(k)} = E{x(k-1)} - 0.9 E{x(k-2)} + E{e(k)}

Let $\mu$ = E{x(k)} = E{x(k-1)} = E{x(k-2)}. Then $0.9\mu = E\{e(k)\} = 0$, so $\mu = 0$.

Covariance Kernel: MA process

Consider the moving average process v(k) = (1/3){e(k-1) + e(k) + e(k+1)}:

$r_{vv}(k, t) = \mathrm{Cov}[v(k), v(t)] = E\{[v(k) - 0][v(t) - 0]\} = \frac{1}{9} E\{[e(k-1) + e(k) + e(k+1)][e(t-1) + e(t) + e(t+1)]\}$

For k = t:

$r_{vv}(k, k) = \frac{1}{9} E\{[e(k-1) + e(k) + e(k+1)][e(k-1) + e(k) + e(k+1)]\} = \frac{1}{9}\left[E\{e(k-1)^2\} + E\{e(k)^2\} + E\{e(k+1)^2\}\right] = \frac{1}{3}\sigma^2$

$r_{vv}(k+1, k) = \frac{1}{9} E\{[e(k) + e(k+1) + e(k+2)][e(k-1) + e(k) + e(k+1)]\} = \frac{1}{9}\left[E\{e(k)^2\} + E\{e(k+1)^2\}\right] = \frac{2}{9}\sigma^2$

Other auto-covariance coefficients can be derived in a similar manner. Summary:

$r_{vv}(k, t) = \begin{cases} \sigma^2/3 & k = t \\ 2\sigma^2/9 & |k - t| = 1 \\ \sigma^2/9 & |k - t| = 2 \\ 0 & |k - t| \ge 3 \end{cases}$
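A sketch checking this summary table by simulation (with σ = 1, the printed values should approach 1/3, 2/9, 1/9, and 0):

```python
# Estimate r_vv at lags 0..3 for v(k) = (1/3)(e(k-1) + e(k) + e(k+1)).
import numpy as np

rng = np.random.default_rng(7)
e = rng.normal(0.0, 1.0, size=1_000_000)
v = (e[:-2] + e[1:-1] + e[2:]) / 3.0

for lag in range(4):
    print(lag, np.mean(v[:v.size - lag] * v[lag:]))
```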


Note: for the MA process, the auto-covariance depends only on the time separation (k - t), or lag, and not on the absolute location of the time points in the series.

Covariance Kernel: AR process


Consider the auto-regressive (AR) series x(k) = x(k-1) - 0.9 x(k-2) + e(k):

$r_{xe}(k, k) = \mathrm{Cov}[x(k), e(k)] = E\{[x(k) - 0][e(k) - 0]\} = E\{[x(k-1) - 0.9x(k-2) + e(k)][e(k)]\}$

Since $E\{[x(k-1)][e(k)]\} = E\{[x(k-2)][e(k)]\} = 0$,

$r_{xe}(k, k) = E\{[e(k)][e(k)]\} = \sigma^2$

Similarly,

$r_{xe}(k, k-2) = \mathrm{Cov}[x(k), e(k-2)] = E\{[x(k-1) - 0.9x(k-2) + e(k)][e(k-2)]\} = E\{[-0.9x(k-2)][e(k-2)]\} = -0.9\sigma^2$
Summary

Random variables and stochastic processes provide a framework for modeling unmeasured disturbances encountered in engineering systems. In particular, Gaussian random processes appear to be an attractive option for modeling due to their interesting properties. Markov processes arise in the study of stochastic difference equations (i.e. discrete time dynamical systems). (Note: all stochastic processes that we will encounter in the remainder of this course are Markov processes.)

References

Astrom, K. J. and B. Wittenmark, Computer Controlled Systems, Prentice Hall, 1994.
Astrom, K. J., An Introduction to Stochastic Control Theory, Dover, 1970.
Gelb, A., Applied Optimal Estimation, MIT Press, 1974.
Jazwinski, A. H., Stochastic Processes and Filtering Theory, Dover, 1970.
Maybeck, P. S., Stochastic Models, Estimation, and Control: Volume 1, Academic Press, 1979.
Papoulis, A., Probability, Random Variables and Stochastic Processes, McGraw Hill, 1991.
Soderstrom, T., Discrete Time Stochastic Systems, Springer, 2002.

(Highlighted books are easy to follow for a beginner.)
