
Probability–An Overview

B. Sainath
EEE Dept., BITS PILANI

Jan. 2018

B. Sainath (BITS, PILANI) Probability–An Overview Jan. 2018 1 / 29


Outline

1 Ordered Random Variables

2 Complex Random Variables

3 Random Vectors

4 Random Signals

5 Random Processes

6 Further Reading & References



Ordered RVs & Statistics
Applications in wireless communications (e.g., antenna selection, relay
selection)
E.g., Z = Max(X, Y)

Find pdf: Max(X1 , . . . , Xn ) and Min(X1 , . . . , Xn )


E.g., Min. of independent RVs

P(Min{X1, . . . , Xn} > y) = P(X1 > y, . . . , Xn > y)
                          = P(X1 > y) · · · P(Xn > y)

Exercise:
Derive the pdf of the Max. RV when Xi ∼ exp(1), i = 1, . . . , n
Find the mean value (MV), mean-square value (MSV), and variance
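The exercise above can be spot-checked numerically. A minimal Monte Carlo sketch in Python (not part of the slides; trial count and seed are arbitrary choices): for i.i.d. exp(1) RVs, P(Max ≤ x) = (1 − e^(−x))^n, so the Max pdf is n e^(−x)(1 − e^(−x))^(n−1) and its mean is the harmonic number H_n.

```python
import math
import random

def max_of_exponentials(n, trials=200_000, seed=1):
    """Monte Carlo estimate of E[Max(X1, ..., Xn)] for i.i.d. Xi ~ exp(1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Inverse-CDF sampling: X = -ln(1 - U) is exp(1) when U ~ Uniform[0, 1)
        total += max(-math.log(1.0 - rng.random()) for _ in range(n))
    return total / trials

# Theory: pdf of the Max is n e^{-x} (1 - e^{-x})^{n-1};
# its mean is H_n = 1 + 1/2 + ... + 1/n.
n = 5
H_n = sum(1.0 / k for k in range(1, n + 1))
estimate = max_of_exponentials(n)
```

The same sampler, with `min` in place of `max`, checks the Min result: Min of n i.i.d. exp(1) RVs is exp(n), mean 1/n.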
Complex RVs

Noise waveforms after demodulation to baseband are complex.

Complex RV: Z = ZRe + jZIm
Z is Gaussian if ZRe and ZIm are jointly Gaussian
MGF of Z if ZRe and ZIm are jointly standard Gaussian?
Magnitude-angle representation (MAR): R exp(jΘ)

R =?
Θ =?





Example Problem

Let Z = X + jY, where X ∼ N(0, σ²) and Y ∼ N(0, σ²) are independent RVs.

Its magnitude R = √(X² + Y²) and phase Θ = arctan(Y/X)

Find pdfs of R and Θ

Hint: Use the Cartesian-to-polar transformation
Resultant pdfs are ? and ? (prove)
Magnitude: Rayleigh distribution
Phase: Uniform distribution
Distribution of R²?
Exponential distribution
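These three claims can be sanity-checked by simulation. A sketch (not from the slides; sample count and seed are arbitrary): with X, Y ∼ N(0, σ²) independent, R² = X² + Y² should be exponential with mean 2σ², and Θ should be uniform, hence zero-mean.

```python
import math
import random

def rayleigh_phase_check(sigma=1.0, trials=100_000, seed=2):
    """Sample Z = X + jY with X, Y ~ N(0, sigma^2) i.i.d.;
    return (empirical mean of R^2, empirical mean of Theta)."""
    rng = random.Random(seed)
    sum_r2 = 0.0
    sum_theta = 0.0
    for _ in range(trials):
        x = rng.gauss(0.0, sigma)
        y = rng.gauss(0.0, sigma)
        sum_r2 += x * x + y * y            # R^2: exponential, mean 2*sigma^2
        sum_theta += math.atan2(y, x)      # Theta: uniform on (-pi, pi], mean 0
    return sum_r2 / trials, sum_theta / trials

m_r2, m_theta = rayleigh_phase_check()
```

A histogram of √(x² + y²) from the same samples would trace out the Rayleigh pdf (r/σ²) exp(−r²/2σ²).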





Gaussian Random Vectors

Standard Gaussian random vector


Collection of n i.i.d. standard Gaussian RVs X1, . . . , Xn
X = [X1, . . . , Xn]^t ∈ Rn
pdf of X?

PX(x) = (1/(2π)^(n/2)) exp(−‖x‖²/2)
Result: Let O be an orthogonal matrix (transformation). If X is std. Gaussian,
OX is also std. Gaussian (proof in class)
Qs:
How is ‖X‖² distributed? (Ans. χ-square RV with n degrees of freedom)
How is ‖X‖² = X1² + X2² distributed? (Ans. Exponential)
Y = AX + µ: Gaussian random vector
A: matrix representing a linear transformation
µ: mean vector
pdf of Y?
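The affine map Y = AX + µ can be checked empirically before deriving the pdf: the sample mean should converge to µ and the sample covariance to AA^t. A stdlib-only sketch (the particular A, µ, trial count, and seed are illustrative choices, not from the slides):

```python
import random

def sample_affine_gaussian(A, mu, trials=50_000, seed=3):
    """Draw Y = A X + mu with X a standard Gaussian vector;
    return the empirical mean and covariance of Y."""
    rng = random.Random(seed)
    n = len(mu)
    ys = []
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys.append([sum(A[i][j] * x[j] for j in range(n)) + mu[i]
                   for i in range(n)])
    mean = [sum(y[i] for y in ys) / trials for i in range(n)]
    cov = [[sum((y[i] - mean[i]) * (y[j] - mean[j]) for y in ys) / trials
            for j in range(n)] for i in range(n)]
    return mean, cov

A = [[1.0, 0.0], [0.5, 2.0]]
mu = [1.0, -1.0]
mean, cov = sample_affine_gaussian(A, mu)
# Theory: E[Y] = mu and Cov(Y) = A A^t = [[1.0, 0.5], [0.5, 4.25]]
```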



Further Transformation

For any c ∈ Rn , c t Y ∼?
A std. Gaussian random vector is also Gaussian
Mean: zero vector
Covariance matrix: In
Any linear combination of elements of a Gaussian random vector is also
Gaussian
If A is invertible, the Gaussian random vector is completely characterized by
Mean vector
Covariance matrix: Symmetric, non-negative definite
Symmetric n × n real matrix M is said to be non-negative definite if c t Mc is
non-negative for every non-zero column vector c ∈ Rn



SOME KEY POINTS

O: orthogonal matrix
Covariance matrices of AX and A(OX) are the same, = AA^t (show this)
White Gaussian random vector
Independent Gaussian RVs and covariance matrix diagonal
RVs are uncorrelated
What if A is not invertible?
Some components of AX are linearly dependent (LD)
Focus on components that are linearly independent (LI)



Complex Gaussian Random Vectors

Complex random vector: Z = ZRe + jZIm


Zre and ZIm are real random vectors
Distribution completely specified by mean and covariance matrix of real
random vector [Zre , ZIm ]t
Mean µ, Covariance matrix K, Pseudo-covariance J

µ ≜ E[Z]
K ≜ E[(Z − µ)(Z − µ)∗]
J ≜ E[(Z − µ)(Z − µ)^t]

Covariance matrix: Hermitian (K = K∗ )


Complete second-order statistics: n + n² + n² = n(2n + 1)
Note:
In wireless communication, we are interested in circularly symmetric
complex Gaussian random vectors



Circularly Symmetric Complex Gaussian (CSCG)

Result:
Z is a circularly symmetric complex random vector if e^(jθ)Z has the same
distribution as Z for any θ
Circularly symmetric complex random vector Z:
Mean zero (proof in class)
Pseudo-covariance zero (proof in class)
K fully specifies first- and second-order statistics: Z ∼ CN(0, K)
Special cases:
A scalar CSCG must have i.i.d. zero-mean Re. and Im. components
Statistics fully specified by σ², Z ∼ CN(0, σ²)



Density Function of Standard CSCG Random Vector

Standard CSCG RV Z
Z ∼ CN(0, 1), z ∈ C
Re. and Im. parts are RVs with variance 1/2

PZ(z) = (1/π) exp(−|z|²)
Standard CSCG random vector Z ∼ CN (0, I), z ∈ C n
Collection of n i.i.d. Z ∼ CN(0, 1)

PZ(z) = (1/π^n) exp(−‖z‖²)
Property: UZ has same distribution as Z (U: unitary matrix)
Q. Let A be an invertible matrix. pdf of X = AZ?
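A standard CSCG scalar is easy to sample and check against the two defining statistics above: E[|Z|²] = 1 (so each part has variance 1/2) and zero pseudo-covariance E[Z²] = 0. A sketch (trial count and seed are arbitrary):

```python
import random

def sample_cscg(trials=100_000, seed=4):
    """Standard CSCG samples Z ~ CN(0, 1): real and imaginary
    parts are independent N(0, 1/2)."""
    rng = random.Random(seed)
    s = 0.5 ** 0.5   # std dev of each part: sqrt(1/2)
    return [complex(rng.gauss(0.0, s), rng.gauss(0.0, s))
            for _ in range(trials)]

zs = sample_cscg()
mean_sq = sum(abs(z) ** 2 for z in zs) / len(zs)   # ~1: E|Z|^2
pseudo = sum(z * z for z in zs) / len(zs)          # ~0: pseudo-covariance E[Z^2]
```

Multiplying every sample by e^(jθ) for a fixed θ leaves both statistics unchanged, which is the circular-symmetry property in miniature.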





Random Signals or Waveforms

Deterministic signals modeled by mathematical formula


Not enough to model real-world phenomena
Random signals (due to random processes):
Modeled in probabilistic terms
E.g., speech, stock-market exchange rates, noise, interference, ECG, EEG

Figure: Types of waveforms



Random (stochastic) Processes (RPs)
RV: assignment of a real number X(ω) to each point ω of the sample space in
such a way that a CDF exists
Random process X(t) (evolves with time): assign a real time function
(called a sample function) X(ω, t) to each point ω

Figure: Simple random process. Source: WC & J [1965]

Probability space: Sample space, waveforms, probability measure


Collection of waveforms with probabilities defines an RV at each observed
time instant
Examples of Stochastic Processes

Gaussian processes, Brownian motion, Markov chains, Martingales ..


Specification of RP:
A rule is given or implied for determining the joint pdf of X(t) for any finite
set of observation instants (t1, . . . , tn)
Three methods of specifying RP
Direct way to find the joint pdf, e.g., a Gaussian process (discussed later)
Time function involving one or more parameters, e.g.,

Y (t) = R sin(2πt + Θ)

Sample functions: Y (ω, t) = R(ω) sin(2πt + Θ(ω)), ω ∈ Ω


To find PY(t), knowledge of PR,Θ is required
E.g., R, Θ as polar coordinates; Ω is the 2D plane
A possible joint pdf:

PR,Θ(r, θ) = (r/(2π)) exp(−r²/2) for 0 ≤ r < ∞, 0 ≤ θ < 2π; 0 otherwise
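Under this joint pdf, R is Rayleigh and Θ is uniform on [0, 2π), independent, so Y(t) = R sin(2πt + Θ) can be simulated directly. A sketch (trial count and seed are arbitrary): at every fixed t, the RV Y(t) turns out to be N(0, 1), which previews the "direct joint pdf" Gaussian-process specification.

```python
import math
import random

def sample_Y(t, trials=100_000, seed=5):
    """Samples of Y(t) = R sin(2*pi*t + Theta), where (R, Theta) has the
    joint pdf (r/2pi) exp(-r^2/2): R Rayleigh, Theta uniform on [0, 2pi)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        r = math.sqrt(-2.0 * math.log(1.0 - rng.random()))  # inverse-CDF Rayleigh
        theta = 2.0 * math.pi * rng.random()
        vals.append(r * math.sin(2.0 * math.pi * t + theta))
    return vals

ys = sample_Y(t=0.3)
mean = sum(ys) / len(ys)                 # ~0 at every t
var = sum(y * y for y in ys) / len(ys)   # ~1 at every t
```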



Third Method

Generate an ensemble of an RP by applying a stated operation to sample
functions of a known process
E.g., time translation: Y(ω, t) = X(ω, t + T), ∀ω ∈ Ω
Thus PY(t) = PX(t+T)
E.g., linear filtering:

Y(t) = ∫_{−∞}^{∞} h(t − τ) X(τ) dτ

h(t): impulse response

In general, finding PY(t) is difficult unless X(t) is a Gaussian process



Stationary RPs

Stationary in the strict sense: let p(xt1 . . . xtn) denote the joint pdf

p(xt1 . . . xtn) = p(xt1+T . . . xtn+T), ∀ T and n

Statistics of stationary RPs are invariant to time-axis translation


Non-stationary if joint pdfs are different
Statistical (or ensemble) averages:
nth moment of RV Xti:

E[Xti^n] = ∫_{−∞}^{∞} xti^n p(xti) dxti

Autocorrelation function (ACF) RX(t1, t2):

RX(t1, t2) = E[Xt1 Xt2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xt1 xt2 p(xt1, xt2) dxt1 dxt2

For stationary RP: RX (t1 , t2 ) = RX (t1 − t2 ) = RX (τ ), where τ = t1 − t2
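Ensemble averaging and the stationarity property RX(t1, t2) = RX(τ) can be illustrated with the classic random-phase sinusoid (a standard textbook example; the frequency, trial count, and seed below are arbitrary choices):

```python
import math
import random

def ensemble_acf(t1, t2, f=1.0, trials=100_000, seed=6):
    """Ensemble-average estimate of R_X(t1, t2) for the random-phase
    sinusoid X(t) = cos(2*pi*f*t + Theta), Theta ~ Uniform[0, 2*pi)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        th = 2.0 * math.pi * rng.random()
        acc += (math.cos(2.0 * math.pi * f * t1 + th)
                * math.cos(2.0 * math.pi * f * t2 + th))
    return acc / trials

# Theory: R_X(t1, t2) = (1/2) cos(2*pi*f*(t1 - t2)) -- a function of
# tau = t1 - t2 only, so the process is stationary in this sense.
tau = 0.1
r_a = ensemble_acf(0.0, -tau)        # pair of instants with t1 - t2 = tau
r_b = ensemble_acf(5.0, 5.0 - tau)   # shifted pair, same tau
```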



Some Properties of ACF & Covariance Function

RX (−τ ) = RX (τ ) ⇒ Even function


RX(0) = E[Xt²] (since t1 = t2 = t) denotes the avg. power in the RP

Autocovariance function:

CX(t1, t2) = E[(Xt1 − µXt1)(Xt2 − µXt2)]

Stationary RP: CX (t1 , t2 ) = RX (τ ) − µ2



Wide-Sense Stationary (WSS) RP

Mean value (MV) is independent of time: µX(t) = µX

ACF depends only on the time difference: RX(t1, t2) = RX(τ)
Various RPs related by a Venn diagram



Gaussian RP

X(t) is a Gaussian RP ⇒ at t = ti, i = 1, . . . , n, the RVs are jointly
Gaussian with means µX(ti) and covariances CX(ti, tj)
Higher-order moments can be expressed in terms of the first and second
moments
A Gaussian RP is completely specified by µX(ti) and CX(ti, tj)
If a Gaussian RP is WSS, it is also strict-sense stationary



Cross-Correlation Function (CCF) & Cross Covariance

CCF of X (t) and Y (t):

RXY(t1, t2) = E[Xt1 Yt2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xt1 yt2 p(xt1, yt2) dxt1 dyt2

Cross covariance:

CXY (t1 , t2 ) = RXY (t1 , t2 ) − µXt1 µYt2

If RPs X(t) and Y(t) are jointly and individually stationary:


CXY (t1 , t2 ) = RXY (τ ) − µX µY and RXY (−τ ) = RYX (τ )



Statistical Independence & Uncorrelated Condition

Statistical independence of X(t) and Y(t): ∀ t, t′ and ∀ n, m ∈ Z⁺

p(xt1 . . . xtn ; yt′1 . . . yt′m) = p(xt1 . . . xtn) p(yt′1 . . . yt′m)

X(t) and Y(t) are uncorrelated if RXY(t1, t2) = µXt1 µYt2 ⇒ CXY(t1, t2) = 0



Complex-Valued RP

Z(ti), i = 1, . . . , n, where Z(t) = X(t) + jY(t)

Given by the joint pdf of X(ti), Y(ti): p(xt1 . . . xtn ; yt1 . . . ytn)
For instance: encountered in narrowband band-pass noise
ACF:

RZZ(t1, t2) = (1/2) E[Zt1 Z∗t2]

Express in terms of real and imaginary parts


If X(t), Y(t) are jointly & individually stationary (show this):

RZZ (t1 , t2 ) = RZZ (τ )


R∗ZZ (τ ) = RZZ (−τ )



Two Complex-Valued RP

Z (t) = X (t) + jY (t) and W (t) = U(t) + jV (t) two complex valued RPs:

RZW(t1, t2) = (1/2) E[Zt1 W∗t2]

If the RPs are pairwise stationary:

R∗ZW(τ) = RWZ(−τ)



Power Spectral Density (PSD)

X (t) RP
PSD: Fourier transform of the ACF

SXX(f) = ∫_{−∞}^{∞} RXX(τ) exp(−j2πfτ) dτ

If the RP is real, the PSD is real and even
E[Xt²] = ∫_{−∞}^{∞} SXX(f) df
Cross PSD: Fourier transform of the cross-correlation function (CCF)

SXY(f) = ∫_{−∞}^{∞} RXY(τ) exp(−j2πfτ) dτ
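The PSD integral can be evaluated numerically for a concrete ACF. A sketch (the example ACF, truncation length, and step size are illustrative choices): for a real, even ACF the transform reduces to a cosine transform, giving a real, even PSD as stated above.

```python
import math

def psd_from_acf(acf, f, tau_max=30.0, dtau=0.01):
    """Numerically evaluate S_XX(f) = int R_XX(tau) e^{-j 2 pi f tau} d tau
    by the trapezoidal rule. For a real, even ACF this is a cosine
    transform, so the result is real."""
    n = int(round(2.0 * tau_max / dtau))
    s = 0.0
    for k in range(n + 1):
        tau = -tau_max + k * dtau
        w = 0.5 if k in (0, n) else 1.0   # trapezoidal end weights
        s += w * acf(tau) * math.cos(2.0 * math.pi * f * tau) * dtau
    return s

# Example: R_XX(tau) = exp(-|tau|) has the exact PSD 2 / (1 + (2 pi f)^2).
f = 0.2
approx = psd_from_acf(lambda t: math.exp(-abs(t)), f)
exact = 2.0 / (1.0 + (2.0 * math.pi * f) ** 2)
```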



Response of LTI system to RP
Linear time-invariant (LTI) system (filter) characterized by:
Impulse response h(t) (t-domain); its Fourier transform H(f)

Output y(t): convolution of x(t) and h(t)

y(t) = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ
Y(f) = X(f)H(f)



LTI Example

AWGN input RP:

RXX(τ) = (N0/2) δ(τ)
SXX(f) = N0/2 for all f
Output RP PSD?

SYY(f) = (N0/2) |H(f)|²

H(f ): transfer function of LTI system



Cyclostationary RP

RPs that have periodic statistical averages


Let {an}: discrete-time sequence of RVs

x(t) = Σ_{n=−∞}^{∞} an gp(t − nT)

Mean: µa = E[an] for all n

ACF: Raa(k) = (1/2) E[a∗n an+k]
gp(t) is deterministic
{an} represents (in modulation schemes):
The digital information sequence of symbols transmitted
Rate of transmission: 1/T
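The periodicity of the statistical averages can be seen numerically: E[x(t)] = µa Σ gp(t − nT) repeats with period T. A sketch with a hypothetical triangular pulse and i.i.d. symbols in {0, 2} (all illustrative assumptions, not from the slides):

```python
import random

def pam_mean(t, gp, T=1.0, span=5, trials=50_000, seed=8):
    """Ensemble mean of x(t) = sum_n a_n gp(t - nT), with i.i.d. a_n
    drawn from {0, 2} (mean 1), truncated to |n| <= span."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        acc += sum(rng.choice((0.0, 2.0)) * gp(t - n * T)
                   for n in range(-span, span + 1))
    return acc / trials

# Hypothetical pulse (an assumption for illustration): triangle of width T,
# peak 1 at t = 0, support |t| < 0.5.
gp = lambda t: max(0.0, 1.0 - abs(2.0 * t))
m0 = pam_mean(0.2, gp)   # mean at t
m1 = pam_mean(1.2, gp)   # mean at t + T: matches m0 (period T)
m2 = pam_mean(0.5, gp)   # mean at t + T/2: generally different
```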



Mean, ACF, PSD of x(t) [Proakis 2001]

Determine the mean and ACF of x(t)

Show that the statistical averages are periodic
How to compute the time-average ACF and PSD?
Exercise:
Y(t) = G1(ω) cos(ωc t) + G2(ω) sin(ωc t)
G1 ∼ N(µ1, σ1²), G2 ∼ N(µ2, σ2²) independent RVs
Find the statistical averages of Y(t)
Show that Y(t) is periodically (cyclo-) stationary



Text Book & References

Text book: Digital Communications, J. G. Proakis, 4th Ed., McGraw-Hill.

References (brief list):
Principles of Communication Engineering, Wozencraft and Jacobs
Principles of Digital Communication, Gallager
Wireless Communications, Andrea Goldsmith
Fundamentals of Wireless Communication, Tse & Viswanath

