
5.5 Solved Problems

Example 5.5.10. Let X be uniformly distributed in [0, 2π] and Y = sin(X). Calculate the

p.d.f. fY of Y .

Since Y = g(X), we know that

fY (y) = Σ_n fX (x_n) / |g′(x_n)|

where the sum is over all the xn such that g(xn ) = y.

For each y ∈ (−1, 1), there are two values of xn in [0, 2π] such that g(xn ) = sin(xn ) = y.

For those values, we find that

|g′(x_n)| = |cos(x_n)| = √(1 − sin²(x_n)) = √(1 − y²),

and

fX (x_n) = 1/(2π).

Hence,

fY (y) = 2 · (1/(2π)) · 1/√(1 − y²) = 1/(π √(1 − y²)).
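A quick numerical sanity check of this density (a minimal Python sketch; the histogram comparison is illustrative and not part of the original solution):

import numpy as np

# Y = sin(X) with X uniform on [0, 2*pi]; compare a histogram of samples
# against the derived density f_Y(y) = 1 / (pi * sqrt(1 - y^2)).
rng = np.random.default_rng(0)
y = np.sin(rng.uniform(0.0, 2.0 * np.pi, size=1_000_000))

hist, edges = np.histogram(y, bins=50, range=(-0.99, 0.99), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f_theory = 1.0 / (np.pi * np.sqrt(1.0 - centers**2))
print(np.max(np.abs(hist - f_theory)))  # small except near the endpoints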

Example 5.5.11. Let {X, Y } be independent random variables with X exponentially dis-

tributed with mean 1 and Y uniformly distributed in [0, 1]. Calculate E(max{X, Y }).

Let Z = max{X, Y }. Then

P (Z ≤ z) = P (X ≤ z, Y ≤ z) = P (X ≤ z)P (Y ≤ z)

       = z(1 − e^{−z}) for z ∈ [0, 1], and = 1 − e^{−z} for z ≥ 1.

Hence,

fZ (z) = 1 − e^{−z} + z e^{−z} for z ∈ [0, 1], and fZ (z) = e^{−z} for z ≥ 1.

Accordingly,
E(Z) = ∫_0^∞ z fZ (z) dz = ∫_0^1 z(1 − e^{−z} + z e^{−z}) dz + ∫_1^∞ z e^{−z} dz.

To do the calculation we note that

∫_0^1 z dz = [z²/2]_0^1 = 1/2,

∫_0^1 z e^{−z} dz = −∫_0^1 z de^{−z} = −[z e^{−z}]_0^1 + ∫_0^1 e^{−z} dz = −e^{−1} − [e^{−z}]_0^1 = 1 − 2e^{−1},

∫_0^1 z² e^{−z} dz = −∫_0^1 z² de^{−z} = −[z² e^{−z}]_0^1 + 2 ∫_0^1 z e^{−z} dz = −e^{−1} + 2(1 − 2e^{−1}) = 2 − 5e^{−1},

∫_1^∞ z e^{−z} dz = 1 − ∫_0^1 z e^{−z} dz = 2e^{−1}.

Collecting the pieces, we find that

E(Z) = 1/2 − (1 − 2e^{−1}) + (2 − 5e^{−1}) + 2e^{−1} = 3/2 − e^{−1} ≈ 1.13.
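A short Monte Carlo check of this value (a minimal Python sketch, assuming nothing beyond the stated distributions):

import numpy as np

# E[max{X, Y}] for X ~ Exp(1), Y ~ Uniform[0, 1]; exact value 3/2 - 1/e.
rng = np.random.default_rng(1)
n = 1_000_000
z = np.maximum(rng.exponential(1.0, n), rng.uniform(0.0, 1.0, n))
print(z.mean(), 1.5 - np.exp(-1.0))  # both close to 1.132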

Example 5.5.12. Let {Xn , n ≥ 1} be i.i.d. with E(Xn ) = µ and var(Xn ) = σ 2 . Use

Chebyshev’s inequality to get a bound on

α := P (|(X1 + · · · + Xn)/n − µ| ≥ ε).

Chebyshev’s inequality (4.8.1) states that

α ≤ (1/ε²) var((X1 + · · · + Xn)/n) = (1/ε²) · n var(X1)/n² = σ²/(nε²).

This calculation shows that the sample mean gets closer and closer to the mean: the

variance of the error decreases like 1/n.



Example 5.5.13. Let X =D P (λ). You pick X white balls. You color the balls indepen-

dently, each red with probability p and blue with probability 1 − p. Let Y be the number

of red balls and Z the number of blue balls. Show that Y and Z are independent and that

Y =D P (λp) and Z =D P (λ(1 − p)).

We find
P (Y = m, Z = n) = P (X = m + n) C(m + n, m) p^m (1 − p)^n

                 = (λ^{m+n} e^{−λ}/(m + n)!) · ((m + n)!/(m! n!)) p^m (1 − p)^n

                 = [((λp)^m /m!) e^{−λp}] × [((λ(1 − p))^n /n!) e^{−λ(1−p)}],

which proves the result.
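A simulation of this thinning property (a minimal Python sketch; λ and p below are illustrative choices, not from the text):

import numpy as np

# Color each of X ~ Poisson(lam) balls red with probability p.  Y (red) and
# Z (blue) should behave like independent Poisson(lam*p), Poisson(lam*(1-p)).
rng = np.random.default_rng(2)
lam, p, n = 5.0, 0.3, 1_000_000
x = rng.poisson(lam, n)
y = rng.binomial(x, p)          # number of red balls
z = x - y                       # number of blue balls
print(y.mean(), lam * p)        # ~1.5
print(z.mean(), lam * (1 - p))  # ~3.5
print(np.cov(y, z)[0, 1])       # ~0, consistent with independence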



6.7 Solved Problems

Example 6.7.1. Let (X, Y ) be a point picked uniformly in the quarter circle {(x, y) | x ≥

0, y ≥ 0, x2 + y 2 ≤ 1}. Find E[X | Y ].

Given Y = y, X is uniformly distributed in [0, √(1 − y²)]. Hence

E[X | Y ] = (1/2)√(1 − Y²).

Example 6.7.2. A customer entering a store is served by clerk i with probability pi , i =

1, 2, . . . , n. The time taken by clerk i to service a customer is an exponentially distributed

random variable with parameter αi .

a. Find the pdf of T , the time taken to service a customer.

b. Find E[T ].

c. Find V ar[T ].

Designate by X the clerk who serves the customer.


a. fT (t) = Σ_{i=1}^n pi fT|X [t|i] = Σ_{i=1}^n pi αi e^{−αi t}.

b. E[T ] = E(E[T | X]) = E(1/αX ) = Σ_{i=1}^n pi /αi .

c. We first find E[T²] = E(E[T² | X]) = E(2/αX²) = Σ_{i=1}^n 2pi /αi². Hence,
var(T ) = E(T²) − (E(T ))² = Σ_{i=1}^n 2pi /αi² − (Σ_{i=1}^n pi /αi )².
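As a quick numerical check of parts b and c (a minimal Python sketch; the two-clerk numbers below are illustrative, not from the text):

import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.4, 0.6])        # hypothetical service probabilities
alpha = np.array([1.0, 3.0])    # hypothetical exponential rates
n = 1_000_000
clerk = rng.choice(len(p), size=n, p=p)
t = rng.exponential(1.0 / alpha[clerk])

e_t = np.sum(p / alpha)
var_t = np.sum(2 * p / alpha**2) - e_t**2
print(t.mean(), e_t)    # ~0.6
print(t.var(), var_t)   # ~0.573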

Example 6.7.3. The random variables Xi are i.i.d. and such that E[Xi ] = µ and var(Xi ) =

σ 2 . Let N be a random variable independent of all the Xi s taking on nonnegative integer

values. Let S = X1 + X2 + . . . + XN .

a. Find E(S).

b. Find var(S).

a. E(S) = E(E[S | N ]) = E(N µ) = µE(N ).



b. First we calculate E(S 2 ). We find

E(S²) = E(E[S² | N ]) = E(E[(X1 + X2 + . . . + XN )² | N ])

      = E(E[X1² + · · · + XN² + Σ_{i≠j} Xi Xj | N ])

      = E(N E(X1²) + N (N − 1)E(X1 X2 )) = E(N (µ² + σ²) + N (N − 1)µ²)

      = E(N )σ² + E(N²)µ².

Then,

var(S) = E(S 2 ) − (E(S))2 = E(N )σ 2 + E(N 2 )µ2 − µ2 (E(N ))2 = E(N )σ 2 + var(N )µ2 .

Example 6.7.4. Let X, Y be independent and uniform in [0, 1]. Calculate E[X 2 | X + Y ].

Given X + Y = z, the point (X, Y ) is uniformly distributed on the line {(x, y) | x ≥

0, y ≥ 0, x + y = z}. Draw a picture to see that if z > 1, then X is uniform on [z − 1, 1] and

if z < 1, then X is uniform on [0, z]. Thus, if z > 1 one has

E[X² | X + Y = z] = (1/(2 − z)) ∫_{z−1}^1 x² dx = (1/(2 − z)) [x³/3]_{z−1}^1 = (1 − (z − 1)³)/(3(2 − z)).

Similarly, if z < 1, then

E[X² | X + Y = z] = (1/z) ∫_0^z x² dx = (1/z) [x³/3]_0^z = z²/3.
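A numerical check of these two expressions (a minimal Python sketch; conditioning is approximated by a thin slab around the chosen z, which is an illustration, not exact conditioning):

import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 2_000_000)
y = rng.uniform(0, 1, 2_000_000)
s = x + y

for z in (0.6, 1.4):
    mask = np.abs(s - z) < 0.01
    mc = np.mean(x[mask] ** 2)
    exact = z**2 / 3 if z < 1 else (1 - (z - 1) ** 3) / (3 * (2 - z))
    print(z, mc, exact)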

Example 6.7.5. Let (X, Y ) be the coordinates of a point chosen uniformly in [0, 1]2 . Cal-

culate E[X | XY ].

This is an example where we use the straightforward approach, based on the definition.

The problem is interesting because it illustrates that approach in a tractable but nontrivial

example. Let Z = XY .

E[X | Z = z] = ∫_0^1 x f[X|Z] [x | z] dx.

Now,

f[X|Z] [x | z] = fX,Z (x, z)/fZ (z).

Also,

fX,Z (x, z) dx dz = P (X ∈ (x, x + dx), Z ∈ (z, z + dz))
                 = P (X ∈ (x, x + dx)) P [Z ∈ (z, z + dz) | X = x] = dx P (xY ∈ (z, z + dz))
                 = dx P (Y ∈ (z/x, z/x + dz/x)) = dx (dz/x) 1{z ≤ x}.

Hence,

fX,Z (x, z) = 1/x if x ∈ [0, 1] and z ∈ [0, x], and 0 otherwise.

Consequently,
fZ (z) = ∫_0^1 fX,Z (x, z) dx = ∫_z^1 (1/x) dx = − ln(z),   0 ≤ z ≤ 1.

Finally,

f[X|Z] [x | z] = −1/(x ln(z)),   for x ∈ [0, 1] and z ∈ [0, x],

and

E[X | Z = z] = ∫_z^1 x (−1/(x ln(z))) dx = (z − 1)/ln(z),

so that

E[X | XY ] = (XY − 1)/ln(XY ).
Examples of values:

E[X | XY = 1] = 1, E[X | XY = 0.1] = 0.39, E[X | XY ≈ 0] ≈ 0.
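A simulation check of one of these values (a minimal Python sketch; the thin-slab conditioning is an approximation used only for illustration):

import numpy as np

# Check E[X | XY = z] = (z - 1)/ln(z) at z = 0.1.
rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 5_000_000)
y = rng.uniform(0, 1, 5_000_000)
z = x * y

target = 0.1
mask = np.abs(z - target) < 0.002
print(x[mask].mean(), (target - 1) / np.log(target))  # both ~0.39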

Example 6.7.6. Let X, Y be independent and exponentially distributed with mean 1. Find

E[cos(X + Y ) | X].

We have
E[cos(X + Y ) | X = x] = ∫_0^∞ cos(x + y) e^{−y} dy = Re{ ∫_0^∞ e^{i(x+y)−y} dy }
                       = Re{ e^{ix}/(1 − i) } = (cos(x) − sin(x))/2.

Example 6.7.7. Let X1 , X2 , . . . , Xn be i.i.d. U [0, 1] and Y = max{X1 , . . . , Xn }. Calculate

E[X1 | Y ].

Intuition suggests, and it is not too hard to justify, that if Y = y, then X1 = y with prob-

ability 1/n, and with probability (n − 1)/n the random variable X1 is uniformly distributed

in [0, y]. Hence,


E[X1 | Y ] = (1/n) Y + ((n − 1)/n)(Y /2) = ((n + 1)/(2n)) Y.
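A numerical check of this conditional expectation (a minimal Python sketch; the thin-slab conditioning on Y is illustrative only):

import numpy as np

rng = np.random.default_rng(6)
n, m = 5, 5_000_000
x = rng.uniform(0, 1, (m, n))
y = x.max(axis=1)

y0 = 0.8
mask = np.abs(y - y0) < 0.005
print(x[mask, 0].mean(), (n + 1) / (2 * n) * y0)  # both ~0.48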

Example 6.7.8. Let X, Y, Z be independent and uniform in [0, 1]. Calculate E[(X + 2Y +

Z)2 | X].

One has, E[(X + 2Y + Z)2 | X] = E[X 2 + 4Y 2 + Z 2 + 4XY + 4Y Z + 2XZ | X]. Now,

E[X 2 + 4Y 2 + Z 2 + 4XY + 4Y Z + 2XZ | X]

= X 2 + 4E(Y 2 ) + E(Z 2 ) + 4XE(Y ) + 4E(Y )E(Z) + 2XE(Z)

= X 2 + 4/3 + 1/3 + 2X + 1 + X = X 2 + 3X + 8/3.

Example 6.7.9. Let X, Y, Z be three random variables defined on the same probability

space. Prove formally that

E(|X − E[X | Y ]|2 ) ≥ E(|X − E[X | Y, Z]|2 ).

Let X1 = E[X | Y ] and X2 = E[X | Y, Z]. Note that

E((X − X2 )(X2 − X1 )) = E(E[(X − X2 )(X2 − X1 ) | Y, Z])



and

E[(X − X2 )(X2 − X1 ) | Y, Z] = (X2 − X1 )E[X − X2 | Y, Z] = X2 − X2 = 0.

Hence,

E((X −X1 )2 ) = E((X −X2 +X2 −X1 )2 ) = E((X −X2 )2 )+E((X2 −X1 )2 ) ≥ E((X −X2 )2 ).

Example 6.7.10. Pick the point (X, Y ) uniformly in the triangle {(x, y) | 0 ≤ x ≤

1 and 0 ≤ y ≤ x}.

a. Calculate E[X | Y ].

b. Calculate E[Y | X].

c. Calculate E[(X − Y )2 | X].

a. Given {Y = y}, X is U [y, 1], so that E[X | Y = y] = (1 + y)/2. Hence,

1+Y
E[X | Y ] = .
2

b. Given {X = x}, Y is U [0, x], so that E[Y | X = x] = x/2. Hence,

X
E[Y | X] = .
2

c. Since given {X = x}, Y is U [0, x], we find


E[(X − Y )² | X = x] = (1/x) ∫_0^x (x − y)² dy = (1/x) ∫_0^x y² dy = x²/3.

Hence,

E[(X − Y )² | X] = X²/3.

Example 6.7.11. Assume that the two random variables X and Y are such that E[X |

Y ] = Y and E[Y | X] = X. Show that P (X = Y ) = 1.

We show that E((X − Y )2 ) = 0. This will prove that X − Y = 0 with probability one.

Note that

E((X − Y )2 ) = E(X 2 ) − E(XY ) + E(Y 2 ) − E(XY ).



Now,

E(XY ) = E(E[XY | X]) = E(XE[Y | X]) = E(X 2 ).

Similarly, one finds that E(XY ) = E(Y 2 ). Putting together the pieces, we get E((X −

Y )2 ) = 0.

Example 6.7.12. Let X, Y be independent random variables uniformly distributed in [0, 1].

Calculate E[X|X < Y ].

Drawing a unit square, we see that given {X < Y }, the pair (X, Y ) is uniformly dis-

tributed in the triangle left of the diagonal from the upper left corner to the bottom right

corner of that square. Accordingly, the p.d.f. f (x) of X is given by f (x) = 2(1 − x). Hence,

Z 1
1
E[X|X < Y ] = x × 2(1 − x)dx = .
0 3

7.4 Summary

We defined the Gaussian random variables N (0, 1), N (µ, σ²), and N (µ, Σ) both in terms of
their density and their characteristic function.

Jointly Gaussian random variables that are uncorrelated are independent.

If X, Y are jointly Gaussian, then E[X | Y ] = E(X) + cov(X, Y )var(Y )^{−1} (Y − E(Y )).

In the vector case,

E[X | Y ] = E(X) + Σ_{X,Y} Σ_Y^{−1} (Y − E(Y )),

when ΣY is invertible. We also discussed the non-invertible case.

7.5 Solved Problems

Example 7.5.1. The noise voltage X in an electric circuit can be modelled as a Gaussian

random variable with mean zero and variance equal to 10−8 .

a. What is the probability that it exceeds 10−4 ? What is the probability that it exceeds

2 × 10−4 ? What is the probability that its value is between −2 × 10−4 and 10−4 ?

b. Given that the noise value is positive, what is the probability that it exceeds 10−4 ?

c. What is the expected value of |X|?

Let Z = 104 X, then Z =D N (0, 1) and we can reformulate the questions in terms of Z.

a. Using (7.1) we find P (Z > 1) = 0.159 and P (Z > 2) = 0.023. Indeed, P (Z > d) =

P (|Z| > d)/2, by symmetry of the density. Moreover,

P (−2 < Z < 1) = P (Z < 1)−P (Z ≤ −2) = 1−P (Z > 1)−P (Z > 2) = 1−0.159−0.023 = 0.818.

b. We have

P [Z > 1 | Z > 0] = P (Z > 1)/P (Z > 0) = 2P (Z > 1) = 0.318.

c. Since Z = 104 X, one has E(|Z|) = 104 E(|X|). Now,


E(|Z|) = ∫_{−∞}^∞ |z| fZ (z) dz = 2 ∫_0^∞ z fZ (z) dz = 2 ∫_0^∞ (1/√(2π)) z exp{−z²/2} dz

       = √(2/π) ∫_0^∞ −d[exp{−z²/2}] = √(2/π).

Hence,

E(|X|) = 10^{−4} √(2/π).

Example 7.5.2. Let U = {Un , n ≥ 1} be a sequence of independent standard Gaussian

random variables. A low-pass filter takes the sequence U and produces the output sequence

Xn = Un + Un+1 . A high-pass filter produces the output sequence Yn = Un − Un+1 .

a. Find the joint pdf of Xn and Xn−1 and find the joint pdf of Xn and Xn+m for m > 1.

b. Find the joint pdf of Yn and Yn−1 and find the joint pdf of Yn and Yn+m for m > 1.

c. Find the joint pdf of Xn and Ym .

We start with some preliminary observations. First, since the Ui are independent, they

are jointly Gaussian. Second, Xn and Yn are linear combinations of the Ui and thus are

also jointly Gaussian. Third, the jpdf of jointly gaussian random variables Z is

fZ (z) = (1/√((2π)^n det(C))) exp[−(1/2)(z − m)^T C^{−1} (z − m)]

where n is the dimension of Z, m is the vector of expectations of Z, and C is the covariance
matrix E[(Z − m)(Z − m)^T ]. Finally, we need some basic facts from algebra. If
C = [[a, b], [c, d]], then det(C) = ad − bc and C^{−1} = (1/det(C)) [[d, −b], [−c, a]]. We are now
ready to answer the questions.

a. Express in the form X = AU :

[Xn ; Xn−1 ] = [[0, 1/2, 1/2], [1/2, 1/2, 0]] [Un−1 ; Un ; Un+1 ].

Then det(C) = 1/4 − 1/16 = 3/16 and

C^{−1} = (16/3) [[1/2, −1/4], [−1/4, 1/2]],

so that

fXn Yn (xn , yn ) = (2/(π√3)) exp[−(4/3)(xn² − xn yn + yn²)].

ii. Consider m = n + 1.

[Xn ; Yn+1 ] = [[1/2, 1/2], [1/2, −1/2]] [Un ; Un+1 ].

Then E[[Xn Yn+1 ]^T ] = A E[U ] = 0 and

C = A E[U U^T ]A^T = [[1/2, 1/2], [1/2, −1/2]] I [[1/2, 1/2], [1/2, −1/2]]^T = [[1/2, 0], [0, 1/2]].

Then det(C) = 1/4 and C^{−1} = [[2, 0], [0, 2]], so

fXn Yn+1 (xn , yn+1 ) = (1/π) exp[−(xn² + y_{n+1}²)].

iii. For all other m:

[Xn ; Ym ] = [[1/2, 1/2, 0, 0], [0, 0, −1/2, 1/2]] [Un ; Un+1 ; Um−1 ; Um ].

Then E[[Xn Ym ]^T ] = A E[U ] = 0 and

C = A E[U U^T ]A^T = A I A^T = [[1/2, 0], [0, 1/2]].

Then det(C) = 1/4 and C^{−1} = [[2, 0], [0, 2]], so

fXn Ym (xn , ym ) = (1/π) exp[−(xn² + ym²)].

Example 7.5.3. Let X, Y, Z, V be i.i.d. N (0, 1). Calculate E[X + 2Y |3X + Z, 4Y + 2V ].

We have

E[X + 2Y | 3X + Z, 4Y + 2V ] = a Σ^{−1} [3X + Z ; 4Y + 2V ]

where

a = [E((X + 2Y )(3X + Z)), E((X + 2Y )(4Y + 2V ))] = [3, 8]

and

Σ = [[var(3X + Z), E((3X + Z)(4Y + 2V ))], [E((3X + Z)(4Y + 2V )), var(4Y + 2V )]] = [[10, 0], [0, 20]].

Hence,

E[X + 2Y | 3X + Z, 4Y + 2V ] = [3, 8] [[1/10, 0], [0, 1/20]] [3X + Z ; 4Y + 2V ] = (3/10)(3X + Z) + (4/10)(4Y + 2V ).

Example 7.5.4. Assume that {X, Yn , n ≥ 1} are mutually independent random variables

with X = N (0, 1) and Yn = N (0, σ 2 ). Let X̂n = E[X | X + Y1 , . . . , X + Yn ]. Find the

smallest value of n such that

P (|X − X̂n | > 0.1) ≤ 5%.

We know that X̂n = an (nX + Y1 + · · · + Yn ). The value of an is such that

E((X − X̂n )(X + Yj )) = 0,   i.e.,   E((X − an (nX + Y1 + · · · + Yn ))(X + Yj )) = 0,

which implies that

an = 1/(n + σ²).

Then

var(X − X̂n ) = var((1 − n an )X − an (Y1 + · · · + Yn )) = (1 − n an )² + n (an )² σ² = σ²/(n + σ²).

Thus we know that X − X̂n = N (0, σ²/(n + σ²)). Accordingly,

P (|X − X̂n | > 0.1) = P (|N (0, σ²/(n + σ²))| > 0.1) = P (|N (0, 1)| > 0.1/αn )

where αn = √(σ²/(n + σ²)). For this probability to be at most 5% we need

0.1/αn = 2,   i.e.,   αn² = σ²/(n + σ²) = (0.1/2)² = 1/400,

so that

n = 399σ².

The result is intuitively pleasing: If the observations are more noisy (σ 2 large), we need

more of them to estimate X.

Example 7.5.5. Assume that X, Y are i.i.d. N (0, 1). Calculate E[(X + Y )4 | X − Y ].

Note that X + Y and X − Y are independent because they are jointly Gaussian and

uncorrelated. Hence,

E[(X +Y )4 | X −Y ] = E((X +Y )4 ) = E(X 4 +4X 3 Y +6X 2 Y 2 +4XY 3 +Y 4 ) = 3+6+3 = 12.

Example 7.5.6. Let X, Y be independent N (0, 1) random variables. Show that W :=

X 2 + Y 2 =D Exd(1/2). That is, the sum of the squares of two i.i.d. zero-mean Gaussian

random variables is exponentially distributed!

We calculate the characteristic function of W . We find

E(e^{iuW}) = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{iu(x²+y²)} (1/(2π)) e^{−(x²+y²)/2} dx dy

           = ∫_0^{2π} ∫_0^∞ e^{iur²} (1/(2π)) e^{−r²/2} r dr dθ

           = ∫_0^∞ e^{iur²} e^{−r²/2} r dr

           = ∫_0^∞ (1/(2iu − 1)) d[e^{iur² − r²/2}] = 1/(1 − 2iu).

On the other hand, if W =D Exd(λ), then


E(e^{iuW}) = ∫_0^∞ e^{iux} λe^{−λx} dx = λ/(λ − iu) = 1/(1 − λ^{−1} iu).

Comparing these expressions shows that X 2 + Y 2 =D Exd(1/2) as claimed.

Example 7.5.7. Let {Xn , n ≥ 0} be Gaussian N (0, 1) random variables. Assume that

Yn+1 = aYn + Xn for n ≥ 0 where Y0 is a Gaussian random variable with mean zero and

variance σ 2 independent of the Xn ’s and |a| < 1.

a. Calculate var(Yn ) for n ≥ 0. Show that var(Yn ) → γ 2 as n → ∞ for some value γ 2 .

b. Find the values of σ 2 so that the variance of Yn does not depend on n ≥ 1.

a. We see that

var(Yn+1 ) = var(aYn + Xn ) = a2 var(Yn ) + var(Xn ) = a2 var(Yn ) + 1.

Thus, with αn := var(Yn ), one has

αn+1 = a2 αn + 1 and α0 = σ 2 .

Solving these equations we find

var(Yn ) = αn = a^{2n} σ² + (1 − a^{2n})/(1 − a²),   for n ≥ 0.

Since |a| < 1, it follows that

var(Yn ) → γ² := 1/(1 − a²)   as n → ∞.

b. The obvious answer is σ 2 = γ 2 .



Example 7.5.8. Let the Xn ’s be as in Example 7.5.7.

a. Calculate

E[X1 + X2 + X3 | X1 + X2 , X2 + X3 , X3 + X4 ].

b. Calculate

E[X1 + X2 + X3 | X1 + X2 + X3 + X4 + X5 ].

a. We know that the solution is of the form Y = a(X1 + X2 ) + b(X2 + X3 ) + c(X3 + X4 )

where the coefficients a, b, c must be such that the estimation error is orthogonal to the

conditioning variables. That is,

E(((X1 + X2 + X3 ) − Y )(X1 + X2 )) = E(((X1 + X2 + X3 ) − Y )(X2 + X3 ))
                                   = E(((X1 + X2 + X3 ) − Y )(X3 + X4 )) = 0.

These equalities read

2 − a − (a + b) = 2 − (a + b) − (b + c) = 1 − (b + c) − c = 0,

and solving these equalities gives a = 3/4, b = 1/2, and c = 1/4.

b. Here we use symmetry. For k = 1, . . . , 5, let

Yk = E[Xk | X1 + X2 + X3 + X4 + X5 ].

Note that Y1 = Y2 = · · · = Y5 , by symmetry. Moreover,

Y1 +Y2 +Y3 +Y4 +Y5 = E[X1 +X2 +X3 +X4 +X5 | X1 +X2 +X3 +X4 +X5 ] = X1 +X2 +X3 +X4 +X5 .

It follows that Yk = (X1 + X2 + X3 + X4 + X5 )/5 for k = 1, . . . , 5. Hence,

E[X1 + X2 + X3 | X1 + X2 + X3 + X4 + X5 ] = Y1 + Y2 + Y3 = (3/5)(X1 + X2 + X3 + X4 + X5 ).

Example 7.5.9. Let the Xn ’s be as in Example 7.5.7. Find the jpdf of (X1 + 2X2 +

3X3 , 2X1 + 3X2 + X3 , 3X1 + X2 + 2X3 ).



These random variables are jointly Gaussian, zero mean, and with covariance matrix Σ

given by

Σ = [[14, 11, 11], [11, 14, 11], [11, 11, 14]].
Indeed, Σ is the matrix of covariances. For instance, its entry (2, 3) is given by

E((2X1 + 3X2 + X3 )(3X1 + X2 + 2X3 )) = 2 × 3 + 3 × 1 + 1 × 2 = 11.

We conclude that the jpdf is

fX (x) = (1/((2π)^{3/2} |Σ|^{1/2})) exp{−(1/2) x^T Σ^{−1} x}.

We let you calculate |Σ| and Σ−1 .

Example 7.5.10. Let X1 , X2 , X3 be independent N (0, 1) random variables. Calculate

E[X1 + 3X2 | Y ] where

Y = [[1, 2, 3], [3, 2, 1]] [X1 ; X2 ; X3 ].

By now, this should be familiar. The solution is Y := a(X1 + 2X2 + 3X3 ) + b(3X1 +

2X2 + X3 ) where a and b are such that

0 = E((X1 +3X2 −Y )(X1 +2X2 +3X3 )) = 7−(a+3b)−(4a+4b)−(9a+3b) = 7−14a−10b

and

0 = E((X1 +3X2 −Y )(3X1 +2X2 +X3 )) = 9−(3a+9b)−(4a+4b)−(3a+b) = 9−10a−14b.

Solving these equations gives a = 1/12 and b = 7/12.

Example 7.5.11. Find the jpdf of (2X1 + X2 , X1 + 3X2 ) where X1 and X2 are independent

N (0, 1) random variables.



These random variables are jointly Gaussian, zero-mean, with covariance Σ given by
Σ = [[5, 5], [5, 10]].

Hence,

fX (x) = (1/(2π|Σ|^{1/2})) exp{−(1/2) x^T Σ^{−1} x} = (1/(10π)) exp{−(1/2) x^T Σ^{−1} x}

where

Σ^{−1} = (1/25) [[10, −5], [−5, 5]].

Example 7.5.12. The random variable X is N (µ, 1). Find an approximate value of µ so

that

P (−0.5 ≤ X ≤ −0.1) ≈ P (1 ≤ X ≤ 2).

We write X = µ + Y where Y is N (0, 1). We must find µ so that

g(µ) := P (−0.5 − µ ≤ Y ≤ −0.1 − µ) − P (1 − µ ≤ Y ≤ 2 − µ) ≈ 0.

We do a little search using a table of the N (0, 1) distribution or using a calculator. I

find that µ ≈ 0.065.

Example 7.5.13. Let X be a N (0, 1) random variable. Calculate the mean and the variance

of cos(X) and sin(X).

a. Mean Values. We know that

E(e^{iuX}) = e^{−u²/2}   and   e^{iθ} = cos(θ) + i sin(θ).

Therefore,

E(cos(uX) + i sin(uX)) = e^{−u²/2},

so that

E(cos(uX)) = e^{−u²/2}   and   E(sin(uX)) = 0.

In particular, E(cos(X)) = e^{−1/2} and E(sin(X)) = 0.

b. Variances. We first calculate E(cos²(X)). We find

E(cos²(X)) = E((1/2)(1 + cos(2X))) = 1/2 + (1/2)E(cos(2X)).

Using the previous derivation, we find that

E(cos(2X)) = e^{−2²/2} = e^{−2},

so that E(cos²(X)) = (1/2) + (1/2)e^{−2}. We conclude that

var(cos(X)) = E(cos²(X)) − (E(cos(X)))² = 1/2 + (1/2)e^{−2} − (e^{−1/2})² = 1/2 + (1/2)e^{−2} − e^{−1}.

Similarly, we find

E(sin²(X)) = E(1 − cos²(X)) = 1/2 − (1/2)e^{−2} = var(sin(X)).

Example 7.5.14. Let X be a N (0, 1) random variable. Define




Y = X if |X| ≤ 1, and Y = −X if |X| > 1.

Find the pdf of Y .

By symmetry, Y is N (0, 1).

Example 7.5.15. Let {X, Y, Z} be independent N (0, 1) random variables.

a. Calculate

E[3X + 5Y | 2X − Y, X + Z].

b. How does the expression change if X, Y, Z are i.i.d. N (1, 1)?



a. Let V1 = 2X − Y , V2 = X + Z and V = [V1 , V2 ]^T . Then

E[3X + 5Y | V ] = a Σ_V^{−1} V

where

a = E((3X + 5Y )V^T ) = [1, 3]

and

Σ_V = [[5, 2], [2, 2]].

Hence,

E[3X + 5Y | V ] = [1, 3] [[5, 2], [2, 2]]^{−1} V = [1, 3] (1/6) [[2, −2], [−2, 5]] V
                = (1/6)[−4, 13] V = −(2/3)(2X − Y ) + (13/6)(X + Z).

b. Now,

E[3X + 5Y | V ] = E(3X + 5Y ) + a Σ_V^{−1} (V − E(V )) = 8 + (1/6)[−4, 13](V − [1, 2]^T )

                = 26/6 − (2/3)(2X − Y ) + (13/6)(X + Z).

Example 7.5.16. Let (X, Y ) be jointly Gaussian. Show that X − E[X | Y ] is Gaussian

and calculate its mean and variance.

We know that

E[X | Y ] = E(X) + (cov(X, Y )/var(Y ))(Y − E(Y )).

Consequently,

X − E[X | Y ] = X − E(X) − (cov(X, Y )/var(Y ))(Y − E(Y ))

and is certainly Gaussian. This difference is zero-mean. Its variance is

var(X) + [cov(X, Y )/var(Y )]² var(Y ) − 2 (cov(X, Y )/var(Y )) cov(X, Y ) = var(X) − [cov(X, Y )]²/var(Y ).

Recall that a probability space is a triple {Ω, F, P } where Ω is the set of outcomes, F is a σ-field of subsets of Ω, and P : F → [0, 1] is a σ-additive set function such that P (Ω) = 1.

The idea is to specify the likelihood of various outcomes (elements of Ω). If one can

specify the probability of individual outcomes (e.g., when Ω is countable), then one can

choose F = 2Ω , so that all sets of outcomes are events. However, this is generally not

possible as the example of the uniform distribution on [0, 1] shows. (See Appendix C.)

2.6.1 Stars and Bars Method

In many problems, we use a method for counting the number of ordered groupings of

identical objects. This method is called the stars and bars method. Suppose we are given

identical objects we call stars. Any ordered grouping of these stars can be obtained by

separating them by bars. For example, || ∗ ∗ ∗ |∗ separates four stars into four groups of sizes

0, 0, 3, and 1.

Suppose we wish to separate N stars into M ordered groups. We need M − 1 bars to

form M groups. The number of orderings is the number of ways of placing the N identical
stars and M − 1 identical bars into N + M − 1 spaces, namely C(N + M − 1, M − 1).

Creating compound objects of stars and bars is useful when there are bounds on the

sizes of the groups.
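A small numerical illustration of the count (a minimal Python sketch; the values of N and M are arbitrary examples):

from math import comb
from itertools import product

# Count ordered groupings of N identical stars into M groups and compare
# with the stars-and-bars formula C(N + M - 1, M - 1).
N, M = 4, 4
brute = sum(1 for sizes in product(range(N + 1), repeat=M) if sum(sizes) == N)
print(brute, comb(N + M - 1, M - 1))  # both 35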

2.7 Solved Problems

Example 2.7.1. Describe the probability space {Ω, F, P } that corresponds to the random

experiment “picking five cards without replacement from a perfectly shuffled 52-card deck.”

1. One can choose Ω to be all the permutations of A := {1, 2, . . . , 52}. The interpretation

of ω ∈ Ω is then the shuffled deck. Each permutation is equally likely, so that pω = 1/(52!)

for ω ∈ Ω. When we pick the five cards, these cards are (ω1 , ω2 , . . . , ω5 ), the top 5 cards of

the deck.

2. One can also choose Ω to be all the subsets of A with five elements. In this case, each
subset is equally likely and, since there are N := C(52, 5) such subsets, one defines pω = 1/N

for ω ∈ Ω.

3. One can choose Ω = {ω = (ω1 , ω2 , ω3 , ω4 , ω5 ) | ωn ∈ A and ωm ≠ ωn , ∀m ≠ n, m, n ∈
{1, 2, . . . , 5}}. In this case, the outcome specifies the order in which we pick the cards.

Since there are M := 52!/(47!) such ordered lists of five cards without replacement, we

define pω = 1/M for ω ∈ Ω.

As this example shows, there are multiple ways of describing a random experiment.

What matters is that Ω is large enough to specify completely the outcome of the experiment.

Example 2.7.2. Pick three balls without replacement from an urn with fifteen balls that

are identical except that ten are red and five are blue. Specify the probability space.

One possibility is to specify the color of the three balls in the order they are picked.

Then

Ω = {R, B}³, F = 2^Ω , P ({RRR}) = (10/15)(9/14)(8/13), . . . , P ({BBB}) = (5/15)(4/14)(3/13).

Example 2.7.3. You flip a fair coin until you get three consecutive ‘heads’. Specify the

probability space.

One possible choice is Ω = {H, T }∗ , the set of finite sequences of H and T . That is,

{H, T }∗ = ∪_{n=1}^∞ {H, T }^n .

This set Ω is countable, so we can choose F = 2Ω . Here,

P ({ω}) = 2−n where n := length of ω.

This is another example of a probability space that is bigger than necessary, but easier

to specify than the smallest probability space we need.



Example 2.7.4. Let Ω = {0, 1, 2, . . .}. Let F be the collection of subsets of Ω that are

either finite or whose complement is finite. Is F a σ-field?

No, F is not closed under countable set operations. For instance, {2n} ∈ F for each

n ≥ 0 because {2n} is finite. However,

A := ∪_{n=0}^∞ {2n}

is not in F because both A and Ac are infinite.

Example 2.7.5. In a class with 24 students, what is the probability that no two students

have the same birthday?

Let N = 365 and n = 24. The probability is

α := (N/N ) × ((N − 1)/N ) × ((N − 2)/N ) × · · · × ((N − n + 1)/N ).

To estimate this quantity we proceed as follows. Note that

ln(α) = Σ_{k=1}^n ln((N − n + k)/N ) ≈ ∫_1^n ln((N − n + x)/N ) dx

      = N ∫_a^1 ln(y) dy = N [y ln(y) − y]_a^1

      = −(N − n + 1) ln((N − n + 1)/N ) − (n − 1).

(In this derivation we defined a = (N − n + 1)/N .) With n = 24 and N = 365 we find that

α ≈ 0.48.

Example 2.7.6. Let A, B, C be three events. Assume that P (A) = 0.6, P (B) = 0.6, P (C) =

0.7, P (A ∩ B) = 0.3, P (A ∩ C) = 0.4, P (B ∩ C) = 0.4, and P (A ∪ B ∪ C) = 1. Find

P (A ∩ B ∩ C).

We know that (draw a picture)

P (A ∪ B ∪ C) = P (A) + P (B) + P (C) − P (A ∩ B) − P (A ∩ C) − P (B ∩ C) + P (A ∩ B ∩ C).



Substituting the known values, we find

1 = 0.6 + 0.6 + 0.7 − 0.3 − 0.4 − 0.4 + P (A ∩ B ∩ C),

so that

P (A ∩ B ∩ C) = 0.2.

Example 2.7.7. Let Ω = {1, 2, 3, 4} and let F = 2Ω be the collection of all the subsets of

Ω. Give an example of a collection A of subsets of Ω and probability measures P1 and P2

such that

(i). P1 (A) = P2 (A), ∀A ∈ A.

(ii). The σ-field generated by A is F. (This means that F is the smallest σ-field of Ω

that contains A.)

(iii). P1 and P2 are not the same.

Let A = {{1, 2}, {2, 4}}.

Assign probabilities P1 ({1}) = 1/8, P1 ({2}) = 1/8, P1 ({3}) = 3/8, P1 ({4}) = 3/8; and
P2 ({1}) = 1/12, P2 ({2}) = 2/12, P2 ({3}) = 5/12, P2 ({4}) = 4/12.

Note that P1 and P2 are not the same, thus satisfying (iii).

P1 ({1, 2}) = P1 ({1}) + P1 ({2}) = 1/8 + 1/8 = 1/4
P2 ({1, 2}) = P2 ({1}) + P2 ({2}) = 1/12 + 2/12 = 1/4

Hence P1 ({1, 2}) = P2 ({1, 2}).

P1 ({2, 4}) = P1 ({2}) + P1 ({4}) = 1/8 + 3/8 = 1/2
P2 ({2, 4}) = P2 ({2}) + P2 ({4}) = 2/12 + 4/12 = 1/2

Hence P1 ({2, 4}) = P2 ({2, 4}).

Thus P1 (A) = P2 (A), ∀A ∈ A, thus satisfying (i).

To check (ii), we only need to check that ∀k ∈ Ω, {k} can be formed by set operations
on sets in A ∪ {∅, Ω}. Then any other set in F can be formed by set operations on the {k}.

{1} = {1, 2} ∩ {2, 4}C



{2} = {1, 2} ∩ {2, 4}

{3} = {1, 2}C ∩ {2, 4}C

{4} = {1, 2}C ∩ {2, 4}.

Example 2.7.8. Choose a number randomly between 1 and 999999 inclusive, all choices

being equally likely. What is the probability that the digits sum up to 23? For example, the

number 7646 is between 1 and 999999 and its digits sum up to 23 (7+6+4+6=23).

Numbers between 1 and 999999 inclusive have 6 digits for which each digit has a value in

{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. We are interested in finding the numbers x1 +x2 +x3 +x4 +x5 +x6 =

23 where xi represents the ith digit.

First consider all nonnegative xi where each digit can range from 0 to 23, the number
of ways to distribute 23 amongst the xi ’s is C(28, 5).

But we need to restrict the digits xi < 10. So we need to subtract the number of ways

to distribute 23 amongst the xi ’s when xk ≥ 10 for some k. Specifically, when xk ≥ 10 we

can express it as xk = 10 + yk . For all other j ≠ k write yj = xj . The number of ways to

arrange 23 amongst xi when some xk ≥ 10 is the same as the number of ways to arrange
yi so that Σ_{i=1}^6 yi = 23 − 10, which is C(18, 5). There are 6 possible ways for some xk ≥ 10, so there
are a total of 6 C(18, 5) ways for some digit to be greater than or equal to 10, as we can see by
using the stars and bars method (see 2.6.1).
using the stars and bars method (see 2.6.1).

However, the above counts events multiple times. For instance, x1 = x2 = 10 is counted

both when x1 ≥ 10 and when x2 ≥ 10. We need to account for these events that are counted

multiple times. We can consider when two digits are greater than or equal to 10: xj ≥ 10

and xk ≥ 10 when j ≠ k. Let xj = 10 + yj and xk = 10 + yk and xi = yi ∀i ≠ j, k. Then the

number of ways to distribute 23 amongst xi when there are 2 greater than or equal to 10 is
equivalent to the number of ways to distribute yi when Σ_{i=1}^6 yi = 23 − 10 − 10 = 3. There
are C(8, 5) ways to distribute these yi and there are C(6, 2) ways to choose the possible two digits

that are greater than or equal to 10.



We are interested in when the sum of xi ’s is equal to 23. So we can have at most 2 xi ’s

greater than or equal to 10. So we are done.


Thus there are C(28, 5) − 6 C(18, 5) + C(6, 2) C(8, 5) numbers between 1 through 999999 whose digits
sum up to 23. The probability that a number randomly chosen has digits that sum up to
23 is [C(28, 5) − 6 C(18, 5) + C(6, 2) C(8, 5)]/999999.
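A brute-force check of this inclusion-exclusion count (a minimal Python sketch):

from math import comb

brute = sum(1 for n in range(1, 1_000_000) if sum(map(int, str(n))) == 23)
formula = comb(28, 5) - 6 * comb(18, 5) + comb(6, 2) * comb(8, 5)
print(brute, formula, formula / 999999)  # counts agree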

P
Example 2.7.9. Let A1 , A2 , . . . , An , n ≥ 2 be events. Prove that P (∪ni=1 Ai ) = i P (Ai ) −
P P n+1 P (A ∩ A ∩ . . . ∩ A ).
i<j P (Ai ∩ Aj ) + i<j<k P (Ai ∩ Aj ∩ Ak ) − · · · + (−1) 1 2 n

We prove the result by induction on n.

First consider the base case when n = 2. P (A1 ∪ A2 ) = P (A1 ) + P (A2 ) − P (A1 ∩ A2 ).

Assume the result holds true for n, prove the result for n + 1.

P (∪n+1 n n
i=1 Ai ) = P (∪i=1 Ai ) + P (An+1 ) − P ((∪i=1 Ai ) ∩ An+1 )

= P (∪ni=1 Ai ) + P (An+1 ) − P (∪ni=1 (Ai ∩ An+1 ))


X X X
= P (Ai ) − P (Ai ∩ Aj ) + P (Ai ∩ Aj ∩ Ak ) − . . .
i i<j i<j<k
X
n+1
+ (−1) P (A1 ∩ A2 ∩ . . . ∩ An ) + P (An+1 ) − ( P (Ai ∩ An+1 )
i
X X
− P (Ai ∩ Aj ∩ An+1 ) + P (Ai ∩ Aj ∩ Ak ∩ An+1 ) − . . .
i<j i<j<k

+ (−1)n+1 P (A1 ∩ A2 ∩ . . . ∩ An ∩ An+1 ))


X X X
= P (Ai ) − P (Ai ∩ Aj ) + P (Ai ∩ Aj ∩ Ak ) − . . .
i i<j i<j<k
n+2
+ (−1) P (A1 ∩ A2 ∩ . . . ∩ An+1 )

Example 2.7.10. Let {An , n ≥ 1} be a collection of events in some probability space


P
{Ω, F, P }. Assume that ∞n=1 P (An ) < ∞. Show that the probability that infinitely many

of those events occur is zero. This result is known as the Borel-Cantelli Lemma.

To prove this result we must write the event “infinitely many of the events An occur” as B := ∩_{n≥1} ∪_{m≥n} Am . For every n, P (B) ≤ P (∪_{m≥n} Am ) ≤ Σ_{m≥n} P (Am ), and the right-hand side tends to 0 as n → ∞ because it is the tail of a convergent series. Hence P (B) = 0.
1.4 Functions of a random variable
Recall that a random variable X on a probability space (Ω, F, P ) is a function mapping Ω to the
real line R , satisfying the condition {ω : X(ω) ≤ a} ∈ F for all a ∈ R. Suppose g is a function
mapping R to R that is not too bizarre. Specifically, suppose for any constant c that {x : g(x) ≤ c}
is a Borel subset of R. Let Y (ω) = g(X(ω)). Then Y maps Ω to R and Y is a random variable.
See Figure 1.6. We write Y = g(X).

Figure 1.6: A function of a random variable as a composition of mappings.

Often we’d like to compute the distribution of Y from knowledge of g and the distribution of
X. In case X is a continuous random variable with known distribution, the following three step
procedure works well:
(1) Examine the ranges of possible values of X and Y . Sketch the function g.
(2) Find the CDF of Y , using FY (c) = P {Y ≤ c} = P {g(X) ≤ c}. The idea is to express the
event {g(X) ≤ c} as {X ∈ A} for some set A depending on c.
(3) If FY has a piecewise continuous derivative, and if the pdf fY is desired, differentiate FY .
If instead X is a discrete random variable then step 1 should be followed. After that the pmf of Y
can be found from the pmf of X using
pY (y) = P {g(X) = y} = Σ_{x: g(x)=y} pX (x)

Example 1.4 Suppose X is a N (µ = 2, σ 2 = 3) random variable (see Section 1.6 for the definition)
and Y = X 2 . Let us describe the density of Y . Note that Y = g(X) where g(x) = x2 . The support
of the distribution of X is the whole real line, and the range of g over this support is R+ . Next we
find the CDF, FY . Since P {Y ≥ 0} = 1, FY (c) = 0 for c < 0. For c ≥ 0,
FY (c) = P {X² ≤ c} = P {−√c ≤ X ≤ √c}
       = P {(−√c − 2)/√3 ≤ (X − 2)/√3 ≤ (√c − 2)/√3}
       = Φ((√c − 2)/√3) − Φ((−√c − 2)/√3)

Differentiate with respect to c, using the chain rule and the fact Φ′(s) = (1/√(2π)) exp(−s²/2), to obtain

fY (c) = (1/√(24πc)) {exp(−[(√c − 2)/√6]²) + exp(−[(−√c − 2)/√6]²)} if c ≥ 0, and 0 if c < 0.   (1.7)

Example 1.5 Suppose a vehicle is traveling in a straight line at speed a, and that a random
direction is selected, subtending an angle Θ from the direction of travel which is uniformly dis-
tributed over the interval [0, π]. See Figure 1.7. Then the effective speed of the vehicle in the


Figure 1.7: Direction of travel and a random direction.

random direction is B = a cos(Θ). Let us find the pdf of B.


The range of a cos(Θ) as θ ranges over [0, π] is the interval [−a, a]. Therefore, FB (c) = 0 for
c ≤ −a and FB (c) = 1 for c ≥ a. Let now −a < c < a. Then, because cos is monotone nonincreasing
on the interval [0, π],
FB (c) = P {a cos(Θ) ≤ c} = P {cos(Θ) ≤ c/a}
       = P {Θ ≥ cos^{−1}(c/a)}
       = 1 − cos^{−1}(c/a)/π.

Therefore, because cos^{−1}(y) has derivative −(1 − y²)^{−1/2},

fB (c) = 1/(π √(a² − c²)) for |c| < a, and 0 for |c| > a.

A sketch of the density is given in Figure 1.8.

Figure 1.8: The pdf of the effective speed in a uniformly distributed direction.

Figure 1.9: A horizontal line, a fixed point at unit distance, and a line through the point with
random direction.

Example 1.6 Suppose Y = tan(Θ), as illustrated in Figure 1.9, where Θ is uniformly distributed
over the interval (−π/2, π/2). Let us find the pdf of Y . The function tan(θ) increases from −∞ to ∞
as θ ranges over the interval (−π/2, π/2). For any real c,

FY (c) = P {Y ≤ c} = P {tan(Θ) ≤ c}
       = P {Θ ≤ tan^{−1}(c)} = (tan^{−1}(c) + π/2)/π

Differentiating the CDF with respect to c yields that Y has the Cauchy pdf:

fY (c) = 1/(π(1 + c²)),   −∞ < c < ∞

Example 1.7 Given an angle θ expressed in radians, let (θ mod 2π) denote the equivalent angle
in the interval [0, 2π]. Thus, (θ mod 2π) is equal to θ + 2πn, where the integer n is such that
0 ≤ θ + 2πn < 2π.
Let Θ be uniformly distributed over [0, 2π], let h be a constant, and let

Θ̃ = (Θ + h mod 2π)

Let us find the distribution of Θ̃.


Clearly Θ̃ takes values in the interval [0, 2π], so fix c with 0 ≤ c < 2π and seek to find
P {Θ̃ ≤ c}. Let A denote the interval [h, h + 2π]. Thus, Θ + h is uniformly distributed over A. Let
B = ∪_n [2πn, 2πn + c]. Thus Θ̃ ≤ c if and only if Θ + h ∈ B. Therefore,

P {Θ̃ ≤ c} = ∫_{A∩B} (1/(2π)) dθ

By sketching the set B, it is easy to see that A ∩ B is either a single interval of length c, or the
union of two intervals with lengths adding to c. Therefore, P {Θ̃ ≤ c} = c/(2π), so that Θ̃ is itself
uniformly distributed over [0, 2π].

Example 1.8 Let X be an exponentially distributed random variable with parameter λ. Let
Y = bXc, which is the integer part of X, and let R = X − bXc, which is the remainder. We shall
describe the distributions of Y and R.

Proposition 1.10.1 Under the above assumptions, Y is a continuous type random vector and for
y in the range of g:

fY (y) = fX (x) / |(∂y/∂x)(x)| = fX (x) |(∂x/∂y)(y)|

Example 1.10 Let U , V have the joint pdf:

fU V (u, v) = u + v for 0 ≤ u, v ≤ 1, and 0 else,

and let X = U² and Y = U (1 + V ). Let’s find the pdf fXY . The vector (U, V ) in the u − v plane is
transformed into the vector (X, Y ) in the x − y plane under a mapping g that maps u, v to x = u²
and y = u(1 + v). The image in the x − y plane of the square [0, 1]² in the u − v plane is the set A
given by

A = {(x, y) : 0 ≤ x ≤ 1, and √x ≤ y ≤ 2√x}

See Figure 1.12. The mapping from the square is one to one, for if (x, y) ∈ A then (u, v) can be

Figure 1.12: Transformation from the u − v plane to the x − y plane.



recovered by u = √x and v = y/√x − 1. The Jacobian determinant is

det [[∂x/∂u, ∂x/∂v], [∂y/∂u, ∂y/∂v]] = det [[2u, 0], [1 + v, u]] = 2u²

Therefore, using the transformation formula and expressing u and v in terms of x and y yields

fXY (x, y) = (√x + (y/√x − 1))/(2x) if (x, y) ∈ A, and 0 else.

Example 1.11 Let U and V be independent continuous type random variables. Let X = U + V
and Y = V . Let us find the joint density of X, Y and the marginal density of X. The mapping
g : [u ; v] → [x ; y] = [u + v ; v]

is invertible, with inverse given by u = x − y and v = y. The absolute value of the Jacobian
determinant is given by

det [[∂x/∂u, ∂x/∂v], [∂y/∂u, ∂y/∂v]] = det [[1, 1], [0, 1]] = 1

Therefore

fXY (x, y) = fU V (u, v) = fU (x − y)fV (y)

The marginal density of X is given by


fX (x) = ∫_{−∞}^∞ fXY (x, y) dy = ∫_{−∞}^∞ fU (x − y)fV (y) dy

That is fX = fU ∗ fV .
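A discretized illustration of this convolution identity (a minimal Python sketch, assuming U, V ~ Uniform[0, 1] as an example):

import numpy as np

dx = 0.001
x = np.arange(0.0, 1.0, dx)
f_u = np.ones_like(x)              # density of U on [0, 1]
f_v = np.ones_like(x)              # density of V on [0, 1]
f_x = np.convolve(f_u, f_v) * dx   # triangular density of U + V on [0, 2]

idx = int(round(1.0 / dx))
print(f_x[idx])                    # ~1.0, the peak of the triangle
print(np.trapz(f_x, dx=dx))        # ~1.0, integrates to one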

Example 1.12 Let X1 and X2 be independent N (0, σ 2 ) random variables, and let X = (X1 , X2 )T
denote the two-dimensional random vector with coordinates X1 and X2 . Any point of x ∈ R2 can
be represented in polar coordinates by the vector (r, θ)^T such that r = ||x|| = (x1² + x2²)^{1/2} and
θ = tan^{−1}(x2 /x1 ) with values r ≥ 0 and 0 ≤ θ < 2π. The inverse of this mapping is given by

x1 = r cos(θ)
x2 = r sin(θ)

We endeavor to find the pdf of the random vector (R, Θ)T , the polar coordinates of X. The pdf of
X is given by
fX (x) = fX1 (x1 )fX2 (x2 ) = (1/(2πσ²)) e^{−r²/(2σ²)}

The range of the mapping is the set r > 0 and 0 < θ ≤ 2π. On the range,
|∂x/∂(r, θ)| = det [[∂x1 /∂r, ∂x1 /∂θ], [∂x2 /∂r, ∂x2 /∂θ]] = det [[cos(θ), −r sin(θ)], [sin(θ), r cos(θ)]] = r

Therefore for (r, θ)T in the range of the mapping,



fR,Θ (r, θ) = fX (x) |∂x/∂(r, θ)| = (r/(2πσ²)) e^{−r²/(2σ²)}

Of course fR,Θ (r, θ) = 0 off the range of the mapping. The joint density factors into a function of
r and a function of θ, so R and Θ are independent. Moreover, R has the Rayleigh density with
parameter σ 2 , and Θ is uniformly distributed on [0, 2π].

ELEG–636 Homework #1, Spring 2003

5. Using the moment generating function, show that the linear transformation of a Gaussian random vector
is also Gaussian.

Proof:

Let x be an n × 1 real Gaussian random vector with mean m_x and covariance Σ_x ; its density is

f_x (x) = (1/((2π)^{n/2} |Σ_x |^{1/2})) exp{−(1/2)(x − m_x )^T Σ_x^{−1} (x − m_x )}.

Let s be an n × 1 real vector; the moment generating function of x is

Φ_x (s) = E[e^{s^T x}] = exp{m_x^T s + (1/2) s^T Σ_x s}.

Let y = A x be a linear transform of x. The moment generating function of y is

Φ_y (s) = E[e^{s^T y}] = E[e^{s^T A x}] = E[e^{(A^T s)^T x}].

Using the moment generating function of x, we have

Φ_y (s) = exp{m_x^T A^T s + (1/2)(A^T s)^T Σ_x (A^T s)} = exp{m_y^T s + (1/2) s^T Σ_y s},

with m_y = A m_x and Σ_y = A Σ_x A^T , which has the same form as Φ_x (s). So y is also Gaussian.
6. Let x_1 , x_2 , x_3 , x_4 be four IID random variables with exponential distribution with λ = 1.
Define y_n = x_1 + · · · + x_n for 2 ≤ n ≤ 4.

(a) Determine and plot the pdf of y_2 .
(b) Determine and plot the pdf of y_3 .
(c) Determine and plot the pdf of y_4 .
(d) Compare the pdf of y_4 with that of the Gaussian density.


Answer: Let

f (x) = e^{−x} U (x).

The characteristic function of x is

Φ(ω) = 1/(1 − jω).

Since x_1 , . . . , x_n are i.i.d., the characteristic function of y_n = x_1 + · · · + x_n is

Φ_{y_n}(ω) = [Φ(ω)]^n = (1/(1 − jω))^n ,

whose inverse Fourier transform yields the pdf of y_n :

f_{y_n}(x) = (x^{n−1}/(n − 1)!) e^{−x} U (x).

This expression holds for any positive integer n, including n = 2, 3, 4.

7. The mean m and covariance C of a Gaussian random vector are given. Plot the 1σ, 2σ, and 3σ
concentration ellipses representing the contours of the density function in the x_1–x_2 plane.
Hint: compute the 1σ ellipse from the eigenvalues of C, and then rotate and translate each point
using the transformation x = Q y + m, where Q is the eigenvector matrix of C.

Answer: The density is

f_x (x) = (1/(2π |C|^{1/2})) exp{−(1/2)(x − m)^T C^{−1} (x − m)},

so the contours of constant density are the ellipses (x − m)^T C^{−1} (x − m) = c². Writing
C = Q Λ Q^T with Λ = diag(λ_1 , λ_2 ) and substituting x = Q y + m, the quadratic form becomes

y_1²/λ_1 + y_2²/λ_2 = c²,

an ellipse with semi-axes c√λ_1 and c√λ_2 along the rotated axes; the substitution x = Q y + m is a
rotation of the original axes followed by a translation to the mean. Taking c = 1, 2, 3 gives the 1σ,
2σ, and 3σ concentration ellipses. When the level c is chosen differently the figure changes in scale,
but the orientation of the ellipses is the same.
ELEG–636 Test #1, March 25, 1999 NAME:

1. (35 pts) Let y = min{|x1 |, x2 } where x1 and x2 are i.i.d. inputs with cdf and pdf Fx (·) and fx (·),
respectively. For simplicity, assume fx (·) is symmetric about 0, i.e., fx (x) = fx (−x). Determine the
cdf and pdf of y in terms of the distribution of the inputs. Plot the pdf of y for fx (·) uniform on [−1, 1].
Note that

F|x| (x) = Fx (x) − Fx (−x) for x ≥ 0, and 0 otherwise.

Also

Fmin{x1 ,x2 } (x) = 1 − P {x1 ≥ x}P {x2 ≥ x} = 1 − (1 − Fx1 (x))(1 − Fx2 (x))

Thus,

Fy (y) = 1 − (1 − F|x1 | (y))(1 − Fx2 (y))
       = 1 − (1 − Fx (y) + Fx (−y))(1 − Fx (y)) for y ≥ 0, and 1 − (1 − Fx (y)) otherwise
       = 2Fx (y) − Fx (−y) − Fx²(y) + Fx (y)Fx (−y) for y ≥ 0, and Fx (y) otherwise

If fx (·) is symmetric about 0, then fx (x) = fx (−x) and Fx (x) = 1 − Fx (−x), giving

Fy (y) = 2Fx (y) − (1 − Fx (y)) − Fx²(y) + Fx (y)(1 − Fx (y)) for y ≥ 0, and Fx (y) otherwise
       = 4Fx (y) − 2Fx²(y) − 1 for y ≥ 0, and Fx (y) otherwise

Taking the derivative,

fy (y) = 4fx (y) − 4fx (y)Fx (y) for y ≥ 0, and fx (y) otherwise
       = 4fx (y)(1 − Fx (y)) for y ≥ 0, and fx (y) otherwise

2. (35 pts) Consider the observed samples

yi = θ + xi

for i = 1, 2, . . . , N . We wish to estimate the location parameter θ using a maximum likelihood estimator
operating on the observations y1 , y2 , . . . , yN . Consider two cases:

• (10 pts) The xi terms are i.i.d. with distribution xi ∼ N (0, σ²), for i = 1, 2, . . . , N .

• (10 pts) The xi terms are independent with distribution xi ∼ N (0, σi²), for i = 1, 2, . . . , N .

• (15 pts) Are the estimates unbiased? What is the variance of the estimates? Are they consistent?

f_{y|θ}(y | θ) = Π_{i=1}^N (1/√(2πσ²)) e^{−(yi −θ)²/(2σ²)} = (1/(2πσ²))^{N/2} e^{−Σ_{i=1}^N (yi −θ)²/(2σ²)}

Thus,

θ̂ML = arg max_θ − Σ_{i=1}^N (yi − θ)²/(2σ²)

and taking the derivative,

Σ_{i=1}^N (yi − θ̂ML )/σ² = 0  ⇒  θ̂ML = (1/N ) Σ_{i=1}^N yi

For the case of changing variances,

Σ_{i=1}^N (yi − θ̂ML )/σi² = 0  ⇒  θ̂ML = (Σ_{i=1}^N yi /σi²)/(Σ_{i=1}^N 1/σi²) = (Σ_{i=1}^N wi yi )/(Σ_{i=1}^N wi ),

which is a normalized filter, where wi = 1/σi² for i = 1, 2, . . . , N .

For each estimate E{θ̂ML } = θ, and they are thus unbiased.

var(θ̂ML )[N ] = E{(θ̂ML − θ)²} = E{((Σ wi yi − Σ wi θ)/Σ wi )²} = E{((Σ wi xi )/Σ wi )²}

             = Σ_i Σ_j wi wj E{xi xj }/(Σ wi )² = (Σ wi² σi²)/(Σ wi )² = (Σ wi )/(Σ wi )² = 1/(Σ wi )

Since wi > 0, we have var(θ̂ML )[N + 1] < var(θ̂ML )[N ]. This, combined with the fact that the
estimator is unbiased, means the estimate is consistent.
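A simulation check of the weighted estimator’s variance formula (a minimal Python sketch; θ and the σi below are arbitrary illustrative values):

import numpy as np

rng = np.random.default_rng(8)
theta = 2.0
sigma = np.array([0.5, 1.0, 2.0, 4.0])
w = 1.0 / sigma**2

trials = 200_000
y = theta + rng.normal(0.0, sigma, size=(trials, len(sigma)))
est = (y * w).sum(axis=1) / w.sum()
print(est.mean(), theta)          # unbiased
print(est.var(), 1.0 / w.sum())   # matches 1/sum(w_i)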

ELEG–636 Test #1, March 23, 2000 NAME:

1. (30 pts) The random variables x and y are independent and uniformly distributed on
the interval [0, 1]. Determine the conditional distribution f_{r|A}(r|A) where r = √(x² + y²) and
A = {r ≤ 1}.
Answer:

Examine the joint density f_{x,y}(x, y) in the xy plane. Since x and y are independent,

f_{x,y}(x, y) = fx (x)fy (y) = 1 for 0 ≤ x, y ≤ 1

This defines a uniform density over the region 0 ≤ x, y ≤ 1 in the first quadrant of the xy plane.
Note that r = √(x² + y²) defines an arc in the first quadrant. Also, if 0 ≤ r ≤ 1 the area under
the uniform density up to radius r is simply given by

F_r (r) = P r[√(x² + y²) ≤ r] = ∫∫_{x²+y²≤r²} f_{x,y}(x, y) dx dy = ∫∫_{x²+y²≤r²} 1 dx dy = πr²/4   for 0 ≤ r ≤ 1

Then for A = {r ≤ 1},

F_{r|A}(r|A) = F_{r,A}(r, A)/P r[A] = F_r (r)/F_r (1) = (πr²/4)/(π/4) = r²   for 0 ≤ r ≤ 1

Thus, f_{r|A}(r|A) = 2r for 0 ≤ r ≤ 1 and 0 elsewhere.
ELEG–636 Test #1, April 5, 2001 NAME:

1. (35 pts) Probability questions:

• (10 pts) Let x be a random variable and set y = x². Derive a simplified expression for
  f (y|x ≥ 0).

• (15 pts) Suppose now that y = a sin(x + θ), where θ and a > 0 are constants. Determine
  fy (y).

• (10 pts) Suppose further that x is uniformly distributed over [−π, π]. Determine fy (y)
  for this special case.

Answer: Clearly, F (y|x ≥ 0) = 0 for y < 0. Then for y ≥ 0,

F (y|x ≥ 0) = P r(Y ≤ y, X ≥ 0)/P r(X ≥ 0) = ((Fx (√y) − Fx (0))/(1 − Fx (0))) U (y).

Thus

f (y|x ≥ 0) = (fx (√y)/(2√y (1 − Fx (0)))) U (y).

Now for y = g(x) = a sin(x + θ) we have, assuming |y| ≤ a, infinitely many solutions

xn = arcsin(y/a) − θ,   n = 0, ±1, ±2, . . .

Also,

g′(xn ) = a cos(xn + θ).

Note that g²(xn ) + g′²(xn ) = a² cos²(xn + θ) + a² sin²(xn + θ) = a². Or,

g′(xn ) = √(a² − g²(xn )) = √(a² − y²).

Thus

fy (y) = Σ_n fx (xn )/|g′(xn )| = (1/√(a² − y²)) Σ_n fx (xn ),   |y| ≤ a.

If x ∼ U (−π, π), then exactly two of the solutions xn fall in (−π, π), each with fx (xn ) = 1/(2π), and

fy (y) = 1/(π √(a² − y²)),   |y| ≤ a.

ELEG–636 Test #1, April 14, 2003 NAME:

1. (30 pts) Probability questions:

• (15 pts) Let x be a random variable with density fx (x) given below. Let y = g(x) be
  the shown function. Determine fy (y) and Fy (y).

• (15 pts) Let x and y be independent, zero mean, unit variance Gaussian random variables.
  Define

  w = x² + y²   and   z = x².

  Determine f_{w,z}(w, z). Are w and z independent?

Answer: Note that

fx (x) = (1/4)x + (1/2)δ(x − 0.5) for 0 ≤ x < 2, and 0 otherwise.

Thus

Fx (x) = 0 for x < 0;  (1/8)x² + (1/2)u(x − 0.5) for 0 ≤ x < 2;  1 for x ≥ 2.

Since x = √y for 0 ≤ y ≤ 1,

Fy (y) = 0 for y < 0;  Fx (√y) for 0 ≤ y < 1;  1 for y ≥ 1
       = 0 for y < 0;  (1/8)y + (1/2)u(√y − 0.5) for 0 ≤ y < 1;  1 for y ≥ 1
       = 0 for y < 0;  (1/8)y + (1/2)u(y − 0.25) for 0 ≤ y < 1;  1 for y ≥ 1.

Taking the derivative yields

fy (y) = 1/8 + (1/2)δ(y − 0.25) + (3/8)δ(y − 1) for 0 ≤ y ≤ 1, and 0 otherwise.

The Jacobian of the transformation is

J(x, y) = det [[∂(x² + y²)/∂x, ∂(x² + y²)/∂y], [∂(x²)/∂x, ∂(x²)/∂y]] = det [[2x, 2y], [2x, 0]] = |4xy|

The inverse transformation is easily seen to be x = ±√z and y = ±√(w − x²) = ±√(w − z),
w ≥ z. Thus,

f_{w,z}(w, z) = f_{x,y}(x, y)/(4|xy|)|_{x=√z, y=√(w−z)} + f_{x,y}(x, y)/(4|xy|)|_{x=√z, y=−√(w−z)}
             + f_{x,y}(x, y)/(4|xy|)|_{x=−√z, y=√(w−z)} + f_{x,y}(x, y)/(4|xy|)|_{x=−√z, y=−√(w−z)}   (1)

Since x and y are independent,

f_{x,y}(x, y) = (1/(2π)) e^{−(x²+y²)/2}

Thus

f_{w,z}(w, z) = (e^{−w/2}/(2π √(z(w − z)))) u(w)u(z)u(w − z)

where the last three terms indicate w, z ≥ 0 and w ≥ z.

ELEG–636 Midterm, April 7, 2009 NAME:

1. [30 pts] Probability:

(a) [15 pts] Prove the Bienayme inequality, which is a generalization of the Tchebycheff
    inequality,

    P r{|X − a| ≥ ε} ≤ E{|X − a|^n }/ε^n

    for arbitrary a and distribution of X.
(b) [15 pts] Consider the uniform distribution over [−1, 1].
     i. [10 pts] Determine the moment generating function for this distribution.
    ii. [5 pts] Use the moment generating function to generate a simple expression for
        the k th moment, m′k .

Answer:

(a)

E{|x − a|^n } = ∫_{−∞}^∞ |x − a|^n fx (x) dx ≥ ∫_{|x−a|≥ε} |x − a|^n fx (x) dx ≥ ε^n ∫_{|x−a|≥ε} fx (x) dx

             = ε^n P r{|x − a| ≥ ε}  ⇒  P r{|X − a| ≥ ε} ≤ E{|X − a|^n }/ε^n

(b)

Φ(s) = (1/2) ∫_{−1}^1 e^{sx} dx = (1/(2s))(e^s − e^{−s}) for s ≠ 0, and Φ(0) = 1

⇒ E{x^k } = d^k Φ(s)/ds^k |_{s=0}

E{x} = dΦ(s)/ds|_{s=0} = [(1/(2s))(e^s + e^{−s}) − (1/(2s²))(e^s − e^{−s})]_{s=0} = 0,

evaluating the limit as s → 0. Repeating the differentiation and limit (l'Hôpital's rule) process
gives the higher moments. The analytical solution is simpler:

E{x^k } = (1/2) ∫_{−1}^1 x^k dx = (1 − (−1)^{k+1})/(2(k + 1)) = 0 for k = 1, 3, 5, . . . and 1/(k + 1) for k = 0, 2, 4, . . .


3. [35 pts] Let Z = X + N , where X and N are independent with distributions N ∼ N (0, σ_N²)
and fX (x) = (1/2)δ(x − 2) + (1/2)δ(x + 2).

(a) [15 pts] Determine the MAP, MS, MAE, and ML estimates for X in terms of Z.
(b) [10 pts] Determine the bias of each estimate, i.e., determine whether or not each
estimate is biased.
(c) [10 pts] Determine the variances of the estimates.

Answer:

(a) Since X and N are independent, fZ (z) = fX (z) ∗ fN (z) = (1/2)N (−2, σ_N²) + (1/2)N (2, σ_N²).
    Also

    f_{Z|X}(z|x) = N (x, σ_N²)

    x̂ML = arg max_x f_{Z|X}(z|x) = z

    f_{X|Z}(x|z) = f_{Z|X}(z|x)fX (x)/fZ (z) = N (x, σ_N²)(δ(x − 2) + δ(x + 2))/(2fZ (z))

    x̂MAP = arg max_x f_{X|Z}(x|z) = 2 for z > 0, and −2 for z < 0.

    x̂MS = ∫_{−∞}^∞ x f_{X|Z}(x|z) dx = (1/fZ (z)) ∫_{−∞}^∞ x f_{Z|X}(z|x)fX (x) dx

         = (2N (2, σ_N²)|_{x=z} − 2N (−2, σ_N²)|_{x=z})/(2fZ (z))

         = 2 [N (2, σ_N²)|_{x=z} − N (−2, σ_N²)|_{x=z}] / [N (2, σ_N²)|_{x=z} + N (−2, σ_N²)|_{x=z}]

    1/2 = ∫_{−∞}^{x̂MAE} f_{X|Z}(x|z) dx = (1/fZ (z)) ∫_{−∞}^{x̂MAE} f_{Z|X}(z|x)fX (x) dx

    ⇒ ∫_{−∞}^{x̂MAE} f_{Z|X}(z|x)fX (x) dx = (1/4)[N (2, σ_N²)|_{x=z} + N (−2, σ_N²)|_{x=z}]

    ⇒ ∫_{−∞}^{x̂MAE} N (x, σ_N²)(δ(x − 2) + δ(x + 2)) dx = (1/2)[N (2, σ_N²)|_{x=z} + N (−2, σ_N²)|_{x=z}]

Note the LHS is not continuous ⇒ x̂M AE not well defined.


(b) Note fZ (z) is symmetric about 0 ⇒ E{x̂M L } = E{z} = 0 ⇒ x̂M L is unbiased
(E{x} = 0). Similarly, E{x̂M AP } = 2P r{z > 0} − 2P r{z < 0} = 0 ⇒ x̂M AP is
unbiased. Also, x̂M S is an odd function (about 0) of z ⇒ E{x̂M S } = 0 ⇒ x̂M S is
unbiased.
(c) σ_ML² = σ_Z² = σ_X² + σ_N² = 4 + σ_N². Also, σ_MAP² = 4 (since x̂MAP = ±2). Determining
    σ_MS² is not trivial, and will not be considered.

3
ELEG–636 Homework #1, Spring 2009

1. A token is placed at the origin on a piece of graph paper. A coin biased to heads is given, P (H) =
2/3. If the result of a toss is heads, the token is moved one unit to the right, and if it is a tail the
token is moved one unit to the left. Repeating this 1200 times, what is the probability that the token
is on unit N , where 350 ≤ N ≤ 450? Simulate the system and plot the histogram using 10,000
realizations.
Solution:
Let x = # of heads. Then 350 ≤ x − (1200 − x) ≤ 450 ⇒ 775 ≤ x ≤ 825 and
P r(775 ≤ x ≤ 825) = Σ_{i=775}^{825} C(1200, i) (2/3)^i (1/3)^{1200−i},

which can be approximated using the DeMoivre–Laplace approximation

Σ_{i=i1}^{i2} C(n, i) p^i (1 − p)^{n−i} ≈ Φ((i2 − np)/√(np(1 − p))) − Φ((i1 − np)/√(np(1 − p)))

where Φ(x) = ∫_{−∞}^x (1/√(2π)) e^{−t²/2} dt.
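A sketch of the requested simulation (a minimal Python example; the histogram plotting step is omitted):

import numpy as np

# 1200 biased flips per realization; position = (#heads - #tails).
rng = np.random.default_rng(9)
realizations = 10_000
heads = rng.binomial(1200, 2.0 / 3.0, size=realizations)
position = heads - (1200 - heads)
prob = np.mean((position >= 350) & (position <= 450))
print(prob)  # compare with the DeMoivre-Laplace approximation above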

2. Random variable X is characterized by cdf FX (x) = (1 − e−x )U (x) and event C is defined by
C = {0.5 < X ≤ 1}. Determine and plot FX (x|C) and fX (x|C).
Solution: Evaluating P r(X ≤ x, 0.5 < X ≤ 1) for the three cases:

x < 0.5:      P r(X ≤ x, 0.5 < X ≤ 1) = 0
0.5 ≤ x ≤ 1:  P r(X ≤ x, 0.5 < X ≤ 1) = FX (x) − FX (0.5) = e^{−0.5} − e^{−x}
x > 1:        P r(X ≤ x, 0.5 < X ≤ 1) = FX (1) − FX (0.5) = e^{−0.5} − e^{−1} = 0.2386

Also, P r(C) = FX (1) − FX (0.5) = e^{−0.5} − e^{−1} = 0.2386. Thus

FX (x|C) = P r(X ≤ x, 0.5 < X ≤ 1)/P r(0.5 < X ≤ 1) = 0 for x < 0.5; (e^{−0.5} − e^{−x})/0.2386 for 0.5 ≤ x ≤ 1; 1 for x > 1,

and fX (x|C) = dFX (x|C)/dx = e^{−x}/0.2386 for 0.5 ≤ x ≤ 1, and 0 elsewhere.

3. Prove that the characteristic function for the univariate Gaussian distribution, N (η, σ 2 ), is

φ(ω) = exp{jωη − ω²σ²/2}

Next determine the moment generating function and determine the first four moments.


Solution:

φ(ω) = ∫_{−∞}^∞ (1/(√(2π)σ)) exp{−(x − η)²/(2σ²)} e^{jωx} dx

     = ∫_{−∞}^∞ (1/(√(2π)σ)) exp{−(x² − 2ηx + η² − 2jωxσ²)/(2σ²)} dx

     = ∫_{−∞}^∞ (1/(√(2π)σ)) exp{−(x − (η + jωσ²))²/(2σ²)} exp{((η + jωσ²)² − η²)/(2σ²)} dx

     = exp{((η + jωσ²)² − η²)/(2σ²)} ∫_{−∞}^∞ (1/(√(2π)σ)) exp{−(x − (η + jωσ²))²/(2σ²)} dx

     = exp{((η + jωσ²)² − η²)/(2σ²)},

which reduces to φ(ω) = exp{jωη − ω²σ²/2}. The moment generating function is simply

Φ(s) = exp{sη + s²σ²/2}

and mk = d^k Φ(s)/ds^k |_{s=0}, which yields

m1 = η                 m2 = σ² + η²
m3 = 3ησ² + η³         m4 = 3σ⁴ + 6σ²η² + η⁴

4. Let Y = X 2 . Determine fY (y) for:


(a) fX (x) = 0.5 exp{−|x|}
(b) fX (x) = exp{−|x|}U (X)

Solution: Y = X² ⇒ X = ±√y and dY /dX = 2X. Thus

fY (y) = fX (x)/|2x| |_{x=√y} + fX (x)/|2x| |_{x=−√y}

Substituting and simplifying,

fX (x) = 0.5 exp{−|x|}     ⇒ fY (y) = (1/(2√y)) e^{−√y} U (y)
fX (x) = exp{−|x|}U (x)    ⇒ fY (y) = (1/(2√y)) e^{−√y} U (y)

5. Given the joint pdf fXY (x, y)



fXY (x, y) = 8xy for 0 < y < 1, 0 < x < y, and 0 otherwise.

Determine (a) fx (x), (b) fY (y), (c) fY (y|x), and (d) E[Y |x].
Solution:


(a) fX (x) = ∫_{−∞}^∞ fXY (x, y) dy = ∫_x^1 8xy dy = 4x − 4x³ for 0 < x < 1, and 0 otherwise.

(b) fY (y) = ∫_{−∞}^∞ fXY (x, y) dx = ∫_0^y 8xy dx = 4y³ for 0 < y < 1, and 0 otherwise.

(c) fY (y|x) = fXY (x, y)/fX (x) = 2y/(1 − x²) for x < y < 1, and 0 otherwise.

(d) E[Y |x] = ∫_{−∞}^∞ y fY (y|x) dy = ∫_x^1 2y²/(1 − x²) dy = (2/3)(1 − x³)/(1 − x²) = (2/3)(1 + x + x²)/(1 + x).

6. Let W and Z be RVs defined by

W = X2 + Y 2 and Z = X2

where X and Y are independent; X, Y ∼ N (0, 1).

(a) Determine the joint pdf fW Z (w, z).


(b) Are W and Z independent?

Solution: Given the system of equations,

J = |∂(w, z)/∂(x, y)| = |det [[2x, 2y], [2x, 0]]| = 4|xy|

Note we must have w, z ≥ 0 and w ≥ z. Thus the inverse system (roots) are

x = ±√z,   y = ±√(w − z).

Thus

fW Z (w, z) = Σ_{roots} fXY (x, y)/(4|xy|)   (∗)

Note also that, since X, Y ∼ N (0, 1),

fXY (x, y) = (1/(2π)) e^{−(x²+y²)/2}   (∗∗)

Substituting (∗∗) into (∗) [which has four terms] and simplifying yields

fW Z (w, z) = (e^{−w/2}/(2π √(z(w − z)))) U (w − z)U (z)   (∗ ∗ ∗)

Note W and Z are not independent. Counter example proof: Suppose W and Z are independent.
Then fW (w)fZ (z) > 0 for all w, z > 0. But this violates (∗ ∗ ∗), as fW Z (w, z) > 0 only for
w ≥ z.
ELEG–636 Homework #2, Spring 2009

1. Let

R = [[2, −2], [−2, 5]].

Express R as R = QΩQ^H , where Ω is diagonal.
Solution:

det(R − λI) = (2 − λ)(5 − λ) − 4 = λ² − 7λ + 6 = 0  ⇒  λ1 = 6, λ2 = 1

Then solving Rqi = λi qi gives q1 = (1/√5)[1, −2]^T and q2 = (1/√5)[2, 1]^T . Thus R = QΩQ^H
where

Q = [q1 , q2 ]   and   Ω = [[6, 0], [0, 1]].
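A quick numerical verification of this decomposition (a minimal Python sketch):

import numpy as np

R = np.array([[2.0, -2.0], [-2.0, 5.0]])
evals, Q = np.linalg.eigh(R)          # eigh returns eigenvalues in ascending order
print(evals)                          # [1., 6.]
print(np.allclose(Q @ np.diag(evals) @ Q.T, R))  # True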

2. The two-dimensional covariance matrix can be expressed as:

C = [[σ1², ρσ1 σ2 ], [ρ∗ σ1 σ2 , σ2²]]

(a) Find the simplest expression for the eigenvalues of C.

(b) Specialize the results to the case σ1² = σ2² = σ².
(c) What are the eigenvectors in the special case (b) when ρ is real?

Solution:

(a)

det(C − λI) = λ² − (σ1² + σ2²)λ + (1 − |ρ|²)σ1²σ2² = 0

⇒ λ = [(σ1² + σ2²) ± √(σ1⁴ + σ2⁴ − 2σ1²σ2² + 4|ρ|²σ1²σ2²)]/2

(b) For σ1² = σ2² = σ²,

λ = (2σ² ± √(4|ρ|²σ⁴))/2 = (1 ± |ρ|)σ²
x[n] = Aejω0 n
where the complex amplitude A is a RV with random magnitude and phase

A = |A|ejφ .

Show that a sufficient condition for the random process to be stationary is that the amplitude and
phase are independent and that the phase is uniformly distributed over [−π, π].
Solution: First note E{x[n]} = E{A}ejω0 n and

E{A} = E{|A|}E{ejφ } = 0


by independence and uniform distribution of φ. Thus it has a fixed mean. Next note

E{x[n]x∗ [n − k]} = E{|A|2 }ejω0 k

which is strictly a function of k ⇒ WSS.

4. Let Xi be i.i.d. RVs uniformly distributed on [0, 1] and define


20
X
Y = Xi .
i=1

Utilize Tchebycheff’s inequality to determine a bound for P r{8 < Y < 12}.
Solution: Note ηx = 1/2 and σx² = 1/12. Thus ηy = 10 and σy² = 20/12 = 5/3. Utilizing Tchebycheff’s
inequality,

P r{|Y − ηy | ≥ 2} ≤ σy²/2² = 5/12  ⇒  P r{8 < Y < 12} ≥ 1 − 5/12 = 7/12

5. Let X ∼ N (0, 2σ 2 ) and Y ∼ N (1, σ 2 ) be independent RVs. Also, define Z = XY . Find the
Bays estimate of X from observation Z:

(a) Using the squared error criteria.


(b) Using the absolute error criteria.

6. Let X and Y be independent RVs characterized by fX (x) = ae−ax U (x) and fY (y) = ae−ay U (y).
Also, define Z = XY . Find the Bays estimate of X from observation Z using the uniform cost
function.
Solution:

    FZ|X(z|x) = Pr(XY ≤ z | x) = Pr(Y ≤ z/x) = FY(z/x)   ⇒   fZ|X(z|x) = (1/x) fY(z/x)

The uniform cost (MAP) estimate is

    x̂ = arg max_x fZ|X(z|x) fX(x) = arg max_x (1/x) fY(z/x) fX(x)
      = arg max_x a e^{−az/x} (1/x) a e^{−ax} U(x) U(z) = arg max_x a² x^{−1} e^{−a(z/x + x)} U(x) U(z)

Setting the derivative with respect to x to zero,

    0 = −a² x^{−2} e^{−a(z/x + x)} + a² x^{−1} e^{−a(z/x + x)} (−a(1 − z x^{−2}))
    0 = −x^{−1} − a(1 − z x^{−2})   ⇒   ax² + x − az = 0

    ⇒ x̂ = ( −1 + √(1 + 4a²z) ) / (2a),

taking the positive root since x must be nonnegative.
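As a sanity check on the stationary point (a sketch with arbitrarily chosen values of a and z, not part of the original solution), a grid search over the posterior reproduces the closed-form root:

    import numpy as np

    a, z = 2.0, 1.5
    x = np.linspace(1e-3, 10, 200_000)
    posterior = np.exp(-a * (z / x + x)) / x           # proportional to f(z|x) f(x) for x > 0
    print(x[np.argmax(posterior)])                     # ~1.0
    print((-1 + np.sqrt(1 + 4 * a**2 * z)) / (2 * a))  # 1.0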

7. Random processes x[n] and y[n] are defined by

x[n] = v1 [n] + 3v2 [n − 1]


y[n] = v2 [n + 1] + 3v2 [n − 1]

where v1 [n] and v2 [n] are independent white noise processes, each with variance 0.5.

ELEG–636 Homework #1, Spring 2008

1. Let fx(t) be symmetric about 0. Prove that µ is the expected value of a sample distributed according to fx−µ(t) = fx(t − µ).

Solution.
Since fx(t) is symmetric about 0, fx(t) is even. Let y be a sample distributed according to fx−µ(t) = fx(t − µ). Then

    E[y] = ∫_{−∞}^{+∞} t fx−µ(t) dt = ∫_{−∞}^{+∞} t fx(t − µ) dt.

Let u = t − µ:

    E[y] = ∫_{−∞}^{+∞} (u + µ) fx(u) du
         = ∫_{−∞}^{+∞} u fx(u) du + µ ∫_{−∞}^{+∞} fx(u) du
         = 0 + µ · 1 = µ,

since u fx(u) is odd and fx integrates to 1.

2. The complementary cumulative distribution function is defined as Qx(x) = 1 − Fx(x), or more explicitly in the zero mean, unit variance Gaussian case as

       Qx(x) = ∫_x^∞ (1/√(2π)) exp(−t²/2) dt.

   Show that

       Qx(x) ≈ (1/(√(2π) x)) exp(−x²/2).

   Hint: use integration by parts on Qx(x) = ∫_x^∞ (1/(√(2π) t)) · t exp(−t²/2) dt. Also explain why the approximation improves as x increases.

Solution.
Recall integration by parts: ∫_a^b f(t) g′(t) dt = f(t) g(t)|_a^b − ∫_a^b f′(t) g(t) dt.
Let g′(t) = t exp(−t²/2) and f(t) = 1/(√(2π) t). Then

    Qx(x) = ∫_x^∞ (1/(√(2π) t)) · t exp(−t²/2) dt
          = [ −(1/(√(2π) t)) exp(−t²/2) ]_x^∞ − ∫_x^∞ (1/(√(2π) t²)) exp(−t²/2) dt
          = (1/(√(2π) x)) exp(−x²/2) − ∫_x^∞ (1/(√(2π) t²)) exp(−t²/2) dt
          ≈ (1/(√(2π) x)) exp(−x²/2).

The neglected integral goes to zero as x → ∞; in fact it is bounded by (1/(√(2π) x³)) exp(−x²/2), i.e. it is smaller than the retained term by a factor of 1/x². Hence the approximation improves as x increases.
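The quality of the approximation is easy to check numerically (an illustrative sketch using scipy's Gaussian tail probability):

    import numpy as np
    from scipy.stats import norm

    for x in (1.0, 2.0, 4.0, 6.0):
        exact = norm.sf(x)                                      # Q(x)
        approx = np.exp(-x**2 / 2) / (np.sqrt(2 * np.pi) * x)
        print(x, exact, approx, approx / exact)                 # ratio tends to 1 as x increases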

3. The probability density function for a two-dimensional random vector is defined by

       fx(x) = Ax1²x2,   x1, x2 ≥ 0 and x1 + x2 ≤ 1
               0,        otherwise

(a) Determine Fx (x) and the value of A.


(b) Determine the marginal density fx2 (x).
(c) Are fx1 (x) and fx2 (x) independent? Show why or why not.

Solution.

(a)

    Fx1,x2(∞, ∞) = ∫_0^1 ∫_0^{1−x1} A x1² x2 dx2 dx1
                 = ∫_0^1 A x1² [ x2²/2 ]_0^{1−x1} dx1
                 = ∫_0^1 A x1² (1 − x1)²/2 dx1
                 = (A/2) ∫_0^1 (x1⁴ − 2x1³ + x1²) dx1
                 = A/60 = 1

Therefore, A = 60. Defining Fx1 ,x2 (u, v) = P r(x1 ≤ u, x2 ≤ v), we have

• x1 < 0 or x2 < 0, then F (x1 , x2 ) = 0.


• x1, x2 ≥ 0 and x1 + x2 ≤ 1, then

      F(x1, x2) = ∫_0^{x1} ∫_0^{x2} 60u²v dv du = 10x1³x2²

• 0 ≤ x1, x2 ≤ 1 and x1 + x2 ≥ 1, then

      F(x1, x2) = 1 − ∫_0^{1−x2} ∫_{x2}^{1−u} 60u²v dv du − ∫_{x1}^1 ∫_0^{1−u} 60u²v dv du
                = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1

• 0 ≤ x1 ≤ 1 and x2 ≥ 1, then

      F(x1, x2) = 1 − ∫_{x1}^1 ∫_0^{1−u} 60u²v dv du = 10x1³ − 15x1⁴ + 6x1⁵

• 0 ≤ x2 ≤ 1 and x1 ≥ 1, then

      F(x1, x2) = 1 − ∫_0^{1−x2} ∫_{x2}^{1−u} 60u²v dv du = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵

• x1, x2 ≥ 1, then F(x1, x2) = 1.
So

    F(x1, x2) =
        0,                                                          x1 < 0 or x2 < 0
        10x1³x2²,                                                   x1, x2 ≥ 0, x1 + x2 ≤ 1
        10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1,    0 ≤ x1, x2 ≤ 1, x1 + x2 ≥ 1
        10x1³ − 15x1⁴ + 6x1⁵,                                       0 ≤ x1 ≤ 1, x2 ≥ 1
        10x2² − 20x2³ + 15x2⁴ − 4x2⁵,                               0 ≤ x2 ≤ 1, x1 ≥ 1
        1,                                                          x1, x2 ≥ 1

(b)

    fx2(x2) = ∫_0^{1−x2} 60x1²x2 dx1 = 20x2(1 − x2)³,   0 ≤ x2 ≤ 1


(c) Since

    fx1(x1) = ∫_0^{1−x1} 60x1²x2 dx2 = 30x1²(1 − x1)²,   0 ≤ x1 ≤ 1,

we have fx1,x2(x1, x2) ≠ fx1(x1) fx2(x2). Therefore, fx1(x1) and fx2(x2) are NOT independent.
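The constant A and the marginals can be verified numerically (illustrative only):

    import numpy as np
    from scipy import integrate

    # normalization over the triangle x1, x2 >= 0, x1 + x2 <= 1
    val, _ = integrate.dblquad(lambda x2, x1: x1**2 * x2, 0, 1, 0, lambda x1: 1 - x1)
    print(1 / val)                                      # 60, so A = 60

    x2 = 0.4
    m, _ = integrate.quad(lambda x1: 60 * x1**2 * x2, 0, 1 - x2)
    print(m, 20 * x2 * (1 - x2)**3)                     # marginal fx2(0.4): both 1.728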

4. Consider the two independent marginal distributions


       fx1(x1) = 1,    0 ≤ x1 ≤ 1,  and 0 otherwise
       fx2(x2) = 2x2,  0 ≤ x2 ≤ 1,  and 0 otherwise
Let A be the event x1 ≤ x2 .

(a) Find and sketch fx (x).


(b) Determine P r{A}.
(c) Determine fx|A (x|A). Are the components independent, i.e., are fx1 |A (x|A)
and fx2 |A (x|A) independent?

Solution.
(a) Since the two marginal distributions are independent,

    fX(x) = fx1(x1) fx2(x2) = 2x2,   0 ≤ x1, x2 ≤ 1,  and 0 otherwise.

(b)

    Pr(A) = ∫_0^1 ∫_0^{x2} 2x2 dx1 dx2 = ∫_0^1 2x2² dx2 = [ 2x2³/3 ]_0^1 = 2/3


(c)

    fX|A(x|A) = fX(x) / Pr(A) = 3x2,   0 ≤ x1 < x2 ≤ 1,  and 0 otherwise.

    fx1|A(x1|A) = ∫_{x1}^1 3x2 dx2 = [ 3x2²/2 ]_{x1}^1 = 3(1 − x1²)/2,   0 ≤ x1 ≤ 1

    fx2|A(x2|A) = ∫_0^{x2} 3x2 dx1 = 3x2²,   0 ≤ x2 ≤ 1

Since fX|A(x|A) ≠ fx1|A(x1|A) fx2|A(x2|A), fx1|A(x1|A) and fx2|A(x2|A) are NOT independent.
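A short simulation (illustrative only) agrees with Pr(A) = 2/3 and with the conditional densities above, here checked through their means:

    import numpy as np

    rng = np.random.default_rng(0)
    x1 = rng.uniform(0, 1, 1_000_000)
    x2 = np.sqrt(rng.uniform(0, 1, 1_000_000))   # inverse-CDF sampling of fx2(x) = 2x on [0, 1]
    A = x1 <= x2

    print(A.mean())                              # ~2/3
    print(x1[A].mean())                          # ~3/8, the mean of 3(1 - x1^2)/2 on [0, 1]
    print(x2[A].mean())                          # ~3/4, the mean of 3*x2^2 on [0, 1]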

5. The entropy H for a random vector is defined as −E{ln fx (x)}. Show that
for the complex Gaussian case

H = N (1 + ln π) + ln |Cx |.

Determine the corresponding expression when the vector is real.


Solution.

The complex Gaussian p.d.f. is

    fx(x) = ( 1 / (π^N |Cx|) ) exp[ −(x − mx)^H Cx^{−1} (x − mx) ]

Then,

    H = −E{ln fx(x)} = E[ (x − mx)^H Cx^{−1} (x − mx) ] + N ln π + ln |Cx|


Note

    E[ (x − mx)^H Cx^{−1} (x − mx) ] = E[ trace( (x − mx)^H Cx^{−1} (x − mx) ) ]
                                     = trace( Cx^{−1} E[ (x − mx)(x − mx)^H ] )
                                     = trace( Cx^{−1} Cx ) = trace(I) = N

Therefore

H = N + N ln π + ln |Cx |
= N (1 + ln π) + ln |Cx |

Similarly, when the vector is real,

    H = (N/2)(1 + ln(2π)) + (1/2) ln |Cx|
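For the real case the expression can be checked against scipy's differential entropy of a multivariate Gaussian (a quick illustrative check with an arbitrary covariance matrix):

    import numpy as np
    from scipy.stats import multivariate_normal

    C = np.array([[2.0, 0.3], [0.3, 1.0]])
    N = C.shape[0]
    H = 0.5 * N * (1 + np.log(2 * np.pi)) + 0.5 * np.log(np.linalg.det(C))
    print(H, multivariate_normal(mean=np.zeros(N), cov=C).entropy())   # identical values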
6. Let

x = 3u − 4v
y = 2u + v

where u and v are unit mean, unit variance, uncorrelated Gaussian random
variables.
(a) Determine the means and variances of x and y.
(b) Determine the joint density of x and y.
(c) Determine the conditional density of y given x.
Solution.
(a)

E(x) = E(3u − 4v)


= 3E(u) − 4E(v)
= 3−4
= −1

E(y) = E(2u + v)
= 2E(u) + E(v)
= 2+1
= 3


    σx² = E(x²) − E²(x) = E[(3u − 4v)²] − 1 = 25

    σy² = E(y²) − E²(y) = E[(2u + v)²] − 9 = 5

(b) Note that

    [ x ]   [ 3  −4 ] [ u ]
    [ y ] = [ 2   1 ] [ v ]

with A denoting the 2×2 matrix. Thus

    A^{−1} = (1/11) [  1  4 ]
                    [ −2  3 ]

and

    fx,y(x, y) = fu,v( A^{−1} [x, y]^T ) / |det A|
               = (1/11) fu,v( (x + 4y)/11, (−2x + 3y)/11 )
               = (1/(22π)) exp{ −(1/2) [ ((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)² ] }

(c) Note that x is Gaussian:

    fx(x) = ( 1/(5√(2π)) ) exp{ −(x + 1)²/(2 × 25) }

Thus

    fy|x(y|x) = fx,y(x, y) / fx(x)
              = ( 5√(2π)/(22π) ) exp{ −(1/2) [ ((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)² − (x + 1)²/25 ] }
              = (5/22) √(2/π) exp{ −(1/2) [ ((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)² − (x + 1)²/25 ] }
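The moments from (a) can be spot-checked by simulation (an illustrative sketch, not part of the solution):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.normal(1, 1, 1_000_000)
    v = rng.normal(1, 1, 1_000_000)
    x, y = 3 * u - 4 * v, 2 * u + v

    print(x.mean(), y.mean())         # ~ -1 and 3
    print(x.var(), y.var())           # ~ 25 and 5
    print(np.cov(x, y)[0, 1])         # ~ 2 = 3*2*var(u) + (-4)*1*var(v)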


7. Consider the orthogonal transformation of the correlated zero mean random variables x1 and x2:

       [ y1 ]   [  cos θ   sin θ ] [ x1 ]
       [ y2 ] = [ −sin θ   cos θ ] [ x2 ]

   Note E{x1²} = σ1², E{x2²} = σ2², and E{x1x2} = ρσ1σ2. Determine the angle θ such that y1 and y2 are uncorrelated.
Solution.

    y1 = x1 cos θ + x2 sin θ
    y2 = −x1 sin θ + x2 cos θ

    E(y1 y2) = E[ (x1 cos θ + x2 sin θ)(−x1 sin θ + x2 cos θ) ]
             = sin θ cos θ E[x2²] + (cos²θ − sin²θ) E[x1x2] − sin θ cos θ E[x1²]
             = sin θ cos θ (σ2² − σ1²) + (cos²θ − sin²θ) ρσ1σ2
             = (sin 2θ / 2)(σ2² − σ1²) + cos 2θ · ρσ1σ2

If y1 and y2 are uncorrelated, E(y1 y2) = 0, i.e. tan 2θ = 2ρσ1σ2/(σ1² − σ2²). For −π/2 ≤ θ < π/2,

    θ = (1/2) arctan( 2ρσ1σ2 / (σ1² − σ2²) ).
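A numerical check of the angle (illustrative, with arbitrary parameter values; arctan2 also handles the case σ1 = σ2):

    import numpy as np

    s1, s2, rho = 2.0, 1.0, 0.6
    theta = 0.5 * np.arctan2(2 * rho * s1 * s2, s1**2 - s2**2)
    c, s = np.cos(theta), np.sin(theta)

    Cx = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
    A = np.array([[c, s], [-s, c]])
    print((A @ Cx @ A.T)[0, 1])        # ~0: y1 and y2 are uncorrelated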

8. The covariance matrix and mean vector for a real Gaussian density are

       Cx = [  1   0.5 ]        and        mx = [ 1 ]
            [ 0.5   1  ]                        [ 0 ]

(a) Determine the eigenvalues and eigenvectors.


(b) Generate a mesh plot of the distribution using MATLAB.
(c) Change the off-diagonal values to −0.5 and repeat (a) and (b).


Solution.

(a) Solve |Cx − λI| = 0.

    (1 − λ)² − 0.25 = (λ − 0.5)(λ − 1.5) = 0

Hence, eigenvalues are 0.5 and 1.5. For λ = 0.5, the corresponding eigen-
vector is [1, −1]T . For λ = 1.5, the corresponding eigenvector is [1, 1]T .

(c) Eigenvalues are 0.5 and 1.5. For λ = 0.5, the corresponding eigenvector
is [1, 1]T . For λ = 1.5, the corresponding eigenvector is [1, −1]T .
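Part (b) asks for a MATLAB mesh plot; an equivalent sketch in Python (illustrative only, with the same Cx and mx; change the off-diagonal entries to −0.5 for part (c)) is:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import multivariate_normal

    mx = np.array([1.0, 0.0])
    Cx = np.array([[1.0, 0.5], [0.5, 1.0]])
    g = np.linspace(-3.0, 4.0, 120)
    X1, X2 = np.meshgrid(g, g)
    Z = multivariate_normal(mx, Cx).pdf(np.dstack((X1, X2)))

    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(X1, X2, Z)
    plt.show()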

9. Let {xk(n)}_{k=1}^K be i.i.d. zero mean, unit variance uniformly distributed random variables and set

       yK(n) = Σ_{k=1}^K xk(n).

(a) Determine and plot the pdf of yK (n) for K = 2, 3, 4.


(b) Compare the pdf’s to the Gaussian density.
(c) Perform the comparison experimentally using MATLAB. That is, gen-
erate K sequences of n = 1, 2, . . . , N uniformly distributed samples.
Add the sequences and plot the resulting distribution (histogram). Fit
the results to a Gaussian distribution for various K and N .

Solution.

(a) The {xk(n)}_{k=1}^K are i.i.d. zero mean, unit variance uniformly distributed random variables:

    fxk(xk) = 1/(2a),   xk ∈ [−a, a],  and 0 otherwise.

Since E[xk²] = 1,

    E[xk²] = (1/(2a)) ∫_{−a}^{a} x² dx = [ x³/(6a) ]_{−a}^{a} = a²/3 = 1,



so a = √3. That is,

    fxk(xk) = 1/(2√3),   xk ∈ [−√3, √3],  and 0 otherwise.

For K = 2, y2(n) = x1(n) + x2(n), and

    fy2(x) = fx1(x) ∗ fx2(x) =  x/12 + 1/(2√3),     −2√3 ≤ x < 0
                                −x/12 + 1/(2√3),     0 ≤ x ≤ 2√3
                                0,                   otherwise

i.e. a triangle on [−2√3, 2√3].

For K = 3, y3(n) = x1(n) + x2(n) + x3(n) = y2(n) + x3(n), and

    fy3(x) = fy2(x) ∗ fx3(x) =  (x + 3√3)²/(48√3),    −3√3 ≤ x < −√3
                                (9 − x²)/(24√3),       −√3 ≤ x < √3
                                (x − 3√3)²/(48√3),     √3 ≤ x ≤ 3√3
                                0,                     otherwise

(Continuity at x = ±√3, where both adjoining pieces equal 1/(4√3), checks the middle piece.)
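Part (c) asks for a MATLAB experiment; an equivalent Python sketch (illustrative, with arbitrary K and N) compares the histogram of the sum with the N(0, K) density:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    K, N = 4, 100_000
    a = np.sqrt(3)                                     # zero mean, unit variance uniform on [-a, a]
    y = rng.uniform(-a, a, size=(N, K)).sum(axis=1)

    t = np.linspace(-4 * np.sqrt(K), 4 * np.sqrt(K), 400)
    plt.hist(y, bins=100, density=True, label='sum of %d uniforms' % K)
    plt.plot(t, np.exp(-t**2 / (2 * K)) / np.sqrt(2 * np.pi * K), label='N(0, K)')
    plt.legend()
    plt.show()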

