
Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers
Roy D. Yates and David J. Goodman


Problem Solutions: Yates and Goodman, 4.1.4, 4.2.4, 4.3.7, 4.4.10, 4.4.11, 4.5.6, 4.6.8, 4.6.9, 4.7.14, 4.7.15, 4.7.16, 4.8.3 and 4.8.4

Problem 4.1.4

(a) By definition, ⌈nx⌉ is the smallest integer that is greater than or equal to nx. This implies

    nx ≤ ⌈nx⌉ ≤ nx + 1.

(b) By part (a), dividing through by n gives

    nx/n ≤ ⌈nx⌉/n ≤ (nx + 1)/n.

That is,

    x ≤ ⌈nx⌉/n ≤ x + 1/n.

This implies

    x ≤ lim_{n→∞} ⌈nx⌉/n ≤ lim_{n→∞} (x + 1/n) = x,

so lim_{n→∞} ⌈nx⌉/n = x.
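As a quick numerical sketch (not part of the printed solution), the sandwich bound above can be checked in Python for an arbitrary x:

```python
import math

# Check x <= ceil(n*x)/n <= x + 1/n and watch the ratio converge to x.
x = 0.7318  # arbitrary illustrative value
for n in (10, 100, 1_000, 10_000, 100_000):
    ratio = math.ceil(n * x) / n
    assert x <= ratio <= x + 1.0 / n
    print(n, ratio, ratio - x)
```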

Problem 4.2.4

First, we note that a and b must be chosen such that the above PDF integrates to 1:

    ∫₀¹ (ax² + bx) dx = a/3 + b/2 = 1.

Hence b = 2 − 2a/3, and our PDF becomes

    f_X(x) = ax² + (2 − 2a/3)x,  0 ≤ x ≤ 1;  0 otherwise.

For the PDF to be non-negative for x ∈ [0, 1], we must have ax² + (2 − 2a/3)x ≥ 0 for all x ∈ [0, 1]. Dividing by x (for x > 0), this requirement can be written as

    a(x − 2/3) ≥ −2,  0 < x ≤ 1.

For x = 2/3, the requirement holds for all a. However, the problem is tricky because we must consider the cases 0 < x < 2/3 and 2/3 < x ≤ 1 separately because of the sign change in the inequality. When 0 < x < 2/3, the requirement is most stringent at x = 0, where we require 2a/3 ≤ 2, or a ≤ 3. When 2/3 < x ≤ 1, we can write the constraint as a ≥ −2/(x − 2/3). In this case, the constraint is most stringent at x = 1, where we must have a/3 ≥ −2, or a ≥ −6. Thus a complete expression for our requirements is

    −6 ≤ a ≤ 3.

As we see in the following plot, the shape of the PDF f_X(x) varies greatly with the value of a.
[Plot: f_X(x) versus x on [0, 1] for a = −6, −3, 0, and 3.]
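A brief numerical sketch of this conclusion (b = 2 − 2a/3 and the admissible range −6 ≤ a ≤ 3 are taken from the derivation above):

```python
import numpy as np
from scipy.integrate import quad

# For each trial value of a, set b = 2 - 2a/3, confirm the PDF integrates to 1
# on [0,1], and test non-negativity; it should fail only outside [-6, 3].
x = np.linspace(0.0, 1.0, 100_001)
for a in (-8.0, -6.0, -3.0, 0.0, 3.0, 4.0):
    b = 2.0 - 2.0 * a / 3.0
    area, _ = quad(lambda t: a * t**2 + b * t, 0.0, 1.0)
    nonneg = bool((a * x**2 + b * x >= -1e-12).all())
    print(f"a = {a:5.1f}   integral = {area:.4f}   non-negative on [0,1]: {nonneg}")
```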

Problem 4.3.7
We find the PDF of U by taking the derivative of F_U(u). The CDF and corresponding PDF are

    F_U(u) = 0                  u < −5
             (u + 5)/8          −5 ≤ u < −3
             1/4                −3 ≤ u < 3
             1/4 + 3(u − 3)/8   3 ≤ u < 5
             1                  u ≥ 5

    f_U(u) = 0     u < −5
             1/8   −5 ≤ u < −3
             0     −3 ≤ u < 3
             3/8   3 ≤ u < 5
             0     u ≥ 5

(a) The expected value of U is

    E[U] = ∫ u f_U(u) du = ∫_{−5}^{−3} (u/8) du + ∫_{3}^{5} (3u/8) du
         = (u²/16)|_{−5}^{−3} + (3u²/16)|_{3}^{5} = −1 + 3 = 2.

(b) The second moment of U is

    E[U²] = ∫ u² f_U(u) du = ∫_{−5}^{−3} (u²/8) du + ∫_{3}^{5} (3u²/8) du
          = (u³/24)|_{−5}^{−3} + (u³/8)|_{3}^{5} = 49/3.

The variance of U is Var[U] = E[U²] − (E[U])² = 49/3 − 2² = 37/3.

(c) Note that 2^U = e^{(ln 2)U}. This implies that

    ∫ 2^u du = ∫ e^{(ln 2)u} du = e^{(ln 2)u}/ln 2 = 2^u/ln 2.

The expected value of 2^U is then

    E[2^U] = ∫ 2^u f_U(u) du = ∫_{−5}^{−3} (2^u/8) du + ∫_{3}^{5} (3 · 2^u/8) du
           = (2^u/(8 ln 2))|_{−5}^{−3} + (3 · 2^u/(8 ln 2))|_{3}^{5}
           = 2307/(256 ln 2) = 13.001.
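A short numerical check of these moments (the piecewise PDF below is the one derived above):

```python
from scipy.integrate import quad

# f_U(u) = 1/8 on [-5,-3), 3/8 on [3,5), and 0 elsewhere.
def f_U(u):
    if -5 <= u < -3:
        return 1.0 / 8.0
    if 3 <= u < 5:
        return 3.0 / 8.0
    return 0.0

brk = [-3, 3]  # tell quad where the PDF is discontinuous
E_U  = quad(lambda u: u * f_U(u),      -5, 5, points=brk)[0]
E_U2 = quad(lambda u: u**2 * f_U(u),   -5, 5, points=brk)[0]
E_2U = quad(lambda u: 2.0**u * f_U(u), -5, 5, points=brk)[0]
print(E_U, E_U2, E_U2 - E_U**2, E_2U)   # expect 2, 49/3, 37/3, about 13.001
```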

Problem 4.4.10
For n = 1, we have the fact E[X] = 1/λ that is given in the problem statement. Now we assume that E[X^(n−1)] = (n − 1)!/λ^(n−1). To complete the proof, we show that this implies E[X^n] = n!/λ^n. Specifically, we write

    E[X^n] = ∫₀^∞ x^n λ e^{−λx} dx.

Now we use the integration by parts formula ∫ u dv = uv − ∫ v du with u = x^n and dv = λ e^{−λx} dx. This implies du = n x^(n−1) dx and v = −e^{−λx}, so that

    E[X^n] = −x^n e^{−λx} |₀^∞ + ∫₀^∞ n x^(n−1) e^{−λx} dx
           = 0 + (n/λ) ∫₀^∞ x^(n−1) λ e^{−λx} dx
           = (n/λ) E[X^(n−1)].

By our induction hypothesis, E[X^(n−1)] = (n − 1)!/λ^(n−1), which implies

    E[X^n] = n!/λ^n.
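A numerical spot-check of this identity for an exponential (λ) random variable; the rate value 2.5 is an arbitrary choice for illustration:

```python
import math
from scipy.integrate import quad

lam = 2.5  # arbitrary rate for the check
for n in range(1, 6):
    moment, _ = quad(lambda x: x**n * lam * math.exp(-lam * x), 0.0, math.inf)
    print(n, moment, math.factorial(n) / lam**n)   # the two columns should agree
```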

Problem 4.4.11

(a) Since f_X(x) ≥ 0 and x ≥ r over the entire range of integration, we can write

    ∫_r^∞ x f_X(x) dx ≥ ∫_r^∞ r f_X(x) dx = r P[X > r].

(b) We can write the expected value of X in the form

    E[X] = ∫₀^r x f_X(x) dx + ∫_r^∞ x f_X(x) dx.

Hence, by part (a),

    r P[X > r] ≤ ∫_r^∞ x f_X(x) dx = E[X] − ∫₀^r x f_X(x) dx.

Allowing r to approach infinity yields

    lim_{r→∞} r P[X > r] ≤ E[X] − lim_{r→∞} ∫₀^r x f_X(x) dx = E[X] − E[X] = 0.

Since r P[X > r] ≥ 0 for all r ≥ 0, we must have lim_{r→∞} r P[X > r] = 0.

(c) We can use the integration by parts formula ∫ u dv = uv − ∫ v du by defining u = 1 − F_X(x) and dv = dx. This yields

    ∫₀^∞ [1 − F_X(x)] dx = x[1 − F_X(x)] |₀^∞ + ∫₀^∞ x f_X(x) dx.

By applying part (b), we now observe that

    lim_{r→∞} r[1 − F_X(r)] = lim_{r→∞} r P[X > r] = 0,

and this implies x[1 − F_X(x)] |₀^∞ = 0. Thus,

    ∫₀^∞ [1 − F_X(x)] dx = ∫₀^∞ x f_X(x) dx = E[X].
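A small numerical illustration of parts (b) and (c); the exponential example here is an arbitrary choice, not part of the original problem:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rv = stats.expon(scale=2.0)   # nonnegative example with E[X] = 2

# Part (c): the area under the complementary CDF equals E[X].
tail_area, _ = quad(lambda x: rv.sf(x), 0.0, np.inf)   # sf(x) = 1 - F_X(x)
print(tail_area, rv.mean())                            # both approximately 2.0

# Part (b): r * P[X > r] -> 0 as r grows.
for r in (1.0, 10.0, 50.0, 100.0):
    print(r, r * rv.sf(r))
```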

Problem 4.5.6
We are given that there are 100,000,000 men in the United States, that 23,000 of them are at least 7 feet tall, and that the heights of U.S. men are independent Gaussian random variables with mean 5'10" (70 inches).

(a) Let H denote the height in inches of a U.S. male. To find σ_X, we use the fact that the probability P[H ≥ 84] is the number of men who are at least 7 feet tall divided by the total number of men (the frequency interpretation of probability). Since we measure H in inches, we have

    P[H ≥ 84] = 23,000/100,000,000 = 0.00023.

Since P[H ≥ 84] = P[(H − 70)/σ_X ≥ (84 − 70)/σ_X] = Q(14/σ_X) = 2.3 × 10⁻⁴, Table 4.2 implies 14/σ_X = 3.5, or σ_X = 4.

(b) The probability that a randomly chosen man is at least 8 feet (96 inches) tall is

    P[H ≥ 96] = Q((96 − 70)/4) = Q(6.5).

Unfortunately, Table 4.2 doesn't include Q(6.5), although it should be apparent that the probability is very small. In fact, Q(6.5) = 4.0 × 10⁻¹¹.

(c) First we need to find the probability that a man is at least 7'6" (90 inches) tall:

    P[H ≥ 90] = Q((90 − 70)/4) = Q(5) ≈ 3 × 10⁻⁷.

Although Table 4.2 stops at Q(4.99), if you're curious, the exact value is Q(5) = 2.87 × 10⁻⁷. Now we can begin to find the probability that no man is at least 7'6" tall. This can be modeled as 100,000,000 repetitions of a Bernoulli trial with success probability β = P[H ≥ 90] ≈ 3 × 10⁻⁷. The probability that no man is at least 7'6" tall is

    (1 − β)^{100,000,000} ≈ e^{−100,000,000 β} = e^{−30} = 9.4 × 10⁻¹⁴.

(d) The expected value of N is just the number of trials multiplied by the probability that a man is at least 7'6" tall:

    E[N] = 100,000,000 · β = 100,000,000 × 3 × 10⁻⁷ = 30.
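These numbers are easy to reproduce with scipy's Gaussian tail function (norm.sf(z) = Q(z)). A small sketch; note the table value Q(5) ≈ 3 × 10⁻⁷ used above gives the quoted 9.4 × 10⁻¹⁴ and E[N] = 30, while the exact Q(5) shifts them slightly:

```python
from scipy import stats

p_84 = 23_000 / 100_000_000          # P[H >= 84] = 2.3e-4
sigma = 14.0 / stats.norm.isf(p_84)  # isf inverts the Q function; z ~ 3.5, so sigma ~ 4
print(sigma)

print(stats.norm.sf((96 - 70) / 4))  # Q(6.5), about 4.0e-11

beta = stats.norm.sf((90 - 70) / 4)  # Q(5), about 2.87e-7
print(beta)
print((1.0 - beta) ** 100_000_000)   # probability no man is at least 7'6" tall
print(100_000_000 * beta)            # E[N], about 28.7 (30 with the table value)
```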

Problem 4.6.8
Let G denote the event that the throw is good, that is, no foul occurs. The CDF of D obeys

    F_D(y) = P[D ≤ y | G] P[G] + P[D ≤ y | Gᶜ] P[Gᶜ].

Given the event G,

    P[D ≤ y | G] = P[X ≤ y − 60] = 1 − e^{−(y−60)/10},   y ≥ 60.

Of course, for y < 60, P[D ≤ y | G] = 0. From the problem statement, if the throw is a foul, then D = 0. This implies

    P[D ≤ y | Gᶜ] = u(y),

where u(·) denotes the unit step function. Since P[G] = 0.7, we can write

    F_D(y) = P[G] P[D ≤ y | G] + P[Gᶜ] P[D ≤ y | Gᶜ]
           = 0.3 u(y)                          y < 60
             0.3 + 0.7 (1 − e^{−(y−60)/10})    y ≥ 60.

Another way to write this CDF is

    F_D(y) = 0.3 u(y) + 0.7 u(y − 60) (1 − e^{−(y−60)/10}).

When we take the derivative, either expression for the CDF will yield the PDF; however, taking the derivative of the first expression is perhaps simpler:

    f_D(y) = 0.3 δ(y)                 y < 60
             0.07 e^{−(y−60)/10}      y ≥ 60.

Taking the derivative of the second expression for the CDF is a little tricky because of the product of the exponential and the step function. However, applying the usual rule for the differentiation of a product does give the correct answer:

    f_D(y) = 0.3 δ(y) + 0.7 δ(y − 60)(1 − e^{−(y−60)/10}) + 0.07 u(y − 60) e^{−(y−60)/10}
           = 0.3 δ(y) + 0.07 u(y − 60) e^{−(y−60)/10}.

The middle term δ(y − 60)(1 − e^{−(y−60)/10}) dropped out because at y = 60, e^{−(y−60)/10} = 1.
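A simulation sketch of this mixed distribution, assuming the model reconstructed above (a foul with probability 0.3 gives D = 0; otherwise D = 60 + X with X exponential with mean 10):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
foul = rng.random(n) < 0.3
d = np.where(foul, 0.0, 60.0 + rng.exponential(scale=10.0, size=n))

def F_D(y):
    """CDF derived above: 0.3*u(y) for y < 60, then 0.3 + 0.7*(1 - e^{-(y-60)/10})."""
    if y < 0.0:
        return 0.0
    if y < 60.0:
        return 0.3
    return 0.3 + 0.7 * (1.0 - np.exp(-(y - 60.0) / 10.0))

for y in (-1.0, 0.0, 30.0, 60.0, 70.0, 90.0):
    print(y, (d <= y).mean(), F_D(y))   # empirical vs. analytical CDF
```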

Problem 4.6.9
The professor is on time and lectures the full 80 minutes with probability 0.7; that is, P[T = 80] = 0.7. Likewise, when the professor is more than 5 minutes late, the students leave and a 0-minute lecture is observed. Since he is late 30% of the time and, given that he is late, his arrival time is uniformly distributed between 0 and 10 minutes, the probability that there is no lecture is

    P[T = 0] = (0.3)(0.5) = 0.15.

The only other possible lecture durations are uniformly distributed between 75 and 80 minutes, because the students will not wait longer than 5 minutes, and that probability must add to a total of 1 − 0.7 − 0.15 = 0.15. So the PDF of T can be written as

    f_T(t) = 0.15 δ(t)         t = 0
             0.03              75 ≤ t < 80
             0.7 δ(t − 80)     t = 80
             0                 otherwise.
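A quick sanity check (not in the original solution) that this mixed PDF describes a valid probability model:

```python
# Two point masses plus the uniform piece on [75, 80) must sum to 1.
total = 0.15 + 0.7 + 0.03 * (80 - 75)
print(total)   # 1.0
```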

Problem 4.7.14
We can prove the assertion by considering the cases where a > 0 and a < 0, respectively. For the case where a > 0 we have

    F_Y(y) = P[Y ≤ y] = P[X ≤ (y − b)/a] = F_X((y − b)/a).

Therefore, by taking the derivative we find that

    f_Y(y) = (1/a) f_X((y − b)/a).

Similarly, for the case when a < 0 we have

    F_Y(y) = P[Y ≤ y] = P[X ≥ (y − b)/a] = 1 − F_X((y − b)/a).

And by taking the derivative, we find that for negative a,

    f_Y(y) = −(1/a) f_X((y − b)/a).

A valid expression for both positive and negative a is

    f_Y(y) = (1/|a|) f_X((y − b)/a).

Therefore the assertion is proved.
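A simulation sketch of this result for a negative scale factor; the standard Gaussian choice for X and the values a = −2, b = 3 are arbitrary illustrations:

```python
import numpy as np
from scipy import stats

a, b = -2.0, 3.0
rng = np.random.default_rng(1)
y = a * rng.standard_normal(1_000_000) + b

hist, edges = np.histogram(y, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
formula = stats.norm.pdf((centers - b) / a) / abs(a)   # (1/|a|) f_X((y-b)/a)
print(np.abs(hist - formula).max())                    # small, e.g. below 0.01
```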

Problem 4.7.15
Understanding this claim may be harder than completing the proof. Since 0 ≤ F(x) ≤ 1, we know that 0 ≤ U ≤ 1. This implies F_U(u) = 0 for u < 0 and F_U(u) = 1 for u ≥ 1. Moreover, since F(x) is an increasing function, we can write for 0 ≤ u ≤ 1,

    F_U(u) = P[F(X) ≤ u] = P[X ≤ F⁻¹(u)] = F_X(F⁻¹(u)).

Since F_X(x) = F(x), we have for 0 ≤ u ≤ 1,

    F_U(u) = F(F⁻¹(u)) = u.

Hence the complete CDF of U is

    F_U(u) = 0   u < 0
             u   0 ≤ u < 1
             1   u ≥ 1.

That is, U is a uniform (0, 1) random variable.

Problem 4.7.16
First, we must verify that F⁻¹(u) is a nondecreasing function. To show this, suppose that for u ≥ u′ we have x = F⁻¹(u) and x′ = F⁻¹(u′). In this case, u = F(x) and u′ = F(x′). Since F(x) is nondecreasing, F(x) ≥ F(x′) implies that x ≥ x′. Hence, we can write

    F_X(x) = P[F⁻¹(U) ≤ x] = P[U ≤ F(x)] = F(x).
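Both directions of this transform are easy to see numerically. A sketch using an exponential F chosen only for illustration (ppf is scipy's name for the inverse CDF F⁻¹):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
F = stats.expon(scale=3.0)          # an arbitrary continuous CDF for the demo

# Problem 4.7.15: U = F(X) is uniform (0,1) when X has CDF F.
x = F.rvs(size=100_000, random_state=rng)
u = F.cdf(x)
print(u.mean(), u.var())            # about 1/2 and 1/12

# Problem 4.7.16: X = F^{-1}(U) has CDF F when U is uniform (0,1).
x2 = F.ppf(rng.random(100_000))
print(x2.mean(), x2.var())          # about 3 and 9 for this exponential
```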

Problem 4.8.3
W is a Gaussian random variable with expected value 0 and PDF

    f_W(w) = (1/√(32π)) e^{−w²/32},

and C is the conditioning event {W > 0}.

(a) Since W has expected value μ = 0, f_W(w) is symmetric about w = 0. Hence P[C] = P[W > 0] = 1/2. From Definition 4.15, the conditional PDF of W given C is

    f_{W|C}(w) = f_W(w)/P[C]             w ∈ C;  0 otherwise
               = (2/√(32π)) e^{−w²/32}   w > 0;  0 otherwise.

(b) The conditional expected value of W given C is

    E[W | C] = ∫ w f_{W|C}(w) dw = (2/(4√(2π))) ∫₀^∞ w e^{−w²/32} dw.

Making the substitution v = w²/32, we obtain

    E[W | C] = (32/√(32π)) ∫₀^∞ e^{−v} dv = 32/√(32π) = √(32/π).

(c) The conditional second moment of W is

    E[W² | C] = ∫ w² f_{W|C}(w) dw = 2 ∫₀^∞ w² f_W(w) dw.

We observe that w² f_W(w) is an even function. Hence

    E[W² | C] = 2 ∫₀^∞ w² f_W(w) dw = ∫_{−∞}^{∞} w² f_W(w) dw = E[W²] = σ² = 16.

Lastly, the conditional variance of W given C is

    Var[W | C] = E[W² | C] − (E[W | C])² = 16 − 32/π = 5.81.
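A numerical sketch confirming these conditional moments (σ = 4 as above):

```python
import math
from scipy.integrate import quad

sigma = 4.0
f_W = lambda w: math.exp(-w**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)
f_WC = lambda w: 2.0 * f_W(w)        # conditional PDF of W given W > 0, for w > 0

m1 = quad(lambda w: w * f_WC(w), 0.0, math.inf)[0]
m2 = quad(lambda w: w**2 * f_WC(w), 0.0, math.inf)[0]
print(m1, math.sqrt(32.0 / math.pi))        # both about 3.19
print(m2, m2 - m1**2, 16 - 32 / math.pi)    # 16, then ~5.81 twice
```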

Problem 4.8.4
(a) To find the conditional moments, we first find the conditional PDF of T. The PDF of T is

    f_T(t) = 100 e^{−100t}   t ≥ 0;   0 otherwise.

The conditioning event has probability

    P[T > 0.02] = ∫_{0.02}^∞ f_T(t) dt = −e^{−100t} |_{0.02}^∞ = e^{−2}.

From Definition 4.15, the conditional PDF of T is

    f_{T|T>0.02}(t) = f_T(t)/P[T > 0.02]    t ≥ 0.02;   0 otherwise
                    = 100 e^{−100(t−0.02)}  t ≥ 0.02;   0 otherwise.

The conditional mean of T is

    E[T | T > 0.02] = ∫_{0.02}^∞ t · 100 e^{−100(t−0.02)} dt.

The substitution τ = t − 0.02 yields

    E[T | T > 0.02] = ∫₀^∞ (τ + 0.02) · 100 e^{−100τ} dτ = ∫₀^∞ (τ + 0.02) f_T(τ) dτ = E[T + 0.02] = 0.03.

(b) The conditional second moment of T is

    E[T² | T > 0.02] = ∫_{0.02}^∞ t² · 100 e^{−100(t−0.02)} dt.

The substitution τ = t − 0.02 yields

    E[T² | T > 0.02] = ∫₀^∞ (τ + 0.02)² · 100 e^{−100τ} dτ = ∫₀^∞ (τ + 0.02)² f_T(τ) dτ = E[(T + 0.02)²].

Now we can calculate the conditional variance:

    Var[T | T > 0.02] = E[T² | T > 0.02] − (E[T | T > 0.02])²
                      = E[(T + 0.02)²] − (E[T + 0.02])²
                      = Var[T + 0.02] = Var[T] = (0.01)² = 10⁻⁴.
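A numerical sketch of these conditional moments; by memorylessness they come out to E[T] + 0.02 and Var[T]:

```python
import math
from scipy.integrate import quad

lam = 100.0
f_T = lambda t: lam * math.exp(-lam * t)   # PDF of T, t >= 0
p = quad(f_T, 0.02, math.inf)[0]           # P[T > 0.02] = e^{-2}
f_cond = lambda t: f_T(t) / p              # conditional PDF for t >= 0.02

m1 = quad(lambda t: t * f_cond(t), 0.02, math.inf)[0]
m2 = quad(lambda t: t**2 * f_cond(t), 0.02, math.inf)[0]
print(p, math.exp(-2))    # both about 0.1353
print(m1)                 # 0.03
print(m2 - m1**2)         # 1e-4 = Var[T]
```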