(ENM 503)
Michael A. Carchidi
November 10, 2014
Chapter 6 - Continuous Random Variables
The following notes are based on the textbook A First Course in Probability by Sheldon Ross (9th edition), and these notes can be viewed at
https://canvas.upenn.edu/
after you log in using your PennKey user name and password.
1. Range Spaces and Probability Density Functions
In this chapter we shall discuss some probability terminology and concepts
for continuous distributions. If the range space RX of a random variable X is a
continuous interval or a union of continuous intervals, then X is called a continuous
random variable. For a continuous random variable X, the probability that a value
of X lies in a region R is given by
P(X \in R) = \int_R f(x)\,dx    (1)
where f is called the probability density function (pdf) of the random variable X,
and the integration is over all points in the region R. When R is the interval
a \le x \le b, then we write
P(a \le X \le b) = \int_a^b f(x)\,dx,    (2a)
and so

P(a \le X \le a) = P(X = a) = \int_a^a f(x)\,dx = 0    (3a)

for any value of a, and also

P(a \le X \le b) = P(a < X \le b) = P(a \le X < b) = P(a < X < b)    (3b)
for all values of a and b. The fact that the probability of X = a equals zero does
not mean that this event can never occur. It is not likely to occur, but it can still
occur.
By setting a = x and b = x + \Delta x in Equation (2a) we see that

P(x \le X \le x + \Delta x) = \int_x^{x+\Delta x} f(t)\,dt \approx f(x)\,\Delta x    (4)

for small \Delta x, and so we may view f as a probability per unit length, which is why it is called a probability density function. Note that all pdfs must satisfy the three conditions: (i) f(x) \ge 0 for all x in R_X, (ii)

\int_{x \in R_X} f(x)\,dx = 1,    (5)

and (iii) f(x) = 0 for all x \notin R_X.
For example, suppose that the lifetime X (in years) of a laser-ray device has the pdf

f(x) = \begin{cases} 0, & \text{for } x < 0 \\ \frac{1}{2}e^{-x/2}, & \text{for } 0 \le x \end{cases}

The probability that the lifetime of the laser-ray device is between 2 and 3 years is then given by

P(2 \le X \le 3) = \int_2^3 \frac{1}{2}e^{-x/2}\,dx = e^{-1} - e^{-3/2},

or P(2 \le X \le 3) \approx 14.5\%.
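As a quick numerical check of this example (a sketch in Python; the function name and grid size are our choices, not the text's), the probability can be approximated by a midpoint Riemann sum and compared with the closed form:

```python
import math

# pdf of the laser-ray lifetime example: f(x) = (1/2) e^{-x/2} for x >= 0
def f(x):
    return 0.5 * math.exp(-x / 2) if x >= 0 else 0.0

# approximate P(2 <= X <= 3) with a midpoint Riemann sum, then compare
# with the closed form e^{-1} - e^{-3/2} from Equation (2a)
n = 100_000
dx = (3 - 2) / n
riemann = sum(f(2 + (i + 0.5) * dx) for i in range(n)) * dx

closed_form = math.exp(-1) - math.exp(-1.5)
print(round(riemann, 5), round(closed_form, 5))  # both about 0.14475
```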
Mixed Random Variables
If the range space RX of a random variable X is the union of discrete (D) sets
and continuous (C) sets, then X is called a mixed random variable. For example
R_X = R_X^D \cup R_X^C = \{1, 2, 3\} \cup [4, 5]
is the set consisting of the integers 1, 2, and 3, as well as all the real numbers
between 4 and 5, inclusive. Here we must have a pmf p(x) defined for all points in R_X^D and a pdf f(x) defined for all points in R_X^C so that (i) p(x) \ge 0 for all x \in R_X^D and f(x) \ge 0 for all x \in R_X^C, (ii)

\sum_{x \in R_X^D} p(x) + \int_{x \in R_X^C} f(x)\,dx = 1,    (6)

and (iii) p(x) = 0 for all x \notin R_X^D and f(x) = 0 for all x \notin R_X^C.
For example, suppose that the lifetime X (in years) of a light bulb has

p(0) = 0.1 \quad\text{and}\quad f(x) = \begin{cases} 0, & \text{for } x < 0 \\ A e^{-x/2}, & \text{for } 0 < x \end{cases}

Then, since

p(0) + \int_0^\infty f(x)\,dx = 1,

we have

0.1 + \int_0^\infty A e^{-x/2}\,dx = 0.1 + 2A = 1,

so that A = 0.45. The probability that the lifetime of a light bulb is between 2 and 3 years is then given by

P(2 \le X \le 3) = (0.45)\int_2^3 e^{-x/2}\,dx = (0.9)\left(e^{-1} - e^{-3/2}\right),

or P(2 \le X \le 3) \approx 13\%.
Cumulative Distribution Function
The cumulative distribution function (cdf) of a random variable X, denoted by F(x), equals the probability that the random variable (discrete or continuous) X assumes a value less than or equal to x, i.e., F(x) = P(X \le x), so that

F(x) = \sum_{x_i \le x} p(x_i)    (7a)

when X is discrete, and

F(x) = \int_{t \le x} f(t)\,dt = \int_{-\infty}^{x} f(t)\,dt    (7b)

when X is continuous. It should be noted that if X is a continuous random variable with pdf f(x), then

\frac{dF(x)}{dx} = f(x).    (7c)

Note that F(x) must then satisfy the following three properties: (i) F is a nondecreasing function of x, so that F(a) \le F(b) whenever a \le b, (ii)

\lim_{x \to -\infty} F(x) = 0 \quad\text{and}\quad \lim_{x \to +\infty} F(x) = 1,    (8)

and (iii)

P(a < X \le b) = F(b) - F(a) \quad\text{for all } a < b.    (9)
As an example, suppose that f(x) = A e^{-|x|} for -\infty < x < +\infty. The requirement that f integrate to one gives

\int_{-\infty}^{0} A e^{-|x|}\,dx + \int_0^{+\infty} A e^{-|x|}\,dx = 1

or

\int_{-\infty}^{0} A e^{x}\,dx + \int_0^{+\infty} A e^{-x}\,dx = 1

or

\left[A e^{x}\right]_{-\infty}^{0} + \left[-A e^{-x}\right]_0^{+\infty} = 1

or

A + A = 1,

so that A = 1/2 and

f(x) = \frac{1}{2} e^{-|x|}

for -\infty < x < +\infty. (Figure: plot of f(x) versus x, a symmetric peak of height 1/2 at x = 0.)

The cdf is then computed as follows. For x \le 0,

F(x) = \int_{-\infty}^{x} \frac{1}{2} e^{t}\,dt = \frac{1}{2} e^{x},

while for x \ge 0,

F(x) = \frac{1}{2} + \int_0^{x} \frac{1}{2} e^{-t}\,dt = \frac{1}{2} + \frac{1}{2}\left(1 - e^{-x}\right) = 1 - \frac{1}{2} e^{-x},

so that

F(x) = \begin{cases} \frac{1}{2} e^{x}, & \text{for } -\infty < x \le 0 \\ 1 - \frac{1}{2} e^{-x}, & \text{for } 0 \le x < +\infty \end{cases}

(Figure: plot of F(x) versus x.)
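This double-exponential cdf can be spot-checked by simulation; the sketch below (sampler, seed and sample size are our choices) draws from the distribution by inverting F and compares the empirical cdf with the formula:

```python
import math, random

random.seed(0)

def laplace_cdf(x):
    # F(x) = (1/2) e^x for x <= 0 and 1 - (1/2) e^{-x} for x >= 0
    return 0.5 * math.exp(x) if x <= 0 else 1.0 - 0.5 * math.exp(-x)

def laplace_sample():
    # inverse-transform sampling: solve F(x) = u for a uniform u
    u = random.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

n = 200_000
samples = [laplace_sample() for _ in range(n)]
for x in (-1.0, 0.0, 1.5):
    empirical = sum(s <= x for s in samples) / n
    print(x, round(empirical, 3), round(laplace_cdf(x), 3))
```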
2. Functions of a Random Variable

Suppose that Y is a random variable with known pdf h(y) and cdf H(y), and that X = g(Y) for some monotonic function g. Writing y = g^{-1}(x), where g^{-1} is the inverse function of g, when g is always increasing we have

F(x) = P(X \le x) = P(g(Y) \le x) = P(Y \le g^{-1}(x)) = H(g^{-1}(x)) = H(y).

Differentiating via the chain rule then gives

f(x) = \frac{d}{dx} F(x) = \frac{d}{dx} H(y) = H'(y)\,\frac{dy}{dx} = \frac{h(y)}{dx/dy} = \frac{h(y)}{g'(y)}    (10a)

when g is always increasing. On the other hand, when g is always decreasing, then

F(x) = P(g(Y) \le x) = P(Y \ge g^{-1}(x)) = 1 - P(Y < g^{-1}(x))

or

F(x) = 1 - H(y),

so that

f(x) = \frac{d}{dx}(1 - H(y)),

which via the chain rule becomes

f(x) = \frac{d}{dy}(1 - H(y))\,\frac{dy}{dx} = \frac{-H'(y)}{dx/dy} = \frac{-h(y)}{g'(y)},

or

f(x) = -\frac{h(y)}{g'(y)}    (10b)

when g is always decreasing. To combine these into one expression, we note that when g(y) is always increasing, g'(y) > 0, and when g(y) is always decreasing, then g'(y) < 0, and so we may write both expressions for f(x) as

f(x) = \frac{h(y)}{|g'(y)|}    (10c)

or

f(x) = \frac{h(y)}{|g'(y)|} = \frac{h(g^{-1}(x))}{|g'(g^{-1}(x))|}.    (10d)

To summarize we see that if Y is a random variable having known pdf h(y) and if X is a random variable related to Y via X = g(Y), for some monotonic function g, then the pdf of X is given by Equation (10d).
As an example, suppose that

h(y) = \begin{cases} 0, & \text{for } y < 0 \\ 1, & \text{for } 0 \le y < 1 \\ 0, & \text{for } 1 \le y \end{cases}

and suppose that X = g(Y). Then since 0 < Y < 1, we have

R_X = \{g(0) \le X < g(1)\}

if g is increasing, or

R_X = \{g(1) < X \le g(0)\}

if g is decreasing, and hence

f(x) = \frac{h(y)}{|g'(y)|} = \frac{1}{|g'(y)|} = \frac{1}{|g'(g^{-1}(x))|}.
Specifically, when X = a + (b - a)Y (and a < b), then g(y) = a + (b - a)y and g'(y) = b - a, so that

R_X = \{x \mid a + (b-a)(0) \le x < a + (b-a)(1)\} = \{x \mid a \le x < b\}

and

f(x) = \frac{h(y)}{|g'(y)|} = \frac{1}{b-a}.
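This linear change of variables is exactly how uniform random numbers on [a, b) are generated in practice; a small sketch (the endpoints, seed and sample size are our choices):

```python
import random

random.seed(1)

# X = a + (b - a) Y with Y uniform on [0, 1) should be uniform on [a, b)
a, b = 2.0, 7.0
n = 100_000
xs = [a + (b - a) * random.random() for _ in range(n)]

mean = sum(xs) / n                       # should be near (a + b)/2 = 4.5
frac_low = sum(x < 3.0 for x in xs) / n  # P(a <= X < 3) = (3 - a)/(b - a) = 0.2
print(round(mean, 2), round(frac_low, 2))
```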
As another example, when X = g(Y) = Y^3 with Y uniform on [0, 1), then g'(y) = 3y^2 and y = x^{1/3}, so that

f(x) = \frac{h(y)}{|g'(y)|} = \frac{1}{|3y^2|} = \frac{1}{3(x^{1/3})^2} = \frac{1}{3}\,x^{-2/3}

for 0 \le x < 1.
Example #7: A Linear Function Mapping [d, c] → [a, b]

Suppose that Y is a random variable with pdf h(y) defined on the interval [d, c], and suppose that X is a random variable defined on an interval [a, b] via

X = g(Y) = b + \frac{b-a}{d-c}(Y - d) = a + \frac{b-a}{d-c}(Y - c).

Then

f(x) = \frac{h(y)}{|g'(y)|} = \frac{h(y)}{\left|\frac{b-a}{d-c}\right|} = \left|\frac{d-c}{b-a}\right| h(y).

But from

x = a + \frac{b-a}{d-c}(y - c) \quad\text{we have}\quad y = c + \frac{d-c}{b-a}(x - a),

and so

f(x) = \left|\frac{d-c}{b-a}\right| h\left(c + \frac{d-c}{b-a}(x - a)\right).

Note that for the special case when c = 0 and d = 1, this becomes

f(x) = \frac{1}{b-a}\,h\left(\frac{x-a}{b-a}\right).
As an example suppose that Y is a random variable in the unit interval [0, 1] with

h(y) = \frac{(\alpha+\beta+1)!}{\alpha!\,\beta!}\,y^{\alpha}(1-y)^{\beta},

then

f(x) = \frac{1}{b-a}\,\frac{(\alpha+\beta+1)!}{\alpha!\,\beta!}\left(\frac{x-a}{b-a}\right)^{\alpha}\left(1 - \frac{x-a}{b-a}\right)^{\beta}

or simply

f(x) = \frac{(\alpha+\beta+1)!}{\alpha!\,\beta!\,(b-a)^{\alpha+\beta+1}}\,(x-a)^{\alpha}(b-x)^{\beta}

for a \le x \le b.
3. Expectation and Other Moments
If X is a random variable with range space R_X, then the expectation value of X (if it exists) is defined by

E(X) = \sum_{x_i \in R_X} x_i\,p(x_i)    (11a)

if X is discrete,

E(X) = \int_{x \in R_X} x\,f(x)\,dx    (11b)

if X is continuous, and

E(X) = \sum_{x_i \in R_X^D} x_i\,p(x_i) + \int_{x \in R_X^C} x\,f(x)\,dx    (11c)

if X is mixed. The expectation E(X) is also called the mean of X and is denoted by \mu. More generally, the quantity

E((X-c)^n) = \sum_{x_i \in R_X} (x_i - c)^n\,p(x_i)    (13a)

if X is discrete,

E((X-c)^n) = \int_{x \in R_X} (x - c)^n\,f(x)\,dx    (13b)

if X is continuous, and

E((X-c)^n) = \sum_{x_i \in R_X^D} (x_i - c)^n\,p(x_i) + \int_{x \in R_X^C} (x - c)^n\,f(x)\,dx    (13c)

if X is mixed (if it exists), is called the n-th moment of X about the point c. If c = 0, then this is called the nth moment of X, and if c = \mu, this is called the nth moment of X about the mean \mu.
The Variance of a Random Variable X
The second moment of X about its mean \mu is called the variance of X, and is given by

V(X) = E((X - \mu)^2) = E(X^2 - 2\mu X + \mu^2) = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - 2\mu^2 + \mu^2 = E(X^2) - \mu^2,

so that

V(X) = E(X^2) - (E(X))^2.    (14a)

The standard deviation of X is given by

\sigma = \sqrt{V(X)}.    (14b)

Note that the mean \mu of X is a measure of the central tendency of X, and the variance of X is a measure of the spread or variation of possible values of X around the mean. Note that \mu and \sigma are measured in the same units as X.
Modes
A mode of a random variable X is a value of X that occurs most frequently (if X is discrete), or it is a value of X where the pdf is a maximum (if X is continuous). Modes are not defined for mixed distributions.
Example #9: Modes Need Not Be Unique
Note that a mode need not be unique. For example, consider the bimodal distribution

f(x) = \frac{2}{\sqrt{\pi}}\,x^2 e^{-x^2}

for -\infty < x < +\infty, which has modes at both x = -1 and x = +1. (Figure: plot of f(x) versus x, showing two equal peaks.)
4. Some Important Continuous Distributions

Continuous random variables arise whenever the quantity of interest takes values in an interval or a union of intervals, e.g., the time to failure for a light bulb. Ten important distributions are described in the following subsections.
4.1 The Uniform Distribution Over The Interval [a, b)
A random variable X is uniformly distributed over the interval [a, b) if its pdf is given by

f(x) = \begin{cases} \frac{1}{b-a}, & \text{for } a \le x < b \\ 0, & \text{otherwise} \end{cases}    (15a)

(Figure: plot of this pdf for the interval [1, 6), a constant height of 1/5 = 0.2 between x = 1 and x = 6.) The cdf of this distribution is

F(x) = \begin{cases} 0, & \text{for } x \le a \\ \frac{x-a}{b-a}, & \text{for } a \le x \le b \\ 1, & \text{for } b \le x \end{cases}    (15b)

and so

P(x_1 < X < x_2) = F(x_2) - F(x_1) = \frac{x_2 - a}{b-a} - \frac{x_1 - a}{b-a} = \frac{x_2 - x_1}{b-a}    (15c)

is proportional to the length of the interval, for all x_1 and x_2 satisfying a \le x_1 \le x_2 \le b.
The first and second moments are given by

E(X) = \int_{x \in R_X} x f(x)\,dx = \int_a^b x\,\frac{1}{b-a}\,dx = \frac{a+b}{2}

(the midpoint of the interval (a, b)), and

E(X^2) = \int_a^b x^2\,\frac{1}{b-a}\,dx = \frac{a^2 + ab + b^2}{3},    (15d)

so that

V(X) = \frac{a^2 + ab + b^2}{3} - \left(\frac{a+b}{2}\right)^2 = \frac{(b-a)^2}{12}.    (15e)
The uniform distribution plays a vital role in simulation. We shall see that random
numbers, uniformly distributed between 0 and 1, provide the means to generate
random events. We shall see how to generate random numbers and random events
in a later chapter.
As an example, if X is uniformly distributed over [0, 30), then

P(X < 15) + P(X > 20) = \frac{15 - 0}{30 - 0} + \frac{30 - 20}{30 - 0} = \frac{5}{6}.
Example #10: The Ratio min(R, 1 − R)/max(R, 1 − R)

Suppose that R is uniformly distributed over [0, 1) and let

X = \frac{\min(R, 1-R)}{\max(R, 1-R)}.

We shall first compute the cdf F(x) = P(X \le x)
for 0 \le x \le 1, and then use this to compute the pdf of X using f(x) = F'(x). Note also that in computing F(x), we will consider the cases (a) R \le 1/2 and (b) R > 1/2 separately. If R \le 1/2, then \min(R, 1-R) = R and \max(R, 1-R) = 1-R, and

X = \frac{R}{1-R},

and setting this less than or equal to x gives

\frac{R}{1-R} \le x \quad\text{or}\quad R \le \frac{x}{x+1}.

On the other hand, if R > 1/2, then \min(R, 1-R) = 1-R and \max(R, 1-R) = R, and

X = \frac{1-R}{R},

and setting this less than or equal to x gives

\frac{1-R}{R} \le x \quad\text{or}\quad R \ge \frac{1}{x+1}.
Therefore we have

F(x) = P(X \le x) = P(X \le x \mid R \le 1/2)\,P(R \le 1/2) + P(X \le x \mid R > 1/2)\,P(R > 1/2)

= P\left(R \le \frac{x}{x+1} \,\Big|\, R \le \frac{1}{2}\right)\frac{1}{2} + P\left(R \ge \frac{1}{x+1} \,\Big|\, R > \frac{1}{2}\right)\frac{1}{2}

= 2\,\frac{x}{x+1}\cdot\frac{1}{2} + 2\left(1 - \frac{1}{x+1}\right)\frac{1}{2},

which reduces to

F(x) = \frac{2x}{x+1}
for 0 \le x \le 1. Then

f(x) = F'(x) = \frac{2(x+1) - 2x(1)}{(x+1)^2} = \frac{2}{(x+1)^2}

for 0 \le x \le 1 and f(x) = 0 elsewhere. (Figure: plots of f(x) and F(x) versus x for 0 \le x \le 1.) The mean is then

E(X) = \int_0^1 x\,\frac{2}{(x+1)^2}\,dx = 2\left[\ln(x+1) + \frac{1}{x+1}\right]_0^1 = \left(2\ln(2) + 1\right) - \left(2\ln(1) + 2\right),

which leads to

E(X) = 2\ln(2) - 1 \simeq 0.3863,
or, alternatively, since X is a function of R, its mean may also be computed directly as

E(X) = \int_0^1 \frac{\min(R, 1-R)}{\max(R, 1-R)}\,dR = \int_0^{1/2} \frac{\min(R, 1-R)}{\max(R, 1-R)}\,dR + \int_{1/2}^1 \frac{\min(R, 1-R)}{\max(R, 1-R)}\,dR

= \int_0^{1/2} \frac{R}{1-R}\,dR + \int_{1/2}^1 \frac{1-R}{R}\,dR,

which reduces to

E(X) = \left(\ln(2) - \frac{1}{2}\right) + \left(\ln(2) - \frac{1}{2}\right) = 2\ln(2) - 1 \simeq 0.3863,
and this agrees with the result computed earlier. Let us now verify this result by
running 1000 Monte-Carlo simulations on a spreadsheet as done in class.
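The spreadsheet experiment mentioned above can be sketched in Python as well (sample size and seed are our choices, not the text's):

```python
import random, math

random.seed(2)

# X = min(R, 1-R) / max(R, 1-R) for R uniform on [0, 1)
n = 200_000
xs = []
for _ in range(n):
    r = random.random()
    xs.append(min(r, 1 - r) / max(r, 1 - r))

# sample mean versus the exact value 2 ln(2) - 1
mean = sum(xs) / n
print(round(mean, 3), round(2 * math.log(2) - 1, 3))

# the cdf F(x) = 2x/(x+1) can be spot-checked the same way
x0 = 0.5
print(round(sum(x <= x0 for x in xs) / n, 3), round(2 * x0 / (x0 + 1), 3))
```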
Example #11: Geometry Problems - The Pipe Cleaner Problem
Consider a pipe cleaner of length 1, which is simply a bendable wire of length
1. Two points are chosen at random on this wire and then the wire is bent at
these points to see if it is possible to form a triangle, as shown in the following
figure.
To solve this, let the two points be located at R_1 and R_2. We are given that R_1 \sim U[0, 1) and R_2 \sim U[0, 1), so that their pdfs are f_1(x) = 1 for 0 \le x < 1 and f_2(y) = 1 for 0 \le y < 1. We first note that the probability P is given by

P = P((\text{Triangle with } R_1 \le R_2) \cup (\text{Triangle with } R_2 \le R_1)) = P(\text{Triangle with } R_1 \le R_2) + P(\text{Triangle with } R_2 \le R_1)

since the two events (R_1 \le R_2) and (R_2 \le R_1) are disjoint. Assuming first that R_1 \le R_2, the three sides of the possible triangle being formed have lengths R_1, R_2 - R_1 and 1 - R_2, and for these to form the sides of a triangle, we must have

R_1 \le (R_2 - R_1) + (1 - R_2), \quad R_2 - R_1 \le R_1 + (1 - R_2), \quad 1 - R_2 \le R_1 + (R_2 - R_1),

which all reduce to

R_1 \le 1/2, \quad R_2 - R_1 \le 1/2, \quad 1/2 \le R_2.
This is a region in the unit square, and so

P(\text{Triangle with } R_1 \le R_2) = \int_0^{1/2}\int_{1/2}^{x+1/2} f_2(y)\,f_1(x)\,dy\,dx = \int_0^{1/2}\int_{1/2}^{x+1/2} (1)(1)\,dy\,dx = \frac{1}{8}.
Assuming next that R_2 \le R_1, the three sides of the triangle being formed have lengths R_2, R_1 - R_2 and 1 - R_1, and since R_1 and R_2 are identically distributed and independent, we may simply interchange the roles of R_1 and R_2 and use the result of the previous calculation. Thus, the symmetry in the problem leads to

P(\text{Triangle with } R_2 \le R_1) = \frac{1}{8}.

Thus we find that

P = \frac{1}{8} + \frac{1}{8} = \frac{1}{4},
or P = 25%.
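This geometric answer is easy to confirm by simulation; a minimal sketch (seed and sample size are our choices):

```python
import random

random.seed(3)

# break a unit wire at two uniform points; the three pieces form a
# triangle iff every piece is shorter than 1/2
n = 200_000
hits = 0
for _ in range(n):
    r1, r2 = random.random(), random.random()
    lo, hi = min(r1, r2), max(r1, r2)
    sides = (lo, hi - lo, 1 - hi)
    if max(sides) < 0.5:
        hits += 1

print(round(hits / n, 3))  # about 0.25
```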
Example #12: Another Pipe-Cleaner Problem
Consider the same pipe cleaner of the previous example. A point is chosen at
random on this wire and then a second point is chosen at random only to the right
of the first point. The wire is then bent at these two points to see if it is possible
to form a triangle. Determine the probability that a triangle can be formed.
To solve this, let the two points be located at R_1 and R_2 with R_2 to the right of R_1. We are given that R_1 \sim U[0, 1), but this time we have R_2 \sim U[R_1, 1), so that their pdfs are f_1(x) = 1 for 0 \le x < 1 and f_2(y) = 1/(1-x) for x \le y < 1. The three sides of the possible triangle being formed have lengths R_1, R_2 - R_1 and 1 - R_2, and for these to form the sides of a triangle, we must have

R_1 \le (R_2 - R_1) + (1 - R_2), \quad R_2 - R_1 \le R_1 + (1 - R_2), \quad 1 - R_2 \le R_1 + (R_2 - R_1),

which all reduce to

R_1 \le 1/2, \quad R_2 - R_1 \le 1/2, \quad 1/2 \le R_2,

which is a region in the unit square that looks like a right triangle with side lengths 1/2 and 1/2. Thus

P(\text{Triangle with } R_1 \le R_2) = \int_0^{1/2}\int_{1/2}^{x+1/2} f_2(y)\,f_1(x)\,dy\,dx = \int_0^{1/2}\int_{1/2}^{x+1/2} \frac{1}{1-x}\,dy\,dx = \int_0^{1/2} \frac{x}{1-x}\,dx,

which reduces to

P = \ln(2) - \frac{1}{2},

or P \approx 19.3\%.
4.2 The Exponential Distribution With Parameter λ

A random variable X is said to be exponentially distributed with parameter \lambda > 0 if its pdf is given by

f(x) = \begin{cases} 0, & \text{for } x < 0 \\ \lambda e^{-\lambda x}, & \text{for } 0 \le x \end{cases}    (16a)

The cdf of this distribution is given by

F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt,

which reduces to

F(x) = \begin{cases} 0, & \text{for } x < 0 \\ 1 - e^{-\lambda x}, & \text{for } 0 \le x \end{cases}    (16b)

The first and second moments are

E(X) = \int_{x \in R_X} x f(x)\,dx = \int_0^\infty \lambda x e^{-\lambda x}\,dx = \frac{1}{\lambda}    (16c)

and

E(X^2) = \int_0^\infty \lambda x^2 e^{-\lambda x}\,dx = \frac{2}{\lambda^2},

so that

V(X) = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}.    (16d)

Note then that the mean and standard deviation are both equal to 1/\lambda.
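Inverting the cdf (16b) gives the standard way to generate exponential variates: if U is uniform on (0, 1), then X = -\ln(1-U)/\lambda has cdf 1 - e^{-\lambda x}. A sketch (rate, seed and sample size are our choices):

```python
import math, random

random.seed(4)

lam = 2.0  # rate parameter lambda
n = 200_000

# inverse-transform sampling: X = -ln(1 - U)/lambda
xs = [-math.log(1 - random.random()) / lam for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 2), round(var, 2))  # near 1/lambda = 0.5 and 1/lambda^2 = 0.25
```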
The Memoryless Property
The exponential distribution has been used to model interarrival times when
arrivals are completely random and it has been used to model service times which
are highly variable. This is because the exponential distribution is memoryless
which says that
P (X > s + t | X > s) = P (X > t)
(16e)
for all s \ge 0 and t \ge 0. This follows from

P(X > s+t \mid X > s) = \frac{P(X > s+t \text{ and } X > s)}{P(X > s)} = \frac{P(X > s+t)}{P(X > s)},

but

P(X > a) = 1 - P(X \le a) = 1 - (1 - e^{-\lambda a}) = e^{-\lambda a},

and so

P(X > s+t \mid X > s) = \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(X > t).

If X denotes the lifetime of a component (like a light bulb), then the probability that the component lasts an additional t time units does not depend on how long the component has already lasted. This says that a used component whose lifetime follows an exponential distribution is as good as a new component whose lifetime follows the same distribution.
We now note that the exponential distribution is the only continuous distribution which has this memoryless property. To prove this we assume that
P (X > s + t | X > s) = P (X > t)
for all values of s \ge 0 and t \ge 0. This leads to

\frac{P(X > s+t)}{P(X > s)} = P(X > t)

or

P(X > s+t) = P(X > t)\,P(X > s)

or

1 - P(X \le s+t) = (1 - P(X \le t))(1 - P(X \le s))

or simply

1 - F(t+s) = (1 - F(t))(1 - F(s)),

which reduces to

F(t+s) - F(t) = F(s)(1 - F(t)).

If we divide by s we get

\frac{F(t+s) - F(t)}{s} = \frac{F(s)}{s}(1 - F(t)).

Now

\lim_{s \to 0} \frac{F(t+s) - F(t)}{s} = F'(t) = f(t)

exists, while

\lim_{s \to 0} \frac{F(s)}{s}

exists only if F(0) = 0, in which case this limit equals F'(0) = f(0), a positive constant which we call \lambda. Then taking the limit as s \to 0 gives F'(t) = \lambda(1 - F(t)), or

\frac{dF}{1-F} = \lambda\,dt \quad\Rightarrow\quad -\ln(1-F) = \lambda t + C,

and the condition F(0) = 0 gives C = 0, so that

F(t) = \begin{cases} 0, & \text{for } t \le 0 \\ 1 - e^{-\lambda t}, & \text{for } 0 < t \end{cases}

which is the cdf of the exponential distribution.
4.3 The Gamma Distribution With Parameters β and θ

To describe the gamma distribution we first need the gamma function, defined by

\Gamma(\beta) = \int_0^\infty x^{\beta-1} e^{-x}\,dx.    (17)

Integrating by parts gives

\Gamma(\beta) = \left[x^{\beta-1}(-e^{-x})\right]_0^\infty - \int_0^\infty (\beta-1)x^{\beta-2}(-e^{-x})\,dx = 0 + (\beta-1)\int_0^\infty x^{(\beta-1)-1} e^{-x}\,dx = (\beta-1)\Gamma(\beta-1)

for all \beta > 1. Since \Gamma(\beta) = (\beta-1)\Gamma(\beta-1) for all \beta > 1, we see that a table of values of \Gamma(\beta) for 0 < \beta \le 1 is all that is needed to compute \Gamma(\beta) for any value of \beta > 1. For example, using \Gamma(\beta) = (\beta-1)\Gamma(\beta-1) repetitively we have

\Gamma(8.3) = (7.3)\Gamma(7.3)
= (7.3)(6.3)\Gamma(6.3)
= (7.3)(6.3)(5.3)\Gamma(5.3)
= (7.3)(6.3)(5.3)(4.3)\Gamma(4.3)
= (7.3)(6.3)(5.3)(4.3)(3.3)\Gamma(3.3)
= (7.3)(6.3)(5.3)(4.3)(3.3)(2.3)\Gamma(2.3)
= (7.3)(6.3)(5.3)(4.3)(3.3)(2.3)(1.3)\Gamma(1.3)
= (7.3)(6.3)(5.3)(4.3)(3.3)(2.3)(1.3)(0.3)\Gamma(0.3).
A random variable X has a gamma distribution if its pdf is given by

f(x) = \begin{cases} \dfrac{\beta\theta(\beta\theta x)^{\beta-1} e^{-\beta\theta x}}{\Gamma(\beta)}, & \text{for } 0 < x \\ 0, & \text{for } x \le 0 \end{cases}    (18a)

The parameter \beta is called the shape parameter and \theta is called the scale parameter.
The first moment is given by

E(X) = \int_0^\infty x\,\frac{\beta\theta(\beta\theta x)^{\beta-1} e^{-\beta\theta x}}{\Gamma(\beta)}\,dx,

and substituting u = \beta\theta x this becomes

E(X) = \frac{1}{\beta\theta\,\Gamma(\beta)}\int_0^\infty u^{\beta} e^{-u}\,du = \frac{\Gamma(\beta+1)}{\beta\theta\,\Gamma(\beta)} = \frac{\beta\,\Gamma(\beta)}{\beta\theta\,\Gamma(\beta)}

or simply

E(X) = \frac{1}{\theta},    (18b)
while the second moment is

E(X^2) = \int_0^\infty x^2\,\frac{\beta\theta(\beta\theta x)^{\beta-1} e^{-\beta\theta x}}{\Gamma(\beta)}\,dx = \frac{1}{(\beta\theta)^2\,\Gamma(\beta)}\int_0^\infty u^{\beta+1} e^{-u}\,du = \frac{\Gamma(\beta+2)}{(\beta\theta)^2\,\Gamma(\beta)} = \frac{(\beta+1)\beta\,\Gamma(\beta)}{(\beta\theta)^2\,\Gamma(\beta)}

or simply

E(X^2) = \frac{\beta+1}{\beta\theta^2},

and hence the variance of the distribution is given by

V(X) = \frac{\beta+1}{\beta\theta^2} - \frac{1}{\theta^2},

or simply

V(X) = \frac{1}{\beta\theta^2}.    (18c)
Note that the mode of the gamma distribution is obtained by solving

\frac{d}{dx}\left[\frac{\beta\theta(\beta\theta x)^{\beta-1} e^{-\beta\theta x}}{\Gamma(\beta)}\right] = 0,

yielding

(\beta - 1 - \beta\theta x)\,x^{\beta-2} e^{-\beta\theta x} = 0

or

\text{mode} = \frac{\beta-1}{\beta\theta}.    (18d)
If each X_k has the exponential pdf

f(x_k) = \begin{cases} \beta\theta\,e^{-\beta\theta x_k}, & \text{for } 0 \le x_k \\ 0, & \text{for } x_k < 0 \end{cases}

and if all the X_k's are independent, then (for integer \beta) X = X_1 + \cdots + X_\beta has the gamma pdf (18a). Note that the gamma pdf integrates to one, since the substitution t = \beta\theta x gives

\int_0^\infty \frac{\beta\theta(\beta\theta x)^{\beta-1} e^{-\beta\theta x}}{\Gamma(\beta)}\,dx = \frac{1}{\Gamma(\beta)}\int_0^\infty t^{\beta-1} e^{-t}\,dt = \frac{\Gamma(\beta)}{\Gamma(\beta)} = 1

for 0 \le x. Note that for \beta equal to one, the gamma distribution reduces to the exponential distribution with parameter \theta.
Note that if X is a gamma random variable with parameters \beta and \theta and if Y is an exponential random variable with parameter \theta, then

E(X) = \frac{1}{\theta} = E(Y), \quad V(X) = \frac{1}{\beta\theta^2}, \quad V(Y) = \frac{1}{\theta^2},

and so

V(X) = \frac{1}{\beta}\,V(Y) \le V(Y)

when \beta \ge 1, which says that the gamma distribution is less variable than an exponential distribution with the same mean.
4.4 The Erlang Distribution With Parameters k and θ
The gamma distribution with \beta = k (an integer) is called the Erlang distribution of order k. If X is an Erlang random variable with parameters k and \theta, then we may write X as

X = X_1 + X_2 + X_3 + \cdots + X_k,

where the X_i are independent and identically distributed exponential random variables, all with parameter \lambda = k\theta, and so

E(X) = E(X_1) + E(X_2) + E(X_3) + \cdots + E(X_k) = \frac{1}{k\theta} + \frac{1}{k\theta} + \cdots + \frac{1}{k\theta} = \frac{k}{k\theta},

yielding

E(X) = \frac{1}{\theta}.

In addition, since the X_i are independent, we have

V(X) = V(X_1) + V(X_2) + \cdots + V(X_k) = \frac{1}{(k\theta)^2} + \frac{1}{(k\theta)^2} + \cdots + \frac{1}{(k\theta)^2} = \frac{k}{(k\theta)^2},

yielding

V(X) = \frac{1}{k\theta^2}.
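The sum-of-exponentials representation can be checked directly by simulation; a sketch (parameters, seed and sample size are our choices):

```python
import random, math

random.seed(5)

# Erlang of order k with parameter theta: the sum of k independent
# exponentials, each with rate k*theta, so E(X) = 1/theta
k, theta = 3, 0.5
rate = k * theta
n = 100_000

def erlang_sample():
    return sum(-math.log(1 - random.random()) / rate for _ in range(k))

xs = [erlang_sample() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 2), round(var, 2))  # near 1/theta = 2 and 1/(k theta^2) = 1.33
```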
The cdf of the Erlang distribution can be computed by repeated integration by parts. For x > 0,

1 - F(x) = \int_x^\infty \frac{(k\theta)^k\,t^{k-1} e^{-k\theta t}}{(k-1)!}\,dt = \frac{(k\theta x)^{k-1} e^{-k\theta x}}{(k-1)!} + \int_x^\infty \frac{(k\theta)^{k-1}\,t^{k-2} e^{-k\theta t}}{(k-2)!}\,dt,

and repeating this integration by parts until the power of t is reduced to zero gives

\int_x^\infty \frac{(k\theta)^k\,t^{k-1} e^{-k\theta t}}{(k-1)!}\,dt = \sum_{i=1}^{k-1} \frac{(k\theta x)^i e^{-k\theta x}}{i!} + \int_x^\infty k\theta\,e^{-k\theta t}\,dt = \sum_{i=1}^{k-1} \frac{(k\theta x)^i e^{-k\theta x}}{i!} + e^{-k\theta x} = \sum_{i=0}^{k-1} \frac{(k\theta x)^i e^{-k\theta x}}{i!},

and so

F(x) = 1 - e^{-k\theta x}\sum_{i=0}^{k-1} \frac{(k\theta x)^i}{i!}    (19)

for 0 < x, and F(x) = 0 for x \le 0. This is just a sum of Poisson terms with mean \alpha = k\theta x, and so tables of the cumulative Poisson distribution may be used to evaluate the cdf of Erlang distributions.
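Equation (19) is also easy to evaluate directly; a minimal sketch (the function name and test values are ours), which for k = 1 must reduce to the exponential cdf:

```python
import math

def erlang_cdf(x, k, theta):
    # F(x) = 1 - e^{-k theta x} * sum_{i=0}^{k-1} (k theta x)^i / i!   (Equation 19)
    if x <= 0:
        return 0.0
    a = k * theta * x
    return 1.0 - math.exp(-a) * sum(a ** i / math.factorial(i) for i in range(k))

# k = 1 reduces to the exponential cdf 1 - e^{-theta x}
print(round(erlang_cdf(2.0, 1, 0.5), 4), round(1 - math.exp(-1.0), 4))  # both 0.6321
```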
Example #13: Being In The Dark
A box contains four identical light bulbs. Each light bulb has a lifetime distribution that is exponential with a mean of 150 hours. Person A takes two of
these light bulbs into a room and turns them both on at the same time and leaves
them turned on. Person B takes the other two light bulbs into a different room
and turns on only one of the light bulbs and then turns on the other light bulb
only when and if the first light bulb burns out. (a) Compute the probability that
person A will be in the dark at the end of one week (168 hours). (b) Compute the
probability that person B will be in the dark at the end of one week (168 hours).
To solve part (a), let X1 , X2 , X3 , and X4 be the random variables for the
lifetime (in hours) of the four light bulbs so that
F_1(x) = F_2(x) = F_3(x) = F_4(x) = 1 - e^{-\lambda x}

with \lambda = 1/150. Since person A keeps both light bulbs on at the same time, this
person will be in the dark after one week (24 7 = 168 hours) only when
Y = \max(X_1, X_2) \le 168.

Since X_1 and X_2 are independent, this has probability

P(Y \le 168) = P(X_1 \le 168)\,P(X_2 \le 168) = \left(1 - e^{-168/150}\right)^2 \simeq 0.454.

To solve part (b), note that person B will be in the dark at the end of one week only when Z = X_3 + X_4 \le 168, where Z is an Erlang random variable of order k = 2 with k\theta = 1/150, so that \theta = 1/300. Using Equation (19),

F_{sum}(z) = 1 - \sum_{i=0}^{2-1} e^{-2(1/300)z}\,\frac{(2(1/300)z)^i}{i!},

which reduces to

F_{sum}(z) = 1 - \left(1 + \frac{z}{150}\right) e^{-z/150}.

Then

P(Z \le 168) = 1 - \left(1 + \frac{168}{150}\right) e^{-168/150} \simeq 0.3083.
4.5 The Normal Distribution With Parameters μ and σ²

A random variable X is normally distributed with mean \mu and variance \sigma^2, written X \sim N(\mu, \sigma^2), if its pdf is given by

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}    (20a)

for -\infty < x < +\infty. (Figure: a typical graph of f(x) versus x for \mu = 1 and \sigma = 2.) Since f(x) is symmetric about x = \mu and is maximized there, the mode of N(\mu, \sigma^2) occurs at the mean \mu. Note also that the first and second moments of the distribution are given as

E(X) = \int_{-\infty}^{+\infty} x\,\frac{1}{\sigma\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}dx = \mu    (20f)

and

E(X^2) = \int_{-\infty}^{+\infty} x^2\,\frac{1}{\sigma\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}dx = \mu^2 + \sigma^2,

so that V(X) = \sigma^2.
The cdf Of The Normal Distribution
The cdf of the normal distribution is given by

F(x) = P(X \le x) = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^2\right\}dt,

and although it is not possible to evaluate this integral in closed form, various numerical methods and tables exist in order to evaluate F(x). In fact one only needs to tabulate

\Phi(x) = P(X \le x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}t^2\right\}dt    (20g)

for the standard normal distribution N(0, 1), having pdf

\phi(z) = \frac{1}{\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}z^2\right\},

and then for N(\mu, \sigma^2), simple substitution shows that

F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right).    (20h)

Table 5.1 of the text gives \Phi(z) for 0 \le z, and for z < 0 we use the fact that

\Phi(-z) = 1 - \Phi(z).    (20i)
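Instead of tables, Φ can be computed from the error function available in Python's standard library; a sketch (reproducing the P(14 < X < 17) table lookups used below):

```python
import math

def phi(z):
    # standard normal cdf: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(14 < X < 17) for X ~ N(15, 3^2), without tables
p = phi(2 / 3) + phi(1 / 3) - 1
print(round(p, 3))  # about 0.378
```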
Example #14 - A Normal Probability Calculation

Suppose that X \sim N(15, 3^2). Then

P(14 < X < 17) = F(17) - F(14) = \Phi\left(\frac{17-15}{3}\right) - \Phi\left(\frac{14-15}{3}\right)

or

P(14 < X < 17) = \Phi\left(\frac{2}{3}\right) - \Phi\left(-\frac{1}{3}\right) = \Phi\left(\frac{2}{3}\right) - \left(1 - \Phi\left(\frac{1}{3}\right)\right)

or simply

P(14 < X < 17) = \Phi\left(\frac{2}{3}\right) + \Phi\left(\frac{1}{3}\right) - 1.

Using a table of normal probabilities, we have

\Phi\left(\frac{2}{3}\right) = \Phi(0.667) = 0.7476 \quad\text{and}\quad \Phi\left(\frac{1}{3}\right) = \Phi(0.333) = 0.6304,

and so

P(14 < X < 17) = 0.7476 + 0.6304 - 1 = 0.378.
Example #15 - Another Normal Distribution
Lead-time demand, X, for an item is approximated by a normal distribution
with a mean of 25 and a variance of 9. It is desired to determine a value for lead
time that will be exceeded only 5% of the time. Thus the problem is to determine
xo so that P (X > xo ) = 0.05. This leads to
P (X > xo ) = 1 P (X xo ) = 0.05
or
xo 25
.
P (X xo ) = 0.95 =
3
yielding
xo = 25 + 3(1.645) = 29.935.
Therefore, in only 5% of the cases will demand during lead time exceed available
inventory if an order to purchase is made when the stock level reaches 30.
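The inverse lookup z_{0.95} = 1.645 can also be done numerically by bisecting the monotone cdf; a sketch (function names are ours, accuracy is table-grade):

```python
import math

def phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    # bisection on the monotone cdf
    for _ in range(80):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# lead-time demand example: x0 = mu + sigma * z_{0.95}
z = phi_inv(0.95)
x0 = 25 + 3 * z
print(round(z, 3), round(x0, 2))  # about 1.645 and 29.93
```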
34
Ly 0
X 0
Lx 0
Y 0
Ly 0
Lx 0
P
= P
2X
X
2X
2Y
Y
2Y
Lx
Ly
Ly
Lx
2X
2X
2Y
2Y
35
Ly
Lx
1
2
1 .
PHit = 2
2X
2Y
Using the inputs x = 600 meters, y = 300 meters, Lx = 400 meters and
Ly = 200 meters, we get
400
200
PHit =
2
1
2
1
2(600)
2(300)
2
1
1
1
1
2
1 = 2
1 .
=
2
3
3
3
Since
we find that
or PHit ' 0.068.
Z 1/3
1 2
1
1
e 2 z dz ' 0.63056
=
3
2
PHit = (2(0.63056) 1)2
or P_{Hit} \simeq 0.068.

4.6 The Weibull Distribution With Parameters ν, α and β

A random variable X has a Weibull distribution with location parameter \nu, scale parameter \alpha > 0 and shape parameter \beta > 0 if its pdf is given by

f(x) = \frac{\beta}{\alpha}\left(\frac{x-\nu}{\alpha}\right)^{\beta-1}\exp\left\{-\left(\frac{x-\nu}{\alpha}\right)^{\beta}\right\}    (21a)

for x \ge \nu, and f(x) = 0 for x < \nu. Note that for \nu = 0 and \beta = 1 this reduces to the exponential pdf

f(x) = \frac{1}{\alpha}\,e^{-x/\alpha}    (21b)

with mean \alpha. (Figure: plots of the Weibull pdf versus x for several shape parameters.) The mean and variance are given by

E(X) = \nu + \alpha\,\Gamma\left(1 + \frac{1}{\beta}\right)    (21d)

and

V(X) = \alpha^2\left[\Gamma\left(1 + \frac{2}{\beta}\right) - \Gamma^2\left(1 + \frac{1}{\beta}\right)\right].    (21e)
(21e)
Note that the location parameter has no eect on the mean and the cdf of the
Weibull distribution is given by
F (x) = P (X x)
(
1
)
Z x
t
t
=
dt
exp
(
1
)
Z
Z x
t
t
dt
=
(0)dt +
exp
(
)x
t
= exp
37
and so
(
)
x
F (x) = 1 exp
(21f)
for x v and F (x) = 0 for x < . At this point the students should read through
Examples 5.25 and 5.26 of the text.
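The closed-form cdf (21f) makes the Weibull easy to sample by inverse transform; a sketch (parameters, seed and sample size are our choices) checking the sample mean against Equation (21d):

```python
import math, random

random.seed(6)

# inverting F(x) = 1 - exp(-((x - nu)/alpha)^beta) gives
# X = nu + alpha * (-ln(1 - U))^(1/beta) for U uniform on [0, 1)
nu, alpha, beta = 0.0, 2.0, 1.5
n = 100_000
xs = [nu + alpha * (-math.log(1 - random.random())) ** (1 / beta) for _ in range(n)]

mean = sum(xs) / n
expected = nu + alpha * math.gamma(1 + 1 / beta)  # Equation (21d)
print(round(mean, 2), round(expected, 2))
```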
4.7 The Triangular Distribution With Parameters a, b and c
A random variable X has a triangular distribution with minimum value a,
maximum value c, and mode b if its pdf has the form
f(x) = \begin{cases} \dfrac{2(x-a)}{(b-a)(c-a)}, & \text{for } a \le x \le b \\ \dfrac{2(c-x)}{(c-b)(c-a)}, & \text{for } b \le x \le c \end{cases}    (22a)

and zero elsewhere. The first and second moments are given by

E(X) = \int_a^b \frac{2x(x-a)}{(b-a)(c-a)}\,dx + \int_b^c \frac{2x(c-x)}{(c-b)(c-a)}\,dx

or

E(X) = \frac{a+b+c}{3},    (22b)

and

E(X^2) = \int_a^b \frac{2x^2(x-a)}{(b-a)(c-a)}\,dx + \int_b^c \frac{2x^2(c-x)}{(c-b)(c-a)}\,dx,

which reduces to

E(X^2) = \frac{a^2 + b^2 + c^2 + ab + ac + bc}{6},    (22c)

resulting in a variance of

V(X) = \frac{a^2 + b^2 + c^2 + ab + ac + bc}{6} - \left(\frac{a+b+c}{3}\right)^2,

which reduces to

V(X) = \frac{(c-a)(c-a) - (b-a)(c-b)}{18}.    (22d)
(Figure: plot of the triangular pdf, which rises linearly from x = a to a peak of height 2/(c − a) at x = b and falls linearly to zero at x = c.) The cdf of this distribution is given by

F(x) = P(X \le x) = \begin{cases} 0, & \text{for } x \le a \\ \dfrac{(x-a)^2}{(b-a)(c-a)}, & \text{for } a < x \le b \\ 1 - \dfrac{(c-x)^2}{(c-b)(c-a)}, & \text{for } b < x \le c \\ 1, & \text{for } c < x \end{cases}    (22e)
Note for the special case when f(x) is symmetric about the mode b, we have c - b = b - a, so that

b = \frac{a+c}{2}

and

f(x) = \begin{cases} \dfrac{4(x-a)}{(c-a)^2}, & \text{for } a \le x \le b \\ \dfrac{4(c-x)}{(c-a)^2}, & \text{for } b \le x \le c \end{cases}

and

\mu = \frac{a+c}{2} = b

and

F(x) = P(X \le x) = \begin{cases} 0, & \text{for } x \le a \\ \dfrac{2(x-a)^2}{(c-a)^2}, & \text{for } a < x \le b \\ 1 - \dfrac{2(c-x)^2}{(c-a)^2}, & \text{for } b < x \le c \\ 1, & \text{for } c < x \end{cases}

and

\sigma^2 = \frac{(c-a)^2}{24}.
Note that the triangular distribution is used when the minimum, maximum and mode of a distribution are known. Note also that the median of a distribution is the value of x for which F(x) = 0.5.
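Combining Equation (22e) with that definition, the median can be found by solving F(x) = 1/2 numerically; a sketch (function names and the test parameters are ours):

```python
import math

def tri_cdf(x, a, b, c):
    # cdf of the triangular distribution, Equation (22e)
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x <= c:
        return 1.0 - (c - x) ** 2 / ((c - b) * (c - a))
    return 1.0

def tri_median(a, b, c):
    # solve F(x) = 1/2 by bisection on the monotone cdf
    lo, hi = a, c
    for _ in range(60):
        mid = (lo + hi) / 2
        if tri_cdf(mid, a, b, c) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a, b, c = 0.0, 1.0, 4.0
print(round(tri_cdf(b, a, b, c), 2))  # F(b) = (b-a)/(c-a) = 0.25
print(round(tri_median(a, b, c), 3))
```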
4.8 The Trapezoidal Distribution With Parameters a, b, c and d

This distribution generalizes the uniform and triangular distributions. A continuous random variable X has a trapezoidal distribution with parameters a \le b \le c \le d if the graph of its pdf is a trapezoid connecting the points

(a, 0) \to (b, h) \to (c, h) \to (d, 0).

The pdf of this distribution is given by

f(x) = \begin{cases} h\,\dfrac{x-a}{b-a}, & \text{for } a \le x \le b \\ h, & \text{for } b \le x \le c \\ h\,\dfrac{d-x}{d-c}, & \text{for } c \le x \le d \end{cases}    (23a)

and zero elsewhere, with

h = \frac{2}{(c+d) - (a+b)},    (23b)

chosen so that the area of the trapezoid under f equals one. The mean is

E(X) = \frac{1}{3}\,\frac{(c^2 + cd + d^2) - (a^2 + ab + b^2)}{(c+d) - (a+b)},    (23c)

and the cdf is given as

F(x; a, b, c, d) = \begin{cases} 0, & \text{for } x \le a \\ \dfrac{1}{2}h\,\dfrac{(x-a)^2}{b-a}, & \text{for } a \le x \le b \\ \dfrac{1}{2}h(2x - a - b), & \text{for } b \le x \le c \\ 1 - \dfrac{1}{2}h\,\dfrac{(d-x)^2}{d-c}, & \text{for } c \le x \le d \\ 1, & \text{for } d \le x \end{cases}    (23d)
4.9 The Beta Distribution Over [a, b] With Parameters α and β

A random variable X has a beta distribution over the interval [a, b] with parameters \alpha and \beta if its pdf is given by

f(x) = \frac{(\alpha+\beta+1)!}{\alpha!\,\beta!\,(b-a)^{\alpha+\beta+1}}\,(x-a)^{\alpha}(b-x)^{\beta}

for x in the range space

R_X = \{x \mid a \le x \le b\}.

(Figure: an example plot of the pdf using a = 1, b = 3, \alpha = 2 and \beta = 3.) The cdf is given by

F(x) = \frac{(\alpha+\beta+1)!}{\alpha!\,\beta!}\int_a^x \frac{(t-a)^{\alpha}(b-t)^{\beta}}{(b-a)^{\alpha+\beta+1}}\,dt = \Phi\left(\frac{x-a}{b-a};\,\alpha, \beta\right),

where

\Phi(z; \alpha, \beta) = \frac{(\alpha+\beta+1)!}{\alpha!\,\beta!}\int_0^z u^{\alpha}(1-u)^{\beta}\,du.    (24d)
4.10 The Lognormal Distribution With Parameters μ and σ²

A random variable X has a lognormal distribution with parameters \mu and \sigma^2 if \ln(X) \sim N(\mu, \sigma^2), so that its pdf is given by

f(x) = \frac{1}{\sigma x\sqrt{2\pi}}\,\exp\left\{-\frac{1}{2}\left(\frac{\ln(x)-\mu}{\sigma}\right)^2\right\}    (25a)

for 0 < x, and zero otherwise. The mean and variance of this distribution are given by

\mu_L = e^{\mu + \sigma^2/2} \quad\text{and}\quad \sigma_L^2 = e^{2\mu+\sigma^2}\left(e^{\sigma^2} - 1\right).    (25b)

Inverting these, a desired mean \mu_L and variance \sigma_L^2 are obtained by choosing

\mu = \ln\left(\frac{\mu_L^2}{\sqrt{\mu_L^2 + \sigma_L^2}}\right) \quad\text{and}\quad \sigma^2 = \ln\left(\frac{\mu_L^2 + \sigma_L^2}{\mu_L^2}\right).    (25c)
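The inversion (25c) is exactly what one uses when fitting a lognormal to a target mean and variance; a sketch (function name and the test values are ours) that round-trips through Equation (25b):

```python
import math

def lognormal_params(mean_l, var_l):
    # invert Equation (25b): given the desired mean and variance of X,
    # return (mu, sigma^2) of the underlying normal ln(X)
    mu = math.log(mean_l ** 2 / math.sqrt(mean_l ** 2 + var_l))
    sigma2 = math.log((mean_l ** 2 + var_l) / mean_l ** 2)
    return mu, sigma2

mu, sigma2 = lognormal_params(2.0, 3.0)

# round-trip check against Equation (25b)
mean_back = math.exp(mu + sigma2 / 2)
var_back = math.exp(2 * mu + sigma2) * (math.exp(sigma2) - 1)
print(round(mean_back, 6), round(var_back, 6))  # 2.0 and 3.0
```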
4.11 The Cauchy Distribution
A random variable X is said to have a Cauchy distribution with parameters
a > 0 and -\infty < b < +\infty, if its density function is given by
f(x) = \frac{a/\pi}{a^2 + (x-b)^2}

for -\infty < x < +\infty.
(Figure: plot of the Cauchy density versus x.) For b = 0 and a = L, this takes the form

f(x) = \frac{L/\pi}{L^2 + x^2}.