


2  PROBABILISTIC DESIGN

2.1 INTRODUCTION

In the design of a product for mass-production we are faced with the challenge that every item produced will be different. These differences will be slight to the casual observer, but may combine in the individual items to give vastly different performance characteristics, and thus impact the perceived quality of the product. These differences are caused by, among many other things, drift in machine settings, batch variability in material properties and operator input.
The value of each design parameter embodied in any item is there-
fore likely to be different from the value in any other item. If we mea-
sure the values of a design parameter (a length, say) in all the items in a
production run we will get data on the frequency of occurrence of the
values of the parameter. If there are sufficient data values we can rescale
the frequency to give a probability. Design parameters may thus be
viewed as random variables.
Most physical variables used in engineering design are in fact ran-
dom variables. Standard calculations are really calculations with their
mean values. If we are interested in the possible range of values our re-
sult might have, then we must use more information in our calculation
algorithm than the mean values alone.
The classical approach to design is to apply safety factors to each
design parameter to allow for uncertainties. If the design is complex,
these safety factors can compound to cause overdesign with an uncer-
tain reliability. And in some important cases, where there is an upper and lower specification or functional limit, the safety factor method cannot be used at all.
Probabilistic design studies how to make calculations with the
probability distributions of the design parameters, instead of the nomi-
nal or mean values only. This will then allow the designer to design for
a specific reliability or specification conformance, and hence maximize
safety, quality and economy.
Design parameters are usually independent random variables. Each type of parameter will have a distribution. Common distributions for design parameters are the normal, log-normal, Poisson, uniform, triangular, exponential and Weibull distributions.

2.2 TYPES OF PROBABILITY DISTRIBUTIONS

Briefly described below are the types of probability distributions more commonly found in engineering.

2.2.1 Normal

The distribution is symmetric and bell-shaped.
The variable may itself be the sum of a large number of individual effects.
Example: Heights of the adult male population.
Example: Dimension of a fabricated part.

2.2.2 Lognormal

The variable can increase without bound, but is limited to a finite value at the lower limit.
The distribution is positively skewed (most of the values being closer to the lower limit).
The logarithm of the variable yields a normal distribution.
Example: Real estate values, river flow rates, strengths of materials, fracture toughness.

2.2.3 Weibull

A distribution possessing three parameters enabling it to be
adjusted to cover all stages of the bathtub reliability curve.
A shape parameter of 1 gives an exponential distribution.
A shape parameter of 3.25 gives an approximation to the normal.
Finds principal application in situations involving wear, fatigue
and fracture.

Example: Failure rates, life-time expectancies.

2.2.4 Exponential

Describes the amount of time between occurrences.
Complements the Poisson distribution (which describes the number of occurrences per unit time).
Example: Time between telephone calls.
Example: Mean time between failures.

2.2.5 Triangular

Used when the only information known is the minimum, the most likely, and the maximum values of a variable.
Example: Item costs from different suppliers or future estimation.

2.2.6 Uniform

All values between the minimum and maximum are equally likely.
Example: A number from a random number generator.

2.2.7 Poisson (discrete)

Describes the number of times an event occurs in a given interval.
The number of possible occurrences in the interval is not limited.
The occurrences are independent.
The average number of occurrences is fixed.

Example: Number of telephone calls per minute.
Example: Number of errors per page in a document.
Example: Number of defects per square metre in sheets of steel.

2.2.8 Binomial (discrete)

Describes the number of successes in a fixed number of trials.
For each trial only two outcomes are possible - success or failure.
The trials are independent.
The probability of success remains the same from trial to trial.
Example: Number of heads in ten tosses of a coin.
Example: Number of defective items in a given batch, given that the average rate of producing defectives is known.

2.2.9 Geometric (discrete)

Describes the number of trials until the first successful occurrence.
The number of trials is not fixed; trials continue until the first success.
The probability of success is the same from trial to trial.
Example: Number of times to spin a roulette wheel before you win.
Example: Number of wells you would dig before the next gusher.

2.2.10 Custom

Used to describe a unique situation that cannot be described by any
of the standard distributions.
The area under the curve should equal 1.

2.2.11 Comparison of distributions

Poisson: Number of times an event occurs in a given interval.
Exponential: Interval until next occurrence of event.
Binomial: Number of successes in a fixed number of trials.
Geometric: Number of trials until the next success.
Large number of trials: Binomial approaches normal.
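As an aside, the distributions listed above are all available in standard numerical libraries; a minimal Python sketch (assuming NumPy; every parameter value here is an illustrative choice, not taken from the text):

# Draw samples from the common design-parameter distributions and summarize them.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
samples = {
    "normal":      rng.normal(loc=10.0, scale=1.0, size=n),
    "lognormal":   rng.lognormal(mean=0.0, sigma=0.5, size=n),
    "weibull":     rng.weibull(a=3.25, size=n),        # shape 3.25: close to normal
    "exponential": rng.exponential(scale=2.0, size=n),
    "triangular":  rng.triangular(left=1.0, mode=2.0, right=3.0, size=n),
    "uniform":     rng.uniform(low=0.0, high=1.0, size=n),
    "poisson":     rng.poisson(lam=3.0, size=n),
}
for name, x in samples.items():
    print(f"{name:12s} mean={x.mean():7.3f}  variance={x.var():7.3f}")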
2.3 DESCRIBING PROBABILITY DISTRIBUTIONS

2.3.1 Types of description

The types of description we will use for describing probability dis-
tributions include its parameters, its probability density function (pdf),
its cumulative distribution function (cdf), and its set of moments.
The parameters of a given type of distribution are the mathematical parameters in the formula for the distribution (not to be confused with design parameters).
The probability density function describes the basic shape and location of the distribution. The graphs shown in the previous section are probability density functions.
The cumulative distribution function allows us to read off the area under the probability density function in a given range. This area represents the probability that the random variable will lie in this range.
It is the probability density function, the parameters which describe it, and the first few of its moments that will be of most use to us in probabilistic design.
For brevity we may use the terms pdf, distribution, or density
function instead of probability density function.
Since the distribution is the main description of a random variable
we will sometimes use the terms interchangeably.

2.3.2 Properties of probability density functions

In order to be called a probability density function, a function must
have the following properties: (Any function that looks like a blob of
goo on the axis is probably a good candidate)
It is indeed a function (no undercuts)
The area between it and the axis is unity
The support of the probability density function is the domain of the random variable over which the function is defined.
The full definition of a probability density function comprises the specification of its formula and its support.


2.3.3 Functions of random variables

Functions of random variables are central to the design of products
for quality and reliability. Since the performance of a product is gener-
ally a function of its design parameters, and the design parameters are
random variables, the performance is a function of random variables,
and is thus itself a random variable.
A central tool in the design of quality products therefore, is the
ability to calculate functions of random variables.

2.3.4 Notation

Generally we will denote a probability density function of a random variable x by f(x). However, when we are considering a function z = g(x) we will distinguish the two probability density functions by denoting them f_x(x) and f_z(z).

2.3.5 In sum

Design parameters and the quality variables which depend on them
are most often random variables which we describe by probability den-
sity functions.
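To make the pdf/cdf relationship concrete, here is a minimal numerical sketch in Python (assuming NumPy is available; the triangular distribution and the interval used are illustrative choices):

# Build a cdf by numerically integrating a pdf, then read a probability off it.
import numpy as np

def triangular_pdf(x, a=1.0, m=2.0, b=3.0):
    # pdf of a triangular distribution with lower limit a, mode m, upper limit b
    up = 2.0 * (x - a) / ((b - a) * (m - a))
    down = 2.0 * (b - x) / ((b - a) * (b - m))
    return np.select([(x >= a) & (x <= m), (x > m) & (x <= b)], [up, down], default=0.0)

x = np.linspace(0.5, 3.5, 2001)
pdf = triangular_pdf(x)
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))])
print("total area under pdf ~", cdf[-1])                               # should be ~1
print("P(1.5 <= x <= 2.5) ~", np.interp(2.5, x, cdf) - np.interp(1.5, x, cdf))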

Problem 2.1 Uniformly distributed design parameter

A design parameter is a random variable uniformly distributed be-
tween 1 and 3. Sketch its probability density function and its cumulative
distribution function showing pertinent values on the axes.

2.4 GRAPHICAL FUNCTIONS OF A RANDOM
VARIABLE

In this section we shall describe the concept of a function of a ran-
dom variable in graphical terms.
The most important attribute of a function when applied to a ran-
dom variable is whether it has an inverse over the support of the density
function of the random variable. If it does, then it is straightforward to
compute the function of the random variable. If not, then the function
must be broken up into pieces so that each piece has an inverse, the transformation associated with each piece applied, and the results summed.

2.4.1 The concept

Suppose x is a random variable with probability density function f_x(x), and that z = g(x) is an invertible function of x over the support of f_x(x).
The central concept is as follows:
The probability of x being in the interval [x_1, x_2] is equal to the probability that z is in the interval [z_1, z_2] = [g(x_1), g(x_2)].
Geometrically, this is equivalent to saying that the area under the probability density function of x in the interval [x_1, x_2] is equal to the area under the probability density function of z in the interval [g(x_1), g(x_2)].


2.4.2 The fundamental formula

Equating the two probabilities (areas) we obtain
A = | f_z(z) dz | = | f_x(x) dx |
=> f_z(z) = f_x(x) / | dz/dx |
Note that because the probability (area) is always positive, the same relationship will exist whether the gradient of the function g(x) is positive or negative. Hence we always take the absolute value of the derivative dz/dx.
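A minimal Monte Carlo sketch (assuming NumPy) of this relationship; the Uniform(1, 3) input and the square transformation are illustrative choices:

# Check f_z(z) = f_x(x)/|dz/dx| for x ~ Uniform(1, 3) and z = x**2.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 3.0, size=1_000_000)
z = x**2                                     # transformed random variable

# From the formula: f_x = 1/2, dz/dx = 2x = 2*sqrt(z)
zz = np.linspace(1.2, 8.8, 5)
fz_formula = (1.0 / 2.0) / (2.0 * np.sqrt(zz))

# Empirical density from a histogram of the simulated z values
hist, edges = np.histogram(z, bins=200, range=(1.0, 9.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
fz_empirical = np.interp(zz, centres, hist)

print(np.round(fz_formula, 4))
print(np.round(fz_empirical, 4))             # should agree to roughly 2 decimals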

2.4.3 Dimensional considerations

Probability is dimensionless. However, x and z may have (different) dimensions (units) [x] and [z], say. The probability density functions f_x(x) and f_z(z) must have dimensions 1/[x] and 1/[z] respectively.
This fact is consistent with the formula above and may be used as a check on the correctness of any functional transformation.

2.4.4 Examples

This simple relationship between area elements of the two density
functions may be used to perform a graphical determination of a func-
tion of a random variable. It is of course generally more accurate to de-
termine the result analytically, however it is useful to be able to
visualize the process graphically.

A linear function through zero



A general linear function


A concave function



A convex function

Note that the lower gradient of the transformation function leads to a concentration of the probability in the corresponding region of the transformed probability density function.
Much of our success in the design of quality products will depend on our being able to tune the design to make use of these regions of low gradient, hence minimizing the variability of the design output distributions.

Problem 2.2 Sketching distributions

Sketch yourself a distribution and a transformation function.
Sketch the shape of the resulting transformed distribution.

Problem 2.3 Sound output

The sound output of a product has been determined to follow a tri-
angular distribution with mode 2 units, lower limit 1 unit and upper limit
3 units.
Graphically determine the probability density function for the (nat-
ural) logarithm of the sound output.

2.5 ANALYTICAL FUNCTIONS OF A RANDOM
VARIABLE
2.5.1 Definition of an invertible function
The process above is straightforward if, over the support of x, there
is only one value of x for each value of z. Functions with this property
are called invertible.
2.5.2 Non-invertible functions of a random variable
If the function is not invertible, the following process may be ap-
plied:
1. Break the function up into piecewise invertible pieces over intervals [x_i, x_j].
2. For each piece, follow the procedure below for an invertible function. The result will be valid over the interval [g(x_i), g(x_j)] (or [g(x_j), g(x_i)], whichever is in the correct order), and zero outside of it.
3. Add the results.
Functions which are constant (flat) over an interval give rise to a discrete jump in the probability density function of z.
2.5.3 Examples of invertible and non-invertible functions
Invertible functions

Non-invertible functions

2.5.4 Invertible functions of a random variable


The procedure for calculating an invertible function of a random variable is as follows:
Given:
A. A probability density function: f_x(x), x_1 <= x <= x_2
B. A transformation function: g(x)
1. Calculate dz/dx from z = g(x)
2. Solve for x in terms of z to get x = g^-1(z) = h(z). (There should be only one solution since the function is invertible)
3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z)
4. Determine the new support: g(x_1) <= z <= g(x_2)
We apply this procedure to some simple cases below. We assume a general (undefined) pdf f_x(x) transformed by an invertible function g(x) which we can differentiate. The original probability density function is shown in light grey and the result of the function (or transformation) in darker grey.
Addition of a constant: [z = x + a]
1. dz/dx = 1
2. h(z) = z - a
3. f_z(z) = f_x(z - a)
4. x_1 + a <= z <= x_2 + a
Geometrically, the addition of a constant to a random variable simply gives another random variable all values of which are increased (displaced to the right) by that constant.
Example: The conversion of a random temperature expressed in Celsius to one expressed in Kelvin.
Multiplication by a constant: [z = a x]
1. dz/dx = a
2. h(z) = z/a
3. f_z(z) = f_x(z/a) / |a|
4. a x_1 <= z <= a x_2
Geometrically, the multiplication of a random variable by a constant simply gives another random variable all values of which are multiplied (stretched to the right) by that constant.
Example: The conversion of a dimension expressed in metres to one expressed in millimetres.
The general linear transformation: [z = a x + b]
1. dz/dx = a
2. h(z) = (z - b)/a
3. f_z(z) = f_x((z - b)/a) / |a|
4. a x_1 + b <= z <= a x_2 + b
Geometrically, a general linear function of a random variable produces both a shift and a change in scale. The form of the function remains the same.
Example: The conversion of a random temperature expressed in Celsius to one expressed in Fahrenheit.
The exponential transformation: [z = e^x]
1. dz/dx = e^x
2. h(z) = ln z
3. f_z(z) = f_x(ln z) / |z|
4. exp(x_1) <= z <= exp(x_2)
Example: The conversion of a variable expressed on a logarithmic
scale back to one expressed on a linear scale.
2.5.5 Example
Exponential transformation of a Uniform distribution
A. Probability density function: f_x(x) = 1/(b - a), a <= x <= b, a > 0
B. Transformation function: z = c exp(k x) where c and k are constants
1. Calculate dz/dx from z = c exp(k x):
dz/dx = k c exp(k x)
2. Solve for x in terms of z to get x = g^-1(z) = h(z):
x = h(z) = ln(z/c)/k
3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z):
f_z(z) = (1/(b - a)) (1/|k c exp(k x)|)
= (1/(b - a)) (1/|k c exp(k (ln(z/c)/k))|)
= (1/(b - a)) (1/|k z|)
4. Determine the new support for f_z(z):
c exp(k a) <= z <= c exp(k b)
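A minimal simulation sketch (assuming NumPy) that checks this worked example; the constants a, b, c and k below are illustrative:

# Simulate x ~ Uniform(a, b), z = c*exp(k*x), and compare with f_z(z) = 1/((b-a)|k z|).
import numpy as np

a, b, c, k = 1.0, 3.0, 2.0, 0.5
rng = np.random.default_rng(2)
z = c * np.exp(k * rng.uniform(a, b, size=1_000_000))

zz = np.linspace(c * np.exp(k * a) * 1.05, c * np.exp(k * b) * 0.95, 5)
fz_formula = 1.0 / ((b - a) * np.abs(k * zz))

hist, edges = np.histogram(z, bins=200, density=True)
fz_empirical = np.interp(zz, 0.5 * (edges[:-1] + edges[1:]), hist)
print(np.round(fz_formula, 4))
print(np.round(fz_empirical, 4))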
Problem 2.4 Sphere volume
A manufacturer makes spheres to meet a specification on the volume. The process is known to deliver spheres with their diameters normally distributed with mean 10 mm and standard deviation 1 mm.
1. Determine the formula for the probability density function of the volume.
2. Compare the true mean volume with the approximate mean volume calculated from (π/6)·10^3 mm^3, the volume at the mean diameter. (Advanced exercise).
Problem 2.5 Alloy steel
The percentage x of an alloy in a steel is exponentially distributed with probability density function
f_x(x) = a exp(-a x), 0 <= x < infinity, a constant.
The ultimate tensile strength of the steel, z, is logarithmically related to the percentage of alloy by z = log_e(x/b).
Derive the formula for the probability density function f_z(z) of the ultimate tensile strength, and state its support.
2.6 MOMENTS
Moments of a distribution are a way of summarizing the important
characteristics of a distribution as single numbers, without having to
cope with too much detail. The first few (lower order) moments are generally of most interest to us. An analogy might be to the reduction of a vibration trace to its first few harmonics.
More precise mechanical analogies are
1. The mean is the centre of area of the distribution - summarizing
the location properties of the distribution.
2. The variance is the second moment of area of the distribution
about the mean - summarizing the way in which the area is spread over
the object.
Because we generally lack detailed information about the probability density functions of our design parameters, we will usually be making our calculations with the first few moments, often just the mean and variance.
Following are some definitions of moments and coefficients based on them.
2.6.1 (Non-central) moments
The nth (non-central) moment of a distribution f(x) about the origin is
μ'_(n)x = ∫ x^n f(x) dx
The first non-central moment is called the mean.
The mean of a random variable x will be denoted μ_x, or simply μ where the context is clear.
The mean is also the expectation of x, denoted E[x].
2.6.2 Central moments
The nth central moment μ_(n)x of a distribution f(x) about the mean of the distribution is
μ_(n)x = ∫ (x - μ_x)^n f(x) dx
The first central moment of any distribution is zero.
The second central moment is called the variance, denoted v_x.
The third central moment is called the skew, denoted s_x.
The fourth central moment is called the kurtosis, denoted k_x.
Since we will be dealing mostly with central moments, we will often refer to them simply as moments.
2.6.3 Variance
The variance is, after the mean, the most important moment of a
distribution. Its unit is the square of the unit of the random variable and
hence is always positive. It measures the spread of the distribution. A
zero variance thus implies a deterministic variable.
2.6.4 Skew
The skew is the next most important moment. Its unit is the cube of
the unit of the random variable and hence may be positive or negative.
A positively skewed distribution has its longer tail to the right. A nega-
tively skewed distribution has its longer tail to the left. We will some-
times use the skew to test how valid it is to assume a given distribution
is symmetric (and hence perhaps approximatable by a Normal distribu-
tion).
2.6.5 Kurtosis
We include here the kurtosis mainly for completeness. Since the
kurtosis measures the squatness of the distribution, it is useful for dif-
ferentiating different types of symmetric distributions (for example the
Normal and the Uniform). However since most of the distributions we
will be using are bell-shaped, we will not use the kurtosis much. It is al-
ways positive.

2.6.6 Standard deviation
The standard deviation of a distribution is the (positive) square root
of the variance. The standard deviation has the same dimensions as the
mean but it is the variance that is the more fundamental quantity.
The standard deviation of a random variable x is denoted σ_x.
2.6.7 Coefficient of variation
The coefficient of variation is the ratio of the standard deviation to the mean, and is thus a measure of the relative spread of the distribution. This ratio is dimensionless and so may often be used to cast formulae in a dimensionless form.
The coefficient of variation of a random variable x is σ_x/μ_x.
2.6.8 Variance ratio
The variance ratio is the (dimensionless) ratio of the variance to the square of the mean. We will find this measure of relative spread to occur more commonly in our applications than the coefficient of variation. The variance ratio will be denoted by u_x (= v_x/μ_x^2).
2.6.9 Coefficient of skewness
The coefficient of skewness is the (dimensionless) ratio of the skew to the cube of the standard deviation. The normal distribution has a coefficient of skewness of 0. The exponential distribution has a coefficient of skewness of 2.
2.6.10 Coefficient of kurtosis
The coefficient of kurtosis is the (dimensionless) ratio of the kurtosis to the fourth power of the standard deviation (the square of the variance). The coefficient of kurtosis measures the peakedness of the type of distribution. Uniform distributions have a kurtosis coefficient of 1.8, triangular of 2.4, normal of 3, and exponential of 9.
2.6.11 Terminology
There are varying definitions in the literature for skew and kurtosis and their dimensionless ratios. It is wise to check the definition the author is using.
2.6.12 A note on notation
In situations where there are several random variables, for example, x, y, ..., we will use μ_x, μ_y, ... for the mean of x, y, ..., and v_x, v_y, ... for their variance. If we are dealing with a single random variable, we will often drop the subscripts.
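The quantities defined in this section can be estimated directly from sample data; a minimal Python sketch (assuming NumPy; the lognormal sample is an illustrative stand-in for measured data):

# Compute moment-based summaries of a sample of a design parameter.
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.3, size=200_000)   # positively skewed example data

mu = x.mean()
d = x - mu
v = np.mean(d**2)          # variance (second central moment)
s = np.mean(d**3)          # skew (third central moment)
k = np.mean(d**4)          # kurtosis (fourth central moment)
sigma = np.sqrt(v)         # standard deviation

print("mean", mu, "variance", v)
print("coefficient of variation", sigma / mu, "variance ratio u", v / mu**2)
print("coefficient of skewness", s / sigma**3)
print("coefficient of kurtosis", k / sigma**4)   # ~3 for normal, larger here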
2.7 THE NORMAL DISTRIBUTION
The normal distribution is the most important distribution in the ap-
plication of probability theory to science and engineering. The Central
Limit Theorem (to be discussed later) tells us that the Normal distribu-
tion has an interesting involvement in the description of complex prob-
abilistic systems.
It will be worth getting a good intuitive feel for its properties.
It is symmetric.
Its support is from -infinity to +infinity.
99.7% of the distribution lies within ±3 standard deviations of the mean.
95% of the distribution lies within ±2 standard deviations of the mean.
68% of the distribution lies within ±1 standard deviation of the mean. The inflection points of the curve are at these points.
Because of its symmetry, its odd central moments are zero.
Its even central moments are given by (where v is the variance):
{v, 3 v^2, 3x5 v^3, 3x5x7 v^4, 3x5x7x9 v^5, ...}
= {v, 3 v^2, 15 v^3, 105 v^4, 945 v^5, ...}
Its probability density function is
f(x) = (1/(σ sqrt(2π))) exp( -(1/2) ((x - μ)/σ)^2 )
The graph of its probability density function for μ = 0 and σ = 1 is the familiar bell curve (figure not reproduced).
Its cumulative distribution function is
F(x) = (1/2) (1 + Erf((x - μ)/(σ sqrt(2))))
The graph of its cumulative distribution function for μ = 0 and σ = 1 is an S-shaped curve rising from 0 to 1 (figure not reproduced).
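A minimal sketch (Python standard library only) evaluating this pdf and cdf and confirming the coverage figures quoted above:

import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

for n_sigma in (1, 2, 3):
    coverage = normal_cdf(n_sigma) - normal_cdf(-n_sigma)
    print(f"within +/-{n_sigma} sigma: {coverage:.4f}")   # 0.6827, 0.9545, 0.9973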
Problem 2.6 Probability of a continuous random variable
1. What is the probability that a normally distributed random variable has its mean value?
2. What is the probability that a normally distributed random variable lies between μ and μ + 2σ?
3. What is the probability that a normally distributed random variable is greater than μ + 6σ?
Problem 2.7 Sketching a Normal distribution
Sketch carefully a normal distribution with mean 9 and variance 9.
A random variable is distributed as above. What is the probability
that it is less than zero?
2.8 MEANS FROM NOMINAL VALUES
The usual design specification on a parameter is given by a nominal value and a tolerance. The nominal value is usually the value given as n in the specification [n - t_1, n + t_2]. The question arises: Given only a specification on a parameter in this form, what should we assume the mean value of the parameter to be?
Until more research is done in this area, we propose that the mean be estimated as the mid-point of the specification range, μ = n + (t_2 - t_1)/2.
2.9 STANDARD DEVIATIONS FROM TOLERANCES
While mean values are often easy to find from data sources, it is usually more difficult to obtain an estimate of the variance (or standard deviation) of a design parameter.
This section discusses some rules of thumb for estimating standard deviations from tolerances.
2.9.1 Estimation from tolerance range
If we know that the parameter is approximately normally distributed and the proportion of product that is expected to lie within a certain tolerance range, then a rule of thumb for estimating the random variable's standard deviation from the properties of the normal distribution is:
If we expect 68% to lie within ± t, then set σ_x = t.
If we expect 95% to lie within ± t, then set σ_x = t/2.
If we expect 99.7% to lie within ± t, then set σ_x = t/3.
2.9.2 Estimation from limited data
A rule of thumb which enables standard deviations to be estimated from limited data is given by Haugen:
If the estimate of the tolerance t that is required is obtained:
From about 4 samples, then set σ_x = t.
From about 25 samples, then set σ_x = t/2.
From about 500 samples, then set σ_x = t/3.
2.9.3 Estimation from knowledge of manufacturer
A further rule of thumb given by Shooman is:
If the product is being made by:
Commercial, early development, little known or inexperienced manufacturers, then set σ_x = t.
Military, mature, reputable, or experienced manufacturers, then set σ_x = t/3.
2.10 THE EXPECTATION OPERATOR
The expectation of a function g(x, y, ...) of random variables x, y, ... with joint probability density function f(x, y, ...) is denoted E[g(x, y, ...)] and is defined as the integral:
E[g(x, y, ...)] = ∫∫... g(x, y, ...) f(x, y, ...) dx dy ...
The following properties may be proven from the definition:
2.10.1 The expectation of sums and products
Constant
The expectation of a constant c is the constant itself. That is
E[c] = c
Linear sum
If a, b, ... are constants, then
E[a g_1(x, y, ...) + b g_2(x, y, ...) + ...] = a E[g_1(x, y, ...)] + b E[g_2(x, y, ...)] + ...
Product of independent random variables
If x, y, ... are independent random variables, then
E[g_1(x) g_2(y) ...] = E[g_1(x)] E[g_2(y)] ...
2.10.2 Relation of moments to the Expectation
Non-central moments as Expectations
E[1] = 1
E[x] = μ_x
E[x^n] = μ'_(n)x
Central moments as Expectations
E[x - μ_x] = 0
E[(x - μ_x)^2] = v_x
E[(x - μ_x)^3] = s_x
E[(x - μ_x)^4] = k_x
E[(x - μ_x)^n] = μ_(n)x
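A minimal numerical check (assuming NumPy) of the Expectation properties and moment relations above, using two independent illustrative random variables:

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(2.0, 0.5, size=1_000_000)
y = rng.exponential(scale=3.0, size=1_000_000)

# Linearity: E[a g1 + b g2] = a E[g1] + b E[g2]
a, b = 2.0, -1.0
print(np.mean(a * x**2 + b * y), a * np.mean(x**2) + b * np.mean(y))

# Independence: E[g1(x) g2(y)] = E[g1(x)] E[g2(y)]
print(np.mean(np.sin(x) * y**2), np.mean(np.sin(x)) * np.mean(y**2))

# Central moments as expectations: E[(x - mu_x)^2] is the variance v_x
mu_x = np.mean(x)
print(np.mean((x - mu_x)**2), np.var(x))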
Problem 2.8 Moments as Expectations
Prove the formulae relating moments and Expectations above from
the definitions of moment and Expectation.
2.10.3 Relation of central and non-central moments
Central moments in terms of non-central moments
μ_(n)x = E[(x - μ_x)^n] = E[ Σ_{i=0..n} C(n,i) x^i (-μ_x)^(n-i) ] = Σ_{i=0..n} C(n,i) (-μ_x)^(n-i) E[x^i]
Non-central moments in terms of central moments
E[x^n] = E[((x - μ_x) + μ_x)^n] = Σ_{i=0..n} C(n,i) E[(x - μ_x)^i] μ_x^(n-i)
Problem 2.9 Expectation of a linear sum
From the definition of Expectation, prove the formula for the Expectation of a linear sum of functions of random variables.
Problem 2.10 First central moment
Show that E[x - μ_x] = 0.
Problem 2.11 Central and non-central moments
Fill out the detail in the above derivations relating central and non-
central moments.
Problem 2.12 Variance and skew
1. Derive expressions from first principles for the variance and skew in terms of non-central moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.13 Mean values of a power
1. Derive expressions from first principles for the mean values of the second, third, and fourth powers of a random variable in terms of its central moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.14 Mean second moment of area
A beam of circular cross-section has a normally distributed diameter D with mean 100 mm and standard deviation 2 mm.
1. Calculate the mean second moment of area (I = (π/64) D^4) about a diameter.
2. Compare this with the nominal second moment of area based on a nominal diameter of 100 mm.
(Hint: The kurtosis of a normal distribution is 3σ^4).
Problem 2.15 Volume of sphere
The performance of a product is dependent on the volume V of a contained steel sphere of diameter D remaining within tight specifications. The machine manufacturing the spheres is controlled by the specification on the nominal diameter.
By using the expectation operator and the identity a^n = ((a - b) + b)^n, or otherwise, derive an exact formula for the mean μ_V in terms of μ_D and higher order central moments.
2.11 LINEAR FUNCTIONS
2.11.1 Introduction
In this section we begin to explore an approximate method for computing with random variables by considering only the first few moments of a distribution (typically only the mean and variance, but sometimes the skew and kurtosis). Exact methods in computation with random variables are often exceedingly complex and insufficiently general. However, even the approximate probabilistic approaches developed here are an order of magnitude more powerful in engineering design for quality and reliability than the traditional factor of safety approach.
In this section we look only at linear functions. In a later section we will look at more general function types.
2.11.2 General formulae
For the special case of a linear function of several variables, the moments may be derived exactly and take particularly simple forms.
Note that the relations below are true independent of the types of underlying distributions possessed by the x_i.
However in the general case, z will not have a distribution of any
known standard type.
If x_1, x_2, x_3, ... are independent random variables, a_1, a_2, a_3, ... are constants, and z = a_1 x_1 + a_2 x_2 + a_3 x_3 + ..., then the first four moments of z are given by:
μ_z = a_1 μ_x1 + a_2 μ_x2 + a_3 μ_x3 + ... = Σ_i a_i μ_xi
v_z = a_1^2 v_x1 + a_2^2 v_x2 + a_3^2 v_x3 + ... = Σ_i a_i^2 v_xi
s_z = a_1^3 s_x1 + a_2^3 s_x2 + a_3^3 s_x3 + ... = Σ_i a_i^3 s_xi
k_z = a_1^4 k_x1 + a_2^4 k_x2 + a_3^4 k_x3 + ...
      + 6{a_1^2 v_x1 a_2^2 v_x2 + a_2^2 v_x2 a_3^2 v_x3 + a_1^2 v_x1 a_3^2 v_x3 + ...}
    = Σ_i a_i^4 k_xi + 6 Σ_{i<j} a_i^2 v_xi a_j^2 v_xj
Note that for moments of order higher than 3, the relations are no
longer simple sums of the same order moments.
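A minimal sketch (assuming NumPy) applying these exact formulae to an illustrative linear combination and checking them by simulation:

# Moments of z = 2*x1 - 3*x2 for two independent inputs.
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.exponential(scale=1.0, size=2_000_000)   # skewed input, illustrative
x2 = rng.uniform(0.0, 2.0, size=2_000_000)
a1, a2 = 2.0, -3.0
z = a1 * x1 + a2 * x2

def central(x, n):
    return np.mean((x - x.mean())**n)

mu_z = a1 * x1.mean() + a2 * x2.mean()
v_z  = a1**2 * central(x1, 2) + a2**2 * central(x2, 2)
s_z  = a1**3 * central(x1, 3) + a2**3 * central(x2, 3)

print("mean     formula", mu_z, " simulation", z.mean())
print("variance formula", v_z,  " simulation", central(z, 2))
print("skew     formula", s_z,  " simulation", central(z, 3))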
2.11.3 Sums, differences and multiples
In the special case of sums and differences of two random variables, and fixed scalar multiples of a single random variable, the above relations reduce to the forms shown in Table 1.
2.11.4 Linear functions of normal distributions
Linear combinations of normally distributed random variables are
a special case. They are themselves normal.
This means that we can find the actual normal distribution resulting
from a linear sum by simply calculating the mean and variance from the
formulae above.
Below, we graphically depict the addition of two normal random
variables.
Table 1: Moments of sums, differences and multiples
FUNCTION    SUM: z = x + y               DIFFERENCE: z = x - y        MULTIPLE: z = a x
MEAN        μ_z = μ_x + μ_y              μ_z = μ_x - μ_y              μ_z = a μ_x
VARIANCE    v_z = v_x + v_y              v_z = v_x + v_y              v_z = a^2 v_x
SKEW        s_z = s_x + s_y              s_z = s_x - s_y              s_z = a^3 s_x
KURTOSIS    k_z = k_x + k_y + 6 v_x v_y  k_z = k_x + k_y + 6 v_x v_y  k_z = a^4 k_x
ADDITION OF TWO NORMAL RANDOM VARIABLES
2.11.5 The Central Limit Theorem
The sum of a number of independent but not necessarily identically
distributed random variables tends to become normally distributed as
the number increases, provided that no one random variable contributes
appreciably more than the others to the sum; that is, no type of distribu-
tion dominates.
This is an important result for designers. It means, for example, that
the overall dimension of an assembly of component parts, independent
of the distribution types of each component dimension, will tend to be
normally distributed. Knowing this, the designer can work back from
the individual component tolerances to get an estimate of the proportion
of assemblies which will lie outside any given specication.
The six greyed graphs below are, sequentially, the distributions of
the average of 1, 2, 3, 4, 5, and 6 independent identically Uniformly dis-
tributed random variables on [-1, 1]. The full line is the Normal distri-
bution which has the same variance as the average.
It can be seen that even the average of only three Uniform distribu-
tions gives a result surprisingly close to Normal.
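A minimal simulation sketch (assuming NumPy) of this demonstration, using averages of n Uniform(-1, 1) variables:

import numpy as np

rng = np.random.default_rng(6)
for n in (1, 2, 3, 6):
    avg = rng.uniform(-1.0, 1.0, size=(500_000, n)).mean(axis=1)
    # Fraction within one standard deviation; ~0.68 indicates a near-normal shape
    frac = np.mean(np.abs(avg) < avg.std())
    print(f"n = {n}: std = {avg.std():.3f}, P(|avg| < 1 sigma) = {frac:.3f}")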

Problem 2.16 Inconsistency?


If x = y in the formula for the variance of a difference, that is, z = x - y = 0, does this imply v_z = 2 v_x = 2 v_y?
Problem 2.17 Tolerance build-up
An assembly is made up of several components whose nominal lengths are L_1, L_2, L_3, and L_4. It is important that the distance L = L_1 + L_2 - (L_3 + L_4) be kept within specification limits.
1. Write down a formula for the standard deviation σ_L as a function of σ_1, σ_2, σ_3, and σ_4.
2. What can be said about the type of distribution that L has?
Problem 2.18 Springs in parallel
An assembly contains two springs of stiffness K_1 and K_2 connected in parallel. Their stiffness distributions have moments of
K_1: {μ_1 = 500 N/m, v_1 = 144 (N/m)^2, s_1 = 10 (N/m)^3}
K_2: {μ_2 = 300 N/m, v_2 = 25 (N/m)^2, s_2 = -10 (N/m)^3}.
1. Calculate the mean, variance and skew of the distribution of the overall stiffness K of the system.
2. Is the resulting distribution symmetric?
Problem 2.19 Counterweights
Two designs are proposed for a sensitive counterweight. Design A
utilizes 4 spheres each of mass m. Design B utilizes two spheres each of
mass 2m.
Assuming that the coefficient of variation of the mass of each of the
spheres is the same, determine the ratio of the standard deviation of the
total mass of Design A to that of Design B.
Problem 2.20 Algenon and Biggles
Bricks are manufactured with heights of a given mean and vari-
ance. Algenon Ant climbs straight up a vertical stack of N bricks (no
mortar). His brother Biggles (a little disoriented) goes straight up and
down the first brick for the same number of brick traverses (hence cov-
ering the same mean distance).
Determine the ratio of the standard deviation of Biggles' journey to
the standard deviation of Algenon's journey.
Problem 2.21 Machine support
Suppose that a machine is to be supported with a number of springs of the same nominal stiffness, and that the overall stiffness of the spring assembly is to be within a given tolerance of a fixed target value K. Suppose also that all the springs in the assembly have the same percentage tolerance on their stiffness no matter what size they are. That is, the stiffnesses of the springs have the same coefficient of variation.
Discuss the influence of the number of springs in the assembly on the variability of its stiffness.
Problem 2.22 Moon lander
In a design analysis of a suspension system for a moon lander, it has been determined that the overall damping coefficient of the system is a critical quality variable, and should be held within tight specifications.
Suppose that the shock absorbers are linear over their range of application, and that the standard deviation of the damping coefficient of each shock absorber is a fixed fraction of its mean value for any size shock absorber.
Suppose also that there are two systems proposed: System F with 4 parallel shock absorbers, and System G with 16 parallel shock absorbers, where both systems have the same total mean damping coefficient. (You may assume that the damping coefficients are additive).
Determine the ratio of the standard deviation of the overall damping coefficient of assembly G to that of assembly F.
2.12 RELIABILITY
The reliability R of a system is the probability that the system will
perform as expected.
The unreliability Q of a system is the probability that the system
will fail to perform as expected.
R + Q = 1
Remark on terminology:
The term reliability is commonly used to refer to the probability of
failure of one item due to degradation over time. It is not generally used
for the general conformance to specification of the product coming off the end of a production line. For simplicity however we will often use the term reliability to mean probability of conforming to specification.
Thus the reliability of a mass-produced product is equivalent to the proportion of the product within specification.
2.12.1 Operating windows
Many systems (products, designs) depend for their correct perfor-
mance on the values of their quality variables (design parameters or
functions of design parameters) remaining within given bounds, limits,
or tolerances. This leads to viewing these bounds as the frame of an op-
erating window.
A design specification might read something like: The parameter x must lie in the range x_L to x_U.
Since x will usually have a distribution of values it is most likely that not all values will lie in this range.
If a product's function depends only on the single parameter x, then its reliability is the probability that x lies in the range x_L to x_U. That is
R = Pr (x_L <= x <= x_U)
and this is represented by the area under the probability density
function which can be seen through the operating window.
Conversely the unreliability is the area under the curve outside the
window.
OPERATING WINDOW
2.12.2 Example: Paper feeder operating window
Many photocopiers have paper feeders which use the frictional
driving force of an elastomeric covered roll which sits on top of the pa-
per stack.
It is clear that if the normal force exerted by the roll on the paper is
too low, the paper will not move. This failure mode is called a misfeed.
Conversely, if the normal force is too high, more than one sheet will be
driven forward. This failure mode is called a multifeed. The normal
force thus becomes a quality variable which must be kept within defined upper and lower specification limits.
Considering the normal force as a random variable, the proportion of its probability density function that we can see through the window frame formed by the upper and lower specification limits is the reliability of the feeder for the normal force failure modes.
2.12.3 Supply and demand
Another type of system depends for its correct performance on the demand x being less than the supply x_s.
R = Pr (x < x_s)
where both x and x_s are independent random variables.
MARGIN OF SAFETY
Supply and demand here should be taken in the most general sense of any imposed physical variable: force, stress, deflection, temperature, time, flow-rate, ...
The margin of safety is defined by y = x_s - x.
Hence the reliability may be written
R = Pr (y > 0)
The operating window for y is then 0 <= y < infinity for a margin of safety problem.
2.12.4 Estimation of the reliability
1. Calculate the mean and variance of y.
2. If the type of distribution for y is known, use a formula or table for its cumulative distribution function to calculate the area over 0 <= y. Otherwise use a table for the normal distribution as follows:
3. Calculate the distance z of the mean of y from zero in units of the standard deviation of y.
4. Read off the required probability (reliability or unreliability) in the table.
The parameter z (measured in standard deviations of y) is often called the reliability index or safety index. It can be seen from the table below that the reliability is quite sensitive to small changes in z for z greater than about 2. A doubling of z from 2.4 to 4.8 decreases the probability of failure by a factor of approximately 10 000!
You can visualize this geometrically by imagining what happens to the area of the distribution for y <= 0 as you shift the distribution to the right.
Table 2: Probability of failure versus reliability index for a Normal Distribution
RELIABILITY INDEX z    UNRELIABILITY Q PER MILLION
0.00                   500 000
0.67                   250 000
1.00                   160 000
1.28                   100 000
1.65                    50 000
2.33                    10 000
3.10                     1 000
3.72                       100
4.25                        10
4.75                         1
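A minimal sketch (Python standard library only) of the estimation procedure above for an illustrative supply and demand problem (the numbers are not taken from the problems that follow):

import math

def unreliability(mu_supply, var_supply, mu_demand, var_demand):
    # Probability that demand exceeds supply, assuming y = supply - demand is
    # (approximately) normally distributed.
    mu_y = mu_supply - mu_demand
    sigma_y = math.sqrt(var_supply + var_demand)   # variances of a difference add
    z = mu_y / sigma_y                             # reliability (safety) index
    q = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0))) # Pr(y <= 0)
    return z, q

z, q = unreliability(mu_supply=60.0, var_supply=9.0, mu_demand=50.0, var_demand=16.0)
print(f"reliability index z = {z:.2f}, unreliability Q = {q*1e6:.0f} per million")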
Problem 2.23 Bolt strength reliability
A production run of bolts has a normally distributed ultimate ten-
sile strength with mean 100 MN and standard deviation 2 MN.
The applied load is expected to be normally distributed with mean
90 MN and standard deviation 4 MN.
What proportion may be expected to fail?
Problem 2.24 Buoyancy force reliability
A design calculation predicts that the buoyancy force B acting on
a sonar device is normally distributed with mean 800 N and standard de-
viation 24 N, and that the weight force W is normally distributed with
mean 800 N and standard deviation 8 N.
The sonar device fails to operate as intended if: a) it sinks, or b) its
buoyancy force exceeds its weight force by more than 32 N.
Calculate the probability of failure to operate as intended (to 3 dec-
imal places).
Problem 2.25 Bearing fit
A mass-produced bearing of a journal bearing has a normally distributed diameter with mean 50 mm and tolerance ±0.03 mm.
The journal has a normally distributed diameter with mean 49.9 mm and tolerance ±0.03 mm.
The assembly fails if (a) the journal will not fit in the bearing, or (b) the diametral clearance is greater than 0.1 mm.
1. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is very inexperienced in this area of manufacture.
2. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is highly experienced in this area of manufacture.
Problem 2.26 Shaft failure
A production run of shafts has a normally distributed failure torque with mean 100 kNm and variance 9 (kNm)^2.
The applied torque is expected to be normally distributed with mean 80 kNm and variance 16 (kNm)^2.
What proportion of product may be expected to fail?
(Express your answer as number of failures per million)
Problem 2.27 Fitting of car doors and windshields
Discuss the potential application of the margin of safety concept to
the fitting together of components in the automobile industry, for exam-
ple, doors and windshields.
2.13 PRODUCTS OF RANDOM VARIABLES
To develop formulae for the moments of products of independent random variables we use the fact that if x, y, ... are any independent random variables, then
E[x y ...] = E[x] E[y] ...
The formulae developed below will be exact independent of the type of distribution to which the random variables belong.
The formulae are used by calculating the mean, variance and skew in succession.
Suppose z is a product of any number of independent random variables x_i: z = x_1 x_2 x_3 ...
2.13.1 The mean of a product
The mean of a product is a direct application of the formula above.
z = x_1 x_2 x_3 ...
E[z] = E[x_1] E[x_2] E[x_3] ...
μ_z = μ_x1 μ_x2 μ_x3 ...
The mean of a product of independent random variables is simply the product of their means.
2.13.2 The variance of a product
The variance of a product is obtained by taking the expectation of the square of z:
z^2 = x_1^2 x_2^2 x_3^2 ...
E[z^2] = E[x_1^2] E[x_2^2] E[x_3^2] ...
(μ_z^2 + v_z) = (μ_x1^2 + v_x1) (μ_x2^2 + v_x2) (μ_x3^2 + v_x3) ...
To calculate the variance v_z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the square of the mean μ_z^2 calculated previously.
2.13.3 The skew of a product
z^3 = x_1^3 x_2^3 x_3^3 ...
E[z^3] = E[x_1^3] E[x_2^3] E[x_3^3] ...
(μ_z^3 + 3 μ_z v_z + s_z) = (μ_x1^3 + 3 μ_x1 v_x1 + s_x1) (μ_x2^3 + 3 μ_x2 v_x2 + s_x2) (μ_x3^3 + 3 μ_x3 v_x3 + s_x3) ...
To calculate the skew s_z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the term μ_z^3 + 3 μ_z v_z calculated from the previous steps.
Higher moments are calculated in a similar fashion.
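A minimal sketch (assuming NumPy) applying these product formulae to an illustrative two-variable product and checking them by simulation:

# Moments of z = x1 * x2 for two independent inputs.
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(10.0, 1.0, size=2_000_000)
x2 = rng.uniform(2.0, 4.0, size=2_000_000)
z = x1 * x2

def moments(x):
    mu = x.mean(); d = x - mu
    return mu, np.mean(d**2), np.mean(d**3)     # mean, variance, skew

(m1, v1, s1), (m2, v2, s2) = moments(x1), moments(x2)
mu_z = m1 * m2
v_z  = (m1**2 + v1) * (m2**2 + v2) - mu_z**2
s_z  = (m1**3 + 3*m1*v1 + s1) * (m2**3 + 3*m2*v2 + s2) - mu_z**3 - 3*mu_z*v_z

print("formula   :", mu_z, v_z, s_z)
print("simulation:", moments(z))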
Problem 2.28 Volume of a cube
Calculate the mean and variance of the volume of a cube of side L where the sides are machined independently by the same machining process and are therefore considered to be identically distributed independent random variables each with mean μ and variance v.
Comment on how this calculation differs from one based simply on the formula V = L^3 (see below).
2.14 POSITIVE INTEGER POWERS
2.14.1 The mean of a positive integer power
We have already derived the formula for the Expectation of a positive integer power of a random variable in terms of central moments. Since the expectation gives the mean value, we have immediately that for z = x^n:
μ_z = Σ_{i=0..n} C(n,i) E[(x - μ_x)^i] μ_x^(n-i)
2.14.2 Tables for a Normally distributed random variable
Since the central moments of a normal distribution can all be expressed in terms of its mean and variance (see the listing in the section on the Normal distribution), its powers can therefore be expressed via the above formula in terms of them also.
The tables below thus give exact formulae for calculating the mean, variance and skew of positive integer powers of a normally distributed random variable x with mean μ and variance v.
The entries in the tables are expressed in the form
(first order approximation) x (1 + terms in the variance ratio u)
where the variance ratio u has been defined as the square of the coefficient of variation, u = v/μ^2.
Table 3: Moments of a square (z = x^2)
μ_z    μ^2 (1 + u)
v_z    4 μ^2 v (1 + u/2)
s_z    24 μ^2 v^2 (1 + u/3)
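A minimal Monte Carlo check (assuming NumPy) of Table 3 for an illustrative normally distributed x:

import numpy as np

mu, v = 10.0, 4.0                    # mean and variance of x (illustrative)
u = v / mu**2                        # variance ratio
rng = np.random.default_rng(8)
x = rng.normal(mu, np.sqrt(v), size=4_000_000)
z = x**2

print("mean    :", mu**2 * (1 + u),               z.mean())
print("variance:", 4 * mu**2 * v * (1 + u/2),     np.mean((z - z.mean())**2))
print("skew    :", 24 * mu**2 * v**2 * (1 + u/3), np.mean((z - z.mean())**3))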

Problem 2.29 Inconsistency?
If μ = 0, does this imply that μ_z, v_z and s_z are all zero?
Table 4: Moments of a cube (z = x^3)
μ_z    μ^3 (1 + 3 u)
v_z    9 μ^4 v (1 + 4 u + (5/3) u^2)
s_z    162 μ^5 v^2 (1 + (16/3) u + 5 u^2)

Table 5: Moments of a fourth power (z = x^4)
μ_z    μ^4 (1 + 6 u + 3 u^2)
v_z    16 μ^6 v (1 + (21/2) u + 24 u^2 + 6 u^3)
s_z    576 μ^8 v^2 (1 + 16 u + (149/2) u^2 + 99 u^3 + 33 u^4)

Table 6: Moments of a fifth power (z = x^5)
μ_z    μ^5 (1 + 10 u + 15 u^2)
v_z    25 μ^8 v (1 + 20 u + 114 u^2 + 180 u^3 + (189/5) u^4)
s_z    1500 μ^11 v^2 (1 + (97/3) u + 366 u^2 + 1710 u^3 + 2997 u^4 + 1323 u^5)

2.15 GENERAL FUNCTIONS
2.15.1 Introduction
We complete our introductory discussion of probabilistic design by describing a method (called the Moment Analysis Method) by which you can calculate the moments of any differentiable function of independent random variables, and hence get an estimate of the variability
inherent in a given design. Indeed, all the formulae we have introduced
so far may be derived by this method.
The basic principle of the Moment Analysis Method is the specification of each probability distribution by its set of moments in the form
{mean, variance, skew, kurtosis,...}. Then, if we wish to calculate a
function of several random variables, the moments of that function will
be functions of the moments of those several random variables.
The two techniques that we will use are
1. Expansion of the function in a Taylor series
2. Application of the Expectation operator to the series
2.15.2 The basic algorithm
Calculation of the mean
1. Expand the function z = g(x, y, ...) as a Taylor series about the mean values (μ_x, μ_y, ...) of the independent random variables.
2. Calculate the mean of the function (μ_z) by calculating the expectation of the terms in the expansion.
Calculation of the nth central moment
1. Expand the function [z - μ_z]^n as a Taylor series about the mean values (μ_x, μ_y, ...) of the independent random variables.
2. Calculate the nth central moment of the function (v_z, s_z, k_z, ...) by calculating the expectation of the terms in the expansion.
3. Calculate μ_z and substitute for it in the expression.
2.15.3 The theoretical foundation
Assumptions
The fundamental assumptions upon which the method is based are:
1. The random variables x, y, ... are independent. (Very important!)
2. The pertinent information content of each of the distributions is sufficiently well represented by a finite (small) number of moments.
3. The function is sufficiently well represented by a finite (small) number of terms of its Taylor series.
Formulae
The fundamental formulae upon which the method is based are:
1. The mean μ_z of a function z = g(x, y, ...) is the expectation of the function.
2. The nth central moment μ_(n)z of a function z = g(x, y, ...) is the expectation of (z - μ_z)^n.
3. The expectation of a linear sum is the sum of the expectations of the terms.
4. The expectation of a product of independent random variables is the product of their expectations.
2.15.4 How to write down a Taylor series
In this section we discuss a mnemonic method for easily writing
down a Taylor series expansion of a function of several variables.
Suppose you have a function z = g(x, y, ...) and you wish to write
down the Taylor series for z expanded about the point x = μ_x, y = μ_y, ... A simple mnemonic way of doing this is as follows:
1. Write down the power series for exp(X+Y+...):
1 + (X+Y+...) + (1/2!)(X+Y+...)^2 + (1/3!)(X+Y+...)^3 + ...
2. Expand the terms:
1 + (X+Y+...) + (1/2!)(X^2 + 2XY + Y^2 + ...) + (1/3!)(X^3 + 3X^2Y + 3XY^2 + Y^3 + ...) + ...
3. Make the following replacements:
1  →  [z]_μ = g(μ_x, μ_y, ...)
X^n  →  [∂^n z / ∂x^n]_μ (x - μ_x)^n
X^n Y^m  →  [∂^(n+m) z / ∂x^n ∂y^m]_μ (x - μ_x)^n (y - μ_y)^m
and so on for products of more than two variables.
Remember that the notation [ ]_μ means that the bracketed function is evaluated at the point x = μ_x, y = μ_y, ...
2.15.5 How to write down the expectation of a function
Again suppose you have a function z = g(x, y, ...) and you wish to write down an expression for the expectation E[z] of z. The normal procedure for doing this is:
1. Write down the Taylor series with x_0 = μ_x, y_0 = μ_y, ...
2. Apply the expectation operator to the series, remembering its properties when it acts on a constant, a linear sum, and a product of independent random variables.
3. Make the following replacements:
E[x - μ_x]  →  0
E[(x - μ_x)^n]  →  μ_(n)x
The resulting expression is a series expressing E[z] in terms of the moments of x, y, ...
If the series terminates the expression will be exact. A polynomial
function, for example, will terminate.
2.15.6 The shortest way to write down the expectation
It may be somewhat shorter to first simplify the terms in our original mnemonic expansion
1 + (X+Y+...) + (1/2!)(X^2 + 2XY + Y^2 + ...) + ...
before replacing them with their corresponding terms in the Taylor series expansion.
We list possible simplification rules below, and illustrate them with the example of a function of two variables for which we know only their means and variances:
1 + (X+Y) + (1/2!)(X^2 + 2XY + Y^2) + (1/3!)(X^3 + 3X^2Y + 3XY^2 + Y^3)
+ (1/4!)(X^4 + 4X^3Y + 6X^2Y^2 + 4XY^3 + Y^4)
+ (1/5!)(X^5 + 5X^4Y + 10X^3Y^2 + 10X^2Y^3 + 5XY^4 + Y^5) + ...
1. Any term involving a variable to the first power is zero since the expectation E[x - μ_x] is zero.
1 + (1/2!)(X^2 + Y^2) + (1/3!)(X^3 + Y^3)
+ (1/4!)(X^4 + 6X^2Y^2 + Y^4)
+ (1/5!)(X^5 + 10X^3Y^2 + 10X^2Y^3 + Y^5) + ...
2. Any term involving a higher power leading to a moment for which you have no information must be omitted.
In this example we only know means and variances, hence the expression reduces to
1 + (1/2!)(X^2 + Y^2) + (1/4!)(6X^2Y^2)
3. If the coefficients resulting from the higher derivatives in the expansion are small enough compared to those resulting from the lower ones, the corresponding terms may be neglected. This is often the case for functions which are not too far off linear in the region near the point (μ_x, μ_y).
In this example we would look at the comparative size of the term in X^2Y^2.
Assuming the term can be neglected the expression reduces to
1 + (1/2!)(X^2 + Y^2)
leading finally to a general second order approximation for μ_z:
μ_z = g(μ_x, μ_y) + (1/2){ [∂^2 z/∂x^2]_μ v_x + [∂^2 z/∂y^2]_μ v_y }
It is evident from this process that the same form is valid for any number of variables:
μ_z = g(μ_x, μ_y, ...) + (1/2){ [∂^2 z/∂x^2]_μ v_x + [∂^2 z/∂y^2]_μ v_y + ... }
Note carefully that the mean of a general function is only equal to the function of the means as a first order approximation.
2.15.7 Calculation of the variance of a function
As an example of the method described for calculating higher order moments of a differentiable function of random variables, we will calculate an expression for the second order approximation to the variance of z = g(x, y, ...), that is, E[(z - μ_z)^2].
1. Since (z - μ_z)^2 is still a function of x, y, ..., we can let Z = (z - μ_z)^2. E[(z - μ_z)^2] then becomes μ_Z, which we can write down directly from the result derived in the section above:
v_z = μ_Z = [Z]_μ + (1/2){ [∂^2 Z/∂x^2]_μ v_x + [∂^2 Z/∂y^2]_μ v_y + ... }
2. Evaluate [Z]_μ:
[Z]_μ = ([z]_μ - μ_z)^2 = (1/4){ [∂^2 z/∂x^2]_μ v_x + [∂^2 z/∂y^2]_μ v_y + ... }^2
3. Evaluate a typical second derivative:
[∂^2 Z/∂x^2]_μ = 2{ [∂z/∂x]_μ^2 + ([z]_μ - μ_z)[∂^2 z/∂x^2]_μ }
4. Collect the terms and simplify to get finally:
v_z = [∂z/∂x]_μ^2 v_x + [∂z/∂y]_μ^2 v_y + ... - (1/4){ [∂^2 z/∂x^2]_μ v_x + [∂^2 z/∂y^2]_μ v_y + ... }^2
This is a formula for the second order approximation to the variance of a differentiable function of any number of random variables. It is one of the most important formulae in the area of probabilistic and robust design.
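A minimal sketch (assuming NumPy) of these second order approximations for an illustrative function of two variables, with the derivatives taken numerically and the results checked by simulation:

import numpy as np

def g(x, y):
    return x * np.sqrt(y)            # illustrative nonlinear function

mu_x, v_x = 4.0, 0.25
mu_y, v_y = 9.0, 1.0

# Numerical first and second partial derivatives at the mean point
h = 1e-4
dzdx   = (g(mu_x + h, mu_y) - g(mu_x - h, mu_y)) / (2 * h)
dzdy   = (g(mu_x, mu_y + h) - g(mu_x, mu_y - h)) / (2 * h)
d2zdx2 = (g(mu_x + h, mu_y) - 2 * g(mu_x, mu_y) + g(mu_x - h, mu_y)) / h**2
d2zdy2 = (g(mu_x, mu_y + h) - 2 * g(mu_x, mu_y) + g(mu_x, mu_y - h)) / h**2

S = d2zdx2 * v_x + d2zdy2 * v_y
mu_z = g(mu_x, mu_y) + 0.5 * S                       # second order mean
v_z  = dzdx**2 * v_x + dzdy**2 * v_y - 0.25 * S**2   # second order variance

rng = np.random.default_rng(9)
z = g(rng.normal(mu_x, np.sqrt(v_x), 2_000_000), rng.normal(mu_y, np.sqrt(v_y), 2_000_000))
print("mean    :", mu_z, z.mean())
print("variance:", v_z, z.var())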
2.15.8 Computer implementation

The Moment Analysis Method is ideally suited to being programmed using a symbolic mathematical programming language.

The size and complexity of the problems that can be tackled will depend on the memory and speed of the computational devices used. The accuracy of the result will in addition depend on the degree of linearity of the function g(x, y, …) near the mean values of the independent random variables ($\mu_x$, $\mu_y$, …), together with the number of moments used to specify each of their distributions. The more linear the function and the more moments used, the more accurate the results will be. The most significant problems will arise when the function has a singularity within the support of the distribution.
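By way of illustration only, the sketch below shows one way the two approximations might be programmed with the SymPy symbolic algebra library (an assumed choice of tool; the text does not prescribe one). The helper moment_approx is hypothetical, not part of any standard package, and uses only the means and variances of independent variables.

    import sympy as sp

    def moment_approx(g, data):
        """Second order approximations to the mean and variance of z = g(x, y, ...).

        data maps each SymPy symbol to a (mean, variance) pair; the variables
        are assumed independent, and only means and variances are used.
        """
        at_means = {v: m for v, (m, _) in data.items()}
        mean_1st = g.subs(at_means)
        var_1st = sp.Integer(0)
        corr = sp.Integer(0)   # sum over variables of  variance * d2g/dv2  at the means
        for v, (m, s2) in data.items():
            var_1st += sp.diff(g, v).subs(at_means)**2 * s2
            corr += s2 * sp.diff(g, v, 2).subs(at_means)
        mean_2nd = sp.simplify(mean_1st + corr / 2)
        var_2nd = sp.simplify(var_1st - corr**2 / 4)
        return mean_2nd, var_2nd

    # Example: symbolic moments for the function used in the numerical checks above.
    x, y = sp.symbols('x y')
    mx, my, sx2, sy2 = sp.symbols('mu_x mu_y sigma_x2 sigma_y2')
    print(moment_approx(x**2 * sp.exp(y), {x: (mx, sx2), y: (my, sy2)}))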
Problem 2.30 Mean of a function of one variable
Derive an expression for the mean of a general differentiable func-
tion of a single random variable up to and including the term involving
the kurtosis.
Let x be a symmetrically distributed random variable with mean 0,
variance 1 and kurtosis 1. Using the results above, determine an approx-
imation to the mean of $e^x$.
Problem 2.31 Variance of a function of one variable
Derive an expression for the variance of a general differentiable
function of a single random variable up to and including the term in-
volving the skew. Neglect terms of order higher than 3.
Let x be a symmetrically distributed random variable with mean 0
and variance 1. Using the results above, determine an approximation to
the variance of $e^x$.
Problem 2.32 Mean of a function of two variables
By using a Taylor series expansion about the mean values of ran-
dom variables x and y, derive an approximate formula for the mean val-
ue of z = g(x, y) in terms of the means and variances of x and y.
Problem 2.33 Probabilistic means
The mean of a function is not generally the function of the means.
1. Describe the types of functions for which the mean of the func-
tion is the function of the means.
2. Discuss the ramifications of these results for ordinary tolerance analysis.
3. Discuss the ramifications of these results for quality design.
2.16 SUMMARY OF APPROXIMATE FORMULAE
This section summarizes the first and second order approximations to the mean $\mu_z$ and variance $\sigma_z^2$ of a differentiable function z = g(x, y, ...) of independent random variables x, y, ...

If g(x, y, ...) is approximately linear in the region within several standard deviations of the mean values ($\mu_x$, $\mu_y$, …) of (x, y, ...), then the first order approximation will often give satisfactory estimates. In any case, the value of the extra term in the second order approximation may always be computed to assess its significance.

For estimates more accurate than that provided by the second order approximation, moments of order higher than the variance will need to be known.
2.16.1 First order approximation
Mean

$$\mu_z \;\approx\; g(\mu_x, \mu_y, \ldots)$$

Variance

$$\sigma_z^2 \;\approx\; \left(\frac{\partial z}{\partial x}\right)^{\!2}\sigma_x^2 + \left(\frac{\partial z}{\partial y}\right)^{\!2}\sigma_y^2 + \cdots$$
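As a quick illustration of these first order formulae (with purely hypothetical numbers, not taken from the text), consider a plate area $A = LW$ with $\mu_L = 100$ mm, $\sigma_L = 0.5$ mm, $\mu_W = 50$ mm and $\sigma_W = 0.4$ mm:

$$\mu_A \approx \mu_L\,\mu_W = 5000\ \text{mm}^2, \qquad
\sigma_A^2 \approx \mu_W^2\,\sigma_L^2 + \mu_L^2\,\sigma_W^2 = 50^2(0.5)^2 + 100^2(0.4)^2 = 2225\ \text{mm}^4,$$

so that $\sigma_A \approx 47\ \text{mm}^2$. For this particular function the second order corrections below vanish, since $\partial^2 A/\partial L^2 = \partial^2 A/\partial W^2 = 0$.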
2.16.2 Second order approximation

Mean

$$\mu_z \;\approx\; g(\mu_x, \mu_y, \ldots) + \frac{1}{2}\left[\sigma_x^2\,\frac{\partial^2 g}{\partial x^2} + \sigma_y^2\,\frac{\partial^2 g}{\partial y^2} + \cdots\right]_{\boldsymbol{\mu}}$$

Variance

$$\sigma_z^2 \;\approx\; \left[\left(\frac{\partial z}{\partial x}\right)^{\!2}\sigma_x^2 + \left(\frac{\partial z}{\partial y}\right)^{\!2}\sigma_y^2 + \cdots\right]_{\boldsymbol{\mu}} - \frac{1}{4}\left[\sigma_x^2\,\frac{\partial^2 z}{\partial x^2} + \sigma_y^2\,\frac{\partial^2 z}{\partial y^2} + \cdots\right]_{\boldsymbol{\mu}}^2$$

Problem 2.34 Approximate formulae

Write down the formulae for the first and second order approximation to the variance of a function $z = g(x_1, x_2, x_3)$ of independent random variables $x_i$, each with mean $\mu_i$ and variance $\sigma_i^2$.

What is the main assumption upon which the validity of these formulae resides?

Problem 2.35 Right-angled bracket

An angled bracket has legs of nominal length B and C. The nominal angle $\theta$ between them is 90°. The distance A between the ends of the legs of the bracket is an important quality variable.

Using the formulae above, derive first order approximations for the mean and variance of A in terms of the means and variances of B, C, and $\theta$.

Problem 2.36 Applications for probabilistic design

From your own experience, describe a possible application for probabilistic design.
2.17 GENERAL POWER FUNCTIONS

In engineering we often deal with general power functions of the form

$$z = c\,x^{m} y^{n}$$

In this section we take the formulae above for the approximate mean and variance of z and sketch their application to functions of this form.

2.17.1 The mean of a general power function

1. Calculate the second derivatives (and similarly for $\partial^2 z/\partial y^2$):

$$\frac{\partial^2 z}{\partial x^2}\bigg|_{\boldsymbol{\mu}} = \Bigl[m(m-1)\,c\,x^{m-2}y^{n}\Bigr]_{\boldsymbol{\mu}} = \bigl(c\,\mu_x^{m}\mu_y^{n}\bigr)\,m(m-1)\,\frac{1}{\mu_x^{2}}$$

2. Substitute into the formula for the mean $\mu_z$ and rewrite terms of the form $\sigma^2/\mu^2$ as squares of coefficients of variation, written here as $\gamma_x^2 = \sigma_x^2/\mu_x^2$ and $\gamma_y^2 = \sigma_y^2/\mu_y^2$:

$$\mu_z \;\approx\; \bigl(c\,\mu_x^{m}\mu_y^{n}\bigr)\Bigl\{1 + \tfrac{1}{2}\bigl[m(m-1)\,\gamma_x^{2} + n(n-1)\,\gamma_y^{2}\bigr]\Bigr\}$$

2.17.2 The variance of a general power function

In a similar fashion we can substitute into the formula for the variance:

$$\sigma_z^2 \;\approx\; \bigl(c\,\mu_x^{m}\mu_y^{n}\bigr)^{2}\Bigl\{\bigl[m^{2}\gamma_x^{2} + n^{2}\gamma_y^{2}\bigr] - \tfrac{1}{4}\bigl[m(m-1)\,\gamma_x^{2} + n(n-1)\,\gamma_y^{2}\bigr]^{2}\Bigr\}$$
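As a purely illustrative check of these two results (none of the numbers below come from the text), the following sketch compares them with a Monte Carlo estimate for assumed values $c = 1$, $m = 2$, $n = -1$, assumed means of 10 and 5, and assumed coefficients of variation of 5%. The quotient and inverse results of the following subsections are simply the special cases $m = 1,\ n = -1$ and $m = 0,\ n = -1$.

    import numpy as np

    # Hypothetical check of the power-function formulae for z = c * x**m * y**n.
    c, m, n = 1.0, 2.0, -1.0            # assumed exponents, i.e. z = x**2 / y
    mu_x, mu_y = 10.0, 5.0              # assumed means
    g_x, g_y = 0.05, 0.05               # assumed coefficients of variation

    g_mu = c * mu_x**m * mu_y**n
    corr = m * (m - 1) * g_x**2 + n * (n - 1) * g_y**2
    mean_2nd = g_mu * (1 + 0.5 * corr)
    var_2nd = g_mu**2 * ((m**2 * g_x**2 + n**2 * g_y**2) - 0.25 * corr**2)

    # Monte Carlo estimate; normal input distributions are assumed for illustration only.
    rng = np.random.default_rng(1)
    x = rng.normal(mu_x, g_x * mu_x, 1_000_000)
    y = rng.normal(mu_y, g_y * mu_y, 1_000_000)
    z = c * x**m * y**n
    print(mean_2nd, z.mean())    # both approximately 20.1
    print(var_2nd, z.var())      # both close to 5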
2.17.3 The coefficient of variation of a power function

From the above two results, we can now write down an expression for the second order approximation to the square of the coefficient of variation of z in terms of the squares of the coefficients of variation of x, y:

$$\gamma_z^2 \;\approx\; \frac{\bigl[m^{2}\gamma_x^{2} + n^{2}\gamma_y^{2}\bigr] - \tfrac{1}{4}\bigl[m(m-1)\,\gamma_x^{2} + n(n-1)\,\gamma_y^{2}\bigr]^{2}}{\Bigl\{1 + \tfrac{1}{2}\bigl[m(m-1)\,\gamma_x^{2} + n(n-1)\,\gamma_y^{2}\bigr]\Bigr\}^{2}}$$

The first order approximation in coefficient of variation terms is then simply:

$$\gamma_z^2 \;\approx\; m^{2}\gamma_x^{2} + n^{2}\gamma_y^{2}$$

2.17.4 The quotient of two random variables

We apply these formulae to obtain second order approximation expressions for the quotient of two random variables z = x/y:

The mean of a quotient

$$\mu_z \;\approx\; \frac{\mu_x}{\mu_y}\bigl(1 + \gamma_y^{2}\bigr)$$

The variance of a quotient

$$\sigma_z^2 \;\approx\; \left(\frac{\mu_x}{\mu_y}\right)^{\!2}\bigl(\gamma_x^{2} + \gamma_y^{2} - \gamma_y^{4}\bigr)$$
2.17.5 The inverse of a random variable

We again apply these formulae to obtain second order approximation expressions for the inverse of a random variable z = 1/y:

The mean of an inverse

$$\mu_z \;\approx\; \frac{1}{\mu_y}\bigl(1 + \gamma_y^{2}\bigr)$$

The variance of an inverse

$$\sigma_z^2 \;\approx\; \left(\frac{1}{\mu_y}\right)^{\!2}\bigl(\gamma_y^{2} - \gamma_y^{4}\bigr)$$

Problem 2.37 The mean of a square

Suppose that the standard deviation of a random variable x is 10% of the mean, that is $\gamma_x^2 = 0.01$.

Show that for $z = x^2$ the first order approximation to the mean of z underestimates the second order approximation by about 1%.

Problem 2.38 Volume of a cylinder

A mass-produced cylindrical container has an internal diameter of {$\mu_D$ = 2 m, $\sigma_D^2$ = 0.01 m²} and an internal length of {$\mu_L$ = 10 m, $\sigma_L^2$ = 0.04 m²}.

Compare the first and second order approximations to the mean and variance of its volume.

Problem 2.39 Second moment of area

A quality variable z is related to independent random variables x and y by $z = a\,x^{p}y^{q}$ (where a, p and q are constants).

1. Using the first order formulae for the mean and variance of a function of random variables, derive an expression for the approximate coefficient of variation of z in terms of the coefficients of variation of x and y.

2. A triangular beam cross-section has second moment of area $I = \tfrac{1}{36}BH^{3}$, where B and H are independent random variables with means $\mu_B$, $\mu_H$ and variances $\sigma_B^2$, $\sigma_H^2$ respectively. Use the formula derived in 1. to derive an expression for the variance $\sigma_I^2$ of the second moment of area in terms of $\mu_B$, $\mu_H$, $\sigma_B^2$, $\sigma_H^2$.

Problem 2.40 Coefficients of variation

The radius and length of a cylinder are independent random variables with identical coefficients of variation C.
1. Determine the coefficient of variation of the volume of the cyl-
inder, using both the first and second order coefficient of variation ap-
proximation formulae.
2. Comment on the increased accuracy provided by the second or-
der approximation.
Problem 2.41 Experimental formula

An experimentally derived formula for a quality variable W in terms of design parameters X, Y, Z is

$$W = 2.657\,X^{0.2}\,Y^{0.7}\,Z^{0.1}$$

The coefficients of variation of X, Y and Z are each equal to k.

Determine an estimate of the coefficient of variation of W in terms of k.
Problem 2.42 First order approximation
1. State the formula for the first order approximation to the vari-
ance of a function of random variables.
2. From the formula in 1. above derive the formula for the variance of the function $Z = X^{2}\sin Y$, where X and Y are independent random variables.
3. From your result in 2. determine the standard deviation of Z if
the means and standard deviations of X and Y are 2 units each.
Problem 2.43 Springs in series

A mass-produced product contains two springs with stiffnesses $K_1$ and $K_2$ connected in series.

1. Calculate the first order approximation to the variance of the combined stiffness.

2. Calculate the first order approximation to the variance of a) $A = K_1 K_2$, b) $B = K_1 + K_2$ and c) $C = A/B$.

3. Explain why this does not give the same result as in 1. (Hint: The difference is not due to the inexactitude of the approximations used.)
Problem 2.44 The minimum diameter of a rod in tension

Determine the minimum mean diameter $\mu_D$ of a circular rod with ultimate tensile strength S which will sustain a load P in tension with a reliability of R.

The rod is to be made of 4130 steel of ultimate tensile strength S, considered normally distributed with mean $\mu_S$ = 1075 MPa and variance $\sigma_S^2$ = 900 MPa².

The manufacturing process is known to be capable of turning out 99.7% of the product to a tolerance of ±1.5% of the rod diameter.

The load P is the resultant of a large number of randomly varying loads and hence can be considered to be normally distributed. Its mean is $\mu_P$ = 13200 N, and its variance is $\sigma_P^2$ = 40 000 N².

The reliability R (equivalent to the proportion of product within specification) is required to ensure a probability of failure of less than one per million of shafts produced.