2.4.2 The fundamental formula
Equating the two probabilities (areas) we obtain

A = | f_z(z) dz | = | f_x(x) dx |

=> f_z(z) = f_x(x) / | dz/dx |
Note that because the probability (area) is always positive, the same relationship will exist whether the gradient of the function g(x) is positive or negative. Hence we always take the absolute value of the derivative dz/dx.
2 - 8 PROBABILISTIC DESIGN
JMBrowne July 12, 2000 11:53 am 4ProbabilisticDesign Page 8 of 52
2.4.3 Dimensional considerations
Probability is dimensionless. However, x and z may have (different) dimensions (units) [x] and [z], say. The probability density functions f_x(x) and f_z(z) must then have dimensions 1/[x] and 1/[z] respectively.
This fact is consistent with the formula above and may be used as a check on the correctness of any functional transformation.
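The fundamental formula, together with its dimensional check, can be exercised numerically. A minimal Python sketch, assuming as an example an exponential density for x and the function z = x^2 (both arbitrary choices, not from the text):

```python
import math

# hypothetical example: x is exponential with rate lam (an assumption)
lam = 2.0
f_x = lambda x: lam * math.exp(-lam * x)   # density of x
F_x = lambda x: 1.0 - math.exp(-lam * x)   # cumulative distribution of x

# z = g(x) = x**2 is monotone on the support x >= 0, with dz/dx = 2x,
# so the fundamental formula gives f_z(z) = f_x(x) / |dz/dx| at x = sqrt(z)
def f_z(z):
    x = math.sqrt(z)
    return f_x(x) / abs(2.0 * x)

# check: f_z should match the numerical derivative of F_z(z) = F_x(sqrt(z))
z0, h = 0.7, 1e-6
numeric = (F_x(math.sqrt(z0 + h)) - F_x(math.sqrt(z0 - h))) / (2 * h)
```

The agreement of f_z with the finite-difference derivative of the transformed cumulative distribution confirms that equal probability (area) is carried across the transformation.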
2.4.4 Examples
This simple relationship between area elements of the two density functions may be used to perform a graphical determination of a function of a random variable. It is of course generally more accurate to determine the result analytically; however, it is useful to be able to visualize the process graphically.
A linear function through zero
A general linear function
A concave function
A convex function
Non-invertible functions
The ultimate tensile strength of the steel, z, is logarithmically related to the percentage of alloy by z = log_e(x/b).
Derive the formula for the probability density function f_z(z) of the ultimate tensile strength, and state its support.
2.6 MOMENTS
Moments of a distribution are a way of summarizing the important characteristics of a distribution as single numbers, without having to cope with too much detail. The first few (lower order) moments are generally of most interest to us. An analogy might be to the reduction of a vibration trace to its first few harmonics.
More precise mechanical analogies are
1. The mean is the centre of area of the distribution - summarizing
the location properties of the distribution.
2. The variance is the second moment of area of the distribution
about the mean - summarizing the way in which the area is spread over
the object.
Because we generally lack detailed information about the probability density functions of our design parameters, we will usually be making our calculations with the first few moments, often just the mean and variance.
Following are some definitions of moments and coefficients based on them.
2.6.1 (Non-central) moments
The nth (non-central) moment of a distribution f(x) about the origin is

μ'_(n)x = ∫ x^n f(x) dx

The first non-central moment is called the mean.
The mean of a random variable x will be denoted μ_x, or simply μ where the context is clear.
The mean is also the expectation of x, denoted E[x].
2.6.2 Central moments
The nth central moment μ_(n)x of a distribution f(x) about the mean of the distribution is

μ_(n)x = ∫ (x - μ_x)^n f(x) dx

The first central moment of any distribution is zero.
The second central moment is called the variance, denoted v_x.
The third central moment is called the skew, denoted s_x.
The fourth central moment is called the kurtosis, denoted k_x.
Since we will be dealing mostly with central moments, we will often refer to them simply as moments.
2.6.3 Variance
The variance is, after the mean, the most important moment of a distribution. Its unit is the square of the unit of the random variable and hence is always positive. It measures the spread of the distribution. A zero variance thus implies a deterministic variable.
2.6.4 Skew
The skew is the next most important moment. Its unit is the cube of the unit of the random variable and hence may be positive or negative. A positively skewed distribution has its longer tail to the right. A negatively skewed distribution has its longer tail to the left. We will sometimes use the skew to test how valid it is to assume a given distribution is symmetric (and hence perhaps approximated by a Normal distribution).
2.6.5 Kurtosis
We include here the kurtosis mainly for completeness. Since the kurtosis measures the squatness of the distribution, it is useful for differentiating different types of symmetric distributions (for example the Normal and the Uniform). However, since most of the distributions we will be using are bell-shaped, we will not use the kurtosis much. It is always positive.
2.6.6 Standard deviation
The standard deviation of a distribution is the (positive) square root of the variance. The standard deviation has the same dimensions as the mean, but it is the variance that is the more fundamental quantity.
The standard deviation of a random variable x is denoted σ_x.
2.6.7 Coefficient of variation
The coefficient of variation is the ratio of the standard deviation to the mean, and is thus a measure of the relative spread of the distribution. This ratio is dimensionless and so may often be used to cast formulae in a dimensionless form.
The coefficient of variation of a random variable x is thus σ_x / μ_x.
2.6.8 Variance ratio
The variance ratio is the (dimensionless) ratio of the variance to the square of the mean. We will find this measure of relative spread to occur more commonly in our applications than the coefficient of variation. The variance ratio will be denoted by u_x (= v_x / μ_x^2, the square of the coefficient of variation).
2.6.9 Coefficient of skewness
The coefficient of skewness is the (dimensionless) ratio of the skew to the cube of the standard deviation. The normal distribution has a coefficient of skewness of 0. The exponential distribution has a coefficient of skewness of 2.
2.6.10 Coefficient of kurtosis
The coefficient of kurtosis is the (dimensionless) ratio of the kurtosis to the fourth power of the standard deviation (the square of the variance). The coefficient of kurtosis measures the peakedness of the type of distribution. Uniform distributions have a kurtosis coefficient of 1.8, triangular of 2.4, normal of 3, and exponential of 9.
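The quoted values can be checked by direct numerical integration of the central moments. A Python sketch, assuming Uniform and triangular densities on [-1, 1] (the grid resolution is an arbitrary choice):

```python
# numerically integrate central moments by the midpoint rule to check the
# quoted coefficients of kurtosis (a sketch; n is an arbitrary resolution)
def kurtosis_coeff(pdf, lo, hi, n=100000):
    dx = (hi - lo) / n
    xs = [lo + (i + 0.5) * dx for i in range(n)]
    mu = sum(x * pdf(x) for x in xs) * dx          # mean
    v  = sum((x - mu) ** 2 * pdf(x) for x in xs) * dx  # variance
    k  = sum((x - mu) ** 4 * pdf(x) for x in xs) * dx  # kurtosis
    return k / v ** 2

uniform    = lambda x: 0.5            # density on [-1, 1]
triangular = lambda x: 1.0 - abs(x)   # density on [-1, 1]
ku = kurtosis_coeff(uniform, -1, 1)
kt = kurtosis_coeff(triangular, -1, 1)
print(round(ku, 3), round(kt, 3))  # 1.8 2.4
```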
2.6.11 Terminology
There are varying definitions in the literature for skew and kurtosis and their dimensionless ratios. It is wise to check the definition the author is using.
2.6.12 A note on notation
In situations where there are several random variables, for example, x, y, ..., we will use μ_x, μ_y, ... for the mean of x, y, ..., and v_x, v_y, ... for their variance. If we are dealing with a single random variable, we will often drop the subscripts.
2.7 THE NORMAL DISTRIBUTION
The normal distribution is the most important distribution in the application of probability theory to science and engineering. The Central Limit Theorem (to be discussed later) tells us that the Normal distribution has an interesting involvement in the description of complex probabilistic systems.
It will be worth getting a good intuitive feel for its properties.
It is symmetric.
Its support is from -Infinity to +Infinity.
99.7% of the distribution lies within ±3 standard deviations of the mean.
95% of the distribution lies within ±2 standard deviations of the mean.
68% of the distribution lies within ±1 standard deviation of the mean. The inflection point on the curve is at this point.
Because of its symmetry, its odd central moments are zero.
Its even central moments are given by (where v is the variance):

{v, 3v^2, 3x5 v^3, 3x5x7 v^4, 3x5x7x9 v^5, ...}
= {v, 3v^2, 15v^3, 105v^4, 945v^5, ...}
Its probability density function is

f(x) = (1 / (σ √(2π))) e^( -(1/2) ((x - μ)/σ)^2 )

The graph of its probability density function for μ = 0 and σ = 1 is shown below.
Its cumulative distribution function is

F(x) = (1/2) [ 1 + Erf( (x - μ) / (σ √2) ) ]

The graph of its cumulative distribution function for μ = 0 and σ = 1 is shown below.
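The 68/95/99.7 figures quoted earlier follow directly from this Erf form. A small Python check using the standard library's error function:

```python
import math

# standard normal CDF via the error function, matching the Erf form above
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

within_1 = Phi(1) - Phi(-1)   # fraction within +/- 1 standard deviation
within_2 = Phi(2) - Phi(-2)   # fraction within +/- 2 standard deviations
within_3 = Phi(3) - Phi(-3)   # fraction within +/- 3 standard deviations
print(round(within_1, 4), round(within_2, 4), round(within_3, 4))
# 0.6827 0.9545 0.9973
```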
Problem 2.6 Probability of a continuous random variable
1. What is the probability that a normally distributed random variable has its mean value?
2. What is the probability that a normally distributed random variable lies between μ and μ + 2σ?
3. What is the probability that a normally distributed random variable is greater than μ + 6σ?
Problem 2.7 Sketching a Normal distribution
Sketch carefully a normal distribution with mean 9 and variance 9.
A random variable is distributed as above. What is the probability that it is less than zero?
2.8 MEANS FROM NOMINAL VALUES
The usual design specification on a parameter is given by a nominal value and a tolerance. The nominal value is usually the value given as n in the specification [n - t_1, n + t_2]. The question arises: Given only a specification on a parameter in this form, what should we assume the mean value of the parameter to be?
Until more research is done in this area, we propose that the mean be estimated as the midpoint of the tolerance range, μ = n + (t_2 - t_1)/2.
2.9 STANDARD DEVIATIONS FROM TOLERANCES
While mean values are often easy to nd from data sources, it is
usually more difcult to obtain an estimate of the variance (or standard
deviation) of a design parameter.
This section discusses some rules of thumb for estimating standard
deviations from tolerances.
2.9.1 Estimation from tolerance range
If we know that the parameter is approximately normally distributed and the proportion of product that is expected to lie within a certain tolerance range, then a rule of thumb for estimating the random variable's standard deviation from the properties of the normal distribution is:
If we expect 68% to lie within ± t, then set σ_x = t.
If we expect 95% to lie within ± t, then set σ_x = t/2.
If we expect 99.7% to lie within ± t, then set σ_x = t/3.
2.9.2 Estimation from limited data
A rule of thumb which enables standard deviations to be estimated from limited data is given by Haugen:
If the estimate of the tolerance t that is required is obtained:
From about 4 samples, then set σ_x = t.
From about 25 samples, then set σ_x = t/2.
From about 500 samples, then set σ_x = t/3.
2.9.3 Estimation from knowledge of manufacturer
A further rule of thumb given by Shooman is:
If the product is being made by:
Commercial, early development, little known or inexperienced manufacturers, then set σ_x = t.
Military, mature, reputable, or experienced manufacturers, then set σ_x = t/3.
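The three 68/95/99.7 divisors of 2.9.1 can be wrapped in a small helper. A sketch in Python; the function name and the coverage keys are illustrative choices, not from the text:

```python
# rule-of-thumb standard deviation from a tolerance t, given the expected
# coverage of the tolerance range (a sketch; keys are assumptions)
def sigma_from_tolerance(t, coverage):
    divisor = {0.68: 1.0, 0.95: 2.0, 0.997: 3.0}[coverage]
    return t / divisor

# e.g. a tolerance of +/- 6 believed to cover 99.7% of production
print(sigma_from_tolerance(6.0, 0.997))  # 2.0
```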
2.10 THE EXPECTATION OPERATOR
The expectation of a function g(x, y, ...) of random variables x, y, ... with probability density function f(x, y, ...) is denoted E[g(x, y, ...)] and is defined as the integral

E[g(x, y, ...)] = ∫∫ ... g(x, y, ...) f(x, y, ...) dx dy ...

The following properties may be proven from the definition:
2.10.1 The expectation of sums and products
Constant
The expectation of a constant c is the constant itself. That is

E[c] = c

Linear sum
If a, b, ... are constants, then

E[a g_1(x, y, ...) + b g_2(x, y, ...) + ...] = a E[g_1(x, y, ...)] + b E[g_2(x, y, ...)] + ...

Product of independent random variables
If x, y, ... are independent random variables, then

E[g_1(x) g_2(y) ...] = E[g_1(x)] E[g_2(y)] ...
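Both properties are easy to observe by Monte Carlo. A Python sketch, with arbitrary choices of distributions and constants (the linear-sum identity holds exactly for sample averages; the product identity holds only in the limit):

```python
import random

random.seed(1)  # arbitrary seed for reproducibility
xs = [random.uniform(0.0, 1.0) for _ in range(200000)]
ys = [random.expovariate(2.0) for _ in range(200000)]
mean = lambda s: sum(s) / len(s)

# linear sum: E[3x + 2y] = 3 E[x] + 2 E[y]
lhs = mean([3 * x + 2 * y for x, y in zip(xs, ys)])
rhs = 3 * mean(xs) + 2 * mean(ys)

# product of independents: E[x y] = E[x] E[y]
exy = mean([x * y for x, y in zip(xs, ys)])
```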
2.10.2 Relation of moments to the Expectation
Non-central moments as Expectations

E[1] = 1
E[x] = μ_x
E[x^n] = μ'_(n)x

Central moments as Expectations

E[x - μ_x] = 0
E[(x - μ_x)^2] = v_x
E[(x - μ_x)^3] = s_x
E[(x - μ_x)^4] = k_x
E[(x - μ_x)^n] = μ_(n)x
Problem 2.8 Moments as Expectations
Prove the formulae relating moments and Expectations above from the definitions of moment and Expectation.
2.10.3 Relation of central and non-central moments
Central moments in terms of non-central moments

E[(x - μ_x)^n] = Σ_{i=0..n} C(n, i) (-μ_x)^(n-i) E[x^i]

Non-central moments in terms of central moments

E[x^n] = E[((x - μ_x) + μ_x)^n] = Σ_{i=0..n} C(n, i) E[(x - μ_x)^i] μ_x^(n-i)

Problem 2.9 Expectation of a linear sum
From the definition of Expectation, prove the formula for the Expectation of a linear sum of functions of random variables.
Problem 2.10 First central moment
Show that E[x - μ_x] = 0.
Problem 2.11 Central and non-central moments
Fill out the detail in the above derivations relating central and non-
central moments.
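The binomial relation converting non-central to central moments can be checked in a few lines. A Python sketch, using the exponential distribution (for rate 1, E[x^k] = k!) as a convenient test case:

```python
import math
from math import comb

# convert non-central moments E[x^i], i = 0..n, into central moments
# E[(x - mu)^n] via the binomial relation (a sketch)
def central_from_noncentral(mu_prime):
    mu = mu_prime[1]  # the mean, E[x]
    return [sum(comb(n, i) * (-mu) ** (n - i) * mu_prime[i]
                for i in range(n + 1))
            for n in range(len(mu_prime))]

# exponential(1) has E[x^k] = k!: mean 1, variance 1, skew 2, kurtosis 9
mp = [math.factorial(k) for k in range(5)]  # [1, 1, 2, 6, 24]
print(central_from_noncentral(mp))          # [1, 0, 1, 2, 9]
```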
Problem 2.12 Variance and skew
1. Derive expressions from first principles for the variance and skew in terms of non-central moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.13 Mean values of a power
1. Derive expressions from first principles for the mean values of second, third, and fourth powers of a random variable in terms of its central moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.14 Mean second moment of area
A beam of circular cross-section has a normally distributed diameter D with mean 100 mm and standard deviation 2 mm.
1. Calculate the mean second moment of area (I = π D^4 / 64) about a diameter.
2. Compare this with the nominal second moment of area based on a nominal diameter of 100 mm.
(Hint: The kurtosis of a normal distribution is 3σ^4.)
Problem 2.15 Volume of sphere
The performance of a product is dependent on the volume V of a contained steel sphere of diameter D remaining within tight specifications. The machine manufacturing the spheres is controlled by the specification on the nominal diameter.
By using the expectation operator and the identity a^n = ((a - b) + b)^n, or otherwise, derive an exact formula for the mean μ_V in terms of μ_D and higher order central moments.
2.11 LINEAR FUNCTIONS
2.11.1 Introduction
In this section we begin to explore an approximate method for computing with random variables by considering only the first few moments of a distribution (typically only the mean and variance, but sometimes the skew and kurtosis). Exact methods in computation with random variables are often exceedingly complex and insufficiently general. However, even the approximate probabilistic approaches developed here are an order of magnitude more powerful in engineering design for quality and reliability than the traditional factor of safety approach.
In this section we look only at linear functions. In a later section we will look at more general function types.
2.11.2 General formulae
For the special case of a linear function of several variables, the
moments may be derived exactly and take particularly simple forms.
Note that the relations below are true, independent of the types of underlying distributions possessed by the x_i.
However, in the general case, z will not have a distribution of any known standard type.
If x_1, x_2, x_3, ... are independent random variables, a_1, a_2, a_3, ... are constants, and z = a_1 x_1 + a_2 x_2 + a_3 x_3 + ..., then the first four moments of z are given by:

μ_z = a_1 μ_x1 + a_2 μ_x2 + a_3 μ_x3 + ... = Σ a_i μ_xi
v_z = a_1^2 v_x1 + a_2^2 v_x2 + a_3^2 v_x3 + ... = Σ a_i^2 v_xi
s_z = a_1^3 s_x1 + a_2^3 s_x2 + a_3^3 s_x3 + ... = Σ a_i^3 s_xi
k_z = a_1^4 k_x1 + a_2^4 k_x2 + a_3^4 k_x3 + ...
      + 6 { a_1^2 v_x1 a_2^2 v_x2 + a_2^2 v_x2 a_3^2 v_x3 + a_1^2 v_x1 a_3^2 v_x3 + ... }
    = Σ a_i^4 k_xi + 6 Σ_{i<j} a_i^2 v_xi a_j^2 v_xj

Note that for moments of order higher than 3, the relations are no longer simple sums of the same order moments.
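The first three formulae can be transcribed directly. A Python sketch; the exponential moments used in the demonstration (mean 1, variance 1, skew 2 for rate 1) are standard values, and the coefficients are arbitrary:

```python
# moments of z = a1 x1 + a2 x2 + ... for independent x_i (a sketch of the
# mean, variance and skew formulae above)
def linear_moments(a, means, variances, skews):
    mu = sum(ai * m for ai, m in zip(a, means))
    v = sum(ai ** 2 * vi for ai, vi in zip(a, variances))
    s = sum(ai ** 3 * si for ai, si in zip(a, skews))
    return mu, v, s

# two exponential(1) variables, z = 2 x1 - x2
mu, v, s = linear_moments([2, -1], [1, 1], [1, 1], [2, 2])
print(mu, v, s)  # 1 5 14
```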
2.11.3 Sums, differences and multiples
In the special case of sums and differences of two random variables, and fixed scalar multiples of a single random variable, the above relations reduce to:

Table 1: Moments of sums, differences and multiples

          SUM                          DIFFERENCE                   MULTIPLE
FUNCTION  z = x + y                    z = x - y                    z = a x
MEAN      μ_z = μ_x + μ_y              μ_z = μ_x - μ_y              μ_z = a μ_x
VARIANCE  v_z = v_x + v_y              v_z = v_x + v_y              v_z = a^2 v_x
SKEW      s_z = s_x + s_y              s_z = s_x - s_y              s_z = a^3 s_x
KURTOSIS  k_z = k_x + k_y + 6 v_x v_y  k_z = k_x + k_y + 6 v_x v_y  k_z = a^4 k_x

2.11.4 Linear functions of normal distributions
Linear combinations of normally distributed random variables are a special case. They are themselves normal.
This means that we can find the actual normal distribution resulting from a linear sum by simply calculating the mean and variance from the formulae above.
Below, we graphically depict the addition of two normal random variables.
ADDITION OF TWO NORMAL RANDOM VARIABLES
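The sum rule and the normality property can be confirmed by simulation. A Python sketch, with arbitrary parameters x ~ N(1, 4) and y ~ N(3, 1), so z = x + y should have mean 4 and variance 5:

```python
import random
import statistics

random.seed(0)  # arbitrary seed for reproducibility
# random.gauss takes the standard deviation, so x ~ N(1, 2^2), y ~ N(3, 1^2)
z = [random.gauss(1, 2) + random.gauss(3, 1) for _ in range(100000)]
m = statistics.mean(z)       # expect mu_z = 1 + 3 = 4
v = statistics.pvariance(z)  # expect v_z = 4 + 1 = 5
```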
2.11.5 The Central Limit Theorem
The sum of a number of independent but not necessarily identically
distributed random variables tends to become normally distributed as
the number increases, provided that no one random variable contributes
appreciably more than the others to the sum; that is, no type of distribu-
tion dominates.
This is an important result for designers. It means, for example, that
the overall dimension of an assembly of component parts, independent
of the distribution types of each component dimension, will tend to be
normally distributed. Knowing this, the designer can work back from
the individual component tolerances to get an estimate of the proportion
of assemblies which will lie outside any given specication.
The six greyed graphs below are, sequentially, the distributions of
the average of 1, 2, 3, 4, 5, and 6 independent identically Uniformly dis-
tributed random variables on [-1, 1]. The full line is the Normal distri-
bution which has the same variance as the average.
It can be seen that even the average of only three Uniform distribu-
tions gives a result surprisingly close to Normal.
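The same experiment is easy to reproduce. A Python sketch of the n = 3 case; the sample size and seed are arbitrary:

```python
import math
import random
import statistics

random.seed(0)  # arbitrary seed
n = 3  # average of three Uniform[-1, 1] variables
avgs = [sum(random.uniform(-1, 1) for _ in range(n)) / n
        for _ in range(100000)]

# the variance of the average is (1/3)/n = 1/9
v = statistics.pvariance(avgs)

# fraction within one standard deviation; a Normal gives about 0.683
sd = math.sqrt(1 / 9)
frac = sum(abs(a) <= sd for a in avgs) / len(avgs)
```

Even at n = 3 the one-standard-deviation coverage is already close to the Normal value, as the greyed graphs suggest.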
μ_z = μ_x1 μ_x2 μ_x3 ...

The mean of a product of independent random variables is simply the product of their means.
2.13.2 The variance of a product
The variance of a product is obtained by taking the expectation of the square of z:

z^2 = x_1^2 x_2^2 x_3^2 ...
E[z^2] = E[x_1^2] E[x_2^2] E[x_3^2] ...
(μ_z^2 + v_z) = (μ_x1^2 + v_x1) (μ_x2^2 + v_x2) (μ_x3^2 + v_x3) ...

To calculate the variance v_z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the square of the mean μ_z^2 calculated previously.
2.13.3 The skew of a product

z^3 = x_1^3 x_2^3 x_3^3 ...
E[z^3] = E[x_1^3] E[x_2^3] E[x_3^3] ...
(μ_z^3 + 3 μ_z v_z + s_z) = (μ_x1^3 + 3 μ_x1 v_x1 + s_x1) (μ_x2^3 + 3 μ_x2 v_x2 + s_x2) (μ_x3^3 + 3 μ_x3 v_x3 + s_x3) ...

To calculate the skew s_z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the term μ_z^3 + 3 μ_z v_z calculated from the previous steps.
Higher moments are calculated in a similar fashion.
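The mean and variance steps above can be coded directly. A Python sketch; the numbers in the demonstration are arbitrary (for z = x y with the values below, the exact variance μ_x^2 v_y + μ_y^2 v_x + v_x v_y works out to 1.72):

```python
# mean and variance of a product of independent random variables,
# following the two steps above (a sketch)
def product_mean_var(means, variances):
    mu_z, e_z2 = 1.0, 1.0
    for m, v in zip(means, variances):
        mu_z *= m            # E[z] = product of the means
        e_z2 *= m * m + v    # E[z^2] = product of the (mu^2 + v) factors
    return mu_z, e_z2 - mu_z ** 2  # subtract the square of the mean

mu, v = product_mean_var([2.0, 3.0], [0.1, 0.2])
print(mu, v)
```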
Problem 2.28 Volume of a cube
Calculate the mean and variance of the volume of a cube of side L where the sides are machined independently by the same machining process and are therefore considered to be identically distributed independent random variables each with mean μ and variance v.
Comment on how this calculation differs from one based simply on the formula V = L^3 (see below).
2.14 POSITIVE INTEGER POWERS
2.14.1 The mean of a positive integer power
We have already derived the formula for the Expectation of a positive integer power of a random variable in terms of central moments. Since the expectation gives the mean value, we have immediately that for z = x^n:

μ_z = Σ_{i=0..n} C(n, i) E[(x - μ_x)^i] μ_x^(n-i)

2.14.2 Tables for a Normally distributed random variable
Since the central moments of a normal distribution can all be expressed in terms of its mean and variance (see the listing in the section on the Normal distribution), its powers can therefore be expressed via the above formula in terms of them also.
The tables below thus give exact formulae for calculating the mean, variance and skew of positive integer powers of a normally distributed random variable x with mean μ and variance v.
The entries in the tables are expressed in the form

(first order approximation) x (1 + terms in the variance ratio u)

where the variance ratio u has been defined as the square of the coefficient of variation, u = v/μ^2.

Table 3: Moments of a square, z = x^2

μ_z = μ^2 (1 + u)
v_z = 4 μ^2 v (1 + u/2)
s_z = 24 μ^2 v^2 (1 + u/3)
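Table 3 can be verified against the exact normal results E[x^2] = μ^2 + v and Var(x^2) = 4 μ^2 v + 2 v^2. A Python check with arbitrary numbers μ = 10, v = 4:

```python
# check the Table 3 entries for z = x**2 against the exact normal moments
mu, v = 10.0, 4.0      # arbitrary mean and variance of a normal x
u = v / mu ** 2        # the variance ratio

mu_z = mu ** 2 * (1 + u)               # table entry for the mean
v_z = 4 * mu ** 2 * v * (1 + u / 2)    # table entry for the variance

exact_mean = mu ** 2 + v                   # E[x^2] = mu^2 + v
exact_var = 4 * mu ** 2 * v + 2 * v ** 2   # Var(x^2) for a normal x
```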
Problem 2.29 Inconsistency?
If μ = 0, does this imply that μ_z, v_z, s_z are all zero?
Table 4: Moments of a cube, z = x^3

μ_z = μ^3 (1 + 3u)
v_z = 9 μ^4 v (1 + 4u + (5/3) u^2)
s_z = 162 μ^5 v^2 (1 + (16/3) u + 5 u^2)

Table 5: Moments of a fourth power, z = x^4

μ_z = μ^4 (1 + 6u + 3u^2)
v_z = 16 μ^6 v (1 + (21/2) u + 24 u^2 + 6 u^3)
s_z = 576 μ^8 v^2 (1 + 16u + (149/2) u^2 + 99 u^3 + 33 u^4)

Table 6: Moments of a fifth power, z = x^5

μ_z = μ^5 (1 + 10u + 15u^2)
v_z = 25 μ^8 v (1 + 20u + 114 u^2 + 180 u^3 + (189/5) u^4)
s_z = 1500 μ^11 v^2 (1 + (97/3) u + 366 u^2 + 1710 u^3 + 2997 u^4 + 1323 u^5)

2.15 GENERAL FUNCTIONS
2.15.1 Introduction
We complete our introductory discussion of probabilistic design by describing a method (called the Moment Analysis Method) by which you can calculate the moments of any differentiable function of independent random variables, and hence get an estimate of the variability inherent in a given design. Indeed, all the formulae we have introduced so far may be derived by this method.
The basic principle of the Moment Analysis Method is the specification of each probability distribution by its set of moments in the form {mean, variance, skew, kurtosis, ...}. Then, if we wish to calculate a function of several random variables, the moments of that function will be functions of the moments of those several random variables.
The two techniques that we will use are
1. Expansion of the function in a Taylor series
2. Application of the Expectation operator to the series
2.15.2 The basic algorithm
Calculation of the mean
1. Expand the function z = g(x, y, ...) as a Taylor series about the mean values (μ_x, μ_y, ...) of the independent random variables.
2. Calculate the mean of the function (μ_z) by calculating the expectation of the terms in the expansion.
Calculation of the nth central moment
1. Expand the function [z - μ_z]^n as a Taylor series about the mean values (μ_x, μ_y, ...) of the independent random variables.
2. Calculate the nth central moment of the function (v_z, s_z, k_z, ...) by calculating the expectation of the terms in the expansion.
3. Calculate μ_z and substitute for it in the expression.
2.15.3 The theoretical foundation
Assumptions
The fundamental assumptions upon which the method is based are:
1. The random variables x, y, ... are independent. (Very important!)
2. The pertinent information content of each of the distributions is sufficiently well represented by a finite (small) number of moments.
3. The function is sufficiently well represented by a finite (small) number of terms of its Taylor series.
Formulae
The fundamental formulae upon which the method is based are:
1. The mean μ_z of a function z = g(x, y, ...) is the expectation of the function.
2. The nth central moment μ_(n)z of a function z = g(x, y, ...) is the expectation of (z - μ_z)^n.
3. The expectation of a linear sum is the sum of the expectations of the terms.
4. The expectation of a product of independent random variables is the product of their expectations.
2.15.4 How to write down a Taylor series
In this section we discuss a mnemonic method for easily writing down a Taylor series expansion of a function of several variables.
Suppose you have a function z = g(x, y, ...) and you wish to write down the Taylor series for z expanded about the point x = μ_x, y = μ_y, .... A simple mnemonic way of doing this is as follows:
1. Write down the power series for exp(X+Y+...):

1 + (X+Y+...) + (1/2!)(X+Y+...)^2 + (1/3!)(X+Y+...)^3 + ...

2. Expand the terms:

1 + (X+Y+...) + (1/2!)(X^2 + 2XY + Y^2 + ...)
+ (1/3!)(X^3 + 3X^2 Y + 3X Y^2 + Y^3 + ...) + ...
3. Make the following replacements:

1 -> [z] ( = g(μ_x, μ_y, ...) )
X^n -> [∂^n z/∂x^n] (x - μ_x)^n
X^n Y^m -> [∂^(n+m) z/∂x^n ∂y^m] (x - μ_x)^n (y - μ_y)^m

and so on for products of more than two variables.
Remember that the notation [ ] denotes evaluation at the mean values, that E[x - μ_x] = 0, and that E[(x - μ_x)^n] = μ_(n)x.
1 + (X+Y) + (1/2!)(X^2 + 2XY + Y^2) + (1/3!)(X^3 + 3X^2 Y + 3X Y^2 + Y^3)
+ (1/4!)(X^4 + 4X^3 Y + 6X^2 Y^2 + 4X Y^3 + Y^4)
+ (1/5!)(X^5 + 5X^4 Y + 10X^3 Y^2 + 10X^2 Y^3 + 5X Y^4 + Y^5) + ...
1. Any term involving a variable to the first power is zero, since the expectation E[x - μ_x] is zero:

1 + (1/2!)(X^2 + Y^2) + (1/3!)(X^3 + Y^3)
+ (1/4!)(X^4 + 6X^2 Y^2 + Y^4)
+ (1/5!)(X^5 + 10X^3 Y^2 + 10X^2 Y^3 + Y^5) + ...
2. Any term involving a higher power leading to a moment for which you have no information must be omitted.
In this example we only know means and variances, hence the expression reduces to

1 + (1/2!)(X^2 + Y^2) + (1/4!)(6X^2 Y^2)
3. If the coefficients resulting from the higher derivatives in the expansion are small enough compared to those resulting from the lower ones, the corresponding terms may be neglected. This is often the case for functions which are not too far off linear in the region near the point (μ_x, μ_y).
In this example we would look at the comparative size of the (1/4!)(6X^2 Y^2) term relative to the (1/2!)(X^2 + Y^2) terms.
Assuming the term can be neglected, the expression reduces to

1 + (1/2!)(X^2 + Y^2)
leading finally to a general second order approximation for μ_z:

μ_z = g(μ_x, μ_y) + (1/2) { v_x [∂^2 z/∂x^2] + v_y [∂^2 z/∂y^2] }

It is evident from this process that the same form is valid for any number of variables:

μ_z = g(μ_x, μ_y, ...) + (1/2) { v_x [∂^2 z/∂x^2] + v_y [∂^2 z/∂y^2] + ... }
Note carefully that the mean of a general function is only equal to the function of the means as a first order approximation.
2.15.7 Calculation of the variance of a function
As an example of the method described for calculating higher order moments of a differentiable function of random variables, we will calculate an expression for the second order approximation to the variance of z = g(x, y, ...), that is, E[(z - μ_z)^2].
1. Since (z - μ_z)^2 is still a function of x, y, ..., we can let Z = (z - μ_z)^2. E[(z - μ_z)^2] then becomes μ_Z, which we can write down directly from the result derived in the section above:

v_z = μ_Z = [Z] + (1/2) { v_x [∂^2 Z/∂x^2] + v_y [∂^2 Z/∂y^2] + ... }

2. Evaluate [Z]:

[Z] = ([z] - μ_z)^2 = (1/4) { v_x [∂^2 z/∂x^2] + v_y [∂^2 z/∂y^2] + ... }^2

3. Evaluate a typical second derivative:

[∂^2 Z/∂x^2] = 2 { [∂z/∂x]^2 + ([z] - μ_z) [∂^2 z/∂x^2] }

4. Collect the terms and simplify to get finally:

v_z = { v_x [∂z/∂x]^2 + v_y [∂z/∂y]^2 + ... } - (1/4) { v_x [∂^2 z/∂x^2] + v_y [∂^2 z/∂y^2] + ... }^2

This is a formula for the second order approximation to the variance of a differentiable function of any number of random variables. It is one of the most important formulae in the area of probabilistic and robust design.
2.15.8 Computer implementation
The Moment Analysis Method is ideally suited to being programmed using a symbolic mathematical programming language.
The size and complexity of the problems that can be tackled will depend on the memory and speed of the computational devices used. The accuracy of the result will in addition depend on the degree of linearity of the function g(x, y, ...) near the mean values of the independent random variables (μ_x, μ_y, ...), together with the number of moments used to specify each of their distributions. The more linear the function and the more moments used, the more accurate the results will be. The most significant problems will arise when the function has a singularity within the support of the distribution.
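Short of a full symbolic implementation, the mean step of the algorithm can be sketched numerically with finite-difference second derivatives. A Python sketch; the function and numbers are arbitrary, and for z = x^2 the Taylor series terminates, so the second order result should be exact (μ_z = μ^2 + v = 104):

```python
# second order approximation to the mean of z = g(x, y, ...) using
# numerical second derivatives at the mean point (a sketch; the step h
# is an arbitrary choice)
def mean_second_order(g, means, variances, h=1e-4):
    base = g(*means)
    total = base
    for i, v in enumerate(variances):
        up = list(means)
        dn = list(means)
        up[i] += h
        dn[i] -= h
        d2 = (g(*up) - 2.0 * base + g(*dn)) / h ** 2  # d2g/dxi2 at the means
        total += 0.5 * v * d2
    return total

approx = mean_second_order(lambda x: x * x, [10.0], [4.0])
```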
Problem 2.30 Mean of a function of one variable
Derive an expression for the mean of a general differentiable function of a single random variable up to and including the term involving the kurtosis.
Let x be a symmetrically distributed random variable with mean 0, variance 1 and kurtosis 1. Using the results above, determine an approximation to the mean of e^x.
Problem 2.31 Variance of a function of one variable
Derive an expression for the variance of a general differentiable function of a single random variable up to and including the term involving the skew. Neglect terms of order higher than 3.
Let x be a symmetrically distributed random variable with mean 0 and variance 1. Using the results above, determine an approximation to the variance of e^x.
Problem 2.32 Mean of a function of two variables
By using a Taylor series expansion about the mean values of random variables x and y, derive an approximate formula for the mean value of z = g(x, y) in terms of the means and variances of x and y.
Problem 2.33 Probabilistic means
The mean of a function is not generally the function of the means.
1. Describe the types of functions for which the mean of the function is the function of the means.
2. Discuss the ramifications of these results for ordinary tolerance analysis.
3. Discuss the ramifications of these results for quality design.
2.16 SUMMARY OF APPROXIMATE FORMULAE
This section summarizes the first and second order approximations to the mean μ_z and variance v_z of a differentiable function z = g(x, y, ...) of independent random variables x, y, ....
If g(x, y, ...) is approximately linear in the region within several standard deviations of the mean values (μ_x, μ_y, ...) of (x, y, ...), then the first order approximation will often give satisfactory estimates. In any case, the value of the extra term in the second order approximation may always be computed to assess its significance.
For estimates more accurate than that provided by the second order approximation, moments of order higher than the variance will need to be known.
2.16.1 First order approximation
Mean

μ_z = g(μ_x, μ_y, ...)

Variance

v_z = v_x [∂z/∂x]^2 + v_y [∂z/∂y]^2 + ...
2.16.2 Second order approximation

Mean

    μ_z = g(μ_x, μ_y, ...) + (1/2)[ (∂²z/∂x²) σ_x² + (∂²z/∂y²) σ_y² + ... ]

Variance

    σ_z² = (∂z/∂x)² σ_x² + (∂z/∂y)² σ_y² + ... − (1/4)[ (∂²z/∂x²) σ_x² + (∂²z/∂y²) σ_y² + ... ]²

Problem 2.34 Approximate formulae
Write down the formulae for the first and second order approximation to the variance of a function z = g(x_1, x_2, x_3) of independent random variables x_i each with mean μ_i and variance σ_i².
What is the main assumption upon which the validity of these formulae resides?

Problem 2.35 Right-angled bracket
An angled bracket has legs of nominal length B and C. The nominal angle between them is θ = 90°. The distance A between the ends of the legs of the bracket is an important quality variable.
Using the formulae above, derive first order approximations for the mean and variance of A in terms of the means and variances of B, C, and θ.

Problem 2.36 Applications for probabilistic design
From your own experience, describe a possible application for probabilistic design.
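For z = x² the second order mean is exact, since E[x²] = μ² + σ², which makes it a convenient check on the formulae of Section 2.16.2. A small numerical sketch (the values of μ and σ² are illustrative assumptions):

```python
# Second order approximations for z = g(x) of a single random variable:
#   mean:     mu_z    ~ g(mu) + (1/2) g''(mu) sigma^2
#   variance: sigma_z^2 ~ g'(mu)^2 sigma^2 - (1/4) (g''(mu) sigma^2)^2
mu, var = 5.0, 0.25   # illustrative values; sigma/mu = 0.1

# z = x^2: g'(x) = 2x, g''(x) = 2
mean_1st = mu**2                          # first order: 25.0
mean_2nd = mu**2 + 0.5 * 2 * var          # second order: mu^2 + sigma^2 = 25.25
var_1st = (2 * mu)**2 * var               # first order: 25.0
var_2nd = var_1st - 0.25 * (2 * var)**2   # second order: 24.9375

print(mean_1st, mean_2nd, var_1st, var_2nd)
print(f"first order mean underestimates by "
      f"{100 * (mean_2nd - mean_1st) / mean_2nd:.2f}%")
```

With σ/μ = 0.1 the first order mean falls about 1% below the second order (and here exact) mean, which is the result asked for in Problem 2.37.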
2.17 GENERAL POWER FUNCTIONS

In engineering we often deal with general power functions of the form

    z = c x^m y^n ...

In this section we take the formulae above for the approximate mean and variance of z and sketch their application to functions of this form.

2.17.1 The mean of a general power function

1. Calculate the second derivatives:

    ∂²z/∂x² = m(m − 1) c x^(m−2) y^n = (c x^m y^n) m(m − 1)/x²

(and similarly for y, ...).

2. Substitute into the formula for the mean μ_z and rewrite terms of the form σ²/μ² as squares of coefficients of variation (V_x = σ_x/μ_x, etc.):

    μ_z = (c μ_x^m μ_y^n) { 1 + (1/2)[ m(m − 1) V_x² + n(n − 1) V_y² + ... ] }

2.17.2 The variance of a general power function

In a similar fashion we can substitute into the formula for the variance:

    σ_z² = (c μ_x^m μ_y^n)² { m² V_x² + n² V_y² + ... − (1/4)[ m(m − 1) V_x² + n(n − 1) V_y² + ... ]² }

2.17.3 The coefficient of variation of a power function

From the above two results, we can now write down an expression for the second order approximation to the square of the coefficient of variation of z in terms of the squares of the coefficients of variation of x, y, ...:

    V_z² = { m² V_x² + n² V_y² + ... − (1/4)[ m(m − 1) V_x² + n(n − 1) V_y² + ... ]² } / { 1 + (1/2)[ m(m − 1) V_x² + n(n − 1) V_y² + ... ] }²

The first order approximation in coefficient of variation terms is then simply:

    V_z² = m² V_x² + n² V_y² + ...

2.17.4 The quotient of two random variables

We apply these formulae to obtain second order approximation expressions for the quotient of two random variables z = x/y:

The mean of a quotient

    μ_z = (μ_x/μ_y)(1 + V_y²)

The variance of a quotient

    σ_z² = (μ_x/μ_y)²(V_x² + V_y² − V_y⁴)

2.17.5 The inverse of a random variable

We again apply these formulae to obtain second order approximation expressions for the inverse of a random variable z = 1/y:

The mean of an inverse

    μ_z = (1/μ_y)(1 + V_y²)

The variance of an inverse

    σ_z² = (1/μ_y)²(V_y² − V_y⁴)

Problem 2.37 The mean of a square
Suppose that the standard deviation of a random variable x is 10% of the mean, that is σ_x²/μ_x² = 0.01.
Show that for z = x² the first order approximation to the mean of z underestimates the second order approximation by about 1%.

Problem 2.38 Volume of a cylinder
A mass-produced cylindrical container has an internal diameter of {μ_D = 2 m, σ_D² = 0.01 m²} and an internal length of {μ_L = 10 m, σ_L² = 0.04 m²}.
Compare the first and second order approximations to the mean and variance of its volume.

Problem 2.39 Second moment of area
A quality variable z is related to independent random variables x and y by z = a x^p y^q (where a, p and q are constants).
1. Using the first order formulae for the mean and variance of a function of random variables, derive an expression for the approximate coefficient of variation of z in terms of the coefficients of variation of x and y.
2. A triangular beam cross-section has second moment of area I = (1/36) B H³, where B and H are independent random variables with means μ_B, μ_H and variances σ_B², σ_H² respectively. Use the formula derived in 1. to derive an expression for the variance σ_I² of the second moment of area in terms of μ_B, μ_H, σ_B², σ_H².

Problem 2.40 Coefficients of variation
The radius and length of a cylinder are independent random variables with identical coefficients of variation C.
1. Determine the coefficient of variation of the volume of the cylinder, using both the first and second order coefficient of variation approximation formulae.
2. Comment on the increased accuracy provided by the second order approximation.
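The quotient formulae of Section 2.17.4 can likewise be spot-checked by simulation. A minimal sketch for z = x/y (all parameter values are illustrative assumptions, and normal inputs are assumed for the simulation):

```python
import random

mu_x, sigma_x = 20.0, 1.0    # illustrative values; V_x = 0.05
mu_y, sigma_y = 8.0, 0.4     # V_y = 0.05

Vx2 = (sigma_x / mu_x)**2    # squared coefficients of variation
Vy2 = (sigma_y / mu_y)**2

# Second order approximations for the quotient z = x/y:
#   mu_z      ~ (mu_x/mu_y)(1 + V_y^2)
#   sigma_z^2 ~ (mu_x/mu_y)^2 (V_x^2 + V_y^2 - V_y^4)
mean_q = (mu_x / mu_y) * (1 + Vy2)
var_q = (mu_x / mu_y)**2 * (Vx2 + Vy2 - Vy2**2)

# Monte Carlo comparison
random.seed(2)
zs = [random.gauss(mu_x, sigma_x) / random.gauss(mu_y, sigma_y)
      for _ in range(200_000)]
mc_mean = sum(zs) / len(zs)
mc_var = sum((z - mc_mean)**2 for z in zs) / (len(zs) - 1)

print(f"second order: mean={mean_q:.5f}  var={var_q:.6f}")
print(f"Monte Carlo:  mean={mc_mean:.5f}  var={mc_var:.6f}")
```

Note the simulation only behaves this well because μ_y is many standard deviations from zero; a denominator with appreciable probability mass near zero would invalidate both the approximation and the comparison.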
Problem 2.41 Experimental formula
An experimentally derived formula for a quality variable W in terms of design parameters X, Y, Z is

    W = 2.657 X^0.2 Y^0.7 Z^0.1

The coefficients of variation of X, Y and Z are each equal to k.
Determine an estimate of the coefficient of variation of W in terms of k.
Problem 2.42 First order approximation
1. State the formula for the first order approximation to the variance of a function of random variables.
2. From the formula in 1. above derive the formula for the variance of the function Z = X² sin Y, where X and Y are independent random variables.
3. From your result in 2. determine the standard deviation of Z if the means and standard deviations of X and Y are 2 units each.
Problem 2.43 Springs in series
A mass-produced product contains two springs with stiffnesses K_1 and K_2 connected in series.
1. Calculate the first order approximation to the variance of the combined stiffness.
2. Calculate the first order approximation to the variance of a) A = K_1 K_2, b) B = K_1 + K_2 and c) C = A/B.
3. Explain why this does not give the same result as in 1. (Hint: The difference is not due to the inexactitude of the approximations used.)
Problem 2.44 The minimum diameter of a rod in tension
Determine the minimum mean diameter μ_D of a circular rod with ultimate tensile strength S which will sustain a load P in tension with a reliability of R.
The rod is to be made of 4130 steel of ultimate tensile strength S, considered normally distributed with mean μ_S = 1075 MPa and variance σ_S² = 900 MPa².
The manufacturing process is known to be capable of turning out 99.7% of the product to a tolerance of ±1.5% of the rod diameter.
The load P is the resultant of a large number of randomly varying loads and hence can be considered to be normally distributed. Its mean is μ_P = 13200 N, and its variance is σ_P² = 40 000 N².
The reliability R (equivalent to the proportion of product within specification) is required to ensure a probability of failure of less than one per million of shafts produced.