
MATH2059

Error Analysis
1. Error Definitions
2. Round-off Errors
3. Truncation Errors
4. Total Numerical Error
5. Error Propagation

1. Error Definitions
Significant Digit (S.D.):
The significant figures/digits of a number carry meaningful information about the precision of a measurement.
Consider measurements of the length of a table by three rulers with different resolutions:
L = 3.2 m (2 S.D.)
L = 3.27 m (3 S.D.)
L = 3.270 m (4 S.D.)

The first digit is the most significant digit, and the last digit is the least significant digit in the measurement.
The error in the measurement is associated with the least significant digit:
L = 3.2 ± 0.2 m
L = 3.27 ± 0.01 m
L = 3.270 ± 0.003 m

Any digit beyond the error-carrying digit is meaningless.


Leading zeros are not significant (e.g., 0.00052 has only two significant digits). They are only used to show the location of the decimal point. To avoid confusion, use scientific notation (e.g., 5.2 × 10^-4).

Accuracy & Precision:
Accuracy refers to how closely a computed or measured value agrees with the true value. Inaccuracy (also called bias) is a systematic deviation from the truth.
Precision refers to how closely individual computed or measured values agree with each other. Imprecision (also called uncertainty) refers to the magnitude of the scatter.
Accuracy and precision are two independent quantities.

[Figure: four target diagrams illustrating the independent combinations: accurate & precise; accurate & not precise; not accurate & precise; not accurate & not precise.]

Numerical Error:
In numerical methods, deviations from both accuracy and precision are collectively termed the error in the predictions.
Numerical errors arise from the use of approximations to represent exact mathematical quantities or operations.
Error is the difference between the exact and the numerical solution:

true value = approximation + error

True error (Et):
The difference between the exact and approximate values:

Et = true value - approximate value

Absolute error (|Et|):

|Et| = abs(Et)

We use the subscript t to designate that this is the true error.


To take into account different magnitudes in different measurements, we prefer to normalize the error. Then, we define the fractional relative error:

fractional relative error = (true value - approximate value) / (true value)

(True) percent relative error (εt):

εt = (true error) / (true value) × 100% = (true value - approximate value) / (true value) × 100%

A statement of error usually refers to the percent relative error.

Approximate error:
In most cases we don't have knowledge of the true value, and so cannot calculate the true error. We therefore define the approximate error:

εa = (approximate error) / (approximate value) × 100%

The approximate error can be defined in different ways depending on the problem. For example, in iterative problems (the most common case in numerical methods), the error is defined with respect to the previous calculation:

εa = (current approximation - previous approximation) / (current approximation) × 100%

Stopping criterion:
Often, when performing calculations, we may not be concerned with the sign of the error, but only with whether the absolute value of the percent relative error is lower than a prespecified tolerance εs. For such cases, the computation is repeated until

|εa| < εs

This relationship is referred to as the stopping criterion.
The relationship between the stopping criterion and the number of S.D.s:

εs = (0.5 × 10^(2-n)) %        n = number of S.D.

i.e., if εs is satisfied, the result is correct to at least n significant digits.
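As an illustration, a minimal MATLAB sketch (the example value x = 0.5 and the variable names are ours, not from the slides): estimate e^x from its Maclaurin series, stopping once |εa| < εs for n = 4 significant digits.

% Estimate e^x with the Maclaurin series e^x = 1 + x + x^2/2! + ...
x = 0.5;
n_sd = 4;                          % target number of significant digits
eps_s = 0.5 * 10^(2 - n_sd);       % stopping tolerance (%)
approx = 1;                        % zeroth-order term of e^x
eps_a = 100;  k = 0;               % initialize error above the tolerance
while eps_a >= eps_s
    k = k + 1;
    prev = approx;
    approx = approx + x^k / factorial(k);         % add the next series term
    eps_a = abs((approx - prev) / approx) * 100;  % approximate error (%)
end
fprintf('e^%.1f ~ %.6f after %d terms, eps_a = %.4f%%\n', x, approx, k+1, eps_a)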

2. Round-off Errors
In numerical analysis, round-off errors arise because only a limited number of significant figures can be kept when representing numbers.
Base-10 (decimal) versus base-2 (binary) system (positional notation):

Base-10:  digits a b c d in positions 10^3 10^2 10^1 10^0
          value = a×10^3 + b×10^2 + c×10^1 + d×10^0

Base-2:   digits a b c d in positions 2^3 2^2 2^1 2^0
          value = a×2^3 + b×2^2 + c×2^1 + d×2^0

Computers use only two states (0/1; on/off), so computers can only store numbers in the binary (base-2) system.
e.g., 100101: each digit is a bit (binary digit), so a computer uses 6 bits to store this number. 1 byte = 8 bits.

Integer Representation:
The first bit is used to store the sign (e.g., 0 for plus and 1 for minus); the remaining bits are used to store the number.
In integer representation, numbers are represented exactly, but only a limited range of integers fits in a limited memory.
Fractional quantities cannot be represented.
EX: Find the range of numbers that you can store in a 16-bit computer using integer representation. (-32767 to 32767)
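For comparison, MATLAB's built-in integer types use a two's-complement representation, which admits one extra negative value relative to the sign-magnitude scheme above (display details vary with MATLAB version):

>> intmax('int16')
ans =
  32767
>> intmin('int16')
ans =
 -32768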

Floating Point Representation (FPR):
FPR allows representing a much wider range of numbers than the integer representation.
It allows storing fractional quantities.
It is similar to scientific notation. A "word" stores the number as

±m × b^e        m: mantissa (significand), b: base, e: exponent

e.g., 0.015678 = 1.5678 × 10^-2 (in base-10)

IEEE floating point representation standards:
32-bit (single precision) word format
64-bit (double precision) word format
The mantissa holds only a limited number of significant digits → round-off error.
Increasing the word size from 32 bits to 64 bits decreases the round-off error.

Range:
In FPR there is still a limit to the representable numbers, but the range is much bigger.
In the 64-bit IEEE format (52-bit mantissa, 11-bit exponent):

Max value = +1.111…1 (binary) × 2^(+1023) ≈ 1.7977 × 10^308
Min value = +1.000…0 (binary) × 2^(-1022) ≈ 2.2251 × 10^-308

Numbers larger than the max value cannot be represented by the computer → overflow error.
>> realmax
ans =
1.7976931e+308
Any value bigger than this is set to infinity.

Numbers smaller than the min value cannot be represented: there is a "hole" at zero → underflow error.
>> realmin
ans =
2.2250739e-308
Any value smaller than this is set to zero.
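A quick console check of both limits (a sketch; the exact display depends on MATLAB's format setting):

>> realmax * 2
ans =
   Inf
>> realmin / 2^53
ans =
     0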

Chopping versus Rounding:
Assume a computer that can store 7 significant digits, storing 4.2428576428…:

Chopping: 4.242857
Rounding: 4.242858

Rounding is the better choice, since the sign of the error can be either positive or negative, leading to a smaller total numerical error, whereas the error in chopping always has the same sign and accumulates.
Rounding costs the computer extra processing, so many computers simply chop off the number.

Machine epsilon:
As a result of the quantization of numbers, there is a finite interval Δx between two consecutive numbers in floating point representation.
Machine epsilon (or machine precision) is an upper bound on the relative error due to chopping/rounding in floating point arithmetic:

ε = b^(1-t)        b: number base, t: number of digits in the mantissa

e.g., for a 64-bit representation, b = 2 and t = 53, so the machine epsilon is ε = 2^-52 = 2.22044… × 10^-16.
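This is exactly what MATLAB's built-in eps returns. A minimal console check: anything smaller than eps/2 added to 1 is lost to rounding, while eps itself is the gap between 1 and the next representable double.

>> eps
ans =
2.2204e-16
>> (1 + eps/2) == 1
ans =
1
>> (1 + eps) == 1
ans =
0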

Arithmetic operations:
Besides the limitations of the computer in storing numbers, arithmetic operations on these numbers also contribute to the round-off error.
Consider a hypothetical base-10 computer with a 4-digit mantissa and a 1-digit exponent. During arithmetic operations, numbers are converted to the same exponent:

1.345 + 0.03406 = 0.1345 × 10^1 + 0.003406 × 10^1 = 0.137906 × 10^1 → stored as 0.1379 × 10^1 (last digits chopped off)

EX: a) Evaluate the polynomial

y = x^3 - 5x^2 + 6x + 0.55

for x = 1.73. Use 3-digit arithmetic with chopping. Evaluate the error.
b) If the function is expressed in the nested (Horner) form

y = x(x(x - 5) + 6) + 0.55

what is the percent relative error? Compare with part a.
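A minimal MATLAB sketch of parts (a) and (b); the chop helper is our own (not a built-in), it chops a value to n significant digits, and we assume every intermediate result is chopped (double-precision representation effects at the chopping boundaries are ignored in this sketch):

% chop v to n significant digits (helper for this exercise only)
chop = @(v, n) fix(v .* 10.^(n - 1 - floor(log10(abs(v))))) ...
            .* 10.^(floor(log10(abs(v))) - n + 1);
x = 1.73;  n = 3;
y_true = x^3 - 5*x^2 + 6*x + 0.55;           % exact value: 1.143217
% part a: power form, chopping every intermediate result
ya = chop(chop(chop(x^3, n) - chop(5*x^2, n), n) + chop(6*x, n), n);
ya = chop(ya + 0.55, n);                     % gives 1.12
% part b: nested form, chopping every intermediate result
t  = chop(x * chop(x - 5, n), n);            % x(x - 5)
t  = chop(x * chop(t + 6, n), n);            % x(x(x - 5) + 6)
yb = chop(t + 0.55, n);                      % gives 1.15
et = abs([ya yb] - y_true) / y_true * 100    % percent relative errors

The nested form needs fewer operations, so it accumulates less round-off error (about 0.6% versus about 2% here).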

Subtractive cancellation (also called loss of significance):
Subtractive cancellation occurs when subtracting two nearly equal numbers:

0.7549 × 10^3 - 0.7548 × 10^3 = 0.0001 × 10^3
   (4 S.D.)        (4 S.D.)       (1 S.D.)

Many problems in numerical analysis are prone to subtractive cancellation error. It can be mitigated by manipulations in the formulation of the problem or by increasing the precision.
Consider finding the roots of a 2nd-order polynomial:

x1,2 = (-b ± √(b^2 - 4ac)) / (2a)

When b^2 >> 4ac, one of the roots subtracts two nearly equal numbers → subtractive cancellation. To mitigate it:
- use double precision, or
- use an alternative formulation:

x1,2 = -2c / (b ± √(b^2 - 4ac))
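A short MATLAB sketch of the problem in single precision; the coefficients are our own illustrative choice (the true roots are -0.001 and -3000):

a = single(1);  b = single(3000.001);  c = single(3);
d = sqrt(b^2 - 4*a*c);
x1_standard    = (-b + d) / (2*a)    % suffers subtractive cancellation
x1_alternative = -2*c / (b + d)      % accurate: about -0.001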

3. Truncation Errors
Truncation errors result from using approximations in place of exact mathematical representations, e.g., for the derivative, use the finite difference approximation:

dv/dt ≈ Δv/Δt = [v(t_{i+1}) - v(t_i)] / (t_{i+1} - t_i)

Taylor's Approximation:
Taylor's theorem: If a function f and its first (n+1) derivatives are continuous on an interval containing a and x, then the value of the function at x can be represented as

f(x) = f(a) + f'(a)(x - a) + [f''(a)/2!](x - a)^2 + … + [f^(n)(a)/n!](x - a)^n + Rn

(the successive terms are the 0th-order, 1st-order, 2nd-order, …, nth-order terms of the expansion)

Rn is the remainder (truncation error) of the n-th order approximation of the function f(x).

In other words, any smooth function can be approximated as a polynomial of order n within a given interval.
The error is expected to get smaller as n increases (convergence); there are exceptions where the solution may diverge.

[Figure: a function and its Taylor polynomial approximations of increasing order about the base point a = 1.]

EX: Use a Taylor series expansion to evaluate ln(10.1) using the base point a = 10. Evaluate the series up to the second-order term, and calculate the truncation error after each added term (take 7 S.D. in your calculations).
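A minimal MATLAB sketch of this exercise (the derivatives of ln x are worked out by hand: f' = 1/x, f'' = -1/x^2):

a = 10;  x = 10.1;  h = x - a;
true_val = log(x);                   % natural log in MATLAB
T0 = log(a);                         % 0th-order approximation
T1 = T0 + h/a;                       % + 1st-order term
T2 = T1 - h^2 / (2*a^2);             % + 2nd-order term
et = abs(([T0 T1 T2] - true_val) / true_val) * 100   % percent relative errors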

Error analysis using Taylor's approximation:
Taylor's theorem gives insight for estimating the truncation error in a numerical approximation.
Suppose that, from the base point f(x_i), we want to evaluate f(x_{i+1}):

f(x_{i+1}) = f(x_i) + f'(x_i)h + [f''(x_i)/2!]h^2 + … + [f^(n)(x_i)/n!]h^n + Rn

where h = x_{i+1} - x_i is the step size, and

Rn = [f^(n+1)(ξ)/(n+1)!] h^(n+1),        ξ ∈ [x_i, x_{i+1}]

Here, Rn gives the exact value of the error of the n-th order approximation of the function.
We can estimate the order of magnitude of the error in terms of the step size (h):

Rn = O(h^(n+1))

The step size (h) controls the magnitude of the error in the approximation.

Example: Error in the finite difference approximation:
Evaluate the truncation error of the finite difference approximation in the falling-object-in-air problem:

dv/dt = g - (c_d/m) v^2

Express v(t_{i+1}) as a Taylor series:

v(t_{i+1}) = v(t_i) + v'(t_i)h + [v''(t_i)/2!]h^2 + … + [v^(n)(t_i)/n!]h^n + Rn

Taylor's approximation to n = 1, with h = t_{i+1} - t_i:

v(t_{i+1}) = v(t_i) + v'(t_i)h + R1,        R1 = O(h^2)

Solving for the derivative:

v'(t_i) = [v(t_{i+1}) - v(t_i)]/h - R1/h
          (finite difference)     (truncation error)

Since R1 = O(h^2), the truncation error R1/h = O(h).
Then the error associated with the finite difference approximation is of the order of h.
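A MATLAB sketch demonstrating this O(h) behavior; the parameter values (m, c_d, t) are illustrative assumptions, not from the slides, and the analytical solution of the falling-object equation is used as the reference:

g = 9.81;  m = 68.1;  cd = 0.25;  t = 2;             % assumed parameters
v  = @(t) sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t);     % analytical solution
dv = @(t) g - (cd/m) * v(t).^2;                      % exact derivative
for h = [0.4 0.2 0.1 0.05]
    fd = (v(t + h) - v(t)) / h;                      % finite difference
    fprintf('h = %.2f   error = %.5f\n', h, abs(fd - dv(t)));
end
% halving h roughly halves the error -> the error is O(h)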

4. Total Numerical Error

Total Numerical Error = Round-off Error + Truncation Error

Round-off errors can be minimized by increasing the number of significant digits, by mitigating subtractive cancellations, and by decreasing the number of computations.
Truncation errors can be reduced by decreasing the step size (h). But this may introduce subtractive cancellation too!
So there is a trade-off between the truncation error and the round-off error in terms of the step size (h).
As a final note, there is no single systematic approach for evaluating the numerical error of different problems; we estimate it as well as we can.

5. Error Propagation
Error propagation concerns how an error in x propagates to the function f(x).
With x the true value and xo the approximate value:

Δx = |x - xo|                (error in)
Δf(xo) = |f(x) - f(xo)|      (error out)

Taylor's theorem can be used to estimate the propagation of the error. Evaluate f(x) near f(xo), keeping the expansion to the 1st-order term:

f(x) = f(xo) + f'(xo)(x - xo) + …
Δf(xo) = |f(x) - f(xo)| ≈ |f'(xo)| Δx

EX: Given a measured value of x = 2.5 ± 0.01, estimate the resulting error in the function f(x) = x^3.
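Applying the first-order estimate (a worked check added here for illustration):

Δf ≈ |f'(xo)| Δx = 3(xo)^2 Δx = 3 × (2.5)^2 × 0.01 = 0.1875

so f(2.5) = 15.625 with an estimated error of about ±0.19.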

Error propagation for functions of more than one variable:
Errors in: Δx = |x - xo|, Δy = |y - yo|, Δz = |z - zo|, …
Error out: Δf(xo, yo, zo, …)
This can be understood as a generalization of the single-variable case, with the partial derivatives evaluated at (xo, yo, zo, …):

Δf(xo, yo, zo, …) ≈ |∂f/∂x| Δx + |∂f/∂y| Δy + |∂f/∂z| Δz + …

EX: It is desired to find the volume (V) of a rectangular tank. The length (L), width (W) and height (H) of the tank were measured as L = 15 ± 0.5 m, W = 3 ± 0.2 m and H = 30 ± 0.8 m, respectively.
a) What is the volume (V) of the tank?
b) Estimate the error made in calculating the volume of the tank.
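A minimal MATLAB sketch of this exercise, using the measurement values as read above:

L = 15;  dL = 0.5;               % measured length and its error (m)
W = 3;   dW = 0.2;               % measured width and its error (m)
H = 30;  dH = 0.8;               % measured height and its error (m)
V  = L*W*H                       % a) volume = 1350 m^3
dV = W*H*dL + L*H*dW + L*W*dH    % b) propagated error = 171 m^3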
