skim@math.msstate.edu
1
Contents
2
1. Mathematical Preliminaries
In This Chapter:
Topics Applications/Properties
Review of Calculus Continuity & Differentiability
Intermediate Value Theorem
Mean Value Theorem
Taylor's theorem
Computer Arithmetic
Convergence
Order/rate of convergence and
Review of Linear Algebra Vectors and matrices
Norm
Determinant
Eigenvalues and eigenvectors
System of linear equations Matrix inversion
Elementary row operations LU factorization
System of tridiagonal matrices
Diagonally dominant matrices
Software Maple and Matlab
3
1.1. Review of Calculus
Continuity
4
Theorem:
If f is a function defined on a set X of real numbers and x0 ∈ X, the following
are equivalent:
a. f is continuous at x0;
b. if {x_n} is any sequence in X converging to x0, then lim_{n→∞} f(x_n) = f(x0).
Differentiability
Theorem:
If the function f is differentiable at x0, then f is continuous at x0.
Note: The converse is not true.
Example:
5
Intermediate Value Theorem (IVT):
Suppose f ∈ C[a, b] and K is a number between f(a) and f(b). Then, there
exists a number c in (a, b) for which f(c) = K.
Rolle's Theorem:
Suppose f ∈ C[a, b] and f is differentiable on (a, b). If f(a) = f(b), then
there exists a number c in (a, b) such that f'(c) = 0.
Example:
6
Mean Value Theorem (MVT):
Suppose f ∈ C[a, b] and f is differentiable on (a, b). Then, there exists a
number c in (a, b) such that
    f'(c) = (f(b) - f(a))/(b - a),
which can be equivalently written as
    f(b) = f(a) + f'(c)(b - a).
=
= 1.454648713
(7.2)
= 1.098818559
7
Figure: f(x) and the linear function L(x) with the average slope over [0, 2] (an illustration of the MVT).
=
=
8
Now, find the derivative of .
(9.1)
Figure: f(x) and f'(x) on [0, 2].
1.358229874 (9.2)
(9.3)
=
=
9
The following theorem can be derived by applying Rolle's Theorem
successively to and finally to .
Integration
10
Fundamental Theorem of Calculus:
Let f be continuous on [a, b]. Then,
Part I: (d/dx) ∫_a^x f(t) dt = f(x), for x in (a, b).
11
Taylor's Theorem
Taylor's Theorem with Lagrange Remainder:
Suppose f ∈ C^n[a, b], f^(n+1) exists on (a, b), and x0 ∈ [a, b]. Then, for
every x ∈ [a, b],
    f(x) = Σ_{k=0}^{n} (f^(k)(x0)/k!) (x - x0)^k + (f^(n+1)(ξ)/(n+1)!) (x - x0)^(n+1),
for some ξ between x0 and x.
Solution:
12
=
=
On the other hand, you can find the Taylor polynomials easily with Maple:
=
Figure: f(x) and the Taylor polynomial p3(x) on [0, 2].
=
=
13
=
where
14
Alternative Form of Taylor's Theorem:
Suppose f ∈ C^n[a, b] and f^(n+1) exists on (a, b). Then, for every x and
x + h in [a, b],
    f(x + h) = Σ_{k=0}^{n} (f^(k)(x)/k!) h^k + (f^(n+1)(ξ)/(n+1)!) h^(n+1),
for some ξ between x and x + h.
In detail,
15
= 10.002302850208
= 10.002302850208247527
,
where
16
Example: Find the tangent plane approximation of
at .
Solution:
3 (19.1)
2 (19.2)
(19.3)
Thus the tangent plane approximation at is
.
17
1.2. Computer Arithmetic and Convergence
Example:
=
= 3.141592654
= 3.1415927
= 3.141592653589793
=
On the other hand, = 0.
19
Computational Algorithms
20
Rates (Orders) of Convergence
, for all .
(a) Find the limit of the sequence and (b) show that the convergence is
quadratic.
21
Big and Little Notation
Definition: A sequence {x_n} is said to be in O (big Oh) of {y_n} if a
positive number C exists for which
    |x_n| ≤ C |y_n|, for n sufficiently large.
In this case, we say {x_n} is in O(y_n), and denote x_n ∈ O(y_n) or x_n = O(y_n).
22
Definition: Suppose lim_{h→0} F(h) = L. A quantity F(h) is said to be in O (big
Oh) of G(h) if a positive number C exists for which
    |F(h) - L| ≤ C |G(h)|, for h sufficiently small.
In this case, we say F(h) is in O(G(h)), and denote F(h) = L + O(G(h)).
Little oh of (h) can be defined the same way as for sequences.
Example:
.
Note that , for
sufficiently small .
By the way,
.
23
Example: Determine the best integer value of in the following equation
, as .
Ans:
Self study: Let . What are the limit and the rate of
convergence of as ?
24
Example: Let and let . Show that as .
Hint
25
1.3. Review of Linear Algebra
Vectors
Distance:
Dot product: . Thus .
27
Matrices and Two-dimensional Arrays
28
Example: Find the matrix product of
and .
If both A and B are square matrices of the same dimension, then AB and BA are
defined, but they are in general not the same: AB ≠ BA. When it happens that
AB = BA, we say that A and B commute.
29
Definition: determinant (det) of A ∈ R^{n×n}
If n = 1, we define det(A) = a_11.
For n ≥ 2, let M_ij (the minor of a_ij) be the determinant of the
(n-1)×(n-1) submatrix of A obtained by deleting the i-th row and the
j-th column of A. Define the cofactor of a_ij as A_ij = (-1)^{i+j} M_ij. Then the
determinant of A is given by the cofactor expansion
    det(A) = Σ_{j=1}^{n} a_ij A_ij   (along any fixed row i).
Solution:
Then, again using the first row cofactor expansion,
=
= 29
=
=
Thus, = = 77
31
Example: Find the determinant of the following matrices, if it exists.
a.
b.
c.
d.
32
Eigenvalues and Eigenvectors
Ans:
33
Invertible (nonsingular) Matrices
Let A ∈ R^{n×n}.
Definition: The matrix A is invertible if there is an n×n matrix B such that
AB = BA = I. The matrix B is called the inverse of A, and is denoted A^{-1}.
Definition: The transpose of A = [a_ij] is A^T = [a_ji]. The matrix A is symmetric if
A^T = A.
34
Invertible (Nonsingular) Matrix Theorem:
For A ∈ R^{n×n}, the following properties are equivalent:
1. The inverse of A exists, i.e., A is invertible.
2. The determinant of A is nonzero.
3. The rows of A form a basis for R^n.
4. The columns of A form a basis for R^n.
5. As a map from R^n to R^n, A is injective (one-to-one).
6. As a map from R^n to R^n, A is surjective (onto).
7. The equation Ax = 0 implies x = 0.
8. For each b in R^n, there is exactly one x in R^n such that Ax = b.
9. A is a product of elementary matrices.
10. 0 is not an eigenvalue of A.
35
System of Linear Equations
The above algebraic system can be solved by the elementary row operations
applied to the augmented system:
36
Elementary Row Operations:
1. (Replacement) Replace one row by the sum of itself and a multiple of
another row:
2. (Interchange) Interchange two rows:
3. (Scaling) Multiply all entries in a row by a nonzero constant:
Solution:
37
The following row operations (Gauss Elimination) solve the problem:
Forward elimination:
Back substitution:
=
38
Thus the solution =
The last result is in echelon form, and its diagonal entries are called pivots.
39
Remarks:
a. The elementary matrices for replacement row operations commute
= =
b. Their product is formed by collecting the multiplier entries below the main diagonal.
c. Their inverse can be obtained by negating the entries below the main
diagonal.
= = =
c.
d.
40
Example: Find the parabola that passes through (1,2), (2,
4), and (3,8).
Use the Gauss Elimination to solve the algebraic system for the unknowns
.
41
System of Tridiagonal Matrices
42
Solution: (The underlined numbers are pivots.)
43
LU Factorization (Triangular Factorization)
44
Thus, the LU factorization can be carried out step-by-step as in
Solution:
Let
Then,
45
which completes the LU factorization.
Using Maple:
= =
required.
Here, .
46
Example: Use replacement row operations to find the LU factorization.
a.
b.
47
Diagonally Dominant Matrices
Definition: A matrix A = [a_ij] is diagonally dominant if
    |a_ii| ≥ Σ_{j ≠ i} |a_ij|, for all i.
The matrix is strictly diagonally dominant if the above inequalities are strict
for all i.
Solution:
49
(12.1)
50
Norms and Error Analysis
Examples of norms:
Example: Let x = (x_1, x_2, ..., x_n)^T ∈ R^n.
Euclidean 2-norm: ||x||_2 = ( |x_1|^2 + ... + |x_n|^2 )^(1/2)
The 1-norm: ||x||_1 = |x_1| + ... + |x_n|
The ∞-norm: ||x||_∞ = max_{1≤i≤n} |x_i|
It is equivalent to
51
Matrix norms:
1.
2.
3.
4. , where denotes the spectral radius of .
5.
6.
1. Find , , and .
2. Find the -condition number.
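For quick experiments, the norms above and the condition number can be computed with MATLAB built-ins. A minimal sketch; the matrix A below is only an illustrative placeholder, not the matrix from the exercise:
>> A = [4 -1 0; -1 4 -1; 0 -1 4];   % illustrative matrix (an assumption)
>> n1   = norm(A,1);                % maximum absolute column sum
>> ninf = norm(A,inf);              % maximum absolute row sum
>> n2   = norm(A,2);                % spectral norm (largest singular value)
>> nF   = norm(A,'fro');            % Frobenius norm
>> rho  = max(abs(eig(A)));         % spectral radius
>> k2   = cond(A,2);                % 2-norm condition number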
52
Theorem on Neumann Series: If A is an n×n matrix such that ||A|| < 1 for
some subordinate matrix norm, then I - A is invertible and
    (I - A)^{-1} = Σ_{k=0}^{∞} A^k,   with   ||(I - A)^{-1}|| ≤ 1/(1 - ||A||).
53
Homework
1. Review of Calculus, Convergence, and Linear Algebra
#1. Prove that the following equations have at least one solution in the
given intervals.
a.
b.
c.
d.
a.
b.
c.
d.
55
#4. Let a sequence be defined recursively by , where is
and use the Mean Value Theorem and the fact that is continuously
differentiable, to show that the quotient converges to zero.
#6. Suppose that and are square matrices and that is invertible.
Show that each of , and is invertible.
a.
b.
c.
56
d.
a.
b.
, if (triangle inequality)
Hint: For the last condition, you may begin with
.
57
2. Solutions of Equations in One Variable
Objective:
For equations of the form
    f(x) = 0,
find solutions, that is, real numbers p such that f(p) = 0.
In This Chapter:
Topics Applications/Properties
Bisection method
Fixed-point iteration
Newton's method
Secant method Variant of Newton's method
Method of false position
Zeros of polynomials Application of Newton's method
Horner's method Effective evaluation of polynomials
Bairstow's method Quadratic factors
59
2.1. The Bisection (Binary-Search, Interval Halving) Method
Assumptions:
  f is continuous on [a, b].
  f(a) f(b) < 0; by the IVT, there must be a solution in (a, b).
  There is a single solution in [a, b].
Bisection: a pseudocode
60
Example: Find the solution of the equation in .
Solution: Using Maple
(3.1)
Figure: f(x) on [a, b] and the midpoint p of the bracketing interval.
61
(3.2)
62
Bisection: Maple code
>
>
>
k= 1: a= 1.000000 b= 2.000000 p= 1.500000 f(p)= 2.375000
k= 2: a= 1.000000 b= 1.500000 p= 1.250000 f(p)=-1.796875
63
k= 3: a= 1.250000 b= 1.500000 p= 1.375000 f(p)= 0.162109
k= 4: a= 1.250000 b= 1.375000 p= 1.312500 f(p)=-0.848389
k= 5: a= 1.312500 b= 1.375000 p= 1.343750 f(p)=-0.350983
k= 6: a= 1.343750 b= 1.375000 p= 1.359375 f(p)=-0.096409
k= 7: a= 1.359375 b= 1.375000 p= 1.367188 f(p)= 0.032356
p_7 = 1.367187500
dp = +- 0.007812 = (b0-a0)/2^k= 0.007812
f(p) = 0.032355785
= 175/128    (1)
64
Error Analysis:
Theorem: Suppose that f ∈ C[a, b] and f(a) f(b) < 0. Then, the
Bisection method generates a sequence {p_n} approximating a zero p of f with
    |p_n - p| ≤ (b - a)/2^n,  n ≥ 1.
Solution:
We have to find the iteration count n such that the error bound is not larger
than the given tolerance, that is,
    (b - a)/2^n ≤ TOL.
65
Note: is the midpoint of and is the midpoint of either
of . So, . In other
words,
,
Example: Suppose that the bisection method begins with the interval
. How many steps should be taken to compute a root with a relative
error not larger than ?
Solution:
. Thus,
66
Bisection: MATLAB code
M-file: bisect.m
function [c,err,fc]=bisect(f,a,b,TOL)
fa=feval(f,a);
fb=feval(f,b);
if fa*fb > 0,return,end
max1=1+round((log(b-a)-log(TOL))/log(2));
for k=1:max1
c=(a+b)/2;
fc=feval(f,c);
if fc==0
a=c;
b=c;
elseif fb*fc>0
b=c;
fb=fc;
else
a=c;
fa=fc;
end
if b-a < TOL, break,end
end
c=(a+b)/2;
err=abs(b-a);
fc=feval(f,c);
67
You can call the above algorithm with any function, for example:
>> f = @(x) x.^3+4*x.^2-10;
>> [c,err,fc]=bisect(f,1,2,0.005)
c=
1.3652
err =
0.0039
fc =
7.2025e-005
Example: Consider the bisection method applied to find the zero of the
function with . What are ? What are
?
Answer:
68
Example: In the bisection method, does exist?
69
2.2. Fixed-Point Iteration
Definition: A number p is a fixed point for a given function g if g(p) = p.
(1.1)
(1.2)
=
Note:
(1.3)
(1.4)
=
(1.5)
71
Figure: the fixed point, located where the graph of y = g(x) meets the line y = x.
72
Theorem:
If g ∈ C[a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has at least
one fixed point in [a, b].
If, in addition, g is differentiable in (a, b) and there exists a positive
constant k < 1 such that
    |g'(x)| ≤ k   for all x ∈ (a, b),
then there is a unique fixed point in [a, b].
Notes:
73
Proof of the Theorem:
If g(a) = a or g(b) = b, then g has a fixed point at an endpoint. If not, then
g(a) > a and g(b) < b. Define h(x) = g(x) - x. Then, h(a) > 0 and h(b) < 0. Thus, by the
IVT, there is p ∈ (a, b) such that h(p) = 0, which implies that g(p) = p.
In addition, suppose that |g'(x)| ≤ k < 1 for all x ∈ (a, b). Let p and q be
two distinct fixed points; then, by the MVT,
    |p - q| = |g(p) - g(q)| = |g'(ξ)| |p - q| ≤ k |p - q|, for some ξ between p and q.
Thus
    |p - q| < |p - q|,
which is a contradiction. Hence the fixed point is unique.
74
Fixed-Point Iteration
c.
d. *
e. *
f.
The associated (fixed-point) iteration may not converge for some choices of .
75
Evaluation of and FPI
= 27
(4.1)
=3
(4.2)
(4.3)
76
= = 2.121320343
(4.4)
= = 0.1414213562
(4.5)
5
=
14
70
=
121
(4.6)
77
Fixed-Point Theorem:
Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x ∈ [a, b]. Suppose
that g is differentiable in (a, b) and there exists a positive constant k < 1
such that
    |g'(x)| ≤ k   for all x ∈ (a, b).
Then, for any number p0 in [a, b], the sequence defined by p_n = g(p_{n-1}), n ≥ 1,
converges to the unique fixed point p in [a, b].
Proof:
It follows from the previous theorem that there exists a unique fixed point
p ∈ [a, b], i.e., g(p) = p. Since g maps [a, b] into [a, b],
we have p_n ∈ [a, b] for all n ≥ 0 and, by the MVT,
    |p_n - p| = |g(p_{n-1}) - g(p)| = |g'(ξ_n)| |p_{n-1} - p| ≤ k |p_{n-1} - p| ≤ ... ≤ k^n |p_0 - p|,
for some ξ_n between p_{n-1} and p. Therefore,
    |p_n - p| ≤ k^n |p_0 - p| → 0   as n → ∞.
78
Notes:
.
(Here we have used the MVT, for the last inequality.)
Thus,
That is,
the result also holds for a contractive mapping g defined on any
closed subset of R. By a contractive mapping, we mean a function g that
satisfies, for some constant 0 < k < 1,
    |g(x) - g(y)| ≤ k |x - y|   for all x, y in the set.
79
In practice: is not known.
Consider the following:
Thus, we have
,
which is useful as a stopping criterion for the iteration.
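A minimal MATLAB sketch of the fixed-point iteration, in the same style as bisect.m, using the difference |p_{n+1} - p_n| as the stopping criterion. The file name, function g, starting point, and tolerance below are placeholders chosen for illustration:
M-file: fixptiter.m
function [p,iter] = fixptiter(g,p0,TOL,maxit)
% Fixed-point iteration p_{n+1} = g(p_n); stop when |p_{n+1} - p_n| < TOL.
p = p0;
for iter = 1:maxit
    pnew = g(p);
    if abs(pnew - p) < TOL
        p = pnew; return
    end
    p = pnew;
end
For example, for x^3 + 4x^2 - 10 = 0 one convergent reformulation is g(x) = sqrt(10 - x^3)/2, which can be called as
>> [p,iter] = fixptiter(@(x) sqrt(10-x.^3)/2, 1.5, 1e-8, 100)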
80
Example: For each of the following equations, determine an interval [a, b]
on which the fixed-point iteration will converge. Estimate the number of
iterations necessary to obtain approximations accurate to within the given tolerance.
a.
b.
c.
d.
Solution:
Figure: plots of y = g1(x) and y = g2(x) together with the line y = x.
1
=
3
5
=
4
81
Figure: plots of y = g3(x) and y = g4(x) together with the line y = x.
82
Example: Prove that the sequence defined recursively as follows is
convergent.
Solution
Begin with setting , then show is a contractive mapping
on
83
2.3. Newton's (Newton-Raphson) Method
and Its Variants
Then,
85
Graphical Interpretation:
Consider the tangent line passing through (p_n, f(p_n)):
    y = f(p_n) + f'(p_n)(x - p_n).
Let y = 0. Then, x = p_n - f(p_n)/f'(p_n), which is taken as p_{n+1}.
Figure: f(x), the root p, and successive tangent lines.
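A minimal MATLAB sketch of Newton's iteration in the style of bisect.m; the derivative df must be supplied by the user, and the file name and inputs below are placeholders:
M-file: newton.m
function [p,iter] = newton(f,df,p0,TOL,maxit)
% Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n).
p = p0;
for iter = 1:maxit
    pnew = p - f(p)/df(p);
    if abs(pnew - p) < TOL
        p = pnew; return
    end
    p = pnew;
end
Example call, for f(x) = x^3 + 4x^2 - 10:
>> f = @(x) x.^3+4*x.^2-10;  df = @(x) 3*x.^2+8*x;
>> [p,iter] = newton(f,df,1.5,1e-10,50)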
Example of Nonconvergence:
86
Figure: 3 iterations of Newton's method applied to f(x) = arctan(x), with
initial point p0 = π/2; the tangent-line iterates move away from the root.
Notes:
87
Convergence Analysis:
Let . Then,
.
Thus,
88
Example:
(4.1)
Since , and
,
which is an occasional super-convergence.
Example: Use Newton's method to find the square root of a positive number A.
Solution:
Let x = sqrt(A). Then x is a root of f(x) = x^2 - A = 0.
Set f(x) = x^2 - A and f'(x) = 2x.
Then Newton's method reads
    x_{n+1} = x_n - (x_n^2 - A)/(2 x_n) = (x_n + A/x_n)/2.
89
(6.1)
(6.2)
90
Implicit Functions
91
x y F(x,y)
0.000000 1.000000 0
0.100000 0.997760 -9.434e-10
0.200000 0.991250 1.3577e-09
0.300000 0.980657 7.977e-10
0.400000 0.966019 -6.568e-10
0.500000 0.947227 3.970e-10
0.600000 0.924004 1.519e-10
0.700000 0.895854 2.77e-11
0.800000 0.861955 -1.774e-10
0.900000 0.820939 -5.029e-10
1.000000 0.770398 -2.217e-10
92
Systems of Nonlinear Equations:
Newton's method for systems of nonlinear equations follows the same strategy
that was used for a single equation. That is, we linearize, solve for
corrections, and update the solution, repeating these steps as often as
necessary. For an illustration, we begin with a pair of equations involving two
variables:
where
93
In general, the system of nonlinear equations,
    f_i(x_1, x_2, ..., x_n) = 0,   i = 1, ..., n,
can be expressed as
    F(x) = 0,
where x = (x_1, ..., x_n)^T and F = (f_1, ..., f_n)^T. Then Newton's method reads
    x^(k+1) = x^(k) - J(x^(k))^{-1} F(x^(k)),
where J(x) is the Jacobian of F at x:
94
Example: Starting with , carry out 6 iterations of Newton's method
to find a root of the nonlinear system
Solution:
95
n      x1          x2          x3        corrections
1 2.18932610 1.59847516 1.39390063 1.19 0.598 0.394
2 1.85058965 1.44425142 1.27822400 -0.339 -0.154 -0.116
3 1.78016120 1.42443598 1.23929244 -0.0704 -0.0198 -0.0389
4 1.77767471 1.42396093 1.23747382 -0.00249 -0.000475 -0.00182
5 1.77767192 1.42396060 1.23747112 -2.79e-006 -3.28e-007 -2.7e-006
6 1.77767192 1.42396060 1.23747112 -3.14e-012 -4.22e-014 -4.41e-012
96
The Secant Method
Notes:
Two initial values must be given.
It requires only one new evaluation of f per step.
The graphical interpretation of the secant method is similar to that of
Newton's method.
Convergence: superlinear, of order (1 + sqrt(5))/2 = 1.618033988...
Graphical interpretation:
97
Figure: 3 iterations of the secant method applied to f(x) = x^3 - 1,
with initial points a = 1.5 and b = 0.5.
(9.1.1)
Here, p_{n+1} is the x-intercept of the secant line joining (p_{n-1}, f(p_{n-1}))
and (p_n, f(p_n)).
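A minimal MATLAB sketch of the secant iteration; note that only one new evaluation of f is made per step. The file name and inputs are placeholders:
M-file: secant.m
function [p,iter] = secant(f,p0,p1,TOL,maxit)
% Secant method: p_{n+1} = p_n - f(p_n)(p_n - p_{n-1})/(f(p_n) - f(p_{n-1})).
q0 = f(p0);  q1 = f(p1);
for iter = 1:maxit
    p = p1 - q1*(p1 - p0)/(q1 - q0);
    if abs(p - p1) < TOL, return, end
    p0 = p1;  q0 = q1;           % shift the two most recent points
    p1 = p;   q1 = f(p);         % one new function evaluation
end
Example call, matching the figure above:
>> [p,iter] = secant(@(x) x.^3-1, 1.5, 0.5, 1e-10, 50)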
98
The Method of False Position:
Graphical interpretation:
99
Figure: 3 iterations of the method of false position applied to f(x) = x^3 - 1,
with initial points a = 1.5 and b = 0.5.
(10.1)
Here, the root is bracketed in all iterations.
100
Comparison: Convergence Speed
(11.1)
(11.2)
(11.3)
101
n Newton Secant False Position
0 0.7853981635 0.5000000000 0.5000000000
1 0.7395361335 0.7853981635 0.7363841388
2 0.7390851781 0.7363841388 0.7390581392
3 0.7390851332 0.7390581392 0.7390848638
4 0.7390851332 0.7390851493 0.7390851305
5 0.7390851332 0.7390851332 0.7390851332
6 0.7390851332 0.7390851332 0.7390851332
7 0.7390851332 0.7390851332 0.7390851332
8 0.7390851332 0.7390851332 0.7390851332
102
2.4. Zeros of Polynomials
A polynomial of degree n has the form
    P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,   a_n ≠ 0.
Theorem on Polynomials
Fundamental Theorem of Algebra: Every nonconstant polynomial has
at least one root (possibly, in the complex field).
Complex Roots of Polynomials: A polynomial of degree n has exactly n
roots in the complex plane, it being agreed that each root shall be
counted a number of times equal to its multiplicity. That is, there are
unique (complex) constants z_1, ..., z_k and unique positive integers m_1, ..., m_k,
with m_1 + ... + m_k = n, such that
Localization of Roots: All roots of the polynomial lie in the open disk
centered at the origin with radius
    ρ = 1 + (1/|a_n|) max_{0 ≤ k < n} |a_k|.
103
Horner's Method
104
         a_n       a_{n-1}        a_{n-2}       ...      a_0
x_0 |              x_0*b_n        x_0*b_{n-1}   ...      x_0*b_1
         b_n       b_{n-1}        b_{n-2}       ...      P(x_0) = b_0
Solution
We arrange the calculation as mentioned above.
       1    -4     7    -5    -2
 3 |         3    -3    12    21
       1    -1     4     7    19 = P(3)
Thus, , and
105
Example: Evaluate for considered in the previous example.
Solution
As in the previous example, we arrange the calculation and carry out the
synthetic division one more time:
       1    -4     7    -5    -2
 3 |         3    -3    12    21
       1    -1     4     7    19 = P(3)
 3 |         3     6    30
       1     2    10    37 = Q(3) = P'(3)
Thus, .
106
= P(3)=19, P'(3)=37
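The same synthetic division can be coded in a few lines of MATLAB. A minimal sketch (the file name is a placeholder; coefficients are listed from the highest degree):
M-file: hornereval.m
function [p,dp] = hornereval(a,x0)
% Horner's method: evaluate P(x0) and P'(x0) by synthetic division.
% a = [a_n a_{n-1} ... a_0] are the coefficients, highest degree first.
p  = a(1);  dp = 0;
for k = 2:length(a)
    dp = dp*x0 + p;      % accumulates Q(x0) = P'(x0)
    p  = p*x0 + a(k);    % accumulates P(x0)
end
For the example above:
>> [p,dp] = hornereval([1 -4 7 -5 -2], 3)    % returns p = 19, dp = 37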
107
Complex Zeros: Finding Quadratic Factors
Quadratic Factors of Real-coefficient Polynomials:
Let .
Theorem on Real Quadratic Factor: If P is a polynomial whose
coefficients are all real, and if z is a nonreal root of P, then its conjugate z̄ is also a
root and (x - z)(x - z̄) is a real quadratic factor of P.
Polynomial Factorization: If is a nonconstant polynomial of real
coefficients, then it can be factorized as a multiple of linear and quadratic
polynomials of which coefficients are all real.
Theorem on Quotient and Remainder: If the polynomial is divided
by the quadratic polynomial , then the quotient and
remainder
108
Bairstow's Method
Note that and must be functions of ( , which is clear from the last
theorem.
109
Now, the question is how to compute the Jacobian matrix.
As first appeared in the appendix of the 1920 book "Applied Aerodynamics"
by Leonard Bairstow, we consider the partial derivatives
Note that these recurrence relations generate the same two sequences;
we need only the first. The Jacobian explicitly reads
and therefore
110
Bairstow's algorithm:
111
1 2.2000000 -2.7000000 -0.8 1.3
2 2.2727075 -3.9509822 0.07271 -1.251
3 2.2720737 -3.6475280 -0.0006338 0.3035
4 2.2756100 -3.6274260 0.003536 0.0201
5 2.2756822 -3.6273651 7.215e-05 6.090e-05
6 2.2756822 -3.6273651 6.316e-09 -9.138e-09
7 2.2756822 -3.6273651 -1.083e-17 -5.260e-17
Q(x) = (1)x^2 + (-1.72432)x^1 + (-0.551364)
Remainder: -2.66446e-18 (x - (2.27568)) + (-2.47514e-16)
Quadratic Factor: x^2 - (2.27568)x - (-3.62737)
Zeros: 1.137841102 +- (1.527312251) i
112
Deflation
The accuracy difficulty with deflation is due to the fact that, when we obtain
the approximate zeros of P(x), Newton's method is applied to successively reduced
(deflated) polynomials. An approximate zero obtained this way will generally not
approximate a root of the original P(x) as well as it approximates a root of the reduced
polynomial, and the inaccuracy increases as the deflation proceeds. One way to overcome this
difficulty is to (a) use the method of reduced equations to find approximate
zeros and then (b) improve these zeros by applying Newton's method to the
original polynomial P(x).
113
Homework
2. Solutions of Equations in One Variable
#1. Let the bisection method be applied to a continuous function, resulting in intervals
. Let and . Which of these
statements can be false?
a.
b.
c.
d.
e. as
#2. Modify the provided Matlab code for the bisection method to incorporate
115
#3. Let us try to find the value by using fixed-point iterations. Use the fact that the result must
be the positive solution of to solve the following:
a. Introduce three different fixed-point forms of which at least one is convergent.
b. Rank the associated iterations based on their apparent speed of convergence for
.
c. Perform three iterations, if possible, on each of the iterations with , and measure
.
#5. Consider a variation of Newton's Method in which only one derivative is needed; that
is,
#6. Starting with , carry out two iterations of Newton's method on the system:
116
#7. Consider the polynomial
117
3. Curve Fitting:
Interpolation and Approximation
In This Chapter:
Topics Applications/Properties
Polynomial interpolation The first step toward
approximation theory
Newton form
Lagrange form Basis functions for various
applications including FEM
Chebyshev polynomial Optimized interpolation
Divided differences
Neville's method Evaluation of interpolating
polynomials
Hermite interpolation Requires and
FEM for 4th-order PDEs
Spline interpolation Less oscillatory interpolation
B-splines
Parametric curves Curves in the plane or space
Rational interpolation Interpolation of rough data
with minimum oscillation
Research project
119
3.1. Polynomial Interpolation
Each continuous function can be approximated (arbitrarily closely) by a
polynomial, and polynomials of degree at most n interpolating the same values at n + 1 distinct
points are all the same polynomial, as shown in the following theorems.
Example:
120
Figure: f(x) and the interpolating polynomials p0, p2, p4, and p6 on [0, 3].
Proof:
(Uniqueness). Suppose there were two such polynomials, p and q. Then p - q
would have the property (p - q)(x_i) = 0 for i = 0, 1, ..., n. Since the
degree of p - q is at most n, the polynomial p - q can have at most n zeros
unless it is the zero polynomial. Since the x_i are distinct, p - q has n + 1 zeros
and therefore it must be 0. Hence, p = q.
(Existence). For the existence part, we proceed inductively through
construction. For n = 0, the existence is obvious since we may choose the
constant function
    p_0(x) = y_0.
Now suppose that we have obtained a polynomial p_{k-1} of degree at most k - 1
with
    p_{k-1}(x_i) = y_i,   for i = 0, 1, ..., k - 1.
We try to construct p_k in the form
122
Newton Form of the Interpolating Polynomials
(1)
1 (5.1)
123
(5.2)
(5.3)
(5.4)
(5.5)
124
Figure: f(x) and the interpolating polynomials p0, p1, p2, p3, and p4 on [0, 2].
(6.1)
125
Thus the algorithm for the evaluation of can be written as
126
Now we can write an algorithm for computing the coefficient in Equation
(1):
A more efficient procedure exists that achieves the same result. The
alternative method uses divided differences to compute the coefficients . The
method will be presented later.
127
Example: For
(2)
=
# Since , the coefficients are
#
(8.1)
# which is the same as the one in (2).
=
128
Figure: data points, the Newton-form interpolating polynomial, and the given function.
(8.2)
129
Example: Find the Newton's form of the interpolating polynomial of the data
Answer:
130
Lagrange Form of the Interpolating Polynomials
    p_n(x) = Σ_{k=0}^{n} f(x_k) L_{n,k}(x),
where the L_{n,k} are polynomials that depend on the nodes x_0, ..., x_n, but not
on the ordinates f(x_0), ..., f(x_n).
On the other hand, the polynomial interpolating the data must satisfy
p_n(x_i) = f(x_i) for every node; it follows that L_{n,k}(x_i) = δ_{ki}, where δ_{ki} is the Kronecker delta, which is 1 if k = i and 0 if
k ≠ i. Thus all the basis polynomials must satisfy
    L_{n,k}(x_i) = δ_{ki}   for all i and k.
Polynomials satisfying such a property are known as the cardinal
functions.
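A minimal MATLAB sketch that evaluates the Lagrange form directly from the cardinal functions; the file name is a placeholder, x and y are the data, and t may be a vector of evaluation points:
M-file: lagrangeval.m
function p = lagrangeval(x,y,t)
% Evaluate the Lagrange interpolating polynomial for data (x,y) at points t.
n = length(x);
p = zeros(size(t));
for k = 1:n
    Lk = ones(size(t));                    % cardinal function L_{n,k}(t)
    for j = [1:k-1, k+1:n]
        Lk = Lk .* (t - x(j))/(x(k) - x(j));
    end
    p = p + y(k)*Lk;
end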
131
and
Hence, we have
Solution:
132
Example: Determine the Lagrange interpolating polynomial that passes
through and .
Answer: .
Maple plot:
Figure: data points and the Lagrange interpolating polynomial.
133
Example: Use to find the second Lagrange interpolating
polynomial for . Use to approximate .
Solution:
Maple:
(11.1)
134
(11.2)
(11.3)
Figure: f(x) and the interpolating polynomial p2(x).
= 0.3500000000
135
Example: For the previous example, determine the error bound in .
Solution
(13.1)
3
=
8
(13.2)
Thus,
= 0.1320382370
= 0.06922606316
Thus,
0.00008090517158 (14.1)
136
Interpolation Error for Equally Spaced Nodes:
where
Start by picking an . We can assume that is not one of the nodes, because
otherwise the product in question is zero. Let , for some . Then
we have
Thus
137
(3)
138
Chebyshev Polynomials
The Chebyshev polynomials (of the first kind) are defined recursively as
follows:
(17.1)
(17.2)
(17.3)
(17.4)
(17.5)
139
Figure: the Chebyshev polynomials T0, T1, T2, T3, and T4 on [-1, 1].
(4)
(5)
140
Theorem on Interpolation Error, Chebyshev Nodes:
If the nodes are the roots of the Chebyshev polynomial T_{n+1}, as in (5),
then the error bound for the nth-degree interpolating polynomial reads
    |f(x) - p_n(x)| ≤ (1/(2^n (n+1)!)) max_{|t| ≤ 1} |f^(n+1)(t)|.
0.00003652217816 (19.1)
It is an optimal upper bound of the error and smaller than the one in
Equation (14.1).
141
Accuracy comparison between Uniform nodes and Chebyshev nodes.
(20.1)
142
Figure: comparison of polynomial interpolation with uniform nodes and with Chebyshev nodes.
143
3.2. Divided Differences
It turns out that the coefficients for the interpolating polynomials in
Newton's form can be calculated relatively easily by using divided differences.
and therefore
(1.2)
Now, since
,
it follows from the above and (1.1) and (1.2) that
145
We know that for n + 1 distinct real numbers (nodes), x_0, x_1, ..., x_n, there is a
unique polynomial p_n of degree at most n that interpolates f at the nodes.
Definition:
The zeroth divided difference of the function f with respect to x_i, denoted
f[x_i], is the value of f at x_i:
    f[x_i] = f(x_i).
The remaining divided differences are defined recursively; the first divided
difference of f with respect to x_i and x_{i+1} is defined as
    f[x_i, x_{i+1}] = (f[x_{i+1}] - f[x_i]) / (x_{i+1} - x_i).
In general,
    f[x_i, ..., x_{i+k}] = (f[x_{i+1}, ..., x_{i+k}] - f[x_i, ..., x_{i+k-1}]) / (x_{i+k} - x_i).
146
DD1 ( DD2 DD3
Step 2: Return
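A minimal MATLAB sketch of the divided-difference table, returning the Newton-form coefficients c(k+1) = f[x_0, ..., x_k]; the file name is a placeholder and the table is overwritten in place:
M-file: divdif.m
function c = divdif(x,y)
% Newton-form coefficients via divided differences for the data (x,y).
n = length(x);
c = y(:);                        % zeroth divided differences
for j = 2:n
    for i = n:-1:j
        c(i) = (c(i) - c(i-1))/(x(i) - x(i-j+1));
    end
end
The Newton form can then be evaluated at a point t by nested multiplication:
p = c(n); for k = n-1:-1:1, p = p.*(t - x(k)) + c(k); end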
147
Example:
In detail:
Thus,
148
Example: Determine the Newton interpolating polynomial for the data:
149
Properties of Divided Differences:
150
exists a point such that
.
The theorem follows from the comparison of above two equations.
151
3.3. Data Approximation and Neville's Method
We have studied how to construct interpolating polynomials. A
frequent use of these polynomials involves the interpolation of
tabulated data. In this case, an explicit representation of the
polynomial might not be needed, only the values of the
polynomial at specified points. In this situation the function
underlying the data might be unknown so the explicit form of
the error cannot be used to assure the accuracy of the
interpolation. Neville's Method provides an adaptive mechanism
for the evaluation of accurate interpolating values.
153
Example: Suppose that and
Thus
154
Theorem: Let be defined at . Then, for each
,
155
For simplicity in computation, we may try to avoid multiple
subscripts by defining the new variable
156
Example: Let . Use
Neville's method to approximate in a four-digit
accuracy.
Solution:
(2.1)
(2.2)
(2.3)
Note:
=
0.0000724766
=
0.0000050845
Thus is already in a four-digit
accuracy.
157
Neville's Iterated Interpolation:
Input: the nodes ; the evaluation point ; the
tolerance ; and values saved in the first
column of .
Output:
Step 1: For
For
If
}
Step 2: Return
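A compact MATLAB sketch of Neville's iterated interpolation; Q(i,j) follows the recurrence of the theorem above, and the last computed entry Q(n,n) is the final approximation (the file name is a placeholder):
M-file: neville.m
function Q = neville(x,y,t)
% Neville's method: Q(i,j) interpolates on x(i-j+1),...,x(i), evaluated at t.
n = length(x);
Q = zeros(n,n);
Q(:,1) = y(:);
for j = 2:n
    for i = j:n
        Q(i,j) = ((t - x(i-j+1))*Q(i,j-1) - (t - x(i))*Q(i-1,j-1)) ...
                 / (x(i) - x(i-j+1));
    end
end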
158
Example: Neville's method is used to approximate , giving
the following table.
Determine .
159
3.4. Hermite Interpolation
The Hermite interpolation refers to the interpolation of a function and some of
its derivatives at a set of nodes. When a distinction is being made between this
type of interpolation and its simpler type (in which no derivatives are
interpolated), the latter is often called Lagrange interpolation.
Since there are four conditions, it seems reasonable to look for a solution in
, the space of all polynomials of degree at most 3. Rather than writing
in terms of , let us write it as
,
because this will simplify the work. This leads to
161
Hermite Interpolation Theorem:
If and are distinct, then the unique
polynomial of least degree agreeing with and at is the
Hermite polynomial of degree at most given by
,
where
162
Construction of Hermite Polynomials:
Define a new sequence by
with
being replaced by .
as usual
163
Example: Use the extended Newton divided difference method to obtain a
cubic polynomial that takes these values:
164
3.5. Spline Interpolation
Runge's phenomenon:
165
Figure: Runge's phenomenon; f(x) and the interpolating polynomials P7 and P10.
166
Spline Interpolation
Linear Splines:
167
Solution: The linear spline can be easily computed as
Figure: the linear spline L(x) on [0, 1].
168
First-Degree Spline Accuracy
To find the error bound, we will consider the error on a single subinterval of
the partition, and apply a little calculus. Let be the linear polynomial
interpolating at the endpoints of . Then,
where
169
Quadratic (Second Degree) Splines:
170
Computing quadratic splines:
we have, for ,
which implies
Thus we have
171
Example: Find the quadratic spline for
Figure: the quadratic spline Q(x) and the linear spline L(x) on [0, 1].
172
Cubic Splines:
173
Construction of Cubic Splines:
(1)
where
.
If (1) is integrated twice, the result reads
(2)
174
Thus the result is
(3)
Equation (3) is easily verified; simply let x = x_i and x = x_{i+1} to see that the
interpolation conditions are fulfilled. Once the values of the moments have
been determined, Equation (3) can be used to evaluate the spline on each subinterval.
(4)
When the right sides of the last two equations are set equal to each other, the
result can be written as
175
(5)
for .
Note:
There are equations in (5), while we must determine unknowns,
.
There are two popular approaches for the choice of the two additional
conditions.
Natural Cubic Spline
where
176
and
Since the matrix is strictly diagonally dominant, the system can be solved
by Gaussian elimination without pivoting.
Equation (5) together with the above two equations clearly makes n + 1 conditions
for the n + 1 unknowns. It is a good exercise to compose an
algebraic system for the computation of clamped cubic splines.
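For concreteness, a minimal MATLAB sketch of the natural cubic spline computation. It uses the common a, b, c, d coefficient convention (which may differ from the moment notation above) and solves the strictly diagonally dominant tridiagonal system mentioned earlier; the file name is a placeholder:
M-file: natspline.m
function [b,c,d] = natspline(x,a)
% Natural cubic spline: on [x(i), x(i+1)],
%   S_i(t) = a(i) + b(i)(t-x(i)) + c(i)(t-x(i))^2 + d(i)(t-x(i))^3,
% with S''(x(1)) = S''(x(n)) = 0; a holds the data values a(i) = f(x(i)).
n = length(x);  h = diff(x(:));  a = a(:);
A = zeros(n);  r = zeros(n,1);
A(1,1) = 1;  A(n,n) = 1;                     % natural boundary conditions
for i = 2:n-1
    A(i,i-1) = h(i-1);
    A(i,i)   = 2*(h(i-1) + h(i));
    A(i,i+1) = h(i);
    r(i) = 3*((a(i+1)-a(i))/h(i) - (a(i)-a(i-1))/h(i-1));
end
c = A\r;                                     % quadratic coefficients at the nodes
b = (a(2:n)-a(1:n-1))./h - h.*(2*c(1:n-1)+c(2:n))/3;
d = (c(2:n)-c(1:n-1))./(3*h);
c = c(1:n-1);                                % keep one coefficient per subinterval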
177
Example: Find the natural cubic spline for
Figure: the cubic spline S(x), the quadratic spline Q(x), and the linear spline L(x) on [0, 1].
178
Example: Find a natural cubic spline that interpolates the data
Figure: the cubic spline S(x), the quadratic spline Q(x), and the linear spline L(x) for the data on [0, 3].
179
Optimality Theorem for Natural Cubic Splines
We now present a theorem to the effect that the natural cubic spline produces
the smoothest interpolating function. The word smooth is given a technical
meaning in the theorem.
180
3.6. Parametric Curves
Consider the data of the form:
0 1
Then none of the interpolation methods we have learned so far can be used to
generate an interpolating curve for this data, because the curve cannot be
expressed as a function of one coordinate variable in terms of the other. In this section
we will see how to represent general curves by using a parameter t to express
both the x- and y-coordinate variables.
181
Example: Construct a pair of interpolating polynomials, as a function of , for
the data:
Solution
0 1
182
Applications in Computer Graphics:
Required: Rapid generation of smooth curves that can be quickly and
easily modified.
Preferred: Change of one portion of a curve should have little or no
effect on other portions of the curve.
piecewise cubic Hermite
polynomial.
Note:
For data , the piecewise cubic
Hermite polynomial can be generated independently in each portion
. (Why?)
183
Piecewise cubic Hermite polynomial for General Curve Fitting:
Let us focus on the first portion of the piecewise cubic Hermite polynomial
interpolating between
and For the first portion, the given data are
Only six conditions are specified, while the cubic polynomials and
each have four parameters, for a total of eight. This provides flexibility in
choosing the pair of cubic polynomials to specify the conditions. Notice that
the natural form for determining and requires that we specify
and . Since
the slopes at the endpoints can be expressed using the so-called guidepoints
which are to be chosen from the desired tangent line:
guidepoint for
guidepoint for
Thus,
184
The cubic Hermite polynomial , y(t)) on [0,1]:
The unique cubic Hermite polynomial satisfying
can be computed as
is
185
Example: Determine the parametric curve when
Solution:
is
t (5.1)
The cubic Hermite polynomial on that satisfies
(5.2)
(5.3)
186
(5.4)
Figure: the resulting parametric curve (x(t), y(t)) for t in [0, 1].
187
Homework
3. Curve Fitting: Interpolation and Approximation
#2. Use the Polynomial Interpolation Error Theorem to find an error bound for the
approximations in Problem #1.
By adding one additional term to , find a polynomial that interpolates the whole table.
Determine .
189
#6. Use the extended Newton divided difference method to obtain a quintic polynomial
that takes these values:
#7. Compose an algebraic system, of the form , explicitly for the computation of
clamped cubic splines.
190
#10. Let be the unit circle of radius 1: . Find a piecewise cubic parametric
curve that interpolates the circle at
0
1
Now, you should find parametric curves for the other two portions.
191
4. Numerical Differentiation and Integration
In This Chapter:
Topics Applications/Properties
Numerical Differentiation
Three-point rules
Five-point rules
Richardson extrapolation Combination of low-order differences
Numerical Integration
193
4.1. Numerical Differentiation
Note:
Differentiating gives
Thus
194
Definition: For ,
Solution:
= 1.331
3.310000000 (2.1)
= 1.157625
3.152500000 (2.2)
= 1.076890625
3.075625000 (2.3)
The error is (approximately) halved each time h is halved; the approximation is first-order accurate.
195
In general:
Let be distinct points in some interval and
. Then
Hence,
196
Three-Point Formulas ( : For convenience, let
Recall:
197
Summary: Numerical Differentiation, the -point formula
1.
2.
We may derive these formulas by using Taylor expansions; the same expansions
can be used to derive the first-derivative difference formulas.
Derivation:
198
Example: Use the second-derivative midpoint formula to approximate
for using ,0.05.
Solution
14.40000000 (5.1)
14.10000000 (5.2)
14.02500000 (5.3)
14 (5.4)
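Computations like the one above are easy to reproduce. A minimal MATLAB sketch of the midpoint (central) difference formulas, where f, x0, and the step sizes are placeholders rather than the data of this example:
% central_diff.m -- central difference approximations (f and x0 are placeholders)
f  = @(x) exp(x);   x0 = 1;
for h = [0.1 0.05 0.025]
    d1 = (f(x0+h) - f(x0-h))/(2*h);            % first derivative,  O(h^2) error
    d2 = (f(x0+h) - 2*f(x0) + f(x0-h))/h^2;    % second derivative, O(h^2) error
    fprintf('h = %.3f   d1 = %.9f   d2 = %.9f\n', h, d1, d2);
end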
199
4.2. Richardson's Extrapolation
Richardson's extrapolation is used to generate high-accuracy difference results
while using low-order formulas.
Note that in this infinite series, the error series is evaluated at the same point,
.
Derivation using the Taylor's Series Expansion:
201
The last equation can be written as
Thus, we have
Then, similarly,
202
The above idea can be applied recursively. The complete algorithm, allowing
for steps of Richardson extrapolation algorithm, is given next:
1. Select a convenient and compute
203
Example: Let . Use the Richardson extrapolation to estimate
using
Solution
1.013662770
1.003353478 (2.2)
1.000834586 (2.3)
0.9999170470
0.9999949557 (2.5)
1.000000150 (2.6)
Error:
= 0.0000829530
= 0.0000050443
The error: =
= 1.013662770
= 1.003353478 = 0.9999170470
204
Using the Taylor's Series Expansion, we can reach at
(3.1)
(3.2)
(3.3)
(3.4)
(3.5)
(3.6)
Error:
= 0.0001385003
= 0.0000084127
The Ratio:
= 16.46324010
205
The error: =
= =
= = =
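A minimal MATLAB sketch of the same procedure, building the Richardson extrapolation table D(i,j) from central differences; the function f, point x0, and initial h below are placeholders, not the data of the example above:
% richardson.m -- Richardson extrapolation for f'(x0) from central differences
f  = @(x) exp(x);   x0 = 0;   h = 0.4;   m = 4;    % placeholders
D = zeros(m,m);
for i = 1:m
    D(i,1) = (f(x0+h) - f(x0-h))/(2*h);            % O(h^2) central difference
    for j = 2:i
        D(i,j) = D(i,j-1) + (D(i,j-1) - D(i-1,j-1))/(4^(j-1) - 1);
    end
    h = h/2;
end
D(m,m)                                             % highest-order extrapolated value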
206
4.3. Numerical Integration
Numerical integration can be performed by
(1) approximating the function by an nth-degree polynomial, and
(2) integrating the polynomial over the prescribed interval.
What a simple task it is!
In this way, we obtain a formula that can be used on any . It reads as follows:
(1)
where
The formula of the form in (1) is called a Newton-Cotes formula if the nodes
are equally spaced.
207
The Trapezoid Rule:
The simplest case results if n = 1 and the nodes are x_0 = a and x_1 = b. In this
case,
Since
and
208
Graphical interpretation:
Figure: an animated approximation of ∫_0^1 f(x) dx using the trapezoid rule,
where f(x) = x^3 + 2 + sin(2πx) and the partition is uniform. The approximate
value of the integral is 2.500000000 with 1 partition.
209
Composite Trapezoid Rule:
If the interval is partitioned as a = x_0 < x_1 < ... < x_n = b,
then the trapezoid rule can be applied to each subinterval. Here the nodes are
not necessarily uniformly spaced. Thus, we obtain the composite trapezoid
rule:
    ∫_a^b f(x) dx ≈ Σ_{i=0}^{n-1} ((x_{i+1} - x_i)/2) [f(x_i) + f(x_{i+1})].
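For uniformly spaced nodes, the composite rule is only a few lines of MATLAB; a minimal sketch in the style of bisect.m (the file name is a placeholder, and f must accept vector arguments):
M-file: ctrap.m
function T = ctrap(f,a,b,n)
% Composite trapezoid rule with n equal subintervals of [a,b].
h = (b - a)/n;
x = a + (0:n)*h;
y = f(x);
T = h*( (y(1) + y(n+1))/2 + sum(y(2:n)) );
Example call:
>> T = ctrap(@(x) sin(x), 0, pi, 4)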
210
Example:
211
Simpson's Rule
which is reduced to
212
Graphical interpretation:
213
Error analysis for the elementary Simpson's rule:
The error for the Simpson's rule (Simpson) can be computed from
Thus
214
However, by Mean Value Theorem,
Thus
215
Composite Simpson's Rule:
Let n be even, h = (b - a)/n, and x_i = a + i h. Applying Simpson's rule on each
pair of subintervals [x_{2i}, x_{2i+2}] then gives
    ∫_a^b f(x) dx ≈ (h/3) [ f(x_0) + 4 Σ_{i odd} f(x_i) + 2 Σ_{i even, 0<i<n} f(x_i) + f(x_n) ].
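A matching MATLAB sketch of the composite Simpson's rule (n must be even; the file name is a placeholder, and f must accept vector arguments):
M-file: csimp.m
function S = csimp(f,a,b,n)
% Composite Simpson's rule with n (even) equal subintervals of [a,b].
h = (b - a)/n;
x = a + (0:n)*h;
y = f(x);
S = (h/3)*( y(1) + 4*sum(y(2:2:n)) + 2*sum(y(3:2:n-1)) + y(n+1) );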
216
Simpson's Three-Eighths Rule
When three equal subintervals are combined, the resulting integration formula
is called Simpson's three-eighths rule:
Derive the error term for the composite Simpson's three-eighths rule.
Solution:
217
Self Study: Consider
218
4.4. Romberg Integration
In the previous section, we found that the Composite Trapezoid rule has
a truncation error of order O(h^2). Specifically, we showed that for
and
we have
(1)
219
It is clear that if is to be computed, then we can take advantage of the
work already done in the computation of . For example, from the
preceding example, we see that
or
(2)
(3)
where
220
Romberg Algorithm:
221
Example: Use the Composite Trapezoid rule to find approximations to
the results.
Solution:
# Trapezoid estimates
#----------------------------------
R(1,1) = 0
R(2,1) = 1.570796327   R(2,2) = 2.094395103
R(3,1) = 1.896118898   R(3,2) = 2.004559755   R(3,3) = 1.998570731
R(4,1) = 1.974231602   R(4,2) = 2.000269171   R(4,3) = 1.999983131   R(4,4) = 2.000005551
The exact value of the integral is 2.
#
#-----------------------------
= 20.70179275
#
#-----------------------------
= 84.72754757 # this is 64 in theory.
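A minimal MATLAB sketch of the Romberg algorithm; the file name is a placeholder and f must accept vector arguments. Called as romberg(@(x) sin(x), 0, pi, 4) it produces a tableau like the one above (assuming that example is ∫_0^π sin x dx = 2):
M-file: romberg.m
function R = romberg(f,a,b,m)
% Romberg integration: R(k,1) is the trapezoid value on 2^(k-1) subintervals,
% R(k,j) are the extrapolated values; R(m,m) is the most accurate entry.
R = zeros(m,m);
h = b - a;
R(1,1) = h/2*(f(a) + f(b));
for k = 2:m
    h = h/2;
    R(k,1) = R(k-1,1)/2 + h*sum(f(a + (1:2:2^(k-1)-1)*h));   % reuse previous work
    for j = 2:k
        R(k,j) = R(k,j-1) + (R(k,j-1) - R(k-1,j-1))/(4^(j-1) - 1);
    end
end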
223
4.5. Gaussian Quadrature
In the preceding section, we saw how to create quadrature formulas of the type
(1)
that are exact for polynomials of degree , which is the case if and only if
225
Example (Method of Undetermined Coefficients): Find with
which the following formula is exact for all polynomials of degree .
Solution:
By using as trial functions the polynomials in order, we get
which will produce exact values of integrals for any quadratic polynomial,
.
It must be noticed that the above formula is the elementary Simpson's rule
with .
226
Gaussian quadrature chooses the points for evaluation in an optimal, rather
than equally-spaced, way. The nodes in the interval and
the weights are chosen to minimize the expected error obtained
in the approximation
To measure this accuracy, we assume that the best choice of these values
produces the exact result for the largest class of polynomials, that is, the
choice that gives the greatest degree of precision.
227
Example: Determine and so that the integration formula
Solution:
As in the previous example, we may apply the method of undetermined
coefficients. By using as trial functions the polynomials
in order, we get
A little algebra shows that this system of equations has the unique solution
(2.1)
This formula has degree of precision 3, that is, it produces the exact result
for every polynomial in .
The method of undetermined coefficients can be used to determine the
nodes and weights for formulas that give exact results for higher-order
polynomials, but an alternative method obtains them more easily. The
alternative is related to the Legendre orthogonal polynomials.
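As a quick check, the two-point Gauss-Legendre rule on the reference interval [-1, 1] (nodes ±1/sqrt(3), weights 1 — assuming that is the interval used in the example above) integrates every cubic exactly; a minimal MATLAB sketch with a hypothetical test integrand:
% Two-point Gauss-Legendre rule on [-1,1]: nodes +-1/sqrt(3), weights 1
xg = [-1 1]/sqrt(3);   wg = [1 1];
f  = @(t) t.^3 + t.^2;                 % any cubic is integrated exactly
approx = sum(wg.*f(xg))                % = 2/3
exact  = integral(f, -1, 1)            % = 2/3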
228
Legendre Polynomials:
The Legendre polynomials obey the three term recurrence relation known as
(2)
229
Theorem (Gauss Integration):
Suppose that are the roots of the th Legendre polynomial
and obtained by
Then,
Note: Once the nodes are determined, the weights can also
be found by using the method of undetermined coefficients, that is, the
weights are the solution of the linear system
230
x
(5.1)
231
k=1
k=2
k=3
232
k=4
k=5
233
(5.2)
234
Theorem (Gauss-Lobatto Integration):
Let and and are the roots of the first-
derivative of the th Legendre polynomial, . Let be
obtained by
Then,
Note: Once the nodes are determined, the weights can also
be found by using the method of undetermined coefficients, as for Gauss
Integration; the weights are the solution of the linear system
235
Gaussian Quadrature on Arbitrary Intervals:
236
Example: Find the Gaussian Quadrature for . Choose .
Solution:
(7.1)
2 (7.2)
237
Homework:
4. Numerical Differentiation and Integration
#1. Use the most accurate three-point formulas to determine the missing entries.
#2. Use your results in the above table to approximate and with -
accuracy. Make a conclusion by comparing all results (obtained here and from Problem
1) with the exact values:
Explain how Richardson extrapolation will work in this case. (Try to introduce a formula
described as in (Table 1).)
239
#5. A car laps a race track in 65 seconds. The speed of the car at each 5 second interval is
determined by using a radar gun and is given from the beginning of the lap, in
feet/second, by the entries in the following table:
Time
Speed
#6. Use the Composite Trapezoid rule to find approximations and then perform Romberg
extrapolation on the results to find .
(a) (b)
240
5. Numerical Solution
of Ordinary Differential Equations
In This Chapter:
Topics Applications/Properties
Elementary Theory of IVPs Existence and uniqueness of
solution
Taylor-series methods
Euler's Method
Higher-Order Taylor Methods
Runge-Kutta (RK) Methods
Second-order RK (Heun's method) Modified Euler's method
Fourth-order RK
Runge-Kutta-Fehlberg method Variable step-size
(adaptive method)
Multistep Methods
Adams-Bashforth-Moulton method
Higher-Order Equations &
Systems of Differential Equations
241
5.1. Elementary Theory of Initial-Value Problems
Our model is a first-order initial-value problem (IVP) written in the
form
(IVP)
242
Example: Prove that the initial-value problem
Solution:
243
Example: Show that each of the initial-value problems has a unique solution
and find the solution.
a.
b.
Solution:
(Existence and uniqueness):
(2.1)
(2.2)
(2.3)
244
5.2. Taylor-Series Methods
Here we rewrite the initial-value problem (IVP):
(IVP)
245
Euler's Method
which is an approximation of y(x_{n+1}).
Summarizing the above, Euler's method for solving the first-order IVP is
formulated as
    y_{n+1} = y_n + h f(x_n, y_n),   n = 0, 1, 2, ...;   y_0 = y(x_0).    (3)
(3)
246
Notes:
The computed quantity is an approximation of .
In each subinterval, the method involves a local truncation error
247
Example: Consider
Figure: Euler approximations for the example, shown in three panels over 0 ≤ x ≤ 3 (vertical axis 0 to 8).
248
Example: Use Euler's method to solve
, with
Solution:
249
=
0.39169859 (2.1)
Note: You may solve the above problem by using a built-in command. For
example, after defining the ODE, odesys, apply the command
dsolve( , numeric, method=foreuler)
250
Higher-Order Taylor Methods
Thus we have
where
251
Notes:
: The Euler's Method
Example:
Consider the initial-value problem:
.5.
(a) Find .
(b) Perform two iterations to find , with .
Solution: (a).
= =
= =
Thus,
(4.1)
(b).
252
= 155/96
= 16217/4608
3.519314236 (4.2)
3.486013602 (4.3)
The absolute error= = 0.033300634
253
5.3. Runge-Kutta Methods
The Taylor-series method of the preceding section has the drawback of
requiring the computation of derivatives of f(x, y). This is a complicated and
time-consuming procedure in most cases, which is why the Taylor methods are
seldom used in practice.
255
Second-order Runge-Kutta Method (RK2)
Formulation:
(1)
where
Derivation: For the left-hand side of (1), the Taylor series reads
(2)
Thus we obtain
(3)
The comparison of Equations (2) and (3) gives the following result for the
second-order Runge-Kutta methods.
256
The result:
(4)
Choices:
RK2, which is also known as
Heun's method
Modified Euler method
(1.1)
where
257
Fourth-order Runge-Kutta method (RK4)
Formulation:
(5)
where
The Choice: The most commonly used set of parameter values yields
(6)
where
> f := proc(x,w)
w-x^3+x+1
end proc:
>
> for n from 0 by 1 to nt do
maxerr:=max(maxerr,abs(exacty(n*h)-yRK4[n]));
end do:
259
= 0.00000184873274
max_error Error_Ratio
= 15.73378840
= 15.83783784
= 18.31683168
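For reference, a MATLAB counterpart of the Maple RK4 code above; it implements the classical choice (6) and can be run on the same model problem f(x,w) = w - x^3 + x + 1 (the file name and calling data are placeholders):
M-file: rk4.m
function y = rk4(f,x0,xt,nt,y0)
% Classical fourth-order Runge-Kutta method for y' = f(x,y), y(x0) = y0.
h = (xt - x0)/nt;
y = zeros(nt+1,1);   y(1) = y0;
x = x0;
for n = 1:nt
    k1 = h*f(x,       y(n));
    k2 = h*f(x + h/2, y(n) + k1/2);
    k3 = h*f(x + h/2, y(n) + k2/2);
    k4 = h*f(x + h,   y(n) + k3);
    y(n+1) = y(n) + (k1 + 2*k2 + 2*k3 + k4)/6;
    x = x + h;
end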
260
Adaptive Methods
There may be subintervals where a relatively large step size suffices and
other subintervals where a small step is necessary to keep the truncation
error within a desired limit.
An adaptive method is a numerical method which uses a variable step size.
Example: Runge-Kutta-Fehlberg method (RKF45) which uses RK5 to
estimate local truncation error of RK4.
261
=
= 8.13831077273522
= 0.0000184489183565617
8
7
6
5
4
3
2
1
0 1 2 3
x
262
5.4. Multistep Methods
Numerical Methods:
Single-step/Starting methods: Euler's method, Modified
Euler's, Runge-Kutta methods
Multi-step/Continuing methods: Adams-Bashforth-
Moulton
263
Fourth-order multistep methods:
Let .
where
264
## Maple code: Adams-Bashforth-Moulton (ABM) Method
## Model Problem:
> f := proc(x,w)
w-x^3+x+1
end proc:
>
>
> ABM(x0,xt,nt,y0,yABM):
>
> for n from 0 by 1 to nt do
maxerr:=max(maxerr,abs(exacty(n*h)-yABM[n]));
end do:
= 0.00005294884316
266
5.5. High-Order Equations &
Systems of Differential Equations
The problem: 2nd-order initial value problem (IVP)
Let
.
Then,
.
That is, the above IVP can be equivalently written as the following system of
first-order DEs:
267
Example: Write the following DEs as a system of first-order differential
equations.
(a) .
(b)
Solution:
(Hint: For (b), you should first rewrite it as and introduce
and .)
268
The -th order system of first-order IVPs (IVP_m):
Step 1: Set /
For , set
OUT
Step 2: for , do
For , set
For , set
For , set
For , set
For , set
Set
OUT
Step 3: Stop
269
## RK4SYS
##------------------------------
## Ex) IVP of 2 equations:
## x' = 2x+4y, x(0)= -1
## y' = -x+6y, y(0)= 6, 0<= t <= 1
> ef := proc(t,w,f)
f(1):=2*w(1)+4*w(2);
f(2):=-w(1)+6*w(2);
end proc:
>
>
>
>
> for n from 0 by 2 to nt do
if n=0 then
printf(" \t n x(n) y(n) error(x) error(y)\n");
printf(" \t -----------------------------------------------------\n");
end if;
printf(" \t %5d %10.3f %10.3f %-10.3g %-10.3g\n",
n, xRK4[n,1], xRK4[n,2], abs(xRK4[n,1]-ex(n*h)), abs(xRK4[n,
2]-ey(n*h)) );
end do;
271
  n      x(n)       y(n)      error(x)   error(y)
 -----------------------------------------------------
  0     -1.000      6.000     0          0
  2      0.366      8.122     6.04e-006  4.24e-006
  4      2.387     10.890     1.54e-005  1.07e-005
  6      5.284     14.486     2.92e-005  2.01e-005
  8      9.347     19.140     4.94e-005  3.35e-005
 10     14.950     25.144     7.81e-005  5.26e-005
 12     22.577     32.869     0.000118   7.91e-005
 14     32.847     42.782     0.000174   0.000115
 16     46.558     55.474     0.000251   0.000165
 18     64.731     71.688     0.000356   0.000232
 20     88.668     92.363     0.000498   0.000323
 22    120.032    118.678     0.000689   0.000443
 24    160.937    152.119     0.000944   0.000604
 26    214.072    194.550     0.00128    0.000817
 28    282.846    248.313     0.00174    0.0011
 30    371.580    316.346     0.00233    0.00147
 32    485.741    402.332     0.00312    0.00195
 34    632.238    510.885     0.00414    0.00258
 36    819.795    647.785     0.00549    0.0034
 38   1059.411    820.262     0.00725    0.00447
 40   1364.944   1037.359     0.00954    0.00586
272
## RK4SYSTEM
##------------------------------
## Ex)
273
(2)
(3)
274
n y_n y(x_n) y'_n y'(x_n) err(y) err(y')
0 -0.40000000 -0.40000000 -0.60000000 -0.60000000 0 0
1 -0.46173334 -0.46173297 -0.63163124 -0.63163105 3.72e-07 1.91e-07
2 -0.52555988 -0.52555905 -0.64014895 -0.64014866 8.36e-07 2.84e-07
3 -0.58860144 -0.58860005 -0.61366381 -0.61366361 1.39e-06 1.99e-07
4 -0.64661231 -0.64661028 -0.53658203 -0.53658220 2.02e-06 1.68e-07
5 -0.69356666 -0.69356395 -0.38873810 -0.38873905 2.71e-06 9.58e-07
6 -0.72115190 -0.72114849 -0.14438087 -0.14438322 3.41e-06 2.35e-06
7 -0.71815295 -0.71814890 0.22899702 0.22899243 4.06e-06 4.59e-06
8 -0.66971133 -0.66970677 0.77199180 0.77198383 4.55e-06 7.97e-06
9 -0.55644290 -0.55643814 1.53478148 1.53476862 4.77e-06 1.29e-05
10 -0.35339886 -0.35339436 2.57876634 2.57874662 4.50e-06 1.97e-05
275
Homework:
5. Numerical Solution of Ordinary Differential Equations
#1. Show that the initial-value problem
has a unique solution in the interval . Can you find the solution, by
guessing?
You do not have to implement any code for Problems 2 and 3 below. You may
solve them by using your cute calculator and math formulas.
277
#4. Apply the codes presented in this lecture note for solving the problem as in
the preceding exercise by
a. RK4
b. Adams-Bashforth-Moulton method
Use and compare the accuracy. If you need me to send you the Maple
codes, please let me know.
278
6. Direct Methods for Solving Linear Systems
In This Chapter:
Topics Applications/Properties
Gaussian Elimination Three elementary row operations
Replacement
Interchange
Scaling
with partial pivoting
with scaled partial pivoting
Matrix Factorization
factorization
Symmetric Positive Definite
Matrices
Cholesky factorization
factorization
279
6.1. Gaussian Elimination
(1)
(2)
(3)
(4)
280
Gaussian Elimination:
281
Forward Elimination:
R_2 <---- R_2 - (7) R_1
Backward Substitution:
R_2 <---- R_2 - (-8) R_3
282
R_2 <---- (0.0833333) R_2
(2.1)
283
Gaussian Elimination with Partial Pivoting:
284
Forward Elimination:
R_1 <---> R_2
285
R_3 <---- R_3 - (0.25) R_2
Backward Substitution:
R_2 <---- R_2 - (1.14286) R_3
286
R_1 <---- (0.142857) R_1
(3.1)
287
Gaussian Elimination with Scaled Partial Pivoting:
(Partial pivoting applied to the scaled system, in which every row has max-norm 1.)
288
289
Forward Elimination:
R_2 <---- R_2 - (7) R_1
Backward Substitution:
R_2 <---- R_2 - (-8) R_3
290
R_1 <---- R_1 - (-1) R_2
(4.1)
291
Example: Apply the three different Gaussian elimination methods to
(5)
Solution:
Forward Elimination:
R_2 <---- R_2 - (0.176367) R_1
Backward Substitution:
R_2 <---- (-9.58687e-06) R_2
(6)
Forward Elimination:
R_2 <---- R_2 - (0.176367) R_1
292
Backward Substitution:
R_2 <---- (-9.58687e-06) R_2
(7)
Forward Elimination:
R_1 <---> R_2
Backward Substitution:
R_2 <---- (1.69080e-06) R_2
293
R_1 <---- R_1 - (-6.13) R_2
(8)
294
Example: Solve the following system of linear equations
Solution:
Forward Elimination:
Error, (in GaussElimination) numeric exception: division by zero
Forward Elimination:
R_1 <---> R_3
295
R_2 <---- R_2 - (0.5) R_1
Backward Substitution:
R_3 <---- (-0.685714) R_3
296
R_1 <---- R_1 - (1) R_3
(5.1)
Forward Elimination:
R_1 <---> R_2
297
R_2 <---- R_2 - (0) R_1
Backward Substitution:
R_3 <---- (0.342857) R_3
298
R_2 <---- (0.0833333) R_2
(5.2)
299
Example: Count the number of operations required for the Gaussian
Elimination.
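Before turning to factorizations, a minimal MATLAB sketch of the whole procedure of this section: forward elimination with partial pivoting followed by backward substitution (the file name is a placeholder):
M-file: gausselim.m
function x = gausselim(A,b)
% Gaussian elimination with partial pivoting, then backward substitution.
n  = length(b);
Ab = [A, b(:)];                          % augmented matrix
for k = 1:n-1
    [~,p] = max(abs(Ab(k:n,k)));  p = p + k - 1;
    Ab([k p],:) = Ab([p k],:);           % interchange (partial pivoting)
    for i = k+1:n
        m = Ab(i,k)/Ab(k,k);
        Ab(i,k:end) = Ab(i,k:end) - m*Ab(k,k:end);   % replacement
    end
end
x = zeros(n,1);
for i = n:-1:1
    x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end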
300
6.2. LU Factorization
Definition: A nonsingular matrix A has an LU factorization if it can be
expressed as the product of a lower-triangular matrix L and an upper-triangular
matrix U:
    A = LU.
In matrix form, this is written as
302
LU Factorization:
The LU factorization can be carried out by the Gaussian Elimination
procedure. Define and
where
303
which is called the th Gaussian transformation matrix. Let be its
inverse:
Since
we have
304
Theorem: If Gaussian elimination can be performed on the linear system Ax = b
without row interchanges, then the matrix A can be factorized into the
product of a lower-triangular matrix L and an upper-triangular matrix U, that
is, A = LU, where L collects the multipliers below its unit diagonal and U is
the upper-triangular matrix produced by the forward elimination.
305
Permutation Matrices:
Example:
The matrix
Then
306
We have seen that for a nonsingular matrix the linear system can be
solved by Gaussian elimination, with the possibility of row interchanges.
If we knew the row interchanges that were required to solve the system by
Gaussian elimination, we could rearrange the original equations so that no
further row interchanges are needed during Gaussian elimination. Hence there
is a rearrangement of the equations in the system that permits Gaussian
elimination to proceed without row interchanges. This implies that for any
nonsingular matrix , a permutation matrix exists for which the system
(1)
can be solved without row interchanges. Once the matrix is LU-factorized,
i.e.,
307
Example: Determine a factorization in the form for the matrix
Solution:
308
R_2 <---> R_4
(3.1)
309
Thus,
310
R_3 <---- R_3 - (0) R_2
(3.2)
311
Thus
(3.3)
and therefore
(3.4)
= = =
312
Homework:
6. Direct Methods for Solving Linear Systems
#1. Use Gaussian Elimination with partial pivoting to solve the linear system
where
#2. Let
313