
GEM 803

Numerical Methods

Chapter 1
Systems of Linear Algebraic Equations

Consider the system of linear algebraic equations of the form

(1)    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
       a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
       ..................................................
       an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn

where xj (j = 1, 2, …, n) denotes the unknown variables, aij (i, j =
1, 2, …, n) denotes the constant coefficients of the unknown
variables, and bi (i = 1, 2, …, n) denotes the nonhomogeneous
terms.
The four solution possibilities are:
1. a unique solution
2. no solution
3. an infinite number of solutions
4. trivial solution for a homogeneous set of equations.
Two different approaches for solving systems of linear algebraic
equations:
1. Direct elimination methods
2. Iterative methods

Direct elimination methods are systematic procedures based on
algebraic elimination, which obtain the solution in a fixed number of
operations.
Examples: Gaussian elimination
Gauss-Jordan elimination
Matrix inversion method
LU Factorization
Iterative methods obtain the solution asymptotically by an iterative
procedure. A trial solution is assumed, the trial solution is substituted
into the system of equations to determine the mismatch, or error, in the
trial solution, and an improved solution is obtained from the mismatch
data.
Examples:
Jacobi iteration
Gauss-Seidel iteration
Review of Matrices
Matrix Definitions
A matrix is a rectangular array of elements (either numbers or
symbols), which are arranged in orderly rows and columns. Each
element of the matrix is distinct and separate.
Example:
            [ a11  a12  ...  ...  a1m ]
A = [aij] = [ a21  a22  ...  ...  a2m ]   ,  an n x m matrix
            [  .    .   ...  ...   .  ]
            [ an1  an2  ...  ...  anm ]

(i = 1, 2, …, n; j = 1, 2, …, m)
Special Types of Matrices
Vectors are a special type of matrix which has only one column or
one row. A column vector is an n x 1 matrix. Thus,

           [ x1 ]
x = [xi] = [ x2 ]     (i = 1, 2, …, n)
           [ ...]
           [ xn ]

A row vector is a 1 x n matrix. For example,

y = [yj] = [ y1  y2  ...  yn ]     (j = 1, 2, …, n)

A square matrix S is a matrix which has the same number of rows
and columns, that is, m = n. For example,

[ a11  a12  ...  a1n ]
[ a21  a22  ...  a2n ]
[ ...  ...  ...  ... ]
[ an1  an2  ...  ann ]

is a square n x n matrix.
A diagonal matrix D is a square matrix with all elements equal
to zero except the elements on the major diagonal. For
example,
[ a11   0    0    0  ]
[  0   a22   0    0  ]
[  0    0   a33   0  ]
[  0    0    0   a44 ]

is a 4 x 4 diagonal matrix.
The identity matrix I is a diagonal matrix with unity diagonal
elements. The matrix
1 0 0 0
0 1 0 0 
 
0 0 1 0 
 
 0 0 0 1 
is the 4 x 4 identity matrix.
A triangular matrix is a square matrix in which all of the elements
on one side of the major diagonal are zero. The remaining
elements may be zero or nonzero. An upper triangular matrix U
has all zero elements below the major diagonal. The matrix
[ a11  a12  a13  a14 ]
[  0   a22  a23  a24 ]
[  0    0   a33  a34 ]
[  0    0    0   a44 ]
is a 4 x 4 upper triangular matrix. A lower triangular matrix L has all
zero elements above the major diagonal. The matrix
[ a11   0    0    0  ]
[ a21  a22   0    0  ]
[ a31  a32  a33   0  ]
[ a41  a42  a43  a44 ]

is a 4 x 4 lower triangular matrix.


A tridiagonal matrix T is a square matrix in which all of the
elements not on the major diagonal or on the two diagonals
adjacent to the major diagonal are zero. The elements on these
three diagonals may or may not be zero. The matrix

[ a11  a12   0    0    0  ]
[ a21  a22  a23   0    0  ]
[  0   a32  a33  a34   0  ]
[  0    0   a43  a44  a45 ]
[  0    0    0   a54  a55 ]

is a 5 x 5 tridiagonal matrix.
A banded matrix B has all zero elements except along particular
diagonals. For example,

[ a11  a12   0   a14   0  ]
[ a21  a22  a23   0   a25 ]
[  0   a32  a33  a34   0  ]
[ a41   0   a43  a44  a45 ]
[  0   a52   0   a54  a55 ]

is a 5 x 5 banded matrix.
The transpose of an m x n matrix A is the n x m matrix AT, which
has elements aTij = aji.

The transpose of a column vector is a row vector, and vice versa.

Symmetric square matrices have identical corresponding elements
on either side of the major diagonal. That is, aij = aji. In that
case, A = AT.

A sparse matrix is one in which most of the elements are zero.
Diagonally Dominant Matrix
A matrix is diagonally dominant if the absolute value of each
element on the major diagonal is equal to, or larger than, the sum
of the absolute values of all the other elements in that row, with the
diagonal element being larger than the corresponding sum of the
other elements for at least one row. Thus, diagonal dominance is
defined as
          n
|aii| ≥   Σ   |aij|     (i = 1, 2, …, n)
        j=1, j≠i
with > true for at least one row.
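As a sketch, the test above can be coded directly from the definition (plain Python; the function name is my own choice):

```python
def is_diagonally_dominant(A):
    """Check |a_ii| >= sum of |a_ij| for j != i in every row,
    with strict inequality (>) holding in at least one row."""
    strict = False
    for i, row in enumerate(A):
        off_diag = sum(abs(a) for j, a in enumerate(row) if j != i)
        if abs(row[i]) < off_diag:
            return False
        if abs(row[i]) > off_diag:
            strict = True
    return strict

# The Jacobi example system later in this chapter is diagonally dominant:
A = [[8, -1, 4, 3],
     [3, -5, 1, -1],
     [4, -1, 8, 2],
     [4, 3, -2, -10]]
print(is_diagonally_dominant(A))  # True
```

Rows 1 and 2 of this example satisfy the condition with equality (8 = 1 + 4 + 3 and 5 = 3 + 1 + 1), while rows 3 and 4 satisfy it strictly, so the definition holds.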
Matrix Algebra
Matrix addition and subtraction consist of adding or subtracting
the corresponding elements of two matrices of equal size. Let A
and B be two matrices of equal size. Then,
A + B = [aij] + [bij] = [aij+bij] = [cij] = C
A – B = [aij] – [bij] = [aij – bij] = [cij] = C
Unequal size matrices cannot be added or subtracted. Matrices of
the same size are associative on addition. Thus,
A + (B + C) = (A + B) + C
Matrices of the same size are commutative on addition. Thus,
A + B = B + A
Example:
Find A + B and A – B given that

 2 −1 3 1  − 3 1 − 4 3 
 3 − 2 0 − 5  4 − 5 3 − 6
A=   B= 
 4 1 1 − 3  5 4 −2 4 
   
− 2 5 6 4   7 − 2 5 − 2
Matrix multiplication consists of row-element to column-element
multiplication and summation of the resulting products. Multiplication
of the two matrices A and B is defined only when the number of
columns of matrix A is the same as the number of rows of matrix B.
Matrices that satisfy this condition are called conformable in the order
AB. Thus, if the size of matrix A is n x m and the size of matrix B is m x r,
then
AB = [aij][bij] = [cij] = C
where
        m
 cij =  Σ  aik bkj     (i = 1, 2, …, n;  j = 1, 2, …, r)
       k=1
The size of matrix C is n x r. Matrices that are not conformable cannot
be multiplied.
Example: Find AB given that

1 1 3 2 3 5 
A = 5 3 1 B = 3 1 − 2
2 3 1 1 3 4 
Multiplication of the matrix A by the scalar α consists of
multiplying each element of A by α.
Thus,
αA = α[aij] = [α aij] = [bij] = B

Example: Evaluate (A + 3B)T using the matrices given in the
previous example.
Matrices that are suitably conformable are associative on
multiplication. Thus,
A(BC) = (AB)C
Square matrices are conformable in either order. Thus, if A and B are n
x n matrices,
AB = C and BA = D
where C and D are n x n matrices. However, square matrices in
general are not commutative on multiplication. That is, in general,
AB ≠ BA
Matrices A, B, and C are distributive if B and C are the same size and
A is conformable to B and C. Thus,
A(B + C) = AB + AC
Consider the two square matrices A and B. If AB = I, then B is the
inverse of A, which is denoted as A-1. Matrix inverses commute on
multiplication. Thus,
AA-1 = A-1A = I
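A short numerical illustration that both products reproduce the identity matrix (NumPy's `linalg.inv`; the 2 x 2 matrix is a made-up example):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
A_inv = np.linalg.inv(A)

# Both AA^-1 and A^-1 A should equal I, up to floating-point round-off.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```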

Matrix factorization refers to the representation of a matrix as the
product of two other matrices. For example, a known matrix A can
be represented as the product of two unknown matrices B and C.
Thus,
A = BC
Factorization is not a unique process. There are, in general, an infinite
number of matrices B and C whose product is A.
A particularly useful factorization for square matrices is
A = LU
where L and U are lower and upper triangular matrices,
respectively.
Systems of linear algebraic equations, such as (1), can be
expressed very compactly in matrix notation. Thus, (1) can be
written as the matrix equation
(2) Ax = b
where

    [ a11  a12  ...  a1n ]        [ x1 ]        [ b1 ]
A = [ a21  a22  ...  a2n ]    x = [ x2 ]    b = [ b2 ]
    [ ...  ...  ...  ... ]        [ ...]        [ ...]
    [ an1  an2  ...  ann ]        [ xn ]        [ bn ]
LU Factorization Method
Consider the linear system Ax = b. Let A be factored into the
product LU. The linear system becomes
(3) LUx = b
Let y = Ux, then
(4) Ly = b
Methods for Finding L and U
a. Doolittle method (assumes lii = 1)
b. Crout’s method (assumes uii = 1)
c. Cholesky’s method (lij = uji )

Stages in LU Factorization
1. Solve for L and U
2. Use Ly = b to solve for y.
3. Use Ux = y to solve for x.
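The three stages above can be sketched as follows, assuming Doolittle's convention (lii = 1) and no pivoting; the function names are my own:

```python
def lu_doolittle(A):
    """Stage 1: factor A = LU with unit diagonal on L (Doolittle, no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    # Stage 2: forward substitution, L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Stage 3: back substitution, U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Typical usage is `L, U = lu_doolittle(A)` followed by `x = solve_lu(L, U, b)`; once L and U are known, new right-hand sides b can be solved by repeating stages 2 and 3 only.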
Example 1. Use Doolittle’s LU factorization method to solve the
system
-2x1 + 3x2 +  x3 -  x4 = 8
 3x1 + 4x2 - 5x3 + 2x4 = 2
  x1 - 2x2 +  x3 + 4x4 = 0
  x1 + 7x2 - 4x3 - 6x4 = 3
Example 2. Solve the system given in the previous problem using
Crout’s LU factorization method.
System Condition
A well-conditioned system is one in which a small change in any
of the elements of the system causes only a small change in the
solution of the system.
An ill-conditioned system is one in which a small change in any
of the elements of the system causes a large change in the
solution of the problem.
Illustration: Solve the system

x1 + x2 = 2
x1 + 1.0001 x2 = 2.0001

Suppose a22 is changed slightly from 1.0001 to 0.9999:

x1 + x2 = 2
x1 + 0.9999 x2 = 2.0001

Consider another slightly modified form, in which b2 is changed
slightly from 2.0001 to 2:

x1 + x2 = 2
x1 + 1.0001 x2 = 2
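The sensitivity can be verified numerically; solving the three systems (here with NumPy) shows how tiny perturbations move the solution away from x1 = x2 = 1:

```python
import numpy as np

# Original system: the solution is x1 = x2 = 1.
A  = np.array([[1.0, 1.0], [1.0, 1.0001]])
b  = np.array([2.0, 2.0001])
print(np.linalg.solve(A, b))    # approximately [1, 1]

# a22 perturbed from 1.0001 to 0.9999: the solution jumps to (3, -1).
A2 = np.array([[1.0, 1.0], [1.0, 0.9999]])
print(np.linalg.solve(A2, b))   # approximately [3, -1]

# b2 perturbed from 2.0001 to 2: the solution jumps to (2, 0).
b2 = np.array([2.0, 2.0])
print(np.linalg.solve(A, b2))   # approximately [2, 0]
```

A change of one part in ten thousand in a single coefficient changes the solution by order one, which is the hallmark of an ill-conditioned system.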
Iterative Methods
Iterative methods are used when the number of equations is large
and most of the coefficients are zero (sparse matrix).
Iterative methods generally diverge unless the system of equations is
diagonally dominant.
A. The Jacobi Iteration method
Consider the general system of linear algebraic equations,
Ax = b, written in index notation
       n
(4)    Σ  aij xj = bi     (i = 1, 2, …, n)
      j=1
Each equation of the system is solved for the component of the solution
vector associated with the diagonal element, that is, xi. Thus,
(5)   xi = (1/aii) [ bi - Σ_{j=1..i-1} aij xj - Σ_{j=i+1..n} aij xj ]     (i = 1, 2, …, n)
An initial solution vector x(0) is chosen. The superscript in parentheses
denotes the iteration number, with zero denoting the initial solution
vector. The initial solution vector x(0) is substituted into (5) to yield the first
improved solution vector x(1). Thus,
(6)   xi(1) = (1/aii) [ bi - Σ_{j=1..i-1} aij xj(0) - Σ_{j=i+1..n} aij xj(0) ]     (i = 1, 2, …, n)
This procedure is repeated until some convergence criterion is satisfied.
The Jacobi algorithm for the general iteration step (k) is:

(7)   xi(k+1) = (1/aii) [ bi - Σ_{j=1..i-1} aij xj(k) - Σ_{j=i+1..n} aij xj(k) ]     (i = 1, 2, …, n)
Or equivalently,

(8)   xi(k+1) = xi(k) + (1/aii) [ bi - Σ_{j=1..n} aij xj(k) ]     (i = 1, 2, …, n)

Equation (8) is generally written in the form

(9)   xi(k+1) = xi(k) + Ri(k)/aii     (i = 1, 2, …, n)

(10)  Ri(k) = bi - Σ_{j=1..n} aij xj(k)     (i = 1, 2, …, n)

where the term Ri(k) is called the residual of the equation.

Example: Solve

8x1 -  x2 + 4x3 +  3x4 = 21
3x1 - 5x2 +  x3 -   x4 = 9
4x1 -  x2 + 8x3 +  2x4 = 3
4x1 + 3x2 - 2x3 - 10x4 = -10

by Jacobi iteration. Let x(0)T = [0.0  0.0  0.0  0.0].
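A minimal sketch of Jacobi iteration per equation (7), applied to this example (plain Python; the tolerance, iteration cap, and function name are my own choices):

```python
def jacobi(A, b, x0, tol=1e-8, max_iter=200):
    """Jacobi iteration: every x_i^(k+1) uses only values from sweep k."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        # All components updated from the *previous* iterate x.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) <= tol:
            return x_new
        x = x_new
    return x

A = [[8, -1, 4, 3], [3, -5, 1, -1], [4, -1, 8, 2], [4, 3, -2, -10]]
b = [21, 9, 3, -10]
x = jacobi(A, b, [0.0, 0.0, 0.0, 0.0])
# Report the largest residual b_i - (Ax)_i of the converged iterate:
print(max(abs(b[i] - sum(A[i][j] * x[j] for j in range(4))) for i in range(4)))
```

The system is diagonally dominant, so the iteration converges from the zero starting vector.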


Accuracy and Convergence
In iterative methods, the term accuracy refers to the number of
significant figures obtained in the calculations, and the term
convergence refers to the point in the iterative process when
the desired accuracy is obtained.
The accuracy of any approximate method is measured in
terms of the error of the method. There are two ways to specify
error: absolute error and relative error.
The absolute error is defined as
Absolute error = approximate value – exact value
and relative error is defined as
relative error = absolute error / exact value
Relative error can be stated directly or as a percentage.
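For instance, if an iterate gives x ≈ 2.4999 where the exact value is 2.5 (made-up numbers for illustration):

```python
approx, exact = 2.4999, 2.5

absolute_error = approx - exact            # approximate value - exact value
relative_error = absolute_error / exact    # absolute error / exact value

print(absolute_error)         # approximately -0.0001
print(relative_error * 100)   # approximately -0.004 (percent)
```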
Convergence of an iterative procedure is achieved when the
desired accuracy criterion is satisfied. Convergence criteria can
be specified in terms of absolute error or relative error.
Let  be the magnitude of the convergence tolerance. Several
convergence criteria are possible. For an absolute error criteria,
the following choices are possible:
1/ 2
n
 n
2
(xi )max   x i  or  (xi )  
i =1  i =1 
For a relative error criterion, the following choices are possible:

(x )
1/ 2
n xi  n  x 
2

i max
   or   i
  
xi i =1 xi  i =1  xi  
Gauss-Seidel Iteration Method
This method is similar to the Jacobi method, except that the most
recently calculated values of all xi are used in all computations.
Thus,

(11)  xi(k+1) = (1/aii) [ bi - Σ_{j=1..i-1} aij xj(k+1) - Σ_{j=i+1..n} aij xj(k) ]     (i = 1, 2, …, n)

or,

(12)  xi(k+1) = xi(k) + Ri(k)/aii     (i = 1, 2, …, n)

(13)  Ri(k) = bi - Σ_{j=1..i-1} aij xj(k+1) - Σ_{j=i..n} aij xj(k)
Example: Solve

8x1 -  x2 + 4x3 +  3x4 = 21
3x1 - 5x2 +  x3 -   x4 = 9
4x1 -  x2 + 8x3 +  2x4 = 3
4x1 + 3x2 - 2x3 - 10x4 = -10

by Gauss-Seidel iteration. Let x(0)T = [0.0  0.0  0.0  0.0].
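A minimal sketch per equations (11)-(13): the loop is identical to the Jacobi version except that x is updated in place, so the most recently calculated values are used immediately (tolerance and iteration cap are my own choices):

```python
def gauss_seidel(A, b, x0, tol=1e-8, max_iter=200):
    """Gauss-Seidel: x is updated in place, so x[j] for j < i already
    holds the (k+1)-level value when x[i] is computed."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            new_xi = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            max_change = max(max_change, abs(new_xi - x[i]))
            x[i] = new_xi          # immediate update distinguishes this from Jacobi
        if max_change <= tol:
            return x
    return x

A = [[8, -1, 4, 3], [3, -5, 1, -1], [4, -1, 8, 2], [4, 3, -2, -10]]
b = [21, 9, 3, -10]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0, 0.0]))
```

Because updated values are used as soon as they are available, Gauss-Seidel typically converges in fewer iterations than Jacobi on the same diagonally dominant system.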
