ref. 2.1
2.1.1 Robotic Arm
Consider the problem of a two-axis planar robotic arm (Fig. 2.1.1). A linear algebraic system that relates the induced torques to the end-of-arm force F can be obtained (see eq. 2.1.1). We want to find the unknown force F from the given torques.
2.1.2 Converter Circuit
Consider the problem of finding the voltages V and currents I in an electrical circuit (see Fig. 2.1.2). Applying Kirchhoff's voltage law, one can obtain the system of linear algebraic equations that describes the relation V = IR (R = resistance) (see eq. 2.1.3).
2.1.3 Objective
- Know when a system of equations is difficult to solve accurately, by using the condition number of the coefficient matrix.
- Be able to efficiently solve very large sparse systems of equations using iterative methods, including the Jacobi, Gauss-Seidel, and successive relaxation methods.
2.2 Gauss-Jordan Elimination
ref. 2.2
$$Ax = b, \qquad A \in \mathbb{R}^{n \times n}, \quad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
Using matrix multiplication, the kth equation in the linear algebraic system can be expressed as follows
$$\sum_{j=1}^{n} A_{kj} x_j = b_k, \qquad 1 \le k \le n \tag{2.1}$$
Let $A^{-1}$ denote the inverse of A. There are more efficient methods that find x without first finding $A^{-1}$. The basic idea is to reformulate the equations, using elementary operations, so as to make the solution transparent. The three elementary row operations are listed below.
Operation   Description                         Matrix
1           Interchange rows k and j            E1(k, j)
2           Multiply row k by λ (λ ≠ 0)         E2(k, λ)
3           Add λ times row j to row k          E3(k, j, λ)
Premultiplication (multiplication on the left, BA); postmultiplication (multiplication on the right, AB).
Augmented matrix: C = [A, b] is the n × (n + 1) augmented matrix for the linear system Ax = b (A is an n × n matrix).
Pivot element: the element of largest magnitude in a row or column of a matrix.
Algorithm of Gauss-Jordan Elimination
1. Set E = 1 and form the augmented n × (n + 1) matrix C = [A, b].
2. For j = 1 to n do
(a) Compute the pivot index p, with j ≤ p ≤ n, such that
$$|C_{pj}| = \max_{j \le i \le n} \{|C_{ij}|\}$$
With the change of variable i = n + 1 − j, the number of divisions per step is
$$d = n + 1 - j = n + 1 - (n + 1 - i) = i$$
The total no. of division operations for the normalization step (2d) for 1 ≤ j ≤ n is
$$c_{\mathrm{norm}}(n) = \sum_{i=1}^{n} d = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}$$
The elimination process step (2e) requires (n − 1)(n + 1 − j) multiplications because $C_{jk} = 0$ for $k < j$ and $C_{ij} = 0$ for $i \ne j$. The total no. of multiplication operations for the elimination process for 1 ≤ j ≤ n is
$$c_{\mathrm{elim}}(n) = (n - 1)\, c_{\mathrm{norm}}(n)$$
Combining the computational effort $c_{\mathrm{norm}} + c_{\mathrm{elim}}$ gives
$$c_{GJ}(n) = \frac{n(n+1)}{2} + \frac{(n-1)n(n+1)}{2} = \frac{n^2(n+1)}{2} \approx \frac{n^3}{2} \ \text{FLOPs}, \qquad n \gg 1 \tag{2.2}$$
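To make the procedure concrete, here is a minimal Python sketch of Gauss-Jordan elimination with partial pivoting; the function name gauss_jordan, the use of NumPy, and the test system are illustrative choices, not part of the notes.

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve Ax = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    # Form the augmented n x (n+1) matrix C = [A, b].
    C = np.hstack([np.asarray(A, dtype=float),
                   np.asarray(b, dtype=float).reshape(n, 1)])
    for j in range(n):
        # Pivot: find row p (p >= j) with the largest |C[p, j]|.
        p = j + np.argmax(np.abs(C[j:, j]))
        if C[p, j] == 0.0:
            raise ValueError("matrix is singular")
        C[[j, p]] = C[[p, j]]          # interchange rows j and p
        C[j, j:] /= C[j, j]            # normalization step: make the pivot 1
        for i in range(n):             # elimination step: zero out column j
            if i != j:
                C[i, j:] -= C[i, j] * C[j, j:]
    return C[:, n]                     # last column holds the solution x

A = [[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]]
b = [11.0, -16.0, 17.0]
print(gauss_jordan(A, b))   # should agree with np.linalg.solve(A, b)
```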
2.2.1 Matrix Inversion
From the system Ax = b,
$$A^{-1}Ax = A^{-1}b \;\Rightarrow\; x = A^{-1}b$$
so the solution x can be obtained by matrix inversion.
A more effective alternative is to modify the algorithm of Gauss-Jordan elimination. Starting from the augmented n × 2n matrix C = [A, I] and applying elementary row operations,
$$C = [A, I] \;\to\; [I, A^{-1}]$$
Algorithm of Matrix Inverse
1. Set E = 1 and form the augmented n × 2n matrix C = [A, I].
2. For j = 1 to n do
(a) Compute the pivot index p, with j ≤ p ≤ n, such that
$$|C_{pj}| = \max_{j \le i \le n} \{|C_{ij}|\}$$
With the change of variable i = n − j, the number of divisions per step is
$$d = 2n - j = 2n - (n - i) = n + i$$
The total no. of division operations for the normalization step (2d) for 1 ≤ j ≤ n is
$$c_{\mathrm{norm}}(n) = \sum_{j=1}^{n} d = \sum_{i=0}^{n-1} (n + i) = \frac{n(3n-1)}{2}$$
The elimination process step (2e) requires (n − 1)(2n − j) multiplications because $C_{jk} = 0$ for $k < j$ and $C_{ij} = 0$ for $i \ne j$. The total no. of multiplication operations for the elimination process for 1 ≤ j ≤ n is
$$c_{\mathrm{elim}}(n) = (n - 1)\, c_{\mathrm{norm}}(n)$$
Combining the computational effort $c_{\mathrm{norm}} + c_{\mathrm{elim}}$ gives
$$c_{\mathrm{inv}}(n) = \frac{n(3n-1)}{2} + \frac{(n-1)n(3n-1)}{2} = \frac{n^2(3n-1)}{2} \approx \frac{3n^3}{2} \ \text{FLOPs}, \qquad n \gg 1 \tag{2.3}$$
Since $c_{\mathrm{inv}} > c_{GJ}$, the matrix inversion method is more expensive than the Gauss-Jordan elimination method.
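A minimal sketch of the [A, I] → [I, A⁻¹] procedure, under the same assumptions as the previous sketch (NumPy, illustrative names):

```python
import numpy as np

def gj_inverse(A):
    """Invert A by row-reducing the augmented matrix [A, I] to [I, A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.hstack([A, np.eye(n)])             # augmented n x 2n matrix
    for j in range(n):
        p = j + np.argmax(np.abs(C[j:, j]))   # partial pivoting
        if C[p, j] == 0.0:
            raise ValueError("matrix is singular")
        C[[j, p]] = C[[p, j]]                 # interchange rows j and p
        C[j] /= C[j, j]                       # normalize pivot row
        for i in range(n):                    # eliminate column j elsewhere
            if i != j:
                C[i] -= C[i, j] * C[j]
    return C[:, n:]                           # right half is A^{-1}

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(gj_inverse(A) @ A)                      # approximately the identity
```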
2.3 Gaussian Elimination
ref. 2.3
$$|C_{pj}| = \max_{j \le i \le n} \{|C_{ij}|\}$$
The above is a two-pass algorithm: forward elimination (steps (1) to (3)) followed by back substitution (step (4)).
Operation Count:
$$c_{GE}(n) \approx \frac{n^3}{3} \ \text{FLOPs}, \qquad n \gg 1$$
Since $c_{GE} < c_{GJ}$, the Gaussian elimination method reduces the computational cost compared with the Gauss-Jordan elimination method.
2.3.1 Determinant
Determinants are very useful for developing theoretical properties of matrices. However, they are used less often in numerical work because their computation is expensive and susceptible to accumulated round-off error.
For useful properties of determinants, see Table 2.3.1.
The Gaussian elimination method transforms a matrix into upper-triangular form using elementary row operations,
$$U = E_r \cdots E_2 E_1 A, \qquad \det(E_r \cdots E_2 E_1) = (-1)^r$$
where r is the number of row interchanges performed during partial pivoting, the pivot row at step j being selected so that
$$|A_{pj}| = \max_{j \le i \le n} \{|A_{ij}|\}$$
Since the row-addition operations do not change the determinant, $\det(A) = (-1)^r \prod_{k=1}^{n} U_{kk}$.
The determinant can therefore be computed in approximately
$$\frac{n^3}{3} \ \text{FLOPs}, \qquad n \gg 1$$

2.4 LU Decomposition
ref. 2.4

2.4.1 LU Factorization

$$A = LU$$
$$L = \begin{bmatrix} L_{11} & 0 & \cdots & 0 \\ L_{21} & L_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ L_{n1} & L_{n2} & \cdots & L_{nn} \end{bmatrix}, \qquad U = \begin{bmatrix} 1 & U_{12} & \cdots & U_{1n} \\ 0 & 1 & \cdots & U_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
$$L_{kj} = A_{kj} - \sum_{i=1}^{j-1} L_{ki} U_{ij}, \qquad j \le k \le n$$
$$U_{jk} = \frac{1}{L_{jj}} \left( A_{jk} - \sum_{i=1}^{j-1} L_{ji} U_{ik} \right), \qquad j \le k \le n$$
Since there are a total of n² variable elements in L and U, these two triangular matrices can be stored in a single compact n × n storage matrix Q = [L\U] (see eq. 2.4.13).
Algorithm of LU Factorization
1. Set E = 1 and form the augmented n × 2n matrix D = [A, I].
2. For j = 1 to n do
(a) For k = j to n, compute
$$D_{kj} = D_{kj} - \sum_{i=1}^{j-1} D_{ki} D_{ij}$$
(b) Compute the pivot index q, with j ≤ q ≤ n, such that
$$|D_{qj}| = \max_{j \le i \le n} \{|D_{ij}|\}$$
and interchange rows j and q if needed.
(c) For k = j + 1 to n, compute
$$D_{jk} = \frac{1}{D_{jj}} \left( D_{jk} - \sum_{i=1}^{j-1} D_{ji} D_{ik} \right)$$
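A Python sketch of the factorization formulas above, assuming the convention that U has a unit diagonal (Crout form); for brevity it omits the pivoting and the E bookkeeping of the full algorithm, so it applies only when no row interchanges are needed.

```python
import numpy as np

def crout_lu(A):
    """Crout factorization A = LU (U has a unit diagonal), no pivoting.

    Returns the compact n x n matrix Q = [L\\U]: L on and below the
    diagonal, the strict upper part of U above it.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Q = np.zeros((n, n))
    for j in range(n):
        # Column j of L: L[k,j] = A[k,j] - sum_{i<j} L[k,i] U[i,j]
        for k in range(j, n):
            Q[k, j] = A[k, j] - Q[k, :j] @ Q[:j, j]
        if Q[j, j] == 0.0:
            raise ValueError("zero pivot; pivoting would be required")
        # Row j of U: U[j,k] = (A[j,k] - sum_{i<j} L[j,i] U[i,k]) / L[j,j]
        for k in range(j + 1, n):
            Q[j, k] = (A[j, k] - Q[j, :j] @ Q[:j, k]) / Q[j, j]
    return Q

A = np.array([[4.0, 2.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
Q = crout_lu(A)
L = np.tril(Q)
U = np.triu(Q, 1) + np.eye(3)
print(np.allclose(L @ U, A))   # True: the factors reproduce A
```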
2.4.2 Forward and Back Substitution
Once the coefficient matrix is factored into lower- and upper-triangular parts, a linear
algebraic system can be solved very efficiently for a variety of RHS vectors.
Consider the system Ax = b. Rewrite it as PAx = Pb; since LU = PA, this gives LUx = Pb.
Solution procedure:
$$d = Pb$$
$$Ly = d \quad \text{(forward substitution)}$$
$$Ux = y \quad \text{(back substitution)}$$
Algorithm of LU Decomposition
1. Apply the algorithm of LU Factorization to compute E, Q, and P. If E = 0, exit.
2. Compute d = P b.
3. For k = 1 to n compute
$$y_k = \frac{1}{Q_{kk}} \left( d_k - \sum_{i=1}^{k-1} Q_{ki} y_i \right)$$
4. For k = n down to 1 compute
$$x_k = y_k - \sum_{i=k+1}^{n} Q_{ki} x_i$$
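A companion sketch of steps 2-4, assuming the compact factor Q from the crout_lu sketch above and no row permutation (P = I, so d = b):

```python
import numpy as np

def lu_solve(Q, b):
    """Solve Ax = b given the compact factor matrix Q = [L\\U] from crout_lu.

    Assumes no row permutation was needed (P = I), so d = b.
    """
    n = len(b)
    y = np.zeros(n)
    x = np.zeros(n)
    # Forward substitution: Ly = d
    for k in range(n):
        y[k] = (b[k] - Q[k, :k] @ y[:k]) / Q[k, k]
    # Back substitution: Ux = y (U has a unit diagonal)
    for k in range(n - 1, -1, -1):
        x[k] = y[k] - Q[k, k + 1:] @ x[k + 1:]
    return x
```

Because the O(n³) factorization is done once, each additional RHS vector costs only the O(n²) substitution passes.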
Operation Count:
$$c_{LU}(n) \approx \frac{n^3}{3} \ \text{FLOPs}, \qquad n \gg 1$$

2.4.3 Tridiagonal Systems
In engineering applications, there are many instances where the coefficient matrix A is sparse in the sense that it has a large no. of zero elements arranged in some pattern. The tridiagonal matrix is a banded matrix in which the nonzero elements lie on the main, upper, and lower diagonals:
$$A = \begin{bmatrix} b_1 & a_1 & 0 & \cdots & 0 \\ c_1 & b_2 & a_2 & \ddots & \vdots \\ 0 & c_2 & b_3 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & a_{n-1} \\ 0 & \cdots & 0 & c_{n-1} & b_n \end{bmatrix}$$
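The notes do not reproduce a tridiagonal solver here, but the standard approach (often called the Thomas algorithm) is a specialization of LU decomposition to this banded structure, costing O(n) rather than O(n³) operations. A Python sketch, with the diagonals named a, b, c as in the matrix above; like plain LU it assumes no pivoting is required:

```python
import numpy as np

def thomas(c, b, a, d):
    """Solve a tridiagonal system with sub-diagonal c (length n-1),
    main diagonal b (length n), super-diagonal a (length n-1), and
    RHS d, in O(n) operations (no pivoting)."""
    n = len(b)
    bp = np.array(b, dtype=float)   # working copies
    dp = np.array(d, dtype=float)
    for i in range(1, n):           # forward elimination
        m = c[i - 1] / bp[i - 1]
        bp[i] -= m * a[i - 1]
        dp[i] -= m * dp[i - 1]
    x = np.zeros(n)                 # back substitution
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - a[i] * x[i + 1]) / bp[i]
    return x
```

Only the three diagonals are stored, so the sparse structure of A is fully exploited.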
2.5 Ill-Conditioned Systems
ref. 2.5
Some linear algebraic systems are more sensitive to round-off error than others. Indeed, for some systems a small change in one of the values of the coefficient matrix or the RHS vector can give rise to a large change in the solution vector.
When the solution is highly sensitive to the values of the coefficient matrix A or the RHS vector b, the eqs. are said to be ill-conditioned. This can be a serious numerical problem for two reasons:
- Often the eqs. are generated from a mathematical model of an underlying physical system, and only estimates of the values of A and b are known. If the system Ax = b is ill-conditioned, then the solution is very sensitive to the accuracy of these estimates.
- Even when exact values of A and b are known, we cannot represent these values exactly using a computer with finite precision.
2.5.1 Vector and Matrix Norms
$$\|x\|_\infty = \max_{1 \le k \le n} |x_k| \qquad \text{(infinity norm of } x\text{)}$$
$$\|A\|_\infty = \max_{1 \le k \le n} \left\{ \sum_{j=1}^{n} |A_{kj}| \right\}$$
The expression for $\|A\|_\infty$ involves summing the absolute values of the elements in the rows of A. The matrix norm satisfies all the same properties as the vector norm listed in Table 2.5.1.
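A short numerical check of the two norms; the matrix and vector are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, -7.0, 2.0],
              [3.0,  4.0, -5.0],
              [6.0,  0.0,  2.0]])
x = np.array([3.0, -9.0, 4.0])

print(np.max(np.abs(x)))                  # ||x||_inf = 9
print(np.max(np.sum(np.abs(A), axis=1)))  # ||A||_inf = max row sum = 12
print(np.linalg.norm(A, np.inf))          # same value via NumPy
```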
2.5.2 Condition Number
We can perturb the eqs. and then examine the size of the resulting change in the solution. Consider the effect of the change
$$b \to b + \Delta b, \qquad x \to x + \Delta x$$
Assuming A is nonsingular, $\Delta x = A^{-1} \Delta b$, which leads to the bound
$$\frac{\|\Delta x\|}{\|x\|} \le K(A)\, \frac{\|\Delta b\|}{\|b\|}$$
It is also possible to express changes in the solution caused by changes in the coefficient matrix A:
$$\frac{\|\Delta x\|}{\|x + \Delta x\|} \le K(A)\, \frac{\|\Delta A\|}{\|A\|}$$
When the condition number K(A) becomes large, the system is regarded as being ill-conditioned.
How large does K(A) have to be before a system is regarded as ill-conditioned? The lower and upper bounds of K(A) are given by
$$1 \le K(A) < \infty$$
2.5.3 Approximating the Condition Number
The condition number $K(A) = \|A\|\, \|A^{-1}\|$ can be approximated without going through all of the computations. To estimate K(A), note that if $x = A^{-1}b$, then $\|x\| \le \|A^{-1}\|\, \|b\|$. Thus
$$\frac{\|x\|}{\|b\|} \le \|A^{-1}\|$$
Since the above eq. must hold for every b, the estimate can be improved by considering several RHS vectors, solving the system for each, and then taking the largest ratio. In particular, suppose $\{b^1, b^2, \ldots, b^m\}$ is a set of RHS vectors and let $\{x^1, x^2, \ldots, x^m\}$ be the corresponding solutions of $Ax^k = b^k$, $1 \le k \le m$. The following can be used to approximate the condition number
$$K(A) \approx \|A\| \max_{1 \le k \le m} \left\{ \frac{\|x^k\|}{\|b^k\|} \right\}$$
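A sketch of this estimation strategy in Python; the nearly singular test matrix and the use of random RHS vectors are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular, hence ill-conditioned

# Each ratio ||x^k|| / ||b^k|| is a lower bound on ||A^{-1}||,
# so the largest ratio over several RHS vectors improves the estimate.
ratios = []
for _ in range(20):
    b = rng.standard_normal(2)
    x = np.linalg.solve(A, b)
    ratios.append(np.max(np.abs(x)) / np.max(np.abs(b)))
K_est = np.linalg.norm(A, np.inf) * max(ratios)

print(K_est)                      # lower-bound estimate of K(A)
print(np.linalg.cond(A, np.inf))  # exact infinity-norm condition number
```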
2.5.4 Iterative Improvement
2.6 Iterative Methods
ref. 2.6
We can start with an arbitrary initial estimate and then proceed with as many iterative steps as needed. Iterative methods are attractive for solving very large systems, where performing the direct methods (of order O(n³)) can be prohibitive. In practical applications, large systems have coefficient matrices that are sparse, with relatively few nonzero entries. Here iterative methods are particularly appealing because the sparse structure of the coefficient matrix is preserved.
2.6.1 Jacobi Method
To apply any of the iterative techniques, the eqs. should first be preprocessed with a row permutation, if needed, to ensure that the diagonal elements of the coefficient matrix are nonzero. The coefficient matrix A is then decomposed into a sum of lower-triangular, diagonal, and upper-triangular terms as follows
$$A = L + D + U \tag{2.4}$$
where L = lower-triangular with zeros on the diagonal, D = diagonal with nonzero diagonal elements, U = upper-triangular with zeros on the diagonal.
L, D, and U are obtained directly by inspection of A. Using eq. (2.4), we have Ax = Dx + (L + U)x; with Ax = b, this gives Dx = b − (L + U)x. This is the basis for the Jacobi method. The actual calculation procedure is as follows
$$Dx^{k+1} = b - (L + U)x^k, \qquad k \ge 0 \tag{2.5}$$
$$x_i^{k+1} = \frac{1}{A_{ii}} \left( b_i - \sum_{j \ne i} A_{ij} x_j^k \right), \qquad 1 \le i \le n \tag{2.6}$$
To start the iterative scheme, an initial guess $x^0$ must be chosen. Typically, the iterations are repeated until the norm of the residual error vector is sufficiently small.
A simple sufficient condition for convergence is
$$x^k \to x \ \text{as} \ k \to \infty \tag{2.7}$$
if
$$|A_{kk}| > \sum_{j \ne k} |A_{kj}|, \qquad 1 \le k \le n \tag{2.8}$$
When (2.8) holds, we say that the coefficient matrix A is strictly diagonally dominant in a row sense.
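A Python sketch of the Jacobi iteration (2.5)-(2.6), including a check of the sufficient condition (2.8); the tolerance, iteration cap, and test system are illustrative:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration x^{k+1} = D^{-1}(b - (L+U)x^k), per eqs. (2.5)-(2.6)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    # Check the sufficient condition (2.8): strict row diagonal dominance.
    off_diag = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    if not np.all(np.abs(np.diag(A)) > off_diag):
        print("warning: A is not strictly diagonally dominant")
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    D = np.diag(A)                  # diagonal entries
    R = A - np.diag(D)              # L + U (off-diagonal part)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(b - A @ x_new)) < tol:   # residual norm test
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
print(jacobi(A, b))
```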
2.6.2 Gauss-Seidel Method
The Gauss-Seidel (GS) method is an improvement of the convergence rate of the Jacobi method. The GS method takes advantage of the most recent information when updating an estimate of the solution as follows
$$x_i^{k+1} = \frac{1}{A_{ii}} \left( b_i - \sum_{j=1}^{i-1} A_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} A_{ij} x_j^k \right), \qquad 1 \le i \le n \tag{2.9}$$
In matrix form,
$$(D + L)x^{k+1} = b - Ux^k, \qquad k \ge 0 \tag{2.10}$$
Equation (2.10) can be solved easily by forward substitution because the coefficient matrix (D + L) is lower-triangular.
The sufficient condition which guarantees that the GS method will converge is
$$x^k \to x \ \text{as} \ k \to \infty \quad \text{if} \quad |A_{ii}| > \sum_{j \ne i} |A_{ij}|, \qquad 1 \le i \le n \tag{2.11}$$
Solving for $x^{k+1}$ in eq. (2.10), convergence requires that the norm of the matrix coefficient of $x^k$, namely $(D + L)^{-1}U$, be less than 1.
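A matching sketch of the GS update (2.9); unlike the Jacobi sketch above, the estimate x is overwritten in place, so the newest values are used immediately:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration per eq. (2.9)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        for i in range(n):
            # x[:i] already holds the k+1 values, x[i+1:] the k values.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(b - A @ x)) < tol:   # residual norm test
            break
    return x
```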
2.6.3 Relaxation Method
The GS method can be generalized by adding a new parameter α. Beginning from eq. (2.9), the eq. can be rewritten by adding and subtracting $x_i^k$:
$$x_i^{k+1} = x_i^k + \frac{1}{A_{ii}} \left( b_i - \sum_{j=1}^{i-1} A_{ij} x_j^{k+1} - \sum_{j=i}^{n} A_{ij} x_j^k \right), \qquad 1 \le i \le n \tag{2.12}$$
The term
$$\frac{1}{A_{ii}} \left( b_i - \sum_{j=1}^{i-1} A_{ij} x_j^{k+1} - \sum_{j=i}^{n} A_{ij} x_j^k \right), \qquad 1 \le i \le n \tag{2.13}$$
can be viewed as a correction added to the previous estimate $x_i^k$. Scaling this correction by the relaxation parameter α yields the successive relaxation (SR) method
$$x_i^{k+1} = (1 - \alpha) x_i^k + \frac{\alpha}{A_{ii}} \left( b_i - \sum_{j=1}^{i-1} A_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} A_{ij} x_j^k \right), \qquad 1 \le i \le n \tag{2.14}$$
$$k \ge 0 \tag{2.15}$$
Setting α = 1 recovers the GS method; 0 < α < 1 gives under-relaxation and 1 < α < 2 gives over-relaxation. For convergence the relaxation parameter must satisfy
$$0 < \alpha < 2 \tag{2.16}$$
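A sketch of the SR update (2.14); setting alpha = 1 reduces it to the GS sketch above, and the default alpha = 1.1 is an arbitrary over-relaxation choice:

```python
import numpy as np

def sor(A, b, alpha=1.1, x0=None, tol=1e-10, max_iter=500):
    """Successive relaxation per eq. (2.14); alpha = 1 recovers Gauss-Seidel."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - s) / A[i, i]            # plain Gauss-Seidel update
            x[i] = (1 - alpha) * x[i] + alpha * gs   # blend per eq. (2.14)
        if np.max(np.abs(b - A @ x)) < tol:
            break
    return x
```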
2.6.4 Convergence
The three iterative methods that we have examined can all be formulated in the following general way
$$x^{k+1} = Bx^k + c, \qquad k \ge 0$$
where $x^0$ is the initial guess. The iteration matrix B and offset vector c corresponding to the Jacobi method, the GS method, and the SR method are summarized in Table 2.6.4.
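Since Table 2.6.4 itself is not reproduced here, the following sketch builds B and c from the standard matrix splittings of the three methods (rearranging eqs. (2.5), (2.10), and (2.14)) and checks the spectral radius of B, which must be less than 1 for convergence from every initial guess; the formulas are the standard ones, not copied from the table:

```python
import numpy as np

def iteration_matrix(A, b, method="jacobi", alpha=1.0):
    """Build B and c in x^{k+1} = B x^k + c by writing Ax = b as Mx = Nx + b."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    if method == "jacobi":           # D x^{k+1} = -(L+U) x^k + b
        M, N = D, -(L + U)
    elif method == "gs":             # (D+L) x^{k+1} = -U x^k + b
        M, N = D + L, -U
    else:                            # SR: (D/alpha + L) x^{k+1} = ((1/alpha - 1)D - U) x^k + b
        M, N = D / alpha + L, (1 / alpha - 1) * D - U
    B = np.linalg.solve(M, N)        # B = M^{-1} N
    c = np.linalg.solve(M, b)        # c = M^{-1} b
    return B, c

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
for m in ("jacobi", "gs", "sr"):
    B, c = iteration_matrix(A, b, m, alpha=1.1)
    print(m, np.max(np.abs(np.linalg.eigvals(B))))   # spectral radius < 1
```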
2.7 Applications
ref. 2.7