
CHAPTER 3 (4 LECTURES)

DIRECT METHODS FOR SOLVING LINEAR SYSTEMS

1. Introduction
Systems of simultaneous linear equations are associated with many problems in engineering and
science, as well as with applications to the social sciences and the quantitative study of business and
economic problems. These problems occur in a wide variety of disciplines, directly in real-world problems
as well as in the solution process for other problems.
The principal objective of this Chapter is to discuss the numerical aspects of solving linear system of
equations having the form 
$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}
\tag{1.1}
$$
This is a linear system of $n$ equations in $n$ unknowns $x_1, x_2, \ldots, x_n$. This system can simply be written
in the matrix equation form
Ax=b

     
$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}
\tag{1.2}
$$
This system has a unique solution $x = A^{-1}b$ when the coefficient matrix $A$ is non-singular. Unless
otherwise stated, we shall assume that this is the case under discussion. If $A^{-1}$ is already available,
then $x = A^{-1}b$ provides a good method of computing the solution $x$.
If $A^{-1}$ is not available, then in general $A^{-1}$ should not be computed solely for the purpose of obtaining
$x$. More efficient numerical procedures are developed in this chapter. We study two broad
categories of methods, direct and iterative, and we start with direct methods for solving the linear system.

2. Gaussian Elimination
Direct methods, which are techniques that give a solution in a fixed number of steps, subject only to
round-off errors, are considered in this chapter. Gaussian elimination is the principal tool in the direct
solution of system (1.2). The method is named after Carl Friedrich Gauss (1777-1855). To solve larger
systems of linear equations, we systematize the elimination method of introductory algebra.
Consider the following $n \times n$ system:
$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 & (E_1)\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 & (E_2)\\
&\;\;\vdots\\
a_{i1}x_1 + a_{i2}x_2 + a_{i3}x_3 + \cdots + a_{in}x_n &= b_i & (E_i)\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n. & (E_n)
\end{aligned}
$$
Let $a_{11} \neq 0$ and eliminate $x_1$ from $E_2, E_3, \cdots, E_n$. Define the multipliers
$$m_{i1} = \frac{a_{i1}}{a_{11}}, \quad \text{for each } i = 2, 3, \cdots, n,$$
and replace each equation $E_i$ by $E_i - m_{i1}E_1$. This eliminates $x_1$ from these rows.


We follow a sequential procedure: for $j = 2, 3, \cdots, n$ we perform the operations
$$E_i \longrightarrow E_i - (a_{ij}/a_{jj})\,E_j, \quad \text{for each } i = j+1, j+2, \cdots, n,$$
provided $a_{jj} \neq 0$. This eliminates $x_j$ from each row below the $j$-th row. At the same time each
right-hand side entry is updated as $b_i \longrightarrow b_i - (a_{ij}/a_{jj})\,b_j$, for each $i = j+1, j+2, \cdots, n$.
The resulting augmented matrix has the form
$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1\\
0 & a_{22} & \cdots & a_{2n} & b_2\\
\vdots & & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & a_{nn} & b_n
\end{bmatrix}
$$
Therefore the linear system is triangular. Solving the $n$-th equation for $x_n$ gives
$$x_n = \frac{b_n}{a_{nn}}.$$
Solving the $(n-1)$-st equation for $x_{n-1}$ and using the known value of $x_n$ yields
$$x_{n-1} = \frac{b_{n-1} - a_{n-1,n}\,x_n}{a_{n-1,n-1}}.$$
Continuing this process, we obtain
$$x_i = \frac{b_i - a_{in}x_n - \cdots - a_{i,i+1}x_{i+1}}{a_{ii}} = \frac{b_i - \sum_{j=i+1}^{n} a_{ij}x_j}{a_{ii}},$$
for each $i = n-1, n-2, \cdots, 2, 1$.
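The elimination and back-substitution steps above translate almost line for line into code. The following Python sketch is our own illustration (the function name gauss_solve is not from the notes); it assumes every pivot a_jj encountered is non-zero, i.e. no pivoting is performed.

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination (no pivoting) and back substitution.

    A is a list of n rows, b a list of n right-hand sides. Assumes that every
    pivot a[j][j] encountered is non-zero.
    """
    n = len(A)
    a = [row[:] for row in A]   # work on copies; the caller's data is untouched
    b = b[:]
    # Forward elimination: for each pivot column j, zero the entries below it.
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = a[i][j] / a[j][j]          # multiplier m_ij = a_ij / a_jj
            for k in range(j, n):
                a[i][k] -= m * a[j][k]     # E_i <- E_i - m_ij E_j
            b[i] -= m * b[j]               # update the right-hand side as well
    # Back substitution: x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```

For instance, applying it to the system of Example 3 below reproduces the solution (0, 1, -1, 0) in double precision.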

Partial Pivoting: In the elimination process it is assumed that the pivot elements $a_{ii} \neq 0$, $i = 1, 2, \ldots, n$.
If at any stage of the elimination one of the pivots becomes small (or zero), we bring another element into
the pivot position by interchanging rows.
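Partial pivoting changes only the pivot-selection step: before eliminating in column k, the row with the largest entry in magnitude in that column (on or below the diagonal) is swapped into the pivot position. A sketch, again with names of our own choosing:

```python
def gauss_solve_pp(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    # Carry b along as an extra column of the augmented matrix.
    aug = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n - 1):
        # Partial pivoting: choose the row with the largest |a_ik| for i >= k.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        if p != k:
            aug[k], aug[p] = aug[p], aug[k]    # interchange rows
        for i in range(k + 1, n):
            m = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= m * aug[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x
```

In double precision this solves the system of Example 1 below to essentially full accuracy; the example shows how four-digit arithmetic without pivoting fails on the same system.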

Remark 2.1. Unique Solution, No Solution, or Infinitely Many Solutions.

The reduced echelon form tells us which type of solution set a system has:
1. If there is a leading one in every column of the coefficient part, the system has a unique solution.
2. If some row has all zero coefficients but a non-zero right-hand side, the system has no solution.
3. If the system is consistent but not every column of the coefficient part has a leading one (for example,
a homogeneous system, where all right-hand sides are zero, whose reduced form contains a row of zeros),
the system has infinitely many solutions.
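These three cases can be detected numerically by row-reducing the augmented matrix and comparing ranks. The sketch below is our own illustration (the helper name classify is not from the notes); on the system of Example 2 below it reports "none" for α = 1, "infinite" for α = −1, and "unique" for, say, α = 0.

```python
def classify(A, b, tol=1e-12):
    """Classify the system Ax = b as 'unique', 'none', or 'infinite'.

    Row-reduces the augmented matrix; rank(A) < rank([A|b]) means a row of
    zero coefficients with a non-zero right-hand side, i.e. no solution.
    """
    n = len(A)
    aug = [row[:] + [rhs] for row, rhs in zip(A, b)]
    rank = 0
    for col in range(n):
        # Find the best pivot at or below position `rank` in this column.
        p = max(range(rank, n), key=lambda i: abs(aug[i][col]))
        if abs(aug[p][col]) < tol:
            continue                      # no pivot available in this column
        aug[rank], aug[p] = aug[p], aug[rank]
        for i in range(rank + 1, n):
            m = aug[i][col] / aug[rank][col]
            for j in range(col, n + 1):
                aug[i][j] -= m * aug[rank][j]
        rank += 1
    # Any leftover row "0 = c" with c != 0 makes the system inconsistent.
    if any(abs(aug[i][n]) > tol for i in range(rank, n)):
        return "none"
    return "unique" if rank == n else "infinite"
```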

Example 1. Solve the system of equations
$$
\begin{aligned}
6x_1 + 2x_2 + 2x_3 &= -2\\
2x_1 + \tfrac{2}{3}x_2 + \tfrac{1}{3}x_3 &= 1\\
x_1 + 2x_2 - x_3 &= 0.
\end{aligned}
$$
This system has the exact solution $x_1 = 2.6$, $x_2 = -3.8$, $x_3 = -5.0$.

Sol. Let us use a floating-point representation with 4 digits, with all operations rounded.
The augmented matrix is
$$
\begin{bmatrix}
6.000 & 2.000 & 2.000 & -2.000\\
2.000 & 0.6667 & 0.3333 & 1.000\\
1.000 & 2.000 & -1.000 & 0.0
\end{bmatrix}
$$
The multipliers are $m_{21} = \frac{2}{6} = 0.3333$ and $m_{31} = \frac{1}{6} = 0.1667$, and we compute
$a_{21}^{(2)} = a_{21} - m_{21}a_{11}$, $a_{22}^{(2)} = a_{22} - m_{21}a_{12}$, etc.:
$$
\begin{bmatrix}
6.000 & 2.000 & 2.000 & -2.000\\
0.0 & 0.0001000 & -0.3333 & 1.667\\
0.0 & 1.667 & -1.333 & 0.3334
\end{bmatrix}
$$
The multiplier is $m_{32} = \frac{1.667}{0.0001} = 16670$:
$$
\begin{bmatrix}
6.000 & 2.000 & 2.000 & -2.000\\
0.0 & 0.0001000 & -0.3333 & 1.667\\
0.0 & 0.0 & 5555 & -27790
\end{bmatrix}
$$
Using back substitution, we obtain
$$x_3 = -5.003, \quad x_2 = 0.0, \quad x_1 = 1.335.$$
We observe that the computed solution is not compatible with the exact solution.
The difficulty lies in $a_{22}^{(2)}$: this coefficient is very small (almost zero), so it carries an essentially
infinite relative error, which propagates through every computation involving it. To avoid this, we
interchange the second and third rows and then continue the elimination.
After interchanging, the multiplier is $m_{32} = 0.0001/1.667 = 0.00005999$, and elimination gives
$$
\begin{bmatrix}
6.000 & 2.000 & 2.000 & -2.000\\
0.0 & 1.667 & -1.333 & 0.3334\\
0.0 & 0.0 & -0.3332 & 1.667
\end{bmatrix}
$$
Using back substitution, we obtain
$$x_3 = -5.003, \quad x_2 = -3.801, \quad x_1 = 2.602.$$
We see that after partial pivoting, we obtain the desired solution.
Example 2. Given the linear system
$$
\begin{aligned}
x_1 - x_2 + \alpha x_3 &= -2,\\
-x_1 + 2x_2 - \alpha x_3 &= 3,\\
\alpha x_1 + x_2 + x_3 &= 2.
\end{aligned}
$$
a. Find value(s) of α for which the system has no solutions.
b. Find value(s) of α for which the system has an infinite number of solutions.
c. Assuming a unique solution exists for a given α, find the solution.
Sol. The augmented matrix is
$$
\begin{bmatrix}
1 & -1 & \alpha & -2\\
-1 & 2 & -\alpha & 3\\
\alpha & 1 & 1 & 2
\end{bmatrix}
$$
The multipliers are $m_{21} = -1$ and $m_{31} = \alpha$. Performing $E_2 \to E_2 + E_1$ and $E_3 \to E_3 - \alpha E_1$, we obtain
$$
\begin{bmatrix}
1 & -1 & \alpha & -2\\
0 & 1 & 0 & 1\\
0 & 1+\alpha & 1-\alpha^2 & 2(1+\alpha)
\end{bmatrix}
$$
The multiplier is $m_{32} = 1+\alpha$ and we perform $E_3 \to E_3 - m_{32}E_2$:
$$
\begin{bmatrix}
1 & -1 & \alpha & -2\\
0 & 1 & 0 & 1\\
0 & 0 & 1-\alpha^2 & 1+\alpha
\end{bmatrix}
$$
a. If $\alpha = 1$, then the last row of the reduced augmented matrix reads $0 \cdot x_3 = 2$, so the system has no
solution.
b. If $\alpha = -1$, then the last row reads $0 \cdot x_3 = 0$, so the system has infinitely many solutions.
c. If $\alpha \neq \pm 1$, then the system has a unique solution:
$$x_3 = \frac{1}{1-\alpha}, \quad x_2 = 1, \quad x_1 = -\frac{1}{1-\alpha}.$$
Example 3. Solve the system by Gauss elimination
4x1 + 3x2 + 2x3 + x4 = 1
3x1 + 4x2 + 3x3 + 2x4 = 1
2x1 + 3x2 + 4x3 + 3x4 = −1
x1 + 2x2 + 3x3 + 4x4 = −1.
Sol. We write the augmented matrix and solve the system:
$$
\begin{bmatrix}
4 & 3 & 2 & 1 & 1\\
3 & 4 & 3 & 2 & 1\\
2 & 3 & 4 & 3 & -1\\
1 & 2 & 3 & 4 & -1
\end{bmatrix}
$$
The multipliers are $m_{21} = \frac{3}{4}$, $m_{31} = \frac{1}{2}$, and $m_{41} = \frac{1}{4}$.
Replacing $E_2$ with $E_2 - m_{21}E_1$, $E_3$ with $E_3 - m_{31}E_1$, and $E_4$ with $E_4 - m_{41}E_1$ gives
$$
\begin{bmatrix}
4 & 3 & 2 & 1 & 1\\
0 & 7/4 & 3/2 & 5/4 & 1/4\\
0 & 3/2 & 3 & 5/2 & -3/2\\
0 & 5/4 & 5/2 & 15/4 & -5/4
\end{bmatrix}
$$
The multipliers are $m_{32} = \frac{6}{7}$ and $m_{42} = \frac{5}{7}$.
Replacing $E_3$ with $E_3 - m_{32}E_2$ and $E_4$ with $E_4 - m_{42}E_2$, we obtain
$$
\begin{bmatrix}
4 & 3 & 2 & 1 & 1\\
0 & 7/4 & 3/2 & 5/4 & 1/4\\
0 & 0 & 12/7 & 10/7 & -12/7\\
0 & 0 & 10/7 & 20/7 & -10/7
\end{bmatrix}
$$
The multiplier is $m_{43} = \frac{5}{6}$ and we replace $E_4$ with $E_4 - m_{43}E_3$:
$$
\begin{bmatrix}
4 & 3 & 2 & 1 & 1\\
0 & 7/4 & 3/2 & 5/4 & 1/4\\
0 & 0 & 12/7 & 10/7 & -12/7\\
0 & 0 & 0 & 5/3 & 0
\end{bmatrix}
$$
Using back substitution successively for $x_4, x_3, x_2, x_1$, we obtain $x_4 = 0$, $x_3 = -1$, $x_2 = 1$, $x_1 = 0$.

Complete Pivoting: In the first stage of elimination, we search for the largest element in magnitude
in the entire matrix and bring it to the position of the first pivot. We repeat the same process at every
step of the elimination. This process requires interchanges of both rows and columns.

Scaled Partial Pivoting: In this approach, the algorithm selects as the pivot the entry that is
largest relative to the other entries in its row.
At the beginning, a scale factor must be computed for each equation in the system. We define
$$s_i = \max_{1 \le j \le n} |a_{ij}| \qquad (1 \le i \le n).$$
These numbers are recorded in the scale vector $s = [s_1, s_2, \cdots, s_n]$. Note that the scale vector does not
change throughout the procedure. In starting the forward elimination process, we do not automatically
use the first equation as the pivot equation. Instead, we use the equation for which the ratio $|a_{i1}|/s_i$ is
greatest. We repeat the process at each stage, keeping the same scale factors.
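As code, scaled partial pivoting is a small variant of the elimination loop: compute the scale vector once, and at each stage pick the row maximizing |a_ik|/s_i. This sketch is our own illustration (the function name is not from the notes):

```python
def gauss_solve_spp(A, b):
    """Gaussian elimination with scaled partial pivoting, then back substitution."""
    n = len(A)
    aug = [row[:] + [rhs] for row, rhs in zip(A, b)]
    s = [max(abs(v) for v in row) for row in A]   # scale vector, fixed up front
    for k in range(n - 1):
        # Pick the row whose leading entry is largest relative to its scale.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]) / s[i])
        if p != k:
            aug[k], aug[p] = aug[p], aug[k]
            s[k], s[p] = s[p], s[k]               # interchange scale factors too
        for i in range(k + 1, n):
            m = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= m * aug[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        total = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - total) / aug[i][i]
    return x
```

Run on the system of Example 5 below, it performs the same interchanges as the hand computation and returns (3, 1, -2, 1).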
Example 4. Solve the system
2.11x1 − 4.21x2 + 0.921x3 = 2.01
4.01x1 + 10.2x2 − 1.12x3 = −3.09
1.09x1 + 0.987x2 + 0.832x3 = 4.21
by using scaled partial pivoting.
Sol. The augmented matrix is
$$
\begin{bmatrix}
2.11 & -4.21 & 0.921 & 2.01\\
4.01 & 10.2 & -1.12 & -3.09\\
1.09 & 0.987 & 0.832 & 4.21
\end{bmatrix}
$$
The scale factors are $s_1 = 4.21$, $s_2 = 10.2$, and $s_3 = 1.09$. We pick the largest of the ratios
$(2.11/4.21 = 0.501,\ 4.01/10.2 = 0.393,\ 1.09/1.09 = 1)$, which is the third, so we interchange rows 1 and 3 and
interchange $s_1$ and $s_3$ to get
$$
\begin{bmatrix}
1.09 & 0.987 & 0.832 & 4.21\\
4.01 & 10.2 & -1.12 & -3.09\\
2.11 & -4.21 & 0.921 & 2.01
\end{bmatrix}
$$
Performing $E_2 - 3.68E_1 \to E_2$ and $E_3 - 1.94E_1 \to E_3$, we obtain
$$
\begin{bmatrix}
1.09 & 0.987 & 0.832 & 4.21\\
0 & 6.57 & -4.18 & -18.6\\
0 & -6.12 & -0.689 & -6.16
\end{bmatrix}
$$
Now comparing the ratios $(6.57/10.2 = 0.6444,\ 6.12/4.21 = 1.45)$, the second is larger, so we
interchange rows 2 and 3 and interchange the scale factors accordingly:
$$
\begin{bmatrix}
1.09 & 0.987 & 0.832 & 4.21\\
0 & -6.12 & -0.689 & -6.16\\
0 & 6.57 & -4.18 & -18.6
\end{bmatrix}
$$
Performing $E_3 + 1.07E_2 \to E_3$, we get
$$
\begin{bmatrix}
1.09 & 0.987 & 0.832 & 4.21\\
0 & -6.12 & -0.689 & -6.16\\
0 & 0 & -4.92 & -25.2
\end{bmatrix}
$$
Backward substitution gives $x_3 = 5.12$, $x_2 = 0.43$, $x_1 = -0.436$.
Example 5. Solve the system
3x1 − 13x2 + 9x3 + 3x4 = −19
−6x1 + 4x2 + x3 − 18x4 = −34
6x1 − 2x2 + 2x3 + 4x4 = 16
12x1 − 8x2 + 6x3 + 10x4 = 26
by hand using scaled partial pivoting. Justify all row interchanges and write out the transformed matrix
after you finish working on each column.
Sol. The augmented matrix is
$$
\begin{bmatrix}
3 & -13 & 9 & 3 & -19\\
-6 & 4 & 1 & -18 & -34\\
6 & -2 & 2 & 4 & 16\\
12 & -8 & 6 & 10 & 26
\end{bmatrix}
$$
and the scale factors are $s_1 = 13$, $s_2 = 18$, $s_3 = 6$, and $s_4 = 12$. We pick the largest of the ratios
$(3/13,\ 6/18,\ 6/6,\ 12/12)$, which is the third, so we interchange rows 1 and 3 and interchange $s_1$ and $s_3$ to get
$$
\begin{bmatrix}
6 & -2 & 2 & 4 & 16\\
-6 & 4 & 1 & -18 & -34\\
3 & -13 & 9 & 3 & -19\\
12 & -8 & 6 & 10 & 26
\end{bmatrix}
$$
with $s_1 = 6$, $s_2 = 18$, $s_3 = 13$, $s_4 = 12$. Performing $E_2 - (-6/6)E_1 \to E_2$, $E_3 - (3/6)E_1 \to E_3$, and
$E_4 - (12/6)E_1 \to E_4$, we obtain
$$
\begin{bmatrix}
6 & -2 & 2 & 4 & 16\\
0 & 2 & 3 & -14 & -18\\
0 & -12 & 8 & 1 & -27\\
0 & -4 & 2 & 2 & -6
\end{bmatrix}
$$
Comparing $(|a_{22}|/s_2 = 2/18,\ |a_{32}|/s_3 = 12/13,\ |a_{42}|/s_4 = 4/12)$, the largest is $|a_{32}|/s_3$, so we
interchange rows 2 and 3 and interchange $s_2$ and $s_3$ to get
$$
\begin{bmatrix}
6 & -2 & 2 & 4 & 16\\
0 & -12 & 8 & 1 & -27\\
0 & 2 & 3 & -14 & -18\\
0 & -4 & 2 & 2 & -6
\end{bmatrix}
$$
with $s_1 = 6$, $s_2 = 13$, $s_3 = 18$, $s_4 = 12$. Performing $E_3 - (2/(-12))E_2 \to E_3$ and
$E_4 - ((-4)/(-12))E_2 \to E_4$, we get
$$
\begin{bmatrix}
6 & -2 & 2 & 4 & 16\\
0 & -12 & 8 & 1 & -27\\
0 & 0 & 13/3 & -83/6 & -45/2\\
0 & 0 & -2/3 & 5/3 & 3
\end{bmatrix}
$$
Comparing $(|a_{33}|/s_3 = (13/3)/18,\ |a_{43}|/s_4 = (2/3)/12)$, the largest is the first, so we do not
interchange rows. Performing $E_4 - (-2/13)E_3 \to E_4$, we get the final reduced matrix
$$
\begin{bmatrix}
6 & -2 & 2 & 4 & 16\\
0 & -12 & 8 & 1 & -27\\
0 & 0 & 13/3 & -83/6 & -45/2\\
0 & 0 & 0 & -6/13 & -6/13
\end{bmatrix}
$$
Backward substitution gives $x_1 = 3$, $x_2 = 1$, $x_3 = -2$, $x_4 = 1$.
Example 6. Solve this system of linear equations:
0.0001x + y = 1
x+y =2
using no pivoting, partial pivoting, and scaled partial pivoting. Carry at most five significant digits
of precision (rounding) to see how finite precision computations and roundoff errors can affect the
calculations.
Sol. By direct substitution, it is easy to verify that the true solution is x = 1.0001 and y = 0.99990 to
five significant digits.
For no pivoting, the first equation in the original system is the pivot equation, and the multiplier is
1/0.0001 = 10000. The new system of equations is
0.0001x + y = 1
9999y = 9998
We obtain y = 9998/9999 ≈ 0.99990 and x = 1. Notice that we have lost the last significant digit in
the correct value of x.
We repeat the solution process using partial pivoting for the original system. The coefficient of x in
the second equation is larger in magnitude, so the second equation is used as the pivot equation. We
can interchange the two equations, obtaining
x+y =2
0.0001x + y = 1
which gives y = 0.99980/0.99990 ≈ 0.99990 and x = 2 − y = 2 − 0.99990 = 1.0001.
Both computed values of x and y are correct to five significant digits.
We repeat the solution process using scaled partial pivoting for the original system. Since the scaling
constants are s = (1, 1) and the ratios for determining the pivot equation are (0.0001/1, 1/1), the
second equation is now the pivot equation. We do not actually interchange the equations and use
the second equation as the first pivot equation. The rest of the calculations are as above for partial
pivoting. The computed values of x and y are correct to five significant digits.
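The hand computations above can be reproduced mechanically by rounding every intermediate result to five significant digits. The helpers below (rd and solve2, our own names, not from the notes) do exactly that for a 2 × 2 system; running both pivot orders shows the digit loss.

```python
from math import floor, log10

def rd(x, d=5):
    """Round x to d significant digits, simulating finite-precision storage."""
    if x == 0:
        return 0.0
    return round(x, d - 1 - floor(log10(abs(x))))

def solve2(pivot_eq, other_eq):
    """Solve a 2x2 system, each equation given as [a, b, rhs], rounding every
    arithmetic result to five significant digits; pivot_eq is the pivot equation."""
    a1, b1, c1 = pivot_eq
    a2, b2, c2 = other_eq
    m = rd(a2 / a1)                  # multiplier
    b2e = rd(b2 - rd(m * b1))        # eliminated second equation
    c2e = rd(c2 - rd(m * c1))
    y = rd(c2e / b2e)                # back substitution
    x = rd((c1 - rd(b1 * y)) / a1)
    return x, y

E1 = [0.0001, 1.0, 1.0]   # 0.0001x + y = 1
E2 = [1.0, 1.0, 2.0]      # x + y = 2

print(solve2(E1, E2))     # no pivoting:      (1.0, 0.9999)   -- last digit of x lost
print(solve2(E2, E1))     # partial pivoting: (1.0001, 0.9999) -- both correct
```

This recovers the same numbers as the text: without pivoting x comes out as 1 rather than 1.0001, and with the larger pivot first both values are correct to five significant digits.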
2.1. Operation Counts. Both the amount of time required to complete the calculations and the
subsequent round-off error depend on the number of floating-point arithmetic operations needed to
solve a routine problem. In general, the amount of time required to perform a multiplication or
division on a computer is approximately the same and is considerably greater than that required to
perform an addition or subtraction. The actual differences in execution time, however, depend on the
particular computing system. To demonstrate the counting operations for a given method, we will
count the operations required to solve a typical linear system of n equations in n unknowns using
Gauss elimination Algorithm. We will keep the count of the additions/subtractions separate from the
count of the multiplications/divisions because of the time differential.
The first step is to calculate the multipliers. Then the replacement of the equation $E_i$ by $(E_i - m_{ij}E_j)$ requires
that $m_{ij}$ be multiplied by each term in $E_j$ and that each term of the resulting equation be subtracted
from the corresponding term in $E_i$. The following table gives the operation count for going from
$A$ to $U$ at each step $1, 2, \cdots, n-1$.
Step     Divisions       Multiplications       Additions/Subtractions
1        n − 1           (n − 1)^2             (n − 1)^2
2        n − 2           (n − 2)^2             (n − 2)^2
...      ...             ...                   ...
n − 2    2               4                     4
n − 1    1               1                     1
Total    n(n − 1)/2      n(n − 1)(2n − 1)/6    n(n − 1)(2n − 1)/6
Therefore the total number of additions/subtractions from $A$ to $U$ is $\dfrac{n(n-1)(2n-1)}{6}$, and the
total number of multiplications/divisions is
$$\frac{n(n-1)(2n-1)}{6} + \frac{n(n-1)}{2} = \frac{n(n^2-1)}{3}.$$
Now we count the number of additions/subtractions and the number of multiplications/divisions for
the right-hand-side vector $b$. We have:
Total number of additions/subtractions: $(n-1) + (n-2) + \cdots + 2 + 1 = \dfrac{n(n-1)}{2}$.
Total number of multiplications/divisions: $(n-1) + (n-2) + \cdots + 2 + 1 = \dfrac{n(n-1)}{2}$.
Lastly we count the number of additions/subtractions and multiplications/divisions for finding the
solution by the back-substitution method:
Total number of additions/subtractions: $0 + 1 + \cdots + (n-1) = \dfrac{n(n-1)}{2}$.
Total number of multiplications/divisions: $1 + 2 + \cdots + n = \dfrac{n(n+1)}{2}$.
Therefore the total number of operations to obtain the solution of a system of $n$ linear equations in
$n$ variables using Gaussian elimination is:
Total number of additions/subtractions:
$$\frac{n(n-1)(2n+5)}{6}.$$
Total number of multiplications/divisions:
$$\frac{n(n^2+3n-1)}{3}.$$
For large $n$, the total number of multiplications and divisions is approximately $n^3/3$, as is the total
number of additions and subtractions. Thus the amount of computation and the time required increase
with $n$ in proportion to $n^3$, as shown in the following table.

n      Multiplications/Divisions    Additions/Subtractions
3      17                           11
10     430                          375
50     44,150                       42,875
100    343,300                      338,250
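The entries in this table can be cross-checked by instrumenting the elimination and back-substitution loops with counters and comparing against the closed-form totals (a sketch; the function name is ours):

```python
def gauss_op_counts(n):
    """Count multiplications/divisions and additions/subtractions performed by
    Gaussian elimination on an n x n system (reduction of A and b, plus back
    substitution), without doing any actual arithmetic."""
    muldiv = addsub = 0
    # Elimination step k: (n-k) multipliers (one division each); each of the
    # (n-k) rows below needs (n-k) mults/subs in A and one more in b.
    for k in range(1, n):
        rows = n - k
        muldiv += rows + rows * rows + rows
        addsub += rows * rows + rows
    # Back substitution: x_i needs (n-i) multiplications and one division,
    # plus (n-i) additions/subtractions.
    for i in range(1, n + 1):
        muldiv += (n - i) + 1
        addsub += n - i
    return muldiv, addsub

# Agrees with the closed-form totals from the text for every n checked.
for n in (3, 10, 50, 100):
    md, ad = gauss_op_counts(n)
    assert md == n * (n * n + 3 * n - 1) // 3
    assert ad == n * (n - 1) * (2 * n + 5) // 6
```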

3. The LU Factorization:
When we use matrix multiplication, another meaning can be given to Gaussian elimination: the
matrix $A$ can be factored into the product of two triangular matrices.
Let $AX = b$ be the system to be solved, where $A$ is the $n \times n$ coefficient matrix. The linear system can be reduced
to the upper triangular system $UX = g$ with
$$
U = \begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n}\\
0 & u_{22} & \cdots & u_{2n}\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & u_{nn}
\end{bmatrix}
$$
Here the $u_{ij}$ are the entries of the final reduced matrix produced by the elimination. Introduce an
auxiliary lower triangular matrix $L$ based on the multipliers $m_{ij}$ as follows:
$$
L = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0\\
m_{21} & 1 & 0 & \cdots & 0\\
m_{31} & m_{32} & 1 & \cdots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
m_{n1} & m_{n2} & \cdots & m_{n,n-1} & 1
\end{bmatrix}
$$
Theorem 3.1. Let A be a non-singular matrix and let L and U be defined as above. If U is produced
without pivoting then
LU = A.
This is called LU factorization of A.
In previous section we found that Gaussian elimination applied to an arbitrary linear system Ax = b
requires O(n3 /3) arithmetic operations to determine x. However, to solve a linear system that involves
an upper-triangular system requires only backward substitution, which takes O(n2 ) operations. The
number of operations required to solve a lower-triangular systems is similar.
Suppose that A has been factored into the triangular form A = LU , where L is lower triangular and
U is upper triangular. Then we can solve for x more easily by using a two-step process.
First we let y = U x and solve the lower triangular system Ly = b for y. Since L is triangular,
determining y from this equation requires only O(n2 ) operations.
Once y is known, the upper triangular system U x = y requires only an additional O(n2 ) operations to
determine the solution x.
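The two-step procedure reads directly as code. The sketch below (Doolittle-style, without pivoting; the names are our own) builds L and U exactly as described above, then solves Ly = b forward and Ux = y backward, each in O(n^2) operations:

```python
def lu_factor(A):
    """Return (L, U) with L unit lower triangular, via Gaussian elimination
    without pivoting. Assumes no zero pivot is encountered."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                    # store the multiplier m_ik in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]     # E_i <- E_i - m_ik E_k
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then back substitution
    for Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                     # forward substitution, O(n^2)
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution, O(n^2)
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once A is factored, each additional right-hand side b costs only the two triangular solves.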
Example 7. We wish to solve the following system of linear equations using LU decomposition.
x1 + 2x2 + 4x3 = 3
3x1 + 8x2 + 14x3 = 13
2x1 + 6x2 + 13x3 = 4.
(a) Find the matrices L and U using Gauss elimination.
(b) Using those values of L and U , solve the system of equations.
Sol. We first apply Gaussian elimination to the matrix $A$ and collect the multipliers $m_{21}$, $m_{31}$,
and $m_{32}$. We have
$$
A = \begin{bmatrix} 1 & 2 & 4\\ 3 & 8 & 14\\ 2 & 6 & 13 \end{bmatrix}
$$
The multipliers are $m_{21} = 3$ and $m_{31} = 2$; we perform $E_2 \to E_2 - 3E_1$ and $E_3 \to E_3 - 2E_1$:
$$
\sim \begin{bmatrix} 1 & 2 & 4\\ 0 & 2 & 2\\ 0 & 2 & 5 \end{bmatrix}
$$
The multiplier is $m_{32} = 2/2 = 1$ and we perform $E_3 \to E_3 - E_2$:
$$
\sim \begin{bmatrix} 1 & 2 & 4\\ 0 & 2 & 2\\ 0 & 0 & 3 \end{bmatrix}
$$
With $m_{21} = 3$, $m_{31} = 2$, and $m_{32} = 1$, we therefore have
$$
A = \begin{bmatrix} 1 & 2 & 4\\ 3 & 8 & 14\\ 2 & 6 & 13 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0\\ 3 & 1 & 0\\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 4\\ 0 & 2 & 2\\ 0 & 0 & 3 \end{bmatrix} = LU.
$$
Therefore $AX = B \implies LUX = B$. Setting $UX = Y$, we obtain $LY = B$, i.e.
$$
\begin{bmatrix} 1 & 0 & 0\\ 3 & 1 & 0\\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}
= \begin{bmatrix} 3\\ 13\\ 4 \end{bmatrix}.
$$
Using forward substitution, we obtain $y_1 = 3$, $y_2 = 4$, and $y_3 = -6$. Now
$$
UX = Y \implies
\begin{bmatrix} 1 & 2 & 4\\ 0 & 2 & 2\\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}
= \begin{bmatrix} 3\\ 4\\ -6 \end{bmatrix}.
$$
Using the backward substitution process, we obtain the final solution $x_3 = -2$, $x_2 = 4$, and $x_1 = 3$.
Example 8. (a) Determine the LU factorization of the matrix A in the linear system AX = B, where
$$
A = \begin{bmatrix}
1 & 1 & 0 & 3\\
2 & 1 & -1 & 1\\
3 & -1 & -1 & 2\\
-1 & 2 & 3 & -1
\end{bmatrix}
\quad \text{and} \quad
B = \begin{bmatrix} 1\\ 1\\ -3\\ 4 \end{bmatrix}
\tag{3.1}
$$
(b) Then use the factorization to solve the system
$$
\begin{aligned}
x_1 + x_2 + 3x_4 &= 8\\
2x_1 + x_2 - x_3 + x_4 &= 7\\
3x_1 - x_2 - x_3 + 2x_4 &= 14\\
-x_1 + 2x_2 + 3x_3 - x_4 &= -7.
\end{aligned}
$$
Sol. (a) We take the coefficient matrix and apply Gaussian elimination.
The multipliers are $m_{21} = 2$, $m_{31} = 3$, and $m_{41} = -1$, and the sequence of operations is
$E_2 \to E_2 - 2E_1$, $E_3 \to E_3 - 3E_1$, $E_4 \to E_4 - (-1)E_1$:
$$
\sim \begin{bmatrix}
1 & 1 & 0 & 3\\
0 & -1 & -1 & -5\\
0 & -4 & -1 & -7\\
0 & 3 & 3 & 2
\end{bmatrix}
$$
The multipliers are $m_{32} = 4$ and $m_{42} = -3$, with $E_3 \to E_3 - 4E_2$ and $E_4 \to E_4 - (-3)E_2$:
$$
\sim \begin{bmatrix}
1 & 1 & 0 & 3\\
0 & -1 & -1 & -5\\
0 & 0 & 3 & 13\\
0 & 0 & 0 & -13
\end{bmatrix}
$$
The multipliers $m_{ij}$ and the upper triangular matrix produce the following factorization:
$$
A = \begin{bmatrix}
1 & 1 & 0 & 3\\
2 & 1 & -1 & 1\\
3 & -1 & -1 & 2\\
-1 & 2 & 3 & -1
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 0 & 0\\
2 & 1 & 0 & 0\\
3 & 4 & 1 & 0\\
-1 & -3 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 0 & 3\\
0 & -1 & -1 & -5\\
0 & 0 & 3 & 13\\
0 & 0 & 0 & -13
\end{bmatrix} = LU.
$$
(b) We first introduce the substitution $Y = UX$. Then $B = L(UX) = LY$; that is,
$$
LY = \begin{bmatrix}
1 & 0 & 0 & 0\\
2 & 1 & 0 & 0\\
3 & 4 & 1 & 0\\
-1 & -3 & 0 & 1
\end{bmatrix}
\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix}
= \begin{bmatrix} 8\\ 7\\ 14\\ -7 \end{bmatrix}
$$
This system is solved for $Y$ by a simple forward-substitution process:
$$y_1 = 8, \quad y_2 = -9, \quad y_3 = 26, \quad y_4 = -26.$$
We then solve $UX = Y$ for $X$, the solution of the original system; that is,
$$
\begin{bmatrix}
1 & 1 & 0 & 3\\
0 & -1 & -1 & -5\\
0 & 0 & 3 & 13\\
0 & 0 & 0 & -13
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
= \begin{bmatrix} 8\\ -9\\ 26\\ -26 \end{bmatrix}
$$
Using backward substitution we obtain $x_4 = 2$, $x_3 = 0$, $x_2 = -1$, $x_1 = 3$.

Exercises
(1) Use Gaussian elimination with backward substitution and two-digit rounding arithmetic to
solve the following linear systems. Do not reorder the equations. (The exact solution to each
system is x1 = −1, x2 = 1, x3 = 3.)
a.
$$
\begin{aligned}
-x_1 + 4x_2 + x_3 &= 8\\
\tfrac{5}{3}x_1 + \tfrac{2}{3}x_2 + \tfrac{2}{3}x_3 &= 1\\
2x_1 + x_2 + 4x_3 &= 11.
\end{aligned}
$$
b.
$$
\begin{aligned}
4x_1 + 2x_2 - x_3 &= -5\\
\tfrac{1}{9}x_1 + \tfrac{1}{9}x_2 - \tfrac{1}{3}x_3 &= -1\\
x_1 + 4x_2 + 2x_3 &= 9.
\end{aligned}
$$
(2) Using four-digit arithmetic, solve the following system of equations by Gaussian elimination
with and without partial pivoting:
0.729x1 + 0.81x2 + 0.9x3 = 0.6867
x1 + x2 + x3 = 0.8338
1.331x1 + 1.21x2 + 1.1x3 = 1.000.
This system has the exact solution, rounded to four places, x1 = 0.2245, x2 = 0.2814, x3 = 0.3279.
Compare your answers!
(3) Use the Gaussian elimination algorithm to solve the following linear systems, if possible, and
determine whether row interchanges are necessary:
a.
x1 − x2 + 3x3 = 2
3x1 − 3x2 + x3 = −1
x1 + x2 = 3.
b.
2x1 − x2 + x3 − x4 = 6
x2 − x3 + x4 = 5
x4 = 5
x3 − x4 = 3.
(4) Use Gaussian elimination and three-digit chopping arithmetic to solve the following linear
system, and compare the approximations to the actual solution [0, 10, 1/7]T .
3.03x1 − 12.1x2 + 14x3 = −119
−3.03x1 + 12.1x2 − 7x3 = 120
6.11x1 − 14.2x2 + 21x3 = −139.
(5) Repeat the above exercise using Gaussian elimination with partial and scaled partial pivoting
and three-digit rounding arithmetic.
(6) Given the linear system
x1 − x2 + αx3 = −2
−x1 + 2x2 − αx3 = 3
αx1 + x2 + x3 = 2.
a. Find value(s) of α for which the system has no solutions.
b. Find value(s) of α for which the system has an infinite number of solutions.
c. Assuming a unique solution exists for a given α, find the solution.
(7) Suppose that
2x1 + x2 + 3x3 = 1
4x1 + 6x2 + 8x3 = 5
6x1 + αx2 + 10x3 = 5,
with |α| < 10. For which of the following values of α will there be no row interchange required
when solving this system using scaled partial pivoting?
a. α = 6 b. α = 9 c. α = −3.
(8) Modify the LU Factorization Algorithm so that it can be used to solve a linear system, and
then solve the following linear systems.
2x1 − x2 + x3 = −1
3x1 + 3x2 + 9x3 = 0
3x1 + 3x2 + 5x3 = 4.
(9) Show that the LU Factorization Algorithm requires
a. $\frac{1}{3}n^3 - \frac{1}{3}n$ multiplications/divisions and $\frac{1}{3}n^3 - \frac{1}{2}n^2 + \frac{1}{6}n$ additions/subtractions.
b. Show that solving $Ly = b$, where $L$ is a lower-triangular matrix with $l_{ii} = 1$ for all $i$, requires
$\frac{1}{2}n^2 - \frac{1}{2}n$ multiplications/divisions and $\frac{1}{2}n^2 - \frac{1}{2}n$ additions/subtractions.
c. Show that solving Ax = b by first factoring A into A = LU and then solving Ly = b and
U x = y requires the same number of operations as the Gaussian Elimination Algorithm.


Appendix A. Algorithms
Algorithm (Gauss Elimination)
(1) Start.
(2) Declare the variables and read the order of the matrix n.
(3) Input the coefficients of the linear equations together with the right-hand side:
    Do for i = 1 to n
      Do for j = 1 to n + 1
        Read a[i][j]
      End for j
    End for i
(4) Do for k = 1 to n − 1
      Do for i = k + 1 to n
        Do for j = k + 1 to n + 1
          a[i][j] = a[i][j] − a[i][k]/a[k][k] ∗ a[k][j]
        End for j
      End for i
    End for k
(5) Compute x[n] = a[n][n + 1]/a[n][n].
(6) Do for i = n − 1 down to 1
      sum = 0
      Do for j = i + 1 to n
        sum = sum + a[i][j] ∗ x[j]
      End for j
      x[i] = (a[i][n + 1] − sum)/a[i][i]
    End for i
(7) Display the result x[i].
(8) Stop.
