
4. Gaussian Elimination

Naive Gaussian Elimination: Solving Ax = b by Gaussian elimination consists of two steps: (1) forward elimination, in which we use elementary row operations to obtain an upper triangular system, and (2) back substitution, in which we solve the triangular system from the bottom up. The elementary row operations are: interchange two rows of the matrix, i.e., Ei ↔ Ej, with i ≠ j; multiply a row by a nonzero constant, i.e., (cEi) → Ei; add a multiple of one row to another row, i.e., (Ei + cEj) → Ei, with i ≠ j. These elementary row operations do not change the set of solutions of the system.

Example 4.2: Solve the following system of linear equations using the (naive) Gaussian elimination:

2x1 + x2 + x3 = 5
4x1 - 6x2 = -2
-2x1 + 7x2 + 2x3 = 9.

Forward elimination, first stage: subtract multiples of the first equation from the others, so as to eliminate x1 from the last two equations, and get

2x1 + x2 + x3 = 5
-8x2 - 2x3 = -12
8x2 + 3x3 = 14.

It is seen that the elimination constantly divides the pivot into the number beneath it; therefore, by definition, the pivot cannot be zero. Second stage: ignore the first equation; dividing the second pivot (-8) into 8 gives the multiplier -1. Performing (E3 - (-1)E2) → E3, we get

2x1 + x2 + x3 = 5
-8x2 - 2x3 = -12
x3 = 2

and the forward elimination is now complete.

Back substitution: x3 = 2; -8x2 - 2(2) = -12 gives x2 = 1; 2x1 + 1 + 2 = 5 gives x1 = 1. Therefore we obtain the solution vector x = (1, 1, 2)^T.

Comments: (1) Assuming no row exchanges, det(A) = product of the pivots, e.g., det(A) = (2)(-8)(1) = -16. (2) One can work with the augmented matrix (A, b):

[  2   1   1 |   5 ]     [ 2   1   1 |   5 ]     [ 2   1   1 |   5 ]
[  4  -6   0 |  -2 ]  →  [ 0  -8  -2 | -12 ]  →  [ 0  -8  -2 | -12 ].
[ -2   7   2 |   9 ]     [ 0   8   3 |  14 ]     [ 0   0   1 |   2 ]

This arrangement guarantees that operations on the left side are also performed on the right side.

Breakdown of Elimination: [Q] Under what circumstances can the process break down? Singular case: Gaussian elimination must go wrong! Nonsingular case: the algorithm must be repaired or modified to produce the right result. In Example 4.2, if the first coefficient were zero, eliminating x1 from the other equations would be impossible. Furthermore, zeros can appear in pivot positions at intermediate stages. In general, we simply don't know whether a zero will appear until we try. Solution to the problem: exchange rows (equations). A nonsingular example:

x1 + x2 + x3 = b1
2x1 + 2x2 + 5x3 = b2
4x1 + 6x2 + 8x3 = b3.

Forward elimination gives

x1 + x2 + x3 = b1
3x3 = b2 - 2b1
2x2 + 4x3 = b3 - 4b1.

Performing E2 ↔ E3, we get

x1 + x2 + x3 = b1
2x2 + 4x3 = b3 - 4b1
3x3 = b2 - 2b1

and the elimination can proceed.
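The elimination and back-substitution procedure of Example 4.2 can be sketched in Python (a minimal sketch with no pivoting; the function name is illustrative):

```python
# Naive Gaussian elimination (no pivoting): forward elimination
# followed by back substitution, applied to Example 4.2.

def naive_gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: zero out the entries below each pivot.
    for i in range(n - 1):
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]          # multiplier (pivot must be nonzero)
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    # Back substitution: solve the triangular system from the bottom up.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x, [A[i][i] for i in range(n)]  # solution and pivots

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
x, pivots = naive_gauss_solve(A, b)
print(x)       # [1.0, 1.0, 2.0]
print(pivots)  # [2.0, -8.0, 1.0], so det(A) = 2 * (-8) * 1 = -16
```

Note that the returned pivots reproduce the determinant comment above: their product is det(A) when no rows are exchanged.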

A singular example:

x1 + x2 + x3 = b1
2x1 + 2x2 + 5x3 = b2
4x1 + 4x2 + 8x3 = b3.

Forward elimination gives

x1 + x2 + x3 = b1
3x3 = b2 - 2b1
4x3 = b3 - 4b1.

In this example, no exchange of rows can avoid the zero in the second pivot position. If 3x3 = 6 and 4x3 = 7, we get no solution; if 3x3 = 6 and 4x3 = 8, we get an infinite number of solutions.

Partial pivoting (maximal column pivoting): Suppose at the second stage we have the following formation:

[ *   *   *  ...  * ]
[ 0  p2   *  ...  * ]
[ 0  p3   *  ...  * ]
[ ...               ]
[ 0  pm   *  ...  * ]

In Gaussian elimination with partial pivoting, we pick the row containing the pi with the largest absolute value, i.e., if |pi| = max |pj| over j = 2, 3, ..., m, we swap the row of p2 with the row of pi, i.e., E2 ↔ Ei, and use pi as our pivot. If both rows and columns are searched for the largest element and then switched, the procedure is called complete pivoting. Partial pivoting achieves two things: (1) it avoids a zero in the pivot position; (2) it avoids near-zero pivots, which improves the stability of Gaussian elimination.

Example 4.3: Solve the following 2 × 2 system with and without partial pivoting (assume 4-digit rounding arithmetic):

0.00300x1 + 59.14x2 = 59.17
5.291x1 - 6.130x2 = 46.78.

The exact solution is (x1, x2)^T = (10.00, 1.00)^T. Without pivoting: the first pivot is 0.00300 and the associated multiplier is

m = fl(5.291/0.00300) = fl(1763.6666...) = 1764.


Performing (E2 mE1 ) E2 with appropriate rounding, we get 0.00300x1 + 59.14x2 = 59.17 104, 300x2 = 104, 400 Back substitution gives x2 = 1.001 and x1 = With partial pivoting: 5.291x1 6.130x2 = 46.78 0.00300x1 + 59.14x2 = 59.17. The rst pivot is now 5.291 and the associated multiplier is m= Performing (E2 mE1 ) E2 , we get 5.291x1 6.130x2 = 46.78 59.14x2 = 59.14 and we get x1 = 10.00 and x2 = 1.000, which are the correct solutions. Cost of Gaussian elimination (no pivoting): For any numerical algorithm, both amount of time required to complete the calculation and the subsequent round-off error depend on the number of oating-point arithmetic operations. It is therefore of interest to estimate the number of multiplications/divisions and additions/subtractions for a numerical algorithm. In general, time required for computer to do multiplication and division is about the same, and is considerably greater than addition/subtraction. So we count these two operations separately. Forward Elimination: at the ithe stage, the number of multiplications/divisions required is (n i) + (n i + 1)(n i) = (n i)(n i + 2)
divisions multiplications

59.17 (59.14)(1.001) = 10.00(incorrect!). 0.003

0.00300 = 0.0005670. 5.291

and the number of additions/subtractions required is (n i + 1)(n i). 4
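The rounding behavior in Example 4.3 can be reproduced by simulating 4-digit arithmetic in code (a minimal sketch; fl() rounds every intermediate result to four significant decimal digits, and the function names are illustrative):

```python
# Example 4.3 under simulated 4-digit rounding arithmetic,
# with and without partial pivoting.

def fl(x):
    # Round x to 4 significant decimal digits.
    if x == 0.0:
        return 0.0
    return float(f"{x:.3e}")

def solve_2x2(a11, a12, b1, a21, a22, b2, pivot=False):
    # Optionally swap rows so the larger first-column entry is the pivot.
    if pivot and abs(a21) > abs(a11):
        (a11, a12, b1), (a21, a22, b2) = (a21, a22, b2), (a11, a12, b1)
    m = fl(a21 / a11)                      # multiplier
    a22p = fl(a22 - fl(m * a12))           # eliminated second equation
    b2p = fl(b2 - fl(m * b1))
    x2 = fl(b2p / a22p)                    # back substitution
    x1 = fl(fl(b1 - fl(a12 * x2)) / a11)
    return x1, x2

eqs = (0.00300, 59.14, 59.17, 5.291, -6.130, 46.78)
print(solve_2x2(*eqs))              # (-10.0, 1.001) -- x1 is wrong
print(solve_2x2(*eqs, pivot=True))  # (10.0, 1.0)    -- correct
```

The tiny pivot 0.00300 produces the huge multiplier 1764, which swamps the second equation; swapping rows first keeps the multiplier below 1 and the answer accurate.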

Using the summation formulas

Σ_{j=1}^{n} 1 = n,   Σ_{j=1}^{n} j = n(n+1)/2,   Σ_{j=1}^{n} j^2 = n(n+1)(2n+1)/6,

one can show that

Σ_{i=1}^{n-1} (n-i)(n-i+2) = Σ_{i=1}^{n-1} [ (n^2 + 2n) - (2n+2)i + i^2 ]
                           = (n^2 + 2n) Σ_{i=1}^{n-1} 1 - (2n+2) Σ_{i=1}^{n-1} i + Σ_{i=1}^{n-1} i^2
                           = (2n^3 + 3n^2 - 5n)/6,

which is the number of multiplications/divisions in the forward elimination, and

Σ_{i=1}^{n-1} (n-i+1)(n-i) = Σ_{i=1}^{n-1} [ (n^2 + n) - (2n+1)i + i^2 ] = (n^3 - n)/3,

which is the number of additions/subtractions in the forward elimination.

Back substitution: after forward elimination, the system has the triangular form

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a22 x2 + a23 x3 + ... + a2n xn = b2
                  a33 x3 + ... + a3n xn = b3
                                     ...
                          ann^(n-1) xn = bn^(n-1).

The back-substitution equations are

xn = bn^(n-1) / ann^(n-1)

and

xi = ( bi^(i-1) - Σ_{j=i+1}^{n} aij^(i-1) xj ) / aii^(i-1),   i = n-1, n-2, ..., 1.
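The back-substitution formula above translates directly into code (a minimal sketch; U holds the upper triangular coefficients and b the right-hand side after forward elimination):

```python
# Back substitution for an upper triangular system U x = b, implementing
# x_i = (b_i - sum_{j=i+1}^{n} U_ij x_j) / U_ii from the bottom row up.

def back_substitute(U, b):
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# The triangular system obtained in Example 4.2:
U = [[2.0, 1.0, 1.0], [0.0, -8.0, -2.0], [0.0, 0.0, 1.0]]
b = [5.0, -12.0, 2.0]
print(back_substitute(U, b))  # [1.0, 1.0, 2.0]
```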

The number of multiplications/divisions for the back substitution is

1 + 2 + 3 + ... + n = n(n+1)/2 = (n^2 + n)/2

and the number of additions/subtractions for the back substitution is

1 + 2 + 3 + ... + (n-1) = n(n-1)/2 = (n^2 - n)/2.

Therefore, the total number of multiplications/divisions for the Gaussian elimination is

(2n^3 + 3n^2 - 5n)/6  [forward elimination]  +  (n^2 + n)/2  [back substitution]  =  n^3/3 + n^2 - n/3  ~  O(n^3/3)

and the total number of additions/subtractions for the Gaussian elimination is

(n^3 - n)/3  [forward elimination]  +  (n^2 - n)/2  [back substitution]  =  n^3/3 + n^2/2 - 5n/6  ~  O(n^3/3).
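These totals can be checked empirically by instrumenting an elimination routine to count operations as it goes (a minimal sketch; the counts follow the convention above, where the right-hand side is processed together with the matrix):

```python
# Count multiplications/divisions and additions/subtractions in
# Gaussian elimination (forward elimination + back substitution)
# and compare with the closed-form totals derived above.

def gauss_op_counts(n):
    muldiv = addsub = 0
    # Forward elimination on an n x n system with a right-hand side.
    for i in range(1, n):                      # stages i = 1, ..., n-1
        muldiv += (n - i)                      # divisions forming multipliers
        muldiv += (n - i + 1) * (n - i)        # multiplications in row updates
        addsub += (n - i + 1) * (n - i)        # subtractions in row updates
    # Back substitution.
    for i in range(n, 0, -1):                  # rows i = n, ..., 1
        muldiv += (n - i) + 1                  # products plus the final division
        addsub += (n - i)                      # subtractions inside the sum
    return muldiv, addsub

for n in (3, 10, 50):
    muldiv, addsub = gauss_op_counts(n)
    assert muldiv == (2*n**3 + 3*n**2 - 5*n) // 6 + (n**2 + n) // 2
    assert addsub == (n**3 - n) // 3 + (n**2 - n) // 2
print("operation counts match the formulas")
```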

Example 4.4 (naive scaling): If we multiply E1 in Example 4.3 by a factor of 10^4, we get

30.00x1 + 591,400x2 = 591,700
5.291x1 - 6.130x2 = 46.78

and 30.00 is now the first pivot, so partial pivoting would leave the rows in place. The multiplier is

m = fl(5.291/30.00) = 0.1764

and (E2 - mE1) → E2 leads to

30.00x1 + 591,400x2 = 591,700
-104,300x2 = -104,400,

which has the same inaccurate solutions as in Example 4.3 (the without-pivoting case): x2 = 1.001 and x1 = -10.00.

Scaled-column pivoting: The scaled-column pivoting technique consists of two steps.

Step 1: Define for each row a scale factor si:

si = max_{j=1,2,...,n} |aij|,

i.e., the scale factor for the ith row is the element in the ith row with the largest absolute value.

Step 2: Take ak1 as the first pivot (switching rows if necessary), where k is chosen such that

|ak1|/sk = max_{j=1,2,...,n} |aj1|/sj.

The effect of scaling is to ensure that the chosen pivot has the largest relative magnitude. For Example 4.4, we have

s1 = max{|30.00|, |591,400|} = 591,400
s2 = max{|5.291|, |-6.130|} = 6.130.

Consequently,

|a11|/s1 = 30.00/591,400 = 0.5073 × 10^-4
|a21|/s2 = 5.291/6.130 = 0.8631.

Thus, according to the scaled-column pivoting strategy, a21 should be the first pivot, so E1 ↔ E2:

5.291x1 - 6.130x2 = 46.78
30.00x1 + 591,400x2 = 591,700,

which produces the correct results: x1 = 10.00 and x2 = 1.000.

Gauss-Jordan Elimination: After the forward elimination, we reach an upper triangular system. Instead of using back substitution, Gauss-Jordan elimination uses the last equation to eliminate xn from the top (n - 1) equations, then uses the next-to-last equation to eliminate xn-1 from the equations above it, and so on, until the coefficient matrix is diagonal.
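The Gauss-Jordan procedure can be sketched as follows (a minimal sketch of one common variant, which also normalizes each pivot row so the coefficient matrix is reduced all the way to the identity; the function name is illustrative):

```python
# Gauss-Jordan elimination: eliminate the pivot column in every other
# row, above as well as below, so no back substitution is needed.
# Applied to the system of Example 4.2.

def gauss_jordan_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for i in range(n):
        # Normalize the pivot row.
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        # Eliminate column i in every other row (above and below).
        for k in range(n):
            if k != i and A[k][i] != 0.0:
                m = A[k][i]
                A[k] = [A[k][j] - m * A[i][j] for j in range(n)]
                b[k] -= m * b[i]
    return b   # A is now the identity; b holds the solution

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
print(gauss_jordan_solve(A, b))  # [1.0, 1.0, 2.0]
```

The extra work relative to back substitution is visible in the inner loop: every pivot is eliminated from all n - 1 other rows, not just the rows below it.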

It can be shown that the Gauss-Jordan method requires

n^3/2 + n^2 - n/2 multiplications/divisions

and

n^3/2 - n/2 additions/subtractions.

Therefore, the Gauss-Jordan method requires more arithmetic operations (roughly n^3/2 versus n^3/3) than Gaussian elimination with back substitution.
