
Problem Solving using C#

MIKE: Massive Information & Knowledge Engineering
Updated @2016 by mikelab.net. Unauthorized reproduction is prohibited.

Numerical Linear Algebra
204111 Computer & Programming
Arnon Rungsawang
http://mike.cpe.ku.ac.th/204111
Department of Computer Engineering
Kasetsart University
Bangkok, Thailand.

Linear Equations

2x1 + x2 - 3x3 = -1
-x1 + 3x2 + 2x3 = 12
3x1 + x2 - 3x3 = 0

x1 + 2x2 + 3x3 + x4 = 0
-4x1 + 5x2 + 4x3 = 1
-5x1 + 3x2 + 3x3 = 0
x4 = -1

-0.04x1 + 0.04x2 + 0.12x3 = 3
0.56x1 - 1.56x2 + 0.32x3 = 1
-0.24x1 + 1.24x2 - 0.28x3 = 0


Linear Equations (2)

A set of N equations can be represented by

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
a21 x1 + a22 x2 + a23 x3 + ... + a2N xN = y2
...
aN1 x1 + aN2 x2 + aN3 x3 + ... + aNN xN = yN

where aij are coefficients, xi are unknowns, and yi are known terms called inhomogeneous terms.
For the usual form of a set of linear equations, the number of unknowns equals the number of equations.
When at least one of the inhomogeneous terms is not zero, the set is said to be inhomogeneous.


Gauss Elimination

Gauss elimination applies only to inhomogeneous sets of equations.
It consists of (a) forward elimination and (b) backward substitution.
The forward elimination proceeds as follows: the first equation times ai1/a11 is subtracted from every equation i with i >= 2:

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
(a21 - a11 a21/a11) x1 + (a22 - a12 a21/a11) x2 + ... + (a2N - a1N a21/a11) xN = y2 - y1 a21/a11
...
(aN1 - a11 aN1/a11) x1 + (aN2 - a12 aN1/a11) x2 + ... + (aNN - a1N aN1/a11) xN = yN - y1 aN1/a11

Gauss Elimination (forward elimination)

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
(a21 - a11 a21/a11) x1 + (a22 - a12 a21/a11) x2 + ... + (a2N - a1N a21/a11) xN = y2 - y1 a21/a11
...
(aN1 - a11 aN1/a11) x1 + (aN2 - a12 aN1/a11) x2 + ... + (aNN - a1N aN1/a11) xN = yN - y1 aN1/a11

With the abbreviations a'ij = aij - a1j (ai1/a11) and y'i = yi - y1 (ai1/a11), the set becomes:

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
a'22 x2 + a'23 x3 + ... + a'2N xN = y'2
...
a'N2 x2 + a'N3 x3 + ... + a'NN xN = y'N

Gauss Elimination (forward elimination)

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
a'22 x2 + a'23 x3 + ... + a'2N xN = y'2
...
a'N2 x2 + a'N3 x3 + ... + a'NN xN = y'N

The same elimination is now applied to the equations below the first one, using a''ij = a'ij - a'2j (a'i2/a'22), which removes x2 from equations 3 through N:

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
a'22 x2 + a'23 x3 + ... + a'2N xN = y'2
...
a''N3 x3 + ... + a''NN xN = y''N

Gauss Elimination (backward substitution)

When the forward elimination process is finished, the set of equations is in the following form:

a11 x1 + a12 x2 + a13 x3 + ... + a1N xN = y1
a'22 x2 + a'23 x3 + ... + a'2N xN = y'2
...
aNN^(N-1) xN = yN^(N-1)

The backward substitution procedure starts with the last equation. The solution for xN is obtained from the last equation, and the remaining unknowns follow in reverse order:

xN = yN^(N-1) / aNN^(N-1)
xN-1 = (yN-1^(N-2) - aN-1,N^(N-2) xN) / aN-1,N-1^(N-2)
...
x1 = (y1 - sum_{j=2..N} a1j xj) / a11

Gauss Elimination

Gauss elimination can be carried out by writing only the coefficients and the right-side terms in an array form:

a11  a12  a13  ...  a1,N-1  a1N  | y1
a21  a22  a23  ...  a2,N-1  a2N  | y2
...
aN1  aN2  aN3  ...  aN,N-1  aNN  | yN

Forward elimination reduces this array to upper triangular form:

a11  a12   a13   ...  a1,N-1   a1N        | y1
0    a'22  a'23  ...  a'2,N-1  a'2N       | y'2
...
0    0     0     ...  0        aNN^(N-1)  | yN^(N-1)

Example use of Gauss Elimination

Solve the following linear equations in the array form:

2x1 + x2 - 3x3 = -1
-x1 + 3x2 + 2x3 = 12
3x1 + x2 - 3x3 = 0

An array expression of the equations is:

 2    1    -3  |   -1
-1    3     2  |   12
 3    1    -3  |    0

Eliminating the first column below the diagonal gives:

 2    1    -3  |   -1
 0   7/2   1/2 | 23/2
 0  -1/2   3/2 |  3/2

Eliminating the second column below the diagonal gives:

 2    1    -3  |   -1
 0   7/2   1/2 | 23/2
 0    0   11/7 | 22/7

Finally, backward substitution gives x3 = 2, x2 = 3, and x1 = 1.
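The same procedure is easy to program. Below is a minimal C# sketch of Gauss elimination (forward elimination followed by backward substitution, without pivoting); the class and method names are only illustrative, and the arrays are 0-based, so equation i corresponds to row i-1.

using System;

class GaussDemo
{
    // a is an N x (N+1) augmented array: coefficients plus the right-side column.
    static double[] Solve(double[,] a)
    {
        int n = a.GetLength(0);

        // Forward elimination: zero out every element below the diagonal.
        for (int k = 0; k < n - 1; k++)
            for (int i = k + 1; i < n; i++)
            {
                double factor = a[i, k] / a[k, k];
                for (int j = k; j <= n; j++)
                    a[i, j] -= factor * a[k, j];
            }

        // Backward substitution, starting from the last equation.
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double sum = a[i, n];
            for (int j = i + 1; j < n; j++)
                sum -= a[i, j] * x[j];
            x[i] = sum / a[i, i];
        }
        return x;
    }

    static void Main()
    {
        // The example system: 2x1 + x2 - 3x3 = -1, -x1 + 3x2 + 2x3 = 12, 3x1 + x2 - 3x3 = 0.
        double[,] a =
        {
            {  2, 1, -3, -1 },
            { -1, 3,  2, 12 },
            {  3, 1, -3,  0 }
        };
        double[] x = Solve(a);
        Console.WriteLine($"x1 = {x[0]:F4}, x2 = {x[1]:F4}, x3 = {x[2]:F4}");  // expects 1, 3, 2
    }
}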


Gauss-Jordan Elimination

Gauss-Jordan elimination is a variant of Gauss elimination.
It shares the same forward elimination process with Gauss elimination, but differs in the backward process.
The backward process of Gauss-Jordan elimination is called backward elimination.
We start from the array produced by the last forward elimination step:

a11  a12   a13   ...  a1,N-1          a1N           | y1
     a'22  a'23  ...  a'2,N-1         a'2N          | y'2
           a''33 ...  a''3,N-1        a''3N         | y''3
                 ...
                      aN-1,N-1^(N-2)  aN-1,N^(N-2)  | yN-1^(N-2)
                                      aNN^(N-1)     | yN^(N-1)

Gauss-Jordan Elimination (backward elimination)

Backward elimination removes the above-diagonal terms column by column, starting from the last column. The last row is first divided by aNN^(N-1), giving

yN = yN^(N-1) / aNN^(N-1)   (which is xN)

Then aiN times the last row is subtracted from every row i above it, so the last column becomes zero except for the final 1, and every right-hand side is updated accordingly:

a11  a12   a13   ...  a1,N-1          0 | y1
     a'22  a'23  ...  a'2,N-1         0 | y2
           a''33 ...  a''3,N-1        0 | y3
                 ...
                      aN-1,N-1^(N-2)  0 | yN-1
                                      1 | yN

The same two steps are repeated for column N-1, then N-2, and so on down to column 2.

Gauss-Jordan Elimination (backward elimination)

After all columns have been processed, the array is reduced to the identity matrix and the right-hand column contains the solution:

1  0  0  ...  0  0 | y1^(N-1)
0  1  0  ...  0  0 | y2^(N-2)
0  0  1  ...  0  0 | y3^(N-3)
...
0  0  0  ...  1  0 | y'N-1
0  0  0  ...  0  1 | yN

Final solution: xi = yi^(N-i)

Example use of Gauss-Jordan Elimination

Solve the following equations by hand:

-0.04 x1 + 0.04 x2 + 0.12 x3 = 3
 0.56 x1 - 1.56 x2 + 0.32 x3 = 1
-0.24 x1 + 1.24 x2 - 0.28 x3 = 0

The forward elimination proceeds as:

row 1   -0.04   0.04   0.12 |   3
row 2    0.56  -1.56   0.32 |   1
row 3   -0.24   1.24  -0.28 |   0

row 1 times 0.56/(-0.04):        0.56  -0.56  -1.68 | -42    (A)
row 1 times (-0.24)/(-0.04):    -0.24   0.24   0.72 |  18    (B)

Example use of Gauss-Jordan Elimination (2)

Subtracting (A) from row 2 and (B) from row 3 yields:

row 1   -0.04   0.04   0.12 |   3
row 2    0     -1      2    |  43
row 3    0      1     -1    | -18

The second coefficient of the third row is eliminated by subtracting row 2 times -1 from row 3:

row 1   -0.04   0.04   0.12 |  3
row 2    0     -1      2    | 43
row 3    0      0      1    | 25

The backward substitution is straightforward:

x3 = 25 / 1 = 25
x2 = [43 - 2(25)] / (-1) = 7
x1 = [3 - 0.12(25) - 0.04(7)] / (-0.04) = 7

Example use of Gauss-Jordan Elimination (3)

The backward elimination is as follows. First x3 is eliminated from rows 1 and 2 by subtracting 0.12 and 2 times row 3, respectively:

row 1   -0.04   0.04   0 |  0
row 2    0     -1      0 | -7
row 3    0      0      1 | 25

Then each row is divided by its diagonal element and x2 is eliminated from row 1:

row 1    1   0   0 |  7
row 2    0   1   0 |  7
row 3    0   0   1 | 25

so the solution x1 = 7, x2 = 7, x3 = 25 can be read off directly from the last column.
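A C# sketch of Gauss-Jordan elimination might look as follows (illustrative names, 0-based arrays, no pivoting). After the backward elimination, only a division by each diagonal element is needed to read off the solution.

using System;

class GaussJordanDemo
{
    // a is an N x (N+1) augmented array.
    static double[] Solve(double[,] a)
    {
        int n = a.GetLength(0);

        // Forward elimination (same as in Gauss elimination).
        for (int k = 0; k < n - 1; k++)
            for (int i = k + 1; i < n; i++)
            {
                double f = a[i, k] / a[k, k];
                for (int j = k; j <= n; j++)
                    a[i, j] -= f * a[k, j];
            }

        // Backward elimination: clear the entries above each pivot, from the last column down.
        for (int k = n - 1; k > 0; k--)
            for (int i = k - 1; i >= 0; i--)
            {
                double f = a[i, k] / a[k, k];
                for (int j = k; j <= n; j++)
                    a[i, j] -= f * a[k, j];
            }

        // Divide each row by its diagonal element; the last column is the solution.
        double[] x = new double[n];
        for (int i = 0; i < n; i++)
            x[i] = a[i, n] / a[i, i];
        return x;
    }

    static void Main()
    {
        // The worked example above.
        double[,] a =
        {
            { -0.04,  0.04,  0.12, 3 },
            {  0.56, -1.56,  0.32, 1 },
            { -0.24,  1.24, -0.28, 0 }
        };
        double[] x = Solve(a);
        Console.WriteLine($"x1 = {x[0]:F4}, x2 = {x[1]:F4}, x3 = {x[2]:F4}");  // expects 7, 7, 25
    }
}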

Pivoting

Solve the following array of equations:

0  10   1 | 2
1   3  -1 | 6
2   4   1 | 5

Trick: to make these kinds of equations solvable, we exchange the order of the rows (this is called pivoting) so that the diagonal elements are non-zero (to make the computation possible) and are the largest values available (to increase computational accuracy).

Exchanging rows 1 and 3:

2   4   1 | 5
1   3  -1 | 6
0  10   1 | 2

Eliminating the first column:

2   4    1   |   5
0   1  -3/2  | 7/2
0  10    1   |   2

Exchanging rows 2 and 3 (pivoting again) and eliminating the second column:

2   4    1      |     5
0  10    1      |     2
0   0  -16/10   | 33/10

Backward substitution then gives:

x3 = (33/10) / (-16/10) = -2.0625
x2 = (2 - x3) / 10 = 0.4062
x1 = (5 - 4 x2 - x3) / 2 = 2.7187
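One way to code this in C# is sketched below (illustrative names, 0-based arrays, using the signs as reconstructed in the array above): before eliminating each column, the row with the largest absolute value in that column is swapped into the pivot position.

using System;

class PivotingDemo
{
    // Gauss elimination with partial pivoting on an N x (N+1) augmented array.
    static double[] Solve(double[,] a)
    {
        int n = a.GetLength(0);
        for (int k = 0; k < n - 1; k++)
        {
            // Find the row with the largest absolute value in column k.
            int pivot = k;
            for (int i = k + 1; i < n; i++)
                if (Math.Abs(a[i, k]) > Math.Abs(a[pivot, k]))
                    pivot = i;

            // Exchange rows k and pivot (the pivoting step).
            if (pivot != k)
                for (int j = 0; j <= n; j++)
                {
                    double tmp = a[k, j];
                    a[k, j] = a[pivot, j];
                    a[pivot, j] = tmp;
                }

            // Forward elimination for column k.
            for (int i = k + 1; i < n; i++)
            {
                double f = a[i, k] / a[k, k];
                for (int j = k; j <= n; j++)
                    a[i, j] -= f * a[k, j];
            }
        }

        // Backward substitution.
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = a[i, n];
            for (int j = i + 1; j < n; j++)
                s -= a[i, j] * x[j];
            x[i] = s / a[i, i];
        }
        return x;
    }

    static void Main()
    {
        // The slide's array: the zero in the a11 position forces a row exchange.
        double[,] a =
        {
            { 0, 10,  1, 2 },
            { 1,  3, -1, 6 },
            { 2,  4,  1, 5 }
        };
        double[] x = Solve(a);
        Console.WriteLine($"x1 = {x[0]:F4}, x2 = {x[1]:F4}, x3 = {x[2]:F4}");
        // expects roughly 2.7187, 0.4062, -2.0625
    }
}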

Your exercises (as homework)

Write a C# program to solve the following array of equations:

1.334E-4   4.123E-1   7.912E-2   1.544E-3  | 711.5698662
1.777      2.367E-5   2.07E-1    9.035E-1  | 67.87297633
9.188      0          1.015E-1   1.988E-4  | 0.9618012
1.002E-2   1.442E-4   7.014E-2   5.321     | 13824.121

(a) Using only single precision (float), solve the equations without pivoting and then with pivoting.
(b) Repeat by using double precision (double).

Unsolvable Problems

A set of linear equations is not always numerically solvable. The following three sets of equations are simple but important examples:

(a)  x - y = 1        (b)  x - y = 1        (c)  x - y = 1
     2x - 2y = 2           x - y = 0             x - 2y = 2
                                                 2x - y = 0

Set (a) has an infinite number of solutions; sets (b) and (c) have no solutions.

Unsolvable Problems (2)

(a)  x - y = 1
     2x - 2y = 2

Infinite number of solutions.

In the set (a), the second equation is 2 times the first equation, so they are mathematically identical. Any point (x, y) satisfying one equation also solves the other. Therefore, the number of solutions is infinite; in other words, there is no unique solution. If one equation is a multiple of another, or can be obtained by adding or subtracting other equations, that equation is said to be linearly dependent on the others.

If none of the equations is linearly dependent, all the equations are said to be linearly independent.

Unsolvable Problems (3)

(b)  x - y = 1
     x - y = 0

No solutions.

In the set (b), the two equations are parallel lines that never intersect, so there is no solution. Such a system is called an inconsistent system. A set of equations is inconsistent if the left side of at least one equation can be completely eliminated by adding or subtracting other equations, while the right side remains nonzero.

Unsolvable Problems (4)

(c)  x - y = 1
     x - 2y = 2
     2x - y = 0

No solutions.

In the set (c), there are three independent equations for two unknowns, but these three equations can never be simultaneously satisfied. This case cannot happen when the number of equations equals the number of unknowns.

Inversion of a Matrix

Consider a linear equation in matrix notation:

Ax = y

where A is a square matrix.
Pre-multiplication by a square matrix G yields:

GAx = Gy

If G is chosen to be the inverse of A, namely A^-1, the above equation reduces to:

x = A^-1 y

which is the solution. In other words, Gauss-Jordan elimination is equivalent to pre-multiplying the equation by G = A^-1.
Thus, if we apply the same operations performed in Gauss-Jordan elimination to the identity matrix, the identity matrix will be transformed into A^-1:

GI = A^-1

Inversion of a Matrix (2)

To compute A^-1, we write A and I in an augmented array form:

a11  a12  a13 | 1  0  0
a21  a22  a23 | 0  1  0
a31  a32  a33 | 0  0  1

Then we follow Gauss-Jordan elimination in exactly the same way as in solving a linear set of equations. When the left half of the augmented matrix is reduced to a unit matrix, the right half becomes A^-1.

Example of matrix inversion

Calculate the inverse of the matrix:

    [  2   1  -3 ]
A = [ -1   3   2 ]
    [  3   1  -3 ]

We first write A and I in one array:

 2   1  -3 | 1  0  0
-1   3   2 | 0  1  0
 3   1  -3 | 0  0  1

Example of matrix inversion (2)

Forward elimination proceeds as follows. The first row times -1/2 is subtracted from the second row, and the first row times 3/2 is subtracted from the third row:

 2   1     -3   |  1     0   0
 0   3.5    0.5 |  0.5   1   0
 0  -0.5    1.5 | -1.5   0   1

Now the second row times -0.5/3.5 is subtracted from the third row:

 2   1     -3      |  1        0         0
 0   3.5    0.5    |  0.5      1         0
 0   0      1.5714 | -1.4285   0.14285   1

Example of matrix inversion (3)

Now the backward elimination proceeds as follows. The last row is divided by 1.5714:

 2   1     -3   |  1         0          0
 0   3.5    0.5 |  0.5       1          0
 0   0      1   | -0.90909   0.090909   0.63636

Next, 0.5 times the last row is subtracted from the second row, 3 times the last row is added to the first row, and the second row is then divided by 3.5:

 2   1   0 | -1.72727   0.27272   1.90908
 0   1   0 |  0.27272   0.27272  -0.0909
 0   0   1 | -0.90909   0.09090   0.63636

Example of matrix inversion (4)

Finally, the second row is subtracted from the first row, and the first row is divided by 2:

 1   0   0 | -1         0         1
 0   1   0 |  0.27272   0.27272  -0.0909
 0   0   1 | -0.90909   0.09090   0.63636

The last three columns of the foregoing augmented array constitute the inverse of the matrix A:

    [  2   1  -3 ]           [ -1         0         1       ]
A = [ -1   3   2 ]    A^-1 = [  0.27272   0.27272  -0.0909  ]
    [  3   1  -3 ]           [ -0.90909   0.0909    0.63636 ]
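A C# sketch of this inversion procedure is shown below (illustrative names, 0-based arrays). This compact variant normalizes each pivot row and clears the whole column above and below it in one pass, which produces the same result as the separate forward and backward processes shown above.

using System;

class InverseDemo
{
    // Invert A by Gauss-Jordan elimination on the augmented array [A | I].
    // No pivoting; assumes the diagonal never becomes zero (true for this example).
    static double[,] Invert(double[,] a)
    {
        int n = a.GetLength(0);
        double[,] aug = new double[n, 2 * n];
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++) aug[i, j] = a[i, j];
            aug[i, n + i] = 1.0;                                  // identity on the right half
        }

        for (int k = 0; k < n; k++)
        {
            double pivot = aug[k, k];
            for (int j = 0; j < 2 * n; j++) aug[k, j] /= pivot;   // normalize the pivot row
            for (int i = 0; i < n; i++)
            {
                if (i == k) continue;
                double f = aug[i, k];
                for (int j = 0; j < 2 * n; j++) aug[i, j] -= f * aug[k, j];
            }
        }

        double[,] inv = new double[n, n];                          // right half is now A^-1
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                inv[i, j] = aug[i, n + j];
        return inv;
    }

    static void Main()
    {
        double[,] a = { { 2, 1, -3 }, { -1, 3, 2 }, { 3, 1, -3 } };
        double[,] inv = Invert(a);
        for (int i = 0; i < 3; i++)
            Console.WriteLine($"{inv[i, 0],10:F5} {inv[i, 1],10:F5} {inv[i, 2],10:F5}");
        // expects roughly: -1 0 1 / 0.27273 0.27273 -0.09091 / -0.90909 0.09091 0.63636
    }
}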

LU Decomposition

The LU decomposition scheme is a transformation of a matrix A into a product of two matrices:

A = LU

where L is a lower triangular matrix and U is an upper triangular matrix.
The LU decomposition of a 3x3 matrix is illustrated as:

[ a11  a12  a13 ]   [ 1    0    0 ] [ u11  u12  u13 ]
[ a21  a22  a23 ] = [ l21  1    0 ] [ 0    u22  u23 ]
[ a31  a32  a33 ]   [ l31  l32  1 ] [ 0    0    u33 ]

Note that all the diagonal elements of L are unity.

LU Decomposition (2)

[ a11  a12  a13 ]   [ 1    0    0 ] [ u11  u12  u13 ]
[ a21  a22  a23 ] = [ l21  1    0 ] [ 0    u22  u23 ]
[ a31  a32  a33 ]   [ l31  l32  1 ] [ 0    0    u33 ]

To evaluate uij and lij, we first multiply the first row of L by each column of U and compare the result to the first row of A; the first row of U turns out to be identical to that of A:

u1j = a1j,  j = 1 to 3

LU Decomposition (3)

Multiplying the second and the third row of L by the first column of U, respectively, and comparing to the left side yields:

a21 = l21 u11,   a31 = l31 u11

or equivalently:

l21 = a21 / u11,   l31 = a31 / u11

LU Decomposition (4)

Multiplying the second row of L by the second and the third column of U and comparing to the left side yields:

a22 = l21 u12 + u22,   a23 = l21 u13 + u23

or equivalently:

u22 = a22 - l21 u12,   u23 = a23 - l21 u13

LU Decomposition (5)

Multiplying the third row of L by the second column of U, we obtain:

a32 = l31 u12 + l32 u22

or equivalently:

l32 = (a32 - l31 u12) / u22

LU Decomposition (6)

Finally, multiplying the last row of L by the last column of U:

a33 = l31 u13 + l32 u23 + u33

or equivalently:

u33 = a33 - l31 u13 - l32 u23

Example of LU computation

Decompose the following matrix into L and U:

    [  2   1  -3 ]
A = [ -1   3   2 ]
    [  3   1  -3 ]

Following the procedure discussed on the previous slides:

u11 = 2, u12 = 1, u13 = -3
l21 = -0.5, l31 = 1.5
u22 = 3 - (-0.5)(1) = 3.5
u23 = 2 - (-0.5)(-3) = 0.5
l32 = [1 - (1.5)(1)] / 3.5 = -0.142857
u33 = -3 - (1.5)(-3) - (-0.142857)(0.5) = 1.5714

Then

    [  1     0       0 ]        [ 2   1    -3     ]
L = [ -0.5   1       0 ],   U = [ 0   3.5   0.5   ]
    [  1.5  -0.1428  1 ]        [ 0   0     1.5714]

LU decomposition of AN

The first row of U, u1j for j = 1 to N, is obtained by

u1j = a1j,  j = 1 to N

The first column of L, li1 for i = 2 to N, is obtained by

li1 = ai1 / u11,  i = 2 to N

The second row of U is obtained by

u2j = a2j - l21 u1j,  j = 2 to N

The second column of L is obtained by

li2 = (ai2 - li1 u12) / u22,  i = 3 to N

LU decomposition of AN (2)

The nth row of U is obtained by

unj = anj - sum_{k=1..n-1} lnk ukj,  j = n to N

The nth column of L is obtained by

lin = (ain - sum_{k=1..n-1} lik ukn) / unn,  i = n+1 to N

The diagonal elements of L are not calculated, since they are all unity.
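These formulas translate directly into code. A C# sketch of the factorization follows (illustrative names; L and U are kept in separate 0-based arrays here rather than packed into one, for clarity):

using System;

class LUDemo
{
    // Doolittle-style LU decomposition following the row/column formulas above.
    static void Decompose(double[,] a, out double[,] l, out double[,] u)
    {
        int n = a.GetLength(0);
        l = new double[n, n];
        u = new double[n, n];

        for (int k = 0; k < n; k++)
        {
            l[k, k] = 1.0;                                  // diagonal of L is unity

            // k-th row of U: u_kj = a_kj - sum_{m<k} l_km u_mj
            for (int j = k; j < n; j++)
            {
                double s = a[k, j];
                for (int m = 0; m < k; m++) s -= l[k, m] * u[m, j];
                u[k, j] = s;
            }

            // k-th column of L: l_ik = (a_ik - sum_{m<k} l_im u_mk) / u_kk
            for (int i = k + 1; i < n; i++)
            {
                double s = a[i, k];
                for (int m = 0; m < k; m++) s -= l[i, m] * u[m, k];
                l[i, k] = s / u[k, k];
            }
        }
    }

    static void Main()
    {
        // The example matrix from the previous slide.
        double[,] a = { { 2, 1, -3 }, { -1, 3, 2 }, { 3, 1, -3 } };
        Decompose(a, out double[,] l, out double[,] u);
        Console.WriteLine($"l21={l[1, 0]}, l31={l[2, 0]}, l32={l[2, 1]:F4}");
        Console.WriteLine($"u22={u[1, 1]}, u23={u[1, 2]}, u33={u[2, 2]:F4}");
        // expects l21=-0.5, l31=1.5, l32=-0.1429, u22=3.5, u23=0.5, u33=1.5714
    }
}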

LU decomposition of AN (3)

Since the elements in the upper triangular part of L and the elements in the lower triangular part of U are all zero, and the diagonal elements of L are all unity, we can store both L and U of a big matrix in one array to save memory space:

[ 1    0    0 ] [ u11  u12  u13 ]        [ u11  u12  u13 ]
[ l21  1    0 ] [ 0    u22  u23 ]   ->   [ l21  u22  u23 ]
[ l31  l32  1 ] [ 0    0    u33 ]        [ l31  l32  u33 ]

To reduce the memory space further, the result of the factorization can be overwritten on the memory space of A. This is possible because each element aij of A is used only once, for the calculation of lij or uij, in the entire factorization. Therefore, as soon as aij is used, its memory space can be reused to store lij or uij.

Solving Linear Equations with LU

The set of linear equations Ax = y can be rewritten as:

LUx = y

[ 1    0    0 ] [ u11  u12  u13 ] [ x1 ]   [ y1 ]
[ l21  1    0 ] [ 0    u22  u23 ] [ x2 ] = [ y2 ]
[ l31  l32  1 ] [ 0    0    u33 ] [ x3 ]   [ y3 ]

This equation can be solved by setting:

Ux = z

and the underlying linear equations become:

Lz = y

Solving Linear Equations with LU (2)

In the case of a 3x3 matrix, we can write Lz = y as:

[ 1    0    0 ] [ z1 ]   [ y1 ]
[ l21  1    0 ] [ z2 ] = [ y2 ]
[ l31  l32  1 ] [ z3 ]   [ y3 ]

The solution is calculated recursively as:

z1 = y1
z2 = y2 - z1 l21
z3 = y3 - z1 l31 - z2 l32

Solving Linear Equations with LU (3)

And we can rewrite Ux = z explicitly as:

[ u11  u12  u13 ] [ x1 ]   [ z1 ]
[ 0    u22  u23 ] [ x2 ] = [ z2 ]
[ 0    0    u33 ] [ x3 ]   [ z3 ]

The solution becomes:

x3 = z3 / u33
x2 = (z2 - u23 x3) / u22
x1 = (z1 - u12 x2 - u13 x3) / u11

Solving Linear Equations with LU (4)

For a matrix of order N, the forward substitution can be summarized as:

z1 = y1
zi = yi - sum_{j=1..i-1} lij zj,  i = 2, 3, ..., N

And the backward substitution as:

xN = zN / uNN
xi = (zi - sum_{j=i+1..N} uij xj) / uii,  i = N-1, N-2, ..., 3, 2, 1
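A C# sketch of the two substitution passes is shown below (illustrative names; the L and U factors and the right-hand side are taken from the earlier 3x3 example, so the result should be the same x1 = 1, x2 = 3, x3 = 2):

using System;

class LUSolveDemo
{
    // Given L (unit diagonal) and U from the decomposition, solve Ax = y in two
    // passes: Lz = y by forward substitution, then Ux = z by backward substitution.
    static double[] Solve(double[,] l, double[,] u, double[] y)
    {
        int n = y.Length;

        // Forward substitution: z_i = y_i - sum_{j<i} l_ij z_j
        double[] z = new double[n];
        for (int i = 0; i < n; i++)
        {
            double s = y[i];
            for (int j = 0; j < i; j++) s -= l[i, j] * z[j];
            z[i] = s;
        }

        // Backward substitution: x_i = (z_i - sum_{j>i} u_ij x_j) / u_ii
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = z[i];
            for (int j = i + 1; j < n; j++) s -= u[i, j] * x[j];
            x[i] = s / u[i, i];
        }
        return x;
    }

    static void Main()
    {
        // L and U of the example matrix [2 1 -3; -1 3 2; 3 1 -3].
        double[,] l = { { 1, 0, 0 }, { -0.5, 1, 0 }, { 1.5, -1.0 / 7.0, 1 } };
        double[,] u = { { 2, 1, -3 }, { 0, 3.5, 0.5 }, { 0, 0, 11.0 / 7.0 } };
        double[] y = { -1, 12, 0 };
        double[] x = Solve(l, u, y);
        Console.WriteLine($"x1 = {x[0]:F4}, x2 = {x[1]:F4}, x3 = {x[2]:F4}");  // expects 1, 3, 2
    }
}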

Determinant

A practical way of calculating the determinant is to use the forward elimination process of Gauss elimination or, alternatively, the LU decomposition. But first we look at two important rules of determinants:

Rule 1: det(BC) = det(B) det(C),
which means that the determinant of a product of matrices is the product of the determinants of all the matrices.

Rule 2: det(M) = the product of all diagonal elements of M, if M is an upper or lower triangular matrix.

If no pivoting (exchange between pairs of rows) is used, calculation of the determinant using the LU decomposition is straightforward. According to Rule 1 and Rule 2, the determinant can be written as:

det(A) = det(LU) = det(L) det(U) = det(U) = u11 u22 ... uNN

Determinant (2)

When pivoting is used in the LU decomposition, its effect should be taken into consideration.
First, we recognize that the LU decomposition with pivoting is equivalent to performing two separate processes:
(1) transform A to A' by performing all the shuffling of rows (pivoting),
(2) then decompose A' into LU with no pivoting.

The former step can be expressed by:

A' = PA, or equivalently A = P^-1 A'

where P is called a permutation matrix and represents the pivoting operation. The second process is:

A' = LU, or equivalently A = P^-1 LU

Determinant (3)

The determinant of A may now be written as:

det(A) = det(P^-1) det(L) det(U)

or equivalently:

det(A) = γ det(U)

where det(L) = 1 is used and γ = det(P^-1) equals +1 or -1 depending on whether the number of row exchanges (pivotings) is even or odd, respectively.
The determinant of a matrix may also be calculated during the process of Gauss elimination. This is because, when the forward elimination is completed, the original matrix has been transformed into the U matrix of the LU decomposition.
Therefore, the determinant can be calculated by taking the product of all the terms along the diagonal and then multiplying by +1 or -1 according to whether the number of pivoting operations performed is even or odd.

Example of determinant computation

Find the determinant of the matrix:

    [  2   1  -3 ]
A = [ -1   3   2 ]
    [  3   1  -3 ]

After the forward elimination (with no pivoting), the upper triangular matrix is:

[ 2   1    -3     ]
[ 0   3.5   0.5   ]
[ 0   0     1.5714]

Therefore, we get:

det(A) = (2)(3.5)(1.5714) = 11
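A short C# sketch of this computation (illustrative names; no pivoting is performed, so no sign factor is needed):

using System;

class DetDemo
{
    // det(A) = product of the diagonal after forward elimination (no pivoting).
    static double Determinant(double[,] a)
    {
        int n = a.GetLength(0);
        for (int k = 0; k < n - 1; k++)
            for (int i = k + 1; i < n; i++)
            {
                double f = a[i, k] / a[k, k];
                for (int j = k; j < n; j++) a[i, j] -= f * a[k, j];
            }

        double det = 1.0;
        for (int i = 0; i < n; i++) det *= a[i, i];
        return det;
    }

    static void Main()
    {
        double[,] a = { { 2, 1, -3 }, { -1, 3, 2 }, { 3, 1, -3 } };
        Console.WriteLine($"det(A) = {Determinant(a):F2}");   // expects 11
    }
}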

Ill-conditioned Problems

Ill-conditioned problems are solvable, but their solutions become very inaccurate because of severe round-off errors.
Small round-off errors or changes in coefficients can cause significant errors in the solution of an ill-conditioned problem. Although the effect of round-off errors increases as the size of the set of equations becomes larger, it can still be illustrated by considering only the two equations below:

0.12065x + 0.98775y = 2.01045   (line A)
0.12032x + 0.98755y = 2.00555   (line B)

where the two equations (lines) are very close to each other. The solution, denoted by (x1, y1), is:

x1 = 14.7403
y1 = 0.23942

Ill-conditioned Problems (2)

To simulate the effect of an error in the coefficients, we artificially increase the inhomogeneous term of the first equation (line A) by 0.001:

0.12065x + 0.98775y = 2.01145   (line A, perturbed)
0.12032x + 0.98755y = 2.00555   (line B)

The solution is altered to (x2, y2):

x2 = 17.97563
y2 = -0.15928

compared with the original solution x1 = 14.7403, y1 = 0.23942.
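This sensitivity can be reproduced with a few lines of C#. The sketch below solves the 2x2 system with the original and the perturbed right-hand side; it uses Cramer's rule rather than the elimination routines above simply for brevity, and the names are illustrative.

using System;

class IllConditionedDemo
{
    // Solve a 2x2 system by Cramer's rule.
    static (double x, double y) Solve2x2(double a11, double a12, double a21, double a22,
                                         double b1, double b2)
    {
        double det = a11 * a22 - a12 * a21;          // very small for this system
        return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det);
    }

    static void Main()
    {
        var original  = Solve2x2(0.12065, 0.98775, 0.12032, 0.98755, 2.01045, 2.00555);
        var perturbed = Solve2x2(0.12065, 0.98775, 0.12032, 0.98755, 2.01145, 2.00555);
        Console.WriteLine($"original : x = {original.x:F5}, y = {original.y:F5}");
        Console.WriteLine($"perturbed: x = {perturbed.x:F5}, y = {perturbed.y:F5}");
        // a change of 0.001 in one right-hand side shifts the solution dramatically
    }
}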

Your exercises (homework)

1. Write a C# program to solve the following array of equations:

1       10E-20  10E-10  1       | 1
10E-20  10E-20  1       10E-40  | 1
10E-10  1       10E-40  10E-50  | 1
1       10E-40  10E-50  1       | 1

using Gauss-Jordan elimination and the pivoting technique.
(ref: http://www.icdd.com/resources/axa/VOL42/V42_68.pdf)
2. Write a C# program to solve the following array of equations:


(2.310E-03)x + (4.104E-02)y = 2.283E-02
(4.200E-01)x + (5.368E00)y = 3.104E00


Your exercises 2 (homework)

3. Write a C# program to solve the following array of equations:

10a + 7b + 8c + 7d = 32
7a + 5b + 6c + 5d = 23
8a + 6b + 10c + 9d = 33
7a + 5b + 9c + 10d = 31

4. Write a C# program to solve the following array of equations:

2.1a + 2.4b + 8.1c = 62.76
7.2a + 8.5b - 6.3c = -1.93
3.4a - 6.4b + 5.4c = 16.24

(ref: http://mpec.sc.mahidol.ac.th/numer/STEP12.HTM)

Interesting references: http://www.cs.uiuc.edu/class/fa06/cs257/index.html

Online Calculator

Gaussian Eliminator
http://planetcalc.com/3571/
