
Basic Iterative Methods and Their Rates of Convergence

By Abhishek Srivastava, Sneha E. V. S., Della Jacy

Iteration Methods
Iteration: repeating a process over and over until a sufficiently good approximation of the solution is reached. Iterative methods:
- Start with an approximate answer
- Improve the accuracy with each iteration
- Stop once the estimated error falls below a tolerance

Why Iterative Methods


When the number of unknowns is very large, Gauss elimination becomes inefficient. Additional advantages of iterative methods include: (1) the programming is simple; (2) they are easily applicable when the coefficients are nonlinear.

Fundamental Idea
The fundamental idea is this: to solve Ax = b, take x_current, a current approximation to the true value x, and from it compute a new approximation x_new that is closer to x than x_current. Then treat this new approximation as the current one and use it to find yet another approximation. The iterative method applies this process again and again until the approximation is sufficiently close to the true solution.

Types Of Iteration Methods


- Jacobi's Method
- Gauss-Seidel Method
- Successive Over-Relaxation (SOR) Method

Jacobi's Method
The simplest iterative method of all. To solve a set of linear equations of the form Ax = b, take a current approximation x(k) = (x1(k), x2(k), x3(k), ..., xn(k)). From these current values, find the new values x(k+1) = (x1(k+1), x2(k+1), ..., xn(k+1)) in the following system of equations.

In general,

xi(k+1) = ( bi - Σ_{j≠i} aij xj(k) ) / aii,   i = 1, ..., n

Writing A = L + D + U, where D is the diagonal of A and L and U are its strictly lower and strictly upper triangular parts, the above system of equations can also be represented as

D x(k+1) = b - (L + U) x(k)

so that in matrix-vector notation Jacobi's Method can be written as

x(k+1) = D^-1 ( b - (L + U) x(k) )
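The matrix-vector form above can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the slides; the function name, tolerance, and iteration cap are my own choices.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=100):
    """Jacobi iteration: every component of x(k+1) uses only x(k)."""
    x = x0.astype(float).copy()
    D = np.diag(A)                # diagonal entries a_ii as a vector
    R = A - np.diagflat(D)        # off-diagonal part L + U
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # x(k+1) = D^-1 (b - (L + U) x(k))
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x
```

On the strictly diagonally dominant system used later in these slides (4x1 - x2 - x3 = 3, -2x1 + 6x2 + x3 = 9, -x1 + x2 + 7x3 = -6), this converges from the zero vector to the solution (1, 2, -1).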

Example 1 Let's apply Jacobi's Method to the system

At each step, given the current values x1(k), x2(k), x3(k), we solve for x1(k+1), x2(k+1), and x3(k+1) in

The initial guess x(0) = (x1(0), x2(0), x3(0)) is the zero vector (0, 0, 0). Then we find x(1) = (x1(1), x2(1), x3(1)) by solving

So x(1) = (x1(1), x2(1), x3(1)) = (3/4, 9/6, 6/7) ≈ (0.750, 1.500, 0.857). We iterate this process to find a sequence of increasingly better approximations x(0), x(1), x(2), .... We are interested in the error e(k) at each iteration between the true solution x and the approximation x(k): e(k) = x - x(k). For better understanding we can make use of the known solution; for this example, the true solution is x = (1, 2, 1).

We stop iterating once all three ways of measuring the current error, x(k) - x(k-1), e(k), and ||e(k)||, equal 0 to three decimal places.

Convergence
If A is strictly diagonally dominant, the system of linear equations Ax = b has a unique solution, to which Jacobi's Method is guaranteed to converge for any initial approximation.
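Strict diagonal dominance is easy to check programmatically. A minimal sketch (the function name is my own):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))
```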

Gauss-Seidel Method
The Gauss-Seidel iterative method can be derived as follows. Write A = L + D + U, so that
(L + D + U) x = b
D x = -(L + U) x + b
x = -D^-1 (L + U) x + D^-1 b = D^-1 (b - L x - U x)
Turning this into an iteration, the lower-triangular term uses the components already updated in the current sweep:
x(k+1) = D^-1 ( b - L x(k+1) - U x(k) )

It is an improvement over Jacobi's method: since x1(k+1) is (presumably) a better approximation to the true value than x1(k), we use the new value x1(k+1) immediately when computing x2(k+1), ..., xn(k+1).

The Gauss-Seidel method is given by:

(D + L) x(k+1) = b - U x(k)

that is,

x(k+1) = (D + L)^-1 ( b - U x(k) )

so that, component by component,

xi(k+1) = ( bi - Σ_{j<i} aij xj(k+1) - Σ_{j>i} aij xj(k) ) / aii
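The componentwise update can be sketched in NumPy as follows; note that the inner loop reads from the same array it writes, which is exactly how the newly computed components get used immediately. Names, tolerance, and iteration cap are illustrative.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=100):
    """Gauss-Seidel: use updated components as soon as they are available."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds iterate k+1 values; x[i+1:] still holds k values
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x
```

One sweep from the zero vector on the example system that follows reproduces the first iterate (0.750, 1.750, -1.000) computed in these slides.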

Let's take an example:
4x1 - x2 - x3 = 3
-2x1 + 6x2 + x3 = 9
-x1 + x2 + 7x3 = -6
At each step, given the current value x(k), we solve for x(k+1) in
4x1(k+1) - x2(k) - x3(k) = 3
-2x1(k+1) + 6x2(k+1) + x3(k) = 9
-x1(k+1) + x2(k+1) + 7x3(k+1) = -6

Now we solve for x(1) by solving
4x1(1) - 0 - 0 = 3
-2x1(1) + 6x2(1) + 0 = 9
-x1(1) + x2(1) + 7x3(1) = -6
We first solve for x1(1): x1(1) = 3/4 = 0.750. We then solve for x2(1): x2(1) = (9 + 2[0.750])/6 = 1.750. Finally we solve for x3(1): x3(1) = (-6 + 0.750 - 1.750)/7 = -1.000.

The result of the first iteration is x(1) = (0.750, 1.750, -1.000). We iterate this process to generate increasingly better approximations:

k   x(k)                      x(k) - x(k-1)             e(k) = x - x(k)           ||e(k)||
0   (0.000, 0.000, 0.000)     n/a                       (1.000, 2.000, -1.000)    2.449
1   (0.750, 1.750, -1.000)    (0.750, 1.750, -1.000)    (0.250, 0.250, 0.000)     0.354
2   (0.938, 1.979, -1.006)    (0.188, 0.229, -0.006)    (0.063, 0.021, 0.006)     0.066
3   (0.993, 1.999, -1.001)    (0.056, 0.020, 0.005)     (0.007, 0.001, 0.001)     0.007
4   (0.999, 2.000, -1.000)    (0.006, 0.001, 0.001)     (0.001, 0.000, 0.000)     0.001
5   (1.000, 2.000, -1.000)    (0.000, 0.000, 0.000)     (0.000, 0.000, 0.000)     0.000

Convergence Analysis Of GS
Here we try to answer two questions: (1) When will each of these methods work? (2) What is the rate of convergence?

An iterative method for Ax = b based on a splitting A = M - N takes the form:
M x(k+1) = N x(k) + b
x(k+1) = M^-1 N x(k) + M^-1 b
x(k+1) = B x(k) + S, where B = M^-1 N and S = M^-1 b

If x(k) is the exact solution, then the iteration should certainly reproduce it as the next iterate x(k+1); i.e., x = Bx + S must hold, so Mx = Nx + b and (M - N)x = b, where M - N = A. The effectiveness of a method depends on the iteration matrix B = M^-1 N: the iteration converges for every initial guess exactly when the spectral radius ρ(B) is less than 1, and the smaller ρ(B), the faster the convergence.
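The iteration matrices for the splittings described above can be computed numerically. A sketch for the example system in these slides, using M = D, N = -(L + U) for Jacobi and M = D + L, N = -U for Gauss-Seidel (both satisfy A = M - N); the numerical comparison is my own illustration:

```python
import numpy as np

# The example system from these slides
A = np.array([[4., -1., -1.],
              [-2., 6., 1.],
              [-1., 1., 7.]])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part

def spectral_radius(B):
    """Largest eigenvalue magnitude: governs the asymptotic rate."""
    return max(abs(np.linalg.eigvals(B)))

# B = M^-1 N for each splitting A = M - N
B_jacobi = np.linalg.solve(D, -(L + U))        # M = D
B_gauss_seidel = np.linalg.solve(D + L, -U)    # M = D + L
print(spectral_radius(B_jacobi), spectral_radius(B_gauss_seidel))
```

For this matrix the Gauss-Seidel radius is much smaller than the Jacobi radius, which is consistent with how quickly the iteration table above converges.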

Successive Over-Relaxation method


This method is a generalization of, and an improvement on, the Gauss-Seidel method. In finding x(k+1) from x(k), we move a certain amount in a particular direction from x(k) to x(k+1). This direction is the vector x(k+1) - x(k), since x(k+1) = x(k) + (x(k+1) - x(k)). If we assume that the direction from x(k) to x(k+1) is taking us closer to the true solution x, then it makes sense to move farther along that same direction x(k+1) - x(k).

We can write the Gauss-Seidel equation as

x(k+1) = x(k) + ( xGS(k+1) - x(k) )

where the term xGS(k+1) - x(k) can be taken as the Gauss-Seidel correction. It turns out that the convergence of the sequence of approximate solutions x(k) to the true solution x is often faster if we go beyond the standard Gauss-Seidel correction.

The idea of the SOR Method is to iterate

x(k+1) = x(k) + ω ( xGS(k+1) - x(k) )

where generally 1 < ω < 2. If ω = 1 then this is the Gauss-Seidel Method.

Componentwise, the SOR method is:

xi(k+1) = (1 - ω) xi(k) + (ω / aii) ( bi - Σ_{j<i} aij xj(k+1) - Σ_{j>i} aij xj(k) )

Multiplying both sides by aii, collecting terms, and writing the result in matrix form we get

(D + ωL) x(k+1) = ω b + [ (1 - ω) D - ω U ] x(k)

So the SOR method is also of the form x(k+1) = B x(k) + S, and the general convergence analysis of the Jacobi and Gauss-Seidel methods also applies to SOR. The iteration matrix that determines the convergence of the SOR method is

Bω = (D + ωL)^-1 [ (1 - ω) D - ω U ]

Optimum convergence can be obtained by choosing ω to minimize the spectral radius ρ(Bω).
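The componentwise formula translates directly into code. A sketch (names and defaults are my own illustrative choices) in which omega = 1 reduces to plain Gauss-Seidel:

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-8, max_iter=200):
    """SOR: blend the Gauss-Seidel update with the current value using omega."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]          # plain Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x
```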

Convergence of SOR
A necessary condition for the convergence of the SOR method is 0 < ω < 2. If 0 < ω < 1, it is called under-relaxation; if 1 < ω < 2, it is called over-relaxation.

Comparing the algorithms for performing iteration k+1 (for i = 1 to n) of the 3 methods:

JACOBI:        xi(k+1) = ( bi - Σ_{j≠i} aij xj(k) ) / aii

GAUSS-SEIDEL:  xi(k+1) = ( bi - Σ_{j<i} aij xj(k+1) - Σ_{j>i} aij xj(k) ) / aii

SOR:           xi(k+1) = (1 - ω) xi(k) + (ω / aii) ( bi - Σ_{j<i} aij xj(k+1) - Σ_{j>i} aij xj(k) )
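To make the comparison concrete, a small self-contained driver can count the sweeps each method needs on the example system from these slides. The unified update (Jacobi reads only the previous iterate; Gauss-Seidel and SOR read the partially updated one) and the choice ω = 1.05 are my own illustrative choices:

```python
import numpy as np

A = np.array([[4., -1., -1.], [-2., 6., 1.], [-1., 1., 7.]])
b = np.array([3., 9., -6.])

def iterate(method, omega=1.0, tol=1e-6, max_iter=500):
    """Return the number of sweeps needed to reach the tolerance."""
    x = np.zeros(3)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(3):
            # Jacobi reads only x_old; GS/SOR read the partially updated x
            src = x_old if method == "jacobi" else x
            s = A[i, :i] @ src[:i] + A[i, i+1:] @ src[i+1:]
            gs = (b[i] - s) / A[i, i]
            x[i] = (1 - omega) * x_old[i] + omega * gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return k
    return max_iter

print(iterate("jacobi"), iterate("gs"), iterate("sor", omega=1.05))
```

On this strictly diagonally dominant system, Gauss-Seidel needs noticeably fewer sweeps than Jacobi, mirroring the spectral-radius comparison earlier.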
