
APPLIED NUMERICAL METHODS

MEE3005

DEEPANSHU DEV C2+TC2+TCC2


15BME0120

1) Secant method: The Newton-Raphson algorithm requires the evaluation of two functions (the
function and its derivative) per iteration. If these are complicated expressions, it takes a
considerable amount of effort for hand calculations or a large amount of CPU time for machine
calculations. Hence it is desirable to have a method that converges as fast as Newton's method
(see the section on the order of numerical methods for theoretical details) yet requires
evaluation of the function only. Let x0 and x1 be two initial approximations to the root 's' of f(x) = 0,
with function values f(x0) and f(x1) respectively. If x2 is the point of intersection of the x-axis and the
line joining the points (x0, f(x0)) and (x1, f(x1)), then x2 is closer to 's' than x0 and x1. The equation
relating x0, x1 and x2 is found by considering the slope 'm':

m = ( f(x1) - f(x0) ) / (x1 - x0) = ( 0 - f(x1) ) / (x2 - x1)

x2 - x1 = - f(x1) (x1 - x0) / ( f(x1) - f(x0) )

x2 = x1 - f(x1) (x1 - x0) / ( f(x1) - f(x0) )

or in general the iterative process can be written as

x_{i+1} = x_i - f(x_i) (x_i - x_{i-1}) / ( f(x_i) - f(x_{i-1}) ),   i = 1, 2, 3, . . .

This formula is similar to the Regula-falsi scheme of the root-bracketing methods but differs in the
implementation. The Regula-falsi method begins with two initial approximations 'a' and 'b' such that
a < s < b, where s is the root of f(x) = 0. It proceeds to the next iteration by calculating c (the x2
above) with the same formula and then chooses the interval (a, c) or (c, b) according as f(a) * f(c) < 0
or > 0 respectively. The secant method, on the other hand, starts with two initial approximations x0 and
x1 (which need not bracket the root), calculates x2 by the same formula, and proceeds to the next
iteration without any root bracketing.
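The Regula-falsi scheme contrasted above can be sketched as follows; the helper name `regula_falsi`, the tolerance, and the bracket [0, 1] (the interval from the numerical example of 3x+sin[x]-exp[x]=0) are illustrative assumptions, not from the notes.

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Root of f in [a, b]; requires f(a) and f(b) of opposite sign."""
    if f(a) * f(b) > 0:
        raise ValueError("initial points do not bracket a root")
    c = a
    for _ in range(max_iter):
        # Same formula as the secant method, applied to the bracket endpoints
        c = b - f(b) * (b - a) / (f(b) - f(a))
        if abs(f(c)) < tol:
            return c
        if f(a) * f(c) < 0:   # root lies in (a, c): keep the left bracket
            b = c
        else:                 # root lies in (c, b): keep the right bracket
            a = c
    return c

f = lambda x: 3 * x + math.sin(x) - math.exp(x)
root = regula_falsi(f, 0.0, 1.0)   # f(0) < 0 < f(1), so the root is bracketed
```

Unlike the secant method below, every iterate stays inside a shrinking interval that contains the root.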
Algorithm - Secant Method

Given an equation f(x) = 0


Let the initial guesses be x0 and x1
Do

    x_{i+1} = x_i - f(x_i) (x_i - x_{i-1}) / ( f(x_i) - f(x_{i-1}) ),   i = 1, 2, 3, . . .

While (neither convergence criterion C1 nor C2 is met)

C1. Fixing a priori the total number of iterations N.

C2. Testing whether | x_{i+1} - x_i | (where i is the iteration number) is less than some tolerance
limit, say epsilon, fixed a priori.
Numerical Example :
Find the root of 3x+sin[x]-exp[x]=0
Let the initial guesses be 0.0 and 1.0
f(x) = 3x+sin[x]-exp[x]

i     0   1   2       3       4       5      6
x_i   0   1   0.471   0.308   0.363   0.36   0.36

So the iterative process converges to 0.36 in six iterations.

1. Find the root of x4-x-10 = 0

The graph of this equation is given in the figure.

Let the initial guesses be 1.0 and 2.0

i     0   1   2         3         4         5         6         7
x_i   1   2   1.71429   1.83853   1.85778   1.85555   1.85558   1.85558

So the iterative process converges to 1.85558.
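The secant iteration above can be sketched as follows, applied to both worked examples; the function name `secant`, the tolerance, and the iteration cap are assumptions.

```python
import math

def secant(f, x0, x1, tol=1e-6, max_iter=100):
    """Secant method: no bracketing of the root is required."""
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0:                # successive values coincide: stop
            return x1
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < tol:        # convergence criterion C2
            return x2
        x0, x1 = x1, x2
    return x1                          # criterion C1: iteration budget spent

# The two worked examples from the notes
r1 = secant(lambda x: 3 * x + math.sin(x) - math.exp(x), 0.0, 1.0)
r2 = secant(lambda x: x**4 - x - 10, 1.0, 2.0)
```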


2) NEWTON'S METHOD

Let the given equation be f(x) = 0 and let the initial approximation for the root be x0. Draw a tangent to
the curve y = f(x) at x0 and extend the tangent to the x-axis. The point of intersection of the tangent
and the x-axis is the next approximation x1 for the root of f(x) = 0. Repeat the procedure with x1 in place
of x0 until it converges. If m is the slope of the tangent at the point x0 and θ is the angle between the
tangent and the x-axis, then

m = tan θ = f '(x0) = f(x0) / (x0 - x1)

(x0 - x1) f '(x0) = f(x0)

x1 = x0 - f(x0) / f '(x0)
This can be generalized to the iterative process as

x_{i+1} = x_i - f(x_i) / f '(x_i),   i = 0, 1, 2, . . .
The same result can also be obtained from the Taylor series. Let x1 = x0 + h be the root of f(x) = 0.

f(x1) = f(x0 + h) = f(x0) + h f '(x0) + (h^2/2) f ''(x0) + . . .

0 = f(x0) + h f '(x0) + (h^2/2) f ''(x0) + . . .

Neglecting the terms of second and higher order in h,

h = - f(x0) / f '(x0)

x1 = x0 - f(x0) / f '(x0)
or in general

x_{i+1} = x_i - f(x_i) / f '(x_i),   i = 0, 1, 2, . . .

Algorithm - Newton's Scheme

Given an equation f(x) = 0


Let the initial guess be x0
Do

    x_{i+1} = x_i - f(x_i) / f '(x_i),   i = 0, 1, 2, . . .

While (neither convergence criterion C1 nor C2 is met)

C1. Fixing a priori the total number of iterations N.

C2. Testing whether | x_{i+1} - x_i | (where i is the iteration number) is less than some tolerance
limit, say epsilon, fixed a priori.

Numerical Example :

Find a root of 3x+sin[x]-exp[x]=0

Let the initial guess x0 be 2.0


f(x) = 3x+sin[x]-exp[x]
f '(x) = 3+cos[x]-exp[x]

i     0   1         2         3         4
x_i   2   1.90016   1.89013   1.89003   1.89003
So the iterative process converges to 1.89003 in four iterations.
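Newton's scheme above can be sketched as follows, with the derivative supplied analytically as in the worked example; the function name `newton`, the tolerance, and the iteration cap are assumptions.

```python
import math

def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    for _ in range(max_iter):
        x1 = x0 - f(x0) / df(x0)
        if abs(x1 - x0) < tol:   # convergence criterion C2
            return x1
        x0 = x1
    return x0                    # criterion C1: iteration budget spent

f  = lambda x: 3 * x + math.sin(x) - math.exp(x)
df = lambda x: 3 + math.cos(x) - math.exp(x)
root = newton(f, df, 2.0)        # initial guess from the example
```

Note that each iteration evaluates both f and f ', whereas the secant method evaluates f only.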

Example :

Show that the initial approximation x0 for finding 1/N, where N is a positive integer, by Newton's
method must satisfy 0 < x0 < 2/N for convergence.

Proof : Let f(x) = 1/x - N = 0, so that

f '(x) = -1/x^2

and Newton's method is

x_{i+1} = x_i - (1/x_i - N) / (-1/x_i^2) = 2x_i - N x_i^2,   i = 0, 1, 2, . . .

Now draw the curves y = x and y = 2x - Nx^2. The first curve is a straight line
through the origin and the second is the parabola (x - 1/N)^2 = -(1/N)(y - 1/N).
The point of intersection of these two curves is the required value
1/N. From the figure, we find that any initial value outside the range
0 < x0 < 2/N diverges. If x0 = 0, the iteration does not converge to 1/N but
remains zero always.
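The division-free iteration x_{i+1} = 2x_i - N x_i^2 derived above can be checked numerically; N = 7 and the starting values are illustrative assumptions.

```python
def reciprocal(N, x0, iters=30):
    """Approximate 1/N by Newton's method, using no division at all."""
    x = x0
    for _ in range(iters):
        x = 2 * x - N * x * x
    return x

inside = reciprocal(7, 0.1)   # 0 < 0.1 < 2/7: converges to 1/7
zero   = reciprocal(7, 0.0)   # x0 = 0: stays at zero, as noted above
```

A starting value outside (0, 2/N), e.g. reciprocal(7, 0.3), produces increasingly negative iterates and diverges, in line with the graphical argument.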
3. Find the root of (cos[x])-(x * exp[x]) = 0

The graph of this equation is given in the figure.

Let the initial guess x0 be 2.0

i     0   1         2        3         4         5         6         7
x_i   2   1.34157   0.8477   0.58756   0.52158   0.51777   0.51776   0.51776

So the iterative process converges to 0.51776.

L U DECOMPOSITION METHOD
In these methods the coefficient matrix A of the given system of equations AX = b is written as the
product of a lower triangular matrix L and an upper triangular matrix U, such that A = LU, where the
elements of L satisfy l_ij = 0 for i < j and the elements of U satisfy u_ij = 0 for i > j; that is,
the matrices L and U look like
L = [ l_11   0      0     ...   0    ]
    [ l_21   l_22   0     ...   0    ]
    [ ...    ...    ...   ...   ...  ]
    [ l_n1   l_n2   ...   ...   l_nn ]

U = [ u_11   u_12   ...   u_1n ]
    [ 0      u_22   ...   u_2n ]
    [ ...    ...    ...   ...  ]
    [ 0      0      ...   u_nn ]
Now using the rules of matrix multiplication,

l_i1 u_1j + l_i2 u_2j + . . . + l_in u_nj = a_ij,   i = 1(1)n,  j = 1(1)n.

This gives a system of n^2 equations for the (n^2 + n) unknowns (the non-zero elements in L and U).
To make the number of unknowns equal to the number of equations, one can fix the diagonal elements of
either L or U as '1' and then solve the n^2 equations for the remaining n^2 unknowns. Fixing u_ii = 1,
this can be written as
l_ij = a_ij - Σ(k=1 to j-1) l_ik u_kj ,              i >= j

u_ij = ( a_ij - Σ(k=1 to i-1) l_ik u_kj ) / l_ii ,   i < j
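The two formulas above (with the diagonal of U fixed as 1, i.e. the Crout variant) can be sketched as follows; the 3x3 test matrix is an assumed example, not from the notes.

```python
def crout_lu(A):
    """Crout LU decomposition: A = LU with unit diagonal in U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0                      # fix the diagonal of U as 1
        for i in range(j, n):              # column j of L (i >= j)
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):          # row j of U (i > j), divide by l_jj
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
L, U = crout_lu(A)
```

Once L and U are known, AX = b is solved by forward substitution (LY = b) followed by back substitution (UX = Y).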
GAUSS - SEIDEL METHOD
In this method the (n+1)th iterative values are used as soon as they are available, and the iterative
scheme is defined by

x_i^(n+1) = (1/a_ii) { b_i - Σ(j=1 to i-1) a_ij x_j^(n+1) - Σ(j=i+1 to m) a_ij x_j^(n) },   i = 1(1)m.

Again in matrix notation, the coefficient matrix A of the system Ax = b is split as A = D - L - U, where
D has the diagonal elements of A, and L and U respectively hold the lower-triangular and upper-triangular
elements of A with a negative sign. The Gauss-Seidel scheme can then be written as

D x^(n+1) = L x^(n+1) + U x^(n) + b
or (D - L) x^(n+1) = U x^(n) + b,
giving x^(n+1) = (D - L)^-1 U x^(n) + (D - L)^-1 b,

i.e., the Gauss-Seidel iteration matrix is Q_GS = (D - L)^-1 U and C_GS = (D - L)^-1 b.
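The Gauss-Seidel sweep above can be sketched as follows, with each updated component used immediately within the same sweep; the diagonally dominant test system is an assumed example.

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for Ax = b (A should be diagonally dominant)."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new              # use the (n+1)th value as soon as available
        if diff < tol:
            break
    return x

# Assumed diagonally dominant system; exact solution is x = (1, 1, 1)
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = gauss_seidel(A, b)
```

Replacing x[j] in the sum by the values from the previous sweep only would give the Jacobi method instead.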
THOMAS ALGORITHM
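The notes give only the heading here; the following is a sketch of the standard Thomas algorithm for tridiagonal systems (forward elimination followed by back substitution), with an assumed example system.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system.

    a = sub-diagonal (a[0] unused), b = main diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side.
    """
    n = len(b)
    cp = [0.0] * n            # modified super-diagonal
    dp = [0.0] * n            # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Assumed example: tridiag(-1, 2, -1) with d chosen so that x = (1, 1, 1)
a = [0.0, -1.0, -1.0]
b = [2.0, 2.0, 2.0]
c = [-1.0, -1.0, 0.0]
d = [1.0, 0.0, 1.0]
x = thomas(a, b, c, d)
```

The algorithm needs only O(n) operations, against O(n^3) for a full LU decomposition.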
POWER METHOD TO FIND THE EIGEN VALUES:
We first assume that the matrix A has a dominant eigenvalue with a corresponding dominant eigenvector. As stated
before, the power method for approximating eigenvalues is iterative. Hence, we start with a non-zero initial
approximation x_0 of the dominant eigenvector of A. Thus, we obtain a sequence of eigenvector approximations

x_{k+1} = A x_k ,   k = 0, 1, 2, . . .

When k is large, we can obtain a good approximation of the dominant eigenvector of A by properly scaling the
sequence.
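The power iteration described above can be sketched as follows, rescaling by the largest entry at each step so the sequence stays bounded; the 2x2 matrix and starting vector are assumed examples.

```python
def power_method(A, x0, iters=50):
    """Power method: returns (dominant eigenvalue, scaled eigenvector)."""
    x = list(x0)
    lam = 0.0
    n = len(A)
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)          # scale factor -> dominant eigenvalue
        x = [yi / lam for yi in y]     # rescale so the largest entry is 1
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]           # eigenvalues 3 and 1
lam, v = power_method(A, [1.0, 0.0])
```

Convergence is linear with ratio |lambda_2/lambda_1|, so the more dominant the leading eigenvalue, the faster the method converges.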
EXAMPLE:

JACOBI METHOD TO FIND EIGEN VALUES:


NEWTON’S FORWARD AND BACKWARD DIFFERENCE FORMULAE:
EXAMPLE:
EXAMPLE:
STIRLING INTERPOLATION:

EXAMPLE:
INTERPOLATION WITH CUBIC SPLINE:
EXAMPLE:
