
Roots of Equations: Open Methods

- Newton-Raphson Method
- Secant Method
- Müller's Method




Newton-Raphson Method






Newton-Raphson Method

x_{i+1} = x_i - f(x_i) / f'(x_i)

Figure 1. Geometrical illustration of the Newton-Raphson method: the tangent to f(x) at the point (x_i, f(x_i)) is extended to the x-axis to locate the next estimate x_{i+1}.
Derivation

The tangent to f(x) at the point (x_i, f(x_i)) makes an angle θ with the x-axis and crosses the axis at x_{i+1}, so

tan(θ) = AB / AC = f(x_i) / (x_i - x_{i+1})

Since tan(θ) = f'(x_i), we have

f'(x_i) = f(x_i) / (x_i - x_{i+1})

Solving for x_{i+1} gives

x_{i+1} = x_i - f(x_i) / f'(x_i)

Figure 2. Derivation of the Newton-Raphson method.
Algorithm for the Newton-Raphson Method

Step 1
Evaluate f'(x) symbolically.
Step 2
Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as

x_{i+1} = x_i - f(x_i) / f'(x_i)
Step 3
Find the absolute relative approximate error |ε_a| as

|ε_a| = |(x_{i+1} - x_i) / x_{i+1}| × 100
Step 4
Compare the absolute relative approximate error |ε_a| with the pre-specified relative error tolerance ε_s.

Is |ε_a| > ε_s?
- Yes: go to Step 2 using the new estimate of the root.
- No: stop the algorithm.

Also check whether the number of iterations has exceeded the maximum number of iterations allowed. If so, terminate the algorithm and notify the user.
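A minimal Python sketch of Steps 1–4, assuming the derivative f'(x) has already been worked out and is passed in as a function; the function name, tolerance default, and iteration cap are illustrative, not part of the original slides:

```python
def newton_raphson(f, fprime, x0, es=0.05, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i).

    es is the pre-specified relative error tolerance in percent (Step 4);
    max_iter caps the number of iterations allowed.
    """
    xi = x0
    for i in range(1, max_iter + 1):
        xi_new = xi - f(xi) / fprime(xi)          # Step 2: new estimate of the root
        ea = abs((xi_new - xi) / xi_new) * 100.0  # Step 3: |eps_a| in percent
        xi = xi_new
        if ea <= es:                              # Step 4: within tolerance -> stop
            return xi, ea, i
    raise RuntimeError("maximum number of iterations exceeded")
```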
Example 1
A polynomial is expressed as

f(x) = x^3 - 0.165x^2 + 3.993×10^{-4}

Use the Newton-Raphson method of finding roots of equations to find
a) the root of the above equation (conduct three iterations to estimate the root),
b) the absolute relative approximate error at the end of each iteration, and
c) the number of significant digits at least correct at the end of each iteration.
Example 1 Cont.
Solution

To aid in the understanding of how this method works to find the root of the equation, the graph of f(x) is shown to the right, where

f(x) = x^3 - 0.165x^2 + 3.993×10^{-4}

Figure 4. Graph of the function f(x).

Solve for f'(x):

f'(x) = 3x^2 - 0.33x

Let us assume the initial guess of the root of f(x) = 0 is x_0 = 0.05 m.
Example 1 Cont.
Iteration 1
The estimate of the root is

x_1 = x_0 - f(x_0) / f'(x_0)
    = 0.05 - [(0.05)^3 - 0.165(0.05)^2 + 3.993×10^{-4}] / [3(0.05)^2 - 0.33(0.05)]
    = 0.05 - (1.118×10^{-4}) / (-9×10^{-3})
    = 0.05 - (-0.01242)
    = 0.06242

Figure 5. Estimate of the root for the first iteration.
Example 1 Cont.
The absolute relative approximate error |ε_a| at the end of Iteration 1 is

|ε_a| = |(x_1 - x_0) / x_1| × 100 = |(0.06242 - 0.05) / 0.06242| × 100 = 19.90%

The number of significant digits at least correct is 0, as you need an absolute relative approximate error of 5% or less for at least one significant digit to be correct in your result.
Example 1 Cont.
Iteration 2
The estimate of the root is

x_2 = x_1 - f(x_1) / f'(x_1)
    = 0.06242 - [(0.06242)^3 - 0.165(0.06242)^2 + 3.993×10^{-4}] / [3(0.06242)^2 - 0.33(0.06242)]
    = 0.06242 - (-3.97781×10^{-7}) / (-8.90973×10^{-3})
    = 0.06242 - 4.4646×10^{-5}
    = 0.06238

Figure 6. Estimate of the root for Iteration 2.
Example 1 Cont.
The absolute relative approximate error |ε_a| at the end of Iteration 2 is

|ε_a| = |(x_2 - x_1) / x_2| × 100 = |(0.06238 - 0.06242) / 0.06238| × 100 = 0.0716%

The maximum value of m for which |ε_a| ≤ 0.5 × 10^{2-m} is 2.844. Hence, the number of significant digits at least correct in the answer is 2.
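The value m = 2.844 follows from solving |ε_a| ≤ 0.5 × 10^{2-m} for m, i.e. m ≤ 2 - log10(|ε_a| / 0.5). A small sketch of that arithmetic (the helper name is illustrative):

```python
import math

def sig_digits(ea_percent):
    """Largest m satisfying |eps_a| <= 0.5 * 10**(2 - m)."""
    return 2 - math.log10(ea_percent / 0.5)

m = sig_digits(0.0716)
print(round(m, 3), math.floor(m))   # 2.844 and 2 -> at least 2 significant digits
```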
Example 1 Cont.
Iteration 3
The estimate of the root is

x_3 = x_2 - f(x_2) / f'(x_2)
    = 0.06238 - [(0.06238)^3 - 0.165(0.06238)^2 + 3.993×10^{-4}] / [3(0.06238)^2 - 0.33(0.06238)]
    = 0.06238 - (4.44×10^{-11}) / (-8.91171×10^{-3})
    = 0.06238 - (-4.9822×10^{-9})
    = 0.06238

Figure 7. Estimate of the root for Iteration 3.
Example 1 Cont.
The absolute relative approximate error |ε_a| at the end of Iteration 3 is

|ε_a| = |(x_3 - x_2) / x_3| × 100 = |(0.06238 - 0.06238) / 0.06238| × 100 = 0%

The number of significant digits at least correct is 4, as only 4 significant digits are carried through all the calculations.
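As a check, the three iterations above can be reproduced numerically (a sketch; because full double precision is carried rather than four significant digits, the last iteration's error prints as a tiny nonzero number instead of exactly 0%):

```python
f  = lambda x: x**3 - 0.165*x**2 + 3.993e-4
df = lambda x: 3*x**2 - 0.33*x

x = 0.05                                      # initial guess x_0
for i in range(1, 4):
    x_new = x - f(x) / df(x)
    ea = abs((x_new - x) / x_new) * 100.0
    print(i, round(x_new, 5), round(ea, 4))   # roots: 0.06242, 0.06238, 0.06238
    x = x_new
```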
Advantages and Drawbacks of the Newton-Raphson Method

Advantages
- Converges fast (quadratic convergence), if it converges.
- Requires only one guess.
Drawbacks

1. Divergence at inflection points
Selecting an initial guess or an iterated value of the root that is close to an inflection point of the function f(x) may cause the Newton-Raphson method to start diverging away from the root.

For example, to find the root of the equation

f(x) = (x - 1)^3 + 0.512 = 0

the Newton-Raphson method reduces to

x_{i+1} = x_i - [(x_i - 1)^3 + 0.512] / [3(x_i - 1)^2]

Table 1 shows the iterated values of the root of the equation. The root starts to diverge at Iteration 6 because the previous estimate of 0.92589 is close to the inflection point of f(x) at x = 1. Eventually, after 12 more iterations, the root converges to the exact value of x = 0.2.

Table 1. Divergence near the inflection point.
Iteration Number | x_i
0  | 5.0000
1  | 3.6560
2  | 2.7465
3  | 2.1084
4  | 1.6000
5  | 0.92589
6  | -30.119
7  | -19.746
...
18 | 0.2000

Figure 8. Divergence at the inflection point for f(x) = (x - 1)^3 + 0.512.
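A sketch that reproduces the behaviour in Table 1 from the same starting guess x_0 = 5.0 (the exact digits drift slightly from the table after Iteration 5 because the table was built from rounded intermediate values):

```python
x = 5.0                                  # initial guess from Table 1
for i in range(1, 19):
    x = x - ((x - 1)**3 + 0.512) / (3 * (x - 1)**2)
    if i in (5, 6, 7, 18):
        print(i, x)                      # ~0.926, ~-30, ~-19.7, ~0.2000
```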
2. Division by zero

For the equation

f(x) = x^3 - 0.03x^2 + 2.4×10^{-6} = 0

the Newton-Raphson method reduces to

x_{i+1} = x_i - (x_i^3 - 0.03x_i^2 + 2.4×10^{-6}) / (3x_i^2 - 0.06x_i)

For x_0 = 0 or x_0 = 0.02, the denominator equals zero.

Figure 9. Pitfall of division by zero or by a number near zero.
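A quick numerical check of the two problem guesses; the guard threshold shown is an illustrative choice, not from the slides:

```python
df = lambda x: 3*x**2 - 0.06*x        # derivative of x**3 - 0.03*x**2 + 2.4e-6

for x0 in (0.0, 0.02):
    print(x0, df(x0))                 # essentially zero (to round-off) at both guesses
    if abs(df(x0)) < 1e-12:           # illustrative guard before taking a Newton step
        print("  derivative too close to zero: Newton-Raphson step undefined")
```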
3. Oscillations near a local maximum or minimum

Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, converging instead on the local maximum or minimum. Eventually, this may lead to division by a number close to zero, and the method may diverge.

For example, the equation f(x) = x^2 + 2 = 0 has no real roots.
Figure 10. Oscillations around the local minimum for f(x) = x^2 + 2.

Table 3. Oscillations near the local minimum of f(x) = x^2 + 2 in the Newton-Raphson method.
Iteration Number | x_i | f(x_i) | |ε_a| (%)
0 | -1.0000  | 3.00   | —
1 | 0.5      | 2.25   | 300.00
2 | -1.75    | 5.063  | 128.571
3 | -0.30357 | 2.092  | 476.47
4 | 3.1423   | 11.874 | 109.66
5 | 1.2529   | 3.570  | 150.80
6 | -0.17166 | 2.029  | 829.88
7 | 5.7395   | 34.942 | 102.99
8 | 2.6955   | 9.266  | 112.93
9 | 0.97678  | 2.954  | 175.96
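A sketch that reproduces the iterates in Table 3, starting from x_0 = -1.0 as in the table; the printed values agree with the table to the precision shown there:

```python
f  = lambda x: x**2 + 2
df = lambda x: 2*x

x = -1.0                                      # x_0 from Table 3
for i in range(1, 10):
    x_new = x - f(x) / df(x)
    ea = abs((x_new - x) / x_new) * 100.0
    print(i, round(x_new, 5), round(ea, 2))   # 0.5 (300.0), -1.75 (128.57), ...
    x = x_new
```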
4. Root jumping

In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root.

For example, for

f(x) = sin x = 0

choose the initial guess x_0 = 2.4π = 7.539822. It will converge to x = 0 instead of x = 2π = 6.2831853.

Figure 11. Root jumping from the intended location of the root for f(x) = sin x = 0.
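A sketch of the jump for f(x) = sin x with the same starting guess (the Newton-Raphson step here is x - sin x / cos x):

```python
import math

x = 2.4 * math.pi                 # initial guess 7.539822, close to the root at 2*pi
for _ in range(10):
    x = x - math.sin(x) / math.cos(x)
print(x)                          # ~0: the iterates jump past 2*pi and settle on x = 0
```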
Secant Method





Secant Method: Derivation

Newton's method:

x_{i+1} = x_i - f(x_i) / f'(x_i)        (1)

Approximate the derivative by

f'(x_i) ≈ [f(x_i) - f(x_{i-1})] / (x_i - x_{i-1})        (2)

Substituting Equation (2) into Equation (1) gives the secant method:

x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / [f(x_i) - f(x_{i-1})]

Figure 1. Geometrical illustration of the Newton-Raphson method.
Secant Method: Derivation

The secant method can also be derived from geometry. The similar triangles in Figure 2 give

AB / AE = DC / DE

which can be written as

f(x_i) / (x_{i+1} - x_i) = f(x_{i-1}) / (x_{i+1} - x_{i-1})

On rearranging, the secant method is given as

x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / [f(x_i) - f(x_{i-1})]

Figure 2. Geometrical representation of the secant method.
Algorithm for the Secant Method

Step 1
Calculate the next estimate of the root from two initial guesses:

x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / [f(x_i) - f(x_{i-1})]

Find the absolute relative approximate error:

|ε_a| = |(x_{i+1} - x_i) / x_{i+1}| × 100
Step 2
Check whether the absolute relative approximate error is greater than the pre-specified relative error tolerance. If so, go back to Step 1; otherwise, stop the algorithm.

Also check whether the number of iterations has exceeded the maximum number of iterations allowed.
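A minimal Python sketch of Steps 1–2; the function name, tolerance default, and iteration cap are illustrative, not from the slides:

```python
def secant(f, x_prev, x_curr, es=0.05, max_iter=50):
    """Secant iteration: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    for i in range(1, max_iter + 1):
        x_new = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        ea = abs((x_new - x_curr) / x_new) * 100.0   # Step 1: |eps_a| in percent
        x_prev, x_curr = x_curr, x_new
        if ea <= es:                                 # Step 2: within tolerance -> stop
            return x_curr, ea, i
    raise RuntimeError("maximum number of iterations exceeded")
```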
Example 1
A polynomial is given as

f(x) = x^3 - 0.165x^2 + 3.993×10^{-4}

Use the secant method of finding roots of equations to find the root of the above equation. Conduct three iterations to estimate the root, and find the absolute relative approximate error and the number of significant digits at least correct at the end of each iteration.
Example 1 Cont.
Solution

To aid in the understanding of how this method works to find the root of the equation, the graph of f(x) is shown to the right, where

f(x) = x^3 - 0.165x^2 + 3.993×10^{-4}

Figure 4. Graph of the function f(x).
Example 1 Cont.
Let us assume the initial guesses of the root of f(x) = 0 as x_{-1} = 0.02 and x_0 = 0.05.

Iteration 1
The estimate of the root is

x_1 = x_0 - f(x_0)(x_0 - x_{-1}) / [f(x_0) - f(x_{-1})]
    = 0.05 - [(0.05)^3 - 0.165(0.05)^2 + 3.993×10^{-4}](0.05 - 0.02) / {[(0.05)^3 - 0.165(0.05)^2 + 3.993×10^{-4}] - [(0.02)^3 - 0.165(0.02)^2 + 3.993×10^{-4}]}
    = 0.06461
Example 1 Cont.
The absolute relative approximate error |ε_a| at the end of Iteration 1 is

|ε_a| = |(x_1 - x_0) / x_1| × 100 = |(0.06461 - 0.05) / 0.06461| × 100 = 22.62%

The number of significant digits at least correct is 0, as you need an absolute relative approximate error of 5% or less for at least one significant digit to be correct in your result.

Figure 5. Graph of the results of Iteration 1.
Example 1 Cont.
Iteration 2
The estimate of the root is

x_2 = x_1 - f(x_1)(x_1 - x_0) / [f(x_1) - f(x_0)]
    = 0.06461 - [(0.06461)^3 - 0.165(0.06461)^2 + 3.993×10^{-4}](0.06461 - 0.05) / {[(0.06461)^3 - 0.165(0.06461)^2 + 3.993×10^{-4}] - [(0.05)^3 - 0.165(0.05)^2 + 3.993×10^{-4}]}
    = 0.06241

The absolute relative approximate error |ε_a| at the end of Iteration 2 is

|ε_a| = |(x_2 - x_1) / x_2| × 100 = |(0.06241 - 0.06461) / 0.06241| × 100 = 3.525%

The number of significant digits at least correct is 1, as you need an absolute relative approximate error of 5% or less for at least one significant digit to be correct in your result.

Figure 6. Graph of the results of Iteration 2.
Example 1 Cont.
Iteration 3
The estimate of the root is

x_3 = x_2 - f(x_2)(x_2 - x_1) / [f(x_2) - f(x_1)]
    = 0.06241 - [(0.06241)^3 - 0.165(0.06241)^2 + 3.993×10^{-4}](0.06241 - 0.06461) / {[(0.06241)^3 - 0.165(0.06241)^2 + 3.993×10^{-4}] - [(0.06461)^3 - 0.165(0.06461)^2 + 3.993×10^{-4}]}
    = 0.06238

The absolute relative approximate error |ε_a| at the end of Iteration 3 is

|ε_a| = |(x_3 - x_2) / x_3| × 100 = |(0.06238 - 0.06241) / 0.06238| × 100 = 0.0595%

The number of significant digits at least correct is 2, as you need an absolute relative approximate error of 0.5% or less for two significant digits to be correct in your result.

Figure 7. Graph of the results of Iteration 3.
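As a check, the three secant iterations above can be reproduced numerically (a sketch; because full double precision is carried instead of the rounded intermediate values on the slides, the printed errors differ slightly in the last digits):

```python
f = lambda x: x**3 - 0.165*x**2 + 3.993e-4

x_prev, x_curr = 0.02, 0.05                   # initial guesses x_{-1} and x_0
for i in range(1, 4):
    x_new = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    ea = abs((x_new - x_curr) / x_new) * 100.0
    print(i, round(x_new, 5), round(ea, 3))   # 0.06461, 0.06241, 0.06238
    x_prev, x_curr = x_curr, x_new
```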
Advantages
- Converges fast, if it converges.
- Requires two guesses that do not need to bracket the root.
Drawbacks

Division by zero
The secant formula divides by f(x_i) - f(x_{i-1}), so the method fails when two successive guesses give equal function values.

Figure: Pitfall of division by zero in the secant method for f(x) = sin x = 0, where the previous guess and the new guess give equal function values.
Drawbacks (continued)

Root jumping
As with the Newton-Raphson method, the secant line may cross the x-axis far from the current guesses, so the iterates can jump to and converge on a root other than the intended one.

Figure: Root jumping in the secant method for f(x) = sin x = 0, showing the first guess x_1', the previous guess x_0, the secant line, and the new guess x_1.
Müller's Method

Müller's method obtains a root estimate by projecting to the x-axis a parabola that passes through three function values.
Müller's Method

The method consists of deriving the coefficients of the parabola that goes through the three points:

1. Write the equation in a convenient form:

f_2(x) = a(x - x_2)^2 + b(x - x_2) + c
2. The parabola should intersect the three points [x_0, f(x_0)], [x_1, f(x_1)], and [x_2, f(x_2)]. The coefficients of the polynomial can be estimated by substituting the three points to give

f(x_0) = a(x_0 - x_2)^2 + b(x_0 - x_2) + c
f(x_1) = a(x_1 - x_2)^2 + b(x_1 - x_2) + c
f(x_2) = a(x_2 - x_2)^2 + b(x_2 - x_2) + c

3. These three equations can be solved for the three unknowns a, b, and c. Since two of the terms in the third equation are zero, it can be immediately solved for c = f(x_2), leaving

f(x_0) - f(x_2) = a(x_0 - x_2)^2 + b(x_0 - x_2)
f(x_1) - f(x_2) = a(x_1 - x_2)^2 + b(x_1 - x_2)
If we define

h_0 = x_1 - x_0,  h_1 = x_2 - x_1
δ_0 = [f(x_1) - f(x_0)] / (x_1 - x_0),  δ_1 = [f(x_2) - f(x_1)] / (x_2 - x_1)

then substituting into the two equations gives

(h_0 + h_1) b - (h_0 + h_1)^2 a = h_0 δ_0 + h_1 δ_1
h_1 b - h_1^2 a = h_1 δ_1

which can be solved for a and b:

a = (δ_1 - δ_0) / (h_1 + h_0)
b = a h_1 + δ_1
c = f(x_2)
Roots can then be found by applying an alternative form of the quadratic formula:

x_3 = x_2 + (-2c) / (b ± sqrt(b^2 - 4ac))

The error can be calculated as

|ε_a| = |(x_3 - x_2) / x_3| × 100%

The ± term yields two roots; the sign is chosen to agree with the sign of b. This results in the largest denominator and gives the root estimate that is closest to x_2.
Once x_3 is determined, the process is repeated using the following guidelines:
1. If only real roots are being located, choose the two original points that are nearest the new root estimate, x_3.
2. If both real and complex roots are estimated, employ a sequential approach just like in the secant method: x_1, x_2, and x_3 replace x_0, x_1, and x_2.
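A compact Python sketch of the procedure just described, using the sequential replacement of guideline 2 and a complex square root so that complex roots can also be followed; the function name and defaults are illustrative:

```python
import cmath

def muller(f, x0, x1, x2, es=0.05, max_iter=50):
    """Mueller's method: fit a parabola through three points and take its root."""
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # choose the sign that agrees with b, giving the largest denominator
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 + (-2 * c) / denom
        if abs((x3 - x2) / x3) * 100.0 <= es:
            return x3
        x0, x1, x2 = x1, x2, x3        # guideline 2: x1, x2, x3 replace x0, x1, x2
    raise RuntimeError("maximum number of iterations exceeded")
```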
Example
