
2nd Term A.Y.

2019-2020
ECADVMIL (ADVANCE ENGINEERING MATHEMATICS FOR ECE – LAB)
MATLAB DOCUMENTATION
ECE181

Submitted by:
Pagulayan, Milen Joan T.
2018-101945
BSECE

Submitted to:
Engr. Armil S. Monsura
Instructor

January 30, 2020


TABLE OF CONTENTS

I. BRACKETING METHODS
A. Bisection Method
B. False Position Method
C. Modified False Position Method

II. OPEN METHODS

A. Fixed – Point Iteration


B. Newton – Raphson Method
C. Secant Method
D. Modified Secant Method

III. HYBRID (BRACKETED/OPEN)

A. Brent’s Method

IV. ROOTS OF POLYNOMIALS

A. Muller’s Method
B. Bairstow’s Method

V. GAUSS/GAUSS – JORDAN ELIMINATION


I. ROOT FINDING METHODS

BRACKETING METHOD
Bracketing methods determine successively smaller intervals (brackets)
that contain a root. When the interval is small enough, then a root has been
found. They generally use the intermediate value theorem, which asserts that
if a continuous function has values of opposite signs at the end points of an
interval, then the function has at least one root in the interval. Therefore, they
require starting with an interval such that the function takes opposite signs at
the end points of the interval. However, in the case of polynomials there are
other methods (Descartes' rule of signs, Budan's theorem and Sturm's
theorem) for getting information on the number of roots in an interval. They
lead to efficient algorithms for real-root isolation of polynomials, which ensure
finding all real roots with a guaranteed accuracy.

A. BISECTION METHOD

MATHEMATICAL BACKGROUND.

In mathematics, the bisection method is a root-finding method that applies


to any continuous functions for which one knows two values with opposite signs.
The method consists of repeatedly bisecting the interval defined by these values
and then selecting the subinterval in which the function changes sign, and
therefore must contain a root. It is a very simple and robust method, but it is also
relatively slow. Because of this, it is often used to obtain a rough approximation to
a solution which is then used as a starting point for more rapidly converging
methods. The method is also called the interval halving method, the binary search
method, or the dichotomy method.

 The Bisection Method is a successive approximation method that narrows
down an interval that contains a root of the function f(x).
 The Bisection Method is given an initial interval [a..b] that contains a root.
 (We can use the property sign of f(a) ≠ sign of f(b) to find such an initial
interval.)
 The Bisection Method will cut the interval into 2 halves and check which
half interval contains a root of the function.
 The Bisection Method will keep cutting the interval in halves until the
resulting interval is extremely small.
 The root is then approximately equal to any value in the final (very small)
interval.

METHOD: FORMULA
The method is applicable for numerically solving the equation f(x) = 0 for
the real variable x, where f is a continuous function defined on an interval [a, b]
and where f(a) and f(b) have opposite signs. In this case a and b are said to bracket
a root since, by the intermediate value theorem the continuous function f must
have at least one root in the interval (a, b).

At each step the method divides the interval in two by computing the midpoint c =
(a+b) / 2 of the interval and the value of the function f(c) at that point. Unless c is
itself a root (which is very unlikely, but possible) there are now only two
possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c)
and f(b) have opposite signs and bracket a root. The method selects the
subinterval that is guaranteed to be a bracket as the new interval to be used in the
next step. In this way an interval that contains a zero of f is reduced in width by
50% at each step. The process is continued until the interval is sufficiently small.

Explicitly, if f(a) and f(c) have opposite signs, then the method sets c as the new
value for b, and if f(b) and f(c) have opposite signs then the method sets c as the
new a. (If f(c)=0 then c may be taken as the solution and the process stops.) In both
cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this
smaller interval

In doing so, there are iteration steps that need to be followed:


The input for the method is a continuous function f, an interval [a, b], and
the function values f(a) and f(b). The function values are of opposite sign (there is
at least one zero crossing within the interval). Each iteration performs these steps:

1. Calculate c, the midpoint of the interval, c = (a + b)/2.


2. Calculate the function value at the midpoint, f(c).
3. If convergence is satisfactory (that is, c - a is sufficiently small, or |f(c)| is
sufficiently small), return c and stop iterating.
4. Examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c))
so that there is a zero crossing within the new interval.

When implementing the method on a computer, there can be problems with


finite precision, so there are often additional convergence tests or limits to the
number of iterations. Although f is continuous, finite precision may preclude a
function value ever being zero. For example, consider f(x) = x − π; there will never
be a finite representation of x that gives zero. Additionally, the difference
between a and b is limited by the floating point precision; i.e., as the difference
between a and b decreases, at some point the midpoint of [a, b] will be
numerically identical to (within floating point precision of) either a or b.
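
A quick MATLAB experiment (a hypothetical two-line check, not part of any of
the programs below) illustrates this limit:

a = 1;                  % left endpoint
b = 1 + eps(1);         % right endpoint, one unit of roundoff above a
c = (a + b)/2;          % the midpoint must round to a representable number
disp(c == a || c == b)  % prints 1: the midpoint coincides with an endpoint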

ALGORITHM

EXAMPLE
Example-1
1. Find a root of the equation f(x) = x^3 - x - 1 = 0 using the Bisection method

Solution:
Here x^3 - x - 1 = 0
Let f(x) = x^3 - x - 1

Here
x      0    1    2
f(x)   -1   -1   5

1st iteration :
Here f(1)=-1<0 and f(2)=5>0
∴ Now, Root lies between 1 and 2
x0=(1+2)/2=1.5
f(x0)=f(1.5)=0.875>0

2nd iteration :
Here f(1)=-1<0 and f(1.5)=0.875>0
∴ Now, Root lies between 1 and 1.5
x1=(1+1.5)/2=1.25
f(x1)=f(1.25)=-0.29688<0

3rd iteration :
Here f(1.25)=-0.29688<0 and f(1.5)=0.875>0
∴ Now, Root lies between 1.25 and 1.5
x2=(1.25+1.5)/2=1.375
f(x2)=f(1.375)=0.22461>0

4th iteration :
Here f(1.25)=-0.29688<0 and f(1.375)=0.22461>0
∴ Now, Root lies between 1.25 and 1.375
x3=(1.25+1.375)/2=1.3125
f(x3)=f(1.3125)=-0.05151<0

5th iteration :
Here f(1.3125)=-0.05151<0 and f(1.375)=0.22461>0
∴ Now, Root lies between 1.3125 and 1.375
x4=(1.3125+1.375)/2=1.34375
f(x4)=f(1.34375)=0.08261>0

6th iteration :
Here f(1.3125)=-0.05151<0 and f(1.34375)=0.08261>0
∴ Now, Root lies between 1.3125 and 1.34375
x5=(1.3125+1.34375)/2=1.32812
f(x5)=f(1.32812)=0.01458>0

7th iteration :
Here f(1.3125)=-0.05151<0 and f(1.32812)=0.01458>0
∴ Now, Root lies between 1.3125 and 1.32812
x6=(1.3125+1.32812)/2=1.32031
f(x6)=f(1.32031)=-0.01871<0

8th iteration :
Here f(1.32031)=-0.01871<0 and f(1.32812)=0.01458>0
∴ Now, Root lies between 1.32031 and 1.32812
x7=(1.32031+1.32812)/2=1.32422
f(x7)=f(1.32422)=-0.00213<0

9th iteration :
Here f(1.32422)=-0.00213<0 and f(1.32812)=0.01458>0
∴ Now, Root lies between 1.32422 and 1.32812
x8=(1.32422+1.32812)/2=1.32617
f(x8)=f(1.32617)=0.00621>0

10th iteration :
Here f(1.32422)=-0.00213<0 and f(1.32617)=0.00621>0
∴ Now, Root lies between 1.32422 and 1.32617
x9=(1.32422+1.32617)/2=1.3252
f(x9)=f(1.3252)=0.00204>0

11th iteration :
Here f(1.32422)=-0.00213<0 and f(1.3252)=0.00204>0
∴ Now, Root lies between 1.32422 and 1.3252
x10=(1.32422+1.3252)/2=1.32471
f(x10)=f(1.32471)=-0.00005<0

Approximate root of the equation x^3 - x - 1 = 0 using the Bisection method is 1.32471

FUNCTIONS
Below is example pseudocode for the bisection method
INPUT: Function f,
endpoint values a, b,
tolerance TOL,
maximum iterations NMAX
CONDITIONS: a < b,
either f(a) < 0 and f(b) > 0 or f(a) > 0 and f(b) < 0
OUTPUT: value which differs from a root of f(x) = 0 by less than TOL

N ← 1
while N ≤ NMAX do // limit iterations to prevent infinite loop
c ← (a + b)/2 // new midpoint
if f(c) = 0 or (b – a)/2 < TOL then // solution found
Output(c)
Stop
end if
N ← N + 1 // increment step counter
if sign(f(c)) = sign(f(a)) then a ← c else b ← c // new interval
end while
Output("Method failed.") // max number of steps exceeded

Example of MATLAB Function

Suppose a file f.m containing

function y = f(x)
y = x.^3 - 2;

exists. Then:
>> format long
>> eps_abs = 1e-5;
>> eps_step = 1e-5;
>> a = 0.0;
>> b = 2.0;
>> while (b - a >= eps_step || ( abs( f(a) ) >= eps_abs && abs( f(b)
) >= eps_abs ) )
c = (a + b)/2;
if ( f(c) == 0 )
break;
elseif ( f(a)*f(c) < 0 )
b = c;
else
a = c;
end
end
>> [a b]
ans = 1.259918212890625 1.259925842285156
>> abs(f(a))
ans = 0.0000135103601622
>> abs(f(b))
ans = 0.0000228224229404

Thus, we would choose 1.259918212890625 as our approximation to the cube-root


of 2, which has an actual value (to 16 digits) of 1.259921049894873. An
implementation of a function would be
function [ r ] = bisection( f, a, b, N, eps_step, eps_abs )
% Check that that neither end-point is a root
% and if f(a) and f(b) have the same sign, throw an exception.

if ( f(a) == 0 )
r = a;
return;
elseif ( f(b) == 0 )
r = b;
return;
elseif ( f(a) * f(b) > 0 )
error( 'f(a) and f(b) do not have opposite signs' );
end

% We will iterate N times and if a root was not


% found after N iterations, an exception will be thrown.

for k = 1:N
% Find the mid-point
c = (a + b)/2;

% Check if we found a root or whether or not


% we should continue with:
% [a, c] if f(a) and f(c) have opposite signs, or
% [c, b] if f(c) and f(b) have opposite signs.

if ( f(c) == 0 )
r = c;
return;
elseif ( f(c)*f(a) < 0 )
b = c;
else
a = c;
end

% If |b - a| < eps_step, check whether or not
% |f(a)| < |f(b)| and |f(a)| < eps_abs and return 'a', or
% |f(b)| < eps_abs and return 'b'.

if ( b - a < eps_step )
if ( abs( f(a) ) < abs( f(b) ) && abs( f(a) ) < eps_abs )
r = a;
return;
elseif ( abs( f(b) ) < eps_abs )
r = b;
return;
end
end
end

error( 'the method did not converge' );


end

B. FALSE POSITION METHOD


 Mathematical Background
In mathematics, the regula falsi method or false position method is
a very old method for solving an equation in one unknown, that, in modified
form, is still in use. In simple terms, the method is the trial and error
technique of using test ("false") values for the variable and then adjusting
the test value according to the outcome. This is sometimes also referred to
as "guess and check". Versions of the method predate the advent of algebra
and the use of equations. The false-position method is a modification on the
bisection method: if it is known that the root lies on [a, b], then it is
reasonable that we can approximate the function on the interval by
interpolating the points (a, f(a)) and (b, f(b)).

 Method

The method of false position provides an exact solution for linear functions,
but more direct algebraic techniques have supplanted its use for these functions.
However, in numerical analysis, double false position became a root-finding
algorithm used in iterative numerical approximation techniques. Many equations,
including most of the more complicated ones, can be solved only by iterative
numerical approximation. This consists of trial and error, in which various values
of the unknown quantity are tried. That trial-and-error may be guided by
calculating, at each step of the procedure, a new estimate for the solution. There
are many ways to arrive at a calculated-estimate and regula falsi provides one of
these.

Given an equation, move all of its terms to one side so that it has the form, f (x) =
0, where f is some function of the unknown variable x. A value c that satisfies this
equation, that is, f (c) = 0, is called a root or zero of the function f and is a solution
of the original equation. If f is a continuous function and there exist two points a0
and b0 such that f (a0) and f (b0) are of opposite signs, then, by the intermediate
value theorem, the function f has a root in the interval (a0, b0).

There are many root-finding algorithms that can be used to obtain


approximations to such a root. One of the most common is Newton's method, but
it can fail to find a root under certain circumstances and it may be computationally
costly since it requires a computation of the function's derivative. Other methods
are needed and one general class of methods are the two-point bracketing
methods. These methods proceed by producing a sequence of shrinking intervals
[ak, bk], at the kth step, such that (ak, bk) contains a root of f. The fact that regula
falsi always converges, and has versions that do well at avoiding slowdowns,
makes it a good choice when speed is needed. However, its rate of convergence
can drop below that of the bisection method.

The false position method differs from the bisection method only in the
choice it makes for subdividing the interval at each iteration. It converges faster
to the root because it is an algorithm which uses appropriate weighting of the
initial end points x1 and x2 using the information about the function, or the data of
the problem. In other words, finding x3 is a static procedure in the case of the
bisection method since for a given x1 and x2, it gives identical x3, no matter what
the function we wish to solve. On the other hand, the false position method uses
the information about the function to arrive at x3.

 Formula

The poor convergence of the bisection method as well as its poor adaptability
to higher dimensions (example systems of two or more non-linear equations)
motivate the use of better techniques. One such method is the Method of False
Position. Here, start with an initial interval [x1,x2], and we assume that the
function changes sign only once in this interval. Next, find an x3 in this interval,
which is given by the intersection of the x axis and the straight line passing
through (x1,f(x1)) and (x2,f(x2)). It is easy to verify that x3 is given by

x3 = x1 − f(x1)·(x2 − x1)/(f(x2) − f(x1)) = (x1·f(x2) − x2·f(x1))/(f(x2) − f(x1))

Then, choose the new interval from the two choices [x1,x3] or [x3,x2]
depending on in which interval the function changes sign.

The convergence rate of the bisection method could possibly be improved by


using a different solution estimate.
The regula falsi method calculates the new solution estimate as the x-
intercept of the line segment joining the endpoints of the function on the current
bracketing interval. Essentially, the root is being approximated by replacing the
actual function by a line segment on the bracketing interval and then using the
classical double false position formula on that line segment.

More precisely, suppose that in the k-th iteration the bracketing interval
is (ak, bk). Construct the line through the points (ak, f (ak)) and (bk, f (bk)), as
illustrated. This line is a secant or chord of the graph of the function f. In point-
slope form, its equation is given by

y − f(bk) = [(f(bk) − f(ak))/(bk − ak)]·(x − bk)

Now choose ck to be the x-intercept of this line, that is, the value of x for which y =
0, and substitute these values to obtain

0 = f(bk) + [(f(bk) − f(ak))/(bk − ak)]·(ck − bk)

Solving this equation for ck gives:

ck = bk − f(bk)·(bk − ak)/(f(bk) − f(ak)) = (ak·f(bk) − bk·f(ak))/(f(bk) − f(ak))
This last symmetrical form has a computational advantage:

As a solution is approached, ak and bk will be very close together, and nearly


always of the same sign. Such a subtraction can lose significant digits.
Because f (bk) and f (ak) are always of opposite sign the “subtraction” in the
numerator of the improved formula is effectively an addition (as is the
subtraction in the denominator too).

At iteration number k, the number ck is calculated as above and then,
if f(ak) and f(ck) have the same sign, set ak+1 = ck and bk+1 = bk; otherwise set
ak+1 = ak and bk+1 = ck. This process is repeated until the root is approximated
sufficiently well.
The above formula is also used in the secant method, but the secant method
always retains the last two computed points, and so, while it is slightly faster, it
does not preserve bracketing and may not converge.

 Algorithm

Example 1

Find a root of the equation f(x) = x^3 - x - 1 = 0 using the False Position method

Solution:

Here x^3 - x - 1 = 0

Let f(x) = x^3 - x - 1

Here
x      1    2
f(x)   -1   5

1st iteration :
Here f(1)=-1<0 and f(2)=5>0
∴ Now, Root lies between x0=1 and x1=2
x2=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x2=1-(-1)·(2-1)/(5-(-1))
x2=1.16667
f(x2)=f(1.16667)=-0.5787<0

2nd iteration :
Here f(1.16667)=-0.5787<0 and f(2)=5>0
∴ Now, Root lies between x0=1.16667 and x1=2
x3=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x3=1.16667-(-0.5787)·(2-1.16667)/(5-(-0.5787))
x3=1.25311
f(x3)=f(1.25311)=-0.28536<0

3rd iteration :
Here f(1.25311)=-0.28536<0 and f(2)=5>0
∴ Now, Root lies between x0=1.25311 and x1=2
x4=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x4=1.25311-(-0.28536)·(2-1.25311)/(5-(-0.28536))
x4=1.29344
f(x4)=f(1.29344)=-0.12954<0

4th iteration :
Here f(1.29344)=-0.12954<0 and f(2)=5>0
∴ Now, Root lies between x0=1.29344 and x1=2
x5=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x5=1.29344-(-0.12954)·(2-1.29344)/(5-(-0.12954))
x5=1.31128
f(x5)=f(1.31128)=-0.05659<0

5th iteration :
Here f(1.31128)=-0.05659<0 and f(2)=5>0
∴ Now, Root lies between x0=1.31128 and x1=2
x6=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x6=1.31128-(-0.05659)·(2-1.31128)/(5-(-0.05659))
x6=1.31899
f(x6)=f(1.31899)=-0.0243<0

6th iteration :
Here f(1.31899)=-0.0243<0 and f(2)=5>0
∴ Now, Root lies between x0=1.31899 and x1=2
x7=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x7=1.31899-(-0.0243)·(2-1.31899)/(5-(-0.0243))
x7=1.32228
f(x7)=f(1.32228)=-0.01036<0

7th iteration :
Here f(1.32228)=-0.01036<0 and f(2)=5>0
∴ Now, Root lies between x0=1.32228 and x1=2
x8=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x8=1.32228-(-0.01036)·(2-1.32228)/(5-(-0.01036))
x8=1.32368
f(x8)=f(1.32368)=-0.0044<0

8th iteration :
Here f(1.32368)=-0.0044<0 and f(2)=5>0
∴ Now, Root lies between x0=1.32368 and x1=2
x9=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x9=1.32368-(-0.0044)·(2-1.32368)/(5-(-0.0044))
x9=1.32428
f(x9)=f(1.32428)=-0.00187<0

9th iteration :
Here f(1.32428)=-0.00187<0 and f(2)=5>0
∴ Now, Root lies between x0=1.32428 and x1=2
x10=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x10=1.32428-(-0.00187)·(2-1.32428)/(5-(-0.00187))
x10=1.32453
f(x10)=f(1.32453)=-0.00079<0

10th iteration :
Here f(1.32453)=-0.00079<0 and f(2)=5>0
∴ Now, Root lies between x0=1.32453 and x1=2
x11=x0-f(x0)·(x1-x0)/(f(x1)-f(x0))
x11=1.32453-(-0.00079)·(2-1.32453)/(5-(-0.00079))
x11=1.32464
f(x11)=f(1.32464)=-0.00034<0
Approximate root of the equation x^3 - x - 1 = 0 using the False Position method is 1.3246
PSEUDOCODE

1. Start

2. Define function f(x)

3. Input
a. Lower and Upper guesses x0 and x1
b. tolerable error e

4. If f(x0)*f(x1) > 0
print "Incorrect initial guesses"
goto 3
End If

5. Do
x2 = x0 - ((x0-x1) * f(x0))/(f(x0) - f(x1))

If f(x0)*f(x2) < 0
x1 = x2
Else
x0 = x2
End If

While abs(f(x2)) > e


6. Print root as x2

7. Stop
EXAMPLES

1. Find the roots of the given equation

x^3 + 4x^2 - 10 = 0

%first plot the function

x=0:0.05:4;

f=@(x) (x.^3)+(4*(x.^2))-10;

plot(x,f(x));grid
x1=input ('x1=');

x2=input ('x2=');

for i= 1:20

f1=f(x1);

f2=f(x2);

x3=x2-((f2*(x1-x2)/(f1-f2)));

f3=f(x3);

fprintf('%13.4f %13.4f %13.6f %13.4f \n',x1,x2,x3,f3)

if sign(f2)==sign(f3)

x2 = x3 ;

else

x1=x2;x2=x3;

end

end

Answer: x1=1 x2=0 x3=0.9993

2. Find the real root of the equation x^3-2x-5=0 by using false position
method.

Solution:
Let f(x) = x^3-2x-5
f(2) = -1 (negative)
f(3) = 16 (positive)

Hence the root of the given equation lies between (2,3)


Let us find the first approximation (x2) by taking x0 = 2, x1 = 3:
x2 = x0 − {(x1 − x0)/[f(x1) − f(x0)]}·f(x0)
Therefore, x2 = 2 − [(3 − 2)/(16 + 1)]·(−1) = 2.0588
Here, f(x2) = f(2.0588) = −0.3908 (negative)
Hence, the root of the given equation lies between (2.0588, 3)
x3 = x1 − {(x2 − x1)/[f(x2) − f(x1)]}·f(x1)
Therefore, x3 = 2.0588 − [(3 − 2.0588)/(16 + 0.3908)]·(−0.3908) = 2.0813
Here, f(x3) = f(2.0813) = −0.1469 (negative)

Hence the root lies between (2.0813,3).


Therefore, x4=2.0896
Here, f(x4)=-0.0547

Hence root lies between (2.0896,3)


Therefore, x5=2.0927
Here, f(x5)=-0.0202

Hence root lies between (2.0927,3)


Therefore, x6=2.0939
Here, f(x6)=-0.0075

Hence root lies between (2.0939,3)


Therefore, x7=2.0943
Here, f(x7)=-0.0027

Hence root lies between (2.0943,3)


Therefore, x8=2.0945
Here, f(x8)=-0.0010

Hence root lies between (2.0945,3)


Therefore, x9=2.0945
Now, x8 and x9 are equal, therefore we can stop here.
So our final answer is x=2.0945.

FUNCTION:
% Setting x as symbolic variable

syms x;

% Input Section

y = input('Enter non-linear equations: ');

a = input('Enter first guess: ');

b = input('Enter second guess: ');

e = input('Tolerable error: ');

% Finding Functional Value

fa = eval(subs(y,x,a));

fb = eval(subs(y,x,b));

% Implementing False Position Method

if fa*fb > 0

disp('Given initial values do not bracket the root.');

else

c = a - (a-b) * fa/(fa-fb);

fc = eval(subs(y,x,c));

fprintf('\n\na\t\t\tb\t\t\tc\t\t\tf(c)\n');

while abs(fc)>e

fprintf('%f\t%f\t%f\t%f\n',a,b,c,fc);

if fa*fc< 0

b =c;

fb = eval(subs(y,x,b));

else
a =c;

fa = eval(subs(y,x,a));

end

c = a - (a-b) * fa/(fa-fb);

fc = eval(subs(y,x,c));

end

fprintf('\nRoot is: %f\n', c);

end

%false position method

clear;clc

%first plot the function

x=0:0.05:4;

f=@(x) (x.^3)+(4*(x.^2))-10;

plot(x,f(x));grid

x1=input ('x1=');

x2=input ('x2=');

for i= 1:20

f1=f(x1);

f2=f(x2);

x3=x2-((f2*(x1-x2)/(f1-f2)));

f3=f(x3);

fprintf('%13.4f %13.4f %13.6f %13.4f \n',x1,x2,x3,f3)

if sign(f2)==sign(f3)

x2 = x3 ;

else
x1=x2;x2=x3;

end

end

C. MODIFIED FALSE POSITION

Mathematical Background

Earlier modified method generates the approximations in the same manner as


the Regula Falsi method does. For a given function 𝑓(𝑥)which is continuous on
[a,b], such that 𝑓(𝑎) × 𝑓(𝑏) < 0, there exists a real root of 𝑓(𝑥) = 0 in the interval
[a,b] . Since 𝑓(𝑎) and 𝑓(𝑏) are of opposite signs, the straight line joining the points
(𝑎, 𝑓(𝑎)) and (𝑏, 𝑓(𝑏)) cuts the x-axis at the point (𝑥1 , 0), say. The equation of this
line is given by

x − a = [(y − f(a))/(f(b) − f(a))]·(b − a),

on the x-axis y = 0, so it becomes

x1 − a = [(0 − f(a))/(f(b) − f(a))]·(b − a).

Therefore, the first approximation of the desired root is given by

x1 = (a·f(b) − b·f(a))/(f(b) − f(a)).

According to Regula Falsi method, the second and higher approximations of the
desired root are as follows:

xk+1 = (a·f(b) − b·f(a))/(f(b) − f(a)),   k = 1, 2, 3, 4, ...

where each iteration set ends with either a = xk if f(b) × f(xk) < 0,

or b = xk if f(a) × f(xk) < 0.

But sometimes this method seems to take more than a reasonable number of
iterations in solving a problem. Considering this problem, an efficient
improvement of the method is developed. In section 2 the improved method is
described. An efficient algorithm and illustration with numerical example has
been presented in section 3. Section 4 presents a conclusion.
Method: Formula
For a given function 𝑓(𝑥), which is continuous on [a, b], such that 𝑓(𝑎) × 𝑓(𝑏) <
0, there exists a real root 𝑐 in the interval [a, b]. Now the straight line joining the
points (𝑎, 𝑓(𝑎)) and (𝑏, 𝑓(𝑏)) cuts the x-axis at the point (𝑐1 , 0). So the first
approximation of the desired root is

c1 = b − f(b)·(b − a)/(f(b) − f(a)) = (a·f(b) − b·f(a))/(f(b) − f(a)).

Next, to decide which straight line is used to compute the approximation c2, we
check f(b) × f(c1). Two cases arise:

Case 1: If the value of 𝑓(𝑏) × 𝑓(𝑐)is positive then we set

𝑏 = 𝑐1 (according to fig. 2.1)

Case 2: If the value of 𝑓(𝑏) × 𝑓(𝑐)is negative then we set

𝑎 = 𝑐1 (according to fig. 2.2)

Again for each case we check 𝑓(𝑐𝑘 ) × 𝑓(𝑐𝑘+1 )for k= 0,1,2,... ... where 𝑐0 = 𝑎. If
this value is positive then for case 1, we set

𝑓(𝑎) = 𝑓(𝑎)/2 (shown in fig. 2.1) and for case 2,


we set 𝑓(𝑏) = 𝑓(𝑏)/2 (shown in fig. 2.2).

Therefore, the root of the equation 𝑓(𝑥) = 0 can be found by iterative process
using the following formula

ck+1 = (a·f(b) − b·f(a))/(f(b) − f(a))   for k = 0, 1, 2, ...

where each iteration setting ends with initial guess 𝑐0 = 𝑎 and


either 𝑏 = 𝑐𝑘+1 if 𝑓(𝑏) × 𝑓(𝑐𝑘+1 ) > 0

and 𝑓(𝑎) = 𝑓(𝑎)/2 if 𝑓(𝑐𝑘 ) × 𝑓(𝑐𝑘+1 ) > 0

or, 𝑎 = 𝑐𝑘+1 if 𝑓(𝑎) × 𝑓(𝑐𝑘+1 ) > 0

and 𝑓(𝑏) = 𝑓(𝑏)/2 if 𝑓(𝑐𝑘 ) × 𝑓(𝑐𝑘+1 ) > 0

The iterative process continues until the absolute difference between two
successive values of 𝑐𝑘 is less than a desired value (app. zero).

Algorithm

To find a real root of the equation f(x) = 0 which lies in the interval [a,b], the following
algorithm and a computer program have been developed using MATLAB R2018 for the
simulation results:

Step 1: Define tolx = 1.e-6

Step 2: Define imax = 200

Step 3: INPUT a, b

Step 4: 𝐹𝑎 = 𝑓(𝑎) , 𝐹𝑏 = 𝑓(𝑏);

Step 5: i = 1

Step 6: If Fa*Fb<0

w = (a·Fb − b·Fa)/(Fb − Fa);

end

Step 7: For i = 2 to imax

Step 9: 𝐹𝑤 = 𝐹(𝑤);

Step 10: If Fa*Fw<0;

b=w;

Fa=(F(a)/2);

w = (a·Fw − b·Fa)/(Fw − Fa);
Fw=F(w);

Else if 𝐹𝑤 ∗ 𝐹𝑏 < 0;

𝑎 = 𝑤;

𝐹𝑏 = (𝐹(𝑏)/2);

w = (w·Fb − b·Fw)/(Fb − Fw);

Fw = F(w);

end
Step 11: fprintf (i, a, b, Fw)
Step 12: Error=abs(Fw);
Step 13: If (error<tolx)
fprintf(‘An exact solution x= was found’, w)
break
end
EXAMPLE

The following table shows the solutions of the equation in the interval [1, 3]
using the said modified Regula Falsi method:

Iteration No. k      a           b           ck          f(ck)
17                   1.414212    1.414216    1.414214    -0.000007
18                   1.414212    1.414214    1.414213     0.000003
19                   1.414213    1.414214    1.414214    -0.000002
20                   1.414213    1.414214    1.414213     0.000001

Table 1: Solution of f(x) = x^3 − 5x^2 − 2x + 10 = 0 using the presented method.
The following figure shows the graphical shape of the function f(x) considering
values from Table 1:

Fig 3.1: Graphical shape of f(x)


Script
function x=ModRegFal(a,b)
F = @(x)x^3-5*x^2-2*x+10;
imax=200; tolx=1.e-6;
Fa=F(a); Fb=F(b);
for i=1
if Fa*Fb<0
disp('iteration a b (w)Solution F(w)')
w=(a*Fb-b*Fa)/(Fb-Fa);
end
fprintf('%3i %11.6f%11.6f%11.6f%11.6f\n',i,a,b,w,F(w))
end
for i=2:imax
Fw=F(w);
if Fa*Fw<0
b=w;
Fa=(F(a)/2);
w=(a*Fw-b*Fa)/(Fw-Fa);
Fw=F(w);
elseif Fw*Fb<0
a=w;
Fb=(F(b)/2);
w=(w*Fb-b*Fw)/(Fb-Fw);
Fw=F(w);
end
fprintf('%3i %11.6f%11.6f%11.6f%11.6f\n',i,a,b,w,Fw)
error=abs(Fw);
if (error<tolx)
fprintf('An exact solution x=%11.6f was found',w)
break
end
end
x=w; % return the final approximation
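
Assuming the script above is saved as ModRegFal.m, the run behind Table 1
can be reproduced with:

x = ModRegFal(1, 3)   % brackets the root sqrt(2) of x^3 - 5x^2 - 2x + 10

which iterates until |F(w)| < 1e-6 and returns x ≈ 1.414214.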

II. OPEN METHODS


Open methods differ from bracketing methods, in that open methods
require only a single starting value or two starting values that do not necessarily
bracket a root. They may diverge as the computation progresses, but when they do
converge, they are much faster than bracketing methods.

A. Fixed Point Iteration


Mathematical Background

Fixed Point Iteration Method: A point, say, s is called a fixed point if it satisfies
the equation x = g(x). In this method, we first rewrite the equation (1), f(x) = 0, in the form

x = g(x)                                                        (2)

in such a way that any solution of the equation (2), which is a fixed point of g, is
a solution of equation (1). Then consider the following algorithm.
Algorithm 1: Start from any point x0 and consider the recursive process

xn+1 = g(xn),   n = 0, 1, 2, ...                                (3)
If g is continuous and (xn) converges to some l0, then it is clear that l0 is a fixed
point of g and hence it is a solution of the equation (1). Moreover, xn (for a large n)
can be considered as an approximate solution of the equation (1).

Theorem: Let g: [a, b] → [a, b] be a differentiable function such that |g′(x)| ≤ α < 1
for all x ∈ [a, b]. Then g has exactly one fixed point l0 in [a, b] and the sequence (xn)
defined by the process (3), with a starting point x0 ∈ [a, b], converges to l0.

Proof (*): By the intermediate value property g has a fixed point, say l0. The
convergence of (xn) to l0 follows from the following inequalities:

If l1 is a fixed point then |l0 − l1| = |g(l0) − g(l1)| ≤ α |l0 − l1|. This implies that
l0 = l1.

Theorem: Let l0 be a fixed point of g(x). Suppose g(x) is differentiable on [l0 − ε, l0 + ε]
for some ε > 0 and g satisfies the condition |g′(x)| ≤ α < 1 for all x ∈ [l0 − ε, l0 + ε].
Then the sequence (xn) defined by (3), with a starting point x0 ∈ [l0 − ε, l0 + ε],
converges to l0.

Proof: By the mean value theorem g([l0 −ε,l0 + ε]) ⊆ [l0 −ε,l0 + ε] (Prove!).
Therefore, the proof follows from the previous theorem. The previous theorem
essentially says that if the starting point is sufficiently close to the fixed point then
the chance of convergence of the iterative process is high.

Remark: If g is invertible then l0 is a fixed point of g if and only if l0 is a fixed point


of g−1. In view of this fact, sometimes we can apply the fixed point iteration method
for g−1 instead of g. For understanding, consider g(x) = 4x − 12; then |g′(x)| = 4 for all
x. So the fixed point iteration method may not work. However, g−1(x) = x/4 + 3,
and in this case |(g−1)′(x)| = 1/4 for all x.
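
A small MATLAB check of this remark (a sketch; the fixed point of g(x) = 4x − 12,
and hence of g−1, is x = 4):

g_inv = @(x) x/4 + 3;   % inverse of g(x) = 4*x - 12, with |g_inv'(x)| = 1/4 < 1
x = 0;                  % arbitrary starting point
for n = 1:20
    x = g_inv(x);       % iterate x_{n+1} = g_inv(x_n)
end
disp(x)                 % approaches the fixed point x = 4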

Newton’s Method or Newton-Raphson Method :


The following iterative method used for solving the equation f(x) = 0 is called
Newton’s method.

Algorithm 2: Start from a point x0 and consider the recursive process

xn+1 = xn − f(xn)/f′(xn),   n = 0, 1, 2, ...

It is understood that here we assume all the necessary conditions so that xn is well
defined. If we take g(x) = x − f(x)/f′(x), then Algorithm 2 is a particular case of
Algorithm 1. So, we will not get into the convergence analysis of Algorithm 2.

Steps in Writing

Finding Fixed Points with Fixed-Point Iteration (Basic Fixed-Point


Algorithm):

1. Initialize with guess p0 and i = 0 for the given equation. Substitute 0 for all values
of x.

2. Set pi+1 = g(pi). If the initial guess is zero, add 1 for the value of g(x); substitute
g(pi+1) into all values of x.

3. If |pi+1 − pi| > ε, set i = i + 1 and go to step 2.

4. Stop with p = pi+1; hence it is the real root of the given equation.

5. Write the given equation in the form x = g(x).

6. Take x0 = 0 and use the formula.

7. By using xn+1 = φ(xn), n = 0, 1, 2, ..., you can find the required root of the given
equation.

Algorithm of Fixed Point Iteration

function [x] = fixedpoint(g,I,y,tol,m)


% input: g, I, y, tol, max
% g - function
% I - interval
% y - starting point
% tol - tolerance (error)
% m - maximal number of iterations
% x - approximate solution
a=I(1);b=I(2);
if(y<a | y>b)
error('The starting iteration does not lie in I.')
end
x=y;
gx=g(y);
while(abs(x-gx)>tol & m>0)
if(gx<a | gx>b)
error('The point g(x) does not lie in I.')
end
x=gx;    % accept the new iterate
gx=g(x); % evaluate g at the new iterate
m=m-1;
end

Example of Fixed Point Iteration


Compute one root of e^(-x) − 3x = 0 correct to two decimal places between 0
and 1 using any suitable method.
Let f(x) = e^(-x) − 3x
Therefore f(0) = 1 and f(1) = -2.6321
So a root of the given equation lies between 0 and 1.
Write the given equation in the form

x = (1/3) e^(-x)

Taking x0 = 0 and using the formula

xn+1 = (1/3) e^(-xn)

We have

xn      (1/3) e^(-x)          Root

x1      (1/3) e^(-0)          0.3333
x2      (1/3) e^(-0.3333)     0.2389
x3      (1/3) e^(-0.2389)     0.2624
x4      (1/3) e^(-0.2624)     0.2564
x5      (1/3) e^(-0.2564)     0.2579
...     ...                   ...
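
With the fixedpoint function given earlier (using the corrected loop), this
computation can be reproduced; the iteration function and interval are taken
directly from this example:

g = @(x) exp(-x)/3;                        % x = (1/3) e^(-x) on I = [0, 1]
root = fixedpoint(g, [0 1], 0, 1e-4, 100)  % returns approximately 0.2576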

B. Newton-Raphson Method
MATHEMATICAL BACKGROUND
Newton-Raphson method, named after Isaac Newton and Joseph Raphson,
is a popular iterative method to find the root of a polynomial equation. It is also
known as Newton’s method, and is considered as limiting case of secant method.
Based on the first few terms of Taylor's series, the Newton-Raphson method is most
effective when the first derivative of the given function/equation is a large value. It is
often used to improve the value of the root obtained using other root-finding
methods in Numerical Methods.
Derivation of Newton-Raphson Method:
The theoretical and mathematical background behind Newton-Raphson
method and its MATLAB program (or program in any programming language) is
approximation of the given function by tangent line with the help of derivative,
after choosing a guess value of root which is reasonably close to the actual root.
The x- intercept of the tangent is calculated by using elementary algebra,
and this calculated x-intercept is typically better approximation to the root of the
function. This procedure is repeated till the root of desired accuracy is found.
Let's now go through a short mathematical background of Newton's
method. For this, consider a real value function f(x) as shown in the figure below:
Consider 𝑥1 to be the initial guess root of the function f(x) which is
essentially a differential function. Now, to derive better approximation, a tangent
line is drawn as shown in the figure. The equation of this tangent line is given by:
y = f′(x1)(x − x1) + f(x1)

where f′(x) is the derivative of function f(x).
As shown in the figure, f(x2) = 0, i.e. at x = x2, y = 0.
Therefore, 0 = f′(x1)(x2 − x1) + f(x1)
Solving, x2 = x1 − f(x1)/f′(x1)

Repeating the above process for 𝑥𝑛 and 𝑥𝑛+1 terms of the iteration process,
we get the general iteration formula for Newton-Raphson Method as:

FORMULA OF NEWTON-RAPHSON METHOD


xn+1 = xn − f(xn)/f′(xn)

This formula is used in the program code for Newton Raphson method in
MATLAB to find new guess roots. Just as with fixed-point iteration, the Newton-
Raphson approach will often diverge if the initial guesses are not sufficiently close
to the true roots. Whereas graphical methods could be employed to derive good
guesses for the single-equation case, no such simple procedure is available for the
multi-equation version.

STEPS TO FIND ROOT USING NEWTON-RAPHSON METHOD


1. Check if the given function is differentiable or not. If the function is not
differentiable, Newton’s method cannot be applied.
2. Find the first derivative f’(x) of the given function f(x).
3. Take an initial guess root of the function, say 𝑥1 .
4. Use Newton’s iteration formula to get new better approximate of the root,
say 𝑥2
x2 = x1 − f(x1)/f′(x1)
5. Repeat the process for 𝑥3 , 𝑥4 … till the actual root of the function is obtained,
fulfilling the tolerance of error.

PRESENT ALGORITHM OF NEWTON-RAPHSON


Newton Raphson Method in MATLAB:
MATLAB Program for Newton-Raphson Method

FUNCTION OF NEWTON-RAPHSON
In this code for Newton’s method in MATLAB, any polynomial function can
be given as input. Initially in the program, the input function has been defined and
is assigned to a variable ‘a’.
After getting the initial guess value of the root and the allowed error, the
program, following basic MATLAB syntax, finds the root through the iteration
procedure explained in the theory above.
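
The program listing itself is not reproduced here; a minimal sketch consistent
with that description (the variable names and the stopping rule are assumptions)
is:

a   = @(x) x^3 - x - 1;      % input function, assigned to the variable 'a'
da  = @(x) 3*x^2 - 1;        % its first derivative
x   = 1.5;                   % initial guess root
err = 1e-9;                  % allowed error
dx  = a(x)/da(x);            % first Newton correction f(x)/f'(x)
while abs(dx) > err
    dx = a(x)/da(x);         % correction f(x_n)/f'(x_n)
    x  = x - dx;             % x_{n+1} = x_n - f(x_n)/f'(x_n)
end
fprintf('Root: %.9f\n', x)   % prints 1.324717957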
EXAMPLE OF NEWTON-RAPHSON METHOD
This example takes the same function used in the program above and solves it
numerically, with the result corrected to 9 decimal places.
Solution:
Given function: x^3 − x − 1 = 0, which is differentiable.
The first derivative of f(x) is f′(x) = 3x^2 − 1
Let's determine the guess value.
f(1) = 1 − 1 − 1 = −1 and f(2) = 8 − 2 − 1 = 5
Therefore, the root lies in the interval [1, 2]. So, assume x1 = 1.5 as the initial
guess root of the function f(x) = x^3 − x − 1.
Now,
f(1.5) = 1.5^3 − 1.5 − 1 = 0.875
f′(1.5) = 3 · 1.5^2 − 1 = 5.750
Using Newton's iteration formula:
x2 = x1 − f(x1)/f′(x1) = 1.5 − 0.875/5.750 = 1.34782600
The iteration for x3 , x4 , …. is done similarly.
The table below shows the whole iteration procedure for the given function in
the program code for Newton Raphson in MATLAB and this numerical example.

Therefore, x = 1.324717957 is the desired root of the given function, corrected to


9 decimal places. The MATLAB program gives the result x = 1.3252 only, but this
value can be improved by improving the value of allowable error entered.

C. SECANT METHOD
MATHEMATICAL BACKGROUND
The Secant method is a recapitulative tool of numerical methods and
mathematics to find the estimated roots of polynomial equations. In using
the Secant method, the difficulty in derivatives and inconvenience in calculating
some of the functions will be present. During the pattern of iteration, this method
assumes the function to be approximately linear in the region of interest. It is often
considered to be a finite difference calculation of Newton’s method even though
secant method was developed independently. However, it is generally used as an
alternative to the other method due to the fact of its being exempt from derivative.

In the late 20th century, Potra et al. established that the secant method is one of
the most efficient algorithms and procedures for solving nonlinear equations. It has
been used from the time of the early Italian algebraists and has been extensively
studied in the literature. It is well known that for smooth equations the classical
secant method is superlinearly convergent with Q-order (1 + √5)/2 ≈ 1.618. Then,

Ostrowski (1973) admitted that with the exception of the first step, only one
function value per step is used; its efficiency index is also (1 + √5)/2. The primary
simplification of the secant process meant for systems of two nonlinear equations
goes back to Gauss.

Its rate of convergence is more rapid than that of bisection method. So,
secant method is considered to be a much faster root finding method. At the same
time, there is no need to find the derivative of the function as in Newton-Raphson
method. However, the access in its limits is inevitable such when this method fails
to converge when f(xn) = f(xn-1) and If X-axis is tangential to the curve, it may not
converge to the solution.

METHOD: FORMULA
As stated above, it is hard to use Secant method due to the presence of
inconvenience in the process of manual calculation of the functions. As a known
method in numerical analysis, the Secant method estimates the point of intersection of the curve
and the X- axis (i.e. root of the equation that represents the curve) as exactly as
possible.
For that, it uses succession of roots of secant line in the curve. Assume x0
and x1 to be the initial guess values, and construct a secant line to the curve
through (x0, f(x0)) and (x1, f(x1)). The equation of this secant line is given by:

y = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)

At first glance, this method is nearly similar to the linear interpolation


method, but there is a main difference between these two methods. In the secant
method, it is not vital that two starting points to be in opposite sign. Therefore, the
secant method is not a kind of bracketing method but an open method. This major
difference causes the secant method to be possibly divergent in some cases, but
when this method is convergent, the convergent speed of this method is better
than linear interpolation method in most of the problems. The first key operation
in the equation above is the same with the 2-point slope system, which is:

f′(xk) = (f(xk−1) − f(xk))/(xk−1 − xk)

Remember the derivative is the "slope of the line tangent to the curve". Once
the pattern is seen, the remaining operation can be obtained by substituting into
Newton's formula. After that, the equation leads back to the equation of this
secant line.

xk+1 = xk − f(xk)·(xk−1 − xk)/(f(xk−1) − f(xk))

As you observe, with the procedure of transposition, the equation of the


Newton’s Formula will immensely lead to the equation of this secant line. The
Secant method replaces guesses in strict sequence, from that, 𝑥𝑘+1 replaces 𝑥𝑘 and
𝑥𝑘 replaces 𝑥𝑘−1 . However, with this method, there is no guarantee of bracketing
root due of the reason that it is undertaken as an open method.

STEPS:

1. Write the primary equation. The equation of this secant line is given by:

y = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)

2. If x is the root of the given equation, it must satisfy f(x) = 0. Replacing y = 0
in the above equation, and solving for x, we will get:

0 = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)

3. Now, considering this new x as x2, and repeating the same process for x2,
x3, x4, we end up with the following expressions:
x2 = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))

x3 = x2 − f(x2)·(x2 − x1)/(f(x2) − f(x1))

...

xn = xn−1 − f(xn−1)·(xn−1 − xn−2)/(f(xn−1) − f(xn−2))
4. If |x2 − x1| < e, let root = x2; else x0 = x1, x1 = x2, and go back to step 3, where
e is the acceptable approximate error.
5. After identifying the required formula given above, it can now be applied
in programming using MATLAB.
ALGORITHMS

In this program for secant method in MATLAB, first the equation to be


solved is defined and assigned with a variable ‘a’ using inline() library function.
Then, the approximate guess values and desired tolerance of error are entered to
the program, following the MATLAB syntax.
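
A sketch of such a program, using an anonymous function in place of inline()
(the function and guesses below are the ones from the example that follows):

a  = @(x) cos(x) + 2*sin(x) + x^2;          % equation to be solved, assigned to 'a'
x0 = 0; x1 = -0.1;                          % approximate guess values
e  = 0.001;                                 % desired tolerance of error
x2 = x1 - a(x1)*(x1 - x0)/(a(x1) - a(x0));  % secant update
while abs(x2 - x1) > e
    x0 = x1; x1 = x2;                       % retain the two most recent points
    x2 = x1 - a(x1)*(x1 - x0)/(a(x1) - a(x0));
end
fprintf('Root: %.4f\n', x2)                 % about -0.6595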

SECANT METHOD EXAMPLE

Given the function: f(x) = cos(x) + 2 sin(x) + x^2, with x0 = 0 and x1 =
−0.1 taken as initial approximations, and the allowed error 0.001, process a
complete iteration.

For the first iteration,

f(x1) = cos(−0.1) + 2 sin(−0.1) + (−0.1)^2 = 0.8053

f(x0) = cos(0) + 2 sin(0) + 0^2 = 1

As we know,

x2 = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))

x2 = −0.1 − 0.8053 · (−0.1 − 0)/(0.8053 − 1)

x2 = −0.5136

Similarly, x3 = −0.6100, and so on...

The complete calculation and iteration of secant method (and MATLAB program)
for the given function is presented in the table below:
n      xn−1       xn         xn+1       |f(xn+1)|   |xn+1 − xn|
1 0.0 -0.1 -0.5136 0.1522 0.4136
2 -0.1 -0.5136 -0.6100 0.0457 0.0964
3 -0.5136 -0.6100 -0.6514 0.0065 0.0414
4 -0.6100 -0.6514 -0.6582 0.0013 0.0068
5 -0.6514 -0.6582 -0.6598 0.0006 0.0016
6 -0.6582 -0.6598 -0.6595 0.0002 0.0003
Thus, the root of 𝑓(𝑥) = 𝑐𝑜𝑠(𝑥) + 2 𝑠𝑖𝑛(𝑥) + 𝑥 2 as obtained from secant method
as well as its MATLAB program is -0.6595.

Proof:

f(−0.6595) = cos(−0.6595) + 2 sin(−0.6595) + (−0.6595)^2

|f(−0.6595)| ≈ 0.0002

D. MODIFIED SECANT METHOD

Mathematical Background
Newton’s method (Newton-Rhapson) is fast (quadratic convergence) but
derivative may not be available.
Secant method uses two points to approximate the derivative, but
approximation may be poor if points are far apart.
Modified Secant method is a much better approximation because it uses one
point, and the derivative is found by using another point some small distance, d,
away

Rather than using two arbitrary values to estimate the derivative, an
alternative approach involves a fractional perturbation of the independent
variable to estimate f′(x),

f′(xi) ≈ (f(xi + δxi) − f(xi))/(δxi),

where δ is a small perturbation fraction; substituting this into the Newton-Raphson
formula gives the modified secant update

xi+1 = xi − δxi·f(xi)/(f(xi + δxi) − f(xi))

(this is the update implemented in the script at the end of this section).

The classical secant method can be written as:

xn+1 = xn − f(xn)·(xn − xn−1)/(f(xn) − f(xn−1))
Our iterative procedure can be considered as a new approach based on
a better approximation to the derivative F′(xn) from xn and xn−1 in each iteration.
It takes the following form:

These parameters αn will be a control of the good approximation to the
derivative. Theoretically, if αn → 1, then

The modified secant method needs two evaluations of the function in each
iteration. If we consider a complicated function (or operator), this fact can
reduce its competitiveness. In this case, the idea is to consider αn = 0 after some
first iterations, because then the secant method usually obtains good enough
results.

THEOREM 2. Suppose that F is semismooth at a solution x* of F(x) = 0. If d− and
d+ are both positive (or negative), then there are two neighborhoods U and V of
x*, U ⊆ V, such that for each x0 ∈ U, Algorithm 1 is well defined and produces a
sequence of iterates {xn} such that

and {xn} converges to x* three-step Q-superlinearly. Furthermore, if

then {xn} is Q-linearly convergent with Q-factor β. If F is strongly semismooth at
x*, then {xn} converges to x* three-step Q-quadratically.

PROOF. This proof is based on the following. Since F is semismooth at x*, there is
a convex neighbourhood W of x* such that F is Lipschitz continuous on W. There
are two convex neighbourhoods U and V such that

whenever

Using the above results, the definition of method (1), and Lemma 1, it is not hard
to prove that

Equation (2) is only obtained in the cases xn−1 < x* < xn, xn < x* < xn−1. Thus, we have
proved the three-step Q-superlinear convergence of {xn}. However, in practice,
there are some advantages to this modified secant method. First, since the modified
quotient is a better approximation to F′(xn) than the classical one, the
convergence will be faster (the first iterations will be better). Next, the size of the
neighbourhoods can be larger, that is, we can consider worse starting points x0,
as we will see in numerical experiments. Finally, with our modification, usually
we could obtain Q-superlinear
convergence (or Q-quadratic convergence if F is strongly semismooth).

REMARK 3. We refer to [6,7] for the definitions and notations of Q-order. As in
[1], it is not hard to prove the following theorem.

THEOREM 4. Suppose that F is semismooth at a solution x* of F(x) = 0. If d− and d+
do not vanish and have different signs, then there exist two neighborhoods U and V
of x*, U ⊆ V, such that for each x0 ∈ U, Algorithm 1 is well defined and produces a
sequence {xn} such that

Present Algorithms

Steps in Writing
In order to show the performance of the modified secant method, we have
compared it with the classical secant method. We have tested on several
semismooth equations. In Table 1, we display the iterates for

with x0 = 0.1, x1 = 0.05. In this case, we have d−·d+ > 0, and obviously, the secant
method is three-step Q-quadratically convergent, the modified secant method
with αn = 0.9 is two-step Q-quadratically convergent, and the modified secant
method with αn = 1 − 10^(−10) is Q-quadratically convergent.

If we consider as starting points x1 = 0.2, x0 = 0.3, the conclusions are similar,
but the new approach is more convenient to use; see Table 2.

In fact, if we put x1 = 0.2, x0 = 0.6, the classical secant method has a
problem of convergence.
Nevertheless, the modified secant method conserves its good properties; see
Table 3.
In Table 4, we list the iterates for

with x0 = 0.1, x1 = 0.05. Here d−·d+ < 0; the two-step Q-quadratic convergence is
evident for the secant and modified secant method with αn = 0.9. But the
modification has a faster convergence. The modified secant method with αn = 1 −
10^(−10) is Q-quadratically convergent.

We have presented a modification of the secant method for semismooth


equations. We made a complete analysis of convergence for semismooth one-
dimensional equations. The new iterative method seems to work very well in our
preliminary numerical results, since we have obtained optimal order of
convergence. Of course, the generalization of this modification to higher
dimensions is similar to the classical secant method.

Examples
Determine the highest real root of
f (x) = 2x3 − 11.7x2 + 17.7x − 5

Modified secant method (three iterations, x0 = 3, δ = 0.01).


Compute the approximate percent relative errors for your
solutions.
SOLUTION:
Modified secant (δ = 0.01)

i    x          f(x)       δx        x+δx       f(x+δx)    f'(x)      εa
0    3          -3.2       0.03      3.03       -3.14928   1.6908
1    4.892595   35.7632    0.048926  4.9415212  38.09731   47.7068    38.68%
2    4.142949   9.73047    0.041429  4.1843789  10.7367    24.28771   18.09%
3    3.742316   2.203063   0.037423  3.7797391  2.748117   14.56462   10.71%

Other Function (Script)

function [root,approximate_error] = secant(func,xr,es,a,maxit)
% func= name of function.
% xr = initial guess
% es = desired relative error
% a= perturbation fraction
% maxit = maximum allowable iterations
if nargin<5, maxit=50;
end
%if maxit blank set to 50
if nargin<4, es=0.001;
end
%if es blank set to 0.001
% Secant method
iter = 0;
while (1)
xrn = xr - a*xr*func(xr)/(func(xr+a*xr) - func(xr));
iter = iter + 1;
if xrn ~= 0,
ea = abs((xrn - xr)/xrn)*100;
end
approximate_error(iter) = ea;
if ea <= es | iter >= maxit,
break, end
xrold = xr;
xr = xrn;
end

root = xrn;
disp(['total number of iterations to display the root (modified secant) =
',num2str(iter)]);
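
A usage sketch reproducing the worked example above (f(x) = 2x^3 − 11.7x^2 +
17.7x − 5, x0 = 3, perturbation a = 0.01):

f = @(x) 2*x^3 - 11.7*x^2 + 17.7*x - 5;
[root, ea] = secant(f, 3, 0.001, 0.01)  % continues past the three iterations
                                        % in the table toward the highest root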

III. HYBRID (BRACKETING/OPEN)


It is not easy to find the roots of equations involving nonlinearity using
analytic methods. In general, the appropriate methods for solving the nonlinear
equation f(x) = 0 are iterative methods. They are divided into two groups: open
methods and bracketing methods.

The open methods rely on formulas requiring a single initial guess
point or two initial guess points that do not necessarily bracket a real root. They
may sometimes diverge from the root as the iteration progresses. Some of the
known open methods are Secant method, Newton-Raphson method, and Muller’s
method.

The bracketing methods require two initial guess points, a and b, that
bracket or contain the root. The function also has the different parity at these
two initial guesses i.e. f(a)f(b) < 0. The width of the bracket is reduced as the
iteration progresses until the approximate solution to a desired accuracy is
reached. By following this procedure, a root of f is certainly found. Some of the
known bracketing methods are Bisection method, Regula Falsi method (or False
Position), and Improved or modified Regula Falsi method. In this article we will
focus only on the bracketing methods. In the rest of the text, let f be a real and
continuous function on an interval [a, b], and f(a) and f(b) have different parity
i.e. f(a)f(b) < 0. Therefore, there is at least one real root r in the interval [a, b] of
the equation f(x) = 0.

Open methods differ from bracketing methods, in that open methods


require only a single starting value or two starting values that do not necessarily
bracket a root. Open methods may diverge as the computation progresses, but
when they do converge, they usually do so much faster than bracketing methods.

A. BRENT’S METHOD

MATHEMATICAL BACKGROUND

Brent’s method is a root-finding algorithm that combines root bracketing,


bisection, and inverse quadratic interpolation. Brent's method is due to
Richard Brent and builds on an earlier algorithm by Theodorus Dekker. It is also
known as the Brent-Dekker method. Brent’s method will always converge as long
as the values of the function are computable within a given region containing a
root.

Since its development in 1972, Brent’s method has been the most popular
method for finding the zeros of functions. This method usually converges very
quickly to a zero for the occasional difficult functions encountered in practice; it
typically takes O(n) iterations to converge, where n is the number of steps
required for the bisection method to find the zero to approximately the same
accuracy. Brent has shown that this method requires as many as O(n²)
iterations in the worst case.

METHOD: FORMULA

Brent's method uses a Lagrange interpolating polynomial of degree 2.


Brent (1973) claims that this method will always converge as long as the values
of the function are computable within a given region containing a root. Given
three points x1, x2, and x3, Brent's method fits x as a quadratic function of y, then
uses the interpolation formula

x = [y − f(x1)][y − f(x2)]x3/([f(x3) − f(x1)][f(x3) − f(x2)])
  + [y − f(x2)][y − f(x3)]x1/([f(x1) − f(x2)][f(x1) − f(x3)])
  + [y − f(x3)][y − f(x1)]x2/([f(x2) − f(x3)][f(x2) − f(x1)])

Subsequent root estimates are obtained by setting y = 0, giving

x = x2 + P/Q

where

P = S[T(R − T)(x3 − x2) − (1 − R)(x2 − x1)]
Q = (T − 1)(R − 1)(S − 1)

with

R = f(x2)/f(x3), S = f(x2)/f(x1), T = f(x1)/f(x3).
STEPS

An overview of the operation of the method is as follows:

• The method begins with

– a stopping tolerance δ > 0,

– points a and b such that f(a)f(b) < 0.

If necessary, a and b are exchanged so that |f(b)| ≤ |f(a)|; thus b is regarded as


the better approximate solution. A third point c is initialized by setting c = a.

• At each iteration, the method maintains a, b, and c such that b ≠ c and

1. f(b)f(c) < 0, so that a solution lies between b and c if f is continuous;

2. |f(b)| ≤ |f(c)|, so that b can be regarded as the current approximate solution;

3. either a is distinct from b and c, or a = c and is the immediate past value of b.

Each iteration proceeds as follows:

1. If |b − c| ≤ δ, then the method returns b as the approximate solution.

2. Otherwise, the method determines a trial point ˆb as follows:

(i) If a = c, then b̂ is determined by linear (secant) interpolation:

b̂ = (a·f(b) − b·f(a))/(f(b) − f(a)).

(ii) Otherwise, a, b, and c are distinct, and ˆb is determined using inverse


quadratic interpolation:

• Determine α, β, and γ such that p(y) = αy² + βy + γ satisfies p(f(a)) = a,
p(f(b)) = b, and p(f(c)) = c.
• Set b̂ = γ.

3. If necessary, ˆb is adjusted or replaced with the bisection point. (The rules are
complicated.)

4. Once ˆb has been finalized, a, b, c, and ˆb are used to determine new values of
a, b, and c. (The rules are complicated.)

Remark: In part (ii) of step 2, the coefficients α, β, and (especially) γ are easily
determined using standard methods at the cost of a few arithmetic operations.
(Of course, there needs to be a safeguard against the unlikely event that f(a),
f(b), and f(c) are not distinct.) Note that γ is just p(0), so if f really were the
inverse of a quadratic, i.e., f⁻¹(y) = p(y) = αy² + βy + γ for all y, then b̂ = γ
would satisfy f(b̂) = f(p(0)) = f(f⁻¹(0)) = 0. Thus inverse quadratic
interpolation provides a low-cost approximate zero of f that should be more
accurate than that obtained by linear (secant) interpolation. Note that if direct
quadratic interpolation were used instead of inverse quadratic interpolation, i.e.,
if we found p(x) = αx² + βx + γ such that p(a) = f(a), p(b) = f(b), and p(c) = f(c),
then it would be necessary to find a b̂ such that p(b̂) = 0 using the quadratic
formula, which involves a square root. By using inverse quadratic interpolation,
Brent's method avoids this square root.
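
The inverse quadratic interpolation step is easy to try directly in MATLAB (a
sketch; the function and the three points are arbitrary choices): polyfit fits p(y)
through the pairs (f(a), a), (f(b), b), (f(c), c), and b̂ = p(0).

f = @(x) x.^2 - 2;          % sample function with root sqrt(2)
a = 1; b = 1.5; c = 2;      % three points with distinct function values
coef = polyfit([f(a) f(b) f(c)], [a b c], 2);  % p(f(a))=a, p(f(b))=b, p(f(c))=c
bhat = polyval(coef, 0)     % p(0) ≈ 1.4095, already close to sqrt(2)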
PRESENT ALGORITHMS

EXAMPLE

Apply the secant method to get the three roots of the cubic polynomial f[x] =
4x^3 − 16x^2 + 17x − 4.

Show information on the actual computations for the beginning values p0 = 3 and
p1 = 2.8.

Solution:

Enter the function.

F[x_] = 4x^3 - 16x^2 + 17x - 4

Print["f[x]=", f[x]];
F[x] = -4 + 17x - 16x^2 + 4x^3

The formula of secant iteration: g[x0, x1]


The formula of the secant iteration is,

Hopefully, this iteration pn+1 = g[pn-1, pn] will converge to a root of
f[x]. Graph this function y = f[x].

y = f[x] = -4 + 17x - 16x^2 + 4x^3

There are three real roots.

Root(1): find this root with the starting values p0 = 3.0 and p1 = 2.8.

Utilize the secant method to find a numerical approximation for the root. Initially,
do the iteration one step at a time.
Type each one of the subsequent commands inside an individual cell and execute
them one by one.

Now use the subroutine.

In the graph we find the two other real roots.

Root(2) Use the starting values p0=0.6 and p1=0.5


Root(3) Use the starting values p0=1.0 and p1=1.1

Compare the outcome with Mathematica's built-in numerical root finder to see
how good they are:

Mathematica may also solve for the roots symbolically.

The solutions can be manipulated into exact expressions.

The answers can be expressed in decimal form.


These answers are in agreement with the ones we found with the secant method.

FUNCTIONS

function b = fzerotx(F,ab,varargin)
%FZEROTX  Find a zero of F(x) in the interval ab = [a b].
%   Combines bisection, secant steps, and inverse quadratic interpolation.
a = ab(1);
b = ab(2);
fa = F(a,varargin{:});
fb = F(b,varargin{:});
if sign(fa) == sign(fb)
error('Function must change sign on the interval')
end
c = a;
fc = fa;
d = b - c;
e = d;
while fb ~= 0
if sign(fa) == sign(fb)
a = c; fa = fc;
d = b - c; e = d;
end
if abs(fa) < abs(fb)
c = b; b = a; a = c;
fc = fb; fb = fa; fa = fc;
end

m = 0.5*(a - b);
tol = 2.0*eps*max(abs(b),1.0);
if (abs(m) <= tol) | (fb == 0.0)
break
end

if (abs(e) < tol) | (abs(fc) <= abs(fb))

d = m;
e = m;
else

s = fb/fc;
if (a == c)

p = 2.0*m*s;
q = 1.0 - s;
else

q = fc/fa;
r = fb/fa;
p = s*(2.0*m*q*(q - r) - (b - c)*(r - 1.0));
q = (q - 1.0)*(r - 1.0)*(s - 1.0);
end;
if p > 0, q = -q; else p = -p; end;

if (2.0*p < 3.0*m*q - abs(tol*q)) & (p < abs(0.5*e*q))


e = d;
d = p/q;
else
d = m;
e = m;
end
end

c = b;
fc = fb;
if abs(d) > tol
b = b + d;
else
b = b - sign(b-a)*tol;
end
fb = F(b,varargin{:});
end
end
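
A usage sketch (assuming the listing above is saved as fzerotx.m; the test
function is the one from the false position example earlier):

f = @(x) x.^3 - 2*x - 5;    % sign change on [2, 3]
root = fzerotx(f, [2 3])    % returns approximately 2.0945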

IV. ROOTS OF POLYNOMIALS

A. MULLER’S METHOD

Muller's method is a technique for finding the root of a scalar-valued function


f(x) of a single variable x when no information about the derivative exists. It is a
generalization of the secant method, but instead of using two points, it uses three
points and finds an interpolating quadratic polynomial.

This method is better suited to finding the roots of polynomials, and therefore
we will focus on this particular application of Muller's method.

MATHEMATICAL BACKGROUND

This method was first presented by D.E. Muller in 1956. This technique can be
used for any root-finding program, but it is particularly useful for approximating
the roots of polynomials. Muller's method is an extension of the Secant Method.
The secant method begins with the two initial approximations x0 and x1 and
determines the next approximation x2 as the intersection of the x-axis with the
line through (x0, f(x0)) and (x1, f(x1)).

Figure.1: The Secant Method.


But the Muller Method uses the three initial approximations x0, x1, x2 and
determines the next approximation x3 by considering the intersection of the x-axis
with the parabola through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)). This can be seen in
the following figure. Then the following can be written:

Figure.2: f(x) and the Interpolating Polynomial for Muller’s Method.


After arranging the above equations we get the following :
Then x3 can be found easily by using the formulas that are used for
calculating the roots of parabolas. The following lines explain this:

The power of Muller Method comes from the fact that it finds the complex
roots of the functions. This property makes it more useful when compared with
the other methods. (like Bisection, Newton, Regula-Falsi …)

FORMULA

Generalizes the secant method of root finding by using quadratic 3-point
interpolation

q = (xn − xn−1)/(xn−1 − xn−2)                              (1)

Then define

A = q·f(xn) − q(1 + q)·f(xn−1) + q²·f(xn−2)                (2)

B = (2q + 1)·f(xn) − (1 + q)²·f(xn−1) + q²·f(xn−2)         (3)

C = (1 + q)·f(xn)                                          (4)

and the next iteration is

xn+1 = xn − (xn − xn−1)·2C/(B ± √(B² − 4AC))               (5)

where the sign in the denominator is chosen to make its magnitude as large as
possible.

This method can also be used to find complex zeros of analytic functions.
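
A one-step MATLAB sketch of formulas (1)–(5) as reconstructed above (the test
function and starting points are arbitrary choices):

f = @(x) x.^3 - x - 1;          % sample function with root near 1.3247
xn2 = 1; xn1 = 1.2; xn = 1.5;   % x_{n-2}, x_{n-1}, x_n
q = (xn - xn1)/(xn1 - xn2);                               % (1)
A = q*f(xn) - q*(1 + q)*f(xn1) + q^2*f(xn2);              % (2)
B = (2*q + 1)*f(xn) - (1 + q)^2*f(xn1) + q^2*f(xn2);      % (3)
C = (1 + q)*f(xn);                                        % (4)
D = sqrt(B^2 - 4*A*C);          % may be complex for complex roots
if abs(B - D) < abs(B + D), E = B + D; else, E = B - D; end
xnew = xn - (xn - xn1)*2*C/E    % (5): about 1.3231 after one step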

Error can be calculated as the approximate relative error e = |(xn+1 − xn)/xn+1| × 100%.

ALGORITHM

1. Start
2. Declare function f(x)
3. Get initial approximation in array x
4. Get values of aerr and maxitr
*Here aerr is the absolute error
Maxitr is the maximum number of iterations for the desired degree of
accuracy*
5. Loop for itr = 1 to maxitr
6. Calculate li, di, mu and s
7. If mu < 0,
   l = (2*y(x[2])*di)/(-mu + s)
8. Else,
   l = (2*y(x[2])*di)/(-mu - s)
9. x[3] = x[2] + l*(x[2] - x[1])
10. Print itr and x[3]
11. If fabs(x[3] - x[2]) < aerr,
    Print the required root as x[3]
12. Else,
    Loop for i = 0 to 2
    x[i] = x[i+1]
13. End loop (i)
14. End loop (itr)
15. Print "the solution does not converge"
16. Stop

STEPS IN WRITING

INPUT x0, x1, x2; tolerance TOL;

Maximum number of iterations N0

OUTPUT approximate solution p or message of failure

Step 1 Set h1 = x1 - x0
       h2 = x2 - x1
       d1 = (f(x1) - f(x0))/h1
       d2 = (f(x2) - f(x1))/h2
       d = (d2 - d1)/(h2 + h1)
       i = 3

Step 2 While i <= N0 do Steps 3-7

Step 3 Set b = d2 + h2*d
       D = sqrt(b^2 - 4*f(x2)*d)   (Note: may be complex arithmetic)

Step 4 If |b - D| < |b + D| then set E = b + D

       Else set E = b - D

Step 5 Set h = -2*f(x2)/E
       p = x2 + h

Step 6 If |h| < TOL then

OUTPUT( p ) (Procedure Completed Successfully)

STOP

Step 7 Set x0 = x1 (Prepare for next iteration)

x1 = x2

x2 = p

h1 = x1 - x0

h2 = x2 - x1

d1 = (f(x1) - f(x0))/h1
d2 = (f(x2) - f(x1))/h2
d = (d2 - d1)/(h2 + h1)
i = i + 1

Step 8

OUTPUT ('Method Failed After N0 iterations, N0 = ', N0)

(Procedure Completed unsuccessfully)

STOP

EXAMPLE

The following demonstrates the first six iterations of Muller's method in Matlab. Suppose we wish to find a root of the same polynomial

p(x) = x^7 + 3x^6 + 7x^5 + x^4 + 5x^3 + 2x^2 + 5x + 5

starting with the same three initial approximations x0 = 0, x1 = -0.1, and x2 = -0.2.

In each iteration, the new approximation appended to x is the root of the interpolating quadratic (with coefficients c) closest to the middle approximation x(2).

>> p = [1 3 7 1 5 2 5 5]
p=
1 3 7 1 5 2 5 5

>> x = [0.0 -0.1 -0.2]' % first three approximations


x =
0.00000
-0.10000
-0.20000

>> % 1st iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.01000 0.10000 1.00000
0.00000 0.00000 1.00000
0.01000 -0.10000 1.00000

>> y = polyval( p, x )
y =
5.00000
4.51503
4.03954

>> c = M \ y
c =
0.47367
4.80230
4.51503

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-0.10000
-0.20000
-1.14864

>> % 2nd iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.01000 0.10000 1.00000
0.00000 0.00000 1.00000
0.89992 -0.94864 1.00000

>> y = polyval( p, x )
y =
4.5150
4.0395
-13.6858

>> c = M \ y
c =
-13.2838
6.0833
4.0395

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-0.20000
-1.14864
-0.56812

>> % 3rd iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.89992 0.94864 1.00000
0.00000 0.00000 1.00000
0.33701 0.58052 1.00000

>> y = polyval( p, x )
y =
4.0395
-13.6858
1.6597

>> c = M \ y
c =
-21.0503
38.6541
-13.6858

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-1.14864
-0.56812
-0.66963

>> % 4th iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.33701 -0.58052 1.00000
0.00000 0.00000 1.00000
0.01030 -0.10151 1.00000

>> y = polyval( p, x )
y =
-13.6858
1.6597
0.5160

>> c = M \ y
c =
-31.6627
8.0531
1.6597

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-0.56812
-0.66963
-0.70285

>> % 5th iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.01030 0.10151 1.00000
0.00000 0.00000 1.00000
0.00110 -0.03322 1.00000

>> y = polyval( p, x )
y =
1.65973
0.51602
0.05802

>> c = M \ y
c =
-18.6991
13.1653
0.5160

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-0.66963
-0.70285
-0.70686

>> % 6th iteration ---------------------------------------


>> M = [(x(1) - x(2))^2, x(1) - x(2), 1
0, 0, 1
(x(3) - x(2))^2, x(3) - x(2), 1]
M =
0.00110 0.03322 1.00000
0.00000 0.00000 1.00000
0.00002 -0.00401 1.00000

>> y = polyval( p, x )
y =
0.51602
0.05802
-0.00046

>> c = M \ y
c =
-21.8018
14.5107
0.0580

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'
x =
-0.70285
-0.70686
-0.70683

The full list of iterates produced in Matlab is:

0.000000000000000
-0.100000000000000
-0.200000000000000
-1.148643697414111
-0.568122032631211
-0.669630566165950
-0.702851144883234
-0.706857484921269
-0.706825973130949
-0.706825980788168
-0.706825980788170
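
The transcript above can be condensed into a loop (a sketch reproducing the same updates; eight iterations reach the converged value in the list):

p = [1 3 7 1 5 2 5 5];
x = [0.0 -0.1 -0.2]';
for k = 1:8
    M = [(x(1) - x(2))^2, x(1) - x(2), 1
          0,              0,           1
         (x(3) - x(2))^2, x(3) - x(2), 1];
    c = M \ polyval(p, x);
    x = [x(2); x(3); x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))];
end
x(3)   % about -0.706825980788170, matching the list above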

FUNCTION

function Muller()
clc, clear all                 % clear the command window and workspace
syms x                         % declare x as a symbolic variable
R_Accuracy = 1e-8;             % tolerance used to terminate the iteration
A_x = 0;                       % the polynomial is built up in this variable
flag = 1;                      % flag used to terminate the outer loop
Root_index = 0;
disp('Polynomial function of Order "n" is of type: a[1]X^n+a[2]X^(n-1)+ ...+a[n]X^1+a[n+1]');
disp('Type Coeff as "[ 1 2 3 ...]" i.e. Row vector form');
Coeff = input('Enter the coefficient in order? ');
[row_initial,col_initial] = size(Coeff);
for i = 1:col_initial
A_x = A_x + Coeff(i)*(x^(col_initial-i)); % Polynomial function building
end
clc
disp('Polynomial is : ');
disp(A_x)

while(flag)
[row,col] = size(Coeff);
if (col ==1)
flag =0;
elseif(col==2)
flag =0;
Root_index = Root_index + 1;
Root(Root_index)= -Coeff(2)/Coeff(1);
disp(['Root found:' num2str(-Coeff(2)/Coeff(1)) '']);
disp(' ')
elseif(col >= 3)
Guess = input('Give the three initial guess point [x0, x1, x2]: ');
if isempty(Guess)
Guess = [1 2 3];
disp('Using default value [1 2 3]')
elseif(Guess == zeros(1,3))
break
end
disp(['Three initial guess are: ' num2str(Guess) ' ']);
for i = 1:100
h1 = Guess(2)-Guess(1);
h2 = Guess(3)-Guess(2);
d1 = (polyval(Coeff,Guess(2))-polyval(Coeff,Guess(1)))/h1;
d2 = (polyval(Coeff,Guess(3))-polyval(Coeff,Guess(2)))/h2;
d = (d2-d1)/(h1+h2);
b = d2 + h2*d;
Delta = sqrt(b^2-4*polyval(Coeff,Guess(3))*d);
if (abs(b-Delta) < abs(b+Delta))   % choose the sign that maximizes |E|
E = b + Delta;
else
E = b - Delta;
end
h = -2*polyval(Coeff,Guess(3))/E;
p = Guess(3) + h;
if (abs(h) < R_Accuracy)
Factor = [1 -p];
Root_index = Root_index + 1;
Root(Root_index)= p;
disp(['Root found: ' num2str(p) ' ']);
% disp(['Root found after' num2str(i) ' no of iteration.']);
disp(' ')
break;
else
Guess = [Guess(2) Guess(3) p];
end
if (i ==99)
disp('Method failed to find root!!!');
end
end
end
[Coeff,rem] = deconv(Coeff,Factor);   % deflate: divide out the found factor
end
disp(['Function has ' num2str(Root_index) ' roots, given as:']);
for i = 1:Root_index
disp(['Root no ' num2str(i) ' is ' num2str(Root(i)) ' .'])
end
disp('End of Program');

B. BAIRSTOW’S METHOD
MATHEMATICAL BACKGROUND

In numerical analysis, Bairstow's method is an efficient algorithm for


finding the roots of a real polynomial of arbitrary degree. The algorithm first
appeared in the appendix of the 1920 book Applied Aerodynamics by Leonard
Bairstow. The algorithm finds the roots in complex conjugate pairs using only real
arithmetic.

Bairstow's approach is to use Newton's method to adjust the coefficients u and v in the quadratic x^2 + ux + v until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined.

METHOD: FORMULA

Bairstow's method is an iterative method used to find both the real and complex roots of a polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic function and can be used to find all the roots of a polynomial. Given a polynomial, say,

fn(x) = a(0) + a(1)x + a(2)x^2 + ... + a(n)x^n   (B.1)

Bairstow's method divides the polynomial by a quadratic function,

x^2 - rx - s   (B.2)

Now the quotient will be a polynomial of degree n - 2,

fn-2(x) = b(2) + b(3)x + ... + b(n)x^(n-2)   (B.3)

and the remainder is a linear function, i.e.

R(x) = b(1)(x - r) + b(0)   (B.4)

Since the quotient and the remainder are obtained by standard synthetic division, the coefficients b(i) can be obtained by the following recurrence relation:

b(n) = a(n)   (B.5a)

b(n-1) = a(n-1) + r·b(n)   (B.5b)

b(i) = a(i) + r·b(i+1) + s·b(i+2)   for i = n-2, ..., 0   (B.5c)

If x^2 - rx - s is an exact factor of fn(x) then the remainder R(x) is zero and the real/complex roots of x^2 - rx - s are roots of fn(x). The quadratic is initially formed from some guess values (r, s), so Bairstow's method reduces to determining values of r and s that make b(0) and b(1) zero. For finding such values, Bairstow's method uses a strategy similar to the Newton-Raphson method.

Since both b(0) and b(1) are functions of r and s, we can write their Taylor series expansions about the current guess as

b(1)(r + Δr, s + Δs) ≈ b(1) + [∂b(1)/∂r]Δr + [∂b(1)/∂s]Δs   (B.6a)

b(0)(r + Δr, s + Δs) ≈ b(0) + [∂b(0)/∂r]Δr + [∂b(0)/∂s]Δs   (B.6b)

For small Δr and Δs, second- and higher-order terms may be neglected, so the improvement over the guess values may be obtained by equating (B.6a), (B.6b) to zero, i.e.

[∂b(1)/∂r]Δr + [∂b(1)/∂s]Δs = -b(1)   (B.7a)

[∂b(0)/∂r]Δr + [∂b(0)/∂s]Δs = -b(0)   (B.7b)

To solve this system of equations we need the partial derivatives of b(0) and b(1) with respect to r and s. Bairstow showed that these partial derivatives can be obtained by a second synthetic division, which amounts to reusing the recurrence relation with a(i) replaced by b(i) and b(i) replaced by c(i), i.e.

c(n) = b(n)   (B.8a)

c(n-1) = b(n-1) + r·c(n)   (B.8b)

c(i) = b(i) + r·c(i+1) + s·c(i+2)   for i = n-2, ..., 1   (B.8c)

where

∂b(0)/∂r = c(1),  ∂b(0)/∂s = ∂b(1)/∂r = c(2),  ∂b(1)/∂s = c(3)   (B.9)

The system of equations (B.7a)-(B.7b) may then be written as

c(2)·Δr + c(3)·Δs = -b(1)   (B.10a)

c(1)·Δr + c(2)·Δs = -b(0)   (B.10b)

These equations can be solved for Δr and Δs, which are in turn used to improve the guess value (r, s) to (r + Δr, s + Δs).

Now we can calculate the percentage approximate errors in (r, s) by

ea,r = |Δr/r| × 100%,  ea,s = |Δs/s| × 100%   (B.11)

If ea,r > es or ea,s > es, where es is the iteration stopping error, then we repeat the process with the new guess (r + Δr, s + Δs). Otherwise the roots of the quadratic factor can be determined by

x = (r ± sqrt(r^2 + 4s))/2   (B.12)

If we want to find all the roots of fn(x) then at this point we have the following three possibilities:

1. If the quotient polynomial fn-2(x) is a third (or higher) order polynomial then we can again apply Bairstow's method to the quotient polynomial. The previous values of (r, s) can serve as the starting guesses for this application.

2. If the quotient polynomial fn-2(x) is a quadratic function then use (B.12) to obtain the remaining two roots of fn(x).

3. If the quotient polynomial is a linear function, say ax + b, then the remaining single root is given by x = -b/a. A one-iteration sketch of the recurrences above is given under ALGORITHM below.

ALGORITHM
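
Since the original numbered listing for this section did not survive extraction, here is a minimal MATLAB sketch of one Bairstow update built from (B.5), (B.8) and (B.10); the function name bairstow_step and the coefficient ordering (highest power first, degree at least 3) are our own assumptions:

function [r, s, db] = bairstow_step(a, r, s)
% One Newton update of the quadratic x^2 - r*x - s, per Eqs. (B.5)-(B.10).
% a = [a(n) ... a(1) a(0)], highest power first; degree must be >= 3.
m = length(a);                       % m = n + 1 coefficients
b = zeros(1, m);  c = zeros(1, m);
b(1) = a(1);  b(2) = a(2) + r*b(1);  % Eqs. (B.5a), (B.5b)
c(1) = b(1);  c(2) = b(2) + r*c(1);  % Eqs. (B.8a), (B.8b)
for k = 3:m
    b(k) = a(k) + r*b(k-1) + s*b(k-2);   % synthetic division (B.5c)
    c(k) = b(k) + r*c(k-1) + s*c(k-2);   % derivative recurrence (B.8c)
end
% With this indexing, b(m) = b(0), b(m-1) = b(1), c(m-1) = c(1), etc.
J = [c(m-2), c(m-3); c(m-1), c(m-2)];    % Jacobian entries from (B.9)
delta = J \ [-b(m-1); -b(m)];            % solve (B.10a)-(B.10b)
r = r + delta(1);  s = s + delta(2);
db = [b(m-1), b(m)];                     % remainder terms, for convergence tests
end

Iterating this step until the remainder terms db vanish yields the r and s of one quadratic factor x^2 - rx - s, which can then be deflated out as described above.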

STEPS IN WRITING

The task is to determine a pair of roots of the polynomial

f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6

As the first quadratic polynomial x^2 + ux + v one may choose the normalized polynomial formed from the leading three coefficients of f(x),

u = a(n-1)/a(n) = 11/6; v = a(n-2)/a(n) = -33/6

The iteration then produces the table

Iteration steps of Bairstow's method

Nr u v step length roots

0 1.833333333333 −5.500000000000 5.579008780071 −0.916666666667±2.517990821623

1 2.979026068546 −0.039896784438 2.048558558641 −1.489513034273±1.502845921479

2 3.635306053091 1.900693009946 1.799922838287 −1.817653026545±1.184554563945

3 3.064938039761 0.193530875538 1.256481376254 −1.532469019881±1.467968126819

4 3.461834191232 1.385679731101 0.428931413521 −1.730917095616±1.269013105052

5 3.326244386565 0.978742927192 0.022431883898 −1.663122193282±1.336874153612

6 3.333340909351 1.000022701147 0.000023931927 −1.666670454676±1.333329555414

7 3.333333333340 1.000000000020 0.000000000021 −1.666666666670±1.333333333330

8 3.333333333333 1.000000000000 0.000000000000 −1.666666666667±1.333333333333

After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence.
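
The converged values of u and v can be verified with MATLAB's roots function (a quick check of our own, not part of the original example; the factor is x^2 + ux + v):

u = 3.333333333333;
v = 1.000000000000;
roots([1 u v])   % approximately -3.0000 and -0.3333, i.e. -3 and -1/3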

EXAMPLE

Example 1: Find the (real/complex) roots of the following equation using Solver:

x^4 + 2x^3 - 11x^2 + 8x - 60 = 0

In order to limit calculations with complex numbers, instead of finding each root individually, we find quadratic divisors as done in Bairstow's method. The calculations are shown in Figure 1.

Figure 1 - Using Solver to find roots of a polynomial

We show the coefficients of the polynomial in range B6:B10. The parameters r and s from Bairstow's algorithm are shown in cells B12 and B13. These are initially set to zero. The polynomial which results from division by x^2 - rx - s is shown in range E8:E10, with the remainder shown in E6:E7.

The formula in cell E10 is =B10. The formula in cell E9 is =B9+$B$12*E10. The formula in cell E8 is =B8+$B$12*E9+$B$13*E10. The formulas in cells E6 and E7 are similar to the formula in cell E8. Our goal now is to use Solver to modify the r and s values in order to get cells E6 and E7 (the remainder after division) to become zero.

Since Solver is only able to target one cell (the Set Objective value), we place the
formula =E6^2+E7^2 in cell E12, and use this as the target cell. Cells E6 and E7
only become zero when cell E12 becomes zero and
vice versa.

We now select Data > Analysis|Solver and fill in the dialog box that appears as shown in Figure 2.

Figure 2 – Solver input


After pressing the Solve button, the spreadsheet shown in Figure 1 is modified as shown in Figure 3.

We see that cells E6, E7 and E12 are close to zero and the values for r and s have changed to r = 0 and s = -4. This means that one of the quadratic divisors x^2 - rx - s of the original polynomial is x^2 + 4. This quadratic is zero only when x = ±2·sqrt(-1), which is usually written as ±2i where i = sqrt(-1).

In general, these roots are calculated via the quadratic formula

x = (r ± sqrt(r^2 + 4s))/2

On the spreadsheet in Figure 3, this calculation is performed in range B17:C18.


First the discriminant (i.e. the value under the square root sign in the quadratic formula) is calculated in cell B15 using the formula =B12^2+4*B13. If this value is negative, then the roots are complex numbers of the form a ± bi, with a real part a and an imaginary part b (i.e. the part that multiplies i). If the discriminant is not negative, then the roots are real numbers, each of which can be viewed as a complex number of the form a + bi where b = 0.

These values are shown in B17:C17 and B18:C18, where the value in the B cell
contains the real part of the root and the value in the C cell contains the imaginary
part. To accomplish this in Excel we place the formula =IF(B15>=0,(B12-
SQRT(B15))/2,B12/2) in cell B17 and the formula =IF(B15>=0,0,-SQRT(-B15)/2)
in cell C17. The formulas for cells B18 and C18 are identical except that we replace
–SQRT by +SQRT.

Figure 3 – Output from Solver


From range B17:C18, we see that the two roots are 0-2i and 0+2i (or simply ±2i). We also see from range E8:E10 that when x^4 + 2x^3 - 11x^2 + 8x - 60 is divided by x^2 + 4 the result is the polynomial x^2 + 2x - 15. If this were a polynomial of degree 3 or higher, then we would repeat the whole process described above based on this polynomial.

In this case we can simply use the quadratic formula. The new r and s values are
shown in cells H12 and H13. Cell H12 contains the formula =-E9, while cell H13
contains the formula =-E8. This time the resulting roots x = 3 and x = -5 are real
(since the discriminant in cell H15 is positive) as shown in range H17:I18.

Thus the four roots are 3, -5, -2i, 2i.
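
As a quick cross-check (our own addition, using MATLAB rather than Excel), the built-in roots function returns the same four roots:

p = [1 2 -11 8 -60];   % x^4 + 2x^3 - 11x^2 + 8x - 60
roots(p)               % 3, -5, 0+2i and 0-2i, in some order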

FUNCTIONS

function [rts,it] = bairstow(a,n,tol)
% Bairstow's method for a monic polynomial
%   p(x) = x^n + a(1)x^(n-1) + ... + a(n).
% Returns the roots in rts (column 1 real parts, column 2 imaginary
% parts) and the iteration count in it. The quadratic factor used
% here is x^2 + u*x + v.
it = 1;
while n > 2
    u = 1; v = 1; st = 1;
    while st > tol
        % Synthetic division of a by x^2 + u*x + v gives b.
        b(1) = a(1) - u;
        b(2) = a(2) - b(1)*u - v;
        for k = 3:n
            b(k) = a(k) - b(k-1)*u - b(k-2)*v;
        end
        % A second division gives c, which carries the partial derivatives.
        c(1) = b(1) - u;
        c(2) = b(2) - c(1)*u - v;
        for k = 3:n-1
            c(k) = b(k) - c(k-1)*u - c(k-2)*v;
        end
        % Newton corrections du, dv for u and v.
        c1 = c(n-1); b1 = b(n); cb = c(n-1)*b(n-1);
        c2 = c(n-2)*c(n-2); bc = b(n-1)*c(n-2);
        if n > 3, c1 = c1*c(n-3); b1 = b1*c(n-3); end
        dn = c1 - c2;
        du = (b1 - bc)/dn; dv = (cb - c(n-2)*b(n))/dn;
        u = u + du; v = v + dv;
        st = norm([du dv]); it = it + 1;
    end
    % Roots of the converged quadratic factor.
    [r1,r2,im1,im2] = solveq(u,v,n,a);
    rts(n,1:2) = [r1 im1]; rts(n-1,1:2) = [r2 im2];
    % Deflate and repeat on the quotient polynomial.
    n = n - 2;
    a(1:n) = b(1:n);
end
% Remaining linear or quadratic factor.
u = a(1); v = a(2);
[r1,r2,im1,im2] = solveq(u,v,n,a);
rts(n,1:2) = [r1 im1];
if n == 2
    rts(n-1,1:2) = [r2 im2];
end

function [r1,r2,im1,im2] = solveq(u,v,n,a)
% Roots of x^2 + u*x + v (or of x + a(1) when n == 1).
if n == 1
    r1 = -a(1); im1 = 0; r2 = 0; im2 = 0;
else
    d = u*u - 4*v;                      % discriminant
    if d < 0                            % complex conjugate pair
        d = -d;
        im1 = sqrt(d)/2; r1 = -u/2; r2 = r1; im2 = -im1;
    elseif d > 0                        % two distinct real roots
        r1 = (-u + sqrt(d))/2; im1 = 0; r2 = (-u - sqrt(d))/2; im2 = 0;
    else                                % repeated real root
        r1 = -u/2; im1 = 0; r2 = -u/2; im2 = 0;
    end
end
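
A usage sketch, assuming (as the recurrences in the code imply) that a holds the n trailing coefficients of a monic polynomial x^n + a(1)x^(n-1) + ... + a(n):

a = [-6 11 -6];                   % p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
[rts, it] = bairstow(a, 3, 1e-10)
% rts(:,1) should hold the real parts 1, 2, 3 with zero imaginary parts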

V. GAUSS ELIMINATION

Gaussian elimination, also known as row reduction, is an algorithm in linear algebra for solving a system of linear equations. It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix. The method is named after Carl Friedrich Gauss (1777-1855). Some special cases of the method, albeit presented without proof, were known to Chinese mathematicians as early as circa 179 AD.

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:

• Swapping two rows.

• Multiplying a row by a nonzero number.

• Adding a multiple of one row to another row.

Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. A sequence of row operations (where multiple elementary operations might be done at each step) therefore brings a matrix first into row echelon form and finally into its unique reduced row echelon form.

A. NAIVE GAUSS ELIMINATION METHOD

MATHEMATICAL BACKGROUND

Naive Gaussian elimination is the application of Gaussian elimination to solve systems of linear equations under the assumption that pivot values will never be zero.

Explanation:

Gaussian elimination converts a system of linear equations [A]{x} = {b} into an equivalent upper triangular system that can be solved by back substitution.

A critical step in this process is the ability to divide row values by the value of a "pivot entry" (an entry along the top-left to bottom-right diagonal of the, possibly modified, coefficient matrix).

Naive Gaussian elimination assumes that this division will always be possible, i.e. that the pivot value will never be zero. (Note, by the way, that a pivot value close to, but not equal to, zero can make the results unreliable when working with calculators or computers of limited precision.)

METHOD: FORMULA

The following sections divide Naive Gauss elimination into two steps:

1) Forward Elimination

2) Back Substitution

To conduct Naive Gauss elimination, Mathematica will join the [A] and [RHS] matrices into one augmented matrix, [C], that will facilitate the process of forward elimination.

B = Transpose[Append[Transpose[A], RHS]]; B // MatrixForm

2.1 Forward Elimination

Forward elimination of unknowns consists of (n-1) steps. In each step k, the coefficient of the kth unknown will be zeroed from every subsequent equation that follows the kth row. For example, in step 2 (i.e. k=2), the coefficient of x2 will be zeroed from rows 3..n. With each step that is conducted, a new matrix is generated until the coefficient matrix is transformed to an upper triangular matrix. The following procedure calculates the upper triangular matrix produced for each step.
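
A MATLAB equivalent of forming the augmented matrix (the matrices below are illustrative assumptions, not from the original text):

A = [2 1 -1; -3 -1 2; -2 1 2];   % illustrative coefficient matrix
RHS = [8; -11; -3];              % illustrative right-hand side
C = [A RHS]                      % augmented matrix [A | RHS]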

ALGORITHM

1. Start
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
Do for k=1 to n
Do for j=1 to n+1
Read a[k][j]
End for j
End for k
4. Do for k=1 to n-1
Do for i=k+1 to n
Do for j=k+1 to n+1
a[i][j] = a[i][j] – a[i][k] /a[k][k] * a[k][j]
End for j
End for i
End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
sum = 0
Do for j=k+1 to n
sum = sum + a[k][j] * x[j]
End for j
x[k] = 1/a[k][k] * (a[k][n+1] – sum)
End for k
7. Display the result x[k]
8. Stop
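
Steps 5 and 6 (back substitution) translate directly into MATLAB; a sketch, where a is the n-by-(n+1) augmented matrix already reduced to upper triangular form by step 4:

n = size(a,1);
x = zeros(n,1);
x(n) = a(n,n+1)/a(n,n);          % step 5
for k = n-1:-1:1                 % step 6
    s = a(k,k+1:n)*x(k+1:n);     % sum of already-known terms
    x(k) = (a(k,n+1) - s)/a(k,k);
end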

EXAMPLE

Let's solve the following system of equations:

In augmented matrix form we have

We now use the method of Gaussian Elimination:

We could proceed to try and replace the first element of row 2 with a zero, but we can actually stop. To see why, convert back to a system of equations:

Notice the last equation: 0 = 5. This is not possible. So the system has no solutions; it is not possible to find values x, y, and z that satisfy all three equations simultaneously.
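
Because the original system was an image that did not survive extraction, here is an illustrative system of our own that produces the same 0 = 5 contradiction, checked in MATLAB:

% Rows 1 and 2 have identical left-hand sides but different right-hand
% sides, so elimination produces the impossible equation 0 = 5.
A = [1 1 1; 1 1 1; 1 -1 2];
b = [2; 7; 4];
rref([A b])   % last row is [0 0 0 1], i.e. 0 = nonzero: inconsistent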

FUNCTIONS
function C = gauss_elimination(A,B)
% Solve A*x = B by elimination on the augmented matrix [A B].
i = 1;
X = [A B];
[nX, mX] = size(X);
while i <= nX
    if X(i,i) == 0
        disp('Diagonal element zero')   % naive method fails on a zero pivot
        return
    end
    X = elimination(X,i,i);
    i = i + 1;
end
C = X(:,mX);                            % last column now holds the solution

function X = elimination(X,i,j)
% Normalize row i and zero out column j in every other row.
[nX, mX] = size(X);
a = X(i,j);
X(i,:) = X(i,:)/a;
for k = 1:nX
    if k == i
        continue
    end
    X(k,:) = X(k,:) - X(i,:)*X(k,j);
end
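
A usage sketch with an illustrative system (the result can be verified against MATLAB's A\B):

A = [2 1 -1; -3 -1 2; -2 1 2];
B = [8; -11; -3];
x = gauss_elimination(A, B)   % expected: [2; 3; -1]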
B. GAUSS-JORDAN

MATHEMATICAL BACKGROUND

Row reduction is the process of performing row operations to transform any matrix into (reduced) row echelon form. In reduced row echelon form, each successive row of the matrix has fewer dependencies than the previous, so solving systems of equations is a much easier task. The idea behind row reduction is to convert the matrix into an "equivalent" version in order to simplify certain matrix computations. Its two main purposes are to solve systems of linear equations and to calculate the inverse of a matrix.

Carl Friedrich Gauss championed the use of row reduction, to the extent that it is commonly called Gaussian elimination. It was further popularized by Wilhelm Jordan, who attached his name to the process by which row reduction is used to compute matrix inverses, Gauss-Jordan elimination.

METHOD: FORMULA

The following row operations on the augmented matrix of a system produce the augmented matrix of an equivalent system, i.e., a system with the same solution as the original one.

• Interchange any two rows.

• Multiply each element of a row by a nonzero constant.

• Replace a row by the sum of itself and a constant multiple of another row of the matrix.

For these row operations, we will use the following notations.

• Ri ↔ Rj means: Interchange row i and row j.

• αRi means: Replace row i with α times row i.

• Ri + αRj means: Replace row i with the sum of row i and α times row j.

STEPS

The Gauss-Jordan elimination method to solve a system of linear equations is described in the following steps.

1. Write the augmented matrix of the system.

2. Use row operations to transform the augmented matrix into the form described below, which is called the reduced row echelon form (RREF).

(a) The rows (if any) consisting entirely of zeros are grouped
together at the bottom of the matrix.

(b) In each row that does not consist entirely of zeros, the leftmost
nonzero element is a 1 (called a leading 1 or a pivot).

(c) Each column that contains a leading 1 has zeros in all other
entries.

(d) The leading 1 in any row is to the left of any leading 1’s in the
rows below it.

3. Stop process in step 2 if you obtain a row whose elements are all
zeros except the last one on the right. In that case, the system is
inconsistent and has no solutions. Otherwise, finish step 2 and
read the solutions of the system from the final matrix.

Note: When doing step 2, row operations can be performed in any order. Try to choose row operations so that as few fractions as possible are carried through the computation. This makes calculation easier when working by hand.

ALGORITHM

1. Start

2. Read Number of Unknowns: n

3. Read Augmented Matrix (A) of n by n+1 Size

4. Transform Augmented Matrix (A) to Diagonal Matrix by Row Operations.

5. Obtain Solution by Making All Diagonal Elements to 1.

6. Display Result.
EXAMPLE
Example 4. Solve the following system by using the Gauss-Jordan elimination
method.
FUNCTIONS
function x = gauss_jordan_elim(a,b)
% Gauss-Jordan elimination: a is the m-by-m coefficient matrix, b the
% right-hand side column; returns the solution as a row vector x.
a = [a b];                 % form the augmented matrix [A | b]
[m,n] = size(a);
% Forward elimination, with a simple row swap when a zero pivot appears.
for j=1:m-1
    for z=2:m
        if a(j,j)==0
            t=a(1,:);a(1,:)=a(z,:);
            a(z,:)=t;
        end
    end
    for i=j+1:m
        a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
    end
end
% Backward elimination to clear the entries above each pivot.
for j=m:-1:2
    for i=j-1:-1:1
        a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
    end
end
% Normalize the pivots and read off the solution column.
for s=1:m
    a(s,:)=a(s,:)/a(s,s);
    x(s)=a(s,n);
end
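
A usage sketch with the same illustrative system as before:

A = [2 1 -1; -3 -1 2; -2 1 2];
b = [8; -11; -3];
x = gauss_jordan_elim(A, b)   % expected: [2 3 -1]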
References:

BISECTION METHOD

https://en.wikipedia.org/wiki/Bisection_method

FALSE POSITION METHOD

https://en.wikipedia.org/wiki/Regula_falsi

https://brainly.in/question/6700140#readmore

FIXED POINT ITERATION

http://home.iitk.ac.in/~psraj/mth101/lecture_notes/lecture8.pdf

SECANT METHOD

http://www.cs.utexas.edu/users/kincaid/PGE310/Ch6-Roots-Eqn.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.9154&rep=
rep1&type=pdf

https://www.codewithc.com/secant-method-matlab-program/

MODIFIED SECANT METHOD

http://www.cs.utexas.edu/users/kincaid/PGE310/Ch6-Roots-Eqn.pdf

BAIRSTOW'S METHOD

https://en.wikipedia.org/wiki/Bairstow%27s_method
BRENT’S METHOD

https://en.wikipedia.org/wiki/Brent%27s_method

MULLER’S METHOD

https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/10RootFinding/
mueller/

NAIVE GAUSSIAN ELIMINATION

https://socratic.org/questions/what-is-naive-gaussian-elimination

GAUSS-JORDAN

https://brilliant.org/wiki/gaussian-elimination/
