
Chapter II

Methods in Obtaining Roots of the Equation

Appendix A contains the flowcharts of the programs constructed for each method of obtaining the root of an equation. The program follows a subroutine procedure: it contains a main class with the methods that call for the inputs, and subclasses (Bairstow, Bisection, Brent, FalsePosition, FixedPoint, Muller, NewtonRaphson, and Secant), each of which carries out the computation of the root using the corresponding method.

Bracketing Method

It comprises different methods in which the root is found between two initial guesses, between which the function typically changes sign. The methods presented here give strategies that reduce the width of the bracket until the root is found.

Bisection Method

Bisection is a numerical method for estimating the roots of a function f(x). It is one of the simplest and most reliable methods, although it is not the fastest. It is also called binary chopping or Bolzano's method. It is a bracketing method which finds the root of a given continuous function over an interval [x_l, x_u] on which the function values have opposite signs, so that f(x_l) f(x_u) < 0. The method divides the interval in two by computing the midpoint x_r = (x_l + x_u)/2. Either f(x_l) and f(x_r), or f(x_r) and f(x_u), will have opposite signs; since that pair brackets a root, we select the corresponding subinterval and apply the same bisection step to it. There is also the chance that f(x_r) is exactly equal to zero, in which case the root has been found. If f(x_l) f(x_r) < 0, the method sets x_u equal to x_r, and if f(x_r) f(x_u) < 0, the method sets x_l equal to x_r. In both cases the new f(x_l) and f(x_u) have opposite signs, so the method is applicable to the smaller interval.

If the function is continuous on the given interval [x_l, x_u] and f(x_l) f(x_u) < 0, that is, f(x_l) and f(x_u) have different signs, then bisection is guaranteed to converge to a root of the function. The true error is halved in each step, so the method converges linearly.

This method gives only a range where the root exists, not an estimate of the root's exact location; the smallest bracket obtained is where the root can be found. The true error after n steps is bounded by the equation

E_n = (x_u - x_l) / 2^n (2.1)

where x_l and x_u are the endpoints of the initial interval.
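The bisection step described above can be sketched as a short Python routine (an illustrative sketch, not the thesis program itself; the test function, interval, and tolerance are arbitrary choices):

```python
def bisect(f, xl, xu, tol=1e-10, max_iter=200):
    """Halve a sign-change bracket [xl, xu] until it is smaller than tol."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xr = (xl + xu) / 2.0              # midpoint x_r = (x_l + x_u)/2
        if f(xl) * f(xr) < 0:             # root lies in [xl, xr]
            xu = xr
        else:                             # root lies in [xr, xu] (or f(xr) == 0)
            xl = xr
        if (xu - xl) / 2.0 < tol:
            break
    return (xl + xu) / 2.0

# Example: the real root of x^3 - x - 2 on [1, 2]
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Each pass halves the bracket, so after n steps the error bound is the initial width divided by 2^n, matching equation (2.1).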

False Position Method

The false-position method is a modification of the bisection method: if it is known that the root lies on [a, b], then it is reasonable to approximate the function on the interval by interpolating through the points (a, f(a)) and (b, f(b)). It is also called the linear interpolation method, and it is an alternative based on the graphical method. The false-position method starts with two points x_l and x_u such that f(x_l) and f(x_u) have opposite signs; one of the end-points converges toward the root while the other may remain fixed for all the iterations. The estimate of the root is given by the formula:

x_r = x_u - f(x_u)(x_l - x_u) / (f(x_l) - f(x_u)) (2.2)

Graphically, the root estimate x_r is obtained by joining the points (x_l, f(x_l)) and (x_u, f(x_u)) with a straight line; the point where this line intersects the x-axis is the improved root. The value x_r then replaces whichever endpoint has a function value with the same sign as f(x_r), so that the root is always bracketed between the two points.

The termination criterion of the computation is the same as for the bisection method, and so is the algorithm, except that equation (2.2) is used to find x_r. Regula falsi is generally more efficient for root finding than bisection, since one of the points stays fixed throughout the computation while the other converges quickly, and this makes the approximate error a conservative estimate.
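The algorithm just described can be sketched in Python (illustrative only; names and the test function are this sketch's own choices):

```python
def false_position(f, xl, xu, tol=1e-12, max_iter=200):
    """Regula falsi: intersect the chord through the bracket with the x-axis."""
    fl, fu = f(xl), f(xu)
    if fl * fu > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)      # equation (2.2)
        fr = f(xr)
        if fr == 0 or abs(xr - xr_old) < tol:
            break
        if fl * fr < 0:                           # root in [xl, xr]
            xu, fu = xr, fr
        else:                                     # root in [xr, xu]
            xl, fl = xr, fr
    return xr

# Same test problem as for bisection: x^3 - x - 2 on [1, 2]
root = false_position(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Note that only the update formula differs from bisection; the bracketing logic is identical.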

Modified false position is the remedy for the one-sided behavior of the false-position method: it halves the function value at the bound that is stuck. The algorithm uses counters to detect when one bound has stayed fixed for two iterations, and when this happens the function value at that bound is halved.

It performs better than both the bisection and the false-position methods for a stopping criterion of 1.01%, since it takes only 12 iterations compared with 14 and 25 for the bisection and false-position methods, respectively.

Open Method

It is composed of different methods that are based on formulas requiring only a single starting value of x, or two starting values that do not necessarily bracket the root. Such methods may diverge or converge as the computation progresses.

Simple Fixed Point Method

The fixed-point method is also called one-point iteration or the successive substitution method. It rearranges the function f(x) = 0 into the form x = g(x), which can be obtained by adding x to both sides of the equation or by simple algebraic manipulation. An initial guess x_i can then be used to compute a new estimate x_{i+1} = g(x_i).

The convergence or divergence of this method can be depicted graphically through its behavior, or it can be predicted by separating the equation into two component curves; the x values at their intersections are the roots of f(x) = 0. This two-curve method likewise shows the convergence and the divergence of the simple fixed-point iteration. The approximate error of this method can be solved using the formula

e_a = |(x_{i+1} - x_i) / x_{i+1}| × 100% (2.3)
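A minimal Python sketch of the iteration, using the standard example f(x) = e^(-x) - x = 0 rearranged as x = g(x) = e^(-x) (the example and tolerance are this sketch's assumptions):

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=500):
    """Iterate x_{i+1} = g(x_i) until the percent error (2.3) drops below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if x_new != 0:
            ea = abs((x_new - x) / x_new) * 100.0   # approximate percent error
            if ea < tol:
                return x_new
        x = x_new
    return x

# f(x) = e^{-x} - x = 0 rearranged as x = g(x) = e^{-x}
root = fixed_point(lambda x: math.exp(-x), 0.0)
```

The iteration converges here because |g'(x)| = e^(-x) < 1 near the root; rearrangements with |g'| > 1 diverge, which is the behavior the two-curve method exposes.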

Newton Raphson Method

This is the most widely used method for finding approximations to the zeroes of a real-valued function. It converges quickly for iterations that start near the desired root, and it can be supplemented with checks that detect and overcome convergence failure.

This method starts with an initial guess which is close to the true root; the given function is approximated by its tangent line, and the x-intercept of this tangent line is computed. This x-intercept is usually a better approximation to the function's root than the original guess, and the step can be repeated. The formula for this method is given by

x_{i+1} = x_i - f(x_i) / f'(x_i) (2.4)

The termination criterion of the Newton-Raphson method is the same as for the other methods. Its convergence depends on the accuracy of the initial guess and on the nature of the problem.
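Equation (2.4) can be sketched directly in Python (an illustrative sketch; computing sqrt(2) as the root of x^2 - 2 is this sketch's own example):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i), equation (2.4)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("zero derivative; choose another guess")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2, f'(x) = 2x
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

The explicit derivative df is what distinguishes this method from the secant method below, which approximates it from two points.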

Secant Method

It is an open method which assumes that the function is approximately linear in the region of interest. The formula needs two initial estimates of x, but f(x) is not required to change sign between the two estimates. It is given by this equation:

x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i)) (2.5)

The two estimates can sometimes lie on the same side of the root, and this can cause divergence. Because the method does not keep the root within a bracket, its convergence is not guaranteed, which is the reason it is compared with the false-position method, which uses the same interpolation formula but always retains the bracket.
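Equation (2.5) translates into a short Python sketch (illustrative; the test equation cos(x) = x is an arbitrary choice):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration, equation (2.5); needs no bracket and no derivative."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:                               # flat chord: cannot take a step
            break
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)       # equation (2.5)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                            # shift the two estimates
    return x1

# Root of cos(x) - x = 0 (approximately 0.739085)
root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Unlike false position, the older point is always discarded, so the two estimates may end up on the same side of the root.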

The modified secant method uses an alternative approach which involves a small fractional perturbation δ of the independent variable to estimate the derivative, instead of using two arbitrary starting values. The formula for the iteration is given by

x_{i+1} = x_i - δ x_i f(x_i) / (f(x_i + δ x_i) - f(x_i)) (2.6)

Bairstow's Method

Bairstow's method is an iterative method used to find both the real and complex roots of a polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic factor, and by repeating the process it can be used to find all the roots of the polynomial. It uses Newton's method to adjust the coefficients r and s of a trial quadratic x^2 - rx - s until its roots are also roots of the polynomial being solved. Those roots are then eliminated by dividing the polynomial by the quadratic, and the procedure is repeated until the remaining polynomial becomes quadratic or linear, at which point all the roots have been determined. The values of r and s are found by picking starting values and repeating Newton's method in two dimensions until it converges; for quadratic factors of multiplicity higher than one, convergence to that factor is only linear, and for some starting values the iteration may diverge. Finding the zeros of a polynomial in this way can be implemented in a programming language. For a given polynomial equation a_n x^n + a_{n-1} x^(n-1) + ... + a_0 = 0, the synthetic-division coefficients are obtained from the recurrences given below:

b_n = a_n
b_{n-1} = a_{n-1} + r b_n
b_i = a_i + r b_{i+1} + s b_{i+2},  i = n-2, ..., 0 (2.6)

where r and s are guesses for the coefficients of the trial quadratic factor x^2 - rx - s; when the iteration drives the remainder terms b_1 and b_0 to zero, the roots of x^2 - rx - s, namely x = (r ± sqrt(r^2 + 4s)) / 2, are roots of the polynomial.
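A sketch of one quadratic-factor extraction in Python, using the Chapra-style recurrences for the b's and their partial derivatives c (illustrative assumptions: coefficients are stored in ascending order, the degree is at least 3, and the extracted factor has real roots; complex factors would need cmath):

```python
def bairstow_factor(a, r, s, tol=1e-12, max_iter=200):
    """Find one quadratic factor x^2 - r*x - s of the polynomial
    a[0] + a[1]*x + ... + a[n]*x^n by Newton iteration on (r, s)."""
    n = len(a) - 1                         # assumes degree n >= 3
    for _ in range(max_iter):
        b = [0.0] * (n + 1)                # synthetic-division coefficients
        c = [0.0] * (n + 1)                # partial derivatives of the b's
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # Newton corrections from  c2*dr + c3*ds = -b1,  c1*dr + c2*ds = -b0
        det = c[2] * c[2] - c[3] * c[1]
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    disc = (r * r + 4.0 * s) ** 0.5        # real roots assumed here
    return (r + disc) / 2.0, (r - disc) / 2.0

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6, ascending coefficients
r1, r2 = bairstow_factor([-6.0, 11.0, -6.0, 1.0], r=-1.0, s=-1.0)
```

After convergence the quotient coefficients b[2:] describe the deflated polynomial, so the process can be repeated on it until only a quadratic or linear factor remains.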

Muller's Method

A root-finding method that solves equations of the form f(x) = 0 for a single variable x and a scalar function f whenever no information about the derivative exists. It generalizes the secant method, but uses quadratic interpolation through three points denoted x_k, x_{k-1} and x_{k-2}. The next iterate x_{k+1} is taken as the root, closest to x_k, of the parabola going through the three points (x_k, f(x_k)), (x_{k-1}, f(x_{k-1})) and (x_{k-2}, f(x_{k-2})):

x_{k+1} = x_k - 2 f(x_k) / (w ± sqrt(w^2 - 4 f(x_k) f[x_k, x_{k-1}, x_{k-2}])) (2.7)

where w = f[x_k, x_{k-1}] + f[x_k, x_{k-2}] - f[x_{k-1}, x_{k-2}] and the sign is chosen to make the denominator largest in magnitude.

The parabola can be written in the Newton form, where f[x_k, x_{k-1}] and f[x_k, x_{k-1}, x_{k-2}] denote divided differences:

p(x) = f(x_k) + (x - x_k) f[x_k, x_{k-1}] + (x - x_k)(x - x_{k-1}) f[x_k, x_{k-1}, x_{k-2}] (2.8)

where:

f[x_k, x_{k-1}] = (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})
f[x_k, x_{k-1}, x_{k-2}] = (f[x_k, x_{k-1}] - f[x_{k-1}, x_{k-2}]) / (x_k - x_{k-2}) (2.9)
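The iteration can be sketched in Python; cmath is used so the square root may be complex, which is what lets Muller's method reach complex roots from real starting points (the test function and starting points are this sketch's assumptions):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Muller iteration: fit the parabola (2.8) through three points and
    take its root closest to x2 as the next iterate."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        d01 = (f1 - f0) / (x1 - x0)            # divided differences (2.9)
        d12 = (f2 - f1) / (x2 - x1)
        d012 = (d12 - d01) / (x2 - x0)
        b = d12 + (x2 - x1) * d012             # slope of the parabola at x2
        disc = cmath.sqrt(b * b - 4.0 * f2 * d012)
        # Pick the sign giving the larger denominator (root closest to x2)
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2.0 * f2 / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3                # shift the three points
    return x2

# Real root of x^3 - x - 2 from three real starting points
root = muller(lambda x: x**3 - x - 2, 1.0, 1.5, 2.0)
```

The coefficients b and d012 here are the derivative and curvature of the Newton-form parabola at x2, so this step is algebraically the same as equation (2.7).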

Brent's Method

Brent's method is a complicated but popular root-finding algorithm combining the bisection method, the secant method, and inverse quadratic interpolation. It has the reliability of bisection, but it can be as quick as some of the less reliable methods. The idea is to use the secant method or inverse quadratic interpolation when possible, because they converge faster, but to fall back to the more robust bisection method when necessary.

Given a specific numerical tolerance δ: if the previous step used the bisection method, the inequality |δ| < |b_k - b_{k-1}| must hold for an interpolation step to be attempted; otherwise a bisection step is performed and its result is used for the next iteration. If the previous step performed interpolation, the inequality |δ| < |b_{k-1} - b_{k-2}| is used instead, where b_k denotes the current iterate.

Brent's method requires at most N^2 iterations, where N denotes the number of iterations needed by the bisection method. If the function f is well-behaved, the method will usually proceed by either inverse quadratic or linear interpolation, in which case it converges superlinearly.
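Brent's acceptance tests are intricate, but the hybrid idea can be illustrated with a simplified Python sketch that tries an interpolation (secant) step and falls back to bisection whenever the step leaves the bracket or stalls. This is only an illustration of the strategy, not Brent's actual algorithm, which also uses inverse quadratic interpolation and more careful tests:

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=200):
    """Secant step when it stays safely inside the bracket, bisection otherwise."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = (a + b) / 2.0
    for _ in range(max_iter):
        if fa != fb:
            x = b - fb * (a - b) / (fa - fb)       # secant/interpolation step
        else:
            x = (a + b) / 2.0
        # Fall back to bisection if the step left the bracket or stalled
        if not (min(a, b) < x < max(a, b)) or min(abs(x - a), abs(x - b)) < tol:
            x = (a + b) / 2.0
        fx = f(x)
        if fx == 0.0 or abs(b - a) < tol:
            return x
        if fa * fx < 0:                            # keep the sign-change half
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

root = hybrid_root(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Because the bracket is always retained, the sketch keeps bisection's guarantee of convergence while usually taking the faster interpolated steps.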
