
Numerical Methods Study Guide

Reducing Error
Horner's Method
Whenever you do an arithmetic operation (+ - / *), you have the potential for
error.
Horner's Method rearranges a polynomial P(x) into a form that involves fewer
arithmetic operations.
P(x) := a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + ... + a_N x^N

H(x) := a_0 + x(a_1 + x(a_2 + x(a_3 + x a_4)))
Horner's Method generalizes this idea into tabular form (synthetic division).
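A minimal Python sketch of the nested evaluation (the function name and example polynomial are illustrative, not from the guide):

```python
def horner(coeffs, x):
    """Evaluate P(x) with coefficients [a_0, a_1, ..., a_N] by Horner's method.

    Nesting reduces the work to N multiplications and N additions,
    versus roughly 2N multiplications for naive term-by-term evaluation.
    """
    result = 0.0
    for a in reversed(coeffs):  # start from a_N and fold inward
        result = result * x + a
    return result

# Example: P(x) = 1 + 2x + 3x^2 at x = 2  ->  1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # 17.0
```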

Convergence (ε):

ε := |x_{i+1} - x_i|

Residual (ε):

ε := |F(x_{i+1})|

Solutions of Equations in a Single Variable


Bisection Method

Error Reduces Linearly


Simplest Method
Provides Error Estimate
Only Requires F(x) not F'(x)
Requires an Initial Interval [a,b] such that F(a) & F(b) have opposite signs:
F(a)·F(b) < 0

Interval must bracket one and only one root


Process:
Compute the midpoint of [a_1, b_1]:

c_1 := (a_1 + b_1)/2

Check if c_1 is a root:
If F(c_1) = 0, then stop.

If F(a_1)·F(c_1) < 0, then the root is between a_1 and c_1.

If F(a_1)·F(c_1) > 0, then the root is between c_1 and b_1.

Take the new interval and compute the midpoint again.

Repeat until a_n and b_n are within some given tolerance.

Error Analysis of Bisection

Interval width after n iterations:

ε_n := (1/2^(n-1))·(b - a)

Distance from the midpoint c_n to the true root x:

|c_n - x| ≤ (1/2^n)·(b - a)

How many iterations will it take?

n ≥ ln((b - a)/tol) / ln(2)
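A minimal Python sketch of the whole procedure, using the iteration bound above to size the loop (function and variable names are illustrative):

```python
import math

def bisection(f, a, b, tol=1e-8):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("need F(a)*F(b) < 0 so [a, b] brackets a root")
    # Iterations needed: n >= ln((b - a)/tol) / ln(2)
    n = math.ceil(math.log((b - a) / tol) / math.log(2))
    for _ in range(n):
        c = (a + b) / 2          # midpoint
        if f(c) == 0:            # c happens to be an exact root
            return c
        if f(a) * f(c) < 0:      # root lies between a and c
            b = c
        else:                    # root lies between c and b
            a = c
    return (a + b) / 2

# Example: root of x^2 - 2 in [1, 2] is sqrt(2) ~ 1.41421356
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))
```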

Fixed Point Iteration

Rearrange f(x) = 0 into the form g(x) = x.

Fixed Point Theorem

Convergence is Guaranteed

IF

|d/dx g(x)| < 1 For All Iterations
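A minimal sketch of the iteration (names illustrative); it only converges when the rearranged g satisfies |g'(x)| < 1 near the fixed point:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x := g(x) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:  # convergence criterion from above
            return x_new
        x = x_new
    raise RuntimeError("did not converge; check that |g'(x)| < 1")

# f(x) = x^2 - 2 = 0 rearranged as g(x) = (x + 2/x)/2; |g'| < 1 near sqrt(2)
print(fixed_point(lambda x: (x + 2.0 / x) / 2.0, 1.0))  # ~1.41421356
```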
Newton-Raphson Method

Error Reduces Quadratically


A General Procedure for Fixed Point Iteration
Requires both F(x) and F'(x)
Requires 1 Initial Guess

x_{i+1} := x_i - f(x_i) / f'(x_i)

Iterate until

|x_{i+1} - x_i| ≤ tol (convergence)

or

|f(x_{i+1})| ≤ tol (residual)

Problem with NR method


Oscillation
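A minimal sketch of the update rule with both stopping tests (names illustrative); the max_iter cap guards against the oscillation problem noted above:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x := x - f(x)/f'(x); stops on convergence or residual."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) <= tol or abs(f(x_new)) <= tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge (possible oscillation)")

# Example: root of x^2 - 2 with f'(x) = 2x, starting from x0 = 1
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
```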

Secant Method
Converges Super-Linearly
Only Requires F(x)
Requires 2 Initial Guesses

x_{i+1} := x_i - f(x_i)·(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
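A minimal sketch (names illustrative); note it is Newton-Raphson with f'(x) replaced by a difference quotient built from the two most recent iterates:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: needs f only, plus two starting guesses."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # the update rule above
        if abs(x2 - x1) <= tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("did not converge")

# Example: root of x^2 - 2 from guesses 1 and 2
print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```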
Solving Simultaneous Linear Equations
A set of simultaneous equations can be represented by a matrix of coefficients [A] times a
vector of variables {x} set equal to a vector of the right hand side values {b}.

3x+5y-z=10
7x-2y+3z=12
x+5y-4z=-1

| 3   5  -1 | | x |    | 10 |
| 7  -2   3 | | y | := | 12 |
| 1   5  -4 | | z |    | -1 |

    [A]       {x}      {b}

For [A]*{x}={b} to have a unique solution, all equations must be linearly independent.
This is guaranteed by:

det(A) ≠ 0
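As an illustrative check of the example system above (assuming NumPy is available):

```python
import numpy as np

A = np.array([[3.0,  5.0, -1.0],
              [7.0, -2.0,  3.0],
              [1.0,  5.0, -4.0]])
b = np.array([10.0, 12.0, -1.0])

print(np.linalg.det(A))       # nonzero, so a unique solution exists
print(np.linalg.solve(A, b))  # the solution vector {x} = [x, y, z]
```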

Elementary Row Operations (EROs)


Used to compute determinant and solve simultaneous equations
EROs DO NOT alter the solution of a system of equations
Used to manipulate matrices into convenient forms in order to solve
EROs:
1. Multiply a row by a non-zero constant
2. Add a multiple of one row to another
3. Switch Rows (in order to place largest values in pivot location)
Row swaps are not necessary when a matrix is Diagonally Dominant

Convenient Forms of Matrices:


Diagonal

       | a11   0  ...   0  |
D :=   |  0   a22 ...   0  |
       |  0    0  ...  aNN |

Upper Triangular

       | a11  a12 ...  a1N |
U :=   |  0   a22 ...  a2N |
       |  0    0  ...  aNN |

Lower Triangular

       | a11   0  ...   0  |
L :=   | a21  a22 ...   0  |
       | aN1  aN2 ...  aNN |

Determinants of These Convenient Matrices:


det(A) := a11 · a22 · ... · aNN · (-1)^S

Where S = Number of Row Swaps
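A quick illustration of the formula (assuming NumPy; the matrix is made up):

```python
import numpy as np

# An upper triangular example; no row swaps were used, so S = 0
U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 6.0]])

print(np.prod(np.diag(U)))  # 2 * 3 * 6 = 36, product of the diagonal
print(np.linalg.det(U))     # agrees (up to round off)
```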

Direct Methods
Exact if there is no round off
Solve a problem in a countable number of steps
Major source of error is round off
Applicable if number of equations N < 2000
FLOP count is approx N^3
Use EROs to convert the original problem into one that is easily solvable

Gaussian Elimination
Uses an upper triangular matrix to solve the equation:
U·x := b_mod
Where b_mod is the RHS modified due to EROs.
Uses Back Substitution to solve.
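A minimal sketch of forward elimination plus back substitution (assuming NumPy; the pivoting step follows the row-swap ERO above):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b: reduce [A | b] to [U | b_mod], then back substitute."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # ERO 3: swap to put the largest value in the pivot location
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]   # ERO 2: add a multiple of row k
            b[i] -= m * b[k]
    # Back substitution on U x = b_mod
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, 5.0, -1.0], [7.0, -2.0, 3.0], [1.0, 5.0, -4.0]])
b = np.array([10.0, 12.0, -1.0])
print(gaussian_elimination(A, b))  # matches np.linalg.solve(A, b)
```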
Gauss-Jordan Elimination
Uses a diagonalized matrix to solve the equation:
D·x := b_mod

Then "solves" trivially, since each equation has a single unknown: x_i := (b_mod)_i / a_ii

L U Decomposition
Factors a matrix into the product of lower and upper triangular matrices.
A := L·U
Solves the equation:
L·Z := b

Where
Z := U·x
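A short sketch using SciPy's LU routines (assuming SciPy is available); the point of the factorization is that it can be reused for many right-hand sides:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 5.0, -1.0], [7.0, -2.0, 3.0], [1.0, 5.0, -4.0]])
b = np.array([10.0, 12.0, -1.0])

lu, piv = lu_factor(A)        # factor A = L U once (with row pivoting)
x = lu_solve((lu, piv), b)    # forward solve L Z = b, then back solve U x = Z
print(x)

# Reuse the same factorization for a new RHS without refactoring A:
print(lu_solve((lu, piv), np.array([1.0, 0.0, 0.0])))
```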
General LU decomposition applies to any non-singular matrix; the Thomas Algorithm (below) is the specialized form that applies only to Tri-Diagonal systems, e.g.:

| 1  3  0  0  0 |
| 6  2  4  0  0 |
| 0  4  3  7  0 |
| 0  0  2  4  1 |
| 0  0  0  2  5 |

Crout Method
Diagonals of upper triangular matrix equal 1

Doolittle Method
Diagonals of lower triangular matrix equal 1

Thomas Algorithm
Stores information with the use of 4 vectors
Vectors:
{A} = Diagonal of Tri-Diag Matrix
{B} = Super Diagonal of Tri-Diag Matrix
{C} = Sub Diagonal of Tri-Diag Matrix
{b} = Any RHS Vector
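A minimal sketch of the algorithm using the four vectors above (no pivoting, so it assumes the system is well behaved, e.g. diagonally dominant); the example RHS was chosen so the exact solution is all ones:

```python
def thomas(diag_a, super_b, sub_c, rhs):
    """Solve a tri-diagonal system stored as the four vectors {A}, {B}, {C}, {b}."""
    n = len(diag_a)
    a, d = list(diag_a), list(rhs)       # work on copies
    # Forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = sub_c[i - 1] / a[i - 1]
        a[i] -= m * super_b[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    x = [0.0] * n
    x[-1] = d[-1] / a[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - super_b[i] * x[i + 1]) / a[i]
    return x

# The 5x5 tri-diagonal example above; RHS = row sums, so x = [1, 1, 1, 1, 1]
print(thomas([1, 2, 3, 4, 5], [3, 4, 7, 1], [6, 4, 2, 2], [4, 12, 14, 7, 7]))
```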
