
Numerical methods are techniques by which mathematical problems are reformulated so that they can be solved with arithmetic operations. Numerical methods are characterized by large numbers of tedious arithmetic calculations.

HOW ENGINEERS APPROACHED PROBLEM SOLVING BEFORE THE COMPUTER ERA:
1. Analytical or Exact Methods - applicable only to a limited class of problems (e.g., linear models, simple geometry, low dimensionality).
2. Graphical Solutions - can be used to solve complex problems, but the results are not very precise and are limited to few dimensions; without the aid of a computer, they are tedious and awkward to implement.
3. Calculators and Slide Rules - calculations are still tedious and slow because the process remains manual.

SIGNIFICANCE OF NUMERICAL METHODS:
1. Numerical methods are extremely powerful problem-solving tools and will greatly enhance your problem-solving skills.
2. If you are conversant with numerical methods and adept at computer programming, you can design your own programs to solve problems.
3. Numerical methods provide a vehicle for reinforcing your understanding of mathematics, because one function of numerical methods is to reduce higher mathematics to basic arithmetic operations.

COMPUTERS ARE PRACTICALLY USELESS WITHOUT A FUNDAMENTAL UNDERSTANDING OF HOW ENGINEERING SYSTEMS WORK.

What are Generalizations? Generalizations serve as organizing principles that can be employed to synthesize observations and experimental results into a coherent and comprehensive framework from which conclusions can be drawn. From an engineering problem-solving perspective, a generalization is expressed in the form of a mathematical model.

Mathematical Model - a formulation or equation that expresses the essential features of a physical system or process in mathematical terms.
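The classic falling-parachutist model, dv/dt = g − (c/m)v, is a standard textbook illustration of such a mathematical model, and it can be solved both analytically and numerically. A minimal sketch in Python; the parameter values are illustrative assumptions:

```python
import math

# Illustrative parameter values (not prescribed by the notes):
g, m, c = 9.81, 68.1, 12.5  # gravity (m/s^2), mass (kg), drag coefficient (kg/s)

def v_exact(t):
    """Analytical solution of dv/dt = g - (c/m)*v with v(0) = 0."""
    return (g * m / c) * (1.0 - math.exp(-(c / m) * t))

def v_euler(t_end, dt):
    """Numerical solution: advance along straight-line (Euler) segments of size dt."""
    v, t = 0.0, 0.0
    while t < t_end - 1e-9:
        v += (g - (c / m) * v) * dt  # slope evaluated at the start of each step
        t += dt
    return v

# The discrepancy with the exact solution shrinks as the step size is reduced.
err_coarse = abs(v_euler(10.0, 2.0) - v_exact(10.0))
err_fine = abs(v_euler(10.0, 0.5) - v_exact(10.0))
```

With dt = 2.0 the straight-line segments visibly deviate from the exact curve; shrinking the step drives the two results together.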
Dependent variable: characteristic that usually reflects the state of the system
Independent variables: dimensions, such as time and space, along which the system's behavior is being determined
Parameters: reflect the system's properties or composition
Forcing functions: external influences acting upon the system

NEWTON'S SECOND LAW OF MOTION: States that the time rate of change of momentum of a body is equal to the resultant force acting on it. Equivalently, the acceleration produced by a particular force acting on a body is directly proportional to the magnitude of the force and inversely proportional to the mass of the body.

INTERPRETATION: The numerical method (approximate solution) captures the essential features of the exact solution. However, because straight-line segments are employed, there are some discrepancies between the two results. One way to minimize such discrepancies is to use a smaller step size.

PART I: BASICS OF COMPUTER PROGRAMMING & SOFTWARE

Computers make extremely laborious and time-consuming solutions to problems far easier to obtain than working by hand. What can the computer provide you with?
1. Software packages capable of doing basic and standard computations.
2. Programming, which extends the capability of the software packages through programs written to run on them.

Computer Programs - sets of instructions that direct the computer to perform a certain task.
Structured Programming - a set of rules that prescribe good style habits for the programmer. Well-structured algorithms are invariably easier to debug and test, resulting in programs that take a shorter time to develop, test, and update.

Forms of Programming Structuring:
1. Flowchart - a visual or graphical representation of an algorithm. It employs a series of blocks, which represent particular operations or steps in the algorithm, and arrows, which represent the sequence of the algorithm. Flowcharts are useful in planning, unraveling, or communicating the logic of your own or someone else's program.
Terminal - represents the beginning or end of a program.
Flowlines - represent the flow of logic. The humps on a horizontal arrow indicate that it passes over and does not connect with the vertical flowlines.
Process - represents calculations or data manipulations.
Input/Output - represents inputs and outputs of data and information.
Decision - represents a comparison or question that determines the alternative paths to be followed.
Junction - represents the confluence of flowlines.
Off-Page Connector - represents a break that is continued on another page.
Count-Controlled Loop - used for loops which repeat a prespecified number of iterations.
2. Pseudocode - uses code-like statements in place of the graphical symbols of the flowchart. It bridges the gap between flowcharts and computer code, and it is easier to develop a program with pseudocode than with a flowchart.

Fundamental Control Structures (Logical Representation):
1.
Sequence - computer code is implemented one instruction at a time (unless directed otherwise).
2. Selection - provides a means to split the program's flow into branches based on the outcome of a logical condition.
Cascade - a chain of decisions.
Case - branching is based on the value of a single test expression.
3. Repetition - provides a means to implement instructions repeatedly (loops).
a. Decision Loop - terminates based on the result of a logical condition.
i. DOEXIT (Break Loop) - a structure that repeats until a logical condition is true.
ii. DOFOR (Count-Controlled Loop) - performs a specified number of repetitions or iterations.

Modular Programming - an approach which divides the computer program into small subprograms, or modules, that can be developed and tested separately. It makes a subprogram independent and self-contained, doing a specific and defined function with one entry and one exit point. Modules are typically 50-100 instructions in length.
Procedures - the representation of modules in high-level languages; a series of computer instructions that together perform a given task.

Excel - a spreadsheet, which is a special type of mathematical software that allows the user to enter and perform calculations on rows and columns of data

- has some built-in numerical capabilities, including equation solving, curve fitting, and optimization.
- includes VBA as a macro language that can be used to implement numerical calculations.
- has several visualization tools, such as graphs and three-dimensional surface plots, that serve as valuable adjuncts for numerical analysis.

MATLAB - the flagship software product of The MathWorks, Inc., which was cofounded by the numerical analysts Cleve Moler and John N. Little.
- originally developed as a matrix laboratory
- has a variety of functions and operators that allow convenient implementation of many numerical methods

APPROXIMATIONS AND ROUND-OFF ERRORS
Approximation - a representation of something that is not exact, but still close enough to be useful.
Significant Figures - developed to formally designate the reliability of a numerical value. They are the digits that can be used with confidence, corresponding to the number of certain digits plus one estimated digit.

Identifying Significant Digits
1. All non-zero digits are significant. EX: 91 has two significant digits (9 and 1); 123.45 has five significant digits (1, 2, 3, 4, and 5).
2. Zeros appearing between two non-zero digits are significant. EX: 101.12 has five significant digits: 1, 0, 1, 1, and 2.
3. Leading zeros are not significant. EX: 0.00052 has two significant digits: 5 and 2.
4. Trailing zeros in a number containing a decimal point are significant. EX: 12.2300 has six significant digits: 1, 2, 2, 3, 0, and 0; 0.000122300 still has only six significant digits; 130.00 has five significant digits.
5. Zeros used only to place value are not significant unless a decimal point is included. EX: 50 has 1 significant digit; 50. has 2 significant digits.

Importance of Understanding Significant Figures in Numerical Methods
1. Important when deciding the acceptability of an approximation to a certain number of significant figures.
2. Important since computers cannot express a value with an infinite number of digits.
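Rounding to a given number of significant figures can be expressed as a small helper; `sig_figs` is a name made up for this sketch, not a standard library function:

```python
import math

def sig_figs(x, n):
    """Round x to n significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    # The position of the leading digit determines where rounding happens.
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

# Mirrors the rules above: leading zeros don't count, non-zero digits do.
two_sf = sig_figs(0.00052, 2)    # the two significant digits 5 and 2 survive
three_sf = sig_figs(123.45, 3)
```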
Accuracy and Precision
Accuracy - refers to how closely a computed or measured value agrees with the true value.
Precision - refers to how closely individual computed or measured values agree with each other.
Inaccuracy - aka bias; defined as systematic deviation from the truth.
Imprecision - aka uncertainty; refers to the magnitude of the scatter.
Error - represents both the inaccuracy and imprecision of predictions. Errors arise from the use of approximations to represent exact mathematical operations and quantities.

Major Forms of Numerical Errors:
1. Round-off Error - due to the fact that computers can represent quantities with only a finite number of digits; the discrepancy introduced by the omission of significant figures.
2. Truncation Error - the discrepancy introduced by the fact that numerical methods may employ approximations to represent exact mathematical operations and quantities.

CHALLENGE OF NUMERICAL METHODS: determining error estimates in the absence of knowledge of the true value.
SOLUTION TO THE CHALLENGE: an iterative approach is used to compute the answers. The process is performed repeatedly, or iteratively, to successively compute better and better approximations.

FLOATING POINT - represents fractional quantities in computers

- composed of the mantissa (significand), the base of the number system, and the exponent (m × b^e).

NUMBER SYSTEM - a convention for representing quantities.
Base - the number used as the reference for constructing the system.
Place Value - determines the position and magnitude of a digit or symbol.

TRUNCATION ERRORS AND TAYLOR SERIES
Truncation Errors - result from using an approximation in place of an exact mathematical procedure. EXAMPLE: Euler's method used in solving the parachutist problem by approximation.
Taylor Series - a mathematical formulation widely used in numerical methods to express functions in an approximate fashion. It provides a means to predict a function value at one point in terms of the function value and its derivatives at another point, and it states that any smooth function can be approximated as a polynomial.
Zero-order Approximation - indicates that the value of f at the new point is the same as its value at the old point; very useful if xi and xi+1 are close to each other:

f(xi+1) ≈ f(xi)
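The effect of adding higher-order Taylor terms can be seen numerically. A sketch using f(x) = e^x (chosen only because its derivatives are easy; the expansion point and step size are arbitrary):

```python
import math

xi, h = 0.0, 0.5
true_value = math.exp(xi + h)

# For f(x) = e^x, every derivative at xi equals e^xi.
zero_order = math.exp(xi)                              # f(xi+1) ~ f(xi)
first_order = zero_order + math.exp(xi) * h            # + f'(xi) * h
second_order = first_order + math.exp(xi) * h**2 / 2   # + f''(xi)/2! * h^2

# Each added term reduces the truncation error.
errors = [abs(true_value - est) for est in (zero_order, first_order, second_order)]
```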

- a value of 1 tells us that the function's relative error is identical to the relative error in x.
- a value greater than 1 tells us that the relative error is amplified, whereas a value less than 1 tells us that it is attenuated.
- ill-conditioned functions are those with very large condition numbers.

TOTAL NUMERICAL ERROR - the sum of the truncation and round-off errors.
- The only way to minimize round-off errors is to increase the number of significant figures.
- Truncation errors can be reduced by decreasing the step size.
- As the number of computations increases, truncation errors decrease while round-off errors increase.
- DILEMMA: the strategy for decreasing one component of the total error leads to an increase in the other component.

BLUNDERS - aka gross errors; errors that can contribute to all the other components of error.
FORMULATION ERRORS - aka model errors; relate to bias that can be ascribed to incomplete mathematical models.
DATA UNCERTAINTY - associated with the inconsistent nature of data; it can exhibit both inaccuracy and imprecision.

ROOTS OF EQUATIONS
Bracketing Methods - require two initial guesses that bracket, or enclose, the root. Different strategies are used to systematically reduce the bracket and home in on the correct answer.
1. Graphical Methods - a simple method for estimating the root of an equation by observing where the curve crosses the x-axis. They have limited practical value because they are not precise, but they are useful as starting guesses for numerical methods.
Incremental Search Method - locates an interval where the function changes sign, divides the interval into subintervals to again locate where the sign changes, and so on.
2. Bisection Method - aka binary chopping, interval halving, or Bolzano's method; a type of incremental search in which the interval is always divided in half.
If a function changes sign over an interval, the function value at the midpoint is evaluated; the root is taken as lying at the midpoint of the subinterval within which the sign change occurs, and the process is repeated to obtain refined estimates.
3. False Position Method - an alternative method based on graphical insight. It addresses an inefficiency of the bisection method, which takes no account of the magnitudes of f(xl) and f(xu). It joins f(xl) and f(xu) with a straight line, and the intersection of this line with the x-axis serves as an improved estimate of the root:

xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

It is called false position (regula falsi in Latin) because replacing the curve with a line yields a "false" position of the root; it is also known as the Linear Interpolation Method.
NOTE: In the bisection method, the interval between xl and xu grows smaller during the course of the computation, while in the false-position method one of the initial guesses (xl or xu) may stay fixed throughout the computation as the other guess converges on the root. In general, false position converges to the desired root more quickly, but not always - especially for functions with significant curvature - because of its one-sidedness.
4. Modified False-Position Method - a combination of false position and bisection that mitigates the one-sidedness of false position: if one of the bounds becomes stuck, the bisection formula for xr is used instead.
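The two bracketing methods above can be sketched side by side; the tolerance and iteration cap are arbitrary choices for this illustration:

```python
def bisection(f, xl, xu, tol=1e-10, max_it=200):
    """Halve [xl, xu] repeatedly, keeping the half that contains the sign change."""
    assert f(xl) * f(xu) < 0, "initial guesses must bracket the root"
    for _ in range(max_it):
        xr = (xl + xu) / 2
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        if xu - xl < tol:
            break
    return (xl + xu) / 2

def false_position(f, xl, xu, tol=1e-10, max_it=200):
    """Join f(xl) and f(xu) with a line; its x-intercept is the new estimate."""
    assert f(xl) * f(xu) < 0, "initial guesses must bracket the root"
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        if abs(xr - xr_old) < tol:
            break
    return xr
```

For f(x) = x² − 2 on [1, 2], both converge to √2; note how false position keeps one bound fixed (its one-sidedness) while bisection shrinks the bracket from both sides.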


First-order Approximation - added to the zero-order approximation to provide a better estimate. It adds a term consisting of the slope f′(xi) multiplied by the distance between xi and xi+1. This adds a straight line and is capable of predicting an increase or decrease of the function between xi and xi+1:

f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi)

Second-order Approximation - captures some of the curvature that the function might exhibit:

f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi) + [f″(xi)/2!](xi+1 − xi)²

The Complete Taylor Series:

f(xi+1) = f(xi) + f′(xi)h + [f″(xi)/2!]h² + [f‴(xi)/3!]h³ + … + [f⁽ⁿ⁾(xi)/n!]hⁿ + Rn

where h = xi+1 − xi and the remainder term is Rn = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!]hⁿ⁺¹ for some ξ between xi and xi+1.

NOTE: The nth-order Taylor series expansion will be exact for an nth-order polynomial. For other differentiable and continuous functions, such as exponentials and sinusoids, a finite number of terms will not yield an exact estimate; only if an infinite number of terms are added will the series yield an exact result. If Rn is dropped, the right-hand side of the Taylor series becomes an approximation whose error is proportional to the step size h raised to the (n+1)th power: Rn = O(hⁿ⁺¹). We can usually assume that the truncation error is decreased by the addition of terms to the Taylor series.

Derivative Mean-Value Theorem - states that if a function f(x) and its first derivative are continuous over an interval from xi to xi+1, then there exists at least one point ξ on the function that has a slope f′(ξ) parallel to the line joining f(xi) and f(xi+1).

STABILITY AND CONDITION
Condition - relates to the sensitivity of the mathematical problem to changes in its input values.
Numerically Unstable - describes problems in which the uncertainty of the input values is grossly magnified by the numerical method.
Condition Number - defined as the ratio of the relative error in f(x) to the relative error in x; it provides a measure of the extent to which an uncertainty in x is magnified by f(x).
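This ratio can be estimated numerically. A sketch; the finite-difference estimate and the two test points are illustrative choices:

```python
import math

def condition_number(f, x, rel_dx=1e-6):
    """Estimate |relative error in f| / |relative error in x| by finite differences
    (a numerical sketch of the condition number x*f'(x)/f(x))."""
    dx = x * rel_dx
    rel_f = abs((f(x + dx) - f(x)) / f(x))
    return rel_f / rel_dx

well = condition_number(math.sqrt, 4.0)   # ~0.5: the relative error is attenuated
ill = condition_number(math.tan, 1.57)    # very large near pi/2: ill-conditioned
```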

OPEN METHODS
1. Simple Fixed-Point Iteration. For simple fixed-point iteration, also called one-point iteration or successive substitution, a formula for an iterative algorithm can be derived by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation, as in x = g(x).
2. Newton-Raphson Method. Figure 2.6 illustrates the graphical depiction of the method. If the initial guess at the root is xi, a tangent can be extended from the point (xi, f(xi)). The point at which this tangent crosses the x-axis usually represents an improved approximation of the root:

xi+1 = xi − f(xi)/f′(xi)

The Newton-Raphson method is often the most efficient root-finding algorithm available. Compared to simple fixed-point iteration, which converges linearly, the Newton-Raphson method converges quadratically; that is, the error is approximately proportional to the square of the previous error. However, it also has pitfalls and may converge very slowly.
3. Secant Method. A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative: certain functions have derivatives that are too complicated to evaluate or too inconvenient to use. For these cases, the derivative may be approximated by a backward finite divided difference, giving

xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))

Note the similarity between the algorithms of the secant method and the false-position method. A critical difference, however, is how one of the initial values is replaced by the new estimate. In the false-position method, the initial values are updated so that they always bracket the root. In the secant method, the initial values are replaced in strict order: the new value replaces xi, and xi replaces xi−1. Thus, there is a tendency for the two values to fall on one side of the root, and a possibility of divergence exists. Figure 2.9 demonstrates this graphically.

LINEAR ALGEBRAIC EQUATIONS
Definitions and Notations.
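Before moving to linear systems, the two open methods above can be sketched as follows; the test function in the example is an arbitrary choice:

```python
import math

def newton_raphson(f, dfdx, x, tol=1e-12, max_it=50):
    """x_{i+1} = x_i - f(x_i)/f'(x_i): tangent-line root estimate."""
    for _ in range(max_it):
        x_new = x - f(x) / dfdx(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def secant(f, x_prev, x, tol=1e-12, max_it=50):
    """Same idea, with f'(x_i) replaced by a backward finite divided difference."""
    for _ in range(max_it):
        x_new = x - f(x) * (x_prev - x) / (f(x_prev) - f(x))
        if abs(x_new - x) < tol:
            return x_new
        x_prev, x = x, x_new
    return x

# Example: the root of cos(x) - x = 0 (an arbitrary test function).
root = newton_raphson(lambda x: math.cos(x) - x,
                      lambda x: -math.sin(x) - 1.0, 1.0)
```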
A matrix consists of a rectangular array of elements represented by a single symbol, as in equation 3.3. Here, [A] is the shorthand notation for the matrix and aij designates an individual element of the matrix. A horizontal set of elements is called a row, and a vertical set is called a column. The first subscript i always designates the number of the row in which the element lies; the second subscript j designates the column. Matrices with a single row are called row vectors, while single-column matrices are called column vectors. In matrix [A] of 3.4, the diagonal consisting of the elements a11, a22, a33, and a44 is termed the principal or main diagonal of the matrix.
A diagonal matrix is a square matrix in which all elements off the main diagonal are equal to zero. An identity matrix is a diagonal matrix in which all elements on the main diagonal are equal to 1. An upper triangular matrix is one in which all the elements below the main diagonal are zero. A lower triangular matrix is one in which all elements above the main diagonal are zero. A banded matrix has all elements equal to zero, with the exception of a band centered on the main diagonal. The matrix above has a bandwidth of 3 and is given a special name: the tridiagonal matrix.
Operations on Matrices. The following are the rules for operations on matrices.

1. Addition and subtraction. Accomplished by adding or subtracting the corresponding elements of the matrices, as in cij = aij + bij and dij = eij − fij.
2. Multiplication by a scalar quantity. The product of a scalar quantity g and a matrix [A] is obtained by multiplying every element of [A] by g.
3. Multiplication of two matrices. The product of two matrices is represented as [C] = [A][B], where the elements of [C] are defined as

cij = Σk aik bkj

4. Inverse of a matrix. Division of matrices is not a defined operation; however, multiplication of a matrix by the inverse of another matrix is analogous to division, because multiplying a matrix by its inverse results in an identity matrix. The algorithms for computing the inverse matrix will be discussed in future sections.
5. Transpose of a matrix. Involves transforming row elements into columns and column elements into rows.
6. Trace of a matrix. The trace is the sum of the elements on the principal diagonal, designated tr[A] and computed as

tr[A] = Σi aii

7. Augmentation. A matrix is augmented by the addition of a column or columns to the original matrix.

Gauss Elimination
This section deals with simultaneous linear algebraic equations that can be generally expressed as in Equation 3.2. The technique described here is called Gauss elimination because it involves combining equations to eliminate unknowns.
Solving Small Numbers of Equations. Before proceeding with computer-aided methods, several methods appropriate for solving small (n ≤ 3) sets of simultaneous equations without the aid of a computer will be described first: the graphical method, Cramer's rule, and the elimination of unknowns.
Graphical Method. For two equations with two unknowns, a graphical solution is obtained by plotting each of them on a Cartesian plane and looking for their intersection.
Naive Gauss Elimination.
Gauss elimination is a systematic technique of forward elimination and back substitution, as described previously. Although these techniques are ideally suited for implementation on computers, some modifications are required to avoid pitfalls such as division by zero. Naive Gauss elimination does not avoid this problem. Gauss elimination consists of two steps: forward elimination of unknowns, and obtaining solutions through back substitution.
Pitfalls of Elimination Methods. Whereas there are many systems of equations that can be solved with naive Gauss elimination, there are some pitfalls that must be explored before writing a general computer program to implement the method. These are:
1. Division by zero. Naive Gauss elimination uses normalization to eliminate variables; the coefficient used to normalize a row is called the pivot element. Used plainly, the possibility of division by zero exists - for some sets of equations, normalizing the first row with its pivot results in division by zero. Problems can also arise when a coefficient is very close to zero. The technique of pivoting has been developed to partially avoid these problems.
2. Round-off errors. Since the process involves a large number of computations, round-off errors come into play and can propagate. Checking whether the solutions satisfy the system of equations can help detect whether a substantial error has occurred.
3. Ill-conditioned systems. Well-conditioned systems are those in which a small change in one or more coefficients results in a similarly small change in the solutions. Ill-conditioned systems are those in which small changes in coefficients result in large changes in the solutions; such systems can admit a wide range of apparent solutions. Ill-conditioned systems have determinants close to zero; however, there is no general rule for how close to zero the determinant of a system must be to indicate ill-conditioning.
4. Singular systems. In the extreme case of ill-conditioning, some of the equations are almost identical; it can also happen, especially when working with large sets of equations, that some equations are exactly identical. Degrees of freedom are then lost, as one is forced to eliminate equations that are identical to others. Such systems are called singular systems, and they are characterized by a zero determinant.
Gauss-Jordan Elimination. The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot elements. Thus, the elimination process reduces the system matrix to an identity matrix with the constants adjusted accordingly; consequently, it is not necessary to employ back substitution, as the constants column already gives the solutions for the unknowns.
Matrix Inversion. Recall that the inverse of a square matrix [A], denoted by [A]⁻¹, has the property that, when multiplied by the original matrix, it yields the identity matrix [I]. In this section, the computation of this inverse will be demonstrated using LU decomposition.
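The two steps of naive Gauss elimination can be sketched directly; the 3×3 system in the example is an illustrative, well-behaved one (no pivoting is performed, so a zero pivot would fail, as noted above):

```python
def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination, then back substitution.
    No pivoting, so a zero (or tiny) pivot element will cause trouble."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # Forward elimination of unknowns
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_eliminate(A, b)  # ~ [3.0, -2.5, 7.0]
```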

Convergence criterion for the Gauss-Seidel method. The Gauss-Seidel method is similar in spirit to the fixed-point iteration described in the previous lesson. Thus, it exhibits the same two fundamental problems: (1) it is sometimes non-convergent; and (2) if it converges, it often does so very slowly. By developing a convergence criterion, one can know in advance whether Gauss-Seidel will lead to a converging solution.
Improvement of convergence using relaxation. Relaxation represents a slight modification of the Gauss-Seidel method designed to enhance convergence.

CURVE FITTING
- Data is often given for discrete values along a continuum, yet you may require estimates at points between the discrete values.
- A simple method for fitting a curve to data is to plot the points and then sketch a line that visually conforms to the data.
THREE ATTEMPTS TO FIT A BEST CURVE THROUGH FIVE DATA POINTS:
- Not connecting the points, but characterizing the general upward trend of the data with a straight line.
- Using straight-line segments, or linear interpolation, to connect the points.
- Using curves to try to capture the meanderings suggested by the data.
CRITERIA FOR BEST FIT
1. One strategy for fitting a best line through the data is to minimize the sum of the residual errors for all the available data.
2. Another logical criterion is to minimize the sum of the absolute values of the discrepancies.
3. MINIMAX CRITERION. A line is chosen that minimizes the maximum distance that an individual point falls from the line. This strategy is ill-suited for regression because it gives undue influence to an outlier, that is, a single point with a large error.
Standard Deviation - the most common measure of spread for a sample about the mean.
STANDARD ERROR OF THE ESTIMATE (Sy/x) - the error of the predicted value of y corresponding to a particular value of x.
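Least-squares linear regression, together with these fit statistics, can be sketched as follows (a minimal illustration; the data in the example is made up):

```python
def linear_regression(xs, ys):
    """Least-squares fit y = a0 + a1*x, plus the standard error and r^2."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n
    st = sum((y - sy / n) ** 2 for y in ys)                    # spread about the mean
    sr = sum((y - a0 - a1 * x) ** 2 for x, y in zip(xs, ys))   # residuals about the line
    syx = (sr / (n - 2)) ** 0.5        # standard error of the estimate
    r2 = (st - sr) / st                # coefficient of determination
    return a0, a1, syx, r2
```

Fitting data that lies exactly on a line recovers the intercept and slope, with r² = 1 and a standard error of essentially zero.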
POLYNOMIAL REGRESSION - used when a marked pattern is poorly represented by a straight line and a curve gives a better fit.
INTERPOLATION - a method used to estimate intermediate values between precise data points.
Linear Interpolation - the simplest form of interpolation: connecting two points with a straight line. The smaller the interval between the data points, the better the approximation.
Quadratic Interpolation - a strategy for improving the estimate by introducing some curvature into the line connecting the points; used when three data points are available, so that a parabola can be fitted.

How do you reduce truncation and round-off errors? Truncation error can be reduced by decreasing the step size, but a decreased step size increases the number of computations and can lead to subtractive cancellation: as truncation errors are decreased, round-off errors are increased. In general, the only way to minimize round-off errors is to increase the number of significant figures.

Independent variable - dimensions such as time and space are examples of ___ variables.
Pseudocode - a form of programming structure that uses code-like statements in place of graphical symbols.
Count-Controlled Loop (DOFOR) - a kind of loop that performs a specified number of repetitions or iterations.

Excel - computer software that allows the user to enter and perform calculations on rows and columns of data.
Imprecision - refers to the magnitude of the scatter; also known as uncertainty.
Error - the difference between the exact value and the approximate value.
Taylor Series - cutting down (truncating) the terms of this series of approximations produces truncation error.
Total Numerical Error - the sum of the truncation and round-off errors.
Pseudocode - code-like statements used in place of the graphical symbols of a flowchart.
Modular Programming - a type of programming which makes a subprogram independent and self-contained, doing a specific and defined function with one entry and one exit point.
Approximation - represents something that is not exact, but still close enough to be useful.
Truncation Error - arises from the use of approximations to represent exact mathematical operations.
Stopping Criterion (εs) - the pre-specified percent tolerance.
Blunders - also known as gross errors.

Formula: Linear Interpolation

f1(x) = f(x0) + [(f(x1) − f(x0)) / (x1 − x0)](x − x0)
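The formula translates directly to code. The ln(2) estimate in the example is the classic illustration of how a narrower interval improves the result:

```python
import math

def linear_interp(x0, f0, x1, f1, x):
    """f1(x) = f(x0) + (f(x1) - f(x0)) / (x1 - x0) * (x - x0)"""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# Estimate ln(2) first from ln(1) and ln(6), then from the narrower pair ln(1) and ln(4).
wide = linear_interp(1.0, 0.0, 6.0, math.log(6.0), 2.0)
narrow = linear_interp(1.0, 0.0, 4.0, math.log(4.0), 2.0)
```

The narrower interval yields an estimate closer to the true value ln 2 ≈ 0.6931.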

Error Analysis and System Condition. The inverse of a matrix also provides a means to discern whether systems are ill-conditioned. Three methods are available for this purpose:
1. Scale the matrix of coefficients so that the largest element in each row is 1, then invert the scaled matrix. If there are elements of the inverse that are several orders of magnitude greater than one, it is likely that the system is ill-conditioned.
2. Multiply the inverse by the original coefficient matrix and assess whether the resulting matrix is close to the identity matrix. If not, this indicates ill-conditioning.
3. Invert the inverted matrix and assess whether the resulting matrix is sufficiently close to the original matrix. If not, this again indicates that the system is ill-conditioned.
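Method 1 can be illustrated on a 2×2 system; the matrices are made-up examples, and `inv2`/`row_scale` are helper names for this sketch only:

```python
def row_scale(A):
    """Scale each row so its largest element (in magnitude) is 1."""
    return [[v / max(abs(u) for u in row) for v in row] for row in A]

def inv2(A):
    """Inverse of a 2x2 matrix (enough for this small illustration)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def largest_inverse_element(A):
    """Largest element of the inverse of the row-scaled matrix."""
    return max(abs(v) for row in inv2(row_scale(A)) for v in row)

well = [[3.0, 2.0], [-1.0, 2.0]]
ill = [[1.0, 2.0], [1.001, 2.0]]  # nearly identical rows: determinant near zero

# Elements of the scaled inverse orders of magnitude above 1 flag ill-conditioning.
flag_well = largest_inverse_element(well)
flag_ill = largest_inverse_element(ill)
```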

Quadratic Interpolation

f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1)

where
b0 = f(x0)
b1 = [f(x1) − f(x0)] / (x1 − x0)
b2 = {[f(x2) − f(x1)] / (x2 − x1) − b1} / (x2 − x0)

Curve Fitting: Linear Regression

y = a0 + a1x + e
a1 = [n Σxiyi − Σxi Σyi] / [n Σxi² − (Σxi)²]
a0 = ȳ − a1x̄

Standard Deviation

sy = √[St / (n − 1)], where St = Σ(yi − ȳ)²

Standard Error of the Estimate

sy/x = √[Sr / (n − 2)], where Sr = Σ(yi − a0 − a1xi)²

Special Matrices and Gauss-Seidel
Certain matrices have special structures that can be exploited to develop efficient solution schemes. The first part of this section discusses two such structures: banded and symmetric matrices. Efficient elimination methods are described for both. The second part turns to an alternative to elimination methods: an approximate, iterative method. The focus is on Gauss-Seidel, which is particularly well suited to large numbers of equations. Round-off errors are not as great an issue as in elimination methods; the main disadvantage, however, is the possibility of divergence.
Gauss-Seidel. Iterative or approximate methods provide an alternative to the elimination methods described to this point. Such approaches are similar to the techniques developed to obtain the roots of a single equation in the previous lesson on roots of equations. In this section, the Gauss-Seidel method, an iterative method for solving for the unknowns of a set of linear algebraic equations, will be demonstrated.
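A minimal Gauss-Seidel sketch; the tolerance, iteration cap, and the diagonally dominant test system in the example are illustrative choices:

```python
def gauss_seidel(A, b, tol=1e-10, max_it=200):
    """Solve [A]{x} = {b} iteratively, using each new estimate as soon as
    it is available (Jacobi, by contrast, waits for the next full sweep)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_it):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(xi - xo) for xi, xo in zip(x, x_old)) < tol:
            break
    return x

# A diagonally dominant system converges quickly; without dominance the
# iteration may diverge -- hence the convergence criterion discussed above.
x = gauss_seidel([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]],
                 [7.85, -19.3, 71.4])
```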
A similar method, called Jacobi, utilizes a somewhat different tactic. Instead of using the latest available values to solve for the next variable, it stores them first and uses them only in the next iteration. Figure 3.12 depicts the difference between the two. Although there are cases where the Jacobi method is useful, Gauss-Seidel's utilization of the best available estimates usually makes it the method of preference.

Coefficient of Determination (r²)

r² = (St − Sr) / St

Correlation Coefficient

r = [n Σxiyi − (Σxi)(Σyi)] / {√[n Σxi² − (Σxi)²] · √[n Σyi² − (Σyi)²]}



Polynomial Regression

y = a0 + a1x + a2x² + e

Minimizing the sum of the squared residuals Sr = Σ(yi − a0 − a1xi − a2xi²)² and setting the partial derivatives with respect to a0, a1, and a2 to zero yields the normal equations:

n·a0 + (Σxi)a1 + (Σxi²)a2 = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 = Σxiyi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 = Σxi²yi
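The normal equations above can be formed and solved directly. A sketch; `poly_regression` is a name for this illustration, and the solver is a plain Gauss elimination without pivoting, adequate for this small, well-behaved system:

```python
def poly_regression(xs, ys, order=2):
    """Least-squares polynomial fit by forming and solving the normal equations."""
    n = order + 1
    # Normal-equation matrix: entry (i, j) is the sum of x^(i+j) over the data.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Solve with naive Gauss elimination (forward elimination + back substitution).
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))) / A[i][i]
    return a
```

Fitting data generated from y = 1 + 2x + 3x² recovers the coefficients [1, 2, 3] to within round-off.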


