
Function Approximation

• Function approximation (Chapters 13 & 14)

-- method of least squares
-- minimize the residuals
-- the given data points contain noise
-- the purpose is to find the trend represented by the data

• Function interpolation (Chapters 15 & 16)

-- the approximating function matches the given data points exactly
-- the given data points are precise
-- the purpose is to estimate values between the given points
Interpolation and Regression

Chapter 13: Curve Fitting, Fitting a Straight Line by Least-Squares Regression

• Curve Fitting
• Statistics Review
• Linear Least-Squares Regression
• Linearization of Nonlinear Relationships
• MATLAB Functions
Wind Tunnel Experiment: Curve Fitting

Measure air resistance as a function of velocity.

Regression and Interpolation

[Figure: three approaches to curve fitting: (a) least-squares regression, (b) linear interpolation, (c) curvilinear interpolation]
Least-squares fit of a straight line

v (m/s)   10   20   30    40    50    60    70    80
F (N)     25   70   380   550   610   1220  830   1450
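A quick plot of these measurements before fitting (a minimal MATLAB sketch; the variable names v and F are chosen here for illustration):

v = [10 20 30 40 50 60 70 80];            % velocity, m/s
F = [25 70 380 550 610 1220 830 1450];    % force, N
plot(v, F, 'o')                           % raw data points only
xlabel('v (m/s)'), ylabel('F (N)')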
Simple Statistics
Measurements of the coefficient of thermal expansion
of structural steel [×10⁻⁶ in/(in·°F)]:

6.485 6.554 6.775 6.495 6.325 6.667


6.552 6.399 6.543 6.621 6.478 6.655
6.555 6.625 6.435 6.564 6.396 6.721
6.662 6.733 6.624 6.659 6.542 6.703
6.403 6.451 6.445 6.621 6.499 6.598
6.627 6.633 6.592 6.670 6.667 6.535

Compute the mean, standard deviation, variance, and other summary statistics for these data.


Statistics Review
• Arithmetic mean

  $\bar{y} = \frac{\sum y_i}{n}$

• Standard deviation about the mean

  $s_y = \sqrt{\frac{S_t}{n-1}}, \qquad S_t = \sum \left(y_i - \bar{y}\right)^2$

• Variance (spread)

  $s_y^2 = \frac{\sum \left(y_i - \bar{y}\right)^2}{n-1} = \frac{\sum y_i^2 - \left(\sum y_i\right)^2 / n}{n-1}$

• Coefficient of variation (c.v.)

  $\mathrm{c.v.} = \frac{s_y}{\bar{y}} \times 100\%$
i     y_i       (y_i − ȳ)²    y_i²
1 6.485 0.007173 42.055
2 6.554 0.000246 42.955
3 6.775 0.042150 45.901
4 6.495 0.005579 42.185
5 6.325 0.059875 40.006
6 6.667 0.009468 44.449
7 6.552 0.000313 42.929
8 6.399 0.029137 40.947
9 6.543 0.000713 42.811
10 6.621 0.002632 43.838
11 6.478 0.008408 41.964
12 6.655 0.007277 44.289
13 6.555 0.000216 42.968
14 6.625 0.003059 43.891
15 6.435 0.018143 41.409
16 6.564 0.000032 43.086
17 6.396 0.030170 40.909
18 6.721 0.022893 45.172
19 6.662 0.008520 44.382
20 6.733 0.026669 45.333
21 6.624 0.002949 43.877
22 6.659 0.007975 44.342
23 6.542 0.000767 42.798
24 6.703 0.017770 44.930
25 6.403 0.027787 40.998
26 6.451 0.014088 41.615
27 6.445 0.015549 41.538
28 6.621 0.002632 43.838
29 6.499 0.004998 42.237
30 6.598 0.000801 43.534
31 6.627 0.003284 43.917
32 6.633 0.004008 43.997
33 6.592 0.000498 43.454
34 6.670 0.010061 44.489
35 6.667 0.009468 44.449
36 6.535 0.001204 42.706
 236.509 0.406514 1554.198
Coefficient of Thermal Expansion
$\bar{y} = \frac{\sum y_i}{n} = \frac{236.509}{36} = 6.5697$  (mean)

$S_t = \sum \left(y_i - \bar{y}\right)^2 = 0.406514$  (sum of the squares of the residuals)

$s_y = \sqrt{\frac{S_t}{n-1}} = \sqrt{\frac{0.406514}{36-1}} = 0.10777$  (standard deviation)

$s_y^2 = \frac{\sum y_i^2 - \left(\sum y_i\right)^2 / n}{n-1} = \frac{1554.198 - (236.509)^2/36}{35} = \frac{0.406514}{35} = 0.0116147$  (variance)

$\mathrm{c.v.} = \frac{s_y}{\bar{y}} \times 100\% = \frac{0.10777}{6.5697} \times 100\% = 1.64\%$  (coefficient of variation)
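The same summary statistics can be reproduced with MATLAB built-ins (a minimal sketch; y holds the 36 readings from the table above):

y = [6.485 6.554 6.775 6.495 6.325 6.667 6.552 6.399 6.543 6.621 6.478 6.655 ...
     6.555 6.625 6.435 6.564 6.396 6.721 6.662 6.733 6.624 6.659 6.542 6.703 ...
     6.403 6.451 6.445 6.621 6.499 6.598 6.627 6.633 6.592 6.670 6.667 6.535];
n    = length(y);
ybar = mean(y);                 % arithmetic mean: 6.5697
St   = sum((y - ybar).^2);      % sum of squared residuals: 0.406514
sy   = sqrt(St/(n - 1));        % standard deviation: 0.10777
s2   = var(y);                  % sample variance (divides by n-1): 0.0116147
cv   = sy/ybar*100;             % coefficient of variation, percent: 1.64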

 xx
 
2

Histogram p( x ) 
1
2
exp  
 2 2


 

Normal
Distribution

• A histogram is used to depict the distribution of the data.
• For a large data set, the histogram often approaches the normal distribution (using the data in Table 12.2; see the sketch below).
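A histogram of the same 36 readings can be drawn in MATLAB (a minimal sketch; the choice of 8 bins is arbitrary):

histogram(y, 8)                 % y holds the 36 readings from above
xlabel('Coefficient of thermal expansion')
ylabel('Frequency')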
Regression and Residual

Linear Regression: fitting a straight line to observations.

[Figure: two straight-line fits to the same observations, one with small residual errors and one with large residual errors]
Linear Regression
• Equation for a straight line

  $y = a_0 + a_1 x$

• Difference between observation and line

  $y_i = a_0 + a_1 x_i + e_i$

• $e_i$ is the residual or error.
Least Squares Approximation

• Candidate criteria for minimizing the residuals (errors):

-- minimum average error (positive and negative errors cancel)
-- minimum absolute error
-- minimax error (minimizing the maximum error)
-- least squares (linear, quadratic, ...)
Minimize the Sum of the Errors

$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)$

Minimize the Sum of the Absolute Errors

$\sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} \left|y_i - a_0 - a_1 x_i\right|$

Minimize the Maximum Error (minimax)
Linear Least Squares

Given data points $(x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots, (x_n, y_n)$:

• Minimize the total square error.
• Straight-line approximation:

  $f(x) = a_0 + a_1 x, \qquad y_i \approx f(x_i) = a_0 + a_1 x_i$

• The line is not likely to pass through all the points if n > 2.
Linear Least Squares

• Total square-error function: the sum of the squares of the residuals

  $S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)^2$

• Minimize $S_r(a_0, a_1)$ by setting

  $\frac{\partial S_r}{\partial a_0} = 0, \qquad \frac{\partial S_r}{\partial a_1} = 0$

  and solving for $(a_0, a_1)$.
Linear Least Squares

• Minimize $S_r(a_0, a_1) = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)^2$:

  $\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right) = 0 \;\Rightarrow\; n a_0 + \left(\sum x_i\right) a_1 = \sum y_i$

  $\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right) x_i = 0 \;\Rightarrow\; \left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 = \sum x_i y_i$

• Solving these normal equations for the line $y = a_0 + a_1 x$:

  $a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}, \qquad a_0 = \bar{y} - a_1 \bar{x} = \frac{\sum y_i}{n} - a_1 \frac{\sum x_i}{n}$
Advantages of Least Squares

• Positive differences do not cancel negative differences.
• Differentiation of the squared error is straightforward.
• Squaring weights the differences: small differences become smaller and large differences are magnified.
Linear Least Squares

• Using sum( ) in MATLAB, let

  $S_{xx} = \sum_{i=1}^{n} x_i^2, \quad S_x = \sum_{i=1}^{n} x_i, \quad S_{xy} = \sum_{i=1}^{n} x_i y_i, \quad S_y = \sum_{i=1}^{n} y_i$

• The normal equations become

  $\begin{bmatrix} n & S_x \\ S_x & S_{xx} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} S_y \\ S_{xy} \end{bmatrix}$

  with solution

  $a_0 = \frac{S_{xx} S_y - S_{xy} S_x}{n S_{xx} - S_x^2}, \qquad a_1 = \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2}$
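These closed-form expressions translate directly into MATLAB (a minimal sketch, assuming data vectors x and y of equal length):

Sx  = sum(x);     Sy  = sum(y);      % first-order sums
Sxx = sum(x.^2);  Sxy = sum(x.*y);   % higher-order sums
n   = length(x);
d   = n*Sxx - Sx^2;                  % common denominator
a0  = (Sxx*Sy - Sxy*Sx)/d;           % intercept
a1  = (n*Sxy - Sx*Sy)/d;             % slope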
Linearization of Nonlinear Relationships

[Figure: the untransformed power equation (x vs. y) alongside the transformed data (log x vs. log y)]
Linearization of Nonlinear Relationships

• Exponential equation:

  $y = \alpha_1 e^{\beta_1 x} \;\Rightarrow\; \ln y = \ln \alpha_1 + \beta_1 x$

  Use $(x_i, \ln y_i)$ instead of $(x_i, y_i)$.

• Power equation (base-10 logarithms):

  $y = \alpha_2 x^{\beta_2} \;\Rightarrow\; \log y = \log \alpha_2 + \beta_2 \log x$

  Use $(\log x_i, \log y_i)$ instead of $(x_i, y_i)$.
Linearization of Nonlinear Relationships

• Saturation-growth-rate equation (a MATLAB sketch follows this list):

  $y = \alpha_3 \frac{x}{\beta_3 + x} \;\Rightarrow\; \frac{1}{y} = \frac{1}{\alpha_3} + \frac{\beta_3}{\alpha_3}\left(\frac{1}{x}\right)$

  Use $(1/x_i,\; 1/y_i)$ instead of $(x_i, y_i)$.

• Rational function:

  $y = \frac{1}{\alpha_4 x + \beta_4} \;\Rightarrow\; \frac{1}{y} = \alpha_4 x + \beta_4$

  Use $(x_i,\; 1/y_i)$ instead of $(x_i, y_i)$.
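As an illustration, the saturation-growth-rate model can be fitted with an ordinary straight-line regression on the reciprocal variables (a minimal sketch; x and y stand for whatever data vectors are being fitted):

X = 1./x;  Y = 1./y;           % transformed variables: 1/x vs. 1/y
p = polyfit(X, Y, 1);          % straight line Y = p(1)*X + p(2)
alpha3 = 1/p(2);               % the intercept of the line is 1/alpha3
beta3  = p(1)*alpha3;          % the slope of the line is beta3/alpha3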
Example 12.4: Power Equation

[Figure: the transformed data (log x_i vs. log y_i) with its straight-line fit, and the resulting power-equation fit $y = \alpha_2 x^{\beta_2}$ plotted with the original data (x vs. y)]
>> x = [10 20 30 40 50 60 70 80];
>> y = [25 70 380 550 610 1220 830 1450];
>> [a, r2] = linregr(x,y)
a =
   19.4702  -234.2857
r2 =
    0.8805

Straight-line fit (x vs. y): y = 19.4702x − 234.2857
>> x = [10 20 30 40 50 60 70 80];
>> y = [25 70 380 550 610 1220 830 1450];
>> [a, r2] = linregr(log10(x), log10(y))
a =
    1.9842   -0.5620
r2 =
    0.9481

Transformed fit (log x vs. log y): log y = 1.9842 log x − 0.5620, so
y = (10^(−0.5620)) x^(1.9842) = 0.2742 x^(1.9842)
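Note that linregr is the course's helper function, not a MATLAB built-in. A minimal sketch consistent with the calls above, built from the normal-equation formulas derived earlier, might look like this:

function [a, r2] = linregr(x, y)
% LINREGR  Least-squares fit of the straight line y = a(1)*x + a(2),
%          also returning the coefficient of determination r2.
x = x(:); y = y(:);                    % force column vectors
n = length(x);
Sx  = sum(x);     Sy  = sum(y);
Sxx = sum(x.^2);  Sxy = sum(x.*y);
a1 = (n*Sxy - Sx*Sy)/(n*Sxx - Sx^2);   % slope
a0 = mean(y) - a1*mean(x);             % intercept
a  = [a1 a0];
St = sum((y - mean(y)).^2);            % total sum of squares
Sr = sum((y - a0 - a1*x).^2);          % residual sum of squares
r2 = (St - Sr)/St;                     % coefficient of determination
end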
MATLAB Functions

• Least-squares fit of an nth-order polynomial:

  p = polyfit(x,y,n)

  $f(x) = p_1 x^n + p_2 x^{n-1} + \cdots + p_n x + p_{n+1}$

• Evaluate the fitted polynomial (a short demonstration follows):

  y = polyval(p,x)
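For example, fitting the wind-tunnel data with a first-order polynomial reproduces the linregr result above (a minimal sketch):

v = [10 20 30 40 50 60 70 80];
F = [25 70 380 550 610 1220 830 1450];
p = polyfit(v, F, 1)         % approximately [19.4702  -234.2857]
Fhat = polyval(p, v);        % fitted values at the measured velocities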
[Figure: a straight-line fit with large error and poor correlation; it is preferable to fit a parabola to these data]
Polynomial Regression

• Quadratic least squares: $y = f(x) = a_0 + a_1 x + a_2 x^2$
• Minimize the total square error

  $S_r(a_0, a_1, a_2) = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)^2$

  $\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right) = 0$

  $\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} x_i \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right) = 0$

  $\frac{\partial S_r}{\partial a_2} = -2 \sum_{i=1}^{n} x_i^2 \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right) = 0$
Quadratic Least Squares

• The normal equations in matrix form:

  $\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$

• Use Cholesky decomposition to solve this symmetric system,
• or use MATLAB's backslash operator, z = A\r (see the sketch below).
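A direct MATLAB translation of this system (a minimal sketch; x and y are the data vectors being fitted, with the same orientation):

n = length(x);
A = [ n          sum(x)     sum(x.^2) ;
      sum(x)     sum(x.^2)  sum(x.^3) ;
      sum(x.^2)  sum(x.^3)  sum(x.^4) ];
r = [ sum(y) ; sum(x.*y) ; sum(x.^2 .* y) ];
z = A \ r;                   % z = [a0; a1; a2]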
Cubic Least Squares

$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$

$S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - a_3 x_i^3\right)^2$

The normal equations:

$\begin{bmatrix} n & \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \sum x_i^5 \\ \sum x_i^3 & \sum x_i^4 & \sum x_i^5 & \sum x_i^6 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \\ \sum x_i^3 y_i \end{bmatrix}$
Linear least squares:  y = −20.5717 + 3.6005x
Quadratic:             y = 0.2668 + 0.7200x − 2.7231x²
Cubic:                 y = 0.6513 + 1.5946x − 2.8078x² − 0.0608x³
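The slide does not show the data behind these three fits, but the same comparison can be generated for any data vectors x and y with polyfit and polyval (a minimal sketch):

p1 = polyfit(x, y, 1);                 % straight line
p2 = polyfit(x, y, 2);                 % parabola
p3 = polyfit(x, y, 3);                 % cubic
xx = linspace(min(x), max(x));         % dense grid for smooth curves
plot(x, y, 'o', xx, polyval(p1, xx), '-', ...
     xx, polyval(p2, xx), '--', xx, polyval(p3, xx), ':')
legend('data', 'linear', 'quadratic', 'cubic')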
