Experimentation
Independent (predictor) variable X
Dependent (response) variable Y
Data are available only at discrete points or times
Estimates are required at points between the discrete values (as it is impractical or expensive to actually measure them)
Function substitution
An implicit (complicated) function or program is known
Results at all values are possible but time-consuming
Hypothesis testing
Alternative mathematical models are given
Which is the best to use for a given situation?
i      1      2      3      4      5
x   2.10   6.22   7.17   10.5   13.7
y   2.90   3.83   5.98   5.71   7.74
[Figures: five plots of y versus x for the sample data, x from 0 to 16.]
[Figure: temperature (deg F) versus time (s): interpolation through precise data contrasted with regression through scattered data.]
Model selection
Defining a merit function for closeness-of-fit
Computing the values of the parameters of the model
Interpretation of results and assessing goodness-of-fit
[Figure: data points (y1 … y5) with the regression line y = a0 + a1·x; the residual at each point is e = y − (a0 + a1·x).]
Visual inspection
Random distribution of the residuals about the regression line
Coefficient of determination r²
Standard error of the parameters
Confidence interval
Prediction interval
The residual at each data point is
e_i = y_i − a0 − a1·x_i
and the merit function is the sum of the squares of the residuals:
S_r = ∑_{i=1}^{n} e_i² = ∑_{i=1}^{n} (y_i − a0 − a1·x_i)²
To minimize S_r, first set the derivative with respect to a0 to zero:
∂S_r/∂a0 = ∑_{i=1}^{n} 2·(y_i − a0 − a1·x_i)·(−1) = 0
This gives the first normal equation:
n·a0 + (∑_{i=1}^{n} x_i)·a1 = ∑_{i=1}^{n} y_i,   i.e.   a0 + x̄·a1 = ȳ
21/4/2006 Anuj Jain, Astt Prof, AMD, MNNIT, Allahabad 19
Linear Regression
Second, a1:
∂S_r/∂a1 = ∂/∂a1 ∑_{i=1}^{n} (y_i − a0 − a1·x_i)²
         = ∑_{i=1}^{n} 2·(y_i − a0 − a1·x_i)·(−x_i)
         = 0
This gives the second normal equation:
(∑_{i=1}^{n} x_i)·a0 + (∑_{i=1}^{n} x_i²)·a1 = ∑_{i=1}^{n} x_i·y_i
Linear Regression
Equations Solution
Solving the two normal equations
n·a0 + (∑x_i)·a1 = ∑y_i
(∑x_i)·a0 + (∑x_i²)·a1 = ∑x_i·y_i
for the coefficients gives
a1 = [∑x_i·y_i − (1/n)·(∑x_i)·(∑y_i)] / [∑x_i² − (1/n)·(∑x_i)²]
a0 = [(1/n)·(∑y_i)·(∑x_i²) − (1/n)·(∑x_i)·(∑x_i·y_i)] / [∑x_i² − (1/n)·(∑x_i)²]
(all sums over i = 1 … n). For the example data:
y = 2.038 + 0.4023x
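The closed-form solution above can be checked numerically for the example data (a minimal sketch; variable names are my own):

```python
# Least-squares straight-line fit via the normal equations (table data).
x = [2.10, 6.22, 7.17, 10.5, 13.7]
y = [2.90, 3.83, 5.98, 5.71, 7.74]
n = len(x)

Sx  = sum(x)
Sy  = sum(y)
Sxx = sum(xi * xi for xi in x)
Sxy = sum(xi * yi for xi, yi in zip(x, y))

# a1 from the slope formula, a0 from a0 = y_bar - a1*x_bar.
a1 = (Sxy - Sx * Sy / n) / (Sxx - Sx ** 2 / n)
a0 = Sy / n - a1 * Sx / n

print(round(a0, 4), round(a1, 4))  # → 2.0395 0.4022
```

This reproduces the slide's fitted line y = 2.038 + 0.4023x to within rounding of the printed coefficients.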
Another Approach
[Z]·{A} = {Y}
[Z]ᵀ·[Z]·{A} = [Z]ᵀ·{Y}
{A} = ([Z]ᵀ·[Z])⁻¹·[Z]ᵀ·{Y}
which gives
a0 = 2.0395, a1 = 0.4022
y = 2.038 + 0.4023x
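The matrix form can be evaluated directly. Here is a small NumPy sketch (the design matrix Z has a column of ones and the x column; variable names are my own):

```python
import numpy as np

# Design matrix Z for the straight-line model y = a0 + a1*x.
x = np.array([2.10, 6.22, 7.17, 10.5, 13.7])
y = np.array([2.90, 3.83, 5.98, 5.71, 7.74])
Z = np.column_stack([np.ones_like(x), x])

# {A} = ([Z]^T [Z])^-1 [Z]^T {Y}, solved without forming the inverse.
A = np.linalg.solve(Z.T @ Z, Z.T @ y)
a0, a1 = A
print(round(a0, 4), round(a1, 4))  # → 2.0395 0.4022
```

In practice np.linalg.lstsq(Z, y, rcond=None) is preferred over forming ZᵀZ, since it is numerically better conditioned.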
Goodness-of-fit - I
Visual inspection: Linear trend matching
[Figure: data points and the fitted line y = 2.038 + 0.4023x on the y–x plane.]
y = 2.038 + 0.4023·x

 y_i     ŷ_i    relative error
 2.90    2.88
 3.83    4.54   +18.5%
 5.98    4.92   −17.5%
 5.71    6.26
 7.74    7.55

[Figure: predicted ŷ versus measured y; the trendline y = 0.8644x + 0.7097 has R² = 0.8644.]
This plot can be used to compare the mathematical models.
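A short sketch regenerating the predicted column from the fitted line (coefficients as printed on the slide; a positive relative error means the line over-predicts):

```python
# Predicted values from the fitted line y = 2.038 + 0.4023*x.
x = [2.10, 6.22, 7.17, 10.5, 13.7]
y = [2.90, 3.83, 5.98, 5.71, 7.74]

y_hat = [round(2.038 + 0.4023 * xi, 2) for xi in x]
for yi, yh in zip(y, y_hat):
    rel = 100.0 * (yh - yi) / yi   # relative error of the prediction
    print(f"{yi:5.2f}  {yh:5.2f}  {rel:+6.1f}%")
```

The predicted column matches the table (2.88, 4.54, 4.92, 6.26, 7.55).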
[Figure: residuals e versus y, scattered randomly about zero (between about −1.0 and +1.5).]
Sum of squares due to regression:
SSR = S_t − S_r
[Figures: three plots of y versus x illustrating the spread of the data about its mean (S_t), the spread about the regression line (S_r), and the part explained by the regression (SSR).]
Coefficient of Determination
r² = (S_t − S_r) / S_t = 0.864
Correlation Coefficient
r = 0.93
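These quantities can be computed directly for the example data (a minimal sketch using the fitted coefficients):

```python
# Coefficient of determination r^2 = (St - Sr)/St for the fitted line.
x = [2.10, 6.22, 7.17, 10.5, 13.7]
y = [2.90, 3.83, 5.98, 5.71, 7.74]
a0, a1 = 2.0395, 0.4022   # coefficients from the fit above

y_bar = sum(y) / len(y)
St = sum((yi - y_bar) ** 2 for yi in y)                      # spread about the mean
Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))   # spread about the line
r2 = (St - Sr) / St
print(round(r2, 3))  # → 0.864
```

St and Sr here also match the "Total" and "Residual" sums of squares in the ANOVA output shown later (14.48 and 1.96).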
Standard errors of the parameters (from the example):
s(a0) = 0.81492, s(a1) = 0.09198
Linear:    y_i = a0 + a1·x_i
Quadratic: y_i = a0 + a1·x_i + a2·x_i²
Cubic:     y_i = a0 + a1·x_i + a2·x_i² + a3·x_i³
As before, in matrix form:
[Z]·{A} = {Y}
[Z]ᵀ·[Z]·{A} = [Z]ᵀ·{Y}
{A} = ([Z]ᵀ·[Z])⁻¹·[Z]ᵀ·{Y}
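The same [Z]{A} = {Y} machinery handles the quadratic and cubic models; only the columns of Z change. A NumPy sketch (function name and structure are my own):

```python
import numpy as np

x = np.array([2.10, 6.22, 7.17, 10.5, 13.7])
y = np.array([2.90, 3.83, 5.98, 5.71, 7.74])

def poly_fit(x, y, degree):
    # Columns of Z are 1, x, x^2, ..., x^degree.
    Z = np.column_stack([x ** k for k in range(degree + 1)])
    A, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ A
    return A, float(r @ r)        # coefficients and Sr

for d, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    A, Sr = poly_fit(x, y, d)
    print(name, np.round(A, 4), round(Sr, 4))
```

Note that each added term can only decrease Sr, so Sr (or r²) alone always favors the highest-order model; model choice needs the goodness-of-fit checks above as well.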
[Figure: a polynomial fit, f(x) versus x for x from 0 to 25.]
Exponential function
If the relationship is an exponential function
y = a·e^(b·x)
take the logarithm of both sides to make it linear:
ln(y) = ln(a) + b·x
This is now a linear relation between ln(y) and x; linear regression gives the slope b and the intercept ln(a).
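A quick sketch of the round trip (the data here are synthetic, generated from known a and b purely for illustration):

```python
import math

# Synthetic data from y = a*e^(b*x) with illustrative values a = 2.0, b = 0.3.
a_true, b_true = 2.0, 0.3
xs = [0.5, 1.0, 2.0, 3.0, 4.0]
ys = [a_true * math.exp(b_true * xi) for xi in xs]

# Linear regression of ln(y) on x: slope = b, intercept = ln(a).
Y = [math.log(yi) for yi in ys]
n = len(xs)
Sx, SY = sum(xs), sum(Y)
Sxx = sum(xi * xi for xi in xs)
SxY = sum(xi * Yi for xi, Yi in zip(xs, Y))

b = (SxY - Sx * SY / n) / (Sxx - Sx ** 2 / n)
a = math.exp(SY / n - b * Sx / n)
print(a, b)  # recovers 2.0 and 0.3 (exact for noise-free data)
```

With noisy data the transform still works, but it weights the errors in ln(y), not y, so points with small y get relatively more influence.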
Power Function
 x      y     X = ln(x)   Y = ln(y)
1.2    2.1     0.18        0.74
2.8   11.5     1.03        2.44
4.3   28.1     1.46        3.34
5.4   41.9     1.69        3.74
6.8   72.3     1.92        4.28
7.9   91.4     2.07        4.52
[Figures: x versus y on arithmetic axes (a curved trend) and X = ln(x) versus Y = ln(y) (a straight-line trend).]
Power Function
Using the X’s and Y’s, not the original x’s and y’s
The normal equations are
[ n      ∑X_i  ] [ a ]   [ ∑Y_i     ]
[ ∑X_i   ∑X_i² ] [ B ] = [ ∑X_i·Y_i ]
with, for the six data points,
∑X_i    = ∑ln(x_i)          = 8.34
∑X_i²   = ∑[ln(x_i)]²       = 14.0
∑Y_i    = ∑ln(y_i)          = 19.1
∑X_i·Y_i = ∑ln(x_i)·ln(y_i) = 31.4
so that
[ 6      8.34 ] [ a ]   [ 19.1 ]
[ 8.34   14.0 ] [ B ] = [ 31.4 ]
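A stdlib sketch reproducing the sums from the table and solving the 2×2 system (the slide does not print the resulting coefficients, so the final values here are computed, not taken from the source):

```python
import math

# Power-law fit y = a*x^B via regression on (X, Y) = (ln x, ln y).
x = [1.2, 2.8, 4.3, 5.4, 6.8, 7.9]
y = [2.1, 11.5, 28.1, 41.9, 72.3, 91.4]
X = [math.log(xi) for xi in x]
Y = [math.log(yi) for yi in y]
n = len(X)

SX  = sum(X)                                 # ≈ 8.34
SXX = sum(Xi * Xi for Xi in X)               # ≈ 14.0
SY  = sum(Y)                                 # ≈ 19.1
SXY = sum(Xi * Yi for Xi, Yi in zip(X, Y))   # ≈ 31.4

# Solve [n SX; SX SXX] [lnA; B] = [SY; SXY] by Cramer's rule.
det = n * SXX - SX * SX
B   = (n * SXY - SX * SY) / det
lnA = (SY - B * SX) / n
print(round(math.exp(lnA), 3), round(B, 3))  # coefficient a and exponent B
```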
[Figure: Y = ln(q) versus X = ln(c) with the fitted straight line.]
Power Function
Back on arithmetic axes: K = 74.702 and n = 0.2289, i.e.
q = K·cⁿ
[Figure: q versus c with the fitted curve q = K·cⁿ.]
Nonlinear Relation
For a nonlinear model (e.g. a Gaussian function) the parameters must be found iteratively. Linearizing about the current estimate gives, in matrix form,
Aᵀ·dβ = (AᵀA)·dλ
dλ = (AᵀA)⁻¹·Aᵀ·dβ
where dβ is the vector of residuals and dλ is the parameter update.
Nonlinear Relation
Gaussian function, parameters (A, x0, σ):
Initial guess:     (0.8, 15, 4)
Converged values:  (1.03, 20.14, 4.86)
Actual values:     (1, 20, 5)
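The iteration can be sketched with NumPy for the Gaussian example. Here the data are synthetic and noise-free, generated from the "actual" parameters, and the matrix A of the slides is the Jacobian of the model; a simple step-halving safeguard is added (my choice, not from the slides) so each iteration never increases the residual sum:

```python
import numpy as np

def gaussian(x, A, x0, s):
    return A * np.exp(-(x - x0) ** 2 / (2 * s ** 2))

# Noise-free synthetic data from the actual parameters (1, 20, 5).
x = np.arange(0.0, 41.0)
y = gaussian(x, 1.0, 20.0, 5.0)

p = np.array([0.8, 15.0, 4.0])             # initial guess (A, x0, sigma)
for _ in range(200):
    A_, x0, s = p
    f = gaussian(x, A_, x0, s)
    # Jacobian: one column per parameter.
    J = np.column_stack([
        f / A_,                             # df/dA
        f * (x - x0) / s ** 2,              # df/dx0
        f * (x - x0) ** 2 / s ** 3,         # df/dsigma
    ])
    dbeta = y - f                           # residual vector dβ
    dlam, *_ = np.linalg.lstsq(J, dbeta, rcond=None)   # Gauss-Newton step dλ
    # Step-halving so the sum of squared residuals never increases.
    step, Sr = 1.0, float(dbeta @ dbeta)
    while step > 1e-10:
        trial = p + step * dlam
        r = y - gaussian(x, *trial)
        if float(r @ r) < Sr:
            p = trial
            break
        step /= 2

print(np.round(p, 2))  # approaches the actual values (1, 20, 5)
```

np.linalg.lstsq(J, dbeta) solves the same normal equations dλ = (JᵀJ)⁻¹ Jᵀ dβ as on the slide, but in a numerically safer way.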
Regression Statistics
Multiple R          0.9297
R Square            0.8644
Adjusted R Square   0.8191
Standard Error      0.8092
Observations        5

ANOVA
             df     SS        MS        F         Significance F
Regression    1     12.5176   12.5176   19.1175   0.0221
Residual      3     1.9643    0.6548
Total         4     14.4819

              Coefficients  Standard Error  t Stat   P-value  Lower 95%  Upper 95%
Intercept     2.0395        0.8149          2.5027   0.0875   −0.5540    4.6329
X Variable 1  0.4022        0.0920          4.3724   0.0221   0.1095     0.6949
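The standard errors in the Excel output follow from S_r: with s_{y/x} = √(S_r/(n−2)), SE(a1) = s_{y/x}/√(∑(x_i − x̄)²) and SE(a0) = s_{y/x}·√(∑x_i²/(n·∑(x_i − x̄)²)). These are standard textbook formulas, not stated on the slide; a stdlib sketch:

```python
import math

x = [2.10, 6.22, 7.17, 10.5, 13.7]
y = [2.90, 3.83, 5.98, 5.71, 7.74]
n = len(x)
a0, a1 = 2.039476, 0.402182    # fitted coefficients from the output above

Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
s_yx = math.sqrt(Sr / (n - 2))               # "Standard Error" in the output

x_bar = sum(x) / n
Sxx_c = sum((xi - x_bar) ** 2 for xi in x)   # centered sum of squares
se_a1 = s_yx / math.sqrt(Sxx_c)              # std error of the slope
se_a0 = s_yx * math.sqrt(sum(xi * xi for xi in x) / (n * Sxx_c))
t_a1 = a1 / se_a1                            # t statistic of the slope

print(round(s_yx, 4), round(se_a0, 4), round(se_a1, 4), round(t_a1, 2))
```

This reproduces the reported values 0.8092, 0.8149, 0.0920, and t = 4.37.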
[Figures: Excel's residual plot (residuals versus X Variable 1), line fit plot (Y and Predicted Y versus X Variable 1), and normal probability plot (Y versus sample percentile).]
1. M_h / (A_CA·D_C·ρ_s) = 0.0129·(m_s/m_a)^0.70          0.0003105
2. M_h / (A_CA·D_C·ρ_s) = 0.0014·(v²/(g·D_C))^0.24·C_i   0.0011567
3. M_h / (A_CA·D_C·ρ_s) = 0.04·(d_p/D_C)^−0.36           0.0009327
Example 2
1. M_h / (A_CA·D_C·ρ_s) = 0.00096·(m_s/m_a)^1.00·(v²/(g·D_C))^0.54·C_i                    0.00001030
2. M_h / (A_CA·D_C·ρ_s) = 0.013·(m_s/m_a)^0.70·(d_p/D_C)^0.004                            0.00031046
3. M_h / (A_CA·D_C·ρ_s) = 0.00069·(m_s/m_a)^1.02·(v²/(g·D_C))^0.54·(d_p/D_C)^−0.055·C_i   0.00000892
R² = 0.987