
Name, Surname, Student No:

Question 4
a) How can we reduce the truncation error in Taylor series approximations?

b) In the lecture notes, Taylor series approximation is applied in different sections. Where and why did we
apply it?

c) We would like to approximate f(t) by constructing an interpolating function h(t) given as
h(t) = sum_{j=0}^{n} c_j phi_j(t), where the c_j's are coefficients and the phi_j's are basis functions. To construct a unique
interpolating function h(t), what should we do? Under what condition are there infinitely many ways to
construct h(t)? Under what condition is there in general no way to construct h(t)? Please explain.
d) Which is more accurate in terms of truncation and round-off errors? Monomial basis or Lagrange
interpolation? Please explain.
e) In Matlab, you can use the function inv or the operator \ to solve a linear system of equations. Which one
is better, and why?

Solution:
a) In general, the truncation error is decreased by including additional terms in the expansion, but we should
also limit the step size. We can also decrease the truncation error if we reduce the step size sufficiently (note
that when the step size h is reduced, the remainder term R_n = f^(n+1)(xi) h^(n+1) / (n+1)! will change).
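Both remedies can be illustrated with a short sketch (in Python rather than the Matlab used elsewhere in these notes, so it can be run standalone; the choice of exp and of the step sizes is arbitrary):

```python
import math

# Truncated Taylor series of exp(h) about 0: keeping more terms
# and/or shrinking the step size h both reduce the truncation error.
def taylor_exp(h, n_terms):
    return sum(h**k / math.factorial(k) for k in range(n_terms))

h = 0.5
err_2_terms = abs(math.exp(h) - taylor_exp(h, 2))        # truncate after the h term
err_4_terms = abs(math.exp(h) - taylor_exp(h, 4))        # keep more terms
err_small_h = abs(math.exp(h/10) - taylor_exp(h/10, 2))  # shrink the step instead
print(err_2_terms, err_4_terms, err_small_h)
```

Both the longer expansion and the smaller step give a visibly smaller error than the two-term, large-step case.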

b) 1) In Section 1, we used it to linearise a nonlinear function, make first-order error analysis and calculate
condition number.
2) In Section 2, we used it to derive the formulas for Newton-Raphson and Newton's methods.
3) In Section 7, we used it to linearise a nonlinear model so that we can apply the methods used for linear
regression (Gauss-Newton method).
c) There are (n+1) coefficients to be determined. To construct a unique interpolating function h(t), we
need (n+1) distinct data points. There are infinitely many ways to construct h(t) if there are fewer than (n+1)
data points, because the number of unknowns (n+1) is higher than the number of equations. There is in
general no way to construct h(t) if there are more than (n+1) distinct data points, since the number of
unknowns (n+1) is less than the number of equations. The number of equations is equal to the number of
distinct data points.
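The equation-versus-unknown count can be sketched concretely (in Python/NumPy as a stand-in for Matlab; the quadratic basis and the sample data below are made up for illustration):

```python
import numpy as np

# A quadratic h(t) = c0 + c1*t + c2*t^2 has n+1 = 3 unknown coefficients.
t = np.array([1.0, 2.0, 4.0])           # exactly 3 distinct points
y = np.array([2.0, 3.0, 7.0])
V = np.vander(t, 3, increasing=True)    # basis matrix with columns 1, t, t^2
c = np.linalg.solve(V, y)               # square system -> unique h(t)

# With only 2 of the points the system is underdetermined: rank 2 < 3
# unknowns, so infinitely many quadratics pass through them.
V2 = np.vander(t[:2], 3, increasing=True)
print(c, np.linalg.matrix_rank(V2))
```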

d) Both methods have zero truncation error because they are exact formulations. However, since the basis
matrix of the monomial basis is not well-conditioned while the basis matrix of Lagrange interpolation is well-
conditioned, round-off errors for Lagrange interpolation are lower than for monomial-basis interpolation.

e) The operator \ is much better, since forming the inverse requires more computational time and is also
numerically less accurate.
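The same comparison can be sketched outside Matlab; in Python/NumPy, numpy.linalg.solve plays the role of \ (it factorizes instead of inverting) while numpy.linalg.inv forms the inverse explicitly (the random test system is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

x_solve = np.linalg.solve(A, b)    # LU factorization, analogous to A\b
x_inv = np.linalg.inv(A) @ b       # explicit inverse: slower, less accurate

# Residual norms of both solutions; factorization is the recommended route.
r_solve = np.linalg.norm(A @ x_solve - b)
r_inv = np.linalg.norm(A @ x_inv - b)
print(r_solve, r_inv)
```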
Question 3
Consider the following nonlinear system of equations.

x^2 y = cos(2y) − x

3y^2 − 1 = e^(−xy)

a) Write the necessary Matlab statements to solve numerically the given system of equations. Use Matlab's
fsolve function with the initial guess (x, y) = (0, 0).
b) Suppose that you would like to solve the given system of equations using Newton's method. Determine
the exact Jacobian matrix for this system and then calculate the value of this Jacobian matrix at the first
iteration. Use the initial guess given in part (a). Do not find a solution of the equations.
c) You again would like to solve the given system of equations using Newton's method, but you prefer using
a finite difference approximation of the Jacobian matrix. For the first iteration, calculate the approximate
Jacobian matrix by using a step size of h = 10^(−3) and the initial guess given in part (a). Finally, compare the
values of the exact and approximate Jacobian matrices. Do not find a solution of the equations.

Solution:
a) Let x(1) = x and x(2) = y.
function F=EqnSystem(x)
F=[ x(1)^2*x(2)-cos(2*x(2))+x(1);
    3*x(2)^2-1-exp(-x(1)*x(2)) ];
-----------------------------
x0=[0; 0]; [x,Fval]=fsolve(@EqnSystem,x0)

b) f1(x, y) = x^2 y − cos(2y) + x , f2(x, y) = 3y^2 − 1 − e^(−xy)

∂f1/∂x = 2xy + 1 , ∂f1/∂y = x^2 + 2 sin(2y) , ∂f2/∂x = y e^(−xy) , ∂f2/∂y = 6y + x e^(−xy)
All results are checked in Matlab.

At the initial guess x̄0 = [0 0]^T, the exact Jacobian is

J(x̄0) = [ 1   0
          0   0 ]

c) With h = 10^(−3):

∂f1/∂x ≈ (f1(0+h, 0) − f1(0, 0)) / h = ((−1 + h) − (−1)) / h = 1

∂f1/∂y ≈ (f1(0, 0+h) − f1(0, 0)) / h = ((−cos(2h)) − (−1)) / h = (1 − cos(2h)) / h = 0.0019999

∂f2/∂x ≈ (f2(0+h, 0) − f2(0, 0)) / h = ((−1 − 1) − (−1 − 1)) / h = 0

∂f2/∂y ≈ (f2(0, 0+h) − f2(0, 0)) / h = ((3h^2 − 1 − 1) − (−2)) / h = 3h = 0.003

The approximate Jacobian matrix is

[ 1   0.002
  0   0.003 ]

It is close to the exact Jacobian; the entries in the second column differ from the exact values (0) by O(h).
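As a cross-check of both Jacobians (a Python/NumPy sketch of the same computation; in Matlab one would evaluate the same expressions):

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 * y - np.cos(2*y) + x,
                     3*y**2 - 1 - np.exp(-x*y)])

def J_exact(v):
    x, y = v
    return np.array([[2*x*y + 1,        x**2 + 2*np.sin(2*y)],
                     [y*np.exp(-x*y),   6*y + x*np.exp(-x*y)]])

x0 = np.array([0.0, 0.0])
h = 1e-3
# Forward-difference approximation, one column per perturbed variable.
J_fd = np.column_stack([(F(x0 + h*e) - F(x0)) / h for e in np.eye(2)])
print(J_exact(x0))   # [[1, 0], [0, 0]]
print(J_fd)          # approximately [[1, 0.002], [0, 0.003]]
```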
Question 3
We would like to approximate f(t) = 2 cos(2t) on the interval t ∈ [0.5, 2.5] by constructing an
interpolating polynomial such that the upper bound for the absolute interpolation error is no more than 0.01. In
constructing the interpolating polynomial, we would like to use equally spaced data points (i.e. nodes) in the
interval [0.5, 2.5], where the values of the first and the last nodes are 0.5 and 2.5 respectively. You are asked
to find the smallest possible order (i.e. degree) of interpolating polynomial that satisfies the given error
criterion.
→ You will see that it is required to solve an equation numerically to find the answer; therefore explain how
you would solve this equation in a simple way, without facing any risk of divergence, considering the root-
finding methods described in the lecture notes.
→ However, for this question you are advised to use a trial-and-error procedure to find the answer.
Hint: Try polynomial degrees of less than 10.

Solution:
h = (b − a)/n , E_n = M_(n+1) h^(n+1) / (4(n+1)) ≤ 0.01 , M_(n+1) = max_{a ≤ t ≤ b} |f^(n+1)(t)|

f'(t) = −2(2) sin(2t) , f''(t) = −2(2)(2) cos(2t) , f'''(t) = 2(2)(2)(2) sin(2t) ,
f''''(t) = 2(2)(2)(2)(2) cos(2t) , … , so in general |f^(n+1)(t)| = 2^(n+2) |sin(2t)| or 2^(n+2) |cos(2t)|.

The functions sin(2t) and cos(2t) have a maximum absolute value of 1. Then M_(n+1) = 2^(n+2).

With b − a = 2, so h = 2/n:

E_n = M_(n+1) h^(n+1) / (4(n+1)) = 2^(n+2) (2/n)^(n+1) / (4(n+1)) = 2^(2n+1) / ((n+1) n^(n+1))

>> En=@(n) (2.^(2*n+1))./( (n+1).*n.^(n+1) )
>> n=1:10;
>> figure, plot(n,En(n),'-o'), xlabel('n'), ylabel('En')
>> En(n)'
ans =
4.000000000000000
1.333333333333333
0.395061728395062
0.100000000000000
0.021845333333333
0.004180547390424
0.000710518888683
0.000108506944444
0.000015036432991
0.000001906501818

En(5)=0.021845333333333 > 0.01 while En(6)=0.004180547390424 ≤ 0.01, so the smallest possible degree of
interpolating polynomial that satisfies the given error criterion is n = 6.
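The same trial-and-error scan can be reproduced in a few lines (a Python/NumPy sketch mirroring the Matlab En above):

```python
import numpy as np

# Error bound En = 2^(2n+1) / ((n+1) * n^(n+1)), as derived above.
En = lambda n: 2.0**(2*n + 1) / ((n + 1) * n**(n + 1.0))
n = np.arange(1, 11)
vals = En(n)
n_min = int(n[np.argmax(vals <= 0.01)])  # first degree meeting the criterion
print(vals[5], n_min)   # En(6) ~ 0.0041805, n_min = 6
```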

→ We must solve the equation E(n) − 0.01 = 0 using a bracketing method (such as bisection) in order not to face any risk of
divergence.
Question 1
The Stefan-Boltzmann law is given by the equation H = A e σ T^4 where H is the rate of radiation of energy
from a surface, A is the surface area, e is the emissivity of the surface, σ is the Stefan-Boltzmann constant
and T is the average temperature of the surface. Consider an experiment in which a copper sphere of radius
r = 0.15 m is used. (Note that the surface area of a sphere is 4πr^2.) In this experiment, T = 550 K and e is
taken as 0.90. σ is a constant whose value is 5.67 × 10^(−8).
i) Considering the given data, what is the value of H in this experiment?
ii) Estimate the variation in H if we vary the values r, e and T such that r = 0.15 ± 0.01, e = 0.90 ± 0.05
and T = 550 ± 20. Apply a first-order Taylor series expansion.
iii) Using the data given in part (ii), calculate the exact value of the variation in H and compare it with the
result of part (ii).
Solution:
i) >> r=0.15; e=0.90; sg=5.67*10^(-8); T=550;
>> H=4*pi*r^(2)*e*sg*T^(4)
H =
1.320288098536604e+003 (answer)

ii) ∆r = 0.01, ∆e = 0.05 and ∆T = 20. ∆H = ∆r |∂H/∂r| + ∆e |∂H/∂e| + ∆T |∂H/∂T|, where the partial
derivatives are evaluated at the values given in part (i). ±∆H is the variation in H.
∂H/∂r = 8πreσT^4 , ∂H/∂e = 4πr^2σT^4 , ∂H/∂T = 16πr^2eσT^3
>> dr=0.01; dHdr=8*pi*r*e*sg*T^4, de=0.05; dHde=4*pi*r^2*sg*T^4, dT=20;
dHdT=16*pi*r^2*e*sg*T^3,
dHdr =
1.760384131382140e+004
dHde =
1.466986776151783e+003
dHdT =
9.602095262084395
>>
>> dH=dr*abs(dHdr)+de*abs(dHde)+dT*abs(dHdT)
dH =
4.414296571874910e+002 (answer)
The variation in H is ±dH (answer)

iii) We have the base value of H calculated in part (i). Considering the variations, H becomes maximum at the
value H(r,e,T) = H(0.16, 0.95, 570) and H becomes minimum at the value H(r,e,T) = H(0.14, 0.85, 530):
H(0.14, 0.85, 530) = 4π(0.14)^2 (0.85)(5.67 × 10^(−8))(530)^4 ≈ 936.6372
H(0.16, 0.95, 570) = 4π(0.16)^2 (0.95)(5.67 × 10^(−8))(570)^4 ≈ 1829.178
The exact variation in H is: H varies between 936.64 and 1829.18. The first-order estimate of part (ii) gives the
symmetric interval 1320.29 ± 441.43, whereas the exact range is not symmetric about the base value because
H is strongly nonlinear in T.
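The part (ii) and part (iii) numbers can be reproduced together (a Python sketch of the same arithmetic; the Matlab session above is the original route):

```python
import math

sg = 5.67e-8
H = lambda r, e, T: 4 * math.pi * r**2 * e * sg * T**4

r, e, T = 0.15, 0.90, 550.0
dr, de, dT = 0.01, 0.05, 20.0

# First-order (linearized) estimate of the variation in H:
dH = (dr * abs(8*math.pi*r*e*sg*T**4)
      + de * abs(4*math.pi*r**2*sg*T**4)
      + dT * abs(16*math.pi*r**2*e*sg*T**3))

# Exact range from the extreme parameter values:
H_base = H(r, e, T)
H_min = H(r - dr, e - de, T - dT)
H_max = H(r + dr, e + de, T + dT)
print(H_base, dH, H_min, H_max)
```

H_base − dH and H_base + dH give a symmetric interval, while the exact range [H_min, H_max] sits higher because H grows like T^4.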
Question 4
Consider the data points given in the table below, where t is the independent variable and y is the
corresponding dependent variable.

t    1     2     4
y    2.2   2.8   4.5

Fit the function g(t) = a + bt + c/t to this data, where a, b and c are the coefficients to be determined.
Consider both interpolation and least-squares regression methods in this problem. In your solution, use an
elimination procedure; do not use a matrix inverse. Comment on the results.
Note: In displaying your results, use 5 significant digits only and apply rounding. To limit round-off errors,
you are also advised to keep at least 5 significant digits for the numbers in your arithmetic calculations.

Solution:
This is a determined system, i.e. the number of equations is equal to the number of unknowns: we have 3
equations (originating from n = 3 data points) and 3 unknowns (m + 1 = 3 coefficients).
Apply interpolation first, as this is a typical interpolation problem.

>> t=[1 2 4]'; y=[2.2 2.8 4.5]';


>> B=[ones(size(t)) t 1./t] %B is the basis matrix
B =
1.000000000000000 1.000000000000000 1.000000000000000
1.000000000000000 2.000000000000000 0.500000000000000
1.000000000000000 4.000000000000000 0.250000000000000
>> cfs=B\y %cfs are the coefficients where a=cfs(1), b=cfs(2), c=cfs(3)
cfs =
0.599999999999999
0.933333333333334
0.666666666666668
>> tt=0.5:0.01:4.5; plot(t,y,'o', tt,cfs(1)+cfs(2)*tt+cfs(3)./tt)

The application of least-squares regression will yield the same results, with zero residuals (errors) at the
data points. This is proven in the course notes, but let's apply the method anyway.
>> Z=B; cf=(Z'*Z)\(Z'*y) %Z is the design matrix
cf =
0.599999999999983
0.933333333333337
0.666666666666681
See that we obtained the same values of the coefficients.
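The fit can be double-checked in a few lines (a Python/NumPy sketch; np.linalg.solve stands in for B\y and np.linalg.lstsq for the least-squares step):

```python
import numpy as np

t = np.array([1.0, 2.0, 4.0])
y = np.array([2.2, 2.8, 4.5])
B = np.column_stack([np.ones_like(t), t, 1/t])  # basis matrix for a + b*t + c/t

cfs = np.linalg.solve(B, y)                     # interpolation: square system
cf_ls = np.linalg.lstsq(B, y, rcond=None)[0]    # least squares: same answer
print(cfs, cf_ls)   # both ~ [0.6, 0.93333, 0.66667]
```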
