Sei sulla pagina 1di 77

Scienti c Computing II

Complements and exercises


March 2008

Scienti c Computing II
  o Inst for informationsteknologi, Avd f r teknisk databehandling

Preface
This collection of exercises is written for Scientic Computing, second course (Berkningsvetenskap II/NV2), given at various Master of Science programs at a Uppsala University. The problems are divided into three categories: Type A These exercises test basic knowledge, including concepts, denitions and proofs of simple and important relations. They also train the ability to use numerical algorithms and standard methods for analysing problems. For example:

  

What is the rank of a matrix? Assume that  is an eigenvalue and x the corresponding eigenvector of a nonsingular matrix A. Show that 1 is an eigenvalue of A1 and that the corresponding eigenvector is x. Use z0 = 1 0 as initial guess and perform two steps with the power method on the matrix
 T

A=

1 2

2 : 1

Use the energy method to show that


n vj +1

vjn

k
n is stable if v0 = 0 and k < h.

n vj

vjn1
h

=0

Type B These exercises train the ability to choose between dierent numerical algorithms and dierent ways of analysing problems. For example:

Propose a method for computing the eigenvalue  of the matrix


H

1 d 0: 1 0: 1

0:1 3 0: 1

0: 1 0:1e 6

that satises j 3j

0:2.
n vj +1

Derive the stability condition for

vjn

k
n if v0 = 0.

n vj

vjn1
h

=0

Type C These exercises train the ability to present and use numerical algorithms in new settings. More elaborate proofs can also be found among these problems. For example:

  

Use the maximum principle to show that a diagonally dominant matrix is invertible. Show that the FEM-method nds the best approximate solution in the sense of least squares. Show that the determinant of a matrix equals the product of its eigenvalues.

Document History
Apr 2005 The rst version goes into print. Aug 2005 Some minor errors and misprints were corrected. Sections on norms of functions, Fourier series and DFT were added. Mar 2006 Added a few new problems. Aug 2006 Some minor errors were corrected. Mar 2008 Some minor errors were corrected. Henrik Brandn e

ii

Contents
1 Matrix Algebra 2 Eigenvalue Problems 3 Dierential Equations 4 Least Squares Problems Answers 1 7 13 33 39

iii

iv

Chapter 1

Matrix Algebra
Summary
1.1 AT is the transpose of A 1.2 AH = AT is the adjoint of A 1.3 Vectors are assumed to be column vectors 1.4 xH y = x1 y1 + : : : + xn yn is the ordinary scalar product between vectors x and y 1.5 I is the identity matrix 1.6 A1 is the inverse of A 1.7 A real matrix A is

     

symmetric if AT = A

skew-symmetric if AT = A orthogonal if AT A = I

1.8 A complex matrix A is Hermitian if AH = A unitary if AH A = I skew-Hermitian if AH = A

1.9 A matrix A is normal if AH A = AAH 1.10 A Hermitian matrix is positive denite if xH Ax > 0 for all x T= 0 and positive semidenite if xH Ax ! 0 for all x 1.11 A matrix is

  

diagonal if nonzero elements appear only on the main diagonal upper triangular if all elements below the main diagonal is zero lower triangular if all elements above the main diagonal is zero 1

1.12 Matrix multiplication can be performed block-wise if the blocks match in size. If     A11 A12 B11 B12 A= and B =
A21 A22 B21 B22

then

 A11 B11 + A12 B21 AB = A21 B11 + A22 B21

A11 B12 + A12 B22 A21 B12 + A22 B22

1.13 The rank of a matrix is the number of linearly independent rows or columns 1.14 tr(A) = a11 + : : : + ann is the trace of the matrix A 1.15 An eigenvalue  of a matrix A is a solution to the characteristic equation det(A I ) = 0 1.16 If  is an eigenvalue of a matrix A, the corresponding eigenvector x is a nontrivial solution to Ax = x 1.17 (A) = max ji j is the spectral radius of A 1.18 A norm

jj jj is a scalar function that satises  jjxjj > 0 for all x T= 0  jjc xjj = jcj jjxjj if c is a complex number  jjx + yjj jjxjj + jjyjj jjxjjp =
2 n
i=1

1.19 The p-norm of a vector x is given by

jxi j

31=p
p

p = 1; 2; : : :

where in the limit as p 3 I,

jjxjjI = max jxi j i


1.20 Each vector norm induces a matrix norm according to
Ax jjAjj = max jjjjxjjjj = jjmax jjAxjj xT=0 xjj=1

1.21 The following are equivalent

1.22

 Ax = b has a unique solution  det(A) T= 0  A1 exists  A has full rank  Ax = 0 has the unique solution x = 0   = 0 is not an eigenvalue of A span(A) = fy j y = Ax; x P Cn g
2

Exercises of Type A
1.1 Let a = 2 a) aT b b) abT c) (bT a)b d) baT b 1.2 Let a and b be two real vectors of the same length. Show that a) (aT b)T = aT b b) aT b = bT a c) (bT a)b = baT b 1.3 Let a and b be two complex vectors of the same length. Show that
b a) (aH b)H = aT

and b = 3

and compute

b) aH b = bT a c) (bH a)b = baT b 1.4 Let


A=


1 2

and compute a) AT b) AH 1.5 Is 1 A= p 2 a) symmetric? b) Hermitian? c) orthogonal? d) unitary? e) normal? 1.6 Is


A=
 

1 1

1
1

2+i 2i 3 1

a) Hermitian? b) unitary? c) normal? 1.7 Show that a real matrix is orthogonal if and only if its columns are orthonormal. 3

1.8 Show that a) every Hermitian matrix is normal b) every unitary matrix is normal c) every real and symmetric matrix is normal d) every real and orthogonal matrix is normal e) there are normal matrices that are neither symmetric, orthogonal, Hermitian or unitary 1.9 Show that (AB )1 = B 1 A1 if all inverses exist. 1.10 Let
A=
 A11 A21

A12 A22

where


A11 =

0 1

1 ; 0

 

A12 =

1 ; 0

A21 = 0

1 ;

and A22 = 1 :

Compute AT A and verify that


AT A =
 T A11 A11 + AT A21 21 AT A11 + AT A21 12 22 H

AT A12 + AT A22 11 21 AT A12 + AT A22 12 22


I

1.11 Let

2 A = d3 4 and compute a) rank(A) b) tr(A) c) det(A) 1.12 Let


A=


6 9 12

3 3e 3

1 1

1 ; 1

 

b=

1 ; 1

 

and c =

1 : 2

b) Does b P span(A)? d) Does c P span(A)? c) How many solutions does Ax = b have? e) How many solutions does Ax = c have? 1.13 Show that Ax = b does not have any solutions if b TP span(A). 1.14 Show that Ax = b is uniquely solvable if A has full rank. 1.15 Show that Ax = b has innitely many solutions if the rank of A is less than full and b P span(A). 4

a) Is A invertible?

1.16 Compute eigenvalues and eigenvectors of the matrices a) b)


0 2

1 1

c)

d 3

1
2

1 2 1 1e 2 1 1 1 3 0 1 1g g 1e 4
I

d)

1 f0 f d0 0

1 2 0 0

1.17 Let i and xi , i = 1; : : : ; n, denote the eigenvalues and corresponding eigenvectors of a matrix A. What are the eigenvalues and eigenvectors of b) A1 a) Ak , where k is a positive integer c) A cI , where c is a complex number

d) (A cI )1 , where c is a complex number 1.18 Compute the spectral radius of




a) b)

3 3

1
0

 

3 + 2i 1 + 4i

1 + 4i 3 + 2i

1.19 Show that the eigenvalues of a diagonal matrix are the diagonal elements. 1.20 Show that the eigenvalues of an upper triangular matrix are the diagonal elements. 1.21 Let
A=


2
1

3 4

and compute a) the eigenvalues of A b) tr(A) c) det(A) Verify that the trace is the sum and that the determinant is the product of the eigenvalues.

1.22 Let a = 1 a) b) c) d)

1.23 Show that jjAxjj2 = jjxjj2 if A is unitary. 1.24 Let


A=


jjajj1 jjajj2 jjajj3 jjajjI

1 2

2 T

and compute

1 1

2
4

and compute a) b) c)

jjAjj1 jjAjj2 jjAjjI ! (A) for all matrices and all matrix norms induced by

d) (A) 1.25 Show that jjAjj vector norms.

Exercises of Type C
1.26 Show that eigenvalues of a a) Hermitian matrix are real b) skew-Hermitian matrix are purely imaginary c) unitary matrix have modulus one 1.27 Show that the determinant of a matrix equals the product of its eigenvalues. 1.28 Show that the trace of a matrix equals the sum of its eigenvalues. 1.29 Show that jjAjj2 =
p (AH A).

1.30 Show that jjAjj2 = (A) if A is Hermitian.

Chapter 2

Eigenvalue Problems
Summary
2.1 The matrices A and B are similar if A = CBC 1 for some nonsingular matrix C . 2.2 The following are equivalent

  

The matrix A can be diagonalised The matrix A is similar to a diagonal matrix The matrix A has a complete set of linearly independent eigenvectors

2.3 Eigenvectors corresponding to distinct eigenvalues are linearly independent. 2.4 The following are equivalent

 

The matrix A is normal The matrix A can be diagonalised by a unitary matrix


H f f f d I g g g e

2.5 Every matrix is similar to a block diagonal matrix


J1

0
J2

.. 0

.
Jp

where each diagonal block is a Jordan box of the form


H f f Jk = f f d

k

1 .. .

.. ..

. .

g g g: g 1e k

Every Jordan box corresponds to one eigenvalue and one eigenvector. 2.6 Every matrix is unitarily similar to an upper triangular matrix. 7

2.7 The eigenvalues of a matrix A are in the union of the Gershgorin discs

j aii j

n
j =1 j 6=i

jaij j;

i = 1; : : : ; n:

If p discs are isolated, their union contains precisely p eigenvalues. The same is true for the discs
n
i=1 i6=j

j aii j
2.8 The power method Choose z0 For k = 0; 1; : : :
yk = zk = zk zk+1 = Ayk

jaij j;

j = 1; : : : ; n:

jj jj2

converges towards the eigenvector corresponding to the eigenvalue with H largest modulus. The eigenvalue can be estimated by yk zk+1 . The error k where  and  are the eigenvalues with largest goes to zero like (2 =1 ) 1 2 and second largest modulus respectively. 2.9 Inverse iteration, that is, the power method on the inverse of the matrix, converges towards the eigenvector corresponding to the eigenvalue with smallest modulus. In practice, the matrix is not inverted, instead a system of equations is solved in each iteration. 2.10 A Householder matrix has the form P = I

2wwT

with wT w = 1.

2.11 For every real x and y such that jjxjj2 = jjy jj2 there is a Householder matrix P such that P x = y , namely P = I 2wwT with w = (y x)=jjy xjj2 . 2.12 Every matrix A has a QR-factorisation A = QR, where Q is unitary and R is upper triangular. 2.13 The QR-method is given by
A0 = A

For k = 0; 1; : : : Factorise Ak = Qk Rk Ak+1 = Rk Qk If the eigenvalues of A are distinct, Ak converges to a block upper triangular matrix where the eigenvalues of the diagonal blocks are the same as the eigenvalues of A. 8

Exercises of Type A
2.1 Show that
A=


1 0

1 2

and B =

2
2

0 1

are similar. 2.2 Show that similar matrices have the same eigenvalues but dierent eigenvectors. 2.3 Diagonalise
A=


0 2

1= 8 : 0

2.4 Can a)

2 1

1 1 1 b) p 2 1 1   2 1 c) 0 2
 

1 2

 

d) e)

0 0 0 0

1 1 1 0

 

be diagonalised? 2.5 Use Gershgorin discs to locate the eigenvalues of 0:1 a) d 1 0: 5 0 f1 b) f d 0 0 4 f 0 c) f d1 0 4: 2 f 0:5 d) f d0:4 1:0
H H H H

0 : 2i 2 1 + 0:5i 1 0 1 0 1 3 1= 2 1=2 0:5 8:3 0: 1 0:2

0:1 0:5e
2:5
I

0 0 1 0g g 0 1e 1 1 1 1 3 0
i

1 g g i=2e 0

0:4
0:1 10:9 0:3

1: 0 0:2 g g 0:3 e 18:2

2.6 Let
A=

1 2

2 : 1
T

a) Compute the eigenvalues and eigenvectors of A.

b) Perform three steps with the power method. Use z0 = 1 initial guess. c) Perform three step with inverse iteration. Use z0 = 1 guess. d) Perform three steps with inverse iteration on A T 1 0 as initial guess.

as

as initial

2I .

Use z0 =

e) Use your results in d) to estimate one eigenvalue and the corresponding eigenvector of A. 2.7 Assume that A has distinct eigenvalues and let 1 be the eigenvalue with largest modulus and x1 the corresponding eigenvector. Show that k Ak z0 3 c1 x1 for some constant c1 as k 3 I for most vectors z0 . 1 When is the statement false?
H 2.8 Let yk and zk+1 be given by the power method. Show that yk zk+1 % 1 if zk % x1 . Here, 1 is the eigenvalue of A with largest modulus and x1 the corresponding eigenvector.

2.9 Explain why power iteration fails if


 

a) A = b) A = c) A =

0 1 1 0 1 0

1 0 0 1 0 2


 

and z0 = and z0 = and z0 = 1 0

   

1 1

1 1

2.10 Show that a Householder matrix is a) symmetric b) orthogonal 2.11 Use Householder transformations to compute QR-factorisations of a)
A=


3 4 0 0 1

1 0

b)

3 A = d0 4 c) 2 A = d1 2 10
H

3 2e 1 2 1e 3
I

4 1 1

2.12 Suppose that x and y are two vectors satisfying xT x = y T y . Let w = (y x)=jjy xjj2 and P = I 2wwT . Show that P x = y . 2.13 Perform two steps with the QR-method on a)
A=


0 1

1 1

b)
A=

4 3

20
35

Use Householder transformations for the QR-factorisations. 2.14 Show that the matrices Ak in the QR-method are similar.

Exercises of Type B
2.15 Propose an iterative method for computing the eigenvector of the matrix
H d 0: 2

0: 1

0: 2 2 0: 2

0: 1 0: 2e 0

that corresponds to the a) smallest b) largest eigenvalue. 2.16 The number  = 0:1 is an approximate eigenvalue of the matrix 0:1 A = d 0: 1 0: 1
H I

0: 1 1 0: 1

0: 1 0:1e : 2

Propose two dierent ways of computing the corresponding eigenvector iteratively. Which one converges the fastest? 2.17 Assume that an approximate eigenvalue of a matrix is known. Propose an algorithm for computing the corresponding eigenvector.

Exercises of Type C
2.18 Show that the n n-matrix
H f f f d

3 f1 f

1 3 .. .

0 1 .. . . 3 1 ..

0 is invertible for all n > 0. 11

g g g g g 1e

2.19 Give an upper bound on the 2-norm of the n n-matrix


H f f f d

2 f1 f

1 2 .. .

0 1 .. . 1 .. . 2 1

g g g g g 1e

2.20 For every matrix A, there is a unitary, nonsingular matrix C and an upper triangular matrix T such that A = CT C 1 . a) Propose an iterative algorithm for computing T and C . b) Show that your proposed algorithm computes an eigenvalue decomposition if A is real and symmetric. 2.21 Prove Gershgorins two theorems: a) The eigenvalues of a matrix are in the union of the Gershgorin discs. b) If p Gershgorin discs are isolated, their union contains precisely p eigenvalues.

12

Chapter 3

Dierential Equations
Summary
3.1 Let f be a continuous function on [a; b]. The p-norm of f is given by
2 b
a

jjf jjp =

jf (x)jp dx

31=p

p = 1; 2; : : :

where in the limit as p 3 I,

jjf jjI = xmax ] jf (x)j P[a;b


3.2 The Fourier series of a function f with period T
f (x) =

I
k=

ck e2ikx=T

where

ck =

1
T

T
0

f (x)e2ikx=T

satises Parsevals identity 1


T
T
0

jf (x)j2 dx =

I
k=

jc k j2 :

3.3 A linear second order scalar partial dierential equation


auxx + 2buxy + cuyy + dux + euy + f u = g

is

  

elliptic if b2 ac < 0

parabolic if b2 ac = 0

hyperbolic if b2 ac > 0
ut = Aux + Bu

3.4 A rst order system is hyperbolic if the matrix A can be diagonalised and has real eigenvalues. 13

3.5 A dierential equation is well-posed if

  

it has at least one solution it has not more than one solution the solution depends continuously on the problems data

For linear problems, requiring continuity is the same as requiring the solution to be bounded in terms of the data of the problem. 3.6 Tools for proving boundedness of the solution include

maximum principles. For example, the solution to Laplace equation


uxx + uyy = 0; u = g;

(x; y ) P ;

(x; y ) P @ ;

satises

(x;y )

max ju(x; y )j =

(x;y ) @

max

jg(x; y)j;

where = @ . the Fourier method. For example, the solution to the heat equation with periodic boundary conditions
ut = uxx ; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = f (x);

0 < x < 1; t > 0;


t > 0; t > 0;

0 < x < 1;

where  > 0, can be expanded as a Fourier series


u(x; t) =

I
k=

ck (t)e2ikx
2 2

where ck (t) satises cH (t) = 4 2 k 2 ck (t) or ck (t) = e4 k implying jck (t)j jck (0)j. Thus, using Parsevals relation,

0 1

k tc

k (0),

ju(x; t)j2 dx =

I
k=

jck (t)j2 jjf jj2 .

I
k=

jck (0)j2 =

ju(x; 0)j2 dx;

or equivalently, jju(; t)jj2

the energy method. For example, the solution to the advection equation
ut = ux ; u(0; t) = 0; u(x; 0) = f (x);

0 < x < 1;
t > 0;

t > 0;

0 < x < 1;

14

with  < 0 satises jju(; t)jj2


d u( ; t) dt d jj jj2 = dt 2

0 1

jju(; 0)jj2 = jjf jj2 since

u2 (x; t)dx =
0

2u(x; t)ut (x; t)dx


1
0

1 0

=
2

2u(x; t)ux (x; t)dx =  u2 (x; t)

= u (1; t) < 0: 3.7 A characteristic curve describes how information spreads. For example,

 

since u(x; t) = f (x + t) is a solution to advection equation ut = ux , x + t = c is a characteristic curve for every constant c.

since u(x; t) = f (x t) + g (x + t) solves the wave equation utt = uxx , x t = c and x + t = d are characteristic curves for every c and d.
vi+1 vi ; h vi vi1 ; D vi = h vi+1 vi1 D0 vi = ; 2h D+ vi =

3.8 The dierence operators

approximate the derivative. To approximate the second derivative use i.e.


D+ D vi = vi+1

2vi + vi1 :
h2

3.9 Assume that the ordinary dierential equation

uHH = f;
u(0) = 0; u(1) = 0;

0 < x < 1;

is well-posed.

Let N be a positive integer, h = 1=N and xi = ih. A nite dierence approximation is given by

D+ D vi = f (xi );
v0 = 0; vN = 0

i = 1; : : : ; N

1;

or, equivalently,
H f 1 f 1 f f f h2 f f d

h2

1
2 .. .

1
. 1 ..

.. 2

g f g gf g gf g f g gf g f g gf g f g: gf . g = f g gf . g f g gf . g f e dvN 1 e df (xN 1 )e 1 0 h2 vN

IH

v0 v1 v2

0 f (x1 ) f (x2 ) . . .

15

To construct a nite element approximation, the dierential equation has to be rewritten in variational form: Find u P V such that (uH ; v H ) = (f; v )

for all v

P V . Here,
1

(f; g ) =
0

f (x)g (x)dx

and V = fv j v cont; v H bounded and piecewise cont; v (0) = v (1) = 0g. To discretise, let Vh & V be the subspace spanned by the piecewise linear functions j (x), j = 1; : : : ; N , satisfying
&

j (xi ) =

1; 0;

i=j i=j

where xi = ih, h = 1=(N + 1). Any function uh


uh (x) =
N j =1

P Vh can be written

cj j (x);

uh

for some constants cj . The nite element method is given by: Find P Vh such that H (uH ; vh ) = (f; vh ) h

for all vh

P Vh , or equivalently,
N j =1

cj (H ; H ) = (f; i ); j i

i = 1; : : : ; N:

This is a large system of equations


H

f hf d

1f

2 f1 f

1
2 .. .

1
. 1 ..

0 . 2 1 ..

I H I c1 b1 g f c2 g f b2 g gf g f g gf . g f . g gf . g = f . g: gf . g f . g 1e dcN 1 e dbN 1 e

IH

cN

bN

By the Trapezoidal rule, bi

% hf (xi ).
0 < x; y < 1; on the boundary

3.10 Poissons equation on the unit square


uxx + uyy = f; u = g;

is an example of an elliptic partial dierential equation. Let xi = ih and yj = jh, where h = 1=N . A nite dierence approximation is given by
vi+1;j

2vi;j + vi1;j + vi;j+1 2vi;j + vi;j1 = f (x ; y );


h2 h2
i j

&

i = 1; : : : ; N; j = 1; : : : ; N;

vi;j = g (xi ; yj );

on the boundary:

16

With some ordering of the unknowns, such as


v = v0;0

:::

vN +1;0

:::

:::

v0;N +1

:::

vN +1;N +1

this is a large system of equations Av = b. 3.11 A time-dependent partial dierential equation can be solved numerically by either explicit or implicit time-stepping. For example, the heat equation ut = uxx can be approximated by
n n vi +1 vi n = D+ D vi ; k n n n in which case explicit time-stepping vi +1 = kvi + D+ D vi is possible, or by n n vi +1 vi n = D+ D vi +1 ; k

in which case a system of equations has to be solved in each time step. The latter is known as implicit time-stepping. 3.12 The truncation error of a nite dierence approximation Qv = f is Qu f , where u is the solution of the dierential equation. 3.13 A nite dierence equation is consistent if the truncation error goes to zero when the discretisation steps go to zero. For example,

D+ vi = f (xi ) is a consistent approximation of uH (x) = f (x) since D+ u(x)

) f (x) = u(x + hh u(x) f (x) = : : : = = uH (x) + huHH (x)=2 + y(h2 ) f (x) = huHH (x)=2 + y(h2 ) = y(h)

where a Taylor expansion and uH = f have been used. The nite dierence equation
n vj +1

vjn

n = D+ D vj

is a consistent approximation of ut = uxx , since


u(x; t + k ) k

u(x; t) D
h2

+D

u(x; t) = : : : =
h2

= ut (x; t) + kutt (x; t) + : : : (uxx (x; t) + = kutt (x; t)  12 = y(k ) + y(h2 )


uxxxx (x; t) + : : :

12

uxxxx (x; t) + : : :)

3.14 A nite dierence approximation is stable if the nite dierence solution can be bounded in terms of the data of the problem. 17

3.15 Tools for proving stability include

discrete maximum principles. For example, the solution to


vi+1;j

2vi;j + vi1;j + vi;j+1 2vi;j + vi;j1 = 0;


h2 h2 vi;j = g (xi ; yj );

(i; j ) P @ ;

(i; j ) P ;

satises

(i;j )

max jvi;j j

(i;j ) @

max

jg(xi ; yj )j: 1;

the Fourier method. For example, the solution to


n vj +1

vjn

n = D+ D vj ;

j = 0; : : : ; N n = 1; 2; : : : n = 1; 2; : : : j=

n = 1; 2 : : :

n n v1 = vN 1 ;

n n v0 = vN ; 0 vj = f (xj );

1; : : : ; N;
vm n
N 1 1 n vj e2ijm=N

can be expanded as
n vj

N 1 m=0

vm e2ijm=N ; n

where

j =0

is the discrete Fourier transform, satisfying


e2im=N vm n vm+1 vm n n = k

2m + e2im=N vm vn n
h2

or equivalently, vm+1 = gm vm , where gm = 1 4k=h2 sin2 (m=N ). n n 2 Since jgm j 1 if k=h 1=2, it follows from Parsevals relation that
N 1 1

j =0

jvjn j2 =

N 1 m=0

jvm j2 n

N 1 m=0

1 jvm j2 = N 0

N 1 j =0

jvj0 j2

or equivalently, jjv n jj2 jjf jj2 if k=h2 1=2. More generally, a scheme is stable if and only if there exists a constant K such that jgm j 1 + Kk. the energy method. For example, consider the dierence equation
n vj +1

vjn
n
0

n = D vj ;

j = 1; : : : ; N; n = 0; 1; : : : j = 0; : : : ; N:

n = 0; 1; : : :

v0 = 0; vj = f (xj );

Introduce the scalar product (v; w) =


N j =1

vj wj h

18

and the norm jjv jj =

(v; v ). From the summation by parts rule

(v; D w) = (D v; w) + h(D v; D w) + vN wN follows that

v0 w0

jjvn+1 jj2 = jjvn + kD vn jj2

= (v n + kD v n ; v n + kD v n ) + 2 k 2 (D v n ; D v n )

= (v n ; v n ) + k (v n ; D v n ) + k (D v n ; v n )
n = jjv n jj2 + (kh + 2 k 2 )jjD v n jj2 + k (vN )2

jjvn jj2

if 

0 and jjk=h

1. Thus, jjv n+1 jj

jjvn jj

:::

jjv0 jj = jjf jj.

3.16 The CFL-condition is a necessary but not sucient condition for stability. It states that characteristics going through a given point must go through the corresponding area of dependence. 3.17 A consistent nite dierence approximation of a well-posed dierential equation is convergent if it is stable. For example,
D+ D vi = fi ; v0 = c0 ; v1 = c1 ; i = 1; : : : ; N

1;

is a consistent and stable approximation of


uHH (x) = f (x); u(0) = c0 ; u(1) = c1 :

0 < x < 1;

From consistency follows that the error ei = u(xi ) vi satises


D+ D ei =

y(h);

i = 1; : : : ; N

1;

e0 = 0; e1 = 0;

and stability implies jjejj = y(h), that is, v

3 u as h 3 0.

19

Exercises of Type A
3.1 Of what type is a) the Laplace equation uxx + uyy = 0 b) Poissons equation uxx + uyy = f c) the wave equation utt = c2 uxx , where c is a real constant d) the heat equation ut = uxx , where  is a real constant e) the advection equation ut = ux , where  is a real constant f)
   u 0 = v t c   u , where c is a real constant 0 v x

3.2 Are the Cauchy-Riemann equations


ux = vy ; uy =

vx ;

hyperbolic? 3.3 Consider the ordinary dierential equation


uH (x) = f (x); u(0) = c;

0 < x < 1;

where f is a continuous function on [0; 1] and c is a constant, and let


g (x) = c +
x
0

f (t)dt:

a) Prove existence by showing that g solves the equation. b) Prove uniqueness by showing that if u is a solution to the equation, then u = g . c) Show boundedness in maximum norm, that is jjujjI

jcj + jjf jjI .

d) Show that the boundedness property implies uniqueness. 3.4 Is the ordinary dierential equation
uHH = 0;

0 < x < 1;

well-posed with boundary conditions b) uH (0) = c0 , uH (1) = c1 , c0 T= c1 , c) uH (0) = c1 , uH (1) = c1 a) u(0) = c0 , uH (0) = c1 (assume uniqueness)

3.5 Show that the ordinary dierential equation


uHH (x) = 0; u(0) = c0 ; u(1) = c1 ;

0 < x < 1;

satises a maximum principle. 20

3.6 Use the maximum principle to show that the Laplace equation with Dirichlet boundary conditions cannot have more than one solution. 3.7 The Laplace equation with initial conditions
utt + uxx = 0; u(x; 0) = 0; ut (x; 0) = fk (x); x

P R;

t > 0;

where fk (x) = sin kx, has the unique solution


u(x; t) =

1
k

sin kx sin kt:

Show that the problem is ill-posed. 3.8 The heat equation with periodic boundary conditions
ut = uxx ; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = f (x);

0 < x < 1;

has the unique solution


u(x; t) =

I
k=

where ck (t) = e4

ck (t)e2ikx ;

2 2

k tf k

and fk are the Fourier coecients of f (x),


f (x) =

k=

fk e2ikx :

a) Use Parsevals relation to show that the problem is well-posed if t > 0. b) Let for example f (x) = 2 cos 2mx, where m is an integer, and show that the problem is ill-posed if t < 0. 3.9 Use the Fourier method to show that the advection equation with periodic boundary conditions
ut = ux ; u(0; t) = u(1; t); u(x; 0) = f (x);

0 < x < 1;
t > 0;

t > 0;

0 < x < 1;

is well-posed. 3.10 Use the Fourier method to show that the problem
ut = uxx + ux ; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = f (x);

0 < x < 1;
t > 0; t > 0;

t > 0;

0 < x < 1;

is well-posed. 21

3.11 Use the Fourier method to show that the wave equation with periodic boundary conditions
utt = uxx ; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = f (x); ut (x; 0) = g (x);

0 < x < 1;
t > 0; t > 0;

t > 0;

0 < x < 1; 0 < x < 1; 2(a2 + b2 ) for all real numbers

is well-posed. Use the fact that (a + b)2 a and b.

3.12 Use the energy method to show that the solution to the advection equation
ut = ux ; u(x; 0) = f (x);

0 < x < 1; 0 < x < 1;

t > 0;

satises jju(; t)jj2

jjf jj2 if

a) u(0; t) = u(1; t), t > 0 b)  < 0 and u(0; t) = 0, t > 0 3.13 Use the energy method to show that the solution to the heat equation
ut = uxx ; u(x; 0) = f (x);

0 < x < 1; 0 < x < 1;

t > 0;

satises jju(; t)jj2

jjf jj2 if  > 0 and

a) u(0; t) = u(1; t) and ux (0; t) = ux (1; t) for t > 0 b) u(0; t) = u(1; t) = 0 for t > 0 3.14 Use the energy method to show that the solution to
ut = uxx + u; u(0; t) = 0; u(1; t) = 0; u(x; 0) = f (x);

0 < x < 1;
t > 0; t > 0;

t > 0;

0 < x < 1;
av

satises jju(; t)jj2

et f

jj jj2 . Use vH

A v(t)

eat v (0).

3.15 Use the energy method to show that the solution to


ut = uxx + ux + u; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = f (x);

0 < x < 1;
t > 0; t > 0;

0 < t < T;

0 < x < 1;

satises jju(; t)jj2

C f

jj jj2 where the constant C only depends on T .


22

3.16 Show that a) u(x; t) = f (x + t) solves the advection equation ut = ux b) u(x; t) = f (x + ct) + g (x ct) solves the wave equation utt = c2 uxx 3.17 The function x = x(t) is a characteristic curve for the equation
ut (x; t) = (x; t)ux (x; t)

if u(x(t); t) is constant. a) b) c) d)

Show that xH = (x; t). Determine x(t) if  is constant. Determine x(t) if (x; t) = x. Determine x(t) if (x; t) = t.
ut = ux ; u(0; t) = 0; u(x; 0) = f (x);

3.18 Use an argument about characteristics to show that the problem 0 < x < 1;
t > 0; t > 0;

0 < x < 1;

is ill-posed. 3.19 Let xi = ih, h = 1=N and use the nite dierence approximation D+ D to discretise the ordinary dierential equation
uHH = f;

0 < x < 1;

u(0) = c0 ; u(1) = c1 :

The result is a system of equations Av = b. a) Specify A, v and b. b) Eliminate the two equations that describe the boundary conditions. Specify A, v , and b also in this case. 3.20 Use the nite dierence operators D+ D and D0 to discretise the ordinary dierential equation
uHH + uH = f; u(0) = 0; u(1) = 0:

0 < x < 1;

Eliminate the two equations that describe the boundary conditions. The result is a system of equations Av = b. Specify A, v and b. 3.21 Use the nite dierence operator D+ D to discretise the ordinary dierential equation
uHH + au = f; u(0) = 0; u(1) = 0:

0 < x < 1;

where a = a(x). Eliminate the two equations that describe the boundary conditions. The result is a system Av = b. Specify A, v and b. 23

3.22 Consider the ordinary dierential equation

uHH = f; 1 < x < 1; u(1) = 0;


u(1) = 0:

and let (f; g ) =

b) Show that u(x) = 1 x2 satises (uH ; v H ) = (2; v ) for any v with integrable derivative satisfying v (1) = v (1) = 0. c) Let
u(x) =
&

a) Show that u(x) = 1 x2 is a solution if f (x) = 2.

f (x)g (x)dx:

d) Show that the function u(x) given in c) satises (uH ; v H ) = (f; v ) where
f (x) =
V `

Show that uH (x) is continuous (and therefore integrable).

1; 1 x2 ;
x2

x 0 x<0

0; X 2;

2;

x>0 x=0 x<0

and v is any function with integrable derivative satisfying v ( 1) = v (1) = 0. e) Is the function u(x) given in c) a solution to the dierential equation if f (x) is chosen as in d)? 3.23 Let N be a positive integer, h = 1=(N + 1), xi = ih, i = 0; : : : ; N + 1, and
V b 0; b b x b ` b b b b X

i (x) =

xi1 ; h xi+1 x ;
h

xi1 < x xi < x x > xi+1

xi1

xi xi+1

0;

a) Show that i (x) is continuous and that H (x) is piecewise continuous. i b) Show that
i (xj ) =
&

1; 0;

j=i j=i

c) Show that fi gN is a basis in the space of functions i=1


Vh = v v cont; v linear on[xi ; xi+1 ]; i = 0; : : : ; N; v (0) = v (1) = 0 :

fj

H d) Show that (uH ; vh ) = (f; vh ) for all vh P Vh , with Vh as given in c), h H ; Hi ) = (f; i ), for i = 1; : : : ; N . if and only if (uh
e) Compute (j ; i ), (H ; i ), and (H ; H ). j j i 24

3.24 Consider the boundary value problem

uHH + u = f;
u(0) = 0; u(1) = 0;

0 < x < 1;

a) Derive the variational formulation of the dierential equation. b) Dene the nite element method using continuous and piecewise linear functions on a uniform grid. c) Specify the linear system associated with the nite element method. 3.25 Consider the boundary value problem

uHH + auH = f; 1 < x < 1;


u(0) = 0; u(1) = 0;

where a is constant. a) Derive the variational formulation of the dierential equation. b) Dene the nite element method using continuous and piecewise linear functions on a uniform grid. c) Specify the linear system associated with the nite element method. d) What will change if a is a function of x? 3.26 Consider the boundary value problem

(auH )H = f;
u(0) = 0; u(1) = 0;

0 < x < 1;

where a is constant. a) Derive the variational formulation of the dierential equation. b) Dene the nite element method using continuous and piecewise linear functions on a uniform grid. c) Specify the linear system associated with the nite element method. d) What will change if a = a(x) > 0? 3.27 Use nite dierences to discretise the Laplace equation on the unit square
uxx + uyy = 0; u = 1;

0 < x; y < 1; on the boundary:

Use space step 1/3 in both directions. Write the result on matrix form, a) with boundary conditions included b) with boundary conditions eliminated 25

3.28 Use nite dierences to discretise Poissons equation on the unit square
uxx + uyy = f; u = g;

0 < x; y < 1; on the boundary:

Use N1 steps in the x-direction and N2 steps in the y -direction. Specify the coecient matrix for the case when boundary conditions are eliminated. 3.29 Use D+ D and D0 to discretise
uxx + uyy + ux + uy = f; u = g;

0 < x; y < 1; on the boundary:

Use N1 steps in the x-direction and N2 steps in the y -direction. Specify the coecient matrix for the case when boundary conditions are eliminated. 3.30 One possible discretisation of
uxx + uyy + u = f; u(0; y ) = u(1; y ); ux (0; y ) = ux (1; y ); u = g;

0 < x; y < 1; 0 < y < 1; 0 < y < 1;


y = 0 and y = 1;
&

is
D+ D vi;j + D+ D vi;j + vi;j = f (xi ; yj ); v1;j = vN1 1;j ; v0;j = vN1 ;j ; vi;0 = g (xi ; 0); vi;N2 = g (xi ; 1);
( x) ( x) (y ) (y )

j = 1; : : : ; N2

1; 1; i = 0; : : : ; N1 1; i = 0; : : : ; N1 1;
j = 1; : : : ; N2

i = 0; : : : ; N1 j = 1; : : : ; N2

1; 1;

where xi = ih1 , h1 = 1=N1 and yj = jh2 , h2 = 1=N2 . Specify the coecient matrix for the case when boundary conditions are eliminated. 3.31 Which of the following nite dierence schemes allow for explicit timestepping? a) b) c)
n vj +1 n vj +1 n vj +1

vjn vjn vjn

k k

n = D+ D vj n = D+ D vj +1

1 1 n n D+ D vj +1 + D+ D vj k 2 2 3.32 Specify the system of equations that has to be solved in each time-step if =
n vj +1

vjn

n = D+ D vj +1 ;

n = 0; 1; : : : ; n = 1; 2; : : : n = 1; 2; : : : j = 0; : : : ; N:

j = 1; : : : ; N

1;

n vN = 0;

n v0 = 0;
0

vj = g (xj );

Eliminate boundary conditions and write the system on matrix form. 26

3.33 Show that all of the following dierence schemes are consistent approximations of the ordinary dierential equation uH = f . a) D vi = f (xi ) b) D vi =
f (xi1 ) + f (xi )

c) D0 vi = f (xi ) 3.34 Show that all of the following nite dierence schemes are consistent approximations of ut = ux + f . a) b) c)
n vj +1 n vj +1 n vj +1

vjn vjn vjn

k k k

n = D vj + f (xj ; tn ) n = D0 vj + f (xj ; tn ) n n = D0 vj + hD+ D vj + f (xj ; tn )

3.35 The dierential equation ut = ux is approximated by


n vj +1

vjn

n n = D0 (1 )vj +1 + vj ) :

Which  gives the best approximation? 3.36 Show that


n vj +1

vjn

= D+ D

n n vj +1 + vj

is a consistent approximation of ut = uxx . 3.37 Show that


vi+1;j

2vi;j + vi1;j + vi;j+1 2vi;j + vi;j1 = 0;


h2 h2 vi;j = g (xi ; yj );

(i; j ) P @ ;

(i; j ) P ;

has at most one solution since it satises the discrete maximum principle. 3.38 The discrete Fourier transform and its inverse are given by
vm =
N 1 1

vj e2ijm=N

and vj =

N 1 m=0

vm e2ijm=N :

j =0

Show that a) wj;m = e2ijm=N satises wj +N;m = wj;m and wj;m+N = wj;m b) vm has period N , that is, vm+N = vm
d c) (vj +k )m = e2ikm=N vm if v has period N

27

3.39 Use the Fourier method to show stability for


n vj +1

vjn
n n
0

n = D+ D vj ; n vN 1 ; n vN ;

j = 0; : : : ; N n = 1; 2; : : : n = 1; 2; : : : j=

1;

n = 0; 1 : : :

v1 = v0 =

vj = f (xj );

1; : : : ; N;

if k=h2

1=2.
n vj +1

3.40 Use the Fourier method to show that

vjn

n = D0 vj ;

j = 0; : : : ; N n = 1; 2; : : : n = 1; 2; : : : j=

1;

n = 0; 1 : : :

n n v1 = vN 1 ;

n n v0 = vN ; 0 vj = f (xj );

1; : : : ; N;

is unstable if k=h = 1. 3.41 Use the Fourier method to show stability for
n vj +1

vjn
n
0

n = D+ vj ; n vN ;

j = 0; : : : ; N n = 1; 2; : : :

1;

n = 0; 1 : : :

v0 =

vj = f (xj );

j = 0; : : : ; N;

if k=h

1.
n vj +1

3.42 Use the Fourier method to show that

vjn

n = D+ vj +1 ;

j = 0; : : : ; N n = 1; 2; : : :

1;

n = 0; 1 : : :

n n v0 = vN ; 0 vj = f (xj );

j = 0; : : : ; N;

is always stable. 3.43 Consider


n vj +1

vjn

n = avj ;

j = 0; : : : ; N; j = 0; : : : ; N:

n = 0; 1 : : :

0 vj = f (xj );

Let jjv jj =

(v; v ), where (v; w) =


N j =0

vj wj h:

Use the energy method to determine the values on a and k > 0 for which the norm of the solution is decreasing. 28

3.44 Let jjv jj =

(v; v ), where (v; w) =

N 1 j =0

vj wj h:

a) Prove the summation by parts rule (v; D+ w) = (D+ v; w) h(D+ v; D+ w) + vN wN b) Use the energy method to show that
n vj +1

v0 w0 :

vjn
0

n = D+ vj ;

j = 0; : : : ; N n = 1; 2; : : :

1;

n = 0; 1 : : :

n vN = 0;

vj = f (xj );

j = 0; : : : ; N;

is stable if k=h < 1. 3.45 Let jjv jj =


p

(v; v ), where (v; w) =

N 1 j =1

vj wj h:

a) Prove the summation by parts rule b) Show that 2ab a2 + b2 for all real numbers a and b. c) Use the inequality 2ab a2 + b2 to prove that (v; D+ w) = (D v; w) + vN 1 wN

v0 w1 :

jjD+ vjj2
n vj +1

4
h2

2 2 jjvjj2 + h vN :

d) Use the energy method to show that

vjn
n vN

n = D+ D vj ;

j = 1; : : : ; N n = 1; 2; : : : n = 1; 2; : : :

1;

n = 0; 1 : : :

n v0 = 0;

= 0; 1=2.

vj = f (xj );

j = 0; : : : ; N;

is stable if k=h 3.46 Let

(v; w) = and jjv jj =


n vj +1

N 1 j =1

vj wj h

(v; v ). Use the energy method to show that = D+ D = 0;


n n vj +1 + vj

n vj

n vN

n v0 = 0;

j = 1; : : : ; N n = 1; 2; : : : n = 1; 2; : : :

1;

n = 0; 1 : : :

is unconditionally stable. Hint: Investigate (v n+1 + v n ; v n+1 v n ). 29

vj = f (xj );

j = 0; : : : ; N;

3.47 Consider the nite dierence approximation


n vj +1

vjn

n = D0 vj

of ut = ux . Derive the CFL-condition. 3.48 Consider the nite dierence approximation
n vj +1

2vjn + vjn1
k2

n = c2 D+ D vj

of utt = c2 uxx . Derive the CFL-condition. 3.49 Consider the nite dierence approximation
n vj +1

vjn

n = D+ vj +1

of ut = ux . Derive the CFL-condition. 3.50 Consider the dierential equation


uH = f; u(0) = c;

0 < x < 1;

and the nite dierence approximation


D vi = f (xi ) + f (xi1 ) ; i = 1; : : : ; N;

v0 = c;

where xi = ih, h = 1=N . Assume stability and consistency and prove convergence. 3.51 Consider the dierential equation
ut = ux ; u(0; t) = u(1; t); u(x; 0) = f (x);

0 < x < 1;
t > 0;

t > 0;

0 < x < 1;

and the nite dierence approximation


n vj +1

vjn
n
0

n n = D+ vj +1 + gj ; n vN ;

j = 0; : : : ; N n = 1; 2; : : :

1;

n = 0; 1 : : :

v0 =

vj = f (xj );

j = 0; : : : ; N;

n where gj = 0, xj = jh, h = 1=N and tn = nk . Assume consistency and stability (in the sense that jjv n jj C (jjg n jj + jjf jj)) and prove convergence.

30

Exercises of Type B
3.52 Derive a ve-point nite dierence approximation of uHH = 0. 3.53 Derive a two-point nite dierence approximation of uH + u = 0 that is second order accurate. 3.54 Consider the Laplace equation on an annulus
uxx + uyy = 0; u = f;
p x2 + y 2 < 2;

1<

on the boundary:

a) Show that
wrr +

1
r2

u +

wr = 0; r w(r; 0) = w(r; 2 ); w(r; ) = g (r; );

1 < r < 2;

0 <  < 2;

w (r; 0) = w (r; 2 ); r = 1 and r = 2;

where w(r; ) = u(r cos ; r sin ) and g (r; ) = f (r cos ; r sin ). b) Propose a nite dierence discretisation. 3.55 Consider the dierential equation
ut = ux ; u(0; t) = 0; u(x; 0) = f (x);

0 < x < 1; t > 0;


t > 0;

0 < x < 1:

a) Show that the problem is ill-posed. b) Propose a boundary condition that makes the problem well-posed. 3.56 Consider the dierential equation
ut = ux ; u(1; t) = 0; u(x; 0) = f (x);

0 < x < 1; t > 0;


t > 0;

0 < x < 1;

and the nite dierence approximation


n vj +1

vjn
0

n = D+ vj ;

j = 0; : : : ; N n = 1; 2; : : :

1;

n = 0; 1 : : :

n vN = 0;

vj = f (xj );

j = 0; : : : ; N;

where xj = jh, h = 1=N and tn = nk . a) Show that the dierential equation is well posed. b) Show that the solution of the nite dierence equation converges if k=h 1. 31

3.57 Consider the dierential equation


ut = uxx ; u(0; t) = u(1; t); ux (0; t) = ux (1; t); u(x; 0) = sin(2x);

0 < x < 1; t > 0;


t > 0; t > 0;

0 < x < 1;

and the nite dierence approximation


n vj +1

vjn

n = D+ D vj ;

j = 1; : : : ; N n = 1; 2; : : : n = 1; 2; : : :

1;

n = 0; 1 : : :

n n v1 = vN 1 ; n n v0 = vN ;

vj = sin(xj );

j = 0; : : : ; N;

where xj = jh, h = 1=N and tn = nk . a) Show that the dierential equation is well-posed. b) Show that the solution of the nite dierence equation converges.

Exercises of Type C
3.58 Show that the solution to
vi+1;j h2

2vi;j + vi1;j + vi;j+1 2vi;j + vi;j1 = 0;


h2 vi;j = g (xi ; yj );

(i; j ) P @ ;

(i; j ) P ;

a) assumes its maximum on the boundary @ b) assumes its minimum on the boundary @ c) satises
(i;j )

max jvi;j j

(i;j ) @

max

jg(xi ; yj )j:

3.59 Show that the solution to the Laplace equation


uxx + uyy = 0; u = g;

(x; y ) P ;

(x; y ) P @ ;

a) assumes its maximum on the boundary @ b) assumes its minimum on the boundary @ c) satises
(x;y )

max ju(x; y )j =

(x;y ) @

max

jg(x; y)j:

3.60 Show that


vj =

N 1 m=0

vm e2ijm=N

is the inverse of the discrete Fourier transform.

32

Chapter 4

Least Squares Problems


Summary
4.1 The polynomial
p(x) =
n k=0

ck xk ;

m + 1;

that best approximates the m points (xi ; yi ) in the sense of least squares minimises
F (c0 ; : : : ; cn ) =
m i=1

(yi p(xi ))2 :

This function has its minimum when the linear and symmetric normal equations @F=@cj = 0, j = 0; : : : ; n are satised. 4.2 Let L be a linear space with scalar product (f; g ) and norm jjf jj = (f; f ) and let M & L be a linear subspace. The least squares problem is: Given f
p

P L; nd g P M that minimises F (g) = jjf gjj:

4.3 The least squares problem has a unique solution. 4.4 Let fei gn be an ON-basis (orthogonal basis) in M . The orthogonal i=1 projection of f on M
g = (f; ei )ei
i=1 n

n (f; ei ) g = ei i=1

(ei ; ei )

solves the least squares problem. 4.5 Let g be the solution of the least squares problem. The error f orthogonal to the subspace M .

g is

4.6 An ON-basis can be computed by GramSchmidt orthogonalisation. 4.7 The classical orthogonal polynomials include polynomials by Legendre, Chebyshev, Laguerre and Hermite. 4.8 The least squares solution of an overdetermined system Ax = b is given by the solution of the normal equations AT Ax = AT b D Rx = QT b, where A = QR is a QR-factorisation. 33

Exercises of Type A
4.1 Determine the constant function that best approximates x y in the sense of least squares. 4.2 Determine the linear function that best approximates x y in the sense of least squares. 4.3 Determine the second degree polynomial that best approximates x y in the sense of least squares. 4.4 Let L = R2 , (f; g ) = f1 g1 + f2 g2 , and M = f c b) Show that the error f

0 0

1 1

0 1

1 2

2 2

0 1

1 1

2 1

3 2

4.5

g is orthogonal to M . Let L = R2 , (f; g ) = f1 g1 + f2 g2 , and M = f c c j c P Rg.


a) Compute the orthogonal projection g of f = 0

a) Compute the orthogonal projection g of f = 1

j c P Rg.
1 in M .

b) Show that the error f


M = a 1

g is orthogonal to M .
0 +b 0

1 in M .

4.6 Let L = R3 , (f; g ) = f1 g1 + f2 g2 + f3 g3 , and

j a; b P Rg:

a) Compute the orthogonal projection of f = 1 b) Show that the error f


M = a 1

g is orthogonal to M .
1 +b 0

1 in M .

4.7 Let L = R3 , (f; g ) = f1 g1 + f2 g2 + f3 g3 , and

j a; b P Rg:

4.8

g is orthogonal to M . Find an ON-basis in spanf1; x; x2 g with respect to


b) Show that the error f
1 4
0 0

a) Compute the orthogonal projection of f = 1

2 in M .

a) (f; g ) = b) (f; g ) = c) (f; g ) =

1 f (x)g(x)dx
f (x)g (x)dx f (x)g (x) sin xdx

34

4.9 Let L = C [0; 2], (f; g ) =

f (x)g (x)dx

and M = spanf1; xg. Find the best approximation of f (x) = x3 in M in the sense of least squares in the following three ways: a) Look for g (x) on the form g (x) = a + bx and require the partial derivatives of jjf g jj2 to be zero. b) Look for g (x) on the form g (x) = a + bx and require the error f g to be orthogonal to M . c) Determine an ON-basis in M and compute the orthogonal projection of f on M . 4.10 Let L = C [1; 1], (f; g ) =

and M = spanf1; xg. Find the best approximation of f (x) = x3 in M in the sense of least squares. 4.11 Let L = C [1; 1], (f; g ) =

f (x)g (x)dx

and M = spanf1; xg. Find the best approximation of f (x) = x3 in M in the sense of least squares. 4.12 Let L = C [0;  ], (f; g ) =

0

f (x)g (x)(1 + x2 )dx

f (x)g (x)dx

and M = spanf1; x; x2 g. Find the best approximation of f (x) = sin x in M in the sense of least squares. 4.13 The rst three Legendre polynomials are p1 (x) = 1, p2 (x) = x, and p3 (x) = (3x2 1)=12. a) Show that p1 , p2 , and p3 are orthogonal with respect to

(f; g ) =

b) Find the best approximation in spanf1; x; x2 g of the function f (x) that satises f (x) = 1, x ! 0, and f (x) = 0, x < 0. 4.14 The rst three Hermite polynomials are p1 (x) = 1, p2 (x) = 2x, and p3 (x) = 4x2 2. a) Show that p1 , p2 , and p3 are orthogonal with respect to

f (x)g (x)dx:

(f; g ) =

b) Find the best approximation in spanf1; x; x2 g of the function f (x) that satises f (x) = 1, x ! 0, and f (x) = 0, x < 0. 35

f (x)g (x)ex dx:


2

4.15 The rst three Laguerre polynomials are p1 (x) = 1, p2 (x) = 1 x, and p3 (x) = 1 2x + x2 =2. a) Show that p1 , p2 , and p3 are orthogonal with respect to

(f; g ) =
0

f (x)g (x)ex dx:

b) Find the best approximation in spanf1; x; x2 g of f (x) = sin x. 4.16 Solve the following overdetermined system in the sense of least squares
x = 0; x = 1:

4.17 Solve the following overdetermined system in the sense of least squares
x1 + x2 = 1; x2 = 0; x1 + 2x2 = 2:

4.18 Solve the following overdetermined system in the sense of least squares
x1 + x2 = 3;

2x2 = 3; x1 + x2 = 0;
x1 = 1:

4.19 Consider the overdetermined system Ax = b, where 1 A=d 0 1


H

0 1e ; 2

  x1 ; x= x2

1 and b = d1e : 0

H I

a) Let a1 and a2 be the columns of the coecient matrix A. Compute an ON-basis e1 , e2 in spanfa1 ; a2 g. b) Let Q = e1 e2 . Determine the change of basis matrix R that satises QR = A. c) Solve the overdetermined system Qx = b in the sense of least squares. d) Solve Rx = x. 4.20 Consider the overdetermined system
H

1 f 0 f d1 3

1 2 1 0

1
0

H I I 3 x1 2 g d e f 3g g x2 = f g : d 0e 3e x3 H

a) Use GramSchmidt to compute a QR-factorisation of the coecient matrix. b) Use your result in a) to nd the vector that solves the overdetermined system in the sense of least squares. 36

Exercises of Type B
4.21 Compute the best approximation of x3 in the sense of least squares on a) the interval 0 x 2 with a + bx b) the interval 0 x 2 with a + bex c) the point set f0; 0:5; 1; 1:5; 2g with a + bx 4.22 Compute the best linear approximation of sin x in the sense of least squares with respect to

1 f (x)g(x)dx 2 b) (f; g ) = I f (x)g (x)ex dx I c) (f; g ) = f (x)g (x)ex dx


I
0

a) (f; g ) =

Exercises of Type C
4.23 What is the best upper triangular approximation of a matrix in the sense of least squares if (A; B ) = tr(AT B )? 4.24 Show that the least squares problem has a unique solution. 4.25 Let g be the best approximation of f in M . Show that the error f is orthogonal to M . 4.26 Consider the linear space
V = v v cont; v H piecewise cont; v (0) = v (1) = 0 ;

f j

and the N -dimensional subspace


VN = v v cont; v linear on Ij for j = 0; : : : ; N; v (0) = v (1) = 0 :

f j

Here, Ij = [xj ; xj +1 ], xj = jh, and h = 1=(N + 1). Dene the scalar product 1 (f; g ) = f H (x)g H (x) dx; and the induced norm jjf jj =
p
0

u(0) = u(1) = 0:

uHH = f;

(f; f ). Let u be the solution to 0 < x < 1;

Show that the best approximation of u in VN in the sense of least squares is given by
uh (x) =
N j =1

cj j (x);

where the coecients ci satises


N j =1

1 0

cj

H (x)H (x) dx = j i

f (x)i (x) dx;

i = 1; : : : ; N;

if fj gN is a basis in VN . j =1

37

38

Answers
Chapter 1
1.1 a) 11 6 d3 b) 9 c) 33 d) 33 1.2
H

4 2 6 22 22

2 1e 3 11 11
T T

a) aT b is a scalar and therefore its own transpose b) aT b = c) (bT a)b


n n T i=1 ai bi = i=1 bi ai = b a = b(bT a) = b(aT b) = baT b
T

1.3

a) (aH b)H = aT b = aT b = aT b b) aH b = aT b = bT a c) (bH a)b = b(bH a) = b(aT ) = baT b b


  

1.4

a)

AT

1 1

i 

i
2
i

b) AH = 1.5 a) No b) No c) Yes d) Yes e) Yes 1.6 a) Yes b) No c) Yes

39

1.7 If ai are the columns of a matrix A, then


H TI a1 f . g T A A = d . e a1 .

:::

aT n

H T a1 a1 f . an = d . .

::: :::

aT an 1 aT an n

aT a1 n

. g: . e .

Thus AT A = I if and only if aT aj = 1 if i = j and 0 otherwise. i 1.8 a) b) c) d) e)


AH A = AAH since A = AH AH A = I AT A = I  

AH A = AT A = AAT = AAH since A = AT and A is real

D AH = A1 D AAH = I so AH A = I = AAH

1.9 (B 1 A1 )(AB ) = B 1 A1 AB = B 1 B = I 1.10 1.11 a) 2 b) 14 c) 0 a) b) c) d) e) No Yes Innitely many No None

1 1

D AT= A1 AAT = I so AH A = AT A = AAT = AAH D 1 1 and are two such matrices
1 1

A (AB )1 = B 1 A1

1.12

1.13 Follows from the denition of span. 1.14 The columns of A, say a1 ; : : : ; an , are linearly independent since A has full rank and is therefore a basis in Cn . Every b P Cn can hence be written
b=
n i=1

xi ai

1.15 Since b P span(A) there is at least one solution x. Also, since the rank of A is less than full, the columns a1 ; : : : ; an are linearly dependent, that is, T there exist y = y1 : : : yn T= 0 such that Ay = y1 a1 + : : : + yn an = 0. Consequently, x + cy is also a solution for every complex number c. 1.16 a) 1 = 1, x1 = 1 b) 1 = 2, x1 = 1 c) 1 = 4, x1 = 7
x3 = 1 x3 = 1
T

for a unique set of coecients xi , or equivalently, b = Ax for a unique x.

2
T

, 2 = 1, x2 = 12 0
T

11 0

, 2 =

2, x2 = 1

1 0

, 3 =

1,

d) 1 = 1, x1 = 1 1 1

; 2 = 2, x2 = 1 1 T T 0 ; 4 = 4, x4 = 1 1 1 1

; 3 = 3,

40

1.17

b) 1 and xi i

a) k and xi i

d) (i c)1 and xi 1.18 a) b) 2 13 1.19 If A is diagonal, the characteristic polynomial has the form det(A I ) = (a11 ) : : : (ann ) and we see that the diagonal elements are the zeros. 1.20 If A is upper triangular, the characteristic polynomial has the simple form det(A I ) = (a11 ) : : : (ann ) and we see that the diagonal elements are the zeros. 1.21 a) 1 = 1 + 2 3, 2 = 1 2 3 b) 2 c) -11 1.22 a) 9 b) c) 3 d) 2 1.23 1.24

c) i c and xi

p 3 p

15 % 3:87

jjAxjj2 = (Ax)H (Ax) = xH AH Ax = xH x = jjxjj2 since A is unitary. 2 2


a) 6 b)
p

11 + 3 13 % 4:67

c) 5 d) (5 +

17)=2 % 4:56

1.25 Let i and xi be the eigenvalues and corresponding eigenvectors of a matrix A and assume jjxi jj = 1. Then,

jjAjj = jjmax jjAxjj ! jjAxi jj = jji xi jj = ji j jjxi jj = ji j; Vi xjj=1


and
i

(A) = max i

j j jjAjj:

1.26

a) Let  be an eigenvalue and x the corresponding eigenvector of a Hermitian matrix A. Since both
xH Ax = xH x = xH x

and

xH Ax = xH AH x = (Ax)H x = (x)H x = xH x

holds, we see that  = , that is,  is real. 41

b) Let  be an eigenvalue and x the corresponding eigenvector of a skew-Hermitian matrix A. Since both
xH Ax = xH x = xH x

and xH AH x = (Ax)H x = (x)H x = xH x holds, we see that  = , that is,  is pure imaginary.
xH Ax =

c) Since the 2-norm is preserved by unitary transformations,

jjxjj2 = jjAxjj2 = jjxjj2 = jj jjxjj2


implying jj = 1 if  is an eigenvalue and x the corresponding eigenvector of a unitary matrix A. 1.27 The characteristic polynomial of A takes the form det(A I ) = (1)n n + cn1 n1 + : : : + c0 for some coecients c0 ; : : : ; cn1 . Since the eigenvalues are the zeros, det(A I ) = (1 )(2 ) : : : (n ): Letting  = 0 gives the result. 1.28 We rst show that det(A I ) = (1)n n (a11 + : : : + ann )n1 + : : : Developing the characteristic polynomial by row 1 gives det(A I ) = (a11 )D11 a12 D12 + : : : where D11 is a polynomial of degree n 1 and all other sub-determinants are polynomials of degree n 2. Developing D11 by row 1 gives
D11 = (a22
(2) (2) (2) )D11 a23 D12 + : : :

where D11 is a polynomial of degree n 2 and all other sub-determinants are polynomials of degree n 3. Repeating, one nds that det(A I ) = (a11 )(a22 ) : : : (ann ) + : : : = (1)n n (a11 + : : : + ann )n1 + : : :

where the omitted part is a polynomial of degree n 2. On the other hand, det(A I ) = (1 )(2 ) : : : (n ) and we nd that a11 + : : : + ann = 1 + : : : + n . 42 = (1)n n (1 + : : : + n )n1 + : : :

1.29 First note that

jjAjj2 = jjmax jjAxjj2 = jjmax (Ax)H Ax = jjmax xH AH Ax: 2 2 xjj =1 xjj =1 xjj =1
2 2 2

Since AH A is Hermitian, there is a unitary matrix U such that AH A = U H DU and D = diag(1 ; : : : ; n ), where i are the eigenvalues of AH A. Thus, jjAjj2 = max xH U H DU x = max (U x)H D(U x) 2 Let y = U x. Then jjy jj =
2 2
2

jjxjj2 =1

jjxjj2 =1

(U x)H U x

xH U H U x

jjAjj2 = jjmax yH Dy = jjmax jy1 j2 1 + : : : jyn j2 n : 2 y jj =1 y jj=1

= xH x = jjxjj2 and 2

Since jjAjj2 is non-negative, the eigenvalues i must be real and non2 negative. Suppose k is the largest eigenvalue. Since jy1 j2 + : : : + jyn j2 = 1, the maximum is assumed when yk = 1 and yi = 0, i T= k , and

jjAjj2 = k = (AH A): 2

1.30 If i and xi are the eigenvalues and corresponding eigenvectors of A then

jjAjj2 = (AH A) = (A2 ) = max j2 j = (max ji j)2 = (A)2 : 2 i i i


Chapter 2
2.1 A = CBC 1 if for example C =
 

2.2 Let i and xi be the eigenvalues and eigenvectors of A = CBC 1 . Then i and C 1 xi are the eigenvalues and eigenvectors of B since
Axi = CBC 1 xi = i xi


1 1

1 . 0

D
1= 4 1


BC 1 xi = i C 1 xi :


2.3 D = C 1 AC , where C = 2.4

1=4
1

and D =

1=2
0

0 . 1=2

a) Yes, since its symmetric b) Yes, since its orthogonal and therefore normal c) No, since its a Jordan box i.e. it has only one eigenvector d) Yes, since distinct eigenvalues (0 and 1) implies a complete set of linearly independent eigenvectors e) No, since its a Jordan box i.e. it has only one eigenvector a) There is one eigenvalue in j 0:1j p 0:3 and two in the union of j 2j 1:5 and j 2:5j 0:5 + 1:25 % 1:62 c) One eigenvalue is in jj 1=2 and three eigenvalues are in j 4j 2 3

2.5

b) All four eigenvalues are in jj

d) There is one eigenvalue in j 4:2j 1:9, one in j 8:3j 0:8, one in j 10:9j 0:8, and one in j 18:2j 1:5. The eigenvalues are real since the matrix is symmetric. 43

 

2.6

a) 1 = 3, x1 = b) y0 = 1

1 , 2 = 1, x2 = 1 2
T

y1 = 1 y2 = 5 4  

p 2 = 5; z 2 = 5 T p
T

; z1 = 1

1 1

c) y0 =
y1 y2

d) y0
y1 y2

e) If A has an eigenvalue , ( 2)1 is an eigenvalue of (A 2I )1 . Thus, ( 2)1 % 121=123 A  % 2 + 123=121 % 2:98. The corresponding eigenvector x % z3 is the same for both matrices. 2.7 Since the eigenvalues of A are distinct, A has a full set of linearly independent eigenvectors xi . Thus, any z0 can be written
z0 =
n i=1

1 1 1 1 T ; z1 = ; y 0 z1 = , 0 3 2 3     1 1 1 13 5 =p ; z2 = p yT 2 4 ; 1 z2 = 15 % 0:87, 5   3 5  1 1 121 5 13 T =p 4 ; z3 = 3p41 14 ; y2 z3 = 123 % 0:98 41     1 1 1 1 T = ; z1 = ; y 0 z1 = , 2 0 3 3     1 1 1 13 5 T =p % 0:87, ; z2 = p ; y 1 z2 = 2 4 15 5   3 5   1 1 121 5 13 T =p ; z3 = p ; y 2 z3 = % 0:98 123 41 4 3 41 14

T ; y0 z1 = 1, T T 4 = 5; y1 z2 = 13=5 = 2:6, T T = 41; z3 = 13 14 = 41; y2 z3 = 121=41  

% 2:95

c i xi

for some coecients ci and


k A k z 0 = 1
n i=1

ci k Ak xi 1 ci k k xi i 1


n i=1

= c1 x1 + c2

2 1

k

x2 + : : : + cn

n 1

k

xn :

Since j1 j > ji j, i = 2; : : : ; n, all but the rst term vanish as k 3 I. The statement is false only if c1 = 0. Note that the error is dominated by the second term. 2.8
z H Azk z H Azk H H yk zk+1 = yk Ayk = k 2 = kH zk 2 zk zk

jj jj

% 1 z k z k = 1 zH z
k k

2.9

a) The eigenvectors of A are complex. For a real matrix and real starting vector, the iteration cannot converge to a complex vector. 44

b) The eigenvalues of A both have the same modulus and the iteration converges to a linear combination of the two eigenvectors. c) The initial guess does not have any component in the dominant eigenvector. This is not a real problem since in practice rounding error usually introduces such a component. 2.10 a) P T = (I b) 2.11

2wwT )T = I T 2(wwT )T = I 2wwT = P P T P = P P = (I 2wwT )(I 2wwT ) = I 4wwT + 4wwT wwT I 4wwT + 4wwT = I
 

a) Let
x=

3 ; 4

y=

jjxjj2
0

 

5 ; 0
T

x = p1  2  ; w= jj xjj2 20 4
y y


and

1 P = I 2ww = 5 1 PA = 5


3 4

4 3 :

Then,

25 0

3 4

R

and Q = P 1 = P T = P . b) Let 3 x1 = d0e ; 4 and


H I H I H I

5 y1 = d0e ; 0

x1 = p1 d 2 e ; 0 w1 = jj x1 jj2 20 4
y1 y1
H

3 1 T P1 = I 2w1 w1 = d0 5 4 Then 25 1 P1 A = d 0 5 0 Look for P2 on the form 1 P2 = d0 0 Let


    H H

0 5 0
I

4 0 e: 3

4 0 3

13 10e : 9
I e:

0 P2

x2 =

0 3=5 ;

y2 =

3= 5 ; 0
T

1 x2 = p 1 ; w2 = jj x2 jj2 2 1
y2 y2


and P2 = I

2w 2 w 2
45

0 1

1
0

Then

25 1 R = P2 P1 A = d 0 5 0 and

4 3 0

13 9 e 10
H

T T Q = (P2 P1 )1 = P1 1 P2 1 = P1 P2
c) Let 2 x1 = d1e ; 2 and
H I

3 1 = P1 P2 = d0 5 4

4
0 3

0 5e : 0

3 y1 = d0e ; 0

H I

w1 =

y1 x1 jjy1 x1 jj2

1 1 = p d1e ; 6 2
I

2 1 T P1 = I 2w1 w1 = d1 3 2 Then 3 P1 A = d0 0 Look for P2 on the form 1 P2 = d0 0 Let


  H H

1 2 2
I

2 2e : 1

3 0 3

3 2e : 1
I e:

0 P2

x2 =

0 ; 3

 

y2 =

3 ; 0

w2 =

x2 = p  1  ; 1 jj x2 jj2 2 1
y2 y2


and P2 = I Then
T 2w2 w2 =

0 1

1 : 0 3 1e 2
H I I

3 R = P2 P1 A = d0 0 and

3 3 0

T T Q = (P2 P1 )1 = P1 1 P2 1 = P1 P2

2 1 = P1 P2 = d1 3 2

2 2 1

1 2 e: 2

46

2.12 From xT x = y T y and xT y = (xT y )T = y T x follows that


P x = (I
 T x 2wwT )x = I 2 (y y )(yx2x) x jj jj2 T (x y )(x y ) 2(xT x y T x) =x2 x = x (x y ) T T (y x) (y x) y y y T x xT y + xT x 2(xT x y T x) = x (x y ) = y = x (x y ) T x x xT y xT y + xT x  

2.13

a) Step 1:
x=

0 ; 1

 

y=

Q0 = P =

0 1

1 ; 0

1 ; 0

1 1 w= p 1 2


R0 = P A =


A1 = R0 Q0 =

1 1

1 0

1 0

1 ; 1

Step 2:
 

x=

p  p 2 p1 p p2 1 1 2 ; 21 2 2 p   p 1 2 2 2 p2 1 p ; R1 = P A1 = 21 0 2 2
Q1 = P =

1 ; 1

y=

2 ; 0 1

w= p

42 2

21 1 ;

1 A2 = R1 Q1 = 2 b) Step 1:
x=
 

3 1

1 1

4 ; 3

 

y=

1 Q0 = P = 5

4 3

3 4 ;

5 ; 0

1 w= p

 

10

1 3 ;

R0 = P A =


A1 = R0 Q0 =

7 24


1
32

5 0

5 40 ;

Step 2:
x= Q1 = P =

1 25

7 24 ; 7 24

y=

24 7

25 ; 0

1 w= 5

  

3 ; 4 25 0

R1 = P A1 =


1 A2 = R1 Q1 = 25

919 192

383
56

31 8

2.14 The matrix Ak+1 is similar to Ak since Ak+1 = Rk Qk = Q1 Ak Qk = k QH Ak Qk . Consequently, Ak+1 is similar to all Ai , i = 0; : : : ; k . k 47

2.15 Since the matrix is symmetric and the eigenvalues therefore are real, and since the Gershgorin discs

do not overlap, there is one eigenvalue in the interval I1 = [4:3; 3:7], one in I2 = [2:4; 1:6], and one in I3 = [0:3; 0:3]. a) The smallest eigenvalue, the one in I1 , has the largest modulus and the power method converges towards the corresponding eigenvector. b) The largest eigenvalue, the one in I1 , has the smallest modulus and inverse iteration converges towards the corresponding eigenvector. 2.16 The error when using the power method on A 2I behaves approximately like (1= 1:9)k , while the error for inverse iteration on A behaves approximately like (1=10)k . The convergence of the latter method is much faster. 2.17 Inverse iteration on A I , where  is the approximate eigenvalue, converges towards the requested eigenvector. 2.18 The union of the Gershgorin discs is j 3j 2. Since all eigenvalues are nonzero, the determinant is nonzero and the matrix is invertible. 2.19 The union of the Gershgorin discs is j 2j A is symmetric, jjAjj2 = (A) 4. 2.20 a) The QR-algorithm can be used. Since 2, implying jj 4. Since

j  + 4j j  + 2j j j

0: 3 ; 0: 4 ; 0: 3

Ak+1 = Rk Qk = Q1 Ak Qk = : : : = k

1 = Q1 Q1 Ak1 Qk1 Qk = : : : = k k

1 it follows that A = Ck+1 Ak+1 Ck+1 , where Ck+1 = Q0 Qk . The matrices Ak converges to an upper triangular matrix T and Ck is unitary for every k . b) Symmetry is preserved by the QR-method since
AT+1 = (Rk Qk )T = (Q1 Ak Qk )T = (QT Ak Qk )T = QT AT Qk k k k k k

= (Q0 Qk )1 AQ0 Qk

= Q1 Q1 A0 Q0 Qk = 0 k

= QT Ak Qk = Q1 Ak Qk = Rk Qk = Ak+1 : k k

Since the symmetric matrices Ak converges towards an upper triangular matrix T , the limit must be diagonal. Let
X = lim Ck :
k

3I

Thus, asymptotically,
A = XT X 1

AX = XT

Axi = tii xi ;

i = 1; : : : ; n;

where xi are the columns of X and tii the diagonal elements of T . 48

2.21

a) We need to prove that every eigenvalue of A lies inside at least one Gershgorin disc. Let  be an arbitrary eigenvalue and normalise the corresponding eigenvector x so that jjxjjI = 1. This means that there is at least one component, say xk , that satises jxk j = 1. Now, Ax = x is equivalent to
n j =1

aij xj = xi ;

i = 1; : : : ; n:

For i = k we nd

( akk )xk =

n
j =1 j 6=k

akj xj ;

so that

j akk j = j akk jjxk j = j( akk )xk j

n
j =1 k6=i

jakj jjxj j

n
j =1 j 6=i

jakj j

b) Let B () = A + (1 )D where D is a diagonal matrix with the same diagonal as A. When  = 0 we know where the eigenvalues of B are located in the complex plane since B (0) = D and D is diagonal. In this case, the Gershgorin discs all have zero radius and coincide with the eigenvalues. Note that the eigenvalues depend continuously on . (They are the roots of the characteristic polynomial depending continuously on the coecients of the polynomial which depends continuously on the elements of B which depends continuously on  by construction.) Thus, when  increases from 0 to 1, the eigenvalues move along continuous paths in the complex plane from the eigenvalues of B (0) = D to the eigenvalues of B (1) = A. Also, the midpoints remain the same but the radii of the Gershgorin discs increases. As long as a disc does not overlap with any other disc, the eigenvalue must stay inside by Gershgorins rst theorem. But as soon as a disc overlap another one, the eigenvalue may move outside its original disc and enter the other one. This argument shows that if B (1) = A has p isolated discs, the union must contain precisely p eigenvalues.

since jxk j = 1 and jxj j 1, and the result follows. Applying the theorem to AT gives the column version.

Chapter 3
3.1 a) elliptic b) elliptic c) hyperbolic d) parabolic e) hyperbolic f) hyperbolic 49

3.2 No 3.3 a) b) Follows from integration by parts. c) Since u = g ,


x
0

jju(x)jjI = 0max1 jc + x jcj +

1 0

f (t)dt

0 x 1

max (jcj +

x
0

jf (t)jdt)

jf (x)jdx jcj + 0max1 jf (x)j = jcj + jjf jjI : x

d) Assume u and v to be two dierent solutions. The dierence w = uv satises


wH (x) = g (x); w(0) = d;

0 < x < 1;

where g (x) = d = 0. From boundedness, jjwjjI so u = v . 3.4

jdj + jjgjjI = 0,

a) Yes. The function u(x) = c0 + c1 x is a solution and uniqueness follows from Picards theorem. Boundedness can be shown in several norms. For example,

jjujjI = 0max1 jc0 + c1 xj <x<

0<x<1

max (jc0 j + jc1 jjxj) = jc0 j + jc1 j:

b) No, since there are no solutions. c) No, since there are innitely many solutions. 3.5 The solution is u(x) = c0 + (c1 c0 )x, that is, a linear function going from c0 at the left boundary to c1 at the right boundary. Consequently, it assumes its largest and smallest values at the boundary points. 3.6 Assume u and v to be two dierent solutions. The dierence w = u v satises
wxx + wyy = 0; w = g;

(x; y ) P ;

where g = 0. From the maximum principle follows that


(x;y )

max jw(x; y )j =

(x;y ) @

max

jg(x; y)j = 0;

so u = v . 3.7 We have that jjfk jjI = 1, but jju(; t)jj can be chosen arbitrary large by choosing k large. Hence, there is no constant C such that jjujj C jjf jj for all f . 50

3.8

a) It remains only to show that the solution is bounded. Since for t > 0, jck (t)j jfk j, and

0 1

ju(x; t)j2 dx =

I
k=

jck (t)j2 jjf jj2 .

I
k=

jfk j2 =

jf (x)j2 dx;

or equivalently, jju(; t)jj2 b) If the solution is

f (x) = 2 cos 2mx = e2imx + e2imx


2 2 2 2 u(x; t) = e4 m t (e2imx + e2imx ) = e4 m t 2 cos 2mx:

Thus, jjf jjI = 2 for all m, but for any t < 0,

jju(; t)jjI = 2e4 m t


2 2

can be made arbitrary large by choosing m large enough. 3.9 Since every periodic function can be expanded as a Fourier series, make the Ansatz
u(x; t) =

k=

ck (t)e2ikx :

This function satises the boundary condition and also the initial condi tion if ck (0) = fk , where fk are the Fourier coecients of f ,
f (x) =

I
k=

fk e2ikx :

Inserting the Ansatz into the dierential equation gives

I
k=

cH (t)e2ikx =  k

I
k=

ck (t)2ike2ikx ;

or equivalently, since the exponential functions are linearly independent,


cH (t) = 2ikck (t) k

with unique solution ck (t) = e2ikt fk . It remains only to show that the solution is bounded. Using Parsevals relation,

0 1

ju(x; t)j2 dx =
=

I
k=

jck (t)j2 = je
2ikt 2

I
k=

je2ikt fk j2
I
k=

k=

jfk j2 =

jfk j2 =

jf (x)j2 dx

or equivalently, jju(; t)jj2 = jjf jj2 . 51

3.10 The function


u(x; t) =

I
k=

ck (t)e2ikx

satises the boundary condition and also the initial condition if ck (0) = fk , are the Fourier coecients of f . Inserting the Ansatz into the where fk dierential equation gives

or equivalently, cH (t) = (4 2 k 2 + 2ik )ck (t) with the unique solution k 2 2 ck (t) = e(4 k +2ik)t fk . Using Parsevals relation,

1 0

k=

cH (t)e2ikx = k

k=

ck (t)( 4 2 k 2 + 2ik )e2ikx ;

ju(x; t)j2 dx =
= =

k=

I I I
1 0

jck (t)j2 =
2 2

k=

je(4 k +2ik)t fk j2
2 2

k=

je4 k t j2 je2ikt j2 jfk j2

I
k=

jfk j2

jf (x)j2 dx jjf jj2 .


I
k=

or equivalently, jju(; t)jj2 3.11 The function

u(x; t) =

ck (t)e2ikx

satises the boundary condition and also the initial condition if ck (0) = fk H (0) = gk , where fk and gk are the Fourier coecients of f and g and ck respectively. Inserting the Ansatz into the dierential equation gives

I k=I HH (t) = 42 k2 ck (t) with unique solution or equivalently, c


k= k

cHH (t)e2ikx =
k

ck (t)( 4 2 k 2 )e2ikx ;

g X fk cos 2kt + k sin 2kt; 2k Using Parsevals relation and the given inequality,

0 1

ck (t) =

V ` f0 + g0 t;

k=0 k=0

ju(x; t)j2 dx =

k=

2 =2

I I
1 0

jck (t)j2

k=

(jfk j + jgk j(1 + t))2

k=

(jfk j2 + jgk j2 (1 + t)2 )

0 1 2 2

jf (x)j dx + 2(1 + t)

jg(x)j2 dx

or, jju(; t)jj2 2jjf jj2 + 2(t + 1)2 jjg jj2 . Hence, u is bounded on every nite 2 2 2 interval 0 < t < T . 52

3.12
d u( ; t) dt d jj jj2 = dt 2

0 1

u2 (x; t)dx =
0

2u(x; t)ut (x; t)dx


1
0

1 0

2u(x; t)ux (x; t)dx =  u2 (x; t)

= (u2 (1; t) u2 (0; t)) b) < 0 if u(0; t) = 0 and  < 0, implying jju(; t)jj2 < jju(; 0)jj2 = jjf jj2 3.13
d u( ; t) dt
1 1 d 2 = u (x; t)dx = 2u(x; t)ut (x; t)dx dt 0 0 1 = 2u(x; t)uxx (x; t)dx
0

a) = 0 if u(0; t) = u(1; t), implying jju(; t)jj2 = jju(; 0)jj2 = jjf jj2

jj jj

2 2

= 2 [u(x; t)ux (x; t)]x=0 2


1

ux (x; t)ux (x; t)dx

= 2(u(1; t)ux (1; t) u(0; t)ux (0; t)) 2jjux (; t)jj2 2 0 if  > 0 and a) u(0; t) = u(1; t) and ux (0; t) = ux (1; t) b) u(0; t) = u(1; t) = 0 implying jju(; t)jj2 3.14
d u( ; t) dt d jj jj2 = dt 2

0 1

jju(; 0)jj2 = jjf jj2

u2 (x; t)dx =
0

2u(x; t)ut (x; t)dx

=
0

2u(x; t)(uxx (x; t) + u(x; t))dx

0 1 1

= 2 [u(x; t)ux (x; t)]x=0 2

ux (x; t)ux (x; t)dx

+2
0

u(x; t)u(x; t)dx

= 2jjux (; t)jj2 + 2jju(; t)jj2 2 2 The result follows from the dierential inequality. 3.15

2jju(; t)jj2 2

jj jj2 = : : : = 2jjux (; t)jj2 + 2jju(; t)jj2 2 2 2 implying jju(; t)jj2 et jjf jj2 .
53

d u( ; t) dt

2jju(; t)jj2 2

3.16

3.17

a) The function u(x(t); t) is constant if


d u(x(t); t) = ux xH + ut = 0: dt

b) x(t) = t + c c) x(t) = et c

Comparing with the dierential equation gives xH = .

d) x(t) = t2 =2 + c 3.18 The general solution has the form u(x; t) = g (x + t) implying for example u(0; 1=2) = u(1=2; 0). Thus, the initial and boundary conditions contradict each other. 3.19 a) v = v0

:::

vN

, b = c0 1

f (x1 )

:::

f (xN 1 )

c1
I

, and

f1=h2 f f A=f f d

2=h2
.. .

1=h2 .. . 1=h2

0 b) b = f (x1 ) c0 =h2
v = v1 T

2=h2

..

g g g g: g 2e 1=h

f (x2 )

:::

:::

v N 1

f (xN 2 )

f (xN 1 )

c1 =h2
I

, and 1=h2 2=h2 .. . 1=h2 .. . 1=h2


g g g g: g 2 e 1=h 2=h2

f 1=h2 f f A=f f d

2=h2

0 3.20 v = v1

2=h2
1=h2
T

..

:::
H f 1 1 f f

v N 1

, b = f (x1 ) : : : f (xN 1 ) 0 1 .. . 1 . 2 1
I H g g 1 f g g+ f g 2h f d 1e

, and 0 1 .. . . 0 1 ..
I g g g g: g 1e

A=

f h2 f d

1 2 .. .

..

0 f1 f

1 0 .. .

0 3.21 v = v1

2
0

:::
H f 1 1 f f

v N 1

, b = f (x1 ) : : :
I

T f (xN 1 ) , and H

A=

f h2 f d

1 2 .. .

1 .. . 1

0 where ai = a(xi ).

. 2 1

..

g f g f g f g+f g f 1e d

a1

0
a2

I g g g g g e

.. 0

.
aN 2 aN 1

54

3.22

a) b) (uH ; v H ) = c) d) (uH ; v H ) =

2xvH (x)dx = [2xv(x)]1 1 +

2v (x)dx = (2; v )

2xvH (x)dx +

2xv H (x)dx

0 1

= [2xv (x)]0 1 + = (f; v ) e) No, uHH is not well dened. 3.23

2v (x)dx + [2xv (x)]1 0

2v (x)dx

a) b) c) It follows from the result in b) that the functions i (x) are linearly independent and that
N i=1

ci i (xj ) = cj :

Therefore, since any function uh P Vh is uniquely determined by its values in the points xj , j = 1; : : : ; N , uh can be uniquely expanded
uh (x) =
N i=1

ci i (x)

and ci = uh (xi ). H d) If (uH ; vh ) = (f; vh ) for all vh P Vh , then in particular, (uH ; H ) = h i h (f; i ), i = 1; : : : ; N . Also, if (uH ; H ) = (f; i ), i = 1; : : : ; N , then h i
N i=1

vh (xi )(uH ; H ) = h i

N i=1

vh (xi )(f; i );

for all vh

P Vh

or equivalently

H (uH ; vh ) = (f; vh ); h
e)
V b h=6; b ` 2h=3; (j ; i ) = b h=6 b X 0;

for all vh

P Vh :
V b b `

j=i 1 j=i j =i+1

V b 1=h; b ` 2=h; H ; H ) = (j i b 1=h b X 0;

otherwise

1=2; 0; (H ; i ) = j b 1=2 b X 0;


j=i 1 j=i j =i+1

j=i 1 j=i j =i+1

otherwise

otherwise

55

3.24

a) Let (f; g ) =

f (x)g (x)dx

and
V = v v cont on [0; 1]; v H piecewise cont on [0; 1]; v (0) = v (1) = 0 :

f j

Integration by parts gives (f; v ) = (uHH + u; v ) = (uHH ; v ) + (u; v ) =

1 0

uHH (x)v (x)dx +

0 1

u(x)v (x)dx
0

= [uH (x)v (x)]1 + 0 = (uH ; v H ) + (u; v ):

uH (x)v H (x)dx +

u(x)v (x)dx

for every v P V . The boundary terms vanish since v (0) = v (1) = 0. The variational formulation is: Find u P V such that (uH ; v H ) + (u; v ) = (f; v ) for all v

P V:

b) Let xi = ih, i = 0; 1; :::; N + 1, where h = 1=(N + 1), and let Ii = [xi ; xi+1 ]. Dene the function space
Vh =

f v j v cont; v linear on Ii ; i = 0; :::; N; v(0) = v(1) = 0g


&

and the piecewise linear basis functions j (x), j = 1; :::; N , satisfying


j (xi ) =

1; 0;

i=j i=j

Any function uh

P Vh can be expressed as
uh (x) =
N j =1

uj j (x);

where uj = uh (xj ). Thus, looking for a solution uh nite-dimensional variational problem: Find uh

Vh of the

H P Vh such that (uHh ; vh ) + (uh ; vh ) = (f; vh ) for all vh P Vh H uj (H ; vh ) + j


N j =1

is equivalent to looking for nodal values uj , j = 1; :::; N , such that


N j =1

uj (j ; vh ) = (f; vh )

for all vh

P Vh :

In particular, this equation holds for all basis functions i , so


N j =1

uj (H ; H ) + j i

N j =1

uj (j ; i ) = (f; i )

for i = 1; :::; N:

These N equations dene the nite element method for the N unknowns uj , j = 1; :::; N . 56

c) The FEM dened in b) corresponds to a linear system (A + M )u = b;


: : : uN is the vector of the unknown nodal values. The elements of the stiness matrix A are ai;j = (H ; H ) =
j i

where u = u1

V b b `

2=h; b 1=h b X 0;

1=h;

j=i 1 j=i j =i+1

otherwise

and the elements of the mass matrix M are

V b h=6; b ` 1 2h=3; mi;j = j (x)i (x)dx = (j ; i ) = b h=6 0 b X 0; H

j=i 1 j=i j =i+1

otherwise
I

so

A+M =

f hf d

1f

2 f1 f

1
2 .. .

1
. 1 ..

0 . 2 1 ..

g g g hf g+ f g 6f d 1e

4 f1 f

1 4 .. .

0 1 .. . 1 .. . 4 1

g g g g g 1e

The elements of the load vector b are

bi =
0

f (x)i (x)dx

% hf (xj ): P V:

3.25

a) The variational formulation is Find u P V such that (uH ; v H ) + a(uH ; v ) = (f; v ) for all v Here, (f; g ) =

and V = fv j v cont; v H piecewise cont; v (1) = v (1) = 0g. b) The discrete variational formulation is Find uh Here,
Vh =

f (x)g (x)dx

H P Vh such that (uHh ; vh ) + a(uHh ; vh ) = (f; vh ) for all vh P Vh :

where h = 2=(N + 1). The discrete variational formulation is equivalent to the nite element method
N j =1

f v j v cont; v linear on Ii ; i = 0; :::; N; v(0) = v(1) = 0g; Ii = [xi ; xi+1 ] and xi = ih 1, i = 0; 1; :::; N + 1, where
uj (H ; H ) + uj a(H ; i ) = (f; i ) for i = 1; :::; N; j i j
j =1 N

57

where the piecewise linear basis functions j (x) satisfy


&

j (xi ) =

1; 0;

i=j i=j

c) The nite element method corresponds to a linear system of equations T Au = b, where u = u1 : : : uN , bi = (f; i ) % hf (xj ), and
H

A=

f hf d

1f

2 f1 f

1
2 .. .

1
. 1 ..

0 . 2 1 ..

g g g af g+ f g 2f d 1e

0 f1 f

1 0 .. .

0 1 .. . . 0 1 ..

g g g g: g 1e

d) The elements of A becomes


ai;j = (H ; H ) + (aH ; i ); j i j

where (aH ; i ) has to be approximated by some quadrature rule. j 3.26 a) The variational formulation is Find u P V such that a(uH ; v H ) = (f; v ) for all v Here, (f; g ) =
0

P V:

f (x)g (x)dx

and V = fv j v cont; v H piecewise cont; v (0) = v (1) = 0g. b) The discrete variational formulation is Find uh Here,
Vh =

H P Vh such that a(uHh ; vh ) = (f; vh ) for all vh P Vh :

f v j v cont; v linear on Ii ; i = 0; :::; N; v(0) = v(1) = 0g;

where Ii = [xi ; xi+1 ] and xi = ih, i = 0; 1; :::; N + 1, where h = 1=(N + 1). The discrete variational formulation is equivalent to the nite element method
N j =1

uj a(H ; H ) = (f; i ) j i

for i = 1; :::; N;

where the piecewise linear basis functions j (x) satisfy


&

j (xi ) =

1; 0;

i=j i=j

58

c) The nite element method corresponds to a linear system of equations T Au = b, where u = u1 : : : uN , bi = (f; i ) % hf (xj ), and
H

af A= f hf d

2 f1 f

1
2 .. .

1 1
.. .

0 . 2 1 ..

0 d) The elements of A becomes

g g g g: g 1e

ai;j = (aH ; H ) j i

where (aH ; H ) has to be approximated by some quadrature rule. j i 3.27 a) The nite dierence equations are
v2;1

1=32 1=32 v3;1 2v2;1 + v1;1 v2;2 2v2;1 + v2;0 + 1=32 1=32 v1;3 2v1;2 + v1;1 v2;2 2v1;2 + v0;2 + 1=32 1=32 v3;2 2v2;2 + v1;2 v2;3 2v2;2 + v2;1 + 2 1= 3 1=32 v0;0 = 1; v1;0 = 1; v2;0 = 1; v3;0
v0;1 = 1; v0;3 = 1; v3;1 = 1; v1;3 = 1; v0;2 = 1; v2;3 = 1;

2v1;1 + v0;1 + v1;2 2v1;1 + v1;0 = 0;


= 0; = 0; = 0; = 1;

v3;2 = 1; v3;3 = 1;

where, hopefully, vi;j % u(xi ; yj ). Using a lexicographical ordering of the unknowns, this is a system of equations Av = b, where A is
H

1 f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f f0 f d0 0

0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0

0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0

0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 4 1 0 0 1 0 0 0 0 0 0

0 0 0 0 0 1 4 0 0 0 1 0 0 0 0 0 59

0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0

0 0 0 0 0 1 0 0 0 4 1 0 0 0 0 0

0 0 0 0 0 0 1 0 0 1 4 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0

0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0

0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0

0 0g g 0g g 0g g 0g g 0g g 0g g 0g g; 0g g 0g g 0g g 0g g 0g g 0g g 0e 1

I v0;0 fv1;0 g f g fv2;0 g f g fv3;0 g f g fv0;1 g f g fv1;1 g f g fv2;1 g f g fv g v = f 3 ;1 g ; fv0;2 g f g fv1;2 g f g fv2;2 g f g fv3;2 g f g fv0;3 g f g fv1;3 g f g dv2;3 e v3;3

H I f1g f g f1g f g f1g f g f1g f g f0g f g f0g f g f1g b = f g: f1g f g f0g f g f0g f g f1g f g f1g f g f1g f g d1e

and

b) Eliminating the boundary conditions gives


v2;1

1=32 1=32 v2;2 2v2;1 + 1 1 2v2;1 + v1;1 + = 0; 2 1= 3 1=32 v2;2 2v1;2 + 1 1 2v1;2 + v1;1 + = 0; 1=32 1=32 1 2v2;2 + v2;1 1 2v2;2 + v1;2 + = 0; 2 1= 3 1=32 or on matrix form H 4 f 1 f d 1 0 1 4 0 1 1 0 4 1 0 v1;1 2 1 g fv2;1 g f2g gf g = f g: 1 e dv1;2 e d2e 4 v2;2 2
IH I H I

2v1;1 + 1 + v1;2 2v1;1 + 1 = 0;

3.28 If a lexicographical ordering of the unknowns is used, the coecient matrix A is an (N2 1) (N2 1) block-tridiagonal matrix with (N1 1) (N1 1) blocks. Here,
B fC f f A=f f d
H

C B

..

..

..

C
H

B C

g g g g; g Ce

H 1 f h2 f 2 C=f f d

..

h2 f h2 1 2 f 1 f f f h2 1 B=f f f f d

1 .. ..
h2 1

g g g; g 1e h2 2 I

and;

. .

..

1
h2 1

g g g g g g: 1 g g 2 g h1 2 2e h2 h2 1 2

60

3.29 If a lexicographical ordering of the unknowns is used, the coecient matrix A is an (N2 1) (N2 1) block-tridiagonal matrix with (N1 1) (N1 1) blocks. Here,
B fC f f A=f f d
H

D B

..

..

..

B C

g g g g; g De

H 1 f h2 f 2 C=f f d

1 2h 0

0
2

..

. 1
h2 2

g g g; g 1 e 2h2

H 1 1 + f h2 2h 2 f 2 D=f f d

..

g g g; g 1 1 e + h2 2h 2 2 I

and
H

f B=f f f f f f d

2 1 1 2 h2 h2 + 2h f h2 1 2 1 f 1 1 2 1 2 f h2 h2 f 2 fh 2h
1 1 1

1
h2 1

..

2h
1 .. . 1

. 1
h2 1

.. 2
1

2h

2 h2 h2 1 2 1 1 2h h2 h2 h2
1 1 1 2

g g g g g g g: g 1 1 g g + g h2 2h1 g 1 2 2e

3.30 If a lexicographical ordering of the unknowns is used, the coecient matrix A is an (N2 1) (N2 1) block-tridiagonal matrix with N1 N1 blocks. Here,
B fC f f A=f f d
H

C B

..

..

..

0 and
H f f f f B=f f f f d

B C

g g g g; g Ce

H 1 f h2 f 2 C=f f d

..

g g g; g 1e h2 2 I g g g g g g: g g g e

2 2 h2 + 1 f h2
1

1 .. ..
h2 1

1 ..
h2 1

1
h2 1

. .

. 1
h2 1

1
h2 1

1
h2 1

2 2 h2 h2 + 1
1 2

3.31 Only a) 61

3.32

H f f f f f f d

1=k + 2=h2

1=h2
0

1=h2
.. .. . .

..

1=h2
h

I H n I n v1 +1 v1 g f n+1 g n f v2 g v2 g gf gf . g 1f . g f gf . g = f . g g gf . g kf . g g d n+1 e dv n e N 2 1=h2 e vN 2 n n vN 1 vN+1 1=k + 2=h2 1

IH

3.33 All schemes are consistent since the truncations errors are a) D u(x) f (x) = : : : = uHH [x] + : : : 2   1 HHH 1 f (x h) + f (x) = : : : = h2 u (x) f HH (x) + : : : b) D u(x) 2 6 4 c) D0 u(x) f (x) = : : : = 3.34 3.35  = 1=2 3.36 3.37 Assume u and v to be dierent solutions. The dierence w = u v satises
wi+1;j h2

uHHH [x] + : : :

2wi;j + wi1;j + wi;j+1 2wi;j + wi;j1 = 0;


h2 h2

(i; j ) P ;

wi;j = gi;j ; max jwi;j j = max

(i; j ) P @ ;

where g = 0. From the discrete maximum principle follows that


(i;j )

(i;j ) @

jgi;j j = 0;

so u = v:

3.38

a) wj +N;m = e2i(j +N )m=N = e2im e2ijm=N = 1 wj;m . The relation wj;m+N = wj;m can be shown in the same way. b) From periodicity of the exponential function follows that
vm+N =
N 1 1

vj e2ij (m+N )=N =

N 1 1

j =0

vj e2ijm=N = vj :

j =0

c) Since both vj and wj = e2ijm=N have period N ,


d (vj +k )m =
N 1 1

vj +k e2ijm=N =

1
N

N 1 +k j =k

vj e2i(j k)m=N

j =0

= e2ikm=N = e2ikm=N =e
2ikm=N

1
N

N 1 +k j =k N 1 j =0

vj e2ijm=N

1
N vm

vj e2ijm=N

62

3.39 The transformed equation reads


vm+1 vm n n e2im=N = k

2 + e2im=N vn
h2

or equivalently, vm+1 = gm vm , where n n


gm = 1 + k 2im=N (e h2 k 2 + e2im=N ) = : : : = 1 4 h2 sin2 m : N

Since jgm j 1 for all m and N if k=h2 relation that


N 1 1

1=2 it follows from Parsevals 1 jvm j2 = N 0


N 1 j =0

j =0

jvjn j2 =

N 1 m=0

jvm j2 n

N 1 m=0

jvj0 j2

or equivalently, jjv n jj2

jjf jj2 if k=h2

1=2.

3.40 The transformed equation reads


e2im=N e2im=N n vm+1 vm n n = vm k 2h

or equivalently, vm+1 = gm vm , where n n


gm = 1 + k 2im=N (e 2h k e2im=N ) = : : : = 1 + i h sin 2m : N

Since k=h = 1, there is no constant K such that jgm j that is, the scheme is unstable. 3.41 The transformed equation reads
e2im=N vm+1 vm n n = k h

1 + Kk for all m,

1 vn

or equivalently, vm+1 = gm vm , where gm = 1 + k=h(e2im=N n n


k jgm j2 = j1 + h (e2im=N 1)j2

1). Since

k k 2m k 2m 2 + cos + i sin h h N h N  2  2 k k 2m k 2m + cos + sin = 1 h h N h N     2m k k 1 cos 1 = ::: = 1 2 1 h h N

= j1

for all m and N if k=h


N 1 1

1, it follows from Parsevals relation that


N 1 m=0

j =0

jvjn j2 =

jvm j2 n

N 1 m=0

1 jvm j2 = N 0

N 1 j =0

jvj0 j2

or equivalently, jjv n jj2

jjf jj2 if k=h


63

1.

3.42 The transformed equation reads


vm+1 vm n n e2im=N = k h

1 vn+1
m

or equivalently, vm+1 = gm vm , where gm = 1=(1 + k=h(e2im=N n n Since 1     jg m j2 = : : : = 1 k k 2m 1+2 1+ 1 cos


h h N

1)).

jjvn jj2 jjf jj2 .

for all m, N , k > 0, and h > 0, it follows from Parsevals relation that

3.43 The norm is decreasing if

2=k < a < 0 since

jjvn+1 jj2 = (vn+1 ; vn+1 ) = (vn + akvn ; vn + akvn ) = : : : = = (1 + ak )2 jjv n jj2 :


3.44 a) (v; D+ w) + (D+ v; w) + h(D+ v; D+ w) = =
N 1 j =0

vj

N 1 vj +1 vj wj +1 wj h+ wj h h h

+h

N 1 j =0

j =0

vj +1 vj wj +1 wj h h h

N 1 j =0

vj wj +1

N 1 j =0

vj wj +
N 1 j =0

N 1 j =0

vj +1 wj

N 1 j =0

vj wj
N 1 j =0

N 1 j =0

vj +1 wj +1

vj +1 wj

N 1 j =0

vj wj +1 +

vj wj

= vN wN b) If k=h 1, then

v0 w0

jjvn+1 jj2 = jjvn + kD+ vn jj2

= (v n + kD+ v n ; v n + kD+ v n ) = (v n ; v n ) + k (v n ; D+ v n ) + k (D+ v n ; v n ) + k 2 (D+ v n ; D+ v n ) = (v n ; v n )


n n k(D+ vn ; vn ) kh(D+ vn ; D+ vn ) + k (vN )2 k (v0 )2
2

jjvn jj2 Thus, jjv n+1 jj jjv n jj

n = jjv n jj2 + (k 2 kh)jjD+ v n jj2 k (v0 )

+ k (D+ v n ; v n ) + k 2 (D+ v n ; D+ v n )

:::

jjv0 jj = jjf jj.


64

3.45

a) (v; D+ w) =
N 1 j =1

vj

wj +1

w j 1 h = N 1 v

N vj 1 wj h = h j =2

N 1 j =1

j =1

wj +1 h h

N 1 j =1

vj

wj h h

vj wj h h
N 1 wN

= b) 0 c)

N 1 j =1

vj

vj1 h + v
h

v0 w1
a2 + b2

(a + b)2 = a2 + 2ab + b2
N 1 j =1

A 2ab
2

jjD+ vjj

vj +1 vj h

h=

N 1 j +1

2 vj +1

2vj+1 vj + vj2
h2 v2 h N

N 1 j +1

2v2 + 2v2 j +1 j
h2
2

h=

N 1 4

h2

j =1

2 vj h +

2 2 h v1

4
h2

jjvjj

v h N

d) If k=h2 < 1, then

jjvn+1 jj2 = (vn+1 ; vn+1 )


= (v n ; v n )

= (v n + kD+ D v n ; v n + kD+ D v n )

= (v n ; v n ) + 2k (v n ; D+ D v n ) + k 2 (D+ D v n ; D+ D v n )
n 2k(D vn ; D vn ) + 2kvN 1 vN hvN 1 2kv0 v1 v0 h n n n n

+ k 2 (D+ D v n ; D+ D v n ) 2k n 2 = jjv n jj2 2k jjD v n jj2 vN 1 + k 2 jjD+ D v n jj2


h  n  n vN vN 1 2 2k 2k + 2 D v n 2 + h h h  2   2 2k 2k = vn 2 + 2k D v n 2 + h2 h3
2

jjv jj 2kjjD v jj
n 2 n 2

2k

n vN 1

jj

jj

jj jj jjvn jj2 Thus, jjv n+1 jj jjv n jj


3.46 On one hand,

jj

jj

k 2h

n 2 vN 1

:::

jjv0 jj = jjf jj.

(v n+1 + v n ;v n+1 v n ) =

= (v n+1 ; v n+1 ) (v n+1 ; v n ) + (v n ; v n+1 ) (v n ; v n ) = jjv n+1 jj2 jjv n jj2 : 65

On the other hand, (v n+1 + v n ; v n+1 v n ) =


k

(v n+1 + v n ; D+ D (v n+1 + v n ))
k

= (D (v n+1 + v n ); D (v n+1 + v n )) 2 +
k

2
k

n n n n (v0 +1 + v0 )(v1 +1 + v1 )

n n n n k (vN+11 + vN 1 )(vN+1 + vN ) 2

= Thus, jjv n+1 jj 3.47 jjk=h 3.48 jcjk=h

jjD (vn+1 + vn )jj2

0:

jjvn jj.

1 (This scheme is unstable even if the CFL-condition is satised) 1

3.49 There is no CFL-condition 3.50 Consistency means that


D u(x)

f (x) + f (x h) = y(h): 2
D vi = gi ; v0 = c;

Stability means that there is a constant C such that the solution to

satises jjv jj

C ( g + c ) for every g and c.

jj jj j j

From consistency follows that the error ei = u(xi ) vi satises


D ei =

y(h);

e0 = 0;

and convergence, i.e. jjejj 3 0 as h 3 0, follows from stability. 3.51 3.52

vi2 + 16vi1 30vi + 16vi+1 vi+2 = 0


12h2
vi+1 vi vi+1 + vi + =0 h 2

3.53

66

3.54

a) The chain rule gives


@w (r; ) = ux cos  + uy sin  @r @2w (r; ) = uxx cos2  + uxy cos  sin  + uyx sin  cos  + uyy sin2  @r2 @2w @ (r; ) = ( ux r sin  + uy r cos ) 2 @ @ = ( uxx r sin  + uxy r cos )r sin  ux r cos 

+ (uyx r sin  + uyy r cos )r cos  uy r sin  = r2 (uxx sin2  uxy cos  sin  uyx sin  cos  + uyy cos2 ) r(ux cos  + uy sin )
wrr +

and summing up shows that 1


r2 u +

1
r

wr = uxx + uyy = 0:

The boundary conditions are trivially satised. b) One possibility is


D+ D vi;j +
(r ) (r )

1
2 ri

D+ D vi;j +

( )

( )

1
ri

D0 vi;j = 0; vi;1 = vi;N2 1 ; vi;0 = vi;N2 ; vi;j = g (ri ; j );

(r )

i = 1; : : : ; N1 j = 0; : : : ; N2

1 1 i = 1; : : : ; N1 1; i = 1; : : : ; N1 1;

j = 0; : : : ; N2 i = 0; N1

where ri = 1 + ih1 , h1 = 1=N1 and j = jh2 , h2 = 2=N2 . 3.55 3.56 3.57 3.58 a) Assume the opposite, that is, that the maximum is assumed at some interior point (i0 ; j0 ) and that vi0 ;j0  M is larger than all values on the boundary. From the dierence equation follows that
vi;j =

1 (vi+1;j + vi1;j + vi;j +1 + vi;j 1 ); 4

that is, vi;j is the mean value of its neighbours. Therefore, vi0 ;j0 is smaller then or equal to its largest neighbour. On the other hand, we have assumed vi0 ;j0 to be larger then or equal to all its neighbours. The conclusion is that all four neighbours must equal M . By repeating the argument, one nds that v = M everywhere, including boundary points, which is a contradiction. b) Replace v by

v and repeat the argument.


67

c) Follows from a) and b).

3.59

a) Assume the opposite, that is, that the maximum is assumed at some inner point (x0 ; y0 ) and that u(x0 ; y0 ) is larger then the values on the boundary. Let
v (x; y ) = u(x; y ) + " (x

x0 )2 + (y y0 )2

for " > 0 so small that also v assumes its maximum at (x0 ; y0 ) and is larger then the values on the boundary. At a maximum, vxx 0 and vyy 0, so vxx + vyy 0. On the other hand, vxx + vyy = uxx + uyy + 4" = 4" > 0, which is a contradiction. b) Replace u by 3.60 Since
N 1 m=0

u and repeat the argument.


V b N 1 b b ` 1 = N;

c) Follows from a) and b).

e2ij=N

k

j=0

b b b X

m=0

1 e2ij=N

1 e2ij

= 0;

j=0

it follows that
N 1 1

vm e2ijm=N =

N 1 N 1 1

m=0

vn e2inm=N e2ijm=N

= =

1
N

m=0 n=0 N 1 N 1

vn

e2i(j n)m=N

n=0

m=0

vn N N = vj :

Chapter 4
4.1 p(x) = 1=2 4.2 p(x) = 7=6 + x=2 4.3 p(x) = 21=20 9x=20 + x2 =4 4.4 g = 1

4.5 g = 1=2 4.6 g = 1 1


r

1= 2 0

4.7 g = 7=6 4.8

5= 3

13=6

1 3 3 5 1 x; a) p ; x2 2 2 2 3 2 p p 1 3 5 b) ; (x 2); (3x2 12x + 8) 2 4 16 r 1 2    x2 x + 2 c) p ; x ; p 2 8 2 2 2 10  2 68

r 

4.10 g (x) = 3x=5

4.9 g (x) = 8=5 + 18x=5

4.11 g (x) = 9x=14 4.12 g (x) = 12 4.13 a)


2 3

10 60 2 12 x + 60 2 12 x2
4 5

b) g (x) = 1=2 + 3x=4 b) g (x) = 1=2 + x=  b) g (x) = (2 + 4x x2 )=8 a) a)

4.14

4.15

4.16 x = 1=2 4.17 x1 = 1, x2 = 1=3 4.18 x1 = 4=3, x2 = 3=2 4.19 a) 1 1 e1 = p d 0 e ; 2 1 b)


R=
 H I

1 1 e2 = p d1e 3 1

H I

c) x = 1= 2 d) x = 7=6 4.20 a) Q = e1

1= 3 2= 3
T

2 0

p 2 p
3

e2

e3 , where H I

1 1 f0g e1 = p f g ; 3 d1e 1 Also,

1 1 f2g e2 = p f g ; 6 d1e 0
H

H I

e3 =

p1

f 6 g f g 114 d 7e

3 R=d 0 0 b) x = 25=19 4.21 a) b) c)

0 p

6 0

2= 3 p 0p e 38= 3

3= 2

1=38

8=5 + 18x=5 2 + 8ex =(e2 1) 27=20 + 77x=20


69

4.22

b) e1=4 x c) 1=2

a) 3(sin 1 cos 1)x

4.23 The upper triangular part. 4.24 Let fei gn be an ON-basis in M . Any g i=1
g=
n i=1

P M can be written

d i ei

for some coecients di . Now,


F 2 (g ) = f

jj gjj2
n i=1

= jjf = (f

d i ei

jj2
n i=1

n i=1

d i ei ; f
n i=1 n i=1

d i ei )
n n i=1 j =1 n i=1
2

= (f; f ) 2 = (f; f ) 2 = jjf jj2 +

di (f; ei ) + di (f; ei ) +

di dj (ei ; ej )

d2 i

n i=1

(di (f; ei ))

n i=1

(f; ei )2

and it follows that minimum is assumed if and only if di = (f; ei ). 4.25 The best approximation of f in M is
g = (f; ei )ei
i=1 n

where fei gn is an ON-basis in M . The error f g is orthogonal to M i=1 if it is orthogonal to every element in M . However, it is sucient to show that the error is orthogonal to ek , k = 1; : : : ; en : (ek ; f

g ) = ( ek ; f

(f; ei )ei ) (f; ei )(ek ; ei )

= (ek ; f )

i=1 n i=1

= (ek ; f ) (f; ek ) = 0:

70

4.26 The error uh u is orthogonal to VN , that is, (uh u; v ) = 0 for all v P VN , or equivalently, (uh ; v ) = (u; v ) for all v P VN . From the denition of the scalar product, integration by parts, v (0) = v (1) = 0, and uHH = f follows

0 1

uH v H dx =
h

uH v H dx = [uH v ]1
0

uHH v dx =

f v dx

V v P VN :

Expanding uh in the basis functions j , and realising that it is sucient to require the equality to hold for all j gives the result.

71

Potrebbero piacerti anche