
ENM321

DEPARTMENT OF ENGINEERING MATHEMATICS

Aims: to extend the theory of ordinary differential equations (ODEs) beyond the introductory ideas studied in ENM105/107.

Objectives: to understand the theory of second-order (linear) ODEs and to study some analytic methods for solving them.

to consider some special second-order ODEs important in engineering applications, and their solutions.

COURSE CONTENTS

CHAPTER 1 Introduction: a review of elementary ideas and methods.

CHAPTER 2 Linear differential equations: fundamental ideas and results.

CHAPTER 3 Standard techniques for solving second-order equations.

CHAPTER 4 Series solutions and special functions.

CHAPTER 5 Green's functions.

ASSESSMENT

• A coursework assignment will be issued and will contribute up to 10% of the final mark.

• A 1½ hour examination paper will be taken at the end of the course contributing up to 90% of the final mark. The Engineering Mathematics General Formula Sheet will be provided in the examination.


COURSE NOTES AND PROBLEM SHEETS

1. The course presentation will be based on this set of printed lecture notes. These have been prepared with two aims in mind. Firstly, to reduce the preoccupation with taking lecture notes, thereby allowing you to concentrate on the mathematical ideas and explanations, and the many examples that are used to illustrate the theory. Secondly, they highlight the important results and methods that should be known and so provide an aid to your revision.

2. The lecture course has been carefully designed as an integral part of a learning experience that should additionally include self-study, problem-solving and tutorial guidance. The problem exercises that accompany each chapter are intended to test and reinforce your understanding of the course material, and promote your problem solving skills. You are urged to attempt appropriate questions each week, before attending scheduled tutorials: they can then be used to identify and remedy any difficulties that you may have.

TEXTBOOKS

There are many excellent textbooks on Engineering Mathematics which cover most (if not all) of the subject matter of this course (and much more besides). Alternatively, there are many specialist textbooks on Ordinary Differential Equations. Most of the textbooks listed below can be found in the Robinson Library or can be bought as reasonably priced student editions. It is therefore not easy to recommend a single text since the choice will be very much one of personal taste (and may depend on whether you are taking any other mathematics options). However, that said, it is good practice to have at hand a 'good' textbook (and to consult others) both as a reference and to provide a different perspective on the course material (as well as further exercises!).

General Engineering Mathematics textbooks:

Advanced Engineering Mathematics, P.V. O'Neil (1995, PWS, 4th edition)

Advanced Engineering Mathematics, D. Zill & M. Cullen (1997, Jones and Bartlett, 2nd edition)

Advanced Engineering Mathematics, E. Kreyszig (1993, John Wiley, 7th edition)

Specialist textbooks on ordinary differential equations:

Differential Equations (with Boundary-Value Problems), D. Zill & M. Cullen (1997, Brooks/Cole, 4th edition)

Elementary Differential Equations, W. Boyce & R. Di Prima (1997, John Wiley, 6th edition)

Differential Equations (A Modelling Approach), F. Giordano & M. Weir (1991, Addison-Wesley)

Textbook covering special topics and containing useful reference material:

Advanced Mathematical Methods for Engineering and Science Students. G. Stephenson & P. Radmore (1990, Cambridge University Press).


INTRODUCTION: A REVIEW OF ELEMENTARY IDEAS AND METHODS

We briefly review some basic ideas and results on ordinary differential equations that were covered at Stage 1 in modules ENM105/107. You are urged to read your course notes (or a good textbook) for a fuller explanation of the theory and methods covered here.

1. An ordinary differential equation (ODE) is an equation for an (unknown) function y(x) which involves one or more of its derivatives

y' = dy/dx,   y'' = d^2y/dx^2,   y''' = d^3y/dx^3,   y^(4) = d^4y/dx^4,   ...

An ODE contains only ordinary derivatives since y(x) is a function of a single independent variable x (note that y itself need not appear in the ODE).

An ODE is linear if y and all its derivatives are of the first degree (i.e. there are no terms like y^2, yy', sin y etc.) - otherwise it is nonlinear.

The order of an ODE is the order of the highest derivative that appears in the equation.

Example 1.1

dy/dx + x^2 y = e^(-x)        first-order linear              (1.1)

y' - 3y = 0                   first-order linear              (1.2)

y'' - y' - 2y = 0             second-order linear             (1.3)

xy' - y^2 = x^2               first-order nonlinear           (1.4)

(y''')^2 - yy' = 4x           third-order nonlinear           (1.5)

y^(4) + xy'' = e^y            fourth-order nonlinear (why?)   (1.6)

This course is mainly about second-order linear ODEs: they are very important in the physical sciences and engineering and have 'nice' properties.
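As a quick cross-check of the classifications in Example 1.1, sympy's classify_ode reports the solution strategies it recognises for an equation. (sympy and this call are an aside of these notes' margin, not part of the original course material.)

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Eq. (1.2): y' - 3y = 0 is first-order linear
hints_12 = sp.classify_ode(sp.Eq(y(x).diff(x) - 3*y(x), 0), y(x))

# Eq. (1.3): y'' - y' - 2y = 0 is second-order linear, constant coefficients
hints_13 = sp.classify_ode(sp.Eq(y(x).diff(x, 2) - y(x).diff(x) - 2*y(x), 0), y(x))

print('1st_linear' in hints_12)                              # True
print('nth_linear_constant_coeff_homogeneous' in hints_13)   # True
```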

Remarks

(1) If an equation involves partial derivatives of a function of two (or more) independent variables, e.g. u(x, t), it is called a partial differential equation (PDE).


The theory of PDEs is a quite distinct subject (although some important PDEs can be reduced to ODEs; see ENM322).

(2) When there are two (or more) unknown functions x(t), y(t) of a single independent variable t, we obtain a system of ODEs e.g.

ẍ + x - y = 0,   ẏ + 2x + y = e^t

2. SOLUTION OF AN ODE


Consider the function y = cx^2 (c is any constant). Then y' = 2cx and so

xy' - 2y = 0.

(1.7)

By differentiating and eliminating the arbitrary constant c, we have obtained a first-order ODE (1.7) for y whose solution is y = cx^2.

Solving an ODE is the reverse problem: given an ODE, find its solution.

DEFINITION 1 A solution of an ODE in the unknown y is any function y = y(x) which, when substituted into the equation, reduces it to an identity (i.e. LHS ≡ RHS).


Example 1.2 Show that y = Ae^(2x) + Be^(-x) is a solution of Eq. (1.3) for any constants A, B.

Note: y_1 = e^(2x) and y_2 = e^(-x) are two independent solutions of Eq. (1.3). Then the linear superposition y = Ay_1 + By_2 is also a solution. This is an important property of linear equations (see Chapter 2).
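The claim in Example 1.2 can be verified by direct substitution; the following sympy sketch (sympy itself is an assumption, not part of the notes) shows the left-hand side collapsing to zero identically:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.exp(2*x) + B*sp.exp(-x)

# Substitute into y'' - y' - 2y and simplify: the residual vanishes for all A, B
residual = sp.simplify(y.diff(x, 2) - y.diff(x) - 2*y)
print(residual)   # 0
```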

3. GENERAL SOLUTION

The solutions of Eqs. (1.3) and (1.7) involve arbitrary constants:

y'' - y' - 2y = 0:   y = Ae^(2x) + Be^(-x)    A, B arbitrary consts.

xy' - 2y = 0:        y = cx^2                 c arbitrary const.

These are called general solutions (GS) since the constants are not specified. Each solution represents an infinite family of possible solutions (or solution curves in the xy-plane).


Remarks

(1) In general, the number of arbitrary constants in the GS equals the order of the equation (why?).

(2) To obtain a particular solution we need sufficient further information to determine the arbitrary constants. These auxiliary conditions may be

initial conditions (i.c.)    specified at a single (initial) point
boundary conditions (b.c.)   specified at two or more points.

4. SOLVING FIRST-ORDER EQUATIONS

Separable equation:

dy/dx = f(x)g(y)   ⇒   ∫ dy/g(y) = ∫ f(x) dx.

Example 1.3 Find the general solution of the equation (1 + x )y' - y = 1 and the particular solution satisfying the i.c. y(1) = O.
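A sketch of Example 1.3 in sympy (an assumption, not the official worked solution): dsolve produces the general solution and the ics keyword then pins down the arbitrary constant.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq((1 + x)*y(x).diff(x) - y(x), 1)

gs = sp.dsolve(ode, y(x))                  # general solution, one arbitrary constant
ps = sp.dsolve(ode, y(x), ics={y(1): 0})   # particular solution through (1, 0)

# The particular solution is equivalent to y = (x - 1)/2
print(sp.simplify(ps.rhs - (x - 1)/2))     # 0
```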

Linear Equation: has the standard form (SF)

dy/dx + p(x)y = f(x).

(1.8)

This can be solved by multiplying Eq. (1.8) by an integrating factor (IF) u = e^(∫p dx) to obtain

d/dx (uy) = uf   ⇒   uy = ∫ uf dx + C     (C arbitrary const.)

Example 1.4 Solve the initial value (i.v.) problem xy' + 2y = x^2, y(1) = 0.
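Example 1.4 can be worked mechanically through the IF recipe; the sketch below (sympy assumed) mirrors the steps: standard form, integrating factor, integrate, apply the i.c.

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # x > 0 keeps the logarithms simple
C = sp.symbols('C')

# SF of x y' + 2y = x^2 is y' + (2/x) y = x
p, f = 2/x, x

u = sp.exp(sp.integrate(p, x))            # IF: u = e^(2 ln x) = x^2
y = (sp.integrate(u*f, x) + C)/u          # (uy)' = uf  =>  y = (∫uf dx + C)/u

Cval = sp.solve(sp.Eq(y.subs(x, 1), 0), C)[0]   # i.c. y(1) = 0 gives C = -1/4
y_particular = y.subs(C, Cval)                  # equivalent to x^2/4 - 1/(4x^2)
```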

5. SECOND-ORDER LINEAR EQUATIONS

We solve the homogeneous (RHS = 0) equation with constant coefficients

ay'' + by' + cy = 0,

a, b, c consts.

(1.9)


Try a solution y = e^(mx), m = const., and solve the auxiliary (or characteristic) equation am^2 + bm + c = 0 for roots m_1, m_2:

(i) real distinct roots m_1 ≠ m_2:  y_1 = e^(m_1 x), y_2 = e^(m_2 x)   ⇒   y = Ae^(m_1 x) + Be^(m_2 x)

(ii) repeated root m:  y_1 = e^(mx), y_2 = xe^(mx)   ⇒   y = (A + Bx)e^(mx)

(iii) complex roots m = a ± ib:  y_1 = e^((a+ib)x), y_2 = e^((a-ib)x)   ⇒   y = e^(ax)(A cos bx + B sin bx)

Example 1.5 Solve the boundary value (b.v.) problem

ẍ + 4x = 0,    x(0) = 2,   x(~) = 2.

(This equation describes a freely vibrating system with no damping: SHM.)

Note: in principle, this method can be used to solve a constant coefficient linear homogeneous equation of any order.
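The recipe can be mirrored in sympy (assumed available) for Example 1.5's equation: solve the auxiliary equation, then let dsolve confirm the resulting general solution.

```python
import sympy as sp

t, m = sp.symbols('t m')
x = sp.Function('x')

# Auxiliary equation for x'' + 4x = 0: m^2 + 4 = 0, complex roots m = ±2i
roots = sp.solve(sp.Eq(m**2 + 4, 0), m)

# dsolve returns the corresponding real general solution A cos 2t + B sin 2t
ode = sp.Eq(x(t).diff(t, 2) + 4*x(t), 0)
gs = sp.dsolve(ode, x(t))
print(sp.checkodesol(ode, gs)[0])   # True: the GS satisfies the ODE
```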

The Euler-Cauchy equation: is the second-order linear equation

ax^2 y'' + bxy' + cy = 0

(1.10)

with a, b, c const. Look for a solution y = x^p, p = const.

Example 1.6 Find the general solution of

x^2 y'' - xy' - 3y = 0.

Find the solution satisfying the b.c.'s y(0) finite, y(1) = 2.
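A sketch of Example 1.6 in sympy (sympy assumed; the equation as reconstructed here reads x^2 y'' - xy' - 3y = 0): substituting y = x^p gives the indicial equation, and the boundary conditions then select y = 2x^3.

```python
import sympy as sp

x, p = sp.symbols('x p')

# y = x**p gives p(p-1) - p - 3 = 0, i.e. p = 3 or p = -1,
# so the general solution is y = A x**3 + B/x
roots = sp.solve(sp.Eq(p*(p - 1) - p - 3, 0), p)

# 'y(0) finite' forces B = 0; y(1) = 2 then gives A = 2
y = 2*x**3
residual = sp.simplify(x**2*y.diff(x, 2) - x*y.diff(x) - 3*y)
print(residual, y.subs(x, 1))   # 0 2
```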


LINEAR DIFFERENTIAL EQUATIONS: FUNDAMENTAL IDEAS AND RESULTS

Linear ODEs are of fundamental importance to our understanding of physical systems in science and engineering (mathematical modelling). You will have met examples in other courses:

dN/dt + λN = 0                        (population growth/decay)

mẍ + cẋ + kx = f(t)                   (mechanical vibrations)

L d^2Q/dt^2 + R dQ/dt + Q/C = E(t)    (electric circuit)

ẍ + ω^2 x = 0                         (simple harmonic motion SHM)

This course is mainly concerned with second-order linear equations. We shall study their properties and methods for solving them, and meet some equations of special importance.

1. PRELIMINARY IDEAS AND DEFINITIONS

The general second-order ODE for y(x) is a relation

F(x, y, y', y'') = 0,    x ∈ I

(2.1)

where F is a given function, and I is some open interval of the real line.

The equation is linear if F is a linear function of y, y', y'': then Equation (2.1) can be written

a_0(x)y'' + a_1(x)y' + a_2(x)y = f(x)

(2.2)

where a_i(x), i = 0, 1, 2 and f(x) are known functions defined on I. Equation (2.2) is homogeneous if the forcing term f(x) ≡ 0 on I; otherwise it is inhomogeneous. An equation that is not linear is said to be nonlinear.

Example 2.1 Some well-known second-order linear equations are

x^2 y'' + xy' + (x^2 - ν^2)y = 0     (ν = const.)    Bessel's equation      (2.3)

(1 - x^2)y'' - 2xy' + ν(ν + 1)y = 0  (ν = const.)    Legendre's equation    (2.4)

xy'' + (1 - x)y' + λy = 0            (λ = const.)    Laguerre's equation    (2.5)


Remarks

(1) In general, the theory for second-order equations extends to linear ODEs of any order.

Thus, equation (2.2) is a special case (n = 2) of the general nth-order linear equation

a_0(x)y^(n) + a_1(x)y^(n-1) + ... + a_(n-1)(x)y' + a_n(x)y = f(x).

(2) We assume that the coefficient functions a_i(x) and f(x) are continuous on I and a_0(x) ≠ 0 for all x ∈ I. Then, dividing by a_0, we can write equation (2.2) in standard form (SF)

y'' + p(x)y' + q(x)y = r(x)

(2.6)

with p, q, r continuous on I.

(3) The interval of definition I = (a, b) may be infinite e.g. (a, ∞), (-∞, b), (-∞, ∞). We will usually take I to be understood except when discussing specific examples.

2. LINEARITY

The linearity of equation (2.2) is crucial to developing the theory of linear equations and for finding their solutions. We define the second-order linear (differential) operator L by

Ly ≡ a_0(x)y'' + a_1(x)y' + a_2(x)y.

(2.7)

Then equation (2.2) can be written in the compact form

Ly = f(x)

(2.8)

and the reduced homogeneous equation is Ly = 0.

RESULT 1 (Linear operator) For any functions f(x), g(x) and constants α, β

L(αf + βg) = αLf + βLg.

(2.9)

Proof (Exercise: Problems 2, Qu. 1). ■


We have the following fundamental result for homogeneous linear equations:

RESULT 2 (Linear Superposition) If y_1, y_2 are solutions of the homogeneous linear equation Ly = 0, then the linear combination

y = c_1 y_1 + c_2 y_2

(2.10)

is also a solution, where c_1, c_2 are arbitrary constants.

Proof Ly_1 = 0 and Ly_2 = 0. Using Result 1,

Ly = L(c_1 y_1 + c_2 y_2) = c_1 Ly_1 + c_2 Ly_2 = 0. ■

Notes: (1) The result shows that a constant multiple y = Ay_1 of a solution y_1 is also a solution of the homogeneous linear equation, and that y = 0 is a trivial solution (A = 0).

(2) The result extends to any (finite) number of solutions: if y_1, ..., y_k are solutions of Ly = 0 then so is

y = c_1 y_1 + c_2 y_2 + ... + c_k y_k = Σ_(i=1)^k c_i y_i.

(2.11)

We need to distinguish between solutions of the full equation Ly = j(x) and the associated homogeneous equation Ly = 0 .

DEFINITION 1 (a) Any solution y_p of the inhomogeneous equation Ly = f(x) which does not contain any arbitrary constants is called a particular solution or particular integral (PI) of the equation.

(b) A solution y_c of the reduced equation Ly = 0 is called a complementary solution.

RESULT 3 Let y_p be a given PI of equation (2.8). Then any other solution y has the form

y = y_p + y_c,

(2.12)

where y_c is some complementary solution.

Proof Let u = y - y_p, where Ly_p = f(x) and Ly = f(x). Then

Lu = L(y - y_p) = Ly - Ly_p = f(x) - f(x) = 0.

Hence u(x) satisfies the homogeneous equation, so u = y_c and y = u + y_p. ■


WHAT DOES THIS RESULT TELL US?

Roughly speaking, Result 3 says that every solution of equation (2.2) can be built from a single particular integral y_p by adding to it some solution y_c of the reduced homogeneous equation:

Suppose we can find the general solution of the homogeneous equation i.e. one that captures every complementary solution. It follows directly that we can write down every solution of the inhomogeneous equation simply by adding any PI i.e. we have obtained the general solution of equation (2.2)! We therefore have two distinct problems to solve:

(1) find the general solution of the homogeneous equation Ly = 0;

(2) find any PI of the full equation Ly = f(x).

The rest of this course is bound up with solving these two problems. We shall start by considering the first problem (1).

3. LINEARLY INDEPENDENT SOLUTIONS AND A FUNDAMENTAL SET

All our experience of solving the second-order homogeneous equation

Ly=O

(2.13)

shows that its general solution must contain two arbitrary constants. Let us take this for granted.

Let y_1, y_2 be solutions of (2.13): then by linear superposition

y = c_1 y_1 + c_2 y_2

(2.14)

is also a solution for arbitrary constants c_1, c_2. But y is not necessarily the general solution (even though it contains two arbitrary constants)! For suppose one of the solutions, y_1 say, is a constant multiple of the other i.e. y_1 = ky_2, k = const. Substituting into (2.14) we get

y = (c_1 k + c_2)y_2 = Cy_2,

which contains only one arbitrary constant C, so it cannot be the general solution. However, if y_1, y_2 are not proportional (i.e. do not depend on one another), then (2.14) is the general solution of Equation (2.13). In this case, we say that {y_1, y_2} are linearly independent (l.i.) and form a fundamental set of solutions of equation (2.13).

We shall define linear independence precisely, but note that y_1, y_2 are l.i. if

y_1/y_2 ≢ const.   or   y_2/y_1 ≢ const.   on I.

Example 2.2 Show that e^(2x), e^(-x) are l.i. solutions of the equation y'' - y' - 2y = 0 on (-∞, ∞). Hence, write down its general solution.


DEFINITION 2 (Linear independence) The functions f(x) and g(x) are said to be linearly independent (l.i.) on the open interval I if the relation

c_1 f(x) + c_2 g(x) = 0   for all x ∈ I   (c_1, c_2 constants)

implies that both c_1 = 0 and c_2 = 0; otherwise f and g are linearly dependent.

Remarks


(1) This is precisely what we want - if y_1(x), y_2(x) are l.i. they are not proportional to one another. For suppose y_1 = ky_2, say; then y_1 - ky_2 = 0, i.e.

c_1 y_1 + c_2 y_2 = 0  with  c_1 = 1, c_2 = -k. But c_1 = 0 by l.i., a contradiction!

(2) Conversely, functions f(x) and g(x) are linearly dependent on I if we can find constants c_1, c_2, not both zero, such that c_1 f + c_2 g = 0 for all x ∈ I (i.e. they are proportional).

We can now make the following definition:

DEFINITION 3 (Fundamental set) Any pair {y_1, y_2} of linearly independent solutions of the homogeneous second-order linear equation (2.13) is called a fundamental set of solutions (on the interval I).

Example 2.3 Show that {cos 2t, sin 2t} is a fundamental set for the differential equation ẍ + 4x = 0 on (a) I = (-π, π) (b) on any interval (a, b). What is the general solution of the equation?

We can now state the following fundamental theorem for homogeneous linear equations:

RESULT 4 (Linear Superposition Principle) If y_1(x), y_2(x) are two linearly independent solutions (i.e. a fundamental set) of the second-order linear ODE

Ly ≡ a_0(x)y'' + a_1(x)y' + a_2(x)y = 0,    x ∈ I

(2.15)

then the general solution on I is the linear combination

y_c = c_1 y_1(x) + c_2 y_2(x),

(2.16)

where c_1, c_2 are arbitrary constants. We call y_c the complementary function (CF) of equation (2.15).


We can combine the linear superposition principle with Result 3 to obtain every solution of the inhomogeneous linear equation:

RESULT 5 (General Solution) Let y_p be a particular integral of the inhomogeneous second-order linear equation Ly = f(x) on I. Let {y_1, y_2} be a fundamental set of solutions of the associated homogeneous equation Ly = 0. Then the general solution of the inhomogeneous equation is

y = c_1 y_1 + c_2 y_2 + y_p,

(2.17)

where c_1, c_2 are arbitrary constants.

Example 2.4 (Method of undetermined coefficients) Find the general solution of the differential equation

ẍ + 4x = 3 sin t.
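A sketch of the undetermined-coefficients calculation for Example 2.4 (sympy assumed): substitute the trial x_p = a sin t + b cos t and match the coefficients of sin t and cos t.

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
xp = a*sp.sin(t) + b*sp.cos(t)   # trial PI (no resonance: forcing freq 1 != 2)

residual = sp.expand(xp.diff(t, 2) + 4*xp - 3*sp.sin(t))
coeffs = sp.solve([residual.coeff(sp.sin(t)), residual.coeff(sp.cos(t))], [a, b])

# coeffs gives a = 1, b = 0, so x_p = sin t and the general solution is
# x = c1 cos 2t + c2 sin 2t + sin t (CF from Example 2.3 plus this PI)
```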

Comment

The results on second-order equations carry over to linear ODEs of order n in the obvious way. Thus, if {y_1, y_2, ..., y_n} is a fundamental set of linearly independent solutions of the homogeneous equation

Ly ≡ a_0(x)y^(n) + a_1(x)y^(n-1) + ... + a_(n-1)(x)y' + a_n(x)y = 0

(2.18)

on an open interval I, then its general solution (CF) is

y_c = c_1 y_1 + c_2 y_2 + ... + c_n y_n = Σ_(k=1)^n c_k y_k,

(2.19)

where c_k, k = 1, 2, ..., n, are arbitrary constants. If y_p is any particular integral (PI) of the inhomogeneous equation Ly = f(x) on I, then the general solution is

y = y_c + y_p = c_1 y_1 + c_2 y_2 + ... + c_n y_n + y_p.

(2.20)

Of course, we must use Definition 2 to establish the l.i. of the n solutions y_1, ..., y_n.

4. THE WRONSKIAN AND LINEAR INDEPENDENCE

The linear independence of solutions is crucial since we can use them to build all the solutions of a linear ODE. But it is clear from Example 2.3(b) that establishing the l.i. of even two functions using Definition 2 can be awkward. Fortunately, there is a convenient test for l.i. that is often easy to apply. The test centres on the Wronskian function (after Józef Wronski, 1776-1853).


DEFINITION 4 Let f_1(x), f_2(x) be differentiable functions on an open interval I. Then the Wronskian W(x) = W(f_1, f_2) of f_1 and f_2 is the determinant

W(f_1, f_2) = | f_1(x)   f_2(x)  | = f_1(x)f_2'(x) - f_1'(x)f_2(x).     (2.21)
              | f_1'(x)  f_2'(x) |

The Wronskian function possesses several remarkable properties which are of importance in the theory of linear ODEs.

RESULT 6 (Wronskian Test) If f_1(x), f_2(x) are differentiable functions on the open interval I and W(x_0) ≠ 0 for some x_0 ∈ I, then f_1 and f_2 are linearly independent on I.

Remarks

(1) To show that two functions are l.i. we need only show that their Wronskian is non-zero at just one point of I. (Note the importance of specifying the interval I.)

(2) The corollary of Result 6 is that if f_1, f_2 are linearly dependent on I then their Wronskian is identically zero on I i.e. W(x) = 0 for all x ∈ I.

(3) The Wronskian test should be used with caution. Result 6 gives a sufficient condition for l.i. but not a necessary one. Two functions may be l.i. and yet W(x) ≡ 0 on I. Conversely, corollary (2) gives a necessary condition for linear dependence, but not a sufficient one: W(x) ≡ 0 on I does not always imply linear dependence, so the test is inconclusive. (See Problems 2: Qu. 8 (iii) and Qu. 12**.)

(4) However, the Wronskian test (and its corollary (2)) is a necessary and sufficient condition when the two functions are also solutions of a homogeneous second-order linear equation y'' + p(x)y' + q(x)y = 0, with p, q continuous on I.

Example 2.5 Use the Wronskian test to confirm that the solutions of the ODEs in Examples 2.2 and 2.3 are linearly independent.

Example 2.6 Show that the functions sin x, sin 2x are linearly independent on (i) (0, π) (ii) any open interval (a, b).
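sympy ships a wronskian helper, so the Wronskian test for Examples 2.5 and 2.6 can be sketched directly (sympy assumed):

```python
import sympy as sp

x = sp.symbols('x')

# Examples 2.2/2.5: W(e^{2x}, e^{-x}) = -3 e^{x}, nonzero everywhere
W1 = sp.simplify(sp.wronskian([sp.exp(2*x), sp.exp(-x)], x))

# Example 2.6: W(sin x, sin 2x) simplifies to -2 sin^3 x, which is nonzero
# on (0, pi); e.g. at x = pi/2 it equals -2, so the pair is l.i. on any
# interval containing pi/2
W2 = sp.simplify(sp.wronskian([sp.sin(x), sp.sin(2*x)], x))
print(W2.subs(x, sp.pi/2))   # -2
```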

The Wronskian test applies strictly to functions that are differentiable on the entire open interval I. For other functions one should really go back to Definition 2:

Exercise Show that x, |x| are l.i. on (-1, 1).

However, for functions that are also solutions of the homogeneous equation (2.6), Abel (1802-1829) proved the following remarkable result:


RESULT 7 (Abel's Theorem) Suppose y_1, y_2 are solutions of the homogeneous equation

y'' + p(x)y' + q(x)y = 0,    x ∈ I

(2.22)

with p, q continuous on the open interval I. Then the Wronskian W(y_1, y_2) is

W(x) = Ae^(-∫p dx),    A = const.

(2.23)

and so either (a) W(x) ≠ 0 for all x ∈ I   (A ≠ 0)
or (b) W(x) = 0 for all x ∈ I   (A = 0).

This result shows that the Wronskian of any two solutions of a homogeneous second-order linear equation is either identically zero or never zero on I!
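A concrete check of Abel's formula (sympy assumed): for y'' - y' - 2y = 0 we have p(x) = -1, so (2.23) predicts W = A e^x, and the fundamental set {e^(2x), e^(-x)} gives the constant A = -3.

```python
import sympy as sp

x = sp.symbols('x')

W = sp.wronskian([sp.exp(2*x), sp.exp(-x)], x)   # computed Wronskian
abel = sp.exp(-sp.integrate(-1, x))              # Abel: e^{-∫p dx} with p = -1

A = sp.simplify(W/abel)
print(A)   # -3: a nonzero constant, as Result 7 requires
```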

Comments on the Wronskian

(1) The explicit formula (2.23) for W(x) applies to the equation in SF and is independent of q(x). The value of the constant A will depend on y_1, y_2.

(2) We will find the Wronskian helpful to us when we come to construct explicit solutions in the next chapter.

(3) All the results for the Wronskian extend to any (finite) number of functions and to the nth-order homogeneous linear equation (2.18). Thus, for three differentiable functions y_1, y_2, y_3 the Wronskian is the 3 x 3 determinant

W(y_1, y_2, y_3) = | y_1(x)    y_2(x)    y_3(x)   |
                   | y_1'(x)   y_2'(x)   y_3'(x)  |
                   | y_1''(x)  y_2''(x)  y_3''(x) |

(See Problems 2, Qu. 11*.)

With the aid of Result 7 we can prove a stronger version of Result 6 for two solutions of equation (2.22) (refer to the cautionary Remark (3) following Result 6 ).

RESULT 8 (Linear dependence/independence) Let y_1, y_2 be solutions of equation (2.22)

y'' + p(x)y' + q(x)y = 0,    p, q continuous on I.

Then y_1, y_2 are linearly dependent on I if and only if W(x) = 0 for all x ∈ I. Alternatively, y_1 and y_2 are linearly independent on I if and only if W(x) ≠ 0 for all x ∈ I.


Remarks

(1) This result states that linear independence of two solutions and the nonvanishing of their Wronskian on I are equivalent. Alternatively, their linear dependence and the vanishing on I of the Wronskian are equivalent statements.

(2) The proof of this result depends on the uniqueness of a solution to the initial-value problem and is given in the optional next section.

Example 2.7 Show that y_1 = x, y_2 = x^2 are linearly independent functions on I = (-1, 1).

Show also that their Wronskian W(x) = 0 at x = 0. What can you conclude about the possibility that y_1, y_2 are solutions of an equation y'' + p(x)y' + q(x)y = 0?

Show that y_1, y_2 are solutions of x^2 y'' - 2xy' + 2y = 0. Does this contradict your conclusion and Abel's Theorem on the behaviour of W(x)?
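Example 2.7 can be checked mechanically (sympy assumed). The point: W vanishes at x = 0 without the pair being dependent, and this is consistent with Result 8 because the SF coefficients p = -2/x, q = 2/x^2 of the equation they satisfy are not continuous at x = 0.

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = x, x**2

W = sp.simplify(sp.wronskian([y1, y2], x))   # x * 2x - 1 * x^2 = x^2
print(W.subs(x, 0))                          # 0: W vanishes at the origin

# Both functions satisfy x^2 y'' - 2x y' + 2y = 0
residuals = [sp.simplify(x**2*f.diff(x, 2) - 2*x*f.diff(x) + 2*f) for f in (y1, y2)]
print(residuals)   # [0, 0]
```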

5. EXISTENCE AND UNIQUENESS OF SOLUTIONS (OPTIONAL SECTION)

We have obtained solutions of several second-order linear ODEs, some of which were general solutions i.e. they contain two arbitrary constants. A general solution represents an (infinite) family of possible solutions; if we want a particular (or unique) solution we must have some extra information to fix the arbitrary constants i.e. initial conditions (i.c.) or boundary conditions (b.c.).

But how can we be sure that an equation has a solution and, secondly, that the solution is unique? The following fundamental theorem guarantees the existence of a unique solution for both homogeneous and inhomogeneous linear second-order equations (we state the theorem without proof which lies beyond the scope of this course).

THEOREM 1 (Existence and uniqueness for the initial-value problem)

Let p(x), q(x) and r(x) be continuous on an open interval I. The initial-value problem

y'' + p(x)y' + q(x)y = r(x),    y(x_0) = y_0,   y'(x_0) = y_1,

where x_0 ∈ I and y_0, y_1 are constants, has a unique solution y(x) defined for all x ∈ I. ■

Now consider the corresponding boundary-value problem on I:

y'' + p(x)y' + q(x)y = r(x),    y(a) = α,   y(b) = β,

where a, b are distinct points in I, and α, β are constants. Unfortunately, even when p, q are continuous on I, there is no guarantee that a solution exists and, if one does, that it is unique. We must investigate each b.v. problem individually.


A Proof of Result 8

Suppose that y_1, y_2 are solutions of the equation

y'' + p(x)y' + q(x)y = 0,    x ∈ I

with p, q continuous on I. Then Result 7 shows that W(y_1, y_2) is either everywhere zero or nowhere zero in I.

Observe first that if y_1 and y_2 are linearly dependent, then W(x) = 0 for all x ∈ I by the corollary to Result 6. We need only prove the converse i.e. if W(x) = 0 throughout I, then y_1, y_2 are linearly dependent. Let x_0 ∈ I; then W(x_0) = 0 by hypothesis. Consider the simultaneous equations for c_1, c_2:

c_1 y_1(x_0) + c_2 y_2(x_0) = 0,
c_1 y_1'(x_0) + c_2 y_2'(x_0) = 0.

(2.24)

Since the determinant of the coefficients W(x_0) = 0, they give a nontrivial solution for c_1, c_2. Use these values to define the function y = c_1 y_1(x) + c_2 y_2(x). By linear superposition y(x) is a solution of the equation and satisfies the i.c.s

y(x_0) = 0,    y'(x_0) = 0

(2.25)

by equation (2.24). But by Theorem 1, this i.v. problem has a unique solution, which is evidently zero on I, and so y(x) = 0 for all x ∈ I. Hence,

c_1 y_1(x) + c_2 y_2(x) = 0    on I,

but c_1, c_2 are not both zero, which means that y_1, y_2 are linearly dependent. We have proved the converse result. The alternative statement of Result 8 concerning linear independence follows directly from the first. ■


STANDARD TECHNIQUES FOR SOLVING SECOND-ORDER EQUATIONS

We now know a great deal about the structure of the solutions of the second-order linear ODE

a_0(x)y'' + a_1(x)y' + a_2(x)y = f(x).

(3.1)

Yet we have very few techniques available for solving equation (3.1). We can certainly find an exact solution of equation (3.1) if:

(1) it has constant coefficients a_0, a_1, a_2: either f(x) = 0 (homogeneous equation (1.9)) or f has a simple form (e.g. e^x, sin 3x, x^2 + 1 etc.), in which case we may use the method of undetermined coefficients (Example 2.4);

(2) it has variable coefficients with the special structure

ax^2 y'' + bxy' + cy = 0,    a, b, c consts.

(3.2)

This is the Euler-Cauchy equation (Example 1.6).

In general, the linear equation (3.1) is difficult or impossible to solve analytically - although it may be possible to find a series solution (Chapter 4) - and one must resort to numerical approximations. Here, we shall consider some standard techniques for solving second-order equations.

1. REDUCING THE ORDER OF AN EQUATION

We can sometimes reduce the general second-order ODE

F(x, y, y', y'') = 0,

(3.3)

to first-order which may then be solved by the methods of Chapter 1: Section 4.

(1) Equations with 'y' missing: equation (3.3) becomes

F(x, y', y'') = 0.

(3.4)

We introduce a new dependent variable v(x) = y'. Then v' = y'' and equation (3.4) becomes

F(x, v, v') = 0,

(3.5)

which is first-order in v. We solve for v and obtain the solution y = ∫ v dx of equation (3.4).


Example 3.1 Find the general solution of xy'' - y' = 3x^2.
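Example 3.1 sketched in sympy (an assumption of these notes' margin): the substitution v = y' turns xy'' - y' = 3x^2 into a first-order linear equation for v, and y is recovered by one more integration.

```python
import sympy as sp

x, D = sp.symbols('x D')
v = sp.Function('v')

# First-order equation for v = y': x v' - v = 3x^2
vsol = sp.dsolve(sp.Eq(x*v(x).diff(x) - v(x), 3*x**2), v(x))

# Recover y = ∫ v dx (+ a second arbitrary constant D)
y = sp.integrate(vsol.rhs, x) + D

residual = sp.simplify(x*y.diff(x, 2) - y.diff(x) - 3*x**2)
print(residual)   # 0
```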

Note: the method can sometimes be used for higher-order equations.

(2) Equations with 'x' missing: equation (3.3) reduces to

F(y, y', y'') = 0.

(3.6)

We again write v = dy/dx so that y'' = dv/dx = (dv/dy)(dy/dx) = v dv/dy (chain rule). Then, equation (3.6) becomes

F(y, v, v dv/dy) = 0,

(3.7)

which is first-order in v but with y as the independent variable. We solve equation (3.7) for v(y), and then solve dy/dx = v(y) as ∫ dy/v(y) = x + c (separable).

Example 3.2 Solve the equation y(y - 1)y'' + (y')^2 = 0.

2. A SECOND SOLUTION OF A LINEAR EQUATION: THE INTEGRAL FORMULA

Suppose we know one nontrivial solution y_1(x) of the homogeneous second-order linear equation. Then we can find a second solution y_2(x) such that y_1, y_2 are linearly independent. We first write the equation in SF

y'' + p(x)y' + q(x)y = 0.

(3.8)

Since we require y_1, y_2 l.i., the ratio y_2/y_1 ≠ const. on I, i.e. y_2/y_1 = u(x).

We therefore assume y_2 = u(x)y_1 for some unknown function u(x). Substituting

y_2' = u'y_1 + uy_1',    y_2'' = u''y_1 + 2u'y_1' + uy_1''

into equation (3.8) and collecting terms in u'', u' and u, we have

u''y_1 + u'(2y_1' + py_1) + u(y_1'' + py_1' + qy_1) = 0.


But y_1 is a solution of (3.8) and so

u''y_1 + u'(2y_1' + py_1) = 0.

(3.9)

Rearrange equation (3.9) as

u''/u' = -(2y_1'/y_1 + p)

and integrate to get

ln u' = -2 ln y_1 - ∫p dx    or    u' = e^(-∫p dx)/y_1^2.

Integrating again we obtain

u(x) = ∫ e^(-∫p dx)/y_1^2 dx

and the second solution y_2 = uy_1 is therefore

y_2 = y_1 ∫ e^(-∫p(x) dx)/y_1^2(x) dx.

(3.10)

Remarks

(1) Equation (3.10) gives an integral formula for a second solution y_2 which is l.i. of the known solution y_1 (Exercise: show that W(y_1, y_2) ≠ 0).

(2) Since {y_1, y_2} is a fundamental set, the general solution of equation (3.8) is y = Ay_1 + By_2 (A, B arbitrary consts.) i.e.

y = y_1 [A + B ∫ e^(-∫p dx)/y_1^2 dx].

(3.11)

This is the solution we would have obtained from our derivation above if we had included the two arbitrary constants of integration. The second solution y_2 can then be read off from the general solution (3.11).

(3) The equation (3.9) for the unknown function u(x) has 'u' missing: so putting v = u' we obtain the first-order separable equation

v' + Q(x)v = 0,    Q(x) = 2y_1'/y_1 + p.


In effect, we have reduced the order of equation (3.8): for this reason, the method is called reduction of order in some textbooks. The method is usually credited to Jean d'Alembert (1717-1783).

Example 3.3 Find a solution of the Euler-Cauchy equation x^2 y'' - 3xy' + 4y = 0. Use the integral formula to obtain a second solution and hence obtain the general solution of the equation. Check that the two solutions are l.i.
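A sketch of the integral formula (3.10) applied to Example 3.3 (sympy assumed): with y_1 = x^2 and SF coefficient p = -3/x, the formula delivers y_2 = x^2 ln x, and the Wronskian confirms the pair is l.i. for x > 0.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y1 = x**2
p = -3/x   # SF of x^2 y'' - 3x y' + 4y = 0 is y'' - (3/x)y' + (4/x^2)y = 0

integrand = sp.exp(-sp.integrate(p, x))/y1**2    # e^{3 ln x}/x^4 = 1/x
y2 = sp.simplify(y1*sp.integrate(integrand, x))  # x^2 log x

W = sp.simplify(sp.wronskian([y1, y2], x))       # x^3: nonzero for x > 0
print(y2, W)
```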

In contrast to Example 3.3, we shall work through the next example without quoting the integral formula (3.10).

Example 3.4 Show by direct substitution that e^x is a solution of the equation

xy'' - (2x + 1)y' + (x + 1)y = 0.

Find another solution which is linearly independent of the given solution, and hence write down the general solution.

Reduction of order for the inhomogeneous equation

The integral formula (3.10) gives a second l.i. solution of the homogeneous equation (3.8) once one solution has been found. The same reduction of order method can be used to obtain the solution of the inhomogeneous equation (3.1).

As before, we assume that y = U(X)Yl is a solution of equation (3.1) where Yl is a known solution of the associated homogeneous equation (i.e. with f(x) = 0). Substituting for y in equation (3.1) then yields an equation for u which is effectively first-order.

Example 3.5 By first obtaining a solution of the associated homogeneous equation, use the method of reduction of order to find the general solution of

x^2 y'' - 3xy' + 4y = 2x^2.

Remarks

(1) Note that the method does not require that the equation be put into SF and one proceeds by direct substitution of y = u(x)y_1.

(2) The resulting solution y is the general solution since it contains two arbitrary constants. Further, the solution y has the familiar form y = y_c + y_p of the GS of the inhomogeneous equation.

(3) We could not have used the method of undetermined coefficients in Example 3.5 because it is extremely unlikely that we could guess the form of the PI y_p = x^2 (ln x)^2 to match the forcing term f(x) = 2x^2 (the equation has variable coefficients!).


3. VARIATION OF PARAMETERS

We have already noted the limitations of the method of undetermined coefficients for solving inhomogeneous equations: its use is restricted to equations with constant coefficients and particularly simple types of forcing term f(x). But even for the equation y'' + y = sec x there is no obvious choice for a trial particular integral y_p.

Variation of parameters (VP) is a more powerful method for solving the inhomogeneous linear equation Ly = f(x) once the solution Yc of the homogeneous equation Ly = 0 is known. The credit for this technique is usually given to Joseph Louis Lagrange (1736-1813).

VARIATION OF PARAMETERS

We first put the second-order linear equation (3.1) in SF

Ly ≡ y'' + p(x)y' + q(x)y = r(x)

(3.12)

where p, q, r are continuous on some open interval I. We now assume that y1, y2 are l. i. solutions of the reduced equation Ly = 0: its complementary function is then

yc = A y1 + B y2 ,   A, B arbitrary consts.

(3.13)

In seeking the solution of equation (3.12) we now allow the coefficients A, B to vary: we therefore put

y = A(x)y1 + B(x)y2

(3.14)

and try to determine the functions A(x), B(x). We shall require two equations for A and B: in fact, we will obtain equations for A' and B'. Differentiating equation (3.14) gives

y' = A y1' + B y2' + A' y1 + B' y2

which simplifies if we impose the condition

A' y1 + B' y2 = 0 .

(3.15)

It follows that

y' = A y1' + B y2'

(3.16)

and then

y'' = A y1'' + B y2'' + A' y1' + B' y2' .

(3.17)

- 22-

ENM321

We now substitute y, y', y'' into equation (3.12) to obtain a second equation for A', B' in addition to (3.15): this gives (check)

A(y1'' + p y1' + q y1) + B(y2'' + p y2' + q y2) + A' y1' + B' y2' = r(x) .

But y1, y2 are solutions of the homogeneous equation and so

A' y1' + B' y2' = r(x) .

(3.18)

We now solve the pair of simultaneous equations (3.15) and (3.18) for A' and B'. These can be integrated to give the unknown functions A, B which are then substituted into (3.14) to obtain the solution y(x).

Example 3.6 Solve the equation y'' + y = sec x.

Remarks

In Example 3.6 we have worked through the VP technique in full. Alternatively, we can simply quote the equations (3.15) and (3.18) for A', B' once the equation has been put in SF (quicker!).

Example 3.7 Solve y'' − 4y' + 4y = (1 + x)e^(2x).

FORMULAE FOR VARIATION OF PARAMETERS

We have to solve equations (3.15) and (3.18) for A', B' which can be written in matrix form

[ y1   y2  ] [ A' ]   [  0   ]
[ y1'  y2' ] [ B' ] = [ r(x) ] .

But this has a unique solution since the determinant of the coefficient matrix is just the Wronskian W(y1, y2) ≠ 0 (y1, y2 are l. i.). Hence, Cramer's rule gives

A' = (1/W) det( 0  y2 ; r(x)  y2' ) = − r(x) y2 / W ,

B' = (1/W) det( y1  0 ; y1'  r(x) ) = r(x) y1 / W .


Integrating we get

A(x) = − ∫ (r(x) y2 / W) dx ,   B(x) = ∫ (r(x) y1 / W) dx

(3.19)

and the solution follows from equation (3.14).
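As a sketch of how the formulas (3.19) play out in practice, the following sympy fragment applies them to Example 3.6, y'' + y = sec x, with y1 = cos x, y2 = sin x and W = 1:

```python
import sympy as sp

x = sp.symbols('x')

# Example 3.6: y'' + y = sec x, with y1 = cos x, y2 = sin x
y1, y2, r = sp.cos(x), sp.sin(x), sp.sec(x)
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))   # Wronskian = 1
assert W == 1

A = sp.integrate(-r*y2/W, x)    # = log(cos x)
B = sp.integrate( r*y1/W, x)    # = x
yp = A*y1 + B*y2                # PI: x sin x + cos x ln(cos x)
assert sp.simplify(sp.diff(yp, x, 2) + yp - r) == 0
```

The complementary function C cos x + D sin x is then added to yp to give the GS.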

Example 3.8 Solve the differential equation x²y'' − 3xy' + 3y = 2x⁴eˣ, x ≠ 0.

Remarks

(1) Remember! - variation of parameters is used for the inhomogeneous linear equation only if two l. i. solutions of the homogeneous equation are known. When only one solution is available we use the reduction of order method.

(2) It is usually not a good idea to remember formulae such as (3.19). It is sufficient to impose the condition (3.15) on A, B which leads to the simple form for y' in (3.16) and then to equation (3.18).

(3) Variation of parameters can be applied to the inhomogeneous linear equation of order n.

In particular, it may be used to solve first-order linear equations; however, it is equivalent to the integrating factor method in this case (see Chapter 1: Section 4).

4. NORMAL FORM OF A DIFFERENTIAL EQUATION

A second-order linear homogeneous ODE can always be reduced to one in which the first-derivative term is absent: the equation is then said to be in normal form.

DEFINITION 1 A second-order linear differential equation in u(x) is in normal form (NF) if u' is absent i.e.

u'' + Q(x)u = 0 .

(3.20)

Reduction to normal form: consider the equation in SF

Ly ≡ y'' + p(x)y' + q(x)y = 0

(3.21)

and put y = u(x)v(x). Then y' = u'v + uv', y'' = u''v + 2u'v' + uv'' and equation (3.21)

becomes

u''v + (2v' + pv)u' + (v'' + pv' + qv)u = 0 .

i.e.

u'' + (2v'/v + p)u' + (Lv/v)u = 0 .

- 24-

ENM321

To eliminate the u' term we choose v so that 2v'/v + p = 0 i.e.

v(x) = e^(−(1/2) ∫ p(x) dx) .

(3.22)

The equation for u(x) becomes

u'' + Q(x)u = 0

(3.23)

which has normal form where

Q(x) = Lv/v = v''/v + p(v'/v) + q

(3.24)

which can be found once v is known.

Alternatively, v'/v = −p/2 and v''/v − (v'/v)² = −p'/2, so that

Q(x) = q − p²/4 − p'/2

(3.25)

which expresses Q in terms of p and q which can be read off directly from the given equation (3.21).
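Formula (3.25) is easy to check mechanically. The sketch below computes Q(x) for Bessel's equation in SF (p = 1/x, q = 1 − ν²/x², as in Example 3.9 below):

```python
import sympy as sp

x, nu = sp.symbols('x nu', positive=True)

# Bessel's equation in SF has p = 1/x, q = 1 - nu^2/x^2
p = 1/x
q = 1 - nu**2/x**2

Q = sp.together(q - p**2/4 - sp.diff(p, x)/2)   # formula (3.25)
assert sp.simplify(Q - (1 + (sp.Rational(1, 4) - nu**2)/x**2)) == 0

# transformation factor (3.22): v = exp(-(1/2) Integral(p dx)) = 1/sqrt(x)
v = sp.exp(-sp.integrate(p, x)/2)
assert sp.simplify(v - 1/sp.sqrt(x)) == 0
```

Reading Q off directly from p and q in this way avoids computing v'' explicitly.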

When the previous methods of this chapter fail (e.g. we cannot find one solution of the homogeneous equation), it may sometimes be possible to solve an equation by first reducing it to normal form.

Example 3.9 Show that Bessel's equation of order ν ≥ 0

x²y'' + xy' + (x² − ν²)y = 0

(3.26)

has the normal form

u'' + (1 + (1/4 − ν²)/x²)u = 0 ,

(3.27)

where

y = u(x)/√x .

(3.28)


Special case: Bessel's equation of order ν = 1/2

The l. i. solutions of Bessel's equation (3.26) define 'new' functions called Bessel functions which we denote by Jν(x) and J−ν(x). We shall study these special functions in more detail in the next chapter.

For the special case ν = 1/2, the NF (3.27) reduces to u'' + u = 0 which has the l. i. solutions sin x, cos x. Equation (3.28) shows that the Bessel functions J_1/2(x) and J_−1/2(x) are given by

J_1/2(x) = √(2/(πx)) sin x ,   J_−1/2(x) = √(2/(πx)) cos x

(3.29)

where the constant coefficients √(2/π) are included for conventional reasons to do with their properties. Hence, the GS of Bessel's equation of order 1/2 is

y = A J_1/2(x) + B J_−1/2(x) = (1/√x)(C sin x + D cos x) .

The graphs of J_1/2(x) and J_−1/2(x) are sinusoidal waves whose amplitudes are modulated by √(2/(πx)) and so decay to zero as x → ∞.

[Figure: graphs of J_1/2(x) and J_−1/2(x) against x.]

Note that J_1/2(0) = 0 while J_−1/2(x) → +∞ as x → 0. These graphs show the typical behaviour of Bessel functions.
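The closed forms (3.29) can be checked numerically against a library implementation; the sketch below uses scipy.special.jv (assuming scipy is available):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_v(x)

x = np.linspace(0.5, 20.0, 200)

# equation (3.29): J_{1/2} and J_{-1/2} in closed form
j_half_closed  = np.sqrt(2/(np.pi*x)) * np.sin(x)
j_mhalf_closed = np.sqrt(2/(np.pi*x)) * np.cos(x)

assert np.allclose(jv(0.5, x),  j_half_closed)
assert np.allclose(jv(-0.5, x), j_mhalf_closed)

# the amplitude envelope sqrt(2/(pi x)) decays, so |J_{1/2}| is bounded by
# its value at the left end of the grid
assert np.abs(jv(0.5, x)).max() <= np.sqrt(2/(np.pi*0.5)) + 1e-12
```

Agreement to machine precision confirms the conventional normalisation √(2/π).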


5. ASYMPTOTIC FORM OF SOLUTIONS

It is often important to know how a solution y(x) of a differential equation behaves when the independent variable x approaches some limiting value (usually x → 0 or x → ±∞). In physical applications, for example, we are particularly interested in the behaviour of a system once it has settled down i.e. after a sufficiently long time (t → ∞).

We should therefore be concerned with the asymptotic (i.e. approximate) form of solutions for large x or t. We can sometimes deduce this asymptotic behaviour without solving the equation. The normal form of a differential equation can be extremely helpful for finding the asymptotic form of solutions.

Example 3.10 Examine the asymptotic behaviour of the solutions of Bessel's equation of order ν for large x.

Even when a differential equation is in normal form (3.20), it may not be possible to deduce the asymptotic behaviour of its solution for large x (because the limiting form of Q(x) as x → ∞ is not obvious). In this case, we can use the WKB (Wentzel-Kramers-Brillouin) approximation.

RESULT 9 (WKB method) Consider the second-order linear differential equation in normal form

y'' − f(x)y = 0 .

(3.30)

Then the solution has the asymptotic form

y ~ A f^(−1/4) exp(∫ √f(x) dx) + B f^(−1/4) exp(−∫ √f(x) dx)   as x → ∞

(3.31)

where A, B are arbitrary constants.

Remarks

(1) The WKB formula (3.31) depends only on f(x) and is a superposition of the two linearly independent approximate solutions

y1 ~ f^(−1/4) exp(∫ √f(x) dx)   and   y2 ~ f^(−1/4) exp(−∫ √f(x) dx) .

(2) The WKB method will usually give a good approximation (even when x is not too large) except near points where f(x) = 0 (called turning points of the equation).

(3) To carry out the computation of (3.31) it may be necessary to expand f(x) as a power series (in 1/x) to obtain the leading order terms as x → ∞. In the integral exponent we retain only terms that lead to functions that increase with x, while we keep only the highest order term in f^(−1/4).
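As a quick sanity check of Result 9, take f(x) = x (Airy's equation in normal form). The WKB solution y ~ f^(−1/4) exp(∫√f dx) is not exact, but its relative residual decays like x⁻³; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x                              # Airy's equation y'' - x y = 0

# one WKB solution: y ~ f^(-1/4) exp(+ integral of sqrt(f))
y = f**sp.Rational(-1, 4) * sp.exp(sp.integrate(sp.sqrt(f), x))

# the exact ratio y''/y exceeds f by only 5/(16 x^2), so the relative
# residual of the WKB solution decays like x^-3
ratio = sp.simplify(sp.diff(y, x, 2) / y)
assert sp.simplify(ratio - x - sp.Rational(5, 16)/x**2) == 0

rel_residual = sp.simplify((sp.diff(y, x, 2) - f*y) / (f*y))
assert sp.limit(rel_residual, x, sp.oo) == 0
```

This also illustrates why the approximation fails near the turning point f(x) = 0, where the residual term blows up.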


Example 3.11 Find the WKB approximation for the solution of

y'' − 2xy' − 2y = 0

as x → ∞ .

Example 3.12 Use the WKB method to determine the asymptotic behaviour of the Bessel functions of order ν as x → ∞.

DERIVATION OF THE WKB APPROXIMATION (OPTIONAL)

We start with the differential equation in normal form

y'' − f(x)y = 0 .

(3.32)

Now introduce a new independent variable z = z(x) and dependent variable w(z) such that

y(x) = w(z)/(z')^(1/2) ,   z' = dz/dx .

(3.33)

This is called the Liouville-Green transformation: the inclusion of (z')^(1/2) in (3.33) will ensure that the equation for w is also in normal form. Differentiating (3.33) with respect to x

(noting that d/dx = (dz/dx)(d/dz) = z' d/dz) we get

y' = (z')^(1/2) dw/dz − (1/2)(z')^(−3/2) z'' w .

Then

y'' = (z')^(3/2) d²w/dz² + (3/4)(z')^(−5/2)(z'')² w − (1/2)(z')^(−3/2) z''' w

(3.34)

where the terms in w' have cancelled. Substituting from (3.33) and (3.34) into (3.32) we obtain

d²w/dz² + (3/4)(z')^(−4)(z'')² w − (1/2)(z')^(−3) z''' w − f(x)(z')^(−2) w = 0


i.e.

w'' − [f(x)/(z')² + φ] w = 0

(3.35)

φ(z) = (2z'z''' − 3(z'')²) / (4(z')⁴)

(3.36)

We now choose z'(x) = √f(x) = f^(1/2), so that equation (3.35) simplifies to

w'' − (1 + φ)w = 0

(3.37)

and (3.36) becomes (exercise for you!)

φ = f''/(4f²) − 5(f')²/(16f³)

(3.38)

where f' = df/dx in (3.38). We now assume that φ « 1 as x → ∞, so that (3.37) reduces to

w'' − w = 0   ⇒   w(z) ~ e^(±z) = e^(± ∫ √f(x) dx) .

Hence, from (3.33) we deduce the WKB approximation as x → ∞

y ~ A f^(−1/4) exp(∫ √f(x) dx) + B f^(−1/4) exp(−∫ √f(x) dx) .

Remarks

(1) The WKB approximation is valid only if φ is much smaller than unity and can be neglected as x becomes large. From equation (3.38) we see that this will be true if both f', f'' « f i.e. the function f(x) varies slowly as x increases.

(2) Note that if f(x) > 0 then z' = f^(1/2) > 0 so that z increases/decreases as x increases/decreases and, in general, we can expect x → ∞ as z → ∞ and vice versa. However, the WKB approximation may sometimes give an asymptotic approximation for x → −∞ (although it is usually better to first transform the variable x → −ξ in the equation and then investigate the behaviour as ξ → ∞).


(3) It is clear now why the WKB approximation breaks down near a turning point of the equation: for if f(x₀) = 0 then (3.38) shows that φ will not be small in the neighbourhood of x = x₀. Indeed, the transformation (z')² = f(x) is invalid near x₀ since f(x) changes sign whereas (z')² does not.

(4) The Liouville-Green equations (3.35) and (3.36) may well be useful in themselves if the transformation z(x) is chosen appropriately. For example, it may be possible to solve equation (3.35) exactly, giving an exact solution to equation (3.32). Alternatively, we may be able to neglect φ if it is much smaller than the term f(x)/(z')², permitting an approximate solution of (3.35), and hence of (3.32).


SERIES SOLUTIONS AND SPECIAL FUNCTIONS

Thus far, all the solutions of the ODEs that we have solved could be written in terms of elementary functions (eˣ, cos x, sin x, ln x, polynomials etc.): they are called closed form solutions. For most ODEs this is not possible (even if they are linear). For example, a solution may be defined in terms of an integral that we cannot evaluate (see Problems 4, Qu. 3).

Even a simple second-order linear ODE such as Airy's equation (George Airy 1801-1892)

y'' − xy = 0

(4.1)

cannot be solved by any of the previous methods - it has no obvious closed form solution! If we are to obtain exact analytic solutions of equations like (4.1), we need to have available a much larger class of functions than those of elementary calculus. The key idea is to try to represent the solution by a power series

y(x) = Σ_{n=0}^∞ aₙxⁿ   or   y(x) = Σ_{n=0}^∞ aₙ(x − x₀)ⁿ .

Since any power series converges to a differentiable function on some open interval I, this provides a vast new source of analytic functions with which to solve ODEs.

1. THE IDEA OF A SERIES SOLUTION

The general idea of a series solution is inherent in the second-order i. v. problem

y'' + p(x)y' + q(x)y = 0 ,   y(x₀) = A ,   y'(x₀) = B ,

(4.2)

where A, B are assigned constants. Theorem 1 (Chapter 2) guarantees a unique solution y(x) on some open interval I containing x₀. Rearrange the equation as

y'' = −q(x)y − p(x)y'

so y''(x₀) is known. Differentiate the equation and set x = x₀ to obtain (check this!)

y'''(x₀) = [p(x₀)q(x₀) − q'(x₀)]y(x₀) + [p²(x₀) − q(x₀) − p'(x₀)]y'(x₀) .

By repeatedly differentiating the ODE we can evaluate all the derivatives y⁽ⁿ⁾(x₀). We can now construct the Taylor series expansion for y(x) about x = x₀

y(x) = y(x₀) + y'(x₀)(x − x₀) + (y''(x₀)/2!)(x − x₀)² + (y'''(x₀)/3!)(x − x₀)³ + ...

(4.3)


If we substitute y(x₀), y'(x₀), y''(x₀), ... into the series and collect up terms in A and B separately, we get

y(x) = A{1 − (1/2!)q(x₀)(x − x₀)² + (1/3!)[p(x₀)q(x₀) − q'(x₀)](x − x₀)³ + ...}
     + B{(x − x₀) − (1/2!)p(x₀)(x − x₀)² + (1/3!)[p²(x₀) − q(x₀) − p'(x₀)](x − x₀)³ + ...}
     = A y₁(x) + B y₂(x)

where y₁(x), y₂(x) are the functions defined by the two power series in brackets. Since A, B are arbitrary constants, both y₁ and y₂ are solutions of (4.2) satisfying the i.c.'s y₁(x₀) = 1, y₁'(x₀) = 0 (A = 1, B = 0) and y₂(x₀) = 0, y₂'(x₀) = 1 (A = 0, B = 1). But W(x₀) = 1 ≠ 0 so y₁, y₂ are l. i. - we have therefore obtained the general solution of (4.2)!

Example 4.1 Find a Maclaurin series solution of y'' + y = 0, y(0) = A, y'(0) = B by repeatedly differentiating the equation.
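The repeated-differentiation idea of Example 4.1 can be sketched in sympy: since y'' = −y, every derivative at 0 is determined by A and B, and the resulting Taylor polynomial matches A cos x + B sin x term by term:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

# for y'' + y = 0 every derivative at 0 reduces to the two initial values:
# y^(n)(0) = -y^(n-2)(0), starting from y(0) = A, y'(0) = B
derivs = [A, B]
for n in range(2, 10):
    derivs.append(-derivs[n - 2])

taylor = sum(d*x**k/sp.factorial(k) for k, d in enumerate(derivs))

# collecting the A and B terms reproduces the Maclaurin series of cos, sin
target = (A*sp.series(sp.cos(x), x, 0, 10).removeO()
          + B*sp.series(sp.sin(x), x, 0, 10).removeO())
assert sp.expand(taylor - target) == 0
```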

Remarks

(1) In Example 4.1 we have simply obtained the l. i. solutions cos x, sin x of y'' + y = 0 in the form of their Maclaurin series. The series solutions could be taken to define these trigonometric functions.

(2) Evaluating the derivatives y⁽ⁿ⁾(0) in Example 4.1 was straightforward. However, for the general equation (4.2), repeated differentiation of the products q(x)y and p(x)y' can become increasingly messy and tedious. We shall look for a better method!

(3) The method of repeated differentiation can be applied to linear equations of any order and inhomogeneous equations (Problems 4, Qu. 3).

In the light of the discussion above, it seems sensible to alter our point of view. Rather than regard an ODE as an equation to be satisfied, we shall instead think of a differential equation as a prescription for generating solution functions in the form of power series. A few of these solutions turn out to be the series representations of elementary functions (see Example 4.1 and Problems 4, Qu.l). However, the vast majority are Taylor or Maclaurin series of 'new' functions that are also solutions of the ODEs used to construct them.

2. ANALYTIC FUNCTIONS: ORDINARY AND SINGULAR POINTS

When can we be sure that a solution of a differential equation is represented by a power series? Our procedure of repeatedly differentiating the equation so as to evaluate all the derivatives y⁽ⁿ⁾(x₀) in the Taylor series (4.3) requires, at the very least, that the coefficients p(x), q(x) have derivatives of all orders at x₀ i.e. they are functions that are infinitely differentiable in the neighbourhood of x = x₀. This is precisely the property enjoyed by power series!


A REVIEW OF POWER SERIES

A power series about the point x₀ is an infinite series of the form

Σ_{n=0}^∞ aₙ(x − x₀)ⁿ = a₀ + a₁(x − x₀) + a₂(x − x₀)² + ... ;

(4.4)

it is also called a power series in (x − x₀). If x₀ = 0, we obtain a power series in x (about 0)

Σ_{n=0}^∞ aₙxⁿ = a₀ + a₁x + a₂x² + ... .

(4.5)


Power series have the following properties:

1. Convergence. For every power series (4.4) there is a radius of convergence R ≥ 0 such that

(a) the series converges (absolutely) for |x − x₀| < R and diverges for |x − x₀| > R;

(b) if R = 0 the series converges only at x = x₀ (to a₀); if R = ∞ the series converges for all x;

(c) if R > 0, the interval I = (x₀ − R, x₀ + R) is called the interval of convergence. The series may or may not converge at the end points x₀ − R and x₀ + R of I;

(d) every series converges at its centre x₀ (to the sum a₀);

(e) the radius of convergence R can be calculated by using the ratio test:

R = lim_{n→∞} |aₙ / aₙ₊₁| .

(4.6)

[Figure: the real line about x₀, showing convergence for |x − x₀| < R, divergence for |x − x₀| > R, and possible convergence or divergence (?) at the end points x₀ − R and x₀ + R.]
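The ratio test (4.6) is easy to automate; a small sympy sketch (the example series are illustrative choices):

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)

def radius(a):
    """Radius of convergence via the ratio test (4.6): R = lim |a_n/a_{n+1}|."""
    ratio = sp.combsimp(a / a.subs(n, n + 1))
    return sp.limit(ratio, n, sp.oo)

assert radius(1/sp.factorial(n)) == sp.oo   # e^x series converges for all x
assert radius(sp.S(1)/n) == 1               # ln(1+x)-type series: R = 1
assert radius(sp.factorial(n)) == 0         # sum of n! x^n converges only at x0
```

The three cases correspond to R = ∞, 0 < R < ∞ and R = 0 in property 1 above.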


2. Function defined by a power series. Suppose R > 0 and f(x) denotes the sum of the power series for |x − x₀| < R. Then

f(x) = Σ_{n=0}^∞ aₙ(x − x₀)ⁿ

(4.7)

defines a function on the interval of convergence I.

(a) The function f is continuous and differentiable on I, and

f'(x) = Σ_{n=1}^∞ n aₙ(x − x₀)ⁿ⁻¹

(4.8)

i.e. a power series can be differentiated term-by-term in I and converges to f'(x).

(b) A power series can be integrated term-by-term on I and

∫ f(x) dx = Σ_{n=0}^∞ aₙ(x − x₀)ⁿ⁺¹/(n + 1) + C .

(4.9)

(c) The series (4.8) can be repeatedly differentiated term-by-term to obtain f'', f''', ....

Hence, a power series represents an infinitely differentiable function on its interval of convergence.

(d) The coefficients aₙ of the power series (4.7) are given by aₙ = f⁽ⁿ⁾(x₀)/n! so that

f(x) = Σ_{n=0}^∞ (f⁽ⁿ⁾(x₀)/n!)(x − x₀)ⁿ

(4.9)

i.e. the power series (4.7) is just the Taylor series expansion of f about x = x₀.

(e) Setting x₀ = 0 in (4.9) shows that the power series (4.5) in x is just the Maclaurin series for f

f(x) = Σ_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) xⁿ .

(4.10)

ANALYTIC FUNCTION

For our purposes the crucial property is 2(c), the infinite differentiability of a power series; this motivates the following definition.


DEFINITION 1 A function f is analytic at x₀ if it has a Taylor series expansion about x = x₀ with a positive radius of convergence.

Remarks

(1) If f is analytic at x₀ = 0, then it has a Maclaurin series expansion. Some standard Maclaurin series and their intervals of convergence are

eˣ = 1 + x + x²/2! + x³/3! + ... = Σ_{n=0}^∞ xⁿ/n! ,   −∞ < x < ∞

cos x = 1 − x²/2! + x⁴/4! − ... = Σ_{n=0}^∞ ((−1)ⁿ/(2n)!) x²ⁿ ,   −∞ < x < ∞

sin x = x − x³/3! + x⁵/5! − ... = Σ_{n=0}^∞ ((−1)ⁿ/(2n+1)!) x²ⁿ⁺¹ ,   −∞ < x < ∞

cosh x = 1 + x²/2! + x⁴/4! + ... ,   −∞ < x < ∞

sinh x = x + x³/3! + x⁵/5! + ... ,   −∞ < x < ∞

ln(1 + x) = x − x²/2 + x³/3 − ... = Σ_{n=1}^∞ ((−1)ⁿ⁺¹/n) xⁿ ,   −1 < x ≤ 1

(1 + x)ⁿ = 1 + nx + (n(n−1)/2!)x² + (n(n−1)(n−2)/3!)x³ + ... ,   −1 < x < 1 .

(2) Since power series about a given point x₀ can be added, subtracted, multiplied and divided (inside their common interval of convergence), if f and g are analytic at x₀ then f ± g, fg and f/g [if g(x₀) ≠ 0] are also analytic at x₀.

(3) Any polynomial function P(x) is evidently analytic at every point. Consequently, if Q(x) is a polynomial having no common factor with P(x), the rational function Q(x)/P(x) is analytic everywhere except at those points x₀ for which P(x₀) = 0 (i.e. at the points of discontinuity of the rational function).


ORDINARY AND SINGULAR POINTS OF AN ODE

We are interested in the second-order linear homogeneous equation

a₀(x)y'' + a₁(x)y' + a₂(x)y = 0

(4.11)

where any factors common to all three coefficients a₀, a₁, a₂ have been removed. We rewrite (4.11) in SF

y'' + p(x)y' + q(x)y = 0

(4.12)

where p(x) = a₁(x)/a₀(x) and q(x) = a₂(x)/a₀(x) .

We have not excluded the possibility that the leading coefficient a₀(x) has a zero at some point. It turns out that a solution of (4.11) can exhibit extremely wild behaviour near a zero x₀ of a₀(x) since at least one of the functions p, q is discontinuous and therefore not analytic at x₀. We distinguish these points as follows:

DEFINITION 2 A point x₀ is called an ordinary (or regular) point of equation (4.11) if both p(x) and q(x) in the SF (4.12) are analytic at x₀. A point that is not an ordinary point is called a singular point (SP) of the equation.

Comments

(1) If the coefficients aᵢ(x) are analytic everywhere, then p(x) and q(x) will be analytic except at points where a₀(x) = 0 i.e. the only possible singular points are the zeros of a₀!

(2) For the most part, we shall be interested in equations with polynomial coefficients aᵢ(x) which are analytic everywhere. It follows from comment (1) that x = x₀ is an ordinary point if a₀(x₀) ≠ 0, and a singular point if a₀(x₀) = 0.

Example 4.2 Identify all the singular points of

(ii) ax²y'' + bxy' + cy = 0

(iii) y'' + eˣy' + (cos x)y = 0

(iv) y'' + (ln x)y = 0

(v) (x² + 4)y'' + 2xy' − y = 0

(vi) xy'' + y'/(1 − x) + (sin x)y = 0 .
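For equations with polynomial coefficients, finding the singular points is just root-finding on a₀(x). A sketch for equation (v), whose singular points are complex:

```python
import sympy as sp

x = sp.symbols('x')

# equation (v): (x^2 + 4) y'' + 2x y' - y = 0; polynomial coefficients, so
# the only possible singular points are the zeros of a0(x) = x^2 + 4
sing_pts = sp.solve(x**2 + 4, x)
assert set(sing_pts) == {2*sp.I, -2*sp.I}

# the distance from x0 = 0 to the nearest (complex) singular point is 2,
# which will bound the radius of convergence of a series solution about 0
assert min(sp.Abs(s) for s in sing_pts) == 2
```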


3. SERIES SOLUTION ABOUT AN ORDINARY POINT

At an ordinary point x₀ of equation (4.11) or (4.12), the coefficient functions p(x), q(x) are analytic. We might anticipate that a solution of the equation inherits this property and therefore has a power series expansion about x₀. The following theorem guarantees the existence of a power series solution - we state it without proof.

THEOREM 2 (Existence of power series solutions) Suppose x₀ is an ordinary point of the equation

y'' + p(x)y' + q(x)y = 0 .

Then there is a power series solution about the point Xo of the form

y(x) = Σ_{n=0}^∞ aₙ(x − x₀)ⁿ = a₀y₁(x) + a₁y₂(x)

(4.13)

where a₀, a₁ are arbitrary, and y₁ and y₂ are linearly independent series solutions i.e. (4.13) is the general solution. Moreover, the radius of convergence of each series solution is at least as large as the distance in the complex plane from x₀ to the nearest singular point of the equation.

Remarks

(1) Theorem 2 guarantees a series solution about an ordinary point x₀ which is valid on the interval I = (x₀ − R, x₀ + R) where R is the distance from x₀ to the nearest SP of the equation. However, one or both of the l. i. solutions y₁, y₂ may have a larger radius of convergence than R.

(2) Equation (4.13) shows that y = a₀ + a₁(x − x₀) + ... so that y(x₀) = a₀ and y'(x₀) = a₁. Hence, y₁ is the solution satisfying the i.c.'s y₁(x₀) = 1, y₁'(x₀) = 0, and y₂ satisfies y₂(x₀) = 0, y₂'(x₀) = 1. Notice that y(x) satisfies the general i.c.'s y(x₀) = a₀, y'(x₀) = a₁ (it is exactly the solution we found by repeated differentiation in Section 1).

(3) We know that the coefficients aₙ in the series (4.13) are given by aₙ = y⁽ⁿ⁾(x₀)/n! which could be obtained from the equation by repeated differentiation. This is excellent in theory but usually impractical! Rather, we substitute the series (4.13) into the ODE (4.11) and collect up like powers (x − x₀)ⁿ.


Example 4.3 Obtain a series solution of y'' + y = 0 valid for all x.

Example 4.4 Solve the Airy equation y'' − xy = 0, −∞ < x < ∞.

Comments

(1) In Examples 4.3 and 4.4 we were able to obtain a formula to determine all the coefficients aₙ (n ≥ 2) in terms of the arbitrary constants a₀ and a₁. This leads to an analytical expression for the entire series solution. This is often not possible and we must therefore settle for calculating the first 'n' coefficients so as to obtain a polynomial approximation to the exact solution (with the desired accuracy).

(2) Even about an ordinary point, the power series method involves a considerable amount of algebraic manipulation. Instead of manipulating the infinite series, one can write out the first few terms in full and try to spot the recursion relation.

(3) The method applies to linear equations of any order (see Problems 4, Qu. 8) and to inhomogeneous equations. Moreover, the coefficients need not be polynomials. However, p(x) and q(x) and any forcing term r(x) must first be expanded in power series about the ordinary point x₀ (see Problems 4, Qu. 7).

(4) Henceforth, we shall only obtain power series solutions about x = 0 i.e. Maclaurin series. If we require a solution about an ordinary point x₀ ≠ 0, we first change the variable to z = x − x₀ (which shifts x = x₀ to z = 0). We seek a series solution of the 'new' equation (in z) of the form y = Σ aₙzⁿ and then resubstitute z = x − x₀.

Example 4.5 Find the first few terms of a power series solution of the i.v. problem

y'' − 2xy' + y = 0 ,   y(0) = y'(0) = 1,

and give a recursion relation for finding any term in the series. (This is Hermite's equation y'' − 2xy' + λy = 0 with λ = 1.)
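Example 4.5 is worked in lectures; the sketch below builds the series from the recursion (n+2)(n+1)aₙ₊₂ = (2n − 1)aₙ, obtained by substituting y = Σ aₙxⁿ into the equation, and checks it against the ODE:

```python
import sympy as sp

x = sp.symbols('x')

# substituting y = sum a_n x^n into y'' - 2x y' + y = 0 gives
#   (n+2)(n+1) a_{n+2} = (2n - 1) a_n
a = [sp.S(1), sp.S(1)]                 # a0 = y(0) = 1, a1 = y'(0) = 1
for n in range(8):
    a.append((2*n - 1)*a[n]/((n + 2)*(n + 1)))
print(a[:6])   # first few coefficients: 1, 1, -1/2, 1/6, -1/8, 1/24

y = sum(c*x**k for k, c in enumerate(a))
res = sp.expand(sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + y)
assert all(res.coeff(x, k) == 0 for k in range(8))   # holds to truncation order
```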

4. SINGULAR POINTS AND THE SERIES METHOD OF FROBENIUS

The power series method of the preceding section will not usually work near a singular point (SP) of the equation

y'' + p(x)y' + q(x)y = 0 .

(4.14)

This is because the solution of (4.14) near a SP x₀ may exhibit extremely wild behaviour and so cannot be represented by a Taylor series of the form (4.13) about x = x₀ i.e. the solution is not analytic at x₀.

- 38-

ENM321

Suppose we attempt to find a power series solution of the Euler-Cauchy equation

x²y'' − xy' − 3y = 0

(4.15)

about its only SP x₀ = 0. Setting y = Σ_{n=0}^∞ aₙxⁿ and substituting into the DE leads to

Σ_{n=0}^∞ n(n−1)aₙxⁿ − Σ_{n=0}^∞ naₙxⁿ − Σ_{n=0}^∞ 3aₙxⁿ = 0

⇒ Σ_{n=0}^∞ (n² − 2n − 3)aₙxⁿ = 0

i.e. (n − 3)(n + 1)aₙ = 0 ,   n = 0, 1, 2, ... .

Hence, aₙ = 0 unless n = 3 and then a₃ is arbitrary; we have obtained only one independent solution y₁ = x³. Solving (4.15) in the usual way (y = xᵐ), we obtain a second l. i. solution (check)

y₂ = 1/x .

The power series method did not produce y₂ since it does not have a Maclaurin series expansion i.e. y₂ is not analytic at x = 0! Indeed, this solution is discontinuous at x = 0 and becomes increasingly large in magnitude near zero.

However, had we substituted instead the series

y = Σ_{n=0}^∞ aₙx^(n+r) = x^r Σ_{n=0}^∞ aₙxⁿ

(4.15)

where the index r is a constant, then we should be able to find both solutions i.e. we obtain two possible values r = 3 or r = −1 and aₙ = 0 (n ≥ 1) so that

y₁ = a₀x³   or   y₂ = a₀x⁻¹   (a₀ arbitrary)

(see Problems 4, Qu. 10). This admirable idea was proposed by Georg Frobenius (1849-1917) and the series (4.15) is called a Frobenius series. However, the method only works if the singularities of the coefficient functions p(x) and q(x) at the SP x = x₀ are not too severe i.e. they are "weak singularities".
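The indicial roots r = 3 and r = −1 quoted above can be checked directly; a sympy sketch:

```python
import sympy as sp

x, r = sp.symbols('x r')

# trying y = x^r in the Euler-Cauchy equation x^2 y'' - x y' - 3y = 0
# gives the quadratic r(r-1) - r - 3 = 0
roots = sp.solve(r*(r - 1) - r - 3, r)
assert set(roots) == {3, -1}

# both resulting powers solve the ODE, but only x^3 is analytic at x = 0
for y in (x**3, 1/x):
    assert sp.simplify(x**2*sp.diff(y, x, 2) - x*sp.diff(y, x) - 3*y) == 0
```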


REGULAR AND IRREGULAR SINGULAR POINTS

We distinguish between two types of singular point:

v" + p(x)y' + q(x)y = 0

DEFINITION 3 A singular point x = Xo of the equation

is said to be a regular singular point (RSP) if both the functions

are analytic at xo; otherwise Xo is called an irregular singular point (ISP). If Xo = 0 these expressions become xp(x) and

x2 p(x), respectively.

Comments

(1) If x₀ is a SP, then one (or both) of p(x) and q(x) is not analytic at x = x₀. But if x₀ is a RSP, then this means that we can remove the singularities of p(x) and q(x) by multiplying them by (x − x₀) and (x − x₀)² respectively.

(2) If x₀ is a RSP, then we know that there are Taylor series for

(x − x₀)p(x) = Σ_{n=0}^∞ pₙ(x − x₀)ⁿ ,   (x − x₀)²q(x) = Σ_{n=0}^∞ qₙ(x − x₀)ⁿ

(4.16)

and so

p(x) = p₀/(x − x₀) + p₁ + p₂(x − x₀) + ... ,   q(x) = q₀/(x − x₀)² + q₁/(x − x₀) + q₂ + ... .

Then at least one of the three constants p₀, q₀ and q₁ must be nonzero (why?). This means that x₀ is a RSP if

p(x) has a singularity at x = x₀ no worse than 1/(x − x₀) ;

q(x) has a singularity at x = x₀ no worse than 1/(x − x₀)² .


(3) In view of (2), if p(x) and q(x) are rational functions (i.e. quotients of polynomials reduced to lowest terms), then x₀ is a RSP provided the factor (x − x₀) appears at most to the first power in the denominator of p(x) and at most to the second power in the denominator of q(x).

(4) Alternatively, from (4.16) we see that x₀ is a RSP if the limits

lim_{x→x₀} (x − x₀)p(x) = p₀ ,   lim_{x→x₀} (x − x₀)²q(x) = q₀

are both finite (possibly zero).
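Comment (4) gives a practical test. A sketch applying it to Legendre's equation at x₀ = 1 (anticipating Example 4.6(iv) below):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')

# Legendre's equation (1 - x^2) y'' - 2x y' + lam*y = 0, in SF:
p = -2*x/(1 - x**2)
q = lam/(1 - x**2)

# test the SP x0 = 1 using the limits of comment (4)
p0 = sp.limit((x - 1)*p, x, 1)
q0 = sp.limit((x - 1)**2*q, x, 1)
assert (p0, q0) == (1, 0)   # both finite, so x0 = 1 is a RSP
```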

Example 4.6 Classify any singular points of the following equations:

(i) x²y'' − xy' − 3y = 0 (Euler-Cauchy)

(ii) x²y'' + 2y' − xy = 0

(iii) (x² − 4)²y'' + (x − 2)y' + 2y = 0

(iv) (1 − x²)y'' − 2xy' + λy = 0 (Legendre's equation)

(v) x³y'' − 2x²y' + 6y = 0.

The basic result for solving a second-order ODE about a RSP is

THEOREM 3 (Frobenius' Theorem) If x = x₀ is a regular singular point of the equation

y'' + p(x)y' + q(x)y = 0

then there exists at least one series solution of the form

y = (x − x₀)^r Σ_{n=0}^∞ aₙ(x − x₀)ⁿ = Σ_{n=0}^∞ aₙ(x − x₀)ⁿ⁺ʳ

(4.17)

where r is a (real) constant and a₀ ≠ 0. The series converges at least on the interval 0 < |x − x₀| < R, where R is the distance from x₀ to the nearest other singular point (real or complex) of the equation.


Comments

(1) In contrast to Theorem 2, the theorem only guarantees us one solution in the form of the Frobenius series (4.17). Nevertheless, in some cases, it may be possible to obtain two l. i. solutions of Frobenius type (see below).

(2) The series (4.17) converges for all x in the open interval I = (x₀ − R, x₀ + R) except possibly at x₀ itself.

(3) Taking a₀ ≠ 0 in (4.17) just means that the lowest power appearing in the Frobenius series is (x − x₀)^r since

y = a₀(x − x₀)^r + a₁(x − x₀)^(1+r) + a₂(x − x₀)^(2+r) + ... ;

(4.18)

i.e. we insist that (x − x₀)^r is the highest common factor of the series.

(4) Just as for ordinary points, we may assume that the RSP is x₀ = 0 and seek a Frobenius series solution

y = Σ_{n=0}^∞ aₙx^(n+r) = a₀x^r + a₁x^(1+r) + ... .

(4.19)

Example 4.7 Use the Frobenius series method to solve the equation 2xy'' + (1 + x)y' + y = 0.

Remarks

(1) In Example 4.7 we were able to obtain two l. i. series solutions of the form (4.17). This is because the indicial equation 2r² − r = 0 has two roots r₁ = 1/2, r₂ = 0 whose difference r₁ − r₂ = 1/2 is not an integer.

(2) The indicial equation (IE) is always given by equating the coefficient of the lowest power of x to zero (after substituting the Frobenius series into the equation and collecting up like powers of x). The IE is a quadratic equation for the unknown index r; the resulting solutions will depend on its two roots (see Theorem 4).
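Example 4.7 is worked in lectures. The sketch below checks the two ingredients quoted in Remark (1): the recursion aₙ = −aₙ₋₁/(2n + 2r − 1), derived by the usual substitution, for each root; for r = 1/2 the series sums to a closed form identified here (for illustration) as √x e^(−x/2), which substitution confirms:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
L = lambda y: 2*x*sp.diff(y, x, 2) + (1 + x)*sp.diff(y, x) + y

# root r = 1/2: the recursion a_n = -a_{n-1}/(2n) sums to
# sqrt(x)*exp(-x/2), which solves the equation exactly
y1 = sp.sqrt(x)*sp.exp(-x/2)
assert sp.simplify(L(y1)) == 0

# root r = 0: recursion a_n = -a_{n-1}/(2n - 1), truncated after x^7
a, y2 = sp.S(1), sp.S(1)
for n in range(1, 8):
    a = -a/(2*n - 1)
    y2 += a*x**n
res = sp.expand(L(y2))
assert all(res.coeff(x, k) == 0 for k in range(7))   # residual only at order 7
```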

If only one solution can be found, it may be possible to obtain a second solution using the integral formula.

Example 4.8 Show that Laguerre's equation (of order 1)

xy'' + (1 − x)y' + y = 0

has a polynomial solution and obtain the first few terms of a series expansion for the second solution.


Comments

(1) Example 4.8 shows that we can obtain only one Frobenius series if the indicial equation has a single repeated root; this can also occur if the two roots differ by an integer.

(2) The integral formula is generally only useful when we wish to know the first few (i.e. leading order) terms in the series of a second solution. To compute the complete series we need to know the form of the second solution. This information is provided by the next result which, for simplicity, we state for a RSP at x = 0.

THEOREM 4 (Frobenius series solutions) Suppose x₀ = 0 is a RSP of the equation

y'' + p(x)y' + q(x)y = 0 .

Let r₁ ≥ r₂ be the real roots of the indicial equation for a Frobenius series solution.

1. r₁ ≠ r₂ and r₁ − r₂ ≠ positive integer: there exist two linearly independent solutions of the form

y₁ = Σ_{n=0}^∞ aₙx^(n+r₁)   (a₀ ≠ 0) ,   y₂ = Σ_{n=0}^∞ bₙx^(n+r₂)   (b₀ ≠ 0).

2. r₁ ≠ r₂ and r₁ − r₂ = positive integer: there exist two linearly independent solutions of the form

y₁ = Σ_{n=0}^∞ aₙx^(n+r₁)   (a₀ ≠ 0),

(4.20)

y₂ = C y₁ ln x + Σ_{n=0}^∞ bₙx^(n+r₂)   (b₀ ≠ 0)

(4.21)

where C is a constant that could be zero.

3. r₁ = r₂: there exist two linearly independent solutions of the form

y₁ = Σ_{n=0}^∞ aₙx^(n+r₁)   (a₀ ≠ 0),

(4.22)

y₂ = y₁ ln x + Σ_{n=1}^∞ bₙx^(n+r₁) .

(4.23)


Comments

(1) Example 4.7 illustrates Case 1 with r₁ = 1/2, r₂ = 0. In Example 4.8 we obtained two equal roots r₁ = r₂ = 0 - this is Case 3. For an example of Case 2 where C = 0 in equation (4.21), see Problems 4, Qu. 10.

(2) If we want the complete series for the second solution in Cases 2 and 3, then we first obtain the Frobenius series (4.20) and (4.22) corresponding to the root r_1. We then substitute the form of the second solution y_2 given by (4.21) and (4.23) into the differential equation to determine the unknown coefficients b_n.

For completeness, the next example illustrates Case 2 of Theorem 4 when C ≠ 0. The solution is given in outline and you are urged to fill in the missing details yourself.

Example 4.9 Find a Frobenius series solution of the equation

xy'' + xy' - y = 0.

Obtain a second solution by using the integral formula.

Solution: x = 0 is the only singular point and is a RSP. Theorem 3 guarantees a Frobenius solution

y(x) = ∑_{n=0}^∞ a_n x^{n+r},   a_0 ≠ 0.

Substituting the series into the DE and collecting like powers gives

∑_{n=0}^∞ (n+r)(n+r-1) a_n x^{n+r-1} + ∑_{n=0}^∞ (n+r-1) a_n x^{n+r} = 0.

Changing the index in the second series (n → n - 1), we have

∑_{n=0}^∞ (n+r)(n+r-1) a_n x^{n+r-1} + ∑_{n=1}^∞ (n+r-2) a_{n-1} x^{n+r-1} = 0

i.e. r(r-1) a_0 x^{r-1} + ∑_{n=1}^∞ [(n+r)(n+r-1) a_n + (n+r-2) a_{n-1}] x^{n+r-1} = 0.

IE: r(r-1) = 0  ⇒  r_1 = 1, r_2 = 0 (Case 2: r_1 - r_2 = 1, and r = r_1 gives the Frobenius series).

RR: (n+r)(n+r-1) a_n + (n+r-2) a_{n-1} = 0,   n ≥ 1.

r_1 = 1:   a_n = -((n-1)/(n(n+1))) a_{n-1}   (n ≥ 1)   ⇒   a_n = 0, n = 1, 2, ...

∴ y = a_0 x^{r_1} = a_0 x,   i.e. y_1 = x.

Second solution: trying r = r_2 = 0 in the RR gives

a_n = -((n-2)/(n(n-1))) a_{n-1},   n ≥ 1,

which will not give a second solution (n = 1 forces a_0 = 0). Instead we use the integral formula, with p(x) = 1 from the standard form y'' + y' - y/x = 0:

y_2 = y_1 ∫ (e^{-∫p dx} / y_1^2) dx = y_1 ∫ (e^{-x}/x^2) dx

    = y_1 [ -1/x - ln x + x/2! - x^2/(2·3!) + x^3/(3·4!) - ... ].

So take

y_2 = y_1 ln x + ( 1 - x^2/2! + x^3/(2·3!) - x^4/(3·4!) + ... ).

NB: this has precisely the form of the second solution (4.21):

y_2 = C y_1 ln x + ∑_{n=0}^∞ b_n x^{n+r_2},   with C = 1, r_2 = 0, b_0 = 1.

In this case, the integral formula allows us to write down the complete second solution because all the b_n's are known. The general solution is therefore

y = c_1 y_1(x) + c_2 y_2(x),

which converges for all x > 0 (why?). ∎

5. SPECIAL FUNCTIONS

There are some important second-order linear ODEs that occur repeatedly in the physical sciences and engineering for many different problems. These special equations of applied mathematics have been studied extensively and their solutions give rise to special functions. They often occur when solving PDEs (such as Laplace's equation, the wave equation and heat conduction equation) by the separation of variables method which reduces a PDE to several ODEs (see ENM 322).

We shall briefly consider two of these equations - Legendre's equation and Bessel's equation - and give a few of the more important properties of their solutions. A detailed discussion of these (and other) important equations can be found in textbooks on the reading list, and in the many books on special functions. (The standard references are: Handbook of Mathematical Functions, M. Abramowitz & I. Stegun, and Higher Transcendental Functions, A. Erdelyi (ed.), 3 Vols.)

... .. .. .. ..

• ..

...

• ..

~ ... ... tJ... liP ...

.. ~ ~ @I

@II @" @I @lI @I @1 @I @I fill ~ fill ~

t1!!: ~

~

~

i

• •

• • • It

It It

• ,

,

• ,

t t ,

It

I)

it it ,

It ,

,

• It

It

• t

t t It It !t t t

• t

• t

t

It

It

~

• ~



~

• t





• •

• t

~ ~ .(.

- 45-

ENM321

LEGENDRE'S EQUATION AND LEGENDRE POLYNOMIALS

Legendre's equation is the linear second-order equation

(1 - x^2) y'' - 2x y' + v(v+1) y = 0

(4.24)

where v is a real constant. The equation was obtained by Adrien-Marie Legendre (1752-1833) when studying gravitation. Legendre's equation often arises when solving boundary-value problems using spherical polar coordinates (r, θ, φ).

The only singular points of equation (4.24) are the RSPs x = 1 and x = -1. Since x = 0 is an ordinary point, Theorem 2 shows that we can find a power series solution of the form

y(x) = ∑_{n=0}^∞ a_n x^n,

convergent on at least the interval -1 < x < 1. Substituting the series into equation (4.24) we obtain the two l.i. series solutions (exercise)

y_1(x) = a_0 [ 1 - (v(v+1)/2!) x^2 + ((v-2)v(v+1)(v+3)/4!) x^4 - ((v-4)(v-2)v(v+1)(v+3)(v+5)/6!) x^6 + ... ]   (4.25)

y_2(x) = a_1 [ x - ((v-1)(v+2)/3!) x^3 + ((v-3)(v-1)(v+2)(v+4)/5!) x^5 - ((v-5)(v-3)(v-1)(v+2)(v+4)(v+6)/7!) x^7 + ... ]   (4.26)

where a_0, a_1 are arbitrary constants.

For general v, (4.25) and (4.26) are infinite series and are of little interest. However, for v = 0,1,2, ... one or other of the two series terminates and we obtain a polynomial solution.

Legendre Polynomials P_n(x): when v = n is an even integer the series y_1(x) terminates at x^n; if v = n is an odd integer the series y_2(x) terminates at x^n. In each case we obtain a polynomial P_n(x) of degree n, called the Legendre polynomial, which is a solution of

(1 - x^2) y'' - 2x y' + n(n+1) y = 0.   (4.27)


The arbitrary constants a_0, a_1 are chosen so that P_n(1) = 1 for all n. The first few Legendre polynomials are (check)

P_0(x) = 1,   P_1(x) = x,   P_2(x) = (1/2)(3x^2 - 1),
P_3(x) = (1/2)(5x^3 - 3x),   P_4(x) = (1/8)(35x^4 - 30x^2 + 3).

In general, we obtain

P_n(x) = (1/2^n) ∑_{k=0}^{N} [(-1)^k (2n-2k)! / (k! (n-k)! (n-2k)!)] x^{n-2k}   (4.28)

where N = n/2 if n is even and N = (n-1)/2 if n is odd.
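As an illustrative aside (not part of the printed notes), the explicit sum (4.28) is easy to evaluate directly; the helper name `legendre` below is my own:

```python
from math import factorial

def legendre(n, x):
    """P_n(x) from the explicit sum (4.28):
    P_n(x) = 2^-n * sum_{k=0}^N (-1)^k (2n-2k)! / (k! (n-k)! (n-2k)!) x^(n-2k)."""
    N = n // 2  # N = n/2 (n even) or (n-1)/2 (n odd)
    total = 0.0
    for k in range(N + 1):
        total += ((-1) ** k * factorial(2 * n - 2 * k)
                  / (factorial(k) * factorial(n - k) * factorial(n - 2 * k))
                  * x ** (n - 2 * k))
    return total / 2 ** n

# P_3(x) should equal (5x^3 - 3x)/2, and P_n(1) = 1 for all n
print(legendre(3, 0.5))                       # (5*0.125 - 1.5)/2 = -0.4375
print([legendre(n, 1.0) for n in range(6)])   # all equal to 1
```

The check P_n(1) = 1 confirms the normalisation of the polynomials mentioned above.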

Basic properties: the following properties of P_n(x) are evident from the series.

(i) P_n(-x) = P_n(x), n even.

(ii) P_n(-x) = -P_n(x), n odd.

(iv) P_n(0) = 0, n odd.

(v) P_n'(0) = 0, n even.

Graphs of Legendre polynomials: P_0(x), ..., P_4(x)


Rodrigues' formula: the Legendre polynomials are generated by

P_n(x) = (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n,   n = 0, 1, 2, ....   (4.30)

This formula was obtained by Olinde Rodrigues (1794-1851).
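Rodrigues' formula can be checked mechanically by differentiating raw coefficient lists; this is a sketch of my own (the helpers `poly_mul`, `poly_diff`, `legendre_rodrigues` are ad hoc names, not from the notes):

```python
from math import factorial

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_diff(p):
    """Differentiate a coefficient list once."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def legendre_rodrigues(n):
    """Coefficients of P_n via Rodrigues' formula (4.30):
    P_n(x) = (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    p = [1]
    for _ in range(n):
        p = poly_mul(p, [-1, 0, 1])   # build (x^2 - 1)^n
    for _ in range(n):
        p = poly_diff(p)              # differentiate n times
    return [c / (2 ** n * factorial(n)) for c in p]

print(legendre_rodrigues(3))  # coefficients of (5x^3 - 3x)/2: [0, -1.5, 0, 2.5]
```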

Recurrence relation: there are many relations connecting Legendre polynomials of different degrees. The basic one is due to Ossian Bonnet (1819-92):

(n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).   (4.31)

Bonnet's recurrence formula can be used to obtain any P_n(x), n ≥ 2, given that P_0(x) = 1 and P_1(x) = x.
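The recurrence can be sketched in exact rational arithmetic (an illustrative aside; the function name `legendre_polys` is my own):

```python
from fractions import Fraction

def legendre_polys(nmax):
    """Coefficient lists [c0, c1, ...] for P_0 .. P_nmax via Bonnet's
    recurrence (4.31): (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]     # P_0 = 1, P_1 = x
    for n in range(1, nmax):
        xPn = [Fraction(0)] + P[n]                      # multiply P_n by x
        prev = P[n - 1] + [Fraction(0)] * (len(xPn) - len(P[n - 1]))
        P.append([((2 * n + 1) * a - n * b) / (n + 1)
                  for a, b in zip(xPn, prev)])
    return P

P = legendre_polys(4)
print(P[2])   # P_2 = -1/2 + (3/2) x^2
print(P[4])   # P_4 = 3/8 - (15/4) x^2 + (35/8) x^4
```

Exact fractions make it easy to confirm that the recurrence reproduces the tabulated polynomials above.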

Legendre functions of the second kind Q_n(x): the solution P_n(x) is defined for all x, even though there are RSPs at x = ±1. However, a second l.i. solution is given by the nonterminating series (4.26) if v = n = 0, 2, 4, ..., and (4.25) if v = n = 1, 3, 5, .... This series solution Q_n(x) converges for |x| < 1 and has a logarithmic singularity at both x = 1 and x = -1. The general solution of Legendre's equation (4.27) is therefore

y(x) = c_1 P_n(x) + c_2 Q_n(x),   c_1, c_2 arbitrary constants

(see Problems 4, Qu. 18).

Orthogonality of Legendre polynomials: any two Legendre polynomials P_n(x) and P_m(x) satisfy

∫_{-1}^{1} P_n(x) P_m(x) dx = 0   if m ≠ n,   (4.32)

∫_{-1}^{1} P_n(x) P_m(x) dx = 2/(2n+1)   if m = n.   (4.33)

The result (4.32) is called the orthogonality property for distinct Legendre polynomials, and P_n and P_m (m ≠ n) are said to be orthogonal on (-1, 1).
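A quick numerical sketch of (4.32) and (4.33), using a simple composite Simpson rule (the helper names are my own, not from the notes):

```python
def P2(x): return 0.5 * (3 * x**2 - 1)
def P3(x): return 0.5 * (5 * x**3 - 3 * x)

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

orth = simpson(lambda x: P2(x) * P3(x), -1.0, 1.0)   # (4.32): 0 for m != n
norm = simpson(lambda x: P2(x) ** 2, -1.0, 1.0)      # (4.33): 2/(2n+1) = 2/5
print(orth, norm)
```

The computed values should be (numerically) 0 and 2/5 = 0.4, as predicted with n = 2.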

Fourier-Legendre series: the orthogonality property allows us to represent a function f(x) on (-1, 1) by a series of Legendre polynomials. We set

f(x) = ∑_{n=0}^∞ c_n P_n(x) = c_0 P_0(x) + c_1 P_1(x) + c_2 P_2(x) + ....   (4.34)


We call (4.34) the Fourier-Legendre series for f. (This is precisely analogous to the familiar Fourier series for periodic functions, which are represented by a series of trigonometric functions.) The coefficients c_n are obtained by using the orthogonality property (in just the same way as for ordinary Fourier series). Multiplying (4.34) by P_m(x) and integrating,

∫_{-1}^{1} f(x) P_m(x) dx = ∑_{n=0}^∞ c_n ∫_{-1}^{1} P_n(x) P_m(x) dx = c_m ∫_{-1}^{1} [P_m(x)]^2 dx

so that

c_m = ((2m+1)/2) ∫_{-1}^{1} f(x) P_m(x) dx.   (4.35)

The series (4.34) converges to the value f(x) wherever the function is continuous. At a point of discontinuity, the series converges to the value at the midpoint of the jump (exactly as for ordinary Fourier series).

The set of Legendre polynomials {P_n, n = 0, 1, 2, ...} is a complete set of orthogonal functions on (-1, 1) and can be used to solve b.v. problems in spherical coordinates (spherical harmonic analysis).

6. BESSEL'S EQUATION AND BESSEL FUNCTIONS

Bessel's equation of order v is the linear second-order equation

x^2 y'' + x y' + (x^2 - v^2) y = 0   (4.36)

where v ≥ 0 is a real constant. The solutions of equation (4.36) are called Bessel functions. The equation was first derived by Friedrich Wilhelm Bessel (1784-1846) when studying the perturbations of planetary orbits. However, the equation arises in numerous physical applications (fluid mechanics, wave propagation, heat transfer), particularly when solving PDEs by separating variables. Bessel's equation usually governs the radial dependence when solving classical PDEs in cylindrical polars (r, θ, z). Many volumes have been written on Bessel's equation and Bessel functions: the standard work is A Treatise on the Theory of Bessel Functions, G. N. Watson.

Bessel functions: the only singular point of equation (4.36) is the RSP x = 0. Theorem 3 assures us that there is at least one Frobenius series solution

y(x) = ∑_{n=0}^∞ a_n x^{n+r},   a_0 ≠ 0,   (4.37)


which converges for all x, except possibly at x = 0. Substituting the series into (4.36) we obtain the indicial equation (check)

r^2 - v^2 = 0   ⇒   r_1 = v,  r_2 = -v.

We are guaranteed the solution (4.37) with r = r_1 = v. In this case we find that a_1 = a_3 = a_5 = ... = 0 and

a_{2n} = (-1)^n a_0 / (2^{2n} n! (1+v)(2+v) ··· (n+v))   (n = 1, 2, ...).

Then (4.37) gives the Frobenius series solution

y_1(x) = a_0 x^v [ 1 + ∑_{n=1}^∞ (-1)^n x^{2n} / (2^{2n} n! (1+v)(2+v) ··· (n+v)) ]   (4.38)

which is convergent for 0 ≤ x < ∞ (since v may be a fraction we only consider x ≥ 0). Since part of the denominator "looks like" a factorial expression we use the generalised factorial function Γ(x) to write

(1+v)(2+v) ··· (n+v) = Γ(n+v+1) / Γ(v+1)

(see the Appendix for an overview of the gamma function Γ(x)). If we substitute into (4.38) and choose

a_0 = 1 / (2^v Γ(v+1)),

we finally obtain the solution of Bessel's equation of order v

J_v(x) = ∑_{n=0}^∞ [(-1)^n / (n! Γ(n+v+1))] (x/2)^{2n+v}   (4.39)

J_v(x) is called the Bessel function of the first kind of order v (v ≥ 0); the series converges for all x ≥ 0, with J_0(0) = 1 and J_v(0) = 0 for v > 0.
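The series (4.39) is straightforward to sum numerically; this sketch (my own helper `J`, using the gamma function from the standard library) checks the values just stated:

```python
from math import gamma

def J(v, x, terms=40):
    """Partial sum of the series (4.39):
    J_v(x) = sum_{n>=0} (-1)^n / (n! Gamma(n+v+1)) (x/2)^(2n+v)."""
    return sum((-1) ** n / (gamma(n + 1) * gamma(n + v + 1)) * (x / 2) ** (2 * n + v)
               for n in range(terms))

print(J(0, 0.0))        # J_0(0) = 1
print(J(0, 2.404826))   # near the first zero of J_0, so approximately 0
print(J(1, 0.0))        # J_v(0) = 0 for v > 0
```

The rapid factorial growth of the denominators makes a 40-term partial sum more than sufficient for moderate x.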

The form of a second solution of Bessel's equation l.i. of J_v depends on the difference of the indicial roots. Case 1 of Theorem 4 shows that if r_1 - r_2 = 2v ≠ integer then there is a second Frobenius series (4.37) with r = r_2 = -v. This gives a second solution (let v → -v in J_v(x))

J_{-v}(x) = ∑_{n=0}^∞ [(-1)^n / (n! Γ(n-v+1))] (x/2)^{2n-v}.   (4.40)


J_{-v} is called the Bessel function of the first kind of order -v. This solution is singular at x = 0 and J_{-v}(x) becomes infinite there (because of the factor x^{-v}). The series (4.40) converges for all x > 0.

Case r_1 - r_2 = 2v = integer: then v = 0, 1/2, 1, 3/2, 2, .... But for v = n + 1/2 (n = 0, 1, 2, ...), we can show that J_v and J_{-v} are still l.i. (This is Case 2 of Theorem 4 with C = 0 in (4.21).) The Bessel functions J_{n+1/2} and J_{-(n+1/2)} can be expressed in closed form in terms of sin x, cos x and powers of x. For example, we can use the series (4.39) and (4.40) to show that

J_{1/2}(x) = √(2/(πx)) sin x,   J_{-1/2}(x) = √(2/(πx)) cos x.

Summary: if v ≠ 0, 1, 2, 3, ... the general solution of Bessel's equation of order v is

y(x) = c_1 J_v(x) + c_2 J_{-v}(x).   (4.41)

Bessel functions of integer order v = N = 0, 1, 2, ...: setting v = N in the series (4.39) for J_v we obtain

J_N(x) = ∑_{n=0}^∞ [(-1)^n / (n! (N+n)!)] (x/2)^{2n+N}   (4.42)

where we have used Γ(N+n+1) = (N+n)! (see Appendix). These are the Bessel functions of the first kind of integral order N = 0, 1, 2, ... and are of most interest in engineering and physics. Of special interest are J_0 and J_1, because of their close connection with (and similarity to) cos x and sin x, respectively:

J_0(x) = 1 - x^2/2^2 + x^4/(2^2·4^2) - x^6/(2^2·4^2·6^2) + ...   (4.43)

J_1(x) = x/2 - x^3/(2^2·4) + x^5/(2^2·4^2·6) - ...   (4.44)

(compare these series with those for cos x and sin x).


We observe directly that

J_0'(x) = -J_1(x)   (4.45)

so that J_0'(0) = -J_1(0) = 0.

Graphs of J_0 and J_1

Note the interlacing of the zeros of J_0 and J_1; this is a property of consecutive Bessel functions J_N(x) and J_{N+1}(x), N = 0, 1, 2, .... The zeros of J_N assume particular importance in the solution of boundary value problems that arise in engineering (see later and also ENM322).

The solutions J_N(x) and J_{-N}(x) are linearly dependent: in fact, it can be shown that

J_{-N}(x) = (-1)^N J_N(x).

Hence, we obtain only one Frobenius series and we must look for a second solution using Theorem 4. For N = 0, we have repeated indicial roots r_1 = r_2 = 0 so that a second l.i.

solution is given by equation (4.23):

y_0(x) = J_0(x) ln x + ∑_{n=1}^∞ b_n x^n.   (4.46)

It has a logarithmic singularity at the RSP x = 0. For N ≥ 1, a second solution is given by Case 2 of Theorem 4. Equation (4.21) gives

y_N(x) = C J_N(x) ln x + ∑_{n=0}^∞ b_n x^{n-N}   (b_0 ≠ 0)   (4.47)

with C ≠ 0, and so it too has a logarithmic singularity at x = 0.


Bessel functions of the second kind Y_N(x): we don't usually use the solutions (4.46) and (4.47) as the second l.i. solutions of Bessel's equation of integer order N. Rather we take a very particular combination of J_N(x) and y_N(x) to obtain a further solution Y_N(x): this is called the Bessel function of the second kind of integer order N = 0, 1, 2, .... Then the general solution of Bessel's equation of order N

x^2 y'' + x y' + (x^2 - N^2) y = 0   (4.48)

is

y(x) = c_1 J_N(x) + c_2 Y_N(x),   x > 0.   (4.49)

The Bessel function Y_N(x) has a logarithmic singularity at x = 0, so that Y_N(x) → -∞ as x → 0.

A similar Bessel function of the second kind Y_v can also be defined for v ≠ integer (by taking a linear combination of J_v and J_{-v}). Hence, the general solution of Bessel's equation of any order v ≥ 0 can be written as

y(x) = c_1 J_v(x) + c_2 Y_v(x).   (4.50)

Graphs of Y_0 and Y_1

Recurrence relations: there are numerous formulae relating Bessel functions. Two important but typical examples are

x J_{N+1}(x) = 2N J_N(x) - x J_{N-1}(x)   (N ≥ 1)   (4.51)

J_N'(x) = J_{N-1}(x) - (N/x) J_N(x)   (N ≥ 1).   (4.52)

We have stated these for integer order N, but they are equally true for general order v and for the Bessel functions Y_v(x). In particular, we note that setting N = 1 in (4.52) leads to

J_1'(x) = J_0(x) - (1/x) J_1(x)

so that the derivative of J_1(x) can be computed from the values of J_0(x) and J_1(x).
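This derivative relation can be verified numerically by differentiating the series (4.39) term by term; the sketch below (my own helpers, assuming the series form (4.39)) compares the two sides:

```python
from math import gamma

def J(v, x, terms=40):
    # partial sum of the series (4.39)
    return sum((-1) ** n / (gamma(n + 1) * gamma(n + v + 1)) * (x / 2) ** (2 * n + v)
               for n in range(terms))

def dJ1(x, terms=40):
    """Termwise derivative of the series for J_1(x)."""
    return sum((-1) ** n * (2 * n + 1) / (gamma(n + 1) * gamma(n + 2))
               * (x / 2) ** (2 * n) / 2
               for n in range(terms))

x = 1.7
lhs = dJ1(x)
rhs = J(0, x) - J(1, x) / x   # setting N = 1 in (4.52)
print(lhs, rhs)
```

The two printed values agree to machine precision, since the identity holds term by term in the series.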


Zeros of Bessel functions: J_v and J_{-v} have infinitely many zeros α_m, m = 1, 2, 3, ..., such that J_v(α_m) = 0. These zeros are spread along the entire length of the positive x-axis with 0 ≤ α_1 < α_2 < .... However, these zeros are not equally spaced, except for those of J_{1/2} and J_{-1/2}, which coincide with the zeros of sin x and cos x, respectively (i.e. α_{m+1} - α_m = π).

However, since

J_v(x) ~ (1/√x)(A cos x + B sin x)   as x → ∞,

the zeros eventually have approximately equal spacing π. In fact, one can show that for sufficiently large m the zeros of J_v are approximated by

α_m ≈ (m + v/2 - 1/4)π   (4.53)

with increasing accuracy as m → ∞. However, this expression gives very good approximations even when m is not too large: for example, for J_0 we obtain

   Exact:    2.405   5.520   8.654   11.792   14.931   18.071
   Approx.:  2.356   5.498   8.639   11.781   14.923   18.064
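The table can be reproduced by locating the zeros of the J_0 series by bisection; this is an illustrative sketch (the bracketing intervals come from the tabulated values above, and the helper names are my own):

```python
from math import gamma, pi

def J0(x, terms=60):
    # partial sum of the series (4.43) for J_0
    return sum((-1) ** n / gamma(n + 1) ** 2 * (x / 2) ** (2 * n) for n in range(terms))

def bisect(f, a, b, tol=1e-10):
    """Simple bisection for a root of f in [a, b] (assumes a sign change)."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# first three zeros of J_0, versus the approximation (4.53) with v = 0:
# alpha_m ~ (m - 1/4) pi
for m, (lo, hi) in enumerate([(2, 3), (5, 6), (8, 9)], start=1):
    exact = bisect(J0, lo, hi)
    approx = (m - 0.25) * pi
    print(m, round(exact, 3), round(approx, 3))
```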

PARAMETRIC BESSEL EQUATION: FOURIER-BESSEL SERIES

In many physical problems we encounter Bessel's equation in the form

x^2 y'' + x y' + (λ^2 x^2 - v^2) y = 0   (4.54)

where λ > 0 is a constant. Equation (4.54) is called the parametric Bessel equation with parameter λ. This equation is obtained from Bessel's equation (4.36) by letting x → λx (and using the chain rule) and therefore has the general solution (see equation (4.50))

y(x) = c_1 J_v(λx) + c_2 Y_v(λx).   (4.55)

Equation (4.54) often arises when solving a PDE in cylindrical polars by separation of variables. Typical problems are the vibrations of a circular membrane (wave equation) and the steady-state temperature distribution in a circular plate (Laplace's equation), where we solve equation (4.54) on some interval 0 ≤ x ≤ a subject to the b.c.'s

(i) y(0) is finite (at the centre of the membrane/plate),

(ii) y(a) = 0 (clamped membrane/insulated plate).


Since Y_v(0) is infinite, we must set c_2 = 0 in (4.55) to satisfy (i), i.e. y(x) = c_1 J_v(λx). Then (ii) requires J_v(λa) = 0, so that λa = α_m is any zero of J_v. Hence, λ can take any one of the eigenvalues

λ_m = α_m / a,   m = 1, 2, 3, ...   (4.56)

with corresponding eigenfunctions (or eigensolutions)

y_m(x) = J_v(λ_m x).   (4.57)

It can be shown that the parametric Bessel functions J_v(λ_m x) are orthogonal on [0, a] with

∫_0^a x J_v(λ_m x) J_v(λ_n x) dx = 0   if m ≠ n,   (4.58)

∫_0^a x [J_v(λ_m x)]^2 dx = (a^2/2) [J_{v+1}(α_m)]^2   if m = n.   (4.59)

Note that the integral in (4.58) has the 'extra' factor x (which is not present in the orthogonality condition (4.32) for Legendre polynomials): we say that the functions are orthogonal with respect to the weight function x.
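A numerical sketch of the weighted orthogonality (4.58), with v = 0, a = 1 and the first two zeros of J_0 taken from the table above (the helper names are my own):

```python
from math import gamma

def J0(x, terms=60):
    # partial sum of the series for J_0
    return sum((-1) ** n / gamma(n + 1) ** 2 * (x / 2) ** (2 * n) for n in range(terms))

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a1, a2 = 2.404826, 5.520078   # first two zeros of J_0
I12 = simpson(lambda x: x * J0(a1 * x) * J0(a2 * x), 0.0, 1.0)  # (4.58): should vanish
I11 = simpson(lambda x: x * J0(a1 * x) ** 2, 0.0, 1.0)          # (4.59): (1/2) J_1(a1)^2
print(I12, I11)
```

Note how the weight x appears in both integrands, exactly as in (4.58)-(4.59).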

We can use the complete set {J_v(λ_m x), m = 1, 2, 3, ...} of orthogonal Bessel functions on [0, a] to expand a function f(x) as a series of eigenfunctions

f(x) = ∑_{m=1}^∞ c_m J_v(λ_m x) = c_1 J_v(λ_1 x) + c_2 J_v(λ_2 x) + ....   (4.60)

This is called the Fourier-Bessel series expansion for f(x), where the coefficients c_m are obtained using the orthogonality conditions (4.58) and (4.59):

c_m = (2 / (a^2 [J_{v+1}(α_m)]^2)) ∫_0^a x f(x) J_v(λ_m x) dx,   λ_m = α_m / a,   (4.61)

where {α_m, m = 1, 2, ...} are the zeros of J_v.

Remarks

(1) The constant v is fixed by the order of the parametric Bessel equation (4.54), and the eigenfunction expansion involves the same Bessel function J_v with the different eigenvalues λ_m in the argument.

(2) In a physical application, the function f(x) would be a given function, e.g. the initial displacement (or velocity) of a membrane, or the initial temperature distribution in a plate.


(3) The Fourier-Bessel series (4.60) converges to f(x) at points x in (0, a) where this function is continuous, and to

(1/2)[f(x_0 +) + f(x_0 -)]

if f has a jump discontinuity at x_0.

7. APPENDIX: THE GAMMA FUNCTION Γ(x)

The gamma function Γ(x) is defined by the integral

Γ(x) = ∫_0^∞ t^{x-1} e^{-t} dt,   x > 0.   (4.62)

The integral converges for x - 1 > -1, i.e. for all x > 0.

RESULT 1 (Factorial property) For x > 0, Γ(x) satisfies the recurrence

Γ(x+1) = x Γ(x).   (4.63)

Proof   Γ(x+1) = ∫_0^∞ t^x e^{-t} dt = [-t^x e^{-t}]_0^∞ + x ∫_0^∞ t^{x-1} e^{-t} dt   (x > 0)

= x ∫_0^∞ t^{x-1} e^{-t} dt = x Γ(x). ∎

Also

Γ(1) = ∫_0^∞ t^0 e^{-t} dt = [-e^{-t}]_0^∞ = 1.

Then, using the factorial property (4.63) with x = n a positive integer, we have

Γ(n+1) = n Γ(n) = n(n-1) Γ(n-1) = ... = n(n-1)(n-2) ··· 2·1·Γ(1).

Therefore

Γ(n+1) = n!   (4.64)

Note that if we set n = 0 in (4.64) we get Γ(1) = 0! = 1, which extends the result (4.64) to all integers n ≥ 0. Thus the gamma function can be thought of as a generalised factorial function, with Γ(x+1) generalising n!.
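Both the factorial property (4.63) and the special values (4.64) can be checked directly with the gamma function in Python's standard library (an illustrative aside, not part of the notes):

```python
from math import gamma, factorial

# (4.64): Gamma(n+1) = n! for integers n >= 0
checks = [abs(gamma(n + 1) - factorial(n)) / factorial(n) for n in range(10)]
print(max(checks))                    # near zero (relative error)

# (4.63): Gamma(x+1) = x Gamma(x) for non-integer x as well
x = 3.7
print(gamma(x + 1) / (x * gamma(x)))  # near 1
```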


RESULT 2   Γ(1/2) = √π.   (4.65)

Proof   Setting t = u^2 we have

Γ(1/2) = ∫_0^∞ t^{-1/2} e^{-t} dt = ∫_0^∞ u^{-1} e^{-u^2} 2u du = 2 ∫_0^∞ e^{-u^2} du.

But the Gaussian integral ∫_0^∞ e^{-u^2} du = √π / 2 and the result follows. ∎

Combining (4.63) and (4.65) we deduce

Γ(3/2) = (1/2) Γ(1/2) = (1/2)√π,

and so on for other half-integer arguments. Now, rewriting (4.63) as

Γ(x) = Γ(x+1) / x,   (4.66)

we deduce that Γ(x) → +∞ as x → 0+ (because Γ(x+1) → Γ(1) = 1). We can now draw the graph of Γ(x) for x > 0, which is shown below.

Extension of Γ(x) to x < 0: Euler's integral definition (4.62) for Γ(x) is only valid for x > 0 (since the integral does not converge for x ≤ 0). Nevertheless, we can extend the gamma function to negative values of x by using the recurrence relation in the form (4.66). For example, setting x = -1/2,

Γ(-1/2) = Γ(1/2) / (-1/2) = -2√π,

and so on. Further, putting x = -1 in (4.66) and using Γ(0) = +∞, we find

Γ(-1) = Γ(0)/(-1) → ∞,   Γ(-2) = Γ(-1)/(-2) → ∞,

and so Γ(x) is infinite at every negative integer x = -1, -2, -3, ... (approaching ±∞ alternately from either side of the vertical asymptotes, as shown in the graph).
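The extension to negative non-integer arguments agrees with the library gamma function, which implements exactly this continuation (a small check, not part of the notes):

```python
from math import gamma, pi, sqrt

# (4.65): Gamma(1/2) = sqrt(pi), and the extension (4.66) to x < 0:
# Gamma(-1/2) = Gamma(1/2) / (-1/2) = -2 sqrt(pi)
g_half = gamma(0.5)
g_neg_half = gamma(-0.5)
print(g_half, sqrt(pi))
print(g_neg_half, -2 * sqrt(pi))
```

At the negative integers themselves, `math.gamma` raises an error, mirroring the infinities described above.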


Graph of Γ(x)

Finally, repeated use of (4.63) shows that for any real number v and positive integer n

Γ(v + n + 1) = (v + n)(v + n - 1)(v + n - 2) ··· (v + 2)(v + 1) Γ(v + 1)

and then

(v + 1)(v + 2) ··· (v + n) = Γ(v + n + 1) / Γ(v + 1).   (4.67)

This last formula allows a product of factors (that increase by one each time) to be conveniently expressed in closed form as a quotient of gamma functions. The result is useful for simplifying series solutions of ODEs (such as Bessel's equation; see equation (4.39)). Note that setting v = 0 in (4.67) just recovers the familiar factorial n! = 1·2·3 ··· n = Γ(n+1).
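Formula (4.67) can be sketched numerically for a non-integer v (the helper name `rising_product` is my own):

```python
from math import gamma

def rising_product(v, n):
    """(v+1)(v+2)...(v+n) computed directly as a product."""
    out = 1.0
    for k in range(1, n + 1):
        out *= v + k
    return out

v, n = 0.5, 6
direct = rising_product(v, n)
via_gamma = gamma(v + n + 1) / gamma(v + 1)   # formula (4.67)
print(direct, via_gamma)
```

With v = 0 the same function simply rebuilds n!, as noted in the text.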


GREEN'S FUNCTIONS

Green's functions provide a powerful tool for solving inhomogeneous linear ODEs (though the technique can also be used to solve linear PDEs). Here we are concerned with the linear second-order equation

Ly ≡ a_0(x) y'' + a_1(x) y' + a_2(x) y = f(x)   (5.1)

where a_0(x) ≠ 0, a_1(x), a_2(x) are continuous on some open interval I.

1. PRELIMINARY CONSIDERATIONS

We saw in Chapter 2 that the solution of (5.1) can be split into two parts:

(1) solve the reduced homogeneous equation Ly = 0 for the CF y_c(x);

(2) solve the inhomogeneous equation Ly = f(x) for a PI y_p(x).

The general solution of equation (5.1) is then

y(x) = y_c(x) + y_p(x).

(5.2)

We shall assume that we can always find a fundamental set {y_1, y_2} of solutions of Ly = 0 so that

y_c(x) = c_1 y_1(x) + c_2 y_2(x).   (5.3)

The arbitrary constants c_1, c_2 can then be chosen so as to satisfy the given initial or boundary conditions once we have obtained a PI y_p(x).

Thus our task is to find a PI: the Green's function enables us to construct y_p(x) from a combination of the solutions {y_1, y_2} of the homogeneous equation. An effective way to obtain a Green's function for equation (5.1) is to use the Dirac delta function.

2. THE DIRAC DELTA FUNCTION (or UNIT IMPULSE FUNCTION)

Mechanical systems are often subjected to extremely large forces which act for a very short time, e.g. a golf ball hit by a club, a single large wave striking a ship, an aircraft making a "hard landing". A similar situation arises if a very high voltage is switched rapidly on and off in an electrical circuit. How are we to describe such "impulsive" phenomena mathematically?

A PLAUSIBLE MODEL

We consider a sharp blow to be a constant force F acting over a short interval of time Δt from t_0 to t_1 = t_0 + Δt (see Fig. (a)). The impulse I applied is


I = ∫_{t_0}^{t_1} F dt = F(t_1 - t_0) = F Δt.

But from Newton's second law

∫_{t_0}^{t_1} F dt = ∫_{t_0}^{t_1} d(mv)/dt dt = m v(t_1) - m v(t_0),

i.e. impulse I = change in momentum.

(a) Impulse I = F Δt.   (b) Instantaneous limit Δt → 0 (F → ∞, area under F(t) still equals I).

If the interval Δt is made arbitrarily small (Δt → 0), then the force F must become "infinitely" large in order to give the same finite impulse I = F Δt. In this case, the impulse occurs instantaneously and we see only a change in velocity, but not the process of momentum transfer. For example, when two billiard balls collide we observe just the initial and final states, but no change in their position as momentum is transferred from one ball to the other. Paul A. M. Dirac (1902-1984), the English quantum physicist, introduced a generalised function to describe the limiting situation shown in Fig. (b).

DEFINITION 1 The Dirac delta function δ(x - x_0) is characterised by the properties

1. δ(x - x_0) = 0 for x ≠ x_0, with δ(x - x_0) = ∞ at x = x_0;   (5.4)

2. ∫_{-∞}^{∞} δ(x - x_0) dx = 1.   (5.5)


Comments

(1) Equation (5.4) shows that δ(x - x_0) = δ(x_0 - x) is an even function. Also, since δ(x - x_0) = 0 for all x ≠ x_0, equation (5.5) shows that

∫_a^b δ(x - x_0) dx = 1   whenever a < x_0 < b.   (5.6)

(2) Definition 1 is somewhat intuitive. Clearly, the δ-function is not a function in the ordinary sense of elementary calculus. It is an example of a generalised function (or distribution), the theory of which provides a rigorous mathematical framework that justifies its use. We give an heuristic argument for the existence of the Dirac delta function in the Appendix at the end of this chapter.

(3) The δ-function has been used successfully to describe other physical phenomena, e.g. a force acting at a point, or point charges in electrostatics. However, it has its origin in the notion of an impulse: thus the "infinite" force F(t) = δ(t - t_0) applied at the instant t = t_0 generates a unit impulse

∫_{-∞}^{∞} F(t) dt = ∫_{-∞}^{∞} δ(t - t_0) dt = 1.

The δ-function is therefore often referred to as the unit impulse function.

(4) The mechanical vibrations of a damped mass-spring system which initially hangs at rest, and is then given an impulsive blow at a subsequent time t = t_0, can be modelled by the i.v. problem

m x'' + c x' + k x = F δ(t - t_0),   x(0) = x'(0) = 0,   (5.7)

where x(t) is the displacement of the mass from equilibrium. Similarly, an LRC-circuit in which a very large voltage E δ(t - t_0) is switched on and off at t = t_0 is described by

L Q'' + R Q' + (1/C) Q = E δ(t - t_0),   (5.8)

where Q(t) is the capacitor charge.

The Dirac δ-function has the following important property:

RESULT 1 (Sifting property) For any continuous function f(x),

∫_{-∞}^{∞} δ(x - x_0) f(x) dx = f(x_0).   (5.9)


Remarks

(1) The δ-function has the effect of "sifting out" the particular value f(x_0) from the set of values of f on (-∞, ∞). Note that (5.9) is consistent with property (2) of δ(x - x_0) when we set f(x) = 1. An heuristic proof of Result 1 is given in the Appendix.

(2) Since δ(x - x_0) = 0 for x ≠ x_0, it follows from (5.9) that

∫_a^b δ(x - x_0) f(x) dx = f(x_0)   whenever a < x_0 < b.   (5.10)

Moreover, as δ(x - x_0) is even, we have the equivalent sifting property for x_0 ∈ (a, b):

∫_{-∞}^{∞} δ(x_0 - x) f(x) dx = f(x_0) = ∫_a^b δ(x_0 - x) f(x) dx.   (5.11)
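The sifting property can be illustrated by replacing δ with a narrow Gaussian of unit area and integrating numerically; this is a sketch of the limiting argument, with my own helper names:

```python
from math import exp, pi, sqrt

def delta_eps(x, eps):
    """Narrow Gaussian of unit area approximating delta(x) as eps -> 0."""
    return exp(-x * x / (2 * eps * eps)) / (eps * sqrt(2 * pi))

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

x0, eps = 0.3, 0.01
f = lambda x: x ** 2 + 1.0
sift = simpson(lambda x: delta_eps(x - x0, eps) * f(x), -1.0, 1.0)
print(sift, f(x0))   # sifting property (5.9): the integral picks out f(x0)
```

Shrinking eps further drives the integral ever closer to f(x_0), in line with (5.9).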

3. GREEN'S FUNCTIONS

We wish to solve the inhomogeneous equation (5.1) Ly = f(x) for different forcing functions f(x). Suppose we can solve the special case when f(x) = δ(x - t): more precisely, we find the solution G(x, t) of

LG(x, t) = δ(x - t),   a < x < b,   (5.12)

for each fixed t ∈ (a, b) (with -∞ ≤ a < b ≤ ∞). The linear operator L acts on G(x, t) as a function of x, whereas t is merely a parameter. Then we can solve Ly = f(x) formally as

y(x) = ∫_a^b G(x, t) f(t) dt   (5.13)

since

Ly(x) = L ∫_a^b G(x, t) f(t) dt = ∫_a^b LG(x, t) f(t) dt = ∫_a^b δ(x - t) f(t) dt = f(x)   (5.14)

where we have used the sifting property (5.11). Hence, once we have found a solution G(x, t) of equation (5.12), then (5.13) gives a PI y_p(x) of the inhomogeneous equation (5.1). We call G(x, t) a Green's function (GF) for the differential operator L.


DEFINITION 2 (Green's function) Suppose G(x, t) is a solution of

LG(x, t) = δ(x - t)   in   a < x < b   (5.15)

for any fixed t ∈ (a, b). Then G(x, t) is called a Green's function (GF) for the differential operator L.

Comments

(1) Bearing in mind the analogy between equations (5.7) and (5.15), we can interpret the Green's function G(x, t) as the response of the system (described by the differential operator L) to a unit impulse at x = t.

(2) The GF associated with each of equations (5.7) and (5.8) describes the physical response of these systems to impulsive inputs, and so are important solutions in themselves - each is said to be a fundamental solution of the system.

(3) In effect, equation (5.13) states that the solution of Ly = f(x) can be obtained by "adding up" the entirety of responses G(x, t)f(t) to impulses δ(x - t)f(t) applied at each t ∈ (a, b).

(4) It should be emphasised that (5.15) is a symbolic equation in the sense that the left-hand side is an ordinary function, whilst the right-hand side is not.

GREEN'S FUNCTION FOR INITIAL-VALUE PROBLEMS

We consider the i.v. problem

Ly = f(x),   a < x < b,   y(a) = y'(a) = 0.   (5.16)

Since the GF satisfies LG = δ(x - t), G(x, t) is simply a solution of the homogeneous equation LG = 0 in each of the subintervals (a, t) and (t, b). Hence

G(x, t) = A y_1(x) + B y_2(x),   a < x < t,   (5.17)

G(x, t) = C y_1(x) + D y_2(x),   t < x < b,   (5.18)

where {y_1, y_2} is a fundamental set. The four arbitrary constants A(t), B(t), C(t) and D(t) depend on the (fixed) parameter t ∈ (a, b) and must be determined.


We insist that G(x, t) satisfies the same homogeneous i.c.'s as y, i.e.

G(a, t) = (d/dx)G(a, t) = 0.   (5.19)

Substituting for G from (5.17) into (5.19) gives A = B = 0, so that G(x, t) ≡ 0 for a ≤ x < t.

We now insist that G(x, t) is continuous at x = t, and therefore

G(t, t) = C y_1(t) + D y_2(t) = 0.   (1)

We need a second equation for C and D: this comes from integrating the equation (5.15) for G with respect to x over the infinitesimal interval (t-, t+) containing the fixed point t (i.e. we integrate over (t - ε, t + ε) and let ε → 0). Since a_i(x) and G(x, t) are continuous at x = t, we obtain

a_0(t) [dG/dx]_{t-}^{t+} = ∫_{t-}^{t+} δ(x - t) dx = 1

where we have made use of equation (5.6). Hence,

(d/dx)G(t+, t) - (d/dx)G(t-, t) = 1 / a_0(t)   (5.20)

and since G(x, t) ≡ 0 in a ≤ x < t, this last result becomes (replacing t+ by t)

(d/dx)G(t, t) = 1 / a_0(t).   (5.21)

Substituting from (5.18) into (5.21) we get

C y_1'(t) + D y_2'(t) = 1 / a_0(t)   (2)

which gives a second equation for C and D.

But the Wronskian W(t) = W(y_1(t), y_2(t)) ≠ 0 (why?) so that equations (1) and (2) have the unique solution (check)

C(t) = - y_2(t) / (a_0(t) W(t)),   D(t) = y_1(t) / (a_0(t) W(t))   (5.22)


and then (5.18) yields

G(x, t) = [y_1(t) y_2(x) - y_1(x) y_2(t)] / (a_0(t) W(t)),   t ≤ x < b.   (5.23)

Summary: the function G(x, t) defined by

G(x, t) = 0   for a ≤ x < t,
G(x, t) = [y_1(t) y_2(x) - y_1(x) y_2(t)] / (a_0(t) W(t))   for t ≤ x < b,   (5.25)

is the Green's function for the initial-value problem (5.16), where W(t) is the Wronskian of a fundamental set {y_1, y_2} of solutions of Ly = 0. The insertion of G(x, t) into the integral (5.13) yields the particular integral

y_p(x) = ∫_a^x G(x, t) f(t) dt   (5.26)

of the inhomogeneous equation Ly = f(x), and its general solution is then

y(x) = c_1 y_1(x) + c_2 y_2(x) + ∫_a^x G(x, t) f(t) dt.   (5.27)

Remarks

(1) The solution y_p(x) in (5.26) is the PI satisfying the homogeneous i.c.'s y_p(a) = 0 and y_p'(a) = 0. This follows by using Leibniz's formula (5.42) in the Appendix and the fact that G(x, x) = 0 (see Problems 5, Qu. 2).

(2) The functional form of the GF G(x, t) does not depend on the initial point x = a or the end point x = b, i.e. it is generic to the equation Ly = f(x). The initial point x = a is chosen according to the given i.v. problem.

Properties of G(x, t): the GF G(x, t) is the solution of the i.v. problem

LG(x, t) = δ(x - t),   x > a,   G(a, t) = (d/dx)G(a, t) = 0,   (5.28)

for fixed t ∈ (a, x) (we have suppressed the arbitrary upper limit b, which may be +∞).


By construction, it has the properties

1. G(x, t) ≡ 0 for a ≤ x ≤ t.   (5.29)

2. (d/dx)G(t, t) = 1 / a_0(t).   (5.30)

The properties (1) and (2), along with equation (5.28), are sufficient to determine G(x, t).

The Green's function G(x, t) is pictured schematically opposite. Its derivative has a finite jump discontinuity of magnitude 1/a_0(t) at x = t.

This is just as we expect: the system is at rest until the unit impulse is applied at x = t, when there is an instantaneous transfer of momentum to which the system responds in x > t.

Observe that the second derivative d²G/dx² is "infinite" at the step-up. It is this term that produces the δ-function singularity at x = t on the left-hand side of equation (5.15) to "balance" the infinity of δ(x - t) on the right-hand side at x = t.

[Figures: sketch of G(x, t) against x on (a, b), zero up to x = t; below it, dG/dx, with a jump of 1/a_0(t) at x = t.]

The great advantage of the Green's function method is that we can solve the inhomogeneous equation Ly = f(x) for different forcing terms f(x) using the same GF G(x, t) and, furthermore, f(x) need not be a continuous function!

Example 5.1 Construct the Green's function for the i.v. problem associated with the equation

y'' + y = f(x) in x > 0,

and hence write down its general solution. Use this Green's function to

(i) solve the i.v. problem y'' + y = x,  y(0) = 1,  y'(0) = -1;

(ii) obtain the general solution of y'' + y = sec x.
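For y'' + y, formula (5.25) with y_1 = cos x, y_2 = sin x (so a_0 = 1, W = 1) gives G(x, t) = sin(x - t) for t ≤ x. The Python sketch below (a numerical check, not part of the notes) evaluates the particular integral (5.26) by quadrature for the forcing f(t) = t of part (i) and compares it with x - sin x, which satisfies y'' + y = x with homogeneous i.c.'s:

```python
import math

def green(x, t):
    # from (5.25) with y1 = cos, y2 = sin (a0 = 1, W = 1):
    # G(x, t) = cos t sin x - cos x sin t = sin(x - t) for t <= x
    return math.sin(x - t) if t <= x else 0.0

def yp(x, f, n=2000):
    # particular integral (5.26): y_p(x) = integral_0^x G(x, t) f(t) dt,
    # computed with the composite trapezoidal rule
    h = x / n
    s = 0.5 * (green(x, 0.0) * f(0.0) + green(x, x) * f(x))
    for k in range(1, n):
        t = k * h
        s += green(x, t) * f(t)
    return s * h

# with f(t) = t the PI agrees with x - sin x, which has
# y_p(0) = y_p'(0) = 0, as Remark (1) below (5.26) predicts
for x in [0.5, 1.0, 2.0]:
    assert abs(yp(x, lambda t: t) - (x - math.sin(x))) < 1e-6
```

The same `yp` with `f = lambda t: 1/math.cos(t)` handles part (ii)'s forcing sec x without any change to the Green's function.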

We can always construct the GF from first principles by following the procedure used above to obtain the general form of G(x, t) in equation (5.25). We illustrate this in the next example.

Example 5.2 Use a Green's function to solve the i.v. problem

x² y'' + x y' - y = x,    y(1) = y'(1) = 0.


Alternatively, we can simply use the defining equation (5.28) and the properties (5.29)-(5.30) to construct G(x, t). We adopt this approach in the following physical application (you are encouraged to work through the details yourself):

Example 5.3 A damped mechanical system, given a unit impulse blow at time t_0 > 0, is described by the equation

x'' + 2x' + 2x = δ(t - t_0),    t > 0.

(a) If the i.c.'s are x(0) = x'(0) = 0, describe the displacement x(t) for t > 0 with the aid of a graph.

(b) Use the solution to (a) to find the displacement when this same system is subjected to an external force f(t) and i.c.'s x(0) = 1, x'(0) = -1.

Solution (a) The solution of the i.v. problem

Lx = x'' + 2x' + 2x = δ(t - t_0),    x(0) = x'(0) = 0,

is just the Green's function G(t, t_0) with t_0 > 0 fixed. Hence, by Property 1

G(t, t_0) = 0,    0 ≤ t ≤ t_0.

t_0 < t:  LG = 0  ⟹  G(t, t_0) = e^{-t} (C cos t + D sin t)  (check)

But continuity at t = t_0 gives

C cos t_0 + D sin t_0 = 0.        (1)

Property 2 with a_0(t) = 1 and equation (1) give (check)

C sin t_0 - D cos t_0 = -e^{t_0}.        (2)

Solving (1) and (2) we obtain C = -e^{t_0} sin t_0, D = e^{t_0} cos t_0, i.e.

G(t, t_0) = e^{-t} (-e^{t_0} sin t_0 cos t + e^{t_0} cos t_0 sin t) = e^{-(t - t_0)} sin(t - t_0),

i.e.

x(t) = G(t, t_0) = { 0,                            0 ≤ t ≤ t_0,
                   { e^{-(t - t_0)} sin(t - t_0),  t_0 < t.

The system remains at rest until it receives the blow at t_0, which causes the velocity to jump from 0 to 1. It then vibrates with its natural frequency and rapidly decaying amplitude (eventually returning to the initial rest state as t → ∞).


(b) Using G(t, t_0) we can solve the inhomogeneous equation Lx = f(t) for the PI

x_p(t) = ∫_0^t G(t, t_0) f(t_0) dt_0 = ∫_0^t e^{-(t - t_0)} sin(t - t_0) f(t_0) dt_0

with x_p(0) = x_p'(0) = 0. The general solution is x = x_c + x_p, i.e.

x(t) = e^{-t} (A cos t + B sin t) + x_p(t).
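The impulse response found in (a) can be checked by finite differences. The Python sketch below (an illustration, not part of the notes) verifies that G = e^{-s} sin s, written in the shifted time s = t - t_0, satisfies x'' + 2x' + 2x = 0 away from the blow, is continuous at s = 0, and picks up unit velocity there:

```python
import math

def g(s):
    # the response from (a) in shifted time s = t - t0:
    # G(t, t0) = e^{-s} sin s for s > 0, zero before the blow
    return math.exp(-s) * math.sin(s) if s > 0 else 0.0

h = 1e-5
# the homogeneous equation holds away from the impulse
# (central-difference derivatives)
for s in [0.3, 1.0, 2.5]:
    d1 = (g(s + h) - g(s - h)) / (2 * h)
    d2 = (g(s + h) - 2 * g(s) + g(s - h)) / h ** 2
    assert abs(d2 + 2 * d1 + 2 * g(s)) < 1e-4

# the blow leaves the displacement continuous but kicks the
# velocity from 0 to 1
assert g(0.0) == 0.0
assert abs((g(h) - g(0.0)) / h - 1.0) < 1e-3
```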

GREEN'S FUNCTION FOR BOUNDARY-VALUE PROBLEMS

We seek a solution of the two-point b.v. problem

Ly = a_0(x) y'' + a_1(x) y' + a_2(x) y = f(x),    a < x < b,

(5.31)

satisfying the linear homogeneous b.c.'s

α_1 y(a) + α_2 y'(a) = 0,    β_1 y(b) + β_2 y'(b) = 0,        (5.32)

where (α_1, α_2) and (β_1, β_2) are given constants which are not both zero. The Green's function G(x, t) for this b.v. problem, if it exists, is defined to be the solution of

1. LG = δ(x - t),    a < x < b,    t ∈ (a, b) fixed.

2. α_1 G(a, t) + α_2 G'(a, t) = 0  (i),
   β_1 G(b, t) + β_2 G'(b, t) = 0  (ii),        (G' = d/dx G(x, t))

3. G(x, t) is continuous at x = t.

Integrating the equation for G over the (infinitesimal) interval (t⁻, t⁺), and using the continuity of a_j(x) and G at x = t, we obtain the further property (see equation (5.20))

4.  [dG/dx]_{t⁻}^{t⁺} = 1/a_0(t).

As before, we must solve LG = 0 in the two subintervals (a, t) and (t, b). We use the two solutions (5.17) and (5.18) to satisfy the b.c.'s (i) and (ii) at x = a and x = b, respectively, and assume that this yields the two l.i. solutions u(x), v(x), say. Then the GF


G(x, t) = { c_1 u(x),  a ≤ x < t,
          { c_2 v(x),  t < x ≤ b,        (5.33)

where c_1, c_2 are arbitrary constants, satisfies the conditions 1 and 2.

We now use the properties 3 and 4 to determine c_1 and c_2: these require, respectively,

c_1 u(t) - c_2 v(t) = 0,        (1)

-c_1 u'(t) + c_2 v'(t) = 1/a_0(t).        (2)

But, since the Wronskian W(t) = W(u(t), v(t)) ≠ 0 (why?), these equations have the unique solution (check)

c_1 = v(t) / (a_0(t) W(t)),    c_2 = u(t) / (a_0(t) W(t)).

Substituting c_1, c_2 into (5.33), we obtain

G(x, t) = { u(x) v(t) / (a_0(t) W(t)),  a ≤ x ≤ t,
          { u(t) v(x) / (a_0(t) W(t)),  t ≤ x ≤ b,        (5.34)

which is the Green's function for the two-point b.v. problem (5.31)-(5.32). Inserting G(x, t) into equation (5.13) we obtain the PI

y_p(x) = ∫_a^x [u(t) v(x) f(t) / (a_0(t) W(t))] dt + ∫_x^b [u(x) v(t) f(t) / (a_0(t) W(t))] dt,        (5.35)

where W(t) = W(u(t), v(t)).

Remarks

(1) The GF (5.34) is pictured schematically opposite [figure: Green's function G(x, t)]. It is easily seen that G(x, t) satisfies the continuity Property 3, and that its derivative has the jump discontinuity of magnitude 1/a_0(t) of Property 4 (exercise).

(2) The GF is a symmetric function i.e. G(x, t) = G(t, x).

(3) The PI y_p(x) is the solution that satisfies the given homogeneous b.c.'s (5.32) at x = a and x = b (see Problems 5, Q.7).


Example 5.4 Find the Green's function for the two-point b.v. problem

y'' + y = f(x),    y(0) = y(π/2) = 0,

and hence find its solution. Use this solution to

(a) solve the b.v. problem y'' + y = cosec x,  y(0) = y(π/2) = 0;

(b) solve the b.v. problem y'' + y = f(x),  y(0) = 1, y(π/2) = 2.

Remark: it is usually better to construct the GF for a b.v. problem from first principles (using the properties 1-4) rather than find solutions u(x) and v(x) satisfying the b.c.'s at x = a and x = b, respectively, and then use the formula (5.34). In Example 5.4, we could have taken u(x) = sin x, v(x) = cos x: one can now check that (5.34) gives G(x, t) and that (5.35) gives the solution of the given b.v. problem (exercise).
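With the choice u = sin x, v = cos x from the remark, a_0(t) W(t) = sin t (-sin t) - cos t cos t = -1, and (5.34) is explicit. The Python sketch below applies (5.35) by quadrature; as a sanity check it uses the forcing f = 1 (an assumed test case, not part of the example), for which y = 1 - cos x - sin x satisfies the b.v. problem:

```python
import math

B = math.pi / 2  # right-hand endpoint of the b.v. problem

def G(x, t):
    # (5.34) with u = sin x (u(0) = 0), v = cos x (v(B) = 0),
    # and a0(t) W(t) = -1
    return -math.sin(x) * math.cos(t) if x <= t else -math.sin(t) * math.cos(x)

def yp(x, f, n=2000):
    # (5.35): y_p(x) = integral_0^B G(x, t) f(t) dt (trapezoidal rule)
    h = B / n
    s = 0.5 * (G(x, 0.0) * f(0.0) + G(x, B) * f(B))
    for k in range(1, n):
        t = k * h
        s += G(x, t) * f(t)
    return s * h

# sanity check with f = 1: y = 1 - cos x - sin x solves
# y'' + y = 1 with y(0) = y(B) = 0
for x in [0.3, 0.8, 1.2]:
    assert abs(yp(x, lambda t: 1.0) - (1 - math.cos(x) - math.sin(x))) < 1e-5
```

Note that `G` is symmetric in its arguments, in line with Remark (2) above.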

You are urged to work through the next example, filling in the missing details yourself.

Example 5.5 (a) Use a Green's function to solve the b.v. problem

x y'' + y' = f(x),    y'(0) = y(1) = 0.

(b) Use the solution to (a) to solve

x y'' + y' = 2x,    y'(0) = 0,  y(1) = 2.

Solution (a)  Ly ≡ x y'' + y' = 0  ⟹  y = A ln x + B  (check)

GF: solve  LG(x, t) = δ(x - t),    0 < x < 1,    t ∈ (0, 1) fixed,

G'(0, t) = G(1, t) = 0,    where G'(x, t) = d/dx G(x, t).

0 < x < t:  LG = 0  ⟹  G(x, t) = a ln x + b,  G'(x, t) = a/x;
the b.c. G'(0, t) = 0 forces a = 0, so G(x, t) = b.

t < x < 1:  LG = 0  ⟹  G(x, t) = α ln x + β;
the b.c. G(1, t) = β = 0, so G(x, t) = α ln x.

Continuity at x = t:  G(t, t) = b = α ln t.        (i)

Integrating LG = δ(x - t) on [t⁻, t⁺] gives

[G'(x, t)]_{t⁻}^{t⁺} = 1/t:    α/t - 0 = 1/t,

so α = 1 and b = ln t (using (i)).

Hence,

G(x, t) = { ln t,  0 ≤ x < t,
          { ln x,  t ≤ x ≤ 1,

and then

y_p(x) = ∫_0^1 G(x, t) f(t) dt = ln x ∫_0^x f(t) dt + ∫_x^1 f(t) ln t dt

is the solution of the given b.v. problem with y_p'(0) = y_p(1) = 0.

(b) Set f(x) = 2x:

y_p = ln x ∫_0^x 2t dt + 2 ∫_x^1 t ln t dt = (x² - 1)/2

GS:  y(x) = A ln x + B + (x² - 1)/2,    A, B arbitrary.

b.c.'s:  y'(0) = 0  ⟹  A = 0;    y(1) = 2  ⟹  B = 2.
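As a check on part (b), the Python sketch below (an illustration, not part of the notes) evaluates y_p = ln x ∫_0^x 2t dt + 2 ∫_x^1 t ln t dt by quadrature and compares it with (x² - 1)/2, which is a particular integral of x y'' + y' = 2x (it differs from x²/2 only by a constant, itself a homogeneous solution):

```python
import math

def yp(x, n=2000):
    # y_p(x) = ln x * integral_0^x 2t dt + 2 * integral_x^1 t ln t dt
    first = math.log(x) * x ** 2          # integral_0^x 2t dt = x^2
    h = (1.0 - x) / n                     # trapezoidal rule for the rest
    s = 0.5 * (x * math.log(x) + 1.0 * math.log(1.0))
    for k in range(1, n):
        t = x + k * h
        s += t * math.log(t)
    return first + 2 * s * h

# (x^2 - 1)/2 is a PI: with y = x^2/2, x*y'' + y' = x*1 + x = 2x
for x in [0.25, 0.5, 0.9]:
    assert abs(yp(x) - (x ** 2 - 1) / 2) < 1e-6
```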



APPENDIX: THE DIRAC DELTA FUNCTION (OPTIONAL)

We may construct the Dirac delta function using the following heuristic argument based on the notion of an impulse (refer to the discussion in Section 2). We define the rectangular pulse function

Δ_ε(x - x_0) = { 1/(2ε),  |x - x_0| < ε,
               { 0,       |x - x_0| ≥ ε,        (5.36)

with ε > 0 and x_0 fixed. The graph of this function is shown in Fig. (a). The area subtended by the function is

∫_{-∞}^{∞} Δ_ε(x - x_0) dx = (1/(2ε)) · 2ε = 1,        (5.37)

i.e. Δ_ε(x - x_0) represents a unit impulse centred on x_0 for any ε.


(a) Rectangular pulse Δ_ε(x - x_0) [figure: height 1/(2ε), width 2ε, area 1].

(b) Behaviour of Δ_ε(x - x_0) as ε → 0 [figure].

As ε gets smaller, the pulse becomes increasingly tall and thin (Fig. (b)); in the limit ε → 0 we formally obtain the idealised function

δ(x - x_0) = { 0,  x ≠ x_0,
             { ∞,  x = x_0,        (5.38)

with

∫_{-∞}^{∞} δ(x - x_0) dx = lim_{ε→0} ∫_{-∞}^{∞} Δ_ε(x - x_0) dx = 1,        (5.39)

which are the defining properties (5.4) and (5.5) of the δ-function. Thus, we can think of the δ-function as the limit of a sequence of unit impulse functions which converge to zero everywhere except at x_0.

Remark: any similar sequence of "spiked" functions can be used to approximate δ(x - x_0). All we require is that the functions be "centred" on x_0, subtend unit area and converge to zero for all x ≠ x_0 (in some suitable limit). Two alternative sequences are:

Triangular pulse

Δ_ε(x - x_0) = { (1/ε²)(x - x_0 + ε),  x_0 - ε < x ≤ x_0,
               { (1/ε²)(x_0 + ε - x),  x_0 < x < x_0 + ε,
               { 0,                    |x - x_0| ≥ ε.


Gaussian pulse

Δ_ε(x - x_0) = (1/(ε√π)) e^{-(x - x_0)²/ε²},

again subtending unit area.

PROOF OF RESULT 1 (Sifting property)

Using the rectangular pulse (5.36) to approximate δ(x - x_0), we have

∫_{-∞}^{∞} Δ_ε(x - x_0) f(x) dx = (1/(2ε)) ∫_{x_0-ε}^{x_0+ε} f(x) dx = (1/(2ε)) · f(ξ) · 2ε = f(ξ),

where x_0 - ε < ξ < x_0 + ε (and we have used the mean-value theorem for integrals). Now let ε → 0 in this last equation and use (5.38) to obtain

∫_{-∞}^{∞} δ(x - x_0) f(x) dx = lim_{ε→0} ∫_{-∞}^{∞} Δ_ε(x - x_0) f(x) dx = lim_{ε→0} f(ξ) = f(x_0),

since ξ → x_0 as ε → 0.  ∎
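The limiting argument above is easy to see numerically. The Python sketch below (an illustration, not part of the notes) integrates the rectangular pulse against f = cos for shrinking ε; since the pulse has height 1/(2ε) on (x_0 - ε, x_0 + ε), the integral is just the average of f over that interval, and it approaches f(x_0):

```python
import math

def sift(f, x0, eps, n=4000):
    # integral of Delta_eps(x - x0) f(x) dx over the pulse's support:
    # the average of f over (x0 - eps, x0 + eps), by the trapezoidal rule
    a, b = x0 - eps, x0 + eps
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h / (2 * eps)

x0 = 1.3
for eps in [0.1, 0.01, 0.001]:
    # the sifted value approaches f(x0) = cos(1.3) as eps -> 0
    assert abs(sift(math.cos, x0, eps) - math.cos(x0)) < eps ** 2
```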

RESULT 2 (Leibniz's formula for differentiating an integral) Let a(x), b(x) and f(x, t) be continuous and differentiable, and suppose

I(x) = ∫_{a(x)}^{b(x)} f(x, t) dt.        (5.40)

Then

dI/dx = ∫_{a(x)}^{b(x)} (∂f/∂x) dt + f(x, b(x)) (db/dx) - f(x, a(x)) (da/dx).        (5.41)


Special cases

1. a = constant, b(x) = x:

I(x) = ∫_a^x f(x, t) dt    and    dI/dx = ∫_a^x (∂f/∂x) dt + f(x, x).        (5.42)
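Special case 1 can be spot-checked numerically. The Python sketch below uses the hypothetical integrand f(x, t) = sin(x t) (chosen for this check, not from the notes), evaluates dI/dx by the Leibniz formula (5.42), and compares with a central-difference derivative of I:

```python
import math

def f(x, t):
    # hypothetical integrand for the check
    return math.sin(x * t)

def I(x, n=2000):
    # I(x) = integral_0^x f(x, t) dt (trapezoidal rule)
    h = x / n
    s = 0.5 * (f(x, 0.0) + f(x, x))
    for k in range(1, n):
        s += f(x, k * h)
    return s * h

def dI_leibniz(x, n=2000):
    # (5.42): dI/dx = integral_0^x df/dx dt + f(x, x),
    # with df/dx = t cos(x t)
    h = x / n
    s = 0.5 * (0.0 + x * math.cos(x * x))
    for k in range(1, n):
        t = k * h
        s += t * math.cos(x * t)
    return s * h + f(x, x)

dx = 1e-5
for x in [0.7, 1.4]:
    numeric = (I(x + dx) - I(x - dx)) / (2 * dx)
    assert abs(numeric - dI_leibniz(x)) < 1e-5
```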

2. a, b constants:

I(x) = ∫_a^b f(x, t) dt    and    dI/dx = ∫_a^b (∂f/∂x) dt.        (5.43)
