

MATHEMATICS-III
MATH F211
Session: 2016-2017
Note: Some concepts of Differential Equations are briefly described here just to help the students. Therefore, the following study material is expected to be useful but not exhaustive for the
Mathematics-III course. For detailed study, the students are advised to attend the lecture/tutorial
classes regularly, and consult the text book prescribed in the hand out of the course.
Textbook: G.F. Simmons, Differential Equations with Applications and Historical Notes, TMH,
2nd ed., 1991.

Appeal: Please do not print this document. Develop a habit of reading the soft copy of the notes.

Dr. Suresh Kumar, Department of Mathematics, BITS Pilani, Pilani Campus

Contents

1 Preliminaries of Differential Equations
   1.1 Differential equations and their classifications
       1.1.1 Classification based on number of independent variables
       1.1.2 Classification based on degree
   1.2 Solutions of DE
       1.2.1 Explicit solution
       1.2.2 Implicit solution
       1.2.3 Formal solution
       1.2.4 General and particular solutions
       1.2.5 Singular solution
       1.2.6 Initial and boundary value problems

2 First Order Differential Equations
   2.1 Exact methods for solving first order DE
       2.1.1 Variable separable DE
       2.1.2 DE reducible to variable separable
   2.2 Exact DE
   2.3 Integrating Factor
       2.3.1 Existence and uniqueness of Integrating Factor
       2.3.2 IF of first order linear DE
       2.3.3 Bernoulli's DE
       2.3.4 IF of homogeneous DE
   2.4 Clairaut's DE
   2.5 Existence and uniqueness of solution of IVP

3 Second Order DE
   3.1 Second Order LDE
   3.2 Use of known solution to find another
   3.3 Homogeneous LDE with Constant Coefficients
   3.4 Method of Undetermined Coefficients
   3.5 Method of Variation of Parameters
   3.6 Operator Methods

4 Qualitative Behavior of Solutions
   4.1 Sturm Separation Theorem
   4.2 Normal form of DE

5 Power Series Solutions and Special Functions
   5.1 Some Basics of Power Series
   5.2 Power series solution
   5.3 Gauss's Hypergeometric Equation

6 Fourier Series
   6.1 Introduction
   6.2 Dirichlet's conditions for convergence
   6.3 Fourier series for even and odd functions
   6.4 Fourier series on arbitrary intervals

7 Boundary Value Problems
   7.1 One dimensional wave equation
   7.2 One dimensional heat equation
   7.3 The Laplace equation
   7.4 Sturm-Liouville Boundary Value Problem (SLBVP)
       7.4.1 Orthogonality of eigen functions

8 Some Special Functions
   8.1 Legendre Polynomials
   8.2 Gamma Function
   8.3 Bessel Functions
       8.3.1 Second solution of Bessel's DE
       8.3.2 Properties of Bessel Functions
   8.4 Orthogonal properties of Bessel functions
       8.4.1 Fourier-Bessel Series

9 Laplace Transforms
   9.1 Definitions of Laplace and inverse Laplace transforms
   9.2 Laplace transforms of some elementary functions
   9.3 Sufficient conditions for the existence of Laplace transform
   9.4 Some more Laplace transform formulas
       9.4.1 Laplace transform of a function multiplied by e^(ax)
       9.4.2 Laplace transform of derivatives of a function
       9.4.3 Laplace transform of integral of a function
       9.4.4 Laplace transform of a function multiplied by x
       9.4.5 Laplace transform of a function divided by x
   9.5 Solution of DE using Laplace transform
   9.6 Solution of integral equations
   9.7 Heaviside or Unit Step Function
   9.8 Dirac Delta Function or Unit Impulse Function

10 Systems of First Order Equations
   10.1 Solution of homogeneous system
   10.2 Solution of non-homogeneous system
   10.3 Variable elimination approach

Chapter 1
Preliminaries of Differential Equations
Differential equations and their classifications
Differential equation
The mathematical description of any dynamical or physical phenomenon naturally introduces independent and dependent variables. Suppose we blow air into a balloon that inflates in a spherical shape. Then the radius r of the spherical balloon depends on the amount of air blown in, and is therefore at our discretion. So we may treat the variable r as the independent variable. We know that the surface area S of the spherical balloon depends on r via the relation S = 4πr². So, in this example, r is the independent variable and S is the dependent variable. Also, the rate of change of the surface area S of the balloon with respect to its radius r is given by the equation dS/dr = 8πr. It is a differential equation that gives us the rate of change of S with respect to r for any given value of r.

A differential equation may involve more than one independent or dependent variable. For instance, in the above balloon example, if we allow the variable r to depend on time t, then the time variable t is independent while r and S are both dependent variables. Also, the governing differential equation dS/dr = 8πr can be written as

dS/dt = 8πr dr/dt.

Formally, we define a differential equation as follows: Any equation (non-identity) involving derivatives of dependent variable(s) with respect to independent variable(s) is called a differential equation (DE).
Hereafter, we shall use the abbreviation DE for the phrase differential equation and for its plural differential equations as well.
Order and Degree
The order of the highest order derivative occurring in a DE is called its order. The power or exponent of the highest order derivative occurring in the DE is called its degree, provided the DE is made free from radicals and fractions in its derivatives.
Ex. The order of (y'')³ + 2y' + 3y = x is 2 and its degree is 3.
Ex. The order of y^(4) + 2(y')⁵ + 3y = 0 is 4 and its degree is 1.
Ex. (y''')^(1/2) + y' = 0 can be rewritten as y''' − (y')² = 0. So its order is 3 and its degree is 1.
Classification based on number of independent variables

DE are classified into two categories based on the number of independent variables.

Ordinary DE
A DE involving derivatives with respect to one independent variable is called an ordinary DE.
An ordinary DE of order n, in general, can be expressed in the form
f(x, y, y', ..., y^(n)) = 0.
In particular, a first order ordinary DE is of the form
f(x, y, y') = 0,
while a second order ordinary DE is of the form
f(x, y, y', y'') = 0.
Ex. y = xy' + (y')² is a first order ordinary DE.
Ex. y' + xy + x² = 0 is a first order ordinary DE.
Ex. (y'')³ + 2y' + 3y = x is a second order ordinary DE.
Partial DE
A DE involving partial derivatives with respect to two or more independent variables is called a partial DE.
For example, the well known Laplace equation
∂²u/∂x² + ∂²u/∂y² = 0
is a partial DE, which carries the second order partial derivatives of the dependent variable u(x, y) with respect to the independent variables x and y.
Note: Hereafter, we shall talk about ordinary DE only. So DE shall mean ordinary DE unless otherwise stated.

Classification based on degree

DE are classified into two categories based on the degree.

Linear DE
A DE is said to be linear if the dependent variable and its derivatives occur in the first degree and are not multiplied together.
A linear DE of order n can be expressed in the form
a0(x) y^(n) + a1(x) y^(n−1) + ... + a_(n−1)(x) y' + a_n(x) y = b(x),
where a0(x) is not identically 0.
For example, y'' + 2y' + 3y = x is a second order linear DE.

Non-linear DE
If a DE is not linear, then it is said to be non-linear.
Ex. y = xy' + (y')² is a first order non-linear DE as y' occurs with degree 2.
Ex. yy'' + 4y = 3x² is a second order non-linear DE as y and y'' occur as a product in the first term.
Ex. y'' + 2y' + 3y² = 0 is a second order non-linear DE as y occurs with degree 2.

Solutions of DE
Consider the nth order DE
f(x, y, y', ..., y^(n)) = 0.    (1.1)
We define the following types of solutions of (1.1).

Explicit solution
A function g defined on an interval I is said to be an explicit solution of (1.1) on the interval I if
f(x, g, g', ..., g^(n)) = 0 for all x ∈ I.
For example, y = sin x is an explicit solution of the DE y'' + y = 0 on (−∞, ∞) since y = sin x implies that y'' + y = −sin x + sin x = 0 for all x ∈ (−∞, ∞).

Implicit solution
A relation h(x, y) = 0 is said to be an implicit solution of (1.1) on an interval I if h(x, y) = 0 yields at least one explicit solution g of (1.1) on I.
For example, x² + y² = 1 is an implicit solution of the DE yy' + x = 0 on (−1, 1). For, x² + y² = 1 yields two functions y = √(1 − x²) and y = −√(1 − x²), both of which can be verified to be explicit solutions of yy' + x = 0 on (−1, 1).

Formal solution
A relation h(x, y) = 0 is said to be a formal solution of (1.1) on an interval I if h(x, y) = 0 does not yield any explicit solution g of (1.1) on I but satisfies (1.1) on I.
For example, x² + y² + 1 = 0 is a formal solution of the DE yy' + x = 0. For, the implicit differentiation of the relation x² + y² + 1 = 0 with respect to x yields the DE yy' + x = 0. However, x² + y² + 1 = 0 gives y² = −1 − x². So y is not real for any real x. This in turn implies that x² + y² + 1 = 0 does not yield any explicit solution of the given DE.

General and particular solutions

A relation h(x, y, c1, c2, ..., cn) = 0, involving n arbitrary constants c1, c2, ..., cn, is said to be a general solution of (1.1) on an interval I if h(x, y, c1, c2, ..., cn) = 0 satisfies (1.1) identically on I. Note that the number of arbitrary constants in the general solution is equal to the order n of the DE (1.1). Further, a solution of (1.1) obtained by choosing particular values of the arbitrary constants is called a particular solution of (1.1).

For example, y = c1 sin x + c2 cos x is the general solution of the DE y'' + y = 0 on (−∞, ∞) since y = c1 sin x + c2 cos x leads to y'' + y = c1(−sin x + sin x) + c2(−cos x + cos x) = 0 for all x ∈ (−∞, ∞). Also, y = sin x is a particular solution of this DE as it can be obtained from the general solution by choosing c1 = 1 and c2 = 0.

Singular solution
A singular solution of (1.1) is a solution of (1.1) which cannot be obtained from the general solution h(x, y, c1, c2, ..., cn) = 0 of (1.1) by choosing particular values of the arbitrary constants c1, c2, ..., cn.
For example, y = cx + c² is the general solution of the DE y = xy' + (y')². It is easy to verify that y = −x²/4 is also a solution of this DE. Further, y = −x²/4 cannot be retrieved from y = cx + c² for any choice of the arbitrary constant c. Hence, y = −x²/4 is a singular solution of the DE y = xy' + (y')².
Note: Considering the types of solutions discussed above, we can say that a solution of (1.1) is any relation, explicit or implicit, between x and y that does not involve derivatives and satisfies (1.1) identically.

Initial and boundary value problems

Consider the nth order DE (1.1). We know that its general solution involves n arbitrary constants. Therefore, in order to obtain a particular solution from the general solution of (1.1), we need to find the values of the n arbitrary constants using n given conditions. If the n given conditions are specified at a single point, say x0, in the form
y(x0) = b0, y'(x0) = b1, ..., y^(n−1)(x0) = b_(n−1),
then the DE (1.1) together with these n conditions is said to be an initial value problem (IVP).
On the other hand, if k conditions are specified at one point, say x0, while the remaining n − k conditions are specified at some other point, say x1, then the DE (1.1) with the given conditions at two different points is said to be a boundary value problem (BVP).
Ex. y' − y = 0, y(0) = 1 is an IVP. The general solution of y' − y = 0 is y = c1 e^x. So the condition y(0) = 1 yields c1 = 1. So the solution of the given IVP is y = e^x.
Ex. y'' − y = 0, y(0) = 1, y'(0) = 1 is an IVP. The general solution of y'' − y = 0 is y = c1 e^x + c2 e^(−x). So the conditions y(0) = 1 and y'(0) = 1 yield the relations c1 + c2 = 1 and c1 − c2 = 1, respectively. Solving the two, we get c1 = 1 and c2 = 0. So the solution of the given IVP is y = e^x.
Ex. y'' + y = 0, y(0) = 1, y(π/2) = 0 is a BVP. The general solution of y'' + y = 0 is y = c1 sin x + c2 cos x. So the conditions y(0) = 1 and y(π/2) = 0 yield c2 = 1 and c1 = 0, respectively. So the solution of the given BVP is y = cos x.

Chapter 2
First Order Differential Equations
In general, any first order DE is of the form
g(x, y, y') = 0.    (2.1)
Sometimes, it is possible to write the first order DE (2.1) in the canonical form
y' = f(x, y).    (2.2)
There is no method to find a general solution of (2.1) or (2.2) in general.

Exact methods for solving first order DE


In the following, we present some particular families of first order DE and exact methods to solve
the same.

Variable separable DE
A first order DE is said to be in variable separable form if it can be written as
y' = F(x)G(y),    (2.3)
where F(x) is a function of x and G(y) is a function of y. Equation (2.3) can be rewritten as
dy/G(y) = F(x) dx,
which, on integration, yields the solution
∫ dy/G(y) = ∫ F(x) dx + C,
where C is a constant of integration.
Ex. 2.1.1. Solve y' = y cos x.
Sol. 2.1.1. y = c e^(sin x).
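Remark (symbolic check, not part of the original notes): separable solutions such as the one above are easy to verify with a computer algebra system. The following is a minimal sketch assuming the SymPy library is available; dsolve and checkodesol are standard SymPy calls.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Ex. 2.1.1: y' = y cos x, expected general solution y = C*exp(sin x)
    ode = sp.Eq(y(x).diff(x), y(x) * sp.cos(x))
    sol = sp.dsolve(ode, y(x))
    print(sol)                       # y(x) = C1*exp(sin(x))
    print(sp.checkodesol(ode, sol))  # (True, 0) confirms the solution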
DE reducible to variable separable

There are DE which are not directly variable separable but can be reduced to variable separable form by using some suitable transformation(s). In the following, we present some families of such DE.

DE of the form y' = f(ax + by + c)
Here a, b and c are constants. Such a DE can be reduced to variable separable form by using the transformation ax + by + c = t. For, we have a + by' = t', which transforms the DE y' = f(ax + by + c) into the variable separable DE
t' = bf(t) + a
with the general solution
∫ dt/(bf(t) + a) = x + C.
Ex. 2.1.2. Solve y' = sin(x − y).
Sol. 2.1.2. sec(x − y) + tan(x − y) = x + c.
Homogeneous DE
A function h(x, y) is said to be a homogeneous function of degree n if h(tx, ty) = t^n h(x, y). A DE of the form M(x, y)dx + N(x, y)dy = 0 is said to be homogeneous if M(x, y) and N(x, y) are homogeneous functions of the same degree. The DE M(x, y)dx + N(x, y)dy = 0 can be rewritten as y' = −M(x, y)/N(x, y) = f(x, y) (say). Therefore, a DE expressed in the form y' = f(x, y) is homogeneous if f(tx, ty) = f(x, y) = f(1, y/x). To solve the homogeneous DE, we use the transformation y = vx, where v is a function of x. This gives y' = v + xv'. So the DE y' = f(x, y) transforms to
v + xv' = f(1, v),
which can be rearranged in the variable separable form
dv/(f(1, v) − v) = dx/x.
Integrating both sides, we get the general solution
∫ dv/(f(1, v) − v) = ln x + C.
Ex. 2.1.3. Solve y' = (x + y)/(x − y).
Sol. 2.1.3. tan⁻¹(y/x) = log √(x² + y²) + C.
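Remark (symbolic check, not part of the original notes): the implicit answer of Ex. 2.1.3 can be checked by implicit differentiation. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x, yy, C = sp.symbols('x y C')
    yp = sp.symbols("y'")

    # Implicit solution of Ex. 2.1.3: atan(y/x) - log(sqrt(x^2 + y^2)) = C
    F = sp.atan(yy/x) - sp.log(sp.sqrt(x**2 + yy**2)) - C

    # Implicit differentiation: dF/dx + (dF/dy) * y' = 0, solved for y'
    yprime = sp.solve(sp.diff(F, x) + sp.diff(F, yy)*yp, yp)[0]
    print(sp.simplify(yprime - (x + yy)/(x - yy)))   # 0, so the relation satisfies y' = (x+y)/(x-y)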


DE of the form y' = (ax + by + c)/(px + qy + r)

Here a, b, c, p, q and r are constants. In case a/p = b/q = m (say), we have ax + by = m(px + qy). Then the transformation px + qy = t transforms the given DE into variable separable form. Now consider the case a/p ≠ b/q. Here, we use the transformations x = X + h and y = Y + k, where h and k are constants to be determined from the equations ah + bk + c = 0 and ph + qk + r = 0. The equation y' = (ax + by + c)/(px + qy + r) then transforms to
dY/dX = (aX + bY)/(pX + qY),
which is a homogeneous DE in X and Y.
Ex. 2.1.4. Solve y' = (x + y + 4)/(x − y − 6).
Sol. 2.1.4. tan⁻¹((y + 5)/(x − 1)) = log √((x − 1)² + (y + 5)²) + c.

Exact DE
The first order DE dy/dx = f(x, y) can be written in the canonical form M(x, y)dx + N(x, y)dy = 0, where f(x, y) = −M(x, y)/N(x, y). It is said to be an exact DE if M dx + N dy is an exact differential of some function, say F(x, y), that is, M dx + N dy = dF.

For example, ydx + xdy = 0 is an exact DE since ydx + xdy = d(xy).


The following theorem provides the necessary and sufficient condition for a DE to be exact.
Necessary and sufficient condition for exact DE: If M (x, y) and N (x, y) possess continuous
first order partial derivatives, then the DE M (x, y)dx + N (x, y)dy = 0 is exact if and only if
M
= N
.
y
x
Proof. First assume that the DE M (x, y)dx + N (x, y)dy = 0 is exact. Then by definition, there
exists some function F (x, y) such that
M dx + N dy = dF.

(2.4)

Also F (x, y) is a function of x and y. So from the theory of partial differentiation, we have
F
F
dx +
dy = dF.
x
y

(2.5)

From (2.4) and (2.5), we obtain


M=

F
,
x

N=

M
2F
=
,
y
yx

F
.
y
N
2F
=
.
x
xy

(2.6)

(2.7)

Mathematics-III

Dr. Suresh Kumar, BITS Pilani

11

Given that M (x, y) and N (x, y) possess continuous first order partial derivatives. Therefore,
2F
2F
2F
and xy
are continuous functions, which in turn implies that yx
= xy
. Hence, (2.7) gives

2F
yx

N
M
=
.
y
x

(2.8)

Conversely assume that the condition (2.8) is satisfied. We shall prove that there exists a
function F (x, y) such that equation (2.4) and hence (2.6) are satisfied. Integrating first of the
equations in (2.6) w.r.t. x, we get
Z
F = M dx + g(y).
(2.9)

=
y
y

N=
y

M dx + g 0 (y).

M dx + g 0 (y).

Z 

N
y

g(y) =

The integrand N
y

N
2

x
xy


x
y

N
M

= 0,
x
y

M dx dy.

(2.10)

Z
M dx is a function of y only.



Z

N
M dx = 0.
y

M dx = 0.

Z
M dx

= 0.

which is true in view of (2.8). This completes the proof.


Note. If the DE M dx + N dy = 0 is exact, then in view of (2.9) and (2.10) the solution F (x, y) = c
reads as

Z
Z 
Z

M dx +
N
M dx dy = c.
y


Ex. Test the equation e^y dx + (xe^y + 2y)dy = 0 for exactness and solve it if it is exact.
Sol. Comparing the given equation with M dx + N dy = 0, we get
M = e^y,   N = xe^y + 2y.
Then
∂M/∂y = e^y = ∂N/∂x.
This shows that the given DE is exact, and therefore its solution is given by
∫ M dx + ∫ [ N − ∂/∂y ∫ M dx ] dy = c,
that is,
∫ e^y dx + ∫ [ xe^y + 2y − ∂/∂y ∫ e^y dx ] dy = c,
xe^y + ∫ [ xe^y + 2y − ∂/∂y (xe^y) ] dy = c,
xe^y + ∫ (xe^y + 2y − xe^y) dy = c,
xe^y + y² = c.
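Remark (symbolic check, not part of the original notes): the exactness test and the solution formula of the Note can be evaluated directly. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x, y = sp.symbols('x y')
    M = sp.exp(y)
    N = x*sp.exp(y) + 2*y

    # Exactness test: dM/dy must equal dN/dx
    print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, hence exact

    # Solution F(x, y) = c from the formula in the Note above
    F = sp.integrate(M, x) + sp.integrate(N - sp.diff(sp.integrate(M, x), y), y)
    print(sp.simplify(F))   # x*exp(y) + y**2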

Integrating Factor
If the DE M dx + N dy = 0 is not exact and there exists a function μ(x, y) such that the DE μ(M dx + N dy) = 0 is exact, then μ(x, y) is called an integrating factor (IF) of the DE M dx + N dy = 0.
Obviously, we need to determine the integrating factor for a non-exact DE.

Existence and uniqueness of Integrating Factor

Existence of an IF is ensured provided there exists a general solution of the DE M dx + N dy = 0. For, let the general solution of M dx + N dy = 0 be f(x, y) = c. Then (∂f/∂x)dx + (∂f/∂y)dy = 0. Its comparison with M dx + N dy = 0 gives
(∂f/∂x)/M = (∂f/∂y)/N = μ(x, y) (say).
Therefore, μM = ∂f/∂x and μN = ∂f/∂y. So μ(M dx + N dy) = (∂f/∂x)dx + (∂f/∂y)dy = df. It implies that μ(M dx + N dy) = 0 is exact, and hence μ is an IF of M dx + N dy = 0.

Next, assume that μ is an IF and f(x, y) = c is the general solution of the DE μ(M dx + N dy) = 0, so that μ(M dx + N dy) = df. Let F be any function of f. Then μF(f) is also an IF of M dx + N dy = 0. For,
μF(f)(M dx + N dy) = F(f)df = d[ ∫ F(f)df ].
This shows that the IF of a DE, if it exists, is not unique.
Now we determine the IF μ(x, y). Since μM dx + μN dy = 0 is an exact DE, by the condition of exactness
∂(μM)/∂y = ∂(μN)/∂x,
that is,
μ ∂M/∂y + M ∂μ/∂y = μ ∂N/∂x + N ∂μ/∂x,
or
M ∂μ/∂y − N ∂μ/∂x = μ (∂N/∂x − ∂M/∂y).    (2.11)

We cannot determine μ in general from (2.11). If μ happens to be a function of x only, then (2.11) reduces to
(1/μ) dμ/dx = (1/N)(∂M/∂y − ∂N/∂x) = h(x) (say),
so that dμ/μ = h(x)dx and
μ = e^(∫h(x)dx).
Thus, if (1/N)(∂M/∂y − ∂N/∂x) = h(x) is a function of x only, then the IF is μ = e^(∫h(x)dx).
Similarly, if (1/M)(∂N/∂x − ∂M/∂y) = h(y) is a function of y only, then the IF is μ = e^(∫h(y)dy).

Ex. Solve (x² + y² + x)dx + xydy = 0.

Sol. 3x⁴ + 4x³ + 6x²y² = c.

IF of first order linear DE

A linear DE (LDE) of first order is of the form
y' + p(x)y = q(x),
which can be written in the canonical form
(p(x)y − q(x))dx + dy = 0.    (2.12)
Its comparison with M dx + N dy = 0 gives M = p(x)y − q(x) and N = 1. Here, we find that (1/N)(∂M/∂y − ∂N/∂x) = p(x) is a function of x only. Therefore, the IF is μ = e^(∫p(x)dx). Now multiplying both sides of (2.12) by the IF, we obtain
y' e^(∫p(x)dx) + p(x)y e^(∫p(x)dx) = q(x) e^(∫p(x)dx),
that is,
d/dx [ y e^(∫p(x)dx) ] = q(x) e^(∫p(x)dx).
Therefore,
y e^(∫p(x)dx) = ∫ q(x) e^(∫p(x)dx) dx + c
is the general solution of the LDE.

Ex. Solve sec x · y' = y + sin x.
Sol. y = −(1 + sin x) + c e^(sin x).
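Remark (symbolic check, not part of the original notes): the integrating factor recipe can be applied mechanically to the worked example. A minimal sketch assuming SymPy is available; after rewriting sec x · y' = y + sin x as y' + p(x)y = q(x), we have p(x) = −cos x and q(x) = sin x cos x.

    import sympy as sp

    x, c = sp.symbols('x c')

    p = -sp.cos(x)
    q = sp.sin(x)*sp.cos(x)

    mu = sp.exp(sp.integrate(p, x))            # integrating factor e^(∫p dx) = e^(-sin x)
    y = (sp.integrate(q*mu, x) + c)/mu         # general solution formula
    print(sp.simplify(y))                      # -(1 + sin(x)) + c*exp(sin(x))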

Bernoulli's DE
A non-linear DE of the form y' + p(x)y = q(x)y^n (n ≠ 1) is called Bernoulli's DE. It can be reduced to a LDE by dividing it by y^n and then substituting y^(1−n) = z.
Ex. Solve y' + xy = x³y³.

Sol. y⁻² = 1 + x² + ce^(x²).
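Remark (symbolic check, not part of the original notes): SymPy's dsolve applies the same z = y^(1−n) reduction internally, so it can be used to confirm the answer above. A minimal sketch:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    ode = sp.Eq(y(x).diff(x) + x*y(x), x**3 * y(x)**3)   # Bernoulli DE with n = 3
    sols = sp.dsolve(ode, y(x))
    sols = sols if isinstance(sols, list) else [sols]
    for s in sols:
        # each branch is equivalent to 1/y^2 = 1 + x^2 + c*e^(x^2)
        print(s, sp.checkodesol(ode, s))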

IF of homogeneous DE
If M(x, y)dx + N(x, y)dy = 0 is a homogeneous DE, then its IF is 1/(Mx + Ny) provided Mx + Ny ≠ 0. In case Mx + Ny = 0, the IF is 1/x² or 1/y² or 1/(xy).

Clairaut's DE
A Clairaut's DE is of the form
y = xy' + f(y'),    (2.13)
where f is any function of y'. Writing p = y' and separating x from this equation, we get
x = y/p − f(p)/p.    (2.14)
Differentiating with respect to y and using dx/dy = 1/p, we get
1/p = 1/p − (y/p²)(dp/dy) + (f(p)/p²)(dp/dy) − (f'(p)/p)(dp/dy),    (2.15)
or
(dp/dy)[ y − f(p) + pf'(p) ] = 0.    (2.16)
It suggests that either dp/dy = 0 or y = f(p) − pf'(p).
If dp/dy = 0, then p = c (a constant) and we get the general solution of (2.13) given by
y = cx + f(c).
In case y = f(p) − pf'(p), equation (2.13) gives x = −f'(p). So the parametric equations x = −f'(t), y = f(t) − tf'(t) define another solution of (2.13). It is called the singular solution of (2.13).
It should be noted that the straight lines given by the general solution y = cx + f(c) are tangential to the curve given by the singular solution x = −f'(t), y = f(t) − tf'(t). Hence, the singular solution is an envelope of the family of straight lines of the general solution. For instance, for the DE y = xy' + (y')² considered earlier, the general solution is the family of lines y = cx + c² and the singular solution is the parabola y = −x²/4, which is the envelope of that family.

Note: In general, a given DE need not possess a solution. For example, |y'| + |y| + 1 = 0 has no solution. The DE |y'| + |y| = 0 has only one solution, y = 0.
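Remark (symbolic check, not part of the original notes): for the Clairaut equation y = xy' + (y')², both the family of lines and the envelope can be verified in a few lines. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x, c = sp.symbols('x c')

    def residual(expr):
        """Residual of y = x*y' + (y')^2 for a given candidate y(x)."""
        d = sp.diff(expr, x)
        return sp.simplify(expr - x*d - d**2)

    print(residual(c*x + c**2))      # 0: every line of the general solution works
    print(residual(-x**2/4))         # 0: the singular solution also works

    # The line y = cx + c^2 touches the parabola y = -x^2/4 at x = -2c (double root)
    print(sp.factor(c*x + c**2 - (-x**2/4)))   # (x + 2*c)**2/4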

Existence and uniqueness of solution of IVP

In this section, we discuss some theorems which guarantee the existence/uniqueness of a solution of the IVP y' = f(x, y), y(x0) = y0.

Existence Theorem: Let f(x, y) be continuous in a closed rectangular region R = {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b}, and let there exist some constant M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ R. Then there exists a solution of the IVP y' = f(x, y), y(x0) = y0 in the interval [x0 − h, x0 + h], where h = min{a, b/M}.

The above theorem suggests that through any given point (x0, y0) of a closed rectangular region R, there passes at least one solution curve of the DE y' = f(x, y) provided f(x, y) is continuous and bounded in R. Also note that this theorem gives sufficient conditions for the existence of a solution, but not necessary ones. For example, consider the IVP xy' = 3y, y(1) = 1 and the rectangular region R = {(x, y) : |x| ≤ 2, |y| ≤ 3}. For the given IVP, we have f(x, y) = 3y/x, which is not continuous in R as it is not continuous at the point (0, 0) in R. However, y = x³ is a solution of the given IVP.

Uniqueness Theorem: Let f(x, y) be continuous in a closed rectangular region R = {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b}, and let there exist some constant M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ R. Suppose f(x, y) satisfies the Lipschitz condition in R with respect to y, that is, there exists a constant L such that |f(x, y1) − f(x, y2)| ≤ L|y1 − y2| for all (x, y1), (x, y2) ∈ R. Then there exists a unique solution of the IVP y' = f(x, y), y(x0) = y0 in the interval [x0 − h, x0 + h], where h = min{a, b/M}.

Picard's Theorem: Let f(x, y) and ∂f/∂y be continuous in a closed rectangular region R. If (x0, y0) is any point in R, then there exists some constant h > 0 such that the IVP y' = f(x, y), y(x0) = y0 has a unique solution in the interval [x0 − h, x0 + h].
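Remark (illustrative sketch, not part of the original notes): Picard's theorem is usually proved via the successive approximations y_(n+1)(x) = y0 + ∫ from x0 to x of f(t, y_n(t)) dt, and the iteration is easy to carry out symbolically. A minimal sketch assuming SymPy is available, for the IVP y' = y, y(0) = 1, whose iterates are the Taylor partial sums of e^x:

    import sympy as sp

    x, t = sp.symbols('x t')

    def picard(f, x0, y0, steps):
        """Successive approximations y_{n+1}(x) = y0 + integral of f(t, y_n(t)) from x0 to x."""
        yn = sp.Integer(y0)
        for _ in range(steps):
            yn = y0 + sp.integrate(f(t, yn.subs(x, t)), (t, x0, x))
        return sp.expand(yn)

    # IVP: y' = y, y(0) = 1
    print(picard(lambda t, y: y, 0, 1, 4))   # 1 + x + x**2/2 + x**3/6 + x**4/24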

Chapter 3
Second Order DE
Any second order DE is of the form
f(x, y, y', y'') = 0.
First we discuss the LDE of second order.

Second Order LDE

The general form of the second order LDE is
y'' + p(x)y' + q(x)y = r(x).    (3.1)
If r(x) = 0, then it is called homogeneous, otherwise non-homogeneous. The following theorem guarantees the existence and uniqueness of solutions of (3.1).

Theorem 3.1.1. (Existence and Uniqueness of Solution): If p(x), q(x) and r(x) are continuous functions on [a, b] and x0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = r(x), y(x0) = y0, y'(x0) = y0' has a unique solution on [a, b].

Theorem 3.1.2. If p(x) and q(x) are continuous functions on [a, b] and x0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = 0, y(x0) = 0, y'(x0) = 0 has only the trivial solution y = 0 on [a, b].
Proof. We find that y(x) = 0 satisfies the homogeneous DE y'' + p(x)y' + q(x)y = 0 along with the initial conditions y(x0) = 0 and y'(x0) = 0. So the required result follows from Theorem 3.1.1.

Theorem 3.1.3. (Linearity Principle) If y1 and y2 are any two solutions of the homogeneous LDE y'' + p(x)y' + q(x)y = 0, then c1y1 + c2y2 is also a solution for any constants c1 and c2.
Proof. Since y1 and y2 are solutions of y'' + p(x)y' + q(x)y = 0, we have
y1'' + p(x)y1' + q(x)y1 = 0,   y2'' + p(x)y2' + q(x)y2 = 0.
Now substituting c1y1 + c2y2 for y into the left hand side of the given homogeneous LDE, we obtain
c1(y1'' + p(x)y1' + q(x)y1) + c2(y2'' + p(x)y2' + q(x)y2) = c1·0 + c2·0 = 0.
Thus, c1y1 + c2y2, the linear combination of the solutions y1 and y2, is also a solution of the homogeneous LDE.

Remark 3.1.1. The above result need not be true for a non-homogeneous or non-linear DE.

Definition 3.1.1. (Linearly Independent and Linearly Dependent Functions) Two functions f(x) and g(x) are said to be linearly independent (LI) on [a, b] if f(x) is not a constant multiple of g(x) on [a, b]. Functions which are not LI are known as linearly dependent (LD) functions.
For example, the functions x + 1 and x² are LI on [1, 5] while the functions x² + 1 and 3x² + 3 are LD on [1, 5]. The functions sin x and cos x are LI on any interval.

Definition 3.1.2. (Wronskian): The Wronskian of two functions y1(x) and y2(x) is defined as the determinant of the matrix with rows (y1, y2) and (y1', y2'), and is denoted by W(y1, y2). Thus,
W(y1, y2) = y1y2' − y2y1'.

Lemma 3.1.1. (Wronskian of Solutions of Homogeneous LDE) The Wronskian of two solutions y1(x) and y2(x), defined on [a, b], of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 is either identically zero or never zero.
Proof. Since y1 and y2 are solutions of y'' + p(x)y' + q(x)y = 0, we have
y1'' + p(x)y1' + q(x)y1 = 0,    (3.2)
y2'' + p(x)y2' + q(x)y2 = 0.    (3.3)
Multiplying (3.3) by y1 and (3.2) by y2, and subtracting, we get
y1y2'' − y2y1'' + p(x)(y1y2' − y2y1') = 0.
Since W = y1y2' − y2y1' and dW/dx = y1y2'' − y2y1'', this becomes
dW/dx + p(x)W = 0.
Therefore W = c e^(−∫p(x)dx), where c is a constant of integration. So W is identically 0 if c = 0; otherwise W never vanishes.

Lemma 3.1.2. (Wronskian of LD Solutions) Two solutions y1 and y2, defined on [a, b], of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 are LD if and only if W(y1, y2) = 0 for all x ∈ [a, b].
Proof. If y1 and y2 are LD, then there exists some constant c such that y1(x) = cy2(x) for all x ∈ [a, b]. It follows that W(y1, y2) = y1y2' − y2y1' = cy2y2' − cy2y2' = 0 for all x ∈ [a, b].
Conversely, let W(y1, y2) = y1y2' − y2y1' = 0 for all x ∈ [a, b]. Now there are two possibilities for y1. First, y1 = 0 for all x ∈ [a, b]. In this case, we have y1 = 0 = 0·y2 for all x ∈ [a, b], and consequently y1 and y2 are LD. Next, if y1 is not identically 0 on [a, b] and x0 is any point in [a, b] such that y1(x0) ≠ 0, then continuity of y1 ensures the existence of a subinterval [c, d] containing x0 in [a, b] such that y1 ≠ 0 for all x ∈ [c, d]. Dividing W(y1, y2) = y1y2' − y2y1' = 0 by y1², we get (y1y2' − y2y1')/y1² = (y2/y1)' = 0. So we have y2/y1 = k for all x ∈ [c, d], where k is some constant. This shows that y1 and y2 are LD on [c, d]. This completes the proof.
Theorem 3.1.4. (General Solution of Homogeneous LDE) If y1(x) and y2(x) are two LI solutions of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 on [a, b], then c1y1(x) + c2y2(x), where c1 and c2 are arbitrary constants, is the general solution of the homogeneous LDE.
Proof. Let y(x) be any solution of y'' + p(x)y' + q(x)y = 0. We shall prove that there exist unique constants c1 and c2 such that
c1y1(x) + c2y2(x) = y(x).    (3.4)
Differentiating both sides of (3.4) w.r.t. x, we get
c1y1'(x) + c2y2'(x) = y'(x).    (3.5)
Given that y1(x) and y2(x) are two LI solutions of the given homogeneous LDE on [a, b], the Wronskian W(y1(x), y2(x)) = y1(x)y2'(x) − y2(x)y1'(x) is non-zero for all x ∈ [a, b]. This in turn implies that the system of equations (3.4) and (3.5) has a unique solution (c1, c2). This completes the proof.
For example, y'' + y = 0 has two LI solutions y1 = cos x and y2 = sin x. So its general solution is c1 cos x + c2 sin x.

Use of known solution to find another

Consider the homogeneous LDE
y'' + p(x)y' + q(x)y = 0.    (3.6)
Let y1 be a non-zero and known solution of (3.6). Therefore,
y1'' + p(x)y1' + q(x)y1 = 0.    (3.7)
We assume that y2 = vy1 is a solution of (3.6). Therefore,
y2'' + p(x)y2' + q(x)y2 = 0,
that is,
v(y1'' + p(x)y1' + q(x)y1) + v''y1 + v'(2y1' + p(x)y1) = 0.    (3.8)
Plugging (3.7) into (3.8), we get
v''y1 + v'(2y1' + p(x)y1) = 0,
so that
v''/v' = −2y1'/y1 − p(x).
Integrating, log v' = −2 log y1 − ∫p(x)dx, that is,
v' = (1/y1²) e^(−∫p(x)dx).
Therefore,
v = ∫ (1/y1²) e^(−∫p(x)dx) dx,
and hence
y2 = y1 ∫ (1/y1²) e^(−∫p(x)dx) dx.
Clearly y2 is not a constant multiple of y1. So y1 and y2 are LI solutions of (3.6). Hence, c1y1 + c2y2 is the general solution of (3.6).

Ex. 3.2.1. Find the general solution of x²y'' + xy' − y = 0 given that y1 = x is a solution.
Sol. 3.2.1. The given DE can be rewritten as
y'' + (1/x)y' − (1/x²)y = 0.
Comparing it with y'' + p(x)y' + q(x)y = 0, we find p(x) = 1/x. Also the given solution is y1 = x. So the second solution reads as
y2 = y1 ∫ (1/y1²) e^(−∫p(x)dx) dx = x ∫ (1/x²) e^(−∫dx/x) dx = x ∫ (1/x²) e^(−log x) dx = x ∫ x⁻³ dx = −(1/2) x⁻¹.
Since the constant factor −1/2 can be absorbed into the arbitrary constant, the general solution is
y = c1 x + c2 x⁻¹.
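Remark (symbolic check, not part of the original notes): the reduction-of-order formula of Ex. 3.2.1 can be evaluated directly. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    p = 1/x          # p(x) = 1/x for x^2 y'' + x y' - y = 0
    y1 = x           # known solution

    y2 = y1 * sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x)
    print(sp.simplify(y2))           # -1/(2*x), i.e. a constant multiple of 1/x

    # Check that y2 solves x^2 y'' + x y' - y = 0
    print(sp.simplify(x**2*sp.diff(y2, x, 2) + x*sp.diff(y2, x) - y2))   # 0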


Homogeneous LDE with Constant Coefficients

Consider the homogeneous LDE
y'' + py' + qy = 0,    (3.9)
where p and q are constants. Let y = e^(mx) be a solution of (3.9). Then we have
(m² + pm + q)e^(mx) = 0,
so that
m² + pm + q = 0,    (3.10)
since e^(mx) ≠ 0.
Equation (3.10) is called the auxiliary equation (AE) and its roots are
m1 = (−p + √(p² − 4q))/2  and  m2 = (−p − √(p² − 4q))/2.
Now three different cases arise depending on the nature of the roots of the AE.
(i) If p² − 4q > 0, then m1 and m2 are real and distinct. So e^(m1 x) and e^(m2 x) are two particular solutions of (3.9). Also, these are LI, being not constant multiples of each other. Therefore, the general solution of (3.9) is
y = c1 e^(m1 x) + c2 e^(m2 x).
(ii) If p² − 4q < 0, then m1 and m2 are conjugate complex numbers. Let m1 = a + ib and m2 = a − ib. Then we get the following solutions of (3.9):
e^((a+ib)x) = e^(ax)(cos bx + i sin bx),    (3.11)
e^((a−ib)x) = e^(ax)(cos bx − i sin bx).    (3.12)
As we are interested in real solutions of (3.9), adding (3.11) and (3.12) and then dividing by 2, we get a real solution e^(ax) cos bx. Similarly, subtracting (3.12) from (3.11) and then dividing by 2i, we get another real solution of (3.9) given by e^(ax) sin bx.
Now, we see that the particular solutions e^(ax) cos bx and e^(ax) sin bx are LI. So the general solution of (3.9) is
y = e^(ax)(c1 cos bx + c2 sin bx).
(iii) If p² − 4q = 0, then m1 and m2 are real and equal with m1 = m2 = −p/2. Therefore, one solution of (3.9) is y1 = e^(−px/2). Another LI solution of (3.9) is given by
y2 = y1 ∫ (1/y1²) e^(−∫p dx) dx = e^(−px/2) ∫ e^(px) e^(−px) dx = e^(−px/2) ∫ dx = x e^(−px/2).
So the general solution of (3.9) is
y = e^(−px/2)(c1 + c2 x).


Ex. 3.3.1. Solve y'' + y' − 6y = 0.

Sol. 3.3.1. y = c1 e^(−3x) + c2 e^(2x).
Ex. 3.3.2. Solve y'' − 4y' + 4y = 0.
Sol. 3.3.2. y = e^(2x)(c1 + c2 x).
Ex. 3.3.3. Solve y'' + y' + y = 0.
Sol. 3.3.3. y = e^(−x/2)(c1 cos(√3 x/2) + c2 sin(√3 x/2)).
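Remark (symbolic check, not part of the original notes): all three cases are governed by the roots of the auxiliary equation, so Ex. 3.3.1 to 3.3.3 can be reproduced with a short helper. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    def solve_const_coeff(p, q):
        """General solution of y'' + p y' + q y = 0 via the auxiliary equation."""
        m = sp.symbols('m')
        print('AE roots:', sp.roots(m**2 + p*m + q, m))
        return sp.dsolve(sp.Eq(y(x).diff(x, 2) + p*y(x).diff(x) + q*y(x), 0), y(x))

    print(solve_const_coeff(1, -6))    # Ex. 3.3.1: roots 2, -3
    print(solve_const_coeff(-4, 4))    # Ex. 3.3.2: double root 2
    print(solve_const_coeff(1, 1))     # Ex. 3.3.3: roots -1/2 ± i*sqrt(3)/2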

Ex. 3.3.4. Show that the general homogeneous LDE y'' + p(x)y' + q(x)y = 0 is reducible to a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^(3/2) is constant, by means of the substitution z = ∫ √(q(x)) dx.
Sol. 3.3.4. We have z = ∫ √(q(x)) dx and z' = √(q(x)). Therefore,
y' = (dy/dz) z' = √q (dy/dz),
y'' = q (d²y/dz²) + (q'/(2√q)) (dy/dz).
Plugging the values of y' and y'' into y'' + p(x)y' + q(x)y = 0 and dividing by q, we obtain
d²y/dz² + [(q' + 2pq)/(2q^(3/2))] dy/dz + y = 0.
This is a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^(3/2) is constant.
Ex. 3.3.5. Reduce xy'' + (x² − 1)y' + x³y = 0 to a homogeneous LDE with constant coefficients and hence solve it.
Sol. 3.3.5. The given DE can be rewritten as
y'' + (x − 1/x)y' + x²y = 0.
Comparing it with y'' + p(x)y' + q(x)y = 0, we get p(x) = x − 1/x and q(x) = x². Then
(q' + 2pq)/q^(3/2) = (2x + 2x²(x − 1/x))/x³ = 2.
This shows that the given DE is reducible to a homogeneous LDE with constant coefficients, given by
d²y/dz² + [(q' + 2pq)/(2q^(3/2))] dy/dz + y = 0,
where z = ∫ √(q(x)) dx = ∫ x dx = x²/2. Thus,
d²y/dz² + dy/dz + y = 0.
Its AE is m² + m + 1 = 0 with roots m = −1/2 ± (√3/2)i. So the solution reads as
y = e^(−z/2)(c1 cos(√3 z/2) + c2 sin(√3 z/2)).
Substituting z = x²/2, we have
y = e^(−x²/4)(c1 cos(√3 x²/4) + c2 sin(√3 x²/4)).

Ex. 3.3.6. Show that a DE of the form x²y'' + pxy' + qy = 0, where p, q are constants, reduces to a homogeneous LDE with constant coefficients under the transformation x = e^z. Hence, solve the equation x²y'' + 2xy' − 6y = 0.
Sol. 3.3.6. We have x = e^z. So z = log x and z' = 1/x. Therefore,
xy' = x (dy/dz) z' = dy/dz,
x²y'' = x² d/dx[(1/x)(dy/dz)] = x²[ −(1/x²)(dy/dz) + (1/x²)(d²y/dz²) ] = d²y/dz² − dy/dz.
Thus, the equation x²y'' + pxy' + qy = 0 becomes
d²y/dz² + (p − 1) dy/dz + qy = 0,
which is a homogeneous LDE with constant coefficients.
Hence, with x = e^z, the DE x²y'' + 2xy' − 6y = 0 reduces to
d²y/dz² + dy/dz − 6y = 0.
Its AE is m² + m − 6 = 0 with the roots m = −3, 2. So its solution is
y = c1 e^(−3z) + c2 e^(2z),
that is,
y = c1 x⁻³ + c2 x².

Remark 3.3.1. A DE of the form x²y'' + pxy' + qy = 0 is called Euler's or Cauchy's equidimensional equation. If we denote dy/dz by Dz y, then xy' = Dz y and x²y'' = Dz(Dz − 1)y. It can also be shown that x³y''' = Dz(Dz − 1)(Dz − 2)y and, in general, x^n y^(n) = Dz(Dz − 1)...(Dz − n + 1)y. Thus, every Euler or Cauchy equidimensional equation reduces to a homogeneous LDE with constant coefficients under the transformation x = e^z.


Ex. 3.3.7. Solve x²y'' + 3xy' + 10y = 0.

Sol. 3.3.7. y = x⁻¹(c1 cos(log x³) + c2 sin(log x³)).
Ex. 3.3.8. Solve y'' + 3xy' + x²y = 0.
Sol. 3.3.8. Not solvable by the above method, since the coefficients are not of the equidimensional form.
Theorem 3.3.1. (General Solution of Non-Homogeneous LDE) If yp is a particular solution of a non-homogeneous LDE y'' + p(x)y' + q(x)y = r(x) and yh = c1y1 + c2y2 is the general solution of the corresponding homogeneous LDE y'' + p(x)y' + q(x)y = 0, then y = yh + yp is the general solution of the non-homogeneous LDE.
Proof. Let y be any solution of y'' + p(x)y' + q(x)y = r(x). Then y − yp is a solution of the homogeneous LDE y'' + p(x)y' + q(x)y = 0 since
(y − yp)'' + p(x)(y − yp)' + q(x)(y − yp) = (y'' + p(x)y' + q(x)y) − (yp'' + p(x)yp' + q(x)yp) = r(x) − r(x) = 0.
But yh = c1y1 + c2y2 is the general solution of y'' + p(x)y' + q(x)y = 0. So there exist suitable constants c1 and c2 such that
y − yp = c1y1 + c2y2 = yh, or y = yh + yp.
This completes the proof.
In the next three sections, we shall learn some methods to find the particular solution yp of the non-homogeneous LDE, namely the method of undetermined coefficients, the method of variation of parameters and the operator methods.


Method of Undetermined Coefficients

This method is used to find a particular solution yp for a DE of the form
y'' + py' + qy = r(x),    (3.13)
where p, q are constants and r(x) is an exponential, sine, cosine or polynomial function, or some combination of these functions. We assume yp equal to a linear combination of r(x) and all the different functions (except for constant multiples) arising from the derivatives of r(x). Finally, substituting yp for y in (3.13), we determine the unknown coefficients in yp by equating coefficients of like functions on both sides.
Ex. Find a particular solution of y'' − y' − 2y = 4x². Also determine the general solution.
Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = 4x². Therefore, the possible non-zero derivatives of r(x) are 8x and 8. Let yp = Ax² + Bx + C be a particular solution. Substituting yp for y into the given DE, we obtain
2A − (2Ax + B) − 2(Ax² + Bx + C) = 4x².    (3.14)
Equating coefficients of x², x and x⁰ on both sides of (3.14), we have
−2A = 4,  −2A − 2B = 0,  2A − B − 2C = 0,
which gives
A = −2,  B = 2,  C = −3.
Thus, the particular solution is
yp = −2x² + 2x − 3.
Next, we find the general solution yh of the corresponding homogeneous DE y'' − y' − 2y = 0. Here the AE is m² − m − 2 = 0 with roots m = 2, −1. Therefore, yh = c1 e^(2x) + c2 e^(−x).
Finally, the general solution of the given DE reads as
y = yh + yp = c1 e^(2x) + c2 e^(−x) − 2x² + 2x − 3.
Remark: It is possible that the assumed yp satisfies the corresponding homogeneous DE. In such a case, we assume yp multiplied by x.
Ex. Find a particular solution of y'' + y = sin x.
Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = sin x. The only function arising from the derivative of r(x) is cos x. Let yp = A sin x + B cos x be a particular solution. But yp'' + yp = 0, that is, the assumed yp satisfies the corresponding homogeneous DE y'' + y = 0. Therefore, we assume the revised particular solution yp = x(A sin x + B cos x). Substituting it for y into the given DE, we obtain
2A cos x − 2B sin x = sin x.    (3.15)


Equating coefficients of sin x and cos x on both sides, we get
A = 0,  B = −1/2.    (3.16)
Thus, the particular solution is
yp = −(1/2) x cos x.
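Remark (symbolic check, not part of the original notes): the coefficient matching in the first example above can be automated by substituting the trial form and solving for the unknowns. A minimal sketch assuming SymPy is available, for y'' − y' − 2y = 4x²:

    import sympy as sp

    x, A, B, C = sp.symbols('x A B C')

    yp = A*x**2 + B*x + C                                  # trial particular solution
    residual = sp.diff(yp, x, 2) - sp.diff(yp, x) - 2*yp - 4*x**2

    coeffs = sp.solve(sp.Poly(residual, x).coeffs(), [A, B, C])
    print(coeffs)                                          # {A: -2, B: 2, C: -3}
    print(yp.subs(coeffs))                                 # -2*x**2 + 2*x - 3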

Method of Variation of Parameters

This method is used to find a particular solution yp of the non-homogeneous DE
y'' + p(x)y' + q(x)y = r(x).    (3.17)
Let
y = c1y1 + c2y2    (3.18)
be the general solution of the corresponding homogeneous DE y'' + p(x)y' + q(x)y = 0. We replace the constants by unknown functions v1(x) and v2(x), and attempt to determine these functions such that
y = v1y1 + v2y2    (3.19)
is a solution of (3.17), and
v1'y1 + v2'y2 = 0.    (3.20)
Plugging (3.19) into (3.17), we get
v1(y1'' + p(x)y1' + q(x)y1) + v2(y2'' + p(x)y2' + q(x)y2) + p(x)(v1'y1 + v2'y2) + v1'y1' + v2'y2' = r(x).    (3.21)
Since y1 and y2 are particular solutions of the corresponding homogeneous DE y'' + p(x)y' + q(x)y = 0, we have y1'' + p(x)y1' + q(x)y1 = 0 and y2'' + p(x)y2' + q(x)y2 = 0. Therefore, by (3.20), equation (3.21) reduces to
v1'y1' + v2'y2' = r(x).    (3.22)
Solving (3.20) and (3.22) for v1' and v2', we get
v1' = −y2 r(x)/W(y1, y2),   v2' = y1 r(x)/W(y1, y2).
Thus, (3.19) leads to
y = −y1 ∫ [y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx.
Ex. Find a particular solution of y'' + y = csc x.


Sol. Comparing the given equation with y'' + p(x)y' + q(x)y = r(x), we get r(x) = csc x. The general solution of the corresponding homogeneous equation y'' + y = 0 is y = c1 cos x + c2 sin x. Let y1 = cos x and y2 = sin x. Then W(y1, y2) = 1, and hence by the method of variation of parameters, the particular solution is obtained as
y = −y1 ∫ [y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx
  = −cos x ∫ sin x csc x dx + sin x ∫ cos x csc x dx
  = −x cos x + sin x log(sin x).
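Remark (symbolic check, not part of the original notes): the variation-of-parameters formula can be applied mechanically once y1, y2 and W are known. A minimal sketch assuming SymPy is available, reproducing the particular solution of y'' + y = csc x:

    import sympy as sp

    x = sp.symbols('x')

    y1, y2 = sp.cos(x), sp.sin(x)                 # LI solutions of y'' + y = 0
    r = 1/sp.sin(x)                               # r(x) = csc x
    W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))   # Wronskian = 1

    yp = -y1*sp.integrate(y2*r/W, x) + y2*sp.integrate(y1*r/W, x)
    print(sp.simplify(yp))                        # -x*cos(x) + sin(x)*log(sin(x))

    # Verify: yp'' + yp = csc x
    print(sp.simplify(sp.diff(yp, x, 2) + yp - r))   # 0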

Operator Methods

Denoting the differential operator d/dx by D, so that y' = dy/dx = Dy and y'' = d²y/dx² = D²y, the DE y'' + py' + qy = r(x) in operator form can be written as (D² + pD + q)y = r(x), or f(D)y = r(x), where f(D) = D² + pD + q. We shall denote the inverse operator of f(D) by 1/f(D).
Operating 1/f(D) on both sides of the DE f(D)y = r(x), we obtain
y = [1/f(D)] r(x),
a particular solution of the DE. We cannot operate 1/f(D) on r(x) in general. It depends on the forms of f(D) and r(x). So we discuss the following cases.
(i) If a is a constant and f(D) = D − a, then the particular solution is given by
y = [1/(D − a)] r(x).
Operating D − a on both sides, we get
(D − a)y = r(x), that is, dy/dx − ay = r(x),
which is a LDE with IF = e^(−ax) and solution
y = e^(ax) ∫ r(x) e^(−ax) dx.
Thus,
[1/(D − a)] r(x) = e^(ax) ∫ r(x) e^(−ax) dx.
If a = 0, then (1/D) r(x) = ∫ r(x)dx. This shows that 1/D stands for the integral operator. Hence, the inverse operator of the differential operator is the integral operator.


Ex. Find a particular solution of y'' − y = e^x.

Sol. The given DE in operator form can be written as
(D² − 1)y = e^x, that is, (D − 1)(D + 1)y = e^x.
Therefore,
y = [1/((D − 1)(D + 1))] e^x
  = [1/(D + 1)] { [1/(D − 1)] e^x }
  = [1/(D + 1)] { e^x ∫ e^x e^(−x) dx }
  = [1/(D + 1)] (x e^x)
  = e^(−x) ∫ x e^x e^x dx
  = e^(−x) ( x e^(2x)/2 − e^(2x)/4 )
  = e^x ( x/2 − 1/4 ).

Remark: In the above example, we applied the operators 1/(D − 1) and 1/(D + 1) successively. We could, however, also apply the operators after resolving into partial fractions, as illustrated in the following. We have
y = [1/((D − 1)(D + 1))] e^x
  = (1/2) [ 1/(D − 1) − 1/(D + 1) ] e^x
  = (1/2) [ (1/(D − 1)) e^x − (1/(D + 1)) e^x ]
  = (1/2) [ e^x ∫ e^x e^(−x) dx − e^(−x) ∫ e^x e^x dx ]
  = (1/2) [ x e^x − e^x/2 ]
  = e^x ( x/2 − 1/4 ).


(ii) If r(x) is some polynomial in x, then we write the series expansion of 1/f(D) in ascending powers of D, as illustrated in the following example.

Ex. Find a particular solution of y'' + y = x² + x + 3.

Sol. The given DE in operator form can be written as
(D² + 1)y = x² + x + 3.
Therefore,
y = [1/(D² + 1)] (x² + x + 3)
  = (1 + D²)⁻¹ (x² + x + 3)
  = (1 − D² + D⁴ − ...)(x² + x + 3)
  = x² + x + 3 − D²(x² + x + 3) + D⁴(x² + x + 3) − ...
  = x² + x + 3 − 2 + 0 − ...
  = x² + x + 1,
which is the required particular solution of the given DE.
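Remark (symbolic check, not part of the original notes): the result of the inverse-operator series manipulation can be sanity-checked by substituting it back into the DE. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')

    yp = x**2 + x + 1    # candidate from the (1 - D^2 + D^4 - ...) expansion
    print(sp.simplify(sp.diff(yp, x, 2) + yp - (x**2 + x + 3)))   # 0, so yp is a particular solution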


(ii) If k is a constant and r(x) = ekx g(x), then y =
exponential shift rule. It is justified as follows.
We have

1
(ekx g(x))
f (D)

1
= ekx f (D+k)
g(x). This is called

D(ekx g(x)) = ekx Dg(x) + kekx g(x) = ekx (D + k)g(x).

D2 (ekx g(x)) = D(ekx (D+k)g(x)) = ekx D(D+k)g(x)+kekx (D+k)g(x) = ekx (D+k)2 g(x).

Since one can express

1
f (D)

in powers of D, so we have in general

1
1
(ekx g(x)) = ekx
g(x).
f (D)
f (D + k)
Ex. Find a particular solution of (D² − 3D + 2)y = x e^x.

Sol. We have
y = [1/(D² − 3D + 2)] (x e^x)
  = e^x [1/((D + 1)² − 3(D + 1) + 2)] x
  = e^x [1/(D² − D)] x
  = −e^x [ 1/D + 1/(1 − D) ] x
  = −e^x [ 1/D + 1 + D + D² + ... ] x
  = −e^x ( x²/2 + x + 1 ).

Chapter 4
Qualitative Behavior of Solutions
In this chapter, we discuss the qualitative behavior of the solutions of the second order homogeneous LDE given by y'' + p(x)y' + q(x)y = 0.
Let us first analyze some properties of the solutions of the DE y'' + y = 0 by making use of the following theorem (restated from Chapter 3).
Theorem 4.0.1. (Existence and Uniqueness of Solution): If p(x), q(x) and r(x) are continuous functions on [a, b] and x0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = r(x), y(x0) = y0, y'(x0) = y0' has a unique solution on [a, b].
Ex. 4.0.1. Let the differential equation y'' + y = 0 have two solutions s(x) and c(x) satisfying s(0) = 0, s'(0) = 1 and c(0) = 1, c'(0) = 0. Prove the following:
(i) s'(x) = c(x), (ii) c'(x) = −s(x), (iii) s²(x) + c²(x) = 1.
Sol. 4.0.1. Given that s(x) and c(x) are solutions of y'' + y = 0, we have s''(x) + s(x) = 0 and c''(x) + c(x) = 0. It follows that (s')''(x) + s'(x) = (s'')'(x) + s'(x) = −s'(x) + s'(x) = 0, and s'(0) = 1, (s')'(0) = s''(0) = −s(0) = 0. This shows that y = s'(x) is a solution of y'' + y = 0 with y(0) = s'(0) = 1 and y'(0) = (s')'(0) = 0. But y = c(x) is the given solution of y'' + y = 0 with y(0) = c(0) = 1 and y'(0) = c'(0) = 0. So by the uniqueness theorem it follows that s'(x) = c(x). Likewise, it can be proved that c'(x) = −s(x).
Finally, d/dx [s²(x) + c²(x)] = 2s(x)s'(x) + 2c(x)c'(x) = 2s(x)c(x) − 2c(x)s(x) = 0 since s'(x) = c(x) and c'(x) = −s(x). So s²(x) + c²(x) = k, some constant. Putting x = 0, we get k = 1 since s(0) = 0 and c(0) = 1. Hence s²(x) + c²(x) = 1.

Sturm Separation Theorem

Theorem 4.1.1. (Sturm Separation Theorem) If y1(x) and y2(x) are two LI solutions of y'' + p(x)y' + q(x)y = 0, then y1(x) vanishes exactly once between any two successive zeros of y2(x), and vice versa.
Proof. Denoting the Wronskian W(y1, y2) by W(x), we have W(x) = y1(x)y2'(x) − y2(x)y1'(x). Since y1(x) and y2(x) are LI, W(x) does not vanish. Let x1 and x2 be any two successive zeros of y2. We shall prove that y1 vanishes exactly once between x1 and x2. Now, x1 and x2 being zeros of y2, we have
W(x1) = y1(x1)y2'(x1),   W(x2) = y1(x2)y2'(x2),    (4.1)
which implies that y1(x1), y2'(x1), y1(x2), y2'(x2) are all non-zero since W(x) does not vanish. Now y2 is continuous and has successive zeros x1 and x2. Therefore, if y2 is increasing at x1, then it must be decreasing at x2, and vice versa. Mathematically speaking, y2'(x1) and y2'(x2) are of opposite sign. Also, W(x), being a non-vanishing and continuous function, retains the same sign. So in view of (4.1), it is easy to conclude that y1(x1) and y1(x2) must be of opposite sign. Therefore, y1 vanishes at least once between x1 and x2. Further, y1 cannot vanish more than once between x1 and x2. For if it does, then applying the same argument as above, it can be proved that y2 has at least one zero between two zeros of y1 lying between x1 and x2. But this would contradict the assumption that x1 and x2 are successive zeros of y2. This completes the proof.
Ex. Two LI solutions of y'' + y = 0 are sin x and cos x. Also, between any two successive zeros of sin x, there is exactly one zero of cos x, and vice versa.

Normal form of DE
A second order linear and homogeneous DE in the standard form is written as
y'' + p(x)y' + q(x)y = 0.    (4.2)
Substituting y = u(x)v(x) into (4.2), we get
vu'' + (2v' + pv)u' + (v'' + pv' + qv)u = 0.    (4.3)
On setting the coefficient of u' equal to 0 and solving, we get v = e^(−(1/2)∫p(x)dx). Then (4.3) reduces to
u'' + h(x)u = 0,    (4.4)
where h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x). The DE (4.4) is referred to as the normal form of the DE (4.2).
Remark: Since v = e^(−(1/2)∫p(x)dx) does not vanish and y = u(x)v(x), it follows that the solution y(x) of (4.2) and the corresponding solution u(x) of (4.4) have the same zeros.
Theorem 4.2.1. If h(x) < 0, and if u(x) is a non-trivial solution of u'' + h(x)u = 0, then u(x) has at most one zero.
Proof. Let x0 be a zero of u(x) so that u(x0) = 0. Then u'(x0) must be non-zero, otherwise u(x) would be the trivial solution of u'' + h(x)u = 0 by Theorem 3.1.2. Suppose u'(x0) > 0. Then by continuity, u'(x) is positive in some interval to the right of x0. So u(x) is an increasing function in that interval to the right of x0. We claim that u(x) does not vanish anywhere to the right of x0. In case u(x) vanishes at some point, say x2, to the right of x0, then u'(x) must vanish at some point x1 such that x0 < x1 < x2. Notice that x1 is a point of maximum of u(x). So u''(x1) ≤ 0, by the second derivative test for maxima. But u''(x1) = −h(x1)u(x1) > 0 since h(x1) < 0 and u(x1) > 0. So u(x) cannot vanish to the right of x0. Likewise, we can show that u(x) does not vanish to the left of x0. A similar argument holds when u'(x0) < 0. Hence, u(x) has at most one zero.
Theorem 4.2.2. If h(x) > 0 for all x > 0, and u(x) is a non-trivial solution of u'' + h(x)u = 0 such that ∫ from 1 to ∞ of h(x)dx = ∞, then u(x) has infinitely many zeros on the positive x-axis.

Proof. Suppose u(x) has only a finite number of zeros on the positive x-axis, and let x0 > 1 be any number greater than the largest zero of u(x). Without loss of generality, assume that u(x) > 0 for all x > x0. Let g(x) = −u'(x)/u(x) so that
g'(x) = −u''(x)/u(x) + [u'(x)/u(x)]² = h(x) + g²(x).
Integrating from x0 to x, we get
g(x) − g(x0) = ∫ from x0 to x of h(t)dt + ∫ from x0 to x of g²(t)dt.
This gives g(x) > 0 for sufficiently large values of x, since ∫ from 1 to ∞ of h(x)dx = ∞. In view of the relation g(x) = −u'(x)/u(x) and u(x) > 0 for all x > x0, it follows that u'(x) < 0 for all sufficiently large x. Also, u''(x) = −h(x)u(x) < 0 for x > x0. Thus, for large x, the graph of u(x) is decreasing and concave down, so u(x) must vanish at some point to the right of x0. This contradicts the assumption that u(x) > 0 for all x > x0, and completes the proof.
Ex. 4.2.1. Show that the zeros of the functions a sin x + b cos x and c sin x + d cos x are distinct and occur alternately whenever ad − bc ≠ 0.
Sol. 4.2.1. The functions a sin x + b cos x and c sin x + d cos x are solutions of the DE y'' + y = 0. Also, the Wronskian of a sin x + b cos x and c sin x + d cos x is non-zero if ad − bc ≠ 0, which in turn implies that the two solutions are LI. Thus, by Theorem 4.1.1, the zeros of these functions are distinct and occur alternately whenever ad − bc ≠ 0.
Ex. 4.2.2. Find the normal form of Bessel's equation x²y'' + xy' + (x² − p²)y = 0, and use it to show that every non-trivial solution has infinitely many positive zeros.
Sol. 4.2.2. Comparing Bessel's equation with y'' + p(x)y' + q(x)y = 0, we obtain p(x) = 1/x and q(x) = (x² − p²)/x². Next, we evaluate
h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x) = 1 + (1 − 4p²)/(4x²).
Therefore, the normal form of Bessel's equation reads as
u'' + h(x)u = 0, or u'' + [ 1 + (1 − 4p²)/(4x²) ] u = 0.    (4.5)
Now we shall prove that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
Case (i): 0 ≤ p ≤ 1/2. In this case, we have
h(x) = 1 + (1 − 4p²)/(4x²) = 1 + (1/x²)(1/2 + p)(1/2 − p) ≥ 1 > 0 for all x > 0.
Also, we have
∫ from 1 to ∞ of h(x)dx = ∫ from 1 to ∞ of [ 1 + (1 − 4p²)/(4x²) ] dx = ∞.
So by Theorem 4.2.2, every non-trivial solution u(x) has infinitely many positive zeros.
Case (ii): p > 1/2. In this case, we have
h(x) = 1 + (1 − 4p²)/(4x²) = (1/x²)(x + √(4p² − 1)/2)(x − √(4p² − 1)/2) > 0 provided x > √(4p² − 1)/2.
Now let x0 = √(4p² − 1)/2 and x = t + x0. Then (4.5) becomes
d²u/dt² + h1(t)u = 0,    (4.6)
where h1(t) = 1 + (1 − 4p²)/(4(t + x0)²). We see that h1(t) > 0 for all t > 0, and
∫ from 1 to ∞ of h1(t)dt = ∫ from 1 to ∞ of [ 1 + (1 − 4p²)/(4(t + x0)²) ] dt = ∞.
So by Theorem 4.2.2, every non-trivial solution u(t) of (4.6) has infinitely many positive zeros. Since x = t + x0, the zeros of solutions of (4.5) and (4.6) differ only by x0. Also, x0 is a positive number. Therefore, every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
From case (i) and case (ii), we conclude that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
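Remark (numerical illustration, not part of the original notes): the conclusion of Ex. 4.2.2 matches what one sees numerically. The Bessel functions J_p have an unbounded sequence of positive zeros, and the gap between consecutive zeros approaches π, as the comparison with u'' + u = 0 suggests. A minimal sketch assuming SciPy is available:

    import numpy as np
    from scipy.special import jn_zeros

    # First ten positive zeros of J_0 and J_2 (integer orders)
    for p in (0, 2):
        zeros = jn_zeros(p, 10)
        print(f"J_{p} zeros:", np.round(zeros, 4))
        print("gaps:", np.round(np.diff(zeros), 4))   # successive gaps tend to pi ≈ 3.1416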
Ex. 4.2.3. The hypothesis of Theorem 4.2.2 is not satisfied for the Euler equation x²y'' + ky = 0, but the conclusion is sometimes true and sometimes false, depending on the magnitude of the constant k. Show that every non-trivial solution has infinitely many positive zeros if k > 1/4, and only a finite number if k ≤ 1/4.
Sol. 4.2.3. Writing the given equation as y'' + (k/x²)y = 0 and comparing with y'' + h(x)y = 0, we get h(x) = k/x². Therefore,
∫ from 1 to ∞ of h(x)dx = [−k/x] from 1 to ∞ = k,
which is a finite number. So the hypothesis of Theorem 4.2.2 is not satisfied.

Now with the transformation x = e^z, the DE x²y'' + ky = 0 transforms to
Dz(Dz − 1)y + ky = 0, or (Dz² − Dz + k)y = 0.
Its AE is
m² − m + k = 0, with the roots m1 = 1/2 + √(1/4 − k), m2 = 1/2 − √(1/4 − k).
Now three cases arise depending on the value of k.

(i) If k < 1/4, then the non-trivial solutions are given by
y = c1 e^(m1 z) + c2 e^(m2 z) = c1 x^(m1) + c2 x^(m2).

(ii) If k = 1/4, then the non-trivial solutions are given by
y = (c1 + c2 z) e^(z/2) = (c1 + c2 log x) x^(1/2).

(iii) If k > 1/4, then the non-trivial solutions are given by
y = e^(z/2) [ c1 cos(√(k − 1/4) z) + c2 sin(√(k − 1/4) z) ] = x^(1/2) [ c1 cos(√(k − 1/4) log x) + c2 sin(√(k − 1/4) log x) ].

In each case, c1 and c2 are not both zero. In cases (i) and (ii), the solutions are non-oscillatory and therefore possess at most a finite number of positive zeros. In case (iii), the solutions are oscillatory (in log x) and possess infinitely many positive zeros.
Theorem 4.2.3. If u(x) is a non-trivial solution of u'' + h(x)u = 0 on a closed interval [a, b], then u(x) has at most a finite number of zeros in this interval.
Proof. Assume that u(x) has infinitely many zeros in the interval [a, b]. Then the infinite set of zeros of u(x) is bounded. So by the Bolzano-Weierstrass theorem of advanced calculus, there exists some x0 in [a, b] and a sequence {xn ≠ x0} of zeros of u(x) such that xn → x0 as n → ∞. Since u(x) is continuous and differentiable, we have
u(x0) = lim (n→∞) u(xn) = 0,
u'(x0) = lim (n→∞) [u(xn) − u(x0)]/(xn − x0) = 0.
By Theorem 3.1.2, it follows that u(x) is the trivial solution of u'' + h(x)u = 0, which is not true as per the given hypothesis. Hence, u(x) cannot have infinitely many zeros in the interval [a, b].

Theorem 4.2.4. (Sturm Comparison Theorem) If y(x) and z(x) are non-trivial solutions of y″ + q(x)y = 0 and z″ + r(x)z = 0 respectively, where q(x) and r(x) are positive functions such that q(x) > r(x), then y(x) vanishes at least once between any two successive zeros of z(x).
Proof. Let x₁ and x₂ be two successive zeros of z(x) with x₁ < x₂. Let us assume that y(x) does not vanish on the interval (x₁, x₂). We shall prove the theorem by deducing a contradiction. Without loss of generality, we assume that y(x) and z(x) are both positive on (x₁, x₂), for either function can be replaced by its negative if necessary. Now, denoting the Wronskian W(y, z) by W(x), we have

W(x) = y(x)z′(x) − z(x)y′(x).   (4.7)

Differentiating and using the two DEs,

W′(x) = yz″ − zy″ = y(−rz) − z(−qy) = (q − r)yz > 0 on (x₁, x₂).

Integrating over (x₁, x₂), we obtain

W(x₂) − W(x₁) > 0   or   W(x₂) > W(x₁).   (4.8)

Since z(x) vanishes at x₁ and x₂, (4.7) yields

W(x₁) = y(x₁)z′(x₁),   W(x₂) = y(x₂)z′(x₂).   (4.9)

Now y(x) being continuous and positive on (x₁, x₂), we have y(x₁) ≥ 0 and y(x₂) ≥ 0. Also z′(x₁) > 0 and z′(x₂) < 0 since z(x) is continuous and positive on (x₁, x₂), and x₁, x₂ are successive zeros of z(x). Hence, (4.9) leads to

W(x₁) ≥ 0   and   W(x₂) ≤ 0,   so that   W(x₂) ≤ W(x₁).   (4.10)

We see that (4.8) and (4.10) are contradictory. This completes the proof.
Ex. Solutions of y″ + 4y = 0 oscillate more rapidly than those of y″ + y = 0 (here q = 4 > 1 = r).
Ex. 4.2.4. Use Sturm Comparison Theorem to solve example 4.2.2.
Sol. 4.2.4. In example 4.2.2, we have

lim_{x→∞} h(x) = lim_{x→∞} [1 + (1 − 4p²)/(4x²)] = 1.

So given ε > 0, there exists δ > 0 such that h(x) ∈ (1 − ε, 1 + ε) for all x > δ. Choosing ε = 1/4, we have h(x) > 3/4 > 1/4 for all x > δ. So by Theorem 4.2.4, every solution of u″ + h(x)u = 0 vanishes at least once between any two successive zeros of any solution of v″ + (1/4)v = 0. Also, every non-trivial solution of v″ + (1/4)v = 0 has infinitely many positive zeros. It follows that every non-trivial solution of u″ + h(x)u = 0 has infinitely many positive zeros.
Ex. 4.2.5. Let y_p(x) be a non-trivial solution of the Bessel equation. Show that every interval of length π contains at least one zero of y_p(x) for 0 ≤ p < 1/2, and at most one zero if p > 1/2.

Sol. 4.2.5. Let [x₀, x₀ + π] be any interval of length π. The non-trivial solution sin(x − x₀) of the DE v″ + v = 0 vanishes at the end points x₀ and x₀ + π. Also, for 0 ≤ p < 1/2, y_p(x) vanishes at least once between two successive zeros of any non-trivial solution of v″ + v = 0 by the Sturm comparison theorem, since 1 + (1 − 4p²)/(4x²) > 1. So [x₀, x₀ + π] contains at least one zero of y_p(x).

Next, if p > 1/2, then again by the Sturm comparison theorem at least one zero of sin(x − x₀) lies between two successive zeros of y_p(x). Now, the interval [x₀, x₀ + π] can contain at most one zero of y_p(x). For, if there were two zeros of y_p(x) in [x₀, x₀ + π], then sin(x − x₀) would have to vanish at some point strictly between x₀ and x₀ + π, which is not possible.
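The behaviour described above is easy to observe numerically. The following is a minimal sketch (not part of the proof), assuming NumPy and SciPy are available; the helper bessel_zeros and the sample order p = 0.3 are illustrative choices only. It brackets sign changes of J_p and shows that the gaps between successive positive zeros approach π, in line with the Sturm theory.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import jv

    def bessel_zeros(p, n_zeros, x_max=100.0, step=0.05):
        """Locate the first n_zeros positive zeros of J_p via sign changes + brentq."""
        zeros = []
        xs = np.arange(step, x_max, step)
        vals = jv(p, xs)
        for i in range(len(xs) - 1):
            if vals[i] * vals[i + 1] < 0:          # sign change brackets a zero
                zeros.append(brentq(lambda x: jv(p, x), xs[i], xs[i + 1]))
                if len(zeros) == n_zeros:
                    break
        return np.array(zeros)

    z = bessel_zeros(p=0.3, n_zeros=15)
    print(np.diff(z))   # spacings between successive zeros tend toward pi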

Chapter 5
Power Series Solutions and Special
Functions
Some Basics of Power Series
An infinite series of the form

∑_{n=0}^∞ aₙ(x − x₀)ⁿ = a₀ + a₁(x − x₀) + a₂(x − x₀)² + ...   (5.1)

is called a power series in x − x₀.


The power series (5.1) is said to converge at a point x if lim_{m→∞} ∑_{n=0}^m aₙ(x − x₀)ⁿ exists finitely, and the sum of the series is defined as the value of the limit. Obviously the power series (5.1) converges at x = x₀, and in this case its sum is a₀. If R is the largest positive real number such that the power series (5.1) converges for all x with |x − x₀| < R, then R is called the radius of convergence of the power series, and (x₀ − R, x₀ + R) is called the interval of convergence. If the power series converges only for x = x₀, then R = 0. If the power series converges for every real value of x, then R = ∞.

We can derive a formula for R by using the ratio test. For, by the ratio test the power series (5.1) converges if lim_{n→∞} |aₙ₊₁/aₙ| |x − x₀| < 1, that is, if |x − x₀| < R where R = lim_{n→∞} |aₙ/aₙ₊₁|.

Similarly, by Cauchy's root test the power series (5.1) converges if lim_{n→∞} |aₙ|^{1/n} |x − x₀| < 1, that is, if |x − x₀| < R where R = 1 / lim_{n→∞} |aₙ|^{1/n}.

Ex. ∑_{n=0}^∞ xⁿ   (R = 1. So the power series converges for −1 < x < 1.)

Ex. ∑_{n=0}^∞ xⁿ/n!   (R = ∞. So the power series converges for all x.)

Ex. ∑_{n=0}^∞ n! xⁿ   (R = 0. So the power series converges only for x = 0.)
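The ratio formula R = lim |aₙ/aₙ₊₁| can be checked numerically for these three series. A small illustrative sketch (the helper ratio_R and the truncation indices are assumptions, not from the text):

    from math import factorial

    def ratio_R(a, N=400):
        """Estimate the radius of convergence from the ratio |a_N / a_{N+1}|."""
        return abs(a(N) / a(N + 1))

    print(ratio_R(lambda n: 1.0))                      # sum x^n       -> about 1
    print(ratio_R(lambda n: 1.0 / factorial(n), 40))   # sum x^n / n!  -> grows without bound (R = infinity)
    print(ratio_R(lambda n: float(factorial(n)), 40))  # sum n! x^n    -> near 0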


Now suppose that the power series (5.1) converges to f(x) for |x − x₀| < R, that is,

f(x) = ∑_{n=0}^∞ aₙ(x − x₀)ⁿ = a₀ + a₁(x − x₀) + a₂(x − x₀)² + a₃(x − x₀)³ + ...   (5.2)

Then it can be proved that f(x) possesses derivatives of all orders in |x − x₀| < R. Also, the series can be differentiated termwise in the sense that

f′(x) = ∑_{n=1}^∞ n aₙ(x − x₀)^{n−1} = a₁ + 2a₂(x − x₀) + 3a₃(x − x₀)² + ...,

f″(x) = ∑_{n=2}^∞ n(n − 1) aₙ(x − x₀)^{n−2} = 2a₂ + 3·2 a₃(x − x₀) + ...,

and so on, and each of the resulting series converges for |x − x₀| < R. The successive differentiated series show that aₙ = f⁽ⁿ⁾(x₀)/n!. Also, the power series (5.2) can be integrated termwise provided the limits of integration lie inside the interval of convergence.

If we have another power series ∑_{n=0}^∞ bₙ(x − x₀)ⁿ converging to g(x) for |x − x₀| < R, that is,

g(x) = ∑_{n=0}^∞ bₙ(x − x₀)ⁿ = b₀ + b₁(x − x₀) + b₂(x − x₀)² + b₃(x − x₀)³ + ...,   (5.3)

then (5.2) and (5.3) can be added or subtracted termwise, that is,

f(x) ± g(x) = ∑_{n=0}^∞ (aₙ ± bₙ)(x − x₀)ⁿ = (a₀ ± b₀) + (a₁ ± b₁)(x − x₀) + (a₂ ± b₂)(x − x₀)² + ...

The two series can also be multiplied, in the sense that

f(x)g(x) = ∑_{n=0}^∞ (a₀bₙ + a₁bₙ₋₁ + ... + aₙb₀)(x − x₀)ⁿ
         = a₀b₀ + (a₀b₁ + a₁b₀)(x − x₀) + (a₀b₂ + a₁b₁ + a₂b₀)(x − x₀)² + ...

If f(x) possesses derivatives of all orders in |x − x₀| < R, then by Taylor's formula

f(x) = f(x₀) + f′(x₀)(x − x₀) + [f″(x₀)/2!](x − x₀)² + ... + [f⁽ⁿ⁾(x₀)/n!](x − x₀)ⁿ + Rₙ,

where Rₙ = [f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!](x − x₀)^{n+1}, ξ being some number between x₀ and x. Obviously the power series ∑_{n=0}^∞ [f⁽ⁿ⁾(x₀)/n!](x − x₀)ⁿ converges to f(x) for those values of x ∈ (x₀ − R, x₀ + R) for which Rₙ → 0 as n → ∞. Thus for a given function f(x), Taylor's formula enables us to find the power series that converges to f(x). On the other hand, if a convergent power series is given, then it is not always possible to find or recognize its sum function. In fact, very few power series have sums that are elementary functions.


If the power series ∑_{n=0}^∞ [f⁽ⁿ⁾(x₀)/n!](x − x₀)ⁿ converges to f(x) for all values of x in some neighbourhood of x₀ (an open interval containing x₀), then f(x) is said to be analytic at x₀, and the power series is called the Taylor series of f(x) at x₀. Notice that f(x) is analytic at each point in the interval of convergence (x₀ − R, x₀ + R) of the power series ∑_{n=0}^∞ [f⁽ⁿ⁾(x₀)/n!](x − x₀)ⁿ.

Power series solution


The exact methods that we have learned in Chapter 2 and Chapter 3 are applicable only to selected classes of DE. There are DEs, such as the Bessel DE, which cannot be solved by exact methods. Solutions of such DEs can be found in the form of power series. We start with a simple example.
Ex. 5.2.1. Find the power series solution of y′ − y = 0 about x = 0.

Sol. 5.2.1. Assume that

y = ∑_{n=0}^∞ aₙxⁿ = a₀ + a₁x + a₂x² + a₃x³ + ...   (5.4)

is a power series solution of the given DE. So

y′ = ∑_{n=1}^∞ n aₙx^{n−1} = a₁ + 2a₂x + 3a₃x² + ...   (5.5)

Substituting y and y′ into the given DE, we get

∑_{n=1}^∞ n aₙx^{n−1} − ∑_{n=0}^∞ aₙxⁿ = 0,   (5.6)

which must be an identity in x since (5.4) is, by assumption, a solution of the given DE. So the coefficients of all powers of x must be zero. Equating to 0 the coefficient of x^{n−1}, we obtain

n aₙ − aₙ₋₁ = 0   or   aₙ = aₙ₋₁/n.

Substituting n = 1, 2, 3, ..., we get

a₁ = a₀,   a₂ = a₁/2 = a₀/2!,   a₃ = a₂/3 = a₀/3!,

and so on. Plugging the values of a₁, a₂, ... into (5.4), we get

y = a₀ + a₀x + (a₀/2!)x² + (a₀/3!)x³ + ... = a₀(1 + x + x²/2! + x³/3! + ...).

Let us examine the validity of this solution. We know that the power series 1 + x + x²/2! + x³/3! + ... converges for all x. It implies that the term by term differentiation carried out in (5.5) is valid for all x. Similarly, the difference of the two series (5.4) and (5.5) considered in (5.6) is valid for all x. It follows that y = a₀(1 + x + x²/2! + x³/3! + ...) is a valid solution of the given DE for all x. Also, we know that eˣ = 1 + x + x²/2! + x³/3! + .... So y = a₀eˣ is the general solution of the DE y′ − y = 0, as expected.
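The recurrence aₙ = aₙ₋₁/n is easy to run numerically. A minimal sketch (the truncation order and the sample point are illustrative assumptions only):

    import math

    a0, N, x = 1.0, 20, 1.5
    coeffs = [a0]
    for n in range(1, N + 1):
        coeffs.append(coeffs[-1] / n)          # a_n = a_{n-1} / n

    series = sum(c * x**k for k, c in enumerate(coeffs))
    print(series, math.exp(x))                 # the two values agree closely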

Ordinary and regular singular points


Consider a second order homogeneous LDE

y″ + p(x)y′ + q(x)y = 0.   (5.7)

If the functions p(x) and q(x) are analytic at x = x₀, then x₀ is called an ordinary point of the DE (5.7). If p(x) and/or q(x) fail to be analytic at x₀, but (x − x₀)p(x) and (x − x₀)²q(x) are analytic at x₀, then we say that x₀ is a regular singular point of (5.7); otherwise we call x₀ an irregular singular point of (5.7). For example, x = 0 is a regular singular point of the DE x²y″ + xy′ + 2y = 0, and every non-zero real number is an ordinary point of the same DE. Further, x = 0 is an irregular singular point of the DE x³y″ + xy′ + y = 0.
The following theorem gives a criterion for the existence of a power series solution near an ordinary point.
Theorem 5.2.1. If a₀, a₁ are arbitrary constants, and x₀ is an ordinary point of a DE y″ + p(x)y′ + q(x)y = 0, then there exists a unique solution y(x) of the DE that is analytic at x₀ such that y(x₀) = a₀ and y′(x₀) = a₁. Furthermore, the power series expansion of y(x) is valid in |x − x₀| < R provided the power series expansions of p(x) and q(x) are valid in this interval.
The above theorem asserts that there exists a unique power series solution of the form
y(x) = ∑_{n=0}^∞ aₙ(x − x₀)ⁿ = a₀ + a₁(x − x₀) + a₂(x − x₀)² + a₃(x − x₀)³ + ...,

about the ordinary point x0 satisfying the initial conditions y(x0 ) = a0 and y 0 (x0 ) = a1 . The
constants a2 , a3 and so on are determined in terms of a0 or a1 as illustrated in the following
examples.
Ex. 5.2.2. Find the power series solution of y″ − y = 0 about x = 0.

Sol. 5.2.2. Here p(x) = 0 and q(x) = −1, both analytic at x = 0. So x = 0 is an ordinary point of the given DE, and there exists a power series solution

y = ∑_{n=0}^∞ aₙxⁿ = a₀ + a₁x + a₂x² + a₃x³ + ...,   (5.8)

where the constants a₂, a₃, ... are to be determined. Substituting the power series solution into the given DE, we get

∑_{n=2}^∞ n(n − 1)aₙx^{n−2} − ∑_{n=0}^∞ aₙxⁿ = 0.

Equating to 0 the coefficient of x^{n−2}, we obtain

n(n − 1)aₙ − aₙ₋₂ = 0   or   aₙ = aₙ₋₂/[n(n − 1)].

Substituting n = 2, 3, 4, ..., we get

a₂ = a₀/2 = a₀/2!,
a₃ = a₁/(3·2) = a₁/3!,
a₄ = a₂/(4·3) = a₀/4!,
a₅ = a₃/(5·4) = a₁/5!,

and so on. Plugging the values of a₂, a₃, a₄, a₅, ... into (5.8), we get

y = a₀ + a₁x + (a₀/2!)x² + (a₁/3!)x³ + (a₀/4!)x⁴ + (a₁/5!)x⁵ + ...
  = a₀(1 + x²/2! + x⁴/4! + ...) + a₁(x + x³/3! + x⁵/5! + ...),

the required power series solution of the given DE. We know that (eˣ + e⁻ˣ)/2 = 1 + x²/2! + x⁴/4! + ... and (eˣ − e⁻ˣ)/2 = x + x³/3! + x⁵/5! + .... So the power series solution becomes y = c₁eˣ + c₂e⁻ˣ, where c₁ = (a₀ + a₁)/2 and c₂ = (a₀ − a₁)/2, which is the same solution of y″ − y = 0 as we obtain by the exact method.
Ex. 5.2.3. Find the power series solution of (1 + x²)y″ + xy′ − y = 0 about x = 0.

Sol. 5.2.3. Here x = 0 is an ordinary point of the given DE. So there exists a power series solution

y = ∑_{n=0}^∞ aₙxⁿ = a₀ + a₁x + a₂x² + a₃x³ + ...   (5.9)

Substituting the power series solution (5.9) into the given DE, we get

(1 + x²) ∑_{n=2}^∞ n(n − 1)aₙx^{n−2} + x ∑_{n=1}^∞ n aₙx^{n−1} − ∑_{n=0}^∞ aₙxⁿ = 0,

that is,

∑_{n=2}^∞ n(n − 1)aₙx^{n−2} + ∑_{n=0}^∞ [n(n − 1) + n − 1]aₙxⁿ = 0,

or

∑_{n=2}^∞ n(n − 1)aₙx^{n−2} + ∑_{n=0}^∞ (n − 1)(n + 1)aₙxⁿ = 0.

Equating to 0 the coefficient of x^{n−2}, we obtain

n(n − 1)aₙ + (n − 3)(n − 1)aₙ₋₂ = 0,   that is,   aₙ = −[(n − 3)/n] aₙ₋₂ provided n ≠ 1.

Substituting n = 2, 3, 4, ..., we get

a₂ = (1/2)a₀,
a₃ = 0,
a₄ = −(1/4)a₂ = −[1/(4·2)]a₀,
a₅ = 0,
a₆ = −(3/6)a₄ = [3/(6·4·2)]a₀,

and so on. Plugging the values of a₂, a₃, a₄, a₅, a₆, ... into (5.9), we get

y = a₀ + a₁x + (1/2)a₀x² + 0·x³ − [1/(4·2)]a₀x⁴ + 0·x⁵ + [3/(6·4·2)]a₀x⁶ + ...
  = a₀[1 + (1/2)x² − (1/(4·2))x⁴ + (3/(6·4·2))x⁶ − ...] + a₁x,

the required power series solution of the given differential equation.
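The recurrence aₙ = −(n − 3)aₙ₋₂/n can be checked numerically by verifying that the truncated series nearly satisfies the DE. A brief sketch (the truncation order and the sample point x = 0.4, chosen inside the interval of convergence, are assumptions):

    a0, a1, N, x = 1.0, 2.0, 30, 0.4
    a = [a0, a1]
    for n in range(2, N + 1):
        a.append(-(n - 3) / n * a[n - 2])      # recurrence from the text

    y   = sum(a[k] * x**k for k in range(N + 1))
    yp  = sum(k * a[k] * x**(k - 1) for k in range(1, N + 1))
    ypp = sum(k * (k - 1) * a[k] * x**(k - 2) for k in range(2, N + 1))
    print((1 + x**2) * ypp + x * yp - y)       # residual close to 0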
The following theorem by Frobenius gives a criterion for the existence of a power series solution near a regular singular point.
Theorem 5.2.2. If x₀ is a regular singular point of a DE y″ + p(x)y′ + q(x)y = 0, then there exists at least one power series solution of the form y = ∑_{n=0}^∞ aₙ(x − x₀)^{n+r} (a₀ ≠ 0), where r is some root of the quadratic equation (known as the indicial equation) obtained by equating to zero the coefficient of the lowest degree term in x in the equation that arises on substituting y = ∑_{n=0}^∞ aₙ(x − x₀)^{n+r} into the given DE.


Remark 5.2.1. The above theorem by Frobenius guarantees at least one power series solution of the form ∑_{n=0}^∞ aₙ(x − x₀)^{n+r} (a₀ ≠ 0) of the DE y″ + p(x)y′ + q(x)y = 0, which we call a Frobenius solution. If the roots of the indicial equation do not differ by an integer, we get two LI Frobenius solutions. In case there exists only one Frobenius solution, it corresponds to the larger root of the indicial equation. The other LI solution depends on the nature of the roots of the indicial equation, as illustrated in the following examples.
Ex. 5.2.4. Find the power series solutions of 2x²y″ + xy′ − (x² + 1)y = 0 about x = 0.

Sol. 5.2.4. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = ∑_{n=0}^∞ aₙx^{n+r} = x^r(a₀ + a₁x + a₂x² + a₃x³ + ...).   (5.10)

Substituting (5.10) into the given DE, we obtain

∑_{n=0}^∞ aₙ(n + r − 1)(2n + 2r + 1)x^{n+r} − ∑_{n=0}^∞ aₙx^{n+r+2} = 0.   (5.11)

Equating to 0 the coefficient of x^r, the lowest degree term in x, we obtain

a₀(r − 1)(2r + 1) = 0   or   (r − 1)(2r + 1) = 0.

Therefore, the roots of the indicial equation are r = 1, −1/2, which do not differ by an integer. So we shall get two LI Frobenius solutions.

Next, equating to 0 the coefficient of x^{r+1}, we find

a₁r(2r + 3) = 0   or   a₁ = 0 for r = 1, −1/2.

Now equating to 0 the coefficient of x^{n+r}, we have the recurrence relation

aₙ(n + r − 1)(2n + 2r + 1) − aₙ₋₂ = 0   or   aₙ = aₙ₋₂/[(n + r − 1)(2n + 2r + 1)],

where n = 2, 3, 4, ....

For r = 1, we have aₙ = aₙ₋₂/[n(2n + 3)], so that

a₂ = a₀/(2·7),   a₃ = a₁/(3·9) = 0,   a₄ = a₂/(4·11) = a₀/(2·7·4·11), ...

For r = −1/2, we have aₙ = aₙ₋₂/[n(2n − 3)], so that

a₂ = a₀/(2·1),   a₃ = a₁/(3·3) = 0,   a₄ = a₂/(4·5) = a₀/(2·1·4·5), ...

Thus, two LI Frobenius solutions of the given DE are

y₁ = a₀x[1 + x²/(2·7) + x⁴/(2·7·4·11) + ...],

y₂ = a₀x^{−1/2}[1 + x²/(2·1) + x⁴/(2·1·4·5) + ...].
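A quick numerical sanity check of the Frobenius construction is possible: build the coefficients from the recurrence for one root and confirm that the truncated series nearly annihilates the DE. A short sketch (the helper deriv, the sample point x = 0.7 and the truncation order are assumptions):

    r, N, x = 1.0, 40, 0.7
    a = [1.0, 0.0]                                        # a_0 = 1, a_1 = 0
    for n in range(2, N + 1):
        a.append(a[n - 2] / ((n + r - 1) * (2 * n + 2 * r + 1)))

    def deriv(order):
        """Value of the 'order'-th derivative of x^r * sum a_k x^k at the sample point."""
        s = 0.0
        for k, ak in enumerate(a):
            m = k + r                                     # exponent of x in this term
            c = 1.0
            for j in range(order):
                c *= (m - j)
            s += ak * c * x**(m - order)
        return s

    y, yp, ypp = deriv(0), deriv(1), deriv(2)
    print(2 * x**2 * ypp + x * yp - (x**2 + 1) * y)       # residual close to 0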
Ex. 5.2.5. Find the power series solutions of xy″ + y′ − xy = 0 about x = 0.

Sol. 5.2.5. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = ∑_{n=0}^∞ aₙx^{n+r} = x^r(a₀ + a₁x + a₂x² + a₃x³ + ...).   (5.12)

Substituting (5.12) into the given DE, we obtain

∑_{n=0}^∞ aₙ(n + r)²x^{n+r−1} − ∑_{n=0}^∞ aₙx^{n+r+1} = 0.   (5.13)

Equating to 0 the coefficient of x^{r−1}, the lowest degree term in x, we obtain

a₀r² = 0   or   r² = 0.

Therefore, the roots of the indicial equation are r = 0, 0, which are equal. So we shall get only one Frobenius series solution.

Next, equating to 0 the coefficient of x^r, we find

a₁(r + 1)² = 0   or   a₁ = 0 for r = 0.

Now equating to 0 the coefficient of x^{n+r−1}, we have the recurrence relation

aₙ(n + r)² − aₙ₋₂ = 0   or   aₙ = aₙ₋₂/(n + r)²,

where n = 2, 3, 4, .... Therefore, we have

a₂ = a₀/(r + 2)²,   a₃ = a₁/(r + 3)² = 0,   a₄ = a₂/(r + 4)² = a₀/[(r + 2)²(r + 4)²], ...

Plugging these values in (5.12), we get

y = a₀x^r[1 + x²/(r + 2)² + x⁴/((r + 2)²(r + 4)²) + ...].   (5.14)

Taking r = 0, we get the following Frobenius solution:

y₁ = a₀[1 + x²/2² + x⁴/(2²·4²) + ...].

To get another LI solution, we substitute (5.14) into the given DE. Then we have

xy″ + y′ − xy = a₀r²x^{r−1}   or   (xD² + D − x)y = a₀r²x^{r−1}.   (5.15)

Note that substitution of (5.14) into the given DE leaves only the lowest degree term in x. Obviously (y)_{r=0} = y₁ satisfies (5.15) and hence the given DE. Now differentiating (5.15) partially w.r.t. r, we obtain

(xD² + D − x) ∂y/∂r = a₀(2r x^{r−1} + r²x^{r−1} ln x).   (5.16)

This shows that (∂y/∂r)_{r=0} is a solution of the given DE. Thus, the second LI solution of the given DE is

y₂ = (∂y/∂r)_{r=0} = y₁ ln x − a₀[x²/4 + (3/128)x⁴ + ...].
Ex. 5.2.6. Find the power series solutions of x(1 + x)y″ + 3xy′ + y = 0 about x = 0.

Sol. 5.2.6. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = ∑_{n=0}^∞ aₙx^{n+r} = x^r(a₀ + a₁x + a₂x² + a₃x³ + ...).   (5.17)

Substituting (5.17) into the given DE, we obtain

∑_{n=0}^∞ aₙ(n + r)(n + r − 1)x^{n+r−1} + ∑_{n=0}^∞ aₙ[(n + r)(n + r + 2) + 1]x^{n+r} = 0.   (5.18)

Equating to 0 the coefficient of x^{r−1}, the lowest degree term in x, we obtain

a₀r(r − 1) = 0   or   r(r − 1) = 0.

Therefore, the roots of the indicial equation are r = 0, 1, which differ by an integer. So we shall get only one Frobenius solution, and it corresponds to the larger root r = 1.

Now equating to 0 the coefficient of x^{n+r−1}, we have the recurrence relation

aₙ(n + r − 1) + aₙ₋₁(n + r) = 0   or   aₙ = −[(n + r)/(n + r − 1)] aₙ₋₁,

where n = 1, 2, 3, 4, .... Therefore, we have

a₁ = −[(r + 1)/r] a₀,   a₂ = [(r + 2)/r] a₀,   a₃ = −[(r + 3)/r] a₀, ...

For r = 1, we get a₁ = −2a₀, a₂ = 3a₀, a₃ = −4a₀, .... So the Frobenius series solution is

y = x^r(a₀ + a₁x + a₂x² + a₃x³ + ...) = a₀(x − 2x² + 3x³ − 4x⁴ + ...).   (5.19)

Now we find the other LI solution. Since a₁, a₂, ... are not defined at r = 0, we replace a₀ by b₀r in (5.17). Thus the modified series solution reads as

y = x^r(b₀r + a₁x + a₂x² + a₃x³ + ...),

which on substitution into the given DE yields

x(1 + x)y″ + 3xy′ + y = b₀r²(r − 1)x^{r−1}.   (5.20)

Obviously (y)_{r=0} and (y)_{r=1} satisfy the given DE. But we find that the solutions

(y)_{r=0} = −b₀(x − 2x² + 3x³ − ...),   (y)_{r=1} = b₀(x − 2x² + 3x³ − ...),

are not LI from the Frobenius solution (5.19). So we partially differentiate (5.20) with respect to r and find that (∂y/∂r)_{r=0} is a solution of the given DE. Thus the other LI solution of the given DE reads as

y₂ = (∂y/∂r)_{r=0} = y₁ ln x + b₀(1 − x + x² − x³ + ...),

where y₁ = (y)_{r=0} is a constant multiple of the Frobenius solution (5.19).


Ex. 5.2.7. Find the power series solutions of x²y″ + x³y′ + (x² − 2)y = 0 about x = 0.

Sol. 5.2.7. Here r = 2, −1, and the two LI solutions are

y₁ = a₀x²[1 − (3/10)x² + (3/56)x⁴ − ...],   y₂ = a₀x^{−1}.
Ex. 5.2.8. Find the power series solutions of x²y″ + 6xy′ + (x² + 6)y = 0 about x = 0.

Sol. 5.2.8. Here r = −2, −3, and the recurrence relation is

aₙ = −aₙ₋₂/[(n + r + 2)(n + r + 3)].

For r = −3, we find that a₁ is arbitrary. In this case, r = −3 provides the general solution y = a₀y₁ + a₁y₂, where

y₁ = x^{−3}[1 − x²/2! + x⁴/4! − ...],

y₂ = x^{−3}[x − x³/3! + x⁵/5! − ...].

Note that corresponding to the larger root r = −2, you will get the Frobenius solution, a constant multiple of y₂. (Find and see!)

Gauss's Hypergeometric Equation

A DE of the form

x(1 − x)y″ + [c − (a + b + 1)x]y′ − aby = 0,   (5.21)

where a, b and c are constants, is called the hypergeometric equation. We observe that x = 0 is a regular singular point of (5.21). So there exists at least one Frobenius solution of the form

y = ∑_{n=0}^∞ aₙx^{n+r} = x^r(a₀ + a₁x + a₂x² + a₃x³ + ...).   (5.22)

Substituting (5.22) into (5.21), we obtain

∑_{n=0}^∞ aₙ(n + r)(c + n + r − 1)x^{n+r−1} − ∑_{n=0}^∞ aₙ(n + r + a)(n + r + b)x^{n+r} = 0.

Comparing coefficients of x^{r−1}, the lowest degree term in x, we obtain

a₀r(c + r − 1) = 0   or   r(c + r − 1) = 0.   (5.23)


Therefore, the roots of the indicial equation are r = 0, 1 − c. Now comparing the coefficient of x^{n+r−1}, we have the recurrence relation

aₙ(n + r)(c + n + r − 1) − aₙ₋₁(n − 1 + r + a)(n − 1 + r + b) = 0,

that is,

aₙ = [(a + n − 1 + r)(b + n − 1 + r)] / [(n + r)(c + n − 1 + r)] · aₙ₋₁,

where n = 1, 2, 3, 4, ....

For r = 0, we have

aₙ = [(a + n − 1)(b + n − 1)] / [n(c + n − 1)] · aₙ₋₁,

so that

a₁ = [ab/(1·c)] a₀,   a₂ = [(a + 1)(b + 1)/(2(c + 1))] a₁ = [a(a + 1)b(b + 1)/(1·2·c(c + 1))] a₀, ...

So the Frobenius solution corresponding to r = 0 reads as

y = a₀[1 + (ab/(1·c))x + (a(a + 1)b(b + 1)/(1·2·c(c + 1)))x² + ...].

This series with a₀ = 1 is called the hypergeometric series and is denoted by F(a, b, c, x). Thus,

F(a, b, c, x) = 1 + ∑_{n=1}^∞ [a(a + 1)...(a + n − 1) b(b + 1)...(b + n − 1)] / [n! c(c + 1)...(c + n − 1)] xⁿ.

In case a = 1 and b = c, we get

F(1, b, b, x) = 1 + x + x² + ...,

the familiar geometric series. Thus, F(a, b, c, x) generalizes the geometric series; that is why it is named the hypergeometric series. Further, we find

lim_{n→∞} |aₙ₊₁/aₙ| |x| = lim_{n→∞} [(a + n)(b + n)/((n + 1)(c + n))] |x| = |x|,

provided c is not zero or a negative integer. Therefore, F(a, b, c, x) is an analytic function, called the hypergeometric function, on the interval |x| < 1. It is the simplest particular solution of the hypergeometric equation.

Next we find the series solution corresponding to the indicial root r = 1 − c. The series solution in this case is given by

y = x^{1−c}(a₀ + a₁x + a₂x² + a₃x³ + ...),

where the constants a₁, a₂ and so on can be determined using the recurrence relation. Alternatively, we substitute y = x^{1−c}z into the given DE (5.21) and obtain

x(1 − x)z″ + [(2 − c) − ((a − c + 1) + (b − c + 1) + 1)x]z′ − (a − c + 1)(b − c + 1)z = 0,   (5.24)

which is the hypergeometric equation with the constants a, b and c replaced by a − c + 1, b − c + 1 and 2 − c. Therefore, (5.24) has the power series solution

z = F(a − c + 1, b − c + 1, 2 − c, x).

So the second power series solution of (5.21) is

y = x^{1−c}F(a − c + 1, b − c + 1, 2 − c, x).


Thus, the general solution of (5.21) near the regular singular point x = 0 is

y = c₁F(a, b, c, x) + c₂x^{1−c}F(a − c + 1, b − c + 1, 2 − c, x),   (5.25)

provided c is not an integer.

Next we solve the DE (5.21) near the regular singular point x = 1. Setting t = 1 − x, we find that x = 1 corresponds to t = 0 and (5.21) transforms to

t(1 − t)y″ + [(a + b − c + 1) − (a + b + 1)t]y′ − aby = 0,

where the prime denotes the derivative with respect to t. It is a hypergeometric equation with c replaced by a + b − c + 1. So its solution, with t replaced by 1 − x in view of (5.25), reads as

y = c₁F(a, b, a + b − c + 1, 1 − x) + c₂(1 − x)^{c−a−b}F(c − b, c − a, c − a − b + 1, 1 − x),

provided c − a − b is not an integer.
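The series definition of F(a, b, c, x) can be checked against a library implementation. A brief sketch (assumes SciPy is available; the parameter values and the helper F are illustrative assumptions only):

    from scipy.special import hyp2f1

    def F(a, b, c, x, N=200):
        """Partial sum of the hypergeometric series, valid for |x| < 1."""
        term, total = 1.0, 1.0
        for n in range(N):
            term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * x
            total += term
        return total

    print(F(0.5, 1.2, 2.3, 0.4), hyp2f1(0.5, 1.2, 2.3, 0.4))   # close agreement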
Remark 5.3.1. Any DE of the form

(x − A)(x − B)y″ + (C + Dx)y′ + Ey = 0,   (5.26)

where A, B, C, D and E are constants with A ≠ B and D ≠ 0, can be transformed to the hypergeometric equation

t(1 − t)y″ + (F + Gt)y′ + Hy = 0,   (5.27)

where t = (x − A)/(B − A), and F, G, H are certain combinations of the constants in (5.26). The primes in (5.27) denote derivatives with respect to t. This is a hypergeometric equation with a, b and c defined by F = c, G = −(a + b + 1) and H = −ab. Therefore, (5.27) can be solved in terms of the hypergeometric function near t = 0 and t = 1. It follows that (5.26) can be solved in terms of the same function near x = A and x = B.
Remark 5.3.2. Most of the familiar functions of elementary analysis can be expressed in terms of the hypergeometric function. For example,

(i) (1 + x)ⁿ = F(−n, b, b, −x)

(ii) log(1 + x) = xF(1, 1, 2, −x)

(iii) sin⁻¹x = xF(1/2, 1/2, 3/2, x²)

(iv) eˣ = lim_{b→∞} F(a, b, a, x/b)
Ex. 5.3.1. Solve (x² − x − 6)y″ + (5 + 3x)y′ + y = 0 near x = 3.

Sol. 5.3.1. The given equation can be rewritten as

(x − 3)(x + 2)y″ + (5 + 3x)y′ + y = 0.   (5.28)

Here A = 3, B = −2. Therefore,

t = (x − A)/(B − A) = (x − 3)/(−2 − 3) = −(x − 3)/5,   so that   x = 3 − 5t.

So the given equation becomes

t(1 − t)y″ + (14/5 − 3t)y′ − y = 0,

a hypergeometric equation with c = 14/5, a + b + 1 = 3, ab = 1. This implies a = b = 1. Therefore, the solution near t = 0, that is, near x = 3, is

y = c₁F(1, 1, 14/5, −(x − 3)/5) + c₂[−(x − 3)/5]^{−9/5} F(−4/5, −4/5, −4/5, −(x − 3)/5).

Chapter 6
Fourier Series
Introduction
We are familiar with the power series representation of a function f(x). The representation of f(x) in the form of a trigonometric series given by

f(x) = a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)   (6.1)

is required in the treatment of many physical problems such as heat conduction, electromagnetic waves, mechanical vibrations etc. An important advantage of the series (6.1) over a usual power series in x is that it can represent f(x) even if f(x) possesses many discontinuities (e.g. the discontinuous impulse functions of electrical engineering). On the other hand, a power series can represent f(x) only when f(x) is continuous and possesses derivatives of all orders.
Let m and n be positive integers such that m ≠ n. Then we have

∫_{−π}^{π} sin nx dx = 0,   ∫_{−π}^{π} cos nx dx = 0,   ∫_{−π}^{π} cos mx sin nx dx = 0,

∫_{−π}^{π} cos mx cos nx dx = 0,   ∫_{−π}^{π} sin mx sin nx dx = 0.

Further,

∫_{−π}^{π} cos²nx dx = π = ∫_{−π}^{π} sin²nx dx.

Now, we do some classical calculations that were first done by Euler. We assume that the function f(x) in (6.1) is defined on [−π, π]. Also, we assume that the series in (6.1) is uniformly convergent so that term by term integration is possible.

Integrating both sides of (6.1) over [−π, π], we get

a₀ = (1/π) ∫_{−π}^{π} f(x) dx.   (6.2)

Multiplying both sides of (6.1) by cos nx, and then integrating over [−π, π], we get

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx.   (6.3)

Note that this formula, for n = 0, gives the value of a0 as given in (6.2). That is why, a0 is divided
by 2 in (6.1).
Next, multiplying both sides of (6.1) by sin nx, and then integrating over [−π, π], we get

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx.   (6.4)

These calculations show that the coefficients an and bn can be obtained from the sum f (x) in
(6.1) by means of the formulas (6.3) and (6.4) provided the series (6.1) is uniformly convergent.
However, this situation is too restricted to be of much practical use because first we have to ensure
that the given function f (x) admits an expansion as a uniformly convergent trigonometric series.
For this reason, we set aside the idea of finding the coefficients an and bn in the expansion (6.1)
that may or may not exist. Instead we use formulas (6.3) and (6.4) to define some numbers an and
bn . Then we use these to construct a series of the form (6.1). When we follow this approach, the
numbers an and bn are called the Fourier coefficients of the function f (x) and the series (6.1) is
called Fourier series of f (x). Obviously, the function f (x) must be integrable in order to construct
its Fourier series. Note that a discontinuous function may be integrable.
We hope that the Fourier series of f (x) will converge to f (x) so that (6.1) is a valid representation or expansion of f (x). However, this is not always true. There exist integrable functions
whose Fourier series diverge at one or more points. That is, why some advanced texts on Fourier
series write (6.1) in the form

f(x) ~ a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx),   (6.5)

where the sign ~ is used in order to emphasize that the Fourier series on the right is not necessarily convergent to f(x).
Just as a Fourier series need not converge, a convergent trigonometric series need not be the Fourier series of some function. For example, it is known that the trigonometric series

∑_{n=1}^∞ sin nx / ln(1 + n)

converges for all x. But it is not a Fourier series, since 1/ln(1 + n) cannot be obtained from formula (6.4) for any choice of integrable function f(x). In fact, this series fails to be a Fourier series because it fails to satisfy a remarkable theorem, which states that the term by term integral of any Fourier series (whether convergent or not) must converge for all x.

Thus, the fundamental problem of the subject of Fourier series is to discover the properties of an integrable function that guarantee that its Fourier series not only converges but also converges to the function. Before this, let us see some examples.
Ex. 6.1.1. Find the Fourier series of the function f(x) = x, −π ≤ x ≤ π.

Sol. 6.1.1. We find

a₀ = (1/π) ∫_{−π}^{π} f(x) dx = 0,

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx = 0,

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (2/n)(−1)^{n+1}.

So the Fourier series of f(x) = x reads as

x = 2[sin x − (1/2) sin 2x + (1/3) sin 3x − ...].   (6.6)

Here the equals sign is an expression of hope rather than definite knowledge. It can be proved that the Fourier series in (6.6) converges to x in −π < x < π. At x = −π or x = π, the Fourier series converges to 0, and hence does not converge to f(x) = x at x = ±π. Further, each term on the right hand side in (6.6) has period 2π, so the entire expression on the right hand side of (6.6) has period 2π. It follows that the Fourier series in (6.6) does not converge to f(x) = x outside the interval −π < x < π. But if f(x) = x is given to be a periodic function of period 2π, then the Fourier series in (6.6) converges to f(x) for all real values of x except x = kπ, where k is any odd integer. In the left panel of Figure 6.1, we show the plots of x, 2 sin x, 2 sin x − sin 2x and 2 sin x − sin 2x + (2/3) sin 3x in the range −π < x < π. We see that as we consider more and more terms of the Fourier series in (6.6), it approximates the function f(x) = x better and better, as expected.
[Figure 6.1. Left panel: plots of x (black line), 2 sin x (green), 2 sin x − sin 2x (red) and 2 sin x − sin 2x + (2/3) sin 3x (blue) in the range −π < x < π. Right panel: plots of f(x) (black lines), π/2 (green), π/2 + 2 sin x (red), π/2 + 2 sin x + (2/3) sin 3x (blue) and π/2 + 2 sin x + (2/3) sin 3x + (2/5) sin 5x (purple) in the range −π < x < π.]
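Partial sums like those plotted in Figure 6.1 can be reproduced numerically from formula (6.4). A minimal sketch for f(x) = x (the grid size, the number of terms and the sample point are assumptions):

    import numpy as np

    def b(n, M=20000):
        """Numerical value of b_n for f(x) = x on [-pi, pi]."""
        x = np.linspace(-np.pi, np.pi, M)
        return np.trapz(x * np.sin(n * x), x) / np.pi

    x0 = 1.0
    partial = sum(b(n) * np.sin(n * x0) for n in range(1, 30))
    print(partial)                                   # approaches f(x0) = 1
    print([round(b(n), 4) for n in range(1, 5)])     # compare with 2(-1)^(n+1)/n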
Ex. 6.1.2. Find the Fourier series of the function

f(x) = 0 for −π ≤ x < 0,   f(x) = π for 0 ≤ x ≤ π.

Sol. 6.1.2. We find

a₀ = (1/π) ∫_{−π}^{π} f(x) dx = π,

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx = 0,

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (1/n)[1 − (−1)ⁿ].

So the Fourier series of f(x) reads as

f(x) = π/2 + 2[sin x + (1/3) sin 3x + (1/5) sin 5x + ...].   (6.7)

The Fourier series in (6.7) converges to f(x) in −π < x < π except at x = 0. At x = 0, the value of f(x) is π while the Fourier series converges to π/2. In the right panel of Figure 6.1, we show the plots of f(x), π/2, π/2 + 2 sin x, π/2 + 2 sin x + (2/3) sin 3x and π/2 + 2 sin x + (2/3) sin 3x + (2/5) sin 5x in the range −π < x < π. We see that as we consider more and more terms of the Fourier series in (6.7), it approximates the function f(x) better and better, as expected.

Dirichlet's conditions for convergence

Let f(x) be a function defined and bounded on −π ≤ x < π such that it has only a finite number of discontinuities and only a finite number of maxima and minima on this interval. Let f(x) be defined for other values of x by the periodicity condition f(x + 2π) = f(x). Then the Fourier series of f(x) converges to (1/2)[f(x−) + f(x+)] at every point x. Thus, the Fourier series of f(x) converges to f(x) at every point x of continuity. If we redefine the function as the average of the two one-sided limits at each point of discontinuity, that is, f(x) = (1/2)[f(x−) + f(x+)], then the Fourier series represents f(x) everywhere.
Ex. 6.2.1. Find the Fourier series of the function

f(x) = 0 for −π ≤ x < 0,   f(x) = x for 0 ≤ x ≤ π.

Hence show that

π²/8 = 1 + 1/3² + 1/5² + 1/7² + ...

Sol. 6.2.1. We find

a₀ = (1/π) ∫_{−π}^{π} f(x) dx = π/2,

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx = [(−1)ⁿ − 1]/(πn²),

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (−1)^{n+1}/n.

So the Fourier series of f(x) is

f(x) = π/4 + ∑_{n=1}^∞ [((−1)ⁿ − 1)/(πn²)] cos nx + ∑_{n=1}^∞ [(−1)^{n+1}/n] sin nx.   (6.8)

At x = π, f(x) is discontinuous. So its Fourier series converges to (1/2)[f(π−) + f(π+)] = (1/2)(π + 0) = π/2. So at x = π, (6.8) gives

π/2 = π/4 + ∑_{n=1}^∞ [((−1)ⁿ − 1)/(πn²)] (−1)ⁿ.


It can be rearranged to write

π²/8 = 1 + 1/3² + 1/5² + 1/7² + ...

Fourier series for even and odd functions

Let f(x) be an integrable function defined on −π ≤ x ≤ π. If f(x) is even, then its Fourier series carries only cosine terms, and the Fourier coefficients are given by

aₙ = (2/π) ∫₀^π f(x) cos nx dx,   bₙ = 0.

If f(x) is odd, then its Fourier series carries only sine terms, and the Fourier coefficients are given by

aₙ = 0,   bₙ = (2/π) ∫₀^π f(x) sin nx dx.

For example, the Fourier coefficients of the odd function f(x) = x, −π ≤ x ≤ π, are aₙ = 0 and bₙ = 2(−1)^{n−1}/n. So the Fourier series of x is given by

x = 2[sin x − (1/2) sin 2x + (1/3) sin 3x − ...].   (6.9)

Note that the Fourier series converges to x for −π < x < π and not at the end points x = ±π. Similarly, the Fourier coefficients of the even function f(x) = |x|, −π ≤ x ≤ π, are a₀ = π, aₙ = 2[(−1)ⁿ − 1]/(πn²) and bₙ = 0. So we have

|x| = π/2 − (4/π)[cos x + (1/3²) cos 3x + (1/5²) cos 5x + ...].   (6.10)

It is interesting to observe that the two series (6.9) and (6.10) both represent the same function f(x) = x on 0 ≤ x ≤ π, since |x| = x for x ≥ 0. The series (6.9) is called the Fourier sine series of x, and the series (6.10) is called the Fourier cosine series of x. Similarly, any function f(x) satisfying the Dirichlet conditions on 0 ≤ x ≤ π can be expanded in both a sine series and a cosine series on this interval, subject to the proviso that the sine series does not converge to f(x) at the end points x = 0 and x = π unless f(x) = 0 at these points. Thus, to obtain the sine series of a function, we redefine the function (if necessary) to have the value 0 at x = 0, and then extend it over the interval −π ≤ x < 0 such that f(−x) = −f(x) for all x in −π ≤ x ≤ π. This is called the odd extension of f(x). Similarly, an even extension of f(x) can be carried out in order to obtain the Fourier cosine series.
Ex. 6.3.1. Find the Fourier sine and cosine series of f(x) = cos x, 0 ≤ x ≤ π.

Sol. 6.3.1. For the sine series, we find

bₙ = (2/π) ∫₀^π f(x) sin nx dx = (2/π) ∫₀^π cos x sin nx dx = (2n/π)[1 + (−1)ⁿ]/(n² − 1),   n ≠ 1,

b₁ = (2/π) ∫₀^π cos x sin x dx = 0.

So the Fourier sine series of cos x is given by

cos x = ∑_{n=2}^∞ (2n/π)[1 + (−1)ⁿ]/(n² − 1) sin nx.

For the cosine series, we find

aₙ = (2/π) ∫₀^π f(x) cos nx dx = (2/π) ∫₀^π cos x cos nx dx = 0,   n ≠ 1,

a₁ = (2/π) ∫₀^π cos x cos x dx = 1.

So the Fourier cosine series of cos x is given by

cos x = cos x.

Fourier series on arbitrary intervals

Let f(x) be defined on an interval −L ≤ x ≤ L. If we let t = πx/L, then we have

f(x) = f(Lt/π) = g(t),   −π ≤ t ≤ π.

So the Fourier series of g(t) is given by

g(t) = a₀/2 + ∑_{n=1}^∞ (aₙ cos nt + bₙ sin nt),

where aₙ = (1/π) ∫_{−π}^{π} g(t) cos nt dt and bₙ = (1/π) ∫_{−π}^{π} g(t) sin nt dt.

Since t = πx/L, it follows that

f(x) = a₀/2 + ∑_{n=1}^∞ [aₙ cos(nπx/L) + bₙ sin(nπx/L)],

where aₙ = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx and bₙ = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.

Ex. 6.4.1. Find the Fourier coefficients of the function

f(x) = 0 for −2 ≤ x < 0,   f(x) = 1 for 0 ≤ x ≤ 2.

Sol. 6.4.1. Here L = 2. So we find

a₀ = (1/L) ∫_{−L}^{L} f(x) dx = (1/2) ∫_{−2}^{2} f(x) dx = 1,

aₙ = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx = (1/2) ∫_{−2}^{2} f(x) cos(nπx/2) dx = 0,

bₙ = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx = (1/2) ∫_{−2}^{2} f(x) sin(nπx/2) dx = [1 − (−1)ⁿ]/(nπ).
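These coefficients are easy to confirm by numerical integration. A short sketch (the grid size and the number of coefficients printed are assumptions):

    import numpy as np

    L = 2.0
    x = np.linspace(-L, L, 40001)
    f = np.where(x >= 0, 1.0, 0.0)                  # step function of Ex. 6.4.1

    a0 = np.trapz(f, x) / L
    print(round(a0, 4))                             # close to 1
    for n in range(1, 5):
        an = np.trapz(f * np.cos(n * np.pi * x / L), x) / L
        bn = np.trapz(f * np.sin(n * np.pi * x / L), x) / L
        print(n, round(an, 4), round(bn, 4), round((1 - (-1)**n) / (n * np.pi), 4))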

Chapter 7
Boundary Value Problems
In this chapter, we shall discuss the solution of some boundary value problems.

One dimensional wave equation


Consider an elastic string of negligible mass and length π tied at its two ends along the x-axis at the points (0, 0) and (π, 0). Suppose the string is pulled into the shape y = f(x) in the xy-plane and released. Then it can be shown that the vibrations of the string in the xy-plane are governed by the one dimensional wave equation

∂²y/∂x² = (1/a²) ∂²y/∂t²,   (7.1)

where a is some positive constant, and y(x, t) is the displacement of the string along the y-axis direction. The wave equation is subjected to the following four conditions.
The first condition is
y(0, t) = 0,

(7.2)

since the left end of the string is tied at (0, 0) for all the time, and hence it can not have displacement
along the y-axis.
The second condition is

y(π, t) = 0,   (7.3)

since the right end of the string is tied at (π, 0) for all time, and hence it cannot have displacement along the y-axis.

The third condition is

∂y/∂t = 0 at t = 0,   (7.4)

since the string is at rest at t = 0.


The fourth condition is
y(x, 0) = f (x),

(7.5)

since the string is in the shape y = f (x) at t = 0.


Once the string is released from the initial shape y(x, 0) = f (x), we are interested to find the
distance or displacement of the string from the x-axis at any time t. It is equivalent to saying that
we are interested to solve (7.1) for y(x, t) subject to the four conditions (7.2)-(7.5).
Assume that (7.1) possesses a solution of the form
y(x, t) = u(x)v(t),

(7.6)

where u(x) and v(t) are to be determined. Plugging (7.6) into (7.1), we get
u″(x)/u(x) = (1/a²) v″(t)/v(t) = λ,   (7.7)

where λ is some constant. This yields the following two equations:


u00 (x) u(x) = 0,

(7.8)

v 00 (t) a2 v(t) = 0.

(7.9)

Now, let us first solve (7.8); later we shall look for the solution of (7.9). Considering (7.6), the condition y(0, t) = 0 in (7.2) gives u(0)v(t) = 0, or u(0) = 0. Similarly, y(π, t) = 0 in (7.3) gives u(π) = 0. Further, we see that the nature of the solution of (7.8) depends on the sign of λ.

(i) When λ > 0, the solution reads as u(x) = c₁e^{√λ x} + c₂e^{−√λ x}. Using the conditions u(0) = 0 and u(π) = 0, we get c₁ = 0 = c₂, and hence u(x) = 0. This leads to the trivial solution y(x, t) = u(x)v(t) = 0, which is not of our interest.

(ii) When λ = 0, the solution reads as u(x) = c₁x + c₂. Again, using the conditions u(0) = 0 and u(π) = 0, we get c₁ = 0 = c₂, which leads to the trivial solution y(x, t) = u(x)v(t) = 0.

(iii) When λ < 0, say λ = −n², the solution reads as u(x) = c₁ sin nx + c₂ cos nx. Applying the condition u(0) = 0, we get c₂ = 0. The condition u(π) = 0 then implies that c₁ sin nπ = 0. Obviously, for a non-trivial solution we must have c₁ ≠ 0. Then the condition c₁ sin nπ = 0 forces n to be a positive integer. Thus,

uₙ(x) = sin nx   (7.10)

is a non-trivial solution of (7.8) for each positive integer n.

Now, the solution of (7.9) with λ = −n² reads as v(t) = c₁ sin nat + c₂ cos nat. The condition in (7.4) leads to u(x)v′(0) = 0 or v′(0) = 0, which in turn gives c₁ = 0. So

vₙ(t) = cos nat   (7.11)

is a non-trivial solution of (7.9).


In view of (7.6), (7.10) and (7.11), we can say that

yₙ(x, t) = uₙ(x)vₙ(t) = sin nx cos nat   (7.12)

is a solution of (7.1) for each positive integer n. It follows that

y(x, t) = ∑_{n=1}^∞ bₙ yₙ(x, t) = ∑_{n=1}^∞ bₙ sin nx cos nat   (7.13)

is also a solution of (7.1). To determine bn , we use the fourth condition y(x, 0) = f (x) given in
(7.5). Then (7.13) gives
f(x) = ∑_{n=1}^∞ bₙ sin nx.   (7.14)

Notice that the series on the right hand side in (7.14) is the Fourier sine series of f(x) in the interval [0, π]. So we have

bₙ = (2/π) ∫₀^π f(x) sin nx dx.   (7.15)

Hence,

y(x, t) = ∑_{n=1}^∞ bₙ sin nx cos nat,   (7.16)
with bn from (7.15) is the solution of (7.1) subject to the four conditions (7.2)-(7.5).
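The solution (7.16) is straightforward to evaluate numerically once the coefficients (7.15) are computed. A minimal sketch for the plucked-string shape f(x) = x(π − x) (the choice of f, the constant a and the truncation order are illustrative assumptions):

    import numpy as np

    a_const, N = 1.0, 40
    xs = np.linspace(0, np.pi, 2001)
    f = xs * (np.pi - xs)                         # assumed initial shape

    b = [2.0 / np.pi * np.trapz(f * np.sin(n * xs), xs) for n in range(1, N + 1)]

    def y(x, t):
        """Truncated series (7.16)."""
        return sum(b[n - 1] * np.sin(n * x) * np.cos(n * a_const * t) for n in range(1, N + 1))

    print(y(np.pi / 2, 0.0), (np.pi / 2) * (np.pi - np.pi / 2))   # matches the initial shape at t = 0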

One dimensional heat equation


Consider a uniform rod of length π aligned along the x-axis from (0, 0) to (π, 0). Suppose that the two ends of the rod are kept at zero temperature all the time, and f(x) represents the temperature of the rod at time t = 0. Then it can be shown that the temperature w(x, t) of the rod is governed by the one dimensional heat equation

∂²w/∂x² = (1/a²) ∂w/∂t,   (7.17)

where a is some positive constant. The heat equation is subjected to the following three conditions.
The first condition is
w(0, t) = 0,

(7.18)

since the left end of the rod is kept at zero temperature for all t.
The second condition is
w(π, t) = 0,

(7.19)

since the right end of the rod is kept at zero temperature for all t.
The third condition is
w(x, 0) = f (x),

(7.20)


since the temperature of the rod is given by f (x) at t = 0.


Having known the temperature of the rod at t = 0, we are interested to find the temperature
of the rod at any time t. It is equivalent to saying that we are interested to solve (7.17) for w(x, t)
subject to the three conditions (7.18)-(7.20).
Assume that (7.17) possesses a solution of the form
w(x, t) = u(x)v(t),

(7.21)

where u(x) and v(t) are to be determined. Plugging (7.21) into (7.17), we get
u″(x)/u(x) = (1/a²) v′(t)/v(t) = λ,   (7.22)

where λ is some constant. This yields the following two equations:

u″(x) − λu(x) = 0,   (7.23)

v′(t) − λa²v(t) = 0.   (7.24)

Following the strategy discussed in the previous section, the non-trivial solution of (7.23) subject to the conditions (7.18) and (7.19) reads as

uₙ(x) = sin nx,   (7.25)

where n is a positive integer and λ = −n².

Now, the solution of (7.24) with λ = −n² reads as v(t) = c₁e^{−n²a²t}. So

vₙ(t) = e^{−n²a²t}   (7.26)

is a non-trivial solution of (7.24).

In view of (7.21), (7.25) and (7.26),

wₙ(x, t) = uₙ(x)vₙ(t) = sin nx e^{−n²a²t}   (7.27)

is a solution of (7.17) for each positive integer n. It follows that


w(x, t) = ∑_{n=1}^∞ bₙwₙ(x, t) = ∑_{n=1}^∞ bₙ sin nx e^{−n²a²t}   (7.28)

is also a solution of (7.17). To determine bₙ, we use the third condition w(x, 0) = f(x) given in (7.20). Then (7.28) gives

f(x) = ∑_{n=1}^∞ bₙ sin nx.   (7.29)

Notice that the series on the right hand side in (7.29) is the Fourier sine series of f(x) in the interval [0, π]. So we have

bₙ = (2/π) ∫₀^π f(x) sin nx dx.   (7.30)

Hence,

w(x, t) = ∑_{n=1}^∞ bₙ sin nx e^{−n²a²t},   (7.31)

with bn from (7.30) is the solution of (7.17) subject to the three conditions (7.18)-(7.20).
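As with the wave equation, the series (7.31) can be evaluated numerically; the exponential factors make the solution decay toward zero as t grows. A brief sketch (the initial temperature f(x) = x(π − x), the constant a and the truncation order are assumptions):

    import numpy as np

    a_const, N = 1.0, 40
    xs = np.linspace(0, np.pi, 2001)
    f = xs * (np.pi - xs)                         # assumed initial temperature
    b = [2.0 / np.pi * np.trapz(f * np.sin(n * xs), xs) for n in range(1, N + 1)]

    def w(x, t):
        """Truncated series (7.31)."""
        return sum(b[n - 1] * np.sin(n * x) * np.exp(-n**2 * a_const**2 * t)
                   for n in range(1, N + 1))

    for t in (0.0, 0.5, 2.0):
        print(t, w(np.pi / 2, t))                 # midpoint temperature decreases with time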

The Laplace equation


The steady state temperature w(x, y) (independent of time) in the two dimensional xy-plane is governed by

∂²w/∂x² + ∂²w/∂y² = 0,   (7.32)

known as the Laplace equation. With the transformations x = r cos θ and y = r sin θ, the polar form of (7.32) reads as

∂²w/∂r² + (1/r) ∂w/∂r + (1/r²) ∂²w/∂θ² = 0.   (7.33)

For,

∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) = cos θ ∂w/∂x + sin θ ∂w/∂y,

∂²w/∂r² = cos²θ ∂²w/∂x² + 2 sin θ cos θ ∂²w/∂x∂y + sin²θ ∂²w/∂y²,

∂w/∂θ = (∂w/∂x)(∂x/∂θ) + (∂w/∂y)(∂y/∂θ) = −r sin θ ∂w/∂x + r cos θ ∂w/∂y,

∂²w/∂θ² = r² sin²θ ∂²w/∂x² − 2r² sin θ cos θ ∂²w/∂x∂y + r² cos²θ ∂²w/∂y² − r cos θ ∂w/∂x − r sin θ ∂w/∂y.

Substituting the values of ∂²w/∂r², ∂w/∂r and ∂²w/∂θ² into (7.33), we get (7.32).
Suppose the steady state temperature is given on the boundary of the unit circle r = 1, say w(1, θ) = f(θ). Then the problem of finding the temperature at any point (r, θ) inside the circle is a Dirichlet problem for the circle. Now we shall solve (7.33) subject to the condition

w(1, θ) = f(θ).   (7.34)

Assume that (7.33) possesses a solution of the form

w(r, θ) = u(r)v(θ),   (7.35)


where u(r) and v(θ) are to be determined. Plugging (7.35) into (7.33), we get

[r²u″(r) + ru′(r)]/u(r) = −v″(θ)/v(θ) = λ,   (7.36)

where λ is some constant. This yields the following two equations:

v″(θ) + λv(θ) = 0,   (7.37)

r²u″(r) + ru′(r) − λu(r) = 0.   (7.38)

The non-trivial solution of (7.37) reads as

vₙ(θ) = aₙ cos nθ + bₙ sin nθ,   (7.39)

where λ = n², and aₙ, bₙ are constants such that both the terms on the right hand side of (7.39) do not vanish together for n = 1, 2, 3, .... Let a₀/2 be the solution corresponding to n = 0.

Notice that (7.38) is a Cauchy-Euler DE with λ = n². So it transforms to

d²u/dz² − n²u = 0,   (7.40)

where r = e^z. Solutions of this equation are

u(z) = c₁ + c₂z for n = 0,   and   u(z) = c₁e^{nz} + c₂e^{−nz} for n = 1, 2, 3, ...,

where c₁ and c₂ are constants. In terms of r, the solutions read as

u(r) = c₁ + c₂ ln r for n = 0,   and   u(r) = c₁rⁿ + c₂r^{−n} for n = 1, 2, 3, ....

Since we are interested in solutions which are well defined inside the circle r = 1, we discard the term carrying ln r in the first solution, because ln r is not finite at r = 0. Similarly, the second solution is acceptable after discarding the term carrying r^{−n}. Thus, the solutions of our interest are

uₙ(r) = rⁿ,   n = 1, 2, 3, ....   (7.41)

In view of (7.35), (7.39) and (7.41),

wₙ(r, θ) = uₙ(r)vₙ(θ) = rⁿ(aₙ cos nθ + bₙ sin nθ)   (7.42)

is a solution of (7.33) for n = 1, 2, .... It follows that

∑_{n=1}^∞ wₙ(r, θ) = ∑_{n=1}^∞ rⁿ(aₙ cos nθ + bₙ sin nθ)   (7.43)

is also a solution of (7.33). Since a₀/2 is also a solution of (7.33),

w(r, θ) = a₀/2 + ∑_{n=1}^∞ rⁿ(aₙ cos nθ + bₙ sin nθ)   (7.44)

is also a solution of (7.33).


To determine a₀, aₙ and bₙ, we use the condition w(1, θ) = f(θ) given in (7.34). Then (7.44) gives

f(θ) = a₀/2 + ∑_{n=1}^∞ (aₙ cos nθ + bₙ sin nθ).   (7.45)

Notice that the series on the right hand side in (7.45) is the Fourier series of f(θ) in the interval [−π, π]. So we have

aₙ = (1/π) ∫_{−π}^{π} f(θ) cos nθ dθ   (n = 0, 1, 2, ...),   (7.46)

bₙ = (1/π) ∫_{−π}^{π} f(θ) sin nθ dθ   (n = 1, 2, 3, ...).   (7.47)

Thus, (7.44) with an from (7.46) and bn from (7.47) is the solution of (7.33) subject to the
condition (7.34). Thus, the Dirichlet problem for the unit circle is solved.
Now substituting aₙ from (7.46) and bₙ from (7.47) into (7.44), we get

w(r, θ) = (1/π) ∫_{−π}^{π} f(φ) [1/2 + ∑_{n=1}^∞ rⁿ cos n(θ − φ)] dφ.   (7.48)

Let α = θ − φ and z = re^{iα} = r(cos α + i sin α). Then we have

1/2 + ∑_{n=1}^∞ rⁿ cos nα = Re[1/2 + ∑_{n=1}^∞ zⁿ]
  = Re[1/2 + z/(1 − z)]
  = Re[(1 + z)/(2(1 − z))]
  = Re[(1 + z)(1 − z̄)/(2|1 − z|²)]
  = (1 − |z|²)/(2|1 − z|²)
  = (1 − r²)/(2(1 − 2r cos α + r²)).


So (7.48) becomes

w(r, θ) = (1/(2π)) ∫_{−π}^{π} [(1 − r²)/(1 − 2r cos(θ − φ) + r²)] f(φ) dφ,   (7.49)

known as the Poisson integral. It expresses the value of the harmonic function w(r, θ) at all points inside the circle r = 1 in terms of its values on the circumference of the circle. In particular, at r = 0, we have

w(0, θ) = (1/(2π)) ∫_{−π}^{π} f(φ) dφ,   (7.50)
which shows that the value of the harmonic function w at the center of the circle is the average of
its values on the circumference.

Sturm-Liouville Boundary Value Problem (SLBVP)

Let p(x) ≠ 0, p′(x), q(x) and r(x) be continuous functions on [a, b]. Then the DE

d/dx [p(x)y′] + [λq(x) + r(x)]y = 0,   (7.51)

with the boundary conditions

c₁y(a) + c₂y′(a) = 0,   (7.52)

and

d₁y(b) + d₂y′(b) = 0,   (7.53)

where neither both c₁ and c₂ nor both d₁ and d₂ are zero, is called a SLBVP. We see that y = 0 is a trivial solution of (7.51). The values of λ for which (7.51) has non-trivial solutions are known as its eigen values, while the corresponding non-trivial solutions are known as eigen functions.
Ex. 7.4.1. Find the eigen values and eigen functions of the SLBVP

y″ + λy = 0,   y(0) = 0,   y(π) = 0.

Sol. 7.4.1. The eigen values are λ = n², where n is a positive integer. The corresponding eigen functions are yₙ = sin nx.
Eigen functions are yn = sin nx.

Orthogonality of eigen functions


Consider the SLBVP given by (7.51), (7.52) and (7.53). If y_m and y_n are any two distinct eigen functions corresponding to the eigen values λ_m and λ_n, then

∫_a^b q(x)y_m(x)y_n(x) dx = 0.

In other words, any two distinct eigen functions y_m and y_n of the SLBVP are orthogonal with respect to the weight function q(x). Let us prove this result.


Since y_m and y_n are eigen functions corresponding to the eigen values λ_m and λ_n, we have

(p y_m′)′ + (λ_m q + r)y_m = 0   (7.54)

and

(p y_n′)′ + (λ_n q + r)y_n = 0.   (7.55)

Multiplying (7.54) by y_n and (7.55) by y_m, and subtracting, we get

y_n(p y_m′)′ − y_m(p y_n′)′ + (λ_m − λ_n)q y_m y_n = 0.   (7.56)

Moving the first two terms to the right hand side, and then integrating from a to b, we have

(λ_m − λ_n) ∫_a^b q y_m y_n dx = ∫_a^b y_m(p y_n′)′ dx − ∫_a^b y_n(p y_m′)′ dx
  = [y_m(p y_n′)]_a^b − ∫_a^b y_m′(p y_n′) dx − [y_n(p y_m′)]_a^b + ∫_a^b y_n′(p y_m′) dx
  = p(b)[y_m(b)y_n′(b) − y_n(b)y_m′(b)] − p(a)[y_m(a)y_n′(a) − y_n(a)y_m′(a)]
  = p(b)W(b) − p(a)W(a),

where W(x) = y_m(x)y_n′(x) − y_n(x)y_m′(x) is the Wronskian of y_m and y_n. Thus

(λ_m − λ_n) ∫_a^b q y_m y_n dx = p(b)W(b) − p(a)W(a).   (7.57)

Notice that the eigen functions y_m and y_n are particular solutions of the SLBVP given by (7.51), (7.52) and (7.53). So we have

c₁y_m(a) + c₂y_m′(a) = 0,   (7.58)
c₁y_n(a) + c₂y_n′(a) = 0,   (7.59)
d₁y_m(b) + d₂y_m′(b) = 0,   (7.60)
d₁y_n(b) + d₂y_n′(b) = 0.   (7.61)

By hypothesis, c₁ and c₂ are not both zero. So the homogeneous system given by (7.58) and (7.59) has a non-trivial solution. It follows that y_m(a)y_n′(a) − y_n(a)y_m′(a) = W(a) must be zero. Likewise, (7.60) and (7.61) lead to y_m(b)y_n′(b) − y_n(b)y_m′(b) = W(b) = 0. So (7.57) becomes

(λ_m − λ_n) ∫_a^b q y_m y_n dx = 0.   (7.62)

Also, λ_m ≠ λ_n. So we get

∫_a^b q y_m y_n dx = 0,   (7.63)

the desired result.


Remark 7.4.1. The orthogonality property of eigen functions can be used to write a given function
as the series expansion of eigen functions.
Remark 7.4.2. A DE in the form

d/dx [p(x)y′] + [λq(x) + r(x)]y = 0

is said to be in self-adjoint form.

Chapter 8
Some Special Functions
Legendre Polynomials
A DE of the form

(1 − x²)y″ − 2xy′ + n(n + 1)y = 0,   (8.1)

where n is a constant, is called Legendre's equation. We observe that x = 0 is an ordinary point of (8.1). So there exists a series solution of the form

y = ∑_{k=0}^∞ aₖxᵏ = a₀ + a₁x + a₂x² + a₃x³ + ...   (8.2)

Substituting (8.2) into (8.1), we obtain

∑_{k=2}^∞ aₖk(k − 1)x^{k−2} + ∑_{k=0}^∞ aₖ(n − k)(n + k + 1)xᵏ = 0.   (8.3)

Comparing coefficients of x^{k−2}, we obtain

aₖk(k − 1) + aₖ₋₂(n − k + 2)(n + k − 1) = 0   or   aₖ = −[(n − k + 2)(n + k − 1)]/[k(k − 1)] aₖ₋₂.

Thus

a₂ = −[n(n + 1)/2!] a₀,   a₃ = −[(n − 1)(n + 2)/3!] a₁,
a₄ = [(n − 2)n(n + 1)(n + 3)/4!] a₀,   a₅ = [(n − 3)(n − 1)(n + 2)(n + 4)/5!] a₁, ...
Substituting these values into (8.2), we obtain the general solution of (8.1) as y = c₁y₁ + c₂y₂, where

y₁ = a₀[1 − (n(n + 1)/2!)x² + ((n − 2)n(n + 1)(n + 3)/4!)x⁴ − ...],

y₂ = a₁[x − ((n − 1)(n + 2)/3!)x³ + ((n − 3)(n − 1)(n + 2)(n + 4)/5!)x⁵ − ...].

66

Mathematics-III

Dr. Suresh Kumar, BITS Pilani

67

We observe that y1 and y2 are LI solutions of the Legendre equation (8.1), and these are analytic
in the range 1 < x < 1. However, the solutions most useful in the applications are those bounded
near x = 1. Notice that x = 1 is a regular singular point of the Legendre equation (8.1). We use
the transformation t = (1 x)/2 so that x = 1 corresponds to t = 0, and (8.1) transforms to the
hypergeometric DE
t(1 t)y 00 + (1 2t)y 0 + n(n + 1)y = 0,

(8.4)

where the prime denote derivative with respect to t. Here, a = n, b = n + 1 and c = 1. So the
solution of (8.4) in the neighbourhood of t = 0 is given by
y1 = F (n, n + 1, 1, t).

(8.5)

The other LI solution can be found as


Z
1 R P dt
dt = y1 (ln t + a1 t + ........).
e
y2 = y1
y12

(8.6)

However, this solution is not bounded near t = 0. So any solution of (8.4) bounded near t = 0 is
a constant multiple of y1 . Consequently, the constant multiples of F (n, n + 1, 1, (1 x)/2) are
the solutions of (8.1), which are bounded near x = 1.
If n is a non-negative integer, then F (n, n + 1, 1, (1 x)/2) defines a polynomial of degree n
known as Legendre polynomial, denoted by Pn (x). Therefore,
Pn (x) = F (n, n+1, 1, (1x)/2) = 1+

n(n + 1)
n(n 1)(n + 1)(n + 2)
(2n)!
(x1)+
(x1)2 +....+
(x1)n .
2
2
2
(1!) 2
(2!) 2
(n!)2 2n

Notice that Pn (1) = 1 for all n. Next, after a sequence of algebraic manipulations, we can obtain
Pn (x) =

1 dn
[(x2 1)n ],
2n n! dxn

known as Rodrigues formula. The following theorem provides the alternative approach to obtain
the Rodrigues formula.
Theorem 8.1.1. (Rodrigues Formula) Prove that Pₙ(x) = (1/(2ⁿ n!)) dⁿ/dxⁿ [(x² − 1)ⁿ].

Proof. Let v = (x² − 1)ⁿ. Then we have

v₁ = 2nx(x² − 1)^{n−1},   where v₁ = dv/dx,

so that

(x² − 1)v₁ = 2nx(x² − 1)ⁿ = 2nxv,   that is,   (1 − x²)v₁ + 2nxv = 0.

Differentiating this n + 1 times with respect to x using the Leibnitz theorem, we get

(1 − x²)vₙ₊₂ + (n + 1)(−2x)vₙ₊₁ + [(n + 1)n/2!](−2)vₙ + 2n[xvₙ₊₁ + (n + 1)vₙ] = 0,

that is,

(1 − x²)vₙ″ − 2xvₙ′ + n(n + 1)vₙ = 0.

This shows that cvₙ (c an arbitrary constant) is a solution of the Legendre equation (8.1). Also, cvₙ is a polynomial of degree n. But we know that the nth degree polynomial Pₙ(x) is a solution of the Legendre equation. It follows that

Pₙ(x) = cvₙ = c dⁿ/dxⁿ [(x² − 1)ⁿ].   (8.7)

To find c, we put x = 1 into (8.7) to get

Pₙ(1) = c [dⁿ/dxⁿ (x² − 1)ⁿ]_{x=1} = c [dⁿ/dxⁿ ((x − 1)ⁿ(x + 1)ⁿ)]_{x=1} = c[n!(x + 1)ⁿ + terms containing the factor (x − 1)]_{x=1},

so that

1 = c·n!·2ⁿ   or   c = 1/(n!·2ⁿ).

Thus, (8.7) becomes

Pₙ(x) = (1/(2ⁿ n!)) dⁿ/dxⁿ [(x² − 1)ⁿ].
This completes the proof.


Remark 8.1.1. Using the Rodrigues formula, we get

P₀(x) = 1, P₁(x) = x, P₂(x) = (1/2)(3x² − 1), P₃(x) = (1/2)(5x³ − 3x), P₄(x) = (35/8)x⁴ − (15/4)x² + 3/8, etc.

Ex. 8.1.1. Express the polynomial x⁴ + 3x³ − x² + 5x − 2 in terms of Legendre polynomials.

Sol. 8.1.1. Since P₄(x) = (35/8)x⁴ − (15/4)x² + 3/8, we have x⁴ = (8/35)P₄(x) + (6/7)x² − 3/35. Similarly, x³ = (2/5)P₃(x) + (3/5)x, x² = (2/3)P₂(x) + 1/3, x = P₁(x), 1 = P₀(x). Using all these, we get

x⁴ + 3x³ − x² + 5x − 2 = (8/35)P₄(x) + (6/5)P₃(x) − (2/21)P₂(x) + (34/5)P₁(x) − (224/105)P₀(x).
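The change of basis can be double-checked with NumPy's Legendre utilities. A brief sketch (assumes NumPy; the coefficient order follows NumPy's increasing-power convention):

    from numpy.polynomial import legendre

    p = [-2, 5, -1, 3, 1]                        # -2 + 5x - x^2 + 3x^3 + x^4
    c = legendre.poly2leg(p)                     # coefficients of P_0, P_1, ..., P_4
    print(c)
    print([-224/105, 34/5, -2/21, 6/5, 8/35])    # values obtained above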

Orthogonality of Legendre Polynomials

Ex. 8.1.2. Show that ∫_{−1}^{1} Pₘ(x)Pₙ(x) dx = 0 for m ≠ n.

Sol. 8.1.2. We know that y = Pₘ(x) is a solution of

(1 − x²)y″ − 2xy′ + m(m + 1)y = 0,   (8.8)

and z = Pₙ(x) is a solution of

(1 − x²)z″ − 2xz′ + n(n + 1)z = 0.   (8.9)

Multiplying (8.8) by z and (8.9) by y, and subtracting, we get

(1 − x²)(y″z − yz″) − 2x(y′z − yz′) + [m(m + 1) − n(n + 1)]yz = 0,

that is,

d/dx [(1 − x²)(y′z − yz′)] + (m − n)(m + n + 1)yz = 0.   (8.10)

Integrating (8.10) from −1 to 1, we have

(m − n)(m + n + 1) ∫_{−1}^{1} yz dx = 0.

Also, m ≠ n. So it gives

∫_{−1}^{1} Pₘ(x)Pₙ(x) dx = 0.

This is known as the orthogonality property of Legendre polynomials.

Ex. 8.1.3. Show that ∫_{−1}^{1} Pₙ²(x) dx = 2/(2n + 1).

Sol. 8.1.3. The Rodrigues formula is

Pₙ(x) = (1/(2ⁿ n!)) dⁿ/dxⁿ [(x² − 1)ⁿ] = (1/(2ⁿ n!)) Dⁿ(x² − 1)ⁿ.

Therefore, we have

(2ⁿ n!)² ∫_{−1}^{1} Pₙ²(x) dx = ∫_{−1}^{1} Dⁿ(x² − 1)ⁿ Dⁿ(x² − 1)ⁿ dx
  = [Dⁿ(x² − 1)ⁿ D^{n−1}(x² − 1)ⁿ]_{x=−1}^{x=1} − ∫_{−1}^{1} D^{n+1}(x² − 1)ⁿ D^{n−1}(x² − 1)ⁿ dx
  = 0 − ∫_{−1}^{1} D^{n+1}(x² − 1)ⁿ D^{n−1}(x² − 1)ⁿ dx
  = (−1)ⁿ ∫_{−1}^{1} D^{2n}(x² − 1)ⁿ (x² − 1)ⁿ dx   (integrating by parts (n − 1) times more)
  = (−1)ⁿ (2n)! ∫_{−1}^{1} (x² − 1)ⁿ dx   (put x = sin θ)
  = 2(2n)! ∫₀^{π/2} cos^{2n+1}θ dθ
  = 2(2n)! · [2n(2n − 2)···4·2] / [(2n + 1)(2n − 1)···3·1]
  = 2(2n)! · [2n(2n − 2)···4·2]² / (2n + 1)!
  = [2/(2n + 1)] (2ⁿ n!)².

Hence ∫_{−1}^{1} Pₙ²(x) dx = 2/(2n + 1).
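A quick numerical confirmation of the normalization integral (a sketch assuming NumPy and SciPy are available):

    import numpy as np
    from numpy.polynomial.legendre import legval
    from scipy.integrate import quad

    for n in range(6):
        c = np.zeros(n + 1); c[n] = 1.0            # coefficient vector selecting P_n
        val, _ = quad(lambda x: legval(x, c)**2, -1.0, 1.0)
        print(n, val, 2.0 / (2 * n + 1))           # the two columns agree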


Legendre Series

Let f(x) be a function defined from x = −1 to x = 1. Then we can write

f(x) = ∑_{n=0}^∞ cₙPₙ(x),   (8.11)

where the cₙ's are constants to be determined. Multiplying both sides of (8.11) by Pₙ(x) and integrating from −1 to 1, we get

∫_{−1}^{1} f(x)Pₙ(x) dx = cₙ ∫_{−1}^{1} Pₙ²(x) dx = cₙ · 2/(2n + 1),

so that

cₙ = [(2n + 1)/2] ∫_{−1}^{1} f(x)Pₙ(x) dx.

Using the values of cₙ in (8.11), we get the expansion of f(x) in terms of Legendre polynomials, known as the Legendre series of f(x).
Ex. 8.1.4. If f(x) = x for 0 < x < 1 and f(x) = 0 otherwise, then show that f(x) = (1/4)P₀(x) + (1/2)P₁(x) + (5/16)P₂(x) + ...

Sol. 8.1.4. c₀ = (1/2) ∫_{−1}^{1} f(x)P₀(x) dx = (1/2) ∫₀^1 x·1 dx = 1/4, etc.
Ex. 8.1.5. Prove that (1 − 2xt + t²)^{−1/2} = ∑_{n=0}^∞ tⁿPₙ(x), and hence prove the recurrence relation nPₙ(x) = (2n − 1)xPₙ₋₁(x) − (n − 1)Pₙ₋₂(x).

Sol. 8.1.5. Please try yourself.

Note: The function (1 − 2xt + t²)^{−1/2} is called the generating function of the Legendre polynomials. Note that the Legendre polynomials Pₙ(x) appear as the coefficients of tⁿ in the expansion of the function (1 − 2xt + t²)^{−1/2}.

Gamma Function

The gamma function is defined as

Γ(n) = ∫₀^∞ e^{−x} x^{n−1} dx,   (n > 0).   (8.12)

The condition n > 0 is necessary in order to guarantee the convergence of the integral. Note that Γ(1) = ∫₀^∞ e^{−x} dx = 1.

Next, we have

Γ(n + 1) = ∫₀^∞ e^{−x} xⁿ dx = [xⁿ·(−e^{−x})]₀^∞ − ∫₀^∞ n x^{n−1}(−e^{−x}) dx = n ∫₀^∞ e^{−x} x^{n−1} dx,

that is,

Γ(n + 1) = nΓ(n).

This is the recurrence relation for the gamma function. Using this relation recursively, we have

Γ(2) = 1·Γ(1) = 1,
Γ(3) = 2·Γ(2) = 2·1 = 2!,
Γ(4) = 3·Γ(3) = 3·2! = 3!,
......
Γ(n + 1) = n·Γ(n) = n·(n − 1)! = n!.
Thus, Γ(n) takes positive integer (factorial) values for positive integer values of n. It can be proved that Γ(1/2) = √π. For,

Γ(1/2) = ∫₀^∞ e^{−t} t^{−1/2} dt = 2 ∫₀^∞ e^{−x²} dx,   where t = x².

Therefore,

[Γ(1/2)]²/4 = (∫₀^∞ e^{−x²} dx)(∫₀^∞ e^{−y²} dy) = ∫₀^∞∫₀^∞ e^{−(x²+y²)} dx dy = ∫₀^{π/2}∫₀^∞ e^{−r²} r dr dθ = π/4,

using x = r cos θ, y = r sin θ; hence Γ(1/2) = √π.

Having known the precise value of Γ(1/2), we can calculate the values of the gamma function at positive fractions with denominator 2. For instance,

Γ(7/2) = (5/2)(3/2)(1/2)Γ(1/2) = (5/2)(3/2)(1/2)√π.
2 2 2
For values of gamma function at positive fractions with denominator different from 2, we have to
rely upon the numerically approximated value of the integral arising in gamma function.
Note that Γ(n) given by (8.12) is not defined for n ≤ 0. We extend the definition of the gamma function by the relation

Γ(n) = Γ(n + 1)/n.   (8.13)

Then Γ(n) is defined for all n except when n is a non-positive integer. If we agree that Γ(n) = ∞ for non-positive integer values of n, then 1/Γ(n) is defined for all n. Such an agreement is useful while dealing with Bessel functions. The gamma function is, thus, defined as

Γ(n) = ∫₀^∞ e^{−x} x^{n−1} dx   for n > 0,
Γ(n) = Γ(n + 1)/n   for n < 0 but not an integer,
Γ(n) = ∞   for n = 0, −1, −2, ....


Note that the gamma function generalizes the concept of factorial from non-negative integers to arbitrary real numbers (other than the non-positive integers) via the formula
\[n! = \Gamma(n + 1).\]

Bessel Functions
The DE
\[x^2y'' + xy' + (x^2 - p^2)y = 0, \tag{8.14}\]
where p is a non-negative constant, is called Bessel's DE. We see that x = 0 is a regular singular point of (8.14). So there exists at least one Frobenius series solution of the form
\[y = \sum_{n=0}^{\infty} a_n x^{n+r}, \quad (a_0 \neq 0). \tag{8.15}\]
Using (8.15) in (8.14), we get
\[\sum_{n=0}^{\infty} a_n[(n + r)^2 - p^2]x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r+2} = 0. \tag{8.16}\]

Equating to 0 the coefficient of x^r, the lowest degree term in x, we obtain
\[a_0(r^2 - p^2) = 0 \quad\text{or}\quad r^2 - p^2 = 0.\]
Therefore, the roots of the indicial equation are r = p, -p.
Next, equating to 0 the coefficient of x^{r+1}, we find
\[a_1[(r + 1)^2 - p^2] = 0 \quad\text{or}\quad a_1 = 0 \text{ for } r = p.\]
Now equating to 0 the coefficient of x^{n+r}, we have the recurrence relation
\[a_n = -\frac{a_{n-2}}{(n + r)^2 - p^2}, \tag{8.17}\]
where n = 2, 3, 4, ....
For r = p, we get the solution in the form
\[y = a_0 x^p \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{2n}}{n!(p + 1)(p + 2)\cdots(p + n)}. \tag{8.18}\]
The Bessel function of the first kind of order p, denoted by J_p(x), is defined by putting \(a_0 = \dfrac{1}{2^p\,p!}\) in (8.18) so that
\[J_p(x) = \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{2n+p}}{n!\,\Gamma(n + p + 1)}, \tag{8.19}\]
which is well defined for all real values of p in accordance with the definition of the gamma function.
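The series (8.19) can be compared term by term against scipy's built-in Bessel function jv. A sketch, assuming scipy is available; the truncation level terms=30 is an arbitrary choice.

```python
from scipy.special import jv, gamma

def jp_series(p, x, terms=30):
    """Truncated series (8.19) for J_p(x); 'terms' is an arbitrary cut-off."""
    return sum((-1)**n * (x/2)**(2*n + p) / (gamma(n + 1) * gamma(n + p + 1))
               for n in range(terms))

for p, x in [(0, 1.0), (1, 2.5), (0.5, 3.0)]:
    print(p, x, jp_series(p, x), jv(p, x))   # the two values agree to many digits
```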


Figure 8.1: Plots of J0(x) (Blue curve) and J1(x) (Red curve).
From the applications point of view, the most useful Bessel functions are those of order 0 and 1, given by
\[J_0(x) = 1 - \frac{x^2}{2^2} + \frac{x^4}{2^2\cdot 4^2} - \frac{x^6}{2^2\cdot 4^2\cdot 6^2} + \cdots,\]
\[J_1(x) = \frac{x}{2} - \frac{1}{1!\,2!}\left(\frac{x}{2}\right)^3 + \frac{1}{2!\,3!}\left(\frac{x}{2}\right)^5 - \cdots.\]
Plots of J0(x) (blue curve) and J1(x) (red curve) are shown in Figure 8.1. It may be seen that J0(x) and J1(x) vanish alternately, and have infinitely many zeros on the positive x-axis, as expected, since J0(x) and J1(x) are oscillatory solutions of Bessel's DE (8.14) with p = 0 and p = 1, respectively. Later, we shall show that J0'(x) = -J1(x). Thus, J0(x) and J1(x) behave just like cos x and sin x. This analogy may also be observed from the fact that the normal form of Bessel's DE (8.14), given by
\[u'' + \left(1 + \frac{1 - 4p^2}{4x^2}\right)u = 0,\]
behaves as
\[u'' + u = 0\]
for large values of x, with solutions cos x and sin x. It means J0(x) and J1(x) behave more closely like cos x and sin x for larger values of x.

Second solution of Bessel's DE
To obtain a second solution, it is natural to try the second root r = -p of the indicial equation. We assume that p is not an integer, for otherwise the difference of the indicial-equation roots p and -p would be the integer 2p. For r = -p, the equation a1[(r + 1)^2 - p^2] = 0 becomes a1(1 - 2p) = 0, which leaves a1 arbitrary for p = 1/2. So there is no compulsion to choose a1 = 0. However, we fix a1 = 0 since, after all, we are interested in a particular solution. Also, for r = -p, the recurrence relation (8.17) reduces to
\[a_n = -\frac{a_{n-2}}{(n - p)^2 - p^2} = -\frac{a_{n-2}}{n(n - 2p)},\]
where n = 2, 3, 4, ....
For n = 3, we get 3(3 - 2p)a3 = -a1 = 0. This leaves a3 arbitrary for p = 3/2. We choose a3 = 0. Likewise, we choose a5 = 0, a7 = 0, ... for the sake of a particular solution, and thus obtain the following particular solution of (8.14):
\[J_{-p}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{2n-p}}{n!\,\Gamma(n - p + 1)}, \tag{8.20}\]
which is the same as if we replace p by -p in (8.19).


Notice that J_p(x) and J_{-p}(x) are LI since J_p(x) is bounded near x = 0 but J_{-p}(x) is not so. Thus, when p is not an integer, the general solution of Bessel's equation (8.14) is
\[y = c_1 J_p(x) + c_2 J_{-p}(x). \tag{8.21}\]

Now let us see what happens when p is a non-negative integer, say m. We have
\[\begin{aligned}
J_{-m}(x) &= \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{2n-m}}{n!\,\Gamma(n - m + 1)}\\
&= \sum_{n=m}^{\infty} \frac{(-1)^n (x/2)^{2n-m}}{n!\,(n - m)!} \qquad \left(\because\ \frac{1}{\Gamma(n - m + 1)} = 0,\ n = 0, 1, 2, \ldots, m - 1\right)\\
&= \sum_{n=0}^{\infty} \frac{(-1)^{n+m} (x/2)^{2(n+m)-m}}{(n + m)!\,n!} \qquad \text{(replacing the dummy variable n by n + m)}\\
&= (-1)^m \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{2n+m}}{n!\,(m + n)!}\\
&= (-1)^m J_m(x).
\end{aligned}\]
This shows that J_p(x) and J_{-p}(x) are not LI when p is an integer.
When p is not an integer, any function of the form (8.21) with c2 ≠ 0 is a Bessel function of the second kind. The standard Bessel function of the second kind is defined as
\[Y_p(x) = \frac{J_p(x)\cos p\pi - J_{-p}(x)}{\sin p\pi}. \tag{8.22}\]

One can write (8.21) in the equivalent form
\[y = c_1 J_p(x) + c_2 Y_p(x), \tag{8.23}\]
which is the general solution of (8.14) when p is not an integer. One may observe that Y_p(x) is not defined when p is an integer, say m. However, it can be shown that
\[Y_m(x) = \lim_{p\to m} Y_p(x)\]
exists, and it is taken as the Bessel function of the second kind. Thus, it follows that (8.23) is the general solution of Bessel's equation (8.14) in all cases. It is found that Y_p(x) is not bounded near x = 0 for p ≥ 0. Accordingly, if we are interested in solutions of Bessel's equation near x = 0, which is often the case in applications, then we must take c2 = 0 in (8.23).


Properties of Bessel Functions
It is easy to prove the following:
(1) \(\dfrac{d}{dx}\left[x^p J_p(x)\right] = x^p J_{p-1}(x).\)
(2) \(\dfrac{d}{dx}\left[x^{-p} J_p(x)\right] = -x^{-p} J_{p+1}(x).\)
(3) \(J_p'(x) = \dfrac{1}{2}\left[J_{p-1}(x) - J_{p+1}(x)\right].\)
(4) \(J_{p+1}(x) = \dfrac{2p}{x}J_p(x) - J_{p-1}(x).\)
From (1), we have \(\int x^p J_{p-1}(x)\,dx = x^p J_p(x) + C.\)
Similarly, (2) gives \(\int x^{-p} J_{p+1}(x)\,dx = -x^{-p} J_p(x) + C.\)
Also, notice that (4) is the recurrence relation for Bessel functions. By the definition of the Bessel function, it can be shown that
\[J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin x, \qquad J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\cos x.\]
So by property (4),
\[J_{3/2}(x) = \frac{1}{x}J_{1/2}(x) - J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\left(\frac{\sin x}{x} - \cos x\right).\]
Again, by property (4),
\[J_{-3/2}(x) = -\frac{1}{x}J_{-1/2}(x) - J_{1/2}(x) = -\sqrt{\frac{2}{\pi x}}\left(\frac{\cos x}{x} + \sin x\right).\]
Thus, every Bessel function \(J_{m+\frac{1}{2}}(x)\), where m is any integer, is elementary as it is expressible in terms of elementary functions.
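The elementary forms of J_{1/2} and J_{3/2} are easy to confirm numerically. A sketch, assuming numpy/scipy are available.

```python
import numpy as np
from scipy.special import jv

x = np.linspace(0.5, 10.0, 20)
print(np.allclose(jv(0.5, x), np.sqrt(2/(np.pi*x)) * np.sin(x)))                    # True
print(np.allclose(jv(1.5, x), np.sqrt(2/(np.pi*x)) * (np.sin(x)/x - np.cos(x))))    # True
```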

Orthogonal properties of Bessel functions
If \(\lambda_m\) and \(\lambda_n\) are positive zeros of J_p(x), then
\[\int_{0}^{1} xJ_p(\lambda_m x)J_p(\lambda_n x)\,dx = \begin{cases} 0, & m \neq n,\\[1mm] \dfrac{1}{2}J_{p+1}^2(\lambda_m), & m = n. \end{cases}\]

Proof. Since y = J_p(x) is a solution of
\[y'' + \frac{1}{x}y' + \left(1 - \frac{p^2}{x^2}\right)y = 0,\]
it follows that \(u(x) = J_p(\lambda_m x)\) and \(v(x) = J_p(\lambda_n x)\) satisfy the equations
\[u'' + \frac{1}{x}u' + \left(\lambda_m^2 - \frac{p^2}{x^2}\right)u = 0, \tag{8.24}\]
\[v'' + \frac{1}{x}v' + \left(\lambda_n^2 - \frac{p^2}{x^2}\right)v = 0. \tag{8.25}\]
Multiplying (8.24) by v and (8.25) by u, and subtracting the resulting equations, we obtain
\[\frac{d}{dx}(u'v - v'u) + \frac{1}{x}(u'v - v'u) = (\lambda_n^2 - \lambda_m^2)uv.\]
After multiplication by x, it becomes
\[\frac{d}{dx}\left[x(u'v - v'u)\right] = (\lambda_n^2 - \lambda_m^2)xuv.\]
Now, integrating with respect to x from 0 to 1, we have
\[(\lambda_n^2 - \lambda_m^2)\int_{0}^{1} xuv\,dx = \left[x(u'v - v'u)\right]_{0}^{1} = 0,\]
since \(u(1) = J_p(\lambda_m) = 0\) and \(v(1) = J_p(\lambda_n) = 0\).
\[\therefore\quad \int_{0}^{1} xJ_p(\lambda_m x)J_p(\lambda_n x)\,dx = 0, \quad (m \neq n).\]
Next, we consider the case m = n. Multiplying (8.24) by \(2x^2u'\), we get
\[2x^2u'u'' + 2xu'^2 + 2\lambda_m^2 x^2uu' - 2p^2uu' = 0.\]
\[\implies\quad \frac{d}{dx}\left[x^2u'^2 + \lambda_m^2 x^2u^2 - p^2u^2\right] = 2\lambda_m^2 xu^2.\]
Integrating from 0 to 1 with respect to x, we get
\[2\lambda_m^2\int_{0}^{1} xu^2\,dx = \left[x^2u'^2 + \lambda_m^2 x^2u^2 - p^2u^2\right]_{0}^{1} = \lambda_m^2 J_p'^2(\lambda_m) + (\lambda_m^2 - p^2)J_p^2(\lambda_m) = \lambda_m^2 J_p'^2(\lambda_m),\]
since \(J_p(\lambda_m) = 0\). (Notice that u(0) = J_p(0) = 0 for p > 0. So \(p^2u^2(0) = 0\) for p ≥ 0.)
\[\therefore\quad \int_{0}^{1} xJ_p^2(\lambda_m x)\,dx = \frac{1}{2}J_p'^2(\lambda_m) = \frac{1}{2}J_{p+1}^2(\lambda_m).\]
For, \(\dfrac{d}{dx}\left[x^{-p}J_p(x)\right] = -x^{-p}J_{p+1}(x)\) leads to
\[J_p'(x) = \frac{p}{x}J_p(x) - J_{p+1}(x),\]
\[\implies\quad J_p'(\lambda_m) = \frac{p}{\lambda_m}J_p(\lambda_m) - J_{p+1}(\lambda_m) = -J_{p+1}(\lambda_m).\]


Fourier-Bessel Series
In mathematical physics, it is often necessary to expand a given function in terms of Bessel functions. The simplest and most useful expansions are of the form
\[f(x) = \sum_{n=1}^{\infty} a_n J_p(\lambda_n x) = a_1 J_p(\lambda_1 x) + a_2 J_p(\lambda_2 x) + \cdots, \tag{8.26}\]
where f(x) is defined on the interval 0 ≤ x ≤ 1 and the \(\lambda_n\) are the positive zeros of some fixed Bessel function J_p(x) with p ≥ 0. Now multiplying (8.26) by \(xJ_p(\lambda_n x)\) and integrating from x = 0 to x = 1, we get
\[\int_{0}^{1} xf(x)J_p(\lambda_n x)\,dx = \frac{1}{2}a_n J_{p+1}^2(\lambda_n),\]
which gives
\[a_n = \frac{2}{J_{p+1}^2(\lambda_n)}\int_{0}^{1} xf(x)J_p(\lambda_n x)\,dx.\]

Ex. 8.4.1. Express f(x) = 1 in terms of the functions \(J_0(\lambda_n x)\).

Sol. 8.4.1. We have
\[\begin{aligned}
a_n &= \frac{2}{J_1^2(\lambda_n)}\int_{0}^{1} xf(x)J_0(\lambda_n x)\,dx\\
&= \frac{2}{J_1^2(\lambda_n)\,\lambda_n}\Big[xJ_1(\lambda_n x)\Big]_{0}^{1} \qquad \left(\because\ \frac{d}{dx}\left[xJ_1(x)\right] = xJ_0(x)\right)\\
&= \frac{2}{J_1^2(\lambda_n)\,\lambda_n}\,J_1(\lambda_n)\\
&= \frac{2}{\lambda_n J_1(\lambda_n)}.
\end{aligned}\]
So the required Fourier-Bessel series is
\[1 = \sum_{n=1}^{\infty} \frac{2}{\lambda_n J_1(\lambda_n)}J_0(\lambda_n x).\]
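Truncating this series at, say, 50 terms already reproduces f(x) = 1 well inside (0, 1). A sketch, assuming scipy is available; jn_zeros supplies the zeros of J0 denoted above by lambda_n.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

lam = jn_zeros(0, 50)                      # first 50 positive zeros of J_0
x = np.array([0.2, 0.5, 0.8])
partial_sum = sum(2/(l*j1(l)) * j0(l*x) for l in lam)
print(partial_sum)                         # each entry is close to 1, as the series predicts for 0 < x < 1
```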

Convergence of Fourier-Bessel Series
Assume that f(x) and f'(x) have at most a finite number of jump discontinuities on [0, 1]. If x ∈ (0, 1), then the Bessel series converges to f(x) when x is a point of continuity of f(x), and converges to \(\frac{1}{2}[f(x-) + f(x+)]\) when x is a point of discontinuity. At x = 1, the series converges to 0 regardless of the nature of f(x) because every \(J_p(\lambda_n) = 0\). At x = 0, the series converges to 0 if p > 0 and to f(0+) if p = 0.

Chapter 9
Laplace Transforms

Definitions of Laplace and inverse Laplace transforms
Let f(x) be a function defined on a finite or infinite interval a ≤ x ≤ b. If we choose a fixed function K(p, x) of the variable x and a parameter p, then the general integral transformation is defined as
\[T[f(x)] = \int_{a}^{b} K(p, x)f(x)\,dx. \tag{9.1}\]
The function K(p, x) is called the kernel of T. In particular, if a = 0, b = ∞ and \(K(p, x) = e^{-px}\), then (9.1) is called the Laplace transform of f(x) and is denoted by L[f(x)]:
\[L[f(x)] = \int_{0}^{\infty} e^{-px}f(x)\,dx = F(p).\]
It may be noted that L is linear. For,
\[L[f(x) + g(x)] = \int_{0}^{\infty} e^{-px}[f(x) + g(x)]\,dx = \int_{0}^{\infty} e^{-px}f(x)\,dx + \int_{0}^{\infty} e^{-px}g(x)\,dx = L[f(x)] + L[g(x)].\]
Further, if L[f(x)] = F(p), then f(x) is called the inverse Laplace transform of F(p) and is denoted by \(L^{-1}[F(p)]\):
\[L^{-1}[F(p)] = f(x).\]
Remark: The Laplace transform of a(x), that is, \(\int_{0}^{\infty} e^{-px}a(x)\,dx\), is the integral analog of the power series \(\sum_{n=0}^{\infty} a(n)x^n\) with \(x = e^{-p}\).


Laplace transforms of some elementary functions
It would be useful to memorize the following formulas related to Laplace and inverse Laplace transforms of elementary functions.
(1) \(L[1] = \int_{0}^{\infty} e^{-px}\cdot 1\,dx = \dfrac{1}{p}\ (p > 0).\)  \(L^{-1}\!\left[\dfrac{1}{p}\right] = 1.\)
(2) \(L[e^{ax}] = \int_{0}^{\infty} e^{-px}e^{ax}\,dx = \dfrac{1}{p - a}\ (p > a).\)  \(L^{-1}\!\left[\dfrac{1}{p - a}\right] = e^{ax}.\)
(3) \(L[x^n] = \int_{0}^{\infty} e^{-px}x^n\,dx = \dfrac{\Gamma(n + 1)}{p^{n+1}}\ (p > 0).\)  \(L^{-1}\!\left[\dfrac{1}{p^{n+1}}\right] = \dfrac{x^n}{\Gamma(n + 1)}.\)
(4) \(L[\sin ax] = L\!\left[\dfrac{e^{iax} - e^{-iax}}{2i}\right] = \dfrac{a}{p^2 + a^2}.\)  \(L^{-1}\!\left[\dfrac{1}{p^2 + a^2}\right] = \dfrac{1}{a}\sin ax.\)
(5) \(L[\cos ax] = L\!\left[\dfrac{e^{iax} + e^{-iax}}{2}\right] = \dfrac{p}{p^2 + a^2}.\)  \(L^{-1}\!\left[\dfrac{p}{p^2 + a^2}\right] = \cos ax.\)
(6) \(L[\sinh ax] = L\!\left[\dfrac{e^{ax} - e^{-ax}}{2}\right] = \dfrac{a}{p^2 - a^2}.\)  \(L^{-1}\!\left[\dfrac{1}{p^2 - a^2}\right] = \dfrac{1}{a}\sinh ax.\)
(7) \(L[\cosh ax] = L\!\left[\dfrac{e^{ax} + e^{-ax}}{2}\right] = \dfrac{p}{p^2 - a^2}.\)  \(L^{-1}\!\left[\dfrac{p}{p^2 - a^2}\right] = \cosh ax.\)
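A few of these entries can be cross-checked with sympy's Laplace transform routines. This is only a sketch; note that sympy calls the transform variable s, where these notes use p.

```python
import sympy as sp

x, s, a = sp.symbols('x s a', positive=True)
print(sp.laplace_transform(sp.S(1), x, s, noconds=True))       # 1/s
print(sp.laplace_transform(sp.sin(a*x), x, s, noconds=True))   # a/(a**2 + s**2)
print(sp.laplace_transform(sp.cosh(a*x), x, s, noconds=True))  # s/(s**2 - a**2), formula (7)
print(sp.inverse_laplace_transform(1/(s - a), s, x))           # exp(a*x)*Heaviside(x)
```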



Ex. 9.2.1. Find \(L[\sin^2 x]\) and \(L[4\sin x\cos x + e^{-x}]\).

Sol. 9.2.1.
\[L[\sin^2 x] = L\!\left[\frac{1 - \cos 2x}{2}\right] = \frac{1}{2}\left(\frac{1}{p} - \frac{p}{p^2 + 4}\right).\]
\[L[4\sin x\cos x + e^{-x}] = L[2\sin 2x + e^{-x}] = \frac{4}{p^2 + 4} + \frac{1}{p + 1}.\]
Ex. 9.2.2. Find \(L^{-1}\!\left[\dfrac{1}{p^2 + 2}\right]\) and \(L^{-1}\!\left[\dfrac{1}{p^4 + p^2}\right]\).

Sol. 9.2.2.
\[L^{-1}\!\left[\frac{1}{p^2 + 2}\right] = \frac{1}{\sqrt{2}}\sin\sqrt{2}\,x.\]
\[L^{-1}\!\left[\frac{1}{p^4 + p^2}\right] = L^{-1}\!\left[\frac{1}{p^2} - \frac{1}{p^2 + 1}\right] = x - \sin x.\]


Sufficient conditions for the existence of Laplace transform
Let f(x) be a piecewise continuous function for x ≥ 0, and suppose there exist constants M and c such that \(|f(x)| \leq Me^{cx}\). Then L[f(x)] exists for p > c.
For, we have
\[|F(p)| = \left|\int_{0}^{\infty} e^{-px}f(x)\,dx\right| \leq \int_{0}^{\infty} e^{-px}|f(x)|\,dx \leq M\int_{0}^{\infty} e^{-(p-c)x}\,dx = \frac{M}{p - c}, \quad (p > c). \tag{9.2}\]
The above conditions are not necessary. Consider the function \(f(x) = x^{-1/2}\). This function is not piecewise continuous on [0, b] for any positive real number b since it has an infinite discontinuity at x = 0. But \(L[x^{-1/2}] = \Gamma(1/2)/p^{1/2} = \sqrt{\pi/p}\) exists for p > 0.
Further, from (9.2), we see that \(\lim_{p\to\infty} F(p) = 0\). It is true even if the function is not piecewise continuous or of exponential order. So if \(\lim_{p\to\infty}\phi(p) \neq 0\), then \(\phi(p)\) can not be the Laplace transform of any function. For example, \(L^{-1}[p]\), \(L^{-1}[\cos p]\), \(L^{-1}[\log p]\) etc. do not exist.

Some more Laplace transform formulas

Laplace transform of a function multiplied by e^{ax}
If L[f(x)] = F(p), then \(L[e^{ax}f(x)] = F(p - a)\). (Shifting formula)
For, \(L[e^{ax}f(x)] = \int_{0}^{\infty} e^{-px}e^{ax}f(x)\,dx = \int_{0}^{\infty} e^{-(p-a)x}f(x)\,dx = F(p - a).\)

Ex. 9.4.1. Use the shifting formula to evaluate \(L[e^{2x}\sin x]\).

Sol. 9.4.1. Since \(L[\sin x] = \dfrac{1}{p^2 + 1}\), so by the shifting formula
\[L[e^{2x}\sin x] = \frac{1}{(p - 2)^2 + 1}.\]

Laplace transform of derivatives of a function
If L[f(x)] = F(p), then
\[L[f'(x)] = pF(p) - f(0).\]
For, \(L[f'(x)] = \int_{0}^{\infty} e^{-px}f'(x)\,dx = \left[f(x)e^{-px}\right]_{0}^{\infty} + p\int_{0}^{\infty} e^{-px}f(x)\,dx = pF(p) - f(0).\)
Likewise, we can show that
\[L[f''(x)] = p^2F(p) - pf(0) - f'(0).\]
In general,
\[L[f^{(n)}(x)] = p^nF(p) - p^{n-1}f(0) - p^{n-2}f'(0) - \cdots - f^{(n-1)}(0).\]
Ex. 9.4.2. Find the Laplace transform of cos x considering that it is the derivative of sin x.

Sol. 9.4.2. Here f(x) = sin x and \(F(p) = L[\sin x] = \dfrac{1}{p^2 + 1}\).
\[L[\cos x] = pF(p) - f(0) = \frac{p}{p^2 + 1}.\]


Laplace transform of integral of a function
If L[f(x)] = F(p), then \(L\!\left[\int_{0}^{x} f(t)\,dt\right] = \dfrac{F(p)}{p}\).
For, let \(g(x) = \int_{0}^{x} f(t)\,dt\) so that g'(x) = f(x) and pL[g(x)] - g(0) = F(p), where g(0) = 0.
The above result is quite useful in the form
\[L^{-1}\!\left[\frac{F(p)}{p}\right] = \int_{0}^{x} f(t)\,dt.\]
Ex. 9.4.3. Find \(L^{-1}\!\left[\dfrac{1}{p(p^2 + 1)}\right]\).

Sol. 9.4.3. Since \(L^{-1}\!\left[\dfrac{1}{p^2 + 1}\right] = \sin x\), so we have
\[L^{-1}\!\left[\frac{1}{p(p^2 + 1)}\right] = \int_{0}^{x}\sin t\,dt = 1 - \cos x.\]

Laplace transform of a function multiplied by x
If L[f(x)] = F(p), then L[xf(x)] = (-1)F'(p), where the prime stands for the derivative with respect to p.
For,
\[\int_{0}^{\infty} e^{-px}f(x)\,dx = F(p).\]
Differentiating both sides with respect to p, we get
\[\int_{0}^{\infty} (-x)e^{-px}f(x)\,dx = F'(p),\]
or
\[\int_{0}^{\infty} e^{-px}[xf(x)]\,dx = (-1)F'(p).\]
Likewise, we can show that
\[L[x^2f(x)] = (-1)^2F''(p).\]
In general,
\[L[x^nf(x)] = (-1)^nF^{(n)}(p).\]
Ex. 9.4.4. Find L[x sin x].

Sol. 9.4.4. We know \(L[\sin x] = \dfrac{1}{p^2 + 1}\).
\[L[x\sin x] = (-1)\frac{d}{dp}\left(\frac{1}{p^2 + 1}\right) = \frac{2p}{(p^2 + 1)^2}.\]


Laplace transform of a function divided by x
If L[f(x)] = F(p), then \(L\!\left[\dfrac{f(x)}{x}\right] = \int_{p}^{\infty} F(t)\,dt\).
For, let \(g(x) = \dfrac{f(x)}{x}\) so that xg(x) = f(x) and (-1)G'(p) = F(p), which on integrating from p to ∞ gives the desired result, noting that G(∞) = 0, G(p) being the Laplace transform of g(x).

Ex. 9.4.5. Find \(L\!\left[\dfrac{\sin x}{x}\right]\) and hence show that \(\int_{0}^{\infty}\dfrac{\sin x}{x}\,dx = \dfrac{\pi}{2}\).

Sol. 9.4.5. We know \(L[\sin x] = \dfrac{1}{p^2 + 1}\).
\[L\!\left[\frac{\sin x}{x}\right] = \int_{p}^{\infty}\frac{1}{t^2 + 1}\,dt = \left[\tan^{-1}t\right]_{p}^{\infty} = \frac{\pi}{2} - \tan^{-1}p = \cot^{-1}p.\]
Now,
\[\int_{0}^{\infty} e^{-px}\frac{\sin x}{x}\,dx = L\!\left[\frac{\sin x}{x}\right] = \frac{\pi}{2} - \tan^{-1}p.\]
Choosing p = 0, we get \(\int_{0}^{\infty}\dfrac{\sin x}{x}\,dx = \dfrac{\pi}{2}\).

Ex. 9.4.6. Show that \(L\!\left[\dfrac{\cos x}{x}\right]\) does not exist.
Sol. 9.4.6. Please try yourself.

Ex. 9.4.7. Find \(L^{-1}\!\left[\dfrac{p + 7}{p^2 + 2p + 5}\right]\).
Sol. 9.4.7. Please try yourself by making a perfect square in the denominator.

Ex. 9.4.8. Find \(L^{-1}\!\left[\dfrac{2p^2 - 6p + 5}{p^3 - 6p^2 + 11p - 6}\right]\).
Sol. 9.4.8. Please try yourself by making partial fractions.

Ex. 9.4.9. Find \(L^{-1}\!\left[\log\dfrac{p + 1}{p - 1}\right]\).
Sol. 9.4.9. Please try yourself by letting
\[L[f(x)] = \log\frac{p + 1}{p - 1}\]
so that
\[L[xf(x)] = \frac{2}{p^2 - 1}.\]
Ex. 9.4.10. Show that \(L^{-1}\!\left[\dfrac{p}{(p^2 - a^2)^2}\right] = \dfrac{1}{2a}\,x\sinh ax\).
Sol. 9.4.10. Please try yourself.

Ex. 9.4.11. Find \(L^{-1}\!\left[\dfrac{p}{p^4 + p^2 + 1}\right]\).
Sol. 9.4.11. Please try yourself by using
\[\frac{p}{p^4 + p^2 + 1} = \frac{1}{2}\left(\frac{1}{p^2 - p + 1} - \frac{1}{p^2 + p + 1}\right).\]


Solution of DE using Laplace transform
To solve a DE, first take the Laplace transform of both sides, find L[y] and finally take the inverse Laplace transform to obtain the solution y, as illustrated in the following examples.

Ex. 9.5.1. Solve y' - y = 0.
Sol. 9.5.1. Taking the Laplace transform of both sides, we get
\[pL[y] - y(0) - L[y] = 0.\]
Letting y(0) = c and solving for L[y], we have
\[L[y] = \frac{c}{p - 1}.\]
Now taking the inverse Laplace transform, we get
\[y = cL^{-1}\!\left[\frac{1}{p - 1}\right] = ce^x.\]
Ex. 9.5.2. Solve y'' + y = 0.
Sol. 9.5.2. Taking the Laplace transform of both sides, we get
\[p^2L[y] - py(0) - y'(0) + L[y] = 0.\]
Letting y(0) = c1, y'(0) = c2 and solving for L[y], we have
\[L[y] = c_1\frac{p}{p^2 + 1} + c_2\frac{1}{p^2 + 1}.\]
Now taking the inverse Laplace transform, we get
\[y = c_1\cos x + c_2\sin x.\]
Ex. 9.5.3. Solve \(y' + y = 3e^{2x}\), y(0) = 0.
Sol. 9.5.3. Taking the Laplace transform of both sides, we get
\[pL[y] - y(0) + L[y] = \frac{3}{p - 2}.\]
Using y(0) = 0 and solving for L[y], we have
\[L[y] = \frac{3}{(p + 1)(p - 2)} = \frac{1}{p - 2} - \frac{1}{p + 1}.\]
Now taking the inverse Laplace transform, we get
\[y = e^{2x} - e^{-x}.\]
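Ex. 9.5.3 can be cross-checked with sympy's ODE solver. This is a sketch that uses dsolve directly rather than repeating the transform steps.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x) + y(x), 3*sp.exp(2*x))
print(sp.dsolve(ode, y(x), ics={y(0): 0}))   # Eq(y(x), exp(2*x) - exp(-x))
```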
Ex. 9.5.4. Solve Bessel's DE of order 0 given by
\[xy'' + y' + xy = 0,\]
subject to y(0) = 1, y'(0) = 0.


Sol. 9.5.4. Taking the Laplace transform of both sides, we get
\[(-1)\frac{d}{dp}\left[p^2L[y] - py(0) - y'(0)\right] + pL[y] - y(0) + (-1)\frac{d}{dp}(L[y]) = 0.\]
Using y(0) = 1, y'(0) = 0 and rearranging, we have
\[\frac{d[L[y]]}{L[y]} = -\frac{p}{p^2 + 1}\,dp,\]
which on integrating leads to
\[L[y] = c(p^2 + 1)^{-1/2} = \frac{c}{p}\left(1 + \frac{1}{p^2}\right)^{-1/2} = c\left[\frac{1}{p} - \frac{1}{2}\frac{1}{p^3} + \frac{1}{2!}\,\frac{1}{2}\,\frac{3}{2}\,\frac{1}{p^5} - \cdots\right].\]
Now taking the inverse Laplace transform, we get
\[y = c\left[1 - \frac{x^2}{2^2} + \frac{x^4}{2^2\cdot 4^2} - \cdots\right] = cJ_0(x).\]
Using y(0) = 1, we get c = 1. Thus, the required solution is
\[y = J_0(x).\]
Remark 9.5.1. From the above example, notice that \(L[J_0(x)] = \dfrac{1}{\sqrt{p^2 + 1}}\).
.
Theorem 9.5.1. (Convolution Theorem) Prove that \(L[f(x)]\cdot L[g(x)] = L\!\left[\int_{0}^{x} f(x - t)g(t)\,dt\right]\).

Proof. We have
\[\begin{aligned}
L[f(x)]\cdot L[g(x)] &= \int_{0}^{\infty} e^{-ps}f(s)\,ds\cdot\int_{0}^{\infty} e^{-pt}g(t)\,dt\\
&= \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-p(s+t)}f(s)g(t)\,ds\,dt\\
&= \int_{0}^{\infty}\!\!\int_{t}^{\infty} e^{-px}f(x - t)g(t)\,dx\,dt \qquad (s + t = x)\\
&= \int_{0}^{\infty}\!\!\int_{0}^{x} e^{-px}f(x - t)g(t)\,dt\,dx \qquad \text{(change of order of integration)}\\
&= \int_{0}^{\infty} e^{-px}\left[\int_{0}^{x} f(x - t)g(t)\,dt\right]dx\\
&= L\!\left[\int_{0}^{x} f(x - t)g(t)\,dt\right].
\end{aligned}\]
Remark 9.5.2. If L[f(x)] = F(p) and L[g(x)] = G(p), then by the convolution theorem
\[L^{-1}[F(p)G(p)] = \int_{0}^{x} f(x - t)g(t)\,dt.\]


Ex. 9.5.5. Use the convolution theorem to find \(L^{-1}\!\left[\dfrac{1}{p^2(p^2 + 1)}\right]\).

Sol. 9.5.5. We know that
\[L^{-1}\!\left[\frac{1}{p^2}\right] = x, \qquad L^{-1}\!\left[\frac{1}{p^2 + 1}\right] = \sin x.\]
So in view of the convolution theorem, we have
\[L^{-1}\!\left[\frac{1}{p^2(p^2 + 1)}\right] = L^{-1}\!\left[\frac{1}{p^2}\cdot\frac{1}{p^2 + 1}\right] = \int_{0}^{x}(x - t)\sin t\,dt = x - \sin x.\]
Remark 9.5.3. The integral \(\int_{0}^{x} f(x - t)g(t)\,dt\) is called the convolution of the functions f(x) and g(x), and is denoted by f(x) * g(x). So by the convolution theorem, we have
\[L[f(x) * g(x)] = L[f(x)]L[g(x)].\]
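Both routes to the inverse in Ex. 9.5.5, the symbolic inverse transform and the convolution integral, can be reproduced with sympy. A sketch, assuming sympy is available.

```python
import sympy as sp

p, x, t = sp.symbols('p x t', positive=True)
print(sp.inverse_laplace_transform(1/(p**2*(p**2 + 1)), p, x))  # x - sin(x), times Heaviside(x)
print(sp.integrate((x - t)*sp.sin(t), (t, 0, x)))               # x - sin(x)
```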

Solution of integral equations
If f(x) and K(x) are given functions, then the equation
\[f(x) = y(x) + \int_{0}^{x} K(x - t)y(t)\,dt, \tag{9.3}\]
where the unknown function y(x) appears under the integral sign, is called an integral equation.
Taking the Laplace transform of both sides of (9.3), we get
\[L[f(x)] = L[y(x)] + L[K(x)]L[y(x)].\]
So we have
\[L[y(x)] = \frac{L[f(x)]}{1 + L[K(x)]}.\]
Ex. 9.6.1. Solve \(y(x) = x^3 + \int_{0}^{x}\sin(x - t)y(t)\,dt\).

Sol. 9.6.1. Taking the Laplace transform of both sides, we get
\[L[y(x)] = L[x^3] + L[\sin x]L[y(x)].\]
So we have
\[L[y(x)] = \frac{L[x^3]}{1 - L[\sin x]} = \frac{6}{p^4} + \frac{6}{p^6}.\]
(Here the equation has the form (9.3) with f(x) = x^3 and K(x) = -sin x, which accounts for the minus sign in the denominator.)
Taking the inverse Laplace transform, we have
\[y(x) = x^3 + \frac{1}{20}x^5.\]
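The answer of Ex. 9.6.1 can be verified by substituting it back into the integral equation. The sketch below does this symbolically with sympy.

```python
import sympy as sp

x, t = sp.symbols('x t')
y = lambda s: s**3 + s**5/20                                    # candidate solution from above
residual = x**3 + sp.integrate(sp.sin(x - t)*y(t), (t, 0, x)) - y(x)
print(sp.simplify(residual))                                    # 0, so the integral equation is satisfied
```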


Heaviside or Unit Step Function
It is denoted by H(t) or u(t), and is defined as
\[H(t) = u(t) = \begin{cases} 0, & t < 0,\\ 1, & t \geq 0. \end{cases}\]
It has a jump discontinuity at t = 0. If the discontinuity happens to be at t = a ≥ 0, we define
\[H_a(t) = u_a(t) = \begin{cases} 0, & t < a,\\ 1, & t \geq a. \end{cases}\]
The Laplace transform of \(u_a(t)\) is given by
\[L[u_a(t)] = \int_{0}^{\infty} e^{-pt}u_a(t)\,dt = \int_{a}^{\infty} e^{-pt}\,dt = \frac{e^{-ap}}{p}.\]
In particular, \(L[u(t)] = \dfrac{1}{p}\).
Further, if L[f(t)] = F(p), then we have
\[L[f(t - a)u_a(t)] = \int_{a}^{\infty} e^{-pt}f(t - a)\,dt = \int_{0}^{\infty} e^{-p(a+z)}f(z)\,dz = e^{-ap}F(p).\]
It gives
\[L^{-1}[e^{-ap}F(p)] = f(t - a)u_a(t).\]
Ex. 9.7.1. Find \(L^{-1}\!\left[\dfrac{e^{-3p}}{p^2 + 1}\right]\).

Sol. 9.7.1. We know \(L^{-1}\!\left[\dfrac{1}{p^2 + 1}\right] = \sin t\).
\[L^{-1}\!\left[\frac{e^{-3p}}{p^2 + 1}\right] = \sin(t - 3)u_3(t) = \begin{cases} 0, & t < 3,\\ \sin(t - 3), & t \geq 3. \end{cases}\]
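Sympy reproduces the shifted inverse transform of Ex. 9.7.1, writing u_a(t) as Heaviside(t - a). A sketch, assuming sympy is available.

```python
import sympy as sp

p, t = sp.symbols('p t', positive=True)
print(sp.inverse_laplace_transform(sp.exp(-3*p)/(p**2 + 1), p, t))
# sin(t - 3)*Heaviside(t - 3)
```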

Dirac Delta Function or Unit Impulse Function
A large force acting on a body for a very short duration of time is called an impulse. For instance, hammering a nail into wood, hitting a cricket ball with a bat, etc. Impulse is modelled by a function known as the Dirac delta function.
Let ε > 0 be any real number, then the limit of the function
\[f_\varepsilon(t) = \begin{cases} 0, & t < 0,\\ 1/\varepsilon, & 0 \leq t \leq \varepsilon,\\ 0, & t > \varepsilon, \end{cases}\]
as ε → 0+ defines the Dirac delta function, which is denoted by δ(t). So \(\lim_{\varepsilon\to 0^+} f_\varepsilon(t) = \delta(t)\), and we may interpret that δ(t) = 0 for t ≠ 0 and δ(t) = ∞ at t = 0. The delta function can be made to act at any point, say a ≥ 0. Then we define
\[\delta_a(t) = \begin{cases} 0, & t \neq a,\\ \infty, & t = a. \end{cases}\]
The function \(f_\varepsilon(t)\) can be written in terms of the unit step function as
\[f_\varepsilon(t) = \frac{1}{\varepsilon}[u(t) - u_\varepsilon(t)].\]
It implies that
\[\delta(t) = u'(t).\]
Here it should be noted that the ordinary derivative of u(t) does not exist at t = 0, u(t) being discontinuous at t = 0. So it is to be understood as a generalized function or quasi function.
Similarly, the function
\[f_\varepsilon(t) = \begin{cases} 0, & t < a,\\ 1/\varepsilon, & a \leq t \leq a + \varepsilon,\\ 0, & t > a + \varepsilon, \end{cases} \tag{9.4}\]
can be written as
\[f_\varepsilon(t) = \frac{1}{\varepsilon}[u_a(t) - u_{a+\varepsilon}(t)].\]
\[\therefore\quad \delta_a(t) = u_a'(t).\]
Now, let g(t) be any continuous function for t ≥ 0. Then using (9.4), we have
\[\int_{0}^{\infty} g(t)f_\varepsilon(t)\,dt = \frac{1}{\varepsilon}\int_{a}^{a+\varepsilon} g(t)\,dt = g(t_0),\]
where a < t0 < a + ε, by the mean value theorem of integral calculus. So in the limit ε → 0, we get
\[\int_{0}^{\infty} g(t)\delta_a(t)\,dt = g(a).\]
In particular, if we choose \(g(t) = e^{-pt}\), then we get
\[\int_{0}^{\infty} e^{-pt}\delta_a(t)\,dt = e^{-pa}.\]
It means
\[L[\delta_a(t)] = e^{-pa} \quad\text{and}\quad L[\delta(t)] = 1.\]
\[\therefore\quad L^{-1}[e^{-pa}] = \delta_a(t) \quad\text{and}\quad L^{-1}[1] = \delta(t).\]

Examples
Suppose the LDE
\[y'' + ay' + by = f(t), \qquad y(0) = y'(0) = 0, \tag{9.5}\]
describes a mechanical or electrical system at rest in its state of equilibrium. Here f(t) can be an impressed external force F or an electromotive force E that begins to act at t = 0. If A(t) is the solution (output or indicial response) for the input f(t) = u(t) (the unit step function), then
\[A'' + aA' + bA = u(t).\]
Taking the Laplace transform of both sides, we get
\[p^2L[A] - pA(0) - A'(0) + a\{pL[A] - A(0)\} + bL[A] = \frac{1}{p}.\]
Using A(0) = A'(0) = 0 and solving for L[A], we get
\[L[A] = \frac{1}{p(p^2 + ap + b)} = \frac{1}{pZ(p)}, \tag{9.6}\]
where \(Z(p) = p^2 + ap + b\).
Similarly, taking the Laplace transform of (9.5), we get
\[L[y] = \frac{L[f(t)]}{Z(p)} = pL[A]L[f(t)] = pL\!\left[\int_{0}^{t} A(t - \tau)f(\tau)\,d\tau\right] = L\!\left[\frac{d}{dt}\int_{0}^{t} A(t - \tau)f(\tau)\,d\tau\right]. \tag{9.7}\]
Taking the inverse Laplace transform, we have
\[y(t) = \frac{d}{dt}\int_{0}^{t} A(t - \tau)f(\tau)\,d\tau = \int_{0}^{t} A'(t - \tau)f(\tau)\,d\tau \qquad (\because\ A(0) = 0). \tag{9.8}\]
Since L[A]L[f(t)] = L[f(t)]L[A], (9.7) gives
\[L[y] = pL\!\left[\int_{0}^{t} f(t - \tau)A(\tau)\,d\tau\right] = L\!\left[\frac{d}{dt}\int_{0}^{t} f(t - \tau)A(\tau)\,d\tau\right].\]
Taking the inverse Laplace transform, we get
\[y(t) = \int_{0}^{t} f'(t - \tau)A(\tau)\,d\tau + f(0)A(t).\]
Thus, finally the solution of (9.5) for the general input f(t) is given by the following two formulas:
\[y(t) = \int_{0}^{t} A'(t - \tau)f(\tau)\,d\tau, \tag{9.9}\]
\[y(t) = \int_{0}^{t} f'(t - \tau)A(\tau)\,d\tau + f(0)A(t). \tag{9.10}\]
In case the input is f(t) = δ(t), the unit impulse function, let us denote the solution (output or impulsive response) of (9.5) by h(t) so that L[h(t)] = 1/Z(p) and
\[L[A(t)] = \frac{1}{pZ(p)} = \frac{L[h(t)]}{p}.\]
So A'(t) = h(t) and formula (9.9) becomes
\[y(t) = \int_{0}^{t} h(t - \tau)f(\tau)\,d\tau. \tag{9.11}\]


Ex. 9.8.1. Use formula (9.10) to solve \(y'' + y' - 6y = 2e^{3t}\), y(0) = y'(0) = 0.

Sol. 9.8.1. Here \(L[A(t)] = \dfrac{1}{p(p^2 + p - 6)}\). So \(A(t) = -\dfrac{1}{6} + \dfrac{1}{10}e^{2t} + \dfrac{1}{15}e^{-3t}\).
Also, \(f(t) = 2e^{3t}\), \(f'(t) = 6e^{3t}\) and f(0) = 2. So formula (9.10) gives
\[y(t) = \frac{1}{3}e^{3t} - \frac{2}{5}e^{2t} + \frac{1}{15}e^{-3t}.\]
Formula (9.11) can also be used for the solution, where \(h(t) = L^{-1}\!\left[\dfrac{1}{p^2 + p - 6}\right] = \dfrac{1}{5}(e^{2t} - e^{-3t})\).
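The answer of Ex. 9.8.1 can be cross-checked against sympy's dsolve, which solves the initial value problem directly. A sketch, assuming sympy is available.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + y(t).diff(t) - 6*y(t), 2*sp.exp(3*t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
closed_form = sp.exp(3*t)/3 - 2*sp.exp(2*t)/5 + sp.exp(-3*t)/15
print(sp.simplify(sol.rhs - closed_form))   # 0, so dsolve agrees with formula (9.10)
```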

Chapter 10
Systems of First Order Equations

In this chapter, we shall learn to solve the system of two first order differential equations of the type
\[\begin{cases} \dfrac{dx}{dt} = a_1x + b_1y + f_1(t),\\[2mm] \dfrac{dy}{dt} = a_2x + b_2y + f_2(t), \end{cases}\]
where a1, a2, b1 and b2 are constants. This system is said to be homogeneous if f1(t) = 0 and f2(t) = 0, otherwise it is non-homogeneous.

Solution of homogeneous system
Consider the homogeneous system
\[\begin{cases} \dfrac{dx}{dt} = a_1x + b_1y,\\[2mm] \dfrac{dy}{dt} = a_2x + b_2y. \end{cases} \tag{10.1}\]
First we need to keep in mind the following results.
(1) If \(\{x = x_1(t),\ y = y_1(t)\}\) and \(\{x = x_2(t),\ y = y_2(t)\}\) are two particular solutions of (10.1), and c1, c2 are any two constants, then \(\{x = c_1x_1(t) + c_2x_2(t),\ y = c_1y_1(t) + c_2y_2(t)\}\) is again a solution of (10.1).
(2) The expression \(W = \begin{vmatrix} x_1 & x_2\\ y_1 & y_2 \end{vmatrix}\) is defined as the Wronskian of the two solutions \(\{x = x_1(t),\ y = y_1(t)\}\) and \(\{x = x_2(t),\ y = y_2(t)\}\) of (10.1). Further, it can be shown that W is either identically 0 or never 0. If W is non-zero, the two solutions are LI, otherwise LD.
(3) If \(\{x = x_1(t),\ y = y_1(t)\}\) and \(\{x = x_2(t),\ y = y_2(t)\}\) are two LI solutions of (10.1), then its general solution is given by \(\{x = c_1x_1(t) + c_2x_2(t),\ y = c_1y_1(t) + c_2y_2(t)\}\).


Two LI solutions of (10.1) are obtained as follows. First write the system (10.1) in operator form by setting D = d/dt, so that
\[\begin{cases} (D - a_1)x - b_1y = 0,\\ -a_2x + (D - b_2)y = 0. \end{cases} \tag{10.2}\]
Then assume that \(\{x = Ae^{mt},\ y = Be^{mt}\}\) is a solution of (10.1). Substituting it into (10.2), we get
\[\begin{cases} (m - a_1)A - b_1B = 0,\\ -a_2A + (m - b_2)B = 0. \end{cases} \tag{10.3}\]
It is obtained as if we have replaced D by m, x by A and y by B in (10.2).
For non-zero values of A and B from (10.3), we must have
\[\begin{vmatrix} m - a_1 & -b_1\\ -a_2 & m - b_2 \end{vmatrix} = 0,\]
which leads to a quadratic equation in m with two roots, say m1 and m2. Depending on the nature of the roots, the following three cases arise.

Case (i) m1 and m2 are real and distinct.
We solve (10.3) for m = m1. Suppose we get A = A1 and B = B1. Likewise, suppose we find A = A2 and B = B2 for m = m2. Then \(\{x = A_1e^{m_1t},\ y = B_1e^{m_1t}\}\) and \(\{x = A_2e^{m_2t},\ y = B_2e^{m_2t}\}\) are two LI solutions of (10.1).

Case (ii) m1 = a + ib and m2 = a - ib are complex roots.
We solve (10.3) for m = m1 = a + ib. Suppose we get A = A1 + iA2 and B = B1 + iB2. Then \(\{x = (A_1 + iA_2)e^{(a+ib)t},\ y = (B_1 + iB_2)e^{(a+ib)t}\}\) is a solution of (10.1). Its real and imaginary parts, respectively given by \(\{x = e^{at}(A_1\cos bt - A_2\sin bt),\ y = e^{at}(B_1\cos bt - B_2\sin bt)\}\) and \(\{x = e^{at}(A_1\sin bt + A_2\cos bt),\ y = e^{at}(B_1\sin bt + B_2\cos bt)\}\), are two LI solutions of (10.1).

Case (iii) m1 and m2 are equal, say m1 = m2 = m*.
We solve (10.3) for the common value m*. Suppose we get A = A* and B = B*. Then one particular solution of (10.1) is \(\{x = A^*e^{m^*t},\ y = B^*e^{m^*t}\}\).
The other LI solution is given by \(\{x = (A_1 + A_2t)e^{m^*t},\ y = (B_1 + B_2t)e^{m^*t}\}\), where the constants A1, A2, B1 and B2 are to be found from the equations obtained by substituting this solution into (10.1) and equating to 0 the coefficients of like functions, just like you did in the method of undetermined coefficients.


Ex. 10.1.1. Solve the following system of differential equations:
\[\begin{cases} \dfrac{dx}{dt} = x + y,\\[2mm] \dfrac{dy}{dt} = 4x - 2y. \end{cases}\]
Sol. 10.1.1. The operator form of the given system is
\[\begin{cases} (D - 1)x - y = 0,\\ -4x + (D + 2)y = 0. \end{cases} \tag{10.4}\]
Let \(\{x = Ae^{mt},\ y = Be^{mt}\}\) be a solution of the given system. Substituting it into (10.4), we get
\[\begin{cases} (m - 1)A - B = 0,\\ -4A + (m + 2)B = 0. \end{cases} \tag{10.5}\]
For non-zero values of A and B, we must have
\[\begin{vmatrix} m - 1 & -1\\ -4 & m + 2 \end{vmatrix} = 0.\]
So we get
\[m^2 + m - 6 = 0, \quad\text{or}\quad m = -3,\ 2.\]
Next, for m = -3, (10.5) can be solved to get A = 1 and B = -4. One solution of the given system of differential equations, therefore, is \(\{x = e^{-3t},\ y = -4e^{-3t}\}\).
For m = 2, (10.5) yields A = 1 and B = 1. Therefore \(\{x = e^{2t},\ y = e^{2t}\}\) is a solution of the given system of differential equations.
Finally, the general solution reads as
\[\begin{cases} x = c_1e^{-3t} + c_2e^{2t},\\ y = -4c_1e^{-3t} + c_2e^{2t}. \end{cases}\]
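The quadratic in m is just the characteristic polynomial of the coefficient matrix, so the roots and the pairs (A, B) can also be obtained as eigenvalues and eigenvectors. A sketch with numpy, assuming it is available.

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [4.0, -2.0]])
eigvals, eigvecs = np.linalg.eig(M)
print(eigvals)    # [ 2. -3.]  (order may differ)
print(eigvecs)    # columns proportional to (1, 1) and (1, -4), matching (A, B) above
```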

Solution of non-homogeneous system
Consider the non-homogeneous system
\[\begin{cases} \dfrac{dx}{dt} = a_1x + b_1y + f_1(t),\\[2mm] \dfrac{dy}{dt} = a_2x + b_2y + f_2(t). \end{cases} \tag{10.6}\]
Its general solution is given by
\[\begin{cases} x = c_1x_1(t) + c_2x_2(t) + x_p(t),\\ y = c_1y_1(t) + c_2y_2(t) + y_p(t), \end{cases}\]


where \(\{x = c_1x_1(t) + c_2x_2(t),\ y = c_1y_1(t) + c_2y_2(t)\}\) is the general solution of the corresponding homogeneous system \(\{dx/dt = a_1x + b_1y,\ dy/dt = a_2x + b_2y\}\), and \(\{x = x_p(t),\ y = y_p(t)\}\) is a particular solution of (10.6).
We already know how to solve the corresponding homogeneous system of (10.6). So we need to know how to find a particular solution of (10.6).
We construct a particular solution using the solution of the corresponding homogeneous system of (10.6) by varying the unknown parameters c1 and c2, replacing them with two unknown functions v1(t) and v2(t) respectively. So we assume a particular solution of the form
\[\begin{cases} x = v_1x_1 + v_2x_2,\\ y = v_1y_1 + v_2y_2. \end{cases}\]
Substituting this particular solution into (10.6), we get
\[\begin{cases} v_1'x_1 + v_2'x_2 = f_1(t),\\ v_1'y_1 + v_2'y_2 = f_2(t). \end{cases}\]
Therefore, we have
\[v_1 = \int\frac{\begin{vmatrix} f_1(t) & x_2\\ f_2(t) & y_2 \end{vmatrix}}{\begin{vmatrix} x_1 & x_2\\ y_1 & y_2 \end{vmatrix}}\,dt \qquad\text{and}\qquad v_2 = \int\frac{\begin{vmatrix} x_1 & f_1(t)\\ y_1 & f_2(t) \end{vmatrix}}{\begin{vmatrix} x_1 & x_2\\ y_1 & y_2 \end{vmatrix}}\,dt.\]
This method of getting a particular solution is called the method of variation of parameters.


Ex. 10.2.1. Use the method of variation of parameters to find a particular solution of the following system of differential equations:
\[\begin{cases} \dfrac{dx}{dt} = x + y - 5t + 2,\\[2mm] \dfrac{dy}{dt} = 4x - 2y - 8t - 8. \end{cases}\]
Hence write the general solution.

Sol. 10.2.1. The corresponding homogeneous system of the given system is
\[\begin{cases} \dfrac{dx}{dt} = x + y,\\[2mm] \dfrac{dy}{dt} = 4x - 2y. \end{cases}\]
Its general solution (as found in the earlier example) reads as
\[\begin{cases} x = c_1e^{-3t} + c_2e^{2t},\\ y = -4c_1e^{-3t} + c_2e^{2t}. \end{cases}\]
Comparing it with \(\{x = c_1x_1 + c_2x_2,\ y = c_1y_1 + c_2y_2\}\), we find
\[x_1 = e^{-3t}, \quad x_2 = e^{2t}, \quad y_1 = -4e^{-3t}, \quad y_2 = e^{2t}.\]
Further, \(f_1(t) = -5t + 2\) and \(f_2(t) = -8t - 8\). By the method of variation of parameters, a particular solution is of the form
\[\begin{cases} x = v_1e^{-3t} + v_2e^{2t},\\ y = -4v_1e^{-3t} + v_2e^{2t}, \end{cases}\]
where
\[v_1 = \int\frac{\begin{vmatrix} f_1(t) & x_2\\ f_2(t) & y_2 \end{vmatrix}}{\begin{vmatrix} x_1 & x_2\\ y_1 & y_2 \end{vmatrix}}\,dt = \int\frac{\begin{vmatrix} -5t + 2 & e^{2t}\\ -8t - 8 & e^{2t} \end{vmatrix}}{\begin{vmatrix} e^{-3t} & e^{2t}\\ -4e^{-3t} & e^{2t} \end{vmatrix}}\,dt = \frac{1}{5}\int e^{3t}(3t + 10)\,dt = \frac{1}{5}(t + 3)e^{3t},\]
\[v_2 = \int\frac{\begin{vmatrix} x_1 & f_1(t)\\ y_1 & f_2(t) \end{vmatrix}}{\begin{vmatrix} x_1 & x_2\\ y_1 & y_2 \end{vmatrix}}\,dt = \int\frac{\begin{vmatrix} e^{-3t} & -5t + 2\\ -4e^{-3t} & -8t - 8 \end{vmatrix}}{\begin{vmatrix} e^{-3t} & e^{2t}\\ -4e^{-3t} & e^{2t} \end{vmatrix}}\,dt = -\frac{28}{5}\int te^{-2t}\,dt = \frac{7}{5}(2t + 1)e^{-2t}.\]
So we have
\[\begin{cases} x = v_1e^{-3t} + v_2e^{2t} = 3t + 2,\\ y = -4v_1e^{-3t} + v_2e^{2t} = 2t - 1. \end{cases}\]
Hence, the general solution is given by
\[\begin{cases} x = c_1e^{-3t} + c_2e^{2t} + 3t + 2,\\ y = -4c_1e^{-3t} + c_2e^{2t} + 2t - 1. \end{cases}\]
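The particular solution can be confirmed by substituting x_p = 3t + 2, y_p = 2t - 1 back into the given system. A sketch with sympy, assuming it is available.

```python
import sympy as sp

t = sp.symbols('t')
xp, yp = 3*t + 2, 2*t - 1
print(sp.simplify(sp.diff(xp, t) - (xp + yp - 5*t + 2)))      # 0
print(sp.simplify(sp.diff(yp, t) - (4*xp - 2*yp - 8*t - 8)))  # 0
```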

Variable elimination approach
In the following, the variable elimination approach is described through an example. It is very simple, and powerful as well, in the sense that it can be applied to find general solutions of homogeneous and non-homogeneous systems of linear differential equations. Let us solve the previous example by the variable elimination approach.
Ex. 10.3.1. Find the solution of the following system of differential equations:
\[\begin{cases} \dfrac{dx}{dt} = x + y - 5t + 2,\\[2mm] \dfrac{dy}{dt} = 4x - 2y - 8t - 8. \end{cases}\]
Sol. 10.3.1. The operator form of the given system is
\[\begin{cases} (D - 1)x - y = -5t + 2,\\ -4x + (D + 2)y = -8t - 8. \end{cases} \tag{10.7}\]
Let us eliminate y from these two equations. Operating D + 2 on both sides of the first equation, and then adding to the second equation, we get
\[[(D + 2)(D - 1) - 4]x = (D + 2)(-5t + 2) - 8t - 8 = -5 - 10t + 4 - 8t - 8,\]
\[\implies\quad (D^2 + D - 6)x = -18t - 9.\]
It is a second order non-homogeneous LDE with constant coefficients in x and t, with AE given by
\[m^2 + m - 6 = 0.\]
Its roots are m = -3, 2. So we have
\[x_h = c_1e^{-3t} + c_2e^{2t}.\]
To find x_p, we have
\[x_p = \frac{1}{D^2 + D - 6}(-18t - 9) = -\frac{1}{6}\left[1 - \frac{D^2 + D}{6}\right]^{-1}(-18t - 9) = \left[1 + \frac{D^2 + D}{6} + \cdots\right]\left(3t + \frac{3}{2}\right) = 3t + \frac{3}{2} + \frac{1}{2} = 3t + 2.\]
Thus,
\[x = x_h + x_p = c_1e^{-3t} + c_2e^{2t} + 3t + 2.\]
We can get y by substituting this value of x into the first equation of the given system. For,
\[y = \frac{dx}{dt} - x + 5t - 2 = \frac{d}{dt}(c_1e^{-3t} + c_2e^{2t} + 3t + 2) - (c_1e^{-3t} + c_2e^{2t} + 3t + 2) + 5t - 2,\]
\[\implies\quad y = -4c_1e^{-3t} + c_2e^{2t} + 2t - 1.\]


You can see that life is easy if we use the variable elimination approach, unless the question specifically asks for the earlier approach.
Cheers!
