MATHEMATICS-III
MATH F211
Session: 2016-2017
Note: Some concepts of Differential Equations are briefly described here just to help the students. The following study material is therefore expected to be useful but not exhaustive for the Mathematics-III course. For detailed study, the students are advised to attend the lecture/tutorial classes regularly, and to consult the textbook prescribed in the handout of the course.
Textbook: G.F. Simmons, Differential Equations with Applications and Historical Notes, TMH,
2nd ed., 1991.
Appeal: Please do not print this document. Develop a habit of reading the soft copy of the notes.
Contents

1 Preliminaries of Differential Equations
  1.1 Differential equations and their classifications
    1.1.1 Classification based on number of independent variables
    1.1.2 Classification based on degree
  1.2 Solutions of DE
    1.2.1 Explicit solution
    1.2.2 Implicit solution
    1.2.3 Formal solution
    1.2.4 General and particular solutions
    1.2.5 Singular solution
    1.2.6 Initial and boundary value problems

2 First Order Differential Equations

3 Second Order DE

4 Qualitative Behavior of Solutions

6 Fourier Series
  6.1 Introduction
  6.2 Dirichlet's conditions for convergence
  6.3 Fourier series for even and odd functions
  6.4 Fourier series on arbitrary intervals

9 Laplace Transforms
  9.1 Definitions of Laplace and inverse Laplace transforms
  9.2 Laplace transforms of some elementary functions
  9.3 Sufficient conditions for the existence of Laplace transform
  9.4 Some more Laplace transform formulas
    9.4.1 Laplace transform of a function multiplied by e^{ax}
    9.4.2 Laplace transform of derivatives of a function
    9.4.3 Laplace transform of integral of a function
    9.4.4 Laplace transform of a function multiplied by x
    9.4.5 Laplace transform of a function divided by x
  9.5 Solution of DE using Laplace transform
  9.6 Solution of integral equations
  9.7 Heaviside or Unit Step Function
  9.8 Dirac Delta Function or Unit Impulse Function
Chapter 1
Preliminaries of Differential Equations
Differential equations and their classifications
Differential equation
The mathematical description of any dynamical or physical phenomenon naturally introduces independent and dependent variables. Suppose we blow air into a balloon that inflates in a spherical shape. Then the radius r of the spherical balloon depends on the amount of air blown into it, and is therefore at our discretion. So we may treat the variable r as the independent variable. We know that the surface area S of the spherical balloon depends on r via the relation S = 4πr². So, in this example, r is the independent variable and S is the dependent variable. Also, the rate of change of the surface area S of the balloon with respect to its radius r is given by the equation

dS/dr = 8πr.

It is a differential equation that gives us the rate of change of S with respect to r for any given value of r.
A differential equation may involve more than one independent or dependent variable. For instance, in the above balloon example, if we allow the variable r to depend on time t, then the time variable t is independent while r and S are both dependent variables. Also, the governing differential equation dS/dr = 8πr can be written as

dS/dt = 8πr (dr/dt).
Formally, we define a differential equation as follows: Any equation (non-identity) involving
derivatives of dependent variable(s) with respect to independent variable(s) is called a differential
equation (DE).
Hereafter, we shall use the abbreviation DE for the phrase differential equation and its plural
differential equations as well.
Order and Degree
The order of the highest order derivative occurring in a DE is called its order. The power or
exponent of the highest order derivative occurring in the DE is called its degree provided the DE
is made free from radicals or fractions in its derivatives.
Ex. Order of (y'')³ + 2y' + 3y = x is 2 and degree is 3.
Ex. Order of y⁽⁴⁾ + 2(y')⁵ + 3y = 0 is 4 and degree is 1.
Ex. (y''')^{1/2} + y' = 0 can be rewritten as y''' − (y')² = 0. So its order is 3 and degree is 1.
Non-linear DE
If a DE is not linear, that is, if the dependent variable or its derivatives occur with degree greater than one or in products with each other, then it is said to be non-linear.
Ex. y = xy' + (y')² is a first order non-linear DE as y' occurs with degree 2.
Ex. yy'' + 4y = 3x² is a second order non-linear DE as y and y'' occur in a product in the first term.
Ex. y'' + 2y' + 3y² = 0 is a second order non-linear DE as y occurs with degree 2.
Solutions of DE
Consider the nth order DE

f(x, y, y', ..., y⁽ⁿ⁾) = 0.    (1.1)
Explicit solution
A function g defined on an interval I is said to be an explicit solution of (1.1) on the interval I if f(x, g, g', ..., g⁽ⁿ⁾) = 0 for all x ∈ I.
For example, y = sin x is an explicit solution of the DE y'' + y = 0 on (−∞, ∞) since y = sin x implies that y'' + y = −sin x + sin x = 0 for all x ∈ (−∞, ∞).
Implicit solution
A relation h(x, y) = 0 is said to be an implicit solution of (1.1) on an interval I if h(x, y) = 0 yields at least one explicit solution g of (1.1) on I.
For example, x² + y² = 1 is an implicit solution of the DE yy' + x = 0 on (−1, 1). For, x² + y² = 1 yields the two functions y = √(1 − x²) and y = −√(1 − x²), both of which can be verified to be explicit solutions of yy' + x = 0 on (−1, 1).
Formal solution
A relation h(x, y) = 0 is said to be a formal solution of (1.1) on an interval I if h(x, y) = 0 does not yield any explicit solution g of (1.1) on I but satisfies (1.1) on I.
For example, x² + y² + 1 = 0 is a formal solution of the DE yy' + x = 0. For, the implicit differentiation of the relation x² + y² + 1 = 0 with respect to x yields the DE yy' + x = 0. However, x² + y² + 1 = 0 gives y² = −1 − x². So y is not real for any real x. This in turn implies that x² + y² + 1 = 0 does not yield any explicit solution of the given DE.
Singular solution
A singular solution of (1.1) is a particular solution of (1.1) which can not be obtained from the general solution h(x, y, c1, c2, ..., cn) = 0 of (1.1) by choosing particular values of the arbitrary constants c1, c2, ..., cn.
For example, y = cx + c² is the general solution of the DE y = xy' + (y')². It is easy to verify that y = −x²/4 is also a solution of this DE. Further, y = −x²/4 can not be retrieved from y = cx + c² for any choice of the arbitrary constant c. Hence, y = −x²/4 is a singular solution of the DE y = xy' + (y')².
Note: Considering the types of solutions discussed above, we can say that a solution of (1.1) is any relation, explicit or implicit, between x and y that does not involve derivatives and satisfies (1.1) identically.
Chapter 2
First Order Differential Equations
In general, any first order DE is of the form

g(x, y, y') = 0.    (2.1)

Sometimes, it is possible to write the first order DE (2.1) in the canonical form

y' = f(x, y).    (2.2)
Variable separable DE
A first order DE is said to be in variable separable form if it can be written as

y' = F(x)G(y),    (2.3)

where F(x) is a function of x alone and G(y) is a function of y alone. Equation (2.3) can be rewritten as

dy/G(y) = F(x)dx,

which, on integration, yields the solution

∫ dy/G(y) = ∫ F(x)dx + C,

where C is a constant of integration.
Ex. 2.1.1. Solve y' = y cos x.
Sol. 2.1.1. y = c e^{sin x}.
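As a quick numerical sanity check (not part of the original notes), the separable solution can be verified in Python; the constant c and the sample points are arbitrary choices:

```python
import math

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

c = 2.5  # arbitrary constant of integration
y = lambda x: c * math.exp(math.sin(x))

# y' = y cos x should hold at every point
for x in (0.0, 0.7, 1.3, 2.9):
    assert abs(deriv(y, x) - y(x) * math.cos(x)) < 1e-5

print("y = c e^{sin x} satisfies y' = y cos x")
```

The same finite-difference check works for any of the first order solutions in this chapter.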
Ex. Solve y' = (x + y)/(x − y).
Sol. tan⁻¹(y/x) = log √(x² + y²) + C.

Ex. Solve y' = (x + y + 4)/(x − y − 6).
Sol. tan⁻¹((y + 5)/(x − 1)) = log √((x − 1)² + (y + 5)²) + c.
Exact DE
The first order DE dy/dx = f(x, y) can be written in the canonical form M(x, y)dx + N(x, y)dy = 0, where f(x, y) = −M(x, y)/N(x, y). It is said to be an exact DE if M dx + N dy is an exact differential of some function, say F(x, y), that is,

M dx + N dy = dF.    (2.4)

Also, F(x, y) being a function of x and y, from the theory of partial differentiation we have

(∂F/∂x)dx + (∂F/∂y)dy = dF.    (2.5)
Comparing (2.4) and (2.5), we get

M = ∂F/∂x,  N = ∂F/∂y.    (2.6)

Differentiating partially,

∂M/∂y = ∂²F/∂y∂x,  ∂N/∂x = ∂²F/∂x∂y.    (2.7)
Given that M(x, y) and N(x, y) possess continuous first order partial derivatives, ∂²F/∂y∂x and ∂²F/∂x∂y are continuous functions, which in turn implies that ∂²F/∂y∂x = ∂²F/∂x∂y. Hence, (2.7) gives

∂M/∂y = ∂N/∂x.    (2.8)
Conversely, assume that the condition (2.8) is satisfied. We shall prove that there exists a function F(x, y) such that (2.4), and hence (2.6), is satisfied. Integrating the first of the equations in (2.6) w.r.t. x, we get

F = ∫ M dx + g(y).    (2.9)

Differentiating partially w.r.t. y and using N = ∂F/∂y, we get

N = ∂/∂y ∫ M dx + g'(y),  so that  g'(y) = N − ∂/∂y ∫ M dx.

The integrand N − ∂/∂y ∫ M dx is a function of y only, since

∂/∂x [N − ∂/∂y ∫ M dx] = ∂N/∂x − ∂²/∂x∂y ∫ M dx = ∂N/∂x − ∂M/∂y = 0,

by (2.8). Integrating,

g(y) = ∫ [N − ∂/∂y ∫ M dx] dy.    (2.10)

Hence the solution of the exact DE M dx + N dy = 0, namely F(x, y) = c, reads

∫ M dx + ∫ [N − ∂/∂y ∫ M dx] dy = c.
Ex. Test the equation e^y dx + (xe^y + 2y)dy = 0 for exactness and solve it if it is exact.
Sol. Comparing the given equation with M dx + N dy = 0, we get

M = e^y,  N = xe^y + 2y.

Then

∂M/∂y = e^y = ∂N/∂x.

This shows that the given DE is exact, and therefore its solution is given by

∫ M dx + ∫ [N − ∂/∂y ∫ M dx] dy = c,
∫ e^y dx + ∫ [xe^y + 2y − ∂/∂y (∫ e^y dx)] dy = c,
xe^y + ∫ [xe^y + 2y − ∂/∂y (xe^y)] dy = c,
xe^y + ∫ (xe^y + 2y − xe^y) dy = c,
xe^y + y² = c.
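A brief numerical check (an illustration, not part of the notes) that the exactness test holds and that F(x, y) = xe^y + y² really has M and N as its partial derivatives; the sample point is arbitrary:

```python
import math

def deriv(f, t, h=1e-6):
    # central finite-difference approximation to f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

M = lambda x, y: math.exp(y)
N = lambda x, y: x * math.exp(y) + 2 * y
F = lambda x, y: x * math.exp(y) + y * y   # potential from the worked solution

x0, y0 = 1.3, 0.4
# exactness: dM/dy = dN/dx
assert abs(deriv(lambda y: M(x0, y), y0) - deriv(lambda x: N(x, y0), x0)) < 1e-5
# F reproduces M and N: dF/dx = M and dF/dy = N
assert abs(deriv(lambda x: F(x, y0), x0) - M(x0, y0)) < 1e-5
assert abs(deriv(lambda y: F(x0, y), y0) - N(x0, y0)) < 1e-5
print("x e^y + y^2 = c solves e^y dx + (x e^y + 2y) dy = 0")
```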
Integrating Factor
If the DE M dx + N dy = 0 is not exact and there exists a function μ(x, y) such that the DE μ(M dx + N dy) = 0 is exact, then μ(x, y) is called an integrating factor (IF) of the DE M dx + N dy = 0.
Obviously, we need to determine the integrating factor for a non-exact DE.
Suppose μ(x, y) is an IF of M dx + N dy = 0. Then μM dx + μN dy = 0 is exact, so there exists a function f(x, y) such that μM dx + μN dy = df, that is, μM = ∂f/∂x and μN = ∂f/∂y. The exactness condition (2.8) applied to μM and μN gives

∂(μM)/∂y = ∂(μN)/∂x,
μ ∂M/∂y + M ∂μ/∂y = μ ∂N/∂x + N ∂μ/∂x,

or

N ∂μ/∂x − M ∂μ/∂y = μ (∂M/∂y − ∂N/∂x).    (2.11)
We can not determine μ in general from (2.11). If μ happens to be a function of x only, then ∂μ/∂y = 0 and (2.11) reduces to

(1/μ)(dμ/dx) = (1/N)(∂M/∂y − ∂N/∂x) = h(x) (say),

so that dμ/μ = h(x)dx. Thus, if (1/N)(∂M/∂y − ∂N/∂x) is a function h(x) of x only, then

μ = e^{∫h(x)dx}

is an IF of M dx + N dy = 0. Similarly, if (1/M)(∂N/∂x − ∂M/∂y) is a function k(y) of y only, then μ = e^{∫k(y)dy} is an IF.

Linear DE

A first order DE of the form

y' + p(x)y = q(x)    (2.12)

is called a linear DE (LDE).
Its comparison with M dx + N dy = 0 gives M = p(x)y − q(x) and N = 1. Here, we find that (1/N)(∂M/∂y − ∂N/∂x) = p(x) is a function of x only. Therefore, the IF is μ = e^{∫p(x)dx}. Now multiplying both sides of (2.12) by the IF, we obtain

y' e^{∫p(x)dx} + p(x)y e^{∫p(x)dx} = q(x) e^{∫p(x)dx},

that is,

d/dx [y e^{∫p(x)dx}] = q(x) e^{∫p(x)dx}.

Integrating, we get the solution

y e^{∫p(x)dx} = ∫ q(x) e^{∫p(x)dx} dx + c.
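The integrating-factor formula can be spot-checked numerically (an illustration, not part of the notes). With the arbitrary test choices p(x) = 2 and q(x) = e^x, the formula gives y e^{2x} = e^{3x}/3 + c, i.e. y = e^x/3 + c e^{−2x}:

```python
import math

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# y' + 2y = e^x  =>  IF = e^{2x},  y = e^x/3 + c e^{-2x}
c = -0.8  # arbitrary constant
y = lambda x: math.exp(x) / 3 + c * math.exp(-2 * x)

for x in (0.0, 0.5, 1.2):
    assert abs(deriv(y, x) + 2 * y(x) - math.exp(x)) < 1e-5
print("IF-method solution checks out for y' + 2y = e^x")
```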
Bernoulli's DE
A non-linear DE of the form y' + p(x)y = q(x)yⁿ (n ≠ 1) is called Bernoulli's DE. It can be reduced to an LDE by dividing it by yⁿ and then substituting y^{1−n} = z.
Ex. Solve y' + xy = x³y³.
Sol. y^{−2} = 1 + x² + c e^{x²}.
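The Bernoulli answer can also be verified numerically (an illustration, not part of the notes). Solving the implicit relation for y with an arbitrary constant c = 1 (chosen to keep the radicand positive):

```python
import math

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

c = 1.0  # arbitrary constant; keeps 1 + x^2 + c e^{x^2} positive
y = lambda x: (1 + x * x + c * math.exp(x * x)) ** -0.5

# check the Bernoulli DE y' + x y = x^3 y^3 pointwise
for x in (0.1, 0.5, 0.9):
    lhs = deriv(y, x) + x * y(x)
    rhs = x ** 3 * y(x) ** 3
    assert abs(lhs - rhs) < 1e-5
print("y^{-2} = 1 + x^2 + c e^{x^2} solves y' + xy = x^3 y^3")
```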
IF of homogeneous DE
If M(x, y)dx + N(x, y)dy = 0 is a homogeneous DE, then its IF is 1/(Mx + Ny) provided Mx + Ny ≠ 0. In case Mx + Ny = 0, the IF is 1/x², 1/y², or 1/(xy).
Clairaut's DE
A Clairaut's DE is of the form

y = xy' + f(y').    (2.13)

Writing p = y', it can be rearranged as

x = (y − f(p))/p.    (2.14)

Differentiating with respect to y and using dx/dy = 1/p, we get

1/p = 1/p − (y/p²)(dp/dy) + (f(p)/p²)(dp/dy) − (f'(p)/p)(dp/dy),    (2.15)

or

[y − f(p) + pf'(p)] (dp/dy) = 0.    (2.16)
It suggests that either dp/dy = 0 or y = f(p) − pf'(p).
If dp/dy = 0, then p = c (a constant) and we get the general solution of (2.13) given by

y = cx + f(c).

In case y = f(p) − pf'(p), from equation (2.13) we get x = −f'(p). So the parametric equations x = −f'(t) and y = f(t) − tf'(t) define another solution of (2.13). It is called the singular solution of (2.13).
It should be noted that the straight lines given by the general solution y = cx + f(c) are tangential to the curve given by the singular solution x = −f'(t), y = f(t) − tf'(t). Hence, the singular solution is an envelope of the family of straight lines of the general solution.
Note: In general, a given DE need not possess a solution. For example, |y'| + |y| + 1 = 0 has no solution. The DE |y'| + |y| = 0 has only one solution, y = 0.
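For the Clairaut equation y = xy' + (y')² discussed earlier, both a line of the general family and the singular envelope y = −x²/4 can be checked numerically (an illustration, not part of the notes; the constant c and sample points are arbitrary):

```python
def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def residual(y, x):
    # residual of y - (x y' + (y')^2) for the Clairaut DE y = x y' + (y')^2
    p = deriv(y, x)
    return y(x) - (x * p + p * p)

c = 1.5
general = lambda x: c * x + c * c    # one line of the general family y = cx + c^2
singular = lambda x: -x * x / 4      # the singular solution (envelope)

for x in (-1.0, 0.5, 2.0):
    assert abs(residual(general, x)) < 1e-4
    assert abs(residual(singular, x)) < 1e-4
print("both the general line and the envelope satisfy y = x y' + (y')^2")
```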
Chapter 3
Second Order DE
Any second order DE is of the form

f(x, y, y', y'') = 0.    (3.1)

First we discuss the LDE of second order.
Let y1 and y2 be solutions of the homogeneous LDE y'' + p(x)y' + q(x)y = 0. Now substituting c1y1 + c2y2 for y into the left hand side of the homogeneous LDE, we obtain

c1(y1'' + p(x)y1' + q(x)y1) + c2(y2'' + p(x)y2' + q(x)y2) = c1·0 + c2·0 = 0.

Thus c1y1 + c2y2, a linear combination of the solutions y1 and y2, is also a solution of the homogeneous LDE.
Remark 3.1.1. The above result need not be true for a non-homogeneous or non-linear DE.
Definition 3.1.1. (Linearly Independent and Linearly Dependent Functions) Two functions f(x) and g(x) are said to be linearly independent (LI) on [a, b] if f(x) is not a constant multiple of g(x) on [a, b]. Functions which are not LI are known as linearly dependent (LD) functions.
For example, the functions x + 1 and x2 are LI on [1, 5] while the functions x2 + 1 and 3x2 + 3
are LD functions on [1, 5]. The functions sin x and cos x are LI on any interval.
Definition 3.1.2. (Wronskian) The Wronskian of two functions y1(x) and y2(x) is defined as the determinant

W(y1, y2) = y1y2' − y2y1'.    (3.2)

If y1 and y2 are solutions of the homogeneous LDE y'' + p(x)y' + q(x)y = 0, then W = y1y2' − y2y1' satisfies

dW/dx = y1y2'' − y2y1'' = −p(x)(y1y2' − y2y1'),

that is,

dW/dx + p(x)W = 0,    (3.3)

and therefore W = c e^{−∫p(x)dx}. Consequently, the Wronskian of two solutions is either identically zero (when c = 0) or never zero on the interval.
Proof. If y1 and y2 are LD, then there exists some constant c such that y1(x) = cy2(x) for all x ∈ [a, b]. It follows that W(y1, y2) = y1y2' − y2y1' = cy2y2' − cy2y2' = 0 for all x ∈ [a, b].
Conversely, let W(y1, y2) = y1y2' − y2y1' = 0 for all x ∈ [a, b]. Now there are two possibilities for y1. First, y1 = 0 for all x ∈ [a, b]. In this case, we have y1 = 0 = 0·y2 for all x ∈ [a, b], and consequently y1 and y2 are LD. Next, if y1 is not identically 0 in [a, b] and x0 is any point in [a, b] such that y1(x0) ≠ 0, then continuity of y1 ensures the existence of a subinterval [c, d] of [a, b] containing x0 such that y1 ≠ 0 for all x ∈ [c, d]. Dividing W(y1, y2) = y1y2' − y2y1' = 0 by y1², we get (y1y2' − y2y1')/y1² = (y2/y1)' = 0. So y2/y1 = k for all x ∈ [c, d], where k is some constant. This shows that y1 and y2 are LD in [c, d]. This completes the proof.
Theorem 3.1.4. (General Solution of Homogeneous LDE) If y1(x) and y2(x) are two LI solutions of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 on [a, b], then c1y1(x) + c2y2(x), where c1 and c2 are arbitrary constants, is the general solution of the homogeneous LDE.

Proof. Let y(x) be any solution of y'' + p(x)y' + q(x)y = 0. We shall prove that there exist unique constants c1 and c2 such that

c1y1(x) + c2y2(x) = y(x),    (3.4)
c1y1'(x) + c2y2'(x) = y'(x).    (3.5)

Given that y1(x) and y2(x) are two LI solutions of the given homogeneous LDE on [a, b], the Wronskian W(y1(x), y2(x)) = y1y2' − y2y1' is non-zero for all x ∈ [a, b]. This in turn implies that the system of equations (3.4) and (3.5) has a unique solution (c1, c2). This completes the proof.

For example, y'' + y = 0 has two LI solutions y1 = cos x and y2 = sin x. So its general solution is c1 cos x + c2 sin x.
Use of a known solution to find another: Suppose y1 is a known non-vanishing solution of the homogeneous LDE y'' + p(x)y' + q(x)y = 0. To find a second LI solution, substitute y2 = v(x)y1 into the DE. Using the fact that y1 is a solution, the terms in v cancel and we are left with

v''/v' = −2y1'/y1 − p(x).    (3.6)

Integrating, log v' = −2 log y1 − ∫p(x)dx, so that

v' = (1/y1²) e^{−∫p(x)dx},    (3.7)

and hence v = ∫ (1/y1²) e^{−∫p(x)dx} dx. Therefore,

y2 = y1 ∫ (1/y1²) e^{−∫p(x)dx} dx.    (3.8)
Homogeneous LDE with constant coefficients
Consider the homogeneous LDE

y'' + py' + qy = 0,    (3.9)

where p and q are constants. Let y = e^{mx} be a solution of (3.9). Then we have

(m² + pm + q)e^{mx} = 0,

and since e^{mx} ≠ 0,

m² + pm + q = 0.    (3.10)

Equation (3.10) is called the auxiliary equation (AE) and its roots are

m1 = [−p + √(p² − 4q)]/2  and  m2 = [−p − √(p² − 4q)]/2.
Now three different cases arise depending on the nature of the roots of the AE.
(i) If p² − 4q > 0, then m1 and m2 are real and distinct. So e^{m1x} and e^{m2x} are two particular solutions of (3.9). Also these are LI, being not constant multiples of each other. Therefore, the general solution of (3.9) is

y = c1e^{m1x} + c2e^{m2x}.
(ii) If p² − 4q < 0, then m1 and m2 are conjugate complex numbers. Let m1 = a + ib and m2 = a − ib. Then we get the following solutions of (3.9):

e^{(a+ib)x} = e^{ax}(cos bx + i sin bx),    (3.11)
e^{(a−ib)x} = e^{ax}(cos bx − i sin bx).    (3.12)

As we are interested in real solutions of (3.9), adding (3.11) and (3.12) and then dividing by 2, we get a real solution e^{ax} cos bx. Similarly, subtracting (3.12) from (3.11) and then dividing by 2i, we get another real solution of (3.9) given by e^{ax} sin bx.
Now, the particular solutions e^{ax} cos bx and e^{ax} sin bx are LI. So the general solution of (3.9) is

y = e^{ax}(c1 cos bx + c2 sin bx).
(iii) If p² − 4q = 0, then m1 and m2 are real and equal, with m1 = m2 = −p/2. Therefore, one solution of (3.9) is y1 = e^{−px/2}. Another LI solution of (3.9) is given by

y2 = y1 ∫ (1/y1²) e^{−∫p dx} dx = e^{−px/2} ∫ e^{px} e^{−px} dx = e^{−px/2} ∫ dx = x e^{−px/2}.

So the general solution of (3.9) is

y = e^{−px/2}(c1 + c2x).
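The three root cases can be illustrated with a numerical check (not part of the notes; the sample equations and constants are arbitrary choices covering distinct real, complex, and repeated roots):

```python
import math

def second_deriv(f, x, h=1e-4):
    # central finite-difference approximation to f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def residual(f, x, p, q):
    # residual of y'' + p y' + q y = 0
    return second_deriv(f, x) + p * deriv(f, x) + q * f(x)

# (i) distinct real roots: y'' - 3y' + 2y = 0, roots 1 and 2
y1 = lambda x: 2.0 * math.exp(x) - 1.5 * math.exp(2 * x)
# (ii) complex roots: y'' + 2y' + 5y = 0, roots -1 +/- 2i
y2 = lambda x: math.exp(-x) * (1.2 * math.cos(2 * x) + 0.7 * math.sin(2 * x))
# (iii) repeated root: y'' - 2y' + y = 0, double root 1
y3 = lambda x: math.exp(x) * (0.5 + 1.5 * x)

for x in (0.0, 0.4, 1.1):
    assert abs(residual(y1, x, -3, 2)) < 1e-4
    assert abs(residual(y2, x, 2, 5)) < 1e-4
    assert abs(residual(y3, x, -2, 1)) < 1e-4
print("all three root cases verified numerically")
```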
Ex. Solve y'' + y' + y = 0.
Sol. The AE m² + m + 1 = 0 has roots (−1 ± i√3)/2, so the general solution is

y = e^{−x/2}[c1 cos (√3/2)x + c2 sin (√3/2)x].
Ex. 3.3.4. Show that the general homogeneous LDE y'' + p(x)y' + q(x)y = 0 is reducible to a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^{3/2} is constant, with the substitution z = ∫√q(x) dx.

Sol. 3.3.4. We have z = ∫√q(x) dx and z' = √q(x). Therefore,

y' = (dy/dz)z' = (dy/dz)√q,

y'' = (d²y/dz²)q + (dy/dz)(q'/(2√q)).

Plugging the values of y' and y'' into y'' + p(x)y' + q(x)y = 0 and dividing by q, we obtain

d²y/dz² + [(q' + 2pq)/(2q^{3/2})](dy/dz) + y = 0.

This is a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^{3/2} is constant.
Ex. 3.3.5. Reduce xy'' + (x² − 1)y' + x³y = 0 to a homogeneous LDE with constant coefficients and hence solve it.

Sol. 3.3.5. The given DE can be rewritten as

y'' + (x − 1/x)y' + x²y = 0.

Comparing it with y'' + p(x)y' + q(x)y = 0, we get p(x) = x − 1/x and q(x) = x². Then

(q' + 2pq)/q^{3/2} = [2x + 2(x − 1/x)x²]/x³ = 2.

This shows that the given DE is reducible to the homogeneous LDE with constant coefficients

d²y/dz² + dy/dz + y = 0,

where z = ∫√q(x) dx = ∫x dx = x²/2. Its general solution is

y = e^{−z/2}[c1 cos (√3/2)z + c2 sin (√3/2)z].

Substituting z = x²/2, we have

y = e^{−x²/4}[c1 cos (√3/4)x² + c2 sin (√3/4)x²].
Ex. 3.3.6. Show that a DE of the form x²y'' + pxy' + qy = 0, where p, q are constants, reduces to a homogeneous LDE with constant coefficients with the transformation x = e^z. Hence, solve the equation x²y'' + 2xy' − 6y = 0.

Sol. 3.3.6. We have x = e^z. So z = log x and z' = 1/x. Therefore,

xy' = x(dy/dz)z' = dy/dz,

x²y'' = x² d/dx[(1/x)(dy/dz)] = x²[−(1/x²)(dy/dz) + (1/x²)(d²y/dz²)] = d²y/dz² − dy/dz.

For x²y'' + 2xy' − 6y = 0 this gives

d²y/dz² + dy/dz − 6y = 0,

whose AE m² + m − 6 = 0 has roots m = 2, −3. So y = c1e^{−3z} + c2e^{2z}, that is,

y = c1x^{−3} + c2x².
Remark 3.3.1. The DE of the form x²y'' + pxy' + qy = 0 is called Euler's or Cauchy's equidimensional equation. If we denote dy/dz by D_z y, then xy' = D_z y and x²y'' = D_z(D_z − 1)y. It can also be shown that x³y''' = D_z(D_z − 1)(D_z − 2)y and, in general, xⁿy⁽ⁿ⁾ = D_z(D_z − 1)⋯(D_z − n + 1)y. Thus, every Euler or Cauchy equidimensional equation reduces to a homogeneous LDE with constant coefficients with the transformation x = e^z.
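A numerical check of the Euler-equation solution (an illustration, not part of the notes; the constants and sample points x > 0 are arbitrary):

```python
def second_deriv(f, x, h=1e-4):
    # central finite-difference approximation to f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

c1, c2 = 1.3, -0.4
y = lambda x: c1 * x ** -3 + c2 * x ** 2   # general solution found above

# residual of x^2 y'' + 2x y' - 6y = 0 at sample points x > 0
for x in (0.8, 1.5, 3.0):
    r = x * x * second_deriv(y, x) + 2 * x * deriv(y, x) - 6 * y(x)
    assert abs(r) < 1e-4
print("y = c1 x^{-3} + c2 x^2 solves x^2 y'' + 2x y' - 6y = 0")
```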
Method of Undetermined Coefficients
Consider the non-homogeneous LDE

y'' + py' + qy = r(x),    (3.13)

where p, q are constants and r(x) is an exponential, sine, cosine, or polynomial function, or some combination of these. We assume yp equal to a linear combination of r(x) and all the different functions (apart from constant multiples) arising from the derivatives of r(x). Finally, substituting yp for y in (3.13), we determine the unknown coefficients in yp by equating coefficients of like functions on both sides.
Ex. Find a particular solution of y'' − y' − 2y = 4x². Also determine the general solution.
Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = 4x². Therefore, the possible non-zero derivatives of r(x) are 8x and 8. Let yp = Ax² + Bx + C be a particular solution. Substituting yp for y into the given DE, we obtain

2A − (2Ax + B) − 2(Ax² + Bx + C) = 4x²,    (3.14)

and equating coefficients of like powers of x,

A = −2, B = 2, C = −3.    (3.15)

Thus yp = −2x² + 2x − 3. The AE of the corresponding homogeneous DE is m² − m − 2 = 0, with roots 2 and −1, so the general solution is y = c1e^{2x} + c2e^{−x} − 2x² + 2x − 3.
Method of Variation of Parameters
Consider the non-homogeneous LDE y'' + p(x)y' + q(x)y = r(x), and let the general solution of the corresponding homogeneous LDE be

y = c1y1 + c2y2.    (3.18)

We seek a particular solution of the form yp = v1(x)y1 + v2(x)y2, where the unknown functions v1 and v2 are subject to the condition v1'y1 + v2'y2 = 0. Substituting yp into the DE then leads to v1'y1' + v2'y2' = r(x), and solving these two conditions for v1' and v2' gives

v1' = −y2 r(x)/W(y1, y2),  v2' = y1 r(x)/W(y1, y2).

Integrating, the particular solution is

yp = −y1 ∫ [y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx.
Ex. Find a particular solution of y'' + y = csc x.
Sol. Comparing the given equation with y'' + p(x)y' + q(x)y = r(x), we get r(x) = csc x. The general solution of the corresponding homogeneous equation y'' + y = 0 is y = c1 cos x + c2 sin x. Let y1 = cos x and y2 = sin x. Then W(y1, y2) = 1, and hence by the method of variation of parameters, the particular solution is obtained as

y = −y1 ∫ [y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx
  = −cos x ∫ sin x csc x dx + sin x ∫ cos x csc x dx
  = −x cos x + sin x log(sin x).
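The variation-of-parameters result can be verified numerically on (0, π), where sin x > 0 (an illustration, not part of the notes):

```python
import math

def second_deriv(f, x, h=1e-4):
    # central finite-difference approximation to f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

y = lambda x: -x * math.cos(x) + math.sin(x) * math.log(math.sin(x))

# residual of y'' + y should equal csc x on (0, pi)
for x in (0.5, 1.0, 2.0):
    r = second_deriv(y, x) + y(x)
    assert abs(r - 1 / math.sin(x)) < 1e-4
print("y = -x cos x + sin x log(sin x) solves y'' + y = csc x")
```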
Operator Methods
Denoting the differential operator d/dx by D, so that y' = dy/dx = Dy and y'' = d²y/dx² = D²y, the DE y'' + py' + qy = r(x) in operator form can be written as (D² + pD + q)y = r(x), or f(D)y = r(x), where f(D) = D² + pD + q. We shall denote the inverse operator of f(D) by 1/f(D). Operating 1/f(D) on both sides of the DE f(D)y = r(x), we obtain

y = [1/f(D)] r(x),

a particular solution of the DE. We can not operate 1/f(D) on r(x) in general; it depends on the forms of f(D) and r(x). So we discuss the following cases.
(i) If a is a constant and f(D) = D − a, then the particular solution is given by

y = [1/(D − a)] r(x).

This means (D − a)y = r(x), that is, dy/dx − ay = r(x), a linear DE with IF e^{−ax}. Thus,

[1/(D − a)] r(x) = e^{ax} ∫ r(x) e^{−ax} dx.

If a = 0, then (1/D) r(x) = ∫ r(x) dx. This shows that 1/D stands for the integral operator. Hence, the inverse of the differential operator is the integral operator.
Ex. Find a particular solution of y'' − y = e^x.
Sol. In operator form, the DE reads

(D − 1)(D + 1)y = e^x.

Therefore,

y = [1/((D − 1)(D + 1))] e^x
  = [1/(D + 1)] ([1/(D − 1)] e^x)
  = [1/(D + 1)] (e^x ∫ e^{−x} e^x dx)
  = [1/(D + 1)] (x e^x)
  = e^{−x} ∫ e^x · x e^x dx
  = e^x (x/2 − 1/4).
Remark: In the above example, we applied the operators 1/(D − 1) and 1/(D + 1) successively. We could, however, also apply the operators after making partial fractions, as illustrated in the following. We have

y = [1/((D − 1)(D + 1))] e^x
  = (1/2)[1/(D − 1) − 1/(D + 1)] e^x
  = (1/2)([1/(D − 1)] e^x − [1/(D + 1)] e^x)
  = (1/2)(e^x ∫ e^{−x} e^x dx − e^{−x} ∫ e^x e^x dx)
  = e^x (x/2 − 1/4),

as before.
(ii) If r(x) is a polynomial, we expand 1/f(D) in ascending powers of D, the expansion terminating once the degree of r(x) is exceeded. For example, for y'' + y = x² + x + 3 we have

y = [1/(D² + 1)](x² + x + 3)
  = (1 + D²)^{−1}(x² + x + 3)
  = (1 − D² + D⁴ − ⋯)(x² + x + 3)
  = (x² + x + 3) − 2 + 0 − ⋯
  = x² + x + 1,

a particular solution of the DE.
(iii) If r(x) = e^{kx} g(x), we use the exponential shift rule

[1/f(D)](e^{kx} g(x)) = e^{kx} [1/f(D + k)] g(x).

This follows from D(e^{kx} g(x)) = e^{kx}(D + k)g(x) and

D²(e^{kx} g(x)) = D(e^{kx}(D + k)g(x)) = e^{kx}D(D + k)g(x) + k e^{kx}(D + k)g(x) = e^{kx}(D + k)² g(x),

so that f(D)(e^{kx} g(x)) = e^{kx} f(D + k) g(x).
Ex. Find a particular solution of (D² − 3D + 2)y = x e^x.
Sol. We have

y = [1/(D² − 3D + 2)](x e^x)
  = e^x [1/((D + 1)² − 3(D + 1) + 2)] x
  = e^x [1/(D² − D)] x
  = −e^x [(1/D) · (1 − D)^{−1}] x
  = −e^x [1/D + 1 + D + D² + ⋯] x
  = −e^x (x²/2 + x + 1).
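A numerical check of the exponential-shift answer (an illustration, not part of the notes):

```python
import math

def second_deriv(f, x, h=1e-4):
    # central finite-difference approximation to f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def deriv(f, x, h=1e-6):
    # central finite-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

yp = lambda x: -math.exp(x) * (x * x / 2 + x + 1)

# residual of y'' - 3y' + 2y should equal x e^x
for x in (0.0, 0.7, 1.4):
    r = second_deriv(yp, x) - 3 * deriv(yp, x) + 2 * yp(x)
    assert abs(r - x * math.exp(x)) < 1e-4
print("yp = -e^x(x^2/2 + x + 1) solves y'' - 3y' + 2y = x e^x")
```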
Chapter 4
Qualitative Behavior of Solutions
In this chapter, we discuss the qualitative behavior of the solutions of the second order homogeneous LDE given by y'' + p(x)y' + q(x)y = 0.
Let us analyze some properties of the solutions of the DE y'' + y = 0 by making use of the following theorem.

Theorem 4.0.1. (Existence and Uniqueness of Solution): If p(x), q(x) and r(x) are continuous functions on [a, b] and x0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = r(x), y(x0) = y0, y'(x0) = y0' has a unique solution on [a, b].
Ex. 4.0.1. Let the differential equation y'' + y = 0 have two solutions s(x) and c(x) satisfying s(0) = 0, s'(0) = 1 and c(0) = 1, c'(0) = 0. Prove the following:
(i) s'(x) = c(x)
(ii) c'(x) = −s(x)
(iii) s²(x) + c²(x) = 1.

Sol. 4.0.1. Given that s(x) and c(x) are solutions of y'' + y = 0. So s''(x) + s(x) = 0 and c''(x) + c(x) = 0. It follows that (s')''(x) + s'(x) = (s'')'(x) + s'(x) = −s'(x) + s'(x) = 0, and s'(0) = 1, (s')'(0) = s''(0) = −s(0) = 0. This shows that y = s'(x) is a solution of y'' + y = 0 with y(0) = s'(0) = 1 and y'(0) = (s')'(0) = 0. But y = c(x) is the given solution of y'' + y = 0 with y(0) = c(0) = 1 and y'(0) = c'(0) = 0. So by the uniqueness theorem it follows that s'(x) = c(x).
Likewise, it can be proved that c'(x) = −s(x).
Finally, d/dx[s²(x) + c²(x)] = 2s(x)s'(x) + 2c(x)c'(x) = 2s(x)c(x) − 2c(x)s(x) = 0 since s'(x) = c(x) and c'(x) = −s(x). So s²(x) + c²(x) = k, some constant. Putting x = 0, we get k = 1 since s(0) = 0 and c(0) = 1. Hence s²(x) + c²(x) = 1.
Theorem 4.1.1. (Sturm Separation Theorem) If y1(x) and y2(x) are two LI solutions of y'' + p(x)y' + q(x)y = 0 on [a, b], then between any two successive zeros of y2 there is exactly one zero of y1, and vice versa.

Proof. Let x1 and x2 be successive zeros of y2, so that y2(x1) = y2(x2) = 0. Since y1 and y2 are LI, their Wronskian W(x) = y1y2' − y2y1' never vanishes on [a, b], and at the zeros of y2,

W(x1) = y1(x1)y2'(x1),  W(x2) = y1(x2)y2'(x2),    (4.1)
which implies that y1(x1), y2'(x1), y1(x2), y2'(x2) are all non-zero since W(x) does not vanish. Now y2 is continuous and has successive zeros x1 and x2. Therefore, if y2 is increasing at x1, then it must be decreasing at x2 and vice versa; mathematically speaking, y2'(x1) and y2'(x2) are of opposite sign. Also W(x), being a non-vanishing and continuous function, retains the same sign. So, in view of (4.1), it is easy to conclude that y1(x1) and y1(x2) must be of opposite sign. Therefore, y1 vanishes at least once between x1 and x2. Further, y1 can not vanish more than once between x1 and x2. For if it does, then applying the same argument as above, it can be proved that y2 has at least one zero between two zeros of y1 lying between x1 and x2. But this would contradict the assumption that x1 and x2 are successive zeros of y2. This completes the proof.
Ex. Two LI solutions of y 00 + y = 0 are sin x and cos x. Also, between any two successive zeros of
sin x, there is exactly one zero of cos x and vice versa.
Normal form of DE
A second order linear and homogeneous DE in the standard form is written as

y'' + p(x)y' + q(x)y = 0.    (4.2)

Substituting

y = u(x)v(x),  v(x) = e^{−(1/2)∫p(x)dx},    (4.3)

the first-derivative term drops out and (4.2) reduces to

u'' + h(x)u = 0,    (4.4)

where h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x). The DE (4.4) is referred to as the normal form of DE (4.2).
Remark: Since v = e^{−(1/2)∫p(x)dx} does not vanish and y = u(x)v(x), it follows that the solution y(x) of (4.2) and the solution u(x) of (4.4) have the same zeros.
Theorem 4.2.1. If h(x) < 0, and if u(x) is a non-trivial solution of u'' + h(x)u = 0, then u(x) has at most one zero.

Proof. Let x0 be a zero of u(x) so that u(x0) = 0. Then u'(x0) must be non-zero, otherwise u(x) would be a trivial solution of u'' + h(x)u = 0 by Theorem 3.1.2. Suppose u'(x0) > 0. Then by continuity, u'(x) is positive in some interval to the right of x0, so u(x) is an increasing function in that interval. We claim that u(x) does not vanish anywhere to the right of x0. In case u(x) vanishes at some point, say x2, to the right of x0, then u'(x) must vanish at some point x1 such that x0 < x1 < x2. Notice that x1 is a point of maximum of u(x), so u''(x1) < 0 by the second derivative test for maxima. But u''(x1) = −h(x1)u(x1) > 0 since h(x1) < 0 and u(x1) > 0, a contradiction. So u(x) can not vanish to the right of x0. Likewise, we can show that u(x) does not vanish to the left of x0. A similar argument holds when u'(x0) < 0. Hence, u(x) has at most one zero.
Theorem 4.2.2. If h(x) > 0 for all x > 0, and u(x) is a non-trivial solution of u'' + h(x)u = 0 such that ∫₁^∞ h(x)dx = ∞, then u(x) has infinitely many zeros on the positive X-axis.
Proof. Suppose u(x) has only a finite number of zeros on the positive X-axis, and let x0 > 1 be any number greater than the largest zero of u(x). Without loss of generality, assume that u(x) > 0 for all x > x0. Let g(x) = −u'(x)/u(x), so that

g'(x) = −u''(x)/u(x) + [u'(x)/u(x)]² = h(x) + g²(x).

Integrating from x0 to x, we get

g(x) − g(x0) = ∫_{x0}^{x} h(x)dx + ∫_{x0}^{x} g²(x)dx.

This gives g(x) > 0 for sufficiently large values of x since ∫₁^∞ h(x)dx = ∞. Then u(x) > 0 for all x > x0 in the relation g(x) = −u'(x)/u(x) implies that u'(x) < 0 for x > x0. Also, u''(x) = −h(x)u(x) < 0 for x > x0. Thus, to the right of x0, u(x) is positive, decreasing, and concave down, so it must eventually vanish, which is a contradiction to the assumption that x0 exceeds the largest zero of u(x). This completes the proof.
Ex. 4.2.1. Show that the zeros of the functions a sin x + b cos x and c sin x + d cos x are distinct and occur alternately whenever ad − bc ≠ 0.

Sol. 4.2.1. The functions a sin x + b cos x and c sin x + d cos x are solutions of the DE y'' + y = 0. Also, the Wronskian of a sin x + b cos x and c sin x + d cos x is non-zero if ad − bc ≠ 0, which in turn implies that the two solutions are LI. Thus, by Theorem 4.1.1, the zeros of these functions occur alternately whenever ad − bc ≠ 0.
Ex. 4.2.2. Find the normal form of Bessel's equation x²y'' + xy' + (x² − p²)y = 0, and use it to show that every non-trivial solution has infinitely many positive zeros.

Sol. 4.2.2. Comparing Bessel's equation with y'' + p(x)y' + q(x)y = 0, we obtain p(x) = 1/x and q(x) = (x² − p²)/x². Next, we evaluate

h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x) = 1 + (1 − 4p²)/(4x²).

Therefore, the normal form of Bessel's equation reads as

u'' + h(x)u = 0  or  u'' + [1 + (1 − 4p²)/(4x²)]u = 0.    (4.5)
Now we shall prove that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
Case (i) 0 ≤ p ≤ 1/2. In this case, we have

h(x) = 1 + (1 − 4p²)/(4x²) = 1 + (1/x²)(1/2 + p)(1/2 − p) > 0 for all x > 0.

Also, we have

∫₁^∞ h(x)dx = ∫₁^∞ [1 + (1 − 4p²)/(4x²)] dx = ∞.

So by Theorem 4.2.2, every non-trivial solution u(x) has infinitely many positive zeros.
Case (ii) p > 1/2. In this case, we have

h(x) = 1 + (1 − 4p²)/(4x²) = (1/x²)[x + √(4p² − 1)/2][x − √(4p² − 1)/2] > 0 provided x > √(4p² − 1)/2.

Now let x0 = √(4p² − 1)/2, and substitute x = t + x0 into (4.5) to get

d²u/dt² + h1(t)u = 0,    (4.6)

where h1(t) = 1 + (1 − 4p²)/(4(t + x0)²). Then h1(t) > 0 for all t > 0, and

∫₁^∞ h1(t)dt = ∫₁^∞ [1 + (1 − 4p²)/(4(t + x0)²)] dt = ∞.

So by Theorem 4.2.2, every non-trivial solution u(t) of (4.6) has infinitely many positive zeros. Since x = t + x0, the zeros of (4.5) and (4.6) differ only by x0. Also x0 is a positive number. Therefore, every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
From case (i) and case (ii), we conclude that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
Ex. 4.2.3. The hypothesis of Theorem 4.2.2 is false for the Euler equation x²y'' + ky = 0, but the conclusion is sometimes true and sometimes false, depending on the magnitude of the constant k. Show that every non-trivial solution has infinitely many positive zeros if k > 1/4, and only a finite number if k ≤ 1/4.
Sol. 4.2.3. Comparing the given equation (after dividing by x²) with y'' + h(x)y = 0, we get h(x) = k/x². Therefore,

∫₁^∞ h(x)dx = [−k/x]_{x=1}^{x=∞} = k,

which is a finite number, so the hypothesis of Theorem 4.2.2 fails. Substituting y = x^m into x²y'' + ky = 0 gives m(m − 1) + k = 0, whose roots are

m1 = 1/2 + √(1/4 − k),  m2 = 1/2 − √(1/4 − k).

If k ≤ 1/4, the roots are real and the non-trivial solutions c1x^{m1} + c2x^{m2} (or x^{1/2}(c1 + c2 log x) when k = 1/4) have only a finite number of positive zeros. If k > 1/4, the roots are 1/2 ± i√(k − 1/4), and every non-trivial solution x^{1/2}[c1 cos(√(k − 1/4) log x) + c2 sin(√(k − 1/4) log x)] has infinitely many positive zeros.
Theorem 4.2.3. A non-trivial solution u(x) of u'' + h(x)u = 0, with h(x) continuous, can have only finitely many zeros in any closed interval [a, b].

Proof. If u(x) had infinitely many zeros xn in [a, b], they would have a limit point x0 ∈ [a, b] with u(x0) = 0 and

u'(x0) = lim_{n→∞} [u(xn) − u(x0)]/(xn − x0) = 0.

By Theorem 3.1.2, it follows that u(x) is a trivial solution of u'' + h(x)u = 0, which is not true as per the given hypothesis. Hence, u(x) can not have infinitely many zeros in the interval [a, b].
Theorem 4.2.4. (Sturm Comparison Theorem) Let y(x) and z(x) be non-trivial solutions of
y'' + q(x)y = 0 and z'' + r(x)z = 0 respectively, where q(x) and r(x) are positive functions such that
q(x) > r(x). Then y(x) vanishes at least once between any two successive zeros of z(x).
Proof. Let x1 and x2 be two successive zeros of z(x) with x1 < x2 . Let us assume that y(x)
does not vanish on the interval (x1 , x2 ). We shall prove the theorem by deducing a contradiction.
Without loss of generality, we assume that y(x) and z(x) both are positive on (x1 , x2 ), for either
function can be replaced by its negative if necessary. Now, denoting the Wronskian W (y, z) by
W(x), we have

    W(x) = y(x)z'(x) - z(x)y'(x).                                                 (4.7)

Differentiating and using the two DEs, W'(x) = y(x)z''(x) - z(x)y''(x) = [q(x) - r(x)] y(x) z(x) > 0
on (x1, x2). Integrating from x1 to x2, we get

    W(x2) > W(x1).                                                                (4.8)

On the other hand, since z(x1) = z(x2) = 0, (4.7) gives

    W(x1) = y(x1)z'(x1),    W(x2) = y(x2)z'(x2).                                  (4.9)
Now y(x) being continuous and positive on (x1, x2), we have y(x1) ≥ 0 and y(x2) ≥ 0. Also
z'(x1) > 0 and z'(x2) < 0 since z(x) is continuous and positive on (x1, x2), and x1, x2 are successive
zeros of z(x). Hence, (4.9) leads to

    W(x1) ≥ 0 and W(x2) ≤ 0, so that W(x2) ≤ W(x1).                               (4.10)
We see that (4.8) and (4.10) are contradictory. This completes the proof.
Ex. Solutions of y'' + 4y = 0 oscillate more rapidly than those of y'' + y = 0.
Ex. 4.2.4. Use Sturm Comparison Theorem to solve example 4.2.2.
Sol. 4.2.4. In example 4.2.2, we have

    lim_{x→∞} h(x) = lim_{x→∞} [1 + (1 - 4p^2)/(4x^2)] = 1.

So given ε > 0, there exists δ > 0 such that h(x) ∈ (1 - ε, 1 + ε) for all x > δ. Choosing
ε = 1/4, we have h(x) > 1/4 for all x > δ. So by Theorem 4.2.4, every solution of u'' + h(x)u = 0
vanishes at least once between any two zeros of solutions of v'' + (1/4)v = 0. Also every non-trivial
solution of v'' + (1/4)v = 0 has infinitely many zeros. It follows that every non-trivial solution of
u'' + h(x)u = 0 has infinitely many zeros.
Ex. 4.2.5. Let yp(x) be a non-trivial solution of Bessel's equation. Show that every interval of
length π contains at least one zero of yp(x) if 0 ≤ p < 1/2, and at most one zero if p > 1/2.

Sol. 4.2.5. Let [x0, x0 + π] be any interval of length π. The non-trivial solution sin(x - x0) of
the DE v'' + v = 0 vanishes at the end points x0 and x0 + π. Also, for 0 ≤ p < 1/2, yp(x) vanishes
at least once between two successive zeros of any non-trivial solution of v'' + v = 0, by the Sturm
comparison theorem, since h(x) = 1 + (1 - 4p^2)/(4x^2) > 1. So [x0, x0 + π] contains at least one
zero of yp(x).

Next, if p > 1/2, then again by the Sturm comparison theorem at least one zero of sin(x - x0) lies
between two successive zeros of yp(x). Now, the interval [x0, x0 + π] can contain at most one zero
of yp(x). For, if there are two zeros of yp(x) in the interval [x0, x0 + π], then sin(x - x0) must
vanish at some point between x0 and x0 + π, which is not possible.
Chapter 5
Power Series Solutions and Special
Functions
Some Basics of Power Series
An infinite series of the form

    ∑_{n=0}^∞ an (x - x0)^n = a0 + a1(x - x0) + a2(x - x0)^2 + ⋯                  (5.1)

is called a power series in x - x0. It is said to converge at a point x if the limit of its partial sums

    lim_{m→∞} ∑_{n=0}^m an (x - x0)^n

exists, and in that case the sum of the series is defined as the value of the limit. Obviously the power
series (5.1) converges at x = x0, and in this case its sum is a0. If R is the largest positive real number
such that the power series (5.1) converges for all x with |x - x0| < R, then R is called the radius of
convergence of the power series, and (x0 - R, x0 + R) is called the interval of convergence. If the
power series converges only for x = x0, then R = 0. If the power series converges for every real value
of x, then R = ∞.
We can derive a formula for R by using the ratio test. For, by the ratio test the power series (5.1)
converges if

    lim_{n→∞} |a_{n+1}/a_n| |x - x0| < 1, that is, if |x - x0| < R where R = lim_{n→∞} |a_n/a_{n+1}|.

Similarly, by Cauchy's root test the power series (5.1) converges if lim_{n→∞} |a_n|^{1/n} |x - x0| < 1,
that is, if |x - x0| < R where R = lim_{n→∞} |a_n|^{-1/n}.
Ex. The power series ∑_{n=0}^∞ x^n/n! converges for every real x, so its radius of convergence is R = ∞.
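The ratio-test formula R = lim |a_n/a_{n+1}| is easy to try numerically. In the sketch below (not part of the notes; the truncation index 50 is an arbitrary choice), the ratio for ∑ x^n/n! equals n + 1 and grows without bound (R = ∞), while for ∑ x^n it is identically 1.

```python
import math

def ratio_R(a, n):
    """Ratio-test estimate of the radius of convergence: |a_n / a_{n+1}|."""
    return abs(a(n) / a(n + 1))

R_exp = ratio_R(lambda n: 1.0 / math.factorial(n), 50)  # approx 51; grows with n, so R = infinity
R_geo = ratio_R(lambda n: 1.0, 50)                      # exactly 1, so R = 1
```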
Now suppose that the power series (5.1) converges to f(x) for |x - x0| < R, that is,

    f(x) = ∑_{n=0}^∞ an (x - x0)^n = a0 + a1(x - x0) + a2(x - x0)^2 + a3(x - x0)^3 + ⋯    (5.2)

Then it can be proved that f(x) possesses derivatives of all orders in |x - x0| < R. Also, the series
can be differentiated termwise in the sense that

    f'(x) = ∑_{n=1}^∞ n an (x - x0)^{n-1},

    f''(x) = ∑_{n=2}^∞ n(n - 1) an (x - x0)^{n-2},

and so on, and each of the resulting series converges for |x - x0| < R. The successive differentiated
series suggest that an = f^(n)(x0)/n!. Also, the power series (5.2) can be integrated termwise provided
the limits of integration lie inside the interval of convergence.
If we have another power series ∑_{n=0}^∞ bn (x - x0)^n converging to g(x) for |x - x0| < R, that is,

    g(x) = ∑_{n=0}^∞ bn (x - x0)^n = b0 + b1(x - x0) + b2(x - x0)^2 + b3(x - x0)^3 + ⋯    (5.3)

then (5.2) and (5.3) can be added or subtracted termwise, that is,

    f(x) ± g(x) = ∑_{n=0}^∞ (an ± bn)(x - x0)^n.
Recall Taylor's formula:

    f(x) = f(x0) + f'(x0)(x - x0) + [f''(x0)/2!](x - x0)^2 + ⋯ + [f^(n)(x0)/n!](x - x0)^n + Rn,

where Rn = [f^(n+1)(ξ)/(n + 1)!](x - x0)^{n+1}, and ξ is some number between x0 and x. Obviously
the power series

    ∑_{n=0}^∞ [f^(n)(x0)/n!](x - x0)^n

converges to f(x) for those values of x ∈ (x0 - R, x0 + R) for which Rn → 0
as n → ∞. Thus for a given function f(x), Taylor's formula enables us to find the power series
that converges to f (x). On the other hand, if a convergent power series is given, then it is not
always possible to find/recognize its sum function. In fact, very few power series have sums that
are elementary functions.
A function f(x) is said to be analytic at x0 if f(x) equals the sum of the power series
∑_{n=0}^∞ [f^(n)(x0)/n!](x - x0)^n at each point x in the interval
of convergence (x0 - R, x0 + R) of this power series.
If the power series

    y = ∑_{n=0}^∞ an x^n = a0 + a1 x + a2 x^2 + a3 x^3 + ⋯                        (5.4)

is a solution of the DE y' - y = 0, then termwise differentiation gives

    y' = ∑_{n=1}^∞ n an x^{n-1}.                                                  (5.5)

Substituting (5.4) and (5.5) into the DE, we get

    ∑_{n=1}^∞ n an x^{n-1} - ∑_{n=0}^∞ an x^n = 0,                                (5.6)

which must be an identity in x since (5.4) is, by assumption, a solution of the given DE. So
coefficients of all powers of x must be zero. Equating to 0 the coefficient of x^{n-1}, we obtain

    n an - a_{n-1} = 0    or    an = a_{n-1}/n.

Substituting n = 1, 2, 3, ...., we get
    a1 = a0,
    a2 = a1/2 = a0/2!,
    a3 = a2/3 = a0/3!,

and so on. Plugging the values of a1, a2, ..... into (5.4), we get

    y = a0 + a0 x + (1/2!) a0 x^2 + (1/3!) a0 x^3 + ⋯ = a0 [1 + x + x^2/2! + x^3/3! + ⋯].
Let us examine the validity of this solution. We know that the power series 1 + x + x^2/2! + x^3/3! + ⋯
converges for all x. It implies that the term by term differentiation carried out in (5.5)
is valid for all x. Similarly, the difference of the two series (5.4) and (5.5) considered in (5.6) is
valid for all x. It follows that y = a0 [1 + x + x^2/2! + x^3/3! + ⋯] is a valid solution of the given
DE for all x. Also, we know that e^x = 1 + x + x^2/2! + x^3/3! + ⋯. So y = a0 e^x is the solution
of the DE y' - y = 0, as expected.
Next, consider the DE

    y'' + p(x)y' + q(x)y = 0.                                                     (5.7)

If the functions p(x) and q(x) are analytic at x = x0, then x0 is called an ordinary point of the DE
(5.7). If p(x) and/or q(x) fail to be analytic at x0, but (x - x0)p(x) and (x - x0)^2 q(x) are analytic
at x0, then we say that x0 is a regular singular point of (5.7); otherwise we call x0 an irregular
singular point of (5.7). For example, x = 0 is a regular singular point of the DE x^2 y'' + x y' + 2y = 0,
and every non-zero real number is an ordinary point of the same DE. Further, x = 0 is an irregular
singular point of the DE x^3 y'' + x y' + y = 0.
The following theorem gives a criterion for the existence of a power series solution near an
ordinary point.
Theorem 5.2.1. If a0 , a1 are arbitrary constants, and x0 is an ordinary point of a DE y 00 +
p(x)y 0 + q(x)y = 0, then there exists a unique solution y(x) of the DE that is analytic at x0 such
that y(x0 ) = a0 and y 0 (x0 ) = a1 . Furthermore, the power series expansion of y(x) is valid in
|x x0 | < R provided the power series expansions of p(x) and q(x) are valid in this interval.
The above theorem asserts that there exists a unique power series solution of the form

    y(x) = ∑_{n=0}^∞ an (x - x0)^n = a0 + a1(x - x0) + a2(x - x0)^2 + a3(x - x0)^3 + ⋯
about the ordinary point x0 satisfying the initial conditions y(x0 ) = a0 and y 0 (x0 ) = a1 . The
constants a2 , a3 and so on are determined in terms of a0 or a1 as illustrated in the following
examples.
Ex. 5.2.2. Find power series solution of y'' - y = 0 about x = 0.

Sol. 5.2.2. Here p(x) = 0 and q(x) = -1, both are analytic at x = 0. So x = 0 is an ordinary
point of the given DE. So there exists a power series solution

    y = ∑_{n=0}^∞ an x^n = a0 + a1 x + a2 x^2 + a3 x^3 + ⋯

Substituting it into the given DE, we get

    ∑_{n=2}^∞ an n(n - 1) x^{n-2} - ∑_{n=0}^∞ an x^n = 0.                         (5.8)
Equating to zero the coefficient of x^{n-2}, we obtain an = a_{n-2}/[n(n - 1)]. Therefore,

    y = a0 + a1 x + (1/2!) a0 x^2 + (1/3!) a1 x^3 + (1/4!) a0 x^4 + (1/5!) a1 x^5 + ⋯
      = a0 [1 + x^2/2! + x^4/4! + ⋯] + a1 [x + x^3/3! + x^5/5! + ⋯],

the required power series solution of the given DE. We know that (e^x + e^{-x})/2 = 1 + x^2/2! + x^4/4! + ⋯
and (e^x - e^{-x})/2 = x + x^3/3! + x^5/5! + ⋯. So the power series solution becomes
y = c1 e^x + c2 e^{-x}, where c1 = (a0 + a1)/2 and c2 = (a0 - a1)/2, which is the same solution of
y'' - y = 0 as we obtain by the exact method.
Ex. 5.2.3. Find power series solution of (1 + x^2)y'' + x y' - y = 0 about x = 0.

Sol. 5.2.3. Here x = 0 is an ordinary point of the given DE. So there exists a power series solution

    y = ∑_{n=0}^∞ an x^n = a0 + a1 x + a2 x^2 + a3 x^3 + ⋯                        (5.9)

Substituting the power series solution (5.9) into the given DE, we get

    (1 + x^2) ∑_{n=2}^∞ an n(n - 1) x^{n-2} + x ∑_{n=1}^∞ an n x^{n-1} - ∑_{n=0}^∞ an x^n = 0,

or

    ∑_{n=2}^∞ an n(n - 1) x^{n-2} + ∑_{n=0}^∞ an [n(n - 1) + n - 1] x^n = 0,

or

    ∑_{n=2}^∞ an n(n - 1) x^{n-2} + ∑_{n=0}^∞ an (n - 1)(n + 1) x^n = 0.

Equating to zero the coefficient of x^{n-2}, we obtain n(n - 1) an + (n - 3)(n - 1) a_{n-2} = 0, that is,

    an = [(3 - n)/n] a_{n-2} provided n ≠ 1.

Substituting n = 2, 3, ...., we get
    a2 = (1/2) a0,
    a3 = 0,
    a4 = -(1/4) a2 = -(1/(4·2)) a0,
    a5 = 0,
    a6 = -(3/6) a4 = (3/(6·4·2)) a0,

and so on.
Plugging the values of a2, a3, a4, a5, a6 and so on into (5.9), we get

    y = a0 + a1 x + (1/2) a0 x^2 + 0·x^3 - (1/(4·2)) a0 x^4 + 0·x^5 + (3/(6·4·2)) a0 x^6 + ⋯
      = a0 [1 + (1/2) x^2 - (1/(4·2)) x^4 + (3/(6·4·2)) x^6 - ⋯] + a1 x,

the required power series solution of the given differential equation.
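Iterating an = [(3 - n)/n] a_{n-2} gives a concrete check of this solution. In fact the a0-series is the binomial series of (1 + x^2)^{1/2}, an identification the notes do not make but which the numerical comparison below supports for |x| < 1:

```python
def ex523(x, a0=1.0, a1=0.0, terms=40):
    """Partial sum of the series solution of (1 + x^2) y'' + x y' - y = 0,
    built from the recurrence a_n = (3 - n)/n * a_{n-2}."""
    coeffs = [a0, a1]
    for n in range(2, terms):
        coeffs.append((3.0 - n) / n * coeffs[n - 2])
    return sum(c * x ** n for n, c in enumerate(coeffs))
```

With a0 = 1, a1 = 0 the partial sums track (1 + x^2)^{1/2}; with a0 = 0 the solution reduces to a1 x exactly.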
The following theorem by Frobenius gives a criterion for the existence of a power series
solution near a regular singular point.
Theorem 5.2.2. If x0 is a regular singular point of a DE y'' + p(x)y' + q(x)y = 0, then there exists
at least one power series solution of the form y = ∑_{n=0}^∞ an (x - x0)^{n+r} (a0 ≠ 0), where r is some root
of the quadratic equation (known as the indicial equation) obtained by equating to zero the coefficient
of the lowest degree term in x of the equation that arises on substituting y = ∑_{n=0}^∞ an (x - x0)^{n+r}
into the DE.

Ex. 5.2.4. Find Frobenius series solutions of

    2x^2 y'' + x y' + (x^2 - 1)y = 0                                              (5.10)

about x = 0. Here x = 0 is a regular singular point. Substituting y = ∑_{n=0}^∞ an x^{n+r} into (5.10), we get
    ∑_{n=0}^∞ an (n + r - 1)(2n + 2r + 1) x^{n+r} + ∑_{n=0}^∞ an x^{n+r+2} = 0.   (5.11)

Equating to zero the coefficient of the lowest degree term x^r (n = 0, a0 ≠ 0), the indicial equation
is (r - 1)(2r + 1) = 0, with roots r = 1 and r = -1/2. Comparing the coefficients of x^{n+r}, we get

    an = -a_{n-2} / [(n + r - 1)(2n + 2r + 1)],

where n = 2, 3, 4.... (and a1 = 0). For r = 1, we have

    an = -a_{n-2} / [n(2n + 3)],
so that

    a2 = -(1/(2·7)) a0,
    a3 = -(1/(3·9)) a1 = 0,
    a4 = -(1/(4·11)) a2 = (1/(2·7·4·11)) a0, .......

For r = -1/2, we have

    an = -a_{n-2} / [n(2n - 3)],

so that

    a2 = -(1/(2·1)) a0,
    a3 = -(1/(3·3)) a1 = 0,
    a4 = -(1/(4·5)) a2 = (1/(2·1·4·5)) a0, .......                                (5.12)
Ex. 5.2.5. Find Frobenius series solutions of

    x y'' + y' - x y = 0

about the regular singular point x = 0. Substituting y = ∑_{n=0}^∞ an x^{n+r} into the DE, we get

    ∑_{n=0}^∞ an (n + r)^2 x^{n+r-1} - ∑_{n=0}^∞ an x^{n+r+1} = 0.                (5.13)

The indicial equation is r^2 = 0, a double root. Comparing coefficients, a1 = 0 and

    an = a_{n-2} / (n + r)^2,

where n = 2, 3, 4....
Therefore, we have

    a2 = a0/(r + 2)^2,
    a3 = a1/(r + 3)^2 = 0,
    a4 = a2/(r + 4)^2 = a0/[(r + 2)^2 (r + 4)^2], .......

Keeping r as a parameter, write

    y(x, r) = a0 x^r [1 + x^2/(r + 2)^2 + x^4/((r + 2)^2 (r + 4)^2) + ⋯].         (5.14)

Then

    x y'' + y' - x y = a0 r^2 x^{r-1}.                                            (5.15)
Note that substitution of (5.14) into the given DE gives only the lowest degree term in x. Obviously
(y)_{r=0} = y1 satisfies (5.15), and hence the given DE. Now differentiating (5.15) partially w.r.t. r,
we obtain

    (xD^2 + D - x) ∂y/∂r = a0 (2r x^{r-1} + r^2 x^{r-1} ln x).                    (5.16)

This shows that (∂y/∂r)_{r=0} is a solution of the given DE. Thus, the second LI solution of the
given DE is

    y2 = (∂y/∂r)_{r=0} = y1 ln x - a0 [x^2/4 + 3x^4/128 + ⋯].
Ex. 5.2.6. Find power series solutions of x(1 + x)y 00 + 3xy 0 + y = 0 about x = 0.
Sol. 5.2.6. Here x = 0 is a regular singular point of the given DE. So there exists at least one
Frobenius solution of the form

    y = ∑_{n=0}^∞ an x^{n+r}.                                                     (5.17)

Substituting (5.17) into the given DE, we get

    ∑_{n=0}^∞ an (n + r)(n + r - 1) x^{n+r-1} + ∑_{n=0}^∞ an (n + r + 1)^2 x^{n+r} = 0. (5.18)

The indicial equation is r(r - 1) = 0, with roots r = 1 and r = 0. Comparing the coefficients of
x^{n+r-1}, we get

    an = -[(n + r)/(n + r - 1)] a_{n-1},

where n = 1, 2, 3, 4.... Therefore, we have

    a1 = -[(r + 1)/r] a0,    a2 = [(r + 2)/r] a0,    a3 = -[(r + 3)/r] a0, .......

For r = 1, we get a1 = -2a0, a2 = 3a0, a3 = -4a0, ... So the Frobenius series solution is

    y = x^r (a0 + a1 x + a2 x^2 + a3 x^3 + ⋯) = a0 (x - 2x^2 + 3x^3 - 4x^4 + ⋯).  (5.19)

Now we find the other LI solution. Since a1, a2, ...... are not defined at r = 0, we replace a0 by
b0 r in (5.17). Thus the modified series solution reads as

    y = x^r (b0 r + a1 x + a2 x^2 + a3 x^3 + ⋯),

which on substitution into the given DE yields

    x(1 + x)y'' + 3xy' + y = b0 r^2 (r - 1) x^{r-1}.                              (5.20)
Obviously (y)_{r=0} and (y)_{r=1} satisfy the given DE. But we find that the solutions

    (y)_{r=0} = -b0 (x - 2x^2 + 3x^3 - ⋯),
    (y)_{r=1} = b0 (x - 2x^2 + 3x^3 - ⋯),

are not LI from the Frobenius solution (5.19). So we partially differentiate (5.20) with respect to r,
and find that (∂y/∂r)_{r=0} is a solution of the given DE. Thus the other LI solution of the given
DE reads as

    y2 = (∂y/∂r)_{r=0} = y1 ln x + b0 (1 - x + x^2 - x^3 + ⋯).
    y1 = 1 - (3/10) x^2 + (3/56) x^4 - ⋯,
    y2 = a0 x^{-1}.
Ex. 5.2.8. Find power series solutions of x^2 y'' + 6x y' + (x^2 + 6)y = 0 about x = 0.

Sol. 5.2.8. Here x = 0 is a regular singular point. The indicial equation is r(r - 1) + 6r + 6 =
(r + 2)(r + 3) = 0, with roots r = -2, -3. For r = -2, the recurrence relation is

    an = -a_{n-2} / [n(n + 1)].

For r = -3, we find that a1 is arbitrary. In this case, r = -3 provides the general solution
y = a0 y1 + a1 y2, where

    y1 = x^{-3} [1 - (1/2!) x^2 + (1/4!) x^4 - ⋯],                                (5.21)

    y2 = x^{-3} [x - (1/3!) x^3 + (1/5!) x^5 - ⋯].                                (5.22)

Note that corresponding to the larger root r = -2, you will get the Frobenius solution, a
constant multiple of y2. (Find and see!)
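The two bracketed series above are the expansions of cos x and sin x, so y1 = x^{-3} cos x and y2 = x^{-3} sin x, an identification not spelled out in the notes. Assuming that reading is right, a central-difference residual check (a sketch; the step size and sample points are arbitrary) confirms that both closed forms solve the DE:

```python
import math

def residual(y, x, h=1e-5):
    """Central-difference residual of x^2 y'' + 6x y' + (x^2 + 6) y at x."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return x * x * d2 + 6 * x * d1 + (x * x + 6) * y(x)

y1 = lambda x: math.cos(x) / x ** 3   # x^{-3} (1 - x^2/2! + x^4/4! - ...)
y2 = lambda x: math.sin(x) / x ** 3   # x^{-3} (x - x^3/3! + x^5/5! - ...)
```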
Next, consider the hypergeometric equation

    x(1 - x)y'' + [c - (a + b + 1)x] y' - ab y = 0,

where a, b, c are constants, for which x = 0 is a regular singular point. Substituting
y = ∑_{n=0}^∞ an x^{n+r} into it, we get

    ∑_{n=0}^∞ an (n + r)(c + n + r - 1) x^{n+r-1} - ∑_{n=0}^∞ an (n + r + a)(n + r + b) x^{n+r} = 0. (5.23)

Therefore, roots of the indicial equation are r = 0, 1 - c. Now comparing the coefficient of x^{n+r-1},
we have the recurrence relation

    an (n + r)(c + n + r - 1) - a_{n-1} (n - 1 + r + a)(n - 1 + r + b) = 0,

or

    an = [(a + n - 1 + r)(b + n - 1 + r) / ((n + r)(c + n - 1 + r))] a_{n-1},
where n = 1, 2, 3, 4....
For r = 0, we have

    an = [(a + n - 1)(b + n - 1) / (n(c + n - 1))] a_{n-1},

so that

    a1 = [ab/(1·c)] a0,
    a2 = [(a + 1)(b + 1)/(2(c + 1))] a1 = [a(a + 1)b(b + 1)/(1·2·c(c + 1))] a0, .......

The resulting solution (with a0 = 1) is the hypergeometric function

    F(a, b, c, x) = ∑_{n=0}^∞ [a(a + 1)...(a + n - 1) b(b + 1)...(b + n - 1) / (n! c(c + 1)...(c + n - 1))] x^n.
Near x = 0 (for c not an integer), the general solution of the hypergeometric equation is

    y = c1 F(a, b, c, x) + c2 x^{1-c} F(a - c + 1, b - c + 1, 2 - c, x).          (5.25)

More generally, consider a DE of the form

    (x - A)(x - B)y'' + (Cx + D)y' + Ey = 0,                                      (5.26)

with A ≠ B. The change of variable

    t = (x - A)/(B - A)

transforms (5.26) into

    t(1 - t)y'' + (F - Gt)y' - Hy = 0,                                            (5.27)
where F, G, H are certain combinations of the constants in (5.26). The primes in (5.27) denote the
derivatives with respect to t. This is a hypergeometric equation with a, b and c defined by F = c,
G = a + b + 1 and H = ab. Therefore, (5.27) can be solved in terms of hypergeometric functions
near t = 0 and t = 1. It follows that (5.26) can be solved in terms of the same functions near x = A
and x = B.
Remark 5.3.2. Most of the familiar functions in elementary analysis can be expressed in terms
of hypergeometric function.
(i) (1 + x)^n = F(-n, b, b, -x),
(ii) log(1 + x) = x F(1, 1, 2, -x),
(iii) sin^{-1} x = x F(1/2, 1/2, 3/2, x^2).
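The series defining F(a, b, c, x) is easy to sum term by term, which lets one spot-check the identities in Remark 5.3.2 numerically. A sketch (not from the notes; 60 terms are ample for |x| ≤ 1/2, and the constants 0.4054... and 0.5235... are ln 1.5 and arcsin 0.5):

```python
def F(a, b, c, x, terms=60):
    """Partial sum of the hypergeometric series
    F(a, b, c, x) = sum_n [(a)_n (b)_n / (n! (c)_n)] x^n."""
    total, coef = 1.0, 1.0
    for n in range(terms):
        coef *= (a + n) * (b + n) / ((n + 1.0) * (c + n))  # next Pochhammer ratio
        total += coef * x ** (n + 1)
    return total
```

Note that for a = -n a negative integer the coefficient eventually hits the factor (a + n) = 0, so the series terminates: identity (i) is a polynomial identity.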
(5.28)
Here A = -3 and B = 2. Therefore,

    t = (x - A)/(B - A) = (x + 3)/(2 - (-3)) = (x + 3)/5,    i.e.,    x = 5t - 3.
Chapter 6
Fourier Series
Introduction
We are familiar with the power series representation of a function f(x). The representation of f(x)
in the form of a trigonometric series given by

    f(x) = a0/2 + ∑_{n=1}^∞ (an cos nx + bn sin nx),                              (6.1)

is required in the treatment of many physical problems such as heat conduction, electromagnetic
waves, mechanical vibrations etc. An important advantage of the series (6.1) over a usual power
series in x is that it can represent f(x) even if f(x) possesses many discontinuities (e.g. the
discontinuous impulse function in electrical engineering). On the other hand, a power series can
represent f(x) only when f(x) is continuous and possesses derivatives of all orders.
Let m and n be positive integers such that m ≠ n. Then we have

    ∫_{-π}^{π} cos mx cos nx dx = 0,    ∫_{-π}^{π} sin mx sin nx dx = 0,    ∫_{-π}^{π} cos mx sin nx dx = 0,

    ∫_{-π}^{π} sin nx dx = 0,    ∫_{-π}^{π} cos nx dx = 0.

Further,

    ∫_{-π}^{π} cos^2 nx dx = π = ∫_{-π}^{π} sin^2 nx dx.
Now, we do some classical calculations that were first done by Euler. We assume that the function
f(x) in (6.1) is defined on [-π, π]. Also, we assume that the series in (6.1) is uniformly convergent
so that term by term integration is possible.
Integrating both sides of (6.1) over [-π, π], we get

    a0 = (1/π) ∫_{-π}^{π} f(x) dx.                                                (6.2)

Multiplying both sides of (6.1) by cos nx, and then integrating over [-π, π], we get

    an = (1/π) ∫_{-π}^{π} f(x) cos nx dx.                                         (6.3)
Note that this formula, for n = 0, gives the value of a0 as given in (6.2). That is why, a0 is divided
by 2 in (6.1).
Next, multiplying both sides of (6.1) by sin nx, and then integrating over [-π, π], we get

    bn = (1/π) ∫_{-π}^{π} f(x) sin nx dx.                                         (6.4)
These calculations show that the coefficients an and bn can be obtained from the sum f (x) in
(6.1) by means of the formulas (6.3) and (6.4) provided the series (6.1) is uniformly convergent.
However, this situation is too restricted to be of much practical use because first we have to ensure
that the given function f (x) admits an expansion as a uniformly convergent trigonometric series.
For this reason, we set aside the idea of finding the coefficients an and bn in the expansion (6.1)
that may or may not exist. Instead we use formulas (6.3) and (6.4) to define some numbers an and
bn . Then we use these to construct a series of the form (6.1). When we follow this approach, the
numbers an and bn are called the Fourier coefficients of the function f (x) and the series (6.1) is
called Fourier series of f (x). Obviously, the function f (x) must be integrable in order to construct
its Fourier series. Note that a discontinuous function may be integrable.
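The defining formulas (6.3) and (6.4) can be evaluated numerically for any integrable f. The sketch below (not part of the notes; the midpoint rule and sample count are arbitrary choices) computes approximate Fourier coefficients and is used later to confirm, e.g., that f(x) = x has b1 = 2, b2 = -1:

```python
import math

def fourier_coeffs(f, n, samples=20000):
    """Midpoint-rule approximation of
    a_n, b_n = (1/pi) * integral over [-pi, pi] of f(x) cos(nx), f(x) sin(nx)."""
    h = 2 * math.pi / samples
    a = b = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        a += f(x) * math.cos(n * x) * h
        b += f(x) * math.sin(n * x) * h
    return a / math.pi, b / math.pi
```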
We hope that the Fourier series of f(x) will converge to f(x) so that (6.1) is a valid representation
or expansion of f(x). However, this is not always true. There exist integrable functions
whose Fourier series diverge at one or more points. That is why some advanced texts on Fourier
series write (6.1) in the form

    f(x) ~ a0/2 + ∑_{n=1}^∞ (an cos nx + bn sin nx),                              (6.5)

where the sign ~ is used in order to emphasize that the Fourier series on the right is not necessarily
convergent to f(x).
Just as a Fourier series need not converge, a convergent trigonometric series need not be the
Fourier series of some function. For example, it is known that the trigonometric series

    ∑_{n=1}^∞ sin nx / ln(1 + n)

converges for all x. But it is not a Fourier series since 1/ln(1 + n) can not be obtained from formula
(6.4) for any choice of integrable function f(x). In fact, this series fails to be a Fourier series because
it fails to satisfy a remarkable theorem, which states that the term by term integral of any Fourier
series (whether convergent or not) must converge for all x.
Thus, the fundamental problem of the subject of Fourier series is clearly to discover the properties of an integrable function that guarantee that its Fourier series not only converges but also
converges to the function. Before this, let us see some examples.
Ex. 6.1.1. Find Fourier series of the function f(x) = x, -π ≤ x ≤ π.
Sol. 6.1.1. We find

    a0 = (1/π) ∫_{-π}^{π} f(x) dx = 0,

    an = (1/π) ∫_{-π}^{π} x cos nx dx = 0,

    bn = (1/π) ∫_{-π}^{π} x sin nx dx = -(2/n)(-1)^n.

So the Fourier series of f(x) = x is

    x = 2 [sin x - (1/2) sin 2x + (1/3) sin 3x - ⋯] = ∑_{n=1}^∞ [2(-1)^{n+1}/n] sin nx.   (6.6)
Here the equals sign is an expression of hope rather than definite knowledge. It can be proved that
the Fourier series in (6.6) converges to x in -π < x < π. At x = -π or x = π, the Fourier series
converges to 0, and hence does not converge to f(x) = x at x = -π or x = π. Further, each term
on the right hand side in (6.6) has period 2π. So the entire expression on the right hand side of (6.6)
has period 2π. It follows that the Fourier series in (6.6) does not converge to f(x) = x outside
the interval -π < x < π. But if f(x) = x is given to be a periodic function of period 2π, then
the Fourier series in (6.6) converges to f(x) = x for all real values of x except x = kπ, where k
is any non-zero integer. In the left panel of Figure 6.1, we show the plots of x (black line), 2 sin x
(green curve), 2 sin x - sin 2x (red curve) and 2 sin x - sin 2x + (2/3) sin 3x (blue curve) in the
range -π < x < π, i.e., -3.14 < x < 3.14. We see that as we consider more and more terms of
the Fourier series in (6.6), it approximates the function f(x) = x better and better, as expected.
Figure 6.1: Left Panel: Plots of x (black line), 2 sin x (green curve), 2 sin x - sin 2x (red curve) and
2 sin x - sin 2x + (2/3) sin 3x (blue curve) in the range -π < x < π, i.e., -3.14 < x < 3.14.
Right Panel: Plots of f(x) (black lines), π/2 (green line), π/2 + 2 sin x (red curve), π/2 + 2 sin x + (2/3) sin 3x
(blue curve), and π/2 + 2 sin x + (2/3) sin 3x + (2/5) sin 5x (purple curve) in the range -π < x < π, i.e.,
-3.14 < x < 3.14.
Ex. 6.1.2. Find Fourier series of the function

    f(x) = 0 for -π ≤ x < 0,    f(x) = π for 0 ≤ x ≤ π.
Sol. 6.1.2. We find

    a0 = (1/π) ∫_{-π}^{π} f(x) dx = π,

    an = (1/π) ∫_{-π}^{π} f(x) cos nx dx = 0,

    bn = (1/π) ∫_{-π}^{π} f(x) sin nx dx = (1/n)[1 - (-1)^n].

So the Fourier series of f(x) is

    f(x) = π/2 + 2 [sin x + (1/3) sin 3x + (1/5) sin 5x + ⋯].                     (6.7)
The Fourier series in (6.7) converges to f(x) in -π < x < π except at x = 0. At x = 0, the value of
f(x) is π while the Fourier series converges to π/2. In the right panel of Figure 6.1, we show the plots
of f(x) (black lines), π/2 (green line), π/2 + 2 sin x (red curve), π/2 + 2 sin x + (2/3) sin 3x (blue curve),
and π/2 + 2 sin x + (2/3) sin 3x + (2/5) sin 5x (purple curve) in the range -π < x < π, i.e.,
-3.14 < x < 3.14. We see that as we consider more and more terms of the Fourier series in (6.7),
it approximates the function f(x) better and better, as expected.
As an application, consider the Fourier series of the function equal to 0 on -π ≤ x < 0 and to x on
0 ≤ x ≤ π, namely

    f(x) = π/4 + ∑_{n=1}^∞ [((-1)^n - 1)/(π n^2)] cos nx + ∑_{n=1}^∞ [(-1)^{n+1}/n] sin nx.   (6.8)

At x = π, the series (6.8) converges to [f(π-) + f(-π+)]/2 = π/2. Therefore,

    π/2 = π/4 + (1/π) ∑_{n=1}^∞ (1/n^2) [(-1)^n - 1](-1)^n,

which gives

    π^2/8 = 1 + 1/3^2 + 1/5^2 + 1/7^2 + ⋯
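Such identities extracted from Fourier series are pleasant to verify numerically. A small sketch (not in the notes; the number of terms is an arbitrary choice, and the tail of the sum is of order 1/(4N)):

```python
import math

def odd_square_sum(terms):
    """Partial sum 1 + 1/3^2 + 1/5^2 + ... with the given number of terms."""
    return sum(1.0 / (2 * k + 1) ** 2 for k in range(terms))

target = math.pi ** 2 / 8   # the claimed value of the full sum
```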
    |x| = π/2 - (4/π) [cos x + (1/3^2) cos 3x + (1/5^2) cos 5x + ⋯].              (6.10)
It is interesting to observe that the two series (6.9) and (6.10) both represent the same function
f(x) = x on 0 ≤ x ≤ π, since |x| = x for x ≥ 0. The series (6.9) is called the Fourier sine series of x,
and the series (6.10) is called the Fourier cosine series of x. Similarly, any function f(x) satisfying
the Dirichlet conditions on 0 ≤ x ≤ π can be expanded in both a sine series and a cosine series on
this interval, except that the sine series does not converge to f(x) at the end points x = 0 and x = π
unless f(x) = 0 at these points. Thus, to obtain the sine series of a function, we redefine the function
(if necessary) to have the value 0 at x = 0, and then extend it over the interval -π ≤ x < 0 such
that f(-x) = -f(x) for all x lying in -π ≤ x ≤ π. It is called the odd extension of f(x). Similarly,
the even extension of f(x) can be carried out in order to obtain the Fourier cosine series.
Ex. 6.3.1. Find Fourier sine and cosine series of f(x) = cos x, 0 ≤ x ≤ π.
Sol. 6.3.1. For sine series, we find

    bn = (2/π) ∫_0^π f(x) sin nx dx = (2/π) ∫_0^π cos x sin nx dx = 2n[1 + (-1)^n]/(π(n^2 - 1)), n ≠ 1,

    b1 = (2/π) ∫_0^π cos x sin x dx = 0.
So the Fourier sine series of cos x is given by

    cos x = ∑_{n=2}^∞ [2n(1 + (-1)^n)/(π(n^2 - 1))] sin nx,    0 < x < π.

For cosine series, we find

    an = (2/π) ∫_0^π f(x) cos nx dx = (2/π) ∫_0^π cos x cos nx dx = 0, n ≠ 1,

    a1 = (2/π) ∫_0^π cos x cos x dx = 1.

So the Fourier cosine series of cos x is given by

    cos x = cos x.
If f(x) is defined on the interval [-L, L], we set

    t = πx/L,    g(t) = f(Lt/π).

Then we have

    g(t) = a0/2 + ∑_{n=1}^∞ (an cos nt + bn sin nt),

where

    an = (1/π) ∫_{-π}^{π} g(t) cos nt dt,    bn = (1/π) ∫_{-π}^{π} g(t) sin nt dt.

Since t = πx/L, it follows that

    f(x) = a0/2 + ∑_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)],

where

    an = (1/L) ∫_{-L}^{L} f(x) cos(nπx/L) dx,    bn = (1/L) ∫_{-L}^{L} f(x) sin(nπx/L) dx.
Chapter 7
Boundary Value Problems
In this chapter, we shall discuss the solution of some boundary value problems.
Vibrating string: consider a string stretched along the x-axis and tied at the ends x = 0 and x = π.
Its transverse vibrations are governed by the one-dimensional wave equation

    ∂^2y/∂t^2 = a^2 ∂^2y/∂x^2,                                                    (7.1)

where a is some positive constant, and y(x, t) is the displacement/vibration of the string along the
y-axis direction. The wave equation is subjected to the following four conditions.
The first condition is
y(0, t) = 0,
(7.2)
since the left end of the string is tied at (0, 0) for all the time, and hence it can not have displacement
along the y-axis.
The second condition is

    y(π, t) = 0,                                                                  (7.3)

since the right end of the string is tied at (π, 0) for all the time, and hence it can not have
displacement along the y-axis.
The third condition is

    ∂y/∂t = 0 at t = 0,                                                           (7.4)

since the string is released from rest. The fourth condition is

    y(x, 0) = f(x),                                                               (7.5)

the initial shape of the string.
Once the string is released from the initial shape y(x, 0) = f (x), we are interested to find the
distance or displacement of the string from the x-axis at any time t. It is equivalent to saying that
we are interested to solve (7.1) for y(x, t) subject to the four conditions (7.2)-(7.5).
Assume that (7.1) possesses a solution of the form
y(x, t) = u(x)v(t),
(7.6)
where u(x) and v(t) are to be determined. Plugging (7.6) into (7.1), we get

    u''(x)/u(x) = (1/a^2) v''(t)/v(t) = λ,                                        (7.7)

where λ is a constant, since the left side depends only on x and the middle only on t. Thus

    u''(x) - λ u(x) = 0,                                                          (7.8)

    v''(t) - λ a^2 v(t) = 0.                                                      (7.9)
Now, let us first solve (7.8). Later, we shall look for the solution of (7.9). Considering (7.6), the
condition y(0, t) = 0 in (7.2) gives u(0)v(t) = 0 or u(0) = 0. Similarly, y(π, t) = 0 in (7.3) gives
u(π) = 0. Further, we see that the nature of solution of (7.8) depends on the values of λ.

(i) When λ > 0, the solution reads as u(x) = c1 e^{√λ x} + c2 e^{-√λ x}. Using the conditions u(0) = 0
and u(π) = 0, we get c1 = 0 = c2, and hence u(x) = 0. This leads to the trivial solution
y(x, t) = u(x)v(t) = 0, which is not of our interest.

(ii) When λ = 0, the solution reads as u(x) = c1 x + c2. Again, using the conditions u(0) = 0
and u(π) = 0, we get c1 = 0 = c2, which leads to the trivial solution y(x, t) = u(x)v(t) = 0.

(iii) When λ < 0, say, λ = -n^2, the solution reads as u(x) = c1 sin nx + c2 cos nx. Applying
the condition u(0) = 0, we get c2 = 0. The condition u(π) = 0 then implies that c1 sin nπ = 0.
Obviously, for a non-trivial solution we must have c1 ≠ 0. Then the condition c1 sin nπ = 0 forces
n to be a positive integer. Thus,

    un(x) = sin nx,    n = 1, 2, 3, ....                                          (7.10)

With λ = -n^2, (7.9) becomes v''(t) + n^2 a^2 v(t) = 0, and the condition (7.4) selects

    vn(t) = cos nat.                                                              (7.11)

So

    yn(x, t) = un(x) vn(t) = sin nx cos nat                                       (7.12)

is a solution of (7.1) satisfying (7.2), (7.3) and (7.4) for each positive integer n. By superposition,
    y(x, t) = ∑_{n=1}^∞ bn yn(x, t) = ∑_{n=1}^∞ bn sin nx cos nat                 (7.13)
is also a solution of (7.1). To determine bn , we use the fourth condition y(x, 0) = f (x) given in
(7.5). Then (7.13) gives

    f(x) = ∑_{n=1}^∞ bn sin nx.                                                   (7.14)
Notice that the series on right hand side in (7.14) is the Fourier sine series of f (x) in the interval
[0, π]. So we have

    bn = (2/π) ∫_0^π f(x) sin nx dx.                                              (7.15)

Hence,

    y(x, t) = ∑_{n=1}^∞ bn sin nx cos nat                                         (7.16)
with bn from (7.15) is the solution of (7.1) subject to the four conditions (7.2)-(7.5).
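Each normal mode sin nx cos nat can also be written, via the product-to-sum identity, as the average of two traveling waves, which is the d'Alembert picture of the vibrating string. This reformulation is not in the notes; the small sketch below just verifies the identity at a few sample points:

```python
import math

def mode(n, a, x, t):
    """Separated normal mode of the wave equation."""
    return math.sin(n * x) * math.cos(n * a * t)

def dalembert(n, a, x, t):
    """The same mode as a superposition of left- and right-moving waves."""
    return 0.5 * (math.sin(n * (x + a * t)) + math.sin(n * (x - a * t)))
```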
Heat conduction: consider a thin rod lying along the x-axis from x = 0 to x = π, whose temperature
w(x, t) is governed by the one-dimensional heat equation

    ∂w/∂t = a^2 ∂^2w/∂x^2,                                                        (7.17)

where a is some positive constant. The heat equation is subjected to the following three conditions.
The first condition is
w(0, t) = 0,
(7.18)
since the left end of the rod is kept at zero temperature for all t.
The second condition is

    w(π, t) = 0,                                                                  (7.19)

since the right end of the rod is kept at zero temperature for all t.
The third condition is
w(x, 0) = f (x),
(7.20)
Assume that (7.17) possesses a solution of the form

    w(x, t) = u(x) v(t),                                                          (7.21)

where u(x) and v(t) are to be determined. Plugging (7.21) into (7.17), we get

    u''(x)/u(x) = (1/a^2) v'(t)/v(t) = λ,                                         (7.22)

so that

    u''(x) - λ u(x) = 0,                                                          (7.23)

    v'(t) - λ a^2 v(t) = 0.                                                       (7.24)
Following the strategy discussed in the previous section, the non-trivial solution of (7.23) subject
to the conditions (7.18) and (7.19), reads as
    un(x) = sin nx,    n = 1, 2, 3, ....                                          (7.25)

With λ = -n^2, the solution of (7.24) is

    vn(t) = e^{-n^2 a^2 t}.                                                       (7.26)

So

    wn(x, t) = sin nx e^{-n^2 a^2 t}                                              (7.27)

is a solution of (7.17) satisfying (7.18) and (7.19) for each positive integer n. By superposition,

    w(x, t) = ∑_{n=1}^∞ bn wn(x, t) = ∑_{n=1}^∞ bn sin nx e^{-n^2 a^2 t}          (7.28)
is also a solution of (7.17). To determine bn , we use the third condition w(x, 0) = f (x) given in
(7.20). Then (7.28) gives
    f(x) = ∑_{n=1}^∞ bn sin nx.                                                   (7.29)
Notice that the series on right hand side in (7.29) is the Fourier sine series of f (x) in the interval
[0, π]. So we have

    bn = (2/π) ∫_0^π f(x) sin nx dx.                                              (7.30)

Hence,

    w(x, t) = ∑_{n=1}^∞ bn sin nx e^{-n^2 a^2 t}                                  (7.31)
with bn from (7.30) is the solution of (7.17) subject to the three conditions (7.18)-(7.20).
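That each product wn(x, t) = sin nx e^{-n^2 a^2 t} satisfies the heat equation can be checked without any algebra, by a central-difference residual of w_t - a^2 w_xx (a sketch, not in the notes; the step size and sample points are arbitrary):

```python
import math

def heat_residual(n, a, x, t, h=1e-4):
    """Central-difference residual of w_t - a^2 w_xx for
    w(x, t) = sin(nx) exp(-n^2 a^2 t)."""
    w = lambda x_, t_: math.sin(n * x_) * math.exp(-n * n * a * a * t_)
    wt = (w(x, t + h) - w(x, t - h)) / (2 * h)
    wxx = (w(x + h, t) - 2 * w(x, t) + w(x - h, t)) / (h * h)
    return wt - a * a * wxx
```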
Next, consider the two-dimensional equation

    ∂^2w/∂x^2 + ∂^2w/∂y^2 = 0,                                                    (7.32)

known as the Laplace equation. With the transformations x = r cos θ and y = r sin θ, the polar
form of (7.32) reads as

    ∂^2w/∂r^2 + (1/r) ∂w/∂r + (1/r^2) ∂^2w/∂θ^2 = 0.                              (7.33)
For,

    ∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) = cos θ ∂w/∂x + sin θ ∂w/∂y,

    ∂^2w/∂r^2 = cos^2 θ ∂^2w/∂x^2 + 2 sin θ cos θ ∂^2w/∂x∂y + sin^2 θ ∂^2w/∂y^2,

    ∂w/∂θ = (∂w/∂x)(∂x/∂θ) + (∂w/∂y)(∂y/∂θ) = -r sin θ ∂w/∂x + r cos θ ∂w/∂y,

    ∂^2w/∂θ^2 = r^2 sin^2 θ ∂^2w/∂x^2 - 2r^2 sin θ cos θ ∂^2w/∂x∂y + r^2 cos^2 θ ∂^2w/∂y^2
                - r cos θ ∂w/∂x - r sin θ ∂w/∂y.

Substituting these into the left hand side of (7.33), everything cancels except ∂^2w/∂x^2 + ∂^2w/∂y^2,
which is zero by (7.32).

We wish to solve (7.33) inside the unit circle r = 1 (the Dirichlet problem for the unit circle),
subject to the boundary condition

    w(1, θ) = f(θ),    -π ≤ θ ≤ π.                                                (7.34)

Assume that (7.33) possesses a solution of the form

    w(r, θ) = u(r) v(θ),                                                          (7.35)
where u(r) and v(θ) are to be determined. Plugging (7.35) into (7.33), we get

    [r^2 u''(r) + r u'(r)]/u(r) = -v''(θ)/v(θ) = λ,                               (7.36)

so that

    r^2 u''(r) + r u'(r) - λ u(r) = 0,                                            (7.37)

    v''(θ) + λ v(θ) = 0.                                                          (7.38)

Since w must be 2π-periodic in θ, we need λ = n^2 with n a non-negative integer, and then

    vn(θ) = an cos nθ + bn sin nθ,                                                (7.39)
where an, bn are constants such that both the terms on the right hand side of (7.39) do not
vanish together for n = 1, 2, 3, ...... Let a0/2 be the solution corresponding to n = 0.
Notice that (7.37) is a Cauchy-Euler DE. So the substitution r = e^z transforms it to

    d^2u/dz^2 - n^2 u = 0,                                                        (7.40)

whose solutions are

    u(z) = c1 + c2 z for n = 0,    and    u(z) = c1 e^{nz} + c2 e^{-nz} for n = 1, 2, 3, .....

Back in terms of r = e^z,

    u(r) = c1 + c2 ln r for n = 0,    and    u(r) = c1 r^n + c2 r^{-n} for n = 1, 2, 3, .....
Since we are interested in solutions which are well defined inside the circle r = 1, we discard the
term c2 ln r in the first solution because ln r is not finite at r = 0. Similarly, the second solution is
acceptable after discarding the term carrying r^{-n}. Thus, the solutions of our interest are

    un(r) = r^n,    n = 0, 1, 2, 3, .....                                         (7.41)

So

    wn(r, θ) = r^n (an cos nθ + bn sin nθ)                                        (7.42)

is a solution of (7.33) for each n, and so is the superposition

    w(r, θ) = ∑_{n=0}^∞ wn(r, θ).                                                 (7.43)
Writing out the terms, with a0/2 as the n = 0 term,

    w(r, θ) = a0/2 + ∑_{n=1}^∞ r^n (an cos nθ + bn sin nθ).                       (7.44)
Setting r = 1 and using the boundary condition (7.34), we need

    f(θ) = a0/2 + ∑_{n=1}^∞ (an cos nθ + bn sin nθ).                              (7.45)
Notice that the series on the right hand side in (7.45) is the Fourier series of f(θ) in the interval [-π, π].
So we have

    an = (1/π) ∫_{-π}^{π} f(θ) cos nθ dθ,    (n = 0, 1, 2, ....)                  (7.46)

    bn = (1/π) ∫_{-π}^{π} f(θ) sin nθ dθ.    (n = 1, 2, ....)                     (7.47)
Thus, (7.44) with an from (7.46) and bn from (7.47) is the solution of (7.33) subject to the
condition (7.34). Thus, the Dirichlet problem for the unit circle is solved.
Now substituting an from (7.46) and bn from (7.47) into (7.44), we get

    w(r, θ) = (1/π) ∫_{-π}^{π} f(φ) [ 1/2 + ∑_{n=1}^∞ r^n cos n(θ - φ) ] dφ.      (7.48)
Let α = θ - φ and z = re^{iα} = r(cos α + i sin α). Then we have

    1/2 + ∑_{n=1}^∞ r^n cos n(θ - φ) = 1/2 + ∑_{n=1}^∞ r^n cos nα
        = Re [ 1/2 + ∑_{n=1}^∞ z^n ]
        = Re [ 1/2 + z/(1 - z) ]
        = Re [ (1 + z)/(2(1 - z)) ]
        = Re [ (1 + z)(1 - z̄)/(2|1 - z|^2) ]
        = (1 - |z|^2)/(2|1 - z|^2)
        = (1 - r^2)/(2(1 - 2r cos α + r^2)).
So (7.48) becomes

    w(r, θ) = (1/2π) ∫_{-π}^{π} [ (1 - r^2)/(1 - 2r cos(θ - φ) + r^2) ] f(φ) dφ,  (7.49)
known as the Poisson integral. It expresses the value of the harmonic function w(r, θ) at all points
inside the circle r = 1 in terms of its values on the circumference of the circle. In particular, at
r = 0, we have

    w(0, θ) = (1/2π) ∫_{-π}^{π} f(φ) dφ,                                          (7.50)

which shows that the value of the harmonic function w at the center of the circle is the average of
its values on the circumference.
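The closed form of the Poisson kernel can be compared directly against its cosine series, which makes the algebra above easy to believe. A sketch (not in the notes; the truncation at 200 terms is an arbitrary choice, adequate since r^n decays geometrically for r < 1):

```python
import math

def kernel_closed(r, alpha):
    """(1 - r^2) / (2 (1 - 2 r cos(alpha) + r^2))."""
    return (1 - r * r) / (2 * (1 - 2 * r * math.cos(alpha) + r * r))

def kernel_series(r, alpha, terms=200):
    """Partial sum of 1/2 + sum r^n cos(n alpha)."""
    return 0.5 + sum(r ** n * math.cos(n * alpha) for n in range(1, terms))
```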
Sturm-Liouville boundary value problems: a BVP consisting of the DE

    d/dx [p(x) y'] + [λ q(x) + r(x)] y = 0,    a ≤ x ≤ b,                         (7.51)

together with the boundary conditions

    c1 y(a) + c2 y'(a) = 0,                                                       (7.52)

and

    d1 y(b) + d2 y'(b) = 0,                                                       (7.53)

where neither both c1 and c2 nor both d1 and d2 are zero, is called a SLBVP. We see that y = 0 is a
trivial solution of (7.51). The values of λ for which (7.51) has non-trivial solutions are known as
its eigen values, while the corresponding non-trivial solutions are known as eigen functions.
Ex. 7.4.1. Find eigen values and eigen functions of the SLBVP

    y'' + λy = 0,    y(0) = 0, y(π) = 0.

If ym and yn are eigen functions corresponding to distinct eigen values λm and λn, then

    ∫_a^b q(x) ym(x) yn(x) dx = 0.

In other words, any two distinct eigen functions ym and yn of the SLBVP are orthogonal with
respect to the weight function q(x). Let us prove this result.
Since ym and yn are eigen functions corresponding to the eigen values λm and λn, we have

    (p ym')' + (λm q + r) ym = 0                                                  (7.54)

and

    (p yn')' + (λn q + r) yn = 0.                                                 (7.55)
Multiplying (7.54) by yn, (7.55) by ym, and subtracting, we get

    (λm - λn) q ym yn = ym (p yn')' - yn (p ym')'.                                (7.56)

Integrating from a to b, and then integrating by parts, we have

    (λm - λn) ∫_a^b q ym yn dx = ∫_a^b ym (p yn')' dx - ∫_a^b yn (p ym')' dx
        = [ym (p yn')]_a^b - ∫_a^b ym' (p yn') dx - [yn (p ym')]_a^b + ∫_a^b yn' (p ym') dx
        = p(b)[ym(b) yn'(b) - yn(b) ym'(b)] - p(a)[ym(a) yn'(a) - yn(a) ym'(a)]
        = p(b) W(b) - p(a) W(a),

where W(x) = ym(x) yn'(x) - yn(x) ym'(x) is the Wronskian of ym and yn. Thus,

    (λm - λn) ∫_a^b q ym yn dx = p(b) W(b) - p(a) W(a).                           (7.57)
Notice that the eigen functions ym and yn are particular solutions of the SLBVP given by (7.51),
(7.52) and (7.53). So we have

    c1 ym(a) + c2 ym'(a) = 0,                                                     (7.58)

    c1 yn(a) + c2 yn'(a) = 0,                                                     (7.59)

    d1 ym(b) + d2 ym'(b) = 0,                                                     (7.60)

    d1 yn(b) + d2 yn'(b) = 0.                                                     (7.61)
By the given, c1 and c2 are not both zero. So the homogeneous system given by (7.58) and
(7.59) has a non-trivial solution. It follows that ym(a) yn'(a) - yn(a) ym'(a) = W(a) must be zero.
Likewise, (7.60) and (7.61) lead to ym(b) yn'(b) - yn(b) ym'(b) = W(b) = 0. So (7.57) becomes

    (λm - λn) ∫_a^b q ym yn dx = 0.                                               (7.62)

Also, λm ≠ λn. So we get

    ∫_a^b q ym yn dx = 0,                                                         (7.63)
Remark 7.4.1. The orthogonality property of eigen functions can be used to write a given function
as the series expansion of eigen functions.
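For the SLBVP of Ex. 7.4.1 (p = q = 1, r = 0), the eigen functions are sin nx on [0, π], and the orthogonality just proved can be seen numerically. A sketch (not in the notes; the midpoint rule and sample count are arbitrary choices):

```python
import math

def inner(m, n, samples=10000):
    """Midpoint-rule approximation of the integral of sin(mx) sin(nx) over [0, pi]."""
    h = math.pi / samples
    return sum(math.sin(m * (k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
               for k in range(samples)) * h
```

Distinct modes give (numerically) zero, while each mode has squared norm π/2.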
Remark 7.4.2. A DE in the form

    d/dx [p(x) y'] + [λ q(x) + r(x)] y = 0

is said to be in self-adjoint form.
Chapter 8
Some Special Functions
Legendre Polynomials
A DE of the form

    (1 - x^2)y'' - 2xy' + n(n + 1)y = 0                                           (8.1)

is known as Legendre's equation. Substituting the power series

    y = ∑_{k=0}^∞ ak x^k = a0 + a1 x + a2 x^2 + a3 x^3 + ⋯                        (8.2)

into (8.1), we get

    ∑_{k=2}^∞ ak k(k - 1) x^{k-2} + ∑_{k=0}^∞ ak (n - k)(n + k + 1) x^k = 0.      (8.3)

Equating to zero the coefficient of x^{k-2}, we obtain the recurrence relation

    ak = -[(n - k + 2)(n + k - 1) / (k(k - 1))] a_{k-2}.

Therefore,

    a2 = -[n(n + 1)/2!] a0,
    a3 = -[(n - 1)(n + 2)/3!] a1,
    a4 = [(n - 2)n(n + 1)(n + 3)/4!] a0,
    a5 = [(n - 3)(n - 1)(n + 2)(n + 4)/5!] a1, .......
Substituting these values into (8.2), we obtain the general solution of (8.1) as y = c1 y1 + c2 y2, where

    y1 = a0 [1 - (n(n + 1)/2!) x^2 + ((n - 2)n(n + 1)(n + 3)/4!) x^4 - ⋯],

    y2 = a1 [x - ((n - 1)(n + 2)/3!) x^3 + ((n - 3)(n - 1)(n + 2)(n + 4)/5!) x^5 - ⋯].
66
Mathematics-III
67
We observe that y_1 and y_2 are LI solutions of the Legendre equation (8.1), and these are analytic in the range −1 < x < 1. However, the solutions most useful in the applications are those bounded near x = 1. Notice that x = 1 is a regular singular point of the Legendre equation (8.1). We use the transformation t = (1 − x)/2 so that x = 1 corresponds to t = 0, and (8.1) transforms to the hypergeometric DE

t(1 − t) y'' + (1 − 2t) y' + n(n + 1) y = 0,    (8.4)

where the prime denotes derivative with respect to t. Here, a = −n, b = n + 1 and c = 1. So one solution of (8.4) in the neighbourhood of t = 0 is given by

y_1 = F(−n, n + 1, 1, t).    (8.5)

Since c = 1, the second LI solution contains a logarithmic term:

y_2 = F(−n, n + 1, 1, t) ln t + ......    (8.6)

However, this solution is not bounded near t = 0. So any solution of (8.4) bounded near t = 0 is a constant multiple of y_1. Consequently, the constant multiples of F(−n, n + 1, 1, (1 − x)/2) are the solutions of (8.1), which are bounded near x = 1.
If n is a non-negative integer, then F(−n, n + 1, 1, (1 − x)/2) defines a polynomial of degree n known as Legendre polynomial, denoted by P_n(x). Therefore,

P_n(x) = F(−n, n + 1, 1, (1 − x)/2)
    = 1 + [n(n + 1)/(1!)²] (x − 1)/2 + [n(n − 1)(n + 1)(n + 2)/(2!)²] (x − 1)²/2² + .... + [(2n)!/(n!)²] (x − 1)^n/2^n.

Notice that P_n(1) = 1 for all n. Next, after a sequence of algebraic manipulations, we can obtain

P_n(x) = [1/(2^n n!)] d^n/dx^n [(x² − 1)^n],

known as the Rodrigues formula. The following theorem provides the alternative approach to obtain the Rodrigues formula.
Theorem 8.1.1. (Rodrigues Formula) Prove that P_n(x) = [1/(2^n n!)] d^n/dx^n [(x² − 1)^n].

Proof. Let v = (x² − 1)^n. Then v_1 = 2nx(x² − 1)^{n−1}, so that

(1 − x²) v_1 + 2nx v = 0,    where v_1 = dv/dx.

Differentiating this relation (n + 1) times by the Leibniz rule, and writing v_k = d^k v/dx^k, we get

(1 − x²) v_{n+2} − 2(n + 1)x v_{n+1} − [(n + 1)n/2!] (2) v_n + 2n[x v_{n+1} + (n + 1) v_n] = 0,

which simplifies to

(1 − x²) v_{n+2} − 2x v_{n+1} + n(n + 1) v_n = 0.

This shows that c v_n (c is an arbitrary constant) is a solution of the Legendre equation (8.1). Also c v_n is a polynomial of degree n. But we know that the nth degree polynomial P_n(x) is a solution of the Legendre equation. It follows that

P_n(x) = c v_n = c d^n/dx^n [(x² − 1)^n].    (8.7)

Now (x² − 1)^n = (x − 1)^n (x + 1)^n, and the only term of the nth derivative that does not vanish at x = 1 is n!(x + 1)^n. Since P_n(1) = 1, we get

1 = c · n! 2^n or c = 1/(n! 2^n).

Hence,

P_n(x) = [1/(2^n n!)] d^n/dx^n [(x² − 1)^n].
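The Rodrigues formula is easy to verify computationally, since (x² − 1)^n has explicit binomial coefficients. The following sketch (our illustration, not part of the prescribed text) represents polynomials as coefficient lists and differentiates n times:

```python
from math import comb, factorial

def rodrigues(n):
    """Coefficients (low -> high degree) of P_n via the Rodrigues formula."""
    # (x^2 - 1)^n = sum_k C(n,k) (-1)^(n-k) x^(2k)
    c = [0] * (2 * n + 1)
    for k in range(n + 1):
        c[2 * k] = comb(n, k) * (-1) ** (n - k)
    for _ in range(n):                          # differentiate n times
        c = [j * c[j] for j in range(1, len(c))]
    return [a / (2 ** n * factorial(n)) for a in c]

print(rodrigues(4))  # 3/8 - (15/4)x^2 + (35/8)x^4, i.e. P_4(x)
```

Since P_n(1) = 1, the coefficients of each rodrigues(n) sum to 1, which is another quick check.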
In particular,

P_4(x) = (35/8) x⁴ − (15/4) x² + 3/8, etc.

As an example of expansion in Legendre polynomials, we have

x⁴ + 3x³ − x² + 5x − 2 = (8/35) P_4(x) + (6/5) P_3(x) − (2/21) P_2(x) + (34/5) P_1(x) − (224/105) P_0(x).

Orthogonality of Legendre polynomials: Writing the Legendre equations satisfied by P_m and P_n in self-adjoint form, we have

[(1 − x²) P_m']' + m(m + 1) P_m = 0,    (8.8)
[(1 − x²) P_n']' + n(n + 1) P_n = 0.    (8.9)

Multiplying (8.8) by P_n and (8.9) by P_m, subtracting, and integrating from −1 to 1 (the integrated terms vanish since 1 − x² = 0 at x = ±1), we get

[m(m + 1) − n(n + 1)] ∫_{−1}^{1} P_m(x) P_n(x) dx = 0.    (8.10)

Also, m ≠ n. So it gives

∫_{−1}^{1} P_m(x) P_n(x) dx = 0.
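The Legendre expansion of x⁴ + 3x³ − x² + 5x − 2 given above can be confirmed with exact rational arithmetic. The sketch below (illustrative only) hard-codes the standard coefficient lists of P_0, ..., P_4 and recombines them:

```python
from fractions import Fraction as F

# P_0 .. P_4 as coefficient lists, lowest degree first.
P = [[1],
     [0, 1],
     [F(-1, 2), 0, F(3, 2)],
     [0, F(-3, 2), 0, F(5, 2)],
     [F(3, 8), 0, F(-15, 4), 0, F(35, 8)]]

coeffs = [F(-224, 105), F(34, 5), F(-2, 21), F(6, 5), F(8, 35)]  # c_0 .. c_4

total = [F(0)] * 5
for c, p in zip(coeffs, P):
    for j, a in enumerate(p):
        total[j] += c * a

print(total)  # coefficients of -2 + 5x - x^2 + 3x^3 + x^4
```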
For m = n, using the Rodrigues formula and writing D = d/dx, we have

(2^n n!)² ∫_{−1}^{1} P_n²(x) dx = ∫_{−1}^{1} D^n(x² − 1)^n D^n(x² − 1)^n dx
    = [D^n(x² − 1)^n D^{n−1}(x² − 1)^n]_{−1}^{1} − ∫_{−1}^{1} D^{n+1}(x² − 1)^n D^{n−1}(x² − 1)^n dx
    = − ∫_{−1}^{1} D^{n+1}(x² − 1)^n D^{n−1}(x² − 1)^n dx    (the integrated part vanishes, since D^{n−1}(x² − 1)^n contains the factor x² − 1)
    = (−1)^n ∫_{−1}^{1} D^{2n}(x² − 1)^n (x² − 1)^n dx    (Integrating (n − 1) times more)
    = (−1)^n ∫_{−1}^{1} (2n)! (x² − 1)^n dx = (2n)! ∫_{−1}^{1} (1 − x²)^n dx    (Put x = sin θ)
    = 2(2n)! ∫_0^{π/2} cos^{2n+1} θ dθ
    = 2(2n)! [2n(2n − 2).......4.2]/[(2n + 1)(2n − 1)........3.1]
    = 2(2n)! [2n(2n − 2).......4.2]²/(2n + 1)!
    = 2(2^n n!)²/(2n + 1).

Therefore,

∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).
Legendre Series
Let f(x) be a function defined from x = −1 to x = 1. Then we can write

f(x) = Σ_{n=0}^∞ c_n P_n(x),    (8.11)

where the c_n's are constants to be determined. Multiplying both sides of (8.11) by P_n(x) and integrating from −1 to 1, we get

∫_{−1}^{1} f(x) P_n(x) dx = c_n ∫_{−1}^{1} P_n²(x) dx = c_n · 2/(2n + 1)

⟹ c_n = [(2n + 1)/2] ∫_{−1}^{1} f(x) P_n(x) dx.
Using the values of cn into (8.11), we get the expansion of f (x) in terms of Legendre polynomials,
known as the Legendre series of f (x).
Ex. 8.1.4. If f(x) = x for 0 < x < 1 and f(x) = 0 otherwise, then show that f(x) = (1/4) P_0(x) + (1/2) P_1(x) + (5/16) P_2(x) + .......

Sol. 8.1.4. c_0 = (1/2) ∫_{−1}^{1} f(x) P_0(x) dx = (1/2) ∫_0^1 x · 1 dx = 1/4, etc.

Ex. 8.1.5. Prove that (1 − 2xt + t²)^{−1/2} = Σ_{n=0}^∞ t^n P_n(x), and hence prove the recurrence relation n P_n(x) = (2n − 1) x P_{n−1}(x) − (n − 1) P_{n−2}(x).

Sol. 8.1.5. Please try yourself.

Note: The function (1 − 2xt + t²)^{−1/2} is called the generating function of the Legendre polynomials. Note that the Legendre polynomials P_n(x) appear as coefficients of t^n in the expansion of the function (1 − 2xt + t²)^{−1/2}.
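The generating function identity of Ex. 8.1.5 can be checked numerically by comparing a truncated sum Σ t^n P_n(x) against the closed form, with P_n computed from the three-term recurrence of Ex. 8.1.5 (a quick numerical sketch, not a proof; the point (x, t) is chosen arbitrarily):

```python
import math

def legendre(n, x):
    """P_n(x) via the recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

x, t = 0.3, 0.1
lhs = 1.0 / math.sqrt(1 - 2 * x * t + t * t)
rhs = sum(t ** n * legendre(n, x) for n in range(30))
print(lhs, rhs)  # the two values agree for |t| < 1
```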
Gamma Function
The gamma function is defined as

Γ(n) = ∫_0^∞ e^{−x} x^{n−1} dx, (n > 0).    (8.12)

The condition n > 0 is necessary in order to guarantee the convergence of the integral. Next, integrating by parts, we have

Γ(n + 1) = ∫_0^∞ e^{−x} x^n dx = [x^n(−e^{−x})]_0^∞ + n ∫_0^∞ e^{−x} x^{n−1} dx = n ∫_0^∞ e^{−x} x^{n−1} dx.

∴ Γ(n + 1) = n Γ(n).

It is the recurrence relation for the gamma function. Using this relation recursively, we have

Γ(2) = 1 · Γ(1) = 1,
Γ(3) = 2 · Γ(2) = 2 · 1 = 2!,
Γ(4) = 3 · Γ(3) = 3 · 2! = 3!,
......
Γ(n + 1) = n · Γ(n) = n · (n − 1)! = n!.
Thus, Γ(n) takes positive integer values for positive integer values of n. It can be proved that Γ(1/2) = √π. For,

Γ(1/2) = ∫_0^∞ e^{−t} t^{−1/2} dt = 2 ∫_0^∞ e^{−x²} dx, where t^{1/2} = x.

∴ [Γ(1/2)]² = 4 (∫_0^∞ e^{−x²} dx)(∫_0^∞ e^{−y²} dy)
    = 4 ∫_0^∞ ∫_0^∞ e^{−(x² + y²)} dx dy
    = 4 ∫_0^{π/2} ∫_0^∞ e^{−r²} r dr dθ    (x = r cos θ, y = r sin θ)
    = 4 (π/2)(1/2) = π.

Hence Γ(1/2) = √π. Having known the precise value of Γ(1/2), we can calculate the values of the gamma function at positive fractions with denominator 2. For instance,

Γ(7/2) = (5/2)(3/2)(1/2) Γ(1/2) = (5/2)(3/2)(1/2) √π = 15√π/8.

For values of the gamma function at positive fractions with denominator different from 2, we have to rely upon the numerically approximated value of the integral arising in the gamma function.
Note that Γ(n) given by (8.12) is not defined for n ≤ 0. We extend the definition of the gamma function by the relation

Γ(n) = Γ(n + 1)/n.    (8.13)

Then Γ(n) is defined for all n except when n is any non-positive integer. If we agree Γ(n) to be ∞ for non-positive integer values of n, then 1/Γ(n) is defined (equal to 0 there) for all n. Such an agreement is useful while dealing with Bessel functions. The gamma function is, thus, defined as

Γ(n) = ∫_0^∞ e^{−x} x^{n−1} dx for n > 0, and Γ(n) = Γ(n + 1)/n for n < 0, n ≠ −1, −2, ..... .
Note that the gamma function generalizes the concept of factorial from non-negative integers to any real number via the formula n! = Γ(n + 1).
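The values derived above can be checked against the gamma function in Python's standard math module (a numerical illustration; the argument 4.3 in the recurrence check is an arbitrary choice):

```python
import math

print(math.gamma(0.5) ** 2)                # ~pi, since Gamma(1/2) = sqrt(pi)
print(math.gamma(3.5))                     # Gamma(7/2) = 15*sqrt(pi)/8
print(math.gamma(5))                       # Gamma(5) = 4! = 24
print(math.gamma(4.3) / math.gamma(3.3))   # recurrence: Gamma(n+1)/Gamma(n) = n -> 3.3
```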
Bessel Functions
The DE

x² y'' + x y' + (x² − p²) y = 0,    (8.14)

where p is a non-negative constant, is called Bessel's DE. We see that x = 0 is a regular singular point of (8.14). So there exists at least one Frobenius series solution of the form

y = Σ_{n=0}^∞ a_n x^{n+r}, (a_0 ≠ 0).    (8.15)

Substituting (8.15) into (8.14), we obtain

Σ_{n=0}^∞ a_n [(n + r)² − p²] x^{n+r} + Σ_{n=0}^∞ a_n x^{n+r+2} = 0.    (8.16)

The lowest power x^r gives the indicial equation r² − p² = 0, so that r = ±p; the coefficient of x^{r+1} forces a_1 = 0, and equating the remaining coefficients to zero gives

a_n = − a_{n−2}/[(n + r)² − p²],    (8.17)

where n = 2, 3, 4....

For r = p, we get the solution in the form

y = a_0 x^p Σ_{n=0}^∞ (−1)^n (x/2)^{2n}/[n!(p + 1)(p + 2)....(p + n)].    (8.18)

The Bessel function of first kind of order p, denoted by J_p(x), is defined by putting a_0 = 1/(2^p p!) = 1/(2^p Γ(p + 1)) into (8.18) so that

J_p(x) = Σ_{n=0}^∞ (−1)^n (x/2)^{2n+p}/[n! Γ(n + p + 1)],    (8.19)

which is well defined for all real values of p in accordance with the definition of the gamma function.
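The series (8.19) is straightforward to evaluate numerically with math.gamma. As a sanity check, the sketch below (illustrative only; the truncation at 40 terms and the sample point are ad hoc choices) compares it for p = 1/2 against the elementary closed form √(2/(πx)) sin x obtained later in these notes:

```python
import math

def J(p, x, terms=40):
    """Bessel function of the first kind via its defining series (8.19)."""
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * (x / 2) ** (2 * n + p) / (math.factorial(n) * math.gamma(n + p + 1))
    return s

x = 1.7
print(J(0.5, x))                                      # series value of J_{1/2}(x)
print(math.sqrt(2 / (math.pi * x)) * math.sin(x))     # elementary closed form
```

The series also reproduces the well-known first positive zero of J_0 near x ≈ 2.4048.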
Figure 8.1: Plots of J_0(x) (blue curve) and J_1(x) (red curve).
From the applications point of view, the most useful Bessel functions are of order 0 and 1, given by

J_0(x) = 1 − x²/2² + x⁴/(2².4²) − x⁶/(2².4².6²) + ..........,
J_1(x) = x/2 − (1/(1!2!))(x/2)³ + (1/(2!3!))(x/2)⁵ − ...........

Plots of J_0(x) (blue curve) and J_1(x) (red curve) are shown in Figure 8.1. It may be seen that J_0(x) and J_1(x) vanish alternately, and have infinitely many zeros on the positive x-axis, as expected, since J_0(x) and J_1(x) are particular solutions of Bessel's DE (8.14) with p = 0 and p = 1, respectively. Later, we shall show that J_0'(x) = −J_1(x). Thus, J_0(x) and J_1(x) behave just like cos x and sin x. This analogy may also be observed from the fact that the normal form of Bessel's DE (8.14), given by

u'' + [1 + (1 − 4p²)/(4x²)] u = 0,

behaves as

u'' + u = 0

for large values of x, with solutions cos x and sin x. It means J_0(x) and J_1(x) behave more precisely like cos x and sin x for larger values of x.

Next, consider the second root r = −p of the indicial equation. The recurrence relation (8.17) becomes

a_n = − a_{n−2}/[(n − p)² − p²] = − a_{n−2}/[n(n − 2p)],

where n = 2, 3, 4....
For n = 3, we get 3(3 − 2p) a_3 = −a_1 = 0. This leaves a_3 arbitrary for p = 3/2. We choose a_3 = 0. Likewise, we choose a_5 = 0, a_7 = 0, ........ for the sake of a particular solution, and thus obtain the following particular solution of (8.14):

J_{−p}(x) = Σ_{n=0}^∞ (−1)^n (x/2)^{2n−p}/[n! Γ(n − p + 1)].    (8.20)

When p is not an integer, J_p(x) and J_{−p}(x) are LI (their leading powers x^p and x^{−p} are different), so that the general solution of (8.14) is

y = c_1 J_p(x) + c_2 J_{−p}(x).    (8.21)

Now let us see what happens when p is a non-negative integer, say m. We have

J_{−m}(x) = Σ_{n=0}^∞ (−1)^n (x/2)^{2n−m}/[n! Γ(n − m + 1)]
    = Σ_{n=m}^∞ (−1)^n (x/2)^{2n−m}/[n!(n − m)!]    (∵ 1/Γ(n − m + 1) = 0 for n = 0, 1, 2, ...., m − 1)
    = Σ_{n=0}^∞ (−1)^{n+m} (x/2)^{2(n+m)−m}/[(n + m)! n!]
    = (−1)^m Σ_{n=0}^∞ (−1)^n (x/2)^{2n+m}/[n!(n + m)!]
    = (−1)^m J_m(x).

This shows that J_p(x) and J_{−p}(x) are not LI when p is an integer.

When p is not an integer, any function of the form (8.21) with c_2 ≠ 0 is a Bessel function of second kind. The standard Bessel function of second kind is defined as

Y_p(x) = [J_p(x) cos pπ − J_{−p}(x)]/sin pπ,    (8.22)

and then

y = c_1 J_p(x) + c_2 Y_p(x),    (8.23)

which is the general solution of (8.14) when p is not an integer. One may observe that Y_p(x) is not defined when p is an integer, say m. However, it can be shown that

Y_m(x) = lim_{p→m} Y_p(x)

exists, and it is taken as the Bessel function of second kind. Thus, it follows that (8.23) is the general solution of Bessel's equation (8.14) in all cases. It is found that Y_p(x) is not bounded near x = 0 for p ≥ 0. Accordingly, if we are interested in solutions of Bessel's equation near x = 0, which is often the case in applications, then we must take c_2 = 0 in (8.23).
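The identity J_{−m}(x) = (−1)^m J_m(x) can be observed numerically, using the convention 1/Γ(k) = 0 for non-positive integers k from the gamma-function section (illustrative sketch; the sample point is arbitrary):

```python
import math

def J(p, x, terms=40):
    # Series (8.19)/(8.20) with 1/Gamma(k) = 0 for k = 0, -1, -2, ....
    s = 0.0
    for n in range(terms):
        k = n + p + 1
        if k <= 0 and k == int(k):
            continue                 # term vanishes: 1/Gamma(non-positive integer) = 0
        s += (-1) ** n * (x / 2) ** (2 * n + p) / (math.factorial(n) * math.gamma(k))
    return s

x = 3.1
print(J(-2, x), J(2, x))   # J_{-m} = (-1)^m J_m, so these agree for m = 2
```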
Bessel functions of half-integral order are elementary. In particular, it can be shown that

J_{1/2}(x) = √(2/(πx)) sin x and J_{−1/2}(x) = √(2/(πx)) cos x.

Using the recurrence relation J_{p+1}(x) = (2p/x) J_p(x) − J_{p−1}(x), we then get

J_{3/2}(x) = (1/x) J_{1/2}(x) − J_{−1/2}(x) = √(2/(πx)) [(sin x)/x − cos x],
J_{−3/2}(x) = −(1/x) J_{−1/2}(x) − J_{1/2}(x) = −√(2/(πx)) [(cos x)/x + sin x].

Thus, every Bessel function J_{m+1/2}(x), where m is any integer, is elementary as it is expressible in terms of elementary functions.
Orthogonality of Bessel functions: Let λ_m and λ_n denote positive zeros of J_p(x). We shall show that

∫_0^1 x J_p(λ_m x) J_p(λ_n x) dx = 0 for m ≠ n, and = (1/2) J_{p+1}²(λ_n) for m = n.    (8.24)

Let u(x) = J_p(λ_m x) and v(x) = J_p(λ_n x). Then u and v satisfy

u'' + (1/x) u' + (λ_m² − p²/x²) u = 0,
v'' + (1/x) v' + (λ_n² − p²/x²) v = 0.    (8.25)
Multiplying the equation of u by v and the equation (8.25) of v by u, and subtracting the resulting equations, we obtain

d/dx (u'v − v'u) + (1/x)(u'v − v'u) = (λ_n² − λ_m²) uv.

After multiplication by x, it becomes

d/dx [x(u'v − v'u)] = (λ_n² − λ_m²) x u v.

Now, integrating with respect to x from 0 to 1, we have

(λ_n² − λ_m²) ∫_0^1 x u v dx = [x(u'v − v'u)]_0^1 = 0, (m ≠ n),

since u(1) = J_p(λ_m) = 0 and v(1) = J_p(λ_n) = 0. For the case m = n, we have

∫_0^1 x J_p²(λ_m x) dx = (1/2) J_p'²(λ_m) = (1/2) J_{p+1}²(λ_m).

For, d/dx [x^{−p} J_p(x)] = −x^{−p} J_{p+1}(x) leads to

J_p'(x) = (p/x) J_p(x) − J_{p+1}(x),

∴ J_p'(λ_m) = (p/λ_m) J_p(λ_m) − J_{p+1}(λ_m) = −J_{p+1}(λ_m).
Fourier-Bessel Series
In mathematical physics, it is often necessary to expand a given function in terms of Bessel functions. The simplest and most useful expansions are of the form

f(x) = Σ_{n=1}^∞ a_n J_p(λ_n x) = a_1 J_p(λ_1 x) + a_2 J_p(λ_2 x) + ...........,    (8.26)

where f(x) is defined on the interval 0 ≤ x ≤ 1 and λ_n are the positive zeros of some fixed Bessel function J_p(x) with p ≥ 0. Now multiplying (8.26) by x J_p(λ_n x) and integrating from x = 0 to x = 1, we get

∫_0^1 x f(x) J_p(λ_n x) dx = (1/2) a_n J_{p+1}²(λ_n),

which gives

a_n = [2/J_{p+1}²(λ_n)] ∫_0^1 x f(x) J_p(λ_n x) dx.
For example, let f(x) = 1 in 0 ≤ x ≤ 1, and take p = 0, so that λ_n are the positive zeros of J_0(x). Then

a_n = [2/J_1²(λ_n)] ∫_0^1 x J_0(λ_n x) dx
    = [2/J_1²(λ_n)] [x J_1(λ_n x)/λ_n]_0^1    (∵ d/dx [x J_1(x)] = x J_0(x))
    = [2/J_1²(λ_n)] J_1(λ_n)/λ_n
    = 2/[λ_n J_1(λ_n)].

Thus,

1 = Σ_{n=1}^∞ [2/(λ_n J_1(λ_n))] J_0(λ_n x), 0 ≤ x ≤ 1.
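The orthogonality relations behind this expansion can be tested numerically. The sketch below (illustrative, not part of the text) hard-codes the first two positive zeros of J_0 (standard tabulated values) and integrates with Simpson's rule:

```python
import math

def J(p, x, terms=60):
    # Bessel J_p via its power series; adequate for moderate x.
    return sum((-1) ** n * (x / 2) ** (2 * n + p)
               / (math.factorial(n) * math.gamma(n + p + 1)) for n in range(terms))

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                              for k in range(1, n))) * h / 3

l1, l2 = 2.404825557695773, 5.520078110286311   # first two positive zeros of J0

print(simpson(lambda x: x * J(0, l1 * x) * J(0, l2 * x), 0, 1))   # ~0 (m != n)
print(simpson(lambda x: x * J(0, l1 * x) ** 2, 0, 1))             # m = n case
print(0.5 * J(1, l1) ** 2)                                        # (1/2) J1(l1)^2
```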
Chapter 9
Laplace Transforms
Definitions of Laplace and inverse Laplace transforms
Let f (x) be a function defined on a finite or infinite interval a x b. If we choose a fixed function
K(p, x) of variable x and a parameter p, then the general integral transformation is defined as
T[f(x)] = ∫_a^b K(p, x) f(x) dx.    (9.1)

The function K(p, x) is called the kernel of T. In particular, if a = 0, b = ∞ and K(p, x) = e^{−px}, then (9.1) is called the Laplace transform of f(x) and is denoted by L[f(x)]:

L[f(x)] = ∫_0^∞ e^{−px} f(x) dx = F(p), and L^{−1}[F(p)] = f(x).    (9.2)

The Laplace transform may be regarded as a continuous analogue of a power series Σ_{n=0}^∞ a(n) x^n with x = e^{−p}.
Laplace transforms of some elementary functions:

(1) L[1] = ∫_0^∞ e^{−px} · 1 dx = 1/p (p > 0). ∴ L^{−1}[1/p] = 1.

(2) L[e^{ax}] = ∫_0^∞ e^{−px} e^{ax} dx = 1/(p − a) (p > a). ∴ L^{−1}[1/(p − a)] = e^{ax}.

(3) L[x^n] = ∫_0^∞ e^{−px} x^n dx = Γ(n + 1)/p^{n+1} (p > 0). ∴ L^{−1}[1/p^{n+1}] = x^n/Γ(n + 1).

(4) L[sin ax] = L[(e^{iax} − e^{−iax})/(2i)] = a/(p² + a²). ∴ L^{−1}[1/(p² + a²)] = (1/a) sin ax.

(5) L[cos ax] = L[(e^{iax} + e^{−iax})/2] = p/(p² + a²). ∴ L^{−1}[p/(p² + a²)] = cos ax.

(6) L[sinh ax] = L[(e^{ax} − e^{−ax})/2] = a/(p² − a²). ∴ L^{−1}[1/(p² − a²)] = (1/a) sinh ax.

(7) L[cosh ax] = L[(e^{ax} + e^{−ax})/2] = p/(p² − a²). ∴ L^{−1}[p/(p² − a²)] = cosh ax.
Ex. 9.2.1. Find L[sin² x] and L[4 sin x cos x + e^{−x}].

Sol. 9.2.1.

L[sin² x] = L[(1 − cos 2x)/2] = (1/2)[1/p − p/(p² + 4)].

L[4 sin x cos x + e^{−x}] = L[2 sin 2x + e^{−x}] = 4/(p² + 4) + 1/(p + 1).

Ex. 9.2.2. Find L^{−1}[1/(p² + 2)] and L^{−1}[1/(p⁴ + p²)].

Sol. 9.2.2.

L^{−1}[1/(p² + 2)] = (1/√2) sin √2 x.

L^{−1}[1/(p⁴ + p²)] = L^{−1}[1/p² − 1/(p² + 1)] = x − sin x.
The above conditions are not necessary. Consider the function f(x) = x^{−1/2}. This function is not piecewise continuous on [0, b] for any positive real number b since it has an infinite discontinuity at x = 0. But L[x^{−1/2}] = Γ(1/2)/p^{1/2} = √(π/p) exists for p > 0.

Further, from (9.2), we see that lim_{p→∞} F(p) = 0. It is true even if the function is not piecewise continuous or of exponential order. So if lim_{p→∞} φ(p) ≠ 0, then φ(p) can not be the Laplace transform of any function. For example, L^{−1}[p], L^{−1}[cos p], L^{−1}[log p] etc. do not exist.
For example, since L[sin x] = 1/(p² + 1), the shifting formula L[e^{ax} f(x)] = F(p − a) gives

L[e^{2x} sin x] = 1/[(p − 2)² + 1].

Also, we have the formula

L[∫_0^x f(t) dt] = F(p)/p,

and the multiplication formula L[x f(x)] = −F'(p), which gives, for instance,

L[x sin x] = (−1) d/dp [1/(p² + 1)] = 2p/(p² + 1)².
Also, using the division formula L[f(x)/x] = ∫_p^∞ F(p) dp, we have

∫_0^∞ e^{−px} (sin x)/x dx = L[(sin x)/x] = ∫_p^∞ dp/(p² + 1) = π/2 − tan^{−1} p.

Choosing p = 0, we get ∫_0^∞ (sin x)/x dx = π/2.

Ex. 9.4.6. Show that L[(cos x)/x] does not exist.

Sol. 9.4.6. Please try yourself.
Ex. 9.4.7. Find L^{−1}[(p + 7)/(p² + 2p + 5)].

Sol. 9.4.7. Please try yourself by making a perfect square in the denominator.

Ex. 9.4.8. Find L^{−1}[(2p² − 6p + 5)/(p³ − 6p² + 11p − 6)].

Sol. 9.4.8. Please try yourself by making partial fractions.

Ex. 9.4.9. Find L^{−1}[log((p + 1)/(p − 1))].

Sol. 9.4.9. Please try yourself by letting

L[f(x)] = log((p + 1)/(p − 1))

so that

L[x f(x)] = −d/dp log((p + 1)/(p − 1)) = 2/(p² − 1).

Ex. 9.4.10. Show that L^{−1}[p/(p² − a²)²] = (1/(2a)) x sinh ax.

Sol. 9.4.10. Please try yourself.

Ex. 9.4.11. Find L^{−1}[p/(p⁴ + p² + 1)].

Sol. 9.4.11. Please try yourself by using

p/(p⁴ + p² + 1) = (1/2)[1/(p² − p + 1) − 1/(p² + p + 1)].
The Laplace transform converts an IVP into an algebraic problem. Taking the Laplace transform of y' = y with y(0) = c, we get p L[y] − c = L[y], so that

L[y] = c/(p − 1), i.e. y = c e^x.

Similarly, for y'' + y = 0 with y(0) = c_1, y'(0) = c_2, we get

L[y] = c_1 p/(p² + 1) + c_2/(p² + 1), i.e. y = c_1 cos x + c_2 sin x.

While inverting the transforms, partial fractions are often required. For instance,

1/[(p + 1)(p − 2)] = (1/3)[1/(p − 2) − 1/(p + 1)],

so that L^{−1}[1/((p + 1)(p − 2))] = (1/3)(e^{2x} − e^{−x}).
The Laplace transform also applies to some LDEs with variable coefficients. For example, consider Bessel's equation of order zero,

x y'' + y' + x y = 0,

whose bounded solution with y(0) = 1 is y = J_0(x). Using L[x f(x)] = −d/dp L[f(x)], we get

−d/dp [p² L[y] − p y(0) − y'(0)] + p L[y] − y(0) − d/dp (L[y]) = 0,

which simplifies to (p² + 1) d/dp (L[y]) + p L[y] = 0. Separating variables and integrating, we find

L[y] = c (p² + 1)^{−1/2} = (c/p)(1 + 1/p²)^{−1/2} = c [1/p − (1/2)(1/p³) + ((1·3)/(2!·2²))(1/p⁵) − ......].

Inverting term by term gives y = c[1 − x²/2² + x⁴/(2².4²) − ......] = c J_0(x), and y(0) = 1 gives c = 1. Hence

L[J_0(x)] = 1/√(p² + 1).
Theorem 9.5.1. (Convolution Theorem) L[f(x)] · L[g(x)] = L[∫_0^x f(x − t) g(t) dt].

Proof. We have

L[f(x)] · L[g(x)] = ∫_0^∞ e^{−pt} g(t) dt ∫_0^∞ e^{−ps} f(s) ds
    = ∫_0^∞ ∫_0^∞ e^{−p(s+t)} f(s) g(t) ds dt
    = ∫_0^∞ ∫_t^∞ e^{−px} f(x − t) g(t) dx dt    (s + t = x)
    = ∫_0^∞ ∫_0^x e^{−px} f(x − t) g(t) dt dx    (Change of order of integration)
    = ∫_0^∞ e^{−px} [∫_0^x f(x − t) g(t) dt] dx
    = L[∫_0^x f(x − t) g(t) dt].

Remark 9.5.2. If L[f(x)] = F(p) and L[g(x)] = G(p), then by the convolution theorem

L^{−1}[F(p) G(p)] = ∫_0^x f(x − t) g(t) dt.

For instance,

L^{−1}[1/(p²(p² + 1))] = ∫_0^x (x − t) sin t dt = x − sin x.
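The last inversion can be verified by computing the convolution integral directly (numerical sketch with Simpson's rule; the evaluation point is arbitrary):

```python
import math

def conv(f, g, x, n=2000):
    """Simpson approximation of (f*g)(x) = integral_0^x f(x-t) g(t) dt."""
    h = x / n
    s = f(x) * g(0.0) + f(0.0) * g(x)
    for k in range(1, n):
        t = k * h
        s += (4 if k % 2 else 2) * f(x - t) * g(t)
    return s * h / 3

x = 2.0
print(conv(lambda u: u, math.sin, x))   # convolution of t and sin t
print(x - math.sin(x))                  # = L^{-1}[1/(p^2 (p^2+1))] at x
```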
An equation of the form

f(x) = y(x) + ∫_0^x K(x − t) y(t) dt,    (9.3)

where the unknown function y(x) appears under the integral sign, is called an integral equation. Taking the Laplace transform of both sides of (9.3), we get

L[f(x)] = L[y(x)] + L[K(x)] L[y(x)].

So we have

L[y(x)] = L[f(x)]/(1 + L[K(x)]).

Ex. Solve the integral equation y(x) = x³ + ∫_0^x sin(x − t) y(t) dt.

Sol. Here (9.3) holds with f(x) = x³ and K(x) = −sin x. Taking Laplace transforms,

L[y] = L[x³]/(1 − L[sin x]) = (6/p⁴)/[1 − 1/(p² + 1)] = 6/p⁴ + 6/p⁶.

Hence y(x) = x³ + (1/20) x⁵.
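As a check, the solution can be substituted back into the integral equation and the convolution integral evaluated numerically (illustrative sketch; the evaluation point is arbitrary):

```python
import math

def y(t):
    return t ** 3 + t ** 5 / 20     # solution obtained via Laplace transforms

def integral(x, n=2000):
    # Simpson rule for integral_0^x sin(x - t) y(t) dt
    h = x / n
    s = math.sin(x) * y(0.0) + math.sin(0.0) * y(x)
    for k in range(1, n):
        t = k * h
        s += (4 if k % 2 else 2) * math.sin(x - t) * y(t)
    return s * h / 3

x = 1.3
print(y(x) - integral(x))   # should reproduce f(x) = x^3
print(x ** 3)
```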
It gives

L^{−1}[e^{−ap} F(p)] = f(t − a) u_a(t).

Ex. 9.7.1. Find L^{−1}[e^{−3p}/(p² + 1)].

Sol. 9.7.1. We know L^{−1}[1/(p² + 1)] = sin t.

∴ L^{−1}[e^{−3p}/(p² + 1)] = sin(t − 3) u_3(t) = 0 for t < 3, and sin(t − 3) for t ≥ 3.
Consider the function

f_ε(t) = 0 for t < 0;  1/ε for 0 ≤ t ≤ ε;  0 for t > ε.

Its limit as ε → 0+ defines the Dirac delta function, which is denoted by δ(t). So lim_{ε→0+} f_ε(t) = δ(t), and we may interpret that δ(t) = 0 for t ≠ 0 and δ(t) = ∞ at t = 0. The delta function can be made to act at any point, say a ≥ 0. Then we define

δ_a(t) = 0 for t ≠ a;  ∞ at t = a,

and the corresponding function

f_ε(t) = 0 for t < a;  1/ε for a ≤ t ≤ a + ε;  0 for t > a + ε    (9.4)

can be written as

f_ε(t) = (1/ε)[u_a(t) − u_{a+ε}(t)].

Now, let g(t) be any continuous function for t ≥ 0. Then using (9.4), we have

∫_0^∞ g(t) f_ε(t) dt = (1/ε) ∫_a^{a+ε} g(t) dt = g(t_0),

where a < t_0 < a + ε, by the mean value theorem of integral calculus. So in the limit ε → 0, we get

∫_0^∞ g(t) δ_a(t) dt = g(a).

It means

L[δ_a(t)] = e^{−pa} and L[δ(t)] = 1.
Examples

Suppose the LDE

y'' + a y' + b y = f(t), y(0) = y'(0) = 0,    (9.5)

describes a mechanical or electrical system at rest in its state of equilibrium. Here f(t) can be an impressed external force F or an electromotive force E that begins to act at t = 0. If A(t) is the solution (output or indicial response) for the input f(t) = u(t) (the unit step function), then

A'' + a A' + b A = u(t).

Taking the Laplace transform of both sides, we get

p² L[A] − p A(0) − A'(0) + a(p L[A] − A(0)) + b L[A] = 1/p

⟹ L[A] = 1/(p(p² + ap + b)) = 1/(p Z(p)),    (9.6)

where Z(p) = p² + ap + b.

Similarly, taking the Laplace transform of (9.5), we get

L[y] = L[f(t)]/Z(p) = p L[A] L[f(t)] = p L[∫_0^t A(t − τ) f(τ) dτ] = L[d/dt ∫_0^t A(t − τ) f(τ) dτ].    (9.7)

Taking the inverse Laplace transform, we have

y(t) = d/dt ∫_0^t A(t − τ) f(τ) dτ = ∫_0^t A'(t − τ) f(τ) dτ    (∵ A(0) = 0).    (9.8)

Thus, finally the solution of (9.5) for the general input f(t) is given by the following two formulas:

y(t) = ∫_0^t A'(t − τ) f(τ) dτ,    (9.9)

y(t) = ∫_0^t f'(t − τ) A(τ) dτ + f(0) A(t).    (9.10)

In case the input is f(t) = δ(t), the unit impulse function, let us denote the solution (output or impulsive response) of (9.5) by h(t), so that L[h(t)] = 1/Z(p) and

L[A(t)] = 1/(p Z(p)) = L[h(t)]/p.    (9.11)
Ex. Solve y'' + y' − 6y = 2e^{3t}, y(0) = 0, y'(0) = 1.

Sol. Taking Laplace transforms, (p² + p − 6) L[y] = 2/(p − 3) + 1, so that

L[y] = 2/[(p − 3)(p − 2)(p + 3)] + 1/[(p − 2)(p + 3)].

Making partial fractions and inverting, we obtain

y(t) = (1/3) e^{3t} − (2/15) e^{−3t} − (1/5) e^{2t}.

Formula (9.11) can also be used for the solution, where h(t) = L^{−1}[1/(p² + p − 6)] = (1/5)(e^{2t} − e^{−3t}).
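That h(t) = (1/5)(e^{2t} − e^{−3t}) is the impulsive response for Z(p) = p² + p − 6 is equivalent to h'' + h' − 6h = 0 with h(0) = 0 and h'(0) = 1. A finite-difference check (illustrative sketch; the step size d and sample point are ad hoc):

```python
import math

def h(t):
    # candidate impulsive response for Z(p) = p^2 + p - 6
    return (math.exp(2 * t) - math.exp(-3 * t)) / 5

def first(t, d=1e-5):
    return (h(t + d) - h(t - d)) / (2 * d)      # central difference for h'

def second(t, d=1e-5):
    return (h(t + d) - 2 * h(t) + h(t - d)) / d ** 2   # central difference for h''

t = 0.7
print(second(t) + first(t) - 6 * h(t))   # residual of h'' + h' - 6h, ~0
print(h(0.0), first(0.0))                # initial conditions: 0 and ~1
```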
Chapter 10
Systems of First Order Equations
In this chapter, we shall learn to solve the system of two first order differential equations of the
type:
dx/dt = a_1 x + b_1 y + f_1(t),
dy/dt = a_2 x + b_2 y + f_2(t),    (10.1)

where a_1, a_2, b_1 and b_2 are constants. This system is said to be homogeneous if f_1(t) = 0 and f_2(t) = 0, otherwise it is non-homogeneous.
Two LI solutions of (10.1) (with f_1 = f_2 = 0) are obtained as follows. First write the system into operator form by setting D = d/dt, so that

(D − a_1) x − b_1 y = 0,
−a_2 x + (D − b_2) y = 0.    (10.2)

Then assume that x = A e^{mt}, y = B e^{mt}, to get

(m − a_1) A − b_1 B = 0,
−a_2 A + (m − b_2) B = 0.    (10.3)

For a non-trivial solution (A, B), the determinant of this system must vanish, which gives the auxiliary equation

(m − a_1)(m − b_2) − a_2 b_1 = 0.    (10.4)

For example, for the homogeneous system dx/dt = x + y, dy/dt = 4x − 2y, we get

(m − 1) A − B = 0,
−4 A + (m + 2) B = 0,    (10.5)

so that (m − 1)(m + 2) − 4 = 0, i.e. m² + m − 6 = 0, or m = −3, 2.
Consider now the non-homogeneous system

dx/dt = a_1 x + b_1 y + f_1(t),
dy/dt = a_2 x + b_2 y + f_2(t).    (10.6)

Its general solution is x = c_1 x_1(t) + c_2 x_2(t) + x_p(t), y = c_1 y_1(t) + c_2 y_2(t) + y_p(t), where x = c_1 x_1(t) + c_2 x_2(t), y = c_1 y_1(t) + c_2 y_2(t) is the general solution of the corresponding homogeneous system

dx/dt = a_1 x + b_1 y, dy/dt = a_2 x + b_2 y,

and x = x_p(t), y = y_p(t) is a particular solution of (10.6). We already know how to solve the corresponding homogeneous system of (10.6). So we need to know how to find a particular solution of (10.6).

We construct a particular solution using the solution of the corresponding homogeneous system of (10.6) by varying the unknown parameters c_1 and c_2 with two unknown functions v_1(t) and v_2(t), respectively. So we assume a particular solution of the form

x = v_1 x_1 + v_2 x_2, y = v_1 y_1 + v_2 y_2.

Substituting this particular solution into (10.6), we get

v_1' x_1 + v_2' x_2 = f_1(t), v_1' y_1 + v_2' y_2 = f_2(t).

Solving this linear system for v_1' and v_2' and integrating, we have

v_1 = ∫ [f_1 y_2 − f_2 x_2]/[x_1 y_2 − x_2 y_1] dt and v_2 = ∫ [x_1 f_2 − y_1 f_1]/[x_1 y_2 − x_2 y_1] dt.

Ex. Solve the system dx/dt = x + y + 5t − 2, dy/dt = 4x − 2y + 8t + 8.

Sol. The corresponding homogeneous system is the one considered above. For m = −3, (10.5) gives B = −4A, and for m = 2 it gives B = A. So two LI solutions of the homogeneous system are

x_1 = e^{−3t}, y_1 = −4 e^{−3t} and x_2 = e^{2t}, y_2 = e^{2t}.
Here x_1 y_2 − x_2 y_1 = e^{−3t} e^{2t} + 4 e^{2t} e^{−3t} = 5 e^{−t}. So, with f_1 = 5t − 2 and f_2 = 8t + 8,

v_1 = ∫ [(5t − 2) e^{2t} − (8t + 8) e^{2t}]/(5 e^{−t}) dt = −(1/5) ∫ e^{3t}(3t + 10) dt = −(1/5)(t + 3) e^{3t},

v_2 = ∫ [e^{−3t}(8t + 8) + 4 e^{−3t}(5t − 2)]/(5 e^{−t}) dt = (28/5) ∫ t e^{−2t} dt = −(7/5)(2t + 1) e^{−2t}.

So we have the particular solution

x_p = v_1 x_1 + v_2 x_2 = −(1/5)(t + 3) − (7/5)(2t + 1) = −3t − 2,
y_p = v_1 y_1 + v_2 y_2 = (4/5)(t + 3) − (7/5)(2t + 1) = −2t + 1.    (10.7)
Alternatively, let us eliminate y from the two equations of the system, written in operator form as

(D − 1) x − y = 5t − 2,
−4 x + (D + 2) y = 8t + 8.

Operating D + 2 on both sides of the first equation, and then adding to the second equation, we get

[(D + 2)(D − 1) − 4] x = (D + 2)(5t − 2) + 8t + 8 = 5 + 10t − 4 + 8t + 8 = 18t + 9.

It is a second order non-homogeneous LDE with constant coefficients in x and t, with AE given by

m² + m − 6 = 0.

Its roots are m = −3, 2. So we have

x_h = c_1 e^{−3t} + c_2 e^{2t}.

To find x_p, we have

x_p = [1/(D² + D − 6)](18t + 9) = −(1/6)[1 + (D² + D)/6 + ....](18t + 9) = −(1/6)(18t + 9 + 3) = −3t − 2.

Thus,

x = x_h + x_p = c_1 e^{−3t} + c_2 e^{2t} − 3t − 2.

We can get y by substituting this value of x into the first equation of the given system. For,

y = dx/dt − x − 5t + 2 = d/dt (c_1 e^{−3t} + c_2 e^{2t} − 3t − 2) − (c_1 e^{−3t} + c_2 e^{2t} − 3t − 2) − 5t + 2
    = −4 c_1 e^{−3t} + c_2 e^{2t} − 2t + 1.
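The particular solution found above can be checked by direct substitution into the system (a small sketch; the sample values of t are arbitrary, and the residuals come out exactly zero):

```python
def xp(t):
    return -3 * t - 2      # particular solution found above

def yp(t):
    return -2 * t + 1

# residuals of dx/dt = x + y + 5t - 2 and dy/dt = 4x - 2y + 8t + 8
for t in (0.0, 1.0, -2.5):
    r1 = -3 - (xp(t) + yp(t) + 5 * t - 2)
    r2 = -2 - (4 * xp(t) - 2 * yp(t) + 8 * t + 8)
    print(r1, r2)
```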