PDXScholar
University Honors Theses, University Honors College
5-26-2018

Recommended Citation
Deal, Joella Rae, "Basics of Bessel Functions" (2018). University Honors Theses. Paper 546.
DOI: 10.15760/honors.552
Basics of Bessel Functions

by
Joella Deal

Bachelor of Science in University Honors and Mathematics

Thesis Adviser

2018
Abstract
This paper is a deep exploration of the project Bessel Functions by Martin Kreh
of Pennsylvania State University. We begin with a derivation of the Bessel functions
J_a(x) and Y_a(x), which are two solutions to Bessel's differential equation. Next we
find the generating function and use it to prove some useful standard results and
recurrence relations. We use these recurrence relations to examine the behavior of the
Bessel functions at some special values. Then we use contour integration to derive their
integral representations, from which we can produce their asymptotic formulae. We
also show an alternate method for deriving the first Bessel function using the generating
function. Finally, a graph created using Python illustrates the Bessel functions of order
0, 1, 2, 3, and 4.
Bessel functions are the standard form of the solutions to Bessel's differential equation,

    x^2 ∂^2y/∂x^2 + x ∂y/∂x + (x^2 − n^2) y = 0,    (1)

where n is the order of the Bessel equation. It is often obtained by separation of variables applied to the wave equation

    ∂^2u/∂t^2 = c^2 ∇^2 u    (2)

in cylindrical or spherical coordinates. For this reason, the Bessel functions fall under the umbrella of cylindrical (or spherical) harmonics when n is an integer or half-integer, and we see them appear in the separable solutions to both the Helmholtz equation and Laplace's equation in cylindrical or spherical coordinates. Since the Bessel equation is a second-order differential equation, it has two linearly independent solutions, J_n(x) and Y_n(x).
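As a numerical sanity check (not part of the thesis itself), we can confirm that standard implementations of these two solutions satisfy equation (1). The sketch below assumes SciPy is available and plugs scipy.special.jv and yv into the left-hand side of (1), approximating the derivatives by central differences.

```python
# Residual of Bessel's equation (1) for a candidate solution f(n, x),
# using central differences; it should be ~0 for true solutions.
from scipy.special import jv, yv

def bessel_residual(f, n, x, h=1e-4):
    d1 = (f(n, x + h) - f(n, x - h)) / (2 * h)
    d2 = (f(n, x + h) - 2 * f(n, x) + f(n, x - h)) / h**2
    return x**2 * d2 + x * d1 + (x**2 - n**2) * f(n, x)

for n in range(3):
    for x in [1.0, 2.5, 7.0]:
        assert abs(bessel_residual(jv, n, x)) < 1e-4
        assert abs(bessel_residual(yv, n, x)) < 1e-4
```

Both J_n and Y_n pass, illustrating that each is indeed a solution.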
We seek a solution in the form of the power series

    y(x) = x^n Σ_{k=0}^∞ b_k x^k,    (3)

which we will plug into the Bessel equation (1) and solve for its necessary components. For convenience, the first and second partial derivatives of this power series are:

    y′(x) = n x^{n−1} Σ_{k=0}^∞ b_k x^k + x^n Σ_{k=0}^∞ k b_k x^{k−1}    (4)
and
    y″(x) = n(n−1) x^{n−2} Σ_{k=0}^∞ b_k x^k + 2n x^{n−1} Σ_{k=0}^∞ k b_k x^{k−1} + x^n Σ_{k=0}^∞ k(k−1) b_k x^{k−2}    (5)
as given by the product rule. We next multiply equation (4) by x and equation (5) by x^2 so that we can easily plug them into (1). (Note that the x or x^2 can be inserted in front of each term or be distributed into the summation.) We then have
    x y′(x) = n x^n Σ_{k=0}^∞ b_k x^k + x^n Σ_{k=0}^∞ k b_k x^k    (6)
and
    x^2 y″(x) = n(n−1) x^n Σ_{k=0}^∞ b_k x^k + 2n x^n Σ_{k=0}^∞ k b_k x^k + x^n Σ_{k=0}^∞ k(k−1) b_k x^k.    (7)
Similarly, the x^2 y term of (1) gives

    x^2 y(x) = x^n Σ_{k=2}^∞ b_{k−2} x^k.    (8)

Note that because we are solving for the appropriate b_k, we can artificially set b_{−2} := b_{−1} := 0. This will become useful when simplifying the full Bessel differential equation below. Plugging equations (6), (7), and (8) into (1), we get
    n(n−1) x^n Σ_{k=0}^∞ b_k x^k + 2n x^n Σ_{k=0}^∞ k b_k x^k + x^n Σ_{k=0}^∞ k(k−1) b_k x^k + n x^n Σ_{k=0}^∞ b_k x^k
        + x^n Σ_{k=0}^∞ k b_k x^k + x^n Σ_{k=2}^∞ b_{k−2} x^k − n^2 x^n Σ_{k=0}^∞ b_k x^k = 0,    (9)
equation (13) in indeterminate form. We can choose a convenient value for b_0 as needed. First, let us examine the case where −n ∉ ℕ (we will take care of the −n ∈ ℕ case later). From equation (13) we have
    b_{2k} = −b_{2k−2} / (2k(2k + 2n)) = −b_{2k−2} / (4k(n + k)).    (14)
We will perform induction on this equation to find a general formula for b2k .
Assume as inductive hypothesis that

    b_{2l} = (−1)^l b_0 / ( 4^l l! ∏_{m=n+1}^{n+l} m ).    (17)

Then

    b_{2(l+1)} = (−1) b_{2l} / ( 4(l+1)(n+l+1) )
               = (−1)^{l+1} b_0 / ( 4^{l+1} (l+1)! ∏_{m=n+1}^{n+l+1} m ).    (18)
Therefore we know that equation (17) holds in general for all positive integers k.
Before we can plug equation (17) back into the original power series (3), we need to choose
a convenient value for b0 . We know that the summation in the final power series equation
needs to be convergent in order for it to be a solution to the Bessel equation (1). In addition,
it would be advantageous to use the factorial (n + k)! in the denominator of bk (instead of
having to terminate the product at (n + 1)). For these reasons, we will choose
    b_0 = 1 / (2^n n!).    (19)
Plugging this into equation (17), we have
    b_{2k} = (−1)^k / ( 2^n 4^k k! (n+k)(n+k−1)⋯(n+1) n! ),    (20)

which simplifies to

    b_{2k} = (−1)^k / ( 2^n 4^k k! (n+k)! ).    (21)
Now we are finally ready to plug our formula for b2k into the full power series (3). We have
    y(x) = x^n Σ_{k=0}^∞ (−1)^k / ( 2^n 4^k k! (n+k)! ) x^{2k}.    (22)
Observe that the x^{2k} term at the end comes from the b_{2k} in equation (17). That is, we want the summation term

    b_0 x^0 + b_2 x^2 + b_4 x^4 + … = Σ_{k=0}^∞ b_{2k} x^{2k},    (23)
so we must ensure that the power of x is the same as the subscript of b. Rearranging, we
have
    y(x) = (x/2)^n Σ_{k=0}^∞ (−1)^k / ( k! (n+k)! ) (x/2)^{2k}.    (24)
Clearly, this power series is only meaningful if it is convergent. We will check by the well-known ratio test. If it passes, we have found a solution to Bessel's equation. We take

    ρ = lim_{k→∞} | b_{k+1} / b_k |
      = lim_{k→∞} | [ (−1)^{k+1} / ((k+1)!(n+k+1)!) ] (x/2)^{2(k+1)} / [ (−1)^k / (k!(n+k)!) ] (x/2)^{2k} |    (25)
      = lim_{k→∞} | −1 / ((k+1)(n+k+1)) | (x/2)^2 = 0.

Since ρ < 1, the series converges for every x. We have indeed found a solution to equation (1).
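The convergent series (24) can also be evaluated directly. Here is a minimal pure-Python sketch; the truncation level K = 40 is an arbitrary choice, ample for moderate x because the terms shrink factorially.

```python
# Partial sum of equation (24): (x/2)^n * sum_k (-1)^k / (k!(n+k)!) * (x/2)^(2k)
from math import factorial

def jn_series(n, x, K=40):
    return sum((-1)**k / (factorial(k) * factorial(n + k)) * (x / 2)**(2 * k + n)
               for k in range(K))

# J_1(1) = 0.44005058..., and x = 2.40482... is the first zero of J_0.
assert abs(jn_series(1, 1.0) - 0.4400505857449335) < 1e-9
assert abs(jn_series(0, 2.404825557695773)) < 1e-9
```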
However, we also want to construct a solution for the complex order v, not just for the
order n ∈ N. We must find a continuous function with which we can replace the factorial
in equation (24).
The most versatile way to extend the factorial function to non-integer and complex numbers is the Gamma function. The reciprocal of the Gamma function also happens to be holomorphic everywhere, meaning that it is infinitely differentiable and equal to its own Taylor series at every point. Since we are applying the Gamma function to the denominator of equation (24), this property of its reciprocal will be especially convenient.
The Gamma function satisfies Γ(n) = (n−1)! (the factorial function with its argument shifted down by 1) if n is a positive integer. For complex numbers and non-integers with positive real part, the Gamma function corresponds to the Mellin transform of the negative exponential function,

    Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt.
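The Mellin-transform representation can be checked numerically; this sketch assumes SciPy's quad routine is available.

```python
# Gamma(z) = integral_0^infty t^(z-1) e^(-t) dt, for Re(z) > 0.
import math
from scipy.integrate import quad

def gamma_integral(z):
    val, _ = quad(lambda t: t**(z - 1) * math.exp(-t), 0, math.inf)
    return val

assert abs(gamma_integral(5) - math.factorial(4)) < 1e-6     # Gamma(n) = (n-1)!
assert abs(gamma_integral(0.5) - math.sqrt(math.pi)) < 1e-6  # Gamma(1/2) = sqrt(pi)
```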
Making this modification to equation (24), we now have
    J_v(x) = (x/2)^v Σ_{k=0}^∞ (−1)^k / ( Γ(k+1) Γ(v+k+1) ) (x/2)^{2k}.    (29)
This is not valid as written for v = −n with n ∈ ℕ, where the terms Γ(−n+k+1) with k < n are undefined. In this case, we will begin the summation at k = n to bypass any undefined Gamma terms (since k = n corresponds to Γ(−n+n+1) = Γ(1)):

    J_{−n}(x) = (x/2)^{−n} Σ_{k=n}^∞ (−1)^k / ( Γ(k+1) Γ(−n+k+1) ) (x/2)^{2k}
              = (x/2)^{−n} Σ_{k=0}^∞ (−1)^{k+n} / ( Γ(n+k+1) Γ(−n+n+k+1) ) (x/2)^{2(k+n)}    (30)
              = (x/2)^{−n} (x/2)^{2n} Σ_{k=0}^∞ (−1)^k (−1)^n / ( Γ(n+k+1) Γ(k+1) ) (x/2)^{2k}
              = (−1)^n J_n(x).
This will also solve the Bessel differential equation (1). So we have found our first Bessel
function, Jn (x) or Jv (x).
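The symmetry (30) is easy to confirm numerically (again assuming SciPy):

```python
# J_{-n}(x) = (-1)^n J_n(x) for integer n, equation (30).
from scipy.special import jv

for n in range(5):
    for x in [0.5, 3.0, 10.0]:
        assert abs(jv(-n, x) - (-1)**n * jv(n, x)) < 1e-10
```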
We will now determine a second, linearly independent solution. Let us begin by examining
the behavior of Jv (x) as x → 0.
since

    lim_{x→0} (x/2)^v = 0.    (32)

Step Two. Let v = 0. Though it may be difficult to see at first, the limit of J_0(x) as x → 0 is 1 because of the property that 0^0 = 1, since our summation will become

    lim_{x→0} Σ_{k=0}^∞ (−1)^k / ( Γ(k+1) Γ(k+1) ) (x/2)^{2k} = 1 + 0 + 0 + 0 + …    (34)
Step Three. Let Re(v) < 0, v ∉ ℤ (since the Gamma function is undefined at the negative integers). Then we have

    lim_{x→0} (2/x)^{−v} Σ_{k=0}^∞ (−1)^k / ( Γ(k+1) Γ(k+1+v) ) (x/2)^{2k} = ±∞.    (35)
Summarized, the results from steps one, two, and three are:

    lim_{x→0} J_v(x) = { 0,  Re(v) > 0;   1,  v = 0;   ±∞,  Re(v) < 0, v ∉ ℤ }.    (36)
We can see here that J_v(x) and J_{−v}(x) are two linearly independent solutions (i.e. neither can be expressed as a scalar multiple of the other) if v ∉ ℤ. If v ∈ ℤ, they are linearly dependent (recall equation (30)). Because of this property (and the homogeneity of Bessel's differential equation) any linear combination of J_v and J_{−v}, where v ∉ ℤ, is also a solution. We build the equation

    Y_v(x) = ( cos(vπ) J_v(x) − J_{−v}(x) ) / sin(vπ),    (37)
for v ∉ ℤ. Notice that both the numerator and the denominator vanish when the order is n ∈ ℕ_0, since cos(nπ) = (−1)^n and J_{−n}(x) = (−1)^n J_n(x), leaving an indeterminate form. For n ∈ ℤ, we let

    Y_n(x) := lim_{v→n} Y_v(x).    (38)

It can be shown by L'Hôpital's rule that this limit exists (the calculation is too lengthy to be included here, but it follows from the properties of the digamma function, which gives the relationship between the Gamma function and its derivative).
We need to check that the Wronskian determinant of J_v(z) and Y_v(z) does not vanish for any v, z ∈ ℂ. Abel's theorem says that if y_1(z) and y_2(z) are two solutions to the differential equation p(z)y″ + p′(z)y′ + r(z)y = 0 (self-adjoint form), then the Wronskian of the two solutions is of the form C/p(z), where C is a constant which does not depend on z. Notice that we can write Bessel's equation in the form z y″ + y′ + (z − v^2/z) y = 0, so that it is self-adjoint with p(z) = z. Then the Wronskian determinant of J_v(z) and Y_v(z) is of the form A_v/z, where A_v does not depend on z. We omit the detailed calculation here, but the Wronskian determinant of J_v(z) and Y_v(z) turns out to be

    W(J_v(z), Y_v(z)) = J_{v+1}(z) Y_v(z) − J_v(z) Y_{v+1}(z) = 2/(πz),    (39)

which does not vanish for any z ∈ ℂ. Therefore, J_v and Y_v are linearly independent for all v ∈ ℂ. We have successfully found two linearly independent solutions to the Bessel differential equation.
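The identity (39) can be spot-checked with SciPy's jv and yv, including at non-integer orders:

```python
# Wronskian combination J_{v+1} Y_v - J_v Y_{v+1} = 2/(pi z), equation (39).
import math
from scipy.special import jv, yv

for v in [0.0, 0.5, 1.0, 2.3]:
    for x in [1.0, 4.0, 9.0]:
        w = jv(v + 1, x) * yv(v, x) - jv(v, x) * yv(v + 1, x)
        assert abs(w - 2 / (math.pi * x)) < 1e-10
```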
2 Properties of Bessel Functions
Now that we have derived the two Bessel functions, we will prove some of their fundamental
properties.
Several properties of the Bessel functions can be proven using their generating function. We
will begin this section by introducing the concept of generating functions and showing that
one exists for the Bessel functions.
That is, the function e^{(x/2)(z − z^{−1})} is the generating function of the first Bessel function; Proposition 2.1 states that

    e^{(x/2)(z − z^{−1})} = Σ_{n=−∞}^∞ J_n(x) z^n.

(Note: this power series is actually a Laurent series, since it includes terms of negative degree. This will be important later.)
We have

    e^{(x/2)z − (x/2)z^{−1}} = e^{(x/2)z} e^{−(x/2)z^{−1}}
        = Σ_{m=0}^∞ ((x/2)z)^m / m!  ·  Σ_{k=0}^∞ (−x/(2z))^k / k!
        = Σ_{m=0}^∞ (x/2)^m z^m / m!  ·  Σ_{k=0}^∞ (−1)^k (x/2)^k z^{−k} / k!.
Applying the Cauchy product formula, where

    c_n = Σ_{k=0}^n a_k b_{n−k},

and collecting the coefficient of each power z^n (here n = m − k may be negative), we obtain

    = Σ_{n=−∞}^∞ ( Σ_{k=0}^∞ (−1)^k / ( (n+k)! k! ) (x/2)^{n+2k} ) z^n
    = Σ_{n=−∞}^∞ ( Σ_{k=0}^∞ (−1)^k / ( (n+k)! k! ) (x/2)^{2k} (x/2)^n ) z^n
    = Σ_{n=−∞}^∞ J_n(x) z^n.
We will now use the generating function to prove some standard results.
Lemma 2.1.1. We have

    cos(x) = J_0(x) + 2 Σ_{n=1}^∞ (−1)^n J_{2n}(x),    (43)

    sin(x) = 2 Σ_{n=0}^∞ (−1)^n J_{2n+1}(x),    (44)

    1 = J_0(x) + 2 Σ_{n=1}^∞ J_{2n}(x).    (45)
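Truncating the three sums of Lemma 2.1.1 gives a quick numerical confirmation (SciPy assumed; 30 terms are ample here, since J_n(x) decays rapidly once n exceeds x):

```python
# Equations (43)-(45) with the sums truncated at N terms.
import math
from scipy.special import jv

x, N = 1.7, 30
cos_sum = jv(0, x) + 2 * sum((-1)**n * jv(2 * n, x) for n in range(1, N))
sin_sum = 2 * sum((-1)**n * jv(2 * n + 1, x) for n in range(N))
one_sum = jv(0, x) + 2 * sum(jv(2 * n, x) for n in range(1, N))

assert abs(cos_sum - math.cos(x)) < 1e-10
assert abs(sin_sum - math.sin(x)) < 1e-10
assert abs(one_sum - 1.0) < 1e-10
```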
Proof. Setting z = e^{iφ} in the generating function, and using (e^{iφ} − e^{−iφ})/2 = i sin φ, we get

    e^{ix sin φ} = Σ_{n=−∞}^∞ J_n(x) e^{inφ}
                 = Σ_{n=−∞}^∞ J_n(x) ( cos(nφ) + i sin(nφ) ).

Comparing real and imaginary parts gives

    cos(x sin φ) = Σ_{n=−∞}^∞ J_n(x) cos(nφ)

and

    sin(x sin φ) = Σ_{n=−∞}^∞ J_n(x) sin(nφ).

Setting φ = π/2, we have

    cos(x) = Σ_{n=−∞}^∞ J_n(x) cos(nπ/2).

Plugging φ = π/2 into sin(x sin φ), we get

    sin(x) = Σ_{n=−∞}^∞ J_n(x) sin(nπ/2).
Case One. Suppose n ≡ 1 (mod 4). Then −n ≡ 3 (mod 4), so sin(nπ/2) = 1 and sin(−nπ/2) = −1. So the sum of the n and −n terms becomes

    J_n(x) sin(nπ/2) + J_{−n}(x) sin(−nπ/2) = J_n(x)(1) + J_{−n}(x)(−1)
                                            = J_n(x) + (−1)^2 J_n(x)
                                            = 2 J_n(x).

Case Two. Suppose n ≡ 2 (mod 4). Then sin(nπ/2) = 0, so we no longer have the term in the summation. The same applies for the case n ≡ 0 (mod 4).

Case Three. Suppose n ≡ 3 (mod 4). Then sin(nπ/2) = −1 and sin(−nπ/2) = 1. So the sum of the n and −n terms becomes

    J_n(x) sin(nπ/2) + J_{−n}(x) sin(−nπ/2) = J_n(x)(−1) + J_{−n}(x)(1)
                                            = J_n(x)(−1) + (−1) J_n(x)
                                            = −2 J_n(x).
Finally, setting z = 1 (i.e. φ = 0) in the generating function gives 1 = Σ_{n=−∞}^∞ J_n(x) = J_0(x) + 2 Σ_{n=1}^∞ J_{2n}(x), since odd values of n cause the terms to cancel in pairs and even values pair to 2J_n(x). This is the third result.
Lemma 2.1.2. We have J_n(−x) = (−1)^n J_n(x) = J_{−n}(x), ∀ n ∈ ℤ.
Proof. We make the change of variables x → −x and z → z^{−1} and insert into the generating function:

    Σ_{n=−∞}^∞ J_n(−x) z^{−n} = e^{(−x/2)(z^{−1} − z)}
        = e^{−x/(2z)} e^{(x/2)z}
        = Σ_{m=0}^∞ (−x/(2z))^m / m!  ·  Σ_{k=0}^∞ ((x/2)z)^k / k!
        = Σ_{m=0}^∞ (x/2)^m (−1)^m z^{−m} / m!  ·  Σ_{k=0}^∞ (x/2)^k z^k / k!
        = Σ_{n=−∞}^∞ ( Σ_{m−k=n, m,k≥0} (−1)^m (x/2)^{m+k} / (m! k!) ) z^{−n}
        = Σ_{n=−∞}^∞ ( Σ_{m=0}^∞ (−1)^m / ( m! (m−n)! ) (x/2)^{2m} (x/2)^{−n} ) z^{−n}
        = Σ_{n=−∞}^∞ J_{−n}(x) z^{−n}.

So we have

    Σ_{n=−∞}^∞ J_n(−x) z^{−n} = Σ_{n=−∞}^∞ J_{−n}(x) z^{−n},

and comparing coefficients gives J_n(−x) = J_{−n}(x) = (−1)^n J_n(x).
We have

    d/dx ( x^{−n} J_n(x) ) = d/dx Σ_{k=0}^∞ (−1)^k x^{2k} / ( k! (n+k)! 2^{2k+n} )
        = Σ_{k=0}^∞ (−1)^k (2k) x^{2k−1} / ( k! (n+k)! 2^{2k+n} )
        = Σ_{k=0}^∞ (−1)^k k x^{2k−1} / ( k! (n+k)! 2^{2k+n−1} )
        = Σ_{k=1}^∞ (−1)^k x^{2k−1} / ( (k−1)! (n+k)! 2^{2k+n−1} )
        = −x^{−n} Σ_{k=0}^∞ (−1)^k / ( k! (n+1+k)! ) (x/2)^{2k+n+1}
        = −x^{−n} J_{n+1}(x).
Similarly,

    d/dx ( x^n J_n(x) ) = d/dx ( x^n Σ_{k=0}^∞ (−1)^k / ( k! (n+k)! ) (x/2)^{2k+n} )
        = d/dx Σ_{k=0}^∞ (−1)^k x^{2k+2n} / ( k! (n+k)! 2^{2k+n} )
        = Σ_{k=0}^∞ (−1)^k (2k+2n) x^{2k+2n−1} / ( k! (n+k)! 2^{2k+n} )
        = Σ_{k=0}^∞ (−1)^k x^{2k+2n−1} / ( k! (n−1+k)! 2^{2k+n−1} )
        = x^n Σ_{k=0}^∞ (−1)^k / ( k! (k+n−1)! ) (x/2)^{2k+n−1}
        = x^n J_{n−1}(x).
Proof. We take

    d/dx J_n(x) = d/dx ( x^n · x^{−n} J_n(x) ).

Applying the results from Proposition 2.2 and the product rule, we have

    d/dx J_n(x) = n x^{n−1} · x^{−n} J_n(x) + x^n · d/dx ( x^{−n} J_n(x) )
                = n x^{−1} J_n(x) + x^n ( −x^{−n} J_{n+1}(x) )
                = n x^{−1} J_n(x) − J_{n+1}(x).

Similarly, we take

    d/dx J_n(x) = d/dx ( x^{−n} · x^n J_n(x) )
                = −n x^{−n−1} · x^n J_n(x) + x^{−n} · d/dx ( x^n J_n(x) )
                = −n x^{−1} J_n(x) + x^{−n} · x^n J_{n−1}(x)
                = −n x^{−1} J_n(x) + J_{n−1}(x).

Adding these two expressions, we get the first result from the lemma:

    2 d/dx J_n(x) = J_{n−1}(x) − J_{n+1}(x).

Subtracting the same expressions, we get the second result:

    (2n/x) J_n(x) = J_{n−1}(x) + J_{n+1}(x).
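Both recurrence relations can be verified with SciPy, whose jvp routine returns the derivative of J_v:

```python
# 2 J_n'(x) = J_{n-1}(x) - J_{n+1}(x)  and  (2n/x) J_n(x) = J_{n-1}(x) + J_{n+1}(x)
from scipy.special import jv, jvp

for n in range(1, 4):
    for x in [0.7, 2.0, 6.5]:
        assert abs(2 * jvp(n, x) - (jv(n - 1, x) - jv(n + 1, x))) < 1e-10
        assert abs(2 * n / x * jv(n, x) - (jv(n - 1, x) + jv(n + 1, x))) < 1e-10
```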
Remark 2.2.1. Lemma 2.2.1 can be proved similarly by differentiating the generating function of J_n(x) with respect to x and z, one at a time, and comparing the coefficients. Adding the resulting relations and multiplying by x^n/2 will produce the second result from Proposition 2.2.
Remark 2.2.2. Note that the relation shown in equation (51) also holds for all v ∈ R, not
just for n ∈ N. This will become useful in the next section during our discussion of Lommel
polynomials.
Lemma 2.2.2. For any n ∈ ℤ we have

    ∫ x^{n+1} J_n(x) dx = x^{n+1} J_{n+1}(x) + C    (51)

and

    ∫ x^{−n+1} J_n(x) dx = −x^{−n+1} J_{n−1}(x) + D.    (52)
To obtain the second equation, recall from Lemma 2.1.2 that J_{−n}(x) = (−1)^n J_n(x), or equivalently, J_n(x) = (−1)^n J_{−n}(x). We apply this to the integral below:

    ∫ x^{−n+1} J_n(x) dx = ∫ x^{−n+1} (−1)^n J_{−n}(x) dx = (−1)^n ∫ x^{−n+1} J_{−n}(x) dx
Lemma 2.2.3. We have

    Σ_{n=−∞}^∞ J_n^2(x) = 1    (53)

and, for m ≠ 0,

    Σ_{n=−∞}^∞ J_{n+m}(x) J_n(x) = 0.    (54)
Proof. We have

    e^{(x/2)(z − z^{−1})} e^{(x/2)(z^{−1} − z)} = e^0 = 1,

and by Proposition 2.1,

    1 = Σ_{k=−∞}^∞ J_k(x) z^k  ·  Σ_{n=−∞}^∞ J_n(x) z^{−n}.

Letting m = k − n we write

    1 = Σ_{m=−∞}^∞ ( Σ_{n=−∞}^∞ J_{n+m}(x) J_n(x) ) z^m
      = Σ_{n=−∞}^∞ J_n^2(x) z^0 + Σ_{m≠0} ( Σ_{n=−∞}^∞ J_{n+m}(x) J_n(x) ) z^m.

Since the left-hand side is the constant 1, its z^0 coefficient is 1 and every other coefficient is 0; comparing coefficients gives

    Σ_{n=−∞}^∞ J_n^2(x) = 1    and    Σ_{n=−∞}^∞ J_{n+m}(x) J_n(x) = 0

for all m ≠ 0.
We will now examine the equation 1 = Σ_{n=−∞}^∞ J_n^2(x) to find an equivalent form. We can write this summation as 1 = J_0^2(x) + Σ_{n=−∞}^{−1} J_n^2(x) + Σ_{n=1}^∞ J_n^2(x).

Step One. Fix any even n ∈ ℤ. Lemma 2.1.2 gives that J_n(x) = J_{−n}(x). Squaring both sides, we get that J_n^2(x) = J_{−n}^2(x). Then the sum of the n and −n terms is 2J_n^2(x).

Step Two. Fix any odd n ∈ ℤ. Then we have that J_n(x) = (−1)J_{−n}(x). Squaring both sides, we again have that J_n^2(x) = J_{−n}^2(x). Then J_n^2(x) + J_{−n}^2(x) = 2J_n^2(x).

In either case the n and −n terms pair up, so the equivalent form is 1 = J_0^2(x) + 2 Σ_{n=1}^∞ J_n^2(x).
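The equivalent form can be confirmed by truncating the sum (SciPy assumed):

```python
# J_0(x)^2 + 2 * sum_{n>=1} J_n(x)^2 = 1, truncated at n = 40.
from scipy.special import jv

for x in [0.5, 3.0, 8.0]:
    total = jv(0, x)**2 + 2 * sum(jv(n, x)**2 for n in range(1, 40))
    assert abs(total - 1.0) < 1e-10
```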
Lemma 2.2.4. We have

    Σ_{n∈ℤ} J_n(x) = 1.    (55)

Lemma 2.2.5 (addition theorem). We have J_n(x+y) = Σ_{k∈ℤ} J_k(x) J_{n−k}(y). Indeed,

    Σ_{n∈ℤ} J_n(x+y) t^n = e^{(1/2)(x+y)(t − t^{−1})}
        = e^{(1/2)x(t − t^{−1})} e^{(1/2)y(t − t^{−1})}
        = Σ_{k∈ℤ} J_k(x) t^k  ·  Σ_{m∈ℤ} J_m(y) t^m
        = Σ_{n∈ℤ} ( Σ_{k∈ℤ} J_k(x) J_{n−k}(y) ) t^n,

and comparing coefficients of t^n gives the result.
In this section we will examine what the Bessel functions look like when some particular
values are chosen.
Lemma 2.2.6. If v = 1/2 or −1/2 we have

    J_{1/2}(x) = Y_{−1/2}(x) = √(2/(πx)) sin(x)    (57)

and

    J_{−1/2}(x) = −Y_{1/2}(x) = √(2/(πx)) cos(x).    (58)
Proof. Observe the series representation of J_v and apply the properties of the Gamma function. We begin with

    J_{1/2}(x) = √(x/2) Σ_{k=0}^∞ (−1)^k / ( Γ(k+1) Γ(k + 3/2) ) (x/2)^{2k}.

Since Γ(k + 3/2) = (2k+1)! √π / ( k! 2^{2k+1} ), we have

    J_{1/2}(x) = √(x/2) Σ_{k=0}^∞ (−1)^k / ( k! · (2k+1)! √π / (k! 2^{2k+1}) ) (x/2)^{2k}
        = √(x/2) Σ_{k=0}^∞ (−1)^k 2^{2k+1} / ( (2k+1)! √π ) (x/2)^{2k}
        = Σ_{k=0}^∞ (−1)^k 2^{2k+1} / ( (2k+1)! √π ) (x/2)^{2k+1−1/2}
        = √(2/x) Σ_{k=0}^∞ (−1)^k 2^{2k+1} / ( (2k+1)! √π ) (x/2)^{2k+1}
        = √(2/(xπ)) Σ_{k=0}^∞ (−1)^k x^{2k+1} / (2k+1)!
        = √(2/(xπ)) sin(x).

An analogous computation, using Γ(k + 1/2) = (2k)! √π / ( k! 2^{2k} ), gives

    J_{−1/2}(x) = √(2/(xπ)) Σ_{k=0}^∞ (−1)^k x^{2k} / (2k)!
        = √(2/(xπ)) cos(x).
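The closed forms (57) and (58) for half-integer order can be confirmed against SciPy's jv:

```python
# J_{1/2}(x) = sqrt(2/(pi x)) sin(x)  and  J_{-1/2}(x) = sqrt(2/(pi x)) cos(x)
import math
from scipy.special import jv

for x in [0.3, 1.0, 5.0]:
    amp = math.sqrt(2 / (math.pi * x))
    assert abs(jv(0.5, x) - amp * math.sin(x)) < 1e-10
    assert abs(jv(-0.5, x) - amp * math.cos(x)) < 1e-10
```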
Also, deg P_n = deg Q_n = n, P_n(−x) = (−1)^n P_n(x), and Q_n(−x) = (−1)^n Q_n(x).
Proof. From Lemma 2.2.1, equation (51), we had the recurrence formula

    J_{v+1}(x) = (2v/x) J_v(x) − J_{v−1}(x).

We will use this formula and show by induction that

    J_{n+v}(x) = P_n(1/x) J_v(x) − Q_{n−1}(1/x) J_{v−1}(x)

for the P_n and Q_n defined in the lemma. We begin with the base case J_{v+2}(x):

    J_{v+2}(x) = (2v/x) J_{v+1}(x) − J_v(x)
               = (2v/x) ( (2v/x) J_v(x) − J_{v−1}(x) ) − J_v(x)
               = ( (2v/x)^2 − 1 ) J_v(x) − (2v/x) J_{v−1}(x).
We will also show the case J_{v+3}(x):

    J_{v+3}(x) = (2v/x) J_{v+2}(x) − J_{v+1}(x)
               = (2v/x) ( ((2v/x)^2 − 1) J_v(x) − (2v/x) J_{v−1}(x) ) − J_{v+1}(x)
               = ( (2v/x)^3 − (2v/x) ) J_v(x) − (2v/x)^2 J_{v−1}(x) − ( (2v/x) J_v(x) − J_{v−1}(x) )
               = ( (2v/x)^3 − (4v/x) ) J_v(x) − ( (2v/x)^2 − 1 ) J_{v−1}(x).
Now suppose that the formula is true for some n ∈ ℕ. We will show that it must hold for the n+1 case. We have

    J_{n+1+v}(x) = P_n(1/x) J_{v+1}(x) − Q_{n−1}(1/x) J_v(x)
        = P_n(1/x) ( (2v/x) J_v(x) − J_{v−1}(x) ) − Q_{n−1}(1/x) J_v(x)
        = (2v/x) P_n(1/x) J_v(x) − P_n(1/x) J_{v−1}(x) − Q_{n−1}(1/x) J_v(x)
        = ( (2v/x) P_n(1/x) − Q_{n−1}(1/x) ) J_v(x) − P_n(1/x) J_{v−1}(x).

Letting P_{n+1}(1/x) = (2v/x) P_n(1/x) − Q_{n−1}(1/x) and Q_n(1/x) = P_n(1/x), we now have

    J_{n+1+v}(x) = P_{n+1}(1/x) J_v(x) − Q_n(1/x) J_{v−1}(x).

By the principle of mathematical induction, the formula holds for all n ∈ ℕ.
Setting v = 1/2 gives

    J_{k+1/2}(x) = P_k(1/x) J_{1/2}(x) − Q_{k−1}(1/x) J_{−1/2}(x).

Plugging in equations (57) and (58) from Lemma 2.2.6, we get the first result:

    J_{k+1/2}(x) = √(2/(xπ)) ( P_k(1/x) sin(x) − Q_{k−1}(1/x) cos(x) ).
To achieve the second result, we can rewrite our recurrence relation as

    J_{−v−1}(x) = (−2v/x) J_{−v}(x) − J_{−v+1}(x).

We then have

    J_{−v−2}(x) = ( (2v/x)^2 − 1 ) J_{−v}(x) + (2v/x) J_{−v+1}(x)

and

    J_{−v−3}(x) = ( −(2v/x)^3 + (4v/x) ) J_{−v}(x) − ( (2v/x)^2 − 1 ) J_{−v+1}(x).
Continuing to iterate in the same manner as before, we get the formula

    J_{−v−k}(x) = (−1)^k ( P_k(1/x) J_{−v}(x) + Q_{k−1}(1/x) J_{−v+1}(x) ).

Letting v = 1/2, we have the second result:

    J_{−k−1/2}(x) = (−1)^k √(2/(xπ)) ( P_k(1/x) cos(x) + Q_{k−1}(1/x) sin(x) ).
Remark 2.2.3. The polynomials P_n and Q_n in Lemma 2.2.7 are called Lommel polynomials and were introduced by the physicist Eugen von Lommel (1837–1899). They solve the recurrence relation P_{n+1}(1/x) = (2v/x) P_n(1/x) − Q_{n−1}(1/x), with Q_n = P_n, exhibited in the proof above.
2.3 Integral Representations
The purpose of this section is to give the integral representations of each of our two Bessel
functions. These will aid us later on in our discussion of asymptotics.
    J_v(x) = (1/π) ∫_0^π cos(x sin t − vt) dt − (sin(πv)/π) ∫_0^∞ e^{−x sinh(t) − vt} dt,    (63)

and

    Y_v(x) = (1/π) ∫_0^π sin(x sin t − vt) dt − (1/π) ∫_0^∞ e^{−x sinh(t)} ( e^{vt} + cos(πv) e^{−vt} ) dt.    (64)
Proof. A representation of the Gamma function extended to the complex plane (given to us by the mathematician Hermann Hankel) is

    1/Γ(z) = (1/2πi) ∫_{γ_1} t^{−z} e^t dt,

where γ_1 is some contour in the complex plane coming from −∞, turning upwards around 0, and heading back towards −∞.

Figure 1: The contour γ_1.
Then we have

    J_v(x) = (x/2)^v / (2πi) ∫_{γ_1} Σ_{k=0}^∞ (−1)^k (x/2)^{2k} t^{−v−k−1} / k! · e^t dt,

since

    1/Γ(v+k+1) = (1/2πi) ∫_{γ_1} t^{−v−k−1} e^t dt.
Then J_v(x) becomes

    J_v(x) = (x/2)^v / (2πi) ∫_{γ_1} t^{−v−1} e^{t − x^2/(4t)} dt.
We will apply u-substitution. Let t = (x/2)u, so dt = (x/2) du. Then

    J_v(x) = (x/2)^v / (2πi) ∫_{γ_2} ( (x/2)u )^{−v−1} e^{(x/2)(u − 1/u)} (x/2) du
           = (1/2πi) (x/2)^{v+1} (x/2)^{−v−1} ∫_{γ_2} u^{−v−1} e^{(x/2)(u − 1/u)} du
           = (1/2πi) ∫_{γ_2} u^{−v−1} e^{(x/2)(u − 1/u)} du
for some complex contour γ_2 of the same type. Next we will perform another substitution. Let u = e^w. This changes the contour: in the w-plane, a suitable path originates at ∞ − iπ, passes through −iπ and iπ, and heads back out to ∞ + iπ, i.e. it traverses three sides of a rectangle with complex vertices ∞ − iπ, −iπ, iπ, and ∞ + iπ. This will be our new contour, γ.
Figure 2: The contour γ.
The integral along the rectangular contour γ can be split into three parts: the integral along the left vertical edge, the integral along the top edge, and the negative of the integral along the bottom edge. We can write this:

    J_v(x) = (1/2πi) ( P_1 + P_2 − P_3 )
with

    P_1 = ∫_{−π}^{π} e^{−ivt} e^{x sinh(it)} i dt,
    P_2 = ∫_0^∞ e^{−v(iπ+t)} e^{x sinh(iπ+t)} dt,
    P_3 = ∫_0^∞ e^{−v(−iπ+t)} e^{x sinh(−iπ+t)} dt.
Recall the formula for the hyperbolic sine which says that sinh(it) = i sin(t). We use this to get

    (1/2π) ∫_{−π}^{π} e^{x sinh(it) − ivt} dt = (1/2π) ∫_{−π}^{π} e^{i(x sin(t) − vt)} dt
        = (1/2π) ( ∫_0^π e^{i(x sin(t) − vt)} dt + ∫_{−π}^0 e^{i(x sin(t) − vt)} dt )
        = (1/2π) ∫_0^π ( e^{i(x sin(t) − vt)} + e^{−i(x sin(t) − vt)} ) dt.

Since sine is an odd function, i sin(x sin(t) − vt) = −i sin(−x sin(t) + vt), and since cosine is an even function, cos(x sin(t) − vt) = cos(−x sin(t) + vt). Therefore the exponentials combine into a cosine, and we have

    (1/2πi) P_1 = (1/π) ∫_0^π cos(x sin(t) − vt) dt.
    (1/2πi)(P_2 − P_3) = (1/2πi) ( ∫_0^∞ e^{−v(iπ+t)} e^{x sinh(iπ+t)} dt − ∫_0^∞ e^{−v(−iπ+t)} e^{x sinh(−iπ+t)} dt )
        = (1/2πi) ∫_0^∞ ( e^{−iπv} e^{−vt} e^{x sinh(iπ+t)} − e^{iπv} e^{−vt} e^{x sinh(−iπ+t)} ) dt.
We will use the property of the hyperbolic sine function which says that sinh(−iπ + t) = −sinh(t) and sinh(iπ + t) = −sinh(t). Then we have

    (1/2πi)(P_2 − P_3) = (1/2πi) ∫_0^∞ e^{−x sinh(t) − vt} ( e^{−iπv} − e^{iπv} ) dt.

Recall the identity sin(x) = (e^{ix} − e^{−ix}) / (2i), which follows as a result of Euler's formula. Utilizing this, we have

    (1/2πi)(P_2 − P_3) = −(sin(πv)/π) ∫_0^∞ e^{−x sinh(t) − vt} dt.

Adding the two pieces gives equation (63).
Next we will find the result for Y_v(x). Rearranging equation (37) and inserting the representation (63) for J_v(x) and J_{−v}(x), we have that

    sin(vπ) Y_v(x) = (cos(vπ)/π) ∫_0^π cos(x sin(t) − vt) dt − (cos(vπ) sin(vπ)/π) ∫_0^∞ e^{−x sinh(t)} e^{−vt} dt
        − (1/π) ∫_0^π cos(x sin(t) + vt) dt + (sin(−vπ)/π) ∫_0^∞ e^{−x sinh(t)} e^{vt} dt
        = (1/π) ( cos(vπ) ∫_0^π cos(x sin(t) − vt) dt − ∫_0^π cos(x sin(t) + vt) dt )
        − (sin(vπ)/π) ∫_0^∞ e^{−x sinh(t)} ( cos(vπ) e^{−vt} + e^{vt} ) dt
        = (1/π) L_1 − (sin(vπ)/π) L_2.

Then

    L_1 = cos(vπ) ∫_0^π cos(x sin(t) − vt) dt − ∫_0^π cos(x sin(t) + vt) dt

and

    L_2 = ∫_0^∞ e^{−x sinh(t)} ( cos(vπ) e^{−vt} + e^{vt} ) dt.
We will use some rules of trigonometric products to rewrite L_1. Recall that cos(a)cos(b) = ½( cos(a+b) + cos(a−b) ) and sin(a)sin(b) = ½( cos(a−b) − cos(a+b) ). We have

    cos(vπ) cos(x sin(t) − vt) = ½ ( cos(x sin(t) − vt + vπ) + cos(x sin(t) − vt − vπ) )
        = cos(x sin(t) − vt + vπ) − ½ cos(x sin(t) − vt + vπ) + ½ cos(x sin(t) − vt − vπ)
        = cos(x sin(t) − vt + vπ) + sin(vπ) sin(x sin(t) − vt)
        = cos(x sin(t) + v(π − t)) + sin(vπ) sin(x sin(t) − vt).
Special Step. Before we continue, we need to show that the following relation is true:

    ∫_0^π cos(x sin(t) + v(π − t)) dt = ∫_0^π cos(x sin(t) + vt) dt.

Substituting t → π − t on the left-hand side, and using sin(π − t) = sin(t), the integrand becomes cos(x sin(t) + vt) while the limits of integration are unchanged.
And thus, we have shown that the relation holds. We will use it to simplify L1 as follows:
    L_1 = ∫_0^π ( cos(x sin(t) + v(π − t)) + sin(vπ) sin(x sin(t) − vt) − cos(x sin(t) + vt) ) dt
        = sin(vπ) ∫_0^π sin(x sin(t) − vt) dt.

Dividing through by sin(vπ) now yields equation (64).
2.4 Using the Generating Function to Derive Jn (x)
The previous section gave the integral representation of J_v(x) for all v ∈ ℂ. Notice that for all n ∈ ℕ, sin(nπ) = 0, so the exponential integral in (63) vanishes for integer order. We see this in the following theorem, which gives the integral representation for orders in the natural numbers.
Theorem 2.4. Let the functions y_n(x) be defined by the Laurent series

    e^{(x/2)(z − z^{−1})} = Σ_{n=−∞}^∞ y_n(x) z^n.    (65)

Then

    y_n(x) = (x/2)^n Σ_{k=0}^∞ (−1)^k / ( (n+k)! k! ) (x/2)^{2k}    (66)

and

    y_n(x) = (1/π) ∫_0^π cos(x sin φ − nφ) dφ.    (67)
for l = −1. Otherwise, the integral equals 0. Thus, the only summation term remaining will correspond to m = n + k, and the equation is

    y_n(x) = (x/2)^n Σ_{k=0}^∞ (−1)^k / ( (n+k)! k! ) (x/2)^{2k}.
Step Two. We can choose the contour z = e^{iφ} and integrate from 0 to 2π. By the coefficient formula for Laurent series, we have

    y_n(x) = (1/2πi) ∮ e^{(x/2)(z − z^{−1})} / z^{n+1} dz
        = (1/2π) ∫_0^{2π} e^{(x/2)(e^{iφ} − e^{−iφ})} e^{−inφ} dφ
        = (1/2π) ∫_0^{2π} e^{(x/2)( (cos φ + i sin φ) − (cos(−φ) + i sin(−φ)) )} e^{−inφ} dφ
        = (1/2π) ∫_0^{2π} e^{(x/2)(2i sin φ)} e^{−inφ} dφ
        = (1/2π) ∫_0^{2π} e^{i(x sin φ − nφ)} dφ
        = (1/2π) ∫_0^{2π} ( cos(x sin φ − nφ) + i sin(x sin φ − nφ) ) dφ
        = (1/π) ∫_0^π cos(x sin φ − nφ) dφ,

where the last step uses the symmetry φ → 2π − φ, under which the sine term integrates to zero and the cosine term doubles over [0, π]. This proves the second result.
In this section we will use the proven integral representations to derive some asymptotic
formulae.
Theorem 2.5. For x ∈ ℝ, as x → ∞ we have

    J_v(x) ∼ √(2/(πx)) cos(x − π/4 − vπ/2)    (68)

and

    Y_v(x) ∼ √(2/(πx)) sin(x − π/4 − vπ/2).    (69)
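Before turning to the proof, a numerical illustration (SciPy assumed): at x = 200 the asymptotic formulae already agree with J_v and Y_v to roughly 10^{-4}, and the error shrinks like 1/x.

```python
# Asymptotic formulae (68)-(69) versus the exact functions at a large argument.
import math
from scipy.special import jv, yv

v, x = 1.0, 200.0
amp = math.sqrt(2 / (math.pi * x))
phase = x - math.pi / 4 - v * math.pi / 2
assert abs(jv(v, x) - amp * math.cos(phase)) < 1e-3
assert abs(yv(v, x) - amp * math.sin(phase)) < 1e-3
```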
for all A. Let u = t − π/2. Then we have

    J_v(x) + iY_v(x) = (1/π) ∫_{−π/2}^{π/2} e^{i(x sin(u + π/2) − v(u + π/2))} du + O(x^{−A})
        = (2/π) ∫_0^{π/2} e^{ix cos(u)} e^{−ivu} e^{−ivπ/2} du + O(x^{−A})
        = (2e^{−ivπ/2}/π) ∫_0^{π/2} e^{ix cos(u)} ( cos(vu) − i sin(vu) ) du + O(x^{−A})
        = (2e^{−ivπ/2}/π) ( ∫_0^{π/3} e^{ix cos(u)} ( cos(vu) − i sin(vu) ) du
            + ∫_{π/3}^{π/2} e^{ix cos(u)} ( cos(vu) − i sin(vu) ) du ) + O(x^{−A})
        = (2e^{−ivπ/2}/π) ( P_1 + P_2 ) + O(x^{−A}),

where P_1 = ∫_0^{π/3} e^{ix cos(u)} ( cos(vu) − i sin(vu) ) du and P_2 = ∫_{π/3}^{π/2} e^{ix cos(u)} ( cos(vu) − i sin(vu) ) du.
Step One. We take P_2 and make the substitution cos(u) = z. Then u = cos^{−1}(z) and du = −dz/√(1 − z^2). So we have

    P_2 = ∫_{1/2}^{0} e^{ixz} ( cos(v cos^{−1}(z)) − i sin(v cos^{−1}(z)) ) / ( −√(1 − z^2) ) dz
        = ∫_0^{1/2} e^{ixz} ( cos(v cos^{−1}(z)) − i sin(v cos^{−1}(z)) ) / √(1 − z^2) dz
        = ∫_0^{1/2} e^{ixz} φ(z) dz,

where

    φ(z) = ( cos(v cos^{−1}(z)) − i sin(v cos^{−1}(z)) ) / √(1 − z^2).
Integrating by parts, we have

    ∫_0^{1/2} e^{ixz} φ(z) dz = [ e^{ixz} φ(z) / (ix) ]_0^{1/2} − (1/ix) ∫_0^{1/2} e^{ixz} φ′(z) dz
        = [ ( cos(xz) + i sin(xz) ) φ(z) / (ix) ]_0^{1/2} − (1/ix) ∫_0^{1/2} e^{ixz} φ′(z) dz
        = [ sin(xz) φ(z) / x ]_0^{1/2} + [ cos(xz) φ(z) / (ix) ]_0^{1/2} − (1/ix) ∫_0^{1/2} e^{ixz} φ′(z) dz
        = O(x^{−1}),

since z ∈ [0, 1/2] avoids any singularities of φ and φ′. (Notice that φ is not a function of x and is composed of bounded trigonometric functions. We are only interested in the behavior as x grows large, where the 1/x factors make this whole term negligible.)
Step Two. We take P_1 and substitute t = √(2x) sin(u/2). Then du = (√2/√x) dt / √(1 − t^2/(2x)) and cos(u) = 1 − t^2/x. The equation becomes:

    P_1(x) = (√2/√x) ∫_0^{√(x/2)} e^{ix(1 − t^2/x)} ( cos(2v sin^{−1}(t/√(2x))) − i sin(2v sin^{−1}(t/√(2x))) ) dt / √(1 − t^2/(2x))
           = (√2/√x) e^{ix} ∫_0^{√(x/2)} e^{−it^2} ( cos(2v sin^{−1}(t/√(2x))) − i sin(2v sin^{−1}(t/√(2x))) ) dt / √(1 − t^2/(2x)).

As x → ∞, we get

    P_1(x) ∼ (√2/√x) e^{ix} ∫_0^∞ e^{−it^2} dt.

Since ∫_0^∞ e^{−it^2} dt = (√π/2) e^{−iπ/4}, we finally have

    J_v(x) + iY_v(x) ∼ (2e^{−ivπ/2}/π) (√2/√x) e^{ix} (√π/2) e^{−iπ/4}
                     ∼ √(2/(πx)) e^{i(x − π/4 − vπ/2)}.

Applying Euler's formula and taking real and imaginary parts leads to the result.
3 Graphs
The following code was implemented in Python to create graphs of the Bessel functions of orders n = 0, 1, 2, 3, 4, using the integral representation (67). (The imports and the plotting grid X below were not part of the original listing and have been filled in to make it runnable.)

    import numpy as np
    from scipy import integrate
    import matplotlib.pyplot as plt

    X = np.linspace(0, 30, 300)  # plotting grid, matching the 0..30 axis of the figure

    def f(x, n):
        # Equation (67): J_n(x) = (1/pi) * integral_0^pi cos(x sin t - n t) dt
        return integrate.quad(lambda t: 1/np.pi * np.cos(x*np.sin(t) - n*t), 0, np.pi)

    plt.figure(1, figsize=(10, 8))
    plt.plot(X, [f(x, 0)[0] for x in X], '--', linewidth=1.7, label='n=0')
    plt.plot(X, [f(x, 1)[0] for x in X], linewidth=1.5, label='n=1')
    plt.plot(X, [f(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
    plt.plot(X, [f(x, 3)[0] for x in X], linewidth=1, label='n=3')
    plt.plot(X, [f(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')
    plt.title('Bessel Function of the First Kind')
    plt.legend()
    plt.show()
[Figure: Bessel Function of the First Kind, J_n(x) for n = 0, 1, 2, 3, 4, plotted for 0 ≤ x ≤ 30.]
The second listing plots the Bessel functions of the second kind using the two integrals of equation (64); the finite upper limit 150 stands in for ∞. (The imports and grid X are assumed from the previous listing.)

    def f(x, n):
        # first integral of (64)
        return integrate.quad(lambda t: 1/np.pi * np.sin(x*np.sin(t) - n*t), 0, np.pi)

    def g(x, n):
        # second integral of (64); the upper limit 150 approximates infinity
        return integrate.quad(lambda t: -1/np.pi * np.exp(-x*np.sinh(t))
                              * (np.exp(n*t) + np.cos(n*np.pi)*np.exp(-n*t)), 0, 150)

    plt.figure(2, figsize=(10, 8))
    plt.plot(X, [f(x, 0)[0] + g(x, 0)[0] for x in X], '--', linewidth=1.75, label='n=0')
    plt.plot(X, [f(x, 1)[0] + g(x, 1)[0] for x in X], linewidth=1.50, label='n=1')
    plt.plot(X, [f(x, 2)[0] + g(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
    plt.plot(X, [f(x, 3)[0] + g(x, 3)[0] for x in X], linewidth=1.00, label='n=3')
    plt.plot(X, [f(x, 4)[0] + g(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')
    plt.legend()
    plt.show()
[Figure: Bessel functions of the second kind, Y_n(x) for n = 0, 1, 2, 3, 4, plotted for 0 ≤ x ≤ 30.]