
Portland State University

PDXScholar
University Honors Theses University Honors College

5-26-2018

Basics of Bessel Functions


Joella Rae Deal
Portland State University


Recommended Citation
Deal, Joella Rae, "Basics of Bessel Functions" (2018). University Honors Theses. Paper 546.

DOI: 10.15760/honors.552

Basics of Bessel Functions

by

Joella Deal

An undergraduate honors thesis submitted in partial fulfillment of the

requirements for the degree of

Bachelor of Science

in

University Honors

and

Mathematics

Thesis Adviser

Dr. Bin Jiang

Portland State University

2018
Abstract

This paper is a deep exploration of the project Bessel Functions by Martin Kreh
of Pennsylvania State University. We begin with a derivation of the Bessel functions
$J_v(x)$ and $Y_v(x)$, which are two solutions to Bessel's differential equation. Next we
find the generating function and use it to prove some useful standard results and
recurrence relations. We use these recurrence relations to examine the behavior of the
Bessel functions at some special values. Then we use contour integration to derive their
integral representations, from which we can produce their asymptotic formulae. We
also show an alternate method for deriving the first Bessel function using the generating
function. Finally, a graph created using Python illustrates the Bessel functions of order
0, 1, 2, 3, and 4.

1 Introduction to Bessel Functions

Bessel functions are the standard form of the solutions to Bessel's differential equation,

$$x^2 \frac{\partial^2 y}{\partial x^2} + x \frac{\partial y}{\partial x} + (x^2 - n^2)\,y = 0, \qquad (1)$$

where $n$ is the order of the Bessel equation. The equation often arises from separation of variables applied to the wave equation

$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u \qquad (2)$$

in cylindrical or spherical coordinates. For this reason, the Bessel functions fall under the umbrella of cylindrical (or spherical) harmonics when $n$ is an integer or half-integer, and we see them appear in the separable solutions to both the Helmholtz equation and Laplace's equation in cylindrical or spherical coordinates. Since the Bessel equation is a second-order differential equation, it has two linearly independent solutions, $J_n(x)$ and $Y_n(x)$.

1.1 Bessel Functions of the First Kind

To find the first solution we begin by taking a power series,

$$y(x) = x^n \sum_{k=0}^{\infty} b_k x^k, \qquad (3)$$

which we will plug into the Bessel equation (1) and solve for its necessary components. By the product rule, the first and second derivatives of this power series are

$$y'(x) = n x^{n-1} \sum_{k=0}^{\infty} b_k x^k + x^n \sum_{k=0}^{\infty} k b_k x^{k-1} \qquad (4)$$

and

$$y''(x) = n(n-1) x^{n-2} \sum_{k=0}^{\infty} b_k x^k + 2n x^{n-1} \sum_{k=0}^{\infty} k b_k x^{k-1} + x^n \sum_{k=0}^{\infty} k(k-1) b_k x^{k-2}. \qquad (5)$$

We next multiply equation (4) by $x$ and equation (5) by $x^2$ so that we can easily plug them into (1). (Note that the $x$ or $x^2$ can be inserted into the front of each term or be distributed into the summation.) We then have

$$x y'(x) = n x^n \sum_{k=0}^{\infty} b_k x^k + x^n \sum_{k=0}^{\infty} k b_k x^k \qquad (6)$$

and

$$x^2 y''(x) = n(n-1) x^n \sum_{k=0}^{\infty} b_k x^k + 2n x^n \sum_{k=0}^{\infty} k b_k x^k + x^n \sum_{k=0}^{\infty} k(k-1) b_k x^k. \qquad (7)$$

Finally, we will need the term

$$x^2 y(x) = x^n \sum_{k=0}^{\infty} b_k x^{k+2} = x^n \sum_{k=2}^{\infty} b_{k-2} x^k. \qquad (8)$$

Note that because we are solving for the appropriate $b_k$, we can artificially set $b_{-2} := b_{-1} := 0$. This will become useful when simplifying the full Bessel differential equation below. Plugging equations (6), (7), and (8) into (1), we get

$$n(n-1) x^n \sum_{k=0}^{\infty} b_k x^k + 2n x^n \sum_{k=0}^{\infty} k b_k x^k + x^n \sum_{k=0}^{\infty} k(k-1) b_k x^k + n x^n \sum_{k=0}^{\infty} b_k x^k + x^n \sum_{k=0}^{\infty} k b_k x^k + x^n \sum_{k=2}^{\infty} b_{k-2} x^k - n^2 x^n \sum_{k=0}^{\infty} b_k x^k = 0, \qquad (9)$$

which can be simplified in the following steps.

Step One. Combine the summation terms (we can do this because we defined $b_{-2}$ and $b_{-1}$ to be equal to zero, so $\sum_{k=2}^{\infty} b_{k-2} x^k = \sum_{k=0}^{\infty} b_{k-2} x^k$):

$$x^n \sum_{k=0}^{\infty} \left[ n(n-1) b_k + 2nk b_k + k(k-1) b_k + n b_k + k b_k + b_{k-2} - n^2 b_k \right] x^k = 0. \qquad (10)$$

Step Two. Cancel like terms:

$$x^n \sum_{k=0}^{\infty} \left[ 2nk b_k + k^2 b_k + b_{k-2} \right] x^k = 0. \qquad (11)$$

Step Three. Compare the coefficients to yield

$$2nk b_k + k^2 b_k + b_{k-2} = 0. \qquad (12)$$

We use equation (12) to create the general recurrence

$$b_k = \frac{-b_{k-2}}{k(k+2n)}. \qquad (13)$$

Since $b_{-1} = 0$ we can infer that $b_1 = \frac{-b_{-1}}{1+2n} = 0$, and continuing in this manner, $b_{2k-1} = 0$ for all $k \in \mathbb{N}$. What about the even values of $k$? We have no condition on $b_0$ (since $k = 0$ puts equation (13) in indeterminate form). We can choose a convenient value for $b_0$ as needed. First, let us examine the case where $-n \notin \mathbb{N}$ (we will take care of the $-n \in \mathbb{N}$ case later). From equation (13) we have

$$b_{2k} = -\frac{b_{2k-2}}{2k(2k+2n)} = -\frac{b_{2k-2}}{4k(n+k)}. \qquad (14)$$

We will perform induction on this equation to find a general formula for $b_{2k}$.

Step One. Examine the base cases $k = 1$ and $k = 2$:

$$b_{2(1)} = b_2 = (-1)\frac{b_0}{4(1)(n+1)}, \qquad (15)$$

$$b_{2(2)} = b_4 = (-1)\frac{(-1)\frac{b_0}{4(1)(n+1)}}{4(2)(n+2)} = \frac{(-1)^2 b_0}{4^2 (2 \cdot 1)(n+2)(n+1)}. \qquad (16)$$

Step Two. Perform the inductive step. Based on the above cases, suppose that the following formula holds for some positive integer $l$:

$$b_{2l} = (-1)^l \frac{b_0}{4^l \, l! \prod_{m=n+1}^{n+l} m}. \qquad (17)$$

We will show that this holds for the $l+1$ case as well:

$$b_{2(l+1)} = (-1)\frac{(-1)^l \frac{b_0}{4^l \, l! \prod_{m=n+1}^{n+l} m}}{4(l+1)(n+l+1)} = (-1)^{l+1} \frac{b_0}{4^{l+1} (l+1)! \prod_{m=n+1}^{n+l+1} m}. \qquad (18)$$

Therefore we know that equation (17) holds in general for all positive integers $l$.

Before we can plug equation (17) back into the original power series (3), we need to choose a convenient value for $b_0$. We know that the summation in the final power series must be convergent in order for it to be a solution to the Bessel equation (1). In addition, it would be advantageous to have the factorial $(n+k)!$ in the denominator of $b_{2k}$ (instead of having to terminate the product at $(n+1)$). For these reasons, we choose

$$b_0 = \frac{1}{2^n n!}. \qquad (19)$$

Plugging this into equation (17), we have

$$b_{2k} = \frac{(-1)^k}{2^n 4^k \, k! \, (n+k)(n+k-1)\cdots(n+1)\, n!}, \qquad (20)$$

which can clearly be simplified to

$$b_{2k} = \frac{(-1)^k}{2^n 4^k \, k! \, (n+k)!}. \qquad (21)$$

Now we are finally ready to plug our formula for $b_{2k}$ into the full power series (3). We have

$$y(x) = x^n \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n 4^k \, k! \, (n+k)!} x^{2k}. \qquad (22)$$

Observe that the $x^{2k}$ factor at the end comes from pairing with the $b_{2k}$ of equation (17). I.e., we want the summation

$$b_0 x^0 + b_2 x^2 + b_4 x^4 + \cdots = \sum_{k=0}^{\infty} b_{2k} x^{2k}, \qquad (23)$$

so we must ensure that the power of $x$ is the same as the subscript of $b$. Rearranging, we have

$$y(x) = \left(\frac{x}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!} \left(\frac{x}{2}\right)^{2k}. \qquad (24)$$
Clearly, this power series is only meaningful if it is convergent, which we check by the well-known Ratio Test. Writing $a_k$ for the $k$th term of the series, we take

$$\rho = \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| = \lim_{k\to\infty} \left| \frac{\frac{(-1)^{k+1}}{(k+1)!(n+k+1)!}\left(\frac{x}{2}\right)^{2(k+1)}}{\frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{2k}} \right| = \lim_{k\to\infty} \frac{1}{(k+1)(n+k+1)}\left(\frac{x}{2}\right)^2 = 0. \qquad (25)$$

Since $\rho < 1$ for every $x$, the series converges everywhere. We have indeed found a solution to equation (1).
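The convergent series (24) is also easy to evaluate numerically. The following is a minimal Python sketch (the function name `bessel_j` and the truncation depth `terms` are this sketch's choices, not part of the thesis); the printed reference values are the well-known decimal expansions of $J_0(1)$ and $J_1(1)$.

```python
from math import factorial

def bessel_j(n, x, terms=40):
    """Partial sum of the power series (24) for integer order n >= 0."""
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

# The series converges rapidly: ten correct digits with a handful of terms.
print(bessel_j(0, 1.0))  # ~0.7651976866 (J_0(1))
print(bessel_j(1, 1.0))  # ~0.4400505857 (J_1(1))
```

Because the ratio test gave $\rho = 0$, the truncation depth barely matters for moderate $x$; 40 terms is already far past machine precision.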

However, we also want to construct a solution for the complex order v, not just for the
order n ∈ N. We must find a continuous function with which we can replace the factorial
in equation (24).

The most versatile way to extend the factorial function to non-integer and complex numbers is the Gamma function. The reciprocal of the Gamma function also happens to be entire (holomorphic on the whole complex plane), meaning that it is infinitely differentiable and equal to its own Taylor series. Since we are applying the Gamma function to the denominator of equation (24), this property of its reciprocal will be especially convenient.

The Gamma function satisfies

$$\Gamma(n) = (n-1)! \qquad (26)$$

(the factorial function with its argument shifted down by 1) if $n$ is a positive integer. For complex numbers and non-integers, the Gamma function corresponds to the Mellin transform of the negative exponential function,

$$\Gamma(z) = \{\mathcal{M} e^{-x}\}(z), \qquad (27)$$

where the Mellin transform is

$$\{\mathcal{M} f\}(s) = \varphi(s) = \int_0^{\infty} x^{s-1} f(x)\,dx. \qquad (28)$$

Making this modification to equation (24), we now have

$$J_v(x) = \left(\frac{x}{2}\right)^v \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(v+k+1)} \left(\frac{x}{2}\right)^{2k}. \qquad (29)$$

This is not valid as written for $v = -n$ with $n \in \mathbb{N}$, since the Gamma function has poles at the nonpositive integers. In this case, we begin the summation at $k = n$ to bypass the undefined Gamma terms (note that $k = n$ corresponds to $\Gamma(-n+n+1) = \Gamma(1)$):

$$J_{-n}(x) = \left(\frac{x}{2}\right)^{-n} \sum_{k=n}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(-n+k+1)} \left(\frac{x}{2}\right)^{2k}$$
$$= \left(\frac{x}{2}\right)^{-n} \sum_{k=0}^{\infty} \frac{(-1)^{k+n}}{\Gamma(n+k+1)\,\Gamma(-n+n+k+1)} \left(\frac{x}{2}\right)^{2(k+n)} \qquad (30)$$
$$= (-1)^n \left(\frac{x}{2}\right)^{-n} \left(\frac{x}{2}\right)^{2n} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(n+k+1)\,\Gamma(k+1)} \left(\frac{x}{2}\right)^{2k} = (-1)^n J_n(x).$$

This will also solve the Bessel differential equation (1). So we have found our first Bessel
function, Jn (x) or Jv (x).
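The index shift in (30) can be checked numerically by summing the series for $J_{-n}$ exactly as written, starting at $k = n$. A small Python sketch (the helper names `j_pos` and `j_neg` are hypothetical):

```python
from math import factorial

def j_pos(n, x, terms=40):
    """Series (24) for nonnegative integer order n."""
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

def j_neg(n, x, terms=40):
    """Series (30) for order -n: start at k = n, the first nonzero term."""
    return sum((-1) ** k / (factorial(k) * factorial(k - n)) * (x / 2) ** (2 * k - n)
               for k in range(n, terms))

for n in range(5):
    assert abs(j_neg(n, 2.3) - (-1) ** n * j_pos(n, 2.3)) < 1e-12
print("J_{-n}(x) = (-1)^n J_n(x) verified for n = 0..4")
```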

1.2 Bessel Functions of the Second Kind

We will now determine a second, linearly independent solution. Let us begin by examining
the behavior of Jv (x) as x → 0.

Step One. Let $\Re(v) > 0$. We have

$$\lim_{x\to 0} \left(\frac{x}{2}\right)^v \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1+v)} \left(\frac{x}{2}\right)^{2k} = 0, \qquad (31)$$

since

$$\lim_{x\to 0} \left(\frac{x}{2}\right)^v = 0. \qquad (32)$$

Step Two. Let $v = 0$. We examine

$$\lim_{x\to 0} \left(\frac{x}{2}\right)^0 \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1)} \left(\frac{x}{2}\right)^{2k} = 1. \qquad (33)$$

Though it may be difficult to see at first, the limit is 1 because of the convention $0^0 = 1$: our summation becomes

$$\lim_{x\to 0} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1)} \left(\frac{x}{2}\right)^{2k} = 1 + 0 + 0 + 0 + \cdots \qquad (34)$$

Step Three. Let $\Re(v) < 0$, $v \notin \mathbb{Z}$ (the case of negative integer order was already settled by equation (30)). Then we have

$$\lim_{x\to 0} \left(\frac{2}{x}\right)^{-v} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1+v)} \left(\frac{x}{2}\right)^{2k} = \pm\infty. \qquad (35)$$

Summarized, the results from steps one, two, and three are:

$$\lim_{x\to 0} J_v(x) = \begin{cases} 0, & \Re(v) > 0 \\ 1, & v = 0 \\ \pm\infty, & \Re(v) < 0,\ v \notin \mathbb{Z}. \end{cases} \qquad (36)$$

We can see here that $J_v(x)$ and $J_{-v}(x)$ are two linearly independent solutions (i.e. neither can be expressed as a scalar multiple of the other) if $v \notin \mathbb{Z}$. If $v \in \mathbb{Z}$, they are linearly dependent (recall equation (30)). Because of this property (and the homogeneity of Bessel's differential equation), any linear combination of $J_v$ and $J_{-v}$, where $v \notin \mathbb{Z}$, is also a solution. We build the function

$$Y_v(x) = \frac{\cos(v\pi) J_v(x) - J_{-v}(x)}{\sin(v\pi)}, \qquad (37)$$

for $v \notin \mathbb{Z}$. Notice that this expression becomes the indeterminate form $0/0$ when the order is $n \in \mathbb{N}_0$, since $\cos(n\pi) = (-1)^n$ and $\sin(n\pi) = 0$. For $n \in \mathbb{Z}$, we therefore define

$$Y_n(x) := \lim_{v\to n} Y_v(x). \qquad (38)$$

It can be shown by L'Hôpital's rule that this limit exists (the calculation is too lengthy to be included here, but it follows from the properties of the digamma function, which gives the relationship between the Gamma function and its derivative).

We need to check that the Wronskian determinant of $J_v(z)$ and $Y_v(z)$ does not vanish. Abel's theorem says that if $y_1(z)$ and $y_2(z)$ are two solutions to a differential equation $p(z) y'' + q(z) y' + r(z) y = 0$ in self-adjoint form (i.e. with $q = p'$), then the Wronskian of the two solutions is of the form $\frac{C}{p(z)}$, where $C$ is a constant which does not depend on $z$. Notice that we can write Bessel's equation in the form $z y'' + y' + \left(z - \frac{v^2}{z}\right) y = 0$, so that it is self-adjoint with $p(z) = z$. Then the Wronskian determinant of $J_v(z)$ and $Y_v(z)$ is of the form $\frac{A_v}{z}$, where $A_v$ does not depend on $z$. We omit the detailed calculation here, but the Wronskian turns out to be

$$W(J_v(z), Y_v(z)) = J_{v+1}(z) Y_v(z) - J_v(z) Y_{v+1}(z) = \frac{2}{\pi z}, \qquad (39)$$

which does not vanish for any $z \neq 0$. Therefore, $J_v$ and $Y_v$ are linearly independent for all $v \in \mathbb{C}$. We have successfully found two linearly independent solutions to the Bessel differential equation.
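The Wronskian identity (39) can be spot-checked numerically at a non-integer order, where $Y_v$ is given directly by the definition (37). A Python sketch, assuming the standard-library `math.gamma` for the Gamma function (the helper names `jv` and `yv` are this sketch's own):

```python
from math import gamma, sin, cos, pi

def jv(v, x, terms=40):
    """Series (29) via the Gamma function; works for non-integer v."""
    return sum((-1) ** k / (gamma(k + 1) * gamma(v + k + 1)) * (x / 2) ** (2 * k + v)
               for k in range(terms))

def yv(v, x):
    """Definition (37); valid only for non-integer v."""
    return (cos(v * pi) * jv(v, x) - jv(-v, x)) / sin(v * pi)

v, x = 0.5, 1.7
w = jv(v + 1, x) * yv(v, x) - jv(v, x) * yv(v + 1, x)
assert abs(w - 2 / (pi * x)) < 1e-12
print("Wronskian identity (39) holds at v = 1/2, x = 1.7")
```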

2 Properties of Bessel Functions

Now that we have derived the two Bessel functions, we will prove some of their fundamental
properties.

2.1 The Generating Function

Several properties of the Bessel functions can be proven using their generating function. We
will begin this section by introducing the concept of generating functions and showing that
one exists for the Bessel functions.

Definition. A power series is an infinite sum of the form

$$\sum_{i=0}^{\infty} a_i z^i, \qquad (40)$$

where the coefficients $a_i$ are quantities given by a particular function or rule.

Definition. A generating function of a sequence $a_n$ is the function whose power series has $a_n$ as the coefficient of $x^n$. I.e., the generating function of $a_n$ is the function $G(a_n; x)$ where

$$G(a_n; x) = \sum_{n=0}^{\infty} a_n x^n. \qquad (41)$$

Proposition 2.1. We have

$$e^{\frac{x}{2}(z - z^{-1})} = \sum_{n=-\infty}^{\infty} J_n(x)\, z^n. \qquad (42)$$

I.e., the function $e^{\frac{x}{2}(z - z^{-1})}$ is the generating function of the Bessel functions of the first kind.

(Note: This power series is actually a Laurent series, since it includes terms of negative degree. This will be important later.)

Proof. Recall the power series representation

$$e^x = \sum_{l=0}^{\infty} \frac{x^l}{l!}.$$

We have

$$e^{\frac{x}{2}z - \frac{x}{2}z^{-1}} = e^{\frac{x}{2}z}\, e^{-\frac{x}{2}z^{-1}} = \sum_{m=0}^{\infty} \frac{(\frac{x}{2}z)^m}{m!} \sum_{k=0}^{\infty} \frac{(-\frac{x}{2z})^k}{k!} = \sum_{m=0}^{\infty} \frac{(x/2)^m}{m!} z^m \sum_{k=0}^{\infty} \frac{(-1)^k (\frac{x}{2})^k}{k!} z^{-k}.$$

Recall that the multiplication of two infinite power series can be written as a Cauchy product; i.e., if two power series $f(z) = \sum_{n=0}^{\infty} a_n z^n$ and $g(z) = \sum_{n=0}^{\infty} b_n z^n$ each have a radius of convergence $R > 0$, then their product can also be expressed as a power series in the disc $|z| < R$:

$$(fg)(z) = \sum_{n=0}^{\infty} c_n z^n, \quad \text{where} \quad c_n = \sum_{k=0}^{n} a_k b_{n-k}.$$

Applying this property (collecting all terms with $m - k = n$), we have

$$\sum_{m=0}^{\infty} \frac{(x/2)^m}{m!} z^m \sum_{k=0}^{\infty} \frac{(-1)^k (\frac{x}{2})^k}{k!} z^{-k} = \sum_{n=-\infty}^{\infty} \left( \sum_{\substack{m-k=n \\ m,k\geq 0}} \frac{(-1)^k (\frac{x}{2})^{m+k}}{m!\,k!} \right) z^n$$
$$= \sum_{n=-\infty}^{\infty} \left( \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!} \left(\frac{x}{2}\right)^{n+2k} \right) z^n = \sum_{n=-\infty}^{\infty} \left( \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!} \left(\frac{x}{2}\right)^{2k} \left(\frac{x}{2}\right)^n \right) z^n = \sum_{n=-\infty}^{\infty} J_n(x)\, z^n. \qquad \Box$$
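Proposition 2.1 is easy to sanity-check numerically by comparing a truncated Laurent sum against the exponential directly. A Python sketch (the helper name `jn` and the truncation window $|n| \le 25$ are this sketch's choices):

```python
from math import factorial, exp

def jn(n, x, terms=40):
    """Integer-order J_n via the series; negative orders via (30)."""
    if n < 0:
        return (-1) ** n * jn(-n, x, terms)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x, z = 1.3, 0.7
lhs = exp(x / 2 * (z - 1 / z))
rhs = sum(jn(n, x) * z ** n for n in range(-25, 26))
assert abs(lhs - rhs) < 1e-10
print("generating function identity (42) verified at x = 1.3, z = 0.7")
```

Since $J_n(x)$ decays super-exponentially in $|n|$, the truncated Laurent sum converges very quickly even for $z$ away from the unit circle.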

We will now use the generating function to prove some standard results.

Lemma 2.1.1. We have

$$\cos(x) = J_0(x) + 2\sum_{n=1}^{\infty} (-1)^n J_{2n}(x), \qquad (43)$$

$$\sin(x) = 2\sum_{n=0}^{\infty} (-1)^n J_{2n+1}(x), \qquad (44)$$

$$1 = J_0(x) + 2\sum_{n=1}^{\infty} J_{2n}(x). \qquad (45)$$

Proof. Take $z = e^{i\varphi}$, so that $\frac{1}{2}(z - z^{-1}) = i\sin(\varphi)$. From Euler's formula,

$$\cos(x\sin\varphi) + i\sin(x\sin\varphi) = e^{ix\sin\varphi} = \sum_{n=-\infty}^{\infty} J_n(x)\, e^{in\varphi} = \sum_{n=-\infty}^{\infty} J_n(x)\left(\cos(n\varphi) + i\sin(n\varphi)\right).$$

Separating real and imaginary parts, we get

$$\cos(x\sin\varphi) = \sum_{n=-\infty}^{\infty} J_n(x)\cos(n\varphi)$$

and

$$\sin(x\sin\varphi) = \sum_{n=-\infty}^{\infty} J_n(x)\sin(n\varphi).$$

Setting $\varphi = \frac{\pi}{2}$, we have

$$\cos(x) = \sum_{n=-\infty}^{\infty} J_n(x)\cos\left(\frac{n\pi}{2}\right).$$

Consider the cases.

Case One. Suppose $n \equiv 1 \pmod 4$. Then $\cos(\frac{n\pi}{2}) = 0$ and $\cos(\frac{-n\pi}{2}) = 0$, so the terms for $n$ and $-n$ vanish. The same occurs for $n \equiv 3 \pmod 4$ (or any odd $n$).

Case Two. Suppose $n \equiv 2 \pmod 4$. Then $\cos(\frac{n\pi}{2}) = -1$ and $\cos(\frac{-n\pi}{2}) = -1$. Recalling equation (30), the sum of the terms for $n$ and $-n$ becomes

$$J_n(x)\cos\left(\frac{n\pi}{2}\right) + J_{-n}(x)\cos\left(\frac{-n\pi}{2}\right) = J_n(x)(-1) + J_n(x)(-1) = -2J_n(x).$$

Case Three. Suppose $n \equiv 0 \pmod 4$ with $n \neq 0$. Then $\cos(\frac{n\pi}{2}) = 1$ and $\cos(\frac{-n\pi}{2}) = 1$, so the sum becomes

$$J_n(x)\cos\left(\frac{n\pi}{2}\right) + J_{-n}(x)\cos\left(\frac{-n\pi}{2}\right) = J_n(x)(1) + J_n(x)(1) = 2J_n(x).$$

Based on these cases (only even indices survive, with alternating signs), we can rewrite the equation as

$$\cos(x) = J_0(x) + 2\sum_{n=1}^{\infty} (-1)^n J_{2n}(x).$$

Plugging $\varphi = \frac{\pi}{2}$ into $\sin(x\sin\varphi)$, we get

$$\sin(x) = \sum_{n=-\infty}^{\infty} J_n(x)\sin\left(\frac{n\pi}{2}\right).$$

We will consider cases once again.

Case One. Suppose $n \equiv 1 \pmod 4$. Then $-n \equiv 3 \pmod 4$, so $\sin(\frac{n\pi}{2}) = 1$ and $\sin(\frac{-n\pi}{2}) = -1$. So the sum of the terms for $n$ and $-n$ becomes

$$J_n(x)\sin\left(\frac{n\pi}{2}\right) + J_{-n}(x)\sin\left(\frac{-n\pi}{2}\right) = J_n(x)(1) + J_{-n}(x)(-1) = J_n(x) + (-1)^2 J_n(x) = 2J_n(x),$$

since $J_{-n}(x) = (-1)^n J_n(x) = -J_n(x)$ for odd $n$.

Case Two. Suppose $n \equiv 2 \pmod 4$. Then $\sin(\frac{n\pi}{2}) = 0$, so the term drops out of the summation. The same applies for the case $n \equiv 0 \pmod 4$.

Case Three. Suppose $n \equiv 3 \pmod 4$. Then $\sin(\frac{n\pi}{2}) = -1$ and $\sin(\frac{-n\pi}{2}) = 1$. So the sum becomes

$$J_n(x)\sin\left(\frac{n\pi}{2}\right) + J_{-n}(x)\sin\left(\frac{-n\pi}{2}\right) = J_n(x)(-1) + (-1)J_n(x) = -2J_n(x).$$

Based on these cases, we can clearly rewrite this summation as

$$\sin(x) = 2\sum_{n=0}^{\infty} (-1)^n J_{2n+1}(x).$$

Finally, setting $\varphi = 2\pi$, we have

$$\cos(x\sin(2\pi)) = \cos(0) = 1 = \sum_{n=-\infty}^{\infty} J_n(x)\cos(2\pi n) = \sum_{n=-\infty}^{\infty} J_n(x) = J_0(x) + \sum_{n=-\infty}^{-1} J_n(x) + \sum_{n=1}^{\infty} J_n(x) = J_0(x) + 2\sum_{n=1}^{\infty} J_{2n}(x),$$

since the terms for odd $n$ cancel in pairs ($J_{-n} = -J_n$) and the terms for even $n$ pair up to $2J_n(x)$. This is the third result. $\Box$
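All three identities of Lemma 2.1.1 can be confirmed with truncated sums. A Python sketch (the helper name `jn` and the cutoffs are this sketch's choices):

```python
from math import factorial, cos, sin

def jn(n, x, terms=40):
    """Integer-order J_n via the power series (24)."""
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x = 2.0
cos_sum = jn(0, x) + 2 * sum((-1) ** n * jn(2 * n, x) for n in range(1, 20))
sin_sum = 2 * sum((-1) ** n * jn(2 * n + 1, x) for n in range(20))
one_sum = jn(0, x) + 2 * sum(jn(2 * n, x) for n in range(1, 20))
assert abs(cos_sum - cos(x)) < 1e-12
assert abs(sin_sum - sin(x)) < 1e-12
assert abs(one_sum - 1.0) < 1e-12
print("identities (43), (44), (45) verified at x = 2")
```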

Lemma 2.1.2. We have

$$J_n(-x) = J_{-n}(x) = (-1)^n J_n(x) \qquad (46)$$

for all $n \in \mathbb{Z}$.

Proof. We make the change of variables $x \to -x$ and $z \to z^{-1}$ in the generating function:

$$\sum_{n=-\infty}^{\infty} J_n(-x)\, z^{-n} = e^{\frac{-x}{2}(z^{-1} - z)} = e^{\frac{-x}{2z}}\, e^{\frac{x}{2}z} = \sum_{m=0}^{\infty} \frac{(\frac{-x}{2z})^m}{m!} \sum_{k=0}^{\infty} \frac{(\frac{x}{2}z)^k}{k!}$$
$$= \sum_{m=0}^{\infty} \frac{(\frac{x}{2})^m (-1)^m}{m!} z^{-m} \sum_{k=0}^{\infty} \frac{(\frac{x}{2})^k}{k!} z^k = \sum_{n=-\infty}^{\infty} \left( \sum_{\substack{m-k=n \\ m,k\geq 0}} \frac{(-1)^m (\frac{x}{2})^{m+k}}{m!\,k!} \right) z^{-n}$$
$$= \sum_{n=-\infty}^{\infty} \left( \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,(m-n)!} \left(\frac{x}{2}\right)^{2m} \left(\frac{x}{2}\right)^{-n} \right) z^{-n} = \sum_{n=-\infty}^{\infty} J_{-n}(x)\, z^{-n}.$$

So we have

$$\sum_{n=-\infty}^{\infty} J_n(-x)\, z^{-n} = \sum_{n=-\infty}^{\infty} J_{-n}(x)\, z^{-n}.$$

Comparing the coefficients, we get $J_n(-x) = J_{-n}(x)$, and from equation (30), $J_n(-x) = J_{-n}(x) = (-1)^n J_n(x)$. $\Box$

Proposition 2.2. For any $n \in \mathbb{Z}$,

$$\frac{d}{dx}\left[ x^{-n} J_n(x) \right] = -x^{-n} J_{n+1}(x) \qquad (47)$$

and

$$\frac{d}{dx}\left[ x^n J_n(x) \right] = x^n J_{n-1}(x). \qquad (48)$$

Proof. Multiply the series representation of $J_n(x)$ by $x^{-n}$ and differentiate:

$$\frac{d}{dx}\left[ x^{-n} J_n(x) \right] = \frac{d}{dx}\left( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \left(\frac{x}{2}\right)^{2k+n} x^{-n} \right) = \frac{d}{dx}\left( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \frac{x^{2k}}{2^{2k+n}} \right)$$
$$= \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \frac{2k\, x^{2k-1}}{2^{2k+n}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \frac{k\, x^{2k-1}}{2^{2k+n-1}}.$$

Observe that the $k = 0$ summation term is

$$\frac{(-1)^0}{0!\,n!} \frac{(0)\,x^{-1}}{2^{n-1}} = 0,$$

so we can rewrite the sum starting with the index $k = 1$:

$$\frac{d}{dx}\left[ x^{-n} J_n(x) \right] = \sum_{k=1}^{\infty} \frac{(-1)^k}{k!(n+k)!} \frac{k\, x^{2k-1}}{2^{2k+n-1}} = \sum_{k=1}^{\infty} \frac{(-1)^k}{(k-1)!(n+k)!} \frac{x^{2k-1}}{2^{2k-1+n}} = x^{-n} \sum_{k=1}^{\infty} \frac{(-1)^k}{(k-1)!(n+k)!} \left(\frac{x}{2}\right)^{2k-1+n}.$$

Let $j = k - 1$. Then $k = j + 1$, and we have

$$\frac{d}{dx}\left[ x^{-n} J_n(x) \right] = x^{-n} \sum_{j=0}^{\infty} \frac{(-1)^{j+1}}{j!(n+j+1)!} \left(\frac{x}{2}\right)^{2j+n+1} = -\, x^{-n} \sum_{j=0}^{\infty} \frac{(-1)^j}{j!(n+j+1)!} \left(\frac{x}{2}\right)^{2j+(n+1)} = -x^{-n} J_{n+1}(x).$$

Similarly,

$$\frac{d}{dx}\left( x^n J_n(x) \right) = \frac{d}{dx}\left( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \left(\frac{x}{2}\right)^{2k+n} x^n \right) = \frac{d}{dx}\left( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!} \frac{x^{2k+2n}}{2^{2k+n}} \right)$$
$$= \sum_{k=0}^{\infty} \frac{(-1)^k (2k+2n)}{k!(n+k)!} \frac{x^{2k+2n-1}}{2^{2k+n}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n-1+k)!} \frac{x^{2k+2n-1}}{2^{2k+n-1}} = x^n \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(k+n-1)!} \left(\frac{x}{2}\right)^{2k+n-1} = x^n J_{n-1}(x). \qquad \Box$$

Lemma 2.2.1. For any $n \in \mathbb{Z}$,

$$2\,\frac{d}{dx} J_n(x) = J_{n-1}(x) - J_{n+1}(x) \qquad (49)$$

and

$$\frac{2n}{x} J_n(x) = J_{n+1}(x) + J_{n-1}(x). \qquad (50)$$

Proof. We take

$$\frac{d}{dx} J_n(x) = \frac{d}{dx}\left[ x^n \cdot x^{-n} J_n(x) \right].$$

Applying the results from Proposition 2.2 and the product rule, we have

$$\frac{d}{dx} J_n(x) = n x^{n-1} \cdot x^{-n} J_n(x) + x^n \frac{d}{dx}\left[ x^{-n} J_n(x) \right] = n x^{-1} J_n(x) + x^n \left( -x^{-n} J_{n+1}(x) \right) = n x^{-1} J_n(x) - J_{n+1}(x).$$

Similarly, we take

$$\frac{d}{dx} J_n(x) = \frac{d}{dx}\left[ x^{-n} \cdot x^n J_n(x) \right] = -n x^{-n-1} \cdot x^n J_n(x) + x^{-n} \frac{d}{dx}\left[ x^n J_n(x) \right] = -n x^{-1} J_n(x) + J_{n-1}(x).$$

Adding these two expressions, we get the first result of the lemma:

$$2\,\frac{d}{dx} J_n(x) = J_{n-1}(x) - J_{n+1}(x).$$

Subtracting the same expressions, we get the second result:

$$\frac{2n}{x} J_n(x) = J_{n-1}(x) + J_{n+1}(x). \qquad \Box$$
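Both recurrences of Lemma 2.2.1 can be verified by differentiating the power series term by term. A Python sketch (the helpers `jn` and `djn` are this sketch's own):

```python
from math import factorial

def jn(n, x, terms=40):
    """Integer-order J_n via the power series (24)."""
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

def djn(n, x, terms=40):
    """Term-by-term derivative of the series for J_n."""
    return sum((-1) ** k * (2 * k + n) / (factorial(k) * factorial(n + k))
               * (x / 2) ** (2 * k + n - 1) / 2 for k in range(terms))

n, x = 3, 1.9
assert abs(2 * djn(n, x) - (jn(n - 1, x) - jn(n + 1, x))) < 1e-12   # (49)
assert abs(2 * n / x * jn(n, x) - (jn(n + 1, x) + jn(n - 1, x))) < 1e-12  # (50)
print("recurrences (49) and (50) verified at n = 3, x = 1.9")
```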

Remark 2.2.1. Lemma 2.2.1 can be proved similarly by differentiating the generating function of $J_n(x)$ with respect to $x$ and $z$, one at a time, and comparing the coefficients. Adding the resulting relations and multiplying by $\frac{x^n}{2}$ will produce the second result from Proposition 2.2.

Remark 2.2.2. Note that the relation shown in equation (50) also holds for all $v \in \mathbb{R}$, not just for $n \in \mathbb{N}$. This will become useful in the next section during our discussion of Lommel polynomials.

Lemma 2.2.2. For any $n \in \mathbb{Z}$ we have

$$\int x^{n+1} J_n(x)\,dx = x^{n+1} J_{n+1}(x) + C \qquad (51)$$

and

$$\int x^{-n+1} J_n(x)\,dx = -x^{-n+1} J_{n-1}(x) + D, \qquad (52)$$

where $C$ and $D$ are arbitrary constants.

Proof. We take the integral of equation (48) from Proposition 2.2 (with $n$ replaced by $n+1$):

$$\int x^{n+1} J_n(x)\,dx = \int \frac{d}{dx}\left( x^{n+1} J_{n+1}(x) \right) dx = x^{n+1} J_{n+1}(x) + C.$$

To obtain the second equation, recall from Lemma 2.1.2 that $J_{-n}(x) = (-1)^n J_n(x)$, or equivalently, $J_n(x) = (-1)^n J_{-n}(x)$. We apply this to the integral below:

$$\int x^{-n+1} J_n(x)\,dx = \int x^{-n+1} (-1)^n J_{-n}(x)\,dx = (-1)^n \int x^{(-n)+1} J_{-n}(x)\,dx = (-1)^n x^{-n+1} J_{-n+1}(x) + D$$

by the first part of the lemma (applied with order $-n$). Then we have

$$\int x^{-n+1} J_n(x)\,dx = (-1)^n x^{-n+1} J_{-n+1}(x) + D = (-1)^{-n+2} x^{-n+1} J_{-n+1}(x) + D = -x^{-n+1} \left( (-1)^{-n+1} J_{-n+1}(x) \right) + D.$$

Applying Lemma 2.1.2 once more,

$$\int x^{-n+1} J_n(x)\,dx = -x^{-n+1} J_{n-1}(x) + D. \qquad \Box$$
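Lemma 2.2.2 can be spot-checked by numerical quadrature: integrating (51) from $0$, the constant drops out because $x^{n+1} J_{n+1}(x)$ vanishes at the origin. A Python sketch using a composite Simpson rule (the helper names and the quadrature parameters are this sketch's choices):

```python
from math import factorial

def jn(n, x, terms=40):
    """Integer-order J_n via the power series (24)."""
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

def simpson(f, a, b, m=2000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

n, a = 2, 2.5
lhs = simpson(lambda x: x ** (n + 1) * jn(n, x), 0.0, a)
assert abs(lhs - a ** (n + 1) * jn(n + 1, a)) < 1e-8
print("integral identity (51) verified for n = 2 on [0, 2.5]")
```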

Lemma 2.2.3. We have

$$J_0^2(x) + 2\sum_{n=1}^{\infty} J_n^2(x) = 1 \qquad (53)$$

and, for any $m \neq 0$,

$$\sum_{n=-\infty}^{\infty} J_{n+m}(x) J_n(x) = 0. \qquad (54)$$

Proof. We have

$$e^{\frac{x}{2}(z - z^{-1})}\, e^{\frac{x}{2}(z^{-1} - z)} = e^0 = 1,$$

and by Proposition 2.1,

$$1 = \sum_{k=-\infty}^{\infty} J_k(x)\, z^k \sum_{n=-\infty}^{\infty} J_n(x)\, z^{-n}.$$

Letting $m = k - n$, we write

$$1 = \sum_{m=-\infty}^{\infty} \left( \sum_{n=-\infty}^{\infty} J_{n+m}(x) J_n(x) \right) z^m = \sum_{n=-\infty}^{\infty} J_n^2(x)\, z^0 + \sum_{\substack{m\in\mathbb{Z} \\ m\neq 0}} \left( \sum_{n=-\infty}^{\infty} J_{n+m}(x) J_n(x) \right) z^m.$$

Comparing the coefficients of $z^0$ on both sides, we have

$$1 = \sum_{n=-\infty}^{\infty} J_n^2(x),$$

and comparing the coefficients of $z^m$ for $m \neq 0$,

$$0 = \sum_{n=-\infty}^{\infty} J_{n+m}(x) J_n(x).$$

We will now examine the equation $1 = \sum_{n=-\infty}^{\infty} J_n^2(x)$ to find an equivalent form. We can write this summation as $1 = J_0^2(x) + \sum_{n=-\infty}^{-1} J_n^2(x) + \sum_{n=1}^{\infty} J_n^2(x)$.

Step One. Fix any even $n \in \mathbb{N}$. Lemma 2.1.2 gives that $J_n(x) = J_{-n}(x)$. Squaring both sides, we get $J_n^2(x) = J_{-n}^2(x)$, so the sum of the $n$ and $-n$ terms is $2J_n^2(x)$.

Step Two. Fix any odd $n \in \mathbb{N}$. Then we have $J_n(x) = -J_{-n}(x)$. Squaring both sides, we again have $J_n^2(x) = J_{-n}^2(x)$, so $J_n^2(x) + J_{-n}^2(x) = 2J_n^2(x)$.

Then our equivalent equation is

$$1 = J_0^2(x) + 2\sum_{n=1}^{\infty} J_n^2(x). \qquad \Box$$
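Both identities of Lemma 2.2.3 truncate well, since $J_n(x)$ decays super-exponentially in $|n|$. A Python sketch (helper name and cutoffs are this sketch's choices):

```python
from math import factorial

def jn(n, x, terms=40):
    """Integer-order J_n; negative orders via J_{-n} = (-1)^n J_n."""
    if n < 0:
        return (-1) ** n * jn(-n, x, terms)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x = 3.1
assert abs(jn(0, x) ** 2 + 2 * sum(jn(n, x) ** 2 for n in range(1, 25)) - 1) < 1e-12  # (53)
m = 2
assert abs(sum(jn(n + m, x) * jn(n, x) for n in range(-25, 26))) < 1e-12  # (54)
print("identities (53) and (54) verified at x = 3.1")
```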

Lemma 2.2.4. We have

$$\sum_{n\in\mathbb{Z}} J_n(x) = 1. \qquad (55)$$

Proof. Set $z = 1$ in the generating function. Then we have

$$e^{\frac{x}{2}(1 - 1^{-1})} = e^0 = 1 = \sum_{n=-\infty}^{\infty} J_n(x)\,(1)^n = \sum_{n\in\mathbb{Z}} J_n(x). \qquad \Box$$

Lemma 2.2.5. For all $n \in \mathbb{Z}$, we have

$$J_n(x+y) = \sum_{k\in\mathbb{Z}} J_k(x) J_{n-k}(y). \qquad (56)$$

Proof. Observe that

$$\sum_{n\in\mathbb{Z}} J_n(x+y)\, t^n = e^{\frac{1}{2}(x+y)(t - t^{-1})} = e^{\frac{1}{2}x(t - t^{-1})}\, e^{\frac{1}{2}y(t - t^{-1})} = \sum_{k\in\mathbb{Z}} J_k(x)\, t^k \sum_{m\in\mathbb{Z}} J_m(y)\, t^m = \sum_{n\in\mathbb{Z}} \left( \sum_{k\in\mathbb{Z}} J_k(x) J_{n-k}(y) \right) t^n.$$

Comparing the coefficients yields the result. $\Box$
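The addition theorem of Lemma 2.2.5 can be checked the same way as the earlier identities. A Python sketch (helper name and truncation window are this sketch's choices):

```python
from math import factorial

def jn(n, x, terms=40):
    """Integer-order J_n; negative orders via J_{-n} = (-1)^n J_n."""
    if n < 0:
        return (-1) ** n * jn(-n, x, terms)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x, y, n = 1.1, 0.8, 2
lhs = jn(n, x + y)
rhs = sum(jn(k, x) * jn(n - k, y) for k in range(-25, 26))
assert abs(lhs - rhs) < 1e-12
print("addition theorem (56) verified at n = 2, x = 1.1, y = 0.8")
```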

2.2 Some Special Values

In this section we will examine what the Bessel functions look like when some particular values of the order are chosen.

Lemma 2.2.6. If $v = \frac{1}{2}$ or $-\frac{1}{2}$, we have

$$J_{\frac{1}{2}}(x) = Y_{-\frac{1}{2}}(x) = \sqrt{\frac{2}{\pi x}}\, \sin(x) \qquad (57)$$

and

$$J_{-\frac{1}{2}}(x) = -Y_{\frac{1}{2}}(x) = \sqrt{\frac{2}{\pi x}}\, \cos(x). \qquad (58)$$

Proof. Observe the series representation of $J_v$ and apply the properties of the Gamma function. We begin with

$$J_{\frac{1}{2}}(x) = \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+\frac{3}{2})} \left(\frac{x}{2}\right)^{2k}.$$

Since $\Gamma\!\left(k + \frac{3}{2}\right) = \frac{(2k+1)!\sqrt{\pi}}{k!\,2^{2k+1}}$, we have

$$J_{\frac{1}{2}}(x) = \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\frac{(2k+1)!\sqrt{\pi}}{k!\,2^{2k+1}}} \left(\frac{x}{2}\right)^{2k} = \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k\, 2^{2k+1}}{(2k+1)!\sqrt{\pi}} \left(\frac{x}{2}\right)^{2k}$$
$$= \sqrt{\frac{2}{x}} \sum_{k=0}^{\infty} \frac{(-1)^k\, 2^{2k+1}}{(2k+1)!\sqrt{\pi}} \left(\frac{x}{2}\right)^{2k+1} = \sqrt{\frac{2}{x\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\, x^{2k+1}.$$

Since we know the Taylor series

$$\sin(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\, x^{2k+1},$$

we can conclude that

$$J_{\frac{1}{2}}(x) = \sqrt{\frac{2}{\pi x}}\, \sin(x).$$

We also have that

$$Y_{-\frac{1}{2}}(x) = \frac{\cos(-\frac{\pi}{2})\, J_{-\frac{1}{2}}(x) - J_{\frac{1}{2}}(x)}{\sin(-\frac{\pi}{2})} = \frac{(0)\, J_{-\frac{1}{2}}(x) - J_{\frac{1}{2}}(x)}{-1} = J_{\frac{1}{2}}(x).$$

Thus we have proven the first result. The second result follows similarly, considering that $\Gamma\!\left(k + \frac{1}{2}\right) = \frac{(2k)!\sqrt{\pi}}{2^{2k}\,k!}$ and $\cos(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}$. We have

$$J_{-\frac{1}{2}}(x) = \left(\frac{x}{2}\right)^{-\frac{1}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+\frac{1}{2})} \left(\frac{x}{2}\right)^{2k} = \sqrt{\frac{2}{x}} \sum_{k=0}^{\infty} \frac{(-1)^k\, 2^{2k}}{(2k)!\sqrt{\pi}} \left(\frac{x}{2}\right)^{2k} = \sqrt{\frac{2}{x\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, x^{2k} = \sqrt{\frac{2}{\pi x}}\, \cos(x).$$

Also,

$$Y_{\frac{1}{2}}(x) = \frac{\cos(\frac{\pi}{2})\, J_{\frac{1}{2}}(x) - J_{-\frac{1}{2}}(x)}{\sin(\frac{\pi}{2})} = \frac{(0)\, J_{\frac{1}{2}}(x) - J_{-\frac{1}{2}}(x)}{1} = -J_{-\frac{1}{2}}(x).$$

This proves the second result. $\Box$
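The closed forms (57) and (58) can be compared directly against the Gamma-function series (29). A Python sketch using the standard-library `math.gamma` (the helper name `jv` is this sketch's own):

```python
from math import gamma, sqrt, sin, cos, pi

def jv(v, x, terms=40):
    """Series (29) with the Gamma function; works for half-integer v."""
    return sum((-1) ** k / (gamma(k + 1) * gamma(v + k + 1)) * (x / 2) ** (2 * k + v)
               for k in range(terms))

x = 2.4
assert abs(jv(0.5, x) - sqrt(2 / (pi * x)) * sin(x)) < 1e-12   # (57)
assert abs(jv(-0.5, x) - sqrt(2 / (pi * x)) * cos(x)) < 1e-12  # (58)
print("closed forms (57) and (58) verified at x = 2.4")
```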


Lemma 2.2.7. If $v \in \frac{1}{2} + \mathbb{Z}$, there are polynomials $P_n(x)$ and $Q_n(x)$ with

$$\deg P_n = \deg Q_n = n, \qquad P_n(-x) = (-1)^n P_n(x), \qquad Q_n(-x) = (-1)^n Q_n(x),$$

such that for any $k \in \mathbb{N}_0$

$$J_{k+\frac{1}{2}}(x) = \sqrt{\frac{2}{x\pi}}\left( P_k\!\left(\frac{1}{x}\right) \sin(x) - Q_{k-1}\!\left(\frac{1}{x}\right) \cos(x) \right) \qquad (59)$$

and

$$J_{-k-\frac{1}{2}}(x) = (-1)^k \sqrt{\frac{2}{x\pi}}\left( P_k\!\left(\frac{1}{x}\right) \cos(x) + Q_{k-1}\!\left(\frac{1}{x}\right) \sin(x) \right). \qquad (60)$$

Proof. From Lemma 2.2.1, equation (50), we had the recurrence formula

$$J_{v+1}(x) = \frac{2v}{x} J_v(x) - J_{v-1}(x).$$

We will use this formula and show by induction that

$$J_{n+v}(x) = P_n\!\left(\frac{1}{x}\right) J_v(x) - Q_{n-1}\!\left(\frac{1}{x}\right) J_{v-1}(x)$$

for the $P_n$ and $Q_n$ defined in the lemma. We begin with the base case $J_{v+2}(x)$:

$$J_{v+2}(x) = \frac{2v}{x} J_{v+1}(x) - J_v(x) = \frac{2v}{x}\left( \frac{2v}{x} J_v(x) - J_{v-1}(x) \right) - J_v(x) = \left( \left(\frac{2v}{x}\right)^2 - 1 \right) J_v(x) - \frac{2v}{x} J_{v-1}(x).$$

We will also show the case $J_{v+3}(x)$:

$$J_{v+3}(x) = \frac{2v}{x} J_{v+2}(x) - J_{v+1}(x) = \frac{2v}{x}\left( \left( \left(\frac{2v}{x}\right)^2 - 1 \right) J_v(x) - \frac{2v}{x} J_{v-1}(x) \right) - J_{v+1}(x)$$
$$= \left( \left(\frac{2v}{x}\right)^3 - \frac{2v}{x} \right) J_v(x) - \left(\frac{2v}{x}\right)^2 J_{v-1}(x) - \left( \frac{2v}{x} J_v(x) - J_{v-1}(x) \right)$$
$$= \left( \left(\frac{2v}{x}\right)^3 - \frac{4v}{x} \right) J_v(x) - \left( \left(\frac{2v}{x}\right)^2 - 1 \right) J_{v-1}(x).$$

Now suppose the formula is true for some $n \in \mathbb{N}$. We will show that it must hold for the $n+1$ case. We have

$$J_{n+1+v}(x) = P_n\!\left(\frac{1}{x}\right) J_{v+1}(x) - Q_{n-1}\!\left(\frac{1}{x}\right) J_v(x) = P_n\!\left(\frac{1}{x}\right)\left( \frac{2v}{x} J_v(x) - J_{v-1}(x) \right) - Q_{n-1}\!\left(\frac{1}{x}\right) J_v(x)$$
$$= \left( \frac{2v}{x} P_n\!\left(\frac{1}{x}\right) - Q_{n-1}\!\left(\frac{1}{x}\right) \right) J_v(x) - P_n\!\left(\frac{1}{x}\right) J_{v-1}(x).$$

Letting $P_{n+1}(\frac{1}{x}) = \frac{2v}{x} P_n(\frac{1}{x}) - Q_{n-1}(\frac{1}{x})$ and $Q_n(\frac{1}{x}) = P_n(\frac{1}{x})$, we now have

$$J_{n+1+v}(x) = P_{n+1}\!\left(\frac{1}{x}\right) J_v(x) - Q_n\!\left(\frac{1}{x}\right) J_{v-1}(x).$$

By the principle of mathematical induction, the formula holds for all $n \in \mathbb{N}$.

Now let $v = \frac{1}{2}$. We have

$$J_{k+\frac{1}{2}}(x) = P_k\!\left(\frac{1}{x}\right) J_{\frac{1}{2}}(x) - Q_{k-1}\!\left(\frac{1}{x}\right) J_{-\frac{1}{2}}(x).$$

Plugging in equations (57) and (58) from Lemma 2.2.6, we get the first result:

$$J_{k+\frac{1}{2}}(x) = \sqrt{\frac{2}{x\pi}}\left( P_k\!\left(\frac{1}{x}\right) \sin(x) - Q_{k-1}\!\left(\frac{1}{x}\right) \cos(x) \right).$$

To achieve the second result, we can rewrite our recurrence relation as

$$J_{-v-1}(x) = -\frac{2v}{x} J_{-v}(x) - J_{-v+1}(x).$$

We then have

$$J_{-v-2}(x) = \left( \left(\frac{2v}{x}\right)^2 - 1 \right) J_{-v}(x) + \frac{2v}{x} J_{-v+1}(x)$$

and

$$J_{-v-3}(x) = -\left( \left(\frac{2v}{x}\right)^3 - \frac{4v}{x} \right) J_{-v}(x) - \left( \left(\frac{2v}{x}\right)^2 - 1 \right) J_{-v+1}(x).$$

Continuing to iterate in the same manner as before, we get the formula

$$J_{-v-k}(x) = (-1)^k \left( P_k\!\left(\frac{1}{x}\right) J_{-v}(x) + Q_{k-1}\!\left(\frac{1}{x}\right) J_{-v+1}(x) \right).$$

Letting $v = \frac{1}{2}$, we have the second result:

$$J_{-k-\frac{1}{2}}(x) = (-1)^k \sqrt{\frac{2}{x\pi}}\left( P_k\!\left(\frac{1}{x}\right) \cos(x) + Q_{k-1}\!\left(\frac{1}{x}\right) \sin(x) \right). \qquad \Box$$

Remark 2.2.3. The polynomials $P_n$ and $Q_n$ in Lemma 2.2.7 are called Lommel polynomials and were introduced by the physicist Eugen von Lommel (1837-1899). They solve the recurrence relation

$$J_{m+v}(z) = J_v(z)\, R_{m,v}(z) - J_{v-1}(z)\, R_{m-1,v+1}(z)$$

and are given by the formula

$$R_{m,v}(z) = \sum_{n=0}^{\lfloor \frac{m}{2} \rfloor} \frac{(-1)^n (m-n)!\, \Gamma(v+m-n)}{n!\,(m-2n)!\, \Gamma(v+n)} \left(\frac{z}{2}\right)^{2n-m}.$$

Lemma 2.2.8. For $k \in \mathbb{N}$,

$$Y_{-k-\frac{1}{2}}(x) = (-1)^k J_{k+\frac{1}{2}}(x) \qquad (61)$$

and

$$Y_{k+\frac{1}{2}}(x) = (-1)^{k-1} J_{-k-\frac{1}{2}}(x). \qquad (62)$$

Proof. We use formula (37) to obtain

$$Y_{-k-\frac{1}{2}}(x) = \frac{\cos(-k\pi - \frac{\pi}{2})\, J_{-k-\frac{1}{2}}(x) - J_{k+\frac{1}{2}}(x)}{\sin(-k\pi - \frac{\pi}{2})} = \frac{-J_{k+\frac{1}{2}}(x)}{(-1)^{k+1}} = (-1)^k J_{k+\frac{1}{2}}(x).$$

And similarly,

$$Y_{k+\frac{1}{2}}(x) = \frac{\cos(k\pi + \frac{\pi}{2})\, J_{k+\frac{1}{2}}(x) - J_{-k-\frac{1}{2}}(x)}{\sin(k\pi + \frac{\pi}{2})} = \frac{-J_{-k-\frac{1}{2}}(x)}{(-1)^k} = (-1)^{k-1} J_{-k-\frac{1}{2}}(x). \qquad \Box$$

2.3 Integral Representations

The purpose of this section is to give the integral representations of each of our two Bessel
functions. These will aid us later on in our discussion of asymptotics.

Theorem 2.3. For all $v \in \mathbb{C}$ and $x$ with $\Re(x) > 0$ we have

$$J_v(x) = \frac{1}{\pi} \int_0^{\pi} \cos(x\sin t - vt)\,dt - \frac{\sin(\pi v)}{\pi} \int_0^{\infty} e^{-x\sinh(t) - vt}\,dt \qquad (63)$$

and

$$Y_v(x) = \frac{1}{\pi} \int_0^{\pi} \sin(x\sin t - vt)\,dt - \frac{1}{\pi} \int_0^{\infty} e^{-x\sinh(t)} \left( e^{vt} + \cos(\pi v)\, e^{-vt} \right) dt. \qquad (64)$$

Proof. A representation of the reciprocal Gamma function on the complex plane (given to us by the mathematician Hermann Hankel) is

$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i} \int_{\gamma_1} t^{-z} e^t\,dt,$$

where $\gamma_1$ is a contour in the complex plane coming from $-\infty$, turning counterclockwise around $0$, and heading back towards $-\infty$.

Figure 1: The contour $\gamma_1$.

Then we have

$$J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i} \int_{\gamma_1} \sum_{k=0}^{\infty} \frac{(-1)^k (\frac{x}{2})^{2k}\, t^{-v-k-1}}{k!}\, e^t\,dt,$$

since

$$\frac{1}{\Gamma(v+k+1)} = \frac{1}{2\pi i} \int_{\gamma_1} t^{-v-k-1} e^t\,dt.$$

Recall the power series representation

$$e^{-\frac{x^2}{4t}} = \sum_{k=0}^{\infty} \frac{4^{-k}(-x^2)^k\, t^{-k}}{k!} = \sum_{k=0}^{\infty} \frac{(-1)^k (\frac{x}{2})^{2k}\, t^{-k}}{k!}.$$

Then $J_v(x)$ becomes

$$J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i} \int_{\gamma_1} t^{-v-1} e^{t - \frac{x^2}{4t}}\,dt.$$
We will apply the substitution $t = \frac{x}{2}u$. Then

$$J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i} \int_{\gamma_2} \left(\frac{x}{2}u\right)^{-v-1} e^{\frac{x}{2}u - \frac{(x/2)^2}{(x/2)u}} \left(\frac{x}{2}\right) du = \frac{1}{2\pi i} \left(\frac{x}{2}\right)^{v+1} \left(\frac{x}{2}\right)^{-v-1} \int_{\gamma_2} u^{-v-1} e^{\frac{x}{2}(u - \frac{1}{u})}\,du = \frac{1}{2\pi i} \int_{\gamma_2} u^{-v-1} e^{\frac{x}{2}(u - \frac{1}{u})}\,du$$

for some complex contour $\gamma_2$ of the same type. Next we will perform another substitution, $u = e^w$. This changes the contour, since $u = e^w$ must trace out $\gamma_2$ as $w$ traces the new contour: the new contour originates from $+\infty$, turns around $0$ (positively oriented), and heads back to $+\infty$. A suitable contour following this path is the boundary of the rectangle with complex vertices $\infty - i\pi$, $-i\pi$, $i\pi$, and $\infty + i\pi$. This will be our new contour, $\gamma$.

Figure 2: The contour $\gamma$, with vertices $\infty - i\pi$, $-i\pi$, $i\pi$, and $\infty + i\pi$.

Making this substitution ($du = e^w\,dw$), we have

$$J_v(x) = \frac{1}{2\pi i} \int_{\gamma} (e^w)^{-v-1} e^{\frac{x}{2}(e^w - e^{-w})}\, e^w\,dw = \frac{1}{2\pi i} \int_{\gamma} (e^w)^{-v} e^{\frac{x}{2}(e^w - e^{-w})}\,dw = \frac{1}{2\pi i} \int_{\gamma} e^{-vw} e^{x\sinh(w)}\,dw.$$

The integral over the rectangular $\gamma$ can be split into three parts: the integral along the left vertical edge, the integral along the top edge, and the negative of the integral along the bottom edge. We can write this as

$$J_v(x) = \frac{1}{2\pi i}\left( P_1 + P_2 - P_3 \right)$$

with

$$P_1 = \int_{-\pi}^{\pi} e^{-ivt} e^{x\sinh(it)}\, i\,dt, \qquad P_2 = \int_0^{\infty} e^{-v(i\pi + t)} e^{x\sinh(i\pi + t)}\,dt, \qquad P_3 = \int_0^{\infty} e^{-v(-i\pi + t)} e^{x\sinh(-i\pi + t)}\,dt.$$

Step One. Let us examine the first part, $P_1$. We have

$$\frac{1}{2\pi i} P_1 = \frac{1}{2\pi i} \int_{-\pi}^{\pi} e^{-ivt} e^{x\sinh(it)}\, i\,dt = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{x\sinh(it) - ivt}\,dt.$$

Recall the formula for the hyperbolic sine which says that $\sinh(it) = i\sin(t)$. We use this to get

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} e^{x\sinh(it) - ivt}\,dt = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(x\sin(t) - vt)}\,dt = \frac{1}{2\pi}\left( \int_0^{\pi} e^{i(x\sin(t) - vt)}\,dt + \int_{-\pi}^{0} e^{i(x\sin(t) - vt)}\,dt \right) = \frac{1}{2\pi} \int_0^{\pi} \left( e^{i(x\sin(t) - vt)} + e^{-i(x\sin(t) - vt)} \right) dt.$$

By Euler's formula, this becomes

$$\frac{1}{2\pi i} P_1 = \frac{1}{2\pi} \int_0^{\pi} \Big( \cos(x\sin(t) - vt) + i\sin(x\sin(t) - vt) + \cos(-x\sin(t) + vt) + i\sin(-x\sin(t) + vt) \Big)\,dt.$$

Since sine is an odd function, $i\sin(x\sin(t) - vt) = -i\sin(-x\sin(t) + vt)$, and since cosine is an even function, $\cos(x\sin(t) - vt) = \cos(-x\sin(t) + vt)$. Therefore, we have

$$\frac{1}{2\pi i} P_1 = \frac{1}{\pi} \int_0^{\pi} \cos(x\sin(t) - vt)\,dt.$$

This gives the first half of the first result.

Step Two. We will examine

$$\frac{1}{2\pi i}(P_2 - P_3) = \frac{1}{2\pi i}\left( \int_0^{\infty} e^{-v(i\pi + t)} e^{x\sinh(i\pi + t)}\,dt - \int_0^{\infty} e^{-v(-i\pi + t)} e^{x\sinh(-i\pi + t)}\,dt \right) = \frac{1}{2\pi i} \int_0^{\infty} \left( e^{-i\pi v} e^{-vt} e^{x\sinh(i\pi + t)} - e^{i\pi v} e^{-vt} e^{x\sinh(-i\pi + t)} \right) dt.$$

We will use the property of the hyperbolic sine function which says that $\sinh(-i\pi + t) = -\sinh(t)$ and $\sinh(i\pi + t) = -\sinh(t)$. Then we have

$$\frac{1}{2\pi i}(P_2 - P_3) = \frac{1}{2\pi i} \int_0^{\infty} e^{-x\sinh(t) - vt} \left( e^{-i\pi v} - e^{i\pi v} \right) dt.$$

Recall the identity $\sin(x) = \frac{e^{ix} - e^{-ix}}{2i}$, which follows as a result of Euler's formula. Utilizing this, we have

$$\frac{1}{2\pi i}(P_2 - P_3) = -\frac{\sin(\pi v)}{\pi} \int_0^{\infty} e^{-x\sinh(t) - vt}\,dt.$$

This proves the second half of the result for $J_v(x)$.

Next we will find the result for $Y_v(x)$. Rearranging equation (37), we have

$$\sin(v\pi)\, Y_v(x) = \cos(v\pi)\, J_v(x) - J_{-v}(x)$$
$$= \frac{\cos(v\pi)}{\pi} \int_0^{\pi} \cos(x\sin(t) - vt)\,dt - \frac{\cos(v\pi)\sin(v\pi)}{\pi} \int_0^{\infty} e^{-x\sinh(t)} e^{-vt}\,dt - \frac{1}{\pi} \int_0^{\pi} \cos(x\sin(t) + vt)\,dt + \frac{\sin(-v\pi)}{\pi} \int_0^{\infty} e^{-x\sinh(t)} e^{vt}\,dt$$
$$= \frac{1}{\pi}\left( \cos(v\pi) \int_0^{\pi} \cos(x\sin(t) - vt)\,dt - \int_0^{\pi} \cos(x\sin(t) + vt)\,dt \right) - \frac{\sin(v\pi)}{\pi} \int_0^{\infty} e^{-x\sinh(t)} \left( \cos(v\pi)\, e^{-vt} + e^{vt} \right) dt$$
$$= \frac{L_1}{\pi} - \frac{\sin(v\pi)}{\pi} L_2,$$

where

$$L_1 = \cos(v\pi) \int_0^{\pi} \cos(x\sin(t) - vt)\,dt - \int_0^{\pi} \cos(x\sin(t) + vt)\,dt$$

and

$$L_2 = \int_0^{\infty} e^{-x\sinh(t)} \left( \cos(v\pi)\, e^{-vt} + e^{vt} \right) dt.$$

We will use some product-to-sum rules for trigonometric functions to rewrite $L_1$. Recall that $\cos(a)\cos(b) = \frac{1}{2}\left( \cos(a+b) + \cos(a-b) \right)$ and $\sin(a)\sin(b) = \frac{1}{2}\left( \cos(a-b) - \cos(a+b) \right)$. We have

$$\cos(v\pi)\cos(x\sin(t) - vt) = \frac{1}{2}\Big( \cos(x\sin(t) - vt + v\pi) + \cos(x\sin(t) - vt - v\pi) \Big)$$
$$= \cos(x\sin(t) - vt + v\pi) - \frac{1}{2}\Big( \cos(x\sin(t) - vt + v\pi) - \cos(x\sin(t) - vt - v\pi) \Big)$$
$$= \cos(x\sin(t) - vt + v\pi) + \sin(v\pi)\sin(x\sin(t) - vt)$$
$$= \cos\big(x\sin(t) + v(\pi - t)\big) + \sin(v\pi)\sin(x\sin(t) - vt).$$

Special Step. Before we continue, we need to show that the following relation is true:

$$\int_0^{\pi} \cos\big(x\sin(t) + v(\pi - t)\big)\,dt = \int_0^{\pi} \cos\big(x\sin(t) + vt\big)\,dt.$$

To see this, substitute $t \mapsto \pi - t$ on the left-hand side. The limits of integration are exchanged (which cancels the sign from $dt \mapsto -dt$), and since $\sin(\pi - t) = \sin(t)$, we get

$$\int_0^{\pi} \cos\big(x\sin(t) + v(\pi - t)\big)\,dt = \int_0^{\pi} \cos\big(x\sin(\pi - t) + vt\big)\,dt = \int_0^{\pi} \cos\big(x\sin(t) + vt\big)\,dt.$$

And thus, we have shown that the relation holds. We will use it to simplify $L_1$ as follows:

$$L_1 = \int_0^{\pi} \Big( \cos\big(x\sin(t) + v(\pi - t)\big) + \sin(v\pi)\sin\big(x\sin(t) - vt\big) - \cos\big(x\sin(t) + vt\big) \Big)\,dt = \sin(v\pi) \int_0^{\pi} \sin\big(x\sin(t) - vt\big)\,dt.$$

Substituting this back into $\sin(v\pi)\, Y_v(x) = \frac{L_1}{\pi} - \frac{\sin(v\pi)}{\pi} L_2$ and dividing by $\sin(v\pi)$ proves the result for $Y_v(x)$. $\Box$
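Theorem 2.3 can be tested at the half-integer order $v = \frac{1}{2}$, where Lemma 2.2.6 supplies an exact value to compare against. A Python sketch (the quadrature parameters and the truncation of the second integral at $t = 12$, where $e^{-x\sinh t}$ is far below machine precision, are this sketch's choices):

```python
from math import sin, cos, sinh, exp, sqrt, pi

def simpson(f, a, b, m=4000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

v, x = 0.5, 2.0
first = simpson(lambda t: cos(x * sin(t) - v * t), 0.0, pi) / pi
second = sin(pi * v) / pi * simpson(lambda t: exp(-x * sinh(t) - v * t), 0.0, 12.0)
# Lemma 2.2.6: J_{1/2}(x) = sqrt(2/(pi x)) sin(x).
assert abs((first - second) - sqrt(2 / (pi * x)) * sin(x)) < 1e-6
print("integral representation (63) verified at v = 1/2, x = 2")
```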

2.4 Using the Generating Function to Derive $J_n(x)$

The previous section gave the integral representation of $J_v(x)$ for all $v \in \mathbb{C}$. Notice that for all $n \in \mathbb{N}$, $\sin(n\pi) = 0$, so the second integral in (63) drops out. We see this in the following theorem, which gives the integral representation for orders in the natural numbers.

Theorem 2.4. Let the functions $y_n(x)$ be defined by the Laurent series

$$e^{\frac{x}{2}(z - z^{-1})} = \sum_{n=-\infty}^{\infty} y_n(x)\, z^n. \qquad (65)$$

Then

$$y_n(x) = \left(\frac{x}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!} \left(\frac{x}{2}\right)^{2k} \qquad (66)$$

and

$$y_n(x) = \frac{1}{\pi} \int_0^{\pi} \cos(x\sin\varphi - n\varphi)\,d\varphi. \qquad (67)$$

Proof. Cauchy's integral formula for a Laurent series gives us
$$y_n(x) = \frac{1}{2\pi i} \oint_\gamma \frac{e^{\frac{x}{2}(t - t^{-1})}}{t^{n+1}}\,dt$$
for any simple closed contour $\gamma$ around $0$.


Step One. Perform the u-substitution $t = \frac{2u}{x}$ and recall the series expansion $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$. Then we have
\begin{align*}
y_n(x) &= \frac{1}{2\pi i} \oint_{\gamma'} \frac{e^{\frac{x}{2}\left(\frac{2u}{x} - \left(\frac{2u}{x}\right)^{-1}\right)}}{\left(\frac{2u}{x}\right)^{n+1}}\,\frac{2}{x}\,du \\
&= \frac{1}{2\pi i} \oint_{\gamma'} \frac{e^{u - \frac{x^2}{4u}}}{\left(\frac{2}{x}\right)^{n+1} u^{n+1}}\,\frac{2}{x}\,du \\
&= \frac{1}{2\pi i} \left(\frac{x}{2}\right)^n \oint_{\gamma'} e^{u - \frac{x^2}{4u}}\, u^{-n-1}\,du \\
&= \frac{1}{2\pi i} \left(\frac{x}{2}\right)^n \oint_{\gamma'} \sum_{m=0}^{\infty} \frac{u^m}{m!} \sum_{k=0}^{\infty} \frac{(-1)^k \left(\frac{x}{2}\right)^{2k} u^{-k}}{k!}\, u^{-n-1}\,du \\
&= \left(\frac{x}{2}\right)^n \sum_{m=0}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{m!\,k!} \left(\frac{x}{2}\right)^{2k} \frac{1}{2\pi i} \oint_{\gamma'} u^{m-k-n-1}\,du.
\end{align*}

Choosing $\gamma'$ to be a circle $C_R$ of radius $R$, Cauchy's formula gives us
$$\oint_{C_R} u^l\,du = 2\pi i$$
for $l = -1$; otherwise the integral equals $0$. Thus the only summation terms remaining are those with $m = n + k$, and the equation becomes
$$y_n(x) = \left(\frac{x}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!} \left(\frac{x}{2}\right)^{2k}.$$
k=0

This proves the first result.
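The series just obtained is the standard power series for $J_n(x)$. As a brief check, added here for illustration (the truncation at 30 terms is an arbitrary choice, ample for these arguments), its partial sums agree with SciPy's `jv`:

```python
# Added check: partial sums of the series above converge to scipy.special.jv.
import math
from scipy import special

def J_series(n, x, terms=30):
    # y_n(x) = (x/2)^n * sum_k (-1)^k / ((n+k)! k!) * (x/2)^(2k)
    return (x/2)**n * sum(
        (-1)**k / (math.factorial(n + k) * math.factorial(k)) * (x/2)**(2*k)
        for k in range(terms))

for n, x in [(0, 1.0), (1, 2.0), (3, 2.5)]:
    print(n, x, J_series(n, x), special.jv(n, x))
```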

Step Two. We can choose the contour $t = e^{i\phi}$ and integrate from $0$ to $2\pi$, with $dt = ie^{i\phi}\,d\phi$. Then we have
\begin{align*}
y_n(x) &= \frac{1}{2\pi i}\int_0^{2\pi} \frac{e^{\frac{x}{2}(e^{i\phi} - e^{-i\phi})}}{(e^{i\phi})^{n+1}}\, ie^{i\phi}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}(e^{i\phi} - e^{-i\phi})}}{e^{in\phi}}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}\left((\cos\phi + i\sin\phi) - (\cos(-\phi) + i\sin(-\phi))\right)}}{e^{in\phi}}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}(\cos\phi + i\sin\phi - \cos\phi + i\sin\phi)}}{e^{in\phi}}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{ix\sin\phi}}{e^{in\phi}}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} e^{i(x\sin\phi - n\phi)}\,d\phi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \big(\cos(x\sin\phi - n\phi) + i\sin(x\sin\phi - n\phi)\big)\,d\phi \\
&= \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\phi - n\phi)\,d\phi.
\end{align*}
For the last equality, substitute $\phi \mapsto 2\pi - \phi$ on $[\pi, 2\pi]$: since $n$ is an integer, the cosine term is symmetric about $\phi = \pi$ and the sine term is antisymmetric, so the sine part vanishes and the cosine part doubles over $[0, \pi]$.
This proves the second result.
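Both results of Theorem 2.4 can be confirmed numerically. The following check, added for this writeup (the test points are arbitrary), evaluates the integral representation (67) and compares it with SciPy:

```python
# Added check: the integral representation of J_n against scipy.special.jn.
import numpy as np
from scipy import integrate, special

def J_integral(n, x):
    # (1/pi) * int_0^pi cos(x sin(phi) - n phi) dphi
    val, _ = integrate.quad(lambda phi: np.cos(x*np.sin(phi) - n*phi), 0, np.pi)
    return val / np.pi

for n, x in [(0, 1.0), (2, 4.0), (4, 10.0)]:
    print(n, x, J_integral(n, x), special.jn(n, x))
```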

2.5 Asymptotic Analysis

In this section we will use the proven integral representations to derive some asymptotic
formulae.
Theorem 2.5. For $x \in \mathbb{R}$, as $x \to \infty$ we have
$$J_v(x) \sim \sqrt{\frac{2}{\pi x}}\,\cos\Big(x - \frac{\pi}{4} - \frac{v\pi}{2}\Big), \quad (68)$$
and
$$Y_v(x) \sim \sqrt{\frac{2}{\pi x}}\,\sin\Big(x - \frac{\pi}{4} - \frac{v\pi}{2}\Big). \quad (69)$$

Proof. The second integrals in both of the integral representations go to $0$ exponentially as $x$ gets large. So as $x \to \infty$, we use Euler's formula to write
$$J_v(x) + iY_v(x) = \frac{1}{\pi}\int_0^\pi e^{i(x\sin(t) - vt)}\,dt + O(x^{-A})$$

for all $A$. Let $u = t - \frac{\pi}{2}$. Then we have
\begin{align*}
J_v(x) + iY_v(x) &= \frac{1}{\pi}\int_{-\pi/2}^{\pi/2} e^{i\left(x\sin\left(u + \frac{\pi}{2}\right) - v\left(u + \frac{\pi}{2}\right)\right)}\,du + O(x^{-A}) \\
&= \frac{2}{\pi}\int_0^{\pi/2} e^{ix\cos(u)}\, e^{-ivu}\, e^{-\frac{iv\pi}{2}}\,du + O(x^{-A}) \\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\int_0^{\pi/2} e^{ix\cos(u)}\big(\cos(vu) - i\sin(vu)\big)\,du + O(x^{-A}) \\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\Bigg(\int_0^{\pi/3} e^{ix\cos(u)}\big(\cos(vu) - i\sin(vu)\big)\,du \\
&\quad + \int_{\pi/3}^{\pi/2} e^{ix\cos(u)}\big(\cos(vu) - i\sin(vu)\big)\,du\Bigg) + O(x^{-A}) \\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\big(P_1 + P_2\big) + O(x^{-A}),
\end{align*}
where $P_1 = \int_0^{\pi/3} e^{ix\cos(u)}\big(\cos(vu) - i\sin(vu)\big)\,du$ and $P_2 = \int_{\pi/3}^{\pi/2} e^{ix\cos(u)}\big(\cos(vu) - i\sin(vu)\big)\,du$.

Step One. We take $P_2$ and make the substitution $\cos(u) = z$. Then $u = \cos^{-1}(z)$ and $du = -\frac{dz}{\sqrt{1 - z^2}}$. So we have
\begin{align*}
P_2 &= \int_{1/2}^{0} e^{ixz}\,\frac{\cos\big(v\cos^{-1}(z)\big) - i\sin\big(v\cos^{-1}(z)\big)}{-\sqrt{1 - z^2}}\,dz \\
&= \int_0^{1/2} e^{ixz}\,\frac{\cos\big(v\cos^{-1}(z)\big) - i\sin\big(v\cos^{-1}(z)\big)}{\sqrt{1 - z^2}}\,dz \\
&= \int_0^{1/2} e^{ixz}\,\varphi(z)\,dz,
\end{align*}
where
$$\varphi(z) = \frac{\cos\big(v\cos^{-1}(z)\big) - i\sin\big(v\cos^{-1}(z)\big)}{\sqrt{1 - z^2}}.$$
1 − z2
Integrating by parts, we have
\begin{align*}
\int_0^{1/2} e^{ixz}\varphi(z)\,dz &= \left[\frac{e^{ixz}}{ix}\,\varphi(z)\right]_0^{1/2} - \frac{1}{ix}\int_0^{1/2} e^{ixz}\,\varphi'(z)\,dz \\
&= \left[\Big(\frac{\sin(xz)}{x} - i\,\frac{\cos(xz)}{x}\Big)\varphi(z)\right]_0^{1/2} - \frac{1}{ix}\int_0^{1/2} e^{ixz}\,\varphi'(z)\,dz \\
&= O(x^{-1}),
\end{align*}
since $z \in [0, \frac{1}{2}]$ avoids any singularities of $\varphi$ and $\varphi'$. (Notice that $\varphi$ is not a function of $x$ and is built from the bounded functions sine and cosine. We are only interested in the behavior as $x$ grows large, where $\varphi$ contributes only a bounded factor.) Hence $P_2 = O(x^{-1})$.
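To illustrate this estimate numerically (the check is an addition to the text, and the order $v = 0.7$ is an arbitrary choice), we can evaluate the $P_2$ integral directly. SciPy's `quad` handles the oscillatory factor $e^{ixz}$ through its `'cos'` and `'sin'` weights:

```python
# Added illustration: the P2 integral decays like 1/x.
import numpy as np
from scipy import integrate

v = 0.7  # arbitrary test order

def phi_re(z):  # real part of phi(z)
    return np.cos(v*np.arccos(z)) / np.sqrt(1 - z**2)

def phi_im(z):  # imaginary part of phi(z)
    return -np.sin(v*np.arccos(z)) / np.sqrt(1 - z**2)

def P2(x):
    # e^{ixz} phi(z) = (cos(xz) + i sin(xz)) (phi_re + i phi_im)
    a = integrate.quad(phi_re, 0, 0.5, weight='cos', wvar=x)[0]
    b = integrate.quad(phi_im, 0, 0.5, weight='sin', wvar=x)[0]
    c = integrate.quad(phi_re, 0, 0.5, weight='sin', wvar=x)[0]
    d = integrate.quad(phi_im, 0, 0.5, weight='cos', wvar=x)[0]
    return complex(a - b, c + d)

for x in [100.0, 1000.0, 10000.0]:
    print(f"x={x:>8}: |P2| = {abs(P2(x)):.2e},  x*|P2| = {x*abs(P2(x)):.3f}")
```

The product $x\,|P_2(x)|$ stays bounded as $x$ grows, as the boundary-term estimate predicts.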

Step Two. We take $P_1$ and substitute $t = \sqrt{2x}\sin\left(\frac{u}{2}\right)$. Then $du = \frac{\sqrt{2}}{\sqrt{x}}\,\frac{dt}{\sqrt{1 - \frac{t^2}{2x}}}$ and $\cos(u) = 1 - \frac{t^2}{x}$. The equation becomes
\begin{align*}
P_1(x) &= \frac{\sqrt{2}}{\sqrt{x}}\int_0^{\sqrt{x/2}} e^{ix\left(1 - \frac{t^2}{x}\right)}\left(\cos\Big(2v\sin^{-1}\Big(\frac{t}{\sqrt{2x}}\Big)\Big) - i\sin\Big(2v\sin^{-1}\Big(\frac{t}{\sqrt{2x}}\Big)\Big)\right)\frac{dt}{\sqrt{1 - \frac{t^2}{2x}}} \\
&= \frac{\sqrt{2}}{\sqrt{x}}\, e^{ix}\int_0^{\sqrt{x/2}} e^{-it^2}\left(\cos\Big(2v\sin^{-1}\Big(\frac{t}{\sqrt{2x}}\Big)\Big) - i\sin\Big(2v\sin^{-1}\Big(\frac{t}{\sqrt{2x}}\Big)\Big)\right)\frac{dt}{\sqrt{1 - \frac{t^2}{2x}}}.
\end{align*}

As $x \to \infty$, we get
$$P_1(x) \sim \frac{\sqrt{2}}{\sqrt{x}}\, e^{ix}\int_0^\infty e^{-it^2}\,dt.$$
Since $\int_0^\infty e^{-it^2}\,dt = \frac{\sqrt{\pi}}{2}\, e^{-\frac{i\pi}{4}}$, we finally have
\begin{align*}
J_v(x) + iY_v(x) &\sim \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\,\frac{\sqrt{2}}{\sqrt{x}}\, e^{ix}\,\frac{\sqrt{\pi}}{2}\, e^{-\frac{i\pi}{4}} \\
&= \sqrt{\frac{2}{\pi x}}\, e^{i\left(x - \frac{\pi}{4} - \frac{v\pi}{2}\right)}.
\end{align*}
πx
Applying Euler’s formula leads to the result.
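The asymptotic formulae can be checked numerically against SciPy at a large argument. This comparison is an addition to the text; $x = 200$ is an arbitrary large test point.

```python
# Added check: the asymptotic formulae (68) and (69) at a large argument.
import numpy as np
from scipy import special

def J_asym(v, x):
    return np.sqrt(2/(np.pi*x)) * np.cos(x - np.pi/4 - v*np.pi/2)

def Y_asym(v, x):
    return np.sqrt(2/(np.pi*x)) * np.sin(x - np.pi/4 - v*np.pi/2)

x = 200.0
for v in [0.0, 1.0, 2.5]:
    print(f"v={v}: J error = {J_asym(v, x) - special.jv(v, x):+.2e}, "
          f"Y error = {Y_asym(v, x) - special.yv(v, x):+.2e}")
```

The errors are tiny compared with the amplitude $\sqrt{2/(\pi x)} \approx 0.056$, consistent with the next-order terms of the full asymptotic expansion.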

3 Graphs

The following code was implemented in Python to create graphs of the Bessel functions of
orders n = 0, 1, 2, 3, 4.

3.1 Graph of First Bessel Function

import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# Integral representation: J_n(x) = (1/pi) * int_0^pi cos(x sin t - n t) dt
def f(x, n):
    return integrate.quad(lambda t: 1/np.pi * np.cos(x*np.sin(t) - n*t),
                          0, np.pi)

X = np.arange(0.0, 30.0, 0.01)

plt.figure(1, figsize=(10, 8))
plt.plot(X, [f(x, 0)[0] for x in X], '--', linewidth=1.7, label='n=0')
plt.plot(X, [f(x, 1)[0] for x in X], linewidth=1.5, label='n=1')
plt.plot(X, [f(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
plt.plot(X, [f(x, 3)[0] for x in X], linewidth=1, label='n=3')
plt.plot(X, [f(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')

legend = plt.legend(loc='upper right', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')

for label in legend.get_texts():
    label.set_fontsize('large')

for label in legend.get_lines():
    label.set_linewidth(1.5)

plt.title('Bessel Function of the First Kind')
plt.show()

[Figure: "Bessel Function of the First Kind" — $J_n(x)$ for $n = 0, 1, 2, 3, 4$ plotted on $0 \le x \le 30$.]

3.2 Graph of Second Bessel Function

import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# Oscillatory part of the integral representation of Y_n
def f(x, n):
    return integrate.quad(lambda t: 1/np.pi * np.sin(x*np.sin(t) - n*t),
                          0, np.pi)

# Exponentially decaying part; the upper limit 150 stands in for infinity,
# since e^{-x sinh t} underflows to zero long before t reaches it.
def g(x, n):
    return integrate.quad(lambda t: -1/np.pi * np.exp(-x*np.sinh(t))
                          * (np.exp(n*t) + np.cos(n*np.pi)*np.exp(-n*t)),
                          0, 150)

X = np.arange(0.1, 30.0, 0.01)

plt.figure(1, figsize=(10, 8))
plt.plot(X, [f(x, 0)[0] + g(x, 0)[0] for x in X], '--', linewidth=1.75, label='n=0')
plt.plot(X, [f(x, 1)[0] + g(x, 1)[0] for x in X], linewidth=1.50, label='n=1')
plt.plot(X, [f(x, 2)[0] + g(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
plt.plot(X, [f(x, 3)[0] + g(x, 3)[0] for x in X], linewidth=1.00, label='n=3')
plt.plot(X, [f(x, 4)[0] + g(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')

legend = plt.legend(loc='upper right', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')

for label in legend.get_texts():
    label.set_fontsize('large')

for label in legend.get_lines():
    label.set_linewidth(1.5)

plt.title('Bessel Function of the Second Kind')
plt.ylim([-1.5, 1.0])
plt.show()

[Figure: "Bessel Function of the Second Kind" — $Y_n(x)$ for $n = 0, 1, 2, 3, 4$ plotted on $0.1 \le x \le 30$, with $y$-axis limits $-1.5$ to $1.0$.]

References

[1] Kreh, M. Bessel Functions. Retrieved from Pennsylvania State University: http://www.math.psu.edu/papikian/Kreh.pdf

[2] Watson, G.N. 1995. A Treatise on the Theory of Bessel Functions. New York, NY: Cambridge Mathematical Library.

[3] Chiang, E.Y.M. 2011. A Brief Introduction to Bessel and Related Special Functions. Retrieved from Hong Kong University of Science and Technology: https://www.math.ust.hk/~machiang/150/Intro_bessel_bk_Nov08.pdf

[4] Fisher, S.D. 1990. Complex Variables. Mineola, NY: Dover Publications, Inc.

[5] Arfken, G.B. 2013. Mathematical Methods for Physicists. Waltham, MA: Elsevier Inc.

[6] Bieri, J. Bessel Equations and Bessel Functions. Retrieved from Redlands University: http://facweb1.redlands.edu/fac/joanna_bieri/pde/GoodSummaryofBesselsFunctions.pdf

