
INVERSE STURM-LIOUVILLE PROBLEMS AND THEIR APPLICATIONS

G. Freiling and V. Yurko
PREFACE
This book presents the main results and methods on inverse spectral problems for Sturm-Liouville differential operators and their applications. Inverse problems of spectral analysis consist in recovering operators from their spectral characteristics. Such problems often appear in mathematics, mechanics, physics, electronics, geophysics, meteorology and other branches of the natural sciences. Inverse problems also play an important role in solving nonlinear evolution equations of mathematical physics. Interest in this subject has grown steadily because of the appearance of new important applications, and nowadays inverse problem theory is developing intensively all over the world.

The greatest success in spectral theory in general, and in inverse spectral problems in particular, has been achieved for the Sturm-Liouville operator

    ℓy := −y″ + q(x)y,    (1)

which is also called the one-dimensional Schrödinger operator. The first studies on the spectral theory of such operators were performed by D. Bernoulli, J. d'Alembert, L. Euler, J. Liouville and Ch. Sturm in connection with the solution of the equation describing the vibration of a string. An intensive development of the spectral theory for various classes of differential and integral operators and for operators in abstract spaces took place in the twentieth century. Deep ideas here are due to G. Birkhoff, D. Hilbert, J. von Neumann, V. Steklov, M. Stone, H. Weyl and many other mathematicians. The main results on inverse spectral problems appeared in the second half of the twentieth century. We mention here the works by R. Beals, G. Borg, L.D. Faddeev, M.G. Gasymov, I.M. Gelfand, B.M. Levitan, I.G. Khachatryan, M.G. Krein, N. Levinson, Z.L. Leibenson, V.A. Marchenko, L.A. Sakhnovich, E. Trubowitz, V.A. Yurko and others (see Section 1.9 for details). An important role in the inverse spectral theory for the Sturm-Liouville operator was played by the transformation operator method (see [mar1], [lev2] and the references therein). This method, however, turned out to be unsuitable for many important classes of inverse problems that are more complicated than the Sturm-Liouville operator. By now other effective methods for solving inverse spectral problems have been created. Among them we point out the method of spectral mappings, which is connected with ideas of the contour integral method and seems promising for inverse spectral problems. These methods have made it possible to solve a number of important problems in various branches of the natural sciences.
In recent years new areas of application of inverse spectral problems have appeared. We mention a remarkable method for solving certain nonlinear evolution equations of mathematical physics that relies on inverse spectral problems (see [abl1], [abl3], [zak1] and Sections 4.1-4.2 for details). Another important class of inverse problems, which often appears in applications, is the recovery of differential equations from incomplete spectral information, when only a part of the spectral information is available for measurement. Many applications are connected with inverse problems for differential equations having singularities and turning points, for higher-order differential operators, and for differential operators with delay or other types of aftereffect.
The main goals of this book are as follows:

To present a fairly elementary and complete introduction to inverse problem theory for ordinary differential equations, suitable for a first reading and accessible not only to mathematicians but also to physicists, engineers and students. Note that the book requires knowledge only of classical analysis and the theory of ordinary linear differential equations.

To describe the main ideas and methods of inverse problem theory using the Sturm-Liouville operator as a model. Up to now many of these ideas, in particular those which appeared in recent years, have been presented in journals only. It is very important that the methods provided in this book can be used (and have already been used) not only for Sturm-Liouville operators, but also for solving inverse problems for other, more complicated classes of operators, such as differential operators of arbitrary order, differential operators with singularities and turning points, pencils of operators, integro-differential and integral operators, and others.

To reflect various applications of inverse spectral problems in the natural sciences and engineering.
The book is organized as follows. In Chapter 1, Sturm-Liouville operators (1) on a finite interval are considered. In Sections 1.1-1.3 we study properties of the spectral characteristics and eigenfunctions, and prove a completeness theorem and an expansion theorem. Sections 1.4-1.8 are devoted to inverse problem theory. We prove uniqueness theorems, give algorithms for the solution of the inverse problems considered, provide necessary and sufficient conditions for their solvability, and study the stability of the solutions. We present several methods for solving inverse problems. The transformation operator method, in which the inverse problem is reduced to the solution of a linear integral equation, is described in Section 1.5. In Section 1.6 we present the method of spectral mappings, in which ideas of the contour integral method are used. The central role there is played by the so-called main equation of the inverse problem, which is a linear equation in a corresponding Banach space. We give a derivation of the main equation, prove its unique solvability and provide explicit formulae for the solution of the inverse problem. At present the contour integral method seems to be the most universal tool in the inverse problem theory for ordinary differential operators. It has a wide area of application in various classes of inverse problems (see, for example, [bea1], [yur1] and the references therein). In the method of standard models, described in Section 1.7, a sequence of model differential operators approximating the unknown operator is constructed. In Section 1.8 we provide the method for the local solution of the inverse problem from two spectra, which is due to G. Borg [Bor1]. In this method the inverse problem is reduced to a special nonlinear integral equation, which can be solved locally.
Chapter 2 is devoted to Sturm-Liouville operators on the half-line. First we consider non-selfadjoint operators with integrable potentials. Using the contour integral method we study the inverse problem of recovering the Sturm-Liouville operator from its Weyl function. Then locally integrable complex-valued potentials are studied, and the inverse problem of recovering the operator from the generalized Weyl function is solved. In Chapter 3, Sturm-Liouville operators on the line are considered, and the inverse scattering problem is studied. In Chapter 4 we provide a number of applications of inverse problem theory in the natural sciences and engineering: we consider the solution of the Korteweg-de Vries equation on the line and on the half-line, solve the inverse problem of constructing the parameters of a medium from incomplete spectral information, and study boundary value problems with aftereffect, inverse problems in elasticity theory, and others.

There exists an extensive literature devoted to inverse spectral problems. Instead of trying to give a complete list of all relevant contributions, we mention only monographs, survey articles and the most important papers, and refer to the references therein. In Section 1.9 we give a short review of the literature on inverse problem theory.
CONTENTS

1. Sturm-Liouville operators on a finite interval
1.1. Behavior of the spectrum
1.2. Properties of eigenfunctions
1.3. Transformation operators
1.4. Uniqueness theorems
1.5. Gelfand-Levitan method
1.6. The method of spectral mappings
1.7. The method of standard models
1.8. Local solution of the inverse problem
1.9. Review of the inverse problem theory
2. Sturm-Liouville operators on the half-line
2.1. Properties of the spectrum. The Weyl function
2.2. Recovery of the differential equation from the Weyl function
2.3. Recovery of the differential equation from the spectral data
2.4. An inverse problem for a wave equation
2.5. The generalized Weyl function
2.6. The Weyl sequence
3. Inverse scattering on the line
3.1. Scattering data
3.2. The main equation
3.3. The inverse scattering problem
3.4. Reflectionless potentials. Modification of the discrete spectrum
4. Applications of the inverse problem theory
4.1. Solution of the Korteweg-de Vries equation on the line
4.2. Korteweg-de Vries equation on the half-line. Nonlinear reflection
4.3. Constructing parameters of a medium from incomplete spectral information
4.4. Discontinuous inverse problems
4.5. Inverse problems in elasticity theory
4.6. Boundary value problems with aftereffect
4.7. Differential operators of the Orr-Sommerfeld type
4.8. Differential equations with turning points
References
I. STURM-LIOUVILLE OPERATORS ON A FINITE INTERVAL

In this chapter we present an introduction to the spectral theory of Sturm-Liouville operators on a finite interval. Sections 1.1-1.2 are devoted to the so-called direct problems of spectral analysis. In Section 1.1 we introduce and study the main spectral characteristics of Sturm-Liouville boundary value problems. In particular, the theorem on the existence and the asymptotic behavior of the eigenvalues and eigenfunctions is proved. In Section 1.2 properties of the eigenfunctions are investigated. It is proved that the system of eigenfunctions is complete and forms an orthogonal basis in L₂(0, π). An expansion theorem in the uniform norm is provided. We also investigate oscillation properties of the eigenfunctions, and prove that the n-th eigenfunction has exactly n zeros inside the interval (0, π). In Section 1.3 we study the so-called transformation operators, which give us an effective tool in Sturm-Liouville spectral theory.
Sections 1.4-1.8 are devoted to constructing the inverse problem theory for Sturm-Liouville operators on a finite interval. In Section 1.4 we give various formulations of the inverse problems and prove the corresponding uniqueness theorems. In Sections 1.5-1.8 we present several methods for constructing the solution of the inverse problems considered. The variety of ideas in these methods allows one to use them for many classes of inverse problems and their applications. Section 1.5 deals with the Gelfand-Levitan method [gel1], in which the transformation operators are used. The method of spectral mappings, considered in Section 1.6, leans on ideas of the contour integral method. By these two methods we obtain algorithms for the solution of the inverse problems, and provide necessary and sufficient conditions for their solvability. In Section 1.7 we describe the method of standard models, which allows one to obtain effective algorithms for the solution of a wide class of inverse problems. The method for the local solution of the inverse problem, which is due to Borg [Bor1], is provided in Section 1.8. A short historical review of the inverse problems of spectral analysis for ordinary differential operators is given in Section 1.9.
1.1. BEHAVIOR OF THE SPECTRUM

Let us consider the boundary value problem L = L(q(x), h, H):

    ℓy := −y″ + q(x)y = λy,  0 < x < π,    (1.1.1)

    U(y) := y′(0) − hy(0) = 0,  V(y) := y′(π) + Hy(π) = 0.    (1.1.2)

Here λ is the spectral parameter; q(x), h and H are real; q(x) ∈ L₂(0, π). We shall subsequently refer to q as the potential. The operator ℓ is called the Sturm-Liouville operator.

We are interested in non-trivial solutions of the boundary value problem (1.1.1)-(1.1.2).

Definition 1.1.1. The values of the parameter λ for which L has nonzero solutions are called eigenvalues, and the corresponding nontrivial solutions are called eigenfunctions. The set of eigenvalues is called the spectrum of L.
In this section we obtain the simplest spectral properties of L, and study the asymptotic behavior of the eigenvalues and eigenfunctions. We note that one can similarly study the other types of separated boundary conditions, namely:

(i) U(y) = 0, y(π) = 0;
(ii) y(0) = 0, V(y) = 0;
(iii) y(0) = y(π) = 0.

Let C(x, λ), S(x, λ), φ(x, λ), ψ(x, λ) be solutions of (1.1.1) under the initial conditions

    C(0, λ) = 1,  C′(0, λ) = 0,  S(0, λ) = 0,  S′(0, λ) = 1,
    φ(0, λ) = 1,  φ′(0, λ) = h,  ψ(π, λ) = 1,  ψ′(π, λ) = −H.

For each fixed x, the functions φ(x, λ), ψ(x, λ), C(x, λ), S(x, λ) are entire in λ. Clearly,

    U(φ) := φ′(0, λ) − hφ(0, λ) = 0,  V(ψ) := ψ′(π, λ) + Hψ(π, λ) = 0.    (1.1.3)

Denote

    Δ(λ) = ⟨ψ(x, λ), φ(x, λ)⟩,    (1.1.4)

where

    ⟨y(x), z(x)⟩ := y(x)z′(x) − y′(x)z(x)

is the Wronskian of y and z. By virtue of Liouville's formula for the Wronskian [cod1, p.83], ⟨ψ(x, λ), φ(x, λ)⟩ does not depend on x. The function Δ(λ) is called the characteristic function of L. Substituting x = 0 and x = π into (1.1.4), we get

    Δ(λ) = V(φ) = −U(ψ).    (1.1.5)

The function Δ(λ) is entire in λ, and it has an at most countable set of zeros {λₙ}.
Theorem 1.1.1. The zeros {λₙ} of the characteristic function coincide with the eigenvalues of the boundary value problem L. The functions φ(x, λₙ) and ψ(x, λₙ) are eigenfunctions, and there exists a sequence {βₙ} such that

    ψ(x, λₙ) = βₙφ(x, λₙ),  βₙ ≠ 0.    (1.1.6)

Proof. 1) Let λ₀ be a zero of Δ(λ). Then, by virtue of (1.1.3)-(1.1.5), ψ(x, λ₀) = β₀φ(x, λ₀) with β₀ ≠ 0 (the Wronskian of ψ and φ equals Δ(λ₀) = 0, so these nonzero solutions are linearly dependent), and the functions φ(x, λ₀), ψ(x, λ₀) satisfy the boundary conditions (1.1.2). Hence λ₀ is an eigenvalue, and φ(x, λ₀), ψ(x, λ₀) are eigenfunctions related to λ₀.

2) Let λ₀ be an eigenvalue of L, and let y₀ be a corresponding eigenfunction. Then U(y₀) = V(y₀) = 0. Clearly y₀(0) ≠ 0 (if y₀(0) = 0 then y₀′(0) = 0 and, by virtue of the uniqueness theorem for the equation (1.1.1) (see [cod1, Ch.1]), y₀(x) ≡ 0). Without loss of generality we put y₀(0) = 1. Then y₀′(0) = h, and consequently y₀(x) ≡ φ(x, λ₀). Therefore, (1.1.5) yields Δ(λ₀) = V(φ(x, λ₀)) = V(y₀(x)) = 0. We have also proved that for each eigenvalue there exists only one (up to a multiplicative constant) eigenfunction. □
Throughout the whole chapter we use the notation

    αₙ := ∫₀^π φ²(x, λₙ) dx.    (1.1.7)

The numbers {αₙ} are called the weight numbers, and the numbers {λₙ, αₙ} are called the spectral data of L.

Lemma 1.1.1. The following relation holds:

    βₙαₙ = −Δ̇(λₙ),    (1.1.8)

where the numbers βₙ are defined by (1.1.6), and Δ̇(λ) = (d/dλ)Δ(λ).
Proof. Since

    −ψ″(x, λ) + q(x)ψ(x, λ) = λψ(x, λ),  −φ″(x, λₙ) + q(x)φ(x, λₙ) = λₙφ(x, λₙ),

we get

    (d/dx)⟨ψ(x, λ), φ(x, λₙ)⟩ = ψ(x, λ)φ″(x, λₙ) − ψ″(x, λ)φ(x, λₙ) = (λ − λₙ)ψ(x, λ)φ(x, λₙ),

and hence, with the help of (1.1.5),

    (λ − λₙ) ∫₀^π ψ(x, λ)φ(x, λₙ) dx = ⟨ψ(x, λ), φ(x, λₙ)⟩|₀^π
        = φ′(π, λₙ) + Hφ(π, λₙ) + ψ′(0, λ) − hψ(0, λ) = −Δ(λ).

For λ → λₙ, this yields

    ∫₀^π ψ(x, λₙ)φ(x, λₙ) dx = −Δ̇(λₙ).

Using (1.1.6) and (1.1.7) we arrive at (1.1.8). □
Theorem 1.1.2. The eigenvalues {λₙ} and the eigenfunctions φ(x, λₙ), ψ(x, λₙ) are real. All zeros of Δ(λ) are simple, i.e. Δ̇(λₙ) ≠ 0. Eigenfunctions related to different eigenvalues are orthogonal in L₂(0, π).

Proof. Let λₙ and λₖ (λₙ ≠ λₖ) be eigenvalues with eigenfunctions yₙ(x) and yₖ(x) respectively. Then integration by parts yields

    ∫₀^π ℓyₙ(x) · yₖ(x) dx = ∫₀^π yₙ(x) ℓyₖ(x) dx,

and hence

    λₙ ∫₀^π yₙ(x)yₖ(x) dx = λₖ ∫₀^π yₙ(x)yₖ(x) dx,

or

    ∫₀^π yₙ(x)yₖ(x) dx = 0.

Further, let λ₀ = u + iv, v ≠ 0, be a non-real eigenvalue with an eigenfunction y₀(x) ≠ 0. Since q(x), h and H are real, we get that λ̄₀ = u − iv is also an eigenvalue, with the eigenfunction ȳ₀(x). Since λ₀ ≠ λ̄₀, we derive as before

    ‖y₀‖²_{L₂} = ∫₀^π y₀(x)ȳ₀(x) dx = 0,

which is impossible. Thus, all eigenvalues {λₙ} of L are real, and consequently the eigenfunctions φ(x, λₙ) and ψ(x, λₙ) are real too. Since αₙ ≠ 0, βₙ ≠ 0, we get by virtue of (1.1.8) that Δ̇(λₙ) ≠ 0. □
Example 1.1.1. Let q(x) = 0, h = 0 and H = 0, and let λ = ρ². Then one can easily check that

    C(x, λ) = φ(x, λ) = cos ρx,  S(x, λ) = (sin ρx)/ρ,  ψ(x, λ) = cos ρ(π − x),
    Δ(λ) = −ρ sin ρπ,  λₙ = n² (n ≥ 0),  φ(x, λₙ) = cos nx,  βₙ = (−1)ⁿ.
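The claims of this example are easy to confirm numerically. The short sketch below (the step counts and sample indices are arbitrary test choices, not part of the example itself) checks that Δ vanishes at ρ = n, that βₙ = ψ(0, λₙ)/φ(0, λₙ) = (−1)ⁿ, and that the weight numbers (1.1.7) equal α₀ = π and αₙ = π/2 for n ≥ 1:

```python
import math

# Numerical check of Example 1.1.1 (q = 0, h = H = 0):
# Delta(lambda) = -rho*sin(rho*pi), lambda_n = n^2, phi(x, lambda_n) = cos nx,
# beta_n = (-1)^n, and alpha_0 = pi, alpha_n = pi/2 for n >= 1.

def Delta(rho):
    # characteristic function for q = 0, h = H = 0, with lambda = rho^2
    return -rho * math.sin(math.pi * rho)

def alpha(n, steps=20000):
    # weight number alpha_n = int_0^pi cos^2(n x) dx, trapezoidal rule
    h = math.pi / steps
    s = sum((0.5 if i in (0, steps) else 1.0) * math.cos(n * i * h) ** 2
            for i in range(steps + 1))
    return s * h

eigen_ok = all(abs(Delta(n)) < 1e-12 for n in range(6))
# beta_n = psi(x, lambda_n)/phi(x, lambda_n); at x = 0 this equals cos(n*pi)
betas_ok = all(abs(math.cos(n * math.pi) - (-1) ** n) < 1e-12 for n in range(6))
alpha0_err = abs(alpha(0) - math.pi)
alpha3_err = abs(alpha(3) - math.pi / 2)
print(eigen_ok, betas_ok, alpha0_err, alpha3_err)
```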
Lemma 1.1.2. For |ρ| → ∞, the following asymptotic formulae hold:

    φ(x, λ) = cos ρx + O((1/|ρ|) exp(|τ|x)) = O(exp(|τ|x)),
    φ′(x, λ) = −ρ sin ρx + O(exp(|τ|x)) = O(|ρ| exp(|τ|x)),    (1.1.9)

    ψ(x, λ) = cos ρ(π − x) + O((1/|ρ|) exp(|τ|(π − x))) = O(exp(|τ|(π − x))),
    ψ′(x, λ) = ρ sin ρ(π − x) + O(exp(|τ|(π − x))) = O(|ρ| exp(|τ|(π − x))),    (1.1.10)

uniformly with respect to x ∈ [0, π]. Here and in the sequel, λ = ρ², τ = Im ρ, and o and O denote the Landau symbols.
Proof. Let us show that

    φ(x, λ) = cos ρx + h (sin ρx)/ρ + ∫₀ˣ (sin ρ(x − t)/ρ) q(t)φ(t, λ) dt.    (1.1.11)

Indeed, the Volterra integral equation

    y(x, λ) = cos ρx + h (sin ρx)/ρ + ∫₀ˣ (sin ρ(x − t)/ρ) q(t)y(t, λ) dt

has a unique solution (for the theory of Volterra integral equations see [Kre1]). On the other hand, if a certain function y(x, λ) satisfies this equation, then we get by differentiation

    y″(x, λ) + ρ²y(x, λ) = q(x)y(x, λ),  y(0, λ) = 1,  y′(0, λ) = h,

i.e. y(x, λ) ≡ φ(x, λ), and (1.1.11) is valid.

Differentiating (1.1.11) we calculate

    φ′(x, λ) = −ρ sin ρx + h cos ρx + ∫₀ˣ cos ρ(x − t) q(t)φ(t, λ) dt.    (1.1.12)

Denote μ(λ) = max_{0≤x≤π} (|φ(x, λ)| exp(−|τ|x)). Since |sin ρx| ≤ exp(|τ|x) and |cos ρx| ≤ exp(|τ|x), we have from (1.1.11) that for |ρ| ≥ 1, x ∈ [0, π],

    |φ(x, λ)| exp(−|τ|x) ≤ 1 + (1/|ρ|) (|h| + μ(λ) ∫₀ˣ |q(t)| dt) ≤ C₁ + (C₂/|ρ|) μ(λ),

and consequently

    μ(λ) ≤ C₁ + (C₂/|ρ|) μ(λ).

For sufficiently large |ρ|, this yields μ(λ) = O(1), i.e. φ(x, λ) = O(exp(|τ|x)). Substituting this estimate into the right-hand sides of (1.1.11) and (1.1.12), we arrive at (1.1.9). Similarly one can derive (1.1.10).
We note that (1.1.10) can also be obtained directly from (1.1.9). Indeed, since

    −ψ″(x, λ) + q(x)ψ(x, λ) = λψ(x, λ),  ψ(π, λ) = 1,  ψ′(π, λ) = −H,

the function ψ̃(x, λ) := ψ(π − x, λ) satisfies the following differential equation and initial conditions:

    −ψ̃″(x, λ) + q(π − x)ψ̃(x, λ) = λψ̃(x, λ),  ψ̃(0, λ) = 1,  ψ̃′(0, λ) = H.

Therefore, the asymptotic formulae (1.1.9) are also valid for the function ψ̃(x, λ). From this we arrive at (1.1.10). □
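On the real axis (τ = 0) the first formula in (1.1.9) says that |φ(x, λ) − cos ρx| = O(1/|ρ|) uniformly in x. The following sketch illustrates this decay; the sample potential q(x) = sin x and the value h = 1 are assumptions made only for the test, and the integrator is a plain classical Runge-Kutta scheme rather than anything from the text:

```python
import math

# Illustration of the asymptotics (1.1.9) for real rho: integrate
#   phi'' = (q(x) - rho^2) * phi,  phi(0) = 1, phi'(0) = h,
# with q(x) = sin x, h = 1 (test data), and measure
# max_x |phi(x, rho^2) - cos(rho x)|, which should decay like 1/rho.

def phi_max_deviation(rho, h=1.0, steps=4000):
    lam = rho * rho
    dx = math.pi / steps
    y, yp, x = 1.0, h, 0.0          # phi, phi', current x
    dev = 0.0
    def f(x, y, yp):
        return yp, (math.sin(x) - lam) * y
    for _ in range(steps):          # classical RK4 step
        k1 = f(x, y, yp)
        k2 = f(x + dx/2, y + dx/2*k1[0], yp + dx/2*k1[1])
        k3 = f(x + dx/2, y + dx/2*k2[0], yp + dx/2*k2[1])
        k4 = f(x + dx, y + dx*k3[0], yp + dx*k3[1])
        y += dx/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += dx/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += dx
        dev = max(dev, abs(y - math.cos(rho * x)))
    return dev

dev10 = phi_max_deviation(10.0)
dev40 = phi_max_deviation(40.0)
print(dev10, dev40)
```

Quadrupling ρ should shrink the deviation by roughly the same factor, in line with the O(1/|ρ|) estimate.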
The main result of this section is the following theorem on the existence and the asymptotic behavior of the eigenvalues and the eigenfunctions of L.

Theorem 1.1.3. The boundary value problem L has a countable set of eigenvalues {λₙ}_{n≥0}. For n ≥ 0,

    ρₙ = √λₙ = n + ω/(πn) + κₙ/n,  {κₙ} ∈ l₂,    (1.1.13)

    φ(x, λₙ) = cos nx + ξₙ(x)/n,  |ξₙ(x)| ≤ C,    (1.1.14)

where

    ω = h + H + (1/2) ∫₀^π q(t) dt.

Here and everywhere below, one and the same symbol {κₙ} denotes various sequences from l₂, and the symbol C denotes various positive constants which do not depend on x, λ and n.
Proof. 1) Substituting the asymptotics for φ(x, λ) from (1.1.9) into the right-hand sides of (1.1.11) and (1.1.12), we calculate

    φ(x, λ) = cos ρx + q₁(x) (sin ρx)/ρ + (1/(2ρ)) ∫₀ˣ q(t) sin ρ(x − 2t) dt + O(exp(|τ|x)/ρ²),

    φ′(x, λ) = −ρ sin ρx + q₁(x) cos ρx + (1/2) ∫₀ˣ q(t) cos ρ(x − 2t) dt + O(exp(|τ|x)/ρ),    (1.1.15)

where

    q₁(x) = h + (1/2) ∫₀ˣ q(t) dt.

According to (1.1.5), Δ(λ) = φ′(π, λ) + Hφ(π, λ). Hence, by virtue of (1.1.15),

    Δ(λ) = −ρ sin ρπ + ω cos ρπ + κ(ρ),    (1.1.16)

where

    κ(ρ) = (1/2) ∫₀^π q(t) cos ρ(π − 2t) dt + O(exp(|τ|π)/ρ).
2) Denote G_δ = {ρ : |ρ − k| ≥ δ, k = 0, ±1, ±2, …}, δ > 0. Let us show that

    |sin ρπ| ≥ C_δ exp(|τ|π),  ρ ∈ G_δ,    (1.1.17)

    |Δ(λ)| ≥ C_δ|ρ| exp(|τ|π),  ρ ∈ G_δ,  |ρ| ≥ ρ*,    (1.1.18)

for sufficiently large ρ* = ρ*(δ).

Let ρ = σ + iτ. It is sufficient to prove (1.1.17) for the domain

    D_δ = {ρ : σ ∈ [−1/2, 1/2], τ ≥ 0, |ρ| ≥ δ}.

Denote η(ρ) = |sin ρπ| exp(−τπ), and let ρ ∈ D_δ. For τ ≤ 1, η(ρ) ≥ C_δ. Since sin ρπ = (exp(iρπ) − exp(−iρπ))/(2i), we have for τ ≥ 1

    η(ρ) = |1 − exp(2iρπ)|/2 ≥ (1 − exp(−2π))/2 ≥ 1/4.

Thus, (1.1.17) is proved. Further, using (1.1.16) we get for ρ ∈ G_δ

    Δ(λ) = −ρ sin ρπ (1 + O(1/ρ)),

and consequently (1.1.18) is valid.
3) Denote Γₙ = {λ : |λ| = (n + 1/2)²}. By virtue of (1.1.16),

    Δ(λ) = f(λ) + g(λ),  f(λ) = −ρ sin ρπ,  |g(λ)| ≤ C exp(|τ|π).

According to (1.1.17), |f(λ)| > |g(λ)| on Γₙ for sufficiently large n (n ≥ n*). Then by Rouché's theorem [con1, p.125], the number of zeros of Δ(λ) inside Γₙ coincides with the number of zeros of f(λ) = −ρ sin ρπ, i.e. it equals n + 1. Thus, in the circle |λ| < (n + 1/2)² there exist exactly n + 1 eigenvalues of L: λ₀, …, λₙ. Applying now Rouché's theorem to the circle γₙ(δ) = {ρ : |ρ − n| ≤ δ}, we conclude that for sufficiently large n, in γₙ(δ) there is exactly one zero of Δ(ρ²), namely ρₙ = √λₙ. Since δ > 0 is arbitrary, it follows that

    ρₙ = n + εₙ,  εₙ = o(1),  n → ∞.    (1.1.19)

Substituting (1.1.19) into (1.1.16) we get

    0 = Δ(ρₙ²) = −(n + εₙ) sin(n + εₙ)π + ω cos(n + εₙ)π + κₙ,

and consequently

    −n sin εₙπ + ω cos εₙπ + κₙ = 0.    (1.1.20)

Then sin εₙπ = O(1/n), i.e. εₙ = O(1/n). Using (1.1.20) once more we obtain, more precisely, εₙ = ω/(πn) + κₙ/n, i.e. (1.1.13) is valid. Substituting (1.1.13) into (1.1.15) we arrive at (1.1.14), where

    ξₙ(x) = (h + (1/2) ∫₀ˣ q(t) dt − xω/π − xκₙ) sin nx + (1/2) ∫₀ˣ q(t) sin n(x − 2t) dt + O(1/n).    (1.1.21)

Consequently |ξₙ(x)| ≤ C, and Theorem 1.1.3 is proved. □
By virtue of (1.1.6) with x = π,

    βₙ = (φ(π, λₙ))⁻¹.

Then, using (1.1.7), (1.1.8), (1.1.14) and (1.1.21), one can calculate

    αₙ = π/2 + κₙ/n,  βₙ = (−1)ⁿ + κₙ/n,  Δ̇(λₙ) = (−1)^{n+1} π/2 + κₙ/n.    (1.1.22)

Since Δ(λ) has only simple zeros, we have sign Δ̇(λₙ) = (−1)^{n+1} for n ≥ 0.
Denote by W₂^N the Sobolev space of functions f(x), x ∈ [0, π], such that f^{(j)}(x), j = 0, …, N − 1, are absolutely continuous and f^{(N)}(x) ∈ L₂(0, π).
Remark 1.1.1. If q(x) ∈ W₂^N, N ≥ 1, then one can obtain more precise asymptotic formulae than before. In particular,

    ρₙ = n + Σ_{j=1}^{N+1} ωⱼ/nʲ + κₙ/n^{N+1},  ω_{2p} = 0 (p ≥ 1),  ω₁ = ω/π,

    αₙ = π/2 + Σ_{j=1}^{N+1} ωⱼ⁺/nʲ + κₙ/n^{N+1},  ω⁺_{2p+1} = 0 (p ≥ 0),  αₙ > 0,    (1.1.23)

with {κₙ} ∈ l₂.
Indeed, let q(x) ∈ W₂¹. Then integration by parts yields

    (1/2) ∫₀ˣ q(t) cos ρ(x − 2t) dt = (sin ρx)/(4ρ) (q(x) + q(0)) + (1/(4ρ)) ∫₀ˣ q′(t) sin ρ(x − 2t) dt,

    (1/(2ρ)) ∫₀ˣ q(t) sin ρ(x − 2t) dt = (cos ρx)/(4ρ²) (q(x) − q(0)) − (1/(4ρ²)) ∫₀ˣ q′(t) cos ρ(x − 2t) dt.    (1.1.24)
It follows from (1.1.15) and (1.1.24) that

    φ(x, λ) = cos ρx + (h + (1/2) ∫₀ˣ q(t) dt) (sin ρx)/ρ + O(exp(|τ|x)/ρ²).

Substituting this asymptotics into the right-hand sides of (1.1.11)-(1.1.12) and using (1.1.24) and (1.1.5), one can obtain more precise asymptotics for φ(x, λ), φ′(x, λ) and Δ(λ) than (1.1.15)-(1.1.16):
    φ(x, λ) = cos ρx + q₁(x) (sin ρx)/ρ + q₂₀(x) (cos ρx)/ρ² − (1/(4ρ²)) ∫₀ˣ q′(t) cos ρ(x − 2t) dt + O(exp(|τ|x)/ρ³),

    φ′(x, λ) = −ρ sin ρx + q₁(x) cos ρx + q₂₁(x) (sin ρx)/ρ + (1/(4ρ)) ∫₀ˣ q′(t) sin ρ(x − 2t) dt + O(exp(|τ|x)/ρ²),

    Δ(λ) = −ρ sin ρπ + ω cos ρπ + ω₀ (sin ρπ)/ρ + κ₀(ρ)/ρ,    (1.1.25)

where

    q₁(x) = h + (1/2) ∫₀ˣ q(t) dt,  ω₀ = q₂₁(π) + Hq₁(π),

    q_{2j}(x) = (1/4)(q(x) + (−1)^{j+1} q(0)) + ((−1)^{j+1}/2) ∫₀ˣ q(t)q₁(t) dt,  j = 0, 1,

    κ₀(ρ) = (1/4) ∫₀^π q′(t) sin ρ(π − 2t) dt + O(exp(|τ|π)/ρ).

From (1.1.25), by the same arguments as above, we deduce

    ρₙ = n + εₙ,  −n sin εₙπ + ω cos εₙπ + κₙ/n = 0.

Hence

    ρₙ = n + ω/(πn) + κₙ/n²,  {κₙ} ∈ l₂.

The other formulae in (1.1.23) can be derived similarly.
Theorem 1.1.4. The specification of the spectrum {λₙ}_{n≥0} uniquely determines the characteristic function Δ(λ) by the formula

    Δ(λ) = π(λ₀ − λ) ∏_{n=1}^∞ (λₙ − λ)/n².    (1.1.26)
Proof. It follows from (1.1.16) that Δ(λ) is entire in λ of order 1/2, and consequently, by Hadamard's factorization theorem [con1, p.289], Δ(λ) is uniquely determined up to a multiplicative constant by its zeros:

    Δ(λ) = C ∏_{n=0}^∞ (1 − λ/λₙ)    (1.1.27)

(the case when Δ(0) = 0 requires minor modifications). Consider the function

    Δ̃(λ) := −ρ sin ρπ = −λπ ∏_{n=1}^∞ (1 − λ/n²).

Then

    Δ(λ)/Δ̃(λ) = (C(λ₀ − λ)/(−λπλ₀)) ∏_{n=1}^∞ (n²/λₙ) · ∏_{n=1}^∞ (1 + (λₙ − n²)/(n² − λ)).

Taking (1.1.13) and (1.1.16) into account we calculate

    lim_{λ→−∞} Δ(λ)/Δ̃(λ) = 1,  lim_{λ→−∞} ∏_{n=1}^∞ (1 + (λₙ − n²)/(n² − λ)) = 1,

and hence

    C = πλ₀ ∏_{n=1}^∞ λₙ/n².

Substituting this into (1.1.27) we arrive at (1.1.26). □
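In the situation of Example 1.1.1 (q = 0, h = H = 0, λₙ = n²), formula (1.1.26) can be tested directly against Δ(λ) = −ρ sin ρπ by truncating the infinite product; truncating after N factors introduces a relative error of roughly λ/N. The sample point ρ = 1.5 and the truncation length are arbitrary test choices:

```python
import math

# Check of the product formula (1.1.26) for q = 0, h = H = 0,
# where lambda_0 = 0, lambda_n = n^2 and Delta(lambda) = -rho*sin(rho*pi):
#   Delta(lambda) = pi*(lambda_0 - lambda) * prod_{n>=1} (lambda_n - lambda)/n^2.

def Delta_product(lam, N=200000):
    # truncated infinite product
    p = math.pi * (0.0 - lam)
    for n in range(1, N + 1):
        p *= (n * n - lam) / (n * n)
    return p

rho = 1.5
lam = rho * rho
exact = -rho * math.sin(math.pi * rho)     # -1.5*sin(1.5*pi) = 1.5
approx = Delta_product(lam)
err = abs(approx - exact)
print(exact, approx, err)
```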
Remark 1.1.2. Analogous results are valid for Sturm-Liouville operators with the other types of separated boundary conditions. Let us briefly formulate these results, which will be used below. The proofs are recommended as an exercise.
(i) Consider the boundary value problem L₁ = L₁(q(x), h) for equation (1.1.1) with the boundary conditions U(y) = 0, y(π) = 0. The eigenvalues {μₙ}_{n≥0} of L₁ are simple and coincide with the zeros of the characteristic function d(λ) := φ(π, λ), and

    d(λ) = ∏_{n=0}^∞ (μₙ − λ)/(n + 1/2)².    (1.1.28)

For the spectral data {μₙ, αₙ₁}_{n≥0}, αₙ₁ := ∫₀^π φ²(x, μₙ) dx, of L₁ we have the asymptotic formulae

    √μₙ = n + 1/2 + ω₁/(πn) + κₙ/n,  {κₙ} ∈ l₂,    (1.1.29)

    αₙ₁ = π/2 + κₙ₁/n,  {κₙ₁} ∈ l₂,    (1.1.30)

where

    ω₁ = h + (1/2) ∫₀^π q(t) dt.
(ii) Consider the boundary value problem L⁰ = L⁰(q(x), H) for equation (1.1.1) with the boundary conditions y(0) = V(y) = 0. The eigenvalues {λₙ⁰}_{n≥0} of L⁰ are simple and coincide with the zeros of the characteristic function Δ⁰(λ) := ψ(0, λ) = S′(π, λ) + HS(π, λ). The function S(x, λ) satisfies the Volterra integral equation

    S(x, λ) = (sin ρx)/ρ + ∫₀ˣ (sin ρ(x − t)/ρ) q(t)S(t, λ) dt,    (1.1.31)

and for |ρ| → ∞,

    S(x, λ) = (sin ρx)/ρ + O((1/|ρ|²) exp(|τ|x)) = O((1/|ρ|) exp(|τ|x)),
    S′(x, λ) = cos ρx + O((1/|ρ|) exp(|τ|x)) = O(exp(|τ|x)),
    Δ⁰(λ) = cos ρπ + O((1/|ρ|) exp(|τ|π)),  τ = Im ρ.    (1.1.32)

Moreover,

    Δ⁰(λ) = ∏_{n=0}^∞ (λₙ⁰ − λ)/(n + 1/2)²,

    √λₙ⁰ = n + 1/2 + ω⁰/(πn) + κₙ/n,  {κₙ} ∈ l₂,

where

    ω⁰ = H + (1/2) ∫₀^π q(t) dt.
(iii) Consider the boundary value problem L₁⁰ = L₁⁰(q(x)) for equation (1.1.1) with the boundary conditions y(0) = y(π) = 0. The eigenvalues {μₙ⁰}_{n≥1} of L₁⁰ are simple and coincide with the zeros of the characteristic function d⁰(λ) := S(π, λ), and

    d⁰(λ) = π ∏_{n=1}^∞ (μₙ⁰ − λ)/n²,

    √μₙ⁰ = n + ω₁⁰/(πn) + κₙ/n,  {κₙ} ∈ l₂,

where

    ω₁⁰ = (1/2) ∫₀^π q(t) dt.
Lemma 1.1.3. The following relations hold:

    λₙ < μₙ < λₙ₊₁,  n ≥ 0,    (1.1.33)

i.e. the eigenvalues of the two boundary value problems L and L₁ alternate.
Proof. As in the proof of Lemma 1.1.1 we get

    (d/dx)⟨φ(x, λ), φ(x, μ)⟩ = (λ − μ)φ(x, λ)φ(x, μ),    (1.1.34)

and consequently,

    (λ − μ) ∫₀^π φ(x, λ)φ(x, μ) dx = ⟨φ(x, λ), φ(x, μ)⟩|₀^π
        = φ(π, λ)φ′(π, μ) − φ′(π, λ)φ(π, μ) = d(λ)Δ(μ) − d(μ)Δ(λ).

For μ → λ we obtain

    ∫₀^π φ²(x, λ) dx = ḋ(λ)Δ(λ) − d(λ)Δ̇(λ),  with Δ̇(λ) = (d/dλ)Δ(λ),  ḋ(λ) = (d/dλ)d(λ).

In particular, this yields

    αₙ = −d(λₙ)Δ̇(λₙ),    (1.1.35)

    (1/d²(λ)) ∫₀^π φ²(x, λ) dx = −(d/dλ)(Δ(λ)/d(λ)),  λ real,  d(λ) ≠ 0.

Thus, the function Δ(λ)/d(λ) is monotonically decreasing on ℝ \ {μₙ | n ≥ 0}, with

    lim_{λ→μₙ∓0} Δ(λ)/d(λ) = ∓∞.

Hence, between two consecutive poles μₙ, μₙ₊₁ the function Δ(λ)/d(λ) decreases from +∞ to −∞ and vanishes exactly once. Consequently, in view of (1.1.13) and (1.1.29), we arrive at (1.1.33). □
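The interlacing (1.1.33) can be observed numerically. In the following sketch the data q(x) = 0.5 sin x, h = 0.2, H = 0.1 are an arbitrary test choice; the first few zeros of Δ(ρ²) = φ′(π) + Hφ(π) and of d(ρ²) = φ(π) are located by shooting and bisection, using rough brackets suggested by the asymptotics (1.1.13) and (1.1.29):

```python
import math

# Illustration of Lemma 1.1.3 for the test problem
# q(x) = 0.5*sin x, h = 0.2, H = 0.1: the zeros lambda_n of Delta and
# mu_n of d(lambda) = phi(pi, lambda) interlace.

h_C, H_C = 0.2, 0.1

def shoot(rho, steps=2000):
    # RK4 for phi'' = (q - rho^2)*phi, phi(0) = 1, phi'(0) = h;
    # returns (phi(pi), phi'(pi))
    lam, dx = rho * rho, math.pi / steps
    y, yp, x = 1.0, h_C, 0.0
    def f(x, y, yp):
        return yp, (0.5 * math.sin(x) - lam) * y
    for _ in range(steps):
        k1 = f(x, y, yp)
        k2 = f(x + dx/2, y + dx/2*k1[0], yp + dx/2*k1[1])
        k3 = f(x + dx/2, y + dx/2*k2[0], yp + dx/2*k2[1])
        k4 = f(x + dx, y + dx*k3[0], yp + dx*k3[1])
        y += dx/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += dx/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += dx
    return y, yp

def Delta(rho):                      # characteristic function of L
    y, yp = shoot(rho)
    return yp + H_C * y

def d(rho):                          # characteristic function of L_1
    return shoot(rho)[0]

def bisect(fun, a, b):
    fa = fun(a)
    for _ in range(50):
        m = 0.5 * (a + b)
        if fa * fun(m) <= 0:
            b = m
        else:
            a, fa = m, fun(m)
    return 0.5 * (a + b)

lam_brackets = [(0.8, 1.45), (1.55, 2.45), (2.55, 3.45), (3.55, 4.45)]
lams = [bisect(Delta, a, b) ** 2 for a, b in lam_brackets]
mus = [bisect(d, n + 0.1, n + 0.9) ** 2 for n in (1, 2, 3)]
interlaced = all(lams[i] < mus[i] < lams[i + 1] for i in range(3))
print(lams, mus, interlaced)
```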
1.2. PROPERTIES OF EIGENFUNCTIONS

1.2.1. Completeness and expansion theorems. In this subsection we prove that the system of eigenfunctions of the Sturm-Liouville boundary value problem L is complete and forms an orthogonal basis in L₂(0, π). This theorem was first proved by Steklov at the end of the nineteenth century. We also provide sufficient conditions under which the Fourier series with respect to the eigenfunctions converges uniformly on [0, π]. The completeness and expansion theorems are important for solving various problems in mathematical physics by the Fourier method, and also for the spectral theory itself. In order to prove these theorems we apply the contour integral method of integrating the resolvent along expanding contours in the complex plane of the spectral parameter (since this method is based on Cauchy's theorem, it is sometimes called Cauchy's method).
Theorem 1.2.1. (i) The system of eigenfunctions {φ(x, λₙ)}_{n≥0} of the boundary value problem L is complete in L₂(0, π).

(ii) Let f(x), x ∈ [0, π], be an absolutely continuous function. Then

    f(x) = Σ_{n=0}^∞ aₙφ(x, λₙ),  aₙ = (1/αₙ) ∫₀^π f(t)φ(t, λₙ) dt,    (1.2.1)

and the series converges uniformly on [0, π].

(iii) For f(x) ∈ L₂(0, π), the series (1.2.1) converges in L₂(0, π), and

    ∫₀^π |f(x)|² dx = Σ_{n=0}^∞ αₙ|aₙ|²  (Parseval's equality).    (1.2.2)

There are several methods to prove Theorem 1.2.1. We shall use the contour integral method, which plays an important role in studying direct and inverse problems for various differential, integro-differential and integral operators.
Proof. 1) Denote

    G(x, t, λ) = −(1/Δ(λ)) · { φ(t, λ)ψ(x, λ), t ≤ x;  φ(x, λ)ψ(t, λ), t ≥ x },

and consider the function

    Y(x, λ) = ∫₀^π G(x, t, λ)f(t) dt
        = −(1/Δ(λ)) (ψ(x, λ) ∫₀ˣ φ(t, λ)f(t) dt + φ(x, λ) ∫ₓ^π ψ(t, λ)f(t) dt).

The function G(x, t, λ) is called Green's function for L. G(x, t, λ) is the kernel of the inverse operator for the Sturm-Liouville operator, i.e. Y(x, λ) is the solution of the boundary value problem

    ℓY − λY + f(x) = 0,  U(Y) = V(Y) = 0;    (1.2.3)

this is easily verified by differentiation. Taking (1.1.6) into account and using Theorem 1.1.2 we calculate

    Res_{λ=λₙ} Y(x, λ) = −(1/Δ̇(λₙ)) (ψ(x, λₙ) ∫₀ˣ φ(t, λₙ)f(t) dt + φ(x, λₙ) ∫ₓ^π ψ(t, λₙ)f(t) dt)
        = −(βₙ/Δ̇(λₙ)) φ(x, λₙ) ∫₀^π f(t)φ(t, λₙ) dt,

and by virtue of (1.1.8),

    Res_{λ=λₙ} Y(x, λ) = (1/αₙ) φ(x, λₙ) ∫₀^π f(t)φ(t, λₙ) dt.    (1.2.4)
2) Let f(x) ∈ L₂(0, π) be such that

    ∫₀^π f(t)φ(t, λₙ) dt = 0,  n ≥ 0.

Then, in view of (1.2.4), Res_{λ=λₙ} Y(x, λ) = 0, and consequently (after extending Y(x, λ) continuously to the whole λ-plane) for each fixed x ∈ [0, π] the function Y(x, λ) is entire in λ. Furthermore, it follows from (1.1.9), (1.1.10) and (1.1.18) that for a fixed δ > 0 and sufficiently large ρ* > 0,

    |Y(x, λ)| ≤ C_δ/|ρ|,  ρ ∈ G_δ,  |ρ| ≥ ρ*.

Using the maximum principle [con1, p.128] and Liouville's theorem [con1, p.77] we conclude that Y(x, λ) ≡ 0. From this and (1.2.3) it follows that f(x) = 0 a.e. on (0, π). Thus (i) is proved.
3) Let now f ∈ AC[0, π] be an arbitrary absolutely continuous function. Since φ(x, λ) and ψ(x, λ) are solutions of (1.1.1), we transform Y(x, λ) as follows:

    Y(x, λ) = −(1/(λΔ(λ))) (ψ(x, λ) ∫₀ˣ (−φ″(t, λ) + q(t)φ(t, λ))f(t) dt
        + φ(x, λ) ∫ₓ^π (−ψ″(t, λ) + q(t)ψ(t, λ))f(t) dt).

Integration of the terms containing second derivatives by parts yields, in view of (1.1.4),

    Y(x, λ) = f(x)/λ − (1/λ)(Z₁(x, λ) + Z₂(x, λ)),    (1.2.5)

where

    Z₁(x, λ) = (1/Δ(λ)) (ψ(x, λ) ∫₀ˣ g(t)φ′(t, λ) dt + φ(x, λ) ∫ₓ^π g(t)ψ′(t, λ) dt),  g(t) := f′(t),

    Z₂(x, λ) = (1/Δ(λ)) (hf(0)ψ(x, λ) + Hf(π)φ(x, λ)
        + ψ(x, λ) ∫₀ˣ q(t)φ(t, λ)f(t) dt + φ(x, λ) ∫ₓ^π q(t)ψ(t, λ)f(t) dt).
Using (1.1.9), (1.1.10) and (1.1.18), we get for a fixed δ > 0 and sufficiently large ρ* > 0,

    max_{0≤x≤π} |Z₂(x, λ)| ≤ C/|ρ|,  ρ ∈ G_δ,  |ρ| ≥ ρ*.    (1.2.6)

Let us show that

    lim_{|ρ|→∞, ρ∈G_δ} max_{0≤x≤π} |Z₁(x, λ)| = 0.    (1.2.7)

First we assume that g(x) is absolutely continuous on [0, π]. In this case another integration by parts yields

    Z₁(x, λ) = (1/Δ(λ)) (ψ(x, λ)g(t)φ(t, λ)|₀ˣ + φ(x, λ)g(t)ψ(t, λ)|ₓ^π
        − ψ(x, λ) ∫₀ˣ g′(t)φ(t, λ) dt − φ(x, λ) ∫ₓ^π g′(t)ψ(t, λ) dt).

By virtue of (1.1.9), (1.1.10) and (1.1.18), we infer

    max_{0≤x≤π} |Z₁(x, λ)| ≤ C/|ρ|,  ρ ∈ G_δ,  |ρ| ≥ ρ*.
Let now g(t) ∈ L(0, π). Fix ε > 0 and choose an absolutely continuous function g_ε(t) such that

    ∫₀^π |g(t) − g_ε(t)| dt < ε/(2C⁺),

where

    C⁺ = max_{0≤x≤π} sup_{ρ∈G_δ} (1/|Δ(λ)|) (|ψ(x, λ)| ∫₀ˣ |φ′(t, λ)| dt + |φ(x, λ)| ∫ₓ^π |ψ′(t, λ)| dt).

Then, for ρ ∈ G_δ, |ρ| ≥ ρ*, we have

    max_{0≤x≤π} |Z₁(x, λ)| ≤ max_{0≤x≤π} |Z₁(x, λ; g_ε)| + max_{0≤x≤π} |Z₁(x, λ; g − g_ε)| ≤ ε/2 + C(ε)/|ρ|.

Hence, there exists ρ₀ > 0 such that max_{0≤x≤π} |Z₁(x, λ)| ≤ ε for |ρ| > ρ₀. By virtue of the arbitrariness of ε > 0, we arrive at (1.2.7).
Consider the contour integral

    I_N(x) = (1/(2πi)) ∮_{Γ_N} Y(x, λ) dλ,

where Γ_N = {λ : |λ| = (N + 1/2)²} (with counterclockwise circuit). It follows from (1.2.5)-(1.2.7) that

    I_N(x) = f(x) + ε_N(x),  lim_{N→∞} max_{0≤x≤π} |ε_N(x)| = 0.    (1.2.8)

On the other hand, we can calculate I_N(x) with the help of the residue theorem [con1, p.112]. By virtue of (1.2.4),

    I_N(x) = Σ_{n=0}^N aₙφ(x, λₙ),  aₙ = (1/αₙ) ∫₀^π f(t)φ(t, λₙ) dt.

Comparing this with (1.2.8) we arrive at (1.2.1), where the series converges uniformly on [0, π], i.e. (ii) is proved.

4) Since the eigenfunctions {φ(x, λₙ)}_{n≥0} are complete and orthogonal in L₂(0, π), they form an orthogonal basis in L₂(0, π), and Parseval's equality (1.2.2) is valid. □
1.2.2. Oscillations of the eigenfunctions. Consider the boundary value problem (1.1.1)-(1.1.2) with q(x) ≡ 0, h = H = 0. In this case, according to Example 1.1.1, the eigenfunctions are cos nx, and the n-th eigenfunction has exactly n zeros inside the interval (0, π). It turns out that this property of the eigenfunctions also holds in the general case. In other words, the following oscillation theorem, which is due to Sturm, is valid.
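This zero-counting property can also be seen numerically for a non-trivial potential. In the sketch below the data q(x) = 0.5 sin x, h = 0.2, H = 0.1 are an arbitrary test choice; each eigenvalue is located as a zero of Δ(ρ²) = φ′(π) + Hφ(π) by shooting and bisection, and the sign changes of the sampled eigenfunction are counted:

```python
import math

# Counting the interior zeros of the n-th eigenfunction for the test
# problem q(x) = 0.5*sin x, h = 0.2, H = 0.1: the count should be n.

h_C, H_C = 0.2, 0.1

def phi_path(rho, steps=2000):
    # RK4 for phi'' = (q - rho^2)*phi, phi(0) = 1, phi'(0) = h;
    # returns the samples of phi on [0, pi] and phi'(pi)
    lam, dx = rho * rho, math.pi / steps
    y, yp, x, path = 1.0, h_C, 0.0, [1.0]
    def f(x, y, yp):
        return yp, (0.5 * math.sin(x) - lam) * y
    for _ in range(steps):
        k1 = f(x, y, yp)
        k2 = f(x + dx/2, y + dx/2*k1[0], yp + dx/2*k1[1])
        k3 = f(x + dx/2, y + dx/2*k2[0], yp + dx/2*k2[1])
        k4 = f(x + dx, y + dx*k3[0], yp + dx*k3[1])
        y += dx/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += dx/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += dx
        path.append(y)
    return path, yp

def Delta(rho):
    path, yp = phi_path(rho)
    return yp + H_C * path[-1]

def find_rho(a, b):
    fa = Delta(a)
    for _ in range(50):
        m = 0.5 * (a + b)
        if fa * Delta(m) <= 0:
            b = m
        else:
            a, fa = m, Delta(m)
    return 0.5 * (a + b)

brackets = [(0.01, 0.8), (0.8, 1.45), (1.55, 2.45), (2.55, 3.45), (3.55, 4.45)]
zero_counts = []
for a, b in brackets:
    path, _ = phi_path(find_rho(a, b))
    zero_counts.append(sum(1 for u, v in zip(path, path[1:]) if u * v < 0))
print(zero_counts)
```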
Theorem 1.2.2. The eigenfunction φ(x, λₙ) of the boundary value problem L has exactly n zeros in the interval 0 < x < π.
First we prove several auxiliary assertions.
Lemma 1.2.1. Let uⱼ(x), j = 1, 2, x ∈ [a, b], be solutions of the equations

    uⱼ″ + gⱼ(x)uⱼ = 0,  j = 1, 2,  g₁(x) < g₂(x),  x ∈ [a, b].    (1.2.9)

Suppose that for certain x₁, x₂ ∈ [a, b], u₁(x₁) = u₁(x₂) = 0 and u₁(x) ≠ 0 for x ∈ (x₁, x₂). Then there exists x* ∈ (x₁, x₂) such that u₂(x*) = 0. In other words, the function u₂(x) has at least one zero lying between any two zeros of u₁(x).
Proof. Suppose, on the contrary, that u₂(x) ≠ 0 for x ∈ (x₁, x₂). Without loss of generality we assume that uⱼ(x) > 0 for x ∈ (x₁, x₂), j = 1, 2. By virtue of (1.2.9),

    (d/dx)(u₁′u₂ − u₂′u₁) = (g₂ − g₁)u₁u₂,

and consequently

    ∫_{x₁}^{x₂} (g₂ − g₁)u₁u₂ dx = (u₁′u₂ − u₂′u₁)|_{x₁}^{x₂} = u₁′(x₂)u₂(x₂) − u₁′(x₁)u₂(x₁).    (1.2.10)

The integral in (1.2.10) is strictly positive. On the other hand, since u₁′(x₁) > 0, u₁′(x₂) < 0 and u₂(x) ≥ 0 for x ∈ [x₁, x₂], the right-hand side of (1.2.10) is non-positive. This contradiction proves the lemma. □
Corollary 1.2.1. Let g₁(x) < −ε² < 0. Then each non-trivial solution of the equation u₁″ + g₁(x) u₁ = 0 cannot have more than one zero.
Indeed, this follows from Lemma 1.2.1 with g₂(x) = −ε², since the equation u₂″ − ε² u₂ = 0 has the solution u₂(x) = exp(εx), which has no zeros.
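Lemma 1.2.1 can also be observed numerically. The sketch below (an illustrative check, not from the text; the coefficients g₁, g₂ are arbitrary choices with g₁ < g₂) integrates both equations by RK4 and verifies that between any two consecutive zeros of u₁ there lies a zero of u₂:

```python
import math

def zeros_of_solution(g, u0, du0, a, b, n):
    """Integrate u'' + g(x) u = 0 by RK4; return abscissae where u changes sign."""
    h = (b - a) / n
    x, u, v = a, u0, du0
    found = []
    for _ in range(n):
        k1u, k1v = v, -g(x) * u
        k2u, k2v = v + h/2*k1v, -g(x + h/2) * (u + h/2*k1u)
        k3u, k3v = v + h/2*k2v, -g(x + h/2) * (u + h/2*k2u)
        k4u, k4v = v + h*k3v, -g(x + h) * (u + h*k3u)
        un = u + h/6*(k1u + 2*k2u + 2*k3u + k4u)
        vn = v + h/6*(k1v + 2*k2v + 2*k3v + k4v)
        if u * un < 0:
            found.append(x + h/2)          # midpoint of the bracketing step
        x, u, v = x + h, un, vn
    return found

g1 = lambda x: 1.0                          # u1 = sin x, zeros at pi, 2 pi, ...
g2 = lambda x: 2.0 + math.sin(x)            # g2(x) > g1(x) everywhere
z1 = zeros_of_solution(g1, 0.0, 1.0, 0.0, 20.0, 20000)
z2 = zeros_of_solution(g2, 1.0, 0.0, 0.0, 20.0, 20000)
# every interval between consecutive zeros of u1 contains a zero of u2
interlaced = all(any(a < z < b for z in z2) for a, b in zip(z1, z1[1:]))
```

Here `interlaced` is exactly the conclusion of the lemma for this pair of coefficients.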
We consider the function φ(x, λ) for real λ. Zeros of φ(x, λ) with respect to x are functions of λ. We show that these zeros are continuous functions of λ.
Lemma 1.2.2. Let φ(x₀, λ₀) = 0. For each ε > 0 there exists δ > 0 such that if |λ − λ₀| < δ, then the function φ(x, λ) has exactly one zero in the interval |x − x₀| < ε.
Proof. A zero x₀ of the solution φ(x, λ₀) of differential equation (1.1.1) is a simple zero, since if we had φ′(x₀, λ₀) = 0, it would follow from the uniqueness theorem for equation (1.1.1) (see [cod1, Ch.1]) that φ(x, λ₀) ≡ 0. Hence, φ′(x₀, λ₀) ≠ 0, and for definiteness let us assume that φ′(x₀, λ₀) > 0. Choose ε₀ > 0 such that φ′(x, λ₀) > 0 for |x − x₀| ≤ ε₀. Then φ(x, λ₀) < 0 for x ∈ [x₀ − ε₀, x₀), and φ(x, λ₀) > 0 for x ∈ (x₀, x₀ + ε₀]. Take ε ≤ ε₀. By continuity of φ(x, λ) and φ′(x, λ), there exists δ > 0 such that for |λ − λ₀| < δ, |x − x₀| ≤ ε we have φ′(x, λ) > 0, φ(x₀ − ε, λ) < 0, φ(x₀ + ε, λ) > 0. Consequently, the function φ(x, λ) has exactly one zero in the interval |x − x₀| < ε. □
Lemma 1.2.3. Suppose that for a fixed real λ, the function φ(x, λ) has m zeros in the interval 0 < x ≤ a. Let λ̃ > λ. Then the function φ(x, λ̃) has not less than m zeros in the same interval, and the k-th zero of φ(x, λ̃) is less than the k-th zero of φ(x, λ).
Proof. Let x₁ > 0 be the smallest positive zero of φ(x, λ). By virtue of Lemma 1.2.1, it is sufficient to prove that φ(x, λ̃) has at least one zero in the interval 0 < x < x₁.
Suppose, on the contrary, that φ(x, λ̃) ≠ 0 for x ∈ [0, x₁). Since φ(0, λ) = 1, we have φ(x, λ) > 0, φ(x, λ̃) > 0 for x ∈ [0, x₁); φ(x₁, λ) = 0, φ′(x₁, λ) < 0. It follows from (1.1.34) that
    (λ − λ̃) ∫₀^{x₁} φ(x, λ) φ(x, λ̃) dx = ⟨φ(x, λ), φ(x, λ̃)⟩|₀^{x₁} = −φ(x₁, λ̃) φ′(x₁, λ) ≥ 0.
But the integral in the left-hand side is strictly positive, while λ − λ̃ < 0. This contradiction proves the lemma. □
Proof of Theorem 1.2.2. Let us consider the function φ(x, λ) for real λ. By virtue of (1.1.9), the function φ(x, λ) has no zeros for sufficiently large negative λ: φ(x, λ) > 0 for λ ≤ −λ* < 0, x ∈ [0, π]. On the other hand, φ(π, μ_n) = 0, where {μ_n}_{n≥0} are the eigenvalues of the boundary value problem L₁.
Using Lemmas 1.2.2-1.2.3 we get that if λ moves from −∞ to +∞, then the zeros of φ(x, λ) on the interval [0, π] continuously move to the left. New zeros can appear only through the point x = π. This yields:
(i) The function φ(x, μ_n) has exactly n zeros in the interval x ∈ [0, π).
(ii) If λ ∈ (μ_{n−1}, μ_n), n ≥ 0, μ_{−1} := −∞, then the function φ(x, λ) has exactly n zeros in the interval x ∈ [0, π].
According to (1.1.33),
    λ₀ < μ₀ < λ₁ < μ₁ < λ₂ < μ₂ < …
Consequently, the function φ(x, λ_n) has exactly n zeros in the interval [0, π]. □
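The oscillation count of Theorem 1.2.2 can be checked on a discretization. The sketch below (illustrative, not from the text; the sample potential is an arbitrary choice) builds the standard second-order finite-difference approximation of −y″ + q(x)y = λy with Neumann conditions (h = H = 0), symmetrizes it into a Jacobi matrix, and counts sign changes of the first few eigenvectors:

```python
import numpy as np

N = 400
h = np.pi / N
x = np.linspace(0.0, np.pi, N + 1)
q = 2.0 * np.cos(2.0 * x)                       # illustrative sample potential

A = np.zeros((N + 1, N + 1))
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2 + q[i]
A[0, 0], A[0, 1] = 2.0 / h**2 + q[0], -2.0 / h**2      # ghost-point Neumann, x = 0
A[N, N], A[N, N - 1] = 2.0 / h**2 + q[N], -2.0 / h**2  # ghost-point Neumann, x = pi

d = np.ones(N + 1)
d[0] = d[-1] = 1.0 / np.sqrt(2.0)
S = A * np.outer(d, 1.0 / d)                    # similar symmetric Jacobi matrix
evals, evecs = np.linalg.eigh(S)                # eigenvalues in ascending order
# the positive diagonal scaling preserves sign patterns of eigenvectors
sign_changes = [int(np.sum(np.diff(np.sign(evecs[:, n])) != 0))
                for n in range(5)]
```

For Jacobi matrices (negative off-diagonal entries) the n-th eigenvector is known to change sign exactly n times, which is the discrete analogue of Sturm's theorem.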
1.3. TRANSFORMATION OPERATORS
An important role in the inverse problem theory for Sturm-Liouville operators is played by the so-called transformation operator. It connects solutions of two different Sturm-Liouville equations for all λ. In this section we construct the transformation operator and study its properties. The transformation operator first appeared in the theory of generalized translation operators of Delsarte and Levitan (see [lev1]). Transformation operators for arbitrary Sturm-Liouville equations were constructed by Povzner [pov1]. In the inverse problem theory the transformation operators were used by Gelfand, Levitan and Marchenko (see Section 1.5 and the monographs [mar1], [lev2]).
Theorem 1.3.1. For the function C(x, λ) the following representation holds:
    C(x, λ) = cos ρx + ∫₀^x K(x, t) cos ρt dt,   λ = ρ²,   (1.3.1)
where K(x, t) is a real continuous function, and
    K(x, x) = (1/2) ∫₀^x q(t) dt.   (1.3.2)
Proof. It follows from (1.1.11) for h = 0 that the function C(x, λ) is the solution of the following integral equation:
    C(x, λ) = cos ρx + ∫₀^x (sin ρ(x − t)/ρ) q(t) C(t, λ) dt.   (1.3.3)
Since
    sin ρ(x − t)/ρ = ∫_t^x cos ρ(s − t) ds,
(1.3.3) becomes
    C(x, λ) = cos ρx + ∫₀^x q(t) C(t, λ) ( ∫_t^x cos ρ(s − t) ds ) dt,
and hence
    C(x, λ) = cos ρx + ∫₀^x ( ∫₀^t q(τ) C(τ, λ) cos ρ(t − τ) dτ ) dt.
The method of successive approximations gives
    C(x, λ) = Σ_{n=0}^∞ C_n(x, λ),   (1.3.4)
    C₀(x, λ) = cos ρx,   C_{n+1}(x, λ) = ∫₀^x ( ∫₀^t q(τ) C_n(τ, λ) cos ρ(t − τ) dτ ) dt.   (1.3.5)
Let us show by induction that the following representation holds:
    C_n(x, λ) = ∫₀^x K_n(x, t) cos ρt dt,   (1.3.6)
where K_n(x, t) does not depend on ρ.
First we calculate C₁(x, λ), using cos α cos β = (1/2)(cos(α + β) + cos(α − β)):
    C₁(x, λ) = ∫₀^x ( ∫₀^t q(τ) cos ρτ cos ρ(t − τ) dτ ) dt
    = (1/2) ∫₀^x cos ρt ( ∫₀^t q(τ) dτ ) dt + (1/2) ∫₀^x ( ∫₀^t q(τ) cos ρ(t − 2τ) dτ ) dt.
The change of variables t − 2τ = s in the second integral gives
    C₁(x, λ) = (1/2) ∫₀^x cos ρt ( ∫₀^t q(τ) dτ ) dt + (1/4) ∫₀^x ( ∫_{−t}^t q((t − s)/2) cos ρs ds ) dt.
Interchanging the order of integration in the second integral we obtain
    C₁(x, λ) = (1/2) ∫₀^x cos ρt ( ∫₀^t q(τ) dτ ) dt + (1/4) ∫₀^x cos ρs ( ∫_s^x q((t − s)/2) dt ) ds
    + (1/4) ∫_{−x}^0 cos ρs ( ∫_{−s}^x q((t − s)/2) dt ) ds
    = (1/2) ∫₀^x cos ρt ( ∫₀^t q(τ) dτ ) dt + (1/4) ∫₀^x cos ρs ( ∫_s^x ( q((t − s)/2) + q((t + s)/2) ) dt ) ds.
Thus, (1.3.6) is valid for n = 1, where
    K₁(x, t) = (1/2) ∫₀^t q(τ) dτ + (1/4) ∫_t^x ( q((s − t)/2) + q((s + t)/2) ) ds
    = (1/2) ∫₀^{(x+t)/2} q(τ) dτ + (1/2) ∫₀^{(x−t)/2} q(τ) dτ,   t ≤ x.   (1.3.7)
Suppose now that (1.3.6) is valid for a certain n ≥ 1. Then substituting (1.3.6) into (1.3.5) we have
    C_{n+1}(x, λ) = ∫₀^x ∫₀^t q(τ) cos ρ(t − τ) ( ∫₀^τ K_n(τ, s) cos ρs ds ) dτ dt
    = (1/2) ∫₀^x ∫₀^t q(τ) ∫₀^τ K_n(τ, s) ( cos ρ(s + t − τ) + cos ρ(s − t + τ) ) ds dτ dt.
The changes of variables s + t − τ = ξ and s − t + τ = ξ, respectively, lead to
    C_{n+1}(x, λ) = (1/2) ∫₀^x ∫₀^t q(τ) ∫_{t−τ}^t K_n(τ, ξ + τ − t) cos ρξ dξ dτ dt
    + (1/2) ∫₀^x ∫₀^t q(τ) ∫_{τ−t}^{2τ−t} K_n(τ, ξ + t − τ) cos ρξ dξ dτ dt.
[fig. 1.3.1: the domains of integration in the (τ, ξ)-plane, bounded by the lines ξ = t − τ, ξ = τ − t and ξ = 2τ − t]
Interchanging the order of integration (see fig. 1.3.1) we obtain
    C_{n+1}(x, λ) = ∫₀^x K_{n+1}(x, t) cos ρt dt,
where
    K_{n+1}(x, t) = (1/2) ∫_t^x [ ∫_{α−t}^α q(τ) K_n(τ, t + τ − α) dτ
    + ∫_{(α+t)/2}^α q(τ) K_n(τ, t + α − τ) dτ + ∫_{(α−t)/2}^{α−t} q(τ) K_n(τ, α − t − τ) dτ ] dα.   (1.3.8)
Substituting (1.3.6) into (1.3.4) we arrive at (1.3.1), where
    K(x, t) = Σ_{n=1}^∞ K_n(x, t).   (1.3.9)
It follows from (1.3.7) and (1.3.8) that
    |K_n(x, t)| ≤ (Q(x))ⁿ xⁿ⁻¹/(n − 1)!,   Q(x) := ∫₀^x |q(τ)| dτ.
Indeed, (1.3.7) yields for t ≤ x:
    |K₁(x, t)| ≤ (1/2) ∫₀^{(x+t)/2} |q(τ)| dτ + (1/2) ∫₀^{(x−t)/2} |q(τ)| dτ ≤ ∫₀^x |q(τ)| dτ = Q(x).
Furthermore, if for a certain n ≥ 1 the estimate for |K_n(x, t)| is valid, then by virtue of (1.3.8),
    |K_{n+1}(x, t)| ≤ (1/2) ∫_t^x [ ∫_{(α+t)/2}^α |q(τ)| (Q(τ))ⁿ τⁿ⁻¹/(n − 1)! dτ + ∫_{(α−t)/2}^α |q(τ)| (Q(τ))ⁿ τⁿ⁻¹/(n − 1)! dτ ] dα
    ≤ ∫₀^x ∫₀^α |q(τ)| (Q(τ))ⁿ τⁿ⁻¹/(n − 1)! dτ dα ≤ ∫₀^x (Q(α))ⁿ⁺¹ αⁿ⁻¹/(n − 1)! dα ≤ (Q(x))ⁿ⁺¹ xⁿ/n!.
Thus, the series (1.3.9) converges absolutely and uniformly for 0 ≤ t ≤ x ≤ π, and the function K(x, t) is continuous. Moreover, it follows from (1.3.7)-(1.3.9) that the smoothness of K(x, t) is the same as the smoothness of the function ∫₀^x q(t) dt. Since according to (1.3.7) and (1.3.8)
    K₁(x, x) = (1/2) ∫₀^x q(t) dt,   K_{n+1}(x, x) = 0,   n ≥ 1,
we arrive at (1.3.2). □
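The two equal expressions for K₁(x, t) in (1.3.7) can be compared by numerical quadrature. The sketch below (an illustrative check, not from the text; the sample potential and the points x, t are arbitrary choices) evaluates both sides with a composite trapezoidal rule:

```python
import numpy as np

def trap(f, a, b, n=20000):
    """Composite trapezoidal rule for a vectorized integrand f on [a, b]."""
    s = np.linspace(a, b, n + 1)
    y = f(s)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

q = lambda s: s**2 * np.exp(-s)              # illustrative sample potential
x, t = 2.0, 0.7                              # any 0 <= t <= x

# first expression in (1.3.7)
lhs = 0.5 * trap(q, 0.0, t) + 0.25 * trap(
    lambda s: q((s - t) / 2) + q((s + t) / 2), t, x)
# second expression in (1.3.7)
rhs = 0.5 * trap(q, 0.0, (x + t) / 2) + 0.5 * trap(q, 0.0, (x - t) / 2)
```

Both values agree to quadrature accuracy, confirming the change of variables used in the proof.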
The operator T, defined by
    Tf(x) = f(x) + ∫₀^x K(x, t) f(t) dt,
transforms cos ρx, which is the solution of the equation −y″ = λy with zero potential, to the function C(x, λ), which is the solution of equation (1.1.1) satisfying the same initial conditions (i.e. C(x, λ) = T(cos ρx)). The operator T is called the transformation operator for C(x, λ). It is important that the kernel K(x, t) does not depend on λ.
Analogously one can obtain the transformation operators for the functions S(x, λ) and φ(x, λ):
Theorem 1.3.2. For the functions S(x, λ) and φ(x, λ) the following representations hold:
    S(x, λ) = sin ρx/ρ + ∫₀^x P(x, t) (sin ρt/ρ) dt,   (1.3.10)
    φ(x, λ) = cos ρx + ∫₀^x G(x, t) cos ρt dt,   (1.3.11)
where P(x, t) and G(x, t) are real continuous functions with the same smoothness as ∫₀^x q(t) dt, and
    G(x, x) = h + (1/2) ∫₀^x q(t) dt,   (1.3.12)
    P(x, x) = (1/2) ∫₀^x q(t) dt.   (1.3.13)
Proof. The function S(x, λ) satisfies (1.1.31), and hence
    S(x, λ) = sin ρx/ρ + ∫₀^x ∫₀^t q(τ) S(τ, λ) cos ρ(t − τ) dτ dt.
The method of successive approximations gives
    S(x, λ) = Σ_{n=0}^∞ S_n(x, λ),   (1.3.14)
    S₀(x, λ) = sin ρx/ρ,   S_{n+1}(x, λ) = ∫₀^x ∫₀^t q(τ) S_n(τ, λ) cos ρ(t − τ) dτ dt.
Acting in the same way as in the proof of Theorem 1.3.1, we obtain the following representation:
    S_n(x, λ) = ∫₀^x P_n(x, t) (sin ρt/ρ) dt,
where
    P₁(x, t) = (1/2) ∫₀^{(x+t)/2} q(τ) dτ − (1/2) ∫₀^{(x−t)/2} q(τ) dτ,
    P_{n+1}(x, t) = (1/2) ∫_t^x [ ∫_{α−t}^α q(τ) P_n(τ, t + τ − α) dτ
    + ∫_{(α+t)/2}^α q(τ) P_n(τ, t + α − τ) dτ − ∫_{(α−t)/2}^{α−t} q(τ) P_n(τ, α − t − τ) dτ ] dα,
and
    |P_n(x, t)| ≤ (Q(x))ⁿ xⁿ⁻¹/(n − 1)!,   Q(x) := ∫₀^x |q(τ)| dτ.
Hence, the series (1.3.14) converges absolutely and uniformly for 0 ≤ t ≤ x ≤ π, and we arrive at (1.3.10) and (1.3.13).
The relation (1.3.11) can be obtained directly from (1.3.1) and (1.3.10):
    φ(x, λ) = C(x, λ) + h S(x, λ) = cos ρx + ∫₀^x K(x, t) cos ρt dt + h ∫₀^x cos ρt dt
    + h ∫₀^x P(x, t) ( ∫₀^t cos ρτ dτ ) dt = cos ρx + ∫₀^x G(x, t) cos ρt dt,
where
    G(x, t) = K(x, t) + h + h ∫_t^x P(x, τ) dτ.
Taking here t = x we get (1.3.12). □
1.4. UNIQUENESS THEOREMS
Let us go on to inverse problems of spectral analysis for Sturm-Liouville operators. In this section we give various formulations of these inverse problems and prove the corresponding uniqueness theorems. We present several methods for proving these theorems. These methods have a wide area of applications and allow one to study various classes of inverse spectral problems.
1.4.1. Ambarzumian's theorem. The first result in the inverse problem theory is due to Ambarzumian [amb1]. Consider the boundary value problem L(q(x), 0, 0), i.e.
    −y″ + q(x) y = λy,   y′(0) = y′(π) = 0.   (1.4.1)
Clearly, if q(x) = 0 a.e. on (0, π), then the eigenvalues of (1.4.1) have the form λ_n = n², n ≥ 0. Ambarzumian proved the inverse assertion:
Theorem 1.4.1. If the eigenvalues of (1.4.1) are λ_n = n², n ≥ 0, then q(x) = 0 a.e. on (0, π).
Proof. It follows from (1.1.13) that ω = 0, i.e. ∫₀^π q(x) dx = 0. Let y₀(x) be an eigenfunction for the first eigenvalue λ₀ = 0. Then
    y₀″(x) − q(x) y₀(x) = 0,   y₀′(0) = y₀′(π) = 0.
According to Theorem 1.2.2, the function y₀(x) has no zeros in the interval x ∈ [0, π]. Taking into account the relation
    y₀″(x)/y₀(x) = ( y₀′(x)/y₀(x) )² + ( y₀′(x)/y₀(x) )′,
we get
    0 = ∫₀^π q(x) dx = ∫₀^π ( y₀″(x)/y₀(x) ) dx = ∫₀^π ( y₀′(x)/y₀(x) )² dx.
Thus, y₀′(x) ≡ 0, i.e. y₀(x) ≡ const, q(x) = 0 a.e. on (0, π). □
Remark 1.4.1. Actually we have proved a more general assertion than Theorem 1.4.1, namely:
If λ₀ = (1/π) ∫₀^π q(x) dx, then q(x) = λ₀ a.e. on (0, π).
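The forward statement behind Theorem 1.4.1 (q = 0 gives the Neumann eigenvalues n²) is easy to confirm on a discretization. The sketch below is illustrative, not from the text:

```python
import numpy as np

N = 600
h = np.pi / N
A = np.zeros((N + 1, N + 1))
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
A[0, 0], A[0, 1] = 2.0 / h**2, -2.0 / h**2      # ghost-point Neumann at x = 0
A[N, N], A[N, N - 1] = 2.0 / h**2, -2.0 / h**2  # ghost-point Neumann at x = pi

# symmetrize by a diagonal similarity so that eigvalsh applies
d = np.ones(N + 1)
d[0] = d[-1] = 1.0 / np.sqrt(2.0)
evals = np.linalg.eigvalsh(A * np.outer(d, 1.0 / d))
errs = [abs(evals[n] - n * n) for n in range(5)]    # discretization error O(h^2)
```

The lowest eigenvalues reproduce 0, 1, 4, 9, 16 up to the O(h²) error of the scheme.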
1.4.2. Uniqueness of the recovery of differential equations from the spectral data. Ambarzumian's result is an exception from the rule. In general, the specification of the spectrum does not uniquely determine the operator. In Subsections 1.4.2-1.4.4 we provide three uniqueness theorems where we point out spectral characteristics which uniquely determine the operator.
Consider the following inverse problem:
Inverse Problem 1.4.1. Given the spectral data {λ_n, α_n}_{n≥0}, construct the potential q(x) and the coefficients h and H of the boundary conditions.
The goal of this subsection is to prove the uniqueness theorem for the solution of Inverse Problem 1.4.1.
We agree that together with L we consider here and in the sequel a boundary value problem L̃ = L(q̃(x), h̃, H̃) of the same form but with different coefficients. If a certain symbol γ denotes an object related to L, then the corresponding symbol γ̃ with tilde denotes the analogous object related to L̃, and γ̂ := γ − γ̃.
Theorem 1.4.2. If λ_n = λ̃_n, α_n = α̃_n, n ≥ 0, then L = L̃, i.e. q(x) = q̃(x) a.e. on (0, π), h = h̃ and H = H̃. Thus, the specification of the spectral data {λ_n, α_n}_{n≥0} uniquely determines the potential and the coefficients of the boundary conditions.
We give here two proofs of Theorem 1.4.2. The first proof is due to Marchenko [mar4] and uses the transformation operator and Parseval's equality (1.2.2). This method also works for Sturm-Liouville operators on the half-line, and gives us an opportunity to prove the uniqueness theorem of recovering an operator from its spectral function. The second proof is due to Levinson [Lev1] and uses the contour integral method. We note that Levinson first applied the ideas of the contour integral method to inverse spectral problems. In Section 1.6 one can see a development of these ideas for constructing the solution of the inverse problem.
Marchenko's proof. According to (1.3.11),
    φ(x, λ) = cos ρx + ∫₀^x G(x, t) cos ρt dt,   φ̃(x, λ) = cos ρx + ∫₀^x G̃(x, t) cos ρt dt.
In other words,
    φ(x, λ) = (E + G) cos ρx,   φ̃(x, λ) = (E + G̃) cos ρx,
where
    (E + G) f(x) = f(x) + ∫₀^x G(x, t) f(t) dt,   (E + G̃) f(x) = f(x) + ∫₀^x G̃(x, t) f(t) dt.
One can consider the relation φ̃(x, λ) = (E + G̃) cos ρx as a Volterra integral equation (see [jer1] for the theory of integral equations) with respect to cos ρx. Solving this equation we get
    cos ρx = φ̃(x, λ) + ∫₀^x H̃(x, t) φ̃(t, λ) dt,
where H̃(x, t) is a continuous function which is the kernel of the inverse operator:
    (E + H̃) = (E + G̃)⁻¹,   H̃f(x) = ∫₀^x H̃(x, t) f(t) dt.
Consequently,
    φ(x, λ) = φ̃(x, λ) + ∫₀^x Q(x, t) φ̃(t, λ) dt,   (1.4.2)
where Q(x, t) is a real continuous function.
Let f(x) ∈ L₂(0, π). It follows from (1.4.2) that
    ∫₀^π f(x) φ(x, λ) dx = ∫₀^π g(x) φ̃(x, λ) dx,
where
    g(x) = f(x) + ∫_x^π Q(t, x) f(t) dt.
Hence, for all n ≥ 0,
    a_n = b̃_n,   a_n := ∫₀^π f(x) φ(x, λ_n) dx,   b̃_n := ∫₀^π g(x) φ̃(x, λ_n) dx.
Using Parseval's equality (1.2.2), we calculate
    ∫₀^π |f(x)|² dx = Σ_{n=0}^∞ |a_n|²/α_n = Σ_{n=0}^∞ |b̃_n|²/α_n = Σ_{n=0}^∞ |b̃_n|²/α̃_n = ∫₀^π |g(x)|² dx,
i.e.
    ‖f‖_{L₂} = ‖g‖_{L₂}.   (1.4.3)
Consider the operator
    Af(x) = f(x) + ∫_x^π Q(t, x) f(t) dt.
Then Af = g. By virtue of (1.4.3), ‖Af‖_{L₂} = ‖f‖_{L₂} for any f(x) ∈ L₂(0, π). Consequently, A* = A⁻¹, but this is possible only if Q(x, t) ≡ 0. Thus, φ(x, λ) ≡ φ̃(x, λ), i.e. q(x) = q̃(x) a.e. on (0, π), h = h̃, H = H̃.
Levinson's proof. Let f(x), x ∈ [0, π], be an absolutely continuous function. Consider the function
    Y⁰(x, λ) = (1/Δ(λ)) ( ψ(x, λ) ∫₀^x f(t) φ̃(t, λ) dt + φ(x, λ) ∫_x^π f(t) ψ̃(t, λ) dt )
and the contour integral
    I⁰_N(x) = (1/(2πi)) ∮_{Γ_N} Y⁰(x, λ) dλ.
The idea used here comes from the proof of Theorem 1.2.1, but here the function Y⁰(x, λ) is constructed from solutions of two boundary value problems.
Repeating the arguments of the proof of Theorem 1.2.1 we calculate
    Y⁰(x, λ) = f(x)/λ − Z⁰(x, λ)/λ,
where
    Z⁰(x, λ) = (1/Δ(λ)) { f(x) [ ψ(x, λ)(φ̃′(x, λ) − φ′(x, λ)) − φ(x, λ)(ψ̃′(x, λ) − ψ′(x, λ)) ]
    + h̃ f(0) ψ(x, λ) + H̃ f(π) φ(x, λ) + ψ(x, λ) ∫₀^x ( φ̃′(t, λ) f′(t) + q̃(t) φ̃(t, λ) f(t) ) dt
    + φ(x, λ) ∫_x^π ( ψ̃′(t, λ) f′(t) + q̃(t) ψ̃(t, λ) f(t) ) dt }.
The asymptotic properties of φ̃(x, λ) and ψ̃(x, λ) are the same as those of φ(x, λ) and ψ(x, λ). Therefore, by similar arguments as in the proof of Theorem 1.2.1 one can obtain
    I⁰_N(x) = f(x) + ε⁰_N(x),   lim_{N→∞} max_{0≤x≤π} |ε⁰_N(x)| = 0.
On the other hand, we can calculate I⁰_N(x) with the help of the residue theorem:
    I⁰_N(x) = Σ_{n=0}^N (1/Δ̇(λ_n)) ( ψ(x, λ_n) ∫₀^x f(t) φ̃(t, λ_n) dt + φ(x, λ_n) ∫_x^π f(t) ψ̃(t, λ_n) dt ).
It follows from Lemma 1.1.1 and Theorem 1.1.4 that under the hypothesis of Theorem 1.4.2 we have β_n = β̃_n. Consequently,
    I⁰_N(x) = Σ_{n=0}^N (1/α_n) φ(x, λ_n) ∫₀^π f(t) φ̃(t, λ_n) dt.
If N → ∞ we get
    f(x) = Σ_{n=0}^∞ (1/α_n) φ(x, λ_n) ∫₀^π f(t) φ̃(t, λ_n) dt.
Together with (1.2.1) this gives
    ∫₀^π f(t) ( φ(t, λ_n) − φ̃(t, λ_n) ) dt = 0.
Since f(x) is an arbitrary absolutely continuous function, we conclude that φ(x, λ_n) = φ̃(x, λ_n) for all n ≥ 0 and x ∈ [0, π]. Consequently, q(x) = q̃(x) a.e. on (0, π), h = h̃, H = H̃. □
Symmetrical case. Let q(x) = q(π − x), H = h. In this case, in order to construct the potential q(x) and the coefficient h, it is sufficient to specify the spectrum {λ_n}_{n≥0} only.
Theorem 1.4.3. If q(x) = q(π − x), H = h, q̃(x) = q̃(π − x), H̃ = h̃ and λ_n = λ̃_n, n ≥ 0, then q(x) = q̃(x) a.e. on (0, π) and h = h̃.
Proof. If q(x) = q(π − x), H = h and y(x) is a solution of (1.1.1), then y₁(x) := y(π − x) also satisfies (1.1.1). In particular, we have ψ(x, λ) = φ(π − x, λ). Using (1.1.6) we calculate
    φ(π − x, λ_n) = ψ(x, λ_n) = β_n φ(x, λ_n) = β_n ψ(π − x, λ_n) = β_n² φ(π − x, λ_n).
Hence β_n² = 1.
On the other hand, it follows from (1.1.6) that β_n φ(π, λ_n) = 1. Applying Theorem 1.2.2 we conclude that β_n = (−1)ⁿ.
Then (1.1.8) implies
    α_n = (−1)ⁿ⁺¹ Δ̇(λ_n).
Under the assumptions of Theorem 1.4.3 we obtain α_n = α̃_n, n ≥ 0, and by Theorem 1.4.2, q(x) = q̃(x) a.e. on (0, π) and h = h̃. □
1.4.3. Uniqueness of the recovery of differential equations from two spectra. G. Borg [Bor1] suggested another formulation of the inverse problem: reconstruct a differential operator from two spectra of boundary value problems with a common differential equation and one common boundary condition. For definiteness, let the boundary condition at x = 0 be common.
Let {λ_n}_{n≥0} and {μ_n}_{n≥0} be the eigenvalues of L and L₁ (L₁ is defined in Remark 1.1.2), respectively. Consider the following inverse problem:
Inverse Problem 1.4.2. Given two spectra {λ_n}_{n≥0} and {μ_n}_{n≥0}, construct the potential q(x) and the coefficients h and H of the boundary conditions.
The goal of this subsection is to prove the uniqueness theorem for the solution of Inverse Problem 1.4.2.
Theorem 1.4.4. If λ_n = λ̃_n, μ_n = μ̃_n, n ≥ 0, then q(x) = q̃(x) a.e. on (0, π), h = h̃ and H = H̃. Thus, the specification of two spectra {λ_n, μ_n}_{n≥0} uniquely determines the potential and the coefficients of the boundary conditions.
Proof. By virtue of (1.1.26) and (1.1.28), the characteristic functions Δ(λ) and d(λ) are uniquely determined by their zeros, i.e. under the hypothesis of Theorem 1.4.4 we have
    Δ(λ) ≡ Δ̃(λ),   d(λ) ≡ d̃(λ).
Moreover, it follows from (1.1.13) and (1.1.29) that
    H = H̃,   ĥ + (1/2) ∫₀^π q̂(x) dx = 0,   (1.4.4)
where ĥ = h − h̃, q̂(x) = q(x) − q̃(x). Hence
    φ(π, λ) ≡ φ̃(π, λ),   φ′(π, λ) ≡ φ̃′(π, λ).   (1.4.5)
Since
    −φ″(x, λ) + q(x) φ(x, λ) = λ φ(x, λ),   −φ̃″(x, λ) + q̃(x) φ̃(x, λ) = λ φ̃(x, λ),
we get
    ∫₀^π q̂(x) φ(x, λ) φ̃(x, λ) dx = ∫₀^π ( φ′(x, λ) φ̃(x, λ) − φ(x, λ) φ̃′(x, λ) )′ dx,   q̂ := q − q̃.
Taking (1.4.4) and (1.4.5) into account we calculate
    ∫₀^π q̂(x) ( φ(x, λ) φ̃(x, λ) − 1/2 ) dx = 0.   (1.4.6)
Let us show that (1.4.6) implies q̂ = 0. For this we use the transformation operator (1.3.11). However, in order to prove this fact one can also apply the original Borg method, which is presented in Section 1.8.
Using the transformation operator (1.3.11) we rewrite the function φ(x, λ) φ̃(x, λ) − 1/2 as follows:
    φ(x, λ) φ̃(x, λ) − 1/2 = (1/2) cos 2ρx + ∫₀^x ( G(x, t) + G̃(x, t) ) cos ρx cos ρt dt
    + ∫₀^x ∫₀^x G(x, t) G̃(x, s) cos ρt cos ρs dt ds
    = (1/2) cos 2ρx + (1/2) ∫_{−x}^x ( G(x, t) + G̃(x, t) ) cos ρ(x − t) dt
    + (1/4) ∫_{−x}^x ∫_{−x}^x G(x, t) G̃(x, s) cos ρ(t − s) dt ds
(here G(x, −t) = G(x, t), G̃(x, −t) = G̃(x, t)). The changes of variables ξ = (x − t)/2 and ξ = (s − t)/2, respectively, yield
    φ(x, λ) φ̃(x, λ) − 1/2 = (1/2) [ cos 2ρx + 2 ∫₀^x ( G(x, x − 2ξ) + G̃(x, x − 2ξ) ) cos 2ρξ dξ
    + ∫_{−x}^x G̃(x, s) ( ∫_{(s−x)/2}^{(s+x)/2} G(x, s − 2ξ) cos 2ρξ dξ ) ds ],
and hence
    φ(x, λ) φ̃(x, λ) − 1/2 = (1/2) [ cos 2ρx + ∫₀^x V(x, ξ) cos 2ρξ dξ ],   (1.4.7)
where
    V(x, ξ) = 2 ( G(x, x − 2ξ) + G̃(x, x − 2ξ) )
    + ∫_{2ξ−x}^x G̃(x, s) G(x, s − 2ξ) ds + ∫_{−x}^{x−2ξ} G̃(x, s) G(x, s + 2ξ) ds.   (1.4.8)
Substituting (1.4.7) into (1.4.6) and interchanging the order of integration we obtain
    ∫₀^π ( q̂(ξ) + ∫_ξ^π V(x, ξ) q̂(x) dx ) cos 2ρξ dξ ≡ 0.
Consequently,
    q̂(ξ) + ∫_ξ^π V(x, ξ) q̂(x) dx = 0.
This homogeneous Volterra integral equation has only the trivial solution, i.e. q̂(x) = 0 a.e. on (0, π). From this and (1.4.4) it follows that h = h̃, H = H̃. □
Remark 1.4.2. Clearly, Borg's result is also valid when, instead of {λ_n} and {μ_n}, the spectra {λ_n} and {λ_n⁰} of L and L⁰ (L⁰ is defined in Remark 1.1.2) are given, i.e. it is valid for the following inverse problem.
Inverse Problem 1.4.3. Given two spectra {λ_n}_{n≥0} and {λ_n⁰}_{n≥0}, construct the potential q(x) and the coefficients h and H of the boundary conditions.
The uniqueness theorem for Inverse Problem 1.4.3 has the form:
Theorem 1.4.5. If λ_n = λ̃_n, λ_n⁰ = λ̃_n⁰, n ≥ 0, then q(x) = q̃(x) a.e. on (0, π), h = h̃ and H = H̃. Thus, the specification of two spectra {λ_n, λ_n⁰}_{n≥0} uniquely determines the potential and the coefficients of the boundary conditions.
We note that Theorem 1.4.4 and Theorem 1.4.5 can be reduced to each other by the replacement x → π − x.
1.4.4. The Weyl function. Let Φ(x, λ) be the solution of (1.1.1) under the conditions U(Φ) = 1, V(Φ) = 0. We set M(λ) := Φ(0, λ). The functions Φ(x, λ) and M(λ) are called the Weyl solution and the Weyl function for the boundary value problem L, respectively. The Weyl function was first introduced (for the case of the half-line) by H. Weyl. For further discussions on the Weyl function see, for example, [lev3]. Clearly,
    Φ(x, λ) = −ψ(x, λ)/Δ(λ) = S(x, λ) + M(λ) φ(x, λ),   (1.4.9)
    M(λ) = −Δ⁰(λ)/Δ(λ),   (1.4.10)
    ⟨φ(x, λ), Φ(x, λ)⟩ ≡ 1,   (1.4.11)
where Δ⁰(λ) is defined in Remark 1.1.2. Thus, the Weyl function is meromorphic with simple poles at the points λ = λ_n, n ≥ 0.
Theorem 1.4.6. The following representation holds:
    M(λ) = Σ_{n=0}^∞ 1/( α_n (λ − λ_n) ).   (1.4.12)
Proof. Since Δ⁰(λ) = ψ(0, λ), it follows from (1.1.10) that |Δ⁰(λ)| ≤ C exp(|τ|π). Then, using (1.4.10) and (1.1.18), we get for sufficiently large ρ* > 0:
    |M(λ)| ≤ C_δ/|ρ|,   λ ∈ G_δ,   |ρ| ≥ ρ*.   (1.4.13)
Further, using (1.4.10) and Lemma 1.1.1, we calculate
    Res_{λ=λ_n} M(λ) = −Δ⁰(λ_n)/Δ̇(λ_n) = −β_n/Δ̇(λ_n) = 1/α_n.   (1.4.14)
Consider the contour integral
    J_N(λ) = (1/(2πi)) ∮_{Γ_N} M(μ)/(λ − μ) dμ,   λ ∈ int Γ_N.
By virtue of (1.4.13), lim_{N→∞} J_N(λ) = 0. On the other hand, the residue theorem and (1.4.14) yield
    J_N(λ) = −M(λ) + Σ_{n=0}^N 1/( α_n (λ − λ_n) ),
and Theorem 1.4.6 is proved. □
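The expansion (1.4.12) can be checked in the explicitly solvable case q ≡ 0, h = H = 0, where λ_n = n², α₀ = π, α_n = π/2 (n ≥ 1) and the Weyl function has the closed form M(λ) = cot(π√λ)/√λ. The sketch below compares the closed form with a partial sum (the evaluation point λ = 2.3 and the truncation are illustrative choices):

```python
import math

lam = 2.3                                  # any non-eigenvalue point
rho = math.sqrt(lam)
M_closed = math.cos(math.pi * rho) / (math.sin(math.pi * rho) * rho)

# partial sum of (1.4.12): n = 0 term has alpha_0 = pi, the rest alpha_n = pi/2
M_series = 1.0 / (math.pi * lam)
M_series += sum(2.0 / (math.pi * (lam - n * n)) for n in range(1, 200000))
```

The tail of the series behaves like −(2/π) Σ 1/n², so the truncation error is of order 1e−5 here.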
In this subsection we consider the following inverse problem:
Inverse Problem 1.4.4. Given the Weyl function M(λ), construct L(q(x), h, H).
Let us prove the uniqueness theorem for Inverse Problem 1.4.4.
Theorem 1.4.7. If M(λ) = M̃(λ), then L = L̃. Thus, the specification of the Weyl function uniquely determines the operator.
Proof. Let us define the matrix P(x, λ) = [P_{jk}(x, λ)]_{j,k=1,2} by the formula
    P(x, λ) | φ̃(x, λ)   Φ̃(x, λ)  |   | φ(x, λ)   Φ(x, λ)  |
            | φ̃′(x, λ)  Φ̃′(x, λ) | = | φ′(x, λ)  Φ′(x, λ) |.   (1.4.15)
Using (1.4.11) and (1.4.15) we calculate
    P_{j1}(x, λ) = φ^{(j−1)}(x, λ) Φ̃′(x, λ) − Φ^{(j−1)}(x, λ) φ̃′(x, λ),
    P_{j2}(x, λ) = Φ^{(j−1)}(x, λ) φ̃(x, λ) − φ^{(j−1)}(x, λ) Φ̃(x, λ),   (1.4.16)
    φ(x, λ) = P₁₁(x, λ) φ̃(x, λ) + P₁₂(x, λ) φ̃′(x, λ),
    Φ(x, λ) = P₁₁(x, λ) Φ̃(x, λ) + P₁₂(x, λ) Φ̃′(x, λ).   (1.4.17)
It follows from (1.4.16), (1.4.9) and (1.4.11) that
    P₁₁(x, λ) = 1 + (1/Δ(λ)) ( ψ(x, λ)(φ̃′(x, λ) − φ′(x, λ)) − φ(x, λ)(ψ̃′(x, λ) − ψ′(x, λ)) ),
    P₁₂(x, λ) = (1/Δ(λ)) ( φ(x, λ) ψ̃(x, λ) − ψ(x, λ) φ̃(x, λ) ).
By virtue of (1.1.9), (1.1.10) and (1.1.18), this yields
    |P₁₁(x, λ) − 1| ≤ C_δ/|ρ|,   |P₁₂(x, λ)| ≤ C_δ/|ρ|,   λ ∈ G_δ,   |ρ| ≥ ρ*,   (1.4.18)
    |P₂₂(x, λ) − 1| ≤ C_δ/|ρ|,   |P₂₁(x, λ)| ≤ C_δ,   λ ∈ G_δ,   |ρ| ≥ ρ*.   (1.4.19)
According to (1.4.9) and (1.4.16),
    P₁₁(x, λ) = φ(x, λ) S̃′(x, λ) − S(x, λ) φ̃′(x, λ) + ( M̃(λ) − M(λ) ) φ(x, λ) φ̃′(x, λ),
    P₁₂(x, λ) = S(x, λ) φ̃(x, λ) − φ(x, λ) S̃(x, λ) + ( M(λ) − M̃(λ) ) φ(x, λ) φ̃(x, λ).
Thus, if M(λ) ≡ M̃(λ), then for each fixed x the functions P₁₁(x, λ) and P₁₂(x, λ) are entire in λ. Together with (1.4.18) this yields P₁₁(x, λ) ≡ 1, P₁₂(x, λ) ≡ 0. Substituting into (1.4.17) we get φ(x, λ) ≡ φ̃(x, λ), Φ(x, λ) ≡ Φ̃(x, λ) for all x and λ, and consequently, L = L̃. □
Remark 1.4.3. According to (1.4.12), the specification of the Weyl function M(λ) is equivalent to the specification of the spectral data {λ_n, α_n}_{n≥0}. On the other hand, by virtue of (1.4.10) the poles and the zeros of the Weyl function M(λ) coincide with the spectra of the boundary value problems L and L⁰, respectively. Consequently, the specification of the Weyl function M(λ) is equivalent to the specification of two spectra {λ_n} and {λ_n⁰}. Thus, the inverse problems of recovering the Sturm-Liouville equation from the spectral data and from two spectra are particular cases of Inverse Problem 1.4.4 of recovering the Sturm-Liouville equation from the given Weyl function, and we have several independent methods for proving the uniqueness theorems. The Weyl function is a very natural and convenient spectral characteristic in the inverse problem theory. Using the concept of the Weyl function and its generalizations we can formulate and study inverse problems for various classes of operators. For example, the inverse problem of recovering higher-order differential operators from the Weyl functions has been studied in [yur1]. We will also use the Weyl function in Chapter 2 for the investigation of the Sturm-Liouville operator on the half-line.
1.5. GELFAND-LEVITAN METHOD
In Sections 1.5-1.8 we present various methods for constructing the solution of inverse problems. In this section we describe the Gelfand-Levitan method ([gel1], [lev2], [mar1]), in which the transformation operators constructed in Section 1.3 are used. By the Gelfand-Levitan method we obtain algorithms for the solution of inverse problems and provide necessary and sufficient conditions for their solvability. The central role in this method is played by a linear integral equation with respect to the kernel of the transformation operator (see Theorem 1.5.1). The main results of Section 1.5 are stated in Theorems 1.5.2 and 1.5.4.
1.5.1. Auxiliary propositions. In order to present the transformation operator method we first prove several auxiliary assertions.
Lemma 1.5.1. In a Banach space B, consider the equations
    (E + A₀) y₀ = f₀,
    (E + A) y = f,
where A and A₀ are linear bounded operators, acting from B to B, and E is the identity operator. Suppose that there exists the linear bounded operator R₀ := (E + A₀)⁻¹; this yields in particular that the equation (E + A₀) y₀ = f₀ is uniquely solvable in B. If
    ‖A − A₀‖ ≤ (2‖R₀‖)⁻¹,
then there exists the linear bounded operator R := (E + A)⁻¹ with
    R = R₀ ( E + Σ_{k=1}^∞ ((A₀ − A) R₀)ᵏ ),
and
    ‖R − R₀‖ ≤ 2‖R₀‖² ‖A − A₀‖.
Moreover, y and y₀ satisfy the estimate
    ‖y − y₀‖ ≤ C₀ ( ‖A − A₀‖ + ‖f − f₀‖ ),
where C₀ depends only on ‖R₀‖ and ‖f₀‖.
Proof. We have
    E + A = (E + A₀) + (A − A₀) = ( E + (A − A₀) R₀ ) (E + A₀).
Under the assumptions of the lemma it follows that ‖(A − A₀) R₀‖ ≤ 1/2, and consequently there exists the linear bounded operator
    R := (E + A)⁻¹ = R₀ ( E + (A − A₀) R₀ )⁻¹ = R₀ ( E + Σ_{k=1}^∞ ((A₀ − A) R₀)ᵏ ).
This yields in particular that ‖R‖ ≤ 2‖R₀‖. Using again the assumption on ‖A − A₀‖ we infer
    ‖R − R₀‖ ≤ ‖R₀‖ ‖(A − A₀) R₀‖ / ( 1 − ‖(A − A₀) R₀‖ ) ≤ 2‖R₀‖² ‖A − A₀‖.
Furthermore,
    y − y₀ = Rf − R₀f₀ = (R − R₀) f₀ + R (f − f₀).
Hence
    ‖y − y₀‖ ≤ 2‖R₀‖² ‖f₀‖ ‖A − A₀‖ + 2‖R₀‖ ‖f − f₀‖.   □
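The bound of Lemma 1.5.1 can be tried out in a finite-dimensional setting. The sketch below (illustrative, not from the text; matrix sizes and the random seed are arbitrary) takes B = R⁶ with the spectral norm and verifies the perturbation estimate ‖R − R₀‖ ≤ 2‖R₀‖²‖A − A₀‖:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
E = np.eye(n)
norm = lambda X: np.linalg.norm(X, 2)       # operator (spectral) norm

A0 = 0.3 * rng.standard_normal((n, n)) / n  # small, so E + A0 is invertible
R0 = np.linalg.inv(E + A0)

# perturbation with ||A - A0|| <= 1 / (2 ||R0||), as the lemma requires
D = rng.standard_normal((n, n))
D *= 0.5 / (2.0 * norm(R0) * norm(D))
A = A0 + D
R = np.linalg.inv(E + A)

lhs = norm(R - R0)
rhs = 2.0 * norm(R0) ** 2 * norm(D)
```

Since the hypothesis is satisfied with margin, `lhs <= rhs` holds up to rounding.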
The following lemma is an obvious corollary of Lemma 1.5.1 (applied in a corresponding
function space).
Lemma 1.5.2. Consider the integral equation
    y(t, λ) + ∫_a^b A(t, s, λ) y(s, λ) ds = f(t, λ),   a ≤ t ≤ b,   (1.5.1)
where A(t, s, λ) and f(t, λ) are continuous functions. Assume that for a fixed λ = λ₀ the homogeneous equation
    z(t) + ∫_a^b A₀(t, s) z(s) ds = 0,   A₀(t, s) := A(t, s, λ₀),
has only the trivial solution. Then in a neighbourhood of the point λ = λ₀, equation (1.5.1) has a unique solution y(t, λ), which is continuous with respect to t and λ. Moreover, the function y(t, λ) has the same smoothness as A(t, s, λ) and f(t, λ).
Lemma 1.5.3. Let φ_j(x, λ), j ≥ 1, be the solution of the equation
    −y″ + q_j(x) y = λy,   q_j(x) ∈ L₂(0, π),
under the conditions φ_j(0, λ) = 1, φ_j′(0, λ) = h_j, and let φ(x, λ) be the function defined in Section 1.1. If lim_{j→∞} ‖q_j − q‖_{L₂} = 0 and lim_{j→∞} h_j = h, then for each fixed r > 0,
    lim_{j→∞} max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ) − φ(x, λ)| = 0.
Proof. One can verify by differentiation that the function φ_j(x, λ) satisfies the following integral equation:
    φ_j(x, λ) = φ(x, λ) + (h_j − h) S(x, λ) + ∫₀^x g(x, t, λ) ( q_j(t) − q(t) ) φ_j(t, λ) dt,
where g(x, t, λ) = C(t, λ) S(x, λ) − C(x, λ) S(t, λ) is the Green function of the Cauchy problem
    −y″ + q(x) y − λy = f,   y(0) = y′(0) = 0.
Fix r > 0. Then for |λ| ≤ r and x ∈ [0, π],
    |φ_j(x, λ) − φ(x, λ)| ≤ C ( |h_j − h| + ‖q_j − q‖_{L₂} max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ)| ).
In particular, this yields
    max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ)| ≤ C,
where C does not depend on j. Hence
    max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ) − φ(x, λ)| ≤ C ( |h_j − h| + ‖q_j − q‖_{L₂} ) → 0 for j → ∞.   □
Lemma 1.5.4. Let numbers {ρ_n, α_n}_{n≥0} of the form
    ρ_n = n + ω/(πn) + κ_n/n,   α_n = π/2 + κ_{n1}/n,   {κ_n}, {κ_{n1}} ∈ l₂,   α_n ≠ 0   (1.5.2)
be given. Denote
    a(x) = Σ_{n=0}^∞ ( cos ρ_n x/α_n − cos nx/α_n⁰ ),   (1.5.3)
where
    α_n⁰ = π/2 for n > 0,   α₀⁰ = π.
Then a(x) ∈ W₂¹(0, 2π).
Proof. Denote ε_n = ρ_n − n. Since
    cos ρ_n x/α_n − cos nx/α_n⁰ = (1/α_n⁰)( cos ρ_n x − cos nx ) + ( 1/α_n − 1/α_n⁰ ) cos ρ_n x,
    cos ρ_n x − cos nx = cos(n + ε_n)x − cos nx = −sin ε_n x sin nx − 2 sin²(ε_n x/2) cos nx
    = −ε_n x sin nx − ( sin ε_n x − ε_n x ) sin nx − 2 sin²(ε_n x/2) cos nx,
we have
    a(x) = A₁(x) + A₂(x),
where
    A₁(x) = −(2ωx/π²) Σ_{n=1}^∞ sin nx/n = −(2ωx/π²) · (π − x)/2,   0 < x < 2π,
    A₂(x) = Σ_{n=0}^∞ ( 1/α_n − 1/α_n⁰ ) cos ρ_n x + (1/π)( cos ρ₀x − 1 ) − (2x/π) Σ_{n=1}^∞ κ_n sin nx/n
    − (2/π) Σ_{n=1}^∞ ( sin ε_n x − ε_n x ) sin nx − (4/π) Σ_{n=1}^∞ sin²(ε_n x/2) cos nx.   (1.5.4)
Since
    ε_n = O(1/n),   1/α_n − 1/α_n⁰ = κ_n′/n,   {κ_n′} ∈ l₂,
the series in (1.5.4) converge absolutely and uniformly on [0, 2π], and A₂(x) ∈ W₂¹(0, 2π). Consequently, a(x) ∈ W₂¹(0, 2π). □
By similar arguments one can prove the following more general assertions.
Lemma 1.5.5. Let numbers {ρ_n, α_n}_{n≥0} of the form (1.1.23) be given. Then a(x) ∈ W₂^{N+1}(0, 2π).
Proof. Indeed, substituting (1.1.23) into (1.5.3) we obtain two groups of series. The series of the first group can be differentiated N + 1 times. The series of the second group are
    Σ_{n=1}^∞ sin nx/n^{2k+1},   Σ_{n=1}^∞ cos nx/n^{2k}.
Differentiating these series 2k and 2k − 1 times, respectively, we obtain the series
    Σ_{n=1}^∞ sin nx/n = (π − x)/2,   0 < x < 2π.   □
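The Fourier identity Σ_{n≥1} sin nx/n = (π − x)/2 used in the proofs above is easy to confirm numerically at an interior point (the sample point and truncation below are illustrative choices; the partial sums converge only like O(1/N) pointwise):

```python
import math

x = 2.0                                       # any point in (0, 2 pi)
partial = sum(math.sin(n * x) / n for n in range(1, 100001))
exact = (math.pi - x) / 2
```

The residual here is of order 1/(2N sin(x/2)), about 6e−6 for this truncation.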
Lemma 1.5.6. Let numbers {ρ_n, α_n}_{n≥0} of the form (1.5.2) be given. Fix C₀ > 0. If certain numbers {ρ̃_n, α̃_n}_{n≥0}, α̃_n ≠ 0, satisfy the condition
    Ω := ( Σ_{n=0}^∞ ((n + 1) ξ_n)² )^{1/2} ≤ C₀,   ξ_n := |ρ_n − ρ̃_n| + |α_n − α̃_n|,
then
    â(x) := Σ_{n=0}^∞ ( cos ρ_n x/α_n − cos ρ̃_n x/α̃_n ) ∈ W₂¹(0, 2π),   (1.5.5)
and
    max_{0≤x≤2π} |â(x)| ≤ C Σ_{n=0}^∞ ξ_n,   ‖â‖_{W₂¹} ≤ CΩ,
where C depends on {ρ_n, α_n}_{n≥0} and C₀.
Proof. Obviously,
    Σ_{n=0}^∞ ξ_n ≤ CΩ.
We rewrite (1.5.5) in the form
    â(x) = Σ_{n=0}^∞ ( ( 1/α_n − 1/α̃_n ) cos ρ_n x + (1/α̃_n)( cos ρ_n x − cos ρ̃_n x ) )
    = Σ_{n=0}^∞ ( ((α̃_n − α_n)/(α_n α̃_n)) cos ρ_n x + (2/α̃_n) sin((ρ̃_n − ρ_n)x/2) sin((ρ̃_n + ρ_n)x/2) ).
This series converges absolutely and uniformly, â(x) is a continuous function, and
    |â(x)| ≤ C Σ_{n=0}^∞ ξ_n.
Differentiating (1.5.5) we calculate analogously
    â′(x) = Σ_{n=0}^∞ ( −ρ_n sin ρ_n x/α_n + ρ̃_n sin ρ̃_n x/α̃_n )
    = Σ_{n=0}^∞ ( ( ρ̃_n/α̃_n − ρ_n/α_n ) sin ρ_n x + (ρ̃_n/α̃_n)( sin ρ̃_n x − sin ρ_n x ) )
    = Σ_{n=0}^∞ ( ( ρ̃_n/α̃_n − ρ_n/α_n ) sin ρ_n x + x (ρ̃_n/α̃_n)(ρ̃_n − ρ_n) cos((ρ̃_n + ρ_n)x/2)
    + (ρ̃_n/α̃_n)( 2 sin((ρ̃_n − ρ_n)x/2) − x(ρ̃_n − ρ_n) ) cos((ρ̃_n + ρ_n)x/2) ) = A₁(x) + A₂(x),
where
    A₁(x) = Σ_{n=0}^∞ ( ρ̃_n/α̃_n − ρ_n/α_n ) sin ρ_n x + x Σ_{n=0}^∞ (ρ̃_n/α̃_n)(ρ̃_n − ρ_n) cos ρ_n x,   (1.5.6)
    A₂(x) = Σ_{n=0}^∞ (ρ̃_n/α̃_n)( 2 sin((ρ̃_n − ρ_n)x/2) − x(ρ̃_n − ρ_n) ) cos((ρ̃_n + ρ_n)x/2)
    + x Σ_{n=0}^∞ (ρ̃_n/α̃_n)(ρ̃_n − ρ_n)( cos((ρ̃_n + ρ_n)x/2) − cos ρ_n x ).   (1.5.7)
The series in (1.5.7) converge absolutely and uniformly, and
    |A₂(x)| ≤ C Σ_{n=0}^∞ |ρ_n − ρ̃_n|.
The series in (1.5.6) converge in L₂(0, 2π), and
    ‖A₁‖_{L₂(0,2π)} ≤ CΩ.
These estimates imply the assertions of the lemma. □
1.5.2. Recovery of differential operators from the spectral data. Let us consider the boundary value problem L = L(q(x), h, H). Let {λ_n, α_n}_{n≥0} be the spectral data of L, λ_n = ρ_n². We shall solve the inverse problem of recovering L from the given spectral data {λ_n, α_n}_{n≥0}. It was shown in Section 1.1 that the spectral data have the properties:
    ρ_n = n + ω/(πn) + κ_n/n,   α_n = π/2 + κ_{n1}/n,   {κ_n}, {κ_{n1}} ∈ l₂,   (1.5.8)
    α_n > 0,   λ_n ≠ λ_m   (n ≠ m).   (1.5.9)
More precisely,
    κ_n = (1/2) ∫₀^π q(t) cos 2nt dt + O(1/n),   κ_{n1} = (1/2) ∫₀^π (π − t) q(t) sin 2nt dt + O(1/n),
i.e. the main terms depend linearly on the potential.
Consider the function
    F(x, t) = Σ_{n=0}^∞ ( cos ρ_n x cos ρ_n t/α_n − cos nx cos nt/α_n⁰ ),   (1.5.10)
where
    α_n⁰ = π/2 for n > 0,   α₀⁰ = π.
Since F(x, t) = ( a(x + t) + a(x − t) )/2, by virtue of Lemma 1.5.4 the function F(x, t) is continuous, and (d/dx) F(x, x) ∈ L₂(0, π).
Theorem 1.5.1. For each fixed x ∈ (0, π], the kernel G(x, t) appearing in representation (1.3.11) satisfies the linear integral equation
    G(x, t) + F(x, t) + ∫₀^x G(x, s) F(s, t) ds = 0,   0 < t < x.   (1.5.11)
This equation is called the Gelfand-Levitan equation.
Thus, Theorem 1.5.1 allows one to reduce our inverse problem to the solution of the Gelfand-Levitan equation (1.5.11). We note that (1.5.11) is a Fredholm type integral equation in which x is a parameter.
Proof. One can consider the relation (1.3.11) as a Volterra integral equation with respect to cos ρx. Solving this equation we obtain
    cos ρx = φ(x, λ) + ∫₀^x H(x, t) φ(t, λ) dt,   (1.5.12)
where H(x, t) is a continuous function. Using (1.3.11) and (1.5.12) we calculate
    Σ_{n=0}^N φ(x, λ_n) cos ρ_n t/α_n = Σ_{n=0}^N ( cos ρ_n x cos ρ_n t/α_n + (cos ρ_n t/α_n) ∫₀^x G(x, s) cos ρ_n s ds ),
    Σ_{n=0}^N φ(x, λ_n) cos ρ_n t/α_n = Σ_{n=0}^N ( φ(x, λ_n) φ(t, λ_n)/α_n + (φ(x, λ_n)/α_n) ∫₀^t H(t, s) φ(s, λ_n) ds ).
This yields
    Φ_N(x, t) = I_{N1}(x, t) + I_{N2}(x, t) + I_{N3}(x, t) + I_{N4}(x, t),
where
    Φ_N(x, t) = Σ_{n=0}^N ( φ(x, λ_n) φ(t, λ_n)/α_n − cos nx cos nt/α_n⁰ ),
    I_{N1}(x, t) = Σ_{n=0}^N ( cos ρ_n x cos ρ_n t/α_n − cos nx cos nt/α_n⁰ ),
    I_{N2}(x, t) = Σ_{n=0}^N ( cos nt/α_n⁰ ) ∫₀^x G(x, s) cos ns ds,
    I_{N3}(x, t) = Σ_{n=0}^N ∫₀^x G(x, s) ( cos ρ_n t cos ρ_n s/α_n − cos nt cos ns/α_n⁰ ) ds,
    I_{N4}(x, t) = −Σ_{n=0}^N ( φ(x, λ_n)/α_n ) ∫₀^t H(t, s) φ(s, λ_n) ds.
Let f(x) AC[0, ]. According to Theorem 1.2.1,
lim
N
max
0x
_

0
f(t)
N
(x, t) dt = 0;
furthermore, uniformly with respect to x [0, ],
lim
N
_

0
f(t)I
N1
(x, t) dt =
_

0
f(t)F(x, t) dt,
lim
N
_

0
f(t)I
N2
(x, t) dt =
_
x
0
f(t)G(x, t) dt,
lim
N
_

0
f(t)I
N3
(x, t) dt =
_

0
f(t)
_
_
x
0
G(x, s)F(s, t) ds
_
dt,
lim
N
_

0
f(t)I
N4
(x, t) dt = lim
N
N

n=0
(x,
n
)

n
_

0
(s,
n
)
_
_

s
H(t, s)f(t) dt
_
ds
=
_

x
f(t)H(t, x) dt.
38
Extend G(x, t) = H(x, t) = 0 for x < t. Then, in view of the arbitrariness of f(x), we
derive
G(x, t) + F(x, t) +
_
x
0
G(x, s)F(s, t) ds H(t, x) = 0.
For t < x, this yields (1.5.11). 2
The next theorem, which is the main result of Section 1.5, gives us an algorithm for the solution of the inverse problem as well as necessary and sufficient conditions for its solvability.

Theorem 1.5.2. For real numbers {λ_n, α_n}_{n≥0} to be the spectral data for a certain boundary value problem L(q(x), h, H) with q(x) ∈ L_2(0, π), it is necessary and sufficient that the relations (1.5.8)-(1.5.9) hold. Moreover, q(x) ∈ W_2^N iff (1.1.23) holds. The boundary value problem L(q(x), h, H) can be constructed by the following algorithm:

Algorithm 1.5.1. (i) From the given numbers {λ_n, α_n}_{n≥0} construct the function F(x, t) by (1.5.10).
(ii) Find the function G(x, t) by solving equation (1.5.11).
(iii) Calculate q(x), h and H by the formulae

q(x) = 2 (d/dx) G(x, x),   h = G(0, 0),   (1.5.13)

H = ω − h − (1/2) ∫_0^π q(t) dt.   (1.5.14)
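Steps (i)-(iii) can be tried numerically. The sketch below is an illustration under our own discretization choices, not the authors' procedure: it solves the Gelfand-Levitan equation (1.5.11) for a fixed x by the Nyström method with trapezoidal weights, and checks the result against the data of Example 1.5.1 below, where F ≡ a and G(x, t) = −a/(1 + ax):

```python
import numpy as np

def solve_gelfand_levitan(F, x, m=201):
    # Solve G(x,t) + F(x,t) + int_0^x G(x,s) F(s,t) ds = 0 for G(x, .)
    # by the Nystrom method with the trapezoidal rule on [0, x].
    t = np.linspace(0.0, x, m)
    w = np.full(m, x / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    # M[j, i] = delta_{ji} + w_i * F(t_i, t_j), acting on the vector g_i = G(x, t_i)
    M = np.eye(m) + w[None, :] * F(t[None, :], t[:, None])
    g = np.linalg.solve(M, -F(x, t))
    return t, g

# Check against Example 1.5.1, where F == a and G(x,t) = -a/(1+ax):
a = 1.0 / 1.0 - 1.0 / np.pi            # a = 1/alpha_0 - 1/pi with alpha_0 = 1
F = lambda s, t: a * np.ones(np.broadcast(s, t).shape)
x = 2.0
t, g = solve_gelfand_levitan(F, x)
print(float(np.max(np.abs(g + a / (1 + a * x)))))  # close to 0
```

In the same spirit, q(x) can then be approximated from (1.5.13) by differencing G(x, x) over a grid of x values.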
The necessity part of Theorem 1.5.2 was proved above; here we prove the sufficiency. Let real numbers {λ_n, α_n}_{n≥0} of the form (1.5.8)-(1.5.9) be given. We construct the function F(x, t) by (1.5.10) and consider equation (1.5.11).

Lemma 1.5.7. For each fixed x ∈ (0, π], equation (1.5.11) has a unique solution G(x, t) in L_2(0, x).
Proof. Since (1.5.11) is a Fredholm equation, it is sufficient to prove that the homogeneous equation

g(t) + ∫_0^x F(s, t)g(s) ds = 0   (1.5.15)

has only the trivial solution g(t) = 0.
Let g(t) be a solution of (1.5.15). Then

∫_0^x g²(t) dt + ∫_0^x ∫_0^x F(s, t)g(s)g(t) ds dt = 0,

or

∫_0^x g²(t) dt + Σ_{n=0}^∞ (1/α_n) ( ∫_0^x g(t) cos ρ_n t dt )² − Σ_{n=0}^∞ (1/α_n^0) ( ∫_0^x g(t) cos nt dt )² = 0.

Using Parseval's equality

∫_0^x g²(t) dt = Σ_{n=0}^∞ (1/α_n^0) ( ∫_0^x g(t) cos nt dt )²

for the function g(t), extended by zero for t > x, we obtain

Σ_{n=0}^∞ (1/α_n) ( ∫_0^x g(t) cos ρ_n t dt )² = 0.

Since α_n > 0, it follows that

∫_0^x g(t) cos ρ_n t dt = 0,   n ≥ 0.

The system of functions {cos ρ_n t}_{n≥0} is complete in L_2(0, π) (see Levinson's theorem [you1, p.118] or Proposition 1.8.6 in Section 1.8). This yields g(t) = 0. □
Let us return to the proof of Theorem 1.5.2. Let G(x, t) be the solution of (1.5.11). The substitution t → tx, s → sx in (1.5.11) yields

F(x, xt) + G(x, xt) + x ∫_0^1 G(x, xs)F(xt, xs) ds = 0,   0 ≤ t ≤ 1.   (1.5.16)

It follows from (1.5.11), (1.5.16) and Lemma 1.5.2 that the function G(x, t) is continuous and has the same smoothness as F(x, t). In particular, (d/dx) G(x, x) ∈ L_2(0, π).
We construct the function φ(x, λ) by (1.3.11), and the function q(x) and the number h by (1.5.13).

Lemma 1.5.8. The following relations hold:

−φ''(x, λ) + q(x)φ(x, λ) = λφ(x, λ),   (1.5.17)

φ(0, λ) = 1,   φ'(0, λ) = h.   (1.5.18)
Proof. 1) First we assume that a(x) ∈ W_2^2(0, 2π), where a(x) is defined by (1.5.3). Differentiating the identity

J(x, t) := F(x, t) + G(x, t) + ∫_0^x G(x, s)F(s, t) ds ≡ 0,   (1.5.19)

we calculate

J_t(x, t) = F_t(x, t) + G_t(x, t) + ∫_0^x G(x, s)F_t(s, t) ds = 0,   (1.5.20)

J_tt(x, t) = F_tt(x, t) + G_tt(x, t) + ∫_0^x G(x, s)F_tt(s, t) ds = 0,   (1.5.21)

J_xx(x, t) = F_xx(x, t) + G_xx(x, t) + (dG(x, x)/dx)F(x, t) + G(x, x)F_x(x, t)
    + (∂G(x, t)/∂x)|_{t=x} F(x, t) + ∫_0^x G_xx(x, s)F(s, t) ds = 0.   (1.5.22)

According to (1.5.10), F_tt(s, t) = F_ss(s, t) and F_t(x, t)|_{t=0} = 0. Then (1.5.20) for t = 0 gives

(∂G(x, t)/∂t)|_{t=0} = 0.   (1.5.23)

Moreover, integration by parts in (1.5.21) yields

J_tt(x, t) = F_tt(x, t) + G_tt(x, t) + G(x, x)(∂F(s, t)/∂s)|_{s=x}
    − (∂G(x, s)/∂s)|_{s=x} F(x, t) + ∫_0^x G_ss(x, s)F(s, t) ds = 0.   (1.5.24)

It follows from (1.5.19), (1.5.22), (1.5.24) and the equality

J_xx(x, t) − J_tt(x, t) − q(x)J(x, t) ≡ 0

that

(G_xx(x, t) − G_tt(x, t) − q(x)G(x, t)) + ∫_0^x (G_xx(x, s) − G_ss(x, s) − q(x)G(x, s))F(s, t) ds ≡ 0.

According to Lemma 1.5.7, this homogeneous equation has only the trivial solution, i.e.

G_xx(x, t) − G_tt(x, t) − q(x)G(x, t) = 0,   0 < t < x.   (1.5.25)

Differentiating (1.3.11) twice, we get

φ'(x, λ) = −ρ sin ρx + G(x, x) cos ρx + ∫_0^x G_x(x, t) cos ρt dt,   (1.5.26)

φ''(x, λ) = −ρ² cos ρx − ρ G(x, x) sin ρx
    + ( dG(x, x)/dx + (∂G(x, t)/∂x)|_{t=x} ) cos ρx + ∫_0^x G_xx(x, t) cos ρt dt.   (1.5.27)

On the other hand, integrating by parts twice, we obtain

λφ(x, λ) = ρ² cos ρx + ρ² ∫_0^x G(x, t) cos ρt dt = ρ² cos ρx + ρ G(x, x) sin ρx
    + (∂G(x, t)/∂t)|_{t=x} cos ρx − (∂G(x, t)/∂t)|_{t=0} − ∫_0^x G_tt(x, t) cos ρt dt.

Together with (1.3.11) and (1.5.27) this gives

φ''(x, λ) + λφ(x, λ) − q(x)φ(x, λ) = ( 2 dG(x, x)/dx − q(x) ) cos ρx
    − (∂G(x, t)/∂t)|_{t=0} + ∫_0^x (G_xx(x, t) − G_tt(x, t) − q(x)G(x, t)) cos ρt dt.

Taking (1.5.13), (1.5.23) and (1.5.25) into account, we arrive at (1.5.17). The relations (1.5.18) follow from (1.3.11) and (1.5.26) for x = 0.
2) Let us now consider the general case when (1.5.8)-(1.5.9) hold. Then, according to Lemma 1.5.4, a(x) ∈ W_2^1(0, 2π). Denote by ψ(x, λ) the solution of equation (1.1.1) under the conditions ψ(0, λ) = 1, ψ'(0, λ) = h. Our goal is to prove that φ(x, λ) ≡ ψ(x, λ).
Choose numbers {ρ_{n,(j)}, α_{n,(j)}}_{n≥0}, j ≥ 1, of the form

ρ_{n,(j)} = n + ω/(πn) + κ_{n,(j)}/n²,   α_{n,(j)} = π/2 + κ_{n1,(j)}/n²,   {κ_{n,(j)}}, {κ_{n1,(j)}} ∈ ℓ_2,

such that for j → ∞,

Ω_j := ( Σ_{n=0}^∞ |(n + 1)ξ_{n,(j)}|² )^{1/2} → 0,   ξ_{n,(j)} := |ρ_{n,(j)} − ρ_n| + |α_{n,(j)} − α_n|.

Denote

a_j(x) := Σ_{n=0}^∞ ( cos ρ_{n,(j)} x / α_{n,(j)} − cos nx / α_n^0 ),   j ≥ 1.

By virtue of Lemma 1.5.5, a_j(x) ∈ W_2^2(0, 2π). Let G_j(x, t) be the solution of the Gelfand-Levitan equation

G_j(x, t) + F_j(x, t) + ∫_0^x G_j(x, s)F_j(s, t) ds = 0,   0 < t < x,

where F_j(x, t) = (a_j(x + t) + a_j(x − t))/2. Take

q_j(x) := 2 (d/dx) G_j(x, x),   h_j := G_j(0, 0),

φ_j(x, λ) = cos ρx + ∫_0^x G_j(x, t) cos ρt dt.   (1.5.28)

Since a_j(x) ∈ W_2^2(0, 2π), it follows from the first part of the proof of Lemma 1.5.8 that

−φ_j''(x, λ) + q_j(x)φ_j(x, λ) = λφ_j(x, λ),   φ_j(0, λ) = 1,   φ_j'(0, λ) = h_j.

Further, by virtue of Lemma 1.5.6,

lim_{j→∞} ‖a_j(x) − a(x)‖_{W_2^1} = 0.

From this, taking Lemma 1.5.1 into account, we get

lim_{j→∞} max_{0≤t≤x≤π} |G_j(x, t) − G(x, t)| = 0,   (1.5.29)

lim_{j→∞} ‖q_j − q‖_{L_2} = 0,   lim_{j→∞} h_j = h.   (1.5.30)

It follows from (1.3.11), (1.5.28) and (1.5.29) that

lim_{j→∞} max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ) − φ(x, λ)| = 0.

On the other hand, according to Lemma 1.5.3 and (1.5.30),

lim_{j→∞} max_{0≤x≤π} max_{|λ|≤r} |φ_j(x, λ) − ψ(x, λ)| = 0.

Consequently, φ(x, λ) ≡ ψ(x, λ), and Lemma 1.5.8 is proved. □
Lemma 1.5.9. The following relation holds:

H(x, t) = F(x, t) + ∫_0^t G(t, u)F(x, u) du,   0 ≤ t ≤ x,   (1.5.31)

where H(x, t) is defined in (1.5.12).

Proof. 1) First we assume that a(x) ∈ W_2^2(0, 2π). Differentiating (1.5.12) twice, we calculate

−ρ sin ρx = φ'(x, λ) + H(x, x)φ(x, λ) + ∫_0^x H_x(x, t)φ(t, λ) dt,   (1.5.32)

−ρ² cos ρx = φ''(x, λ) + H(x, x)φ'(x, λ)
    + ( dH(x, x)/dx + (∂H(x, t)/∂x)|_{t=x} )φ(x, λ) + ∫_0^x H_xx(x, t)φ(t, λ) dt.   (1.5.33)

On the other hand, it follows from (1.5.12) and (1.5.17) that

−ρ² cos ρx = φ''(x, λ) − q(x)φ(x, λ) + ∫_0^x H(x, t)( φ''(t, λ) − q(t)φ(t, λ) ) dt.

Integrating by parts twice and using (1.5.18), we infer

−ρ² cos ρx = φ''(x, λ) + H(x, x)φ'(x, λ) − ( (∂H(x, t)/∂t)|_{t=x} + q(x) )φ(x, λ)
    + ( (∂H(x, t)/∂t)|_{t=0} − hH(x, 0) ) + ∫_0^x ( H_tt(x, t) − q(t)H(x, t) )φ(t, λ) dt.

Together with (1.5.33) and

dH(x, x)/dx = ( ∂H(x, t)/∂x + ∂H(x, t)/∂t )|_{t=x},

this yields

b_0(x) + b_1(x)φ(x, λ) + ∫_0^x b(x, t)φ(t, λ) dt = 0,   (1.5.34)

where

b_0(x) := −( (∂H(x, t)/∂t)|_{t=0} − hH(x, 0) ),   b_1(x) := 2 dH(x, x)/dx + q(x),
b(x, t) := H_xx(x, t) − H_tt(x, t) + q(t)H(x, t).   (1.5.35)

Substituting (1.3.11) into (1.5.34) we obtain

b_0(x) + b_1(x) cos ρx + ∫_0^x B(x, t) cos ρt dt = 0,   (1.5.36)

where

B(x, t) = b(x, t) + b_1(x)G(x, t) + ∫_t^x b(x, s)G(s, t) ds.   (1.5.37)

It follows from (1.5.36) for ρ = (n + 1/2)π/x that

b_0(x) + ∫_0^x B(x, t) cos( (n + 1/2)πt/x ) dt = 0.

By the Riemann-Lebesgue lemma, the integral here tends to 0 as n → ∞; consequently b_0(x) = 0. Further, taking ρ = 2πn/x in (1.5.36), we get

b_1(x) + ∫_0^x B(x, t) cos(2πnt/x) dt = 0,

therefore, similarly, b_1(x) = 0. Hence (1.5.36) takes the form

∫_0^x B(x, t) cos ρt dt = 0,   ρ ∈ ℂ,

and consequently B(x, t) = 0. Therefore, (1.5.37) implies

b(x, t) + ∫_t^x b(x, s)G(s, t) ds = 0.

From this Volterra equation it follows that b(x, t) = 0. Taking x = 0 in (1.5.32), we get

H(0, 0) = −h.   (1.5.38)

Since b_0(x) = b_1(x) = b(x, t) = 0, we conclude from (1.5.35) and (1.5.38) that the function H(x, t) solves the boundary value problem

H_xx(x, t) − H_tt(x, t) + q(t)H(x, t) = 0,   0 ≤ t ≤ x,
H(x, x) = −h − (1/2) ∫_0^x q(t) dt,
(∂H(x, t)/∂t)|_{t=0} − hH(x, 0) = 0.   (1.5.39)

The inverse assertion is also valid, namely: if a function H(x, t) satisfies (1.5.39), then (1.5.12) holds.
Indeed, denote

w(x, λ) := φ(x, λ) + ∫_0^x H(x, t)φ(t, λ) dt.

By similar arguments as above one can calculate

w''(x, λ) + λw(x, λ) = ( 2 dH(x, x)/dx + q(x) )φ(x, λ) − ( (∂H(x, t)/∂t)|_{t=0} − hH(x, 0) )
    + ∫_0^x ( H_xx(x, t) − H_tt(x, t) + q(t)H(x, t) )φ(t, λ) dt.

In view of (1.5.39) we get w''(x, λ) + λw(x, λ) = 0. Clearly, w(0, λ) = 1, w'(0, λ) = 0. Hence w(x, λ) = cos ρx, i.e. (1.5.12) holds.
Denote

H̃(x, t) := F(x, t) + ∫_0^t G(t, u)F(x, u) du.   (1.5.40)

Let us show that the function H̃(x, t) satisfies (1.5.39).
(i) Differentiating (1.5.40) with respect to t and then taking t = 0, we get

(∂H̃(x, t)/∂t)|_{t=0} = G(0, 0)F(x, 0) = hF(x, 0).

Since H̃(x, 0) = F(x, 0), this yields

(∂H̃(x, t)/∂t)|_{t=0} − hH̃(x, 0) = 0.

(ii) It follows from (1.5.11) and (1.5.40) that

H̃(x, x) = F(x, x) + ∫_0^x G(x, u)F(x, u) du = −G(x, x),

i.e. according to (1.5.13),

H̃(x, x) = −h − (1/2) ∫_0^x q(t) dt.

(iii) Using (1.5.40) again, we calculate

H̃_tt(x, t) = F_tt(x, t) + (dG(t, t)/dt)F(x, t) + G(t, t)F_t(x, t)
    + (∂G(t, u)/∂t)|_{u=t} F(x, t) + ∫_0^t G_tt(t, u)F(x, u) du,

H̃_xx(x, t) = F_xx(x, t) + ∫_0^t G(t, u)F_xx(x, u) du
    = F_xx(x, t) + ∫_0^t G(t, u)F_uu(x, u) du = F_xx(x, t) + G(t, t)F_t(x, t)
    − (∂G(t, u)/∂u)|_{u=t} F(x, t) + ∫_0^t G_uu(t, u)F(x, u) du.

Consequently,

H̃_xx(x, t) − H̃_tt(x, t) + q(t)H̃(x, t) = ( q(t) − 2 dG(t, t)/dt )F(x, t)
    − ∫_0^t ( G_tt(t, u) − G_uu(t, u) − q(t)G(t, u) )F(x, u) du.

In view of (1.5.13) and (1.5.25) this yields

H̃_xx(x, t) − H̃_tt(x, t) + q(t)H̃(x, t) = 0.

Since H̃(x, t) satisfies (1.5.39), then, as was shown above,

cos ρx = φ(x, λ) + ∫_0^x H̃(x, t)φ(t, λ) dt.

Comparing this relation with (1.5.12), we conclude that

∫_0^x ( H̃(x, t) − H(x, t) )φ(t, λ) dt = 0 for all λ,

i.e. H̃(x, t) = H(x, t).
2) Let us now consider the general case when (1.5.8)-(1.5.9) hold. Then a(x) ∈ W_2^1(0, 2π). Repeating the arguments of the proof of Lemma 1.5.8, we construct numbers {ρ_{n,(j)}, α_{n,(j)}}_{n≥0}, j ≥ 1, and functions a_j(x) ∈ W_2^2(0, 2π), j ≥ 1, such that

lim_{j→∞} ‖a_j(x) − a(x)‖_{W_2^1} = 0.

Then

lim_{j→∞} max_{0≤t≤x≤π} |F_j(x, t) − F(x, t)| = 0,

and (1.5.29) is valid. Similarly,

lim_{j→∞} max_{0≤t≤x≤π} |H_j(x, t) − H(x, t)| = 0.

It was proved above that

H_j(x, t) = F_j(x, t) + ∫_0^t G_j(t, u)F_j(x, u) du.

As j → ∞, we arrive at (1.5.31), and Lemma 1.5.9 is proved. □
Lemma 1.5.10. For each function g(x) ∈ L_2(0, π),

∫_0^π g²(x) dx = Σ_{n=0}^∞ (1/α_n) ( ∫_0^π g(t)φ(t, λ_n) dt )².   (1.5.41)

Proof. Denote

Q(λ) := ∫_0^π g(t)φ(t, λ) dt.

It follows from (1.3.11) that

Q(λ) = ∫_0^π h(t) cos ρt dt,

where

h(t) = g(t) + ∫_t^π G(s, t)g(s) ds.   (1.5.42)

Similarly, in view of (1.5.12),

g(t) = h(t) + ∫_t^π H(s, t)h(s) ds.   (1.5.43)

Using (1.5.42) we calculate

∫_0^π h(t)F(x, t) dt = ∫_0^π ( g(t) + ∫_t^π G(u, t)g(u) du )F(x, t) dt
    = ∫_0^π g(t) ( F(x, t) + ∫_0^t G(t, u)F(x, u) du ) dt
    = ∫_0^x g(t) ( F(x, t) + ∫_0^t G(t, u)F(x, u) du ) dt + ∫_x^π g(t) ( F(x, t) + ∫_0^t G(t, u)F(x, u) du ) dt.

From this, by virtue of (1.5.31) and (1.5.11), we derive

∫_0^π h(t)F(x, t) dt = ∫_0^x g(t)H(x, t) dt − ∫_x^π g(t)G(t, x) dt.   (1.5.44)

It follows from (1.5.10) and Parseval's equality that

∫_0^π h²(t) dt + ∫_0^π ∫_0^π h(x)h(t)F(x, t) dx dt
    = ∫_0^π h²(t) dt + Σ_{n=0}^∞ ( (1/α_n)( ∫_0^π h(t) cos ρ_n t dt )² − (1/α_n^0)( ∫_0^π h(t) cos nt dt )² )
    = Σ_{n=0}^∞ Q²(n²)/α_n^0 + Σ_{n=0}^∞ ( Q²(λ_n)/α_n − Q²(n²)/α_n^0 ) = Σ_{n=0}^∞ Q²(λ_n)/α_n.

Using (1.5.44) we get

Σ_{n=0}^∞ Q²(λ_n)/α_n = ∫_0^π h²(t) dt + ∫_0^π h(x) ( ∫_0^x g(t)H(x, t) dt ) dx − ∫_0^π h(x) ( ∫_x^π g(t)G(t, x) dt ) dx
    = ∫_0^π h²(t) dt + ∫_0^π g(t) ( ∫_t^π H(x, t)h(x) dx ) dt − ∫_0^π h(x) ( ∫_x^π g(t)G(t, x) dt ) dx.

In view of (1.5.42) and (1.5.43) we infer

Σ_{n=0}^∞ Q²(λ_n)/α_n = ∫_0^π h²(t) dt + ∫_0^π g(t)(g(t) − h(t)) dt − ∫_0^π h(x)(h(x) − g(x)) dx = ∫_0^π g²(t) dt,

i.e. (1.5.41) is valid. □
Corollary 1.5.1. For arbitrary functions f(x), g(x) ∈ L_2(0, π),

∫_0^π f(x)g(x) dx = Σ_{n=0}^∞ (1/α_n) ∫_0^π f(t)φ(t, λ_n) dt ∫_0^π g(t)φ(t, λ_n) dt.   (1.5.45)

Indeed, (1.5.45) follows from (1.5.41) applied to the function f + g.
Lemma 1.5.11. The following relation holds:

∫_0^π φ(t, λ_k)φ(t, λ_n) dt = 0 for n ≠ k,  and = α_n for n = k.   (1.5.46)

Proof. 1) Let f(x) ∈ W_2^2[0, π]. Consider the series

f*(x) = Σ_{n=0}^∞ c_n φ(x, λ_n),   (1.5.47)

where

c_n := (1/α_n) ∫_0^π f(x)φ(x, λ_n) dx.   (1.5.48)

Using Lemma 1.5.8 and integration by parts, we calculate

c_n = (1/(λ_n α_n)) ∫_0^π f(x)( −φ''(x, λ_n) + q(x)φ(x, λ_n) ) dx
    = (1/(λ_n α_n)) ( hf(0) − f'(0) + φ(π, λ_n)f'(π) − φ'(π, λ_n)f(π) + ∫_0^π φ(x, λ_n)( −f''(x) + q(x)f(x) ) dx ).

Applying the asymptotic formulae (1.5.8) and (1.1.15), one can check that for n → ∞,

c_n = O(1/n²),   φ(x, λ_n) = O(1),

uniformly for x ∈ [0, π]. Consequently, the series (1.5.47) converges absolutely and uniformly on [0, π]. According to (1.5.45) and (1.5.48),

∫_0^π f(x)g(x) dx = Σ_{n=0}^∞ c_n ∫_0^π g(t)φ(t, λ_n) dt
    = ∫_0^π g(t) Σ_{n=0}^∞ c_n φ(t, λ_n) dt = ∫_0^π g(t)f*(t) dt.

Since g(x) is arbitrary, we obtain f*(x) = f(x), i.e.

f(x) = Σ_{n=0}^∞ c_n φ(x, λ_n).   (1.5.49)

2) Fix k ≥ 0, and take f(x) = φ(x, λ_k). Then, by virtue of (1.5.49),

φ(x, λ_k) = Σ_{n=0}^∞ c_{nk} φ(x, λ_n),   c_{nk} = (1/α_n) ∫_0^π φ(x, λ_k)φ(x, λ_n) dx.

Further, the system {cos ρ_n x}_{n≥0} is minimal in L_2(0, π) (see Proposition 1.8.6 in Section 1.8), and consequently, in view of (1.3.11), the system {φ(x, λ_n)}_{n≥0} is also minimal in L_2(0, π). Hence c_{nk} = δ_{nk} (δ_{nk} is the Kronecker delta), and we arrive at (1.5.46). □
Lemma 1.5.12. For all n, m ≥ 0,

φ'(π, λ_n)/φ(π, λ_n) = φ'(π, λ_m)/φ(π, λ_m).   (1.5.50)

Proof. It follows from (1.1.34) that

(λ_n − λ_m) ∫_0^π φ(x, λ_n)φ(x, λ_m) dx = ( φ(x, λ_n)φ'(x, λ_m) − φ'(x, λ_n)φ(x, λ_m) )|_0^π.

Taking (1.5.46) into account, we get for n ≠ m

φ(π, λ_n)φ'(π, λ_m) − φ'(π, λ_n)φ(π, λ_m) = 0.   (1.5.51)

Clearly, φ(π, λ_n) ≠ 0 for all n ≥ 0. Indeed, if we suppose that φ(π, λ_m) = 0 for a certain m, then φ'(π, λ_m) ≠ 0, and in view of (1.5.51), φ(π, λ_n) = 0 for all n, which is impossible since

φ(π, λ_n) = (−1)^n + O(1/n).

Since φ(π, λ_n) ≠ 0 for all n ≥ 0, (1.5.50) follows from (1.5.51). □
Denote

H̃ = −φ'(π, λ_n)/φ(π, λ_n).

Notice that (1.5.50) yields that H̃ does not depend on n. Hence

φ'(π, λ_n) + H̃φ(π, λ_n) = 0,   n ≥ 0.

Together with Lemma 1.5.8 and (1.5.46), this gives that the numbers {λ_n, α_n}_{n≥0} are the spectral data for the constructed boundary value problem L(q(x), h, H̃). Clearly, H̃ = H, where H is defined by (1.5.14). Thus, Theorem 1.5.2 is proved. □
Example 1.5.1. Let λ_n = n² (n ≥ 0), α_n = π/2 (n ≥ 1), and let α_0 > 0 be an arbitrary positive number. Denote a := 1/α_0 − 1/π. Let us use Algorithm 1.5.1:
1) By (1.5.10), F(x, t) ≡ a.
2) Solving equation (1.5.11), we easily get

G(x, t) = −a/(1 + ax).

3) By (1.5.13)-(1.5.14),

q(x) = 2a²/(1 + ax)²,   h = −a,   H = a/(1 + aπ) = aα_0/π.

By (1.3.11),

φ(x, λ) = cos ρx − (a/(1 + ax)) (sin ρx)/ρ.
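The output of the example can be verified directly. The quick numerical check below is our own sketch (the sample point and step size are arbitrary choices); it confirms that φ from step 3) satisfies −φ'' + qφ = λφ up to finite-difference error:

```python
import numpy as np

# Example 1.5.1 output (alpha_0 = 1, so a = 1 - 1/pi): check that
# phi(x, lam) = cos(rho x) - a/(1 + a x) * sin(rho x)/rho,  rho = sqrt(lam),
# satisfies -phi'' + q phi = lam * phi with q(x) = 2 a^2 / (1 + a x)^2.
a = 1.0 - 1.0 / np.pi
q = lambda x: 2 * a**2 / (1 + a * x)**2

def phi(x, lam):
    rho = np.sqrt(lam)
    return np.cos(rho * x) - a / (1 + a * x) * np.sin(rho * x) / rho

lam, x, h = 7.3, 1.1, 1e-4
# central second difference approximation of phi''
phi_xx = (phi(x + h, lam) - 2 * phi(x, lam) + phi(x - h, lam)) / h**2
residual = -phi_xx + q(x) * phi(x, lam) - lam * phi(x, lam)
print(abs(residual))  # small (finite-difference error only)
```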
Remark 1.5.1. Analogous results are also valid for other types of separated boundary conditions, i.e. for the boundary value problems L_1, L^0 and L_1^0. In particular, the following theorem holds.

Theorem 1.5.3. For real numbers {μ_n, α_{n1}}_{n≥0} to be the spectral data for a certain boundary value problem L_1(q(x), h) with q(x) ∈ L_2(0, π), it is necessary and sufficient that μ_n ≠ μ_m (n ≠ m), α_{n1} > 0, and that (1.1.29)-(1.1.30) hold.
1.5.3. Recovery of differential operators from two spectra. Let {λ_n}_{n≥0} and {μ_n}_{n≥0} be the eigenvalues of L and L_1, respectively. Then the asymptotics (1.1.13) and (1.1.29) hold, and we have the representations (1.1.26) and (1.1.28) for the characteristic functions Δ(λ) and d(λ), respectively.

Theorem 1.5.4. For real numbers {λ_n, μ_n}_{n≥0} to be the spectra of certain boundary value problems L and L_1 with q(x) ∈ L_2(0, π), it is necessary and sufficient that (1.1.13), (1.1.29) and (1.1.33) hold. The function q(x) and the numbers h and H can be constructed by the following algorithm:
(i) From the given numbers {λ_n, μ_n}_{n≥0} calculate the numbers α_n by (1.1.35), where Δ(λ) and d(λ) are constructed by (1.1.26) and (1.1.28).
(ii) From the numbers {λ_n, α_n}_{n≥0} construct q(x), h and H by Algorithm 1.5.1.
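Among the conditions of Theorem 1.5.4, condition (1.1.33) is, as we read it, the interlacing of the two spectra; that reading is an assumption of the following sketch. For two given finite spectral fragments it can be checked directly (the model data λ_n = n², μ_n = (n + 1/2)², corresponding to q = 0, h = H = 0, are used purely as a hypothetical illustration):

```python
def interlaced(lams, mus):
    """Check the interlacing property lam_0 < mu_0 < lam_1 < mu_1 < ...
    (our reading of condition (1.1.33); an assumption of this sketch)."""
    return all(lams[n] < mus[n] < lams[n + 1] for n in range(len(lams) - 1))

# Model pair: lam_n = n^2 (spectrum of L with q = 0, h = H = 0) and
# mu_n = (n + 1/2)^2 (spectrum of L_1, i.e. with y(pi) = 0) — illustration only.
lams = [n**2 for n in range(20)]
mus = [(n + 0.5)**2 for n in range(20)]
print(interlaced(lams, mus))  # True
```

Since n² < (n + 1/2)² < (n + 1)², the model spectra interlace, as the check confirms.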
The necessity part of Theorem 1.5.4 was proved above; here we prove the sufficiency. Let real numbers {λ_n, μ_n}_{n≥0} be given satisfying the conditions of Theorem 1.5.4. We construct the functions Δ(λ) and d(λ) by (1.1.26) and (1.1.28), and calculate the numbers α_n by (1.1.35).
Our plan is to use Theorem 1.5.2. For this purpose we should obtain the asymptotics of the numbers α_n. This seems to be difficult because the functions Δ(λ) and d(λ) are by construction infinite products. But fortunately, for calculating the asymptotics of α_n one can also use Theorem 1.5.2, as an auxiliary assertion. Indeed, by virtue of Theorem 1.5.2, there exists a boundary value problem L̃ = L(q̃(x), h̃, H̃) with q̃(x) ∈ L_2(0, π) (not unique) such that {λ_n}_{n≥0} are the eigenvalues of L̃. Then Δ(λ) is the characteristic function of L̃, and consequently, according to (1.1.22),

Δ̇(λ_n) = (−1)^{n+1} (π/2) + κ_n/n,   {κ_n} ∈ ℓ_2.

Moreover, sign Δ̇(λ_n) = (−1)^{n+1}. Similarly, using Theorem 1.5.3, one can prove that

d(λ_n) = (−1)^n + κ_n/n,   {κ_n} ∈ ℓ_2.

Moreover, taking (1.1.33) into account, we get sign d(λ_n) = (−1)^n. Hence, by (1.1.35),

α_n > 0,   α_n = π/2 + κ_{n1}/n,   {κ_{n1}} ∈ ℓ_2.

Then, by Theorem 1.5.2, there exists a boundary value problem L = L(q(x), h, H) with q(x) ∈ L_2(0, π) such that {λ_n, α_n}_{n≥0} are the spectral data of L. Denote by {μ̃_n}_{n≥0} the eigenvalues of the boundary value problem L_1(q(x), h). It remains to show that μ̃_n = μ_n for all n ≥ 0.
Let d̃(λ) be the characteristic function of L_1. Then, by virtue of (1.1.35), α_n = −Δ̇(λ_n) d̃(λ_n). But, by construction, α_n = −Δ̇(λ_n) d(λ_n), and we conclude that

d̃(λ_n) = d(λ_n),   n ≥ 0.

Consequently, the function

Z(λ) := ( d(λ) − d̃(λ) ) / Δ(λ)

is entire in λ (after extending it continuously to the removable singularities). On the other hand, by virtue of (1.1.9),

|d(λ)| ≤ C exp(|τ|π),   |d̃(λ)| ≤ C exp(|τ|π).

Taking (1.1.18) into account, we get for a fixed δ > 0,

|Z(λ)| ≤ C/|ρ|,   λ ∈ G_δ, |λ| sufficiently large.

Using the maximum principle [con1, p.128] and Liouville's theorem [con1, p.77], we infer Z(λ) ≡ 0, i.e. d(λ) ≡ d̃(λ), and consequently μ̃_n = μ_n for all n ≥ 0. □
1.6. THE METHOD OF SPECTRAL MAPPINGS

The method of spectral mappings presented in this section is an effective tool for investigating a wide class of inverse problems not only for Sturm-Liouville operators, but also for other, more complicated classes of operators, such as differential operators of arbitrary orders, differential operators with singularities and/or turning points, pencils of operators and others. In the method of spectral mappings we use ideas of the contour integral method. Moreover, we can consider the method of spectral mappings as a variant of the contour integral method which is adapted specially for solving inverse problems. In this section we apply the method of spectral mappings to the solution of the inverse problem for the Sturm-Liouville operator on a finite interval. The results obtained in Section 1.6 by the contour integral method are similar to the results of Section 1.5 obtained by the transformation operator method. However, the contour integral method is a more universal and promising tool for the solution of various classes of inverse spectral problems.
In the method of spectral mappings we start from Cauchy's integral formula for analytic functions [con1, p.84]. We apply this theorem in the complex plane of the spectral parameter to specially constructed analytic functions having singularities connected with the given spectral characteristics (see the proof of Lemma 1.6.3). This allows us to reduce the inverse problem to the so-called main equation, which is a linear equation in a corresponding Banach space of sequences. In Subsection 1.6.1 we give a derivation of the main equation and prove its unique solvability. Using the solution of the main equation, we provide an algorithm for the solution of the inverse problem. In Subsection 1.6.2 we give necessary and sufficient conditions for the solvability of the inverse problem by the method of spectral mappings. In Subsection 1.6.3 the inverse problem for the non-selfadjoint Sturm-Liouville operator is studied.
1.6.1. The main equation of the inverse problem. Consider the boundary value problem L = L(q(x), h, H) of the form (1.1.1)-(1.1.2) with real q(x) ∈ L_2(0, π), h and H. Let {λ_n, α_n}_{n≥0} be the spectral data of L, ρ_n = √λ_n. Then (1.5.8)-(1.5.9) are valid. In this section we shall solve the inverse problem of recovering L from the given spectral data by using ideas of the contour integral method.
We note that if y(x, λ) and z(x, μ) are solutions of the equations ℓy = λy and ℓz = μz respectively, then

(d/dx)⟨y, z⟩ = (λ − μ) yz,   ⟨y, z⟩ := yz' − y'z.   (1.6.1)

Denote

D(x, λ, μ) := ⟨φ(x, λ), φ(x, μ)⟩ / (λ − μ) = ∫_0^x φ(t, λ)φ(t, μ) dt.   (1.6.2)

The last identity follows from (1.6.1).
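The identity (1.6.2) is easy to test numerically. In the sketch below (our own illustration; the model q = 0, h = 0, for which φ(x, λ) = cos(√λ x), and the sample values of λ, μ, x are assumptions) both sides are computed directly:

```python
import numpy as np

# For q = 0, h = 0:  phi(x, lam) = cos(sqrt(lam) x).  Identity (1.6.2):
#   <phi(x,lam), phi(x,mu)> / (lam - mu) = int_0^x phi(t,lam) phi(t,mu) dt,
# where <y, z> = y z' - y' z.
lam, mu, x = 4.0, 9.0, 1.3
rl, rm = np.sqrt(lam), np.sqrt(mu)
phi = lambda t, r: np.cos(r * t)
dphi = lambda t, r: -r * np.sin(r * t)

lhs = (phi(x, rl) * dphi(x, rm) - dphi(x, rl) * phi(x, rm)) / (lam - mu)

t = np.linspace(0.0, x, 20001)
f = phi(t, rl) * phi(t, rm)
rhs = float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2)  # trapezoidal rule
print(abs(lhs - rhs))  # small (quadrature error only)
```

The same check works for any potential once φ is computed by an ODE solver, since (1.6.1) only uses the differential equation.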
Let us choose a model boundary value problem L̃ = L(q̃(x), h̃, H̃) with real q̃(x) ∈ L_2(0, π), h̃ and H̃ such that ω = ω̃ (take, for example, q̃(x) ≡ 0, h̃ = 0, H̃ = ω, or h̃ = H̃ = 0, q̃(x) ≡ 2ω/π). Let {λ̃_n, α̃_n}_{n≥0} be the spectral data of L̃.

Remark 1.6.1. Without loss of generality one can assume that ω = 0. This can be achieved by the shift of the spectrum λ_n → λ_n + C, since if {λ_n} is the spectrum of L(q(x), h, H), then {λ_n + C} is the spectrum of L(q(x) + C, h, H). In this case ω̃ = 0, and one can take q̃(x) ≡ 0, h̃ = H̃ = 0. However, we will consider the general case when ω is arbitrary.
Let

ξ_n := |ρ_n − ρ̃_n| + |α_n − α̃_n|.

Since ω = ω̃, it follows from (1.5.8) and the analogous formulae for ρ̃_n and α̃_n that

Ω := ( Σ_{n=0}^∞ ((n + 1)ξ_n)² )^{1/2} < ∞;  in particular, Σ_{n=0}^∞ ξ_n < ∞.   (1.6.3)
Denote in this section

λ_{n0} = λ_n,   λ_{n1} = λ̃_n,   α_{n0} = α_n,   α_{n1} = α̃_n,
φ_{ni}(x) = φ(x, λ_{ni}),   φ̃_{ni}(x) = φ̃(x, λ_{ni}),

P_{ni,kj}(x) = (1/α_{kj}) D(x, λ_{ni}, λ_{kj}),   P̃_{ni,kj}(x) = (1/α_{kj}) D̃(x, λ_{ni}, λ_{kj}),   i, j ∈ {0, 1},  n, k ≥ 0.   (1.6.4)

Then, according to (1.6.2),

P_{ni,kj}(x) = ⟨φ_{ni}(x), φ_{kj}(x)⟩ / (α_{kj}(λ_{ni} − λ_{kj})) = (1/α_{kj}) ∫_0^x φ_{ni}(t)φ_{kj}(t) dt,

P̃_{ni,kj}(x) = ⟨φ̃_{ni}(x), φ̃_{kj}(x)⟩ / (α_{kj}(λ_{ni} − λ_{kj})) = (1/α_{kj}) ∫_0^x φ̃_{ni}(t)φ̃_{kj}(t) dt.

Clearly,

P'_{ni,kj}(x) = (1/α_{kj}) φ_{ni}(x)φ_{kj}(x),   P̃'_{ni,kj}(x) = (1/α_{kj}) φ̃_{ni}(x)φ̃_{kj}(x).   (1.6.5)
Below we shall use the following version of Schwarz's lemma (see [con1, p.130]):

Lemma 1.6.1. Let f(λ) be an analytic function for |λ − λ_0| < a such that f(λ_0) = 0 and |f(λ)| ≤ A for |λ − λ_0| < a. Then

|f(λ)| ≤ (A/a)|λ − λ_0|  for |λ − λ_0| < a.

In order to obtain the solution of the inverse problem we need several auxiliary propositions.
Lemma 1.6.2. The following estimates are valid for x ∈ [0, π], n, k ≥ 0, i, j, ν = 0, 1:

|φ_{ni}^{(ν)}(x)| ≤ C(n + 1)^ν,   |φ_{n0}^{(ν)}(x) − φ_{n1}^{(ν)}(x)| ≤ Cξ_n (n + 1)^ν,   (1.6.6)

|P_{ni,kj}(x)| ≤ C/(|n − k| + 1),   |P_{ni,kj}^{(ν+1)}(x)| ≤ C(k + n + 1)^ν,
|P_{ni,k0}(x) − P_{ni,k1}(x)| ≤ Cξ_k/(|n − k| + 1),   |P_{n0,kj}(x) − P_{n1,kj}(x)| ≤ Cξ_n/(|n − k| + 1),
|P_{n0,k0}(x) − P_{n1,k0}(x) − P_{n0,k1}(x) + P_{n1,k1}(x)| ≤ Cξ_nξ_k/(|n − k| + 1).   (1.6.7)

The analogous estimates are also valid for φ̃_{ni}(x), P̃_{ni,kj}(x).
Proof. It follows from (1.1.9) and (1.5.8) that

|φ^{(ν)}(x, λ_{ni})| ≤ C(n + 1)^ν.

Moreover, for a fixed a > 0,

|φ^{(ν)}(x, λ)| ≤ C(n + 1)^ν,   |ρ − ρ_{n1}| ≤ a.

Applying Schwarz's lemma in the ρ-plane to the function f(ρ) := φ^{(ν)}(x, λ) − φ^{(ν)}(x, λ_{n1}) with fixed ν, n, x and a, we get

|φ^{(ν)}(x, λ) − φ^{(ν)}(x, λ_{n1})| ≤ C(n + 1)^ν |ρ − ρ_{n1}|,   |ρ − ρ_{n1}| ≤ a.

Consequently,

|φ_{n0}^{(ν)}(x) − φ_{n1}^{(ν)}(x)| ≤ C(n + 1)^ν |ρ_{n0} − ρ_{n1}|,

and (1.6.6) is proved.
Let us show that

|D(x, λ, λ_{kj})| ≤ C exp(|τ|x)/(|σ − k| + 1),   λ = ρ²,  σ := Re ρ ≥ 0,  τ := Im ρ,  k ≥ 0.   (1.6.8)

For definiteness, let σ = Re ρ ≥ 0. Take a fixed δ_0 > 0. For |ρ − ρ_{kj}| ≥ δ_0 we have, by virtue of (1.6.2), (1.1.9) and (1.5.8),

|D(x, λ, λ_{kj})| = |⟨φ(x, λ), φ(x, λ_{kj})⟩|/|λ − λ_{kj}| ≤ C exp(|τ|x) (|ρ| + |ρ_{kj}|)/|ρ² − ρ_{kj}²|.

Since

(|ρ| + |ρ_{kj}|)/|ρ + ρ_{kj}| ≤ (σ + |τ| + |ρ_{kj}|)/√(σ² + τ² + ρ_{kj}²) ≤ 2

(it is used here that (a + b)² ≤ 2(a² + b²) for all real a, b), we get

|D(x, λ, λ_{kj})| ≤ C exp(|τ|x)/|ρ − ρ_{kj}|.

For |ρ − ρ_{kj}| ≥ δ_0,

(|σ − k| + 1)/|ρ − ρ_{kj}| ≤ 1 + (|ρ_{kj} − k| + 1)/|ρ − ρ_{kj}| ≤ C,

and consequently

1/|ρ − ρ_{kj}| ≤ C/(|σ − k| + 1).

This yields (1.6.8) for |ρ − ρ_{kj}| ≥ δ_0. For |ρ − ρ_{kj}| ≤ δ_0,

|D(x, λ, λ_{kj})| = | ∫_0^x φ(t, λ)φ(t, λ_{kj}) dt | ≤ C exp(|τ|x),

i.e. (1.6.8) is also valid for |ρ − ρ_{kj}| ≤ δ_0. Similarly one can show that

|D(x, λ, μ)| ≤ C exp(|τ|x)/(|σ − θ| + 1),   μ = θ²,  |Im θ| ≤ C_0,  Re θ ≥ 0,  Re ρ ≥ 0.

Using Schwarz's lemma we obtain

|D(x, λ, λ_{k1}) − D(x, λ, λ_{k0})| ≤ Cξ_k exp(|τ|x)/(|σ − k| + 1),   Re ρ ≥ 0,  k ≥ 0.   (1.6.9)

In particular, this yields

|D(x, λ_{ni}, λ_{k1}) − D(x, λ_{ni}, λ_{k0})| ≤ Cξ_k/(|n − k| + 1).

Symmetrically,

|D(x, λ_{n1}, λ_{kj}) − D(x, λ_{n0}, λ_{kj})| ≤ Cξ_n/(|n − k| + 1).

Applying Schwarz's lemma to the function

Q_k(x, λ) := D(x, λ, λ_{k1}) − D(x, λ, λ_{k0})

for fixed k and x, we get

|D(x, λ_{n0}, λ_{k0}) − D(x, λ_{n1}, λ_{k0}) − D(x, λ_{n0}, λ_{k1}) + D(x, λ_{n1}, λ_{k1})| ≤ Cξ_nξ_k/(|n − k| + 1).

These estimates together with (1.6.4), (1.6.5) and (1.5.8) imply (1.6.7). □
Lemma 1.6.3. The following relations hold:

φ̃(x, λ) = φ(x, λ) + Σ_{k=0}^∞ ( (⟨φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) φ_{k0}(x) − (⟨φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) φ_{k1}(x) ),   (1.6.10)

⟨φ(x, λ), φ(x, μ)⟩/(λ − μ) − ⟨φ̃(x, λ), φ̃(x, μ)⟩/(λ − μ)
    + Σ_{k=0}^∞ ( (⟨φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) (⟨φ_{k0}(x), φ(x, μ)⟩/(λ_{k0} − μ))
    − (⟨φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) (⟨φ_{k1}(x), φ(x, μ)⟩/(λ_{k1} − μ)) ) = 0.   (1.6.11)

Both series converge absolutely and uniformly with respect to x ∈ [0, π] and λ, μ on compact sets.

Proof. 1) Denote λ* = min_{n,i} λ_{ni} and take a fixed δ > 0. In the λ-plane we consider closed contours γ_N (with counterclockwise circuit) of the form γ_N = γ_N^+ ∪ γ^δ ∪ γ_N^− ∪ γ_N', where

γ_N^± = { λ : Im λ = ±δ, Re λ ≥ λ* − δ, |λ| ≤ (N + 1/2)² },
γ^δ = { λ : λ − λ* + δ = δ exp(iα), α ∈ [π/2, 3π/2] },
γ_N' = Γ_N ∩ { λ : |Im λ| ≤ δ, Re λ > 0 },   Γ_N = { λ : |λ| = (N + 1/2)² }.

[fig. 1.6.1, fig. 1.6.2: the contours γ_N and γ_N^0 in the λ-plane]
Denote γ_N^0 = Γ_N ∪ γ_N^+ ∪ γ^δ ∪ γ_N^− (with clockwise circuit along γ_N^+ ∪ γ^δ ∪ γ_N^−). Let P(x, λ) = [P_{jk}(x, λ)]_{j,k=1,2} be the matrix defined by (1.4.15). It follows from (1.4.16) and (1.4.9) that for each fixed x, the functions P_{jk}(x, λ) are meromorphic in λ with simple poles λ_n and λ̃_n. By Cauchy's integral formula [con1, p.84],

P_{1k}(x, λ) − δ_{1k} = (1/2πi) ∫_{γ_N^0} (P_{1k}(x, μ) − δ_{1k})/(λ − μ) dμ,   k = 1, 2,

where λ ∈ int γ_N^0 and δ_{jk} is the Kronecker delta. Hence

P_{1k}(x, λ) = δ_{1k} + (1/2πi) ∫_{γ_N} P_{1k}(x, μ)/(λ − μ) dμ − (1/2πi) ∫_{Γ_N} (P_{1k}(x, μ) − δ_{1k})/(λ − μ) dμ,

where γ_N is used with counterclockwise circuit. Substituting into (1.4.17) we obtain

φ(x, λ) = φ̃(x, λ) + (1/2πi) ∫_{γ_N} ( φ̃(x, λ)P_{11}(x, μ) + φ̃'(x, λ)P_{12}(x, μ) )/(λ − μ) dμ + ε_N(x, λ),

where

ε_N(x, λ) = −(1/2πi) ∫_{Γ_N} ( φ̃(x, λ)(P_{11}(x, μ) − 1) + φ̃'(x, λ)P_{12}(x, μ) )/(λ − μ) dμ.

By virtue of (1.4.18),

lim_{N→∞} ε_N(x, λ) = 0   (1.6.12)

uniformly with respect to x ∈ [0, π] and λ on compact sets. Taking (1.4.16) into account, we calculate

φ(x, λ) = φ̃(x, λ) + (1/2πi) ∫_{γ_N} [ φ̃(x, λ)( φ(x, μ)Φ̃'(x, μ) − Φ(x, μ)φ̃'(x, μ) )
    + φ̃'(x, λ)( Φ(x, μ)φ̃(x, μ) − φ(x, μ)Φ̃(x, μ) ) ] dμ/(λ − μ) + ε_N(x, λ).

In view of (1.4.9), this yields

φ(x, λ) = φ̃(x, λ) + (1/2πi) ∫_{γ_N} (⟨φ̃(x, λ), φ̃(x, μ)⟩/(μ − λ)) M̂(μ)φ(x, μ) dμ + ε_N(x, λ),   (1.6.13)

where M̂(μ) = M(μ) − M̃(μ), since the terms with S(x, μ) vanish by Cauchy's theorem.
It follows from (1.4.14) that the residues of the integrand at the simple poles λ_{k0} and λ_{k1} of M̂(μ) equal

−(⟨φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) φ_{k0}(x)  and  (⟨φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) φ_{k1}(x),

respectively. Calculating the integral in (1.6.13) by the residue theorem [con1, p.112] and using (1.6.12), we arrive at (1.6.10).
2) Since

(1/(λ − μ)) ( 1/(θ − λ) − 1/(θ − μ) ) = 1/((θ − λ)(θ − μ)),

we have by Cauchy's integral formula

(P_{jk}(x, λ) − P_{jk}(x, μ))/(λ − μ) = (1/2πi) ∫_{γ_N^0} P_{jk}(x, θ)/((θ − λ)(θ − μ)) dθ,   k, j = 1, 2;  λ, μ ∈ int γ_N^0.

Acting in the same way as above and using (1.4.18)-(1.4.19), we obtain

(P_{jk}(x, λ) − P_{jk}(x, μ))/(λ − μ) = (1/2πi) ∫_{γ_N} P_{jk}(x, θ)/((θ − λ)(θ − μ)) dθ + ε_{Njk}(x, λ, μ),   (1.6.14)

where lim_{N→∞} ε_{Njk}(x, λ, μ) = 0, j, k = 1, 2.
From (1.4.16) and (1.4.11) it follows that

P_{11}(x, λ)φ'(x, λ) − P_{21}(x, λ)φ(x, λ) = φ̃'(x, λ),
P_{22}(x, λ)φ(x, λ) − P_{12}(x, λ)φ'(x, λ) = φ̃(x, λ),   (1.6.15)

P(x, θ) (y(x); y'(x))^T = ⟨y(x), Φ̃(x, θ)⟩ (φ(x, θ); φ'(x, θ))^T − ⟨y(x), φ̃(x, θ)⟩ (Φ(x, θ); Φ'(x, θ))^T   (1.6.16)

for any y(x) ∈ C^1[0, π]. Taking (1.6.14) and (1.6.16) into account, we calculate

((P(x, λ) − P(x, μ))/(λ − μ)) (y(x); y'(x))^T
    = (1/2πi) ∫_{γ_N} [ ⟨y(x), Φ̃(x, θ)⟩ (φ(x, θ); φ'(x, θ))^T − ⟨y(x), φ̃(x, θ)⟩ (Φ(x, θ); Φ'(x, θ))^T ] dθ/((θ − λ)(θ − μ))
    + ε_N^0(x, λ, μ),   lim_{N→∞} ε_N^0(x, λ, μ) = 0.   (1.6.17)

According to (1.4.15),

P(x, μ) (φ̃(x, μ); φ̃'(x, μ))^T = (φ(x, μ); φ'(x, μ))^T.

Therefore,

det( P(x, μ)(φ̃(x, μ); φ̃'(x, μ))^T, (φ(x, λ); φ'(x, λ))^T ) = ⟨φ(x, μ), φ(x, λ)⟩.

Furthermore, using (1.6.15) we get

det( P(x, λ)(φ̃(x, μ); φ̃'(x, μ))^T, (φ(x, λ); φ'(x, λ))^T )
    = φ̃(x, μ)( P_{11}(x, λ)φ'(x, λ) − P_{21}(x, λ)φ(x, λ) ) − φ̃'(x, μ)( P_{22}(x, λ)φ(x, λ) − P_{12}(x, λ)φ'(x, λ) )
    = ⟨φ̃(x, μ), φ̃(x, λ)⟩.

Thus,

det( (P(x, λ) − P(x, μ))(φ̃(x, μ); φ̃'(x, μ))^T, (φ(x, λ); φ'(x, λ))^T ) = ⟨φ̃(x, μ), φ̃(x, λ)⟩ − ⟨φ(x, μ), φ(x, λ)⟩.

Consequently, (1.6.17) for y(x) = φ̃(x, μ) yields

⟨φ(x, μ), φ(x, λ)⟩/(λ − μ) − ⟨φ̃(x, μ), φ̃(x, λ)⟩/(λ − μ)
    = (1/2πi) ∫_{γ_N} [ ⟨φ̃(x, μ), Φ̃(x, θ)⟩⟨φ(x, θ), φ(x, λ)⟩ − ⟨φ̃(x, μ), φ̃(x, θ)⟩⟨Φ(x, θ), φ(x, λ)⟩ ] dθ/((θ − λ)(θ − μ))
    + ε_N^1(x, λ, μ),   lim_{N→∞} ε_N^1(x, λ, μ) = 0.

By virtue of (1.4.9), (1.4.14) and the residue theorem, we arrive at (1.6.11). □
Analogously one can obtain the following relation:

Φ̃(x, λ) = Φ(x, λ) + Σ_{k=0}^∞ ( (⟨Φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) φ_{k0}(x) − (⟨Φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) φ_{k1}(x) ).   (1.6.18)

It follows from the definition of P̃_{ni,kj}(x), P_{ni,kj}(x) and (1.6.10)-(1.6.11) that

φ̃_{ni}(x) = φ_{ni}(x) + Σ_{k=0}^∞ ( P̃_{ni,k0}(x)φ_{k0}(x) − P̃_{ni,k1}(x)φ_{k1}(x) ),   (1.6.19)

P_{ni,kj}(x) − P̃_{ni,kj}(x) + Σ_{l=0}^∞ ( P̃_{ni,l0}(x)P_{l0,kj}(x) − P̃_{ni,l1}(x)P_{l1,kj}(x) ) = 0.   (1.6.20)
Denote

ε_0(x) = Σ_{k=0}^∞ ( (1/α_{k0}) φ̃_{k0}(x)φ_{k0}(x) − (1/α_{k1}) φ̃_{k1}(x)φ_{k1}(x) ),   ε(x) = −2ε_0'(x).   (1.6.21)

Lemma 1.6.4. The series in (1.6.21) converges absolutely and uniformly on [0, π]. The function ε_0(x) is absolutely continuous, and ε(x) ∈ L_2(0, π).

Proof. We rewrite ε_0(x) in the form

ε_0(x) = A_1(x) + A_2(x),   (1.6.22)

where

A_1(x) = Σ_{k=0}^∞ ( 1/α_{k0} − 1/α_{k1} ) φ̃_{k0}(x)φ_{k0}(x),
A_2(x) = Σ_{k=0}^∞ (1/α_{k1}) ( (φ̃_{k0}(x) − φ̃_{k1}(x))φ_{k0}(x) + φ̃_{k1}(x)(φ_{k0}(x) − φ_{k1}(x)) ).   (1.6.23)

It follows from (1.5.8), (1.6.3) and (1.6.6) that the series in (1.6.23) converge absolutely and uniformly on [0, π], and

|A_j(x)| ≤ C Σ_{k=0}^∞ ξ_k ≤ C,   j = 1, 2.   (1.6.24)

Furthermore, using the asymptotic formulae (1.1.9), (1.5.8) and (1.6.3), we calculate

A_1'(x) = Σ_{k=0}^∞ ( 1/α_{k0} − 1/α_{k1} ) (d/dx)( φ̃_{k0}(x)φ_{k0}(x) ) = Σ_{k=0}^∞ η_k ( sin 2kx + θ_k(x)/(k + 1) ),

where {η_k} ∈ ℓ_2 and max_{0≤x≤π} |θ_k(x)| ≤ C for k ≥ 0. Hence A_1(x) ∈ W_2^1(0, π). Similarly, we get A_2(x) ∈ W_2^1(0, π), and consequently ε_0(x) ∈ W_2^1(0, π). □
Lemma 1.6.5. The following relations hold:

q(x) = q̃(x) + ε(x),   (1.6.25)

h = h̃ − ε_0(0),   H = H̃ + ε_0(π),   (1.6.26)

where ε(x) and ε_0(x) are defined by (1.6.21).

Proof. Differentiating (1.6.10) twice with respect to x and using (1.6.1) and (1.6.21), we get

φ̃'(x, λ) − ε_0(x)φ̃(x, λ) = φ'(x, λ)
    + Σ_{k=0}^∞ ( (⟨φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) φ'_{k0}(x) − (⟨φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) φ'_{k1}(x) ),   (1.6.27)

φ̃''(x, λ) = φ''(x, λ)
    + Σ_{k=0}^∞ ( (⟨φ̃(x, λ), φ̃_{k0}(x)⟩/(α_{k0}(λ − λ_{k0}))) φ''_{k0}(x) − (⟨φ̃(x, λ), φ̃_{k1}(x)⟩/(α_{k1}(λ − λ_{k1}))) φ''_{k1}(x) )
    + 2φ̃(x, λ) Σ_{k=0}^∞ ( (1/α_{k0}) φ̃_{k0}(x)φ'_{k0}(x) − (1/α_{k1}) φ̃_{k1}(x)φ'_{k1}(x) )
    + Σ_{k=0}^∞ ( (1/α_{k0}) (φ̃(x, λ)φ̃_{k0}(x))' φ_{k0}(x) − (1/α_{k1}) (φ̃(x, λ)φ̃_{k1}(x))' φ_{k1}(x) ).

We replace here the second derivatives using equation (1.1.1), and then replace φ(x, λ) using (1.6.10). This yields

q̃(x)φ̃(x, λ) = q(x)φ̃(x, λ)
    + Σ_{k=0}^∞ ( (1/α_{k0}) ⟨φ̃(x, λ), φ̃_{k0}(x)⟩ φ_{k0}(x) − (1/α_{k1}) ⟨φ̃(x, λ), φ̃_{k1}(x)⟩ φ_{k1}(x) )
    + 2φ̃(x, λ) Σ_{k=0}^∞ ( (1/α_{k0}) φ̃_{k0}(x)φ'_{k0}(x) − (1/α_{k1}) φ̃_{k1}(x)φ'_{k1}(x) )
    + Σ_{k=0}^∞ ( (1/α_{k0}) (φ̃(x, λ)φ̃_{k0}(x))' φ_{k0}(x) − (1/α_{k1}) (φ̃(x, λ)φ̃_{k1}(x))' φ_{k1}(x) ).

After canceling the terms with φ̃'(x, λ), we arrive at (1.6.25).
Denote h_0 = −h, h_π = H, U_0 = U, U_π = V. In (1.6.10) and (1.6.27) we put x = 0 and x = π. Then

φ̃'(a, λ) + (h_a − ε_0(a))φ̃(a, λ) = U_a(φ(x, λ))
    + Σ_{k=0}^∞ ( ((⟨φ̃(x, λ), φ̃_{k0}(x)⟩|_{x=a})/(α_{k0}(λ − λ_{k0}))) U_a(φ_{k0})
    − ((⟨φ̃(x, λ), φ̃_{k1}(x)⟩|_{x=a})/(α_{k1}(λ − λ_{k1}))) U_a(φ_{k1}) ),   a = 0, π.   (1.6.28)

Let a = 0. Since U_0(φ(x, λ)) = 0 and U_0(φ_{kj}) = 0, while φ̃(0, λ) = 1 and φ̃'(0, λ) = h̃, we get h̃ − h − ε_0(0) = 0, i.e. h = h̃ − ε_0(0). Let a = π. Since

U_π(φ(x, λ)) = Δ(λ),   U_π(φ_{k0}) = 0,   U_π(φ_{k1}) = Δ(λ_{k1}),
⟨φ̃(x, λ), φ̃_{k1}(x)⟩|_{x=π} = −φ̃_{k1}(π)Δ̃(λ),

it follows from (1.6.28) that

φ̃'(π, λ) + (h_π − ε_0(π))φ̃(π, λ) = Δ(λ) + Δ̃(λ) Σ_{k=0}^∞ (φ̃_{k1}(π)Δ(λ_{k1}))/(α_{k1}(λ − λ_{k1})).

For λ = λ_{n1} this yields

φ̃'_{n1}(π) + (h_π − ε_0(π))φ̃_{n1}(π) = Δ(λ_{n1}) ( 1 + (1/α_{n1}) φ̃_{n1}(π) Δ̃_1(λ_{n1}) ),

where Δ̃_1(λ) := (d/dλ)Δ̃(λ). By virtue of (1.1.6) and (1.1.8),

Δ̃_1(λ_{n1}) = −α_{n1}/φ̃_{n1}(π),  i.e.  1 + (1/α_{n1}) φ̃_{n1}(π) Δ̃_1(λ_{n1}) = 0.

Then

φ̃'_{n1}(π) + (h_π − ε_0(π))φ̃_{n1}(π) = 0.

On the other hand,

φ̃'_{n1}(π) + H̃φ̃_{n1}(π) = Δ̃(λ_{n1}) = 0.

Thus, (h_π − ε_0(π) − H̃)φ̃_{n1}(π) = 0, i.e. h_π = H = H̃ + ε_0(π). □
Remark 1.6.2. For each fixed $x\in[0,\pi]$, the relation (1.6.19) can be considered as a system of linear equations with respect to $\varphi_{ni}(x)$, $n\ge0$, $i=0,1$. But the series in (1.6.19) converges only "with brackets". Therefore, it is not convenient to use (1.6.19) as a main equation of the inverse problem. Below we will transform (1.6.19) into a linear equation in a corresponding Banach space of sequences (see (1.6.33) or (1.6.34)).
Let $V$ be the set of indices $u=(n,i)$, $n\ge0$, $i=0,1$. For each fixed $x\in[0,\pi]$, we define the vector

$$\psi(x)=[\psi_u(x)]_{u\in V}=\begin{bmatrix}\psi_{n0}(x)\\ \psi_{n1}(x)\end{bmatrix}_{n\ge0}=[\psi_{00},\psi_{01},\psi_{10},\psi_{11},\ldots]^T$$

by the formulae

$$\begin{bmatrix}\psi_{n0}(x)\\ \psi_{n1}(x)\end{bmatrix}=\begin{bmatrix}\chi_n&-\chi_n\\ 0&1\end{bmatrix}\begin{bmatrix}\varphi_{n0}(x)\\ \varphi_{n1}(x)\end{bmatrix}=\begin{bmatrix}\chi_n(\varphi_{n0}(x)-\varphi_{n1}(x))\\ \varphi_{n1}(x)\end{bmatrix},\qquad
\chi_n=\begin{cases}\xi_n^{-1},&\xi_n\ne0,\\ 0,&\xi_n=0.\end{cases}$$
We also define the block matrix

$$H(x)=[H_{u,v}(x)]_{u,v\in V}=\begin{bmatrix}H_{n0,k0}(x)&H_{n0,k1}(x)\\ H_{n1,k0}(x)&H_{n1,k1}(x)\end{bmatrix}_{n,k\ge0},\qquad u=(n,i),\ v=(k,j),$$

by the formulae

$$\begin{bmatrix}H_{n0,k0}(x)&H_{n0,k1}(x)\\ H_{n1,k0}(x)&H_{n1,k1}(x)\end{bmatrix}
=\begin{bmatrix}\chi_n&-\chi_n\\ 0&1\end{bmatrix}\begin{bmatrix}P_{n0,k0}(x)&P_{n0,k1}(x)\\ P_{n1,k0}(x)&P_{n1,k1}(x)\end{bmatrix}\begin{bmatrix}\xi_k&1\\ 0&-1\end{bmatrix}$$
$$=\begin{bmatrix}\xi_k\chi_n(P_{n0,k0}(x)-P_{n1,k0}(x))&\chi_n(P_{n0,k0}(x)-P_{n0,k1}(x)-P_{n1,k0}(x)+P_{n1,k1}(x))\\ \xi_kP_{n1,k0}(x)&P_{n1,k0}(x)-P_{n1,k1}(x)\end{bmatrix}.$$
Analogously we define $\tilde\psi(x)$, $\tilde H(x)$ by replacing in the previous definitions $\varphi_{ni}(x)$ by $\tilde\varphi_{ni}(x)$ and $P_{ni,kj}(x)$ by $\tilde P_{ni,kj}(x)$. It follows from (1.6.6) and (1.6.7) that for $\nu=0,1$,

$$|\psi^{(\nu)}_{ni}(x)|\le C(n+1)^\nu,\qquad |H_{ni,kj}(x)|\le\frac{C\xi_k}{|n-k|+1},\qquad |H^{(\nu+1)}_{ni,kj}(x)|\le C(n+k+1)^\nu\xi_k.\eqno(1.6.29)$$

Similarly,

$$|\tilde\psi^{(\nu)}_{ni}(x)|\le C(n+1)^\nu,\qquad |\tilde H_{ni,kj}(x)|\le\frac{C\xi_k}{|n-k|+1},\qquad |\tilde H^{(\nu+1)}_{ni,kj}(x)|\le C(n+k+1)^\nu\xi_k,\eqno(1.6.30)$$

and also

$$|\tilde H_{ni,kj}(x)-\tilde H_{ni,kj}(x_0)|\le C|x-x_0|\,\xi_k,\qquad x,x_0\in[0,\pi],\eqno(1.6.31)$$

where $C$ does not depend on $x,x_0,n,i,j$ and $k$.
Let us consider the Banach space $m$ of bounded sequences $\alpha=[\alpha_u]_{u\in V}$ with the norm $\|\alpha\|_m=\sup_{u\in V}|\alpha_u|$. It follows from (1.6.29) and (1.6.30) that for each fixed $x\in[0,\pi]$, the operators $E+\tilde H(x)$ and $E-H(x)$ (here $E$ is the identity operator), acting from $m$ to $m$, are linear bounded operators, and

$$\|H(x)\|,\ \|\tilde H(x)\|\le C\sup_n\sum_{k=0}^\infty\frac{\xi_k}{|n-k|+1}<\infty.\eqno(1.6.32)$$
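The finiteness of the supremum in (1.6.32) follows from the Cauchy-Schwarz inequality, since $\{(k+1)\xi_k\}\in l_2$. As a quick numerical illustration (the decay $\xi_k=(k+1)^{-2}$ and the truncation levels are hypothetical choices, not taken from the text):

```python
# Illustrate (1.6.32): for xi_k with sum ((k+1) xi_k)^2 < infinity, the
# row sums  sum_k xi_k/(|n-k|+1)  are bounded uniformly in n.
xi = [1.0 / (k + 1) ** 2 for k in range(20000)]   # hypothetical xi_k

def row_sum(n):
    return sum(x / (abs(n - k) + 1) for k, x in enumerate(xi))

sums = [row_sum(n) for n in range(200)]
print(max(sums))   # bounded; for this decaying xi the largest row is n = 0
```

For this choice the largest row sum is $\sum_k(k+1)^{-3}\approx1.202$, and the row sums decay as $n$ grows, in line with the estimate $\sum_k\xi_k/(|n-k|+1)\le\Omega\,\eta_n$ used below.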
Theorem 1.6.1. For each fixed $x\in[0,\pi]$, the vector $\psi(x)\in m$ satisfies the equation

$$\tilde\psi(x)=(E+\tilde H(x))\psi(x)\eqno(1.6.33)$$

in the Banach space $m$. Moreover, the operator $E+\tilde H(x)$ has a bounded inverse operator, i.e. equation (1.6.33) is uniquely solvable.
Proof. We rewrite (1.6.19) in the form

$$\tilde\varphi_{n0}(x)-\tilde\varphi_{n1}(x)=\varphi_{n0}(x)-\varphi_{n1}(x)+\sum_{k=0}^\infty\Big((\tilde P_{n0,k0}(x)-\tilde P_{n1,k0}(x))(\varphi_{k0}(x)-\varphi_{k1}(x))$$
$$+(\tilde P_{n0,k0}(x)-\tilde P_{n0,k1}(x)-\tilde P_{n1,k0}(x)+\tilde P_{n1,k1}(x))\varphi_{k1}(x)\Big),$$

$$\tilde\varphi_{n1}(x)=\varphi_{n1}(x)+\sum_{k=0}^\infty\Big(\tilde P_{n1,k0}(x)(\varphi_{k0}(x)-\varphi_{k1}(x))+(\tilde P_{n1,k0}(x)-\tilde P_{n1,k1}(x))\varphi_{k1}(x)\Big).$$

Taking into account our notations, we obtain

$$\tilde\psi_{ni}(x)=\psi_{ni}(x)+\sum_{(k,j)\in V}\tilde H_{ni,kj}(x)\psi_{kj}(x),\qquad (n,i)\in V,\eqno(1.6.34)$$

which is equivalent to (1.6.33). The series in (1.6.34) converges absolutely and uniformly for $x\in[0,\pi]$. Similarly, (1.6.20) takes the form

$$H_{ni,kj}(x)-\tilde H_{ni,kj}(x)+\sum_{(\ell,s)\in V}\tilde H_{ni,\ell s}(x)H_{\ell s,kj}(x)=0,\qquad (n,i),(k,j)\in V,$$

or

$$(E+\tilde H(x))(E-H(x))=E.$$

Interchanging places for $L$ and $\tilde L$, we obtain analogously

$$\psi(x)=(E-H(x))\tilde\psi(x),\qquad (E-H(x))(E+\tilde H(x))=E.$$

Hence the operator $(E+\tilde H(x))^{-1}$ exists, and it is a linear bounded operator. $\Box$
Equation (1.6.33) is called the main equation of the inverse problem. Solving (1.6.33) we find the vector $\psi(x)$, and consequently the functions $\varphi_{ni}(x)$. Since $\varphi_{ni}(x)=\varphi(x,\lambda_{ni})$ are the solutions of (1.1.1), we can construct the function $q(x)$ and the coefficients $h$ and $H$. Thus, we obtain the following algorithm for the solution of Inverse Problem 1.4.1.

Algorithm 1.6.1. Given the numbers $\{\lambda_n,\alpha_n\}_{n\ge0}$.
(i) Choose $\tilde L$ such that $\Omega<\infty$, and construct $\tilde\psi(x)$ and $\tilde H(x)$.
(ii) Find $\psi(x)$ by solving equation (1.6.33).
(iii) Calculate $q(x)$, $h$ and $H$ by (1.6.21), (1.6.25)-(1.6.26).
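As an illustration, here is a minimal numerical sketch of Algorithm 1.6.1 (not from the text) for the model $\tilde L$ with $\tilde q=0$, $\tilde h=\tilde H=0$, $\tilde\lambda_n=n^2$, $\tilde\alpha_0=\pi$, $\tilde\alpha_n=\pi/2$, when only the eigenvalue $\lambda_0$ is moved to a value $\mu$ and all weights are kept. For such a finite perturbation the series in (1.6.19) reduces to the $k=0$ terms, with $\tilde P_{ni,kj}(x)=\frac1{\alpha_{kj}}\int_0^x\tilde\varphi_{ni}\tilde\varphi_{kj}\,dt$, so the main equation becomes a $2\times2$ linear system at every $x$; the value $\mu=0.5$ and all grid sizes are arbitrary choices:

```python
import numpy as np

mu, PI = 0.5, np.pi                      # perturbed first eigenvalue (arbitrary)
r = np.sqrt(mu)
x = np.linspace(0.0, PI, 8001)

phi_t00 = np.cos(r * x)                  # tilde-phi(x, lambda_00), lambda_00 = mu
phi_t01 = np.ones_like(x)                # tilde-phi(x, lambda_01), lambda_01 = 0

# P~_{0i,0j}(x) = (1/alpha_{0j}) * integral_0^x tilde-phi_{0i} tilde-phi_{0j} dt
P00 = (x / 2 + np.sin(2 * r * x) / (4 * r)) / PI
P01 = np.sin(r * x) / (r * PI)
P10 = np.sin(r * x) / (r * PI)
P11 = x / PI

# Main equation at each x: [1+P00, -P01; P10, 1-P11][phi00; phi01] = rhs
det = (1 + P00) * (1 - P11) + P01 * P10
phi00 = ((1 - P11) * phi_t00 + P01 * phi_t01) / det
phi01 = (-P10 * phi_t00 + (1 + P00) * phi_t01) / det

# Step (iii): eps_0 from (1.6.21), then q, h, H from (1.6.25)-(1.6.26)
eps0 = (phi_t00 * phi00 - phi_t01 * phi01) / PI
q = -2 * np.gradient(eps0, x)
h, H = -eps0[0], eps0[-1]

# Consistency check: mu (and the untouched eigenvalue 4) must be eigenvalues
# of -y'' + q y = lam y with y'(0) = h y(0) and y'(pi) + H y(pi) = 0.
def boundary_residual(lam):
    dx = x[1] - x[0]
    y, p = 1.0, h
    for i in range(len(x) - 1):
        def f(y_, p_, t_):
            return p_, (np.interp(t_, x, q) - lam) * y_
        k1 = f(y, p, x[i])
        k2 = f(y + dx / 2 * k1[0], p + dx / 2 * k1[1], x[i] + dx / 2)
        k3 = f(y + dx / 2 * k2[0], p + dx / 2 * k2[1], x[i] + dx / 2)
        k4 = f(y + dx * k3[0], p + dx * k3[1], x[i] + dx)
        y += dx / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += dx / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p + H * y

print(det.min(), abs(boundary_residual(mu)), abs(boundary_residual(4.0)))
```

Small residuals confirm that the reconstructed $(q,h,H)$ has the prescribed spectral data, and the determinant staying away from zero is exactly the unique-solvability requirement for this finite perturbation (see Subsection 1.6.3).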
1.6.2. Necessary and sufficient conditions. In this subsection we provide necessary and sufficient conditions for the solvability of Inverse Problem 1.4.1.

Theorem 1.6.2. For real numbers $\{\lambda_n,\alpha_n\}_{n\ge0}$ to be the spectral data for a certain $L(q(x),h,H)$ with $q(x)\in L_2(0,\pi)$, it is necessary and sufficient that the relations (1.5.8)-(1.5.9) hold. Moreover, $q(x)\in W_2^N$ iff (1.1.23) holds. The boundary value problem $L(q(x),h,H)$ can be constructed by Algorithm 1.6.1.

The necessity part of Theorem 1.6.2 was proved above. We prove here the sufficiency. Let numbers $\{\lambda_n,\alpha_n\}_{n\ge0}$ of the form (1.5.8)-(1.5.9) be given. Choose $\tilde L$, construct $\tilde\psi(x)$, $\tilde H(x)$, and consider the equation (1.6.33).
Lemma 1.6.6. For each fixed $x\in[0,\pi]$, the operator $E+\tilde H(x)$, acting from $m$ to $m$, has a bounded inverse operator, and the main equation (1.6.33) has a unique solution $\psi(x)\in m$.

Proof. It is sufficient to prove that the homogeneous equation

$$(E+\tilde H(x))\beta(x)=0,\eqno(1.6.35)$$

where $\beta(x)=[\beta_u(x)]_{u\in V}$, has only the zero solution. Let $\beta(x)\in m$ be a solution of (1.6.35), i.e.

$$\beta_{ni}(x)+\sum_{(k,j)\in V}\tilde H_{ni,kj}(x)\beta_{kj}(x)=0,\qquad (n,i)\in V.\eqno(1.6.36)$$

Denote $\gamma_{n1}(x)=\beta_{n1}(x)$, $\gamma_{n0}(x)=\beta_{n0}(x)\xi_n+\beta_{n1}(x)$. Then (1.6.36) takes the form

$$\gamma_{ni}(x)+\sum_{k=0}^\infty\Big(\tilde P_{ni,k0}(x)\gamma_{k0}(x)-\tilde P_{ni,k1}(x)\gamma_{k1}(x)\Big)=0,\eqno(1.6.37)$$

and

$$|\gamma_{ni}(x)|\le C(x),\qquad |\gamma_{n0}(x)-\gamma_{n1}(x)|\le C(x)\xi_n.\eqno(1.6.38)$$
Construct the functions $\gamma(x,\lambda)$, $\Gamma(x,\lambda)$ and $B(x,\lambda)$ by the formulas

$$\gamma(x,\lambda)=\sum_{k=0}^\infty\Big(\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\alpha_{k0}(\lambda-\lambda_{k0})}\,\gamma_{k0}(x)-\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\alpha_{k1}(\lambda-\lambda_{k1})}\,\gamma_{k1}(x)\Big),\eqno(1.6.39)$$

$$\Gamma(x,\lambda)=\sum_{k=0}^\infty\Big(\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\alpha_{k0}(\lambda-\lambda_{k0})}\,\gamma_{k0}(x)-\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\alpha_{k1}(\lambda-\lambda_{k1})}\,\gamma_{k1}(x)\Big),\eqno(1.6.40)$$

$$B(x,\lambda)=\overline{\gamma(x,\overline\lambda)}\,\Gamma(x,\lambda).\eqno(1.6.41)$$

The idea to consider $B(x,\lambda)$ comes from the construction of Green's function for $t=x$:

$$G(x,x,\lambda)=\varphi(x,\lambda)\Phi(x,\lambda).$$

In view of (1.6.2) the function $\gamma(x,\lambda)$ is entire in $\lambda$ for each fixed $x$. The functions $\Gamma(x,\lambda)$ and $B(x,\lambda)$ are meromorphic in $\lambda$ with simple poles $\lambda_{ni}$. According to (1.6.2) and (1.6.37), $\gamma(x,\lambda_{ni})=-\gamma_{ni}(x)$. Let us show that

$$\mathop{\rm Res}_{\lambda=\lambda_{n0}}B(x,\lambda)=-\frac1{\alpha_{n0}}|\gamma_{n0}(x)|^2,\qquad \mathop{\rm Res}_{\lambda=\lambda_{n1}}B(x,\lambda)=0.\eqno(1.6.42)$$
Indeed, by virtue of (1.6.39), (1.6.40) and (1.4.9) we get

$$\Gamma(x,\lambda)=\tilde M(\lambda)\gamma(x,\lambda)+\sum_{k=0}^\infty\Big(\frac{\langle\tilde S(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\alpha_{k0}(\lambda-\lambda_{k0})}\,\gamma_{k0}(x)-\frac{\langle\tilde S(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\alpha_{k1}(\lambda-\lambda_{k1})}\,\gamma_{k1}(x)\Big).\eqno(1.6.43)$$

Since $\langle\tilde S(x,\lambda),\tilde\varphi(x,\mu)\rangle_{|x=0}=1$, it follows from (1.6.1) that

$$\langle\tilde S(x,\lambda),\tilde\varphi(x,\mu)\rangle=1+(\lambda-\mu)\int_0^x\tilde S(t,\lambda)\tilde\varphi(t,\mu)\,dt.$$

Therefore, according to (1.6.43),

$$\mathop{\rm Res}_{\lambda=\lambda_{n0}}\Gamma(x,\lambda)=\frac1{\alpha_{n0}}\gamma_{n0}(x),\qquad
\mathop{\rm Res}_{\lambda=\lambda_{n1}}\Gamma(x,\lambda)=\frac1{\alpha_{n1}}\gamma_{n1}(x)-\frac1{\alpha_{n1}}\gamma_{n1}(x)=0.$$

Together with (1.6.41) this yields (1.6.42).
Further, consider the integral

$$I_N^0(x)=\frac1{2\pi i}\int_{\Gamma_N}B(x,\lambda)\,d\lambda,\eqno(1.6.44)$$

where $\Gamma_N=\{\lambda:\ |\lambda|=(N+1/2)^2\}$. Let us show that

$$\lim_{N\to\infty}I_N^0(x)=0.\eqno(1.6.45)$$
Indeed, it follows from (1.6.2) and (1.6.39) that

$$\gamma(x,\lambda)=\sum_{k=0}^\infty\frac1{\alpha_{k0}}\tilde D(x,\lambda,\lambda_{k0})(\gamma_{k0}(x)-\gamma_{k1}(x))
+\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}-\frac1{\alpha_{k1}}\Big)\tilde D(x,\lambda,\lambda_{k0})\,\gamma_{k1}(x)$$
$$+\sum_{k=0}^\infty\frac1{\alpha_{k1}}\Big(\tilde D(x,\lambda,\lambda_{k0})-\tilde D(x,\lambda,\lambda_{k1})\Big)\gamma_{k1}(x).$$

For definiteness, let $\sigma:={\rm Re}\,\rho\ge0$. By virtue of (1.5.8), (1.6.38), (1.6.8) and (1.6.9), we get

$$|\gamma(x,\overline\lambda)|=|\gamma(x,\lambda)|\le C(x)\exp(|\tau|x)\sum_{k=0}^\infty\frac{\xi_k}{|\sigma-k|+1},\qquad \sigma\ge0,\ \tau:={\rm Im}\,\rho.\eqno(1.6.46)$$
Similarly, using (1.6.40) we obtain for sufficiently large $\rho^*>0$:

$$|\Gamma(x,\lambda)|\le\frac{C(x)}{|\rho|}\exp(-|\tau|x)\sum_{k=0}^\infty\frac{\xi_k}{|\sigma-k|+1},\qquad \sigma\ge0,\ \rho\in G_\delta,\ |\rho|\ge\rho^*.$$

Then

$$|B(x,\lambda)|\le\frac{C(x)}{|\rho|}\Big(\sum_{k=0}^\infty\frac{\xi_k}{|\sigma-k|+1}\Big)^2
\le\frac{C(x)}{|\rho|}\sum_{k=0}^\infty\big((k+1)\xi_k\big)^2\sum_{k=0}^\infty\frac1{(|\sigma-k|+1)^2(k+1)^2}.$$

In view of (1.6.3), this implies

$$|B(x,\lambda)|\le\frac{C(x)}{|\rho|}\sum_{k=0}^\infty\frac1{(|\sigma-k|+1)^2(k+1)^2},\qquad |\rho|=N+\frac12,\ {\rm Re}\,\rho\ge0.\eqno(1.6.47)$$
Since $\rho=(N+1/2)\exp(i\theta)$, $\theta\in[-\pi/2,\pi/2]$, we calculate

$$|\rho-k|=\sqrt{\Big(N+\frac12\Big)^2+k^2-2\Big(N+\frac12\Big)k\cos\theta}\ \ge\ \Big|N+\frac12-k\Big|.\eqno(1.6.48)$$

Furthermore,

$$\sum_{k=0}^\infty\frac1{(k+1)^2\big(N+\frac12-k\big)^2}\le\sum_{k=0}^N\frac1{(k+1)^2\big(N+\frac12-k\big)^2}+\frac1{(N+2)^2}\sum_{k=1}^\infty\frac1{\big(k-\frac12\big)^2}
\le\sum_{k=1}^{N-1}\frac1{k^2(N-k)^2}+O\Big(\frac1{N^2}\Big).$$

Since

$$\frac1{k^2(N-k)^2}=\frac2{kN^3}+\frac1{k^2N^2}+\frac2{N^3(N-k)}+\frac1{N^2(N-k)^2},$$

we get

$$\sum_{k=1}^{N-1}\frac1{k^2(N-k)^2}=\frac4{N^3}\sum_{k=1}^{N-1}\frac1k+\frac2{N^2}\sum_{k=1}^{N-1}\frac1{k^2}.$$

Hence

$$\sum_{k=0}^\infty\frac1{(k+1)^2\big(N+\frac12-k\big)^2}=O\Big(\frac1{N^2}\Big).$$
Together with (1.6.47) and (1.6.48) this yields

$$|B(x,\lambda)|\le\frac{C(x)}{|\rho|^3},\qquad \lambda\in\Gamma_N.$$

Substituting this estimate into (1.6.44) we arrive at (1.6.45).

On the other hand, calculating the integral in (1.6.44) by the residue theorem and taking (1.6.42) into account, we arrive at

$$\lim_{N\to\infty}\sum_{n=0}^N\frac1{\alpha_{n0}}|\gamma_{n0}(x)|^2=0.$$

Since $\alpha_{n0}>0$, we get $\gamma_{n0}(x)=0$.
Construct the functions

$$\Delta(\lambda):=\pi(\lambda_0-\lambda)\prod_{n=1}^\infty\frac{\lambda_n-\lambda}{n^2},\qquad \lambda_n=\rho_n^2,$$

$$d(\lambda):=\prod_{n=1}^\infty\frac{n^2-\lambda}{n^2}=\frac{\sin\rho\pi}{\rho\pi},\qquad \lambda=\rho^2,\qquad
f(x,\lambda):=\frac{\gamma(x,\lambda)}{\Delta(\lambda)}.$$

It follows from the relation $\gamma(x,\lambda_{n0})=-\gamma_{n0}(x)=0$ that for each fixed $x\in[0,\pi]$ the function $f(x,\lambda)$ is entire in $\lambda$. On the other hand, we have

$$\frac{\Delta(\lambda)}{d(\lambda)}=\pi(\lambda_0-\lambda)\prod_{n=1}^\infty\Big(1+\frac{\lambda_n-n^2}{n^2-\lambda}\Big).$$

Fix $\delta>0$. By virtue of (1.1.17) and (1.5.8), the following estimates are valid in the sector $\arg\lambda\in[\delta,2\pi-\delta]$:

$$|d(\lambda)|\ge\frac{C_\delta}{|\rho|}\exp(|\tau|\pi),\qquad \Big|\frac{\lambda_n-n^2}{n^2-\lambda}\Big|\le\frac C{n^2},\qquad \tau:={\rm Im}\,\rho.$$

Consequently,

$$|\Delta(\lambda)|\ge C_\delta|\rho|\exp(|\tau|\pi),\qquad \arg\lambda\in[\delta,2\pi-\delta].$$

Taking (1.6.46) into account we obtain

$$|f(x,\lambda)|\le\frac{C(x)}{|\rho|},\qquad \arg\lambda\in[\delta,2\pi-\delta].$$

From this, using the Phragmen-Lindelof theorem [you1, p.80] and Liouville's theorem [con1, p.77], we conclude that $f(x,\lambda)\equiv0$, i.e. $\gamma(x,\lambda)\equiv0$ and $\gamma_{n1}(x)=-\gamma(x,\lambda_{n1})=0$. Hence $\beta_{ni}(x)=0$ for $n\ge0$, $i=0,1$, and Lemma 1.6.6 is proved. $\Box$
Let $\psi(x)=[\psi_u(x)]_{u\in V}$ be the solution of the main equation (1.6.33).

Lemma 1.6.7. For $n\ge0$, $i=0,1$, the following relations hold:

$$\psi_{ni}(x)\in C^1[0,\pi],\qquad |\psi^{(\nu)}_{ni}(x)|\le C(n+1)^\nu,\quad \nu=0,1,\ x\in[0,\pi],\eqno(1.6.49)$$

$$|\psi_{ni}(x)-\tilde\psi_{ni}(x)|\le C\Omega\eta_n,\qquad x\in[0,\pi],\eqno(1.6.50)$$

$$|\psi'_{ni}(x)-\tilde\psi'_{ni}(x)|\le C\Omega,\qquad x\in[0,\pi],\eqno(1.6.51)$$

where $\Omega$ is defined by (1.6.3), and

$$\eta_n:=\Big(\sum_{k=0}^\infty\frac1{(k+1)^2(|n-k|+1)^2}\Big)^{1/2}.$$

Here and below, one and the same symbol $C$ denotes various positive constants which depend here only on $\tilde L$ and $C_0$, where $C_0>0$ is such that $\Omega\le C_0$.
Proof. 1) Denote $\tilde R(x)=(E+\tilde H(x))^{-1}$. Fix $x_0\in[0,\pi]$ and consider the main equation (1.6.33) for $x=x_0$:

$$\tilde\psi(x_0)=(E+\tilde H(x_0))\psi(x_0).$$

By virtue of (1.6.31),

$$\|\tilde H(x)-\tilde H(x_0)\|\le C|x-x_0|\sum_{k=0}^\infty\xi_k\le C_1|x-x_0|,\qquad x,x_0\in[0,\pi],\ C_1>0.$$

Take

$$\delta(x_0):=\frac1{2C_1\|\tilde R(x_0)\|}.$$

Then, for $|x-x_0|\le\delta(x_0)$,

$$\|\tilde H(x)-\tilde H(x_0)\|\le\frac1{2\|\tilde R(x_0)\|}.$$

Using Lemma 1.5.1 we get for $|x-x_0|\le\delta(x_0)$,

$$\tilde R(x)-\tilde R(x_0)=\sum_{k=1}^\infty\big(\tilde H(x_0)-\tilde H(x)\big)^k\big(\tilde R(x_0)\big)^{k+1},
\qquad \|\tilde R(x)-\tilde R(x_0)\|\le2C_1\|\tilde R(x_0)\|^2|x-x_0|.$$

Consequently, $\tilde R(x)$ is continuous with respect to $x\in[0,\pi]$, and

$$\|\tilde R(x)\|\le C,\qquad x\in[0,\pi].$$

We represent $\tilde R(x)$ in the form $\tilde R(x)=E-H(x)$. Then

$$\|H(x)\|\le C,\qquad x\in[0,\pi],\eqno(1.6.52)$$

and

$$(E-H(x))(E+\tilde H(x))=(E+\tilde H(x))(E-H(x))=E.\eqno(1.6.53)$$
In coordinates, (1.6.53) takes the form

$$H_{ni,kj}(x)=\tilde H_{ni,kj}(x)-\sum_{(\ell,s)\in V}H_{ni,\ell s}(x)\tilde H_{\ell s,kj}(x),\eqno(1.6.54)$$

$$H_{ni,kj}(x)=\tilde H_{ni,kj}(x)-\sum_{(\ell,s)\in V}\tilde H_{ni,\ell s}(x)H_{\ell s,kj}(x).\eqno(1.6.55)$$

The functions $H_{ni,kj}(x)$ are continuous for $x\in[0,\pi]$, and by virtue of (1.6.52), (1.6.54) and (1.6.30) we have

$$|H_{ni,kj}(x)|\le C\xi_k,\qquad x\in[0,\pi].\eqno(1.6.56)$$

Substituting this estimate into the right-hand sides of (1.6.54) and (1.6.55) and using (1.6.30), we obtain more precisely

$$|H_{ni,kj}(x)|\le C\xi_k\Big(\frac1{|n-k|+1}+\Omega\eta_k\Big),\qquad x\in[0,\pi],\ (n,i),(k,j)\in V,\eqno(1.6.57)$$

$$|H_{ni,kj}(x)|\le C\xi_k\Big(\frac1{|n-k|+1}+\Omega\eta_n\Big),\qquad x\in[0,\pi],\ (n,i),(k,j)\in V.\eqno(1.6.58)$$
We note that since

$$\sum_{n=0}^\infty\eta_n^2=\sum_{n=0}^\infty\sum_{k=0}^\infty\frac1{(k+1)^2(|n-k|+1)^2}
=\sum_{k=0}^\infty\sum_{n=k}^\infty\frac1{(k+1)^2(n-k+1)^2}+\sum_{n=0}^\infty\sum_{k=n+1}^\infty\frac1{(k+1)^2(k-n+1)^2}
\le2\Big(\sum_{k=1}^\infty\frac1{k^2}\Big)^2,$$

it follows that $\{\eta_k\}\in l_2$.

Solving the main equation (1.6.33) we infer

$$\psi_{ni}(x)=\tilde\psi_{ni}(x)-\sum_{(k,j)\in V}H_{ni,kj}(x)\tilde\psi_{kj}(x),\qquad x\in[0,\pi],\ (n,i)\in V.\eqno(1.6.59)$$

According to (1.6.30) and (1.6.58), the series in (1.6.59) converges absolutely and uniformly for $x\in[0,\pi]$; the functions $\psi_{ni}(x)$ are continuous for $x\in[0,\pi]$,

$$|\psi_{ni}(x)|\le C,\qquad x\in[0,\pi],\ (n,i)\in V,$$

and (1.6.50) is valid.
2) It follows from (1.6.53) that

$$H'(x)=(E-H(x))\tilde H'(x)(E-H(x)),\eqno(1.6.60)$$

the functions $H_{ni,kj}(x)$ are continuously differentiable, and

$$|H'_{ni,kj}(x)|\le C\xi_k.\eqno(1.6.61)$$

Differentiating (1.6.59) we calculate

$$\psi'_{ni}(x)=\tilde\psi'_{ni}(x)-\sum_{(k,j)\in V}H_{ni,kj}(x)\tilde\psi'_{kj}(x)-\sum_{(k,j)\in V}H'_{ni,kj}(x)\tilde\psi_{kj}(x).\eqno(1.6.62)$$

By virtue of (1.6.30), (1.6.57) and (1.6.61), the series in (1.6.62) converge absolutely and uniformly for $x\in[0,\pi]$; $\psi_{ni}(x)\in C^1[0,\pi]$,

$$|\psi'_{ni}(x)|\le C(n+1),\qquad x\in[0,\pi],\ (n,i)\in V,$$

and (1.6.51) is valid. $\Box$
Remark 1.6.3. The estimates for $\psi'_{ni}(x)$ can also be obtained in the following way. Differentiating (1.6.34) formally we get

$$\tilde\psi'_{ni}(x)-\sum_{(k,j)\in V}\tilde H'_{ni,kj}(x)\psi_{kj}(x)=\psi'_{ni}(x)+\sum_{(k,j)\in V}\tilde H_{ni,kj}(x)\psi'_{kj}(x).\eqno(1.6.63)$$

Define

$$z(x)=[z_u(x)]_{u\in V},\qquad \tilde z(x)=[\tilde z_u(x)]_{u\in V},\qquad \tilde A(x)=[\tilde A_{u,v}(x)]_{u,v\in V},\qquad u=(n,i),\ v=(k,j),$$

by the formulas

$$z_{ni}(x)=\frac1{n+1}\psi'_{ni}(x),\qquad \tilde z_{ni}(x)=\frac1{n+1}\Big(\tilde\psi'_{ni}(x)-\sum_{(k,j)\in V}\tilde H'_{ni,kj}(x)\psi_{kj}(x)\Big),$$

$$\tilde A_{ni,kj}(x)=\frac{k+1}{n+1}\,\tilde H_{ni,kj}(x).$$

Then (1.6.63) takes the form

$$\tilde z(x)=(E+\tilde A(x))z(x)$$

or

$$\tilde z_{ni}(x)=z_{ni}(x)+\sum_{(k,j)\in V}\tilde A_{ni,kj}(x)z_{kj}(x).\eqno(1.6.64)$$

It follows from (1.6.30) that

$$\sum_{(k,j)\in V}|\tilde A_{ni,kj}(x)|\le C\sum_{k=0}^\infty\frac{(k+1)\xi_k}{(n+1)(|n-k|+1)}\le\frac{C\Omega}{n+1}\Big(\sum_{k=0}^\infty\frac1{(|n-k|+1)^2}\Big)^{1/2}.$$

Since

$$\sum_{k=0}^\infty\frac1{(|n-k|+1)^2}=\sum_{k=0}^n\frac1{(n-k+1)^2}+\sum_{k=n+1}^\infty\frac1{(k-n+1)^2}\le2\sum_{k=1}^\infty\frac1{k^2},\eqno(1.6.65)$$

we infer

$$\sum_{(k,j)\in V}|\tilde A_{ni,kj}(x)|\le\frac C{n+1}.$$

For each fixed $x\in[0,\pi]$, the operator $\tilde A(x)$ is a linear bounded operator, acting from $m$ to $m$, and $\|\tilde A(x)\|\le C$.

The homogeneous equation

$$u_{ni}(x)+\sum_{(k,j)\in V}\tilde A_{ni,kj}(x)u_{kj}(x)=0,\qquad [u_{ni}(x)]\in m,\eqno(1.6.66)$$

has only the trivial solution. Indeed, let $[u_{ni}(x)]\in m$ be a solution of (1.6.66). Then the functions $\beta_{ni}(x):=(n+1)u_{ni}(x)$ satisfy (1.6.36). Moreover, (1.6.36) gives

$$|\beta_{ni}(x)|\le C(x)\sum_{(k,j)\in V}|\tilde H_{ni,kj}(x)|(k+1)\le C(x)\sum_{k=0}^\infty\frac{(k+1)\xi_k}{|n-k|+1}\le C(x)\Big(\sum_{k=0}^\infty\frac1{(|n-k|+1)^2}\Big)^{1/2}.$$

Hence, in view of (1.6.65), $|\beta_{ni}(x)|\le C(x)$, i.e. $[\beta_{ni}(x)]\in m$. By Lemma 1.6.6, $\beta_{ni}(x)=0$, i.e. $u_{ni}(x)=0$.

Using (1.6.64), by the same arguments as above, one can show that $\psi_{ni}(x)\in C^1[0,\pi]$ and $|\psi'_{ni}(x)|\le C(n+1)$, i.e. (1.6.49) is proved. Hence, it follows from (1.6.63) that

$$|\tilde\psi'_{ni}(x)-\psi'_{ni}(x)|\le C\sum_{k=0}^\infty\frac{(k+1)\xi_k}{|n-k|+1}\le C\Big(\sum_{k=0}^\infty\frac1{(|n-k|+1)^2}\Big)^{1/2}.$$

Taking (1.6.65) into account we arrive at (1.6.51). $\Box$
We define the functions $\varphi_{ni}(x)$ by the formulae

$$\varphi_{n1}(x)=\psi_{n1}(x),\qquad \varphi_{n0}(x)=\psi_{n0}(x)\xi_n+\psi_{n1}(x).\eqno(1.6.67)$$

Then (1.6.19) and (1.6.6) are valid. By virtue of (1.6.67) and Lemma 1.6.7, we have

$$|\varphi^{(\nu)}_{ni}(x)|\le C(n+1)^\nu,\qquad \nu=0,1,\eqno(1.6.68)$$

$$|\varphi_{ni}(x)-\tilde\varphi_{ni}(x)|\le C\Omega\eta_n,\qquad |\varphi'_{ni}(x)-\tilde\varphi'_{ni}(x)|\le C\Omega.\eqno(1.6.69)$$

Furthermore, we construct the functions $\varphi(x,\lambda)$ and $\Phi(x,\lambda)$ via (1.6.10), (1.6.18), and the boundary value problem $L(q(x),h,H)$ via (1.6.21), (1.6.25)-(1.6.26). Clearly, $\varphi(x,\lambda_{ni})=\varphi_{ni}(x)$.
Lemma 1.6.8. The following relation holds: $q(x)\in L_2(0,\pi)$.

Proof. Here we follow the proof of Lemma 1.6.4 with necessary modifications. According to (1.6.22), $\varepsilon_0(x)=A_1(x)+A_2(x)$, where the functions $A_j(x)$ are defined by (1.6.23). It follows from (1.5.8), (1.6.3) and (1.6.6) that the series in (1.6.23) converge absolutely and uniformly on $[0,\pi]$, and (1.6.24) holds. Furthermore,

$$A'_1(x)=\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}-\frac1{\alpha_{k1}}\Big)\frac d{dx}\big(\tilde\varphi_{k0}(x)\varphi_{k0}(x)\big)
=2\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}-\frac1{\alpha_{k1}}\Big)\tilde\varphi_{k0}(x)\tilde\varphi'_{k0}(x)+A(x)
=\sum_{k=0}^\infty\vartheta_k\Big(\sin2kx+\frac{\hat\vartheta_k(x)}{k+1}\Big)+A(x),$$

where

$$A(x)=\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}-\frac1{\alpha_{k1}}\Big)\Big(\tilde\varphi_{k0}(x)\big(\varphi'_{k0}(x)-\tilde\varphi'_{k0}(x)\big)+\tilde\varphi'_{k0}(x)\big(\varphi_{k0}(x)-\tilde\varphi_{k0}(x)\big)\Big),\eqno(1.6.70)$$

$$\{\vartheta_k\}\in l_2,\qquad \Big(\sum_{k=0}^\infty|\vartheta_k|^2\Big)^{1/2}\le C\Omega,\qquad \max_{0\le x\le\pi}|\hat\vartheta_k(x)|\le C\Omega.$$

By virtue of (1.6.68), (1.6.69), (1.6.6) and (1.5.8), the series in (1.6.70) converges absolutely and uniformly on $[0,\pi]$, and

$$|A(x)|\le C\Big(\sum_{k=0}^\infty\xi_k\Omega+\sum_{k=0}^\infty(k+1)\xi_k\,\Omega\eta_k\Big)\le C\Omega^2\Big(1+\Big(\sum_{k=0}^\infty|\eta_k|^2\Big)^{1/2}\Big)\le C\Omega^2,$$

since $\{\eta_k\}\in l_2$. Hence $A_1(x)\in W_2^1[0,\pi]$. Similarly we get $A_2(x)\in W_2^1[0,\pi]$. Consequently, $\varepsilon_0(x)\in W_2^1(0,\pi)$, $\varepsilon(x)\in L_2(0,\pi)$, i.e. $q(x)\in L_2(0,\pi)$. $\Box$
Lemma 1.6.9. The following relations hold:

$$\ell\varphi_{ni}(x)=\lambda_{ni}\varphi_{ni}(x),\qquad \ell\varphi(x,\lambda)=\lambda\varphi(x,\lambda),\qquad \ell\Phi(x,\lambda)=\lambda\Phi(x,\lambda),\eqno(1.6.71)$$

$$\varphi(0,\lambda)=1,\qquad \varphi'(0,\lambda)=h,\qquad U(\Phi)=1,\qquad V(\Phi)=0,\eqno(1.6.72)$$

where $\ell y:=-y''+q(x)y$.
Proof. 1) Using (1.6.10), (1.6.18) and acting in the same way as in the proof of Lemma 1.6.5, we get

$$\tilde U_a(\tilde\varphi(x,\lambda))=U_a(\varphi(x,\lambda))+\sum_{k=0}^\infty\Big(\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k0}(x)\rangle_{|x=a}}{\alpha_{k0}(\lambda-\lambda_{k0})}\,U_a(\varphi_{k0})-\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k1}(x)\rangle_{|x=a}}{\alpha_{k1}(\lambda-\lambda_{k1})}\,U_a(\varphi_{k1})\Big),\quad a=0,\pi,\eqno(1.6.73)$$

$$\tilde U_a(\tilde\Phi(x,\lambda))=U_a(\Phi(x,\lambda))+\sum_{k=0}^\infty\Big(\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k0}(x)\rangle_{|x=a}}{\alpha_{k0}(\lambda-\lambda_{k0})}\,U_a(\varphi_{k0})-\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k1}(x)\rangle_{|x=a}}{\alpha_{k1}(\lambda-\lambda_{k1})}\,U_a(\varphi_{k1})\Big),\quad a=0,\pi,\eqno(1.6.74)$$

where $U_0=U$, $U_\pi=V$.

Since $\langle\tilde\varphi(x,\lambda),\tilde\varphi_{kj}(x)\rangle_{|x=0}=0$, it follows from (1.6.10) and (1.6.73) for $x=0$ that $\varphi(0,\lambda)=1$, $U_0(\varphi(x,\lambda))=0$, and consequently $\varphi'(0,\lambda)=h$. In (1.6.74) we put $a=0$. Since $U_0(\varphi_{kj})=0$ we get $U_0(\Phi)=1$.
2) In order to prove (1.6.71) we first assume that

$$\Omega_1:=\Big(\sum_{k=0}^\infty\big((k+1)^2\xi_k\big)^2\Big)^{1/2}<\infty.\eqno(1.6.75)$$

In this case, one can obtain from (1.6.60) by differentiation that

$$H''(x)=(E-H(x))\tilde H''(x)(E-H(x))-H'(x)\tilde H'(x)(E-H(x))-(E-H(x))\tilde H'(x)H'(x).\eqno(1.6.76)$$

It follows from (1.6.30), (1.6.56) and (1.6.61) that the series in (1.6.76) converge absolutely and uniformly for $x\in[0,\pi]$, $H_{ni,kj}(x)\in C^2[0,\pi]$, and

$$|H''_{ni,kj}(x)|\le C(n+k+1)\xi_k.\eqno(1.6.77)$$

Furthermore, using (1.6.59) we calculate

$$\psi''_{ni}(x)=\tilde\psi''_{ni}(x)-\sum_{(k,j)\in V}H_{ni,kj}(x)\tilde\psi''_{kj}(x)-2\sum_{(k,j)\in V}H'_{ni,kj}(x)\tilde\psi'_{kj}(x)-\sum_{(k,j)\in V}H''_{ni,kj}(x)\tilde\psi_{kj}(x).\eqno(1.6.78)$$

Since $\tilde\ell\tilde\varphi_{ni}(x)=\lambda_{ni}\tilde\varphi_{ni}(x)$ it follows that

$$\tilde\psi''_{n0}(x)=(\tilde q(x)-\lambda_{n0})\tilde\psi_{n0}(x)+\chi_n(\lambda_{n1}-\lambda_{n0})\tilde\varphi_{n1}(x),\qquad \tilde\psi''_{n1}(x)=(\tilde q(x)-\lambda_{n1})\tilde\psi_{n1}(x),$$

hence

$$\tilde\psi''_{ni}(x)\in C[0,\pi],\qquad |\tilde\psi''_{ni}(x)|\le C(n+1)^2,\qquad (n,i)\in V.$$

From this and from (1.6.57), (1.6.61), (1.6.65), (1.6.77) and (1.6.30) we deduce that the series in (1.6.78) converge absolutely and uniformly for $x\in[0,\pi]$, and

$$\psi''_{ni}(x)\in C[0,\pi],\qquad |\psi''_{ni}(x)|\le C(n+1)^2,\qquad (n,i)\in V.$$

On the other hand, it follows from the proof of Lemma 1.6.8 and from (1.6.75) that in our case $q(x)-\tilde q(x)\in C[0,\pi]$. Together with (1.6.67) this implies that

$$\varphi''_{ni}(x)\in C[0,\pi],\qquad |\varphi''_{ni}(x)|\le C(n+1)^2,\qquad |\varphi''_{n0}(x)-\varphi''_{n1}(x)|\le C\xi_n(n+1)^2,\qquad (n,i)\in V.$$
Using (1.6.19) we calculate

$$-\tilde\varphi''_{ni}(x)+q(x)\tilde\varphi_{ni}(x)=-\varphi''_{ni}(x)+q(x)\varphi_{ni}(x)
+\sum_{k=0}^\infty\Big(\tilde P_{ni,k0}(x)\big(-\varphi''_{k0}(x)+q(x)\varphi_{k0}(x)\big)-\tilde P_{ni,k1}(x)\big(-\varphi''_{k1}(x)+q(x)\varphi_{k1}(x)\big)\Big)$$
$$-2\tilde\varphi_{ni}(x)\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}\tilde\varphi_{k0}(x)\varphi'_{k0}(x)-\frac1{\alpha_{k1}}\tilde\varphi_{k1}(x)\varphi'_{k1}(x)\Big)
-\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}\big(\tilde\varphi_{ni}(x)\tilde\varphi_{k0}(x)\big)'\varphi_{k0}(x)-\frac1{\alpha_{k1}}\big(\tilde\varphi_{ni}(x)\tilde\varphi_{k1}(x)\big)'\varphi_{k1}(x)\Big).$$

Taking (1.6.21) and (1.6.25) into account we derive

$$\lambda_{ni}\tilde\varphi_{ni}(x)=\ell\varphi_{ni}(x)+\sum_{k=0}^\infty\Big(\tilde P_{ni,k0}(x)\,\ell\varphi_{k0}(x)-\tilde P_{ni,k1}(x)\,\ell\varphi_{k1}(x)\Big)$$
$$+\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}\langle\tilde\varphi_{ni}(x),\tilde\varphi_{k0}(x)\rangle\varphi_{k0}(x)-\frac1{\alpha_{k1}}\langle\tilde\varphi_{ni}(x),\tilde\varphi_{k1}(x)\rangle\varphi_{k1}(x)\Big).\eqno(1.6.79)$$
Using (1.6.10) and (1.6.18) we calculate similarly

$$\lambda\tilde\varphi(x,\lambda)=\ell\varphi(x,\lambda)+\sum_{k=0}^\infty\Big(\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\alpha_{k0}(\lambda-\lambda_{k0})}\,\ell\varphi_{k0}(x)-\frac{\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\alpha_{k1}(\lambda-\lambda_{k1})}\,\ell\varphi_{k1}(x)\Big)$$
$$+\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k0}(x)\rangle\varphi_{k0}(x)-\frac1{\alpha_{k1}}\langle\tilde\varphi(x,\lambda),\tilde\varphi_{k1}(x)\rangle\varphi_{k1}(x)\Big),\eqno(1.6.80)$$

$$\lambda\tilde\Phi(x,\lambda)=\ell\Phi(x,\lambda)+\sum_{k=0}^\infty\Big(\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\alpha_{k0}(\lambda-\lambda_{k0})}\,\ell\varphi_{k0}(x)-\frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\alpha_{k1}(\lambda-\lambda_{k1})}\,\ell\varphi_{k1}(x)\Big)$$
$$+\sum_{k=0}^\infty\Big(\frac1{\alpha_{k0}}\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k0}(x)\rangle\varphi_{k0}(x)-\frac1{\alpha_{k1}}\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k1}(x)\rangle\varphi_{k1}(x)\Big).\eqno(1.6.81)$$
It follows from (1.6.79) that

ni

ni
(x) =
ni
(x) +

k=0
_

P
ni,k0
(x)
k0
(x)

P
ni,k1
(x)
k1
(x)
_
+

k=0
_
(
ni

k0
)

P
ni,k0
(x)
k0
(x) (
ni

k1
)

P
ni,k1
(x)
k1
(x)
_
,
and consequently we arrive at (1.6.37) where

ni
(x) :=
ni
(x)
ni

ni
(x).
Then (1.6.36) is valid, where

n1
(x) =
n1
(x),
n0
(x) =
n
(
n0
(x)
n1
(x)),
n
:=
_

1
n
,
n
,= 0,
0,
n
= 0.
Moreover,
[
ni
(x)[ C(n + 1)
2
, [
n0
(x)
n1
(x)[ C
n
(n + 1)
2
,
70
and hence
[
ni
(x)[ C(n + 1)
2
. (1.6.82)
It follows from (1.6.36), (1.6.30), (1.6.82) and (1.6.75) that
[
ni
(x)[

k,j
[H
ni,kj
(x)
kj
(x)[ C

k=0
(k + 1)
2

k
[n k[ + 1
.
Since
k + 1
(n + 1)([n k[ + 1)
1,
we obtain
[
ni
(x)[ C(n + 1)

k=0
(k + 1)
k
C(n + 1). (1.6.83)
Using (1.6.83) instead of (1.6.82) and repeating the arguments, we infer
[
ni
(x)[

k,j
[H
ni,kj
(x)
kj
(x)[ C

k=0
(k + 1)
k
[n k[ + 1
C.
Then, by virtue of Lemma 1.6.6,
ni
(x) = 0, and consequently
ni
(x) = 0. Thus,
ni
(x) =

ni

ni
(x).
Furthermore, it follows from (1.6.80) that
(x, ) = (x, ) +

k=0
_
(x, ),
k0
(x))

k0
(
k0
)

k0

k0
(x)
(x, ),
k1
(x))

k1
(
k1
)

k1

k1
(x)
_
+

k=0
_
(x, ),
k0
(x))

k0
(
k0
)
(
k0
)
k0
(x)
(x, ),
k1
(x))

k1
(
k1
)
(
k1
)
k1
(x)
_
.
From this, by virtue of (1.6.10), it follows that (x, ) = (x, ). Analogously, using
(1.6.81), we obtain (x, ) = (x, ). Thus, (1.6.71) is proved for the case when (1.6.75)
is fullled.
3) Let us now consider the general case when (1.6.3) holds. Choose numbers

n,(j)
,
n,(j)

n0
, j 1, such that

1,(j)
:=
_

k=0
(k + 1)
2

k,(j)
)
2
_
1/2
< ,

0,(j)
:=
_

k=0
((k + 1)
k,(j)
)
2
_
1/2
0 as j ,
where

k,(j)
:= [
k,(j)

k
[ +[
k,(j)

k
[,
k,(j)
:= [
k,(j)

k
[ +[
k,(j)

k
[.
For each xed j 1, we solve the corresponding main equation:

(j)
(x) = (E +

H
(j)
(x))
(j)
(x),
and construct the functions
(j)
(x, ) and the boundary value problem L(q
(j)
(x), h
(j)
, H
(j)
).
Using Lemma 1.5.1 one can show that
lim
j
|q
(j)
q|
L
2
= 0, lim
j
h
(j)
= h,
71
lim
j
max
0x
[
(j)
(x, ) (x, )[ = 0. (1.6.84)
Denote by
0
(x, ) the solution of equation (1.1.1) under the conditions
0
(0, ) = 1 ,

0
(0, ) = h. According to Lemma 1.5.3,
lim
j
max
0x
[
(j)
(x, )
0
(x, )[ = 0.
Comparing this relation with (1.6.84) we conclude that
0
(x, ) = (x, ), i.e. (x, ) =
(x, ). Similarly we get (x, ) = (x, ).
4) Denote () := V (). It follows from (1.6.73) and (1.6.74) for a = that

() = () +

k=0
_
(x, ),
k0
(x))
|x=

k0
(
k0
)
(
k0
)
(x, ),
k1
(x))
|x=

k1
(
k1
)
(
k1
)
_
, (1.6.85)
0 = V () +

k=0
_

(x, ),
k0
(x))
|x=

k0
(
k0
)
(
k0
)

(x, ),
k1
(x))
|x=

k1
(
k1
)
(
k1
)
_
. (1.6.86)
In (1.6.85) we set =
n1
:
0 = (
n1
) +

k=0
_

P
n1,k0
()(
k0
)

P
n1,k1
()(
k1
)
_
= 0.
Since

P
n1,k1
() =
nk
we get

k=0

P
n1,k0
()(
k0
) = 0.
Then, by virtue of Lemma 1.6.6, (
k0
) = 0, k 0.
Substituting this into (1.6.86) and using the relation

(x, ),
k1
(x))
|x=
= 0, we obtain
V () = 0, i.e. (1.6.72) is valid. Notice that we additionally proved that (
n
) = 0, i.e.
the numbers
n

n0
are eigenvalues of L. 2
It follows from (1.6.18) for x = 0 that
M() =

M() +

k=0
_
1

k0
(
k0
)

1

k1
(
k1
)
_
.
According to Theorem 1.4.6,

M() =

k=0
1

k1
(
k1
)
,
hence
M() =

k=0
1

k
(
k
)
.
Thus, the numbers
n
,
n

n0
are the spectral data for the constructed boundary value
problem L. Notice that if (1.1.23) is valid we can choose

L = L( q(x),

h,

H) with q(x) W
N
2
such that n
N+1

n
l
2
, and obtain that q(x) W
N
2
. This completes the proof of Theorem
1.6.2. 2
72
Example 1.6.1. Let
n
= n
2
(n 0),
n
=

2
(n 1), and let
0
> 0 be
an arbitrary positive number. We choose

L such that q(x) = 0,

h =

H = 0. Then

n
= 0 (n 0),
n
=

2
(n 1), and
0
= . Denote a :=
1

. Clearly, a >
1

.
Then, (1.6.19) and (1.6.21) yield
1 = (1 +ax)
00
(x),
0
(x) = a
00
(x),
and consequently,

00
(x) =
1
1 + ax
,
0
(x) =
a
1 +ax
.
Using (1.6.25) and (1.6.26) we calculate
q(x) =
2a
2
(1 + ax)
2
, h = a, H =
a
1 + a
=
a
0

. (1.6.87)
By (1.6.10),
(x, ) = cos x
a
1 + ax
sin x

.
1.6.3. The nonselfadjoint case. In the nonselfadjoint case Theorem 1.6.1 remains
valid, i.e. by necessity the main equation (1.6.33) is uniquely solvable. Hence, the results of
Subsection 1.6.1 are valid, and Algorithm 1.6.1 can be applied for the solution of the inverse
problem also in the nonselfadjoint case, but the suciency part in Theorem 1.6.2 must be
corrected, since Lemma 1.6.6 is not valid in the general case. In the nonselfadjoint case we
must require that the main equation is uniquely solvable (see Condition S in Theorem 1.6.3).
As it is shown in Example 1.6.2, Condition S is essential and cannot be omitted. On the
other hand, we provide here classes of operators for which the unique solvability of the main
equation can be proved.
Let us consider a boundary value problem L of the form (1.1.1)-(1.1.2) where q(x)
L
2
(0, ) is a complex-valued function, and h, H are complex numbers. For simplicity, we
conne ourselves to the case when all zeros of the characteristic function () are simple.
In this case we shall write L V. If L V we have

n
,= 0,
n
,=
k
(n ,= k), (1.6.88)
and (1.5.8) holds.
Theorem 1.6.3 For complex numbers
n
,
n

n0
to be the spectral data for a certain
L(q(x), h, H) V with q(x) L
2
(0, ), it is necessary and sucient that
1) the relations (1.6.88) and (1.5.8) hold;
2) (Condition S) for each xed x [0, ], the linear bounded operator E+

H(x), acting
from m to m, has a unique inverse operator. Here

L V is taken such that = .
The boundary value problem L(q(x), h, H) can be constructed by Algorithm 1.6.1.
The proof of Theorem 1.6.3 is essentially the same as the proof of Theorem 1.6.2, but
only Lemma 1.6.6 must be omitted.
73
Example 1.6.2. Consider Example 1.6.1, but let
0
now be an arbitrary complex
number. Then the main equation of the inverse problem takes the form
1 = (1 + ax)
00
(x),
and Condition S means that 1 + ax ,= 0 for all x [0, ], i.e. a / (, 1/]. Hence,
Condition S is fullled if and only if
0
/ (, 0]. Thus, the numbers n
2
,
n

n0
;
n
=

2
(n 1), are the spectral data if and only if
0
/ (, 0]. The boundary value problem
can be constructed by (1.6.87). For the selfadjoint case we have
0
> 0, and Condition S
is always fullled.
In Theorem 1.6.3 one of the conditions under which arbitrary complex numbers

n
,
n

n0
are the spectral data for a boundary value problem L of the form (1.1.1)-(1.1.2)
is the requirement that the main equation is uniquely solvable (Condition S). This condition
is dicult to check in the general case. In this connection it is important to point out classes
of operators for which the unique solvability of the main equation can be checked. Here we
single out three such classes which often appear in applications.
(i) The selfadjoint case. In this case Condition S is fullled automatically (see Theorem
1.6.2 above).
(ii) Finite-dimensional perturbations. Let a model boundary value problem

L with the
spectral data

n
,
n

n0
be given. We change a nite subset of these numbers. In other
words, we consider numbers
n
,
n

n0
such that
n
=

n
and
n
=
n
for n > N, and
arbitrary in the rest. Then, according to (1.6.19), the main equation becomes the following
linear algebraic system:

ni
(x) =
ni
(x) +
N

k=0
(

P
ni,k0
(x)
k0
(x)

P
ni,k1
(x)
k1
(x)), n = 0, N, i = 0, 1, x [0, ],
and Condition S is equivalent to the condition that the determinant of this system diers
from zero for each x [0, ]. Such perturbations are very popular in applications. We note
that for the selfadjoint case the determinant is always nonzero.
(iii) Local solvability of the main equation. For small perturbations of the spectral data
Condition S is fullled automatically. More precisely, the following theorem is valid.
Theorem 1.6.4 Let

L = L( q(x),

h,

H) V be given. There exists a > 0 (which
depends on

L ) such that if complex numbers
n
,
n

n0
satisfy the condition < ,
then there exists a unique boundary value problem L(q(x), h, H) V with q(x) L
2
(0, )
for which the numbers
n
,
n

n0
are the spectral data, and
|q q|
L
2
(0,)
< C, [h

h[ < C, [H

H[ < C, (1.6.89)
where C depends on

L only.
Proof. In this proof C denotes various constants which depend on

L only. Since
< , the asymptotical formulae (1.5.8) are fullled. Choose
0
> 0 such that if <
0
then (1.6.88) is valid.
74
According to (1.6.32),
|

H(x)| C sup
n

n
[n k[ + 1
< C.
Choose
0
such that if < , then |

H(x)| 1/2 for x [0, ]. In this case there


exists (E +

H(x))
1
, and
|(E +

H(x))
1
| 2.
Thus, all conditions of Theorem 1.6.3 are fullled. By Theorem 1.6.3, there exists a unique
boundary value problem L(q(x), h, H) V for which the numbers
n
,
n

n0
are the
spectral data. Moreover, (1.6.68) and (1.6.69) are valid. Repeating now the arguments in
the proof of Lemma 1.6.8 we get
max
0x
[
0
(x)[ C, |(x)|
L
2
(0,)
C.
Together with (1.6.25)-(1.6.26) this yields (1.6.89). 2
Similarly we also can obtain the stability of the solution of the inverse problem in the
uniform norm; more precisely we get:
Theorem 1.6.5. Let

L = L( q(x),

h,

H) V be given. There exists > 0 (which
depends on

L ) such that if complex numbers
n
,
n

n0
satisfy the condition
1
< ,
then there exists a unique boundary value problem L(q(x), h, H) V for which the numbers

n
,
n

n0
are the spectral data. Moreover, the function q(x) q(x) is continuous on
[0, ], and
max
0x
[q q[ < C
1
, [h

h[ < C
1
, [H

H[ < C
1
,
where C depends on

L only.
Remark 1.6.4. Using the method of spectral mappings we can solve the inverse
problem not only in L
2
(0, ) and W
N
2
(0, ) but also for other classes of potentials. As
example we formulate here the following theorem for the non-selfadjoint case.
Theorem 1.6.6. For complex numbers
n
,
n

n0
to be the spectral data for a certain
L(q(x), h, H) V with q(x) D L(0, ), it is necessary and sucient that
1) (1.6.88) holds,
2) (Asymptotics) there exists

L = L( q(x),

h,

H) V with q(x) D such that n
n
l
2
,
3) (Condition S) for each xed x [0, ], the linear bounded operator E +

H(x), acting
from m to m, has a unique inverse operator,
4) (x) D, where the function (x) is dened by (1.6.21).
The boundary value problem L(q(x), h, H) can be constructed by Algorithm 1.6.1.
1.7. THE METHOD OF STANDARD MODELS
In the method of standard models we construct a sequence of model operators which
approximate in an adequate sense the unknown operator and allow us to construct the
potential step by step. The method of standard models provides an eective algorithm for
75
the solution of inverse problems. This method has a wide area for applications. It works for
many important classes of inverse problems when other methods turn out to be unsuitable.
For example, in [yur9] the so-called incomplete inverse problems for higher-order dierential
operators were studied when only some part of the spectral information is accessible to
measurement and when there is a priori information about the operator or its spectrum.
In [yur27] the method of standard models was applied for solving the inverse problem for
systems of dierential equations with nonlinear dependence on the spectral parameter. The
inverse problem for integro-dierential operators was treated in [yur13]. This method was
also used for studying an inverse problem in elasticity theory [yur12].
However the method of standard models requires rather strong restrictions on the op-
erator. For example, for the Sturm-Liouville operator the method works in the classes of
analytic or piecewise analytic potentials on the interval [0, ]. We can also deal with more
general classes, for example, the class of piecewise operator-analytic functions (see [fag2]) or
other classes of functions which can be expanded in series which generalize the Taylor series.
In this section we present the method of standard models using the Sturm-Liouville
operator as a model. In order to show ideas we conne ourselves to the case when the
potential q(x) in the boundary value problem (1.1.1)-(1.1.2) is an analytic function on
[0, ]. Other, more complicated cases can be treated similarly.
Let (x, ) be the Weyl solution of (1.1.1) which satises the conditions U() =
1, V () = 0, and let M() := (0, ) be the Weyl function for the boundary value prob-
lem L (see Subsection 1.4.4). From several equivalent formulations of the inverse problems
which were studied in Section 1.4 we will deal with here Inverse Problem 1.4.4 of recovering
L from the Weyl function.
Let the Weyl function M() of the boundary value problem L be given. Our goal is to
construct the potential q(x) which is an analytic function on [0, ] (to simplify calculations
we assume that the coecients h and H are known). We take a model boundary value
problem

L = L( q(x), h, H) with an analytic potential q(x). Since

(x, ) + q(x)(x, ) = (x, ),

(x, ) + q(x)

(x, ) =

(x, ),
we get
(q(x) q(x))(x, )

(x, ) = (

(x, )

(x, ) (x, )

(x, ))

,
and consequently
_

0
q(x)(x, )

(x, ) dx =

M(), (1.7.1)
where q(x) = q(x) q(x),

M() = M()

M().
We are interested in the asymptotics of the terms in (1.7.1). First we prove an auxiliary
assertion. Denote Q = : arg [
0
,
0
],
0
> 0. Then there exists
0
> 0 such
that
[Im[
0
[[ for Q. (1.7.2)
Lemma 1.7.1. Let
r(x) =
x
k
k!
_
+p(x)
_
, H(x, ) = exp(2ix)
_
1 +
(x, )

_
, x [0, a],
where p(x) C[0, a], p(0) = 0, and where the function (x, ) is continuous and bounded
for x [0, a], Q, [[

. Then for , Q,
_
a
0
r(x)H(x, ) dx =
(1)
k
(2i)
k+1
( +o(1)).
76
Proof. We calculate
(2i)
k+1
(1)
k
_
a
0
r(x)H(x, ) dx = I
1
() + I
2
() +I
3
(),
where
I
1
() = (2i)
k+1
(1)
k
_
a
0
x
k
k!
exp(2ix) dx,
I
2
() = (2i)
k+1
(1)
k
_
a
0
x
k
k!
p(x) exp(2ix) dx,
I
3
() = (2i)
k+1
()
k
_
a
0
r(x) exp(2ix)(x, ) dx.
Since
_

0
x
k
k!
exp(2ix) dx =
(1)
k
(2i)
k+1
, Q,
it follows that I
1
() 0 as [[ , Q.
Furthermore, take > 0 and choose = () such that for x [0, ], [p(x)[ <

2

k+1
0
,
where
0
is dened in (1.7.2). Then, using (1.7.2) we infer
[I
2
()[ <

2
(2[[
0
)
k+1
_

0
x
k
k!
exp(2
0
[[x) dx + (2[[)
k+1
_
a

x
k
k!
[p(x)[ exp(2
0
[[x) dx
<

2
+ (2[[)
k+1
exp(2
0
[[)
_
a
0
(x +)
k
k!
[p(x +)[ exp(2
0
[[x) dx.
By arbitrariness of we obtain that I
2
() 0 for [[ , Q.
Since [( +p(x))(x, )[ < C, then for [[ , Q,
[I
3
()[ < C[[
k
_
a
0
x
k
k!
exp(2
0
[[x) dx
C
[[
k+1
0
0.
2
Suppose that for a certain fixed $k\ge0$ the Taylor coefficients $q_j:=q^{(j)}(0)$, $j=\overline{0,k-1}$, have already been found. Let us choose a model boundary value problem $\tilde L$ such that the first $k$ Taylor coefficients of $q$ and $\tilde q$ coincide, i.e. $\tilde q_j=q_j$, $j=\overline{0,k-1}$. Then, using (1.7.1) and Lemma 1.7.1, we can calculate the next Taylor coefficient $q_k=q^{(k)}(0)$. Namely, the following assertion is valid.

Lemma 1.7.2. Fix $k\ge0$. Let the functions $q(x)$ and $\tilde q(x)$ be analytic for $x\in[0,a]$, $a>0$, with $\hat q_j:=q_j-\tilde q_j=0$ for $j=\overline{0,k-1}$. Then
$\hat q_k=\frac14(-1)^k\lim_{|\rho|\to\infty,\ \rho\in Q}(2i\rho)^{k+3}\hat M(\lambda). \qquad (1.7.3)$
Proof. It follows from (1.1.10), (1.1.16) and (1.4.9) that
$\Phi(x,\lambda)=\frac{1}{i\rho}\exp(i\rho x)\Big(1+O\Big(\frac1\rho\Big)\Big).$
Hence, taking (1.7.1) into account, we have
$\hat M(\lambda)=\int_0^\pi\hat q(x)\Phi(x,\lambda)\tilde\Phi(x,\lambda)\,dx$
$=\int_0^a\frac{x^k}{k!}\big(\hat q^{(k)}(0)+p(x)\big)\frac{1}{(i\rho)^2}\exp(2i\rho x)\Big(1+\frac{\zeta(x,\rho)}{\rho}\Big)dx+\int_a^\pi\hat q(x)\Phi(x,\lambda)\tilde\Phi(x,\lambda)\,dx.$
Using Lemma 1.7.1 we obtain for $|\rho|\to\infty$, $\rho\in Q$:
$\hat M(\lambda)=\frac{4(-1)^k}{(2i\rho)^{k+3}}\big(\hat q^{(k)}(0)+o(1)\big). \qquad\Box$
Thus, we arrive at the following algorithm for the solution of Inverse Problem 1.4.4 in the class of analytic potentials.

Algorithm 1.7.1. Let the Weyl function $M(\lambda)$ of the boundary value problem (1.1.1)-(1.1.2) with an analytic potential be given. Then:
(i) We calculate $q_k=q^{(k)}(0)$, $k\ge0$. For this purpose we successively perform the following operations for $k=0,1,2,\ldots$: we construct a model boundary value problem $\tilde L$ with an analytic potential $\tilde q(x)$ such that $\tilde q_j=q_j$, $j=\overline{0,k-1}$ (and arbitrary otherwise), and we calculate $q_k=q^{(k)}(0)$ by (1.7.3).
(ii) We construct the function $q(x)$ by the formula
$q(x)=\sum_{k=0}^\infty q_k\frac{x^k}{k!},\qquad 0\le x<R,$
where
$R=\Big(\varlimsup_{k\to\infty}\Big(\frac{|q_k|}{k!}\Big)^{1/k}\Big)^{-1}.$
If $R<\pi$, then for $R\le x\le\pi$ the function $q(x)$ (which is analytic on $[0,\pi]$) is constructed by analytic continuation or by the step method which is described below in Remark 1.7.1.
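Step (ii) is the classical Cauchy-Hadamard reconstruction of an analytic function from its Taylor coefficients. A small sketch (my own illustration, not from the book) with the test potential $q(x)=1/(1+x)$, whose coefficients $q_k=q^{(k)}(0)=(-1)^k k!$ give $R=1$:

```python
import math

# Taylor coefficients q_k = q^{(k)}(0) of the test potential q(x) = 1/(1+x)
K = 60
q_coeff = [(-1)**k * math.factorial(k) for k in range(K)]

# Cauchy-Hadamard: R = ( limsup (|q_k|/k!)^{1/k} )^{-1}; here |q_k|/k! = 1, so R = 1
R = 1.0 / max((abs(q_coeff[k]) / math.factorial(k))**(1.0 / k) for k in range(1, K))

def q_series(x):
    # partial sum of q(x) = sum q_k x^k / k!, valid for 0 <= x < R
    return sum(q_coeff[k] * x**k / math.factorial(k) for k in range(K))

x = 0.5
print(R, q_series(x), 1 / (1 + x))   # R = 1.0; both values ~ 0.6666...
```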
Remark 1.7.1. The method of standard models also works when $q(x)$ is a piecewise analytic function. Indeed, the functions
$\Phi_a(x,\lambda)=\frac{\Phi(x,\lambda)}{\Phi'(a,\lambda)},\qquad M_a(\lambda)=\frac{\Phi(a,\lambda)}{\Phi'(a,\lambda)}=\frac{S(a,\lambda)+M(\lambda)\varphi(a,\lambda)}{S'(a,\lambda)+M(\lambda)\varphi'(a,\lambda)}$
are the Weyl solution and the Weyl function for the interval $x\in[a,\pi]$, respectively. Suppose that $q(x)$ has already been constructed on $[0,a]$. Then we can calculate $M_a(\lambda)$ and go on to the interval $[a,\pi]$.
1.8. LOCAL SOLUTION OF THE INVERSE PROBLEM

In this section we present a method for the local solution of Inverse Problem 1.4.2 of recovering Sturm-Liouville operators from two spectra. Without loss of generality we will consider the case when $H=0$. In this method, which is due to Borg [Bor1], the inverse problem is reduced to a nonlinear integral equation (see (1.8.13)) which can be solved locally. An important role in Borg's method is played by products of eigenfunctions of the boundary value problems considered. In order to derive and study Borg's nonlinear equation one must prove that such products are complete or form a Riesz basis in $L_2(0,\pi)$. For uniqueness theorems it is sufficient to prove the completeness of the products (see Subsection 1.4.3). For the local solution of the inverse problem and for studying the stability of the solution such products must form a Riesz basis. For the convenience of the reader we provide in Subsection 1.8.5 the necessary information about Riesz bases in a Hilbert space.

We note that for Sturm-Liouville operators Borg's method is weaker than the Gelfand-Levitan method and the method of spectral mappings. However, Borg's method turns out to be useful when other methods do not work (see, for example, [kha1], [dub1], [yur13] and the references therein).
1.8.1. Derivation of the Borg equation. Let $\lambda_{ni}=\rho_{ni}^2$, $n\ge0$, $i=1,2$, be the eigenvalues of the boundary value problems $L_i$:
$\ell y:=-y''+q(x)y=\lambda y, \qquad (1.8.1)$
$y'(0)-hy(0)=y^{(i-1)}(\pi)=0, \qquad (1.8.2)$
where $q(x)\in L_2(0,\pi)$ is a real function and $h$ is a real number. Then (see Section 1.1)
$\rho_{n1}=n+\frac12+\frac{a}{n}+\frac{\kappa_{n1}}{n},\qquad \rho_{n2}=n+\frac{a}{n}+\frac{\kappa_{n2}}{n}, \qquad (1.8.3)$
where
$\{\kappa_{ni}\}\in l_2,\qquad a=\frac1\pi\Big(h+\frac12\int_0^\pi q(t)\,dt\Big).$
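For a constant potential the eigenvalues of $L_2$ are explicit and (1.8.3) can be verified directly: with $q\equiv c$ and $h=0$ one has $y=\cos(\sqrt{\lambda-c}\,x)$, the condition $y'(\pi)=0$ forces $\sqrt{\lambda-c}=n$, so $\rho_{n2}=\sqrt{n^2+c}\approx n+a/n$ with $a=c/2$. A quick numerical check (my own illustration, not from the book):

```python
import math

c = 1.0                 # constant potential q(x) = c, boundary constant h = 0
a = c / 2               # a = (1/pi) * (h + (1/2) * integral of q) = c/2

for n in range(5, 20):
    rho_exact = math.sqrt(n**2 + c)   # eigenvalues of L2: rho_{n2}^2 = n^2 + c
    rho_asymp = n + a / n             # leading terms of (1.8.3)
    # the remainder is O(1/n^3) here, since kappa_{n2} ~ -c^2/(8 n^2)
    assert abs(rho_exact - rho_asymp) < c**2 / (4 * n**3)
print("asymptotics (1.8.3) confirmed for q = const")
```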
Let $L_i$ and $\tilde L_i$, $i=1,2$, be such that $a=\tilde a$; then
$\Lambda:=\Big(\sum_{n=0}^\infty\big(|\lambda_{n1}-\tilde\lambda_{n1}|^2+|\lambda_{n2}-\tilde\lambda_{n2}|^2\big)\Big)^{1/2}<\infty.$
Denote
$y_{ni}(x)=\varphi(x,\tilde\lambda_{ni}),\quad \tilde y_{ni}(x)=\tilde\varphi(x,\tilde\lambda_{ni}),\quad s_{ni}(x)=S(x,\tilde\lambda_{ni}),\quad \tilde s_{ni}(x)=\tilde S(x,\tilde\lambda_{ni}),$
$G_{ni}(x,t)=\begin{cases} S(x,\tilde\lambda_{ni})C(t,\tilde\lambda_{ni})-S(t,\tilde\lambda_{ni})C(x,\tilde\lambda_{ni}), & 0\le t\le x\le\pi,\\ 0, & 0\le x<t\le\pi,\end{cases}$
where $\varphi(x,\lambda)$, $C(x,\lambda)$ and $S(x,\lambda)$ are solutions of (1.8.1) under the conditions $C(0,\lambda)=\varphi(0,\lambda)=S'(0,\lambda)=1$, $C'(0,\lambda)=S(0,\lambda)=0$, $\varphi'(0,\lambda)=h$.
Since
$-y_{ni}''(x)+q(x)y_{ni}(x)=\tilde\lambda_{ni}y_{ni}(x),\qquad -\tilde y_{ni}''(x)+\tilde q(x)\tilde y_{ni}(x)=\tilde\lambda_{ni}\tilde y_{ni}(x),$
we get
$\int_0^\pi r(x)y_{ni}(x)\tilde y_{ni}(x)\,dx=\int_0^\pi\big(y_{ni}(x)\tilde y_{ni}''(x)-y_{ni}''(x)\tilde y_{ni}(x)\big)\,dx=\big[y_{ni}(x)\tilde y_{ni}'(x)-y_{ni}'(x)\tilde y_{ni}(x)\big]_0^\pi, \qquad (1.8.4)$
where $r:=\tilde q-q$. Let
$p=\frac12\int_0^\pi r(x)\,dx. \qquad (1.8.5)$
From the equality $a=\tilde a$ it follows that $p=h-\tilde h$; hence we infer from the boundary conditions that (1.8.4) takes the form
$\int_0^\pi r(x)y_{ni}(x)\tilde y_{ni}(x)\,dx=y_{ni}(\pi)\tilde y_{ni}'(\pi)-y_{ni}'(\pi)\tilde y_{ni}(\pi)+p,\quad n\ge0,\ i=1,2. \qquad (1.8.6)$
Since $\tilde y_{ni}(x)$ are eigenfunctions of the boundary value problems $\tilde L_i$, we have
$\tilde y_{n1}(\pi)=0,\qquad \tilde y_{n2}'(\pi)=0,\quad n\ge0. \qquad (1.8.7)$
Furthermore, one can easily verify by differentiation that the function $\tilde y_{ni}(x)$ satisfies the following Volterra integral equation:
$\tilde y_{ni}(x)=y_{ni}(x)-p\,s_{ni}(x)+\int_0^\pi G_{ni}(x,t)r(t)\tilde y_{ni}(t)\,dt,\quad n\ge0,\ i=1,2. \qquad (1.8.8)$
Solving (1.8.8) by the method of successive approximations we deduce
$\tilde y_{ni}(x)=y_{ni}(x)-p\,s_{ni}(x)+\varepsilon_{ni}(x),\quad n\ge0,\ i=1,2, \qquad (1.8.9)$
where
$\varepsilon_{ni}(x)=\sum_{j=1}^\infty\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}G_{ni}(x,t_1)G_{ni}(t_1,t_2)\cdots G_{ni}(t_{j-1},t_j)\,r(t_1)\cdots r(t_j)\big(y_{ni}(t_j)-p\,s_{ni}(t_j)\big)\,dt_1\cdots dt_j. \qquad (1.8.10)$
Substituting (1.8.9)-(1.8.10) into (1.8.6) and using (1.8.5) and (1.8.7), we get
$\int_0^\pi r(x)u_{ni}(x)\,dx=\eta_{ni},\quad n\ge0,\ i=1,2, \qquad (1.8.11)$
where
$u_{ni}(x)=y_{ni}^2(x)-\frac12,$
$\eta_{n1}=y_{n1}(\pi)\big(y_{n1}'(\pi)-p\,s_{n1}'(\pi)+\varepsilon_{n1}'(\pi)\big)+\int_0^\pi p\,r(x)s_{n1}(x)y_{n1}(x)\,dx-\int_0^\pi r(x)y_{n1}(x)\varepsilon_{n1}(x)\,dx,$
$\eta_{n2}=-y_{n2}'(\pi)\big(y_{n2}(\pi)-p\,s_{n2}(\pi)+\varepsilon_{n2}(\pi)\big)+\int_0^\pi p\,r(x)s_{n2}(x)y_{n2}(x)\,dx-\int_0^\pi r(x)y_{n2}(x)\varepsilon_{n2}(x)\,dx.$
It follows from (1.4.7) that
$u_{ni}(x)=\frac12\Big(\cos2\tilde\rho_{ni}x+\int_0^xV_1(x,t)\cos2\tilde\rho_{ni}t\,dt\Big), \qquad (1.8.12)$
where $V_1(x,t)$ is a continuous function. We introduce the functions $\{g_n(x)\}_{n\ge0}$ and the numbers $\{z_n\}_{n\ge0}$ by the formulae
$g_{2n}(x)=u_{n2}(x),\quad g_{2n+1}(x)=u_{n1}(x),\quad z_{2n}=2\tilde\rho_{n2},\quad z_{2n+1}=2\tilde\rho_{n1}.$
Then, according to (1.8.3) and (1.8.12),
$z_n=n+\frac{a_1}{n}+\frac{\kappa_n}{n},\quad \{\kappa_n\}\in l_2\ (a_1:=4a),\qquad g_n(x)=\frac12\Big(\cos z_nx+\int_0^xV_1(x,t)\cos z_nt\,dt\Big).$
Hence, by virtue of Propositions 1.8.6 and 1.8.2, the set of functions $\{g_n(x)\}_{n\ge0}$ forms a Riesz basis in $L_2(0,\pi)$. Denote by $\{g_n^*(x)\}_{n\ge0}$ the corresponding biorthonormal basis. Then from (1.8.11) we infer
$r(x)=\sum_{n=0}^\infty\big(\eta_{n2}\,g_{2n}^*(x)+\eta_{n1}\,g_{2n+1}^*(x)\big),$
and consequently, in view of (1.8.5),
$r(x)=f(x)+\sum_{j=1}^\infty\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}H_j(x,t_1,\ldots,t_j)\,r(t_1)\cdots r(t_j)\,dt_1\cdots dt_j, \qquad (1.8.13)$
where
$f(x)=\sum_{n=0}^\infty\big(-y_{n2}'(\pi)y_{n2}(\pi)\,g_{2n}^*(x)+y_{n1}'(\pi)y_{n1}(\pi)\,g_{2n+1}^*(x)\big), \qquad (1.8.14)$
$H_1(x,t_1)=\sum_{n=0}^\infty\Big[-y_{n2}'(\pi)\Big(G_{n2}(\pi,t_1)y_{n2}(t_1)-\frac12 s_{n2}(\pi)\Big)g_{2n}^*(x)$
$\qquad\qquad +\,y_{n1}(\pi)\Big(\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}y_{n1}(t_1)-\frac12 s_{n1}'(\pi)\Big)g_{2n+1}^*(x)\Big], \qquad (1.8.15)$
$H_j(x,t_1,\ldots,t_j)=\sum_{n=0}^\infty\Big[-\big(y_{n2}(t_1)+y_{n2}'(\pi)G_{n2}(\pi,t_1)\big)G_{n2}(t_1,t_2)\cdots G_{n2}(t_{j-2},t_{j-1})\Big(G_{n2}(t_{j-1},t_j)y_{n2}(t_j)-\frac12 s_{n2}(t_{j-1})\Big)g_{2n}^*(x)$
$\qquad -\Big(y_{n1}(t_1)-y_{n1}(\pi)\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}\Big)G_{n1}(t_1,t_2)\cdots G_{n1}(t_{j-2},t_{j-1})\Big(G_{n1}(t_{j-1},t_j)y_{n1}(t_j)-\frac12 s_{n1}(t_{j-1})\Big)g_{2n+1}^*(x)\Big],\quad j\ge2. \qquad (1.8.16)$
Equation (1.8.13) is called the Borg equation.
Remark 1.8.1. In order to prove that the system of functions $\{g_n(x)\}_{n\ge0}$ forms a Riesz basis in $L_2(0,\pi)$ we used above the transformation operator. However, in many cases of practical interest the Borg method can be applied even though the transformation operator does not work. In Subsection 1.8.3 we describe Borg's original idea which allows one to prove that the products of eigenfunctions form a Riesz basis in $L_2(0,\pi)$.
1.8.2. The main theorem. Using the Borg equation we can prove the following theorem on the local solution of the inverse problem and on the stability of the solution.

Theorem 1.8.1. For the boundary value problems $L_i$ of the form (1.8.1)-(1.8.2) there exists $\delta>0$ (which depends on $L_i$) such that if real numbers $\{\tilde\lambda_{ni}\}_{n\ge0}$, $i=1,2$, satisfy the condition $\Lambda<\delta$, then there exists a unique real pair $\tilde q(x)\in L_2(0,\pi)$ and $\tilde h$ for which the numbers $\{\tilde\lambda_{ni}\}_{n\ge0}$, $i=1,2$, are the eigenvalues of $\tilde L_i$. Moreover,
$\|q-\tilde q\|_{L_2}<C\Lambda,\qquad |h-\tilde h|<C\Lambda. \qquad (1.8.17)$
Here and below $C$ denotes various positive constants which depend on $L_i$.
Proof. One can choose $\delta_1>0$ such that if real numbers $\{\tilde\lambda_{ni}\}_{n\ge0}$, $i=1,2$, satisfy the condition $\Lambda<\delta_1$, then $\tilde\lambda_{ni}\ne\tilde\lambda_{kj}$ for $(n,i)\ne(k,j)$, and
$y_{n2}(\pi)\ne0,\qquad y_{n1}'(\pi)\ne0\quad\text{for all }n\ge0. \qquad (1.8.18)$
Indeed, according to Theorem 1.1.1, $\varphi(x,\lambda_{n2})$ is an eigenfunction of the boundary value problem $L_2$, and $\varphi'(\pi,\lambda_{n2})=0$, $\varphi(\pi,\lambda_{n2})\ne0$ for all $n\ge0$. Moreover, by virtue of (1.1.9) and (1.8.3),
$\varphi(\pi,\lambda_{n2})=\cos\rho_{n2}\pi+O\Big(\frac1n\Big)=(-1)^n+O\Big(\frac1n\Big).$
Thus,
$|\varphi(\pi,\lambda_{n2})|\ge C>0.$
On the other hand, using (1.3.11) we calculate
$\varphi(\pi,\tilde\lambda_{n2})-\varphi(\pi,\lambda_{n2})=(\cos\tilde\rho_{n2}\pi-\cos\rho_{n2}\pi)+\int_0^\pi G(\pi,t)(\cos\tilde\rho_{n2}t-\cos\rho_{n2}t)\,dt,$
and consequently
$|\varphi(\pi,\tilde\lambda_{n2})-\varphi(\pi,\lambda_{n2})|<C|\tilde\rho_{n2}-\rho_{n2}|.$
Then for sufficiently small $\Lambda$ we have
$y_{n2}(\pi):=\varphi(\pi,\tilde\lambda_{n2})\ne0\quad\text{for all }n\ge0.$
Similarly one can derive the second inequality in (1.8.18).

It is easy to verify that the following estimates are valid for $n\ge0$, $i=1,2$, $0\le x,t\le\pi$:
$|y_{ni}(x)|<C,\quad |y_{n1}'(\pi)|<C(n+1),\quad \Big|\frac{\partial G_{ni}(x,t)}{\partial x}\Big|<C,\quad |G_{ni}(x,t)|<\frac{C}{n+1},$
$|y_{n2}'(\pi)|<C|\tilde\lambda_{n2}-\lambda_{n2}|,\quad |y_{n1}(\pi)|<\frac{C}{n+1}|\tilde\lambda_{n1}-\lambda_{n1}|,\quad |s_{n1}'(\pi)|<\frac{C}{n+1}.$
Indeed, the estimate $|y_{ni}(x)|<C$ follows from (1.1.9) and (1.8.3). Since $\varphi'(\pi,\lambda_{n2})=0$, we get
$y_{n2}'(\pi)=\varphi'(\pi,\tilde\lambda_{n2})-\varphi'(\pi,\lambda_{n2}).$
By virtue of (1.3.11),
$\varphi'(x,\lambda)=-\rho\sin\rho x+G(x,x)\cos\rho x+\int_0^x\frac{\partial G(x,t)}{\partial x}\cos\rho t\,dt.$
Hence
$\varphi'(\pi,\tilde\lambda_{n2})-\varphi'(\pi,\lambda_{n2})=-\tilde\rho_{n2}\sin\tilde\rho_{n2}\pi+\rho_{n2}\sin\rho_{n2}\pi$
$\qquad +G(\pi,\pi)(\cos\tilde\rho_{n2}\pi-\cos\rho_{n2}\pi)+\int_0^\pi\frac{\partial G(x,t)}{\partial x}\Big|_{x=\pi}(\cos\tilde\rho_{n2}t-\cos\rho_{n2}t)\,dt,$
and consequently
$|y_{n2}'(\pi)|<C|\tilde\lambda_{n2}-\lambda_{n2}|.$
The other estimates are proved similarly.

Using (1.8.14)-(1.8.16) we obtain
$\|f\|<C\Lambda,\qquad \|H_1\|<C\Lambda,\qquad \|H_j\|<C^j\ \ (j\ge2), \qquad (1.8.19)$
where $\|\cdot\|$ is the norm in $L_2$ with respect to all of the arguments. Indeed, according to (1.8.14),
$f(x)=\sum_{n=0}^\infty f_n\,g_n^*(x),$
where
$f_{2n}=-y_{n2}'(\pi)y_{n2}(\pi),\qquad f_{2n+1}=y_{n1}'(\pi)y_{n1}(\pi).$
Then, using the preceding estimates for $y_{ni}(x)$, we get
$|f_{2n}|\le C|\tilde\lambda_{n2}-\lambda_{n2}|,\qquad |f_{2n+1}|\le C|\tilde\lambda_{n1}-\lambda_{n1}|.$
By virtue of (1.8.61) this yields $\|f\|<C\Lambda$. The other estimates in (1.8.19) can be verified similarly.
Consider in $L_2(0,\pi)$ the nonlinear integral equation (1.8.13). We rewrite (1.8.13) in the form
$r=f+Ar,$
where
$Ar=\sum_{j=1}^\infty A_jr,\qquad (A_jr)(x)=\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}H_j(x,t_1,\ldots,t_j)r(t_1)\cdots r(t_j)\,dt_1\cdots dt_j.$
It follows from (1.8.19) that
$\|A_1r\|\le C\Lambda\|r\|,\qquad \|A_1r-A_1\tilde r\|\le C\Lambda\|r-\tilde r\|,$
$\|A_jr\|\le(C\|r\|)^j,\qquad \|A_jr-A_j\tilde r\|\le\|r-\tilde r\|\big(C\max(\|r\|,\|\tilde r\|)\big)^{j-1},\quad j\ge2,$
$\|f\|\le C\Lambda. \qquad\Big\}\ (1.8.20)$
Fix $C\ge1$ such that (1.8.20) is valid. If
$\|r\|\le\frac{1}{2C},\qquad \|\tilde r\|\le\frac{1}{2C},$
then (1.8.20) implies
$\|Ar\|\le C\Lambda\|r\|+2C^2\|r\|^2,$
$\|Ar-A\tilde r\|\le C\|r-\tilde r\|\big(\Lambda+2\max(\|r\|,\|\tilde r\|)\big).$
Moreover, if
$\Lambda\le\frac{1}{4C},\qquad \|r\|\le\frac{1}{8C^2},\qquad \|\tilde r\|\le\frac{1}{8C^2},$
then
$\|Ar\|\le\frac12\|r\|,\qquad \|Ar-A\tilde r\|\le\frac12\|r-\tilde r\|. \qquad (1.8.21)$
Therefore, there exists a sufficiently small $\delta_2>0$ such that if $\Lambda<\delta_2$, then equation (1.8.13) can be solved by the method of successive approximations:
$r_0=f,\qquad r_{k+1}=f+Ar_k,\ k\ge0,\qquad r=r_0+\sum_{k=0}^\infty(r_{k+1}-r_k). \qquad (1.8.22)$
Indeed, put $\delta_2=\frac{1}{16C^3}$. If $\Lambda<\delta_2$, then $\|f\|\le\frac{1}{16C^2}$ and $\Lambda\le\frac{1}{4C}$. Using (1.8.21), by induction we get
$\|r_k\|\le2\|f\|,\qquad \|r_{k+1}-r_k\|\le\frac{1}{2^{k+1}}\|f\|,\quad k\ge0.$
Consequently, the series (1.8.22) converges in $L_2(0,\pi)$ to the solution of equation (1.8.13). Moreover, for this solution the estimate $\|r\|\le2\|f\|$ is valid.
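The scheme (1.8.22) is the standard contraction-mapping iteration. The following toy sketch (my own illustration, not from the book) solves a model equation $r=f+Ar$ on a grid, with a small rank-one linear part and a quadratic part mimicking the structure $Ar=\sum_j A_jr$:

```python
import math

N = 200
xs = [math.pi * i / N for i in range(N + 1)]

def trap(vals):
    # trapezoidal rule on [0, pi]
    h = math.pi / N
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

f = [0.1 * math.cos(x) for x in xs]

def A(r):
    # model operator: a small linear integral term plus a quadratic term
    m1 = trap([math.cos(t) * r[i] for i, t in enumerate(xs)])
    m2 = trap(r)
    return [0.1 * math.sin(x) * m1 + 0.05 * math.sin(x) * m2**2 for x in xs]

r = f[:]
for _ in range(60):                 # r_{k+1} = f + A r_k
    Ar = A(r)
    r = [f[i] + Ar[i] for i in range(N + 1)]

Ar_fin = A(r)
residual = max(abs(r[i] - (f[i] + Ar_fin[i])) for i in range(N + 1))
print(residual)                     # iteration has converged to a fixed point
```

The contraction factor here is roughly 0.3 per step, so 60 iterations drive the residual down to machine precision; this mirrors the geometric bound $\|r_{k+1}-r_k\|\le2^{-(k+1)}\|f\|$ above.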
Let $\delta=\min(\delta_1,\delta_2)$, and let $r(x)$ be the above constructed solution of (1.8.13). Define $p$ via (1.8.5), and $\tilde q(x)$, $\tilde h$ by the formulae $\tilde q=q+r$, $\tilde h=h-p$. Clearly, (1.8.17) is valid. Thus, we have constructed the boundary value problems $\tilde L_i$.

It remains to show that the numbers $\{\tilde\lambda_{ni}\}_{n\ge0}$ are the eigenvalues of the constructed boundary value problems $\tilde L_i$. For this purpose we consider the functions $\tilde y_{ni}(x)$, which are solutions of equation (1.8.8). Then (1.8.9)-(1.8.10) hold. Clearly,
$-\tilde y_{ni}''(x)+\tilde q(x)\tilde y_{ni}(x)=\tilde\lambda_{ni}\tilde y_{ni}(x),\qquad \tilde y_{ni}(0)=1,\quad \tilde y_{ni}'(0)=\tilde h,$
and consequently (1.8.6) is valid.

Furthermore, multiplying (1.8.13) by $g_n(x)$, integrating with respect to $x$ and taking (1.8.9)-(1.8.10) into account, we obtain
$\int_0^\pi r(x)y_{n1}(x)\tilde y_{n1}(x)\,dx=y_{n1}(\pi)\tilde y_{n1}'(\pi)+p,$
$\int_0^\pi r(x)y_{n2}(x)\tilde y_{n2}(x)\,dx=-y_{n2}'(\pi)\tilde y_{n2}(\pi)+p.$
Comparing these relations with (1.8.6) and using (1.8.18), we conclude that (1.8.7) holds, i.e. $\tilde y_{ni}(x)$ are eigenfunctions and $\{\tilde\lambda_{ni}\}_{n\ge0}$ are eigenvalues for $\tilde L_i$. The uniqueness follows from Borg's theorem (see Theorem 1.4.4). $\Box$
1.8.3. The case of Dirichlet boundary conditions. Similar results are also valid for Dirichlet boundary conditions; in this subsection we prove the corresponding theorem (see Theorem 1.8.2). The most interesting part of this subsection is Borg's original method for proving that products of eigenfunctions form a Riesz basis in $L_2(0,\pi)$ (see Remark 1.8.1).

Let $\lambda_{ni}^0=(\rho_{ni}^0)^2$, $n\ge1$, $i=1,2$, be the eigenvalues of the boundary value problems $L_i^0$ of the form
$-y''+q(x)y=\lambda y,\qquad q(x)\in L_2(0,\pi), \qquad (1.8.23)$
$y(0)=y^{(i-1)}(\pi)=0 \qquad (1.8.24)$
with real $q$. Then (see Section 1.1)
$\rho_{n1}^0=n+\frac{a_0}{n}+\frac{\kappa_{n1}^0}{n},\qquad \rho_{n2}^0=n-\frac12+\frac{a_0}{n}+\frac{\kappa_{n2}^0}{n},\qquad \{\kappa_{ni}^0\}\in l_2,\quad a_0=\frac{1}{2\pi}\int_0^\pi q(t)\,dt.$
Let $L_i^0$ and $\tilde L_i^0$, $i=1,2$, be such that $a_0=\tilde a_0$; then
$\Lambda_0:=\Big(\sum_{n=1}^\infty\big(|\lambda_{n1}^0-\tilde\lambda_{n1}^0|^2+|\lambda_{n2}^0-\tilde\lambda_{n2}^0|^2\big)\Big)^{1/2}<\infty.$
Hence
$\int_0^\pi r(t)\,dt=0, \qquad (1.8.25)$
where $r=\tilde q-q$.

Acting in the same way as in Subsection 1.8.1 and using the same notations, we get
$\int_0^\pi r(x)s_{ni}(x)\tilde s_{ni}(x)\,dx=s_{ni}(\pi)\tilde s_{ni}'(\pi)-s_{ni}'(\pi)\tilde s_{ni}(\pi),\quad n\ge1,\ i=1,2, \qquad (1.8.26)$
$\tilde s_{n1}(\pi)=0,\qquad \tilde s_{n2}'(\pi)=0,\quad n\ge1, \qquad (1.8.27)$
$\tilde s_{ni}(x)=s_{ni}(x)+\int_0^\pi G_{ni}(x,t)r(t)\tilde s_{ni}(t)\,dt,\quad n\ge1,\ i=1,2. \qquad (1.8.28)$
Solving (1.8.28) by the method of successive approximations we obtain
$\tilde s_{ni}(x)=s_{ni}(x)+\varepsilon_{ni}(x), \qquad (1.8.29)$
where
$\varepsilon_{ni}(x)=\sum_{j=1}^\infty\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}G_{ni}(x,t_1)G_{ni}(t_1,t_2)\cdots G_{ni}(t_{j-1},t_j)\,r(t_1)\cdots r(t_j)\,s_{ni}(t_j)\,dt_1\cdots dt_j. \qquad (1.8.30)$
Substituting (1.8.29)-(1.8.30) into (1.8.26) and using (1.8.27), we get
$\int_0^\pi r(x)u_{ni}^0(x)\,dx=\eta_{ni}^0,\quad n\ge1,\ i=1,2, \qquad (1.8.31)$
where
$u_{ni}^0(x)=2n^2s_{ni}^2(x),$
$\eta_{n1}^0=2n^2\Big(s_{n1}(\pi)\big(s_{n1}'(\pi)+\varepsilon_{n1}'(\pi)\big)-\int_0^\pi r(x)s_{n1}(x)\varepsilon_{n1}(x)\,dx\Big),$
$\eta_{n2}^0=-2n^2\Big(s_{n2}'(\pi)\big(s_{n2}(\pi)+\varepsilon_{n2}(\pi)\big)+\int_0^\pi r(x)s_{n2}(x)\varepsilon_{n2}(x)\,dx\Big). \qquad\Big\}\ (1.8.32)$
We introduce the functions $\{u_n(x)\}_{n\ge1}$ and the numbers $\{\xi_n\}_{n\ge0}$ by the formulae
$u_{2n}(x):=u_{n1}^0(x),\quad u_{2n-1}(x):=u_{n2}^0(x),\quad \xi_{2n}:=\eta_{n1}^0,\quad \xi_{2n-1}:=\eta_{n2}^0\ (n\ge1),\quad \xi_0:=0.$
Then (1.8.31) takes the form
$\int_0^\pi r(x)u_n(x)\,dx=\xi_n,\quad n\ge1. \qquad (1.8.33)$
Denote
$v_n(x):=\frac1n\,u_n'(x),\qquad w_n(x):=1-u_n(x)\ (n\ge1),\qquad w_0(x):=1.$
Using the results of Section 1.1, one can easily calculate
$v_n(x)=\sin nx+O\Big(\frac1n\Big),\qquad w_n(x)=\cos nx+O\Big(\frac1n\Big),\quad n\to\infty. \qquad (1.8.34)$
By virtue of (1.8.25) and (1.8.33) we have
$\int_0^\pi r(x)w_n(x)\,dx=-\xi_n,\quad n\ge0. \qquad (1.8.35)$
Lemma 1.8.1. The sets of functions $\{v_n(x)\}_{n\ge1}$ and $\{w_n(x)\}_{n\ge0}$ are both Riesz bases in $L_2(0,\pi)$.

Proof. In order to prove the lemma we could use the transformation operator as above, but we provide here another method, which is due to G. Borg [Bor1].

Let $q(x)\in W_2^1$, and let $y$ be a solution of (1.8.23) satisfying the condition $y(0)=0$. Similarly, let $\tilde y$ be such that $-\tilde y''+\tilde q(x)\tilde y=\lambda\tilde y$, $\tilde y(0)=0$, where $\tilde q(x)\in W_2^1$. Denote $u=y\tilde y$. Then
$u'=y\tilde y'+y'\tilde y,\qquad u''=-2\lambda u+(q+\tilde q)u+2y'\tilde y',$
$u'''+4\lambda u'-(q+\tilde q)u'-(q'+\tilde q')u=2(q\,y\tilde y'+\tilde q\,y'\tilde y).$
From this, taking into account the relations
$2(q\,y\tilde y'+\tilde q\,y'\tilde y)=(q+\tilde q)u'+(q-\tilde q)(y\tilde y'-y'\tilde y),\qquad (y\tilde y'-y'\tilde y)(x)=\int_0^x(\tilde q(s)-q(s))u(s)\,ds,$
we derive
$u'''+4\lambda u'-2(q(x)+\tilde q(x))u'-(q'(x)+\tilde q'(x))u=(q(x)-\tilde q(x))\int_0^x(\tilde q(s)-q(s))u(s)\,ds.$
Denote $v=u'$. Since $u(0)=u'(0)=0$, we have $v(0)=0$, $u(x)=\int_0^xv(s)\,ds$, and consequently
$-v''+2(q(x)+\tilde q(x))v+\int_0^xN(x,s)v(s)\,ds=4\lambda v, \qquad (1.8.36)$
where
$N(x,s)=q'(x)+\tilde q'(x)+(q(x)-\tilde q(x))\int_s^x(\tilde q(\tau)-q(\tau))\,d\tau.$
In particular, for $\tilde q=q$ we obtain that the functions $\{v_n(x)\}_{n\ge1}$ are the eigenfunctions of the boundary value problem
$-v''+p(x)v+\int_0^xM(x,s)v(s)\,ds=4\lambda v,\qquad v(0)=v(\pi)=0,$
where $p(x)=4q(x)$, $M(x,t)=2q'(x)$. Then for the sequence $\{v_n(x)\}_{n\ge1}$ there exists the biorthonormal sequence $\{v_n^*(x)\}_{n\ge1}$ in $L_2(0,\pi)$ (the $v_n^*(x)$ are eigenfunctions of the adjoint boundary value problem
$-v^{*\prime\prime}+p(x)v^*+\int_x^\pi M(s,x)v^*(s)\,ds=4\lambda v^*,\qquad v^*(0)=v^*(\pi)=0).$
By virtue of (1.8.34) and Proposition 1.8.4 this gives that $\{v_n(x)\}_{n\ge1}$ is a Riesz basis in $L_2(0,\pi)$.

Furthermore, let us show that the system of functions $\{w_n(x)\}_{n\ge0}$ is complete in $L_2(0,\pi)$. Indeed, suppose that
$\int_0^\pi f(x)w_n(x)\,dx=0,\quad n\ge0,\qquad f(x)\in L_2(0,\pi).$
For $n=0$ this yields
$\int_0^\pi f(x)\,dx=0,$
and consequently
$\int_0^\pi f(x)u_n(x)\,dx=0,\quad n\ge1.$
Integrating by parts and using $u_n(0)=0$ we infer that
$\int_0^\pi v_n(x)\Big(\int_x^\pi f(t)\,dt\Big)\,dx=0,\quad n\ge1.$
The system $\{v_n(x)\}_{n\ge1}$ is complete in $L_2(0,\pi)$. Hence $\int_x^\pi f(t)\,dt=0$ for $x\in[0,\pi]$, i.e. $f(x)=0$ a.e. on $(0,\pi)$. Therefore $\{w_n(x)\}_{n\ge0}$ is complete in $L_2(0,\pi)$. Since $\{w_n(x)\}_{n\ge0}$ is, in view of (1.8.34), quadratically close to the Riesz basis $\{\cos nx\}_{n\ge0}$, it follows from Proposition 1.8.5 that $\{w_n(x)\}_{n\ge0}$ is a Riesz basis in $L_2(0,\pi)$. $\Box$
Remark 1.8.2. With the help of equation (1.8.36) one can also prove the completeness of the products of eigenfunctions $s_{ni}(x)\tilde s_{ni}(x)$, and hence provide another way of proving Borg's uniqueness theorem (see Section 1.4) without using the transformation operator.
Denote by $\{v_n^*(x)\}_{n\ge1}$ and $\{w_n^*(x)\}_{n\ge0}$ the biorthonormal bases to $\{v_n(x)\}_{n\ge1}$ and $\{w_n(x)\}_{n\ge0}$, respectively. Then from (1.8.35) we infer (as in Subsection 1.8.1)
$r(x)=f_0(x)+\sum_{j=1}^\infty\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}H_j^0(x,t_1,\ldots,t_j)\,r(t_1)\cdots r(t_j)\,dt_1\cdots dt_j, \qquad (1.8.37)$
where
$f_0(x)=-\sum_{n=1}^\infty 2n^2\big(s_{n1}'(\pi)s_{n1}(\pi)\,w_{2n}^*(x)-s_{n2}'(\pi)s_{n2}(\pi)\,w_{2n-1}^*(x)\big),$
$H_1^0(x,t_1)=-\sum_{n=1}^\infty 2n^2\Big(s_{n1}(\pi)\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}s_{n1}(t_1)\,w_{2n}^*(x)-s_{n2}'(\pi)G_{n2}(\pi,t_1)s_{n2}(t_1)\,w_{2n-1}^*(x)\Big),$
$H_j^0(x,t_1,\ldots,t_j)=-\sum_{n=1}^\infty 2n^2\Big[\Big(s_{n1}(\pi)\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}-s_{n1}(t_1)\Big)G_{n1}(t_1,t_2)\cdots G_{n1}(t_{j-1},t_j)\,s_{n1}(t_j)\,w_{2n}^*(x)$
$\qquad -\big(s_{n2}'(\pi)G_{n2}(\pi,t_1)+s_{n2}(t_1)\big)G_{n2}(t_1,t_2)\cdots G_{n2}(t_{j-1},t_j)\,s_{n2}(t_j)\,w_{2n-1}^*(x)\Big],\quad j\ge2.$
Using the nonlinear equation (1.8.37), by the same arguments as in Subsection 1.8.2, we arrive at the following theorem.

Theorem 1.8.2. For the boundary value problems $L_i^0$ of the form (1.8.23)-(1.8.24) there exists $\delta>0$ such that if real numbers $\{\tilde\lambda_{ni}^0\}_{n\ge1}$, $i=1,2$, satisfy the condition $\Lambda_0<\delta$, then there exists a unique real function $\tilde q(x)\in L_2(0,\pi)$ for which the numbers $\{\tilde\lambda_{ni}^0\}_{n\ge1}$, $i=1,2$, are the eigenvalues of $\tilde L_i^0$. Moreover,
$\|q-\tilde q\|_{L_2}<C\Lambda_0.$
Remark 1.8.3. Using the Riesz basis $\{v_n(x)\}_{n\ge1}$ we can also derive another nonlinear equation. Indeed, denote
$z(x):=\int_x^\pi r(t)\,dt.$
After integration by parts, (1.8.33) takes the form
$\int_0^\pi z(x)v_n(x)\,dx=\frac{\xi_n}{n},\quad n\ge1.$
Since $\{v_n(x)\}_{n\ge1}$ and $\{v_n^*(x)\}_{n\ge1}$ are biorthonormal bases, this yields
$z(x)=\sum_{n=1}^\infty\frac{\xi_n}{n}\,v_n^*(x)$
or
$\int_x^\pi r(t)\,dt=\sum_{n=1}^\infty\frac{\xi_n}{n}\,v_n^*(x),\qquad r(x)=-\sum_{n=1}^\infty\frac{\xi_n}{n}\frac{d}{dx}v_n^*(x).$
From this, in view of (1.8.30) and (1.8.32), we get, analogously to Subsection 1.8.1,
$r(x)=f_1(x)+\sum_{j=1}^\infty\underbrace{\int_0^\pi\!\cdots\!\int_0^\pi}_{j}H_{j1}(x,t_1,\ldots,t_j)\,r(t_1)\cdots r(t_j)\,dt_1\cdots dt_j,$
where
$f_1(x)=-\sum_{n=1}^\infty 2n\Big(s_{n1}'(\pi)s_{n1}(\pi)\frac{d}{dx}v_{2n}^*(x)-s_{n2}'(\pi)s_{n2}(\pi)\frac{d}{dx}v_{2n-1}^*(x)\Big),$
$H_{11}(x,t_1)=-\sum_{n=1}^\infty 2n\Big(s_{n1}(\pi)\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}s_{n1}(t_1)\frac{d}{dx}v_{2n}^*(x)-s_{n2}'(\pi)G_{n2}(\pi,t_1)s_{n2}(t_1)\frac{d}{dx}v_{2n-1}^*(x)\Big),$
$H_{j1}(x,t_1,\ldots,t_j)=-\sum_{n=1}^\infty 2n\Big[\Big(s_{n1}(\pi)\frac{\partial G_{n1}(x,t_1)}{\partial x}\Big|_{x=\pi}-s_{n1}(t_1)\Big)G_{n1}(t_1,t_2)\cdots G_{n1}(t_{j-1},t_j)\,s_{n1}(t_j)\frac{d}{dx}v_{2n}^*(x)$
$\qquad -\big(s_{n2}'(\pi)G_{n2}(\pi,t_1)+s_{n2}(t_1)\big)G_{n2}(t_1,t_2)\cdots G_{n2}(t_{j-1},t_j)\,s_{n2}(t_j)\frac{d}{dx}v_{2n-1}^*(x)\Big],\quad j\ge2.$
1.8.4. Stability of the solution of the inverse problem in the uniform norm. Let $\lambda_{ni}=\rho_{ni}^2$, $n\ge0$, $i=1,2$, be the eigenvalues of the boundary value problems $L_i$ of the form (1.8.1)-(1.8.2), where $q$ is a real continuous function and $h$ is a real number. The eigenvalues $\lambda_{ni}$ coincide with the zeros of the characteristic functions $\Delta_i(\lambda):=\varphi^{(i-1)}(\pi,\lambda)$, $i=1,2$, where $\varphi(x,\lambda)$ is the solution of (1.8.1) under the conditions $\varphi(0,\lambda)=1$, $\varphi'(0,\lambda)=h$. Without loss of generality we shall assume in the sequel that in (1.8.3) $a=0$. In this case, by virtue of (1.8.3), we have
$\sum_{n=0}^\infty\big(|\rho_{n1}-(n+1/2)|+|\rho_{n2}-n|\big)<\infty. \qquad (1.8.38)$
Let the boundary value problems $\tilde L_i$ be chosen such that
$\Lambda_1:=\sum_{n=0}^\infty\big(|\lambda_{n1}-\tilde\lambda_{n1}|+|\lambda_{n2}-\tilde\lambda_{n2}|\big)<\infty. \qquad (1.8.39)$
The quantity $\Lambda_1$ describes the distance between the spectra.

Theorem 1.8.3. There exists $\delta>0$ (which depends on $L_i$) such that if $\Lambda_1<\delta$, then
$\max_{0\le x\le\pi}|q(x)-\tilde q(x)|<C\Lambda_1,\qquad |h-\tilde h|<C\Lambda_1.$
Here and below, $C$ denotes various positive constants which depend on $L_i$.
We first prove some auxiliary propositions. Let
$\alpha_n:=\int_0^\pi\varphi^2(x,\lambda_{n2})\,dx$
be the weight numbers for $L_2$.

Lemma 1.8.2. There exists $\delta_1>0$ such that if $\Lambda_1<\delta_1$ then
$\sum_{n=0}^\infty|\alpha_n-\tilde\alpha_n|<C\Lambda_1. \qquad (1.8.40)$
Proof. According to (1.1.38),
$\alpha_n=\dot\Delta_2(\lambda_{n2})\,\Delta_1(\lambda_{n2}), \qquad (1.8.41)$
where
$\dot\Delta_2(\lambda)=\frac{d}{d\lambda}\Delta_2(\lambda).$
The functions $\Delta_i(\lambda)$ are entire in $\lambda$ of order $1/2$, and consequently, by Hadamard's factorization theorem [con1, p.289], we have
$\Delta_i(\lambda)=B_i\prod_{k=0}^\infty\Big(1-\frac{\lambda}{\lambda_{ki}}\Big) \qquad (1.8.42)$
(the case when $\lambda=0$ is an eigenvalue of $L_i$ requires minor modifications). Then
$\frac{\tilde\Delta_i(\lambda)}{\Delta_i(\lambda)}=\frac{\tilde B_i}{B_i}\prod_{k=0}^\infty\frac{\lambda_{ki}}{\tilde\lambda_{ki}}\cdot\prod_{k=0}^\infty\Big(1+\frac{\tilde\lambda_{ki}-\lambda_{ki}}{\lambda_{ki}-\lambda}\Big).$
Since
$\lim_{\lambda\to-\infty}\frac{\tilde\Delta_i(\lambda)}{\Delta_i(\lambda)}=1,\qquad \lim_{\lambda\to-\infty}\prod_{k=0}^\infty\Big(1+\frac{\tilde\lambda_{ki}-\lambda_{ki}}{\lambda_{ki}-\lambda}\Big)=1,$
it follows that
$\frac{\tilde B_i}{B_i}\prod_{k=0}^\infty\frac{\lambda_{ki}}{\tilde\lambda_{ki}}=1, \qquad (1.8.43)$
and consequently
$\frac{\tilde\Delta_i(\lambda)}{\Delta_i(\lambda)}=\prod_{k=0}^\infty\frac{\tilde\lambda_{ki}-\lambda}{\lambda_{ki}-\lambda}.$
Furthermore, it follows from (1.8.42) that
$\dot\Delta_2(\lambda_{n2})=-\frac{B_2}{\lambda_{n2}}\prod_{k=0,\,k\ne n}^\infty\Big(1-\frac{\lambda_{n2}}{\lambda_{k2}}\Big).$
Therefore, taking (1.8.43) into account, we calculate
$\frac{\dot{\tilde\Delta}_2(\tilde\lambda_{n2})}{\dot\Delta_2(\lambda_{n2})}=\prod_{k=0,\,k\ne n}^\infty\frac{\tilde\lambda_{k2}-\tilde\lambda_{n2}}{\lambda_{k2}-\lambda_{n2}}.$
Then, by virtue of (1.8.41),
$\frac{\tilde\alpha_n}{\alpha_n}=\prod_{k=0}^\infty\frac{\tilde\lambda_{k1}-\tilde\lambda_{n2}}{\lambda_{k1}-\lambda_{n2}}\cdot\prod_{k=0,\,k\ne n}^\infty\frac{\tilde\lambda_{k2}-\tilde\lambda_{n2}}{\lambda_{k2}-\lambda_{n2}}$
or
$\frac{\tilde\alpha_n}{\alpha_n}=\prod_{k=0}^\infty\big(1-\theta_{kn}^{(1)}\big)\prod_{k=0,\,k\ne n}^\infty\big(1-\theta_{kn}^{(2)}\big), \qquad (1.8.44)$
where
$\theta_{kn}^{(i)}=\frac{\lambda_{ki}-\tilde\lambda_{ki}}{\lambda_{ki}-\lambda_{n2}}+\frac{\tilde\lambda_{n2}-\lambda_{n2}}{\lambda_{ki}-\lambda_{n2}}.$
Denote
$\Omega_n=\sum_{k=0}^\infty|\theta_{kn}^{(1)}|+\sum_{k=0,\,k\ne n}^\infty|\theta_{kn}^{(2)}|.$
Then
$\sum_{n=0}^\infty\Omega_n\le\sum_{n=0}^\infty\sum_{k=0}^\infty\Big(\frac{|\lambda_{k1}-\tilde\lambda_{k1}|}{|\lambda_{k1}-\lambda_{n2}|}+\frac{|\tilde\lambda_{n2}-\lambda_{n2}|}{|\lambda_{k1}-\lambda_{n2}|}\Big)+\sum_{n=0}^\infty\sum_{k=0,\,k\ne n}^\infty\Big(\frac{|\lambda_{k2}-\tilde\lambda_{k2}|}{|\lambda_{k2}-\lambda_{n2}|}+\frac{|\tilde\lambda_{n2}-\lambda_{n2}|}{|\lambda_{k2}-\lambda_{n2}|}\Big)$
$\le\sum_{n=0}^\infty|\tilde\lambda_{n2}-\lambda_{n2}|\Big(2\sum_{k=0,\,k\ne n}^\infty\frac{1}{|\lambda_{k2}-\lambda_{n2}|}+\sum_{k=0}^\infty\frac{1}{|\lambda_{k1}-\lambda_{n2}|}\Big)+\sum_{n=0}^\infty|\lambda_{n1}-\tilde\lambda_{n1}|\sum_{k=0}^\infty\frac{1}{|\lambda_{n1}-\lambda_{k2}|}. \qquad (1.8.45)$
Using the asymptotics of the eigenvalues (see Section 1.1), we obtain
$\frac{1}{|\lambda_{k2}-\lambda_{n2}|}<\frac{C}{|k^2-n^2|},\qquad k\ne n.$
Since
$\sum_{k=1,\,k\ne n}^\infty\frac{1}{|k^2-n^2|}<1,\qquad n\ge1,$
we have
$\sum_{k=1,\,k\ne n}^\infty\frac{1}{|\lambda_{k2}-\lambda_{n2}|}<C. \qquad (1.8.46)$
Similarly,
$\sum_{k=0}^\infty\Big(\frac{1}{|\lambda_{k1}-\lambda_{n2}|}+\frac{1}{|\lambda_{n1}-\lambda_{k2}|}\Big)<C. \qquad (1.8.47)$
It follows from (1.8.45)-(1.8.47) that
$\sum_{n=0}^\infty\Omega_n<C\Lambda_1. \qquad (1.8.48)$
Choose $\delta_1>0$ such that if $\Lambda_1<\delta_1$ then $\Omega_n<1/4$. Since for $|z|\le1/2$,
$|\ln(1-z)|\le\sum_{k=1}^\infty\frac{|z|^k}{k}\le\sum_{k=1}^\infty|z|^k\le2|z|,$
it follows from (1.8.44) that
$\Big|\ln\frac{\tilde\alpha_n}{\alpha_n}\Big|\le\sum_{k=0}^\infty|\ln(1-\theta_{kn}^{(1)})|+\sum_{k=0,\,k\ne n}^\infty|\ln(1-\theta_{kn}^{(2)})|<2\Omega_n.$
Using the properties of $\ln$, we get
$\Big|\frac{\tilde\alpha_n}{\alpha_n}-1\Big|<4\Omega_n,$
i.e. $|\alpha_n-\tilde\alpha_n|<C\Omega_n$. Using (1.8.48) we arrive at (1.8.40). $\Box$
Lemma 1.8.3. There exists $\delta_2>0$ such that if $\Lambda_1<\delta_2$ then
$|G(x,t)-\tilde G(x,t)|<C\Lambda_1,\qquad 0\le t\le x\le\pi, \qquad (1.8.49)$
$|\tilde\varphi(\pi,\lambda_{n1})-\varphi(\pi,\lambda_{n1})|<C|\rho_{n1}-\tilde\rho_{n1}|, \qquad (1.8.50)$
$|\tilde\varphi'(\pi,\tilde\lambda_{n2})-\varphi'(\pi,\tilde\lambda_{n2})|<C|\lambda_{n2}-\tilde\lambda_{n2}|. \qquad (1.8.51)$
Proof. The function $G(x,t)$ is the solution of the Gelfand-Levitan equation (1.5.11). By virtue of Lemma 1.5.6, $|F(x,t)-\tilde F(x,t)|<C\Lambda_1$. Hence, from the unique solvability of the Gelfand-Levitan equation and from Lemma 1.5.1 we obtain (1.8.49).

Furthermore, it follows from (1.1.9) and (1.8.3) that
$|\varphi'(\pi,\lambda_{n1})|<C(n+1).$
Using (1.3.11) we get
$\tilde\varphi(\pi,\lambda_{n1})-\tilde\varphi(\pi,\tilde\lambda_{n1})=(\cos\rho_{n1}\pi-\cos\tilde\rho_{n1}\pi)+\int_0^\pi\tilde G(\pi,t)(\cos\rho_{n1}t-\cos\tilde\rho_{n1}t)\,dt.$
Hence, for $\Lambda_1<\delta_2$ we have, in view of (1.8.49),
$|\tilde\varphi(\pi,\lambda_{n1})-\tilde\varphi(\pi,\tilde\lambda_{n1})|<C|\rho_{n1}-\tilde\rho_{n1}|;$
since $\varphi(\pi,\lambda_{n1})=0=\tilde\varphi(\pi,\tilde\lambda_{n1})$, we arrive at (1.8.50). Similarly we obtain (1.8.51). $\Box$
Lemma 1.8.4. Let $g(x)$ be a continuous function on $[0,\pi]$, and let $\{z_n\}_{n\ge0}$ be numbers such that
$\sum_{n=0}^\infty|z_n-n|<\infty.$
Suppose that
$\Omega:=\sum_{n=0}^\infty|\beta_n|<\infty,\qquad \beta_n:=\int_0^\pi g(x)\cos z_nx\,dx.$
Then $|g(x)|<M\Omega$, where the constant $M$ depends only on the set $\{z_n\}_{n\ge0}$.
Proof. Since the system of functions $\{\cos z_nx\}_{n\ge0}$ is complete in $L_2(0,\pi)$, the numbers $\beta_n$ uniquely determine the function $g(x)$. From the equality
$\int_0^\pi g(x)\cos nx\,dx=\beta_n+\int_0^\pi g(x)(\cos nx-\cos z_nx)\,dx$
we obtain
$g(x)=\sum_{n=0}^\infty\varepsilon_n^0\beta_n\cos nx+\sum_{n=0}^\infty\varepsilon_n^0\cos nx\int_0^\pi(\cos nt-\cos z_nt)g(t)\,dt,$
where $\varepsilon_0^0=1/\pi$, $\varepsilon_n^0=2/\pi$ for $n\ge1$. Hence, the function $g(x)$ is a solution of the integral equation
$g(x)=\psi(x)+\int_0^\pi H(x,t)g(t)\,dt, \qquad (1.8.52)$
where
$\psi(x)=\sum_{n=0}^\infty\varepsilon_n^0\beta_n\cos nx,\qquad H(x,t)=\sum_{n=0}^\infty\varepsilon_n^0\cos nx\,(\cos nt-\cos z_nt).$
The series converge absolutely and uniformly for $0\le x,t\le\pi$, and
$|\psi(x)|\le\frac2\pi\,\Omega,\qquad |H(x,t)|\le C\sum_{n=0}^\infty|z_n-n|.$
Let us show that the homogeneous equation
$y(x)=\int_0^\pi H(x,t)y(t)\,dt,\qquad y(x)\in C[0,\pi], \qquad (1.8.53)$
has only the trivial solution. Indeed, it follows from (1.8.53) that
$y(x)=\sum_{n=0}^\infty\varepsilon_n^0\cos nx\int_0^\pi(\cos nt-\cos z_nt)y(t)\,dt.$
Hence
$\int_0^\pi y(t)\cos nt\,dt=\int_0^\pi(\cos nt-\cos z_nt)y(t)\,dt$
or
$\int_0^\pi y(t)\cos z_nt\,dt=0,\quad n\ge0.$
From this, in view of the completeness of the system $\{\cos z_nx\}_{n\ge0}$, it follows that $y(x)=0$. Thus, the integral equation (1.8.52) is uniquely solvable, and hence $|g(x)|<M\Omega$. $\Box$
Remark 1.8.4. It is easy to see from the proof of Lemma 1.8.4 that if we consider numbers $\tilde z_n$ such that
$\sum_{n=0}^\infty|\tilde z_n-z_n|<\delta$
for sufficiently small $\delta$, then the constant $M$ will not depend on $\tilde z_n$.
Proof of Theorem 1.8.3. Using
$-\varphi''(x,\lambda)+q(x)\varphi(x,\lambda)=\lambda\varphi(x,\lambda),\qquad -\tilde\varphi''(x,\lambda)+\tilde q(x)\tilde\varphi(x,\lambda)=\lambda\tilde\varphi(x,\lambda),$
we get
$\int_0^\pi\hat q(x)\varphi(x,\lambda)\tilde\varphi(x,\lambda)\,dx=\varphi(\pi,\lambda)\tilde\varphi'(\pi,\lambda)-\varphi'(\pi,\lambda)\tilde\varphi(\pi,\lambda)+\tilde h-h,$
where $\hat q:=\tilde q-q$. Since
$h+\frac12\int_0^\pi q(x)\,dx=\tilde h+\frac12\int_0^\pi\tilde q(x)\,dx=0,$
we get
$\int_0^\pi\hat q(x)\Big(\varphi(x,\lambda)\tilde\varphi(x,\lambda)-\frac12\Big)\,dx=\varphi(\pi,\lambda)\tilde\varphi'(\pi,\lambda)-\varphi'(\pi,\lambda)\tilde\varphi(\pi,\lambda). \qquad (1.8.54)$
Substituting (1.4.7) into (1.8.54) we obtain, for $\lambda=\lambda_{n1}$ and $\lambda=\tilde\lambda_{n2}$,
$\int_0^\pi g(x)\cos2\rho_{n1}x\,dx=-\varphi'(\pi,\lambda_{n1})\tilde\varphi(\pi,\lambda_{n1}),$
$\int_0^\pi g(x)\cos2\tilde\rho_{n2}x\,dx=-\varphi'(\pi,\tilde\lambda_{n2})\tilde\varphi(\pi,\tilde\lambda_{n2}),$
where
$g(x)=\frac12\Big(\hat q(x)+\int_x^\pi V(x,t)\hat q(t)\,dt\Big). \qquad (1.8.55)$
Taking (1.4.8) and (1.8.49) into account, we get for $\Lambda_1<\delta_2$:
$|V(x,t)|<C. \qquad (1.8.56)$
We use Lemma 1.8.4 for $z_{2n+1}=2\rho_{n1}$, $z_{2n}=2\tilde\rho_{n2}$, $\beta_{2n+1}=-\varphi'(\pi,\lambda_{n1})\tilde\varphi(\pi,\lambda_{n1})$, $\beta_{2n}=-\varphi'(\pi,\tilde\lambda_{n2})\tilde\varphi(\pi,\tilde\lambda_{n2})$. It follows from Lemma 1.8.4, Remark 1.8.4, (1.8.50) and (1.8.51) that for $\Lambda_1<\delta_2$, $|g(x)|<C\Lambda_1$. Since $\hat q(x)$ is the solution of the Volterra integral equation (1.8.55), it follows from (1.8.56) that $|\hat q(x)|<C\Lambda_1$, and hence $|h-\tilde h|<C\Lambda_1$. $\Box$
1.8.5. Riesz bases. 1) For the convenience of the reader we provide here some facts about Riesz bases in a Hilbert space. For further discussion and proofs see [goh1], [you1] and [Bar1]. Let $B$ be a Hilbert space with the scalar product $(\cdot\,,\cdot)$.

Definition 1.8.1. A sequence $\{f_j\}_{j\ge1}$ of vectors of a Hilbert space $B$ is called a basis of this space if every vector $f\in B$ can be expanded in a unique way in a series
$f=\sum_{j=1}^\infty c_jf_j, \qquad (1.8.57)$
which converges in the norm of the space $B$.

Clearly, if $\{f_j\}_{j\ge1}$ is a basis, then $\{f_j\}_{j\ge1}$ is complete and minimal in $B$. We recall that completeness of $\{f_j\}_{j\ge1}$ means that the closed linear span of $\{f_j\}_{j\ge1}$ coincides with $B$, and minimality of $\{f_j\}_{j\ge1}$ means that each element of the sequence lies outside the closed linear span of the others.

Definition 1.8.2. Two sequences $\{f_j\}_{j\ge1}$ and $\{\psi_j\}_{j\ge1}$ of $B$ are called biorthonormal if $(f_j,\psi_k)=\delta_{jk}$ ($\delta_{jk}$ is the Kronecker delta).

If $\{f_j\}_{j\ge1}$ is a basis, then the biorthonormal sequence $\{\psi_j\}_{j\ge1}$ exists and is defined uniquely. Moreover, $\{\psi_j\}_{j\ge1}$ is also a basis of $B$. In the expansion (1.8.57) the coefficients $c_j$ have the form
$c_j=(f,\psi_j). \qquad (1.8.58)$
Definition 1.8.3. A sequence $\{f_j\}_{j\ge1}$ is called almost normalized if
$\inf_j\|f_j\|>0 \quad\text{and}\quad \sup_j\|f_j\|<\infty.$
If a basis $\{f_j\}_{j\ge1}$ is almost normalized, then the biorthonormal basis $\{\psi_j\}_{j\ge1}$ is also almost normalized.

Definition 1.8.4. A sequence $\{e_j\}_{j\ge1}$ is called orthogonal if $(e_j,e_k)=0$ for $j\ne k$. A sequence $\{e_j\}_{j\ge1}$ is called orthonormal if $(e_j,e_k)=\delta_{jk}$.

If $\{e_j\}_{j\ge1}$ is orthonormal and complete in $B$, then $\{e_j\}_{j\ge1}$ is a basis of $B$. For an orthonormal basis $\{e_j\}_{j\ge1}$ the relations (1.8.57)-(1.8.58) take the form
$f=\sum_{j=1}^\infty(f,e_j)e_j, \qquad (1.8.59)$
and for each vector $f\in B$ the following Parseval equality holds:
$\|f\|^2=\sum_{j=1}^\infty|(f,e_j)|^2.$
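The expansion (1.8.59) and Parseval's equality are easy to check in a finite-dimensional model. The following sketch (my own illustration, not from the book) builds an orthonormal basis of $\mathbb{R}^3$ by Gram-Schmidt and verifies both:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(c, u):
    return [c * a for a in u]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

# Gram-Schmidt: turn a linearly independent set into an orthonormal basis of R^3
raw = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
basis = []
for v in raw:
    for e in basis:
        v = sub(v, scale(dot(v, e), e))
    basis.append(scale(1.0 / dot(v, v) ** 0.5, v))

f = [3.0, -1.0, 2.0]
coeffs = [dot(f, e) for e in basis]          # c_j = (f, e_j), cf. (1.8.59)
recon = [sum(c * e[i] for c, e in zip(coeffs, basis)) for i in range(3)]
parseval = sum(c * c for c in coeffs)        # equals ||f||^2 = 14
print(recon, parseval)
```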
2) Let now $\{e_j\}_{j\ge1}$ be an orthonormal basis of the Hilbert space $B$, and let $A:B\to B$ be a bounded linear invertible operator. Then $A^{-1}$ is bounded, and according to (1.8.59) we have, for any vector $f\in B$,
$A^{-1}f=\sum_{j=1}^\infty(A^{-1}f,e_j)e_j=\sum_{j=1}^\infty(f,(A^*)^{-1}e_j)e_j.$
Consequently,
$f=\sum_{j=1}^\infty(f,\psi_j)f_j,$
where
$f_j=Ae_j,\qquad \psi_j=(A^*)^{-1}e_j. \qquad (1.8.60)$
Obviously $(f_j,\psi_k)=\delta_{jk}$ ($j,k\ge1$), i.e. the sequences $\{f_j\}_{j\ge1}$ and $\{\psi_j\}_{j\ge1}$ are biorthonormal. Therefore, if (1.8.57) is valid, then $c_j=(f,\psi_j)$, i.e. the expansion (1.8.57) is unique. Thus, every bounded linear invertible operator transforms any orthonormal basis into some other basis of the space $B$.

Definition 1.8.5. A basis $\{f_j\}_{j\ge1}$ of $B$ is called a Riesz basis if it is obtained from an orthonormal basis by means of a bounded linear invertible operator.
According to (1.8.60), a basis which is biorthonormal to a Riesz basis is a Riesz basis itself. Using (1.8.60) we calculate
$\inf_j\|f_j\|\ge\|A^{-1}\|^{-1} \quad\text{and}\quad \sup_j\|f_j\|\le\|A\|,$
i.e. every Riesz basis is almost normalized.

For the Riesz basis $\{f_j\}_{j\ge1}$ ($f_j=Ae_j$) the following inequality is valid for all vectors $f\in B$:
$C_1\sum_{j=1}^\infty|(f,\psi_j)|^2\le\|f\|^2\le C_2\sum_{j=1}^\infty|(f,\psi_j)|^2, \qquad (1.8.61)$
where $\{\psi_j\}_{j\ge1}$ is the corresponding biorthonormal basis ($\psi_j=(A^*)^{-1}e_j$), and the constants $C_1,C_2$ depend only on the operator $A$.

Indeed, using Parseval's equality and (1.8.60) we calculate
$\|A^{-1}f\|^2=\sum_{j=1}^\infty|(A^{-1}f,e_j)|^2=\sum_{j=1}^\infty|(f,\psi_j)|^2.$
Since
$\|f\|=\|AA^{-1}f\|\le\|A\|\,\|A^{-1}f\|,\qquad \|A^{-1}f\|\le\|A^{-1}\|\,\|f\|,$
we get
$C_1\|A^{-1}f\|^2\le\|f\|^2\le C_2\|A^{-1}f\|^2,$
where $C_1=\|A^{-1}\|^{-2}$, $C_2=\|A\|^2$, and consequently (1.8.61) is valid.
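In a finite-dimensional model the constants in (1.8.61) can be computed explicitly. The sketch below (my own illustration, not from the book) takes a concrete invertible $A$ on $\mathbb{R}^2$, builds $f_j=Ae_j$ and $\psi_j=(A^*)^{-1}e_j$, and checks biorthonormality and the two-sided bound with $C_1=\|A^{-1}\|^{-2}$, $C_2=\|A\|^2$:

```python
import math

A = [[2.0, 1.0], [0.0, 1.0]]                 # invertible operator on R^2
Ainv = [[0.5, -0.5], [0.0, 1.0]]             # its inverse
e = [[1.0, 0.0], [0.0, 1.0]]                 # orthonormal basis

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def op_norm_sq(M):
    # ||M||^2 = largest eigenvalue of M^T M (2x2 case, quadratic formula)
    a = M[0][0]**2 + M[1][0]**2
    b = M[0][0]*M[0][1] + M[1][0]*M[1][1]
    d = M[0][1]**2 + M[1][1]**2
    return (a + d + math.sqrt((a - d)**2 + 4*b*b)) / 2

f_basis = [mat_vec(A, ej) for ej in e]       # f_j = A e_j : a Riesz basis
Aadj_inv = [[Ainv[0][0], Ainv[1][0]], [Ainv[0][1], Ainv[1][1]]]  # (A^*)^{-1} = (A^{-1})^T
psi = [mat_vec(Aadj_inv, ej) for ej in e]    # psi_j = (A^*)^{-1} e_j

# biorthonormality: (f_j, psi_k) = delta_{jk}
for j in range(2):
    for k in range(2):
        assert abs(dot(f_basis[j], psi[k]) - (1.0 if j == k else 0.0)) < 1e-12

# two-sided inequality (1.8.61) for a sample vector
f = [1.0, 2.0]
s = sum(dot(f, p)**2 for p in psi)
C1, C2 = 1.0 / op_norm_sq(Ainv), op_norm_sq(A)
assert C1 * s <= dot(f, f) <= C2 * s
print("Riesz-basis bounds verified")
```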
The following assertions are obvious.

Proposition 1.8.1. If $\{f_j\}_{j\ge1}$ is a Riesz basis of $B$ and $\{\theta_j\}_{j\ge1}$ are complex numbers such that $0<C_1\le|\theta_j|\le C_2<\infty$, then $\{\theta_jf_j\}_{j\ge1}$ is a Riesz basis of $B$.

Proposition 1.8.2. If $\{f_j\}_{j\ge1}$ is a Riesz basis of $B$ and $A$ is a bounded linear invertible operator, then $\{Af_j\}_{j\ge1}$ is a Riesz basis of $B$.
3) Now we present several theorems which give sufficient conditions for a sequence $\{f_j\}_{j\ge1}$ to be a Riesz basis of $B$. First we formulate the definitions.

Definition 1.8.6. Two sequences of vectors $\{f_j\}_{j\ge1}$ and $\{g_j\}_{j\ge1}$ from $B$ are called quadratically close if
$\sum_{j=1}^\infty\|g_j-f_j\|^2<\infty.$

Definition 1.8.7. A sequence of vectors $\{g_j\}_{j\ge1}$ is called $\omega$-linearly independent if the equality
$\sum_{j=1}^\infty c_jg_j=0$
is possible only for $c_j=0$ ($j\ge1$).

Assumption 1.8.1. Let $f_j=Ae_j$, $j\ge1$, be a Riesz basis of $B$, where $\{e_j\}_{j\ge1}$ is an orthonormal basis of $B$, and $A$ is a bounded linear invertible operator. Let $\{g_j\}_{j\ge1}$ be such that
$\varepsilon:=\Big(\sum_{j=1}^\infty\|g_j-f_j\|^2\Big)^{1/2}<\infty,$
i.e. $\{g_j\}_{j\ge1}$ is quadratically close to $\{f_j\}_{j\ge1}$.
Proposition 1.8.3 (stability of bases). Let Assumption 1.8.1 hold. If
$\varepsilon<\frac{1}{\|A^{-1}\|},$
then $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$.

Proof. Consider the operator $T$:
$T\Big(\sum_{j=1}^\infty c_je_j\Big)=\sum_{j=1}^\infty c_j(f_j-g_j),\qquad \{c_j\}\in l_2. \qquad (1.8.62)$
In other words, $Te_j=f_j-g_j$. Clearly, $T$ is a bounded linear operator and $\|T\|\le\varepsilon$. Moreover, $\sum_{j=1}^\infty\|Te_j\|^2<\infty$. Since
$A-T=(E-TA^{-1})A \quad\text{and}\quad \|TA^{-1}\|\le\varepsilon\|A^{-1}\|<1,$
$A-T$ is a bounded linear invertible operator. On the other hand,
$(A-T)e_j=g_j,$
and consequently $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$. $\Box$
Proposition 1.8.4 (Bari [Bar1]). Let Assumption 1.8.1 hold. If $\{g_j\}_{j\ge1}$ is $\omega$-linearly independent, then $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$.

Proof. Define the operator $T$ by (1.8.62). The equation $(A-T)f=0$ has only the trivial solution. Indeed, if $(A-T)f=0$, then from
$(A-T)f=\sum_{j=1}^\infty(f,e_j)f_j-\sum_{j=1}^\infty(f,e_j)(f_j-g_j)=\sum_{j=1}^\infty(f,e_j)g_j$
it follows that
$\sum_{j=1}^\infty(f,e_j)g_j=0.$
Hence $(f,e_j)=0$, $j\ge1$, by the $\omega$-linear independence of $\{g_j\}_{j\ge1}$. From this we get $f=0$. Since $T$ is compact (by $\sum_j\|Te_j\|^2<\infty$ it is a Hilbert-Schmidt operator), the operator $A-T$ is a bounded linear invertible operator. Since $(A-T)e_j=g_j$, the sequence $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$. $\Box$
Proposition 1.8.5. Let Assumption 1.8.1 hold. If $\{g_j\}_{j\ge1}$ is complete in $B$, then $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$.

Proof. Choose $N$ such that
$\Big(\sum_{j=N+1}^\infty\|g_j-f_j\|^2\Big)^{1/2}<\frac{1}{\|A^{-1}\|},$
and consider the sequence $\{\theta_j\}_{j\ge1}$:
$\theta_j=\begin{cases}f_j,& j=\overline{1,N},\\ g_j,& j>N.\end{cases}$
By virtue of Proposition 1.8.3, the sequence $\{\theta_j\}_{j\ge1}$ is a Riesz basis of $B$. Let $\{\theta_j^*\}_{j\ge1}$ be the biorthonormal basis to $\{\theta_j\}_{j\ge1}$. Denote
$D:=\det\big[(g_j,\theta_n^*)\big]_{j,n=\overline{1,N}}.$
Let us show that $D\ne0$. Suppose, on the contrary, that $D=0$. Then the linear algebraic system
$\sum_{n=1}^N\gamma_n(g_j,\theta_n^*)=0,\quad j=\overline{1,N},$
has a non-trivial solution $\{\gamma_n\}_{n=\overline{1,N}}$. Consider the vector
$f:=\sum_{n=1}^N\gamma_n\theta_n^*.$
Since $\{\theta_n^*\}_{n\ge1}$ is a Riesz basis, we get $f\ne0$. On the other hand, we calculate:
(i) for $j=\overline{1,N}$: $(g_j,f)=\sum_{n=1}^N\gamma_n(g_j,\theta_n^*)=0$;
(ii) for $j>N$: $(g_j,f)=(\theta_j,f)=\sum_{n=1}^N\gamma_n(\theta_j,\theta_n^*)=0$.
Thus, $(g_j,f)=0$ for all $j\ge1$. Since $\{g_j\}_{j\ge1}$ is complete, we get $f=0$. This contradiction shows that $D\ne0$.

Let us show that $\{g_j\}_{j\ge1}$ is $\omega$-linearly independent. Indeed, let $\{c_j\}_{j\ge1}$ be complex numbers such that
$\sum_{j=1}^\infty c_jg_j=0. \qquad (1.8.63)$
Since $g_j=\theta_j$ for $j>N$, we get
$\sum_{j=1}^Nc_j(g_j,\theta_n^*)=0,\quad n=\overline{1,N}.$
The determinant of this linear system equals $D\ne0$. Consequently, $c_j=0$, $j=\overline{1,N}$. Then (1.8.63) takes the form
$\sum_{j=N+1}^\infty c_j\theta_j=0.$
Since $\{\theta_j\}_{j\ge1}$ is a Riesz basis, we have $c_j=0$, $j>N$. Thus, the sequence $\{g_j\}_{j\ge1}$ is $\omega$-linearly independent, and by Proposition 1.8.4, $\{g_j\}_{j\ge1}$ is a Riesz basis of $B$. $\Box$
Proposition 1.8.6. Let numbers $\{\rho_n\}_{n\ge 0}$, $\rho_n \ne \rho_k$ $(n \ne k)$, of the form
\[
\rho_n = n + \frac{a}{n} + \frac{\kappa_n}{n}, \quad \{\kappa_n\} \in \ell_2, \quad a \in \mathbb{C}, \qquad (1.8.64)
\]
be given. Then $\{\cos \rho_n x\}_{n\ge 0}$ is a Riesz basis in $L_2(0, \pi)$.
Proof. First we show that $\{\cos \rho_n x\}_{n\ge 0}$ is complete in $L_2(0,\pi)$. Let $f(x) \in L_2(0,\pi)$ be such that
\[
\int_0^{\pi} f(x) \cos \rho_n x\, dx = 0, \quad n \ge 0. \qquad (1.8.65)
\]
Consider the functions
\[
\Delta(\lambda) := (\lambda_0 - \lambda) \prod_{n=1}^{\infty} \frac{\lambda_n - \lambda}{n^2}, \quad \lambda_n = \rho_n^2,
\]
\[
F(\lambda) = \frac{1}{\Delta(\lambda)} \int_0^{\pi} f(x) \cos \rho x\, dx, \quad \lambda = \rho^2, \ \lambda \ne \lambda_n.
\]
It follows from (1.8.65) that $F(\lambda)$ is entire in $\lambda$ (after removing singularities). On the other hand, it was shown in the proof of Lemma 1.6.6 that
\[
|\Delta(\lambda)| \ge C |\rho| \exp(|\tau|\pi), \quad \arg \rho \in [\delta, \pi - \delta], \ \delta > 0, \ \tau := \operatorname{Im} \rho,
\]
and consequently
\[
|F(\lambda)| \le \frac{C}{|\rho|}, \quad \arg \rho \in [\delta, \pi - \delta].
\]
From this, using the Phragmén-Lindelöf theorem [you1, p.80] and Liouville's theorem [con1, p.77], we conclude that $F(\lambda) \equiv 0$, i.e.
\[
\int_0^{\pi} f(x) \cos \rho x\, dx = 0 \quad \text{for all } \rho \in \mathbb{C}.
\]
Hence $f = 0$. Thus, $\{\cos\rho_n x\}_{n\ge 0}$ is complete in $L_2(0,\pi)$. We note that the completeness of $\{\cos\rho_n x\}_{n\ge 0}$ also follows independently from Levinson's theorem [you1, p.118]. Furthermore, it follows from (1.8.64) that $\{\cos\rho_n x\}_{n\ge 0}$ is quadratically close to the orthogonal basis $\{\cos nx\}_{n\ge 0}$. Then, by virtue of Proposition 1.8.5, $\{\cos \rho_n x\}_{n\ge 0}$ is a Riesz basis in $L_2(0,\pi)$. $\square$
1.9. REVIEW OF THE INVERSE PROBLEM THEORY
In this section we give a short review of results on inverse problems of spectral analysis for ordinary differential equations. We describe briefly only the main directions of this theory, mention the most important monographs and papers, and refer to them for details.

The greatest success in the inverse problem theory was achieved for the Sturm-Liouville operator
\[
\ell y := -y'' + q(x)y. \qquad (1.9.1)
\]
The first result in this direction is due to Ambarzumian [amb1]. He showed that if the eigenvalues of the boundary value problem
\[
-y'' + q(x)y = \lambda y, \quad y'(0) = y'(\pi) = 0,
\]
are $\lambda_k = k^2$, $k \ge 0$, then $q = 0$. But this result is an exception from the rule, and in general the specification of the spectrum does not uniquely determine the operator (1.9.1). Afterwards Borg [Bor1] proved that the specification of two spectra of Sturm-Liouville operators uniquely determines the potential $q$. Levinson [Lev1] suggested a different method to prove Borg's results. Tikhonov [Tik1] obtained the uniqueness theorem for the inverse Sturm-Liouville problem on the half-line from the given Weyl function.

An important role in the spectral theory of Sturm-Liouville operators was played by the transformation operator. Marchenko [mar3]-[mar4] first applied the transformation operator to the solution of the inverse problem. He proved that a Sturm-Liouville operator on the half-line or a finite interval is uniquely determined by specifying the spectral function. For a finite interval this corresponds to the specification of the spectral data (see Subsection 1.4.2). Transformation operators were also used in the fundamental paper of Gelfand and Levitan [gel1], where they obtained necessary and sufficient conditions along with a method for recovering a Sturm-Liouville operator from its spectral function. In [gas1] similar results were established for the inverse problem of recovering Sturm-Liouville operators on a finite interval from two spectra. Another approach for the solution of inverse problems was suggested by Krein [kre1], [kre2]. Blokh [blo1] and Rofe-Beketov [rof1] studied the inverse problem on the line with given spectral matrix. The inverse scattering theory on the half-line and on the line was constructed in [agr1], [cha1], [deg1], [dei1], [fad1], [fad2], [kay1], [kur1] and other works. An interesting approach connected with studying isospectral sets for Sturm-Liouville operators on a finite interval was described in [car1], [dah1], [isa1], [isa2], [mcl2], [pos1] and [shu1]. The created methods also allow one to study the stability of the solution of the inverse problem for (1.9.1) (see [ale1], [Bor1], [dor1], [hoc1], [hoc2], [mar6], [miz1], [rja1] and [yur3]).

In the last twenty years there appeared many new areas for applications of inverse Sturm-Liouville problems; we briefly mention some of them.

Boundary value problems with discontinuity conditions inside the interval are connected with discontinuous material properties. For example, discontinuous inverse problems appear in electronics for constructing parameters of heterogeneous electronic lines with desirable technical characteristics ([lit1], [mes1]). Spectral information can be used to reconstruct the permittivity and conductivity profiles of a one-dimensional discontinuous medium ([kru1], [she1]). Boundary value problems with discontinuities in an interior point also appear in geophysical models for oscillations of the Earth ([And1], [lap1]). Here the main discontinuity is caused by reflection of the shear waves at the base of the crust. Discontinuous inverse problems (in various formulations) have been considered in [hal1], [kob1], [kob2], [kru1], [pro1] and [she1].
Many further applications are connected with the differential equation of the form
\[
-y'' + q(x)y = \lambda r(x) y \qquad (1.9.2)
\]
with turning points, when the function $r(x)$ has zeros or (and) changes sign. For example, turning points are connected with physical situations in which zeros correspond to the limit of motion of a wave mechanical particle bound by a potential field. Turning points appear also in elasticity, optics, geophysics and other branches of natural sciences. Moreover, a wide class of differential equations with Bessel-type singularities and their perturbations can be reduced to differential equations having turning points. Inverse problems for equations with turning points and singularities help to study blow-up solutions for some nonlinear integrable evolution equations of mathematical physics (see [Con1]). Inverse problems for the equation (1.9.1) with singularities and for the equation (1.9.2) with turning points and singularities have been studied in [Bel1], [Bou1], [car2], [elr1], [FY1], [gas2], [pan1], [sta1], [yur28], [yur29] and other works. Some aspects of the turning point theory and a number of applications are described in [ebe1]-[ebe4], [gol1], [mch1] and [was1].

The inverse problem for the Sturm-Liouville equation in the impedance form
\[
-(p(x)y')' = \lambda p(x) y, \quad p(x) > 0, \quad 0 < x < 1, \qquad (1.9.3)
\]
when $p'/p \in L_2(0,1)$, has been studied in [and1], [and2] and [col1]. Equation (1.9.3) can be transformed to (1.9.1) when $p'' \in L_2(0,1)$, but not when only $p'/p \in L_2(0,1)$. The lack of smoothness leads to additional technical difficulties in the investigation of the inverse problem. In [akt2], [dar1], [gri1] and [yak1] the inverse problem for the equation (1.9.2) with a lack of smoothness of $r$ is treated.

In [bro1], [hal2], [hal3], [mcl4], [She1], [yan1] and other papers, the so-called nodal inverse problem is considered, when the potential is to be reconstructed from nodes of the eigenfunctions. Many works are devoted to incomplete inverse problems, when only a part of the spectral information is available for measurement and (or) there is a priori information about the operator or its spectrum (see [akt1], [akt4], [FY2], [ges2], [gre1], [kli1], [mcl3], [run1], [sac2], [sad1], [tik1], [Zho1] and the references therein). Sometimes in incomplete inverse problems we have a lack of information which leads to nonuniqueness of the solution of the inverse problem (see, for example, [FY2] and [tik1]). We also mention inverse problems for boundary value problems with nonseparated boundary conditions ([Gus1], [mar7], [pla1], [yur2] and [yur20]) and with nonlocal boundary conditions of the form
\[
\int_0^T y(x)\, d\sigma_j(x) = 0
\]
(see [kra1]). Some aspects of numerical solutions of inverse problems have been studied in [ahm1], [bAr1], [fab1], [kHa1], [kno1], [low1], [mue1], [neh1], [pai1] and [zhi1].
Numerous applications of the inverse problem theory are connected with higher-order differential operators of the form
\[
\ell y := y^{(n)} + \sum_{k=0}^{n-2} p_k(x) y^{(k)}, \quad n > 2. \qquad (1.9.4)
\]
In contrast to the case of Sturm-Liouville operators, inverse problems for the operator (1.9.4) turned out to be much more difficult to study. Fage [fag2] and Leontjev [leo1] determined that for $n > 2$ transformation operators have a more complicated structure than for Sturm-Liouville operators, which makes it more difficult to use them for solving the inverse problem. However, in the case of analytic coefficients the transformation operators have the same triangular form as for Sturm-Liouville operators (see [kha2], [mat1] and [sak1]). Sakhnovich [sak2]-[sak3] and Khachatryan [kha3]-[kha4] used a triangular transformation operator to investigate the inverse problem of recovering selfadjoint differential operators on the half-line with analytic coefficients from the spectral function, as well as the inverse scattering problem.

A more effective and universal method in the inverse spectral theory is the method of spectral mappings connected with ideas of the contour integral method. Levinson [Lev1] first applied ideas of the contour integral method for investigating the inverse problem for the Sturm-Liouville operator (1.9.1). Developing Levinson's ideas, Leibenson [lei1]-[lei2] investigated the inverse problem for (1.9.4) on a finite interval under the restriction of separation of the spectrum. The spectra and weight numbers of certain specially chosen boundary value problems for the differential operators (1.9.4) appeared as the spectral data of the inverse problem. The inverse problem for (1.9.4) on a finite interval with respect to a system of spectra was investigated under various conditions on the operator in [bar1], [mal1], [yur4], [yur8] and [yur10]. Things are more complicated for differential operators on the half-line, especially in the nonselfadjoint case, since the spectrum can have rather bad behavior. Moreover, for the operator (1.9.4) even a formulation of the inverse problem is not obvious, since the spectral function turns out to be unsuitable for this purpose. On a finite interval the formulation of the inverse problem is not obvious as well, since waiving the requirement of separation of the spectra leads to a violation of uniqueness for the solution of the inverse problem. In [yur1], [yur16] and [yur21] the so-called Weyl matrix was introduced and studied as the main spectral characteristic for the nonselfadjoint differential operator (1.9.4) on the half-line and on a finite interval. This is a generalization of the classical Weyl function for the Sturm-Liouville operator. The concept of the Weyl matrix and a development of ideas of the contour integral method enable one to construct a general theory of inverse problems for nonselfadjoint differential operators (1.9.4) (see [yur1] for details). The inverse scattering problem on the line for the operator (1.9.4) has been treated in various settings in [bea1], [bea2], [Cau1], [dei2], [dei3], [kau1], [kaz1], [kaz2], [suk1] and other works. We note that the use of the Riemann problem in the inverse scattering theory can be considered as a particular case of the contour integral method (see, for example, [bea1]).

The inverse problem for the operator (1.9.4) with locally integrable coefficients has been studied in [yur14]. There the generalized Weyl functions are introduced, and the Riemann-Fage formula [fag1] for the solution of the Cauchy problem for higher-order partial differential equations is used.

Inverse problems for higher-order differential operators with singularities have been studied in [kud1], [yur17], [yur18] and [yur22]. Incomplete inverse problems for higher-order differential operators and their applications are considered in [kha1], [bek1], [elc1], [mal1], [mcl1], [pap1], [yur8]-[yur10] and [yur26]. In particular, in [yur9] the inverse problem of recovering a part of the coefficients of the differential operator (1.9.4) from a part of the Weyl matrix has been studied (the rest of the coefficients are known a priori). For such inverse problems the method of standard models was suggested, and a constructive solution for a wide class of incomplete inverse problems was given. This method was also applied to the solution of the inverse problem of elasticity theory, when parameters of a beam are to be determined from given frequencies of its natural oscillations (see [yur12]). This problem can be reduced to the inverse problem of recovering the fourth-order differential operator
\[
(h^{\nu}(x) y'')'' = \lambda h(x) y, \quad \nu = 1, 2, 3,
\]
from the Weyl function.
Many papers are devoted to inverse problems for systems of differential equations (see [alp1], [amo1], [aru1], [bea3], [bea4], [bou1], [Cha1], [Che1], [cox1], [gas3], [gas4], [goh2], [hin1], [lee1], [li1], [mal2], [pal1], [sak5], [sAk1], [sha1], [sha2], [win1], [yam1], [zam1], [zho1], [zho2], [zho3] and the references therein). Some systems can be treated similarly to the Sturm-Liouville operator. As an example we mention inverse problems for the Dirac system ([lev4], [aru1], [gas3], [gas4], [zam1]). But in the general case, the inverse problems for systems meet difficulties like those for the operator (1.9.4). Such systems have been studied in [bea3], [bea4], [lee1], [zho1], [zho2] and [zho3].

We mention also inverse spectral problems for discrete operators (see [atk1], [ber1], [gla1], [gus1], [gus2], [Kha1], [nik1], [yur23], [yur24] and [yur26]), for differential operators with delay ([pik1], [pik2]), for nonlinear differential equations ([yur25]), for integro-differential and integral operators ([ere1], [yur6], [yur7], [yur13]), for pencils of operators ([akt3], [bea5], [bro2], [Chu1], [gas5], [yur5], [yur27], [yur31], [yur32]) and others.

Extensive literature is devoted to one more wide area of the inverse problem theory. In 1967 Gardner, Greene, Kruskal and Miura [gar1] found a remarkable method for solving some important nonlinear equations of mathematical physics (the Korteweg-de Vries equation, the nonlinear Schrödinger equation, the Boussinesq equation and many others) connected with the use of inverse spectral problems. This method is described in [abl1], [lax1], [tak1], [zak1] and other works (see also Sections 4.1-4.2).

Many contributions are devoted to the inverse problem theory for partial differential equations. This direction is reflected fairly completely in [ang1], [ani1], [ber1], [buk1], [cha1], [cha3], [Isa1], [kir1], [lav1], [new1], [niz1], [Pri1], [Pri2], [ram1], [rom1] and [rom2]. In Section 2.4 we study an inverse problem for a wave equation as a model inverse problem for partial differential equations and show connections with inverse spectral problems for ordinary differential equations.
II. STURM-LIOUVILLE OPERATORS ON THE HALF-LINE
In this chapter we present an introduction to the inverse problem theory for Sturm-Liouville operators on the half-line. First, in Sections 2.1-2.2 nonselfadjoint operators with integrable complex-valued potentials are considered. We introduce and study the Weyl function as the main spectral characteristic, prove an expansion theorem and solve the inverse problem of recovering the Sturm-Liouville operator from its Weyl function. For this purpose we use ideas of the contour integral method and develop the method of spectral mappings presented in Chapter I. Moreover, connections with the transformation operator method are established. In Section 2.3 the most important particular cases, which often appear in applications, are considered, namely: selfadjoint operators, nonselfadjoint operators with a simple spectrum, and perturbations of the discrete spectrum of a model operator. We introduce here the so-called spectral data which describe the set of singularities of the Weyl function. The solution of the inverse problem of recovering the Sturm-Liouville operator from its spectral data is provided. In Sections 2.4-2.5 locally integrable complex-valued potentials are studied. In Section 2.4 we consider an inverse problem for a wave equation. In Section 2.5 the generalized Weyl function is introduced as the main spectral characteristic. We prove an expansion theorem and solve the inverse problem of recovering the Sturm-Liouville operator from its generalized Weyl function. The inverse problem of constructing $q$ from the Weyl sequence is considered in Section 2.6.
2.1. PROPERTIES OF THE SPECTRUM. THE WEYL FUNCTION.
We consider the differential equation and the linear form $L = L(q(x), h)$:
\[
\ell y := -y'' + q(x) y = \lambda y, \quad x > 0, \qquad (2.1.1)
\]
\[
U(y) := y'(0) - h y(0), \qquad (2.1.2)
\]
where $q(x) \in L(0,\infty)$ is a complex-valued function, and $h$ is a complex number. Let $\lambda = \rho^2$, $\rho = \sigma + i\tau$, and let for definiteness $\tau := \operatorname{Im}\rho \ge 0$. Denote by $\Pi$ the $\lambda$-plane with the cut $\lambda \ge 0$, and $\Pi_1 = \Pi \setminus \{0\}$; notice that here $\Pi$ and $\Pi_1$ must be considered as subsets of the Riemann surface of the square-root function. Then, under the map $\rho \to \rho^2 = \lambda$, $\Pi_1$ corresponds to the domain $\Omega = \{\rho : \operatorname{Im}\rho \ge 0,\ \rho \ne 0\}$. Put $\Omega_\delta = \{\rho : \operatorname{Im}\rho \ge 0,\ |\rho| \ge \delta\}$. Denote by $W_N$ the set of functions $f(x)$, $x \ge 0$, such that the functions $f^{(j)}(x)$, $j = \overline{0, N-1}$, are absolutely continuous on $[0, T]$ for each fixed $T > 0$, and $f^{(j)}(x) \in L(0,\infty)$, $j = \overline{0, N}$.
2.1.1. Jost and Birkhoff solutions. In this subsection we construct a special fundamental system of solutions for equation (2.1.1) in $\Omega$ having asymptotic behavior at infinity like $\exp(\pm i\rho x)$.

Theorem 2.1.1. Equation (2.1.1) has a unique solution $y = e(x,\rho)$, $\rho \in \Omega$, $x \ge 0$, satisfying the integral equation
\[
e(x,\rho) = \exp(i\rho x) - \frac{1}{2i\rho}\int_x^{\infty} \big(\exp(i\rho(x-t)) - \exp(i\rho(t-x))\big)\, q(t)\, e(t,\rho)\, dt. \qquad (2.1.3)
\]
The function $e(x,\rho)$ has the following properties:

$(i_1)$ For $x \to \infty$, $\nu = 0, 1$, and each fixed $\delta > 0$,
\[
e^{(\nu)}(x,\rho) = (i\rho)^{\nu} \exp(i\rho x)(1 + o(1)), \qquad (2.1.4)
\]
uniformly in $\Omega_\delta$. For $\operatorname{Im}\rho > 0$, $e(x,\rho) \in L_2(0,\infty)$. Moreover, $e(x,\rho)$ is the unique solution of (2.1.1) (up to a multiplicative constant) having this property.

$(i_2)$ For $|\rho| \to \infty$, $\rho \in \Omega$, $\nu = 0, 1$,
\[
e^{(\nu)}(x,\rho) = (i\rho)^{\nu} \exp(i\rho x)\Big(1 + \frac{\omega(x)}{i\rho} + o\Big(\frac{1}{\rho}\Big)\Big), \quad \omega(x) := -\frac{1}{2}\int_x^{\infty} q(t)\, dt, \qquad (2.1.5)
\]
uniformly for $x \ge 0$.

$(i_3)$ For each fixed $x \ge 0$ and $\nu = 0, 1$, the functions $e^{(\nu)}(x,\rho)$ are analytic for $\operatorname{Im}\rho > 0$, and are continuous for $\rho \in \Omega$.

$(i_4)$ For real $\rho \ne 0$, the functions $e(x,\rho)$ and $e(x,-\rho)$ form a fundamental system of solutions for (2.1.1), and
\[
\langle e(x,\rho),\, e(x,-\rho)\rangle = -2i\rho, \qquad (2.1.6)
\]
where $\langle y, z\rangle := y z' - y' z$ is the Wronskian.

The function $e(x,\rho)$ is called the Jost solution for (2.1.1).
Proof. We transform (2.1.3) by means of the replacement
\[
e(x,\rho) = \exp(i\rho x)\, z(x,\rho) \qquad (2.1.7)
\]
to the equation
\[
z(x,\rho) = 1 - \frac{1}{2i\rho}\int_x^{\infty} \big(1 - \exp(2i\rho(t-x))\big)\, q(t)\, z(t,\rho)\, dt, \quad x \ge 0, \ \rho \in \Omega. \qquad (2.1.8)
\]
The method of successive approximations gives
\[
z_0(x,\rho) = 1, \quad z_{k+1}(x,\rho) = -\frac{1}{2i\rho}\int_x^{\infty} \big(1 - \exp(2i\rho(t-x))\big)\, q(t)\, z_k(t,\rho)\, dt, \qquad (2.1.9)
\]
\[
z(x,\rho) = \sum_{k=0}^{\infty} z_k(x,\rho). \qquad (2.1.10)
\]
Let us show by induction that
\[
|z_k(x,\rho)| \le \frac{(Q_0(x))^k}{|\rho|^k\, k!}, \quad \rho \in \Omega, \ x \ge 0, \qquad (2.1.11)
\]
where
\[
Q_0(x) := \int_x^{\infty} |q(t)|\, dt.
\]
Indeed, for $k = 0$, (2.1.11) is obvious. Suppose that (2.1.11) is valid for a certain fixed $k \ge 0$. Since $|1 - \exp(2i\rho(t-x))| \le 2$, (2.1.9) implies
\[
|z_{k+1}(x,\rho)| \le \frac{1}{|\rho|}\int_x^{\infty} |q(t)\, z_k(t,\rho)|\, dt. \qquad (2.1.12)
\]
Substituting (2.1.11) into the right-hand side of (2.1.12) we calculate
\[
|z_{k+1}(x,\rho)| \le \frac{1}{|\rho|^{k+1}\, k!}\int_x^{\infty} |q(t)|\, (Q_0(t))^k\, dt = \frac{(Q_0(x))^{k+1}}{|\rho|^{k+1}\, (k+1)!}.
\]
It follows from (2.1.11) that the series (2.1.10) converges absolutely for $x \ge 0$, $\rho \in \Omega$, and the function $z(x,\rho)$ is the unique solution of the integral equation (2.1.8). Moreover, by virtue of (2.1.10) and (2.1.11),
\[
|z(x,\rho)| \le \exp(Q_0(x)/|\rho|), \quad |z(x,\rho) - 1| \le (Q_0(x)/|\rho|)\exp(Q_0(x)/|\rho|). \qquad (2.1.13)
\]
In particular, (2.1.13) yields for each fixed $\delta > 0$,
\[
z(x,\rho) = 1 + o(1), \quad x \to \infty, \qquad (2.1.14)
\]
uniformly in $\Omega_\delta$, and
\[
z(x,\rho) = 1 + O\Big(\frac{1}{\rho}\Big), \quad |\rho| \to \infty, \ \rho \in \Omega, \qquad (2.1.15)
\]
uniformly for $x \ge 0$. Substituting (2.1.15) into the right-hand side of (2.1.8) we obtain
\[
z(x,\rho) = 1 - \frac{1}{2i\rho}\int_x^{\infty} \big(1 - \exp(2i\rho(t-x))\big)\, q(t)\, dt + O\Big(\frac{1}{\rho^2}\Big), \quad |\rho| \to \infty, \qquad (2.1.16)
\]
uniformly for $x \ge 0$.
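The successive-approximation scheme (2.1.9)-(2.1.10) is easy to run numerically. The sketch below is an illustration only (sample potential $q(t) = e^{-t}$, so that $Q_0(x) = e^{-x}$; grid and truncation of $[0,\infty)$ are ad hoc): it iterates (2.1.8), writing the kernel as two suffix integrals, and checks the result against the estimate (2.1.13).

```python
import cmath, math

# Successive approximations for z(x, rho) from (2.1.8), sample q(t) = exp(-t).
rho = 2.0 + 0.5j
T, n = 12.0, 600          # truncation of [0, infinity) and grid size (ad hoc)
h = T / n
xs = [j * h for j in range(n + 1)]
q = [math.exp(-t) for t in xs]

z = [1.0 + 0j] * (n + 1)
for _ in range(40):
    f = [q[k] * z[k] for k in range(n + 1)]
    g = [cmath.exp(2j * rho * xs[k]) * f[k] for k in range(n + 1)]
    F = [0j] * (n + 1)    # F[k] ~ int_{x_k}^T q z dt           (trapezoidal)
    G = [0j] * (n + 1)    # G[k] ~ int_{x_k}^T e^{2 i rho t} q z dt
    for k in range(n - 1, -1, -1):
        F[k] = F[k + 1] + h * (f[k] + f[k + 1]) / 2
        G[k] = G[k + 1] + h * (g[k] + g[k + 1]) / 2
    z = [1 - (F[k] - cmath.exp(-2j * rho * xs[k]) * G[k]) / (2j * rho)
         for k in range(n + 1)]

# estimate (2.1.13): |z(0,rho) - 1| <= (Q_0(0)/|rho|) exp(Q_0(0)/|rho|), Q_0(0) = 1
bound = (1.0 / abs(rho)) * math.exp(1.0 / abs(rho))
print(abs(z[0] - 1) <= bound)
```

Since $Q_0(0)/|\rho| < 1$ here, each iteration contracts by roughly that factor, in agreement with the factorial decay in (2.1.11).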
Lemma 2.1.1. Let $q(x) \in L(0,\infty)$, and denote
\[
J_q(x,\rho) := \int_x^{\infty} q(t)\exp(2i\rho(t-x))\, dt, \quad \rho \in \Omega. \qquad (2.1.17)
\]
Then
\[
\lim_{|\rho|\to\infty,\ \rho\in\Omega}\ \sup_{x\ge 0} |J_q(x,\rho)| = 0. \qquad (2.1.18)
\]
Proof. 1) First we assume that $q(x) \in W_1$. Then integration by parts in (2.1.17) yields
\[
J_q(x,\rho) = -\frac{q(x)}{2i\rho} - \frac{1}{2i\rho}\int_x^{\infty} q'(t)\exp(2i\rho(t-x))\, dt,
\]
and consequently
\[
\sup_{x\ge 0}|J_q(x,\rho)| \le \frac{C_q}{|\rho|}.
\]
2) Let now $q(x) \in L(0,\infty)$. Fix $\varepsilon > 0$ and choose $q_\varepsilon(x) \in W_1$ such that
\[
\int_0^{\infty} |q(t) - q_\varepsilon(t)|\, dt < \frac{\varepsilon}{2}.
\]
Then
\[
|J_q(x,\rho)| \le |J_{q_\varepsilon}(x,\rho)| + |J_{q-q_\varepsilon}(x,\rho)| \le \frac{C_{q_\varepsilon}}{|\rho|} + \frac{\varepsilon}{2}.
\]
Hence, there exists $\rho_0 > 0$ such that $\sup_{x\ge 0}|J_q(x,\rho)| \le \varepsilon$ for $|\rho| \ge \rho_0$, $\rho \in \Omega$. By virtue of the arbitrariness of $\varepsilon > 0$ we arrive at (2.1.18). $\square$
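Lemma 2.1.1 is a Riemann-Lebesgue-type statement, and its conclusion can be observed directly. The following sketch (an illustration with the sample potential $q(t) = (1+t)^{-2}$, a crude midpoint quadrature, and an arbitrary truncation of the integral) evaluates $J_q(0,\rho)$ for growing real $\rho$ and watches $|J_q|$ decay:

```python
import cmath

# Decay of J_q(0, rho) = int_0^infty q(t) exp(2 i rho t) dt for q(t) = (1+t)^{-2}.
vals = []
for rho in [1.0, 4.0, 16.0, 64.0]:
    h, T = 0.001, 40.0                      # ad hoc quadrature parameters
    m = int(T / h)
    s = h * sum(cmath.exp(2j * rho * (k + 0.5) * h) / (1 + (k + 0.5) * h) ** 2
                for k in range(m))          # midpoint rule
    vals.append(abs(s))
print(all(vals[i] > vals[i + 1] for i in range(3)), vals[-1] < 0.02)
```

For this $q \in W_1$ the first part of the proof even gives the rate $O(1/|\rho|)$, which matches the roughly fourfold drop between consecutive values.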
Let us return to the proof of Theorem 2.1.1. It follows from (2.1.16) and Lemma 2.1.1 that
\[
z(x,\rho) = 1 + \frac{\omega(x)}{i\rho} + o\Big(\frac{1}{\rho}\Big), \quad |\rho| \to \infty, \ \rho \in \Omega, \qquad (2.1.19)
\]
uniformly for $x \ge 0$. From (2.1.7), (2.1.9)-(2.1.11), (2.1.14) and (2.1.19) we derive $(i_1)$-$(i_3)$ for $\nu = 0$. Furthermore, (2.1.3) and (2.1.7) imply
\[
e'(x,\rho) = (i\rho)\exp(i\rho x)\Big(1 - \frac{1}{2i\rho}\int_x^{\infty} \big(1 + \exp(2i\rho(t-x))\big)\, q(t)\, z(t,\rho)\, dt\Big). \qquad (2.1.20)
\]
Using (2.1.20) we get $(i_1)$-$(i_3)$ for $\nu = 1$. It is easy to verify by differentiation that the function $e(x,\rho)$ is a solution of (2.1.1). For real $\rho \ne 0$, the functions $e(x,\rho)$ and $e(x,-\rho)$ satisfy (2.1.1), and by virtue of (2.1.4), $\lim_{x\to\infty}\langle e(x,\rho), e(x,-\rho)\rangle = -2i\rho$. Since the Wronskian $\langle e(x,\rho), e(x,-\rho)\rangle$ does not depend on $x$, we arrive at (2.1.6). $\square$
Remark 2.1.1. If $q(x) \in W_N$, then there exist functions $\omega_s(x)$ such that for $|\rho| \to \infty$, $\rho \in \Omega$, $\nu = 0, 1, 2$,
\[
e^{(\nu)}(x,\rho) = (i\rho)^{\nu}\exp(i\rho x)\Big(1 + \sum_{s=1}^{N+1}\frac{\omega_s(x)}{(i\rho)^s} + o\Big(\frac{1}{\rho^{N+1}}\Big)\Big), \quad \omega_1(x) = \omega(x). \qquad (2.1.21)
\]
Indeed, let $q(x) \in W_1$. Substituting (2.1.19) into the right-hand side of (2.1.8) we get
\[
z(x,\rho) = 1 - \frac{1}{2i\rho}\int_x^{\infty} \big(1 - \exp(2i\rho(t-x))\big)\, q(t)\, dt
\]
\[
- \frac{1}{2(i\rho)^2}\int_x^{\infty} \big(1 - \exp(2i\rho(t-x))\big)\, q(t)\,\omega(t)\, dt + o\Big(\frac{1}{\rho^2}\Big), \quad |\rho| \to \infty.
\]
Integrating by parts and using Lemma 2.1.1, we obtain
\[
z(x,\rho) = 1 + \frac{\omega(x)}{i\rho} + \frac{\omega_{20}(x)}{(i\rho)^2} + o\Big(\frac{1}{\rho^2}\Big), \quad |\rho| \to \infty, \ \rho \in \Omega, \qquad (2.1.22)
\]
where
\[
\omega_{20}(x) = -\frac{1}{4} q(x) + \frac{1}{4}\int_x^{\infty} q(t)\Big(\int_t^{\infty} q(s)\, ds\Big) dt = -\frac{1}{4} q(x) + \frac{1}{8}\Big(\int_x^{\infty} q(t)\, dt\Big)^2.
\]
By virtue of (2.1.7) and (2.1.22), we arrive at (2.1.21) for $N = 1$, $\nu = 0$. Using induction one can prove (2.1.21) for all $N$.
If we additionally assume that $xq(x) \in L(0,\infty)$, then the Jost solution $e(x,\rho)$ exists also for $\rho = 0$. More precisely, the following theorem is valid.

Theorem 2.1.2. Let $(1+x)q(x) \in L(0,\infty)$. The functions $e^{(\nu)}(x,\rho)$, $\nu = 0, 1$, are continuous for $\operatorname{Im}\rho \ge 0$, $x \ge 0$, and
\[
|e(x,\rho)\exp(-i\rho x)| \le \exp(Q_1(x)), \qquad (2.1.23)
\]
\[
|e(x,\rho)\exp(-i\rho x) - 1| \le \Big( Q_1(x) - Q_1\Big(x + \frac{1}{|\rho|}\Big) \Big)\exp(Q_1(x)), \qquad (2.1.24)
\]
\[
|e'(x,\rho)\exp(-i\rho x) - i\rho| \le Q_0(x)\exp(Q_1(x)), \qquad (2.1.25)
\]
where
\[
Q_1(x) := \int_x^{\infty} Q_0(t)\, dt = \int_x^{\infty} (t-x)|q(t)|\, dt.
\]
First we prove an auxiliary assertion.

Lemma 2.1.2. Assume that $c_1 \ge 0$, $u(x) \ge 0$, $v(x) \ge 0$ $(a \le x \le T)$, $u(x)$ is bounded, and $(x-a)v(x) \in L(a,T)$. If
\[
u(x) \le c_1 + \int_x^T (t-x)v(t)u(t)\, dt, \qquad (2.1.26)
\]
then
\[
u(x) \le c_1 \exp\Big(\int_x^T (t-x)v(t)\, dt\Big). \qquad (2.1.27)
\]
Proof. Denote
\[
\varphi(x) = c_1 + \int_x^T (t-x)v(t)u(t)\, dt.
\]
Then
\[
\varphi(T) = c_1, \quad \varphi'(x) = -\int_x^T v(t)u(t)\, dt, \quad \varphi''(x) = v(x)u(x),
\]
and (2.1.26) yields
\[
0 \le \varphi''(x) \le \varphi(x)v(x).
\]
Let $c_1 > 0$. Then $\varphi(x) > 0$, and
\[
\frac{\varphi''(x)}{\varphi(x)} \le v(x).
\]
Hence
\[
\Big(\frac{\varphi'(x)}{\varphi(x)}\Big)' \le v(x) - \Big(\frac{\varphi'(x)}{\varphi(x)}\Big)^2 \le v(x).
\]
Integrating this inequality twice (and using $\varphi'(T) = 0$) we get
\[
-\frac{\varphi'(x)}{\varphi(x)} \le \int_x^T v(t)\, dt, \quad \ln\frac{\varphi(x)}{\varphi(T)} \le \int_x^T (t-x)v(t)\, dt,
\]
and consequently,
\[
\varphi(x) \le c_1 \exp\Big(\int_x^T (t-x)v(t)\, dt\Big).
\]
According to (2.1.26), $u(x) \le \varphi(x)$, and we arrive at (2.1.27).

If $c_1 = 0$, then $\varphi(x) \equiv 0$. Indeed, suppose on the contrary that $\varphi(x) \not\equiv 0$. Since $\varphi(x) \ge 0$, $\varphi'(x) \le 0$, there exists $T_0 \le T$ such that $\varphi(x) > 0$ for $x < T_0$, and $\varphi(x) \equiv 0$ for $x \in [T_0, T]$. Repeating these arguments we get for $x < T_0$ and sufficiently small $\varepsilon > 0$,
\[
\ln\frac{\varphi(x)}{\varphi(T_0 - \varepsilon)} \le \int_x^{T_0-\varepsilon}(t-x)v(t)\, dt \le \int_x^{T_0}(t-x)v(t)\, dt,
\]
which is impossible, since the left-hand side tends to infinity as $\varepsilon \to 0$ while the right-hand side stays bounded. Thus, $\varphi(x) \equiv 0$, and (2.1.27) becomes obvious. $\square$
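For $v \equiv 1$, $c_1 = 1$ on $[0,T]$, the extremal case of (2.1.26), i.e. the integral equation $u(x) = 1 + \int_x^T (t-x)u(t)\,dt$, has the exact solution $u(x) = \cosh(T-x)$, while the bound (2.1.27) becomes $\exp((T-x)^2/2)$; this gives a concrete check of the lemma. The sketch below (ad hoc grid, illustration only) iterates the integral equation and compares:

```python
import math

# Lemma 2.1.2 with v(t) = 1, c1 = 1 on [0, T]: iterate
# u(x) = 1 + int_x^T (t - x) u(t) dt, exact solution cosh(T - x),
# and check against the bound exp((T - x)^2 / 2) from (2.1.27).
T, n = 3.0, 1000
h = T / n
xs = [j * h for j in range(n + 1)]

u = [1.0] * (n + 1)
for _ in range(25):
    A = [0.0] * (n + 1)   # A[k] ~ int_{x_k}^T t u(t) dt  (trapezoidal)
    B = [0.0] * (n + 1)   # B[k] ~ int_{x_k}^T u(t) dt
    for k in range(n - 1, -1, -1):
        A[k] = A[k + 1] + h * (xs[k] * u[k] + xs[k + 1] * u[k + 1]) / 2
        B[k] = B[k + 1] + h * (u[k] + u[k + 1]) / 2
    u = [1.0 + A[k] - xs[k] * B[k] for k in range(n + 1)]

ok_exact = abs(u[0] - math.cosh(T)) < 0.01
ok_bound = all(u[k] <= math.exp((T - xs[k]) ** 2 / 2) + 1e-6 for k in range(n + 1))
print(ok_exact, ok_bound)
```

The gap between $\cosh(T-x)$ and $\exp((T-x)^2/2)$ reflects the factorial versus $2^k k!$ growth of the iterated kernels, the same mechanism that makes the Neumann series in (2.1.10) converge.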
Proof of Theorem 2.1.2. For $\operatorname{Im}\mu \ge 0$ we have
\[
\Big|\frac{\sin\mu}{\mu}\exp(i\mu)\Big| \le 1. \qquad (2.1.28)
\]
Indeed, (2.1.28) is obvious for real $\mu$ and for $|\mu| \ge 1$, $\operatorname{Im}\mu \ge 0$. Then, by the maximum principle [con1, p.128], (2.1.28) is also valid for $|\mu| \le 1$, $\operatorname{Im}\mu \ge 0$.

It follows from (2.1.28) that
\[
\Big|\frac{1 - \exp(2i\rho x)}{2i\rho}\Big| \le x \quad \text{for } \operatorname{Im}\rho \ge 0, \ x \ge 0. \qquad (2.1.29)
\]
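The elementary inequality (2.1.28) can be spot-checked numerically. The sketch below (a random sample of the closed upper half-plane with ad hoc ranges, illustration only) also verifies the identity $\sin\mu\, e^{i\mu}/\mu = (e^{2i\mu}-1)/(2i\mu)$ by which (2.1.28) yields (2.1.29):

```python
import cmath, random

# Spot check of (2.1.28): |sin(mu)/mu * exp(i mu)| <= 1 for Im mu >= 0.
random.seed(1)
worst = 0.0
for _ in range(2000):
    mu = complex(random.uniform(-30.0, 30.0), random.uniform(0.0, 30.0))
    worst = max(worst, abs(cmath.sin(mu) / mu * cmath.exp(1j * mu)))

# identity behind (2.1.29): sin(mu) e^{i mu} / mu = (e^{2 i mu} - 1)/(2 i mu)
mu0 = 0.7 + 0.3j
lhs = cmath.sin(mu0) / mu0 * cmath.exp(1j * mu0)
rhs = (cmath.exp(2j * mu0) - 1) / (2j * mu0)
print(worst <= 1.0 + 1e-9, abs(lhs - rhs) < 1e-12)
```

Note that the factor $e^{i\mu}$ is essential: $|\sin\mu/\mu|$ alone grows like $e^{\operatorname{Im}\mu}$ in the upper half-plane.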
Using (2.1.9) and (2.1.29) we infer
\[
|z_{k+1}(x,\rho)| \le \int_x^{\infty} t|q(t)z_k(t,\rho)|\, dt, \quad k \ge 0, \ \operatorname{Im}\rho \ge 0, \ x \ge 0,
\]
and consequently by induction
\[
|z_k(x,\rho)| \le \frac{1}{k!}\Big(\int_x^{\infty} t|q(t)|\, dt\Big)^k, \quad k \ge 0, \ \operatorname{Im}\rho \ge 0, \ x \ge 0.
\]
Then, the series (2.1.10) converges absolutely and uniformly for $\operatorname{Im}\rho \ge 0$, $x \ge 0$, and the function $z(x,\rho)$ is continuous for $\operatorname{Im}\rho \ge 0$, $x \ge 0$. Moreover,
\[
|z(x,\rho)| \le \exp\Big(\int_x^{\infty} t|q(t)|\, dt\Big), \quad \operatorname{Im}\rho \ge 0, \ x \ge 0. \qquad (2.1.30)
\]
Using (2.1.7) and (2.1.20) we conclude that the functions $e^{(\nu)}(x,\rho)$, $\nu = 0, 1$, are continuous for $\operatorname{Im}\rho \ge 0$, $x \ge 0$.
Furthermore, it follows from (2.1.8) and (2.1.29) that
\[
|z(x,\rho)| \le 1 + \int_x^{\infty} (t-x)|q(t)z(t,\rho)|\, dt, \quad \operatorname{Im}\rho \ge 0, \ x \ge 0.
\]
By virtue of Lemma 2.1.2, this implies
\[
|z(x,\rho)| \le \exp(Q_1(x)), \quad \operatorname{Im}\rho \ge 0, \ x \ge 0, \qquad (2.1.31)
\]
i.e. (2.1.23) is valid. We note that (2.1.31) is more precise than (2.1.30).

Using (2.1.8), (2.1.29) and (2.1.31) we calculate
\[
|z(x,\rho) - 1| \le \int_x^{\infty} (t-x)|q(t)|\exp(Q_1(t))\, dt \le \exp(Q_1(x))\int_x^{\infty} (t-x)|q(t)|\, dt,
\]
and consequently,
\[
|z(x,\rho) - 1| \le Q_1(x)\exp(Q_1(x)), \quad \operatorname{Im}\rho \ge 0, \ x \ge 0. \qquad (2.1.32)
\]
More precisely,
\[
|z(x,\rho) - 1| \le \int_x^{x+\frac{1}{|\rho|}} (t-x)|q(t)|\exp(Q_1(t))\, dt + \frac{1}{|\rho|}\int_{x+\frac{1}{|\rho|}}^{\infty} |q(t)|\exp(Q_1(t))\, dt
\]
\[
\le \exp(Q_1(x))\Big(\int_x^{\infty} (t-x)|q(t)|\, dt - \int_{x+\frac{1}{|\rho|}}^{\infty} \Big(t - x - \frac{1}{|\rho|}\Big)|q(t)|\, dt\Big) = \Big(Q_1(x) - Q_1\Big(x + \frac{1}{|\rho|}\Big)\Big)\exp(Q_1(x)),
\]
i.e. (2.1.24) is valid. At last, from (2.1.20) and (2.1.31) we obtain
\[
|e'(x,\rho)\exp(-i\rho x) - i\rho| \le \int_x^{\infty} |q(t)|\exp(Q_1(t))\, dt \le \exp(Q_1(x))\int_x^{\infty} |q(t)|\, dt,
\]
and we arrive at (2.1.25). Theorem 2.1.2 is proved. $\square$
Remark 2.1.2. Consider the function
\[
q(x) = \frac{2a^2}{(1+ax)^2},
\]
where $a$ is a complex number such that $a \notin (-\infty, 0]$. Then $q(x) \in L(0,\infty)$, but $xq(x) \notin L(0,\infty)$. The Jost solution has in this case the form (see Example 2.3.1)
\[
e(x,\rho) = \exp(i\rho x)\Big(1 - \frac{a}{i\rho(1+ax)}\Big),
\]
i.e. $e(x,\rho)$ has a singularity at $\rho = 0$; hence we cannot omit the integrability condition in Theorem 2.1.2.
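The explicit pair in Remark 2.1.2 can be checked by direct substitution into (2.1.1). The sketch below does this with finite differences, for arbitrary sample values of $a$ and $\rho$ (illustrative choices, not taken from the text):

```python
import cmath

# Finite-difference check that e(x, rho) = exp(i rho x)(1 - a/(i rho (1 + a x)))
# solves -y'' + q y = rho^2 y with q(x) = 2 a^2 / (1 + a x)^2.
a = 0.7 + 0.3j          # sample value, a not in (-infinity, 0]
rho = 1.5 + 0.4j        # sample spectral parameter

def q(x):
    return 2 * a * a / (1 + a * x) ** 2

def e(x):
    return cmath.exp(1j * rho * x) * (1 - a / (1j * rho * (1 + a * x)))

h = 1e-4
res = []
for x in [0.3, 1.0, 2.7]:
    e2 = (e(x + h) - 2 * e(x) + e(x - h)) / h ** 2   # approximates e''(x)
    res.append(abs(-e2 + q(x) * e(x) - rho ** 2 * e(x)))
print(max(res) < 1e-5)
```

The factor $a/(i\rho(1+ax))$ makes the $\rho = 0$ singularity explicit, while $e(x,\rho)\exp(-i\rho x) \to 1$ as $x \to \infty$, as the Jost normalization requires.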
For the Jost solution $e(x,\rho)$ there exists a transformation operator. More precisely, the following theorem is valid.

Theorem 2.1.3. Let $(1+x)q(x) \in L(0,\infty)$. Then the Jost solution $e(x,\rho)$ can be represented in the form
\[
e(x,\rho) = \exp(i\rho x) + \int_x^{\infty} A(x,t)\exp(i\rho t)\, dt, \quad \operatorname{Im}\rho \ge 0, \ x \ge 0, \qquad (2.1.33)
\]
where $A(x,t)$ is a continuous function for $0 \le x \le t < \infty$, and
\[
A(x,x) = \frac{1}{2}\int_x^{\infty} q(t)\, dt, \qquad (2.1.34)
\]
\[
|A(x,t)| \le \frac{1}{2} Q_0\Big(\frac{x+t}{2}\Big)\exp\Big(Q_1(x) - Q_1\Big(\frac{x+t}{2}\Big)\Big), \qquad (2.1.35)
\]
\[
1 + \int_x^{\infty} |A(x,t)|\, dt \le \exp(Q_1(x)), \quad \int_x^{\infty} |A(x,t)|\, dt \le Q_1(x)\exp(Q_1(x)). \qquad (2.1.36)
\]
Moreover, the function $A(x_1, x_2)$ has first derivatives $\dfrac{\partial A}{\partial x_i}$, $i = 1, 2$; the functions
\[
\frac{\partial A(x_1,x_2)}{\partial x_i} + \frac{1}{4}\, q\Big(\frac{x_1+x_2}{2}\Big)
\]
are absolutely continuous with respect to $x_1$ and $x_2$, and satisfy the estimates
\[
\Big|\frac{\partial A(x_1,x_2)}{\partial x_i} + \frac{1}{4}\, q\Big(\frac{x_1+x_2}{2}\Big)\Big| \le \frac{1}{2}\, Q_0(x_1)\, Q_0\Big(\frac{x_1+x_2}{2}\Big)\exp\Big(Q_1(x_1) - Q_1\Big(\frac{x_1+x_2}{2}\Big)\Big), \quad i = 1, 2. \qquad (2.1.37)
\]
Proof. The arguments for proving this theorem are similar to those in Section 1.3, with adequate modifications. This theorem can also be found in [mar1] and [lev2].

According to (2.1.7) and (2.1.10) we have
\[
e(x,\rho) = \sum_{k=0}^{\infty} \varepsilon_k(x,\rho), \quad \varepsilon_k(x,\rho) = z_k(x,\rho)\exp(i\rho x). \qquad (2.1.38)
\]
Let us show by induction that the following representation is valid:
\[
\varepsilon_k(x,\rho) = \int_x^{\infty} a_k(x,t)\exp(i\rho t)\, dt, \quad k \ge 1, \qquad (2.1.39)
\]
where the functions $a_k(x,t)$ do not depend on $\rho$.

First we calculate $\varepsilon_1(x,\rho)$. By virtue of (2.1.9) and (2.1.38),
\[
\varepsilon_1(x,\rho) = \int_x^{\infty} \frac{\sin\rho(s-x)}{\rho}\exp(i\rho s)\, q(s)\, ds = \frac{1}{2}\int_x^{\infty} q(s)\Big(\int_x^{2s-x}\exp(i\rho t)\, dt\Big)\, ds.
\]
Interchanging the order of integration we obtain that (2.1.39) holds for $k = 1$, where
\[
a_1(x,t) = \frac{1}{2}\int_{(t+x)/2}^{\infty} q(s)\, ds.
\]
Suppose now that (2.1.39) is valid for a certain $k \ge 1$. Then
\[
\varepsilon_{k+1}(x,\rho) = \int_x^{\infty} \frac{\sin\rho(s-x)}{\rho}\, q(s)\,\varepsilon_k(s,\rho)\, ds = \int_x^{\infty} \frac{\sin\rho(s-x)}{\rho}\, q(s)\Big(\int_s^{\infty} a_k(s,u)\exp(i\rho u)\, du\Big)\, ds
\]
\[
= \frac{1}{2}\int_x^{\infty} q(s)\Big(\int_s^{\infty} a_k(s,u)\Big(\int_{u-s+x}^{u+s-x}\exp(i\rho t)\, dt\Big)\, du\Big)\, ds.
\]
We extend $a_k(s,u)$ by zero for $u < s$. For $s \ge x$ this yields
\[
\int_s^{\infty} a_k(s,u)\Big(\int_{u-s+x}^{u+s-x}\exp(i\rho t)\, dt\Big)\, du = \int_x^{\infty} \exp(i\rho t)\Big(\int_{t-s+x}^{t+s-x} a_k(s,u)\, du\Big)\, dt.
\]
Therefore
\[
\varepsilon_{k+1}(x,\rho) = \frac{1}{2}\int_x^{\infty} \exp(i\rho t)\Big(\int_x^{\infty} q(s)\Big(\int_{t-s+x}^{t+s-x} a_k(s,u)\, du\Big)\, ds\Big)\, dt = \int_x^{\infty} a_{k+1}(x,t)\exp(i\rho t)\, dt,
\]
where
\[
a_{k+1}(x,t) = \frac{1}{2}\int_x^{\infty} q(s)\Big(\int_{t-s+x}^{t+s-x} a_k(s,u)\, du\Big)\, ds, \quad t \ge x.
\]
Changing the variables according to $u + s = 2\alpha$, $u - s = 2\beta$, we obtain
\[
a_{k+1}(x,t) = \int_{(t+x)/2}^{\infty} \Big(\int_0^{(t-x)/2} q(\alpha-\beta)\, a_k(\alpha-\beta, \alpha+\beta)\, d\beta\Big)\, d\alpha.
\]
Taking $H_k(\alpha,\beta) = a_k(\alpha-\beta, \alpha+\beta)$, $t + x = 2u$, $t - x = 2v$, we calculate for $0 \le v \le u$,
\[
H_1(u,v) = \frac{1}{2}\int_u^{\infty} q(s)\, ds, \quad H_{k+1}(u,v) = \int_u^{\infty} \Big(\int_0^v q(\alpha-\beta)\, H_k(\alpha,\beta)\, d\beta\Big)\, d\alpha. \qquad (2.1.40)
\]
It can be shown by induction that
\[
|H_{k+1}(u,v)| \le \frac{1}{2}\, Q_0(u)\,\frac{(Q_1(u-v) - Q_1(u))^k}{k!}, \quad k \ge 0, \ 0 \le v \le u. \qquad (2.1.41)
\]
Indeed, for $k = 0$, (2.1.41) is obvious. Suppose that (2.1.41) is valid for $H_k(u,v)$. Then (2.1.40) implies
\[
|H_{k+1}(u,v)| \le \frac{1}{2}\int_u^{\infty} Q_0(\alpha)\Big(\int_0^v |q(\alpha-\beta)|\,\frac{(Q_1(\alpha-\beta) - Q_1(\alpha))^{k-1}}{(k-1)!}\, d\beta\Big)\, d\alpha.
\]
Since the functions $Q_0(x)$ and $Q_1(x)$ are monotonic, we get
\[
|H_{k+1}(u,v)| \le \frac{1}{2}\,\frac{Q_0(u)}{(k-1)!}\int_u^{\infty} (Q_1(\alpha-v) - Q_1(\alpha))^{k-1}\big(Q_0(\alpha-v) - Q_0(\alpha)\big)\, d\alpha = \frac{1}{2}\, Q_0(u)\,\frac{(Q_1(u-v) - Q_1(u))^k}{k!},
\]
i.e. (2.1.41) is proved. Therefore, the series
\[
H(u,v) = \sum_{k=1}^{\infty} H_k(u,v)
\]
converges absolutely and uniformly for $0 \le v \le u$, and
\[
H(u,v) = \frac{1}{2}\int_u^{\infty} q(s)\, ds + \int_u^{\infty} \Big(\int_0^v q(\alpha-\beta)\, H(\alpha,\beta)\, d\beta\Big)\, d\alpha, \qquad (2.1.42)
\]
\[
|H(u,v)| \le \frac{1}{2}\, Q_0(u)\exp\big(Q_1(u-v) - Q_1(u)\big). \qquad (2.1.43)
\]
Put
\[
A(x,t) = H\Big(\frac{t+x}{2}, \frac{t-x}{2}\Big). \qquad (2.1.44)
\]
Then
\[
A(x,t) = \sum_{k=1}^{\infty} a_k(x,t),
\]
the series converges absolutely and uniformly for $0 \le x \le t < \infty$, and (2.1.33)-(2.1.35) are valid. Using (2.1.35) we calculate
\[
\int_x^{\infty} |A(x,t)|\, dt \le \exp(Q_1(x))\int_x^{\infty} Q_0(\xi)\exp(-Q_1(\xi))\, d\xi = \exp(Q_1(x))\int_x^{\infty} \frac{d}{d\xi}\Big(\exp(-Q_1(\xi))\Big)\, d\xi = \exp(Q_1(x)) - 1,
\]
and we arrive at (2.1.36).
Furthermore, it follows from (2.1.42) that
\[
\frac{\partial H(u,v)}{\partial u} = -\frac{1}{2}\, q(u) - \int_0^v q(u-\beta)\, H(u,\beta)\, d\beta, \qquad (2.1.45)
\]
\[
\frac{\partial H(u,v)}{\partial v} = \int_u^{\infty} q(\alpha-v)\, H(\alpha,v)\, d\alpha. \qquad (2.1.46)
\]
It follows from (2.1.45), (2.1.46) and (2.1.43) that
\[
\Big|\frac{\partial H(u,v)}{\partial u} + \frac{1}{2}\, q(u)\Big| \le \frac{1}{2}\int_0^v |q(u-\beta)|\, Q_0(u)\exp\big(Q_1(u-\beta) - Q_1(u)\big)\, d\beta,
\]
\[
\Big|\frac{\partial H(u,v)}{\partial v}\Big| \le \frac{1}{2}\int_u^{\infty} |q(\alpha-v)|\, Q_0(\alpha)\exp\big(Q_1(\alpha-v) - Q_1(\alpha)\big)\, d\alpha.
\]
Since
\[
\int_0^v |q(u-\beta)|\, d\beta = \int_{u-v}^u |q(s)|\, ds \le Q_0(u-v),
\]
\[
Q_1(\alpha-v) - Q_1(\alpha) = \int_{\alpha-v}^{\alpha} Q_0(t)\, dt \le \int_{u-v}^u Q_0(t)\, dt = Q_1(u-v) - Q_1(u), \quad \alpha \ge u,
\]
we get
\[
\Big|\frac{\partial H(u,v)}{\partial u} + \frac{1}{2}\, q(u)\Big| \le \frac{1}{2}\, Q_0(u)\exp\big(Q_1(u-v) - Q_1(u)\big)\int_0^v |q(u-\beta)|\, d\beta
\]
\[
\le \frac{1}{2}\, Q_0(u-v)\, Q_0(u)\exp\big(Q_1(u-v) - Q_1(u)\big), \qquad (2.1.47)
\]
\[
\Big|\frac{\partial H(u,v)}{\partial v}\Big| \le \frac{1}{2}\, Q_0(u)\exp\big(Q_1(u-v) - Q_1(u)\big)\int_u^{\infty} |q(\alpha-v)|\, d\alpha
\]
\[
\le \frac{1}{2}\, Q_0(u-v)\, Q_0(u)\exp\big(Q_1(u-v) - Q_1(u)\big). \qquad (2.1.48)
\]
By virtue of (2.1.44),
\[
\frac{\partial A(x,t)}{\partial x} = \frac{1}{2}\Big(\frac{\partial H(u,v)}{\partial u} - \frac{\partial H(u,v)}{\partial v}\Big), \quad \frac{\partial A(x,t)}{\partial t} = \frac{1}{2}\Big(\frac{\partial H(u,v)}{\partial u} + \frac{\partial H(u,v)}{\partial v}\Big),
\]
where
\[
u = \frac{t+x}{2}, \quad v = \frac{t-x}{2}.
\]
Hence,
\[
\frac{\partial A(x,t)}{\partial x} + \frac{1}{4}\, q\Big(\frac{x+t}{2}\Big) = \frac{1}{2}\Big(\frac{\partial H(u,v)}{\partial u} + \frac{1}{2}\, q(u)\Big) - \frac{1}{2}\,\frac{\partial H(u,v)}{\partial v},
\]
\[
\frac{\partial A(x,t)}{\partial t} + \frac{1}{4}\, q\Big(\frac{x+t}{2}\Big) = \frac{1}{2}\Big(\frac{\partial H(u,v)}{\partial u} + \frac{1}{2}\, q(u)\Big) + \frac{1}{2}\,\frac{\partial H(u,v)}{\partial v}.
\]
Taking (2.1.47) and (2.1.48) into account, we arrive at (2.1.37). $\square$
Let $(1+x)q(x) \in L(0,\infty)$. We introduce the potentials
\[
q_r(x) = \begin{cases} q(x), & x \le r, \\ 0, & x > r, \end{cases} \qquad r \ge 0, \qquad (2.1.49)
\]
and consider the corresponding Jost solutions
\[
e_r(x,\rho) = \exp(i\rho x) + \int_x^{\infty} A_r(x,t)\exp(i\rho t)\, dt. \qquad (2.1.50)
\]
According to Theorems 2.1.2 and 2.1.3,
\[
\left.\begin{aligned}
&|e_r(x,\rho)\exp(-i\rho x)| \le \exp(Q_1(x)), \\
&|e_r(x,\rho)\exp(-i\rho x) - 1| \le Q_1(x)\exp(Q_1(x)), \\
&|e_r'(x,\rho)\exp(-i\rho x) - i\rho| \le Q_0(x)\exp(Q_1(x)),
\end{aligned}\right\} \qquad (2.1.51)
\]
\[
|A_r(x,t)| \le \frac{1}{2}\, Q_0\Big(\frac{x+t}{2}\Big)\exp\Big(Q_1(x) - Q_1\Big(\frac{x+t}{2}\Big)\Big). \qquad (2.1.52)
\]
Moreover,
\[
\left.\begin{aligned}
&e_r(x,\rho) \equiv \exp(i\rho x) \quad \text{for } x > r, \\
&A_r(x,t) \equiv 0 \quad \text{for } x + t > 2r.
\end{aligned}\right\} \qquad (2.1.53)
\]
Lemma 2.1.3. Let $(1+x)q(x) \in L(0,\infty)$. Then for $\operatorname{Im}\rho \ge 0$, $x \ge 0$, $r \ge 0$,
\[
|(e_r(x,\rho) - e(x,\rho))\exp(-i\rho x)| \le \int_r^{\infty} t|q(t)|\, dt\,\exp(Q_1(0)), \qquad (2.1.54)
\]
\[
|(e_r'(x,\rho) - e'(x,\rho))\exp(-i\rho x)| \le \Big(Q_0(r) + \int_r^{\infty} t|q(t)|\, dt\, Q_0(0)\Big)\exp(Q_1(0)). \qquad (2.1.55)
\]
Proof. Denote
\[
z_r(x,\rho) = e_r(x,\rho)\exp(-i\rho x), \quad z(x,\rho) = e(x,\rho)\exp(-i\rho x), \quad u_r(x,\rho) = |z_r(x,\rho) - z(x,\rho)|.
\]
It follows from (2.1.8) and (2.1.29) that
\[
u_r(x,\rho) \le \int_x^{\infty} (t-x)\,|q_r(t)z_r(t,\rho) - q(t)z(t,\rho)|\, dt, \quad \operatorname{Im}\rho \ge 0, \ x \ge 0, \ r \ge 0. \qquad (2.1.56)
\]
Let $x \ge r$. By virtue of (2.1.23), (2.1.49) and (2.1.56),
\[
u_r(x,\rho) \le \int_x^{\infty} (t-x)|q(t)z(t,\rho)|\, dt \le \int_x^{\infty} (t-x)|q(t)|\exp(Q_1(t))\, dt.
\]
Since the function $Q_1(x)$ is monotonic we get
\[
u_r(x,\rho) \le Q_1(x)\exp(Q_1(x)) \le Q_1(r)\exp(Q_1(0)), \quad x \ge r. \qquad (2.1.57)
\]
For $x \le r$, (2.1.56) implies
\[
u_r(x,\rho) \le \int_r^{\infty} (t-x)|q(t)z(t,\rho)|\, dt + \int_x^r (t-x)|q(t)|\, u_r(t,\rho)\, dt.
\]
Using (2.1.23) we infer
\[
u_r(x,\rho) \le \exp(Q_1(r))\int_r^{\infty} t|q(t)|\, dt + \int_x^r (t-x)|q(t)|\, u_r(t,\rho)\, dt.
\]
According to Lemma 2.1.2,
\[
u_r(x,\rho) \le \exp(Q_1(r))\int_r^{\infty} t|q(t)|\, dt\,\exp\Big(\int_x^r (t-x)|q(t)|\, dt\Big), \quad x \le r,
\]
and consequently
\[
u_r(x,\rho) \le \exp(Q_1(x))\int_r^{\infty} t|q(t)|\, dt \le \exp(Q_1(0))\int_r^{\infty} t|q(t)|\, dt, \quad x \le r.
\]
Together with (2.1.57) this yields (2.1.54).
Denote
\[
v_r(x,\rho) = |(e_r'(x,\rho) - e'(x,\rho))\exp(-i\rho x)|, \quad \operatorname{Im}\rho \ge 0, \ x \ge 0, \ r \ge 0.
\]
It follows from (2.1.20) that
\[
v_r(x,\rho) \le \int_x^{\infty} |q_r(t)z_r(t,\rho) - q(t)z(t,\rho)|\, dt, \quad \operatorname{Im}\rho \ge 0, \ x \ge 0, \ r \ge 0. \qquad (2.1.58)
\]
Let $x \ge r$. By virtue of (2.1.23), (2.1.49) and (2.1.58),
\[
v_r(x,\rho) \le \int_x^{\infty} |q(t)|\exp(Q_1(t))\, dt \le Q_0(x)\exp(Q_1(x)) \le Q_0(r)\exp(Q_1(0)), \quad \operatorname{Im}\rho \ge 0, \ 0 \le r \le x. \qquad (2.1.59)
\]
For $x \le r$, (2.1.58) gives
\[
v_r(x,\rho) \le \int_r^{\infty} |q(t)z(t,\rho)|\, dt + \int_x^r |q(t)|\, u_r(t,\rho)\, dt.
\]
Using (2.1.23) and (2.1.54) we infer
\[
v_r(x,\rho) \le \int_r^{\infty} |q(t)|\exp(Q_1(t))\, dt + \exp(Q_1(0))\int_r^{\infty} s|q(s)|\, ds \int_x^r |q(t)|\, dt,
\]
and consequently
\[
v_r(x,\rho) \le \Big(Q_0(r) + Q_0(0)\int_r^{\infty} t|q(t)|\, dt\Big)\exp(Q_1(0)), \quad 0 \le x \le r, \ \operatorname{Im}\rho \ge 0.
\]
Together with (2.1.59) this yields (2.1.55). $\square$
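The estimate (2.1.54) can be probed numerically with the same successive-approximation scheme used in the proof of Theorem 2.1.1. In the sketch below (illustration only: sample potential $q(t) = e^{-t}$, so $Q_1(0) = 1$ and $\int_r^\infty t|q(t)|\,dt = (r+1)e^{-r}$; grid and truncation ad hoc), $z_r$ and $z$ are computed for the truncated and full potentials and their distance is compared with the bound:

```python
import cmath, math

# Numerical probe of (2.1.54) for q(t) = exp(-t) truncated at r.
rho = 2.0 + 0.5j
T, n = 14.0, 700
h = T / n
xs = [j * h for j in range(n + 1)]

def solve_z(qv):
    # successive approximations for (2.1.8), as in the proof of Theorem 2.1.1
    z = [1.0 + 0j] * (n + 1)
    for _ in range(40):
        f = [qv[k] * z[k] for k in range(n + 1)]
        g = [cmath.exp(2j * rho * xs[k]) * f[k] for k in range(n + 1)]
        F = [0j] * (n + 1)
        G = [0j] * (n + 1)
        for k in range(n - 1, -1, -1):
            F[k] = F[k + 1] + h * (f[k] + f[k + 1]) / 2
            G[k] = G[k + 1] + h * (g[k] + g[k + 1]) / 2
        z = [1 - (F[k] - cmath.exp(-2j * rho * xs[k]) * G[k]) / (2j * rho)
             for k in range(n + 1)]
    return z

r = 3.0
z = solve_z([math.exp(-t) for t in xs])
z_r = solve_z([math.exp(-t) if t <= r else 0.0 for t in xs])

diff = max(abs(z_r[k] - z[k]) for k in range(n + 1))
bound = (r + 1) * math.exp(-r) * math.exp(1.0)   # right-hand side of (2.1.54)
print(diff <= bound, abs(z_r[n] - 1) < 1e-12)
```

Note that $z_r \equiv 1$ beyond the truncation point, the discrete counterpart of (2.1.53).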
Theorem 2.1.4. For each $\delta > 0$, there exists $a = a_\delta \ge 0$ such that equation (2.1.1) has a unique solution $y = E(x,\rho)$, $\rho \in \Omega_\delta$, satisfying the integral equation
\[
E(x,\rho) = \exp(-i\rho x) + \frac{1}{2i\rho}\int_a^x \exp(i\rho(x-t))\, q(t)\, E(t,\rho)\, dt + \frac{1}{2i\rho}\int_x^{\infty} \exp(i\rho(t-x))\, q(t)\, E(t,\rho)\, dt. \qquad (2.1.60)
\]
The function $E(x,\rho)$, called the Birkhoff solution for (2.1.1), has the following properties:

$(i_1)$ $E^{(\nu)}(x,\rho) = (-i\rho)^{\nu}\exp(-i\rho x)(1 + o(1))$, $x \to \infty$, $\nu = 0, 1$, uniformly for $\rho \in \Omega_\delta$ with $\operatorname{Im}\rho \ge \varepsilon$, for each fixed $\varepsilon > 0$;

$(i_2)$ $E^{(\nu)}(x,\rho) = (-i\rho)^{\nu}\exp(-i\rho x)(1 + O(\rho^{-1}))$, $|\rho| \to \infty$, uniformly for $x \ge a$;

$(i_3)$ for each fixed $x \ge 0$, the functions $E^{(\nu)}(x,\rho)$ are analytic for $\operatorname{Im}\rho > 0$, $|\rho| > \delta$, and are continuous for $\rho \in \Omega_\delta$;

$(i_4)$ the functions $e(x,\rho)$ and $E(x,\rho)$ form a fundamental system of solutions for (2.1.1), and $\langle e(x,\rho), E(x,\rho)\rangle = -2i\rho$;

$(i_5)$ if $Q_0(0) \le \delta$, then one can take $a = 0$ above.
Proof. For fixed $\delta>0$ choose $a=a_\delta\ge0$ such that $Q_0(a)\le\delta$. We transform (2.1.60) by means of the replacement $E(x,\rho)=\exp(-i\rho x)\mu(x,\rho)$ to the equation
\[ \mu(x,\rho)=1+\frac{1}{2i\rho}\int_a^x\exp(2i\rho(x-t))q(t)\mu(t,\rho)\,dt+\frac{1}{2i\rho}\int_x^\infty q(t)\mu(t,\rho)\,dt. \tag{2.1.61} \]
The method of successive approximations gives
\[ \mu_0(x,\rho)=1,\qquad \mu_{k+1}(x,\rho)=\frac{1}{2i\rho}\int_a^x\exp(2i\rho(x-t))q(t)\mu_k(t,\rho)\,dt+\frac{1}{2i\rho}\int_x^\infty q(t)\mu_k(t,\rho)\,dt, \]
\[ \mu(x,\rho)=\sum_{k=0}^\infty\mu_k(x,\rho). \]
This yields
\[ |\mu_{k+1}(x,\rho)|\le\frac{1}{2|\rho|}\int_a^\infty|q(t)\mu_k(t,\rho)|\,dt, \]
and hence
\[ |\mu_k(x,\rho)|\le\Big(\frac{Q_0(a)}{2|\rho|}\Big)^k. \]
Thus for $x\ge a$, $|\rho|\ge Q_0(a)$, we get
\[ |\mu(x,\rho)|\le2,\qquad |\mu(x,\rho)-1|\le\frac{Q_0(a)}{|\rho|}. \]
It follows from (2.1.60) that
\[ E'(x,\rho)=\exp(-i\rho x)\Big(-i\rho+\frac12\int_a^x\exp(2i\rho(x-t))q(t)\mu(t,\rho)\,dt-\frac12\int_x^\infty q(t)\mu(t,\rho)\,dt\Big). \tag{2.1.62} \]
Since $|\mu(x,\rho)|\le2$ for $x\ge a$, it follows from (2.1.61) and (2.1.62) that, with $\tau:={\rm Im}\,\rho$,
\[ |E^{(\nu)}(x,\rho)(-i\rho)^{-\nu}\exp(i\rho x)-1| \le\frac{1}{|\rho|}\Big(\int_a^x\exp(-2\tau(x-t))|q(t)|\,dt+\int_x^\infty|q(t)|\,dt\Big) \]
\[ \le\frac{1}{|\rho|}\Big(\exp(-\tau x)\int_a^{x/2}|q(t)|\,dt+\int_{x/2}^\infty|q(t)|\,dt\Big), \]
and consequently $(i_1)$--$(i_2)$ are proved.

The other assertions of Theorem 2.1.4 are obvious. $\Box$
2.1.2. Properties of the spectrum. Denote
() = e

(0, ) he(0, ). (2.1.63)


By virtue of Theorem 2.1.1, the function () is analytic for Im > 0, and continuous
for . It follows from (2.1.5) that for [[ , ,
e(0, ) = 1 +

1
i
+o
_
1

_
, () = (i)
_
1 +

11
i
+o
_
1

__
, (2.1.64)
115
where
1
= (0),
11
= (0) h. Using (2.1.7), (2.1.16) and (2.1.20) one can obtain more
precisely
e(0, ) = 1 +

1
i
+
1
2i
_

0
q(t) exp(2it) dt +O
_
1

2
_
,
() = (i)
_
1 +

11
i

1
2i
_

0
q(t) exp(2it) dt +O
_
1

2
__
.
_

_
(2.1.65)
Denote
= =
2
: , () = 0,

= =
2
: Im > 0, () = 0,

= =
2
: Im = 0, ,= 0, () = 0.
Obviously, =

is a bounded set, and

is a bounded and at most countable set.


Denote
(x, ) =
e(x, )
()
. (2.1.66)
The function (x, ) satises (2.1.1) and on account of (2.1.63) and Theorem 2.1.1 also the
conditions
U() = 1, (2.1.67)
(x, ) = O(exp(ix)), x , , (2.1.68)
where U is dened by (2.1.2). The function (x, ) is called the Weyl solution for L.
Note that (2.1.1), (2.1.67) and (2.1.68) uniquely determine the Weyl solution.
Denote M() := (0, ). The function M() is called the Weyl function for L. It
follows from (2.1.66) that
M() =
e(0, )
()
. (2.1.69)
Clearly,
(x, ) = S(x, ) + M()(x, ), (2.1.70)
where the functions (x, ) and S(x, ) are solutions of (2.1.1) under the initial conditions
(0, ) = 1,

(0, ) = h, S(0, ) = 0, S

(0, ) = 1.
We recall that the Weyl function plays an important role in the spectral theory of Sturm-
Liouville operators (see [lev3] for more details).
By virtue of Liouvilles formula for the Wronskian [cod1, p.83], (x, ), (x, )) does
not depend on x. Since for x = 0,
(x, ), (x, ))
|x=0
= U() = 1,
we infer
(x, ), (x, )) 1. (2.1.71)
Theorem 2.1.5. The Weyl function M() is analytic in

and continuous in

1
. The set of singularities of M() (as an analytic function) coincides with the set

0
:= : 0 .
116
Theorem 2.1.5 follows from (2.1.63), (2.1.69) and Theorem 2.1.1. By virtue of (2.1.70),
the set of singularities of the Weyl solution (x, ) coincides with
0
for all x 0, since
the functions (x, ) and S(x, ) are entire in for each xed x 0.
Denition 2.1.1. The set of singularities of the Weyl function M() is called the
spectrum of L. The values of the parameter , for which equation (2.1.1) has nontrivial
solutions satisfying the conditions U(y) = 0, y() = 0 (i.e. lim
x
y(x) = 0 ), are called
eigenvalues of L, and the corresponding solutions are called eigenfunctions.
Remark 2.1.3. One can introduce the operator
L
o
: D(L
o
) L
2
(0, ), y y

+q(x)y
with the domain of denition D(L
o
) = y : y L
2
(I) AC
loc
(I), y

AC
loc
(I), L
o
y
L
2
(I), U(y) = 0, where I := [0, ). It is easy to verify that the spectrum of L
o
coincides
with
0
. For the Sturm-Liouville equation there is no dierence between working either with
the operator L
o
or with the pair L. However, for generalizations for many other classes of
inverse problems, from methodical point of view it is more natural to consider the pair L
(see, for example, [yur1] ).
Theorem 2.1.6. L has no eigenvalues > 0.
Proof. Suppose that
0
=
2
0
> 0 is an eigenvalue, and let y
0
(x) be a correspond-
ing eigenfunction. Since the functions e(x,
0
), e(x,
0
) form a fundamental system
of solutions of (2.1.1), we have y
0
(x) = Ae(x,
0
) + Be(x,
0
). For x , y
0
(x)
0, e(x,
0
) exp(i
0
x). But this is possible only if A = B = 0. 2
Theorem 2.1.7. Let
0
/ [0, ). For
0
to be an eigenvalue, it is necessary and
sucient that (
0
) = 0. In other words, the set of nonzero eigenvalues coincides with

. For each eigenvalue


0

there exists only one (up to a multiplicative constant)


eigenfunction, namely,
(x,
0
) =
0
e(x,
0
),
0
,= 0. (2.1.72)
Proof. Let
0

. Then U(e(x,
0
)) = (
0
) = 0 and, by virtue of (2.1.4),
lim
x
e(x,
0
) = 0. Thus, e(x,
0
) is an eigenfunction, and
0
=
2
0
is an eigenvalue. More-
over, it follows from (2.1.66) and (2.1.71) that (x, ), e(x, )) = (), and consequently
(2.1.72) is valid.
Conversely, let
0
=
2
0
, Im
0
> 0 be an eigenvalue, and let y
0
(x) be a corresponding
eigenfunction. Clearly, y
0
(0) ,= 0. Without loss of generality we put y
0
(0) = 1. Then
y

0
(0) = h, and hence y
0
(x) (x,
0
). Since the functions E(x,
0
) and e(x,
0
) form a
fundamental system of solutions of equation (2.1.1), we get y
0
(x) =
0
E(x,
0
) +
0
e(x,
0
).
As x , we calculate
0
= 0, i.e. y
0
(x) =
0
e(x,
0
). This yields (2.1.72). Conse-
quently, (
0
) = U(e(x,
0
)) = 0, and (x,
0
) and e(x,
0
) are eigenfunctions. 2
Thus, the spectrum of L consists of the positive half-line : 0, and the discrete
set =

. Each element of

is an eigenvalue of L. According to Theorem 2.1.6,


the points of

are not eigenvalues of L, they are called spectral singularities of L.


Example 2.1.1. Let q(x) 0, h = i, where is a real number. Then () = ih,
and

= ,

= , i.e. L has no eigenvalues, and the point


0
= is a spectral
singularity for L.
117
It follows from (2.1.5), (2.1.64), (2.1.66) and (2.1.69) that for [[ , ,
M() =
1
i
_
1 +
m
1
i
+o
_
1

__
, (2.1.73)

()
(x, ) = (i)
1
exp(ix)
_
1 +
B(x)
i
+o
_
1

__
, (2.1.74)
uniformly for x 0; here
m
1
= h, B(x) = h +
1
2
_
x
0
q(s) ds.
Taking (2.1.65) into account one can derive more precisely
M() =
1
i
_
1 +
m
1
i
+
1
i
_

0
q(t) exp(2it) dt +O
_
1

2
__
, [[ , . (2.1.75)
Moreover, if q(x) W
N
, then by virtue of (2.1.21) we get
M() =
1
i
_
1 +
N+1

s=1
m
s
(i)
s
+o
_
1

N+1
__
, [[ , , (2.1.76)
where m
1
= h, m
2
=
1
2
q(0) + h
2
, .
Denote
V () =
1
2i
_
M

() M
+
()
_
, > 0, (2.1.77)
where
M

() = lim
z0, Re z>0
M( iz).
It follows from (2.1.73) and (2.1.77) that for > 0, +,
V () =
1
2i
_

1
i
_
1
m
1
i
+o
_
1

__

1
i
_
1 +
m
1
i
+o
_
1

___
,
and consequently
V () =
1

_
1 + o
_
1

__
, > 0, +. (2.1.78)
In view of (2.1.75), we calculate more precisely
V () =
1

_
1 +
1

_

0
q(t) sin 2t dt +O
_
1

2
__
, > 0, +. (2.1.79)
Moreover, if q(x) W
N+1
, then (2.1.76) implies
V () =
1

_
1 +
N+1

s=1
V
s

s
+o
_
1

N+1
__
, > 0, +, (2.1.80)
where V
2s
= (1)
s
m
2s
, V
2s+1
= 0.
118
Remark 2.1.4. Analogous results are also valid if we replace U(y) = 0 by the bound-
ary condition U
0
(y) := y(0) = 0. In this case, the Weyl solution
0
(x, ) and the Weyl
function M
0
() are dened by the conditions
0
(0, ) = 1,
0
(x, ) = O(exp(ix)), x
, M
0
() :=

0
(0, ), and

0
(x, ) =
e(x, )
e(0, )
= C(x, ) + M
0
()S(x, ), M
0
() =
e

(0, )
e(0, )
, (2.1.81)
where C(x, ) is a solution of (2.1.1) under the conditions C(0, ) = 1, C

(0, ) = 0.
2.1.3. An expansion theorem. In the - plane we consider the contour =

(with counterclockwise circuit), where

is a bounded closed contour encircling the set


0, and

is the two-sided cut along the arc : > 0, / int

.
6
-
'
E

g. 2.1.1.
Theorem 2.1.8. Let f(x) W
2
. Then, uniformly for x 0,
f(x) =
1
2i
_

(x, )F()M() d, (2.1.82)


where
F() :=
_

0
(t, )f(t) dt.
Proof. We will use the contour integral method. For this purpose we consider the
function
Y (x, ) = (x, )
_
x
0
(t, )f(t) dt +(x, )
_

x
(t, )f(t) dt. (2.1.83)
Since the functions (x, ) and (x, ) satisfy (2.1.1) we transform Y (x, ) as follows
Y (x, ) =
1

(x, )
_
x
0
(

(t, ) + q(t)(t, ))f(t) dt


+
1

(x, )
_

x
(

(t, ) + q(t)(t, ))f(t) dt.


119
Two-fold integration by parts of terms with second derivatives yields in view of (2.1.71)
Y (x, ) =
1

_
f(x) +Z(x, )
_
, (2.1.84)
where
Z(x, ) = (f

(0) hf(0))(x, ) + (x, )


_
x
0
(t, )f(t) dt
+(x, )
_

x
(t, )f(t) dt. (2.1.85)
Similarly,
F() =
1

_
f

(0) hf(0)
_
+
1

_

0
(t, )f(t) dt, > 0. (2.1.86)
The function (x, ) satises the integral equation (1.1.11). Denote

T
() = max
0xT
([(x, )[ exp([[x)), := Im.
Then (1.1.11) gives for [[ 1, x [0, T],
[(x, )[ exp([[x) C +

T
()
[[
_
T
0
[q(t)[ dt,
and consequently

T
() C +

T
()
[[
_
T
0
[q(t)[ dt C +

T
()
[[
_

0
[q(t)[ dt.
From this we get [
T
()[ C for [[

. Together with (1.1.12) this yields for =


0, 1, [[

,
[
()
(x, )[ C[[

exp([[x), (2.1.87)
uniformly for x 0. Furthermore, it follows from (2.1.74) that for = 0, 1, [[

,
[
()
(x, )[ C[[
1
exp([[x), (2.1.88)
uniformly for x 0. By virtue of (2.1.85), (2.1.87) and (2.1.88) we get
Z(x, ) = O
_
1

_
, [[ ,
uniformly for x 0. Hence, (2.1.84) implies
lim
R
sup
x0

f(x)
1
2i
_
||=R
Y (x, ) d

= 0, (2.1.89)
where the contour in the integral is used with counterclockwise circuit. Consider the contour

0
R
= ( : [[ R) : [[ = R (with clockwise circuit).
120
6
-
'
E
s
~
R

0
R
g. 2.1.2
By Cauchys theorem [con1, p.85]
1
2i
_

0
R
Y (x, )d = 0.
Taking (2.1.89) into account we obtain
lim
R
sup
x0

f(x)
1
2i
_

R
Y (x, ) d

= 0,
where
R
= : [[ R (with counterclockwise circuit). From this, using (2.1.83) and
(2.1.70), we arrive at (2.1.82), since the terms with S(x, ) vanish by Cauchys theorem.
We note that according to (2.1.79), (2.1.86) and (2.1.87),
F() = O
_
1

_
, M() = O
_
1

_
, (x, ) = O(1), x 0, > 0, ,
and consequently the integral in (2.1.82) converges absolutely and uniformly for x 0. 2
Remark 2.1.5. If q(x) and h are real, and (1 +x)q(x) L(0, ), then (see Section
2.3)

= ,

(, 0) is a nite set of simple eigenvalues, V () > 0 for > 0 ( V ()


is dened by (2.1.77)), and M() = O(
1
) as 0. Then (2.1.82) takes the form
f(x) =
_

0
(x, )F()V ()d +

(x,
j
)F(
j
)Q
j,
Q
j
:= Res
=
j
M().
or
f(x) =
_

(x, )F() d(),


where () is the spectral function of L (see [lev3]). For < 0, () is a step-function;
for > 0, () is an absolutely continuous function, and

() = V ().
Remark 2.1.6. It follows from the proof that Theorem 2.1.8 remains valid also for
f(x) W
1
.
2.2. RECOVERY OF THE DIFFERENTIAL EQUATION FROM THE WEYL
FUNCTION.
121
In this section we study the inverse problem of recovering the pair L = L(q(x), h) of
the form (2.1.1)-(2.1.2) from the given Weyl function M(). For this purpose we will use
the method of spectral mappings described in Section 1.6 for Sturm-Liouville operators on
a nite interval. Since for the half-line the arguments are more or less similar, the proofs in
this section are shorter than in Section 1.6.
First, let us prove the uniqueness theorem for the solution of the inverse problem. We
agree, as in Chapter 1, that together with L we consider here and in the sequel a pair

L = L( q(x),

h) of the same form but with dierent coecients. If a certain symbol


denotes an object related to L, then the corresponding symbol with tilde denotes the
analogous object related to

L, and := .
Theorem 2.2.1 If M() =

M(), then L =

L. Thus, the specication of the Weyl
function uniquely determines q(x) and h .
Proof. Let us dene the matrix P(x, ) = [P
jk
(x, )]
j,k=1,2
by the formula
P(x, )
_
(x, )

(x, )

(x, )

(x, )
_
=
_
(x, ) (x, )

(x, )

(x, )
_
.
By virtue of (2.1.71), this yields
P
j1
(x, ) =
(j1)
(x, )

(x, )
(j1)
(x, )

(x, )
P
j2
(x, ) =
(j1)
(x, ) (x, )
(j1)
(x, )

(x, )
_
_
_
, (2.2.1)
(x, ) = P
11
(x, ) (x, ) + P
12
(x, )

(x, )
(x, ) = P
11
(x, )

(x, ) + P
12
(x, )

(x, )
_
_
_
. (2.2.2)
Using (2.2.1), (2.1.87)-(2.1.88) we get for [[ , =
2
,
P
jk
(x, )
jk
= O(
1
), j k; P
21
(x, ) = O(1). (2.2.3)
If M()

M() , then in view of (2.2.1) and (2.1.70), for each xed x, the functions
P
jk
(x, ) are entire in . Together with (2.2.3) this yields P
11
(x, ) 1, P
12
(x, ) 0.
Substituting into (2.2.2) we get (x, ) (x, ), (x, )

(x, ) for all x and , and
consequently, L =

L. 2
Let us now start to construct the solution of the inverse problem. We shall say that
L V
N
if q(x) W
N
. We shall subsequently solve the inverse problem in the classes V
N
.
Let a model pair

L = L( q(x),

h) be chosen such that


_

4
[

V ()[
2
d < ,

V := V

V (2.2.4)
for suciently large

> 0. The condition (2.2.4) is needed for technical reasons. In


principal, one could take any

L (for example, with q(x) =

h = 0 ) but generally speaking
proofs would become more complicated. On the other hand, (2.2.4) is not a very strong
restriction, since by virtue of (2.1.79),

V () =
1

_

0
q(t) sin 2t dt +O
_
1

_
.
122
In particular, if q(x) L
2
, then (2.2.4) is fullled automatically for any q(x) L
2
. Hence,
for N 1, the condition (2.2.4) is fullled for any model

L V
N
.
It follows from (2.2.4) that
_

V ()[ d = 2
_

V ()[ d < ,

= (

)
2
, =
2
. (2.2.5)
Denote
D(x, , ) =
(x, ), (x, ))

=
_
x
0
(t, )(t, ) dt, r(x, , ) = D(x, , )

M(),

D(x, , ) =
(x, ), (x, ))

=
_
x
0
(t, ) (t, ) dt, r(x, , ) =

D(x, , )

M().
_

_
(2.2.6)
Lemma 2.2.1. The following estimate holds
[D(x, , )[, [

D(x, , )[
C
x
exp([Im[x)
[ [ + 1
, =
2
, =
2
0, Re 0. (2.2.7)
Proof. Let = + i. For deniteness, let 0 and 0. All other cases can be
treated in the same way. Take a xed
0
> 0. For [ [
0
we have by virtue of (2.2.6)
and (2.1.87),
[D(x, , )[ =

(x, ), (x, ))

C exp([[x)
[[ +[[
[
2

2
[
. (2.2.8)
Since
[[ +[[
[ +[
=

2
+
2
+
_
( +)
2
+
2

2
+
2
+

2
+
2
+
2

2
(here we use that (a +b)
2
2(a
2
+b
2
) for all real a, b ), (2.2.8) implies
[D(x, , )[
C exp([[x)
[ [
. (2.2.9)
For [ [
0
, we get
[ [ + 1
[ [
1 +
1

0
,
and consequently
1
[ [

C
0
[ [ + 1
with C
0
=

0
+ 1

0
.
Substituting this estimate into the right-hand side of (2.2.9) we obtain
[D(x, , )[
C exp([[x)
[ [ + 1
,
and (2.2.7) is proved for [ [
0
.
For [ [
0
, we have by virtue of (2.2.6) and (2.1.87),
[D(x, , )[
_
x
0
[(t, )(t, )[ dt C
x
exp([[x),
123
i.e. (2.2.7) is also valid for [ [
0
. 2
Lemma 2.2.2. The following estimates hold
_

1
d
([R [ + 1)
= O
_
ln R
R
_
, R , (2.2.10)
_

1
d

2
([R [ + 1)
2
= O
_
1
R
2
_
, R . (2.2.11)
Proof. Since
1
(R + 1)
=
1
R + 1
_
1

+
1
R + 1
_
,
1
( R + 1)
=
1
R 1
_
1
R + 1

1

_
,
we calculate for R > 1,
_

1
d
([R [ + 1)
=
_
R
1
d
(R + 1)
+
_

R
d
( R + 1)
=
1
R + 1
_
R
1
_
1

+
1
R + 1
_
d +
1
R 1
_

R
_
1
R + 1

1

_
d
=
2 ln R
R + 1
+
ln R
R 1
,
i.e. (2.2.10) is valid. Similarly, for R > 1,
_

1
d

2
([R [ + 1)
2
=
_
R
1
d

2
(R + 1)
2
+
_

R
d

2
( R + 1)
2

1
(R + 1)
2
_
R
1
_
1

+
1
R + 1
_
2
d +
1
R
2
_

R
d
( R + 1)
2
=
2
(R + 1)
2
_
_
R
1
d

2
+
_
R
1
d
(R + 1)
_
+
1
R
2
_

1
d

2
= O
_
1
R
2
_
,
i.e. (2.2.11) is valid. 2
In the - plane we consider the contour =

(with counterclockwise circuit),


where

is a bounded closed contour encircling the set

0, and

is the two-sided
cut along the arc : > 0, / int

(see g. 2.1.1).
Theorem 2.2.2. The following relations hold
(x, ) = (x, ) +
1
2i
_

r(x, , )(x, ) d, (2.2.12)


r(x, , ) r(x, , ) +
1
2i
_

r(x, , )r(x, , ) d = 0. (2.2.13)


Equation (2.2.12) is called the main equation of the inverse problem.
Proof. It follows from (2.1.73), (2.1.87), (2.2.6) and (2.2.7) that for , , Re Re
0,
[r(x, , )[, [ r(x, , )[
C
x
[[([ [ + 1)
, [(x, )[ C. (2.2.14)
124
In view of (2.2.10), it follows from (2.2.14) that the integrals in (2.2.12) and (2.2.13) converge
absolutely and uniformly on for each xed x 0.
Denote J

= : / int

. Consider the contour


R
= : [[ R
with counterclockwise circuit, and also consider the contour
0
R
=
R
: [[ = R with
clockwise circuit (see g 2.1.2). By Cauchys integral formula [con1, p.84],
P
1k
(x, )
1k
=
1
2i
_

0
R
P
1k
(x, )
1k

d, int
0
R
,
P
jk
(x, ) P
jk
(x, )

=
1
2i
_

0
R
P
jk
(x, )
( )( )
d, , int
0
R
.
Using (2.2.3) we get
lim
R
_
||=R
P
1k
(x, )
1k

d = 0, lim
R
_
||=R
P
jk
(x, )
( )( )
d = 0,
and consequently
P
1k
(x, ) =
1k
+
1
2i
_

P
1k
(x, )

d, J

, (2.2.15)
P
jk
(x, ) P
jk
(x, )

=
1
2i
_

P
jk
(x, )
( )( )
d, , J

. (2.2.16)
Here (and everywhere below, where necessary) the integral is understood in the principal
value sense:
_

= lim
R
_

R
.
By virtue of (2.2.2) and (2.2.15),
(x, ) = (x, ) +
1
2i
_

(x, )P
11
(x, ) +

(x, )P
12
(x, )

d, J

.
Taking (2.2.1) into account we get
(x, ) = (x, ) +
1
2i
_

( (x, )((x, )

(x, ) (x, )

(x, ))+

(x, )((x, ) (x, ) (x, )

(x, ))
d

.
In view of (2.1.70), this yields (2.2.12), since the terms with S(x, ) vanish by Cauchys
theorem.
Using (2.2.16) and acting in the same way as in the proof of Lemma 1.6.3, we arrive at
D(x, , )

D(x, , ) =
1
2i
_

_
(x, ),

(x, ))(x, ), (x, ))
( )( )

(x, ), (x, ))(x, ), (x, ))
( )( )
_
d.
In view of (2.1.70) and (2.2.6), this yields (2.2.13). 2
Analogously one can obtain the relation

(x, ) = (x, ) +
1
2i
_

(x, ), (x, ))

M()(x, ) d, J

. (2.2.17)
125
Let us consider the Banach space C() of continuous bounded functions z(), ,
with the norm |z| = sup

[z()[.
Theorem 2.2.3. For each xed x 0, the main equation (2.2.12) has a unique solution
(x, ) C().
Proof. For a xed x 0, we consider the following linear bounded operators in C() :

Az() = z() +
1
2i
_

r(x, , )z() d,
Az() = z()
1
2i
_

r(x, , )z() d.
Then

AAz() = z() +
1
2i
_

r(x, , )z() d
1
2i
_

r(x, , )z() d

1
2i
_

r(x, , )
_
1
2i
_

r(x, , )z() d
_
d
= z()
1
2i
_

_
r(x, , ) r(x, , ) +
1
2i
_

r(x, , )r(x, , ) d
_
z() d.
By virtue of (2.2.13) this yields

AAz() = z(), z() C().


Interchanging places for L and

L, we obtain analogously A

Az() = z(). Thus,

AA =
A

A = E, where E is the identity operator. Hence the operator

A has a bounded inverse
operator, and the main equation (2.2.12) is uniquely solvable for each xed x 0. 2
Denote

0
(x) =
1
2i
_

(x, )(x, )

M() d, (x) = 2

0
(x). (2.2.18)
Theorem 2.2.4. The following relations hold
q(x) = q(x) + (x), (2.2.19)
h =

h
0
(0). (2.2.20)
Proof. Dierentiating (2.2.12) twice with respect to x and using (2.2.6) and (2.2.18)
we get

(x, )
0
(x) (x, ) =

(x, ) +
1
2i
_

r(x, , )

(x, ) d, (2.2.21)

(x, ) = (x, ) +
1
2i
_

r(x, , )

(x, ) d
+
1
2i
_

2 (x, ) (x, )

M()

(x, ) d +
1
2i
_

( (x, ) (x, ))


M()(x, ) d. (2.2.22)
126
In (2.2.22) we replace the second derivatives using equation (2.1.1), and then we replace
(x, ) using (2.2.12). This yields
q(x) (x, ) = q(x) (x, ) +
1
2i
_

(x, ), (x, ))

M()(x, ) d
+
1
2i
_

2 (x, ) (x, )

M()

(x, ) d +
1
2i
_

( (x, ) (x, ))


M()(x, ) d.
After canceling terms with

(x, ) we arrive at (2.2.19). Taking x = 0 in (2.2.21) we get


(2.2.20). 2
Thus, we obtain the following algorithm for the solution of the inverse problem.
Algorithm 2.2.1. Let the function M() be given. Then
(1) Choose

L V
N
such that (2.2.4) holds.
(2) Find (x, ) by solving equation (2.2.12).
(3) Construct q(x) and h via (2.2.18)-(2.2.20).
Let us now formulate necessary and sucient conditions for the solvability of the inverse
problem. Denote in the sequel by W the set of functions M() such that
(i) the functions M() are analytic in with the exception of an at most countable
bounded set

of poles, and are continuous in


1
with the exception of bounded set
(in general, and

are dierent for each function M() );


(ii) for [[ , (2.1.73) holds.
Theorem 2.2.5. For a function M() W to be the Weyl function for a certain
L V
N
, it is necessary and sucient that the following conditions hold:
1) (Asymptotics) There exists

L V
N
such that (2.2.4) holds;
2) (Condition S) For each xed x 0, equation (2.2.12) has a unique solution (x, )
C();
3) (x) W
N
, where the function (x) is dened by (2.2.18).
Under these conditions q(x) and h are constructed via (2.2.19)-(2.2.20).
As it is shown in Example 2.3.1, conditions 2) and 3) are essential and cannot be omitted.
On the other hand, in Section 2.3 we provide classes of operators for which the unique
solvability of the main equation can be proved.
The necessity part of Theorem 2.2.5 was proved above. We prove now the suciency.
Let a function M() W , satisfying the hypothesis of Theorem 2.2.5, be given, and let
(x, ) be the solution of the main equation (2.2.12). Then (2.2.12) gives us the analytic
continuation of (x, ) to the whole - plane, and for each xed x 0, the function
(x, ) is entire in of order 1/2. Using Lemma 1.5.1 one can show that the functions

()
(x, ), = 0, 1, are absolutely continuous with respect to x on compact sets, and
[
()
(x, )[ C[[

exp([[x). (2.2.23)
We construct the function (x, ) via (2.2.17), and L = L(q(x), h) via (2.2.19)-(2.2.20).
Obviously, L V
N
.
Lemma 2.2.3. The following relations hold
(x, ) = (x, ), (x, ) = (x, ).
127
Proof. For simplicity, let
_

V ()[ d <
(the general case requires minor modications). Then (2.2.23) is valid for = 0, 1, 2.
Dierentiating (2.2.12) twice with respect to x we obtain (2.2.21) and (2.2.22). It follows
from (2.2.22) and (2.2.12) that

(x, ) + q(x) (x, ) = (x, ) +


1
2i
_

r(x, , )(x, ) d
+
1
2i
_

(x, ), (x, ))

M()(x, ) d 2 (x, )
1
2i
_

( (x, ) (x, ))


M()(x, ) d.
Taking (2.2.19) into account we get

(x, ) = (x, ) +
1
2i
_

r(x, , )(x, ) d
+
1
2i
_

(x, ), (x, ))

M()(x, ) d. (2.2.24)
Using (2.2.17) we calculate similarly

(x, )
0
(x)

(x, ) =

(x, ) +
1
2i
_

(x, ), (x, ))

M()

(x, ) d, (2.2.25)

(x, ) = (x, ) +
1
2i
_

(x, ), (x, ))

M()(x, ) d
+
1
2i
_

(x, ), (x, ))

M()(x, ) d. (2.2.26)
It follows from (2.2.24) that
(x, ) = (x, ) +
1
2i
_

r(x, , )(x, ) d +
1
2i
_

( ) r(x, , )(x, ) d.
Taking (2.2.12) into account we deduce for a xed x 0,
(x, ) +
1
2i
_

r(x, , )(x, ) d = 0, , (2.2.27)


where (x, ) := (x, ) (x, ). According to (2.2.23) we have
[(x, )[ C
x
[[
2
, . (2.2.28)
By virtue of (2.2.27), (2.2.6) and (2.2.7),
[(x, )[ C
x
_
1 +
_

V ()[
[ [ + 1
[(x, )[ d
_
, , > 0, Re 0. (2.2.29)
Substituting (2.2.28) into the right-hand side of (2.2.29) we get
[(x, )[ C
x
_
1 +
_

2
[

V ()[
[ [ + 1
d
_
, , > 0, Re 0.
128
Since

([ [ + 1)
1 for , 1,
this yields
[(x, )[ C
x
[[, . (2.2.30)
Using (2.2.30) instead of (2.2.28) and repeating the preceding arguments we infer
[(x, )[ C
x
, .
According to Condition S of Theorem 2.2.5, the homogeneous equation (2.2.27) has only the
trivial solution (x, ) 0. Consequently,
(x, ) = (x, ). (2.2.31)
It follows from (2.2.26) and (2.2.31) that

(x, ) = (x, ) +
1
2i
_

(x, ), (x, ))

M()(x, ) d
+
1
2i
_

( )

(x, ), (x, ))

M()(x, ) d.
Together with (2.2.17) this yields (x, ) = (x, ). 2
Lemma 2.2.4 The following relations hold
(0, ) = 1,

(0, ) = h. (2.2.32)
U() = 1, (0, ) = M(), (2.2.33)
(x, ) = O(exp(ix)), x . (2.2.34)
Proof. Taking x = 0 in (2.2.12) and (2.2.21) and using (2.2.20) we get
(0, ) = (0, ) = 1,

(0, ) =

(0, )
0
(0) (0, ) =

h +h

h = h,
i.e. (2.2.32) is valid. Using (2.2.17) and (2.2.25) we calculate
(0, ) =

(0, ) +
1
2i
_

M()

d, (2.2.35)

(0, ) =

(0, )

(0, )
0
(0) +
h
2i
_

M()

d.
Consequently,
U() =

(0, ) h(0, ) =

(0, ) (
0
(0) + h)

(0, ) =

(0, )

(0, ) =

U(

) = 1.
129
Furthermore, since y, z) = yz

z, we rewrite (2.2.17) in the form


(x, ) =

(x, ) +
1
2i
_

(x, ) (x, )

(x, )

(x, )

M()(x, ) d, (2.2.36)
where J

. The function (x, ) is the solution of the Cauchy problem (2.2.31)-(2.2.32).


Therefore, according to (2.1.87),
[
()
(x, )[ C[[

, =
2
, x 0, = 0, 1. (2.2.37)
Moreover, the estimates (2.1.87)-(2.1.88) are valid for (x, ) and

(x, ), i.e.
[
()
(x, )[ C[[

, =
2
, x 0, = 0, 1, (2.2.38)
[

()
(x, )[ C[[
1
exp([Im[x), x 0, . (2.2.39)
By virtue of (2.1.73),

M() = O
_
1

_
, [[ , . (2.2.40)
Fix J

. Taking (2.2.37)-(2.2.40) into account we get from (2.2.36) that


[(x, ) exp(ix)[ C
_
1 +
_

d
[ [
_
C
1
,
i.e. (2.2.34) is valid.
Furthermore, it follows from (2.2.35) that
(0, ) =

M() +
1
2i
_

M()

d.
By Cauchys integral formula

M() =
1
2i
_

0
R

M()

d, int
0
R
.
Since
lim
R
1
2i
_
||=R

M()

d = 0,
we get

M() =
1
2i
_

M()

d, J

.
Consequently, (0, ) =

M() +

M() = M(), i.e. (2.2.33) is valid. 2
Thus, (x, ) is the Weyl solution, and M() is the Weyl function for the constructed
pair L(q(x), h), and Theorem 2.2.5 is proved. 2
The Gelfand-Levitan method. For Sturm-Liouville operators on a nite interval, the
Gelfand-Levitan method was presented in Section 1.5. For the case of the half-line there are
similar results. Therefore here we conne ourselves to the derivation of the Gelfand-Levitan
equation. For further discussion see [mar1], [lev2] and [lev4].
130
Consider the dierential equation and the linear form L = L(q(x), h) of the form (2.2.1)-
(2.2.2). Let q(x) = 0,

h = 0. Denote
F(x, t) =
1
2i
_

cos x cos t

M() d, (2.2.41)
where is the contour dened in Section 2.1 (see g. 2.1.1). We note that by virtue of
(2.1.77) and (2.2.5),
1
2i
_

cos x cos t

M() d =
_

cos x cos t

V () d < .
Let G(x, t) and H(x, t) be the kernels of the transformation operators (1.3.11) and (1.5.12)
respectively.
Theorem 2.2.6. For each xed x, the function G(x, t) satises the following linear
integral equation
G(x, t) + F(x, t) +
_
x
0
G(x, s)F(s, t)ds = 0, 0 < t < x. (2.2.42)
Equation (2.2.42) is called the Gelfand-Levitan equation.
Proof. Using (1.3.11) and (1.5.12) we calculate
1
2i
_

R
(x, ) cos tM() d =
1
2i
_

R
cos x cos tM() d
+
1
2i
_

R
__
x
0
G(x, s) cos s ds
_
cos tM() d,
1
2i
_

R
(x, ) cos tM() d =
1
2i
_

R
(x, )(t, )M() d
+
1
2i
_

R
__
t
0
H(t, s)(s, ) ds
_
(x, )M() d,
where
R
= : [[ R. This yields

R
(x, t) = I
R1
(x, t) + I
R2
(x, t) +I
R3
(x, t) + I
R4
(x, t),
where

R
(x, t) =
1
2i
_

R
(x, )(t, )M() d
1
2i
_

R
cos x cos t

M() d,
I
R1
(x, t) =
1
2i
_

R
cos x cos t

M()d,
I
R2
(x, t) =
_
x
0
G(x, s)
_
1
2i
_

R
cos t cos s

M()d
_
ds,
I
R3
(x, t) =
1
2i
_

R
cos t
__
x
0
G(x, s) cos s ds
_

M()d,
I
R4
(x, t) =
1
2i
_

R
(x, )
__
t
0
H(t, s)(s, ) ds
_
M() d.
131
Let (t), t 0 be a twice continuously dierentiable function with compact support. By
Theorem 2.1.8,
lim
R
_

0
(t)
R
(x, t)dt = 0, lim
R
_

0
(t)I
R1
(x, t)dt =
_

0
(t)F(x, t)dt,
lim
R
_

0
(t)I
R2
(x, t)dt =
_

0
(t)
__
x
0
G(x, s)F(s, t)ds
_
dt,
lim
R
_

0
(t)I
R3
(x, t)dt =
_
x
0
(t)G(x, t)dt,
lim
R
_

0
(t)I
R4
(x, t)dt =
_

x
(t)H(t, x)dt.
Put G(x, t) = H(x, t) = 0 for x < t. In view of the arbitrariness of (t), we derive
G(x, t) + F(x, t) +
_
x
0
G(x, s)F(s, t)ds H(t, x) = 0.
For t < x, this gives (2.2.42). 2
Thus, in order to solve the inverse problem of recovering L from the Weyl function
M() one can calculate F(x, t) by (2.2.41), nd G(x, t) by solving the Gelfand-Levitan
equation (2.2.42) and construct q(x) and h by (1.5.13).
Remark 2.2.1. We show the connection between the Gelfand-Levitan equation and
the main equation of inverse problem (2.2.12). For this purpose we use the cosine Fourier
transform. Let q(x) =

h = 0. Then (x, ) = cos

x. Multiplying (2.2.42) by cos

t
and integrating with respect to t, we obtain
_
x
0
G(x, t) cos

t dt +
_
x
0
cos

t
_
1
2i
_

cos

x cos

t

M() d
_
dt+
_
x
0
cos

t
_
x
0
G(x, s)
_
1
2i
_

cos

t cos

s

M() d
_
ds = 0.
Using (1.3.11) we arrive at (2.2.12).
Remark 2.2.2. If q(x) and h are real, and (1+x)q(x) L(0, ) , then (2.2.41) takes
the form
F(x, t) =
_

cos xcos t d (),


where = , and and are the spectral functions of L and

L respectively.
2.3. RECOVERY OF THE DIFFERENTIAL EQUATION FROM THE
SPECTRAL DATA.
In Theorem 2.2.5, one of the conditions under which an arbitrary function M() is the
Weyl function for a certain pair L = L(q(x), h) of the form (2.1.1)-(2.1.2) is the require-
ment that the main equation is uniquely solvable. This condition is dicult to check in the
general case. In this connection it is important to point out classes of operators for which
the unique solvability of the main equation can be proved. The class of selfadjoint operators
is one of the most important classes having this property. For this class we introduce the
132
so-called spectral data which describe the set of singularities of the Weyl function M() ,
i.e. the discrete and continuous parts of the spectrum. In this section we investigate the
inverse problem of recovering L from the given spectral data. We show that the specica-
tion of the spectral data uniquely determines the Weyl function. Thus, the inverse problem
of recovering L from the Weyl function is equivalent to the inverse problem of recovering
L from the spectral data. The main equation, obtained in Section 2.2, can be constructed
directly from the spectral data on the set : 0. We prove a unique solvability of
the main equation in a suitable Banach space (see Theorem 2.3.9). At the end of the section
we consider the inverse problem from the spectral data also for the nonselfadjoint case.
2.3.1. The selfadjoint case. Consider the dierential equation and the linear form
L = L(q(x), h) of the form (2.1.1)-(2.1.2) and assume that q(x) and h are real. This means
that the operator L
o
, dened in Remark 2.1.3, is a selfadjoint operator. For the selfadjoint
case one can obtain additional properties of the spectrum to those which we established in
Section 2.1.
Theorem 2.3.1. Let q(x) and h be real. Then

= , i.e. there are no spectral


singularities.
Proof. Since q(x) and h are real it follows from Theorem 2.1.1 and (2.1.63) that for
real ,
e(x, ) = e(x, ), () = (). (2.3.1)
Suppose

,= , i.e. for a certain real


0
,= 0, (
0
) = 0. Then, according to (2.3.1),
(
0
) = (
0
) = 0.
Together with (2.1.6) this yields
2i
0
= e(x,
0
), e(x,
0
))
|x=0
= e(0,
0
)(
0
) e(0,
0
)(
0
) = 0,
which is impossible. 2
Theorem 2.3.2. Let q(x) and h be real. Then the non-zero eigenvalues
k
are real
and negative, and
(
k
) = 0 ( where
k
=
2
k

). (2.3.2)
The eigenfunction e(x,
k
) and (x,
k
) are real, and
e(x,
k
) = e(0,
k
)(x,
k
), e(0,
k
) ,= 0. (2.3.3)
Moreover, eigenfunctions related to dierent eigenvalues are orthogonal in L
2
(0, ).
Proof. It follows from (2.1.66) and (2.1.71) that
(x, ), e(x, )) = (). (2.3.4)
According to Theorem 2.1.7, (2.3.2) holds; therefore (2.3.4) implies
e(x,
k
) = c
k
(x,
k
), c
k
,= 0.
Taking here x = 0 we get c
k
= e(0,
k
), i.e. (2.3.3) is valid.
133
Let
n
and
k
(
n
,=
k
) be eigenvalues with eigenfunctions y
n
(x) = e(x,
n
) and
y
k
(x) = e(x,
k
) respectively. Then integration by parts yields
_

0
y
n
(x) y
k
(x) dx =
_

0
y
n
(x)y
k
(x) dx,
and hence

n
_

0
y
n
(x)y
k
(x) dx =
k
_

0
y
n
(x)y
k
(x) dx,
consequently
_

0
y
n
(x)y
k
(x) dx = 0.
Assume now that
0
= u + iv, v ,= 0 is a non-real eigenvalue with an eigenfunction
y
0
(x) , 0. Since q(x) and h are real, we get that
0
= u iv is also the eigenvalue with
the eigenfunction y
0
(x). Since
0
,=
0
, we derive as before
|y
0
|
2
L
2
=
_

0
y
0
(x)y
0
(x) dx = 0,
which is impossible. Thus, all eigenvalues
k
of L are real, and consequently the eigen-
functions (x,
k
) and e(x,
k
) are real too. Together with Theorem 2.1.6 this yields

(, 0). 2
Theorem 2.3.3. Let q(x) and h be real, and let

=
k
,
k
=
2
k
< 0. Denote

1
() :=
d
d
() ( =
2
). Then

1
(
k
) ,= 0. (2.3.5)
Proof. Since e(x, ) satises (2.1.1) we have
d
dx
e(x, ), e(x,
k
)) = (
2

2
k
)e(x, )e(x,
k
).
Using (2.1.4), (2.1.63) and (2.3.2) we calculate
(
2

2
k
)
_

0
e(x, )e(x,
k
) dx = e(x, ), e(x,
k
))

0
= e(0,
k
)(), Im > 0.
As
k
, this gives
_

0
e
2
(x,
k
)dx =
1
2
k
e(0,
k
)
_
d
d
()
_
|=
k
= e(0,
k
)
1
(
k
). (2.3.6)
Since _

0
e
2
(x,
k
)dx > 0,
we arrive at (2.3.5). 2
Let

=
k
,
k
=
2
k
< 0. It follows from (2.1.69), (2.3.2) and (2.3.5) that the Weyl
function M() has simple poles at the points =
k
, and

k
:= Res
=
k
M() =
e(0,
k
)

1
(
k
)
. (2.3.7)
134
Taking (2.3.3) and (2.3.6) into account we infer

k
=
_
_

0

2
(x,
k
) dx
_
1
> 0.
Furthermore, the function $V(\lambda)$, defined by (2.1.77), is continuous for $\lambda = \rho^2 > 0$, and by virtue of (2.1.69),
$$V(\lambda) := \frac{1}{2\pi i}\big(M^-(\lambda) - M^+(\lambda)\big) = \frac{1}{2\pi i}\Big(\frac{e(0,-\rho)}{\Delta(-\rho)} - \frac{e(0,\rho)}{\Delta(\rho)}\Big).$$
Using (2.3.1) and (2.1.6) we calculate
$$V(\lambda) = \frac{\rho}{\pi\,|\Delta(\rho)|^2} > 0, \qquad \lambda = \rho^2 > 0. \qquad (2.3.8)$$
Denote by $W'_N$ the set of functions $q(x) \in W_N$ such that
$$\int_0^\infty (1+x)\,|q(x)|\,dx < \infty. \qquad (2.3.9)$$
We shall say that $L \in V'_N$ if $q(x)$ and $h$ are real, and $q(x) \in W'_N$.

Theorem 2.3.4. Let $L \in V'_N$. Then $\Lambda'$ is a finite set.
Proof. For $x \ge 0$, $\tau \ge 0$, the function $e(x,i\tau)$ is real, and by virtue of (2.1.24),
$$|e(x,i\tau)\exp(\tau x) - 1| \le Q_1(x)\exp(Q_1(x)), \qquad x \ge 0,\ \tau \ge 0, \qquad (2.3.10)$$
where
$$Q_1(x) = \int_x^\infty (t-x)\,|q(t)|\,dt.$$
In particular, it follows from (2.3.10) that there exists $a > 0$ such that
$$e(x,i\tau)\exp(\tau x) \ge \frac12 \qquad \text{for } x \ge a,\ \tau \ge 0. \qquad (2.3.11)$$
Suppose that $\Lambda' = \{\lambda_k\}$ is an infinite set. Since $\Lambda'$ is bounded and $\lambda_k = \rho_k^2 < 0$, it follows that $\rho_k = i\tau_k \to 0$, $\tau_k > 0$. Using (2.3.11) we calculate
$$\int_a^\infty e(x,\rho_k)e(x,\rho_n)\,dx \ge \frac14\int_a^\infty \exp(-(\tau_k+\tau_n)x)\,dx = \frac{\exp(-(\tau_k+\tau_n)a)}{4(\tau_k+\tau_n)} \ge \frac{\exp(-2aT)}{8T}, \qquad (2.3.12)$$
where $T = \max_k \tau_k$.
Since the eigenfunctions $e(x,\rho_k)$ and $e(x,\rho_n)$ are orthogonal in $L_2(0,\infty)$, we get
$$0 = \int_0^\infty e(x,\rho_k)e(x,\rho_n)\,dx = \int_a^\infty e(x,\rho_k)e(x,\rho_n)\,dx$$
$$+ \int_0^a e^2(x,\rho_k)\,dx + \int_0^a e(x,\rho_k)\big(e(x,\rho_n) - e(x,\rho_k)\big)\,dx. \qquad (2.3.13)$$
Taking (2.3.12) into account we get
$$\int_a^\infty e(x,\rho_k)e(x,\rho_n)\,dx \ge C_0 > 0, \qquad \int_0^a e^2(x,\rho_k)\,dx \ge 0. \qquad (2.3.14)$$
Let us show that
$$\int_0^a e(x,\rho_k)\big(e(x,\rho_n) - e(x,\rho_k)\big)\,dx \to 0 \quad \text{as } k,n \to \infty. \qquad (2.3.15)$$
Indeed, according to (2.1.23),
$$|e(x,\rho_k)| \le \exp(Q_1(0)), \qquad x \ge 0.$$
Then, using (2.1.33) we calculate
$$\Big|\int_0^a e(x,\rho_k)\big(e(x,\rho_n) - e(x,\rho_k)\big)\,dx\Big| \le \exp(Q_1(0))\Big(\int_0^a |\exp(-\tau_n x) - \exp(-\tau_k x)|\,dx$$
$$+ \int_0^a \Big(\int_x^\infty \big|A(x,t)\big(\exp(-\tau_n t) - \exp(-\tau_k t)\big)\big|\,dt\Big)dx\Big). \qquad (2.3.16)$$
By virtue of (2.1.35),
$$|A(x,t)| \le \frac12\,Q_0\Big(\frac{x+t}{2}\Big)\exp(Q_1(x)) \le \frac12\,Q_0\Big(\frac{t}{2}\Big)\exp(Q_1(0)), \qquad 0 \le x \le t,$$
and consequently (2.3.16) yields
$$\Big|\int_0^a e(x,\rho_k)\big(e(x,\rho_n) - e(x,\rho_k)\big)\,dx\Big| \le C\Big(\int_0^a |\exp(-\tau_n x) - \exp(-\tau_k x)|\,dx$$
$$+ \int_0^\infty Q_0\Big(\frac{t}{2}\Big)\,|\exp(-\tau_n t) - \exp(-\tau_k t)|\,dt\Big). \qquad (2.3.17)$$
Clearly,
$$\int_0^a |\exp(-\tau_n x) - \exp(-\tau_k x)|\,dx \to 0 \quad \text{as } k,n \to \infty. \qquad (2.3.18)$$
Take a fixed $\varepsilon > 0$. There exists $a_\varepsilon > 0$ such that
$$2\int_{a_\varepsilon}^\infty Q_0\Big(\frac{t}{2}\Big)\,dt < \frac{\varepsilon}{2}.$$
On the other hand, for sufficiently large $k$ and $n$,
$$\int_0^{a_\varepsilon} Q_0\Big(\frac{t}{2}\Big)\,|\exp(-\tau_n t) - \exp(-\tau_k t)|\,dt \le Q_0(0)\int_0^{a_\varepsilon} |\exp(-\tau_n t) - \exp(-\tau_k t)|\,dt < \frac{\varepsilon}{2}.$$
Thus, for sufficiently large $k$ and $n$,
$$\int_0^\infty Q_0\Big(\frac{t}{2}\Big)\,|\exp(-\tau_n t) - \exp(-\tau_k t)|\,dt < \varepsilon.$$
By the arbitrariness of $\varepsilon$ we obtain
$$\int_0^\infty Q_0\Big(\frac{t}{2}\Big)\,|\exp(-\tau_n t) - \exp(-\tau_k t)|\,dt \to 0 \quad \text{as } k,n \to \infty.$$
Together with (2.3.17) and (2.3.18) this implies (2.3.15). The relations (2.3.13)-(2.3.15) give us a contradiction. Hence $\Lambda'$ is a finite set. $\Box$
Theorem 2.3.5. Let $L \in V'_N$. Then
$$\frac{\rho}{\Delta(\rho)} = O(1), \qquad \rho \to 0,\ \operatorname{Im}\rho \ge 0. \qquad (2.3.19)$$
Proof. Denote
$$g(\rho) = \frac{2i\rho}{\Delta(\rho)}.$$
By virtue of (2.1.6) and (2.1.63),
$$\Delta(\rho)\,e(0,-\rho) - e(0,\rho)\,\Delta(-\rho) = 2i\rho.$$
Hence, for real $\rho \ne 0$,
$$g(\rho) = e(0,-\rho) - \omega(\rho)\,e(0,\rho), \qquad (2.3.20)$$
where
$$\omega(\rho) := \frac{\Delta(-\rho)}{\Delta(\rho)}.$$
It follows from (2.3.1) that
$$|\omega(\rho)| = 1 \qquad \text{for real } \rho \ne 0. \qquad (2.3.21)$$
Let $\Lambda' = \{\lambda_k\}_{k=\overline{1,m}}$, $\lambda_k = \rho_k^2$, $\rho_k = i\tau_k$, $0 < \tau_1 < \ldots < \tau_m$. Denote
$$D = \{\rho:\ \operatorname{Im}\rho > 0,\ |\rho| < \rho^*\}, \qquad (2.3.22)$$
where $\rho^* = \tau_1/2$. The function $g(\rho)$ is analytic in $D$ and continuous in $\overline D \setminus \{0\}$. By virtue of (2.3.20), (2.3.21) and (2.1.23),
$$|g(\rho)| \le C \qquad \text{for real } \rho \ne 0. \qquad (2.3.23)$$
Suppose that $\Delta(\rho)$ is analytic at the origin. Then, using (2.3.23), we conclude that the function $g(\rho)$ has a removable singularity at the origin, and consequently (after extending $g(\rho)$ continuously to the origin) $g(\rho)$ is continuous in $\overline D$, i.e. (2.3.19) is proved.
In the general case we cannot use these arguments. Therefore, we introduce the potentials $q_r(x)$ by (2.1.49) and the corresponding Jost solutions by (2.1.50). Denote
$$\Delta_r(\rho) = e_r'(0,\rho) - h\,e_r(0,\rho), \qquad r \ge 0.$$
By virtue of (2.1.53), the functions $\Delta_r(\rho)$ are entire in $\rho$, and according to Lemma 2.1.3,
$$\lim_{r\to\infty} \Delta_r(\rho) = \Delta(\rho) \qquad (2.3.24)$$
uniformly for $\operatorname{Im}\rho \ge 0$. Let $\delta_r$ be the infimum of the distances between the zeros of $\Delta_r(\rho)$ in the upper half-plane $\operatorname{Im}\rho \ge 0$. Let us show that
$$\delta := \inf_{r>0} \delta_r > 0. \qquad (2.3.25)$$
Indeed, suppose on the contrary that there exists a sequence $\{r_k\}$ such that $\delta_{r_k} \to 0$. Let $\rho_{k1} = i\tau_{k1}$, $\rho_{k2} = i\tau_{k2}$ ($\tau_{k1},\tau_{k2} \ge 0$) be zeros of $\Delta_{r_k}(\rho)$ such that $\tau_{k1} - \tau_{k2} \to 0$ as $k \to \infty$. It follows from (2.1.51) that there exists $a > 0$ such that
$$e_r(x,i\tau)\exp(\tau x) \ge \frac12 \qquad \text{for } x \ge a,\ \tau \ge 0,\ r \ge 0. \qquad (2.3.26)$$
Similarly to (2.3.13) we can write
$$0 = \int_0^\infty e_{r_k}(x,\rho_{k1})\,e_{r_k}(x,\rho_{k2})\,dx = \int_a^\infty e_{r_k}(x,\rho_{k1})\,e_{r_k}(x,\rho_{k2})\,dx$$
$$+ \int_0^a e_{r_k}^2(x,\rho_{k1})\,dx + \int_0^a e_{r_k}(x,\rho_{k1})\big(e_{r_k}(x,\rho_{k2}) - e_{r_k}(x,\rho_{k1})\big)\,dx.$$
Taking (2.3.26) into account we get as before
$$\int_a^\infty e_{r_k}(x,\rho_{k1})\,e_{r_k}(x,\rho_{k2})\,dx \ge \frac{\exp(-(\tau_{k1}+\tau_{k2})a)}{4(\tau_{k1}+\tau_{k2})}.$$
By virtue of (2.3.24), $|\rho_{k1}|, |\rho_{k2}| \le C$, and consequently,
$$\int_a^\infty e_{r_k}(x,\rho_{k1})\,e_{r_k}(x,\rho_{k2})\,dx \ge C_0 > 0.$$
Moreover,
$$\int_0^a e_{r_k}^2(x,\rho_{k1})\,dx \ge 0.$$
On the other hand, using (2.1.50) and (2.1.52) one can verify (similarly to the proof of (2.3.15)) that
$$\int_0^a e_{r_k}(x,\rho_{k1})\big(e_{r_k}(x,\rho_{k2}) - e_{r_k}(x,\rho_{k1})\big)\,dx \to 0 \quad \text{as } k \to \infty,$$
and we get a contradiction. This means that (2.3.25) is valid.
Define $D$ via (2.3.22) with $\rho^* = \min\big(\frac{\tau_1}{2}, \frac{\delta}{2}\big)$. Then the function $\Delta_r(\rho)$ has in $\overline D$ at most one zero $\rho = i\tau_r^0$, $0 \le \tau_r^0 \le \rho^*$. Consider the function
$$\tilde g_r(\rho) := g_r(\rho)\,g_r^0(\rho),$$
where
$$g_r(\rho) = \frac{2i\rho}{\Delta_r(\rho)}, \qquad g_r^0(\rho) = \frac{\rho - i\tau_r^0}{\rho + i\tau_r^0}$$
(if $\Delta_r(\rho)$ has no zeros in $\overline D$ we put $g_r^0(\rho) := 1$). Clearly,
$$|g_r^0(\rho)| \le 1 \qquad \text{for } \rho \in \overline D. \qquad (2.3.27)$$
Analogously to (2.3.20)-(2.3.21), for real $\rho \ne 0$,
$$g_r(\rho) = e_r(0,-\rho) - \omega_r(\rho)\,e_r(0,\rho), \qquad |\omega_r(\rho)| = 1. \qquad (2.3.28)$$
It follows from (2.3.24), (2.3.28) and (2.1.51) that
$$|g_r(\rho)| \le C \qquad \text{for } \rho \in \partial D,\ r \ge 0,$$
where $\partial D = \overline D \setminus D$ is the boundary of $D$, and $C$ does not depend on $r$. Together with (2.3.27) this yields
$$|\tilde g_r(\rho)| \le C \qquad \text{for } \rho \in \partial D,\ r \ge 0.$$
Since the functions $\tilde g_r(\rho)$ are analytic in $D$, we have by the maximum principle [con1, p.128],
$$|\tilde g_r(\rho)| \le C \qquad \text{for } \rho \in \overline D,\ r \ge 0, \qquad (2.3.29)$$
where $C$ does not depend on $r$.
Fix $\varepsilon \in (0,\rho^*)$ and denote $D_\varepsilon := \{\rho:\ \operatorname{Im}\rho > 0,\ \varepsilon < |\rho| < \rho^*\}$. By virtue of (2.3.24),
$$\lim_{r\to\infty} g_r(\rho) = g(\rho) \quad \text{uniformly in } \overline D_\varepsilon.$$
Moreover, (2.3.24) implies $\lim_{r\to\infty} \tau_r^0 = 0$, and consequently,
$$\lim_{r\to\infty} \tilde g_r(\rho) = g(\rho) \quad \text{uniformly in } \overline D_\varepsilon.$$
Then it follows from (2.3.29) that
$$|g(\rho)| \le C \qquad \text{for } \rho \in \overline D_\varepsilon.$$
By the arbitrariness of $\varepsilon$ we get
$$|g(\rho)| \le C \qquad \text{for } \rho \in \overline D \setminus \{0\},$$
i.e. (2.3.19) is proved. $\Box$
Theorem 2.3.6. Let $L \in V'_N$. Then $\lambda = 0$ is not an eigenvalue of $L$.

Proof. The function $e(x) := e(x,0)$ is a solution of (2.1.1) for $\lambda = 0$, and according to Theorem 2.1.2,
$$\lim_{x\to\infty} e(x) = 1. \qquad (2.3.30)$$
Take $a > 0$ such that
$$e(x) \ge \frac12 \qquad \text{for } x \ge a, \qquad (2.3.31)$$
and consider the function
$$z(x) := e(x)\int_a^x \frac{dt}{e^2(t)}. \qquad (2.3.32)$$
It is easy to check that
$$z''(x) = q(x)\,z(x)$$
and
$$e(x)z'(x) - e'(x)z(x) \equiv 1.$$
It follows from (2.3.30)-(2.3.32) that
$$\lim_{x\to\infty} z(x) = +\infty. \qquad (2.3.33)$$
Suppose that $\lambda = 0$ is an eigenvalue, and let $y_0(x)$ be a corresponding eigenfunction. Since the functions $e(x), z(x)$ form a fundamental system of solutions of (2.1.1) for $\lambda = 0$, we have
$$y_0(x) = C_1^0\,e(x) + C_2^0\,z(x).$$
By virtue of (2.3.30) and (2.3.33), this is possible only if $C_1^0 = C_2^0 = 0$. $\Box$
Remark 2.3.1. Let
$$q(x) = \frac{2a^2}{(1+ax)^2}, \qquad h = -a,$$
where $a$ is a complex number such that $a \notin (-\infty,0]$. Then $q(x) \in L(0,\infty)$, but $xq(x) \notin L(0,\infty)$. In this case $\lambda = 0$ is an eigenvalue, and by differentiation one verifies that
$$y(x) = \frac{1}{1+ax}$$
is the corresponding eigenfunction.
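The verification suggested in Remark 2.3.1 can be sketched numerically for the sample value $a = 2$ (then also $y \in L_2(0,\infty)$, since $\int_0^\infty (1+2x)^{-2}dx = 1/2 < \infty$); the choice $a = 2$ is arbitrary, and differentiation is replaced by central differences:

```python
a = 2.0                                # sample value, a not in (-inf, 0]
h = -a
q = lambda x: 2 * a * a / (1 + a * x) ** 2
y = lambda x: 1.0 / (1 + a * x)

dx = 1e-4
residuals = []
for x in [0.5, 1.0, 3.0]:
    # second derivative by central differences
    ypp = (y(x + dx) - 2 * y(x) + y(x - dx)) / dx ** 2
    residuals.append(-ypp + q(x) * y(x))
print(residuals)                       # ≈ [0, 0, 0]:  -y'' + q*y = 0

yp0 = (y(dx) - y(-dx)) / (2 * dx)
print(yp0, h * y(0))                   # both ≈ -2:  y'(0) = h*y(0)
```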
According to Theorem 2.1.2, $e(0,\rho) = O(1)$ as $\rho \to 0$, $\operatorname{Im}\rho \ge 0$. Therefore, Theorem 2.3.5 together with (2.1.69) yields
$$M(\lambda) = O(\rho^{-1}), \qquad |\rho| \to 0. \qquad (2.3.34)$$
It is shown below in Example 2.3.1 that if condition (2.3.9) is not fulfilled, then (2.3.34) is in general no longer valid.
Combining the results obtained above we arrive at the following theorem.

Theorem 2.3.7. Let $L \in V'_N$, and let $M(\lambda)$ be the Weyl function for $L$. Then $M(\lambda)$ is analytic in $\Pi$ with the exception of an at most finite number of simple poles $\Lambda' = \{\lambda_k\}_{k=\overline{1,m}}$, $\lambda_k = \rho_k^2 < 0$, and continuous in $\overline\Pi_1 \setminus \Lambda'$. Moreover,
$$\alpha_k := \mathop{\mathrm{Res}}_{\lambda=\lambda_k} M(\lambda) > 0, \qquad k = \overline{1,m},$$
and (2.3.34) is valid. The function $V(\lambda)$ is continuous and bounded for $\lambda = \rho^2 > 0$, and $V(\lambda) > 0$ for $\lambda = \rho^2 > 0$.

Definition 2.3.1. The data $S := \big(\{V(\lambda)\}_{\lambda>0},\ \{\lambda_k,\alpha_k\}_{k=\overline{1,m}}\big)$ are called the spectral data of $L$.
Lemma 2.3.1. The Weyl function is uniquely determined by the specification of the spectral data $S$ via the formula
$$M(\lambda) = \int_0^\infty \frac{V(\mu)}{\lambda-\mu}\,d\mu + \sum_{k=1}^m \frac{\alpha_k}{\lambda-\lambda_k}, \qquad \lambda \in \Pi \setminus \Lambda'. \qquad (2.3.35)$$
Proof. Consider the function
$$I_R(\lambda) := \frac{1}{2\pi i}\int_{|\mu|=R} \frac{M(\mu)}{\lambda-\mu}\,d\mu.$$
It follows from (2.1.73) that
$$\lim_{R\to\infty} I_R(\lambda) = 0 \qquad (2.3.36)$$
uniformly on compact subsets of $\Pi \setminus \Lambda'$. On the other hand, moving the contour $|\mu| = R$ to the real axis and using the residue theorem, we get
$$I_R(\lambda) = -M(\lambda) + \int_0^R \frac{V(\mu)}{\lambda-\mu}\,d\mu + \sum_{k=1}^m \frac{\alpha_k}{\lambda-\lambda_k}.$$
Together with (2.3.36) this yields (2.3.35). $\Box$

It follows from Theorem 2.2.1 and Lemma 2.3.1 that the specification of the spectral data uniquely determines the potential $q(x)$, $x \ge 0$, and the coefficient $h$.
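Formula (2.3.35) can be checked numerically in the free case $q \equiv 0$, $h = 0$, where there are no negative eigenvalues, $V(\mu) = 1/(\pi\sqrt\mu)$, and $M(-1) = 1/(i\rho)\big|_{\rho=i} = -1$. After the substitutions $\mu = t^2$ and $t = u/(1-u)$ the integrand becomes smooth on $[0,1]$; a sketch with a composite Simpson rule (not from the text):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))) * h / 3.0

# Evaluate (2.3.35) at lambda = -1 for the free spectral data:
# int_0^inf V(mu)/(lambda - mu) dmu  becomes  -(2/pi) * int_0^1 du / ((1-u)^2 + u^2)
integral = simpson(lambda u: -(2.0 / math.pi) / ((1 - u) ** 2 + u ** 2), 0.0, 1.0, 200)

M_exact = -1.0
print(integral, M_exact)   # both ≈ -1.0
```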
Let $L \in V'_N$, and let $\tilde L \in V'_N$ be chosen such that (2.2.4) holds. Denote
$$\lambda_{n0} = \lambda_n, \quad \lambda_{n1} = \tilde\lambda_n, \quad \alpha_{n0} = \alpha_n, \quad \alpha_{n1} = \tilde\alpha_n, \quad \varphi_{ni}(x) = \varphi(x,\lambda_{ni}), \quad \tilde\varphi_{ni}(x) = \tilde\varphi(x,\lambda_{ni}),$$
$$p = m + \tilde m, \qquad \theta(x) = [\theta_k(x)]_{k=\overline{1,p}}, \qquad \theta_k(x) = \varphi_{k0}(x),\ k = \overline{1,m}, \qquad \theta_{k+m}(x) = \varphi_{k1}(x),\ k = \overline{1,\tilde m}.$$
Analogously we define $\tilde\theta(x)$.
It follows from (2.2.12), (2.2.17) and (2.2.18) that
$$\tilde\varphi(x,\lambda) = \varphi(x,\lambda) + \int_0^\infty \tilde D(x,\lambda,\mu)\hat V(\mu)\varphi(x,\mu)\,d\mu + \sum_{k=1}^m \tilde D(x,\lambda,\lambda_{k0})\alpha_{k0}\varphi_{k0}(x) - \sum_{k=1}^{\tilde m} \tilde D(x,\lambda,\lambda_{k1})\alpha_{k1}\varphi_{k1}(x), \qquad (2.3.37)$$
$$\tilde\varphi_{ni}(x) = \varphi_{ni}(x) + \int_0^\infty \tilde D(x,\lambda_{ni},\mu)\hat V(\mu)\varphi(x,\mu)\,d\mu + \sum_{k=1}^m \tilde D(x,\lambda_{ni},\lambda_{k0})\alpha_{k0}\varphi_{k0}(x) - \sum_{k=1}^{\tilde m} \tilde D(x,\lambda_{ni},\lambda_{k1})\alpha_{k1}\varphi_{k1}(x), \qquad (2.3.38)$$
$$\varphi(x,\lambda) = \tilde\varphi(x,\lambda) + \int_0^\infty \frac{\langle\varphi(x,\lambda),\tilde\varphi(x,\mu)\rangle}{\lambda-\mu}\,\hat V(\mu)\tilde\varphi(x,\mu)\,d\mu + \sum_{k=1}^m \frac{\langle\varphi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\lambda-\lambda_{k0}}\,\alpha_{k0}\tilde\varphi_{k0}(x) - \sum_{k=1}^{\tilde m} \frac{\langle\varphi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\lambda-\lambda_{k1}}\,\alpha_{k1}\tilde\varphi_{k1}(x), \qquad (2.3.39)$$
$$\varepsilon_0(x) = \int_0^\infty \varphi(x,\mu)\tilde\varphi(x,\mu)\hat V(\mu)\,d\mu + \sum_{k=1}^m \alpha_{k0}\varphi_{k0}(x)\tilde\varphi_{k0}(x) - \sum_{k=1}^{\tilde m} \alpha_{k1}\varphi_{k1}(x)\tilde\varphi_{k1}(x), \qquad \varepsilon(x) = -2\varepsilon_0'(x). \qquad (2.3.40)$$
For each fixed $x \ge 0$, the relations (2.3.37)-(2.3.38) can be considered as a system of linear equations with respect to $\varphi(x,\lambda)$, $\lambda > 0$, and $\theta_k(x)$, $k = \overline{1,p}$. We rewrite (2.3.37)-(2.3.38) as a linear equation in a corresponding Banach space.

Let $C = C(0,\infty)$ be the Banach space of continuous bounded functions $f: [0,\infty) \to \mathbb C$, $\lambda \mapsto f_\lambda$, on the half-line $\lambda \ge 0$ with the norm $\|f\|_C = \sup_{\lambda\ge 0}|f_\lambda|$. Obviously, for each fixed $x \ge 0$, $\varphi(x,\cdot),\ \tilde\varphi(x,\cdot) \in C$. Consider the Banach space $B$ of vectors
$$F = \begin{pmatrix} f \\ f^0 \end{pmatrix}, \qquad f \in C, \quad f^0 = [f_k^0]_{k=\overline{1,p}} \in \mathbb R^p,$$
with the norm $\|F\|_B = \max\big(\|f\|_C,\ \|f^0\|_{\mathbb R^p}\big)$. Denote
$$\psi(x) = \begin{pmatrix} \varphi(x,\cdot) \\ \theta(x) \end{pmatrix}, \qquad \tilde\psi(x) = \begin{pmatrix} \tilde\varphi(x,\cdot) \\ \tilde\theta(x) \end{pmatrix}.$$
Then $\psi(x), \tilde\psi(x) \in B$ for each fixed $x \ge 0$. Let
$$\tilde H_{\lambda,\mu}(x) = \tilde D(x,\lambda,\mu)\hat V(\mu), \qquad \tilde H_{\lambda,k}(x) = \tilde D(x,\lambda,\lambda_{k0})\alpha_{k0},\ k = \overline{1,m},$$
$$\tilde H_{\lambda,k+m}(x) = -\tilde D(x,\lambda,\lambda_{k1})\alpha_{k1},\ k = \overline{1,\tilde m}, \qquad \tilde H_{n,\mu}(x) = \tilde D(x,\lambda_{n0},\mu)\hat V(\mu),\ n = \overline{1,m},$$
$$\tilde H_{n+m,\mu}(x) = \tilde D(x,\lambda_{n1},\mu)\hat V(\mu),\ n = \overline{1,\tilde m}, \qquad \tilde H_{n,k}(x) = \tilde D(x,\lambda_{n0},\lambda_{k0})\alpha_{k0},\ n,k = \overline{1,m},$$
$$\tilde H_{n,k+m}(x) = -\tilde D(x,\lambda_{n0},\lambda_{k1})\alpha_{k1},\ n = \overline{1,m},\ k = \overline{1,\tilde m}, \qquad \tilde H_{n+m,k}(x) = \tilde D(x,\lambda_{n1},\lambda_{k0})\alpha_{k0},\ n = \overline{1,\tilde m},\ k = \overline{1,m},$$
$$\tilde H_{n+m,k+m}(x) = -\tilde D(x,\lambda_{n1},\lambda_{k1})\alpha_{k1}, \qquad n,k = \overline{1,\tilde m}.$$
Denote by $\tilde H: B \to B$ the operator defined by
$$\hat F = \tilde H F, \qquad F = \begin{pmatrix} f \\ f^0 \end{pmatrix} \in B, \quad \hat F = \begin{pmatrix} \hat f \\ \hat f^0 \end{pmatrix} \in B,$$
$$\hat f_\lambda = \int_0^\infty \tilde H_{\lambda,\mu}(x) f_\mu\,d\mu + \sum_{k=1}^p \tilde H_{\lambda,k}(x) f_k^0, \qquad \hat f_n^0 = \int_0^\infty \tilde H_{n,\mu}(x) f_\mu\,d\mu + \sum_{k=1}^p \tilde H_{n,k}(x) f_k^0.$$
Then, for each fixed $x \ge 0$, the operator $E + \tilde H(x)$ (here $E$ is the identity operator), acting from $B$ to $B$, is a linear bounded operator. Taking into account our notation we can rewrite (2.3.37)-(2.3.38) in the form
$$\tilde\psi(x) = (E + \tilde H(x))\,\psi(x). \qquad (2.3.41)$$
Thus, we have proved the following assertion.

Theorem 2.3.8. For each fixed $x \ge 0$, the vector $\psi(x) \in B$ satisfies equation (2.3.41).
Now let us go on to necessary and sufficient conditions for the solvability of the inverse problem under consideration. Denote by $W'$ the set of vectors $S = \big(\{V(\lambda)\}_{\lambda>0},\ \{\lambda_k,\alpha_k\}_{k=\overline{1,m}}\big)$ (in general, $m$ is different for each $S$) such that:
(1) $\alpha_k > 0$, $\lambda_k < 0$ for all $k$, and $\lambda_k \ne \lambda_s$ for $k \ne s$;
(2) the function $V(\lambda)$ is continuous and bounded for $\lambda > 0$, $V(\lambda) > 0$, and $M(\lambda) = O(\rho^{-1})$, $\rho \to 0$, where $M(\lambda)$ is defined by (2.3.35);
(3) there exists $\tilde L$ such that (2.2.4) holds.
Clearly, if $S$ is the vector of spectral data for a certain $L \in V'_N$, then $S \in W'$.

Theorem 2.3.9. Let $S \in W'$. Then, for each fixed $x \ge 0$, equation (2.3.41) has a unique solution in $B$, i.e. the operator $E + \tilde H(x)$ is invertible.
Proof. As in the proof of Lemma 1.6.6 it is sufficient to prove that for each fixed $x \ge 0$ the homogeneous equation
$$(E + \tilde H(x))\,\gamma(x) = 0, \qquad \gamma(x) \in B, \qquad (2.3.42)$$
has only the zero solution. Let
$$\gamma(x) = \begin{pmatrix} \gamma(x,\cdot) \\ \gamma^0(x) \end{pmatrix} \in B, \qquad \gamma^0(x) = [\gamma_k^0(x)]_{k=\overline{1,p}},$$
be a solution of (2.3.42), i.e.
$$\gamma(x,\lambda) + \int_0^\infty \tilde D(x,\lambda,\mu)\hat V(\mu)\gamma(x,\mu)\,d\mu + \sum_{k=1}^m \tilde D(x,\lambda,\lambda_{k0})\alpha_{k0}\gamma_{k0}(x) - \sum_{k=1}^{\tilde m} \tilde D(x,\lambda,\lambda_{k1})\alpha_{k1}\gamma_{k1}(x) = 0, \qquad (2.3.43)$$
$$\gamma_{ni}(x) + \int_0^\infty \tilde D(x,\lambda_{ni},\mu)\hat V(\mu)\gamma(x,\mu)\,d\mu + \sum_{k=1}^m \tilde D(x,\lambda_{ni},\lambda_{k0})\alpha_{k0}\gamma_{k0}(x) - \sum_{k=1}^{\tilde m} \tilde D(x,\lambda_{ni},\lambda_{k1})\alpha_{k1}\gamma_{k1}(x) = 0, \qquad (2.3.44)$$
where $\gamma_{k0}(x) = \gamma_k^0(x)$, $k = \overline{1,m}$, and $\gamma_{k1}(x) = \gamma_{k+m}^0(x)$, $k = \overline{1,\tilde m}$. Then (2.3.43) gives us the analytic continuation of $\gamma(x,\lambda)$ to the whole $\lambda$-plane, and for each fixed $x \ge 0$ the function $\gamma(x,\lambda)$ is entire in $\lambda$. Moreover, according to (2.3.43), (2.3.44),
$$\gamma(x,\lambda_{ni}) = \gamma_{ni}(x). \qquad (2.3.45)$$
Let us show that for each fixed $x \ge 0$,
$$|\gamma(x,\lambda)| \le \frac{C_x}{|\rho|}\exp(|\sigma|x), \qquad \lambda = \rho^2,\ \sigma := \operatorname{Im}\rho. \qquad (2.3.46)$$
Indeed, using (2.2.6) and (2.1.87), we get
$$|\tilde D(x,\lambda,\lambda_{ki})| \le \frac{C_x}{|\rho|}\exp(|\sigma|x). \qquad (2.3.47)$$
For definiteness, let $\theta := \operatorname{Re}\rho \ge 0$. It follows from (2.3.43), (2.2.7) and (2.3.47) that
$$|\gamma(x,\lambda)| \le C_x\exp(|\sigma|x)\Big(\int_1^\infty \frac{|\hat V(\mu)|}{|\vartheta-\theta|+1}\,d\mu + \frac{1}{|\rho|}\Big), \qquad \mu = \vartheta^2. \qquad (2.3.48)$$
In view of (2.2.4),
$$\Big(\int_1^\infty \frac{|\hat V(\mu)|}{|\vartheta-\theta|+1}\,d\mu\Big)^2 \le \Big(\int_1^\infty |\hat V(\mu)|^2\,\vartheta^4\,d\mu\Big)\Big(\int_1^\infty \frac{d\mu}{\vartheta^4\,(|\vartheta-\theta|+1)^2}\Big) \le C\int_1^\infty \frac{d\mu}{\vartheta^4\,(|\vartheta-\theta|+1)^2}. \qquad (2.3.49)$$
Since
$$|\rho-\vartheta|^2 = \theta^2+\sigma^2+\vartheta^2-2\theta\vartheta, \qquad \big||\rho|-\vartheta\big|^2 = \theta^2+\sigma^2+\vartheta^2-2\vartheta|\rho|,$$
we have
$$|\rho-\vartheta| \ge \big||\rho|-\vartheta\big|. \qquad (2.3.50)$$
By virtue of (2.3.49), (2.3.50) and (2.2.11),
$$\int_1^\infty \frac{|\hat V(\mu)|}{|\vartheta-\theta|+1}\,d\mu \le \frac{C}{|\rho|}. \qquad (2.3.51)$$
Using (2.3.48) and (2.3.51) we arrive at (2.3.46).
Furthermore, we construct the function $\Gamma(x,\lambda)$ by the formula
$$\Gamma(x,\lambda) = \int_0^\infty \frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi(x,\mu)\rangle}{\lambda-\mu}\,\hat V(\mu)\gamma(x,\mu)\,d\mu + \sum_{k=1}^m \frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\lambda-\lambda_{k0}}\,\alpha_{k0}\gamma_{k0}(x) - \sum_{k=1}^{\tilde m} \frac{\langle\tilde\Phi(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\lambda-\lambda_{k1}}\,\alpha_{k1}\gamma_{k1}(x). \qquad (2.3.52)$$
It follows from (2.1.70), (2.2.6), (2.3.43) and (2.3.52) (using $\tilde\Phi(x,\lambda) = \tilde S(x,\lambda) + \tilde M(\lambda)\tilde\varphi(x,\lambda)$ and then (2.3.43)) that
$$\Gamma(x,\lambda) = -\tilde M(\lambda)\gamma(x,\lambda) + \int_0^\infty \frac{\langle\tilde S(x,\lambda),\tilde\varphi(x,\mu)\rangle}{\lambda-\mu}\,\hat V(\mu)\gamma(x,\mu)\,d\mu$$
$$+ \sum_{k=1}^m \frac{\langle\tilde S(x,\lambda),\tilde\varphi_{k0}(x)\rangle}{\lambda-\lambda_{k0}}\,\alpha_{k0}\gamma_{k0}(x) - \sum_{k=1}^{\tilde m} \frac{\langle\tilde S(x,\lambda),\tilde\varphi_{k1}(x)\rangle}{\lambda-\lambda_{k1}}\,\alpha_{k1}\gamma_{k1}(x). \qquad (2.3.53)$$
Since $\langle\tilde S(x,\lambda),\tilde\varphi(x,\mu)\rangle\big|_{x=0} = -1$, we infer from (1.6.1) that
$$\frac{\langle\tilde S(x,\lambda),\tilde\varphi(x,\mu)\rangle}{\lambda-\mu} = -\frac{1}{\lambda-\mu} + \int_0^x \tilde S(t,\lambda)\tilde\varphi(t,\mu)\,dt.$$
Hence (2.3.53) takes the form
$$\Gamma(x,\lambda) = -\tilde M(\lambda)\gamma(x,\lambda) - \int_0^\infty \frac{\hat V(\mu)\gamma(x,\mu)}{\lambda-\mu}\,d\mu - \sum_{k=1}^m \frac{\alpha_{k0}\gamma_{k0}(x)}{\lambda-\lambda_{k0}} + \sum_{k=1}^{\tilde m} \frac{\alpha_{k1}\gamma_{k1}(x)}{\lambda-\lambda_{k1}} + \Gamma_1(x,\lambda), \qquad (2.3.54)$$
where
$$\Gamma_1(x,\lambda) = \int_0^\infty \Big(\int_0^x \tilde S(t,\lambda)\tilde\varphi(t,\mu)\,dt\Big)\hat V(\mu)\gamma(x,\mu)\,d\mu$$
$$+ \sum_{k=1}^m \Big(\int_0^x \tilde S(t,\lambda)\tilde\varphi_{k0}(t)\,dt\Big)\alpha_{k0}\gamma_{k0}(x) - \sum_{k=1}^{\tilde m} \Big(\int_0^x \tilde S(t,\lambda)\tilde\varphi_{k1}(t)\,dt\Big)\alpha_{k1}\gamma_{k1}(x).$$
The function $\Gamma_1(x,\lambda)$ is entire in $\lambda$ for each fixed $x \ge 0$. Using (2.3.35) we derive from (2.3.54):
$$\Gamma(x,\lambda) = -M(\lambda)\gamma(x,\lambda) + \Gamma_0(x,\lambda), \qquad (2.3.55)$$
where
$$\Gamma_0(x,\lambda) = \Gamma_1(x,\lambda) + \Gamma_2(x,\lambda) + \Gamma_3(x,\lambda),$$
$$\Gamma_2(x,\lambda) = \int_0^\infty \hat V(\mu)\,\frac{\gamma(x,\lambda)-\gamma(x,\mu)}{\lambda-\mu}\,d\mu,$$
$$\Gamma_3(x,\lambda) = \sum_{k=1}^m \frac{\alpha_{k0}}{\lambda-\lambda_{k0}}\big(\gamma(x,\lambda)-\gamma_{k0}(x)\big) - \sum_{k=1}^{\tilde m} \frac{\alpha_{k1}}{\lambda-\lambda_{k1}}\big(\gamma(x,\lambda)-\gamma_{k1}(x)\big).$$
In view of (2.3.45), the function $\Gamma_0(x,\lambda)$ is entire in $\lambda$ for each fixed $x \ge 0$. Using (2.3.55), (2.3.52), (2.3.46) and (2.1.88) we obtain the following properties of the function $\Gamma(x,\lambda)$.
(1) For each fixed $x \ge 0$, the function $\Gamma(x,\lambda)$ is analytic in $\Pi$ with respect to $\lambda$ (with the simple poles $\lambda_{k0}$, $k = \overline{1,m}$), and continuous in $\overline\Pi_1 \setminus \Lambda'$. Moreover,
$$\mathop{\mathrm{Res}}_{\lambda=\lambda_{k0}} \Gamma(x,\lambda) = -\alpha_{k0}\gamma_{k0}(x), \qquad k = \overline{1,m}. \qquad (2.3.56)$$
(2) Denote
$$\Gamma^\pm(x,\lambda) = \lim_{z\to 0,\ \operatorname{Re} z>0} \Gamma(x,\lambda\pm iz), \qquad \lambda > 0.$$
Then
$$\frac{1}{2\pi i}\big(\Gamma^-(x,\lambda) - \Gamma^+(x,\lambda)\big) = V(\lambda)\,\gamma(x,\lambda), \qquad \lambda > 0. \qquad (2.3.57)$$
(3) For $|\lambda| \to \infty$,
$$|\Gamma(x,\lambda)| \le \frac{C_x}{|\rho|^2}\exp(|\sigma|x). \qquad (2.3.58)$$
Now we construct the function $B(x,\lambda)$ via the formula
$$B(x,\lambda) = \overline{\gamma(x,\overline\lambda)}\,\Gamma(x,\lambda). \qquad (2.3.59)$$
By Cauchy's theorem,
$$\frac{1}{2\pi i}\int_{\gamma_R^0} B(x,\lambda)\,d\lambda = 0,$$
where the contour $\gamma_R^0$ is defined in Section 2.1 (see fig. 2.1.2). By virtue of (2.3.46), (2.3.58) and (2.3.59) we get
$$\lim_{R\to\infty} \frac{1}{2\pi i}\int_{|\lambda|=R} B(x,\lambda)\,d\lambda = 0,$$
and consequently
$$\frac{1}{2\pi i}\int_\gamma B(x,\lambda)\,d\lambda = 0, \qquad (2.3.60)$$
where the contour $\gamma$ is defined in Section 2.1 (see fig. 2.1.1). Moving the contour in (2.3.60) to the real axis and using the residue theorem and (2.3.56), we get for sufficiently small $\delta > 0$,
$$\sum_{n=1}^m \alpha_{n0}|\gamma_{n0}(x)|^2 + \frac{1}{2\pi i}\int_{|\lambda|=\delta} B(x,\lambda)\,d\lambda + \frac{1}{2\pi i}\int_{\gamma_\delta} B(x,\lambda)\,d\lambda = 0, \qquad (2.3.61)$$
where $\gamma_\delta$ is the two-sided cut along the ray $\{\lambda:\ \lambda \ge \delta\}$. Since $M(\lambda) = O(\rho^{-1})$ as $|\lambda| \to 0$, we get in view of (2.3.55) and (2.3.59) that for each fixed $x \ge 0$,
$$B(x,\lambda) = O(\rho^{-1}) \quad \text{as } |\lambda| \to 0,$$
and consequently
$$\lim_{\delta\to 0} \frac{1}{2\pi i}\int_{|\lambda|=\delta} B(x,\lambda)\,d\lambda = 0.$$
Together with (2.3.61), (2.3.57) and (2.3.59) this yields
$$\sum_{k=1}^m \alpha_{k0}|\gamma_{k0}(x)|^2 + \int_0^\infty |\gamma(x,\mu)|^2\,V(\mu)\,d\mu = 0.$$
Since $\alpha_{k0} > 0$ and $V(\mu) > 0$, we get
$$\gamma(x,\mu) = 0, \qquad \gamma_{n0}(x) = 0, \qquad \gamma_{n1}(x) = \gamma(x,\lambda_{n1}) = 0.$$
Thus $\gamma(x) = 0$, and Theorem 2.3.9 is proved. $\Box$
Theorem 2.3.10. For a vector $S = \big(\{V(\lambda)\}_{\lambda>0},\ \{\lambda_k,\alpha_k\}_{k=\overline{1,m}}\big) \in W'$ to be the spectral data for a certain $L \in V'_N$, it is necessary and sufficient that $\varepsilon(x) \in W'_N$, where $\varepsilon(x)$ is defined via (2.3.40), and $\psi(x)$ is the solution of (2.3.41). The function $q(x)$ and the number $h$ can be constructed via (2.2.19)-(2.2.20).

Denote by $W''$ the set of functions $M(\lambda)$ such that:
(1) $M(\lambda)$ is analytic in $\Pi$ with the exception of an at most finite number of simple poles $\Lambda' = \{\lambda_k\}_{k=\overline{1,m}}$, $\lambda_k = \rho_k^2 < 0$, and $\alpha_k := \mathop{\mathrm{Res}}_{\lambda=\lambda_k} M(\lambda) > 0$;
(2) $M(\lambda)$ is continuous in $\overline\Pi_1 \setminus \Lambda'$, $M(\lambda) = O(\rho^{-1})$ for $\rho \to 0$, and
$$V(\lambda) := \frac{1}{2\pi i}\big(M^-(\lambda) - M^+(\lambda)\big) > 0 \quad \text{for } \lambda > 0;$$
(3) there exists $\tilde L$ such that (2.2.4) holds.

Theorem 2.3.11. For a function $M(\lambda) \in W''$ to be the Weyl function for a certain $L \in V'_N$, it is necessary and sufficient that $\varepsilon(x) \in W'_N$, where $\varepsilon(x)$ is defined via (2.3.40).

We omit the proofs of Theorems 2.3.10-2.3.11, since they are similar to the corresponding facts for Sturm-Liouville operators on a finite interval presented in Section 1.6 (see also the proof of Theorem 2.2.5).
2.3.2. The nonselfadjoint case. One can consider the inverse problem from the spectral data also in the nonselfadjoint case, when $q(x) \in L(0,\infty)$ is a complex-valued function and $h$ is a complex number. For simplicity we confine ourselves here to the case of a simple spectrum (see Definition 2.3.2). For nonselfadjoint differential operators with a simple spectrum we introduce the spectral data and study the inverse problem of recovering $L$ from the given spectral data. We also consider the particular case when only the discrete spectrum is perturbed. Then the main equation of the inverse problem becomes a linear algebraic system, and the solvability condition is equivalent to the condition that the determinant of this system differs from zero.

Definition 2.3.2. We shall say that $L$ has a simple spectrum if $\Delta(\rho)$ has a finite number of simple zeros in the half-plane $\operatorname{Im}\rho \ge 0$, and $M(\lambda) = O(\rho^{-1})$, $\rho \to 0$. We shall write $L \in V''_N$ if $L$ has a simple spectrum and $q \in W'_N$.
Clearly, $V'_N \subset V''_N$, since the selfadjoint operators considered in Subsection 2.3.1 have a simple spectrum.

Let $L \in V''_N$. Then $\Lambda = \{\lambda_k\}_{k=\overline{1,m}}$, $\lambda_k = \rho_k^2$, is a finite set, and the Weyl function $M(\lambda)$ is analytic in $\Pi \setminus \Lambda$ and continuous in $\overline\Pi_1 \setminus \Lambda$; $\Lambda' = \{\lambda_k\}_{k=\overline{1,r}}$ are the eigenvalues, and $\Lambda'' = \{\lambda_k\}_{k=\overline{r+1,m}}$ are the spectral singularities of $L$. The next assertion is proved by the same arguments as Lemma 2.3.1.

Lemma 2.3.2. The following relation holds:
$$M(\lambda) = \int_0^\infty \frac{V(\mu)}{\lambda-\mu}\,d\mu + \sum_{k=1}^m \frac{\alpha_k}{\lambda-\lambda_k},$$
where
$$V(\lambda) := \frac{1}{2\pi i}\big(M^-(\lambda) - M^+(\lambda)\big), \qquad M^\pm(\lambda) := \lim_{z\to 0,\ \operatorname{Re} z>0} M(\lambda\pm iz),$$
$$\alpha_k := \begin{cases} e(0,\rho_k)\big(\Delta_1(\lambda_k)\big)^{-1}, & k = \overline{1,r}, \\[1mm] \dfrac12\,e(0,\rho_k)\big(\Delta_1(\lambda_k)\big)^{-1}, & k = \overline{r+1,m}, \end{cases} \qquad \Delta_1(\lambda) := \frac{d}{d\lambda}\Delta(\rho).$$

Definition 2.3.3. The data $S = \big(\{V(\lambda)\}_{\lambda>0},\ \{\lambda_k,\alpha_k\}_{k=\overline{1,m}}\big)$ are called the spectral data of $L$.

Using Theorem 2.2.1 and Lemma 2.3.2 we obtain the following uniqueness theorem.

Theorem 2.3.12. Let $S$ and $\tilde S$ be the spectral data for $L$ and $\tilde L$, respectively. If $S = \tilde S$, then $L = \tilde L$. Thus, the specification of the spectral data uniquely determines $L$.

For $L \in V''_N$ the main equation (2.3.41) of the inverse problem remains valid, and Theorem 2.3.8 also holds true. Moreover, the potential $q$ and the coefficient $h$ can be constructed by (2.3.40), (2.2.19)-(2.2.20).
Perturbation of the discrete spectrum. Let $\tilde L \in V_N$, and let $\tilde M(\lambda)$ be the Weyl function for $\tilde L$. Consider the function
$$M(\lambda) = \tilde M(\lambda) + \sum_{\lambda^0 \in J} \frac{a_{\lambda^0}}{\lambda-\lambda^0}, \qquad (2.3.62)$$
where $J$ is a finite set of the $\lambda$-plane, and $a_{\lambda^0}$, $\lambda^0 \in J$, are complex numbers. In this case $\hat V(\lambda) \equiv 0$. Then the main equation (2.2.12) becomes the linear algebraic system
$$\tilde\varphi(x,z^0) = \varphi(x,z^0) + \sum_{\lambda^0\in J} \tilde D(x,z^0,\lambda^0)\,a_{\lambda^0}\,\varphi(x,\lambda^0), \qquad z^0 \in J, \qquad (2.3.63)$$
with the determinant $\det(E + \tilde G(x))$, where
$$\tilde G(x) = \big[\tilde D(x,z^0,\lambda^0)\,a_{\lambda^0}\big]_{z^0,\lambda^0\in J}.$$
The solvability condition (Condition S in Theorem 2.2.5) takes here the form
$$\det(E + \tilde G(x)) \ne 0 \quad \text{for all } x \ge 0. \qquad (2.3.64)$$
The potential $q$ and the coefficient $h$ can be constructed by the formulae
$$q(x) = \tilde q(x) + \varepsilon(x), \qquad h = \tilde h - \sum_{\lambda^0\in J} a_{\lambda^0}, \qquad (2.3.65)$$
$$\varepsilon(x) = -2\sum_{\lambda^0\in J} a_{\lambda^0}\,\frac{d}{dx}\big(\varphi(x,\lambda^0)\,\tilde\varphi(x,\lambda^0)\big). \qquad (2.3.66)$$
From Theorem 2.2.5 we have:

Theorem 2.3.13. Let $\tilde L \in V_N$. For a function $M(\lambda)$ of the form (2.3.62) to be the Weyl function for a certain $L \in V_N$, it is necessary and sufficient that (2.3.64) holds, $\varepsilon(x) \in W_N$, where $\varepsilon(x)$ is defined via (2.3.66), and $\{\varphi(x,\lambda^0)\}_{\lambda^0\in J}$ is the solution of (2.3.63). Under these conditions the potential $q$ and the coefficient $h$ are constructed by (2.3.65).
Example 2.3.1. Let $\tilde q(x) = 0$ and $\tilde h = 0$. Then $\tilde M(\lambda) = \dfrac{1}{i\rho}$. Consider the function
$$M(\lambda) = \tilde M(\lambda) + \frac{a}{\lambda-\lambda^0},$$
where $a$ and $\lambda^0$ are complex numbers. Then the main equation (2.3.63) becomes
$$\tilde\varphi(x,\lambda^0) = F(x)\,\varphi(x,\lambda^0),$$
where
$$\tilde\varphi(x,\lambda^0) = \cos\rho^0 x, \qquad F(x) = 1 + a\int_0^x \cos^2\rho^0 t\,dt, \qquad \lambda^0 = (\rho^0)^2.$$
The solvability condition (2.3.64) takes the form
$$F(x) \ne 0 \quad \text{for all } x \ge 0, \qquad (2.3.67)$$
and the function $\varepsilon(x)$ can be found by the formula
$$\varepsilon(x) = \frac{2a\rho^0\sin 2\rho^0 x}{F(x)} + \frac{2a^2\cos^4\rho^0 x}{F^2(x)}.$$
Case 1. Let $\lambda^0 = 0$. Then
$$F(x) = 1 + ax,$$
and (2.3.67) is equivalent to the condition
$$a \notin (-\infty, 0). \qquad (2.3.68)$$
If (2.3.68) is fulfilled, then $M(\lambda)$ is the Weyl function for $L$ of the form (2.2.1)-(2.2.2) with
$$q(x) = \frac{2a^2}{(1+ax)^2}, \qquad h = -a,$$
$$\varphi(x,\lambda) = \cos\rho x - \frac{a}{1+ax}\cdot\frac{\sin\rho x}{\rho}, \qquad e(x,\rho) = \exp(i\rho x)\Big(1 - \frac{a}{i\rho(1+ax)}\Big),$$
$$\Delta(\rho) = i\rho, \qquad V(\lambda) = \frac{1}{\pi\sqrt\lambda}, \qquad \varphi(x,0) = \frac{1}{1+ax}.$$
If $a < 0$, then the solvability condition is not fulfilled, and the function $M(\lambda)$ is not a Weyl function.
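The formulas of Case 1 can be checked numerically for a sample value $a = 2$ (the choice is arbitrary): $\varphi(x,\lambda)$ should solve $-y'' + qy = \lambda y$ with $\varphi(0,\lambda) = 1$, $\varphi'(0,\lambda) = h = -a$, and the weight $a$ of the pole of $M(\lambda)$ at $\lambda^0 = 0$ should equal $\big(\int_0^\infty \varphi^2(x,0)\,dx\big)^{-1}$, in agreement with the residue formula of Subsection 2.3.1. A sketch, not from the text:

```python
import math

def simpson(f, lo, hi, n):
    h = (hi - lo) / n
    return (f(lo) + f(hi)
            + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))) * h / 3.0

a = 2.0              # sample value with a > 0
h = -a
q = lambda x: 2 * a * a / (1 + a * x) ** 2

def phi(x, lam):
    # phi(x, lambda) = cos(rho x) - a/(1+a x) * sin(rho x)/rho,  lambda = rho^2 > 0
    rho = math.sqrt(lam)
    return math.cos(rho * x) - a / (1 + a * x) * math.sin(rho * x) / rho

lam, dx = 1.5, 1e-4
residuals = []
for x in [0.7, 2.0]:
    second = (phi(x + dx, lam) - 2 * phi(x, lam) + phi(x - dx, lam)) / dx ** 2
    residuals.append(-second + q(x) * phi(x, lam) - lam * phi(x, lam))
print(residuals)                 # ≈ [0, 0]: phi solves -y'' + q*y = lam*y

dphi0 = (phi(dx, lam) - phi(-dx, lam)) / (2 * dx)
print(phi(0.0, lam), dphi0)      # ≈ 1, -2:  phi(0) = 1, phi'(0) = h

# pole weight: a = ( int_0^inf phi(x,0)^2 dx )^{-1}, phi(x,0) = 1/(1+a x);
# substitute x = u/(1-u) to integrate over [0,1]
integral = simpson(lambda u: 1.0 / ((1 - u) + a * u) ** 2, 0.0, 1.0, 200)
print(integral, 1.0 / a)         # both = 0.5
```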
Case 2. Let $\lambda^0 \ne 0$ be a real number, and let $a > 0$. Then $F(x) \ge 1$, and (2.3.67) is fulfilled. But in this case $\varepsilon(x) \notin L(0,\infty)$, i.e. $\varepsilon(x) \notin W_N$ for any $N \ge 0$.
2.4. AN INVERSE PROBLEM FOR A WAVE EQUATION

In this section we consider an inverse problem for a wave equation with a focused source of disturbance. In applied problems the data are often functions of compact support localized within a relatively small area of space. It is convenient to model such situations mathematically as problems with a focused source of disturbance (see [rom1]).

Consider the following boundary value problem $B(q(x),h)$:
$$u_{tt} = u_{xx} - q(x)u, \qquad 0 \le x \le t, \qquad (2.4.1)$$
$$u(x,x) = 1, \qquad (u_x - hu)\big|_{x=0} = 0, \qquad (2.4.2)$$
where $q(x)$ is a complex-valued locally integrable function (i.e. it is integrable on every finite interval), and $h$ is a complex number. Denote $r(t) := u(0,t)$. The function $r$ is called the trace of the solution. In this section we study the following inverse problem.

Inverse Problem 2.4.1. Given the trace $r(t)$, $t \ge 0$, of the solution of $B(q(x),h)$, construct $q(x)$, $x \ge 0$, and $h$.

We prove a uniqueness theorem for Inverse Problem 2.4.1 (Theorem 2.4.3), provide an algorithm for the solution of this inverse problem (Algorithm 2.4.1) and give necessary and sufficient conditions for its solvability (Theorem 2.4.4). Furthermore, we show connections between Inverse Problem 2.4.1 and the inverse spectral problems considered in Sections 2.2-2.3.
Remark 2.4.1. Let us note here that the boundary value problem $B(q(x),h)$ is equivalent to a Cauchy problem with a focused source of disturbance. For simplicity, we assume here that $h = 0$. We define $u(x,t) = 0$ for $0 < t < x$, and $u(-x,t) = u(x,t)$, $q(-x) = q(x)$ for $x < 0$. Then, using symmetry, it follows that $u(x,t)$ is a solution of the Goursat problem
$$u_{tt} = u_{xx} - q(x)u, \qquad 0 \le |x| \le t,$$
$$u(x,|x|) = 1.$$
Moreover, it can be shown that this Goursat problem is equivalent to the Cauchy problem
$$u_{tt} = u_{xx} - q(x)u, \qquad -\infty < x < \infty,\ t > 0,$$
$$u\big|_{t=0} = 0, \qquad u_t\big|_{t=0} = 2\delta(x),$$
where $\delta(x)$ is the Dirac delta-function. Similarly, for $h \ne 0$, the boundary value problem (2.4.1)-(2.4.2) also corresponds to a problem with a focused source of disturbance.
Let us return to the boundary value problem (2.4.1)-(2.4.2). Denote
$$Q(x) = \int_0^x |q(t)|\,dt, \qquad Q^*(x) = \int_0^x Q(t)\,dt, \qquad d = \max(0,-h).$$
Theorem 2.4.1. The boundary value problem (2.4.1)-(2.4.2) has a unique solution $u(x,t)$, and
$$|u(x,t)| \le \exp(d(t-x))\exp\Big(2Q^*\Big(\frac{t+x}{2}\Big)\Big), \qquad 0 \le x \le t. \qquad (2.4.3)$$
Proof. We transform (2.4.1)-(2.4.2) by means of the replacement
$$\xi = t+x, \qquad \eta = t-x, \qquad v(\xi,\eta) = u\Big(\frac{\xi-\eta}{2},\,\frac{\xi+\eta}{2}\Big)$$
to the boundary value problem
$$v_{\xi\eta}(\xi,\eta) = -\frac14\,q\Big(\frac{\xi-\eta}{2}\Big)v(\xi,\eta), \qquad 0 \le \eta \le \xi, \qquad (2.4.4)$$
$$v(\xi,0) = 1, \qquad \big(v_\xi(\xi,\eta) - v_\eta(\xi,\eta) - h\,v(\xi,\eta)\big)\big|_{\eta=\xi} = 0. \qquad (2.4.5)$$
Since $v_\xi(\xi,0) = 0$, integration of (2.4.4) with respect to $\eta$ gives
$$v_\xi(\xi,\eta) = -\frac14\int_0^\eta q\Big(\frac{\xi-\beta}{2}\Big)v(\xi,\beta)\,d\beta. \qquad (2.4.6)$$
In particular, we have
$$v_\xi(\xi,\eta)\big|_{\eta=\xi} = -\frac14\int_0^\xi q\Big(\frac{\xi-\beta}{2}\Big)v(\xi,\beta)\,d\beta. \qquad (2.4.7)$$
It follows from (2.4.6) that
$$v(\xi,\eta) = v(\eta,\eta) - \frac14\int_\eta^\xi\Big(\int_0^\eta q\Big(\frac{\alpha-\beta}{2}\Big)v(\alpha,\beta)\,d\beta\Big)d\alpha. \qquad (2.4.8)$$
Let us calculate $v(\eta,\eta)$. Since
$$\frac{d}{d\eta}\big(v(\eta,\eta)\exp(h\eta)\big) = \big(v_\xi(\xi,\eta) + v_\eta(\xi,\eta) + h\,v(\xi,\eta)\big)\big|_{\xi=\eta}\exp(h\eta),$$
we get by virtue of (2.4.5) and (2.4.7),
$$\frac{d}{d\eta}\big(v(\eta,\eta)\exp(h\eta)\big) = 2\,v_\xi(\xi,\eta)\big|_{\xi=\eta}\exp(h\eta) = -\frac12\exp(h\eta)\int_0^\eta q\Big(\frac{\eta-\beta}{2}\Big)v(\eta,\beta)\,d\beta.$$
This yields (with $v(0,0) = 1$)
$$v(\eta,\eta)\exp(h\eta) - 1 = -\frac12\int_0^\eta \exp(h\alpha)\Big(\int_0^\alpha q\Big(\frac{\alpha-\beta}{2}\Big)v(\alpha,\beta)\,d\beta\Big)d\alpha,$$
and consequently
$$v(\eta,\eta) = \exp(-h\eta) - \frac12\int_0^\eta \exp(h(\alpha-\eta))\Big(\int_0^\alpha q\Big(\frac{\alpha-\beta}{2}\Big)v(\alpha,\beta)\,d\beta\Big)d\alpha. \qquad (2.4.9)$$
Substituting (2.4.9) into (2.4.8) we deduce that the function $v(\xi,\eta)$ satisfies the integral equation
$$v(\xi,\eta) = \exp(-h\eta) - \frac12\int_0^\eta \exp(h(\alpha-\eta))\Big(\int_0^\alpha q\Big(\frac{\alpha-\beta}{2}\Big)v(\alpha,\beta)\,d\beta\Big)d\alpha$$
$$- \frac14\int_\eta^\xi\Big(\int_0^\eta q\Big(\frac{\alpha-\beta}{2}\Big)v(\alpha,\beta)\,d\beta\Big)d\alpha. \qquad (2.4.10)$$
Conversely, if $v(\xi,\eta)$ is a solution of (2.4.10), then one can verify that $v(\xi,\eta)$ satisfies (2.4.4)-(2.4.5).
We solve the integral equation (2.4.10) by the method of successive approximations. The calculations are slightly different for $h \ge 0$ and $h < 0$.

Case 1. Let $h \ge 0$. Denote
$$v_0(\xi,\eta) = \exp(-h\eta),$$
$$v_{k+1}(\xi,\eta) = -\frac12\int_0^\eta \exp(h(\alpha-\eta))\Big(\int_0^\alpha q\Big(\frac{\alpha-\beta}{2}\Big)v_k(\alpha,\beta)\,d\beta\Big)d\alpha - \frac14\int_\eta^\xi\Big(\int_0^\eta q\Big(\frac{\alpha-\beta}{2}\Big)v_k(\alpha,\beta)\,d\beta\Big)d\alpha. \qquad (2.4.11)$$
Let us show by induction that
$$|v_k(\xi,\eta)| \le \frac{1}{k!}\Big(2Q^*\Big(\frac{\xi}{2}\Big)\Big)^k, \qquad k \ge 0,\ 0 \le \eta \le \xi. \qquad (2.4.12)$$
Indeed, for $k = 0$, (2.4.12) is obvious. Suppose that (2.4.12) is valid for a certain $k \ge 0$. It follows from (2.4.11) that
$$|v_{k+1}(\xi,\eta)| \le \frac12\int_0^\xi\Big(\int_0^\alpha \Big|q\Big(\frac{\alpha-\beta}{2}\Big)v_k(\alpha,\beta)\Big|\,d\beta\Big)d\alpha. \qquad (2.4.13)$$
Substituting (2.4.12) into the right-hand side of (2.4.13) we obtain
$$|v_{k+1}(\xi,\eta)| \le \frac{1}{2\,k!}\int_0^\xi\Big(2Q^*\Big(\frac{\alpha}{2}\Big)\Big)^k\Big(\int_0^\alpha\Big|q\Big(\frac{\alpha-\beta}{2}\Big)\Big|\,d\beta\Big)d\alpha$$
$$\le \frac{1}{k!}\int_0^\xi\Big(2Q^*\Big(\frac{\alpha}{2}\Big)\Big)^k\Big(\int_0^{\alpha/2}|q(s)|\,ds\Big)d\alpha = \frac{1}{k!}\int_0^\xi\Big(2Q^*\Big(\frac{\alpha}{2}\Big)\Big)^k Q\Big(\frac{\alpha}{2}\Big)\,d\alpha$$
$$= \frac{1}{k!}\int_0^{\xi/2}\big(2Q^*(s)\big)^k\big(2Q^*(s)\big)'\,ds = \frac{1}{(k+1)!}\Big(2Q^*\Big(\frac{\xi}{2}\Big)\Big)^{k+1};$$
hence (2.4.12) is valid.

It follows from (2.4.12) that the series
$$v(\xi,\eta) = \sum_{k=0}^\infty v_k(\xi,\eta)$$
converges absolutely and uniformly on compact sets $0 \le \eta \le \xi \le T$, and
$$|v(\xi,\eta)| \le \exp\Big(2Q^*\Big(\frac{\xi}{2}\Big)\Big).$$
The function $v(\xi,\eta)$ is the unique solution of the integral equation (2.4.10). Consequently, the function $u(x,t) = v(t+x,\,t-x)$ is the unique solution of the boundary value problem (2.4.1)-(2.4.2), and (2.4.3) holds.
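The iteration (2.4.11) is easy to run numerically. For the constant potential $q \equiv 1$, $h = 0$, one can check that the series sums to $v(\xi,\eta) = J_0(\sqrt{\xi\eta})$, i.e. $u(x,t) = J_0(\sqrt{t^2-x^2})$; this closed form is not from the text and is used here only to validate a sketch of the successive approximations with trapezoidal quadrature (grid sizes are arbitrary):

```python
import math

def j0(z):
    # Bessel J0 via its power series (adequate for |z| <= 5)
    s, term = 0.0, 1.0
    for k in range(40):
        if k:
            term *= -(z * z / 4.0) / (k * k)
        s += term
    return s

X, n, K = 2.0, 100, 20
d = X / n
# v[i][j] ~ current Picard iterate at (xi_i, eta_j), j <= i; v_0 = 1 since h = 0
v = [[1.0] * (i + 1) for i in range(n + 1)]
total = [row[:] for row in v]
for _ in range(K):
    # inner[i][j] = int_0^{eta_j} v(xi_i, beta) dbeta
    inner = []
    for i in range(n + 1):
        row = [0.0]
        for j in range(1, i + 1):
            row.append(row[-1] + 0.5 * d * (v[i][j - 1] + v[i][j]))
        inner.append(row)
    # A[j] = int_0^{eta_j} inner(alpha, alpha) dalpha
    A = [0.0]
    for j in range(1, n + 1):
        A.append(A[-1] + 0.5 * d * (inner[j - 1][j - 1] + inner[j][j]))
    # v_{k+1}(xi_i, eta_j) = -(1/2) A[j] - (1/4) int_{eta_j}^{xi_i} inner(alpha, eta_j) dalpha
    new = [[0.0] * (i + 1) for i in range(n + 1)]
    for j in range(n + 1):
        cum = 0.0
        for i in range(j, n + 1):
            if i > j:
                cum += 0.5 * d * (inner[i - 1][j] + inner[i][j])
            new[i][j] = -0.5 * A[j] - 0.25 * cum
    v = new
    for i in range(n + 1):
        for j in range(i + 1):
            total[i][j] += v[i][j]

# compare with the closed form at (xi, eta) = (2, 1) and on the diagonal (2, 2)
print(total[n][n // 2], j0(math.sqrt(2.0)))   # both ≈ 0.559
print(total[n][n], j0(2.0))                   # both ≈ 0.224
```

The factorial decay of the iterates mirrors the bound (2.4.12), which for $q \equiv 1$ reads $|v_k| \le (\xi^2/4)^k/k!$.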
Case 2. Let $h < 0$. We transform (2.4.10) by means of the replacement
$$w(\xi,\eta) = v(\xi,\eta)\exp(h\eta)$$
to the integral equation
$$w(\xi,\eta) = 1 - \frac12\int_0^\eta\Big(\int_0^\alpha q\Big(\frac{\alpha-\beta}{2}\Big)\exp(h(\alpha-\beta))\,w(\alpha,\beta)\,d\beta\Big)d\alpha$$
$$- \frac14\int_\eta^\xi\Big(\int_0^\eta q\Big(\frac{\alpha-\beta}{2}\Big)\exp(h(\eta-\beta))\,w(\alpha,\beta)\,d\beta\Big)d\alpha. \qquad (2.4.14)$$
By the method of successive approximations we get, similarly to Case 1, that the integral equation (2.4.14) has a unique solution, and that
$$|w(\xi,\eta)| \le \exp\Big(2Q^*\Big(\frac{\xi}{2}\Big)\Big),$$
i.e. Theorem 2.4.1 is proved also for $h < 0$. $\Box$
Remark 2.4.2. It follows from the proof of Theorem 2.4.1 that the solution $u(x,t)$ of (2.4.1)-(2.4.2) in the domain
$$\Omega_T := \{(x,t):\ 0 \le x \le t,\ 0 \le x+t \le 2T\}$$
[fig. 2.4.1: the triangular domain $\Omega_T$ in the $(x,t)$-plane]
is uniquely determined by the specification of $h$ and of $q(x)$ for $0 \le x \le T$; i.e. if $q(x) = \tilde q(x)$, $x \in [0,T]$, and $h = \tilde h$, then $u(x,t) = \tilde u(x,t)$ for $(x,t) \in \Omega_T$. Therefore, one can also consider the boundary value problem (2.4.1)-(2.4.2) in the domains $\Omega_T$ and study the inverse problem of recovering $q(x)$, $0 \le x \le T$, and $h$ from the given trace $r(t)$, $t \in [0,2T]$.

Denote by $D_N$ ($N \ge 0$) the set of functions $f(x)$, $x \ge 0$, such that for each fixed $T > 0$ the functions $f^{(j)}(x)$, $j = \overline{0,N-1}$, are absolutely continuous on $[0,T]$ (so that $f^{(j)}(x) \in L(0,T)$, $j = \overline{0,N}$). It follows from the proof of Theorem 2.4.1 that $r(t) \in D_2$, $r(0) = 1$, $r'(0) = -h$. Moreover, the function $r''$ has the same smoothness properties as the potential $q$. For example, if $q \in D_N$ then $r \in D_{N+2}$.
In order to solve Inverse Problem 2.4.1 we will use the Riemann formula for the solution of the Cauchy problem
$$u_{tt} - p(t)u = u_{xx} - q(x)u + p_1(x,t), \qquad -\infty < x < \infty,\ t > 0,$$
$$u\big|_{t=0} = r(x), \qquad u_t\big|_{t=0} = s(x). \qquad (2.4.15)$$
It is known (see, for example, [hel1, p.150]) that the solution of (2.4.15) has the form
$$u(x,t) = \frac12\big(r(x+t) + r(x-t)\big) + \frac12\int_{x-t}^{x+t}\big(s(\xi)R(\xi,0,x,t) - r(\xi)R_2(\xi,0,x,t)\big)\,d\xi$$
$$+ \frac12\int_0^t d\tau\int_{x-t+\tau}^{x+t-\tau} R(\xi,\tau,x,t)\,p_1(\xi,\tau)\,d\xi,$$
where $R(\xi,\tau,x,t)$ is the Riemann function, and $R_2 = \frac{\partial R}{\partial\tau}$. Note that if $q(x) \equiv \mathrm{const}$, then $R(\xi,\tau,x,t) = R(\xi-x,\tau,t)$. In particular, the solution of the Cauchy problem
$$u_{tt} = u_{xx} - q(x)u, \qquad -\infty < t < \infty,\ x > 0,$$
$$u\big|_{x=0} = r(t), \qquad u_x\big|_{x=0} = h\,r(t), \qquad (2.4.16)$$
has the form
$$u(x,t) = \frac12\big(r(t+x) + r(t-x)\big) - \frac12\int_{t-x}^{t+x} r(\xi)\big(R_2(\xi-t,0,x) - hR(\xi-t,0,x)\big)\,d\xi.$$
The change of variables $\xi = t - \zeta$ leads to
$$u(x,t) = \frac12\big(r(t+x) + r(t-x)\big) + \frac12\int_{-x}^{x} r(t-\zeta)\,G(x,\zeta)\,d\zeta, \qquad (2.4.17)$$
where $G(x,\zeta) = -R_2(\zeta,0,x) + hR(\zeta,0,x)$.
Remark 2.4.3. In (2.4.16) take $r(t) = \cos\rho t$. Obviously, the function $u(x,t) = \varphi(x,\lambda)\cos\rho t$, where $\varphi(x,\lambda)$ was defined in Section 1.1, is a solution of problem (2.4.16). Therefore, (2.4.17) yields for $t = 0$:
$$\varphi(x,\lambda) = \cos\rho x + \frac12\int_{-x}^x G(x,\zeta)\cos\rho\zeta\,d\zeta.$$
Since $G(x,-\zeta) = G(x,\zeta)$, we have
$$\varphi(x,\lambda) = \cos\rho x + \int_0^x G(x,\zeta)\cos\rho\zeta\,d\zeta,$$
i.e. we have obtained another derivation of the representation (1.3.11). Thus, the function $G(x,t)$ is the kernel of the transformation operator. In particular, it was shown in Section 1.3 that
$$G(x,x) = h + \frac12\int_0^x q(t)\,dt. \qquad (2.4.18)$$
Let us go on to the solution of Inverse Problem 2.4.1. Let $u(x,t)$ be the solution of the boundary value problem (2.4.1)-(2.4.2). We define $u(x,t) = 0$ for $0 \le t < x$, and $u(x,-t) = -u(x,t)$, $r(-t) = -r(t)$ for $t < 0$. Then $u(x,t)$ is the solution of the Cauchy problem (2.4.16), and consequently (2.4.17) holds. But $u(x,t) = 0$ for $x > |t|$ (this is a connection between $q(x)$ and $r(t)$), and hence
$$\frac12\big(r(t+x) + r(t-x)\big) + \frac12\int_{-x}^x r(t-\zeta)\,G(x,\zeta)\,d\zeta = 0, \qquad |t| < x. \qquad (2.4.19)$$
Denote $a(t) = r'(t)$. Differentiating (2.4.19) with respect to $t$ and using the relations
$$r(0+) = 1, \qquad r(0-) = -1, \qquad (2.4.20)$$
we obtain
$$G(x,t) + F(x,t) + \int_0^x G(x,\zeta)\,F(t,\zeta)\,d\zeta = 0, \qquad 0 < t < x, \qquad (2.4.21)$$
where
$$F(x,t) = \frac12\big(a(t+x) + a(t-x)\big), \qquad a(t) = r'(t). \qquad (2.4.22)$$
Equation (2.4.21) is called the Gelfand-Levitan equation.
Theorem 2.4.2. For each fixed $x > 0$, equation (2.4.21) has a unique solution.

Proof. Fix $x_0 > 0$. It is sufficient to prove that the homogeneous equation
$$g(t) + \int_0^{x_0} g(\zeta)\,F(t,\zeta)\,d\zeta = 0, \qquad 0 \le t \le x_0, \qquad (2.4.23)$$
has only the trivial solution $g(t) = 0$.

Let $g(t)$, $0 \le t \le x_0$, be a solution of (2.4.23). Since $a(t) = r'(t) \in D_1$, it follows from (2.4.22) and (2.4.23) that $g(t)$ is an absolutely continuous function on $[0,x_0]$. We define $g(-t) = g(t)$ for $t \in [0,x_0]$, and furthermore $g(t) = 0$ for $|t| > x_0$.

Let us show that
$$\int_{-x_0}^{x_0} r(t-\zeta)\,g(\zeta)\,d\zeta = 0, \qquad t \in [-x_0,x_0]. \qquad (2.4.24)$$
Indeed, by virtue of (2.4.20) and (2.4.23), we have
$$\frac{d}{dt}\Big(\int_{-x_0}^{x_0} r(t-\zeta)g(\zeta)\,d\zeta\Big) = \frac{d}{dt}\Big(\int_{-x_0}^t r(t-\zeta)g(\zeta)\,d\zeta + \int_t^{x_0} r(t-\zeta)g(\zeta)\,d\zeta\Big)$$
$$= r(0+)g(t) - r(0-)g(t) + \int_{-x_0}^{x_0} a(t-\zeta)g(\zeta)\,d\zeta = 2\Big(g(t) + \int_0^{x_0} g(\zeta)F(t,\zeta)\,d\zeta\Big) = 0.$$
Consequently,
$$\int_{-x_0}^{x_0} r(t-\zeta)g(\zeta)\,d\zeta \equiv C_0.$$
Taking here $t = 0$ and using that $r(\zeta)g(\zeta)$ is an odd function, we calculate $C_0 = 0$, i.e. (2.4.24) is valid.
Denote
$$\Omega^0 = \{(x,t):\ x - x_0 \le t \le x_0 - x,\ 0 \le x \le x_0\}$$
[fig. 2.4.2: the triangular domain $\Omega^0$ in the $(x,t)$-plane]
and consider the function
$$w(x,t) := \int_{-\infty}^{\infty} u(x,t-\zeta)\,g(\zeta)\,d\zeta, \qquad (x,t) \in \Omega^0, \qquad (2.4.25)$$
where $u(x,t)$ is the solution of the boundary value problem (2.4.1)-(2.4.2). Let us show that
$$w(x,t) = 0, \qquad (x,t) \in \Omega^0. \qquad (2.4.26)$$
Since $u(x,t) = 0$ for $x > |t|$, (2.4.25) takes the form
$$w(x,t) = \int_{-\infty}^{t-x} u(x,t-\zeta)g(\zeta)\,d\zeta + \int_{t+x}^{\infty} u(x,t-\zeta)g(\zeta)\,d\zeta. \qquad (2.4.27)$$
Differentiating (2.4.27) and using the relations
$$u(x,x) = 1, \qquad u(x,-x) = -1, \qquad (2.4.28)$$
we calculate
$$w_x(x,t) = g(t+x) - g(t-x) + \int_{-\infty}^{t-x} u_x(x,t-\zeta)g(\zeta)\,d\zeta + \int_{t+x}^{\infty} u_x(x,t-\zeta)g(\zeta)\,d\zeta, \qquad (2.4.29)$$
$$w_t(x,t) = g(t+x) + g(t-x) + \int_{-\infty}^{t-x} u_t(x,t-\zeta)g(\zeta)\,d\zeta + \int_{t+x}^{\infty} u_t(x,t-\zeta)g(\zeta)\,d\zeta. \qquad (2.4.30)$$
Since, in view of (2.4.28),
$$\big(u_x(x,t) + u_t(x,t)\big)\big|_{t=x} = \frac{d}{dx}\,u(x,x) = 0,$$
it follows from (2.4.29)-(2.4.30) that
$$w_{xx}(x,t) - w_{tt}(x,t) - q(x)w(x,t) = \int_{-\infty}^{\infty}\big[u_{xx} - u_{tt} - q(x)u\big](x,t-\zeta)\,g(\zeta)\,d\zeta = 0,$$
and consequently
$$w_{tt}(x,t) = w_{xx}(x,t) - q(x)\,w(x,t), \qquad (x,t) \in \Omega^0. \qquad (2.4.31)$$
Furthermore, (2.4.25) and (2.4.29) yield
$$w(0,t) = \int_{-x_0}^{x_0} r(t-\zeta)g(\zeta)\,d\zeta, \qquad w_x(0,t) = h\int_{-x_0}^{x_0} r(t-\zeta)g(\zeta)\,d\zeta, \qquad t \in [-x_0,x_0].$$
Therefore, according to (2.4.24), we have
$$w(0,t) = w_x(0,t) = 0, \qquad t \in [-x_0,x_0]. \qquad (2.4.32)$$
Since the Cauchy problem (2.4.31)-(2.4.32) has only the trivial solution, we arrive at (2.4.26).

Denote $u_1(x,t) := u_t(x,t)$. It follows from (2.4.30) that
$$w_t(x,0) = 2g(x) + \int_{-\infty}^{-x} u_1(x,-\zeta)g(\zeta)\,d\zeta + \int_x^{\infty} u_1(x,-\zeta)g(\zeta)\,d\zeta$$
$$= 2\Big(g(x) + \int_x^{\infty} u_1(x,\zeta)g(\zeta)\,d\zeta\Big) = 2\Big(g(x) + \int_x^{x_0} u_1(x,\zeta)g(\zeta)\,d\zeta\Big).$$
Taking (2.4.26) into account we get
$$g(x) + \int_x^{x_0} u_1(x,\zeta)g(\zeta)\,d\zeta = 0, \qquad 0 \le x \le x_0.$$
This Volterra integral equation has only the trivial solution $g(x) = 0$, and consequently Theorem 2.4.2 is proved. $\Box$
Let r and r̃ be the traces for the boundary value problems B(q(x), h) and B(q̃(x), h̃) respectively.

Theorem 2.4.3. If r(t) = r̃(t), t ≥ 0, then q(x) = q̃(x), x ≥ 0, and h = h̃. Thus, the specification of the trace r uniquely determines the potential q and the coefficient h.

Proof. Since r(t) = r̃(t), t ≥ 0, we have, by virtue of (2.4.22), F(x,t) = F̃(x,t). Therefore, Theorem 2.4.2 gives us

    G(x,t) = G̃(x,t),  0 ≤ t ≤ x.   (2.4.33)

By virtue of (2.4.18),

    q(x) = 2 (d/dx) G(x,x),  h = G(0,0) = −r′(0).   (2.4.34)

Together with (2.4.33) this yields q(x) = q̃(x), x ≥ 0, and h = h̃. □

The Gelfand-Levitan equation (2.4.21) and Theorem 2.4.2 yield finally the following algorithm for the solution of Inverse Problem 2.4.1.

Algorithm 2.4.1. Let r(t), t ≥ 0, be given. Then
(1) Construct the function F(x,t) using (2.4.22).
(2) Find the function G(x,t) by solving equation (2.4.21).
(3) Calculate q(x) and h by the formulae (2.4.34).
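Steps (1)-(3) can be prototyped numerically. The sketch below is our own illustrative assumption (function names, the discretization, and the availability of a = r′ in closed form are not part of the text): at each grid point x it solves the discretized equation (2.4.21) by the Nyström method and then applies (2.4.34).

```python
import numpy as np

def solve_inverse_from_trace(a, x_max, n):
    """Sketch of Algorithm 2.4.1; `a` is the derivative r' of the trace."""
    xs = np.linspace(0.0, x_max, n)
    # Step 1: F(x, t) = (a(t + x) + a(t - x)) / 2,  cf. (2.4.22)
    F = lambda u, v: 0.5 * (a(u + v) + a(u - v))
    G_diag = np.empty(n)
    G_diag[0] = -F(0.0, 0.0)              # G(0,0) = -F(0,0), i.e. h = -r'(0)
    for k in range(1, n):
        t = xs[: k + 1]                                  # grid on [0, x]
        w = np.full(k + 1, xs[k] / k)                    # trapezoidal weights
        w[0] *= 0.5; w[-1] *= 0.5
        # Step 2: discretized (2.4.21): (I + K) G(x, .) = -F(x, .)
        K = F(t[:, None], t[None, :]) * w[None, :]
        G = np.linalg.solve(np.eye(k + 1) + K, -F(xs[k], t))
        G_diag[k] = G[-1]                                # G(x, x)
    # Step 3: q(x) = 2 dG(x,x)/dx and h = G(0,0),  cf. (2.4.34)
    return xs, 2.0 * np.gradient(G_diag, xs), G_diag[0]
```

With the trivial trace r ≡ 1 (so a ≡ 0) the routine returns q ≡ 0 and h = 0; for genuine data one would first produce r, e.g. via (2.5.5).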
Remark 2.4.4. We now explain briefly the connection of Inverse Problem 2.4.1 with the inverse spectral problem considered in Sections 2.2-2.3. Let q(x) ∈ L(0,∞), and let u(x,t) be the solution of (2.4.1)-(2.4.2). Then, according to (2.4.3), |u(x,t)| ≤ C₁ exp(C₂ t). Denote

    Φ(x,λ) = ∫_x^∞ u(x,t) exp(iρt) dt,  x ≥ 0, Im ρ > 0,

    M(λ) = ∫_0^∞ r(t) exp(iρt) dt,  Im ρ > 0.

One can verify that Φ(x,λ) and M(λ) are the Weyl solution and the Weyl function for the pair L = L(q(x), h) of the form (2.1.1)-(2.1.2). The inversion formula for the Laplace transform gives

    r(t) = (1/2π) ∫_{γ₁} M(λ) exp(−iρt) dρ = (1/2πi) ∫_{γ₁} (sin ρt/ρ) M(λ) (2ρ) dρ = (1/2πi) ∫_γ (sin ρt/ρ) M(λ) dλ,

where γ is the contour defined in Section 2.1 (see fig. 2.1.1), and γ₁ is the image of γ under the map λ = ρ². Let q̃(x) = h̃ = 0. Since then

    (1/2πi) ∫_γ (sin ρt/ρ) M̃(λ) dλ = r̃(t) ≡ 1,

we infer

    r(t) = 1 + (1/2πi) ∫_γ (sin ρt/ρ) M̂(λ) dλ,  M̂(λ) := M(λ) − M̃(λ).

If (2.2.5) is valid we calculate

    a(t) = (1/2πi) ∫_γ cos ρt · M̂(λ) dλ,

and consequently

    F(x,t) = (1/2πi) ∫_γ cos ρx cos ρt · M̂(λ) dλ,

i.e. equation (2.4.21) coincides with equation (2.2.42).
Let us now formulate necessary and sufficient conditions for the solvability of Inverse Problem 2.4.1.

Theorem 2.4.4. For a function r(t), t ≥ 0, to be the trace for a certain boundary value problem B(q(x), h) of the form (2.4.1)-(2.4.2) with q ∈ D_N, it is necessary and sufficient that r(t) ∈ D_{N+2}, r(0) = 1, and that for each fixed x > 0 the integral equation (2.4.21) is uniquely solvable.

Proof. The necessity part of Theorem 2.4.4 was proved above; here we prove the sufficiency. For simplicity let N ≥ 1 (the case N = 0 requires small modifications).

Let a function r(t), t ≥ 0, satisfying the hypothesis of Theorem 2.4.4, be given, and let G(x,t), 0 ≤ t ≤ x, be the solution of (2.4.21). We define G(x,−t) = G(x,t), r(−t) = −r(t) for t < 0, and consider the function

    u(x,t) := (1/2) ( r(t+x) + r(t−x) ) + (1/2) ∫_{−x}^{x} r(t−τ)G(x,τ) dτ,  −∞ < t < ∞, x ≥ 0.   (2.4.35)

Furthermore, we construct q and h via (2.4.34) and consider the boundary value problem (2.4.1)-(2.4.2) with these q and h. Let ũ(x,t) be the solution of (2.4.1)-(2.4.2), and let r̃(t) := ũ(0,t). Our goal is to prove that u = ũ, r = r̃.
Differentiating (2.4.35) and taking (2.4.20) into account, we get

    u_t(x,t) = (1/2) ( a(t+x) + a(t−x) ) + G(x,t) + (1/2) ∫_{−x}^{x} a(t−τ)G(x,τ) dτ,   (2.4.36)

    u_x(x,t) = (1/2) ( a(t+x) − a(t−x) ) + (1/2) ( r(t−x)G(x,x) + r(t+x)G(x,−x) ) + (1/2) ∫_{−x}^{x} r(t−τ)G_x(x,τ) dτ.   (2.4.37)

Since a(0+) = a(0−), it follows from (2.4.36) that

    u_tt(x,t) = (1/2) ( a′(t+x) + a′(t−x) ) + G_t(x,t) + (1/2) ∫_{−x}^{x} a′(t−τ)G(x,τ) dτ.   (2.4.38)
Integration by parts yields

    u_tt(x,t) = (1/2) ( a′(t+x) + a′(t−x) ) + G_t(x,t)
      − (1/2) ( a(t−τ)G(x,τ)|_{τ=−x}^{τ=t} + a(t−τ)G(x,τ)|_{τ=t}^{τ=x} ) + (1/2) ∫_{−x}^{x} a(t−τ)G_τ(x,τ) dτ
    = (1/2) ( a′(t+x) + a′(t−x) ) + G_t(x,t)
      + (1/2) ( a(t+x)G(x,−x) − a(t−x)G(x,x) ) + (1/2) ∫_{−x}^{x} r′(t−τ)G_τ(x,τ) dτ.

Integrating by parts again and using (2.4.20) we calculate

    u_tt(x,t) = (1/2) ( a′(t+x) + a′(t−x) ) + G_t(x,t) + (1/2) ( a(t+x)G(x,−x) − a(t−x)G(x,x) )
      − (1/2) ( r(t−τ)G_τ(x,τ)|_{τ=−x}^{τ=t} + r(t−τ)G_τ(x,τ)|_{τ=t}^{τ=x} ) + (1/2) ∫_{−x}^{x} r(t−τ)G_ττ(x,τ) dτ
    = (1/2) ( a′(t+x) + a′(t−x) ) + (1/2) ( a(t+x)G(x,−x) − a(t−x)G(x,x) )
      + (1/2) ( r(t+x)G_t(x,−x) − r(t−x)G_t(x,x) ) + (1/2) ∫_{−x}^{x} r(t−τ)G_tt(x,τ) dτ.   (2.4.39)

(The boundary terms at τ = t produce −G_t(x,t), which cancels G_t(x,t), since r(0+) − r(0−) = 2.)
Differentiating (2.4.37) we obtain

    u_xx(x,t) = (1/2) ( a′(t+x) + a′(t−x) ) + (1/2) ( a(t+x)G(x,−x) − a(t−x)G(x,x) )
      + (1/2) ( r(t+x) (d/dx)G(x,−x) + r(t−x) (d/dx)G(x,x) )
      + (1/2) ( r(t+x)G_x(x,−x) + r(t−x)G_x(x,x) ) + (1/2) ∫_{−x}^{x} r(t−τ)G_xx(x,τ) dτ.

Together with (2.4.35), (2.4.39) and (2.4.34) this yields

    u_xx(x,t) − q(x)u(x,t) − u_tt(x,t) = (1/2) ∫_{−x}^{x} r(t−τ) g(x,τ) dτ,  −∞ < t < ∞, x ≥ 0,   (2.4.40)

where

    g(x,t) = G_xx(x,t) − G_tt(x,t) − q(x)G(x,t).
Let us show that

    u(x,t) = 0,  x > |t|.   (2.4.41)

Indeed, it follows from (2.4.36) and (2.4.21) that u_t(x,t) = 0 for x > |t|, and consequently u(x,t) ≡ C₀(x) for x > |t|. Taking t = 0 in (2.4.35) we infer, as above,

    C₀(x) = (1/2) ( r(x) + r(−x) ) + (1/2) ∫_{−x}^{x} r(−τ)G(x,τ) dτ = 0,

i.e. (2.4.41) holds.

It follows from (2.4.40) and (2.4.41) that

    (1/2) ∫_{−x}^{x} r(t−τ)g(x,τ) dτ = 0,  x > |t|.   (2.4.42)

Differentiating (2.4.42) with respect to t and taking (2.4.20) into account we deduce

    (1/2) ( r(0+)g(x,t) − r(0−)g(x,t) ) + (1/2) ∫_{−x}^{x} a(t−τ)g(x,τ) dτ = 0,

or

    g(x,t) + ∫_0^x F(t,τ)g(x,τ) dτ = 0.

According to Theorem 2.4.2 this homogeneous equation has only the trivial solution g(x,t) = 0, i.e.

    G_tt = G_xx − q(x)G,  0 < |t| < x.   (2.4.43)
Furthermore, it follows from (2.4.38) for t = 0 and (2.4.41) that

    0 = (1/2) ( a′(x) + a′(−x) ) + G_t(x,0) + (1/2) ∫_{−x}^{x} a′(−τ)G(x,τ) dτ.

Since a′(−x) = −a′(x) and G(x,−t) = G(x,t), we infer

    ∂G(x,t)/∂t |_{t=0} = 0.   (2.4.44)

According to (2.4.34) the function G(x,t) satisfies also (2.4.18).

It follows from (2.4.40) and (2.4.43) that

    u_tt(x,t) = u_xx(x,t) − q(x)u(x,t),  −∞ < t < ∞, x ≥ 0.

Moreover, (2.4.35) and (2.4.37) imply (with h = G(0,0))

    u|_{x=0} = r(t),  u_x|_{x=0} = h r(t).
Let us show that

    u(x,x) = 1,  x ≥ 0.   (2.4.45)

Since the function G(x,t) satisfies (2.4.43), (2.4.44) and (2.4.18), we get, according to (2.4.17),

    ũ(x,t) = (1/2) ( r̃(t+x) + r̃(t−x) ) + (1/2) ∫_{−x}^{x} r̃(t−τ)G(x,τ) dτ.   (2.4.46)

Comparing (2.4.35) with (2.4.46) we get

    û(x,t) = (1/2) ( r̂(t+x) + r̂(t−x) ) + (1/2) ∫_{−x}^{x} r̂(t−τ)G(x,τ) dτ,

where û = u − ũ, r̂ = r − r̃. Since the function r̂(t) is continuous for −∞ < t < ∞, it follows that the function û(x,t) is also continuous for −∞ < t < ∞, x > 0. On the other hand, according to (2.4.41), û(x,t) = 0 for x > |t|, and consequently û(x,x) = 0. By (2.4.2), ũ(x,x) = 1, and we arrive at (2.4.45).

Thus, the function u(x,t) is a solution of the boundary value problem (2.4.1)-(2.4.2). By virtue of Theorem 2.4.1 we obtain u(x,t) = ũ(x,t), and consequently r(t) = r̃(t). Theorem 2.4.4 is proved. □
2.5. THE GENERALIZED WEYL FUNCTION

Let us consider the differential equation and the linear form L = L(q(x), h):

    ℓy := −y″ + q(x)y = λy,  x > 0,
    U(y) := y′(0) − h y(0).

In this section we study the inverse spectral problem for L in the case when q(x) is a locally integrable complex-valued function and h is a complex number. In this case we introduce the so-called generalized Weyl function as the main spectral characteristic.

For this purpose we define a space of generalized functions (distributions). Let D be the set of all entire functions of exponential type which are integrable and bounded on the real line, with the ordinary operations of addition and multiplication by complex numbers and with the following convergence: z_k(ρ) is said to converge to z(ρ) if the types σ_k of the functions z_k(ρ) are bounded (sup_k σ_k < ∞) and ‖z_k(ρ) − z(ρ)‖_{L(−∞,∞)} → 0 as k → ∞. The linear manifold D with this convergence is our space of test functions.

Definition 2.5.1. All linear continuous functionals

    R : D → ℂ,  z(ρ) ↦ R(z(ρ)) = (z(ρ), R),

are called generalized functions (GF). The set of these GF is denoted by D′. A sequence of GF R_k ∈ D′ converges to R ∈ D′ if lim (z(ρ), R_k) = (z(ρ), R), k → ∞, for any z(ρ) ∈ D. A GF R ∈ D′ is called regular if it is determined by a function R(ρ) ∈ L_∞ via

    (z(ρ), R) = ∫_{−∞}^{∞} z(ρ)R(ρ) dρ.

Definition 2.5.2. Let a function f(t) be locally integrable for t ≥ 0 (i.e. integrable on every finite segment [0,T]). The GF L_f(ρ) ∈ D′ defined by the equality

    (z(ρ), L_f(ρ)) := ∫_0^∞ f(t) ( ∫_{−∞}^{∞} z(ρ) exp(iρt) dρ ) dt,  z(ρ) ∈ D,   (2.5.1)

is called the generalized Fourier-Laplace transform of the function f(t).
Since z(ρ) ∈ D, we have

    ∫_{−∞}^{∞} |z(ρ)|² dρ ≤ sup_{−∞<ρ<∞} |z(ρ)| · ∫_{−∞}^{∞} |z(ρ)| dρ < ∞,

i.e. z(ρ) ∈ L₂(−∞,∞). Therefore, by virtue of the Paley-Wiener theorem [zyg1], the function

    B(t) := (1/2π) ∫_{−∞}^{∞} z(ρ) exp(iρt) dρ

is continuous and has compact support, i.e. there exists a d > 0 such that B(t) = 0 for |t| > d, and

    z(ρ) = ∫_{−d}^{d} B(t) exp(−iρt) dt.   (2.5.2)

Consequently, the integral in (2.5.1) exists. We note that f(t) ∈ L(0,∞) implies

    (z(ρ), L_f(ρ)) = ∫_{−∞}^{∞} z(ρ) ( ∫_0^∞ f(t) exp(iρt) dt ) dρ,

i.e. L_f(ρ) is a regular GF (defined by ∫_0^∞ f(t)exp(iρt) dt) and coincides with the ordinary Fourier-Laplace transform of the function f(t). Since, for t ≥ 0,

    (1/π) ∫_{−∞}^{∞} ( (1 − cos ρx)/ρ² ) exp(iρt) dρ = { x − t, t < x;  0, t > x },

the following inversion formula is valid:

    ∫_0^x (x−t) f(t) dt = ( (1/π)(1 − cos ρx)/ρ², L_f(ρ) ).   (2.5.3)
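As a quick sanity check of (2.5.3) in the regular case, take the assumed test function f(t) = e^{−t} (our own choice, not from the text): L_f is then the regular GF with density 1/(1 − iρ), whose odd imaginary part drops out of the pairing, and the left-hand side equals x − 1 + e^{−x}. The pairing can be evaluated by direct quadrature (the grid parameters are arbitrary):

```python
import numpy as np

# Check  int_0^x (x - t) e^{-t} dt  =  (1/pi) int (1-cos(rho*x))/rho^2 * Re[1/(1-i rho)] drho
x = 2.0
lhs = x - 1.0 + np.exp(-x)                    # closed form of the left-hand side

rho = np.linspace(-200.0, 200.0, 400000)      # even point count: grid avoids rho = 0
integrand = (1.0 - np.cos(rho * x)) / rho**2 / (1.0 + rho**2)   # Re[1/(1-i rho)] = 1/(1+rho^2)
rhs = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rho)) / np.pi)
assert abs(lhs - rhs) < 1e-3
```

The agreement is a consequence of the partial-fraction identity 1/(ρ²(1+ρ²)) = 1/ρ² − 1/(1+ρ²) together with ∫(1−cos ρx)/(1+ρ²) dρ = π(1 − e^{−x}).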
Let now u(x,t) be the solution of (2.4.1)-(2.4.2) with a locally integrable complex-valued function q(x). Define u(x,t) = 0 for 0 < t < x, and denote (with λ = ρ²) Φ(x,λ) := L_u(ρ), i.e.

    (z(ρ), Φ(x,λ)) = ∫_x^∞ u(x,t) ( ∫_{−∞}^{∞} z(ρ) exp(iρt) dρ ) dt.   (2.5.4)

For z(ρ) ∈ D with ρ^ν z(ρ) ∈ L(−∞,∞), ν = 1, 2, we put

    (z(ρ), (iρ)^ν Φ(x,λ)) := ((iρ)^ν z(ρ), Φ(x,λ)),

    (z(ρ), Φ^{(ν)}(x,λ)) := (d^ν/dx^ν) (z(ρ), Φ(x,λ)).
Theorem 2.5.1. The following relations hold:

    ℓΦ(x,λ) = λΦ(x,λ),  U(Φ) = −1.

Proof. We calculate

    (z(ρ), ℓΦ(x,λ)) = (z(ρ), −Φ″(x,λ) + q(x)Φ(x,λ))
    = ∫_{−∞}^{∞} (iρ)z(ρ)exp(iρx) dρ + u_x(x,x) ∫_{−∞}^{∞} z(ρ)exp(iρx) dρ
      + ∫_x^∞ [ −u_xx(x,t) + q(x)u(x,t) ] ( ∫_{−∞}^{∞} z(ρ)exp(iρt) dρ ) dt,

    (z(ρ), λΦ(x,λ)) = ∫_{−∞}^{∞} (iρ)z(ρ)exp(iρx) dρ − u_t(x,x) ∫_{−∞}^{∞} z(ρ)exp(iρx) dρ
      − ∫_x^∞ u_tt(x,t) ( ∫_{−∞}^{∞} z(ρ)exp(iρt) dρ ) dt.

Using u_t(x,x) + u_x(x,x) = (d/dx) u(x,x) ≡ 0, we infer (z(ρ), ℓΦ(x,λ) − λΦ(x,λ)) = 0. Furthermore, since

    (z(ρ), Φ′(x,λ)) = −∫_{−∞}^{∞} z(ρ)exp(iρx) dρ + ∫_x^∞ u_x(x,t) ( ∫_{−∞}^{∞} z(ρ)exp(iρt) dρ ) dt,

we get

    (z(ρ), U(Φ)) = (z(ρ), Φ′(0,λ) − hΦ(0,λ)) = −∫_{−∞}^{∞} z(ρ) dρ.  □
Definition 2.5.3. The GF Φ(x,λ) is called the generalized Weyl solution, and the GF M(λ) := Φ(0,λ) is called the generalized Weyl function (GWF) for L(q(x), h).

Note that if q(x) ∈ L(0,∞), then |u(x,t)| ≤ C₁exp(C₂t), and Φ(x,λ) and M(λ) coincide with the ordinary Weyl solution and Weyl function (see Remark 2.4.4).

The inverse problem considered here is formulated as follows:

Inverse Problem 2.5.1. Given the GWF M(λ), construct the potential q(x) and the coefficient h.

Denote r(t) := u(0,t). It follows from (2.5.4) that

    (z(ρ), M(λ)) = ∫_0^∞ r(t) ( ∫_{−∞}^{∞} z(ρ)exp(iρt) dρ ) dt,

i.e. M(λ) = L_r(ρ). In view of (2.5.3), we get by differentiation

    r(t) = (d²/dt²) ( (1/π)(1 − cos ρt)/ρ², M(λ) ),   (2.5.5)

and Inverse Problem 2.5.1 has been reduced to Inverse Problem 2.4.1 for the trace r considered in Section 2.4. Thus, the following theorems hold.
Theorem 2.5.2. Let M(λ) and M̃(λ) be the GWFs for L = L(q(x), h) and L̃ = L(q̃(x), h̃) respectively. If M(λ) = M̃(λ), then L = L̃. Thus, the specification of the GWF uniquely determines the potential q and the coefficient h.

Theorem 2.5.3. Let φ(x,λ) be the solution of the differential equation ℓφ = λφ under the initial conditions φ(0,λ) = 1, φ′(0,λ) = h. Then the following representation holds:

    φ(x,λ) = cos ρx + ∫_0^x G(x,t) cos ρt dt,

and the function G(x,t) satisfies the integral equation

    G(x,t) + F(x,t) + ∫_0^x G(x,τ)F(t,τ) dτ = 0,  0 < t < x,   (2.5.6)

where

    F(x,t) = (1/2) ( r′(t+x) + r′(t−x) ).

The function r is defined via (2.5.5), and r ∈ D₂; if q ∈ D_N, then r ∈ D_{N+2}. Moreover, for each fixed x > 0, the integral equation (2.5.6) is uniquely solvable.

Theorem 2.5.4. For a generalized function M ∈ D′ to be the GWF for a certain L(q(x), h) with q ∈ D_N, it is necessary and sufficient that
1) r ∈ D_{N+2}, r(0) = 1, where r is defined via (2.5.5);
2) for each x > 0, the integral equation (2.5.6) is uniquely solvable.
The potential q and the coefficient h can be constructed by the following algorithm.

Algorithm 2.5.1. Let the GWF M(λ) be given. Then
(1) Construct the function r(t) by (2.5.5).
(2) Find the function G(x,t) by solving the integral equation (2.5.6).
(3) Calculate q(x) and h by

    q(x) = 2 dG(x,x)/dx,  h = G(0,0).
Let us now prove an expansion theorem for the case of locally integrable complex-valued potentials q.

Theorem 2.5.5. Let f(x) ∈ W₂. Then, uniformly on compact sets,

    f(x) = (1/π) ( φ(x,λ)F(λ)(iρ), M(λ) ),   (2.5.7)

where

    F(λ) = ∫_0^∞ f(t)φ(t,λ) dt.   (2.5.8)

Proof. First we assume that q(x) ∈ L(0,∞). Let f(x) ∈ Q, where Q = { f ∈ W₂ : U(f) = 0, ℓf ∈ L₂(0,∞) } (the general case f ∈ W₂ requires small modifications). Let D₊ = { z(ρ) ∈ D : ρz(ρ) ∈ L₂(−∞,∞) }. Clearly, z(ρ) ∈ D₊ if and only if B(t) ∈ W₂¹[−d,d] in (2.5.2). For z(ρ) ∈ D₊, integration by parts in (2.5.2) yields

    z(ρ) = ∫_{−d}^{d} B(t)exp(−iρt) dt = (1/iρ) ∫_{−d}^{d} B′(t)exp(−iρt) dt.
Using (2.5.8) we calculate

    F(λ) = (1/λ) ∫_0^∞ f(t) ( −φ″(t,λ) + q(t)φ(t,λ) ) dt = (1/λ) ∫_0^∞ φ(t,λ) ℓf(t) dt,

and consequently F(λ)(iρ) ∈ D₊. According to Theorem 2.1.8 we have

    f(x) = (1/2πi) ∫_γ φ(x,λ)F(λ)M(λ) dλ = (1/π) ∫_{γ₁} φ(x,λ)F(λ)(iρ)M(λ) dρ,   (2.5.9)

where the contour γ₁ in the ρ-plane is the image of γ under the mapping λ = ρ².

In view of Remark 2.4.4,

    M(λ) = ∫_0^∞ r(t)exp(iρt) dt,  |r(t)| ≤ C₁exp(C₂t).   (2.5.10)

Take b > C₂. Then, by virtue of Cauchy's theorem, (2.5.9)-(2.5.10) imply

    f(x) = (1/π) ∫_{−∞+ib}^{∞+ib} φ(x,λ)F(λ)(iρ) ( ∫_0^∞ r(t)exp(iρt) dt ) dρ
         = (1/π) ∫_0^∞ r(t) ( ∫_{−∞+ib}^{∞+ib} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt.

Using Cauchy's theorem again we get

    f(x) = (1/π) ∫_0^∞ r(t) ( ∫_{−∞}^{∞} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt = (1/π) ( φ(x,λ)F(λ)(iρ), M(λ) ),

i.e. (2.5.7) is valid.
Let now q(x) be a locally integrable complex-valued function. Denote

    q_R(x) = { q(x), 0 ≤ x ≤ R;  0, x > R }.

Let r_R(t) be the trace for the potential q_R. According to Remark 2.4.2,

    r_R(t) = r(t)  for t ≤ 2R.   (2.5.11)

Since q_R(x) ∈ L(0,∞) we have, by virtue of (2.5.7),

    f(x) = (1/π) ∫_0^∞ r_R(t) ( ∫_{−∞}^{∞} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt.

Let x ∈ [0,T] for a certain T > 0. Then there exists a d > 0 such that

    f(x) = (1/π) ∫_0^d r_R(t) ( ∫_{−∞}^{∞} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt,  0 ≤ x ≤ T.

For sufficiently large R (R > d/2) we have, in view of (2.5.11),

    f(x) = (1/π) ∫_0^d r(t) ( ∫_{−∞}^{∞} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt
         = (1/π) ∫_0^∞ r(t) ( ∫_{−∞}^{∞} φ(x,λ)F(λ)(iρ)exp(iρt) dρ ) dt
         = (1/π) ( φ(x,λ)F(λ)(iρ), M(λ) ),

i.e. (2.5.7) is valid, and the theorem is proved. □
2.6. THE WEYL SEQUENCE

We consider in this section the differential equation and the linear form

    ℓy := −y″ + q(x)y = λy,  x > 0,  U₀(y) := y(0).

Let λ = ρ², Im ρ > 0. Then the Weyl solution Φ₀(x,λ) and the Weyl function M₀(λ) are defined by the conditions

    Φ₀(0,λ) = 1,  Φ₀(x,λ) = O(exp(iρx)),  x → ∞,  M₀(λ) = Φ₀′(0,λ),

and (2.1.81) is valid. Let the function q be analytic at the point x = 0 (in this case we shall write in the sequel q ∈ A). It follows from (2.1.21) and (2.1.81) that for q ∈ A the following relations are valid:

    Φ₀(x,λ) = exp(iρx) ( 1 + Σ_{k=1}^{∞} b_k(x)/(2iρ)^k ),  |ρ| → ∞,   (2.6.1)

    M₀(λ) = iρ + Σ_{k=1}^{∞} M_k/(2iρ)^k,  |ρ| → ∞,   (2.6.2)

in the sense of asymptotic formulae. The sequence m := {M_k}_{k≥1} is called the Weyl sequence for the potential q.
Substituting (2.6.1)-(2.6.2) into the relations

    −Φ₀″(x,λ) + q(x)Φ₀(x,λ) = λΦ₀(x,λ),  Φ₀(0,λ) = 1,  Φ₀′(0,λ) = M₀(λ),

we obtain the following recurrent formulae for calculating b_k(x) and M_k:

    b′_{k+1}(x) = −b″_k(x) + q(x)b_k(x),  k ≥ 0,  b₀(x) := 1,   (2.6.3)

    b_k(0) = 0,  M_k = b′_k(0),  k ≥ 1.   (2.6.4)

Hence

    b_{k+1}(x) = b′_k(0) − b′_k(x) + ∫_0^x q(t)b_k(t) dt,  k ≥ 0.

In particular, this gives

    b₁(x) = ∫_0^x q(t) dt,  b₂(x) = q(0) − q(x) + (1/2) ( ∫_0^x q(t) dt )²,

    b₃(x) = q′(x) − q′(0) − ( q(x) − q(0) ) ∫_0^x q(t) dt − ∫_0^x q²(t) dt + (1/3!) ( ∫_0^x q(t) dt )³, … .
Moreover, (2.6.3)-(2.6.4) imply

    M₁ = q(0),  M₂ = −q′(0),  M₃ = q″(0) − q²(0),  M₄ = −q‴(0) + 4q(0)q′(0),

    M₅ = q⁗(0) − 6q″(0)q(0) − 5(q′(0))² + 2q³(0), … .

We note that the Weyl sequence m = {M_k}_{k≥1} depends only on the Taylor coefficients Q = {q^{(j)}(0)}_{j≥0} of the potential q at the point x = 0, and this dependence is nonlinear.
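The closed forms for M₁, …, M₅ can be checked symbolically against the recurrence (2.6.3)-(2.6.4). In the sketch below (our own verification code; the truncated Taylor polynomial stands in for a generic q ∈ A, with c_j playing the role of q^{(j)}(0)):

```python
import sympy as sp

x, t = sp.symbols('x t')
c = sp.symbols('c0:5')                       # c_j stands in for q^{(j)}(0)
q = sum(c[j] * x**j / sp.factorial(j) for j in range(5))

# Recurrence from (2.6.3)-(2.6.4):  b_{k+1}(x) = b_k'(0) - b_k'(x) + int_0^x q b_k dt
b, M = sp.Integer(1), {}
for k in range(1, 6):
    bp = sp.diff(b, x)
    integral = sp.integrate((q * b).subs(x, t), (t, 0, x))
    b = sp.expand(bp.subs(x, 0) - bp + integral)
    M[k] = sp.diff(b, x).subs(x, 0)          # M_k = b_k'(0)

assert sp.expand(M[1] - c[0]) == 0
assert sp.expand(M[2] + c[1]) == 0                                   # M_2 = -q'(0)
assert sp.expand(M[3] - (c[2] - c[0]**2)) == 0
assert sp.expand(M[4] - (-c[3] + 4*c[0]*c[1])) == 0
assert sp.expand(M[5] - (c[4] - 6*c[2]*c[0] - 5*c[1]**2 + 2*c[0]**3)) == 0
```

Truncating q at degree 4 is harmless here, since M_k for k ≤ 5 involves only the derivatives q^{(j)}(0), j ≤ 4.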
In this section we study the following inverse problem.

Inverse Problem 2.6.1. Given the Weyl sequence m := {M_k}_{k≥1}, construct q.

We prove the uniqueness theorem for Inverse Problem 2.6.1, provide an algorithm for the solution of this inverse problem and give necessary and sufficient conditions for its solvability. We note that the specification of the Weyl sequence allows us to construct q in a neighbourhood of the origin. However, if q(x) is analytic for x ≥ 0, then one can obtain the global solution.

The following theorem gives us necessary and sufficient conditions on the Weyl sequence.

Theorem 2.6.1. 1) If q ∈ A, then there exists σ > 0 such that

    M_k = O( (k/σ)^k ),  k → ∞.   (2.6.5)

2) Let an arbitrary sequence {M_k}_{k≥1}, satisfying (2.6.5) for a certain σ > 0, be given. Then there exists a unique function q ∈ A for which {M_k}_{k≥1} is the corresponding Weyl sequence.

First we prove auxiliary assertions. Denote

    C_k^ν = k!/( ν!(k−ν)! ),  0 ≤ ν ≤ k.
Lemma 2.6.1. The following relation holds:

    C_k^ν ≤ k^k/( ν^ν (k−ν)^{k−ν} ),  0 ≤ ν ≤ k,   (2.6.6)

where we put 0⁰ = 1.

Proof. By virtue of Stirling's formula [Hen1, p. 42],

    √(2πk) (k/e)^k ≤ k! ≤ 1.1 √(2πk) (k/e)^k,  k ≥ 1.   (2.6.7)

For ν = 0 and ν = k, (2.6.6) is obvious. Let 1 ≤ ν ≤ k−1. Then, using (2.6.7) we calculate

    C_k^ν = k!/( ν!(k−ν)! ) ≤ (1.1/√(2π)) √( k/(ν(k−ν)) ) · k^k/( ν^ν (k−ν)^{k−ν} ).

It is easy to check that

    ( (1.1)²/(2π) ) · k/( ν(k−ν) ) < 1,  1 ≤ ν ≤ k−1,

and consequently, (2.6.6) is valid. □
Lemma 2.6.2. The following relations hold:

    ( (k−s)^{k−s}/k^k ) Σ_{j=1}^{k−s} (j+s−2)^{j+s−2}/j^j ≤ 1,  k−1 ≥ s ≥ 1,   (2.6.8)

    Σ_{s=1}^{k−1} ( (k−s)^{k−s}/k^k ) Σ_{j=1}^{k−s} (j+s−2)^{j+s−2}/j^j ≤ 3,  k ≥ 2.   (2.6.9)

Proof. For s = 1 and s = 2, (2.6.8) is obvious since

    (j+s−2)^{j+s−2}/j^j ≤ 1.

Let now s ≥ 3. Since the function

    f(x) = (x+a)^{x+a}/x^x

is monotonically increasing for x > 0, we get

    ( (k−s)^{k−s}/k^k ) Σ_{j=1}^{k−s} (j+s−2)^{j+s−2}/j^j ≤ ( (k−s)^{k−s}/k^k ) · ( (k−2)^{k−2}/(k−s)^{k−s} ) · (k−s) = (k−s)(k−2)^{k−2}/k^k ≤ (k−2)^{k−1}/k^k < 1,

i.e. (2.6.8) is valid. Furthermore,

    Σ_{s=3}^{k−1} ( (k−s)^{k−s}/k^k ) Σ_{j=1}^{k−s} (j+s−2)^{j+s−2}/j^j ≤ ( (k−2)^{k−2}/k^k ) Σ_{s=3}^{k−1} (k−s)
    = ( (k−2)^{k−2}/k^k ) · ( (k−3)(k−2)/2 ) < 1/2,

and consequently (2.6.9) is valid. □
Proof of Theorem 2.6.1. 1) Let q A. We denote q
k
= q
(k)
(0), b
k
= b
()
k
(0).
Dierentiating k times the equality b

+1
(x) = b

(x) + q(x)b

(x) and inserting then


x = 0 we get together with (2.6.4),
b
+1,k+1
= b
,k+2
+
k

j=0
C
j
k
q
kj
b
j
, 0 k,
b
k+1,0
= 0, b
k+1,1
= M
k+1
, b
0,k
=
0k
, k 0,
_

_
(2.6.10)
where
jk
is the Kronecker delta. Since q A, there exist
0
> 0 and C > 0 such that
[q
k
[ C
_
k

0
_
k
.
We denote a = 1 +C
2
0
. Let us show by induction with respect to that
[b
+1,k+1
[ Ca

_
k

0
_
k
, 0 k. (2.6.11)
For = 0 (2.6.11) is obvious by virtue of b
1,k+1
= q
k
. We assume that (2.6.11) holds
for = 1, . . . , s 1. Then, using (2.6.10) and Lemma 2.6.1 we obtain
[b
s+1,ks+1
[ Ca
s1
_
k

0
_
k
+
ks

j=1
(k s)
ks
j
j
(k s j)
ksj
C
_
k s j

0
_
ksj
Ca
s1
_
s +j 2

0
_
s+j2
= Ca
s1
_
k

0
_
k
_
1 + C
2
0
(k s)
ks
k
k
ks

j=1
(s +j 2)
s+j2
j
j
_
.
By virtue of (2.6.8) this yields
[b
s+1,ks+1
[ Ca
s1
_
k

0
_
k
(1 + C
2
0
) = Ca
s
_
k

0
_
k
,
i.e. (2.6.11) holds for = s.
It follows from (2.6.11) for = k that
[M
k+1
[ Ca
k
_
k

0
_
k
, k 0,
and consequently (2.6.5) holds for =
0
/a.
2) Let the sequence M
k

k1
, satisfying (2.6.5) for a certain > 0, be given. Solv-
ing (2.6.10) successively for k = 0, 1, 2, . . . with respect to q
k
and b
+1,k+1

=0,k
, we
calculate
b
kj+1,j+1
= (1)
j
M
k+1
+
j1

=1

=1
(1)
j1
C

b
k,
, 0 j k, (2.6.12)
167
q
k
= (1)
k
M
k+1
+
k1

=1

=1
(1)
k1
C

b
k,
. (2.6.13)
By virtue of (2.6.5) there exist C > 0 and
1
> 0 such that
[M
k+1
[ C
_
k

1
_
k
. (2.6.14)
Let us show by induction with respect to k 0 that
[q
k
[ Ca
k
0
_
k

1
_
k
, (2.6.15)
[b
kj+1,j+1
[ Ca
k
0
_
k

1
_
k
, 0 j k, (2.6.16)
where a
0
= max(2,

6C
1
).
Since q
0
= b
11
= M
1
, (2.6.15)-(2.6.16) are obvious for k = 0. Suppose that (2.6.15)-
(2.6.16) hold for k = 0, . . . , n1. Then, using Lemma 2.6.1 and (2.6.13)-(2.6.14) for k = n,
we obtain
[q
n
[ C
_
n

1
_
n
+
n1

=1

=1

( )

Ca

0
_

1
_

Ca
n+2
0
_
n + 2

1
_
n+2
= C
_
n

1
_
n
a
n
0
_
1
a
n
0
+
C
2
1
a
2
0
n1

=1

n
n

=1
(n + 2)
n+2

_
.
By virtue of (2.6.9),
n1

=1

n
n

=1
(n + 2)
n+2

=
n1

s=1
(n s)
ns
n
n
ns

=1
(s + 2)
s+2

3,
and consequently
[q
n
[ C
_
n

1
_
n
a
n
0
_
1
a
n
0
+
3C
2
1
a
2
0
_
C
_
n

1
_
n
a
n
0
,
i.e. (2.6.15) holds for k = n. Similarly one can obtain that (2.6.16) also holds for k = n.
In particular, it follows from (2.6.15) that for k 0,
q
k
= O
__
k

2
_
k
_
,
2
=

1
a
0
.
We construct the function q A by
q(x) =

k=0
q
k
x
k
k!
.
It is easy to verify that M
k

k1
is the Weyl sequence for this function. Notice that the
uniqueness of the solution of the inverse problem is obvious. Theorem 2.6.1 is proved. 2
Remark 2.6.1. The relations (2.6.12)-(2.6.13) give us an algorithm for the solution of
Inverse Problem 2.6.1. Moreover, it follows from (2.6.12)-(2.6.13) that the specication of
the Weyl sequence m := M
k

k1
uniquely determines q in a neighbourhood of the origin.
168
III. INVERSE SCATTERING ON THE LINE
In this chapter the inverse scattering problem for the Sturm-Liouville operator on the line
is considered. In Section 3.1 we introduce the scattering data and study their properties. In
Section 3.2, using the transformation operator method, we give a derivation of the so-called
main equation and prove its unique solvability. In Section 3.3, using the main equation, we
provide an algorithm for the solution of the inverse scattering problem along with necessary
and sucient conditions for its solvability. In Section 3.4 a class of reectionless potentials,
which is important for applications, is studied, and an explicit formula for constructing such
potentials is given. We note that the inverse scattering problem for the Sturm-Liouville
operator on the line was considered by many authors (see, for example, [mar1], [lev2], [fad1],
[dei1]).
3.1. SCATTERING DATA
3.1.1. Let us consider the dierential equation
y := y

+q(x)y = y, < x < . (3.1.1)


Everywhere below in this chapter we will assume that the function q(x) is real, and that
_

(1 +[x[)[q(x)[ dx < . (3.1.2)


Let =
2
, = +i, and let for deniteness := Im 0. Denote
+
= : Im >
0,
Q
+
0
(x) =
_

x
[q(t)[ dt, Q
+
1
(x) =
_

x
Q
+
0
(t) dt =
_

x
(t x)[q(t)[ dt,
Q

0
(x) =
_
x

[q(t)[ dt, Q

1
(x) =
_
x

0
(t) dt =
_
x

(t x)[q(t)[ dt.
Clearly,
lim
x
Q

j
(x) = 0.
The following theorem introduces the Jost solutions e(x, ) and g(x, ) with prescribed
behavior in .
Theorem 3.1.1. Equation (3.1.1) has unique solutions y = e(x, ) and y = g(x, ),
satisfying the integral equations
e(x, ) = exp(ix) +
_

x
sin (t x)

q(t)e(t, ) dt,
g(x, ) = exp(ix) +
_
x

sin (x t)

q(t)g(t, ) dt.
The functions e(x, ) and g(x, ) have the following properties:
1) For each xed x, the functions e
()
(x, ) and g
()
(x, ) ( = 0, 1) are analytic in

+
and continuous in
+
.
169
2) For = 0, 1,
e
()
(x, ) = (i)

exp(ix)(1 + o(1)), x +,
g
()
(x, ) = (i)

exp(ix)(1 + o(1)), x ,
_
_
_
(3.1.3)
uniformly in
+
. Moreover, for
+
,
[e(x, ) exp(ix)[ exp(Q
+
1
(x)),
[e(x, ) exp(ix) 1[ Q
+
1
(x) exp(Q
+
1
(x)),
[e

(x, ) exp(ix) i[ Q
+
0
(x) exp(Q
+
1
(x)),
_

_
(3.1.4)
[g(x, ) exp(ix)[ exp(Q

1
(x)),
[g(x, ) exp(ix) 1[ Q

1
(x) exp(Q

1
(x)),
[g

(x, ) exp(ix) + i[ Q

0
(x) exp(Q

1
(x)).
_

_
(3.1.5)
3) For each xed
+
and each real , e(x, ) L
2
(, ), g(x, ) L
2
(, ).
Moreover, e(x, ) and g(x, ) are the unique solutions of (3.1.1) (up to a multiplicative
constant) having this property.
4) For [[ ,
+
, = 0, 1,
e
()
(x, ) = (i)

exp(ix)
_
1 +

+
(x)
i
+o
_
1

__
,
+
(x) =
1
2
_

x
q(t) dt,
g
()
(x, ) = (i)

exp(ix)
_
1 +

(x)
i
+o
_
1

__
,

(x) =
1
2
_
x

q(t) dt,
_

_
(3.1.6)
uniformly for x and x respectivrly.
5) For real ,= 0, the functions e(x, ), e(x, ) and g(x, ), g(x, ) form fun-
damental systems of solutions for (3.1.1), and
e(x, ), e(x, )) = g(x, ), g(x, )) 2i, (3.1.7)
where y, z) := yz

z.
6) The functions e(x, ) and g(x, ) have the representations
e(x, ) = exp(ix) +
_

x
A
+
(x, t) exp(it) dt,
g(x, ) = exp(ix) +
_
x

(x, t) exp(it) dt,


_

_
(3.1.8)
where A

(x, t) are real continuous functions, and


A
+
(x, x) =
1
2
_

x
q(t)dt, A

(x, x) =
1
2
_
x

q(t)dt, (3.1.9)
[A

(x, t)[
1
2
Q

0
_
x +t
2
_
exp
_
Q

1
(x) Q

1
_
x +t
2
__
. (3.1.10)
170
The functions A

(x, t) have rst derivatives A

1
:=
A

x
, A

2
:=
A

t
; the functions
A

i
(x, t)
1
4
q
_
x +t
2
_
are absolutely continuous with respect to x and t , and

i
(x, t)
1
4
q
_
x +t
2
_


1
2
Q

0
(x)Q

0
_
x +t
2
_
exp(Q

1
(x)), i = 1, 2. (3.1.11)
For the function e(x, ), Theorem 3.1.1 was proved in Section 2.1 (see Theorems 2.1.1-
2.1.3). For g(x, ) the arguments are the same. Moreover, all assertions of Theorem 3.1.1
for g(x, ) can be obtained from the corresponding assertions for e(x, ) by the replacement
x x.
In the next lemma we describe properties of the Jost solutions e
j
(x, ) and g
j
(x, )
related to the potentials q
j
, which approximate q.
Lemma 3.1.1. If (1 +[x[)[q(x)[ L(a, ), a > , and
lim
j
_

a
(1 +[x[)[q
j
(x) q(x)[ dx = 0, (3.1.12)
then
lim
j
sup

+
sup
xa
[(e
()
j
(x, ) e
()
(x, )) exp(ix)[ = 0, = 0, 1. (3.1.13)
If (1 +[x[)[q(x)[ L(, a), a < , and
lim
j
_
a

(1 +[x[)[q
j
(x) q(x)[ dx = 0,
then
lim
j
sup

+
sup
xa
[(g
()
j
(x, ) g
()
(x, )) exp(ix)[ = 0, = 0, 1. (3.1.14)
Here e
j
(x, ) and g
j
(x, ) are the Jost solutions for the potentials q
j
.
Proof. Denote
z
j
(x, ) = e
j
(x, ) exp(ix), z(x, ) = e(x, ) exp(ix), u
j
(x, ) = [z
j
(x, ) z(x, )[.
Then, it follows from (2.1.8) that
z
j
(x, ) z(x, ) =
1
2i
_

x
(1 exp(2i(t x)))(q(t)z(t, ) q
j
(t)z
j
(t, )) dt.
From this, taking (2.1.29) into account, we infer
u
j
(x, )
_

x
(t x)[(q(t) q
j
(t))z(t, )[ dt +
_

x
(t x)[q
j
(t)[u
j
(t, ) dt.
According to (3.1.4),
[z(x, )[ exp(Q
+
1
(x)) exp(Q
+
1
(a)), x a, (3.1.15)
171
and consequently
u
j
(x, ) exp(Q
+
1
(a))
_

a
(t a)[q(t) q
j
(t)[ dt +
_

x
(t x)[q
j
(t)[u
j
(t, ) dt.
By virtue of Lemma 2.1.2 this yields
u
j
(x, ) exp(Q
+
1
(a))
_

a
(t a)[q(t) q
j
(t)[ dt exp
_
_

x
(t x)[q
j
(t)[ dt
_
exp(Q
+
1
(a))
_

a
(t a)[q(t) q
j
(t)[ dt exp
_
_

a
(t a)[q
j
(t)[ dt
_
.
Hence
u
j
(x, ) C
a
_

a
(t a)[q(t) q
j
(t)[ dt. (3.1.16)
In particular, (3.1.16) and (3.1.12) imply
lim
j
sup

+
sup
xa
u
j
(x, ) = 0,
and we arrive at (3.1.13) for = 0.
Denote
v
j
(x, ) = [(e

j
(x, ) e

(x, )) exp(ix)[.
It follows from (2.1.20) that
v
j
(x, )
_

x
[q(t)z(t, ) q
j
(t)z
j
(t, )[ dt,
and consequently,
v
j
(x, )
_

a
[(q(t) q
j
(t))z(t, )[ dt +
_

a
[q
j
(t)[u
j
(t, ) dt. (3.1.17)
By virtue of (3.1.15)-(3.1.17) we obtain
v
j
(x, ) C
a
_
_

a
[q(t) q
j
(t)[ dt +
_

a
(t a)[q(t) q
j
(t)[ dt
_

a
[q
j
(t)[ dt
_
.
Together with (3.1.12) this yields
lim
j
sup

+
sup
xa
v
j
(x, ) = 0,
and we arrive at (3.1.13) for = 1. The relations (3.1.14) is proved analogously. 2
3.1.2. For real ,= 0, the functions e(x, ), e(x, ) and g(x, ), g(x, ) form
fundamental systems of solutions for (3.1.1). Therefore, we have for real ,= 0 :
e(x, ) = a()g(x, ) + b()g(x, ), g(x, ) = c()e(x, ) + d()e(x, ). (3.1.18)
Let us study the properties of the coecients a(), b(), c() and d().
Lemma 3.1.2. For real ,= 0, the following relations hold
c() = b(), d() = a(), (3.1.19)
172
a() = a(), b() = b(), (3.1.20)
[a()[
2
= 1 +[b()[
2
, (3.1.21)
a() =
1
2i
e(x, ), g(x, )), b() =
1
2i
e(x, ), g(x, )). (3.1.22)
Proof. Since e(x, ) = e(x, ), g(x, ) = g(x, ), then (3.1.20) follows from (3.1.18).
Using (3.1.18) we also calculate
e(x, ), g(x, )) = a()g(x, ) + b()g(x, ), g(x, )) = 2ia(),
e(x, ), g(x, )) = a()g(x, ) + b()g(x, ), g(x, )) = 2ib(),
e(x, ), g(x, )) = e(x, ), c()e(x, ) + d()e(x, )) = 2id(),
e(x, ), g(x, )) = e(x, ), c()e(x, ) +d()e(x, )) = 2ic(),
i.e. (3.1.19) and (3.1.22) are valid. Furthermore,
2i = e(x, ), e(x, )) = a()g(x, ) +b()g(x, ), a()g(x, ) + b()g(x, )) =
a()a()g(x, ), g(x, )) +b()b()g(x, ), g(x, )) = 2i
_
[a()[
2
[b()[
2
_
,
and we arrive at (3.1.21). 2
We note that (3.1.22) gives the analytic continuation for a() to
+
. Hence, the function
a() is analytic in
+
, and a() is continuous in
+
. The function b() is continuous
for real . Moreover, it follows from (3.1.22) and (3.1.6) that
a() = 1
1
2i
_

q(t) dt +o
_
1

_
, b() = o
_
1

_
, [[ (3.1.23)
(in the domains of denition), and consequently the function (a()1) is bounded in
+
.
Using (3.1.22) and (3.1.8) one can calculate more precisely
a() = 1
1
2i
_

q(t) dt +
1
2i
_

0
A(t) exp(it) dt,
b() =
1
2i
_

B(t) exp(it) dt,


_

_
(3.1.24)
where A(t) L(0, ) and B(t) L(, ) are real functions.
Indeed,
2ia() = g(0, )e

(0, ) e(0, )g

(0, )
=
_
1 +
_
0

(0, t) exp(it) dt
__
i A
+
(0, 0) +
_

0
A
+
1
(0, t) exp(it) dt
_
+
_
1 +
_

0
A
+
(0, t) exp(it) dt
__
i A

(0, 0)
_
0

1
(0, t) exp(it) dt
_
.
Integration by parts yields
i
_
0

(0, t) exp(it) dt = A

(0, 0) +
_
0

2
(0, t) exp(it) dt,
173
i
_

0
A
+
(0, t) exp(it) dt = A
+
(0, 0)
_

0
A
+
2
(0, t) exp(it) dt.
Furthermore,
_
0

(0, t) exp(it) dt
_

0
A
+
1
(0, s) exp(is) ds
=
_
0

(0, t)
_
_

t
A
+
1
(0, +t) exp(i) d
_
dt
=
_

0
_
_
0

(0, t)A
+
1
(0, +t) dt
_
exp(i) d.
Analogously,
_
0

1
(0, t) exp(it) dt
_

0
A
+
(0, s) exp(is) ds
=
_

0
_
_
0

1
(0, t)A
+
(0, +t) dt
_
exp(i) d.
Since
2(A
+
(0, 0) + A

(0, 0)) =
_

q(t) dt,
we arrive at (3.1.24) for a(), where
A(t) = A
+
1
(0, t) A

1
(0, t) + A

2
(0, t) A
+
2
(0, t) A
+
(0, 0)A

(0, t) A

(0, 0)A
+
(0, t)
+
_
0
t
A

(0, )A
+
1
(0, +t) d
_
0
t
A

1
(0, )A
+
(0, +t) d.
It follows from (3.1.10)-(3.1.11) that A(t) L(0, ). For the function b() the arguments
are similar.
Denote
e
0
(x, ) =
e(x, )
a()
, g
0
(x, ) =
g(x, )
a()
, (3.1.25)
s
+
() =
b()
a()
, s

() =
b()
a()
. (3.1.26)
The functions s
+
() and s

() are called the reection coecients (right and left, respec-


tively). It follows from (3.1.18), (3.1.25) and (3.1.26) that
e
0
(x, ) = g(x, ) +s

()g(x, ), g
0
(x, ) = e(x, ) + s
+
()e(x, ). (3.1.27)
Using (3.1.25), (3.1.27) and (3.1.3) we get
e
0
(x, ) exp(ix) + s

() exp(ix) (x ), e
0
(x, ) t() exp(ix) (x ),
g
0
(x, ) t() exp(ix) (x ), g
0
(x, ) exp(ix) + s
+
() exp(ix) (x ),
where t() = (a())
1
is called the transmission coecient.
We point out the main properties of the functions s

(). By virtue of (3.1.20)-(3.1.22)


and (3.1.26), the functions s

() are continuous for real ,= 0, and


s

() = s

().
174
Moreover, (3.1.21) implies
[s

()[
2
= 1
1
[a()[
2
,
and consequently,
[s

()[ < 1 for real ,= 0.


Furthermore, according to (3.1.23) and (3.1.26),
s

() = o
_
1

_
as [[ .
Denote by R

(x) the Fourier transform for s

() :
R

(x) :=
1
2
_

() exp(ix) d. (3.1.28)
Then R

(x) L
2
(, ) are real, and
s

() =
_

(x) exp(ix) dx. (3.1.29)


It follows from (3.1.25) and (3.1.27) that
e(x, ) = a()
_
(s

() + 1)g(x, ) + g(x, ) g(x, )


_
,
g(x, ) = a()
_
(s
+
() + 1)e(x, ) + e(x, ) e(x, )
_
,
and consequently,
lim
0
a()(s

() + 1) = 0.
3.1.3. Let us now study the properties of the discrete spectrum.
Denition 3.1.1. The values of the parameter , for which equation (3.1.1) has
nonzero solutions y(x) L
2
(, ), are called eigenvalues of (3.1.1), and the correspond-
ing solutions are called eigenfunctions.
The properties of the eigenvalues are similar to the properties of the discrete spectrum
of the Sturm-Liouville operator on the half-line (see Chapter 2).
Theorem 3.1.2. There are no eigenvalues for 0.
Proof. Repeat the arguments in the proof of Theorems 2.1.6 and 2.3.6. 2
Let
+
:= , =
2
,
+
: a() = 0 be the set of zeros of a() in the upper
half-plane
+
. Since the function a() is analytic in
+
and, by virtue of (3.1.23),
a() = 1 + O
_
1

_
, [[ , Im 0,
we get that
+
is an at most countable bounded set.
Theorem 3.1.3. The set of eigenvalues coincides with
+
. The eigenvalues
k
are
real and negative (i.e.
+
(, 0) ). For each eigenvalue
k
=
2
k
, there exists only one
(up to a multiplicative constant) eigenfunction, namely
g(x,
k
) = d
k
e(x,
k
), d
k
,= 0. (3.1.30)
175
The eigenfunctions e(x,
k
) and g(x,
k
) are real. Eigenfunctions related to dierent eigen-
values are orthogonal in L
2
(, ).
Proof. Let $\lambda_k=\rho_k^2\in\Lambda^+$. By virtue of (3.1.22),
$$\langle e(x,\rho_k),g(x,\rho_k)\rangle=0, \eqno(3.1.31)$$
i.e. (3.1.30) is valid. According to Theorem 3.1.1, $e(x,\rho_k)\in L_2(0,\infty)$ and $g(x,\rho_k)\in L_2(-\infty,0)$. Therefore, (3.1.30) implies
$$e(x,\rho_k),\ g(x,\rho_k)\in L_2(-\infty,\infty).$$
Thus, $e(x,\rho_k)$ and $g(x,\rho_k)$ are eigenfunctions, and $\lambda_k=\rho_k^2$ is an eigenvalue.

Conversely, let $\lambda_k=\rho_k^2$, $\rho_k\in\Pi^+$, be an eigenvalue, and let $y_k(x)$ be a corresponding eigenfunction. Since $y_k(x)\in L_2(-\infty,\infty)$, we have
$$y_k(x)=c_{k1}\,e(x,\rho_k),\qquad y_k(x)=c_{k2}\,g(x,\rho_k),\qquad c_{k1},c_{k2}\ne 0,$$
and consequently (3.1.31) holds. Using (3.1.22) we obtain $a(\rho_k)=0$, i.e. $\lambda_k\in\Lambda^+$.

Let $\lambda_n$ and $\lambda_k$ $(\lambda_n\ne\lambda_k)$ be eigenvalues with eigenfunctions $y_n(x)=e(x,\rho_n)$ and $y_k(x)=e(x,\rho_k)$ respectively. Then integration by parts yields
$$\int_{-\infty}^{\infty}y_n''(x)\,y_k(x)\,dx=\int_{-\infty}^{\infty}y_n(x)\,y_k''(x)\,dx,$$
and hence
$$\lambda_n\int_{-\infty}^{\infty}y_n(x)y_k(x)\,dx=\lambda_k\int_{-\infty}^{\infty}y_n(x)y_k(x)\,dx$$
or
$$\int_{-\infty}^{\infty}y_n(x)y_k(x)\,dx=0.$$
Furthermore, let $\lambda_0=u+iv$, $v\ne 0$, be a non-real eigenvalue with an eigenfunction $y_0(x)\not\equiv 0$. Since $q(x)$ is real, we get that $\overline{\lambda_0}=u-iv$ is also an eigenvalue with the eigenfunction $\overline{y_0(x)}$. Since $\lambda_0\ne\overline{\lambda_0}$, we derive as before
$$\|y_0\|_{L_2}^2=\int_{-\infty}^{\infty}y_0(x)\,\overline{y_0(x)}\,dx=0,$$
which is impossible. Thus, all eigenvalues $\lambda_k$ are real, and consequently the eigenfunctions $e(x,\rho_k)$ and $g(x,\rho_k)$ are real too. Together with Theorem 3.1.2 this yields $\Lambda^+\subset(-\infty,0)$. Theorem 3.1.3 is proved. $\Box$
For $\lambda_k=\rho_k^2\in\Lambda^+$ we denote
$$\alpha_k^+=\Big(\int_{-\infty}^{\infty}e^2(x,\rho_k)\,dx\Big)^{-1},\qquad \alpha_k^-=\Big(\int_{-\infty}^{\infty}g^2(x,\rho_k)\,dx\Big)^{-1}.$$
Theorem 3.1.4. $\Lambda^+$ is a finite set, i.e. in $\Pi^+$ the function $a(\rho)$ has at most a finite number of zeros. All zeros of $a(\rho)$ in $\Pi^+$ are simple, i.e. $a_1(\rho_k)\ne 0$, where $a_1(\rho):=\frac{d}{d\rho}a(\rho)$. Moreover,
$$\alpha_k^+=\frac{d_k}{i\,a_1(\rho_k)},\qquad \alpha_k^-=\frac{1}{i\,d_k\,a_1(\rho_k)}, \eqno(3.1.32)$$
where the numbers $d_k$ are defined by (3.1.30).
Proof. 1) Let us show that
$$2\rho\int_x^A e(t,\rho)g(t,\rho)\,dt=\langle\dot e(t,\rho),g(t,\rho)\rangle\Big|_x^A,\qquad 2\rho\int_A^x e(t,\rho)g(t,\rho)\,dt=-\langle e(t,\rho),\dot g(t,\rho)\rangle\Big|_A^x, \eqno(3.1.33)$$
where in this subsection
$$\dot e(t,\rho):=\frac{d}{d\rho}e(t,\rho),\qquad \dot g(t,\rho):=\frac{d}{d\rho}g(t,\rho).$$
Indeed,
$$\frac{d}{dx}\langle\dot e(x,\rho),g(x,\rho)\rangle=\dot e(x,\rho)g''(x,\rho)-\dot e''(x,\rho)g(x,\rho).$$
Since
$$-g''(x,\rho)+q(x)g(x,\rho)=\rho^2 g(x,\rho),\qquad -\dot e''(x,\rho)+q(x)\dot e(x,\rho)=\rho^2\dot e(x,\rho)+2\rho\,e(x,\rho),$$
we get
$$\frac{d}{dx}\langle\dot e(x,\rho),g(x,\rho)\rangle=2\rho\,e(x,\rho)g(x,\rho).$$
Similarly,
$$\frac{d}{dx}\langle e(x,\rho),\dot g(x,\rho)\rangle=-2\rho\,e(x,\rho)g(x,\rho),$$
and we arrive at (3.1.33).

It follows from (3.1.33) that
$$2\rho\int_{-A}^{A}e(t,\rho)g(t,\rho)\,dt=-\langle\dot e(x,\rho),g(x,\rho)\rangle-\langle e(x,\rho),\dot g(x,\rho)\rangle+\langle\dot e(x,\rho),g(x,\rho)\rangle\Big|_{x=A}+\langle e(x,\rho),\dot g(x,\rho)\rangle\Big|_{x=-A}.$$
On the other hand, differentiating (3.1.22) with respect to $\rho$, we obtain
$$-2i\rho\,a_1(\rho)-2i\,a(\rho)=\langle\dot e(x,\rho),g(x,\rho)\rangle+\langle e(x,\rho),\dot g(x,\rho)\rangle.$$
For $\rho=\rho_k$ this yields with the preceding formula
$$i\,a_1(\rho_k)=\int_{-A}^{A}e(t,\rho_k)g(t,\rho_k)\,dt+\varepsilon_k(A), \eqno(3.1.34)$$
where
$$\varepsilon_k(A)=-\frac{1}{2\rho_k}\Big(\langle\dot e(x,\rho_k),g(x,\rho_k)\rangle\Big|_{x=A}+\langle e(x,\rho_k),\dot g(x,\rho_k)\rangle\Big|_{x=-A}\Big).$$
Since $\rho_k=i\tau_k$, $\tau_k>0$, we have by virtue of (3.1.4),
$$e(x,\rho_k),\ e'(x,\rho_k)=O(\exp(-\tau_k x)),\qquad x\to+\infty.$$
According to (3.1.8),
$$\dot e(x,\rho_k)=ix\exp(-\tau_k x)+\int_x^{\infty}itA^+(x,t)\exp(-\tau_k t)\,dt,$$
$$\dot e'(x,\rho_k)=i\exp(-\tau_k x)-ix\tau_k\exp(-\tau_k x)-ixA^+(x,x)\exp(-\tau_k x)+\int_x^{\infty}itA_1^+(x,t)\exp(-\tau_k t)\,dt.$$
Hence
$$\dot e(x,\rho_k),\ \dot e'(x,\rho_k)=O(1),\qquad x\to+\infty.$$
From this, using (3.1.30), we calculate
$$\langle\dot e(x,\rho_k),g(x,\rho_k)\rangle=d_k\,\langle\dot e(x,\rho_k),e(x,\rho_k)\rangle=o(1)\ \mbox{as}\ x\to+\infty,$$
$$\langle e(x,\rho_k),\dot g(x,\rho_k)\rangle=\frac{1}{d_k}\,\langle g(x,\rho_k),\dot g(x,\rho_k)\rangle=o(1)\ \mbox{as}\ x\to-\infty.$$
Consequently,
$$\lim_{A\to+\infty}\varepsilon_k(A)=0.$$
Then (3.1.34) implies
$$i\,a_1(\rho_k)=\int_{-\infty}^{\infty}e(t,\rho_k)g(t,\rho_k)\,dt.$$
Using (3.1.30) again we obtain
$$i\,a_1(\rho_k)=d_k\int_{-\infty}^{\infty}e^2(t,\rho_k)\,dt=\frac{1}{d_k}\int_{-\infty}^{\infty}g^2(t,\rho_k)\,dt.$$
Hence $a_1(\rho_k)\ne 0$, and (3.1.32) is valid.
2) Suppose that $\Lambda^+=\{\lambda_k\}$ is an infinite set. Since $\Lambda^+$ is bounded and $\lambda_k=\rho_k^2<0$, it follows that $\rho_k=i\tau_k\to 0$, $\tau_k>0$ (the zeros of the function $a(\rho)$, which is analytic in $\Pi^+$, cannot accumulate at an interior point of $\Pi^+$ ). By virtue of (3.1.4)-(3.1.5), there exists $A>0$ such that
$$e(x,i\tau)\ge\frac{1}{2}\exp(-\tau x)\ \mbox{for}\ x\ge A,\ \tau\ge 0,\qquad g(x,i\tau)\ge\frac{1}{2}\exp(\tau x)\ \mbox{for}\ x\le-A,\ \tau\ge 0, \eqno(3.1.35)$$
and consequently
$$\int_A^{\infty}e(x,\rho_k)e(x,\rho_n)\,dx\ge\frac{\exp(-(\tau_k+\tau_n)A)}{4(\tau_k+\tau_n)}\ge\frac{\exp(-2AT)}{8T},$$
$$\int_{-\infty}^{-A}g(x,\rho_k)g(x,\rho_n)\,dx\ge\frac{\exp(-(\tau_k+\tau_n)A)}{4(\tau_k+\tau_n)}\ge\frac{\exp(-2AT)}{8T}, \eqno(3.1.36)$$
where $T=\max_k\tau_k$. Since the eigenfunctions $e(x,\rho_k)$ and $e(x,\rho_n)$ are orthogonal in $L_2(-\infty,\infty)$ we get
$$0=\int_{-\infty}^{\infty}e(x,\rho_k)e(x,\rho_n)\,dx=\int_A^{\infty}e(x,\rho_k)e(x,\rho_n)\,dx+\frac{1}{d_k d_n}\int_{-\infty}^{-A}g(x,\rho_k)g(x,\rho_n)\,dx$$
$$+\int_{-A}^{A}e^2(x,\rho_k)\,dx+\int_{-A}^{A}e(x,\rho_k)\big(e(x,\rho_n)-e(x,\rho_k)\big)\,dx. \eqno(3.1.37)$$
Take $x_0\le-A$ such that $e(x_0,0)\ne 0$. According to (3.1.30),
$$\frac{1}{d_k d_n}=\frac{e(x_0,\rho_k)\,e(x_0,\rho_n)}{g(x_0,\rho_k)\,g(x_0,\rho_n)}.$$
Since the functions $e(x,\rho)$ and $g(x,\rho)$ are continuous for $\mathop{\rm Im}\rho\ge 0$, we calculate with the help of (3.1.35),
$$\lim_{k,n\to\infty}g(x_0,\rho_k)g(x_0,\rho_n)=g^2(x_0,0)>0,\qquad \lim_{k,n\to\infty}e(x_0,\rho_k)e(x_0,\rho_n)=e^2(x_0,0)>0.$$
Therefore,
$$\lim_{k,n\to\infty}\frac{1}{d_k d_n}>0.$$
Together with (3.1.36) this yields
$$\int_A^{\infty}e(x,\rho_k)e(x,\rho_n)\,dx+\frac{1}{d_k d_n}\int_{-\infty}^{-A}g(x,\rho_k)g(x,\rho_n)\,dx+\int_{-A}^{A}e^2(x,\rho_k)\,dx\ge C>0 \eqno(3.1.38)$$
for sufficiently large $k$ and $n$. On the other hand, acting in the same way as in the proof of Theorem 2.3.4 one can easily verify that
$$\int_{-A}^{A}e(x,\rho_k)\big(e(x,\rho_n)-e(x,\rho_k)\big)\,dx\to 0\quad\mbox{as}\ k,n\to\infty. \eqno(3.1.39)$$
The relations (3.1.37)-(3.1.39) give us a contradiction. This means that $\Lambda^+$ is a finite set. $\Box$
Thus, the set of eigenvalues has the form
$$\Lambda^+=\{\lambda_k\}_{k=\overline{1,N}},\qquad \lambda_k=\rho_k^2,\quad \rho_k=i\tau_k,\quad 0<\tau_1<\ldots<\tau_N.$$

Definition 3.1.2. The data $J^+=\{s^+(\rho),\lambda_k,\alpha_k^+;\ \rho\in{\bf R},\ k=\overline{1,N}\}$ are called the right scattering data, and the data $J^-=\{s^-(\rho),\lambda_k,\alpha_k^-;\ \rho\in{\bf R},\ k=\overline{1,N}\}$ are called the left scattering data.

Example 3.1.1. Let $q(x)\equiv 0$. Then
$$e(x,\rho)=\exp(i\rho x),\quad g(x,\rho)=\exp(-i\rho x),\quad a(\rho)\equiv 1,\quad b(\rho)\equiv 0,\quad s^{\pm}(\rho)\equiv 0,\quad N=0,$$
i.e. there are no eigenvalues at all.
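Before passing to the function $\Phi(\rho)$ below, the basic identity $|a(\rho)|^2=1+|b(\rho)|^2$ on the real axis (which underlies (3.1.21) and the relation $1/|a(\rho)|^2=1-|s^{\pm}(\rho)|^2$ ) can be illustrated numerically. The following sketch is not from the book: the Gaussian potential, the value $\rho=1.3$, and all grid parameters are assumptions chosen for the experiment.

```python
import numpy as np

# Sketch (assumed sample data): compute the Jost solutions
# e(x,rho) ~ exp(i*rho*x) (x -> +inf) and g(x,rho) ~ exp(-i*rho*x) (x -> -inf)
# for q(x) = 1.5*exp(-x^2) by RK4 integration of y'' = (q(x) - rho^2) y,
# then form a(rho), b(rho) from Wronskians and check |a|^2 - |b|^2 = 1.
rho, L, n = 1.3, 8.0, 8000        # q(8) ~ e^{-64}, so [-L, L] is effectively all of R
h = 2 * L / n
q = lambda x: 1.5 * np.exp(-x * x)

def rk4(y, yp, x0, step, m):
    # integrate the system (y, y')' = (y', (q - rho^2) y); complex arithmetic
    f = lambda x, u, v: (v, (q(x) - rho**2) * u)
    x = x0
    for _ in range(m):
        k1 = f(x, y, yp)
        k2 = f(x + step/2, y + step/2*k1[0], yp + step/2*k1[1])
        k3 = f(x + step/2, y + step/2*k2[0], yp + step/2*k2[1])
        k4 = f(x + step, y + step*k3[0], yp + step*k3[1])
        y  += step/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += step/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x  += step
    return y, yp

# e: start from the asymptotics at x = +L and integrate down to x = -L
eL, epL = rk4(np.exp(1j*rho*L), 1j*rho*np.exp(1j*rho*L), L, -h, n)
# g and g' are known exactly at x = -L (the potential vanishes there)
g0, gp0 = np.exp(1j*rho*L), -1j*rho*np.exp(1j*rho*L)

# with <y,z> = y z' - y' z: a = <g,e>/(2 i rho), b = -<g, conj(e)>/(2 i rho)
a = (g0*epL - gp0*eL) / (2j*rho)
b = -(g0*np.conj(epL) - gp0*np.conj(eL)) / (2j*rho)
unitarity = abs(a)**2 - abs(b)**2
print(abs(a), unitarity)          # |a| > 1, and |a|^2 - |b|^2 = 1 up to the scheme error
```

In particular $|a(\rho)|\ge 1$ for real $\rho$, which is exactly the bound used for $\Phi(\rho)$ in Lemma 3.1.3 below.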
3.1.4. In this subsection we study connections between the scattering data $J^+$ and $J^-$. Consider the function
$$\Phi(\rho)=\frac{1}{a(\rho)}\prod_{k=1}^{N}\frac{\rho-i\tau_k}{\rho+i\tau_k}. \eqno(3.1.40)$$

Lemma 3.1.3. $(i_1)$ The function $\Phi(\rho)$ is analytic in $\Pi^+$ and continuous in $\overline{\Pi^+}\setminus\{0\}$.
$(i_2)$ $\Phi(\rho)$ has no zeros in $\overline{\Pi^+}\setminus\{0\}$.
$(i_3)$ For $|\rho|\to\infty$, $\rho\in\overline{\Pi^+}$,
$$\Phi(\rho)=1+O\Big(\frac{1}{\rho}\Big). \eqno(3.1.41)$$
$(i_4)$ $|\Phi(\rho)|\le 1$ for $\rho\in\Pi^+$.
Proof. The assertions $(i_1)-(i_3)$ are obvious consequences of the preceding discussion, and only $(i_4)$ needs to be proved. By virtue of (3.1.21), $|a(\rho)|\ge 1$ for real $\rho\ne 0$, and consequently,
$$|\Phi(\rho)|\le 1\quad\mbox{for real}\ \rho\ne 0. \eqno(3.1.42)$$
Suppose that the function $a(\rho)$ is analytic at the origin. Then, using (3.1.40) and (3.1.42) we deduce that the function $\Phi(\rho)$ has a removable singularity at the origin, and $\Phi(\rho)$ (after extending it continuously to the origin) is continuous in $\overline{\Pi^+}$. Using (3.1.41), (3.1.42) and the maximum principle we arrive at $(i_4)$.

In the general case we cannot use these arguments for $\Phi(\rho)$. Therefore, we introduce the potentials
$$q_r(x)=\left\{\begin{array}{ll} q(x), & |x|\le r,\\ 0, & |x|>r,\end{array}\right.\qquad r\ge 0,$$
and consider the corresponding Jost solutions $e_r(x,\rho)$ and $g_r(x,\rho)$. Clearly, $e_r(x,\rho)=\exp(i\rho x)$ for $x\ge r$ and $g_r(x,\rho)=\exp(-i\rho x)$ for $x\le-r$. For each fixed $x$, the functions $e_r^{(\nu)}(x,\rho)$ and $g_r^{(\nu)}(x,\rho)$ $(\nu=0,1)$ are entire in $\rho$. Take
$$a_r(\rho)=-\frac{1}{2i\rho}\,\langle e_r(x,\rho),g_r(x,\rho)\rangle,\qquad \Phi_r(\rho)=\frac{1}{a_r(\rho)}\prod_{k=1}^{N_r}\frac{\rho-i\tau_{kr}}{\rho+i\tau_{kr}},$$
where $\rho_{kr}=i\tau_{kr}$, $k=\overline{1,N_r}$, are the zeros of $a_r(\rho)$ in the upper half-plane $\Pi^+$. The function $a_r(\rho)$ is entire in $\rho$, and (see Lemma 3.1.2) $|a_r(\rho)|\ge 1$ for real $\rho$. The function $\Phi_r(\rho)$ is analytic in $\Pi^+$, and
$$|\Phi_r(\rho)|\le 1\quad\mbox{for}\ \rho\in\Pi^+. \eqno(3.1.43)$$
By virtue of Lemma 3.1.1,
$$\lim_{r\to\infty}\sup_{\rho\in\overline{\Pi^+}}\sup_{x\ge a}\big|\big(e_r^{(\nu)}(x,\rho)-e^{(\nu)}(x,\rho)\big)\exp(-i\rho x)\big|=0,$$
$$\lim_{r\to\infty}\sup_{\rho\in\overline{\Pi^+}}\sup_{x\le a}\big|\big(g_r^{(\nu)}(x,\rho)-g^{(\nu)}(x,\rho)\big)\exp(i\rho x)\big|=0$$
for $\nu=0,1$ and each real $a$. Therefore
$$\lim_{r\to\infty}\sup_{\rho\in\overline{\Pi^+}}|a_r(\rho)-a(\rho)|=0,$$
i.e.
$$\lim_{r\to\infty}a_r(\rho)=a(\rho)\quad\mbox{uniformly in}\ \overline{\Pi^+}. \eqno(3.1.44)$$
In particular, (3.1.44) yields that $0<\tau_{kr}\le C$ for all $k$ and $r$.

Let $\delta_r$ be the infimum of the distances between the zeros $\rho_{kr}$ of $a_r(\rho)$ in the upper half-plane $\mathop{\rm Im}\rho>0$. Let us show that
$$\delta:=\inf_{r>0}\delta_r>0. \eqno(3.1.45)$$
Indeed, suppose on the contrary that there exists a sequence $r_k\to\infty$ such that $\delta_{r_k}\to 0$. Let $\rho_k^{(1)}=i\tau_k^{(1)}$, $\rho_k^{(2)}=i\tau_k^{(2)}$ $(\tau_k^{(1)},\tau_k^{(2)}>0)$ be zeros of $a_{r_k}(\rho)$ such that $\tau_k^{(1)}-\tau_k^{(2)}\to 0$ as $k\to\infty$. It follows from (3.1.4)-(3.1.5) that there exists $A>0$ such that
$$e_r(x,i\tau)\ge\frac{1}{2}\exp(-\tau x)\ \mbox{for}\ x\ge A,\ \tau\ge 0,\ r\ge 0,\qquad g_r(x,i\tau)\ge\frac{1}{2}\exp(\tau x)\ \mbox{for}\ x\le-A,\ \tau\ge 0,\ r\ge 0. \eqno(3.1.46)$$
Since the functions $e_{r_k}(x,\rho_k^{(1)})$ and $e_{r_k}(x,\rho_k^{(2)})$ are orthogonal in $L_2(-\infty,\infty)$, we get
$$0=\int_{-\infty}^{\infty}e_{r_k}(x,\rho_k^{(1)})\,e_{r_k}(x,\rho_k^{(2)})\,dx=\int_A^{\infty}e_{r_k}(x,\rho_k^{(1)})\,e_{r_k}(x,\rho_k^{(2)})\,dx+\frac{1}{d_k^{(1)}d_k^{(2)}}\int_{-\infty}^{-A}g_{r_k}(x,\rho_k^{(1)})\,g_{r_k}(x,\rho_k^{(2)})\,dx$$
$$+\int_{-A}^{A}e_{r_k}^2(x,\rho_k^{(1)})\,dx+\int_{-A}^{A}e_{r_k}(x,\rho_k^{(1)})\big(e_{r_k}(x,\rho_k^{(2)})-e_{r_k}(x,\rho_k^{(1)})\big)\,dx, \eqno(3.1.47)$$
where the numbers $d_k^{(j)}$ are defined by
$$g_{r_k}(x,\rho_k^{(j)})=d_k^{(j)}\,e_{r_k}(x,\rho_k^{(j)}),\qquad d_k^{(j)}\ne 0.$$
Take $x_0\le-A$. Then, by virtue of (3.1.46),
$$g_{r_k}(x_0,\rho_k^{(1)})\,g_{r_k}(x_0,\rho_k^{(2)})\ge C>0,$$
and
$$\frac{1}{d_k^{(1)}d_k^{(2)}}=\frac{e_{r_k}(x_0,\rho_k^{(1)})\,e_{r_k}(x_0,\rho_k^{(2)})}{g_{r_k}(x_0,\rho_k^{(1)})\,g_{r_k}(x_0,\rho_k^{(2)})}.$$
Using Lemma 3.1.1 we get
$$\varliminf_{k\to\infty}e_{r_k}(x_0,\rho_k^{(1)})\,e_{r_k}(x_0,\rho_k^{(2)})\ge 0;$$
hence
$$\varliminf_{k\to\infty}\frac{1}{d_k^{(1)}d_k^{(2)}}\ge 0.$$
Then
$$\int_A^{\infty}e_{r_k}(x,\rho_k^{(1)})\,e_{r_k}(x,\rho_k^{(2)})\,dx+\frac{1}{d_k^{(1)}d_k^{(2)}}\int_{-\infty}^{-A}g_{r_k}(x,\rho_k^{(1)})\,g_{r_k}(x,\rho_k^{(2)})\,dx+\int_{-A}^{A}e_{r_k}^2(x,\rho_k^{(1)})\,dx\ge C>0$$
for sufficiently large $k$. On the other hand,
$$\int_{-A}^{A}e_{r_k}(x,\rho_k^{(1)})\big(e_{r_k}(x,\rho_k^{(2)})-e_{r_k}(x,\rho_k^{(1)})\big)\,dx\to 0\quad\mbox{as}\ k\to\infty.$$
From this and (3.1.47) we arrive at a contradiction, i.e. (3.1.45) is proved.

Let $D_{\varepsilon,R}:=\{\rho\in\Pi^+:\ \varepsilon<|\rho|<R\}$, where $0<\varepsilon<\min(\delta,\tau_1)$, $R>\tau_N$. Using (3.1.44) one can show that
$$\lim_{r\to\infty}\Phi_r(\rho)=\Phi(\rho)\quad\mbox{uniformly in}\ D_{\varepsilon,R}. \eqno(3.1.48)$$
It follows from (3.1.43) and (3.1.48) that $|\Phi(\rho)|\le 1$ for $\rho\in D_{\varepsilon,R}$. By virtue of the arbitrariness of $\varepsilon$ and $R$ we obtain $|\Phi(\rho)|\le 1$ for $\rho\in\Pi^+$, i.e. $(i_4)$ is proved. $\Box$
It follows from Lemma 3.1.3 that
$$(a(\rho))^{-1}=O(1)\quad\mbox{as}\ |\rho|\to 0,\ \rho\in\overline{\Pi^+}. \eqno(3.1.49)$$
We also note that since the function $\rho\,a(\rho)$ is continuous at the origin, it follows that for sufficiently small real $\rho$,
$$\frac{1}{|a(\rho)|}=|\Phi(\rho)|\ge C|\rho|.$$
The properties of the function $\Phi(\rho)$ obtained in Lemma 3.1.3 allow one to recover $\Phi(\rho)$ in $\Pi^+$ from its modulus $|\Phi(\rho)|$ given for real $\rho$.

Lemma 3.1.4. The following relation holds:
$$\Phi(\rho)=\exp\Big(\frac{1}{\pi i}\int_{-\infty}^{\infty}\frac{\ln|\Phi(\xi)|}{\xi-\rho}\,d\xi\Big),\qquad \rho\in\Pi^+. \eqno(3.1.50)$$
Proof. 1) The function $\ln\Phi(\rho)$ is analytic in $\Pi^+$ and $\ln\Phi(\rho)=O(\rho^{-1})$ for $|\rho|\to\infty$, $\rho\in\overline{\Pi^+}$. Consider the closed contour $C_R$ (with counterclockwise circuit) which is the boundary of the domain $D_R=\{\rho\in\Pi^+:\ |\rho|<R\}$ (see fig. 3.1.1). By Cauchy's integral formula [con1, p.84],
$$\ln\Phi(\rho)=\frac{1}{2\pi i}\int_{C_R}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi,\qquad \rho\in D_R.$$
Since
$$\lim_{R\to\infty}\frac{1}{2\pi i}\int_{|\xi|=R,\ \xi\in\overline{\Pi^+}}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi=0,$$
we obtain
$$\ln\Phi(\rho)=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi,\qquad \rho\in\Pi^+. \eqno(3.1.51)$$
2) Take a real $\rho$ and the closed contour $C_{R,\varepsilon}$ (with counterclockwise circuit) consisting of the semicircles $C_R^0=\{\xi:\ \xi=R\exp(i\varphi),\ \varphi\in[0,\pi]\}$ and $\gamma_\varepsilon=\{\xi:\ \xi-\rho=\varepsilon\exp(i\varphi),\ \varphi\in[0,\pi]\}$, $\varepsilon>0$, and the intervals $[-R,\rho-\varepsilon]\cup[\rho+\varepsilon,R]$ (see fig. 3.1.1).

fig. 3.1.1.

By Cauchy's theorem,
$$\frac{1}{2\pi i}\int_{C_{R,\varepsilon}}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi=0.$$
Since
$$\lim_{R\to\infty}\frac{1}{2\pi i}\int_{C_R^0}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi=0,\qquad \lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\gamma_\varepsilon}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi=-\frac{1}{2}\ln\Phi(\rho),$$
we get for real $\rho$,
$$\ln\Phi(\rho)=\frac{1}{\pi i}\int_{-\infty}^{\infty}\frac{\ln\Phi(\xi)}{\xi-\rho}\,d\xi. \eqno(3.1.52)$$
In (3.1.52) (and everywhere below where necessary) the integral is understood in the principal value sense.

3) Let $\Phi(\rho)=|\Phi(\rho)|\exp(i\beta(\rho))$. Separating in (3.1.52) real and imaginary parts we obtain
$$\beta(\rho)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln|\Phi(\xi)|}{\xi-\rho}\,d\xi,\qquad \ln|\Phi(\rho)|=\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\beta(\xi)}{\xi-\rho}\,d\xi.$$
Then, using (3.1.51) we calculate for $\rho\in\Pi^+$:
$$\ln\Phi(\rho)=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln|\Phi(\xi)|}{\xi-\rho}\,d\xi+\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\beta(\xi)}{\xi-\rho}\,d\xi$$
$$=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln|\Phi(\xi)|}{\xi-\rho}\,d\xi-\frac{1}{2\pi^2}\int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty}\frac{d\xi}{(\xi-\rho)(s-\xi)}\Big)\ln|\Phi(s)|\,ds.$$
Since
$$\frac{1}{(\xi-\rho)(s-\xi)}=\frac{1}{s-\rho}\Big(\frac{1}{\xi-\rho}-\frac{1}{\xi-s}\Big),$$
it follows that for $\rho\in\Pi^+$ and real $s$,
$$\int_{-\infty}^{\infty}\frac{d\xi}{(\xi-\rho)(s-\xi)}=\frac{i\pi}{s-\rho}.$$
Consequently,
$$\ln\Phi(\rho)=\frac{1}{\pi i}\int_{-\infty}^{\infty}\frac{\ln|\Phi(\xi)|}{\xi-\rho}\,d\xi,\qquad \rho\in\Pi^+,$$
and we arrive at (3.1.50). $\Box$
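Lemma 3.1.4 can be illustrated numerically. The model function $\Phi(\rho)=(\rho+i)/(\rho+2i)$ below is an assumed test example, not from the book (it is analytic and zero-free in $\overline{\Pi^+}$, satisfies $|\Phi|\le 1$ on the real axis and $\Phi(\rho)\to 1$ ): the sketch evaluates the right-hand side of (3.1.50) by quadrature at a point of $\Pi^+$ and compares it with $\Phi$ itself.

```python
import numpy as np

# Sketch: check Phi(rho) = exp( (1/(pi*i)) Int ln|Phi(xi)|/(xi-rho) dxi ), formula
# (3.1.50), for the assumed model Phi(rho) = (rho+i)/(rho+2i) at rho = 2 + i.
Phi = lambda z: (z + 1j) / (z + 2j)
rho = 2.0 + 1.0j

# ln|Phi(xi)| ~ -1.5/xi^2 at infinity, so a truncated grid suffices; the
# integrand vanishes at the endpoints, so a plain Riemann sum is adequate
xi, h = np.linspace(-400.0, 400.0, 800001, retstep=True)
integrand = np.log(np.abs(Phi(xi))) / (xi - rho)
rhs = np.exp(h * np.sum(integrand) / (np.pi * 1j))
err = abs(Phi(rho) - rhs)
print(Phi(rho), rhs, err)         # the two values agree to high accuracy
```

The same quadrature, applied to $\ln(1-|s^{\pm}(\xi)|^2)$, is what makes formula (3.1.53) below computable in practice.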
It follows from (3.1.21) and (3.1.26) that for real $\rho\ne 0$,
$$\frac{1}{|a(\rho)|^2}=1-|s^{\pm}(\rho)|^2.$$
By virtue of (3.1.40) this yields for real $\rho\ne 0$,
$$|\Phi(\rho)|=\sqrt{1-|s^{\pm}(\rho)|^2}.$$
Using (3.1.40) and (3.1.50) we obtain
$$a(\rho)=\prod_{k=1}^{N}\frac{\rho-i\tau_k}{\rho+i\tau_k}\,\exp\Big(-\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln(1-|s^{\pm}(\xi)|^2)}{\xi-\rho}\,d\xi\Big),\qquad \rho\in\Pi^+. \eqno(3.1.53)$$
We note that since the function $\rho\,a(\rho)$ is continuous in $\overline{\Pi^+}$, it follows that
$$\frac{\rho^2}{1-|s^{\pm}(\rho)|^2}=O(1)\quad\mbox{as}\ |\rho|\to 0.$$
Relation (3.1.53) allows one to establish connections between the scattering data $J^+$ and $J^-$. More precisely, from the given data $J^+$ one can uniquely reconstruct $J^-$ (and vice versa) by the following algorithm.

Algorithm 3.1.1. Let $J^+$ be given. Then
1) Construct the function $a(\rho)$ by (3.1.53);
2) Calculate $d_k$ and $\alpha_k^-$, $k=\overline{1,N}$, by (3.1.32);
3) Find $b(\rho)$ and $s^-(\rho)$ by (3.1.26).
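Step 1 of Algorithm 3.1.1 can be sketched numerically as follows; the data (one bound state $\tau_1=1.5$ and a Gaussian profile for $|s^+|^2$ ) are assumed sample values, not taken from the book. The computed $a(\rho)$ is checked against the identity $|a(\rho)|^{-2}=1-|s^+(\rho)|^2$ near the real axis and against $a(i\tau_1)=0$.

```python
import numpy as np

# Sketch of step 1 of Algorithm 3.1.1 with assumed sample data:
# one bound state tau_1 = 1.5 and |s+(xi)|^2 = 0.5*exp(-xi^2).
taus = [1.5]
s2 = lambda xi: 0.5 * np.exp(-xi * xi)          # |s+(xi)|^2 on the real axis

xi, h = np.linspace(-40.0, 40.0, 400001, retstep=True)

def a(rho):                                      # formula (3.1.53), rho in Pi+
    blaschke = np.prod([(rho - 1j*t) / (rho + 1j*t) for t in taus])
    B = -h * np.sum(np.log(1.0 - s2(xi)) / (xi - rho)) / (2j * np.pi)
    return blaschke * np.exp(B)

# a vanishes at the bound state rho_1 = i*tau_1 (via the product factor)
print(abs(a(1.5j)))
# slightly above the real axis, |a| should match the Blaschke modulus times
# (1 - |s+|^2)^(-1/2); the small offset delta keeps the quadrature regular
rho0, delta = 1.0, 0.02
mod = abs(a(rho0 + 1j * delta))
blaschke0 = abs(np.prod([(rho0 + 1j*delta - 1j*t) / (rho0 + 1j*delta + 1j*t) for t in taus]))
expected = blaschke0 / np.sqrt(1.0 - s2(rho0))
print(mod, expected)
```

With $a(\rho)$ in hand, steps 2 and 3 are direct evaluations of (3.1.32) and (3.1.26).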
3.2. THE MAIN EQUATION

3.2.1. The inverse scattering problem is formulated as follows:

Given the scattering data $J^+$ (or $J^-$ ), construct the potential $q$.

The central role for constructing the solution of the inverse scattering problem is played by the so-called main equation, which is a linear integral equation of Fredholm type. In this section we give a derivation of the main equation and study its properties. In Section 3.3 we provide the solution of the inverse scattering problem along with necessary and sufficient conditions for its solvability.

Theorem 3.2.1. For each fixed $x$, the functions $A^{\pm}(x,t)$, defined in (3.1.8), satisfy the integral equations
$$F^+(x+y)+A^+(x,y)+\int_x^{\infty}A^+(x,t)F^+(t+y)\,dt=0,\qquad y>x, \eqno(3.2.1)$$
$$F^-(x+y)+A^-(x,y)+\int_{-\infty}^{x}A^-(x,t)F^-(t+y)\,dt=0,\qquad y<x, \eqno(3.2.2)$$
where
$$F^{\pm}(x)=R^{\pm}(x)+\sum_{k=1}^{N}\alpha_k^{\pm}\exp(\mp\tau_k x), \eqno(3.2.3)$$
and the functions $R^{\pm}(x)$ are defined by (3.1.28).

Equations (3.2.1) and (3.2.2) are called the main equations or the Gelfand-Levitan-Marchenko equations.
Proof. By virtue of (3.1.18) and (3.1.19),
$$\Big(\frac{1}{a(\rho)}-1\Big)g(x,\rho)=s^+(\rho)e(x,\rho)+e(x,-\rho)-g(x,\rho). \eqno(3.2.4)$$
Put $A^+(x,t)=0$ for $t<x$, and $A^-(x,t)=0$ for $t>x$. Then, using (3.1.8) and (3.1.29), we get
$$s^+(\rho)e(x,\rho)+e(x,-\rho)-g(x,\rho)$$
$$=\Big(\int_{-\infty}^{\infty}R^+(y)\exp(-i\rho y)\,dy\Big)\Big(\exp(i\rho x)+\int_{-\infty}^{\infty}A^+(x,t)\exp(i\rho t)\,dt\Big)+\int_{-\infty}^{\infty}\big(A^+(x,t)-A^-(x,t)\big)\exp(-i\rho t)\,dt=\int_{-\infty}^{\infty}H(x,y)\exp(-i\rho y)\,dy,$$
where
$$H(x,y)=A^+(x,y)-A^-(x,y)+R^+(x+y)+\int_x^{\infty}A^+(x,t)R^+(t+y)\,dt. \eqno(3.2.5)$$
Thus, for each fixed $x$, the right-hand side in (3.2.4) is the Fourier transform of the function $H(x,y)$. Hence
$$H(x,y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\Big(\frac{1}{a(\rho)}-1\Big)g(x,\rho)\exp(i\rho y)\,d\rho. \eqno(3.2.6)$$
Fix $x$ and $y$ $(y>x)$ and consider the function
$$f(\rho):=\Big(\frac{1}{a(\rho)}-1\Big)g(x,\rho)\exp(i\rho y). \eqno(3.2.7)$$
According to (3.1.6) and (3.1.23),
$$f(\rho)=\frac{c}{\rho}\exp(i\rho(y-x))(1+o(1)),\qquad |\rho|\to\infty,\ \rho\in\overline{\Pi^+}. \eqno(3.2.8)$$
Let $C_{\varepsilon,R}$ be a closed contour (with counterclockwise circuit) which is the boundary of the domain $D_{\varepsilon,R}=\{\rho\in\Pi^+:\ \varepsilon<|\rho|<R\}$, where $\varepsilon<\tau_1<\ldots<\tau_N<R$. Thus, all zeros $\rho_k=i\tau_k$, $k=\overline{1,N}$, of $a(\rho)$ are contained in $D_{\varepsilon,R}$. By the residue theorem,
$$\frac{1}{2\pi i}\int_{C_{\varepsilon,R}}f(\rho)\,d\rho=\sum_{k=1}^{N}\mathop{\rm Res}_{\rho=\rho_k}f(\rho).$$
On the other hand, it follows from (3.2.7)-(3.2.8), (3.1.5) and (3.1.49) that
$$\lim_{R\to\infty}\frac{1}{2\pi i}\int_{|\rho|=R,\ \rho\in\overline{\Pi^+}}f(\rho)\,d\rho=0,\qquad \lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{|\rho|=\varepsilon,\ \rho\in\overline{\Pi^+}}f(\rho)\,d\rho=0.$$
Hence
$$\frac{1}{2\pi i}\int_{-\infty}^{\infty}f(\rho)\,d\rho=\sum_{k=1}^{N}\mathop{\rm Res}_{\rho=\rho_k}f(\rho).$$
From this and (3.2.6)-(3.2.7) it follows that
$$H(x,y)=i\sum_{k=1}^{N}\frac{g(x,i\tau_k)\exp(-\tau_k y)}{a_1(i\tau_k)}.$$
Using the fact that all eigenvalues are simple, (3.1.30), (3.1.8) and (3.1.32) we obtain
$$H(x,y)=i\sum_{k=1}^{N}\frac{d_k\,e(x,i\tau_k)\exp(-\tau_k y)}{a_1(i\tau_k)}=-\sum_{k=1}^{N}\alpha_k^+\Big(\exp(-\tau_k(x+y))+\int_x^{\infty}A^+(x,t)\exp(-\tau_k(t+y))\,dt\Big). \eqno(3.2.9)$$
Since $A^-(x,y)=0$ for $y>x$, (3.2.5) and (3.2.9) yield (3.2.1).

Relation (3.2.2) is proved analogously. $\Box$
Lemma 3.2.1. Let nonnegative functions $v(x)$, $u(x)$ $(a\le x\le T)$ be given such that $v(x)\in L(a,T)$, $u(x)v(x)\in L(a,T)$, and let $c_1\ge 0$. If
$$u(x)\le c_1+\int_x^T v(t)u(t)\,dt, \eqno(3.2.10)$$
then
$$u(x)\le c_1\exp\Big(\int_x^T v(t)\,dt\Big). \eqno(3.2.11)$$

Proof. Denote
$$\varphi(x):=c_1+\int_x^T v(t)u(t)\,dt.$$
Then $\varphi(T)=c_1$, $\varphi'(x)=-v(x)u(x)$, and (3.2.10) yields
$$0\le-\varphi'(x)\le v(x)\varphi(x).$$
Let $c_1>0$. Then $\varphi(x)>0$, and
$$0\le-\frac{\varphi'(x)}{\varphi(x)}\le v(x).$$
Integrating this inequality we obtain
$$\ln\frac{\varphi(x)}{\varphi(T)}\le\int_x^T v(t)\,dt,$$
and consequently,
$$\varphi(x)\le c_1\exp\Big(\int_x^T v(t)\,dt\Big).$$
According to (3.2.10), $u(x)\le\varphi(x)$, and we arrive at (3.2.11).

If $c_1=0$, then $\varphi(x)\equiv 0$. Indeed, suppose on the contrary that $\varphi(x)\not\equiv 0$. Then there exists $T_0\le T$ such that $\varphi(x)>0$ for $x<T_0$, and $\varphi(x)\equiv 0$ for $x\in[T_0,T]$. Repeating the arguments we get for $x<T_0$ and sufficiently small $\varepsilon>0$,
$$\ln\frac{\varphi(x)}{\varphi(T_0-\varepsilon)}\le\int_x^{T_0-\varepsilon}v(t)\,dt\le\int_x^{T_0}v(t)\,dt,$$
which is impossible, since the left-hand side tends to infinity as $\varepsilon\to 0$ while the right-hand side stays bounded. Thus, $\varphi(x)\equiv 0$, and (3.2.11) becomes obvious. $\Box$
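A quick numerical sanity check of Lemma 3.2.1 (with assumed sample data $v$, $c_1$, $T$, not from the book): the function $u$ below satisfies the integral equation $u(x)=c_1+\frac{1}{2}\int_x^T v(t)u(t)\,dt$, hence also the inequality (3.2.10), so the Gronwall-type bound (3.2.11) must hold with room to spare.

```python
import numpy as np

# Sketch: u solves u(x) = c1 + (1/2) Int_x^T v(t) u(t) dt, i.e. u' = -v*u/2 with
# u(T) = c1; this satisfies (3.2.10), so u(x) <= c1 * exp(Int_x^T v) must hold.
c1, T, n = 1.0, 5.0, 5001
x, h = np.linspace(0.0, T, n, retstep=True)
v = np.cos(x)**2 + 0.5                            # assumed sample weight, v > 0

u = np.empty(n); u[-1] = c1
for i in range(n - 1, 0, -1):                     # integrate u' = -v*u/2 backward
    vm = 0.5 * (v[i] + v[i-1])                    # midpoint value of v on the step
    u[i-1] = u[i] * (1 + 0.25*h*vm) / (1 - 0.25*h*vm)   # Pade step for exp(h*vm/2)

V = np.concatenate(([0.0], np.cumsum(0.5 * h * (v[1:] + v[:-1]))))   # Int_0^x v
bound = c1 * np.exp(V[-1] - V)                    # right-hand side of (3.2.11)
print(bool(np.all(u <= bound + 1e-9)))
```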
Lemma 3.2.2. The functions $F^{\pm}(x)$ are absolutely continuous and for each fixed $a>-\infty$,
$$\int_a^{\pm\infty}|F^{\pm}(x)|\,dx<\infty,\qquad \int_a^{\pm\infty}(1+|x|)\,|{F^{\pm}}'(x)|\,dx<\infty. \eqno(3.2.12)$$

Proof. 1) According to (3.2.3) and (3.1.28), $F^+(x)\in L_2(a,\infty)$ for each fixed $a>-\infty$. By continuity, (3.2.1) is also valid for $y=x$:
$$F^+(2x)+A^+(x,x)+\int_x^{\infty}A^+(x,t)F^+(t+x)\,dt=0. \eqno(3.2.13)$$
Rewrite (3.2.13) in the form
$$F^+(2x)+A^+(x,x)+2\int_x^{\infty}A^+(x,2\xi-x)F^+(2\xi)\,d\xi=0. \eqno(3.2.14)$$
It follows from (3.2.14) and (3.1.10) that the function $F^+(x)$ is continuous, and for $x\ge a$,
$$|F^+(2x)|\le\frac{1}{2}Q_0^+(x)+\exp(Q_1^+(a))\int_x^{\infty}Q_0^+(\xi)|F^+(2\xi)|\,d\xi. \eqno(3.2.15)$$
Fix $r\ge a$. Then for $x\ge r$, (3.2.15) yields
$$|F^+(2x)|\le\frac{1}{2}Q_0^+(r)+\exp(Q_1^+(a))\int_x^{\infty}Q_0^+(\xi)|F^+(2\xi)|\,d\xi.$$
Applying Lemma 3.2.1 we obtain
$$|F^+(2x)|\le\frac{1}{2}Q_0^+(r)\exp\big(Q_1^+(a)\exp(Q_1^+(a))\big),\qquad x\ge r\ge a,$$
and consequently
$$|F^+(2x)|\le C_a\,Q_0^+(x),\qquad x\ge a. \eqno(3.2.16)$$
It follows from (3.2.16) that for each fixed $a>-\infty$,
$$\int_a^{\infty}|F^+(x)|\,dx<\infty.$$
2) By virtue of (3.2.14), the function $F^+(x)$ is absolutely continuous, and
$$2{F^+}'(2x)+\frac{d}{dx}A^+(x,x)-2A^+(x,x)F^+(2x)+2\int_x^{\infty}\big(A_1^+(x,2\xi-x)-A_2^+(x,2\xi-x)\big)F^+(2\xi)\,d\xi=0,$$
where
$$A_1^+(x,t)=\frac{\partial A^+(x,t)}{\partial x},\qquad A_2^+(x,t)=\frac{\partial A^+(x,t)}{\partial t}.$$
Taking (3.1.9) into account we get
$${F^+}'(2x)=\frac{1}{4}q(x)+P(x), \eqno(3.2.17)$$
where
$$P(x)=-\int_x^{\infty}\big(A_1^+(x,2\xi-x)-A_2^+(x,2\xi-x)\big)F^+(2\xi)\,d\xi+\frac{1}{2}F^+(2x)\int_x^{\infty}q(t)\,dt.$$
It follows from (3.2.16) and (3.1.11) that
$$|P(x)|\le C_a\,(Q_0^+(x))^2,\qquad x\ge a. \eqno(3.2.18)$$
Since
$$xQ_0^+(x)\le\int_x^{\infty}t|q(t)|\,dt,$$
it follows from (3.2.17) and (3.2.18) that for each fixed $a>-\infty$,
$$\int_a^{\infty}(1+|x|)\,|{F^+}'(x)|\,dx<\infty,$$
and (3.2.12) is proved for the function $F^+(x)$. For $F^-(x)$ the arguments are similar. $\Box$
3.2.2. Now we are going to study the solvability of the main equations (3.2.1) and (3.2.2). Let sets $J^{\pm}=\{s^{\pm}(\rho),\lambda_k,\alpha_k^{\pm};\ \rho\in{\bf R},\ k=\overline{1,N}\}$ be given satisfying the following condition.

Condition A. For real $\rho\ne 0$, the functions $s^{\pm}(\rho)$ are continuous, $|s^{\pm}(\rho)|<1$, $s^{\pm}(-\rho)=\overline{s^{\pm}(\rho)}$ and $s^{\pm}(\rho)=o\big(\frac{1}{\rho}\big)$ as $|\rho|\to\infty$. The real functions $R^{\pm}(x)$, defined by (3.1.28), are absolutely continuous, $R^{\pm}(x)\in L_2(-\infty,\infty)$, and for each fixed $a>-\infty$,
$$\int_a^{\pm\infty}|R^{\pm}(x)|\,dx<\infty,\qquad \int_a^{\pm\infty}(1+|x|)\,|{R^{\pm}}'(x)|\,dx<\infty. \eqno(3.2.19)$$
Moreover, $\lambda_k=\rho_k^2<0$, $\alpha_k^{\pm}>0$, $k=\overline{1,N}$.
Theorem 3.2.2. Let sets $J^+$ ( $J^-$ ) be given satisfying Condition A. Then for each fixed $x$, the integral equation (3.2.1) ((3.2.2), respectively) has a unique solution $A^+(x,y)\in L(x,\infty)$ ( $A^-(x,y)\in L(-\infty,x)$, respectively).

Proof. For definiteness we consider equation (3.2.1). For (3.2.2) the arguments are the same. It is easy to check that for each fixed $x$, the operator
$$(J_x f)(y)=\int_x^{\infty}F^+(t+y)f(t)\,dt,\qquad y>x,$$
is compact in $L(x,\infty)$. Therefore, it is sufficient to prove that the homogeneous equation
$$f(y)+\int_x^{\infty}F^+(t+y)f(t)\,dt=0 \eqno(3.2.20)$$
has only the zero solution. Let $f(y)\in L(x,\infty)$ be a real function satisfying (3.2.20). It follows from (3.2.20) and Condition A that the functions $F^+(y)$ and $f(y)$ are bounded on the half-line $y>x$, and consequently $f(y)\in L_2(x,\infty)$. Using (3.2.3) and (3.1.28) we calculate
$$0=\int_x^{\infty}f^2(y)\,dy+\int_x^{\infty}\int_x^{\infty}F^+(t+y)f(t)f(y)\,dt\,dy$$
$$=\int_x^{\infty}f^2(y)\,dy+\sum_{k=1}^{N}\alpha_k^+\Big(\int_x^{\infty}f(y)\exp(-\tau_k y)\,dy\Big)^2+\frac{1}{2\pi}\int_{-\infty}^{\infty}s^+(\rho)\Psi^2(\rho)\,d\rho,$$
where
$$\Psi(\rho)=\int_x^{\infty}f(y)\exp(i\rho y)\,dy.$$
According to Parseval's equality,
$$\int_x^{\infty}f^2(y)\,dy=\frac{1}{2\pi}\int_{-\infty}^{\infty}|\Psi(\rho)|^2\,d\rho,$$
and hence
$$\sum_{k=1}^{N}\alpha_k^+\Big(\int_x^{\infty}f(y)\exp(-\tau_k y)\,dy\Big)^2+\frac{1}{2\pi}\int_{-\infty}^{\infty}|\Psi(\rho)|^2\Big(1+|s^+(\rho)|\exp\big(i(2\varphi(\rho)+\psi(\rho))\big)\Big)\,d\rho=0,$$
where $\varphi(\rho)=\arg\Psi(\rho)$, $\psi(\rho)=\arg s^+(\rho)$. In this equality we take the real part:
$$\sum_{k=1}^{N}\alpha_k^+\Big(\int_x^{\infty}f(y)\exp(-\tau_k y)\,dy\Big)^2+\frac{1}{2\pi}\int_{-\infty}^{\infty}|\Psi(\rho)|^2\Big(1+|s^+(\rho)|\cos\big(2\varphi(\rho)+\psi(\rho)\big)\Big)\,d\rho=0.$$
Since $|s^+(\rho)|<1$, this is possible only if $\Psi(\rho)\equiv 0$. Then $f(y)=0$, and Theorem 3.2.2 is proved. $\Box$
Remark 3.2.1. The main equations (3.2.1)-(3.2.2) can be rewritten in the form
$$F^+(2x+y)+B^+(x,y)+\int_0^{\infty}B^+(x,t)F^+(2x+y+t)\,dt=0,\qquad y>0,$$
$$F^-(2x+y)+B^-(x,y)+\int_{-\infty}^{0}B^-(x,t)F^-(2x+y+t)\,dt=0,\qquad y<0, \eqno(3.2.21)$$
where $B^{\pm}(x,y)=A^{\pm}(x,x+y)$.
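To see the main equation at work, here is a numerical sketch for assumed reflectionless data ( $s^+(\rho)\equiv 0$, $N=1$ ), for which (3.2.3) and (3.1.28) give $F^+(u)=\alpha\exp(-\tau u)$ with $\alpha=\alpha_1^+$, $\tau=\tau_1$. In this case direct substitution shows that (3.2.1) has the closed-form solution $A^+(x,y)=-\alpha e^{-\tau(x+y)}/(1+\frac{\alpha}{2\tau}e^{-2\tau x})$, which provides a check on the discretized solution.

```python
import numpy as np

# Sketch: solve the main equation (3.2.1) at a fixed x for the assumed
# one-soliton data s+ = 0, N = 1, F+(u) = alpha*exp(-tau*u), and compare
# with the closed-form A+(x,y) quoted above.
tau, alpha, x = 1.0, 2.0, 0.3
F = lambda u: alpha * np.exp(-tau * u)

t, h = np.linspace(x, x + 30.0, 1500, retstep=True)   # kernel decays like exp(-tau*t)
w = np.full(t.size, h); w[0] = w[-1] = h / 2          # trapezoidal weights

# Nystrom discretization: (I + K) A = -F(x + y), with K[i, j] = F(t_i + t_j) w_j
K = F(t[:, None] + t[None, :]) * w[None, :]
A = np.linalg.solve(np.eye(t.size) + K, -F(x + t))

A_exact = -alpha * np.exp(-tau * (x + t)) / (1 + alpha / (2 * tau) * np.exp(-2 * tau * x))
err = np.max(np.abs(A - A_exact))
print(err)                                             # small discretization error
```

The Nyström approach (quadrature weights applied directly to the kernel) is the natural discretization here because the kernel $F^+(t+y)$ is smooth and rapidly decaying.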
3.3. THE INVERSE SCATTERING PROBLEM

3.3.1. In this section, using the main equations (3.2.1)-(3.2.2), we provide the solution of the inverse scattering problem of recovering the potential $q$ from the given scattering data $J^+$ (or $J^-$ ). First we prove the uniqueness theorem.

Theorem 3.3.1. The specification of the scattering data $J^+$ (or $J^-$ ) uniquely determines the potential $q$.

Proof. Let $J^+$ and $\tilde J^+$ be the right scattering data for the potentials $q$ and $\tilde q$ respectively, and let $J^+=\tilde J^+$. Then it follows from (3.2.3) and (3.1.28) that $F^+(x)=\tilde F^+(x)$. By virtue of Theorems 3.2.1 and 3.2.2, $A^+(x,y)=\tilde A^+(x,y)$. Therefore, taking (3.1.9) into account, we get $q=\tilde q$. For $J^-$ the arguments are the same. $\Box$

The solution of the inverse scattering problem can be constructed by the following algorithm.

Algorithm 3.3.1. Let the scattering data $J^+$ (or $J^-$ ) be given. Then
1) Calculate the function $F^+(x)$ (or $F^-(x)$ ) by (3.2.3) and (3.1.28).
2) Find $A^+(x,y)$ (or $A^-(x,y)$ ) by solving the main equation (3.2.1) (or (3.2.2), respectively).
3) Construct $q(x)=-2\frac{d}{dx}A^+(x,x)$ (or $q(x)=2\frac{d}{dx}A^-(x,x)$ ).
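Algorithm 3.3.1 can be sketched end to end on assumed one-soliton data $s^+\equiv 0$, $N=1$, $\lambda_1=-\tau^2$, $\alpha_1^+=\alpha$ (a standard test case, not worked out in the book at this point): the known outcome is the reflectionless potential $q(x)=-2\tau^2\cosh^{-2}(\tau(x-x_0))$ with $x_0=\frac{1}{2\tau}\ln\frac{\alpha}{2\tau}$, which the numerical reconstruction should reproduce.

```python
import numpy as np

# Sketch of Algorithm 3.3.1 on assumed one-soliton data:
# step 1: F+(u) = alpha*exp(-tau*u); step 2: solve (3.2.1) at each x;
# step 3: q(x) = -2 d/dx A+(x,x), approximated by a central difference.
tau, alpha = 1.0, 2.0                            # here alpha/(2*tau) = 1, so x0 = 0
F = lambda u: alpha * np.exp(-tau * u)

def A_diag(x):                                   # solve (3.2.1) at fixed x, return A+(x,x)
    t, h = np.linspace(x, x + 25.0, 500, retstep=True)
    w = np.full(t.size, h); w[0] = w[-1] = h / 2
    K = F(t[:, None] + t[None, :]) * w[None, :]
    return np.linalg.solve(np.eye(t.size) + K, -F(x + t))[0]

xs = np.linspace(-2.0, 2.0, 41)
dx = 1e-3
q = np.array([-2 * (A_diag(x + dx) - A_diag(x - dx)) / (2 * dx) for x in xs])

x0 = np.log(alpha / (2 * tau)) / (2 * tau)
q_exact = -2 * tau**2 / np.cosh(tau * (xs - x0))**2
err = np.max(np.abs(q - q_exact))
print(err)
```

The reconstructed well has depth $2\tau^2$, i.e. the bound-state energy $\lambda_1=-\tau^2$ sits at half the well depth, as expected for the one-soliton potential.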
Now we are going to study necessary and sufficient conditions for the solvability of the inverse scattering problem. In other words, we will describe conditions under which a set $J^+$ (or $J^-$ ) represents the scattering data for a certain potential $q$. First we prove the following auxiliary assertion.

Lemma 3.3.1. Let sets $J^{\pm}=\{s^{\pm}(\rho),\lambda_k,\alpha_k^{\pm};\ \rho\in{\bf R},\ k=\overline{1,N}\}$ be given satisfying Condition A, and let $A^{\pm}(x,y)$ be the solutions of the integral equations (3.2.1)-(3.2.2). Construct the functions $e(x,\rho)$ and $g(x,\rho)$ by (3.1.8) and the functions $q^{\pm}(x)$ by the formulae
$$q^+(x)=-2\frac{d}{dx}A^+(x,x),\qquad q^-(x)=2\frac{d}{dx}A^-(x,x). \eqno(3.3.1)$$
Then, for each fixed $a>-\infty$,
$$\int_a^{\pm\infty}(1+|x|)\,|q^{\pm}(x)|\,dx<\infty, \eqno(3.3.2)$$
and
$$-e''(x,\rho)+q^+(x)e(x,\rho)=\rho^2 e(x,\rho),\qquad -g''(x,\rho)+q^-(x)g(x,\rho)=\rho^2 g(x,\rho). \eqno(3.3.3)$$
Proof. 1) It follows from (3.2.19) and (3.2.3) that
$$\int_a^{\pm\infty}|F^{\pm}(x)|\,dx<\infty,\qquad \int_a^{\pm\infty}(1+|x|)\,|{F^{\pm}}'(x)|\,dx<\infty. \eqno(3.3.4)$$
We rewrite (3.2.1) in the form
$$F^+(y+2x)+A^+(x,x+y)+\int_0^{\infty}A^+(x,x+t)F^+(t+y+2x)\,dt=0,\qquad y>0, \eqno(3.3.5)$$
and for each fixed $x$ consider the operator
$$(T_x f)(y)=\int_0^{\infty}F^+(t+y+2x)f(t)\,dt,\qquad y\ge 0,$$
in $L(0,\infty)$. It follows from Theorem 3.2.2 that there exists $(E+T_x)^{-1}$, where $E$ is the identity operator, and $\|(E+T_x)^{-1}\|<\infty$. Using Lemma 1.5.1 (or an analog of Lemma 1.5.2 for an infinite interval) it is easy to verify that $A^+(x,y)$, $y\ge x$, and $\|(E+T_x)^{-1}\|$ are continuous functions. Since
$$\|T_x\|=\sup_y\int_0^{\infty}|F^+(t+y+2x)|\,dt=\sup_y\int_{y+2x}^{\infty}|F^+(\xi)|\,d\xi\le\int_{2x}^{\infty}|F^+(\xi)|\,d\xi,$$
we get
$$\lim_{x\to\infty}\|T_x\|=0.$$
Therefore,
$$C_a^0:=\sup_{x\ge a}\|(E+T_x)^{-1}\|<\infty.$$
Denote
$$\eta_0(x)=\int_x^{\infty}|{F^+}'(t)|\,dt,\qquad \eta_1(x)=\int_x^{\infty}\eta_0(t)\,dt=\int_x^{\infty}(t-x)\,|{F^+}'(t)|\,dt.$$
Then
$$|F^+(x)|\le\eta_0(x). \eqno(3.3.6)$$
It follows from (3.3.5) that
$$A^+(x,x+y)=-(E+T_x)^{-1}F^+(y+2x),$$
and consequently the preceding formulae yield
$$\int_0^{\infty}|A^+(x,x+y)|\,dy\le C_a^0\int_0^{\infty}|F^+(y+2x)|\,dy\le C_a^0\,\eta_1(2x),\qquad x\ge a. \eqno(3.3.7)$$
Using (3.3.5), (3.3.6) and (3.3.7) we calculate
$$|A^+(x,x+y)|\le\eta_0(y+2x)+\int_0^{\infty}|A^+(x,x+t)|\,\eta_0(t+y+2x)\,dt\le\big(1+C_a^0\,\eta_1(2x)\big)\,\eta_0(y+2x). \eqno(3.3.8)$$
Applying Lemma 1.5.1 one can show that the function $A^+(x,y)$ has the first derivatives
$$A_1^+(x,y):=\frac{\partial A^+(x,y)}{\partial x},\qquad A_2^+(x,y):=\frac{\partial A^+(x,y)}{\partial y},$$
and therefore, differentiating (3.2.1), we get
$${F^+}'(x+y)+A_1^+(x,y)-A^+(x,x)F^+(x+y)+\int_x^{\infty}A_1^+(x,t)F^+(t+y)\,dt=0,$$
$${F^+}'(x+y)+A_2^+(x,y)+\int_x^{\infty}A^+(x,t)\,{F^+}'(t+y)\,dt=0. \eqno(3.3.9)$$
Denote $A_0^+(x,x+y):=\frac{d}{dx}A^+(x,x+y)$. Differentiating (3.3.5) with respect to $x$ we obtain
$$2{F^+}'(y+2x)+A_0^+(x,x+y)+\int_0^{\infty}A_0^+(x,x+t)F^+(t+y+2x)\,dt+2\int_0^{\infty}A^+(x,x+t)\,{F^+}'(t+y+2x)\,dt=0, \eqno(3.3.10)$$
and consequently
$$\int_0^{\infty}|A_0^+(x,x+y)|\,dy\le 2C_a^0\Big(\int_0^{\infty}|{F^+}'(y+2x)|\,dy+\int_0^{\infty}|A^+(x,x+t)|\Big(\int_0^{\infty}|{F^+}'(t+y+2x)|\,dy\Big)dt\Big).$$
By virtue of (3.3.8), this yields
$$\int_0^{\infty}|A_0^+(x,x+y)|\,dy\le 2C_a^0\Big(\eta_0(2x)+\big(1+C_a^0\eta_1(2x)\big)\int_0^{\infty}\eta_0^2(t+2x)\,dt\Big)\le 2C_a^0\,\eta_0(2x)\Big(1+\big(1+C_a^0\eta_1(2x)\big)\eta_1(2x)\Big). \eqno(3.3.11)$$
It follows from (3.3.10) that
$$|A_0^+(x,x+y)+2{F^+}'(y+2x)|\le\int_0^{\infty}|A_0^+(x,x+t)F^+(t+y+2x)|\,dt+2\int_0^{\infty}|A^+(x,x+t)\,{F^+}'(t+y+2x)|\,dt.$$
Using (3.3.6), (3.3.11) and (3.3.8) we get
$$|A_0^+(x,x+y)+2{F^+}'(y+2x)|\le 2C_a^0\,\eta_0(y+2x)\,\eta_0(2x)\Big(1+\big(1+C_a^0\eta_1(2x)\big)\eta_1(2x)\Big)+2\,\eta_0(2x)\,\eta_0(y+2x)\big(1+C_a^0\eta_1(2x)\big).$$
For $y=0$ this implies
$$\Big|2\frac{d}{dx}A^+(x,x)+4{F^+}'(2x)\Big|\le 2C_a\,\eta_0^2(2x),\qquad x\ge a,$$
where $C_a=4\big(1+C_a^0\,\eta_1(2a)\big)^2$. Taking (3.3.1) into account we derive
$$\int_a^{\infty}(1+|x|)\,|q^+(x)|\,dx\le 4\int_a^{\infty}(1+|x|)\,|{F^+}'(2x)|\,dx+C_a\int_a^{\infty}(1+|x|)\,\eta_0^2(2x)\,dx.$$
Since for $x\ge 0$
$$x\,\eta_0(2x)\le\int_{2x}^{\infty}t\,|{F^+}'(t)|\,dt\le\int_0^{\infty}t\,|{F^+}'(t)|\,dt,$$
we arrive at (3.3.2) for $q^+$. For $q^-$ the arguments are the same.
2) Let us now prove (3.3.3). For definiteness we will prove (3.3.3) for the function $e(x,\rho)$. First we additionally assume that the function ${F^+}'(x)$ is absolutely continuous and ${F^+}''(x)\in L(a,\infty)$ for each fixed $a>-\infty$. Differentiating the equality
$$J(x,y):=F^+(x+y)+A^+(x,y)+\int_x^{\infty}A^+(x,t)F^+(t+y)\,dt=0,\qquad y>x, \eqno(3.3.12)$$
we calculate
$$J_{yy}(x,y)={F^+}''(x+y)+A_{yy}^+(x,y)+\int_x^{\infty}A^+(x,t)\,{F^+}''(t+y)\,dt=0, \eqno(3.3.13)$$
$$J_{xx}(x,y)={F^+}''(x+y)+A_{xx}^+(x,y)-\frac{d}{dx}A^+(x,x)\,F^+(x+y)-A^+(x,x)\,{F^+}'(x+y)$$
$$-A_1^+(x,t)_{|t=x}\,F^+(x+y)+\int_x^{\infty}A_{xx}^+(x,t)F^+(t+y)\,dt=0. \eqno(3.3.14)$$
Integration by parts in (3.3.13) yields
$$J_{yy}(x,y)={F^+}''(x+y)+A_{yy}^+(x,y)+\Big[A^+(x,t)\,{F^+}'(t+y)-A_2^+(x,t)F^+(t+y)\Big]_{t=x}^{\infty}+\int_x^{\infty}A_{tt}^+(x,t)F^+(t+y)\,dt=0.$$
It follows from (3.3.8) and (3.3.9) that here the substitution at infinity is equal to zero, and consequently,
$$J_{yy}(x,y)={F^+}''(x+y)+A_{yy}^+(x,y)-A^+(x,x)\,{F^+}'(x+y)+A_2^+(x,t)_{|t=x}\,F^+(x+y)+\int_x^{\infty}A_{tt}^+(x,t)F^+(t+y)\,dt=0. \eqno(3.3.15)$$
Using (3.3.14), (3.3.15), (3.3.12), (3.3.1) and the equality
$$J_{xx}(x,y)-J_{yy}(x,y)-q^+(x)J(x,y)=0,$$
which follows from $J(x,y)\equiv 0$, we obtain
$$f(x,y)+\int_x^{\infty}f(x,t)F^+(t+y)\,dt=0,\qquad y\ge x, \eqno(3.3.16)$$
where
$$f(x,y):=A_{xx}^+(x,y)-A_{yy}^+(x,y)-q^+(x)A^+(x,y).$$
It is easy to verify that for each fixed $x\ge a$, $f(x,y)\in L(x,\infty)$. According to Theorem 3.2.2, the homogeneous equation (3.3.16) has only the zero solution, i.e.
$$A_{xx}^+(x,y)-A_{yy}^+(x,y)-q^+(x)A^+(x,y)=0,\qquad y\ge x. \eqno(3.3.17)$$
Differentiating (3.1.8) twice we get
$$e''(x,\rho)=(i\rho)^2\exp(i\rho x)-(i\rho)A^+(x,x)\exp(i\rho x)-\Big(\frac{d}{dx}A^+(x,x)+A_1^+(x,t)_{|t=x}\Big)\exp(i\rho x)+\int_x^{\infty}A_{xx}^+(x,t)\exp(i\rho t)\,dt. \eqno(3.3.18)$$
On the other hand, integrating by parts twice we obtain
$$-\rho^2 e(x,\rho)=(i\rho)^2\exp(i\rho x)+(i\rho)^2\int_x^{\infty}A^+(x,t)\exp(i\rho t)\,dt$$
$$=(i\rho)^2\exp(i\rho x)-(i\rho)A^+(x,x)\exp(i\rho x)+A_2^+(x,t)_{|t=x}\exp(i\rho x)+\int_x^{\infty}A_{tt}^+(x,t)\exp(i\rho t)\,dt.$$
Together with (3.1.8) and (3.3.18) this gives
$$e''(x,\rho)+\rho^2 e(x,\rho)-q^+(x)e(x,\rho)=-\Big(2\frac{d}{dx}A^+(x,x)+q^+(x)\Big)\exp(i\rho x)+\int_x^{\infty}\big(A_{xx}^+(x,t)-A_{tt}^+(x,t)-q^+(x)A^+(x,t)\big)\exp(i\rho t)\,dt.$$
Taking (3.3.1) and (3.3.17) into account we arrive at (3.3.3) for $e(x,\rho)$.

Let us now consider the general case when (3.3.4) holds. Denote by $\tilde e(x,\rho)$ the Jost solution for the potential $q^+$. Our goal is to prove that $\tilde e(x,\rho)\equiv e(x,\rho)$. For this purpose we choose functions $F_j^+(x)$ such that $F_j^+(x)$, ${F_j^+}'(x)$ are absolutely continuous, and for each fixed $a>-\infty$, ${F_j^+}''(x)\in L(a,\infty)$ and
$$\lim_{j\to\infty}\int_a^{\infty}|F_j^+(x)-F^+(x)|\,dx=0,\qquad \lim_{j\to\infty}\int_a^{\infty}(1+|x|)\,|{F_j^+}'(x)-{F^+}'(x)|\,dx=0. \eqno(3.3.19)$$
Denote
$$\eta_{0j}(x)=\int_x^{\infty}|{F_j^+}'(t)-{F^+}'(t)|\,dt,\qquad \eta_{1j}(x)=\int_x^{\infty}\eta_{0j}(t)\,dt=\int_x^{\infty}(t-x)\,|{F_j^+}'(t)-{F^+}'(t)|\,dt.$$
Using (3.3.19) and Lemma 1.5.1 one can show that for sufficiently large $j$, the integral equation
$$F_j^+(x+y)+A_{(j)}^+(x,y)+\int_x^{\infty}A_{(j)}^+(x,t)F_j^+(t+y)\,dt=0,\qquad y>x,$$
has a unique solution $A_{(j)}^+(x,y)$, and
$$\int_x^{\infty}|A_{(j)}^+(x,y)-A^+(x,y)|\,dy\le C_a\,\eta_{1j}(2x),\qquad x\ge a. \eqno(3.3.20)$$
Consequently,
$$\lim_{j\to\infty}\max_{x\ge a}\int_x^{\infty}|A_{(j)}^+(x,y)-A^+(x,y)|\,dy=0. \eqno(3.3.21)$$
Denote
$$e_j(x,\rho)=\exp(i\rho x)+\int_x^{\infty}A_{(j)}^+(x,t)\exp(i\rho t)\,dt,\qquad q_j^+(x)=-2\frac{d}{dx}A_{(j)}^+(x,x). \eqno(3.3.22)$$
It was proved above that
$$-e_j''(x,\rho)+q_j^+(x)e_j(x,\rho)=\rho^2 e_j(x,\rho),$$
i.e. $e_j(x,\rho)$ is the Jost solution for the potential $q_j^+$. Using (3.3.19)-(3.3.20) and acting in the same way as in the first part of the proof of Lemma 3.3.1 we get
$$\lim_{j\to\infty}\int_a^{\infty}(1+|x|)\,|q_j^+(x)-q^+(x)|\,dx=0.$$
By virtue of Lemma 3.1.1, this yields
$$\lim_{j\to\infty}\max_{\rho\in\overline{\Pi^+}}\max_{x\ge a}\big|\big(e_j(x,\rho)-\tilde e(x,\rho)\big)\exp(-i\rho x)\big|=0.$$
On the other hand, using (3.1.8), (3.3.21) and (3.3.22) we infer
$$\lim_{j\to\infty}\max_{\rho\in\overline{\Pi^+}}\max_{x\ge a}\big|\big(e_j(x,\rho)-e(x,\rho)\big)\exp(-i\rho x)\big|=0.$$
Consequently, $\tilde e(x,\rho)\equiv e(x,\rho)$, and (3.3.3) is proved for the function $e(x,\rho)$. For $g(x,\rho)$ the arguments are similar. Therefore, Lemma 3.3.1 is proved. $\Box$
3.3.2. Let us now formulate necessary and sufficient conditions for the solvability of the inverse scattering problem.

Theorem 3.3.2. For data $J^+=\{s^+(\rho),\lambda_k,\alpha_k^+;\ \rho\in{\bf R},\ k=\overline{1,N}\}$ to be the right scattering data for a certain real potential $q$ satisfying (3.1.2), it is necessary and sufficient that the following conditions hold:
1) $\lambda_k=\rho_k^2$, $\rho_k=i\tau_k$, $0<\tau_1<\ldots<\tau_N$; $\alpha_k^+>0$, $k=\overline{1,N}$.
2) For real $\rho\ne 0$, the function $s^+(\rho)$ is continuous, $s^+(-\rho)=\overline{s^+(\rho)}$, $|s^+(\rho)|<1$, and
$$s^+(\rho)=o\Big(\frac{1}{\rho}\Big)\quad\mbox{as}\ |\rho|\to\infty, \eqno(3.3.23)$$
$$\frac{\rho^2}{1-|s^+(\rho)|^2}=O(1)\quad\mbox{as}\ |\rho|\to 0. \eqno(3.3.24)$$
3) The function $\rho\,(a(\rho)-1)$, where $a(\rho)$ is defined by
$$a(\rho):=\prod_{k=1}^{N}\frac{\rho-i\tau_k}{\rho+i\tau_k}\,\exp(B(\rho)),\qquad B(\rho):=-\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln(1-|s^+(\xi)|^2)}{\xi-\rho}\,d\xi,\qquad \rho\in\Pi^+, \eqno(3.3.25)$$
is continuous and bounded in $\overline{\Pi^+}$, and
$$\frac{1}{a(\rho)}=O(1)\quad\mbox{as}\ |\rho|\to 0,\ \rho\in\overline{\Pi^+}, \eqno(3.3.26)$$
$$\lim_{\rho\to 0}\rho\,a(\rho)\big(s^+(\rho)+1\big)=0\quad\mbox{for real}\ \rho. \eqno(3.3.27)$$
4) The functions $R^{\pm}(x)$, defined by
$$R^{\pm}(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}s^{\pm}(\rho)\exp(\pm i\rho x)\,d\rho,\qquad s^-(\rho):=-\,\overline{s^+(\rho)}\,\frac{\overline{a(\rho)}}{a(\rho)}, \eqno(3.3.28)$$
are real and absolutely continuous, $R^{\pm}(x)\in L_2(-\infty,\infty)$, and for each fixed $a>-\infty$, (3.2.19) holds.

Proof. The necessity part of Theorem 3.3.2 was proved above. We prove here the sufficiency. Let a set $J^+$ satisfying the hypothesis of Theorem 3.3.2 be given. According to (3.3.25),
$$B(\rho)=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\zeta(\xi)}{\xi-\rho}\,d\xi,\qquad \rho\in\Pi^+,\qquad \zeta(\xi):=\ln\frac{1}{1-|s^+(\xi)|^2}. \eqno(3.3.29)$$
For real $\xi\ne 0$, the function $\zeta(\xi)$ is continuous and
$$\zeta(-\xi)=\zeta(\xi)\ge 0, \eqno(3.3.30)$$
$$\zeta(\xi)=o\Big(\frac{1}{\xi^2}\Big)\ \mbox{as}\ |\xi|\to\infty,\qquad \zeta(\xi)=O\Big(\ln\frac{1}{|\xi|}\Big)\ \mbox{as}\ \xi\to 0.$$
Furthermore, the function $B(\rho)$ is analytic in $\Pi^+$, continuous in $\overline{\Pi^+}\setminus\{0\}$, and for real $\rho\ne 0$,
$$B(\rho)=\frac{1}{2}\zeta(\rho)+\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\zeta(\xi)}{\xi-\rho}\,d\xi, \eqno(3.3.31)$$
where the integral in (3.3.31) is understood in the principal value sense:
$$\int_{-\infty}^{\infty}:=\lim_{\varepsilon\to 0}\Big(\int_{-\infty}^{\rho-\varepsilon}+\int_{\rho+\varepsilon}^{\infty}\Big).$$
It follows from assumption 3) of the theorem, (3.3.25), (3.3.30) and (3.3.31) that
$$a(-\rho)=\overline{a(\rho)}\quad\mbox{for real}\ \rho\ne 0. \eqno(3.3.32)$$
Moreover, for real $\rho\ne 0$, (3.3.25) and (3.3.31) imply
$$|a(\rho)|^2=|\exp(B(\rho))|^2=\exp(2\mathop{\rm Re}B(\rho))=\exp(\zeta(\rho)),$$
and consequently,
$$1-|s^+(\rho)|^2=\frac{1}{|a(\rho)|^2}\quad\mbox{for real}\ \rho\ne 0. \eqno(3.3.33)$$
Furthermore, the function $s^-(\rho)$, defined by (3.3.28), is continuous for real $\rho\ne 0$, and by virtue of (3.3.32),
$$s^-(-\rho)=\overline{s^-(\rho)},\qquad |s^-(\rho)|=|s^+(\rho)|.$$
Therefore, taking (3.3.23), (3.3.24) and (3.3.27) into account we obtain
$$s^-(\rho)=o\Big(\frac{1}{\rho}\Big)\ \mbox{as}\ |\rho|\to\infty,\qquad \frac{\rho^2}{1-|s^-(\rho)|^2}=O(1)\ \mbox{as}\ |\rho|\to 0,$$
$$\lim_{\rho\to 0}\rho\,a(\rho)\big(s^-(\rho)+1\big)=0\quad\mbox{for real}\ \rho.$$
Let us show that
$$a_1^2(\rho_k)<0,\qquad k=\overline{1,N}, \eqno(3.3.34)$$
where $a_1(\rho)=\frac{d}{d\rho}a(\rho)$, $\rho_k=i\tau_k$. Indeed, it follows from (3.3.25) that
$$a_1(\rho_k)=\frac{d}{d\rho}\Big(\prod_{j=1}^{N}\frac{\rho-i\tau_j}{\rho+i\tau_j}\Big)_{|\rho=\rho_k}\exp(B(\rho_k)).$$
Using (3.3.30) we calculate
$$B(\rho_k)=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\zeta(\xi)}{\xi-i\tau_k}\,d\xi=\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\xi\,\zeta(\xi)}{\xi^2+\tau_k^2}\,d\xi+\frac{\tau_k}{2\pi}\int_{-\infty}^{\infty}\frac{\zeta(\xi)}{\xi^2+\tau_k^2}\,d\xi=\frac{\tau_k}{2\pi}\int_{-\infty}^{\infty}\frac{\zeta(\xi)}{\xi^2+\tau_k^2}\,d\xi.$$
Since
$$\frac{d}{d\rho}\Big(\prod_{j=1}^{N}\frac{\rho-i\tau_j}{\rho+i\tau_j}\Big)_{|\rho=\rho_k}=\frac{1}{2i\tau_k}\prod_{j=1,\ j\ne k}^{N}\frac{\tau_k-\tau_j}{\tau_k+\tau_j},$$
the numbers $a_1(\rho_k)$ are pure imaginary, and (3.3.34) is proved.

Denote
$$\alpha_k^-=-\frac{1}{a_1^2(\rho_k)\,\alpha_k^+},\qquad k=\overline{1,N}. \eqno(3.3.35)$$
According to (3.3.34)-(3.3.35), $\alpha_k^->0$, $k=\overline{1,N}$.

Thus, we have the sets $J^{\pm}=\{s^{\pm}(\rho),\lambda_k,\alpha_k^{\pm};\ \rho\in{\bf R},\ k=\overline{1,N}\}$ which satisfy Condition A. Therefore, Theorem 3.2.2 and Lemma 3.3.1 hold. Let $A^{\pm}(x,y)$ be the solutions of (3.2.1)-(3.2.2). We construct the functions $e(x,\rho)$ and $g(x,\rho)$ by (3.1.8) and the functions $q^{\pm}(x)$ by (3.3.1). Then (3.3.2)-(3.3.4) are valid.
Lemma 3.3.2. The following relations hold:
$$s^+(\rho)e(x,\rho)+e(x,-\rho)=\frac{g(x,\rho)}{a(\rho)},\qquad s^-(\rho)g(x,\rho)+g(x,-\rho)=\frac{e(x,\rho)}{a(\rho)}. \eqno(3.3.36)$$

Proof. 1) Denote
$$\varepsilon^+(x,y)=R^+(x+y)+\int_x^{\infty}A^+(x,t)R^+(t+y)\,dt,\qquad \varepsilon^-(x,y)=R^-(x+y)+\int_{-\infty}^{x}A^-(x,t)R^-(t+y)\,dt.$$
For each fixed $x$, $\varepsilon^{\pm}(x,y)\in L_2(-\infty,\infty)$, and
$$\int_{-\infty}^{\infty}\varepsilon^+(x,y)\exp(-i\rho y)\,dy=s^+(\rho)e(x,\rho),\qquad \int_{-\infty}^{\infty}\varepsilon^-(x,y)\exp(i\rho y)\,dy=s^-(\rho)g(x,\rho). \eqno(3.3.37)$$
Indeed, using (3.1.8) and (3.3.28), we calculate
$$s^+(\rho)e(x,\rho)=\Big(\exp(i\rho x)+\int_x^{\infty}A^+(x,t)\exp(i\rho t)\,dt\Big)\int_{-\infty}^{\infty}R^+(\xi)\exp(-i\rho\xi)\,d\xi$$
$$=\int_{-\infty}^{\infty}R^+(x+y)\exp(-i\rho y)\,dy+\int_x^{\infty}A^+(x,t)\Big(\int_{-\infty}^{\infty}R^+(t+y)\exp(-i\rho y)\,dy\Big)dt$$
$$=\int_{-\infty}^{\infty}\Big(R^+(x+y)+\int_x^{\infty}A^+(x,t)R^+(t+y)\,dt\Big)\exp(-i\rho y)\,dy=\int_{-\infty}^{\infty}\varepsilon^+(x,y)\exp(-i\rho y)\,dy.$$
The second relation in (3.3.37) is proved similarly.

On the other hand, it follows from (3.2.1)-(3.2.2) that
$$\varepsilon^+(x,y)=-A^+(x,y)-\sum_{k=1}^{N}\alpha_k^+\exp(-\tau_k y)\,e(x,i\tau_k),\qquad y>x,$$
$$\varepsilon^-(x,y)=-A^-(x,y)-\sum_{k=1}^{N}\alpha_k^-\exp(\tau_k y)\,g(x,i\tau_k),\qquad y<x.$$
Then
$$\int_{-\infty}^{\infty}\varepsilon^+(x,y)\exp(-i\rho y)\,dy=\int_{-\infty}^{x}\varepsilon^+(x,y)\exp(-i\rho y)\,dy-\int_x^{\infty}\Big(A^+(x,y)+\sum_{k=1}^{N}\alpha_k^+\exp(-\tau_k y)\,e(x,i\tau_k)\Big)\exp(-i\rho y)\,dy.$$
According to (3.1.8),
$$\int_x^{\infty}A^+(x,y)\exp(-i\rho y)\,dy=e(x,-\rho)-\exp(-i\rho x),$$
and consequently,
$$\int_{-\infty}^{\infty}\varepsilon^+(x,y)\exp(-i\rho y)\,dy=\int_{-\infty}^{x}\varepsilon^+(x,y)\exp(-i\rho y)\,dy+\exp(-i\rho x)-e(x,-\rho)-\sum_{k=1}^{N}\frac{\alpha_k^+}{\tau_k+i\rho}\exp(-i\rho x)\exp(-\tau_k x)\,e(x,i\tau_k).$$
Similarly,
$$\int_{-\infty}^{\infty}\varepsilon^-(x,y)\exp(i\rho y)\,dy=\int_x^{\infty}\varepsilon^-(x,y)\exp(i\rho y)\,dy+\exp(i\rho x)-g(x,-\rho)-\sum_{k=1}^{N}\frac{\alpha_k^-}{\tau_k+i\rho}\exp(i\rho x)\exp(\tau_k x)\,g(x,i\tau_k).$$
Comparing with (3.3.37) we deduce
$$s^+(\rho)e(x,\rho)+e(x,-\rho)=\frac{h^-(x,\rho)}{a(\rho)},\qquad s^-(\rho)g(x,\rho)+g(x,-\rho)=\frac{h^+(x,\rho)}{a(\rho)}, \eqno(3.3.38)$$
where
$$h^-(x,\rho):=\exp(-i\rho x)\,a(\rho)\Big(1+\int_{-\infty}^{x}\varepsilon^+(x,y)\exp(i\rho(x-y))\,dy-\sum_{k=1}^{N}\frac{\alpha_k^+}{\tau_k+i\rho}\exp(-\tau_k x)\,e(x,i\tau_k)\Big),$$
$$h^+(x,\rho):=\exp(i\rho x)\,a(\rho)\Big(1+\int_x^{\infty}\varepsilon^-(x,y)\exp(i\rho(y-x))\,dy-\sum_{k=1}^{N}\frac{\alpha_k^-}{\tau_k+i\rho}\exp(\tau_k x)\,g(x,i\tau_k)\Big). \eqno(3.3.39)$$
2) Let us study the properties of the functions h

(x, ). By virtue of (3.3.38),


h

(x, ) = a()
_
s
+
()e(x, ) + e(x, )
_
,
h
+
(x, ) = a()
_
s

()g(x, ) + g(x, )
_
.
_

_
(3.3.40)
In particular, this yields that the functions h

(x, ) are continuous for real ,= 0, and in


view of (3.3.32), h

(x, ) = h

(x, ). Since
lim
0
a()
_
s

() + 1
_
= 0,
it follows from (3.3.40) that
lim
0
h

(x, ) = 0, (3.3.41)
and consequently the functions h

(x, ) are continuous for real . By virtue of (3.3.39)


the functions h

(x, ) are analytic in


+
, continuous in
+
, and (3.3.41) is valid for

+
. Taking (3.3.40) and (3.1.7) into account we get
e(x, ), h

(x, )) = h
+
(x, ), g(x, )) = 2ia(). (3.3.42)
Since [s

()[ < 1, it follows from (3.3.38) that for real ,= 0,


sup
=0

1
a()
h

(x, )

< . (3.3.43)
Using (3.3.39) we get
$$h^+(x,i\kappa_k) = i\,a_1(i\kappa_k)\,\alpha_k^-\,g(x,i\kappa_k), \qquad
h^-(x,i\kappa_k) = i\,a_1(i\kappa_k)\,\alpha_k^+\,e(x,i\kappa_k), \eqno(3.3.44)$$
$$\lim_{|\rho|\to\infty,\ {\rm Im}\,\rho\ge 0} h^\mp(x,\rho)\exp(\pm i\rho x) = 1, \eqno(3.3.45)$$
where $a_1(\rho) = \dfrac{d}{d\rho}a(\rho)$.
3) It follows from (3.3.38) that
$$s^+(\rho)e(x,\rho) + e(x,-\rho) = \frac{h^-(x,\rho)}{a(\rho)}, \qquad
e(x,\rho) + s^+(-\rho)e(x,-\rho) = \frac{h^-(x,-\rho)}{a(-\rho)}.$$
Solving this linear algebraic system we obtain
$$e(x,\rho)\bigl(1 - s^+(\rho)s^+(-\rho)\bigr) = \frac{h^-(x,-\rho)}{a(-\rho)} - s^+(-\rho)\,\frac{h^-(x,\rho)}{a(\rho)}.$$
By virtue of (3.3.32)-(3.3.33),
$$1 - s^+(\rho)s^+(-\rho) = 1 - |s^+(\rho)|^2 = \frac{1}{|a(\rho)|^2} = \frac{1}{a(\rho)a(-\rho)}.$$
Therefore,
$$\frac{e(x,\rho)}{a(\rho)} = s^-(\rho)h^-(x,\rho) + h^-(x,-\rho). \eqno(3.3.46)$$
Using (3.3.46) and the second relation from (3.3.38) we calculate
$$h^-(x,\rho)g(x,-\rho) - h^-(x,-\rho)g(x,\rho) = G(\rho), \eqno(3.3.47)$$
where
$$G(\rho) := \frac{1}{a(\rho)}\bigl(h^+(x,\rho)h^-(x,\rho) - e(x,\rho)g(x,\rho)\bigr). \eqno(3.3.48)$$
According to (3.3.44) and (3.3.35),
$$h^+(x,i\kappa_k)h^-(x,i\kappa_k) - e(x,i\kappa_k)g(x,i\kappa_k) = 0, \quad k=\overline{1,N},$$
and hence (after removing singularities) the function $G(\rho)$ is analytic in the upper half-plane ${\rm Im}\,\rho > 0$ and continuous for ${\rm Im}\,\rho \ge 0$, $\rho \ne 0$. By virtue of (3.3.45),
$$\lim_{|\rho|\to\infty,\ {\rm Im}\,\rho\ge 0} h^+(x,\rho)h^-(x,\rho) = 1.$$
Since
$$a(\rho) = 1 + O\Bigl(\frac{1}{\rho}\Bigr) \quad\hbox{as } |\rho|\to\infty,\ {\rm Im}\,\rho\ge 0,$$
it follows from (3.3.48) that
$$\lim_{|\rho|\to\infty,\ {\rm Im}\,\rho\ge 0} G(\rho) = 0.$$
By virtue of (3.3.47),
$$G(-\rho) = -G(\rho) \quad\hbox{for real } \rho \ne 0.$$
We continue $G(\rho)$ to the lower half-plane by the formula
$$G(\rho) = -G(-\rho), \quad {\rm Im}\,\rho < 0. \eqno(3.3.49)$$
Then the function $G(\rho)$ is analytic in ${\mathbb C}\setminus\{0\}$, and
$$\lim_{|\rho|\to\infty} G(\rho) = 0. \eqno(3.3.50)$$
Furthermore, it follows from (3.3.48), (3.3.26), (3.3.41) and (3.3.49) that
$$\lim_{\rho\to 0}\rho^2 G(\rho) = 0,$$
i.e. the function $\rho\,G(\rho)$ is entire in $\rho$. On the other hand, using (3.3.48), (3.3.41) and (3.3.43) we obtain that for real $\rho$,
$$\lim_{\rho\to 0}\rho\,G(\rho) = 0,$$
and consequently, the function $G(\rho)$ is entire in $\rho$. Together with (3.3.50), using Liouville's theorem, this yields $G(\rho) \equiv 0$, i.e.
$$h^+(x,\rho)h^-(x,\rho) = e(x,\rho)g(x,\rho), \quad {\rm Im}\,\rho \ge 0, \eqno(3.3.51)$$
$$h^-(x,\rho)g(x,-\rho) = h^-(x,-\rho)g(x,\rho) \quad\hbox{for real } \rho \ne 0. \eqno(3.3.52)$$
4) Now we consider the function
$$p(x,\rho) := \frac{h^+(x,\rho)}{e(x,\rho)}.$$
Denote $c = \{x : e(x,0)\,e(x,i\kappa_1)\cdots e(x,i\kappa_N) = 0\}$. Since $e(x,\rho)$ is a solution of the differential equation (3.3.3) and, for ${\rm Im}\,\rho \ge 0$,
$$|e(x,\rho)\exp(-i\rho x) - 1| \le \int_x^\infty |A^+(x,t)|\,dt \to 0 \quad\hbox{as } x\to\infty, \eqno(3.3.53)$$
it follows that $c$ is a finite set. Take a fixed $x \notin c$. Let $\rho^*$, ${\rm Im}\,\rho^* \ge 0$, be a zero of $e(x,\rho)$, i.e. $e(x,\rho^*) = 0$. Since $x \notin c$, it follows that $\rho^* \ne 0$, $\rho^* \ne i\kappa_k$, $k=\overline{1,N}$; hence $a(\rho^*) \ne 0$. In view of (3.3.42) this yields $h^-(x,\rho^*) \ne 0$. According to (3.3.51), $h^+(x,\rho^*) = 0$. Since all zeros of $e(x,\rho)$ are simple (this fact is proved similarly to Theorem 2.3.3), we deduce that the function $p(x,\rho)$ (after removing singularities) is analytic in the upper half-plane ${\rm Im}\,\rho > 0$ and continuous for ${\rm Im}\,\rho \ge 0$, $\rho \ne 0$. It follows from (3.3.45) and (3.3.53) that
$$p(x,\rho) \to 1 \quad\hbox{as } |\rho|\to\infty,\ {\rm Im}\,\rho \ge 0.$$
By virtue of (3.3.51)-(3.3.52),
$$p(x,-\rho) = p(x,\rho) \quad\hbox{for real } \rho \ne 0.$$
We continue $p(x,\rho)$ to the lower half-plane by the formula
$$p(x,\rho) = p(x,-\rho), \quad {\rm Im}\,\rho < 0.$$
Then the function $p(x,\rho)$ is analytic in ${\mathbb C}\setminus\{0\}$, and
$$\lim_{|\rho|\to\infty} p(x,\rho) = 1. \eqno(3.3.54)$$
Since $e(x,0) \ne 0$, it follows from (3.3.41) that
$$\lim_{\rho\to 0}\rho\,p(x,\rho) = 0,$$
and consequently, the function $p(x,\rho)$ is entire in $\rho$. Together with (3.3.54) this yields $p(x,\rho) \equiv 1$, i.e.
$$h^+(x,\rho) \equiv e(x,\rho). \eqno(3.3.55)$$
Then, in view of (3.3.51),
$$h^-(x,\rho) \equiv g(x,\rho). \eqno(3.3.56)$$
From this, using (3.3.38), we arrive at (3.3.36). Lemma 3.3.2 is proved. □
Let us return to the proof of Theorem 3.3.1. It follows from (3.3.36) and (3.3.3) that
$$q^-(x) = q^+(x) =: q(x). \eqno(3.3.57)$$
Then (3.3.2) implies (3.1.2), and the functions $e(x,\rho)$ and $g(x,\rho)$ are the Jost solutions for the potential $q$ defined by (3.3.57).
Denote by $\tilde J^\pm = \{\tilde s^\pm(\rho), \tilde\lambda_k, \tilde\alpha_k^\pm;\ \rho\in{\mathbb R},\ k=\overline{1,\tilde N}\}$ the scattering data for this potential $q$, and let
$$\tilde a(\rho) := \frac{1}{2i\rho}\,\langle e(x,\rho), g(x,\rho)\rangle. \eqno(3.3.58)$$
Using (3.3.42) and (3.3.55)-(3.3.56) we get
$$\langle e(x,\rho), g(x,\rho)\rangle = 2i\rho\,a(\rho).$$
Together with (3.3.58) this yields $\tilde a(\rho) \equiv a(\rho)$, and consequently, $\tilde N = N$, $\tilde\kappa_k = \kappa_k$, $k=\overline{1,N}$. Furthermore, it follows from (3.1.21) that
$$\tilde s^+(\rho)e(x,\rho) + e(x,-\rho) = \frac{g(x,\rho)}{\tilde a(\rho)}, \qquad
\tilde s^-(\rho)g(x,\rho) + g(x,-\rho) = \frac{e(x,\rho)}{\tilde a(\rho)}.$$
Comparing with (3.3.26) we infer $\tilde s^\pm(\rho) = s^\pm(\rho)$, $\rho\in{\mathbb R}$.
By virtue of (3.1.26),
$$\tilde\alpha_k^+ = \frac{d_k}{i\,\tilde a_1(i\kappa_k)}, \qquad
\tilde\alpha_k^- = \frac{1}{i\,d_k\,\tilde a_1(i\kappa_k)}. \eqno(3.3.59)$$
On the other hand, it follows from (3.3.55)-(3.3.56) and (3.3.44) that
$$e(x,i\kappa_k) = i\,a_1(i\kappa_k)\,\alpha_k^-\,g(x,i\kappa_k), \qquad
g(x,i\kappa_k) = i\,a_1(i\kappa_k)\,\alpha_k^+\,e(x,i\kappa_k),$$
i.e.
$$d_k = i\,a_1(i\kappa_k)\,\alpha_k^+ = \frac{1}{i\,a_1(i\kappa_k)\,\alpha_k^-}.$$
Comparing with (3.3.59) we get $\tilde\alpha_k^\pm = \alpha_k^\pm$, and Theorem 3.3.1 is proved. □


Remark 3.3.1. There is a connection between the inverse scattering problem and the
Riemann problem for analytic functions. Indeed, one can rewrite (3.1.27) in the form
Q

(x, ) = Q
+
(x, )Q(), (3.3.60)
where
Q

(x, ) =
_
g(x, ) e(x, )
g

(x, ) e

(x, )
_
, Q
+
(x, ) =
_
e(x, ) g(x, )
e

(x, ) g

(x, )
_
,
201
Q() =
_

_
1
a()
b()
a()

b()
a()
1
a()
_

_
.
For each xed x, the matrix-functions Q

(x, ) are analytic and bounded for Im > 0.


By virtue of (3.1.26) and (3.1.53), the matrix-function Q() can be uniquely reconstructed
from the given scattering data J
+
(or J

). Thus, the inverse scattering problem is reduced


to the Riemann problem (3.3.60). We note that the theory of the solution of the Riemann
problem can be found, for example, in [gak1]. Applying the Fourier transform to (3.3.60) as
it was shown in Section 3.2, we arrive at the Gelfand-Levitan-Marchenko equations (3.2.1)-
(3.2.2) or (3.2.21). We note that the use of the Riemann problem in the inverse problem
theory has only methodical interest and does not constitute an independent method, since
it can be considered as a particular case of the contour integral method (see [bea1], [yur1]).
3.4. REFLECTIONLESS POTENTIALS. MODIFICATION OF THE
DISCRETE SPECTRUM.
3.4.1. A potential $q$ satisfying (3.1.2) is called reflectionless if $b(\rho) \equiv 0$. By virtue of (3.1.26) and (3.1.53) we have in this case
$$s^\pm(\rho) \equiv 0, \qquad a(\rho) = \prod_{k=1}^N \frac{\rho - i\kappa_k}{\rho + i\kappa_k}. \eqno(3.4.1)$$
Theorem 3.3.1 allows one to prove the existence of reflectionless potentials and to describe all of them. Namely, the following theorem is valid.

Theorem 3.4.1. Let arbitrary numbers $\lambda_k = -\kappa_k^2 < 0$, $\alpha_k^+ > 0$, $k=\overline{1,N}$, be given. Take $s^+(\rho) \equiv 0$, $\rho\in{\mathbb R}$, and consider the data $J^+ = \{s^+(\rho), \lambda_k, \alpha_k^+;\ \rho\in{\mathbb R},\ k=\overline{1,N}\}$. Then there exists a unique reflectionless potential $q$ satisfying (3.1.2) for which $J^+$ are the right scattering data.

Theorem 3.4.1 is an obvious corollary of Theorem 3.3.1, since for this set $J^+$ all conditions of Theorem 3.3.1 are fulfilled and (3.4.1) holds.
For a reflectionless potential, the Gelfand-Levitan-Marchenko equation (3.2.1) takes the form
$$A^+(x,y) + \sum_{k=1}^N \alpha_k^+\exp(-\kappa_k(x+y))
+ \sum_{k=1}^N \alpha_k^+\exp(-\kappa_k y)\int_x^\infty A^+(x,t)\exp(-\kappa_k t)\,dt = 0. \eqno(3.4.2)$$
We seek a solution of (3.4.2) in the form
$$A^+(x,y) = \sum_{k=1}^N P_k(x)\exp(-\kappa_k y).$$
Substituting this into (3.4.2), we obtain the following linear algebraic system with respect to $P_k(x)$:
$$P_k(x) + \sum_{j=1}^N \frac{\alpha_k^+\exp(-(\kappa_k+\kappa_j)x)}{\kappa_k+\kappa_j}\,P_j(x) = -\alpha_k^+\exp(-\kappa_k x), \quad k=\overline{1,N}. \eqno(3.4.3)$$
Solving (3.4.3) we infer $P_k(x) = \Delta_k(x)/\Delta(x)$, where
$$\Delta(x) = \det\Bigl(\delta_{kl} + \frac{\alpha_k^+\exp(-(\kappa_k+\kappa_l)x)}{\kappa_k+\kappa_l}\Bigr)_{k,l=\overline{1,N}}, \eqno(3.4.4)$$
and $\Delta_k(x)$ is the determinant obtained from $\Delta(x)$ by means of the replacement of the $k$-th column by the column of right-hand sides. Then
$$q(x) = -2\frac{d}{dx}A^+(x,x) = -2\frac{d}{dx}\Bigl(\sum_{k=1}^N \frac{\Delta_k(x)}{\Delta(x)}\exp(-\kappa_k x)\Bigr),$$
and consequently one can show that
$$q(x) = -2\frac{d^2}{dx^2}\ln\Delta(x). \eqno(3.4.5)$$
Thus, (3.4.4) and (3.4.5) allow one to calculate reflectionless potentials from the given numbers $\{\kappa_k, \alpha_k^+\}_{k=\overline{1,N}}$.
Example 3.4.1. Let $N = 1$, $\kappa = \kappa_1$, $\alpha = \alpha_1^+$, and
$$\Delta(x) = 1 + \frac{\alpha}{2\kappa}\exp(-2\kappa x).$$
Then (3.4.5) gives
$$q(x) = -\frac{4\alpha\kappa}{\bigl(\exp(\kappa x) + \frac{\alpha}{2\kappa}\exp(-\kappa x)\bigr)^2}.$$
Denote
$$x_0 = -\frac{1}{2\kappa}\ln\frac{2\kappa}{\alpha}.$$
Then
$$q(x) = -\frac{2\kappa^2}{\cosh^2(\kappa(x-x_0))}.$$
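Formulas (3.4.4)-(3.4.5) are easy to test numerically for $N = 1$; the sketch below (κ and α are arbitrary test values, not data from the text) compares $-2(\ln\Delta(x))''$, computed by central differences, with the closed form of Example 3.4.1.

```python
import numpy as np

# Numerical check of (3.4.4)-(3.4.5) for N = 1; kappa and alpha are arbitrary
# test values (they are not data taken from the text).
kappa, alpha = 1.3, 0.7
x = np.linspace(-8.0, 8.0, 4001)
h = x[1] - x[0]

# Delta(x) = 1 + alpha/(2 kappa) exp(-2 kappa x), the determinant (3.4.4) for N = 1
log_delta = np.log1p(alpha / (2 * kappa) * np.exp(-2 * kappa * x))

# q(x) = -2 (d^2/dx^2) ln Delta(x), formula (3.4.5), via central differences
q_num = -2 * (log_delta[2:] - 2 * log_delta[1:-1] + log_delta[:-2]) / h**2

# Closed form from Example 3.4.1: q(x) = -2 kappa^2 / cosh^2(kappa (x - x0))
x0 = -np.log(2 * kappa / alpha) / (2 * kappa)
q_exact = -2 * kappa**2 / np.cosh(kappa * (x[1:-1] - x0))**2

print(np.max(np.abs(q_num - q_exact)))  # small discretization error
```

The two curves agree up to the $O(h^2)$ error of the second-difference approximation.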
3.4.2. If $q = 0$, then $s^\pm(\rho) \equiv 0$, $N = 0$, $a(\rho) \equiv 1$. Therefore, Theorem 3.4.1 shows that all reflectionless potentials can be constructed from the zero potential and given data $\{\lambda_k, \alpha_k^+,\ k=\overline{1,N}\}$. Below we briefly consider a more general case of changing the discrete spectrum for an arbitrary potential $q$. Namely, the following theorem is valid.

Theorem 3.4.2. Let $J^+ = \{s^+(\rho), \lambda_k, \alpha_k^+;\ \rho\in{\mathbb R},\ k=\overline{1,N}\}$ be the right scattering data for a certain real potential $q$ satisfying (3.1.2). Take arbitrary numbers $\tilde\lambda_k = -\tilde\kappa_k^2 < 0$, $\tilde\alpha_k^+ > 0$, $k=\overline{1,\tilde N}$, and consider the set $\tilde J^+ = \{s^+(\rho), \tilde\lambda_k, \tilde\alpha_k^+;\ \rho\in{\mathbb R},\ k=\overline{1,\tilde N}\}$ with the same $s^+(\rho)$ as in $J^+$. Then there exists a real potential $\tilde q$ satisfying (3.1.2) for which $\tilde J^+$ represents the right scattering data.
Proof. Let us check the conditions of Theorem 3.3.1 for $\tilde J^+$. According to (3.3.25) and (3.3.28) we construct the functions $\tilde a(\rho)$ and $\tilde s^-(\rho)$ by the formulae
$$\tilde a(\rho) := \prod_{k=1}^{\tilde N}\frac{\rho - i\tilde\kappa_k}{\rho + i\tilde\kappa_k}\,
\exp\Bigl(-\frac{1}{2\pi i}\int_{-\infty}^\infty \frac{\ln(1-|s^+(\mu)|^2)}{\mu-\rho}\,d\mu\Bigr), \quad {\rm Im}\,\rho > 0,$$
$$\tilde s^-(\rho) := -s^+(-\rho)\,\frac{\tilde a(-\rho)}{\tilde a(\rho)}. \eqno(3.4.6)$$
Together with (3.1.53) this yields
$$\tilde a(\rho) = a(\rho)\prod_{k=1}^{\tilde N}\frac{\rho - i\tilde\kappa_k}{\rho + i\tilde\kappa_k}
\prod_{k=1}^{N}\frac{\rho + i\kappa_k}{\rho - i\kappa_k}. \eqno(3.4.7)$$
By virtue of (3.1.26),
$$s^-(\rho) = -s^+(-\rho)\,\frac{a(-\rho)}{a(\rho)}. \eqno(3.4.8)$$
Using (3.4.6)-(3.4.8) we get
$$|\tilde s^-(\rho)| = |s^-(\rho)|. \eqno(3.4.9)$$
Since the scattering data $J^+$ satisfy all conditions of Theorem 3.3.1, it follows from (3.4.7) and (3.4.9) that $\tilde J^+$ also satisfy all conditions of Theorem 3.3.1. Then, by Theorem 3.3.1, there exists a real potential $\tilde q$ satisfying (3.1.2) for which $\tilde J^+$ are the right scattering data. □
IV. APPLICATIONS OF THE INVERSE PROBLEM THEORY
In this chapter various applications of inverse spectral problems for Sturm-Liouville operators are considered. In Sections 4.1-4.2 we study the Korteweg-de Vries equation on the line and on the half-line by the inverse problem method. Section 4.3 is devoted to a synthesis problem of recovering parameters of a medium from incomplete spectral information. Section 4.4 deals with discontinuous inverse problems which are connected with discontinuous material properties. Inverse problems appearing in elasticity theory are considered in Section 4.5. In Section 4.6 boundary value problems with aftereffect are studied. Inverse problems for differential equations of the Orr-Sommerfeld type are investigated in Section 4.7. Section 4.8 is devoted to inverse problems for differential equations with turning points.
4.1. SOLUTION OF THE KORTEWEG-DE VRIES EQUATION ON THE LINE
Inverse spectral problems play an important role for integrating some nonlinear evolution equations in mathematical physics. In 1967, C. Gardner, J. Greene, M. Kruskal and R. Miura [gar1] found a deep connection of the nonlinear Korteweg-de Vries (KdV) equation
$$q_t = 6qq_x - q_{xxx},$$
well known since the XIX-th century, with the spectral theory of Sturm-Liouville operators. They managed to solve globally the Cauchy problem for the KdV equation by means of reduction to the inverse spectral problem. These investigations created a new branch in mathematical physics (for further discussions see [abl1], [lax1], [tak1], [zak1]). In this section we provide the solution of the Cauchy problem for the KdV equation on the line. For this purpose we use ideas from [gar1], [lax1] and results of Chapter 3 on the inverse scattering problem for the Sturm-Liouville operator on the line.
4.1.1. Consider the Cauchy problem for the KdV equation on the line:
$$q_t = 6qq_x - q_{xxx}, \quad -\infty < x < \infty,\ t > 0, \eqno(4.1.1)$$
$$q_{|t=0} = q_0(x), \eqno(4.1.2)$$
where $q_0(x)$ is a real function satisfying (3.1.2). Denote by $Q_0$ the set of real functions $q(x,t)$, $-\infty < x < \infty$, $t \ge 0$, such that for each fixed $T > 0$,
$$\max_{0\le t\le T}\int_{-\infty}^\infty (1+|x|)\,|q(x,t)|\,dx < \infty.$$
Let $Q_1$ be the set of functions $q(x,t)$ such that $q, \dot q, q', q'', q''' \in Q_0$. Here and below, dot denotes derivatives with respect to $t$, and prime denotes derivatives with respect to $x$.
We will seek the solution of the Cauchy problem (4.1.1)-(4.1.2) in the class $Q_1$. First we prove the following uniqueness theorem.

Theorem 4.1.1. The Cauchy problem (4.1.1)-(4.1.2) has at most one solution.
Proof. Let $q, \tilde q \in Q_1$ be solutions of the Cauchy problem (4.1.1)-(4.1.2). Denote $w := q - \tilde q$. Then $w \in Q_1$, $w_{|t=0} = 0$, and
$$w_t = 6(qw_x + w\tilde q_x) - w_{xxx}.$$
Multiplying this equality by $w$ and integrating with respect to $x$, we get
$$\int_{-\infty}^\infty w w_t\,dx = 6\int_{-\infty}^\infty w(qw_x + w\tilde q_x)\,dx - \int_{-\infty}^\infty w w_{xxx}\,dx.$$
Integration by parts yields
$$\int_{-\infty}^\infty w w_{xxx}\,dx = -\int_{-\infty}^\infty w_x w_{xx}\,dx = -\frac12\int_{-\infty}^\infty (w_x^2)_x\,dx,$$
and consequently
$$\int_{-\infty}^\infty w w_{xxx}\,dx = 0.$$
Since
$$\int_{-\infty}^\infty q w w_x\,dx = \int_{-\infty}^\infty q\Bigl(\frac12 w^2\Bigr)_x dx = -\frac12\int_{-\infty}^\infty q_x w^2\,dx,$$
it follows that
$$\int_{-\infty}^\infty w w_t\,dx = 6\int_{-\infty}^\infty \Bigl(\tilde q_x - \frac12 q_x\Bigr)w^2\,dx.$$
Denote
$$E(t) = \frac12\int_{-\infty}^\infty w^2\,dx, \qquad m(t) = 12\max_{x\in{\mathbb R}}\Bigl|\tilde q_x - \frac12 q_x\Bigr|.$$
Then
$$\frac{d}{dt}E(t) \le m(t)E(t),$$
and consequently,
$$0 \le E(t) \le E(0)\exp\Bigl(\int_0^t m(\tau)\,d\tau\Bigr).$$
Since $E(0) = 0$ we deduce $E(t) \equiv 0$, i.e. $w \equiv 0$. □
4.1.2. Our next goal is to construct the solution of the Cauchy problem (4.1.1)-(4.1.2) by reduction to the inverse scattering problem for the Sturm-Liouville equation on the line. Let $q(x,t)$ be the solution of (4.1.1)-(4.1.2). Consider the Sturm-Liouville equation
$$Ly := -y'' + q(x,t)y = \lambda y \eqno(4.1.3)$$
with $t$ as a parameter. Then the Jost solutions of (4.1.3) and the scattering data depend on $t$. Let us show that equation (4.1.1) is equivalent to the equation
$$\dot L = [A, L], \eqno(4.1.4)$$
where in this section
$$Ay = -4y''' + 6qy' + 3q'y$$
is a linear differential operator, and $[A, L] := AL - LA$.
Indeed, since $Ly = -y'' + qy$, we have
$$\dot Ly = \dot q y, \qquad ALy = -4(-y''+qy)''' + 6q(-y''+qy)' + 3q'(-y''+qy),$$
$$LAy = -(-4y'''+6qy'+3q'y)'' + q(-4y'''+6qy'+3q'y),$$
and consequently $(AL - LA)y = (6qq' - q''')y$.
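This commutator identity can be checked symbolically; a minimal sketch with sympy, where $q$ and $y$ are arbitrary smooth functions:

```python
import sympy as sp

# Symbolic check that (AL - LA) y = (6 q q' - q''') y for arbitrary smooth q, y.
x = sp.symbols('x')
q = sp.Function('q')(x)
y = sp.Function('y')(x)

L = lambda f: -sp.diff(f, x, 2) + q * f          # L y = -y'' + q y
A = lambda f: -4 * sp.diff(f, x, 3) + 6 * q * sp.diff(f, x) + 3 * sp.diff(q, x) * f

commutator = sp.expand(A(L(y)) - L(A(y)))
target = sp.expand((6 * q * sp.diff(q, x) - sp.diff(q, x, 3)) * y)
print(sp.expand(commutator - target))  # 0
```

All higher derivatives of $y$ cancel identically, leaving only multiplication by $6qq' - q'''$.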
Equation (4.1.4) is called the Lax equation or the Lax representation, and the pair $\{A, L\}$ is called the Lax pair corresponding to (4.1.3).

Lemma 4.1.1. Let $q(x,t)$ be a solution of (4.1.1), and let $y = y(x,t,\lambda)$ be a solution of (4.1.3). Then $(L-\lambda)(\dot y - Ay) = 0$, i.e. the function $\dot y - Ay$ is also a solution of (4.1.3).

Indeed, differentiating (4.1.3) with respect to $t$, we get $\dot Ly + (L-\lambda)\dot y = 0$, or, in view of (4.1.4), $(L-\lambda)\dot y = LAy - ALy = (L-\lambda)Ay$. □

Let $e(x,t,\rho)$ and $g(x,t,\rho)$ be the Jost solutions of (4.1.3) introduced in Section 3.1. Denote $e^\pm = e(x,t,\pm\rho)$, $g^\pm = g(x,t,\pm\rho)$.
Lemma 4.1.2. The following relation holds:
$$\dot e^+ = Ae^+ - 4i\rho^3 e^+. \eqno(4.1.5)$$
Proof. By Lemma 4.1.1, the function $\dot e^+ - Ae^+$ is a solution of (4.1.3). Since the functions $e^+, e^-$ form a fundamental system of solutions of (4.1.3), we have
$$\dot e^+ - Ae^+ = \gamma_1 e^+ + \gamma_2 e^-,$$
where $\gamma_k = \gamma_k(t,\rho)$, $k = 1,2$, do not depend on $x$. As $x \to +\infty$,
$$e^\pm \sim \exp(\pm i\rho x), \quad \dot e^+ \to 0, \quad Ae^+ \sim 4i\rho^3\exp(i\rho x);$$
consequently $\gamma_1 = -4i\rho^3$, $\gamma_2 = 0$, and (4.1.5) is proved. □
Lemma 4.1.3. The following relations hold:
$$\dot a = 0, \qquad \dot b = -8i\rho^3 b, \qquad \dot s^+ = 8i\rho^3 s^+, \eqno(4.1.6)$$
$$\dot\lambda_j = 0, \qquad \dot\alpha_j^+ = 8\kappa_j^3\,\alpha_j^+. \eqno(4.1.7)$$
Proof. According to (3.1.18),
$$e^+ = ag^- + bg^+. \eqno(4.1.8)$$
Differentiating (4.1.8) with respect to $t$, we get $\dot e^+ = (\dot a g^- + \dot b g^+) + (a\dot g^- + b\dot g^+)$. Using (4.1.5) and (4.1.8) we calculate
$$a(Ag^- - 4i\rho^3 g^-) + b(Ag^+ - 4i\rho^3 g^+) = (\dot a g^- + \dot b g^+) + (a\dot g^- + b\dot g^+). \eqno(4.1.9)$$
Since $g^\pm \sim \exp(\mp i\rho x)$, $\dot g^\pm \to 0$, $Ag^\pm \sim \mp 4i\rho^3\exp(\mp i\rho x)$ as $x \to -\infty$, (4.1.9) yields
$$-8i\rho^3 b\exp(-i\rho x) \sim \dot a\exp(i\rho x) + \dot b\exp(-i\rho x),$$
i.e. $\dot a = 0$, $\dot b = -8i\rho^3 b$. Consequently, $\dot s^+ = 8i\rho^3 s^+$, and (4.1.6) is proved.
The numbers $\rho_j = i\kappa_j$ ($\lambda_j = \rho_j^2 = -\kappa_j^2$), $\kappa_j > 0$, $j = \overline{1,N}$, are the zeros of the function $a = a(\rho,t)$. Hence, according to $\dot a = 0$, we have $\dot\lambda_j = 0$. Denote
$$e_j = e(x,t,i\kappa_j), \qquad g_j = g(x,t,i\kappa_j), \qquad j = \overline{1,N}.$$
By Theorem 3.1.3, $g_j = d_j e_j$, where $d_j = d_j(t)$ do not depend on $x$. Differentiating the relation $g_j = d_j e_j$ with respect to $t$ and using (4.1.5), we infer
$$\dot g_j = \dot d_j e_j + d_j\dot e_j = \dot d_j e_j + d_j A e_j - 4\kappa_j^3 d_j e_j,$$
or
$$\dot g_j = \frac{\dot d_j}{d_j}\,g_j + Ag_j - 4\kappa_j^3 g_j.$$
As $x \to -\infty$, $g_j \sim \exp(\kappa_j x)$, $\dot g_j \to 0$, $Ag_j \sim -4\kappa_j^3\exp(\kappa_j x)$, and consequently $\dot d_j = 8\kappa_j^3 d_j$, or, in view of (3.1.32), $\dot\alpha_j^+ = 8\kappa_j^3\alpha_j^+$. □
Thus, it follows from Lemma 4.1.3 that the following theorem holds.

Theorem 4.1.2. Let $q(x,t)$ be the solution of the Cauchy problem (4.1.1)-(4.1.2), and let $J^+(t) = \{s^+(t,\rho), \lambda_j(t), \alpha_j^+(t),\ j=\overline{1,N}\}$ be the scattering data for $q(x,t)$. Then
$$s^+(t,\rho) = s^+(0,\rho)\exp(8i\rho^3 t),$$
$$\lambda_j(t) = \lambda_j(0), \qquad \alpha_j^+(t) = \alpha_j^+(0)\exp(8\kappa_j^3 t), \quad j=\overline{1,N}\ \ (\lambda_j = -\kappa_j^2). \eqno(4.1.10)$$
The formulae (4.1.10) give us the evolution of the scattering data with respect to $t$, and we obtain the following algorithm for the solution of the Cauchy problem (4.1.1)-(4.1.2).

Algorithm 4.1.1. Let the function $q(x,0) = q_0(x)$ be given. Then
1) Construct the scattering data $\{s^+(0,\rho), \lambda_j(0), \alpha_j^+(0),\ j=\overline{1,N}\}$.
2) Calculate $\{s^+(t,\rho), \lambda_j(t), \alpha_j^+(t),\ j=\overline{1,N}\}$ by (4.1.10).
3) Find the function $q(x,t)$ by solving the inverse scattering problem (see Section 3.3).
Thus, if the Cauchy problem (4.1.1)-(4.1.2) has a solution, then it can be found by Algorithm 4.1.1. On the other hand, if $q_0 \in Q_1$, then $J^+(t)$, constructed by (4.1.10), satisfies the conditions of Theorem 3.3.1 for each fixed $t > 0$, and consequently there exists $q(x,t)$ for which $J^+(t)$ are the scattering data. It can be shown (see [mar1]) that $q(x,t)$ is the solution of (4.1.1)-(4.1.2).
We notice once more the main points of the solution of the Cauchy problem (4.1.1)-(4.1.2) by the inverse problem method:
(1) the presence of the Lax representation (4.1.4);
(2) the evolution of the scattering data with respect to $t$;
(3) the solution of the inverse problem.
4.1.3. Among the solutions of the KdV equation (4.1.1) there are very important particular solutions of the form $q(x,t) = f(x-ct)$. Such solutions are called solitons. Substituting $q(x,t) = f(x-ct)$ into (4.1.1), we get $-f''' + 6ff' + cf' = 0$, or $(-f'' + 3f^2 + cf)' = 0$. Clearly, the function
$$f(x) = -\frac{c}{2\cosh^2\bigl(\frac{\sqrt c\,x}{2}\bigr)}$$
satisfies this equation. Hence, the function
$$q(x,t) = -\frac{c}{2\cosh^2\bigl(\frac{\sqrt c\,(x-ct)}{2}\bigr)} \eqno(4.1.11)$$
is a soliton.
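The substitution can also be verified directly; a sketch in sympy, where $c$ is an arbitrary positive symbol and the residual of (4.1.1) is evaluated at an arbitrary sample point:

```python
import sympy as sp

# Residual of the KdV equation (4.1.1) for the soliton (4.1.11); c is an
# arbitrary positive symbol.
x, t, c = sp.symbols('x t c', positive=True)
q = -c / (2 * sp.cosh(sp.sqrt(c) * (x - c * t) / 2)**2)

residual = sp.diff(q, t) - (6 * q * sp.diff(q, x) - sp.diff(q, x, 3))
res = sp.lambdify((x, t, c), residual, 'math')
print(abs(res(0.3, 0.7, 2.0)))  # zero up to rounding
```

The residual vanishes identically; evaluating it numerically avoids relying on a particular symbolic simplification path.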
It is interesting to note that solitons correspond to reflectionless potentials (see Section 3.4). Consider the Cauchy problem (4.1.1)-(4.1.2) in the case when $q_0(x)$ is a reflectionless potential, i.e. $s^+(0,\rho) \equiv 0$. Then, by virtue of (4.1.10), $s^+(t,\rho) \equiv 0$ for all $t$, i.e. the solution $q(x,t)$ of the Cauchy problem (4.1.1)-(4.1.2) is a reflectionless potential for all $t$. Using (3.4.5) and (4.1.10) we derive
$$q(x,t) = -2\frac{d^2}{dx^2}\ln\Delta(x,t),$$
$$\Delta(x,t) = \det\Bigl(\delta_{kl} + \alpha_k^+(0)\exp(8\kappa_k^3 t)\,\frac{\exp(-(\kappa_k+\kappa_l)x)}{\kappa_k+\kappa_l}\Bigr)_{k,l=\overline{1,N}}.$$
In particular, if $N = 1$ and $\alpha_1^+(0) = 2\kappa_1$, then
$$q(x,t) = -\frac{2\kappa_1^2}{\cosh^2(\kappa_1(x - 4\kappa_1^2 t))}.$$
Taking $c = 4\kappa_1^2$, we get (4.1.11).
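In fact, for $N = 1$ the evolved determinant yields a solution of (4.1.1) for every initial norming constant, not only for $\alpha_1^+(0) = 2\kappa_1$; a sketch, with $\beta$ and $\kappa$ arbitrary positive symbols (here $\beta$ plays the role of $\alpha_1^+(0)/(2\kappa_1)$):

```python
import sympy as sp

# One-soliton reflectionless solution q = -2 (ln Delta)_xx with the evolved
# norming constant; beta and kappa are arbitrary positive symbols.
x, t, kappa, beta = sp.symbols('x t kappa beta', positive=True)
Delta = 1 + beta * sp.exp(8 * kappa**3 * t - 2 * kappa * x)
q = -2 * sp.diff(sp.log(Delta), x, 2)

residual = sp.diff(q, t) - (6 * q * sp.diff(q, x) - sp.diff(q, x, 3))
res = sp.lambdify((x, t, kappa, beta), residual, 'math')
print(abs(res(0.4, 0.2, 1.1, 0.6)))  # zero up to rounding
```

Changing $\beta$ merely shifts the sech$^2$ profile, which is consistent with the translation $x_0$ found in Example 3.4.1.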
4.2. KORTEWEG-DE VRIES EQUATION ON THE HALF-LINE. NONLINEAR REFLECTION
4.2.1. We consider the mixed problem for the KdV equation on the half-line:
$$q_t = 6qq_x - q_{xxx}, \quad x \ge 0,\ t \ge 0, \eqno(4.2.1)$$
$$q_{|t=0} = q_0(x), \quad q_{|x=0} = q_1(t), \quad q_{x|x=0} = q_2(t), \quad q_{xx|x=0} = q_3(t). \eqno(4.2.2)$$
Here $q_0(x)$ and $q_j(t)$, $j = 1,2,3$, are continuous complex-valued functions, $q_0(x) \in L(0,\infty)$ and $q_0(0) = q_1(0)$. In this section the mixed problem (4.2.1)-(4.2.2) is solved by the inverse problem method. For this purpose we use the results of Chapter 2 on recovering the Sturm-Liouville operator on the half-line from the Weyl function. We provide the evolution of the Weyl function with respect to $t$, and give an algorithm for the solution of the mixed problem (4.2.1)-(4.2.2) along with necessary and sufficient conditions for its solvability.
Denote by $Q$ the set of functions $q(x,t)$ such that the functions $q, \dot q, q', q'', q'''$ are continuous in $D = \{(x,t) : x \ge 0,\ t \ge 0\}$ and integrable on the half-line $x \ge 0$ for each fixed $t$. We will seek the solution of (4.2.1)-(4.2.2) in the class $Q$.
Consider the Sturm-Liouville equation
$$Ly := -y'' + q(x,t)y = \lambda y, \eqno(4.2.3)$$
where $q(x,t) \in L(0,\infty)$ for each fixed $t \ge 0$. Let $\varphi(x,t,\lambda)$ be the solution of (4.2.3) under the conditions $\varphi(0,t,\lambda) = 1$, $\varphi(x,t,\lambda) = O(\exp(i\rho x))$, $x \to \infty$ (for each fixed $t$), where $\lambda = \rho^2$, ${\rm Im}\,\rho \ge 0$. Denote $M(t,\lambda) := \varphi'(0,t,\lambda)$. Then
$$\varphi(x,t,\lambda) = \frac{e(x,t,\rho)}{e(0,t,\rho)}, \qquad M(t,\lambda) = \frac{e'(0,t,\rho)}{e(0,t,\rho)},$$
where $e(x,t,\rho)$ is the Jost solution for $q(x,t)$. The function $M(t,\lambda)$ is the Weyl function for equation (4.2.3) with respect to the linear form $U(y) := y(0)$ (see Remark 2.1.4). From Chapter 2 we know that the specification of the Weyl function uniquely determines the potential $q(x,t)$. Our goal is to obtain the evolution of the Weyl function $M(t,\lambda)$ with respect to $t$ when $q$ satisfies (4.2.1). For this purpose we consider two methods.
Method 1. Let us use the Lax equation (4.1.4), which is equivalent to the KdV equation. Let $q(x,t)$ be a solution of the mixed problem (4.2.1)-(4.2.2). By Lemma 4.1.1, the function $\dot\varphi - A\varphi$ is a solution of (4.2.3). Moreover, one can verify that $\dot\varphi - A\varphi = O(\exp(i\rho x))$. Hence $\dot\varphi - A\varphi = \gamma\varphi$, where $\gamma = \gamma(t,\lambda)$ does not depend on $x$. Taking $x = 0$, we calculate $\gamma = -(A\varphi)_{|x=0}$, and consequently
$$\dot\varphi = A\varphi - (A\varphi)_{|x=0}\,\varphi.$$
Differentiating this relation with respect to $x$ and taking $x = 0$, we get
$$\dot M = (A_1\varphi)_{|x=0} - M\,(A\varphi)_{|x=0}, \eqno(4.2.4)$$
where $A_1y := (Ay)'$. If $y$ is a solution of (4.2.3), then
$$Ay = -4y''' + 6qy' + 3q'y = F_{11}y + F_{12}y',$$
$$A_1y = -4y^{(4)} + 6qy'' + 9q'y' + 3q''y = F_{21}y + F_{22}y',$$
where $F = F(x,t,\lambda)$ is defined by
$$F = \begin{pmatrix} F_{11} & F_{12}\\ F_{21} & F_{22}\end{pmatrix}
= \begin{pmatrix} -q' & 4\lambda + 2q\\ (4\lambda+2q)(q-\lambda) - q'' & q'\end{pmatrix}. \eqno(4.2.5)$$
Hence
$$(A\varphi)_{|x=0} = F_{11}(0,t,\lambda) + F_{12}(0,t,\lambda)M(t,\lambda), \qquad
(A_1\varphi)_{|x=0} = F_{21}(0,t,\lambda) + F_{22}(0,t,\lambda)M(t,\lambda).$$
Substituting this into (4.2.4), we obtain the nonlinear ordinary differential equation
$$\dot M(t,\lambda) = F_{21}(0,t,\lambda) + \bigl(F_{22}(0,t,\lambda) - F_{11}(0,t,\lambda)\bigr)M(t,\lambda) - F_{12}(0,t,\lambda)M^2(t,\lambda). \eqno(4.2.6)$$
The functions $F_{jk}(0,t,\lambda)$ depend on $q_j(t)$, $j = 1,2,3$, only, i.e. they are known functions. Equation (4.2.6) gives us the evolution of the Weyl function $M(t,\lambda)$ with respect to $t$. This equation corresponds to the equations (4.1.6)-(4.1.7). But here the evolution equation (4.2.6) is nonlinear. This happens because of nonlinear reflection at the point $x = 0$. That is the reason why the mixed problem on the half-line is more difficult than the Cauchy problem.
Here is the algorithm for the solution of the mixed problem (4.2.1)-(4.2.2).
Algorithm 4.2.1. (1) Using the functions $q_0$, $q_1$, $q_2$ and $q_3$, construct the functions $M(0,\lambda)$ and $F_{jk}(0,t,\lambda)$.
(2) Calculate $M(t,\lambda)$ by solving equation (4.2.6) with the initial data $M(0,\lambda)$.
(3) Construct $q(x,t)$ by solving the inverse problem.
Remark 4.2.1. Equation (4.2.6) is a (scalar) Riccati equation. Therefore it follows from Radon's lemma (see Subsection 4.2.3 below) that the solution of (4.2.6) with given initial condition $M(0,\lambda)$ has the form
$$M(t,\lambda) = \frac{X_2(t,\lambda)}{X_1(t,\lambda)},$$
where $(X_1(t,\lambda), X_2(t,\lambda))$ is the unique solution of the initial value problem
$$\begin{pmatrix}\dot X_1(t,\lambda)\\ \dot X_2(t,\lambda)\end{pmatrix}
= F(0,t,\lambda)\begin{pmatrix}X_1(t,\lambda)\\ X_2(t,\lambda)\end{pmatrix}, \qquad
\begin{pmatrix}X_1(0,\lambda)\\ X_2(0,\lambda)\end{pmatrix} = \begin{pmatrix}1\\ M(0,\lambda)\end{pmatrix}.$$
For the convenience of the reader we present below in Subsection 4.2.3 a general version of Radon's lemma.
Method 2. Instead of the Lax equation we will use here another representation of the KdV equation. Denote
$$G = \begin{pmatrix}0 & 1\\ q-\lambda & 0\end{pmatrix}, \qquad U = \dot G - F' + GF - FG,$$
where $F$ is defined by (4.2.5). Then it follows by elementary calculations that
$$U = \begin{pmatrix}0 & 0\\ u & 0\end{pmatrix}, \quad\hbox{where}\quad u = q_t - 6qq_x + q_{xxx}.$$
Thus, the KdV equation is equivalent to the equation
$$U = 0. \eqno(4.2.7)$$
We shall see that equation (4.2.7) is sometimes more convenient for the solution of the mixed problem (4.2.1)-(4.2.2) than the Lax equation (4.1.4).
Let the matrices $W(x,t,\lambda)$ and $V(x,t,\lambda)$ be the solutions of the Cauchy problems
$$W' = GW, \quad W_{|x=0} = E; \qquad \dot V = FV, \quad V_{|t=0} = E, \qquad E = \begin{pmatrix}1&0\\0&1\end{pmatrix}. \eqno(4.2.8)$$
Denote by $C(x,t,\lambda)$ and $S(x,t,\lambda)$ the solutions of equation (4.2.3) under the initial conditions $C_{|x=0} = S'_{|x=0} = 1$, $S_{|x=0} = C'_{|x=0} = 0$. Clearly,
$$W = \begin{pmatrix}C & S\\ C' & S'\end{pmatrix}.$$
Lemma 4.2.1. Let $q(x,t)$ be a solution of (4.2.1)-(4.2.2). Then
$$\dot W(x,t,\lambda) = F(x,t,\lambda)\,W(x,t,\lambda) - W(x,t,\lambda)\,F(0,t,\lambda), \eqno(4.2.9)$$
$$W(x,t,\lambda) = V(x,t,\lambda)\,W(x,0,\lambda)\,V^{-1}(0,t,\lambda). \eqno(4.2.10)$$
Proof. Using (4.2.8) we calculate
$$(\dot W - FW)' - G(\dot W - FW) = UW.$$
Since $U = 0$, it follows that the matrix $\Lambda = \dot W - FW$ is a solution of the equation $\Lambda' = G\Lambda$. Moreover, $\Lambda_{|x=0} = -F(0,t,\lambda)$. Hence, $\Lambda(x,t,\lambda) = -W(x,t,\lambda)F(0,t,\lambda)$, i.e. (4.2.9) is valid.
Denote $Y = W(x,t,\lambda)V(0,t,\lambda)$, $Z = V(x,t,\lambda)W(x,0,\lambda)$. Then, using (4.2.8) and (4.2.9), we calculate $\dot Y = F(x,t,\lambda)Y$, $\dot Z = F(x,t,\lambda)Z$, $Y_{|t=0} = W(x,0,\lambda) = Z_{|t=0}$. By virtue of the uniqueness of the solution of the Cauchy problem, we conclude that $Y \equiv Z$, i.e. (4.2.10) is valid. □
Consider the matrices
$$\Phi = \begin{pmatrix}\varphi & S\\ \varphi' & S'\end{pmatrix}, \qquad
N = \begin{pmatrix}1&0\\ M&1\end{pmatrix}, \qquad
N_1 := N^{-1} = \begin{pmatrix}1&0\\ -M&1\end{pmatrix}.$$
Since $\varphi = C + MS$, we get $\Phi = WN$ and $W = \Phi N_1$. Then (4.2.9)-(4.2.10) take the form
$$\dot\Phi(x,t,\lambda) = F(x,t,\lambda)\,\Phi(x,t,\lambda) - \Phi(x,t,\lambda)\,D_0(t,\lambda), \eqno(4.2.11)$$
$$\Phi(x,t,\lambda) = V(x,t,\lambda)\,\Phi(x,0,\lambda)\,B(t,\lambda), \eqno(4.2.12)$$
where
$$D_0(t,\lambda) = \bigl(\dot N_1(t,\lambda) + N_1(t,\lambda)F(0,t,\lambda)\bigr)N(t,\lambda), \qquad
D_0 = \begin{pmatrix}d^0_{11} & d^0_{12}\\ d^0_{21} & d^0_{22}\end{pmatrix}, \eqno(4.2.13)$$
$$B(t,\lambda) = N_1(0,\lambda)\,V^{-1}(0,t,\lambda)\,N(t,\lambda), \qquad
B = \begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\end{pmatrix}. \eqno(4.2.14)$$
Lemma 4.2.2. Let $q(x,t)$ be a solution of (4.2.1)-(4.2.2). Then
$$\dot B(t,\lambda) = -B(t,\lambda)\,D_0(t,\lambda), \quad B(0,\lambda) = E, \eqno(4.2.15)$$
$$d^0_{21}(t,\lambda) \equiv 0, \qquad b_{21}(t,\lambda) \equiv 0. \eqno(4.2.16)$$
Proof. Differentiating (4.2.12) with respect to $t$ and using the equality $\dot V = FV$, we get
$$\dot\Phi = \dot V(\Phi_{|t=0})B + V(\Phi_{|t=0})\dot B = F\Phi + V(\Phi_{|t=0})\dot B. \eqno(4.2.17)$$
On the other hand, it follows from (4.2.11) and (4.2.12) that
$$\dot\Phi = F\Phi - V(\Phi_{|t=0})BD_0.$$
Together with (4.2.17) this gives $\dot B = -BD_0$. Moreover, according to (4.2.12), $B(0,\lambda) = E$.
Rewriting (4.2.11) in coordinates, we obtain
$$S\,d^0_{21} = -\dot\varphi + F_{11}\varphi + F_{12}\varphi' - \varphi\,d^0_{11}. \eqno(4.2.18)$$
Since
$$-\dot\varphi + F_{11}\varphi + F_{12}\varphi' - \varphi\,d^0_{11} = O(\exp(i\rho x)), \quad x\to\infty,$$
equality (4.2.18) is possible only if $d^0_{21} \equiv 0$. Then (4.2.15) yields
$$\dot b_{21} = -b_{21}\,d^0_{11}, \quad b_{21|t=0} = 0,$$
and consequently $b_{21} \equiv 0$. □
Denote
$$R(t,\lambda) := V^{-1}(0,t,\lambda), \qquad R = \begin{pmatrix}R_{11}&R_{12}\\ R_{21}&R_{22}\end{pmatrix}.$$
Since $R(V_{|x=0}) = E$, we calculate $\dot R(V_{|x=0}) + R(\dot V_{|x=0}) = 0$. Thus, the matrix $R(t,\lambda)$ is a solution of the Cauchy problem
$$\dot R(t,\lambda) = -R(t,\lambda)\,\hat F(t,\lambda), \quad R(0,\lambda) = E, \eqno(4.2.19)$$
where
$$\hat F = \begin{pmatrix}\hat F_{11}&\hat F_{12}\\ \hat F_{21}&\hat F_{22}\end{pmatrix}
= \begin{pmatrix}-q_2 & 4\lambda+2q_1\\ (4\lambda+2q_1)(q_1-\lambda)-q_3 & q_2\end{pmatrix}.$$
It is important that (4.2.19) is a linear system.
Lemma 4.2.3. Let $q(x,t)$ be a solution of (4.2.1)-(4.2.2). Then
$$\dot M(t,\lambda) = \hat F_{21}(t,\lambda) + \bigl(\hat F_{22}(t,\lambda) - \hat F_{11}(t,\lambda)\bigr)M(t,\lambda) - \hat F_{12}(t,\lambda)M^2(t,\lambda), \eqno(4.2.20)$$
$$M(t,\lambda) = -\frac{M_0(\lambda)R_{11}(t,\lambda) - R_{21}(t,\lambda)}{M_0(\lambda)R_{12}(t,\lambda) - R_{22}(t,\lambda)}, \eqno(4.2.21)$$
where $M_0(\lambda) = M(0,\lambda)$.
Proof. Rewriting (4.2.13) in coordinates, we get
$$d^0_{21}(t,\lambda) = -\dot M(t,\lambda) + F_{21}(0,t,\lambda) + \bigl(F_{22}(0,t,\lambda) - F_{11}(0,t,\lambda)\bigr)M(t,\lambda) - F_{12}(0,t,\lambda)M^2(t,\lambda). \eqno(4.2.22)$$
By virtue of (4.2.16), $d^0_{21} \equiv 0$; since, by (4.2.2), $F_{jk}(0,t,\lambda) = \hat F_{jk}(t,\lambda)$, this yields (4.2.20). Rewriting (4.2.14) in coordinates, we have
$$b_{21}(t,\lambda) = -M(0,\lambda)\bigl(R_{11}(t,\lambda) + M(t,\lambda)R_{12}(t,\lambda)\bigr) + \bigl(R_{21}(t,\lambda) + M(t,\lambda)R_{22}(t,\lambda)\bigr).$$
By virtue of (4.2.16), $b_{21} \equiv 0$, i.e. (4.2.21) holds. □
Formula (4.2.21) gives us the solution of the nonlinear evolution equation (4.2.20). Thus, we obtain the following algorithm for the solution of the mixed problem (4.2.1)-(4.2.2) by the inverse problem method.
Algorithm 4.2.2. (1) Using the functions $q_0$, $q_1$, $q_2$ and $q_3$, construct the functions $M_0(\lambda)$ and $\hat F_{jk}(t,\lambda)$.
(2) Find $R(t,\lambda)$ by solving the Cauchy problem (4.2.19).
(3) Calculate $M(t,\lambda)$ by (4.2.21).
(4) Construct $q(x,t)$ by solving the inverse problem (see Chapter 2).
4.2.2. In this subsection we study the existence of the solution of the mixed problem (4.2.1)-(4.2.2) and provide necessary and sufficient conditions for its solvability. The following theorem shows that the existence of the solution of (4.2.1)-(4.2.2) is equivalent to the solvability of the corresponding inverse problem.
Theorem 4.2.1. Let continuous complex-valued functions $q_0(x)$, $q_1(t)$, $q_2(t)$, $q_3(t)$, $x \ge 0$, $t \ge 0$, be given such that $q_0(x) \in L(0,\infty)$, $q_0(0) = q_1(0)$. Construct the function $M(t,\lambda)$ according to steps (1)-(3) of Algorithm 4.2.2. Assume that there exists a function $q(x,t) \in Q$ for which $M(t,\lambda)$ is the Weyl function. Then $q(x,t)$ is the solution of the mixed problem (4.2.1)-(4.2.2).
Thus, the problem (4.2.1)-(4.2.2) has a solution if and only if the solution of the corresponding inverse problem exists. Necessary and sufficient conditions for the solvability of the inverse problem of recovering the Sturm-Liouville operator on the half-line from its Weyl function, together with an algorithm for the solution, are given in Chapter 2.
Proof. We will use the notations of Subsection 4.2.1. Since
$$(\dot W - FW)' - G(\dot W - FW) = UW, \qquad (\dot W - FW)_{|x=0} = -F(0,t,\lambda),$$
we get
$$\dot W(x,t,\lambda) = F(x,t,\lambda)\,W(x,t,\lambda) - W(x,t,\lambda)\,\Theta(x,t,\lambda), \eqno(4.2.23)$$
where
$$\Theta(x,t,\lambda) = F(0,t,\lambda) - \int_0^x W^{-1}(s,t,\lambda)\,U(s,t)\,W(s,t,\lambda)\,ds.$$
It follows from (4.2.23) and the relation $W = \Phi N_1$ that
$$\dot\Phi(x,t,\lambda) = F(x,t,\lambda)\,\Phi(x,t,\lambda) - \Phi(x,t,\lambda)\,D(x,t,\lambda), \eqno(4.2.24)$$
where
$$D(x,t,\lambda) = \bigl(\dot N_1(t,\lambda) + N_1(t,\lambda)\Theta(x,t,\lambda)\bigr)N(t,\lambda), \qquad
D = \begin{pmatrix}d_{11}&d_{12}\\ d_{21}&d_{22}\end{pmatrix}. \eqno(4.2.25)$$
Rewriting (4.2.24) and (4.2.25) in coordinates we obtain
$$d_{11}(x,t,\lambda) = d^0_{11}(t,\lambda) + \int_0^x u(s,t)\,\varphi(s,t,\lambda)\,S(s,t,\lambda)\,ds, \eqno(4.2.26)$$
$$d_{21}(x,t,\lambda) = d^0_{21}(t,\lambda) - \int_0^x u(s,t)\,\varphi^2(s,t,\lambda)\,ds, \eqno(4.2.27)$$
$$S\,d_{21} = -\dot\varphi + F_{11}\varphi + F_{12}\varphi' - \varphi\,d_{11}. \eqno(4.2.28)$$
Fix $t \ge 0$ and $\lambda$ ($\lambda = \rho^2$, ${\rm Im}\,\rho > 0$). It follows from (4.2.26) that $d_{11} = O(1)$, $x\to\infty$. From this and (4.2.28) we get $d_{21} = O(\exp(2i\rho x))$, $x\to\infty$, and consequently $d_{21} \to 0$ as $x\to\infty$. Then, by virtue of (4.2.27),
$$d^0_{21}(t,\lambda) = \int_0^\infty u(x,t)\,\varphi^2(x,t,\lambda)\,dx.$$
Together with (4.2.22) this yields
$$\dot M(t,\lambda) = F_{21}(0,t,\lambda) + \bigl(F_{22}(0,t,\lambda) - F_{11}(0,t,\lambda)\bigr)M(t,\lambda) - F_{12}(0,t,\lambda)M^2(t,\lambda)
- \int_0^\infty u(x,t)\,\varphi^2(x,t,\lambda)\,dx. \eqno(4.2.29)$$
Furthermore, it follows from (4.2.21) that
M(0, ) = M
0
(). (4.2.30)
Since the specication of the Weyl function uniquely determines the potential we have from
(4.2.30) that
q(x, 0) = q
0
(x). (4.2.31)
Dierentiating (4.2.21) with respect to t we arrive at (4.2.20). Comparing (4.2.20) with
(4.2.29) we obtain

F
21
(t, ) + (

F
22
(t, )

F
11
(t, ))M(t, )

F
12
(t, )M
2
(t, )
_

0
u(x, t)
2
(x, t, ) dx,
where

F
jk
(t, ) = F
jk
(0, t, )

F
jk
(t, ). Hence

_
q
xx
(0, t) q
3
(t)
_
+ 2
_
q(0, t) q
1
(t)
_
+ 2
_
q
2
(0, t) q
2
1
(t)
_
+ 2
_
q
x
(0, t) q
2
(t)
_
M(t, )
2
_
q(0, t) q
1
(t)
_
M
2
(t, )
_

0
u(x, t)
2
(x, t, ) dx = 0. (4.2.32)
Since
(x, t, ) =
e(x, t, )
e(0, t, )
, M(t, ) =
e

(0, t, )
e(0, t, )
,
we have for a xed t,
M(t, ) = (i) + O
_
1

_
,
_

0
u(x, t)
2
(x, t, ) dx = O
_
1

_
, [[ .
Then, (4.2.32) implies
q(0, t) = q
1
(t), q
x
(0, t) = q
2
(t), q
xx
(0, t) = q
3
(t), (4.2.33)
_

0
u(x, t)
2
(x, t, ) dx = 0. (4.2.34)
In particular, (4.2.34) yields
_

0
u(x, t)e
2
(x, t, ) dx = 0. (4.2.35)
According to (2.1.33),
e(x, t, ) = exp(ix) +
_

x
A(x, t, ) exp(i) d,
and consequently,
e
2
(x, t, ) = exp(2ix) +
_

x
E(x, t, ) exp(2i) d, (4.2.36)
where
E(x, t, ) = 4A(x, t, 2 x) +
_
2x
x
A(x, s)A(x, 2 s) ds.
215
Substituting (4.2.36) into (4.2.35) we get
_

0
_
u(x, t) +
_
x
0
E(, t, x)u(, t) d
_
exp(2ix) dx = 0.
Hence, u(x, t) = 0, i.e. the function q(x, t) satises the KdV equation (4.2.1). Together
with (4.2.31) and (4.2.33) this yields that q(x, t) is a solution of the mixed problem (4.2.1)-
(4.2.2). Theorem 4.2.1 is proved. 2
4.2.3. In this subsection we formulate and prove Radons lemma for the matrix Riccati
equation.
Let us consider the following Cauchy problem for the Riccati equation:

Z = Q
21
(t) + Q
22
(t)Z ZQ
11
(t) ZQ
12
(t)Z, t [t
0
, t
1
],
Z(t
0
) = Z
0
,
_
_
_
(4.2.37)
where Z(t), Z
0
, Q
11
(t), Q
12
(t), Q
21
(t) and Q
22
(t) are complex matrix of dimensions m
n, mn, n n, n m, mn, and mm respectively, and Q
jk
(t) C[t
0
, t
1
].
Let the matrices (X(t), Y (t), t [t
0
, t
1
]), be the unique solution of the linear Cauchy
problem

X = Q
11
(t)X +Q
12
(t)Y, X(t
0
) = E,

Y = Q
21
(t)X +Q
22
(t)Y, Y (t
0
) = Z
0
,
_
_
_
(4.2.38)
where E is the identity n n matrix.
Radons Lemma.The Cauchy problem (4.2.37) has a unique solution Z(t) C
1
[t
0
, t
1
],
if and only if det X(t) ,= 0 for all t [t
0
, t
1
]. Moreover,
Z(t) = Y (t)(X(t))
1
. (4.2.39)
Proof. 1) Let det X(t) ,= 0 for all t [t
0
, t
1
]. Dene the matrix Z(t) via (4.2.39).
Then, using (4.2.38), we get
Z(t
0
) = Y (t
0
)(X(t
0
))
1
= Z
0
,

Z =

Y X
1
Y X
1

XX
1
= (Q
21
X +Q
22
Y )X
1
Y X
1
(Q
11
X +Q
12
Y )X
1
= Q
21
+Q
22
Z ZQ
11
ZQ
12
Z.
Thus, Z(t) is the solution of (4.2.37).
2) Let Z(t) C
1
[t
0
, t
1
] be a solution of (4.2.37). Denote by

X(t) the solution of the
Cauchy problem

X = (Q
11
(t) + Q
12
(t)Z(t))

X,

X(t
0
) = E. (4.2.40)
Clearly, det

X(t) ,= 0 for all t [t
0
, t
1
]. Take

Y (t) := Z(t)

X(t).
Then, it follows from (4.2.37) and (4.2.40) that

Y =

Z

X +Z

X = Q
21

X +Q
22

Y ,
216

X = Q
11

X +Q
12

Y ,
and

X(t
0
) = E,

Y (t
0
) = Z
0
. By virtue of the uniqueness theorem, this yields

X(t) X(t),

Y (t) Y (t),
and Radons lemma is proved. 2
4.3. CONSTRUCTING PARAMETERS OF A MEDIUM FROM
INCOMPLETE SPECTRAL INFORMATION
In this section the inverse problem of synthesizing parameters of dierential equations
with singularities from incomplete spectral information. We establish properties of the spec-
tral characteristics, obtain conditions for the solvability of such classes of inverse problems
and provide algorithms for constructing the solution.
4.3.1. Properties of spectral characteristics.
Let us consider the system
dy
1
dx
= iR(x) y
2
,
dy
2
dx
= i
1
R(x)
y
1
, x [0, T] (4.3.1)
with the initial conditions
y
1
(0, ) = 1, y
2
(0, ) = 1.
Here = k +i is the spectral parameter, and R(x) is a real function.
For a wide class of problems describing the propagation of electromagnetic waves in a stratified medium, Maxwell's equations can be reduced to the canonical form (4.3.1), where $x$ is the variable in the direction of stratification, $y_1$ and $y_2$ are the components of the electromagnetic field, $R(x)$ is the wave resistance which describes the refractive properties of the medium, and $\lambda$ is the wave number in a vacuum. System (4.3.1) also appears in radio engineering in the design of directional couplers for heterogeneous electronic lines, which constitute one of the important classes of radiophysical synthesis problems [mes1].
Some aspects of synthesis problems for system (4.3.1) with a positive $R(x)$ were studied in [lit1], [mes1], [tik1]-[tik3], [zuy1] and other works. In this section we partially use these results. We study the inverse problem for system (4.3.1) from incomplete spectral information in the case when $R(x)$ is nonnegative and can have zeros, which are called turning points. More precisely, we shall consider two classes of functions $R(x)$. We shall say that $R(x)\in B_0$ if $R(x),R'(x)$ are absolutely continuous on $[0,T]$, $R''(x)\in L_2(0,T)$, $R(x)>0$, $R(0)=1$, $R'(0)=0$. We also consider the more general case when $R(x)$ has zeros $0<x_1<\ldots<x_p<T$, $p\ge 0$, inside the interval $(0,T)$. We shall say that $R(x)\in B_0^+$ if $R(x)$ has the form
$$\frac{1}{R(x)}=\sum_{j=1}^{p}\frac{R_j}{(x-x_j)^2}+R_0(x),\qquad 0<x_1<\ldots<x_p<T,\quad R_j>0,\ 1\le j\le p,$$
and $R_0(x),R_0'(x)$ are absolutely continuous on $[0,T]$, $R_0''(x)\in L_2(0,T)$, $R(x)>0$ $(x\ne x_j)$, $R(0)=1$, $R'(0)=0$. In particular, if here $p=0$, then $R(x)\in B_0$.
As the main spectral characteristics we introduce the amplitude reflection coefficient
$$r(\lambda)=\frac{y_1(T,\lambda)+R_0\,y_2(T,\lambda)}{y_1(T,\lambda)-R_0\,y_2(T,\lambda)},\qquad R_0:=R(T);$$
the power reflection coefficient
$$\rho(k)=|r(k)|^2,\qquad k:={\rm Re}\,\lambda;$$
the transmission coefficients
$$f_1(\lambda)=\frac{y_1(T,\lambda)-R_0\,y_2(T,\lambda)}{2\sqrt{R_0}},\qquad f_2(\lambda)=\frac{y_1(T,\lambda)+R_0\,y_2(T,\lambda)}{2\sqrt{R_0}},$$
and the characteristic function
$$\Delta(\lambda)=\frac{1}{\sqrt{R_0}}\,y_1(T,\lambda).$$
Clearly,
$$r(\lambda)=f_2(\lambda)/f_1(\lambda),\qquad (4.3.2)$$
$$\Delta(\lambda)=f_1(\lambda)+f_2(\lambda).\qquad (4.3.3)$$
In many cases of practical interest the phase is difficult or impossible to measure, while the amplitude is easily accessible to measurement. Such cases lead us to so-called incomplete inverse problems, where only a part of the spectral information is available. In this section we study one of these incomplete inverse problems, namely the inverse problem of recovering the wave resistance from the power reflection coefficient:

Inverse Problem 4.3.1. Given $\rho(k)$, construct $R(x)$.

The lack of spectral information leads here to nonuniqueness of the solution of the inverse problem. Let us briefly describe a scheme of the solution of Inverse Problem 4.3.1. Denote $\rho_j(k)=|f_j(k)|$, $j=1,2$. Since $\rho_1^2(k)-\rho_2^2(k)\equiv 1$ (see (4.3.19) below), we get in view of (4.3.2),
$$\rho(k)=1-(\rho_1(k))^{-2},\qquad 0\le\rho(k)<1.$$
Firstly, from the given power reflection coefficient $\rho(k)$ we construct $\rho_j(k)$. Then, using analytic properties of the transmission coefficients $f_j(\lambda)$ and, in particular, information about their zeros, we reconstruct the transmission coefficients from their amplitudes. It is at this stage that we are faced with nonuniqueness. Problems concerning the reconstruction of analytic functions from their moduli often appear in applications and have been studied in many works (see [hoe1], [hur1] and references therein). The last of our steps is to calculate the characteristic function $\Delta(\lambda)$ by (4.3.3) and to solve the inverse problem of recovering $R(x)$ from $\Delta(\lambda)$. Here we use the Gelfand-Levitan-Marchenko method (see [mar1], [lev2] and Chapters 1-2 of this book).

To realize this scheme, in Subsection 4.3.1 we study properties of the spectral characteristics. In Subsection 4.3.2 we solve the synthesis problem for $R(x)$ from the characteristic function $\Delta(\lambda)$, obtain necessary and sufficient conditions for its solvability and provide three algorithms for constructing the solution. In Subsection 4.3.3 we study the problem of recovering the transmission coefficients from their moduli. In Subsection 4.3.4 the so-called symmetrical case is considered, and in Subsection 4.3.5 we provide an algorithm for the solution of Inverse Problem 4.3.1. We note that inverse problems for Sturm-Liouville equations with turning points have been studied in [FY1] and [FY3]. Some aspects of the turning point theory and a number of its applications are described in [ebe4], [mch1] and [was1].
We shall say that $R(x)\in B_0^-$ if $(R(x))^{-1}\in B_0^+$. The investigation of the classes $B_0^+$ and $B_0^-$ is completely similar, because the replacement $R\to 1/R$ is equivalent to the replacement $(y_1,y_2)\to(y_2,y_1)$. Below it will be more convenient for us to consider the case $R(x)\in B_0^-$.
We transform (4.3.1) by means of the substitution
$$y_1(x,\lambda)=\sqrt{R(x)}\,u(x,\lambda),\qquad y_2(x,\lambda)=\frac{1}{\sqrt{R(x)}}\,v(x,\lambda)\qquad (4.3.4)$$
to the system
$$u'+h(x)u=i\lambda v,\qquad v'-h(x)v=i\lambda u,\qquad x\in[0,T]\qquad (4.3.5)$$
with the initial conditions $u(0,\lambda)=1$, $v(0,\lambda)=-1$, where
$$h(x)=\frac{R'(x)}{2R(x)}.\qquad (4.3.6)$$
Hence the function $u(x,\lambda)$ satisfies the equation
$$-u''+q(x)u=\mu u,\qquad \mu=\lambda^2\qquad (4.3.7)$$
and the initial conditions
$$u(0,\lambda)=1,\qquad u'(0,\lambda)=-i\lambda,$$
where
$$q(x)=h^2(x)-h'(x)\qquad (4.3.8)$$
or
$$q(x)=\frac{3}{4}\Bigl(\frac{R'(x)}{R(x)}\Bigr)^2-\frac{1}{2}\,\frac{R''(x)}{R(x)}.$$
Similarly,
$$-v''+g(x)v=\mu v,\qquad v(0,\lambda)=-1,\quad v'(0,\lambda)=i\lambda,$$
where $g(x)=h^2(x)+h'(x)$.

In view of (4.3.4), the transmission coefficients, the amplitude reflection coefficient and the characteristic function take the form
$$f_1(\lambda)=\frac{u(T,\lambda)-v(T,\lambda)}{2},\qquad f_2(\lambda)=\frac{u(T,\lambda)+v(T,\lambda)}{2},$$
$$r(\lambda)=\frac{u(T,\lambda)+v(T,\lambda)}{u(T,\lambda)-v(T,\lambda)}=\frac{f_2(\lambda)}{f_1(\lambda)},\qquad \Delta(\lambda)=u(T,\lambda).\qquad (4.3.9)$$
Since $y_1(x,0)\equiv 1$, we have according to (4.3.4),
$$\sqrt{R(x)}\,u(x,0)\equiv 1.\qquad (4.3.10)$$
Lemma 4.3.1. $R(x)\in B_0^-$ if and only if $q(x)\in L_2(0,T)$, $\Delta(0)\ne 0$.

Proof. 1) Let $R(x)\in B_0^-$, i.e.
$$R(x)=\sum_{j=1}^{p}\frac{R_j}{(x-x_j)^2}+R_0(x),\qquad 0<x_1<\ldots<x_p<T,\quad R_j>0,$$
$$R_0(x)\in W_2^2(0,T),\qquad R(x)>0\ (x\ne x_j),\qquad R(0)=1,\ R'(0)=0.\qquad (4.3.11)$$
Using (4.3.6) and (4.3.11) we get for $x\to x_j$,
$$h(x)=-\frac{1}{x-x_j}+h_j^*(x),\qquad h_j^*(x)\in W_2^1,\quad h_j^*(x_j)=0,$$
and consequently $q(x)\in L_2(0,T)$. It follows from (4.3.10) that $u(T,0)\ne 0$, i.e. $\Delta(0)\ne 0$.

2) Let now $q(x)\in L_2(0,T)$, $u(T,0)\ne 0$, and let $0<x_1<\ldots<x_p<T$ be the zeros of $u(x,0)$. It follows from (4.3.10) that
$$R(x)=(u(x,0))^{-2}.$$
Hence $R(x)>0$ $(x\ne x_j)$, $R(0)=1$, $R'(0)=0$, and
$$R(x)\sim\frac{R_j}{(x-x_j)^2},\qquad x\to x_j,\quad R_j>0.$$
Denote
$$R_0(x):=R(x)-\sum_{j=1}^{p}\frac{R_j}{(x-x_j)^2}.$$
It is easy to see that $R_0(x)\in W_2^2[0,T]$. □
Remark 4.3.1. If $R(x)\in B_0^-$, then the functions $y_1(x,\lambda)$ and $v(x,\lambda)$ have singularities of order $1$ at $x=x_j$, and there exist finite limits $\lim_{x\to x_j}(x-x_j)v(x,\lambda)$ and $\lim_{x\to x_j}(x-x_j)y_1(x,\lambda)$. In other words, we continue the solutions into the neighbourhoods of the singular points with the help of generalized Bessel-type solutions (see [yur28]). It follows from (4.3.10) that
$$\lim_{x\to x_j}(x-x_j)\sqrt{R(x)}=\sqrt{R_j}.$$

Example 4.3.1. Let $R(x)=\dfrac{1}{\cos^2x}$. Then $h(x)=\tan x$, $q(x)=-1$,
$$u(x,0)=\frac{1}{\sqrt{R(x)}}=\cos x,\qquad u(x,\lambda)=\cos\mu x-i\lambda\,\frac{\sin\mu x}{\mu},\qquad \mu^2=\lambda^2+1.$$
If $T\in\Bigl(\dfrac{(2p-1)\pi}{2},\dfrac{(2p+1)\pi}{2}\Bigr)$, then $u(x,0)$ has $p$ zeros $x_j=\dfrac{(2j-1)\pi}{2}$, $j=\overline{1,p}$.
In the sequel, we denote by AC[a, b] the set of absolutely continuous functions on the
segment [a, b].
Theorem 4.3.1. Let $R(x)\in B_0^-$. Then:

(i) The characteristic function $\Delta(\lambda)$ is entire in $\lambda$, and the following representation holds:
$$\Delta(\lambda)=e^{-i\lambda T}+\int_{-T}^{T}\sigma(t)e^{-i\lambda t}\,dt,\quad \sigma(t)\in AC[-T,T],\ \sigma'(t)\in L_2(-T,T),\ \sigma(-T)=0,\qquad (4.3.12)$$
where $\sigma(t)$ is a real function, and
$$\sigma(T)=\frac{1}{2}\int_0^Tq(t)\,dt=-\frac{h(T)}{2}+\frac{1}{2}\int_0^Th^2(t)\,dt.$$

(ii) For real $\lambda$, $\Delta(\lambda)$ has no zeros. For ${\rm Im}\,\lambda>0$, $\Delta(\lambda)$ has at most a finite number of simple zeros of the form $\lambda_j=i\tau_j$, $\tau_j>0$, $j=\overline{1,m}$, $m\ge 0$.

(iii) The function $\Delta(\lambda)$ has no zeros for ${\rm Im}\,\lambda\ge 0$ if and only if $R(x)\in B_0$.
Proof. 1) It is well-known (see [mar1] and Section 1.3 of this book) that $u(x,\lambda)$ has the form
$$u(x,\lambda)=e^{-i\lambda x}+\int_{-x}^{x}K(x,t)e^{-i\lambda t}\,dt,$$
where $K(x,t)$ is a real, absolutely continuous function, $K_t(T,t)\in L_2(-T,T)$, $K(x,-x)=0$, $K(x,x)=\frac{1}{2}\int_0^xq(t)\,dt$. Moreover,
$$\overline{u(x,\overline\lambda)}=u(x,-\lambda),\qquad (4.3.13)$$
$$\langle u(x,\lambda),u(x,-\lambda)\rangle\equiv 2i\lambda,\qquad (4.3.14)$$
where $\langle y,z\rangle:=yz'-y'z$. Hence we arrive at (4.3.12), where $\sigma(t)=K(T,t)$.

2) It follows from (4.3.13) that for real $\lambda$, $\Delta(-\lambda)=\overline{\Delta(\lambda)}$, and consequently, in view of (4.3.14), for real $\lambda\ne 0$, $\Delta(\lambda)$ has no zeros. By virtue of Lemma 4.3.1, $\Delta(0)\ne 0$. Furthermore, denote $\Omega^+=\{\lambda:\ {\rm Im}\,\lambda>0\}$. Let $\lambda=k+i\tau\in\Omega^+$ be a zero of $\Delta(\lambda)$. Since
$$-u''+q(x)u=\lambda^2u,\qquad -\overline u''+q(x)\overline u=\overline\lambda^2\,\overline u,$$
we get
$$(\lambda^2-\overline\lambda^2)\int_0^T|u|^2\,dx=\langle u,\overline u\rangle\Big|_0^T=-i(\lambda+\overline\lambda),$$
which is possible only if $k:={\rm Re}\,\lambda=0$. Hence all zeros of $\Delta(\lambda)$ in $\Omega^+$ are pure imaginary, and by virtue of (4.3.12), the number of zeros in $\Omega^+$ is finite.

3) Let us show that in $\Omega^+$ all zeros of $\Delta(\lambda)$ are simple. Suppose that for a certain $\lambda=i\tau$, $\tau>0$, we have $\Delta(\lambda)=\dot\Delta(\lambda)=0$, where $\dot\Delta(\lambda)=\frac{d}{d\lambda}\Delta(\lambda)$. Denote $u_1=\frac{\partial u}{\partial\lambda}$. Then, by virtue of (4.3.7),
$$-u_1''+q(x)u_1=\lambda^2u_1+2\lambda u,\qquad u_1(0,\lambda)=0,\quad u_1'(0,\lambda)=-i.$$
Consequently,
$$2\lambda\int_0^Tu^2(x,\lambda)\,dx=\int_0^Tu(x,\lambda)\bigl(-u_1''(x,\lambda)+q(x)u_1(x,\lambda)-\lambda^2u_1(x,\lambda)\bigr)dx=-\langle u,u_1\rangle\Big|_0^T=-i,$$
i.e.
$$2\tau\int_0^Tu^2(x,i\tau)\,dx=-1,$$
which is impossible, since $u(x,i\tau)$ is real.

4) If $R(x)\in B_0$, then in view of (4.3.10), $u(x,0)>0$, $x\in[0,T]$. Since $u(0,i\tau)=1$, $\tau\ge 0$, and $u(x,i\tau)>0$, $x\in[0,T]$, for large $\tau$, we get $u(T,i\tau)>0$ for all $\tau\ge 0$, i.e. $\Delta(\lambda)$ has no zeros in $\overline{\Omega^+}$. The inverse assertion is proved similarly. □
In an analogous manner one can prove the following theorem.

Theorem 4.3.2. Let $R(x)\in B_0^-$. Then:

(i) The functions $f_1(\lambda)$, $f_2(\lambda)$ are entire in $\lambda$ and have the form
$$f_1(\lambda)=e^{-i\lambda T}+\int_{-T}^{T}g_1(t)e^{-i\lambda t}\,dt,\quad g_1(t)\in AC[-T,T],\ g_1'(t)\in L_2(-T,T),\ g_1(-T)=0,\qquad (4.3.15)$$
$$f_2(\lambda)=\int_{-T}^{T}g_2(t)e^{-i\lambda t}\,dt,\quad g_2(t)\in AC[-T,T],\ g_2'(t)\in L_2(-T,T),\ g_2(-T)=0,\qquad (4.3.16)$$
where $g_j(t)$ are real, and
$$g_1(T)=\frac{1}{2}\int_0^Th^2(t)\,dt,\qquad g_2(T)=-\frac{h}{2},\qquad h:=h(T).\qquad (4.3.17)$$

(ii)
$$f_1(\lambda)f_1(-\lambda)-f_2(\lambda)f_2(-\lambda)\equiv 1.\qquad (4.3.18)$$

(iii) For real $\lambda$, $f_1(\lambda)$ has no zeros. In $\Omega^+$, $f_1(\lambda)$ has a finite number of simple zeros of the form $\tilde\lambda_j=i\tilde\tau_j$, $\tilde\tau_j>0$, $j=\overline{1,\tilde m}$, $\tilde m\ge 0$.

(iv) If $R(x)\in B_0$, then $f_1(\lambda)$ has no zeros in $\Omega^+$, i.e. $\tilde m=0$.
We note that (4.3.18) follows from (4.3.14). Since $f_j(-k)=\overline{f_j(k)}$ for real $k$, we obtain from (4.3.18) that
$$\rho_1^2(k)-\rho_2^2(k)\equiv 1.\qquad (4.3.19)$$
It follows from (4.3.9), (4.3.15), (4.3.16) and (4.3.19) that
$$\rho_1^2(k)=1+\frac{h^2}{4k^2}+\frac{\xi(k)}{k^2},\qquad \rho_2^2(k)=\frac{h^2}{4k^2}+\frac{\xi(k)}{k^2},$$
$$\rho(k)=\frac{h^2}{4k^2+h^2}+\frac{\xi_1(k)}{k^2},\qquad |k|\to\infty,\qquad (4.3.20)$$
where $\xi(k),\xi_1(k)\in L_2(-\infty,\infty)$.
Lemma 4.3.2. Denote $g_j^*(t)=g_j(t-T)$. Then
$$g_1^*(\xi)+\int_0^{\xi}g_1^*(t)\,g_1^*(t+2T-\xi)\,dt=\int_0^{\xi}g_2^*(t)\,g_2^*(t+2T-\xi)\,dt,\qquad \xi\in[0,2T].\qquad (4.3.21)$$

Proof. Indeed, using (4.3.15) and (4.3.16), we calculate
$$f_1(\lambda)f_1(-\lambda)=1+\int_{-2T}^{2T}G_1(\xi)e^{i\lambda\xi}\,d\xi,\qquad f_2(\lambda)f_2(-\lambda)=\int_{-2T}^{2T}G_2(\xi)e^{i\lambda\xi}\,d\xi,\qquad (4.3.22)$$
where $G_j(-\xi)=G_j(\xi)$,
$$G_1(\xi)=g_1(T-\xi)+\int_{-T}^{T-\xi}g_1(t)g_1(t+\xi)\,dt,\qquad G_2(\xi)=\int_{-T}^{T-\xi}g_2(t)g_2(t+\xi)\,dt,\qquad \xi>0.$$
Substituting (4.3.22) into (4.3.18) we arrive at (4.3.21). □
Denote
$$p(x)=\begin{cases}q(T-x)&\text{for }0\le x\le T,\\ 0&\text{for }x>T,\end{cases}$$
and consider the equation
$$-y''+p(x)y=\lambda^2y,\qquad x>0\qquad (4.3.23)$$
on the half-line. Let $e(x,\lambda)$ be the Jost solution of (4.3.23) such that $e(x,\lambda)\equiv e^{i\lambda x}$ for $x\ge T$. Denote $\tilde\Delta(\lambda)=e(0,\lambda)$. Clearly,
$$e(x,\lambda)=e^{i\lambda T}u(T-x,\lambda),\quad 0\le x\le T,\qquad \tilde\Delta(\lambda)=e^{i\lambda T}\Delta(\lambda).$$
Then
$$e(x,\lambda)=e^{i\lambda x}+\int_x^{2T-x}G(x,t)e^{i\lambda t}\,dt,\quad 0\le x\le T;\qquad e(x,\lambda)\equiv e^{i\lambda x},\quad x\ge T,\qquad (4.3.24)$$
$$\tilde\Delta(\lambda)=1+\int_0^{2T}\tilde\sigma(t)e^{i\lambda t}\,dt,\quad \tilde\sigma(t)\in AC[0,2T],\ \tilde\sigma'(t)\in L_2(0,2T),\ \tilde\sigma(2T)=0,\qquad (4.3.25)$$
where $G(x,t)=K(T-x,T-t)$, $\tilde\sigma(t)=\sigma(T-t)=G(0,t)$, $G(x,2T-x)=0$,
$$\tilde\sigma(0)=\frac{1}{2}\int_0^Tp(t)\,dt,\qquad G(x,x)=\frac{1}{2}\int_x^Tp(\xi)\,d\xi.$$
Denote
$$\alpha_j=\Bigl(\int_0^{\infty}e^2(x,\lambda_j)\,dx\Bigr)^{-1}>0,\qquad \lambda_j=i\tau_j,\ \tau_j>0,\ j=\overline{1,m}.$$
Lemma 4.3.3.
$$\alpha_j=\frac{i\tilde\Delta(-\lambda_j)}{\dot{\tilde\Delta}(\lambda_j)},\qquad (4.3.26)$$
where $\dot{\tilde\Delta}(\lambda)=\frac{d}{d\lambda}\tilde\Delta(\lambda)$.

Proof. Since $-e''+p(x)e=\lambda^2e$ and $\tilde\Delta(\lambda_j)=0$, we have
$$(\lambda^2-\lambda_j^2)\int_0^{\infty}e(x,\lambda)e(x,\lambda_j)\,dx=\langle e(x,\lambda),e(x,\lambda_j)\rangle\Big|_0^{\infty}=-e'(0,\lambda_j)\tilde\Delta(\lambda).$$
This yields
$$\alpha_j=-\frac{2\lambda_j}{e'(0,\lambda_j)\,\dot{\tilde\Delta}(\lambda_j)}.\qquad (4.3.27)$$
Since $\langle e(x,\lambda),e(x,-\lambda)\rangle\equiv-2i\lambda$, we get
$$\tilde\Delta(-\lambda_j)\,e'(0,\lambda_j)=2i\lambda_j.\qquad (4.3.28)$$
Together with (4.3.27) this gives (4.3.26). □
4.3.2. Synthesis of $R(x)$ from the characteristic function $\Delta(\lambda)$.

In this subsection we study the inverse problem of recovering the wave resistance $R(x)$ from the given $\Delta(\lambda)$ in the class $B_0^-$. This inverse problem has a unique solution. We provide physical realization conditions, i.e. necessary and sufficient conditions on a function $\Delta(\lambda)$ to be the characteristic function for a certain $R(x)\in B_0^-$. We also obtain three algorithms for the solution of the inverse problem.
Denote $s(\lambda)=\dfrac{\tilde\Delta(-\lambda)}{\tilde\Delta(\lambda)}$. For real $\lambda$, $s(\lambda)$ is continuous and, by virtue of (4.3.25), $s(\lambda)=1+O\bigl(\frac{1}{\lambda}\bigr)$, $|\lambda|\to\infty$. Consequently, $1-s(\lambda)\in L_2(-\infty,\infty)$. We introduce the Fourier transform
$$F_0(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}(1-s(\lambda))e^{i\lambda t}\,d\lambda,\qquad (4.3.29)$$
$$1-s(\lambda)=\int_{-\infty}^{\infty}F_0(t)e^{-i\lambda t}\,dt.\qquad (4.3.30)$$
Substituting (4.3.25) and (4.3.30) into the relation $\tilde\Delta(\lambda)s(\lambda)=\tilde\Delta(-\lambda)$, we obtain the connection between $F_0(t)$ and $\tilde\sigma(t)$:
$$\tilde\sigma(t)+F_0(t)+\int_0^{2T}\tilde\sigma(s)F_0(t+s)\,ds=0,\qquad t>0,\qquad (4.3.31)$$
where $\tilde\sigma(t)\equiv 0$ for $t>2T$. Further, we consider the function
$$F(t)=F_0(t)+F_1(t),\qquad (4.3.32)$$
where
$$F_1(t)=\sum_{j=1}^{m}\alpha_je^{i\lambda_jt}.\qquad (4.3.33)$$
Since
$$F_1(t)+\int_0^{2T}\tilde\sigma(s)F_1(t+s)\,ds=\sum_{j=1}^{m}\alpha_je^{i\lambda_jt}\,\tilde\Delta(\lambda_j)=0,$$
we get by virtue of (4.3.31) that
$$\tilde\sigma(t)+F(t)+\int_0^{2T}\tilde\sigma(s)F(t+s)\,ds=0,\qquad t>0.\qquad (4.3.34)$$
Calculating the integral in (4.3.29) by Jordan's lemma for $t>2T$ and using Lemma 4.3.3, we obtain
$$F_0(t)=i\sum_{j=1}^{m}\mathop{\rm Res}_{\lambda=\lambda_j}\bigl[(1-s(\lambda))e^{i\lambda t}\bigr]=-i\sum_{j=1}^{m}\frac{\tilde\Delta(-\lambda_j)}{\dot{\tilde\Delta}(\lambda_j)}\,e^{i\lambda_jt}=-\sum_{j=1}^{m}\alpha_je^{i\lambda_jt}=-F_1(t).$$
Thus,
$$F(t)\equiv 0,\qquad t>2T.\qquad (4.3.35)$$
Taking (4.3.35) into account we rewrite (4.3.34) as follows:
$$\tilde\sigma(t)+F(t)+\int_t^{2T}\tilde\sigma(s-t)F(s)\,ds=0,\qquad t\in(0,2T).\qquad (4.3.36)$$
In particular, it yields $F(t)\in AC[0,2T]$, $F'(t)\in L_2(0,2T)$, $F(2T)=0$.

We note that the function $F(t)$ can be constructed from equation (4.3.36), which is usually better than calculating $F(t)$ directly by (4.3.32), (4.3.33), (4.3.29).
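The remark above can be illustrated by discretizing (4.3.36): since the integral only involves $s\ge t$, the trapezoidal collocation system is triangular and $F$ can be computed by back-substitution. The input $\tilde\sigma$ below is synthetic smooth test data (not a genuine characteristic function); a minimal sketch:

```python
import numpy as np

T, n = 1.0, 400
t = np.linspace(0.0, 2*T, n + 1)
dt = t[1] - t[0]
sigma = 0.3*np.exp(-t)*(2*T - t)        # synthetic data with sigma(2T) = 0

# sigma(t_i) + F(t_i) + sum_j w_j sigma(t_j - t_i) F(t_j) = 0, solved from i = n down.
F = np.zeros(n + 1)
for i in range(n, -1, -1):
    m = n - i                            # nodes t_i .. t_n cover [t_i, 2T]
    if m == 0:
        F[i] = -sigma[i]                 # the integral term is empty at t = 2T
        continue
    w = np.full(m + 1, dt); w[0] = w[-1] = dt/2   # trapezoidal weights
    rhs = -sigma[i] - np.dot(w[1:]*sigma[1:m+1], F[i+1:])
    F[i] = rhs / (1.0 + w[0]*sigma[0])

# residual of the discretized equation at every node
res = np.empty(n + 1)
for i in range(n + 1):
    m = n - i
    if m == 0:
        res[i] = sigma[i] + F[i]
        continue
    w = np.full(m + 1, dt); w[0] = w[-1] = dt/2
    res[i] = sigma[i] + F[i] + np.dot(w*sigma[:m+1], F[i:])
resmax = np.max(np.abs(res))
print(resmax)
```

By construction the discrete residual vanishes to machine precision; accuracy against the continuous equation is governed by the trapezoidal rule.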
Acting in the same way as in [mar1], one can obtain that
$$G(x,t)+F(x+t)+\int_x^{2T-t}F(t+s)G(x,s)\,ds=0,\qquad 0\le x\le T,\quad x<t<2T-x.\qquad (4.3.37)$$
Equation (4.3.37) is called the Gelfand-Levitan-Marchenko equation.
Now let us formulate physical realization conditions for the characteristic function $\Delta(\lambda)$.

Theorem 4.3.3. For a function $\Delta(\lambda)$ of the form (4.3.12) to be the characteristic function for a certain $R(x)\in B_0^-$, it is necessary and sufficient that all zeros of $\Delta(\lambda)$ in $\Omega^+$ are simple, have the form $\lambda_j=i\tau_j$, $\tau_j>0$, $j=\overline{1,m}$, $m\ge 0$, and
$$\alpha_j:=\frac{i\tilde\Delta(-\lambda_j)}{\dot{\tilde\Delta}(\lambda_j)}>0.$$
Moreover, $R(x)\in B_0$ if and only if $\Delta(\lambda)$ has no zeros in $\Omega^+$, i.e. $m=0$. The specification of the characteristic function $\Delta(\lambda)$ uniquely determines $R(x)$.
Proof. The necessity part of Theorem 4.3.3 was proved above. We prove the sufficiency. Put $\tilde\Delta(\lambda)=e^{i\lambda T}\Delta(\lambda)$. Then (4.3.25) holds, where $\tilde\sigma(t)=\sigma(T-t)$. Since for each fixed $x\ge 0$ the homogeneous integral equation
$$w(t)+\int_x^{\infty}F(t+s)w(s)\,ds=0,\qquad t>x\qquad (4.3.38)$$
has only the trivial solution (see [mar1]), equation (4.3.37) has a unique solution $G(x,t)$. The function $G(x,t)$ is absolutely continuous, $G(x,2T-x)=0$ and $\frac{d}{dx}G(x,x)\in L_2(0,T)$. We construct the function $e(x,\lambda)$ by (4.3.24).

Let us show that $e(0,\lambda)=\tilde\Delta(\lambda)$. Indeed, it follows from (4.3.24) and (4.3.25) that
$$e(0,\lambda)-\tilde\Delta(\lambda)=\int_0^{2T}(G(0,t)-\tilde\sigma(t))e^{i\lambda t}\,dt.\qquad (4.3.39)$$
By virtue of (4.3.36) and (4.3.37),
$$(G(0,t)-\tilde\sigma(t))+\int_0^{2T-t}(G(0,s)-\tilde\sigma(s))F(t+s)\,ds=0.$$
In view of (4.3.35), this means that the function $w(t):=G(0,t)-\tilde\sigma(t)$ satisfies (4.3.38) for $x=0$. Hence $G(0,t)=\tilde\sigma(t)$, and according to (4.3.39), $e(0,\lambda)=\tilde\Delta(\lambda)$.

Put $p(x):=-2\,\frac{dG(x,x)}{dx}$, $x\in[0,T]$, and $p(x)\equiv 0$, $x>T$. It is easily shown that
$$-e''(x,\lambda)+p(x)e(x,\lambda)=\lambda^2e(x,\lambda),\qquad x>0.$$
We construct $R(x)$ by the formula
$$R(x)=\frac{1}{u^2(x,0)},\qquad (4.3.40)$$
where $u(x,\lambda)=e^{-i\lambda T}e(T-x,\lambda)$. Since $\Delta(0)\ne 0$, we have $u(T,0)\ne 0$. It follows from (4.3.40) that $R(x)\in B_0^-$, where $0<x_1<\ldots<x_p<T$ are the zeros of $u(x,0)$. In particular, if $\Delta(\lambda)$ has no zeros in $\Omega^+$, then, by virtue of Theorem 4.3.1, $R(x)\in B_0$. The uniqueness of recovering $R(x)$ from $\Delta(\lambda)$ was proved, for example, in [yur5]. □
Theorem 4.3.3 gives us the following algorithm for constructing $R(x)$ from the characteristic function $\Delta(\lambda)$:

Algorithm 4.3.1. Let a function $\Delta(\lambda)$ satisfying the hypothesis of Theorem 4.3.3 be given. Then:
(1) Construct $F(t)$, $t\in(0,2T)$, from the integral equation (4.3.36), where $\tilde\sigma(t)=\sigma(T-t)$, or directly from (4.3.26), (4.3.29), (4.3.32), (4.3.33).
(2) Find $G(x,t)$ from the integral equation (4.3.37).
(3) Calculate $R(x)$ by
$$R(x)=\frac{1}{e^2(T-x)},\qquad e(x)=1+\int_x^{2T-x}G(x,t)\,dt.\qquad (4.3.41)$$
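Step (2) can be sketched as a collocation solve of (4.3.37) in $t$ for a fixed $x$. The kernel $F$ below is an arbitrary smooth function vanishing for $u\ge 2T$ (so the variable upper limit $2T-t$ is enforced automatically by $F(t+s)=0$ for $s>2T-t$); it is not data produced by step (1):

```python
import numpy as np

T, x, n = 1.0, 0.3, 400
t = np.linspace(x, 2*T - x, n + 1)       # collocation nodes on (x, 2T - x)
dt = t[1] - t[0]

def F(u):                                # synthetic kernel with F(u) = 0 for u >= 2T
    return 0.2*np.maximum(2*T - u, 0.0)**2*np.exp(-u)

# G(x,t) + F(x+t) + int F(t+s) G(x,s) ds = 0  ->  (I + W) g = -F(x+t)
A = np.eye(n + 1)
w = np.full(n + 1, dt); w[0] = w[-1] = dt/2      # trapezoidal weights
for i in range(n + 1):
    A[i, :] += w*F(t[i] + t)             # F(t+s) vanishes beyond s = 2T - t
G = np.linalg.solve(A, -F(x + t))

res = np.max(np.abs(A @ G + F(x + t)))   # residual of the linear system
print(res)
```

For kernels coming from genuine scattering data the same collocation applies; only the function `F` changes.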
Next we provide two other algorithms for the synthesis of the wave resistance from the characteristic function, which sometimes may be advantageous from the numerical point of view.

Let $S(x,\lambda)$ be the solution of (4.3.23) under the conditions $S(0,\lambda)=0$, $S'(0,\lambda)=1$. Then
$$S(x,\lambda)=\frac{\sin\lambda x}{\lambda}+\int_0^xQ(x,t)\,\frac{\sin\lambda t}{\lambda}\,dt,\qquad (4.3.42)$$
where $Q(x,t)$ is a real, absolutely continuous function, $Q(x,0)=0$, $Q(x,x)=\frac{1}{2}\int_0^xp(t)\,dt$.
Since $\langle e(x,\lambda),S(x,\lambda)\rangle\equiv\tilde\Delta(\lambda)$ and $\tilde\Delta(\lambda_j)=0$, we get
$$e(x,\lambda_j)=e'(0,\lambda_j)S(x,\lambda_j),\qquad \mu_j=\lambda_j^2,\qquad (4.3.43)$$
and by virtue of (4.3.27) and (4.3.28),
$$\beta_j:=\Bigl(\int_0^{\infty}S^2(x,\lambda_j)\,dx\Bigr)^{-1}=-\frac{2\lambda_je'(0,\lambda_j)}{\dot{\tilde\Delta}(\lambda_j)}=-\frac{4i\lambda_j^2}{\tilde\Delta(-\lambda_j)\,\dot{\tilde\Delta}(\lambda_j)}>0.\qquad (4.3.44)$$
Taking into account the relations
$$\dot{\tilde\Delta}(\lambda_j)=e^{i\lambda_jT}\dot\Delta(\lambda_j),\qquad \tilde\Delta(-\lambda_j)=e^{-i\lambda_jT}\Delta(-\lambda_j),$$
we obtain from (4.3.44)
$$\beta_j=-\frac{4i\lambda_j^2}{\Delta(-\lambda_j)\,\dot\Delta(\lambda_j)}.$$
Consider the function
$$a(x)=-\sum_{j=1}^{m}\frac{\beta_j\cosh\tau_jx}{\tau_j^2}+\frac{2}{\pi}\int_0^{\infty}\cos\lambda x\Bigl(\frac{1}{|\tilde\Delta(\lambda)|^2}-1\Bigr)d\lambda.\qquad (4.3.45)$$
By virtue of (4.3.12),
$$\lambda\Bigl(\frac{1}{|\tilde\Delta(\lambda)|^2}-1\Bigr)\in L_2(0,\infty),$$
and consequently $a(x)$ is absolutely continuous and $a'(x)\in L_2$. The kernel $Q(x,t)$ from (4.3.42) satisfies the equation (see [lev2])
$$f(x,t)+Q(x,t)+\int_0^xQ(x,s)f(s,t)\,ds=0,\qquad x>0,\quad 0<t<x,\qquad (4.3.46)$$
where
$$f(x,t)=\frac{1}{2}\bigl(a(x-t)-a(x+t)\bigr).$$
The function $R(x)$ can be constructed by the following algorithm:

Algorithm 4.3.2. Let $\Delta(\lambda)$ be given. Then:
(1) Construct $a(x)$ by (4.3.45).
(2) Find $Q(x,t)$ from the integral equation (4.3.46).
(3) Calculate $p(x):=2\,\dfrac{d}{dx}Q(x,x)$.
(4) Construct
$$R(x):=\frac{1}{e^2(T-x)},$$
where $e(x)$ is the solution of the Cauchy problem $y''=p(x)y$, $y(T)=1$, $y'(T)=0$.
Remark 4.3.2. We can also construct $R(x)$ using (4.3.6) and (4.3.8), namely:
$$R(x)=\exp\Bigl(2\int_0^xh(t)\,dt\Bigr),$$
where $h(x)$ is the solution of the equation
$$h(x)=\int_0^xh^2(t)\,dt+2Q(T-x,T-x)-2Q(T,T).$$
Remark 4.3.3. If $R(x)\in B_0$, then
$$a(x)=\frac{2}{\pi}\int_0^{\infty}\cos\lambda x\Bigl(\frac{1}{|\tilde\Delta(\lambda)|^2}-1\Bigr)d\lambda,$$
and for constructing the solution of the inverse problem it is sufficient to specify $|\Delta(k)|$ for $k\ge 0$.
Now we provide an algorithm for the solution of the inverse problem which uses discrete spectral characteristics.

Let $X_k(x,\mu)$, $k=1,2$, be the solutions of (4.3.7) under the conditions $X_1(0,\mu)=X_2'(0,\mu)=1$, $X_1'(0,\mu)=X_2(0,\mu)=0$. Denote $\Delta_k(\mu)=X_k(T,\mu)$. Clearly,
$$u(x,\lambda)=X_1(x,\mu)-i\lambda X_2(x,\mu),\qquad \Delta(\lambda)=\Delta_1(\mu)-i\lambda\Delta_2(\mu),$$
and consequently
$$\Delta_1(\mu)=\frac{\Delta(\lambda)+\Delta(-\lambda)}{2},\qquad \Delta_2(\mu)=\frac{\Delta(-\lambda)-\Delta(\lambda)}{2i\lambda}.$$
Let $\{\mu_n\}_{n\ge 1}$ be the zeros of $\Delta_2(\mu)$, and
$$\gamma_n:=\int_0^TX_2^2(x,\mu_n)\,dx>0.$$
It is easily shown that
$$\gamma_n=(\Delta_1(\mu_n))^{-1}\Bigl(\frac{d}{d\mu}\Delta_2(\mu)\Bigr)\Big|_{\mu=\mu_n},$$
and
$$\sqrt{\mu_n}=\frac{n\pi}{T}+\frac{1}{2n\pi}\int_0^Tq(\xi)\,d\xi+\frac{\varkappa_n}{n},\qquad \gamma_n=\frac{T^3}{2n^2\pi^2}+\frac{\varkappa_{n1}}{n^3},\qquad \{\varkappa_n\},\{\varkappa_{n1}\}\in\ell_2.\qquad (4.3.47)$$
We introduce the function
$$A(x)=\frac{2}{T}\sum_{n=1}^{\infty}\Bigl(\frac{T\cos\sqrt{\mu_n}\,x}{2\mu_n\gamma_n}-\cos\frac{n\pi x}{T}\Bigr).\qquad (4.3.48)$$
By virtue of (4.3.47), $A(x)$ is absolutely continuous and $A'(x)\in L_2$. It is known (see [mar1]) that
$$X_2(x,\mu)=\frac{\sin\lambda x}{\lambda}+\int_0^xD(x,t)\,\frac{\sin\lambda t}{\lambda}\,dt,\qquad \mu=\lambda^2,$$
where $D(x,t)$ is a real, absolutely continuous function, and $D(x,0)=0$, $D(x,x)=\frac{1}{2}\int_0^xq(t)\,dt$. The function $D(x,t)$ satisfies the equation (see [lev2])
$$D(x,t)+B(x,t)+\int_0^xD(x,s)B(s,t)\,ds=0,\qquad 0\le x\le T,\quad 0<t<x,\qquad (4.3.49)$$
where $B(x,t)=\frac{1}{2}(A(x-t)-A(x+t))$. The function $R(x)$ can be constructed from the discrete data $\{\mu_n,\gamma_n\}_{n\ge 1}$ by the following algorithm.
Algorithm 4.3.3. Let $\{\mu_n,\gamma_n\}_{n\ge 1}$ be given. Then:
(1) Construct $A(x)$ by (4.3.48).
(2) Find $D(x,t)$ from the integral equation (4.3.49).
(3) Calculate $q(x):=2\,\dfrac{dD(x,x)}{dx}$.
(4) Construct $R(x):=\dfrac{1}{u^2(x)}$, where $u(x)$ is the solution of the Cauchy problem
$$u''=q(x)u,\qquad u(0)=1,\quad u'(0)=0,$$
or by
$$R(x)=\exp\Bigl(2\int_0^xh(t)\,dt\Bigr),\qquad h(x)=\int_0^xh^2(t)\,dt-2D(x,x).$$
4.3.3. Reconstruction of the transmission coefficients from their moduli.

Lemma 4.3.4. Suppose that a function $\varphi(\lambda)$ is regular in $\Omega^+$, has no zeros in $\Omega^+$, and for $|\lambda|\to\infty$, $\lambda\in\overline{\Omega^+}$, $\varphi(\lambda)=1+O\bigl(\frac{1}{\lambda}\bigr)$. Let $\varphi(k)=|\varphi(k)|e^{i\beta(k)}$, $k={\rm Re}\,\lambda$. Then
$$\beta(k)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln|\varphi(\zeta)|}{\zeta-k}\,d\zeta.\qquad (4.3.50)$$
In (4.3.50) (and everywhere below, where necessary) the integral is understood in the principal value sense.
Proof. First we suppose that $\varphi(k)\ne 0$ for real $k$. By Cauchy's theorem, taking into account the hypothesis of the lemma, we obtain
$$\frac{1}{2\pi i}\int_{C_{r,\varepsilon}}\frac{\ln\varphi(\zeta)}{\zeta-k}\,d\zeta=0,\qquad (4.3.51)$$
where $C_{r,\varepsilon}$ is the closed contour (with counterclockwise circuit) consisting of the semicircles $C_r=\{\zeta:\ \zeta=re^{i\theta},\ \theta\in[0,\pi]\}$, $\vartheta_{\varepsilon}=\{\zeta:\ \zeta-k=\varepsilon e^{i\theta},\ \theta\in[0,\pi]\}$ and the intervals $[-r,r]\setminus[k-\varepsilon,k+\varepsilon]$ of the real axis. Since
$$\lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\vartheta_{\varepsilon}}\frac{\ln\varphi(\zeta)}{\zeta-k}\,d\zeta=-\frac{1}{2}\ln\varphi(k),\qquad \lim_{r\to\infty}\frac{1}{2\pi i}\int_{C_r}\frac{\ln\varphi(\zeta)}{\zeta-k}\,d\zeta=0,$$
we get from (4.3.51) that
$$\ln\varphi(k)=\frac{1}{i\pi}\int_{-\infty}^{\infty}\frac{\ln\varphi(\zeta)}{\zeta-k}\,d\zeta.$$
Separating here real and imaginary parts, we arrive at (4.3.50).

Suppose now that for real $\lambda$ the function $\varphi(\lambda)$ has one zero $\lambda_0=0$ of multiplicity $s$ (the general case is treated in the same way). Denote
$$\tilde\varphi(\lambda)=\varphi(\lambda)\Bigl(\frac{\lambda+i\varepsilon}{\lambda}\Bigr)^s,\quad \varepsilon>0;\qquad \tilde\varphi(k)=|\tilde\varphi(k)|e^{i\tilde\beta(k)}.$$
Then
$$\beta(k)=\tilde\beta(k)-s\arctan\frac{\varepsilon}{k}.\qquad (4.3.52)$$
For the function $\tilde\varphi(\lambda)$, (4.3.50) has been proved. Hence (4.3.52) takes the form
$$\beta(k)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln|\varphi(\zeta)|}{\zeta-k}\,d\zeta-\frac{s}{2\pi}\int_{-\infty}^{\infty}\frac{\ln\bigl(1+\frac{\varepsilon^2}{\zeta^2}\bigr)}{\zeta-k}\,d\zeta-s\arctan\frac{\varepsilon}{k}.$$
When $\varepsilon\to 0$, this gives us (4.3.50). □
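Formula (4.3.50) can be tested on a function of our own choosing, e.g. $\varphi(\lambda)=(\lambda+i)/(\lambda+2i)$, which is analytic and zero-free in $\Omega^+$ and tends to $1$, with phase known in closed form. The principal value is computed via the symmetric substitution $\zeta=k\pm s$ (compare (4.3.94) below):

```python
import numpy as np

k = 0.7
beta_exact = np.arctan(1.0/k) - np.arctan(2.0/k)    # arg phi(k) for phi = (k+i)/(k+2i)

def lnmod(z):
    # ln|phi| on the real axis; note lnmod is an even function of z
    return 0.5*np.log((z**2 + 1.0)/(z**2 + 4.0))

# p.v. integral of lnmod(zeta)/(zeta - k) = int_0^inf [lnmod(k+s) - lnmod(k-s)]/s ds
s = np.logspace(-8, 5, 400001)                       # log grid; singular parts cancel
g = (lnmod(k + s) - lnmod(k - s))/s
beta_num = -np.sum(0.5*(g[1:] + g[:-1])*np.diff(s))/np.pi
print(abs(beta_num - beta_exact))
```

The integrand is bounded at $s=0$ and decays like $s^{-4}$, so the truncated log-grid quadrature suffices.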
Lemma 4.3.5. Suppose that a function $\varphi(\lambda)$ is regular in $\Omega^+$, $\varphi(-k)=\overline{\varphi(k)}$ for real $k$, and for $|\lambda|\to\infty$, $\lambda\in\overline{\Omega^+}$, $\varphi(\lambda)=1+O\bigl(\frac{1}{\lambda}\bigr)$. Let $\varphi(k)=|\varphi(k)|e^{i\beta(k)}$, and let $\lambda_j=k_j+i\tau_j$, $\tau_j>0$, $j=\overline{1,s}$, be the zeros of $\varphi(\lambda)$ in $\Omega^+$. Then
$$\beta(k)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln|\varphi(\zeta)|}{\zeta-k}\,d\zeta-2\sum_{j=1}^{s}\arctan\frac{\tau_j}{k-k_j}.\qquad (4.3.53)$$
Proof. Since $\varphi(-k)=\overline{\varphi(k)}$, the zeros of $\varphi(\lambda)$ in $\Omega^+$ are symmetrical with respect to the imaginary axis. If $\lambda_j=i\tau_j$, $\tau_j>0$, then
$$\arg\frac{k+\lambda_j}{k-\lambda_j}=2\arctan\frac{\tau_j}{k}.\qquad (4.3.54)$$
If $\lambda_j=k_j+i\tau_j$, $k_j>0$, $\tau_j>0$, then
$$\arg\frac{(k+\lambda_j)(k-\overline\lambda_j)}{(k-\lambda_j)(k+\overline\lambda_j)}=2\arctan\frac{\tau_j}{k-k_j}+2\arctan\frac{\tau_j}{k+k_j}.\qquad (4.3.55)$$
Denote
$$\hat\varphi(\lambda)=\varphi(\lambda)\prod_{j=1}^{s}\frac{\lambda+\lambda_j}{\lambda-\lambda_j}.$$
The function $\hat\varphi(\lambda)$ satisfies the hypothesis of Lemma 4.3.4. Then, using (4.3.50), (4.3.54) and (4.3.55), we arrive at (4.3.53). □
Using Lemma 4.3.5 one can construct the transmission coefficients from their moduli and information about their zeros in $\Omega^+$. For definiteness we confine ourselves to the case $h\ne 0$.
Theorem 4.3.4. Let
$$f_1(k)=\rho_1(k)e^{i\beta_1(k)},\qquad (4.3.56)$$
and let $\tilde\lambda_j=i\tilde\tau_j$, $\tilde\tau_j>0$, $j=\overline{1,\tilde m}$, be the zeros of $f_1(\lambda)$ in $\Omega^+$. Then
$$\beta_1(k)=-kT-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta-2\sum_{j=1}^{\tilde m}\arctan\frac{\tilde\tau_j}{k}.\qquad (4.3.57)$$
In particular, if $R(x)\in B_0$, then
$$\beta_1(k)=-kT-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta.\qquad (4.3.58)$$
Proof. Denote $\varphi(\lambda)=e^{i\lambda T}f_1(\lambda)$. It follows from (4.3.15) that
$$\varphi(\lambda)=1+\int_{-T}^{T}g_1(t)e^{i\lambda(T-t)}\,dt,$$
and consequently $\varphi(-k)=\overline{\varphi(k)}$ for real $k$, and for $|\lambda|\to\infty$, $\lambda\in\overline{\Omega^+}$, $\varphi(\lambda)=1+O\bigl(\frac{1}{\lambda}\bigr)$. Thus, the function $\varphi(\lambda)$ satisfies the hypothesis of Lemma 4.3.5. Using (4.3.53) and the relations $|\varphi(k)|=\rho_1(k)$, $\arg\varphi(k)=\beta_1(k)+kT$, we arrive at (4.3.57). □

Similarly we prove the following theorem.
Theorem 4.3.5. Let
$$f_2(k)=\rho_2(k)e^{i\beta_2(k)},\qquad (4.3.59)$$
and let $\lambda_j^0=k_j^0+i\tau_j^0$, $\tau_j^0>0$, $j=\overline{1,m^0}$, be the zeros of $f_2(\lambda)$ in $\Omega^+$. Then
$$\beta_2(k)=-\frac{\pi}{2}\,{\rm sign}(kh)-kT-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\bigl|\frac{2\zeta}{h}\rho_2(\zeta)\bigr|}{\zeta-k}\,d\zeta-2\sum_{j=1}^{m^0}\arctan\frac{\tau_j^0}{k-k_j^0}.\qquad (4.3.60)$$
In particular, if $f_2(\lambda)$ has no zeros in $\Omega^+$, then
$$\beta_2(k)=-\frac{\pi}{2}\,{\rm sign}(kh)-kT-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\bigl|\frac{2\zeta}{h}\rho_2(\zeta)\bigr|}{\zeta-k}\,d\zeta.\qquad (4.3.61)$$
Thus, the specification of $\rho_j(k)$ uniquely determines the transmission coefficients only when they have no zeros in $\Omega^+$. In particular, for the class $B_0$, $f_1(\lambda)$ is uniquely determined by its modulus, and all possible transmission coefficients $f_2(\lambda)$ can be constructed by means of solving the integral equation (4.3.21).
4.3.4. Symmetrical case.

The wave resistance is called symmetrical if $R(T-x)=R(x)$. In the symmetrical case the transmission coefficient $f_2(k)$ is uniquely determined (up to the sign) by its modulus.

Theorem 4.3.6. For the wave resistance to be symmetrical it is necessary and sufficient that ${\rm Re}\,f_2(k)\equiv 0$. Moreover, $f_2(-k)=-f_2(k)$, $g_2(-t)=-g_2(t)$, and (see (4.3.16))
$$f_2(k)=-2i\int_0^Tg_2(t)\sin kt\,dt.$$
Proof. According to (4.3.9) and (4.3.5),
$$f_2(\lambda)=\frac{1}{2i\lambda}\bigl(u'(T,\lambda)+i\lambda u(T,\lambda)+hu(T,\lambda)\bigr).$$
Since $u(x,\lambda)=X_1(x,\mu)-i\lambda X_2(x,\mu)$, we calculate
$${\rm Re}\,f_2(k)=\frac{1}{2}\bigl(X_1(T,\mu)-X_2'(T,\mu)-hX_2(T,\mu)\bigr).\qquad (4.3.62)$$
If $R(x)=R(T-x)$, then it follows from (4.3.6) and (4.3.8) that $h(x)=-h(T-x)$, $q(x)=q(T-x)$, and consequently $h=0$, $X_1(T,\mu)\equiv X_2'(T,\mu)$ (see [yur2]), i.e. ${\rm Re}\,f_2(k)=0$.

Conversely, if ${\rm Re}\,f_2(k)\equiv 0$, then (4.3.62) gives us
$$X_1(T,\mu)-X_2'(T,\mu)-hX_2(T,\mu)\equiv 0.\qquad (4.3.63)$$
Since for $k\to\infty$,
$$X_2(T,k^2)=\frac{\sin kT}{k}+O\Bigl(\frac{1}{k^2}\Bigr),\qquad X_1(T,\mu)-X_2'(T,\mu)=O\Bigl(\frac{1}{k^2}\Bigr),$$
we obtain from (4.3.63) that $h=0$, $X_1(T,\mu)\equiv X_2'(T,\mu)$. Consequently, $q(x)=q(T-x)$ (see [yur2]). Similarly one can prove that $g(x)=g(T-x)$. Hence $h(x)=-h(T-x)$, and $R(x)=R(T-x)$.

Furthermore, it follows from (4.3.16) that
$${\rm Re}\,f_2(k)=\int_{-T}^{T}g_2(t)\cos kt\,dt,\qquad {\rm Im}\,f_2(k)=-\int_{-T}^{T}g_2(t)\sin kt\,dt.$$
In the symmetrical case ${\rm Re}\,f_2(k)\equiv 0$, and consequently $g_2(-t)=-g_2(t)$. Then
$$f_2(k)=i\,{\rm Im}\,f_2(k)=-2i\int_0^Tg_2(t)\sin kt\,dt,\qquad f_2(-k)=-f_2(k).$$
□
Thus, $f_2(k)$ can be constructed (up to the sign) by the formulas
$${\rm Re}\,f_2(k)=0,\qquad |{\rm Im}\,f_2(k)|=\rho_2(k),\qquad f_2(-k)=-f_2(k).$$
4.3.5. Synthesis of the wave resistance from the power reflection coefficient.

In this subsection, using the results obtained above, we provide a procedure for constructing $R(x)$ from the given power reflection coefficient $\rho(k)$. For definiteness we confine ourselves to the case $h\ne 0$. Let $\rho(k)$ $(0\le\rho(k)<1$, $\rho(-k)=\rho(k))$ be given. Our scheme of calculation is:

Step 1. Calculate $\rho_1(k)$ and $\rho_2(k)$ by the formula
$$\rho_1^2(k)=1+\rho_2^2(k)=\frac{1}{1-\rho(k)}.\qquad (4.3.64)$$
Step 2. Construct $f_1(k)$ by (4.3.56), (4.3.57), or for $R(x)\in B_0$ by (4.3.56), (4.3.58). Find $g_1(t)$ from the relation
$$\int_{-T}^{T}g_1(t)e^{-ikt}\,dt=f_1(k)-e^{-ikT}.$$
Step 3. Construct $f_2(k)$ by (4.3.59), (4.3.60) or, if $f_2(\lambda)$ has no zeros in $\Omega^+$, by (4.3.59), (4.3.61). Find $g_2(t)$ from the relation
$$\int_{-T}^{T}g_2(t)e^{-ikt}\,dt=f_2(k).$$
We note that $g_2(t)$ can also be constructed from the integral equation (4.3.21); in this case $f_2(k)$ is calculated by (4.3.16).
Step 4. Calculate $\sigma(t)=g_1(t)+g_2(t)$ and $\Delta(\lambda)$ by (4.3.12).
Step 5. Construct R(x) using one of the Algorithms 4.3.1 - 4.3.3.
Remark 4.3.4. In some concrete algorithms it is not necessary to make all the calculations mentioned above. For example, let $R(x)\in B_0$, and let us use Algorithm 4.3.2. Then it is not necessary to calculate $g_1(t)$, $g_2(t)$ and $\sigma(t)$, since we need only $|\Delta(k)|$.

Now we consider a concrete algorithm which realizes this scheme. For simplicity, we consider the case $R(x)\in B_0$.
For $t\in[0,T]$ we consider the functions
$$\alpha_j(t)=g_j(t)+g_j(-t),\qquad \beta_j(t)=g_j(t)-g_j(-t),\qquad j=1,2,\qquad (4.3.65)$$
$$\alpha(t)=\sigma(t)+\sigma(-t),\qquad \beta(t)=\sigma(t)-\sigma(-t).\qquad (4.3.66)$$
Since $\sigma(t)=g_1(t)+g_2(t)$, we get
$$\alpha(t)=\alpha_1(t)+\alpha_2(t),\qquad \beta(t)=\beta_1(t)+\beta_2(t).\qquad (4.3.67)$$
Solving (4.3.65) and (4.3.66) with respect to $g_j(t)$ and $\sigma(t)$, we obtain
$$g_j(t)=\begin{cases}\frac{1}{2}(\alpha_j(t)+\beta_j(t)),&t>0,\\ \frac{1}{2}(\alpha_j(-t)-\beta_j(-t)),&t<0,\end{cases}\qquad \sigma(t)=\begin{cases}\frac{1}{2}(\alpha(t)+\beta(t)),&t>0,\\ \frac{1}{2}(\alpha(-t)-\beta(-t)),&t<0.\end{cases}\qquad (4.3.68)$$
It follows from (4.3.17) and (4.3.65) that
$$\alpha_1(T)=\beta_1(T)=-w_1,\qquad \alpha_2(T)=\beta_2(T)=-\frac{h}{2},\qquad \beta_1(0)=\beta_2(0)=0,\qquad (4.3.69)$$
where
$$w_1=-\frac{1}{2}\int_0^Th^2(\xi)\,d\xi.$$
In view of (4.3.12), (4.3.15), (4.3.16) and (4.3.65)-(4.3.67),
$$f_1(k)=(\cos kT+C_1(k))-i(\sin kT+S_1(k)),$$
$$f_2(k)=C_2(k)-iS_2(k),$$
$$\Delta(k)=(\cos kT+C(k))-i(\sin kT+S(k)),\qquad (4.3.70)$$
where
$$C_j(k)=\int_0^T\alpha_j(t)\cos kt\,dt,\qquad S_j(k)=\int_0^T\beta_j(t)\sin kt\,dt,\qquad (4.3.71)$$
$$C(k)=\int_0^T\alpha(t)\cos kt\,dt,\qquad S(k)=\int_0^T\beta(t)\sin kt\,dt.\qquad (4.3.72)$$
Using (4.3.69) and (4.3.71), we obtain the following asymptotic formulae for $C_j(k)$ and $S_j(k)$ as $k\to\infty$:
$$C_1(k)=-w_1\,\frac{\sin kT}{k}+\frac{\xi(k)}{k},\qquad C_2(k)=-\frac{h}{2}\,\frac{\sin kT}{k}+\frac{\xi(k)}{k},$$
$$S_1(k)=w_1\,\frac{\cos kT}{k}+\frac{\xi(k)}{k},\qquad S_2(k)=\frac{h}{2}\,\frac{\cos kT}{k}+\frac{\xi(k)}{k}.\qquad (4.3.73)$$
Here and below, one and the same symbol $\xi(k)$ denotes various functions from $L_2(-\infty,\infty)$.
Comparing (4.3.70) with the relations $f_j(k)=\rho_j(k)e^{i\beta_j(k)}$, we derive
$$C_1(k)=\rho_1(k)\cos\beta_1(k)-\cos kT,\qquad S_1(k)=-\rho_1(k)\sin\beta_1(k)-\sin kT,\qquad (4.3.74)$$
$$C_2(k)=\rho_2(k)\cos\beta_2(k),\qquad S_2(k)=-\rho_2(k)\sin\beta_2(k).\qquad (4.3.75)$$
For calculating the arguments of the transmission coefficients we will use (4.3.58) and (4.3.61), i.e.
$$\beta_1(k)=-kT+\Psi_1(k),\qquad \beta_2(k)=-\frac{\pi}{2}\,\omega-kT+\Psi_2(k),\qquad k>0,\qquad (4.3.76)$$
where $\omega={\rm sign}\,h$,
$$\Psi_1(k)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta,\qquad (4.3.77)$$
$$\Psi_2(k)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln\hat\rho_2(\zeta)}{\zeta-k}\,d\zeta,\qquad \hat\rho_2(\zeta):=\Bigl|\frac{2\zeta}{h}\,\rho_2(\zeta)\Bigr|.\qquad (4.3.78)$$
It follows from (4.3.73)-(4.3.75) and (4.3.20) that for $k\to+\infty$,
$$\beta_1(k)=-kT-\frac{w_1}{k}+\frac{\xi(k)}{k},\qquad \beta_2(k)=-\frac{\pi}{2}\,\omega-kT+\xi(k).\qquad (4.3.79)$$
Substituting (4.3.76) into (4.3.74) and (4.3.75), we calculate for $k>0$:
$$C_1(k)=\cos kT\,(\rho_1(k)\cos\Psi_1(k)-1)+\sin kT\,\rho_1(k)\sin\Psi_1(k),$$
$$S_1(k)=\sin kT\,(\rho_1(k)\cos\Psi_1(k)-1)-\cos kT\,\rho_1(k)\sin\Psi_1(k),\qquad (4.3.80)$$
$$C_2(k)=\rho_2(k)\,\omega\,(\cos kT\sin\Psi_2(k)-\sin kT\cos\Psi_2(k)),$$
$$S_2(k)=\rho_2(k)\,\omega\,(\cos kT\cos\Psi_2(k)+\sin kT\sin\Psi_2(k)).\qquad (4.3.81)$$
Furthermore, consider the functions
$$\xi_1(t)=\alpha_1(t)+w_1,\qquad \eta_1(t)=\beta_1(t)+w_1\,\frac{t}{T},$$
$$\xi_2(t)=\alpha_2(t)+\frac{h}{2},\qquad \eta_2(t)=\beta_2(t)+\frac{h}{2}\,\frac{t}{T}.\qquad (4.3.82)$$
Then, in view of (4.3.69),
$$\xi_j(T)=\eta_j(T)=\eta_j(0)=0,\qquad j=1,2.\qquad (4.3.83)$$
Denote
$$C_j^*(k)=\int_0^T\xi_j(t)\cos kt\,dt,\qquad S_j^*(k)=\int_0^T\eta_j(t)\sin kt\,dt.\qquad (4.3.84)$$
Integrating in (4.3.84) by parts and taking (4.3.83) into account, we obtain
$$C_j^*(k)=-\int_0^T\xi_j'(t)\,\frac{\sin kt}{k}\,dt,\qquad S_j^*(k)=\int_0^T\eta_j'(t)\,\frac{\cos kt}{k}\,dt.\qquad (4.3.85)$$
Clearly,
$$C_1(k)=C_1^*(k)-w_1\,\frac{\sin kT}{k},\qquad S_1(k)=S_1^*(k)+w_1\Bigl(\frac{\cos kT}{k}-\frac{\sin kT}{Tk^2}\Bigr),\qquad (4.3.86)$$
$$C_2(k)=C_2^*(k)-\frac{h}{2}\,\frac{\sin kT}{k},\qquad S_2(k)=S_2^*(k)+\frac{h}{2}\Bigl(\frac{\cos kT}{k}-\frac{\sin kT}{Tk^2}\Bigr).\qquad (4.3.87)$$
Now let the power reflection coefficient $\rho(k)$ $(0\le\rho(k)<1$, $\rho(-k)=\rho(k))$ be given for $|k|\le B$, and put
$$\rho(k)=\frac{h^2}{4k^2+h^2},\qquad |k|>B,\qquad (4.3.88)$$
where $B>|h|$ is chosen sufficiently large, such that (see (4.3.20)) $\rho(k)$ is sufficiently accurate for $|k|>B$. Using (4.3.64) we calculate $\rho_1(k)$ and $\rho_2(k)$. Then
$$\rho_1^2(k)-\rho_2^2(k)=1,\qquad \rho_j(-k)=\rho_j(k)>0,$$
and
$$\rho_1^2(k)=1+\frac{h^2}{4k^2},\qquad \rho_2^2(k)=\frac{h^2}{4k^2},\qquad |k|>B.\qquad (4.3.89)$$
In order to calculate $\Psi_1(k)$ we use (4.3.77). First let $|k|>B+\delta$, $\delta>0$. In view of (4.3.89), equality (4.3.77) takes the form
$$\Psi_1(k)=-\frac{1}{\pi}\int_{-B}^{B}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta-\frac{1}{2\pi}\int_{|\zeta|>B}\frac{\ln\bigl(1+\frac{h^2}{4\zeta^2}\bigr)}{\zeta-k}\,d\zeta.$$
Since
$$\frac{1}{2\pi}\int_{|\zeta|>B}\frac{\ln\bigl(1+\frac{h^2}{4\zeta^2}\bigr)}{\zeta-k}\,d\zeta=\frac{1}{2\pi}\sum_{j=1}^{\infty}\frac{(-1)^{j-1}}{j}\Bigl(\frac{h^2}{4}\Bigr)^j\int_{|\zeta|>B}\frac{d\zeta}{\zeta^{2j}(\zeta-k)}$$
$$=\frac{1}{2\pi}\sum_{j=1}^{\infty}\frac{(-1)^{j-1}}{j}\Bigl(\frac{h^2}{4}\Bigr)^j\int_{|\zeta|>B}\Bigl[\frac{1}{k^{2j}}\Bigl(\frac{1}{\zeta-k}-\frac{1}{\zeta}\Bigr)-\sum_{s=1}^{2j-1}\frac{1}{k^{2j-s}\zeta^{s+1}}\Bigr]d\zeta,$$
we get
$$\Psi_1(k)=-\frac{1}{\pi}\int_{-B}^{B}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta-\frac{1}{2\pi}\ln\Bigl(1+\frac{2B}{k-B}\Bigr)\ln\Bigl(1+\frac{h^2}{4k^2}\Bigr)$$
$$-\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(-1)^j}{j}\Bigl(\frac{h^2}{4}\Bigr)^j\sum_{\nu=0}^{j-1}\frac{1}{(2j-2\nu-1)B^{2j-2\nu-1}k^{2\nu+1}}.\qquad (4.3.90)$$
Using the relation
$$\frac{1}{\pi}\int_{-B}^{B}\frac{\ln\rho_1(\zeta)}{\zeta-k}\,d\zeta=-\frac{1}{\pi k}\int_{-B}^{B}\ln\rho_1(\zeta)\,d\zeta+\frac{1}{\pi}\int_{-B}^{B}\frac{\zeta\ln\rho_1(\zeta)}{k(\zeta-k)}\,d\zeta,$$
we separate in (4.3.90) the terms of order $1/k$, i.e.
$$\Psi_1(k)=-\frac{w_1}{k}+\tilde\Psi_1(k),\qquad (4.3.91)$$
where
$$\tilde\Psi_1(k)=-\frac{1}{\pi}\int_{-B}^{B}\frac{\zeta\ln\rho_1(\zeta)}{k(\zeta-k)}\,d\zeta-\frac{1}{2\pi}\ln\Bigl(1+\frac{2B}{k-B}\Bigr)\ln\Bigl(1+\frac{h^2}{4k^2}\Bigr)$$
$$-\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(-1)^j}{j}\Bigl(\frac{h^2}{4}\Bigr)^j\sum_{\nu=1}^{j-1}\frac{1}{(2j-2\nu-1)B^{2j-2\nu-1}k^{2\nu+1}},\qquad |k|>B+\delta,\qquad (4.3.92)$$
$$w_1=-\frac{1}{\pi}\int_{-B}^{B}\ln\rho_1(\zeta)\,d\zeta+\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(-1)^j}{j}\Bigl(\frac{h^2}{4B^2}\Bigr)^j\frac{B}{2j-1}<0.\qquad (4.3.93)$$
It is easy to show that
$$|\tilde\Psi_1(k)|\le\frac{\tilde\rho_1}{k^2},$$
where
$$\tilde\rho_1=\frac{B+\delta}{\pi\delta}\int_0^B\ln\rho_1(\zeta)\,d\zeta+\frac{h^2}{8}\ln\Bigl(1+\frac{2B}{\delta}\Bigr)+\frac{h^2}{2}.$$
Now let $|k|\le B+\delta$. In this case we rewrite (4.3.77) in the form
$$\Psi_1(k)=-\frac{1}{\pi}\int_0^{\infty}\frac{\ln\rho_1(\zeta+k)-\ln\rho_1(\zeta-k)}{\zeta}\,d\zeta.\qquad (4.3.94)$$
Take $r\ge 2B+\delta$. Since
$$\Bigl|\ln\Bigl(1+\frac{h^2}{4\zeta^2}\Bigr)-\frac{h^2}{4\zeta^2}\Bigr|\le\frac{h^4}{32\zeta^4},$$
we get
$$-\frac{1}{\pi}\int_r^{\infty}\frac{\ln\rho_1(\zeta+k)-\ln\rho_1(\zeta-k)}{\zeta}\,d\zeta=-\frac{h^2}{8\pi}\int_r^{\infty}\frac{1}{\zeta}\Bigl(\frac{1}{(\zeta+k)^2}-\frac{1}{(\zeta-k)^2}\Bigr)d\zeta+\varepsilon(k),$$
where
$$|\varepsilon(k)|\le\frac{h^4}{96\,r(r-B-\delta)^3}.$$
Calculating this integral and substituting into (4.3.94), we obtain
$$\Psi_1(k)=-\frac{1}{\pi}\int_0^r\frac{\ln\rho_1(\zeta+k)-\ln\rho_1(\zeta-k)}{\zeta}\,d\zeta-\frac{h^2}{8\pi}\Bigl[\frac{1}{k^2}\ln\frac{r+k}{r-k}-\frac{1}{k}\Bigl(\frac{1}{r+k}+\frac{1}{r-k}\Bigr)\Bigr]+\varepsilon(k),$$
$$\Psi_1(0)=0,\qquad |k|\le B+\delta,\quad r\ge 2B+\delta.\qquad (4.3.95)$$
In order to calculate $\Psi_2(k)$ we use (4.3.78). Let $|k|>B+\delta$, $\delta>0$. According to (4.3.89) we have $\hat\rho_2(k)=1$ for $|k|>B$. Hence
$$\Psi_2(k)=-\frac{1}{\pi}\int_{-B}^{B}\frac{\ln\hat\rho_2(\zeta)}{\zeta-k}\,d\zeta,\qquad |k|>B+\delta.\qquad (4.3.96)$$
For $|k|\le B+\delta$ it is more convenient to use another formula:
$$\Psi_2(k)=-\frac{1}{\pi}\int_0^r\frac{\ln\hat\rho_2(\zeta+k)-\ln\hat\rho_2(\zeta-k)}{\zeta}\,d\zeta,\qquad |k|\le B+\delta,\quad r=2B+\delta.\qquad (4.3.97)$$
Let us calculate $\xi_j(t)$, $\eta_j(t)$. By virtue of (4.3.84),
$$\eta_j(t)=\frac{2}{T}\sum_{n=1}^{\infty}S_j^*\Bigl(\frac{n\pi}{T}\Bigr)\sin\frac{n\pi t}{T},\qquad \xi_j(t)=\frac{1}{T}\,C_j^*(0)+\frac{2}{T}\sum_{n=1}^{\infty}C_j^*\Bigl(\frac{n\pi}{T}\Bigr)\cos\frac{n\pi t}{T}.\qquad (4.3.98)$$
According to (4.3.85), the series in (4.3.98) converge absolutely and uniformly. By virtue of (4.3.80), (4.3.81), (4.3.86), (4.3.87), the coefficients $S_j^*\bigl(\frac{n\pi}{T}\bigr)$ and $C_j^*\bigl(\frac{n\pi}{T}\bigr)$ can be calculated via the formulae
$$S_1^*(k)=-(-1)^n\Bigl(\rho_1(k)\sin\Psi_1(k)+\frac{w_1}{k}\Bigr),\qquad k=\frac{n\pi}{T},\ n\ge 1,$$
$$S_2^*(k)=(-1)^n\Bigl(\rho_2(k)\,\omega\cos\Psi_2(k)-\frac{h}{2k}\Bigr),\qquad k=\frac{n\pi}{T},\ n\ge 1,$$
$$C_1^*(k)=(-1)^n(\rho_1(k)\cos\Psi_1(k)-1)+\delta_{n0}\,w_1T,\qquad k=\frac{n\pi}{T},\ n\ge 0,$$
$$C_2^*(k)=(-1)^n\rho_2(k)\,\omega\sin\Psi_2(k)+\delta_{n0}\,\frac{hT}{2},\qquad k=\frac{n\pi}{T},\ n\ge 0,\qquad (4.3.99)$$
where $\delta_{nj}$ is the Kronecker delta.
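The sine-series synthesis in (4.3.98) can be sketched in isolation: a function vanishing at $0$ and $T$ (the profile below is synthetic test data, not coefficients from (4.3.99)) is recovered from its coefficients $S^*(n\pi/T)=\int_0^T\eta(t)\sin(n\pi t/T)\,dt$:

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]
eta = t*(T - t)*np.exp(-t)               # synthetic data with eta(0) = eta(T) = 0

rec = np.zeros_like(t)
for n in range(1, 200):
    s = np.sin(n*np.pi*t/T)
    # trapezoidal rule for the coefficient S*(n pi / T)
    coef = (np.sum(eta*s) - 0.5*(eta[0]*s[0] + eta[-1]*s[-1]))*dt
    rec += (2.0/T)*coef*s                # partial sum of (4.3.98)
err = np.max(np.abs(rec - eta))
print(err)
```

Because the boundary values vanish, the coefficients decay fast and 200 terms already reproduce the profile well; in Algorithm 4.3.4 the coefficients come from (4.3.99) instead of quadrature.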
Using the formulae obtained above, we arrive at the following numerical algorithm for the solution of Inverse Problem 4.3.1.

Algorithm 4.3.4. Let the power reflection coefficient $\rho(k)$ be given for $|k|\le B$, and take $\rho(k)$ according to (4.3.88) for $|k|>B$. Then:
(1) Construct $\rho_1(k)$ and $\rho_2(k)$ by (4.3.64).
(2) Find the number $w_1$ by (4.3.93).
(3) Calculate $\Psi_1(k)$, $\Psi_2(k)$ by (4.3.91), (4.3.92), (4.3.96) for $|k|>B+\delta$, and by (4.3.95), (4.3.97) for $|k|\le B+\delta$.
(4) Construct $\xi_j(t)$, $\eta_j(t)$ by (4.3.98), where $S_j^*\bigl(\frac{n\pi}{T}\bigr)$, $C_j^*\bigl(\frac{n\pi}{T}\bigr)$ are defined by (4.3.99).
(5) Find $\alpha_j(t)$, $\beta_j(t)$ using (4.3.82).
(6) Calculate $\sigma(t)$, $t\in[-T,T]$, by (4.3.67), (4.3.68).
(7) Find $F(t)$, $0<t<2T$, from the integral equation (4.3.36), where $\tilde\sigma(t)=\sigma(T-t)$.
(8) Calculate $G(x,t)$ from the integral equation (4.3.37).
(9) Construct $R(x)$, $x\in[0,T]$, via (4.3.41).
Remark 4.3.5. This algorithm is one of several possible numerical algorithms for
the solution of Inverse Problem 4.3.1. Using the obtained results one can construct various
algorithms for synthesizing R(x) from spectral characteristics.
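Steps (7) and (8) of Algorithm 4.3.4 each amount to solving a linear integral equation of the second kind, which in practice is discretized. A minimal Nyström-type sketch with the trapezoidal rule; the kernel and right-hand side below are toy placeholders, not the actual kernels of (4.3.36), (4.3.37):

```python
import numpy as np

def solve_second_kind(kernel, rhs, a, b, n=200):
    """Solve u(t) + int_a^b K(t, s) u(s) ds = f(t) by the Nystrom method
    with trapezoidal weights on a uniform grid."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    A = np.eye(n) + kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, rhs(t))

# Toy check: for K(t,s) = t*s on [0,1] and f(t) = 4t/3 the exact solution is u(t) = t.
t, u = solve_second_kind(lambda t, s: t * s, lambda t: 4 * t / 3, 0.0, 1.0)
```

The same scheme applies to (4.3.37), where the unknown is recovered for each fixed value of the first argument.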
4.4. DISCONTINUOUS INVERSE PROBLEMS
This section deals with Sturm-Liouville boundary value problems in which the eigenfunctions have a discontinuity in an interior point. More precisely, we consider the boundary value problem $L$ for the equation
$$\ell y := -y'' + q(x)y = \lambda y \qquad (4.4.1)$$
on the interval $0 < x < T$ with the boundary conditions
$$U(y) := y'(0) - hy(0) = 0, \qquad V(y) := y'(T) + Hy(T) = 0, \qquad (4.4.2)$$
and with the jump conditions
$$y(a+0) = a_1 y(a-0), \qquad y'(a+0) = a_1^{-1}y'(a-0) + a_2 y(a-0). \qquad (4.4.3)$$
Here $\lambda$ is the spectral parameter; $q(x)$, $h$, $H$, $a_1$ and $a_2$ are real; $a\in(0,T)$, $a_1 > 0$, $q(x)\in L_2(0,T)$. Attention will be focused on the inverse problem of recovering $L$ from its spectral characteristics.
Boundary value problems with discontinuities inside the interval often appear in mathematics, mechanics, physics, geophysics and other branches of natural sciences. As a rule, such problems are connected with discontinuous material properties. The inverse problem of reconstructing the material properties of a medium from data collected outside of the medium is of central importance in disciplines ranging from engineering to the geosciences. For example, discontinuous inverse problems appear in electronics when constructing parameters of heterogeneous electronic lines with desirable technical characteristics ([lit1], [mes1]). After reducing the corresponding mathematical model we come to the boundary value problem $L$, where $q(x)$ must be constructed from the given spectral information which describes the desirable amplitude and phase characteristics. Spectral information can be used to reconstruct the permittivity and conductivity profiles of a one-dimensional discontinuous medium ([kru1], [she1]). Boundary value problems with discontinuities in an interior point also appear in geophysical models for oscillations of the Earth ([And1], [lap1]). Here the main discontinuity is caused by reflection of the shear waves at the base of the crust. Further, it is known that inverse spectral problems play an important role for investigating some nonlinear evolution equations of mathematical physics. Discontinuous inverse problems help to study the blow-up behavior of solutions for such nonlinear equations. We also note that the inverse problems considered here appear in mathematics for investigating spectral properties of some classes of differential, integrodifferential and integral operators.
Direct and inverse spectral problems for differential operators without discontinuities have been studied in Chapter 1. The presence of discontinuities produces essential qualitative modifications in the investigation of the operators. Some aspects of direct and inverse problems for discontinuous boundary value problems in various formulations have been considered in [she1], [hal1], [akt2], [ebe4] and other works. In particular, it was shown in [hal1] that if $q(x)$ is known a priori on $[0, T/2]$, then $q(x)$ is uniquely determined on $[T/2, T]$ by the eigenvalues. In [she1] the discontinuous inverse problem is considered on the half-line.

Here in Subsection 4.4.1 we study properties of the spectral characteristics of the boundary value problem $L$. Subsections 4.4.2-4.4.3 are devoted to the inverse problem of recovering $L$ from its spectral characteristics. In Subsection 4.4.2 the uniqueness theorems are proved, and in Subsection 4.4.3 we provide necessary and sufficient conditions for the solvability of the inverse problem and also obtain a procedure for the solution of the inverse problem.

In order to study the inverse problem for the boundary value problem $L$ we use the method of spectral mappings described in Section 1.6. We will omit the proofs (or provide short versions of the proofs) for assertions which are similar to those given in Section 1.6.
4.4.1. Properties of the spectral characteristics. Let $y(x)$ and $z(x)$ be continuously differentiable functions on $[0,a]$ and $[a,T]$. Denote $\langle y,z\rangle := yz' - y'z$. If $y(x)$ and $z(x)$ satisfy the jump conditions (4.4.3), then
$$\langle y,z\rangle_{|x=a+0} = \langle y,z\rangle_{|x=a-0}, \qquad (4.4.4)$$
i.e. the function $\langle y,z\rangle$ is continuous on $[0,T]$. If $y(x,\lambda)$ and $z(x,\mu)$ are solutions of the equations $\ell y = \lambda y$ and $\ell z = \mu z$, respectively, then
$$\frac{d}{dx}\langle y,z\rangle = (\lambda - \mu)yz. \qquad (4.4.5)$$
Let $\varphi(x,\lambda)$, $\psi(x,\lambda)$, $C(x,\lambda)$, $S(x,\lambda)$ be solutions of (4.4.1) under the initial conditions
$$C(0,\lambda) = \varphi(0,\lambda) = S'(0,\lambda) = \psi(T,\lambda) = 1, \quad C'(0,\lambda) = S(0,\lambda) = 0, \quad \varphi'(0,\lambda) = h, \quad \psi'(T,\lambda) = -H,$$
and under the jump conditions (4.4.3). Then $U(\varphi) = V(\psi) = 0$. Denote
$$\Delta(\lambda) = \langle\psi(x,\lambda), \varphi(x,\lambda)\rangle.$$
By virtue of (4.4.4) and Liouville's formula for the Wronskian [cod1, p.83], $\Delta(\lambda)$ does not depend on $x$. The function $\Delta(\lambda)$ is called the characteristic function of $L$. Clearly,
$$\Delta(\lambda) = V(\varphi) = -U(\psi). \qquad (4.4.6)$$
Theorem 4.4.1. 1) The eigenvalues $\{\lambda_n\}_{n\ge 0}$ of the boundary value problem $L$ coincide with the zeros of the characteristic function $\Delta(\lambda)$. The functions $\varphi(x,\lambda_n)$ and $\psi(x,\lambda_n)$ are eigenfunctions, and
$$\psi(x,\lambda_n) = \beta_n\varphi(x,\lambda_n), \qquad \beta_n\ne 0. \qquad (4.4.7)$$
2) Denote
$$\alpha_n = \int_0^T \varphi^2(x,\lambda_n)\,dx. \qquad (4.4.8)$$
Then
$$\beta_n\alpha_n = -\Delta_1(\lambda_n), \qquad (4.4.9)$$
where $\Delta_1(\lambda) = \dfrac{d}{d\lambda}\Delta(\lambda)$. The data $\{\lambda_n, \alpha_n\}_{n\ge 0}$ are called the spectral data of $L$.
3) The eigenvalues $\lambda_n$ and the eigenfunctions $\varphi(x,\lambda_n)$, $\psi(x,\lambda_n)$ are real. All zeros of $\Delta(\lambda)$ are simple, i.e. $\Delta_1(\lambda_n)\ne 0$. Eigenfunctions related to different eigenvalues are orthogonal in $L_2(0,T)$.
We omit the proof of Theorem 4.4.1, since the arguments here are the same as for the
classical Sturm-Liouville problem (see Section 1.1).
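Numerically, $\Delta(\lambda)$ can be evaluated by shooting: integrate $-y''+q(x)y=\lambda y$ with $y(0)=1$, $y'(0)=h$ up to the discontinuity, apply the jump conditions (4.4.3), continue to $T$, and return $V(y)$. A rough sketch (fixed-step RK4; the sample parameters are illustrative, not from the text):

```python
import numpy as np

def char_function(lam, q, h, H, a, a1, a2, T, steps=2000):
    """Delta(lam) = V(phi) for the problem (4.4.1)-(4.4.3), by shooting."""
    def integrate(y, x0, x1):
        dx = (x1 - x0) / steps
        f = lambda x, y: np.array([y[1], (q(x) - lam) * y[0]])
        x = x0
        for _ in range(steps):
            k1 = f(x, y)
            k2 = f(x + dx / 2, y + dx / 2 * k1)
            k3 = f(x + dx / 2, y + dx / 2 * k2)
            k4 = f(x + dx, y + dx * k3)
            y = y + dx / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            x += dx
        return y
    y = integrate(np.array([1.0, h]), 0.0, a)         # phi on [0, a]
    y = np.array([a1 * y[0], y[1] / a1 + a2 * y[0]])  # jump conditions (4.4.3)
    y = integrate(y, a, T)                            # phi on [a, T]
    return y[1] + H * y[0]                            # V(phi)

# Without a jump (a1 = 1, a2 = 0) and q = 0, h = H = 0: Delta = -rho*sin(rho*T).
d = char_function(0.25, lambda x: 0.0, 0.0, 0.0, 0.5, 1.0, 0.0, np.pi)
```

Eigenvalues are then located as sign changes of this function along the real axis.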
Let $C_0(x,\lambda)$ and $S_0(x,\lambda)$ be smooth solutions of equation (4.4.1) on the whole interval $[0,T]$ under the initial conditions $C_0(0,\lambda) = S_0'(0,\lambda) = 1$, $C_0'(0,\lambda) = S_0(0,\lambda) = 0$. Then, using the jump conditions (4.4.3) we get
$$C(x,\lambda) = C_0(x,\lambda), \qquad S(x,\lambda) = S_0(x,\lambda), \qquad x < a, \qquad (4.4.10)$$
$$C(x,\lambda) = A_1 C_0(x,\lambda) + B_1 S_0(x,\lambda), \qquad S(x,\lambda) = A_2 C_0(x,\lambda) + B_2 S_0(x,\lambda), \qquad x > a, \qquad (4.4.11)$$
where
$$\left.\begin{aligned}
A_1 &= a_1 C_0(a,\lambda)S_0'(a,\lambda) - a_1^{-1}C_0'(a,\lambda)S_0(a,\lambda) - a_2 C_0(a,\lambda)S_0(a,\lambda),\\
B_1 &= (a_1^{-1} - a_1)C_0(a,\lambda)C_0'(a,\lambda) + a_2 C_0^2(a,\lambda),\\
A_2 &= (a_1 - a_1^{-1})S_0(a,\lambda)S_0'(a,\lambda) - a_2 S_0^2(a,\lambda),\\
B_2 &= a_1^{-1}C_0(a,\lambda)S_0'(a,\lambda) - a_1 C_0'(a,\lambda)S_0(a,\lambda) + a_2 C_0(a,\lambda)S_0(a,\lambda).
\end{aligned}\right\} \qquad (4.4.12)$$
Let $\lambda = \rho^2$, $\rho = \sigma + i\tau$. It was shown in Section 1.1 that the function $C_0(x,\lambda)$ satisfies the following integral equation:
$$C_0(x,\lambda) = \cos\rho x + \int_0^x \frac{\sin\rho(x-t)}{\rho}\,q(t)C_0(t,\lambda)\,dt, \qquad (4.4.13)$$
and for $|\rho|\to\infty$,
$$C_0(x,\lambda) = \cos\rho x + O\Big(\frac{1}{\rho}\exp(|\tau|x)\Big).$$
Then (4.4.13) implies
$$C_0(x,\lambda) = \cos\rho x + \frac{\sin\rho x}{2\rho}\int_0^x q(t)\,dt + \frac{1}{2\rho}\int_0^x q(t)\sin\rho(x-2t)\,dt + O\Big(\frac{1}{\rho^2}\exp(|\tau|x)\Big), \qquad (4.4.14)$$
$$C_0'(x,\lambda) = -\rho\sin\rho x + \frac{\cos\rho x}{2}\int_0^x q(t)\,dt + \frac{1}{2}\int_0^x q(t)\cos\rho(x-2t)\,dt + O\Big(\frac{1}{\rho}\exp(|\tau|x)\Big). \qquad (4.4.15)$$
Analogously,
$$S_0(x,\lambda) = \frac{\sin\rho x}{\rho} - \frac{\cos\rho x}{2\rho^2}\int_0^x q(t)\,dt + \frac{1}{2\rho^2}\int_0^x q(t)\cos\rho(x-2t)\,dt + O\Big(\frac{1}{\rho^3}\exp(|\tau|x)\Big), \qquad (4.4.16)$$
$$S_0'(x,\lambda) = \cos\rho x + \frac{\sin\rho x}{2\rho}\int_0^x q(t)\,dt - \frac{1}{2\rho}\int_0^x q(t)\sin\rho(x-2t)\,dt + O\Big(\frac{1}{\rho^2}\exp(|\tau|x)\Big). \qquad (4.4.17)$$
By virtue of (4.4.12) and (4.4.14)-(4.4.17),
$$A_1 = b_1 + b_2\cos 2\rho a + \Big(b_2\int_0^a q(t)\,dt - \frac{a_2}{2}\Big)\frac{\sin 2\rho a}{\rho} + O\Big(\frac{1}{\rho^2}\Big),$$
$$B_1 = b_2\Big(\rho\sin 2\rho a - \cos 2\rho a\int_0^a q(t)\,dt - \int_0^a q(t)\cos 2\rho(a-t)\,dt\Big) + \frac{a_2}{2}\big(1 + \cos 2\rho a\big) + O\Big(\frac{1}{\rho}\Big),$$
$$A_2 = b_2\frac{\sin 2\rho a}{\rho} + O\Big(\frac{1}{\rho^2}\Big), \qquad B_2 = b_1 - b_2\cos 2\rho a + O\Big(\frac{1}{\rho}\Big),$$
where $b_1 = (a_1 + a_1^{-1})/2$, $b_2 = (a_1 - a_1^{-1})/2$. Since $\varphi(x,\lambda) = C(x,\lambda) + hS(x,\lambda)$, we calculate using (4.4.10)-(4.4.12), (4.4.14)-(4.4.17):
$$\varphi(x,\lambda) = \cos\rho x + \Big(h + \frac{1}{2}\int_0^x q(t)\,dt\Big)\frac{\sin\rho x}{\rho} + o\Big(\frac{1}{\rho}\exp(|\tau|x)\Big), \qquad x < a, \qquad (4.4.18)$$
$$\varphi(x,\lambda) = \big(b_1\cos\rho x + b_2\cos\rho(2a-x)\big) + f_1(x)\frac{\sin\rho x}{\rho} + f_2(x)\frac{\sin\rho(2a-x)}{\rho} + o\Big(\frac{1}{\rho}\exp(|\tau|x)\Big), \qquad x > a, \qquad (4.4.19)$$
$$\varphi'(x,\lambda) = -\rho\sin\rho x + \Big(h + \frac{1}{2}\int_0^x q(t)\,dt\Big)\cos\rho x + o(\exp(|\tau|x)), \qquad x < a, \qquad (4.4.20)$$
$$\varphi'(x,\lambda) = -\rho\big(b_1\sin\rho x - b_2\sin\rho(2a-x)\big) + f_1(x)\cos\rho x - f_2(x)\cos\rho(2a-x) + o(\exp(|\tau|x)), \qquad x > a, \qquad (4.4.21)$$
where
$$f_1(x) = b_1\Big(h + \frac{1}{2}\int_0^x q(t)\,dt\Big) + \frac{a_2}{2}, \qquad f_2(x) = b_2\Big(h - \frac{1}{2}\int_0^x q(t)\,dt + \int_0^a q(t)\,dt\Big) - \frac{a_2}{2}.$$
In particular, (4.4.18)-(4.4.21) yield
$$\varphi^{(\nu)}(x,\lambda) = O(|\rho|^{\nu}\exp(|\tau|x)), \qquad 0\le x\le T,\ \nu = 0,1. \qquad (4.4.22)$$
Similarly,
$$\psi^{(\nu)}(x,\lambda) = O(|\rho|^{\nu}\exp(|\tau|(T-x))), \qquad 0\le x\le T,\ \nu = 0,1. \qquad (4.4.23)$$
It follows from (4.4.6), (4.4.19) and (4.4.21) that
$$\Delta(\lambda) = \rho\big(b_1\sin\rho T - b_2\sin\rho(2a-T)\big) - \omega_1\cos\rho T - \omega_2\cos\rho(2a-T) + o(\exp(|\tau|T)), \qquad (4.4.24)$$
where
$$\omega_1 = b_1\Big(H + h + \frac{1}{2}\int_0^T q(t)\,dt\Big) + \frac{a_2}{2}, \qquad \omega_2 = b_2\Big(H - h + \frac{1}{2}\int_0^T q(t)\,dt - \int_0^a q(t)\,dt\Big) - \frac{a_2}{2}. \qquad (4.4.25)$$
Using (4.4.24), by the well-known methods (see, for example, [bel1]) one can obtain the following properties of the characteristic function $\Delta(\lambda)$ and the eigenvalues $\lambda_n = \rho_n^2$ of the boundary value problem $L$:
1) For $|\rho|\to\infty$, $\Delta(\lambda) = O(|\rho|\exp(|\tau|T))$.
2) There exist $h^* > 0$, $C_{h^*} > 0$ such that $|\Delta(\lambda)|\ge C_{h^*}|\rho|\exp(|\tau|T)$ for $|\mathrm{Im}\,\rho|\ge h^*$. Hence, the eigenvalues $\rho_n$ lie in the domain $|\mathrm{Im}\,\rho| < h^*$.
3) The number $N_\alpha$ of zeros of $\Delta(\lambda)$ in the rectangle $R_\alpha = \{\rho:\ |\mathrm{Im}\,\rho|\le h^*,\ \mathrm{Re}\,\rho\in[\alpha,\alpha+1]\}$ is bounded with respect to $\alpha$.
4) Denote $G_\delta = \{\rho:\ |\rho - \rho_n|\ge\delta\}$. Then
$$|\Delta(\lambda)|\ge C_\delta|\rho|\exp(|\tau|T), \qquad \rho\in G_\delta. \qquad (4.4.26)$$
5) There exist numbers $R_N\to\infty$ such that for sufficiently small $\delta > 0$, the circles $|\rho| = R_N$ lie in $G_\delta$ for all $N$.
6) Let $\lambda_n^0 = (\rho_n^0)^2$ be the zeros of the function
$$\Delta_0(\lambda) = \rho\big(b_1\sin\rho T - b_2\sin\rho(2a-T)\big). \qquad (4.4.27)$$
Then
$$\rho_n = \rho_n^0 + o(1), \qquad n\to\infty. \qquad (4.4.28)$$
Substituting (4.4.18), (4.4.19) and (4.4.28) into (4.4.8) we get
$$\alpha_n = \alpha_n^0 + o(1), \qquad n\to\infty, \qquad (4.4.29)$$
where
$$\alpha_n^0 = \frac{a}{2} + \Big(\frac{b_1^2 + b_2^2}{2} + b_1 b_2\cos 2\rho_n^0 a\Big)(T-a). \qquad (4.4.30)$$
It follows from (4.4.29) and (4.4.30) that
$$C_1\le|\alpha_n|\le C_2, \qquad (4.4.31)$$
which means that $\alpha_n = O(1)$ and $(\alpha_n)^{-1} = O(1)$. By virtue of (4.4.7),
$$\beta_n = \psi(0,\lambda_n) = \frac{1}{\varphi(T,\lambda_n)}. \qquad (4.4.32)$$
Taking (4.4.22) and (4.4.23) into account, we deduce $C_1\le|\beta_n|\le C_2$. Together with (4.4.9) and (4.4.31) this yields
$$C_1\le|\Delta_1(\lambda_n)|\le C_2. \qquad (4.4.33)$$
We need more precise asymptotics for $\rho_n$ and $\alpha_n$ than (4.4.28) and (4.4.29). Substituting (4.4.24) into the relation $\Delta(\lambda_n) = 0$, we get
$$b_1\sin\rho_n T - b_2\sin\rho_n(2a-T) = O\Big(\frac{1}{\rho_n^0}\Big). \qquad (4.4.34)$$
According to (4.4.28), $\rho_n = \rho_n^0 + \varepsilon_n$, $\varepsilon_n\to 0$. Since
$$\Delta_0(\lambda_n^0) = \rho_n^0\big(b_1\sin\rho_n^0 T - b_2\sin\rho_n^0(2a-T)\big) = 0,$$
it follows from (4.4.34) that
$$\varepsilon_n\big(b_1 T\cos\rho_n^0 T - b_2(2a-T)\cos\rho_n^0(2a-T)\big) = O\Big(\frac{1}{\rho_n^0}\Big) + O(\varepsilon_n^2).$$
In view of (4.4.27),
$$\Delta_1^0(\lambda_n^0) := \Big(\frac{d}{d\lambda}\Delta_0(\lambda)\Big)_{|\lambda=\lambda_n^0} = 2^{-1}\big(b_1 T\cos\rho_n^0 T - b_2(2a-T)\cos\rho_n^0(2a-T)\big).$$
Hence
$$\varepsilon_n\Delta_1^0(\lambda_n^0) = O\Big(\frac{1}{\rho_n^0}\Big) + O(\varepsilon_n^2).$$
By virtue of (4.4.33), $|\Delta_1^0(\lambda_n^0)|\ge C$, and we conclude that
$$\varepsilon_n = O\Big(\frac{1}{\rho_n^0}\Big). \qquad (4.4.35)$$
Substituting (4.4.24) into the relation $\Delta(\lambda_n) = 0$ again and using (4.4.35) we arrive at
$$\rho_n = \rho_n^0 + \frac{d_n}{\rho_n^0} + \frac{\kappa_n}{\rho_n^0}, \qquad (4.4.36)$$
where $\kappa_n = o(1)$, and
$$d_n = \big(\omega_1\cos\rho_n^0 T + \omega_2\cos\rho_n^0(2a-T)\big)\big(2\Delta_1^0(\lambda_n^0)\big)^{-1}.$$
Here $\omega_1$ and $\omega_2$ are defined by (4.4.25). At last, using (4.4.8), (4.4.18), (4.4.19) and (4.4.36), one can calculate
$$\alpha_n = \alpha_n^0 + \frac{d_{n1}}{\rho_n^0} + \frac{\kappa_{n1}}{\rho_n^0}, \qquad (4.4.37)$$
where $\kappa_{n1} = o(1)$, and
$$d_{n1} = b_0\sin 2\rho_n^0 a + \frac{b_1^2}{4}\sin 2\rho_n^0 T - \frac{b_2^2}{4}\sin 2\rho_n^0(2a-T) + \frac{b_1 b_2}{2}\sin 2\rho_n^0(T-a),$$
$$b_0 = 2a\,d_n\Big(\frac{b_1^2}{4} + \frac{b_2^2}{4}\Big) + (T-a)\Big(b_1 b_2\Big(2h + \int_0^a q(t)\,dt\Big) + \frac{a_2}{2}(b_2 - b_1)\Big).$$
Furthermore, one can show that $\{\kappa_n\}, \{\kappa_{n1}\}\in\ell_2$. If $q(x)$ is a smooth function, one can obtain more precise asymptotics for the spectral data.
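The model zeros $\rho_n^0$ of (4.4.27) are the roots of $b_1\sin\rho T - b_2\sin\rho(2a-T) = 0$ and can be located by a sign-change scan with bisection. A small sketch (the parameter values in the check are illustrative only):

```python
import math

def rho0_zeros(a1, a, T, rho_max):
    """Positive zeros of b1*sin(rho*T) - b2*sin(rho*(2a - T))."""
    b1, b2 = (a1 + 1 / a1) / 2, (a1 - 1 / a1) / 2
    f = lambda r: b1 * math.sin(r * T) - b2 * math.sin(r * (2 * a - T))
    zeros, r, step = [], 1e-6, 1e-3
    while r < rho_max:
        if f(r) * f(r + step) < 0:
            lo, hi = r, r + step
            for _ in range(60):           # bisection refinement
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            zeros.append((lo + hi) / 2)
        r += step
    return zeros
```

For $a_1 = 1$ (no discontinuity) $b_2 = 0$ and the zeros reduce to $\rho_n^0 = \pi n/T$, as they must.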
4.4.2. Formulation of the inverse problem. Uniqueness theorems. Let $\Phi(x,\lambda)$ be the solution of (4.4.1) under the conditions $U(\Phi) = 1$, $V(\Phi) = 0$ and under the jump conditions (4.4.3). We set $M(\lambda) := \Phi(0,\lambda)$. The functions $\Phi(x,\lambda)$ and $M(\lambda)$ are called the Weyl solution and the Weyl function for the boundary value problem $L$, respectively. Clearly,
$$\Phi(x,\lambda) = -\frac{\psi(x,\lambda)}{\Delta(\lambda)} = S(x,\lambda) + M(\lambda)\varphi(x,\lambda), \qquad (4.4.38)$$
$$\langle\varphi(x,\lambda), \Phi(x,\lambda)\rangle\equiv 1, \qquad (4.4.39)$$
$$M(\lambda) = -\frac{\delta(\lambda)}{\Delta(\lambda)}, \qquad (4.4.40)$$
where $\delta(\lambda) = \psi(0,\lambda) = V(S)$ is the characteristic function of the boundary value problem $L_1$ for equation (4.4.1) with the boundary conditions $y(0) = 0$, $V(y) = 0$ and with the jump conditions (4.4.3). Let $\{\mu_n\}_{n\ge 0}$ be the zeros of $\delta(\lambda)$, i.e. the eigenvalues of $L_1$.
In this section we study the following inverse problems of recovering $L$ from its spectral characteristics:
(i) from the Weyl function $M(\lambda)$;
(ii) from the spectral data $\{\lambda_n, \alpha_n\}_{n\ge 0}$;
(iii) from two spectra $\{\lambda_n, \mu_n\}_{n\ge 0}$.
These inverse problems are generalizations of the inverse problems for Sturm-Liouville equations without discontinuities (see Section 1.4).
First, let us prove the uniqueness theorems for the solutions of the problems (i)-(iii). For this purpose we agree that together with $L$ we consider a boundary value problem $\tilde L$ of the same form but with different coefficients $\tilde q(x)$, $\tilde h$, $\tilde H$, $\tilde a$, $\tilde a_k$. If a certain symbol $\gamma$ denotes an object related to $L$, then $\tilde\gamma$ will denote the analogous object related to $\tilde L$, and $\hat\gamma = \gamma - \tilde\gamma$.

Theorem 4.4.2. If $M(\lambda) = \tilde M(\lambda)$, then $L = \tilde L$. Thus, the specification of the Weyl function uniquely determines the operator.
Proof. It follows from (4.4.23), (4.4.26) and (4.4.38) that
$$|\Phi^{(\nu)}(x,\lambda)|\le C_\delta|\rho|^{\nu-1}\exp(-|\tau|x), \qquad \rho\in G_\delta. \qquad (4.4.41)$$
Let us define the matrix $P(x,\lambda) = [P_{jk}(x,\lambda)]_{j,k=1,2}$ by the formula
$$P(x,\lambda)\begin{pmatrix}\tilde\varphi(x,\lambda) & \tilde\Phi(x,\lambda)\\ \tilde\varphi'(x,\lambda) & \tilde\Phi'(x,\lambda)\end{pmatrix} = \begin{pmatrix}\varphi(x,\lambda) & \Phi(x,\lambda)\\ \varphi'(x,\lambda) & \Phi'(x,\lambda)\end{pmatrix}.$$
Taking (4.4.39) into account we calculate
$$\left.\begin{aligned}
P_{j1}(x,\lambda) &= \varphi^{(j-1)}(x,\lambda)\tilde\Phi'(x,\lambda) - \Phi^{(j-1)}(x,\lambda)\tilde\varphi'(x,\lambda),\\
P_{j2}(x,\lambda) &= \Phi^{(j-1)}(x,\lambda)\tilde\varphi(x,\lambda) - \varphi^{(j-1)}(x,\lambda)\tilde\Phi(x,\lambda),
\end{aligned}\right\} \qquad (4.4.42)$$
$$\left.\begin{aligned}
\varphi(x,\lambda) &= P_{11}(x,\lambda)\tilde\varphi(x,\lambda) + P_{12}(x,\lambda)\tilde\varphi'(x,\lambda),\\
\Phi(x,\lambda) &= P_{11}(x,\lambda)\tilde\Phi(x,\lambda) + P_{12}(x,\lambda)\tilde\Phi'(x,\lambda).
\end{aligned}\right\} \qquad (4.4.43)$$
According to (4.4.38) and (4.4.42), for each fixed $x$, the functions $P_{jk}(x,\lambda)$ are meromorphic in $\lambda$ with simple poles in the points $\lambda_n$ and $\tilde\lambda_n$. Denote $G_\delta^0 = G_\delta\cap\tilde G_\delta$. By virtue of (4.4.22), (4.4.41) and (4.4.42) we get
$$|P_{12}(x,\lambda)|\le C_\delta|\rho|^{-1}, \qquad |P_{11}(x,\lambda)|\le C_\delta, \qquad \rho\in G_\delta^0. \qquad (4.4.44)$$
It follows from (4.4.38) and (4.4.42) that
$$P_{11}(x,\lambda) = \varphi(x,\lambda)\tilde S'(x,\lambda) - S(x,\lambda)\tilde\varphi'(x,\lambda) + (\tilde M(\lambda) - M(\lambda))\varphi(x,\lambda)\tilde\varphi'(x,\lambda),$$
$$P_{12}(x,\lambda) = S(x,\lambda)\tilde\varphi(x,\lambda) - \varphi(x,\lambda)\tilde S(x,\lambda) + (M(\lambda) - \tilde M(\lambda))\varphi(x,\lambda)\tilde\varphi(x,\lambda).$$
Thus, if $M(\lambda)\equiv\tilde M(\lambda)$, then for each fixed $x$, the functions $P_{1k}(x,\lambda)$ are entire in $\lambda$. Together with (4.4.44) this yields $P_{12}(x,\lambda)\equiv 0$, $P_{11}(x,\lambda)\equiv A(x)$. Using (4.4.43) we derive
$$\varphi(x,\lambda)\equiv A(x)\tilde\varphi(x,\lambda), \qquad \Phi(x,\lambda)\equiv A(x)\tilde\Phi(x,\lambda). \qquad (4.4.45)$$
It follows from (4.4.18) and (4.4.19) that for $|\rho|\to\infty$, $\arg\rho\in[\varepsilon, \pi-\varepsilon]$, $\varepsilon > 0$,
$$\varphi(x,\lambda) = 2^{-1}b\exp(-i\rho x)(1 + O(\rho^{-1})),$$
where $b = 1$ for $x < a$, and $b = b_1$ for $x > a$. Similarly, one can calculate
$$\Phi(x,\lambda) = (i\rho b)^{-1}\exp(i\rho x)(1 + O(\rho^{-1})).$$
Together with (4.4.39) and (4.4.45) this gives $b_1 = \tilde b_1$, $A(x)\equiv 1$, i.e. $\varphi(x,\lambda)\equiv\tilde\varphi(x,\lambda)$, $\Phi(x,\lambda)\equiv\tilde\Phi(x,\lambda)$ for all $x$ and $\lambda$. Consequently, $L = \tilde L$. $\Box$
Theorem 4.4.3. If $\lambda_n = \tilde\lambda_n$, $\alpha_n = \tilde\alpha_n$, $n\ge 0$, then $L = \tilde L$. Thus, the specification of the spectral data $\{\lambda_n, \alpha_n\}_{n\ge 0}$ uniquely determines the operator.
Proof. It follows from (4.4.40) that the Weyl function $M(\lambda)$ is meromorphic with simple poles $\lambda_n$. Using (4.4.40), (4.4.32) and (4.4.9) we calculate
$$\operatorname*{Res}_{\lambda=\lambda_n}M(\lambda) = -\frac{\delta(\lambda_n)}{\Delta_1(\lambda_n)} = -\frac{\beta_n}{\Delta_1(\lambda_n)} = \frac{1}{\alpha_n}. \qquad (4.4.46)$$
Let $\gamma = \{\lambda = u + iv:\ u = (2h^*)^{-2}v^2 - (h^*)^2\}$ be the image of the set $\{\rho:\ \mathrm{Im}\,\rho = h^*\}$ under the mapping $\lambda = \rho^2$. Denote $\gamma_n = \gamma\cap\{\lambda:\ |\lambda|\le r_n\}$, and consider the closed contours $\gamma_{n0} = \gamma_n\cup\{\lambda:\ |\lambda| = r_n,\ \lambda\notin\operatorname{int}\gamma\}$, $\gamma_{n1} = \gamma_n\cup\{\lambda:\ |\lambda| = r_n,\ \lambda\in\operatorname{int}\gamma\}$ (with counterclockwise circuit). Since the Weyl function $M(\lambda)$ is regular for $\lambda\in\operatorname{int}\gamma_{n0}$, we get by Cauchy's theorem that
$$M(\lambda) = \frac{1}{2\pi i}\int_{\gamma_{n0}}\frac{M(\xi)}{\lambda-\xi}\,d\xi, \qquad \lambda\in\operatorname{int}\gamma_{n0}.$$
Furthermore, since $\delta(\lambda) = \psi(0,\lambda)$, estimate (4.4.23) gives
$$\delta(\lambda) = O(\exp(|\tau|T)). \qquad (4.4.47)$$
Using (4.4.40), (4.4.26) and (4.4.47) we obtain
$$|M(\lambda)|\le C_\delta|\rho|^{-1}, \qquad \rho\in G_\delta. \qquad (4.4.48)$$
Hence
$$\lim_{n\to\infty}\frac{1}{2\pi i}\int_{|\xi|=r_n}\frac{M(\xi)}{\lambda-\xi}\,d\xi = 0,$$
i.e.
$$M(\lambda) = \lim_{n\to\infty}\frac{1}{2\pi i}\int_{\gamma_{n1}}\frac{M(\xi)}{\lambda-\xi}\,d\xi.$$
Calculating this integral by the residue theorem and taking (4.4.46) into account we arrive at
$$M(\lambda) = \sum_{k=0}^{\infty}\frac{1}{\alpha_k(\lambda-\lambda_k)}, \qquad (4.4.49)$$
where the series converges "with brackets": $\sum_{k=0}^{\infty} = \lim_{n\to\infty}\sum_{|\lambda_k|<r_n}$. Moreover, it is easy to see that the series converges absolutely. Under the hypothesis of the theorem we get, in view of (4.4.49), that $M(\lambda)\equiv\tilde M(\lambda)$, and consequently, by Theorem 4.4.2, $L = \tilde L$. $\Box$
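In the simplest case $q\equiv 0$, $h = H = 0$ and no discontinuity ($a_1 = 1$, $a_2 = 0$) the expansion (4.4.49) can be checked by hand: $\varphi = \cos\rho x$ gives $\Delta(\lambda) = -\rho\sin\rho T$ and $\delta(\lambda) = \cos\rho T$, so $M(\lambda) = \cos\rho T/(\rho\sin\rho T)$, while $\lambda_0 = 0$, $\alpha_0 = T$ and $\lambda_n = (\pi n/T)^2$, $\alpha_n = T/2$ for $n\ge 1$. A quick numerical comparison of (4.4.49) with this closed form:

```python
import math

def M_closed(lam, T):
    rho = math.sqrt(lam)
    return math.cos(rho * T) / (rho * math.sin(rho * T))

def M_series(lam, T, N=20000):
    s = 1.0 / (T * lam)            # k = 0 term: lambda_0 = 0, alpha_0 = T
    for k in range(1, N):
        s += 1.0 / ((T / 2) * (lam - (math.pi * k / T) ** 2))
    return s
```

The truncation error of the partial sum decays like $1/N$.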
Theorem 4.4.4. If $\lambda_n = \tilde\lambda_n$, $\mu_n = \tilde\mu_n$, $n\ge 0$, then $q(x) = \tilde q(x)$ a.e. on $(0,T)$, $h = \tilde h$, $H = \tilde H$, $a = \tilde a$, $a_1 = \tilde a_1$ and $a_2 = \tilde a_2$.
Proof. It follows from (4.4.24) that the function $\Delta(\lambda)$ is entire in $\lambda$ of order $1/2$, and consequently $\Delta(\lambda)$ is uniquely determined up to a multiplicative constant by its zeros:
$$\Delta(\lambda) = C\prod_{n=0}^{\infty}\Big(1 - \frac{\lambda}{\lambda_n}\Big) \qquad (4.4.50)$$
(the case when $\Delta(0) = 0$ requires minor modifications). According to (4.4.27),
$$\Delta_0(\lambda) = C_0\lambda\prod_{n=1}^{\infty}\Big(1 - \frac{\lambda}{\lambda_n^0}\Big), \qquad C_0 = Tb_1 - (2a-T)b_2.$$
Then
$$\frac{\Delta(\lambda)}{\Delta_0(\lambda)} = \frac{C(1 - \lambda/\lambda_0)}{C_0\lambda}\prod_{n=1}^{\infty}\frac{\lambda_n^0}{\lambda_n}\prod_{n=1}^{\infty}\Big(1 + \frac{\lambda_n - \lambda_n^0}{\lambda_n^0 - \lambda}\Big).$$
Since
$$\lim_{\lambda\to-\infty}\frac{\Delta(\lambda)}{\Delta_0(\lambda)} = 1, \qquad \lim_{\lambda\to-\infty}\prod_{n=1}^{\infty}\Big(1 + \frac{\lambda_n - \lambda_n^0}{\lambda_n^0 - \lambda}\Big) = 1,$$
we get
$$C = -C_0\lambda_0\prod_{n=1}^{\infty}\frac{\lambda_n}{\lambda_n^0}.$$
Substituting into (4.4.50) we arrive at
$$\Delta(\lambda) = C_0(\lambda - \lambda_0)\prod_{n=1}^{\infty}\frac{\lambda_n - \lambda}{\lambda_n^0}. \qquad (4.4.51)$$
It follows from (4.4.51) that the specification of the spectrum $\{\lambda_n\}_{n\ge 0}$ uniquely determines the characteristic function $\Delta(\lambda)$. Analogously, the function $\delta(\lambda)$ is uniquely determined by its zeros $\{\mu_n\}_{n\ge 0}$. Let now $\lambda_n = \tilde\lambda_n$, $\mu_n = \tilde\mu_n$, $n\ge 0$. Then $\Delta(\lambda)\equiv\tilde\Delta(\lambda)$, $\delta(\lambda)\equiv\tilde\delta(\lambda)$, and consequently, by (4.4.40), $M(\lambda)\equiv\tilde M(\lambda)$. From this, in view of Theorem 4.4.2, we obtain $q(x) = \tilde q(x)$ a.e. on $(0,T)$, $h = \tilde h$, $H = \tilde H$, $a = \tilde a$, $a_1 = \tilde a_1$ and $a_2 = \tilde a_2$. $\Box$
4.4.3. Solution of the inverse problem. For definiteness we consider the inverse problem of recovering $L$ from the spectral data $\{\lambda_n, \alpha_n\}_{n\ge 0}$. Let boundary value problems $L$ and $\tilde L$ be such that
$$a = \tilde a, \qquad \Omega := \sum_{n=0}^{\infty}\xi_n|\rho_n^0| < \infty, \qquad (4.4.52)$$
where $\xi_n := |\rho_n - \tilde\rho_n| + |\alpha_n - \tilde\alpha_n|$. Denote
$$\lambda_{n0} = \lambda_n, \quad \lambda_{n1} = \tilde\lambda_n, \quad \alpha_{n0} = \alpha_n, \quad \alpha_{n1} = \tilde\alpha_n, \quad \varphi_{ni}(x) = \varphi(x,\lambda_{ni}), \quad \tilde\varphi_{ni}(x) = \tilde\varphi(x,\lambda_{ni}),$$
$$\tilde Q_{kj}(x,\lambda) = \frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi_{kj}(x)\rangle}{\alpha_{kj}(\lambda - \lambda_{kj})} = \frac{1}{\alpha_{kj}}\int_0^x\tilde\varphi(t,\lambda)\tilde\varphi_{kj}(t)\,dt, \qquad \tilde Q_{ni,kj}(x) = \tilde Q_{kj}(x,\lambda_{ni}).$$
Here we lean on (4.4.4) and (4.4.5). It follows from (4.4.18)-(4.4.21) and (4.4.31) that
$$|\varphi_{nj}^{(\nu)}(x)|\le C(|\rho_n^0| + 1)^{\nu}, \qquad |\tilde\varphi_{nj}^{(\nu)}(x)|\le C(|\rho_n^0| + 1)^{\nu}, \qquad (4.4.53)$$
$$|\tilde Q_{ni,kj}(x)|\le\frac{C}{|\rho_n^0 - \rho_k^0| + 1}, \qquad |\tilde Q_{ni,kj}^{(\nu+1)}(x)|\le C(|\rho_n^0| + |\rho_k^0| + 1)^{\nu}, \qquad \nu = 0,1. \qquad (4.4.54)$$
Lemma 4.4.1. The following relation holds:
$$\tilde\varphi(x,\lambda) = \varphi(x,\lambda) + \sum_{k=0}^{\infty}\big(\tilde Q_{k0}(x,\lambda)\varphi_{k0}(x) - \tilde Q_{k1}(x,\lambda)\varphi_{k1}(x)\big). \qquad (4.4.55)$$
Proof. By virtue of (4.4.52) we have
$$a = \tilde a, \qquad a_1 = \tilde a_1. \qquad (4.4.56)$$
Then it follows from (4.4.18)-(4.4.21) that
$$|\varphi^{(\nu)}(x,\lambda) - \tilde\varphi^{(\nu)}(x,\lambda)|\le C|\rho|^{\nu-1}\exp(|\tau|x). \qquad (4.4.57)$$
Similarly,
$$|\psi^{(\nu)}(x,\lambda) - \tilde\psi^{(\nu)}(x,\lambda)|\le C|\rho|^{\nu-1}\exp(|\tau|(T-x)). \qquad (4.4.58)$$
Denote $G_\delta^0 = G_\delta\cap\tilde G_\delta$. In view of (4.4.38), (4.4.23), (4.4.26) and (4.4.58) we obtain
$$|\Phi^{(\nu)}(x,\lambda) - \tilde\Phi^{(\nu)}(x,\lambda)|\le C_\delta|\rho|^{\nu-2}\exp(-|\tau|x), \qquad \rho\in G_\delta^0. \qquad (4.4.59)$$
Let $P(x,\lambda)$ be the matrix defined in Subsection 4.4.2. Since for each fixed $x$, the functions $P_{1k}(x,\lambda)$ are meromorphic in $\lambda$ with simple poles $\lambda_n$ and $\tilde\lambda_n$, we get by Cauchy's theorem
$$P_{1k}(x,\lambda) - \delta_{1k} = \frac{1}{2\pi i}\int_{\gamma_{n0}}\frac{P_{1k}(x,\xi) - \delta_{1k}}{\lambda - \xi}\,d\xi, \qquad k = 1,2, \qquad (4.4.60)$$
where $\lambda\in\operatorname{int}\gamma_{n0}$, and $\delta_{jk}$ is the Kronecker delta.
Further, (4.4.39) and (4.4.42) imply
$$P_{11}(x,\lambda) = 1 + (\varphi(x,\lambda) - \tilde\varphi(x,\lambda))\tilde\Phi'(x,\lambda) - (\Phi(x,\lambda) - \tilde\Phi(x,\lambda))\tilde\varphi'(x,\lambda). \qquad (4.4.61)$$
Using (4.4.22), (4.4.23), (4.4.41), (4.4.42), (4.4.57), (4.4.59) and (4.4.61) we infer
$$|P_{1k}(x,\lambda) - \delta_{1k}|\le C_\delta|\rho|^{-1}, \qquad \rho\in G_\delta^0. \qquad (4.4.62)$$
By virtue of (4.4.62),
$$\lim_{n\to\infty}\frac{1}{2\pi i}\int_{|\xi|=r_n}\frac{P_{1k}(x,\xi) - \delta_{1k}}{\lambda - \xi}\,d\xi = 0,$$
and consequently, (4.4.60) yields
$$P_{1k}(x,\lambda) - \delta_{1k} = \lim_{n\to\infty}\frac{1}{2\pi i}\int_{\gamma_{n1}}\frac{P_{1k}(x,\xi)}{\lambda - \xi}\,d\xi.$$
Substituting into (4.4.43) we obtain
$$\varphi(x,\lambda) = \tilde\varphi(x,\lambda) + \lim_{n\to\infty}\frac{1}{2\pi i}\int_{\gamma_{n1}}\frac{\tilde\varphi(x,\lambda)P_{11}(x,\xi) + \tilde\varphi'(x,\lambda)P_{12}(x,\xi)}{\lambda - \xi}\,d\xi.$$
Taking (4.4.42) into account we calculate
$$\varphi(x,\lambda) = \tilde\varphi(x,\lambda) + \lim_{n\to\infty}\frac{1}{2\pi i}\int_{\gamma_{n1}}\Big(\tilde\varphi(x,\lambda)\big(\varphi(x,\xi)\tilde\Phi'(x,\xi) - \Phi(x,\xi)\tilde\varphi'(x,\xi)\big) + \tilde\varphi'(x,\lambda)\big(\Phi(x,\xi)\tilde\varphi(x,\xi) - \varphi(x,\xi)\tilde\Phi(x,\xi)\big)\Big)\frac{d\xi}{\lambda - \xi},$$
or, in view of (4.4.38),
$$\tilde\varphi(x,\lambda) = \varphi(x,\lambda) + \lim_{n\to\infty}\frac{1}{2\pi i}\int_{\gamma_{n1}}\frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\xi)\rangle}{\lambda - \xi}\,\hat M(\xi)\varphi(x,\xi)\,d\xi, \qquad \hat M := M - \tilde M. \qquad (4.4.63)$$
It follows from (4.4.46) that
$$\operatorname*{Res}_{\xi=\lambda_{kj}}\frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\xi)\rangle}{\lambda - \xi}\,\hat M(\xi)\varphi(x,\xi) = (-1)^j\,\tilde Q_{kj}(x,\lambda)\varphi_{kj}(x).$$
Calculating the integral in (4.4.63) by the residue theorem we arrive at (4.4.55). $\Box$
Analogously one can obtain the following relation:
$$\tilde\Phi(x,\lambda) = \Phi(x,\lambda) + \sum_{k=0}^{\infty}\big(\tilde D_{k0}(x,\lambda)\varphi_{k0}(x) - \tilde D_{k1}(x,\lambda)\varphi_{k1}(x)\big), \qquad (4.4.64)$$
where
$$\tilde D_{kj}(x,\lambda) := \frac{\langle\tilde\Phi(x,\lambda), \tilde\varphi_{kj}(x)\rangle}{\alpha_{kj}(\lambda - \lambda_{kj})}.$$
It follows from (4.4.55) that
$$\tilde\varphi_{ni}(x) = \varphi_{ni}(x) + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(x)\varphi_{k0}(x) - \tilde Q_{ni,k1}(x)\varphi_{k1}(x)\big). \qquad (4.4.65)$$
Let $V$ be the set of indices $u = (n,i)$, $n\ge 0$, $i = 0,1$. For each fixed $x$, we define the vectors
$$\psi(x) = [\psi_u(x)]_{u\in V} = \begin{pmatrix}\psi_{n0}(x)\\ \psi_{n1}(x)\end{pmatrix}_{n\ge 0}, \qquad \tilde\psi(x) = [\tilde\psi_u(x)]_{u\in V} = \begin{pmatrix}\tilde\psi_{n0}(x)\\ \tilde\psi_{n1}(x)\end{pmatrix}_{n\ge 0}$$
by the formulae
$$\psi_{n0}(x) = (\varphi_{n0}(x) - \varphi_{n1}(x))\xi_n^{-1}, \qquad \psi_{n1}(x) = \varphi_{n1}(x),$$
$$\tilde\psi_{n0}(x) = (\tilde\varphi_{n0}(x) - \tilde\varphi_{n1}(x))\xi_n^{-1}, \qquad \tilde\psi_{n1}(x) = \tilde\varphi_{n1}(x)$$
(if $\xi_n = 0$ for a certain $n$, then we put $\psi_{n0}(x) = \tilde\psi_{n0}(x) = 0$). Similarly we define the block matrix
$$\tilde H(x) = [\tilde H_{u,v}(x)]_{u,v\in V} = \begin{pmatrix}\tilde H_{n0,k0}(x) & \tilde H_{n0,k1}(x)\\ \tilde H_{n1,k0}(x) & \tilde H_{n1,k1}(x)\end{pmatrix}_{n,k\ge 0},$$
where $u = (n,i)$, $v = (k,j)$,
$$\tilde H_{n0,k0}(x) = (\tilde Q_{n0,k0}(x) - \tilde Q_{n1,k0}(x))\xi_k\xi_n^{-1}, \qquad \tilde H_{n1,k1}(x) = \tilde Q_{n1,k0}(x) - \tilde Q_{n1,k1}(x),$$
$$\tilde H_{n1,k0}(x) = \tilde Q_{n1,k0}(x)\xi_k, \qquad \tilde H_{n0,k1}(x) = (\tilde Q_{n0,k0}(x) - \tilde Q_{n1,k0}(x) - \tilde Q_{n0,k1}(x) + \tilde Q_{n1,k1}(x))\xi_n^{-1}.$$
It follows from (4.4.18)-(4.4.21), (4.4.31), (4.4.36), (4.4.33), (4.4.53), (4.4.54) and Schwarz's lemma that
$$|\psi_{nj}^{(\nu)}(x)|\le C(|\rho_n^0| + 1)^{\nu}, \qquad |\tilde\psi_{nj}^{(\nu)}(x)|\le C(|\rho_n^0| + 1)^{\nu}, \qquad (4.4.66)$$
$$|\tilde H_{ni,kj}(x)|\le\frac{C\xi_k}{|\rho_n^0 - \rho_k^0| + 1}, \qquad |\tilde H_{ni,kj}^{(\nu+1)}(x)|\le C(|\rho_n^0| + |\rho_k^0| + 1)^{\nu}\xi_k. \qquad (4.4.67)$$
Let us consider the Banach space $m$ of bounded sequences $c = [c_u]_{u\in V}$ with the norm $\|c\|_m = \sup_{u\in V}|c_u|$. It follows from (4.4.67) that for each fixed $x$, the operator $E + \tilde H(x)$ (here $E$ is the identity operator), acting from $m$ to $m$, is a linear bounded operator, and
$$\|\tilde H(x)\|\le C\sup_n\sum_{k}\frac{\xi_k}{|\rho_n^0 - \rho_k^0| + 1} < \infty.$$
Taking our notations into account, we can rewrite (4.4.65) in the form
$$\tilde\psi_{ni}(x) = \psi_{ni}(x) + \sum_{k=0}^{\infty}\big(\tilde H_{ni,k0}(x)\psi_{k0}(x) + \tilde H_{ni,k1}(x)\psi_{k1}(x)\big),$$
or
$$\tilde\psi(x) = (E + \tilde H(x))\psi(x). \qquad (4.4.68)$$
Thus, for each fixed $x$, the vector $\psi(x)\in m$ is a solution of equation (4.4.68) in the Banach space $m$. Equation (4.4.68) is called the main equation of the inverse problem.
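In computations the main equation (4.4.68) is truncated at a finite index $N$ and solved as a matrix equation. The following schematic sketch solves such a truncated system; the synthetic small matrix stands in for $\tilde H(x)$ at one fixed $x$ and is not the actual kernel of the text:

```python
import numpy as np

N = 40                                               # truncation level
rng = np.random.default_rng(0)
H = rng.standard_normal((2 * N, 2 * N)) / (4 * N)    # small perturbation of E
psi_tilde = rng.standard_normal(2 * N)               # known model vector
psi = np.linalg.solve(np.eye(2 * N) + H, psi_tilde)  # truncated (4.4.68)
residual = float(np.linalg.norm((np.eye(2 * N) + H) @ psi - psi_tilde))
```

Since here the norm of the perturbation is below 1, $E + H$ is invertible and the solve is stable; in the actual method the smallness of $\tilde H(x)$ is controlled by (4.4.52).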
Now let us denote
$$\varepsilon_0(x) = \sum_{k=0}^{\infty}\Big(\frac{1}{\alpha_{k0}}\varphi_{k0}(x)\tilde\varphi_{k0}(x) - \frac{1}{\alpha_{k1}}\varphi_{k1}(x)\tilde\varphi_{k1}(x)\Big), \qquad \varepsilon(x) = -2\varepsilon_0'(x). \qquad (4.4.69)$$
By virtue of (4.4.52), the series converges absolutely and uniformly on $[0,a]$ and $[a,T]$; $\varepsilon_0(x)$ is absolutely continuous on $[0,a]$ and $[a,T]$, and $\varepsilon(x)\in L_2(0,T)$.

Lemma 4.4.2. The following relations hold:
$$q(x) = \tilde q(x) + \varepsilon(x), \qquad (4.4.70)$$
$$a_2 = \tilde a_2 + (a_1^{-1} - a_1^3)\varepsilon_0(a-0), \qquad (4.4.71)$$
$$h = \tilde h - \varepsilon_0(0), \qquad H = \tilde H + \varepsilon_0(T). \qquad (4.4.72)$$
Proof. Differentiating (4.4.55) twice with respect to $x$ and using (4.4.5) we get
$$\tilde\varphi'(x,\lambda) - \varepsilon_0(x)\tilde\varphi(x,\lambda) = \varphi'(x,\lambda) + \sum_{k=0}^{\infty}\big(\tilde Q_{k0}(x,\lambda)\varphi_{k0}'(x) - \tilde Q_{k1}(x,\lambda)\varphi_{k1}'(x)\big), \qquad (4.4.73)$$
$$\tilde\varphi''(x,\lambda) = \varphi''(x,\lambda) + \sum_{k=0}^{\infty}\big(\tilde Q_{k0}(x,\lambda)\varphi_{k0}''(x) - \tilde Q_{k1}(x,\lambda)\varphi_{k1}''(x)\big)$$
$$+\,2\tilde\varphi(x,\lambda)\sum_{k=0}^{\infty}\Big(\frac{1}{\alpha_{k0}}\tilde\varphi_{k0}(x)\varphi_{k0}'(x) - \frac{1}{\alpha_{k1}}\tilde\varphi_{k1}(x)\varphi_{k1}'(x)\Big) + \sum_{k=0}^{\infty}\Big(\frac{1}{\alpha_{k0}}\big(\tilde\varphi(x,\lambda)\tilde\varphi_{k0}(x)\big)'\varphi_{k0}(x) - \frac{1}{\alpha_{k1}}\big(\tilde\varphi(x,\lambda)\tilde\varphi_{k1}(x)\big)'\varphi_{k1}(x)\Big).$$
We replace here the second derivatives, using equation (4.4.1), and then we replace $\tilde\varphi(x,\lambda)$, using (4.4.55). After canceling the terms with $\tilde\varphi(x,\lambda)$ we arrive at (4.4.70).
By virtue of (4.4.3), it follows from (4.4.69) that
$$\varepsilon_0(a+0) = a_1^2\varepsilon_0(a-0). \qquad (4.4.74)$$
In (4.4.73) we set $x = a+0$. Since the functions $\tilde Q_{kj}(x,\lambda)$ are continuous with respect to $x\in[0,T]$, we calculate using (4.4.3), (4.4.56) and (4.4.74):
$$a_1^{-1}\tilde\varphi'(a-0,\lambda) + \tilde a_2\tilde\varphi(a-0,\lambda) - a_1^3\varepsilon_0(a-0)\tilde\varphi(a-0,\lambda) = a_1^{-1}\varphi'(a-0,\lambda) + a_2\varphi(a-0,\lambda)$$
$$+\,a_1^{-1}\sum_{k=0}^{\infty}\big(\tilde Q_{k0}(a,\lambda)\varphi_{k0}'(a-0) - \tilde Q_{k1}(a,\lambda)\varphi_{k1}'(a-0)\big) + a_2\sum_{k=0}^{\infty}\big(\tilde Q_{k0}(a,\lambda)\varphi_{k0}(a-0) - \tilde Q_{k1}(a,\lambda)\varphi_{k1}(a-0)\big).$$
Replacing here $\tilde\varphi(a-0,\lambda)$ and $\tilde\varphi'(a-0,\lambda)$ from (4.4.55) and (4.4.73), respectively, we get after simple calculations
$$\big(\tilde a_2 - a_1^3\varepsilon_0(a-0) + a_1^{-1}\varepsilon_0(a-0) - a_2\big)\tilde\varphi(a-0,\lambda) = 0,$$
i.e. (4.4.71) holds. Denote $h_0 = -h$, $h_T = H$, $U_0 = U$, $U_T = V$, so that $U_d(y) = y'(d) + h_d y(d)$. In (4.4.55) and (4.4.73) we put $x = 0$ and $x = T$. Then
$$\tilde\varphi'(d,\lambda) + (h_d - \varepsilon_0(d))\tilde\varphi(d,\lambda) = U_d(\varphi(x,\lambda)) + \sum_{k=0}^{\infty}\big(\tilde Q_{k0}(d,\lambda)U_d(\varphi_{k0}) - \tilde Q_{k1}(d,\lambda)U_d(\varphi_{k1})\big), \qquad d = 0, T. \qquad (4.4.75)$$
Let $d = 0$. Since $U_0(\varphi(x,\lambda)) = 0$, $\tilde\varphi(0,\lambda) = 1$, $\tilde\varphi'(0,\lambda) = \tilde h$, we get $\tilde h - h - \varepsilon_0(0) = 0$, i.e. $h = \tilde h - \varepsilon_0(0)$. Let $d = T$. Since
$$U_T(\varphi(x,\lambda)) = \Delta(\lambda), \qquad U_T(\varphi_{k0}) = 0, \qquad U_T(\varphi_{k1}) = \Delta(\lambda_{k1}),$$
it follows from (4.4.75) that
$$\tilde\varphi'(T,\lambda) + (h_T - \varepsilon_0(T))\tilde\varphi(T,\lambda) = \Delta(\lambda) + \sum_{k=0}^{\infty}\frac{\tilde\varphi_{k1}(T)\,\tilde\Delta(\lambda)}{\alpha_{k1}(\lambda - \lambda_{k1})}\,\Delta(\lambda_{k1}).$$
For $\lambda = \lambda_{n1}$ this yields
$$\tilde\varphi_{n1}'(T) + (h_T - \varepsilon_0(T))\tilde\varphi_{n1}(T) = \Delta(\lambda_{n1})\big(1 + (\alpha_{n1})^{-1}\tilde\varphi_{n1}(T)\tilde\Delta_1(\lambda_{n1})\big).$$
By virtue of (4.4.7) and (4.4.9),
$$(\alpha_{n1})^{-1}\tilde\varphi_{n1}(T)\tilde\Delta_1(\lambda_{n1}) = -1,$$
i.e.
$$\tilde\varphi_{n1}'(T) + (h_T - \varepsilon_0(T))\tilde\varphi_{n1}(T) = 0.$$
On the other hand,
$$\tilde\varphi_{n1}'(T) + \tilde h_T\tilde\varphi_{n1}(T) = \tilde\Delta(\lambda_{n1}) = 0.$$
Then $(h_T - \varepsilon_0(T) - \tilde h_T)\tilde\varphi_{n1}(T) = 0$, i.e. $h_T - \tilde h_T = \varepsilon_0(T)$. $\Box$
Now let us formulate necessary and sufficient conditions for the solvability of the inverse problem.

Theorem 4.4.5. For real numbers $\{\lambda_n, \alpha_n\}_{n\ge 0}$ to be the spectral data for a certain boundary value problem $L$ of the form (4.4.1)-(4.4.3), it is necessary and sufficient that $\alpha_n > 0$, $\lambda_n\ne\lambda_m$ $(n\ne m)$, and that there exists $\tilde L$ such that (4.4.52) holds. The boundary value problem $L$ can be constructed by (4.4.56), (4.4.70)-(4.4.72).

Proof. We give only a short sketch of the proof of Theorem 4.4.5, since it is similar to the proof of Theorem 1.6.2 for the classical Sturm-Liouville boundary value problems.
The necessity part of Theorem 4.4.5 was proved above. We prove the sufficiency. Let numbers $\{\lambda_n, \alpha_n\}_{n\ge 0}$ satisfying the hypotheses of Theorem 4.4.5 be given. We construct $\tilde\psi(x)$ and $\tilde H(x)$, and consider the main equation (4.4.68).
Lemma 4.4.3. For each fixed $x\in[0,T]$, the operator $E + \tilde H(x)$, acting from $m$ to $m$, has a bounded inverse operator, and the main equation (4.4.68) has a unique solution $\psi(x)\in m$.

We omit the proof of Lemma 4.4.3, since the arguments here are the same as in the proof of Lemma 1.6.6.

Let $\psi(x) = [\psi_u(x)]_{u\in V}$ be the solution of the main equation (4.4.68). Then the functions $\psi_{ni}^{(\nu)}(x)$, $\nu = 0,1$, are absolutely continuous on $[0,a]$ and $[a,T]$, and (4.4.66) holds. We define the functions $\varphi_{ni}(x)$ by the formulae
$$\varphi_{n1}(x) = \psi_{n1}(x), \qquad \varphi_{n0}(x) = \psi_{n0}(x)\xi_n + \psi_{n1}(x).$$
Then (4.4.65) and (4.4.53) are valid. Furthermore, we construct the functions $\varphi(x,\lambda)$ and $\Phi(x,\lambda)$ via (4.4.55) and (4.4.64), and the boundary value problem $L$ via (4.4.56), (4.4.69)-(4.4.72). Clearly, $\varphi_{ni}(x) = \varphi(x,\lambda_{ni})$.
Using (4.4.55), (4.4.64) and (4.4.65) we get, analogously to the proof of Lemma 1.6.9,
$$\ell\varphi_{ni}(x) = \lambda_{ni}\varphi_{ni}(x), \qquad \ell\varphi(x,\lambda) = \lambda\varphi(x,\lambda), \qquad \ell\Phi(x,\lambda) = \lambda\Phi(x,\lambda).$$
Furthermore, from (4.4.55) by differentiation we get (4.4.73). For $\lambda = \lambda_{ni}$, (4.4.73) implies
$$\tilde\varphi_{ni}'(x) = \varphi_{ni}'(x) + \varepsilon_0(x)\tilde\varphi_{ni}(x) + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(x)\varphi_{k0}'(x) - \tilde Q_{ni,k1}(x)\varphi_{k1}'(x)\big). \qquad (4.4.76)$$
Let us show that the functions $\varphi(x,\lambda)$ and $\Phi(x,\lambda)$ satisfy the jump conditions (4.4.3). For this purpose we put $x = a+0$ in (4.4.65). Using the jump conditions for $\tilde\varphi_{ni}(x)$, we get
$$a_1\tilde\varphi_{ni}(a-0) = \varphi_{ni}(a+0) + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(a)\varphi_{k0}(a+0) - \tilde Q_{ni,k1}(a)\varphi_{k1}(a+0)\big).$$
Comparing with (4.4.65) for $x = a-0$, we infer
$$g_{ni} + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(a)g_{k0} - \tilde Q_{ni,k1}(a)g_{k1}\big) = 0, \qquad g_{ni} := \varphi_{ni}(a+0) - a_1\varphi_{ni}(a-0).$$
According to Lemma 4.4.3 this yields $g_{ni} = 0$, i.e.
$$\varphi_{ni}(a+0) = a_1\varphi_{ni}(a-0). \qquad (4.4.77)$$
Taking $x = a+0$ in (4.4.76) and using (4.4.77) and the jump conditions for $\tilde\varphi_{ni}(x)$, we get
$$a_1^{-1}\tilde\varphi_{ni}'(a-0) + \tilde a_2\tilde\varphi_{ni}(a-0) = \varphi_{ni}'(a+0) + a_1^3\varepsilon_0(a-0)\tilde\varphi_{ni}(a-0) + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(a)\varphi_{k0}'(a+0) - \tilde Q_{ni,k1}(a)\varphi_{k1}'(a+0)\big).$$
Replacing here $\tilde\varphi_{ni}(a-0)$ and $\tilde\varphi_{ni}'(a-0)$ from (4.4.65) and (4.4.76) and taking (4.4.71) into account, we obtain
$$G_{ni} + \sum_{k=0}^{\infty}\big(\tilde Q_{ni,k0}(a)G_{k0} - \tilde Q_{ni,k1}(a)G_{k1}\big) = 0, \qquad G_{ni} := \varphi_{ni}'(a+0) - a_1^{-1}\varphi_{ni}'(a-0) - a_2\varphi_{ni}(a-0).$$
By Lemma 4.4.3 this yields $G_{ni} = 0$, i.e.
$$\varphi_{ni}'(a+0) = a_1^{-1}\varphi_{ni}'(a-0) + a_2\varphi_{ni}(a-0). \qquad (4.4.78)$$
It follows from (4.4.55), (4.4.71), (4.4.73), (4.4.77), (4.4.78) and the jump conditions for $\tilde\varphi(x,\lambda)$ that
$$\varphi(a+0,\lambda) = a_1\varphi(a-0,\lambda), \qquad \varphi'(a+0,\lambda) = a_1^{-1}\varphi'(a-0,\lambda) + a_2\varphi(a-0,\lambda).$$
Similarly one can calculate
$$\Phi(a+0,\lambda) = a_1\Phi(a-0,\lambda), \qquad \Phi'(a+0,\lambda) = a_1^{-1}\Phi'(a-0,\lambda) + a_2\Phi(a-0,\lambda).$$
Thus, the functions $\varphi(x,\lambda)$ and $\Phi(x,\lambda)$ satisfy the jump conditions (4.4.3).
Let us now show that
(0, ) = 1,

(0, ) = h, U() = 1, V () = 0. (4.4.79)


Indeed, using (4.4.55) and (4.4.64) and acting in the same way as in the proof of Lemma
4.4.2, we get

U
d
( (x, )) = U
d
((x, )) +

k=0
_

Q
k0
(d, )U
d
(
k0
)

Q
k1
(d, )U
d
(
k1
)
_
, (4.4.80)

U
d
(

(x, )) = U
d
((x, )) +

k=0
_

D
k0
(d, )U
d
(
k0
)

D
k1
(d, )U
d
(
k1
)
_
, (4.4.81)
where d = 0, T; U
0
= U, U
T
= V. Since (x, ),
kj
(x))
|x=0
= 0, it follows from (4.4.55)
and (4.4.80) that (0, ) = 1, U
0
() = 0, and consequently

(0, ) = h. In (4.4.81) we
put d = 0. Since U
0
() = 0 we get U
0
() = 1.
Denote () := V (). It follows from (4.4.80) and (4.4.81) for d = T that

() = () +

k=0
_

Q
k0
(T, )(
k0
)

Q
k1
(T, )(
k1
)
_
, (4.4.82)
251
0 = V ()

k=0
_

D
k0
(T, )(
k0
)

D
k1
(T, )(
k1
)
_
. (4.4.83)
In (4.4.82) we set =
n1
:
(
n1
) +

k=0
_

Q
n1,k0
(T)(
k0
)

Q
n1,k1
(T)(
k1
)
_
= 0.
Since

Q
n1,k1
(T) =
nk
we get

k=0

Q
n1,k0
(T)(
k0
) = 0.
Then, by virtue of Lemma 4.4.3, (
k0
) = 0, k 0. Substituting this into (4.4.83) and
using the relation

(x, ),
k1
(x))
|x=T
= 0, we obtain V () = 0, and (4.4.79) is proved.
We additionally proved that (
n
) = 0, i.e. the numbers
n

n0
are eigenvalues of
L. It follows from (4.4.64) for x = 0 that
M() =

M() +

k=0
_
1

k0
(
k0
)

1

k1
(
k1
)
_
.
But according to (4.4.49),

M() =

k=0
1

k1
(
k1
)
.
Hence
M() =

k=0
1

k
(
k
)
.
Thus, the numbers
n
,
n

n0
are the spectral data for the constructed boundary value
problem L. Theorem 4.4.5 is proved. 2
Example 4.4.1. Take

L such that q(x) = 0,

h =

H = a
2
= 0. Let

n
,
n

n0
be
the spectral data of

L. Clearly,

0
= 0,
0
= a + (T a)a
1
,
00
(x) = 1 (x < a),
00
(x) = a
1
(x > a).
Let
n
=

n
(n 0),
n
=
n
(n 1), and let
0
> 0 be an arbitrary positive
number. Denote A :=
1

1

0
. Then, (4.4.65) and (4.4.69) yield

00
(x) =
00
(x)
_
1 +A
_
x
0

2
00
(t) dt
_
,
0
(x) = A
00
(x)
00
(x),
and consequently,

00
(x) =
_
(1 +Ax)
1
, x < a,
a
1
(B +Aa
2
1
x)
1
, x > a,

0
(x) =
_
A(1 + Ax)
1
, x < a,
Aa
2
1
(B +Aa
2
1
x)
1
, x > a,
where B = 1 + Aa Aa
2
1
a. Using (4.4.70)-(4.4.72) we calculate
q(x) =
_
2A
2
(1 + Ax)
2
, x < a,
2A
2
a
4
1
(B +Aa
2
1
x)
2
, x > a,
252
h = A, H = Aa
2
1
(B +Aa
2
1
T)
1
, a
2
= (a
1
1
a
3
1
)A(1 + Aa)
1
.
Remarks. 1) Analogous results are also valid for boundary value problems having an
arbitrary number of discontinuities.
2) In Theorem 4.4.5, (4.4.52) is a condition on the asymptotic behaviour of the spectral
data. It follows from the proof that Theorem 4.4.5 is also valid if instead of (4.4.52) we take

n
l
2
.
3) In order to choose a model boundary value problem

L, one can use the asymptotics
of the spectral data. We note that in applications a model boundary value problem often is
given a priori. It describes a known object, and its perturbations are studied.
4) For boundary value problems without discontinuities (i.e. a
1
= 1, a
2
= 0 ), Theorem
4.4.5 gives the following assertion as a corollary:
For real numbers
n
,
n

n0
to be the spectral data for a certain boundary value
problem L of the form (4.4.1)-(4.4.2), it is necessary and sucient that
n
> 0,
n
,=

m
(n ,= m), and

n
=
n
T
+

n
+

n
n
,
n
=
T
2
+

n1
n
,
n
,
n1

2
.
5) This method also works for nonselfadjoint boundary value problems and for more
general jump conditions.
4.5. INVERSE PROBLEMS IN ELASTICITY THEORY
The problem of determining the dimensions of the transverse cross-sections of a beam
from the given frequencies of its natural vibrations is examined. Frequency spectra are
indicated that determine the dimensions of the transverse cross-sections of the beam uniquely,
an eective procedure is presented for solving the inverse problem, and a uniqueness theorem
is proved.
4.5.1. Consider the dierential equation describing beam vibrations in the form
(h

(x)y

= h(x)y, 0 x T. (4.5.1)
Here h(x) is a function characterizing the beam transverse section, and = 1, 2, 3 is
a xed number. We will assume that the function h(x) is absolutely continuous in the
segment [0, T], and h(x) > 0, h(0) = 1. The inverse problem for (4.5.1) in the case = 2
(similar transverse sections) was investigated in [ain1] in determining small changes in the
beam transverse sections from given small changes in a nite number of its natural vibration
frequencies.
Let
kj

k1
, j = 1, 2 be the eigenvalues of the boundary value problems Q
j
for (4.5.1)
with the boundary conditions
y(0) = y
(j)
(0) = y(T) = y

(T) = 0.
The inverse problem is formulated as follows.
Inverse Problem 4.5.1. Find the function h(x), x [0, T] from the given spectra

kj

k1, j=1,2
.
253
Let us show that this inverse problem can be reduced to the inverse problem of recovering
the dierential equation (4.5.1) from the corresponding Weyl function. Let (x, ) be a
solution of (4.5.1) under the conditions
(0, ) = (T, ) =

(T, ) = 0,

(0, ) = 1.
We set M() :=

(0, ). The function M() is called the Weyl function for (4.5.1).
Let the functions C

(x, ), = 0, 3 be solutions of (4.5.1) under the initial conditions


C
()

(0, ) =

, , = 0, 3. Denote

j
() = C
3j
(T, )C

3
(T, ) C
3
(T, )C

3j
(T, ), j = 1, 2.
Then
(x, ) =
1

1
()
det[C

(x, ), C

(T, ), C

(T, )]
=1,2,3
,
and hence
M() =

2
()

1
()
.
Denote
(x) =
_
x
0
(h(t))
(1)/4
dt, = (T).
Let =
4
. By the well-known method (see, for example [nai1, Ch.1]) one can obtain the
following asymptotic formulae

kj
=
_
k

_
4
_
1 +
A
j1
k
+O
_
1
k
2
__
, k , (4.5.2)

j
() = O(
j5
exp(C[[)), [[ . (4.5.3)
For definiteness, let $S := \{\rho:\ \arg\rho \in [0,\pi/4]\}$. Then, for $|\rho| \to \infty$, $\rho \in S$, $\arg\rho = \varphi \ne 0$,
$$\Delta_j(\lambda) = \rho^{\,j-5}A_{j2}\exp(\rho\,\Xi(1+i))\Bigl(1 + O\Bigl(\frac{1}{\rho}\Bigr)\Bigr), \eqno(4.5.4)$$
$$\Phi^{(\nu)}(x,\lambda) = \frac{1}{\rho^2}\sum_{\mu=1}^{2}(\rho R_\mu\,\xi'(x))^\nu g_\mu(x)\exp(\rho R_\mu\,\xi(x))\Bigl(1 + O\Bigl(\frac{1}{\rho}\Bigr)\Bigr), \quad x \in [0,T], \eqno(4.5.5)$$
where $R_1 = -1$, $R_2 = -i$; the functions $g_\mu(x)$ are absolutely continuous, $g_\mu(x) \ne 0$, and $g_1(0) = -g_2(0) = (1-i)^{-1}$. The numbers $A_{js}$ in (4.5.2) and (4.5.4) depend on $\nu$. In particular, (4.5.5) yields
$$M(\lambda) = (1+i)\Bigl(1 + O\Bigl(\frac{1}{\rho}\Bigr)\Bigr).$$
Lemma 4.5.1. The Weyl function $M(\lambda)$ is uniquely determined by the specification of the spectra $\{\lambda_{kj}\}_{k\ge 1,\ j=1,2}$.

Proof. The eigenvalues $\{\lambda_{kj}\}_{k\ge 1,\ j=1,2}$ of the boundary value problems $Q_j$ coincide with the zeros of the entire functions $\Delta_j(\lambda)$. Indeed, let $\lambda_0$ be an eigenvalue and $y(x)$ an eigenfunction of the boundary value problem $Q_j$. Then
$$y(x) = \sum_{\mu=0}^{3}\gamma_\mu C_\mu(x,\lambda_0),$$
where
$$\sum_{\mu=0}^{3}\gamma_\mu C_\mu^{(k)}(0,\lambda_0) = 0, \quad \sum_{\mu=0}^{3}\gamma_\mu C_\mu^{(s)}(T,\lambda_0) = 0, \qquad k = 0, j;\ s = 0,1.$$
Since $y \not\equiv 0$, this linear homogeneous algebraic system has non-zero solutions and, therefore, its determinant equals zero, i.e. $\Delta_j(\lambda_0) = 0$. Repeating all arguments in reverse order, we obtain that if $\Delta_j(\lambda_0) = 0$, then $\lambda_0$ is an eigenvalue of the boundary value problem $Q_j$.

It follows from (4.5.3) that the order of the entire functions $\Delta_j(\lambda)$ is less than 1, and therefore, according to Hadamard's factorization theorem [con1, p.289],
$$\Delta_j(\lambda) = B_j\prod_{k=1}^{\infty}\Bigl(1 - \frac{\lambda}{\lambda_{kj}}\Bigr) \eqno(4.5.6)$$
with some constants $B_j$.
Let us consider a positive function $\tilde h(x) \in AC[0,T]$ such that $\tilde h(0) = 1$. As before we agree that if a certain symbol $\alpha$ denotes an object related to $h$, then the corresponding symbol $\tilde\alpha$ with tilde denotes the analogous object related to $\tilde h$, and $\hat\alpha := \alpha - \tilde\alpha$.
Choose $\tilde h$ such that $\tilde\Xi = \Xi$ and $\tilde A_{j1} = A_{j1}$ (by (4.5.2), these quantities are determined by the given spectra). We have from (4.5.6)
$$\frac{\Delta_j(\lambda)}{\tilde\Delta_j(\lambda)} = \frac{B_j}{\tilde B_j}\prod_{k=1}^{\infty}\frac{\tilde\lambda_{kj}}{\lambda_{kj}}\,\prod_{k=1}^{\infty}\Bigl(1 + \frac{\lambda_{kj}-\tilde\lambda_{kj}}{\tilde\lambda_{kj}-\lambda}\Bigr).$$
By virtue of (4.5.2) and (4.5.4), we have for $|\rho| \to \infty$, $\arg\rho = \varphi \ne 0$, $\rho \in S$,
$$\frac{\Delta_j(\lambda)}{\tilde\Delta_j(\lambda)} \to 1, \qquad \prod_{k=1}^{\infty}\Bigl(1 + \frac{\lambda_{kj}-\tilde\lambda_{kj}}{\tilde\lambda_{kj}-\lambda}\Bigr) \to 1.$$
This yields
$$\frac{B_j}{\tilde B_j}\prod_{k=1}^{\infty}\frac{\tilde\lambda_{kj}}{\lambda_{kj}} = 1,$$
and therefore,
$$M(\lambda) = \hat B\prod_{k=1}^{\infty}\frac{\lambda_{k1}(\lambda_{k2}-\lambda)}{\lambda_{k2}(\lambda_{k1}-\lambda)}, \qquad \hat B = -\frac{B_2}{B_1}.$$
Hence the specification of the spectra $\{\lambda_{kj}\}_{k\ge 1,\ j=1,2}$ uniquely determines the Weyl function $M(\lambda)$. $\square$
Thus, Inverse Problem 4.5.1 is reduced to the following inverse problem.

Inverse Problem 4.5.2. Given the Weyl function $M(\lambda)$, find $h(x)$, $x \in [0,T]$.

4.5.2. We shall solve Inverse Problem 4.5.2 by the method of standard models. First we prove several auxiliary assertions.
Lemma 4.5.2. Let $p(x) := h^\nu(x)$. The following relation holds:
$$\int_0^T\bigl(\lambda\,\hat h(x)\Phi(x,\lambda)\tilde\Phi(x,\lambda) - \hat p(x)\Phi''(x,\lambda)\tilde\Phi''(x,\lambda)\bigr)\,dx = \hat M(\lambda). \eqno(4.5.7)$$
Proof. Denote
$$\ell y = (p(x)y'')'' - \lambda h(x)y, \qquad L(y,z) = (p(x)y'')'z - p(x)y''z' + p(x)z''y' - y(p(x)z'')'.$$
Then
$$\int_0^T (\ell y)(x)\,z(x)\,dx = L(y(x),z(x))\Big|_0^T + \int_0^T y(x)\,(\ell z)(x)\,dx. \eqno(4.5.8)$$
Using (4.5.8) and the equalities $\ell\Phi(x,\lambda) = \tilde\ell\tilde\Phi(x,\lambda) = 0$, we obtain
$$\int_0^T (\tilde\ell\Phi)(x,\lambda)\tilde\Phi(x,\lambda)\,dx = \tilde L(\Phi(x,\lambda),\tilde\Phi(x,\lambda))\Big|_0^T + \int_0^T \Phi(x,\lambda)(\tilde\ell\tilde\Phi)(x,\lambda)\,dx = \Phi''(0,\lambda) - \tilde\Phi''(0,\lambda) = \hat M(\lambda).$$
On the other hand, since $\tilde\ell\Phi = (\tilde\ell-\ell)\Phi = \lambda\hat h(x)\Phi - (\hat p(x)\Phi'')''$, integrating by parts we have
$$\int_0^T (\tilde\ell\Phi)(x,\lambda)\tilde\Phi(x,\lambda)\,dx = -\bigl((\hat p(x)\Phi''(x,\lambda))'\tilde\Phi(x,\lambda) - \hat p(x)\Phi''(x,\lambda)\tilde\Phi'(x,\lambda)\bigr)\Big|_0^T$$
$$+ \int_0^T\bigl(\lambda\hat h(x)\Phi(x,\lambda)\tilde\Phi(x,\lambda) - \hat p(x)\Phi''(x,\lambda)\tilde\Phi''(x,\lambda)\bigr)\,dx.$$
Since the substitution vanishes, we hence obtain (4.5.7). $\square$
Lemma 4.5.3. Consider the integral
$$J(z) = \int_0^T f(x)H(x,z)\,dx, \eqno(4.5.9)$$
where
$$f(x) \in C[0,T], \quad f(x) \sim f_\alpha\frac{x^\alpha}{\alpha!}\ (x \to +0), \quad H(x,z) = \exp(-z\,a(x))\Bigl(1 + \frac{\theta(x,z)}{z}\Bigr),$$
$$a(x) \in C^1[0,T], \quad 0 < a(x_1) < a(x_2)\ (0 < x_1 < x_2), \quad a^{(\nu)}(x) \sim a_0 x^{1-\nu}\ (x \to +0,\ \nu = 0,1), \quad a_0 > 0,$$
and the function $\theta(x,z)$ is continuous and bounded for $x \in [0,T]$,
$$z \in Q := \Bigl\{z:\ \arg z \in \Bigl[-\frac{\pi}{2}+\delta_0,\ \frac{\pi}{2}-\delta_0\Bigr],\ \delta_0 > 0\Bigr\}.$$
Then as $z \to \infty$, $z \in Q$,
$$J(z) \sim f_\alpha (a_0 z)^{-\alpha-1}. \eqno(4.5.10)$$
Proof. The function $t = a(x)$ has the inverse $x = b(t)$, where $b(t) \in C^1[0,T_1]$, $T_1 = a(T)$, $b(t) > 0$ for $t > 0$, and $b^{(\nu)}(t) \sim (a_0)^{-1}t^{1-\nu}$, $\nu = 0,1$, as $t \to +0$. Let us make the change of variable $t = a(x)$ in the integral in (4.5.9). We obtain
$$J(z) = \int_0^{T_1} g(t)\exp(-zt)\bigl(1 + z^{-1}\theta(b(t),z)\bigr)\,dt, \eqno(4.5.11)$$
where $g(t) = f(b(t))\,b'(t)$. It is clear that for $t \to +0$,
$$g(t) \sim (a_0)^{-\alpha-1}f_\alpha\frac{t^\alpha}{\alpha!}.$$
Applying Lemma 1.7.1 to (4.5.11), we obtain (4.5.10). $\square$
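Lemma 4.5.3 is a Watson-type lemma, and (4.5.10) can be checked numerically in a model case (an illustrative sketch, not part of the original text; the choices $f(x) = x$, $a(x) = x$, $\theta \equiv 0$ are ours). Then $\alpha = 1$, $f_1 = 1$, $a_0 = 1$, and $J(z) = \int_0^T x\,e^{-zx}\,dx \sim z^{-2}$:

```python
import math

def J(z, T=1.0, n=200000):
    # trapezoidal approximation of J(z) = int_0^T x*exp(-z*x) dx
    h = T / n
    s = 0.5 * (0.0 + T * math.exp(-z * T))
    for k in range(1, n):
        x = k * h
        s += x * math.exp(-z * x)
    return s * h

z = 50.0
approx = J(z)
predicted = 1.0 / z**2      # f_alpha * (a0*z)^(-alpha-1) with alpha = 1
print(approx, predicted, abs(approx / predicted - 1.0))
```

For $z = 50$ the relative deviation from the predicted value $z^{-2}$ is already far below $10^{-4}$.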
Denote
$$A_\alpha = \frac{1}{(R_1-R_2)^2}\sum_{\mu,s=1}^{2}\frac{(-1)^{\mu+s}\bigl(1 - \nu R_\mu^2 R_s^2\bigr)}{(R_\mu+R_s)^{\alpha+1}}, \quad \alpha \ge 1.$$
Let us show that $A_\alpha \ne 0$ for all $\alpha \ge 1$. For definiteness, we put $\arg\rho = \varphi \in (0,\pi/4)$, i.e. $R_1 = -1$, $R_2 = -i$. Then
$$A_\alpha = \frac{1}{(R_1-R_2)^2}\cdot\frac{(-1)^{\alpha+1}}{2^{\alpha+1}}\,a_\alpha,$$
where
$$a_\alpha = (\nu-1)(1 + i^{\alpha+1}) + 2(\nu+1)(1+i)^{\alpha+1}.$$
Since $|1+i^{\alpha+1}| \le 2$ and $|1+i|^{\alpha+1} = (\sqrt 2)^{\alpha+1}$, it follows that $a_\alpha \ne 0$ for all $\alpha \ge 1$. Hence $A_\alpha \ne 0$ for all $\alpha \ge 1$. $\square$
Lemma 4.5.4. As $x \to 0$ let $\hat h(x) \sim \hat h_\alpha(\alpha!)^{-1}x^\alpha$. Then as $|\rho| \to \infty$, $\arg\rho = \varphi \ne 0$, $\rho \in S$, there exists a finite limit $I_\alpha = \lim \rho^{\alpha+1}\hat M(\lambda)$, and
$$\hat h_\alpha A_\alpha = I_\alpha. \eqno(4.5.12)$$

Proof. Since $p(x) = h^\nu(x)$, by virtue of the conditions of the lemma we have
$$\hat p(x) \sim \nu\hat h_\alpha\frac{x^\alpha}{\alpha!}, \quad x \to +0.$$
Using the asymptotic formulae (4.5.5) and Lemma 4.5.3, we find as $\lambda \to \infty$, $\arg\rho = \varphi \ne 0$, $\rho \in S$:
$$\int_0^T \lambda\,\hat h(x)\Phi(x,\lambda)\tilde\Phi(x,\lambda)\,dx \sim \frac{\hat h_\alpha}{\rho^{\alpha+1}(R_1-R_2)^2}\sum_{\mu,s=1}^{2}\frac{(-1)^{\mu+s}}{(R_\mu+R_s)^{\alpha+1}},$$
$$\int_0^T \hat p(x)\Phi''(x,\lambda)\tilde\Phi''(x,\lambda)\,dx \sim \frac{\nu\hat h_\alpha}{\rho^{\alpha+1}(R_1-R_2)^2}\sum_{\mu,s=1}^{2}\frac{(-1)^{\mu+s}R_\mu^2 R_s^2}{(R_\mu+R_s)^{\alpha+1}}.$$
Substituting the expressions obtained into (4.5.7), we obtain the assertion of the lemma. $\square$
Let $A$ be the set of functions which are analytic on $[0,T]$. From the facts presented above we have the following theorem.

Theorem 4.5.1. Inverse Problem 4.5.2 has a unique solution in the class $h(x) \in A$. This solution can be found according to the following algorithm:

1) We calculate $h_\alpha = h^{(\alpha)}(0)$, $\alpha \ge 0$, $h_0 = 1$. For this purpose we successively perform the following operations for $\alpha = 1,2,\ldots$: we construct a function $\tilde h(x) \in A$, $\tilde h(x) > 0$, such that $\tilde h^{(\mu)}(0) = h_\mu$, $\mu = \overline{0,\alpha-1}$ (and arbitrary in the rest), and we calculate $h_\alpha$ by (4.5.12).

2) We construct the function $h(x)$ by the formula
$$h(x) = \sum_{\alpha=0}^{\infty}h_\alpha\frac{x^\alpha}{\alpha!}, \quad 0 \le x < R,$$
where
$$R = \Bigl(\limsup_{\alpha\to\infty}\Bigl(\frac{|h_\alpha|}{\alpha!}\Bigr)^{1/\alpha}\Bigr)^{-1}.$$
If $R < T$, then for $R \le x \le T$ the function $h(x)$ is constructed by analytic continuation.

We note that the inverse problem in the class of piecewise-analytic functions can be solved in an analogous manner.
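The radius-of-convergence step of the algorithm can be sketched on its own (an illustrative Python sketch; the sample derivatives $h_\alpha = \alpha!\,2^{-\alpha}$, i.e. $h(x) = (1-x/2)^{-1}$, are a hypothetical input, not data from the text):

```python
import math

# hypothetical recovered derivatives h_alpha = h^(alpha)(0) of h(x) = 1/(1 - x/2)
N = 60
h = [math.factorial(a) * 2.0**(-a) for a in range(N)]

# Cauchy-Hadamard: R = (limsup (|h_alpha|/alpha!)^(1/alpha))^(-1); the limsup is
# approximated here by the last available coefficient
c = [h[a] / math.factorial(a) for a in range(N)]   # Taylor coefficients
R = c[N - 1] ** (-1.0 / (N - 1))

x = 1.0                                            # a point inside [0, R)
partial = sum(c[a] * x**a for a in range(N))
print(R, partial)   # R = 2.0, partial sum ~ 1/(1 - 1/2) = 2
```

The recovered radius is $R = 2$, and the partial sums of the Taylor series reproduce $h$ inside $[0,R)$.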
4.6. BOUNDARY VALUE PROBLEMS WITH AFTEREFFECT

In this section, the perturbation of the Sturm-Liouville operator by a Volterra integral operator is considered. The presence of an aftereffect in a mathematical model produces qualitative changes in the study of the inverse problem. The main results of the section are expressed by Theorems 4.6.1-4.6.3. We use here several methods which were described in Chapter 1. In order to prove the uniqueness theorem (Theorem 4.6.1) the transformation operator method is applied. With the help of the method of standard models we obtain an algorithm for the solution of the inverse problem (Theorem 4.6.2), and the Borg method is used for the local solvability of the inverse problem and for the study of the stability of the solution (Theorem 4.6.3).
4.6.1. Let $\{\lambda_n\}_{n\ge 1}$ be the eigenvalues of a boundary value problem $L = L(q,M)$ of the form
$$\ell y(x) \equiv -y''(x) + q(x)y(x) + \int_0^x M(x-t)y(t)\,dt = \lambda y(x) = \rho^2 y(x), \eqno(4.6.1)$$
$$y(0) = y(\pi) = 0. \eqno(4.6.2)$$
Consider the following problem:

Inverse Problem 4.6.1. Given the function $q(x)$ and the spectrum $\{\lambda_n\}_{n\ge 1}$, find the function $M(x)$.

Put
$$M_0(x) = (\pi-x)M(x), \quad M_1(x) = \int_0^x M(t)\,dt, \quad Q(x) = M_0(x) - M_1(x).$$
We shall assume that $q(x), Q(x) \in L_2(0,\pi)$, $M_k(x) \in L(0,\pi)$, $k = 0,1$.

Let $S(x,\lambda)$ be the solution of (4.6.1) under the initial conditions $S(0,\lambda) = 0$, $S'(0,\lambda) = 1$.
Lemma 4.6.1. The representation
$$S(x,\lambda) = \frac{\sin\rho x}{\rho} + \int_0^x K(x,t)\frac{\sin\rho t}{\rho}\,dt \eqno(4.6.3)$$
holds, where $K(x,t)$ is a continuous function, and $K(x,0) = 0$.

Proof. The function $S(x,\lambda)$ is the solution of the integral equation
$$S(x,\lambda) = \frac{\sin\rho x}{\rho} + \int_0^x \frac{\sin\rho(x-\tau)}{\rho}\Bigl(q(\tau)S(\tau,\lambda) + \int_0^\tau M(\tau-s)S(s,\lambda)\,ds\Bigr)d\tau. \eqno(4.6.4)$$
Since
$$\int_0^x \frac{\sin\rho(x-\tau)}{\rho}f(\tau)\,d\tau = \int_0^x\Bigl(\int_0^t f(\tau)\cos\rho(t-\tau)\,d\tau\Bigr)dt,$$
(4.6.4) is transformable to the form
$$S(x,\lambda) = \frac{\sin\rho x}{\rho} + \int_0^x\int_0^t\Bigl(q(\tau)S(\tau,\lambda) + \int_0^\tau M(\tau-s)S(s,\lambda)\,ds\Bigr)\cos\rho(t-\tau)\,d\tau\,dt. \eqno(4.6.5)$$
Apply the method of successive approximations to solve the equation (4.6.5):
$$S_0(x,\lambda) = \frac{\sin\rho x}{\rho},$$
$$S_{n+1}(x,\lambda) = \int_0^x\int_0^t\Bigl(q(\tau)S_n(\tau,\lambda) + \int_0^\tau M(\tau-s)S_n(s,\lambda)\,ds\Bigr)\cos\rho(t-\tau)\,d\tau\,dt.$$
Transform $S_1(x,\lambda)$:
$$S_1(x,\lambda) = \frac{1}{2\rho}\int_0^x\sin\rho t\Bigl(\int_0^t q(\tau)\,d\tau\Bigr)dt + \frac{1}{2\rho}\int_0^x\Bigl(\int_0^t q(\tau)\sin\rho(2\tau-t)\,d\tau\Bigr)dt$$
$$+ \frac{1}{2\rho}\int_0^x\int_0^t\int_0^\tau M(\tau-s)\sin\rho(s+t-\tau)\,ds\,d\tau\,dt + \frac{1}{2\rho}\int_0^x\int_0^t\int_0^\tau M(\tau-s)\sin\rho(s-t+\tau)\,ds\,d\tau\,dt.$$
Carrying out the changes of variables $\xi = 2\tau-t$, $\xi = s+t-\tau$, $\xi = s-t+\tau$, respectively, in the last three integrals and reversing the order of integration, we obtain
$$S_1(x,\lambda) = \int_0^x K_1(x,\xi)\frac{\sin\rho\xi}{\rho}\,d\xi,$$
where
$$K_1(x,\xi) = \frac{1}{2}\int_0^\xi q(\tau)\,d\tau + \frac{1}{4}\int_\xi^x\Bigl(q\Bigl(\frac{\xi+t}{2}\Bigr) - q\Bigl(\frac{t-\xi}{2}\Bigr)\Bigr)dt$$
$$+ \frac{1}{2}\int_\xi^x\Bigl(\xi M(t-\xi) + \int_{(\xi+t)/2}^{t}M(2\tau-\xi-t)\,d\tau - \int_{(t-\xi)/2}^{t-\xi}M(2\tau-t+\xi)\,d\tau\Bigr)dt. \eqno(4.6.6)$$
Clearly, $K_1(x,0) = 0$. In an analogous manner we compute
$$S_n(x,\lambda) = \int_0^x K_n(x,\xi)\frac{\sin\rho\xi}{\rho}\,d\xi,$$
where the functions $K_n(x,\xi)$ are determined by the recurrence formula
$$K_{n+1}(x,\xi) = \frac{1}{2}\int_\xi^x\Bigl[\int_{t-\xi}^{t}q(\tau)K_n(\tau,\xi+\tau-t)\,d\tau + \int_{(\xi+t)/2}^{t}q(\tau)K_n(\tau,\xi+t-\tau)\,d\tau$$
$$- \int_{(t-\xi)/2}^{t-\xi}q(\tau)K_n(\tau,t-\tau-\xi)\,d\tau + \int_{t-\xi}^{t}\Bigl(\int_{\xi+\tau-t}^{\tau}M(\tau-s)K_n(s,\xi+\tau-t)\,ds\Bigr)d\tau$$
$$+ \int_{(\xi+t)/2}^{t}\Bigl(\int_{\xi+t-\tau}^{\tau}M(\tau-s)K_n(s,\xi+t-\tau)\,ds\Bigr)d\tau - \int_{(t-\xi)/2}^{t-\xi}\Bigl(\int_{t-\tau-\xi}^{\tau}M(\tau-s)K_n(s,t-\tau-\xi)\,ds\Bigr)d\tau\Bigr]dt. \eqno(4.6.7)$$
Clearly, $K_n(x,0) = 0$. From (4.6.6) and (4.6.7) we obtain by induction the estimate
$$|K_n(x,\xi)| \le \frac{1}{n!}(Cx)^n, \quad 0 \le \xi \le x \le \pi.$$
Thus,
$$S(x,\lambda) = \sum_{n=0}^{\infty}S_n(x,\lambda) = \frac{\sin\rho x}{\rho} + \int_0^x K(x,\xi)\frac{\sin\rho\xi}{\rho}\,d\xi,$$
where
$$K(x,\xi) = \sum_{n=1}^{\infty}K_n(x,\xi), \eqno(4.6.8)$$
and the series (4.6.8) converges absolutely and uniformly for $0 \le \xi \le x \le \pi$. Lemma 4.6.1 is proved. $\square$
Denote $\Delta(\lambda) = S(\pi,\lambda)$. The eigenvalues $\{\lambda_n\}_{n\ge 1}$ of the boundary value problem $L$ coincide with the zeros of the function $\Delta(\lambda)$, and as in the proof of Theorem 1.1.3 we get for $n \to \infty$,
$$\rho_n = \sqrt{\lambda_n} = n + \frac{A_1}{n} + \frac{\kappa_n}{n}, \quad \{\kappa_n\} \in \ell_2, \quad A_1 = \frac{1}{2\pi}\int_0^\pi q(t)\,dt. \eqno(4.6.9)$$
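The asymptotics (4.6.9) can be sanity-checked in the simplest case (our own illustration, not from the text): for $q(x) \equiv c$ and $M = 0$ the eigenvalues are exactly $\lambda_n = n^2 + c$, and $A_1 = c/2$, so $\rho_n - (n + A_1/n)$ decays like $n^{-3}$:

```python
import math

c = 0.7                                         # constant potential q(x) = c, M = 0
A1 = (1.0 / (2.0 * math.pi)) * (c * math.pi)    # A1 = (1/2pi) * int_0^pi q = c/2

for n in (5, 50, 500):
    rho_n = math.sqrt(n * n + c)                # exact: lambda_n = n^2 + c
    remainder = rho_n - (n + A1 / n)
    print(n, remainder)                         # remainder = O(n^-3)
    assert abs(remainder) < 1.0 / n**3
```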
The following assertion can be proved like Theorem 1.1.4.

Lemma 4.6.2. The function $\Delta(\lambda)$ is uniquely determined by its zeros. Moreover,
$$\Delta(\lambda) = \pi\prod_{n=1}^{\infty}\frac{\lambda_n-\lambda}{n^2}. \eqno(4.6.10)$$
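For $q = 0$, $M = 0$ one has $\lambda_n = n^2$ and $\Delta(\lambda) = \sin(\rho\pi)/\rho$, so (4.6.10) reduces to Euler's product for the sine; this can be verified numerically (an illustration, not from the text):

```python
import math

lam = 2.25                          # test point, rho = 1.5
rho = math.sqrt(lam)

# Delta(lambda) = pi * prod_{n>=1} (lambda_n - lambda)/n^2 with lambda_n = n^2
N = 200000
prod = math.pi
for n in range(1, N + 1):
    prod *= (n * n - lam) / (n * n)

closed_form = math.sin(rho * math.pi) / rho    # = sin(1.5*pi)/1.5 = -2/3
print(prod, closed_form)
```

The truncated product agrees with the closed form to about $\lambda/N$ relative accuracy.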
We shall now prove the uniqueness theorem for the solution of Inverse Problem 4.6.1. Let $\{\tilde\lambda_n\}_{n\ge 1}$ be the eigenvalues of the boundary value problem $\tilde L = L(q,\tilde M)$.

Theorem 4.6.1. If $\lambda_n = \tilde\lambda_n$, $n \ge 1$, then $M(x) = \tilde M(x)$ a.e. on $(0,\pi)$.
Proof. Let the function $S^*(x,\lambda)$ be the solution of the equation
$$\ell^* z(x) := -z''(x) + q(x)z(x) + \int_x^\pi M(t-x)z(t)\,dt = \lambda z(x) \eqno(4.6.11)$$
under the conditions $S^*(\pi,\lambda) = 0$, $S^{*\prime}(\pi,\lambda) = -1$. Put $\Delta^*(\lambda) = S^*(0,\lambda)$, and denote $\hat M(x) := M(x) - \tilde M(x)$. Then
$$\int_0^\pi S^*(x,\lambda)\int_0^x \hat M(x-t)\tilde S(t,\lambda)\,dt\,dx = \int_0^\pi S^*(x,\lambda)(\ell\tilde S)(x,\lambda)\,dx - \int_0^\pi S^*(x,\lambda)(\tilde\ell\tilde S)(x,\lambda)\,dx$$
$$= \int_0^\pi \tilde S(x,\lambda)(\ell^* S^*)(x,\lambda)\,dx - \int_0^\pi S^*(x,\lambda)(\tilde\ell\tilde S)(x,\lambda)\,dx + \bigl(\tilde S'(x,\lambda)S^*(x,\lambda) - \tilde S(x,\lambda)S^{*\prime}(x,\lambda)\bigr)\Big|_0^\pi$$
$$= \tilde\Delta(\lambda) - \Delta^*(\lambda).$$
Furthermore, $\Delta^*(\lambda) \equiv \Delta(\lambda)$, and consequently
$$\int_0^\pi S^*(x,\lambda)\int_0^x \hat M(x-t)\tilde S(t,\lambda)\,dt\,dx = \tilde\Delta(\lambda) - \Delta(\lambda). \eqno(4.6.12)$$
Transform (4.6.12) into
$$\int_0^\pi \hat M(x)\Bigl(\int_x^\pi S^*(t,\lambda)\tilde S(t-x,\lambda)\,dt\Bigr)dx = \tilde\Delta(\lambda) - \Delta(\lambda). \eqno(4.6.13)$$
Denote $w(x,\lambda) = S^*(\pi-x,\lambda)$, $\hat N(x) = \hat M(\pi-x)$,
$$\eta(x,\lambda) = \int_0^x w(t,\lambda)\tilde S(x-t,\lambda)\,dt. \eqno(4.6.14)$$
Then (4.6.13) takes the form
$$\int_0^\pi \hat N(x)\eta(x,\lambda)\,dx = \tilde\Delta(\lambda) - \Delta(\lambda). \eqno(4.6.15)$$
Lemma 4.6.3. The representation
$$\eta(x,\lambda) = \frac{1}{2\rho^2}\Bigl(-x\cos\rho x + \int_0^x V(x,t)\cos\rho t\,dt\Bigr) \eqno(4.6.16)$$
holds, where $V(x,t)$ is a continuous function.
Proof. Since $w(x,\lambda) = S^*(\pi-x,\lambda)$, the function $w(x,\lambda)$ is the solution of the Cauchy problem
$$-w''(x,\lambda) + q(\pi-x)w(x,\lambda) + \int_0^x M(x-t)w(t,\lambda)\,dt = \lambda w(x,\lambda),$$
$$w(0,\lambda) = 0, \quad w'(0,\lambda) = 1.$$
Therefore, by Lemma 4.6.1, the representation
$$w(x,\lambda) = \frac{\sin\rho x}{\rho} + \int_0^x K_0(x,t)\frac{\sin\rho t}{\rho}\,dt \eqno(4.6.17)$$
holds, where $K_0(x,t)$ is a continuous function. Substituting (4.6.3) and (4.6.17) into (4.6.14), we obtain
$$\eta(x,\lambda) = \eta_1(x,\lambda) + \eta_2(x,\lambda) + \eta_3(x,\lambda) + \eta_4(x,\lambda),$$
where
$$\eta_1(x,\lambda) = \frac{1}{\rho^2}\int_0^x\sin\rho t\,\sin\rho(x-t)\,dt,$$
$$\eta_2(x,\lambda) = \frac{1}{\rho^2}\int_0^x\sin\rho(x-t)\Bigl(\int_0^t K_0(t,\tau)\sin\rho\tau\,d\tau\Bigr)dt,$$
$$\eta_3(x,\lambda) = \frac{1}{\rho^2}\int_0^x\sin\rho t\Bigl(\int_0^{x-t}\tilde K(x-t,\tau)\sin\rho\tau\,d\tau\Bigr)dt,$$
$$\eta_4(x,\lambda) = \frac{1}{\rho^2}\int_0^x\int_0^t K_0(t,\tau)\sin\rho\tau\Bigl(\int_0^{x-t}\tilde K(x-t,\theta)\sin\rho\theta\,d\theta\Bigr)d\tau\,dt.$$
For $\eta_1(x,\lambda)$ we have
$$\eta_1(x,\lambda) = \frac{1}{2\rho^2}\int_0^x\bigl(\cos\rho(x-2t) - \cos\rho x\bigr)dt = \frac{1}{2\rho^2}\Bigl(-x\cos\rho x + \int_0^x\cos\rho t\,dt\Bigr).$$
Transform $\eta_2(x,\lambda)$:
$$\eta_2(x,\lambda) = \frac{1}{2\rho^2}\int_0^x\int_0^t K_0(t,\tau)\bigl(\cos\rho(x-t-\tau) - \cos\rho(x-t+\tau)\bigr)d\tau\,dt$$
$$= \frac{1}{2\rho^2}\int_0^x\Bigl(\int_{x-2t}^{x-t}K_0(t,x-t-s)\cos\rho s\,ds - \int_{x-t}^{x}K_0(t,s+t-x)\cos\rho s\,ds\Bigr)dt.$$
Changing the order of integration, we obtain
$$\eta_2(x,\lambda) = \frac{1}{2\rho^2}\int_0^x V_2(x,t)\cos\rho t\,dt,$$
where
$$V_2(x,t) = \int_{(x-t)/2}^{x-t}K_0(s,x-t-s)\,ds + \int_{(x+t)/2}^{x}K_0(s,x+t-s)\,ds - \int_{x-t}^{x}K_0(s,t+s-x)\,ds.$$
The integrals $\eta_3(x,\lambda)$ and $\eta_4(x,\lambda)$ are transformable in an analogous manner. Lemma 4.6.3 is proved. $\square$
Let us return to proving Theorem 4.6.1. Since $\lambda_n = \tilde\lambda_n$, $n \ge 1$, we have by Lemma 4.6.2 $\Delta(\lambda) \equiv \tilde\Delta(\lambda)$. Then, substituting (4.6.16) into (4.6.15), we obtain
$$\int_0^\pi\cos\rho x\Bigl(-x\hat N(x) + \int_x^\pi V(t,x)\hat N(t)\,dt\Bigr)dx \equiv 0,$$
and consequently,
$$-x\hat N(x) + \int_x^\pi V(t,x)\hat N(t)\,dt = 0. \eqno(4.6.18)$$
For each fixed $\delta > 0$, (4.6.18) is a homogeneous Volterra integral equation of the second kind in the interval $(\delta,\pi)$. Consequently, $\hat N(x) = 0$ a.e. in $(\delta,\pi)$ and, since $\delta$ is arbitrary, this holds in the whole interval $(0,\pi)$. Thus, $M(x) = \tilde M(x)$ a.e. in $(0,\pi)$. $\square$
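The last step uses the standard fact that a homogeneous Volterra equation of the second kind has only the trivial solution. A discretized illustration (our sketch; the sample kernel is an arbitrary choice): the quadrature matrix of the Volterra part is strictly lower triangular, so forward substitution forces every unknown to zero in turn.

```python
n = 200
h = 1.0 / n
K = lambda t, s: 1.0 + t * s     # an arbitrary continuous sample kernel

# homogeneous Volterra equation x = Vx with (Vx)_i = h * sum_{j<i} K(t_i,t_j) x_j:
# each x_i depends only on x_0..x_{i-1}, so forward substitution gives x = 0
x = []
for i in range(n):
    ti = i * h
    x.append(h * sum(K(ti, j * h) * x[j] for j in range(i)))
print(max(abs(v) for v in x))    # 0.0 -- only the trivial solution
```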
4.6.2. Relation (4.6.15) also makes it possible to obtain an algorithm for solving Inverse Problem 4.6.1 in the case when $M(x) \in PA[0,\pi]$ (i.e. $M$ is piecewise analytic in $[0,\pi]$). Indeed, consider the boundary value problems $L(q,M)$ and $L(q,\tilde M)$, and assume that $q(x) \in L_2(0,\pi)$; $M(x), \tilde M(x) \in PA[0,\pi]$. Let for some fixed $a > 0$
$$\hat N(x) = 0,\ x \in (a,\pi); \qquad \hat N(x) \sim \hat N_{a\alpha}(\alpha!)^{-1}(a-x)^\alpha,\ x \to a-0. \eqno(4.6.19)$$
It follows from (4.6.16) that as $|\rho| \to \infty$, $\arg\rho \in [\delta,\pi-\delta]$, $x \in [\varepsilon,\pi]$, $\delta > 0$, $\varepsilon > 0$, the asymptotic formula
$$\eta(x,\lambda) = -\frac{x}{4\rho^2}\exp(-i\rho x)\Bigl(1 + O\Bigl(\frac{1}{\rho}\Bigr)\Bigr) \eqno(4.6.20)$$
holds. Furthermore, we infer from (4.6.16) that
$$|\eta(x,\lambda)| < C|\rho^{-2}\exp(-i\rho x)|, \quad x \in [0,\pi],\ \operatorname{Im}\rho \ge 0. \eqno(4.6.21)$$
With (4.6.21) we obtain the estimate
$$\Bigl|\int_0^\pi \hat N(x)\eta(x,\lambda)\,dx\Bigr| < C|\rho^{-2}\exp(-i\rho a)|, \quad \operatorname{Im}\rho \ge 0. \eqno(4.6.22)$$
Using (4.6.19), (4.6.20) and Lemma 1.7.1, we get for $|\rho| \to \infty$, $\arg\rho \in [\delta,\pi-\delta]$,
$$\int_0^a \hat N(x)\eta(x,\lambda)\,dx = \frac{a}{4(i\rho)^{\alpha+3}}\exp(-i\rho a)\bigl(\hat N_{a\alpha} + o(1)\bigr). \eqno(4.6.23)$$
Since $\hat N(x) = 0$ for $x \in (a,\pi)$, it follows from (4.6.15), (4.6.22) and (4.6.23) that for $|\rho| \to \infty$, $\arg\rho \in [\delta,\pi-\delta]$,
$$\hat\Delta(\lambda) = \frac{a}{4}(i\rho)^{-\alpha-3}\exp(-i\rho a)\bigl(\hat N_{a\alpha} + o(1)\bigr),$$
and consequently
$$\hat N_{a\alpha} = \frac{4}{a}\lim \hat\Delta(\lambda)(i\rho)^{\alpha+3}\exp(i\rho a), \quad |\rho| \to \infty,\ \arg\rho \in [\delta,\pi-\delta]. \eqno(4.6.24)$$
Thus we have proved the following theorem.

Theorem 4.6.2. Let $\{\lambda_n\}_{n\ge 1}$ be the eigenvalues of $L(q,M)$, where $q(x) \in L_2(0,\pi)$, $M(x) \in PA[0,\pi]$. Then the solution of Inverse Problem 4.6.1 can be found by the following algorithm:

1) From $\{\lambda_n\}_{n\ge 1}$ construct the function $\Delta(\lambda)$ by (4.6.10).

2) Take $a = \pi$.

3) For $\alpha = 0,1,2,\ldots$ carry out successively the following operations: construct a function $\tilde M(x) \in PA[0,\pi]$ such that $\hat N(x) = 0$, $x \in (a,\pi)$, $\hat N^{(k)}(a-0) = 0$, $k = \overline{0,\alpha-1}$, $\tilde N^{(\alpha)}(a-0) = 0$, and find $N_{a\alpha} = (-1)^\alpha N^{(\alpha)}(a-0)$ from (4.6.24).

4) Construct $N(x)$ for $x \in (a_+,a)$ by the formula
$$N(x) = \sum_{\alpha=0}^{\infty}N_{a\alpha}\frac{(a-x)^\alpha}{\alpha!}.$$

5) If $a_+ > 0$, set $a := a_+$ and go to step 3.
4.6.3. We shall now investigate the local solvability of Inverse Problem 4.6.1 and its stability. First let us prove an auxiliary assertion.

Lemma 4.6.4. Consider in a Banach space $B$ the nonlinear equation
$$r = f + \sum_{j=2}^{\infty}\Lambda_j(r), \eqno(4.6.25)$$
where
$$\|\Lambda_j(r)\| \le (C\|r\|)^j, \qquad \|\Lambda_j(r) - \Lambda_j(r^*)\| \le \|r - r^*\|\bigl(C\max(\|r\|,\|r^*\|)\bigr)^{j-1}.$$
There exists $\delta > 0$ such that if $\|f\| \le \delta$, then in the ball $\|r\| \le 2\delta$ equation (4.6.25) has a unique solution $r \in B$, for which $\|r\| \le 2\|f\|$.
Proof. Assume for definiteness that $C \ge 1$. Put
$$\Lambda(r) = \sum_{j=2}^{\infty}\Lambda_j(r), \quad C_0 = 2C^2, \quad \delta = \frac{1}{4C_0}.$$
If $\|r\|, \|r^*\| \le (2C_0)^{-1}$, then
$$\|\Lambda(r)\| \le \sum_{j=2}^{\infty}(C\|r\|)^j \le C_0\|r\|^2 \le \frac{1}{2}\|r\|, \qquad \|\Lambda(r) - \Lambda(r^*)\| \le \|r - r^*\|\sum_{j=2}^{\infty}\bigl(C(2C_0)^{-1}\bigr)^{j-1} \le \frac{1}{2}\|r - r^*\|. \eqno(4.6.26)$$
Let $\|f\| \le \delta$; construct
$$r_0 = f, \quad r_{k+1} = f + \Lambda(r_k), \quad k \ge 0.$$
By induction, using (4.6.26), we obtain the estimates
$$\|r_k\| \le 2\|f\|, \quad \|r_{k+1} - r_k\| \le \frac{1}{2^{k+1}}\|f\|, \quad k \ge 0.$$
Consequently, the series
$$r = r_0 + \sum_{k=0}^{\infty}(r_{k+1} - r_k)$$
converges to the solution of (4.6.25), and $\|r\| \le 2\|f\|$. Lemma 4.6.4 is proved. $\square$
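In the scalar case $B = \mathbb R$ with $\Lambda_j(r) = r^j$ (so $C = 1$) the iteration from the proof can be run directly (an illustrative sketch, not part of the original text):

```python
f = 0.05                        # ||f|| below the threshold delta
r = f                           # r_0 = f
for _ in range(100):            # r_{k+1} = f + sum_{j>=2} r_k^j = f + r^2/(1-r)
    r = f + r * r / (1.0 - r)

residual = abs(r - (f + r * r / (1.0 - r)))
print(r, residual)
assert residual < 1e-14 and abs(r) <= 2 * abs(f)
```

The limit solves $2r^2 - (1+f)r + f = 0$ and satisfies the bound $|r| \le 2|f|$ from the lemma.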
Theorem 4.6.3. For the boundary value problem $L = L(q,M)$ with the spectrum $\{\lambda_n\}_{n\ge 1}$ there exists $\delta > 0$ (which depends on $L$) such that if numbers $\{\tilde\lambda_n\}_{n\ge 1}$ satisfy the condition
$$\Omega := \Bigl(\sum_{n=1}^{\infty}|\lambda_n - \tilde\lambda_n|^2\Bigr)^{1/2} < \delta,$$
then there exists a unique $\tilde L = L(q,\tilde M)$ for which the numbers $\{\tilde\lambda_n\}_{n\ge 1}$ are the eigenvalues, and, furthermore,
$$\|Q(x) - \tilde Q(x)\|_{L_2(0,\pi)} \le C\Omega, \qquad \|M_k(x) - \tilde M_k(x)\|_{L(0,\pi)} \le C\Omega, \quad k = 0,1.$$
Here and below, $C$ denotes various constants depending on $L$.
Proof. For brevity, we confine ourselves to the case when all eigenvalues are simple. The Cauchy problem $\ell y(x) - \lambda y(x) + f(x) = 0$, $y(0) = y'(0) = 0$ has a unique solution
$$y(x) = \int_0^x g(x,t,\lambda)f(t)\,dt,$$
where $g(x,t,\lambda)$ is the Green function satisfying the relations
$$-g_{xx}(x,t,\lambda) + q(x)g(x,t,\lambda) - \lambda g(x,t,\lambda) + \int_t^x M(x-\tau)g(\tau,t,\lambda)\,d\tau = 0, \quad x > t,$$
$$g(t,t,\lambda) = 0, \quad g_x(x,t,\lambda)\big|_{x=t} = 1.$$
Denote
$$G(x,t,\lambda) = g_t(x,t,\lambda), \quad y_n(x) = S(x,\tilde\lambda_n), \quad \xi_n = -n^2\Delta(\tilde\lambda_n),$$
$$v_n(x,t) = \begin{cases} w'(\pi-x-t,\tilde\lambda_n), & 0 < t < \pi-x, \\ 0, & \pi-x < t < \pi, \end{cases} \qquad G_n(x,t,s) = \begin{cases} G(x,s+t,\tilde\lambda_n), & s+t \le x, \\ 0, & s+t > x, \end{cases}$$
$$\mu_n(x) = \int_0^x w(t,\tilde\lambda_n)S(x-t,\tilde\lambda_n)\,dt, \qquad \theta_n(x) = \int_0^\pi v_n(x,t)y_n(t)\,dt,$$
$$\zeta_n(x) = \frac{n}{x}\,\mu_n'(x), \qquad \varphi_{n0}(x) = \frac{n}{2\tilde\rho_n}\sin\tilde\rho_n x, \qquad \psi_n(x) = \frac{n}{\pi-x}\,\theta_n(x).$$
Here $W_2^1$ is the space of functions $f(x)$, absolutely continuous on $[0,\pi]$ and such that $f'(x) \in L_2(0,\pi)$, with the norm $\|f\|_{W_2^1} = \|f\|_{L_2(0,\pi)} + \|f'\|_{L_2(0,\pi)}$, and $W_{20}^1 = \{f(x):\ f(x) \in W_2^1,\ f(0) = f(\pi) = 0\}$.
Lemma 4.6.5. The functions $\{\zeta_n(x)\}_{n\ge 1}$ constitute a Riesz basis in $L_2(0,\pi)$, and the biorthogonal basis $\{\zeta_n^*(x)\}_{n\ge 1}$ possesses the following properties:

1) $\zeta_n^*(x) \in W_{20}^1$;

2) $|\zeta_n^*(x)| \le C$, $n \ge 1$, $x \in [0,\pi]$;

3) for any $\{\gamma_n\} \in \ell_2$,
$$\omega(x) := \sum_{n=1}^{\infty}\frac{\gamma_n}{n}\,\zeta_n^*(x) \in W_{20}^1, \qquad \|\omega(x)\|_{W_2^1} \le C\Bigl(\sum_{n=1}^{\infty}|\gamma_n|^2\Bigr)^{1/2}.$$
Proof. We shall use the well-known results for the inverse Sturm-Liouville problem (see Chapter 1). Since $\Omega < \infty$, it follows from (4.6.9) that
$$\tilde\rho_n = \sqrt{\tilde\lambda_n} = n + \frac{A_1}{n} + \frac{\tilde\kappa_n}{n}, \quad \{\tilde\kappa_n\} \in \ell_2. \eqno(4.6.27)$$
Consequently, there exists a function $\tilde q(x)$ (not unique) such that the numbers $\{\tilde\lambda_n\}_{n\ge 1}$ are the eigenvalues of the Sturm-Liouville boundary value problem
$$-y'' + \tilde q(x)y = \lambda y, \quad y(0) = y(\pi) = 0. \eqno(4.6.28)$$
Let $s_n(x)$ be the eigenfunctions of (4.6.28) normalized by the condition $s_n'(0) = n/2$. The functions $\{s_n(x)\}_{n\ge 1}$ constitute a Riesz basis in $L_2(0,\pi)$, and
$$\int_0^\pi s_n(x)s_m(x)\,dx = \delta_{nm}\beta_n. \eqno(4.6.29)$$
Using Lemma 4.6.1, we obtain
$$s_n(x) = \varphi_{n0}(x) + \int_0^x \breve K(x,t)\varphi_{n0}(t)\,dt, \quad \breve K(x,0) = 0. \eqno(4.6.30)$$
In particular, it follows from (4.6.30), (4.6.29) and (4.6.27) that
$$s_n(x) = \frac{1}{2}\sin nx + O\Bigl(\frac{1}{n}\Bigr), \qquad \beta_n = \int_0^\pi s_n^2(x)\,dx = \frac{\pi}{8} + O\Bigl(\frac{1}{n}\Bigr), \quad n \to \infty.$$
Due to (4.6.30), the functions $\{\varphi_{n0}(x)\}_{n\ge 1}$ constitute a Riesz basis in $L_2(0,\pi)$. Denote
$$\varphi_{n0}^*(x) = s_n(x) + \int_x^\pi \breve K(t,x)s_n(t)\,dt. \eqno(4.6.31)$$
It follows from (4.6.29)-(4.6.31) that
$$\int_0^\pi \varphi_{n0}^*(x)\varphi_{m0}(x)\,dx = \int_0^\pi\Bigl(s_n(x) + \int_x^\pi \breve K(t,x)s_n(t)\,dt\Bigr)\varphi_{m0}(x)\,dx$$
$$= \int_0^\pi s_n(x)\Bigl(\varphi_{m0}(x) + \int_0^x \breve K(x,t)\varphi_{m0}(t)\,dt\Bigr)dx = \int_0^\pi s_n(x)s_m(x)\,dx = \delta_{nm}\beta_n. \eqno(4.6.32)$$
Furthermore, we compute

n
(x) =
n
x
_
x
0
w(t,

n
)S

(x t,

n
) dt. (4.6.33)
Since
S

(x, ) = cos x +
_
x
0
K
1
(x, t) cos t dt, (4.6.34)
we obtain, substituting (4.6.34) and (4.6.17) into (4.6.33) like in the proof of Lemma 4.6.3,

n
(x) =
n0
(x) +
_
x
0
V
0
(x, t)
n0
(t) dt, (4.6.35)
where V
0
(x, t) is a continuous function with V
0
(x, 0) = 0. Solving the integral equation
(4.6.35), we nd

n0
(x) =
n
(x) +
_
x
0
V
1
(x, t)
n
(t) dt, V
1
(x, 0) = 0. (4.6.36)
Consider the functions

n
(x) =

n0
(x) +
_

x
V
1
(t, x)

n0
(t) dt. (4.6.37)
It follows from (4.6.32), (4.6.36) and (4.6.37) that
_

0

n
(x)

m
(x) dx =
nm

n
. (4.6.38)
By virtue of (4.6.35) and (4.6.38), the functions
n
(x)
n1
constitute a Riesz basis in
L
2
(0, ), and the biorthogonal basis

n
(x)
n1
has the form

n
(x) =
1
n

n
(x). Substi-
tuting (4.6.31) into (4.6.37), we have

n
(x) = s
n
(x) +
_

x
V
0
1
(t, x) s
n
(t) dt, V
0
1
(t, 0) = 0.
Hence we obtain the required properties of the biorthogonal basis. 2
Since
n
(x) =
n
( x), Lemma 4.6.5 implies
Corollary 4.6.1. The function
n
(x)
n1
constitute a Riesz basis in L
2
(0, ), and
the biorthogonal basis
n
(x)
n1
has the properties:
1)

n
(x) W
1
20
;
2) [
n
(x)[ C, n 1, x [0, ];
266
3) for any
n

2
(x) =

n=1

n
n

n
(x) W
1
20
, |(x)|
W
1
2
C
_

n=1
|
n
|
2
_
1/2
.
Let us return to proving Theorem 4.6.3. Put
(x) =

n=1

n
n

n
(x). (4.6.39)
Using Lemma 4.6.1, the relations () = S(, ), (
n
) = 0, and formulae (4.6.9),
(4.6.27), we obtain the estimate
[
n
[ = n
2
[(

n
) (
n
[ C[
n

n
[.
Now by Corollary 4.6.1 we have
(x) W
1
20
, |(x)|
W
1
2
C.
Consider in W
1
20
the nonlinear equation
r = +

j=2

j
(r), (4.6.40)
where (x) is dened by (4.6.39), and the operators z
j
=
j
(r) act from W
1
20
to W
1
20
according to the formula
z
j
(x) =

n=1
_
_

0
. . .
_

0
. .
j
r(t
1
) . . . r(t
j
)B
nj
(t
1
, . . . , t
j
) dt
1
. . . dt
j
_

n
(x),
where
r(x) W
1
20
, B
nj
(t
1
, . . . , t
j
) =
n
( t
1
) . . . ( t
j
)

_

0
. . .
_

0
. .
j
v
n
(t
1
, s
1
)G
n
(s
1
, t
2
, s
2
) . . . G
n
(s
j1
, t
j
, s
j
)y
n
(s
j
) ds
1
. . . ds
j
and
|
j
(r)|
W
1
2
(C|r|
W
1
2
)
j
,
|
j
(r)
j
(r

)|
W
1
2
|r r

|
W
1
2
(C max(|r|
W
1
2
, |r

|
W
1
2
))
j1
.
By Lemma 4.6.4, there exists > 0 such that for < equation (4.6.40) has a solution
r(x) W
1
20
, |r(x)|
W
1
20
C. Put

M(x) = M(x) (( x)
1
r(x))

, and consider the


boundary value problem

L = L(q,

M). Clearly

Q(x) = Q(x) r

(x) L
2
(0, ), |Q(x)

Q(x)|
L
2
(0,)
C.
Since

M
1
(x) =
1
x
_

x

Q(t) dt,

M
0
(x) =

Q(x) +

M
1
(x),
267
we have
|M
k
(x)

M
k
(x)|
L
2
(0,)
C, k = 0, 1.
It remains to show that the numbers

n1
are the eigenvalues of the problem

L. To do
this, consider the functions y
n
(x) which are solutions of the integral equations
y
n
(x) = y
n
(x) +
_

0

M
1
(t)
_
_

0
G
n
(x, t, s) y
n
(s) ds
_
dt (4.6.41)
or, which is the same,
y
n
(x) = y
n
(x) +
_
x
0

M
1
(t)
_
_
xt
0
G(x, s +t,

n
) y
n
(s) ds
_
dt. (4.6.42)
After integration by parts, (4.6.42) takes the form
y
n
(x) = y
n
(x)
_
x
0

M
1
(t)
_
x
t
g(x, s,

n
) y

n
(s t) dsdt.
Reverse the integration order:
y
n
(x) = y
n
(x)
_
x
0
g(x, t,

n
)
_
t
0

M
1
(s) y

n
(t s) dsdt.
Integrate by parts:
y
n
(x) = y
n
(x)
_
x
0
g(x, t,

n
)
_
t
0

M(t s) y
n
(s) dsdt. (4.6.43)
It follows from (4.6.43) that
( y
n
(x) y
n
(x)) =
_
x
0

M(t s) y
n
(s) ds = (

) y
n
(x),
and consequently,

y
n
(x)

n
y
n
(x), y
n
(0) = (0), y

n
(0) = 1.
Since the solution of the Cauchy problem is unique, we have y
n
(x) =

S(x,

n
).
Write (4.6.13) in the form
_

0

M(x)
_
_
x
0
w( x t, )

S(t, ) dt
_
dx =

().
Integrating by parts, we obtain for =

n
_

0

M
1
(x)
_
_

0
v
n
(x, t) y
n
(t) dt
_
dx =

(

n
). (4.6.44)
Solving (4.6.41) by the method of successive approximations, we have
y
n
(x) = y
n
(x) + Y
n
(x), (4.6.45)
where
Y
n
(x) =

j=1
_

0
. . .
_

0
. .
j

M
1
(t
1
) . . .

M
1
(t
j
)
_
_

0
. . .
_

0
. .
j
G
n
(x, t
1
, s
1
)
268
G
n
(s
1
, t
2
, s
2
) . . . G
n
(s
j1
, t
j
, s
j
)y
n
(s
j
) ds
1
. . . ds
j
_
dt
1
. . . dt
j
.
Furthermore, multiplying (4.6.40) by
n
(x) and integrating from 0 to , we obtain
_

0
r(x)
n
(x) dx +

j=2
_

0
. . .
_

0
. .
j
r(t
1
) . . . r(t
j
)B
nj
(t
1
) . . . t
j
) dt
1
. . . dt
j
=

n
n
. (4.6.46)
Since r(x) = ( x)

M
1
(x),
n
(x) = n( x)
1

n
(x), we can tramsform (4.6.46) to the
form
_

0

M
1
(x)
n
(x) dx +

j=2
_

0
. . .
_

0
. .
j

M
1
(t
1
) . . .

M
1
(t
j
)
_
_

0
. . .
_

0
. .
j
v
n
(t
1
, s
1
)
G
n
(s
1
, t
2
, s
2
) . . . G
n
(s
j1
, t
j
, s
j
)y
n
(s
j
) ds
1
. . . ds
j
_
dt
1
. . . dt
j
=

n
n
2
.
Hence, taking (4.6.45) into account, we obtain
_

0

M
1
(x)
_
_

0
v
n
(x, t) y
n
(t) dt
_
dx = (

n
). (4.6.47)
Comparing (4.6.44) with (4.6.47), we nd that

(

n
) = 0. Hence the number

n1
are
the eigenvalues of the boundary value problem

L. Theorem 4.6.3 is proved. 2
4.7. DIFFERENTIAL OPERATORS OF THE ORR-SOMMERFELD TYPE
In this section we study the inverse problem of recovering dierential operators of the
Orr-Sommerfeld type (or Kamke type) from the Weyl matrix. Properties of the Weyl matrix
are investigated, and an uniqueness theorem for the solution of the inverse problem is proved.
4.7.1. Let us consider the dierential equation
Ly = zy (4.7.1)
where
Ly = y
(n)
+
n2

k=0
p
k
(x)y
(k)
, y = y
(m)
+
m1

j=0
q
j
(x)y
(j)
, n > m 1, x (0, T).
Here p
k
and q
j
are complex-valued integrable functions.
Dierential equations of the form (4.7.1) play an important role in various areas of math-
ematics as well as in applications. The Orr-Sommerfeld equation from the theory of hydro-
dynamic stability is a typical example for equation (4.7.1). Spectral properties of boundary
value problems for the Orr-Sommerfeld equation and the more general equation (4.7.1) have
been studied in many works (see [ebe5], [tre1], [shk1] and the references given therein). How-
ever at present inverse spectral problems for such classes of operators have not been studied
yet because of their complexity.
In this paper we study the inverse problem of recovering the coecients of equation
(4.7.1) from the so-called Weyl matrix. We introduce and investigate the Weyl matrix and
269
prove an uniqueness theorem for the solution of the inverse problem. We note that for
the case of m = 0, i.e. for the equation Ly = zy, inverse problems of recovering L
from various spectral characteristics have been studied in [bea1], [bea2], [kha3], [lei1], [sak1],
[yur1], [yur16], [yur22] and other works. In particular, for such operators the theory of the
solution of the inverse problem by means of the Weyl matrix (and its applications) has been
constructed in [yur1], [yur12], [yur15], [yur16], [yur22]. The Weyl matrix introduced in this
paper is a generalization of the Weyl matrix for the equation Ly = zy introduced in [yur16].
4.7.2. Let the functions
k
(x, z), k = 1, n, be solutions of equation (4.7.1) under the
conditions
(j1)
k
(0, z) =
kj
, j = 1, k,
(s1)
k
(T, z) = 0, s = 1, n k. Here
kj
is the
Kronecker delta. Denote M
kj
(z) =
(j1)
k
(0, z), j = k + 1, n. The functions
k
(x, z) are
called the Weyl solutions, and the functions M
kj
(z) are called the Weyl functions. The
matrix M(z) = [M
kj
(z)]
k,j=1,n
, M
kj
(z) =
kj
, j = 1, k is called the Weyl matrix for
equation (4.7.1). The inverse problem studied in this section is formulated as follows:
Given the Weyl matrix, construct L and .
Let us consider the fundamental system of solutions of equation (4.7.1) C
k
(x, z), k = 1, n,
which satises the conditions C
(j1)
k
(0, z) =
kj
, j = 1, n. Then

k
(x, z) = C
k
(x, z) +
n

j=k+1
M
kj
(z)C
j
(x, z), (4.7.2)
det[
(j1)
k
(x, z)]
k,j=1,n
= det[C
(j1)
k
(x, z)]
k,j=1,n
=
_
1, m < n 1,
exp(zx), m = n 1.
(4.7.3)
Let us denote

kj
(z) = (1)
k+j
det[C
(1)

(T, z)]
=1,nk; =k.n,=j
.
The function
kj
(z) is entire in z of order less than 1, and its zeros coincide with
the eigenvalues of the boundary-value problem for equation (4.7.1) under the conditions
y
(s1)
(0) = y
(p1)
(T) = 0; s = 1, k 1, j; p = 1, n k. The function
kj
(z) is uniquelly
determined by its zeros.
Using the boundary conditions on the Weyl solutions we calculate

k
(x, z) = (
kk
(z))
1
det[C
j
(x, z), C
j
(T, z), . . . , C
(nk1)
j
(T, z)]
j=k,n
, (4.7.4)
and consequently
M
kj
(z) = (
kk
(z))
1

kj
(z). (4.7.5)
Let h
k
(x), k = 1, m, be the solutions of the dierential equation y = 0 under the
initial conditions h
(1)
k
(0) =
k
, = 1, m. Denote
B
s
(x) = det[h
k
(x), h
k
(T), h

k
(T), . . . , h
(ms1)
k
(T)]
k=s,m
, s = 1, m1, B
m
(x) = h
m
(x),

s
= det[h
()
k
(T)]
k=s,m; =0,ms
, s = 1, m,
m+1
= 1.
Clearly,

1
= exp(
_
T
0
q
m1
(t) dt).
270
For deniteness, we shall assume that
s
,= 0, s = 2, m. Other cases require separate
calculations. We shall also suppose the coecients p
k
and q
j
to be suciently smooth
such that the asymptotic formulae (4.7.12)-(4.7.13) hold (see [ebe5], [tre1]).
By virtue of (4.7.5), the Weyl functions M
kj
(z) are meromorphic. It follows from above
that the specication of the Weyl matrix M(z) is equivalent to the specication of the
spectra of the corresponding boundary-value problems for equation (4.7.1) or corresponds to
the specication of poles and residues of the Weyl functions. Therefore the inverse problem
of recovering equation (4.7.1) from the Weyl matrix is a generalization of the well-known
inverse problems for the Sturm-Liouville operators from discrete spectral characteristics (see,
for example, [lev2] and Chapter 1).
Let b
s
(x), s = 1, m be the solutions of the equation y = 0 with the conditions
b
(1)
s
(0) =
s
, = 1, s; b
(1)
s
(T) = 0, = 1, ms. In other words, the functions b
s
(x)
are the Weyl solutions for the equation y = 0. As in (4.7.3) and (4.7.4), we see that
b
s
(x) = (
s+1
)
1
B
s
(x)
and
det[b
(1)
s
(x)]
s,=1,m
= det[h
(1)
s
(x)]
s,=1,m
= exp(
_
x
0
q
m1
(t) dt).
Together with L and we consider operators

L and

of the same form but with
dierent coecients. We agree that if a symbol denotes an object related to L and ,
then will denote the analogous object related to

L and

.
The main result of this section is the following uniqueness theorem for the solution of
the inverse problem by means of the Weyl matrix.
Theorem 4.7.1. If M(z) =

M(z) then L =

L and =

.
Proof. Let
C(x, z) = [C
(j1)
k
(x, z)]
j,k=1,n
, (x, z) = [
(j1)
k
(x, z)]
j,k=1,n
.
Then (4.7.2) takes the form
(x, z) = C(x, z)M
t
(z), (4.7.6)
where t denotes transposition. We dene the matrix Q(x, z) = [Q
jk
(x, z)]
j,k=1,n
from the
relation Q(x, z)

(x, z) = (x, z). Let r := n m.


Lemma 4.7.1. The following relations hold
Q

jk
(x, z) = Q
j+1,k
(x, z)Q
j,k1
(x, z)+(1)
nk+1
Q
j,n
(x, z)
k1
(x, z), j, k = 1, n 1, r 1,
(4.7.7)
Q

jn
(x, z) = Q
j+1,n
(x, z) Q
j,n1
(x, z) zQ
j,n
(x, z), j = 1, n 1, r = 1, (4.7.8)
Q

jn
(x, z) = Q
j+1,n
(x, z) Q
j,n1
(x, z), j = 1, n 1, r > 1, (4.7.9)
where
s
(x, z) = p
s
(x) z q
s
(x).
Proof of Lemma 1. Let us denote W(x, z) = det (x, z). In virtue of (4.7.3), W(x, z) ,=
0, and consequently one can write Q(x, z) = (x, z)(

(x, z))
1
or in the coordinates
Q
jk
(x, z) = (

W(x, z))
1
det[

s
(x, z),

s
(x, z) . . .

(k2)
s
(x, z),
(j1)
s
(x, z),

(k)
s
(x, z),

(k+1)
s
(x, z) . . .

(n2)
s
(x, z),

(n1)
s
(x, z)]
s=1,n
. (4.7.10)
271
Let j, k = 1, n 1. Dierentiating (4.7.10) with respect to x we get
Q

jk
(x, z) = z
r1
Q
jk
(x, z) + Q
j+1,k
(x, z) Q
j,k1
(x, z) + (

W(x, z))
1
det[

s
(x, z),

s
(x, z) . . .

(k2)
s
(x, z),
(j1)
s
(x, z),

(k)
s
(x, z),

(k+1)
s
(x, z) . . .

(n2)
s
(x, z),

(n)
s
(x, z)]
s=1,n
.
We replace here

(n)
s
(x, z) with the help of equation (4.7.1), and after some simple calcula-
tions we arrive at (4.7.7).
The relations (4.7.8) and (4.7.9) are obtained analogously. 2
Lemma 4.7.2. If Q
jn
(x, z) 0 for j = 1, n 1, then Q
jk
(x, z) 0 for j < k.
Lemma 4.7.2 can be proved by induction with respect to k using the formulae (4.7.7)-
(4.7.9).
Let us now study the asymptotic behavior of the Weyl solutions
k
(x, z) for z .
Let z =
r
. We can divide -plane into the sectors
S

= : arg (/r, ( + 1)/r), = 0, 2r 1,


in each of which the roots R
k
of the equation R
r
= 1 can be numbered so that
Re (R
1
) < Re (R
2
) < . . . < Re (R
r
), S

. (4.7.11)
It is known (see, for example, [ebe5], [tre1] and the references therein) that in each sector
S

with the property (4.7.11) there exists a fundamental system of solutions Y


s
(x, )
s=1,n
of equation (4.7.1) of the form
Y
()
s
(x, ) = (R
s
)

exp(R
s
x)(a(x) +O(
1
)), s = 1, r, [[ , = 0, n 1, (4.7.12)
Y
()
s+r
(x, ) = h
()
s
(x) + O(z
1
), s = 1, m, [[ , = 0, n 1, (4.7.13)
where
a(x) = exp
_
1
r
_
x
0
q
m1
(t) dt
_
.
Denote

k
() = det[Y
s
(0, ), . . . , Y
(k1)
s
(0, ), Y
s
(T, ), . . . , Y
(nk1)
s
(T, )]
s=1,n
, k = 1, n.
For deniteness, let r = 2. The case when r is odd is considered similarly. Using (4.7.12)-
(4.7.13) we obtain for [[ , S, arg z = const ,= 0, ,

s
() = (1)
m(rs)

s+1,r

1s
(R
s+1
R
r
)
m
a
rs
(T)

exp((R
s+1
+ +R
r
)T)(1 + O(
1
)), s = 1, , (4.7.14)

+s
() = (1)
(m+s)

s+1

+1,r

1
(R
+1
R
r
)
ms
(R
1
R

)
s
a

(T)

s+

exp((R
+1
+ +R
r
)T)(1 + O(
1
)), s = 0, m, (4.7.15)

+m+s
() =
+s+1,r

1,+s
(R
1
R
+s
)
m
a
s
(T)

s++m

exp((R
+s+1
+ +R
r
)T)(1 + O(
1
)), s = 0, , (4.7.16)
272
where

jk
= det[R

]
=j,k; =o,kj
,

s
= (s(s 1) + (n s 1)(n s) (m1)m)/2, s = 1, ,

s+
= ((+s1)(+s)s(s1)+(ns1)(ns)(ms)(ms1))/2, s = 0, m,

s++m
= (( s 1)( s) + (m + +s 1)(m + +s) (m1)m)/2, s = 0, .
Moreover
[
s
()[ C[[

exp([[), [[ ,

S, s = 1, n,
for some > 0 and > 0.
Using the boundary conditions on the Weyl solutions we calculate

k
(x, z) = (
k
())
1
det[Y
s
(0, ), . . . , Y
(k2)
s
(0, ),
Y
s
(x, ), Y
s
(T, ), . . . , Y
(nk1)
s
(T, )]
s=1,n
. (4.7.17)
Substituting the asymptotic formulas (4.7.12)-(4.7.16) into (4.7.17) we obtain for [[
, S, arg z = const ,= 0, ,

()
s
(x, z) =
1,s1
(
1s
)
1
a(x)
1s
(R
s
)

exp(R
s
x)(1 + O(
1
)), s = 1, , (4.7.18)

()
+s
(x, z) = (1)

(R
1
R

)
1
b
()
s
(x)

(1 + O(
1
)), s = 1, m, (4.7.19)

()
+m+s
(x, z) =
1,+s1
(
1,+s
)
1
R
m
+s
a(x)
1sm
(R
+s
)

exp(R
+s
x)
(1 + O(
1
)), s = 1, , (4.7.20)
where
10
= 1.
Now we transform the matrix Q(x, z). For this we use (4.7.6). Since M(z) =

M(z) by
the assumption of Theorem 4.7.1, we get
Q(x, z) = (x, z)(

(x, z))
1
= C(x, z)M
t
(z)(

M
t
(z))
1
(

C(x, z))
1
= C(x, z)(

C(x, z))
1
.
Using (4.7.3) we conclude that for each fixed x the functions Q_{jk}(x,z) are entire in z of order 1/r. Substituting (4.7.18)-(4.7.20) into (4.7.10) we calculate for z \to \infty, \arg z = \mathrm{const} \ne 0, \pi:

Q_{jn}(x,z) = O(z^{-1}), \quad j = \overline{1,n-1}.

Then the theorems of Phragmén-Lindelöf and Liouville give us

Q_{jn}(x,z) \equiv 0, \quad j = \overline{1,n-1}.

By virtue of Lemma 4.7.2, this yields

Q_{jk}(x,z) \equiv 0, \quad j < k.

Since Q(x,z)\tilde\Phi(x,z) = \Phi(x,z), we obtain

Q_{11}(x,z)\tilde\Phi_k(x,z) \equiv \Phi_k(x,z), \quad k = \overline{1,n}.

This implies

\Phi_k^{(\nu)}(x,z) = Q_{11}(x,z)\tilde\Phi_k^{(\nu)}(x,z) + \sum_{s=0}^{\nu-1} C_\nu^s\, Q_{11}^{(\nu-s)}(x,z)\,\tilde\Phi_k^{(s)}(x,z),

and hence Q_{11}^n(x,z)\tilde W(x,z) \equiv W(x,z). In view of (4.7.3), \tilde W(x,z) \equiv W(x,z), i.e. Q_{11}^n(x,z) \equiv 1. Since Q_{11}(0,z) = 1, we have Q_{11}(x,z) \equiv 1. Consequently, \Phi_k(x,z) \equiv \tilde\Phi_k(x,z), k = \overline{1,n}, and L = \tilde L. Theorem 4.7.1 is proved. □
Using this method one can also obtain a constructive procedure for the solution of this inverse problem, along with necessary and sufficient conditions for its solvability.
4.8. DIFFERENTIAL EQUATIONS WITH TURNING POINTS
4.8.1. We consider in this section boundary value problems L of the form

\ell y := -y'' + q(x)y = \lambda R^2(x)y, \quad x \in [0,1],   (4.8.1)

U(y) := y'(0) - hy(0) = 0,   (4.8.2)

V(y) := y'(1) + h_1 y(1) = 0.   (4.8.3)

Here \lambda = \rho^2 is the spectral parameter; R^2 and q are real functions, and h, h_1 are real numbers. We suppose that

R^2(x) = \prod_{\nu=1}^{m}(x - x_\nu)^{\ell_\nu}\, R_0(x),

where 0 < x_1 < x_2 < \ldots < x_m < 1, \ell_\nu \in \mathbb{N}, R_0(x) > 0 for x \in I := [0,1], and R_0 is twice continuously differentiable on I. In other words, R^2 has in I exactly m zeros x_\nu, \nu = \overline{1,m}, of order \ell_\nu. The zeros x_\nu of R^2 are called turning points. We also assume that q is bounded and integrable on I.

In this section we study the following three inverse problems of recovering L from its spectral characteristics, namely
(i) from the Weyl function,
(ii) from two spectra, and
(iii) from the so-called spectral data.
Differential equations with turning points play an important role in various areas of mathematics as well as in applications (see [ebe1]-[ebe4], [gol1], [mch1], [was1] and the references therein). For example, turning points are connected with physical situations in which the zeros correspond to the limit of motion of a wave-mechanical particle bound by a potential field. Turning points appear also in elasticity, optics, geophysics and other branches of natural sciences. Moreover, a wide class of differential equations with Bessel-type singularities and their perturbations can be reduced to differential equations having turning points. Inverse problems for equations with turning points also help to study the blow-up behavior of solutions of nonlinear evolution equations in mathematical physics [Con1].
In order to study the inverse problems of this section we use the method of spectral mappings described in Section 1.6 for the classical Sturm-Liouville equation without turning points. The presence of turning points in the differential equation produces essential qualitative modifications in the method. An important role in this investigation is played by the special fundamental system of solutions for equation (4.8.1) constructed in [ebe4]. This fundamental system of solutions gives us the opportunity to obtain the asymptotic behavior of the so-called Weyl solutions and the Weyl function for the boundary value problem L and to solve the corresponding inverse problems. In Subsection 4.8.2 we prove uniqueness theorems, and in Subsection 4.8.3 we provide a constructive procedure for the solution of the inverse problem.
Let \delta > 0 be fixed and sufficiently small, and let D_0 = [0, x_1-\delta], D_\nu = [x_\nu+\delta, x_{\nu+1}-\delta] for 1 \le \nu \le m-1, D_m = [x_m+\delta, 1], D_\delta = \bigcup_{\nu=0}^{m} D_\nu, and I_\nu = D_{\nu-1} \cup [x_\nu-\delta, x_\nu+\delta] \cup D_\nu.
We distinguish four different types of turning points: for 1 \le \nu \le m,

T_\nu = I, if \ell_\nu is even and R^2(x)(x-x_\nu)^{-\ell_\nu} < 0 in I_\nu,
T_\nu = II, if \ell_\nu is even and R^2(x)(x-x_\nu)^{-\ell_\nu} > 0 in I_\nu,
T_\nu = III, if \ell_\nu is odd and R^2(x)(x-x_\nu)^{-\ell_\nu} < 0 in I_\nu,
T_\nu = IV, if \ell_\nu is odd and R^2(x)(x-x_\nu)^{-\ell_\nu} > 0 in I_\nu,

is called the type of x_\nu.
Further we set for 1 \le \nu \le m

\mu_\nu = \frac{1}{2+\ell_\nu}, \qquad \xi_\nu = 1 if \mu_\nu > 1/4; \ \xi_\nu = 1-\varepsilon_0 (with \varepsilon_0 > 0 arbitrarily small) if \mu_\nu = 1/4; \ \xi_\nu = 4\mu_\nu if \mu_\nu < 1/4,

and 0 < \xi_0 = \min\{\xi_\nu \mid 1 \le \nu \le m\}. We also denote
I_+ = \{x : R^2(x) > 0\}, \quad I_- = \{x : R^2(x) < 0\},

\sigma(x) = 0 for x \in I_+, \quad \sigma(x) = 1 for x \in I_-, \qquad \omega_\nu = 2\sin(\pi\mu_\nu/2) for T_\nu = III, IV, \quad \omega_\nu = \sin(\pi\mu_\nu) for T_\nu = I, II,

K_1(x) = \Big(\prod_{x_\nu \in (0,x)} \omega_\nu\Big)\exp\Big(\frac{i\pi}{4}(\sigma(x)-\sigma(0))\Big), \qquad K_2(x) = \Big(\prod_{x_\nu \in (0,x)} \omega_\nu^{-1}\Big)\exp\Big(\frac{i\pi}{4}(\sigma(x)+\sigma(0))\Big),

R_+^2(x) = \max(0, R^2(x)), \qquad R_-^2(x) = \max(0, -R^2(x)).

Clearly,

K_1(x)K_2(x) = \exp(i\pi\sigma(x)/2) = 1 if x \in I_+, \quad = i if x \in I_-.
Let

S_k = \{\rho : \arg\rho \in [k\pi/4, (k+1)\pi/4]\}, \qquad \Gamma_s = \{\rho : \arg\rho \in [s\pi/2 - \varepsilon, s\pi/2 + \varepsilon]\}, \ \varepsilon > 0,

\Gamma = \bigcup_s \Gamma_s, \qquad S_k^\varepsilon = S_k \setminus \Gamma, \qquad S^\varepsilon = \bigcup_{k=-2}^{1} S_k^\varepsilon.

Below it is sufficient to consider the sectors S_k and S_k^\varepsilon for k = -2, -1, 0, 1 only.
It is shown in [ebe4] that for each fixed sector S_k (k = -2, -1, 0, 1) there exists a fundamental system of solutions \{z_1(x,\rho), z_2(x,\rho)\}, x \in I, \rho \in S_k, of equation (4.8.1) such that the functions z_j^{(s)}(x,\rho) (j = 1,2; s = 0,1) are continuous for x \in I, \rho \in S_k, and holomorphic with respect to \rho \in S_k for each fixed x \in I; moreover, for |\rho| \to \infty, \rho \in S_k, x \in D_\delta, j = 0,1,

z_1^{(j)}(x,\rho) = (\mp i\rho)^j |R(x)|^{j-1/2}\big(\exp(\mp i\pi\sigma(x)/2)\big)^j \exp\Big(\pm\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^x |R_+(t)|\,dt\Big) K_1(x)\,\eta(x,\rho),   (4.8.4)

z_2^{(j)}(x,\rho) = (\pm i\rho)^j |R(x)|^{j-1/2}\big(\exp(\pm i\pi\sigma(x)/2)\big)^j \exp\Big(\mp\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\pm i\rho\int_0^x |R_+(t)|\,dt\Big) K_2(x)\,\eta(x,\rho),   (4.8.5)

z_1(x,\rho)z_2'(x,\rho) - z_1'(x,\rho)z_2(x,\rho) = (\mp 2i\rho)[1].   (4.8.6)

Here and in the sequel:
(i) the upper or lower signs in formulas correspond to the sectors S_{-2}, S_{-1} or S_0, S_1, respectively;
(ii) [1] := 1 + O(\rho^{-\xi_0}) uniformly in x \in D_\delta;
(iii) one and the same symbol \eta(x,\rho) denotes various functions such that:
(1) uniformly in x \in D_\delta, \eta(x,\rho) = O(1) as |\rho| \to \infty, \rho \in S_k;
(2) for each fixed \varepsilon > 0, \eta(x,\rho) = [1] as |\rho| \to \infty, \rho \in S_k^\varepsilon.
4.8.2. Let \varphi(x,\lambda) be the solution of (4.8.1) under the initial conditions \varphi(0,\lambda) = 1, \varphi'(0,\lambda) = h, and denote

\Delta(\lambda) = \varphi'(1,\lambda) + h_1\varphi(1,\lambda).   (4.8.7)

The function \Delta(\lambda) is entire in \lambda, and its zeros coincide with the eigenvalues \{\lambda_n\}_{n\ge 0} of the boundary value problem L. The functions x \mapsto \varphi(x,\lambda_n) are eigenfunctions of L.

Let \Phi(x,\lambda) be the solution of (4.8.1) under the boundary conditions U(\Phi) = 1, V(\Phi) = 0. We set M(\lambda) := \Phi(0,\lambda). The functions x \mapsto \Phi(x,\lambda) and M(\lambda) are called the Weyl solution and the Weyl function for the boundary value problem (4.8.1)-(4.8.3), respectively. Clearly,

\langle \varphi(x,\lambda), \Phi(x,\lambda)\rangle \equiv 1   (4.8.8)

for all x and \lambda, where \langle y, z\rangle := yz' - y'z.

The inverse problem studied in this section is formulated as follows. Suppose that the function R^2 is known a priori. Our goal is to find q(x), h and h_1 from the given Weyl function M.
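The objects just introduced are easy to experiment with numerically. The following sketch (a toy illustration of ours, not taken from the book) integrates equation (4.8.1) by shooting to evaluate \varphi(x,\lambda) and the characteristic function \Delta(\lambda) of (4.8.7), and locates real eigenvalues for the sample weight R^2(x) = x - 1/2 with a single turning point x_1 = 1/2; the choices q \equiv 0, h = 1, h_1 = 0 and all step counts are assumptions made only for this illustration.

```python
# Toy illustration (not from the book): shooting evaluation of phi(x, lam)
# and Delta(lam) = phi'(1) + h1*phi(1), cf. (4.8.7), for the sample
# indefinite weight R2(x) = x - 1/2.  q = 0, h = 1, h1 = 0 are assumptions.

def shoot(lam, q=lambda x: 0.0, R2=lambda x: x - 0.5, h=1.0, steps=2000):
    """Integrate -y'' + q*y = lam*R2*y, y(0)=1, y'(0)=h; return (y(1), y'(1))."""
    dx = 1.0 / steps
    x, y, yp = 0.0, 1.0, h

    def f(x, y, yp):                        # y'' = (q - lam*R2) * y
        return yp, (q(x) - lam * R2(x)) * y

    for _ in range(steps):                  # classical RK4 step
        k1 = f(x, y, yp)
        k2 = f(x + dx / 2, y + dx / 2 * k1[0], yp + dx / 2 * k1[1])
        k3 = f(x + dx / 2, y + dx / 2 * k2[0], yp + dx / 2 * k2[1])
        k4 = f(x + dx, y + dx * k3[0], yp + dx * k3[1])
        y += dx / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += dx / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += dx
    return y, yp

def Delta(lam, h1=0.0):
    y1, yp1 = shoot(lam)
    return yp1 + h1 * y1                    # characteristic function (4.8.7)

def real_zeros(lo, hi, n=200):
    """Bracket sign changes of Delta on [lo, hi] and refine them by bisection."""
    grid = [lo + (hi - lo) * i / n for i in range(n + 1)]
    vals = [Delta(t) for t in grid]
    roots = []
    for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:]):
        if fa * fb < 0:
            for _ in range(60):
                mid = 0.5 * (a + b)
                if fa * Delta(mid) <= 0:
                    b = mid
                else:
                    a, fa = mid, Delta(mid)
            roots.append(0.5 * (a + b))
    return roots

pos = real_zeros(0.0, 500.0)      # eigenvalues found on the positive axis
neg = real_zeros(-500.0, 0.0)     # eigenvalues found on the negative axis
print("positive:", pos, "negative:", neg)
```

Since the weight changes sign, sign changes of \Delta are found both on the positive and on the negative \lambda-axis, in accordance with the two eigenvalue sequences described in Lemma 4.8.1 below.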
In order to formulate and prove the uniqueness theorem for the solution of this inverse problem we agree that together with L = L(R^2(x), q(x), h, h_1) we consider a boundary value problem \tilde L = L(R^2(x), \tilde q(x), \tilde h, \tilde h_1) of the same form (4.8.1)-(4.8.3) but with different coefficients. If a certain symbol denotes an object related to L, then the corresponding symbol with tilde denotes the analogous object related to \tilde L.

Theorem 4.8.1. If M = \tilde M, then q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1. Thus, the specification of the Weyl function M uniquely determines L.
Proof. Let us define the matrix P(x,\lambda) = [P_{jk}(x,\lambda)]_{j,k=1,2} by the formula

P(x,\lambda)\begin{pmatrix} \tilde\varphi(x,\lambda) & \tilde\Phi(x,\lambda) \\ \tilde\varphi'(x,\lambda) & \tilde\Phi'(x,\lambda) \end{pmatrix} = \begin{pmatrix} \varphi(x,\lambda) & \Phi(x,\lambda) \\ \varphi'(x,\lambda) & \Phi'(x,\lambda) \end{pmatrix}.   (4.8.9)

Using (4.8.8) we calculate

P_{11}(x,\lambda) = \varphi(x,\lambda)\tilde\Phi'(x,\lambda) - \Phi(x,\lambda)\tilde\varphi'(x,\lambda),
P_{12}(x,\lambda) = \Phi(x,\lambda)\tilde\varphi(x,\lambda) - \varphi(x,\lambda)\tilde\Phi(x,\lambda).   (4.8.10)
Let us now study the asymptotic behavior of \varphi(x,\lambda), \Phi(x,\lambda) and P_{1j}(x,\lambda) as |\rho| \to \infty. For this purpose we use the above-mentioned fundamental system of solutions \{z_1(x,\rho), z_2(x,\rho)\} in each fixed sector S_k, k = -2, -1, 0, 1.
From the initial conditions on \varphi(x,\lambda) we calculate

\varphi(x,\lambda) = \frac{1}{w(\rho)}\big(U(z_2(x,\rho))z_1(x,\rho) - U(z_1(x,\rho))z_2(x,\rho)\big),   (4.8.11)

where

w(\rho) = z_1(0,\rho)z_2'(0,\rho) - z_1'(0,\rho)z_2(0,\rho).

Using (4.8.4)-(4.8.6) we get for |\rho| \to \infty, \rho \in S_k:

U(z_1(x,\rho)) = (\mp i\rho)|R(0)|^{1/2}\exp(\mp i\pi\sigma(0)/2)\,\eta(\rho),
U(z_2(x,\rho)) = (\pm i\rho)|R(0)|^{1/2}\,\eta(\rho),
w(\rho) = (\mp 2i\rho)[1];   (4.8.12)

here and in the sequel \eta(\rho) = O(1) for \rho \in S_k and \eta(\rho) = [1] for \rho \in S_k^\varepsilon as |\rho| \to \infty.
Substituting (4.8.4), (4.8.5) and (4.8.12) into (4.8.11) we conclude that for |\rho| \to \infty, \rho \in S_k, x \in D_\delta, j = 0,1,

\varphi^{(j)}(x,\lambda) = \frac{1}{2}(\mp i\rho)^j |R(0)|^{1/2}|R(x)|^{j-1/2}\big(\exp(\mp i\pi\sigma(x)/2)\big)^j \exp\Big(\pm\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^x |R_+(t)|\,dt\Big) K_1(x)\,\eta(x,\rho).   (4.8.13)

Consequently, taking (4.8.7) into account, we have

\Delta(\lambda) = \frac{1}{2}(\mp i\rho)|R(0)R(1)|^{1/2}\exp\Big(\pm\rho\int_0^1 |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^1 |R_+(t)|\,dt\Big) K_1(1)\,\eta(\rho), \quad \rho \in S_k, \ |\rho| \to \infty.   (4.8.14)
From (4.8.13) and (4.8.14) we obtain the estimates

|\varphi^{(j)}(x,\lambda)| \le C|\rho|^j\,\Big|\exp\Big(\pm\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^x |R_+(t)|\,dt\Big)\Big|, \quad \rho \in S_k, \ x \in D_\delta,   (4.8.15)

and

|\Delta(\lambda)| \le C|\rho|\,\Big|\exp\Big(\pm\rho\int_0^1 |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^1 |R_+(t)|\,dt\Big)\Big|, \quad \rho \in S_k.   (4.8.16)
Here and below one and the same symbol C denotes various positive constants in estimates. Moreover, it can be shown that

|\Delta(\lambda)| \ge C_\delta|\rho|\,\Big|\exp\Big(\pm\rho\int_0^1 |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^1 |R_+(t)|\,dt\Big)\Big|, \quad \rho \in G_\delta,   (4.8.17)

where

G_\delta = \{\rho : |\rho - \rho_n| \ge \delta, \ n \ge 0\}, \quad \delta > 0.

It follows from (4.8.15) and (4.8.16) that the functions \varphi^{(j)}(x,\lambda) and \Delta(\lambda) are entire in \lambda of order 1/2.
Let us go on to the Weyl solution \Phi(x,\lambda). Applying the boundary conditions to \Phi(x,\lambda) we calculate

\Phi(x,\lambda) = \frac{1}{\Delta_0(\rho)}\big(V(z_2(x,\rho))z_1(x,\rho) - V(z_1(x,\rho))z_2(x,\rho)\big),   (4.8.18)

where

\Delta_0(\rho) = U(z_1(x,\rho))V(z_2(x,\rho)) - U(z_2(x,\rho))V(z_1(x,\rho)).
Taking (4.8.4)-(4.8.6) into account, we obtain from (4.8.18) that for |\rho| \to \infty, \rho \in S^\varepsilon, x \in D_\delta, j = 0,1,

\Phi^{(j)}(x,\lambda) = (\pm i\rho)^{j-1}|R(0)|^{-1/2}|R(x)|^{j-1/2}\big(\exp(\pm i\pi\sigma(x)/2)\big)^j \exp\Big(\mp\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\pm i\rho\int_0^x |R_+(t)|\,dt\Big) K_2(x)\,[1].   (4.8.19)
It follows from the boundary conditions on \varphi(x,\lambda) and \Phi(x,\lambda) that

\Phi(x,\lambda) = S(x,\lambda) + M(\lambda)\varphi(x,\lambda),   (4.8.20)

where S(x,\lambda) is the solution of (4.8.1) satisfying the conditions S(0,\lambda) = 0, S'(0,\lambda) = 1. Using the fundamental system of solutions \{z_1(x,\rho), z_2(x,\rho)\} we get

S(x,\lambda) = \frac{1}{w(\rho)}\big(z_1(0,\rho)z_2(x,\rho) - z_2(0,\rho)z_1(x,\rho)\big).
By virtue of (4.8.4)-(4.8.6), we infer that for \rho \in S_k, x \in D_\delta, j = 0,1,

S^{(j)}(x,\lambda) = \frac{1}{2}(\mp i\rho)^{j-1}|R(0)|^{-1/2}|R(x)|^{j-1/2}\big(\exp(\mp i\pi\sigma(x)/2)\big)^j \exp(\mp i\pi\sigma(0)/2)\exp\Big(\pm\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^x |R_+(t)|\,dt\Big) K_1(x)\,\eta(x,\rho)   (4.8.21)

and

|S^{(j)}(x,\lambda)| \le C|\rho|^{j-1}\,\Big|\exp\Big(\pm\rho\int_0^x |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^x |R_+(t)|\,dt\Big)\Big|.   (4.8.22)
It follows from (4.8.22) that the functions S^{(j)}(x,\lambda) are entire in \lambda of order 1/2. Using (4.8.10), (4.8.13) and (4.8.19) we get

P_{11}(x,\lambda) = [1], \quad P_{12}(x,\lambda) = O(\rho^{-\xi_0}), \quad |\rho| \to \infty, \ \rho \in S^\varepsilon, \ x \in D_\delta.   (4.8.23)
Furthermore, substituting (4.8.20) into (4.8.10), we calculate

P_{11}(x,\lambda) = \varphi(x,\lambda)\tilde S'(x,\lambda) - S(x,\lambda)\tilde\varphi'(x,\lambda) + (\tilde M(\lambda) - M(\lambda))\varphi(x,\lambda)\tilde\varphi'(x,\lambda),
P_{12}(x,\lambda) = S(x,\lambda)\tilde\varphi(x,\lambda) - \varphi(x,\lambda)\tilde S(x,\lambda) + (M(\lambda) - \tilde M(\lambda))\varphi(x,\lambda)\tilde\varphi(x,\lambda).

By the hypothesis of Theorem 4.8.1, M(\lambda) \equiv \tilde M(\lambda), and consequently the functions P_{11}(x,\lambda) and P_{12}(x,\lambda) are entire in \lambda of order 1/2. It follows from this and (4.8.23) that P_{11}(x,\lambda) \equiv 1, P_{12}(x,\lambda) \equiv 0. Substituting into (4.8.9) we obtain \varphi(x,\lambda) \equiv \tilde\varphi(x,\lambda), \Phi(x,\lambda) \equiv \tilde\Phi(x,\lambda) for all x and \lambda. Consequently, q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1. Theorem 4.8.1 is proved. □
Let \{\lambda_n^1\}_{n\ge 0} be the sequence of eigenvalues of the boundary value problem L_1 for equation (4.8.1) with the boundary conditions y(0) = V(y) = 0. Now we consider the inverse problem of recovering q(x), h and h_1 from the given two spectra \{\lambda_n\}_{n\ge 0} of L and \{\lambda_n^1\}_{n\ge 0} of L_1.

Theorem 4.8.2. If \lambda_n = \tilde\lambda_n and \lambda_n^1 = \tilde\lambda_n^1 for all n \ge 0, then q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1.
Proof. The function \Delta(\lambda) is entire in \lambda of order 1/2, and \Delta(\lambda) is uniquely determined by its zeros \{\lambda_n\}_{n\ge 0} via the formula

\Delta(\lambda) = A\prod_{n=0}^{\infty}\Big(1 - \frac{\lambda}{\lambda_n}\Big) \quad (\text{if } \lambda_n \ne 0 \text{ for all } n),

where the constant A is uniquely determined with the help of the asymptotic formula (4.8.14) (the case when \Delta(0) = 0 requires minor modifications).
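This product representation can be checked directly in the turning-point-free model case (a sanity check of ours, not an example from the book): for R^2(x) \equiv 1, q \equiv 0, h = h_1 = 0 one has \varphi(x,\lambda) = \cos(\sqrt\lambda\,x) and \Delta(\lambda) = -\sqrt\lambda\sin\sqrt\lambda, with zeros \lambda_n = (n\pi)^2, n \ge 0. Since \Delta(0) = 0, one factor \lambda is split off (the "minor modification" just mentioned), giving \Delta(\lambda) = A\lambda\prod_{n\ge 1}(1 - \lambda/(n\pi)^2) with A = -1:

```python
import math

# Truncated Hadamard product vs. the closed form of the characteristic
# function in the classical case R2 = 1, q = 0, h = h1 = 0 (our sanity
# check; here A = -1 plays the role of the constant fixed by (4.8.14)).

def delta_closed(lam):
    r = math.sqrt(lam)
    return -r * math.sin(r)

def delta_product(lam, terms=2000):
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 1.0 - lam / (n * math.pi) ** 2
    return -lam * prod        # A = -1, eigenvalue lambda_0 = 0 split off

for lam in (0.5, 2.0, 7.0):
    print(lam, delta_closed(lam), delta_product(lam))
```

The truncation error of the product is O(\lambda/N), so a few thousand factors already reproduce \Delta(\lambda) to three or four digits for moderate \lambda.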
The eigenvalues \{\lambda_n^1\}_{n\ge 0} of the boundary value problem L_1 coincide with the zeros of the entire function \Delta_1(\lambda) = S'(1,\lambda) + h_1 S(1,\lambda) of order 1/2. It follows from (4.8.21) that

\Delta_1(\lambda) = \frac{1}{2}|R(0)|^{-1/2}|R(1)|^{1/2}\exp(\mp i\pi\sigma(0)/2)\exp\Big(\pm\rho\int_0^1 |R_-(t)|\,dt\Big)\exp\Big(\mp i\rho\int_0^1 |R_+(t)|\,dt\Big) K_1(1)\,\eta(\rho).   (4.8.24)

The function \Delta_1(\lambda) is uniquely determined by its zeros \{\lambda_n^1\}_{n\ge 0} via the formula

\Delta_1(\lambda) = A_1\prod_{n=0}^{\infty}\Big(1 - \frac{\lambda}{\lambda_n^1}\Big),

where the constant A_1 is uniquely determined with the help of (4.8.24).
Since V(\Phi) = 0, it follows from (4.8.20) that

M(\lambda) = -\frac{\Delta_1(\lambda)}{\Delta(\lambda)}.   (4.8.25)

By the hypothesis of Theorem 4.8.2, \lambda_n = \tilde\lambda_n and \lambda_n^1 = \tilde\lambda_n^1 for all n \ge 0, and consequently \Delta(\lambda) \equiv \tilde\Delta(\lambda), \Delta_1(\lambda) \equiv \tilde\Delta_1(\lambda). From this, in view of (4.8.25), we get M(\lambda) \equiv \tilde M(\lambda). Using Theorem 4.8.1 we conclude that q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1; hence Theorem 4.8.2 is proved. □
We consider now the inverse problem of recovering L from the so-called spectral data. For simplicity, we confine ourselves to the case when all zeros of \Delta(\lambda) are simple. Denote

\alpha_n = \int_0^1 R^2(x)\varphi^2(x,\lambda_n)\,dx.

The data \{\lambda_n, \alpha_n\}_{n\ge 0} are called the spectral data of the boundary value problem L.
Theorem 4.8.3. If \lambda_n = \tilde\lambda_n and \alpha_n = \tilde\alpha_n for all n \ge 0, then q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1.
Proof. Since

-\Phi''(x,\lambda) + q(x)\Phi(x,\lambda) = \lambda R^2(x)\Phi(x,\lambda),
-\varphi''(x,\lambda_n) + q(x)\varphi(x,\lambda_n) = \lambda_n R^2(x)\varphi(x,\lambda_n),

we get

\big(\Phi(x,\lambda)\varphi'(x,\lambda_n) - \Phi'(x,\lambda)\varphi(x,\lambda_n)\big)' = (\lambda - \lambda_n)R^2(x)\Phi(x,\lambda)\varphi(x,\lambda_n),

and hence

(\lambda - \lambda_n)\int_0^1 R^2(x)\Phi(x,\lambda)\varphi(x,\lambda_n)\,dx = \big(\Phi(x,\lambda)\varphi'(x,\lambda_n) - \Phi'(x,\lambda)\varphi(x,\lambda_n)\big)\Big|_0^1 = 1.
In view of (4.8.20) and (4.8.25), we have

(\lambda - \lambda_n)\int_0^1 R^2(x)S(x,\lambda)\varphi(x,\lambda_n)\,dx - (\lambda - \lambda_n)\frac{\Delta_1(\lambda)}{\Delta(\lambda)}\int_0^1 R^2(x)\varphi(x,\lambda)\varphi(x,\lambda_n)\,dx = 1.

Letting \lambda \to \lambda_n, we obtain

-\frac{\Delta_1(\lambda_n)}{\Delta'(\lambda_n)}\int_0^1 R^2(x)\varphi^2(x,\lambda_n)\,dx = 1,

where \Delta'(\lambda) = \frac{d}{d\lambda}\Delta(\lambda). Hence

\alpha_n = -\frac{\Delta'(\lambda_n)}{\Delta_1(\lambda_n)}.   (4.8.26)

It follows from (4.8.25) that the Weyl function M(\lambda) is meromorphic with simple poles at the points \lambda_n, n \ge 0. Using (4.8.26) we calculate

\mathrm{Res}_{\lambda=\lambda_n} M(\lambda) = \frac{1}{\alpha_n}.   (4.8.27)
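Formulas (4.8.26) and (4.8.27) can be verified numerically in the turning-point-free model case (a sanity check of ours, not an example from the book): for R^2 \equiv 1, q \equiv 0, h = h_1 = 0 one has \varphi = \cos(\sqrt\lambda\,x), S = \sin(\sqrt\lambda\,x)/\sqrt\lambda, \Delta(\lambda) = -\sqrt\lambda\sin\sqrt\lambda, \Delta_1(\lambda) = \cos\sqrt\lambda, eigenvalues \lambda_n = (n\pi)^2 and weights \alpha_n = \int_0^1\cos^2(n\pi x)\,dx = 1/2 for n \ge 1:

```python
import math

# Check alpha_n = -Delta'(lambda_n)/Delta_1(lambda_n)  (4.8.26)
# and   Res_{lambda_n} M = 1/alpha_n                    (4.8.27)
# in the classical model case; the closed forms below are standard.

def Delta(lam):
    r = math.sqrt(lam)
    return -r * math.sin(r)

def Delta1(lam):
    return math.cos(math.sqrt(lam))

def alpha(n, steps=4000):
    # alpha_n = int_0^1 R2 * phi^2 dx with R2 = 1, phi = cos(n pi x)
    return sum(math.cos(n * math.pi * (i + 0.5) / steps) ** 2
               for i in range(steps)) / steps

for n in (1, 2, 3):
    lam_n = (n * math.pi) ** 2
    eps = 1e-4
    dDelta = (Delta(lam_n + eps) - Delta(lam_n - eps)) / (2 * eps)
    print(n, alpha(n), -dDelta / Delta1(lam_n), 1.0 / alpha(n))
```

Both sides of (4.8.26) equal 1/2 for every n \ge 1, so the residue of the Weyl function at \lambda_n is correspondingly 2.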
Furthermore, by virtue of (4.8.19), we derive

M(\lambda) = \frac{1}{(\pm i\rho)|R(0)|}\exp(\pm i\pi\sigma(0)/2)\,[1], \quad |\rho| \to \infty, \ \rho \in S^\varepsilon,

and consequently

|M(\lambda)| \le \frac{C}{|\rho|}, \quad |\rho| \to \infty, \ \rho \in S^\varepsilon.   (4.8.28)
Let us consider the function N(\lambda) := M(\lambda) - \tilde M(\lambda). By the hypothesis of Theorem 4.8.3 and (4.8.27), we have

\mathrm{Res}_{\lambda=\lambda_n} N(\lambda) = \frac{1}{\alpha_n} - \frac{1}{\tilde\alpha_n} = 0.

Thus, the function N(\lambda) is an entire function of order 1/2. On the other hand, in view of (4.8.28),

|N(\lambda)| \le \frac{C}{|\rho|}, \quad \rho \in S^\varepsilon, \ |\rho| \to \infty.

Consequently, N(\lambda) \equiv 0, i.e. M(\lambda) \equiv \tilde M(\lambda). Using Theorem 4.8.1 we obtain q(x) = \tilde q(x) for x \in I, h = \tilde h and h_1 = \tilde h_1. Theorem 4.8.3 is proved. □
Remark 4.8.1. Theorems 4.8.2 and 4.8.3 are generalizations of results by Borg and Marchenko (see Section 1.4) for the Sturm-Liouville equation without turning points, where R^2(x) \equiv 1.

Remark 4.8.2. We can also apply this method to investigate inverse problems for equation (4.8.1) with turning points at 0 and/or 1.
4.8.3. Let us now go on to constructing the solution of the inverse problem. The central role in solving the inverse problem is played by the so-called main equation of the inverse problem, which connects the spectral characteristics with the corresponding solutions of the differential equation. We give a derivation of the main equation, which is a linear equation in the Banach space m of bounded sequences, and we prove the unique solvability of the main equation. For simplicity, we confine ourselves to the most important particular case when m = 1 and T_1 = IV, i.e. the weight function changes sign exactly once. The general case can be treated analogously.

For deriving the main equation of the inverse problem we need more precise asymptotics for the solutions of equation (4.8.1). For definiteness, everywhere below \rho \in S_0 \cup S_1 (the case \rho \in S_{-1} \cup S_{-2} is considered in the same way). It has been shown in [ebe4] that for |\rho| \to \infty and j = 0,1 the following asymptotic formulas are valid:
z_1^{(j)}(x,\rho) = \rho^j|R(x)|^{j-1/2}\exp\Big(\rho\int_0^x |R(t)|\,dt\Big)[1], \quad x < x_1,

z_1^{(j)}(x,\rho) = (i\rho)^j|R(x)|^{j-1/2}\exp\Big(\rho\int_0^{x_1} |R(t)|\,dt\Big)\frac{1}{2}\csc\frac{\pi\mu_1}{2}\exp\Big(\frac{i\pi}{4}\Big)\Big(\exp\Big(i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1] + (-1)^{j+1}\,i\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1]\Big), \quad x > x_1,   (4.8.29)
z_2^{(j)}(x,\rho) = (-\rho)^j|R(x)|^{j-1/2}\Big(i\exp\Big(-\rho\int_0^x |R(t)|\,dt\Big)[1] + (-1)^j\exp\Big(\rho\int_0^x |R(t)|\,dt\Big)\exp\Big(-2\rho\int_0^{x_1} |R(t)|\,dt\Big)[1]\Big), \quad x < x_1,

z_2^{(j)}(x,\rho) = (-i\rho)^j|R(x)|^{j-1/2}\,2\sin\frac{\pi\mu_1}{2}\exp\Big(-\frac{i\pi}{4}\Big)\exp\Big(-\rho\int_0^{x_1} |R(t)|\,dt\Big)\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1], \quad x > x_1.   (4.8.30)
Here, as above, [1] = 1 + O(\rho^{-\xi_0}) uniformly in x \in D_\delta. It follows from (4.8.11), in view of (4.8.6) and (4.8.29)-(4.8.30), that

\varphi^{(j)}(x,\lambda) = \frac{1}{2}\rho^j|R(0)|^{1/2}|R(x)|^{j-1/2}\Big(\exp\Big(\rho\int_0^x |R(t)|\,dt\Big)[1] + (-1)^j\exp\Big(-\rho\int_0^x |R(t)|\,dt\Big)[1]\Big), \quad x < x_1,

\varphi^{(j)}(x,\lambda) = \frac{1}{2}(i\rho)^j|R(0)|^{1/2}|R(x)|^{j-1/2}\Big(A_1(\rho)\exp\Big(i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1] + (-1)^j A_2(\rho)\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1]\Big), \quad x > x_1,   (4.8.31)
where

A_2(\rho) = \frac{1}{2}\csc\frac{\pi\mu_1}{2}\exp\Big(\frac{i\pi}{4}\Big)\Big(\exp\Big(\rho\int_0^{x_1} |R(t)|\,dt\Big)[1] - i\exp\Big(-\rho\int_0^{x_1} |R(t)|\,dt\Big)[1]\Big),

A_1(\rho) = -iA_2(\rho) + 2\sin\frac{\pi\mu_1}{2}\exp\Big(\frac{i\pi}{4}\Big)\exp\Big(-\rho\int_0^{x_1} |R(t)|\,dt\Big)[1].   (4.8.32)
Remark 4.8.3. Let u = \rho\int_{x_1}^x |R(t)|\,dt. It follows from results of [ebe4] that (4.8.29)-(4.8.31) are also valid uniformly for |u| \ge 1 with [1] = 1 + O(u^{-\xi_0}); moreover, for |u| \le 1 we have the estimates

|\varphi(x,\lambda)| \le C|R(x)|^{-1/2}\Big|\exp\Big(\rho\int_0^x |R(t)|\,dt\Big)\Big|, \quad x \le x_1,

|\varphi(x,\lambda)| \le C|R(x)|^{-1/2}\Big|\exp\Big(\rho\int_0^{x_1} |R(t)|\,dt\Big)\Big|, \quad x \ge x_1.   (4.8.33)
Lemma 4.8.1. (i) The spectrum \{\lambda_k\} of the boundary value problem L consists of two sequences of eigenvalues \{\lambda_k^+\} and \{\lambda_k^-\}, k \in \mathbb{N}, such that

\rho_k^\pm = \sqrt{\lambda_k^\pm} = \rho_{k,0}^\pm + O(k^{-\xi_0}), \quad k \to +\infty,   (4.8.34)

where

\rho_{k,0}^+ = \Big(\int_{x_1}^1 |R(t)|\,dt\Big)^{-1}\pi\Big(k + \frac{1}{4}\Big), \qquad \rho_{k,0}^- = \Big(\int_0^{x_1} |R(t)|\,dt\Big)^{-1}\pi\Big(k + \frac{1}{4}\Big)\,i.

(ii) Denote \alpha_k^\pm = \int_0^1 R^2(x)\varphi^2(x,\lambda_k^\pm)\,dx, i.e. \{\alpha_k\} = \{\alpha_k^+\} \cup \{\alpha_k^-\}. Then

\alpha_k^+ = \frac{1}{2}|R(0)|\int_{x_1}^1 |R(t)|\,dt\,\Big(\frac{1}{2}\csc\frac{\pi\mu_1}{2}\Big)^2\Theta_k^2\,(1 + O(k^{-\xi})), \quad k \to +\infty,

\alpha_k^- = -\frac{1}{2}|R(0)|\int_0^{x_1} |R(t)|\,dt\,(1 + O(k^{-\xi})), \quad k \to +\infty,   (4.8.35)

where

\Theta_k = \exp\Big(\rho_{k,0}^+\int_0^{x_1} |R(t)|\,dt\Big), \qquad \xi = \min(1 - \xi_0, \xi_0).
Proof. Substituting (4.8.31) into (4.8.7) we calculate

\Delta(\lambda) = \frac{1}{2}(i\rho)|R(0)R(1)|^{1/2}\Big(A_1(\rho)\exp\Big(i\rho\int_{x_1}^1 |R(t)|\,dt\Big)[1] - A_2(\rho)\exp\Big(-i\rho\int_{x_1}^1 |R(t)|\,dt\Big)[1]\Big).   (4.8.36)

Let \rho \in S_0. It follows from (4.8.32) that

A_1(\rho) = -\frac{i}{2}\csc\frac{\pi\mu_1}{2}\exp\Big(\frac{i\pi}{4}\Big)\exp\Big(\rho\int_0^{x_1} |R(t)|\,dt\Big)[1],

A_2(\rho) = \frac{1}{2}\csc\frac{\pi\mu_1}{2}\exp\Big(\frac{i\pi}{4}\Big)\exp\Big(\rho\int_0^{x_1} |R(t)|\,dt\Big)[1].

Hence, by virtue of (4.8.36), the equation \Delta(\lambda) = 0 can be rewritten in the form

\exp\Big(2i\rho\int_{x_1}^1 |R(t)|\,dt\Big) = i\,[1].

This equation has a countable set of roots \rho_k^+ such that \rho_k^+ = \rho_{k,0}^+ + O(k^{-\xi_0}). Analogously, if \rho \in S_1, then the equation \Delta(\lambda) = 0 can be transformed to

\exp\Big(2\rho\int_0^{x_1} |R(t)|\,dt\Big) = i\,[1];

therefore this equation has a countable set of roots \rho_k^- such that \rho_k^- = \rho_{k,0}^- + O(k^{-\xi_0}).
Let us now consider \alpha_k^-. Put \alpha_k^- = \alpha_{k,1}^- + \alpha_{k,2}^-, where

\alpha_{k,1}^- = \int_0^{x_1} R^2(x)\varphi^2(x,\lambda_k^-)\,dx, \qquad \alpha_{k,2}^- = \int_{x_1}^1 R^2(x)\varphi^2(x,\lambda_k^-)\,dx.
Denote I_{k,1} = \{x \in [0,x_1] : |u|_{\rho=\rho_k^-} \ge 1\}, I_{k,2} = \{x \in [0,x_1] : |u|_{\rho=\rho_k^-} \le 1\}, where u = \rho\int_{x_1}^x |R(t)|\,dt. According to (4.8.31),

\int_{I_{k,1}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx = -\frac{1}{4}|R(0)|\int_{I_{k,1}} |R(x)|\Big(\exp\Big(\rho\int_0^x |R(t)|\,dt\Big)[1] + \exp\Big(-\rho\int_0^x |R(t)|\,dt\Big)[1]\Big)^2\Big|_{\rho=\rho_k^-}\,dx.

The change of variables u = \int_x^{x_1} |R(\tau)|\,d\tau gives

\int_{I_{k,1}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx = -\frac{1}{4}|R(0)|\int_{1/|\rho|}^{u_1}\Big(\exp(\rho(u_1-u))[1] + \exp(-\rho(u_1-u))[1]\Big)^2\,du\Big|_{\rho=\rho_k^-},

where u_1 = \int_0^{x_1} |R(t)|\,dt. Hence

\int_{I_{k,1}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx = -\frac{1}{2}|R(0)|\int_0^{x_1} |R(x)|\,dx\,(1 + O(k^{-\xi})).
Furthermore, it follows from (4.8.33) that

\Big|\int_{I_{k,2}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx\Big| \le C\int_{I_{k,2}} \Big|R(x)\exp\Big(2\rho\int_0^x |R(t)|\,dt\Big)\Big|_{\rho=\rho_k^-}\,dx.

In view of (4.8.34), the exponential is bounded, and consequently

\Big|\int_{I_{k,2}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx\Big| \le C\int_{I_{k,2}} |R(x)|\,dx = C\int_0^{1/|\rho|} du\Big|_{\rho=\rho_k^-} = O(k^{-1}).

Thus, we arrive at

\alpha_{k,1}^- = -\frac{1}{2}|R(0)|\int_0^{x_1} |R(x)|\,dx\,(1 + O(k^{-\xi})).   (4.8.37)
Let us now estimate \alpha_{k,2}^-. Denote J_{k,1} = \{x \in [x_1,1] : |u|_{\rho=\rho_k^-} \ge 1\}, J_{k,2} = \{x \in [x_1,1] : |u|_{\rho=\rho_k^-} \le 1\}. In the same way as above one can show that

\Big|\int_{J_{k,2}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx\Big| = O(k^{-1}).

Then

\alpha_{k,2}^- = \int_{J_{k,1}} R^2(x)\varphi^2(x,\lambda_k^-)\,dx + O(k^{-1}).

Taking (4.8.31) into account, we get

\alpha_{k,2}^- = \frac{|R(0)|}{4}\int_{J_{k,1}} |R(x)|\Big(A_1(\rho)\exp\Big(i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1] + A_2(\rho)\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1]\Big)^2\Big|_{\rho=\rho_k^-}\,dx + O(k^{-1}).
The integral seems to be unbounded as k \to +\infty. But it follows from (4.8.36) and (4.8.32) that

\frac{A_2(\rho)}{A_1(\rho)}\Big|_{\rho=\rho_k^-} = \exp\Big(2i\rho_k^-\int_{x_1}^1 |R(t)|\,dt\Big)[1],   (4.8.38)

A_1(\rho_k^-) = 2\exp\Big(\frac{i\pi}{4}\Big)\sin\frac{\pi\mu_1}{2}\exp\Big(-\rho_k^-\int_0^{x_1} |R(t)|\,dt\Big)[1].   (4.8.39)
Then

\alpha_{k,2}^- = \frac{|R(0)|}{4}A_1^2(\rho_k^-)\int_{J_{k,1}} |R(x)|\Big(\exp\Big(i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1] + \exp\Big(2i\rho\int_{x_1}^1 |R(t)|\,dt\Big)\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1]\Big)^2\Big|_{\rho=\rho_k^-}\,dx + O(k^{-1}).

After the change of variables u = \rho\int_{x_1}^x |R(t)|\,dt and integration we arrive at the estimate

\alpha_{k,2}^- = O(k^{-\xi}).

Combining this with (4.8.37), we obtain (4.8.35) for \alpha_k^-. For \alpha_k^+ the proof is analogous; hence Lemma 4.8.1 is proved. □
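The asymptotics (4.8.34) can be observed numerically (a toy check of ours, not from the book). For the sample weight R^2(x) = x - 1/2 (so x_1 = 1/2, \ell_1 = 1, T_1 = IV, |R(t)| = |t - 1/2|^{1/2}), the numbers \rho_k^+ = \sqrt{\lambda_k^+} should become equally spaced with step \pi/\int_{x_1}^1 |R(t)|\,dt = \pi/((2/3)(1/2)^{3/2}) \approx 13.33; the choices q \equiv 0, h = 1, h_1 = 0 and the grids below are illustrative assumptions.

```python
import math

# Locate the first eigenvalues lambda_k^+ of -y'' = lam*(x - 1/2)*y with
# y'(0) - y(0) = 0, y'(1) = 0 by shooting, and compare the spacing of
# rho_k^+ = sqrt(lambda_k^+) with the step pi/c predicted by (4.8.34),
# where c = int_{1/2}^1 sqrt(t - 1/2) dt = (2/3) * 0.5**1.5.

def Delta(lam, steps=1500):
    dx = 1.0 / steps
    x, y, yp = 0.0, 1.0, 1.0               # y(0) = 1, y'(0) = h = 1

    def f(x, y, yp):                        # y'' = -lam*(x - 1/2)*y  (q = 0)
        return yp, -lam * (x - 0.5) * y

    for _ in range(steps):                  # classical RK4 step
        k1 = f(x, y, yp)
        k2 = f(x + dx / 2, y + dx / 2 * k1[0], yp + dx / 2 * k1[1])
        k3 = f(x + dx / 2, y + dx / 2 * k2[0], yp + dx / 2 * k2[1])
        k4 = f(x + dx, y + dx * k3[0], yp + dx * k3[1])
        y += dx / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += dx / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += dx
    return yp                               # Delta = y'(1) + h1*y(1), h1 = 0

roots = []
grid = [10.0 * i for i in range(221)]       # lambda in [0, 2200]
vals = [Delta(t) for t in grid]
for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:]):
    if fa * fb < 0:
        for _ in range(45):
            mid = 0.5 * (a + b)
            if fa * Delta(mid) <= 0:
                b = mid
            else:
                a, fa = mid, Delta(mid)
        roots.append(0.5 * (a + b))

rho = [math.sqrt(r) for r in roots]
gaps = [b - a for a, b in zip(rho, rho[1:])]
c = (2.0 / 3.0) * 0.5 ** 1.5
print("rho:", rho, "gaps:", gaps, "pi/c:", math.pi / c)
```

Already for the third or fourth eigenvalue the observed gap is close to \pi/c, in line with the O(k^{-\xi_0}) error term of (4.8.34).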
Now we go on to the derivation of the main equation of the inverse problem. We assume that the spectral data \{\lambda_k, \alpha_k\}_{k\ge 0} of L are given. Let \tilde L = L(R^2(x), \tilde q(x), \tilde h, \tilde h_1) be a certain known model boundary value problem with the same weight function R^2(x) and with the spectral data \{\tilde\lambda_k, \tilde\alpha_k\}_{k\ge 0}.
Lemma 4.8.2. For each fixed x \ne x_1 and k \to +\infty the following estimates hold.

(i) If x < x_1, then

\varphi(x,\lambda_k^-) = O(1), \quad \varphi(x,\tilde\lambda_k^-) = O(1),   (4.8.40)

\varphi(x,\lambda_k^-) - \varphi(x,\tilde\lambda_k^-) = O(|\rho_k^- - \tilde\rho_k^-|),   (4.8.41)

\varphi(x,\lambda_k^+) = O\Big(\exp\Big(\rho_{k,0}^+\int_0^x |R(t)|\,dt\Big)\Big), \quad \varphi(x,\tilde\lambda_k^+) = O\Big(\exp\Big(\rho_{k,0}^+\int_0^x |R(t)|\,dt\Big)\Big),   (4.8.42)

\varphi(x,\lambda_k^+) - \varphi(x,\tilde\lambda_k^+) = O\Big(|\rho_k^+ - \tilde\rho_k^+|\exp\Big(\rho_{k,0}^+\int_0^x |R(t)|\,dt\Big)\Big).   (4.8.43)

(ii) If x > x_1, then

\varphi(x,\lambda_k^-) = O\Big(\exp\Big(i\rho_{k,0}^-\int_{x_1}^x |R(t)|\,dt\Big)\Big),   (4.8.44)

\varphi(x,\lambda_k^-) - \varphi(x,\tilde\lambda_k^-) = O\Big(k^{-\xi_0}\exp\Big(i\rho_{k,0}^-\int_{x_1}^x |R(t)|\,dt\Big)\Big),   (4.8.45)

\varphi(x,\lambda_k^+) = O(\Theta_k), \quad \varphi(x,\tilde\lambda_k^+) = O(\Theta_k),   (4.8.46)

\varphi(x,\lambda_k^+) - \varphi(x,\tilde\lambda_k^+) = O(|\rho_k^+ - \tilde\rho_k^+|\,\Theta_k).   (4.8.47)

We mark the difference between the estimates (4.8.44) and (4.8.46) for the functions \varphi(x,\lambda_k^-) and \varphi(x,\lambda_k^+): the real part of the exponent in (4.8.44) is negative, but the real part of the exponent in (4.8.46) is positive.
Proof. Let x < x_1. The estimates (4.8.40) and (4.8.42) follow from (4.8.31) and the asymptotics of \rho_k^+ and \rho_k^-. Using (4.8.40), (4.8.42) and Schwarz's lemma we obtain (4.8.41) and (4.8.43).

Let now x > x_1. For \rho = \rho_k^- and \rho = \tilde\rho_k^- we rewrite (4.8.31) as follows:

\varphi(x,\lambda) = \frac{1}{2}|R(0)|^{1/2}|R(x)|^{-1/2}A_1(\rho)\Big(\exp\Big(i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1] + \frac{A_2(\rho)}{A_1(\rho)}\exp\Big(-i\rho\int_{x_1}^x |R(t)|\,dt\Big)[1]\Big).   (4.8.48)

From this, using (4.8.38) and (4.8.39), we obtain (4.8.44). Furthermore, it is easy to show that

\frac{A_2(\rho)}{A_1(\rho)}\Big|_{\rho=\tilde\rho_k^-} = O(k^{-\xi_0}), \quad A_1(\tilde\rho_k^-) = O(1).

Substituting this into (4.8.48) we arrive at (4.8.45). The estimates (4.8.46) and (4.8.47) are proved in the same way as (4.8.42) and (4.8.43). Lemma 4.8.2 is proved. □
Lemma 4.8.3. For each fixed x \in I the following relations hold:

\tilde\varphi(x,\lambda) = \varphi(x,\lambda) + \sum_k\Big(\frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\lambda_k)\rangle}{\alpha_k(\lambda - \lambda_k)}\varphi(x,\lambda_k) - \frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\tilde\lambda_k)\rangle}{\tilde\alpha_k(\lambda - \tilde\lambda_k)}\varphi(x,\tilde\lambda_k)\Big),   (4.8.49)

\frac{\langle\varphi(x,\lambda), \varphi(x,\mu)\rangle}{\lambda - \mu} - \frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\mu)\rangle}{\lambda - \mu} + \sum_k\Big(\frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\lambda_k)\rangle\,\langle\varphi(x,\lambda_k), \varphi(x,\mu)\rangle}{\alpha_k(\lambda - \lambda_k)(\lambda_k - \mu)} - \frac{\langle\tilde\varphi(x,\lambda), \tilde\varphi(x,\tilde\lambda_k)\rangle\,\langle\varphi(x,\tilde\lambda_k), \varphi(x,\mu)\rangle}{\tilde\alpha_k(\lambda - \tilde\lambda_k)(\tilde\lambda_k - \mu)}\Big) = 0.   (4.8.50)

We omit the proof of Lemma 4.8.3, since the arguments here are the same as in the proof of Lemma 1.6.3.
Denote

\lambda_{k0} = \lambda_k, \quad \lambda_{k1} = \tilde\lambda_k, \quad \alpha_{k0} = \alpha_k, \quad \alpha_{k1} = \tilde\alpha_k, \quad \varphi_{kj}(x) = \varphi(x,\lambda_{kj}), \quad \tilde\varphi_{kj}(x) = \tilde\varphi(x,\lambda_{kj}),

\tilde P_{ni,kj}(x) = \frac{\langle\tilde\varphi_{ni}(x), \tilde\varphi_{kj}(x)\rangle}{\alpha_{kj}(\lambda_{ni} - \lambda_{kj})} = \frac{1}{\alpha_{kj}}\int_0^x R^2(t)\tilde\varphi_{ni}(t)\tilde\varphi_{kj}(t)\,dt,

P_{ni,kj}(x) = \frac{\langle\varphi_{ni}(x), \varphi_{kj}(x)\rangle}{\alpha_{kj}(\lambda_{ni} - \lambda_{kj})} = \frac{1}{\alpha_{kj}}\int_0^x R^2(t)\varphi_{ni}(t)\varphi_{kj}(t)\,dt.

It follows from (4.8.49) and (4.8.50) that

\tilde\varphi_{ni}(x) = \varphi_{ni}(x) + \sum_k\big(\tilde P_{ni,k0}(x)\varphi_{k0}(x) - \tilde P_{ni,k1}(x)\varphi_{k1}(x)\big),   (4.8.51)

P_{ni,\ell j}(x) - \tilde P_{ni,\ell j}(x) + \sum_k\big(\tilde P_{ni,k0}(x)P_{k0,\ell j}(x) - \tilde P_{ni,k1}(x)P_{k1,\ell j}(x)\big) = 0.   (4.8.52)
Let x < x_1. Denote

\Theta_k(x) = \exp\Big(\rho_{k,0}^+\int_0^x |R(t)|\,dt\Big) for \lambda_k = \lambda_k^+, \quad \Theta_k(x) = 1 for \lambda_k = \lambda_k^-,

\Theta_k = \Theta_k(x_1), \qquad \xi_k = |\rho_k - \tilde\rho_k| + \frac{|\alpha_k - \tilde\alpha_k|}{\Theta_k^2}.

Clearly, \xi_k = O(k^{-\xi}). It follows from (4.8.40)-(4.8.43) that

|\varphi_{kj}(x)| \le C\Theta_k(x), \quad |\varphi_{k0}(x) - \varphi_{k1}(x)| \le C\xi_k\Theta_k(x), \quad |P_{ni,kj}(x)| \le \frac{C\,\Theta_n(x)\Theta_k(x)}{(|n-k|+1)\,\Theta_k^2}.   (4.8.53)
Let V be the set of indices v = (k,j), k \in \mathbb{N}, j = 0,1. Define the vector \psi(x) = [\psi_v(x)]_{v\in V} := [\psi_{00}, \psi_{01}, \psi_{10}, \psi_{11}, \ldots]^T and the block matrix

H(x) = [H_{u,v}(x)]_{u,v\in V} = \begin{bmatrix} H_{n0,k0}(x) & H_{n0,k1}(x) \\ H_{n1,k0}(x) & H_{n1,k1}(x) \end{bmatrix}_{n,k\in\mathbb{N}}, \quad u = (n,i), \ v = (k,j),

where

\psi_{k0}(x) = (\varphi_{k0}(x) - \varphi_{k1}(x))(\xi_k\Theta_k(x))^{-1}, \qquad \psi_{k1}(x) = \varphi_{k1}(x)(\Theta_k(x))^{-1},

H_{n0,k0}(x) = (P_{n0,k0}(x) - P_{n1,k0}(x))\,\xi_k\Theta_k(x)(\xi_n\Theta_n(x))^{-1},
H_{n1,k0}(x) = P_{n1,k0}(x)\,\xi_k\Theta_k(x)(\Theta_n(x))^{-1},
H_{n1,k1}(x) = (P_{n1,k0}(x) - P_{n1,k1}(x))\,\Theta_k(x)(\Theta_n(x))^{-1},
H_{n0,k1}(x) = (P_{n0,k0}(x) - P_{n1,k0}(x) - P_{n0,k1}(x) + P_{n1,k1}(x))\,\Theta_k(x)(\xi_n\Theta_n(x))^{-1}.

Analogously we define \tilde\psi(x), \tilde H(x). It follows from (4.8.53) and Schwarz's lemma that for each fixed x \in (0,x_1),

|\psi_{ni}(x)| \le C, \quad |H_{ni,kj}(x)| \le \frac{C\xi_k}{|n-k|+1}.   (4.8.54)

Similarly,

|\tilde\psi_{ni}(x)| \le C, \quad |\tilde H_{ni,kj}(x)| \le \frac{C\xi_k}{|n-k|+1}.   (4.8.55)
Let us consider the Banach space m of bounded sequences \beta = [\beta_v]_{v\in V} with the norm \|\beta\|_m = \sup_{v\in V}|\beta_v|. It follows from (4.8.54) and (4.8.55) that for each fixed x \in (0,x_1) the operators E + \tilde H(x) and E - H(x) (here E is the identity operator), acting from m to m, are linear bounded operators, and

\|H(x)\|, \ \|\tilde H(x)\| \le C\sup_n\sum_k \frac{\xi_k}{|n-k|+1} < \infty.
Theorem 4.8.4. For each fixed x \in (0,x_1), the vector \tilde\psi(x) \in m satisfies the equation

\tilde\psi(x) = (E + \tilde H(x))\psi(x)   (4.8.56)

in the Banach space m. The operator E + \tilde H(x) has a bounded inverse operator, i.e. equation (4.8.56) is uniquely solvable.

Proof. Indeed, taking our notations into account, we can rewrite (4.8.51) and (4.8.52) in the form

\tilde\psi(x) = (E + \tilde H(x))\psi(x), \qquad (E + \tilde H(x))(E - H(x)) = E.

Interchanging the roles of L and \tilde L, we obtain analogously

\psi(x) = (E - H(x))\tilde\psi(x), \qquad (E - H(x))(E + \tilde H(x)) = E.

Hence the operator (E + \tilde H(x))^{-1} exists, and it is a linear bounded operator. □
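The way equation (4.8.56) is used in practice can be sketched as follows (a schematic illustration with synthetic entries, not the book's actual data): one truncates the index set V, obtaining a finite linear system (E + \tilde H)\psi = \tilde\psi, and solves it; the decay |\tilde H_{ni,kj}| \le C\xi_k/(|n-k|+1) from (4.8.55) is what makes the truncation harmless. The matrix and right-hand side below are stand-ins with this decay pattern; in the real procedure they would be assembled from the model problem \tilde L and the given spectral data.

```python
# Schematic solve of a truncated main equation (E + H)x = b.  The entries
# of H and b are synthetic stand-ins obeying the decay |H[n][k]| <=
# C*xi_k/(|n-k|+1); they are NOT the actual spectral quantities.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense system A x = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                         # back substitution
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

N = 40                                    # truncation level of the index set V
xi = [1.0 / (k + 1) for k in range(N)]    # stand-in for the small numbers xi_k
H = [[xi[k] / (abs(n - k) + 1) * (0.3 if (n + k) % 2 else -0.2)
      for k in range(N)] for n in range(N)]
psi_tilde = [1.0] * N                     # stand-in for the known vector psi~(x)
A = [[(1.0 if n == k else 0.0) + H[n][k] for k in range(N)] for n in range(N)]
psi = solve(A, psi_tilde)
residual = max(abs(sum(A[n][k] * psi[k] for k in range(N)) - psi_tilde[n])
               for n in range(N))
print("residual:", residual)
```

Because of the decay of \xi_k the truncated operator is a small perturbation of the identity, so the finite system is well conditioned and its solution stabilizes as the truncation level N grows.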
Equation (4.8.56) is called the main equation of the inverse problem. Solving (4.8.56) we find the vector \psi(x), and consequently the functions \varphi_{ni}(x). Since the \varphi_{ni}(x) = \varphi(x,\lambda_{ni}) are solutions of (4.8.1), we can construct the function q(x) for x \in (0,x_1) and the coefficient h. Thus, the inverse problem has been solved on the interval (0,x_1). For x > x_1 we can proceed analogously, starting from (4.8.51)-(4.8.52) and using (4.8.44)-(4.8.47) instead of (4.8.40)-(4.8.43), and construct q(x) for x \in (x_1,1) and the coefficient h_1.
Remark 4.8.4. To construct q(x) for x \in (x_1,1) we can also proceed in another way. Suppose that, using the main equation of the inverse problem, we have constructed q(x) for x \in (0,x_1) and h. Consequently, the solutions \varphi(x,\lambda) and S(x,\lambda) are known for x \in [0,x_1]. By virtue of (4.8.20), the solution \Phi(x,\lambda) is known for x \in [0,x_1], too. Let \psi(x,\lambda) be the solution of (4.8.1) under the conditions \psi(1,\lambda) = 1, \psi'(1,\lambda) = -h_1. Clearly, \Phi(x,\lambda) = -\psi(x,\lambda)/\Delta(\lambda). Thus, using q(x) for x \in [0,x_1], we can construct the functions

\delta_j(\lambda) = \psi^{(j-1)}(x_1,\lambda), \quad j = 1,2.

The functions \delta_j(\lambda) are characteristic functions of the boundary value problems Q_j for equation (4.8.1) on (x_1,1) with the conditions y^{(j-1)}(x_1) = y'(1) + h_1y(1) = 0. Thus, we can reduce our problem to the inverse problem of recovering q(x), x \in (x_1,1), from the two spectra of the boundary value problems Q_j on the interval (x_1,1). In this problem the weight function R^2(x) does not change sign. We can treat this inverse problem by the same method as above; in this case the main equation will be simpler. We note that the case when the weight function does not change sign was studied in more general situations in [yur22] and other papers.
REFERENCES
[abl1] Ablowitz M.J. and Segur H., Solitons and the Inverse Scattering Transform, SIAM,
Philadelphia, 1981.
[abl2] Ablowitz M. and Segur H., The inverse scattering transform: semi-innite interval, J.
Math. Phys. 16 (1975), 1054-1056.
[abl3] Ablowitz M.J. and Clarkson P.A., Solitons, Nonlinear Evolution Equations and Inverse
Scattering, London Math. Soc. Lecture Note Series, 149, Cambridge University Press,
Cambridge, 1991.
[agr1] Agranovich Z.S. and Marchenko V.A., The Inverse Problem of Scattering Theory,
Gordon and Breach, New York, 1963.
[ahm1] Ahmad F. and Razzaghi M., A numerical solution to the Gelfand-Levitan-Marchenko
equation, Dierential equations and computational simulations, II (Mississippi State,
MS, 1995). Appl. Math. Comput. 89 (1998), no. 1-3, 31-39.
[ain1] Ainola L., An inverse problem on eigenoscillations of elastic covers, Appl. Math. Mech.
2 (1971), 358-365.
[akt1] Aktosun T., Inverse Schrodinger scattering on the line with partial knowledge of the
potential, SIAM J. Appl. Math. 56, no. 1 (1996), 219-231.
[akt2] Aktosun T., Klaus M. and van der Mee C., Recovery of discontinuities in a non-
homogeneous medium, Inverse Problems, 12 (1996), 1-25.
[akt3] Aktosun T., Klaus M. and van der Mee C., Inverse scattering in one-dimensional
nonconservative media, Integral Equations Operator Theory 30 (1998), no. 3, 279-316.
[akt4] Aktosun, Tuncay; Sacks, Paul E. Inverse problem on the line without phase informa-
tion. Inverse Problems 14 (1998), no. 2, 211-224.
[ale1] Alekseev A.A., Stability of the inverse Sturm-Liouville problem for a nite interval,
Dokl. Akad. Nauk SSSR 287 (1986), 11-13; English transl. in Soviet Math.Dokl. 33
(1986), no.2, 299-301.
[ale2] Alekseev A.A. and Lorenzi A., Sturm-Liouville inverse problems with functionally sym-
metric coecients, Boll. Unione Mat. Ital. A (7) 8 (1994), no. 1, 11-23.
[alp1] Alpay D. and Gohberg I., Inverse spectral problem for dierential operators with ra-
tional scattering matrix functions, J. Di. Equat. 118 (1995), no. 1, 1-19.
[amb1] Ambarzumian V.A.,

Uber eine Frage der Eigenwerttheorie, Zs. f. Phys. 53 (1929),
690-695.
[amo1] Amour L., Inverse spectral theory for the AKNS system with separated boundary
conditions, Inverse Problems 9 (1993), no. 5, 507-523.
289
[and1] Andersson L.-E., Inverse eigenvalue problems with discontinuous coecients, Inverse
Problems 4 (1988), no. 2, 353-397.
[and2] Andersson L.E., Inverse eigenvalue problems for a Sturm-Liouville equation in impedance
form, Inverse Problems 4 (1988), 929-971.
[And1] Anderssen R.S., The eect of discontinuities in density and shear velocity on the asymp-
totic overtone structure of tortional eigenfrequencies of the Earth, Geophys. J.R. astr.
Soc. 50 (1997), 303-309.
[ang1] Anger G., Inverse Problems in Dierential Equations, Plenum Press, New York, 1990.
[ani1] Anikonov Y.E., Multidimensional Inverse and Ill-Posed Problems for Dierential Equa-
tions, Utrecht, VSP, 1995.
[aru1] Arutyunyan T.N., Isospectral Dirac operators, Izv. Nats. Akad. Nauk Armenii Mat.
29 (1994), no. 2, 3-14; English transl. in J. Contemp. Math. Anal., Armen. Acad.
Sci. 29 (1994), no. 2, 1-10;
[atk1] Atkinson F., Discrete and Continuous Boundary Problems, Academic Press, New York,
1964.
[bae1] Baev A.V., Solution of inverse problems of dissipative scattering theory, Dokl. Akad.
Nauk SSSR, 315 (1990), no. 5, 1103-1104; English transl. in Soviet Phys. Dokl. 35
(1990), no.12, 1035-1036.
[bar1] Baranova E.A., On recovery of higher-order ordinary dierential operators from their
spectra, Dokl. Akad. Nauk SSSR, 205 (1972), 1271-1273; English transl. in Soviet
Math. Dokl. 13 (1972).
[baR1] Barcilon V., Explicit solution of the inverse problem for a vibrating string, J. Math.
Anal. Appl. 93 (1983), no. 1, 222-234.
[Bar1] Bari N.K., Biorthogonal systems and bases in a Hilbert space, Uchenye Zapiski Moskov.
Gos. Univ., Matematika, 148, 4 (1951), 69-107.
[bAr1] Barnes D.C., The inverse eigenvalue problem with nite data, SIAM J. Math. Anal.
22 (1991), no. 3, 732-753.
[bea1] Beals R., Deift P. and Tomei C., Direct and Inverse Scattering on the Line, Math.
Surveys and Monographs, v.28. Amer. Math. Soc. Providence: RI, 1988.
[bea2] Beals R., The inverse problem for ordinary dierential operators on the line, Amer. J.
Math. 107 (1985), 281-366.
[bea3] Beals R. and Coifman R.R., Scattering and inverse scattering for rst order systems,
Comm. Pure Appl. Math., 37 (1984), 39-90.
[bea4] Beals R. and Coifman R.R., Scattering and inverse scattering for rst-order systems
II, Inverse Problems 3 (1987), no. 4, 577-593.
[bea5] Beals R., Henkin G.M. and Novikova N.N., The inverse boundary problem for the
Rayleigh system, J. Math. Phys. 36 (1995), no. 12, 6688-6708.
290
[bek1] Bekhiri S.E., Kazaryan A.R. and Khachatryan I.G., On the reconstruction of a regular
two-term dierential operator of arbitrary even order from the spectrum, Erevan. Gos.
Univ. Uchen. Zap. Estestv. Nauk, 181 (1994), no. 2, 8-22.
[Bel1] Belishev M.I., An inverse spectral indenite problem for the equation y

+zp(x)y = 0
in the interval. Funkt. Anal. i Prilozhen. 21 (1987), no. 2, 68-69; English transl. in
Funct. Anal. Appl. 21 (1987), no. 2, 146-148.
[bel1] Bellmann R. and Cooke K., Dierential-dierence Equations, Academic Press, New
York, 1963.
[ber1] Berezanskii Y.M., The uniqueness theorem in the inverse spectral problem for the
Schr odinger equation, Trudy Moskov. Mat. Obshch. 7 (1958), 3-51.
[ber2] Berezanskii Y.M., Expansions in eigenfunctions of Selfadjoint Operators, Translations
of Math. Monographs, vol.17, AMS, Providence, RI, 1968; (transl. from Russian:
Naukova Dumka, Kiev, 1965.
[ber3] Berezanskii Y.M., Integration of nonlinear dierence equations by the inverse problem
method, Dokl. Akad. Nauk SSSR 281 (1985), 16-19.
[blo1] Blokh M.S., On the determination of a dierential equation from its spectral matrix-
function, Dokl. Akad. Nauk SSSR 92 (1953), 209-212.
[Bor1] Borg G., Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe, Acta Math. 78
(1946), 1-96.
[bor1] Borovikov V.A., On the Sturm-Liouville problem with non-regular asymptotics of
eigenvalues, Matem. Zametki, 50, no.6 (1991), 43-51; English transl. in Math. Notes
50 (1991), no. 5-6, 1243-1249.
[Bou1] Boumenir A., The inverse spectral problem for the generalized second-order operator,
Inverse Problems 10 (1994), no. 5, 1079-1097.
[Bou2] Boumenir A., Inverse spectral problem for the Laguerre dierential operator, J. Math.
Anal. Appl. 224 (1998), no. 2, 218-240.
[bou1] Boutet de Monvel A. and Shepelsky D., Inverse scattering problem for anisotropic
media, J. Math. Phys. 36 (1995), no. 7, 3443-3453.
[bro1] Browne P.J. and Sleeman B.D., Inverse nodal problems for Sturm-Liouville equations
with eigenparameter dependent boundary conditions, Inverse Problems 12 (1996), no.
4, 377-381.
[bro2] Browne P.J. and Sleeman B.D., A uniqueness theorem for inverse eigenparameter de-
pendent Sturm-Liouville problems, Inverse Problems 13 (1997), no. 6, 1453-1462.
[buk1] Bukhgeim A.L., Introduction to the Inverse Problem Theory, Nauka, Novosibirsk, 1988.
[car1] Carlson R., Inverse spectral theory for some singular Sturm-Liouville problems, J. Diff.
Equations 106 (1993), 121-140.
[car2] Carlson R., A Borg-Levinson theorem for Bessel operators, Pacific J. Math. 177 (1997),
no. 1, 1-26.
[cau1] Caudill L.F., Perry P.A. and Schueller A.W., Isospectral sets for fourth-order ordinary
differential operators, SIAM J. Math. Anal. 29 (1998), no. 4, 935-966.
[Cau1] Caudrey P., The inverse problem for the third-order equation, Phys. Lett. 79A (1980),
264-268.
[cha1] Chadan K. and Sabatier P.C., Inverse Problems in Quantum Scattering Theory, 2nd
ed., Texts and Monographs in Physics. Springer-Verlag, New York-Berlin, 1989.
[cha2] Chadan K. and Musette M., Inverse problems in the coupling constant for the Schrödinger
equation, Inverse Problems 5 (1989), no. 3, 257-268.
[cha3] Chadan K., Colton D., Paivarinta L. and Rundell W., An Introduction to Inverse Scat-
tering and Inverse Spectral Problems. SIAM Monographs on Mathematical Modeling
and Computation. Society for Industrial and Applied Mathematics, Philadelphia, PA,
1997.
[Cha1] Chakravarty N.K., A necessary and sufficient condition for the existence of the spectral
matrix of a differential system, Indian J. Pure Appl. Math. 25 (1994), no. 4, 365-380.
[Che1] Chern H. and Shen C.-L., On the n-dimensional Ambarzumyan's theorem, Inverse
Problems 13 (1997), no. 1, 15-18.
[Chu1] Chugunova M.V., An inverse boundary value problem on a nite interval, Funct. Anal.,
35, 113-122, Ulyanovsk. Gos. Ped. Univ., Ulyanovsk, 1994.
[cla1] Clancey K. and Gohberg I., Factorization of Matrix Functions and Singular Integral
Operators, Birkhauser, Basel, 1981.
[cod1] Coddington E. and Levinson N., Theory of Ordinary Differential Equations, McGraw-
Hill Book Company, New York, 1955.
[col1] Coleman C.F. and McLaughlin J.R., Solution of the inverse spectral problem for an
impedance with integrable derivative I, II, Comm. Pure Appl. Math. 46 (1993), no.
2, 145-184, 185-212.
[Con1] Constantin A., On the inverse spectral problem for the Camassa-Holm equation, J.
Funct. Anal. 155 (1998), no. 2, 352-363.
[con1] Conway J.B., Functions of One Complex Variable, 2nd ed., vol.I, Springer-Verlag, New
York, 1995.
[cox1] Cox S. and Knobel R., An inverse spectral problem for a nonnormal first order differ-
ential operator, Integral Equations Operator Theory 25 (1996), no. 2, 147-162.
[dah1] Dahlberg B. and Trubowitz E., The inverse Sturm-Liouville problem III, Comm. Pure
Appl. Math. 37 (1984), 255-267.
[dar1] Darwish A.A., The inverse scattering problem for a singular boundary value problem,
New Zealand J. Math. 22 (1993), 37-56.
[deg1] Degasperis A. and Shabat A., Construction of reflectionless potentials with infinite
discrete spectrum, Teoret. i Matem. Fizika, 100, no. 2 (1994), 230-247; English
transl.: Theoretical and Mathem. Physics, 100, no. 2 (1994), 970-984.
[dei1] Deift P. and Trubowitz E., Inverse scattering on the line, Comm. Pure Appl. Math.
32 (1979), 121-251.
[dei2] Deift P., Tomei C. and Trubowitz E., Inverse scattering and the Boussinesq equation,
Comm. Pure Appl. Math. 35 (1982), 567-628.
[dei3] Deift P. and Zhou X., Direct and inverse scattering on the line with arbitrary singu-
larities, Comm. Pure Appl. Math. 44 (1991), no.5, 485-533.
[den1] Denisov A.M., The uniqueness of the solution of some inverse problems (Russian), Zh.
Vych. Mat. Mat. Fiz., 22, no.4 (1982), 858-864.
[dip1] DiPrima R.C. and Habetler G.J., A completeness theorem for non-selfadjoint eigen-
value problems in hydrodynamic stability, Arch. Rational Mech. Anal. 34 (1969),
218-227.
[dor1] Dorren H.J.S., Muyzert E.J. and Snieder R.K., The stability of one-dimensional inverse
scattering, Inverse Problems 10, no. 4, (1994), 865-880.
[dub1] Dubrovskii V.V. and Sadovnichii V.A., Some properties of operators with a discrete
spectrum, Diff. Uravneniya 15 (1979), no.7, 1206-1211; English transl.: Diff. Equations
15 (1979), no. 7, 854-858.
[ebe1] Eberhard W., Freiling G. and Schneider A., On the distribution of the eigenvalues of a
class of indefinite eigenvalue problems, J. Diff. Integral Equations 3 (1990), 1167-1179.
[ebe2] Eberhard W. and Freiling G., The distribution of the eigenvalues for second-order
eigenvalue problems in the presence of an arbitrary number of turning points, Results
in Mathematics, 21 (1992), 24-41.
[ebe3] Eberhard W. and Freiling G., An expansion theorem for eigenvalue problems with
several turning points, Analysis, 13 (1993), 301-308.
[ebe4] Eberhard W., Freiling G. and Schneider A., Connection formulas for second-order
differential equations with a complex parameter and having an arbitrary number of
turning points, Math. Nachr., 165 (1994), 205-229.
[ebe5] Eberhard W. and Freiling G., Stone-regulare Eigenwertprobleme, Math. Z. 160 (1978),
139-161.
[elc1] Elcrat A. and Papanicolaou V.G., On the inverse problem of a fourth-order selfadjoint
binomial operator, SIAM J. Math. Anal. 28, no. 4 (1997), 886-896.
[elr1] El-Reheem Z.F.A., On the scattering problem for the Sturm-Liouville equation on the
half-line with sign-valued weight coefficient, Applicable Analysis, 57 (1995), 333-339.
[ere1] Eremin M.S., An inverse problem for a second-order integro-differential equation with
a singularity, Differen. Uravneniya, 24 (1988), 350-351.
[fab1] Fabiano R.H., Knobel R. and Lowe B.D., A finite-difference algorithm for an inverse
Sturm-Liouville problem, IMA J. Numer. Anal. 15, no. 1 (1995), 75-88.
[fad1] Faddeev L.D., On a connection of the S-matrix with the potential for the one-dimensional
Schrödinger operator, Dokl. Akad. Nauk SSSR 121 (1958), 63-66.
[fad2] Faddeev L.D., Properties of the S-matrix of the one-dimensional Schrodinger equa-
tion, Trudy Mat.Inst.Steklov, 73 (1964) 314-336; English transl. in Amer. Math. Soc.
Transl. (2) 65 (1967), 139-166.
[fag1] Fage M.K., Solution of a certain Cauchy problem via enlargement of the number of indepen-
dent variables, Mat. Sb. 46(88) (1958), 261-290.
[fag2] Fage M.K., Integral representations of operator-valued analytic functions of one inde-
pendent variable, Trudy Moskov. Mat. Obshch. 8 (1959), 3-48.
[FY1] Freiling G. and Yurko V.A., Inverse problems for differential equations with turning
points, Inverse Problems, 13 (1997), 1247-1263.
[FY2] Freiling G. and Yurko V.A., On constructing differential equations with singularities
from incomplete spectral information, Inverse Problems, 14 (1998), 1131-1150.
[FY3] Freiling G. and Yurko V.A., Inverse spectral problems for differential equations on the
half-line with turning points, J. Diff. Equations, 153 (1999).
[gak1] Gakhov F.D., Boundary Value Problems, 3rd ed., Nauka, Moscow, 1977; English transl.
of 2nd ed., Pergamon Press, Oxford and Addison-Wesley, Reading, Mass., 1966.
[gar1] Gardner G., Green J., Kruskal M. and Miura R., A method for solving the Korteweg-de
Vries equation, Phys. Rev. Letters, 19 (1967), 1095-1098.
[gas1] Gasymov M.G. and Levitan B.M., Determination of a differential equation by two of
its spectra, Usp. Mat. Nauk 19, no. 2 (1964), 3-63; English transl. in Russian Math.
Surveys 19 (1964), 1-64.
[gas2] Gasymov M.G., Determination of Sturm-Liouville equation with a singular point from
two spectra, Dokl. Akad. Nauk SSSR, 161 (1965), 274-276; English transl. in Sov.
Math., Dokl. 6 (1965), 396-399.
[gas3] Gasymov M.G. and Levitan B.M., The inverse problem for a Dirac system, Dokl. Akad.
Nauk SSSR, 167 (1966), 967-970.
[gas4] Gasymov M.G., The inverse scattering problem for a system of Dirac equations of order
2n, Trudy Moskov. Mat. Obshch. 19 (1968), 41-119; English transl. in Trans. Mosc.
Math. Soc. 19 (1968).
[gas5] Gasymov M.G. and Gusejnov G.S., Determination of diffusion operators according to
spectral data, Dokl. Akad. Nauk Az. SSR 37, no. 2, (1981) 19-23.
[gel1] Gelfand I.M. and Levitan B.M., On the determination of a differential equation from
its spectral function, Izv. Akad. Nauk SSSR, Ser. Mat. 15 (1951), 309-360; English
transl. in Amer. Math. Soc. Transl. (2) 1 (1955).
[ges1] Gesztesy F. and Simon B., Uniqueness theorems in inverse spectral theory for one-
dimensional Schrödinger operators, Trans. Amer. Math. Soc. 348 (1996), no. 1,
349-373.
[ges2] Gesztesy F. and Simon B., Inverse spectral analysis with partial information on the
potential. The case of an a.c. component in the spectrum. Helv. Phys. Acta 70
(1997), no.1-2, 66-71.
[gla1] Gladwell G.M.L., Inverse Problems in Scattering, An introduction. Solid Mechanics
and its Applications 23, Kluwer Acad. Publ., Dordrecht, 1993.
[goh1] Gohberg I.C. and Krein M.G., Introduction to the Theory of Linear Nonselfadjoint
Operators, Nauka, Moscow, 1965; English transl.: Transl. Math. Monographs, vol.18,
Amer. Math. Soc., Providence, RI, 1969.
[goh2] Gohberg I.C., Kaashoek, M.A. and Sakhnovich A.L., Pseudo-canonical systems with
rational Weyl functions: explicit formulas and applications, J. Diff. Equat. 146 (1998),
no. 2, 375-398.
[gol1] Goldenveizer A.L., Lidsky V.B. and Tovstik P.E., Free Vibration of Thin Elastic Shells,
Nauka, Moscow, 1979.
[gon1] Gonchar A.A., Novikova N.N. and Khenkin G.M., Multipoint Padé approximants in
an inverse Sturm-Liouville problem, Mat. Sb. 182 (1991), no. 8, 1118-1128; English
transl. in Math. USSR, Sb. 73 (1992), no. 2, 479-489.
[gre1] Grebert B. and Weder R., Reconstruction of a potential on the line that is a priori
known on the half-line, SIAM J. Appl. Math. 55, no. 1 (1995), 242-254.
[gri1] Grinberg N.I., The one-dimensional inverse scattering problem for the wave equation,
Mat. Sb. 181 (1990), no. 8, 1114-1129; English transl. in Math. USSR, Sb. 70 (1991),
no. 2, 557-572.
[gus1] Guseinov G.S., The determination of the infinite nonselfadjoint Jacobi matrix from its
generalized spectral function, Mat. Zametki, 23 (1978), 237-248.
[gus2] Guseinov G.S. and Tuncay H., On the inverse scattering problem for a discrete one-
dimensional Schrödinger equation, Comm. Fac. Sci. Univ. Ankara Ser. A1 Math.
Statist. 44 (1995), no. 1-2, 95-102 (1996).
[Gus1] Guseinov I.M. and Nabiev I.M., Solution of a class of inverse Sturm-Liouville boundary
value problems, Mat. Sbornik, 186 (1995), no. 5, 35-48; English transl. in Sb. Math.
186 (1995), no. 5, 661-674.
[hal1] Hald O.H., Discontinuous inverse eigenvalue problems, Comm. Pure Appl. Math. 37
(1984), 539-577.
[hal2] Hald O.H. and McLaughlin J.R., Solutions of inverse nodal problems, Inverse Problems,
5 (1989), 307-347.
[hal3] Hald O.H. and McLaughlin J.R., Inverse problems: recovery of BV coecients from
nodes, Inverse Problems, 14 (1998), no. 2, 245-273.
[hel1] Hellwig G., Partial Differential Equations, Teubner, Stuttgart, 1977.
[Hen1] Henrici P., Applied and Computational Complex Analysis, vol.2, John Wiley and Sons,
New York, 1977.
[hin1] Hinton D.B., Jordan A.K., Klaus M. and Shaw J.K., Inverse scattering on the line for
a Dirac system, J. Math. Phys. 32 (1991), no. 11, 3015-3030.
[hoc1] Hochstadt H., The inverse Sturm-Liouville problem, Comm. Pure Appl. Math. 26
(1973), 715-729.
[hoc2] Hochstadt H., On the well-posedness of the inverse Sturm-Liouville problems, J. Diff.
Eq. 23 (1977), no. 3, 402-413.
[hoe1] Hoenders B.J., On the solution of the phase retrieval problem, J. Math. Phys. 16, no.9
(1975), 1719-1725.
[hur1] Hurt N.E., Phase Retrieval and Zero Crossing, Kluwer, Dordrecht, 1989.
[isa1] Isaacson E.L. and Trubowitz E., The inverse Sturm-Liouville problem I, Comm. Pure
Appl. Math. 36 (1983), 767-783.
[isa2] Isaacson E.L., McKean H.P. and Trubowitz E., The inverse Sturm-Liouville problem
II, Comm. Pure Appl. Math. 37 (1984), 1-11.
[Isa1] Isakov V., Inverse Problems for Partial Differential Equations, Springer-Verlag, New
York, 1998.
[jer1] Jerri A.J., Introduction to Integral Equations with Applications, Dekker, New York,
1985.
[kab1] Kabanikhin S.I. and Bakanov G.B., A discrete analogue of the Gelfand-Levitan method.
(Russian) Doklady Akad. Nauk 356 (1997), no.2, 157-160.
[kau1] Kaup D., On the inverse scattering problem for cubic eigenvalue problems, Stud. Appl.
Math. 62 (1980), 189-216.
[kay1] Kay J. and Moses H., The determination of the scattering potential from the spectral
measure function I, II, III., Nuovo Cimento 2 (1955), 917-961 and 3 (1956), 56-84,
276-304.
[kaz1] Kazarian A.R. and Khachatryan I.G., The inverse scattering problem for higher-order
operators with summable coefficients, I, Izvestija Akad. Nauk Armenii, Mat., 29, no.
5 (1994), 50-75; English transl. in J. Contemporary Math. Anal. Armenian Acad.
Sci., 29, no. 5 (1994), 42-64.
[kaz2] Kazarian A.R. and Khachatryan I.G., The inverse scattering problem for higher-order
operators with summable coefficients, II, Izvestija Akad. Nauk Armenii, Mat., 30, no.
1 (1995), 39-65; English transl. in J. Contemporary Math. Anal. Armenian Acad.
Sci., 30, no. 1 (1995), 33-55.
[kha1] Khachatryan I.G., The recovery of a differential equation from its spectrum, Funkt.
Anal. i Prilozhen. 10 (1976), no. 1, 93-94.
[kha2] Khachatryan I.G., On transformation operators for higher-order differential equations,
Izv. Akad. Nauk Armyan. SSR, Ser. Mat. 13 (1978), no. 3, 215-237.
[kha3] Khachatryan I.G., On some inverse problems for higher-order differential operators on
the half-line, Funkt. Anal. i Prilozhen. 17 (1983), no. 1, 40-52; English transl. in
Func. Anal. Appl. 17 (1983).
[kha4] Khachatryan I.G., Necessary and sufficient conditions of solvability of the inverse scat-
tering problem for higher-order differential equations on the half-line, Dokl. Akad.
Nauk Armyan. SSR 77 (1983), no. 2, 55-58.
[kHa1] Khanh B.D., A numerical resolution of the Gelfand-Levitan equation, J. Comput.
Appl. Math. 72, no. 2 (1996), 235-244.
[Kha1] Khasanov A.B., The inverse problem of scattering theory on the half-axis for a sys-
tem of difference equations, Boundary value problems for nonclassical equations of
mathematical physics, 266-295, 423, Fan, Tashkent, 1986.
[Khr1] Khruslov E.Y. and Shepelsky D.G., Inverse scattering method in electromagnetic
sounding theory. Inverse Problems, 10 (1994), no.1, 1-37.
[kir1] Kirsch A., An Introduction to the Mathematical Theory of Inverse Problems, Applied
Mathematical Sciences, 120, Springer-Verlag, Berlin, 1996.
[kli1] Klibanov M.V. and Sacks P.E., Use of partial knowledge of the potential in the phase
problem of inverse scattering, J. Comput. Phys. 112 (1994), 273-281.
[kno1] Knobel R. and Lowe B.D., An inverse Sturm-Liouville problem for an impedance, Z.
Angew. Math. Phys. 44 (1993), no. 3, 433-450.
[kob1] Kobayashi M., A uniqueness proof for discontinuous inverse Sturm-Liouville problems
with symmetric potentials, Inverse Problems, 5 (1989), no. 5, 767-781.
[kob2] Kobayashi M., An algorithm for discontinuous inverse Sturm-Liouville problems with
symmetric potentials, Comput. Math. Appl. 18 (1989), no. 4, 349-356.
[kra1] Kravchenko K.V., Inverse spectral problems for differential operators with nonlocal
boundary conditions, Ph.D. Thesis, Saratov Univ., Saratov, 1998, 101pp.
[kre1] Krein M.G., Solution of the inverse Sturm-Liouville problem, Dokl. Akad. Nauk SSSR,
76 (1951), 21-24.
[kre2] Krein M.G., On a method of eective solution of the inverse problem, Dokl. Akad.
Nauk SSSR, 94 (1954), 987-990.
[Kre1] Kreyszig E., Introductory Functional Analysis with Applications, John Wiley and Sons,
New York, 1978.
[kru1] Krueger R.J., Inverse problems for nonabsorbing media with discontinuous material
properties, J. Math. Phys. 23 (1982), no. 3, 396-404.
[kud1] Kudishin P.M., Recovery of differential operators with a singular point, Proc. Conf.
dedicated to 90th Anniversary of L.S.Pontryagin, MSU, Moscow, 1998, p.66.
[kur1] Kurasov P., Scattering matrices with finite phase shift and the inverse scattering prob-
lem, Inverse Problems, 12 (1996), no. 3, 295-307.
[lap1] Lapwood F.R. and Usami T., Free Oscillations of the Earth, Cambridge University
Press, Cambridge, 1981.
[lav1] Lavrentjev M.M., Reznitskaya K. and Yakhno V., One-Dimensional Inverse Problems
of Mathematical Physics, Nauka: Sibirsk. Otdel., Novosibirsk, 1982; English transl.:
AMS Translations, Series 2, 130. American Mathematical Society, Providence, R.I.,
1986.
[lax1] Lax P., Integrals of nonlinear equations of evolution and solitary waves, Comm. Pure
Appl. Math. 21 (1968), 467-490.
[lee1] Lee J.-H., On the dissipative evolution equations associated with the Zakharov-Shabat
system with a quadratic spectral parameter, Trans. Amer. Math. Soc. 316 (1989),
no. 1, 327-336.
[lei1] Leibenson Z.L., The inverse problem of spectral analysis for higher-order ordinary
differential operators, Trudy Moskov. Mat. Obshch. 15 (1966), 70-144; English transl.
in Trans. Moscow Math. Soc. 15 (1966).
[lei2] Leibenson Z.L., Spectral expansions of transformations of systems of boundary value
problems, Trudy Moskov. Mat. Obshch. 25 (1971), 15-58; English transl. in Trans.
Moscow Math. Soc. 25 (1971).
[leo1] Leontjev A.F., A growth estimate for the solution of a differential equation for large
parameter values and its application to some problems of function theory, Sibirsk. Mat.
Zh. 1 (1960), 456-487.
[Lev1] Levinson N., The inverse Sturm-Liouville problem, Math. Tidsskr. 13 (1949), 25-30.
[lev1] Levitan B.M., Theory of Generalized Translation Operators, 2nd ed., Nauka, Moscow,
1973; English transl. of 1st ed.: Israel Program for Scientific Translations, Jerusalem,
1964.
[lev2] Levitan B.M., Inverse Sturm-Liouville Problems, Nauka, Moscow, 1984; English transl.,
VNU Sci.Press, Utrecht, 1987.
[lev3] Levitan B.M. and Sargsjan I.S., Introduction to Spectral Theory, Nauka, Moscow,
1970; English transl.: AMS Transl. of Math. Monographs. 39, Providence, RI, 1975.
[lev4] Levitan B.M. and Sargsjan I.S., Sturm-Liouville and Dirac Operators, Nauka, Moscow,
1988; English transl.: Kluwer Academic Publishers, Dordrecht, 1991.
[li1] Li S.Y., The eigenvalue problem and its inverse spectrum problem for a class of differ-
ential operators, Acta Math. Sci. (Chinese) 16 (1996), no.4, 391-403.
[lit1] Litvinenko O.N. and Soshnikov V.I., The Theory of Heterogeneous Lines and their
Applications in Radio Engineering, Moscow: Radio, 1964.
[low1] Lowe B.D., Pilant M. and Rundell W., The recovery of potentials from finite spectral
data, SIAM J. Math. Anal. 23 (1992), no. 2, 482-504.
[mal1] Malamud M.M., Similarity of Volterra operators and related problems in the theory
of differential equations of fractional orders, Trudy Moskov. Mat. Obshch. 55 (1994),
73-148; English transl. in Trans. Moscow Math. Soc. 1994, 57-122 (1995).
[mal2] Malamud M.M., A connection between the potential matrix of the Dirac system and
its Wronskian, Dokl. Akad. Nauk, 344, no. 5 (1995), 601-604; English transl. in Dokl.
Math. 52, no. 2 (1995), 296-299.
[mar1] Marchenko V.A., Sturm-Liouville Operators and their Applications, Naukova Dumka,
Kiev, 1977; English transl., Birkhauser, 1986.
[mar2] Marchenko V.A., Spectral Theory of Sturm-Liouville Operators, Naukova Dumka,
Kiev, 1972.
[mar3] Marchenko V.A., Some problems in the theory of a second-order differential operator,
Dokl. Akad. Nauk SSSR, 72 (1950), 457-460.
[mar4] Marchenko V.A., Some problems in the theory of linear differential operators, Trudy
Moskov. Mat. Obshch. 1 (1952), 327-420.
[mar5] Marchenko V.A., Expansion in eigenfunctions of nonselfadjoint singular second-order
differential operators, Mat. Sb. 52(94) (1960), 739-788.
[mar6] Marchenko V.A. and Maslov K.V., Stability of recovering the Sturm-Liouville operator
from the spectral function, Mat. Sb. 81 (123) (1970), 525-551.
[mar7] Marchenko V.A. and Ostrovskii I.V., A characterization of the spectrum of the Hill
operator, Mat. Sb. 97 (1975), 540-606; English transl. in Math. USSR-Sb. 26 (1975),
4, 493-554.
[mat1] Matsaev V.I., The existence of operator transformations for higher-order differential
operators, Dokl. Akad. Nauk SSSR 130 (1960), 499-502; English transl. in Soviet
Math. Dokl. 1 (1960), 68-71.
[mch1] McHugh J., An historical survey of ordinary linear dierential equations with a large
parameter and turning points, Arch. Hist. Exact. Sci., 7 (1970), 277-324.
[mcl1] McLaughlin J.R., An inverse eigenvalue problem of order four, SIAM J. Math. Anal.
7 (1976), no. 5, 646-661.
[mcl2] McLaughlin J.R., Analytical methods for recovering coefficients in differential equa-
tions from spectral data, SIAM Rev. 28 (1986), 53-72.
[mcl3] McLaughlin J.R., On uniqueness theorems for second order inverse eigenvalue prob-
lems, J. Math. Anal. Appl. 118 (1986), no. 1, 38-41.
[mcl4] McLaughlin J.R., Inverse spectral theory using nodal points as data - a uniqueness result,
J. Diff. Eq. 73 (1988), 354-362.
[mes1] Meschanov V.P. and Feldstein A.L., Automatic Design of Directional Couplers, Moscow:
Sviaz, 1980.
[miz1] Mizutani A., On the inverse Sturm-Liouville problem, J. Fac. Sci. Univ. Tokyo, Sect. IA,
Math. 31 (1984), 319-350.
[mos1] Moses H., A solution of the Korteweg-de Vries equation in a half-space bounded by a
wall, J. Math. Phys. 17 (1976), 73-75.
[mue1] Mueller J.L. and Shores T.S., Uniqueness and numerical recovery of a potential on the
real line, Inverse Problems, 13 (1997), no. 3, 781-800.
[nai1] Naimark M.A., Linear Differential Operators, 2nd ed., Nauka, Moscow, 1969; English
transl. of 1st ed., Parts I,II, Ungar, New York, 1967, 1968.
[neh1] Neher M., Enclosing solutions of an inverse Sturm-Liouville problem with finite data,
Computing 53, no. 3-4 (1994), 379-395.
[new1] Newton R.G., Inverse Schrodinger scattering in three dimensions, Texts and Mono-
graphs in Physics. Springer-Verlag, 1989.
[nik1] Nikishin E.M., The discrete Sturm-Liouville operator and some problems of the theory
of functions, Trudy Seminara I.G. Petrovskogo, Moscow 10 (1984), 3-77; English transl.
in J. Sov. Math. 35 (1986), 2679-2744.
[niz1] Nizhnik L.P., Inverse Scattering Problems for Hyperbolic Equations, Naukova Dumka,
Kiev, 1991.
[pai1] Paine J., A numerical method for the inverse Sturm-Liouville problem, SIAM J. Sci.
Statist. Comput. 5 (1984), no. 1, 149-156.
[pal1] Paladhi B.R., The inverse problem associated with a pair of second-order differential
equations, Proc. London Math. Soc. (3) 43 (1981), no. 1, 169-192.
[pan1] Panakhov E.S., Inverse problem in two spectra for a differential operator with a sin-
gularity at zero, Akad. Nauk Azerb. SSR Dokl. 36 (1980), no. 10, 6-10.
[pap1] Papanicolaou V.G. and Kravvaritis D., An inverse spectral problem for the Euler-
Bernoulli equation for the vibrating beam, Inverse Problems, 13 (1997), no. 4, 1083-
1092.
[pik1] Pikula M., Determination of a Sturm-Liouville type dierential operator with retarded
argument from two spectra, Mat. Vestnik, 43 (1991), no. 3-4, 159-171.
[pik2] Pikula M., On the determination of a dierential equation with variable delay, Math.
Montisnigri 6 (1996), 71-91.
[pla1] Plaksina O.A., Inverse problems of spectral analysis for the Sturm-Liouville operators
with nonseparated boundary conditions, Mat. Sb. 131 (1986), 3-26; English transl.,
Math. USSR-Sb. 59 (1988), 1, 1-23.
[pod1] Podolskii V.E., Reconstruction of the Sturm-Liouville operator from its weighted zeta-
function, Dokl. Akad. Nauk SSSR, 313 (1990), no. 3, 559-562; English transl. in Soviet
Math. Dokl. 42 (1991), no. 1, 53-56.
[pos1] Pöschel J. and Trubowitz E., Inverse Spectral Theory, Academic Press, New York,
1987.
[pov1] Povzner A.Y., On dierential equations of Sturm-Liouville type on a half-axis, Mat.
Sb. 23 (65) (1948), 3-52; English transl. in Amer. Math. Soc. Transl. (1) 4 (1962),
24-101.
[Pri1] Prilepko A.I. and Kostin A.B., Some inverse problems for parabolic equations with
final and integral observation. (Russian) Matemat. Sbornik 183 (1992), no. 4, 49-68;
translation in Russian Acad. Sci. Sb. Math. 75 (1993), no. 2, 473-490.
[Pri2] Prilepko A.I. and Kostin A.B., On some problems of the reconstruction of a boundary
condition for a parabolic equation. I,II. (Russian) Differ. Uravnenija. 32 (1996), no.
1, 107-116; no. 11, 1519-1528; translation in Differential Equations 32 (1996), no. 1,
113-122; no. 11, 1515-1525.
[pri1] Privalov I.I., Introduction to the Theory of Functions of a Complex Variable, 11th ed.,
Nauka, Moscow, 1967; German trans. of 9th ed., Vols I,II,III, Teubner, Leipzig, 1958,
1959.
[pro1] Provotorov V.V., On the solution of an inverse problem for the Sturm-Liouville op-
erator with boundary conditions inside the interval, Functional-differential equations,
132-137, Perm. Politekh. Inst., Perm, 1989.
[ram1] Ramm A.G., Multidimensional Inverse Scattering Problems. Pitman Monographs and
Surveys in Pure and Applied Mathematics, 51. Longman Scientific and Technical,
Harlow; co-published in the United States with John Wiley and Sons, Inc., New York,
1992.
[rei1] Reid W.T., Riccati Differential Equations, Academic Press, New York, London, 1972.
[rja1] Rjabushko T.I., Stability of recovering the Sturm-Liouville operator from two spectra,
Theory of Functions, Funct. Analysis and their Applications, 16, 186-198, Kharkov,
1972.
[rof1] Rofe-Beketov F.S., The spectral matrix and the inverse Sturm-Liouville problem, The-
ory of Functions, Funct. Analysis and their Applications, 4, 189-197, Kharkov, 1967.
[rom1] Romanov V.G., Inverse Problems in Mathematical Physics, Nauka, Moscow, 1984;
English transl.: VNU Science Press, Utrecht, 1987.
[rom2] Romanov V.G. and Kabanikhin S.I., Inverse Problems for Maxwell's Equations, Inverse
and Ill-posed Problems Series, VSP, Utrecht, iii, 249 p.
[run1] Rundell W. and Sacks P.E., On the determination of potentials without bound state
data, J. Comput. Appl. Math. 55 (1994), no. 3, 325-347.
[sac1] Sacks P.E., Recovery of singularities from amplitude information, J. Math. Phys. 38
(1997), no. 7, 3497-3507.
[sad1] Sadovnichii V.A., On some formulations of inverse spectral problems, Uspekhi Mat.
Nauk 38 (1983), no. 5, p.132.
[sak1] Sakhnovich L.A., The inverse problem for differential operators of order n > 2 with
analytic coecients, Mat. Sb. 46 (88) (1958), 61-76.
[sak2] Sakhnovich L.A., The transformation operator method for higher-order equations, Mat.
Sb. 55 (97) (1961), 347-360.
[sak3] Sakhnovich L.A., On an inverse problem for fourth-order equations, Mat. Sb. 56 (98)
(1962), 137-146.
[sak4] Sakhnovich L.A., Evolution of spectral data and nonlinear equations, Ukrain. Math.
Zh. 40 (1988), 533-535.
[sak5] Sakhnovich L.A., Inverse problems for equations systems, Matrix and operator valued
functions, 202-211, Oper. Theory Adv. Appl., 72, Birkhauser, Basel, 1994.
[sAk1] Sakhnovich A., Canonical systems and transfer matrix-functions, Proc. Amer. Math.
Soc. 125 (1997), no. 5, 1451-1455.
[sha1] Shabat A.B., Inverse scattering problem for a system of differential equations, Funkt.
Anal. Prilozh. 9 (1975), 75-78; English transl. in Func. Anal. Appl. 9 (1975), 244-247.
[sha2] Shabat A.B., An inverse scattering problem, Differ. Uravneniya 15 (1979), no. 10,
1824-1834; English transl. in Differ. Equations 15 (1979), no. 10, 1299-1307.
[She1] Shen C.L. and Tsai T.M., On a uniform approximation of the density function of
a string equation using eigenvalues and nodal points and some related inverse nodal
problems, Inverse Problems, 11 (1995), no. 5, 1113-1123.
[she1] Shepelsky D.G., The inverse problem of reconstruction of the medium's conductivity in
a class of discontinuous and increasing functions, Spectral operator theory and related
topics, 209-232, Advances in Soviet Math., 19, Amer. Math. Soc., Providence, RI,
1994.
[shk1] Shkalikov A.A. and Tretter C., Spectral analysis for linear pencils of ordinary differ-
ential operators, Math. Nachr. 179 (1996), 275-305.
[shu1] Shubin C., An inverse problem for the Schrodinger equation with a radial potential, J.
Diff. Equations 103 (1993), no. 2, 247-259.
[sjo1] Sjoeberg A., On the Korteweg-de Vries equation: existence and uniqueness, J. Math.
Anal. Appl. 29 (1970), 569-579.
[sta1] Stashevskaya V.V., On inverse problems of spectral analysis for a certain class of
differential equations, Dokl. Akad. Nauk SSSR, 93 (1953), 409-412.
[suk1] Sukhanov V.V., The inverse problem for a selfadjoint dierential operator on the line,
Mat. Sb. 137 (179) (1988), 242-259; English transl. in Math. USSR Sb. 65 (1990).
[tak1] Takhtadjan L.A. and Faddeev L.D., Hamiltonian Methods in the Theory of Soli-
tons, Nauka, Moscow, 1986; English transl.: Springer Series in Soviet Mathematics.
Springer-Verlag, Berlin-New York, 1987.
[Tik1] Tikhonov A.N., On uniqueness of the solution of an electroreconnaissance problem,
Dokl. Akad. Nauk SSSR, 69 (1949), 797-800.
[tik1] Tikhonravov A.V., The accuracy obtainable in principle when solving synthesis prob-
lems, Zh. Vychisl. Mat. i Mat. Fiz., 22, no. 6 (1982), 1421-1433; English transl.:
USSR Comput. Maths. Math. Phys., 22, no. 6 (1982), 143-157.
[tik2] Tikhonravov A.V., The synthesis of laminar media with specified amplitude and phase
properties, Zh. Vychisl. Mat. i Mat. Fiz. 25, no.11 (1985), 1674-1688 (Russian);
English transl.: USSR Comput. Maths. Math. Phys., 25, no.11 (1985), 55-64.
[tik3] Tikhonravov A.V., Amplitude-phase properties of the spectral coecients of laminar
media, Zh. Vychisl. Mat. i Mat. Fiz., 25, no.2 (1985), 442-450 (Russian); English
transl. in USSR Comput. Maths. Math. Phys., 25, no.2 (1985), 77-83.
[tre1] Tretter C., On λ-Nonlinear Boundary Eigenvalue Problems, Mathematical Research,
vol.71, Berlin: Akad. Verlag, 1993.
[was1] Wasow W., Linear Turning Point Theory, Springer, Berlin, 1985.
[win1] Winkler H., The inverse spectral problem for canonical systems, Integral Equations
Operator Theory, 22 (1995), no. 3, 360-374.
[yak1] Yakubov V.Y., Reconstruction of the Sturm-Liouville equation with a summable weight,
Uspekhi Mat. Nauk, 51 (1996), no. 4 (310), 175-176; English transl. in Russ. Math.
Surv. 51 (1996), no. 4, 758-759.
[yam1] Yamamoto M., Inverse spectral problem for systems of ordinary differential equations
of rst order I, J. Fac. Sci. Univ. Tokyo Sect. IA Math. 35 (1988), no. 3, 519-546.
[yam2] Yamamoto M., Inverse eigenvalue problem for a vibration of a string with viscous drag.
J. Math. Anal. Appl., 152 (1990), no.1, 20-34.
[yan1] Yang X.-F., A solution of the inverse nodal problem, Inverse Problems, 13 (1997),
203-213.
[you1] Young R.M., An Introduction to Nonharmonic Fourier Series, Academic Press, New
York, 1980.
[yur1] Yurko V.A., Inverse Spectral Problems for Linear Differential Operators and their
Applications, Gordon and Breach, New York, 2000.
[yur2] Yurko V.A., An inverse problem for second-order differential operators with regular
boundary conditions, Matematicheskie Zametki, 18 (1975), no. 4, 569-576; English
transl. in Math. Notes, 18 (1975), no. 3-4, 928-932.
[yur3] Yurko V.A., On stability of recovering the Sturm-Liouville operators, Differ. Equations
and Theory of Functions, Saratov Univ., Saratov 3 (1980), 113-124.
[yur4] Yurko V.A., Reconstruction of fourth-order differential operators, Differ. Uravneniya,
19 (1983), no. 11, 2016-2017.
[yur5] Yurko V.A., Boundary value problems with a parameter in the boundary conditions,
Izv. Akad. Nauk Armyan. SSR, Ser. Mat., 19 (1984), no. 5, 398-409; English transl.
in Soviet J. Contemporary Math. Anal., 19 (1984), no. 5, 62-73.
[yur6] Yurko V.A., An inverse problem for integrodifferential operators of the first order,
Funct. Anal., 22, 144-151, Ulyanovsk. Gos. Ped. Univ., Ulyanovsk, 1984.
[yur7] Yurko V.A., An inverse problem for integral operators, Matematicheskie Zametki, 37
(1985), no. 5, 690-701; English transl. in Math. Notes, 37 (1985), no. 5-6, 378-385.
[yur8] Yurko V.A., Uniqueness of the reconstruction of binomial differential operators from
two spectra, Matematicheskie Zametki, 43 (1988), no. 3, 356-364; English transl. in
Math. Notes, 43 (1988), no. 3-4, 205-210.
[yur9] Yurko V.A., Reconstruction of higher-order differential operators, Differ. Uravneniya,
25 (1989), no. 9, 1540-1550; English transl. in Differential Equations, 25 (1989), no.
9, 1082-1091.
[yur10] Yurko V.A., Inverse Problem for Differential Operators, Saratov University, Saratov,
1989.
[yur11] Yurko V.A., Recovery of differential operators from the Weyl matrix, Doklady Akademii
Nauk SSSR, 313 (1990), no. 6, 1368-1372; English transl. in Soviet Math. Dokl., 42
(1991), no. 1, 229-233.
[yur12] Yurko V.A., A problem in elasticity theory, Prikladnaya matematika i mekhanika, 54
(1990), no. 6, 998-1002; English transl. in J. Appl. Math. Mech., 54 (1990), no. 6,
820-824.
[yur13] Yurko V.A., An inverse problem for integro-differential operators, Matematicheskie
zametki, 50 (1991), no. 5, 134-146; English transl. in Math. Notes, 50 (1991), no. 5-6,
1188-1197.
[yur14] Yurko V.A., An inverse problem for differential operators on the half-line, Izvestiya
Vysshikh Uchebnykh Zavedenii, seriya matematika, no. 12 (1991), 67-76; English
transl. in Soviet Math. (Iz.VVZ), 35 (1991), no. 12, 67-74.
[yur15] Yurko V.A., Solution of the Boussinesq equation on the half-line by the inverse problem
method, Inverse Problems, 7 (1991), 727-738.
[yur16] Yurko V.A., Recovery of nonselfadjoint differential operators on the half-line from the
Weyl matrix, Mat. Sbornik, 182 (1991), no. 3, 431-456; English transl. in Math.
USSR Sbornik, 72 (1992), no. 2, 413-438.
[yur17] Yurko V.A., Inverse problem for differential equations with a singularity, Differ.
Uravneniya, 28 (1992), no. 8, 1355-1362; English transl. in Differential Equations, 28
(1992), 1100-1107.
[yur18] Yurko V.A., On higher-order differential operators with a singular point, Inverse Prob-
lems, 9 (1993), 495-502.
[yur19] Yurko V.A., Inverse problem for selfadjoint differential operators on the half-line, Dok-
lady RAN, 333 (1993), no. 4, 449-451; English transl. in Russian Acad. Sci. Dokl.
Math. 48 (1994), no. 3, 583-587.
[yur20] Yurko V.A., On differential operators with nonseparated boundary conditions, Funkt.
Analiz i Prilozheniya, 28 (1994), no. 4, 90-92; English transl. in Funct. Anal. Appl.,
28 (1994), no. 4, 295-297.
[yur21] Yurko V.A., On determination of selfadjoint differential operators on the half-line,
Matematicheskie Zametki, 57 (1995), no. 3, 451-462; English transl. in Math. Notes,
57 (1995), no. 3-4, 310-318.
[yur22] Yurko V.A., On higher-order differential operators with a singularity, Mat. Sbornik,
186 (1995), no. 6, 133-160; English transl. in Sbornik: Mathematics, 186 (1995), no. 6,
901-928.
[yur23] Yurko V.A., On integration of nonlinear dynamical systems by the inverse problem
method, Matematicheskie Zametki, 57 (1995), no. 6, 945-949; English transl. in Math.
Notes, 57 (1995), no. 6, 672-675.
[yur24] Yurko V.A., On higher-order dierence operators, Journal of Dierence Equations and
Applications, 1 (1995), 347-352.
[yur25] Yurko V.A., On inverse problems for nonlinear differential equations, Differ. Urav-
neniya, 31 (1995), no. 10, 1768-1769; English transl. in Differential Equations, 31
(1995), no. 10, 1741-1743.
[yur26] Yurko V.A., An inverse problem for operators of a triangular structure, Results in
Mathematics, 30 (1996), no. 3/4, 346-373.
[yur27] Yurko V.A., An inverse problem for systems of differential equations with nonlinear
dependence on the spectral parameter, Dier. Uravneniya, 33 (1997), no. 3, 390-395;
English transl. in Differential Equations, 33 (1997), no. 3, 388-394.
[yur28] Yurko V.A., Integral transforms connected with differential operators having singular-
ities inside the interval, Integral Transforms and Special Functions, 5 (1997), no. 3-4,
309-322.
[yur29] Yurko V.A., On recovering Sturm-Liouville differential operators with singularities
inside the interval, Matematicheskie Zametki, 64 (1998), no. 1, 143-156; English transl.
in Math. Notes, 64 (1998), no. 1, 121-132.
[yur30] Yurko V.A., Inverse spectral problems for differential operators and their applications,
J. Math. Sciences (to appear).
[yur31] Yurko V.A., An inverse problem for differential equations of the Orr-Sommerfeld type,
Mathematische Nachrichten 211 (2000), 177-183.
[yur32] Yurko V.A., On recovery of pencils of differential operators on the half-line, Matem-
aticheskie Zametki, 67, no. 2 (2000), 316-319.
[zak1] Zakharov V.E., Manakov S.V., Novikov S.P. and Pitaevskii L.P., Theory of solitons.
The inverse scattering method, Nauka, Moscow, 1980; English transl.: Contemporary
Soviet Mathematics. Consultants Bureau [Plenum], New York-London, 1984.
[zam1] Zamonov M.Z. and Khasanov A.B., Solvability of the inverse problem for the Dirac
system on the whole axis, Vestnik Moskov. Univ. Ser. I Mat. Mekh. 1985, no. 6, 3-7,
110; English transl. in Moscow Univ. Math. Bull. 40 (1985), no. 6, 1-5.
[zhi1] Zhidkov E.P. and Airapetyan R.G., A numerical method for solving an inverse problem
of quantum scattering theory, Vestnik Moskov. Univ. Ser. I Mat. Mekh. 1996, no.
6, 38-40, 110; English transl. in Moscow Univ. Math. Bull. 51 (1996), no. 6, 23-25
(1997).
[Zho1] Zhornitskaya L.A. and Serov V.S., Inverse eigenvalue problems for a singular Sturm-
Liouville operator on (0, 1) , Inverse Problems, 10 (1994), no. 4, 975-987.
[zho1] Zhou, Xin., Direct and inverse scattering transforms with arbitrary spectral singulari-
ties, Comm. Pure Appl. Math. 42 (1989), no. 7, 895-938.
[zho2] Zhou X., Inverse scattering transform for systems with rational spectral dependence,
J. Differential Equations 115 (1995), no. 2, 277-303.
[zho3] Zhou X., L^2-Sobolev space bijectivity of the scattering and inverse scattering trans-
forms, Comm. Pure Appl. Math. 51 (1998), no. 7, 697-731.
[zuy1] Zuyev I.V. and Tikhonravov A.V., The uniqueness of the determination of parameters
of a stratified medium from the energy reflectance, Zh. Vych. Mat. Mat. Fiz. 33, no.3
(1993), 428-438; English transl. in Comput. Maths. Math. Phys. 33, no.3 (1993),
387-395.
[zyg1] Zygmund A., Trigonometric Series, I,II, Cambridge University Press, Cambridge, 1959.