
Integral Equations

R. Kress

Introduction

Some forty years ago when I was working on my thesis I fell in love with
integral equations as one of the most beautiful topics in both pure and applied
analysis. This article is intended to stimulate the reader to share this love
with me.
The term integral equation was first used by Paul du Bois-Reymond in
1888. In general, an integral equation is an equation where an unknown
function occurs under an integral. Typical examples are
$$\int_0^1 K(x,y)\,\varphi(y)\,dy = f(x) \tag{1}$$

and
$$\varphi(x) + \int_0^1 K(x,y)\,\varphi(y)\,dy = f(x). \tag{2}$$

In these equations the function $\varphi$ is the unknown, and the so-called kernel $K$
and the right hand side f are given functions. Solving these integral equations
amounts to determining a function $\varphi$ such that (1) or (2), respectively, is
satisfied for all $0 \le x \le 1$. The equations (1) and (2) carry the name of
Ivar Fredholm and are called Fredholm integral equations of the first and
second kind, respectively. In the first equation the unknown function only
occurs under the integral whereas in the second equation it also appears
outside the integral. Later on we will show that this is more than just a
formal difference between the two types of equations. A first impression on
the difference can be obtained by considering the special case of a constant
kernel $K(x,y) = c \neq 0$ for all $x, y \in [0,1]$. On the one hand, it is easily seen that
the equation of the second kind (2) has a unique solution given by
$$\varphi = f - \frac{c}{1+c}\int_0^1 f(y)\,dy$$

provided $c \neq -1$. If $c = -1$ then (2) is solvable if and only if $\int_0^1 f(y)\,dy = 0$,
and the general solution is given by $\varphi = f + \gamma$ with an arbitrary constant $\gamma$.
On the other hand, the equation of the first kind (1) is solvable if and only
if $f$ is a constant, say $f(x) = \gamma$ for all $x$. In this case every function $\varphi$
with mean value $\gamma/c$ is a solution.
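This constant-kernel computation is easy to check numerically. The sketch below (with the illustrative choices $c = 2$ and $f(x) = x$, which are not from the text) verifies that $\varphi = f - \frac{c}{1+c}\int_0^1 f(y)\,dy$ satisfies the second-kind equation (2) up to rounding error:

```python
import numpy as np

def trapz(vals, h):
    """Composite trapezoidal rule on a uniform grid with spacing h."""
    return h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

c = 2.0                       # constant kernel K(x, y) = c (hypothetical choice)
f = lambda x: x               # hypothetical right-hand side

y = np.linspace(0.0, 1.0, 1001)
h = y[1] - y[0]
F = trapz(f(y), h)            # integral of f over [0, 1]

# Solution formula for the equation of the second kind with constant kernel
phi = lambda x: f(x) - c / (1.0 + c) * F

# Verify phi(x) + c * int_0^1 phi(y) dy = f(x) at a few sample points
int_phi = trapz(phi(y), h)
residual = max(abs(phi(x) + c * int_phi - f(x)) for x in (0.0, 0.25, 0.5, 1.0))
print(residual)  # essentially zero
```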
Of course, the integration domains in (1) and (2) are not restricted to the
interval [0, 1]. In particular, the integration domain can be multidimensional
and for the integral equation of the first kind the domain where the equation
is required to be satisfied need not coincide with the integration domain.
The main aim of this article is to first guide the reader through part of the
historical development of the theory and the applications of these equations.
In particular, we exhibit their close connection to partial differential equations and emphasize their fundamental role in initiating the development of
functional analysis as the appropriate abstract framework for studying integral and also differential equations. Then in the second part we will illustrate
how integral equations play an important role in current mathematical research on inverse and ill-posed problems in areas such as medical imaging
and non-destructive evaluation.
Two mathematical problems are called inverse to each other if the formulation of the first problem contains the solution of the second problem
and vice versa. According to this definition at first glance it seems arbitrary
to distinguish one of the two problems as an inverse problem. However,
in general, one of the two problems is easier and more intensively studied
whereas the other is more difficult and less explored. Then the first problem
is denoted as the direct problem and the second as the inverse problem.
A wealth of inverse problems arise in the mathematical modeling of noninvasive evaluation and imaging methods in science, medicine and technology.
For imaging devices such as conventional X-rays or X-ray tomography the
corresponding direct problem consists of determining the images, i.e., projections from the known density distribution of an object. Conversely, the
inverse problem asks to reconstruct the density from the images. More generally, inverse problems answer questions about the cause of a given effect
whereas in the corresponding direct problem the cause is known and the effect is to be determined. A common feature of such inverse problems is their
ill-posedness or instability, i.e., small changes in the measured effect may
result in large changes in the estimated cause.
The equations (1) and (2) are linear equations since the unknown function appears in a linear fashion. Though nonlinear integral equations also

constitute an important part of the mathematical theory and the applications


of integral equations, due to space limitations here we do not consider them.

Abel's integral equation

As an appetizer we consider Abel's integral equation, which occurred as one


of the first integral equations in mathematical history. A tautochrone is a
planar curve for which the time taken by an object sliding without friction in
uniform gravity to its lowest point is independent of its starting point. The
problem to identify this curve was solved by Christiaan Huygens in 1659 who,
using geometrical tools, established that the tautochrone is a cycloid.
In 1823 Niels Henrik Abel attacked the more general problem of determining
a planar curve such that the time of descent for a given starting height
y coincides with the value $f(y)$ of a given function $f$. The tautochrone problem
then reduces to the special case when $f$ is a constant. Following Abel we
describe the curve by $x = \psi(y)$ (with $\psi(0) = 0$) and, using the principle of
conservation of energy, obtain
$$f(y) = \int_0^y \frac{\varphi(\eta)}{\sqrt{y-\eta}}\,d\eta, \quad y > 0, \tag{3}$$

for the total time $f(y)$ required for the object to fall from $P = (\psi(y), y)$ to
$P_0 = (0,0)$, where

$$\varphi := \sqrt{\frac{1 + [\psi']^2}{2g}}$$
and $g$ denotes the gravitational acceleration. Equation (3) is known as Abel's integral
equation. Given the shape $\psi$, the falling time $f$ is obtained by simply evaluating the integral on the right-hand side of (3). However, the solution of the
generalized tautochrone problem requires the solution of the inverse problem,
i.e., given the function $f$, the solution $\varphi$ of the integral equation (3) has to
be found, which certainly is a more challenging task. This solution can be
shown to be given by
$$\varphi(y) = \frac{1}{\pi}\,\frac{d}{dy}\int_0^y \frac{f(\eta)}{\sqrt{y-\eta}}\,d\eta, \quad y > 0. \tag{4}$$
For the special case of a constant $f = \pi\sqrt{a/(2g)}$ with $a > 0$ one obtains from

(4) after some calculations that


$$[\psi'(y)]^2 = \frac{a}{y} - 1$$

and it can be seen that the solution of this equation is given by the cycloid
with parametric representation
$$(x(t), y(t)) = \frac{a}{2}\,(t + \sin t,\; 1 - \cos t), \quad 0 \le t \le \pi.$$
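The inversion formula (4) can be checked numerically on a pair where both sides are known in closed form: for $\varphi \equiv 1$, equation (3) gives $f(y) = 2\sqrt{y}$, so (4) must return the constant $1$. The substitution $\eta = y\sin^2 u$ (our choice, not from the text) removes the integrable endpoint singularity. A minimal sketch:

```python
import numpy as np

def f(y):                      # falling time produced by phi ≡ 1: f(y) = 2*sqrt(y)
    return 2.0 * np.sqrt(y)

def abel_integral(y, n=2000):
    """I(y) = int_0^y f(eta)/sqrt(y - eta) d eta via eta = y*sin^2(u)."""
    u = (np.arange(n) + 0.5) * (np.pi / 2) / n   # midpoint rule on [0, pi/2]
    du = (np.pi / 2) / n
    return np.sum(f(y * np.sin(u) ** 2) * 2.0 * np.sqrt(y) * np.sin(u)) * du

# phi(y) = (1/pi) d/dy I(y), approximated by a central difference
y0, h = 1.0, 1e-4
phi = (abel_integral(y0 + h) - abel_integral(y0 - h)) / (2 * h) / np.pi
print(phi)  # close to 1
```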

Early years

We proceed by giving a brief account of the close connections of the early


development of integral equations with potential theory. For the sake of
simplicity we confine the presentation to the two-dimensional case as a model
for the practically relevant three-dimensional case. In the sequel $x = (x_1, x_2)$
and $y = (y_1, y_2)$ stand for points or vectors in the Euclidean space $\mathbb{R}^2$. Twice
continuously differentiable solutions u to the Laplace equation
$$\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} = 0$$
are called harmonic functions. They model time-independent temperature
distributions, potentials of electrostatic and magnetostatic fields, and velocity
potentials of incompressible irrotational fluid flows.
For a simply connected bounded domain $D$ in $\mathbb{R}^2$ with smooth boundary
$\Gamma := \partial D$, the Dirichlet problem of potential theory consists of finding a harmonic function $u$ in $D$ that is continuous up to the boundary and assumes
boundary values $u = f$ on $\Gamma$ for a given continuous function $f$ on $\Gamma$. A first
approach to this problem, developed in the early 19th century, was to create
a so-called single-layer potential by distributing point sources with a density
$\varphi$ on the boundary curve $\Gamma$, i.e., by looking for a solution in the form
$$u(x) = \int_\Gamma \varphi(y)\,\ln|x-y|\,ds(y), \quad x \in D. \tag{5}$$
Here, $|\cdot|$ denotes the Euclidean norm, i.e., $|x-y|$ represents the distance
between the two points $x$ in $D$ and $y$ on $\Gamma$. Since $\ln|x-y|$ satisfies Laplace's
equation if $x \neq y$, the function $u$ given by (5) is harmonic, and in order to

satisfy the boundary condition it suffices to choose the unknown function $\varphi$
as a solution of the integral equation

$$\int_\Gamma \varphi(y)\,\ln|x-y|\,ds(y) = f(x), \quad x \in \Gamma, \tag{6}$$

which is known as Symm's integral equation. However, the analysis available


at that time did not allow a successful treatment of this integral equation of
the first kind. Actually, only in the second half of the 20th century was a
satisfactory analysis of (6) achieved. Therefore, it represented a major breakthrough when August Beer in 1856 proposed to place dipoles on the boundary
curve, i.e., to look for a solution in the form of a double-layer potential
$$u(x) = \int_\Gamma \varphi(y)\,\frac{\partial \ln|x-y|}{\partial\nu(y)}\,ds(y), \quad x \in D, \tag{7}$$

where $\nu$ denotes the unit normal vector to the boundary curve $\Gamma$ directed into
the exterior of D. Now the so-called jump relations from potential theory
require that
$$\varphi(x) + \frac{1}{\pi}\int_\Gamma \varphi(y)\,\frac{\partial \ln|x-y|}{\partial\nu(y)}\,ds(y) = \frac{1}{\pi}\,f(x) \tag{8}$$
is satisfied for $x \in \Gamma$ in order to fulfill the boundary condition. This is an


integral equation of the second kind and as such, in principle, was accessible
to the method of successive approximations. However, in order to achieve
convergence for the case of convex domains, in 1877 Carl Neumann had to
modify the successive approximations into what he called the method of
arithmetic means, i.e., a relaxation method in modern terms.
For the general case, establishing the existence of a solution to (8) had
to wait until the pioneering results of Ivar Fredholm that were published
in final form in 1903 in Acta Mathematica with the title "Sur une classe
d'équations fonctionnelles". Fredholm considered equations of the form (2) with
a general kernel K and assumed all the functions involved to be continuous
and real valued. His approach was to consider the integral equation as the
limiting case of a system of linear equations by approximating the integral by
Riemannian sums. In Cramer's rule for this linear system Fredholm passes
to the limit by using von Koch's theory of infinite determinants from 1896 and
Hadamard's inequality for determinants from 1893. The idea to view integral
equations as the limiting case of linear systems had already been used by
Volterra in 1896, but it was Fredholm who completed it successfully.

In addition to equation (2), Fredholm's results also contain the adjoint


integral equation that is obtained by interchanging the variables in the kernel
function. They can be summarized in the following alternative. Note that
all four equations (9)–(12) in Theorem 3.1 are required to be satisfied for all
$0 \le x \le 1$.
Theorem 3.1 Either the homogeneous integral equations
$$\varphi(x) + \int_0^1 K(x,y)\,\varphi(y)\,dy = 0 \tag{9}$$

and

$$\psi(x) + \int_0^1 K(y,x)\,\psi(y)\,dy = 0 \tag{10}$$

only have the trivial solutions $\varphi = 0$ and $\psi = 0$, and the inhomogeneous


integral equations
$$\varphi(x) + \int_0^1 K(x,y)\,\varphi(y)\,dy = f(x) \tag{11}$$

and
$$\psi(x) + \int_0^1 K(y,x)\,\psi(y)\,dy = g(x) \tag{12}$$

have a unique continuous solution $\varphi$ and $\psi$ for each continuous right-hand
side $f$ and $g$, respectively, or the homogeneous equations (9) and (10) have
the same finite number of linearly independent solutions and the inhomogeneous
integral equations are solvable if and only if the right-hand sides satisfy
$\int_0^1 f(x)\,\psi(x)\,dx = 0$ for all solutions $\psi$ to the homogeneous adjoint equation
(10) and $\int_0^1 \varphi(x)\,g(x)\,dx = 0$ for all solutions $\varphi$ to the homogeneous equation
(9).
We explicitly note that this theorem implies that for the first of the two
alternatives each one of the four properties implies the three other ones.
Hence, in particular, uniqueness for the homogeneous equation (9) implies
existence of a solution to the inhomogeneous equation (11) for each right
hand side. This is notable, since it is almost always much simpler to prove
uniqueness for a linear problem than to prove existence.
Fredholm's existence results also clarify the existence of a solution to
the boundary integral equation (8) for the Dirichlet problem for the Laplace

equation. By inserting a parameterization of the boundary curve $\Gamma$ we


can transform (8) into the form (2) with a continuous kernel for which the
homogeneous equation only allows the trivial solution.
Over the last century this boundary integral equation approach via potentials of the form (5) and (7) has been successfully extended to almost
all boundary and initial-boundary value problems for second order partial
differential equations with constant coefficients such as the time-dependent
heat equation, the time-dependent and the time-harmonic wave equation and
Maxwell equations among many others. In addition to settling existence of
solutions, boundary integral equations provide an excellent tool for obtaining
approximate solutions of the boundary and initial-boundary value problems
by solving the integral equations numerically (see Section 5). These so-called
boundary element methods, in short BEM, compete well with the finite element methods, in short FEM. It is an important part of current research on
integral equations to develop, implement and theoretically justify new efficient algorithms for boundary integral equations for very complex geometries
in three dimensions that arise in real applications.

Impact on functional analysis

Fredholms results on the integral equation (2) initiated the development of


modern functional analysis in the 1920s. The almost literal agreement of
the Fredholm alternative for linear integral equations as formulated in Theorem 3.1 with the corresponding alternative for linear systems soon gave
rise to the search for a broader and more abstract form of the Fredholm alternative. This in turn also allowed extensions of the integral equation theory
under weaker regularity requirements on the kernel and solution functions.
In addition, many years later it was found that more insight into the
structure of Fredholm integral equations was achieved by altogether abandoning the
at first very fruitful analogy between integral equations and linear systems.
A first answer to the search for a general formulation of the Fredholm
alternative was given by Frigyes Riesz. In his work from 1916 he interpreted
the integral equation as a special case of an equation of the second kind
$$\varphi + A\varphi = f$$
with a compact linear operator $A : X \to X$ mapping a normed space $X$ into
itself. The notion of a normed space that is common in today's mathematics

was not yet available in 1916.


Riesz set his work up in the function space of continuous real-valued
functions on the interval $[0,1]$, in today's terminology the space $C[0,1]$. The
maximum of the absolute value of a function $f$ on $[0,1]$ he called the norm of $f$
and confirmed its properties that we now know as the standard norm axioms.
Only these axioms, and not the special meaning as the maximum norm, were
used by Riesz.
The concept of a compact operator also was not yet available in 1916.
However, using the notion of compactness as introduced by Fréchet in 1906,
Riesz formulated that the integral operator A defined by
$$(A\varphi)(x) := \int_0^1 K(x,y)\,\varphi(y)\,dy, \quad x \in [0,1], \tag{13}$$

on the space C[0, 1] of continuous functions maps bounded sets into relatively
compact sets, i.e., in today's terminology $A$ is a compact operator.
What is fascinating about the work of Riesz is that his proofs are still
usable and can almost literally be transferred from the case of an integral
operator in the space of continuous functions to the general case of a compact
operator in a normed space. Riesz knew about the generality of his method
since he explicitly noted that the restriction to continuous functions was not
relevant.
The results of Riesz can be summarized into the following theorem where
I denotes the identity operator.
Theorem 4.1 For a compact linear operator $A : X \to X$ in a normed space
$X$, either $I + A$ is injective and surjective and has a bounded inverse, or the
null space $N(I + A) := \{\varphi : \varphi + A\varphi = 0\}$ has nonzero finite dimension
and the image space $(I + A)(X)$ is a proper subspace of $X$.
The central and most valuable piece of the Riesz theory is again the equivalence of injectivity and surjectivity. Theorem 4.1 does not yet completely
encompass the alternative for Fredholm integral equations of Theorem 3.1
since a link with an adjoint equation and the characterization of the image
space in the second case of the alternative are missing. Due to space limitations we have to omit closing this gap through the results of Schauder from
1929 and more recent developments in the 1960s (see [4]).
The following theorem is just a consequence of the fact that the identity
operator on a normed space $X$ is compact if and only if $X$ has finite dimension.

It explains why the difference between the two integral equations (1) and (2)
is more than just formal.
Theorem 4.2 Let $X$ and $Y$ be normed spaces and let $A : X \to Y$ be a
compact linear operator. Then $A$ cannot have a bounded inverse if $X$ is
infinite dimensional.

Numerical solution

The conceptually most straightforward idea for the numerical solution of
integral equations of the second kind goes back to Nyström in 1930 and
consists in replacing the integral in (2) by numerical integration. Using a
quadrature formula
$$\int_0^1 g(x)\,dx \approx \sum_{k=1}^n a_k\,g(x_k)$$

with quadrature points $x_1, \ldots, x_n \in [0,1]$ and quadrature weights
$a_1, \ldots, a_n \in \mathbb{R}$, such as the composite trapezoidal or composite Simpson rule, we approximate the integral operator (13) by the numerical integration operator
$$(A_n\varphi)(x) := \sum_{k=1}^n a_k\,K(x,x_k)\,\varphi(x_k) \tag{14}$$

for $x \in [0,1]$, i.e., we apply the quadrature formula to $g = K(x,\cdot)\,\varphi$. Then
the solution $\varphi$ of the integral equation of the second kind $\varphi + A\varphi = f$ is
approximated by the solution $\varphi_n$ of $\varphi_n + A_n\varphi_n = f$, which reduces to solving a
finite-dimensional linear system as follows. If $\varphi_n$ is a solution of
$$\varphi_n(x) + \sum_{k=1}^n a_k\,K(x,x_k)\,\varphi_n(x_k) = f(x) \tag{15}$$

for $x \in [0,1]$, then clearly the values $\varphi_{n,j} := \varphi_n(x_j)$ at the quadrature points
satisfy the linear system

$$\varphi_{n,j} + \sum_{k=1}^n a_k\,K(x_j,x_k)\,\varphi_{n,k} = f(x_j) \tag{16}$$

for $j = 1, \ldots, n$. Conversely, if $\varphi_{n,j}$ is a solution of the system (16), the
function $\varphi_n$ defined by

$$\varphi_n(x) := f(x) - \sum_{k=1}^n a_k\,K(x,x_k)\,\varphi_{n,k} \tag{17}$$

for $x \in [0,1]$ can be seen to solve equation (15). Under appropriate assumptions on the kernel $K$ and the right-hand side $f$, for a convergent sequence
of quadrature rules it can be shown that the corresponding sequence $(\varphi_n)$ of
approximate solutions converges uniformly to the solution $\varphi$ of the integral
equation as $n \to \infty$. Furthermore it can be established that the error estimates for the quadrature rules carry over to error estimates for the Nyström
approximations.
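The Nyström scheme just described is only a few lines of code. The sketch below uses the composite trapezoidal rule and a hypothetical separable test kernel $K(x,y) = xy$ (our choice for verification), for which the right-hand side $f(x) = 4x/3$ gives the exact solution $\varphi(x) = x$; the system (16) is set up and solved, and (17) recovers $\varphi_n$ between the quadrature points:

```python
import numpy as np

K = lambda x, y: x * y               # hypothetical kernel chosen for testing
f = lambda x: 4.0 * x / 3.0          # then phi + A phi = f has solution phi(x) = x

n = 41
xk = np.linspace(0.0, 1.0, n)        # quadrature points
a = np.full(n, 1.0 / (n - 1))        # composite trapezoidal weights
a[0] = a[-1] = 0.5 / (n - 1)

# Linear system (16): phi_j + sum_k a_k K(x_j, x_k) phi_k = f(x_j)
M = np.eye(n) + a[None, :] * K(xk[:, None], xk[None, :])
phi_nodes = np.linalg.solve(M, f(xk))

# Nystrom interpolation (17) evaluates phi_n at arbitrary points
def phi_n(x):
    return f(x) - np.sum(a * K(x, xk) * phi_nodes)

err = max(abs(phi_n(x) - x) for x in (0.0, 0.37, 0.5, 1.0))
print(err)  # small; dominated by the trapezoidal quadrature error
```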
We conclude this short discussion on the numerical solution of integral
equations by pointing out that in addition to the Nyström method a wealth of
other methods is available, such as collocation and Galerkin methods (see [4]).

Ill-posed problems

For problems in mathematical physics in 1923 Hadamard postulated three


requirements: A solution should exist, the solution should be unique, and
the solution should depend continuously on the data. The third postulate is
motivated by the fact that in applications the data will be measured quantities and therefore always contaminated by errors. A problem satisfying all
three requirements is called well-posed. Otherwise, it is called ill-posed. If
$A : X \to Y$ is a bounded linear operator mapping a normed space $X$ into
a normed space $Y$, then the equation $A\varphi = f$ is well-posed if $A$ is bijective
and the inverse operator $A^{-1} : Y \to X$ is bounded, i.e., continuous. Otherwise it is ill-posed. The main concern with ill-posed problems is the case
of instability, i.e., the case where the solution of $A\varphi = f$ does not depend
continuously on the data $f$.
As an example of an ill-posed problem we present backward heat
conduction. Consider the forward heat equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$

for the time-dependent temperature $u$ in a rectangle $[0,1] \times [0,T]$ subject to

the homogeneous boundary conditions

$$u(0,t) = u(1,t) = 0, \quad 0 \le t \le T,$$

and the initial condition

$$u(x,0) = \varphi(x), \quad 0 \le x \le 1,$$

where $\varphi$ is a given initial temperature. By separation of variables the solution


can be obtained in the form
$$u(x,t) = \sum_{n=1}^\infty a_n\,e^{-n^2\pi^2 t}\,\sin n\pi x \tag{18}$$

with the Fourier coefficients


$$a_n = 2\int_0^1 \varphi(y)\,\sin n\pi y\,dy \tag{19}$$

of the given initial values. This initial value problem is well-posed: the
final temperature $f := u(\cdot\,, T)$ clearly depends continuously on the initial
temperature because of the exponentially decreasing factors in the series
$$f(x) = \sum_{n=1}^\infty a_n\,e^{-n^2\pi^2 T}\,\sin n\pi x. \tag{20}$$

However, the corresponding inverse problem, i.e., the determination of the
initial temperature $\varphi$ from the knowledge of the final temperature $f$, is
ill-posed. From (20) we deduce
$$\varphi(x) = \sum_{n=1}^\infty b_n\,e^{n^2\pi^2 T}\,\sin n\pi x \tag{21}$$

with the Fourier coefficients $b_n$ of the final temperature $f$. Changes in the


final temperature will be drastically amplified by the exponentially increasing
factors in the series (21).
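The severity of this amplification is easy to quantify: even for a final time as small as $T = 0.1$ (an illustrative value), the factor $e^{n^2\pi^2 T}$ multiplying the $n$th Fourier coefficient in (21) already exceeds $10^{10}$ for $n = 5$:

```python
import math

T = 0.1  # final time (illustrative value)
for n in range(1, 6):
    # amplification factor multiplying the n-th Fourier coefficient b_n in (21)
    print(n, math.exp(n * n * math.pi ** 2 * T))
```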
Inserting (19) into (18), we see that this example can be put into the form
of an integral equation of the first kind of the form (1) with the kernel given
by

$$K(x,y) = 2\sum_{n=1}^\infty e^{-n^2\pi^2 T}\,\sin n\pi x\,\sin n\pi y.$$


In general, integral equations of the first kind with continuous kernels provide
typical examples for ill-posed problems as a consequence of Theorem 4.2.
Of course, the ill-posed nature of an equation has consequences for its numerical solution. The fact that an operator does not have a bounded inverse
means that the condition numbers of its finite-dimensional approximations
grow with the quality of the approximation. Hence, a careless discretization
of ill-posed problems leads to a numerical behavior that at first glance seems
to be paradoxical. Namely, increasing the degree of discretization, i.e., increasing the accuracy of the approximation for the operator, will cause the
approximate solution to the equation to become less and less reliable.
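This growth of condition numbers can be observed directly by discretizing a first-kind equation with a smooth kernel, e.g. the Gaussian kernel $K(x,y) = e^{-(x-y)^2}$ (a hypothetical but typical choice), with the midpoint rule:

```python
import numpy as np

K = lambda x, y: np.exp(-(x - y) ** 2)    # smooth kernel => compact operator

conds = []
for n in (5, 10, 20, 40):
    x = (np.arange(n) + 0.5) / n          # midpoint-rule quadrature points
    A = K(x[:, None], x[None, :]) / n     # matrix of the discretized 1st-kind eq.
    conds.append(np.linalg.cond(A))
    print(n, conds[-1])                   # condition numbers grow rapidly with n
```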

Regularization

Methods for obtaining a stable approximate solution of an ill-posed problem


are called regularization methods. It is our aim to describe a few ideas on
regularization concepts for equations of the first kind with a compact linear
operator $A : X \to Y$ between two normed spaces $X$ and $Y$. We wish to
approximate the solution $\varphi$ to the equation $A\varphi = f$ from a perturbed right-hand
side $f^\delta$ with a known error level $\|f^\delta - f\| \le \delta$. Using the erroneous
data $f^\delta$ we want to construct a reasonable approximation $\varphi^\delta$ to the exact
solution $\varphi$ of the unperturbed equation $A\varphi = f$. Of course, we want this
approximation to be stable, i.e., we want $\varphi^\delta$ to depend continuously on the
actual data $f^\delta$. Therefore, assuming without major loss of generality that
$A$ is injective, our task requires finding an approximation of the unbounded
inverse operator $A^{-1} : A(X) \to X$ by a bounded linear operator $R : Y \to X$.
Having this in mind, a family of bounded linear operators $R_\alpha : Y \to X$,
$\alpha > 0$, with the property of pointwise convergence

$$\lim_{\alpha \to 0} R_\alpha A\varphi = \varphi \tag{22}$$

for all $\varphi \in X$ is called a regularization scheme for the operator $A$. The
parameter $\alpha$ is called the regularization parameter.
The regularization scheme approximates the solution $\varphi$ of $A\varphi = f$ by the
regularized solution $\varphi_\alpha^\delta := R_\alpha f^\delta$. Then, for the total approximation error, by
the triangle inequality we have the estimate

$$\|\varphi_\alpha^\delta - \varphi\| \le \delta\,\|R_\alpha\| + \|R_\alpha A\varphi - \varphi\|.$$

This decomposition shows that the error consists of two parts: the first term
reflects the influence of the incorrect data and the second term is due to
the approximation error between $R_\alpha$ and $A^{-1}$. Assuming that $X$ is infinite
dimensional, the operators $R_\alpha$ cannot be uniformly bounded, since otherwise $A$ would have
a bounded inverse. Consequently, the first term will be increasing as $\alpha \to 0$
whereas the second term will be decreasing as $\alpha \to 0$ according to (22).
Every regularization scheme requires a strategy for choosing the parameter $\alpha$ in dependence on the error level $\delta$ and the data $f^\delta$ in order to achieve
an acceptable total error for the regularized solution. On the one hand, the accuracy of the approximation asks for a small error $\|R_\alpha A\varphi - \varphi\|$, i.e., for a small
parameter $\alpha$. On the other hand, stability requires a small $\|R_\alpha\|$, i.e., a
large parameter $\alpha$. A popular strategy is given by the discrepancy principle.
Its motivation is based on the consideration that, in general, for erroneous
data the residual $\|A\varphi_\alpha^\delta - f^\delta\|$ should not be smaller than the accuracy $\delta$ of
the measurements of $f$, i.e., the regularization parameter $\alpha$ should be chosen
such that $\|A R_\alpha f^\delta - f^\delta\| = \delta$.
We now assume that $X$ and $Y$ are Hilbert spaces and denote their inner products by $(\cdot\,,\cdot)$, with the space $L^2[0,1]$ of Lebesgue square integrable
complex-valued functions on $[0,1]$ as a typical example. Each bounded linear operator $A : X \to Y$ has a unique adjoint operator $A^* : Y \to X$ with
the property $(A\varphi, g) = (\varphi, A^* g)$ for all $\varphi \in X$ and $g \in Y$. If $A$ is compact then $A^*$ is also compact. The adjoint of the compact integral operator
$A : L^2[0,1] \to L^2[0,1]$ defined by (13) is given by the integral operator with
the kernel $\overline{K(y,x)}$, where the bar indicates the complex conjugate.
Extending the singular value decomposition for matrices from linear algebra, for each compact linear operator $A : X \to Y$ there exists a singular
system consisting of a monotonically decreasing null sequence $(\mu_n)$ of positive numbers and two orthonormal sequences $(\varphi_n)$ in $X$ and $(g_n)$ in $Y$ such
that

$$A\varphi_n = \mu_n g_n, \quad A^* g_n = \mu_n \varphi_n, \quad n \in \mathbb{N}.$$
For each $\varphi \in X$ we have the singular value decomposition

$$\varphi = \sum_{n=1}^\infty (\varphi, \varphi_n)\,\varphi_n + P\varphi,$$

where $P : X \to N(A)$ is the orthogonal projection operator onto the null


space of $A$, and

$$A\varphi = \sum_{n=1}^\infty \mu_n\,(\varphi, \varphi_n)\,g_n.$$

From the singular value decomposition it can be readily deduced that the
equation of the first kind $A\varphi = f$ is solvable if and only if $f$ is orthogonal to
the null space of $A^*$ and satisfies

$$\sum_{n=1}^\infty \frac{1}{\mu_n^2}\,|(f, g_n)|^2 < \infty. \tag{23}$$

If (23) is fulfilled, a solution is given by

$$\varphi = \sum_{n=1}^\infty \frac{1}{\mu_n}\,(f, g_n)\,\varphi_n. \tag{24}$$

The solution (24) clearly demonstrates the ill-posed nature of the equation
$A\varphi = f$. If we perturb the right-hand side $f$ by $f^\delta = f + \delta g_n$, we obtain the
solution $\varphi^\delta = \varphi + \delta\mu_n^{-1}\varphi_n$. Hence, the ratio $\|\varphi^\delta - \varphi\|/\|f^\delta - f\| = 1/\mu_n$ can
be made arbitrarily large due to the fact that the singular values tend to
zero. This observation suggests regularizing by damping the influence of the
factor $1/\mu_n$ in the solution formula (24). In the Tikhonov regularization this
is achieved by choosing
$$R_\alpha f := \sum_{n=1}^\infty \frac{\mu_n}{\alpha + \mu_n^2}\,(f, g_n)\,\varphi_n. \tag{25}$$

Computing $R_\alpha f$ does not require the singular system, since for injective $A$ it
can be shown that $R_\alpha = (\alpha I + A^* A)^{-1} A^*$. Hence $\varphi_\alpha := R_\alpha f$ can be obtained
as the unique solution of the well-posed equation of the second kind

$$\alpha\varphi_\alpha + A^* A\varphi_\alpha = A^* f.$$
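In a discretized setting the characterization $R_\alpha = (\alpha I + A^*A)^{-1}A^*$ turns Tikhonov regularization into a single well-conditioned linear solve. The sketch below uses a smooth Gaussian kernel as a hypothetical compact operator (the kernel, the exact solution, the noise level and $\alpha$ are all illustrative choices), contaminates the data with noise of level $\delta$, and compares the naive solve with the regularized one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = (np.arange(n) + 0.5) / n
A = np.exp(-(x[:, None] - x[None, :]) ** 2) / n   # smooth kernel, eq. of 1st kind
phi_true = np.sin(np.pi * x)                      # hypothetical exact solution
f = A @ phi_true

delta = 1e-3
f_delta = f + delta * rng.standard_normal(n)      # noisy data

# Tikhonov: phi_alpha = (alpha I + A^T A)^{-1} A^T f_delta
alpha = 1e-6
phi_alpha = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ f_delta)

err_naive = np.linalg.norm(np.linalg.solve(A, f_delta) - phi_true)
err_tikh = np.linalg.norm(phi_alpha - phi_true)
print(err_naive, err_tikh)  # naive solve is wildly unstable; Tikhonov is far better
```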

Computerized tomography

In transmission computerized tomography a cross-section of an object is


scanned by a thin X-ray beam whose intensity loss is recorded by a detector
and processed to an image. Denote by $f$ the space-dependent attenuation

coefficient within a two-dimensional medium. The relative intensity loss of


an X-ray along a straight line $L$ is given by $dI = -If\,ds$, and by integration
it follows that

$$I_{\text{detector}} = I_{\text{source}} \exp\left(-\int_L f\,ds\right),$$

i.e., the scanning process in principle provides the line integrals over all lines
traversing the scanned cross-section. The transform that maps a function
in $\mathbb{R}^2$ onto its line integrals is called the Radon transform, and the inverse
problem of computerized tomography requires its inversion. Already in 1917
Radon gave an explicit inversion formula which, however, is not immediately
applicable for practical computations.
For the formal description of the Radon transform it is convenient to
parameterize the line $L$ by its unit normal vector $\theta$ and its signed distance $s$
from the origin in the form $L = \{s\theta + t\theta^\perp : t \in \mathbb{R}\}$, where $\theta^\perp$ is obtained by
rotating $\theta$ counter-clockwise by 90 degrees. Now the two-dimensional Radon
transform $R$ is defined by

$$(Rf)(\theta, s) := \int_{-\infty}^\infty f(s\theta + t\theta^\perp)\,dt, \quad \theta \in S^1, \; s \in \mathbb{R},$$
and maps $L^1(\mathbb{R}^2)$ into $L^1(S^1 \times \mathbb{R})$. Given the measured line integrals $g$, the
inverse problem of computerized tomography consists of solving

$$Rf = g \tag{26}$$

for $f$. Although it is not of the conventional form (1) or (2), clearly (26) can
be viewed as an integral equation. Its solution can be obtained using Radon's
inversion formula

$$f = \frac{1}{4\pi}\,R^*\,H\,\frac{\partial}{\partial s}\,Rf \tag{27}$$
with the Hilbert transform

$$(Hg)(s) := \frac{1}{\pi}\int_{-\infty}^\infty \frac{g(t)}{s-t}\,dt, \quad s \in \mathbb{R},$$
applied with respect to the second variable in $Rf$. The operator $R^*$ is the
adjoint of $R$ with respect to the $L^2$ inner products on $\mathbb{R}^2$ and $S^1 \times \mathbb{R}$. By
interchanging integrations, it can be seen to be given by

$$(R^* g)(x) = \int_{S^1} g(\theta, x\cdot\theta)\,d\theta, \quad x \in \mathbb{R}^2,$$


i.e., it can be considered as integrating over all lines through x and therefore
is called the back projection operator. Because of the occurrence of the
Hilbert transform in (27), inverting the Radon transform is not local, i.e.,
the line integrals through a neighborhood of the point x do not suffice for the
reconstruction of f (x). Due to the derivative appearing in (27) the inverse
problem to reconstruct the function f from its line integrals is ill-posed. For
a detailed study of computerized tomography we refer to [5].
In practice the integrals can be measured only for a finite number of lines
and correspondingly a discrete version of the Radon transform has to be
inverted. The most widely used inversion algorithm is the filtered back projection algorithm, which may be considered as an implementation of Radon's
inversion formula with the middle part $H\,\partial/\partial s$ replaced by a convolution,
i.e., a filter in the terminology of image processing. However, so-called algebraic reconstruction techniques are also used where the function f is decomposed into pixels, i.e., approximated by piecewise constants on a grid of
little squares. The resulting sparse linear system for the pixel values is then
solved iteratively, for example, by Kaczmarz's method.
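Kaczmarz's method itself is only a few lines: sweep cyclically over the equations $a_i^{\mathsf T} x = b_i$ and orthogonally project the current iterate onto the hyperplane of each equation in turn. A generic sketch (the random consistent test system is illustrative, not tomographic data):

```python
import numpy as np

def kaczmarz(A, b, sweeps=1000):
    """Cyclic Kaczmarz iteration for a consistent linear system A x = b."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            # project x onto the hyperplane a_i . x = b_i
            x += (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
x_true = rng.standard_normal(20)
b = A @ x_true                       # consistent right-hand side
x = kaczmarz(A, b)
print(np.linalg.norm(x - x_true))    # small; the iterates converge to x_true
```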
For the case of a radially symmetric function $f$, that is, $f(x) = f_0(|x|)$,
clearly $Rf$ does not depend on $\theta$, that is, $(Rf)(\theta, s) = g_0(s)$ where

$$g_0(s) = 2\int_0^\infty f_0\!\left(\sqrt{s^2 + t^2}\,\right) dt, \quad s \ge 0.$$

Substituting $t = \sqrt{r^2 - s^2}$ transforms this into

$$g_0(s) = 2\int_s^\infty f_0(r)\,\frac{r}{\sqrt{r^2 - s^2}}\,dr, \quad s \ge 0,$$

which is an Abel type integral equation again. Its solution can be shown to
be given by

$$f_0(r) = -\frac{1}{\pi}\int_r^\infty \frac{g_0'(s)}{\sqrt{s^2 - r^2}}\,ds, \quad r \ge 0.$$
This approach can be extended to a full inversion formula by expanding both
f and g = Rf in a Fourier series with respect to the polar angle. Then the
Fourier coefficients of f and g are related by Abel type integral equations
involving Chebyshev polynomials.
X-ray tomography was first suggested by the physicist Allan Cormack in
1963 and due to the efforts of the electrical engineer Godfrey Hounsfield it
was introduced into medical practice in the 1970s. For their contributions

to X-ray tomography Cormack and Hounsfield were awarded the 1979
Nobel Prize for medicine.

Inverse scattering

Scattering theory is concerned with the effects that obstacles and inhomogeneities have on the propagation of waves and in particular time-harmonic
waves. Inverse scattering provides the mathematical tools for such fields as
radar, sonar, medical imaging and nondestructive testing.
For time-harmonic waves the time dependence is factored out in the form
$U(x,t) = \mathrm{Re}\{u(x)\,e^{-i\omega t}\}$ with a positive frequency $\omega$, i.e., the complex-valued
space-dependent part $u$ represents the real-valued amplitude and phase of the
wave and satisfies the Helmholtz equation $\Delta u + k^2 u = 0$ with a positive wave
number $k$. For a vector $d \in \mathbb{R}^3$ with $|d| = 1$, the function $e^{ik\,x\cdot d}$ satisfies the
Helmholtz equation for all $x \in \mathbb{R}^3$. It is called a plane wave, since $e^{i(k\,x\cdot d - \omega t)}$
is constant on the planes $k\,x\cdot d - \omega t = \text{const}$. Assume that an incident field
is given by $u^i(x) = e^{ik\,x\cdot d}$. Then the simplest obstacle scattering problem is
to find the scattered field $u^s$ as a solution to the Helmholtz equation in the
exterior of a bounded scatterer $D \subset \mathbb{R}^3$ such that the total field $u = u^i + u^s$
satisfies the Dirichlet boundary condition $u = 0$ on $\partial D$, modeling a sound-soft obstacle or a perfect conductor. In addition, to ensure that the scattered
wave is outgoing it has to satisfy the Sommerfeld radiation condition

$$\lim_{r\to\infty} r\left(\frac{\partial u^s}{\partial r} - iku^s\right) = 0, \tag{28}$$
where $r = |x|$ and the limit holds uniformly in all directions $x/|x|$. This
ensures uniqueness of the solution to this exterior Dirichlet problem for
the Helmholtz equation. Existence of the solution has been established via
boundary integral equations in the spirit of Section 3 in the 1950s by Vekua,
Weyl and Müller.
The radiation condition (28) can be shown to be equivalent to the asymptotic behavior

$$u^s(x) = \frac{e^{ik|x|}}{|x|}\left\{ u_\infty(\hat{x}) + O\!\left(\frac{1}{|x|}\right)\right\}, \quad |x| \to \infty,$$

uniformly for all directions $\hat{x} = x/|x|$, where the function $u_\infty$ defined on
the unit sphere $S^2$ is known as the far field pattern of the scattered wave.

We indicate its dependence on the incident direction $d$ and the observation
direction $\hat{x}$ by writing $u_\infty = u_\infty(\hat{x}, d)$. The inverse scattering problem now
consists of determining the scattering obstacle $D$ from a knowledge of the far
field pattern $u_\infty$. For motivation, think of the problem of determining
from the shape of the water waves arriving at the shore whether a ball or a
cube was thrown into the water in the middle of a lake. We note that this
inverse problem is nonlinear, since the scattered wave depends nonlinearly on
the scatterer $D$, and it is ill-posed, since the far field pattern $u_\infty$ is an analytic
function on $S^2$ with respect to $\hat{x}$. For a detailed study of inverse scattering
theory we refer to [1].
Roughly speaking, one can distinguish three groups of methods
for solving the inverse obstacle scattering problem: iterative, decomposition
and sampling methods. Iterative methods interpret the inverse problem as
a nonlinear ill-posed operator equation which is solved by iteration schemes
such as regularized Newton-type iterations. The main idea of decomposition
methods is to break up the inverse scattering problem into two parts: the
first part deals with the ill-posedness by constructing the scattered wave $u^s$
from its far field pattern $u_\infty$, and the second part deals with the nonlinearity
by determining the unknown boundary $\partial D$ of the scatterer as the location
where the boundary condition for the total field is satisfied. Since boundary
integral equations play an essential role in the existence analysis and numerical
solution of the direct scattering problem, it is not surprising that they
are also an efficient tool within these two groups of methods for solving the
inverse problem (see [2]).
Sampling methods are based on choosing an appropriate indicator function
$f$ on $\mathbb{R}^3$ such that its value $f(z)$ decides whether $z$ lies inside or
outside the scatterer $D$. In contrast to iterative and decomposition methods,
sampling methods do not need any a priori information on the geometry of
the obstacle. However, they require the knowledge of the far field pattern for
a large number of incident waves, whereas the iterative and decomposition
methods, in principle, work with one incident field.
For two of the sampling methods, the so-called linear sampling method
proposed by Colton and Kirsch and the factorization method proposed by
Kirsch, these indicator functions are defined in terms of ill-posed linear
integral equations of the first kind involving the integral operator
$F : L^2(S^2) \to L^2(S^2)$ with kernel $u_\infty(\hat{x}, d)$ given by

$$ (Fg)(\hat{x}) := \int_{S^2} u_\infty(\hat{x}, d)\, g(d)\, ds(d), \qquad \hat{x} \in S^2. $$

With the far field pattern

$$ \Phi_\infty(\hat{x}, z) = \frac{1}{4\pi}\, e^{-ik\,\hat{x}\cdot z} $$

of the fundamental solution

$$ \Phi(x, z) := \frac{e^{ik|x-z|}}{4\pi|x-z|}, \qquad x \neq z, $$

of the Helmholtz equation with source point $z \in \mathbb{R}^3$, the linear sampling
method is based on the ill-posed equation

$$ F g_z = \Phi_\infty(\,\cdot\,, z) \qquad (29) $$

whereas the factorization method is based on

$$ (F^* F)^{1/4} g_z = \Phi_\infty(\,\cdot\,, z). \qquad (30) $$
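The asymptotic relation between $\Phi$ and its far field pattern $\Phi_\infty$ is easy to verify numerically. The sketch below (my own illustration; the values of $k$, $z$ and the observation direction are arbitrary) compares $\Phi(x,z)$ with the leading term $\frac{e^{ik|x|}}{|x|}\,\Phi_\infty(\hat{x},z)$ along a fixed direction and observes the error decay as $|x|$ grows.

```python
import numpy as np

k = 1.5
z = np.array([0.2, -0.1, 0.3])     # source point (arbitrary choice)

def Phi(x, z):
    # fundamental solution of the Helmholtz equation
    r = np.linalg.norm(x - z)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def far_field_form(x, z):
    # leading asymptotic term  e^{ik|x|}/|x| * Phi_inf(x_hat, z)
    R = np.linalg.norm(x)
    xhat = x / R
    return np.exp(1j * k * R) / (4 * np.pi * R) * np.exp(-1j * k * np.dot(xhat, z))

xhat = np.array([1.0, 2.0, 2.0]) / 3.0   # fixed unit observation direction
for R in (10.0, 100.0, 1000.0):
    x = R * xhat
    print(R, abs(Phi(x, z) - far_field_form(x, z)))
```

The relative error decays like $O(1/|x|)$, consistent with the remainder term in the asymptotic expansion.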

An essential tool in the linear sampling method are Herglotz wave functions
with kernel $g$, defined as superpositions of plane waves:

$$ v_g(x) := \int_{S^2} e^{ik\,x\cdot d}\, g(d)\, ds(d), \qquad x \in \mathbb{R}^3. $$
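For the constant kernel $g = 1$ this integral has the closed form $v_1(x) = 4\pi\,\sin(k|x|)/(k|x|)$ (a standard identity, not stated in the article), which makes a convenient test for a quadrature rule on the sphere. A minimal sketch, with arbitrary choices of $k$ and $x$:

```python
import numpy as np

k = 2.0

def herglotz_const(x, n=200):
    # Herglotz wave function with kernel g = 1, by a simple product rule:
    # spectrally accurate rectangle rule in phi, trapezoid-like rule in theta
    # with the sin(theta) surface weight.
    theta, dtheta = np.linspace(0.0, np.pi, n, retstep=True)
    phi, dphi = np.linspace(0.0, 2 * np.pi, 2 * n, endpoint=False, retstep=True)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    d = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
    integrand = np.exp(1j * k * np.tensordot(x, d, axes=1)) * np.sin(T)
    return np.sum(integrand) * dtheta * dphi

x = np.array([0.4, 0.1, -0.3])
r = np.linalg.norm(x)
print(abs(herglotz_const(x) - 4 * np.pi * np.sin(k * r) / (k * r)))
```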

It can be shown that if $z \in D$ then the value of the Herglotz wave function
$v_{g_{z,\alpha}}(z)$, with the kernel $g_{z,\alpha}$ given by the solution of (29) obtained
by Tikhonov regularization with parameter $\alpha$, remains bounded as $\alpha \to 0$,
whereas it is unbounded if $z \notin D$. This criterion can be evaluated numerically
on a sufficiently fine grid of points to visualize the scatterer $D$. The
main feature of the factorization method is the fact that the equation (30)
is solvable in $L^2(S^2)$ if and only if $z \in D$. This can be utilized to visualize
the scatterer on a grid with the aid of the solvability condition (23) in terms
of a singular system of $F$. For details we refer to [3].
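The linear-algebra core of both methods can be sketched in a few lines. In the sketch below (my own schematic, not from the article) the matrix F is a random placeholder standing in for a discretization of the far field operator, i.e. samples of $u_\infty(\hat{x}_i, d_j)$ times quadrature weights, so the computed indicator values carry no physical meaning here; the code only illustrates the Tikhonov step behind (29) and the Picard-type series behind (30).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Placeholder far field matrix; in practice F[i, j] holds measured far field
# data u_inf(x_i, d_j) times a quadrature weight on S^2.
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# Placeholder right hand side standing in for samples of Phi_inf(., z).
rhs = rng.standard_normal(n) + 1j * rng.standard_normal(n)

U, s, Vh = np.linalg.svd(F)

def tikhonov(alpha):
    # Linear sampling step: g_{z,alpha} = (alpha I + F^* F)^{-1} F^* rhs,
    # computed via the SVD of F.
    return Vh.conj().T @ (s / (alpha + s**2) * (U.conj().T @ rhs))

g = tikhonov(1e-3)
indicator = 1.0 / np.linalg.norm(g)   # small where (29) is badly unsolvable

# Factorization method: solvability of (F^* F)^{1/4} g = rhs is equivalent
# to convergence of the Picard series  sum_j |<v_j, rhs>|^2 / s_j  over the
# singular system of F; in the discrete setting one monitors its size.
picard = np.sum(np.abs(Vh @ rhs) ** 2 / s)
print(indicator, picard)
```

Evaluating such indicators on a grid of sampling points z yields the visualizations of the scatterer described above.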
The references [1, 2, 4] are evidence of my continuing love for integral
equations.

References
[1] Colton, D. and Kress, R. 1998 Inverse Acoustic and Electromagnetic Scattering Theory. 2nd ed. Springer-Verlag, New York.
[2] Ivanyshyn, O., Kress, R. and Serranho, P. 2010 Huygens' principle and iterative methods in inverse obstacle scattering. Advances in Computational Mathematics 33, 413-429.
[3] Kirsch, A. and Grinberg, N. 2008 The Factorization Method for Inverse Problems. Oxford University Press, Oxford.
[4] Kress, R. 1997 Linear Integral Equations. 2nd ed. Springer-Verlag, New York.
[5] Natterer, F. 2001 The Mathematics of Computerized Tomography. SIAM, Philadelphia.
