    ‖φ‖∞ := sup_{x∈E} |φ(x)|,  φ ∈ B(E).

It is a standard exercise to check that B(E) and C_0(E) are Banach spaces with this norm. In general, a function φ : E → R will be called an observable. Moreover, if f ∈ C_0(E) the sup-norm is actually a true maximum, as is easily proved by applying the Weierstrass theorem, E being locally compact. Sometimes it will be useful to recall the
Theorem 2.1 (Riesz). The topological dual of C_0(E) is the space of all bounded real-valued measures on B(E). In particular,

    ⟨μ, φ⟩ = ∫_E φ(x) μ(dx).

Moreover, C_0(E)* is the closed span of {δ_x : x ∈ E}, where ⟨δ_x, φ⟩ := φ(x).
The natural space for trajectories of E-valued Markov processes is the space

    D_E[0,+∞[ := {ξ : [0,+∞[ → E, right continuous and with left limits}.

Frenchmen call this type of trajectory càdlàg: continue à droite et avec limite à gauche. The space E is called the state space. We define also the classical coordinate mappings

    π_t : D_E[0,+∞[ → E,  π_t(ξ) := ξ(t),  t ≥ 0.

Moreover, we will define
• F, the smallest σ-algebra on D_E[0,+∞[ such that all π_t are measurable;
• F_t, the smallest σ-algebra on D_E[0,+∞[ such that all π_s for s ≤ t are measurable.
Clearly (F_t) is an increasing family of σ-algebras.
3 Markov Processes

Definition 3.1. Let (E, d) be a metric space. A family (P_x)_{x∈E} of probability measures on the path space (D_E[0,+∞[, F) is called a Markov process if
i) P_x(ξ(0) = x) = 1, for any x ∈ E.
ii) (Markov property) P_x(ξ(t + ·) ∈ F | F_t) = P_{ξ(t)}(F), for any F ∈ F and t ≥ 0.
iii) the mapping x ↦ P_x(F) is measurable for any F ∈ F.
Let (P_x)_{x∈E} be a Markov process. We denote by E_x the expectation w.r.t. P_x, that is

    E_x[Φ] = ∫_{D_E[0,+∞[} Φ dP_x,  Φ ∈ L¹(D_E[0,+∞[, F, P_x).

The Markov property has a more flexible and general form by means of conditional expectations:

    E_x[Φ(ξ(t + ·)) | F_t] = E_{ξ(t)}[Φ],  Φ bounded and measurable.   (3.1)
We now introduce the fundamental object of our investigations: let's define

    S(t)φ(x) := E_x[φ(ξ(t))] = ∫_{D_E[0,+∞[} φ(ξ(t)) dP_x(ξ),  φ ∈ B(E).   (3.2)

We will see immediately that every S(t) is well defined for t ≥ 0. The family (S(t))_{t≥0} is called the Markov semigroup associated to the process (P_x)_{x∈E}. This is because of the following
Proposition 3.2. Let (P_x)_{x∈E} be a Markov process on E and (S(t))_{t≥0} be the associated Markov semigroup. Then:
i) S(t) : B(E) → B(E) is a bounded linear operator for any t ≥ 0 and ‖S(t)φ‖∞ ≤ ‖φ‖∞ for any φ ∈ B(E), t ≥ 0 (that is, ‖S(t)‖ ≤ 1 for any t ≥ 0).
ii) S(0) = I.
iii) S(t + r) = S(t)S(r), for any t, r ≥ 0.
iv) S(t)φ ≥ 0 a.e. if φ ≥ 0 a.e.; in particular, if φ ≤ ψ a.e., then S(t)φ ≤ S(t)ψ a.e.
v) S(t)1 = 1 a.e. (here 1 is the function constantly equal to 1).
Proof. i) It is standard (for the measurability proceed by approximation: the statement is true for φ = 1_A, A ∈ B(E), by iii) of the definition of Markov process, because

    S(t)1_A(x) = P_x(ξ(t) ∈ A) = P_x(π_t^{-1}(A)),

and F := π_t^{-1}(A) ∈ F; hence it holds for sums of indicators, that is for simple functions; for a general φ take first φ ≥ 0 and approximate it by an increasing sequence of simple functions). Linearity follows from the linearity of the integral. Clearly

    |S(t)φ(x)| ≤ E_x[|φ(ξ(t))|] ≤ ‖φ‖∞,  ∀x ∈ E,  ⟹  ‖S(t)φ‖∞ ≤ ‖φ‖∞,  ∀t ≥ 0.

In other words S(t) ∈ L(B(E)) and ‖S(t)‖ ≤ 1.
ii) Evident.
iii) This involves the Markov property:

    S(t + r)φ(x) = E_x[φ(ξ(t + r))] = E_x[ E_x[φ(ξ(t + r)) | F_t] ]
                 = E_x[ E_{ξ(t)}[φ(ξ(r))] ]    (by (3.1))
                 = E_x[ (S(r)φ)(ξ(t)) ] = S(t)[S(r)φ](x).

iv), v) Evident.
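When E = {1, …, n} is finite, B(E) is just Rⁿ and the semigroup is a family of stochastic matrices. The following sketch (a hypothetical two-state chain with jump rates a, b — the rates, names and closed form below are illustrative, not from the text) checks properties i)–v) of Proposition 3.2 numerically.

```python
import math

# Two-state chain with jump rates a (1 -> 2) and b (2 -> 1).
# For Q = [[-a, a], [b, -b]] the semigroup e^{tQ} has the standard closed form
# S(t) = Pi + e^{-(a+b)t}(I - Pi), Pi = stationary row (b/(a+b), a/(a+b)).
a, b = 2.0, 3.0

def S(t):
    p = b / (a + b)                      # stationary mass of state 1
    e = math.exp(-(a + b) * t)
    return [[p + e * (1 - p), (1 - p) - e * (1 - p)],
            [p - e * p,       (1 - p) + e * p]]

def mat_mul(P, R):
    return [[sum(P[i][k] * R[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# iv), v): S(t) is positivity preserving and S(t)1 = 1 (rows sum to 1).
for t in (0.0, 0.3, 1.7):
    assert all(abs(sum(row) - 1.0) < 1e-12 for row in S(t))
    assert all(x >= 0.0 for row in S(t) for x in row)

# ii): S(0) = I.
assert all(abs(S(0.0)[i][j] - (1.0 if i == j else 0.0)) < 1e-15
           for i in range(2) for j in range(2))

# iii): semigroup law S(t + r) = S(t) S(r).
TR, TplusR = mat_mul(S(0.4), S(1.1)), S(1.5)
assert all(abs(TR[i][j] - TplusR[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Property i) (contractivity on B(E) with the sup-norm) is immediate from the row-stochasticity checked above.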
4 Feller processes

To deal with bounded measurable observables is in general quite difficult because of their poor properties, so it's better to restrict to continuous observables:

Definition 4.1 (Feller property). Let (S(t))_{t≥0} be the Markov semigroup associated to a Markov process (P_x)_{x∈E}, where (E, d) is locally compact. We say that the semigroup fulfils the Feller property if

    S(t)f ∈ C_0(E),  ∀f ∈ C_0(E), ∀t ≥ 0.
This property turns out to give strong continuity of the semigroup:

Theorem 4.2 (strong continuity). Let (S(t))_{t≥0} be the Markov semigroup associated to a Markov process (P_x)_{x∈E}, where (E, d) is locally compact. If (S(t))_{t≥0} fulfils the Feller property, it is then strongly continuous on C_0(E), that is

    S(·)φ ∈ C([0,+∞[; C_0(E)),  ∀φ ∈ C_0(E).
Proof. First we prove right weak continuity, that is

    lim_{t→t_0+} S(t)φ(x) = S(t_0)φ(x),  ∀x ∈ E, ∀φ ∈ C_b(E).

This follows immediately as an application of dominated convergence, because trajectories are right continuous. Indeed

    lim_{t→t_0+} S(t)φ(x) = lim_{t→t_0+} ∫_{D_E[0,+∞[} φ(ξ(t)) P_x(dξ).

Now: |φ(ξ(t))| ≤ ‖φ‖∞, which is P_x-integrable, and ξ(t) → ξ(t_0) as t → t_0+ because ξ ∈ D_E[0,+∞[. In a similar way one obtains the corresponding statement for left limits. Now introduce

    R_λφ(x) := ∫_0^{+∞} e^{−λt} S(t)φ(x) dt,  λ > 0, x ∈ E.
We will see later what the meaning of this is. The integral is well defined and convergent because

    |e^{−λt} S(t)φ(x)| = e^{−λt} |S(t)φ(x)| ≤ e^{−λt} ‖φ‖∞.

We claim that for every φ ∈ C_0(E) and λ > 0

    S(t)R_λφ → S(t_0)R_λφ  in C_0(E), as t → t_0.
We start with the case t_0 = 0. Notice that

    S(t)R_λφ(x) = S(t) ∫_0^{+∞} e^{−λr} S(r)φ(x) dr = ∫_0^{+∞} e^{−λr} S(r + t)φ(x) dr
                = e^{λt} ∫_t^{+∞} e^{−λr} S(r)φ(x) dr,

hence

    S(t)R_λφ(x) − R_λφ(x) = (e^{λt} − 1) ∫_t^{+∞} e^{−λr} S(r)φ(x) dr − ∫_0^t e^{−λr} S(r)φ(x) dr,
therefore

    ‖S(t)R_λφ − R_λφ‖∞ ≤ (e^{λt} − 1) ∫_t^{+∞} e^{−λr} ‖φ‖∞ dr + ∫_0^t e^{−λr} ‖φ‖∞ dr
                       ≤ ((e^{λt} − 1)/λ) ‖φ‖∞ + t ‖φ‖∞ → 0,
as t → 0+. For generic t_0 we have

    ‖S(t)R_λφ − S(t_0)R_λφ‖∞ ≤ ‖S(t − t_0)R_λφ − R_λφ‖∞ → 0,

as t → t_0+. Now we can prove the left continuity at t_0: assuming now t < t_0,

    ‖S(t)R_λφ − S(t_0)R_λφ‖∞ ≤ ‖S(t_0 − t)R_λφ − R_λφ‖∞ → 0,  t → t_0−.
We will now show that the set {R_λφ : φ ∈ C_0(E), λ > 0} is dense in C_0(E); by the above, strong continuity then extends to all of C_0(E). By the Riesz theorem it is enough to show that any μ ∈ C_0(E)* such that

    0 = ⟨μ, R_λφ⟩ = ∫_E R_λφ(x) μ(dx),  ∀φ ∈ C_0(E), ∀λ > 0,

must vanish. Notice that, changing variable r = λt,

    λR_λφ(x) = ∫_0^{+∞} λe^{−λt} S(t)φ(x) dt = ∫_0^{+∞} e^{−r} S(r/λ)φ(x) dr.
Applying the right continuity at 0 of the semigroup and dominated convergence it is easy to deduce that, as λ → +∞,

    λR_λφ(x) → ∫_0^{+∞} e^{−r} S(0)φ(x) dr = S(0)φ(x) ∫_0^{+∞} e^{−r} dr = φ(x).

Moreover, always by the previous formula,

    ‖λR_λφ‖∞ ≤ ∫_0^{+∞} e^{−r} ‖φ‖∞ dr = ‖φ‖∞.
Therefore, applying dominated convergence (μ is a finite measure) we have

    0 = λ ∫_E R_λφ dμ → ∫_E φ dμ,  ∀φ ∈ C_0(E),

hence μ = 0, and the density follows.

It is then convenient to extend the transition probabilities to the one-point compactification E ∪ {Δ}, setting

    P̄_t(x, F) := P_t(x, F),        if x ∈ E, F ⊆ E,
    P̄_t(x, F) := 1 − P_t(x, E),    if x ∈ E, F = {Δ},
    P̄_t(x, F) := 0,                if x = Δ, F ⊆ E,
    P̄_t(x, F) := 1,                if x = Δ, F = {Δ}.
It is easy to check that i) and ii) of the previous Proposition are still true.

Transition probabilities are a good tool to construct Markov processes. The idea is to use the Kolmogorov extension theorem, imposing the Markov property through the transition probabilities by means of the Chapman–Kolmogorov equation. Basically we want that

    P_x(ξ ∈ D_E[0,+∞[ : ξ(0) = x, ξ(t_1) ∈ F_1, …, ξ(t_n) ∈ F_n) =
        = ∫_{F_1} P_{t_1}(x, dy_1) ∫_{F_2} P_{t_2−t_1}(y_1, dy_2) ⋯ ∫_{F_n} P_{t_n−t_{n−1}}(y_{n−1}, dy_n).   (4.2)

We are not interested here in going deeper into this; see Revuz & Yor [4].
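On a finite state space the integrals in (4.2) become sums over intermediate states, so a finite-dimensional distribution is computed by chaining transition matrices. A sketch (a hypothetical symmetric two-state chain; the function name `fdd` and the rates are illustrative, not from the text):

```python
import math

# Transition function of the two-state chain with unit rates both ways:
# P_t = [[(1+e^{-2t})/2, (1-e^{-2t})/2], [(1-e^{-2t})/2, (1+e^{-2t})/2]].
def P(t):
    e = math.exp(-2.0 * t)
    return [[(1 + e) / 2, (1 - e) / 2],
            [(1 - e) / 2, (1 + e) / 2]]

def fdd(x, times, sets):
    """P_x(xi(t_1) in F_1, ..., xi(t_n) in F_n) via the nested sums of (4.2)."""
    probs = {x: 1.0}                 # mass surviving the constraints so far
    prev_t = 0.0
    for t, F in zip(times, sets):
        step = P(t - prev_t)
        probs = {y: sum(p * step[z][y] for z, p in probs.items()) for y in F}
        prev_t = t
    return sum(probs.values())

full = [0, 1]
# total mass is 1 when no constraint is imposed ...
assert abs(fdd(0, [0.5, 1.2], [full, full]) - 1.0) < 1e-12
# ... and summing out the intermediate time reproduces the one-point law
# (Chapman-Kolmogorov: P_{0.5} P_{0.7} = P_{1.2}).
assert abs(fdd(0, [0.5, 1.2], [full, [1]]) - fdd(0, [1.2], [[1]])) < 1e-12
```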
5 Strongly continuous semigroups on Banach spaces

In this section we will see some general facts that involve only the structure of a continuous semigroup on a generic Banach space X.

Definition 5.1. A family (S(t))_{t≥0} of bounded linear operators on a Banach space X is called a strongly continuous semigroup if
i) S(0) = I;
ii) S(t + r) = S(t)S(r), for all t, r ≥ 0;
iii) t ↦ S(t)φ ∈ C([0,+∞[; X) for all φ ∈ X.
If ‖S(t)‖ ≤ 1 for all t ≥ 0 the semigroup is called a contraction semigroup.

By what we have seen in the previous section we will be particularly interested in contraction semigroups, so we will limit the general discussion to this case, even if all the theorems have extensions to the general case. It is natural to expect that, in a suitable sense, S(t) = e^{tA} for some A. Thinking of the case A ∈ L(X),

    A = lim_{t→0+} (e^{tA} − I)/t.

For this reason we will introduce the following
Definition 5.2. Let (S(t))_{t≥0} be a strongly continuous semigroup on a Banach space X. The operator

    Aφ := lim_{h→0+} (S(h)φ − φ)/h,  φ ∈ D(A) := { φ ∈ X : lim_{h→0+} (S(h)φ − φ)/h exists }

is called the infinitesimal generator of (S(t))_{t≥0}.
The reason why we call it the infinitesimal generator will be clear from the Hille–Yosida theorem: we will first characterize some properties an infinitesimal generator fulfils; once we have these properties we will see that they are enough to construct a strongly continuous semigroup from an operator which fulfils them. A first set of properties is given by the

Theorem 5.3. Let X be a Banach space, (S(t))_{t≥0} a contraction semigroup and A its infinitesimal generator. Then
i) D(A) is dense in X;
ii) A is closed (i.e. G(A) := {(φ, Aφ) : φ ∈ D(A)} ⊂ X × X is closed in the product topology of X × X).
Proof. i) Let φ ∈ X and define φ_ε := (1/ε) ∫_0^ε S(r)φ dr. Clearly φ_ε → φ as ε → 0 (mean value theorem). Let's prove that φ_ε ∈ D(A) for all ε > 0. We have

    (1/h)(S(h)φ_ε − φ_ε) = (1/ε)(1/h) [ S(h) ∫_0^ε S(r)φ dr − ∫_0^ε S(r)φ dr ]
                         = (1/ε)(1/h) [ ∫_h^{ε+h} S(r)φ dr − ∫_0^ε S(r)φ dr ]
                         = (1/ε) [ (1/h) ∫_ε^{ε+h} S(r)φ dr − (1/h) ∫_0^h S(r)φ dr ]
                         → (1/ε)(S(ε)φ − φ),

always in force of the mean value theorem. This proves i).
ii) We have to prove that

    if (φ_n, Aφ_n) → (φ, ψ), then (φ, ψ) ∈ G(A), i.e. φ ∈ D(A) and Aφ = ψ.

In other words, we have to prove that

    lim_{h→0+} (S(h)φ − φ)/h = ψ.

Now

    S(h)φ − φ = lim_n (S(h)φ_n − φ_n).

Because φ_n ∈ D(A) we know that t ↦ S(t)φ_n is differentiable at t = 0 from the right. More:
Lemma 5.4. If φ ∈ D(A) then

    (d/dt) S(t)φ = S(t)Aφ = A S(t)φ,  ∀t ≥ 0.

Proof of the Lemma. We have to prove that t ↦ S(t)φ is differentiable for all t > 0 and that S(t)φ ∈ D(A). We start with the first.
First step: the right derivative (d⁺/dt) S(t)φ exists. Indeed

    (d⁺/dt) S(t)φ = lim_{h→0+} (S(t + h)φ − S(t)φ)/h = S(t) lim_{h→0+} (S(h)φ − φ)/h = S(t)Aφ.
Second step: the left derivative (d⁻/dt) S(t)φ exists. Indeed

    (d⁻/dt) S(t)φ = lim_{h→0+} (S(t)φ − S(t − h)φ)/h = lim_{h→0+} S(t − h) (S(h)φ − φ)/h.
Now:

    ‖ S(t − h)(S(h)φ − φ)/h − S(t)Aφ ‖ ≤ ‖ S(t − h)[(S(h)φ − φ)/h − Aφ] ‖ + ‖ S(t − h)Aφ − S(t)Aφ ‖.

Clearly, by the strong continuity, the second term converges to 0. For the first, by the estimate ‖S(t)‖ ≤ 1 we obtain

    ‖ S(t − h)[(S(h)φ − φ)/h − Aφ] ‖ ≤ ‖ (S(h)φ − φ)/h − Aφ ‖ → 0,  h → 0+.

By this the conclusion follows.
Third step: S(t)φ ∈ D(A). Indeed

    (S(h)S(t)φ − S(t)φ)/h = S(t) (S(h)φ − φ)/h → S(t)Aφ,  ⟹  A S(t)φ = S(t)Aφ.
Coming back to the proof of the Theorem,

    S(h)φ_n − φ_n = ∫_0^h (d/dr) S(r)φ_n dr = ∫_0^h S(r)Aφ_n dr → ∫_0^h S(r)ψ dr.
To justify the last passage, notice that

    ‖ ∫_0^h S(r)Aφ_n dr − ∫_0^h S(r)ψ dr ‖ ≤ ∫_0^h ‖S(r)(Aφ_n − ψ)‖ dr ≤ h ‖Aφ_n − ψ‖ → 0.
Finally:

    S(h)φ − φ = ∫_0^h S(r)ψ dr.

Therefore,

    (S(h)φ − φ)/h = (1/h) ∫_0^h S(r)ψ dr → ψ,  ⟹  φ ∈ D(A) and Aφ = ψ.
The previous result gives the first indications about the properties of the infinitesimal generator. Of particular interest is the weak continuity property given by the closedness of the operator A. This property is weaker than continuity and it is fulfilled also by unbounded operators. For instance,

Example 5.5. Let X = C([0,1]) be endowed with the sup-norm ‖·‖∞ and A given by

    D(A) := { φ ∈ C¹([0,1]) : φ(0) = 0 },  Aφ = φ′;

then A is closed.

Sol. Indeed, if (φ_n) ⊂ D(A) is such that φ_n → φ in X with Aφ_n = φ′_n → ψ in X (that is, uniformly on [0,1]), by a well-known result φ ∈ C¹([0,1]) with φ′ = ψ; moreover φ(0) = lim_n φ_n(0) = 0, so φ ∈ D(A) and Aφ = ψ.

One could think of using the exponential series

    e^{tA} := ∑_{n=0}^{+∞} (tA)^n / n!
to define the semigroup by a given A. There is however another possible formula to define the exponential, that is

    e^{tA} = lim_{n→+∞} (I + tA/n)^n = lim_{n→+∞} (I − tA/n)^{−n}.   (5.1)

While the first limit seems bad, because I + (t/n)A and its powers inherit the bad properties of A, the second is much more interesting, because we may expect that (I − (t/n)A)^{−1} is nicer than A.
Example 5.6. Compute (λI − A)^{−1} in the case of the A defined in the previous example.

Sol. We have

    (λI − A)φ = ψ  ⟺  λφ − φ′ = ψ,  φ = (λI − A)^{−1}ψ.
We may think that, given ψ, we have to solve the differential equation φ′ = λφ − ψ. By the variation of constants formula,

    φ(x) = e^{∫_0^x λ dy} [ ∫_0^x e^{−∫_0^y λ dz} (−ψ(y)) dy + C ] = e^{λx} [ C − ∫_0^x e^{−λy} ψ(y) dy ].
Now, imposing that φ ∈ D(A) we have

    φ(0) = 0,  ⟹  C = 0,  ⟹  φ(x) = (λI − A)^{−1}ψ(x) = −e^{λx} ∫_0^x e^{−λy} ψ(y) dy.
Writing

    (I − (t/n)A)^{−1} = (n/t) ((n/t)I − A)^{−1} =: (n/t) R_{n/t},

we find a very familiar concept of Functional Analysis:
Definition 5.7. Let λ ∈ C. If R_λ := (λI − A)^{−1} ∈ L(X) we say that λ ∈ ρ(A) (resolvent set) and we call R_λ the resolvent operator. The set σ(A) := C ∖ ρ(A) is called the spectrum of A. Analytically the spectrum is divided into
• the point spectrum, denoted by σ_p(A), that is the set of λ ∈ C such that λI − A is not injective (in other words: the elements of σ_p(A) are the eigenvalues);
• the continuous spectrum, denoted by σ_c(A), that is the set of λ ∈ C such that λI − A is bijective but the inverse is not continuous;
• the residual spectrum, what remains, that is σ(A) ∖ (σ_p(A) ∪ σ_c(A)).
Looking at the second limit in (5.1), we would expect that ρ(A) ⊃ ]λ_0, +∞[ for some λ_0. To this aim we need a relationship between the resolvent operator and the semigroup. This is formally easy: writing S(t) = e^{tA} and treating λ − A as a positive number, we have

    ∫_0^{+∞} e^{−λr} S(r) dr = ∫_0^{+∞} e^{−r(λ−A)} dr = [ −e^{−r(λ−A)}/(λ − A) ]_{r=0}^{r=+∞} = (λI − A)^{−1}.

This formula turns out to be true. Precisely we have the
Theorem 5.8. Let (S(t))_{t≥0} be a contraction semigroup on a Banach space X with generator A. Then:
i) ρ(A) ⊃ {λ ∈ C : Re λ > 0} and

    R_λφ = ∫_0^{+∞} e^{−λr} S(r)φ dr,  ∀λ ∈ C : Re λ > 0, ∀φ ∈ X;   (5.2)

ii) the following estimate holds:

    ‖R_λ‖ ≤ 1/Re λ,  ∀λ ∈ C : Re λ > 0.   (5.3)

Figure 1: The spectrum of A is contained in the half plane Re λ ≤ 0.
Proof. We first prove that the integral in (5.2) is well defined. This is easy because, recalling that ‖S(t)‖ ≤ 1 for any t ≥ 0, we have

    ‖e^{−λr} S(r)φ‖ ≤ e^{−r Re λ} ‖φ‖ ∈ L¹([0,+∞[),  ∀λ ∈ C : Re λ > 0.

Therefore, r ↦ e^{−λr}S(r)φ being in C([0,+∞[; X), the integral is well defined. By the same estimate we get

    ‖R_λφ‖ ≤ ∫_0^{+∞} ‖φ‖ e^{−r Re λ} dr = ‖φ‖ / Re λ,  ∀φ ∈ X,

that is (5.3). It remains to prove that the integral operator is indeed the resolvent operator, that is:

    a) R_λφ ∈ D(A), ∀φ ∈ X;  b) (λI − A)R_λφ = φ, ∀φ ∈ X;  c) R_λ(λI − A)φ = φ, ∀φ ∈ D(A).
We start from the first. Notice that

    (S(h)R_λφ − R_λφ)/h = (1/h) [ S(h) ∫_0^{+∞} e^{−λr} S(r)φ dr − ∫_0^{+∞} e^{−λr} S(r)φ dr ]
                        = (1/h) [ ∫_h^{+∞} e^{−λ(r−h)} S(r)φ dr − ∫_0^{+∞} e^{−λr} S(r)φ dr ]
                        = ((e^{λh} − 1)/h) ∫_h^{+∞} e^{−λr} S(r)φ dr
                          + (1/h) [ ∫_h^{+∞} e^{−λr} S(r)φ dr − ∫_0^{+∞} e^{−λr} S(r)φ dr ]
                        → λ ∫_0^{+∞} e^{−λr} S(r)φ dr − φ = λR_λφ − φ.
Therefore R_λφ ∈ D(A) and

    AR_λφ = λR_λφ − φ,  ⟹  (λI − A)R_λφ = φ,  ∀φ ∈ X,

that is (λI − A)R_λ = I, that is b). To finish, we have to prove c). Let φ ∈ D(A). Notice that
    R_λ(λI − A)φ = λ ∫_0^{+∞} e^{−λr} S(r)φ dr − ∫_0^{+∞} e^{−λr} S(r)Aφ dr.

Because φ ∈ D(A), by Lemma 5.4 we have S(r)Aφ = (S(r)φ)′; hence, integrating by parts,

    ∫_0^{+∞} e^{−λr} (S(r)φ)′ dr = [ e^{−λr} S(r)φ ]_{r=0}^{r=+∞} + λ ∫_0^{+∞} e^{−λr} S(r)φ dr = −φ + λR_λφ,

that is the conclusion.
6 Hille–Yosida theorem

In the previous section we have seen that:

Corollary 6.1. The infinitesimal generator A : D(A) ⊂ X → X of a contraction semigroup fulfils the following properties:
i) A is densely defined and closed;
ii) ρ(A) ⊃ {λ ∈ C : Re λ > 0} and ‖R_λ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ for any λ ∈ C such that Re λ > 0.
Actually we may notice that closedness is redundant, because of the following general fact:

Proposition 6.2. Let A : D(A) ⊂ X → X be a linear operator such that ρ(A) ≠ ∅. Then A is closed.

Proof. Let (φ_n) ⊂ D(A) be such that

    φ_n → φ,  Aφ_n → ψ.

We need to prove φ ∈ D(A) and Aφ = ψ. Now let λ ∈ ρ(A) and consider

    λφ_n − Aφ_n = (λI − A)φ_n,  φ_n = (λI − A)^{−1}(λφ_n − Aφ_n) → (λI − A)^{−1}(λφ − ψ),

because (λI − A)^{−1} ∈ L(X). In particular

    φ = (λI − A)^{−1}(λφ − ψ) ∈ D(A),  and  λφ − Aφ = λφ − ψ,  ⟹  Aφ = ψ.
In this section we will see that these conditions are sufficient to construct a unique contraction semigroup whose generator is just A. The idea is to construct e^{tA} by approximations

    e^{tA} = lim_{λ→+∞} e^{tA_λ},

where the A_λ ∈ L(X) are suitable approximations of A. Such approximations, which will be called Yosida regularizations, are extraordinarily intuitive because they are given, formally, by

    A_λ := A / (I − (1/λ)A).

Let us introduce formally these operators:
Definition 6.3. Let A : D(A) ⊂ X → X be a densely defined linear operator such that

    ρ(A) ⊃ {λ ∈ C : Re λ > 0},  ‖R_λ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ,  ∀λ ∈ C : Re λ > 0.

We call the Yosida regularization of A the family (A_λ)_{λ>0} ⊂ L(X) defined as

    A_λ := λAR_λ = λA(λI − A)^{−1},  λ > 0.
Remark 6.4. At first sight it is maybe not evident that A_λ ∈ L(X): indeed

    (λI − A)R_λ = I_X,  ⟹  A_λ = λAR_λ = λ(λR_λ − I_X) ∈ L(X).

Morally A_λφ ≈ Aφ for φ ∈ D(A). Precisely:

Lemma 6.5. Under the assumptions of Definition 6.3,

    lim_{λ→+∞} A_λφ = Aφ,  ∀φ ∈ D(A).
Proof. Because, as in the Remark, AR_λ = λR_λ − I_X, we can write

    A_λ = λAR_λ = λ²R_λ − λI_X.

Therefore, if φ ∈ D(A) we have A_λφ = λ(λR_λφ − φ). But R_λ(λI − A) = I on D(A), so

    R_λ(λφ − Aφ) = φ,  ⟹  λR_λφ − φ = R_λAφ,  ⟹  A_λφ = λR_λAφ.

Set ψ := Aφ. If we prove that

    lim_{λ→+∞} λR_λψ = ψ,  ∀ψ ∈ X,   (6.1)

we are done. Assume first that ψ ∈ D(A): by the same identity as before,

    λR_λψ − ψ = R_λAψ,  ⟹  ‖λR_λψ − ψ‖ = ‖R_λAψ‖ ≤ ‖R_λ‖ ‖Aψ‖ ≤ (1/λ)‖Aψ‖ → 0,  λ → +∞.

In the general case ψ ∈ X, by density of D(A) in X there exists η ∈ D(A) such that ‖ψ − η‖ ≤ ε. Therefore,

    ‖λR_λψ − ψ‖ ≤ ‖λR_λ(ψ − η)‖ + ‖λR_λη − η‖ + ‖η − ψ‖ ≤ ε + ‖λR_λη − η‖ + ε,

and by this the conclusion follows easily.
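The Yosida regularization can be experimented with on a 2 × 2 matrix playing the role of A (a hypothetical example — for matrices everything is bounded, but the convergences (6.1) and of Lemma 6.5 are still visible):

```python
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (p, q), (r, s) = M
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

def mv(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

A = [[-1.0, 2.0], [0.0, -3.0]]      # hypothetical 'generator' (eigenvalues -1, -3)
phi = [1.0, -2.0]

def R(lam):                          # resolvent (lam I - A)^{-1}
    return inv2([[lam - A[0][0], -A[0][1]], [-A[1][0], lam - A[1][1]]])

def A_yosida(lam):                   # A_lam = lam^2 R_lam - lam I   (Remark 6.4)
    Rl = R(lam)
    return [[lam * lam * Rl[i][j] - (lam if i == j else 0.0) for j in range(2)]
            for i in range(2)]

def err_res(lam):                    # distance in (6.1): lam R_lam phi vs phi
    return max(abs(lam * u - v) for u, v in zip(mv(R(lam), phi), phi))

def err_gen(lam):                    # distance in Lemma 6.5: A_lam phi vs A phi
    return max(abs(u - v) for u, v in zip(mv(A_yosida(lam), phi), mv(A, phi)))

assert err_res(1e6) < 1e-4 and err_gen(1e6) < 1e-4
assert err_gen(1e3) > err_gen(1e6)   # the error decays as lam grows
```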
We are now ready for the main result:

Theorem 6.6 (Hille–Yosida). Let X be a Banach space and A : D(A) ⊂ X → X fulfil the hypotheses of Definition 6.3, that is:
i) A is densely defined;
ii) ρ(A) ⊃ {λ ∈ C : Re λ > 0} and

    ‖R_λ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ,  ∀λ ∈ C : Re λ > 0.   (6.2)

Then there exists a unique contraction semigroup (S(t))_{t≥0} whose generator is A. This semigroup will be denoted by (e^{tA})_{t≥0}.
Proof. As announced, we want to define S(t)φ := lim_{λ→+∞} e^{tA_λ}φ.

First step: construction of the semigroup. Because A_λ, A_μ ∈ L(X),

    e^{tA_λ} − e^{tA_μ} = ∫_0^1 (d/dr) [ e^{trA_λ} e^{t(1−r)A_μ} ] dr
                        = ∫_0^1 [ e^{trA_λ} (tA_λ) e^{t(1−r)A_μ} − e^{trA_λ} e^{t(1−r)A_μ} (tA_μ) ] dr.
Clearly, A_λA_μ = A_μA_λ. Therefore

    e^{tA_λ}φ − e^{tA_μ}φ = t ∫_0^1 e^{trA_λ} e^{t(1−r)A_μ} (A_λ − A_μ)φ dr,

henceforth

    ‖e^{tA_λ}φ − e^{tA_μ}φ‖ ≤ t ∫_0^1 ‖e^{trA_λ}‖ ‖e^{t(1−r)A_μ}‖ dr · ‖(A_λ − A_μ)φ‖.
Let us give an estimate of ‖e^{uA_λ}‖, u ≥ 0. Recall that A_λ = λ²R_λ − λI_X. Therefore(1)

    e^{uA_λ} = e^{u(λ²R_λ − λI_X)} = e^{uλ²R_λ} e^{−uλI_X} = e^{−uλ} e^{uλ²R_λ},

so

    ‖e^{uA_λ}‖ = e^{−uλ} ‖e^{uλ²R_λ}‖ ≤ e^{−uλ} e^{uλ²‖R_λ‖} ≤ e^{−uλ} e^{uλ} = 1.   (6.3)
We deduce from this that

    ‖e^{tA_λ}φ − e^{tA_μ}φ‖ ≤ t ‖(A_λ − A_μ)φ‖.   (6.4)
Now, if φ ∈ D(A), by the previous Lemma A_λφ → Aφ, so (e^{tA_λ}φ)_{λ>0} is uniformly Cauchy on every interval [0, T], for all T > 0. Call S(·)φ the uniform limit function, defined on [0,+∞[. Passing to the limit in (6.4) we have

    ‖e^{tA_λ}φ − S(t)φ‖ ≤ t ‖A_λφ − Aφ‖,  ∀φ ∈ D(A).   (6.5)

In this way, for all t ≥ 0, we have defined S(t)φ := lim_{λ→+∞} e^{tA_λ}φ for φ ∈ D(A), with ‖S(t)φ‖ ≤ ‖φ‖.
X being complete and D(A) dense, it is easy to conclude that S(t) extends to all of X with ‖S(t)‖ ≤ 1. Of course S(0) = I and the semigroup law hold on all of X. It remains to prove that the strong continuity also extends to all of X. Fix φ ∈ X and let ψ ∈ D(A) be such that ‖φ − ψ‖ ≤ ε. Then, if h > 0,

    ‖S(t + h)φ − S(t)φ‖ ≤ ‖S(t + h)(φ − ψ)‖ + ‖S(t + h)ψ − S(t)ψ‖ + ‖S(t)(ψ − φ)‖
                        ≤ 2‖φ − ψ‖ + ‖S(t + h)ψ − S(t)ψ‖
                        ≤ 2ε + ‖S(t + h)ψ − S(t)ψ‖.
(1) Here we are using the property e^{A+B} = e^A e^B. Of course in general it is false, but if A and B commute, as in the present case, it is true.
Therefore,

    limsup_{h→0+} ‖S(t + h)φ − S(t)φ‖ ≤ 2ε.

But ε > 0 is arbitrary, so limsup_{h→0+} ‖S(t + h)φ − S(t)φ‖ = 0, and this says that S(·)φ is right continuous on [0,+∞[. For left continuity, if h > 0,

    ‖S(t − h)φ − S(t)φ‖ = ‖S(t − h)[φ − S(h)φ]‖ ≤ ‖φ − S(h)φ‖ → 0,  h → 0+,

by right continuity.
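In the scalar toy case A = a < 0 (hypothetical), A_λ = λa/(λ − a) and the estimate (6.5) can be observed directly:

```python
import math

a, t = -2.0, 1.0                       # hypothetical scalar 'generator' and time

def A_yosida(lam):                     # scalar Yosida regularization lam*a/(lam - a)
    return lam * a / (lam - a)

for lam in (1e1, 1e3, 1e5):
    approx = math.exp(t * A_yosida(lam))
    exact = math.exp(t * a)
    # estimate (6.5): |e^{t A_lam} phi - S(t) phi| <= t |A_lam phi - A phi|
    assert abs(approx - exact) <= t * abs(A_yosida(lam) - a) + 1e-12

# convergence e^{t A_lam} -> e^{t a} as lam -> +infinity
assert abs(math.exp(t * A_yosida(1e6)) - math.exp(t * a)) < 1e-5
```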
Second step: A is the generator of (S(t))_{t≥0}. Let

    B : D(B) ⊂ X → X,  D(B) := { φ ∈ X : lim_{h→0+} (S(h)φ − φ)/h exists },  Bφ := lim_{h→0+} (S(h)φ − φ)/h,

be the infinitesimal generator of (S(t))_{t≥0}. We will prove that D(B) = D(A) and A = B.
We start by proving that D(A) ⊂ D(B) and Aφ = Bφ for all φ ∈ D(A). Let φ ∈ D(A). By definition of S(h) we have

    S(h)φ − φ = lim_{λ→+∞} (e^{hA_λ}φ − φ) = lim_{λ→+∞} ∫_0^h (d/dr) e^{rA_λ}φ dr = lim_{λ→+∞} ∫_0^h e^{rA_λ} A_λφ dr.
It is natural to expect that

    lim_{λ→+∞} ∫_0^h e^{rA_λ} A_λφ dr = ∫_0^h S(r)Aφ dr.   (6.6)
We have

    ‖ ∫_0^h e^{rA_λ} A_λφ dr − ∫_0^h S(r)Aφ dr ‖ ≤ ∫_0^h ‖ e^{rA_λ}A_λφ − S(r)Aφ ‖ dr
        ≤ ∫_0^h ( ‖ e^{rA_λ}(A_λφ − Aφ) ‖ + ‖ e^{rA_λ}Aφ − S(r)Aφ ‖ ) dr
        ≤ h ‖A_λφ − Aφ‖ + ∫_0^h ‖ e^{rA_λ}Aφ − S(r)Aφ ‖ dr.
By Lemma 6.5 the first term goes to 0 as λ → +∞. For the second term notice first that e^{rA_λ}ψ → S(r)ψ uniformly on [0, h] for all ψ ∈ X. Indeed: by construction, this is true if ψ ∈ D(A). On the other hand, D(A) being dense in X, if η ∈ D(A) is such that ‖ψ − η‖ ≤ ε, by estimate (6.5)

    ‖e^{rA_λ}ψ − S(r)ψ‖ ≤ ‖e^{rA_λ}(ψ − η)‖ + ‖e^{rA_λ}η − S(r)η‖ + ‖S(r)(η − ψ)‖ ≤ 2ε + r ‖A_λη − Aη‖.

Therefore

    ‖e^{·A_λ}ψ − S(·)ψ‖_{∞,[0,h]} ≤ 2ε + h ‖A_λη − Aη‖,  ⟹  limsup_{λ→+∞} ‖e^{·A_λ}ψ − S(·)ψ‖_{∞,[0,h]} ≤ 2ε.
ε > 0 being arbitrary, the conclusion follows. This explains completely (6.6). As a consequence,

    S(h)φ − φ = ∫_0^h S(r)Aφ dr,  ∀φ ∈ D(A).   (6.7)

By the mean value theorem,

    lim_{h→0+} (S(h)φ − φ)/h = lim_{h→0+} (1/h) ∫_0^h S(r)Aφ dr = S(0)Aφ = Aφ,
and this means exactly that D(A) ⊂ D(B) and Bφ = Aφ for φ ∈ D(A).

The reverse inclusion is much softer. Indeed: because B is the generator of a contraction semigroup, it fulfils conditions i) and ii) of Theorem 5.8. In particular, 1 ∈ ρ(B), that is (I − B)^{−1} : X → D(B) is bounded and D(B) = (I − B)^{−1}X. On the other hand, we have seen that D(A) ⊂ D(B) and B|_{D(A)} = A. In particular, (I − B)D(A) = (I − A)D(A). By our assumption, 1 ∈ ρ(A), so D(A) = (I − A)^{−1}X, that is X = (I − A)D(A). Hence (I − B)D(A) = X, and since I − B is injective this is possible iff D(B) ⊂ D(A). By this the conclusion follows easily.
7 Generators of Feller semigroups

What we have seen in the previous two sections are general results that involve only the first three properties in the Definition 4.3 of a Feller semigroup. Now clearly the question is: which other property do we need in order that, in the specific case of the Banach space X = C_0(E), a linear operator A be the generator of a Feller semigroup? Knowing the general connection between a semigroup and its generator, we have immediately the

Proposition 7.1. Let (e^{tA})_{t≥0} be a Feller semigroup on C_0(E), (E, d) a locally compact metric space. Then the positive maximum principle holds:

    if φ ∈ D(A) has a maximum at x_0 with φ(x_0) ≥ 0, then Aφ(x_0) ≤ 0.
Proof. Let x_0 be a positive maximum point for φ. Notice first that by definition

    Aφ = lim_{h→0+} (S(h)φ − φ)/h.

The limit is intended in C_0(E), therefore in the uniform convergence. This means, in particular, that

    Aφ(x_0) = lim_{h→0+} (S(h)φ(x_0) − φ(x_0))/h.

Now, setting φ⁺ := max{φ, 0} ∈ C_0(E), we have

    S(h)φ ≤ S(h)φ⁺ ≤ ‖S(h)φ⁺‖∞ ≤ ‖φ⁺‖∞ = φ(x_0).

Therefore, if h > 0,

    (S(h)φ(x_0) − φ(x_0))/h ≤ (φ(x_0) − φ(x_0))/h = 0,  ⟹  Aφ(x_0) ≤ 0.
The maximum principle gives basically the estimate (6.2). Actually:

Proposition 7.2. Let A : D(A) ⊂ C_0(E) → C_0(E). If A fulfils the positive maximum principle then it is dissipative, that is

    ‖λφ − Aφ‖∞ ≥ λ‖φ‖∞,  ∀φ ∈ D(A), ∀λ > 0.   (7.1)
Proof. Because φ ∈ C_0(E), by the Weierstrass theorem there exists x_0 ∈ E such that

    |φ(x_0)| = max_{x∈E} |φ(x)| = ‖φ‖∞.

We may assume φ(x_0) ≥ 0 (otherwise replace φ with −φ). Therefore

    ‖λφ − Aφ‖∞ = sup_{x∈E} |λφ(x) − Aφ(x)| ≥ |λφ(x_0) − Aφ(x_0)|.

By the positive maximum principle, Aφ(x_0) ≤ 0, therefore λφ(x_0) − Aφ(x_0) ≥ 0, hence

    ‖λφ − Aφ‖∞ ≥ |λφ(x_0) − Aφ(x_0)| = λφ(x_0) − Aφ(x_0) ≥ λφ(x_0) = λ‖φ‖∞.
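On a finite state space the positive maximum principle is exactly the sign structure of a rate matrix Q (nonnegative off-diagonal entries, zero row sums), and the inequality (7.1) can be replayed numerically. A sketch with a hypothetical three-state Q:

```python
Q = [[-3.0, 1.0, 2.0],
     [ 0.5, -0.5, 0.0],
     [ 1.0, 4.0, -5.0]]   # hypothetical rates: Q[i][j] >= 0 for i != j, rows sum to 0

def apply_Q(phi):
    return [sum(Q[i][j] * phi[j] for j in range(3)) for i in range(3)]

def check(phi, lam=1.0):
    # positive maximum principle: at a maximum point x0 with phi[x0] >= 0,
    # (Q phi)(x0) = sum_j Q[x0][j] (phi[j] - phi[x0]) <= 0.
    x0 = max(range(3), key=lambda i: phi[i])
    if phi[x0] >= 0:
        assert apply_Q(phi)[x0] <= 1e-12
    # dissipativity (7.1): ||lam*phi - Q phi||_inf >= lam * ||phi||_inf
    Qphi = apply_Q(phi)
    lhs = max(abs(lam * phi[i] - Qphi[i]) for i in range(3))
    assert lhs >= lam * max(abs(v) for v in phi) - 1e-12

for phi in ([1.0, -2.0, 0.5], [3.0, 3.0, -1.0], [-1.0, -2.0, -0.5]):
    check(phi)
```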
Notice in particular that

Lemma 7.3. Let A : D(A) ⊂ X → X, X a normed space. If A is dissipative then λI − A is injective for every λ > 0. If it is also surjective then λ ∈ ρ(A) and ‖R_λ‖ ≤ 1/λ, λ > 0.
By Lemma 7.3 it is clear that λ_0 ∈ ρ(A). Therefore the unique thing to prove is that ρ(A) ⊃ ]0,+∞[, that is: every λ > 0 belongs to the resolvent set. The idea is to prove that ρ(A) ∩ ]0,+∞[ (which is non-empty by what we have just seen) is open and closed in ]0,+∞[: the latter being connected, ρ(A) ∩ ]0,+∞[ is therefore equal to all of ]0,+∞[.

First: ρ(A) is open, therefore ρ(A) ∩ ]0,+∞[ is open in ]0,+∞[. This is a general fact, so we state it separately:

Lemma 7.5. The resolvent set of any linear (possibly unbounded) operator is open in C and the resolvent map is continuous, R ∈ C(ρ(A); L(X)). In particular: if λ ∈ ρ(A) then B(λ, 1/‖R_λ‖) ⊂ ρ(A).
Proof. The argument is based on the following formal identity: fixed λ ∈ ρ(A) and μ ∈ C,

    R_μ = 1/(μ − A) = 1/((μ − λ) + (λ − A)) = (1/(λ − A)) · (1/(I + (μ − λ)/(λ − A))) = R_λ (I + (μ − λ)R_λ)^{−1}.   (7.2)

It is easy to check that if the right-hand side makes sense as a bounded linear operator then it is exactly R_μ. To give a meaning to the right-hand side, we have to justify that B := I + (μ − λ)R_λ has a bounded inverse; by the Neumann series this is the case when ‖I − B‖ < 1, and then B^{−1} = ∑_{n=0}^{∞} (I − B)^n ∈ L(X).(2)
(2) Actually, if the space X is reflexive, the Phillips thm. is an iff.
In our case

    ‖I − B‖ = ‖(μ − λ)R_λ‖ = |μ − λ| ‖R_λ‖ < 1  whenever  |μ − λ| < 1/‖R_λ‖.

This means that B(λ, 1/‖R_λ‖) ⊂ ρ(A). The continuity of the resolvent map follows again from (7.2). Indeed
    R_μ = R_λ ∑_{n=0}^{∞} ((λ − μ)R_λ)^n = R_λ + (λ − μ)R_λ² ∑_{n=0}^{∞} ((λ − μ)R_λ)^n,
    ⟹  ‖R_μ − R_λ‖ ≤ |μ − λ| ‖R_λ‖² ∑_{n=0}^{∞} ‖(λ − μ)R_λ‖^n.

Now, if |μ − λ| ≤ 1/(2‖R_λ‖) it follows that

    ‖R_μ − R_λ‖ ≤ |μ − λ| ‖R_λ‖² ∑_{n=0}^{∞} (1/2)^n = 2‖R_λ‖² |μ − λ|,

and now the conclusion is evident.
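Identity (7.2) and the Neumann series can be checked on a small matrix (a hypothetical 2 × 2 example; truncating at 60 terms is arbitrary but far beyond convergence):

```python
def inv2(M):
    (p, q), (r, s) = M
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 1.0], [-2.0, -3.0]]        # hypothetical operator (eigenvalues -1, -2)

def R(z):                              # resolvent (z I - A)^{-1}
    return inv2([[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]])

lam, mu = 2.0, 2.2                     # |mu - lam| small compared with 1/||R_lam||

# Neumann series: R_mu = R_lam * sum_n ((lam - mu) R_lam)^n, truncated at 60 terms.
C = [[(lam - mu) * R(lam)[i][j] for j in range(2)] for i in range(2)]
term = [[1.0, 0.0], [0.0, 1.0]]
series = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(60):
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = mul(term, C)

approx, exact = mul(R(lam), series), R(mu)
assert max(abs(approx[i][j] - exact[i][j])
           for i in range(2) for j in range(2)) < 1e-10
```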
Second: ρ(A) ∩ ]0,+∞[ is closed in ]0,+∞[. Let (λ_n) ⊂ ρ(A) ∩ ]0,+∞[ be such that λ_n → λ ∈ ]0,+∞[. We want to prove that λ ∈ ρ(A). By the previous arguments, this is equivalent to showing that R(λI − A) = X. So, fix ψ ∈ X. We look for φ ∈ D(A) such that

    λφ − Aφ = ψ.

Let φ_n ∈ D(A) be such that

    λ_nφ_n − Aφ_n = ψ.

The idea is to pass to the limit in this equation. We first have to prove that (φ_n) is convergent, in particular that it is Cauchy. To this aim notice that

    λ_nφ_n − Aφ_n = ψ,  λ_mφ_m − Aφ_m = ψ  ⟹  λ_n(φ_n − φ_m) − A(φ_n − φ_m) = (λ_m − λ_n)φ_m,

so, by dissipativity,

    |λ_m − λ_n| ‖φ_m‖ = ‖λ_n(φ_n − φ_m) − A(φ_n − φ_m)‖ ≥ λ_n ‖φ_n − φ_m‖.

Now: because λ_n → λ ∈ ]0,+∞[ we can say that λ_n ≥ δ for all n ∈ N, for some δ > 0. Suppose we have proved that (φ_n) is bounded, that is ‖φ_n‖ ≤ K for all n: we are done because, in this case,

    ‖φ_n − φ_m‖ ≤ (K/δ) |λ_n − λ_m| ≤ ε,  ∀n, m ≥ N_0(ε),

i.e. (φ_n) would be a Cauchy sequence. Boundedness of (φ_n) follows by the same dissipativity argument: indeed

    λ_n ‖φ_n‖ ≤ ‖λ_nφ_n − Aφ_n‖ = ‖ψ‖,  ⟹  ‖φ_n‖ ≤ ‖ψ‖/δ,  ∀n ∈ N.

Summarizing: (φ_n) converges to some φ. Then, because Aφ_n = λ_nφ_n − ψ, also (Aφ_n) is convergent. Now, recall that if ρ(A) ≠ ∅ then A is closed. So, this being the case in our context, we have φ ∈ D(A), Aφ_n → Aφ, and passing to the limit in the equation we get λφ − Aφ = ψ, that is the conclusion. With this the proof is finished.
Definition 7.6. Let A : D(A) ⊂ C_0(E) → C_0(E) be a linear operator, (E, d) a locally compact metric space. We say that A is a Markov generator if
i) D(A) is dense in C_0(E);
ii) A fulfils the positive maximum principle;
iii) R(λ_0I − A) = C_0(E) for some λ_0 > 0.
In other words, combining the Hille–Yosida theorem with the Phillips theorem we have the

Corollary 7.7. A linear operator A generates a Markov semigroup on C_0(E) iff A is a Markov generator.
8 Examples

To check that an operator A is a Markov generator is not, in general, an easy business. In particular, it is the third condition that is usually a little bit difficult to check, and often it is useful to have a sort of mild version of it. The problem is that we have to solve the equation

    λφ − Aφ = ψ,

for a given ψ ∈ C_0(E). This is not generally easy. Let's see a first example.
Example 8.1. The operator

    A : D(A) ⊂ C_0([0,1]) → C_0([0,1]),  Aφ := φ″,  D(A) := { φ ∈ C²([0,1]) ∩ C_0([0,1]) : φ″ ∈ C_0([0,1]) },

is a Markov generator.
Sol. Clearly the first two properties are true. Let's see the third. Take ψ ∈ C_0([0,1]) and consider the equation

    λφ(x) − φ″(x) = ψ(x).

By the variation of constants formula, a particular solution is

    U(x) = −(1/(2√λ)) ( ∫_0^x ψ(y) e^{−√λ y} dy ) e^{√λ x} + (1/(2√λ)) ( ∫_0^x ψ(y) e^{√λ y} dy ) e^{−√λ x}
         = −(1/√λ) ∫_0^x ψ(y) (e^{√λ(x−y)} − e^{−√λ(x−y)})/2 dy
         = −(1/√λ) ∫_0^x ψ(y) sinh(√λ (x − y)) dy.
Therefore

    φ(x) = c_1 e^{√λ x} + c_2 e^{−√λ x} − (1/√λ) ∫_0^x ψ(y) sinh(√λ (x − y)) dy.
Here is a first problem we meet: if ψ ∈ C([0,1]) it is not evident that φ ∈ C²([0,1]). Indeed, it is easy to check that φ ∈ C¹([0,1]) and

    φ′(x) = c_1 √λ e^{√λ x} − c_2 √λ e^{−√λ x} − (1/√λ) [ ψ(x) sinh(√λ · 0) + ∫_0^x ψ(y) √λ cosh(√λ (x − y)) dy ]
          = c_1 √λ e^{√λ x} − c_2 √λ e^{−√λ x} − ∫_0^x ψ(y) cosh(√λ (x − y)) dy.
Repeating the procedure we see that φ ∈ C²([0,1]). Let's impose that φ ∈ C_0([0,1]), that is φ(0) = φ(1) = 0. By the formula above we have the system

    c_1 + c_2 = 0,
    c_1 e^{√λ} + c_2 e^{−√λ} − (1/√λ) ∫_0^1 ψ(y) sinh(√λ (1 − y)) dy = 0.

This is a 2 × 2 system in c_1, c_2 with determinant e^{−√λ} − e^{√λ} = −2 sinh √λ ≠ 0, so c_1, c_2 (and therefore φ) are uniquely determined.

Consider now, on the whole line, the analogous equation for A = (1/2) d²/dx²:

    λφ − (1/2)φ″ = ψ,  i.e.  φ″ = 2λφ − 2ψ,  ψ ∈ C_0(R).
Because the equation is basically the same as in Example 8.1, we have, for the general solution,

    φ(x) = c_1 e^{√(2λ) x} + c_2 e^{−√(2λ) x} − (2/√(2λ)) ∫_0^x ψ(y) sinh(√(2λ) (x − y)) dy.
By the same argument of Example 8.1 we deduce that φ ∈ C²(R). Let's see when φ ∈ C_0(R), that is when φ(±∞) = 0. To this aim rewrite the solution in the form

    φ(x) = ( c_1 − (1/√(2λ)) ∫_0^x ψ(y) e^{−√(2λ) y} dy ) e^{√(2λ) x} + ( c_2 + (1/√(2λ)) ∫_0^x ψ(y) e^{√(2λ) y} dy ) e^{−√(2λ) x}.
Notice that we have a unique possibility in order that φ(+∞) = 0, that is

    c_1 − (1/√(2λ)) ∫_0^{+∞} ψ(y) e^{−√(2λ) y} dy = 0.   (8.2)

Indeed: first notice that

    c_2 e^{−√(2λ) x} → 0,  x → +∞,

and, by the l'Hôpital rule,

    e^{−√(2λ) x} ∫_0^x ψ(y) e^{√(2λ) y} dy = ( ∫_0^x ψ(y) e^{√(2λ) y} dy ) / e^{√(2λ) x}
        ~ ψ(x) e^{√(2λ) x} / (√(2λ) e^{√(2λ) x}) = ψ(x)/√(2λ) → 0,  x → +∞,

because ψ ∈ C_0(R). Moreover, the integral in (8.2) is convergent because |ψ(y) e^{−√(2λ) y}| ≤ ‖ψ‖∞ e^{−√(2λ) y}. Therefore, if (8.2) did not hold, by the previous considerations we would have

    ( c_1 − (1/√(2λ)) ∫_0^x ψ(y) e^{−√(2λ) y} dy ) e^{√(2λ) x} → ±∞.
So the unique possibility is that (8.2) holds true. In that case, applying again the l'Hôpital rule, we have

    ( c_1 − (1/√(2λ)) ∫_0^x ψ(y) e^{−√(2λ) y} dy ) / e^{−√(2λ) x}
        ~ ( −(1/√(2λ)) ψ(x) e^{−√(2λ) x} ) / ( −√(2λ) e^{−√(2λ) x} ) = ψ(x)/(2λ) → 0,  x → +∞,

again because ψ ∈ C_0(R). The moral is: the unique possible choice of c_1 such that φ(+∞) = 0 is given by (8.2). Similarly, at −∞ we will have

    c_2 − (1/√(2λ)) ∫_{−∞}^0 ψ(y) e^{√(2λ) y} dy = 0.   (8.3)
This means that there is a unique φ ∈ C²(R) ∩ C_0(R) such that λφ − Aφ = ψ. We can summarize the discussion with the statement

Theorem 8.3. The operator

    A := (1/2) d²/dx²,  D(A) := { φ ∈ C²(R) ∩ C_0(R) : φ″ ∈ C_0(R) }

is a Markov generator.
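Putting (8.2) and (8.3) back into the general solution, the resolvent collapses to the kernel form R_λψ(x) = (1/√(2λ)) ∫_R e^{−√(2λ)|x−y|} ψ(y) dy — a standard closed form, stated here as an assumption to be verified. For a Gaussian ψ the integral is explicit via erfc, and λφ − ½φ″ = ψ can be checked numerically:

```python
import math

lam = 1.0
k = math.sqrt(2.0 * lam)                  # k = sqrt(2 lambda)
psi = lambda y: math.exp(-y * y)          # test observable, psi in C_0(R)
SQP = math.sqrt(math.pi) / 2.0

def phi(x):
    """R_lambda psi(x) = (1/k) * int e^{-k|x-y|} e^{-y^2} dy in closed form:
    complete the square; erfc handles the two half-line pieces."""
    pre = math.exp(k * k / 4.0) / k
    left = math.exp(-k * x) * SQP * math.erfc(k / 2.0 - x)    # contribution of y < x
    right = math.exp(k * x) * SQP * math.erfc(k / 2.0 + x)    # contribution of y > x
    return pre * (left + right)

# resolvent equation lam*phi - (1/2) phi'' = psi, via central differences
h = 1e-3
for x in (-1.0, 0.0, 0.7, 2.0):
    d2 = (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / (h * h)
    assert abs(lam * phi(x) - 0.5 * d2 - psi(x)) < 1e-5

# decay at infinity: phi in C_0(R)
assert phi(30.0) < 1e-8 and phi(-30.0) < 1e-8
```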
8.3.1 Case d ≥ 2

This case is sensibly different from the previous one because now the equation λφ − Aφ = ψ is a PDE:

    λφ − (1/2)Δφ = ψ.   (8.4)

To solve this equation we invoke the Fourier transform. To this aim recall the Schwartz space

    S(R^d) := { φ ∈ C^∞(R^d) : sup_{x∈R^d} (1 + |x|)^n |∂^α φ(x)| < +∞, ∀n ∈ N, ∀α ∈ N^d },

where ∂^α stands for the usual multi-index notation for derivatives. It is well known that the Fourier transform is a bijection on S(R^d). To solve (8.4) in the Schwartz space is a standard application of the Fourier transform.
Indeed: we take ψ ∈ S(R^d) and we look for a solution φ ∈ S(R^d). Applying the Fourier transform we have

    λφ̂ + (4π²|ξ|²/2) φ̂ = ψ̂,  (λ + 2π²|ξ|²) φ̂ = ψ̂,  φ̂ = ψ̂ / (λ + 2π²|ξ|²).
Now it is easy to check that if ψ ∈ S(R^d) then ψ̂/(λ + 2π²|ξ|²) ∈ S(R^d): therefore, inverting the Fourier transform, there exists a unique φ ∈ S(R^d) such that the previous equation holds. So if we define

    Aφ := (1/2)Δφ,  D(A) := S(R^d),

we would have

    R(λI − A) ⊃ S(R^d) =: Y.
With respect to our space X := C_0(R^d) we have that S(R^d) is dense in X. So (1/2)Δ fulfils the first two conditions of a Markov generator, and the third in a slightly weaker form:

    R(λI − A) is dense in C_0(R^d),

for any λ > 0. In this case it seems a little bit impossible that when we start with ψ ∈ C_0(R^d) we get φ ∈ S(R^d). In other words: the domain of A seems too small to represent the natural domain for A (and indeed: A involves only second order derivatives, whereas S(R^d) contains only very regular functions). On the other hand it is not at all easy to solve the equation directly in C²(R^d). So, how can we solve this impasse? The point is that there is, in a suitable sense, a unique possible extension of A which is a Markov generator. Let's first treat this question in general.
We start by introducing some definitions in order to make clear what "extension" means here. It is not natural to talk about continuous extensions because our operators won't be, in general, continuous. The right concept turns out to be that of closed extension.

Definition 8.4. Let A : D(A) ⊂ X → X be a linear operator. We say that A is closable if the closure G(A)‾ of its graph is the graph of some linear (clearly closed) operator. We will denote by Ā such an operator. In particular: G(Ā) = G(A)‾.
Lemma 8.5. Let A : D(A) ⊂ X → X be a densely defined dissipative linear operator on a Banach space X. Then
i) A is closable;
ii) Ā is dissipative;
iii)

    R(λI − A)‾ = R(λI − Ā),  ∀λ > 0.   (8.5)
Proof. i) A is closable. It is enough to check that if (φ, ψ_1), (φ, ψ_2) ∈ G(A)‾ then ψ_1 = ψ_2. In this case G(A)‾ will be the graph of a well defined closed linear operator. By definition

    ∃(φ_n) ⊂ D(A) : (φ_n, Aφ_n) → (φ, ψ_1),  ∃(φ̃_n) ⊂ D(A) : (φ̃_n, Aφ̃_n) → (φ, ψ_2).

By linearity, taking differences and calling η_n := φ_n − φ̃_n, we have

    (η_n, Aη_n) → (0, ψ),  where ψ := ψ_1 − ψ_2,

so we are reduced to proving that ψ = 0. Because D(A) is dense, take (ζ_m) ⊂ D(A) such that ζ_m → ψ. Noticing that (λI − A)(λη_n) → −λψ for every fixed λ > 0, by dissipativity we have

    ‖(λI − A)ζ_m − λψ‖ = lim_n ‖(λI − A)ζ_m + (λI − A)(λη_n)‖ = lim_n ‖(λI − A)(ζ_m + λη_n)‖ ≥ lim_n λ‖ζ_m + λη_n‖ = λ‖ζ_m‖.

Dividing by λ and letting λ → +∞ we obtain

    ‖ζ_m‖ ≤ lim_{λ→+∞} ‖ζ_m − (1/λ)Aζ_m − ψ‖ = ‖ζ_m − ψ‖.

Finally, letting m → +∞ we deduce ‖ψ‖ ≤ 0, that is ψ = 0.
ii) Ā is dissipative. This is straightforward: if φ ∈ D(Ā) there exists (φ_n) ⊂ D(A) such that (φ_n, Aφ_n) → (φ, Āφ). By dissipativity of A we have

    ‖λφ_n − Aφ_n‖ ≥ λ‖φ_n‖,  ⟹  ‖λφ − Āφ‖ ≥ λ‖φ‖.
iii) We prove now (8.5). It is clear that, G(A) ⊂ G(Ā) being true, we have

    R(λI − A) ⊂ R(λI − Ā),  ⟹  R(λI − A)‾ ⊂ R(λI − Ā)‾.

Let's see that R(λI − Ā) is closed: by this, the inclusion ⊂ in (8.5) will follow. The closedness follows because Ā is closed. Indeed: if

    ψ ∈ R(λI − Ā)‾,  then  ψ = lim_n (λφ_n − Āφ_n) =: lim_n ψ_n,  (φ_n) ⊂ D(Ā).

Notice that, Ā being dissipative,

    λ‖φ_n − φ_m‖ ≤ ‖λ(φ_n − φ_m) − Ā(φ_n − φ_m)‖ = ‖ψ_n − ψ_m‖.

We deduce that (φ_n) is Cauchy, hence it converges to some φ. But then Āφ_n = λφ_n − ψ_n → λφ − ψ, and because Ā is closed it follows that φ ∈ D(Ā) and λφ − Āφ = ψ, that is ψ ∈ R(λI − Ā), as advertised.

To prove the other inclusion take ψ ∈ R(λI − Ā): ψ = λφ − Āφ for some φ ∈ D(Ā). Now: there exists (φ_n) ⊂ D(A) such that (φ_n, Aφ_n) → (φ, Āφ). But then

    λφ_n − Aφ_n → λφ − Āφ = ψ,  ⟹  ψ ∈ R(λI − A)‾.
In particular we deduce the following

Theorem 8.6. Let A : D(A) ⊂ X → X be a densely defined, dissipative linear operator such that

    R(λ_0I − A) is dense in X for some λ_0 > 0.

Then A is closable and Ā generates a strongly continuous semigroup of contractions on X.

Proof. By the Lemma, Ā is well defined and dissipative (and of course densely defined). By (8.5) and our assumption

    R(λ_0I − Ā) = X,

so the conclusion follows by the Phillips theorem.
Let's now particularize the discussion to the case of Markov processes. It is useful to introduce the following

Definition 8.7. Let A : D(A) ⊂ C_0(E) → C_0(E) be a linear operator, (E, d) a locally compact metric space. We say that A is a Markov pregenerator if
i) D(A) is dense in C_0(E);
ii) A fulfils the positive maximum principle;
iii) R(λ_0I − A) is dense in C_0(E) for some λ_0 > 0.
By the previous theorem it follows that if A is a Markov pregenerator, then A is closable and Ā generates a strongly continuous semigroup of contractions. Is it a Markov semigroup? We need to check that Ā fulfils the positive maximum principle:

Proposition 8.8. If A is a Markov pregenerator then Ā is a Markov generator.

Proof. Exercise.

Therefore, applying this result to our case we can say what follows: A = (1/2)Δ defined on D(A) = S(R^d) is closable and Ā generates a Markov semigroup. Of course Ā coincides with A on S(R^d).
Now the further question is: is it possible that A is the restriction of some other operator B ≠ Ā generating a strongly continuous semigroup of contractions? In other words: does A identify the Markov semigroup uniquely? To this aim we introduce the following concept:

Definition 8.9. Let A : D(A) ⊂ X → X be a linear operator. A set D ⊂ D(A) is called a core for A if A coincides with the closure of its restriction to D, that is if

    G(A|_D)‾ = G(A).

Here

    G(A|_D) := {(φ, Aφ) : φ ∈ D},

and of course the closure is intended in the space X × X.
It is clear that if A and B are two densely defined closed linear operators such that D is a core for both and they coincide on D, then A = B. So if D = S(R^d) is a core for Ā, any other generator B having D as a core and coinciding with (1/2)Δ on D will generate the same semigroup as Ā. How can we check whether D is a core, then? The following proposition is sometimes useful:
Proposition 8.10. Let A be the generator of a strongly continuous semigroup of contractions on a Banach space X. A set D ⊂ D(A) is a core for A iff
i) D is dense in X;
ii) R(λ_0I − A|_D) is dense in X for some λ_0 > 0.
Proof. (⟹) Assume D is a core. Because A is densely defined (as a generator), fixed φ ∈ X there exists ψ ∈ D(A) such that ‖φ − ψ‖ ≤ ε. But G(A) = G(A|_D)‾: in particular, taking (ψ, Aψ), there exists (ψ̃, Aψ̃) ∈ G(A|_D) close to it. This means that ‖ψ − ψ̃‖ ≤ ε, therefore ‖φ − ψ̃‖ ≤ 2ε. This proves i). About ii): fix ψ ∈ X and consider φ ∈ D(A) such that

    λ_0φ − Aφ = ψ.

Now by assumption there exists (φ_n) ⊂ D such that (φ_n, Aφ_n) → (φ, Aφ). Clearly λ_0φ_n − Aφ_n → ψ, that is ψ ∈ R(λ_0I − A|_D)‾.

(⟸) Take (φ, Aφ) ∈ G(A) and consider ψ := λ_0φ − Aφ. By ii) there exists (φ_n) ⊂ D such that ψ_n := λ_0φ_n − Aφ_n → ψ. Being φ_n = R_{λ_0}ψ_n, we deduce that φ_n → R_{λ_0}ψ = φ. Therefore Aφ_n = λ_0φ_n − ψ_n → λ_0φ − ψ = Aφ. But then (φ_n, Aφ_n) → (φ, Aφ), that is G(A) ⊂ G(A|_D)‾. The other inclusion is evident.
In particular, in our context, D = S(R^d) is a core for the generator of the Brownian motion.

It is in general not easy to characterize the domains of generators of strongly continuous semigroups. In the present case it is possible to show that C²_0(R^d) is contained in the domain, but this is not, however, the full domain. Actually it is possible to show that the domain of the generator is the set of φ ∈ C_0(R^d) with Δφ ∈ C_0(R^d) in the distributional sense. We don't enter into these details.