
Solutions to Stochastic Calculus for Finance II (Steven Shreve)

Dr. Guowei Zhao


Dept. of Mathematics and Statistics
McMaster University
Hamilton,ON L8S 4K1
October 18, 2013

Email: zhaog22@mcmaster.ca

Contents

1 Chapter 1
  1.1 Exercise 1.4
  1.2 Exercise 1.5
  1.3 Exercise 1.6
  1.4 Exercise 1.7
  1.5 Exercise 1.8
  1.6 Exercise 1.10
  1.7 Exercise 1.11
  1.8 Exercise 1.14

2 Chapter 2
  2.1 Exercise 2.2
  2.2 Exercise 2.3
  2.3 Exercise 2.4
  2.4 Exercise 2.5
  2.5 Exercise 2.7
  2.6 Exercise 2.10

3 Chapter 3
  3.1 Exercise 3.2
  3.2 Exercise 3.3
  3.3 Exercise 3.4
  3.4 Exercise 3.5
  3.5 Exercise 3.6
  3.6 Exercise 3.7
  3.7 Exercise 3.8

4 Chapter 4
  4.1 Exercise 4.1
  4.2 Exercise 4.2
  4.3 Exercise 4.5
  4.4 Exercise 4.6
  4.5 Exercise 4.7
  4.6 Exercise 4.8
  4.7 Exercise 4.9
  4.8 Exercise 4.11
  4.9 Exercise 4.13
  4.10 Exercise 4.15
  4.11 Exercise 4.18
  4.12 Exercise 4.19

5 Chapter 5
  5.1 Exercise 5.1
  5.2 Exercise 5.4
  5.3 Exercise 5.5
  5.4 Exercise 5.12
  5.5 Exercise 5.13
  5.6 Exercise 5.14

6 Chapter 6
  6.1 Exercise 6.1
  6.2 Exercise 6.3
  6.3 Exercise 6.4
  6.4 Exercise 6.5
  6.5 Exercise 6.7

7 Chapter 7

8 Chapter 8

9 Chapter 9

10 Chapter 10

11 Chapter 11
  11.1 Exercise 11.1

Introduction
This solution manual will be updated from time to time, and is NOT intended for any commercial use. The author suggests using this manual as a reference companion to the above-mentioned book by Steven Shreve. Anyone enrolled in a mathematical finance course should not copy the solutions in this manual directly; it is intended for self-checking purposes only.
If you find any mistakes or misprints in this manual, please inform me. Thanks.

Chapter 1

1.1 Exercise 1.4

(i) On the infinite coin-toss space $\Omega_\infty$, for $n = 1, 2, \dots$ we define
$$Y_n(\omega) = \begin{cases} 1, & \text{if } \omega_n = H \\ 0, & \text{if } \omega_n = T \end{cases}$$
and set
$$X = \sum_{n=1}^{\infty} \frac{Y_n}{2^n}.$$
Then by Example 1.2.5, $X$ is uniformly distributed on $[0,1]$. Furthermore, let
$$N(z) = \int_{-\infty}^{z} \varphi(\xi)\,d\xi \qquad \text{where} \qquad \varphi(\xi) = \frac{1}{\sqrt{2\pi}} e^{-\xi^2/2}.$$
Then for $Z = N^{-1}(X)$, by Example 1.2.6,
$$\mu_Z[a,b] = P(\omega : a \le Z(\omega) \le b) = P(\omega : a \le N^{-1}(X(\omega)) \le b) = P(\omega : N(a) \le X \le N(b)) = N(b) - N(a)$$
for any $-\infty < a \le b < \infty$. So we conclude that $Z = N^{-1}(X)$ is a standard normal random variable on $(\Omega_\infty, \mathcal{F}_\infty, P)$.

(ii) Let
$$Z_n(\omega) = N^{-1}(X_n(\omega)) \qquad \text{where} \qquad X_n(\omega) = \sum_{k=1}^{n} \frac{Y_k(\omega)}{2^k}.$$
It is clear that
$$\lim_{n\to\infty} Z_n(\omega) = \lim_{n\to\infty} N^{-1}(X_n(\omega)) = N^{-1}\Big(\lim_{n\to\infty} X_n(\omega)\Big) = N^{-1}(X(\omega)) = Z(\omega)$$
for every $\omega$. Since $Y_k(\omega)$ depends only on the $k$th coin toss, $X_n$ depends only on the first $n$ coin tosses, which means $Z_n$ also depends only on the first $n$ coin tosses.
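The construction above can be checked empirically. The following sketch (an illustration, not part of the proof) maps a finite prefix of simulated coin tosses to $Z_n = N^{-1}(X_n)$ and verifies that the samples look standard normal; the clamping of $X_n$ away from $0$ is a numerical guard I added, since $N^{-1}(0)$ is undefined.

```python
import random
import statistics
from statistics import NormalDist

def standard_normal_from_tosses(tosses):
    """Map a finite prefix of coin tosses (True = H) to Z_n = N^{-1}(X_n)."""
    x = sum(1 / 2**k for k, h in enumerate(tosses, start=1) if h)
    # Numerical guard: X_n = 0 (all tails) would make N^{-1} blow up.
    x = max(x, 2**-len(tosses) / 2)
    return NormalDist().inv_cdf(x)

random.seed(0)
samples = [standard_normal_from_tosses([random.random() < 0.5 for _ in range(40)])
           for _ in range(20000)]
print(statistics.fmean(samples), statistics.pstdev(samples))
```

With 40 tosses per sample, $X_n$ already approximates $X$ to within $2^{-40}$, so the sample mean and standard deviation come out close to $0$ and $1$.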

1.2 Exercise 1.5

First, since
$$I_{(0,X(\omega))}(x) = \begin{cases} 0, & \text{if } x \notin (0, X(\omega)) \\ 1, & \text{if } x \in (0, X(\omega)) \end{cases} \tag{1.1}$$
we can compute the double integral as
$$\int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) = \int_\Omega [X(\omega) - 0]\,dP(\omega) = E[X]. \tag{1.2}$$
On the other hand, we may also compute the same double integral by changing the order of integration:
$$\int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) = \int_0^\infty \int_\Omega I_{(0,X(\omega))}(x)\,dP(\omega)\,dx.$$
Now consider the inner integral with respect to $P(\omega)$, in which we view $I_{(0,X(\omega))}(x)$ as a function of $\omega$ instead of $x$. Note that the condition $x \in (0, X(\omega))$ in (1.1) is equivalent to $X(\omega) > x$. So
$$\int_\Omega I_{(0,X(\omega))}(x)\,dP(\omega) = \int_\Omega I_{\{\omega : X(\omega) > x\}}(\omega)\,dP(\omega) = P(\omega : X(\omega) > x) = P(X > x) = 1 - P(X \le x) = 1 - F(x).$$
Then
$$\int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) = \int_0^\infty \int_\Omega I_{(0,X(\omega))}(x)\,dP(\omega)\,dx = \int_0^\infty [1 - F(x)]\,dx. \tag{1.3}$$
By equating (1.2) and (1.3), we conclude that
$$E[X] = \int_0^\infty [1 - F(x)]\,dx.$$
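The tail formula can be sanity-checked numerically. The sketch below (my illustration; the distribution and truncation point are arbitrary choices) integrates $1 - F(x) = e^{-\lambda x}$ for an $\mathrm{Exp}(\lambda)$ variable and compares against $E[X] = 1/\lambda$.

```python
import math

# Tail-formula check for X ~ Exp(lam): E[X] = 1/lam and 1 - F(x) = exp(-lam*x),
# so integrating the tail should recover 1/lam.
lam = 2.0
dx = 1e-4
tail_integral = sum(math.exp(-lam * k * dx) * dx for k in range(int(20 / dx)))
print(tail_integral)  # close to 0.5 = 1/lam
```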

1.3 Exercise 1.6

(i)
$$
\begin{aligned}
E[e^{uX}] &= \int_{-\infty}^{\infty} e^{ux} f(x)\,dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left\{ux - \frac{(x-\mu)^2}{2\sigma^2}\right\} dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2\sigma^2}\left[x^2 - (2\mu + 2u\sigma^2)x + \mu^2\right]\right\} dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2\sigma^2}\left[x - (\mu + u\sigma^2)\right]^2 + \frac{1}{2\sigma^2}\left[(\mu + u\sigma^2)^2 - \mu^2\right]\right\} dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2\sigma^2}\left[x - (\mu + u\sigma^2)\right]^2\right\} dx \cdot \exp\left\{\frac{2\mu u\sigma^2 + u^2\sigma^4}{2\sigma^2}\right\} \\
&= 1 \cdot \exp\left\{u\mu + \frac{1}{2}u^2\sigma^2\right\},
\end{aligned}
$$
by completing the square; the remaining integral is that of the $N(\mu + u\sigma^2, \sigma^2)$ density, which equals 1.

(ii) Since $\varphi(x) = e^{ux}$ is convex, part (i) verifies Jensen's inequality directly:
$$E[\varphi(X)] = E[e^{uX}] = e^{u\mu + \frac{1}{2}u^2\sigma^2} \ge e^{u\mu} = \varphi(\mu) = \varphi(E[X]).$$
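The closed-form MGF can be verified against direct numerical integration of $e^{ux}$ times the normal density. The parameter values below are arbitrary choices of mine for illustration.

```python
import math

# Check E[e^{uX}] = exp(u*mu + u^2*sigma^2/2) for X ~ N(mu, sigma^2)
# by Riemann integration against the normal density.
mu, sigma, u = 0.3, 1.5, 0.7
dx = 1e-3
lo, hi = mu - 12 * sigma, mu + 12 * sigma
mgf = sum(
    math.exp(u * (lo + k * dx))
    * math.exp(-((lo + k * dx) - mu) ** 2 / (2 * sigma**2))
    / (sigma * math.sqrt(2 * math.pi)) * dx
    for k in range(int((hi - lo) / dx))
)
closed_form = math.exp(u * mu + 0.5 * u**2 * sigma**2)
print(mgf, closed_form)
```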

1.4 Exercise 1.7

1. $f(x) = 0$ for every $x$;

2. $\lim_{n\to\infty}\int_{-\infty}^{\infty} f_n(x)\,dx = 1 \ne 0 = \int_{-\infty}^{\infty} f(x)\,dx$;

3. The sequence $\{f_n\}$ is not monotone. Refer to the following diagram for the graphs of the functions:

Figure 1: Graphs of Exercise 1.7 with n = 1, 2, 3, 4 from the top, respectively

1.5 Exercise 1.8

(i) We consider the sequence
$$Y_n(\omega) = \frac{e^{tX(\omega)} - e^{s_n X(\omega)}}{t - s_n}.$$
First note that
$$\lim_{s_n\to t} Y_n(\omega) = \frac{d}{dt}\left[e^{tX(\omega)}\right] = X(\omega)e^{tX(\omega)}.$$
Also, by the Mean Value Theorem, there exists $\theta_n \in [s_n, t]$ (or $[t, s_n]$) such that
$$Y_n(\omega) = \frac{e^{tX(\omega)} - e^{s_n X(\omega)}}{t - s_n} = X(\omega)e^{\theta_n(\omega)X(\omega)}.$$
Note that for $n$ sufficiently large, $\theta_n \le \max\{t, s_n\} \le 2|t|$ since $s_n \to t$; so, as $X \ge 0$ here,
$$|Y_n(\omega)| = X(\omega)e^{\theta_n(\omega)X(\omega)} \le X(\omega)e^{2|t|X(\omega)}.$$
Since $E[Xe^{tX}] < \infty$ for every $t$, the Dominated Convergence Theorem gives
$$\lim_{n\to\infty} EY_n = \lim_{n\to\infty}\int_\Omega Y_n(\omega)\,dP(\omega) = \int_\Omega \lim_{n\to\infty} Y_n(\omega)\,dP(\omega) = \int_\Omega X(\omega)e^{tX(\omega)}\,dP(\omega),$$
i.e.
$$\lim_{n\to\infty} EY_n = E\big[\lim_{n\to\infty} Y_n\big] = E[Xe^{tX}].$$

(ii) If $X$ can take both positive and negative values, write $X = X^+ - X^-$, where $X^+ = \max\{X, 0\}$ and $X^- = \max\{-X, 0\}$. By the Mean Value Theorem there again exists $\theta_n \in [s_n, t]$ (or $[t, s_n]$) such that $Y_n(\omega) = X(\omega)e^{\theta_n(\omega)X(\omega)}$, and then
$$|Y_n(\omega)| = |X(\omega)|\,e^{\theta_n(\omega)X(\omega)} \le |X|\,e^{2|t||X|}. \tag{1.4}$$
We need to show that $E[|X|e^{t|X|}] < \infty$ for every $t$ in order to apply the Dominated Convergence Theorem. For any $t$,
$$E[|X|e^{t|X|}] = \int_\Omega |X|e^{t|X|}\,dP(\omega) = E\big[\mathbf{1}_{X\ge 0}X^+e^{tX^+}\big] + E\big[\mathbf{1}_{X\le 0}X^-e^{tX^-}\big].$$
Since $E[|X|e^{tX}] < \infty$ for every $t$,
$$E[|X|e^{tX}] = E\big[\mathbf{1}_{X\ge 0}X^+e^{tX^+}\big] + E\big[\mathbf{1}_{X\le 0}X^-e^{-tX^-}\big] < \infty,$$
which implies that for every $t$,
$$E\big[\mathbf{1}_{X\ge 0}X^+e^{tX^+}\big] < \infty \qquad \text{and} \qquad E\big[\mathbf{1}_{X\le 0}X^-e^{-tX^-}\big] < \infty.$$
Replacing $t$ by $-t$ in the second bound, we equivalently have, for every $t$,
$$E\big[\mathbf{1}_{X\ge 0}X^+e^{tX^+}\big] < \infty \qquad \text{and} \qquad E\big[\mathbf{1}_{X\le 0}X^-e^{tX^-}\big] < \infty.$$
Hence
$$E[|X|e^{t|X|}] = E\big[\mathbf{1}_{X\ge 0}X^+e^{tX^+}\big] + E\big[\mathbf{1}_{X\le 0}X^-e^{tX^-}\big] < \infty$$
for every $t$, and in particular $E[|X|e^{2|t||X|}] < \infty$. Then by (1.4) and the Dominated Convergence Theorem,
$$\lim_{n\to\infty} EY_n = E\big[\lim_{n\to\infty} Y_n\big] = E[Xe^{tX}],$$
which is $\varphi'(t) = E[Xe^{tX}]$.

1.6 Exercise 1.10

1. Since $Z$ is a nonnegative random variable and
$$E[Z] = \int_\Omega Z(\omega)\,d\omega = 0\cdot\frac{1}{2} + 2\cdot\frac{1}{2} = 1,$$
by Theorem 1.6.1, $\tilde{P}$ is a probability measure.

2.
$$\tilde{P}(A) = \int_A Z(\omega)\,dP(\omega) = \int_{A\cap\{\omega:\,0\le\omega\le 1/2\}} Z(\omega)\,dP(\omega) + \int_{A\cap\{\omega:\,1/2<\omega\le 1\}} Z(\omega)\,dP(\omega) = 0\cdot P\big(A\cap\{0\le\omega\le 1/2\}\big) + 2\cdot P\big(A\cap\{1/2<\omega\le 1\}\big) \le 0 + 2P(A) = 0,$$
since $P(A) = 0$; and since $\tilde{P}(A) \ge 0$, we get $\tilde{P}(A) = 0$.

3. Let $A = \{\omega : 0\le\omega\le 1/2\}$; then $P(A) = 1/2 > 0$ while
$$\tilde{P}(A) = \int_0^{1/2} Z(\omega)\,d\omega = \int_0^{1/2} 0\,d\omega = 0.$$

1.7 Exercise 1.11

By (1.6.4) of the text, we have, for any $u\in\mathbb{R}$,
$$
\tilde{E}\big[e^{uY}\big] = E\big[e^{uY}Z\big] = E\Big[e^{u(X+\theta)}\,e^{-\theta X - \frac{1}{2}\theta^2}\Big] = E\big[e^{(u-\theta)X}\big]\,e^{u\theta - \frac{1}{2}\theta^2} = e^{\frac{1}{2}(u-\theta)^2}\,e^{u\theta - \frac{1}{2}\theta^2} = e^{\frac{1}{2}u^2 - u\theta + \frac{1}{2}\theta^2 + u\theta - \frac{1}{2}\theta^2} = e^{u^2/2};
$$
therefore $Y$ is standard normal under $\tilde{P}$.

1.8 Exercise 1.14

Since for $a \ge 0$
$$P(X \le a) = 1 - e^{-\lambda a} = \int_{-\infty}^{a} f(x)\,dx \qquad \text{where} \qquad f(x) = \begin{cases}\lambda e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0,\end{cases}$$
$f(x)$ is the density function of such an exponential random variable.

1.
$$
\tilde{P}(\Omega) = \int_\Omega Z(\omega)\,dP(\omega) = \int_{-\infty}^{\infty} \frac{\tilde\lambda}{\lambda}\,e^{-(\tilde\lambda-\lambda)x} f(x)\,dx = \int_0^\infty \frac{\tilde\lambda}{\lambda}\,e^{-(\tilde\lambda-\lambda)x}\big(\lambda e^{-\lambda x}\big)\,dx = \int_0^\infty \tilde\lambda e^{-\tilde\lambda x}\,dx = 1.
$$

2.
$$
\tilde{P}(X \le a) = \int_{\{\omega:\,X(\omega)\le a\}} Z(\omega)\,dP(\omega) = \int_0^a \frac{\tilde\lambda}{\lambda}\,e^{-(\tilde\lambda-\lambda)x}\big(\lambda e^{-\lambda x}\big)\,dx = \int_0^a \tilde\lambda e^{-\tilde\lambda x}\,dx = 1 - e^{-\tilde\lambda a}.
$$
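The change of measure can be illustrated by importance sampling: weight $\mathrm{Exp}(\lambda)$ samples by the Radon–Nikodym derivative $Z$ and the weighted empirical CDF should match $\mathrm{Exp}(\tilde\lambda)$. The parameter values are arbitrary choices of mine.

```python
import math
import random

# Exercise 1.14 sketch: under dP~/dP = (lt/lam)*exp(-(lt-lam)X), an Exp(lam)
# variable becomes Exp(lt).  Estimate P~(X <= a) by weighting Exp(lam) samples.
random.seed(1)
lam, lt, a = 1.0, 2.5, 0.8
n = 200000
est = 0.0
for _ in range(n):
    x = random.expovariate(lam)
    z = (lt / lam) * math.exp(-(lt - lam) * x)   # Radon-Nikodym derivative
    est += z * (x <= a)
est /= n
exact = 1 - math.exp(-lt * a)
print(est, exact)
```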

Chapter 2

2.1 Exercise 2.2

1. By the definition, $\sigma(X) = \{\{X \in B\};\ B \text{ ranges over the subsets of } \mathbb{R}\}$.
When $B = \{1\}$,
$$\{X \in B\} = \{X = 1\} = \{\omega \in \Omega_2 : S_2(\omega) = 4\} = \{HT, TH\};$$
when $B = \{0\}$,
$$\{X \in B\} = \{X = 0\} = \{\omega \in \Omega_2 : S_2(\omega) \ne 4\} = \{HH, TT\}.$$
So all the sets in $\sigma(X)$ are $\emptyset$, $\Omega_2$, $\{HT, TH\}$ and $\{HH, TT\}$.

2. By the definition, $\sigma(S_1) = \{\{S_1 \in B\};\ B \text{ ranges over the subsets of } \mathbb{R}\}$.
When $B = \{8\}$,
$$\{S_1 \in B\} = \{S_1 = 8\} = \{HT, HH\};$$
when $B = \{2\}$,
$$\{S_1 \in B\} = \{S_1 = 2\} = \{TH, TT\}.$$
So all the sets in $\sigma(S_1)$ are $\emptyset$, $\Omega_2$, $\{HT, HH\}$ and $\{TH, TT\}$.

3. Here we only investigate the non-trivial cases:

  A in sigma(X)   B in sigma(S1)   A∩B    P(A)P(B)                     P(A∩B)
  {HT,TH}         {HT,HH}          {HT}   (1/4+1/4)(1/4+1/4) = 1/4     1/4
  {HT,TH}         {TH,TT}          {TH}   (1/4+1/4)(1/4+1/4) = 1/4     1/4
  {HH,TT}         {HT,HH}          {HH}   (1/4+1/4)(1/4+1/4) = 1/4     1/4
  {HH,TT}         {TH,TT}          {TT}   (1/4+1/4)(1/4+1/4) = 1/4     1/4

Therefore for any $A \in \sigma(X)$ and $B \in \sigma(S_1)$ we have $P(A\cap B) = P(A)P(B)$, so $\sigma(X)$ and $\sigma(S_1)$ are independent under $P$.

4. Again we only investigate the non-trivial cases:

  A in sigma(X)   B in sigma(S1)   A∩B    P~(A)P~(B)                   P~(A∩B)
  {HT,TH}         {HT,HH}          {HT}   (2/9+2/9)(2/9+4/9) = 8/27    2/9
  {HT,TH}         {TH,TT}          {TH}   (2/9+2/9)(2/9+1/9) = 4/27    2/9
  {HH,TT}         {HT,HH}          {HH}   (4/9+1/9)(2/9+4/9) = 10/27   4/9
  {HH,TT}         {TH,TT}          {TT}   (4/9+1/9)(2/9+1/9) = 5/27    1/9

Therefore for some $A \in \sigma(X)$ and $B \in \sigma(S_1)$ (indeed for every non-trivial pair, as shown) we have $\tilde{P}(A\cap B) \ne \tilde{P}(A)\tilde{P}(B)$, so $\sigma(X)$ and $\sigma(S_1)$ are not independent under $\tilde{P}$.

5. If we are told that $X = 1$, then $S_2 = 4$, i.e. the outcome must be $HT$ or $TH$. But since $\tilde{P}(HT) = \tilde{P}(TH) = 2/9$, we cannot use the unconditioned probabilities for $S_1$ any longer. In other words, based on the current information, we should estimate $\tilde{P}(S_1 = 8\,|\,X = 1) = 1/2 = \tilde{P}(S_1 = 2\,|\,X = 1)$.
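The two tables above reduce to a finite check over the atoms of each sigma-algebra, which can be mechanized exactly with rational arithmetic:

```python
from itertools import product
from fractions import Fraction as F

# Exercise 2.2 sketch: check independence of sigma(X) and sigma(S1) under
# the two measures.  Non-trivial atoms only, as in the tables above.
def prob(measure, event):
    return sum(measure[w] for w in event)

sigma_X = [{"HT", "TH"}, {"HH", "TT"}]
sigma_S1 = [{"HT", "HH"}, {"TH", "TT"}]

P = {"HH": F(1, 4), "HT": F(1, 4), "TH": F(1, 4), "TT": F(1, 4)}    # p = 1/2
Pt = {"HH": F(4, 9), "HT": F(2, 9), "TH": F(2, 9), "TT": F(1, 9)}   # p = 2/3

indep_P = all(prob(P, A & B) == prob(P, A) * prob(P, B)
              for A, B in product(sigma_X, sigma_S1))
indep_Pt = all(prob(Pt, A & B) == prob(Pt, A) * prob(Pt, B)
               for A, B in product(sigma_X, sigma_S1))
print(indep_P, indep_Pt)  # True False
```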

2.2 Exercise 2.3

Since $V$ is a normal random variable and
$$E[V] = E[X\cos\theta + Y\sin\theta] = \cos\theta\,E[X] + \sin\theta\,E[Y] = \cos\theta\cdot 0 + \sin\theta\cdot 0 = 0,$$
$$\mathrm{Var}[V] = \mathrm{Var}[X\cos\theta + Y\sin\theta] = \cos^2\theta\,\mathrm{Var}[X] + \sin^2\theta\,\mathrm{Var}[Y] = \cos^2\theta\cdot 1 + \sin^2\theta\cdot 1 = 1,$$
$V$ is standard normal. Similarly $W$ is also standard normal.
To show the independence, we quote the fact that if $Z$ is a standard normal random variable, then for any $u$, $E[e^{uZ}] = \exp(u^2/2)$ (Exercise 1.6). Then for any real numbers $a$ and $b$,
$$
\begin{aligned}
E[e^{aV+bW}] &= E\Big[e^{a(X\cos\theta + Y\sin\theta) + b(-X\sin\theta + Y\cos\theta)}\Big] \\
&= E\Big[e^{X(a\cos\theta - b\sin\theta)}\Big]\,E\Big[e^{Y(a\sin\theta + b\cos\theta)}\Big] \qquad \text{by the independence of } X \text{ and } Y \\
&= \exp\left(\frac{(a\cos\theta - b\sin\theta)^2}{2} + \frac{(a\sin\theta + b\cos\theta)^2}{2}\right) \\
&= \exp\left(\frac{a^2\cos^2\theta + b^2\sin^2\theta - 2ab\cos\theta\sin\theta + a^2\sin^2\theta + b^2\cos^2\theta + 2ab\cos\theta\sin\theta}{2}\right) \\
&= \exp\left(\frac{a^2}{2} + \frac{b^2}{2}\right) = E[e^{aV}]\,E[e^{bW}].
\end{aligned}
$$
So $V$ and $W$ are independent.
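The rotation can also be checked by simulation (my illustration, with an arbitrary angle): the rotated pair should remain standard normal with zero covariance.

```python
import math
import random
import statistics

# Exercise 2.3 sketch: rotate iid standard normals (X, Y) by an angle theta
# and check empirically that V, W have unit standard deviation and ~zero covariance.
random.seed(2)
theta = 0.6
xs = [random.gauss(0, 1) for _ in range(100000)]
ys = [random.gauss(0, 1) for _ in range(100000)]
vs = [x * math.cos(theta) + y * math.sin(theta) for x, y in zip(xs, ys)]
ws = [-x * math.sin(theta) + y * math.cos(theta) for x, y in zip(xs, ys)]
cov = statistics.fmean(v * w for v, w in zip(vs, ws))
print(statistics.pstdev(vs), statistics.pstdev(ws), cov)
```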

2.3 Exercise 2.4

1. By the definition,
$$
\begin{aligned}
E\big[e^{uX+vY}\big] &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{ux+vxz}\,d\mu_Z(z)\,d\mu_X(x) \\
&= \int_{-\infty}^{\infty}\left(\frac{1}{2}e^{ux+vx} + \frac{1}{2}e^{ux-vx}\right)d\mu_X(x) \\
&= \frac{1}{2}\int_{-\infty}^{\infty}e^{(u+v)x}\,d\mu_X(x) + \frac{1}{2}\int_{-\infty}^{\infty}e^{(u-v)x}\,d\mu_X(x) \\
&= \frac{1}{2}E[e^{(u+v)X}] + \frac{1}{2}E[e^{(u-v)X}] \\
&= \frac{1}{2}e^{(u+v)^2/2} + \frac{1}{2}e^{(u-v)^2/2} \\
&= e^{(u^2+v^2)/2}\,\frac{e^{uv} + e^{-uv}}{2}
\end{aligned}
$$
for any $u$ and $v$.

2. Let $u = 0$ in the last result; then
$$E[e^{vY}] = e^{(0+v^2)/2}\,\frac{e^0 + e^0}{2} = e^{v^2/2}.$$
So $Y$ must be a standard normal random variable.

3. If $X$ and $Y$ were independent, then by Theorem 2.2.7 of the text the joint moment generating function would factor as $Ee^{uX+vY} = Ee^{uX}\,Ee^{vY}$. But apparently
$$Ee^{uX+vY} = e^{(u^2+v^2)/2}\,\frac{e^{uv}+e^{-uv}}{2} \ne e^{u^2/2}\,e^{v^2/2} = Ee^{uX}\,Ee^{vY}.$$
So they are not independent.

2.4 Exercise 2.5

$$
\begin{aligned}
f_X(x) &= \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy = \int_{-|x|}^{\infty}\frac{2|x|+y}{\sqrt{2\pi}}\exp\left(-\frac{(2|x|+y)^2}{2}\right)dy \\
&= \int_{|x|}^{\infty}\frac{z}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)dz \qquad (z = y + 2|x|) \\
&= \frac{1}{\sqrt{2\pi}}\left[-e^{-z^2/2}\right]_{z=|x|}^{z=\infty} = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right).
\end{aligned}
$$
For $f_Y(y) = \int_{\{|x|\ge -y\}} f_{X,Y}(x,y)\,dx$: if $y \ge 0$, then
$$
f_Y(y) = \int_{-\infty}^{\infty}\frac{2|x|+y}{\sqrt{2\pi}}\exp\left(-\frac{(2|x|+y)^2}{2}\right)dx = 2\int_0^{\infty}\frac{2x+y}{\sqrt{2\pi}}\exp\left(-\frac{(2x+y)^2}{2}\right)dx = 2\int_y^{\infty}\frac{z}{\sqrt{2\pi}}e^{-z^2/2}\,\frac{dz}{2} \qquad (z = 2x+y),
$$
$$f_Y(y) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right);$$
if $y \le 0$, then
$$
f_Y(y) = \int_{-\infty}^{y}\frac{2|x|+y}{\sqrt{2\pi}}\exp\left(-\frac{(2|x|+y)^2}{2}\right)dx + \int_{-y}^{\infty}\frac{2|x|+y}{\sqrt{2\pi}}\exp\left(-\frac{(2|x|+y)^2}{2}\right)dx = 2\int_{-y}^{\infty}\frac{2x+y}{\sqrt{2\pi}}\exp\left(-\frac{(2x+y)^2}{2}\right)dx = 2\int_{-y}^{\infty}\frac{z}{\sqrt{2\pi}}e^{-z^2/2}\,\frac{dz}{2} \qquad (z = 2x+y),
$$
$$f_Y(y) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right).$$
Therefore both $X$ and $Y$ are standard normal random variables.
It is clear that $f_{X,Y}(x,y) \ne f_X(x)\,f_Y(y)$, so they are not independent.
It is also clear that $E[X] = E[Y] = 0$, and
$$
\begin{aligned}
E[XY] &= \int_{-\infty}^{\infty}\int_{-|x|}^{\infty} xy\,\frac{2|x|+y}{\sqrt{2\pi}}\exp\left(-\frac{(2|x|+y)^2}{2}\right)dy\,dx \\
&= \int_0^{\infty}\int_{-x}^{\infty} xy\,\frac{2x+y}{\sqrt{2\pi}}\exp\left(-\frac{(2x+y)^2}{2}\right)dy\,dx + \int_{-\infty}^{0}\int_{x}^{\infty} xy\,\frac{-2x+y}{\sqrt{2\pi}}\exp\left(-\frac{(-2x+y)^2}{2}\right)dy\,dx \\
&= \int_0^{\infty}\int_{-x}^{\infty} xy\,\frac{2x+y}{\sqrt{2\pi}}\exp\left(-\frac{(2x+y)^2}{2}\right)dy\,dx - \int_0^{\infty}\int_{-t}^{\infty} ty\,\frac{2t+y}{\sqrt{2\pi}}\exp\left(-\frac{(2t+y)^2}{2}\right)dy\,dt \qquad (t = -x) \\
&= 0,
\end{aligned}
$$
so $E[XY] = E[X]E[Y]$, which implies $X$ and $Y$ are uncorrelated.

2.5 Exercise 2.7

Let $\mu = E[Y - X]$; then we consider
$$
\begin{aligned}
\mathrm{Var}(Y - X) &= E\big[(Y - X - \mu)^2\big] \\
&= E\Big[\big((Y - E[Y|\mathcal{G}]) + (E[Y|\mathcal{G}] - X - \mu)\big)^2\Big] \\
&= E\big[(Y - E[Y|\mathcal{G}])^2\big] + E\big[(E[Y|\mathcal{G}] - X - \mu)^2\big] + 2E\big[(Y - E[Y|\mathcal{G}])(E[Y|\mathcal{G}] - X - \mu)\big] \\
&\ge E\big[(Y - E[Y|\mathcal{G}])^2\big] + 0 + 2E\big[(Y - E[Y|\mathcal{G}])(E[Y|\mathcal{G}] - X - \mu)\big] \tag{2.1} \\
&= \mathrm{Var}(\mathrm{Err}) + 2E\big[(Y - E[Y|\mathcal{G}])(E[Y|\mathcal{G}] - X - \mu)\big].
\end{aligned}
$$
The last step holds since $E(\mathrm{Err}) = 0$, so $\mathrm{Var}(\mathrm{Err}) = E\big[(Y - E[Y|\mathcal{G}])^2\big]$.
Now note that both $X$ and $E[Y|\mathcal{G}]$ are $\mathcal{G}$-measurable random variables, so by taking out what is known ($E[E[Y|\mathcal{G}]^2] = E[Y E[Y|\mathcal{G}]]$ and $E[X E[Y|\mathcal{G}]] = E[E[XY|\mathcal{G}]]$) and iterated conditioning ($E[E[Z|\mathcal{G}]] = E[Z]$),
$$
E\big[(Y - E[Y|\mathcal{G}])(E[Y|\mathcal{G}] - X - \mu)\big] = E\big[Y E[Y|\mathcal{G}]\big] - E[XY] - \mu E[Y] - E\big[Y E[Y|\mathcal{G}]\big] + E[XY] + \mu E[Y] = 0.
$$
By substituting this result into (2.1), we have
$$\mathrm{Var}(Y - X) \ge \mathrm{Var}(\mathrm{Err}).$$

2.6 Exercise 2.10

By the notation, for any $A \in \sigma(X)$ we may write $A = \{X \in B\}$, and then
$$
\begin{aligned}
\int_A g(X)\,dP &= \int_{\{\omega:\,X(\omega)\in B\}} g(X(\omega))\,dP(\omega) \\
&= \int_\Omega g(X(\omega))\,\mathbf{1}(X \in B)\,dP(\omega) \\
&= \int_{-\infty}^{\infty} g(x)\,f_X(x)\,\mathbf{1}(x \in B)\,dx \\
&= \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}\frac{y\,f_{X,Y}(x,y)}{f_X(x)}\,dy\right)f_X(x)\,\mathbf{1}(x \in B)\,dx \qquad \text{by the notation of (2.6.3)} \\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\,\mathbf{1}(x \in B)\,f_{X,Y}(x,y)\,dx\,dy \\
&= E[Y\,\mathbf{1}(X \in B)] = E[Y\,\mathbf{1}_A] = \int_A Y\,dP \qquad \text{by the notation of (2.6.2) and } A = \{X \in B\},
\end{aligned}
$$
which verifies the partial averaging property, and hence proves $E[Y|X] = g(X)$.

Chapter 3

3.1 Exercise 3.2

For $0 \le s \le t$,
$$
\begin{aligned}
E[W^2(t) - t\,|\,\mathcal{F}(s)] &= E[(W(t) - W(s))^2 + 2W(t)W(s) - W^2(s) - t\,|\,\mathcal{F}(s)] \\
&= E[(W(t) - W(s))^2 - t\,|\,\mathcal{F}(s)] + E[2W(t)W(s)\,|\,\mathcal{F}(s)] - E[W^2(s)\,|\,\mathcal{F}(s)] \\
&= [(t-s) - t] + 2W(s)\,E[W(t)\,|\,\mathcal{F}(s)] - W^2(s) \\
&= -s + 2W^2(s) - W^2(s) \\
&= W^2(s) - s,
\end{aligned}
$$
so $W^2(t) - t$ is a martingale.

3.2 Exercise 3.3

$$
\begin{aligned}
\varphi(u) &= E\big[e^{u(X-\mu)}\big] = e^{\frac{1}{2}u^2\sigma^2} \\
\varphi'(u) &= E\big[(X-\mu)e^{u(X-\mu)}\big] = u\sigma^2\,e^{\frac{1}{2}u^2\sigma^2} \\
\varphi''(u) &= E\big[(X-\mu)^2 e^{u(X-\mu)}\big] = (\sigma^2 + u^2\sigma^4)\,e^{\frac{1}{2}u^2\sigma^2} \\
\varphi^{(3)}(u) &= E\big[(X-\mu)^3 e^{u(X-\mu)}\big] = \big[2u\sigma^4 + (\sigma^2 + u^2\sigma^4)u\sigma^2\big]e^{\frac{1}{2}u^2\sigma^2} = \big(3u\sigma^4 + u^3\sigma^6\big)e^{\frac{1}{2}u^2\sigma^2} \\
\varphi^{(4)}(u) &= E\big[(X-\mu)^4 e^{u(X-\mu)}\big] = \big[(3\sigma^4 + 3u^2\sigma^6) + (3u\sigma^4 + u^3\sigma^6)u\sigma^2\big]e^{\frac{1}{2}u^2\sigma^2} = \big(3\sigma^4 + 6u^2\sigma^6 + u^4\sigma^8\big)e^{\frac{1}{2}u^2\sigma^2}
\end{aligned}
$$
In particular, $\varphi^{(3)}(0) = 0$ and $\varphi^{(4)}(0) = 3\sigma^4$. The latter implies the kurtosis of a normal random variable is $\varphi^{(4)}(0)/(\sigma^2)^2 = 3$.

3.3 Exercise 3.4

1. Since
$$\sum_{j=0}^{n-1}\big(W(t_{j+1}) - W(t_j)\big)^2 \le \max_{0\le k\le n-1}\big|W(t_{k+1}) - W(t_k)\big|\cdot\sum_{j=0}^{n-1}\big|W(t_{j+1}) - W(t_j)\big|,$$
by re-writing it we have
$$\sum_{j=0}^{n-1}\big|W(t_{j+1}) - W(t_j)\big| \ge \frac{\sum_{j=0}^{n-1}\big(W(t_{j+1}) - W(t_j)\big)^2}{\max_{0\le k\le n-1}\big|W(t_{k+1}) - W(t_k)\big|} \longrightarrow \frac{T}{0} = \infty$$
for almost every path of the Brownian motion $W$ as $n\to\infty$ and $\|\Pi\|\to 0$ (the numerator tends to the quadratic variation $T$, while the denominator tends to $0$ by path continuity).

2.
$$0 \le \sum_{j=0}^{n-1}\big|W(t_{j+1}) - W(t_j)\big|^3 \le \max_{0\le k\le n-1}\big|W(t_{k+1}) - W(t_k)\big|\cdot\sum_{j=0}^{n-1}\big|W(t_{j+1}) - W(t_j)\big|^2 \longrightarrow 0\cdot T = 0$$
for almost every path of the Brownian motion $W$ as $n\to\infty$ and $\|\Pi\|\to 0$.
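The three orders of variation can be seen side by side on a simulated path (my illustration; the grid size is an arbitrary choice): the sampled quadratic variation is close to $T$, the first-order variation is huge, and the cubic variation is near zero.

```python
import math
import random

# Exercise 3.4 sketch: sampled variations of a Brownian path on [0, T].
random.seed(3)
T, n = 1.0, 200000
dt = T / n
increments = [random.gauss(0, math.sqrt(dt)) for _ in range(n)]
first_var = sum(abs(d) for d in increments)        # blows up as dt -> 0
quad_var = sum(d * d for d in increments)          # tends to T
cubic_var = sum(abs(d) ** 3 for d in increments)   # tends to 0
print(first_var, quad_var, cubic_var)
```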

3.4 Exercise 3.5

Note that $W(T) \sim N(0,T)$, with probability density function $f(x) = \frac{1}{\sqrt{2\pi T}}\exp(-x^2/2T)$. Then by the definition of the expectation,
$$E\big[e^{-rT}(S(T)-K)^+\big] = \int_{-\infty}^{\infty} e^{-rT}\Big(S(0)e^{(r-\frac{1}{2}\sigma^2)T + \sigma x} - K\Big)^+ f(x)\,dx = \int_A^{\infty} e^{-rT}\Big(S(0)e^{(r-\frac{1}{2}\sigma^2)T + \sigma x} - K\Big)f(x)\,dx,$$
where $A$ is the constant such that $S(T) \ge K$ for any $x \ge A$ and $S(T) \le K$ for any $x \le A$. More precisely,
$$A = \min\Big\{x : S(0)e^{(r-\frac{1}{2}\sigma^2)T + \sigma x} \ge K\Big\} = \min\left\{x : x \ge \frac{\log\frac{K}{S(0)} - (r - \frac{1}{2}\sigma^2)T}{\sigma}\right\} = \frac{\log\frac{K}{S(0)} - (r-\frac{1}{2}\sigma^2)T}{\sigma} = -\sqrt{T}\,d_-(T, S(0)).$$
Therefore,
$$E\big[e^{-rT}(S(T)-K)^+\big] = e^{-rT}S(0)e^{(r-\frac{1}{2}\sigma^2)T}\underbrace{\int_{-\sqrt{T}d_-}^{\infty} e^{\sigma x} f(x)\,dx}_{I} - e^{-rT}K\underbrace{\int_{-\sqrt{T}d_-}^{\infty} f(x)\,dx}_{II}.$$
Now we compute the two integrals separately:
$$
\begin{aligned}
I &= \frac{1}{\sqrt{2\pi T}}\int_{-\sqrt{T}d_-}^{\infty} e^{\sigma x}\,e^{-\frac{x^2}{2T}}\,dx \\
&= \frac{1}{\sqrt{2\pi}}\int_{-d_-}^{\infty}\exp\left(\sigma\sqrt{T}\,t - \frac{t^2}{2}\right)dt \qquad (t = x/\sqrt{T}) \\
&= \frac{e^{\sigma^2 T/2}}{\sqrt{2\pi}}\int_{-d_-}^{\infty}\exp\left(-\frac{1}{2}\big(t - \sigma\sqrt{T}\big)^2\right)dt \qquad \text{(completing the square)} \\
&= \frac{e^{\sigma^2 T/2}}{\sqrt{2\pi}}\int_{-d_- - \sigma\sqrt{T}}^{\infty} e^{-y^2/2}\,dy \qquad (y = t - \sigma\sqrt{T}) \\
&= \frac{e^{\sigma^2 T/2}}{\sqrt{2\pi}}\int_{-d_+}^{\infty} e^{-y^2/2}\,dy \qquad (d_+ = d_- + \sigma\sqrt{T}) \\
&= e^{\sigma^2 T/2}\,N(d_+),
\end{aligned}
$$
and
$$II = \frac{1}{\sqrt{2\pi T}}\int_{-\sqrt{T}d_-}^{\infty} e^{-\frac{x^2}{2T}}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-d_-}^{\infty}\exp\left(-\frac{t^2}{2}\right)dt = N(d_-) \qquad (t = x/\sqrt{T}).$$
By substituting $I$ and $II$, we have the expected payoff of the option:
$$E\big[e^{-rT}(S(T)-K)^+\big] = e^{-rT}S(0)\,e^{(r-\frac{1}{2}\sigma^2)T}\,e^{\sigma^2 T/2}\,N(d_+) - e^{-rT}K\,N(d_-) = S(0)N(d_+) - e^{-rT}K\,N(d_-).$$
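The closed-form value $S(0)N(d_+) - e^{-rT}KN(d_-)$ can be validated by simulating the risk-neutral terminal price directly (my illustration; the market parameters are arbitrary choices):

```python
import math
import random
from statistics import NormalDist

# Exercise 3.5 sketch: closed-form discounted expected call payoff vs Monte Carlo.
random.seed(4)
s0, k, r, sigma, t = 100.0, 105.0, 0.05, 0.2, 1.0
d_plus = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
d_minus = d_plus - sigma * math.sqrt(t)
N = NormalDist().cdf
closed = s0 * N(d_plus) - math.exp(-r * t) * k * N(d_minus)

n = 400000
mc = sum(
    math.exp(-r * t)
    * max(s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * random.gauss(0, 1)) - k, 0.0)
    for _ in range(n)
) / n
print(closed, mc)
```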

3.5 Exercise 3.6

1. Write
$$E[f(X(t))\,|\,\mathcal{F}(s)] = E[f(X(t) - X(s) + X(s))\,|\,\mathcal{F}(s)].$$
We define $g(x) = E[f(X(t) - X(s) + x)]$; then $E[f(X(t)-X(s)+X(s))\,|\,\mathcal{F}(s)] = g(X(s))$, since $X(t) - X(s)$ is independent of $\mathcal{F}(s)$. Now since
$$X(t) - X(s) = \mu(t-s) + W(t) - W(s)$$
is normally distributed with mean $\mu(t-s)$ and variance $t-s$,
$$g(x) = \frac{1}{\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty} f(\xi + x)\exp\left(-\frac{(\xi - \mu(t-s))^2}{2(t-s)}\right)d\xi = \frac{1}{\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty} f(y)\exp\left(-\frac{(y - x - \mu(t-s))^2}{2(t-s)}\right)dy$$
by changing the variable $y = \xi + x$. Therefore $E[f(X(t))|\mathcal{F}(s)] = g(X(s))$, i.e. $X$ has the Markov property.

2. Write
$$E[f(S(t))\,|\,\mathcal{F}(s)] = E\left[f\left(\frac{S(t)}{S(s)}\,S(s)\right)\Big|\,\mathcal{F}(s)\right].$$
We define $g(x) = E\left[f\left(\frac{S(t)}{S(s)}\,x\right)\right]$; then since
$$\log\frac{S(t)}{S(s)} = \nu(t-s) + \sigma\big(W(t) - W(s)\big)$$
is normally distributed with mean $\nu(t-s)$ and variance $\sigma^2(t-s)$,
$$g(x) = \frac{1}{\sigma\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty} f(e^{\xi}x)\exp\left(-\frac{(\xi - \nu(t-s))^2}{2\sigma^2(t-s)}\right)d\xi = \frac{1}{\sigma\sqrt{2\pi(t-s)}}\int_0^{\infty}\frac{1}{y}\,f(y)\exp\left(-\frac{(\log\frac{y}{x} - \nu(t-s))^2}{2\sigma^2(t-s)}\right)dy$$
by changing the variable $y = e^{\xi}x$ (so $d\xi = dy/y$, and $-\infty < \xi < \infty$ implies $0 < y < \infty$). Therefore, with $\tau = t - s$,
$$g(x) = \int_0^{\infty} f(y)\,p(\tau, x, y)\,dy \qquad \text{with} \qquad p(\tau, x, y) = \frac{1}{\sigma y\sqrt{2\pi\tau}}\exp\left(-\frac{(\log\frac{y}{x} - \nu\tau)^2}{2\sigma^2\tau}\right)$$
satisfies $E[f(S(t))|\mathcal{F}(s)] = g(S(s))$ for any $f$. Hence $S$ has the Markov property.

3.6 Exercise 3.7

1.
$$
\begin{aligned}
E\big[Z(t)\,\big|\,\mathcal{F}(s)\big] &= E\left[\frac{Z(t)}{Z(s)}\,Z(s)\,\Big|\,\mathcal{F}(s)\right] = Z(s)\,E\left[\frac{Z(t)}{Z(s)}\,\Big|\,\mathcal{F}(s)\right] \\
&= Z(s)\,E\left[\exp\left\{\sigma\mu(t-s) + \sigma\big(W(t)-W(s)\big) - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)(t-s)\right\}\Big|\,\mathcal{F}(s)\right] \\
&= Z(s)\,e^{\sigma\mu(t-s)}\,E\Big[e^{\sigma(W(t)-W(s))}\Big]\,e^{-(\sigma\mu+\frac{1}{2}\sigma^2)(t-s)} \qquad \text{(by independence)} \\
&= Z(s)\,e^{\sigma\mu(t-s)}\,e^{\frac{1}{2}\sigma^2(t-s)}\,e^{-(\sigma\mu+\frac{1}{2}\sigma^2)(t-s)} \qquad \big(\text{since } W(t)-W(s)\sim N(0, t-s)\big) \\
&= Z(s).
\end{aligned}
$$

2. Since $\tau_m$ is a stopping time, and a martingale stopped at a stopping time is still a martingale,
$$E[Z(t\wedge\tau_m)] = E[Z(0\wedge\tau_m)] = E[Z(0)] = Z(0) = 1,$$
which is exactly
$$E\left[\exp\left\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)(t\wedge\tau_m)\right\}\right] = 1. \tag{3.1}$$

3. For $\tau_m < \infty$,
$$\lim_{t\to\infty} Z(t\wedge\tau_m) = Z(\tau_m) = \exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}. \tag{3.2}$$
For $\tau_m = \infty$,
$$\lim_{t\to\infty} Z(t\wedge\tau_m) = \lim_{t\to\infty}\exp\left\{\sigma X(t) - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)t\right\}, \tag{3.3}$$
and since
$$\lim_{t\to\infty}\exp\left\{-\Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)t\right\} = 0 \qquad \text{and} \qquad 0 \le e^{\sigma X(t)} \le e^{\sigma m} \text{ for } t \le \tau_m, \tag{3.4}$$
we get
$$\lim_{t\to\infty} Z(t\wedge\tau_m) = 0. \tag{3.5}$$
Therefore
$$\lim_{t\to\infty}\exp\left\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)(t\wedge\tau_m)\right\} = \mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}.$$
By taking limits in (3.1) (dominated convergence, since $Z(t\wedge\tau_m)\le e^{\sigma m}$),
$$E\left[\mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = \lim_{t\to\infty} E\left[\exp\left\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)(t\wedge\tau_m)\right\}\right] = 1. \tag{3.6, 3.7}$$
Furthermore, letting $\sigma\to 0^+$,
$$1 = \lim_{\sigma\to 0^+} E\left[\mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = E\big[\mathbf{1}_{\{\tau_m<\infty\}}\big],$$
which is equivalent to $P(\tau_m < \infty) = 1$. Therefore
$$1 = E\left[\mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = E\left[\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right],$$
or equivalently, for any $\alpha > 0$,
$$E\left[\exp\left\{-\Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = e^{-\sigma m}.$$
Let $\alpha = \sigma\mu + \tfrac{1}{2}\sigma^2$; then $\alpha > 0$ and $\sigma = -\mu \pm \sqrt{\mu^2 + 2\alpha}$. But since $\sigma > 0$, $\sigma = -\mu + \sqrt{\mu^2 + 2\alpha}$, so for any $\alpha > 0$
$$E\big[e^{-\alpha\tau_m}\big] = e^{-\sigma m} = e^{-m(-\mu + \sqrt{\mu^2+2\alpha})} = e^{m\mu - m\sqrt{\mu^2+2\alpha}}.$$

4. By differentiating,
$$\frac{d}{d\alpha}E\big[e^{-\alpha\tau_m}\big] = \frac{d}{d\alpha}\Big[e^{m\mu - m\sqrt{\mu^2+2\alpha}}\Big] \quad\Longrightarrow\quad E\big[\tau_m e^{-\alpha\tau_m}\big] = \frac{m}{\sqrt{\mu^2 + 2\alpha}}\,e^{m\mu - m\sqrt{\mu^2+2\alpha}},$$
and letting $\alpha\to 0^+$,
$$E[\tau_m] = \lim_{\alpha\to 0^+} E\big[\tau_m e^{-\alpha\tau_m}\big] = \frac{m}{\sqrt{\mu^2 + 0}}\,e^{m\mu - m\mu} = \frac{m}{\mu} \qquad (\text{since } \mu > 0),$$
and $E[\tau_m] = m/\mu < \infty$ follows immediately.

5. For $\mu < 0$ and $\sigma > -2\mu > 0$, we still have (3.2) and (3.3) respectively. The slight difference to the argument in (3.4) is
$$0 \le \lim_{t\to\infty}\exp\left\{-\Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)t\right\} = \lim_{t\to\infty}\exp\left\{-\frac{\sigma(\sigma + 2\mu)}{2}\,t\right\} = 0,$$
as both $\sigma > 0$ and $\sigma + 2\mu > 0$. Then (3.5)–(3.7) follow directly as in part 3, which gives
$$E\left[\mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = 1. \tag{3.8}$$
Instead of taking the limit $\sigma\to 0^+$ in (3.8), we can only take the limit $\sigma\to -2\mu$:
$$1 = \lim_{\sigma\to -2\mu} E\left[\mathbf{1}_{\{\tau_m<\infty\}}\exp\left\{\sigma m - \Big(\sigma\mu + \tfrac{1}{2}\sigma^2\Big)\tau_m\right\}\right] = E\big[e^{-2\mu m}\,\mathbf{1}_{\{\tau_m<\infty\}}\big]\,e^{\,0},$$
which is equivalent to $P(\tau_m < \infty) = E[\mathbf{1}_{\{\tau_m<\infty\}}] = e^{2\mu m} = e^{-2|\mu|m}$. The Laplace transform argument is exactly the same as in part 3.

3.7 Exercise 3.8

(i)
$$
\begin{aligned}
\varphi_n(u) &= E\left[\exp\left\{\frac{u}{\sqrt{n}}\,M_{nt,n}\right\}\right] = E\left[\exp\left\{\frac{u}{\sqrt{n}}\sum_{k=1}^{nt} X_{k,n}\right\}\right] \\
&= \prod_{k=1}^{nt} E\left[\exp\left\{\frac{u}{\sqrt{n}}\,X_{k,n}\right\}\right] \qquad \text{by independence} \\
&= \prod_{k=1}^{nt}\Big[p_n e^{u/\sqrt{n}} + q_n e^{-u/\sqrt{n}}\Big] \qquad \text{by i.i.d.} \\
&= \Big[p_n e^{u/\sqrt{n}} + q_n e^{-u/\sqrt{n}}\Big]^{nt} \\
&= \left[e^{u/\sqrt{n}}\,\frac{\frac{r}{n} + 1 - e^{-\sigma/\sqrt{n}}}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}} + e^{-u/\sqrt{n}}\,\frac{e^{\sigma/\sqrt{n}} - \frac{r}{n} - 1}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}}\right]^{nt}.
\end{aligned}
$$

(ii) First, by the substitution $x = 1/\sqrt{n}$, we have
$$
\begin{aligned}
\log\varphi_{1/x^2}(u) &= \frac{t}{x^2}\log\left[e^{ux}\,\frac{rx^2 + 1 - e^{-\sigma x}}{e^{\sigma x} - e^{-\sigma x}} - e^{-ux}\,\frac{rx^2 + 1 - e^{\sigma x}}{e^{\sigma x} - e^{-\sigma x}}\right] \\
&= \frac{t}{x^2}\log\left[(rx^2+1)\,\frac{e^{ux} - e^{-ux}}{e^{\sigma x} - e^{-\sigma x}} + \frac{e^{(\sigma-u)x} - e^{-(\sigma-u)x}}{e^{\sigma x} - e^{-\sigma x}}\right] \\
&= \frac{t}{x^2}\log\left[\frac{(rx^2+1)\sinh ux + \sinh(\sigma-u)x}{\sinh\sigma x}\right].
\end{aligned}
$$
Then by $\sinh(A - B) = \sinh A\cosh B - \cosh A\sinh B$, we have
$$
\log\varphi_{1/x^2}(u) = \frac{t}{x^2}\log\left[\frac{(rx^2+1)\sinh ux + \sinh\sigma x\cosh ux - \cosh\sigma x\sinh ux}{\sinh\sigma x}\right] = \frac{t}{x^2}\log\left[\cosh ux + \frac{(rx^2 + 1 - \cosh\sigma x)\sinh ux}{\sinh\sigma x}\right].
$$

(iii) Since $\sinh z = z + O(z^3)$,
$$\frac{\sinh ux}{\sinh\sigma x} = \frac{ux + O(x^3)}{\sigma x + O(x^3)} = \frac{u}{\sigma} + O(x^2),$$
and since $\cosh z = 1 + \frac{1}{2}z^2 + O(z^4)$, we have $1 - \cosh z = -\frac{1}{2}z^2 + O(z^4)$. Therefore
$$
\begin{aligned}
\cosh ux + \frac{(rx^2 + 1 - \cosh\sigma x)\sinh ux}{\sinh\sigma x}
&= 1 + \frac{u^2x^2}{2} + O(x^4) + \left(rx^2 - \frac{\sigma^2x^2}{2} + O(x^4)\right)\left(\frac{u}{\sigma} + O(x^2)\right) \\
&= 1 + \frac{u^2x^2}{2} + \frac{rux^2}{\sigma} - \frac{1}{2}u\sigma x^2 + O(x^4).
\end{aligned}
$$

(iv) Since
$$
\begin{aligned}
\log\varphi_{1/x^2}(u) &= \frac{t}{x^2}\log\left[1 + \left(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\right)x^2 + O(x^4)\right] \\
&= \frac{t}{x^2}\left[\left(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\right)x^2 + O(x^4)\right] = \left(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\right)t + O(x^2),
\end{aligned}
$$
we have the moment generating function
$$\lim_{x\to 0}\log\varphi_{1/x^2}(u) = \left(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\right)t = u\left(\frac{rt}{\sigma} - \frac{\sigma t}{2}\right) + \frac{u^2}{2}\,t.$$
This implies that
$$\frac{1}{\sqrt{n}}\,M_{nt,n} \longrightarrow N\left(\frac{rt}{\sigma} - \frac{\sigma t}{2},\ t\right) \qquad \text{as } n\to\infty.$$
Now by the following property of normal random variables,
$$Z \sim N(a, b) \implies cZ \sim N(ca, c^2 b),$$
we have
$$\frac{\sigma}{\sqrt{n}}\,M_{nt,n} \longrightarrow N\left(\sigma\left(\frac{rt}{\sigma} - \frac{\sigma t}{2}\right),\ \sigma^2 t\right) = N\left(\left(r - \frac{\sigma^2}{2}\right)t,\ \sigma^2 t\right)$$
as $n\to\infty$, i.e. the limiting distribution of $\frac{\sigma}{\sqrt{n}}M_{nt,n}$ is normal with mean $(r - \sigma^2/2)t$ and variance $\sigma^2 t$.

Chapter 4

Theorem 1. If $dx = a(x,t)\,dt + b(x,t)\,dW$, where $dW$ is a Wiener process/Brownian motion, then for $G(x,t)$ we have
$$dG = \left(a\,\frac{\partial G}{\partial x} + \frac{\partial G}{\partial t} + \frac{1}{2}b^2\,\frac{\partial^2 G}{\partial x^2}\right)dt + b\,\frac{\partial G}{\partial x}\,dW. \tag{4.1}$$

4.1 Exercise 4.1

For any $0 \le s \le t \le T$, without loss of generality we assume $s = t_l$ and $t = t_k$, since we can always re-arrange the indices to obtain a new partition of $[0, T]$ containing both. Now
$$
\begin{aligned}
E[I(t)\,|\,\mathcal{F}(s)] &= E\left[\sum_{j=0}^{l-1}\Delta(t_j)\big[M(t_{j+1}) - M(t_j)\big] + \sum_{j=l}^{k-1}\Delta(t_j)\big[M(t_{j+1}) - M(t_j)\big]\,\Big|\,\mathcal{F}(t_l)\right] \\
&= \sum_{j=0}^{l-1}\Delta(t_j)\big[M(t_{j+1}) - M(t_j)\big] + \sum_{j=l}^{k-1} E\Big[\Delta(t_j)\big[M(t_{j+1}) - M(t_j)\big]\,\big|\,\mathcal{F}(t_l)\Big] \\
&= I(s) + \sum_{j=l}^{k-1} E\Big[E\big[\Delta(t_j)[M(t_{j+1}) - M(t_j)]\,\big|\,\mathcal{F}(t_j)\big]\,\Big|\,\mathcal{F}(t_l)\Big] \qquad \big(\text{iterated conditioning, } \mathcal{F}(t_l)\subseteq\mathcal{F}(t_j)\big) \\
&= I(s) + \sum_{j=l}^{k-1} E\Big[\Delta(t_j)\big(E[M(t_{j+1})\,|\,\mathcal{F}(t_j)] - M(t_j)\big)\,\Big|\,\mathcal{F}(t_l)\Big] \qquad \big(\Delta(t_j)\text{ is }\mathcal{F}(t_j)\text{-measurable}\big) \\
&= I(s) + \sum_{j=l}^{k-1} E\big[0\,\big|\,\mathcal{F}(t_l)\big] \qquad \big(M\text{ is a martingale}\big) \\
&= I(s) + 0 = I(s),
\end{aligned}
$$
so $I(t)$ is a martingale for $0 \le t \le T$.

4.2 Exercise 4.2

(i) Without loss of generality, we assume $t = t_k$ and $s = t_l$ where $t_l < t_k$. In other words, to show that $I(t) - I(s)$ is independent of $\mathcal{F}(s)$ for $0 \le s < t \le T$, it is sufficient to show that $I(t_k) - I(t_l)$ is independent of $\mathcal{F}(t_l)$. Hence
$$I(t) - I(s) = \sum_{j=0}^{k-1}\Delta(t_j)\big[W(t_{j+1}) - W(t_j)\big] - \sum_{j=0}^{l-1}\Delta(t_j)\big[W(t_{j+1}) - W(t_j)\big] = \sum_{j=l}^{k-1}\Delta(t_j)\big[W(t_{j+1}) - W(t_j)\big].$$
Now, since $\Delta(t)$ is a nonrandom simple process, each $\Delta(t_j)$ is a constant; also, since $W(t)$ is a Brownian motion, each $W(t_{j+1}) - W(t_j)$ is independent of $\mathcal{F}(t_j)$ for $j \ge l$. Furthermore, since $\mathcal{F}(t_l)\subseteq\mathcal{F}(t_j)$ for all $j \ge l$, each $W(t_{j+1}) - W(t_j)$ is independent of $\mathcal{F}(t_l)$, which implies each $\Delta(t_j)[W(t_{j+1}) - W(t_j)]$ is independent of $\mathcal{F}(t_l)$. By taking the summation, $I(t) - I(s) = \sum_{j=l}^{k-1}\Delta(t_j)[W(t_{j+1}) - W(t_j)]$ is independent of $\mathcal{F}(t_l)$.

(ii) Again, we consider
$$I(t) - I(s) = \sum_{j=l}^{k-1}\Delta(t_j)\big[W(t_{j+1}) - W(t_j)\big].$$
Since each $W(t_{j+1}) - W(t_j)$ is a normal random variable with mean $0$ and variance $t_{j+1} - t_j$, $\Delta(t_j)[W(t_{j+1}) - W(t_j)]$ has a normal distribution with mean $0$ and variance $\Delta^2(t_j)(t_{j+1} - t_j)$. Since a sum of independent normal random variables is still a normal random variable,
$$I(t) - I(s) \sim N\left(0,\ \sum_{j=l}^{k-1}\Delta^2(t_j)(t_{j+1} - t_j)\right).$$
As $\Delta(t)$ is a nonrandom simple process, $\sum_{j=l}^{k-1}\Delta^2(t_j)(t_{j+1} - t_j)$ is exactly the Riemann sum defining $\int_s^t\Delta^2(u)\,du$ (the integrand is constant on each subinterval). Hence
$$I(t) - I(s) \sim N\left(0,\ \int_s^t\Delta^2(u)\,du\right).$$

(iii) Since
$$E[I(t)|\mathcal{F}(s)] = E[I(t) - I(s) + I(s)\,|\,\mathcal{F}(s)] = E[I(t) - I(s)\,|\,\mathcal{F}(s)] + I(s) = E[I(t) - I(s)] + I(s) = 0 + I(s) = I(s)$$
(using that $I(t) - I(s)$ is independent of $\mathcal{F}(s)$ and normally distributed with mean $0$), $I(t)$ is a martingale.

(iv) First we write
$$I^2(t) - \int_0^t\Delta^2(u)\,du = \big(I(t) - I(s)\big)^2 + 2\big(I(t) - I(s)\big)I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du.$$
Then for $0 \le s \le t \le T$,
$$
\begin{aligned}
E\left[I^2(t) - \int_0^t\Delta^2(u)\,du\,\Big|\,\mathcal{F}(s)\right]
&= E\big[(I(t) - I(s))^2\big] + 2E\big[I(t) - I(s)\big]\,I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du \\
&= \int_s^t\Delta^2(u)\,du + 2\cdot 0\cdot I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du \\
&= I^2(s) - \int_0^s\Delta^2(u)\,du,
\end{aligned}
$$
which implies that $I^2(t) - \int_0^t\Delta^2(u)\,du$ is a martingale.

4.3 Exercise 4.5

1. In Theorem 1, $G(x,t) = \log x$; then
$$d\log S(t) = \left(\alpha(t)S(t)\cdot\frac{1}{S(t)} + 0 + \frac{1}{2}\sigma^2(t)S^2(t)\cdot\left(-\frac{1}{S^2(t)}\right)\right)dt + \sigma(t)S(t)\cdot\frac{1}{S(t)}\,dW(t) = \left(\alpha(t) - \frac{1}{2}\sigma^2(t)\right)dt + \sigma(t)\,dW(t).$$

2.
$$\log S(t) - \log S(0) = \int_0^t\left(\alpha(s) - \frac{1}{2}\sigma^2(s)\right)ds + \int_0^t\sigma(s)\,dW(s),$$
so
$$S(t) = S(0)\exp\left\{\int_0^t\left(\alpha(s) - \frac{1}{2}\sigma^2(s)\right)ds + \int_0^t\sigma(s)\,dW(s)\right\}.$$
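For constant coefficients the exponential solution can be sampled directly, and $E[S(t)] = S(0)e^{\alpha t}$ gives a quick consistency check (my illustration; the parameter values are arbitrary):

```python
import math
import random

# Exercise 4.5 sketch (constant coefficients): sample
# S(t) = S(0) exp{(alpha - sigma^2/2) t + sigma W(t)} and check E[S(t)] = S(0) e^{alpha t}.
random.seed(5)
s0, alpha, sigma, t = 1.0, 0.1, 0.3, 1.0
n = 400000
mean_s = sum(
    s0 * math.exp((alpha - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * random.gauss(0, 1))
    for _ in range(n)
) / n
print(mean_s, s0 * math.exp(alpha * t))
```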

4.4 Exercise 4.6

Define the Itô process
$$X(t) = \int_0^t\left(\alpha - \frac{1}{2}\sigma^2\right)ds + \int_0^t\sigma\,dW(s),$$
so that $S(t) = S(0)e^{X(t)}$. Then
$$dX(t) = \left(\alpha - \frac{1}{2}\sigma^2\right)dt + \sigma\,dW(t), \qquad dX(t)\,dX(t) = \sigma^2\,dt.$$
Now we use two different notations to compute $d(S^p(t))$:

1. Under the notation of Theorem 1, $G(x,t) = S^p(0)e^{px}$, $a = \alpha - \frac{1}{2}\sigma^2$ and $b = \sigma$. Then
$$d(S^p(t)) = \left[\left(\alpha - \frac{1}{2}\sigma^2\right)pS^p(0)e^{pX(t)} + 0 + \frac{1}{2}\sigma^2 p^2 S^p(0)e^{pX(t)}\right]dt + \sigma\,pS^p(0)e^{pX(t)}\,dW(t) = \left(p\alpha - \frac{1}{2}p\sigma^2 + \frac{1}{2}p^2\sigma^2\right)S^p(t)\,dt + p\sigma S^p(t)\,dW(t).$$

2. Under the notation of Theorem 4.4.6 of the text, $S^p(t) = f(X(t))$, where $f(x) = S^p(0)e^{px}$, $f'(x) = pS^p(0)e^{px}$ and $f''(x) = p^2S^p(0)e^{px}$. Now
$$
\begin{aligned}
d(S^p(t)) &= df(X(t)) = pS^p(0)e^{pX(t)}\,dX(t) + \frac{1}{2}p^2S^p(0)e^{pX(t)}\,dX(t)\,dX(t) \\
&= pS^p(t)\,dX(t) + \frac{1}{2}p^2S^p(t)\,dX(t)\,dX(t) \\
&= pS^p(t)\left[\left(\alpha - \frac{1}{2}\sigma^2\right)dt + \sigma\,dW(t)\right] + \frac{1}{2}p^2S^p(t)\,\sigma^2\,dt \\
&= \left(p\alpha - \frac{1}{2}p\sigma^2 + \frac{1}{2}p^2\sigma^2\right)S^p(t)\,dt + p\sigma S^p(t)\,dW(t).
\end{aligned}
$$
4.5

Exercise 4.7

Here we use the following version (4.4.1)of Itos lemma:


1
df (W (t)) = f 0 (W (t))dW (t) + f 00 (W (t))dt
2

(4.2)

1. Set f (x) = x4 in (4.2), then f 0 (x) = 4x3 and f 00 (x) = 12x2 . Then (4.2) becomes
dW 4 (t) = 4W 3 (t)dW (t) +

1
12W 2 (t)dt = 4W 3 (t)dW (t) + 6W 2 (t)dt
2

We may also write it in the integral form as:


Z T
Z T
Z
W 4 (T ) = W 4 (0)+4
W 3 (t)dW (t)+6
W 2 (t)dt = 4
0

W 3 (t)dW (t)+6

2. First we note that W (t) N (0, t), then by the symmetry of the integrand
Z
1 x2
E[W 3 (t)] =
x3
e 2t dx = 0
2t

22

Z
0

W 2 (t)dt (4.3)

Now we take the expectation in (4.3),


" Z
 4

E W (T ) =E 4

W (t)dt
0



4E W 3 (t) dW (t) + 6



E W 2 (t) dt

0
T

W (t)dW (t) + 6

t2 dt

40dW (t) + 6
0

=3T 2
3. Set f (x) = x6 in (4.2), then f 0 (x) = 6x5 and f 00 (x) = 30x4 . Then (4.2) becomes
dW 6 (t) = 6W 5 (t)dW (t) +

1
30W 4 (t)dt = 6W 5 (t)dW (t) + 15W 4 (t)dt
2

We may also write it in the integral form as:


Z T
Z
6
6
5
W (T ) = W (0) + 6
W (t)dW (t) + 15
0

W (t)dt = 6

W (t)dW (t) + 15

W 4 (t)dt

It is easy to see that E[W 5 (t)] = 0. Then by taking expectation in the above equation, we have
" Z
#
Z T
T
 6

5
4
E W (T ) =E 6
W (t)dW (t) + 15
W (t)dt
0



6E W 5 (t) dW (t) + 15



E W 4 (t) dt

=6

0dW (t) + 15
0

3t2 dt

=15T 3
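Both moment formulas follow from $W(T)\sim N(0,T)$ and can be checked by Monte Carlo (my illustration; the horizon $T$ is an arbitrary choice):

```python
import math
import random

# Exercise 4.7 sketch: Monte Carlo check of E[W(T)^4] = 3 T^2 and
# E[W(T)^6] = 15 T^3, using W(T) ~ N(0, T).
random.seed(6)
T = 2.0
n = 500000
m4 = m6 = 0.0
for _ in range(n):
    w = random.gauss(0, math.sqrt(T))
    m4 += w**4
    m6 += w**6
m4 /= n
m6 /= n
print(m4, 3 * T**2, m6, 15 * T**3)
```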

4.6 Exercise 4.8

1. In (4.4.23) of the text, let $f(t,x) = e^{\beta t}x$; then $f_t = \beta e^{\beta t}x$, $f_x = e^{\beta t}$ and $f_{xx} = 0$, and (4.4.23) becomes
$$d\big(e^{\beta t}R(t)\big) = \beta e^{\beta t}R(t)\,dt + e^{\beta t}\,dR(t) = \beta e^{\beta t}R(t)\,dt + e^{\beta t}\big[(\alpha - \beta R(t))\,dt + \sigma\,dW(t)\big] = \alpha e^{\beta t}\,dt + \sigma e^{\beta t}\,dW(t).$$

2. Integrating both sides of the above equation, we have
$$e^{\beta t}R(t) = e^0 R(0) + \int_0^t\alpha e^{\beta s}\,ds + \int_0^t\sigma e^{\beta s}\,dW(s) = R(0) + \frac{\alpha}{\beta}\big(e^{\beta t} - 1\big) + \sigma\int_0^t e^{\beta s}\,dW(s),$$
which implies
$$R(t) = e^{-\beta t}R(0) + \frac{\alpha}{\beta}\big(1 - e^{-\beta t}\big) + \sigma e^{-\beta t}\int_0^t e^{\beta s}\,dW(s).$$

4.7 Exercise 4.9

1. It is clear that
$$N'(y) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{y^2}{2}} \qquad \text{and} \qquad d_\pm = d_\pm(T-t, x) = \frac{1}{\sigma\sqrt{T-t}}\left[\log\frac{x}{K} + \left(r \pm \frac{\sigma^2}{2}\right)(T-t)\right].$$
Then
$$
\begin{aligned}
\frac{N'(d_+)}{N'(d_-)} &= \exp\left(-\frac{d_+^2}{2} + \frac{d_-^2}{2}\right) \\
&= \exp\left\{-\frac{1}{2\sigma^2(T-t)}\left[\left(\log\frac{x}{K} + \Big(r + \frac{\sigma^2}{2}\Big)(T-t)\right)^2 - \left(\log\frac{x}{K} + \Big(r - \frac{\sigma^2}{2}\Big)(T-t)\right)^2\right]\right\} \\
&= \exp\left\{-\frac{1}{2\sigma^2(T-t)}\cdot 2\left(\log\frac{x}{K} + r(T-t)\right)\sigma^2(T-t)\right\} \\
&= \exp\left\{-\log\frac{x}{K} - r(T-t)\right\} = \frac{K}{x}\,e^{-r(T-t)},
\end{aligned}
$$
i.e. $xN'(d_+) = Ke^{-r(T-t)}N'(d_-)$.

2. First,
$$\frac{\partial d_\pm}{\partial x} = \frac{1}{\sigma\sqrt{T-t}\,x}.$$
By the definition $c(t,x) = xN(d_+(T-t,x)) - Ke^{-r(T-t)}N(d_-(T-t,x))$,
$$c_x = N(d_+) + xN'(d_+)\frac{\partial d_+}{\partial x} - Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial x} = N(d_+) + \frac{1}{\sigma\sqrt{T-t}\,x}\Big[xN'(d_+) - Ke^{-r(T-t)}N'(d_-)\Big] = N(d_+)$$
by part 1.

3.
$$c_t = xN'(d_+)\frac{\partial d_+}{\partial t} - Ke^{-r(T-t)}\,rN(d_-) - Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial t}.$$
Using $xN'(d_+) = Ke^{-r(T-t)}N'(d_-)$ from part 1,
$$c_t = -rKe^{-r(T-t)}N(d_-) + xN'(d_+)\left(\frac{\partial d_+}{\partial t} - \frac{\partial d_-}{\partial t}\right).$$
Now note that $d_+ - d_- = \sigma\sqrt{T-t}$, so
$$\frac{\partial}{\partial t}(d_+ - d_-) = -\frac{\sigma}{2\sqrt{T-t}},$$
and therefore
$$c_t = -rKe^{-r(T-t)}N(d_-) - \frac{\sigma x}{2\sqrt{T-t}}\,N'(d_+).$$

4.
$$
\begin{aligned}
c_t + rxc_x + \frac{1}{2}\sigma^2x^2c_{xx}
&= -rKe^{-r(T-t)}N(d_-) - \frac{\sigma x}{2\sqrt{T-t}}N'(d_+) + rxN(d_+) + \frac{1}{2}\sigma^2x^2\,\frac{d}{dx}\big[N(d_+(T-t,x))\big] \\
&= -rKe^{-r(T-t)}N(d_-) - \frac{\sigma x}{2\sqrt{T-t}}N'(d_+) + rxN(d_+) + \frac{1}{2}\sigma^2x^2\,N'(d_+)\,\frac{1}{\sigma\sqrt{T-t}\,x} \\
&= rxN(d_+) - rKe^{-r(T-t)}N(d_-) \\
&= rc.
\end{aligned}
$$

5. As $t\to T^-$, $\sqrt{T-t}\to 0$; and $\log\frac{x}{K} > 0$ if $x > K$, while $\log\frac{x}{K} < 0$ if $0 < x < K$. Therefore
$$\lim_{t\to T^-} d_\pm = \lim_{t\to T^-}\frac{1}{\sigma\sqrt{T-t}}\left[\log\frac{x}{K} + \left(r \pm \frac{\sigma^2}{2}\right)(T-t)\right] = \begin{cases}+\infty, & \text{if } x > K \\ -\infty, & \text{if } 0 < x < K.\end{cases}$$
Then
$$\lim_{t\to T^-} N(d_\pm) = \begin{cases}N(\infty) = 1, & \text{if } x > K \\ N(-\infty) = 0, & \text{if } 0 < x < K,\end{cases}$$
so
$$\lim_{t\to T^-} c(t,x) = \begin{cases}x\cdot 1 - Ke^{-r(T-T)}\cdot 1 = x - K, & \text{if } x > K \\ x\cdot 0 - Ke^{-r(T-T)}\cdot 0 = 0, & \text{if } 0 < x < K\end{cases} = (x - K)^+.$$

6. For $0 \le t < T$, $\log\frac{x}{K}\to -\infty$ as $x\to 0^+$, so
$$\lim_{x\to 0^+} d_\pm = \lim_{x\to 0^+}\frac{1}{\sigma\sqrt{T-t}}\left[\log\frac{x}{K} + \left(r \pm \frac{\sigma^2}{2}\right)(T-t)\right] = -\infty,$$
which implies $N(d_\pm)\to N(-\infty) = 0$ as $x\to 0^+$. Therefore $c(t,x)\to 0\cdot 0 - Ke^{-r(T-t)}\cdot 0 = 0$ as $x\to 0^+$.

7. For $0 \le t < T$, $\log\frac{x}{K}\to\infty$ as $x\to\infty$, so $\lim_{x\to\infty} d_\pm = \infty$, which implies $N(d_\pm)\to N(\infty) = 1$ as $x\to\infty$. Hence
$$
\begin{aligned}
\lim_{x\to\infty}\Big[c(t,x) - \big(x - e^{-r(T-t)}K\big)\Big]
&= \lim_{x\to\infty}\Big[xN(d_+) - Ke^{-r(T-t)}N(d_-) - x + e^{-r(T-t)}K\Big] \\
&= \lim_{x\to\infty} x\big(N(d_+) - 1\big) + Ke^{-r(T-t)}\lim_{x\to\infty}\big(1 - N(d_-)\big) \\
&= \lim_{x\to\infty}\frac{N(d_+) - 1}{x^{-1}} + 0 \\
&= \lim_{x\to\infty}\frac{N'(d_+)\cdot\frac{1}{\sigma\sqrt{T-t}\,x}}{-x^{-2}} \qquad \text{(L'Hôpital's rule)} \\
&= -\frac{1}{\sigma\sqrt{T-t}}\lim_{x\to\infty} x\,N'(d_+) \\
&= 0,
\end{aligned}
$$
since $N'(d_+) = \frac{1}{\sqrt{2\pi}}e^{-d_+^2/2}$ decays like $\exp\{-(\log x)^2/(2\sigma^2(T-t))\}$ as $x\to\infty$, which goes to zero faster than any power of $x$ grows.

4.8 Exercise 4.11

Where there is no risk of confusion, we shorten the notation throughout this problem:
\[ S = S(t), \quad c = c(t,S(t)), \quad c_x = c_x(t,S(t)), \quad c_t = c_t(t,S(t)), \quad c_{xx} = c_{xx}(t,S(t)). \]
By
\[ dS(t) = \alpha S\,dt + \sigma_2 S\,dW(t) \]
we have
\[ dS(t)\,dS(t) = \alpha^2S^2\,dt\,dt + 2\alpha\sigma_2S^2\,dt\,dW(t) + \sigma_2^2S^2\,dW(t)\,dW(t) = \sigma_2^2S^2\,dt, \]
and
\[ dc(t,S(t)) = c_t\,dt + c_x\,dS + \frac12 c_{xx}\,dS\,dS. \]
Now we compute
\begin{align*}
d\big(e^{-rt}X(t)\big) &= -re^{-rt}X(t)\,dt + e^{-rt}\,dX(t) \\
&= -re^{-rt}X(t)\,dt + e^{-rt}\left[dc - c_x\,dS + r(X - c + Sc_x)\,dt - \frac12(\sigma_2^2 - \sigma_1^2)S^2c_{xx}\,dt\right] \\
&= e^{-rt}\left[c_t\,dt + c_x\,dS + \frac12\sigma_2^2S^2c_{xx}\,dt - c_x\,dS - rc\,dt + rSc_x\,dt - \frac12(\sigma_2^2 - \sigma_1^2)S^2c_{xx}\,dt\right] \\
&= e^{-rt}\left[c_t - rc + rSc_x + \frac12\sigma_1^2S^2c_{xx}\right]dt \\
&= 0,
\end{align*}
where the last equality holds because \( c \) satisfies the Black-Scholes-Merton equation with volatility \( \sigma_1 \). Note also that the \( dW(t) \) terms cancel, so \( e^{-rt}X(t) \) is a non-random constant: \( e^{-rt}X(t) = e^{-r\cdot 0}X(0) = X(0) = 0 \) for all \( t \). Since \( e^{-rt} > 0 \), we conclude \( X(t) = 0 \) for all \( t \).

4.9 Exercise 4.13

Since \( B_1(t) \) and \( B_2(t) \) are Brownian motions, \( dB_1(t)\,dB_1(t) = dt \) and \( dB_2(t)\,dB_2(t) = dt \). Substitute the definitions
\begin{align*}
dB_1(t) &= dW_1(t), \\
dB_2(t) &= \rho(t)\,dW_1(t) + \sqrt{1-\rho^2(t)}\,dW_2(t).
\end{align*}
Note that these equations imply that \( W_1(t) \) is a Brownian motion and \( W_2(t) \) is a continuous martingale with \( W_2(0) = 0 \), since we can write \( dW_2(t) = dB_2(t)/\sqrt{1-\rho^2(t)} - \rho(t)\,dW_1(t)/\sqrt{1-\rho^2(t)} \) with \( -1 < \rho < 1 \). Furthermore we have
\[ dW_1(t)\,dW_1(t) = dt, \qquad \Big(\rho(t)\,dW_1(t) + \sqrt{1-\rho^2(t)}\,dW_2(t)\Big)\Big(\rho(t)\,dW_1(t) + \sqrt{1-\rho^2(t)}\,dW_2(t)\Big) = dt. \tag{4.4} \]
Also, by \( dB_1(t)\,dB_2(t) = \rho(t)\,dt \), we have
\begin{align*}
dW_1(t)\Big(\rho(t)\,dW_1(t) + \sqrt{1-\rho^2(t)}\,dW_2(t)\Big) &= \rho(t)\,dt, \\
\rho(t)\,dt + \sqrt{1-\rho^2(t)}\,dW_1(t)\,dW_2(t) &= \rho(t)\,dt,
\end{align*}
which implies
\[ dW_1(t)\,dW_2(t) = 0. \tag{4.5} \]
Then substitute (4.5) into the second equation of (4.4):
\begin{align*}
\rho^2\,dt + (1-\rho^2)\,dW_2(t)\,dW_2(t) + 2\rho\sqrt{1-\rho^2}\,dW_1(t)\,dW_2(t) &= dt, \\
\rho^2\,dt + (1-\rho^2)\,dW_2(t)\,dW_2(t) + 0 &= dt, \\
dW_2(t)\,dW_2(t) &= dt. \tag{4.6}
\end{align*}
Based on the fact that \( W_1(t) \) and \( W_2(t) \) are continuous martingales with \( W_1(0) = 0 = W_2(0) \), together with (4.4), (4.5) and (4.6), applying Theorem 4.6.5 we conclude that \( W_1(t) \) and \( W_2(t) \) are independent Brownian motions.

4.10 Exercise 4.15

1. Note that
\[ dW_j(t)\,dW_k(t) = \begin{cases} dt & \text{if } j = k \\ 0 & \text{if } j \ne k \end{cases} \tag{4.7} \]
and
\[ dB_i(t) = \sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t), \]
which implies that \( B_i(t) \) is a continuous martingale with \( B_i(0) = 0 \) for each \( 1 \le i \le d \). Then we compute the quadratic variation, or equivalently,
\begin{align*}
dB_i(t)\,dB_i(t) &= \sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t)\sum_{k=1}^{d}\frac{\sigma_{ik}(t)}{\sigma_i(t)}\,dW_k(t) \\
&= \frac{1}{\sigma_i^2(t)}\left[\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\right]\left[\sum_{k=1}^{d}\sigma_{ik}(t)\,dW_k(t)\right] \\
&= \frac{1}{\sigma_i^2(t)}\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\,\sigma_{ij}(t)\,dW_j(t) \qquad \text{by (4.7)} \\
&= \frac{1}{\sigma_i^2(t)}\sum_{j=1}^{d}\sigma_{ij}^2(t)\,dt \\
&= dt,
\end{align*}
since \( \sigma_i(t) = \big(\sum_{j=1}^{d}\sigma_{ij}^2(t)\big)^{1/2} \). By Lévy's theorem, each \( B_i(t) \) is a Brownian motion.

2.
\begin{align*}
dB_i(t)\,dB_k(t) &= \sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t)\sum_{l=1}^{d}\frac{\sigma_{kl}(t)}{\sigma_k(t)}\,dW_l(t) \\
&= \frac{1}{\sigma_i(t)\sigma_k(t)}\left[\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\right]\left[\sum_{l=1}^{d}\sigma_{kl}(t)\,dW_l(t)\right] \\
&= \frac{1}{\sigma_i(t)\sigma_k(t)}\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\,\sigma_{kj}(t)\,dW_j(t) \qquad \text{by (4.7)} \\
&= \frac{1}{\sigma_i(t)\sigma_k(t)}\sum_{j=1}^{d}\sigma_{ij}(t)\sigma_{kj}(t)\,dt \\
&= \rho_{ik}(t)\,dt.
\end{align*}
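The instantaneous correlations \( \rho_{ik}(t) = \frac{1}{\sigma_i(t)\sigma_k(t)}\sum_j\sigma_{ij}(t)\sigma_{kj}(t) \) just computed are genuine correlations: the diagonal is 1, the matrix is symmetric, and \( |\rho_{ik}| \le 1 \) by Cauchy-Schwarz. A small sketch with a hypothetical volatility matrix (values are arbitrary):

```python
import math

def correlation_from_vols(sigma):
    """Given row vectors sigma[i] = (sigma_i1, ..., sigma_id), build
    rho_ik = sum_j sigma_ij * sigma_kj / (sigma_i * sigma_k),
    where sigma_i is the Euclidean norm of row i."""
    norms = [math.sqrt(sum(s * s for s in row)) for row in sigma]
    n = len(sigma)
    return [[sum(sigma[i][j] * sigma[k][j] for j in range(len(sigma[i])))
             / (norms[i] * norms[k]) for k in range(n)] for i in range(n)]

# Hypothetical 2x3 volatility matrix: two assets, three driving Brownian motions.
sigma = [[0.2, 0.1, 0.05],
         [0.0, 0.3, 0.10]]
rho = correlation_from_vols(sigma)

assert abs(rho[0][0] - 1.0) < 1e-12 and abs(rho[1][1] - 1.0) < 1e-12
assert abs(rho[0][1] - rho[1][0]) < 1e-12
assert all(abs(rho[i][k]) <= 1.0 + 1e-12 for i in range(2) for k in range(2))
```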

4.11 Exercise 4.18

1. In (4.4.13), let
\[ f(t,x) = e^{-\theta x - (r + \theta^2/2)t}, \]
so that
\[ f_t(t,x) = -\Big(r + \frac{\theta^2}{2}\Big)f(t,x), \qquad f_x(t,x) = -\theta f(t,x), \qquad f_{xx}(t,x) = \theta^2 f(t,x). \]
Then the Itô formula (4.4.13) applied to \( \zeta(t) = f(t, W(t)) \) becomes
\begin{align*}
d\zeta(t) &= -\Big(r + \frac{\theta^2}{2}\Big)e^{-\theta W(t) - (r+\theta^2/2)t}\,dt - \theta e^{-\theta W(t) - (r+\theta^2/2)t}\,dW(t) + \frac12\theta^2 e^{-\theta W(t) - (r+\theta^2/2)t}\,dW(t)\,dW(t) \\
&= -\Big(r + \frac{\theta^2}{2}\Big)\zeta(t)\,dt - \theta\zeta(t)\,dW(t) + \frac12\theta^2\zeta(t)\,dt \\
&= -\theta\zeta(t)\,dW(t) - r\zeta(t)\,dt.
\end{align*}

2. By Itô's product rule,
\begin{align*}
d\big(\zeta(t)X(t)\big) &= \zeta(t)\,dX(t) + X(t)\,d\zeta(t) + d\zeta(t)\,dX(t) \\
&= \zeta\big[rX\,dt + \Delta(\alpha - r)S\,dt + \Delta\sigma S\,dW(t)\big] + X\zeta\big[-\theta\,dW(t) - r\,dt\big] \\
&\quad + \zeta\big[rX\,dt + \Delta(\alpha - r)S\,dt + \Delta\sigma S\,dW(t)\big]\big[-\theta\,dW(t) - r\,dt\big] \\
&= \zeta\big[rX + \Delta(\alpha - r)S - rX - \theta\Delta\sigma S\big]dt + \zeta\big[\Delta\sigma S - \theta X\big]dW(t) \\
&= \zeta\Delta S\big[(\alpha - r) - \theta\sigma\big]dt + \zeta\big[\Delta\sigma S - \theta X\big]dW(t) \\
&= \zeta\big[\Delta\sigma S - \theta X\big]dW(t) \qquad (\text{since } \theta = (\alpha - r)/\sigma),
\end{align*}
i.e., \( d(\zeta(t)X(t)) \) has no \( dt \) term. Therefore \( \zeta(t)X(t) \) is a martingale.

3. If an investor begins with initial capital \( X(0) \) and has portfolio value \( V(T) \) at time \( T \), i.e. \( X(T) = V(T) \), then
\[ \zeta(T)X(T) = \zeta(T)V(T). \]
Since \( X(T) \) and \( V(T) \) are random, take expectations in this identity:
\[ \mathbb{E}[\zeta(T)X(T)] = \mathbb{E}[\zeta(T)V(T)]. \]
Now since \( \zeta(t)X(t) \) is a martingale,
\[ \mathbb{E}[\zeta(T)X(T)] = \zeta(0)X(0) = X(0). \]
Equating the two displays, we have \( X(0) = \mathbb{E}[\zeta(T)V(T)] \).

4.12 Exercise 4.19

1. By the definition \( dB(t) = \mathrm{sign}(W(t))\,dW(t) \), \( B(t) \) is a martingale with continuous paths and \( B(0) = 0 \). Furthermore, since \( dB(t)\,dB(t) = \mathrm{sign}^2(W(t))\,dW(t)\,dW(t) = dt \), \( B(t) \) is a Brownian motion.

2. By Itô's product formula,
\begin{align*}
d[B(t)W(t)] &= W(t)\,dB(t) + B(t)\,dW(t) + dB(t)\,dW(t) \\
&= \mathrm{sign}(W(t))W(t)\,dW(t) + B(t)\,dW(t) + \mathrm{sign}(W(t))\,dW(t)\,dW(t) \\
&= \big[\mathrm{sign}(W(t))W(t) + B(t)\big]dW(t) + \mathrm{sign}(W(t))\,dt.
\end{align*}
Integrating both sides, we have
\[ B(t)W(t) = B(0)W(0) + \int_0^t\mathrm{sign}(W(s))\,ds + \int_0^t\big[\mathrm{sign}(W(s))W(s) + B(s)\big]\,dW(s) = \int_0^t\mathrm{sign}(W(s))\,ds + \int_0^t\big[\mathrm{sign}(W(s))W(s) + B(s)\big]\,dW(s). \]
Note that the expectation of an Itô integral is 0 and \( W(s) \sim N(0,s) \), which implies
\[ \mathbb{E}[\mathrm{sign}(W(s))] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = \int_0^{\infty}\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx - \int_{-\infty}^{0}\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = 0. \]
Then, taking expectations on both sides,
\[ \mathbb{E}[B(t)W(t)] = \int_0^t\mathbb{E}[\mathrm{sign}(W(s))]\,ds = 0. \]
Since \( \mathbb{E}[B(t)] = 0 = \mathbb{E}[W(t)] \), this shows that \( B(t) \) and \( W(t) \) are uncorrelated.

3. By Itô's product formula,
\[ d[W(t)W(t)] = W(t)\,dW(t) + W(t)\,dW(t) + dW(t)\,dW(t) = 2W(t)\,dW(t) + dt. \]

4. By Itô's product formula,
\begin{align*}
d[B(t)W^2(t)] &= W^2(t)\,dB(t) + B(t)\,d[W^2(t)] + dB(t)\,d[W^2(t)] \\
&= \mathrm{sign}(W(t))W^2(t)\,dW(t) + B(t)\big[2W(t)\,dW(t) + dt\big] + \mathrm{sign}(W(t))\,dW(t)\big[2W(t)\,dW(t) + dt\big] \\
&= \big[\mathrm{sign}(W(t))W^2(t) + 2B(t)W(t)\big]dW(t) + \big[B(t) + 2\,\mathrm{sign}(W(t))W(t)\big]dt.
\end{align*}
Taking the integral on both sides,
\[ B(t)W^2(t) = \int_0^t\big[\mathrm{sign}(W(s))W^2(s) + 2B(s)W(s)\big]\,dW(s) + \int_0^t\big[B(s) + 2\,\mathrm{sign}(W(s))W(s)\big]\,ds. \]
Note again that the expectation of an Itô integral is 0, \( \mathbb{E}[B(s)] = 0 \), and \( W(s) \sim N(0,s) \). Then, taking expectations,
\[ \mathbb{E}[B(t)W^2(t)] = \int_0^t\mathbb{E}\big[B(s) + 2\,\mathrm{sign}(W(s))W(s)\big]\,ds = 2\int_0^t\mathbb{E}\big[\mathrm{sign}(W(s))W(s)\big]\,ds. \]
Now
\[ \mathbb{E}[\mathrm{sign}(W(s))W(s)] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\,x\,\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = 2\int_0^{\infty}\frac{x}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx > 0. \]
Therefore \( \mathbb{E}[B(t)W^2(t)] > 0 = \mathbb{E}[B(t)]\,\mathbb{E}[W^2(t)] \). If \( B(t) \) and \( W(t) \) were independent, then \( B(t) \) and \( W^2(t) \) would also be independent, which would imply \( \mathbb{E}[B(t)W^2(t)] = \mathbb{E}[B(t)]\,\mathbb{E}[W^2(t)] \) — a contradiction. Therefore \( B(t) \) and \( W(t) \) are uncorrelated
but not independent.
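The uncorrelated-but-dependent pair \( (B, W) \) can be illustrated by Monte Carlo. The sketch below discretizes \( B(t) = \int_0^t\mathrm{sign}(W(s))\,dW(s) \) with a left-endpoint (Itô) sum; sample sizes, seed, and the convention \( \mathrm{sign}(0) = 1 \) are choices made for this illustration, not part of the text, and the assertions use loose tolerances appropriate to the sampling error:

```python
import math, random

random.seed(7)

def simulate_paths(n_paths=10000, n_steps=100, t=1.0):
    """Simulate W and B(t) = int_0^t sign(W) dW; return terminal pairs (B(t), W(t))."""
    dt = t / n_steps
    sq = math.sqrt(dt)
    out = []
    for _ in range(n_paths):
        w, b = 0.0, 0.0
        for _ in range(n_steps):
            dw = sq * random.gauss(0.0, 1.0)
            s = 1.0 if w >= 0 else -1.0   # sign evaluated at the left endpoint (Ito)
            b += s * dw
            w += dw
        out.append((b, w))
    return out

pairs = simulate_paths()
n = len(pairs)
mean_bw = sum(b * w for b, w in pairs) / n       # should be near 0: uncorrelated
mean_bw2 = sum(b * w * w for b, w in pairs) / n  # strictly positive: not independent
assert abs(mean_bw) < 0.2
assert mean_bw2 > 0.3
```

(The theoretical value of \( \mathbb{E}[B(1)W^2(1)] = 2\int_0^1\mathbb{E}|W(s)|\,ds = \tfrac43\sqrt{2/\pi} \approx 1.06 \), well above the loose lower bound asserted.)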

5 Chapter 5

5.1 Exercise 5.1

Recall that \( dt\,dt = 0 \), \( dt\,dW = 0 \) and \( dW(t)\,dW(t) = dt \).

1. First note that
\[ dX(t) = \sigma(t)\,dW(t) + \Big(\alpha(t) - R(t) - \frac12\sigma^2(t)\Big)dt. \]
For \( f(x) = S(0)e^x \) we have \( f'(x) = f''(x) = f(x) \) and \( f(X(t)) = S(0)e^{X(t)} = D(t)S(t) \); therefore, by Itô's lemma,
\begin{align*}
d(D(t)S(t)) &= df(X(t)) = f'(X(t))\,dX(t) + \frac12 f''(X(t))\,dX(t)\,dX(t) \\
&= S(0)e^{X(t)}\,dX(t) + \frac12 S(0)e^{X(t)}\,dX(t)\,dX(t) \\
&= D(t)S(t)\left[\sigma(t)\,dW(t) + \Big(\alpha(t) - R(t) - \frac12\sigma^2(t)\Big)dt + \frac12\sigma^2(t)\,dt\right] \\
&= D(t)S(t)\big[\sigma(t)\,dW(t) + (\alpha(t) - R(t))\,dt\big] \\
&= D(t)S(t)\sigma(t)\big[dW(t) + \Theta(t)\,dt\big],
\end{align*}
with \( \Theta(t) = (\alpha(t) - R(t))/\sigma(t) \).

2. The other way is by Itô's product rule:
\begin{align*}
d(D(t)S(t)) &= S(t)\,dD(t) + D(t)\,dS(t) + dD(t)\,dS(t) \\
&= S(t)\big[-R(t)D(t)\,dt\big] + D(t)\big[\alpha(t)S(t)\,dt + \sigma(t)S(t)\,dW(t)\big] \\
&\quad + \big[-R(t)D(t)\,dt\big]\big[\alpha(t)S(t)\,dt + \sigma(t)S(t)\,dW(t)\big] \\
&= \big[\alpha(t) - R(t)\big]D(t)S(t)\,dt + D(t)\sigma(t)S(t)\,dW(t) \\
&= D(t)S(t)\big[\sigma(t)\,dW(t) + (\alpha(t) - R(t))\,dt\big] \\
&= D(t)S(t)\sigma(t)\big[dW(t) + \Theta(t)\,dt\big].
\end{align*}

5.2 Exercise 5.4

1. Consider \( d(\ln S(t)) \). By Itô's lemma,
\begin{align*}
d(\ln S(t)) &= \frac{1}{S(t)}\,dS(t) - \frac{1}{2S^2(t)}\,dS(t)\,dS(t) \\
&= r(t)\,dt + \sigma(t)\,d\widetilde W(t) - \frac{\sigma^2(t)}{2}\,dt \\
&= \Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \sigma(t)\,d\widetilde W(t).
\end{align*}
Writing this in integral form, we have
\[ \ln S(T) = \ln S(0) + \int_0^T\Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \int_0^T\sigma(t)\,d\widetilde W(t), \]
\[ S(T) = S(0)\exp\left\{\int_0^T\Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \int_0^T\sigma(t)\,d\widetilde W(t)\right\}. \]
Under the form \( S(T) = S(0)e^X \),
\[ X = \int_0^T\Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \int_0^T\sigma(t)\,d\widetilde W(t). \]
By Theorem 4.4.9 of the text, \( \int_0^T\sigma(t)\,d\widetilde W(t) \) is a normal random variable with mean 0 and variance \( \int_0^T\sigma^2(t)\,dt \) (since \( \sigma \) is nonrandom), which implies that \( X \) is normally distributed with mean \( \int_0^T\big(r(t) - \frac{\sigma^2(t)}{2}\big)dt \) and variance \( \int_0^T\sigma^2(t)\,dt \).

2. To simplify the notation, denote
\[ R = \frac1T\int_0^T r(t)\,dt, \qquad \bar\sigma = \sqrt{\frac1T\int_0^T\sigma^2(t)\,dt}. \]
Then \( X \) is normal with mean \( RT - \frac12\bar\sigma^2T \) and variance \( \bar\sigma^2T \), and
\begin{align}
c(0,S(0)) &= \widetilde{\mathbb{E}}\big[e^{-RT}(S(T) - K)^+\big] \notag\\
&= e^{-RT}\int_{\{x:\,S(0)e^x \ge K\}}\big(S(0)e^x - K\big)\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT + \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx \notag\\
&= S(0)\,e^{-RT}\int_{\{x:\,S(0)e^x \ge K\}}e^x\,\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT + \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx \notag\\
&\quad - Ke^{-RT}\int_{\{x:\,S(0)e^x \ge K\}}\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT + \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx. \tag{5.1}
\end{align}
Now the second integral is
\begin{align}
\int_{\{x:\,S(0)e^x \ge K\}}\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT + \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx
&= \widetilde{\mathbb{P}}(S(T) \ge K) = \widetilde{\mathbb{P}}\Big(X \ge \ln\frac{K}{S(0)}\Big) \notag\\
&= \widetilde{\mathbb{P}}\left(\frac{X - (RT - \frac12\bar\sigma^2T)}{\bar\sigma\sqrt T} \ge \frac{\ln\frac{K}{S(0)} - (RT - \frac12\bar\sigma^2T)}{\bar\sigma\sqrt T}\right) \notag\\
&= N\left(\frac{1}{\bar\sigma\sqrt T}\Big[\ln\frac{S(0)}{K} + RT - \frac12\bar\sigma^2T\Big]\right), \tag{5.2}
\end{align}
and, completing the square in the exponent (the factor \( e^x \) shifts the mean of the Gaussian density by \( \bar\sigma^2T \) and produces the factor \( e^{RT} \)), the first integral is
\begin{align}
e^{-RT}\int_{\{x:\,S(0)e^x \ge K\}}e^x\,\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT + \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx
&= \int_{\ln\frac{K}{S(0)}}^{\infty}\frac{1}{\bar\sigma\sqrt{2\pi T}}\exp\left\{-\frac{\big(x - RT - \frac12\bar\sigma^2T\big)^2}{2\bar\sigma^2T}\right\}dx \notag\\
&= N\left(\frac{1}{\bar\sigma\sqrt T}\Big[\ln\frac{S(0)}{K} + RT + \frac12\bar\sigma^2T\Big]\right). \tag{5.3}
\end{align}
By substituting (5.2) and (5.3) into (5.1), we have
\[ c(0,S(0)) = BSM(T, S(0); K, R, \bar\sigma). \]
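The conclusion — price with the time-averaged rate \( R \) and the root-mean-square volatility \( \bar\sigma \) — can be sketched in code. The piecewise-constant coefficients below are a hypothetical illustration, and the assertion only checks the model-free no-arbitrage bounds \( (S(0) - Ke^{-RT})^+ < c < S(0) \), which any correct call price must satisfy:

```python
import math

def norm_cdf(y):
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def bsm_call(T, s0, K, R, sigma_bar):
    """BSM(T, S(0); K, R, sigma_bar): call price with averaged rate and RMS volatility."""
    dp = (math.log(s0 / K) + (R + 0.5 * sigma_bar**2) * T) / (sigma_bar * math.sqrt(T))
    dm = dp - sigma_bar * math.sqrt(T)
    return s0 * norm_cdf(dp) - K * math.exp(-R * T) * norm_cdf(dm)

# Hypothetical piecewise-constant deterministic coefficients on [0, T], equal subintervals.
T = 2.0
r_pieces = [0.02, 0.04]       # r(t) on [0,1) and [1,2]
sig_pieces = [0.15, 0.25]     # sigma(t) on [0,1) and [1,2]
R = sum(r_pieces) / len(r_pieces)                          # (1/T) int_0^T r(t) dt
sigma_bar = math.sqrt(sum(s * s for s in sig_pieces) / 2)  # sqrt((1/T) int sigma^2 dt)

price = bsm_call(T, 100.0, 95.0, R, sigma_bar)
assert max(100.0 - 95.0 * math.exp(-R * T), 0.0) < price < 100.0
```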

5.3 Exercise 5.5

1. By the definition,
\[ \frac{1}{Z(t)} = \exp\left\{\int_0^t\Theta(u)\,dW(u) + \frac12\int_0^t\Theta^2(u)\,du\right\}. \]
Let
\[ X(t) = \int_0^t\Theta(u)\,dW(u) + \frac12\int_0^t\Theta^2(u)\,du; \]
then
\begin{align*}
d\left[\frac{1}{Z(t)}\right] &= d\big[e^{X(t)}\big] = e^{X(t)}\,dX(t) + \frac12 e^{X(t)}\,dX(t)\,dX(t) \\
&= \frac{1}{Z(t)}\left[\Theta(t)\,dW(t) + \frac12\Theta^2(t)\,dt\right] + \frac{1}{2Z(t)}\Theta^2(t)\,dt \\
&= \frac{1}{Z(t)}\left[\Theta(t)\,dW(t) + \Theta^2(t)\,dt\right].
\end{align*}

2. By Lemma 5.2.2 of the text, since \( \widetilde M(t) \) is a martingale under \( \widetilde{\mathbb{P}} \), for \( 0 \le s \le t \),
\[ \mathbb{E}\big[Z(t)\widetilde M(t)\,\big|\,\mathcal{F}(s)\big] = Z(s)\,\widetilde{\mathbb{E}}\big[\widetilde M(t)\,\big|\,\mathcal{F}(s)\big] = Z(s)\widetilde M(s), \]
which means \( \widetilde M(t)Z(t) \) is a martingale under \( \mathbb{P} \).

3. Note that \( dM(t) = \Gamma(t)\,dW(t) \). Then
\begin{align*}
d\widetilde M(t) &= d\left[M(t)\,\frac{1}{Z(t)}\right] = M(t)\,d\left[\frac{1}{Z(t)}\right] + \frac{1}{Z(t)}\,dM(t) + dM(t)\,d\left[\frac{1}{Z(t)}\right] \\
&= \frac{M(t)}{Z(t)}\left[\Theta(t)\,dW(t) + \Theta^2(t)\,dt\right] + \frac{\Gamma(t)}{Z(t)}\,dW(t) + \frac{\Theta(t)\Gamma(t)}{Z(t)}\,dt \\
&= \frac{1}{Z(t)}\big[\Gamma(t) + M(t)\Theta(t)\big]\,dW(t) + \frac{\Theta(t)}{Z(t)}\big[\Gamma(t) + M(t)\Theta(t)\big]\,dt.
\end{align*}

4. Now, since \( d\widetilde W(t) = dW(t) + \Theta(t)\,dt \), if we define
\[ \widetilde\Gamma(t) = \frac{\Gamma(t) + M(t)\Theta(t)}{Z(t)}, \]
then
\[ d\widetilde M(t) = \widetilde\Gamma(t)\,dW(t) + \widetilde\Gamma(t)\Theta(t)\,dt = \widetilde\Gamma(t)\,d\widetilde W(t). \]
Integrating both sides, we have
\[ \widetilde M(t) = \widetilde M(0) + \int_0^t\widetilde\Gamma(u)\,d\widetilde W(u). \]

5.4 Exercise 5.12

1. First, each \( \widetilde B_i(t) \) is a continuous martingale under \( \widetilde{\mathbb{P}} \) with \( \widetilde B_i(0) = 0 \). Secondly, since \( dB_i(t) = \sum_j\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t) \), we have \( dB_i(t)\,dt = 0 \), and furthermore
\[ d\widetilde B_i(t)\,d\widetilde B_i(t) = \big[dB_i(t) + \Theta_i(t)\,dt\big]\big[dB_i(t) + \Theta_i(t)\,dt\big] = dB_i(t)\,dB_i(t) = dt, \]
so \( \widetilde B_i(t) \) must be a Brownian motion for every \( i \).

2. By
\[ d\widetilde B_i(t) = dB_i(t) + \Theta_i(t)\,dt \]
we have
\begin{align*}
dS_i(t) &= \alpha_i(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\,dB_i(t) \\
&= \alpha_i(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\big[d\widetilde B_i(t) - \Theta_i(t)\,dt\big] \\
&= \big[\alpha_i(t) - \sigma_i(t)\Theta_i(t)\big]S_i(t)\,dt + \sigma_i(t)S_i(t)\,d\widetilde B_i(t) \\
&= R(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\,d\widetilde B_i(t).
\end{align*}

3. Since \( dB_i(t)\,dB_k(t) = \rho_{ik}(t)\,dt \),
\begin{align*}
d\widetilde B_i(t)\,d\widetilde B_k(t) &= \big[dB_i(t) + \Theta_i(t)\,dt\big]\big[dB_k(t) + \Theta_k(t)\,dt\big] \\
&= dB_i(t)\,dB_k(t) + \Theta_i(t)\,dt\,dB_k(t) + \Theta_k(t)\,dt\,dB_i(t) + \Theta_i(t)\Theta_k(t)\,dt\,dt \\
&= dB_i(t)\,dB_k(t) = \rho_{ik}(t)\,dt.
\end{align*}

4. By Itô's product rule,
\[ d[B_i(t)B_k(t)] = B_i(t)\,dB_k(t) + B_k(t)\,dB_i(t) + dB_i(t)\,dB_k(t), \]
and then, since \( dB_i(t)\,dB_k(t) = \rho_{ik}(t)\,dt \), we have
\[ B_i(t)B_k(t) = \int_0^t B_i(u)\,dB_k(u) + \int_0^t B_k(u)\,dB_i(u) + \int_0^t\rho_{ik}(u)\,du. \]
Now, using the fact that the expectation of an Itô integral is 0, taking expectations in the above formula gives
\[ \mathbb{E}[B_i(t)B_k(t)] = \mathbb{E}\left[\int_0^t\rho_{ik}(u)\,du\right] = \int_0^t\rho_{ik}(u)\,du \]
(the last equality when \( \rho_{ik} \) is nonrandom). Similarly, \( \widetilde{\mathbb{E}}[\widetilde B_i(t)\widetilde B_k(t)] = \int_0^t\rho_{ik}(u)\,du \).

5. For \( \mathbb{E}[B_1(t)B_2(t)] \): in this example \( dB_1(t) = dW_2(t) \) and \( dB_2(t) = \mathrm{sign}(W_1(t))\,dW_2(t) \), so by Itô's product rule,
\begin{align*}
d[B_1(t)B_2(t)] &= B_1(t)\,dB_2(t) + B_2(t)\,dB_1(t) + dB_1(t)\,dB_2(t) \\
&= W_2(t)\,dB_2(t) + B_2(t)\,dW_2(t) + \mathrm{sign}(W_1(t))\,dW_2(t)\,dW_2(t).
\end{align*}
By \( dW_2(t)\,dW_2(t) = dt \), we may write this in integral form as
\[ B_1(t)B_2(t) = B_1(0)B_2(0) + \int_0^t W_2(u)\,dB_2(u) + \int_0^t B_2(u)\,dW_2(u) + \int_0^t\mathrm{sign}(W_1(u))\,du. \]
Note that the expectation of an Itô integral is 0 and \( W_1(u) \sim N(0,u) \), which implies
\[ \mathbb{E}[\mathrm{sign}(W_1(u))] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\frac{1}{\sqrt{2\pi u}}e^{-x^2/2u}\,dx = 0. \]
Then we have
\[ \mathbb{E}[B_1(t)B_2(t)] = 0. \tag{5.4} \]
For \( \widetilde{\mathbb{E}}[\widetilde B_1(t)\widetilde B_2(t)] \): since in this example \( \widetilde W_1(t) = W_1(t) - t \) and \( \widetilde W_2(t) = W_2(t) \),
\[ d\widetilde B_1(t) = dB_1(t) = dW_2(t) = d\widetilde W_2(t) \tag{5.5} \]
and
\[ d\widetilde B_2(t) = dB_2(t) = \mathrm{sign}(W_1(t))\,dW_2(t) = \mathrm{sign}\big(\widetilde W_1(t) + t\big)\,d\widetilde W_2(t). \tag{5.6} \]
By Itô's product rule,
\begin{align*}
d[\widetilde B_1(t)\widetilde B_2(t)] &= \widetilde B_1(t)\,d\widetilde B_2(t) + \widetilde B_2(t)\,d\widetilde B_1(t) + d\widetilde B_1(t)\,d\widetilde B_2(t) \\
&= W_2(t)\,\mathrm{sign}\big(\widetilde W_1(t) + t\big)\,d\widetilde W_2(t) + \widetilde B_2(t)\,d\widetilde W_2(t) + \mathrm{sign}\big(\widetilde W_1(t) + t\big)\,dt,
\end{align*}
whose integral form is
\[ \widetilde B_1(t)\widetilde B_2(t) = \int_0^t\big[W_2(u)\,\mathrm{sign}\big(\widetilde W_1(u) + u\big) + \widetilde B_2(u)\big]\,d\widetilde W_2(u) + \int_0^t\mathrm{sign}\big(\widetilde W_1(u) + u\big)\,du. \]
The \( \widetilde{\mathbb{P}} \)-expectation of an Itô integral (with respect to a \( \widetilde{\mathbb{P}} \)-Brownian motion) is 0, and \( \widetilde W_1(u) \sim N(0,u) \) under \( \widetilde{\mathbb{P}} \), so \( \widetilde W_1(u) + u \sim N(u,u) \) and
\begin{align*}
\widetilde{\mathbb{E}}\big[\mathrm{sign}\big(\widetilde W_1(u) + u\big)\big]
&= \int_{-\infty}^{\infty}\mathrm{sign}(x)\frac{1}{\sqrt{2\pi u}}e^{-(x-u)^2/2u}\,dx \\
&= \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-(x-u)^2/2u}\,dx - \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-(x+u)^2/2u}\,dx \\
&= \frac{1}{\sqrt{2\pi u}}\int_0^{\infty}\left[e^{-(x-u)^2/2u} - e^{-(x+u)^2/2u}\right]dx > 0 \qquad \text{since } u > 0.
\end{align*}
Then we have
\[ \widetilde{\mathbb{E}}[\widetilde B_1(t)\widetilde B_2(t)] = \int_0^t\widetilde{\mathbb{E}}\big[\mathrm{sign}\big(\widetilde W_1(u) + u\big)\big]\,du > 0. \tag{5.7} \]
Together, (5.4) and (5.7) are the examples in which \( \widetilde{\mathbb{E}}[\widetilde B_1(t)\widetilde B_2(t)] > 0 \ne 0 = \mathbb{E}[B_1(t)B_2(t)] \).

5.5 Exercise 5.13

1.
\[ \widetilde{\mathbb{E}}[W_1(t)] = \widetilde{\mathbb{E}}[\widetilde W_1(t)] = 0, \]
\[ \widetilde{\mathbb{E}}[W_2(t)] = \widetilde{\mathbb{E}}\left[\widetilde W_2(t) - \int_0^t W_1(u)\,du\right] = 0 - \int_0^t\widetilde{\mathbb{E}}[\widetilde W_1(u)]\,du = 0. \]

2. Note that the covariance
\[ \mathrm{Cov}^{\widetilde{\mathbb{P}}}[W_1(T), W_2(T)] = \widetilde{\mathbb{E}}[W_1(T)W_2(T)] - \widetilde{\mathbb{E}}[W_1(T)]\,\widetilde{\mathbb{E}}[W_2(T)] = \widetilde{\mathbb{E}}[W_1(T)W_2(T)], \]
because \( \widetilde{\mathbb{E}}[W_1(T)] = 0 = \widetilde{\mathbb{E}}[W_2(T)] \) by part 1. Now, by Itô's product rule, we have
\[ W_1(T)W_2(T) = W_1(0)W_2(0) + \int_0^T W_1(t)\,dW_2(t) + \int_0^T W_2(t)\,dW_1(t). \]
Here, since
\[ d\widetilde W_1(t) = dW_1(t), \qquad d\widetilde W_2(t) = dW_2(t) + W_1(t)\,dt, \]
we have
\[ dW_1(t) = d\widetilde W_1(t), \qquad dW_2(t) = d\widetilde W_2(t) - W_1(t)\,dt, \]
and therefore
\[ W_1(T)W_2(T) = W_1(0)W_2(0) + \int_0^T W_1(t)\,d\widetilde W_2(t) - \int_0^T W_1^2(t)\,dt + \int_0^T W_2(t)\,d\widetilde W_1(t). \]
Now, since the \( \widetilde{\mathbb{P}} \)-expectation of any Itô integral with respect to a \( \widetilde{\mathbb{P}} \)-Brownian motion is 0, and \( W_1(0) = 0 = W_2(0) \),
\[ \widetilde{\mathbb{E}}[W_1(T)W_2(T)] = -\int_0^T\widetilde{\mathbb{E}}[W_1^2(t)]\,dt = -\int_0^T t\,dt = -\frac{T^2}{2}, \]
where we used the fact that \( \widetilde{\mathbb{E}}[\widetilde W_1^2(t)] = \mathrm{Var}[\widetilde W_1(t)] = t \) and \( W_1 = \widetilde W_1 \). In particular the covariance is nonzero, so \( W_1(T) \) and \( W_2(T) \) are not independent under \( \widetilde{\mathbb{P}} \).

5.6 Exercise 5.14

1. Since, substituting \( dS(t) = rS(t)\,dt + \sigma S(t)\,d\widetilde W(t) + a\,dt \) (which is (5.9.7) of the text, derived in part 2(c) below),
\begin{align*}
d\big(e^{-rt}X(t)\big) &= -re^{-rt}X(t)\,dt + e^{-rt}\,dX(t) \\
&= -re^{-rt}X(t)\,dt + e^{-rt}\big[\Delta(t)\,dS(t) - a\Delta(t)\,dt + r\big(X(t) - \Delta(t)S(t)\big)dt\big] \\
&= e^{-rt}\Delta(t)\big[rS(t)\,dt + \sigma S(t)\,d\widetilde W(t) + a\,dt - a\,dt - rS(t)\,dt\big] \\
&= \sigma S(t)e^{-rt}\Delta(t)\,d\widetilde W(t) \tag{5.8}
\end{align*}
contains no \( dt \) term, \( e^{-rt}X(t) \) is a martingale under \( \widetilde{\mathbb{P}} \).

2. (a) Let \( Z(t) = \sigma\widetilde W(t) + \big(r - \frac12\sigma^2\big)t \), so \( Y(t) = e^{Z(t)} \); then
\begin{align*}
dY(t) &= e^{Z(t)}\,dZ(t) + \frac12 e^{Z(t)}\,dZ(t)\,dZ(t) \\
&= Y(t)\Big[\sigma\,d\widetilde W(t) + \Big(r - \frac12\sigma^2\Big)dt\Big] + \frac12 Y(t)\sigma^2\,dt \\
&= rY(t)\,dt + \sigma Y(t)\,d\widetilde W(t).
\end{align*}
(b) Since
\[ d\big(e^{-rt}Y(t)\big) = -re^{-rt}Y(t)\,dt + e^{-rt}\,dY(t) = -re^{-rt}Y(t)\,dt + re^{-rt}Y(t)\,dt + \sigma e^{-rt}Y(t)\,d\widetilde W(t) = \sigma Y(t)e^{-rt}\,d\widetilde W(t) \]
contains no \( dt \) term, \( e^{-rt}Y(t) \) is a martingale under \( \widetilde{\mathbb{P}} \).
(c) By Itô's product rule applied to \( S(t) = Y(t)\big[S(0) + \int_0^t\frac{a}{Y(s)}\,ds\big] \),
\begin{align*}
dS(t) &= S(0)\,dY(t) + \int_0^t\frac{a}{Y(s)}\,ds\,dY(t) + Y(t)\frac{a}{Y(t)}\,dt + dY(t)\frac{a}{Y(t)}\,dt \\
&= \left[S(0) + \int_0^t\frac{a}{Y(s)}\,ds\right]dY(t) + a\,dt \qquad (\text{since } dY(t)\,dt = 0) \\
&= \frac{S(t)}{Y(t)}\,dY(t) + a\,dt \\
&= \frac{S(t)}{Y(t)}\big[rY(t)\,dt + \sigma Y(t)\,d\widetilde W(t)\big] + a\,dt \\
&= rS(t)\,dt + \sigma S(t)\,d\widetilde W(t) + a\,dt,
\end{align*}
which is (5.9.7) of the text.

3. First,
\begin{align}
\widetilde{\mathbb{E}}[S(T)\,|\,\mathcal{F}(t)]
&= \widetilde{\mathbb{E}}\left[S(0)Y(T) + Y(T)\int_0^T\frac{a}{Y(s)}\,ds\,\Big|\,\mathcal{F}(t)\right] \notag\\
&= S(0)\,\widetilde{\mathbb{E}}[Y(T)|\mathcal{F}(t)] + \widetilde{\mathbb{E}}\left[Y(T)\int_0^t\frac{a}{Y(s)}\,ds\,\Big|\,\mathcal{F}(t)\right] + \widetilde{\mathbb{E}}\left[Y(T)\int_t^T\frac{a}{Y(s)}\,ds\,\Big|\,\mathcal{F}(t)\right] \notag\\
&= S(0)\,\widetilde{\mathbb{E}}[Y(T)|\mathcal{F}(t)] + \widetilde{\mathbb{E}}[Y(T)|\mathcal{F}(t)]\int_0^t\frac{a}{Y(s)}\,ds + a\int_t^T\widetilde{\mathbb{E}}\left[\frac{Y(T)}{Y(s)}\,\Big|\,\mathcal{F}(t)\right]ds, \tag{5.9}
\end{align}
by taking out what is known. Secondly, since \( e^{-rt}Y(t) \) is a martingale under \( \widetilde{\mathbb{P}} \), we have
\[ \widetilde{\mathbb{E}}[Y(T)|\mathcal{F}(t)] = e^{rT}\,\widetilde{\mathbb{E}}\big[e^{-rT}Y(T)\,\big|\,\mathcal{F}(t)\big] = e^{rT}e^{-rt}Y(t) = e^{r(T-t)}Y(t). \tag{5.10} \]
And by the definition of \( Y(t) \), for \( t \le s \le T \),
\[ \frac{Y(T)}{Y(s)} = \exp\left\{\sigma\big(\widetilde W(T) - \widetilde W(s)\big) + \Big(r - \frac12\sigma^2\Big)(T-s)\right\}. \]
Since \( \widetilde W(T) - \widetilde W(s) \) is independent of \( \mathcal{F}(t) \) and normally distributed with mean 0 and variance \( T-s \), taking expectations,
\[ \widetilde{\mathbb{E}}\left[\frac{Y(T)}{Y(s)}\,\Big|\,\mathcal{F}(t)\right] = \widetilde{\mathbb{E}}\big[e^{\sigma(\widetilde W(T) - \widetilde W(s))}\big]\,e^{(r - \frac12\sigma^2)(T-s)} = e^{\frac12\sigma^2(T-s)}e^{(r - \frac12\sigma^2)(T-s)} = e^{r(T-s)}. \tag{5.11} \]
Now, substituting (5.10) and (5.11) into (5.9),
\begin{align}
\widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)] &= e^{r(T-t)}Y(t)\left[S(0) + \int_0^t\frac{a}{Y(s)}\,ds\right] + a\int_t^T e^{r(T-s)}\,ds \notag\\
&= e^{r(T-t)}S(t) + \frac{a}{r}\big[e^{r(T-t)} - 1\big]. \tag{5.12}
\end{align}

4. Now we differentiate (5.12):
\[ d\big(\widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)]\big) = d\big(e^{r(T-t)}S(t)\big) + d\Big(\frac{a}{r}\big[e^{r(T-t)} - 1\big]\Big) = e^{rT}\,d\big(e^{-rt}S(t)\big) - ae^{r(T-t)}\,dt. \tag{5.13} \]
Since
\[ d\big(e^{-rt}S(t)\big) = -re^{-rt}S(t)\,dt + e^{-rt}\big[rS(t)\,dt + \sigma S(t)\,d\widetilde W(t) + a\,dt\big] = \sigma e^{-rt}S(t)\,d\widetilde W(t) + ae^{-rt}\,dt, \tag{5.14} \]
substituting (5.14) into (5.13) gives
\[ d\big(\widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)]\big) = e^{r(T-t)}\sigma S(t)\,d\widetilde W(t) + ae^{r(T-t)}\,dt - ae^{r(T-t)}\,dt = e^{r(T-t)}\sigma S(t)\,d\widetilde W(t). \]
There is no \( dt \) term in \( d\big(\widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)]\big) \); therefore \( \widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)] \) is a martingale under \( \widetilde{\mathbb{P}} \).

5. Since
\begin{align*}
\widetilde{\mathbb{E}}\big[e^{-r(T-t)}(S(T) - K)\,\big|\,\mathcal{F}(t)\big]
&= \widetilde{\mathbb{E}}\big[e^{-r(T-t)}S(T)\,\big|\,\mathcal{F}(t)\big] - e^{-r(T-t)}K \qquad \text{by linearity} \\
&= e^{-r(T-t)}\,\widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)] - e^{-r(T-t)}K \qquad \text{by taking out what is known},
\end{align*}
the condition \( \widetilde{\mathbb{E}}\big[e^{-r(T-t)}(S(T) - K)\,\big|\,\mathcal{F}(t)\big] = 0 \) implies \( K = \widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)] \), i.e.,
\[ For_S(t,T) = K = \widetilde{\mathbb{E}}[S(T)|\mathcal{F}(t)] = Fut_S(t,T). \]

6. According to the hedging strategy, the value of the portfolio at time \( t \) is governed by the dynamics of part 1 with \( \Delta(t) = 1 \), i.e.,
\[ dX(t) = dS(t) - a\,dt + r\big(X(t) - S(t)\big)dt. \]
Then (5.8) implies that
\[ d\big(e^{-rt}X(t)\big) = \sigma S(t)e^{-rt}\,d\widetilde W(t). \]
Integrating both sides from 0 to \( T \), we have
\begin{align*}
e^{-rT}X(T) - X(0) &= \int_0^T\sigma S(t)e^{-rt}\,d\widetilde W(t) = \int_0^T e^{-rt}\big(dS(t) - rS(t)\,dt - a\,dt\big) \\
&= \int_0^T\Big[d\big(e^{-rt}S(t)\big) - ae^{-rt}\,dt\Big] \\
&= e^{-rT}S(T) - S(0) - \frac{a}{r}\big(1 - e^{-rT}\big).
\end{align*}
Together with the fact that \( X(0) = 0 \), we can rewrite this as
\[ X(T) = S(T) - S(0)e^{rT} - \frac{a}{r}\big(e^{rT} - 1\big). \]
By (5.12),
\[ S(0)e^{rT} + \frac{a}{r}\big(e^{rT} - 1\big) = Fut_S(0,T) = For_S(0,T), \]
so
\[ X(T) = S(T) - For_S(0,T). \]
Therefore, at time \( T \), she delivers the asset at the price \( For_S(0,T) \), which is exactly the amount she needs to cover her debt in the money market: \( S(T) - X(T) = S(0)e^{rT} + \frac{a}{r}\big(e^{rT} - 1\big) = For_S(0,T) \).
6 Chapter 6

6.1 Exercise 6.1

1. It is easy to see that \( Z(t) = e^0 = 1 \), since \( \int_t^t\sigma(v)\,dW(v) = 0 = \int_t^t\big(b(v) - \sigma^2(v)/2\big)dv \). Let
\[ S(u) = \int_t^u\sigma(v)\,dW(v) + \int_t^u\big(b(v) - \sigma^2(v)/2\big)dv. \]
Then \( Z(u) = e^{S(u)} \) and
\[ dS(u) = \sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)du. \]
Now set \( f(t,x) = e^x \) in Itô's formula (4.4.23) of the text, so that \( f_x = f_{xx} = e^x \) and \( f_t = 0 \). Then we have
\begin{align*}
dZ(u) &= df(u, S(u)) = e^{S(u)}\,dS(u) + \frac12 e^{S(u)}\,dS(u)\,dS(u) \\
&= Z(u)\big[\sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)du\big] \\
&\quad + \frac12 Z(u)\big[\sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)du\big]\big[\sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)du\big] \\
&= Z(u)\big[\sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)du\big] + \frac12 Z(u)\sigma^2(u)\,du \qquad (\text{since } dW\,dW = du,\ dW\,du = 0 = du\,du) \\
&= b(u)Z(u)\,du + \sigma(u)Z(u)\,dW(u).
\end{align*}

2. It is clear that if \( X(u) = Y(u)Z(u) \), then \( X(t) = Y(t)Z(t) = x\cdot 1 = x \). Furthermore, by Itô's product rule together with \( dW(u)\,dW(u) = du \) and \( dW(u)\,du = 0 = du\,du \), we have
\begin{align*}
dX(u) &= d[Y(u)Z(u)] = Y(u)\,dZ(u) + Z(u)\,dY(u) + dZ(u)\,dY(u) \\
&= \big[bZ\,du + \sigma Z\,dW(u)\big]Y(u) + \left[\frac{a - \sigma\gamma}{Z}\,du + \frac{\gamma}{Z}\,dW(u)\right]Z(u) + \big[bZ\,du + \sigma Z\,dW(u)\big]\left[\frac{a - \sigma\gamma}{Z}\,du + \frac{\gamma}{Z}\,dW(u)\right] \\
&= \big[bZY\,du + \sigma ZY\,dW(u)\big] + \big[(a - \sigma\gamma)\,du + \gamma\,dW(u)\big] + \sigma\gamma\,du \\
&= \big[a + bZY\big]\,du + \big[\gamma + \sigma ZY\big]\,dW(u) \\
&= \big(a(u) + b(u)X(u)\big)\,du + \big(\gamma(u) + \sigma(u)X(u)\big)\,dW(u) \qquad (\text{since } X = ZY),
\end{align*}
i.e., \( X(u) = Z(u)Y(u) \) solves the stochastic differential equation (6.2.4).

6.2 Exercise 6.3

1. Using the ODE \( C'(s,T) = b(s)C(s,T) - 1 \),
\[ \frac{d}{ds}\left[e^{-\int_0^s b(v)\,dv}\,C(s,T)\right] = -b(s)e^{-\int_0^s b(v)\,dv}\,C(s,T) + e^{-\int_0^s b(v)\,dv}\,C'(s,T) = -e^{-\int_0^s b(v)\,dv}. \]

2. Integrating from \( t \) to \( T \),
\[ \int_t^T\frac{d}{ds}\left[e^{-\int_0^s b(v)\,dv}\,C(s,T)\right]ds = -\int_t^T e^{-\int_0^s b(v)\,dv}\,ds, \]
\[ e^{-\int_0^T b(v)\,dv}\,C(T,T) - e^{-\int_0^t b(v)\,dv}\,C(t,T) = -\int_t^T e^{-\int_0^s b(v)\,dv}\,ds. \]
By \( C(T,T) = 0 \),
\[ C(t,T) = \frac{\int_t^T e^{-\int_0^s b(v)\,dv}\,ds}{e^{-\int_0^t b(v)\,dv}} = \int_t^T e^{-\int_t^s b(v)\,dv}\,ds. \]

3.
\[ A(T,T) - A(t,T) = \int_t^T\left[-a(s)C(s,T) + \frac{\sigma^2(s)}{2}C^2(s,T)\right]ds. \]
The result
\[ A(t,T) = \int_t^T\left[a(s)C(s,T) - \frac{\sigma^2(s)}{2}C^2(s,T)\right]ds \]
follows from \( A(T,T) = 0 \).
6.3 Exercise 6.4

1. With
\[ \varphi(t) = \exp\left\{\frac{\sigma^2}{2}\int_t^T C(u,T)\,du\right\} \]
we have
\[ \varphi'(t) = -\frac{\sigma^2}{2}C(t,T)\,\varphi(t), \qquad \varphi''(t) = -\frac{\sigma^2}{2}C'(t,T)\,\varphi(t) - \frac{\sigma^2}{2}C(t,T)\,\varphi'(t); \]
therefore
\[ C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)}, \qquad C'(t,T) = -\frac{2\varphi''(t)}{\sigma^2\varphi(t)} + \frac{2\big(\varphi'(t)\big)^2}{\sigma^2\varphi^2(t)} = -\frac{2\varphi''(t)}{\sigma^2\varphi(t)} + \frac{\sigma^2}{2}C^2(t,T). \]

2. Rearrange (6.5.14) into
\[ C'(t,T) = bC(t,T) + \frac{\sigma^2}{2}C^2(t,T) - 1. \]
Substituting the expressions from part 1, the \( C^2 \) terms cancel and we obtain
\[ -\frac{2\varphi''(t)}{\sigma^2\varphi(t)} = -\frac{2b\varphi'(t)}{\sigma^2\varphi(t)} - 1, \]
which implies (6.9.10):
\[ \varphi''(t) - b\varphi'(t) - \frac{\sigma^2}{2}\varphi(t) = 0. \]

3. The characteristic equation \( \lambda^2 - b\lambda - \frac12\sigma^2 = 0 \) has roots
\[ \lambda = \frac b2 \pm \gamma, \qquad \gamma = \frac12\sqrt{b^2 + 2\sigma^2}, \]
so
\[ \varphi(t) = a_1e^{(\frac b2 + \gamma)t} + a_2e^{(\frac b2 - \gamma)t}. \]
Indeed, by the facts \( \varphi(T) = 1 \) and \( \varphi'(T) = -\frac{\sigma^2}{2}C(T,T)\varphi(T) = 0 \), we have the system of equations
\[ a_1e^{(\frac b2+\gamma)T} + a_2e^{(\frac b2-\gamma)T} = 1, \qquad a_1\Big(\frac b2+\gamma\Big)e^{(\frac b2+\gamma)T} + a_2\Big(\frac b2-\gamma\Big)e^{(\frac b2-\gamma)T} = 0, \]
which implies
\[ a_1e^{(\frac b2+\gamma)T} = \frac{\gamma - \frac b2}{2\gamma}, \qquad a_2e^{(\frac b2-\gamma)T} = \frac{\gamma + \frac b2}{2\gamma}. \]

4. In terms of \( \tau = T - t \) this reads
\[ \varphi(t) = \frac{1}{2\gamma}\left[\Big(\gamma - \frac b2\Big)e^{-(\frac b2+\gamma)\tau} + \Big(\gamma + \frac b2\Big)e^{-(\frac b2-\gamma)\tau}\right], \]
and differentiating with respect to \( t \) (this is the content of (6.9.11)-(6.9.12)), using \( \big(\gamma - \frac b2\big)\big(\gamma + \frac b2\big) = \gamma^2 - \frac{b^2}{4} = \frac{\sigma^2}{2} \),
\[ \varphi'(t) = \frac{\sigma^2}{4\gamma}\left[e^{-(\frac b2+\gamma)\tau} - e^{-(\frac b2-\gamma)\tau}\right]. \]

5. Rewriting in hyperbolic functions,
\[ \varphi(t) = \frac{e^{-b\tau/2}}{\gamma}\left[\gamma\cosh(\gamma\tau) + \frac b2\sinh(\gamma\tau)\right], \qquad \varphi'(t) = -\frac{\sigma^2}{2\gamma}e^{-b\tau/2}\sinh(\gamma\tau). \]
Therefore
\[ C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)} = \frac{\sinh(\gamma(T-t))}{\gamma\cosh(\gamma(T-t)) + \frac b2\sinh(\gamma(T-t))}, \]
which is (6.9.8).

6. Finally, since \( A'(s,T) = -aC(s,T) \) and \( A(T,T) = 0 \),
\begin{align*}
A(t,T) &= A(T,T) - \int_t^T A'(s,T)\,ds = \int_t^T aC(s,T)\,ds = -\frac{2a}{\sigma^2}\int_t^T\frac{\varphi'(s)}{\varphi(s)}\,ds \\
&= -\frac{2a}{\sigma^2}\log\frac{\varphi(T)}{\varphi(t)} = \frac{2a}{\sigma^2}\log\varphi(t) \\
&= -\frac{2a}{\sigma^2}\log\left[\frac{\gamma e^{b(T-t)/2}}{\gamma\cosh(\gamma(T-t)) + \frac b2\sinh(\gamma(T-t))}\right],
\end{align*}
which is (6.9.9).

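The closed form for \( C(t,T) \) just derived can be verified against a direct numerical integration of the Riccati equation \( C'(s,T) = bC + \frac{\sigma^2}{2}C^2 - 1 \), integrated backward from \( C(T,T) = 0 \). A sketch with arbitrary parameters, using classical RK4:

```python
import math

def cir_C_closed(tau, b, sigma):
    """Closed-form C(t,T) with tau = T - t, as in (6.9.8)."""
    g = 0.5 * math.sqrt(b * b + 2 * sigma * sigma)
    sh, ch = math.sinh(g * tau), math.cosh(g * tau)
    return sh / (g * ch + 0.5 * b * sh)

def cir_C_ode(tau, b, sigma, n=20000):
    """Integrate the Riccati ODE backward from C(T,T) = 0 over horizon tau.
    In the variable tau = T - s the equation reads dC/dtau = 1 - b C - (sigma^2/2) C^2."""
    f = lambda c: 1.0 - b * c - 0.5 * sigma * sigma * c * c
    h = tau / n
    c = 0.0
    for _ in range(n):
        k1 = f(c)
        k2 = f(c + 0.5 * h * k1)
        k3 = f(c + 0.5 * h * k2)
        k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return c

b, sigma, tau = 0.5, 0.3, 3.0
assert abs(cir_C_closed(tau, b, sigma) - cir_C_ode(tau, b, sigma)) < 1e-8
```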
6.4 Exercise 6.5

1. Since
\[ g(t,x_1,x_2) = \mathbb{E}^{t,x_1,x_2}\,h(X_1(T), X_2(T)), \]
we have, for any \( 0 \le s \le t \le T \), similarly as in Theorem 6.3.1 of the text,
\[ \mathbb{E}\big[h(X_1(T),X_2(T))\,\big|\,\mathcal{F}(t)\big] = g(t, X_1(t), X_2(t)), \qquad \mathbb{E}\big[h(X_1(T),X_2(T))\,\big|\,\mathcal{F}(s)\big] = g(s, X_1(s), X_2(s)). \]
Then
\begin{align*}
\mathbb{E}\big[g(t,X_1(t),X_2(t))\,\big|\,\mathcal{F}(s)\big] &= \mathbb{E}\Big[\mathbb{E}\big[h(X_1(T),X_2(T))\,\big|\,\mathcal{F}(t)\big]\,\Big|\,\mathcal{F}(s)\Big] \\
&= \mathbb{E}\big[h(X_1(T),X_2(T))\,\big|\,\mathcal{F}(s)\big] \\
&= g(s, X_1(s), X_2(s)),
\end{align*}
so \( g(t,X_1(t),X_2(t)) \) is a martingale. Similarly, since
\[ e^{-rt}f(t,x_1,x_2) = e^{-rt}\,\mathbb{E}^{t,x_1,x_2}\big[e^{-r(T-t)}h(X_1(T),X_2(T))\big] = \mathbb{E}^{t,x_1,x_2}\big[e^{-rT}h(X_1(T),X_2(T))\big], \]
\( e^{-rt}f(t,X_1(t),X_2(t)) \) is also a martingale.

2. If \( W_1 \) and \( W_2 \) are independent, then \( dW_1\,dW_1 = dt = dW_2\,dW_2 \) and \( dW_1\,dW_2 = 0 \). By Itô's formula, with \( dX_i = \beta_i\,dt + \gamma_{i1}\,dW_1 + \gamma_{i2}\,dW_2 \),
\[ dg = g_t\,dt + g_{x_1}\,dX_1 + g_{x_2}\,dX_2 + \frac12 g_{x_1x_1}\,dX_1\,dX_1 + g_{x_1x_2}\,dX_1\,dX_2 + \frac12 g_{x_2x_2}\,dX_2\,dX_2, \]
where
\[ dX_1\,dX_1 = \big(\gamma_{11}^2 + \gamma_{12}^2\big)dt, \qquad dX_1\,dX_2 = \big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big)dt, \qquad dX_2\,dX_2 = \big(\gamma_{21}^2 + \gamma_{22}^2\big)dt. \]
Therefore
\begin{align*}
dg &= \left[g_t + \beta_1g_{x_1} + \beta_2g_{x_2} + \frac12\big(\gamma_{11}^2 + \gamma_{12}^2\big)g_{x_1x_1} + \big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big)g_{x_1x_2} + \frac12\big(\gamma_{21}^2 + \gamma_{22}^2\big)g_{x_2x_2}\right]dt \\
&\quad + \big[\gamma_{11}g_{x_1} + \gamma_{21}g_{x_2}\big]dW_1 + \big[\gamma_{12}g_{x_1} + \gamma_{22}g_{x_2}\big]dW_2.
\end{align*}
Since \( g(t,X_1(t),X_2(t)) \) is a martingale, there is no \( dt \) term in \( dg \); setting the \( dt \) term to zero gives (6.6.3) of the text immediately. Similarly, by Itô's formula,
\begin{align*}
d\big[e^{-rt}f\big] &= e^{-rt}\left[-rf + f_t + \beta_1f_{x_1} + \beta_2f_{x_2} + \frac12\big(\gamma_{11}^2 + \gamma_{12}^2\big)f_{x_1x_1} + \big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big)f_{x_1x_2} + \frac12\big(\gamma_{21}^2 + \gamma_{22}^2\big)f_{x_2x_2}\right]dt \\
&\quad + e^{-rt}\big[\gamma_{11}f_{x_1} + \gamma_{21}f_{x_2}\big]dW_1 + e^{-rt}\big[\gamma_{12}f_{x_1} + \gamma_{22}f_{x_2}\big]dW_2,
\end{align*}
and since \( e^{-rt}f(t,X_1(t),X_2(t)) \) is a martingale, (6.6.4) of the text follows by setting the \( dt \) term to zero.

3. If instead \( dW_1(t)\,dW_2(t) = \rho\,dt \) (with \( dW_i\,dW_i = dt \)), then the quadratic-variation terms become
\[ dX_1\,dX_1 = \big(\gamma_{11}^2 + \gamma_{12}^2 + 2\rho\gamma_{11}\gamma_{12}\big)dt, \quad dX_1\,dX_2 = \big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22} + \rho(\gamma_{11}\gamma_{22} + \gamma_{12}\gamma_{21})\big)dt, \quad dX_2\,dX_2 = \big(\gamma_{21}^2 + \gamma_{22}^2 + 2\rho\gamma_{21}\gamma_{22}\big)dt. \]
Repeating the computation of part 2, the \( dt \) term of \( dg \) acquires the additional summand
\[ \rho\big[\gamma_{11}\gamma_{12}\,g_{x_1x_1} + \big(\gamma_{11}\gamma_{22} + \gamma_{12}\gamma_{21}\big)g_{x_1x_2} + \gamma_{21}\gamma_{22}\,g_{x_2x_2}\big]; \]
since \( g(t,X_1(t),X_2(t)) \) is a martingale, setting the \( dt \) term to zero gives (6.9.13) of the text immediately. The same computation for \( e^{-rt}f(t,X_1(t),X_2(t)) \), with the same additional summand (with \( g \) replaced by \( f \)), gives (6.9.14) of the text by setting the \( dt \) term to zero.

6.5 Exercise 6.7

1. By iterated conditioning, for \( 0 \le s \le t \le T \), we have
\begin{align*}
\widetilde{\mathbb{E}}\big[e^{-rt}c(t,S(t),V(t))\,\big|\,\mathcal{F}(s)\big]
&= \widetilde{\mathbb{E}}\Big[e^{-rt}\,\widetilde{\mathbb{E}}\big[e^{-r(T-t)}(S(T)-K)^+\,\big|\,\mathcal{F}(t)\big]\,\Big|\,\mathcal{F}(s)\Big] \\
&= \widetilde{\mathbb{E}}\big[e^{-rT}(S(T)-K)^+\,\big|\,\mathcal{F}(s)\big] \\
&= e^{-rs}\,\widetilde{\mathbb{E}}\big[e^{-r(T-s)}(S(T)-K)^+\,\big|\,\mathcal{F}(s)\big] \\
&= e^{-rs}c(s,S(s),V(s)).
\end{align*}
Then, since it is a martingale, the \( dt \) term in the differential \( d\big[e^{-rt}c(t,S(t),V(t))\big] \) must be 0:
\begin{align*}
d\big[e^{-rt}c(t,S(t),V(t))\big]
&= e^{-rt}\left[-rc\,dt + c_t\,dt + c_s\,dS(t) + c_v\,dV(t) + \frac12 c_{ss}\,dS\,dS + c_{sv}\,dS\,dV + \frac12 c_{vv}\,dV\,dV\right] \\
&= e^{-rt}\left[-rc + c_t + rSc_s + (a - bV)c_v + \frac12 VS^2c_{ss} + \rho\sigma VSc_{sv} + \frac12\sigma^2Vc_{vv}\right]dt \\
&\quad + e^{-rt}c_s\sqrt V S\,d\widetilde W_1(t) + e^{-rt}c_v\sigma\sqrt V\,d\widetilde W_2(t),
\end{align*}
which gives
\[ -rc + c_t + rsc_s + (a - bv)c_v + \frac12 vs^2c_{ss} + \rho\sigma vsc_{sv} + \frac12\sigma^2vc_{vv} = 0. \]
This is the same as (6.9.26) of the text.

2. To simplify the notation, we denote \( f = f(t,\log s,v) \), \( g = g(t,\log s,v) \), and similarly their partial derivatives. Then, from
\[ c(t,s,v) = sf(t,\log s,v) - e^{-r(T-t)}Kg(t,\log s,v), \]
\begin{align*}
c_t &= sf_t - e^{-r(T-t)}Kg_t - re^{-r(T-t)}Kg, \\
c_s &= f + f_x - \frac1s e^{-r(T-t)}Kg_x, \\
c_v &= sf_v - e^{-r(T-t)}Kg_v, \\
c_{ss} &= \frac1s\big(f_x + f_{xx}\big) - \frac{1}{s^2}e^{-r(T-t)}K\big(g_{xx} - g_x\big), \\
c_{sv} &= f_v + f_{xv} - \frac1s e^{-r(T-t)}Kg_{xv}, \\
c_{vv} &= sf_{vv} - e^{-r(T-t)}Kg_{vv}.
\end{align*}
Then the left-hand side of (6.9.26) of the text becomes
\begin{align*}
&c_t + rsc_s + (a - bv)c_v + \frac12 s^2vc_{ss} + \rho\sigma svc_{sv} + \frac12\sigma^2vc_{vv} \\
&= s\left[f_t + \Big(r + \frac v2\Big)f_x + \big(a - bv + \rho\sigma v\big)f_v + \frac v2 f_{xx} + \rho\sigma vf_{xv} + \frac{\sigma^2v}{2}f_{vv} + rf\right] \\
&\quad - e^{-r(T-t)}K\left[g_t + \Big(r - \frac v2\Big)g_x + (a - bv)g_v + \frac v2 g_{xx} + \rho\sigma vg_{xv} + \frac{\sigma^2v}{2}g_{vv} + rg\right] \\
&= rsf - re^{-r(T-t)}Kg \\
&= rc,
\end{align*}
where the brackets reduce by the equations satisfied by \( f \) and \( g \) in parts 3 and 4. This verifies (6.9.26) of the text.

3. One method is to observe first that \( f(t,X(t),V(t)) \) is a martingale, take its differential, and then set the \( dt \) term to zero to get the equation. Here we instead apply the two-dimensional Feynman-Kac theorem (Exercise 6.5 of the text) directly.² From the definitions of \( dX(t) \) and \( dV(t) \),
\[ \beta_1 = r + \frac12 v, \quad \beta_2 = a - bv + \rho\sigma v, \quad \gamma_{11} = \sqrt v, \quad \gamma_{12} = 0, \quad \gamma_{21} = 0, \quad \gamma_{22} = \sigma\sqrt v, \]
with \( dW_1\,dW_2 = \rho\,dt \). Then (6.9.13) yields
\[ f_t + \Big(r + \frac v2\Big)f_x + \big(a - bv + \rho\sigma v\big)f_v + \frac v2 f_{xx} + \rho\sigma vf_{xv} + \frac{\sigma^2v}{2}f_{vv} = 0. \]
Also, for the boundary condition, take \( h(x,v) = \mathbb{I}_{\{x \ge \log K\}} \) in (6.6.1) of the text; then
\[ f(T,x,v) = \mathbb{I}_{\{x \ge \log K\}}. \]

4. Here we still apply the two-dimensional Feynman-Kac theorem directly. By the definitions of \( dX(t) \) and \( dV(t) \),
\[ \beta_1 = r - \frac12 v, \quad \beta_2 = a - bv, \quad \gamma_{11} = \sqrt v, \quad \gamma_{12} = 0, \quad \gamma_{21} = 0, \quad \gamma_{22} = \sigma\sqrt v, \]
again with \( dW_1\,dW_2 = \rho\,dt \). Then (6.9.13) yields
\[ g_t + \Big(r - \frac v2\Big)g_x + (a - bv)g_v + \frac v2 g_{xx} + \rho\sigma vg_{xv} + \frac{\sigma^2v}{2}g_{vv} = 0. \]
Also, for the boundary condition, with \( h(x,v) = \mathbb{I}_{\{x \ge \log K\}} \) in (6.6.1) of the text,
\[ g(T,x,v) = \mathbb{I}_{\{x \ge \log K\}}. \]

5. The function \( c(t,s,v) \) defined by (6.9.34) satisfies equation (6.9.26) by part 2. Now, by the terminal conditions on \( f \) and \( g \),
\[ c(T,s,v) = sf(T,\log s,v) - e^{-r(T-T)}Kg(T,\log s,v) = \begin{cases} s - K, & \text{if } \log s \ge \log K \\ s\cdot 0 - K\cdot 0 = 0, & \text{otherwise} \end{cases} = (s-K)^+, \]
which implies that such \( c \) satisfies the boundary condition.

² Here we cannot apply (6.6.3) or (6.6.4) of the text, since they are for independent Brownian motions.

7 Chapter 7

8 Chapter 8

9 Chapter 9

10 Chapter 10

11 Chapter 11

11.1 Exercise 11.1

Since
\begin{align*}
M^2(t) - M^2(s) &= N^2(t) - N^2(s) + \lambda^2(t^2 - s^2) - 2\lambda tN(t) + 2\lambda sN(s) \\
&= \big(N(t) - N(s)\big)^2 + 2N(t)N(s) - 2N^2(s) + \lambda^2(t^2 - s^2) - 2\lambda t\big[N(t) - N(s)\big] + 2\lambda N(s)(s - t) \\
&= \big[N(t) - N(s)\big]^2 + 2N(s)\big[N(t) - N(s)\big] + \lambda^2(t^2 - s^2) - 2\lambda t\big[N(t) - N(s)\big] + 2\lambda N(s)(s - t),
\end{align*}
we have, using that \( N(t) - N(s) \) is independent of \( \mathcal{F}(s) \) with mean \( \lambda(t-s) \) and second moment \( \lambda^2(t-s)^2 + \lambda(t-s) \),
\begin{align*}
\mathbb{E}\big[M^2(t) - M^2(s)\,\big|\,\mathcal{F}(s)\big]
&= \lambda^2(t-s)^2 + \lambda(t-s) + 2\lambda N(s)(t-s) + \lambda^2(t^2 - s^2) - 2\lambda^2t(t-s) + 2\lambda N(s)(s-t) \\
&= \lambda^2\big[(t-s)^2 + t^2 - s^2 - 2t(t-s)\big] + \lambda(t-s) \\
&= \lambda(t-s).
\end{align*}
Therefore, for any \( 0 \le s \le t \):

1.
\begin{align*}
\mathbb{E}\big[M^2(t)\,\big|\,\mathcal{F}(s)\big] &= \mathbb{E}\big[M^2(t) - M^2(s) + M^2(s)\,\big|\,\mathcal{F}(s)\big] = \mathbb{E}\big[M^2(t) - M^2(s)\,\big|\,\mathcal{F}(s)\big] + M^2(s) \\
&= \lambda(t-s) + M^2(s) \\
&\ge M^2(s),
\end{align*}
i.e., \( M^2(t) \) is a submartingale.

2.
\begin{align*}
\mathbb{E}\big[M^2(t) - \lambda t\,\big|\,\mathcal{F}(s)\big] &= \mathbb{E}\big[M^2(t) - M^2(s) - \lambda(t-s) + M^2(s) - \lambda s\,\big|\,\mathcal{F}(s)\big] \\
&= \mathbb{E}\big[M^2(t) - M^2(s)\,\big|\,\mathcal{F}(s)\big] - \lambda(t-s) + M^2(s) - \lambda s \\
&= \lambda(t-s) - \lambda(t-s) + M^2(s) - \lambda s \\
&= M^2(s) - \lambda s,
\end{align*}
i.e., \( M^2(t) - \lambda t \) is a martingale.
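The martingale property with \( s = 0 \) says \( \mathbb{E}[M^2(t)] = \lambda t \), i.e. the compensated Poisson process \( M(t) = N(t) - \lambda t \) has variance \( \lambda t \). This can be verified exactly by summing the Poisson probability mass function (arbitrary \( \lambda \) and \( t \)):

```python
import math

def poisson_second_central_moment(lam, t, k_max=200):
    """E[(N(t) - lam*t)^2] for N(t) ~ Poisson(lam*t), by direct pmf summation."""
    m = lam * t
    total = 0.0
    pmf = math.exp(-m)            # P(N(t) = 0)
    for k in range(k_max + 1):
        total += (k - m) ** 2 * pmf
        pmf *= m / (k + 1)        # recurrence: P(N = k+1) from P(N = k)
    return total

lam, t = 2.5, 3.0
# Consistent with the martingale M^2(t) - lam*t: E[M^2(t)] = lam*t.
assert abs(poisson_second_central_moment(lam, t) - lam * t) < 1e-8
```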
