
MSc Course in Mathematics and Finance

Imperial College London, 2010-11

Finite Difference Methods


Mark Davis
Department of Mathematics
Imperial College London
www.ma.ic.ac.uk/∼mdavis

Part II
3. Numerical solution of PDEs

4. Numerical Solution of Partial Differential Equations
1. Diffusion Equations of One State Variable.

    ∂u/∂t = c² ∂²u/∂x² ,  (x, t) ∈ D ,   (1)
where t is a time variable, x is a state variable, and u(x, t) is an unknown
function satisfying the equation.
To find a well-defined solution, we need to impose the initial condition

u(x, 0) = u0 (x) (2)

and, if D = [a, b] × [0, ∞), the boundary conditions

u(a, t) = ga (t) and u(b, t) = gb (t) , (3)

where u0 , ga , gb are continuous functions.

If D = (−∞, ∞) × (0, ∞), we need to impose the growth condition

    lim_{|x|→∞} u(x, t) e^{−a x²} = 0  for any a > 0.   (4)

(4) implies u(x, t) does not grow too fast as |x| → ∞.


The diffusion equation (1) with the initial condition (2) and the boundary
conditions (3) is well-posed, i.e. there exists a unique solution that depends
continuously on u0 , ga and gb .

2. Grid Points.
To find a numerical solution to equation (1) with finite difference methods,
we first need to define a set of grid points in the domain D as follows:
Choose a state step size Δx = (b − a)/N (N an integer) and a time step size Δt,
draw a set of horizontal and vertical lines across D, and take all intersection
points (xj, tn), or simply (j, n),
where xj = a + j Δx, j = 0, …, N, and tn = n Δt, n = 0, 1, ….
If D = [a, b] × [0, T ], choose Δt = T/M (M an integer), so that tn = n Δt,
n = 0, …, M.

[Grid diagram: vertical lines at x0 = a, x1, x2, …, xN = b and horizontal lines at t0 = 0, t1, t2, …, tM = T; the grid points are the intersections.]

3. Finite Differences.
The partial derivatives
    ux := ∂u/∂x  and  uxx := ∂²u/∂x²

are always approximated by central difference quotients, i.e.

    ux ≈ (u^n_{j+1} − u^n_{j−1}) / (2 Δx)  and  uxx ≈ (u^n_{j+1} − 2 u^n_j + u^n_{j−1}) / (Δx)²   (5)

at a grid point (j, n). Here u^n_j = u(xj, tn).

Depending on how ut is approximated, we have three basic schemes: the explicit, implicit, and Crank–Nicolson schemes.

4. Explicit Scheme.
If ut is approximated by the forward difference quotient

    ut ≈ (u^{n+1}_j − u^n_j) / Δt

at (j, n), then the corresponding difference equation to (1) at grid point (j, n) is

    w^{n+1}_j = λ w^n_{j+1} + (1 − 2 λ) w^n_j + λ w^n_{j−1} ,   (6)

where

    λ = c² Δt / (Δx)².

The initial condition is w^0_j = u0(xj), j = 0, …, N, and
the boundary conditions are w^n_0 = ga(tn) and w^n_N = gb(tn), n = 0, 1, ….
The difference equations (6), j = 1, …, N − 1, can be solved explicitly.
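As an illustration, the update (6) translates directly into code. The sketch below is a minimal pure-Python implementation; the sample data at the bottom (c = 1, u0(x) = sin(πx) on [0, 1], zero boundary values) are illustrative choices, not taken from the notes.

```python
import math

def explicit_step(w, lam):
    """One step of scheme (6): update the interior points j = 1..N-1."""
    new = w[:]
    for j in range(1, len(w) - 1):
        new[j] = lam * w[j + 1] + (1 - 2 * lam) * w[j] + lam * w[j - 1]
    return new

def solve_explicit(u0, ga, gb, c, a, b, N, dt, steps):
    """March scheme (6) forward from the initial condition u0."""
    dx = (b - a) / N
    lam = c ** 2 * dt / dx ** 2          # lambda = c^2 dt / dx^2
    w = [u0(a + j * dx) for j in range(N + 1)]
    for n in range(steps):
        w = explicit_step(w, lam)
        w[0], w[-1] = ga((n + 1) * dt), gb((n + 1) * dt)
    return w, lam

# Illustrative data: c = 1, u0 = sin(pi x), zero boundary values.
# The exact solution is then u(x, t) = exp(-pi^2 t) sin(pi x).
w, lam = solve_explicit(lambda x: math.sin(math.pi * x),
                        lambda t: 0.0, lambda t: 0.0,
                        c=1.0, a=0.0, b=1.0, N=20, dt=0.001, steps=100)
```

Here λ = 0.001/0.05² = 0.4 ≤ 1/2, so (as shown later in the stability analysis) the scheme is stable for this choice of step sizes.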

5. Implicit Scheme.
If ut is approximated by the backward difference quotient

    ut ≈ (u^{n+1}_j − u^n_j) / Δt

at (j, n + 1), then the corresponding difference equation to (1) at grid point (j, n + 1) is

    −λ w^{n+1}_{j+1} + (1 + 2 λ) w^{n+1}_j − λ w^{n+1}_{j−1} = w^n_j .   (7)

The difference equations (7), j = 1, …, N − 1, together with the initial and
boundary conditions as before, can be solved using the Crout algorithm or
the SOR algorithm.
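For a tridiagonal matrix, Crout's LU factorization reduces to the Thomas algorithm sketched below; this is a sketch assuming Dirichlet boundary data, and it is safe without pivoting here because the matrix of (7) is strictly diagonally dominant (1 + 2λ > 2λ).

```python
def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system by LU elimination without pivoting
    (the tridiagonal case of Crout's method)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def implicit_step(w, lam, g0, gN):
    """One step of scheme (7): solve the tridiagonal system for the
    interior unknowns j = 1..N-1; g0, gN are the new boundary values."""
    n = len(w) - 2
    rhs = w[1:-1]                        # slice copies; w is not modified
    rhs[0] += lam * g0                   # boundary terms moved to the RHS
    rhs[-1] += lam * gN
    interior = thomas_solve([-lam] * n, [1 + 2 * lam] * n, [-lam] * n, rhs)
    return [g0] + interior + [gN]
```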

Explicit Method.

    (w^{n+1}_j − w^n_j) / Δt = c² (w^n_{j−1} − 2 w^n_j + w^n_{j+1}) / (Δx)²   (†)

Letting λ := c² Δt/(Δx)² gives (6).

Implicit Method.

    (w^{n+1}_j − w^n_j) / Δt = c² (w^{n+1}_{j−1} − 2 w^{n+1}_j + w^{n+1}_{j+1}) / (Δx)²   (‡)

Letting λ := c² Δt/(Δx)² gives (7).

In matrix form

    [ 1+2λ   −λ              ]
    [  −λ   1+2λ   −λ        ]
    [         ⋱     ⋱    ⋱  ]  w = b .
    [               −λ  1+2λ ]

The matrix is tridiagonal and diagonally dominant ⇒ Crout / SOR.

6. Crank–Nicolson Scheme.
The Crank–Nicolson scheme is the average of the explicit scheme at (j, n)
and the implicit scheme at (j, n + 1).
The resulting difference equation is

    −(λ/2) w^{n+1}_{j−1} + (1 + λ) w^{n+1}_j − (λ/2) w^{n+1}_{j+1} = (λ/2) w^n_{j−1} + (1 − λ) w^n_j + (λ/2) w^n_{j+1} .   (8)

The difference equations (8), j = 1, …, N − 1, together with the initial
and boundary conditions as before, can be solved using the Crout algorithm
or the SOR algorithm.
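One step of (8) can be sketched in the predictor–corrector form (with μ = λ/2): build the explicit right-hand side, then solve the implicit tridiagonal system. The boundary values g0, gN at the new time level are passed in; this is an illustrative sketch, not the notes' own code.

```python
def tridiag_solve(a, b, c, d):
    """Tridiagonal solve (Thomas algorithm); a: sub-, b: main-, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(w, lam, g0, gN):
    """One step of scheme (8) with mu = lam/2: explicit predictor on the
    right-hand side, implicit corrector solved as a tridiagonal system."""
    mu = lam / 2
    n = len(w) - 2
    # predictor: w_hat_j = mu w[j+1] + (1 - 2 mu) w[j] + mu w[j-1]
    rhs = [mu * w[j + 1] + (1 - 2 * mu) * w[j] + mu * w[j - 1]
           for j in range(1, len(w) - 1)]
    rhs[0] += mu * g0                    # boundary terms at the new level
    rhs[-1] += mu * gN
    interior = tridiag_solve([-mu] * n, [1 + 2 * mu] * n, [-mu] * n, rhs)
    return [g0] + interior + [gN]
```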

Crank–Nicolson.
½ [(†) + (‡)] gives

    (w^{n+1}_j − w^n_j) / Δt = ½ c² (w^n_{j−1} − 2 w^n_j + w^n_{j+1}) / (Δx)²
                             + ½ c² (w^{n+1}_{j−1} − 2 w^{n+1}_j + w^{n+1}_{j+1}) / (Δx)²

Letting μ := ½ c² Δt/(Δx)² = λ/2 gives

    −μ w^{n+1}_{j+1} + (1 + 2 μ) w^{n+1}_j − μ w^{n+1}_{j−1} = ŵ^{n+1}_j

where

    ŵ^{n+1}_j = μ w^n_{j+1} + (1 − 2 μ) w^n_j + μ w^n_{j−1} .

This can be interpreted as

    ŵ^{n+1}_j — predictor (explicit method)
    w^{n+1}_j — corrector (implicit method)
7. Local Truncation Errors.
These are measures of the error by which the exact solution of a differential
equation does not satisfy the difference equation at the grid points and are
obtained by substituting the exact solution of the continuous problem into
the numerical scheme.
A necessary condition for the convergence of the numerical solutions to the
continuous solution is that the local truncation error tends to zero as the
step size goes to zero. In this case the method is said to be consistent.
It can be shown that all three methods are consistent.
The explicit and implicit schemes have local truncation errors O(Δt, (Δx)2 ),
while that of the Crank–Nicolson scheme is O((Δt)2 , (Δx)2 ).

Local Truncation Error.
For the explicit scheme we get for the LTE at (j, n)

    E^n_j = (u(xj, tn+1) − u(xj, tn)) / Δt − c² (u(xj−1, tn) − 2 u(xj, tn) + u(xj+1, tn)) / (Δx)² .

With the help of a Taylor expansion at (xj, tn) we find that

    (u(xj, tn+1) − u(xj, tn)) / Δt = ut(xj, tn) + O(Δt) ,
    (u(xj−1, tn) − 2 u(xj, tn) + u(xj+1, tn)) / (Δx)² = uxx(xj, tn) + O((Δx)²) .

Hence

    E^n_j = [ut(xj, tn) − c² uxx(xj, tn)] + O(Δt) + O((Δx)²) = O(Δt) + O((Δx)²) ,

since ut − c² uxx = 0 for the exact solution.
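The estimate can be checked numerically. Below, u(x, t) = e^{−t} sin x, an exact solution of ut = uxx (c = 1, an illustrative choice), is substituted into the explicit scheme; shrinking Δt and Δx tenfold shrinks the LTE accordingly.

```python
import math

def lte_explicit(u, x, t, dt, dx, c=1.0):
    """Local truncation error E^n_j of the explicit scheme at (x, t)."""
    time_part = (u(x, t + dt) - u(x, t)) / dt
    space_part = (u(x - dx, t) - 2 * u(x, t) + u(x + dx, t)) / dx ** 2
    return time_part - c ** 2 * space_part

# u(x, t) = exp(-t) sin(x) solves u_t = u_xx exactly (c = 1).
u = lambda x, t: math.exp(-t) * math.sin(x)

coarse = abs(lte_explicit(u, 0.7, 0.3, 1e-2, 1e-1))
fine = abs(lte_explicit(u, 0.7, 0.3, 1e-3, 1e-2))   # both steps reduced tenfold
```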

8. Numerical Stability.
Consistency is a necessary but not a sufficient condition for convergence.
Roundoff errors incurred during calculations may lead to a blow-up of the
solution or erode the whole computation.
A scheme is stable if roundoff errors are not amplified in the calculations.
The Fourier method can be used to check if a scheme is stable.
Assume that a numerical scheme admits a solution of the form

    v^n_j = a^(n)(ω) e^{i j ω Δx} ,   (9)

where ω is the wave number and i = √−1.

Define

    G(ω) = a^(n+1)(ω) / a^(n)(ω) ,

the amplification factor, which governs the growth of the Fourier
component a(ω).
The von Neumann stability condition is

    |G(ω)| ≤ 1  for 0 ≤ ω Δx ≤ π.

It can be shown that the explicit scheme is stable if and only if λ ≤ 1/2
(it is conditionally stable), while the implicit and Crank–Nicolson schemes
are stable for any value of λ (they are unconditionally stable).

Stability Analysis.
For the explicit scheme, substituting (9) into (6) gives

    a^(n+1)(ω) e^{i j ω Δx} = λ a^(n)(ω) e^{i (j+1) ω Δx} + (1 − 2 λ) a^(n)(ω) e^{i j ω Δx} + λ a^(n)(ω) e^{i (j−1) ω Δx}

    ⇒ G(ω) = a^(n+1)(ω) / a^(n)(ω) = λ e^{i ω Δx} + (1 − 2 λ) + λ e^{−i ω Δx} .

The von Neumann stability condition then is

    |G(ω)| ≤ 1 ⇔ |λ e^{i ω Δx} + (1 − 2 λ) + λ e^{−i ω Δx}| ≤ 1
             ⇔ |(1 − 2 λ) + 2 λ cos(ω Δx)| ≤ 1
             ⇔ |1 − 4 λ sin²(ω Δx / 2)| ≤ 1        [cos 2α = 1 − 2 sin²α]
             ⇔ 0 ≤ 4 λ sin²(ω Δx / 2) ≤ 2
             ⇔ 0 ≤ λ ≤ 1 / (2 sin²(ω Δx / 2))

for all 0 ≤ ω Δx ≤ π.
This is equivalent to 0 ≤ λ ≤ 1/2.
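The computation above can be verified numerically: the sketch below evaluates G(ω) directly from its definition and scans 0 ≤ ωΔx ≤ π.

```python
import cmath
import math

def amplification(lam, theta):
    """G(omega) for the explicit scheme; theta = omega * dx."""
    return lam * cmath.exp(1j * theta) + (1 - 2 * lam) + lam * cmath.exp(-1j * theta)

def is_stable(lam, samples=1000):
    """von Neumann check: |G| <= 1 over 0 <= omega*dx <= pi (up to rounding)."""
    return all(abs(amplification(lam, math.pi * k / samples)) <= 1 + 1e-12
               for k in range(samples + 1))
```

This also confirms the closed form G(ω) = 1 − 4λ sin²(ωΔx/2) used in the derivation.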

Remark.
The explicit method is stable if and only if

    Δt ≤ (Δx)² / (2 c²).   (†)

(†) is a strong restriction on the time step size Δt. If Δx is reduced to ½ Δx,
then Δt must be reduced to ¼ Δt, so the total computational work increases
by a factor of 8.

Example.

    ut = uxx ,  (x, t) ∈ [0, 1] × [0, 1].

Take Δx = 0.01. Then

    λ ≤ 1/2  ⇒  Δt ≤ 0.00005 ,

i.e. the number of grid points is

    (1/Δx) × (1/Δt) = 100 × 20,000 = 2 × 10⁶.

Remark.
In vector notation, the explicit scheme can be written as

    w^{n+1} = A w^n + b^n ,

where w^n = (w^n_1, …, w^n_{N−1})ᵀ ∈ R^{N−1},

    A = [ 1−2λ    λ              ]
        [   λ   1−2λ    λ        ]
        [         ⋱     ⋱   ⋱  ]  ∈ R^{(N−1)×(N−1)} ,
        [               λ  1−2λ  ]

and b^n = (λ w^n_0, 0, …, 0, λ w^n_N)ᵀ ∈ R^{N−1}.

For the implicit method we get

    B w^{n+1} = w^n + b^{n+1} ,  where  B = [ 1+2λ   −λ              ]
                                            [  −λ   1+2λ   −λ        ]
                                            [         ⋱     ⋱   ⋱  ] .
                                            [               −λ  1+2λ ]

Remark.
Forward diffusion equation: ut − c² uxx = 0, t ≥ 0.
Backward diffusion equation:

    ut + c² uxx = 0 ,  t ≤ T ,
    u(x, T ) = uT (x)  ∀ x ,
    u(a, t) = ga(t), u(b, t) = gb(t)  ∀ t  [as before].

[Note: We could use the transformation v(x, t) := u(x, T − t) in order to transform this into a standard forward diffusion problem.]
We can solve the backward diffusion equation directly by starting at t = T and
solving "backwards", i.e. given w^{n+1}, find w^n:

    Implicit:  w^{n+1} = Ã w^n + b̃^n   (a tridiagonal system must be solved for w^n)
    Explicit:  B̃ w^{n+1} = w^n + b̃^n   (w^n is obtained directly from w^{n+1})

The von Neumann stability condition for the backward problem then becomes

    |G̃(ω)| = |a^(n)(ω) / a^(n+1)(ω)| ≤ 1 .

Stability of the Binomial Model.
The binomial model is an explicit method for a backward equation:

    V^n_j = (1/R) (p V^{n+1}_{j+1} + (1 − p) V^{n+1}_{j−1}) = (1/R) (p V^{n+1}_{j+1} + 0 · V^{n+1}_j + (1 − p) V^{n+1}_{j−1})

for j = −n, −n + 2, …, n − 2, n and n = N − 1, …, 1, 0.
Here the terminal values V^N_{−N}, V^N_{−N+2}, …, V^N_{N−2}, V^N_N are given.
Now let V^n_j = a^(n)(ω) e^{i j ω Δx}; then

    a^(n)(ω) e^{i j ω Δx} = (1/R) (p a^(n+1)(ω) e^{i (j+1) ω Δx} + (1 − p) a^(n+1)(ω) e^{i (j−1) ω Δx})

    ⇒ G̃(ω) = (p e^{i ω Δx} + (1 − p) e^{−i ω Δx}) e^{−r Δt} = (cos(ω Δx) + q i sin(ω Δx)) e^{−r Δt} ,  q := 2 p − 1 ,

    ⇒ |G̃(ω)|² = (cos²(ω Δx) + q² sin²(ω Δx)) e^{−2 r Δt} = (1 + (q² − 1) sin²(ω Δx)) e^{−2 r Δt} ≤ e^{−2 r Δt} ≤ 1

if q² ≤ 1 ⇔ −1 ≤ q ≤ 1 ⇔ p ∈ [0, 1]. Hence the binomial model is
stable.
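The bound |G̃(ω)|² ≤ 1 can be checked numerically; the sketch below evaluates the factor over a grid of p and ωΔx values (r = 0.05, Δt = 0.01 are illustrative choices).

```python
import math

def growth_factor_sq(p, r, dt, theta):
    """|G(omega)|^2 for the binomial scheme; theta = omega * dx, q = 2p - 1."""
    q = 2 * p - 1
    return (1 + (q * q - 1) * math.sin(theta) ** 2) * math.exp(-2 * r * dt)

# For p in [0, 1] the factor never exceeds 1, so the scheme is stable.
worst = max(growth_factor_sq(p / 10, 0.05, 0.01, th / 10 * math.pi)
            for p in range(11) for th in range(11))
```

Outside p ∈ [0, 1] (i.e. q² > 1) the factor can exceed 1, so stability is lost.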

Stability of the CRR Model.
We know that the binomial model is stable if p ∈ (0, 1).
For the CRR model we have

    u = e^{σ √Δt} ,  d = e^{−σ √Δt} ,  p = (R − d)/(u − d) ,

so p ∈ (0, 1) is equivalent to u > R > d.
Clearly, for Δt small, we can ensure that

    e^{σ √Δt} > e^{r Δt} .

Hence the CRR model is stable if Δt is sufficiently small, i.e. if Δt < σ²/r².

Alternatively, one can argue (less rigorously) as follows. Since Δx = u S − S = S (e^{σ √Δt} − 1) ≈ S σ √Δt, and as the BSE can be written as

    ut + ½ σ² S² uSS + … = 0  (so that c² = ½ σ² S²),

it follows that

    λ = c² Δt/(Δx)² = ½ σ² S² · Δt/(S² σ² Δt) = ½  ⇒  CRR is stable.

9. Simplification of the BSE.
Assume V (S, t) is the price of a European option at time t.
Then V satisfies the Black–Scholes equation with appropriate terminal
and boundary conditions.
Define

    τ = T − t ,  x = ln S ,  w(x, τ ) = e^{α x + β τ} V (S, t) ,

where α and β are parameters.
Then the Black–Scholes equation can be transformed into a basic diffusion
equation:

    ∂w/∂τ = ½ σ² ∂²w/∂x²

with a new set of initial and boundary conditions.
Finite difference methods can be used to solve the corresponding difference
equations and hence to derive option values at grid points.

Transformation of the BSE.
Consider a call option.
Let τ = T − t be the remaining time to maturity. Set u(S, τ ) = V (S, t). Then
∂u/∂τ = −∂V/∂t and the BSE is equivalent to

    uτ = ½ σ² S² uSS + r S uS − r u ,   (†)
    u(S, 0) = V (S, T ) = (S − X)⁺ ,   (IC)
    u(0, τ ) = V (0, t) = 0 ,  u(S, τ ) = V (S, t) ≈ S as S → ∞.   (BC)

Let x = ln S (⇔ S = eˣ). Set ũ(x, τ ) = u(S, τ ). Then

    ũx = uS eˣ = S uS ,  ũxx = S uS + S² uSS

and (†) becomes

    ũτ = ½ σ² ũxx + (r − ½ σ²) ũx − r ũ ,   (‡)
    ũ(x, 0) = u(eˣ, 0) = (eˣ − X)⁺ ,   (IC)
    ũ(x, τ ) = u(0, τ ) = 0 as x → −∞ ,  ũ(x, τ ) = u(eˣ, τ ) ≈ eˣ as x → ∞.   (BC)

Note that the growth condition (4), lim_{|x|→∞} ũ(x, τ ) e^{−a x²} = 0 for any a > 0,
is satisfied. Hence (‡) is well posed.
Let w(x, τ ) = e^{α x + β τ} ũ(x, τ ) ⇔ ũ(x, τ ) = e^{−α x − β τ} w(x, τ ) =: C w(x, τ ).
Then

    ũτ = C (−β w + wτ)
    ũx = C (−α w + wx)
    ũxx = C (−α (−α w + wx) + (−α wx + wxx)) = C (α² w − 2 α wx + wxx) .

So (‡) is equivalent to

    C (−β w + wτ) = ½ σ² C (α² w − 2 α wx + wxx) + (r − ½ σ²) C (−α w + wx) − r C w .

In order to cancel the w and wx terms we need

    −β = ½ σ² α² − (r − ½ σ²) α − r ,          α = (1/σ²) (r − ½ σ²) ,
                                          ⇔
    0 = ½ σ² (−2 α) + (r − ½ σ²) .             β = (1/(2 σ²)) (r − ½ σ²)² + r .
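The formulas for α and β can be sanity-checked by substituting them back into the two cancellation conditions; r = 0.05 and σ = 0.2 below are illustrative sample values, not from the notes.

```python
# Verify that alpha and beta cancel the w and w_x terms in the transformed
# equation; r and sigma are illustrative sample values.
r, sigma = 0.05, 0.2
alpha = (r - sigma ** 2 / 2) / sigma ** 2
beta = (r - sigma ** 2 / 2) ** 2 / (2 * sigma ** 2) + r

# coefficient of w_x:  (1/2) sigma^2 (-2 alpha) + (r - sigma^2/2)  must vanish
wx_coeff = 0.5 * sigma ** 2 * (-2 * alpha) + (r - 0.5 * sigma ** 2)
# condition  -beta = (1/2) sigma^2 alpha^2 - (r - sigma^2/2) alpha - r :
w_coeff = 0.5 * sigma ** 2 * alpha ** 2 - (r - 0.5 * sigma ** 2) * alpha - r + beta
```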

With this choice of α and β, equation (‡) is equivalent to

    wτ = ½ σ² wxx ,   (♯)
    w(x, 0) = e^{α x} ũ(x, 0) = e^{α x} (eˣ − X)⁺ ,   (IC)
    w(x, τ ) = 0 as x → −∞ ,  w(x, τ ) ≈ e^{α x + β τ} eˣ as x → ∞.   (BC)

Note that the growth condition (4) is satisfied. Hence (♯) is well posed.

Implementation.

1. Choose a truncated interval [a, b] to approximate (−∞, ∞).
   Since e⁻⁸ ≈ 0.0003 and e⁸ ≈ 2981, [a, b] = [−8, 8] serves all practical purposes.

2. Choose integers N, M to get the step sizes Δx = (b − a)/N and Δτ = (T − t)/M.
   Grid points (xj, τn):
   xj = a + j Δx, j = 0, 1, …, N and τn = n Δτ , n = 0, 1, …, M.
   Note: x0, xN and τ0 represent the boundary of the grid with known values.
3. Solve (♯) with

       w(x, 0) = e^{α x} (eˣ − X)⁺ ,   (IC)
       w(a, τ ) = 0 ,  w(b, τ ) = e^{(α+1) b + β τ}  or, a better choice,  e^{α b} (e^b − X) e^{β τ} .   (BC)

   Note: If the explicit method is used, N and M need to be chosen such that

       ½ σ² Δτ /(Δx)² ≤ ½  ⇔  M ≥ σ² (T − t) N² / (b − a)².

   If the implicit or Crank–Nicolson scheme is used, there are no restrictions
   on N, M. Use Crout or SOR to solve.

4. Assume w(xj, τM), j = 0, 1, …, N, are the solutions from step 3; then the
   call option price at time t is

       V (Sj, t) = e^{−α xj − β (T − t)} w(xj, τM) ,  j = 0, 1, …, N ,

   where Sj = e^{xj} and τM = T − t.
   Note: The Sj are not equally spaced.

10. Direct Discretization of the BSE.
Exercise: Apply the Crank–Nicolson scheme directly to the BSE, i.e. without
transforming variables; write out the resulting difference equations and
carry out a stability analysis.
C++ Exercise: Write a program to solve the BSE using the result of the
previous exercise and the Crout algorithm. The inputs are the interest
rate r, the volatility σ, the current time t, the expiry time T , the strike
price X, the maximum price Smax, the number of intervals N in [0, Smax],
and the number of subintervals M in [t, T ]. The outputs are the asset prices
Sj, j = 0, 1, …, N , at time t, and the corresponding European call and
put prices (with the same strike price X).

11. Greeks.
Assume that the asset prices Sj and option values Vj , j = 0, 1, . . . , N , are
known at time t.
The sensitivities of V at Sj , j = 1, . . . , N − 1, are computed as follows:
    δj = ∂V/∂S |_{S=Sj} ≈ (Vj+1 − Vj−1) / (Sj+1 − Sj−1) ,

which equals (Vj+1 − Vj−1) / (2 ΔS) if S is equally spaced;

    γj = ∂²V/∂S² |_{S=Sj} ≈ [ (Vj+1 − Vj)/(Sj+1 − Sj) − (Vj − Vj−1)/(Sj − Sj−1) ] / (Sj − Sj−1) ,

which equals (Vj+1 − 2 Vj + Vj−1) / (ΔS)² if S is equally spaced.
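The two formulas translate directly into code. The sketch below follows the notes' nonuniform-grid formulas and checks them on an equally spaced grid with the quadratic V = S², for which δ = 2S and γ = 2 exactly.

```python
def greeks(S, V):
    """Delta and gamma at the interior nodes S_1..S_{N-1}, using the
    finite-difference formulas from the notes (the gamma formula divides
    by S_j - S_{j-1})."""
    delta, gamma = {}, {}
    for j in range(1, len(S) - 1):
        delta[j] = (V[j + 1] - V[j - 1]) / (S[j + 1] - S[j - 1])
        fwd = (V[j + 1] - V[j]) / (S[j + 1] - S[j])
        bwd = (V[j] - V[j - 1]) / (S[j] - S[j - 1])
        gamma[j] = (fwd - bwd) / (S[j] - S[j - 1])
    return delta, gamma

# Equally spaced check data: V = S^2 gives delta = 2 S_j and gamma = 2 exactly.
S = [90.0 + 2.0 * j for j in range(11)]
V = [s * s for s in S]
delta, gamma = greeks(S, V)
```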

12. Diffusion Equations of Two State Variables.

    ∂u/∂t = α² (∂²u/∂x² + ∂²u/∂y²) ,  (x, y, t) ∈ [a, b] × [c, d] × [0, ∞).   (10)

The initial condition is

u(x, y, 0) = u0 (x, y) ∀ (x, y) ∈ [a, b] × [c, d] ,

and the boundary conditions are

u(a, y, t) = ga (y, t), u(b, y, t) = gb (y, t) ∀ y ∈ [c, d], t ≥ 0 ,

and

u(x, c, t) = gc (x, t), u(x, d, t) = gd (x, t) ∀ x ∈ [a, b], t ≥ 0 .

Here we assume that all the functions involved are consistent, in the sense
that they have the same value at common points, e.g. ga (c, t) = gc (a, t) for
all t ≥ 0.

13. Grid Points.
(xi, yj, tn), where

    xi = a + i Δx ,  Δx = (b − a)/I ,  i = 0, …, I ,
    yj = c + j Δy ,  Δy = (d − c)/J ,  j = 0, …, J ,
    tn = n Δt ,  Δt = T /N ,  n = 0, …, N ,

and I, J, N are integers.

Recalling the finite differences (5), we have

    uxx ≈ (u^n_{i+1,j} − 2 u^n_{i,j} + u^n_{i−1,j}) / (Δx)²  and  uyy ≈ (u^n_{i,j+1} − 2 u^n_{i,j} + u^n_{i,j−1}) / (Δy)²

at a grid point (i, j, n).

Depending on how ut is approximated, we have three basic schemes: the explicit, implicit, and Crank–Nicolson schemes.

14. Explicit Scheme.
If ut is approximated by the forward difference quotient

    ut ≈ (u^{n+1}_{i,j} − u^n_{i,j}) / Δt

at (i, j, n), then the corresponding difference equation at grid point (i, j, n) is

    w^{n+1}_{i,j} = (1 − 2 λ − 2 μ) w^n_{i,j} + λ w^n_{i+1,j} + λ w^n_{i−1,j} + μ w^n_{i,j+1} + μ w^n_{i,j−1}   (11)

for i = 1, …, I − 1 and j = 1, …, J − 1, where

    λ = α² Δt/(Δx)²  and  μ = α² Δt/(Δy)².

(11) can be solved explicitly. It has local truncation error O(Δt, (Δx)², (Δy)²),
but is only conditionally stable.

15. Implicit Scheme.
If ut is approximated by the backward difference quotient ut ≈ (u^{n+1}_{i,j} − u^n_{i,j}) / Δt
at (i, j, n + 1), then the difference equation at grid point (i, j, n + 1) is

    (1 + 2 λ + 2 μ) w^{n+1}_{i,j} − λ w^{n+1}_{i+1,j} − λ w^{n+1}_{i−1,j} − μ w^{n+1}_{i,j+1} − μ w^{n+1}_{i,j−1} = w^n_{i,j}   (12)

for i = 1, …, I − 1 and j = 1, …, J − 1.
For fixed n, there are (I − 1)(J − 1) unknowns and equations. (12) can be
solved by relabeling the grid points and using the SOR algorithm.
(12) is unconditionally stable with local truncation error O(Δt, (Δx)², (Δy)²),
but is more difficult to solve, as the system is no longer tridiagonal, so the
Crout algorithm cannot be applied directly.

16. Crank–Nicolson Scheme.


It is the average of the explicit scheme at (i, j, n) and the implicit scheme
at (i, j, n + 1). It is similar to the implicit scheme but with the improved
local truncation error O((Δt)2 , (Δx)2 , (Δy)2 ).

Solving the Implicit Scheme.

    (1 + 2 λ + 2 μ) w^{n+1}_{i,j} − λ w^{n+1}_{i+1,j} − λ w^{n+1}_{i−1,j} − μ w^{n+1}_{i,j+1} − μ w^{n+1}_{i,j−1} = w^n_{i,j}

With SOR for ω ∈ (0, 2) (here ω denotes the relaxation parameter, not a wave number):

For each n = 0, 1, …, N

1. Set w^{n+1,0} := w^n and fill in the boundary values w^{n+1,0}_{0,j}, w^{n+1,0}_{I,j}, w^{n+1,0}_{i,0}, w^{n+1,0}_{i,J} for all i, j.

2. For k = 0, 1, …
   For i = 1, …, I − 1, j = 1, …, J − 1

       ŵ^{k+1}_{i,j} = (1/(1 + 2 λ + 2 μ)) (w^n_{i,j} + λ w^{n+1,k}_{i+1,j} + λ w^{n+1,k+1}_{i−1,j} + μ w^{n+1,k}_{i,j+1} + μ w^{n+1,k+1}_{i,j−1})
       w^{n+1,k+1}_{i,j} = (1 − ω) w^{n+1,k}_{i,j} + ω ŵ^{k+1}_{i,j}

   until ‖w^{n+1,k+1} − w^{n+1,k}‖ < ε.

3. Set w^{n+1} = w^{n+1,k+1}.
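The algorithm above can be sketched as follows (a lexicographic sweep over i, j; updating the array in place supplies the k+1 iterates for the already-visited neighbours, exactly as the superscripts in step 2 indicate):

```python
def sor_implicit_step(w, lam, mu, omega=1.5, tol=1e-10, max_iter=10000):
    """One implicit time step (12) on a 2-d grid, solved by SOR.
    w is a list of rows w[i][j]; boundary values are kept fixed."""
    I, J = len(w) - 1, len(w[0]) - 1
    new = [row[:] for row in w]                 # start from w^{n+1,0} := w^n
    denom = 1 + 2 * lam + 2 * mu
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, I):
            for j in range(1, J):
                # in-place update: new[i-1][j], new[i][j-1] already hold
                # the k+1 iterates; new[i+1][j], new[i][j+1] still hold k
                hat = (w[i][j] + lam * new[i + 1][j] + lam * new[i - 1][j]
                       + mu * new[i][j + 1] + mu * new[i][j - 1]) / denom
                upd = (1 - omega) * new[i][j] + omega * hat
                diff = max(diff, abs(upd - new[i][j]))
                new[i][j] = upd
        if diff < tol:
            break
    return new
```

The converged iterate satisfies equation (12) up to the stopping tolerance.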

With Block Jacobi/Gauss–Seidel.
Denote

    w̃j = (w_{1,j}, w_{2,j}, …, w_{I−1,j})ᵀ ∈ R^{I−1} ,  j = 1, …, J − 1 ,  w = (w̃1, …, w̃_{J−1})ᵀ ∈ R^{(I−1)(J−1)} .

On setting c = 1 + 2 λ + 2 μ, we have from (12), for j fixed,

    A w̃^{n+1}_j + B w̃^{n+1}_{j+1} + B w̃^{n+1}_{j−1} = d^n_j ,

where

    A = [  c   −λ             ]
        [ −λ    c   −λ        ]
        [        ⋱   ⋱   ⋱  ] ,   B = −μ I  (a multiple of the identity),
        [            −λ    c  ]

    d^n_j = (w^n_{1,j} + λ w^{n+1}_{0,j}, w^n_{2,j}, …, w^n_{I−2,j}, w^n_{I−1,j} + λ w^{n+1}_{I,j})ᵀ .

Rewriting

    A w̃^{n+1}_j + B w̃^{n+1}_{j+1} + B w̃^{n+1}_{j−1} = d^n_j ,  j = 1, …, J − 1 ,

as the block-tridiagonal system

    [ A  B           ] [ w̃^{n+1}_1     ]   [ d̃^n_1     ]
    [ B  A  B        ] [ w̃^{n+1}_2     ]   [ d^n_2     ]
    [    ⋱  ⋱  ⋱   ] [     ⋮         ] = [     ⋮     ] ,
    [       B  A  B  ] [ w̃^{n+1}_{J−2} ]   [ d^n_{J−2} ]
    [          B  A  ] [ w̃^{n+1}_{J−1} ]   [ d̃^n_{J−1} ]

with d̃^n_1 := d^n_1 − B w̃^{n+1}_0 and d̃^n_{J−1} := d^n_{J−1} − B w̃^{n+1}_J, where
w̃^{n+1}_0 and w̃^{n+1}_J represent boundary points, leads to the following
Block Jacobi iteration: For k = 0, 1, …

    A w̃^{n+1,k+1}_1 = −B w̃^{n+1,k}_2 + d̃^n_1
    A w̃^{n+1,k+1}_2 = −B w̃^{n+1,k}_1 − B w̃^{n+1,k}_3 + d^n_2
        ⋮
    A w̃^{n+1,k+1}_{J−2} = −B w̃^{n+1,k}_{J−3} − B w̃^{n+1,k}_{J−1} + d^n_{J−2}
    A w̃^{n+1,k+1}_{J−1} = −B w̃^{n+1,k}_{J−2} + d̃^n_{J−1}

Similarly, the Block Gauss–Seidel iteration is given by:
For k = 0, 1, …

    A w̃^{n+1,k+1}_1 = −B w̃^{n+1,k}_2 + d̃^n_1
    A w̃^{n+1,k+1}_2 = −B w̃^{n+1,k+1}_1 − B w̃^{n+1,k}_3 + d^n_2
        ⋮
    A w̃^{n+1,k+1}_{J−2} = −B w̃^{n+1,k+1}_{J−3} − B w̃^{n+1,k}_{J−1} + d^n_{J−2}
    A w̃^{n+1,k+1}_{J−1} = −B w̃^{n+1,k+1}_{J−2} + d̃^n_{J−1}

In each case, use the Crout algorithm to solve for w̃^{n+1,k+1}_j, j = 1, …, J − 1.

Note on Stability.
Recall that in 1d a scheme was stable if |G(ω)| = |a^(n+1)(ω)/a^(n)(ω)| ≤ 1, where
v^n_j = a^(n)(ω) e^{√−1 j ω Δx}.
In 2d, this is adapted to

    v^n_{i,j} = a^(n)(ω) e^{√−1 (i ω Δx + j ω Δy)} .

17. Alternating Direction Implicit (ADI) Method.
An alternative finite difference method is the ADI scheme, which is
unconditionally stable while the difference equations are still tridiagonal
and diagonally dominant.
The ADI algorithm can be used to efficiently solve the Black–Scholes
two-asset pricing equation:

    Vt + ½ σ1² S1² V_{S1 S1} + ½ σ2² S2² V_{S2 S2} + ρ σ1 σ2 S1 S2 V_{S1 S2} + r S1 V_{S1} + r S2 V_{S2} − r V = 0.   (13)
See Clewlow and Strickland (1998) for details on how to transform the
Black–Scholes equation (13) into the basic diffusion equation (10) and then
to solve it with the ADI scheme.

ADI scheme
Implicit method at (i, j, n + 1), approximating uxx with level-n data:

    (w^{n+1}_{i,j} − w^n_{i,j}) / Δt = α² (w^n_{i+1,j} − 2 w^n_{i,j} + w^n_{i−1,j}) / (Δx)²
                                     + α² (w^{n+1}_{i,j+1} − 2 w^{n+1}_{i,j} + w^{n+1}_{i,j−1}) / (Δy)²

Implicit method at (i, j, n + 2), approximating uyy with level-(n+1) data:

    (w^{n+2}_{i,j} − w^{n+1}_{i,j}) / Δt = α² (w^{n+2}_{i+1,j} − 2 w^{n+2}_{i,j} + w^{n+2}_{i−1,j}) / (Δx)²
                                         + α² (w^{n+1}_{i,j+1} − 2 w^{n+1}_{i,j} + w^{n+1}_{i,j−1}) / (Δy)²

We can write the two equations as follows:

    −μ w^{n+1}_{i,j+1} + (1 + 2 μ) w^{n+1}_{i,j} − μ w^{n+1}_{i,j−1} = λ w^n_{i+1,j} + (1 − 2 λ) w^n_{i,j} + λ w^n_{i−1,j}   (†)
    −λ w^{n+2}_{i+1,j} + (1 + 2 λ) w^{n+2}_{i,j} − λ w^{n+2}_{i−1,j} = μ w^{n+1}_{i,j+1} + (1 − 2 μ) w^{n+1}_{i,j} + μ w^{n+1}_{i,j−1}   (‡)

To solve (†), fix i = 1, …, I − 1 and solve a tridiagonal system to get w^{n+1}_{i,j} for
j = 1, …, J − 1. This can be done with e.g. the Crout algorithm.
To solve (‡), fix j = 1, …, J − 1 and solve a tridiagonal system to get w^{n+2}_{i,j} for
i = 1, …, I − 1.

Currently the method works on the interval [tn, tn+2] and has features of an
explicit method. In order to obtain an (unconditionally stable) implicit method,
we need to adapt it so that it works on the interval [tn, tn+1] and hence gives
values w^n_{i,j} for all n = 1, …, N.
Introduce the intermediate time point n + ½. Then (†) generates the intermediate
values w^{n+½}_{i,j} (not part of the output) and (‡) generates w^{n+1}_{i,j}:

    −(μ/2) w^{n+½}_{i,j+1} + (1 + μ) w^{n+½}_{i,j} − (μ/2) w^{n+½}_{i,j−1} = (λ/2) w^n_{i+1,j} + (1 − λ) w^n_{i,j} + (λ/2) w^n_{i−1,j}   (†)

    −(λ/2) w^{n+1}_{i+1,j} + (1 + λ) w^{n+1}_{i,j} − (λ/2) w^{n+1}_{i−1,j} = (μ/2) w^{n+½}_{i,j+1} + (1 − μ) w^{n+½}_{i,j} + (μ/2) w^{n+½}_{i,j−1}   (‡)
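The half-step pair (†)/(‡) can be sketched as below, assuming zero Dirichlet boundary values for brevity. Each half step is a family of independent tridiagonal solves; with zero boundaries, the L2 norm of the solution cannot grow, reflecting unconditional stability.

```python
def tridiag_solve(a, b, c, d):
    """Tridiagonal solve (Thomas algorithm)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(w, lam, mu):
    """One ADI step via the half-step equations (dagger)/(double dagger).
    w is a list of rows w[i][j]; zero Dirichlet boundaries are assumed."""
    I, J = len(w) - 1, len(w[0]) - 1
    half = [[0.0] * (J + 1) for _ in range(I + 1)]
    # first half step: for each fixed i, tridiagonal solve in j
    for i in range(1, I):
        rhs = [lam / 2 * w[i + 1][j] + (1 - lam) * w[i][j] + lam / 2 * w[i - 1][j]
               for j in range(1, J)]
        sol = tridiag_solve([-mu / 2] * (J - 1), [1 + mu] * (J - 1),
                            [-mu / 2] * (J - 1), rhs)
        for j in range(1, J):
            half[i][j] = sol[j - 1]
    new = [[0.0] * (J + 1) for _ in range(I + 1)]
    # second half step: for each fixed j, tridiagonal solve in i
    for j in range(1, J):
        rhs = [mu / 2 * half[i][j + 1] + (1 - mu) * half[i][j] + mu / 2 * half[i][j - 1]
               for i in range(1, I)]
        sol = tridiag_solve([-lam / 2] * (I - 1), [1 + lam] * (I - 1),
                            [-lam / 2] * (I - 1), rhs)
        for i in range(1, I):
            new[i][j] = sol[i - 1]
    return new
```

Note that λ and μ may be taken well above the explicit limit without the solution blowing up.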
