
Insurance: Mathematics and Economics 36 (2005) 399–420

A numerical method to find the probability of ultimate ruin in the classical risk model with stochastic return on investments
Jostein Paulsen^a,*, Juma Kasozi^b, Andreas Steigen^c
^a Department of Mathematics, University of Bergen, Johs. Brunsgt. 12, 5008 Bergen, Norway
^b Department of Mathematics, Makerere University, Kampala, Uganda
^c Center of Environmental and Resource Studies, University of Bergen, Nygaardsgt. 5, 5020 Bergen, Norway
Received July 2004; received in revised form February 2005; accepted 11 March 2005
Abstract
Let ψ(y) be the probability of ultimate ruin in the classical risk process compounded by a linear Brownian motion. Here y is the initial capital. We give sufficient conditions for the survival probability function Φ = 1 − ψ to be four times continuously differentiable, which in particular implies that Φ is the solution of a second order integro-differential equation. Transforming this equation into an ordinary Volterra integral equation of the second kind, we analyze properties of its numerical solution when basically the block-by-block method in conjunction with Simpson's rule is used. Finally, several numerical examples show that the method works very well.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Ruin probability; Volterra equation; Block-by-block method
1. Introduction
The problem of finding the probability of ultimate ruin for a risk process has been the subject of a vast number of
papers in the insurance literature. By far the majority of these papers are concentrated on the analytical aspects of
the problem, but there is also a quite considerable number that deal with numerical methods to actually calculate this
evasive probability. These papers are roughly divided into two groups, the largest attempting to solve the relevant
integral- or integro-differential equation numerically, and the other using Monte Carlo techniques. Examples of
papers in this latter group are Lehtonen and Nyrhinen (1992) and Asmussen and Binswanger (1997) for the classical
risk process while Asmussen and Nielsen (1995) allows for a deterministic return on investments. However, it is

* Corresponding author.
E-mail addresses: jostein@mi.uib.no (J. Paulsen), kasozi@math.mak.ac.ug (J. Kasozi), andreas.steigen@smr.uib.no (A. Steigen).
0167-6687/$ – see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.insmatheco.2005.02.008
400 J. Paulsen et al. / Insurance: Mathematics and Economics 36 (2005) 399420
clear from the paper by Paulsen and Rasmussen (2003) that successful use of Monte Carlo techniques in the presence
of a stochastic return on investments is by no means a routine task.
When it comes to numerical methods for the classical risk process, the literature abounds with suggestions, see
e.g. De Vylder (1996) for an overview of some of these methods. Surprisingly few of the suggested methods take
advantage of the vast knowledge about integral equations in the numerical literature, and as a consequence many
do not pay much attention to the error rate inherent in any numerical method. An exception is the paper by Ramsay
and Usabel (1997) where the method of product integration is used to solve the Volterra integral equation for the
ruin probability.
It should also be mentioned that there is a considerable literature dealing with numerical methods for calculating
the ruin probability in nite time, see De Vylder (1996) for several of these. More recent papers allowing for
a constant return on investments are Dickson and Waters (1999), Brekelmans and De Waegenaere (2001), and
Cardoso and Waters (2003). See also the survey paper by Paulsen (1998a). Since very little is known analytically
here, numerical methods are even more important than in the infinite time case.
In this paper we will be concerned with infinite time ruin probabilities for the diffusion perturbated classical
risk process compounded by a linear Brownian motion, see Section 2 for the details. When there is no Brownian
motion, the abovementioned paper by Dickson and Waters (1999) presents several numerical methods from the
literature and shows by numerical examples that some of them work well. However, none of these methods belong
to the standard numerical methods developed by numerical analysts to solve the relevant equations, and conse-
quently Dickson and Waters do not discuss the error rate of the various methods. In Section 2, we use integration
by parts to represent the survival probability as a linear Volterra integral equation of the second kind, and then
use (in one case a modification of) the standard block-by-block method in conjunction with Simpson's rule to
solve this equation. Partly relying on known results in the numerical literature, we prove in Section 3 that the
numerical solution has an error of order 4. For this to hold it is necessary that the survival function is four times
continuously differentiable, and some effort is spent in Section 2 to prove this fact. We also prove in Section 3 that
the error rate of order 4 still holds when functions appearing in the kernel of the integral operator are calculated
numerically using the Simpson rule. Finally, in Section 4, several numerical examples are given showing that the
method really works. Here it is also shown how known asymptotic results can be used to approximate very small
ruin probabilities and also how they can be used to facilitate calculations when ruin probabilities are very heavy
tailed.
In this paper we concentrate on the case when assets earn return on investments. Needless to say, our methods
would of course work equally well for the simpler case with no such return.
2. The model and some theoretical results
We will assume that all processes and random variables are defined on a filtered probability space (Ω, F, {F_t}, P) satisfying the usual conditions, i.e. F_t is right continuous and P-complete.
The basic process is the surplus generating process

    P_t = pt + \sigma_P W_{P,t} - \sum_{i=1}^{N_t} S_i, \quad t \ge 0.  (2.1)

Here W_P is a standard Brownian motion independent of the compound Poisson process \sum_{i=1}^{N_t} S_i. We denote the intensity of N by λ, and let F be the distribution function of the S_i. It is assumed that F is continuous and concentrated on (0, ∞). In the literature P is called a classical risk process perturbated by a diffusion.
The interpretation of (2.1) is that p is the premium intensity, N is the claim number process and the {S_i} are the claim sizes. The assumption F(0) = 0 assures that they are positive. The Brownian term σ_P W_P is meant to take care of small perturbations in the other two terms.
The risk process Y is the solution of the linear stochastic differential equation

    Y_t = y + P_t + \int_0^t Y_s \, dR_s,  (2.2)

where R is the return on investment generating process

    R_t = rt + \sigma_R W_{R,t}, \quad t \ge 0,  (2.3)

and W_R is another Brownian motion, independent of the process P. The interpretation of (2.3) is that r is the nonrisky part of the investments so that R_t = rt implies that one unit invested at time zero will be worth e^{rt} at time t. The term σ_R W_R then takes care of the random fluctuations in the investment return.
By Paulsen (1993) the solution of (2.2) is

    Y_t = \tilde{R}_t \left( y + \int_0^t \tilde{R}_s^{-1} \, dP_s \right),  (2.4)

where

    \tilde{R}_t = \exp\left( \left( r - \frac{1}{2}\sigma_R^2 \right) t + \sigma_R W_{R,t} \right), \quad t \ge 0,

is just the standard geometric Brownian motion so extensively used in mathematical finance. It is the solution of d\tilde{R}_t = r\tilde{R}_t \, dt + \sigma_R \tilde{R}_t \, dW_{R,t} with \tilde{R}_0 = 1.
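Although the paper solves for ψ through integral equations, the model (2.2)–(2.4) can also be checked by brute-force simulation, which is the Monte Carlo alternative discussed in the introduction. The sketch below is a crude Euler discretization of (2.2) in Python (the paper's own programs were in FORTRAN); the step size, horizon, path count, Exp(1) claims and the Bernoulli approximation of claim arrivals are all illustrative assumptions, and the finite horizon biases the estimate of the infinite time ruin probability downwards.

```python
import math
import random

def ruin_frequency(y0, p=1.1, lam=1.0, r=0.05, sigma_P=0.0, sigma_R=0.0,
                   horizon=50.0, dt=0.05, n_paths=500, seed=1):
    """Crude Euler scheme for (2.2): dY = (p + r*Y) dt + sigma_P dW_P
    + sigma_R * Y dW_R minus compound Poisson jumps (intensity lam,
    Exp(1) claims). Returns the fraction of paths ruined before `horizon`."""
    rng = random.Random(seed)
    sq = math.sqrt(dt)
    ruined = 0
    for _ in range(n_paths):
        y, t = y0, 0.0
        while t < horizon:
            y += (p + r * y) * dt
            if sigma_P:
                y += sigma_P * rng.gauss(0.0, sq)
            if sigma_R:
                y += sigma_R * y * rng.gauss(0.0, sq)
            if rng.random() < lam * dt:   # claim arrival, Bernoulli approximation
                y -= rng.expovariate(1.0)
            if y < 0.0:
                ruined += 1
                break
            t += dt
    return ruined / n_paths
```

With the parameter values of Example 4.1 (p = 1.1, λ = 1, r = 0.05) this gives rough agreement with the tabulated ruin probabilities, but, as noted above, Monte Carlo is far less accurate than the integral-equation approach developed below.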
Since both P and R have stationary independent increments, Y is a homogeneous strong Markov process. Using Itô's formula we find that the infinitesimal generator of Y is given by

    Ag(y) = \frac{1}{2}(\sigma_R^2 y^2 + \sigma_P^2) g''(y) + (ry + p) g'(y) + \lambda \int_0^{\infty} (g(y - x) - g(y)) \, dF(x).
Let T_y = inf{t : Y_t < 0} be the time of ruin, with T_y = ∞ if ruin never occurs, and let ψ(y) = P(T_y < ∞) be the probability of eventual ruin. The following simple, but very useful result is proved in Paulsen and Gjessing (1997), Theorem 2.1.

Theorem 2.1. Assume that the equation AΦ = 0 has a bounded, twice continuously differentiable solution (once continuously differentiable if σ_R = σ_P = 0) that satisfies the boundary conditions

    \Phi(y) = 0 \text{ on } y < 0,
    \Phi(0) = 0 \text{ if } \sigma_P^2 > 0,  (2.5)
    \lim_{y \to \infty} \Phi(y) = 1.

Then Φ(y) = 1 − ψ(y) is the probability of survival.
Using that Φ(y) = 0 for y < 0, AΦ = 0 takes the form

    \frac{1}{2}(\sigma_R^2 y^2 + \sigma_P^2)\Phi''(y) + (ry + p)\Phi'(y) + \lambda \int_0^y \Phi(y - x)\, dF(x) - \lambda\Phi(y) = 0.  (2.6)

Eq. (2.6) is a second order integro-differential equation of the Volterra type. However, repeated integration by parts brings it into an ordinary Volterra integral equation, which will be the object of our numerical study.
Theorem 2.2. The integro-differential equation (2.6) can be represented as the Volterra integral equation of the second kind

    \Phi(y) + \int_0^y K(y, x)\Phi(x)\, dx = \sigma(y),  (2.7)

where for the case with no diffusion, i.e. σ_P² = σ_R² = 0,

    K(y, x) = -\frac{r + \lambda \bar{F}(y - x)}{ry + p}, \qquad \sigma(y) = \frac{p}{ry + p}\,\Phi(0),  (2.8)

with F̄(x) = 1 − F(x). When there is diffusion, i.e. σ_P² + σ_R² > 0,

    K(y, x) = 2\,\frac{(2r - 3\sigma_R^2 + \lambda)x + p + \lambda F_2(y - x) - (r - \sigma_R^2 + \lambda)y}{\sigma_R^2 y^2 + \sigma_P^2},

    \sigma(y) =
      \frac{2p}{\sigma_R^2 y}\,\Phi(0)  \quad \text{if } \sigma_P^2 = 0,
      \frac{\sigma_P^2 y}{\sigma_R^2 y^2 + \sigma_P^2}\,\Phi'(0)  \quad \text{if } \sigma_P^2 > 0,   (2.9)

with F_2(x) = \int_0^x F(v)\, dv.
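To make (2.8) and (2.9) concrete, the kernels can be coded directly once F̄ and F_2 are available. The Python sketch below (not the authors' FORTRAN code) hard-wires standard exponential claims, for which F̄(u) = e^{-u} and F_2(u) = u − 1 + e^{-u}; the default parameter values are illustrative assumptions taken from the examples in Section 4.

```python
import math

def K_no_diffusion(y, x, p=1.1, lam=1.0, r=0.05):
    """Kernel (2.8) for sigma_P = sigma_R = 0, with Exp(1) claims
    so that Fbar(u) = exp(-u)."""
    return -(r + lam * math.exp(-(y - x))) / (r * y + p)

def K_diffusion(y, x, p=1.1, lam=1.0, r=0.05, sP2=0.0, sR2=0.02):
    """Kernel (2.9); for Exp(1) claims F2(u) = int_0^u F(v) dv = u - 1 + exp(-u)."""
    u = y - x
    F2 = u - 1.0 + math.exp(-u)
    num = (2.0 * r - 3.0 * sR2 + lam) * x + p + lam * F2 - (r - sR2 + lam) * y
    return 2.0 * num / (sR2 * y * y + sP2)
```

Note that `K_diffusion` blows up as y → 0 when σ_P² = 0; this is the removable singularity discussed in the remark below, and a solver has to treat the first grid points separately, as done in Theorem 3.1.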
Remark. There is an apparent singularity at 0 for the case σ_P² = 0 and σ_R² > 0, but a simple Taylor expansion of Φ around zero shows that it is a removable singularity. More details here are given in the proof of Theorem 3.1.
Proof. Replacing y in (2.6) with z and integrating by parts w.r.t. z on [0, y] gives in combination with (2.5)

    \frac{1}{2}(\sigma_R^2 y^2 + \sigma_P^2)\Phi'(y) - \frac{1}{2}\sigma_P^2\Phi'(0) + ((r - \sigma_R^2)y + p)\Phi(y) - p\Phi(0) - (r - \sigma_R^2 + \lambda)\int_0^y \Phi(x)\, dx + \lambda\int_0^y F(y - x)\Phi(x)\, dx = 0.  (2.10)

In particular if there is no diffusion, i.e. if σ_P = σ_R = 0, this is just (2.7) and (2.8).
If either or both of σ_P and σ_R are nonzero, then replacing y in (2.10) with z and again integrating by parts w.r.t. z on [0, y] using (2.5) shows that Φ still satisfies the equation (2.7) with K(y, x) and σ(y) given by (2.9). □
Before a deeper analysis is started, it is useful to be aware of the trivial cases. A proof of the following result can be found in Paulsen (1998b).

Theorem 2.3. Let Φ(y) be the survival probability and assume that p > 0, λ > 0 and r > 0. Then Φ(y) > 0 for all y > 0 if and only if r > ½σ_R², and in this case Φ(∞) = 1. When r ≤ ½σ_R², Φ(y) = 0 for all y.
Remark. The fact that Φ(y) ≡ 0 whenever r ≤ ½σ_R² may seem a bit surprising at first sight, but it is easily explained. Assume first that r < ½σ_R². Then \tilde{R}_t^{-1} → ∞ as t → ∞ at a geometric rate, so that V_t = \int_0^t \tilde{R}_s^{-1}\, dP_s will fluctuate more and more. Now by (2.4), T_y = inf{t : V_t < −y}, and because of the large fluctuations, T_y will with probability one be finite. The case with r = ½σ_R² is the limiting case, giving \tilde{R}_t^{-1} = e^{-\sigma_R W_{R,t}}. Therefore, the oscillatory nature of W_R again assures that ruin will happen. When r > ½σ_R², then \tilde{R}_t^{-1} → 0 as t → ∞ at a geometric rate, so that V_t → V_∞, where V_∞ is a finite random variable. So in this case T_y may or may not be finite.
From now on it is assumed that r > ½σ_R². To return to our Eq. (2.7), fix g(0) (or g'(0) if σ_P² > 0) and consider

    g(y) + \int_0^y K(y, x) g(x)\, dx = \sigma(y),  (2.11)

where σ(y) is as in (2.8) or (2.9), but with Φ(0) (Φ'(0)) replaced by g(0) (g'(0)). If g is a bounded solution of (2.11) it is clear that

    \Phi(y) = \frac{1}{g(\infty)}\, g(y),

since Φ(∞) = 1.
The object of this paper is to discuss the numerical solution of (2.7)–(2.9). Basically we will apply the Simpson rule, which is known to have an error of order 4 provided the relevant functions are C⁴[0, ∞), where by C^q[0, ∞) we shall mean q times continuously differentiable functions. The following result complements Theorem 2.1.

Theorem 2.4. Assume that p > 0, r > ½σ_R² and that F ∈ C⁴[0, ∞). Assume also that F̄(x)x^ε stays bounded for some ε > 0. Then the survival probability Φ satisfies (2.6) and Φ ∈ C⁴[0, ∞).
Proof. If σ_P² + σ_R² = 0 or σ_P² > 0 it follows from (2.8) and (2.9) together with Theorem 3.1 in Linz (1985) that the solution of (2.11) is in C⁴[0, ∞). When σ_P² = 0 and σ_R² > 0 it is shown in Gaier and Grandits (2004) (henceforth G&G) that g is in C²[0, ∞), but the proof in their Theorem 5 is easily extended to C⁴[0, ∞) as follows. Note first that upon replacing Φ by g, then as in G&G, Eq. (2.6) can be written as

    \frac{1}{2}(\sigma_R^2 y^2 + \sigma_P^2) g''(y) + (ry + p) g'(y) - \frac{1}{2}\sigma_R^2 f^{(u)}(y) = 0,  (2.12)

where

    f^{(u)}(x) = \frac{2\lambda}{\sigma_R^2}\left( g(0)\bar{F}(x) + \int_0^x g'(x - z)\bar{F}(z)\, dz \right).

G&G solve (2.12) for g_1 = g' to get

    g_1(y) = \left( \int_0^y f^{(u)}(x)\, x^{\beta - 2} e^{-\gamma/x}\, dx \right) y^{-\beta} e^{\gamma/y},  (2.13)

where γ = 2p/σ_R² and β = 2r/σ_R² > 1. For q a positive integer,

    \left( \int_0^y x^{\alpha} x^{\beta - 2} e^{-\gamma/x}\, dx \right) y^{-\beta} e^{\gamma/y} = y^{\alpha - 1} \int_0^{\infty} (1 + t)^{-(\alpha + \beta)} e^{-(\gamma/y)t}\, dt = \sum_{i=1}^q c_i y^{\alpha + i - 1} + o(y^{\alpha + q - 1}),

where the first equality is just the substitution t = y/x − 1 and the second is Watson's lemma (Davies, 1978). We can now as in G&G argue recursively that g_1 ∈ C^i[0, ∞) for i = 2, 3, hence g ∈ C⁴[0, ∞).
If we can prove that there is an increasing bounded solution of (2.12) with g(0) > 0 if σ_P² = 0 and g(0) = 0 if σ_P² > 0, then it is clear that Φ(y) = g(y)/g(∞) will satisfy the boundary conditions (2.5), and hence by Theorem 2.1, Φ is the survival probability function. For the case with σ_P² = 0 and σ_R² > 0 it is shown in G&G that g is increasing and bounded provided F̄(x)x^{1+ε} is bounded for some ε > 0. Their method would also work for the other cases as well, but here we will present a completely different proof that may be of interest in itself. It also works under the weaker condition that F̄(x)x^ε is bounded, thus making the theorem valid for, e.g. Pareto distributions without finite expectations.
It follows from (2.13) that g'(y) > 0 for all y when σ_P² = 0 and σ_R² > 0. Solving (2.12) w.r.t. g' as in (2.13) shows that g'(y) > 0 for all y for the other cases as well. Therefore, to prove that g is bounded, it is sufficient to prove that

    \int_0^y g(x)\, dx < cy(1 + \varepsilon(y))  (2.14)

for some constant c and a function ε satisfying lim_{y→∞} ε(y) = 0. By the version of a Tauberian theorem given in Linz (1985), Theorem 6.8, (2.14) follows provided we can prove that

    \lim_{t \downarrow 0} t\, \hat{g}(t) = C  (2.15)

for some constant C. Here

    \hat{g}(t) = \int_0^{\infty} e^{-ty} g(y)\, dy

is the Laplace transform of g. Now for any y_0 > 0 there is a k so that K(y, x) ≤ k for all x ≥ 0 and all y ≥ y_0. By Example 3.3 in Linz (1985), g(y) ≤ c_1 e^{ky} for some positive c_1, and therefore ĝ exists for t > k. Taking the Laplace transform on both sides of (2.12) then gives for t > k,

    \frac{1}{2}\sigma_R^2 t^2 \hat{g}''(t) + (2\sigma_R^2 - r) t \hat{g}'(t) + \mu(t)\hat{g}(t) = \kappa,  (2.16)

where

    \mu(t) = \sigma_R^2 - r + pt + \frac{1}{2}\sigma_P^2 t^2 - \lambda t \hat{\bar{F}}(t), \qquad \kappa = p g(0) + \frac{1}{2}\sigma_P^2 g'(0), \qquad \hat{\bar{F}}(t) = \int_0^{\infty} e^{-ty} \bar{F}(y)\, dy.
Replacing t by a complex z, it is standard that the solution of (2.16) is analytical on Re(z) > 0, hence by Theorem 10.1 in Chapter 5 of Widder (1971), ĝ(t) exists for all t > 0. Furthermore, t\hat{\bar{F}}(t) = o(t^{ε/2}), so for the asymptotics of ĝ near 0 we may replace μ(t) by its dominating term σ_R² − r since this gives

    \lim_{t \downarrow 0} t\, \hat{g}(t) = \lim_{t \downarrow 0} t\, \tilde{g}(t),

where g̃ solves

    \frac{1}{2}\sigma_R^2 t^2 \tilde{g}''(t) + (2\sigma_R^2 - r) t \tilde{g}'(t) + (\sigma_R^2 - r)\tilde{g}(t) = \kappa.  (2.17)

The general solution of (2.17) when σ_R² ≠ r is

    \tilde{g}(t) = c_1 t^{-1} + c_2 t^{-\rho} + \frac{\kappa}{\sigma_R^2 - r},  (2.18)

where c_1 and c_2 are constants and ρ = 2 − 2r/σ_R² < 1. When σ_R² = r, the solution is

    \tilde{g}(t) = c_1 t^{-1} + c_2 + \frac{2\kappa}{r} \ln t.  (2.19)

Eqs. (2.18) and (2.19) together now give (2.15), and the theorem is proved. □
Remark. Due to the smoothing effects in the model (2.2), particularly if σ_P² > 0, the C^q[0, ∞) properties of F are probably not necessary for Theorem 2.4 to hold, see e.g. Proposition 3.4 and Theorem 3.2 in Paulsen (1993) for a justification of this conjecture. However, they are necessary for the Simpson rule to have an error of order 4.
3. Numerical methods
In this section we will discuss numerical solutions of the survival probability Φ(y) using a fixed grid y = 0, h, 2h, .... It is assumed throughout that the assumptions of Theorem 2.1 hold. For this to be the case, by Theorem 2.3, it is necessary that r > ½σ_R². The numerical solution will be of the form

    g_n + h \sum_{i=0}^{n} w_i K_{n,i} g_i = \sigma_n,  (3.1)

where g_i is the numerical approximation to g(ih), K_{n,i} = K(nh, ih) and σ_n = σ(nh). The w_i are the integration weights. Here we shall use the block-by-block method in conjunction with the Simpson integration rule, thus obtaining solutions in blocks of 2. To briefly explain the method, Simpson's rule gives for any k ∈ C⁴[ih, (i + 2)h],

    \int_{ih}^{(i+2)h} k(x)\, dx = \frac{h}{3}\left( k(ih) + 4k((i + 1)h) + k((i + 2)h) \right) + O(h^5).

Therefore, for n = 2, (3.1) becomes

    g_2 + \frac{1}{3} h (K_{2,0}\, g_0 + 4K_{2,1}\, g_1 + K_{2,2}\, g_2) = \sigma_2.  (3.2)
Here g_1 is unknown, but using the same rule with stepsize h/2, Simpson's rule gives

    g_1 + \frac{1}{6} h (K_{1,0}\, g_0 + 4K_{1,1/2}\, g_{1/2} + K_{1,1}\, g_1) = \sigma_1.  (3.3)

Quadratic interpolation gives that g_{1/2} ≈ (3/8)g_0 + (3/4)g_1 − (1/8)g_2, and inserting this into (3.3) yields

    g_1 + \frac{1}{6} h \left( \left( K_{1,0} + \frac{3}{2} K_{1,1/2} \right) g_0 + (K_{1,1} + 3K_{1,1/2}) g_1 - \frac{1}{2} K_{1,1/2}\, g_2 \right) = \sigma_1.  (3.4)

Eqs. (3.2) and (3.4) are a pair of equations to solve for g_1 and g_2. Continuing like this in blocks of 2 we get
    g_{2m+2} + h \sum_{i=0}^{2m} w_i K_{2m+2,i}\, g_i + \frac{h}{3}\left( K_{2m+2,2m}\, g_{2m} + 4K_{2m+2,2m+1}\, g_{2m+1} + K_{2m+2,2m+2}\, g_{2m+2} \right) = \sigma_{2m+2}  (3.5)

and

    g_{2m+1} + h \sum_{i=0}^{2m} w_i K_{2m+1,i}\, g_i + \frac{h}{6}\left( K_{2m+1,2m}\, g_{2m} + 4K_{2m+1,2m+1/2}\, g_{2m+1/2} + K_{2m+1,2m+1}\, g_{2m+1} \right) = \sigma_{2m+1}.  (3.6)

Approximating g_{2m+1/2} by (3/8)g_{2m} + (3/4)g_{2m+1} − (1/8)g_{2m+2} and inserting this into (3.6) yields a set of two linear equations for g_{2m+1} and g_{2m+2}.
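The whole block-by-block recursion (3.2)–(3.6) fits in a few lines. The sketch below is a generic Python implementation (the paper's own program was in FORTRAN); for each block it eliminates g_{2m+1/2} by the quadratic interpolation above and solves the resulting 2×2 linear system. It is tested here on an equation with known solution, K ≡ −1 and σ ≡ 1, whose exact solution is g(y) = e^y — an assumption made for the test only.

```python
import math

def block_by_block(K, sigma, g0, h, N):
    """Solve g(y) + int_0^y K(y,x) g(x) dx = sigma(y) on the grid
    y = 0, h, ..., N*h (N even), two new values per block as in (3.2)-(3.6)."""
    assert N % 2 == 0
    g = [0.0] * (N + 1)
    g[0] = g0
    for m in range(N // 2):
        n0, n1, n2 = 2 * m, 2 * m + 1, 2 * m + 2
        y1, y2, ymid = n1 * h, n2 * h, (n0 + 0.5) * h

        def hist(y):
            # composite Simpson over [0, n0*h] using the already known g[0..n0]
            if n0 == 0:
                return 0.0
            s = K(y, 0.0) * g[0] + K(y, n0 * h) * g[n0]
            for i in range(1, n0):
                s += (4.0 if i % 2 else 2.0) * K(y, i * h) * g[i]
            return h / 3.0 * s

        # odd equation (3.6), with g_{2m+1/2} = 3/8 g_{2m} + 3/4 g_{2m+1} - 1/8 g_{2m+2}
        a11 = 1.0 + h / 6.0 * (3.0 * K(y1, ymid) + K(y1, y1))
        a12 = -h / 12.0 * K(y1, ymid)
        b1 = sigma(y1) - hist(y1) - h / 6.0 * (K(y1, n0 * h) + 1.5 * K(y1, ymid)) * g[n0]
        # even equation (3.5)
        a21 = 4.0 * h / 3.0 * K(y2, y1)
        a22 = 1.0 + h / 3.0 * K(y2, y2)
        b2 = sigma(y2) - hist(y2) - h / 3.0 * K(y2, n0 * h) * g[n0]
        det = a11 * a22 - a12 * a21
        g[n1] = (b1 * a22 - a12 * b2) / det
        g[n2] = (a11 * b2 - a21 * b1) / det
    return g
```

For the ruin problem one plugs in the kernels of Theorem 2.2; the fourth order convergence stated in Theorem 3.1 below can be checked empirically by halving h.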
In order to solve for the g_n, a starting value is required. When σ_P² = 0, any positive value of g_0 can be chosen. When σ_P² > 0, it follows from Theorem 2.1 that g_0 = 0 must be used, while g'(0) > 0 can be chosen arbitrarily. This is because in both cases different starting values will cancel in Φ(y) = g(y)/g(∞). This was confirmed by the numerical examples, where we found that the results did not depend on the level of these starting values.
Theorem 3.1. Given the assumptions of Theorem 2.4, assume that (2.11) is solved using the block-by-block method in conjunction with Simpson's rule. Let y be fixed so that y = nh. If σ_P² > 0 then with g_0 = 0 and g'(0) > 0,

    |g_n - g(y)| = O(h^4).  (3.7)

If σ_P² = σ_R² = 0, then (3.7) also holds for any g_0 = g(0) > 0. If σ_P² = 0 and σ_R² > 0, then with any g_0 = g(0) > 0, (3.7) holds for n = 0 and n > 2 provided that for n = 1, 2,

    g_n = \left( 1 + \frac{\lambda}{p} nh + \frac{\lambda(\lambda - r) - \lambda f(0) p}{2p^2} (nh)^2 \right) g_0

and the block-by-block method is used for n > 2.
Furthermore, if ȳ is chosen so that

    |g(\bar{y}) - g(\infty)| = O(h^{q_0}),  (3.8)

then with n_0 = ȳ/h and

    \Phi_n = \frac{g_n}{g_{n_0}},

the difference (for n = 0 and n > 2 if σ_P² = 0 and σ_R² > 0)

    |\Phi_n - \Phi(y)| = O(h^q),

with q = min{4, q_0}.
Proof. Except for the case with σ_P² = 0 and σ_R² > 0, it follows from results in Chapter 7 of Linz (1985) that the solution satisfies (3.7) provided g is four times continuously differentiable, as is the case here by Theorem 2.4. The case when σ_P² = 0 and σ_R² > 0 needs special consideration at 0 due to the (removable) singularity there. Writing g(y) = α_0 + α_1 y + α_2 y² + O(y³) we shall see below how to identify α_1 and α_2. Clearly α_0 = g(0). Then setting g_i = α_0 + α_1 ih + α_2 (ih)² gives that |g_i − g(ih)| = O(h³) for i = 1, 2. Using g_0, g_1 and g_2 as starting values in the block-by-block method, it follows from e.g. formula (7.20) in Linz (1985) that (3.7) holds for n > 2 in this case as well.
To identify α_1 and α_2, write

    K(y, x) = \frac{2}{\sigma_R^2 y^2} \left( \beta_0 + \beta_1 x + \beta_2 F_2(y - x) - \beta_3 y \right),

where

    \beta_0 = p, \quad \beta_1 = 2r - 3\sigma_R^2 + \lambda, \quad \beta_2 = \lambda, \quad \beta_3 = r - \sigma_R^2 + \lambda.
Approximating F_2(x) by f_2 x² + f_3 x³ (since F_2(0) = F_2'(0) = 0), and calling the resulting K(y, x) by K̃(y, x), we get

    \tilde{g}(y) + \int_0^y \tilde{K}(y, x)\tilde{g}(x)\, dx = \sigma(y) + O(y^2)  (3.9)

where g̃(y) = α_0 + α_1 y + α_2 y². Identifying constant terms as well as terms of first order in y then gives

    \alpha_0 + \frac{2}{\sigma_R^2}\left( \frac{1}{2}\beta_0\alpha_1 + \frac{1}{2}\beta_1\alpha_0 - \beta_3\alpha_0 \right) = 0  (3.10)

and

    \alpha_1 + \frac{2}{\sigma_R^2}\left( \frac{1}{3}\beta_0\alpha_2 + \frac{1}{3}\beta_1\alpha_1 + \frac{1}{3}\beta_2 f_2 \alpha_0 - \frac{1}{2}\beta_3\alpha_1 \right) = 0.  (3.11)

Solving (3.10) for α_1 in terms of α_0 and then inserting this value into (3.11) to find α_2 gives, after inserting the proper values of the β_i's,

    \tilde{g}(y) = \left( 1 + \frac{\lambda}{p} y + \frac{\lambda(\lambda - r) - \lambda f(0) p}{2p^2}\, y^2 \right) g(0) + O(y^3),  (3.12)

where we used that f_2 = ½F_2''(0) = ½f(0) with f the density of F.
Finally,

    |\Phi_n - \Phi(y)| = \left| \frac{g_n}{g_{n_0}} - \frac{g(y)}{g(\infty)} \right| = \left| \frac{g_n}{g(\infty)} \cdot \frac{1}{1 + (g_{n_0} - g(\infty))/g(\infty)} - \frac{g(y)}{g(\infty)} \right| = \frac{1}{g(\infty)} |g_n - g(y)| + O(h^q) = O(h^q)

since

    |g_{n_0} - g(\infty)| \le |g_{n_0} - g(\bar{y})| + |g(\bar{y}) - g(\infty)| = O(h^4) + O(h^{q_0}) = O(h^q).

This ends the proof of the theorem. □
The condition (3.8) is a bit impractical since it only concerns the order of convergence and not the accuracy of the convergence itself. What we really want is that |g(ȳ) − g(∞)| is negligible compared to |g_n − g(y)|. This will be the case if |g(ȳ) − g(∞)| is of the same size as the numerical accuracy of the computed g_n (due to standard truncation errors depending on the number of digits used by FORTRAN, see the discussion in Example 4.1). So what we typically do in practice is to try a ȳ and see if the g_n stabilize as n approaches n_0 = ȳ/h. If they do, ȳ is large enough, but otherwise ȳ has to be increased until stability is obtained. To this end it may be useful to know that for the block-by-block method |g_{2m+2} − g_{2m+1}| = O(h⁴), hence these oscillations will make it slightly more difficult to judge when stability is obtained.
Nevertheless, some asymptotics can be indicative of how large ȳ should be. Note that

    |g(\bar{y}) - g(\infty)| = g(\infty)\psi(\bar{y}),

and using FORTRAN with DOUBLE PRECISION, experience has shown us that the computational accuracy is about 10^{−δ}, where δ will vary between 8 and 10, with δ closer to 8 if n_0 is large, see Section 4. Therefore, if ȳ is chosen so that

    \psi(\bar{y}) \approx 10^{-\delta},  (3.13)

then ȳ should be sufficiently large.
In order to find if a ȳ can be found so that (3.13) is satisfied, the asymptotic properties of ψ(y) can sometimes be useful. A lot is known about these properties, and here is a rough, but for our purpose sufficiently accurate, description of the asymptotics. Again it is as always assumed that r > ½σ_R². By a slowly varying function l we mean that lim_{x→∞} l(tx)/l(x) = 1 for all t > 0.
Assume first that σ_R² = 0 in which case the asymptotics depend strongly on F. If for some c > 0, F̄(y)e^{cy} goes to zero, then ψ(y) goes to zero exponentially fast. In this case ȳ does not have to be very large, but some experimentation along the lines described above is recommended.
Consider now the following 3 heavy tailed cases, which will be sufficient for our purpose.

Case 1. σ_R² = 0 and F̄(y) ~ y^{−α} l(y) for some slowly varying l. Then

    \psi(y) \sim \frac{\lambda}{r\alpha}\, y^{-\alpha} l(y).

Case 2. σ_R² > 0 and lim_{y→∞} y^{α_1} F̄(y) = 0, where α_1 = 2r/σ_R² − 1. Then

    \psi(y) \sim c y^{-\alpha},

where α = α_1 and c is a constant.

Case 3. σ_R² > 0 and F̄(y) ~ y^{−α} l(y) for some slowly varying l, where α < α_1 with α_1 given as in Case 2 above. Then

    \psi(y) \sim \frac{2\lambda}{\sigma_R^2\, \alpha(\alpha_1 - \alpha)}\, y^{-\alpha} l(y).
For more details on these and other cases with σ_R² = 0, see Asmussen (1998), Klüppelberg and Stadtmüller (1998) and the survey paper by Paulsen (1998a). For more information on Cases 2 and 3, see e.g. Frolova et al. (2002), Kalashnikov and Norberg (2002), Paulsen (2002), and Gaier and Grandits (2004).
For all these three heavy tailed cases, ψ(y) ~ cy^{−α}l(y) where α and l(y) are known, while c is known in Cases 1 and 3. There is a formula for c in Case 2 as well, but this is useless in practice. Then (3.13) is satisfied provided

    \bar{y} = (10^{\delta} c\, l(\bar{y}))^{1/\alpha}.  (3.14)

Formula (3.14) is just meant to give a rough idea of the size of ȳ. Checking if the g_n stabilize as n approaches n_0 = ȳ/h is still recommended. When ȳ must be chosen very high in order for |g(ȳ) − g(∞)| to be sufficiently small, a method to avoid the calculation of all the g_n for n = 1, ..., n_0 is proposed in Section 4.3. This method makes use of the above asymptotics.
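Since (3.14) defines ȳ only implicitly through l(ȳ), it can be solved by a short fixed-point iteration. The Python sketch below is illustrative only; in the Pareto case of Section 4, l is constant and a single pass already gives the answer.

```python
def ybar_from_asymptotics(c, alpha, delta, l=lambda y: 1.0, iters=50):
    """Fixed-point iteration for (3.14):
    ybar = (10**delta * c * l(ybar))**(1/alpha), so that the asymptotic
    ruin probability c * ybar**(-alpha) * l(ybar) is roughly 10**(-delta)."""
    y = 1.0
    for _ in range(iters):
        y = (10.0 ** delta * c * l(y)) ** (1.0 / alpha)
    return y
```

For example, with c = 1, α = 2 and δ = 8, the iteration returns ȳ = 10^4, at which the asymptotic ruin probability is indeed about 10^{−8}.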
As mentioned above, for the block-by-block method, |g_{2m+2} − g_{2m+1}| = O(h⁴) as well, so in order to limit the effect of these oscillations, let

    g_{\infty} = \frac{1}{m_0 + 1} \sum_{k=n_0-m_0}^{n_0} g_k,  (3.15)

where again n_0 = ȳ/h. As just discussed, ȳ must be large enough so that |g(ȳ − m_0 h) − g(∞)| is negligible. Then we set

    \Phi_n = \frac{g_n}{g_{\infty}}.  (3.16)
It may well be that analytical expressions for F̄(x) and

    F_2(x) = xF(x) - \int_0^x v f(v)\, dv

do not exist, in which case they must be calculated numerically. Typically this gives values only on grid points x = 0, ½h, h, (3/2)h, ..., yielding numerical values F̃(ih/2) and F̃_2(ih/2). Note from (3.6) that values are required on multiples of h/2. However, interpolating between grid points yields functions F̃ and F̃_2 that are in C^q[0, ∞) for any q ≥ 0 and so that

    \sup_y \left( |\tilde{F}(y) - F(y)| + |\tilde{F}_2(y) - F_2(y)| \right) < C h^m  (3.17)

for some C > 0. Here m is the order of the error when numerically calculating F̃ and F̃_2. If the Simpson rule is used then m = 4. Doing so gives instead of (2.11),

    \tilde{g}(y) + \int_0^y \tilde{K}(y, x)\tilde{g}(x)\, dx = \sigma(y),

where K̃ is the same as K, but with the appropriate of F or F_2 replaced by F̃ or F̃_2. Actually, since Simpson's rule requires an odd number of gridpoints, to calculate, e.g. F(ih/2) we need a gridsize of h/4, since then the numerical recursion of e.g.

    F\left( i\frac{h}{2} \right) = F\left( (i - 1)\frac{h}{2} \right) + \int_{(i-1)h/2}^{ih/2} f(x)\, dx

becomes

    \tilde{F}\left( i\frac{h}{2} \right) = \tilde{F}\left( (i - 1)\frac{h}{2} \right) + \frac{h}{12}\left( f\left( (i - 1)\frac{h}{2} \right) + 4f\left( \left( i - \frac{1}{2} \right)\frac{h}{2} \right) + f\left( i\frac{h}{2} \right) \right).  (3.18)
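The recursion (3.18) is easy to code. The Python sketch below builds F̃ on the h/2-grid from a density f evaluated on the h/4-grid, and is checked here against the exponential distribution — an assumption made for the test only.

```python
import math

def cumulative_F_halfgrid(f, h, n):
    """Recursion (3.18): builds F at 0, h/2, h, 3h/2, ..., n*h/2 from the
    density f, using Simpson's rule on each half-interval (this needs f on
    the h/4-grid, which is why the gridsize h/4 is required)."""
    F = [0.0]
    for i in range(1, n + 1):
        a = (i - 1) * h / 2.0
        b = i * h / 2.0
        # Simpson on [a, b] with midpoint (i - 1/2) * h/2; interval length h/2
        F.append(F[-1] + h / 12.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b)))
    return F
```

Each half-interval contributes a local error of order h⁵, consistent with the overall m = 4 in (3.17).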
Theorem 3.2. In addition to the assumptions of Theorem 3.1, assume that F or F_2 are numerically calculated using Simpson's rule as in (3.18). Let g̃_n be the numerical value of g̃(nh) using the same method as described above for calculating g_n. Then the results of Theorem 3.1 still hold with g_n replaced by g̃_n and Φ_n replaced by

    \tilde{\Phi}_n = \frac{\tilde{g}_n}{\tilde{g}_{n_0}}.
Proof. We have

    |\tilde{g}_n - g(nh)| \le |\tilde{g}_n - \tilde{g}(nh)| + |\tilde{g}(nh) - g(nh)|.

Interpolating the F̃(nh) or the F̃_2(nh) so that they satisfy the assumptions of Theorem 2.4 shows, just as in the above discussion of |g_n − g(nh)|, that |g̃_n − g̃(nh)| = O(h⁴). It therefore remains to prove that

    |\tilde{g}(nh) - g(nh)| = O(h^4).  (3.19)
If σ_P² + σ_R² = 0 or σ_P² > 0, using an interpolation that satisfies (3.17), the result (3.19) follows from Theorem 3.10 in Linz (1985). It therefore only remains to prove (3.19) for the case σ_P² = 0 and σ_R² > 0. Again we use an interpolation that satisfies (3.17) and so that F̃_2 ∈ C⁴[0, ∞). In addition it can be made to satisfy \sup_y |\frac{d^4}{dy^4}\tilde{F}_2(y)| < C_1 for some C_1 > 0. Since |F̃_2(x) − F_2(x)| = O(h⁵) for x ≤ h and F_2(0) = F_2'(0) = 0, it follows that we in this case can assume

    F_2(x) = \sum_{i=2}^{4} f_i x^i + O(h^5), \qquad \tilde{F}_2(x) = \sum_{i=2}^{4} \tilde{f}_i x^i + O(h^5).  (3.20)
Therefore, writing g(y) = \sum_{i=0}^{4} \alpha_i y^i + O(h^5) and \tilde{g}(y) = \sum_{i=0}^{4} \tilde{\alpha}_i y^i + O(h^5), it follows using the same arguments as in (3.9)–(3.12) together with (3.20) that α̃_i = α_i for i ≤ 4. This is because, e.g. α_4 only depends on f_i, i ≤ 4, but by (3.20) for these i, f̃_i = f_i. Therefore,

    |\tilde{g}(y) - g(y)| = O(h^5), \quad 0 \le y \le h.

If we can prove that for any i and y ≤ h,

    |g(ih + y) - g(ih) - (\tilde{g}(ih + y) - \tilde{g}(ih))| = O(h^5),  (3.21)

then summing up gives (3.19).
We will prove (3.21) by induction, so assume it holds for i = 0, 1, ..., k. Then for 0 ≤ y ≤ h, some calculations give

    v_k(y) + \int_0^y K(kh + y, kh + x) v_k(x)\, dx = \rho_k(y),

where

    v_k(y) = g(kh + y) - g(kh)

and

    \rho_k(y) = \sigma(kh + y) - \sigma(kh) - \sum_{i=1}^{k} g((i - 1)h) \int_{(i-1)h}^{ih} (K(kh + y, x) - K(kh, x))\, dx - \sum_{i=1}^{k} \int_{(i-1)h}^{ih} (K(kh + y, x) - K(kh, x))(g(x) - g((i - 1)h))\, dx - g(kh) \int_{kh}^{kh+y} K(kh + y, x)\, dx.

As an example, we show that

    \left| g(kh) \int_{kh}^{kh+y} K(kh + y, x)\, dx - \tilde{g}(kh) \int_{kh}^{kh+y} \tilde{K}(kh + y, x)\, dx \right| = O(h^4).  (3.22)
To show this, note first that by the induction hypothesis |g(kh) − g̃(kh)| = O(h⁴). Furthermore, with c = 2λ/σ_R²,

    \left| \int_{kh}^{kh+y} (K(kh + y, x) - \tilde{K}(kh + y, x))\, dx \right| \le \frac{c}{(kh + y)^2} \left| \int_{kh}^{kh+y} (F_2(kh + y - x) - \tilde{F}_2(kh + y - x))\, dx \right| \le \frac{c}{k^2 h^2} \left| \int_0^y (F_2(y - x) - \tilde{F}_2(y - x))\, dx \right| = \frac{1}{k^2 h^2}\, O(h^6) = \frac{1}{k^2}\, O(h^4).
This proves (3.22). Similar arguments, using that the induction hypothesis also allows us to write |g((i − 1)h) − g̃((i − 1)h)| = (i − 1)O(h⁵), show that in fact for 0 ≤ y ≤ h,

    \rho_k(y) = \tilde{\rho}_k(y) + O(h^4),

where ρ̃_k is the same as ρ_k, but with g and K replaced by g̃ and K̃, respectively. We therefore get, again for 0 ≤ y ≤ h,

    v_k(y) + \int_0^y K(kh + y, kh + x) v_k(x)\, dx = \tilde{\rho}_k(y) + O(y^4)  (3.23)

and by definition

    \tilde{v}_k(y) + \int_0^y \tilde{K}(kh + y, kh + x) \tilde{v}_k(x)\, dx = \tilde{\rho}_k(y).  (3.24)

Comparing (3.23) and (3.24) with (3.9) and (2.11) and using the same arguments as after (3.9), but with two more terms in the expansion of v_k(y) and ṽ_k(y), it follows from (3.20) and the discussion following it that |v_k(y) − ṽ_k(y)| = O(h⁵). This ends the induction step and hence the theorem is proved. □
Remark. In practice we did as in (3.15) and calculated

    \tilde{g}_{\infty} = \frac{1}{m_0 + 1} \sum_{k=n_0-m_0}^{n_0} \tilde{g}_k,

setting

    \tilde{\Phi}_n = \frac{\tilde{g}_n}{\tilde{g}_{\infty}}  (3.25)

in accordance with (3.16).
4. Numerical results
In this section we will report some numerical results obtained using the methods described in Section 3. For a given y, let ψ(y) = 1 − Φ(y) be the ruin probability, and let ψ_h(y) be the calculated ruin probability using (3.16) when a gridsize h is used and the relevant of F̄ or F_2 is calculated analytically. Similarly, using (3.25), let ψ_{h,h_0}(y) be the same as ψ_h(y), but with the relevant of F̄ or F_2 calculated using the Simpson rule with gridsize h_0/4 as described in (3.18). Here h/h_0 must be an integer. In all the calculations of g_∞ or g̃_∞, m_0 = 1000, which is probably unnecessarily high.
When ψ(y) is known, we set

    D_h(y) = 100\, \frac{\psi_h(y) - \psi(y)}{\psi(y)} \qquad \text{and} \qquad D_{h,h_0}(y) = 100\, \frac{\psi_{h,h_0}(y) - \psi(y)}{\psi(y)},

i.e. the percentage relative error.
In the examples two different distributions for S (generic for the S_i) are used. The first is the standard exponential distribution, i.e.

    f(x) = e^{-x}, \quad x > 0,  (4.1)

so that in particular E[S] = 1. The other is the Pareto distribution

    f(x) = \alpha(\alpha - 1)^{\alpha} (\alpha - 1 + x)^{-(1+\alpha)}, \quad x > 0, \ \alpha > 1,  (4.2)

and again E[S] = 1. Furthermore, with this distribution,

    \bar{F}(x) = \left( \frac{\alpha - 1}{\alpha - 1 + x} \right)^{\alpha} \qquad \text{and} \qquad F_2(x) = x - 1 + \left( \frac{\alpha - 1}{\alpha - 1 + x} \right)^{\alpha - 1}.

Note that asymptotically

    \bar{F}(x) \sim (\alpha - 1)^{\alpha} x^{-\alpha} = l(x) x^{-\alpha},

where then l(x) is the constant (α − 1)^α.
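The closed forms of F̄ and F_2 above translate directly into code. The Python sketch below also checks numerically that the Pareto distribution (4.2) has mean 1, by integrating the tail F̄ with a simple trapezoidal rule (the truncation point and grid are illustrative assumptions).

```python
def pareto_fbar(x, a):
    """Tail Fbar(x) = ((a-1)/(a-1+x))**a of the Pareto density (4.2), mean 1."""
    return ((a - 1.0) / (a - 1.0 + x)) ** a

def pareto_F2(x, a):
    """F2(x) = int_0^x F(v) dv = x - 1 + ((a-1)/(a-1+x))**(a-1)."""
    return x - 1.0 + ((a - 1.0) / (a - 1.0 + x)) ** (a - 1.0)

def mean_from_fbar(a, upper=4000.0, n=400000):
    """Trapezoidal check that E[S] = int_0^inf Fbar(x) dx = 1."""
    hh = upper / n
    s = 0.5 * (pareto_fbar(0.0, a) + pareto_fbar(upper, a))
    for i in range(1, n):
        s += pareto_fbar(i * hh, a)
    return s * hh
```

These two functions are exactly what the kernels of Theorem 2.2 need in the Pareto examples; with α ≤ 2 the Case 1 and Case 3 asymptotics above apply with l(x) ≡ (α − 1)^α.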
All the calculations were done on a PC with an AMD Athlon XP 3200+ processor and 1024 MB internal memory. The program language was FORTRAN, and we used DOUBLE PRECISION to get satisfactory accuracy. Of course slower programs like Splus, R, Gauss, Matlab, Maple or Mathematica could have been used, but only at the expense of a considerably longer computing time.
The calculations go to an upper limit ȳ, the choice of which is discussed in the examples. Hence n_0 = ȳ/h values g_i are required, and the number of elementary calculations for these grows like n_0². It turned out that the computing time for n_0 = 10,000 is 4 s (yes, it is really fast), hence for n_0 = 100,000 it is about 7 min and for n_0 equal to 1 million it is about 11 h.
On the other hand, the calculation of the F̄(ih) or the F_2(ih) is only linear in the number required, hence letting, e.g. h_0 = h/10 when using the Simpson rule really does not add much to the computing time.
4.1. The case without diffusion
Dickson and Waters (1999) presented several numerical methods for this case, but without any effort to analyze
the error rate. Of all the methods they discuss, the one that is closest to that used in this paper is a method proposed
by Sundt and Teugels (1995). They showed numerically that it performs very well, although it is clear from the
algorithm of this method that its error is of order no higher than 2. On the other hand, the Sundt and Teugels method
has the advantage that it provides an upper and lower bound, which of course is useful unless these bounds are too
far from each other.
The following example shows that the block-by-block method performs extremely well.
Table 1
Ruin probabilities and relative errors

y    ψ(y)            ψ_{0.1}(y)      D_{0.1}(y)   ψ_{0.01}(y)     D_{0.01}(y)
0    0.7909540044    0.7909543928    0.0000       0.7909540044    0.0000
5    0.1776111024    0.1776112352    0.0001       0.1776111048    0.0000
10   0.0241449177    0.0241449299    0.0001       0.0241449168    0.0000
15   0.0022199914    0.0022199735    0.0008       0.0022199906    0.0000
20   0.0001502219    0.0001502021    0.0132       0.0001502171    0.0032
25   0.0000079593    0.0000079395    0.2484       0.0000079565    0.0356
30   0.0000003456    0.0000003237    6.3518       0.0000003399    1.6564
35   0.0000000127    0.0000000013    89.4923      0.0000000130    1.8223
40   0.0000000004    0.0000000004    203.0081     0.0000000103    2419.8958

Table 2
Relative error for various values of h

y    D_{1}(y)   D_{0.5}(y)   D_{0.1}(y)   D_{0.05}(y)   D_{0.01}(y)   D_{0.005}(y)
0    0.0026     0.0002       0.0000       0.0000        0.0000        0.0000
5    1.1428     0.0140       0.0001       0.0000        0.0000        0.0000
10   4.5876     0.1513       0.0001       0.0000        0.0000        0.0000
15   25.556     1.9577       0.0008       0.0000        0.0000        0.0001
20   657.32     26.432       0.0132       0.0035        0.0032        0.0043
25   5346       446.41       0.2484       0.0311        0.0356        0.0097
30   227627     9273         6.3518       1.8852        1.6564        2.0665
35   2759602    229115       89.492       25.475        1.8223        9.4278
Example 4.1. In Tables 1-4, the values p = 1.1, λ = 1, r = 0.05 are used, and in addition S is assumed to be exponentially distributed with expectation 1 as in (4.1). For this model the true ruin probability was found already by Segerdahl (1942) and has later reappeared in many papers. This enables us to assess the quality of the numerical method, just as was done by Dickson and Waters (1999). The exact ruin probability is calculated to a high degree of accuracy from Segerdahl's formula using the Splus program integrate.

It is seen from Table 1 that the results are extremely good. Letting h = 0.1 gives accurate results down to true values of about 10^{-6}, and when h = 0.01 they go all the way down to about 10^{-8}. For lower ruin probabilities than that, i.e. when y = 40, so that ψ(y) = 0.4 × 10^{-10}, the number of digits used by FORTRAN is not sufficient, and the result becomes poor. For some reason the relative error with h = 0.01 is about the same when y = 30 as when y = 35, which is a bit strange. Trying various values of ȳ did not change this unreasonable result. In the tables, ȳ = 100 was used.
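In our reading, the D_h(y) columns are percentage relative errors, D_h(y) = 100 |ψ_h(y) - ψ(y)|/ψ(y). The small Python helper below (our notation, not from the paper) illustrates this on the rounded y = 25 entries of Table 1; since the published D-values were presumably computed from unrounded quantities, the last digits differ slightly.

```python
def rel_err_pct(approx, exact):
    # Percentage relative error, as in the D_h(y) columns
    return abs(approx - exact) / exact * 100.0

# y = 25 row of Table 1: psi_{0.1}(25) against the exact psi(25)
d = rel_err_pct(0.0000079395, 0.0000079593)
print(round(d, 3))  # about 0.249, vs the published 0.2484
```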
Table 2 shows the relative percentage error for various stepsizes h. We see that there is a steady gain as h decreases from h = 1 down to h = 0.05. After that the picture is somewhat less clear, and in particular when going from h = 0.01 to 0.005 the error increases for most y-values. The reason for this is probably that, due to the limited number of digits used by FORTRAN, the gain from decreasing the stepsize is offset by the loss of having to calculate twice as many g_i for the same y-value.

Table 3
Comparing exact values against those obtained using Simpson's rule for the claim distribution

y    D_{0.1}(y)   D_{0.1,0.1}(y)   D_{0.1,0.01}(y)   D_{0.01}(y)   D_{0.01,0.01}(y)
0    0.0000       0.0000           0.0000            0.0000        0.0000
5    0.0001       0.0000           0.0001            0.0000        0.0000
10   0.0001       0.0007           0.0001            0.0000        0.0000
15   0.0008       0.0083           0.0008            0.0000        0.0000
20   0.0132       0.1177           0.0132            0.0032        0.0032
25   0.2484       2.0262           0.2485            0.0356        0.0357
30   6.3518       44.870           6.3528            1.6564        1.6571
35   89.492       1177             89.510            1.8223        1.8049

Table 4
Comparing single and double precision

y    D_{0.1}(y)   D^{SP}_{0.1}(y)   D^{SP}_{0.1,0.1}(y)   D^{SP}_{0.1,0.01}(y)   D_{0.01}(y)   D^{SP}_{0.01}(y)   D^{SP}_{0.01,0.01}(y)
0    0.0000       0.0000            0.0000                0.0001                 0.0000        0.0000             0.0000
5    0.0001       0.0006            0.0213                0.1188                 0.0000        0.0002             0.0601
10   0.0001       0.0046            0.1834                1.0531                 0.0000        0.0003             0.5427
15   0.0008       0.0537            1.9844                11.820                 0.0000        0.0090             6.1604
20   0.0132       0.8604            28.126                168.55                 0.0032        0.1346             84.710
25   0.2484       16.471            504.87                3011                   0.0356        44.194             1433
30   6.3518       390.21            11031                 65837                  1.6564        682.83             29513
35   89.492       11642             285622                1702136                1.8223        195296             707495

In Table 3, the effect of using the Simpson rule to calculate F̄(y) instead of using the exact value is considered. We see that with h_0 = 0.01 there is virtually no effect, which is really good news. When h_0 is only 0.1 (and h = 0.1), there is some loss in quality compared to when using exact values, but as mentioned in the introduction to this section, decreasing h_0 from h to h/10 does not increase the computing time by much.

Initially we used the single precision REAL in the FORTRAN programs, but the results were not quite satisfactory, as we can see from Table 4. Here the superscript SP indicates that single precision is used. It is seen that with single precision, accurate results are only obtained for ψ(y) as large as 10^{-4}, and furthermore that using Simpson's rule to calculate F̄(y) resulted in considerably poorer results, even when h_0 = 0.01. Another consequence of using single precision is that the calculated ruin probabilities depend much more on the choice of ȳ, due to fairly large oscillations in the g_n.
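The loss of accuracy in single precision is a plain floating-point effect, which can be imitated in Python by rounding intermediate results to IEEE single precision (the illustration below is ours, and the g(∞) value is a made-up stand-in). Since ψ(y) = 1 - g(y)/g(∞) is a difference of nearly equal numbers, a relative error of order 10^{-7} in the stored g's becomes an absolute error of comparable size in ψ, consistent with accurate single-precision results only down to ψ(y) around 10^{-4}.

```python
import struct

def f32(x):
    # Round a Python float (double) to IEEE single precision,
    # as FORTRAN's REAL would store it
    return struct.unpack('f', struct.pack('f', x))[0]

g_inf = 1.2345678901234567  # made-up stand-in for a converged g(infinity)
results = {}
for psi_true in (1e-2, 1e-4, 1e-6):
    g_y = g_inf * (1.0 - psi_true)  # g(y) consistent with the chosen psi
    # psi recomputed from single-precision g's
    results[psi_true] = 1.0 - f32(g_y) / f32(g_inf)
for t, v in results.items():
    print(t, v)  # errors of order 1e-8 to 1e-7 appear, comparable to psi = 1e-6
```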
4.2. The case with diffusion
In all the examples in this subsection, the values p = 1.1, λ = 1, r = 0.1 and σ_R = 0.2 are fixed. This gives

κ_1 = 2r/σ_R^2 - 1 = 4.
Example 4.2. In this example S is exponentially distributed with expectation 1 and σ_P = 0.2. We see from Table 5 that except for the case ψ_{0.1,0.1}(y), the results are for any practical purpose identical. This is Case 2 discussed after Theorem 2.1. Since c is unknown, formula (3.14) is of limited use, but letting c = 1 and δ = 10 gives ȳ = 10^{2.5} ≈ 316. In Table 5, ȳ = 1000 was chosen. A little experimentation showed that g_∞ changed almost nothing whenever ȳ was at least 600, giving confidence in the results. This is in contrast to using single precision, where, as mentioned at the end of Example 4.1, the calculated g_∞ would oscillate quite markedly with ȳ.
Table 5
Calculated ruin probabilities with exponentially distributed jumps and σ_P = 0.2

y     ψ_{0.1}(y)   ψ_{0.1,0.1}(y)   ψ_{0.1,0.01}(y)   ψ_{0.01}(y)   ψ_{0.01,0.01}(y)
5     0.16875282   0.16875271       0.16875282        0.16875159    0.16875159
10    0.03804299   0.03804287       0.03804299        0.03804278    0.03804278
20    0.00390649   0.00390639       0.00390649        0.00390644    0.00390644
50    0.00010944   0.00010936       0.00010944        0.00010946    0.00010946
100   0.00000676   0.00000670       0.00000676        0.00000676    0.00000676
200   0.00000042   0.00000038       0.00000042        0.00000041    0.00000041
300   0.00000008   0.00000005       0.00000008        0.00000008    0.00000008
Table 6
Calculated ruin probabilities with Pareto distributed jumps with α = 2 and σ_P = 0.2

y      ψ_{0.5}(y)   ψ_{0.5,0.05}(y)   ψ_{0.5,0.01}(y)   ψ_{0.2}(y)   ψ_{0.1}(y)
10     0.09922014   0.09921874        0.09922014        0.09887592   0.09886689
50     0.00559161   0.00559033        0.00559161        0.00557826   0.00557795
200    0.00032490   0.00032388        0.00032489        0.00032409   0.00032407
500    0.00005079   0.00004995        0.00005078        0.00005066   0.00005065
1000   0.00001260   0.00001191        0.00001259        0.00001256   0.00001256
2000   0.00000313   0.00000258        0.00000313        0.00000312   0.00000312
5000   0.00000049   0.00000013        0.00000049        0.00000049   0.00000049
Example 4.3. In this example S has the distribution (4.2) with α = 2, while σ_P = 0.2 as in Example 4.2. This is Case 3 in Section 3, and (3.14) gives ȳ = (10^{δ} × 12.5)^{1/2}. With δ = 10 this gives ȳ ≈ 353,000. This is very high, but the more reasonable δ = 8 (more reasonable since so many g_n have to be calculated, increasing errors due to the limited number of digits used by FORTRAN) gives ȳ ≈ 35,300. Indeed, some experimentation with h = 0.5 showed that ȳ = 30,000 is sufficient, and consequently that is the value we have used in Table 6. In fact, for all heavy tailed ruin probabilities it turned out that in order to experimentally find a sufficiently large ȳ, a fairly large value of h was just as good as a small one, hence a good ȳ could be obtained rather fast except in the exceptionally heavy tailed cases; see Example 4.8 below.

Again the results were stable w.r.t. changes in ȳ as long as ȳ was at least 30,000. It is seen from the table that ψ_{0.2}(y) and ψ_{0.1}(y) are more or less exactly the same, and even ψ_{0.5}(y) does not differ much. The fact that heavy tailed ruin probabilities allowed for bigger h was another positive lesson learned from the numerical work. In the table, the values for ψ_{0.2,0.01}(y) and ψ_{0.1,0.01}(y) have been omitted; the reason is that they were exactly equal to ψ_{0.2}(y) and ψ_{0.1}(y), respectively.
Example 4.4. Here S has the distribution (4.2) with α = 3 and σ_P = 0, so the quadratic approximation given in Theorem 3.1 to get g_1 and g_2 was used. Here (3.14) gives ȳ = (10^{δ} × 50 × 8/3)^{1/3}. With δ = 10 this gives ȳ ≈ 11,006, while δ = 8 gives ȳ ≈ 2371. The results with ȳ = 5000 are reported in Table 7, and again they are stable and reasonable when compared to results obtained for higher ȳ values. In the last column, the ψ̂_{0.1}(y) are the calculated ruin probabilities with the same parameters except that σ_P has been changed to σ_P = 0.01, and the standard block-by-block method without the quadratic approximation to get g_1 and g_2 was used. This column then serves as a control of the quadratic approximation, and we see that except for the obvious case y = 0, the two methods give virtually the same result. Ruin probabilities with σ_P = 0.01 are always the highest, as they should be. In Table 8, ruin probabilities for the two cases σ_P = 0 and 0.01 are compared for small y-values, and it is seen that the effect of changing σ_P from 0 to 0.01 dies out rapidly.
Table 7
Calculated ruin probabilities with Pareto distributed jumps with α = 3 and σ_P = 0

y      ψ_{0.5}(y)   ψ_{0.1}(y)   ψ_{0.1,0.01}(y)   ψ_{0.02}(y)   ψ̂_{0.1}(y)
0      0.73416167   0.73421098   0.73421098        0.73421112    1.00000000
20     0.01657526   0.01652787   0.01652787        0.01652783    0.01652832
50     0.00124534   0.00123814   0.00123814        0.00123814    0.00123818
100    0.00015034   0.00014858   0.00014858        0.00014858    0.00014859
200    0.00001816   0.00001772   0.00001772        0.00001772    0.00001772
500    0.00000116   0.00000109   0.00000109        0.00000109    0.00000109
1000   0.00000015   0.00000013   0.00000013        0.00000013    0.00000013
Table 8
Comparison between σ_P = 0 and 0.01 for small y-values

             y = 0        y = 1        y = 2        y = 3        y = 5
ψ_{0.1}(y)   0.73421098   0.54716895   0.42102326   0.32878118   0.20686509
ψ̂_{0.1}(y)   1.00000000   0.58155668   0.42106264   0.32880088   0.20687433
4.3. Using asymptotics for very small and very heavy tailed ruin probabilities
In the examples so far we have only made limited use of the asymptotic results stated in Section 3. We saw in Example 4.1 that the number of digits used by FORTRAN is not sufficient to get ruin probabilities much below 10^{-8} (which probably is enough anyway). Also, if the ruin probability is very heavy tailed, it may be difficult to compute g_∞.

In this section we shall assume that ψ(y) ~ cy^{-κ}l(y), where l is a slowly varying function and c is a constant. This asymptotics is discussed in Section 3, where c and l are explained. In particular l is known, since F̄(y) ~ l(y)y^{-κ}. This case is numerically the most challenging, since otherwise the ruin probabilities go sufficiently fast to zero for the methods already discussed to work well.
We start with the problem of computing very small ruin probabilities. For y_1 and y sufficiently large,

ψ(y) ≈ (y/y_1)^{-κ} (l(y)/l(y_1)) ψ(y_1). (4.3)

We therefore define

ψ̃_{h,y_1}(y) = (y/y_1)^{-κ} (l(y)/l(y_1)) ψ_h(y_1), y ≥ y_1, (4.4)

where in particular ψ̃_{h,y_1}(y_1) = ψ_h(y_1).
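Formula (4.4) needs only one computed value ψ_h(y_1). The Python sketch below (our names, not the paper's code) reproduces, for instance, ψ̃_{0.1,20}(500) of Table 9 from ψ_{0.1}(20) = 0.00390649, with κ = 4 and constant l as in Example 4.2.

```python
def psi_tilde(y, y1, psi_h_y1, kappa, l=lambda y: 1.0):
    # Asymptotic extrapolation (4.4):
    # psi(y) is approximated by (y/y1)^(-kappa) * (l(y)/l(y1)) * psi_h(y1)
    return (y / y1) ** (-kappa) * (l(y) / l(y1)) * psi_h_y1

v = psi_tilde(500.0, 20.0, 0.00390649, 4.0)
print(v)  # about 1.000e-8, the Table 9 entry for psi-tilde_{0.1,20}(500)
```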
Example 4.5. This is the same as Example 4.2, and the results are reported in Table 9. The column ψ_{0.1}(y) is taken from Table 5, and the ψ̃_{0.1,y_1}(y) are calculated according to (4.4), using that here κ = 4 and l(y) is constant. Comparing ψ_{0.1}(y) with the ψ̃_{0.1,y_1}(y) for different y and y_1, we see that the asymptotics (4.3) works very well, particularly when y_1 ≥ 50. Going below the horizontal line to higher y-values, using e.g. ψ̃_{0.1,300}(2000) = 4.20e-11 = 4.20 × 10^{-11} seems to be a good approximation to ψ(2000).
Table 9
Comparison of exact and asymptotically calculated ruin probabilities

y      ψ_{0.1}(y)   ψ̃_{0.1,20}(y)   ψ̃_{0.1,50}(y)   ψ̃_{0.1,100}(y)   ψ̃_{0.1,200}(y)   ψ̃_{0.1,300}(y)
20     0.00390649   0.00390649
50     0.00010944   0.00010000      0.00010944
100    0.00000676   0.00000625      0.00000680      0.00000676
200    0.00000042   0.00000039      0.00000043      0.00000042       0.00000042
300    0.00000008   0.00000008      0.00000008      0.00000008       0.00000008       0.00000008
500                 1.000e-8        1.094e-8        1.081e-8         1.076e-8         1.037e-8
1000                6.25e-10        6.84e-10        6.76e-10         6.70e-10         6.72e-10
2000                3.90e-11        4.27e-11        4.22e-11         4.19e-11         4.20e-11
Table 10
Estimated κ-values from Example 4.2

(y_1, y_2)          (5,10)   (10,20)   (20,50)   (50,100)   (100,200)   (200,300)
κ_{0.1}(y_1, y_2)   2.149    3.284     3.902     4.017      4.011       3.993

Table 11
Estimated κ-values from Example 4.4

(y_1, y_2)          (20,50)   (50,100)   (100,200)   (200,500)   (500,1000)
κ_{0.1}(y_1, y_2)   2.828     3.059      3.068       3.039       3.019
Another, but of course related, way to assess whether the asymptotics (4.4) will work is to compare the true κ with an estimated value. To be more specific, note that (4.3) gives

κ ≈ [ln(ψ(y_1)/ψ(y)) + ln(l(y)/l(y_1))] / ln(y/y_1).

Therefore, for y_1 < y_2 we define

κ_h(y_1, y_2) = [ln(ψ_h(y_1)/ψ_h(y_2)) + ln(l(y_2)/l(y_1))] / ln(y_2/y_1). (4.5)

Whenever y_1 is large enough so that κ_h(y_1, y_2) is close to the true κ, approximating ψ(y) with ψ̃_{h,y_2}(y) will be quite precise for y > y_2 and sufficiently small h.
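Estimator (4.5) can be checked on an exact power law: with constant l and ψ(y) = c·y^{-κ}, it returns κ regardless of c. A minimal Python sketch (our names):

```python
import math

def kappa_hat(y1, y2, psi1, psi2, l=lambda y: 1.0):
    # Estimator (4.5) for the tail index kappa from two psi values
    return (math.log(psi1 / psi2) + math.log(l(y2) / l(y1))) / math.log(y2 / y1)

c, kappa = 3.7, 4.0
psi = lambda y: c * y ** (-kappa)  # synthetic exact power law
k = kappa_hat(50.0, 100.0, psi(50.0), psi(100.0))
print(k)  # recovers kappa = 4, independently of c
```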
Example 4.6. Table 10 contains the κ_{0.1}(y_1, y_2) calculated from Example 4.2, and Table 11 comes from Example 4.4. In Table 10, κ = 4, while in Table 11, κ = 3. In both cases l(y) is a constant. We see that the estimated values become close to the true κ-values whenever y_1 and y_2 are large enough.

Table 12 contains the estimated κ_h(y_1, y_2) from the fairly heavy tailed model in Example 4.3, where κ = 2. We see that for y_1 moderately large the estimates are again good. The only surprise is that the error in κ_h(2000, 5000) is larger than in κ_h(1000, 2000), say.

Another interesting and useful observation is that the κ_h(y_1, y_2) are basically unchanged when h goes from 0.2 to 1, as opposed to the ψ_h(y), which vary much more.
Finally we shall look more into the case with very heavy tailed ruin probabilities. Although we may be interested in ψ(y) for small or moderately large values of y, the formula ψ(y) = 1 - g(y)/g(∞) seems to demand the calculation of g(x) for large values of x in order to have a good approximation of g(∞). However, (4.3) gives for y_1 and y_2 sufficiently large

[1 - g(y_2)/g(∞)] / [1 - g(y_1)/g(∞)] = ψ(y_2)/ψ(y_1) ≈ (y_2/y_1)^{-κ} l(y_2)/l(y_1).
Table 12
Estimated κ-values from Example 4.3

(y_1, y_2)          (10,50)   (50,200)   (200,500)   (500,1000)   (1000,2000)   (2000,5000)
κ_{1.0}(y_1, y_2)   1.793     2.052      2.025       2.011        2.007         2.016
κ_{0.2}(y_1, y_2)   1.786     2.053      2.025       2.011        2.008         2.015
Table 13
Comparison of standard calculated values, calculated values using a low upper limit and adjusted calculated values

y      ψ_{0.1}(y)   ψ_{0.1,2200}(y)   ψ̃_{0.1,1000,2000}(y)
10     0.09886689   0.09886446        0.09886690
50     0.00557795   0.00557527        0.00557797
200    0.00032407   0.00032138        0.00032408
500    0.00005065   0.00004796        0.00005067
1000   0.00001256   0.00000986        0.00001257
2000   0.00000312   0.00000042        0.00000313
This gives

g(∞) ≈ [g(y_2) - (y_1/y_2)^{κ} (l(y_2)/l(y_1)) g(y_1)] / [1 - (y_1/y_2)^{κ} (l(y_2)/l(y_1))].

So with y = nh, define

ψ̃_{h,y_1,y_2}(y) = 1 - g_n/g_{∞,y_1,y_2}, (4.6)

where

g_{∞,y_1,y_2} = [g_{n_2} - (n_1/n_2)^{κ} (l(n_2 h)/l(n_1 h)) g_{n_1}] / [1 - (n_1/n_2)^{κ} (l(n_2 h)/l(n_1 h))]

with n_1 h = y_1 and n_2 h = y_2.
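The adjustment behind (4.6) can likewise be checked on a synthetic g with an exact power-law approach to its limit: if g(y) = g(∞)(1 - c·y^{-κ}) with constant l, the formula recovers g(∞) exactly. A Python sketch under that assumption (our names):

```python
def g_inf_adjusted(g1, g2, y1, y2, kappa, l=lambda y: 1.0):
    # Estimate g(infinity) from g(y1), g(y2), using the power-law tail of psi
    rho = (y1 / y2) ** kappa * (l(y2) / l(y1))
    return (g2 - rho * g1) / (1.0 - rho)

g_inf, c, kappa = 2.0, 5.0, 2.0
g = lambda y: g_inf * (1.0 - c * y ** (-kappa))  # synthetic g with known limit
est = g_inf_adjusted(g(1000.0), g(2000.0), 1000.0, 2000.0, kappa)
print(est)  # recovers g_inf = 2.0
```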
We have used the notation ψ_h(y) for the numerical approximation to ψ(y) when the upper limit ȳ is large enough so that ψ(ȳ - m_0 h) is practically equal to zero. When this is not the case, write

ψ_{h,ȳ}(y) = 1 - g_n/g_{∞,ȳ},

where y = nh and g_{∞,ȳ} is as in (3.15), but now ȳ is not large enough for ψ(ȳ - m_0 h) to equal zero. This implies in particular that ψ_{h,ȳ}(y) < ψ_h(y). Also, if we let κ_{h,ȳ}(y_1, y_2) be as in (4.5), but with the ψ_h(y_i) replaced by the ψ_{h,ȳ}(y_i), then it is easy to show that

κ_{h,ȳ}(y_1, y_2) > κ_h(y_1, y_2) (4.7)

whenever l(y) is a constant.
Example 4.7. This is Example 4.3 again. In addition to the ψ_{0.1}(y) taken from Table 6, Table 13 also contains the ψ_{0.1,2200}(y) and the ψ̃_{0.1,1000,2000}(y). As could be expected, setting ȳ = 2200 results in ψ_{0.1,2200}(y) values too far from ψ(y) as y approaches 2000. However, using the adjusted value (4.6) with y_1 = 1000 and y_2 = 2000 gives almost perfect results, so it is not necessary to go all the way to ȳ = 30,000 or even higher as was done in Example 4.3.
Example 4.8. In this example p = 1.1, λ = 1, S has the distribution (4.2) with α = 2, r = 0.03 and σ_P = σ_R = 0.2. This is then Case 2 in Section 3 with κ_1 = 0.5, so that asymptotically ψ(y) ~ cy^{-1/2} for some c > 0, indeed a heavy tailed ruin probability. Using (3.14) with δ = 8 and the unknown c set equal to 1 gives ȳ = 10^{16}. Therefore, it is computationally impossible to numerically calculate ψ_h(y) for any reasonably small h (using an increasing stepsize with y might have worked), so it is necessary to make use of the approximation (4.6). In order to find y_1, y_2 so that
Table 14
Estimated κ-values from Example 4.8

(y_1, y_2)             (1000,4000)   (4000,10000)   (10000,20000)   (20000,50000)
κ_{10,1e7}(y_1, y_2)   0.5251        0.5441         0.5683          0.6102

Table 15
Estimated ruin probabilities from Example 4.8

y        ψ_{2,1.2e7}(y)   ψ̃_{2,5e6,1e7}(y)   ψ_{0.2,52000}(y)   ψ̃_{0.2,20000,50000}(y)
500      0.078541         0.080043            0.072357           0.079586
1000     0.054997         0.056538            0.048801           0.056214
4000     0.026654         0.028241            0.020445           0.028078
20000    0.011013         0.012626            0.004798           0.012553
50000    0.006364         0.007985            0.000147           0.007939
10^{5}   0.004022         0.005646                               0.005614
10^{6}   0.000155         0.001785                               0.001775
10^{7}                    0.000564                               0.000561
10^{8}                    0.000178                               0.000177
g_{∞,y_1,y_2} ≈ g(∞), we calculated κ_{10,1e7}(y_1, y_2) (where 1e7 = 10^{7}), and the result is given in Table 14. It may be objected that h = 10 is a very large stepsize, but in Table 12 we saw that the estimated κ do not depend much on the stepsize; this was also confirmed here by trying various stepsizes. From Table 14 we see that κ_{10,1e7}(y_1, y_2) increases away from κ = 0.5 as y_1, y_2 increase, but by (4.7) this is to be expected. Therefore, we conclude that y_1 = 20,000 is sufficiently large for the asymptotics to be accurate. In Table 15 results are reported for the two cases h = 2 and 0.2. For large y-values the numbers have been extrapolated as in (4.4), so in particular

ψ̃_{2,5e6,1e7}(y) = (y/10^{7})^{-0.5} ψ̃_{2,5e6,1e7}(10^{7}), y ≥ 10^{7},

and

ψ̃_{0.2,20000,50000}(y) = (y/50000)^{-0.5} ψ̃_{0.2,20000,50000}(50000), y ≥ 50000.

It is seen that the ψ̃ are almost the same in both cases, although both h and (y_1, y_2) differ. The case with h = 2 gives the highest estimated ruin probabilities, but from, e.g. Table 6 we find that this is natural. So in conclusion we feel confident that the approximation ψ̃_{0.2,20000,50000}(y) is sufficiently accurate.
Acknowledgments

This work was supported by the NUFU project 33/02. We also thank an anonymous referee for several useful suggestions.
References

Asmussen, S., 1998. Subexponential asymptotics for stochastic processes: extremal behaviour, stationary distributions and first passage probabilities. Ann. Appl. Prob. 8, 354–374.
Asmussen, S., Binswanger, K., 1997. Simulation of ruin probabilities for subexponential claims. ASTIN Bull. 27, 297–318.
Asmussen, S., Nielsen, H.M., 1995. Ruin probabilities via local adjustment coefficients. J. Appl. Prob. 32, 736–755.
Brekelmans, R., De Waegenaere, A., 2001. Approximating the finite-time ruin probability under interest force. Insur.: Math. Econ. 29, 217–229.
Cardoso, R.M.R., Waters, H.R., 2003. Recursive calculation of finite time ruin probabilities under interest force. Insur.: Math. Econ. 33, 659–676.
Davies, B., 2002. Integral Transforms and Their Applications, third ed. Texts in Applied Mathematics, vol. 41. Springer, New York.
De Vylder, F., 1996. Advanced Risk Theory. Editions de l'Université de Bruxelles.
Dickson, D.C.M., Waters, H.R., 1999. Ruin probabilities with compounding assets. Insur.: Math. Econ. 25, 49–62.
Frolova, A.G., Kabanov, Y.M., Pergamenshchikov, S.M., 2002. In the insurance business risky investments are dangerous. Finance Stochastics 6, 227–235.
Gaier, J., Grandits, P., 2004. Ruin probabilities and investment under interest force in the presence of regularly varying tails. Scand. Actuarial J. (4), 256–278.
Kalashnikov, V., Norberg, R., 2002. Power tailed ruin probabilities in the presence of risky investments. Stochastic Processes Appl. 98, 211–228.
Klüppelberg, C., Stadtmüller, U., 1998. Ruin probabilities in the presence of heavy-tails and interest rates. Scand. Actuarial J. (1), 49–58.
Lehtonen, T., Nyrhinen, H., 1992. Simulating level-crossing probabilities by importance sampling. Adv. Appl. Prob. 24, 858–874.
Linz, P., 1985. Analytical and Numerical Methods for Volterra Equations. SIAM Studies in Applied Mathematics, Philadelphia.
Paulsen, J., 1993. Risk theory in a stochastic economic environment. Stochastic Processes Appl. 46, 327–361.
Paulsen, J., 1998. Ruin theory with compounding assets - a survey. Insur.: Math. Econ. 22, 3–16.
Paulsen, J., 1998. Sharp conditions for certain ruin in a risk process with stochastic return on investments. Stochastic Processes Appl. 75, 135–148.
Paulsen, J., 2002. On Cramér-like asymptotics for risk processes with stochastic return on investments. Ann. Appl. Prob. 12, 1247–1260.
Paulsen, J., Gjessing, H.K., 1997. Ruin theory with stochastic return on investments. Adv. Appl. Prob. 29, 965–985.
Paulsen, J., Rasmussen, B.N., 2003. Simulating ruin probabilities for a class of semimartingales by importance sampling methods. Scand. Actuarial J. 178–216.
Ramsay, C., Usabel, M., 1997. Calculating ruin probabilities via product integration. ASTIN Bull. 27, 263–272.
Segerdahl, C.O., 1942. Über einige risikotheoretische Fragestellungen. Skandinavisk Aktuarietidskrift 25, 43–83.
Sundt, B., Teugels, J., 1995. Ruin estimates under interest force. Insur.: Math. Econ. 16, 7–22.
Widder, D.V., 1971. An Introduction to Transform Theory. Academic Press, New York.