
Adv. Appl. Prob. 36, 455–470 (2004)
Printed in Northern Ireland
© Applied Probability Trust 2004

POISSON APPROXIMATION FOR SOME POINT PROCESSES IN RELIABILITY

JEAN-BERNARD GRAVEREAUX* and
JAMES LEDOUX,* ** INSA and IRMAR, Rennes

Abstract

In this paper, we consider a failure point process related to the Markovian arrival process defined by Neuts. We show that it converges in distribution to a homogeneous Poisson process. This convergence takes place in the context of rare occurrences of failures. We also provide a rate for the convergence in total variation of this point process, using an approach developed by Kabanov, Liptser and Shiryaev for the doubly stochastic Poisson process driven by a finite Markov process.

Keywords: Compensator; software reliability; Markovian arrival process; doubly stochastic process

2000 Mathematics Subject Classification: Primary 60G55; 60J25
Secondary 60J75; 90B25; 93E11

1. Introduction

This work originates in Littlewood's papers [12] and [13] on a Markov-type model for reliability assessment of modular software. Basically, for a piece of software with a finite number of modules:

1. the structure of the software is represented by a finite continuous-time Markov chain (CTMC) $(X_t)_t$, where $X_t$ is the active module at time $t$;

2. when module $i$ is active, failure times are part of a homogeneous Poisson process (HPP) with intensity $\mu(i)$;

3. when control switches from module $i$ to module $j$, a failure may happen with probability $\mu(i, j)$;

4. when any failure appears, it does not affect the software because the execution is assumed to be restarted instantaneously. Such an event is referred to as a secondary failure in [9]. (A simulation sketch of items 1–4 is given below.)
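To make the dynamics concrete, the following minimal simulation sketch (ours, not part of the original paper) implements items 1–4 for a hypothetical two-module system; the generator `Q`, the rates `mu` and the switch-failure probabilities `mu_ij` are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-module example (not from the paper).
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])      # generator of the execution CTMC (X_t)
mu = np.array([0.05, 0.02])      # item 2: failure intensity mu(i) while module i runs
mu_ij = np.array([[0.0, 0.01],
                  [0.02, 0.0]])  # item 3: failure probability mu(i, j) on a switch i -> j

def simulate(t_end, x0=0):
    """Return the failure times of Littlewood's model on [0, t_end]."""
    t, x, failures = 0.0, x0, []
    while t < t_end:
        sojourn = rng.exponential(1.0 / -Q[x, x])        # holding time in module x
        window = min(sojourn, t_end - t)
        # item 2: failures during the sojourn form an HPP with intensity mu(x)
        n = rng.poisson(mu[x] * window)
        failures += list(t + rng.uniform(0.0, window, n))
        t += sojourn
        if t >= t_end:
            break
        p = Q[x].copy(); p[x] = 0.0; p /= p.sum()        # embedded jump chain
        y = rng.choice(len(p), p=p)
        if rng.random() < mu_ij[x, y]:                   # item 3: switch failure
            failures.append(t)
        x = y                                            # item 4: execution goes on
    return sorted(failures)

print(len(simulate(10_000.0)), "failures over a horizon of 10000")
```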

An extension of such a model was considered in [9], taking into account the influence of failures on the execution dynamics of the software and dealing with the delays in recovering an operational state. Transient analysis was provided by means of results from [10]. Roughly speaking, the failure point process was a Markovian arrival process (MAP) as defined by Neuts (see e.g. [14]). It is well known that we then obtain as particular instances of our failure process a phase-type renewal process, a doubly stochastic Poisson process with a stochastic intensity driven by a CTMC (also called a Markov-modulated Poisson process (MMPP) in the queueing literature), etc.

Received 28 February 2002; revision received 27 October 2003.
* Postal address: Centre de Mathématiques, INSA, 20 avenue des Buttes de Coësmes, 35043 Rennes Cedex, France.
** Email address: james.ledoux@insa-rennes.fr

An important issue in reliability theory, specifically for software systems, is what happens when the failure parameters become smaller and smaller. Littlewood stated in [12] (or in [13] for a semi-Markov process $(X_t)_t$) that, as all failure parameters $\mu(i)$, $\mu(i, j)$ tend to zero, the failure process described above is asymptotically an HPP with intensity

$$\lambda = \sum_{i \in \mathcal{M}} \pi(i)\mu(i) + \sum_{i \neq j} \pi(i) Q(i, j) \mu(i, j), \tag{1}$$

where $\pi$ is the stationary distribution and $Q$ the generator of the CTMC $(X_t)_t$ (assumed to be irreducible).
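As a worked instance of (1) (again with our illustrative two-module data), $\pi$ can be obtained by solving $\pi Q = 0$ together with $\pi \mathbf{1}^{\mathrm{T}} = 1$, and $\lambda$ then follows by direct summation:

```python
import numpy as np

# Same illustrative two-module data as above.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
mu = np.array([0.05, 0.02])
mu_ij = np.array([[0.0, 0.01],
                  [0.02, 0.0]])

# Solve pi Q = 0, pi 1^T = 1 by appending the normalization equation.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Equation (1): lambda = sum_i pi(i) mu(i) + sum_{i != j} pi(i) Q(i, j) mu(i, j).
lam = pi @ mu + sum(pi[i] * Q[i, j] * mu_ij[i, j]
                    for i in range(len(Q)) for j in range(len(Q)) if i != j)
print(pi, lam)   # here pi = (1/3, 2/3) and lambda = 0.05
```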

This statement is well known in the software-reliability community and has been widely used to justify the hierarchical approach to modelling modular software (see e.g. [6] for details). However, to the best of the authors' knowledge, no proof of this fact is reported in the applied probability literature. The aim of this note is to provide precise statements and proofs for the asymptotics of the general failure point process defined in [9], of which Littlewood's model is a particular case. Specifically, we show that the counting process corresponding to this point process converges in distribution to the counting process of an HPP when the failure parameters tend to zero, but at a specific time-scale. Roughly speaking, we introduce a small parameter $\varepsilon$ into the failure parameters and the convergence takes place at time-scale $t/\varepsilon$ (in other words, when the failure parameters are small and on a large time horizon). Proving this result is easy using the criterion of convergence in distribution given in [8], for instance. It is based on the convergence in probability of the compensators corresponding to the various counting processes. In fact, the counting process converges in total variation to the counting process of an HPP, and the convergence rate will be shown using a method developed in [8] for the MMPP. Note that the class of MMPPs is widely used to model traffic streams for communication systems. It is easily seen that dealing with the present issue is equivalent to considering an MMPP with a fast modulating Markov chain. Thus, we retrieve the more or less known fact that, when jittering takes place, the arrival process tends to be asymptotically Poissonian (see [15, p. 116] for a partial discussion).

The article is organized as follows. Section 2 recalls some background on the point process studied here. In addition, the compensator of the point process is derived in a straightforward manner. In Section 3, we report results about convergence in distribution of the point process to an HPP. The connection to the problem of fast modulation in the case of an MMPP is briefly addressed in Subsection 3.3. The rate of convergence in total variation of the point process is given in Section 4. Appendix A recalls an estimate of the convergence rate of the singularly perturbed generator provided in [18]. The derivation of an inequality used in the text is reported in Appendix B.

2. Definition of the counting process and compensator

2.1. Definition of the model

We do not report the rationale underlying the definition of the reliability model discussed here; instead we refer the reader to [9] for details. We just need its mathematical formulation.

The parameters $\mu(\cdot, \cdot)$ and $\mu(\cdot)$ are as in the introduction. These failures were called secondary events in [9]. The process $X = (X_t)_t$ is an irreducible CTMC with infinitesimal generator $Q = (Q(i, j))_{i,j \in \mathcal{M}}$, where $\mathcal{M}$ is the finite set $\{1, \ldots, M\}$. The term $X_t$ is the active component at time $t$ for a failure-free system, that is, $X$ is the execution process. The vector $\alpha = (\alpha(i))_{i \in \mathcal{M}}$ denotes the distribution of the random variable $X_0$. The new parameters $\lambda(i, j)$ and $\lambda(i)$ ($i, j \in \mathcal{M}$) are defined to have the same meaning as $\mu(i, j)$ and $\mu(i)$. But, when a failure of this type happens in module $i$ or during a transition from module $i$, there is a probability $p(i, k)$ that execution restarts in module $k$. So, for each $i \in \mathcal{M}$, $(p(i, k))_{k \in \mathcal{M}}$ is a probability distribution. Such an event was referred to as a primary failure in [9]. Simultaneous occurrence of primary and secondary events is neglected. For simplicity, we do not consider the delay in recovering an operational state as in [9].

Then, taking into account failure occurrences, the random variable $X_t^*$ gives the active component at time $t$. The random variable $N_t$ counts the number of (primary and secondary) failures in the interval $]0, t]$ ($N_0 = 0$). Thus, $N = (N_t)_t$ is the counting process of the failure point process. Under the various assumptions in [9], the bivariate process $Z := (N_t, X_t^*)_t$ can be considered as a CTMC over the state space $S = \mathbb{N} \times \mathcal{M}$. Its infinitesimal generator, denoted by $G$, has the special structure

$$G = \begin{pmatrix} D_0 & D_1 & 0 & \cdots \\ 0 & D_0 & D_1 & \ddots \\ \vdots & & \ddots & \ddots \end{pmatrix},$$

using a lexicographic order on the state space $S$. The matrices $D_0$ and $D_1$ are defined by

$$D_0(i, j) := \begin{cases} Q(i, j)(1 - \lambda(i, j))(1 - \mu(i, j)) & \text{if } i \neq j, \\ Q(i, i) - \lambda(i) - \mu(i) & \text{if } i = j, \end{cases}$$

$$D_1(i, j) := \begin{cases} \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, j) + Q(i, j)[1 - \lambda(i, j)]\mu(i, j) & \text{if } i \neq j, \\ \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, i) + \mu(i) & \text{if } i = j. \end{cases}$$

Note that $\max_x |G(x, x)| < +\infty$. The structure of the generator $G$ shows that $N$ is the counting process of a MAP. Finally, $X^*$ is a CTMC with state space $\mathcal{M}$, initial distribution $\alpha$ and generator $Q^* := D_0 + D_1$. The CTMC $X^*$ is supposed to be right continuous with left limits (càdlàg). If the failure parameters are assumed to be such that $\lambda(i, j) < 1$ for any $(i, j)$, then $X^*$ is irreducible since $X$ is. This assumption is not very stringent.

Example 2.1. (Littlewood's model.) Assume that there are no primary failures, that is, $\lambda(i) = 0$ and $\lambda(i, j) = 0$ for all $i, j \in \mathcal{M}$; this is the model of Littlewood. The matrices $D_0$ and $D_1$ are given by

$$D_0(i, j) = \begin{cases} Q(i, j)(1 - \mu(i, j)) & \text{if } i \neq j, \\ Q(i, i) - \mu(i) & \text{if } i = j, \end{cases} \qquad D_1(i, j) = \begin{cases} Q(i, j)\mu(i, j) & \text{if } i \neq j, \\ \mu(i) & \text{if } i = j. \end{cases}$$

Furthermore, $Q^* = Q$ and we retrieve the fact that failures do not affect the execution process.
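The definitions of $D_0$ and $D_1$ are easy to assemble numerically. The sketch below (ours; the helper `map_blocks` and all parameter values are illustrative, not from the paper) builds the two blocks and checks that $Q^* := D_0 + D_1$ has zero row sums and that $Q^* = Q$ in the Littlewood case of Example 2.1.

```python
import numpy as np

def map_blocks(Q, mu, mu_ij, lam, lam_ij, p):
    """Blocks D0, D1 of the failure MAP, following the definitions above."""
    M = len(Q)
    D0 = np.zeros((M, M)); D1 = np.zeros((M, M))
    for i in range(M):
        # rate of primary failures out of module i
        prim = lam[i] + sum(Q[i, k] * lam_ij[i, k] for k in range(M) if k != i)
        for j in range(M):
            if i != j:
                D0[i, j] = Q[i, j] * (1 - lam_ij[i, j]) * (1 - mu_ij[i, j])
                D1[i, j] = prim * p[i, j] + Q[i, j] * (1 - lam_ij[i, j]) * mu_ij[i, j]
            else:
                D0[i, i] = Q[i, i] - lam[i] - mu[i]
                D1[i, i] = prim * p[i, i] + mu[i]
    return D0, D1

# Littlewood case of Example 2.1: no primary failures (lam = 0, lam_ij = 0).
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([0.05, 0.02]); mu_ij = np.array([[0.0, 0.01], [0.02, 0.0]])
no_lam = np.zeros(2); no_lam_ij = np.zeros((2, 2)); p = np.full((2, 2), 0.5)

D0, D1 = map_blocks(Q, mu, mu_ij, no_lam, no_lam_ij, p)
Qstar = D0 + D1
assert np.allclose(Qstar.sum(axis=1), 0.0)   # Q* = D0 + D1 is a generator
assert np.allclose(Qstar, Q)                 # failures do not affect the execution
```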


Example 2.2. (Markov-modulated Poisson process.) Another interesting point process is that obtained by assuming, in Littlewood's model, that the probability of a secondary failure during a control transfer is 0. In this case, setting $\mu(i, j) = 0$ for all $i, j \in \mathcal{M}$ in the previous expressions, we get $D_0 = Q - \operatorname{diag}(\mu(i))$ and $D_1 = \operatorname{diag}(\mu(i))$. This is an MMPP. Note that $Z' = (X_t^*, N_t)_t$ is also a Markov process with homogeneous second component as defined in [5], or a Markov-additive process of arrivals as discussed in [16].

2.2. Compensator and intensity of the counting process

The basic facts on point processes, martingales, compensators and intensities used in this paper are reported in [2]. A nice survey on point processes is [17]. For any process $V = (V_t)_t$, $\mathcal{F}(V) = (\mathcal{F}_t(V))_t$ will denote its internal history, i.e. $\mathcal{F}_t(V) := \sigma(V_s, s \le t)$.

Considering the bivariate process $Z$ in order to analyse the counting process $N$ allows us to deal with a Markov process with discrete state space. Thus, we can take advantage of the powerful analytic theory and the computational material developed for such a class of processes. This fact was exploited in [10] and [9] to assess various reliability measures related to the transient behaviour of the counting process. Due to the special structure of the generator $G$ of $Z$, $N$ may be interpreted as the counter of specific transitions in $Z$. More precisely, if we are interested in the set

$$\mathcal{T} := \{((n, i); (n + 1, j)),\ i, j \in \mathcal{M},\ n \ge 0\}$$

of pairs of states in $S$, then

$$N_t = \sum_{(x,y) \in \mathcal{T}} \sum_{0 < s \le t} \mathbf{1}_{\{Z_{s-} = x,\, Z_s = y\}} =: \sum_{(x,y) \in \mathcal{T}} N_t(x, y). \tag{2.1}$$

In other words, $N_t$ counts the transitions from state $x$ to $y$ ($(x, y) \in \mathcal{T}$) in the interval $]0, t]$. It is well known (see e.g. [2]) that, for a counter $N(x, y) = (N_t(x, y))_t$ of transitions from state $x$ to $y$ in a Markov chain,

$$N_t(x, y) - \int_0^t \mathbf{1}_{\{Z_{s-} = x\}} G(x, y) \, \mathrm{d}s$$

is an $\mathcal{F}(Z) = (\mathcal{F}_t(Z))_t$ martingale. The random function

$$\lambda_t(x, y) := \mathbf{1}_{\{Z_{t-} = x\}} G(x, y)$$

is the $\mathcal{F}(Z)$-intensity of $N(x, y)$ and

$$A_t(x, y) := \int_0^t \lambda_s(x, y) \, \mathrm{d}s \tag{2.2}$$

is the $\mathcal{F}(Z)$-compensator (the $\mathcal{F}(Z)$ dual predictable projection) of $N(x, y)$. Then it is easily seen that $M = (M_t)_t$ with $M_t = N_t - A_t$ and

$$A_t = \sum_{(x,y) \in \mathcal{T}} A_t(x, y)$$

is an $\mathcal{F}(Z)$-martingale. The $\mathcal{F}(Z)$-intensity of $N$ is

$$\lambda_t := \sum_{(x,y) \in \mathcal{T}} \mathbf{1}_{\{Z_{t-} = x\}} G(x, y) = \sum_{n \ge 0} \sum_{i \in \mathcal{M}} \sum_{j \in \mathcal{M}} G((n, i); (n + 1, j)) \mathbf{1}_{\{(N_{t-}, X_{t-}^*) = (n, i)\}}$$
$$= \sum_{n \ge 0} \sum_{i \in \mathcal{M}} \sum_{j \in \mathcal{M}} D_1(i, j) \mathbf{1}_{\{(N_{t-}, X_{t-}^*) = (n, i)\}} = \sum_{i \in \mathcal{M}} \mathbf{1}_{\{X_{t-}^* = i\}} \bigg[\sum_{j \in \mathcal{M}} D_1(i, j)\bigg] = \sum_{j \in \mathcal{M}} D_1(X_{t-}^*, j).$$


In the case of Example 2.1, the intensity of the counting process corresponding to Littlewood's model is

$$\mu(X_{t-}) + \sum_{j \neq X_{t-}} Q(X_{t-}, j)\mu(X_{t-}, j).$$

In the case of Example 2.2, we retrieve the well-known expression $\mu(X_{t-})$ for the $\mathcal{F}(N, X)$-intensity of an MMPP.

3. Convergence of the counting process

3.1. What does it mean for the failure parameters to be small?

A basic way to represent smaller and smaller failure parameters is to multiply each of them by a scalar $\varepsilon$ and to investigate the behaviour of the counting process $N$ as $\varepsilon$ tends to 0. Thus, we consider the perturbed failure parameters

$$\varepsilon\lambda(i), \quad \varepsilon\lambda(i, j), \quad \varepsilon\mu(i), \quad \varepsilon\mu(i, j), \qquad i, j \in \mathcal{M}. \tag{3.1}$$

Let us consider the example of an MMPP (see Example 2.2) with associated matrices

$$Q = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad D_1 = \begin{pmatrix} \mu(1) & 0 \\ 0 & \mu(2) \end{pmatrix}$$

(where $\mu(1) \neq \mu(2)$) and with $\alpha = (\tfrac12, \tfrac12)$ (i.e. with a stationary environment). The matrix $D_0^{(\varepsilon)}$ is

$$D_0^{(\varepsilon)} = Q - \begin{pmatrix} \mu(1)\varepsilon & 0 \\ 0 & \mu(2)\varepsilon \end{pmatrix}.$$

If $T$ is the time to first failure, then

$$\mathrm{P}\{T > t\} = \mathrm{P}\{N_t = 0\} = \alpha \exp(D_0^{(\varepsilon)} t)\, \mathbf{1}^{\mathrm{T}},$$

where $\mathbf{1} = (1, \ldots, 1)$. Setting $a = \sqrt{4 + (\mu(1) - \mu(2))^2 \varepsilon^2}$, we have

$$\alpha \exp(D_0^{(\varepsilon)} t)\, \mathbf{1}^{\mathrm{T}} = \bigg[\cosh\Big(\frac{at}{2}\Big) + \frac{2}{a}\sinh\Big(\frac{at}{2}\Big)\bigg] \mathrm{e}^{-t}\, \mathrm{e}^{-(\mu(1)\pi(1) + \mu(2)\pi(2))\varepsilon t}.$$

As expected, $\mathrm{P}\{T > t\}$ converges to 1 as $\varepsilon$ tends to 0. Therefore, convergence in distribution of the counting process to an HPP, i.e. weak convergence of the finite-dimensional distributions of $N$ to those of an HPP, cannot take place at the current time-scale. If we investigate the asymptotic distribution of $T$ at time-scale $t/\varepsilon$, then, from the previous expression for $\mathrm{P}\{T > t\}$,

$$\lim_{\varepsilon \to 0} \mathrm{P}\{T > t/\varepsilon\} = \exp(-(\mu(1)\pi(1) + \mu(2)\pi(2))t).$$
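The closed-form expression for $\mathrm{P}\{T > t\}$ can be checked numerically. A minimal sketch (ours), assuming illustrative rates $\mu(1) = 1$ and $\mu(2) = 3$, compares it with $\alpha \exp(D_0^{(\varepsilon)} t)\mathbf{1}^{\mathrm{T}}$ computed via a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

mu1, mu2, eps = 1.0, 3.0, 0.1
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
D0_eps = Q - eps * np.diag([mu1, mu2])
alpha = np.array([0.5, 0.5])                  # stationary environment
pi = np.array([0.5, 0.5])                     # stationary distribution of Q

a = np.sqrt(4.0 + (mu1 - mu2) ** 2 * eps ** 2)
for t in [0.5, 2.0, 10.0]:
    exact = alpha @ expm(D0_eps * t) @ np.ones(2)          # P{T > t}
    closed = ((np.cosh(a * t / 2) + (2.0 / a) * np.sinh(a * t / 2))
              * np.exp(-t) * np.exp(-(mu1 * pi[0] + mu2 * pi[1]) * eps * t))
    assert np.isclose(exact, closed)
    print(t, exact)
```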

We will therefore deal with the counting process $N^{(\varepsilon)} = (N_t^{(\varepsilon)})_t$ defined by

$$N_t^{(\varepsilon)} = N_{t/\varepsilon},$$

where $N_t$ counts the number of failures in the interval $]0, t]$ for the reliability model with the system (3.1) of perturbed failure parameters. The matrices $D_0^{(\varepsilon)}$, $D_1^{(\varepsilon)}$ associated with the model are those of Section 2. Note that

$$D_1^{(\varepsilon)} = \varepsilon B + \varepsilon^2 L, \tag{3.2}$$

with

$$L(i, j) = \begin{cases} -Q(i, j)\lambda(i, j)\mu(i, j) & \text{if } j \neq i, \\ 0 & \text{if } j = i, \end{cases}$$

$$B(i, j) = \begin{cases} \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, j) + Q(i, j)\mu(i, j) & \text{if } j \neq i, \\ \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, i) + \mu(i) & \text{if } j = i. \end{cases}$$

Note that $B$ is a nonnegative matrix. (A numerical check of decomposition (3.2) is sketched below.) The Markov processes $X^* := (X_t^*)_t$ and $X^{*,\varepsilon} := (X_{t/\varepsilon}^*)_t$ have generators $Q^* = D_0^{(\varepsilon)} + D_1^{(\varepsilon)}$ and $Q^{*,\varepsilon} := Q^*/\varepsilon$, respectively.
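Continuing the numerical sketch (with hypothetical primary-failure parameters `lam`, `lam_ij` and restart matrix `p`), the decomposition (3.2) can be verified directly; `map_blocks` is the helper defined in the sketch of Subsection 2.1:

```python
import numpy as np

def B_L(Q, mu, mu_ij, lam, lam_ij, p):
    """Matrices B and L of decomposition (3.2)."""
    M = len(Q)
    B = np.zeros((M, M)); L = np.zeros((M, M))
    for i in range(M):
        prim = lam[i] + sum(Q[i, k] * lam_ij[i, k] for k in range(M) if k != i)
        for j in range(M):
            if i != j:
                B[i, j] = prim * p[i, j] + Q[i, j] * mu_ij[i, j]
                L[i, j] = -Q[i, j] * lam_ij[i, j] * mu_ij[i, j]
            else:
                B[i, i] = prim * p[i, i] + mu[i]
    return B, L

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([0.05, 0.02]); mu_ij = np.array([[0.0, 0.01], [0.02, 0.0]])
lam = np.array([0.01, 0.03]); lam_ij = np.array([[0.0, 0.02], [0.01, 0.0]])
p = np.full((2, 2), 0.5)
eps = 1e-2

B, L = B_L(Q, mu, mu_ij, lam, lam_ij, p)
_, D1_eps = map_blocks(Q, eps * mu, eps * mu_ij, eps * lam, eps * lam_ij, p)
assert np.allclose(D1_eps, eps * B + eps ** 2 * L)   # decomposition (3.2)
assert (B >= 0).all()                                # B is a nonnegative matrix
```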

3.2. Convergence of compensators

Using the development of Subsection 2.2, the $\mathcal{F}(N^{(\varepsilon)}, X^{*,\varepsilon})$-compensator of $N^{(\varepsilon)}$ is

$$A_t^{(\varepsilon)} = \int_0^{t/\varepsilon} \sum_{j \in \mathcal{M}} D_1^{(\varepsilon)}(X_{s-}^*, j) \, \mathrm{d}s = \varepsilon \int_0^{t/\varepsilon} \sum_{j \in \mathcal{M}} B(X_{s-}^*, j) \, \mathrm{d}s + \varepsilon^2 \int_0^{t/\varepsilon} \sum_{j \in \mathcal{M}} L(X_{s-}^*, j) \, \mathrm{d}s$$
$$= \varepsilon \sum_{i \in \mathcal{M}} \int_0^{t/\varepsilon} \mathbf{1}_{\{X_s^* = i\}} \, \mathrm{d}s \sum_{j \in \mathcal{M}} B(i, j) + \varepsilon^2 \sum_{i \in \mathcal{M}} \int_0^{t/\varepsilon} \mathbf{1}_{\{X_s^* = i\}} \, \mathrm{d}s \sum_{j \in \mathcal{M}} L(i, j). \tag{3.3}$$

Since $X^*$ is càdlàg, $X_{s-}^*(\omega) = X_s^*(\omega)$ for almost all $\omega \in \Omega$, except for countably many $s$. Thus, we can replace $X_{s-}^*$ with $X_s^*$ in the above integrals. This observation will be used to equate similar integrals throughout this paper.

Now, let us consider Littlewood's model as given in Example 2.1. Failures do not affect the execution process, that is, $X^* = X$ and $Q^* = Q$. Thus,

$$A_t^{(\varepsilon)} = \varepsilon \sum_{i \in \mathcal{M}} \int_0^{t/\varepsilon} \mathbf{1}_{\{X_s = i\}} \, \mathrm{d}s \sum_{j \in \mathcal{M}} B(i, j).$$

It follows from well-known time-average properties of the cumulative process $\int_0^t f(X_s) \, \mathrm{d}s$ for an irreducible Markov process $X$ (see [3]) that $(\varepsilon/t) \int_0^{t/\varepsilon} \mathbf{1}_{\{X_s = i\}} \, \mathrm{d}s$ converges almost surely to $\pi(i)$, where $\pi$ is the stationary distribution of $X$. Thus,

$$A_t^{(\varepsilon)} \to t \sum_{i \in \mathcal{M}} \pi(i) \sum_{j \in \mathcal{M}} B(i, j) = \lambda t \quad \text{almost surely as } \varepsilon \to 0,$$

with $\lambda$ as in (1). In particular, this implies that $A_t^{(\varepsilon)}$ converges in probability to $\lambda t$. We recognize the compensator of an HPP with intensity $\lambda$. It follows from Theorem 1 of [7] that

$$N^{(\varepsilon)} \xrightarrow{\mathrm{D}} P \quad \text{as } \varepsilon \to 0,$$

where $P = (P_t)_t$ is the counting process of an HPP with parameter $\lambda$. We have shown that Theorem 3.1 below holds for an MMPP and for Littlewood's reliability model (Examples 2.1 and 2.2). Since convergence in the $L^2$-norm implies convergence in probability, the following lemma will give the convergence in probability of the compensator $A_t^{(\varepsilon)}$ to $\lambda t$ as $\varepsilon$ tends to 0 for the general reliability model of Section 2.

Lemma 3.1. If the probability vector $\pi$ is such that $\pi Q = 0$, then

$$\lim_{\varepsilon \to 0} \mathrm{E}\bigg[\bigg(\varepsilon \int_0^{t/\varepsilon} \big(\mathbf{1}_{\{X_s^* = i\}} - \pi(i)\big) \, \mathrm{d}s\bigg)^2\bigg] = 0.$$

Proof. The perturbed generator $Q^* = D_0^{(\varepsilon)} + D_1^{(\varepsilon)}$ of $X^*$ can be decomposed as follows:

$$Q^* = Q + \varepsilon R, \tag{3.4}$$

where

$$R(i, j) = \begin{cases} \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, j) - Q(i, j)\lambda(i, j) & \text{if } j \neq i, \\ \Big[\lambda(i) + \sum_{k \neq i} Q(i, k)\lambda(i, k)\Big] p(i, i) - \lambda(i) & \text{if } j = i. \end{cases}$$

The change of variables $u = s\varepsilon$ gives that

$$\varepsilon \int_0^{t/\varepsilon} \mathbf{1}_{\{X_s^* = i\}} \, \mathrm{d}s = \int_0^t \mathbf{1}_{\{X_u^{*,\varepsilon} = i\}} \, \mathrm{d}u.$$

Recall that $(X_t^{*,\varepsilon})_t$ has $Q^*/\varepsilon = Q/\varepsilon + R$ as generator. It is easily checked that $R\mathbf{1}^{\mathrm{T}} = 0$. In such a case, Corollary C.1 of [18, p. 349] gives the estimate

$$\mathrm{E}\bigg[\bigg(\int_0^t \big(\mathbf{1}_{\{X_s^{*,\varepsilon} = i\}} - \pi(i)\big) \, \mathrm{d}s\bigg)^2\bigg] \le C(1 + t^2)\varepsilon.$$

The convergence of $\int_0^t \mathbf{1}_{\{X_s^{*,\varepsilon} = i\}} \, \mathrm{d}s$ to $\pi(i)t$ in the $L^2$-norm as $\varepsilon$ tends to 0 follows from the previous estimate.

The following theorem follows from [7, Theorem 1].

Theorem 3.1. Let the probability vector $\pi$ be such that $\pi Q = 0$. As $\varepsilon$ tends to 0, the counting process $N^{(\varepsilon)} = (N_{t/\varepsilon})_t$ converges in distribution to the counting process of an HPP with intensity

$$\lambda = \sum_{i \in \mathcal{M}} \pi(i)\bigg[\mu(i) + \lambda(i) + \sum_{j \neq i} Q(i, j)[\mu(i, j) + \lambda(i, j)]\bigg]. \tag{3.5}$$
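Theorem 3.1 can be probed by simulation. The sketch below (ours, with illustrative MMPP data) simulates the MAP with blocks $D_0^{(\varepsilon)}$, $D_1^{(\varepsilon)}$ up to time $t/\varepsilon$ and compares the empirical mean and variance of $N_t^{(\varepsilon)}$ with $\lambda t$, the common mean and variance of a Poisson variable:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_map_count(D0, D1, alpha, t_end):
    """Number of arrivals of the MAP (D0, D1) on [0, t_end]."""
    x = rng.choice(len(alpha), p=alpha)
    t, n = 0.0, 0
    while True:
        rate = -D0[x, x]                      # total exit rate of phase x
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return n
        # Split the jump: D0 off-diagonal (no arrival) vs D1 (arrival).
        w = np.concatenate([D0[x], D1[x]]); w[x] = 0.0
        k = rng.choice(len(w), p=w / w.sum())
        if k >= len(D0):                      # a D1-transition carries a failure
            n += 1
            k -= len(D0)
        x = k

# Illustrative MMPP-type example: lambda = pi(1)mu(1) + pi(2)mu(2) = 2.
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
mu = np.array([1.0, 3.0]); pi = np.array([0.5, 0.5]); lam = pi @ mu
eps, t = 0.05, 2.0
D0_eps = Q - eps * np.diag(mu); D1_eps = eps * np.diag(mu)

counts = [simulate_map_count(D0_eps, D1_eps, pi, t / eps) for _ in range(2000)]
print(np.mean(counts), np.var(counts), "vs Poisson mean/variance:", lam * t)
```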


3.3. Fast modulation in an MMPP

When we consider an MMPP (see Example 2.2), the previous issue is equivalent to investigating the asymptotics of a Poisson process with an intensity modulated by a fast Markov chain. Indeed, the compensator of $N^{(\varepsilon)}$ is then

$$A_t^{(\varepsilon)} = \varepsilon \int_0^{t/\varepsilon} \mu(X_s) \, \mathrm{d}s = \int_0^t \mu(X_{u/\varepsilon}) \, \mathrm{d}u$$

(with the change of variable $u = \varepsilon s$). The Markov chain $X^{(\varepsilon)} = (X_{t/\varepsilon})_t$ has the generator $Q/\varepsilon$. As $\varepsilon$ tends to 0, it is now clear that the introduction of the small parameter $\varepsilon$ speeds up the rate of switching between states. Theorem 3.1 states that the asymptotic process is an HPP with intensity $\lambda$ (which here reduces to $\sum_{i \in \mathcal{M}} \pi(i)\mu(i)$).

This fact is known and has been investigated for various applications. The closest context to reliability theory is [1], where the underlying Poisson process is either homogeneous or nonhomogeneous. Proofs are based on asymptotic expansions of the transition semigroup of the bivariate Markov process $Z^{(\varepsilon)}$. Introduction of a time-dependent intensity is not relevant in our context. Indeed, the decrease of the failure parameters, or the reliability growth, is already taken into account by the small parameter $\varepsilon$. The case where the modulating process is a finite nonhomogeneous Markov process $X$ is addressed in [4]. The asymptotic process is a nonhomogeneous Poisson process with suitable ergodicity assumptions on $X$.

From a strictly mathematical point of view, the issue addressed in this paper is equivalent both to a fast modulating Markov process $X^*$ (with generator $Q^*/\varepsilon$) and to the introduction of a small scalar $\varepsilon$ only in the failure parameters $\mu(\cdot, \cdot)$ corresponding to a transition between states. Speeding up the modulating Markov process $X^*$ at rate $1/\varepsilon$ implies a speeding up of the number of transitions between states of $X^*$. Therefore, we have to compensate for this 'explosion' by introducing a small factor $\varepsilon$ in $\mu(\cdot, \cdot)$.

4. Convergence rate

We provide in this section an estimate of the convergence rate of the finite-dimensional distributions of $N^{(\varepsilon)}$ to those of an HPP with intensity $\lambda$ as in Theorem 3.1. The counting process of the HPP is denoted by $P = (P_t)_t$. Note that $\lambda$ in (3.5) is the scalar product $(\pi, B\mathbf{1}^{\mathrm{T}})$. We can write the limit compensator as

$$\bar{A}_t = (\pi, B\mathbf{1}^{\mathrm{T}})\,t. \tag{4.1}$$

From (3.3), the $\mathcal{F}(N^{(\varepsilon)}, X^{*,\varepsilon})$-compensator of $N^{(\varepsilon)}$ is

$$A_t^{(\varepsilon)} = \sum_{i \in \mathcal{M}} ((B + \varepsilon L)\mathbf{1}^{\mathrm{T}})(i)\, \varepsilon \int_0^{t/\varepsilon} \mathbf{1}_{\{X_{s-}^* = i\}} \, \mathrm{d}s = \int_0^t (Y_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}}) \, \mathrm{d}s \quad (\text{setting } u = s\varepsilon), \tag{4.2}$$

where $Y_s^{(\varepsilon)} := (\mathbf{1}_{\{X_s^{*,\varepsilon} = i\}})_{i \in \mathcal{M}}$. Hence, the $\mathcal{F}(N^{(\varepsilon)}, X^{*,\varepsilon})$-intensity of $N^{(\varepsilon)}$ is

$$\lambda_s^{(\varepsilon)} := (Y_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}}). \tag{4.3}$$

Let $T$ be any positive scalar and $\mathcal{T} := \{t_0, t_1, \ldots, t_n\}$ with $0 = t_0 < t_1 < \cdots < t_n = T$. To evaluate the proximity between the respective distributions $\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)})$ and $\mathcal{L}(P_{\mathcal{T}})$ of $N_{\mathcal{T}}^{(\varepsilon)} := (N_{t_1}^{(\varepsilon)}, \ldots, N_{t_n}^{(\varepsilon)})$ and $P_{\mathcal{T}} := (P_{t_1}, \ldots, P_{t_n})$, the distance in total variation, denoted by $d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}}))$, may be used, that is,

$$d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}})) := \sup_{B \subset \mathbb{N}^n} |\mathrm{P}\{N_{\mathcal{T}}^{(\varepsilon)} \in B\} - \mathrm{P}\{P_{\mathcal{T}} \in B\}| = \frac{1}{2} \sum_{k \in \mathbb{N}^n} |\mathrm{P}\{N_{\mathcal{T}}^{(\varepsilon)} = k\} - \mathrm{P}\{P_{\mathcal{T}} = k\}|$$

(see [17]). For a function $t \mapsto f(t)$ of locally bounded variation, the total variation in the interval $[0, T]$ is

$$\operatorname{var}_{[0,T]}(f) := \sup_{\{t_1, \ldots, t_n\} \in \mathcal{P}([0,T])} \sum_{i=1}^n |f(t_i) - f(t_{i-1})|,$$

where $\mathcal{P}([0, T])$ is the set of all finite subdivisions of the interval $[0, T]$.

In this section, we show that the finite-dimensional distributions of $N^{(\varepsilon)}$ converge in total variation to those of an HPP with intensity $\lambda$ at rate $\varepsilon$. The proof is borrowed from [8, Theorem 6.1], where a similar result is given for an MMPP. It is heavily based on the following estimate of the total variation between the finite-dimensional distributions of $N^{(\varepsilon)}$ and $P$ [8, Theorem 3.1]:

$$d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}})) \le \mathrm{E}\operatorname{var}_{[0,T]}(\hat{A}^{(\varepsilon)} - \bar{A}),$$

where $\hat{A}^{(\varepsilon)}$ is the $\mathcal{F}(N^{(\varepsilon)})$-compensator of $N^{(\varepsilon)}$. The term $\hat{A}^{(\varepsilon)}$ is obtained from (4.2) and

$$\hat{A}_t^{(\varepsilon)} = \int_0^t (\hat{Y}_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}}) \, \mathrm{d}s \quad \text{with } \hat{Y}_s^{(\varepsilon)} := (\mathrm{P}\{X_s^{*,\varepsilon} = i \mid \mathcal{F}_s(N^{(\varepsilon)})\})_{i \in \mathcal{M}}$$

(see [11, Theorem 18.3]). Hence, the $\mathcal{F}(N^{(\varepsilon)})$-intensity of $N^{(\varepsilon)}$ is

$$\hat{\lambda}_s^{(\varepsilon)} := (\hat{Y}_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}}). \tag{4.4}$$
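For a single time point ($n = 1$), the second expression for $d_{\mathrm{TV}}$ above is a half $l_1$-distance between probability mass functions on $\mathbb{N}$. A small helper (ours, not from the paper) estimates it from simulated counts such as those produced in the sketch after Theorem 3.1:

```python
import numpy as np
from scipy.stats import poisson

def dtv_empirical_vs_poisson(counts, lam_t):
    """Estimate d_TV between the empirical law of N_T^(eps) and Poisson(lam*T)."""
    counts = np.asarray(counts)
    kmax = counts.max()
    emp = np.bincount(counts, minlength=kmax + 1) / len(counts)
    pois = poisson.pmf(np.arange(kmax + 1), lam_t)
    # 0.5 * sum_k |P{N = k} - P{P = k}|; the Poisson tail above kmax is added back.
    return 0.5 * (np.abs(emp - pois).sum() + poisson.sf(kmax, lam_t))

# e.g. with `counts` and lam * t from the simulation sketch of Section 3:
# print(dtv_empirical_vs_poisson(counts, 4.0))
```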

We therefore obtain from (4.1) and (4.2) that

$$d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}})) \le \mathrm{E}\operatorname{var}_{[0,T]}\bigg(\int_0^{\cdot} (\hat{Y}_{s-}^{(\varepsilon)} - \pi, B\mathbf{1}^{\mathrm{T}}) + \varepsilon(\hat{Y}_{s-}^{(\varepsilon)}, L\mathbf{1}^{\mathrm{T}}) \, \mathrm{d}s\bigg)$$
$$\le \mathrm{E}\int_0^T |(\hat{Y}_{s-}^{(\varepsilon)} - \pi, B\mathbf{1}^{\mathrm{T}}) + \varepsilon(\hat{Y}_{s-}^{(\varepsilon)}, L\mathbf{1}^{\mathrm{T}})| \, \mathrm{d}s$$
$$\le \mathrm{E}\int_0^T |(\hat{Y}_{s-}^{(\varepsilon)} - \pi, B\mathbf{1}^{\mathrm{T}})| \, \mathrm{d}s + \varepsilon \mathrm{E}\int_0^T |(\hat{Y}_{s-}^{(\varepsilon)}, L\mathbf{1}^{\mathrm{T}})| \, \mathrm{d}s$$
$$\le \mathrm{E}\int_0^T |(\hat{Y}_{s-}^{(\varepsilon)} - \pi, B\mathbf{1}^{\mathrm{T}})| \, \mathrm{d}s + \varepsilon C T \tag{4.5}$$

(since $\hat{Y}^{(\varepsilon)}$ is bounded). Since $\hat{Y}_s^{(\varepsilon)}$ and $\pi$ are stochastic vectors, it follows from (B.1) below that

$$|(\hat{Y}_s^{(\varepsilon)} - \pi, B\mathbf{1}^{\mathrm{T}})| \le \frac{\beta}{2} \|\hat{Y}_s^{(\varepsilon)} - \pi\|_1, \tag{4.6}$$

where $\beta := \max_i (B\mathbf{1}^{\mathrm{T}})(i) - \min_i (B\mathbf{1}^{\mathrm{T}})(i)$ and $\|\cdot\|_1$ is the $l_1$-norm. Hence, it remains to estimate in (4.5) the convergence rate of $\|\hat{Y}_s^{(\varepsilon)} - \pi\|_1$ to 0 when $\varepsilon \to 0$. The first step consists in writing a filtering equation for the vector $\hat{Y}^{(\varepsilon)}$.

4.1. Filtering equation for $\hat{Y}^{(\varepsilon)}$

Note that each component $Y_t^{(\varepsilon)}(i)$ of $Y_t^{(\varepsilon)} = (\mathbf{1}_{\{X_t^{*,\varepsilon} = i\}})_{i \in \mathcal{M}}$ is a bounded random variable. We now basically follow Chapter IV of [2]. Recall that $\hat{Y}_t^{(\varepsilon)} = \mathrm{E}[Y_t^{(\varepsilon)} \mid \mathcal{F}_t(N^{(\varepsilon)})]$.

Lemma 4.1. Let $\alpha$ be the probability distribution of $X_0^*$. Then, for all $t \ge 0$,

$$\hat{Y}_t^{(\varepsilon)} = \alpha + \frac{1}{\varepsilon} \int_0^t \hat{Y}_s^{(\varepsilon)} Q^* \, \mathrm{d}s + \int_0^t G_s (\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s), \tag{4.7}$$

where $\hat{\lambda}_s^{(\varepsilon)}$ is the $\mathcal{F}(N^{(\varepsilon)})$-intensity of $N^{(\varepsilon)}$ given in (4.4) and

$$G_s = \frac{\hat{Y}_{s-}^{(\varepsilon)}(B + \varepsilon L)}{\hat{\lambda}_s^{(\varepsilon)}} - \hat{Y}_{s-}^{(\varepsilon)}. \tag{4.8}$$

Proof. Recall that the Markov process $X^{*,\varepsilon}$ has the generator $Q^{*,\varepsilon} = Q^*/\varepsilon$. It follows from the Dynkin formula that

$$Y_t^{(\varepsilon)} = Y_0^{(\varepsilon)} + \frac{1}{\varepsilon} \int_0^t Y_s^{(\varepsilon)} Q^* \, \mathrm{d}s + M_t, \tag{4.9}$$

where $M = (M_t)$ is an $\mathcal{F}(X^{*,\varepsilon})$-martingale. Then, applying Theorem 1 of [2, Chapter IV] to the representation (4.9) of the bounded process $Y^{(\varepsilon)}$, we get that

$$\hat{Y}_t^{(\varepsilon)} = \hat{Y}_0^{(\varepsilon)} + \frac{1}{\varepsilon} \int_0^t \hat{Y}_s^{(\varepsilon)} Q^* \, \mathrm{d}s + \hat{M}_t,$$

where $\hat{M} = (\hat{M}_t)$ is an $\mathcal{F}(N^{(\varepsilon)})$-martingale. Now, Theorem 17 of [2, Chapter III] gives us the following representation of the $\mathcal{F}(N^{(\varepsilon)})$-martingale $\hat{M}$:

$$\hat{M}_t = \int_0^t G_s (\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s),$$

where $\hat{\lambda}^{(\varepsilon)}$ is the $\mathcal{F}(N^{(\varepsilon)})$-intensity of $N^{(\varepsilon)}$ and $G = (G_t)$ is an $\mathcal{F}(N^{(\varepsilon)})$-predictable process; $\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s$ is called the innovations process. We also know from Theorem 2 of [2, Chapter IV] that $G_t = G_{1,t} - \hat{Y}_{t-}^{(\varepsilon)} + G_{3,t}$, where the $i$th entries of the vectors $G_{1,s}$ and $G_{3,s}$ must be computed from

$$\mathrm{E}\int_0^t C_s Y_s^{(\varepsilon)}(i) \lambda_s^{(\varepsilon)} \, \mathrm{d}s = \mathrm{E}\int_0^t C_s G_{1,s}(i) \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s, \tag{4.10}$$

$$\mathrm{E}\sum_{0 < s \le t} C_s \Delta M_s(i) \Delta N_s^{(\varepsilon)} = \mathrm{E}\int_0^t C_s G_{3,s}(i) \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s, \tag{4.11}$$


where $C := (C_s)_s$ is any nonnegative $\mathcal{F}(N^{(\varepsilon)})$-predictable process, and $\Delta M_s(i) = M_s(i) - M_{s-}(i)$ and $\Delta N_s^{(\varepsilon)} = N_s^{(\varepsilon)} - N_{s-}^{(\varepsilon)}$ are the jumps of the martingale $M(i)$ and the counting process $N^{(\varepsilon)}$ respectively. The random processes $\lambda^{(\varepsilon)}$ and $\hat{\lambda}^{(\varepsilon)}$ are given by (4.3) and (4.4). We determine an explicit expression for $G_{1,t}$ and $G_{3,t}$. This is similar to the MMPP case (see [2, p. 98]) except that $X^{*,\varepsilon}$ and $N^{(\varepsilon)}$ may have common jumps. The left-hand side of (4.10) can be rewritten as

$$\mathrm{E}\int_0^t C_s Y_s^{(\varepsilon)}(i) \lambda_s^{(\varepsilon)} \, \mathrm{d}s = \mathrm{E}\int_0^t C_s Y_{s-}^{(\varepsilon)}(i) (Y_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}}) \, \mathrm{d}s$$
$$= \mathrm{E}\int_0^t C_s Y_{s-}^{(\varepsilon)}(i) ((B + \varepsilon L)\mathbf{1}^{\mathrm{T}})(i) \, \mathrm{d}s \quad (\text{from (4.3)})$$
$$= \mathrm{E}\int_0^t C_s \hat{Y}_{s-}^{(\varepsilon)}(i) ((B + \varepsilon L)\mathbf{1}^{\mathrm{T}})(i) \, \mathrm{d}s,$$

since $C_s$ is $\mathcal{F}_s(N^{(\varepsilon)})$-measurable. We then deduce from (4.10) that

$$G_{1,s}(i) = \frac{\hat{Y}_{s-}^{(\varepsilon)}(i) ((B + \varepsilon L)\mathbf{1}^{\mathrm{T}})(i)}{\hat{\lambda}_s^{(\varepsilon)}}.$$

The counting process $N^{(\varepsilon)}$ has the following representation (see (2.1)):

$$N_t^{(\varepsilon)} = \sum_{n=0}^{+\infty} \sum_{j \in \mathcal{M}} \sum_{k \in \mathcal{M}} N_t^{(\varepsilon)}((n, j), (n + 1, k)), \tag{4.12}$$

where $N_t^{(\varepsilon)}(x, y)$ is the cumulative number of transitions of the bivariate process $(N^{(\varepsilon)}, X^{*,\varepsilon})$ from $x$ to $y$ up to time $t$. Since $\Delta M_s(i) = \Delta Y_s^{(\varepsilon)}(i)$,

$$\sum_{0 < s \le t} C_s \Delta M_s(i) \Delta N_s^{(\varepsilon)} = \sum_{0 < s \le t} C_s Y_s^{(\varepsilon)}(i) \Delta N_s^{(\varepsilon)} - \sum_{0 < s \le t} C_s Y_{s-}^{(\varepsilon)}(i) \Delta N_s^{(\varepsilon)}. \tag{4.13}$$

The first term on the right-hand side of (4.13) can be rewritten using (4.12) as

$$\sum_{0 < s \le t} C_s Y_s^{(\varepsilon)}(i) \Delta N_s^{(\varepsilon)} = \sum_{0 < s \le t} \sum_{n=0}^{+\infty} \sum_{j \in \mathcal{M}} \sum_{k \in \mathcal{M}} C_s Y_s^{(\varepsilon)}(i) \Delta N_s^{(\varepsilon)}((n, j), (n + 1, k)).$$

It is easily seen that the right-hand side of the last equality is

$$\sum_{0 < s \le t} \sum_{n=0}^{+\infty} \sum_{j \in \mathcal{M}} C_s Y_{s-}^{(\varepsilon)}(j) \Delta N_s^{(\varepsilon)}((n, j), (n + 1, i)).$$

So, from (4.13),

$$\sum_{0 < s \le t} C_s \Delta M_s(i) \Delta N_s^{(\varepsilon)} = \int_0^t C_s \sum_{n=0}^{+\infty} \sum_{j \in \mathcal{M}} Y_{s-}^{(\varepsilon)}(j) \, \mathrm{d}N_s^{(\varepsilon)}((n, j), (n + 1, i)) - \int_0^t C_s Y_{s-}^{(\varepsilon)}(i) \, \mathrm{d}N_s^{(\varepsilon)}.$$


Since the processes $C$ and $Y_{-}^{(\varepsilon)}$ are $\mathcal{F}(N^{(\varepsilon)}, X^{*,\varepsilon})$-predictable, it follows from the definition of the compensator, (2.2) and (4.3) that

$$\mathrm{E}\sum_{0 < s \le t} C_s \Delta M_s(i) \Delta N_s^{(\varepsilon)} = \mathrm{E}\int_0^t C_s \bigg[\sum_{j \in \mathcal{M}} Y_{s-}^{(\varepsilon)}(j)(B(j, i) + \varepsilon L(j, i)) - Y_{s-}^{(\varepsilon)}(i)(Y_{s-}^{(\varepsilon)}, (B + \varepsilon L)\mathbf{1}^{\mathrm{T}})\bigg] \mathrm{d}s$$
$$= \mathrm{E}\int_0^t C_s \bigg[\sum_{j \neq i} Y_{s-}^{(\varepsilon)}(j)(B(j, i) + \varepsilon L(j, i)) - Y_{s-}^{(\varepsilon)}(i)\sum_{j \neq i}(B(i, j) + \varepsilon L(i, j))\bigg] \mathrm{d}s$$
$$= \mathrm{E}\int_0^t C_s \bigg[\sum_{j \neq i} \hat{Y}_{s-}^{(\varepsilon)}(j)(B(j, i) + \varepsilon L(j, i)) - \hat{Y}_{s-}^{(\varepsilon)}(i)\sum_{j \neq i}(B(i, j) + \varepsilon L(i, j))\bigg] \mathrm{d}s,$$

since $C_s$ is $\mathcal{F}_s(N^{(\varepsilon)})$-measurable. We then get from (4.11) that

$$G_{3,s}(i) = \frac{\sum_{j \neq i} \hat{Y}_{s-}^{(\varepsilon)}(j)(B(j, i) + \varepsilon L(j, i)) - \hat{Y}_{s-}^{(\varepsilon)}(i)\sum_{j \neq i}(B(i, j) + \varepsilon L(i, j))}{\hat{\lambda}_s^{(\varepsilon)}}.$$

Consequently, the gain $G_s$ has the form given in Lemma 4.1.

Lemma 4.2. Let $\alpha$ be the probability distribution of $X_0^*$. Then (4.7) has the unique solution

$$\hat{Y}_t^{(\varepsilon)} = \alpha \exp(Q^* t/\varepsilon) + \int_0^t \nu_s^{(\varepsilon)} \exp(Q^*(t - s)/\varepsilon)(\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s), \tag{4.14}$$

where $\nu_s^{(\varepsilon)}$ denotes the gain $G_s$ of (4.8).

Proof. First, let us check that (4.14) is a solution of (4.7). It has the form $U_t V_t$, where

$$U_t = \alpha + \int_0^t \nu_s^{(\varepsilon)} \exp(-Q^* s/\varepsilon)(\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s), \qquad V_t = \exp(Q^* t/\varepsilon).$$

Using integration by parts [2, Theorem 2, p. 336], we obtain

$$U_t V_t = \alpha + \frac{1}{\varepsilon} \int_0^t (U_s V_s) Q^* \, \mathrm{d}s + \int_0^t \nu_s^{(\varepsilon)} (\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s).$$

Second, note that $(\hat{Y}_t^{(\varepsilon)} - U_t V_t)_t$ is a solution of the homogeneous linear differential equation

$$y(t) = \frac{1}{\varepsilon} \int_0^t y(s) Q^* \, \mathrm{d}s$$

with initial condition $y(0) = 0$. Then it follows that $\hat{Y}_t^{(\varepsilon)} - U_t V_t = 0$.
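Lemma 4.2 also suggests a practical recursion for $\hat{Y}^{(\varepsilon)}$ along an observed failure record: propagate an unnormalized filter with $\exp((Q^*/\varepsilon - (B + \varepsilon L))\Delta t)$ between failure epochs and multiply by $(B + \varepsilon L)$ at each failure, normalizing afterwards; one can check that the normalized output satisfies (4.7)–(4.8). The sketch below (ours, with illustrative MMPP matrices) is this standard Zakai-type reformulation, not an algorithm given in the paper.

```python
import numpy as np
from scipy.linalg import expm

def filter_Y(Qstar, B, L, eps, alpha, failure_times, t):
    """Conditional law of X_t^{*,eps} given the failures observed on [0, t]."""
    D1 = B + eps * L                       # arrival block at the 1/eps time-scale
    no_arrival = Qstar / eps - D1          # generator part with no failure
    rho, last = alpha.copy(), 0.0          # unnormalized filter
    for s in [u for u in failure_times if u <= t]:
        rho = rho @ expm(no_arrival * (s - last)) @ D1   # jump update at a failure
        rho /= rho.sum()                   # renormalize for numerical stability
        last = s
    rho = rho @ expm(no_arrival * (t - last))
    return rho / rho.sum()                 # normalized: this is \hat Y_t^{(eps)}

# Illustrative use with the MMPP data of Section 3 (mu = (1, 3), eps = 0.05):
Qstar = np.array([[-1.0, 1.0], [1.0, -1.0]])   # here Q* = Q (MMPP case)
B = np.diag([1.0, 3.0]); L = np.zeros((2, 2))
print(filter_Y(Qstar, B, L, 0.05, np.array([0.5, 0.5]), [1.2, 3.7], 5.0))
```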


4.2. Convergence rate

Theorem 4.1. The process $P = (P_t)_t$ is the counting process of an HPP with intensity

$$\lambda = (\pi, B\mathbf{1}^{\mathrm{T}}) = \sum_{i \in \mathcal{M}} \pi(i)\bigg[\mu(i) + \lambda(i) + \sum_{j \neq i} Q(i, j)[\mu(i, j) + \lambda(i, j)]\bigg],$$

where $\pi$ is the probability distribution such that $\pi Q = 0$. For any $T > 0$, there exists a constant $C_T$ such that

$$d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}})) \le C_T \varepsilon.$$

Proof. Let us recall that

$$d_{\mathrm{TV}}(\mathcal{L}(N_{\mathcal{T}}^{(\varepsilon)}), \mathcal{L}(P_{\mathcal{T}})) \le \frac{\beta}{2} \mathrm{E}\int_0^T \|\hat{Y}_t^{(\varepsilon)} - \pi\|_1 \, \mathrm{d}t + C_1\varepsilon$$

(see (4.5) and (4.6)). We just have to control the first term on the right-hand side of the inequality. Since $\nu^{(\varepsilon)}\mathbf{1}^{\mathrm{T}} = 0$, using (4.14) we can write

$$\hat{Y}_t^{(\varepsilon)} - \pi = \alpha \exp(Q^* t/\varepsilon) - \pi + \int_0^t \nu_s^{(\varepsilon)}[\exp(Q^*(t - s)/\varepsilon) - \mathbf{1}^{\mathrm{T}}\pi](\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s).$$

Using the triangle inequality, it is easily seen that

$$\mathrm{E}\int_0^T \|\hat{Y}_t^{(\varepsilon)} - \pi\|_1 \, \mathrm{d}t \le \int_0^T \|\alpha \exp(Q^* t/\varepsilon) - \pi\|_1 \, \mathrm{d}t + \int_0^T \mathrm{E}\bigg\|\int_0^t \nu_s^{(\varepsilon)}[\exp(Q^*(t - s)/\varepsilon) - \mathbf{1}^{\mathrm{T}}\pi](\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s)\bigg\|_1 \, \mathrm{d}t. \tag{4.15}$$

In a first step, let us consider the first term on the right-hand side of the previous inequality. We have from (A.2) below with $Q(\varepsilon) = Q^*/\varepsilon = Q/\varepsilon + R$ (see (3.4)) that, for all $t \ge 0$,

$$\|\alpha \exp(Q^* t/\varepsilon) - \pi\|_1 \le C_1(\varepsilon + \exp(-\rho t/\varepsilon)).$$

Then

$$\int_0^T \|\alpha \exp(Q^* t/\varepsilon) - \pi\|_1 \, \mathrm{d}t \le C_1 T \varepsilon + \frac{C_1}{\rho}\varepsilon = C_{2,T}\,\varepsilon.$$

In a second step, we have

$$\mathrm{E}\bigg\|\int_0^t \nu_s^{(\varepsilon)}[\exp(Q^*(t - s)/\varepsilon) - \mathbf{1}^{\mathrm{T}}\pi](\mathrm{d}N_s^{(\varepsilon)} - \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s)\bigg\|_1 \le \mathrm{E}\int_0^t \big\|\nu_s^{(\varepsilon)}[\exp(Q^*(t - s)/\varepsilon) - \mathbf{1}^{\mathrm{T}}\pi]\big\|_1 (\mathrm{d}N_s^{(\varepsilon)} + \hat{\lambda}_s^{(\varepsilon)} \, \mathrm{d}s)$$