
Durbin-Levinson recursive method
(8 October 2014)

A recursive method for computing $\varphi_n$ is useful because

- it avoids inverting large matrices;
- when new data are acquired, one can update predictions instead of starting again from scratch;
- the procedure is a method for computing important theoretical quantities.

Idea:
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$
Note that $X_1 - P_{L(X_2,\dots,X_n)} X_1$ is orthogonal to the previous term.
Durbin-Levinson, 2

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$

Check the orthogonality condition to find $a$. For $i > 1$:
$$\langle \hat X_{n+1} - X_{n+1},\, X_i \rangle
= \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1},\, X_i \rangle
+ a\,\langle X_1 - P_{L(X_2,\dots,X_n)} X_1,\, X_i \rangle = 0 + 0,$$
the last step coming from the definitions of the projections ($i = 2,\dots,n$).
Durbin-Levinson, 3

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$

Check the orthogonality condition with $i = 1$:
$$0 = \langle \hat X_{n+1} - X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle$$
$$= \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2$$
$$= -\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2$$
(the term $P_{L(X_2,\dots,X_n)} X_{n+1}$ drops out because it lies in $L(X_2,\dots,X_n)$, which is orthogonal to $X_1 - P_{L(X_2,\dots,X_n)} X_1$)
$$\Longrightarrow\quad a = \frac{\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2}.$$
Durbin-Levinson. 4

We tried
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$
and found
$$a = \frac{\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2}
= \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1}$$
with
$$v_{n-1} = E\bigl(|\hat X_n - X_n|^2\bigr) = \| X_n - P_{L(X_1,\dots,X_{n-1})} X_n \|^2 = \| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2,$$
the last equality holding by stationarity (forward and backward prediction from $n-1$ consecutive values have the same mean squared error).

We write $\hat X_{n+1} = \varphi_{n,1} X_n + \cdots + \varphi_{n,n} X_1 = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j}$,
so that $P_{L(X_2,\dots,X_n)} X_{n+1} = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j}$,
and substituting we get a recursion.
Durbin-Levinson algorithm. 5

$$\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$

Hence
$$\varphi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1}
= \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\, \gamma(n-j)\Bigr]\, v_{n-1}^{-1}.$$
Durbin-Levinson algorithm. 6

Then from
$$\sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j}
= \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\Bigl(X_1 - \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{j+1}\Bigr)
= \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\Bigl(X_1 - \sum_{k=1}^{n-1} \varphi_{n-1,n-k} X_{n+1-k}\Bigr)$$
one sees
$$\varphi_{n,j} = \varphi_{n-1,j} - a\,\varphi_{n-1,n-j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1.$$

We also need a recursive procedure for $v_n$.
Durbin-Levinson algorithm. 7

$$v_n = E\bigl(|\hat X_{n+1} - X_{n+1}|^2\bigr) = \gamma(0) - \sum_{j=1}^{n} \varphi_{n,j}\,\gamma(j)$$
$$= \gamma(0) - \varphi_{n,n}\,\gamma(n) - \sum_{j=1}^{n-1} \bigl(\varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}\bigr)\,\gamma(j)$$
$$= \gamma(0) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(j) - \varphi_{n,n}\Bigl(\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\Bigr)$$
$$= v_{n-1} - \varphi_{n,n}\cdot \varphi_{n,n}\, v_{n-1} = v_{n-1}\bigl(1 - \varphi_{n,n}^2\bigr).$$

The term in brackets equals $\varphi_{n,n}\, v_{n-1}$ because of the definition of $\varphi_{n,n}$.

The final formula $v_n = \bigl(1 - \varphi_{n,n}^2\bigr) v_{n-1}$ shows that $\varphi_{n,n}$ determines the decrease of the prediction error as $n$ increases.
Durbin-Levinson algorithm. Summary

$$v_0 = E\bigl(|X_1 - \hat X_1|^2\bigr) = E\bigl(|X_1|^2\bigr) = \gamma(0)$$
$$\varphi_{1,1} = \frac{\gamma(1)}{v_0} = \rho(1)$$
$$v_1 = \bigl(1 - \varphi_{1,1}^2\bigr)\, v_0 = \gamma(0)\bigl(1 - \rho(1)^2\bigr)$$
$$\vdots$$
$$\varphi_{n,n} = \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\Bigr]\, v_{n-1}^{-1}$$
$$\varphi_{n,j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1$$
$$v_n = \bigl(1 - \varphi_{n,n}^2\bigr)\, v_{n-1}$$
$$\vdots$$

One could divide everything by $\gamma(0)$ and work with the ACF instead of the ACVF.
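The summary above maps directly to code. Below is a minimal Python sketch of the recursion, written for illustration (the function name and the white-noise sanity check are not from the slides):

```python
def durbin_levinson(gamma):
    """Durbin-Levinson recursion on an autocovariance sequence.

    gamma: list [gamma(0), gamma(1), ..., gamma(N)].
    Returns (phis, vs) with phis[n] = [phi_{n,1}, ..., phi_{n,n}] and
    vs[n] = v_n, the mean squared error of the one-step predictor.
    """
    N = len(gamma) - 1
    phis = [[]]                  # n = 0: no data, the predictor is 0
    vs = [gamma[0]]              # v_0 = gamma(0)
    for n in range(1, N + 1):
        prev = phis[n - 1]       # prev[j-1] = phi_{n-1,j}
        # phi_{n,n} = [gamma(n) - sum_{j=1}^{n-1} phi_{n-1,j} gamma(n-j)] / v_{n-1}
        phi_nn = (gamma[n]
                  - sum(prev[j - 1] * gamma[n - j] for j in range(1, n))) / vs[-1]
        # phi_{n,j} = phi_{n-1,j} - phi_{n,n} phi_{n-1,n-j},  j = 1, ..., n-1
        row = [prev[j - 1] - phi_nn * prev[n - j - 1] for j in range(1, n)]
        row.append(phi_nn)
        phis.append(row)
        vs.append((1 - phi_nn ** 2) * vs[-1])   # v_n = (1 - phi_{n,n}^2) v_{n-1}
    return phis, vs

# Sanity check on white noise: gamma(0) = 1, gamma(h) = 0 for h > 0,
# so every phi_{n,j} should be 0 and v_n should stay equal to gamma(0).
phis, vs = durbin_levinson([1.0, 0.0, 0.0, 0.0, 0.0])
```

For white noise the best linear predictor is 0 at every order, which is exactly what the recursion returns.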
Durbin-Levinson algorithm for AR(1)

$\{X_t\}$ stationary with $X_t = \phi X_{t-1} + Z_t$, $Z_t \sim \mathrm{WN}(0, \sigma^2)$, $|\phi| < 1$,
and $E(X_s Z_t) = 0$ if $s < t$
$$\Longrightarrow\quad \gamma(h) = \frac{\sigma^2 \phi^{|h|}}{1 - \phi^2}.$$

$$v_0 = \frac{\sigma^2}{1 - \phi^2}, \qquad \varphi_{1,1} = \phi, \qquad v_1 = \sigma^2,$$
$$\varphi_{2,2} = \Bigl[\frac{\sigma^2 \phi^2}{1 - \phi^2} - \phi\,\frac{\sigma^2 \phi}{1 - \phi^2}\Bigr]\, v_1^{-1} = 0, \qquad \varphi_{2,1} = \varphi_{1,1} = \phi, \qquad v_2 = v_1,$$
$$\varphi_{n,1} = \phi, \qquad \varphi_{n,j} = 0 \ \text{for}\ j > 1, \qquad v_n = v_1 = \sigma^2.$$
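The claim $\varphi_{n,1} = \phi$, $\varphi_{n,j} = 0$ for $j > 1$, $v_n = \sigma^2$ can be cross-checked by brute force: solve the prediction equations $\Gamma_n \varphi_n = \gamma_n$ with a direct matrix solve (exactly the inversion the recursion is designed to avoid). A sketch with numpy; $\phi = 0.6$ and $\sigma^2 = 1$ are illustrative values:

```python
import numpy as np

phi, sigma2 = 0.6, 1.0                        # illustrative AR(1) parameters
gamma = lambda h: sigma2 * phi ** abs(h) / (1 - phi ** 2)   # AR(1) ACVF

n = 5
Gamma = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])
gvec = np.array([gamma(j) for j in range(1, n + 1)])

coef = np.linalg.solve(Gamma, gvec)   # (phi_{n,1}, ..., phi_{n,n})
v_n = gamma(0) - coef @ gvec          # one-step prediction MSE
```

The solve returns $(\phi, 0, \dots, 0)$ and $v_n = \sigma^2$, as the recursion predicts.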
Durbin-Levinson algorithm for MA(1)

$X_t = Z_t - \vartheta Z_{t-1}$, $Z_t \sim \mathrm{WN}(0, \sigma^2)$, $\gamma(0) = \sigma^2(1 + \vartheta^2)$, $\gamma(1) = -\sigma^2 \vartheta$.

$$v_0 = \sigma^2(1 + \vartheta^2), \qquad \varphi_{1,1} = \frac{-\vartheta}{1 + \vartheta^2}$$
$$v_1 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4)}{1 + \vartheta^2}, \qquad \varphi_{2,2} = \frac{-\vartheta^2}{1 + \vartheta^2 + \vartheta^4}, \quad \dots$$
$$v_2 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4 + \vartheta^6)}{1 + \vartheta^2 + \vartheta^4}, \quad \dots$$

Remarks: the computations are long and tedious.
$v_n$ converges (slowly) towards $\sigma^2$ (the white-noise variance) if $|\vartheta| < 1$.
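The three values above suggest the closed form $v_n = \sigma^2(1 + \vartheta^2 + \cdots + \vartheta^{2(n+1)})/(1 + \vartheta^2 + \cdots + \vartheta^{2n})$. That pattern is an extrapolation, not stated on the slide, but it can be checked numerically against the prediction equations, together with the slow convergence $v_n \to \sigma^2$ ($\vartheta = 0.7$ is an illustrative value):

```python
import numpy as np

theta, sigma2 = 0.7, 1.0                       # illustrative MA(1) parameters
def gamma(h):
    # MA(1) ACVF: gamma(0) = sigma^2 (1 + theta^2), gamma(1) = -sigma^2 theta
    return {0: sigma2 * (1 + theta ** 2), 1: -sigma2 * theta}.get(abs(h), 0.0)

def v(n):
    # prediction MSE from the prediction equations Gamma_n a = gamma_n
    if n == 0:
        return gamma(0)
    Gam = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])
    g = np.array([gamma(j) for j in range(1, n + 1)])
    return gamma(0) - np.linalg.solve(Gam, g) @ g

S = lambda m: sum(theta ** (2 * k) for k in range(m + 1))
closed = lambda n: sigma2 * S(n + 1) / S(n)    # conjectured closed form

errs = [abs(v(n) - closed(n)) for n in range(8)]
gaps = [abs(v(n) - sigma2) for n in range(8)]  # distance from sigma^2
```

The gap $v_n - \sigma^2$ shrinks like $\vartheta^{2n}$, which is the "slow" convergence noted above when $|\vartheta|$ is close to 1.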
Durbin-Levinson for sinusoidal wave

$X_t = B \cos(\omega t) + C \sin(\omega t)$, with $\omega \in \mathbb{R}$,
$E(B) = E(C) = E(BC) = 0$, $V(B) = V(C) = \sigma^2$.
Then $\gamma(h) = \sigma^2 \cos(\omega h)$.

$$v_0 = \sigma^2, \qquad \varphi_{1,1} = \cos(\omega)$$
$$v_1 = \sigma^2\bigl(1 - \cos^2(\omega)\bigr) = \sigma^2 \sin^2(\omega), \qquad \varphi_{2,2} = \frac{\cos(2\omega) - \cos^2(\omega)}{\sin^2(\omega)} = -1$$
$$v_2 = 0 \quad\Longrightarrow\quad X_{n+1} = P_{L(X_n, X_{n-1})} X_{n+1}.$$
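$v_2 = 0$ says the process is perfectly predictable from two consecutive values. Indeed the identity $\cos(\omega(t+1)) = 2\cos(\omega)\cos(\omega t) - \cos(\omega(t-1))$ (and its sine analogue) gives $X_{t+1} = 2\cos(\omega) X_t - X_{t-1}$ exactly, consistent with $\varphi_{2,2} = -1$ and $\varphi_{2,1} = \varphi_{1,1} - \varphi_{2,2}\varphi_{1,1} = 2\cos(\omega)$. A quick numerical check (the parameter values are arbitrary):

```python
import math

omega, B, C = 0.9, 1.3, -0.4                 # arbitrary illustrative values
X = lambda t: B * math.cos(omega * t) + C * math.sin(omega * t)

# X_{t+1} = 2 cos(omega) X_t - X_{t-1} holds exactly for every t
errors = [abs(X(t + 1) - (2 * math.cos(omega) * X(t) - X(t - 1)))
          for t in range(50)]
```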
Partial auto-correlation

For a stationary process $\{X_t\}$, the partial auto-correlation $\alpha(h)$ represents the correlation between $X_t$ and $X_{t+h}$ after removing the effect of the intermediate values.

Definition: $\alpha(1) = \rho(X_t, X_{t+1}) = \rho(1)$, and for $h > 1$
$$\alpha(h) = \rho\bigl(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t,\; X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h}\bigr).$$

Since the two residuals have the same variance (by stationarity),
$$\alpha(h) = \frac{E\Bigl(\bigl(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t\bigr)\bigl(X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h}\bigr)\Bigr)}{V\bigl(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t\bigr)}$$
$$= \frac{\langle X_1 - P_{L(X_2,\dots,X_h)} X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\| X_1 - P_{L(X_2,\dots,X_h)} X_1 \|^2}$$
$$= \frac{\langle X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\| X_1 - P_{L(X_2,\dots,X_h)} X_1 \|^2} = \varphi_{h,h}.$$

The Durbin-Levinson algorithm is a method to compute $\alpha(\cdot)$.
Remember in fact (Durbin-Levinson algorithm. 5)

$$\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)$$

Hence
$$\varphi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1}
= \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\, \gamma(n-j)\Bigr]\, v_{n-1}^{-1}.$$
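The identity $\alpha(h) = \varphi_{h,h}$ can also be checked numerically: compute $\alpha(h)$ straight from the definition (covariance of the two projection residuals) and compare it with the last coefficient of the order-$h$ best linear predictor. A sketch with numpy, using the MA(1) ACVF with $\sigma^2 = 1$ and an illustrative $\vartheta = 0.5$ as test case:

```python
import numpy as np

theta = 0.5                                   # illustrative MA(1) parameter
def gamma(h):
    # MA(1) ACVF with sigma^2 = 1: gamma(0) = 1 + theta^2, gamma(1) = -theta
    return {0: 1 + theta ** 2, 1: -theta}.get(abs(h), 0.0)

def alpha_direct(h):
    # alpha(h) from the definition: correlation of X_1 - P X_1 and
    # X_{h+1} - P X_{h+1}, where P projects on L(X_2, ..., X_h).
    G = np.array([[gamma(i - j) for j in range(h + 1)] for i in range(h + 1)])
    mid = list(range(1, h))                   # indices of X_2, ..., X_h
    A = G[np.ix_(mid, mid)]
    r0 = np.linalg.solve(A, G[mid, 0]) if mid else np.zeros(0)
    rh = np.linalg.solve(A, G[mid, h]) if mid else np.zeros(0)
    cross = G[0, h] - G[0, mid] @ rh          # <X_1 - P X_1, X_{h+1} - P X_{h+1}>
    var = G[0, 0] - G[0, mid] @ r0            # ||X_1 - P X_1||^2
    return cross / var

def phi_hh(h):
    # last coefficient of the best linear predictor of X_{h+1} from X_1, ..., X_h
    Gam = np.array([[gamma(i - j) for j in range(h)] for i in range(h)])
    g = np.array([gamma(j) for j in range(1, h + 1)])
    return np.linalg.solve(Gam, g)[-1]

pairs = [(alpha_direct(h), phi_hh(h)) for h in range(1, 6)]
```

The two computations agree at every lag; at $h = 2$ both match the value $-\vartheta^2/(1 + \vartheta^2 + \vartheta^4)$ found on the MA(1) slide.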
Examples of PACF

- $\{X_t\}$ AR(1) $\Longrightarrow \alpha(1) = \phi$, $\alpha(h) = 0$ for $h > 1$ (seen before).
- $\{X_t\}$ AR($p$), i.e. a stationary process s.t.
  $$X_t = \sum_{k=1}^{p} \phi_k X_{t-k} + Z_t, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma^2).$$
  If $t \geq p$, $P_{L(X_1,\dots,X_t)} X_{t+1} = \sum_{k=1}^{p} \phi_k X_{t+1-k}$ (check).
  Then $\varphi_{p,p} = \alpha(p) = \phi_p$ and $\varphi_{h,h} = 0$ if $h > p$, i.e. $\alpha(h) = 0$ for $h > p$.
- $\{X_t\}$ MA(1) $\Longrightarrow \alpha(h) = -\vartheta^h / (1 + \vartheta^2 + \cdots + \vartheta^{2h})$ (long computation).

The PACF of an AR process has finite support, while the PACF of an MA process is always non-zero. This is the opposite of the behaviour of the ACF.

Sample PACF: apply the Durbin-Levinson algorithm to $\hat\gamma(\cdot)$.
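The AR($p$) cutoff can be verified numerically for, say, an AR(2): compute its ACVF from the Yule-Walker relations, then read $\alpha(h)$ off the last coefficient of the order-$h$ predictor. The coefficients $\phi_1 = 0.5$, $\phi_2 = 0.3$ below are illustrative (they satisfy the stationarity conditions):

```python
import numpy as np

p1, p2, sigma2 = 0.5, 0.3, 1.0          # illustrative stationary AR(2)

# ACVF from the Yule-Walker relations:
# gamma(1) = p1 gamma(0) / (1 - p2),
# gamma(h) = p1 gamma(h-1) + p2 gamma(h-2)  for h >= 2,
# and gamma(0) (1 - p2^2) = p1 gamma(1) (1 + p2) + sigma^2.
g0 = sigma2 / (1 - p2 ** 2 - p1 ** 2 * (1 + p2) / (1 - p2))
gam = [g0, p1 * g0 / (1 - p2)]
for h in range(2, 10):
    gam.append(p1 * gam[h - 1] + p2 * gam[h - 2])

def alpha(h):
    # alpha(h) = phi_{h,h}: last coefficient of the order-h best linear predictor
    Gam = np.array([[gam[abs(i - j)] for j in range(h)] for i in range(h)])
    g = np.array(gam[1:h + 1])
    return np.linalg.solve(Gam, g)[-1]
```

As claimed, $\alpha(2) = \phi_2$ and $\alpha(h) = 0$ for $h > 2$.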
Sample ACF and PACF

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the overshort data.]
Sample ACF of Huron: AR(1) fit

[Figure: ACF of the detrended Huron data, lags 0-15.]

Add the theoretical ACF of an AR(1) with $\phi = 0.79$.
Add confidence intervals, assuming $\phi = 0.79$ (different from the book).
Sample ACF and PACF of Huron data

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the Huron data.]

The PACF suggests the use of an AR(2) model.
The innovations algorithm. Basis

Another recursive algorithm (the 'innovations algorithm') works better in some cases. It will be important in the estimation of ARMA processes.

Let $\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} \in L(X_1,\dots,X_n)$. We wish to write
$$\hat X_{n+1} = \sum_{j=1}^{n} \vartheta_{n,j}\,\bigl(X_{n+1-j} - \hat X_{n+1-j}\bigr).$$

$\{X_{n+1-j} - \hat X_{n+1-j}\}_{j=1,\dots,n}$ is an orthogonal basis of $L(X_1,\dots,X_n)$: in fact $X_{k+1} - \hat X_{k+1}$ is by definition orthogonal to $L(X_1,\dots,X_k)$, hence to $X_j - \hat X_j$ for all $j = 1,\dots,k$.
($X_{k+1} - \hat X_{k+1}$ is named the innovation, as it could not be predicted before.)
The innovations algorithm. Steps

The orthogonality condition reads: for $j = 1,\dots,n$,
$$\langle X_{n+1},\, X_{n+1-j} - \hat X_{n+1-j} \rangle = \langle \hat X_{n+1},\, X_{n+1-j} - \hat X_{n+1-j} \rangle
= \vartheta_{n,j}\, \| X_{n+1-j} - \hat X_{n+1-j} \|^2 = \vartheta_{n,j}\, v_{n-j}. \tag{1}$$

Take $j = n$. Then
$$\vartheta_{n,n}\, v_0 = \langle X_{n+1},\, X_1 - \hat X_1 \rangle = \langle X_{n+1},\, X_1 \rangle = \gamma(n).$$

For $j < n$,
$$\vartheta_{n,j}\, v_{n-j} = \langle X_{n+1},\, X_{n+1-j} - \hat X_{n+1-j} \rangle
= \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \langle X_{n+1},\, X_{n+1-j-k} - \hat X_{n+1-j-k} \rangle.$$

Now insert (1) in the rightmost term.
The innovations algorithm. Steps (cont.)

$$\langle X_{n+1},\, X_{n+1-j} - \hat X_{n+1-j} \rangle = \vartheta_{n,j}\, v_{n-j}. \tag{1}$$

From
$$\vartheta_{n,j}\, v_{n-j} = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \langle X_{n+1},\, X_{n+1-j-k} - \hat X_{n+1-j-k} \rangle
= \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \vartheta_{n,j+k}\, v_{n-j-k}.$$

Hence in order to compute $\vartheta_{n,j}$ we need $\vartheta_{n-j,k}$ (as $j \geq 1$, these values have already been obtained) and $\vartheta_{n,j+k}$, i.e. $\vartheta_{n,l}$ with $l > j$. At step $n$, one can then compute $\vartheta_{n,n}$ (first formula), then $\vartheta_{n,n-1}$ down to $\vartheta_{n,1}$.

One still needs a recursive formula for $v_n$.
The innovations algorithm. Summary

$$v_n = \| X_{n+1} - \hat X_{n+1} \|^2 = \| X_{n+1} \|^2 + \| \hat X_{n+1} \|^2 - 2\langle X_{n+1},\, \hat X_{n+1} \rangle$$
$$= \| X_{n+1} \|^2 + \| \hat X_{n+1} \|^2 - 2\langle X_{n+1} - \hat X_{n+1},\, \hat X_{n+1} \rangle - 2\langle \hat X_{n+1},\, \hat X_{n+1} \rangle
= \| X_{n+1} \|^2 - \| \hat X_{n+1} \|^2,$$
as $X_{n+1} - \hat X_{n+1}$ is orthogonal to $L(X_1,\dots,X_n)$, hence to $\hat X_{n+1}$.

$$\| X_{n+1} \|^2 = \gamma(0), \qquad \| \hat X_{n+1} \|^2 = \sum_{j=1}^{n} \vartheta_{n,j}^2\, v_{n-j}.$$

The algorithm starts with $v_0 = \gamma(0)$. Then, for each $n$,
$$\vartheta_{n,n} = \gamma(n)/v_0,$$
$$\vartheta_{n,j} = \Bigl[\gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \vartheta_{n,j+k}\, v_{n-j-k}\Bigr] \Big/ v_{n-j}, \qquad j = n-1,\dots,1,$$
$$v_n = \gamma(0) - \sum_{j=1}^{n} \vartheta_{n,j}^2\, v_{n-j}.$$
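As with Durbin-Levinson, the summary can be turned into a short program. A minimal Python sketch (the function name and the MA(1) test values are illustrative):

```python
def innovations(gamma):
    """Innovations algorithm.

    gamma: list [gamma(0), gamma(1), ..., gamma(N)].
    Returns (thetas, vs) with thetas[n] = [theta_{n,1}, ..., theta_{n,n}]
    and vs[n] = v_n.
    """
    N = len(gamma) - 1
    vs = [gamma[0]]                        # v_0 = gamma(0)
    thetas = [[]]
    th = {}                                # th[(n, j)] = theta_{n,j}
    for n in range(1, N + 1):
        th[(n, n)] = gamma[n] / vs[0]      # theta_{n,n} = gamma(n) / v_0
        for j in range(n - 1, 0, -1):      # j = n-1 down to 1
            s = sum(th[(n - j, k)] * th[(n, j + k)] * vs[n - j - k]
                    for k in range(1, n - j + 1))
            th[(n, j)] = (gamma[j] - s) / vs[n - j]
        row = [th[(n, j)] for j in range(1, n + 1)]
        thetas.append(row)
        # v_n = gamma(0) - sum_j theta_{n,j}^2 v_{n-j}
        vs.append(gamma[0] - sum(t * t * v for t, v in zip(row, vs[n - 1::-1])))
    return thetas, vs

# MA(1) test case: gamma(0) = sigma^2 (1 + theta^2), gamma(1) = -sigma^2 theta
theta = 0.6
gma = [1 + theta ** 2, -theta] + [0.0] * 8
thetas, vs = innovations(gma)
```

On MA(1) data only $\vartheta_{n,1}$ is non-zero, in agreement with the next slide.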
Innovations algorithm applied to MA(1)

It is easy to see that $\vartheta_{n,j} = 0$ for $n > 1$ and $j > 1$. In fact
$$\vartheta_{n,j} = \Bigl[\gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \vartheta_{n,j+k}\, v_{n-j-k}\Bigr] \Big/ v_{n-j},$$
and for an MA(1) $\gamma(j) = 0$ when $j > 1$. Then
$$\vartheta_{n,1} = \frac{\gamma(1)}{v_{n-1}} \qquad\text{and}\qquad
v_n = \gamma(0) - \vartheta_{n,1}^2\, v_{n-1} = \gamma(0) - \frac{\gamma(1)^2}{v_{n-1}}.$$
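For an MA(1) the whole algorithm thus collapses to the scalar recursion $v_n = \gamma(0) - \gamma(1)^2/v_{n-1}$, whose fixed points are $\sigma^2$ and $\sigma^2\vartheta^2$; for $|\vartheta| < 1$ it converges to the larger one, $\sigma^2$. A quick look at the convergence ($\vartheta = 0.9$ is an illustrative value, chosen close to 1 so the slowness is visible):

```python
theta, sigma2 = 0.9, 1.0                      # illustrative values
g0, g1 = sigma2 * (1 + theta ** 2), -sigma2 * theta

v = [g0]                                      # v_0 = gamma(0)
for _ in range(200):
    v.append(g0 - g1 ** 2 / v[-1])            # v_n = gamma(0) - gamma(1)^2 / v_{n-1}

gaps = [x - sigma2 for x in v]                # distance from the limit sigma^2
```

Near the limit the gap shrinks by a factor of roughly $\vartheta^2$ per step, hence the slow convergence when $|\vartheta|$ is close to 1.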
Projection on infinite past

We can consider projections based on knowledge of all the past:
$$\mathcal{M}_t = \overline{\mathrm{sp}}\{X_s,\ s \leq t\},$$
i.e. the smallest closed subspace containing all the finite linear combinations of $X_s$, $s \leq t$, i.e. the limits (in $L^2$) of finite linear combinations of the $X_s$.

An example. MA(1): $X_t = Z_t - \vartheta Z_{t-1}$. Show that, if $|\vartheta| < 1$,
$$-\sum_{j=1}^{\infty} \vartheta^j X_{t+1-j} = P_{\mathcal{M}_t} X_{t+1}.$$
1. The series converges.
2. $X_{t+1} + \sum_{j=1}^{\infty} \vartheta^j X_{t+1-j}$ is orthogonal to $X_{t-i}$, $i \geq 0$.

What could $P_{\mathcal{M}_t} X_{t+1}$ be if $|\vartheta| > 1$?
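Both points can be illustrated by simulation: with $X_t = Z_t - \vartheta Z_{t-1}$ the partial sums telescope, $X_{t+1} + \sum_{j=1}^{J} \vartheta^j X_{t+1-j} = Z_{t+1} - \vartheta^{J+1} Z_{t-J}$, so the residual of the truncated predictor differs from the innovation $Z_{t+1}$ by a geometrically small term when $|\vartheta| < 1$. A sketch (the seed and $\vartheta = 0.8$ are illustrative choices):

```python
import random

random.seed(0)                                  # reproducible illustration
theta, T = 0.8, 400
Z = [random.gauss(0, 1) for _ in range(T + 1)]  # Z_0, ..., Z_T
X = {t: Z[t] - theta * Z[t - 1] for t in range(1, T + 1)}

t, J = 300, 40
# truncation of the infinite-past predictor -sum_{j>=1} theta^j X_{t+1-j}
pred = -sum(theta ** j * X[t + 1 - j] for j in range(1, J + 1))
resid = X[t + 1] - pred
# telescoping identity: resid = Z_{t+1} - theta^(J+1) Z_{t-J}
tele_err = abs(resid - (Z[t + 1] - theta ** (J + 1) * Z[t - J]))
innov_gap = abs(resid - Z[t + 1])               # of size theta^(J+1)
```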
