8 October 2014
Durbin-Levinson recursive method
Durbin-Levinson, 2

Durbin-Levinson, 3
Durbin-Levinson. 4

We tried
\[
\hat X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)
\]
and found
\[
a = \frac{\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1\rangle}{\|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2}
  = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1\rangle\, v_{n-1}^{-1}
\]
with
\[
v_{n-1} = E\bigl(|\hat X_n - X_n|^2\bigr) = \|X_n - P_{L(X_1,\dots,X_{n-1})} X_n\|^2 = \|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2 .
\]
We write
\[
\hat X_{n+1} = \varphi_{n,1} X_n + \dots + \varphi_{n,n} X_1 = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j}
\]
so that
\[
P_{L(X_2,\dots,X_n)} X_{n+1} = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j}
\]
and substituting we get a recursion.
Durbin-Levinson algorithm. 5

\[
\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)
\]
Hence
\[
\varphi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1\rangle\, v_{n-1}^{-1}
 = \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\Bigr]\, v_{n-1}^{-1}.
\]
Durbin-Levinson algorithm. 6

Then from
\[
\sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j}
 = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\Bigl(X_1 - \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{j+1}\Bigr)
 = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\Bigl(X_1 - \sum_{k=1}^{n-1} \varphi_{n-1,n-k} X_{n+1-k}\Bigr)
\]
one sees
\[
\varphi_{n,j} = \varphi_{n-1,j} - a\,\varphi_{n-1,n-j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1.
\]
Durbin-Levinson algorithm. 7

\[
v_n = E\bigl(|\hat X_{n+1} - X_{n+1}|^2\bigr) = \gamma(0) - \sum_{j=1}^{n} \varphi_{n,j}\,\gamma(j)
\]
\[
 = \gamma(0) - \varphi_{n,n}\,\gamma(n) - \sum_{j=1}^{n-1} \bigl(\varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}\bigr)\,\gamma(j)
\]
\[
 = \gamma(0) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(j) - \varphi_{n,n}\Bigl(\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,n-j}\,\gamma(j)\Bigr)
 = \bigl(1 - \varphi_{n,n}^2\bigr)\, v_{n-1},
\]
since the bracket equals $\varphi_{n,n} v_{n-1}$ (substitute $k = n-j$ and use the formula for $\varphi_{n,n}$).
Durbin-Levinson algorithm. Summary

Collecting the results: start from $v_0 = \gamma(0)$ and $\varphi_{1,1} = \gamma(1)/\gamma(0) = \rho(1)$; then, recursively,
\[
\varphi_{n,n} = \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\Bigr]\, v_{n-1}^{-1},
\]
\[
\varphi_{n,j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1,
\]
\[
v_n = \bigl(1 - \varphi_{n,n}^2\bigr)\, v_{n-1}.
\]
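The recursion can be sketched in code. This is a minimal illustration of the three formulas above, not part of the original slides; the function name `durbin_levinson` and the NumPy interface are my own choices.

```python
import numpy as np

def durbin_levinson(gamma):
    """Durbin-Levinson recursion.

    gamma: autocovariances gamma(0), ..., gamma(N).
    Returns (phis, v) where phis[n-1] = [phi_{n,1}, ..., phi_{n,n}]
    and v[n] is the prediction mean squared error v_n.
    """
    gamma = np.asarray(gamma, dtype=float)
    N = len(gamma) - 1
    v = np.zeros(N + 1)
    v[0] = gamma[0]                      # v_0 = gamma(0)
    phis, phi = [], np.array([])
    for n in range(1, N + 1):
        # phi_{n,n} = [gamma(n) - sum_j phi_{n-1,j} gamma(n-j)] / v_{n-1}
        phi_nn = (gamma[n] - phi @ gamma[n-1:0:-1]) / v[n-1]
        # phi_{n,j} = phi_{n-1,j} - phi_{n,n} phi_{n-1,n-j}, j = 1..n-1
        phi = np.append(phi - phi_nn * phi[::-1], phi_nn)
        v[n] = v[n-1] * (1 - phi_nn**2)  # v_n = (1 - phi_{n,n}^2) v_{n-1}
        phis.append(phi)
    return phis, v
```

On the MA(1) autocovariances of the following slides (with $\vartheta = 0.5$, $\sigma^2 = 1$, so $\gamma(0) = 1.25$, $\gamma(1) = -0.5$) this reproduces $\varphi_{1,1} = -0.4$ and $v_1 = 1.05$.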
Durbin-Levinson algorithm for AR(1)

For a causal AR(1), $X_t = \phi X_{t-1} + Z_t$, the recursion stops immediately: $\varphi_{1,1} = \rho(1) = \phi$ and $\varphi_{n,n} = 0$ for $n > 1$, so $\hat X_{n+1} = \phi X_n$ and $v_n = \sigma^2$ for all $n \ge 1$.
Durbin-Levinson algorithm for MA(1)

\[
X_t = Z_t - \vartheta Z_{t-1}, \qquad Z_t \sim WN(0, \sigma^2), \qquad \gamma(0) = \sigma^2(1+\vartheta^2), \quad \gamma(1) = -\sigma^2\vartheta.
\]
\[
v_0 = \sigma^2(1+\vartheta^2) \qquad \varphi_{1,1} = \frac{-\vartheta}{1+\vartheta^2}
\]
\[
v_1 = \frac{\sigma^2(1+\vartheta^2+\vartheta^4)}{1+\vartheta^2} \qquad \varphi_{2,2} = \frac{-\vartheta^2}{1+\vartheta^2+\vartheta^4} \qquad \dots
\]
\[
v_2 = \frac{\sigma^2(1+\vartheta^2+\vartheta^4+\vartheta^6)}{1+\vartheta^2+\vartheta^4} \qquad \dots
\]
Remarks: the computations are long and tedious.
$v_n$ converges (slowly) towards $\sigma^2$ (the white-noise variance) if $|\vartheta| < 1$.
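The convergence $v_n \to \sigma^2$ can be checked numerically. The sketch below (my own illustration, iterating the recursion derived on the previous slides on the MA(1) autocovariances) compares $v_2$ with the closed form on this slide and inspects $v_n$ for large $n$.

```python
import numpy as np

# MA(1): X_t = Z_t - theta*Z_{t-1}; gamma(0) = sigma2*(1+theta^2),
# gamma(1) = -sigma2*theta, gamma(h) = 0 for h >= 2.
theta, sigma2, N = 0.6, 1.0, 100
gamma = np.zeros(N + 1)
gamma[0] = sigma2 * (1 + theta**2)
gamma[1] = -sigma2 * theta

v = np.zeros(N + 1)
v[0] = gamma[0]
phi = np.array([])
for n in range(1, N + 1):
    phi_nn = (gamma[n] - phi @ gamma[n-1:0:-1]) / v[n-1]
    phi = np.append(phi - phi_nn * phi[::-1], phi_nn)
    v[n] = v[n-1] * (1 - phi_nn**2)

# v_2 matches the closed form on this slide; v_n approaches sigma2.
print(v[2], v[N])
```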
Durbin-Levinson for sinusoidal wave

For the sinusoidal wave $X_t = B\cos(\omega t) + C\sin(\omega t)$ with
\[
E(B) = E(C) = E(BC) = 0, \qquad V(B) = V(C) = \sigma^2 .
\]
Then $\gamma(h) = \sigma^2 \cos(\omega h)$.
\[
v_0 = \sigma^2 \qquad \varphi_{1,1} = \cos(\omega)
\]
\[
v_1 = \sigma^2\bigl(1 - \cos^2(\omega)\bigr) = \sigma^2 \sin^2(\omega) \qquad
\varphi_{2,2} = \frac{\cos(2\omega) - \cos^2(\omega)}{\sin^2(\omega)} = -1
\]
\[
v_2 = 0 \quad \Longrightarrow \quad X_{n+1} = P_{L(X_n, X_{n-1})} X_{n+1}.
\]
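Since $v_2 = 0$, two past values determine the next one exactly: the update formula gives $\varphi_{2,1} = \varphi_{1,1} - \varphi_{2,2}\varphi_{1,1} = 2\cos(\omega)$, so $X_{n+1} = 2\cos(\omega)X_n - X_{n-1}$. The sketch below (my own check, not from the slides) verifies this on a simulated path.

```python
import math
import random

random.seed(1)
omega = 0.7
B, C = random.gauss(0, 1), random.gauss(0, 1)
X = [B * math.cos(omega * t) + C * math.sin(omega * t) for t in range(50)]

# Durbin-Levinson gives phi_{2,1} = 2 cos(omega), phi_{2,2} = -1, v_2 = 0:
# the two-term predictor reproduces the process with no error.
for t in range(2, len(X)):
    pred = 2 * math.cos(omega) * X[t - 1] - X[t - 2]
    assert abs(pred - X[t]) < 1e-9
print("exact two-step prediction verified")
```

The check rests on the identity $\cos(\omega t) + \cos(\omega(t-2)) = 2\cos(\omega)\cos(\omega(t-1))$ (and its sine analogue), so the prediction error is zero up to rounding.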
Partial auto-correlation

For a stationary process $\{X_t\}$ the partial auto-correlation $\alpha(h)$ represents the correlation between $X_t$ and $X_{t+h}$, after removing the effect of the intermediate values.
Definition: $\alpha(1) = \rho(X_t, X_{t+1}) = \rho(1)$;
\[
\alpha(h) = \rho\bigl(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t,\; X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h}\bigr), \qquad h > 1.
\]
Recall
\[
\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\bigl(X_1 - P_{L(X_2,\dots,X_n)} X_1\bigr)
\]
Hence
\[
\varphi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1\rangle\, v_{n-1}^{-1}
 = \Bigl[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\Bigr]\, v_{n-1}^{-1},
\]
so that $\alpha(n) = \varphi_{n,n}$: the Durbin-Levinson recursion computes the PACF.
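As an illustration (mine, not from the slides): computing $\alpha(h) = \varphi_{h,h}$ with the Durbin-Levinson recursion for an AR(1) with coefficient $\phi = 0.7$, whose autocovariance is $\gamma(h) = \sigma^2\phi^h/(1-\phi^2)$, shows the characteristic PACF cut-off after lag 1.

```python
import numpy as np

phi_coef, H = 0.7, 6
# AR(1) autocovariance with sigma2 = 1: gamma(h) = phi^h / (1 - phi^2)
gamma = np.array([phi_coef**h / (1 - phi_coef**2) for h in range(H + 1)])

pacf = []
v = gamma[0]
prev = np.array([])
for n in range(1, H + 1):
    p_nn = (gamma[n] - prev @ gamma[n-1:0:-1]) / v   # phi_{n,n} = alpha(n)
    prev = np.append(prev - p_nn * prev[::-1], p_nn)
    v *= (1 - p_nn**2)
    pacf.append(p_nn)

# alpha(1) = phi = 0.7; alpha(h) ~ 0 for h > 1 (cut-off after lag 1)
print([round(a, 8) for a in pacf])
```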
Examples of PACF
Sample ACF and PACF

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the Overshort data.]
Sample ACF of Huron: AR(1) fit

[Figure: ACF of the detrended Huron data, lags 0-15.]

[Figure: sample ACF and sample PACF of the detrended Huron data, lags 0-15.]
The innovations algorithm. Basis

Expand the predictor in terms of the innovations $X_k - \hat X_k$, which are orthogonal to each other:
\[
\hat X_{n+1} = \sum_{j=1}^{n} \vartheta_{n,j}\, \bigl(X_{n+1-j} - \hat X_{n+1-j}\bigr),
\qquad v_n = \|X_{n+1} - \hat X_{n+1}\|^2 .
\]
The innovations algorithm. Steps

The orthogonality condition reads: for $j = 1 \dots n$
\[
\bigl\langle X_{n+1} - \hat X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j}\bigr\rangle = 0,
\qquad\text{i.e.}\qquad
\vartheta_{n,j}\, v_{n-j} = \bigl\langle X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j}\bigr\rangle .
\]
Take $j = n$. Then
\[
\vartheta_{n,n}\, v_0 = \langle X_{n+1}, X_1 \rangle = \gamma(n).
\]
From
\[
\vartheta_{n,j}\, v_{n-j} = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \bigl\langle X_{n+1},\; X_{n+1-j-k} - \hat X_{n+1-j-k}\bigr\rangle
 = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \vartheta_{n,j+k}\, v_{n-j-k} .
\]
The innovations algorithm. Steps (cont.)

From
\[
\vartheta_{n,j}\, v_{n-j} = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \bigl\langle X_{n+1},\; X_{n+1-j-k} - \hat X_{n+1-j-k}\bigr\rangle
 = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\, \vartheta_{n,j+k}\, v_{n-j-k} .
\]
Hence, in order to compute $\vartheta_{n,j}$ we need $\vartheta_{n-j,k}$ (as $j \ge 1$, this value has already been obtained) and $\vartheta_{n,j+k}$, i.e. $\vartheta_{n,l}$ with $l > j$. At step $n$, one can then compute $\vartheta_{n,n}$ (first formula), then $\vartheta_{n,n-1}$ down to $\vartheta_{n,1}$.
One still needs a recursive formula for $v_n$.
The innovations algorithm. Summary

\[
v_n = \|X_{n+1} - \hat X_{n+1}\|^2 = \|X_{n+1}\|^2 + \|\hat X_{n+1}\|^2 - 2\langle X_{n+1}, \hat X_{n+1}\rangle
\]
\[
 = \|X_{n+1}\|^2 + \|\hat X_{n+1}\|^2 - 2\langle X_{n+1} - \hat X_{n+1}, \hat X_{n+1}\rangle - 2\langle \hat X_{n+1}, \hat X_{n+1}\rangle
 = \|X_{n+1}\|^2 - \|\hat X_{n+1}\|^2
\]
since $\langle X_{n+1} - \hat X_{n+1}, \hat X_{n+1}\rangle = 0$. By the orthogonality of the innovations, $\|\hat X_{n+1}\|^2 = \sum_{j=1}^{n} \vartheta_{n,j}^2 v_{n-j}$, hence
\[
v_n = \gamma(0) - \sum_{j=1}^{n} \vartheta_{n,j}^2\, v_{n-j} .
\]
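The two recursions (for $\vartheta_{n,j}$, computed for $j = n$ down to $1$, and for $v_n$) can be sketched as follows; this is a minimal illustration, and the function name and interface are my own choices.

```python
import numpy as np

def innovations(gamma):
    """Innovations algorithm.

    gamma: autocovariances gamma(0), ..., gamma(N).
    Returns (theta, v) with theta[n][j] = theta_{n,j} (index j is 1-based)
    and v[n] the prediction mean squared error v_n.
    """
    N = len(gamma) - 1
    v = np.zeros(N + 1)
    v[0] = gamma[0]
    theta = [np.zeros(1)]                 # placeholder for n = 0 (never indexed)
    for n in range(1, N + 1):
        th = np.zeros(n + 1)              # th[j] = theta_{n,j}
        for j in range(n, 0, -1):         # theta_{n,n} first, down to theta_{n,1}
            # theta_{n,j} v_{n-j} = gamma(j) - sum_k theta_{n-j,k} theta_{n,j+k} v_{n-j-k}
            s = sum(theta[n - j][k] * th[j + k] * v[n - j - k]
                    for k in range(1, n - j + 1))
            th[j] = (gamma[j] - s) / v[n - j]
        theta.append(th)
        # v_n = gamma(0) - sum_j theta_{n,j}^2 v_{n-j}
        v[n] = gamma[0] - sum(th[j]**2 * v[n - j] for j in range(1, n + 1))
    return theta, v
```

On the MA(1) of the next slide (with $\vartheta = 0.5$, $\sigma^2 = 1$) this gives $\vartheta_{n,1} = \gamma(1)/v_{n-1}$ and $\vartheta_{n,j} = 0$ for $j \ge 2$.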
Innovations algorithm applied to MA(1)

For the MA(1) process $\gamma(j) = 0$ for $j \ge 2$, hence $\vartheta_{n,j} = 0$ for $j \ge 2$. Then
\[
\vartheta_{n,1} = \frac{\gamma(1)}{v_{n-1}} \qquad\text{and}\qquad
v_n = \gamma(0) - \vartheta_{n,1}^2\, v_{n-1} = \gamma(0) - \frac{\gamma(1)^2}{v_{n-1}} .
\]
Projection on infinite past

\[
M_t = \overline{\mathrm{sp}}(X_s)_{s \le t},
\]
i.e. the smallest closed subspace containing all the finite linear combinations of $X_s$, $s \le t$, i.e. the limits (in $L^2$) of finite linear combinations of $X_s$.
An example. MA(1): $X_t = Z_t - \vartheta Z_{t-1}$. Show that, if $|\vartheta| < 1$,
\[
-\sum_{j=1}^{\infty} \vartheta^j X_{t+1-j} = P_{M_t} X_{t+1}.
\]
1. the series converges.
2. $X_{t+1} + \sum_{j=1}^{\infty} \vartheta^j X_{t+1-j}$ is orthogonal to $X_{t-i}$, $i \ge 0$.
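A numerical sanity check (my own sketch, not from the slides): on a simulated MA(1) path, truncating the series at the available past, the residual $X_{t+1} + \sum_{j\ge1}\vartheta^j X_{t+1-j}$ should coincide with the innovation $Z_{t+1}$, which is orthogonal to the past.

```python
import random

random.seed(0)
theta, T = 0.5, 2000
Z = [random.gauss(0, 1) for _ in range(T)]
X = [Z[t] - theta * Z[t - 1] for t in range(1, T)]   # X[i] pairs with Z[i+1]

t = len(X) - 1                       # predict X[t] from its (long, finite) past
s = sum(theta**j * X[t - j] for j in range(1, t + 1))
residual = X[t] + s                  # telescopes to Z[t+1] - theta^(t+1) * Z[0]
assert abs(residual - Z[t + 1]) < 1e-9
print("residual equals the innovation Z_{t+1}")
```

The sum telescopes: $\sum_{j=0}^{t}\vartheta^j X_{t-j} = Z_{t+1} - \vartheta^{t+1}Z_0$ (shifting indices so that $X_i = Z_{i+1} - \vartheta Z_i$), and $\vartheta^{t+1}$ is negligible for large $t$ when $|\vartheta| < 1$.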