
Time Series Analysis for Business and Economic Forecasting

Exercises - Chapter 6: Aberrant observations

Exercise 6.1
Suppose we are interested in a time series $x_t$, which follows a stationary AR(1) process,

$$x_t = \phi_1 x_{t-1} + \varepsilon_t, \qquad (1)$$

where $|\phi_1| < 1$, and $\varepsilon_t \sim$ i.i.d. $N(0, \sigma^2)$. Instead of $x_t$ we actually observe the time series $y_t$, where

$$y_t = x_t + \delta d_t, \qquad t = 0, 1, 2, \ldots, T, \qquad (2)$$

where $d_t$ can take the values $-1$, $0$, and $1$, with probabilities

$$P(d_t = -1) = P(d_t = 1) = p_o/2 \quad \text{and} \quad P(d_t = 0) = 1 - p_o, \qquad (3)$$

for certain $0 < p_o < 1$.

a. Derive the expressions for the variance $\gamma_{0,x}$ and for the first-order autocovariance $\gamma_{1,x}$ of $x_t$ in terms of the AR(1) parameter $\phi_1$ and the variance $\sigma^2$ of the shocks $\varepsilon_t$.

b. What type of outlier is present in the observed time series $y_t$, according to the specification as given in (1)-(3)?

c. Show that asymptotically ($T \to \infty$) the following holds for the sample mean $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$, the sample variance $\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2$, and the first-order autocovariance $\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y})$ of the observed series $y_t$:

$$\bar{y} \to 0, \qquad (4)$$
$$\hat{\gamma}_{0,y} \to \gamma_{0,x} + \delta^2 p_o, \qquad (5)$$
$$\hat{\gamma}_{1,y} \to \gamma_{1,x}. \qquad (6)$$

d. Suppose we consider an AR(1) model for the observed time series $y_t$, that is,

$$y_t = \phi_1 y_{t-1} + \eta_t, \qquad t = 1, \ldots, T. \qquad (7)$$

What do the properties (4), (5), and (6) imply for the OLS estimator of the parameter $\phi_1$ in this model?

Solution
a. From (1), it follows that $x_t$ has unconditional mean equal to zero, that is, $E[x_t] = 0$. To obtain the unconditional variance, note that

$$E[x_t^2] = E[(\phi_1 x_{t-1} + \varepsilon_t)^2] = E[\phi_1^2 x_{t-1}^2 + 2\phi_1 x_{t-1}\varepsilon_t + \varepsilon_t^2] = \phi_1^2 E[x_{t-1}^2] + E[\varepsilon_t^2].$$

The unconditional variance of $x_t$ should be the same as that of $x_{t-1}$, given that the AR(1) model is assumed to be stationary. Hence, we set $E[x_{t-1}^2] = E[x_t^2]$, such that $\gamma_{0,x} = E[x_t^2] = \sigma^2/(1 - \phi_1^2)$.

(Alternatively, recursively substitute for lagged $x_t$ in (1) until you obtain

$$x_t = \phi_1^t x_0 + \sum_{i=0}^{t-1} \phi_1^i \varepsilon_{t-i},$$

then take expectations of the squares on the left- and right-hand sides while letting $t \to \infty$.)

For the first-order autocovariance, multiply (1) with $x_{t-1}$ on both the left- and right-hand sides, and take expectations:

$$E[x_t x_{t-1}] = E[\phi_1 x_{t-1}^2 + x_{t-1}\varepsilon_t],$$

which gives $\gamma_{1,x} = \phi_1 \gamma_{0,x}$.

b. The process $d_t$ controls the occurrence of outliers, and $\delta$ is the absolute value of the outlier (the outlier can be both positive and negative, as $d_t$ can be equal to $+1$ or $-1$). As shown by (2), only the observation at time $t$ is affected by an outlier occurring at time $t$. Hence, this is the typical specification of an additive outlier.

c. From the definition of $d_t$, it follows that $E[d_t] = 0$. Hence, for the sample mean $\bar{y}$, we find that

$$\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t = \frac{1}{T}\sum_{t=1}^{T}(x_t + \delta d_t) = \underbrace{\frac{1}{T}\sum_{t=1}^{T} x_t}_{\to\, E[x_t]\, =\, 0} + \;\delta \underbrace{\frac{1}{T}\sum_{t=1}^{T} d_t}_{\to\, E[d_t]\, =\, 0} \;\to\; 0.$$

For the sample variance $\hat{\gamma}_{0,y}$, the following holds:

$$\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2 = \frac{1}{T}\sum_{t=1}^{T}(y_t^2 - 2 y_t \bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t^2 - \bar{y}^2, \qquad (8)$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t^2 = \frac{1}{T}\sum_{t=1}^{T}(x_t + \delta d_t)^2 = \frac{1}{T}\sum_{t=1}^{T} x_t^2 + 2\delta\,\frac{1}{T}\sum_{t=1}^{T} x_t d_t + \delta^2\,\frac{1}{T}\sum_{t=1}^{T} d_t^2.$$

The probability limit of the first term is $\gamma_{0,x}$. The second term converges to the covariance between $x_t$ and $d_t$, which is equal to 0 by assumption. The limit of the final term is equal to $\delta^2 p_o$, which can be understood by noting that

$$E[d_t^2] = (-1)^2\,P(d_t = -1) + 0^2\,P(d_t = 0) + 1^2\,P(d_t = 1) = p_o.$$

For the first-order autocovariance $\hat{\gamma}_{1,y}$, the following holds:

$$\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y}) = \frac{1}{T}\sum_{t=1}^{T}(y_t y_{t-1} - y_t\bar{y} - y_{t-1}\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} - \bar{y}^2, \qquad (9)$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} = \frac{1}{T}\sum_{t=1}^{T}(x_t + \delta d_t)(x_{t-1} + \delta d_{t-1}) = \frac{1}{T}\sum_{t=1}^{T} x_t x_{t-1} + \delta\,\frac{1}{T}\sum_{t=1}^{T}(x_t d_{t-1} + x_{t-1} d_t) + \delta^2\,\frac{1}{T}\sum_{t=1}^{T} d_t d_{t-1},$$

and because the probability limits of the second and third terms in the last line are equal to zero (due to the assumptions of independence between $x_t$ and $d_t$ and the lack of autocorrelation in $d_t$), the result follows.
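
As a quick numerical check of the limits (4)-(6), the following minimal Python sketch simulates the contaminated AR(1) process. The parameter values ($\phi_1 = 0.5$, $\sigma = 1$, $\delta = 5$, $p_o = 0.05$) are illustrative assumptions, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi1, sigma, delta, po = 1_000_000, 0.5, 1.0, 5.0, 0.05  # assumed values

# clean AR(1) process: x_t = phi1 * x_{t-1} + eps_t
eps = rng.normal(0.0, sigma, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi1 * x[t - 1] + eps[t]

# additive outliers: d_t = -1 or +1, each with probability po/2
d = rng.choice([-1, 0, 1], size=T, p=[po / 2, 1 - po, po / 2])
y = x + delta * d

gamma0_x = sigma**2 / (1 - phi1**2)           # theoretical variance of x_t
ybar = y.mean()
gamma0_y = np.mean((y - ybar) ** 2)
gamma1_y = np.mean((y[1:] - ybar) * (y[:-1] - ybar))

print(ybar)                                   # close to 0, cf. (4)
print(gamma0_y, gamma0_x + delta**2 * po)     # cf. (5)
print(gamma1_y, phi1 * gamma0_x)              # cf. (6)
```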

d. The OLS estimate of $\phi_1$ is equal to

$$\hat{\phi}_1 = \frac{\hat{\gamma}_{1,y}}{\hat{\gamma}_{0,y}},$$

which asymptotically converges to $\gamma_{1,x}/(\gamma_{0,x} + \delta^2 p_o)$ according to the results derived above. This will be smaller (in absolute value) than the true value $\phi_1$. Hence, the coefficient estimate is biased towards zero.
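
Continuing the simulation sketch above, the implied attenuation of the OLS estimator can be checked directly:

```python
# OLS slope of y_t on y_{t-1} (no intercept needed, as the mean of y_t is 0);
# the sample ratio gamma1_y / gamma0_y is its asymptotic equivalent
phi1_hat = gamma1_y / gamma0_y
phi1_plim = phi1 * gamma0_x / (gamma0_x + delta**2 * po)
print(phi1_hat, phi1_plim, phi1)   # estimate ~ plim, both below the true 0.5
```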

Exercise 6.2
Suppose we are interested in a time series $x_t$, which follows an MA(1) process,

$$x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}, \qquad (10)$$

where $|\theta_1| < 1$, and the shocks $\varepsilon_t \sim$ i.i.d. $N(0, \sigma^2)$.

a. Derive the expressions for the variance $\gamma_{0,x}$ and for the first-order autocovariance $\gamma_{1,x}$ of $x_t$ in terms of the MA(1) parameter $\theta_1$ and the variance of $\varepsilon_t$.

In addition to the time series $x_t$, we also observe another time series $y_t$, where

$$y_t = \eta_t + \theta_1 \eta_{t-1}, \qquad (11)$$

where the shocks $\eta_t$ are given by $\eta_t = \varepsilon_t + \delta d_t$ with $\varepsilon_t$ as above, and $d_t$ can take the values $-1$, $0$, and $1$, with probabilities

$$P(d_t = -1) = P(d_t = 1) = p_o/2 \quad \text{and} \quad P(d_t = 0) = 1 - p_o, \qquad (12)$$

for certain $0 < p_o < 1$.

b. What type of outlier would you say is present in the time series $y_t$, according to the specification as given in (10)-(12)?

c. Show that asymptotically ($T \to \infty$) the following holds for the sample mean $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$, the sample variance $\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2$, and the first-order autocovariance $\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y})$ of the observed series $y_t$:

$$\bar{y} \to 0, \qquad (13)$$
$$\hat{\gamma}_{0,y} \to \gamma_{0,x} + (1 + \theta_1^2)\,\delta^2 p_o, \qquad (14)$$
$$\hat{\gamma}_{1,y} \to \gamma_{1,x} + \theta_1\,\delta^2 p_o. \qquad (15)$$

What does this imply for the properties of the autocorrelations of $y_t$, compared to those of $x_t$?

Solution
a. From (10) and because $E[\varepsilon_t] = 0$, it follows directly that $x_t$ has unconditional mean equal to zero, that is, $E[x_t] = 0$. To obtain the unconditional variance, note that

$$E[x_t^2] = E[(\varepsilon_t + \theta_1 \varepsilon_{t-1})^2] = E[\varepsilon_t^2 + 2\theta_1 \varepsilon_{t-1}\varepsilon_t + \theta_1^2 \varepsilon_{t-1}^2] = (1 + \theta_1^2)\,\sigma^2,$$

as $\varepsilon_t \sim$ i.i.d. $(0, \sigma^2)$ such that $E[\varepsilon_{t-1}\varepsilon_t] = 0$.

For the first-order autocovariance, multiply (10) with $x_{t-1}$ on both the left- and right-hand sides, and take expectations:

$$E[x_t x_{t-1}] = E[(\varepsilon_t + \theta_1\varepsilon_{t-1})(\varepsilon_{t-1} + \theta_1\varepsilon_{t-2})] = E[\theta_1 \varepsilon_{t-1}^2] = \theta_1 \sigma^2,$$

again because $\varepsilon_t \sim$ i.i.d. $(0, \sigma^2)$.

b. The process $d_t$ controls the occurrence of outliers, and $\delta$ is the absolute value of the outlier (the outlier can be both positive and negative, as $d_t$ can be equal to $+1$ or $-1$). As shown by (11), the outlier $\delta d_t$ takes the form of a large shock at time $t$, which affects the time series $y_t$ and $y_{t+1}$ in the same way as regular shocks $\varepsilon_t$. This is the typical specification of an innovation outlier.

c. From the definition of $d_t$, it follows that $E[d_t] = 0$. Hence, for the sample mean $\bar{y}$ we find that

$$\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t = \frac{1}{T}\sum_{t=1}^{T}(\eta_t + \theta_1 \eta_{t-1}) = \frac{1}{T}\sum_{t=1}^{T}\big(\varepsilon_t + \delta d_t + \theta_1(\varepsilon_{t-1} + \delta d_{t-1})\big) = \underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_t}_{\to\, 0} + \;\delta\underbrace{\frac{1}{T}\sum_{t=1}^{T} d_t}_{\to\, 0} + \;\theta_1\underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t-1}}_{\to\, 0} + \;\theta_1\delta\underbrace{\frac{1}{T}\sum_{t=1}^{T} d_{t-1}}_{\to\, 0} \;\to\; 0.$$

For the sample variance $\hat{\gamma}_{0,y}$, the following holds:

$$\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2 = \frac{1}{T}\sum_{t=1}^{T}(y_t^2 - 2y_t\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t^2 - \bar{y}^2, \qquad (16)$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t^2 = \frac{1}{T}\sum_{t=1}^{T}(\eta_t + \theta_1\eta_{t-1})^2 = \frac{1}{T}\sum_{t=1}^{T}\big(\varepsilon_t + \delta d_t + \theta_1(\varepsilon_{t-1} + \delta d_{t-1})\big)^2 = \frac{1}{T}\sum_{t=1}^{T}(\varepsilon_t + \theta_1\varepsilon_{t-1})^2 + 2\delta\,\frac{1}{T}\sum_{t=1}^{T}(\varepsilon_t + \theta_1\varepsilon_{t-1})(d_t + \theta_1 d_{t-1}) + \delta^2\,\frac{1}{T}\sum_{t=1}^{T}(d_t + \theta_1 d_{t-1})^2.$$

The probability limit of the first term is $\gamma_{0,x}$. The second term converges to 0 due to the assumption that $\varepsilon_t$ and $d_t$ are independent. The limit of the final term is equal to $(1 + \theta_1^2)\,\delta^2 p_o$, which can be understood by noting that $d_t$ has no autocorrelation and

$$E[d_t^2] = (-1)^2\,P(d_t = -1) + 0^2\,P(d_t = 0) + 1^2\,P(d_t = 1) = p_o.$$

For the first-order autocovariance $\hat{\gamma}_{1,y}$, the following holds:

$$\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y}) = \frac{1}{T}\sum_{t=1}^{T}(y_t y_{t-1} - y_t\bar{y} - y_{t-1}\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} - \bar{y}^2, \qquad (17)$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} = \frac{1}{T}\sum_{t=1}^{T}(\eta_t + \theta_1\eta_{t-1})(\eta_{t-1} + \theta_1\eta_{t-2}) = \frac{1}{T}\sum_{t=1}^{T} x_t x_{t-1} + \delta\,\frac{1}{T}\sum_{t=1}^{T}\big(x_t(d_{t-1} + \theta_1 d_{t-2}) + x_{t-1}(d_t + \theta_1 d_{t-1})\big) + \delta^2\,\frac{1}{T}\sum_{t=1}^{T}(d_t + \theta_1 d_{t-1})(d_{t-1} + \theta_1 d_{t-2}),$$

and because the probability limit of the second term in the last line is equal to zero (due to the assumption of independence between $x_t$ and $d_t$) and that of the third term is equal to $\theta_1\,\delta^2 p_o$ (only the cross products involving $d_{t-1}^2$ survive), the result follows.

To see the implications for the autocorrelation properties of $y_t$, note that (14) and (15) imply that

$$\hat{\rho}_{1,y} = \frac{\hat{\gamma}_{1,y}}{\hat{\gamma}_{0,y}} \to \frac{\gamma_{1,x} + \theta_1\,\delta^2 p_o}{\gamma_{0,x} + (1 + \theta_1^2)\,\delta^2 p_o} = \frac{\theta_1\sigma^2 + \theta_1\,\delta^2 p_o}{(1 + \theta_1^2)\,\sigma^2 + (1 + \theta_1^2)\,\delta^2 p_o} = \frac{\theta_1}{1 + \theta_1^2} = \rho_{1,x}.$$

Hence, $y_t$ has the same autocorrelation properties as $x_t$.
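
A minimal simulation sketch of this result is given below, with illustrative assumed values $\theta_1 = 0.6$, $\sigma = 1$, $\delta = 5$, and $p_o = 0.05$; the first-order autocorrelations of $x_t$ and $y_t$ should both be close to $\theta_1/(1 + \theta_1^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
T, theta1, sigma, delta, po = 1_000_000, 0.6, 1.0, 5.0, 0.05  # assumed values

eps = rng.normal(0.0, sigma, T + 1)
d = rng.choice([-1, 0, 1], size=T + 1, p=[po / 2, 1 - po, po / 2])
eta = eps + delta * d                   # contaminated shocks eta_t

x = eps[1:] + theta1 * eps[:-1]         # clean MA(1)
y = eta[1:] + theta1 * eta[:-1]         # MA(1) with innovation outliers

def rho1(z):
    """First-order sample autocorrelation."""
    zbar = z.mean()
    return np.mean((z[1:] - zbar) * (z[:-1] - zbar)) / np.mean((z - zbar) ** 2)

print(rho1(x), rho1(y), theta1 / (1 + theta1**2))   # all approximately equal
```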

Exercise 6.3
Consider a time series $y_t$ which experiences a temporary change in mean, as described by the model

$$y_t = \delta D_{t,[T/4,3T/4]} + \varepsilon_t, \qquad t = 1, 2, \ldots, T, \qquad (18)$$

where $D_{t,[T/4,3T/4]} = 1$ for $t = T/4 + 1, \ldots, 3T/4$ and $D_{t,[T/4,3T/4]} = 0$ otherwise, and $\varepsilon_t$ is a white noise series with variance $\sigma^2$.

a. Show that asymptotically ($T \to \infty$) the following holds for the sample mean $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$, the sample variance $\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2$, and the first-order autocovariance $\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y})$:

$$\bar{y} \to \delta/2, \qquad (19)$$
$$\hat{\gamma}_{0,y} \to \sigma^2 + \delta^2/4, \qquad (20)$$
$$\hat{\gamma}_{1,y} \to \delta^2/4. \qquad (21)$$

b. Suppose we consider an AR(1) model for the observed time series $y_t$, that is,

$$y_t = \mu + \phi_1 y_{t-1} + \eta_t, \qquad t = 1, \ldots, T. \qquad (22)$$

What do the properties (19), (20), and (21) imply for the OLS estimator of the parameter $\phi_1$ in this model? [Hint: can you express the OLS estimate $\hat{\phi}_1$ in terms of $\hat{\gamma}_{0,y}$ and $\hat{\gamma}_{1,y}$?]

c. What does this imply for the behavior of the Dickey-Fuller test for a unit root, when applied to the series $y_t$, when $|\delta|/\sigma^2$ becomes very large?

Solution
a. For the sample mean $\bar{y}$, we find that as $T \to \infty$

$$\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/4,3T/4]} + \varepsilon_t\big) = \delta\underbrace{\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/4,3T/4]}}_{\to\, 1/2} + \underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_t}_{\to\, E[\varepsilon_t]\,=\,0} \;\to\; \delta/2. \qquad (23)$$

For the sample variance $\hat{\gamma}_{0,y}$, the following holds:

$$\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2 = \frac{1}{T}\sum_{t=1}^{T}(y_t^2 - 2y_t\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t^2 - \bar{y}^2,$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t^2 = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/4,3T/4]} + \varepsilon_t\big)^2 = \delta^2\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/4,3T/4]}^2 + 2\delta\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/4,3T/4]}\,\varepsilon_t + \frac{1}{T}\sum_{t=1}^{T}\varepsilon_t^2.$$

The probability limits of the three terms in the last line are $\delta^2/2$, $0$ and $\sigma^2$, respectively. Hence, we have that $\hat{\gamma}_{0,y} \to \delta^2/2 + \sigma^2 - (\delta/2)^2 = \sigma^2 + \delta^2/4$.

For the first-order autocovariance $\hat{\gamma}_{1,y}$, the following holds:

$$\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y}) = \frac{1}{T}\sum_{t=1}^{T}(y_t y_{t-1} - y_t\bar{y} - y_{t-1}\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} - \bar{y}^2,$$

and

$$\frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/4,3T/4]} + \varepsilon_t\big)\big(\delta D_{t-1,[T/4,3T/4]} + \varepsilon_{t-1}\big) = \delta^2\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/4,3T/4]}\,D_{t-1,[T/4,3T/4]} + \delta\,\frac{1}{T}\sum_{t=1}^{T}\big(D_{t,[T/4,3T/4]}\,\varepsilon_{t-1} + D_{t-1,[T/4,3T/4]}\,\varepsilon_t\big) + \frac{1}{T}\sum_{t=1}^{T}\varepsilon_t\varepsilon_{t-1}.$$

The first term has limiting value $\delta^2/2$, while the second and third terms converge to zero. Hence, we have that $\hat{\gamma}_{1,y} \to \delta^2/2 - (\delta/2)^2 = \delta^2/4$.

b. The OLS estimate of $\phi_1$ is equal to

$$\hat{\phi}_1 = \frac{\hat{\gamma}_{1,y}}{\hat{\gamma}_{0,y}},$$

which has probability limit $(\delta^2/4)/(\sigma^2 + \delta^2/4)$ according to the results derived above. This will be larger than 0, the true value of $\phi_1$. In fact, when $|\delta|/\sigma^2$ becomes larger, it approaches 1.

c. The Dickey-Fuller statistic tests the null hypothesis $H_0: \phi_1 = 1$ against the alternative $H_1: \phi_1 < 1$ in (22). As $|\delta|/\sigma^2$ becomes very large, $\hat{\phi}_1$ approaches 1, such that the null hypothesis is not likely to be rejected.
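
A short simulation sketch of parts b and c, with assumed values $\delta = 15$ and $\sigma = 1$ at a moderate sample size; it uses the Dickey-Fuller test from statsmodels (assumed installed).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T, delta, sigma = 400, 15.0, 1.0

# temporary mean shift over the middle half of the sample, cf. (18)
D = np.zeros(T)
D[T // 4 : 3 * T // 4] = 1.0
y = delta * D + rng.normal(0.0, sigma, T)

ybar = y.mean()
gamma0 = np.mean((y - ybar) ** 2)
gamma1 = np.mean((y[1:] - ybar) * (y[:-1] - ybar))
# sample AR(1) coefficient vs. its probability limit
print(gamma1 / gamma0, (delta**2 / 4) / (sigma**2 + delta**2 / 4))

# plain DF test (no augmentation lags); the p-value is typically large,
# so the unit-root null is not rejected although y_t is stationary around D_t
print(adfuller(y, maxlag=0, regression="c")[1])
```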

Exercise 6.4
Consider a time series $y_t$ which experiences a permanent change in mean, as described by the model

$$y_t = \delta D_{t,[T/2]} + \varepsilon_t, \qquad t = 1, 2, \ldots, T, \qquad (24)$$

where $D_{t,[T/2]} = 0$ for $t = 1, \ldots, T/2$ and $D_{t,[T/2]} = 1$ for $t = T/2 + 1, \ldots, T$, and $\varepsilon_t$ is a white noise series with variance $\sigma^2$.

a. Show that asymptotically ($T \to \infty$) the following holds for the sample mean $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$, the sample variance $\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2$, and the $k$-th order autocovariance $\hat{\gamma}_{k,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-k} - \bar{y})$ for any $k > 0$:

$$\bar{y} \to \delta/2, \qquad \hat{\gamma}_{0,y} \to \sigma^2 + \delta^2/4, \qquad \text{and} \qquad \hat{\gamma}_{k,y} \to \delta^2/4.$$

b. What does this imply for the behavior of the Dickey-Fuller test for a unit root, when applied to the series $y_t$, when $|\delta|/\sigma^2$ becomes very large?

Solution
a. For the sample mean $\bar{y}$, we find that as $T \to \infty$

$$\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/2]} + \varepsilon_t\big) = \delta\underbrace{\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/2]}}_{\to\, 1/2} + \underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_t}_{\to\, E[\varepsilon_t]\,=\,0} \;\to\; \delta/2. \qquad (25)$$

For the sample variance $\hat{\gamma}_{0,y}$, the following holds as $T \to \infty$:

$$\hat{\gamma}_{0,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})^2 = \frac{1}{T}\sum_{t=1}^{T}(y_t^2 - 2y_t\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t^2 - \bar{y}^2, \qquad (26)$$

and because

$$\frac{1}{T}\sum_{t=1}^{T} y_t^2 = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/2]} + \varepsilon_t\big)^2 = \underbrace{\delta^2\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/2]}^2}_{\to\, \delta^2/2} + \underbrace{2\delta\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/2]}\,\varepsilon_t}_{\to\, 0} + \underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_t^2}_{\to\, \sigma^2},$$

we have that $\hat{\gamma}_{0,y} \to \delta^2/2 + \sigma^2 - (\delta/2)^2 = \sigma^2 + \delta^2/4$.

For the first-order autocovariance $\hat{\gamma}_{1,y}$, the following holds as $T \to \infty$:

$$\hat{\gamma}_{1,y} = \frac{1}{T}\sum_{t=1}^{T}(y_t - \bar{y})(y_{t-1} - \bar{y}) = \frac{1}{T}\sum_{t=1}^{T}(y_t y_{t-1} - y_t\bar{y} - y_{t-1}\bar{y} + \bar{y}^2) = \frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} - \bar{y}^2, \qquad (27)$$

and because

$$\frac{1}{T}\sum_{t=1}^{T} y_t y_{t-1} = \frac{1}{T}\sum_{t=1}^{T}\big(\delta D_{t,[T/2]} + \varepsilon_t\big)\big(\delta D_{t-1,[T/2]} + \varepsilon_{t-1}\big) = \underbrace{\delta^2\,\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/2]}\,D_{t-1,[T/2]}}_{\to\, \delta^2/2} + \underbrace{\delta\,\frac{1}{T}\sum_{t=1}^{T}\big(D_{t,[T/2]}\,\varepsilon_{t-1} + D_{t-1,[T/2]}\,\varepsilon_t\big)}_{\to\, 0} + \underbrace{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_t\varepsilon_{t-1}}_{\to\, 0},$$

we have that $\hat{\gamma}_{1,y} \to \delta^2/2 - (\delta/2)^2 = \delta^2/4$. The same argument applies to $\hat{\gamma}_{k,y}$ for any fixed $k > 0$, because $\frac{1}{T}\sum_{t=1}^{T} D_{t,[T/2]}\,D_{t-k,[T/2]} \to 1/2$ as well.

b. Because $\hat{\gamma}_{0,y} \to \sigma^2 + \delta^2/4$ and $\hat{\gamma}_{k,y} \to \delta^2/4$, we have that

$$\hat{\rho}_k \to \frac{\delta^2/4}{\sigma^2 + \delta^2/4} \to 1,$$

when $|\delta|/\sigma^2$ becomes very large. All empirical autocorrelations approach 1 as the sample size $T$ and the magnitude of the mean shift become large. Hence, the EACF of $y_t$ looks just like what we would find for a time series with a unit root. For that reason, the DF test will typically not reject the null hypothesis of a unit root.
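
This effect is easy to reproduce; the sketch below uses assumed values $\delta = 10$ and $\sigma = 1$, for which $(\delta^2/4)/(\sigma^2 + \delta^2/4) = 25/26 \approx 0.96$.

```python
import numpy as np

rng = np.random.default_rng(3)
T, delta, sigma = 2_000, 10.0, 1.0      # assumed values

D = np.zeros(T)
D[T // 2 :] = 1.0                       # permanent level shift after t = T/2
y = delta * D + rng.normal(0.0, sigma, T)

ybar = y.mean()
gamma0 = np.mean((y - ybar) ** 2)
for k in (1, 5, 10, 20):
    gammak = np.mean((y[k:] - ybar) * (y[:-k] - ybar))
    print(k, gammak / gamma0)           # all close to 25/26, as for a unit root
```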

Exercise 6.5
Suppose we are interested in a time series $x_t$, which follows an AR(2) process,

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \varepsilon_t, \qquad (28)$$

or

$$(1 - \phi_1 L - \phi_2 L^2)\,x_t = \varepsilon_t,$$

where $\phi_1$ and $\phi_2$ are such that the AR(2) process is stationary, $L$ is the usual lag operator defined as $L^k x_t \equiv x_{t-k}$, and $\varepsilon_t \sim$ i.i.d. $N(0, \sigma^2)$.

Instead of $x_t$ we actually observe the contaminated series $y_t$, containing a single outlier at time $t = \tau$, that is,

$$y_t = x_t + \zeta_t d_t, \qquad (29)$$

where $d_t = 1$ for $t = \tau$ and 0 otherwise, and $\zeta_t$ describes the effect of the outlier on the observed series.

Question: How would you test whether the outlier at time $t = \tau$ is an additive outlier (AO) or an innovation outlier (IO)?

Solution
In case of an AO we have $\zeta_t = \delta$, such that

$$y_t = x_t + \delta d_t,$$

which implies that

$$(1 - \phi_1 L - \phi_2 L^2)\,y_t = (1 - \phi_1 L - \phi_2 L^2)\,x_t + \delta(1 - \phi_1 L - \phi_2 L^2)\,d_t,$$

or

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t + \delta(d_t - \phi_1 d_{t-1} - \phi_2 d_{t-2}).$$

In case of an IO we have $\zeta_t d_t = \delta\,(1 - \phi_1 L - \phi_2 L^2)^{-1} d_t$, or, given that an IO is simply an unusual shock at $t = \tau$,

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t + \delta d_t.$$

Define $e_t = y_t - \phi_1 y_{t-1} - \phi_2 y_{t-2}$, such that

$$\text{AO: } e_t = \varepsilon_t + \delta(d_t - \phi_1 d_{t-1} - \phi_2 d_{t-2}), \qquad (30)$$
$$\text{IO: } e_t = \varepsilon_t + \delta d_t. \qquad (31)$$

We can interpret (30)-(31) as a regression model for $e_t$:

$$e_t = \delta z_{x,t} + \varepsilon_t, \qquad t = 1, \ldots, T, \quad x = \text{AO, IO}, \qquad (32)$$

with

$$\text{AO: } z_{\text{AO},t} = \begin{cases} 1 & \text{for } t = \tau \\ -\phi_1 & \text{for } t = \tau + 1 \\ -\phi_2 & \text{for } t = \tau + 2 \\ 0 & \text{for } t > \tau + 2 \end{cases}$$

$$\text{IO: } z_{\text{IO},t} = \begin{cases} 1 & \text{for } t = \tau \\ 0 & \text{for } t > \tau \end{cases}$$

and $z_{x,t} = 0$, $x = $ AO, IO, for all $t < \tau$.

For each outlier type, we can obtain an estimate of $\delta$ by regressing $e_t$ on $z_{x,t}$:

$$\hat{\delta}_x(\tau) = \frac{\sum_{t=1}^{T} e_t z_{x,t}}{\sum_{t=1}^{T} z_{x,t}^2} = \frac{\sum_{t=\tau}^{T} e_t z_{x,t}}{\sum_{t=\tau}^{T} z_{x,t}^2}.$$

Using the definitions of $z_{\text{AO},t}$ and $z_{\text{IO},t}$, it is straightforward to show that $\hat{\delta}_{\text{AO}}(\tau) = (e_\tau - \phi_1 e_{\tau+1} - \phi_2 e_{\tau+2})/(1 + \phi_1^2 + \phi_2^2)$ and $\hat{\delta}_{\text{IO}}(\tau) = e_\tau$.

The significance of $\hat{\delta}_x(\tau)$ can be assessed by considering the associated $t$-statistic:

$$t_x(\tau) = \frac{\hat{\delta}_x(\tau)}{\left(\hat{\sigma}^2 \big/ \sum_{t=\tau}^{T} z_{x,t}^2\right)^{1/2}},$$

where $\hat{\sigma}$ is the OLS estimate of the standard deviation of $\varepsilon_t$.
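
A sketch of these statistics in Python is given below; the AR(2) estimates `phi1` and `phi2`, the residual series `e`, and the residual standard deviation `sigma_hat` are assumed to be available from a first-stage regression (all names are placeholders).

```python
import numpy as np

def outlier_stats(e, phi1, phi2, sigma_hat):
    """Compute t_AO(tau) and t_IO(tau) for every candidate outlier date tau."""
    T = len(e)
    t_io = e / sigma_hat                 # delta_IO(tau) = e_tau and sum z^2 = 1
    t_ao = np.full(T, np.nan)
    denom = 1.0 + phi1**2 + phi2**2      # sum of z_AO,t^2 over t = tau,...,tau+2
    for tau in range(T - 2):
        delta_ao = (e[tau] - phi1 * e[tau + 1] - phi2 * e[tau + 2]) / denom
        t_ao[tau] = delta_ao * np.sqrt(denom) / sigma_hat
    return t_ao, t_io
```

Scanning the maxima of $|t_{\text{AO}}(\tau)|$ and $|t_{\text{IO}}(\tau)|$ over $\tau$ then points to the most suspect dates, as in Exercise 6.6 below.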

Exercise 6.6
Consider the time series of quarterly growth rates of US investment, for the period 1969Q3-1999Q4, in the EViews workfile usinv.wf1.

a. Estimate an AR(2) model for this time series. Evaluate the model by inspecting the properties of the residuals (check (i) normality; (ii) [partial] autocorrelations; (iii) homoskedasticity). What kind of patterns do you observe in the residuals around 1978Q2 and 1980Q2?
b. Test for the presence of additive outliers (AOs) and innovation outliers (IOs).
c. Estimate a model where the observation in 1978Q2 is treated as an IO and the observation in 1980Q2 is treated as an AO.

Solution
a. Estimating the AR(2) model $y_t - \mu = \phi_1(y_{t-1} - \mu) + \phi_2(y_{t-2} - \mu) + \varepsilon_t$ by least squares gives coefficient estimates (with standard errors in parentheses) of $\hat{\mu} = 1.24\,(0.46)$, $\hat{\phi}_1 = 0.39\,(0.09)$, and $\hat{\phi}_2 = 0.21\,(0.09)$, with an $R^2 = 0.28$.

With a skewness equal to 0.20 and a kurtosis equal to 4.05, the Jarque-Bera test for normality of the residuals gives a $p$-value of 0.041. A significant (partial) autocorrelation appears at lag 8, with $\hat{\rho}_8(\hat{\varepsilon}) = 0.226$ (partial autocorrelation 0.218). The same applies to the squared residuals, for which $\hat{\rho}_8(\hat{\varepsilon}^2) = 0.314$, for example.
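
A rough Python equivalent of these diagnostics, assuming the growth-rate series is available as a pandas Series `y` (the EViews workfile usinv.wf1 is not read here) and that statsmodels is installed:

```python
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.stattools import jarque_bera
from statsmodels.tsa.stattools import acf, pacf

res = AutoReg(y, lags=2, trend="c").fit()       # AR(2) with a constant
e = res.resid

# note: AutoReg reports an intercept, not the mean mu of the mean-adjusted form
print(res.params)
print(jarque_bera(e))                           # (JB stat, p-value, skewness, kurtosis)
print(acf(e, nlags=8)[8], pacf(e, nlags=8)[8])  # lag-8 (partial) autocorrelation
print(acf(e**2, nlags=8)[8])                    # lag-8 autocorrelation of squared residuals
```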

b. The residual patterns are suspect around 1978Q2 and 1980Q2, in the sense that (i) the residual for 1978Q2 is unusually large and positive, and (ii) the residual for 1980Q2 is unusually large and negative while the residuals for the two following observations are somewhat large and positive. This raises the suspicion that 1978Q2 might be an innovation outlier (IO) and 1980Q2 might be an additive outlier (AO).
[Figure: residuals, actual, and fitted values from the AR(2) model, 1970-1995.]

This suspicion is confirmed by the test statistics $t_x(\tau)$ for $x = $ AO and IO, as shown below. The absolute value of $t_{\text{AO}}(\tau)$ is largest for 1980Q2 and equal to 3.68, while the absolute value of $t_{\text{IO}}(\tau)$ is largest for 1978Q2 and equal to 3.50.
[Figure: outlier test statistics $t_{\text{AO}}(\tau)$ (top panel) and $t_{\text{IO}}(\tau)$ (bottom panel), 1970-1995.]

c. Estimating the AR(2) model

$$y_t - \mu - \delta_2 d_{1980Q2,t} = \phi_1(y_{t-1} - \mu - \delta_2 d_{1980Q2,t-1}) + \phi_2(y_{t-2} - \mu - \delta_2 d_{1980Q2,t-2}) + \delta_1 d_{1978Q2,t} + \varepsilon_t$$

by least squares gives coefficient estimates (with standard errors in parentheses) of $\hat{\mu} = 1.15\,(0.43)$, $\hat{\delta}_1 = 6.63\,(1.81)$, $\hat{\delta}_2 = -6.22\,(1.63)$, $\hat{\phi}_1 = 0.43\,(0.09)$, and $\hat{\phi}_2 = 0.18\,(0.09)$, with an $R^2 = 0.42$.

With a skewness equal to 0.31 and a kurtosis equal to 3.26, the Jarque-Bera test for normality of the residuals gives a $p$-value of 0.31. The (partial) autocorrelation at lag 8 is reduced to $\hat{\rho}_8(\hat{\varepsilon}) = 0.204$ (partial autocorrelation 0.181). The same applies to the squared residuals, for which $\hat{\rho}_8(\hat{\varepsilon}^2) = 0.062$, for example.
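
Because $\delta_2$ interacts with the AR parameters, this model is nonlinear in its coefficients and can be estimated by nonlinear least squares. The sketch below assumes `y` is a NumPy array holding the growth rates and that `io_pos` and `ao_pos` are the integer positions of 1978Q2 and 1980Q2 (all assumed, since the data file is not read here).

```python
import numpy as np
from scipy.optimize import least_squares

d_io = np.zeros(len(y)); d_io[io_pos] = 1.0     # IO dummy for 1978Q2
d_ao = np.zeros(len(y)); d_ao[ao_pos] = 1.0     # AO dummy for 1980Q2

def resid(params):
    mu, phi1, phi2, d1, d2 = params
    ystar = y - mu - d2 * d_ao                  # strip mean and AO effect first
    return (ystar[2:] - phi1 * ystar[1:-1] - phi2 * ystar[:-2]
            - d1 * d_io[2:])

# rough starting values based on the estimates reported above
fit = least_squares(resid, x0=[1.2, 0.4, 0.2, 6.6, -6.2])
print(fit.x)                                    # mu, phi1, phi2, delta_1, delta_2
```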

[Figure: residuals, actual, and fitted values from the outlier-corrected AR(2) model, 1970-1995.]
