
HE4020 Econometric Time Series Analysis
Semester 1, 2017-18
LINEAR TIME SERIES MODELS
1 Introduction
Time series are observations collected over time.
Two aims of time series analysis are (i) to model the underlying mechanism that generated the observations, and (ii) to use that model to forecast the series.
Observed series exhibit different features. Any credible model must be able to account for at least the salient features.
Some examples of time series plots:

[Figure: Time series plot of Los Angeles annual rainfall (inches), 1880-1980]
[Figure: Scatter plot of LA annual rainfall vs. previous year's rainfall (inches)]

Considerable variation in rainfall over the years. Some
years are very wet (e.g. 1883) while other years are
dry (e.g. 1983).
Scatter plot does not suggest strong correlation in
rainfall from one year to the next.

[Figure: Time series plot of colour property by batch, industrial chemical process]
[Figure: Scatter plot of colour property vs. previous batch colour property]

Colour property of batches produced in an industrial
chemical process.
Time series plot shows that consecutive batches tend
to have similar colour values.
Scatter plot shows a small degree of positive association between consecutive batches.

[Figure: Time series plot of annual number of Canadian hare, 1905-1935]
[Figure: Scatter plot of hare abundance vs. previous year's abundance]

Stickiness in the time series plot. Neighbouring values are very closely related. The number does not change much from one year to the next.
Scatter plot shows strong positive correlation.

[Figure: Time series plot of average monthly temperature, Dubuque, Iowa, 1964-1976]
[Figure: Scatter plot of average monthly temperature vs. temperature one year ago]
The time series displays a regular pattern called seasonality.
Observations twelve months apart are related: January and February temperatures are low, while June, July and August are warmer.
The twelve-month lag scatter plot shows strong positive correlation.
2 Fundamental Concepts
2.1 Mean, Variance and Covariance
The sequence of random variables $\{Y_t : t = 0, 1, 2, \ldots\}$ is called a stochastic process.
The complete probability structure of such a process is determined by the joint probability distributions of all finite collections of the $Y$'s.
Fortunately, we often do not need to deal with these joint distributions; instead we need only consider the means, variances and covariances.

The mean function, $\mu_t$, of a stochastic process $\{Y_t : t = 0, 1, 2, \ldots\}$ is defined as
$$\mu_t = E(Y_t), \quad t = 0, 1, 2, \ldots \qquad (1)$$
Note that $\mu_t$ is just the expected value of the process at time $t$ and, in general, it can vary with time.
The autocovariance function, $\gamma_{t,s}$, is defined as
$$\gamma_{t,s} = \mathrm{Cov}(Y_t, Y_s), \quad t, s = 0, 1, 2, \ldots \qquad (2)$$
where
$$\mathrm{Cov}(Y_t, Y_s) = E[(Y_t - \mu_t)(Y_s - \mu_s)] = E(Y_t Y_s) - \mu_t \mu_s \qquad (3)$$
The autocorrelation function, $\rho_{t,s}$, is defined as
$$\rho_{t,s} = \mathrm{Corr}(Y_t, Y_s), \quad t, s = 0, 1, 2, \ldots \qquad (4)$$
where
$$\mathrm{Corr}(Y_t, Y_s) = \frac{\mathrm{Cov}(Y_t, Y_s)}{\sqrt{\mathrm{Var}(Y_t)\,\mathrm{Var}(Y_s)}} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} \qquad (5)$$
Note that
$$\gamma_{t,t} = \mathrm{Var}(Y_t), \quad \rho_{t,t} = 1, \quad \gamma_{t,s} = \gamma_{s,t}, \quad \rho_{s,t} = \rho_{t,s}, \quad |\gamma_{t,s}| \le \sqrt{\gamma_{t,t}\,\gamma_{s,s}}, \quad |\rho_{t,s}| \le 1 \qquad (6)$$
The following results are useful for evaluating the covariance properties of various time series models:
$$\mathrm{Cov}\left[\sum_{i=1}^{m} c_i Y_{t_i}, \sum_{j=1}^{n} d_j Y_{s_j}\right] = \sum_{i=1}^{m}\sum_{j=1}^{n} c_i d_j\, \mathrm{Cov}(Y_{t_i}, Y_{s_j}) \qquad (7)$$
$$\mathrm{Var}\left[\sum_{i=1}^{m} c_i Y_{t_i}\right] = \sum_{i=1}^{m} c_i^2\, \mathrm{Var}(Y_{t_i}) + 2\sum_{i=2}^{m}\sum_{j=1}^{i-1} c_i c_j\, \mathrm{Cov}(Y_{t_i}, Y_{t_j}) \qquad (8)$$
where $c_1, c_2, \ldots, c_m$ and $d_1, d_2, \ldots, d_n$ are constants, and $t_1, t_2, \ldots, t_m$ and $s_1, s_2, \ldots, s_n$ are time points.
2.2 Random Walk
Let $e_1, e_2, \ldots$ be a sequence of iid random variables, each with mean zero and variance $\sigma_e^2$.
The observed time series $\{Y_t : t = 1, 2, \ldots\}$ generated by the process
$$Y_t = Y_{t-1} + e_t, \quad Y_1 = e_1 \qquad (9)$$
is called a random walk process.
By repeated substitution it can easily be shown that
$$Y_1 = e_1, \quad Y_2 = e_2 + e_1, \quad \ldots, \quad Y_t = e_t + e_{t-1} + \cdots + e_1 \qquad (10)$$
From Equation (10), we obtain
$$E(Y_t) = \mu_t = E(e_t + e_{t-1} + \cdots + e_1) = 0, \quad \forall t \qquad (11)$$



and
$$\mathrm{Var}(Y_t) = \mathrm{Var}(e_t + e_{t-1} + \cdots + e_1) = t\,\sigma_e^2 \qquad (12)$$
Note that while the mean is constant over time, the variance increases with $t$.
Consider next the covariance function. Suppose $1 \le t \le s$. Then we have
$$\begin{aligned}
\gamma_{t,s} &= \mathrm{Cov}(Y_t, Y_s) \\
&= \mathrm{Cov}(e_1 + \cdots + e_t,\; e_1 + \cdots + e_t + e_{t+1} + \cdots + e_s) \\
&= \sum_{i=1}^{s}\sum_{j=1}^{t} \mathrm{Cov}(e_i, e_j) \\
&= \sum_{j=1}^{t} \mathrm{Var}(e_j) \\
&= t\,\sigma_e^2
\end{aligned} \qquad (13)$$
The autocorrelation function for the random walk is then given by
$$\rho_{t,s} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} = \frac{t\,\sigma_e^2}{\sqrt{(t\,\sigma_e^2)(s\,\sigma_e^2)}} = \sqrt{\frac{t}{s}} \qquad (14)$$
Note that $\rho_{t,s}$ decreases as $|t - s|$ increases (the further apart two observations are, the weaker the correlation), but the rate of decrease is smaller for larger values of $t$ (or $s$):
$$\rho_{1,2} = \sqrt{1/2} = 0.707, \quad \rho_{1,3} = \sqrt{1/3} = 0.577, \ldots$$
$$\rho_{10,11} = \sqrt{10/11} = 0.953, \quad \rho_{10,12} = \sqrt{10/12} = 0.913, \ldots$$
Below is a simulated random walk where the $e$'s are selected from the standard normal distribution.
[Figure: Simulated random walk (rw.nd), 500 observations]

Although the mean is zero, the series tends to wander away from the mean because the variance increases and adjacent values become more strongly correlated over time.
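To make this concrete, here is a minimal Python sketch (an illustration added here, assuming NumPy is available; the sample size, number of replications and seed are arbitrary choices) that simulates the random walk of Equation (9) and checks Equations (12) and (14) against sample moments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps, n_obs = 10_000, 500

# Each row is one realisation of Y_t = Y_{t-1} + e_t with Y_1 = e_1,
# obtained as the cumulative sum of standard normal innovations.
e = rng.standard_normal((n_reps, n_obs))
Y = np.cumsum(e, axis=1)

# Var(Y_t) should be close to t * sigma_e^2 = t (Equation (12)).
for t in (10, 100, 500):
    print(t, Y[:, t - 1].var())

# Corr(Y_t, Y_s) should be close to sqrt(t/s) (Equation (14)).
t, s = 100, 400
print(np.corrcoef(Y[:, t - 1], Y[:, s - 1])[0, 1], np.sqrt(t / s))
```

With $\sigma_e^2 = 1$ the printed variances should be close to 10, 100 and 500, and the printed correlation close to $\sqrt{100/400} = 0.5$.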

2.3 A Moving Average


Consider
$$Y_t = \frac{e_t + e_{t-1}}{2} \qquad (15)$$
Let's derive the first two moments of $Y_t$:
$$E(Y_t) = \mu_t = E\left\{\frac{e_t + e_{t-1}}{2}\right\} = 0$$
$$\mathrm{Var}(Y_t) = \mathrm{Var}\left\{\frac{e_t + e_{t-1}}{2}\right\} = \frac{\mathrm{Var}(e_t) + \mathrm{Var}(e_{t-1})}{4} = 0.5\,\sigma_e^2$$
$$\begin{aligned}
\mathrm{Cov}(Y_t, Y_{t-1}) &= \mathrm{Cov}\left\{\frac{e_t + e_{t-1}}{2}, \frac{e_{t-1} + e_{t-2}}{2}\right\} \\
&= \tfrac{1}{4}\left[\mathrm{Cov}(e_t, e_{t-1}) + \mathrm{Cov}(e_t, e_{t-2}) + \mathrm{Cov}(e_{t-1}, e_{t-1}) + \mathrm{Cov}(e_{t-1}, e_{t-2})\right] \\
&= \frac{\mathrm{Cov}(e_{t-1}, e_{t-1})}{4} = 0.25\,\sigma_e^2
\end{aligned}$$
or $\gamma_{t,t-1} = 0.25\,\sigma_e^2$, $\forall t$.
Further,
$$\mathrm{Cov}(Y_t, Y_{t-2}) = \mathrm{Cov}\left\{\frac{e_t + e_{t-1}}{2}, \frac{e_{t-2} + e_{t-3}}{2}\right\} = 0$$
since the $e$'s are independent. Similarly, $\mathrm{Cov}(Y_t, Y_{t-k}) = 0$ for $k > 1$. Hence, we have
$$\gamma_{t,s} = \begin{cases} 0.5\,\sigma_e^2 & \text{for } |t-s| = 0 \\ 0.25\,\sigma_e^2 & \text{for } |t-s| = 1 \\ 0 & \text{for } |t-s| > 1 \end{cases} \qquad (16)$$
and
$$\rho_{t,s} = \begin{cases} 1 & \text{for } |t-s| = 0 \\ 0.5 & \text{for } |t-s| = 1 \\ 0 & \text{for } |t-s| > 1 \end{cases} \qquad (17)$$

Note that, for example, 1;2 = 3;4 = 9;10 = 0:5 and


1;3 = 5;7 = 9;11 = 0: That is, Y values one period
apart are correlated to 0.5, and Y values two periods
apart are uncorrelated. These results hold regardless
of which time periods we are considering. This leads
us to the important concept of stationarity.
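As a quick sanity check, the following sketch (an illustration added here, assuming NumPy; the sample size and seed are arbitrary) simulates $Y_t = (e_t + e_{t-1})/2$ and compares the sample lag-1 and lag-2 autocorrelations with the theoretical values 0.5 and 0.

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.standard_normal(100_000)
Y = 0.5 * (e[1:] + e[:-1])          # Y_t = (e_t + e_{t-1}) / 2

def sample_acf(x, k):
    """Sample autocorrelation at lag k."""
    x = x - x.mean()
    return np.dot(x[k:], x[:-k]) / np.dot(x, x)

print(sample_acf(Y, 1))   # approximately 0.5
print(sample_acf(Y, 2))   # approximately 0
```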

2.4 Stationarity

A process is said to be strictly stationary if the joint distribution of $Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}$ is the same as that of $Y_{t_1-k}, Y_{t_2-k}, \ldots, Y_{t_n-k}$ for all choices of time points $t_1, t_2, \ldots, t_n$ and all $k$.
This is a strong assumption and often difficult to establish in practice.
A weaker version, referred to as weak (or second-order) stationarity, requires that
• the mean of $Y_t$ is constant over time, and
• $\gamma_{t,t-k} = \gamma_{0,k}$ for all times $t$ and lags $k$.
Since the covariance of a stationary process depends only on the time difference $|t - (t-k)|$ and not on the actual times $t$ and $t-k$, we can simply express the autocovariance as $\gamma_k$. Similarly, autocorrelations can be expressed as $\rho_k$.

2.5 White noise


A white noise process is a sequence of independent, identically distributed random variables $\{e_t\}$.
A white noise process has the following properties:
• constant mean: $E(e_t) = 0$
• $\gamma_k = \begin{cases} \mathrm{Var}(e_t) & \text{for } k = 0 \\ 0 & \text{for } k \ne 0 \end{cases}$
• $\rho_k = \begin{cases} 1 & \text{for } k = 0 \\ 0 & \text{for } k \ne 0 \end{cases}$

3 Stationary Time Series Models


Consider a time series generated by the following model:
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + e_t \qquad (18)$$
where $E(e_t) = 0$ and
$$E(e_t e_s) = \begin{cases} \sigma_e^2 & t = s \\ 0 & t \ne s \end{cases}$$
This is known as an autoregressive process of order 1, or AR(1).
How does $Y_t$ behave over time?
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + e_t = \phi_0 + \phi_1(\phi_0 + \phi_1 Y_{t-2} + e_{t-1}) + e_t$$
$$= \phi_0(1 + \phi_1) + \phi_1^2 Y_{t-2} + e_t + \phi_1 e_{t-1}$$
$$= \phi_0(1 + \phi_1) + \phi_1^2(\phi_0 + \phi_1 Y_{t-3} + e_{t-2}) + e_t + \phi_1 e_{t-1}$$
$$= \phi_0(1 + \phi_1 + \phi_1^2) + \phi_1^3 Y_{t-3} + e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2}$$
Continued substitution of $Y$ on the right hand side leads to
$$Y_t = \phi_0(1 + \phi_1 + \phi_1^2 + \cdots) + e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
Taking expectations on both sides gives
$$E(Y_t) = \phi_0(1 + \phi_1 + \phi_1^2 + \cdots)$$
This expectation exists if the infinite geometric series converges. A necessary and sufficient condition for this is $|\phi_1| < 1$. The expectation is then
$$E(Y_t) = \frac{\phi_0}{1 - \phi_1} \qquad (19)$$
Thus if $|\phi_1| < 1$ the $Y$ series has a constant unconditional mean at all points in time. In that case,
$$Y_t = \mu + e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots \qquad (20)$$
Consider now the variance of $Y$:
$$\begin{aligned}
\mathrm{Var}(Y_t) &= E(Y_t - \mu)^2 \\
&= E(e_t^2 + \phi_1^2 e_{t-1}^2 + \phi_1^4 e_{t-2}^2 + \cdots + 2\phi_1 e_t e_{t-1} + \cdots) \\
&= \sigma_e^2(1 + \phi_1^2 + \phi_1^4 + \cdots) \\
&= \frac{\sigma_e^2}{1 - \phi_1^2}
\end{aligned}$$
Thus, the $Y$ series has a constant unconditional variance, independent of time.
Next consider the covariance of $Y$ over time. The covariance of $Y$ with its lagged value is known as an autocovariance. The first-lag autocovariance is defined as
$$\gamma_1 = E(Y_t - \mu)(Y_{t-1} - \mu).$$
Now,
$$Y_t - \mu = e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
$$Y_{t-1} - \mu = e_{t-1} + \phi_1 e_{t-2} + \phi_1^2 e_{t-3} + \cdots$$
Hence,
$$\gamma_1 = E(e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots)(e_{t-1} + \phi_1 e_{t-2} + \phi_1^2 e_{t-3} + \cdots)$$
Taking expectations,
$$\gamma_1 = \phi_1 \sigma_e^2(1 + \phi_1^2 + \phi_1^4 + \cdots) = \phi_1 \mathrm{Var}(Y_t)$$
Similarly, the second-lag autocovariance is $\gamma_2 = \phi_1^2 \mathrm{Var}(Y_t)$ and in general $\gamma_s = \phi_1^s \mathrm{Var}(Y_t)$, $s = 0, 1, 2, \ldots$
Note that the autocovariances depend only on the lag length and are independent of the time point.
Dividing the autocovariances by the variance gives the autocorrelations:
$$\rho_s = \frac{\gamma_s}{\gamma_0} = \frac{\phi_1^s \mathrm{Var}(Y_t)}{\mathrm{Var}(Y_t)} = \phi_1^s, \quad s = 0, 1, 2, \ldots \qquad (21)$$
Note that for $|\phi_1| < 1$ the autocorrelations decay with the lag length.
Plotting the autocorrelations against the lag length gives the correlogram.
For $|\phi_1| < 1$ the mean, variance and covariances of the $Y$ series are constants, independent of time. The series is therefore weakly or covariance stationary.
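As a quick check of Equation (21), the sketch below (an added illustration; $\phi_1 = 0.8$, $\sigma_e^2 = 1$ and the sample size are arbitrary choices) simulates an AR(1) and compares sample autocorrelations with $\phi_1^s$.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.8, 50_000

# Simulate a stationary AR(1): Y_t = phi * Y_{t-1} + e_t (phi_0 = 0).
Y = np.empty(n)
Y[0] = rng.standard_normal() / np.sqrt(1 - phi**2)   # draw from the stationary distribution
for t in range(1, n):
    Y[t] = phi * Y[t - 1] + rng.standard_normal()

Yc = Y - Y.mean()
for s in range(1, 6):
    rho_hat = np.dot(Yc[s:], Yc[:-s]) / np.dot(Yc, Yc)
    print(s, round(rho_hat, 3), round(phi**s, 3))   # sample vs. theoretical phi^s
```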
An autoregressive process of order $p$, AR($p$), has the form
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + e_t \qquad (22)$$
For $p = 2$, we have the AR(2) process
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t \qquad (23)$$
Repeated substitution for the $Y$'s on the right hand side and taking expectations will give the unconditional mean of $Y_t$. However, if $Y$ is covariance stationary, the unconditional mean can be evaluated more easily as follows:
$$E(Y_t) = \phi_0 + \phi_1 E(Y_{t-1}) + \phi_2 E(Y_{t-2}) + E(e_t)$$
$$E(Y_t) = \frac{\phi_0}{1 - \phi_1 - \phi_2} \qquad (24)$$
Let $E(Y_t) = \mu = \phi_0/(1 - \phi_1 - \phi_2)$. Then $\phi_0 = \mu(1 - \phi_1 - \phi_2)$.
Substituting for $\phi_0$ in the AR(2) model, we have
$$Y_t = \mu(1 - \phi_1 - \phi_2) + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t$$
$$Y_t - \mu = \phi_1(Y_{t-1} - \mu) + \phi_2(Y_{t-2} - \mu) + e_t$$
$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + e_t$$
where $y_t = Y_t - \mu$.
Multiplying by $y_t$ and taking expectations, we have
$$E(y_t^2) = \phi_1 E(y_{t-1} y_t) + \phi_2 E(y_{t-2} y_t) + E(e_t y_t)$$
$$\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma_e^2$$
Multiplying by $y_{t-1}$ and $y_{t-2}$ and taking expectations:
$$\gamma_1 = \phi_1 \gamma_0 + \phi_2 \gamma_1$$
$$\gamma_2 = \phi_1 \gamma_1 + \phi_2 \gamma_0$$
Substituting for $\gamma_1$ and $\gamma_2$ in the preceding equation and simplifying:
$$\gamma_0 = \frac{(1 - \phi_2)\,\sigma_e^2}{(1 + \phi_2)(1 - \phi_1 - \phi_2)(1 + \phi_1 - \phi_2)}$$
For stationarity this variance must be constant and positive. Sufficient conditions for stationarity are that each term in the parentheses is positive:
$$\phi_1 + \phi_2 < 1, \quad \phi_2 - \phi_1 < 1, \quad |\phi_2| < 1$$
Dividing the two equations above for $\gamma_1$ and $\gamma_2$ by $\gamma_0$, we obtain the Yule-Walker equations for the AR(2) process:
$$\rho_1 = \phi_1 + \phi_2 \rho_1$$
$$\rho_2 = \phi_1 \rho_1 + \phi_2$$
From these, we can solve for $\rho_1$ and $\rho_2$:
$$\rho_1 = \frac{\phi_1}{1 - \phi_2}, \quad \rho_2 = \frac{\phi_1^2}{1 - \phi_2} + \phi_2$$
For $k = 3, 4, \ldots$ the autocorrelations of an AR(2) process follow a second-order difference equation:
$$\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2}$$
The stationarity conditions ensure that the acf dies out as the lag increases.
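The Yule-Walker recursion is straightforward to evaluate numerically. The sketch below (an added illustration; the coefficient values are arbitrary but satisfy the stationarity conditions) returns the theoretical ACF of a stationary AR(2).

```python
def ar2_acf(phi1, phi2, max_lag):
    """Theoretical ACF of a stationary AR(2) via the Yule-Walker recursion."""
    rho = [1.0, phi1 / (1 - phi2), phi1**2 / (1 - phi2) + phi2]
    for k in range(3, max_lag + 1):
        rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])
    return rho[: max_lag + 1]

# Illustrative parameter values: phi1 + phi2 < 1, phi2 - phi1 < 1, |phi2| < 1.
print([round(r, 3) for r in ar2_acf(0.5, 0.3, 10)])
```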
The AR(2) process may be expressed in terms of the lag operator $L$, defined as
$$Ly_t = y_{t-1}, \quad L(Ly_t) = L^2 y_t = y_{t-2}, \quad \text{and in general } L^s y_t = y_{t-s}.$$
The AR(2) process is then
$$A(L) y_t = e_t \qquad (25)$$
where
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 \qquad (26)$$
$A(L)$ is referred to as a polynomial in the lag operator.
Now,
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L)$$
where the $\lambda$'s and $\phi$'s are connected by
$$\lambda_1 + \lambda_2 = \phi_1 \quad \text{and} \quad \lambda_1 \lambda_2 = -\phi_2.$$
The inverse $A^{-1}(L)$ may be written as
$$A^{-1}(L) = \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{c}{1 - \lambda_1 L} + \frac{d}{1 - \lambda_2 L}$$
where
$$c = \frac{-\lambda_1}{\lambda_2 - \lambda_1} \quad \text{and} \quad d = \frac{\lambda_2}{\lambda_2 - \lambda_1}$$
Then
$$y_t = A^{-1}(L) e_t = \frac{c}{1 - \lambda_1 L} e_t + \frac{d}{1 - \lambda_2 L} e_t \qquad (27)$$
From the results for the AR(1), stationarity of the AR(2) requires that $|\lambda_1| < 1$ and $|\lambda_2| < 1$.
The $\lambda$'s may be seen as the roots of the quadratic equation
$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0$$
This follows from the fact that for a quadratic equation $x^2 + bx + c = 0$, the sum of the two roots equals $-b$ and the product of the two roots equals $c$.
This is known as the characteristic equation of the AR(2) process. The roots are
$$\lambda_1, \lambda_2 = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$$
These roots are real or complex, depending on whether $\phi_1^2 + 4\phi_2 > 0$ or $< 0$. If the roots are complex, the autocorrelation coefficients will display sine-wave fluctuations, which dampen towards zero provided the complex roots have moduli less than 1.
Stationarity requires that the roots of the characteristic equation, whether real or complex, have moduli less than 1. This is often stated as: the roots lie within the unit circle.
An alternative statement is that the roots of $A(z) = 1 - \phi_1 z - \phi_2 z^2$ lie outside the unit circle. The roots of $A(z)$ are the values of $z$ that solve the equation
$$A(z) = 1 - \phi_1 z - \phi_2 z^2 = 0$$
These roots are the reciprocals of those of the characteristic equation, i.e. $z_j = 1/\lambda_j$, $j = 1, 2$. So, if the $\lambda$'s lie within the unit circle, the $z$'s must lie outside the unit circle.
Hence, the stationarity condition is often stated as: the roots of the polynomial in the lag operator lie outside the unit circle.
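Either form of the condition can be checked numerically. A small sketch (an added illustration, assuming NumPy; the AR(2) coefficients are arbitrary) computes the roots of the characteristic equation and of the lag polynomial and verifies that they are reciprocals.

```python
import numpy as np

phi1, phi2 = 0.5, 0.3          # illustrative AR(2) coefficients

# Characteristic equation: lambda^2 - phi1*lambda - phi2 = 0 (roots inside the unit circle).
lam = np.roots([1.0, -phi1, -phi2])
print(lam, np.all(np.abs(lam) < 1))

# Lag polynomial A(z) = 1 - phi1*z - phi2*z^2 = 0 (roots outside the unit circle).
z = np.roots([-phi2, -phi1, 1.0])
print(z, np.all(np.abs(z) > 1), 1 / lam)   # the z's are reciprocals of the lambda's
```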

3.1 Solution of Difference Equations


Consider the first-order difference equation
$$x_t = a_1 x_{t-1}$$
The trivial solution is $x_t = x_{t-1} = \cdots = 0$.
An obvious solution is $x_t = a_1^t$, since then $x_t = a_1 x_{t-1}$ gives
$$a_1^t = a_1(a_1^{t-1}) = a_1^t$$
Multiplying $a_1^t$ by an arbitrary constant gives another solution $A a_1^t$, since
$$x_t = a_1 x_{t-1} \;\Rightarrow\; A a_1^t = a_1 A a_1^{t-1} = A a_1^t$$
Characteristics of the solution:
• If $|a_1| < 1$, $a_1^t \to 0$ as $t \to \infty$. Convergence is direct if $0 < a_1 < 1$ and oscillatory if $-1 < a_1 < 0$.
• If $|a_1| > 1$, the solution is not stable. For $a_1 > 1$ the solution $\to \infty$ as $t \to \infty$; for $a_1 < -1$ the solution oscillates explosively as $t \to \infty$.
• If $a_1 = 1$, any arbitrary constant $A$ satisfies the difference equation $x_t = x_{t-1}$.
• If $a_1 = -1$, the system is meta-stable: $a_1^t = 1$ if $t$ is even and $a_1^t = -1$ if $t$ is odd.
Consider the second-order difference equation
$$x_t = a_1 x_{t-1} + a_2 x_{t-2}$$
The solution to the first-order system suggests trying the solution $x_t = A\lambda^t$. If this is a solution, it must satisfy the difference equation:
$$A\lambda^t - a_1 A\lambda^{t-1} - a_2 A\lambda^{t-2} = 0$$
Dividing through by $A\lambda^{t-2}$, we find the values of $\lambda$ that satisfy
$$\lambda^2 - a_1 \lambda - a_2 = 0$$
The two solutions are
$$\lambda_1, \lambda_2 = \frac{a_1 \pm \sqrt{a_1^2 + 4a_2}}{2}$$
Each of these roots yields a valid solution of the second-order difference equation.
These solutions are not unique. For any two arbitrary constants $A_1$ and $A_2$, the linear combination $A_1\lambda_1^t + A_2\lambda_2^t$ also solves the difference equation:
$$A_1\lambda_1^t + A_2\lambda_2^t = a_1(A_1\lambda_1^{t-1} + A_2\lambda_2^{t-1}) + a_2(A_1\lambda_1^{t-2} + A_2\lambda_2^{t-2})$$
$$A_1(\lambda_1^t - a_1\lambda_1^{t-1} - a_2\lambda_1^{t-2}) + A_2(\lambda_2^t - a_1\lambda_2^{t-1} - a_2\lambda_2^{t-2}) = 0$$
Since $\lambda_1$ and $\lambda_2$ each solve the second-order difference equation, the terms in brackets equal zero.
Therefore the general solution of the second-order difference equation is
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t \qquad (28)$$
There are three possible cases for the solutions, depending on $\sqrt{a_1^2 + 4a_2}$.
Case 1: $a_1^2 + 4a_2 > 0$
The roots are real and distinct:
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t$$
If $|\lambda_1|$ and $|\lambda_2|$ are both $< 1$, the series is convergent. If either root is greater than 1 in absolute value, the series is explosive.
Example.
$$x_t = 0.2x_{t-1} + 0.35x_{t-2}$$
$$x_t - 0.2x_{t-1} - 0.35x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.2\lambda - 0.35 = 0$$
$$\lambda_1, \lambda_2 = \frac{0.2 \pm \sqrt{0.04 + (4)(0.35)}}{2} = \frac{0.2 \pm \sqrt{1.44}}{2} = 0.7,\ -0.5$$
So $x_t = A_1(0.7)^t + A_2(-0.5)^t$: a convergent series.
Suppose instead
$$x_t = 0.7x_{t-1} + 0.35x_{t-2}$$
$$x_t - 0.7x_{t-1} - 0.35x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.7\lambda - 0.35 = 0$$
$$\lambda_1, \lambda_2 = \frac{0.7 \pm \sqrt{0.49 + (4)(0.35)}}{2} = \frac{0.7 \pm \sqrt{1.89}}{2} = 1.037,\ -0.337$$
So $x_t = A_1(1.037)^t + A_2(-0.337)^t$: the series is explosive.
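The two examples can be verified by simply iterating the difference equation; a short sketch (an added illustration; the starting values and horizons are arbitrary):

```python
def iterate(a1, a2, x0=1.0, x1=1.0, n=30):
    """Iterate x_t = a1*x_{t-1} + a2*x_{t-2} from arbitrary starting values."""
    x = [x0, x1]
    for _ in range(n - 2):
        x.append(a1 * x[-1] + a2 * x[-2])
    return x

print(iterate(0.2, 0.35)[-5:])           # converges towards zero (dominant root 0.7)
print(iterate(0.7, 0.35, n=200)[-3:])    # explodes (dominant root 1.037 > 1)
```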
Case 2: $a_1^2 + 4a_2 = 0$
The two roots are then
$$\lambda_1 = \lambda_2 = \frac{a_1}{2}$$
Hence, one solution is
$$x_t = \left(\frac{a_1}{2}\right)^t$$
In this case, it can be shown that another solution is
$$x_t = t\left(\frac{a_1}{2}\right)^t$$
If this is a solution, it must satisfy $x_t - a_1 x_{t-1} - a_2 x_{t-2} = 0$, i.e.
$$t\left(\frac{a_1}{2}\right)^t - a_1(t-1)\left(\frac{a_1}{2}\right)^{t-1} - a_2(t-2)\left(\frac{a_1}{2}\right)^{t-2} = 0$$
Dividing through by $\left(\frac{a_1}{2}\right)^{t-2}$, we obtain
$$t\left(\frac{a_1}{2}\right)^2 - a_1(t-1)\frac{a_1}{2} - a_2(t-2) = 0$$
Collecting terms,
$$-\left(\frac{a_1^2}{4} + a_2\right)t + \left(\frac{a_1^2}{2} + 2a_2\right) = 0$$
Since $a_1^2 + 4a_2 = 0$, each of the bracketed terms equals zero.
So, the general solution is
$$x_t = A_1\left(\frac{a_1}{2}\right)^t + A_2\, t\left(\frac{a_1}{2}\right)^t$$
The series will be explosive if $|a_1| > 2$ and convergent if $|a_1| < 2$.
Case 3: $a_1^2 + 4a_2 < 0$ (so $a_2 < 0$)
The roots are complex:
$$\lambda_1 = \frac{a_1 + i\sqrt{|a_1^2 + 4a_2|}}{2}, \quad \lambda_2 = \frac{a_1 - i\sqrt{|a_1^2 + 4a_2|}}{2}$$
Expressing the roots in polar coordinate form, the solution can be written as
$$x_t = \beta_1 r^t \cos(\theta t + \beta_2)$$
where $\beta_1$ and $\beta_2$ are arbitrary constants,
$$r = (-a_2)^{0.5} \quad \text{and} \quad \cos\theta = \frac{a_1}{2(-a_2)^{0.5}}$$
The solution shows a wave-like pattern. Since the cosine function is bounded, the stability condition depends on $r$, i.e. on $(-a_2)^{0.5}$:
• $|a_2| = 1$: oscillations with unchanging amplitude
• $|a_2| < 1$: damped oscillations
• $|a_2| > 1$: explosive oscillations
Example
$$x_t = 1.6x_{t-1} - 0.9x_{t-2}$$
$$x_t - 1.6x_{t-1} + 0.9x_{t-2} = 0$$
$$\lambda_1, \lambda_2 = \frac{1.6 \pm \sqrt{(1.6)^2 - (4)(0.9)}}{2} = \frac{1.6 \pm 1.02i}{2} = 0.8 \pm 0.51i$$
$$r = (0.9)^{0.5} = 0.949$$
$$\cos\theta = \frac{1.6}{2(0.9)^{0.5}} = 0.843 \;\Rightarrow\; \theta = 0.567$$
$$x_t = \beta_1(0.949)^t \cos(0.567t + \beta_2)$$
Damped sine waves.
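The same example can be checked numerically; the sketch below (an added illustration, assuming NumPy) recovers the modulus $r$ and the frequency $\theta$ from the complex roots.

```python
import numpy as np

a1, a2 = 1.6, -0.9

lam = np.roots([1.0, -a1, -a2])            # complex conjugate roots 0.8 +/- 0.51i
r = np.sqrt(-a2)                           # modulus of the roots, (-a2)^0.5
theta = np.arccos(a1 / (2 * np.sqrt(-a2)))
print(lam)                                 # approximately [0.8+0.51j, 0.8-0.51j]
print(r, np.abs(lam[0]))                   # 0.949 both ways
print(theta)                               # 0.567 radians
```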
3.2 Partial autocorrelation function (pacf)
It is difficult to distinguish between AR processes of different order using the correlogram. Partial autocorrelations provide better discrimination between different AR processes.
In an AR(2) process, the $\phi_2$ parameter is the partial correlation between $y_t$ and $y_{t-2}$, holding $y_{t-1}$ constant.
Recall the definition of the partial correlation coefficient:
$$r_{13.2} = \frac{r_{13} - r_{12}r_{23}}{\sqrt{1 - r_{12}^2}\,\sqrt{1 - r_{23}^2}}$$
Let 1, 2 and 3 denote $y$ and its first and second lags, respectively. Then
$$r_{12} = r_{23} = \rho_1, \quad r_{13} = \rho_2$$
Substituting these into the formula for the partial correlation gives
$$r_{13.2} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2}$$
Note that the Yule-Walker equations above can be solved for $\phi_2$ to get
$$\phi_2 = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = r_{13.2}$$
The Yule-Walker equations for an AR(3) process
$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \phi_3 y_{t-3} + e_t$$
are
$$\rho_1 = \phi_1 + \phi_2\rho_1 + \phi_3\rho_2$$
$$\rho_2 = \phi_1\rho_1 + \phi_2 + \phi_3\rho_1$$
$$\rho_3 = \phi_1\rho_2 + \phi_2\rho_1 + \phi_3$$
$\phi_3$ is the partial correlation between $y_t$ and $y_{t-3}$. If, however, the process is only AR(2), the acf shows that
$$\rho_3 = \phi_1\rho_2 + \phi_2\rho_1$$
This implies that $\phi_3 = 0$. Similar results carry over to higher order AR processes. For example, the pacf of an AR(3) cuts off after the third lag.
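In practice the pacf is estimated from data. A brief sketch (an added illustration, assuming statsmodels is available; the AR(2) coefficients, sample size and seed are arbitrary) simulates an AR(2) and shows the sample pacf spiking at lags 1 and 2 and dropping to roughly zero afterwards.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(3)
phi1, phi2, n = 0.5, 0.3, 20_000

# Simulate a stationary AR(2); the burn-in discards the effect of zero starting values.
y = np.zeros(n + 200)
for t in range(2, len(y)):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + rng.standard_normal()
y = y[200:]

print(np.round(pacf(y, nlags=5), 3))   # spikes at lags 1 and 2, roughly zero afterwards
```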

4 The Moving Average Processes


In an MA process, the variable is expressed as a linear function of current and past white noise disturbances. An MA($q$) process is given by
$$Y_t = e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2} + \cdots + \theta_q e_{t-q} \qquad (29)$$
For $q = 1$, we have the MA(1) process
$$Y_t = e_t + \theta_1 e_{t-1} \qquad (30)$$
Consider the variance and autocovariances of this process:
$$E(Y_t^2) = E(e_t + \theta_1 e_{t-1})^2 = \sigma_e^2(1 + \theta_1^2)$$
$$E(Y_t Y_{t-1}) = E[(e_t + \theta_1 e_{t-1})(e_{t-1} + \theta_1 e_{t-2})] \;\Rightarrow\; \gamma_1 = \theta_1\sigma_e^2$$
$$E(Y_t Y_{t-2}) = E[(e_t + \theta_1 e_{t-1})(e_{t-2} + \theta_1 e_{t-3})] \;\Rightarrow\; \gamma_2 = 0$$
Similarly, all higher-order autocovariances equal zero.
The autocorrelations of an MA(1) process are therefore
$$\rho_1 = \frac{\theta_1}{1 + \theta_1^2}, \quad \rho_2 = \rho_3 = \cdots = 0$$
If $|\theta_1| < 1$, the MA(1) process may be expressed as an infinite series in the lagged $Y$'s. Hence, its partial autocorrelations do not cut off but damp toward zero.

5 The ARMA Processes


The ARMA($p,q$) process is defined as
$$y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q} \qquad (31)$$
For $p = q = 1$ we have the ARMA(1,1) process:
$$y_t = \phi_1 y_{t-1} + e_t + \theta_1 e_{t-1} \qquad (32)$$
Squaring this and taking expectations, we can show that
$$\gamma_0 = \frac{1 + 2\phi_1\theta_1 + \theta_1^2}{1 - \phi_1^2}\,\sigma_e^2$$
Multiplying by $y_{t-1}$ and taking expectations yields
$$\gamma_1 = \phi_1\gamma_0 + \theta_1\sigma_e^2 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 - \phi_1^2}\,\sigma_e^2$$
Higher-order autocovariances are given by
$$\gamma_k = \phi_1\gamma_{k-1}, \quad k = 2, 3, \ldots$$
The autocorrelation function of the ARMA(1,1) process is thus
$$\rho_1 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 + 2\phi_1\theta_1 + \theta_1^2}, \quad \rho_k = \phi_1\rho_{k-1}, \quad k = 2, 3, \ldots$$
The first coefficient depends on the parameters of both the AR and MA parts of the process. Subsequent coefficients decline exponentially at a rate that depends on the AR parameter.
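These two results fully determine the theoretical ACF. A small sketch (an added illustration; the parameter values are arbitrary):

```python
def arma11_acf(phi, theta, max_lag):
    """Theoretical ACF of an ARMA(1,1): rho_1 from the formula, then rho_k = phi * rho_{k-1}."""
    rho = [1.0, (phi + theta) * (1 + phi * theta) / (1 + 2 * phi * theta + theta**2)]
    for _ in range(2, max_lag + 1):
        rho.append(phi * rho[-1])
    return rho

print([round(r, 3) for r in arma11_acf(0.7, 0.4, 8)])   # rho_1 = 0.819, then geometric decay at rate 0.7
```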

5.1 Patterns of ARMA(p,q) ACF and PACF
General ARMA($p,q$) model:
$$y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$$
(A simulation sketch showing how these patterns are inspected in practice follows the list below.)
White noise, i.e. $p = q = 0$:
• All $\rho_s = 0$ and all $\phi_{ss} = 0$.
AR(1), i.e. $p = 1$, $q = 0$, $0 < \phi_1 < 1$:
• ACF decays directly: $\rho_s = \phi_1^s$.
• PACF cuts off after lag 1: $\phi_{11} = \rho_1$, $\phi_{ss} = 0$ for $s \ge 2$.
AR(1), i.e. $p = 1$, $q = 0$, $-1 < \phi_1 < 0$:
• ACF exhibits oscillating decay: $\rho_s = \phi_1^s$.
• PACF cuts off after lag 1: $\phi_{11} = \rho_1$, $\phi_{ss} = 0$ for $s \ge 2$.
AR($p$), $p \ge 2$, $q = 0$:
• ACF decays toward zero; the decay may be direct or may oscillate.
• PACF spikes through lag $p$; all $\phi_{ss} = 0$ for $s > p$.
MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 > 0$:
• ACF exhibits a positive spike at lag 1; $\rho_s = 0$ for $s > 1$.
• PACF exhibits oscillating decay:
$$\phi_{ss} = \frac{-(-\theta_1)^s}{1 + \theta_1^2 + \cdots + \theta_1^{2s}}$$
MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 < 0$:
• ACF exhibits a negative spike at lag 1; $\rho_s = 0$ for $s > 1$.
• PACF exhibits monotonic decay.
ARMA(1,1), i.e. $p = q = 1$, $0 < \phi_1 < 1$:
• ACF exhibits monotonic decay beginning after lag 1.
• PACF exhibits oscillating decay beginning after lag 1; $\phi_{11} = \rho_1$.
ARMA(1,1), i.e. $p = q = 1$, $-1 < \phi_1 < 0$:
• ACF exhibits oscillating decay beginning after lag 1.
• PACF exhibits monotonic decay beginning after lag 1; $\phi_{11} = \rho_1$.
ARMA($p,q$):
• ACF decays (either monotonically or in an oscillatory fashion) beginning after lag $q$.
• PACF decays (either monotonically or in an oscillatory fashion) beginning after lag $p$.
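As referenced above, the sketch below (an added illustration, assuming statsmodels and matplotlib are available; the ARMA(1,1) parameters and sample size are arbitrary) generates a series and plots its sample ACF and PACF for comparison with these patterns.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# ARMA(1,1) with phi_1 = 0.7 and theta_1 = 0.4 (illustrative values).
# ArmaProcess expects the lag-polynomial coefficients: [1, -phi_1] and [1, theta_1].
np.random.seed(4)
proc = ArmaProcess(ar=[1, -0.7], ma=[1, 0.4])
y = proc.generate_sample(nsample=2000)

fig, axes = plt.subplots(2, 1, figsize=(7, 6))
plot_acf(y, lags=20, ax=axes[0])    # decays beginning after lag q = 1
plot_pacf(y, lags=20, ax=axes[1])   # decays beginning after lag p = 1
plt.tight_layout()
plt.show()
```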

6 Wold's Theorem
Let $\{x_t\}$ be any zero-mean covariance stationary process. Then we can write it as
$$x_t = B(L)e_t = \sum_{i=0}^{\infty} b_i e_{t-i} \qquad (33)$$
where $e_t \sim WN(0, \sigma_e^2)$, $b_0 = 1$ and $\sum_{i=0}^{\infty} b_i^2 < \infty$.
It is clear that an MA($q$) attempts to approximate Wold's representation.
What about an AR($p$)? Can it be seen as an approximation of Wold's representation?
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + e_t$$
$$x_t - \phi_1 x_{t-1} - \phi_2 x_{t-2} - \cdots - \phi_p x_{t-p} = e_t$$
$$\Phi(L)x_t = e_t$$
where
$$\Phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p$$
If $x_t$ is covariance stationary, the roots of $\Phi(L)$ all lie outside the unit circle. Then $x_t$ can be expressed as a convergent infinite moving average of the innovations:
$$x_t = \frac{1}{\Phi(L)}e_t$$
For example, in the case of AR(1), $\Phi(L) = 1 - \phi_1 L$ and
$$\frac{1}{\Phi(L)} = \frac{1}{1 - \phi_1 L} = 1 + \phi_1 L + \phi_1^2 L^2 + \cdots$$
and
$$x_t = \frac{1}{\Phi(L)}e_t = e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
Both MA and AR processes impose restrictions in the estimation of the Wold representation. A more flexible form is provided by the ARMA processes:
$$x_t = \phi_1 x_{t-1} + \cdots + \phi_p x_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$$
$$x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p} = e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$$
$$\Phi(L)x_t = \Theta(L)e_t$$
$$x_t = \frac{\Theta(L)}{\Phi(L)}e_t$$
For example, an ARMA(1,1) gives
$$x_t = \frac{1 + \theta_1 L}{1 - \phi_1 L}e_t = (1 + \theta_1 L)(1 + \phi_1 L + \phi_1^2 L^2 + \cdots)e_t = \left[1 + (\phi_1 + \theta_1)L + (\phi_1\theta_1 + \phi_1^2)L^2 + \cdots\right]e_t$$

Why do AR and MA processes have ACF and PACF that behave as described above?
Consider the MA(1) process $x_t = e_t + \theta_1 e_{t-1}$.
Only the innovations up to lag 1 affect $x_t$, so values more than one period apart have no common influences. Therefore the ACF cuts off abruptly after one period.
Similar reasoning explains why the ACF of an MA($q$) process cuts off abruptly after $q$ lags.
All MA processes are covariance stationary, regardless of the values of the parameters.
An MA($q$) process $x_t = e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$ is said to be invertible if and only if the roots of the lag polynomial $1 + \theta_1 L + \cdots + \theta_q L^q = 0$ all lie outside the unit circle.
Invertibility means that the MA($q$) process can be expressed as an infinite order AR process.
For example, the MA(1) process $x_t = e_t + \theta_1 e_{t-1}$ is invertible if the root of $1 + \theta_1 L = 0$ is greater than 1 in absolute value, i.e. $|\theta_1| < 1$, in which case
$$\frac{1}{1 + \theta_1 L}x_t = e_t$$
$$(1 - \theta_1 L + \theta_1^2 L^2 - \cdots)x_t = e_t$$
$$x_t = \theta_1 x_{t-1} - \theta_1^2 x_{t-2} + \theta_1^3 x_{t-3} - \cdots + e_t$$
The PACF, $\phi_{ss}$, is defined as the regression coefficient on the last term in an autoregression of order $s$:
$$x_t = \phi_{s1} x_{t-1} + \phi_{s2} x_{t-2} + \cdots + \phi_{ss} x_{t-s} + v_t$$
Since an invertible MA process has an infinite order AR representation, its PACF will not cut off abruptly after some lag.
As shown above, a covariance stationary AR(1) process can be represented as an infinite order MA process:
$$x_t = \frac{1}{\Phi(L)}e_t = e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
Values of the series at different points in time have common influences, no matter how far apart they are. Therefore, its ACF does not cut off abruptly. Similar reasoning leads to a persistent ACF for higher order AR processes.
The PACF of an AR($p$) process cuts off abruptly after $p$ lags because if the model generating the series is
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + e_t$$
then the parameter $\phi_{p+1}$ in the autoregression
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \phi_{p+1} x_{t-p-1} + e_t$$
must be zero.
