Time Series Analysis
Semester 1, 2017-18
LINEAR TIME SERIES MODELS
1 Introduction
Time series are observations collected over time.
Two aims of time series analysis are (i) to model the underlying mechanism that generated the observations and (ii) to use the fitted model to forecast the series.
Observed series exhibit different features. Any credible model must be able to account for at least the salient features.
Some examples of time series plots:
[Figures: time plot of LA Annual Rainfall (Inches vs Year) and scatterplot of each year's rainfall against the previous year's; time plot of an industrial Colour Property by Batch and scatterplot of each batch's value against the previous batch's; time plot of an abundance series (Number vs Year) and scatterplot of Abundance against the previous year's value; time plot of a Temperature series against Time and scatterplot of Temperature against the previous observation.]
The mean function of a process $\{Y_t\}$ is defined as
$$\mu_t = E(Y_t), \qquad t = 0, 1, 2, \ldots \tag{1}$$
Note that $\mu_t$ is just the expected value of the process at time $t$ and, in general, it can vary with time.
The autocovariance function, $\gamma_{t,s}$, is defined as
$$\gamma_{t,s} = \mathrm{Cov}(Y_t, Y_s), \qquad t, s = 0, 1, 2, \ldots \tag{2}$$
where $\mathrm{Cov}(Y_t, Y_s) = E[(Y_t - \mu_t)(Y_s - \mu_s)] = E(Y_t Y_s) - \mu_t \mu_s$.
The following results are useful for evaluating the covariance properties of various time series models:
$$\mathrm{Cov}\left[\sum_{i=1}^{m} c_i Y_{t_i},\; \sum_{j=1}^{n} d_j Y_{s_j}\right] = \sum_{i=1}^{m}\sum_{j=1}^{n} c_i d_j \,\mathrm{Cov}(Y_{t_i}, Y_{s_j}) \tag{7}$$
$$\mathrm{Var}\left[\sum_{i=1}^{m} c_i Y_{t_i}\right] = \sum_{i=1}^{m} c_i^2\,\mathrm{Var}(Y_{t_i}) + 2\sum_{i=2}^{m}\sum_{j=1}^{i-1} c_i c_j\,\mathrm{Cov}(Y_{t_i}, Y_{t_j}) \tag{8}$$
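As a quick numerical sanity check of (7) and (8), the following sketch (assuming numpy is available; the covariance matrix and constants are purely illustrative) compares the empirical variance of a linear combination with the right-hand side of (8):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three correlated variables standing in for Y_{t_1}, Y_{t_2}, Y_{t_3}
Y = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[1.0, 0.5, 0.2],
                                 [0.5, 1.0, 0.5],
                                 [0.2, 0.5, 1.0]],
                            size=100_000)
c = np.array([1.0, -2.0, 0.5])         # arbitrary constants c_i

# Left-hand side of (8): empirical variance of the linear combination
lhs = (Y @ c).var()

# Right-hand side of (8): sum of c_i^2 Var(Y_i) plus twice the weighted covariances
S = np.cov(Y, rowvar=False)
rhs = np.sum(c**2 * np.diag(S)) + 2 * sum(
    c[i] * c[j] * S[i, j] for i in range(1, 3) for j in range(i))
print(lhs, rhs)                        # the two numbers agree up to sampling error
```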
The process
$$Y_t = Y_{t-1} + e_t, \qquad Y_1 = e_1 \tag{9}$$
is called a random walk process.
By repeated substitution it can easily be shown that
$$Y_1 = e_1, \quad Y_2 = e_2 + e_1, \quad \ldots, \quad Y_t = e_t + e_{t-1} + \cdots + e_1 \tag{10}$$
and hence $E(Y_t) = 0$ while $\mathrm{Var}(Y_t) = t\,\sigma_e^2$ grows linearly with time.
[Figure: time plot of a simulated random walk.]
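A minimal simulation (numpy assumed; sample sizes are illustrative) of the consequence of (10) that $\mathrm{Var}(Y_t) = t\,\sigma_e^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T = 50_000, 100
e = rng.normal(size=(n_paths, T))      # white noise with sigma_e = 1
Y = e.cumsum(axis=1)                   # Y_t = e_1 + ... + e_t, as in (10)

# Var(Y_t) should be close to t * sigma_e^2 at every time point
for t in (1, 10, 50, 100):
    print(t, Y[:, t - 1].var())        # prints values close to t
```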
2.4 Stationarity
A process is said to be strictly stationary if the joint distribution of $Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}$ is the same as that of $Y_{t_1 - k}, Y_{t_2 - k}, \ldots, Y_{t_n - k}$ for all choices of time points $t_1, t_2, \ldots, t_n$ and all lags $k$.
This is a strong assumption and often difficult to establish in practice.
A weaker version, referred to as weak (or second-order) stationarity, requires that
- the mean of $Y_t$ is constant over time;
- $\gamma_{t,t-k} = \gamma_{0,k}$ for all times $t$ and lags $k$.
Since the covariance of a stationary process depends only on the time difference $|t - (t-k)|$ and not on the actual times $t$ and $t-k$, we can simply express the autocovariance as $\gamma_k$. Similarly, the autocorrelations can be expressed as $\rho_k$.
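In practice $\gamma_k$ and $\rho_k$ are estimated from data. A hand-rolled sketch of the usual estimators (numpy assumed; `sample_acf` is an illustrative helper name, not from the notes):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_k = c_k / c_0, where
    c_k = (1/n) * sum_{t=k+1}^{n} (x_t - xbar)(x_{t-k} - xbar)."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    d = x - xbar
    c = np.array([np.sum(d[k:] * d[:n - k]) / n for k in range(max_lag + 1)])
    return c / c[0]
```

Calling `sample_acf(y, 20)` gives the correlogram ordinates for lags 0 through 20.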
Consider the process
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + e_t \tag{18}$$
where
$$E(e_t) = 0, \qquad E(e_t e_s) = \begin{cases} \sigma_e^2 & t = s \\ 0 & t \neq s \end{cases}$$
This is known as an autoregressive process of order 1, or AR(1).
How does $Y_t$ behave over time?
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + e_t = \phi_0 + \phi_1(\phi_0 + \phi_1 Y_{t-2} + e_{t-1}) + e_t$$
$$= \phi_0(1 + \phi_1) + \phi_1^2 Y_{t-2} + e_t + \phi_1 e_{t-1}$$
$$= \phi_0(1 + \phi_1) + \phi_1^2(\phi_0 + \phi_1 Y_{t-3} + e_{t-2}) + e_t + \phi_1 e_{t-1}$$
$$= \phi_0(1 + \phi_1 + \phi_1^2) + \phi_1^3 Y_{t-3} + e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2}$$
Continued substitution on the right-hand side leads to
$$Y_t = \phi_0(1 + \phi_1 + \phi_1^2 + \cdots) + e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
Taking expectations on both sides gives
$$E(Y_t) = \phi_0(1 + \phi_1 + \phi_1^2 + \cdots)$$
This expectation exists if the infinite geometric series converges. A necessary and sufficient condition for this is $|\phi_1| < 1$. The expectation is then
$$E(Y_t) = \frac{\phi_0}{1 - \phi_1} \tag{19}$$
Repeated use of the covariance results gives $\mathrm{Var}(Y_t) = \sigma_e^2/(1 - \phi_1^2)$ and $\gamma_s = \phi_1^s\,\mathrm{Var}(Y_t)$, so the autocorrelations are
$$\rho_s = \frac{\gamma_s}{\gamma_0} = \frac{\phi_1^s\,\mathrm{Var}(Y_t)}{\mathrm{Var}(Y_t)} = \phi_1^s, \qquad s = 0, 1, 2, \ldots \tag{21}$$
Note that for $|\phi_1| < 1$ the autocorrelations decay with the lag length. Plotting the autocorrelations against the lag length gives the correlogram.
For $|\phi_1| < 1$ the mean, variance and covariances of the $Y$ series are constants, independent of time. The series is therefore weakly or covariance stationary.
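A short sketch (assuming numpy and the `sample_acf` helper defined earlier) that simulates a stationary AR(1) and compares the sample ACF with the theoretical $\phi_1^s$ of (21):

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, T = 0.8, 20_000
e = rng.normal(size=T)

y = np.empty(T)
y[0] = e[0]
for t in range(1, T):                  # Y_t = phi_1 Y_{t-1} + e_t (phi_0 = 0)
    y[t] = phi1 * y[t - 1] + e[t]

r = sample_acf(y, 5)                   # helper sketched earlier
for s in range(6):
    print(s, round(r[s], 3), round(phi1**s, 3))   # sample vs. phi_1^s
```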
An autoregressive process of order $p$, AR($p$), has the form
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + e_t \tag{22}$$
For $p = 2$ we have the AR(2) process $Y_t = \phi_0 + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t$, for which
$$E(Y_t) = \frac{\phi_0}{1 - \phi_1 - \phi_2} \tag{24}$$
Let $E(Y_t) = \mu = \dfrac{\phi_0}{1 - \phi_1 - \phi_2}$. Then $\phi_0 = \mu(1 - \phi_1 - \phi_2)$.
Substituting for $\phi_0$ in the AR(2) model, we have
$$Y_t = \mu(1 - \phi_1 - \phi_2) + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t$$
$$Y_t - \mu = \phi_1(Y_{t-1} - \mu) + \phi_2(Y_{t-2} - \mu) + e_t$$
$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + e_t$$
where $y_t = Y_t - \mu$.
Multiplying by $y_t$ and taking expectations, we have
$$E(y_t^2) = \phi_1 E(y_{t-1} y_t) + \phi_2 E(y_{t-2} y_t) + E(e_t y_t)$$
$$\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma_e^2$$
Multiplying by $y_{t-1}$ and $y_{t-2}$ and taking expectations:
$$\gamma_1 = \phi_1 \gamma_0 + \phi_2 \gamma_1$$
$$\gamma_2 = \phi_1 \gamma_1 + \phi_2 \gamma_0$$
Substituting for $\gamma_1$ and $\gamma_2$ in the preceding equation and simplifying:
$$\gamma_0 = \frac{(1 - \phi_2)\,\sigma_e^2}{(1 + \phi_2)(1 - \phi_1 - \phi_2)(1 + \phi_1 - \phi_2)}$$
For stationarity this variance must be constant and positive. Sufficient conditions for stationarity are that each term in the parentheses is positive:
$$\phi_1 + \phi_2 < 1, \qquad \phi_2 - \phi_1 < 1, \qquad |\phi_2| < 1$$
Dividing the two equations above for $\gamma_1$ and $\gamma_2$ by $\gamma_0$, we obtain the Yule-Walker equations for the AR(2) process:
$$\rho_1 = \phi_1 + \phi_2 \rho_1$$
$$\rho_2 = \phi_1 \rho_1 + \phi_2$$
From these we can solve for $\rho_1$ and $\rho_2$:
$$\rho_1 = \frac{\phi_1}{1 - \phi_2}, \qquad \rho_2 = \frac{\phi_1^2}{1 - \phi_2} + \phi_2$$
For $k = 3, 4, \ldots$ the autocorrelations of an AR(2) process follow a second-order difference equation:
$$\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2}$$
The stationarity conditions ensure that the ACF dies out as the lag increases.
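The Yule-Walker recursion generates the entire theoretical ACF of an AR(2). A minimal sketch (plain Python; `ar2_acf` is an illustrative helper name):

```python
def ar2_acf(phi1, phi2, max_lag):
    """Theoretical AR(2) autocorrelations: rho_0 = 1, rho_1 = phi1/(1 - phi2),
    then rho_k = phi1*rho_{k-1} + phi2*rho_{k-2} for k >= 2."""
    rho = [1.0, phi1 / (1.0 - phi2)]
    for _ in range(2, max_lag + 1):
        rho.append(phi1 * rho[-1] + phi2 * rho[-2])
    return rho[:max_lag + 1]

# With phi1^2 + 4*phi2 < 0 the ACF oscillates like a damped sine wave
print(ar2_acf(1.0, -0.5, 10))
```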
The AR(2) process may be expressed in terms of the lag operator $L$, defined by
$$L y_t = y_{t-1}, \qquad L(L y_t) = L^2 y_t = y_{t-2}$$
In general, $L^s y_t = y_{t-s}$. The AR(2) process is then
$$A(L) y_t = e_t \tag{25}$$
where
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 \tag{26}$$
$A(L)$ is referred to as a polynomial in the lag operator.
Now,
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L)$$
where the $\lambda$'s and $\phi$'s are connected by
$$\lambda_1 + \lambda_2 = \phi_1 \quad \text{and} \quad \lambda_1 \lambda_2 = -\phi_2$$
The inverse $A^{-1}(L)$ may be written as
$$A^{-1}(L) = \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{c}{1 - \lambda_1 L} + \frac{d}{1 - \lambda_2 L}$$
where $c = -\lambda_1/(\lambda_2 - \lambda_1)$ and $d = \lambda_2/(\lambda_2 - \lambda_1)$. Then
$$y_t = A^{-1}(L) e_t = \frac{c}{1 - \lambda_1 L}\, e_t + \frac{d}{1 - \lambda_2 L}\, e_t \tag{27}$$
From the results for the AR(1), stationarity of the AR(2) requires that $|\lambda_1| < 1$ and $|\lambda_2| < 1$.
The $\lambda$'s may be seen as the roots of the quadratic equation
$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0$$
This follows from the fact that for a quadratic equation $x^2 + bx + c = 0$, the sum of the two roots equals $-b$ and the product of the two roots equals $c$.
This is known as the characteristic equation of the AR(2) process. Its roots are
$$\lambda_{1,2} = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$$
These roots are real or complex depending on whether $\phi_1^2 + 4\phi_2 > 0$ or $< 0$. If the roots are complex, the autocorrelation coefficients will display sine-wave fluctuations, which dampen towards zero provided the complex roots have moduli less than 1.
Stationarity requires that the roots of the characteristic equation, whether real or complex, have moduli less than 1. This is often stated as: the roots lie within the unit circle.
An alternative statement is that the roots of $A(z) = 1 - \phi_1 z - \phi_2 z^2$ lie outside the unit circle. The roots of $A(z)$ are the values of $z$ that solve the equation
$$A(z) = 1 - \phi_1 z - \phi_2 z^2 = 0$$
These roots are the reciprocals of those of the characteristic equation, i.e.
$$z_j = 1/\lambda_j, \qquad j = 1, 2$$
So if the $\lambda$'s lie within the unit circle, the $z$'s must lie outside it. Hence the stationarity condition is often stated as: the roots of the polynomial in the lag operator lie outside the unit circle.
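Both formulations are easy to check numerically. A sketch (numpy assumed; `is_stationary_ar2` is an illustrative helper) that finds the roots of $A(z)$ and of the characteristic equation:

```python
import numpy as np

def is_stationary_ar2(phi1, phi2):
    """AR(2) stationarity: roots of A(z) = 1 - phi1*z - phi2*z^2 must lie
    outside the unit circle (equivalently, the lambda's inside it)."""
    z_roots = np.roots([-phi2, -phi1, 1.0])   # coefficients of z^2, z, constant
    lambdas = 1.0 / z_roots                   # roots of the characteristic equation
    return bool(np.all(np.abs(z_roots) > 1)), np.abs(lambdas)

print(is_stationary_ar2(1.0, -0.5))   # complex lambdas, modulus ~0.707: stationary
print(is_stationary_ar2(0.7, 0.35))   # one lambda ~1.037 > 1: not stationary
```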
Consider the 1st-order difference equation $x_t = a_1 x_{t-1}$. A solution is $x_t = a_1^t$, since $a_1^t = a_1\, a_1^{t-1}$.
Multiplying $a_1^t$ by an arbitrary constant gives another solution $A a_1^t$, since
$$x_t = a_1 x_{t-1} \;\Rightarrow\; a_1 (A a_1^{t-1}) = A a_1^t$$
Characteristics of the solution:
- If $|a_1| < 1$, $a_1^t \to 0$ as $t \to \infty$: direct convergence if $0 < a_1 < 1$, oscillatory convergence if $-1 < a_1 < 0$.
- If $|a_1| > 1$ the solution is not stable. For $a_1 > 1$ the solution $\to \infty$ as $t \to \infty$; for $a_1 < -1$ the solution oscillates explosively as $t \to \infty$.
- If $a_1 = 1$, any arbitrary constant $A$ satisfies the difference equation $x_t = x_{t-1}$.
- If $a_1 = -1$ the system is meta-stable: $a_1^t = 1$ if $t$ is even and $a_1^t = -1$ if $t$ is odd.
Consider the 2nd-order difference equation:
$$x_t = a_1 x_{t-1} + a_2 x_{t-2}$$
The solution to the 1st-order system suggests trying the solution $x_t = A\lambda^t$. If this is a solution, it must satisfy the difference equation:
$$A\lambda^t - a_1 A\lambda^{t-1} - a_2 A\lambda^{t-2} = 0$$
Dividing through by $A\lambda^{t-2}$, we seek the values of $\lambda$ that satisfy
$$\lambda^2 - a_1 \lambda - a_2 = 0$$
The two solutions are
$$\lambda_{1,2} = \frac{a_1 \pm \sqrt{a_1^2 + 4a_2}}{2}$$
Each of these roots yields a valid solution for the 2nd-order difference equation.
These solutions are not unique. For any two arbitrary constants $A_1$ and $A_2$, the linear combination $A_1\lambda_1^t + A_2\lambda_2^t$ also solves the difference equation:
$$A_1\lambda_1^t + A_2\lambda_2^t = a_1(A_1\lambda_1^{t-1} + A_2\lambda_2^{t-1}) + a_2(A_1\lambda_1^{t-2} + A_2\lambda_2^{t-2})$$
$$A_1(\lambda_1^t - a_1\lambda_1^{t-1} - a_2\lambda_1^{t-2}) + A_2(\lambda_2^t - a_1\lambda_2^{t-1} - a_2\lambda_2^{t-2}) = 0$$
Since $\lambda_1$ and $\lambda_2$ each solve the 2nd-order difference equation, the terms in the brackets equal zero.
Therefore the solution to the 2nd-order difference equation is
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t \tag{28}$$
There are three possible cases for the solutions, depending on $\sqrt{a_1^2 + 4a_2}$.

Case 1: $a_1^2 + 4a_2 > 0$
The roots will be real and distinct:
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t$$
If $|\lambda_1|$ and $|\lambda_2|$ are both $< 1$, the series is convergent. If either root is greater than 1 in absolute value, the series is explosive.
Example:
$$x_t = 0.2 x_{t-1} + 0.35 x_{t-2}$$
$$x_t - 0.2 x_{t-1} - 0.35 x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.2\lambda - 0.35 = 0$$
$$\lambda_{1,2} = \frac{0.2 \pm \sqrt{0.04 + (4)(0.35)}}{2} = \frac{0.2 \pm \sqrt{1.44}}{2} = 0.7,\; -0.5$$
So $x_t = A_1(0.7)^t + A_2(-0.5)^t$: a convergent series.
Suppose instead
$$x_t = 0.7 x_{t-1} + 0.35 x_{t-2}$$
$$x_t - 0.7 x_{t-1} - 0.35 x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.7\lambda - 0.35 = 0$$
$$\lambda_{1,2} = \frac{0.7 \pm \sqrt{0.49 + (4)(0.35)}}{2} = \frac{0.7 \pm \sqrt{1.89}}{2} = 1.037,\; -0.337$$
So $x_t = A_1(1.037)^t + A_2(-0.337)^t$: the series is explosive.
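Both examples can be confirmed by direct iteration; a minimal sketch (plain Python; the starting values are arbitrary illustrations):

```python
def iterate(a1, a2, x0=1.0, x1=1.0, steps=60):
    """Iterate x_t = a1*x_{t-1} + a2*x_{t-2} from given starting values."""
    x = [x0, x1]
    for _ in range(steps):
        x.append(a1 * x[-1] + a2 * x[-2])
    return x

print(iterate(0.2, 0.35)[-1])   # ~0: roots 0.7 and -0.5, convergent
print(iterate(0.7, 0.35)[-1])   # grows like (1.037)^t: explosive
```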
Case 2: $a_1^2 + 4a_2 = 0$
The two roots are then equal:
$$\lambda_1 = \lambda_2 = \frac{a_1}{2}$$
Hence a solution is
$$x_t = \left(\frac{a_1}{2}\right)^t$$
In this case, one can show that another solution is
$$x_t = t\left(\frac{a_1}{2}\right)^t$$
If this is a solution, it must satisfy $x_t - a_1 x_{t-1} - a_2 x_{t-2} = 0$, i.e.
$$t\left(\frac{a_1}{2}\right)^t - a_1(t-1)\left(\frac{a_1}{2}\right)^{t-1} - a_2(t-2)\left(\frac{a_1}{2}\right)^{t-2} = 0$$
Dividing through by $\left(\frac{a_1}{2}\right)^{t-2}$, we obtain
$$t\left(\frac{a_1}{2}\right)^2 - a_1(t-1)\,\frac{a_1}{2} - a_2(t-2) = 0$$
Collecting terms,
$$-\left(\frac{a_1^2}{4} + a_2\right)t + \left(\frac{a_1^2}{2} + 2a_2\right) = 0$$
Since $a_1^2 + 4a_2 = 0$, each of the bracketed terms equals zero.
So the general solution is
$$x_t = A_1\left(\frac{a_1}{2}\right)^t + A_2\, t\left(\frac{a_1}{2}\right)^t$$
The series will be explosive if $|a_1| > 2$ and convergent if $|a_1| < 2$.
Case 3: $a_1^2 + 4a_2 < 0$ (which requires $a_2 < 0$)
The roots are complex:
$$\lambda_{1,2} = \frac{a_1 \pm i\sqrt{|a_1^2 + 4a_2|}}{2}$$
The moving average process of order 1, MA(1), is
$$Y_t = e_t + \theta_1 e_{t-1} \tag{30}$$
Consider the variance and autocovariances of this process:
$$E(Y_t^2) = E(e_t + \theta_1 e_{t-1})^2 = \sigma_e^2(1 + \theta_1^2)$$
$$E(Y_t Y_{t-1}) = E\left[(e_t + \theta_1 e_{t-1})(e_{t-1} + \theta_1 e_{t-2})\right] \;\Rightarrow\; \gamma_1 = \theta_1 \sigma_e^2$$
$$E(Y_t Y_{t-2}) = E\left[(e_t + \theta_1 e_{t-1})(e_{t-2} + \theta_1 e_{t-3})\right] \;\Rightarrow\; \gamma_2 = 0$$
Similarly, the higher-order autocovariances all equal zero.
The autocorrelations of an MA(1) process are therefore
$$\rho_1 = \frac{\theta_1}{1 + \theta_1^2}, \qquad \rho_2 = \rho_3 = \cdots = 0$$
If $|\theta_1| < 1$, the MA(1) process may be expressed as an infinite-order autoregression in $Y_t$. Hence its partial autocorrelations do not cut off but damp toward zero.
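A quick numerical check of these MA(1) autocorrelations (numpy assumed; reuses the `sample_acf` helper sketched earlier):

```python
import numpy as np

rng = np.random.default_rng(3)
theta1, T = 0.6, 100_000
e = rng.normal(size=T + 1)
y = e[1:] + theta1 * e[:-1]            # Y_t = e_t + theta_1 e_{t-1}, eq. (30)

r = sample_acf(y, 3)                   # helper sketched earlier
print(round(r[1], 3), theta1 / (1 + theta1**2))  # both ~0.441
print(round(r[2], 3), round(r[3], 3))            # ~0 beyond lag 1
```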
An ARMA($p, q$) process combines autoregressive and moving average terms:
$$y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q} \tag{31}$$
For $p = q = 1$ we have the ARMA(1,1) process:
$$y_t = \phi_1 y_{t-1} + e_t + \theta_1 e_{t-1} \tag{32}$$
Squaring this and taking expectations, we can show that
$$\gamma_0 = \frac{1 + 2\phi_1\theta_1 + \theta_1^2}{1 - \phi_1^2}\,\sigma_e^2$$
Multiplying by $y_{t-1}$ and taking expectations yields
$$\gamma_1 = \phi_1\gamma_0 + \theta_1\sigma_e^2 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 - \phi_1^2}\,\sigma_e^2$$
Higher-order autocovariances are given by
$$\gamma_k = \phi_1\gamma_{k-1}, \qquad k = 2, 3, \ldots$$
The autocorrelation function of the ARMA(1,1) process is thus
$$\rho_1 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 + 2\phi_1\theta_1 + \theta_1^2}, \qquad \rho_k = \phi_1\rho_{k-1}, \quad k = 2, 3, \ldots$$
The first coefficient depends on the parameters of both the AR and MA parts of the process. Subsequent coefficients decline exponentially at a rate that depends on the AR parameter.
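These formulas make the theoretical ARMA(1,1) ACF easy to tabulate; a minimal sketch (plain Python; the parameter values are illustrative):

```python
def arma11_acf(phi1, theta1, max_lag):
    """rho_1 = (phi1 + theta1)(1 + phi1*theta1) / (1 + 2*phi1*theta1 + theta1^2),
    then rho_k = phi1 * rho_{k-1} for k >= 2."""
    rho = [1.0, (phi1 + theta1) * (1 + phi1 * theta1)
                / (1 + 2 * phi1 * theta1 + theta1**2)]
    for _ in range(2, max_lag + 1):
        rho.append(phi1 * rho[-1])
    return rho

print(arma11_acf(0.7, 0.4, 6))   # geometric decay at rate 0.7 after lag 1
```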
The ACF and PACF patterns of the basic models can be summarised as follows.

MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 > 0$
- ACF exhibits a positive spike at lag 1; $\rho_s = 0$ for $s > 1$.
- PACF exhibits oscillating decay:
$$\phi_{ss} = \frac{-(-\theta_1)^s}{1 + \theta_1^2 + \cdots + \theta_1^{2s}}$$

MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 < 0$
- ACF exhibits a negative spike at lag 1; $\rho_s = 0$ for $s > 1$.
- PACF exhibits monotonic decay.

ARMA(1,1), i.e. $p = q = 1$, $0 < \phi_1 < 1$
- ACF exhibits monotonic decay beginning after lag 1.
- PACF exhibits oscillating decay beginning after lag 1; $\phi_{11} = \rho_1$.

ARMA(1,1), i.e. $p = q = 1$, $-1 < \phi_1 < 0$
- ACF exhibits oscillating decay beginning after lag 1.
- PACF exhibits monotonic decay beginning after lag 1; $\phi_{11} = \rho_1$.

ARMA($p, q$)
- ACF decays (either monotonically or in oscillating fashion) beginning after lag $q$.
- PACF decays (either monotonically or in oscillating fashion) beginning after lag $p$.
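In applied work these signatures are read off the sample ACF and PACF, for example with statsmodels (assuming it is installed; `acf` and `pacf` are from `statsmodels.tsa.stattools`). A sketch with an illustrative AR(2):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(4)
# Simulate an AR(2): its PACF should cut off after lag 2
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + e[t]

print(np.round(acf(x, nlags=5), 2))    # slow decay
print(np.round(pacf(x, nlags=5), 2))   # spikes at lags 1-2, ~0 afterwards
```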
6 Wold's Theorem
Let $\{x_t\}$ be any zero-mean covariance stationary process. Then we can write it as
$$x_t = B(L)e_t = \sum_{i=0}^{\infty} b_i e_{t-i} \tag{33}$$
where $e_t \sim WN(0, \sigma_e^2)$, $b_0 = 1$ and $\sum_{i=0}^{\infty} b_i^2 < \infty$.
It is clear that an MA($q$) attempts to approximate Wold's representation. What about an AR($p$)? Can it be seen as an approximation of Wold's representation?
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + e_t$$
$$x_t - \phi_1 x_{t-1} - \phi_2 x_{t-2} - \cdots - \phi_p x_{t-p} = e_t$$
$$\Phi(L) x_t = e_t$$
where
$$\Phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p$$
If $x_t$ is covariance stationary, the roots of $\Phi(L)$ all lie outside the unit circle. Then $x_t$ can be expressed as a convergent infinite moving average of the innovations:
$$x_t = \frac{1}{\Phi(L)}\, e_t$$
For example, in the case of AR(1), $\Phi(L) = 1 - \phi_1 L$ and
$$\frac{1}{\Phi(L)} = \frac{1}{1 - \phi_1 L} = 1 + \phi_1 L + \phi_1^2 L^2 + \cdots$$
and
$$x_t = \frac{1}{\Phi(L)}\, e_t = e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
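The inversion of $\Phi(L)$ can also be carried out numerically. A hand-rolled sketch (plain Python; `ar_to_ma_weights` is an illustrative helper) that recovers the MA($\infty$) weights of an AR model:

```python
def ar_to_ma_weights(phi, n_weights):
    """Weights b_i in x_t = sum_i b_i e_{t-i} for an AR model
    Phi(L) x_t = e_t with Phi(L) = 1 - phi[0] L - phi[1] L^2 - ...
    Uses the recursion b_j = sum_k phi[k] * b_{j-1-k}, with b_0 = 1."""
    b = [1.0]
    for j in range(1, n_weights):
        b.append(sum(phi[k] * b[j - 1 - k] for k in range(min(j, len(phi)))))
    return b

print(ar_to_ma_weights([0.6], 5))   # [1, 0.6, 0.36, 0.216, ...] = phi_1^i
```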
Both MA and AR processes impose restrictions on the estimation of the Wold representation. A more flexible form is provided by the ARMA processes:
$$x_t = \phi_1 x_{t-1} + \cdots + \phi_p x_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$$
$$x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p} = e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q}$$
$$\Phi(L) x_t = \Theta(L) e_t$$
$$x_t = \frac{\Theta(L)}{\Phi(L)}\, e_t$$
For example, an ARMA(1,1) gives
$$x_t = \frac{1 + \theta_1 L}{1 - \phi_1 L}\, e_t = (1 + \theta_1 L)(1 + \phi_1 L + \phi_1^2 L^2 + \cdots)\, e_t$$
Conversely, an invertible MA(1) may be written as an infinite autoregression: from $x_t = (1 + \theta_1 L)e_t$,
$$\frac{1}{1 + \theta_1 L}\, x_t = e_t$$
$$(1 - \theta_1 L + \theta_1^2 L^2 - \cdots)\, x_t = e_t$$
$$x_t = \theta_1 x_{t-1} - \theta_1^2 x_{t-2} + \theta_1^3 x_{t-3} - \cdots + e_t$$
The PACF, $\phi_{ss}$, is defined as the coefficient on the last lag in an autoregression of order $s$:
$$x_t = \phi_{s1} x_{t-1} + \phi_{s2} x_{t-2} + \cdots + \phi_{ss} x_{t-s} + v_t$$
Since an invertible MA process has an infinite-order AR representation, its PACF will not cut off abruptly after some lag.
As shown above, a covariance stationary AR(1) process can be represented as an infinite-order MA process:
$$x_t = \frac{1}{\Phi(L)}\, e_t = e_t + \phi_1 e_{t-1} + \phi_1^2 e_{t-2} + \cdots$$
Values of the series at different points in time have common influences, no matter how far apart they are. Therefore its ACF does not cut off abruptly. Similar reasoning leads to persistent ACFs for higher-order AR processes.
The PACF of an AR($p$) process cuts off abruptly after $p$ lags because, if the model generating the series is
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + e_t$$
then the parameter $\phi_{p+1}$ in the autoregression
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \phi_{p+1} x_{t-p-1} + e_t$$
must be zero.
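This is easy to see empirically by fitting successively longer autoregressions; a sketch (numpy assumed; `pacf_by_regression` is an illustrative helper) in which $\phi_{ss}$ is the last OLS coefficient from a regression of $x_t$ on its first $s$ lags:

```python
import numpy as np

def pacf_by_regression(x, max_lag):
    """phi_ss = last coefficient of an OLS autoregression of order s."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    out = []
    for s in range(1, max_lag + 1):
        # Design matrix with columns x_{t-1}, ..., x_{t-s}
        X = np.column_stack([x[s - k: len(x) - k] for k in range(1, s + 1)])
        coef, *_ = np.linalg.lstsq(X, x[s:], rcond=None)
        out.append(coef[-1])            # phi_ss
    return out

# For a series generated by an AR(2), the values are sizeable at s = 1, 2
# and approximately zero for s >= 3.
```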