
TIME SERIES

Contents

Syllabus
Books
Keywords

1 Models for time series
  1.1 Time series data
  1.2 Trend, seasonality, cycles and residuals
  1.3 Stationary processes
  1.4 Autoregressive processes
  1.5 Moving average processes
  1.6 White noise
  1.7 The turning point test

2 Models of stationary processes
  2.1 Purely indeterministic processes
  2.2 ARMA processes
  2.3 ARIMA processes
  2.4 Estimation of the autocovariance function
  2.5 Identifying a MA(q) process
  2.6 Identifying an AR(p) process
  2.7 Distributions of the ACF and PACF

3 Spectral methods
  3.1 The discrete Fourier transform
  3.2 The spectral density
  3.3 Analysing the effects of smoothing

4 Estimation of the spectrum
  4.1 The periodogram
  4.2 Distribution of spectral estimates
  4.3 The fast Fourier transform

5 Linear filters
  5.1 The Filter Theorem
  5.2 Application to autoregressive processes
  5.3 Application to moving average processes
  5.4 The general linear process
  5.5 Filters and ARMA processes
  5.6 Calculating autocovariances in ARMA models

6 Estimation of trend and seasonality
  6.1 Moving averages
  6.2 Centred moving averages
  6.3 The Slutzky-Yule effect
  6.4 Exponential smoothing
  6.5 Calculation of seasonal indices

7 Fitting ARIMA models
  7.1 The Box-Jenkins procedure
  7.2 Identification
  7.3 Estimation
  7.4 Verification
  7.5 Tests for white noise
  7.6 Forecasting with ARMA models

8 State space models
  8.1 Models with unobserved states
  8.2 The Kalman filter
  8.3 Prediction
  8.4 Parameter estimation revisited
Syllabus

Time series analysis refers to problems in which observations are collected at regular time intervals and there are correlations among successive observations. Applications cover virtually all areas of Statistics, but some of the most important include economic and financial time series, and many areas of environmental or ecological data.

In this course, I shall cover some of the most important methods for dealing with these problems. In the case of time series, these include the basic definitions of autocorrelations etc., then time-domain model fitting including autoregressive and moving average processes, spectral methods, and some discussion of the effect of time series correlations on other kinds of statistical inference, such as the estimation of means and regression coefficients.

Books

1. P.J. Brockwell and R.A. Davis, Time Series: Theory and Methods, Springer Series in Statistics (1986).

2. C. Chatfield, The Analysis of Time Series: Theory and Practice, Chapman and Hall (1975). Good general introduction, especially for those completely new to time series.

3. P.J. Diggle, Time Series: A Biostatistical Introduction, Oxford University Press (1990).

4. M. Kendall, Time Series, Charles Griffin (1976).
Keywords

ACF
AR(p)
ARIMA(p, d, q)
ARMA(p, q)
autocorrelation function
autocovariance function
autoregressive moving average process
autoregressive process
Box-Jenkins
classical decomposition
estimation
filter generating function
Gaussian process
identifiability
identification
integrated autoregressive moving average process
invertible process
MA(q)
moving average process
nondeterministic
nonnegative definite sequence
PACF
periodogram
sample partial autocorrelation coefficient
second order stationary
spectral density function
spectral distribution function
strictly stationary
strongly stationary
turning point test
verification
weakly stationary
white noise
Yule-Walker equations
1 Models for time series

1.1 Time series data

A time series is a set of statistics, usually collected at regular intervals. Time series data occur naturally in many application areas.

economics - e.g., monthly data for unemployment, hospital admissions, etc.
finance - e.g., daily exchange rate, a share price, etc.
environmental - e.g., daily rainfall, air quality readings.
medicine - e.g., ECG brain wave activity every $2^{-8}$ secs.

The methods of time series analysis pre-date those for general stochastic processes and Markov chains. The aims of time series analysis are to describe and summarise time series data, fit low-dimensional models, and make forecasts.

We write our real-valued series of observations as $\ldots, X_{-2}, X_{-1}, X_0, X_1, X_2, \ldots$, a doubly infinite sequence of real-valued random variables indexed by $\mathbb{Z}$.
1.2 Trend, seasonality, cycles and residuals

One simple method of describing a series is that of classical decomposition. The notion is that the series can be decomposed into four elements:

Trend ($T_t$): long term movements in the mean;
Seasonal effects ($I_t$): cyclical fluctuations related to the calendar;
Cycles ($C_t$): other cyclical fluctuations (such as business cycles);
Residuals ($E_t$): other random or systematic fluctuations.

The idea is to create separate models for these four elements and then combine them, either additively

$$ X_t = T_t + I_t + C_t + E_t $$

or multiplicatively

$$ X_t = T_t \, I_t \, C_t \, E_t . $$
1.3 Stationary processes

1. A sequence $\{X_t, t \in \mathbb{Z}\}$ is strongly stationary or strictly stationary if

$$ (X_{t_1}, \ldots, X_{t_k}) \overset{D}{=} (X_{t_1+h}, \ldots, X_{t_k+h}) $$

for all sets of time points $t_1, \ldots, t_k$ and integer $h$.

2. A sequence is weakly stationary, or second order stationary, if

(a) $E(X_t) = \mu$, and
(b) $\operatorname{cov}(X_t, X_{t+k}) = \gamma_k$,

where $\mu$ is constant and $\gamma_k$ is independent of $t$.

3. The sequence $\{\gamma_k, k \in \mathbb{Z}\}$ is called the autocovariance function.

4. We also define

$$ \rho_k = \gamma_k/\gamma_0 = \operatorname{corr}(X_t, X_{t+k}) $$

and call $\{\rho_k, k \in \mathbb{Z}\}$ the autocorrelation function (ACF).

Remarks.

1. A strictly stationary process is weakly stationary.

2. If the process is Gaussian, that is $(X_{t_1}, \ldots, X_{t_k})$ is multivariate normal for all $t_1, \ldots, t_k$, then weak stationarity implies strong stationarity.

3. $\gamma_0 = \operatorname{var}(X_t) > 0$, assuming $X_t$ is genuinely random.

4. By symmetry, $\gamma_k = \gamma_{-k}$, for all $k$.
1.4 Autoregressive processes

The autoregressive process of order p is denoted AR(p), and defined by

$$ X_t = \sum_{r=1}^{p} \phi_r X_{t-r} + \epsilon_t \qquad (1.1) $$

where $\phi_1, \ldots, \phi_p$ are fixed constants and $\{\epsilon_t\}$ is a sequence of independent (or uncorrelated) random variables with mean 0 and variance $\sigma^2$.

The AR(1) process is defined by

$$ X_t = \phi_1 X_{t-1} + \epsilon_t . \qquad (1.2) $$

To find its autocovariance function we make successive substitutions, to get

$$ X_t = \epsilon_t + \phi_1(\epsilon_{t-1} + \phi_1(\epsilon_{t-2} + \cdots)) = \epsilon_t + \phi_1\epsilon_{t-1} + \phi_1^2\epsilon_{t-2} + \cdots $$

The fact that $\{X_t\}$ is second order stationary follows from the observation that $E(X_t) = 0$ and that the autocovariance function can be calculated as follows:

$$ \gamma_0 = E\big[(\epsilon_t + \phi_1\epsilon_{t-1} + \phi_1^2\epsilon_{t-2} + \cdots)^2\big] = (1 + \phi_1^2 + \phi_1^4 + \cdots)\sigma^2 = \frac{\sigma^2}{1-\phi_1^2}, $$

$$ \gamma_k = E\left[\sum_{r=0}^{\infty}\phi_1^r\epsilon_{t-r}\sum_{s=0}^{\infty}\phi_1^s\epsilon_{t+k-s}\right] = \frac{\sigma^2\phi_1^k}{1-\phi_1^2}. $$

There is an easier way to obtain these results. Multiply equation (1.2) by $X_{t-k}$ and take the expected value, to give

$$ E(X_t X_{t-k}) = E(\phi_1 X_{t-1}X_{t-k}) + E(\epsilon_t X_{t-k}). $$

Thus $\gamma_k = \phi_1\gamma_{k-1}$, $k = 1, 2, \ldots$

Similarly, squaring (1.2) and taking the expected value gives

$$ E(X_t^2) = \phi_1^2 E(X_{t-1}^2) + 2\phi_1 E(X_{t-1}\epsilon_t) + E(\epsilon_t^2) = \phi_1^2 E(X_{t-1}^2) + 0 + \sigma^2 $$

and so $\gamma_0 = \sigma^2/(1-\phi_1^2)$.

More generally, the AR(p) process is defined as

$$ X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \cdots + \phi_p X_{t-p} + \epsilon_t . \qquad (1.3) $$

Again, the autocorrelation function can be found by multiplying (1.3) by $X_{t-k}$, taking the expected value and dividing by $\gamma_0$, thus producing the Yule-Walker equations

$$ \rho_k = \phi_1\rho_{k-1} + \phi_2\rho_{k-2} + \cdots + \phi_p\rho_{k-p}, \qquad k = 1, 2, \ldots $$

These are linear recurrence relations, with general solution of the form

$$ \rho_k = C_1\omega_1^{|k|} + \cdots + C_p\omega_p^{|k|}, $$

where $\omega_1, \ldots, \omega_p$ are the roots of

$$ \omega^p - \phi_1\omega^{p-1} - \phi_2\omega^{p-2} - \cdots - \phi_p = 0 $$

and $C_1, \ldots, C_p$ are determined by $\rho_0 = 1$ and the equations for $k = 1, \ldots, p-1$. It is natural to require $\rho_k \to 0$ as $k \to \infty$, in which case the roots must lie inside the unit circle, that is, $|\omega_i| < 1$. Thus there is a restriction on the values of $\phi_1, \ldots, \phi_p$ that can be chosen.
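As a quick numerical check of the AR(1) results above, the following sketch (Python with NumPy; not part of the original notes) simulates an AR(1) series and compares the sample autocovariances with the theoretical values $\sigma^2\phi_1^{|k|}/(1-\phi_1^2)$. The parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.6, 1.0, 100_000

# Simulate an AR(1): X_t = phi * X_{t-1} + eps_t
eps = rng.normal(0.0, sigma, T)
X = np.empty(T)
X[0] = eps[0]
for t in range(1, T):
    X[t] = phi * X[t - 1] + eps[t]

def acov(x, k):
    """Sample autocovariance at lag k, with divisor T (see Section 2.4)."""
    x = x - x.mean()
    return (x[k:] * x[:len(x) - k]).sum() / len(x)

for k in range(4):
    theory = sigma**2 * phi**k / (1 - phi**2)
    print(k, round(acov(X, k), 3), round(theory, 3))
```

With a long simulated series the sample and theoretical autocovariances agree to two or three decimal places.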
1.5 Moving average processes

The moving average process of order q is denoted MA(q) and defined by

$$ X_t = \sum_{s=0}^{q} \theta_s\epsilon_{t-s} \qquad (1.4) $$

where $\theta_1, \ldots, \theta_q$ are fixed constants, $\theta_0 = 1$, and $\{\epsilon_t\}$ is a sequence of independent (or uncorrelated) random variables with mean 0 and variance $\sigma^2$.

It is clear from the definition that this is second order stationary and that

$$ \gamma_k = \begin{cases} 0, & |k| > q \\ \sigma^2\sum_{s=0}^{q-|k|}\theta_s\theta_{s+|k|}, & |k| \le q \end{cases} $$

We remark that two moving average processes can have the same autocorrelation function. For example,

$$ X_t = \epsilon_t + \theta\epsilon_{t-1} \quad\text{and}\quad X_t = \epsilon_t + (1/\theta)\epsilon_{t-1} $$

both have $\rho_1 = \theta/(1+\theta^2)$, $\rho_k = 0$, $|k| > 1$. However, the first gives

$$ \epsilon_t = X_t - \theta\epsilon_{t-1} = X_t - \theta(X_{t-1} - \theta\epsilon_{t-2}) = X_t - \theta X_{t-1} + \theta^2 X_{t-2} - \cdots $$

This is only valid for $|\theta| < 1$, a so-called invertible process. No two invertible processes have the same autocorrelation function.
1.6 White noise

The sequence $\{\epsilon_t\}$, consisting of independent (or uncorrelated) random variables with mean 0 and variance $\sigma^2$, is called white noise (for reasons that will become clear later). It is a second order stationary series with $\gamma_0 = \sigma^2$ and $\gamma_k = 0$, $k \neq 0$.
1.7 The turning point test

We may wish to test whether a series can be considered to be white noise, or whether a more complicated model is required. In later chapters we shall consider various ways to do this; for example, we might estimate the autocovariance function, say $\{\hat\gamma_k\}$, and observe whether or not $\hat\gamma_k$ is near zero for all $k > 0$.

However, a very simple diagnostic is the turning point test, which examines a series $\{X_t\}$ to test whether it is purely random. The idea is that if $\{X_t\}$ is purely random then three successive values are equally likely to occur in any of the six possible orders.

In four cases there is a turning point in the middle. Thus in a series of $n$ points we might expect $(2/3)(n-2)$ turning points.

In fact, it can be shown that for large $n$, the number of turning points should be distributed as about $N(2n/3, 8n/45)$. We reject (at the 5% level) the hypothesis that the series is unsystematic if the number of turning points lies outside the range $2n/3 \pm 1.96\sqrt{8n/45}$.
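A minimal implementation of the turning point test just described, as a sketch in Python/NumPy (not from the original notes; the mean and variance used are the approximations quoted above):

```python
import numpy as np

def turning_point_test(x):
    """Turning point test for pure randomness (Section 1.7)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mid, left, right = x[1:-1], x[:-2], x[2:]
    # A turning point is a strict local maximum or minimum among three successive values.
    tp = np.sum((mid > left) & (mid > right)) + np.sum((mid < left) & (mid < right))
    mean = 2 * (n - 2) / 3
    var = 8 * n / 45            # large-n approximation used in the notes
    z = (tp - mean) / np.sqrt(var)
    return tp, z                 # reject randomness at the 5% level if |z| > 1.96

rng = np.random.default_rng(1)
print(turning_point_test(rng.normal(size=1000)))
```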
2 Models of stationary processes

2.1 Purely indeterministic processes

Suppose $\{X_t\}$ is a second order stationary process, with mean 0. Its autocovariance function is

$$ \gamma_k = E(X_t X_{t+k}) = \operatorname{cov}(X_t, X_{t+k}), \qquad k \in \mathbb{Z}. $$

1. As $\{X_t\}$ is stationary, $\gamma_k$ does not depend on $t$.

2. A process is said to be purely indeterministic if the regression of $X_t$ on $X_{t-q}, X_{t-q-1}, \ldots$ has explanatory power tending to 0 as $q \to \infty$. That is, the residual variance tends to $\operatorname{var}(X_t)$.

An important theorem due to Wold (1938) states that every purely indeterministic second order stationary process $\{X_t\}$ can be written in the form

$$ X_t = \mu + \theta_0 Z_t + \theta_1 Z_{t-1} + \theta_2 Z_{t-2} + \cdots $$

where $\{Z_t\}$ is a sequence of uncorrelated random variables.

3. A Gaussian process is one for which $X_{t_1}, \ldots, X_{t_n}$ has a joint normal distribution for all $t_1, \ldots, t_n$. No two distinct Gaussian processes have the same autocovariance function.
2.2 ARMA processes

The autoregressive moving average process, ARMA(p, q), is defined by

$$ X_t - \sum_{r=1}^{p}\phi_r X_{t-r} = \sum_{s=0}^{q}\theta_s\epsilon_{t-s}, $$

where again $\{\epsilon_t\}$ is white noise. This process is stationary for appropriate $\phi$, $\theta$.

Example 2.1

Consider the state space model

$$ X_t = \phi X_{t-1} + \epsilon_t, \qquad Y_t = X_t + \eta_t. $$

Suppose $\{X_t\}$ is unobserved, $\{Y_t\}$ is observed and $\{\epsilon_t\}$ and $\{\eta_t\}$ are independent white noise sequences. Note that $\{X_t\}$ is AR(1). We can write

$$ \xi_t = Y_t - \phi Y_{t-1} = (X_t + \eta_t) - \phi(X_{t-1} + \eta_{t-1}) = (X_t - \phi X_{t-1}) + (\eta_t - \phi\eta_{t-1}) = \epsilon_t + \eta_t - \phi\eta_{t-1}. $$

Now $\xi_t$ is stationary and $\operatorname{cov}(\xi_t, \xi_{t+k}) = 0$, $k \ge 2$. As such, $\xi_t$ can be modelled as a MA(1) process and $\{Y_t\}$ as ARMA(1, 1).
2.3 ARIMA processes

If the original process $\{Y_t\}$ is not stationary, we can look at the first order difference process

$$ X_t = \nabla Y_t = Y_t - Y_{t-1} $$

or the second order differences

$$ X_t = \nabla^2 Y_t = \nabla(\nabla Y)_t = Y_t - 2Y_{t-1} + Y_{t-2} $$

and so on. If we ever find that the differenced process is a stationary process we can look for an ARMA model of that.

The process $\{Y_t\}$ is said to be an autoregressive integrated moving average process, ARIMA(p, d, q), if $X_t = \nabla^d Y_t$ is an ARMA(p, q) process.

AR, MA, ARMA and ARIMA processes can be used to model many time series. A key tool in identifying a model is an estimate of the autocovariance function.
2.4 Estimation of the autocovariance function

Suppose we have data $(X_1, \ldots, X_T)$ from a stationary time series. We can estimate

the mean by $\bar X = (1/T)\sum_{t=1}^{T} X_t$,
the autocovariance by $c_k = \hat\gamma_k = (1/T)\sum_{t=k+1}^{T}(X_t - \bar X)(X_{t-k} - \bar X)$, and
the autocorrelation by $r_k = \hat\rho_k = \hat\gamma_k/\hat\gamma_0$.

The plot of $r_k$ against $k$ is known as the correlogram. If it is known that $\mu$ is 0 there is no need to correct for the mean and $\gamma_k$ can be estimated by $\hat\gamma_k = (1/T)\sum_{t=k+1}^{T} X_t X_{t-k}$.

Notice that in defining $\hat\gamma_k$ we divide by $T$ rather than by $(T-k)$. When $T$ is large relative to $k$ it does not much matter which divisor we use. However, for mathematical simplicity and other reasons there are advantages in dividing by $T$.

Suppose the stationary process $\{X_t\}$ has autocovariance function $\{\gamma_k\}$. Then

$$ \operatorname{var}\left(\sum_{t=1}^{T} a_t X_t\right) = \sum_{t=1}^{T}\sum_{s=1}^{T} a_t a_s \operatorname{cov}(X_t, X_s) = \sum_{t=1}^{T}\sum_{s=1}^{T} a_t a_s \gamma_{|t-s|} \ge 0. $$

A sequence $\{\gamma_k\}$ for which this holds for every $T \ge 1$ and set of constants $(a_1, \ldots, a_T)$ is called a nonnegative definite sequence. The following theorem states that $\{\gamma_k\}$ is a valid autocovariance function if and only if it is nonnegative definite.

Theorem 2.2 (Bochner) The following are equivalent.

1. There exists a stationary sequence with autocovariance function $\{\gamma_k\}$.

2. $\{\gamma_k\}$ is nonnegative definite.

3. The spectral density function,

$$ f(\omega) = \frac{1}{\pi}\sum_{k=-\infty}^{\infty}\gamma_k e^{-ik\omega} = \frac{1}{\pi}\gamma_0 + \frac{2}{\pi}\sum_{k=1}^{\infty}\gamma_k\cos(\omega k), $$

is positive if it exists.

Dividing by $T$ rather than by $(T-k)$ in the definition of $\hat\gamma_k$

ensures that $\{\hat\gamma_k\}$ is nonnegative definite (and thus that it could be the autocovariance function of a stationary process), and
can reduce the $L^2$-error of $r_k$.
2.5 Identifying a MA(q) process

In a later lecture we consider the problem of identifying an ARMA or ARIMA model for a given time series. A key tool in doing this is the correlogram.

The MA(q) process $X_t$ has $\rho_k = 0$ for all $k$, $|k| > q$. So a diagnostic for MA(q) is that $|r_k|$ drops to near zero beyond some threshold.
2.6 Identifying an AR(p) process

The AR(p) process has $\rho_k$ decaying exponentially. This can be difficult to recognise in the correlogram. Suppose we have a process $X_t$ which we believe is AR(k) with

$$ X_t = \sum_{j=1}^{k}\phi_{j,k} X_{t-j} + \epsilon_t $$

with $\epsilon_t$ independent of $X_1, \ldots, X_{t-1}$.

Given the data $X_1, \ldots, X_T$, the least squares estimates of $(\phi_{1,k}, \ldots, \phi_{k,k})$ are obtained by minimizing

$$ \frac{1}{T}\sum_{t=k+1}^{T}\left(X_t - \sum_{j=1}^{k}\phi_{j,k} X_{t-j}\right)^2 . $$

This is approximately equivalent to solving equations similar to the Yule-Walker equations,

$$ \hat\gamma_j = \sum_{\ell=1}^{k}\hat\phi_{\ell,k}\hat\gamma_{|j-\ell|}, \qquad j = 1, \ldots, k. $$

These can be solved by the Levinson-Durbin recursion:

Step 0. $\hat\sigma_0^2 := \hat\gamma_0$, $\hat\phi_{1,1} := \hat\gamma_1/\hat\gamma_0$, $k := 0$.

Step 1. Repeat until $\hat\phi_{k,k}$ is near 0:
  $k := k + 1$
  $\hat\phi_{k,k} := \left(\hat\gamma_k - \sum_{j=1}^{k-1}\hat\phi_{j,k-1}\hat\gamma_{k-j}\right)\big/\hat\sigma^2_{k-1}$
  $\hat\phi_{j,k} := \hat\phi_{j,k-1} - \hat\phi_{k,k}\hat\phi_{k-j,k-1}$, for $j = 1, \ldots, k-1$
  $\hat\sigma^2_k := \hat\sigma^2_{k-1}\left(1 - \hat\phi_{k,k}^2\right)$

We test whether the order $k$ fit is an improvement over the order $k-1$ fit by looking to see if $\hat\phi_{k,k}$ is far from zero.

The statistic $\hat\phi_{k,k}$ is called the kth sample partial autocorrelation coefficient (PACF). If the process $X_t$ is genuinely AR(p) then the population PACF, $\phi_{k,k}$, is exactly zero for all $k > p$. Thus a diagnostic for AR(p) is that the sample PACFs are close to zero for $k > p$.
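The recursion above translates directly into code. The sketch below (Python/NumPy; not part of the notes) computes the sample PACF $\hat\phi_{k,k}$ and the residual variances $\hat\sigma^2_k$ from the sample autocovariances.

```python
import numpy as np

def levinson_durbin(gamma, max_lag):
    """Levinson-Durbin recursion of Section 2.6.

    gamma[k] is the sample autocovariance at lag k, for k = 0..max_lag.
    Returns (pacf, sigma2), where pacf[k-1] = phi_hat_{k,k} and sigma2[k]
    is the residual variance of the order-k fit.
    """
    phi = np.zeros((max_lag + 1, max_lag + 1))
    sigma2 = np.zeros(max_lag + 1)
    sigma2[0] = gamma[0]
    phi[1, 1] = gamma[1] / gamma[0]
    sigma2[1] = sigma2[0] * (1 - phi[1, 1] ** 2)
    for k in range(2, max_lag + 1):
        num = gamma[k] - sum(phi[j, k - 1] * gamma[k - j] for j in range(1, k))
        phi[k, k] = num / sigma2[k - 1]
        for j in range(1, k):
            phi[j, k] = phi[j, k - 1] - phi[k, k] * phi[k - j, k - 1]
        sigma2[k] = sigma2[k - 1] * (1 - phi[k, k] ** 2)
    pacf = np.array([phi[k, k] for k in range(1, max_lag + 1)])
    return pacf, sigma2
```

Given sample autocovariances `gam`, a call such as `pacf, sig2 = levinson_durbin(gam, 20)` returns the first 20 sample partial autocorrelations, which can then be compared with the $\pm 2/\sqrt{T}$ rule of thumb of the next section.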
2.7 Distributions of the ACF and PACF

Both the sample ACF and PACF are approximately normally distributed about their population values, and have standard deviation of about $1/\sqrt{T}$, where $T$ is the length of the series. A rule of thumb is that $\rho_k$ is negligible (and similarly $\phi_{k,k}$) if $r_k$ (similarly $\hat\phi_{k,k}$) lies between $\pm 2/\sqrt{T}$. (2 is an approximation to 1.96. Recall that if $Z_1, \ldots, Z_n \sim N(\mu, 1)$, a test of size 0.05 of the hypothesis $H_0: \mu = 0$ against $H_1: \mu \neq 0$ rejects $H_0$ if and only if $\bar Z$ lies outside $\pm 1.96/\sqrt{n}$.)

Care is needed in applying this rule of thumb. It is important to realize that the sample autocorrelations, $r_1, r_2, \ldots$ (and sample partial autocorrelations, $\hat\phi_{1,1}, \hat\phi_{2,2}, \ldots$) are not independently distributed. The probability that any one $r_k$ should lie outside $\pm 2/\sqrt{T}$ depends on the values of the other $r_{k'}$.

A portmanteau test of white noise (due to Box & Pierce and Ljung & Box) can be based on the fact that approximately

$$ Q'_m = T(T+2)\sum_{k=1}^{m}(T-k)^{-1} r_k^2 \sim \chi^2_m . $$

The sensitivity of the test to departure from white noise depends on the choice of $m$. If the true model is ARMA(p, q) then greatest power is obtained (rejection of the white noise hypothesis is most probable) when $m$ is about $p + q$.
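A sketch of the Ljung-Box statistic $Q'_m$ in Python/NumPy (not part of the notes; the $\chi^2_m$ comparison value would be taken from tables or a statistics library, which is assumed rather than shown):

```python
import numpy as np

def ljung_box(x, m):
    """Ljung-Box portmanteau statistic Q'_m of Section 2.7.

    Under the white noise hypothesis, Q'_m is approximately chi-squared with
    m degrees of freedom (m - p - q when computed from ARMA(p, q) residuals).
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = len(x)
    gamma0 = np.dot(x, x) / T
    r = np.array([np.dot(x[k:], x[:T - k]) / T / gamma0 for k in range(1, m + 1)])
    return T * (T + 2) * np.sum(r**2 / (T - np.arange(1, m + 1)))

rng = np.random.default_rng(2)
print(ljung_box(rng.normal(size=500), m=10))   # compare with the chi-squared(10) upper 5% point, about 18.3
```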
3 Spectral methods

3.1 The discrete Fourier transform

If $h(t)$ is defined for integers $t$, the discrete Fourier transform of $h$ is

$$ H(\omega) = \sum_{t=-\infty}^{\infty} h(t)e^{-i\omega t} . $$

The inverse transform is

$$ h(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega t}H(\omega)\,d\omega . $$

If $h(t)$ is real-valued, and an even function such that $h(t) = h(-t)$, then

$$ H(\omega) = h(0) + 2\sum_{t=1}^{\infty} h(t)\cos(\omega t) $$

and

$$ h(t) = \frac{1}{\pi}\int_{0}^{\pi}\cos(\omega t)H(\omega)\,d\omega . $$
3.2 The spectral density

The Wiener-Khintchine theorem states that for any real-valued stationary process there exists a spectral distribution function, $F(\omega)$, which is nondecreasing and right continuous on $[0, \pi]$ such that $F(0) = 0$, $F(\pi) = \gamma_0$ and

$$ \gamma_k = \int_0^{\pi}\cos(\omega k)\,dF(\omega) . $$

The integral is a Lebesgue-Stieltjes integral and is defined even if $F$ has discontinuities. Informally, $F(\omega_2) - F(\omega_1)$ is the contribution to the variance of the series made by frequencies in the range $(\omega_1, \omega_2)$.

$F(\omega)$ can have jump discontinuities, but always can be decomposed as

$$ F(\omega) = F_1(\omega) + F_2(\omega) $$

where $F_1(\omega)$ is a nondecreasing continuous function and $F_2(\omega)$ is a nondecreasing step function. This is a decomposition of the series into a purely indeterministic component and a deterministic component.

Suppose the process is purely indeterministic (which happens if and only if $\sum_k|\gamma_k| < \infty$). In this case $F(\omega)$ is a nondecreasing continuous function, differentiable at all points (except possibly on a set of measure zero). Its derivative $f(\omega) = F'(\omega)$ exists, and is called the spectral density function. Apart from a multiplication by $1/\pi$ it is simply the discrete Fourier transform of the autocovariance function and is given by

$$ f(\omega) = \frac{1}{\pi}\sum_{k=-\infty}^{\infty}\gamma_k e^{-ik\omega} = \frac{1}{\pi}\gamma_0 + \frac{2}{\pi}\sum_{k=1}^{\infty}\gamma_k\cos(\omega k), $$

with inverse

$$ \gamma_k = \int_0^{\pi}\cos(\omega k)f(\omega)\,d\omega . $$

Note. Some authors define the spectral distribution function on $[-\pi, \pi]$; the use of negative frequencies makes the interpretation of the spectral distribution less intuitive and leads to a difference of a factor of 2 in the definition of the spectral density. Notice, however, that if $f$ is defined as above and extended to negative frequencies, $f(\omega) = f(-\omega)$, then we can write

$$ \gamma_k = \int_{-\pi}^{\pi}\tfrac{1}{2}e^{i\omega k}f(\omega)\,d\omega . $$
Example 3.1

(a) Suppose $\{X_t\}$ is i.i.d., $\gamma_0 = \operatorname{var}(X_t) = \sigma^2 > 0$ and $\gamma_k = 0$, $k \ge 1$. Then $f(\omega) = \sigma^2/\pi$. The fact that the spectral density is flat, so that all frequencies are equally present, accounts for our calling this sequence white noise.

(b) As an example of a process which is not purely indeterministic, consider $X_t = \cos(\omega_0 t + U)$ where $\omega_0$ is a value in $[0, \pi]$ and $U \sim U[-\pi, \pi]$. The process has zero mean, since

$$ E(X_t) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_0 t + u)\,du = 0 $$

and autocovariance

$$ \gamma_k = E(X_t X_{t+k}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_0 t + u)\cos(\omega_0 t + \omega_0 k + u)\,du
= \frac{1}{2\pi}\int_{-\pi}^{\pi}\tfrac{1}{2}\big[\cos(\omega_0 k) + \cos(2\omega_0 t + \omega_0 k + 2u)\big]\,du
= \frac{1}{2\pi}\cdot\tfrac{1}{2}\big[2\pi\cos(\omega_0 k) + 0\big]
= \tfrac{1}{2}\cos(\omega_0 k) . $$

Hence $X_t$ is second order stationary and we have

$$ \gamma_k = \tfrac{1}{2}\cos(\omega_0 k), \qquad F(\omega) = \tfrac{1}{2}I_{[\omega \ge \omega_0]} \qquad\text{and}\qquad f(\omega) = \tfrac{1}{2}\delta_{\omega_0}(\omega) . $$

Note that $F$ is a nondecreasing step function.

More generally, the spectral density

$$ f(\omega) = \sum_{j=1}^{n}\tfrac{1}{2}a_j^2\,\delta_{\omega_j}(\omega) $$

corresponds to the process $X_t = \sum_{j=1}^{n} a_j\cos(\omega_j t + U_j)$ where $\omega_j \in [0, \pi]$ and $U_1, \ldots, U_n$ are i.i.d. $U[-\pi, \pi]$.

(c) The MA(1) process, $X_t = \theta_1\epsilon_{t-1} + \epsilon_t$, where $\{\epsilon_t\}$ is white noise. Recall $\gamma_0 = (1+\theta_1^2)\sigma^2$, $\gamma_1 = \theta_1\sigma^2$, and $\gamma_k = 0$, $k > 1$. Thus

$$ f(\omega) = \frac{\sigma^2(1 + 2\theta_1\cos\omega + \theta_1^2)}{\pi} . $$

(d) The AR(1) process, $X_t = \phi_1 X_{t-1} + \epsilon_t$, where $\{\epsilon_t\}$ is white noise. Recall

$$ \operatorname{var}(X_t) = \phi_1^2\operatorname{var}(X_{t-1}) + \sigma^2 \;\Longrightarrow\; \gamma_0 = \phi_1^2\gamma_0 + \sigma^2 \;\Longrightarrow\; \gamma_0 = \sigma^2/(1-\phi_1^2), $$

where we need $|\phi_1| < 1$ for $X_t$ stationary. Also,

$$ \gamma_k = \operatorname{cov}(X_t, X_{t-k}) = \operatorname{cov}(\phi_1 X_{t-1} + \epsilon_t, X_{t-k}) = \phi_1\gamma_{k-1} . $$

So $\gamma_k = \phi_1^{|k|}\gamma_0$, $k \in \mathbb{Z}$. Thus

$$ f(\omega) = \frac{\gamma_0}{\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}\phi_1^k\gamma_0\cos(\omega k)
= \frac{\gamma_0}{\pi}\left[1 + \sum_{k=1}^{\infty}\phi_1^k\left(e^{i\omega k} + e^{-i\omega k}\right)\right]
= \frac{\gamma_0}{\pi}\left[1 + \frac{\phi_1 e^{i\omega}}{1-\phi_1 e^{i\omega}} + \frac{\phi_1 e^{-i\omega}}{1-\phi_1 e^{-i\omega}}\right]
= \frac{\gamma_0}{\pi}\,\frac{1-\phi_1^2}{1 - 2\phi_1\cos\omega + \phi_1^2}
= \frac{\sigma^2}{\pi(1 - 2\phi_1\cos\omega + \phi_1^2)} . $$

Note that $\phi_1 > 0$ gives power at low frequency, whereas $\phi_1 < 0$ gives power at high frequency.
[Figure: spectral densities $f(\omega)$ for AR(1) processes with $\phi_1 = \tfrac{1}{2}$ and $\phi_1 = -\tfrac{1}{2}$. The plots above are the spectral densities for AR(1) processes in which $\{\epsilon_t\}$ is Gaussian white noise, with $\sigma^2/\pi = 1$. Sample paths of 200 data points are shown below the densities.]
3.3 Analysing the effects of smoothing

Let $\{a_s\}$ be a sequence of real numbers. A linear filter of $\{X_t\}$ is

$$ Y_t = \sum_{s=-\infty}^{\infty} a_s X_{t-s} . $$

In Chapter 5 we show that the spectral density of $\{Y_t\}$ is given by

$$ f_Y(\omega) = |a(\omega)|^2 f_X(\omega), $$

where $a(\omega)$ is the transfer function

$$ a(\omega) = \sum_{s=-\infty}^{\infty} a_s e^{-i\omega s} . $$

This result can be used to explore the effect of smoothing a series.

Example 3.2

Suppose the AR(1) series above, with $\phi_1 = -0.5$, is smoothed by a moving average on three points, so that the smoothed series is

$$ Y_t = \tfrac{1}{3}\left[X_{t+1} + X_t + X_{t-1}\right] . $$

Then $|a(\omega)|^2 = \left|\tfrac{1}{3}e^{-i\omega} + \tfrac{1}{3} + \tfrac{1}{3}e^{i\omega}\right|^2 = \tfrac{1}{9}(1 + 2\cos\omega)^2$.

Notice that $\gamma_X(0) = 4/3$, $\gamma_Y(0) = 2/9$, so $\{Y_t\}$ has 1/6 the variance of $\{X_t\}$. Moreover, all components of frequency $\omega = 2\pi/3$ (i.e., period 3) are eliminated in the smoothed series.
[Figure: the squared transfer function $|a(\omega)|^2$ of the three-point moving average, and the smoothed spectral density $f_Y(\omega) = |a(\omega)|^2 f_X(\omega)$, both plotted on $0 \le \omega \le \pi$; $|a(\omega)|^2$ vanishes at $\omega = 2\pi/3$.]
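A short numerical check of Example 3.2 (Python/NumPy; not in the original notes): evaluate the variance ratio for the AR(1) series with $\phi_1 = -0.5$ and confirm that the transfer function vanishes at $\omega = 2\pi/3$.

```python
import numpy as np

phi, sigma2 = -0.5, 1.0
gamma = lambda k: sigma2 * phi**abs(k) / (1 - phi**2)   # AR(1) autocovariance

# Variance of Y_t = (X_{t+1} + X_t + X_{t-1}) / 3
a = np.array([1/3, 1/3, 1/3])
var_Y = sum(a[i] * a[j] * gamma(i - j) for i in range(3) for j in range(3))
print(var_Y / gamma(0))                    # 1/6, as claimed in Example 3.2

# |a(w)|^2 = (1 + 2 cos w)^2 / 9 vanishes at w = 2*pi/3
w = 2 * np.pi / 3
print(abs(np.sum(a * np.exp(-1j * w * np.arange(-1, 2))))**2)   # ~ 0
```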
4 Estimation of the spectrum

4.1 The periodogram

Suppose we have $T = 2m+1$ observations of a time series, $y_1, \ldots, y_T$. Define the Fourier frequencies, $\omega_j = 2\pi j/T$, $j = 1, \ldots, m$, and consider the regression model

$$ y_t = \alpha_0 + \sum_{j=1}^{m}\alpha_j\cos(\omega_j t) + \sum_{j=1}^{m}\beta_j\sin(\omega_j t), $$

which can be written as a general linear model, $Y = X\theta + \epsilon$, where $Y = (y_1, \ldots, y_T)^\top$, $\theta = (\alpha_0, \alpha_1, \beta_1, \ldots, \alpha_m, \beta_m)^\top$, $\epsilon = (\epsilon_1, \ldots, \epsilon_T)^\top$ and

$$ X = \begin{pmatrix} 1 & c_{11} & s_{11} & \cdots & c_{m1} & s_{m1} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & c_{1T} & s_{1T} & \cdots & c_{mT} & s_{mT} \end{pmatrix}, \qquad c_{jt} = \cos(\omega_j t), \quad s_{jt} = \sin(\omega_j t). $$

The least squares estimates in this model are given by $\hat\theta = (X^\top X)^{-1}X^\top Y$.

Note that

$$ \sum_{t=1}^{T} e^{i\omega_j t} = \frac{e^{i\omega_j}(1 - e^{i\omega_j T})}{1 - e^{i\omega_j}} = 0
\;\Longrightarrow\; \sum_{t=1}^{T} c_{jt} + i\sum_{t=1}^{T} s_{jt} = 0
\;\Longrightarrow\; \sum_{t=1}^{T} c_{jt} = \sum_{t=1}^{T} s_{jt} = 0 $$

and

$$ \sum_{t=1}^{T} c_{jt}s_{jt} = \tfrac{1}{2}\sum_{t=1}^{T}\sin(2\omega_j t) = 0, \qquad
\sum_{t=1}^{T} c_{jt}^2 = \tfrac{1}{2}\sum_{t=1}^{T}\{1 + \cos(2\omega_j t)\} = T/2, $$
$$ \sum_{t=1}^{T} s_{jt}^2 = \tfrac{1}{2}\sum_{t=1}^{T}\{1 - \cos(2\omega_j t)\} = T/2, \qquad
\sum_{t=1}^{T} c_{jt}s_{kt} = \sum_{t=1}^{T} c_{jt}c_{kt} = \sum_{t=1}^{T} s_{jt}s_{kt} = 0, \quad j \neq k. $$

Using these, we have

$$ \hat\theta = \begin{pmatrix}\hat\alpha_0\\ \hat\alpha_1\\ \vdots\\ \hat\beta_m\end{pmatrix}
= \begin{pmatrix} T & 0 & \cdots & 0\\ 0 & T/2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & T/2\end{pmatrix}^{-1}
\begin{pmatrix}\sum_t y_t\\ \sum_t c_{1t}y_t\\ \vdots\\ \sum_t s_{mt}y_t\end{pmatrix}
= \begin{pmatrix}\bar y\\ (2/T)\sum_t c_{1t}y_t\\ \vdots\\ (2/T)\sum_t s_{mt}y_t\end{pmatrix} $$

and the regression sum of squares is

$$ \hat Y^\top\hat Y = Y^\top X(X^\top X)^{-1}X^\top Y = T\bar y^2 + \sum_{j=1}^{m}\frac{2}{T}\left[\left(\sum_{t=1}^{T} c_{jt}y_t\right)^2 + \left(\sum_{t=1}^{T} s_{jt}y_t\right)^2\right] . $$

Since we are fitting $T$ unknown parameters to $T$ data points, the model fits with no residual error, i.e., $\hat Y = Y$. Hence

$$ \sum_{t=1}^{T}(y_t - \bar y)^2 = \sum_{j=1}^{m}\frac{2}{T}\left[\left(\sum_{t=1}^{T} c_{jt}y_t\right)^2 + \left(\sum_{t=1}^{T} s_{jt}y_t\right)^2\right] . $$

This motivates definition of the periodogram as

$$ I(\omega) = \frac{1}{\pi T}\left[\left(\sum_{t=1}^{T} y_t\cos(\omega t)\right)^2 + \left(\sum_{t=1}^{T} y_t\sin(\omega t)\right)^2\right] . $$

A factor of $1/(2\pi)$ has been introduced into this definition so that the sample variance, $\hat\gamma_0 = (1/T)\sum_{t=1}^{T}(y_t - \bar y)^2$, equates to the sum of the areas of $m$ rectangles, whose heights are $I(\omega_1), \ldots, I(\omega_m)$, whose widths are $2\pi/T$, and whose bases are centred at $\omega_1, \ldots, \omega_m$. I.e., $\hat\gamma_0 = (2\pi/T)\sum_{j=1}^{m} I(\omega_j)$. These rectangles approximate the area under the curve $I(\omega)$, $0 \le \omega \le \pi$.
[Figure: the periodogram $I(\omega)$ on $0 \le \omega \le \pi$, with the rectangle of width $2\pi/T$ and height $I(\omega_5)$ centred at $\omega_5$ shaded.]
Using the fact that $\sum_{t=1}^{T} c_{jt} = \sum_{t=1}^{T} s_{jt} = 0$, we can write

$$ \pi T\,I(\omega_j) = \left(\sum_{t=1}^{T} y_t\cos(\omega_j t)\right)^2 + \left(\sum_{t=1}^{T} y_t\sin(\omega_j t)\right)^2
= \left(\sum_{t=1}^{T}(y_t - \bar y)\cos(\omega_j t)\right)^2 + \left(\sum_{t=1}^{T}(y_t - \bar y)\sin(\omega_j t)\right)^2 $$
$$ = \left|\sum_{t=1}^{T}(y_t - \bar y)e^{i\omega_j t}\right|^2
= \sum_{t=1}^{T}(y_t - \bar y)e^{i\omega_j t}\sum_{s=1}^{T}(y_s - \bar y)e^{-i\omega_j s}
= \sum_{t=1}^{T}(y_t - \bar y)^2 + 2\sum_{k=1}^{T-1}\sum_{t=k+1}^{T}(y_t - \bar y)(y_{t-k} - \bar y)\cos(\omega_j k) . $$

Hence

$$ I(\omega_j) = \frac{1}{\pi}\hat\gamma_0 + \frac{2}{\pi}\sum_{k=1}^{T-1}\hat\gamma_k\cos(\omega_j k) . $$

$I(\omega)$ is therefore a sample version of the spectral density $f(\omega)$.
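A direct transcription of the periodogram formulae (Python/NumPy; not part of the notes): at a Fourier frequency the raw definition and the autocovariance form give the same value, as the identity above asserts.

```python
import numpy as np

def periodogram(y, w):
    """I(w) = (1/(pi*T)) * [ (sum y_t cos(wt))^2 + (sum y_t sin(wt))^2 ]."""
    T = len(y)
    t = np.arange(1, T + 1)
    return ((y @ np.cos(w * t))**2 + (y @ np.sin(w * t))**2) / (np.pi * T)

rng = np.random.default_rng(3)
y = rng.normal(size=201)                      # T = 2m + 1 with m = 100
T = len(y)
wj = 2 * np.pi * 5 / T                        # the 5th Fourier frequency

# Autocovariance form: I(w_j) = (1/pi) gamma_0 + (2/pi) sum_k gamma_k cos(w_j k)
yc = y - y.mean()
gam = np.array([(yc[k:] * yc[:T - k]).sum() / T for k in range(T)])
I_acv = gam[0] / np.pi + 2 / np.pi * np.sum(gam[1:] * np.cos(wj * np.arange(1, T)))

print(periodogram(y, wj), I_acv)              # equal up to rounding error
```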
4.2 Distribution of spectral estimates

If the process is stationary and the spectral density exists then $I(\omega)$ is an almost unbiased estimator of $f(\omega)$, but it is a rather poor estimator without some smoothing.

Suppose $\{y_t\}$ is Gaussian white noise, i.e., $y_1, \ldots, y_T$ are i.i.d. $N(0, \sigma^2)$. Then for any Fourier frequency $\omega = 2\pi j/T$,

$$ I(\omega) = \frac{1}{\pi T}\left[A(\omega)^2 + B(\omega)^2\right], \qquad (4.1) $$

where

$$ A(\omega) = \sum_{t=1}^{T} y_t\cos(\omega t), \qquad B(\omega) = \sum_{t=1}^{T} y_t\sin(\omega t). \qquad (4.2) $$

Clearly $A(\omega)$ and $B(\omega)$ have zero means, and

$$ \operatorname{var}[A(\omega)] = \sigma^2\sum_{t=1}^{T}\cos^2(\omega t) = T\sigma^2/2, \qquad
\operatorname{var}[B(\omega)] = \sigma^2\sum_{t=1}^{T}\sin^2(\omega t) = T\sigma^2/2, $$

$$ \operatorname{cov}[A(\omega), B(\omega)] = E\left[\sum_{t=1}^{T}\sum_{s=1}^{T} y_t y_s\cos(\omega t)\sin(\omega s)\right] = \sigma^2\sum_{t=1}^{T}\cos(\omega t)\sin(\omega t) = 0 . $$

Hence $A(\omega)\sqrt{2/(T\sigma^2)}$ and $B(\omega)\sqrt{2/(T\sigma^2)}$ are independently distributed as $N(0, 1)$, and $2\left[A(\omega)^2 + B(\omega)^2\right]/(T\sigma^2)$ is distributed as $\chi^2_2$. This gives $I(\omega) \sim (\sigma^2/\pi)\chi^2_2/2$. Thus we see that $I(\omega)$ is an unbiased estimator of the spectrum, $f(\omega) = \sigma^2/\pi$, but it is not consistent, since $\operatorname{var}[I(\omega)] = \sigma^4/\pi^2$ does not tend to 0 as $T \to \infty$. This is perhaps surprising, but is explained by the fact that as $T$ increases we are attempting to estimate $I(\omega)$ for an increasing number of Fourier frequencies, with the consequence that the precision of each estimate does not change.

By a similar argument, we can show that for any two Fourier frequencies, $\omega_j$ and $\omega_k$, the estimates $I(\omega_j)$ and $I(\omega_k)$ are statistically independent. These conclusions hold more generally.

Theorem 4.1 Let $\{Y_t\}$ be a stationary Gaussian process with spectrum $f(\omega)$. Let $I(\omega)$ be the periodogram based on samples $Y_1, \ldots, Y_T$, and let $\omega_j = 2\pi j/T$, $j < T/2$, be a Fourier frequency. Then in the limit as $T \to \infty$,

(a) $I(\omega_j) \sim f(\omega_j)\chi^2_2/2$;

(b) $I(\omega_j)$ and $I(\omega_k)$ are independent for $j \neq k$.

Assuming that the underlying spectrum is smooth, $f(\omega)$ is nearly constant over a small range of $\omega$. This motivates use of an estimator for the spectrum of

$$ \hat f(\omega_j) = \frac{1}{2p+1}\sum_{\ell=-p}^{p} I(\omega_{j+\ell}) . $$

Then $\hat f(\omega_j) \sim f(\omega_j)\chi^2_{2(2p+1)}/[2(2p+1)]$, which has variance $f(\omega)^2/(2p+1)$. The idea is to let $p \to \infty$ as $T \to \infty$.
4.3 The fast Fourier transform

$I(\omega_j)$ can be calculated from (4.1)-(4.2), or from

$$ I(\omega_j) = \frac{1}{\pi T}\left|\sum_{t=1}^{T} y_t e^{i\omega_j t}\right|^2 . $$

Either way, this requires of order $T$ multiplications. Hence to calculate the complete periodogram, i.e., $I(\omega_1), \ldots, I(\omega_m)$, requires of order $T^2$ multiplications. Computation effort can be reduced significantly by use of the fast Fourier transform, which computes $I(\omega_1), \ldots, I(\omega_m)$ using only order $T\log_2 T$ multiplications.
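In practice the whole periodogram is computed with an FFT and then smoothed as in Section 4.2. A sketch in Python/NumPy (not from the notes; `numpy.fft.fft` is the assumed FFT routine, and the moving-average smoothing at the ends of the frequency range is only approximate):

```python
import numpy as np

def periodogram_fft(y):
    """I(w_j) at the Fourier frequencies w_j = 2*pi*j/T, j = 1..m, via the FFT.

    The FFT sums over t = 0..T-1 rather than 1..T, but the modulus is unchanged.
    """
    T = len(y)
    m = (T - 1) // 2
    dft = np.fft.fft(y)
    return (np.abs(dft[1:m + 1])**2) / (np.pi * T)

def smoothed_spectrum(I, p):
    """Average I over windows of 2p+1 neighbouring Fourier frequencies."""
    kernel = np.ones(2 * p + 1) / (2 * p + 1)
    return np.convolve(I, kernel, mode="same")

rng = np.random.default_rng(4)
y = rng.normal(size=501)                     # Gaussian white noise, sigma^2 = 1
I = periodogram_fft(y)
f_hat = smoothed_spectrum(I, p=10)
print(I.mean(), f_hat[20])                   # both close to f(w) = 1/pi ~ 0.318
```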
5 Linear filters

5.1 The Filter Theorem

A linear filter of one random sequence $\{X_t\}$ into another sequence $\{Y_t\}$ is

$$ Y_t = \sum_{s=-\infty}^{\infty} a_s X_{t-s} . \qquad (5.1) $$

Theorem 5.1 (the filter theorem) Suppose $X_t$ is a stationary time series with spectral density $f_X(\omega)$. Let $\{a_t\}$ be a sequence of real numbers such that $\sum_{t=-\infty}^{\infty}|a_t| < \infty$. Then the process $Y_t = \sum_{s=-\infty}^{\infty} a_s X_{t-s}$ is a stationary time series with spectral density function

$$ f_Y(\omega) = \left|A(e^{i\omega})\right|^2 f_X(\omega) = |a(\omega)|^2 f_X(\omega), $$

where $A(z)$ is the filter generating function

$$ A(z) = \sum_{s=-\infty}^{\infty} a_s z^s, \qquad |z| \le 1, $$

and $a(\omega) = A(e^{i\omega})$ is the transfer function of the linear filter.

Proof.

$$ \operatorname{cov}(Y_t, Y_{t+k}) = \sum_{r\in\mathbb{Z}}\sum_{s\in\mathbb{Z}} a_r a_s\operatorname{cov}(X_{t-r}, X_{t+k-s})
= \sum_{r,s\in\mathbb{Z}} a_r a_s\gamma_{k+r-s}
= \sum_{r,s\in\mathbb{Z}} a_r a_s\int_{-\pi}^{\pi}\tfrac{1}{2}e^{i\omega(k+r-s)}f_X(\omega)\,d\omega $$
$$ = \int_{-\pi}^{\pi} A(e^{i\omega})A(e^{-i\omega})\tfrac{1}{2}e^{i\omega k}f_X(\omega)\,d\omega
= \int_{-\pi}^{\pi}\tfrac{1}{2}e^{i\omega k}\left|A(e^{i\omega})\right|^2 f_X(\omega)\,d\omega
= \int_{-\pi}^{\pi}\tfrac{1}{2}e^{i\omega k}f_Y(\omega)\,d\omega . $$

Thus $f_Y(\omega)$ is the spectral density for $Y$ and $Y$ is stationary.
5.2 Application to autoregressive processes

Let us use the notation $B$ for the backshift operator

$$ B^0 = I, \quad (B^0 X)_t = X_t, \quad (BX)_t = X_{t-1}, \quad (B^2 X)_t = X_{t-2}, \ldots $$

Then the AR(p) process can be written as

$$ \left(I - \sum_{r=1}^{p}\phi_r B^r\right)X = \epsilon $$

or $\phi(B)X = \epsilon$, where $\phi$ is the function

$$ \phi(z) = 1 - \sum_{r=1}^{p}\phi_r z^r . $$

By the filter theorem, $f_\epsilon(\omega) = |\phi(e^{i\omega})|^2 f_X(\omega)$, so since $f_\epsilon(\omega) = \sigma^2/\pi$,

$$ f_X(\omega) = \frac{\sigma^2}{\pi\,|\phi(e^{i\omega})|^2} . \qquad (5.2) $$

As $f_X(\omega) = (1/\pi)\sum_{k=-\infty}^{\infty}\gamma_k e^{-i\omega k}$, we can calculate the autocovariances by expanding $f_X(\omega)$ as a power series in $e^{i\omega}$. For this to work, the zeros of $\phi(z)$ must lie outside the unit circle in $\mathbb{C}$. This is the stationarity condition for the AR(p) process.

Example 5.2

For the AR(1) process, $X_t - \phi_1 X_{t-1} = \epsilon_t$, we have $\phi(z) = 1 - \phi_1 z$, with its zero at $z = 1/\phi_1$. The stationarity condition is $|\phi_1| < 1$. Using (5.2) we find

$$ f_X(\omega) = \frac{\sigma^2}{\pi\,|1 - \phi_1 e^{i\omega}|^2} = \frac{\sigma^2}{\pi(1 - 2\phi_1\cos\omega + \phi_1^2)}, $$

which is what we found by another method in Example 3.1(d). To find the autocovariances we can write, taking $z = e^{i\omega}$,

$$ \frac{1}{|\phi(z)|^2} = \frac{1}{\phi(z)\phi(1/z)} = \frac{1}{(1-\phi_1 z)(1-\phi_1/z)}
= \sum_{r=0}^{\infty}\phi_1^r z^r\sum_{s=0}^{\infty}\phi_1^s z^{-s}
= \sum_{k=-\infty}^{\infty} z^k\left(\phi_1^{|k|}(1 + \phi_1^2 + \phi_1^4 + \cdots)\right)
= \sum_{k=-\infty}^{\infty} z^k\,\frac{\phi_1^{|k|}}{1-\phi_1^2} $$

$$ \Longrightarrow\quad f_X(\omega) = \frac{\sigma^2}{\pi}\sum_{k=-\infty}^{\infty}\frac{\phi_1^{|k|}}{1-\phi_1^2}\,e^{-i\omega k} $$

and so $\gamma_k = \sigma^2\phi_1^{|k|}/(1-\phi_1^2)$, as we saw before.

In general, it is often easier to calculate the spectral density function first, using filters, and then deduce the autocovariance function from it.
5.3 Application to moving average processes

The MA(q) process $X_t = \epsilon_t + \sum_{s=1}^{q}\theta_s\epsilon_{t-s}$ can be written as

$$ X = \theta(B)\epsilon $$

where $\theta(z) = \sum_{s=0}^{q}\theta_s z^s$. By the filter theorem, $f_X(\omega) = |\theta(e^{i\omega})|^2(\sigma^2/\pi)$.

Example 5.3

For the MA(1), $X_t = \epsilon_t + \theta_1\epsilon_{t-1}$, $\theta(z) = 1 + \theta_1 z$ and

$$ f_X(\omega) = \frac{\sigma^2}{\pi}\left(1 + 2\theta_1\cos\omega + \theta_1^2\right) . $$

As above, we can obtain the autocovariance function by expressing $f_X(\omega)$ as a power series in $e^{i\omega}$. We have

$$ f_X(\omega) = \frac{\sigma^2}{\pi}\left[\theta_1 e^{-i\omega} + (1 + \theta_1^2) + \theta_1 e^{i\omega}\right] = \frac{\sigma^2}{\pi}\left(1 + \theta_1 e^{i\omega}\right)\left(1 + \theta_1 e^{-i\omega}\right) . $$

So $\gamma_0 = \sigma^2(1+\theta_1^2)$, $\gamma_1 = \theta_1\sigma^2$, $\gamma_k = 0$, $|k| > 1$.

As we remarked in Section 1.5, the autocovariance function of a MA(1) process with parameters $(\sigma^2, \theta_1)$ is identical to one with parameters $(\sigma^2\theta_1^2, 1/\theta_1)$. That is,

$$ \gamma_0' = \sigma^2\theta_1^2\left(1 + 1/\theta_1^2\right) = \sigma^2(1 + \theta_1^2) = \gamma_0, $$
$$ \rho_1' = (1/\theta_1)\big/\left(1 + 1/\theta_1^2\right) = \theta_1/(1 + \theta_1^2) = \rho_1 . $$

In general, the MA(q) process can be written as $X = \theta(B)\epsilon$, where

$$ \theta(z) = \sum_{k=0}^{q}\theta_k z^k = \prod_{k=1}^{q}(\alpha_k - z) . $$

So the autocovariance generating function is

$$ g(z) = \sum_{k=-q}^{q}\gamma_k z^k = \sigma^2\theta(z)\theta(z^{-1}) = \sigma^2\prod_{k=1}^{q}(\alpha_k - z)(\alpha_k - z^{-1}) . \qquad (5.3) $$

Note that $(\alpha_k - z)(\alpha_k - z^{-1}) = \alpha_k^2(\alpha_k^{-1} - z)(\alpha_k^{-1} - z^{-1})$. So $g(z)$ is unchanged in (5.3) if (for any $k$ such that $\alpha_k$ is real) we replace $\alpha_k$ by $\alpha_k^{-1}$ and multiply $\sigma^2$ by $\alpha_k^2$. Thus (if all roots of $\theta(z) = 0$ are real) there can be $2^q$ different MA(q) processes with the same autocovariance function. For identifiability, we assume that all the roots of $\theta(z)$ lie outside the unit circle in $\mathbb{C}$. This is equivalent to the invertibility condition, that $\epsilon_t$ can be written as a convergent power series in $\{X_t, X_{t-1}, \ldots\}$.
5.4 The general linear process

A special case of (5.1) is the general linear process,

$$ Y_t = \sum_{s=0}^{\infty} a_s X_{t-s}, $$

where $\{X_t\}$ is white noise. This has

$$ \operatorname{cov}(Y_t, Y_{t+k}) = \sigma^2\sum_{s=0}^{\infty} a_s a_{s+k} \le \sigma^2\sum_{s=0}^{\infty} a_s^2, $$

where the inequality is an equality when $k = 0$. Thus $\{Y_t\}$ is stationary if and only if $\sum_{s=0}^{\infty} a_s^2 < \infty$. In practice the general linear model is useful when the $a_s$ are expressible in terms of a finite number of parameters which can be estimated. A rich class of such models are the ARMA models.
5.5 Filters and ARMA processes

The ARMA(p, q) model can be written as $\phi(B)X = \theta(B)\epsilon$. Thus

$$ |\phi(e^{i\omega})|^2 f_X(\omega) = |\theta(e^{i\omega})|^2\frac{\sigma^2}{\pi} \quad\Longrightarrow\quad f_X(\omega) = \frac{\sigma^2}{\pi}\left|\frac{\theta(e^{i\omega})}{\phi(e^{i\omega})}\right|^2 . $$

This is subject to the conditions that

the zeros of $\phi$ lie outside the unit circle in $\mathbb{C}$, for stationarity;
the zeros of $\theta$ lie outside the unit circle in $\mathbb{C}$, for identifiability;
$\phi(z)$ and $\theta(z)$ have no common roots.

If there were a common root, say $1/\alpha$, so that $(I - \alpha B)\phi_1(B)X = (I - \alpha B)\theta_1(B)\epsilon$, then we could multiply both sides by $\sum_{n=0}^{\infty}\alpha^n B^n$ and deduce $\phi_1(B)X = \theta_1(B)\epsilon$, and thus that a more economical ARMA(p-1, q-1) model suffices.
5.6 Calculating autocovariances in ARMA models

As above, the filter theorem can assist in calculating the autocovariances of a model. These can be compared with autocovariances estimated from the data. For example, an ARMA(1, 2) has

$$ \phi(z) = 1 - \phi z, \qquad \theta(z) = 1 + \theta_1 z + \theta_2 z^2, \qquad\text{where } |\phi| < 1 . $$

Then $X = C(B)\epsilon$, where

$$ C(z) = \theta(z)/\phi(z) = \left(1 + \theta_1 z + \theta_2 z^2\right)\sum_{n=0}^{\infty}\phi^n z^n = \sum_{n=0}^{\infty} c_n z^n, $$

with $c_0 = 1$, $c_1 = \phi + \theta_1$, and

$$ c_n = \phi^n + \phi^{n-1}\theta_1 + \phi^{n-2}\theta_2 = \phi^{n-2}\left(\phi^2 + \phi\theta_1 + \theta_2\right), \qquad n \ge 2 . $$

So $X_t = \sum_{n=0}^{\infty} c_n\epsilon_{t-n}$ and we can compute covariances as

$$ \gamma_k = \operatorname{cov}(X_t, X_{t+k}) = \sum_{n,m=0}^{\infty} c_n c_m\operatorname{cov}(\epsilon_{t-n}, \epsilon_{t+k-m}) = \sigma^2\sum_{n=0}^{\infty} c_n c_{n+k} . $$

For example, $\gamma_k = \phi\gamma_{k-1}$, $k \ge 3$. As a test of whether the model is ARMA(1, 2) we might look to see if the sample autocovariances decay geometrically for $k \ge 2$.
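A sketch of this calculation in Python/NumPy (not from the notes): expand $C(z)$, truncate the power series, and form $\gamma_k = \sigma^2\sum_n c_n c_{n+k}$; the output exhibits the geometric decay $\gamma_k = \phi\gamma_{k-1}$ for $k \ge 3$. The parameter values are illustrative only.

```python
import numpy as np

phi, theta1, theta2, sigma2 = 0.7, 0.4, 0.2, 1.0
N = 200                                       # truncation point of the power series

# Coefficients of C(z) = (1 + theta1*z + theta2*z^2) / (1 - phi*z)
c = np.zeros(N)
c[0] = 1.0
c[1] = phi + theta1
for n in range(2, N):
    c[n] = phi**n + phi**(n - 1) * theta1 + phi**(n - 2) * theta2

def gamma(k):
    return sigma2 * np.sum(c[:N - k] * c[k:])

for k in range(5):
    print(k, round(gamma(k), 4))
print(round(gamma(4) / gamma(3), 4))          # equals phi, since gamma_k = phi * gamma_{k-1} for k >= 3
```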
6 Estimation of trend and seasonality

6.1 Moving averages

Consider a decomposition into trend, seasonal, cyclic and residual components,

$$ X_t = T_t + I_t + C_t + E_t . $$

Thus far we have been concerned with modelling $\{E_t\}$. We have also seen that the periodogram can be useful for recognising the presence of $\{C_t\}$.

We can estimate trend using a symmetric moving average,

$$ \hat T_t = \sum_{s=-k}^{k} a_s X_{t+s}, $$

where $a_s = a_{-s}$. In this case the transfer function is real-valued.

The choice of moving averages requires care. For example, we might try to estimate the trend with

$$ \hat T_t = \tfrac{1}{3}(X_{t-1} + X_t + X_{t+1}) . $$

But suppose $X_t = T_t + \epsilon_t$, where trend is the quadratic $T_t = a + bt + ct^2$. Then

$$ \hat T_t = T_t + \tfrac{2}{3}c + \tfrac{1}{3}(\epsilon_{t-1} + \epsilon_t + \epsilon_{t+1}), $$

so $E\hat T_t = EX_t + \tfrac{2}{3}c$ and thus $\hat T$ is a biased estimator of the trend.

This problem is avoided if we estimate trend by fitting a polynomial of sufficient degree, e.g., to find a cubic that best fits seven successive points we minimize

$$ \sum_{t=-3}^{3}\left(X_t - b_0 - b_1 t - b_2 t^2 - b_3 t^3\right)^2 . $$

So

$$ \sum X_t = 7\hat b_0 + 28\hat b_2, \qquad \sum tX_t = 28\hat b_1 + 196\hat b_3, $$
$$ \sum t^2 X_t = 28\hat b_0 + 196\hat b_2, \qquad \sum t^3 X_t = 196\hat b_1 + 1588\hat b_3 . $$

Then

$$ \hat b_0 = \frac{1}{21}\left(7\sum X_t - \sum t^2 X_t\right) = \frac{1}{21}\left(-2X_{-3} + 3X_{-2} + 6X_{-1} + 7X_0 + 6X_1 + 3X_2 - 2X_3\right) . $$

We estimate the trend at time 0 by $\hat T_0 = \hat b_0$, and similarly,

$$ \hat T_t = \frac{1}{21}\left(-2X_{t-3} + 3X_{t-2} + 6X_{t-1} + 7X_t + 6X_{t+1} + 3X_{t+2} - 2X_{t+3}\right) . $$

A notation for this moving average is $\frac{1}{21}[-2, 3, 6, 7, 6, 3, -2]$. Note that the weights sum to 1. In general, we can fit a polynomial of degree q to 2q+1 points by applying a symmetric moving average. (We fit to an odd number of points so that the midpoint of the fitted range coincides with a point in time at which data is measured.)

A value for q can be identified using the variate difference method: if $\{X_t\}$ is indeed a polynomial of degree q, plus residual error $\{\epsilon_t\}$, then the trend in $\nabla^r X_t$ is a polynomial of degree $q - r$ and

$$ \nabla^q X_t = \text{constant} + \nabla^q\epsilon_t = \text{constant} + \epsilon_t - \binom{q}{1}\epsilon_{t-1} + \binom{q}{2}\epsilon_{t-2} - \cdots + (-1)^q\epsilon_{t-q} . $$

The variance of $\nabla^q X_t$ is therefore

$$ \operatorname{var}(\nabla^q\epsilon_t) = \left[1 + \binom{q}{1}^2 + \binom{q}{2}^2 + \cdots + 1\right]\sigma^2 = \binom{2q}{q}\sigma^2, $$

where the simplification in the final line comes from looking at the coefficient of $z^q$ in expansions of both sides of

$$ (1+z)^q(1+z)^q = (1+z)^{2q} . $$

Define $V_r = \operatorname{var}(\nabla^r X_t)\big/\binom{2r}{r}$. The fact that the plot of $V_r$ against $r$ should flatten out at $r \ge q$ can be used to identify q.
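The following sketch (Python/NumPy; not from the notes, with an illustrative simulated series) applies the seven-point cubic moving average $\frac{1}{21}[-2,3,6,7,6,3,-2]$ and computes the variate-difference statistics $V_r$.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(5)
t = np.arange(200.0)
x = 0.001 * t**3 - 0.05 * t**2 + t + rng.normal(0, 5, t.size)   # cubic trend plus noise

# Symmetric moving average that fits a cubic to 7 successive points
weights = np.array([-2, 3, 6, 7, 6, 3, -2]) / 21.0
trend = np.convolve(x, weights, mode="valid")    # trend estimates for the interior points

# Variate difference method: V_r = var(diff^r X) / C(2r, r) flattens out for r >= q
for r in range(1, 7):
    V_r = np.var(np.diff(x, n=r)) / comb(2 * r, r)
    print(r, round(V_r, 2))                      # roughly constant from r = 3 onwards
```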
6.2 Centred moving averages

If there is a seasonal component then a centred moving average is useful. Suppose data is measured quarterly; then applying twice the moving average $\frac{1}{4}[1, 1, 1, 1]$ is equivalent to applying once the moving average $\frac{1}{8}[1, 2, 2, 2, 1]$. Notice that this so-called centred average of fours weights each quarter equally. Thus if $X_t = I_t + \epsilon_t$, where $I_t$ has period 4, and $I_1 + I_2 + I_3 + I_4 = 0$, then $\hat T_t$ has no seasonal component. Similarly, if data were monthly we use a centred average of 12s, that is, $\frac{1}{24}[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1]$.
6.3 The Slutzky-Yule effect

To remove both trend and seasonal components we might successively apply a number of moving averages, one or more to remove trend and another to remove seasonal effects. This is the procedure followed by some standard forecasting packages.

However, there is a danger that application of successive moving averages can introduce spurious effects. The Slutzky-Yule effect is concerned with the fact that a moving average repeatedly applied to a purely random series can introduce artificial cycles. Slutzky (1927) showed that some trade cycles of the nineteenth century were no more than artifacts of moving averages that had been used to smooth the data.

To illustrate this idea, suppose the moving average $\frac{1}{6}[-1, 2, 4, 2, -1]$ is applied k times to a white noise series. This moving average has transfer function $a(\omega) = \frac{1}{6}(4 + 4\cos\omega - 2\cos 2\omega)$, which is maximal at $\omega = \pi/3$. The smoothed series has a spectral density, say $f_k(\omega)$, proportional to $a(\omega)^{2k}$, and hence for $\omega \neq \pi/3$, $f_k(\omega)/f_k(\pi/3) \to 0$ as $k \to \infty$. Thus in the limit the smoothed series is a periodic wave with period 6.
6.4 Exponential smoothing

Single exponential smoothing

Suppose the mean level of a series drifts slowly over time. A naive one-step-ahead forecast is $\hat X_t(1) = X_t$. However, we might let all past observations play a part in the forecast, but give greater weights to those that are more recent. Choose weights to decrease exponentially and let

$$ \hat X_t(1) = \frac{1-\theta}{1-\theta^t}\left(X_t + \theta X_{t-1} + \theta^2 X_{t-2} + \cdots + \theta^{t-1}X_1\right), $$

where $0 < \theta < 1$. Define $S_t$ as the right hand side of the above as $t \to \infty$, i.e.,

$$ S_t = (1-\theta)\sum_{s=0}^{\infty}\theta^s X_{t-s} . $$

$S_t$ can serve as a one-step-ahead forecast, $\hat X_t(1)$. $S_t$ is known as simple exponential smoothing. Let $\alpha = 1 - \theta$. Simple algebra gives

$$ S_t = \alpha X_t + (1-\alpha)S_{t-1} \quad\Longleftrightarrow\quad \hat X_t(1) = \hat X_{t-1}(1) + \alpha\left[X_t - \hat X_{t-1}(1)\right] . $$

This shows that the one-step-ahead forecast at time t is the one-step-ahead forecast at time t-1, modified by $\alpha$ times the forecasting error incurred at time t-1.

To get things started we might set $S_0$ equal to the average of the first few data points. We can play around with $\alpha$, choosing it to minimize the mean square forecasting error. In practice, $\alpha$ in the range 0.25-0.5 usually works well.
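A sketch of simple exponential smoothing as a one-step-ahead forecaster (Python/NumPy; not part of the notes, with an illustrative simulated series), using the recursion $S_t = \alpha X_t + (1-\alpha)S_{t-1}$ and choosing $\alpha$ by mean square forecast error:

```python
import numpy as np

def simple_exponential_smoothing(x, alpha, s0=None):
    """Return S_t for t = 1..T; S_t serves as the forecast of X_{t+1}."""
    s = np.empty(len(x))
    s_prev = x[:4].mean() if s0 is None else s0   # start from the average of the first few points
    for t, xt in enumerate(x):
        s_prev = alpha * xt + (1 - alpha) * s_prev
        s[t] = s_prev
    return s

rng = np.random.default_rng(6)
level = np.cumsum(rng.normal(0, 0.1, 300))        # slowly drifting mean level
x = level + rng.normal(0, 1, 300)
for alpha in (0.1, 0.3, 0.5):
    err = x[1:] - simple_exponential_smoothing(x, alpha)[:-1]
    print(alpha, round(np.mean(err**2), 3))       # pick alpha with the smallest value
```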
Double exponential smoothing

Suppose the series is approximately linear, but with a slowly varying trend. If it were true that $X_t = b_0 + b_1 t + \epsilon_t$, then

$$ S_t = (1-\theta)\sum_{s=0}^{\infty}\theta^s\left(b_0 + b_1(t-s) + \epsilon_{t-s}\right)
= b_0 + b_1 t - b_1(1-\theta)\sum_{s=0}^{\infty} s\theta^s + (1-\theta)\sum_{s=0}^{\infty}\theta^s\epsilon_{t-s}, $$

and hence

$$ ES_t = b_0 + b_1 t - b_1\theta/(1-\theta) = EX_{t+1} - b_1/(1-\theta) . $$

Thus, as a forecast of $X_{t+1}$, $S_t$ has a bias of $-b_1/(1-\theta)$. To eliminate this bias let $S^1_t = S_t$ be the first smoothing, and $S^2_t = \alpha S^1_t + (1-\alpha)S^2_{t-1}$ be the simple exponential smoothing of $S^1_t$. Then

$$ ES^2_t = ES^1_t - b_1\theta/(1-\theta) = EX_t - 2b_1\theta/(1-\theta), $$
$$ E(2S^1_t - S^2_t) = b_0 + b_1 t, \qquad E(S^1_t - S^2_t) = b_1\theta/(1-\theta) . $$

This suggests the estimates $\hat b_0 + \hat b_1 t = 2S^1_t - S^2_t$ and $\hat b_1 = (1-\theta)(S^1_t - S^2_t)/\theta$. The forecasting equation is then

$$ \hat X_t(s) = \hat b_0 + \hat b_1(t+s) = (2S^1_t - S^2_t) + s(1-\theta)(S^1_t - S^2_t)/\theta . $$

As with single exponential smoothing we can experiment with choices of $\alpha = 1-\theta$, and find $S^1_0$ and $S^2_0$ by fitting a regression line, $X_t = \hat\beta_0 + \hat\beta_1 t$, to the first few points of the series and solving

$$ S^1_0 = \hat\beta_0 - \theta\hat\beta_1/(1-\theta), \qquad S^2_0 = \hat\beta_0 - 2\theta\hat\beta_1/(1-\theta) . $$
6.5 Calculation of seasonal indices

Suppose data is quarterly and we want to fit an additive model. Let $\bar I_1$ be the average of $X_1, X_5, X_9, \ldots$, let $\bar I_2$ be the average of $X_2, X_6, X_{10}, \ldots$, and so on for $\bar I_3$ and $\bar I_4$. The cumulative seasonal effects over the course of a year should cancel, so that if $X_t = a + I_t$, then $X_t + X_{t+1} + X_{t+2} + X_{t+3} = 4a$. To ensure this we take our final estimates of the seasonal indices as

$$ I^*_t = \bar I_t - \tfrac{1}{4}\left(\bar I_1 + \cdots + \bar I_4\right) . $$

If the model is multiplicative and $X_t = aI_t$, we again wish to see the cumulative effects over a year cancel, so that $X_t + X_{t+1} + X_{t+2} + X_{t+3} = 4a$. This means that we should take

$$ I^*_t = \bar I_t - \tfrac{1}{4}\left(\bar I_1 + \cdots + \bar I_4\right) + 1, $$

adjusting so that the mean of $I^*_1, I^*_2, I^*_3, I^*_4$ is 1.

When both trend and seasonality are to be extracted a two-stage procedure is recommended:

(a) Make a first estimate of trend, say $\hat T^1_t$.
Subtract this from $\{X_t\}$ and calculate first estimates of the seasonal indices, say $I^1_t$, from $X_t - \hat T^1_t$.
The first estimate of the deseasonalized series is $Y^1_t = X_t - I^1_t$.

(b) Make a second estimate of the trend by smoothing $Y^1_t$, say $\hat T^2_t$.
Subtract this from $\{X_t\}$ and calculate second estimates of the seasonal indices, say $I^2_t$, from $X_t - \hat T^2_t$.
The second estimate of the deseasonalized series is $Y^2_t = X_t - I^2_t$.
7 Fitting ARIMA models

7.1 The Box-Jenkins procedure

A general ARIMA(p, d, q) model is $\phi(B)\nabla^d X = \theta(B)\epsilon$, where $\nabla = I - B$. The Box-Jenkins procedure is concerned with fitting an ARIMA model to data. It has three parts: identification, estimation, and verification.
7.2 Identification

The data may require pre-processing to make it stationary. To achieve stationarity we may do any of the following.

Look at it.
Re-scale it (for instance, by a logarithmic or exponential transform).
Remove deterministic components.
Difference it. That is, take $\nabla^d X$ until stationary. In practice $d = 1, 2$ should suffice.

We recognise stationarity by the observation that the autocorrelations decay to zero exponentially fast.

Once the series is stationary, we can try to fit an ARMA(p, q) model. We consider the correlogram $r_k = \hat\gamma_k/\hat\gamma_0$ and the partial autocorrelations $\hat\phi_{k,k}$. We have already made the following observations.

An MA(q) process has negligible ACF after the qth term.
An AR(p) process has negligible PACF after the pth term.

As we have noted, very approximately, both the sample ACF and PACF have standard deviation of around $1/\sqrt{T}$, where $T$ is the length of the series. A rule of thumb is that ACF and PACF values are negligible when they lie between $\pm 2/\sqrt{T}$. An ARMA(p, q) process has kth order sample ACF and PACF decaying geometrically for $k > \max(p, q)$.
7.3 Estimation

AR processes

To fit a pure AR(p), i.e., $X_t = \sum_{r=1}^{p}\phi_r X_{t-r} + \epsilon_t$, we can use the Yule-Walker equations $\gamma_k = \sum_{r=1}^{p}\phi_r\gamma_{|k-r|}$. We fit by solving

$$ \hat\gamma_k = \sum_{r=1}^{p}\hat\phi_r\hat\gamma_{|k-r|}, \qquad k = 1, \ldots, p . $$

These can be solved by a Levinson-Durbin recursion (similar to that used to solve for partial autocorrelations in Section 2.6). This recursion also gives the estimated residual variance $\hat\sigma^2_p$, and helps in choice of p through the approximate log likelihood

$$ -2\log L \approx T\log(\hat\sigma^2_p) . $$

Another popular way to choose p is by minimizing Akaike's AIC (an information criterion), defined as AIC $= -2\log L + 2k$, where k is the number of parameters estimated (in the above case p). As motivation, suppose that in a general modelling context we attempt to fit a model with parameterised likelihood function $f(X \mid \theta)$, $\theta \in \Theta$, and this includes the true model for some $\theta_0 \in \Theta$. Let $X = (X_1, \ldots, X_n)$ be a vector of n independent samples and let $\hat\theta(X)$ be the maximum likelihood estimator of $\theta$. Suppose Y is a further independent sample. Then

$$ -2nE_Y E_X\log f\big(Y \mid \hat\theta(X)\big) = -2E_X\log f\big(X \mid \hat\theta(X)\big) + 2k + O\big(1/\sqrt{n}\big), $$

where $k = |\Theta|$. The left hand side is 2n times the conditional entropy of Y given $\hat\theta(X)$, i.e., the average number of bits required to specify Y given $\hat\theta(X)$. The right hand side is approximately the AIC and this is to be minimized over a set of models, say $(f_1, \Theta_1), \ldots, (f_m, \Theta_m)$.

ARMA processes

Generally, we use the maximum likelihood estimators, or at least squares numerical approximations to the MLEs. The essential idea is prediction error decomposition. We can factorize the joint density of $(X_1, \ldots, X_T)$ as

$$ f(X_1, \ldots, X_T) = f(X_1)\prod_{t=2}^{T} f(X_t \mid X_1, \ldots, X_{t-1}) . $$

Suppose the conditional distribution of $X_t$ given $(X_1, \ldots, X_{t-1})$ is normal with mean $\hat X_t$ and variance $P_{t-1}$, and suppose also that $X_1 \sim N(\hat X_1, P_0)$. Here $\hat X_t$ and $P_{t-1}$ are functions of the unknown parameters $\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q$ and the data. The log likelihood is

$$ -2\log L = -2\log f = \sum_{t=1}^{T}\left[\log(2\pi) + \log P_{t-1} + \frac{(X_t - \hat X_t)^2}{P_{t-1}}\right] . $$

We can minimize this with respect to $\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q$ to fit ARMA(p, q). Additionally, the second derivative matrix of $-\log L$ (at the MLE) is the observed information matrix, whose inverse is an approximation to the variance-covariance matrix of the estimators.

In practice, in fitting ARMA(p, q) the log likelihood ($-2\log L$) is modified to sum only over the range $\{m+1, \ldots, T\}$, where m is small.

Example 7.1

For AR(p), take $m = p$, so that $\hat X_t = \sum_{r=1}^{p}\phi_r X_{t-r}$, $t \ge m+1$, and $P_{t-1} = \sigma^2_\epsilon$.

Note. When using this approximation to compare models with different numbers of parameters we should always use the same m.

Again we might choose p and q by minimizing the AIC of $-2\log L + 2k$, where $k = p + q$ is the total number of parameters in the model.
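To make the prediction error decomposition concrete, here is a sketch (Python/NumPy; not from the notes) of the conditional $-2\log L$ for an AR(1), summing over $t = m+1, \ldots, T$ with $m = 1$; a crude grid search stands in for a numerical optimiser.

```python
import numpy as np

def ar1_neg2loglik(params, x):
    """Conditional -2 log L for AR(1), as in Section 7.3 (m = 1).

    X_t | X_{t-1} ~ N(phi * X_{t-1}, sigma2).
    """
    phi, sigma2 = params
    resid = x[1:] - phi * x[:-1]
    T_eff = len(resid)
    return T_eff * (np.log(2 * np.pi) + np.log(sigma2)) + np.sum(resid**2) / sigma2

rng = np.random.default_rng(7)
x = np.empty(500); x[0] = 0.0
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()

grid = [(phi, s2) for phi in np.linspace(-0.9, 0.9, 37) for s2 in np.linspace(0.5, 2.0, 31)]
print(min(grid, key=lambda p: ar1_neg2loglik(p, x)))   # close to the true (0.6, 1.0)
```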
7.4 Verification

The third stage in the Box-Jenkins algorithm is to check whether the model fits the data. There are several tools we may use.

Overfitting. Add extra parameters to the model and use a likelihood ratio test or t-test to check that they are not significant.

Residuals analysis. Calculate the residuals from the model and plot them; examine their autocorrelation functions (ACFs), PACFs, spectral density estimates, etc., and confirm that they are consistent with white noise.
7.5 Tests for white noise

Tests for white noise include the following.

(a) The turning point test (explained in Lecture 1) compares the number of peaks and troughs to the number that would be expected for a white noise series.

(b) The Box-Pierce test is based on the statistic

$$ Q_m = T\sum_{k=1}^{m} r_k^2, $$

where $r_k$ is the kth sample autocorrelation coefficient of the residual series, and $p + q < m \ll T$. It is called a portmanteau test, because it is based on the all-inclusive statistic. If the model is correct then $Q_m \sim \chi^2_{m-p-q}$ approximately. In fact, $r_k$ has variance $(T-k)/(T(T+2))$, and a somewhat more powerful test uses the Ljung-Box statistic quoted in Section 2.7,

$$ Q'_m = T(T+2)\sum_{k=1}^{m}(T-k)^{-1}r_k^2, $$

where again, $Q'_m \sim \chi^2_{m-p-q}$ approximately.

(c) Another test for white noise can be constructed from the periodogram. Recall that $I(\omega_j) \sim (\sigma^2/\pi)\chi^2_2/2$ and that $I(\omega_1), \ldots, I(\omega_m)$ are mutually independent. Define $C_j = \sum_{k=1}^{j} I(\omega_k)$ and $U_j = C_j/C_m$. Recall that $\chi^2_2$ is the same as the exponential distribution and that if $Y_1, \ldots, Y_m$ are i.i.d. exponential random variables, then $(Y_1 + \cdots + Y_j)/(Y_1 + \cdots + Y_m)$, $j = 1, \ldots, m-1$, have the distribution of an ordered sample of m-1 uniform random variables drawn from [0, 1]. Hence under the hypothesis that $\{X_t\}$ is Gaussian white noise, $U_j$, $j = 1, \ldots, m-1$, have the distribution of an ordered sample of m-1 uniform random variables on [0, 1]. The standard test for this is the Kolmogorov-Smirnov test, which uses as a test statistic D, defined as the maximum difference between the theoretical distribution function for U[0, 1], $F(u) = u$, and the empirical distribution $\hat F(u) = \{\#(U_j \le u)\}/(m-1)$. Percentage points for D can be found in tables.
7.6 Forecasting with ARMA models

Recall that $\phi(B)X = \theta(B)\epsilon$, so the power series coefficients of $C(z) = \theta(z)/\phi(z) = \sum_{r=0}^{\infty} c_r z^r$ give an expression for $X_t$ as $X_t = \sum_{r=0}^{\infty} c_r\epsilon_{t-r}$.

But also, $\epsilon = D(B)X$, where $D(z) = \phi(z)/\theta(z) = \sum_{r=0}^{\infty} d_r z^r$, as long as the zeros of $\theta$ lie strictly outside the unit circle, and thus $\epsilon_t = \sum_{r=0}^{\infty} d_r X_{t-r}$.

The advantage of the representation above is that given $(\ldots, X_{t-1}, X_t)$ we can calculate values for $(\ldots, \epsilon_{t-1}, \epsilon_t)$ and so can forecast $X_{t+1}$.

In general, if we want to forecast $X_{T+k}$ from $(\ldots, X_{T-1}, X_T)$ we use

$$ \hat X_{T,k} = \sum_{r=k}^{\infty} c_r\epsilon_{T+k-r} = \sum_{r=0}^{\infty} c_{k+r}\epsilon_{T-r}, $$

which has the least mean squared error over all linear combinations of $(\ldots, \epsilon_{T-1}, \epsilon_T)$. In fact,

$$ E\left[\left(\hat X_{T,k} - X_{T+k}\right)^2\right] = \sigma^2\sum_{r=0}^{k-1} c_r^2 . $$

In practice, there is an alternative recursive approach. Define

$$ \hat X_{T,k} = \begin{cases} X_{T+k}, & -(T-1) \le k \le 0, \\ \text{optimal predictor of } X_{T+k} \text{ given } X_1, \ldots, X_T, & 1 \le k . \end{cases} $$

We have the recursive relation

$$ \hat X_{T,k} = \sum_{r=1}^{p}\phi_r\hat X_{T,k-r} + \hat\epsilon_{T+k} + \sum_{s=1}^{q}\theta_s\hat\epsilon_{T+k-s} . $$

For $k = -(T-1), -(T-2), \ldots, 0$ this gives estimates of $\hat\epsilon_t$ for $t = 1, \ldots, T$. For $k > 0$, this gives a forecast $\hat X_{T,k}$ for $X_{T+k}$. We take $\hat\epsilon_t = 0$ for $t > T$.

But this needs to be started off. We need to know $(X_t, t \le 0)$ and $(\epsilon_t, t \le 0)$. There are two standard approaches.

1. Conditional approach: take $X_t = \epsilon_t = 0$, $t \le 0$.

2. Backcasting: we forecast the series in the reverse direction to determine estimators of $X_0, X_{-1}, \ldots$ and $\epsilon_0, \epsilon_{-1}, \ldots$.
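A sketch of the recursive approach with the conditional start $X_t = \epsilon_t = 0$ for $t \le 0$ (Python/NumPy; not from the notes): first recover the residuals $\hat\epsilon_1, \ldots, \hat\epsilon_T$, then iterate the recursion forward with future $\hat\epsilon$'s set to zero.

```python
import numpy as np

def arma_forecast(x, phi, theta, k_max):
    """Forecast an ARMA(p, q) series k = 1..k_max steps ahead (conditional approach)."""
    phi, theta = np.asarray(phi), np.asarray(theta)
    p, q = len(phi), len(theta)
    x = list(x)
    T = len(x)
    eps = [0.0] * T
    # Residuals: eps_t = X_t - sum_r phi_r X_{t-r} - sum_s theta_s eps_{t-s}, pre-sample values taken as 0
    for t in range(T):
        ar = sum(phi[r] * x[t - 1 - r] for r in range(p) if t - 1 - r >= 0)
        ma = sum(theta[s] * eps[t - 1 - s] for s in range(q) if t - 1 - s >= 0)
        eps[t] = x[t] - ar - ma
    # Forecasts: future eps are replaced by 0
    for k in range(k_max):
        t = T + k
        ar = sum(phi[r] * x[t - 1 - r] for r in range(p))
        ma = sum(theta[s] * eps[t - 1 - s] for s in range(q) if t - 1 - s < T)
        x.append(ar + ma)
        eps.append(0.0)
    return x[T:]

print(arma_forecast([0.2, -0.1, 0.4, 0.3], phi=[0.6], theta=[0.3], k_max=3))
```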
8 State space models

8.1 Models with unobserved states

State space models are an alternative formulation of time series with a number of advantages for forecasting.

1. All ARMA models can be written as state space models.
2. Nonstationary models (e.g., ARMA with time varying coefficients) are also state space models.
3. Multivariate time series can be handled more easily.
4. State space models are consistent with Bayesian methods.

In general, the model consists of

observed data: $X_t = F_t S_t + v_t$
unobserved state: $S_t = G_t S_{t-1} + w_t$
observation noise: $v_t \sim N(0, V_t)$
state noise: $w_t \sim N(0, W_t)$

where $v_t, w_t$ are independent and $F_t, G_t$ are known matrices, often time dependent (e.g., because of seasonality).

Example 8.1

$X_t = S_t + v_t$, $S_t = S_{t-1} + w_t$. Define $Y_t = X_t - X_{t-1} = (S_t + v_t) - (S_{t-1} + v_{t-1}) = w_t + v_t - v_{t-1}$. The autocorrelations of $\{Y_t\}$ are zero at all lags greater than 1. So $\{Y_t\}$ is MA(1) and thus $\{X_t\}$ is ARIMA(0, 1, 1).
Example 8.2

The general ARMA(p, q) model $X_t = \sum_{r=1}^{p}\phi_r X_{t-r} + \sum_{s=0}^{q}\theta_s\epsilon_{t-s}$ is a state space model. We write $X_t = F_t S_t$, where

$$ F_t = (\phi_1, \phi_2, \ldots, \phi_p, 1, \theta_1, \ldots, \theta_q), \qquad
S_t = \left(X_{t-1}, \ldots, X_{t-p}, \epsilon_t, \epsilon_{t-1}, \ldots, \epsilon_{t-q}\right)^\top \in \mathbb{R}^{p+q+1}, $$

with $v_t = 0$, $V_t = 0$. The state evolves as $S_t = G_t S_{t-1} + w_t$:

$$ S_t = \begin{pmatrix} X_{t-1}\\ X_{t-2}\\ \vdots\\ X_{t-p}\\ \epsilon_t\\ \epsilon_{t-1}\\ \vdots\\ \epsilon_{t-q} \end{pmatrix}
= \begin{pmatrix}
\phi_1 & \phi_2 & \cdots & \phi_p & 1 & \theta_1 & \cdots & \theta_{q-1} & \theta_q \\
1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & \ddots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & \vdots & \vdots & \vdots & & & \vdots \\
0 & 0 & \cdots & 1\;\;0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & & & & & & \ddots & & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}
\begin{pmatrix} X_{t-2}\\ X_{t-3}\\ \vdots\\ X_{t-p-1}\\ \epsilon_{t-1}\\ \epsilon_{t-2}\\ \vdots\\ \epsilon_{t-q-1} \end{pmatrix}
+ \begin{pmatrix} 0\\ 0\\ \vdots\\ 0\\ \epsilon_t\\ 0\\ \vdots\\ 0 \end{pmatrix} , $$

where the zero row corresponds to the component $\epsilon_t$, which enters only through the state noise $w_t$ (whose single nonzero entry, $\epsilon_t$, sits in the (p+1)th position).
8.2 The Kalman filter

Given observed data $X_1, \ldots, X_t$ we want to find the conditional distribution of $S_t$ and a forecast of $X_{t+1}$.

Recall the following multivariate normal fact: if

$$ Y = \begin{pmatrix} Y_1\\ Y_2 \end{pmatrix} \sim N\left(\begin{pmatrix}\mu_1\\ \mu_2\end{pmatrix}, \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}\right) \qquad (8.1) $$

then

$$ (Y_1 \mid Y_2) \sim N\left(\mu_1 + A_{12}A_{22}^{-1}(Y_2 - \mu_2),\; A_{11} - A_{12}A_{22}^{-1}A_{21}\right) . \qquad (8.2) $$

Conversely, if $(Y_1 \mid Y_2)$ satisfies (8.2), and $Y_2 \sim N(\mu_2, A_{22})$, then the joint distribution is as in (8.1).

Now let $\mathcal{F}_{t-1} = (X_1, \ldots, X_{t-1})$ and suppose we know that $(S_{t-1} \mid \mathcal{F}_{t-1}) \sim N(\hat S_{t-1}, P_{t-1})$. Then

$$ S_t = G_t S_{t-1} + w_t, $$

so

$$ (S_t \mid \mathcal{F}_{t-1}) \sim N\left(G_t\hat S_{t-1},\; G_t P_{t-1}G_t^\top + W_t\right), $$

and also $(X_t \mid S_t, \mathcal{F}_{t-1}) \sim N(F_t S_t, V_t)$.

Put $Y_1 = X_t$ and $Y_2 = S_t$. Let $R_t = G_t P_{t-1}G_t^\top + W_t$. Taking all variables conditional on $\mathcal{F}_{t-1}$ we can use the converse of the multivariate normal fact and identify $\mu_2 = G_t\hat S_{t-1}$ and $A_{22} = R_t$.

Since $S_t$ is a random variable,

$$ \mu_1 + A_{12}A_{22}^{-1}(S_t - \mu_2) = F_t S_t \quad\Longrightarrow\quad A_{12} = F_t R_t \;\text{ and }\; \mu_1 = F_t\mu_2 . $$

Also

$$ A_{11} - A_{12}A_{22}^{-1}A_{21} = V_t \quad\Longrightarrow\quad A_{11} = V_t + F_t R_t R_t^{-1}R_t^\top F_t^\top = V_t + F_t R_t F_t^\top . $$

What this says is that

$$ \begin{pmatrix} X_t\\ S_t \end{pmatrix}\bigg|\,\mathcal{F}_{t-1} \sim N\left(\begin{pmatrix} F_t G_t\hat S_{t-1}\\ G_t\hat S_{t-1}\end{pmatrix}, \begin{pmatrix} V_t + F_t R_t F_t^\top & F_t R_t\\ R_t^\top F_t^\top & R_t\end{pmatrix}\right) . $$

Now apply the multivariate normal fact directly to get $(S_t \mid X_t, \mathcal{F}_{t-1}) = (S_t \mid \mathcal{F}_t) \sim N(\hat S_t, P_t)$, where

$$ \hat S_t = G_t\hat S_{t-1} + R_t F_t^\top\left(V_t + F_t R_t F_t^\top\right)^{-1}\left(X_t - F_t G_t\hat S_{t-1}\right), $$
$$ P_t = R_t - R_t F_t^\top\left(V_t + F_t R_t F_t^\top\right)^{-1}F_t R_t . $$

These are the Kalman filter updating equations.

Note the form of the right hand side of the expression for $\hat S_t$. It contains the term $G_t\hat S_{t-1}$, which is simply what we would predict if it were known that $S_{t-1} = \hat S_{t-1}$, plus a term that depends on the observed error in forecasting $X_t$, i.e., $\left(X_t - F_t G_t\hat S_{t-1}\right)$. This is similar to the forecast updating expression for simple exponential smoothing in Section 6.4.

All we need to start updating the estimates are the initial values $\hat S_0$ and $P_0$. Three ways are commonly used.

1. Use a Bayesian prior distribution.

2. If F, G, V, W are independent of t, the process is stationary. We could use the stationary distribution of S to start.

3. Choosing $\hat S_0 = 0$, $P_0 = kI$ (k large) reflects prior ignorance.
8.3 Prediction

Suppose we want to predict $X_{T+k}$ given $(X_1, \ldots, X_T)$. We already have

$$ (X_{T+1} \mid X_1, \ldots, X_T) \sim N\left(F_{T+1}G_{T+1}\hat S_T,\; V_{T+1} + F_{T+1}R_{T+1}F_{T+1}^\top\right), $$

which solves the problem for the case $k = 1$. By induction we can show that

$$ (S_{T+k} \mid X_1, \ldots, X_T) \sim N\left(\hat S_{T,k}, P_{T,k}\right) $$

where

$$ \hat S_{T,0} = \hat S_T, \qquad P_{T,0} = P_T, $$
$$ \hat S_{T,k} = G_{T+k}\hat S_{T,k-1}, \qquad P_{T,k} = G_{T+k}P_{T,k-1}G_{T+k}^\top + W_{T+k}, $$

and hence that $(X_{T+k} \mid X_1, \ldots, X_T) \sim N\left(F_{T+k}\hat S_{T,k},\; V_{T+k} + F_{T+k}P_{T,k}F_{T+k}^\top\right)$.
8.4 Parameter estimation revisited

In practice, of course, we may not know the matrices $F_t, G_t, V_t, W_t$. For example, in ARMA(p, q) they will depend on the parameters $\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q, \sigma^2$, which we may not know.

We saw, when performing prediction error decomposition, that we needed to calculate the distribution of $(X_t \mid X_1, \ldots, X_{t-1})$. This we have now done.

Example 8.3

Consider the state space model

observed data: $X_t = S_t + v_t$,
unobserved state: $S_t = S_{t-1} + w_t$,

where $v_t, w_t$ are independent errors, $v_t \sim N(0, V)$ and $w_t \sim N(0, W)$.

Then we have $F_t = 1$, $G_t = 1$, $V_t = V$, $W_t = W$, and $R_t = P_{t-1} + W$. So if $(S_{t-1} \mid X_1, \ldots, X_{t-1}) \sim N(\hat S_{t-1}, P_{t-1})$ then $(S_t \mid X_1, \ldots, X_t) \sim N(\hat S_t, P_t)$, where

$$ \hat S_t = \hat S_{t-1} + R_t(V + R_t)^{-1}\left(X_t - \hat S_{t-1}\right), $$
$$ P_t = R_t - \frac{R_t^2}{V + R_t} = \frac{VR_t}{V + R_t} = \frac{V(P_{t-1} + W)}{V + P_{t-1} + W} . $$

Asymptotically, $P_t \to P$, where P is the positive root of $P^2 + WP - WV = 0$, and $\hat S_t$ behaves like

$$ \hat S_t = (1-\theta)\sum_{r=0}^{\infty}\theta^r X_{t-r}, \qquad\text{where } \theta = V/(V + W + P) . $$

Note that this is simple exponential smoothing.

Equally, we can predict $S_{T+k}$ given $(X_1, \ldots, X_T)$ as $N\left(\hat S_{T,k}, P_{T,k}\right)$ where

$$ \hat S_{T,0} = \hat S_T, \quad P_{T,0} = P_T, \quad \hat S_{T,k} = \hat S_T, \quad P_{T,k} = P_T + kW . $$

So $(X_{T+k} \mid X_1, \ldots, X_T) \sim N\left(\hat S_T,\; V + P_T + kW\right)$.