Chapter 02
Walter Enders, 3rd Edition
mt* = 1.03 mt−1*
• E(εt) = E(εt−1) = … = 0
• E(εt²) = E(εt−1²) = … = σ²
• Though {εt} is white noise, if any of the βi is nonzero, the constructed series xt = Σ βiεt−i need not be white noise, i.e., it may have a different variance.
2. ARMA MODELS
Combine a linear difference equation with a moving-average component.
If all characteristic roots of the difference equation lie inside the unit circle, yt follows a stationary ARMA process.
ARMA MODELS
• The moving-average representation of the AR(1) model
yt = a0 + a1yt−1 + εt is
yt = a0/(1 − a1) + Σ_{i=0}^∞ a1^i εt−i
• For the general ARMA(p, q) model
yt = a0 + Σ_{i=1}^p ai yt−i + Σ_{i=0}^q βi εt−i
the MA representation is
yt = [a0 + Σ_{i=0}^q βi εt−i] / (1 − Σ_{i=1}^p ai L^i)
Note that all roots of 1 − Σ ai L^i = 0 must lie outside the unit circle. If this is the case, we have the MA representation.
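As a numerical sketch of this equivalence (the values a0 = 2 and a1 = 0.6 are hypothetical, chosen only for illustration), simulating the AR(1) recursion and rebuilding the same observation from its MA representation gives identical results:

```python
import numpy as np

# Hypothetical parameters for illustration (not from the text).
a0, a1 = 2.0, 0.6
rng = np.random.default_rng(0)
eps = rng.normal(size=500)

# Simulate the AR(1): y_t = a0 + a1*y_{t-1} + eps_t,
# starting from the unconditional mean a0/(1 - a1).
y = np.empty(500)
y[0] = a0 / (1 - a1) + eps[0]
for t in range(1, 500):
    y[t] = a0 + a1 * y[t - 1] + eps[t]

# MA representation: y_t = a0/(1-a1) + sum_i a1**i * eps_{t-i}
t = 499
ma = a0 / (1 - a1) + sum(a1**i * eps[t - i] for i in range(t + 1))
assert abs(y[t] - ma) < 1e-8
```

The recursion and the MA representation agree exactly here because the simulation starts at the unconditional mean, so no initial-condition term survives.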
Section 3
Stationarity Restrictions for an AR(1) Process
Covariance Stationary Series
• Mean is time-invariant
• Variance is constant
• All covariances are constant
▫ all autocorrelations are constant
Formal Definition
A stochastic process with finite mean and variance is covariance stationary if, for all t and t − s:
1. E(yt) = E(yt−s) = μ
2. E[(yt − μ)²] = E[(yt−s − μ)²], i.e., var(yt) = var(yt−s) = σy²
3. E[(yt − μ)(yt−s − μ)] = E[(yt−j − μ)(yt−j−s − μ)] = γs
Mean
Eyt = a0 Σ_{i=0}^{t−1} a1^i + a1^t y0
Variance
Notes:
so that yt = a0/(1 − Σ_{i=1}^p ai) + Σ_{i=0}^∞ ci εt−i
where the ci are undetermined coefficients.
Hence, all elements in the {xt} sequence have the same finite mean (μ = 0).
• Form var(xt) as var(xt) = σ²(β0² + β1² + β2² + …)
Restricting the sum βs + β1βs+1 + β2βs+2 + … to be finite means that E(xtxt−s) is finite.
The Autocorrelation Function of an AR(2) Process
The Autocorrelation Function of an MA(1) Process
The Autocorrelation Function of an ARMA(1, 1) Process
The ACF and PACF Functions
The Box-Jenkins method for identifying and estimating time series models.
AR(1) Process
γ0 = E[(yt − μ)²] = σ²[1 + a1² + a1⁴ + …] = σ²/(1 − a1²)
and
γs = E[(yt − μ)(yt−s − μ)] = σ²a1^s/(1 − a1²)
so that the autocorrelations are ρs = γs/γ0 = a1^s.
• For an AR(1) process, a necessary condition for stationarity is |a1| < 1.
• The plot of ρs against s is known as the correlogram; the ACF should converge to 0 geometrically if the series is stationary.
• If a1 is positive the convergence is direct; if a1 is negative, ρs follows a damped oscillatory path around 0.
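A quick numerical sketch of these two decay patterns (a1 = ±0.8 is an arbitrary illustrative value, not from the text):

```python
# Theoretical ACF of an AR(1): rho_s = a1**s, derived from the
# autocovariances above.
def ar1_acf(a1, smax):
    return [a1**s for s in range(smax + 1)]

pos = ar1_acf(0.8, 5)    # direct geometric decay
neg = ar1_acf(-0.8, 5)   # damped oscillation around 0

# Positive a1: strictly decreasing, always positive.
assert all(pos[s] > pos[s + 1] > 0 for s in range(5))
# Negative a1: alternating signs with shrinking magnitude.
assert neg[1] < 0 < neg[2] and abs(neg[2]) < abs(neg[1])
```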
The Autocorrelation Function of an MA(1) Process
Consider yt = εt + βεt−1. Again, multiply yt by each yt−s and take expectations:
γ0 = (1 + β²)σ², γ1 = βσ², and γs = 0 for s > 1
For the ARMA(1,1) process yt = a1yt−1 + εt + β1εt−1, the same steps give
γ0 = a1γ1 + σ² + β1(a1 + β1)σ²
γ1 = a1γ0 + β1σ²
• Calculate yt* = yt − μ. The first partial autocorrelation is φ11 = ρ1.
PACF of an AR Process
yt = a0 + a1yt−1 + εt  ⇒  φ11 = ρ1
yt = a0 + a1yt−1 + a2yt−2 + εt  ⇒  φ22 = (ρ2 − ρ1²)/(1 − ρ1²)
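A small check of the φ22 formula (parameter values are illustrative): for an AR(1) the implied φ22 is exactly zero, while for an MA(1) it is not, which is the cut-off contrast between the two processes:

```python
# phi_22 = (rho2 - rho1**2) / (1 - rho1**2)
def phi22(rho1, rho2):
    return (rho2 - rho1**2) / (1 - rho1**2)

# AR(1) with a1 = 0.7: rho_s = a1**s, so phi_22 = 0
# (the PACF cuts off after lag 1).
a1 = 0.7
assert abs(phi22(a1, a1**2)) < 1e-12

# MA(1) with beta = 0.5: rho1 = beta/(1 + beta**2), rho2 = 0,
# so phi_22 is nonzero: the MA(1) PACF does not cut off.
beta = 0.5
rho1 = beta / (1 + beta**2)
assert phi22(rho1, 0.0) != 0.0
```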
PACF of an MA(1)
yt = εt + β1εt−1: the PACF does not cut off; it decays geometrically.
Process   ACF                 PACF
AR(p)     Geometric decay     Cuts off at lag p
MA(q)     Cuts off at lag q   Geometric decay
For stationary processes, the key points to note are the following:
• 1. The ACF of an ARMA(p, q) process will begin to decay after lag q. After lag q, the coefficients of the ACF (i.e., the ρi) satisfy the difference equation ρi = a1ρi−1 + a2ρi−2 + … + apρi−p.
Form the sample autocorrelations:
rs = Σ_{t=s+1}^T (yt − ȳ)(yt−s − ȳ) / Σ_{t=1}^T (yt − ȳ)²
Test groups of correlations with the Ljung-Box Q-statistic:
Q = T(T + 2) Σ_{k=1}^s rk²/(T − k)
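These two statistics can be sketched directly (the white-noise series here is simulated purely for illustration):

```python
import numpy as np

def sample_acf(y, smax):
    """r_s = sum_{t=s+1}^T (y_t - ybar)(y_{t-s} - ybar) / sum_t (y_t - ybar)^2."""
    y = np.asarray(y, float)
    d = y - y.mean()
    denom = (d**2).sum()
    return np.array([(d[s:] * d[:len(y) - s]).sum() / denom
                     for s in range(1, smax + 1)])

def ljung_box_q(y, s):
    """Q = T(T+2) * sum_{k=1}^s r_k**2 / (T-k); chi-square(s) under white noise."""
    T = len(y)
    r = sample_acf(y, s)
    return T * (T + 2) * sum(r[k - 1]**2 / (T - k) for k in range(1, s + 1))

rng = np.random.default_rng(1)
wn = rng.normal(size=200)   # simulated white noise
q = ljung_box_q(wn, 8)
assert q >= 0.0             # Q is a sum of squares, hence nonnegative
```

For white noise, Q(8) would be compared against a chi-square distribution with 8 degrees of freedom (fewer if the series consists of residuals from an estimated model).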
ALTERNATIVE
• AIC* = –2ln(L)/T + 2n/T
• SBC* = –2ln(L)/T + n ln(T)/T
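A tiny numerical sketch of these criteria (the log-likelihood values are hypothetical), showing that for T > e² the SBC penalizes an extra parameter more heavily than the AIC:

```python
import math

def aic_sbc(lnL, T, n):
    """AIC* and SBC* in the per-observation form given above."""
    aic = -2 * lnL / T + 2 * n / T
    sbc = -2 * lnL / T + n * math.log(T) / T
    return aic, sbc

# Hypothetical fits: adding a parameter raises lnL slightly.
a2, s2 = aic_sbc(-200.0, 100, 2)
a3, s3 = aic_sbc(-199.5, 100, 3)

# With T = 100, ln(T) > 2, so SBC's marginal penalty exceeds AIC's.
assert (s3 - s2) > (a3 - a2)
```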
[Figure: ACF and PACF for the ARMA(1,1) process]
Table 2.2: Estimates of an AR(1) Model

                              Model 1                    Model 2
                              yt = a1yt−1 + εt           yt = a1yt−1 + εt + β12εt−12
Degrees of freedom            98                         97
Sum of squared residuals      85.10                      85.07
Estimated a1 (s.e.)           0.7904 (0.0624)            0.7938 (0.0643)
Estimated β12 (s.e.)                                     −0.0250 (0.1141)
AIC / SBC                     AIC = 441.9; SBC = 444.5   AIC = 443.9; SBC = 449.1
Ljung-Box Q for residuals     Q(8) = 6.43 (0.490)        Q(8) = 6.48 (0.485)
(significance level in        Q(16) = 15.86 (0.391)      Q(16) = 15.75 (0.400)
parentheses)                  Q(24) = 21.74 (0.536)      Q(24) = 21.56 (0.547)
[Figure 2.4: ACF of the Residuals from Model 1]
Table 2.3: Estimates of an ARMA(1,1) Model
[Figure: the estimated series with its ACF (CORRS) and PACF (PARTIALS)]
Parsimony
Stationarity and Invertibility
Goodness of Fit
Post-Estimation Evaluation
Box-Jenkins Model Selection
• Parsimony
▫ Extra AR coefficients reduce degrees of freedom by 2
▫ Similar processes can be approximated by very different models
▫ Common Factor Problem:
yt = εt and yt = 0.5yt−1 + εt − 0.5εt−1 are equivalent
▫ Hence: all t-statistics should exceed 2.0
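The common-factor equivalence can be verified by simulation: the two specifications generate exactly the same series from the same shocks, so the ARMA(1,1) form is overparameterized.

```python
import numpy as np

# y_t = eps_t  versus  y_t = 0.5*y_{t-1} + eps_t - 0.5*eps_{t-1}:
# the AR and MA factors (1 - 0.5L) cancel, so the series coincide.
rng = np.random.default_rng(3)
eps = rng.normal(size=200)

y1 = eps.copy()
y2 = np.empty_like(eps)
y2[0] = eps[0]
for t in range(1, len(eps)):
    y2[t] = 0.5 * y2[t - 1] + eps[t] - 0.5 * eps[t - 1]

assert np.allclose(y1, y2)
```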
Updating one period:
Etyt+1 = β0 + β1εt
Hence, the 1-step-ahead forecast error is εt+1, the "unforecastable" portion of yt+1.
Confidence intervals
The 95% confidence interval for the 1-step-ahead forecast is:
β0 + β1εt ± 1.96σ
The 95% confidence interval for the 2-step-ahead forecast is:
β0 ± 1.96(1 + β1²)^(1/2)σ
In general, if we take the limit of Etyt+j we find that Etyt+j = a0/(1 − a1). This result is really quite general: for any stationary ARMA model, the conditional forecast of yt+j converges to the unconditional mean.
Forecast errors
The 1-step-ahead forecast error is:
yt+1 − Etyt+1 = a0 + a1yt + εt+1 − a0 − a1yt = εt+1
The 2-step-ahead forecast error is yt+2 − Etyt+2. Since yt+2 = a0 + a1a0 + a1²yt + εt+2 + a1εt+1 and Etyt+2 = a0 + a1a0 + a1²yt, it follows that:
yt+2 − Etyt+2 = εt+2 + a1εt+1
The 95% confidence interval for the 1-step-ahead forecast of the AR(1) is a0 + a1yt ± 1.96σ.
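The AR(1) forecast recursions above can be sketched numerically (the values of a0, a1, σ, and yt are hypothetical, chosen for illustration):

```python
def ar1_forecast(a0, a1, y_t, j):
    """E_t y_{t+j} = a0*(1 + a1 + ... + a1**(j-1)) + a1**j * y_t."""
    return a0 * sum(a1**i for i in range(j)) + a1**j * y_t

def ar1_forecast_se(a1, sigma, j):
    """Std. dev. of the j-step error eps_{t+j} + a1*eps_{t+j-1} + ..."""
    return sigma * (sum(a1**(2 * i) for i in range(j))) ** 0.5

a0, a1, sigma, y_t = 1.0, 0.5, 1.0, 3.0

f1 = ar1_forecast(a0, a1, y_t, 1)          # a0 + a1*y_t = 2.5
assert abs(f1 - 2.5) < 1e-12

# Long-horizon forecasts converge to the unconditional mean a0/(1-a1) = 2:
assert abs(ar1_forecast(a0, a1, y_t, 50) - 2.0) < 1e-9

# 1-step 95% interval: f1 +/- 1.96*sigma
lower = f1 - 1.96 * ar1_forecast_se(a1, sigma, 1)
upper = f1 + 1.96 * ar1_forecast_se(a1, sigma, 1)
assert abs(lower - (2.5 - 1.96)) < 1e-12
```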
• Seasonal AR coefficients
▫ Unrestricted: yt = a1yt−1 + a12yt−12 + a13yt−13 + εt
▫ Multiplicative restriction (a13 = −a1a12): yt = a1yt−1 + a12yt−12 − a1a12yt−13 + εt
▫ Equivalently: (1 − a1L)(1 − a12L^12)yt = εt
• Seasonal MA coefficients
• Seasonal differencing:
▫ Δyt = yt − yt−1 versus Δ12yt = yt − yt−12
NOTE: You do not difference 12 times.
▫ In RATS you can use: dif(sdiffs=1) y / sdy
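A minimal sketch of the difference between ordinary and seasonal differencing, using an artificial deterministic period-12 pattern: a single seasonal difference removes the cycle, while a first difference does not.

```python
import numpy as np

# Artificial series with a pure period-12 seasonal pattern.
t = np.arange(48)
y = 5.0 + np.sin(2 * np.pi * t / 12)

d12 = y[12:] - y[:-12]          # seasonal difference D12: y_t - y_{t-12}
assert np.allclose(d12, 0.0)    # one seasonal difference wipes out the cycle

d1 = y[1:] - y[:-1]             # ordinary first difference: y_t - y_{t-1}
assert not np.allclose(d1, 0.0) # the seasonal pattern is still present
```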
[Figure: ACF and PACF of M1 growth (Panel a)]
mt = a0 + a1mt−1 + εt + β4εt−4
(1 − a1L)(1 − a4L^4)mt = a0 + εt
mt = a0 + (1 + β1L)(1 + β4L^4)εt
Table 2.5 Three Models of Money Growth
If you use the 5% significance level, the plotted value of each CUSUM statistic should lie within a band of approximately ±0.948[(T − n)^0.5 + 2(N − n)(T − n)^−0.5].
[Figure 2.10: Recursive Estimation of the Model. Panel 1: The Series; Panel 2: Intercept]
The composite forecast is fct = Σ wi fit, with Σ wi = 1.
If the forecasts are unbiased (so that Et−1fit = yt), it follows that the composite forecast is also unbiased.
Now let e1t and e2t denote the series containing the one-step-ahead forecast errors from models 1 and 2 (i.e., eit = yt − fit) and let ect be the composite forecast error. As such, we can write ect = w1e1t + (1 − w1)e2t.
Suppose that the forecast error variances are the same size and that cov(e1t, e2t) = 0. If you take a simple average by setting w1 = 0.5, (2.72) indicates that the variance of the composite forecast is half the variance of either forecast:
var(ect) = 0.25var(e1t) + 0.25var(e2t) = 0.5var(e1t).
Optimal Weights
var(ect) = w1²var(e1t) + (1 − w1)²var(e2t) + 2w1(1 − w1)cov(e1t, e2t)
Select the weight w1 so as to minimize var(ect):
∂var(ect)/∂w1 = 2w1var(e1t) − 2(1 − w1)var(e2t) + 2(1 − 2w1)cov(e1t, e2t) = 0
Bates and Granger (1969) recommend constructing the weights excluding the covariance terms.
w1* = var(e2t)/[var(e1t) + var(e2t)] = var(e1t)^−1/[var(e1t)^−1 + var(e2t)^−1]
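A small numerical sketch of these formulas (the error variances are hypothetical): with equal variances and zero covariance the optimal weight is 0.5 and the composite variance is half either individual variance, and more generally the optimal combination never does worse than the better single model.

```python
# Optimal weight with zero error covariance: w1* = v2/(v1 + v2).
def optimal_weight(v1, v2):
    return v2 / (v1 + v2)

# var(ect) = w1**2*v1 + (1-w1)**2*v2 + 2*w1*(1-w1)*cov
def composite_var(w1, v1, v2, cov=0.0):
    return w1**2 * v1 + (1 - w1)**2 * v2 + 2 * w1 * (1 - w1) * cov

# Equal variances, zero covariance: simple average halves the variance.
w = optimal_weight(1.0, 1.0)
assert w == 0.5
assert composite_var(w, 1.0, 1.0) == 0.5

# Unequal variances: the optimal combination beats the better model alone.
assert composite_var(optimal_weight(2.0, 1.0), 2.0, 1.0) < 1.0
```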
With n models, wi* = αi / Σ_{i=1}^n αi, where αi = exp[(SBC* − SBCi)/2] and SBC* is the SBC of the best-fitting model.
Since exp(0) = 1, the model with the best fit has the weight 1/Σαi. Since αi is decreasing in the value of SBCi, models with a poor fit will have smaller weights than models with small values of the SBC.
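A sketch of these SBC-based weights (the SBC values are hypothetical): the weights sum to one and decline as the SBC worsens.

```python
import math

# alpha_i = exp((SBC* - SBC_i)/2), with SBC* the smallest SBC;
# then w_i = alpha_i / sum(alpha).
def sbc_weights(sbcs):
    best = min(sbcs)
    alphas = [math.exp((best - s) / 2) for s in sbcs]
    total = sum(alphas)
    return [a / total for a in alphas]

w = sbc_weights([100.0, 102.0, 110.0])
assert abs(sum(w) - 1.0) < 1e-12
assert w[0] > w[1] > w[2]   # larger SBC (poorer fit) -> smaller weight
```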
Example of the Spread
I estimated seven different ARMA models of the interest rate spread. The data end in April 2012; I use each of the seven models to make a one-step-ahead forecast for January 2013:
Let εt = yt − βxt. With normally distributed errors, the log-likelihood is
ln L = −(T/2)ln(2π) − (T/2)ln σ² − (1/2σ²)Σ_{t=1}^T (yt − βxt)²
The first-order conditions are
∂ln L/∂β = (1/σ²)Σ_{t=1}^T (ytxt − βxt²) = 0
∂ln L/∂σ² = −T/(2σ²) + (1/2σ⁴)Σ_{t=1}^T (yt − βxt)² = 0
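The first-order conditions can be checked numerically (the data here are simulated with an arbitrary β = 1.5): solving them gives the OLS slope and the mean squared residual, and the log-likelihood is indeed maximized at those values.

```python
import numpy as np

# Log-likelihood of y_t = beta*x_t + eps_t with normal errors, as above.
def lnL(beta, sigma2, y, x):
    T = len(y)
    resid = y - beta * x
    return (-T / 2 * np.log(2 * np.pi) - T / 2 * np.log(sigma2)
            - (resid**2).sum() / (2 * sigma2))

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(size=100)       # true beta = 1.5 (illustrative)

# Solving the first-order conditions:
b_hat = (y * x).sum() / (x**2).sum()     # from dlnL/dbeta = 0
s2_hat = ((y - b_hat * x)**2).sum() / len(y)   # from dlnL/dsigma2 = 0

# lnL is maximized there: perturbing either argument lowers it.
L0 = lnL(b_hat, s2_hat, y, x)
assert L0 >= lnL(b_hat + 0.1, s2_hat, y, x)
assert L0 >= lnL(b_hat, s2_hat * 1.2, y, x)
```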
ML ESTIMATION OF AN MA(1)
Now let yt = βεt−1 + εt. The problem is to construct the {εt} sequence from the observed values of {yt}. If we knew the true value of β and knew that ε0 = 0, we could construct ε1, …, εT recursively:
ε1 = y1
ε2 = y2 − βε1 = y2 − βy1
ε3 = y3 − βε2 = y3 − β(y2 − βy1)
ε4 = y4 − βε3 = y4 − β[y3 − β(y2 − βy1)]
In general, εt = yt − βεt−1, so that with the lag operator L
εt = yt/(1 + βL) = Σ_{i=0}^{t−1} (−β)^i yt−i
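The recursion above can be sketched directly; the round-trip check below uses hypothetical shocks and β = 0.4 to confirm that the recursion recovers the {εt} sequence exactly.

```python
# Recursive construction of {eps_t} for an MA(1) y_t = beta*eps_{t-1} + eps_t,
# given beta and the assumption eps_0 = 0.
def ma1_residuals(y, beta):
    eps = []
    prev = 0.0                    # eps_0 = 0
    for yt in y:
        e = yt - beta * prev      # eps_t = y_t - beta*eps_{t-1}
        eps.append(e)
        prev = e
    return eps

# Round-trip check with hypothetical shocks:
beta = 0.4
true_eps = [1.0, -0.5, 0.3, 0.8]
y = [true_eps[0]] + [true_eps[t] + beta * true_eps[t - 1] for t in range(1, 4)]
rec = ma1_residuals(y, beta)
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, true_eps))
```

In practice β is unknown; the ML estimator searches over β, rebuilding the residual sequence at each trial value and evaluating the log-likelihood below.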
The log-likelihood is then
ln L = −(T/2)ln(2π) − (T/2)ln σ² − (1/2σ²)Σ_{t=1}^T [Σ_{i=0}^{t−1} (−β)^i yt−i]²