FEB22005(X): Econometrics 2
5 Tests
Under autocorrelation the covariance matrix of the OLS estimator takes the sandwich form

V[b_OLS] = E[(X'X)^{-1} X'εε'X (X'X)^{-1}]
         = (X'X)^{-1} X' E[εε'] X (X'X)^{-1}
         = (X'X)^{-1} X'ΩX (X'X)^{-1}.

Only in the absence of correlation and with homoskedasticity (Ω = σ²I) does the covariance matrix V[b_OLS] simplify to the usual σ²(X'X)^{-1} (→ see also the part on heteroskedasticity).

A direct estimator of the unknown covariance σ_ij is e_i e_j, the cross-product of the OLS residuals e_i = y_i − x_i'b_OLS. However,

Σ_{i=1}^n Σ_{j=1}^n e_i e_j x_i x_j' = X'ee'X = 0,

so this estimator cannot be used to estimate V[b_OLS].
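This failure can be seen numerically: the OLS residual vector is orthogonal to the columns of X, so X'e = 0 and hence X'ee'X = (X'e)(X'e)' = 0. A minimal sketch with simulated data (the setup and variable names are my own, not from the slides):

```python
# Check that X'e = 0 for OLS residuals, so X'ee'X = 0 as well:
# the "direct" estimator of Omega carries no usable information here.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate
e = y - X @ b_ols                           # OLS residuals

print(np.allclose(X.T @ e, 0))                    # True: X'e = 0
print(np.allclose(X.T @ np.outer(e, e) @ X, 0))   # True: X'ee'X = 0
```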
2 Consequences of autocorrelation

Autocorrelation in the model

yi = β1 + β2 xi + εi,   i = 1, . . . , n,

can be included in various manners when estimating β1 and β2.
Variable            Coefficient   Std. Error   t-Statistic   Prob.
C                    0.546057     0.206198      2.648218     0.0087
LOG(PRICE1)         -1.805941     0.159140    -11.34813      0.0000
PROMO11              0.295989     0.078788      3.756752     0.0002
PROMO21              0.443473     0.280192      1.582745     0.1149
LOG(PRICE1(-1))      1.129974     0.176820      6.390540     0.0000
LOG(SHARE1(-1))      0.490609     0.062649      7.831090     0.0000

R-squared            0.665045    Mean dependent var     -0.816880
Adjusted R-squared   0.657432    S.D. dependent var      0.142689
S.E. of regression   0.083515    Akaike info criterion  -2.101400
Sum squared resid    1.534433    Schwarz criterion      -2.010589
Log likelihood       243.4582    Hannan-Quinn criter.   -2.064752
F-statistic          87.36077    Durbin-Watson stat      1.813537
Prob(F-statistic)    0.000000

[Figure: two scatter plots of lagged against current OLS residuals, RESIDNOLAGS(-1) versus RESIDNOLAGS and RESIDWITHLAGS(-1) versus RESIDWITHLAGS; both axes run from -.6 to .3]
In EViews, the option AR(1):

ls y c x AR(1)

R-squared            0.667315    Mean dependent var     -0.816880
Adjusted R-squared   0.661294    S.D. dependent var      0.142689
S.E. of regression   0.083043    Akaike info criterion  -2.117051
Sum squared resid    1.524032    Schwarz criterion      -2.041375
Log likelihood       244.2267    Hannan-Quinn criter.   -2.086511
F-statistic          110.8230    Durbin-Watson stat      1.880427
Prob(F-statistic)    0.000000
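Outside EViews, the AR(1) correction can be sketched by hand. Below is a minimal Cochrane-Orcutt-style iteration in Python/numpy; the simulated setup (β = (1, 2), ρ = 0.7) and the function name are illustrative assumptions, not EViews' exact algorithm:

```python
# A minimal Cochrane-Orcutt iteration (sketch): regress, estimate rho from
# the residuals, quasi-difference the data, re-estimate, and repeat.
import numpy as np

def cochrane_orcutt(y, X, iters=20):
    b = np.linalg.solve(X.T @ X, X.T @ y)        # start from plain OLS
    rho = 0.0
    for _ in range(iters):
        e = y - X @ b
        rho = (e[1:] @ e[:-1]) / (e @ e)         # AR(1) coefficient of residuals
        ys = y[1:] - rho * y[:-1]                # quasi-differenced data
        Xs = X[1:] - rho * X[:-1]                # (first observation dropped)
        b = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
    return b, rho

# Simulated regression with AR(1) errors: y = 1 + 2x + eps, eps_i = 0.7 eps_{i-1} + eta_i
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.7 * eps[i - 1] + rng.normal()
y = 1 + 2 * x + eps
X = np.column_stack([np.ones(n), x])

b, rho = cochrane_orcutt(y, X)
print(b, rho)   # roughly [1, 2] and rho near 0.7
```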
Ω = (σ_η² / (1 − ρ²)) ×
    [ 1        ρ        . . .  ρ^{n−1} ]
    [ ρ        1        . . .  ρ^{n−2} ]
    [ ...      ...      . . .  ...     ]
    [ ρ^{n−1}  ρ^{n−2}  . . .  1       ]

such that

P = σ_η ×
    [ 1/a         0          0          . . .  0    0   0 ]
    [ ρ/a         1          0          . . .  0    0   0 ]
    [ ρ²/a        ρ          1          . . .  0    0   0 ]
    [ ...         ...        ...        . . .  ...  ... . ]
    [ ρ^{n−2}/a   ρ^{n−3}    ρ^{n−4}    . . .  ρ    1   0 ]
    [ ρ^{n−1}/a   ρ^{n−2}    ρ^{n−3}    . . .  ρ²   ρ   1 ],   with a = √(1 − ρ²),

and

P^{−1} = (1/σ_η) ×
    [  a    0    0   . . .   0    0   0 ]
    [ −ρ    1    0   . . .   0    0   0 ]
    [  0   −ρ    1   . . .   0    0   0 ]
    [ ...  ...  ...  . . .  ...  ... .. ]
    [  0    0    0   . . .  −ρ    1   0 ]
    [  0    0    0   . . .   0   −ρ   1 ]

Compared to the transformation of Cochrane-Orcutt (given Ω):
- In GLS the first observation is also included
- Scaling factor 1/σ_η
- → Otherwise exactly the same!
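As a sanity check (my own illustration, not part of the slides), one can build Ω and P^{−1} for a small n and verify that P^{−1} Ω P^{−1}' equals the identity, i.e. that the transformed errors are white:

```python
# Verify that P^{-1} whitens an AR(1) error: P^{-1} Omega P^{-1}' = I.
import numpy as np

n, rho, sigma_eta = 6, 0.6, 1.5
a = np.sqrt(1 - rho**2)

# Omega = sigma_eta^2 / (1 - rho^2) * (rho^|i-j|)_{ij}
i = np.arange(n)
Omega = sigma_eta**2 / (1 - rho**2) * rho ** np.abs(i[:, None] - i[None, :])

# P^{-1}: a = sqrt(1 - rho^2) in the (1,1) entry, -rho on the subdiagonal,
# everything scaled by 1/sigma_eta
Pinv = np.eye(n)
Pinv[0, 0] = a
Pinv[np.arange(1, n), np.arange(n - 1)] = -rho
Pinv /= sigma_eta

print(np.allclose(Pinv @ Omega @ Pinv.T, np.eye(n)))  # True
```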
In practice Ω is unknown, so GLS is not applicable unless we first obtain an estimate of Ω. This gives rise to the "feasible" GLS (FGLS) estimator:

1 Estimate β in yi = xi'β + εi with OLS (note that OLS is consistent!)
2 Estimate Ω using the residuals of the previous step, ei = yi − xi'b
3 Use Ω̂ to determine P̂
4 Transform the data with P̂^{−1}: y* = P̂^{−1}y and X* = P̂^{−1}X
5 Estimate β with OLS in the model for the transformed data: yi* = xi*'β + εi*

Possibly iterate steps 2-5 further: Iterated Feasible GLS.

Given y = Xβ + ε with Var[ε] = Ω (Ω known), prove that the GLS estimator has a smaller variance than the OLS estimator.

Variance GLS: (X'Ω^{−1}X)^{−1}
Variance OLS: (X'X)^{−1}(X'ΩX)(X'X)^{−1}

⇒ Prove that (X'X)^{−1}(X'ΩX)(X'X)^{−1} − (X'Ω^{−1}X)^{−1} is positive semidefinite.

Positive Semidefinite (psd)
1 A square matrix C is positive semidefinite if x'Cx ≥ 0 for all vectors x
2 If C is psd, then B'CB is also psd for all matrices B
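The five FGLS steps can be sketched for the AR(1) case as follows. The simulation setup and names are my own; the 1/σ_η scale factor in P̂^{−1} is dropped because OLS in step 5 is invariant to rescaling:

```python
# FGLS for a regression with AR(1) errors, following steps 1-5 (a sketch).
import numpy as np

def fgls_ar1(y, X):
    n = len(y)
    # Step 1: OLS (consistent)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    # Step 2: estimate Omega via the AR(1) coefficient of the residuals
    e = y - X @ b
    rho = (e[1:] @ e[:-1]) / (e @ e)
    # Step 3: determine P-hat; here we build P-hat^{-1} directly
    # (the 1/sigma_eta factor is omitted -- it cancels in OLS)
    Pinv = np.eye(n)
    Pinv[0, 0] = np.sqrt(1 - rho**2)
    Pinv[np.arange(1, n), np.arange(n - 1)] = -rho
    # Step 4: transform the data
    ys, Xs = Pinv @ y, Pinv @ X
    # Step 5: OLS on the transformed data
    return np.linalg.solve(Xs.T @ Xs, Xs.T @ ys), rho

# Simulated data: y = 2 + 3x + eps with eps_i = 0.5 eps_{i-1} + eta_i
rng = np.random.default_rng(2)
n = 3000
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal()
y = 2 + 3 * x + eps
X = np.column_stack([np.ones(n), x])

b_fgls, rho_hat = fgls_ar1(y, X)
print(b_fgls, rho_hat)  # close to [2, 3] and 0.5
```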
Before applying GLS or Cochrane-Orcutt it is sensible to test whether there is indeed serial correlation.

All four tests make (indirect) use of the autocorrelations of the OLS residuals:

r_k = ( Σ_{i=k+1}^n e_i e_{i−k} ) / ( Σ_{i=1}^n e_i² ),   k = 1, 2, . . .

This test is most generally applicable and therefore most suitable to test for the presence of serial correlation.

Correlogram of Residuals
Sample: 6/06/1991 10/05/1995
Included observations: 226
Q-statistic probabilities adjusted for 1 ARMA term
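The statistic r_k can be computed directly. A small sketch (my own illustration; here a simulated AR(1) series stands in for the actual OLS residual vector):

```python
# Residual autocorrelations r_k = sum_{i=k+1}^n e_i e_{i-k} / sum_{i=1}^n e_i^2
import numpy as np

def residual_autocorr(e, k):
    return (e[k:] @ e[:-k]) / (e @ e)

rng = np.random.default_rng(3)
n = 5000
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.6 * e[i - 1] + rng.normal()

print([residual_autocorr(e, k) for k in (1, 2, 3)])
# roughly 0.6^k for an AR(1) with rho = 0.6: about 0.6, 0.36, 0.22
```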