
FEB22005(X): Econometrics 2
Lecture 3: Serial Correlation

Michel van der Wel
Erasmus University Rotterdam
Econometric Institute
March 9, 2017

Outline
1 Previous class
2 Consequences of autocorrelation
3 Estimating under serial correlation
4 GLS
5 Tests

ERASMUS SCHOOL OF ECONOMICS 1/38

Previous class(es)

Have y_i = x_i'β + ε_i, for i = 1, …, n, where
  E(ε_i²) = σ_i²        → Heteroskedasticity
  E[ε_i ε_j] = σ_ij ≠ 0 → Autocorrelation

In matrix form y = Xβ + ε, with E[ε] = 0 and E[εε'] = Ω, where for

Homoskedasticity: Ω = σ² I_n

Heteroskedasticity:
$$\Omega = \begin{pmatrix} \sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_n^2 \end{pmatrix}$$

Autocorrelation (and heteroskedasticity):
$$\Omega = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & & \vdots \\ \vdots & & \ddots & \vdots \\ \sigma_{n1} & \cdots & \cdots & \sigma_n^2 \end{pmatrix}$$


Consequences of (auto)correlation

Correlation has important consequences for (among others) estimating the parameters β

1 In particular, the OLS estimator
  b_OLS = (X'X)^{-1} X'y
remains unbiased (and consistent):
$$\begin{aligned} E[b_{OLS}] &= E[(X'X)^{-1}X'y] \\ &= E[(X'X)^{-1}X'(X\beta + \varepsilon)] \\ &= \beta + E[(X'X)^{-1}X'\varepsilon] \\ &= \beta + (X'X)^{-1}X'E[\varepsilon] \\ &= \beta \end{aligned}$$
  Note: this holds under the assumption that X is fixed (so no lagged y allowed)

2 OLS is inefficient

Standard errors

3 The usual OLS standard errors are no longer correct. From the previous slide we have
  b_OLS − β = (X'X)^{-1} X'ε
such that the covariance matrix of b_OLS, V[b_OLS], given by E[(b_OLS − β)(b_OLS − β)'], is
$$\begin{aligned} V[b_{OLS}] &= E[(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}] \\ &= (X'X)^{-1}X'E[\varepsilon\varepsilon']X(X'X)^{-1} \\ &= (X'X)^{-1}X'\Omega X(X'X)^{-1} \end{aligned}$$

4 Only in the absence of correlation and with homoskedasticity (Ω = σ²I) does the covariance matrix V[b_OLS] simplify to the usual σ²(X'X)^{-1}
(→ see also the part on heteroskedasticity)

As X'X = Σ_{i=1}^n x_i x_i' and X'ΩX = Σ_{i=1}^n Σ_{j=1}^n σ_ij x_i x_j', it holds that
$$V[b_{OLS}] = \left(\sum_{i=1}^n x_i x_i'\right)^{-1} \left(\sum_{i=1}^n \sum_{j=1}^n \sigma_{ij}\, x_i x_j'\right) \left(\sum_{i=1}^n x_i x_i'\right)^{-1}$$

A direct estimator of the unknown covariance σ_ij is e_i e_j, the cross-product of OLS residuals e_i = y_i − x_i'b_OLS

However
$$\sum_{i=1}^n \sum_{j=1}^n e_i e_j\, x_i x_j' = X'ee'X = 0,$$
so this estimator cannot be used to estimate V[b_OLS]
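To make the consequence concrete, here is a small numerical sketch (the design matrix, ρ, and σ² are all hypothetical choices) comparing the correct sandwich covariance (X'X)^{-1}X'ΩX(X'X)^{-1} under a known AR(1)-type Ω with the usual formula σ²(X'X)^{-1} that assumes Ω = σ²I:

```python
import numpy as np

# Hypothetical design matrix and a known AR(1)-type disturbance covariance:
# Omega[i, j] = sigma2 * rho^{|i-j|} / (1 - rho^2)
n = 50
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(n), rng.normal(size=n)])

rho, sigma2 = 0.7, 1.0
idx = np.arange(n)
Omega = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)

XtX_inv = np.linalg.inv(X.T @ X)
V_true = XtX_inv @ X.T @ Omega @ X @ XtX_inv   # sandwich: correct V[b_OLS]
V_naive = Omega[0, 0] * XtX_inv                # usual formula with sigma^2 = var(eps_i)

print(np.diag(V_true))    # correct variances of b_OLS
print(np.diag(V_naive))   # what the default OLS formula would report
```

With positive serial correlation the two can differ substantially, which is exactly why the usual OLS standard errors are no longer correct.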


Estimator for covariance matrix OLS

The solution is to weight the e_i e_j terms
⇒ This provides the "Newey-West" estimator of V[b_OLS]:
$$\widehat{V}[b_{OLS}] = \frac{1}{n}\left(\frac{1}{n}X'X\right)^{-1} \left(\frac{1}{n}\widehat{X'\Omega X}\right) \left(\frac{1}{n}X'X\right)^{-1}$$
with
$$\frac{1}{n}\widehat{X'\Omega X} = \frac{1}{n}\sum_{i=1}^n e_i^2\, x_i x_i' + \frac{1}{n}\sum_{i=1}^{n-1}\sum_{j=i+1}^n w_{|j-i|}\, e_i e_j\, (x_i x_j' + x_j x_i')$$
where w_h is the kernel.

Example: Bartlett kernel
$$w_h = \begin{cases} 1 - \frac{h}{B} & h < B \\ 0 & h \geq B \end{cases}$$

⇒ Newey-West standard errors are HAC [Heteroskedasticity and Autocorrelation Consistent]

Example continued

Dependent Variable: LOG(SHARE1)
Method: Least Squares
Sample: 6/06/1991 10/05/1995
Included observations: 227
HAC standard errors & covariance (Bartlett kernel, Newey-West fixed bandwidth = 5.0000)

Variable        Coefficient   Std. Error   t-Statistic   Prob.
C                1.067230     0.275888     3.868351     0.0001
LOG(PRICE1)     -1.295001     0.169646    -7.633551     0.0000
PROMO11          0.345599     0.120684     2.863673     0.0046
PROMO21          0.874562     0.326462     2.678911     0.0079

R-squared            0.573537   Mean dependent var     -0.817644
Adjusted R-squared   0.567800   S.D. dependent var      0.142836
S.E. of regression   0.093903   Akaike info criterion  -1.875635
Sum squared resid    1.966382   Schwarz criterion      -1.815284
Log likelihood       216.8846   Hannan-Quinn criter.   -1.851283
F-statistic          99.96859   Durbin-Watson stat      1.124464
Prob(F-statistic)    0.000000   Wald F-statistic        101.3733
                                Prob(Wald F-statistic)  0.000000

Standard errors are larger in this application (not always the case)
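The Newey-West formula can be sketched directly in code. Below is a minimal numpy implementation of the Bartlett-kernel estimator following the slide's formula; the data are synthetic and the bandwidth B = 5 is just an illustrative choice:

```python
import numpy as np

def newey_west_cov(X, e, B):
    # HAC estimator of V[b_OLS] with Bartlett kernel w_h = 1 - h/B:
    #   V = (X'X)^{-1} S (X'X)^{-1}, where
    #   S = sum_i e_i^2 x_i x_i'
    #     + sum_{h=1}^{B-1} w_h sum_i e_i e_{i+h} (x_i x_{i+h}' + x_{i+h} x_i')
    # (algebraically the same as the 1/n-scaled expression on the slide)
    S = (X * (e**2)[:, None]).T @ X                  # h = 0 term
    for h in range(1, B):
        w = 1.0 - h / B
        G = (X[:-h] * (e[:-h] * e[h:])[:, None]).T @ X[h:]
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ S @ XtX_inv

# Synthetic data (hypothetical numbers) just to exercise the function
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=n)
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
V_hac = newey_west_cov(X, e, B=5)
```

In statsmodels the same idea is available via `OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': 4})`, with `maxlags` playing roughly the role of B − 1.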

Estimating under serial correlation

OLS no longer BLUE!

Serial correlation in the linear regression model
  y_i = β1 + β2 x_i + ε_i, i = 1, …, n,
can be accounted for in various ways when estimating β1 and β2

1 Include lagged variables
⇒ Correlation between ε_i = y_i − β1 − β2 x_i and ε_{i−1} = y_{i−1} − β1 − β2 x_{i−1} could occur due to correlation between y_i and y_{i−1} or x_{i−1}
⇒ Therefore the disturbances η_i in the model
  y_i = ρ y_{i−1} + β1 + β2 x_i + β3 x_{i−1} + η_i, i = 1, …, n,
need not be correlated


Model with lags

Dependent Variable: LOG(SHARE1)
Method: Least Squares
Sample (adjusted): 6/13/1991 10/05/1995
Included observations: 226 after adjustments

Variable           Coefficient   Std. Error   t-Statistic   Prob.
C                   0.546057     0.206198     2.648218     0.0087
LOG(PRICE1)        -1.805941     0.159140    -11.34813     0.0000
PROMO11             0.295989     0.078788     3.756752     0.0002
PROMO21             0.443473     0.280192     1.582745     0.1149
LOG(PRICE1(-1))     1.129974     0.176820     6.390540     0.0000
LOG(SHARE1(-1))     0.490609     0.062649     7.831090     0.0000

R-squared            0.665045   Mean dependent var     -0.816880
Adjusted R-squared   0.657432   S.D. dependent var      0.142689
S.E. of regression   0.083515   Akaike info criterion  -2.101400
Sum squared resid    1.534433   Schwarz criterion      -2.010589
Log likelihood       243.4582   Hannan-Quinn criter.   -2.064752
F-statistic          87.36077   Durbin-Watson stat      1.813537
Prob(F-statistic)    0.000000

Scatters of residuals
[Figure: scatter plots of the lagged residuals against the residuals, for the model without lags (RESIDNOLAGS) and the model with lags (RESIDWITHLAGS)]

Residuals in model with lags
[Figure: residual, actual and fitted values of the model with lags over the sample 1991–1995]

Estimating under serial correlation

2 Modeling of the serial correlation using an autoregressive model for the disturbances

Suppose it is possible to model the serial correlation in the disturbances with an AutoRegressive (AR) model of order 1:
  y_i = β1 + β2 x_i + ε_i,
  ε_i = γ ε_{i−1} + η_i,
with standard assumptions for η_i. The system implies
  y_i − γ y_{i−1} = β1 (1 − γ) + β2 (x_i − γ x_{i−1}) + η_i
⇒ Given a value for γ, the parameters β1 and β2 can be estimated with OLS, and given estimates of ε, the parameter γ can also be estimated with OLS. This gives the iterative Cochrane-Orcutt procedure
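The iterative procedure above can be sketched in a few lines: alternate OLS on the quasi-differenced equation (given γ) with OLS of the residuals on their own lag (given β). The data below are synthetic and all parameter values hypothetical:

```python
import numpy as np

# Synthetic data with AR(1) disturbances (hypothetical parameter values)
rng = np.random.default_rng(2)
n, beta1, beta2, gamma_true = 500, 1.0, 0.5, 0.6
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = gamma_true * eps[i - 1] + rng.normal(scale=0.1)
y = beta1 + beta2 * x + eps

gamma = 0.0
for _ in range(20):
    # Step 1: given gamma, OLS on the quasi-differenced equation
    #   y_i - g*y_{i-1} = b1*(1-g) + b2*(x_i - g*x_{i-1}) + eta_i
    ys = y[1:] - gamma * y[:-1]
    Xs = np.column_stack([np.full(n - 1, 1.0 - gamma),
                          x[1:] - gamma * x[:-1]])
    b = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    # Step 2: given (b1, b2), regress residuals on their own lag to update gamma
    e = y - b[0] - b[1] * x
    gamma = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
```

statsmodels implements a closely related estimator as `GLSAR(y, X, rho=1).iterative_fit()`.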


Alternative estimation procedures

As alternatives to Cochrane-Orcutt one can use:

Non-Linear Least Squares for
  y_i = γ y_{i−1} + β1 (1 − γ) + β2 (x_i − γ x_{i−1}) + η_i

In EViews the option AR(1):
  ls y c x AR(1)

Example AR(1)

Dependent Variable: LOG(SHARE1)
Method: Least Squares
Sample (adjusted): 6/13/1991 10/05/1995
Included observations: 226 after adjustments
Convergence achieved after 6 iterations

Variable        Coefficient   Std. Error   t-Statistic   Prob.
C                1.605277     0.267088     6.010303     0.0000
LOG(PRICE1)     -1.654138     0.168958    -9.790248     0.0000
PROMO11          0.372533     0.086705     4.296573     0.0000
PROMO21          0.713930     0.283069     2.522104     0.0124
AR(1)            0.515989     0.061730     8.358819     0.0000

R-squared            0.667315   Mean dependent var     -0.816880
Adjusted R-squared   0.661294   S.D. dependent var      0.142689
S.E. of regression   0.083043   Akaike info criterion  -2.117051
Sum squared resid    1.524032   Schwarz criterion      -2.041375
Log likelihood       244.2267   Hannan-Quinn criter.   -2.086511
F-statistic          110.8230   Durbin-Watson stat      1.880427
Prob(F-statistic)    0.000000

Inverted AR Roots   .52
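To illustrate the NLS alternative: the equation is linear in (β1, β2) once γ is fixed, so a simple sketch concentrates the sum of squares over a grid of γ values and picks the minimum (synthetic data, hypothetical numbers; a real NLS routine would use a gradient-based optimizer instead of a grid):

```python
import numpy as np

# Synthetic data with AR(1) disturbances (hypothetical parameter values)
rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal(scale=0.1)
y = 2.0 + 1.5 * x + eps

# Concentrated least squares: for each gamma, solve the linear problem
# y_i - g*y_{i-1} = b1*(1-g) + b2*(x_i - g*x_{i-1}), keep the smallest SSR
best = None
for gamma in np.linspace(-0.95, 0.95, 381):
    ys = y[1:] - gamma * y[:-1]
    Xs = np.column_stack([np.full(n - 1, 1.0 - gamma),
                          x[1:] - gamma * x[:-1]])
    b, resid = np.linalg.lstsq(Xs, ys, rcond=None)[:2]
    ssr = float(resid[0])
    if best is None or ssr < best[0]:
        best = (ssr, gamma, b)
ssr, gamma_hat, b_hat = best
```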

Generalized Least Squares

The equivalent of CO for the general linear regression model
  y_i = x_i'β + ε_i, with ε_i = γ ε_{i−1} + η_i, i = 1, …, n,
is equal to
  y_i − γ y_{i−1} = (x_i − γ x_{i−1})'β + η_i

Given γ we can see this as a model for the transformed data
  y_i* = x_i*'β + ε_i*, i = 2, 3, …
with y_i* = y_i − γ y_{i−1}, x_i* = x_i − γ x_{i−1}, and ε_i* = ε_i − γ ε_{i−1}

As ε_i* = η_i:
  Disturbances ε_i* are homoskedastic and free of serial correlation
  All "classical" assumptions in the linear model hold
  OLS thus provides an efficient estimate of β!
⇒ This is similar to the idea ('trick') of WLS for heteroskedasticity

Generalized Least Squares

WLS and Cochrane-Orcutt are two examples of the general estimation method of Generalized Least Squares (GLS)

Main idea: transform the data in such a way that the conditions hold under which OLS is efficient

Consider the linear regression model in matrix form
  y = Xβ + ε,
with E[εε'] = Ω, and suppose Ω is known
⇒ Ω is a covariance matrix, and is thus symmetric and positive definite. Therefore there is an invertible lower triangular matrix P such that
  PP' = Ω
(A decomposition that satisfies this is called the Cholesky decomposition, also sometimes denoted with P = Ω^{1/2})

Now define the transformed data y* = P^{-1}y, X* = P^{-1}X, such that
  y = Xβ + ε, with E[εε'] = Ω,
can be written as
  y* = X*β + ε*,
where ε* = P^{-1}ε with
$$\begin{aligned} \text{var}(\varepsilon^*) &= \text{var}(P^{-1}\varepsilon) \\ &= P^{-1}\,\text{var}(\varepsilon)\,P^{-1\prime} \\ &= P^{-1}\Omega P^{-1\prime} \\ &= P^{-1}PP'P^{-1\prime} \\ &= I \end{aligned}$$

The disturbances ε* are homoskedastic and free of serial correlation, such that all "classical" assumptions in the linear regression model hold. OLS for the transformed model thus provides an efficient estimate of β ⇒ This is the GLS Estimator

Generalized Least Squares

For this estimator:
$$\begin{aligned} b_{GLS} &= (X^{*\prime}X^*)^{-1}X^{*\prime}y^* \\ &= \left((P^{-1}X)'(P^{-1}X)\right)^{-1}(P^{-1}X)'(P^{-1}y) \\ &= \left(X'P^{-1\prime}P^{-1}X\right)^{-1}X'P^{-1\prime}P^{-1}y \\ &= \left(X'(PP')^{-1}X\right)^{-1}X'(PP')^{-1}y \\ &= \left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}y, \end{aligned}$$
for which E[b_GLS] = β,
$$\begin{aligned} \text{Var}(b_{GLS}) &= (X^{*\prime}X^*)^{-1} \\ &= \left(X'P^{-1\prime}P^{-1}X\right)^{-1} \\ &= \left(X'\Omega^{-1}X\right)^{-1} \end{aligned}$$
⇒ GLS is BLUE

GLS – Example 1: WLS

In case of heteroskedasticity of the form E[ε_i²] = σ² z_i² it holds that
$$\Omega = \sigma^2 \begin{pmatrix} z_1^2 & 0 & \cdots & 0 \\ 0 & z_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_n^2 \end{pmatrix}$$
such that
$$P = \sigma \begin{pmatrix} z_1 & 0 & \cdots & 0 \\ 0 & z_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_n \end{pmatrix}, \qquad P^{-1} = \frac{1}{\sigma} \begin{pmatrix} z_1^{-1} & 0 & \cdots & 0 \\ 0 & z_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_n^{-1} \end{pmatrix}$$
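The derivation above can be checked numerically: OLS on the Cholesky-transformed data reproduces the direct formula (X'Ω^{-1}X)^{-1}X'Ω^{-1}y. A minimal sketch with a hypothetical known diagonal Ω:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
# A known diagonal Omega (hypothetical heteroskedastic weights), for illustration
Omega = np.diag(rng.uniform(0.5, 2.0, size=n))

P = np.linalg.cholesky(Omega)        # lower triangular with P P' = Omega
Pinv = np.linalg.inv(P)
ys, Xs = Pinv @ y, Pinv @ X          # transformed data y* = P^{-1}y, X* = P^{-1}X

b_transformed = np.linalg.lstsq(Xs, ys, rcond=None)[0]      # OLS on transformed data
Om_inv = np.linalg.inv(Omega)
b_direct = np.linalg.solve(X.T @ Om_inv @ X, X.T @ Om_inv @ y)  # (X'Om^-1X)^-1 X'Om^-1 y
```

The two estimates agree up to floating-point precision, which is exactly the algebra on this slide.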
GLS – Example 2: Cochrane-Orcutt

In case of autocorrelation of the form ε_i = ρ ε_{i−1} + η_i it turns out (will come during Time Series Analysis) that
$$\Omega = \frac{\sigma_\eta^2}{1-\rho^2} \begin{pmatrix} 1 & \rho & \cdots & \rho^{n-1} \\ \rho & 1 & \cdots & \rho^{n-2} \\ \vdots & & \ddots & \vdots \\ \rho^{n-1} & \rho^{n-2} & \cdots & 1 \end{pmatrix}$$
such that
$$P = \sigma_\eta \begin{pmatrix} \frac{1}{a} & 0 & 0 & \cdots & 0 & 0 & 0 \\ \frac{\rho}{a} & 1 & 0 & \cdots & 0 & 0 & 0 \\ \frac{\rho^2}{a} & \rho & 1 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ \frac{\rho^{n-3}}{a} & \rho^{n-4} & \rho^{n-5} & \cdots & 1 & 0 & 0 \\ \frac{\rho^{n-2}}{a} & \rho^{n-3} & \rho^{n-4} & \cdots & \rho & 1 & 0 \\ \frac{\rho^{n-1}}{a} & \rho^{n-2} & \rho^{n-3} & \cdots & \rho^2 & \rho & 1 \end{pmatrix}, \quad \text{with } a = \sqrt{1-\rho^2}$$

This provides
$$P^{-1} = \frac{1}{\sigma_\eta} \begin{pmatrix} a & 0 & 0 & \cdots & 0 & 0 & 0 \\ -\rho & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -\rho & 1 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 & 0 \\ 0 & 0 & 0 & \cdots & -\rho & 1 & 0 \\ 0 & 0 & 0 & \cdots & 0 & -\rho & 1 \end{pmatrix}$$

Compared to the transformation of Cochrane-Orcutt (given Ω):
  In GLS also the first observation is included
  Scaling factor 1/σ_η
→ Otherwise exactly the same!
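The decomposition can be verified numerically by building P exactly as on the slide and checking that PP' reproduces Ω, and that P^{-1} has the quasi-differencing pattern (a, then −ρ/1 rows). The values n = 6, ρ = 0.6, σ_η = 1.3 are hypothetical:

```python
import numpy as np

n, rho, s_eta = 6, 0.6, 1.3
a = np.sqrt(1 - rho**2)

# P: first column (1/a, rho/a, ..., rho^{n-1}/a), lower-triangular Toeplitz rest
P = np.zeros((n, n))
for i in range(n):
    P[i, 0] = rho**i / a
    for j in range(1, i + 1):
        P[i, j] = rho ** (i - j)
P *= s_eta

# AR(1) disturbance covariance: Omega[i, j] = s_eta^2 / (1-rho^2) * rho^{|i-j|}
idx = np.arange(n)
Omega = s_eta**2 / (1 - rho**2) * rho ** np.abs(idx[:, None] - idx[None, :])

Pinv = np.linalg.inv(P)   # should match (1/s_eta)*[[a,0,...],[-rho,1,0,...],...]
```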

Feasible GLS

In practice Ω is unknown – GLS is therefore not applicable, unless we first get an estimate of Ω. This gives rise to the "feasible" GLS (FGLS) estimator:
1 Estimate β in y_i = x_i'β + ε_i with OLS (note that OLS is consistent!)
2 Estimate Ω using residuals of the previous step, e_i = y_i − x_i'b
3 Use Ω̂ to determine P̂
4 Transform the data with P̂^{-1}: y* = P̂^{-1}y and X* = P̂^{-1}X
5 Estimate β with OLS in the model for the transformed data: y_i* = x_i*'β + ε_i*

Possibly iterate steps 2–5 further: Iterated Feasible GLS

Direct proof of GLS efficiency

Given y = Xβ + ε with Var[ε] = Ω (Ω known)
Prove that the GLS estimator has a smaller variance than the OLS estimator
  Variance GLS: (X'Ω^{-1}X)^{-1}
  Variance OLS: (X'X)^{-1}(X'ΩX)(X'X)^{-1}
⇒ Prove that (X'X)^{-1}(X'ΩX)(X'X)^{-1} − (X'Ω^{-1}X)^{-1} is positive semidefinite

Positive Semidefinite (psd)
1 A square matrix C is positive semidefinite if x'Cx ≥ 0 for all vectors x
2 If C is psd, then B'CB is also psd for all matrices B


Proof (compact form)

1 Write Ω = P · P with P symmetric
  (this is possible as Ω is a variance matrix and thus pos. def.)

2 Write the difference in variances as V_OLS − V_GLS = B'CB
  → This holds for B = PX(X'X)^{-1} and C = I − A(A'A)^{-1}A', with A = P^{-1}X

3 As C is a projection matrix, C = C · C = C', and thus it is psd

4 By property 2 of the previous slide, the difference between the variances is positive semidefinite!
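The efficiency result can be illustrated numerically: for a hypothetical known Ω, the eigenvalues of V_OLS − V_GLS should all be non-negative (up to floating-point error):

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 25, 0.8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)  # AR(1)-type Omega

XtX_inv = np.linalg.inv(X.T @ X)
V_ols = XtX_inv @ X.T @ Omega @ X @ XtX_inv            # (X'X)^-1 X'OmX (X'X)^-1
V_gls = np.linalg.inv(X.T @ np.linalg.inv(Omega) @ X)  # (X'Om^-1 X)^-1
eigvals = np.linalg.eigvalsh(V_ols - V_gls)            # should all be >= 0
```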

Tests for autocorrelation

Before applying GLS or Cochrane-Orcutt it is sensible to test whether there is indeed serial correlation

This is possible in multiple ways, for example with the tests of
  Durbin-Watson (DW)
  Box-Pierce (BP)
  Ljung-Box (LB)
  Breusch-Godfrey (BG)

Autocorrelation

All four tests make (indirect) use of the autocorrelation of the OLS residuals:
$$r_k = \frac{\sum_{i=k+1}^n e_i e_{i-k}}{\sum_{i=1}^n e_i^2}, \qquad k = 1, 2, \ldots$$

The null hypothesis being tested is that of absence of serial correlation, so r_k = 0, k = 1, 2, …

Note the similarity to the sample k-th autocorrelation coefficient:
$$\frac{\sum_{i=k+1}^n e_i e_{i-k}}{\sqrt{\sum_{i=k+1}^n e_i^2}\sqrt{\sum_{i=1}^{n-k} e_i^2}},$$
which is in fact asymptotically equivalent
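The residual autocorrelation r_k is a one-liner. A small helper, with a deterministic check on perfectly alternating residuals (for that series r_1 = −5/6):

```python
import numpy as np

def resid_autocorr(e, k):
    # r_k = sum_{i=k+1}^n e_i e_{i-k} / sum_{i=1}^n e_i^2
    e = np.asarray(e, dtype=float)
    return (e[k:] @ e[:-k]) / (e @ e)

e = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # alternating residuals
r1 = resid_autocorr(e, 1)                        # -5/6 for this series
```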


Tests for autocorrelation

The Durbin-Watson (DW) statistic is defined as
$$\begin{aligned} DW &= \frac{\sum_{i=2}^n (e_i - e_{i-1})^2}{\sum_{i=1}^n e_i^2} \\ &= \frac{\sum_{i=2}^n e_i^2 + \sum_{i=2}^n e_{i-1}^2 - 2\sum_{i=2}^n e_i e_{i-1}}{\sum_{i=1}^n e_i^2} \\ &\approx 2(1 - r_1) \end{aligned}$$

Value between 0 (for perfect correlation, r_1 = 1) and 4 (perfect negative correlation, r_1 = −1). Value under the null should be about 2

Two disadvantages:
1 Distribution under the null depends on the properties of the regressors
2 Not applicable when lagged dependent variables are included as regressors (see Exercise 5.11)

The Box-Pierce (BP) statistic is defined as
$$BP = n \sum_{k=1}^p r_k^2 \approx \chi^2(p) \text{ under } H_0$$

The Ljung-Box (LB) statistic is defined as
$$LB = n \sum_{k=1}^p \frac{n+2}{n-k}\, r_k^2 \approx \chi^2(p) \text{ under } H_0$$

Disadvantage: the BP and LB tests are only applicable when the regressors are nonstochastic (so do not include lagged y's). Correction for lagged y's is possible, however
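All three statistics can be computed directly from a residual series, following the formulas above (the white-noise "residuals" below are synthetic, just to exercise the functions):

```python
import numpy as np

def resid_autocorr(e, k):
    return (e[k:] @ e[:-k]) / (e @ e)

def dw_stat(e):
    # DW = sum_{i=2}^n (e_i - e_{i-1})^2 / sum_i e_i^2  (~ 2(1 - r_1))
    return np.sum(np.diff(e) ** 2) / np.sum(e**2)

def box_pierce(e, p):
    n = len(e)
    return n * sum(resid_autocorr(e, k) ** 2 for k in range(1, p + 1))

def ljung_box(e, p):
    n = len(e)
    return n * sum((n + 2) / (n - k) * resid_autocorr(e, k) ** 2
                   for k in range(1, p + 1))

rng = np.random.default_rng(6)
e = rng.normal(size=200)   # white-noise residuals: DW should be near 2
dw = dw_stat(e)
```

Note LB ≥ BP always, since each term is inflated by (n + 2)/(n − k) > 1; the correction improves the χ² approximation in smaller samples.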

Residual autocorr. and LB-stat.

"Basic" model:

Correlogram of Residuals
Sample: 6/06/1991 10/05/1995
Included observations: 227

 k     AC      PAC     Q-Stat   Prob
 1    0.435    0.435   43.534   0.000
 2    0.066   -0.152   44.549   0.000
 3    0.079    0.141   45.982   0.000
 4    0.291    0.262   65.737   0.000
 5    0.226   -0.024   77.662   0.000
 6    0.019   -0.072   77.746   0.000
 7    0.047    0.112   78.275   0.000
 8    0.168    0.049   84.949   0.000

Model with AR(1):

Correlogram of Residuals
Sample: 6/06/1991 10/05/1995
Included observations: 226
Q-statistic probabilities adjusted for 1 ARMA term

 k     AC      PAC     Q-Stat   Prob*
 1    0.059    0.059   0.7936
 2   -0.167   -0.171   7.2426   0.007
 3   -0.090   -0.071   9.1322   0.010
 4    0.250    0.241   23.691   0.000
 5    0.211    0.168   34.030   0.000
 6   -0.041    0.003   34.420   0.000
 7   -0.049    0.046   34.996   0.000
 8    0.032   -0.000   35.241   0.000

*Probabilities may not be valid for this equation specification.
(Q-stat in EViews gives the Ljung-Box statistic)

Breusch-Godfrey test

The Breusch-Godfrey (BG) test is an LM-test for the restriction H0: γ1 = … = γp = 0 in the model
  y_i = x_i'β + ε_i,
  ε_i = γ1 ε_{i−1} + … + γp ε_{i−p} + η_i

This test is most generally applicable and therefore most suitable to test for the presence of serial correlation

Applying the test:
1 OLS on y_i = x_i'β + η_i → e_i
2 Run the auxiliary regression e_i = γ1 e_{i−1} + … + γp e_{i−p} + x_i'δ + ω_i
3 Under H0 (no autocorrelation) we have nR² ≈ χ²(p)

Step 2 is thus a regression of the residuals on their own lags while also including all x_i; see the full derivation in Exercise 5.8
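The three steps above can be sketched on synthetic data (all parameter values hypothetical); with genuinely AR(1) disturbances the LM statistic should comfortably exceed the χ²(2) critical value:

```python
import numpy as np

# Synthetic data with AR(1) disturbances (hypothetical parameter values)
rng = np.random.default_rng(7)
n, p = 300, 2
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal(scale=0.5)
y = 1.0 + 2.0 * x + eps

# Step 1: OLS residuals of the basic model
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

# Step 2: auxiliary regression of e on its own p lags plus all regressors
# (presample lagged residuals set to zero, as in the EViews output)
lags = np.column_stack([np.concatenate([np.zeros(k), e[:-k]])
                        for k in range(1, p + 1)])
Z = np.column_stack([X, lags])
g = np.linalg.lstsq(Z, e, rcond=None)[0]
u = e - Z @ g

# Step 3: LM statistic n*R^2, approximately chi-square(p) under H0
R2 = 1.0 - (u @ u) / (e @ e)   # e has mean zero, so total SS = e'e
LM = n * R2
```

statsmodels provides the same test as `acorr_breusch_godfrey` in `statsmodels.stats.diagnostic`.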


Example LM-test I

Test results in model without lags

Breusch-Godfrey Serial Correlation LM Test:
F-statistic      31.83044   Prob. F(2,221)        0.0000
Obs*R-squared    50.76574   Prob. Chi-Square(2)   0.0000

Test Equation:
Dependent Variable: RESID
Method: Least Squares
Sample: 6/06/1991 10/05/1995
Included observations: 227
Presample missing value lagged residuals set to zero.

Variable       Coefficient   Std. Error   t-Statistic   Prob.
C               0.193220     0.194808     0.991847     0.3224
LOG(PRICE1)    -0.134221     0.123158    -1.089823     0.2770
PROMO11         0.061648     0.076279     0.808187     0.4199
PROMO21        -0.234480     0.276293    -0.848666     0.3970
RESID(-1)       0.533312     0.067627     7.886106     0.0000
RESID(-2)      -0.126423     0.069296    -1.824388     0.0694

R-squared            0.223638   Mean dependent var     -2.82E-16
Adjusted R-squared   0.206073   S.D. dependent var      0.093278
S.E. of regression   0.083113   Akaike info criterion  -2.111150
Sum squared resid    1.526625   Schwarz criterion      -2.020623
Log likelihood       245.6155   Hannan-Quinn criter.   -2.074621
F-statistic          12.73218   Durbin-Watson stat      1.932896
Prob(F-statistic)    0.000000

Example LM-test II

Test results in model with AR(1) disturbances

Breusch-Godfrey Serial Correlation LM Test:
F-statistic      4.839839   Prob. F(2,219)        0.0088
Obs*R-squared    9.566252   Prob. Chi-Square(2)   0.0084

Test Equation:
Dependent Variable: RESID
Method: Least Squares
Sample: 6/13/1991 10/05/1995
Included observations: 226
Presample missing value lagged residuals set to zero.

Variable       Coefficient   Std. Error   t-Statistic   Prob.
C              -0.023935     0.262682    -0.091117     0.9275
LOG(PRICE1)     0.016662     0.166184     0.100265     0.9202
PROMO11        -0.005581     0.085465    -0.065302     0.9480
PROMO21        -0.073054     0.280677    -0.260278     0.7949
AR(1)           0.293041     0.210692     1.390853     0.1657
RESID(-1)      -0.217750     0.215724    -1.009390     0.3139
RESID(-2)      -0.329362     0.127882    -2.575508     0.0107

R-squared            0.042329   Mean dependent var     -1.03E-13
Adjusted R-squared   0.016091   S.D. dependent var      0.082301
S.E. of regression   0.081636   Akaike info criterion  -2.142602
Sum squared resid    1.459522   Schwarz criterion      -2.036656
Log likelihood       249.1140   Hannan-Quinn criter.   -2.099847
F-statistic          1.613280   Durbin-Watson stat      2.096603
Prob(F-statistic)    0.144552

→ Still autocorrelation? (outliers could be an issue)

Example LM-test III

Test result in model with AR(1) disturbances and extra dummy

Breusch-Godfrey Serial Correlation LM Test:
F-statistic      2.223714   Prob. F(2,218)        0.1107
Obs*R-squared    4.518456   Prob. Chi-Square(2)   0.1044

Test Equation:
Dependent Variable: RESID
Method: Least Squares
Sample: 6/13/1991 10/05/1995
Included observations: 226
Presample missing value lagged residuals set to zero.

Variable       Coefficient   Std. Error   t-Statistic   Prob.
C              -0.010256     0.258451    -0.039684     0.9684
LOG(PRICE1)     0.007024     0.163463     0.042967     0.9658
PROMO11        -0.000193     0.083409    -0.002311     0.9982
PROMO21        -0.090704     0.275126    -0.329682     0.7420
DUM             0.026605     0.072584     0.366547     0.7143
AR(1)           0.232925     0.223294     1.043131     0.2980
RESID(-1)      -0.188553     0.229364    -0.822070     0.4119
RESID(-2)      -0.241590     0.133951    -1.803566     0.0727

R-squared            0.019993   Mean dependent var      4.15E-15
Adjusted R-squared  -0.011475   S.D. dependent var      0.078567
S.E. of regression   0.079016   Akaike info criterion  -2.203567
Sum squared resid    1.361101   Schwarz criterion      -2.082486
Log likelihood       257.0031   Hannan-Quinn criter.   -2.154704
F-statistic          0.635347   Durbin-Watson stat      2.058627
Prob(F-statistic)    0.726381
