by James H. Stock and Mark W. Watson
18.1. (a) The regression in the matrix form is
$$Y = X\beta + U,$$
with $Y$ and $U$ the $n \times 1$ vectors of observations on the dependent variable and the error term, $X$ the matrix of observations on the regressors, and $\beta$ the vector of coefficients.

(b) The null hypothesis can be written as $R\beta = r$ with
$$R = (0 \quad 0 \quad 1) \quad \text{and} \quad r = 0.$$
The F-statistic testing this hypothesis is
$$F = (R\hat{\beta} - r)'\,[R\hat{\Sigma}_{\hat{\beta}}R']^{-1}\,(R\hat{\beta} - r)/q,$$
with $q = 1$, where $\hat{\Sigma}_{\hat{\beta}}$ is the estimator of the covariance matrix of $\hat{\beta}$. We reject the null hypothesis if the calculated F-statistic is larger than the critical value of the $F_{q,\infty}$ distribution at a given significance level.
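As a numerical illustration, the following sketch computes this F-statistic on simulated data for $H_0\!: \beta_2 = 0$, using an HC0-style robust estimate of the covariance matrix of $\hat{\beta}$. The data-generating process and all names are hypothetical, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.0 * x2 + rng.normal(size=n)  # true beta_2 = 0: H0 holds

X = np.column_stack([np.ones(n), x1, x2])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

# HC0 sandwich estimate of the covariance matrix of beta_hat
XtX_inv = np.linalg.inv(X.T @ X)
Sigma_hat = XtX_inv @ (X.T * resid**2) @ X @ XtX_inv

R = np.array([[0.0, 0.0, 1.0]])  # R = (0 0 1)
r = np.array([0.0])              # r = 0
q = R.shape[0]                   # q = 1

diff = R @ beta_hat - r
F = diff @ np.linalg.solve(R @ Sigma_hat @ R.T, diff) / q
print(F)  # compare with the 5% critical value of F_{1,inf}, about 3.84
```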
18.3. (a)
$$\begin{aligned}
\mathrm{var}(Q) &= E[(Q - \mu_Q)^2] = E[(Q - \mu_Q)(Q - \mu_Q)'] \\
&= E[(\mathbf{c}'\mathbf{W} - \mathbf{c}'\mu_W)(\mathbf{c}'\mathbf{W} - \mathbf{c}'\mu_W)'] \\
&= \mathbf{c}'E[(\mathbf{W} - \mu_W)(\mathbf{W} - \mu_W)']\mathbf{c} \\
&= \mathbf{c}'\,\mathrm{var}(\mathbf{W})\,\mathbf{c} = \mathbf{c}'\Sigma_W\mathbf{c},
\end{aligned}$$
where the second equality uses the fact that $Q$ is a scalar and the third equality uses the fact that $\mu_Q = \mathbf{c}'\mu_W$.

(b) Because the covariance matrix $\Sigma_W$ is positive definite, we have $\mathbf{c}'\Sigma_W\mathbf{c} > 0$ for every non-zero vector $\mathbf{c}$ from the definition. Thus $\mathrm{var}(Q) > 0$. Both the vector $\mathbf{c}$ and the matrix $\Sigma_W$ are finite, so $\mathrm{var}(Q) = \mathbf{c}'\Sigma_W\mathbf{c}$ is also finite. Thus $0 < \mathrm{var}(Q) < \infty$.
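The identity $\mathrm{var}(\mathbf{c}'\mathbf{W}) = \mathbf{c}'\Sigma_W\mathbf{c}$ is easy to confirm by simulation; the covariance matrix, mean vector, and $\mathbf{c}$ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])   # positive definite covariance matrix
c = np.array([1.0, -3.0])        # a fixed non-zero vector
mu = np.array([4.0, -2.0])

# Draw many realizations of W and form the scalar Q = c'W
W = rng.multivariate_normal(mu, Sigma, size=200_000)
Q = W @ c

print(Q.var())        # sample variance of Q
print(c @ Sigma @ c)  # c' Sigma_W c = 8.0: positive, finite, and matching
```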
18.5. (a) $M_X$ is idempotent because
$$M_X M_X = (I_n - P_X)(I_n - P_X) = I_n - P_X - P_X + P_X P_X = I_n - 2P_X + P_X = I_n - P_X = M_X,$$
using the fact that $P_X$ is idempotent. $P_X M_X = 0_{n \times n}$ because
$$P_X M_X = P_X(I_n - P_X) = P_X - P_X P_X = P_X - P_X = 0_{n \times n}.$$

(b) Because $\hat{\beta} = (X'X)^{-1}X'Y$, we have
$$\hat{Y} = X\hat{\beta} = X(X'X)^{-1}X'Y = P_X Y$$
and
$$\hat{U} = Y - \hat{Y} = Y - P_X Y = (I_n - P_X)Y = M_X Y.$$
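Both matrix facts, and the two projection identities in (b), can be verified numerically on arbitrary simulated data; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 20, 3
X = rng.normal(size=(n, k))
Y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)  # P_X = X (X'X)^{-1} X'
M = np.eye(n) - P                      # M_X = I_n - P_X

print(np.allclose(M @ M, M))                 # M_X is idempotent
print(np.allclose(P @ M, np.zeros((n, n))))  # P_X M_X = 0

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.allclose(P @ Y, X @ beta_hat))      # P_X Y = Y_hat
print(np.allclose(M @ Y, Y - X @ beta_hat))  # M_X Y = U_hat
```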
18.7. (a) We write the regression model, $Y_i = \beta_1 X_i + \beta_2 W_i + u_i$, in the matrix form as
$$Y = X\beta_1 + W\beta_2 + U,$$
with $Y$, $X$, $W$, and $U$ the $n \times 1$ vectors with $i$-th elements $Y_i$, $X_i$, $W_i$, and $u_i$. The OLS estimator satisfies
$$\begin{pmatrix}\hat{\beta}_1\\ \hat{\beta}_2\end{pmatrix} = \begin{pmatrix}\beta_1\\ \beta_2\end{pmatrix} + \begin{pmatrix}X'X & X'W\\ W'X & W'W\end{pmatrix}^{-1}\begin{pmatrix}X'U\\ W'U\end{pmatrix} = \begin{pmatrix}\beta_1\\ \beta_2\end{pmatrix} + \begin{pmatrix}\frac{1}{n}\sum_{i=1}^n X_i^2 & \frac{1}{n}\sum_{i=1}^n X_i W_i\\ \frac{1}{n}\sum_{i=1}^n W_i X_i & \frac{1}{n}\sum_{i=1}^n W_i^2\end{pmatrix}^{-1}\begin{pmatrix}\frac{1}{n}\sum_{i=1}^n X_i u_i\\ \frac{1}{n}\sum_{i=1}^n W_i u_i\end{pmatrix}.$$
By the law of large numbers, $\frac{1}{n}\sum_{i=1}^n X_i^2 \xrightarrow{p} E(X^2)$; $\frac{1}{n}\sum_{i=1}^n W_i^2 \xrightarrow{p} E(W^2)$; $\frac{1}{n}\sum_{i=1}^n X_i W_i \xrightarrow{p} E(XW) = 0$ (because $X$ and $W$ are independent with means of zero); $\frac{1}{n}\sum_{i=1}^n X_i u_i \xrightarrow{p} E(Xu) = 0$ (because $X$ and $u$ are independent with means of zero); and $\frac{1}{n}\sum_{i=1}^n W_i u_i \xrightarrow{p} E(Wu)$. Thus
$$\begin{pmatrix}\hat{\beta}_1\\ \hat{\beta}_2\end{pmatrix} \xrightarrow{p} \begin{pmatrix}\beta_1\\ \beta_2\end{pmatrix} + \begin{pmatrix}E(X^2) & 0\\ 0 & E(W^2)\end{pmatrix}^{-1}\begin{pmatrix}0\\ E(Wu)\end{pmatrix} = \begin{pmatrix}\beta_1\\ \beta_2 + \frac{E(Wu)}{E(W^2)}\end{pmatrix}.$$
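This probability limit can be checked with a short Monte Carlo sketch. The design below (independent mean-zero $X$ and $W$, with $u = 0.5\,W + \varepsilon$ so that $E(Wu) = 0.5$) is a hypothetical illustration, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(3)
beta1, beta2 = 2.0, 1.0
n = 1_000_000

X = rng.normal(size=n)            # mean zero, independent of W and u
W = rng.normal(size=n)
u = 0.5 * W + rng.normal(size=n)  # E(Wu) = 0.5, so W is correlated with u
Y = beta1 * X + beta2 * W + u

Z = np.column_stack([X, W])
b1_hat, b2_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]

print(b1_hat)  # near beta_1 = 2.0: consistent
print(b2_hat)  # near beta_2 + E(Wu)/E(W^2) = 1.0 + 0.5 = 1.5: inconsistent
```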
(b) Write
$$u_i = \lambda W_i + a_i,$$
where $\lambda = E(Wu)/E(W^2)$. In this population regression, by construction, $E(aW) = 0$. Using this equation for $u_i$, rewrite the equation to be estimated as
$$Y_i = \beta_1 X_i + \beta_2 W_i + u_i = \beta_1 X_i + (\beta_2 + \lambda)W_i + a_i = \beta_1 X_i + \theta W_i + a_i,$$
where $\theta = \beta_2 + \lambda$. A calculation like that used in part (a) can be used to show that
$$\begin{pmatrix}\sqrt{n}(\hat{\beta}_1 - \beta_1)\\ \sqrt{n}(\hat{\beta}_2 - \theta)\end{pmatrix} = \begin{pmatrix}\frac{1}{n}\sum_{i=1}^n X_i^2 & \frac{1}{n}\sum_{i=1}^n X_i W_i\\ \frac{1}{n}\sum_{i=1}^n W_i X_i & \frac{1}{n}\sum_{i=1}^n W_i^2\end{pmatrix}^{-1}\begin{pmatrix}\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i a_i\\ \frac{1}{\sqrt{n}}\sum_{i=1}^n W_i a_i\end{pmatrix} \xrightarrow{d} \begin{pmatrix}E(X^2) & 0\\ 0 & E(W^2)\end{pmatrix}^{-1}\begin{pmatrix}S_1\\ S_2\end{pmatrix},$$
where $(S_1, S_2)'$ is the normal limit of the last vector, so that
$$\sqrt{n}(\hat{\beta}_1 - \beta_1) \xrightarrow{d} N\!\left(0, \frac{\sigma_a^2}{E(X^2)}\right).$$
Now consider the regression that omits $W$, which can be written as
$$Y_i = \beta_1 X_i + d_i,$$
where $d_i = \theta W_i + a_i$. Calculations like those used above imply that
$$\sqrt{n}(\hat{\beta}_1^r - \beta_1) \xrightarrow{d} N\!\left(0, \frac{\sigma_d^2}{E(X^2)}\right).$$
Because $d_i = \theta W_i + a_i$ with $E(a_i W_i) = 0$, we have $\sigma_d^2 = \theta^2 E(W^2) + \sigma_a^2 \ge \sigma_a^2$, so the estimator that omits $W$ is never more efficient.
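The variance comparison can likewise be simulated. Under the same hypothetical design as above ($\lambda = 0.5$, so $\theta = 1.5$), $\sigma_a^2 = 1$ and $\sigma_d^2 = \theta^2 E(W^2) + \sigma_a^2 = 3.25$, and the scaled sampling variances of the two estimators of $\beta_1$ should be close to these values divided by $E(X^2) = 1$.

```python
import numpy as np

rng = np.random.default_rng(7)
beta1, beta2, lam = 2.0, 1.0, 0.5
n, reps = 2_000, 2_000

b_long, b_short = [], []
for _ in range(reps):
    X = rng.normal(size=n)
    W = rng.normal(size=n)
    u = lam * W + rng.normal(size=n)  # a_i ~ N(0, 1), so E(aW) = 0
    Y = beta1 * X + beta2 * W + u

    Z = np.column_stack([X, W])
    b_long.append(np.linalg.lstsq(Z, Y, rcond=None)[0][0])  # includes W
    b_short.append((X @ Y) / (X @ X))                       # omits W

print(n * np.var(b_long))   # ~ sigma_a^2 / E(X^2) = 1
print(n * np.var(b_short))  # ~ sigma_d^2 / E(X^2) = 3.25
```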
18.9. (a)
$$\hat{\beta} = (X'M_W X)^{-1}X'M_W Y = (X'M_W X)^{-1}X'M_W(X\beta + W\gamma + U) = \beta + (X'M_W X)^{-1}X'M_W U,$$
where the last equality has used the orthogonality $M_W W = 0$.

(b) Write
$$n^{-1}X'M_W X = n^{-1}X'(I_n - P_W)X = n^{-1}X'X - n^{-1}X'P_W X = n^{-1}X'X - (n^{-1}X'W)(n^{-1}W'W)^{-1}(n^{-1}W'X).$$
First consider $n^{-1}X'X = \frac{1}{n}\sum_{i=1}^n X_i X_i'$. The $(j, l)$ element of this matrix is
$$\frac{1}{n}\sum_{i=1}^n X_{ji}X_{li} \xrightarrow{p} E(X_{ji}X_{li}).$$
This is true for all the elements of $n^{-1}X'X$, so
$$n^{-1}X'X = \frac{1}{n}\sum_{i=1}^n X_i X_i' \xrightarrow{p} E(X_i X_i') = \Sigma_{XX}.$$
Applying the same reasoning and using Assumption (ii) that $(X_i, W_i, Y_i)$ are i.i.d. and Assumption (iii) that $(X_i, W_i, u_i)$ have finite fourth moments, we have
$$n^{-1}W'W = \frac{1}{n}\sum_{i=1}^n W_i W_i' \xrightarrow{p} E(W_i W_i') = \Sigma_{WW},$$
$$n^{-1}X'W = \frac{1}{n}\sum_{i=1}^n X_i W_i' \xrightarrow{p} E(X_i W_i') = \Sigma_{XW},$$
and
$$n^{-1}W'X = \frac{1}{n}\sum_{i=1}^n W_i X_i' \xrightarrow{p} E(W_i X_i') = \Sigma_{WX}.$$
From Assumption (iii) we know $\Sigma_{XX}$, $\Sigma_{WW}$, $\Sigma_{XW}$, and $\Sigma_{WX}$ are all finite and non-zero, so Slutsky's theorem implies
$$n^{-1}X'M_W X = n^{-1}X'X - (n^{-1}X'W)(n^{-1}W'W)^{-1}(n^{-1}W'X) \xrightarrow{p} \Sigma_{XX} - \Sigma_{XW}\Sigma_{WW}^{-1}\Sigma_{WX}.$$
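The algebra in (a) and (b) is the partitioned-regression (Frisch-Waugh) identity, which is easy to confirm numerically on arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
X = rng.normal(size=(n, 2))
W = rng.normal(size=(n, 2))
Y = rng.normal(size=n)

P_W = W @ np.linalg.solve(W.T @ W, W.T)
M_W = np.eye(n) - P_W

# beta_hat = (X' M_W X)^{-1} X' M_W Y
beta_fwl = np.linalg.solve(X.T @ M_W @ X, X.T @ M_W @ Y)

# Coefficients on X from the full regression of Y on [X W]
full = np.linalg.lstsq(np.column_stack([X, W]), Y, rcond=None)[0]

print(np.allclose(beta_fwl, full[:2]))  # the two estimates coincide
print(np.allclose(M_W @ W, 0.0))        # the orthogonality M_W W = 0
```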
(c), (d) Analogous calculations show that $n^{-1}X'M_W U$ converges in probability to the zero vector. In that derivation, the second equality uses Assumption (ii) that $(X_i, W_i, Y_i)$ are i.i.d., the third equality applies the conditional mean independence assumption (i), and the final simplification holds because $M_W W = 0$.
(e) $n^{-1}X'M_W X$ converges in probability to a finite invertible matrix, and $n^{-1}X'M_W U$ converges in probability to a zero vector. Applying Slutsky's theorem,
$$\hat{\beta} - \beta = (n^{-1}X'M_W X)^{-1}(n^{-1}X'M_W U) \xrightarrow{p} 0.$$
This implies $\hat{\beta} \xrightarrow{p} \beta$.
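A consistency sketch: in the hypothetical design below, $u$ is correlated with $X$ only through $W$ (so $E(u \mid X, W) = E(u \mid W)$), and $\hat{\beta}$ approaches $\beta$ as $n$ grows. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
beta, gamma = 2.0, -1.0

for n in (100, 10_000, 1_000_000):
    W = rng.normal(size=n)
    X = 0.8 * W + rng.normal(size=n)  # X correlated with W
    u = 0.5 * W + rng.normal(size=n)  # E(u|X, W) = E(u|W) = 0.5 W
    Y = beta * X + gamma * W + u

    Z = np.column_stack([X, W])
    beta_hat = np.linalg.lstsq(Z, Y, rcond=None)[0][0]
    print(n, beta_hat)  # approaches beta = 2.0 as n grows
```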
18.11. (a) Using the hint,
$$C = [Q_1 \;\; Q_2]\begin{pmatrix}I_r & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}Q_1'\\ Q_2'\end{pmatrix} = Q_1 Q_1',$$
where $QQ' = I$. The result follows with $A = Q_1'$.

(b) $W = AV \sim N(A\mathbf{0}, A I_n A')$, and the result follows immediately.

(c) $V'CV = V'A'AV = (AV)'(AV) = W'W$, and the result follows from (b).
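A numerical sketch of the whole argument: construct an idempotent $C$ of rank $r$, recover $A = Q_1'$ from its unit eigenvectors, and check that $V'CV = W'W$ with sample mean near $r$ (the mean of a $\chi^2_r$ variable). The particular $C$ below is an arbitrary projection.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6

B = rng.normal(size=(n, 3))
C = B @ np.linalg.solve(B.T @ B, B.T)  # idempotent projection matrix
r = round(np.trace(C))                 # rank of a projection equals its trace

# Spectral decomposition: Q1 holds the eigenvectors with eigenvalue 1
eigvals, Q = np.linalg.eigh(C)
Q1 = Q[:, eigvals > 0.5]
A = Q1.T
print(np.allclose(C, Q1 @ Q1.T))       # C = Q1 Q1' = A'A

V = rng.normal(size=(100_000, n))      # rows are draws of V ~ N(0, I_n)
quad = np.einsum('ij,jk,ik->i', V, C, V)      # V'CV, draw by draw
W = V @ A.T                                   # W = AV ~ N(0, I_r)
print(np.allclose(quad, (W**2).sum(axis=1)))  # V'CV = W'W
print(quad.mean())                            # near r = 3, the chi^2_r mean
```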
18.13. (b) Solving the first-order conditions and imposing the restriction $R\tilde{\beta} = r$ gives the multiplier $[R(X'X)^{-1}R']^{-1}(R\hat{\beta} - r)$, so that
$$\tilde{\beta} = \hat{\beta} - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(R\hat{\beta} - r).$$
Substituting this into (***) yields the result.

(c) Using the result in (b),
$$Y - X\tilde{\beta} = (Y - X\hat{\beta}) + X(X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(R\hat{\beta} - r),$$
so that, because $X'(Y - X\hat{\beta}) = 0$, the cross terms vanish and
$$(Y - X\tilde{\beta})'(Y - X\tilde{\beta}) = (Y - X\hat{\beta})'(Y - X\hat{\beta}) + (R\hat{\beta} - r)'[R(X'X)^{-1}R']^{-1}(R\hat{\beta} - r).$$
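Both parts can be checked numerically: the formula for $\tilde{\beta}$ satisfies the restriction exactly, and the difference in sums of squared residuals matches the quadratic form above. The data and the restriction (here the first two coefficients summing to one) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 200, 3
X = rng.normal(size=(n, k))
Y = rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y  # unrestricted OLS

R = np.array([[1.0, 1.0, 0.0]])  # restriction: beta_1 + beta_2 = 1
r = np.array([1.0])

mid = np.linalg.inv(R @ XtX_inv @ R.T)
beta_tilde = beta_hat - XtX_inv @ R.T @ mid @ (R @ beta_hat - r)

print(R @ beta_tilde)  # equals r: the restriction holds exactly

ssr_u = ((Y - X @ beta_hat) ** 2).sum()
ssr_r = ((Y - X @ beta_tilde) ** 2).sum()
gap = (R @ beta_hat - r) @ mid @ (R @ beta_hat - r)
print(np.isclose(ssr_r - ssr_u, gap))  # the SSR identity from part (c)
```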
18.15. (c) The summands are i.i.d. across $i$ with mean $Q_{\tilde{X}}$ and finite variance (because $X_{it}$ has finite fourth moments). The result then follows from the law of large numbers.

(d) This follows from the central limit theorem.

(e) This follows from Slutsky's theorem.

(f) The $\eta_i^2$ are i.i.d., and the result follows from the law of large numbers.

(g) The result follows from (a) and (b) together with the law of large numbers; both (a) and (b) are averages of i.i.d. random variables. Completing the proof requires verifying that the remaining terms converge in probability to zero.
18.17. The results follow from the hints and from matrix multiplication and addition.