
Cornell University

Department of Economics

Econ 620
Instructor: Prof. Kiefer

Solutions to Problem Set #1

1) We will use the following fact: $\log_e w = b \iff e^b = w$. Taking the log to base 10 of the last expression, we get that $b \log_{10} e = \log_{10} w$. Denote by $\ln$ the log to base $e$, and by $\log$ the log to base 10. Therefore, $\log X = (\log e) \ln X$ for any variable $X$. Let $\overline{\log x} = \frac{1}{n}\sum_{i=1}^{n} \log x_i$ and $\overline{\log y} = \frac{1}{n}\sum_{i=1}^{n} \log y_i$, and similarly $\overline{\ln x} = \frac{1}{n}\sum_{i=1}^{n} \ln x_i$ and $\overline{\ln y} = \frac{1}{n}\sum_{i=1}^{n} \ln y_i$.
Hence,

$$\hat{\beta}_{10} = \frac{\sum_{i=1}^{n} (\log x_i - \overline{\log x}) \log y_i}{\sum_{i=1}^{n} (\log x_i - \overline{\log x})^2} = \frac{\sum_{i=1}^{n} (\log e)^2 (\ln x_i - \overline{\ln x}) \ln y_i}{\sum_{i=1}^{n} (\log e)^2 (\ln x_i - \overline{\ln x})^2} = \frac{\sum_{i=1}^{n} (\ln x_i - \overline{\ln x}) \ln y_i}{\sum_{i=1}^{n} (\ln x_i - \overline{\ln x})^2} = \hat{\beta}_e.$$

However, it is not true that $\hat{\alpha}_{10} = \hat{\alpha}_e$. To see this, note that

$$\hat{\alpha}_{10} = \overline{\log y} - \hat{\beta}_{10}\,\overline{\log x} = \overline{\log y} - \hat{\beta}_e\,\overline{\log x} = (\log e)\,\overline{\ln y} - \hat{\beta}_e (\log e)\,\overline{\ln x} = (\log e)\left(\overline{\ln y} - \hat{\beta}_e\,\overline{\ln x}\right) = (\log e)\,\hat{\alpha}_e.$$
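As a quick numerical check of these two claims (an illustrative sketch; the data-generating process and helper below are made up, not part of the problem set), one can run the same regression of $\log y$ on $\log x$ in both bases:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
y = np.exp(0.5 + 2.0 * np.log(x) + rng.normal(0, 0.1, n))  # toy log-log relationship

def ols(u, v):
    """Regress v on u with an intercept; return (intercept, slope)."""
    slope = np.sum((u - u.mean()) * (v - v.mean())) / np.sum((u - u.mean()) ** 2)
    return v.mean() - slope * u.mean(), slope

a10, b10 = ols(np.log10(x), np.log10(y))
ae, be = ols(np.log(x), np.log(y))
print(np.isclose(b10, be))                   # True: the slope is the same in either base
print(np.isclose(a10, np.log10(np.e) * ae))  # True: the intercept scales by log10(e)
```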


Also, if the model is $\log Y_t = \alpha_{10} + \beta_{10} t + \varepsilon_t$, then $\alpha$ and $\beta$ will be different if we take the log to base 10 or to base $e$, simply because $\log Y_t = (\log e) \ln Y_t$, and hence $\log Y_t = \alpha_{10} + \beta_{10} t + \varepsilon_t$ is equivalent to $\ln Y_t = \frac{\alpha_{10}}{\log e} + \frac{\beta_{10}}{\log e}\, t + \frac{\varepsilon_t}{\log e}$. To see this, let $\bar{t} = \frac{1}{n}\sum_{t=1}^{n} t$. Then

$$\hat{\beta}_{10} = \frac{\sum_{t=1}^{n} (t - \bar{t}) \log y_t}{\sum_{t=1}^{n} (t - \bar{t})^2} = \frac{(\log e)\sum_{t=1}^{n} (t - \bar{t}) \ln y_t}{\sum_{t=1}^{n} (t - \bar{t})^2} = (\log e)\,\hat{\beta}_e$$

and

$$\hat{\alpha}_{10} = \overline{\log y} - \hat{\beta}_{10}\,\bar{t} = (\log e)\,\overline{\ln y} - (\log e)\,\hat{\beta}_e\,\bar{t} = (\log e)\left(\overline{\ln y} - \hat{\beta}_e\,\bar{t}\right) = (\log e)\,\hat{\alpha}_e.$$
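The same kind of check works for the time-trend model, where only $Y_t$ is logged and both coefficients scale by $\log_{10} e$ (again a sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1, 101)
Y = np.exp(0.3 + 0.02 * t + rng.normal(0, 0.05, t.size))  # toy exponential time trend

def ols(u, v):
    """Regress v on u with an intercept; return (intercept, slope)."""
    slope = np.sum((u - u.mean()) * (v - v.mean())) / np.sum((u - u.mean()) ** 2)
    return v.mean() - slope * u.mean(), slope

a10, b10 = ols(t, np.log10(Y))
ae, be = ols(t, np.log(Y))
print(np.isclose(b10, np.log10(np.e) * be))  # True: the slope scales by log10(e)
print(np.isclose(a10, np.log10(np.e) * ae))  # True: the intercept scales by log10(e)
```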

2)

The statement is false. Here is a counterexample: let the joint density of $(X, Y)$ be

$$g(x, y) = 2\,z\,f(x)f(y),$$

where $f$ is the univariate standard normal pdf and $z$ is a function of $x$ and $y$ taking the value 1 if $xy > 0$ and the value 0 if $xy \le 0$. Clearly, the support of $(X, Y)$ consists of the northeast and southwest quadrants (i.e., both $x$ and $y$ are positive or both $x$ and $y$ are negative), so $(X, Y)$ is not bivariate normal (since the support of a bivariate normal is $\mathbb{R}^2$).
The marginal density (pdf) of $X$ is $\int_{-\infty}^{+\infty} g(x, y)\,dy = \int_{-\infty}^{+\infty} 2\,z\,f(x)f(y)\,dy = 2 f(x) \int_{-\infty}^{+\infty} z f(y)\,dy$.
Now, if $x > 0$, then $\int_{-\infty}^{+\infty} z f(y)\,dy = \int_{0}^{+\infty} f(y)\,dy = \frac{1}{2}$.
And if $x \le 0$, then $\int_{-\infty}^{+\infty} z f(y)\,dy = \int_{-\infty}^{0} f(y)\,dy = \frac{1}{2}$.
Therefore, the marginal pdf of $X$ is $f(x)$. Similarly for $Y$.

This exercise shows that saying $(X, Y)$ is bivariate normal is a stronger statement than saying that the univariate distributions of $X$ and of $Y$ are each normal.
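A short simulation (an illustrative sketch, not part of the solution; it relies on numpy and scipy) confirms that draws from this joint density have standard normal marginals even though the joint distribution is clearly not bivariate normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Draw from g(x, y) = 2 f(x) f(y) 1{xy > 0}:
# X is standard normal; given X, Y is a half-normal carrying the sign of X.
x = rng.standard_normal(n)
y = np.sign(x) * np.abs(rng.standard_normal(n))

# Both marginals are consistent with N(0, 1) ...
print(stats.kstest(x, "norm").pvalue, stats.kstest(y, "norm").pvalue)
# ... yet all of the probability mass sits in the NE and SW quadrants:
print(np.mean(x * y > 0))  # 1.0
```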

3)
a) Yes. The model is linear, $E(\varepsilon_i) = 0$ for all $i$, $X$ is full rank (it has rank one), and $Var(\varepsilon_i) = 1$ for all $i$ (and the errors are uncorrelated since they are independent random variables).
b) The OLS estimator for $\beta$ is
$$\hat{\beta} = \frac{\sum_{i=1}^{2} x_i y_i}{\sum_{i=1}^{2} x_i^2} = \beta + \frac{\sum_{i=1}^{2} x_i \varepsilon_i}{\sum_{i=1}^{2} x_i^2} = 1 + \frac{\varepsilon_1 + 2\varepsilon_2}{5}.$$
This comes from minimizing the sum of squared residuals. Note that we do not demean $x$ and $y$ in the formula for $\hat{\beta}$, as we would if there were an intercept in the model.
So the exact distribution of $\hat{\beta}$ is given by its probability mass function (pmf), which is: $\frac{1}{4}$ if $\hat{\beta} = \frac{2}{5}, \frac{4}{5}, \frac{6}{5}$ or $\frac{8}{5}$, and 0 otherwise.
c) $\tilde{\beta} = \frac{\sum_i y_i}{\sum_i x_i} = \beta + \frac{\varepsilon_1 + \varepsilon_2}{3}$. It is unbiased, and its pmf is: $\frac{1}{2}$ if $\tilde{\beta} = 1$, $\frac{1}{4}$ if $\tilde{\beta} = \frac{1}{3}$ or $\frac{5}{3}$, and 0 otherwise.
d) $Var(\hat{\beta}) = \frac{1}{5}$ and $Var(\tilde{\beta}) = \frac{2}{9}$. So $Var(\tilde{\beta}) > Var(\hat{\beta})$.
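The pmfs and variances above can be verified by brute-force enumeration. The sketch below assumes the setup implied by the solution ($x_1 = 1$, $x_2 = 2$, $\beta = 1$, and $\varepsilon_1, \varepsilon_2$ independent, each equal to $\pm 1$ with probability $\frac{1}{2}$); the helper names are illustrative:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

x = [Fraction(1), Fraction(2)]
beta = Fraction(1)

pmf_hat, pmf_tilde = Counter(), Counter()
for e1, e2 in product([Fraction(1), Fraction(-1)], repeat=2):
    y = [beta * x[0] + e1, beta * x[1] + e2]
    b_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi**2 for xi in x)  # OLS through the origin
    b_tilde = sum(y) / sum(x)                                              # the ratio estimator
    pmf_hat[b_hat] += Fraction(1, 4)
    pmf_tilde[b_tilde] += Fraction(1, 4)

variance = lambda pmf: sum(p * (b - beta) ** 2 for b, p in pmf.items())  # both estimators are unbiased
print(dict(pmf_hat))    # mass 1/4 at each of 2/5, 4/5, 6/5, 8/5
print(dict(pmf_tilde))  # mass 1/2 at 1 and 1/4 at each of 1/3 and 5/3
print(variance(pmf_hat), variance(pmf_tilde))  # 1/5 and 2/9
```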

4)

(a) Recall that
$$\hat{\beta} = \frac{\sum_i (y_i - \bar{y})(x_i - \bar{x})}{\sum_i (x_i - \bar{x})^2}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}.$$
The information given in the question is not directly usable. However,
$$\sum_i (y_i - \bar{y})(x_i - \bar{x}) = \sum_i (x_i - \bar{x})\, y_i = \sum_i x_i y_i - \bar{x}\sum_i y_i = \sum_i x_i y_i - n\bar{x}\bar{y} = 4430 - \frac{220 \cdot 440}{22} = 30$$
$$\sum_i (x_i - \bar{x})^2 = \sum_i (x_i - \bar{x})(x_i - \bar{x}) = \sum_i (x_i - \bar{x})\, x_i = \sum_i x_i^2 - n\bar{x}^2 = 2260 - \frac{220^2}{22} = 60$$
Hence,
$$\hat{\beta} = \frac{30}{60} = 0.5, \qquad \hat{\alpha} = \frac{440}{22} - 0.5 \cdot \frac{220}{22} = 15.$$
(b) $R^2$ is defined as the ratio of the explained sum of squares (ESS) to the total sum of squares (TSS):
$$R^2 = \frac{\hat{\beta}^2 \sum_i (x_i - \bar{x})^2}{\sum_i (y_i - \bar{y})^2} = \frac{\hat{\beta}^2 \left(\sum_i x_i^2 - n\bar{x}^2\right)}{\sum_i y_i^2 - n\bar{y}^2} = 0.5^2 \cdot \frac{60}{8900 - 22 \cdot 20^2} = 0.15$$
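These computations from the summary statistics are easy to reproduce; a minimal sketch (variable names chosen here for illustration):

```python
n = 22
sum_x, sum_y = 220, 440
sum_xy, sum_x2, sum_y2 = 4430, 2260, 8900

xbar, ybar = sum_x / n, sum_y / n
sxy = sum_xy - n * xbar * ybar      # sum of (x - xbar)(y - ybar) = 30
sxx = sum_x2 - n * xbar**2          # sum of (x - xbar)^2 = 60
syy = sum_y2 - n * ybar**2          # total sum of squares = 100

beta_hat = sxy / sxx                # 0.5
alpha_hat = ybar - beta_hat * xbar  # 15.0
r2 = beta_hat**2 * sxx / syy        # 0.15
print(beta_hat, alpha_hat, r2)
```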

(c) By the normality assumption, we know that
$$\hat{\beta} \sim N\!\left(\beta,\ \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}\right).$$
Moreover,
$$\frac{(n-2)\,s^2}{\sigma^2} \sim \chi^2(n-2),$$
where $s^2 = \frac{1}{n-2}\sum_i e_i^2$. We can also show that $\hat{\beta}$ and $s^2$ are independent of each other. Then,
$$\frac{\left(\hat{\beta} - \beta\right)\Big/\sqrt{\sigma^2 / \sum_i (x_i - \bar{x})^2}}{\sqrt{\dfrac{(n-2)\,s^2}{\sigma^2}\Big/(n-2)}} = \frac{\hat{\beta} - \beta}{\hat{\sigma}_{\hat{\beta}}} \sim t(n-2),$$
where $\hat{\sigma}_{\hat{\beta}}^2 = \dfrac{s^2}{\sum_i (x_i - \bar{x})^2}$. We want to reject the null hypothesis if
$$|T| = \frac{\left|\hat{\beta} - \beta\right|}{\hat{\sigma}_{\hat{\beta}}} > t_{0.975}(20)$$
under the null hypothesis. On the other hand,
$$s^2 = \frac{1}{n-2}\sum_i e_i^2 = \frac{1}{n-2}\sum_i \left(y_i - \hat{\alpha} - \hat{\beta} x_i\right)^2 = \frac{1}{n-2}\left[\sum_i y_i^2 + n\hat{\alpha}^2 + \hat{\beta}^2 \sum_i x_i^2 - 2\hat{\alpha}\sum_i y_i + 2\hat{\alpha}\hat{\beta}\sum_i x_i - 2\hat{\beta}\sum_i x_i y_i\right]$$
$$= \frac{1}{20}\left[8900 + 22 \cdot 15^2 + 0.5^2 \cdot 2260 - 2 \cdot 15 \cdot 440 + 2 \cdot 15 \cdot 0.5 \cdot 220 - 2 \cdot 0.5 \cdot 4430\right] = 4.25$$
Hence, the test statistic is given by
$$T = \frac{0.5 - 0}{\sqrt{4.25/60}} = 1.8787.$$
Since $t_{0.975}(20) = 2.086$, we do not reject the null hypothesis.
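As a purely numerical check of part (c) (a sketch; scipy is used here only to fetch the critical value and is not part of the original solution):

```python
from scipy import stats

n = 22
s2 = (8900 + n * 15**2 + 0.5**2 * 2260
      - 2 * 15 * 440 + 2 * 15 * 0.5 * 220 - 2 * 0.5 * 4430) / (n - 2)  # 4.25
se_beta = (s2 / 60) ** 0.5               # estimated standard error of beta_hat
T = (0.5 - 0) / se_beta                  # about 1.8787
print(s2, T, stats.t.ppf(0.975, n - 2))  # 4.25, 1.8787..., 2.0859...
```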

(d) The distribution of $\hat{\alpha}$ is given by
$$\hat{\alpha} \sim N\!\left(\alpha,\ \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2}\right)\right).$$
Therefore, $\hat{\alpha} - \hat{\beta}$ is distributed as
$$\hat{\alpha} - \hat{\beta} \sim N\!\left(\theta,\ \sigma_\theta^2\right),$$
where $\theta = \alpha - \beta$. By the Gauss-Markov theorem, $\hat{\alpha} - \hat{\beta}$ is the BLUE of $\alpha - \beta$. The variance of $\hat{\alpha} - \hat{\beta}$ is given by (using $Cov(\hat{\alpha}, \hat{\beta}) = -\sigma^2 \bar{x}/\sum_i (x_i - \bar{x})^2$)
$$\sigma_\theta^2 = Var\!\left(\hat{\alpha} - \hat{\beta}\right) = Var(\hat{\alpha}) + Var(\hat{\beta}) - 2\,Cov\!\left(\hat{\alpha}, \hat{\beta}\right) = \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2}\right) + \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2} + \frac{2\sigma^2 \bar{x}}{\sum_i (x_i - \bar{x})^2}$$
$$= \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2 + 1 + 2\bar{x}}{\sum_i (x_i - \bar{x})^2}\right).$$
We can estimate the variance of $\hat{\alpha} - \hat{\beta}$ as
$$\hat{\sigma}_\theta^2 = s^2\left(\frac{1}{n} + \frac{\bar{x}^2 + 1 + 2\bar{x}}{\sum_i (x_i - \bar{x})^2}\right),$$
where $s^2 = \frac{1}{n-2}\sum_i e_i^2$.
What do we know? We know that
$$\frac{\hat{\theta} - \theta}{\sigma_\theta} \sim N(0, 1)$$
and
$$\frac{(n-2)\,s^2}{\sigma^2} \sim \chi^2(n-2),$$
and $\hat{\theta}$ and $s^2$ are independent. Then,
$$\frac{\dfrac{\hat{\theta} - \theta}{\sigma_\theta}}{\sqrt{\dfrac{(n-2)\,s^2}{\sigma^2}\Big/(n-2)}} = \frac{\hat{\theta} - \theta}{\hat{\sigma}_\theta} \sim t(n-2).$$
We want to reject the null hypothesis if
$$|T| = \frac{\left|\hat{\theta} - \theta\right|}{\hat{\sigma}_\theta} > t_{0.975}(n-2)$$
under the null hypothesis. Note that $\hat{\theta} = \hat{\alpha} - \hat{\beta} = 15 - 0.5 = 14.5$ and $\theta = 10$ under the null. Moreover,
$$\hat{\sigma}_\theta^2 = s^2\left(\frac{1}{n} + \frac{\bar{x}^2 + 1 + 2\bar{x}}{\sum_i (x_i - \bar{x})^2}\right) = 4.25\left(\frac{1}{22} + \frac{10^2 + 1 + 2 \cdot 10}{60}\right) = 8.764.$$
The test statistic is now
$$T = \frac{14.5 - 10}{\sqrt{8.764}} = 1.5201.$$
Since $T = 1.5201 < t_{0.975}(20) = 2.086$, again we do not reject the null hypothesis.
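The arithmetic of part (d) can be checked the same way (again a sketch; scipy is only used for the critical value):

```python
from scipy import stats

n, xbar, sxx, s2 = 22, 10, 60, 4.25
theta_hat = 15 - 0.5                                       # alpha_hat - beta_hat
var_theta = s2 * (1 / n + (xbar**2 + 1 + 2 * xbar) / sxx)  # about 8.764
T = (theta_hat - 10) / var_theta**0.5                      # about 1.5201
print(var_theta, T, stats.t.ppf(0.975, n - 2))             # 8.764..., 1.520..., 2.0859...
```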

5)

You can write
$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x} = \frac{1}{n}\sum_i y_i - \bar{x}\,\frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} = \frac{1}{n}\sum_i y_i - \bar{x}\,\frac{\sum_i (x_i - \bar{x})\,y_i}{\sum_i (x_i - \bar{x})^2} = \sum_i \left(\frac{1}{n} - \bar{x}\,\frac{x_i - \bar{x}}{\sum_j (x_j - \bar{x})^2}\right) y_i = \sum_i m_i y_i,$$
where $m_i = \frac{1}{n} - \bar{x}\,\frac{x_i - \bar{x}}{\sum_j (x_j - \bar{x})^2} = \frac{1}{n} - \bar{x} w_i$ with $w_i = \frac{x_i - \bar{x}}{\sum_j (x_j - \bar{x})^2}$. Then, $Var(\hat{\alpha}) = \sigma^2 \sum_i m_i^2$. Now, consider an alternative linear estimator such that
$$\tilde{\alpha} = \sum_i h_i y_i = \sum_i h_i (\alpha + \beta x_i + \varepsilon_i) = \alpha \sum_i h_i + \beta \sum_i h_i x_i + \sum_i h_i \varepsilon_i.$$
Then,
$$E(\tilde{\alpha}) = \alpha \sum_i h_i + \beta \sum_i h_i x_i.$$
Therefore, unbiasedness requires that $\sum_i h_i = 1$ and $\sum_i h_i x_i = 0$. Introduce a new expression for $h_i$:
$$h_i = m_i + g_i.$$
We can always do this (note that $g_i$ may be negative). Now,
$$Var(\tilde{\alpha}) = E\!\left(\sum_i h_i \varepsilon_i\right)^2 = \sum_i h_i^2\, E\!\left(\varepsilon_i^2\right) = \sigma^2 \sum_i h_i^2 = \sigma^2 \sum_i (m_i + g_i)^2 = \sigma^2 \sum_i m_i^2 + \sigma^2 \sum_i g_i^2 + 2\sigma^2 \sum_i m_i g_i$$
$$= \sigma^2 \sum_i m_i^2 + \sigma^2 \sum_i g_i^2 \ \ge\ \sigma^2 \sum_i m_i^2 = Var(\hat{\alpha}),$$
since
$$\sum_i m_i g_i = \sum_i m_i (h_i - m_i) = \sum_i m_i h_i - \sum_i m_i^2 = \sum_i \left(\frac{1}{n} - \bar{x} w_i\right) h_i - \sum_i \left(\frac{1}{n} - \bar{x} w_i\right)^2$$
$$= \frac{1}{n}\sum_i h_i - \bar{x}\sum_i w_i h_i - \left(\frac{1}{n} - \frac{2\bar{x}}{n}\sum_i w_i + \bar{x}^2 \sum_i w_i^2\right)$$
$$= \frac{1}{n}\sum_i h_i - \bar{x}\,\frac{\sum_i x_i h_i}{\sum_i (x_i - \bar{x})^2} + \bar{x}^2\,\frac{\sum_i h_i}{\sum_i (x_i - \bar{x})^2} - \frac{1}{n} - \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2}$$
$$= \frac{1}{n} + \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2} - \frac{1}{n} - \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2} = 0.$$
The third row follows from $\sum_i w_i = 0$ and $\sum_i w_i^2 = 1/\sum_i (x_i - \bar{x})^2$. The last row follows from $\sum_i x_i h_i = 0$ and $\sum_i h_i = 1$.
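A short numerical illustration of the argument (a sketch with an arbitrary design and an arbitrary admissible $g_i$, not part of the solution) confirms that the cross term vanishes and that any unbiased linear alternative has weakly larger variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.uniform(0, 10, n)

xbar = x.mean()
w = (x - xbar) / np.sum((x - xbar) ** 2)
m = 1 / n - xbar * w                      # weights m_i of the OLS intercept estimator

# Build g orthogonal to both a constant and x, so that h = m + g stays unbiased:
Z = np.column_stack([np.ones(n), x])
g = rng.standard_normal(n)
g -= Z @ np.linalg.lstsq(Z, g, rcond=None)[0]  # residualize g on (1, x)
h = m + g

print(np.allclose([h.sum(), h @ x], [1, 0]))  # True: the unbiasedness constraints hold
print(np.isclose(m @ g, 0))                   # True: the cross term sum of m_i g_i is zero
print(np.sum(m**2) <= np.sum(h**2))           # True: Var(alpha_hat) <= Var(alpha_tilde), per sigma^2
```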
