Gauss-Markov Theorem
Theoretical: $Y_i = \beta_1 + \beta_2 X_i + u_i$
Estimated: $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$
The population parameters $\beta_1$ and $\beta_2$ are unknown population constants.
The formulas that produce the sample estimates of $\beta_1$ and $\beta_2$ are called the estimators of $\beta_1$ and $\beta_2$.
$$\hat{\beta}_2 = \frac{n\sum X_i Y_i - \sum X_i \sum Y_i}{n\sum X_i^2 - (\sum X_i)^2} = \frac{\sum x_i y_i}{\sum x_i^2}$$
$$\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X}$$
where $\bar{Y} = \sum Y_i / n$ and $\bar{X} = \sum X_i / n$, and lowercase letters denote deviations from the means: $x_i = X_i - \bar{X}$, $y_i = Y_i - \bar{Y}$.
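As a minimal sketch, the two formulas can be computed directly in deviation form; the data below are made up purely for illustration:

```python
# OLS in deviation form: beta2_hat = sum(x_i*y_i)/sum(x_i^2),
# beta1_hat = Ybar - beta2_hat*Xbar. Data are hypothetical.

def ols(X, Y):
    n = len(X)
    xbar = sum(X) / n
    ybar = sum(Y) / n
    x = [xi - xbar for xi in X]          # deviations from the mean of X
    y = [yi - ybar for yi in Y]          # deviations from the mean of Y
    b2 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    b1 = ybar - b2 * xbar                # intercept from the two means
    return b1, b2

X = [1, 2, 3, 4, 5]
Y = [3.1, 4.9, 7.2, 8.8, 11.0]
b1, b2 = ols(X, Y)
print(b1, b2)
```

Note that the intercept needs nothing beyond the slope and the two sample means, which is exactly what $\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2\bar{X}$ says.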
Substitute $Y_i = \beta_1 + \beta_2 X_i + u_i$ into the $\hat{\beta}_2$ formula to get:
$$\hat{\beta}_2 = \frac{n\sum X_i(\beta_1 + \beta_2 X_i + u_i) - \sum X_i \sum(\beta_1 + \beta_2 X_i + u_i)}{n\sum X_i^2 - (\sum X_i)^2}$$
$$= \frac{(n\beta_1\sum X_i + n\beta_2\sum X_i^2 + n\sum X_i u_i) - (n\beta_1\sum X_i + \beta_2(\sum X_i)^2 + \sum X_i \sum u_i)}{n\sum X_i^2 - (\sum X_i)^2}$$
$$= \frac{\beta_2\left[n\sum X_i^2 - (\sum X_i)^2\right] + \left(n\sum X_i u_i - \sum X_i \sum u_i\right)}{n\sum X_i^2 - (\sum X_i)^2}$$
$$\hat{\beta}_2 = \beta_2 + \frac{n\sum X_i u_i - \sum X_i \sum u_i}{n\sum X_i^2 - (\sum X_i)^2}$$
Taking expectations, and using $E(u_i) = 0$ with nonstochastic $X_i$:
$$E(\hat{\beta}_2) = \beta_2 + \frac{n\sum X_i E(u_i) - \sum X_i \sum E(u_i)}{n\sum X_i^2 - (\sum X_i)^2} = \beta_2$$
An Unbiased Estimator
Unbiasedness: the mean of the distribution of sample estimates is equal to the parameter to be estimated. Since $E(u_i) = 0$, the derivation above gives $E(\hat{\beta}_2) = \beta_2$, and the same argument shows $E(\hat{\beta}_1) = \beta_1$: OLS is unbiased.
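Unbiasedness can be checked by simulation: draw many samples from the model, estimate the slope each time, and average. The true parameter values and the design below are hypothetical, chosen only for the sketch:

```python
# Monte Carlo check of unbiasedness: simulate Y_i = beta1 + beta2*X_i + u_i
# with E(u_i) = 0 many times, and compare the average slope estimate
# with the true beta2. All parameter values here are hypothetical.
import random

random.seed(0)
beta1, beta2 = 1.0, 2.0
X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
xbar = sum(X) / len(X)
x = [xi - xbar for xi in X]
sxx = sum(xi * xi for xi in x)

estimates = []
for _ in range(20000):
    u = [random.gauss(0, 1) for _ in X]              # errors with E(u_i) = 0
    Y = [beta1 + beta2 * xi + ui for xi, ui in zip(X, u)]
    ybar = sum(Y) / len(Y)
    # sum x_i*(Y_i - Ybar) equals sum x_i*y_i because sum x_i = 0
    b2 = sum(xi * (yi - ybar) for xi, yi in zip(x, Y)) / sxx
    estimates.append(b2)

mean_b2 = sum(estimates) / len(estimates)
print(mean_b2)    # close to the true beta2 = 2
```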
Variance of the Estimators
The spread of the sampling distributions, $\mathrm{var}(\hat{\beta}_i)$, depends on the error variance $\sigma_u^2$.
Worked example ($n = 20$; the first six observations are shown), where $y_i = Y_i - \bar{Y}$, $x_i = X_i - \bar{X}$, $x_i y_i = (X_i - \bar{X})(Y_i - \bar{Y})$, and $x_i^2 = (X_i - \bar{X})^2$:

          Yi      Xi       yi       xi     xi^2     xiyi    Xi^2
         140       5   -29.40    -5.35    28.62   157.29      25
         157       9   -12.40    -1.35     1.82    16.74      81
         205      13    35.60     2.65     7.02    94.34     169
         162      10    -7.40    -0.35     0.12     2.59     100
         174      11     4.60     0.65     0.42     2.99     121
         165       9    -4.40    -1.35     1.82     5.94      81
         ...
    Sum 3388     207                      92.55   590.20    2235
    Mean 169.4  10.35

$$\hat{\beta}_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \frac{\sum(X_i - \bar{X})(Y_i - \bar{Y})}{\sum(X_i - \bar{X})^2} = \frac{590.2}{92.55} = 6.38$$
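The slope above, and the intercept that follows from $\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2\bar{X}$, can be reproduced from the table's summary sums alone:

```python
# Reproduce the example's estimates from the table's summary sums.
n = 20
sum_Y, sum_X = 3388, 207
ybar, xbar = sum_Y / n, sum_X / n    # 169.4 and 10.35
sum_xy = 590.2                       # sum of (Xi - Xbar)(Yi - Ybar)
sum_x2 = 92.55                       # sum of (Xi - Xbar)^2

b2 = sum_xy / sum_x2                 # slope
b1 = ybar - b2 * xbar                # intercept from the sample means
print(round(b2, 2), round(b1, 2))
```

The intercept value is not reported on the slide; it simply follows from the slope and the two means via the formula above.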
Variance of $\hat{\beta}_2$
Given that both $Y_i$ and $u_i$ have variance $\sigma^2$, the variance of the estimator $\hat{\beta}_2$ is:
$$\mathrm{var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum(X_i - \bar{X})^2} = \frac{\sigma^2}{\sum x_i^2}$$
With $\hat{\sigma} = 8.50$:
$$se(\hat{\beta}_2) = \sqrt{\frac{(8.50)^2}{92.55}} = \sqrt{0.7807} = 0.8835$$
$\hat{\beta}_2$ is a function of the $Y_i$ values, but $\mathrm{var}(\hat{\beta}_2)$ does not involve $Y_i$ directly.
Variance of $\hat{\beta}_1$
Given $\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2\bar{X}$:
$$\mathrm{var}(\hat{\beta}_1) = \sigma^2\,\frac{\sum X_i^2}{n\sum x_i^2}$$
$$se(\hat{\beta}_1) = \sqrt{\frac{(8.50)^2 \times 2235}{20 \times 92.55}} = \sqrt{87.238} = 9.34$$
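Both standard errors follow mechanically from the same four numbers; as a quick check:

```python
# Standard errors of the OLS estimators, using the example's numbers:
# sigma_hat = 8.50, n = 20, sum Xi^2 = 2235, sum xi^2 = 92.55.
import math

sigma2_hat = 8.50 ** 2
n = 20
sum_X2 = 2235          # sum of Xi^2 (raw values)
sum_x2 = 92.55         # sum of (Xi - Xbar)^2 (deviations)

se_b2 = math.sqrt(sigma2_hat / sum_x2)
se_b1 = math.sqrt(sigma2_hat * sum_X2 / (n * sum_x2))
print(se_b2, se_b1)
```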
Covariance of $\hat{\beta}_1$ and $\hat{\beta}_2$
$$\mathrm{cov}(\hat{\beta}_1, \hat{\beta}_2) = -\bar{X}\,\frac{\sigma^2}{\sum x_i^2} = -\bar{X}\,\mathrm{var}(\hat{\beta}_2)$$
The residuals are $\hat{u}_i = Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i$, and
$$\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}\hat{u}_i^2}{n - 2}$$
is an unbiased estimator of $\sigma^2$.
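A short sketch of the residual computation, on hypothetical data; one useful sanity check is that OLS residuals always sum to zero when the model has an intercept:

```python
# Residuals and the error-variance estimator sum(u_hat^2)/(n - 2).
# The data below are hypothetical, purely for illustration.

def ols(X, Y):
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(X, Y)) \
         / sum((xi - xbar) ** 2 for xi in X)
    return ybar - b2 * xbar, b2

X = [1, 2, 3, 4, 5, 6]
Y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
b1, b2 = ols(X, Y)
resid = [yi - b1 - b2 * xi for xi, yi in zip(X, Y)]    # u_hat_i
sigma2_hat = sum(e * e for e in resid) / (len(X) - 2)  # n - 2 in denominator
print(sum(resid), sigma2_hat)   # residuals sum to (numerically) zero
```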
Gauss-Markov Theorem
Given the assumptions of the classical linear regression model, the ordinary least squares (OLS) estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are the best linear unbiased estimators (BLUE) of $\beta_1$ and $\beta_2$. This means that $\hat{\beta}_1$ and $\hat{\beta}_2$ have the smallest variance of all linear unbiased estimators of $\beta_1$ and $\beta_2$.
Note: the Gauss-Markov theorem does not apply to non-linear estimators.
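The "smallest variance" claim can be illustrated by simulation. The sketch below, with hypothetical parameter values, compares the OLS slope against another linear unbiased estimator of $\beta_2$, the endpoint slope $(Y_n - Y_1)/(X_n - X_1)$; both are unbiased, but OLS should show the smaller sampling variance:

```python
# Gauss-Markov illustrated: OLS slope vs. the "endpoint" slope
# (Y_n - Y_1)/(X_n - X_1), another linear unbiased estimator of beta2.
# True parameter values are hypothetical.
import random

random.seed(1)
beta1, beta2 = 1.0, 2.0
X = list(range(1, 11))
xbar = sum(X) / len(X)
x = [xi - xbar for xi in X]
sxx = sum(xi * xi for xi in x)

ols_est, endpoint_est = [], []
for _ in range(10000):
    Y = [beta1 + beta2 * xi + random.gauss(0, 1) for xi in X]
    ybar = sum(Y) / len(Y)
    ols_est.append(sum(xi * (yi - ybar) for xi, yi in zip(x, Y)) / sxx)
    endpoint_est.append((Y[-1] - Y[0]) / (X[-1] - X[0]))

def var(v):
    m = sum(v) / len(v)
    return sum((e - m) ** 2 for e in v) / len(v)

print(var(ols_est), var(endpoint_est))   # OLS variance is the smaller one
```

Both estimators average close to the true $\beta_2$, so the comparison is purely about variance, which is exactly what the theorem ranks.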
[Figure: three sampling distributions of $\hat{\beta}_2$ against the true value of $\beta_2$: if $E(\hat{\beta}_2) < \beta_2$ the estimator is biased and underestimates; if $E(\hat{\beta}_2) = \beta_2$ it is unbiased; if $E(\hat{\beta}_2) > \beta_2$ it is biased and overestimates.]
Probability Distribution of Least Squares Estimators
$$\hat{\beta}_1 \sim N\!\left(\beta_1,\ \frac{\sigma_u^2 \sum X_i^2}{n\sum x_i^2}\right) \qquad \hat{\beta}_2 \sim N\!\left(\beta_2,\ \frac{\sigma_u^2}{\sum x_i^2}\right)$$
Efficiency
[Figure: two sampling distributions of estimators of $\beta_2$, both centered on the true value; estimator 1 has the smaller variance, so estimator 1 is more efficient than estimator 2.]
Consistency:
$\hat{\beta}_k$ is a consistent estimator of $\beta_k$ if, as the sample size gets larger, $\hat{\beta}_k$ becomes more accurate, i.e. its sampling distribution collapses onto the true value.
[Figure: sampling distributions of $\hat{\beta}_1$ for N = 5, N = 100, and N = 500; the distribution tightens around the true value as N increases.]
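The collapsing-distribution picture can be sketched numerically: estimate the sampling standard deviation of the OLS slope at several sample sizes and watch it shrink. Model and parameter values below are hypothetical:

```python
# Consistency sketch: the sampling std. dev. of beta2_hat shrinks
# as the sample size n grows. All parameter values are hypothetical.
import random

random.seed(2)
beta1, beta2 = 1.0, 2.0

def slope_sd(n, reps=2000):
    """Monte Carlo sampling std. dev. of the OLS slope at sample size n."""
    X = [i / n for i in range(1, n + 1)]     # fixed regressor grid on (0, 1]
    xbar = sum(X) / n
    x = [xi - xbar for xi in X]
    sxx = sum(xi * xi for xi in x)
    est = []
    for _ in range(reps):
        Y = [beta1 + beta2 * xi + random.gauss(0, 1) for xi in X]
        ybar = sum(Y) / n
        est.append(sum(xi * (yi - ybar) for xi, yi in zip(x, Y)) / sxx)
    m = sum(est) / reps
    return (sum((e - m) ** 2 for e in est) / reps) ** 0.5

sds = {n: slope_sd(n) for n in (5, 100, 500)}
for n in (5, 100, 500):
    print(n, sds[n])    # the standard deviation falls as n rises
```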
In summary:
$$E(\hat{\beta}_1) = \beta_1 \quad \text{and} \quad E(\hat{\beta}_2) = \beta_2$$
$$\mathrm{var}(\hat{\beta}_1) = \sigma^2\,\frac{\sum X_i^2}{n\sum x_i^2} \quad \text{and} \quad \mathrm{var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum x_i^2}$$
$$se(\hat{\beta}_k) = \sqrt{\mathrm{var}(\hat{\beta}_k)}$$
In the general model with $K$ independent variables (where $K$ excludes the constant term),
$$\hat{\sigma}_u^2 = \frac{\sum_{i=1}^{n}\hat{u}_i^2}{n - K - 1}$$
and $E(\hat{\sigma}_u^2) = \sigma_u^2$.
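As a small sketch of the degrees-of-freedom rule, a helper (hypothetical, not from the slides) that divides by $n - K - 1$; with $K = 1$ it reproduces the $n - 2$ divisor of the simple regression case:

```python
# Error-variance estimator with n - K - 1 degrees of freedom, where K
# counts the independent variables excluding the constant term.
# The function and sample residuals are hypothetical illustrations.

def sigma2_hat(residuals, K):
    """Unbiased error-variance estimate: sum(u_hat^2) / (n - K - 1)."""
    n = len(residuals)
    df = n - K - 1                 # K slopes plus 1 constant are estimated
    return sum(e * e for e in residuals) / df

# Simple regression (K = 1) on n = 6 residuals divides by 6 - 2 = 4:
resid = [0.5, -0.3, -0.4, 0.2, 0.0, 0.0]
print(sigma2_hat(resid, K=1))
```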