y = β0 + β1x1 + β2x2 + . . . + βK xK + γq + v,
with E(v|x1, x2, . . . , xK, q) = 0, where v is the structural error. Note that
E(v|·) = 0 implies (i) E(v) = 0 and (ii) zero covariance between v and any
function of (x1, . . . , xK, q).
One way to handle q is to absorb it into the error term. Assume, WLOG,
E(q) = 0 (as there is an intercept in the model). Thus,
y = β0 + β1x1 + β2x2 + . . . + βKxK + u, with u = γq + v.
Taking the linear projection of q on the regressors,
q = δ0 + δ1x1 + . . . + δKxK + r,
and substituting into the structural equation gives the error form
y = (β0 + γδ0) + (β1 + γδ1)x1 + . . . + (βK + γδK)xK + (v + γr),
with E(r) = 0, cov(xj, r) = 0 ∀j = 1, . . . , K. The two representations are
equivalent: given the error form, the parameters of the structural form must
be as stated above; and given the structural form, the stated properties of r
in the error form always hold.
If γ > 0 and xK and q are positively correlated (so that δK > 0), the bias is
positive: regressing y on x1, . . . , xK by OLS estimates βK + γδK, i.e. OLS
tends to overestimate βK.
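This omitted-variable bias can be checked with a small Monte Carlo sketch (the data-generating process, seed, and all coefficient values below are hypothetical choices, not from the text):

```python
# Omitted-variable bias sketch: y = b0 + b1*x + gamma*q + v, with x and q
# positively correlated. Regressing y on x alone estimates b1 + gamma*delta1,
# where delta1 is the coefficient from the linear projection of q on x.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b0, b1, gamma = 1.0, 2.0, 0.5        # hypothetical structural parameters

q = rng.normal(size=n)               # unobserved heterogeneity, E(q) = 0
x = 0.8 * q + rng.normal(size=n)     # corr(x, q) > 0
v = rng.normal(size=n)               # structural error
y = b0 + b1 * x + gamma * q + v

X = np.column_stack([np.ones(n), x])
b1_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]   # OLS omitting q

delta1 = np.cov(q, x)[0, 1] / np.var(x)            # projection coeff. of q on x
print(b1_hat, b1 + gamma * delta1)  # b1_hat overestimates b1 = 2
```

With γ > 0 and cov(x, q) > 0 the estimate settles near β1 + γδ1, above the true β1.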
Note that,
log(scrap) = β0 + β1grant + γq + v,
Assume E(q|z) = θ0 + θ1z. If z does not have zero mean, θ0 need not be
zero: since we continue to assume E(q) = 0, iterated expectations give
E(q) = E[E(q|z)] = θ0 + θ1E(z) = 0, so θ0 = −θ1E(z), which vanishes when
E(z) = 0 (assumed for interpretational convenience). Then
Under these conditions the composite error term, u = v + e0, has the
following properties: E(u) = 0, cov(u, x) = 0. Hence OLS produces
consistent estimates of each βj . Further, usual OLS inferences are
asymptotically valid under appropriate homoskedasticity assumptions.
However, var(v + e0) = σ²_v + σ²_{e0} > σ²_v. Hence standard errors are larger
than in the absence of measurement error.
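A quick simulation illustrates this (a sketch under assumed values; the DGP and names are hypothetical): classical measurement error in y leaves the slope estimate consistent but inflates the residual variance from σ²_v to σ²_v + σ²_{e0}.

```python
# Measurement error in the dependent variable: y_obs = y_true + e0,
# with e0 independent of x. OLS stays consistent; residual variance grows.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
b0, b1 = 1.0, 2.0
x = rng.normal(size=n)
v = rng.normal(size=n)                 # structural error, var = 1
e0 = rng.normal(scale=2.0, size=n)     # measurement error in y, var = 4

y_true = b0 + b1 * x + v
y_obs = y_true + e0                    # only the mismeasured y is observed

X = np.column_stack([np.ones(n), x])
coef_clean = np.linalg.lstsq(X, y_true, rcond=None)[0]
coef_noisy = np.linalg.lstsq(X, y_obs, rcond=None)[0]
b1_clean, b1_noisy = coef_clean[1], coef_noisy[1]

res_clean = y_true - X @ coef_clean
res_noisy = y_obs - X @ coef_noisy
print(b1_noisy, res_clean.var(), res_noisy.var())  # slope still near 2
```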
This will be the case when firms do not report true scrap rates, in order to
enhance their chances of receiving a grant. But it implies
y = β0 + β1x1 + . . . + βK x∗K + v,
where y, x1, x2, . . . , xK−1 are observed but x∗K is not observed. We
make the following assumptions:
1. E(v) = 0, cov(v, x) = 0.
Writing xK = x∗K + eK, where eK is the measurement error, and substituting
x∗K = xK − eK,
y = β0 + β1x1 + . . . + βKxK + (v − βKeK).
Note: (i) E(v − βKeK) = E(v) − βKE(eK) = 0. (ii) v and eK are both
uncorrelated with x1, . . . , xK−1; hence u = v − βKeK is also uncorrelated
with x1, . . . , xK−1. (iii) v is uncorrelated with xK [assumption 2] and eK is
uncorrelated with xK [assumption 6a]; together these imply u is uncorrelated
with xK. Hence, (ii) and (iii) imply u is uncorrelated with all observed
regressors.
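The consistency claim in this case can be illustrated with a one-regressor sketch (the DGP below is hypothetical; its key feature is that the measurement error eK is generated independently of the observed xK, matching assumption 6a):

```python
# Measurement error uncorrelated with the OBSERVED regressor: generate the
# observed xK first, then set x*_K = xK - eK, so cov(xK, eK) = 0 exactly.
# Then u = v - bK*eK is uncorrelated with xK and OLS on xK is consistent.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
b0, bK = 1.0, 2.0
xK = rng.normal(size=n)                # observed measure
eK = rng.normal(scale=0.5, size=n)     # independent of xK by construction
x_star = xK - eK                       # unobserved true regressor
v = rng.normal(size=n)
y = b0 + bK * x_star + v               # structural model uses x*_K

X = np.column_stack([np.ones(n), xK])
bK_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(bK_hat)  # close to bK = 2: no attenuation in this case
```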
y = θ0 + θ1x1 + . . . + θK−1xK−1 + θKxK + ε.
L(xK |1, x1, . . . , xK−1) = L(x∗K |1, x1, . . . , xK−1) + L(eK |1, x1, . . . , xK−1)
= δ0 + δ1x1 + . . . + δK−1xK−1 + 0
= δ0 + δ1x1 + . . . + δK−1xK−1,
where L(eK|1, x1, . . . , xK−1) = 0 follows because, writing
L(eK|1, x1, . . . , xK−1) = γ0 + γ1x1 + . . . + γK−1xK−1 = γ0 + ẍγ̈ with
γ̈ = cov(ẍ, eK)/var(ẍ),
we have cov(xj, eK) = 0 ∀j = 1, . . . , K − 1 by assumption 5. Thus γ̈ = 0.
Also, γ0 = E(eK) − γ1E(x1) − . . . − γK−1E(xK−1) = 0 − 0 − . . . − 0 = 0.
Again,
rK = xK − L(xK|1, x1, . . . , xK−1)
= (δ0 + δ1x1 + . . . + δK−1xK−1 + r∗K + eK) − (δ0 + δ1x1 + . . . + δK−1xK−1)
= r∗K + eK,
Now, L(y|rK) = α1rK, where α1 = cov(y, rK)/var(rK), as E(rK) = 0. But,
cov(y, rK) = cov(β0 + β1x1 + . . . + βK−1xK−1 + βKxK + v − βKeK, rK)
= βKcov(xK, rK) − βKcov(eK, rK)
= βK(σ²_{r∗} + σ²_{eK}) − βKσ²_{eK} = βKσ²_{r∗},
since cov(xK, rK) = var(rK) = σ²_{r∗} + σ²_{eK} and cov(eK, rK) = σ²_{eK}.
Therefore,
α1 = βKσ²_{r∗} / (σ²_{r∗} + σ²_{eK}).
Note that 0 < σ²_{r∗}/(σ²_{r∗} + σ²_{eK}) < 1, implying |plim β̂K| < |βK|.
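This attenuation factor can be checked numerically (a minimal sketch; the DGP, seed, and all numeric values below are hypothetical):

```python
# Classical errors-in-variables with a control: xK = x*_K + eK, with eK
# independent of x*_K and of x1. Predicted plim of the OLS coefficient on xK:
#   bK * s2_rstar / (s2_rstar + s2_e),
# where s2_rstar is the variance of x*_K net of x1.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
b0, b1, bK = 1.0, 1.0, 2.0
s2_rstar, s2_e = 1.0, 1.0

x1 = rng.normal(size=n)
x_star = 0.7 * x1 + rng.normal(scale=np.sqrt(s2_rstar), size=n)  # net var s2_rstar
eK = rng.normal(scale=np.sqrt(s2_e), size=n)
xK = x_star + eK                       # observed, mismeasured regressor
y = b0 + b1 * x1 + bK * x_star + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, xK])
bK_hat = np.linalg.lstsq(X, y, rcond=None)[0][2]
print(bK_hat, bK * s2_rstar / (s2_rstar + s2_e))  # both near 1.0
```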
Note that the variance of x∗K itself, σ²_{x∗K}, does not affect plim β̂K; what
matters is the variance of x∗K net of the other explanatory variables, i.e.
σ²_{r∗} (refer to the LP of x∗K on 1, x1, . . . , xK−1). The more collinear x∗K
is with the other explanatory variables, the smaller the residual variation
σ²_{r∗}, and hence the worse the attenuation bias.
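The collinearity point can also be illustrated: holding the measurement-error variance fixed and shrinking the net variance of x∗K worsens the attenuation (again a hypothetical sketch with assumed values):

```python
# Attenuation worsens as x*_K becomes more collinear with x1: the factor
# s2_rstar / (s2_rstar + s2_e) shrinks with the net variance s2_rstar.
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
bK, s2_e = 2.0, 1.0

def ols_coef_on_xk(s2_rstar):
    x1 = rng.normal(size=n)
    x_star = 0.9 * x1 + rng.normal(scale=np.sqrt(s2_rstar), size=n)
    xK = x_star + rng.normal(scale=np.sqrt(s2_e), size=n)
    y = 1.0 + 1.0 * x1 + bK * x_star + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, xK])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

loose = ols_coef_on_xk(s2_rstar=2.0)   # factor 2/3: estimate near 1.33
tight = ols_coef_on_xk(s2_rstar=0.25)  # factor 0.2: estimate near 0.4
print(loose, tight)
```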