
Chapter 9

Autocorrelation
One of the basic assumptions in the linear regression model is that the random error components or disturbances are identically and independently distributed. So in the model $y = X\beta + u$, it is assumed that
$$E(u_t u_{t-s}) = \begin{cases} \sigma_u^2 & \text{if } s = 0 \\ 0 & \text{if } s \neq 0, \end{cases}$$
i.e., the correlation between the successive disturbances is zero.

When the assumption $E(u_t u_{t-s}) = \sigma_u^2$ for $s = 0$ is violated, i.e., the variance of the disturbance term does not remain constant, the problem of heteroskedasticity arises. When the assumption $E(u_t u_{t-s}) = 0$ for $s \neq 0$ is violated, i.e., the variance of the disturbance term remains constant but the successive disturbance terms are correlated, the problem is termed autocorrelation.

When autocorrelation is present, some or all off-diagonal elements of $E(uu')$ are nonzero.

Sometimes the study and explanatory variables have a natural sequence order over time, i.e., the data is
collected with respect to time. Such data is termed as time series data. The disturbance terms in time series
data are serially correlated.

The autocovariance at lag $s$ is defined as
$$\gamma_s = E(u_t u_{t-s}); \quad s = 0, \pm 1, \pm 2, \ldots$$
At zero lag, we have the constant variance, i.e.,
$$\gamma_0 = E(u_t^2) = \sigma^2.$$

The autocorrelation coefficient at lag $s$ is defined as
$$\rho_s = \frac{E(u_t u_{t-s})}{\sqrt{\operatorname{Var}(u_t)\operatorname{Var}(u_{t-s})}} = \frac{\gamma_s}{\gamma_0}; \quad s = 0, \pm 1, \pm 2, \ldots$$

Assume $\rho_s$ and $\gamma_s$ are symmetrical in $s$, i.e., these coefficients are constant over time and depend only on the length of lag $s$. The autocorrelation between the successive terms $(u_2 \text{ and } u_1), (u_3 \text{ and } u_2), \ldots, (u_n \text{ and } u_{n-1})$ gives the autocorrelation of order one, i.e., $\rho_1$. Similarly, the autocorrelation between the terms $(u_3 \text{ and } u_1), (u_4 \text{ and } u_2), \ldots, (u_n \text{ and } u_{n-2})$ gives the autocorrelation of order two, i.e., $\rho_2$.
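As a numerical illustration, $\gamma_s$ and $\rho_s$ can be estimated from an observed series by replacing the expectations with sample averages. The following is a minimal sketch in Python with numpy (which the notes themselves do not use); the function name and the series are my own illustrations:

```python
import numpy as np

def sample_autocorr(u, s):
    """Estimate rho_s = gamma_s / gamma_0 by replacing expectations with sample averages."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    gamma_s = np.sum(u[s:] * u[:n - s]) / n   # sample autocovariance at lag s
    gamma_0 = np.sum(u * u) / n               # sample variance (lag 0)
    return gamma_s / gamma_0

# rho_1 and rho_2 of a short illustrative series
u = [0.5, -0.2, 0.3, 0.1, -0.4, 0.2, 0.6, -0.1]
print(sample_autocorr(u, 1), sample_autocorr(u, 2))
```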

Source of autocorrelation
Some of the possible reasons for the introduction of autocorrelation in the data are as follows:
1. Carryover of an effect, at least in part, is an important source of autocorrelation. For example, monthly data on household expenditure is influenced by the expenditure of the preceding month. Autocorrelation is present in cross-section data as well as in time series data. In cross-section data, neighboring units tend to be similar with respect to the characteristic under study. In time series data, time is the factor that produces autocorrelation. Whenever some ordering of the sampling units is present, autocorrelation may arise.

2. Another source of autocorrelation is the effect of the deletion of some variables. In regression modeling, it is not possible to include all the variables in the model. There can be various reasons for this: for example, some variables may be qualitative, or direct observations may not be available on a variable. The joint effect of such deleted variables gives rise to autocorrelation in the data.

3. The misspecification of the form of the relationship can also introduce autocorrelation in the data. It is assumed that the form of the relationship between the study and explanatory variables is linear. If log or exponential terms are present in the model, so that the linearity of the model is questionable, this also gives rise to autocorrelation in the data.

4. The difference between the observed and true values of a variable is called measurement error or errors-in-variables. The presence of measurement errors in the dependent variable may also introduce autocorrelation in the data.

Structure of disturbance term:
Consider the situation where the disturbances are autocorrelated:
$$E(uu') = \begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{n-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{n-2} \\ \vdots & \vdots & & \vdots \\ \gamma_{n-1} & \gamma_{n-2} & \cdots & \gamma_0 \end{bmatrix} = \gamma_0 \begin{bmatrix} 1 & \rho_1 & \cdots & \rho_{n-1} \\ \rho_1 & 1 & \cdots & \rho_{n-2} \\ \vdots & \vdots & & \vdots \\ \rho_{n-1} & \rho_{n-2} & \cdots & 1 \end{bmatrix} = \sigma_u^2 \begin{bmatrix} 1 & \rho_1 & \cdots & \rho_{n-1} \\ \rho_1 & 1 & \cdots & \rho_{n-2} \\ \vdots & \vdots & & \vdots \\ \rho_{n-1} & \rho_{n-2} & \cdots & 1 \end{bmatrix}.$$

Observe that there are now $(n + k)$ parameters: $\beta_1, \beta_2, \ldots, \beta_k, \sigma_u^2, \rho_1, \rho_2, \ldots, \rho_{n-1}$. These $(n + k)$ parameters are to be estimated on the basis of the available $n$ observations. Since the number of parameters exceeds the number of observations, the situation is not workable from the statistical point of view. To handle it, some special form and structure of the disturbance term must be assumed so that the number of parameters in the covariance matrix of the disturbance term can be reduced.

The following structures are popular in autocorrelation:


1. Autoregressive (AR) process.
2. Moving average (MA) process.
3. Joint autoregressive moving average (ARMA) process.

1. Autoregressive (AR) process


The structure of the disturbance term in the autoregressive (AR) process is assumed to be
$$u_t = \phi_1 u_{t-1} + \phi_2 u_{t-2} + \cdots + \phi_q u_{t-q} + \varepsilon_t,$$
i.e., the current disturbance term depends on the $q$ lagged disturbances, and $\phi_1, \phi_2, \ldots, \phi_q$ are the parameters (coefficients) associated with $u_{t-1}, u_{t-2}, \ldots, u_{t-q}$ respectively. An additional disturbance term $\varepsilon_t$ is introduced in $u_t$, which is assumed to satisfy the following conditions:
$$E(\varepsilon_t) = 0, \qquad E(\varepsilon_t \varepsilon_{t-s}) = \begin{cases} \sigma_\varepsilon^2 & \text{if } s = 0 \\ 0 & \text{if } s \neq 0. \end{cases}$$
This process is termed an AR($q$) process. In practice, the AR(1) process is the most popular.
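To see what an AR(1) disturbance series looks like, one can simulate the recursion $u_t = \phi_1 u_{t-1} + \varepsilon_t$ with i.i.d. normal innovations. A minimal sketch, with illustrative parameter values of my own choosing; the printed comparison uses the variance formula $\sigma_u^2 = \sigma_\varepsilon^2/(1-\phi_1^2)$ derived later in these notes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi1, sigma_eps = 500, 0.7, 1.0    # illustrative values

eps = rng.normal(0.0, sigma_eps, n)   # white-noise innovations
u = np.zeros(n)
for t in range(1, n):
    u[t] = phi1 * u[t - 1] + eps[t]   # AR(1) recursion

# Sample variance vs. theoretical sigma_u^2 = sigma_eps^2 / (1 - phi1^2)
print(u.var(), sigma_eps**2 / (1 - phi1**2))
```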

2. Moving average (MA) process:


The structure of the disturbance term in the moving average (MA) process is
$$u_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_p \varepsilon_{t-p},$$
i.e., the present disturbance term $u_t$ depends on the $p$ lagged values of $\varepsilon_t$. The coefficients $\theta_1, \theta_2, \ldots, \theta_p$ are the parameters associated with $\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots, \varepsilon_{t-p}$ respectively. This process is termed an MA($p$) process.

3. Joint autoregressive moving average (ARMA) process:


The structure of the disturbance term in the joint autoregressive moving average (ARMA) process is
$$u_t = \phi_1 u_{t-1} + \cdots + \phi_q u_{t-q} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_p \varepsilon_{t-p}.$$
This is termed an ARMA($q, p$) process.

The method of the correlogram is used to check which of these processes the data follows. The correlogram is a two-dimensional graph of the autocorrelation coefficient $\rho_s$ against the lag $s$, with lag $s$ on the $X$-axis and $\rho_s$ on the $Y$-axis.

In the MA(1) process
$$u_t = \varepsilon_t + \theta_1 \varepsilon_{t-1},$$
$$\rho_s = \begin{cases} \dfrac{\theta_1}{1 + \theta_1^2} & \text{for } s = 1 \\[2pt] 0 & \text{for } s \geq 2, \end{cases}$$
so
$$\rho_0 = 1, \quad \rho_1 \neq 0, \quad \rho_i = 0, \; i = 2, 3, \ldots$$
So there is no autocorrelation between the disturbances that are more than one period apart.
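This cutoff property can be checked numerically: the sample correlogram of a simulated MA(1) series should show $\rho_1 \approx \theta_1/(1+\theta_1^2)$ and $\rho_s \approx 0$ for $s \geq 2$. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta1 = 5000, 0.6                        # illustrative values
eps = rng.normal(size=n + 1)
u = eps[1:] + theta1 * eps[:-1]              # MA(1): u_t = eps_t + theta1 * eps_{t-1}

def rho(u, s):
    u = u - u.mean()
    return np.sum(u[s:] * u[:len(u) - s]) / np.sum(u * u)

print("theory rho_1 =", theta1 / (1 + theta1**2))
print([round(rho(u, s), 3) for s in range(1, 5)])   # rho_2, rho_3, ... near 0
```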

In the ARMA(1,1) process
$$u_t = \phi_1 u_{t-1} + \varepsilon_t + \theta_1 \varepsilon_{t-1},$$
$$\rho_s = \begin{cases} \dfrac{(1 + \phi_1\theta_1)(\phi_1 + \theta_1)}{1 + \theta_1^2 + 2\phi_1\theta_1} & \text{for } s = 1 \\[2pt] \phi_1 \rho_{s-1} & \text{for } s \geq 2, \end{cases}$$
$$\sigma_u^2 = \left( \frac{1 + \theta_1^2 + 2\phi_1\theta_1}{1 - \phi_1^2} \right) \sigma_\varepsilon^2.$$
The autocorrelation function begins at a point determined by both the AR and MA components, but thereafter it declines geometrically at a rate determined by the AR component.

In general, the autocorrelation function
- is nonzero but geometrically damped for an AR process;
- becomes zero after a finite number of periods for an MA process.
The ARMA process combines both these features.

The results for any lower-order process are not applicable to higher-order schemes. As the order of the process increases, the difficulty in handling it mathematically also increases.

Estimation under the first order autoregressive process:


Consider the simple linear regression model
$$y_t = \beta_0 + \beta_1 X_t + u_t, \quad t = 1, 2, \ldots, n.$$
Assume the $u_t$'s follow a first order autoregressive scheme defined as
$$u_t = \rho u_{t-1} + \varepsilon_t$$
where $|\rho| < 1$, $E(\varepsilon_t) = 0$,
$$E(\varepsilon_t \varepsilon_{t+s}) = \begin{cases} \sigma_\varepsilon^2 & \text{if } s = 0 \\ 0 & \text{if } s \neq 0 \end{cases}$$
for all $t = 1, 2, \ldots, n$, where $\rho$ is the first order autocorrelation between $u_t$ and $u_{t-1}$. Now
$$\begin{aligned} u_t &= \rho u_{t-1} + \varepsilon_t \\ &= \rho(\rho u_{t-2} + \varepsilon_{t-1}) + \varepsilon_t \\ &\;\;\vdots \\ &= \varepsilon_t + \rho \varepsilon_{t-1} + \rho^2 \varepsilon_{t-2} + \cdots \\ &= \sum_{r=0}^{\infty} \rho^r \varepsilon_{t-r}. \end{aligned}$$

$$E(u_t) = 0,$$
$$\begin{aligned} E(u_t^2) &= E(\varepsilon_t^2) + \rho^2 E(\varepsilon_{t-1}^2) + \rho^4 E(\varepsilon_{t-2}^2) + \cdots \\ &= (1 + \rho^2 + \rho^4 + \cdots)\sigma_\varepsilon^2 \qquad (\varepsilon_t\text{'s are serially independent}) \end{aligned}$$
$$E(u_t^2) = \sigma_u^2 = \frac{\sigma_\varepsilon^2}{1 - \rho^2} \quad \text{for all } t.$$
$$\begin{aligned} E(u_t u_{t-1}) &= E\left[ (\varepsilon_t + \rho\varepsilon_{t-1} + \rho^2\varepsilon_{t-2} + \cdots)(\varepsilon_{t-1} + \rho\varepsilon_{t-2} + \rho^2\varepsilon_{t-3} + \cdots) \right] \\ &= E\left[ \{\varepsilon_t + \rho(\varepsilon_{t-1} + \rho\varepsilon_{t-2} + \cdots)\}\{\varepsilon_{t-1} + \rho\varepsilon_{t-2} + \cdots\} \right] \\ &= \rho E\left[ (\varepsilon_{t-1} + \rho\varepsilon_{t-2} + \cdots)^2 \right] \\ &= \rho \sigma_u^2. \end{aligned}$$
Similarly,
$$E(u_t u_{t-2}) = \rho^2 \sigma_u^2.$$
In general,
$$E(u_t u_{t-s}) = \rho^s \sigma_u^2,$$
$$E(uu') = \Omega = \sigma_u^2 \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{n-1} \\ \rho & 1 & \rho & \cdots & \rho^{n-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{n-3} \\ \vdots & \vdots & \vdots & & \vdots \\ \rho^{n-1} & \rho^{n-2} & \rho^{n-3} & \cdots & 1 \end{bmatrix}.$$

Note that the disturbance terms are no longer independent and $E(uu') \neq \sigma^2 I$. The disturbances are nonspherical.
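Since $E(u_t u_{t-s}) = \rho^s \sigma_u^2$, the matrix $\Omega$ has the Toeplitz form above with $(i,j)$ entry $\sigma_u^2 \rho^{|i-j|}$ and can be built directly. A minimal sketch (the function name is my own):

```python
import numpy as np

def ar1_omega(n, rho, sigma_u2=1.0):
    """Covariance matrix Omega with (i,j) entry sigma_u^2 * rho^{|i-j|} for AR(1) disturbances."""
    idx = np.arange(n)
    return sigma_u2 * rho ** np.abs(idx[:, None] - idx[None, :])

print(ar1_omega(4, 0.5))
```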

Consequences of autocorrelated disturbances:


Consider the model with first order autoregressive disturbances
$$\underset{n \times 1}{y} = \underset{n \times k}{X}\,\underset{k \times 1}{\beta} + \underset{n \times 1}{u},$$
$$u_t = \rho u_{t-1} + \varepsilon_t, \quad t = 1, 2, \ldots, n$$
with assumptions
$$E(u) = 0, \quad E(uu') = \Omega,$$
$$E(\varepsilon_t) = 0, \quad E(\varepsilon_t \varepsilon_{t+s}) = \begin{cases} \sigma_\varepsilon^2 & \text{if } s = 0 \\ 0 & \text{if } s \neq 0 \end{cases}$$
where $\Omega$ is a positive definite matrix.

Econometrics | Chapter 9 | Autocorrelation | Shalabh, IIT Kanpur


6
The ordinary least squares estimator of $\beta$ is
$$\begin{aligned} b &= (X'X)^{-1}X'y \\ &= (X'X)^{-1}X'(X\beta + u), \\ b - \beta &= (X'X)^{-1}X'u, \\ E(b - \beta) &= 0. \end{aligned}$$
So the OLSE remains unbiased under autocorrelated disturbances.

The covariance matrix of $b$ is
$$\begin{aligned} V(b) &= E(b - \beta)(b - \beta)' \\ &= (X'X)^{-1}X'E(uu')X(X'X)^{-1} \\ &= (X'X)^{-1}X'\Omega X(X'X)^{-1} \\ &\neq \sigma_u^2 (X'X)^{-1}. \end{aligned}$$

The residual vector is
$$e = y - Xb = Hy = Hu,$$
$$e'e = y'Hy = u'Hu,$$
$$\begin{aligned} E(e'e) &= E(u'u) - E\left[u'X(X'X)^{-1}X'u\right] \\ &= n\sigma_u^2 - \operatorname{tr}\left[(X'X)^{-1}X'\Omega X\right]. \end{aligned}$$
Since $s^2 = \dfrac{e'e}{n-1}$,
$$E(s^2) = \frac{n\sigma_u^2}{n-1} - \frac{1}{n-1}\operatorname{tr}\left[(X'X)^{-1}X'\Omega X\right],$$
so $s^2$ is a biased estimator of $\sigma^2$. In fact, $s^2$ has a downward bias.

Application of OLS in the presence of autocorrelation in the data leads to serious consequences:
• an overly optimistic view from $R^2$;
• narrow confidence intervals;
• the usual $t$-ratio and $F$-ratio tests provide misleading results;
• predictions may have large variances.

Since the disturbances are nonspherical, the generalized least squares estimator of $\beta$ yields more efficient estimates than the OLSE.

The GLSE of $\beta$ is
$$\hat{\beta} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y,$$
$$E(\hat{\beta}) = \beta,$$
$$V(\hat{\beta}) = \sigma_u^2 (X'\Omega^{-1}X)^{-1}.$$

The GLSE is the best linear unbiased estimator of $\beta$.
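For concreteness, a minimal simulation sketch of the GLSE formula above (numpy, illustrative parameter values of my own); note that the factor $\sigma_u^2$ in $\Omega$ cancels in the GLSE, so only $\rho^{|i-j|}$ is needed:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 200, 0.8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])

u = np.zeros(n)                        # AR(1) disturbances
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
y = X @ beta_true + u

idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])   # sigma_u^2 factor cancels in the GLSE
Oinv = np.linalg.inv(Omega)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)                  # OLSE: unbiased, less efficient
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)    # GLSE
print(b_ols, b_gls)
```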

Tests for autocorrelation:


Durbin-Watson test:
The Durbin-Watson (D-W) test is used for testing the hypothesis of lack of first order autocorrelation in the disturbance term. The null hypothesis is
$$H_0: \rho = 0.$$

Use OLS to estimate $\beta$ in $y = X\beta + u$ and obtain the residual vector
$$e = y - Xb = Hy$$
where $b = (X'X)^{-1}X'y$ and $H = I - X(X'X)^{-1}X'$.

The D-W test statistic is
$$d = \frac{\sum_{t=2}^{n}(e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2} = \frac{\sum_{t=2}^{n} e_t^2}{\sum_{t=1}^{n} e_t^2} + \frac{\sum_{t=2}^{n} e_{t-1}^2}{\sum_{t=1}^{n} e_t^2} - 2\,\frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=1}^{n} e_t^2}.$$

For large $n$,
$$d \approx 1 + 1 - 2r,$$
i.e.,
$$d \approx 2(1 - r)$$
where $r$ is the sample autocorrelation coefficient from the residuals based on the OLSE and can be regarded as the regression coefficient of $e_t$ on $e_{t-1}$. Here

positive autocorrelation of the $e_t$'s $\;\Rightarrow\; d < 2$;
negative autocorrelation of the $e_t$'s $\;\Rightarrow\; d > 2$;
zero autocorrelation of the $e_t$'s $\;\Rightarrow\; d \approx 2$.
As $-1 < r < 1$:
if $-1 < r < 0$, then $2 < d < 4$, and
if $0 < r < 1$, then $0 < d < 2$.
So $d$ lies between 0 and 4.
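Both $d$ and its large-sample approximation $2(1 - r)$ are simple to compute from the OLS residuals. A minimal sketch (the function name and the residual values are my own illustrations):

```python
import numpy as np

def durbin_watson(e):
    """D-W statistic: d = sum_{t>=2} (e_t - e_{t-1})^2 / sum_t e_t^2."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

e = np.array([0.3, 0.5, 0.2, -0.1, -0.3, -0.2, 0.1])   # illustrative residuals
d = durbin_watson(e)
r = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)             # sample autocorrelation
print(d, 2 * (1 - r))                                   # d is close to 2(1 - r)
```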

Since $e$ depends on $X$, different data sets yield different values of $d$, so the sampling distribution of $d$ depends on $X$. Consequently, exact critical values of $d$ cannot be tabulated owing to their dependence on $X$. Durbin and Watson therefore obtained two statistics $\underline{d}$ and $\bar{d}$ such that
$$\underline{d} < d < \bar{d}$$
and whose sampling distributions do not depend upon $X$.

Considering the distributions of $\underline{d}$ and $\bar{d}$, they tabulated the critical values as $d_L$ and $d_U$ respectively. They prepared tables of critical values for $15 < n < 100$ and $k \leq 5$. Tables are now available for $6 < n < 200$ and $k \leq 10$.

The test procedure is as follows, with $H_0: \rho = 0$:

| Nature of $H_1$ | Reject $H_0$ when | Retain $H_0$ when | The test is inconclusive when |
|---|---|---|---|
| $H_1: \rho > 0$ | $d < d_L$ | $d > d_U$ | $d_L < d < d_U$ |
| $H_1: \rho < 0$ | $d > 4 - d_L$ | $d < 4 - d_U$ | $4 - d_U < d < 4 - d_L$ |
| $H_1: \rho \neq 0$ | $d < d_L$ or $d > 4 - d_L$ | $d_U < d < 4 - d_U$ | $d_L < d < d_U$ or $4 - d_U < d < 4 - d_L$ |

Values of $d_L$ and $d_U$ are obtained from the tables.

Limitations of D-W test
1. If $d$ falls in the inconclusive zone, then no conclusive inference can be drawn. This zone becomes fairly large for low degrees of freedom. One solution is to reject $H_0$ if the test is inconclusive. A better solution is to modify the test as follows:
• Reject $H_0$ when $d < d_U$.
• Accept $H_0$ when $d \geq d_U$.
This test gives a satisfactory solution when the values of the $x_i$'s change slowly, e.g., price, expenditure, etc.

2. The D-W test is not applicable when the intercept term is absent in the model. In such a case, one can use other critical values, say $d_M$, in place of $d_L$. Tables of the critical values $d_M$ are available.

3. The test is not valid when lagged dependent variables appear as explanatory variables, for example,
$$y_t = \beta_1 y_{t-1} + \beta_2 y_{t-2} + \cdots + \beta_r y_{t-r} + \beta_{r+1} x_{t1} + \cdots + \beta_k x_{t,k-r} + u_t,$$
$$u_t = \rho u_{t-1} + \varepsilon_t.$$
In such a case, Durbin's $h$-test is used, which is given as follows.

Durbin's h-test
Apply OLS to
$$y_t = \beta_1 y_{t-1} + \beta_2 y_{t-2} + \cdots + \beta_r y_{t-r} + \beta_{r+1} x_{t1} + \cdots + \beta_k x_{t,k-r} + u_t,$$
$$u_t = \rho u_{t-1} + \varepsilon_t$$
and find the OLSE $b_1$ of $\beta_1$. Let its variance be $\operatorname{Var}(b_1)$ and its estimator be $\widehat{\operatorname{Var}}(b_1)$. Then Durbin's $h$-statistic is
$$h = r\sqrt{\frac{n}{1 - n\,\widehat{\operatorname{Var}}(b_1)}},$$
which is asymptotically distributed as $N(0,1)$, where
$$r = \frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=2}^{n} e_t^2}.$$
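A minimal sketch of the $h$-statistic, assuming $r$ and $\widehat{\operatorname{Var}}(b_1)$ have already been obtained from the OLS fit; the numbers below are purely hypothetical, and the guard reflects the breakdown case discussed next:

```python
import numpy as np

def durbin_h(r, n, var_b1):
    """Durbin's h; valid only when 1 - n*Var(b1) > 0 (otherwise the test breaks down)."""
    denom = 1.0 - n * var_b1
    if denom <= 0:
        raise ValueError("1 - n*Var(b1) <= 0: the h-test breaks down")
    return r * np.sqrt(n / denom)

# Illustrative (hypothetical) inputs: r = 0.3, n = 60, estimated Var(b1) = 0.01
print(durbin_h(0.3, 60, 0.01))   # compare with N(0,1) critical values
```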

This test is applicable when $n$ is large. When $\left[1 - n\,\widehat{\operatorname{Var}}(b_1)\right] < 0$, the test breaks down. In such cases, the following test procedure can be adopted.

Introduce a new variable based on $u_t = \rho u_{t-1} + \varepsilon_t$ by regressing the residuals on their one-period lag, i.e., fit
$$e_t = \delta e_{t-1} + v_t.$$
Now apply OLS to this model and test $H_{0A}: \delta = 0$ versus $H_{1A}: \delta \neq 0$ using the $t$-test. If $H_{0A}$ is accepted, then accept $H_0: \rho = 0$.

If $H_{0A}: \delta = 0$ is rejected, then reject $H_0: \rho = 0$.

4. If $H_0: \rho = 0$ is rejected by the D-W test, it does not necessarily mean the presence of first order autocorrelation in the disturbances. It could also happen for other reasons, e.g.,
• the disturbances may follow a higher order AR process;
• some important variables are omitted;
• the dynamics of the model are misspecified;
• the functional form of the model is incorrect.

Estimation procedures with autocorrelated errors when the autocorrelation coefficient is known

Consider the estimation of the regression coefficients under first order autoregressive disturbances when the autocorrelation coefficient is known. The model is
$$y = X\beta + u,$$
$$u_t = \rho u_{t-1} + \varepsilon_t,$$
and assume that $E(u) = 0$, $E(uu') = \psi \neq \sigma^2 I$, $E(\varepsilon) = 0$, $E(\varepsilon\varepsilon') = \sigma_\varepsilon^2 I$.

The OLSE of $\beta$ is unbiased but not, in general, efficient, and the estimate of $\sigma^2$ is biased. So we use the generalized least squares estimation procedure, and the GLSE of $\beta$ is
$$\hat{\beta} = (X'\psi^{-1}X)^{-1}X'\psi^{-1}y$$
where

 1 −ρ 0  0 0 
− ρ 1 + ρ 2 −ρ  0 0 
 
 0 −ρ 1+ ρ 2  0 0 
ψ =
−1
.
     
 0 0 0  1+ ρ 2 −ρ 
 
 0 0 0  −ρ 1 

To employ this, we proceed as follows:

1. Find a matrix $P$ such that $P'P = \psi^{-1}$. In this case,
$$P = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 & 0 \\ -\rho & 1 & 0 & \cdots & 0 & 0 \\ 0 & -\rho & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}.$$

2. Transform the variables as
$$y^* = Py, \quad X^* = PX, \quad \varepsilon^* = P\varepsilon.$$
Such a transformation yields
$$y^* = \begin{bmatrix} \sqrt{1-\rho^2}\,y_1 \\ y_2 - \rho y_1 \\ y_3 - \rho y_2 \\ \vdots \\ y_n - \rho y_{n-1} \end{bmatrix}, \qquad X^* = \begin{bmatrix} \sqrt{1-\rho^2} & \sqrt{1-\rho^2}\,x_{12} & \cdots & \sqrt{1-\rho^2}\,x_{1k} \\ 1-\rho & x_{22} - \rho x_{12} & \cdots & x_{2k} - \rho x_{1k} \\ 1-\rho & x_{32} - \rho x_{22} & \cdots & x_{3k} - \rho x_{2k} \\ \vdots & \vdots & & \vdots \\ 1-\rho & x_{n2} - \rho x_{n-1,2} & \cdots & x_{nk} - \rho x_{n-1,k} \end{bmatrix}.$$

Note that the first observation is treated differently from the other observations. For the first observation,
$$\sqrt{1-\rho^2}\,y_1 = \sqrt{1-\rho^2}\,x_1'\beta + \sqrt{1-\rho^2}\,u_1,$$
whereas for the other observations,
$$y_t - \rho y_{t-1} = (x_t - \rho x_{t-1})'\beta + (u_t - \rho u_{t-1}), \quad t = 2, 3, \ldots, n,$$
where $x_t'$ is a row vector of $X$. Also, $\sqrt{1-\rho^2}\,u_1$ and $(u_t - \rho u_{t-1})$ have the same properties, so we expect these transformed errors to be uncorrelated and homoscedastic.

If the first column of $X$ is a vector of ones, then the first column of $X^*$ is not constant: its first element is $\sqrt{1-\rho^2}$.

Now employ OLS with the observations $y^*$ and $X^*$; the OLSE of $\beta$ is
$$\beta^* = (X^{*\prime}X^*)^{-1}X^{*\prime}y^*,$$
its covariance matrix is
$$V(\beta^*) = \sigma^2 (X^{*\prime}X^*)^{-1} = \sigma^2 (X'\psi^{-1}X)^{-1}$$
and its estimator is
$$\hat{V}(\beta^*) = \hat{\sigma}^2 (X'\psi^{-1}X)^{-1}$$
where
$$\hat{\sigma}^2 = \frac{(y - X\beta^*)'\psi^{-1}(y - X\beta^*)}{n - k}.$$
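These steps can be checked numerically. The sketch below (numpy; the function name is my own) builds $P$ as above, verifies $P'P = \psi^{-1}$, and notes that OLS on the transformed data then reproduces the GLSE:

```python
import numpy as np

def p_matrix(n, rho):
    """P with sqrt(1 - rho^2) in the (1,1) position, 1 on the rest of the
    diagonal and -rho on the first subdiagonal, so that P'P = psi^{-1}."""
    P = np.eye(n)
    P[0, 0] = np.sqrt(1 - rho ** 2)
    for t in range(1, n):
        P[t, t - 1] = -rho
    return P

n, rho = 5, 0.6
P = p_matrix(n, rho)
print(np.round(P.T @ P, 3))   # tridiagonal psi^{-1}: 1 + rho^2 inside, -rho off-diagonal

# OLS on y* = P @ y and X* = P @ X then reproduces the GLSE beta*.
```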

Estimation procedures with autocorrelated errors when the autocorrelation coefficient is unknown

Several procedures have been suggested to estimate the regression coefficients when the autocorrelation coefficient is unknown. The feasible GLSE of $\beta$ is
$$\hat{\beta}_F = (X'\hat{\Omega}^{-1}X)^{-1}X'\hat{\Omega}^{-1}y$$
where $\hat{\Omega}^{-1}$ is the $\psi^{-1}$ matrix with $\rho$ replaced by its estimator $\hat{\rho}$.

1. Use of sample correlation coefficient

The most common method is to use the sample correlation coefficient $r$ between successive residuals as the natural estimator of $\rho$. The sample correlation can be estimated using the residuals in place of the disturbances as
$$r = \frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=2}^{n} e_t^2}$$
where $e_t = y_t - x_t'b$, $t = 1, 2, \ldots, n$, and $b$ is the OLSE of $\beta$.

Two modifications of $r$ have been suggested, either of which can be used in place of $r$:

1. $r^* = \left(\dfrac{n-k}{n-1}\right) r$, which is Theil's estimator.
2. $r^{**} = 1 - \dfrac{d}{2}$ for large $n$, where $d$ is the Durbin-Watson statistic for $H_0: \rho = 0$.
2. Durbin procedure:
In the Durbin procedure, the model
$$y_t - \rho y_{t-1} = \beta_0(1 - \rho) + \beta(x_t - \rho x_{t-1}) + \varepsilon_t, \quad t = 2, 3, \ldots, n$$
is expressed as
$$\begin{aligned} y_t &= \beta_0(1 - \rho) + \rho y_{t-1} + \beta x_t - \rho\beta x_{t-1} + \varepsilon_t \\ &= \beta_0^* + \rho y_{t-1} + \beta x_t + \beta^* x_{t-1} + \varepsilon_t, \quad t = 2, 3, \ldots, n \qquad (*) \end{aligned}$$
where $\beta_0^* = \beta_0(1 - \rho)$ and $\beta^* = -\rho\beta$.

Now run a regression of model (*) using OLS, and take the estimated coefficient of $y_{t-1}$ as the estimate of $\rho$.

Another possibility, since $\rho \in (-1, 1)$, is to search for a suitable $\rho$ that gives a smaller error sum of squares.
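A minimal sketch of Durbin's regression (*) for a single regressor, using numpy least squares; the data are simulated with illustrative values, and the estimate of $\rho$ is read off as the coefficient of $y_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, b0, b1 = 300, 0.6, 1.0, 2.0   # illustrative values
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
y = b0 + b1 * x + u

# Regressors of model (*): intercept, y_{t-1}, x_t, x_{t-1}
Z = np.column_stack([np.ones(n - 1), y[:-1], x[1:], x[:-1]])
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
rho_hat = coef[1]                     # coefficient of y_{t-1} estimates rho
print(rho_hat)
```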
3. Cochrane-Orcutt procedure:
This procedure utilizes the $P$ matrix defined while estimating $\beta$ when $\rho$ is known. It has the following steps:

(i) Apply OLS to $y_t = \beta_0 + \beta_1 x_t + u_t$ and obtain the residual vector $e$.

(ii) Estimate $\rho$ by
$$r = \frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=2}^{n} e_{t-1}^2}.$$
Note that $r$ is a consistent estimator of $\rho$.

(iii) Replace $\rho$ by $r$ in
$$y_t - \rho y_{t-1} = \beta_0(1 - \rho) + \beta(x_t - \rho x_{t-1}) + \varepsilon_t$$
and apply OLS to the transformed model
$$y_t - r y_{t-1} = \beta_0^* + \beta(x_t - r x_{t-1}) + \text{disturbance term}$$
and obtain estimators of $\beta_0^*$ and $\beta$ as $\hat{\beta}_0^*$ and $\hat{\beta}$ respectively.

This is the Cochrane-Orcutt procedure. Since two successive applications of OLS are involved, it is also called the two-step procedure.

This application can be repeated in the procedure as follows:
(I) Put $\hat{\beta}_0^*$ and $\hat{\beta}$ into the original model.
(II) Calculate the residuals and the residual sum of squares.
(III) Recalculate $\rho$ by
$$r = \frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=2}^{n} e_{t-1}^2}$$
and substitute it in the model
$$y_t - \rho y_{t-1} = \beta_0(1 - \rho) + \beta(x_t - \rho x_{t-1}) + \varepsilon_t$$
to again obtain the transformed model.
(IV) Apply OLS to this model and calculate the regression coefficients.

This procedure is repeated until convergence is achieved, i.e., the process is iterated until two successive estimates are nearly the same, so that stability of the estimator is attained. This is an iterative and numerically convergent procedure. Such estimates are asymptotically efficient, and there is a loss of one observation.
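A minimal sketch of the iterative Cochrane-Orcutt procedure for the simple model $y_t = \beta_0 + \beta_1 x_t + u_t$, iterating until successive estimates of $\rho$ are nearly the same (simulated data with illustrative values; the helper name is my own):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho_true = 300, 0.7
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

r = 0.0
for _ in range(50):
    # OLS on the transformed model for t = 2,...,n (one observation is lost)
    Xs = np.column_stack([np.ones(n - 1), x[1:] - r * x[:-1]])
    b = ols(Xs, y[1:] - r * y[:-1])
    b0, b1 = b[0] / (1 - r), b[1]      # recover beta_0 from beta_0* = beta_0(1 - r)
    e = y - b0 - b1 * x                # residuals in the original model
    r_new = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
    if abs(r_new - r) < 1e-8:          # stop when successive estimates agree
        break
    r = r_new

print(r, b0, b1)
```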

4. Hildreth-Lu procedure or grid-search procedure:

The Hildreth-Lu procedure has the following steps:
(i) Apply OLS to
$$(y_t - \rho y_{t-1}) = \beta_0(1 - \rho) + \beta(x_t - \rho x_{t-1}) + \varepsilon_t, \quad t = 2, 3, \ldots, n$$
using different values of $\rho$ $(-1 \leq \rho \leq 1)$, such as $\rho = \pm 0.1, \pm 0.2, \ldots$
(ii) Calculate the residual sum of squares in each case.
(iii) Select the value of $\rho$ for which the residual sum of squares is smallest.

Suppose we get $\rho = 0.4$. Now choose a finer grid: for example, choose $\rho$ such that $0.3 < \rho < 0.5$, consider $\rho = 0.31, 0.32, \ldots, 0.49$, and pick the $\rho$ with the smallest residual sum of squares. Such iteration can be repeated until a suitable value of $\rho$ corresponding to the minimum residual sum of squares is obtained. The selected final value of $\rho$ can then be used for transforming the model as in the Cochrane-Orcutt procedure. The estimators obtained with this procedure are as efficient as those obtained by the Cochrane-Orcutt procedure, and there is a loss of one observation.
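A minimal sketch of the grid search: for each trial $\rho$, OLS is applied to the transformed model and the residual sum of squares recorded; the $\rho$ with the smallest RSS is selected, first on a coarse grid and then on a finer one (simulated data, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho_true = 300, 0.4
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

def rss_at(rho):
    # OLS on the transformed model for t = 2,...,n
    Xs = np.column_stack([np.full(n - 1, 1 - rho), x[1:] - rho * x[:-1]])
    ys = y[1:] - rho * y[:-1]
    b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return np.sum((ys - Xs @ b) ** 2)

grid = np.arange(-0.9, 0.95, 0.1)                    # coarse grid
rho1 = grid[np.argmin([rss_at(r) for r in grid])]
fine = np.arange(rho1 - 0.09, rho1 + 0.10, 0.01)     # finer grid around the minimum
rho2 = fine[np.argmin([rss_at(r) for r in fine])]
print(rho1, rho2)
```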

5. Prais-Winsten procedure
This is also an iterative procedure based on a two-step transformation.

(i) Estimate $\rho$ by
$$\hat{\rho} = \frac{\sum_{t=2}^{n} e_t e_{t-1}}{\sum_{t=3}^{n} e_{t-1}^2}$$
where the $e_t$'s are residuals based on the OLSE.

(ii) Replace $\rho$ by $\hat{\rho}$ in the model as in the Cochrane-Orcutt procedure:
$$\sqrt{1 - \hat{\rho}^2}\, y_1 = \sqrt{1 - \hat{\rho}^2}\, \beta_0 + \beta\sqrt{1 - \hat{\rho}^2}\, x_1 + \sqrt{1 - \hat{\rho}^2}\, u_1,$$
$$y_t - \hat{\rho} y_{t-1} = (1 - \hat{\rho})\beta_0 + \beta(x_t - \hat{\rho} x_{t-1}) + (u_t - \hat{\rho} u_{t-1}), \quad t = 2, 3, \ldots, n.$$

(iii) Use OLS for estimating the parameters.

The estimators obtained with this procedure are asymptotically as efficient as the best linear unbiased estimators. There is no loss of any observation.
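A minimal sketch of the Prais-Winsten step: $\hat{\rho}$ from the OLS residuals, then OLS on the transformed observations including the rescaled first one, so that no observation is lost (simulated data, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho_true = 300, 0.6
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

# (i) rho_hat from OLS residuals; denominator runs over e_2^2, ..., e_{n-1}^2
X = np.column_stack([np.ones(n), x])
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e[1:-1] ** 2)

# (ii) transform all n observations; the first one is rescaled by sqrt(1 - rho^2)
c = np.sqrt(1 - rho_hat ** 2)
ys = np.concatenate([[c * y[0]], y[1:] - rho_hat * y[:-1]])
Xs = np.vstack([[c, c * x[0]],
                np.column_stack([np.full(n - 1, 1 - rho_hat),
                                 x[1:] - rho_hat * x[:-1]])])

# (iii) OLS on the transformed data
print(np.linalg.lstsq(Xs, ys, rcond=None)[0])
```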

(6) Maximum likelihood procedure

Assuming that $y \sim N(X\beta, \sigma_\varepsilon^2 \psi)$, the likelihood function for $\beta$, $\rho$ and $\sigma_\varepsilon^2$ is
$$L = \frac{1}{(2\pi\sigma_\varepsilon^2)^{n/2}\, |\psi|^{1/2}} \exp\left[ -\frac{1}{2\sigma_\varepsilon^2}(y - X\beta)'\psi^{-1}(y - X\beta) \right].$$

Ignoring the constant and using $|\psi| = \dfrac{1}{1 - \rho^2}$, the log-likelihood is
$$\ln L(\beta, \sigma_\varepsilon^2, \rho) = -\frac{n}{2}\ln\sigma_\varepsilon^2 + \frac{1}{2}\ln(1 - \rho^2) - \frac{1}{2\sigma_\varepsilon^2}(y - X\beta)'\psi^{-1}(y - X\beta).$$

The maximum likelihood estimators of $\beta$, $\rho$ and $\sigma_\varepsilon^2$ can be obtained by solving the normal equations
$$\frac{\partial \ln L}{\partial \beta} = 0, \quad \frac{\partial \ln L}{\partial \rho} = 0, \quad \frac{\partial \ln L}{\partial \sigma_\varepsilon^2} = 0.$$
These normal equations turn out to be nonlinear in the parameters and cannot be easily solved.
One solution is to
- first derive the maximum likelihood estimator of $\sigma_\varepsilon^2$;
- substitute it back into the likelihood function to obtain the likelihood as a function of $\beta$ and $\rho$;
- maximize this likelihood function with respect to $\beta$ and $\rho$.
Thus
$$\frac{\partial \ln L}{\partial \sigma_\varepsilon^2} = 0 \;\Rightarrow\; -\frac{n}{2\sigma_\varepsilon^2} + \frac{1}{2\sigma_\varepsilon^4}(y - X\beta)'\psi^{-1}(y - X\beta) = 0$$
$$\Rightarrow\; \hat{\sigma}_\varepsilon^2 = \frac{1}{n}(y - X\beta)'\psi^{-1}(y - X\beta)$$
is the estimator of $\sigma_\varepsilon^2$.

Substituting $\hat{\sigma}_\varepsilon^2$ in place of $\sigma_\varepsilon^2$ in the log-likelihood function yields
$$\begin{aligned} \ln L^*(\beta, \rho) &= -\frac{n}{2}\ln\left[\frac{1}{n}(y - X\beta)'\psi^{-1}(y - X\beta)\right] + \frac{1}{2}\ln(1 - \rho^2) - \frac{n}{2} \\ &= -\frac{n}{2}\left[\ln\left\{(y - X\beta)'\psi^{-1}(y - X\beta)\right\} - \frac{1}{n}\ln(1 - \rho^2)\right] + k \\ &= k - \frac{n}{2}\ln\left[\frac{(y - X\beta)'\psi^{-1}(y - X\beta)}{(1 - \rho^2)^{1/n}}\right] \end{aligned}$$
where $k = \dfrac{n}{2}\ln n - \dfrac{n}{2}$.

Maximization of $\ln L^*$ is equivalent to minimizing the function
$$\frac{(y - X\beta)'\psi^{-1}(y - X\beta)}{(1 - \rho^2)^{1/n}}.$$
Using the optimization techniques of nonlinear regression, this function can be minimized, and estimates of $\beta$ and $\rho$ can be obtained.

If $n$ is large and $\rho$ is not too close to one, then the term $(1 - \rho^2)^{-1/n}$ is negligible, and the estimates of $\beta$ will be the same as those obtained by nonlinear least squares estimation.
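A minimal sketch of this concentrated-likelihood approach: for each $\rho$ on a grid, $\beta$ is obtained by GLS (which minimizes the quadratic form for fixed $\rho$), and the objective above is evaluated; the grid here simply stands in for a nonlinear optimizer (simulated data, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho_true = 300, 0.6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + u

def psi_inv(rho):
    # Tridiagonal psi^{-1} for AR(1) (see the matrix given earlier)
    A = np.diag(np.r_[1.0, np.full(n - 2, 1 + rho ** 2), 1.0])
    A += np.diag(np.full(n - 1, -rho), 1) + np.diag(np.full(n - 1, -rho), -1)
    return A

def objective(rho):
    W = psi_inv(rho)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # GLS beta for this rho
    q = (y - X @ b) @ W @ (y - X @ b)
    return q / (1 - rho ** 2) ** (1.0 / n)

grid = np.arange(-0.95, 0.96, 0.01)
rho_ml = grid[np.argmin([objective(r) for r in grid])]
print(rho_ml)
```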
