
# Term Paper for Econ 508

## Cointegration Test: A Comparison Between Bayesian and Classical Methods

December 12, 2010
Daisuke Goto
Department of Economics
Rutgers, The State University of New Jersey
New Brunswick, NJ

## 1 Johansen's Test

Johansen (1988) developed a test of cointegration based on a canonical
correlation test statistic. An autoregressive (AR) model can be written as
follows:

$$\Delta y_t = \Gamma z_t + \Pi y_{t-1} + B_1 \Delta y_{t-1} + \cdots + B_{p-1} \Delta y_{t-p+1} + v_t \qquad (1)$$

where $v_t$ is an error vector distributed as $N(0, \Sigma)$, and $\Delta x_t = x_t - x_{t-1}$.
Fixing $y_{1-p}, \dots, y_0$ and rewriting equation (1), we obtain:

$$Y_\Delta = WB + V \qquad (2)$$
$$\phantom{Y_\Delta} = Y_{-1}\Pi + XC + V \qquad (3)$$

where

$$
Y_\Delta = \begin{pmatrix} \Delta y_1' \\ \vdots \\ \Delta y_n' \end{pmatrix},\quad
Y_{-1} = \begin{pmatrix} y_0' \\ \vdots \\ y_{n-1}' \end{pmatrix},\quad
X = \begin{pmatrix} z_1' & \Delta y_0' & \cdots & \Delta y_{2-p}' \\ \vdots & \vdots & \ddots & \vdots \\ z_n' & \Delta y_{n-1}' & \cdots & \Delta y_{n-p+1}' \end{pmatrix},
$$

$$
W = (Y_{-1}, X),\qquad B = \begin{pmatrix} \Pi \\ C \end{pmatrix},\qquad C = \begin{pmatrix} \Gamma' \\ B_1 \\ \vdots \\ B_{p-1} \end{pmatrix}.
$$

In order to test the existence of cointegration vectors, we examine $H_0: \Pi = 0$ vs.
$H_a: \Pi \neq 0$.
When we factorize $\Pi$ as $\alpha\beta'$, the maximum likelihood estimators of $\alpha$ and $\beta$ are
functions of residual moment matrices. The eigenvalues $\lambda_i$ solve

$$\left|\lambda_i I - S_{11}^{-1} S_{01}' S_{00}^{-1} S_{01}\right| = 0 \qquad (4)$$

where

$$S_{11} = Y_{-1}' M_x Y_{-1},\quad S_{01} = Y_\Delta' M_x Y_{-1},\quad S_{00} = Y_\Delta' M_x Y_\Delta \qquad (5)$$

$$M_x = I - X(X'X)^{-1}X' \qquad (6)$$
Assume the eigenvalues are ordered so that $\lambda_i > \lambda_{i+1}$. If there are $r$ cointegration
vectors, then $\lambda_j \neq 0$ for $j \in [1, r]$, and the remaining eigenvalues satisfy $\lambda_k = 0$ for
$k \in [r+1, G]$. Hence the likelihood ratio (trace) test statistic $H$ is:

$$H = -2\ln(\lambda) = -n \sum_{i=r+1}^{G} \ln(1 - \lambda_i). \qquad (7)$$

This test has an atypical distribution: the statistic converges to a functional of multivariate
Brownian motion (Johansen, 1991). Tabulated critical values are published in
Osterwald-Lenum (1992).
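To make the mechanics of equations (4)-(7) concrete, here is a minimal numpy sketch (it is not the GAUSS implementation in the appendix): it builds the moment matrices $S_{00}$, $S_{01}$, $S_{11}$, solves the eigenvalue problem, and returns the trace statistics. The function name `johansen_trace` is illustrative, and deterministic terms (the $z_t$ block) are omitted for simplicity.

```python
import numpy as np

def johansen_trace(y, p=1):
    """Eigenvalues and trace statistics for H0: rank(Pi) <= r, eqs. (4)-(7).

    y : (n, G) array of levels; p : lag order of the AR model.
    Deterministic terms (the z_t block) are omitted in this sketch.
    """
    dy = np.diff(y, axis=0)                  # Delta y_t, t = 1..n-1
    Yd = dy[p - 1:]                          # Delta y_t on the usable sample
    Ym1 = y[p - 1:-1]                        # y_{t-1}
    if p > 1:                                # X = lagged differences
        X = np.hstack([dy[p - 1 - j:-j] for j in range(1, p)])
        Mx = np.eye(len(Yd)) - X @ np.linalg.solve(X.T @ X, X.T)
        R0, Rk = Mx @ Yd, Mx @ Ym1
    else:
        R0, Rk = Yd, Ym1                     # M_x = I when there is no X
    S00 = R0.T @ R0                          # Y_D' Mx Y_D
    S01 = R0.T @ Rk                          # Y_D' Mx Y_{-1}
    S11 = Rk.T @ Rk                          # Y_{-1}' Mx Y_{-1}
    # eq. (4): eigenvalues of S11^{-1} S01' S00^{-1} S01
    lam = np.linalg.eigvals(
        np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01))).real
    lam = np.sort(lam)[::-1]
    n = len(R0)
    trace = np.array([-n * np.log(1.0 - lam[r:]).sum() for r in range(len(lam))])
    return lam, trace                        # trace[r] tests H0: rank <= r
```

The statistics would then be compared against the Osterwald-Lenum (1992) critical values, which are not reproduced here.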

## 2 Bayesian Test

Tsurumi and Wago (1996) developed a test for cointegration that examines the singular
values of $\Pi$. They used a prior probability density function of the form:

$$p(\Pi, C, \Sigma) \propto |\Sigma|^{-\frac{G+1}{2}} \exp\left[-\frac{1}{2}\,\mathrm{tr}\,(\Pi - \Pi_0)' F_0 (\Pi - \Pi_0)\,\Sigma^{-1}\right]$$

and the likelihood function that they used is:

$$l(\Pi, C, \Sigma) = |\Sigma|^{-\frac{n}{2}} \exp\left[-\frac{1}{2}\,\mathrm{tr}\,[S + (B - \hat{B})' W'W (B - \hat{B})]\,\Sigma^{-1}\right]$$

In this analysis we use an improper flat prior, so the posterior
density becomes:

$$p(\Pi, \Sigma \mid \text{data}) \propto |\Sigma|^{-\frac{n-G+1}{2}} \exp\left[-\frac{1}{2}\,\mathrm{tr}\,[S + D + (\Pi - \hat{\Pi})' F (\Pi - \hat{\Pi})]\,\Sigma^{-1}\right]$$
Estimates of $\Pi$ and its singular values are generated using Markov chain Monte
Carlo methods. Because we use a flat prior, the conditional distributions
of the target density are fully known, so we can use Gibbs sampling
to draw $\Pi$ and obtain its singular values by decomposition. We assume
that $\Pi$ is distributed around $\hat{\Pi}$, the least-squares estimate that minimizes the sum of squared
residuals, i.e. $\hat{\Pi} = (Y_{-1}' M_x Y_{-1})^{-1} Y_{-1}' M_x \Delta Y_t$; from this we obtain $S$ and $D$. The inverted
Wishart distribution is the conditionally conjugate distribution for the covariance matrix $\Sigma$. Hence we
design the following Gibbs sampler:
Step 1: Draw $\Sigma$ from the inverted Wishart distribution:

$$\Sigma \sim IW(n,\, S + D) \qquad (8)$$

Step 2: Draw $\Pi$ using the $\Sigma$ obtained above, assuming $\Pi$ is normally distributed
around $\hat{\Pi}$:

$$\Pi \sim N(\hat{\Pi},\, \Sigma \otimes F^{-1}) \qquad (9)$$

Step 3: Decompose each draw by a singular value decomposition:

$$\Pi = \Psi \Lambda \Upsilon' \qquad (10)$$

The singular values are on the diagonal of $\Lambda$.
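The Gibbs steps can be sketched in Python as follows. This is a simplified illustration rather than the appendix code: it assumes $p = 1$ with no deterministic terms, uses the residual sum of squares $S$ alone as the inverted-Wishart scale (the paper's scale is $S + D$), draws the inverse Wishart via a Bartlett decomposition, and draws $\Pi$ through the matrix-normal factorization of $\Sigma \otimes F^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_inv_wishart(nu, S):
    """One draw from IW(nu, S) via a Bartlett-decomposed Wishart draw."""
    G = S.shape[0]
    C = np.linalg.cholesky(np.linalg.inv(S))        # Wishart scale = S^{-1}
    A = np.tril(rng.standard_normal((G, G)), -1)
    A[np.diag_indices(G)] = np.sqrt(rng.chisquare(nu - np.arange(G)))
    W = C @ A @ A.T @ C.T                           # W ~ Wishart(nu, S^{-1})
    return np.linalg.inv(W)                         # so W^{-1} ~ IW(nu, S)

def gibbs_singular_values(Yd, Ym1, ndraw=200, burn=50):
    """Steps 1-3 for the p = 1, no-deterministics case:
    Sigma ~ IW, Pi ~ N(Pi_hat, Sigma (x) F^{-1}), then SVD of each Pi draw."""
    n, G = Yd.shape
    F = Ym1.T @ Ym1                                 # F = Y_{-1}' M_x Y_{-1}
    Pi_hat = np.linalg.solve(F, Ym1.T @ Yd)         # least-squares centre
    resid = Yd - Ym1 @ Pi_hat
    S = resid.T @ resid                             # residual SSQ as IW scale
    L_row = np.linalg.cholesky(np.linalg.inv(F))    # row factor: L L' = F^{-1}
    draws = []
    for _ in range(ndraw):
        Sigma = draw_inv_wishart(n, S)              # Step 1, eq. (8)
        L_col = np.linalg.cholesky(Sigma)           # column factor of Sigma
        Pi = Pi_hat + L_row @ rng.standard_normal((G, G)) @ L_col.T  # Step 2
        draws.append(np.linalg.svd(Pi, compute_uv=False))  # Step 3: diag(Lambda)
    return np.array(draws[burn:])
```

The retained draws give the posterior distribution of the singular values, which is what the test examines.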

## 3 Monte Carlo Simulation

The Monte Carlo simulation uses the tests discussed above, with
the data generation process of Banerjee et al. (1986), Engle and Granger
(1987), Hansen and Phillips (1990) and Gonzalo (1994):

$$y_t = \beta x_t + z_t, \qquad z_t = \rho z_{t-1} + e_{zt}$$
$$a_1 y_t - a_2 x_t = w_t, \qquad w_t = w_{t-1} + e_{wt}$$
$$\begin{pmatrix} e_{zt} \\ e_{wt} \end{pmatrix} \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \theta\sigma \\ \theta\sigma & \sigma^2 \end{pmatrix}\right)$$

Our parameter space is $a_1 \in \{0, 1\}$, $a_2 = -1$, $\beta = 1$, $\theta = 0$, $\sigma \in \{0.5, 1, 2\}$,
and $\rho \in \{0.9, 0.8, 0.7\}$.
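As a hypothetical Python counterpart of the GAUSS `datagen` procedure in the appendix, the DGP can be simulated directly from the two identities above (the function name and defaults are illustrative):

```python
import numpy as np

def generate_dgp(n, rho, sigma, a1=1.0, a2=-1.0, beta=1.0, theta=0.0,
                 burn=500, rng=None):
    """Simulate y_t = beta*x_t + z_t, z_t = rho*z_{t-1} + e_zt,
    a1*y_t - a2*x_t = w_t, w_t = w_{t-1} + e_wt; discard `burn` draws."""
    if rng is None:
        rng = np.random.default_rng()
    cov = np.array([[1.0, theta * sigma],
                    [theta * sigma, sigma ** 2]])
    e = rng.multivariate_normal([0.0, 0.0], cov, size=n + burn)
    z = np.zeros(n + burn)
    w = np.zeros(n + burn)
    for t in range(1, n + burn):
        z[t] = rho * z[t - 1] + e[t, 0]   # stationary when |rho| < 1
        w[t] = w[t - 1] + e[t, 1]         # random walk
    # solve the two identities for (x_t, y_t):
    # x = (w - a1*z) / (a1*beta - a2),  y = beta*x + z
    d = a1 * beta - a2
    x = (w - a1 * z) / d
    y = beta * x + z
    return x[burn:], y[burn:]
```

When $\rho < 1$, $y_t - \beta x_t = z_t$ is stationary, so $(x_t, y_t)$ are cointegrated; at $\rho = 1$ they are not.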
To find the size of the Bayesian test of Tsurumi and Wago (1996),
we run a Monte Carlo study to obtain the singular values under $\rho = 1$ and apply
the Kolmogorov-Smirnov test to check whether the distribution obtained under $\rho < 1$ is
significantly different. We also obtain the fifth-percentile value under $\rho = 1$ and
compare it with the mean of the singular values under $\rho < 1$, to see how frequently the
singular values under $\rho < 1$ exceed that threshold. A significant difficulty arises
when comparing Bayesian and classical tests,
because Bayesian tests are oriented around analyses of posterior distributions rather than repeated-sampling error rates.
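The distributional comparison rests on a two-sample Kolmogorov-Smirnov test. A compact sketch mirroring the `ks` and `pval` procedures in the appendix (equal sample sizes assumed, asymptotic p-value):

```python
import numpy as np

def ks_two_sample(x1, x2):
    """Two-sample Kolmogorov-Smirnov statistic and asymptotic p-value
    for two samples of equal size n, as in the appendix's ks/pval procs."""
    n = len(x1)
    grid = np.sort(np.concatenate([x1, x2]))
    F1 = np.searchsorted(np.sort(x1), grid, side="right") / n
    F2 = np.searchsorted(np.sort(x2), grid, side="right") / n
    D = np.abs(F1 - F2).max()            # sup-distance between the two EDFs
    lam = np.sqrt(n / 2.0) * D           # sqrt(n1*n2/(n1+n2)) with n1 = n2 = n
    # Kolmogorov tail: P(K > lam) = 2 * sum_{i>=1} (-1)^(i-1) exp(-2 i^2 lam^2)
    i = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (i - 1) * np.exp(-2.0 * i ** 2 * lam ** 2))
    return D, float(min(max(p, 0.0), 1.0))
```

A small p-value indicates the two singular-value distributions differ significantly.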

Table 1 summarizes the sizes of the tests obtained from 100 experiments with
1000 observations each. It is usually assumed that as $\rho$ decreases from 1 toward 0, the size of any
cointegration test should increase. We, however, find many counterexamples to
this assumption. The size of Johansen's test was generally much larger than the
nominal value of 0.05.

For the Bayesian analysis, we found that the distributions of the singular values
are always significantly different. However, under the other criterion, which tests whether the
singular values obtained under $\rho < 1$ are generally larger than those under $\rho = 1$, we found that
the probability of obtaining a higher singular value compares unfavorably with the size of
Johansen's test.

We also note that the sizes of the tests are influenced more by $\sigma$ than by
$\rho$. This is understandable: as the error terms become smaller, we see stronger indications of
cointegration.

Overall, the experiment shows that the power of the tests is much weaker than
theory suggests. We must use these tests with caution.

## 4 Concluding Remarks

We generally favor Johansen's method in this particular Monte
Carlo exercise. It remains very hard to compare tests across the classical
and Bayesian regimes, and further research should investigate the methodological
issues in comparing Bayesian and classical tests. We also need a better
understanding of the power and size of cointegration tests. It also seems
necessary to update the table of critical values for Johansen's test.

## References

Banerjee, A., J.J. Dolado, D.F. Hendry and G.W. Smith (1986). Exploring equilibrium relationships in econometrics through static models: Some Monte Carlo evidence. Oxford Bulletin of Economics and Statistics, 48, 253-277.

Engle, R.F. and C.W.J. Granger (1987). Cointegration and error correction: Representation, estimation and testing. Econometrica, 55, 251-276.

Gonzalo, J. (1994). Five alternative methods of estimating long-run equilibrium relationships. Journal of Econometrics, 60, 203-233.

Hansen, B. and P.C.B. Phillips (1990). Estimation and inference in models of cointegration: A simulation study. Advances in Econometrics, 8, 225-248.

Johansen, S. (1988). Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control, 12, 231-254.

Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica, 59, 1551-1580.

Johansen, S. and K. Juselius (1990). Maximum likelihood estimation and inference on cointegration with applications to the demand for money. Oxford Bulletin of Economics and Statistics, 52(2), 169-210.

Osterwald-Lenum, M. (1992). A note with quantiles of the asymptotic distribution of the maximum likelihood cointegration rank test statistics. Oxford Bulletin of Economics and Statistics, 54, 461-471.

Tsurumi, H. and H. Wago (1996). Bayesian analysis of unit root and cointegration with an application to the foreign exchange rate of yen. Advances in Econometrics, 13B, 51-86.

## Tables and Figures

Table 1. Test Results

Panel A ($a_1 = 1$)

| Rho               | 0.900 | 0.800 | 0.700 | 0.900 | 0.800 | 0.700 | 0.900 | 0.800 | 0.700 |
|-------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Sigma             | 2.000 | 2.000 | 2.000 | 1.000 | 1.000 | 1.000 | 0.500 | 0.500 | 0.500 |
| Johansen size     | 0.110 | 0.130 | 0.120 | 0.160 | 0.080 | 0.100 | 0.150 | 0.150 | 0.110 |
| Bayesian K-S test | 0.010 | 0.000 | 0.000 | 0.000 | 0.000 | 0.020 | 0.000 | 0.000 | 0.010 |
| Bayesian size     | 0.120 | 0.150 | 0.190 | 0.160 | 0.220 | 0.110 | 0.240 | 0.200 | 0.210 |

Panel B ($a_1 = 0$)

| Rho               | 0.900 | 0.800 | 0.700 | 0.900 | 0.800 | 0.700 | 0.900 | 0.800 | 0.700 |
|-------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Sigma             | 2.000 | 2.000 | 2.000 | 1.000 | 1.000 | 1.000 | 0.500 | 0.500 | 0.500 |
| Bayesian K-S test | 0.020 | 0.000 | 0.020 | 0.020 | 0.010 | 0.000 | 0.020 | 0.030 | 0.010 |
| Bayesian size     | 0.250 | 0.170 | 0.200 | 0.240 | 0.170 | 0.150 | 0.110 | 0.110 | 0.090 |

## Appendix: GAUSS Code

new;

print "BEGINNING OF PROGRAM";

print;
print "Test for Cointegration";
print;

@--- GLOBAL ---@

_pmcolor = {0,0,0,0,0,0,0,0,15};
_plwidth = 5;
_pcolor = 1;

@--- PARAMETERS ---@

looploo = 0;
consta1 = 0;
consta2 = 1;
p = 1; // # of lags
tun = {.04, 0.1, 0.5, 10.5};
nrep = 1000 ; // # of gibbs sampling
burn = 500 ; // # of burn on gibbs sampling
indct = 0;
numcol = 2; // # of cols on wind
dagenl = 1500 ; // number of data generated on dgp
databn = 500 ; // number of burn on dgp
testct = 100 ; // number of ite on monte carlo

// parameter_vector = {looploo, consta1, nrep, burn, indct, numcol, consta2, p}

parameter_vector = looploo~ consta1 ~ nrep~ burn~ indct~ numcol ~ consta2 ~ p;

// Currency Data
// NOTE: the statement loading `data` is missing from this listing;
// `data` must be defined before the call below.

print "Currency Data";
print;
{reject, bay} = cointegration_test_interface(data, tun, parameter_vector, 1, 0, -1, 0);

print "Monte Carlo Simulation";

print; size = {}; power = {}; bsize = {}; bpower ={}; parastore = {}; resultstore = {};

// p = total number of eigenvalues (cols(data))

// Ho: 1 coint <= r or Ha: no coint
// k = number of lags

// para = {length, burn, sigma, a1, a2, rho, beta, alpha_1}

fedrhovec = .9 ~ .8 ~ .7;

feda_1vec = 1 ~ 0;
fedsigvec = 2 ~ 1 ~ .5;
beta = 1;
alpha_1 = .5;

for a_1feeder (1, cols(feda_1vec), 1);

feda_1 = feda_1vec[1,a_1feeder];
for sigfeeder (1, cols(fedsigvec), 1);
fedsig = fedsigvec[1,sigfeeder];
para = dagenl ~ databn ~ fedsig ~ feda_1 ~ -1~ 0 ~ beta ~ alpha_1;
{fif, fiv} = sigvalscooper(tun, parameter_vector, testct, para);
for rhofeeder (1, cols(fedrhovec), 1);
fedrho = fedrhovec[1,rhofeeder];
para = dagenl ~ databn ~ fedsig ~ feda_1 ~ -1~ fedrho ~ beta ~ alpha_1;
{szz, pww, szb, pwb} = montestudy(tun, parameter_vector, testct, para, fiv, fif);
parasam = fedrho | fedsig | feda_1;
parastore = parastore ~ parasam;
size = size ~ szz; power = power ~ pww; bsize = bsize ~ szb; bpower = bpower ~ pwb;
result_gd = parasam | szz | pww | szb | pwb;
so_result = ftostrC(result_gd,"%6.3lf");
soso_result = so_result[1:3,1] $| "" $| so_result[4:5,1] $| "" $| so_result[6:7,1];
resultstore = resultstore $~ soso_result;
endfor;
endfor;
endfor;
para_ref = "Rho" $| "Sigma" $| "a_1" ;
result_ref = para_ref $| "Johansen" $| "Size" $| "Power" $| "Bayesian" $| "Size" $| "Power";

print "Test Results";

print;
outy = result_ref $~ resultstore;
print outy;
note = "" $~ "Note: A -1 indicates an error or an omission.";
print note;
print;
//print "CPU TIME";
//print timeutc - tc;
//print;
print "END OF PROGRAM";
end;

proc(4) = montestudy(tun, parameter_vector, testct, para, fiv, fif);

local pw, sz, data, rej, bay, size, power, testcount, bs, bp, sizeb, powerb;
pw = 0;
sz = 0;
bs = 0;
bp = 0;
for testcount (1, testct, 1);
{data} = datagen(para[1,1],para[1,2],para[1,3],para[1,4],para[1,5],para[1,6],para[1,7],para[1,8]);
{rej,bay} = cointegration_test_interface(data, tun, parameter_vector, 0, testcount, fiv, fif);
sz = sz + rej[1,1];
pw = pw + rej[1,2];
bs = bs + bay[1,1];
bp = bp + bay[1,2];
endfor;
size = sz/testct ;
power = pw/testct ;

sizeb = bs/testct ;
powerb = bp/testct ;
retp(size, power, sizeb, powerb);
endp;

proc(2) = sigvalscooper(tun, parameter_vector, testct, para);

local pw, sz, data, rej, bay, size, power, testcount, fif, fiv, fivchain, fifchain, fivout, e;
fivchain = {};
fifchain = {};
for testcount (1, testct, 1);
{data} = datagen(para[1,1],para[1,2],para[1,3],para[1,4],para[1,5], 1 ,para[1,7],para[1,8]);
{fif, fiv} = sigvalscooper_engine(data, tun, parameter_vector, 0, testcount);
fifchain = fifchain | fif;
fivchain = fivchain | fiv;
endfor;
fivout = meanc(fivchain);
// e = { .95 };
// fivout = quantile(fivchain,e);
retp(fif, fivout);
endp;

proc(2) = sigvalscooper_engine(data, tun, parameter_vector, graphh, itenum);

local transsing, xksi, transxksi, transbcha, kdenite, kdenfeed, xdeni, kdeni,
xdenchain, kdenchain, x15, den15, numrow, numcol, xdenout, kdenout, kdenplot, ncol, pep, marp,
alphavalue, confbet, confsig, confksi, eigenvalues, pp, lambdamaxtest, tracetest,
singvalue, meanctranssing, out, bchain, meancxksi, empty, consta1, consta2,
h0_size, h0_power, ii, rowsdata, reject, rjct_sz, rjct_pw, cv1, cv2, cv3,
fif, fiv, transing, e, fit, fitty;

looploo = parameter_vector[1,1];
consta1 = parameter_vector[1,2];
nrep = parameter_vector[1,3];
burn = parameter_vector[1,4];
indct = parameter_vector[1,5];
numcol = parameter_vector[1,6];
consta2 = parameter_vector[1,7];
pp = parameter_vector[1,8];
rowsdata = cols(data);

{singvalue, bchain, empty} = coint(data, pp, tun, nrep, burn, indct, consta1, looploo);

transing = singvalue’;
fif = transing;
fit = meanc(transing);
e = { .05 };
fitty = quantile(transing,e);
fiv = maxc(fitty');

retp(fif, fiv);
endp;

proc(2) = cointegration_test_interface(data, tun, parameter_vector, graphh, itenum, fiv, fif);

local transsing, xksi, transxksi, transbcha, kdenite, kdenfeed, xdeni, kdeni,
xdenchain, kdenchain, x15, den15, numrow, numcol, xdenout, kdenout, kdenplot, ncol, pep, marp,
alphavalue, confbet, confsig, confksi, eigenvalues, pp, lambdamaxtest, tracetest,
singvalue, meanctranssing, out, bchain, meancxksi, empty, consta1, consta2,
h0_size, h0_power, ii, rowsdata, reject, rjct_sz, rjct_pw, cv1, cv2, cv3,
meansing, maxsing, bayee, baye_sz, baye_pw, sdasdasda, stickin, dk, pv,

j_lambda,j_alpha,j_beta,j_ecm,j_maxstat,j_trstat,j_crit;

// cv1 = 11.42 | 23.62 | 39.56 | 59.42 | 83.26 | 111.11 | 142.93 | 178.80 | 218.63 | 262.48 | 310.33 | 362.0
cv2 = 5.332 | 17.299 | 32.313 | 50.424 | 72.140 ;
cv3 = 5.332 | 15.810 | 23.002 | 29.335 | 35.546 ;

looploo = parameter_vector[1,1];
consta1 = parameter_vector[1,2];
nrep = parameter_vector[1,3];
burn = parameter_vector[1,4];
indct = parameter_vector[1,5];
numcol = parameter_vector[1,6];
consta2 = parameter_vector[1,7];
pp = parameter_vector[1,8];
rowsdata = cols(data);
h0_power = 1; // null hypothesis
h0_size = 0; // null hypothesis

{singvalue, bchain, empty} = coint(data, pp, tun, nrep, burn, indct, consta1, looploo);

transsing = singvalue';
meansing = meanc(transsing);
maxsing = maxc(meansing);

if fiv == -1;
else;
baye_sz = -1;

stickin = transsing[.,1] ~ fif[.,1];

dk = ks(stickin);
pv = pval(dk);
endif;
if fiv == -1;
baye_sz = -1;
elseif pv < 0.05;
baye_sz = 0;
else;
baye_sz = 1;
endif;

if fiv == -1;
baye_pw = -1;
elseif maxsing > fiv;
baye_pw = 0;
else;
baye_pw = 1;
endif;

bayee = baye_sz ~ baye_pw;

xksi = sumc(singvalue);
transxksi = xksi’;
transbcha = bchain';
xdenchain = {};
kdenchain = {};

for kdenite (1, cols(transsing), 1);

kdenfeed =transsing[.,kdenite];
{xdeni,kdeni} =kden(kdenfeed);

xdenchain=xdenchain~xdeni;
kdenchain=kdenchain~kdeni;
endfor;

{x15, den15}=kden(xksi);

if floor(cols(transsing)/numcol) == cols(transsing)/numcol;
numrow = floor(cols(transsing)/numcol)+1;
else;
numrow = floor(cols(transsing)/numcol)+2;
endif;

if graphh == 1;
begwind;
window(numrow,numcol,0);
title("Sum of eigenvalues");
xy(x15, den15); nextwind;
for kdenplot (1, cols(transsing), 1);
nextwind;
xdenout = xdenchain[.,kdenplot];
kdenout = kdenchain[.,kdenplot];
xy(xdenout, kdenout);
endfor;
endwind;

endif;

//for j (1, ncol, 1);

// {pep, marp} = chkconv(transsing[.,j], round(rows(transsing)/50), 0, j);
//endfor;

ncol = cols(transsing);

// Confidence intervals
alphavalue = .05;
confbet = confinv(transbcha, alphavalue);
confsig = confinv(transsing, alphavalue);
confksi = confinv(xksi, alphavalue);

// Classical Econometrics
// {eigenvalues, lambdamaxtest, tracetest} = mgjoh(data,p,consta2);
rjct_sz = -1;
rjct_pw = -1;
/*
if lambdamaxtest[1,cols(lambdamaxtest)] > cv2[rowsdata-h0_power, 1];
rjct_pw = 0;
else;
rjct_pw = 1;
endif;*/

/*
if lambdamaxtest[1,cols(lambdamaxtest)] > cv3[rowsdata-h0_size, 1];
rjct_sz = 0;
else;
rjct_sz = 1;
endif;
*/

{j_lambda,j_alpha,j_beta,j_ecm,j_maxstat,j_trstat,j_crit} = coint_joh(data,0,1,2);

if j_trstat[1,1] > j_trstat[1,2];
rjct_sz = 1;
else;
rjct_sz = 0;
endif;

reject = rjct_sz ~ rjct_pw;

//Test Results
/*
if itenum>0;
print "Iteration Number";
print itenum;
endif;
print;
print "Bayesian Method";
print;
print "estimated singular values & highest posterior density interval of eigenvalues";
meanctranssing = meanc(transsing);
out = meanctranssing ~ confsig;
print out;
print;
print "estimated value & highest posterior density interval of ksi";
meancxksi = meanc(xksi);
out = meancxksi ~ confksi;
print out;
print;
print "bay size.";
print bayee;
print;
print "Johansen's Method";
print;
print "estimated eigenvalues";
print eigenvalues;
print;
print "test statistics";
print lambdamaxtest;
print;
print "h_0 rejected, then 1 or else 0. size and power.";
print reject;
print;
*/
//print "End of Output";
//print;
retp(reject, bayee);
endp;

proc(1) = confinv(y, alpha);

/*
Procedure to compute the HPDR
for a vector of parameters.
Output: HPDR for the parameters based on the posterior
*/

local conf, i, upd;

conf =bounds(y[.,1],alpha);
for i (2,cols(y),1);
upd=bounds(y[.,i], alpha);

conf=conf|upd;
endfor;
retp(conf);
endp;

proc(1)=bounds(z, alpha);
local x, den, c_low, c_up;
{x, den}=kden(z);
{c_low, c_up}=critical(x, den, alpha);
retp(c_low~c_up);
endp;

proc(2)=critical(x1, den1, alpha);

local nrept, i, ii, c_low, c_up;
nrept=rows(den1);
i=1; ii=0;
do while (i+ii) <= alpha*nrept;
if den1[i]<=den1[nrept-ii];
i=i+1;
else;
ii=ii+1;
endif;
endo;
c_low=x1[i]; c_up=x1[nrept-ii];
retp(c_low, c_up);
endp;
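For readers without GAUSS, the `bounds`/`critical` logic above can be rendered in Python: trim the grid endpoint with the lower density until a fraction `alpha` of the points has been removed. This is an illustrative sketch (the name `hpd_bounds` is not in the original), assuming `x` is a sorted grid (here, sorted posterior draws) with `den` the density at each point.

```python
import numpy as np

def hpd_bounds(x, den, alpha=0.05):
    """Highest-posterior-density bounds in the style of the `critical` proc:
    repeatedly drop the endpoint with the lower density until a fraction
    alpha of the points has been removed."""
    n = len(den)
    i, j = 0, n - 1
    dropped = 0
    while dropped < alpha * n:
        if den[i] <= den[j]:
            i += 1               # left tail has lower density: trim it
        else:
            j -= 1               # right tail has lower density: trim it
        dropped += 1
    return x[i], x[j]
```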

//{a, b} = checkconv(transsing);

proc(2) = checkconv(a);
local bh, b, sumca, h, s, ss, j, r, v, w;
sumca = sumc(a);
bh = (1/rows(a).*sumca)-(1/cols(a)*sumc(1/rows(a).*sumca).*ones(rows(sumca),cols(sumca)));
b = rows(a)/(cols(a)-1)*bh'bh;
s = 0;
for j (1, cols(a), 1);
ss = a[.,j] - (1/rows(a)*sumca[j,.]).*ones(rows(a),1);
s = s + ss'ss;
endfor;
w = s/cols(a);
v = w*(rows(a)-1)/rows(a)*w + 1/rows(a)*b;
r = sqrt(v/w);
retp(r, rows(a));
endp;

/*
b1 = bchain[1,1];
count = 1;
do while count<=cols(bchain);
b1 = b1 | bchain[1,count];
count = count + 4;
endo;
obs = seqa(1,1,rows(b1));
begwind;
window(1,1,0);
xy(obs, b1);
endwind;
*/

proc(3)=ecmtransform(y_0, z, p, ind);

/** Transformation of raw data into ECM format
** p is the number of lagged differences in the model
** dimension of y_0 should be (t+p+1) by (n)
** z == nonzero if a constant is present */

local x, dy_t, t, y_t;

t = rows(y_0)-p-1;
x = {};
dy_t= dlag2(y_0, t, 0); /* creating a vector delta y_t */
for jj (1,p,1);
x = x~dlag2(y_0,t, jj); /* creating vectors delta y_(t-1), ..., y_(t-p) */
endfor;
y_t = y_0[p+1:rows(y_0)-1,.]; /* creating y(t-1), i.e error correction term */
//x=y_t~x;
if z == 0;
else;
x = x ~ ones(rows(x),1);
endif;
fn dlag2(x, t, m)= x[rows(x)-t-m+1:rows(x)-m,.]-x[rows(x)-t-m:rows(x)-m-1,.];
retp(dy_t, x, y_t);
endp;

proc(1)=invwishart(s, nu);

/* Drawing from the inverted Wishart distribution with
scale matrix S and "nu" degrees of freedom
: invwish2.g
S - "sample" variance-covariance matrix
nu - degrees of freedom
The advantage is that it does not require nu d-variate
normals (Gentle, James E. (1998), Random Number Generation
and Monte Carlo Methods). */

local m, chi, a, b, c, w, sigma, x, t, iw, invwish;

m =rows(s);
//chi=rndgam(m,1,nu);
a =rndn(m,m);
b =lowmat(a);
c =rndgam(m,1,nu);
c =sqrt(c);
w =b-eye(m).*diag(b)+eye(m).*c;
sigma = w*w';
iw = inv(sigma);
t =chol(s);
invwish = t'iw*t;
retp(invwish);
endp;

proc(2) = kden(v);
local g,h,j,nn,res;
nn=rows(v);
h=1.06*stdc(v)/nn^.2;
g=0;
j=1;
do while j <= nn;
g=g|meanc(pdfn((v-v[j])/h))/h;
j=j+1;

endo;
res=sortc(v~g[2:nn+1],1);
retp(res[.,1],res[.,2]);
endp;
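A Python rendering of `kden` for reference: Gaussian kernels with the same rule-of-thumb bandwidth, evaluated at the sample points themselves. Illustrative, assuming a one-column input.

```python
import numpy as np

def kden(v):
    """Gaussian kernel density evaluated at the sample points, with the
    bandwidth h = 1.06 * std(v) * n^(-1/5) used by the GAUSS kden proc;
    returns (sorted points, densities at those points)."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    h = 1.06 * v.std(ddof=1) * n ** -0.2
    u = (v[:, None] - v[None, :]) / h          # pairwise standardized gaps
    den = np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
    order = np.argsort(v)
    return v[order], den[order]
```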

proc(3) = coint(y_0, p, tun, nrep, burn, indct, consta, looploo);

local t, n, dy_t, x, y_t1, z, z_rows, mxrows, mx, eyemx, pi_hat, ef1;
local mw, s, eyemw, mwrows, w, c, d, sd, sigma, beta_hat2, beta_hat, eye2, wki, st_dy;
local ef, nn, lg_sig, new_p_add, b_trimmed, new_p, rows_nb, cols_nb, i, cols_bh;
local rows_bh, j, jj, zz, sigma_hat, singvalue, ii, transsing, diaglgsig;
local den14, den13, den12, den11, x11, x12, x13, x14, bchain, pchain;
local diaginvef, sig, nu, ssr_not, s1, s2, s3, s4, pi_ori;

{dy_t, x, y_t1} = ecmtransform(y_0, consta, p, 0);

w = y_t1 ~ x;
mx = eye(cols(x*inv(x'x)*x')) - x*inv(x'x)*x';
mw = eye(cols(w*inv(w'w)*w')) - w*inv(w'w)*w';
ef = y_t1'mx*y_t1;
pi_hat = inv(ef)*y_t1'mx*dy_t;
pi_ori = pi_hat;
pchain = pi_hat;
singvalue = {};
jj = 0;
c = inv(ef)*ef*pi_hat;
d = pi_hat'ef*pi_hat-c'ef*c;
s = dy_t'mw*dy_t;

// Iterations 1 - n

for j (1, nrep, 1);

{lg_sig} = invwishart(s+d,rows(dy_t));
new_p = {};

for ii (1, rows(lg_sig), 1);
s1 = lg_sig[ii,ii]*inv(ef);
z = rndn(rows(s1),1);
new_p = new_p ~ (chol(s1)'*z); /* column ii deviation ~ N(0, s1); columns drawn independently */
endfor;

pi_hat = pi_ori + new_p;

singvalue = singvalue~svd(pi_hat);
pchain = pchain ~pi_hat;

if looploo == 1;
if j/100 >= jj; // printed every 100th iteration
print "Number of Iteration - " j;
print pi_hat;
print;
jj = jj+1;
endif;
endif;

endfor;

singvalue = singvalue[.,burn+1:cols(singvalue)];

transsing = singvalue';
retp(singvalue, pchain, transsing); /* return draws; third output unused by callers */
endp;

proc(1) = datagen(length, burn, sigma, a1, a2, rho, beta, alpha_1);

local e_zt, e_wt, x_0, d, e_2t, e_1t, d_xt, u_2t, u_1t, alpha_1t, gamma_1, gamma_2;
local d_yt, x_t1, y_t1, x_chain, y_chain;
d = a1*beta - a2;
gamma_2 = (rho - 1)*a1/d;
gamma_1 = (rho - 1)*a2/d;
x_t1 = 0;
y_t1 = 0;
x_chain = x_t1;
y_chain = y_t1;
for i (1, length, 1);
e_zt = rndn(1,1);
e_wt = rndn(1,1) * sigma;
e_2t = d^(-1)*(-a1*e_zt + e_wt);
u_2t = e_2t;
u_1t = d^(-1)*(-a2*e_zt + beta*e_wt);
e_1t = u_1t - alpha_1 * u_2t;
d_xt = gamma_2*(1-beta)*x_t1+e_2t;
d_yt = alpha_1 * d_xt + (alpha_1*gamma_2 - gamma_1)*(1-beta)*y_t1+e_1t;
x_chain = x_chain | x_t1 + d_xt;
y_chain = y_chain | y_t1 + d_yt;
x_t1 = x_t1 + d_xt;
y_t1 = y_t1 + d_yt;
endfor;
x_chain = x_chain[burn+1:rows(x_chain),.];
y_chain = y_chain[burn+1:rows(y_chain),.];
retp(x_chain~y_chain);
endp;

proc(2)=chkconv(y, intvl, indc, index_p);

/**
** check convergence of the chain
**/

local n, x, a_t, dx, phix, nnr, nnr1, upr, lowr, nnp, nnp1, upp,
lowp, hp, pep, hr, per, wp, wp1, wp2, wp3, wp4, wpr, nn2, sur,
wpp, th, i, rho, j, phi_1, sigma_1, sigma, phi, cphi, wr, yr,
b_gls, vs2, wrwr, h11, ln1, max1, maxsur, vol, marr, marp, meanp,
varp, sdp, meanr, varr, sdr, emr, emp, abr, abmpd, cov, corr_rp, obs;

y=selct(y,intvl);
obs=seqa(1,1,rows(y));
n=rows(y);
x=ones(n,1);

a_t=lowmat(ones(n,n));
a_t=a_t'a_t;
a_t=a_t[n:1,n:1];
dx=diagrv(zeros(n,n),x[.,1]);
phix=dx*a_t*dx;
nnr=20; nnr1=nnr+1;

upr=1.0; lowr=0.000001;
nnp=20; nnp1=nnp+1;
upp=.32; lowp=-.2;
hp=(upp-lowp)/nnp; pep=seqa(lowp,hp,nnp1);
hr=(upr-lowr)/nnr; per=seqa(lowr,hr,nnr1);

/**
Numerical integration by Simpson’s rule
**/
wp1={1 4}; wp2={2 4}; wp4=1;

nn2=nnr/2-1;
wp3=ones(1,nn2).*.wp2;
wp=wp1~wp3~wp4;
wpr=wp*(hr/3);

sur=zeros(nnr1,nnp1);
nn2=nnp/2-1;
wp3=ones(1,nn2).*.wp2;
wp=wp1~wp3~wp4;
wpp=wp*(hp/3);

th=0.0;
for i (1, nnr1,1); rho=per[i];
for j (1, nnp1,1); phi_1=pep[j];
sigma=toeplitz(seqm(1,phi_1,n));
sigma_1=sigma-eye(n);
sigma=eye(n)*(1+th^2+2*th*phi_1)+
sigma_1*(phi_1+th)*(1+phi_1*th)/phi_1;
phi=sigma+(1-phi_1^2)*rho*phix;
cphi=chol(phi);
cphi=inv(cphi)’;

wr=cphi*x;
yr=cphi*y;
b_gls=yr/wr;
vs2=(yr-wr*b_gls)'(yr-wr*b_gls);
wrwr=wr'wr;
h11=inv(wrwr);
ln1=-.5*ln(det(phi))-.5*(n-3)*ln(vs2)-.5*ln(det(wrwr))-.5*ln(h11)
+.5*n*ln(1-phi_1^2);
sur[i,j]=ln1;
endfor;
endfor;

max1=maxc(sur);
maxsur=maxc(max1);
sur=sur-maxsur; sur=exp(sur);

@---marginal posterior pdf, mean and sd----@

vol=wpr*sur*wpp';
sur=sur/vol;
marr=sur*wpp';
marp=sur'wpr';

meanp=wpp*(pep.*marp);
varp=wpp*(((pep-meanp)^2).*marp);
sdp=sqrt(varp);

meanr=wpr*(per.*marr);
varr=wpr*(((per-meanr)^2).*marr);
sdr=sqrt(varr);

//print "posterior mean and sd of phi" meanp~sdp;

//print "posterior mean and sd of rho" meanr~sdr;

@---drawing the marginal pdf----@

// fonts("simplex simgrma");
begwind;
title(index_p);
window(2,2,0);
ylabel("value of draw");
xlabel("# of draws");
xy(obs, y); nextwind;
ylabel("\201 Pdf(\202f\201)");
xlabel("\202f");
xy(pep,marp); nextwind; nextwind;
ylabel("\201 Pdf(\202r\201)");
xlabel("\202r");
xy(per,marr);
endwind;

if indc==1;
@---posterior correlation between rho and phi---@
emr=per-meanr;
emp=pep-meanp;
abr=emr*emp';
abmpd=abr.*sur;
cov=wpr*abmpd*wpp’;
corr_rp=cov/(sdr*sdp);
// print "covariance and correlation between rho and phi" cov~corr_rp;

@---surface of joint posterior pdf of rho and phi---@

fonts("simplex simgrma");
_protate=1;
_pframe[1]=0;
begwind;
makewind(9, 6.855, 0, 0, 0);
title("\201 Surface of Joint Posterior Pdf of \202r \201and \202f");
ylabel("\202f");
xlabel("\202r");
zlabel("\201p(\202r\201, \202f\201)");
surface(per',pep,sur');
endwind;

@---contours of joint posterior pdf of rho and phi---@

_pzclr={1, 2, 3, 4};
_protate=1;
_pframe[1]=0;
begwind;
makewind(9, 6.855, 0, 0, 0);
title("\201 Surface of Joint Posterior Pdf of \202r \201and \202f");
ylabel("\202f");
xlabel("\202r");
zlabel("\201p(\202r\201, \202f\201)");
contour(per',pep,sur');
endwind;
endif;

retp(pep~marp, per~marr);
endp;

proc(1)=selct(x0,intvl);
local n,idx, x;
n = rows(x0);
idx = seqa(1,intvl,floor(n/intvl));
x = x0[idx,.];
retp(x);
endp;

/* Johansen ML cointegration procedure. Original author: Richard G. Pierse.
   The proc header appears to match the call coint_joh(data,0,1,2). */
proc(7)=coint_joh(Y,X,model,lag);
local t,n,k,ni0,DY,Z,ZZ,R0,Rk,S00,S01,S0k,Skk,
lambda,alpha,beta,ecm,maxstat,trstat,
cvmax,cvtrace,llf,npar,crit;
/* 95% critical values from Osterwald-Lenum (1992) */
cvmax={
3.84 11.44 17.89 23.80 30.04
36.36 41.51 47.99 53.69 59.06,
9.243 15.672 22.002 28.138 34.400
40.303 46.455 51.995 57.422 63.571,
3.762 14.069 20.967 27.067 33.461
39.372 45.277 51.420 57.121 62.805,
8.176 14.900 21.074 27.136 33.319
39.426 44.912 51.071 56.996 62.419};
cvtrace={
3.84 12.53 24.31 39.89 59.46
82.49 109.99 141.20 175.77 212.67,
9.243 19.964 34.910 53.116 76.069
102.139 131.700 165.579 202.920 244.148,
3.762 15.410 29.680 47.210 68.524
94.155 124.243 155.999 192.887 233.135,
8.176 17.953 31.525 48.280 70.598
95.177 124.253 157.109 192.840 232.486};
n=rows(Y); k=cols(Y); ni0=cols(X);
if rows(X) ne n; ni0=0; endif;
t=n-lag;
DY = Y - ( zeros(1,k) | Y[1:n-1,.] );
Z=zeros(n,1); if model gt 1; Z=Z~ones(n,1); endif;
if lag gt 1;
Z=Z~shiftr((ones(1,lag-1).*.DY)',
seqa(1,1,lag-1).*.ones(k,1),0)';
endif;
if ni0 gt 0; Z=Z~X; endif;
DY=DY[lag+1:n,.];
R0=DY;
Rk=Y[1:t,.]; if model eq 1; Rk=Rk~ones(t,1); endif;
if cols(Z) gt 1;
Z=Z[lag+1:n,2:cols(Z)];
ZZ=invpd(Z'Z);
R0=DY-Z*ZZ*Z'DY;
Rk=Y[1:t,.]-Z*ZZ*Z'Y[1:t,.];
if model eq 1;
Rk=Rk~(ones(t,1)-Z*ZZ*Z'ones(t,1));
endif;
endif;

S00=R0'R0/t; S0k=R0'Rk/t;
Skk=(invpd(Rk'Rk/t));
S01=S00-S0k*Skk*S0k';
S00=invpd(S00);
Skk=chol(Skk);
llf =-t/2*(k*ln(2*pi)+k+ln(det(S01)));
/* Call eigenvalue procedure */
{lambda,alpha}=eighv(Skk*S0k'S00*S0k*Skk');
/* Order eigenvalues in decreasing order */
lambda=rev(lambda); alpha=rev(alpha')';
/* Normalise */
alpha=Skk'alpha/sqrt(t); beta=S0k*alpha*t;
/* Calculate ECM terms */
ecm=Y; if model eq 1; ecm=ecm~ones(n,1); endif;
ecm=ecm*(alpha./(ones(rows(alpha),1)*alpha[1,.]));
/* Maximal eigenvalue statistic */
maxstat=zeros(k,2);
maxstat[.,1]=-t*ln(1-lambda[1:k]);
maxstat[.,2]=rev(cvmax[model+1,1:k]');
/* Trace statistic */
trstat=zeros(k,2);
trstat[.,1]=upmat(ones(k,k))*maxstat[.,1];
trstat[.,2]=rev(cvtrace[model+1,1:k]');
/* Model selection criteria */
crit=zeros(k+1,4);
/* Model log-likelihood */
crit[.,1]=llf -((trstat[.,1]/2)|0);
npar=(k*(k*lag+ni0)-(k-seqa(0,1,k+1))^2)/2;
if model eq 1;
npar=npar+(k-seqa(0,1,k+1))/2;
elseif model eq 2;
npar=npar+k/2;
endif;
/* Akaike Information Criterion */
crit[.,2]=crit[.,1]-2*npar;
/* Schwartz Bayesian Criterion */
crit[.,3]=crit[.,1]-npar*ln(t);
/* Hannan-Quinn Criterion */
crit[.,4]=crit[.,1]-2*npar*ln(ln(t));
retp(lambda,alpha,beta,ecm,maxstat,trstat,crit);
endp;

@== Kolmogorov-Smirnov Test ======@

proc(1)=ks(y);
local x1, x2, nn, nn1, x_min, x_max, low, up,step, eta, dd, ii, k1, k2, kst;
//x1=y[1:floor(n/2)];
//x2=y[floor(n/2)+1:n];
x1 = y[.,1];
x2 = y[.,2];
nn=rows(x1)-1; nn1=nn+1;
x_min=minc(y); x_max=maxc(y);
low=x_min; up=x_max;
step=(up-low)/nn;
eta=seqa(low,step,nn1);
dd=zeros(nn1,1);

ii=1;

do while ii<=nn1;
k1=x1 .<= eta[ii];
k2=x2 .<= eta[ii];
dd[ii]=abs(sumc(k1)-sumc(k2))/nn1;
ii=ii+1;
endo;
KST=sqrt(nn1/2)*maxc(dd);
retp(kst);
endp;

@== Computes the p-value of the Kolmogorov-Smirnov test ======@

proc(1)=pval(pex);
local p, s1, i, cdf_x;

p=120;
s1=0.0;

i=1;
do while i <= p;
s1=s1+2*(-1)^i*exp(-2*i^2*pex^2);
i=i+1;
endo;
s1=s1+1;
cdf_x=s1;
cdf_x=1-cdf_x;
retp(cdf_x);
endp;
