
Stability Analysis for VAR systems

For a set of n time series variables $y_t = (y_{1t}, y_{2t}, \ldots, y_{nt})'$, a VAR model of order p (VAR(p)) can be written as:

(1) $y_t = A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + u_t$

where the $A_i$'s are (n×n) coefficient matrices and $u_t = (u_{1t}, u_{2t}, \ldots, u_{nt})'$ is an unobservable
i.i.d. zero mean error term.
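As a concrete illustration, here is a minimal simulation sketch of a bivariate VAR(1) in Python; the coefficient matrix, sample size, and error distribution are hypothetical choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])   # hypothetical coefficient matrix
T, n = 200, 2

y = np.zeros((T, n))
u = rng.standard_normal((T, n))   # i.i.d. zero mean errors
for t in range(1, T):
    # VAR(1): y_t = A_1 y_{t-1} + u_t
    y[t] = A1 @ y[t - 1] + u[t]
```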
I. Stability of the Stationary VAR system:
(Glaister, Mathematical Methods for Economists)
The stability of a VAR can be examined by calculating the roots of:
$(I_n - A_1 L - A_2 L^2 - \cdots - A_p L^p) y_t = u_t$

The characteristic polynomial is defined as:

$\Pi(z) = I_n - A_1 z - A_2 z^2 - \cdots - A_p z^p$

The roots of $|\Pi(z)| = 0$ will give the necessary information about the stationarity or
nonstationarity of the process.
The necessary and sufficient condition for stability is that all characteristic roots lie
outside the unit circle. Then $\Pi$ is of full rank and all variables are stationary.
In this section, we assume this is the case. Later we allow for less than full rank matrices
(Johansen methodology).
Calculation of the eigenvalues and eigenvectors
Given an (nxn) square matrix A, we are looking for a scalar and a vector 0 c such
that c Ac then is an eigenvalues (or characteristic value or latent root) of A. Then
there will be up to n eigenvalues, which will give up to n linearly independent associated
eigenvectors such that
or
0 ] [ 0 c I A Ic Ac
.
For there to be a nontrivial solution, the matrix
] [ I A
must be singular. Then

must be
such that
0 I A
Ex: $A = \begin{pmatrix} 3 & 8 & 1 \\ 0 & 4 & 3 \\ 0 & 3 & 4 \end{pmatrix}$, so that

$A - \lambda I = \begin{pmatrix} 3-\lambda & 8 & 1 \\ 0 & 4-\lambda & 3 \\ 0 & 3 & 4-\lambda \end{pmatrix}$.

Expanding the determinant of this matrix gives the characteristic equation:

$(3-\lambda)(\lambda^2 - 8\lambda + 7) = 0 \;\Rightarrow\; \lambda_1 = 3, \; \lambda_2 = 7, \; \lambda_3 = 1$.
Note: an eigenvector is only determined up to a scalar multiple: if c is an eigenvector,
then $\alpha c$ is also an eigenvector, where $\alpha$ is a scalar:

$A(\alpha c) = \alpha(Ac) = \alpha(\lambda c) = \lambda(\alpha c)$.
The associated eigenvectors are those that satisfy the equations for the three distinct
values of the eigenvalues.
The eigenvector associated with $\lambda_1 = 3$, which satisfies the equation for this matrix, is
found from

$(A - 3I)c_1 = \begin{pmatrix} 0 & 8 & 1 \\ 0 & 1 & 3 \\ 0 & 3 & 1 \end{pmatrix} \begin{pmatrix} c_{11} \\ c_{12} \\ c_{13} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$.

Notice that only columns 2 and 3 are linearly independent (rank = 2), so we can choose the
first element of the c vector arbitrarily. Set $c_{11} = 1$; the other two elements are then 0:

$c_1 = (c_{11}, c_{12}, c_{13})' = (1, 0, 0)'$.
Similarly, the eigenvector associated with $\lambda_2 = 7$, which satisfies the equation for this
matrix, is found from

$(A - 7I)c_2 = \begin{pmatrix} -4 & 8 & 1 \\ 0 & -3 & 3 \\ 0 & 3 & -3 \end{pmatrix} \begin{pmatrix} c_{21} \\ c_{22} \\ c_{23} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$.

Notice that rk(A − 7I) = 2 again, because this time the last two rows are linearly dependent.
Thus only the 2×2 matrix in the upper-left block is nonsingular. We can delete the last row and
move $c_{23}$ multiplied by the last column to the RHS. Now the first two elements will be
expressed in terms of the last element. We can fix $c_{23}$ arbitrarily and solve for the two
others: assume $c_{23} = 4$. Then $c_2 = (9, 4, 4)'$ is an eigenvector corresponding to the
eigenvalue $\lambda_2 = 7$.

Reminder: for a characteristic equation of the type $a\lambda^2 + b\lambda + c = 0$,

$\lambda_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

Real roots: $\lambda_{1,2} = \alpha \pm \beta$. Imaginary roots: $\lambda_{1,2} = \alpha \pm \beta i$, where $i = \sqrt{-1}$. Modulus = $\sqrt{\alpha^2 + \beta^2}$.

We can find similarly the last eigenvector to be $c_3 = (7, -2, 2)'$.
Jordan Canonical Form:
Form a new matrix C whose columns are the three eigenvectors:

$C = \begin{pmatrix} 1 & 9 & 7 \\ 0 & 4 & -2 \\ 0 & 4 & 2 \end{pmatrix}$.

You can calculate to find that the matrix product $C^{-1}AC$ is diagonal:

$C^{-1}AC = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$

Thus, for any square matrix A, there is a nonsingular matrix C such that
(i) $C^{-1}AC$ is diagonal, with the eigenvalues on the diagonal.
(ii) The eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are
orthogonal (linearly independent).
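As a quick numerical check of this example, here is a minimal numpy sketch; note that numpy normalizes eigenvectors to unit length, so its eigenvector columns differ from our C only by scalar multiples:

```python
import numpy as np

A = np.array([[3.0, 8.0, 1.0],
              [0.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])

# Eigenvalues of A (returned in no particular order)
eigvals, _ = np.linalg.eig(A)
print(eigvals)   # 3, 7, 1

# Hand-derived eigenvectors as the columns of C
C = np.array([[1.0, 9.0, 7.0],
              [0.0, 4.0, -2.0],
              [0.0, 4.0, 2.0]])

# C^{-1} A C is diagonal with the eigenvalues on the diagonal
print(np.round(np.linalg.inv(C) @ A @ C, 10))   # diag(3, 7, 1)
```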
II. Stability Conditions for Stationary and Nonstationary VAR Systems
(Johnston and DiNardo, Ch. 9 + Appendix)
To discuss these conditions we start with simple models and generalize. We will see:
VAR(1) with 2 variables:
VAR(2) with k variables (ex: VAR(2) with 2 variables)
VAR(p) with k variables.
1. VAR(1) with two variables (p=1, k=2).

(1) $y_{1t} = b_{10} + a_{11} y_{1,t-1} + a_{12} y_{2,t-1} + \varepsilon_{1t}$

(2) $y_{2t} = b_{20} + a_{21} y_{1,t-1} + a_{22} y_{2,t-1} + \varepsilon_{2t}$

or:

(3) $y_t = b + A y_{t-1} + \varepsilon_t$, which can be written with the lag operator as

(4) $(I - AL) y_t = b + \varepsilon_t$
Each variable is expressed as a linear combination of lagged values of itself and all other
variables (plus intercepts, dummies, time trends). The dynamics of the system will depend on
the properties of the A matrix.
The error term is a vector white noise process with $E(\varepsilon_t) = 0$ and

$E(\varepsilon_t \varepsilon_s') = \begin{cases} \Sigma & t = s \\ 0 & t \neq s \end{cases}$

where the covariance matrix $\Sigma$ is assumed to be positive definite: the errors are
serially uncorrelated but can be contemporaneously correlated.
Solution to (4):
(i) Homogeneous equation:
Omit the error term: $y_t = b + A y_{t-1}$. The simplest solution is $y_t = y_{t-1} = \cdots = \bar{y}$. Then,

(5) $\bar{y} = (I - A)^{-1} b$ if $(I - A)$ is nonsingular.

As a solution to the homogeneous equation try $y_t = d\lambda^t$. Substituting it into the
homogeneous equation gives

$(A - \lambda I)d = 0$ --- eigenvalues

The nontrivial solution requires the determinant to be zero:

$|A - \lambda I| = 0$

Get the eigenvalues ($\lambda$'s).
(ii) Substitute the eigenvalues into the homogeneous system, to get the corresponding
eigenvectors ($c$'s).
(iii) After calculating the nonhomogeneous solution and adding it to the homogeneous
solution, we obtain the complete solution (in matrix form):

(6) $y_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \bar{y}$

$y_t \to \bar{y}$ (the LR value) as t rises if the two eigenvalues have modulus < 1.
We can rewrite $(I - AL) y_t = b$ in (4) as a polynomial to see the stability conditions in
terms of the eigenvalues:

$B(L) y_t = b$, where $B(L) = I - AL$ and $|B(L)| = (1 - \lambda_1 L)(1 - \lambda_2 L)$.
The stability condition:
(i) Modulus of all $\lambda_i$ < 1: $(I - A)$ is nonsingular, its determinant is not 0, and the system is stationary.
In (6), y converges to $\bar{y}$.
(ii) Modulus of some $\lambda_i$ > 1: $(I - A)$ is nonsingular, but the system is explosive and there is no convergence.
This is because one or more of the $\lambda_i^t$ grows without bound as t increases, and so does y
from (6). This is not a typical process observed in macro/finance series, therefore we do
not consider this case.
(iii) Modulus of some $\lambda_i$ = 1: unit root; $(I - A)$ is singular, the determinant is 0, and y is
nonstationary. We need to look into the VECM specification. A lot of the
macro/finance models fall into this category.
(iv) Modulus $\lambda_1 = \lambda_2 = 1$: I(2) variables, VAR is I(1). In general A is not
symmetric. Look for cointegrating vectors.
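A minimal sketch of this classification in numpy (the A matrix is a hypothetical example; `np.abs` returns the modulus, which also covers complex eigenvalues):

```python
import numpy as np

def classify_var1(A, tol=1e-8):
    """Classify y_t = b + A y_{t-1} + e_t by the moduli of A's eigenvalues."""
    moduli = np.abs(np.linalg.eigvals(A))
    if np.all(moduli < 1 - tol):
        return "stationary"
    if np.any(moduli > 1 + tol):
        return "explosive"
    return "unit root(s): nonstationary, look into a VECM"

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])   # hypothetical
print(classify_var1(A))      # stationary
```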
Relation between VAR variables and eigenvalues
Define the eigenvalues and the corresponding eigenvectors of the matrix A as:

$\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ and $C = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}$

If the eigenvalues are distinct, then the eigenvectors are linearly independent, and C is
nonsingular: $C^{-1}AC = \Lambda$ and $A = C \Lambda C^{-1}$.
Theorem: (i) For any square matrix A, there is a nonsingular matrix C such that $C^{-1}AC$ is diagonal with the
eigenvalues on the diagonal. (ii) The eigenvectors corresponding to distinct eigenvalues of a symmetric matrix
are orthogonal (linearly independent).
Define a new vector of variables w such that $w_t = C^{-1} y_t$, or $y_t = C w_t$:
each y is a linear combination of the w's (or each w is a linear combination of the y's).

Multiply (3), $y_t = b + A y_{t-1} + \varepsilon_t$, by $C^{-1}$:

$C^{-1} y_t = C^{-1} b + C^{-1} A C \, C^{-1} y_{t-1} + C^{-1} \varepsilon_t$

$w_t = b^* + \Lambda w_{t-1} + e_t$, where $b^* = C^{-1} b$, $\Lambda = C^{-1} A C$, $e_t = C^{-1} \varepsilon_t$,

or:

(7) $w_{1t} = b_1^* + \lambda_1 w_{1,t-1} + e_{1t}$

$w_{2t} = b_2^* + \lambda_2 w_{2,t-1} + e_{2t}$
(i) $|\lambda_i| < 1$ for i = 1, 2
Both eigenvalues have modulus < 1.
Each w is therefore I(0), and since the y's are linear combinations of the w's, each y is
I(0). You can therefore apply the standard inference procedures and estimate
each equation separately. As we saw above, $(I - A)$ is nonsingular, it is full
rank (= 2 here), and a unique static equilibrium exists: $\bar{y} = (I - A)^{-1} b$. The
values of $\lambda$ are such that any shock dies out quickly and deviations from
equilibrium are transitory.
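For instance, a small sketch computing the static equilibrium with numpy; A and b are hypothetical values chosen so that both eigenvalues lie inside the unit circle:

```python
import numpy as np

# Hypothetical stable VAR(1): y_t = b + A y_{t-1} + e_t
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
b = np.array([1.0, 2.0])

# Static (long-run) equilibrium: y_bar = (I - A)^{-1} b
y_bar = np.linalg.solve(np.eye(2) - A, b)
print(y_bar)
```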
(ii) $|\lambda_i| > 1$ for i = 1 or 2
One of the eigenvalues has modulus > 1. Since each y is a linear combination of
both w's, y is unbounded and the process is explosive.
(iii) $\lambda_1 = 1$ and $|\lambda_2| < 1$
Now $w_1$ is a random walk with drift, or I(1), and $w_2$ is I(0). Each y is I(1), since each y
is a linear combination of both w's; therefore the VAR is nonstationary.
Is there a linear combination of $y_{1t}$ and $y_{2t}$ that removes the stochastic trend and makes
it I(0), i.e. are the two variables cointegrated?
Consider again

$w_{t-1} = C^{-1} y_{t-1} = \begin{pmatrix} c^*_{11} y_{1,t-1} + c^*_{12} y_{2,t-1} \\ c^*_{21} y_{1,t-1} + c^*_{22} y_{2,t-1} \end{pmatrix}$

where the $c^*$ represent the coefficients in the $C^{-1}$ matrix. We know that $w_2$ is I(0), thus
$[c^*_{21} \;\; c^*_{22}]$ is a cointegrating vector.
Look for a relation between the CI vector $[c^*_{21} \;\; c^*_{22}]$ and the $\Pi$ matrix, such that
$\Pi = [\,\cdot\,]\,[c^*_{21} \;\; c^*_{22}]$.
Reparameterize equation (3) to give:

(8) $\Delta y_t = b + \Pi y_{t-1} + \varepsilon_t$, where $\Pi = A - I$.

The eigenvalues of $\Pi$ are the complements of the eigenvalues of A: $\mu_i = \lambda_i - 1$.
Since $\lambda_1 = 1$, the eigenvalues of $\Pi$ are 0 and $\lambda_2 - 1$. Thus, it is a singular matrix with
rank 1. Let us decompose $\Pi$. Since $\Pi = A - I$ and $A = C \Lambda C^{-1}$, we can write

$\Pi = A - I = C \Lambda C^{-1} - C I C^{-1} = C(\Lambda - I) C^{-1}$.
Thus:

(9) $\Pi = C \begin{pmatrix} 0 & 0 \\ 0 & \lambda_2 - 1 \end{pmatrix} C^{-1} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & \lambda_2 - 1 \end{pmatrix} \begin{pmatrix} c^*_{11} & c^*_{12} \\ c^*_{21} & c^*_{22} \end{pmatrix} = \begin{pmatrix} c_{12}(\lambda_2 - 1) \\ c_{22}(\lambda_2 - 1) \end{pmatrix} [\,c^*_{21} \;\; c^*_{22}\,]$

So $\Pi$, which has rank 1, is factorized into the product of a column vector $\alpha$ and a
row vector $\beta'$, called an outer product:

$\Pi = \alpha \beta'$

The row vector $\beta'$ = the cointegrating vector.
The column vector $\alpha$ = the loading matrix = the weights with which the CI vector
enters into each equation of the VAR.
-----------
Note: compare (9) to the case where $\Pi$ is full rank, with $\lambda_1 \neq 1$:

$\Pi = C \begin{pmatrix} \lambda_1 - 1 & 0 \\ 0 & \lambda_2 - 1 \end{pmatrix} C^{-1} = \begin{pmatrix} c_{11}(\lambda_1 - 1) & c_{12}(\lambda_2 - 1) \\ c_{21}(\lambda_1 - 1) & c_{22}(\lambda_2 - 1) \end{pmatrix} \begin{pmatrix} c^*_{11} & c^*_{12} \\ c^*_{21} & c^*_{22} \end{pmatrix}$

You can see why $\Pi = \alpha\beta'$ is said to be of reduced rank.
------------
Combining (8) and (9) we get the vector error correction model of the VAR:

(10) $\Delta y_{1t} = b_1 + c_{12}(\lambda_2 - 1)(c^*_{21} y_{1,t-1} + c^*_{22} y_{2,t-1}) + \varepsilon_{1t}$

$\Delta y_{2t} = b_2 + c_{22}(\lambda_2 - 1)(c^*_{21} y_{1,t-1} + c^*_{22} y_{2,t-1}) + \varepsilon_{2t}$

or, equivalently:

$\Delta y_{1t} = b_1 + c_{12}(\lambda_2 - 1) w_{2,t-1} + \varepsilon_{1t}$

$\Delta y_{2t} = b_2 + c_{22}(\lambda_2 - 1) w_{2,t-1} + \varepsilon_{2t}$

All variables here are I(0): the y's in first differences and the w's.
The w (EC term) measures the extent to which the y's deviate from their equilibrium LR
values.
Although all the variables are I(0), the standard inference procedures are not valid
(similar to the univariate case, where in order to test whether a series is I(1) we have
to use an ADF test and not the t statistic on the AR coefficient).
--See example below
(iv) Repeated unitary eigenvalues: $\lambda_1 = \lambda_2 = 1$
We can no longer have a diagonal eigenvalue matrix as before. But it is possible to
find a nonsingular matrix P such that $P^{-1}AP = J$ and $A = PJP^{-1}$, where $J = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ (the
Jordan matrix). The problem with this case is that although $\Pi$ is still rank 1, the
transformation of the y's into w's leads to I(2) variables; the cointegration vector gives a
linear combination of I(2) variables and is thus I(1) and not I(0). Thus y is CI(2,1):
the variables in the VAR are all I(1), but the inference procedures are nonstandard.
Example of a case with $\lambda_1 = 1$ and $|\lambda_2| < 1$
Find the matrices $\alpha$ and $\beta$ from a VAR(1) with k = 2:

(11) $y_{1t} = 1.2\, y_{1,t-1} - 0.2\, y_{2,t-1} + e_{1t}$

(11') $y_{2t} = 0.6\, y_{1,t-1} + 0.4\, y_{2,t-1} + e_{2t}$

Reparametrizing the VAR into a VECM gives us:

$\Delta y_{1t} = 0.2\, y_{1,t-1} - 0.2\, y_{2,t-1} + e_{1t}$

$\Delta y_{2t} = 0.6\, y_{1,t-1} - 0.6\, y_{2,t-1} + e_{2t}$
in matrix form:

(12) $\begin{pmatrix} \Delta y_{1t} \\ \Delta y_{2t} \end{pmatrix} = \begin{pmatrix} 0.2 & -0.2 \\ 0.6 & -0.6 \end{pmatrix} \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix} + \begin{pmatrix} e_{1t} \\ e_{2t} \end{pmatrix}$

or: $\Delta Y_t = \Pi Y_{t-1} + u_t$
But we cannot infer the loading matrix and the cointegrating matrix separately from
this. To find $\alpha$ and $\beta$ separately, we need to calculate the eigenvector matrix.

Get the eigenvalues from the solution to $|A - \lambda I| = 0$:

$|A - \lambda I| = \begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{vmatrix} = \begin{vmatrix} 1.2 - \lambda & -0.2 \\ 0.6 & 0.4 - \lambda \end{vmatrix} = 0 \;\Rightarrow\; \lambda_1 = 1, \; \lambda_2 = 0.6$
Eigenvectors corresponding to $\lambda_1 = 1$:

$\begin{pmatrix} 1.2 - 1 & -0.2 \\ 0.6 & 0.4 - 1 \end{pmatrix} \begin{pmatrix} c_{11} \\ c_{12} \end{pmatrix} = \begin{pmatrix} 0.2 & -0.2 \\ 0.6 & -0.6 \end{pmatrix} \begin{pmatrix} c_{11} \\ c_{12} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$

There is linear dependency, so set $c_{12} = 1$: $c_1 = [1 \;\; 1]'$.
Eigenvectors corresponding to $\lambda_2 = 0.6$:

$\begin{pmatrix} 0.6 & -0.2 \\ 0.6 & -0.2 \end{pmatrix} \begin{pmatrix} c_{21} \\ c_{22} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$

There is linear dependency, so set $c_{21} = 1$: $c_2 = [1 \;\; 3]'$.
The eigenvector matrix and its inverse are:

$C = \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$, $C^{-1} = \begin{pmatrix} 1.5 & -0.5 \\ -0.5 & 0.5 \end{pmatrix}$
Now we can write the VAR in VECM form by decomposing $\Pi$:

$\Delta Y_t = \Pi Y_{t-1} + u_t = C \begin{pmatrix} 0 & 0 \\ 0 & \lambda_2 - 1 \end{pmatrix} C^{-1} Y_{t-1} + u_t$

$\Delta Y_t = \begin{pmatrix} -0.4 \\ -1.2 \end{pmatrix} [\,-0.5 \;\; 0.5\,] \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix} + u_t$

This is the same expression as in (12), but now we have both the loading and the
cointegrating matrices:

$\alpha = \begin{pmatrix} -0.4 \\ -1.2 \end{pmatrix}$ and $\beta' = [\,-0.5 \;\; 0.5\,]$
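A numpy sketch reproducing this decomposition; everything is taken from the example above, and only the eigenvector normalization is a choice, since $\alpha$ and $\beta$ are only identified up to a scalar:

```python
import numpy as np

A = np.array([[1.2, -0.2],
              [0.6,  0.4]])
Pi = A - np.eye(2)                  # [[0.2, -0.2], [0.6, -0.6]]

# Hand-normalized eigenvectors c1 = (1, 1)', c2 = (1, 3)' as columns
C = np.array([[1.0, 1.0],
              [1.0, 3.0]])
C_inv = np.linalg.inv(C)            # [[1.5, -0.5], [-0.5, 0.5]]
lam2 = 0.6

alpha = (lam2 - 1) * C[:, 1:2]      # loading matrix: (-0.4, -1.2)'
beta_T = C_inv[1:2, :]              # cointegrating vector: [-0.5, 0.5]
print(alpha @ beta_T)               # recovers Pi
```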
2. VAR(2) with k variables:

(13) $y_t = b + A_1 y_{t-1} + A_2 y_{t-2} + \varepsilon_t$

Note: you can also add any deterministic terms, such as trends and breaks, by specifying the
model as:

$y_t = A_1 y_{t-1} + A_2 y_{t-2} + \Phi D_t + \varepsilon_t$

Set the error term = 0 and examine the properties of the system.
We still have the LR solution (or the particular solution) as in (5), $\bar{y} = \Pi^{-1} b$, but now
$\Pi = I - A_1 - A_2$.
$\bar{y}$ exists if $\Pi^{-1}$ exists. To see this, look at the eigenvalues.
We again try the same solution for the homogeneous equation, $y_t = c\lambda^t$, and substitute it
in to get the characteristic equation

$|\lambda^2 I - \lambda A_1 - A_2| = 0$

The number of roots = pk, where p = order of the VAR and k = # of variables.
Here we will have 2k roots.
If all eigenvalues have modulus < 1, then $\Pi$ is nonsingular and the solution

$y_t = \sum_{i=1}^{2k} c_i \lambda_i^t + \bar{y}$

will converge to $\bar{y}$ as t grows. The analysis w.r.t. the modulus of the roots (< 1, = 1,
> 1) is the same as in the VAR(1) case.
If the process is stationary, then we can invert the VAR model and express y as a
function of present and past shocks, and the exogenous (deterministic)
components = Impulse Responses:
Ex: Calculate the roots of a 2-dimensional VAR(2), n = p = 2, and find the effect of a
shock on a dependent variable (Juselius Ch. 3).
The characteristic function $\Pi(z) = I - A_1 z - A_2 z^2$, where $z = 1/\lambda$, is

$\Pi(z) = I - \begin{pmatrix} a_{1,11} & a_{1,12} \\ a_{1,21} & a_{1,22} \end{pmatrix} z - \begin{pmatrix} a_{2,11} & a_{2,12} \\ a_{2,21} & a_{2,22} \end{pmatrix} z^2 = \begin{pmatrix} 1 - a_{1,11}z - a_{2,11}z^2 & -(a_{1,12}z + a_{2,12}z^2) \\ -(a_{1,21}z + a_{2,21}z^2) & 1 - a_{1,22}z - a_{2,22}z^2 \end{pmatrix}$

Therefore

$|\Pi(z)| = (1 - a_{1,11}z - a_{2,11}z^2)(1 - a_{1,22}z - a_{2,22}z^2) - (a_{1,12}z + a_{2,12}z^2)(a_{1,21}z + a_{2,21}z^2)$

Regrouping similar terms:

$|\Pi(z)| = 1 - a_1 z - a_2 z^2 - a_3 z^3 - a_4 z^4 = (1 - \rho_1 z)(1 - \rho_2 z)(1 - \rho_3 z)(1 - \rho_4 z)$

The determinant is a 4th order polynomial in z, giving 4 characteristic roots:

$z_1 = 1/\rho_1; \quad z_2 = 1/\rho_2; \quad z_3 = 1/\rho_3; \quad z_4 = 1/\rho_4$
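For instance, the four roots can be computed directly with numpy once the determinant polynomial is written out; the coefficients $a_1, \ldots, a_4$ below are hypothetical:

```python
import numpy as np

# |Pi(z)| = 1 - a1*z - a2*z^2 - a3*z^3 - a4*z^4, hypothetical coefficients
a1, a2, a3, a4 = 0.8, 0.15, 0.0, -0.05

# np.roots expects coefficients from the highest power down
z_roots = np.roots([-a4, -a3, -a2, -a1, 1.0])
print(z_roots, np.abs(z_roots))   # stationary if all |z| > 1
```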
Effects of a shock (or structural change dummy) on a dependent variable:
If $\Pi$ is invertible (all roots of $\Pi(z)$ outside the unit circle), we can write $y_t = \Pi(L)^{-1} \varepsilon_t$.
We can then calculate the effect of a shock on $y_{it}$:

$y_{it} = \frac{a_{ij}(L)}{\Pi(z)}\,\varepsilon_{jt} = \frac{a_{ij}(L)}{(1-\rho_1 z)(1-\rho_2 z)(1-\rho_3 z)(1-\rho_4 z)}\,\varepsilon_{jt}$ for t = 1, ..., T.

We are assuming that all roots $\rho_i$ have modulus less than 1. The characteristic roots
give information about the dynamic behavior of the process. To see how the shock is
propagated, expand the last component:

$(1 - \rho_1 L)^{-1}\varepsilon_{jt} = (1 + \rho_1 L + \rho_1^2 L^2 + \cdots)\varepsilon_{jt}$.

You will have to do the same thing with each root. Thus, each shock will affect
current and future values of $y_i$.
The persistence of the shock depends on the magnitude of the roots: the larger they
are, the more persistent the shocks will be.
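A small numerical illustration of this persistence point; the two root values are hypothetical:

```python
import numpy as np

# Impulse weights from expanding (1 - rho*L)^{-1} = 1 + rho*L + rho^2*L^2 + ...
horizons = np.arange(11)
for rho in (0.5, 0.95):                  # hypothetical roots
    weights = rho ** horizons
    # the closer |rho| is to 1, the slower the weights decay
    print(rho, np.round(weights, 3))
```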
- If the roots $\rho_i$ are real and < 1, the shock will die out exponentially.
- If one or more roots $\rho_i$ are imaginary, then a shock will be cyclical but
exponentially declining.
- If one or more roots $\rho_i$ lie on the unit circle, the shock will be permanent and
$y_i$ will show nonstationary behavior. The VAR is not invertible, and we then need to
look into the VECM.
We can also calculate the roots by reformulating the VAR(p) into the companion
matrix VAR(1) form and solving for the eigenvalues.
Alternative approach: companion matrix.
A VAR(p) can be transformed into a VAR(1). Consider equation (13) again. We can
rewrite it as:

$y_t = A_1 y_{t-1} + A_2 y_{t-2} + u_t$

$y_{t-1} = y_{t-1}$

In matrix form:

$\begin{pmatrix} y_t \\ y_{t-1} \end{pmatrix} = \begin{pmatrix} A_1 & A_2 \\ I & 0 \end{pmatrix} \begin{pmatrix} y_{t-1} \\ y_{t-2} \end{pmatrix} + \begin{pmatrix} u_t \\ 0 \end{pmatrix}$
Calculate the eigenvalues $\rho_i$ from the coefficient matrix:

$\begin{vmatrix} A_1 - \rho I & A_2 \\ I & -\rho I \end{vmatrix} = 0 \;\Rightarrow\; |\rho^2 I - \rho A_1 - A_2| = 0 \;\Rightarrow\; (\rho - \rho_1)(\rho - \rho_2)\cdots = 0$.

Now we get the roots $\rho$ directly, instead of the z's, which were the inverses of the roots
obtained by solving the characteristic polynomial. Johansen and Juselius refer to the
$\rho$'s as eigenvalue roots and to the z's as characteristic roots.
In the case of the companion matrix, there are pk roots (2k here). If the roots of the characteristic
polynomial are outside the unit circle, then the eigenvalues of the companion matrix are
inside the unit circle and the system is stable.
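A minimal sketch of this check; $A_1$ and $A_2$ are hypothetical coefficient matrices, and the companion eigenvalues $\rho$ are the reciprocals of the roots z of $|\Pi(z)| = 0$:

```python
import numpy as np

# Hypothetical VAR(2) coefficients with k = 2 variables
A1 = np.array([[0.5, 0.1],
               [0.4, 0.5]])
A2 = np.array([[0.2, 0.0],
               [0.0, 0.2]])

k = A1.shape[0]
# Companion form [[A1, A2], [I, 0]], a (pk x pk) matrix
companion = np.block([[A1, A2],
                      [np.eye(k), np.zeros((k, k))]])

rho = np.linalg.eigvals(companion)   # eigenvalue roots
print(np.abs(rho))                   # all < 1 here: the system is stable
print(np.abs(1 / rho))               # characteristic roots z = 1/rho, all > 1
```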
To recap:
- The solution to $|I - zA| = 0$ gives the stationary roots (characteristic roots) outside
the unit circle.
- The solution to $|A - \rho I| = 0$ gives the stationary roots (eigenvalues) inside the unit
circle.
- If the roots of $\Pi(z)$ are all outside the unit circle, or the eigenvalues of the
companion matrix are inside the unit circle, the process is stationary.
- If one or more of the roots of $\Pi(z)$, or of the eigenvalues of the companion matrix, are on the
unit circle, then the process is nonstationary.
- If one or more roots of $\Pi(z)$ are inside the unit circle, or the eigenvalues of the
companion matrix are outside the unit circle, the process is explosive.