
OVERVIEW

The Indian capital market has undergone tremendous changes since 1991, when the government adopted liberalization and globalization policies. As a result, the stock market has grown in importance from an aggregate-economy point of view. The stock market has become a key driver of the modern market-based economy and is one of the major sources of raising resources for Indian corporates, thereby enabling financial development and economic growth.

Speaking of economic growth, it has been observed that every month the newspapers invariably carry one of the following two headlines: ‘Sensex rises on strong IIP growth’ or ‘Sensex tumbles on weak IIP data’. It therefore becomes important to assess the impact of the stock market on economic growth. Hence we have used IIP as a proxy for economic growth.

The IIP (Index of Industrial Production) denotes the total production activity in the country during a particular period as compared to a reference period. It helps us understand the general level of industrial activity in the economy. The products included in the calculation of IIP can be segregated into three major sectors: Manufacturing, Mining & Quarrying, and Electricity. Another way of categorizing the items used in the calculation of IIP is the ‘use-based classification’, with categories such as basic goods, capital goods, intermediate goods, consumer durables and consumer non-durables.

The growing importance of stock markets in developing countries over the last few decades has shifted researchers’ focus towards exploring the relationship between stock market development and economic growth. The motivation is derived primarily from the policy implications of the findings of such studies for developing economies.

This report attempts to establish a link between stock market development and economic growth in the Indian economy, using IIP as the indicator of the latter. It focuses only on stock market development and its causal linkage with economic growth.

The rest of the study is organized as follows.


SCOPE OF STUDY

The current study unravels the linkage between the stock market and macroeconomic variables in the Indian context using techniques such as regression, the Granger causality test and unit root tests (including the ADF test), carried out in SPSS. A time span of more than fourteen years, from January 2004 to August 2018, has been chosen for this study, and monthly data are used to portray a broader view of the relationship.

The study does not assume any a priori relationship between the macroeconomic variables and the stock market and is open to a possible two-way relationship between them, which has been tested through the Granger causality test.

DATA AND METHODOLOGY

The present study uses monthly data for the period January 2004 to August 2018.
The collected data are seasonally adjusted to correct for seasonal variations. The required data on the Index of Industrial Production (IIP) are collected from the Reserve Bank of India (RBI) publication ‘Handbook of Statistics on Indian Economy, 2017-18’. Data on the BSE SENSEX index are collected from the BSE (Bombay Stock Exchange).

Variables:
For the monthly analysis, IIP (Index of Industrial Production, used as a proxy for GDP) and the BSE SENSEX are used.

With a view to accomplishing the predetermined objectives of our research, different sets of techniques and tests have been adopted, as outlined in the sections below.
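
As an illustration of the data preparation described above, the following minimal sketch (in Python; the study itself carries out its analysis in SPSS) loads the two monthly series and removes their seasonal component. The file names and column names are hypothetical placeholders.

# Illustrative data preparation in Python (the study itself uses SPSS).
# File and column names below are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Monthly IIP (RBI Handbook of Statistics) and BSE SENSEX values, January 2004 - August 2018
iip = pd.read_csv("iip_monthly.csv", index_col="date", parse_dates=True)["iip"]
sensex = pd.read_csv("sensex_monthly.csv", index_col="date", parse_dates=True)["close"]

# Remove the seasonal component of each series (additive decomposition, period = 12 months)
iip_sa = iip - seasonal_decompose(iip, model="additive", period=12).seasonal
sensex_sa = sensex - seasonal_decompose(sensex, model="additive", period=12).seasonal

data = pd.concat({"iip": iip_sa, "sensex": sensex_sa}, axis=1).dropna()
print(data.describe())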

ADF

Generally, a data series is called stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the lag between them, not on the actual time at which it is computed.
One of the common methods to determine whether a time series is stationary or not is the unit root test. There are numerous unit root tests; one of the most popular among them is the Augmented Dickey-Fuller (ADF) test, an extension of the Dickey-Fuller test. The following ADF equation checks the stationarity of time series data:

ΔY_t = α + βT + ρ·Y_{t−1} + Σ_{i=1}^{k} γ_i·ΔY_{t−i} + e_t

where Y_t is the variable in period t, T denotes a time trend, Δ is the difference operator, e_t is an error disturbance with mean zero and variance σ², and k represents the number of lags of the differences in the ADF equation.
The null and alternative hypotheses are as follows:

H0: ρ = 0 [Variable is not stationary]

Ha: ρ < 0 [Variable is stationary]
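
A minimal sketch of the ADF test, applied to the seasonally adjusted DataFrame built in the earlier data sketch, is given below; it uses the adfuller function from statsmodels and is only an illustration of the procedure (the study itself runs the tests in SPSS).

# Augmented Dickey-Fuller test: H0 = the series has a unit root (not stationary)
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series, regression="ct", autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}, lags used = {usedlag}")
    # Reject H0 (conclude stationarity) when the p-value falls below the chosen significance level
    print("  ->", "stationary" if pvalue < 0.05 else "not stationary (unit root)")

adf_report(data["iip"], "IIP (level)")
adf_report(data["iip"].diff().dropna(), "IIP (first difference)")
adf_report(data["sensex"], "SENSEX (level)")
adf_report(data["sensex"].diff().dropna(), "SENSEX (first difference)")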

PP-TEST

The Phillips-Perron (PP) unit root tests differ from the ADF tests mainly in how they deal with serial correlation and heteroskedasticity in the errors. The test regression for the PP tests is

Δy_t = β′D_t + π·y_{t−1} + u_t

where D_t contains the deterministic terms (constant and/or trend) and u_t is I(0) and may be heteroskedastic. The PP tests correct for any serial correlation and heteroskedasticity in the errors u_t of the test regression by directly modifying the test statistics t_{π=0} and T·π̂. These modified statistics, denoted Z_t and Z_π, are given by

Z_t = (σ̂²/λ̂²)^{1/2} · t_{π=0} − (1/2) · ((λ̂² − σ̂²)/λ̂²) · (T·SE(π̂)/σ̂²)

Z_π = T·π̂ − (T²·SE(π̂)/(2σ̂²)) · (λ̂² − σ̂²)

The terms σ̂² and λ̂² are consistent estimates of the variance parameters (the error variance and the long-run variance of u_t, respectively).

The null and alternative hypotheses are as follows:

H0: π = 0 [Series is not stationary]

Ha: π < 0 [Series is stationary]
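
statsmodels does not include a Phillips-Perron test, but the third-party arch package does; the sketch below is only an illustration of the procedure, applied to the same assumed DataFrame.

# Phillips-Perron test: H0 = unit root, with a non-parametric correction
# for serial correlation and heteroskedasticity in the errors.
from arch.unitroot import PhillipsPerron

pp = PhillipsPerron(data["iip"], trend="ct", test_type="tau")  # Z_t (tau) statistic, constant + trend
print(pp.summary())
print("stationary" if pp.pvalue < 0.05 else "not stationary (unit root)")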

KPSS TEST

The KPSS test (Kwiatkowski et al., 1992) differs from the above unit root tests in that the series y_t is assumed to be (trend-)stationary under the null hypothesis. The KPSS test is based on the residuals from the OLS regression of y_t on the exogenous variables x_t:

y_t = x_t′δ + u_t

The LM statistic can be defined as

LM = Σ_t S(t)² / (T²·f_0)

where f_0 is an estimator of the residual spectrum at zero frequency and S(t) is a cumulative residual function:

S(t) = Σ_{r=1}^{t} û_r
The null and alternative hypotheses are as follows:

H0: Series is stationary

Ha: Series is not stationary
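
The KPSS test is available in statsmodels, as sketched below on the same assumed DataFrame; note that because the null hypothesis here is stationarity, the decision rule is the reverse of the ADF and PP tests.

# KPSS test: H0 = the series is (trend-)stationary
from statsmodels.tsa.stattools import kpss

stat, pvalue, nlags, crit = kpss(data["iip"], regression="ct", nlags="auto")  # "ct": trend-stationary null
print(f"KPSS statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# Here a small p-value rejects stationarity -- the opposite convention to the ADF and PP tests
print("not stationary" if pvalue < 0.05 else "stationary")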

COINTEGRATION

Consider two I(1) variables, y_t as the dependent and x_t as the explanatory variable, for simplicity without a constant. If we form a linear combination of them,

y_t = α̂·x_t + û_t

or

û_t = y_t − α̂·x_t ,

the combination û_t will normally still be I(1), since both variables have unbounded variance. However, if the coefficient α̂ is such that the bulk of the long-run components of y_t and x_t cancel out, the combination could be I(0); more precisely, the difference û_t would be I(0). If a linear combination of I(1) variables is stationary, the variables are said to be cointegrated.
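
The idea can be illustrated with a small simulation (the series and the coefficient below are artificial): two series that share a common stochastic trend are individually I(1), yet their residual linear combination is I(0).

# Two I(1) series sharing a common stochastic trend: each is non-stationary,
# but the right linear combination of them is I(0), i.e. they are cointegrated.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))     # random walk -> I(1)
y = 2.0 * x + rng.normal(size=500)      # y = 2x + stationary noise

alpha = np.polyfit(x, y, 1)[0]          # OLS estimate of the cointegrating coefficient
u = y - alpha * x                       # the linear combination (residual)

for name, series in [("x", x), ("y", y), ("u = y - alpha*x", u)]:
    print(name, "ADF p-value:", round(adfuller(series)[1], 4))
# x and y fail to reject the unit root; the combination u rejects it (stationary).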

ENGLE-GRANGER COINTEGRATION TEST

Engle and Granger (1987) suggest a cointegration test which consists of estimating the cointegrating regression by OLS, obtaining the residual û_t and applying a unit root test to û_t. To test the equilibrium assertion, they propose testing the null that û_t has a unit root against the alternative that it has a root less than unity. Since the û_t are themselves estimates, new critical values need to be tabulated; thus one has to use the corrected MacKinnon critical values. We have the equation û_t = y_t − α̂·x_t, where û_t follows an autoregressive process

û_t = ρ̂·û_{t−1} + ε_t

with ε_t a white-noise error term.

One could assume three possibilities: that ρ̂ is smaller than, equal to, or greater than one.

Only if |ρ̂| < 1 does a cointegration relationship exist. If one wants to derive more information about the dynamic behaviour of the variables, one has to apply an error-correction model.
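
In Python, the two-step Engle-Granger procedure with MacKinnon critical values is available as the coint function in statsmodels; the sketch below applies it to the two illustrative study variables from the earlier data sketch.

# Engle-Granger cointegration test: OLS of one variable on the other, then a
# unit root test on the residuals, judged against MacKinnon critical values.
from statsmodels.tsa.stattools import coint

t_stat, pvalue, crit_values = coint(data["sensex"], data["iip"], trend="c")
print(f"Engle-Granger t-statistic = {t_stat:.3f}, p-value = {pvalue:.3f}")
print("cointegrated" if pvalue < 0.05 else "no cointegration")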

ERROR CORRECTION MODEL

Cointegration is concerned with the long-run equilibrium, whereas Granger causality (see below) is concerned with short-run forecastability. These two perspectives can be combined in an error-correction model. The name error-correction model derives from the fact that it has a self-regulating mechanism: after a deviation it automatically returns to its long-run equilibrium. The ECM has a long-run equilibrium and uses past disequilibrium as an explanatory variable for the dynamic behaviour of the current variables.

For two cointegrated variables y_t and x_t the bivariate ECM can be written as

Δy_t = c_1 + a_1·û_{t−1} + Σ_i c_{11,i}·Δy_{t−i} + Σ_i c_{12,i}·Δx_{t−i} + ε_{1,t}

Δx_t = c_2 + a_2·û_{t−1} + Σ_i c_{21,i}·Δy_{t−i} + Σ_i c_{22,i}·Δx_{t−i} + ε_{2,t}

where y_t not only depends on its own past lags but also on the past lags of x_t, and x_t not only depends on its own past lags but also on the past lags of y_t. The ECM shows how significant the individual lagged variables are through simple t-tests. If one wants to know how strong the influence of all the lagged values together is, one has to apply a test for Granger causality.
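
As a sketch, a simplified single-equation version of such an ECM can be estimated by OLS: the change in one variable is regressed on the lagged equilibrium error and on lagged changes of both variables. The variable names reuse the earlier illustrative DataFrame, and a single lag is used purely for brevity.

# Simplified single-equation ECM (one lag, for illustration only):
# d(sensex)_t = c + a*u_{t-1} + c11*d(sensex)_{t-1} + c12*d(iip)_{t-1} + e_t
import pandas as pd
import statsmodels.api as sm

# Step 1: long-run (cointegrating) regression and its residual u_t
long_run = sm.OLS(data["sensex"], sm.add_constant(data["iip"])).fit()
u = long_run.resid

# Step 2: short-run dynamics with the lagged disequilibrium as a regressor
ecm_data = pd.DataFrame({
    "d_sensex": data["sensex"].diff(),
    "u_lag": u.shift(1),                          # error-correction term
    "d_sensex_lag": data["sensex"].diff().shift(1),
    "d_iip_lag": data["iip"].diff().shift(1),
}).dropna()

ecm = sm.OLS(ecm_data["d_sensex"],
             sm.add_constant(ecm_data[["u_lag", "d_sensex_lag", "d_iip_lag"]])).fit()
print(ecm.summary())  # a negative, significant u_lag coefficient indicates adjustment back to equilibrium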

GRANGER CAUSALITY

Granger (1988) showed that, in the case of a bivariate system with time series x_t and y_t that are integrated of the same order, if the past and present values of y_t provide useful information for forecasting x_{t+1} at time t, then y_t is said to Granger-cause x_t. The usual testing procedure for Granger causality is to test the significance of the coefficients of the lagged y_t used as explanatory variables for x_t in the regression context. Looking at the second equation of the error-correction model, the test for Granger causality from y to x is an F-test of the joint significance of the coefficients ĉ_{21,i}. Similarly, the test for Granger causality from x to y is an F-test of the joint significance of the coefficients ĉ_{12,i}. The strength of Granger causality can change over time, the direction of causality can change depending on the period measured, or there can be bidirectional causality. Granger causality means that a lead-lag relationship between the variables of a multivariate time series is evident. However, this does not mean that if we make a structural change in one series the other will change as well, but rather that the turning points in one series precede the turning points of the other.
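
For reference, the joint-significance test referred to here is the standard F-test comparing the restricted regression (with the lags of the other variable excluded) against the unrestricted one:

F = [(RSS_R − RSS_U) / m] / [RSS_U / (T − n)]

where RSS_R and RSS_U are the residual sums of squares of the restricted and unrestricted regressions, m is the number of excluded lags, T the number of observations and n the number of parameters in the unrestricted regression.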

GRANGER CAUSALITY TEST

The Granger causality test is a technique for determining whether one time series is useful in forecasting another. The standard Granger causality test seeks to determine whether past values of one variable help to predict changes in another variable; it measures the information given by one variable in explaining the latest value of another. Variable Y is said to be Granger-caused by variable X if variable X assists in predicting the value of variable Y, that is, if the lagged values of variable X are statistically significant in explaining variable Y. The null hypothesis (H0) tested here is that X does not Granger-cause Y and that Y does not Granger-cause X. The test is based on the following regressions:

Y_t = c_1 + Σ_{i=1}^{k} α_i·X_{t−i} + Σ_{j=1}^{k} β_j·Y_{t−j} + u_t

X_t = c_2 + Σ_{i=1}^{l} λ_i·X_{t−i} + Σ_{j=1}^{l} δ_j·Y_{t−j} + v_t

where Y_t and X_t are the variables to be tested, u_t and v_t are mutually uncorrelated error terms, t denotes the time period, and k and l are the numbers of lags.

The null and alternative hypotheses are:

H0: α_i = 0 for all i and δ_j = 0 for all j [X does not Granger-cause Y and Y does not Granger-cause X]

Ha: α_i ≠ 0 and/or δ_j ≠ 0 [Granger causality in at least one direction]

If the coefficients α_i are jointly significant but the δ_j are not, then X causes Y; in the reverse case, Y causes X. If both the α_i and the δ_j are significant, causality runs in both directions. The null hypotheses are tested using the standard F-test of joint significance.
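
A minimal sketch of this procedure on the illustrative DataFrame uses the grangercausalitytests function from statsmodels, which reports the F-test of joint significance for each lag length; the test is run in both directions because the relationship may be bidirectional.

# Pairwise Granger causality: the series in the second column is tested
# as a cause of the series in the first column.
from statsmodels.tsa.stattools import grangercausalitytests

d = data.diff().dropna()   # use the differenced (stationary) series

print("Does IIP Granger-cause SENSEX?")
grangercausalitytests(d[["sensex", "iip"]], maxlag=4)

print("Does SENSEX Granger-cause IIP?")
grangercausalitytests(d[["iip", "sensex"]], maxlag=4)
# For each lag, an 'ssr based F test' p-value below 0.05 rejects 'no Granger causality'.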
