
The Role of the Informational Efficiency in the DotCom Bubble

Wiston Adrián Risso


Department of Economics, University of Siena
Abstract

Financial bubbles are a challenge for the efficient market hypothesis. The phenomenon is always the same: a rapid increase in prices up to a maximum, followed by a strong and sudden decrease. The DotCom bubble is the example from the last century; the Nasdaq Composite closed at 5,048.62 on March 10, 2000, whereas it was around 1,114 in August 1996 and again in October 2002. In this paper, we measure the informational efficiency of the Nasdaq market over time using the Shannon entropy. Bootstrap simulations suggest that we can reject the hypothesis that the market behaved efficiently between August 17, 1998 and September 11, 2003. In addition, using a classical linear regression and a Tobit model, we present empirical evidence of a negative relationship between the drawdown and the level of efficiency. A Logit model, in turn, presents evidence of a negative relationship between the probability of a crash and the informational efficiency, suggesting that a healthy market should present a high level of efficiency. Finally, we propose a simple stochastic model that reproduces a large part of the cluster of inefficiency detected in the Nasdaq market.

Keywords: Informational Efficiency, Entropy, DotCom Bubble, Nasdaq
J.E.L.: G14, C01

Introduction

Since the famous tulipmania of the 1630s, so-called bubbles have appeared time and again. The evolution is always the same: a phase of rapidly increasing prices is observed, followed by a sudden collapse. The question of why these events tend to repeat systematically is hard to answer. A possible explanation is the presence of a substantial number of inexperienced traders in the market. In this line, Smith et al. (1988) assert that laboratory experiments have shown that bubbles tend to occur with inexperienced traders, and Dufwenberg et al. (2005) suggest that bubbles in mixed-experience markets are rare. As is well known, in the sixties an electronics bubble grew and exploded, and the same happened in the seventies with the Nifty Fifty. Moreover, as Malkiel (2006) remarked, in the eighties a new bubble developed in the biotechnology sector, with the same characteristics as in the sixties. Despite all these events, the late nineties produced the biggest bubble of the last century. The DotCom bubble originated in the promise of a new economy based on the Internet; the Internet was presented both as a new technology and as an opportunity for revolutionary businesses. The Nasdaq Composite Index is a bundle composed of the most important US technology companies. On March 10, 2000, this index closed at its all-time high of 5,048.62, whereas it was around 1,114 in August 1996 and again in October 2002. Ofek and Richardson (2003) suggest that two important elements explain the rapid increase in prices. The first was the significant short-sale restrictions on Internet stocks, which prevented Internet prices from being pushed back to reasonable levels. The second is that there was sufficient heterogeneity across investors such that the marginal investor might look very different from one period to the next. We should also mention that the Nasdaq market was less competitive before 1997, when, as Barclay et al. (1999) assert, the

Securities and Exchange Commission began implementing reforms that allowed the public to compete directly with Nasdaq dealers.

The purpose of the present paper is to show that during the DotCom bubble the informational efficiency of the technological market was decreasing, and that the crash simply reflected the need to return to reasonable levels of efficiency. Note that this agrees with the efficient market hypothesis, at least in its weak version. We merely suggest that the market does not always behave efficiently; however, it finds a way to return to efficient levels, sometimes drastically. In this context, the DotCom bubble can be interpreted as inefficiency in the technological market, and the crash as the signal of the market coming back to healthy levels of efficiency. Firstly, we generate a time series measuring the informational efficiency of the Nasdaq market. This measure is based on the Shannon entropy and on symbolic time series analysis. Then, we run Monte Carlo simulations using the bootstrap technique in order to obtain a confidence interval indicating the moments at which we can reject the null hypothesis of market efficiency. Finally, we show, on the one hand, that there exists a negative relationship between the informational efficiency and the probability of a crash and, on the other hand, that the larger the level of efficiency, the smaller the drawdown (in absolute value). The paper is organized as follows. The next section explains our measure of efficiency, the bootstrap technique, and the models applied in order to estimate the relationship between efficiency and the probability of a crash, and between efficiency and the drawdown. Section three presents the empirical evidence for the Nasdaq stock market. Section four suggests a possible explanation of the bubble based on the concept of news arrivals that are not well understood by investors. Finally, section five draws some conclusions.

2. Methodology

2.1. The informational efficiency measure

Recently, the local Hurst exponent has been proposed as a measure of informational efficiency in stock markets. Grech and Mazur (2004) use the local Hurst exponent to measure the efficiency of the Dow Jones index; they argue that even though we cannot predict the detailed evolution scenario, we might be able to say something else about the system. However, some authors, such as Bassler et al. (2006) and McCauley et al. (2007), criticize this measure, asserting that a Hurst exponent different from 1/2 (the value corresponding to a random walk process) does not necessarily imply long-term correlations like those found in fractional Brownian motion. Instead of the Hurst exponent, we propose to use the Shannon entropy, widely used in information theory. We shall assume that in an efficient market the current prices Pt reflect all the available information, so that price increments are unpredictable; we therefore assume that returns are independently distributed. Note that this assumption is valid under the martingale hypothesis or assuming risk-neutral agents. In order to compute the efficiency of the Nasdaq market, we symbolize the returns so as to capture the underlying dynamics of the process. According to Daw et al. (2003), symbolization has the practical effect of producing low-resolution data from high-resolution data; they suggest that symbolization is crucial when the time series is highly affected by noise.
However, as far as we know, few works have applied symbolic analysis to financial data. Lawrence et al. (1988) forecast 47.1% of the direction of the next-day change in foreign exchange rates, and Schittenkopf et al. (2002) predict the daily change in the volatility of two major stock indices.

After the symbolization is done, the Shannon entropy is applied in order to measure the quantity of information contained in the series. The main problem of symbolic analysis is to find the right partition of the series; there is no fully satisfactory method. We follow Molgedey and Ebeling (2000), who use three symbols, dividing the empirical distribution of the Dow Jones returns into three equally probable regions. These regions can be interpreted as high negative returns, normal returns, and high positive returns. Assume we have a time series of size T defined as {r1, r2, r3, ..., rT}, where rt is the asset return at time t and f*1/3 and f*2/3 are the first and second terciles of the empirical distribution of r. We then transform the time series into a symbolic one according to equation (1).

s_t = \begin{cases} 1 & \text{if } r_t \le f^*_{1/3} \\ 2 & \text{if } f^*_{1/3} < r_t \le f^*_{2/3} \\ 3 & \text{if } r_t > f^*_{2/3} \end{cases}

(1)

Thus, a symbolic time series {s1, s2, s3, ..., sT} is obtained. On the one hand, if the market is completely efficient, incorporating news arrivals immediately, the symbolic series should be random. On the other hand, if the market is inefficient, the returns will not reflect the news immediately and some patterns will be more predictable. This is where the normalized Shannon entropy (see equation (2)) enters. It is a measure of uncertainty, achieving its maximum value of 1 when the process is completely random, and its minimum of 0 when the outcome is completely certain.
H = -\frac{1}{\log_2(n)} \sum_{i=1}^{n} p_i \log_2 p_i

(2)

where n is the total number of possible sequences and pi is the probability of sequence i, i = 1, 2, ..., n. As mentioned, the maximum is attained when the n sequences are equally probable; the other extreme is when a particular sequence has probability 1, in which case the entropy reaches its minimum. In practice, these probabilities are estimated by the frequencies of each sequence over the period considered. Since we are interested in the evolution of the efficiency over time, we take a time-window v < T and move it across time; the entropy is then computed for each time-window from moment v to T. As will be shown, different sizes for the time-window are considered: 50, 100, 150, 200, 250 and 300 days. Grech and Mazur (2004) (using the Hurst exponent) suggest that time-windows should not be too large, in order to capture locality. In addition, we define a threshold in order to determine when the inefficiency started in the Nasdaq market. Since the sample is finite, this requires a confidence interval indicating the moments at which we can reject the hypothesis that the market is efficient. Because financial time series are usually non-Gaussian, we do not apply tests based on the Student's t distribution to compute the efficiency threshold. Efron (1979) suggested the bootstrap technique, which is based on Monte Carlo simulations drawn from the empirical sample. We generate 1,000 random time series from the empirical distribution of the Nasdaq returns, as in the bootstrap method, then symbolize the series and compute the entropy, obtaining the distribution of the entropy of an efficient market in a finite sample. We then take the 5% quantile of this simulated distribution as the threshold of inefficiency: if the actual entropy falls below this threshold, we can reject the hypothesis that the market is efficient at that time. Note that this threshold is the minimum level of efficiency we can accept for this sample size under the empirical distribution of the Nasdaq returns.
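The following Python sketch illustrates one possible implementation of the procedure just described: tercile symbolization, normalized Shannon entropy over L-symbol sequences, and the bootstrap threshold. It is our own reading of the text, not the author's code; in particular, the function names and the normalizing constant log2(3^L) (the entropy of a fully random symbolic series) are assumptions.

```python
# Minimal sketch of the efficiency measure of subsection 2.1 (own reconstruction).
from collections import Counter
import numpy as np


def symbolize(returns: np.ndarray) -> np.ndarray:
    """Map each return to 1, 2 or 3 using the empirical terciles, as in eq. (1)."""
    q1, q2 = np.quantile(returns, [1 / 3, 2 / 3])
    return np.where(returns <= q1, 1, np.where(returns <= q2, 2, 3))


def normalized_entropy(symbols: np.ndarray, L: int = 4) -> float:
    """Normalized Shannon entropy (eq. 2) of overlapping L-symbol sequences.

    Normalization by log2(3**L) keeps the measure in [0, 1]; a fully random
    infinite series would attain 1.
    """
    words = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(3 ** L))


def rolling_efficiency(returns: np.ndarray, v: int = 100, L: int = 4) -> np.ndarray:
    """Entropy computed on a moving window of v observations (from day v onward)."""
    s = symbolize(returns)
    return np.array([normalized_entropy(s[t - v:t], L) for t in range(v, len(s) + 1)])


def bootstrap_threshold(returns: np.ndarray, v: int = 100, L: int = 4,
                        n_sim: int = 1000, alpha: float = 0.05, seed: int = 0) -> float:
    """5% quantile of the entropy of i.i.d. resamples drawn from the empirical returns."""
    rng = np.random.default_rng(seed)
    sims = [normalized_entropy(symbolize(rng.choice(returns, size=v, replace=True)), L)
            for _ in range(n_sim)]
    return float(np.quantile(sims, alpha))
```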

2.2 The Tobit model

It is not an easy task to define a financial crash, and the present work applies two definitions. Firstly, we consider the so-called drawdown. A drawdown is defined as the maximum loss from a market peak to a market trough; it measures how sustained one's losses can be, in other words, how much money is lost until breakeven is reached again. The second definition, as shown in subsection 2.3, takes the negative fat tail of the empirical distribution of the returns as the measure of a financial crash. Note that the drawdown is a variable that takes only negative values, and 0 when there is no drawdown; we therefore have a censored-variable problem. In order to evaluate the relationship between the drawdown and the efficiency, we first apply a classical linear regression (equation 3) and then a Tobit model (equation 4).

DD_t = \alpha + \beta H_t + \varepsilon_t

(3)

Equation (3) shows the linear regression, where DD is the absolute value of the drawdown, H is our measure of efficiency, and ε is the error term. We expect the sign of β to be negative: a more efficient market should avoid large drawdowns. However, since DD only takes non-negative values, the coefficient β can be underestimated. In order to solve this problem, Tobin (1958) proposed a hybrid of probit analysis and multiple regression, known as the Tobit model. This model assumes that there is a latent variable DD* and that the observable variable DD equals the latent variable whenever the latent variable is above zero, and zero otherwise.
DD_t = \begin{cases} DD_t^* & \text{if } DD_t^* > 0 \\ 0 & \text{if } DD_t^* \le 0 \end{cases}

DD_t^* = \alpha + \beta H_t + u_t

(4)

Amemiya (1973) proved that the maximum likelihood estimator suggested by Tobin for this model is consistent.
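Equations (3) and (4) could be estimated, for instance, as in the following rough Python sketch. The drawdown definition (current loss from the running peak) and the hand-rolled Tobit likelihood are our interpretation of the text; the paper itself reports STATA 9.0 estimates, and the variable names below are placeholders.

```python
# Rough sketch of the drawdown series and of equations (3)-(4): OLS of |drawdown|
# on entropy, and a Tobit left-censored at zero, estimated by maximum likelihood.
import numpy as np
import statsmodels.api as sm
from scipy import optimize, stats


def drawdown(prices: np.ndarray) -> np.ndarray:
    """Current loss from the running peak, as a positive fraction (0 = at a peak)."""
    peak = np.maximum.accumulate(prices)
    return 1.0 - prices / peak


def tobit_left0(y: np.ndarray, x: np.ndarray):
    """ML estimation of y* = a + b*x + u, y = max(y*, 0), u ~ N(0, s^2)."""
    X = sm.add_constant(x)

    def negloglik(params):
        beta, log_s = params[:-1], params[-1]
        s = np.exp(log_s)                       # keep the scale positive
        mu = X @ beta
        ll = np.where(y > 0,
                      stats.norm.logpdf(y, loc=mu, scale=s),   # uncensored observations
                      stats.norm.logcdf(-mu / s))              # observations censored at 0
        return -ll.sum()

    start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], np.log(y.std())]
    res = optimize.minimize(negloglik, start, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])        # (constant, slope), sigma


# dd (absolute drawdown) and H (entropy) must be aligned to the same dates:
# ols = sm.OLS(dd, sm.add_constant(H)).fit()    # equation (3)
# (alpha, beta), sigma = tobit_left0(dd, H)     # equation (4)
```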
2.3 The Logit model

We also define a crash as the negative tail (at 1%) of the empirical distribution of the Nasdaq returns. This model is useful in order to understand the relationship between the probability of a crash in the Nasdaq market and the level of efficiency. The logit model assumes that there is a variable y that takes on one of two values, 0 and 1; in the present case, a financial crash is coded as 1 and no crash as 0. We have a latent variable y* such that:

y_t^* = \alpha + \beta H_t + \varepsilon_t

(5)

Note that Ht is the efficiency and εt follows an extreme value distribution, see McFadden (1984). As before, we do not observe y*, but rather y, which takes on the value 0 or 1 according to the following rule:

y_t = \begin{cases} 1 & \text{if } y_t^* > 0 \\ 0 & \text{otherwise} \end{cases}

(6)

In the present work, the variable y takes the value 1 when the Nasdaq return falls in the 1% negative tail of its empirical distribution, and 0 otherwise. Therefore, the logit model can be expressed as in equation (7).
p(y_t = 1) = \frac{\exp(\alpha + \beta H_t)}{1 + \exp(\alpha + \beta H_t)}

(7)

Equation (7) says that the probability of a financial crash on a given day, p(yt = 1), depends on the efficiency level H. As is well known, this formulation ensures that the predicted probabilities lie between 0 and 1.
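A minimal sketch of the crash indicator and of the logit in equation (7) is given below; it assumes that the daily returns and the entropy path H from subsection 2.1 have already been computed and trimmed to the same dates (these variable names are ours, not the paper's).

```python
# Sketch of equations (5)-(7): crash = return in the lowest 1% of the empirical
# distribution; logit of the crash indicator on the entropy H.
import numpy as np
import statsmodels.api as sm

threshold = np.quantile(returns, 0.01)           # 1% negative tail of the returns
crash = (returns <= threshold).astype(int)       # y_t = 1 on crash days, 0 otherwise

logit = sm.Logit(crash, sm.add_constant(H)).fit()    # equation (7)
print(logit.summary())                           # the entropy coefficient should be negative
print(np.exp(logit.params))                      # odds ratios, as reported in Table 2(b)
```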
3. Empirical Results for the NASDAQ market

The previous methodology is applied to the NASDAQ index. Daily data for the Nasdaq Composite index were obtained from February 5, 1971 to April 3, 2008. As mentioned before, we take time-windows (v) of 50, 100, 150, 200, 250 and 300 days. We then symbolize the returns as shown in (1) and compute the normalized Shannon entropy for each v. Fig. 1 shows the evolution of the NASDAQ efficiency (taking v = 100 and sequences of 4 days) together with the limit given by the bootstrap simulations. Note that there is a clear cluster of inefficiency between August 17, 1998 and September 11, 2003: in this period the efficiency first falls below our confidence limit and then rises back to more efficient levels. This is precisely the period in which the bubble is said to have developed and exploded. The minimum appears on April 27, 2001, when the efficiency reaches 0.6930.
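As an illustration, the functions sketched in subsection 2.1 could be applied as follows; the file name nasdaq.csv and its column names are placeholders, not data provided with the paper.

```python
# Hypothetical usage of the functions from the subsection 2.1 sketch, with the
# window and sequence length used in Figure 1 (v = 100 days, L = 4).
import numpy as np
import pandas as pd

px = pd.read_csv("nasdaq.csv", parse_dates=["Date"]).set_index("Date")["Close"]
returns = np.log(px).diff().dropna().to_numpy()

v, L = 100, 4
H = rolling_efficiency(returns, v=v, L=L)        # entropy path from day v onward
limit = bootstrap_threshold(returns, v=v, L=L)   # 5% bootstrap efficiency threshold

inefficient = H < limit                          # days on which efficiency is rejected
print(f"threshold = {limit:.4f}, minimum entropy = {H.min():.4f}")
```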
Figure 1: Evolution of the Informational Efficiency in the NASDAQ Market and the Bootstrap Confidence Limit at 5%. [Line plot of the entropy (y-axis, roughly 0.65 to 1) against time (x-axis, July 1971 to March 2007).]

We computed the drawdown for the Nasdaq, finding that over the whole period the maximum drawdown was 82.70%, reached on October 9, 2002; other important drawdowns occurred on October 3, 1974 (60.75%) and December 7, 1987 (37.53%). We then estimated linear regressions and Tobit models in order to measure the relationship between the absolute value of the drawdown and the efficiency. Since we considered different time-windows (50, 100, 150, 200, 250, 300 days) and different sequence lengths (1, 2, 3, 4, and 5 days), producing 30 linear regressions and 30 Tobit models, Table 1 shows the results of the best-fit model for each case. We highlight, however, that the signs and significance of the results were the same in all cases.

Table 1: Linear regression and Tobit model for the relationship between drawdown and efficiency

(a) Linear regression. Dependent variable: drawdown (absolute value)
                 Coefficient   Std. error   t = Coeff./Std. err.   p-value > |t|
Constant (α)        2.357         0.055           42.720              0.000
Entropy (β)        -2.178         0.057          -38.190              0.000
No. Obs. = 9074;  F(1, 9072) = 1458.28;  Prob > F = 0.000;  R-squared = 0.138

(b) Tobit model. Dependent variable: drawdown (absolute value)
                 Coefficient   Std. error   t = Coeff./Std. err.   p-value > |t|
Constant (α)        2.377         0.058           40.770              0.000
Entropy (β)        -2.207         0.060          -36.630              0.000
Uncensored Obs. = 8486;  LR chi2(1) = 1244;  Prob > chi2 = 0.000;  Pseudo R2 = 0.416
The results were obtained with STATA 9.0. Source: own calculations. (a) Estimation of equation (3); v = 300 days and sequences of 1 day produce the best fit. (b) Estimation of equation (4); v = 300 days and sequences of 1 day produce the best fit.

Note in Table 1 that the higher the efficiency of the Nasdaq stock market, the smaller the drawdown. Note also that the coefficient is larger (in absolute value) in the Tobit model, suggesting that the classical linear model underestimates it. Table 2 shows the results of the logit model for the relationship between the probability of a crash in the Nasdaq market and the level of efficiency.
Table 2: Logit Model for the relationship between the probability of crash and the efficiency in the Nasdaq index
(a) Logit model. Dependent variable: crash probability
                 Coefficient   Std. error   t = Coeff./Std. err.   p-value > |t|
Constant (α)       13.167         1.147            11.48              0.000
Entropy (β)       -20.475         1.414           -14.48              0.000

(b) Logit model in odds ratios
                 Odds ratio    Std. error   t = Coeff./Std. err.   p-value > |t|
Entropy (β)       1.28e-09       1.81e-09          -14.48             0.000

No. Obs. = 9271;  Log-likelihood = -838.6;  Wald chi2(1) = 209.74;  Prob > chi2 = 0.000;  Pseudo-R2 = 0.1026

The results were obtained with STATA 9.0. Source: own calculations. (a) Estimation of equation (7); v = 100 days and sequences of 4 days produce the best fit. (b) Estimation of the logit model in odds ratios.

Table 2 shows that the probability of a crash depends negatively on the efficiency. Once more, the higher the informational efficiency of the Nasdaq market, the healthier the market.

4. Relationship between Efficiency and News Arrivals

In this section we try to explain the relation between news arrivals and informational efficiency, and how it might determine crashes. As mentioned above, an informationally efficient market should immediately assimilate news arrivals; hence no pattern in prices should form. Assume that in a completely efficient market the probability of a positive or a negative return tomorrow is the same, i.e. 1/2. In terms of our entropic measure, this produces the maximum entropy, as in the following equation:

H(t) = -\left( \frac{1}{2}\log_2\frac{1}{2} + \frac{1}{2}\log_2\frac{1}{2} \right) = 1

(8)

This means that the efficiency is always at its maximum (= 1), because there is no reason to predict a negative or a positive return for tomorrow. Consider now a process u(t) representing the news arrivals in the market. If the news is good, the probability of a positive return tomorrow increases; if it is bad, the probability of a negative return tomorrow increases. In an efficient market, however, this effect should be assimilated within the day: the news arrives and the market can "forecast" the return only for that day. In our model the efficiency decreases for that day, because the probability of a positive or negative return moves away from 1/2, but the news is assimilated the same day and on the following days the efficiency returns to its maximum value (= 1). Consider that news arrivals are independent and identically distributed as a Poisson process with parameter λ:
u(t) \sim \text{i.i.d. Poisson}(\lambda)

(9)

These news arrivals affect the efficiency as a noise term scaled by a parameter, reducing the efficiency once, after which the efficiency recovers its maximum value. Note that u(t) takes the value 0 or 1; however, the effect of the news on the efficiency should not be larger than 1/2, so we define ε(t) = θu(t), where θ is the impact on the efficiency. Therefore, we can redefine our efficiency measure under news arrivals as in equation (10).

H(t) = -\left[ \left( \frac{1}{2} + \varepsilon(t) \right) \log_2\left( \frac{1}{2} + \varepsilon(t) \right) + \left( \frac{1}{2} - \varepsilon(t) \right) \log_2\left( \frac{1}{2} - \varepsilon(t) \right) \right]

(10)

Using a Taylor expansion, note that \log_2(1/2 + \varepsilon(t)) = \log_2(1 + 2\varepsilon(t)) - 1 \approx 2\log_2(e)\,\varepsilon(t) - 1, and similarly \log_2(1/2 - \varepsilon(t)) \approx -2\log_2(e)\,\varepsilon(t) - 1. Substituting these approximations into (10) and collecting terms, our measure can be approximated by the following equation:

H(t) \approx H^*(t) = 1 - \gamma\,\varepsilon(t)^2

(11)

where γ = 4log2(e). The relation is now clear: in an efficient market the efficiency is always equal to 1 (the maximum); when news arrives it affects the efficiency once, and the efficiency returns to its maximum level because the new information is immediately assimilated. Assume, however, that the market is not always efficient and the news is not immediately assimilated; for instance, assume that it is embodied as an autoregressive process, as in equation (12).

\varepsilon(t) = a\,\varepsilon(t-1) + \theta u(t)

(12)

where θu(t) is again the Poisson process of information arrivals scaled by the impact factor θ, and a is the autoregressive coefficient, smaller than 1, which defines the memory of the process. The information is not well embodied, and the inefficiency therefore persists for some time. In order to obtain a more realistic description of the efficiency evolution, assume that the efficiency is also affected by the absolute value of a random and exogenous factor η(t), normally distributed with mean equal to 0 and variance σ_η. Our stochastic system is then composed of equations (12) and (13).
H^*(t) = 1 - \gamma\,\varepsilon(t)^2 - |\eta(t)|

(13)

Simulating the system for T = 1,000, λ = 0.005, θ = 0.2, σ_η = 0.02 and a = 0.95, we obtain Figure 2.
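A short Python sketch of this simulation is given below. It reads σ_η = 0.02 as the scale (standard deviation) of the exogenous shock and codes u(t) as a 0/1 indicator of a Poisson arrival; both are our interpretation, and the exact path depends on the random seed, so it will only qualitatively resemble Figure 2.

```python
# Sketch of the stochastic system of equations (12)-(13) with the parameter
# values reported in the text (T=1,000, lambda=0.005, theta=0.2, sigma_eta=0.02, a=0.95).
import numpy as np

rng = np.random.default_rng(1)

T, lam, theta, sigma_eta, a = 1000, 0.005, 0.2, 0.02, 0.95
gamma = 4 * np.log2(np.e)                      # from the Taylor expansion of eq. (10)

u = (rng.poisson(lam, T) > 0).astype(float)    # news arrivals coded as 0/1, rate lambda
eta = np.abs(rng.normal(0.0, sigma_eta, T))    # exogenous shocks, |N(0, sigma_eta^2)|

eps = np.zeros(T)
for t in range(1, T):
    eps[t] = a * eps[t - 1] + theta * u[t]     # eq. (12): news not immediately absorbed

H = np.clip(1.0 - gamma * eps ** 2 - eta, 0.0, 1.0)   # eq. (13), kept in [0, 1]
```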
Figure 2: Simulated Dynamics of the Efficiency under Autoregressive News Arrivals and Exogenous Shocks. [Line plot of the simulated entropy (y-axis, roughly 0.2 to 1) against time (x-axis, t = 1 to 1,000).]

Note that the evolution of the efficiency is now similar to the results obtained for the real data. We therefore suggest that a large part of the inefficiency can be produced by the arrival of news that is not well understood.
5. Conclusions

As mentioned, financial bubbles are a challenge for the efficient market hypothesis, in the sense that if the market is efficient it should immediately assimilate the information arriving to the market, and prices should not present patterns. However, the history of financial markets shows that bubbles have grown and exploded at different moments in time; different bubbles appeared in the sixties, seventies, eighties and nineties. We suggest that the concept of a bubble is not incompatible with the efficient market hypothesis: inefficiency can appear, but the market will find a way to go back to efficient levels, and a crash is a drastic way of recovering efficiency. We analyzed the technological bubble of the last century. Measuring the informational efficiency using the Shannon entropy and producing bootstrap simulations, we showed that we can

reject the hypothesis that the Nasdaq market was efficient between August 17, 1998 and September 11, 2003. In this period, the market seems to show a decrease in efficiency until reaching a minimum, after which the efficiency started to increase again towards healthy levels. We studied the relationship between the crisis and the informational efficiency using two methods. Firstly, we used a classical linear regression and a Tobit model in order to study the relation between the drawdown and the efficiency, observing that as the efficiency decreases the drawdowns tend to be larger. Then, we studied the relation between the probability of a crash and the level of efficiency. The logit model suggests that a decrease in the efficiency produces an increase in the probability of a crash in the Nasdaq market. The results suggest that at the end of the nineties the Nasdaq market started a process of decreasing efficiency, producing a larger probability of a crash and larger drawdowns; once the efficiency reached a minimum, the market found a way to recover it. Finally, we proposed a simple stochastic model showing that inefficiency can persist for some time if the news arriving to the market is not immediately assimilated.
References:
- Amemiya, T. (1973), "Regression Analysis when the Dependent Variable Is Truncated Normal", Econometrica, Vol. 41, No. 6, pp. 997-1016.
- Barclay, M., Christie, W., Harris, J., Kandel, E., Schultz, P. (1999), "Effects of Market Reform on the Trading Costs and Depths of Nasdaq Stocks", The Journal of Finance, Vol. 54, No. 1, pp. 1-34.
- Bassler, K., Gunaratne, G., McCauley, J. (2006), "Markov processes, Hurst exponents, and nonlinear diffusion equations: with application to finance", Physica A, Vol. 369, No. 2, pp. 343-353.
- Daw, C., Finney, C., Tracy, E. (2003), "A review of symbolic analysis of experimental data", Review of Scientific Instruments, Vol. 74, No. 2, pp. 915-930.
- Dufwenberg, M., Lindqvist, T., Moore, E. (2005), "Bubbles and Experience: An Experiment", The American Economic Review, Vol. 95, No. 5, pp. 1731-1737.
- Efron, B. (1979), "Bootstrap Methods: Another Look at the Jackknife", The Annals of Statistics, Vol. 7, No. 1, pp. 1-26.
- Grech, D., Mazur, Z. (2004), "Can one make any crash prediction in finance using the local Hurst exponent idea?", Physica A, Vol. 336, pp. 133-145.
- Malkiel, B. (2003), A Random Walk Down Wall Street, W.W. Norton & Company, Inc.
- McCauley, J., Gunaratne, G., Bassler, K. (2007), "Hurst exponents, Markov processes, and fractional Brownian motion", Physica A, Vol. 379, No. 1, pp. 1-9.
- McFadden, D. (1984), "Econometric Analysis of Qualitative Response Models", in: Griliches and Intriligator (Eds.), Handbook of Econometrics, North-Holland (Chapter 24).
- Molgedey, L., Ebeling, W. (2000), "Local order, entropy and predictability of financial time series", The European Physical Journal B, Vol. 15, pp. 733-737.
- Ofek, E., Richardson, M. (2003), "DotCom Mania: The Rise and Fall of Internet Stock Prices", The Journal of Finance, Vol. 58, No. 3, pp. 1113-1137.
- Schittenkopf, C., Tino, P., Dorffner, G. (2002), "The benefit of information reduction for trading strategies", Applied Economics, Vol. 34, pp. 917-930.
- Smith, V., Suchanek, G., Williams, A. (1988), "Bubbles, Crashes, and Endogenous Expectations in Experimental Spot Asset Markets", Econometrica, Vol. 56, No. 5, pp. 1119-1151.
- Tobin, J. (1958), "Estimation of Relationships for Limited Dependent Variables", Econometrica, Vol. 26, No. 1, pp. 24-36.

