
Review of Quantitative Finance and Accounting, 1 (1991): 307-329

© 1991 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Discriminating Between Wealth and Information Effects


in Event Studies in Accounting and Finance Research

RALPH W. SANDERS, JR.


College of Business Administration, University of South Florida, Tampa, FL 33620
RUSSELL P. ROBINS
A.B. Freeman School of Business, Tulane University, New Orleans, LA 70118

Abstract. This article examines the power of tests of given size to detect and distinguish between wealth (i.e.,
mean) and information (i.e., variance) effects in event studies. We find that an Estimated Generalized Least Squares
(EGLS) mean-effects test is consistently more powerful than the test based upon the average standardized residual
and is as powerful as a nonparametric rank test. Unlike the test based upon the average standardized residual
and the rank test, the EGLS test is well specified even when the event affects the variances of the prediction
errors. We also find that conventional parametric tests to detect changes in the variance of the event-day average
abnormal return are misspecified when the null of no change is true. We analyze the reasons this occurs and
suggest a rank procedure that produces tests of the correct size under the null. Our evidence suggests that the
critical factors allowing researchers to distinguish between wealth and information effects are an estimation
procedure incorporating the heteroskedasticity inherent in market model prediction errors and an explicit test
for event-day variance changes.

Key words: wealth effects, information effects

Several studies (Brown and Warner, 1980; Collins and Dent, 1984; Dyckman, Philbrick,
and Stephan, 1984; Brown and Warner, 1985; Sefcik and Thompson, 1986; Corrado, 1989)
have investigated various aspects of event-study methodology. Most of these studies advocated
the use of a particular mean-effect test statistic. None of these studies, however, examined
tests of hypotheses regarding event-day changes in the variance of the mean prediction error.1
The importance of testing for variance changes takes on added significance in light of recent
work by Ross (1989), which shows that increases in the rate of flow of idiosyncratic information
manifest themselves not in wealth effects, but in increases in idiosyncratic stock
price volatility. Thus, mean-effects test procedures that do not incorporate such variance
increases are likely to misclassify increases in the rate of idiosyncratic information flow
as wealth effects.
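Ross's point can be illustrated with a short Monte Carlo sketch. The sketch below is an illustration of the logic, not a procedure from this article: the firm count, estimation-period length, normality of the prediction errors, and the doubling of the event-day variance are all assumptions made for the example. The event conveys information (the variance doubles) but no wealth effect (the mean stays at zero), yet a mean-effects test that standardizes only by estimation-period variability rejects far too often.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_est, n_sims = 50, 250, 2000  # assumed, for illustration
rejections = 0

for _ in range(n_sims):
    # Estimation-period prediction errors: zero mean, unit variance.
    est = rng.standard_normal((n_firms, n_est))
    # Event day: no wealth effect, but the variance doubles (an information
    # effect in Ross's sense).
    event = rng.standard_normal(n_firms) * np.sqrt(2.0)
    # Average-standardized-residual test: scale each firm's event-day error
    # by its estimation-period standard deviation, then average across firms.
    z = (event / est.std(axis=1, ddof=1)).mean() * np.sqrt(n_firms)
    rejections += abs(z) > 1.96

print(rejections / n_sims)  # well above the nominal 0.05 size
```

Under these assumptions the test misclassifies a pure information effect as a wealth effect: the null of no mean effect is true in every replication, yet the rejection rate is roughly three times the nominal size.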
In addition to neglecting tests of hypotheses regarding changes in variances, certain of
the mean-effects tests proposed by previous researchers have potentially serious shortcomings.
For example, the mean-effects test assuming cross-sectional independence proposed
by Brown and Warner (1985),2 which has become the mean-effects test statistic of choice
among capital markets researchers, is misspecified in the presence of event-day changes
in the prediction error variances,3 and, as illustrated by Collins and Dent (1984), it reflects
a correction for heteroskedasticity inconsistent with that present by construction in market
model prediction errors. This research addresses each of the issues raised above by examining

three different estimators of the average abnormal event day return, two mean-effect test
statistics related to each, and three procedures for testing hypotheses regarding changes
in variance of the event-day average abnormal return.
With respect to testing hypotheses regarding mean or wealth effects, we find that, in
general, test statistics incorporating both estimation period data and the cross-sectional
variation in the event-day prediction errors are well specified even in the presence of a
variance increase. Inconsistent with Brown and Warner's (1985, p. 24) admonition, we find
that tests incorporating estimation period data and the cross-sectional variation in the event-
day prediction errors are no less powerful than tests that do not incorporate such variation
when the event-day prediction error variances are unaffected by the event. We also find
that a special case of the test proposed by Collins and Dent (1984) is more powerful than
that advocated by Brown and Warner.
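The claim that tests incorporating the cross-sectional variation in event-day prediction errors remain well specified under a variance increase can be sketched as follows. This is a hedged illustration, not the paper's EGLS or Collins-Dent statistic: the sample size, normal errors, and doubled event-day variance are assumed. Because the event-day dispersion itself enters the test's denominator, an event-induced variance increase is absorbed rather than ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_sims = 50, 2000  # assumed, for illustration
rejections = 0

for _ in range(n_sims):
    # Event-day prediction errors: zero mean, variance doubled by the event.
    event = rng.standard_normal(n_firms) * np.sqrt(2.0)
    # Cross-sectional t-test: the event-day dispersion appears in the
    # denominator, so the variance increase is reflected in the test itself.
    t = event.mean() / (event.std(ddof=1) / np.sqrt(n_firms))
    rejections += abs(t) > 2.01  # two-sided 5% critical value of t(49)

print(rejections / n_sims)  # close to the nominal 0.05 size
```

In this simple setting the rejection rate stays near the nominal size even though the event changed the variance, consistent with the finding described above.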
With respect to testing hypotheses regarding the effect of an event on the variance of
the average abnormal return, we find that the χ2 test proposed by Collins and Dent (1984)
rejects much too often when the null of no effect is true. Moreover, we find that the
misspecification of this test is due to a prediction-error-specific attribute (i.e., excess kurtosis).
Consequently, researchers cannot rely upon a simple parametric test of the hypothesis of
no change in the sample's event-day variance. We present instead a within-sample rank
procedure that produces reasonably powerful tests of the correct size.
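The within-sample rank idea can be sketched roughly as follows. This is a generic rank construction in the spirit described here, with assumed parameters and simulated normal errors; the paper's actual procedure is developed later in the article. Each firm's absolute event-day prediction error is ranked within that firm's own return history; under the null of no variance change the rank is uniform, so a standardized average rank across firms is approximately standard normal, regardless of excess kurtosis in the underlying errors.

```python
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_est = 200, 250  # assumed sample sizes, for illustration

# Simulated prediction errors: estimation period N(0,1); the event doubles
# the event-day variance but leaves the mean at zero.
est = rng.standard_normal((n_firms, n_est))
event = rng.standard_normal(n_firms) * np.sqrt(2.0)

# Rank each firm's absolute event-day error within its own history.
# Under the null of no variance change this rank is uniform on 1..n_est+1.
all_abs = np.abs(np.column_stack([est, event]))
ranks = (all_abs < all_abs[:, -1:]).sum(axis=1) + 1

# Center and scale: each normalized rank has mean 0 and variance ~1/12
# under the null, so the standardized average is approximately N(0, 1).
u = (ranks - (n_est + 2) / 2) / (n_est + 1)
z = u.mean() * np.sqrt(12 * n_firms)
print(z)  # a large positive z signals an event-day variance increase
```

Because the statistic depends only on ranks, its null distribution does not inherit the excess kurtosis of the prediction errors, which is what gives a rank-based procedure the correct size where the parametric test fails.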
The article proceeds as follows. Section 1 discusses the types of hypotheses testable using
an event-study methodology. In section 2, we provide the theoretical basis for each of the
estimators and test statistics. Section 3 describes and presents the results of a simulation
experiment examining the small sample properties of the estimators and test statistics. Section
4 illustrates, in the context of the average abnormal returns to bidding firms in corporate
acquisitions, the sensitivity of event-study inferences to the choice of an event-study
methodology, and section 5 discusses other events that are likely to be accompanied by variance
increases. A summary and concluding remarks appear in section 6.

1. Hypothesis testing in event studies

The basic null hypothesis in an event study can be thought of as "the event conveyed no
information." Within this framework, changes in either or both of the first two moments
of the abnormal return distribution reasonably can be interpreted as evidence that the event
conveyed information.4 Thus, an affirmative response to any one of the following questions
results in rejection of the basic null hypothesis:

1. Did the event change the mean of the abnormal return distribution, given that it did
not change the distribution variance?
2. Did the event change the variance of the abnormal return distribution?
3. Did the event change (unconditionally) the mean of the abnormal return distribution?

The distinction between questions 1 and 3 is subtle but significant. To highlight the distinction,
consider the often cited study by Brown and Warner (1985). Brown and Warner
examine the power of tests of given size to detect known levels of average abnormal return (i.e.,
