Naveen Gondhi
Kellogg School of Management
November 2015
Abstract
I study the implications of rational inattention of firm managers for asset prices and macroeconomic quantities. Firms face aggregate and idiosyncratic productivity shocks, the uncertainty of which varies over time. My model delivers endogenous movements in output
and measured aggregate productivity in response to exogenous changes in uncertainty. An
increase in aggregate uncertainty leads managers to allocate less capacity to learn about
their idiosyncratic productivity, leading to higher misallocation of resources across firms
and lower output. An increase in idiosyncratic uncertainty has the opposite effect and
results in an economic expansion. This rationalizes the empirical finding that the risk price of
aggregate uncertainty is negative, whereas the risk price of idiosyncratic uncertainty is positive.
My model delivers novel testable predictions regarding the degree of resource misallocation,
the relation between output and both types of uncertainty, the comovement of production
inputs, and market betas in the cross-section of firms. I confirm these predictions in the
data.
This paper is part of my Ph.D. dissertation developed at Northwestern University, and I am deeply indebted to
my advisors, Dimitris Papanikolaou, Snehal Banerjee, Sergio Rebelo, and Ian Dew-Becker, for all their helpful guidance
and support. I would also like to thank Nicolas Crouzet, Jesse Davis, Sebastian Di Tella, Michael Fishman, Kathleen
Hagerty, Benjamin Iverson, Ravi Jagannathan, Guido Lorenzoni, Konstantin Milbradt, Charles Nathanson, Alessandro
Pavan, Brian Weller and seminar participants at Kellogg School of Management for helpful comments and suggestions.
All errors are my own.
Contact Information: Kellogg School of Management, 2001 Sheridan Rd, Evanston, IL 60208; Email: ngondhi@kellogg.northwestern.edu; Website: http://www.kellogg.northwestern.edu/faculty/gondhi/index.htm
1 Introduction
Economic theory suggests that, in order to maximize output, scarce resources need to be deployed
efficiently. Notably, recent work has documented substantial variation in the degree of resource misallocation over time (e.g., Hsieh and Klenow (2009)). As can be seen in Figure 1, the elasticity of
firm investment to its TFP (my proxy for the degree of resource misallocation) varies substantially
over the business cycle and, furthermore, is strongly correlated with the level of aggregate productivity. In this paper, I build a theory of time-varying resource misallocation based on the rational
inattention of firm managers. Firm managers first choose how much information to acquire and then
make input decisions conditional on the realization of their signals. In particular, I allow firm managers to acquire two types of information: data about the aggregate economy and information about
their own firm. Importantly, information processing ability is just like any other economic resource:
it is in finite supply.1 As managers pay more attention to aggregate news, they devote less time to
acquiring information about their own firm.2 I embed this feature in a tractable general equilibrium
model with shocks to the level of both aggregate and idiosyncratic uncertainty and explore its testable
implications.
My model delivers endogenous fluctuations in measured aggregate productivity and output in
response to movements in both aggregate and idiosyncratic uncertainty. In particular, the level of
aggregate productivity depends on the correlation between firm input choices and their own productivity. An aggregate uncertainty shock induces managers to acquire more information about the state
of the aggregate economy and hence less information about firm-specific shocks. Consequently, the
degree of resource misallocation increases leading to a drop in aggregate productivity and output.3
This pattern is a direct consequence of the fact that the acquisition of information is endogenous.
Note that, in the US economy, reallocation of resources is a key factor driving aggregate productivity.4
Conversely, a rise in idiosyncratic uncertainty induces managers to acquire more information about
their own firm-specific productivity, leading to improved allocation of resources and a rise in aggregate
output. In sum, exogenous changes in uncertainty lead to endogenous business cycles. The model
delivers several novel predictions that are in line with the data.
On the asset pricing side, my model generates endogenous risk prices for fluctuations in aggregate as
well as idiosyncratic uncertainty. In particular, the risk price of aggregate uncertainty is negative. That
is, high aggregate uncertainty is associated with an increase in marginal utility of consumption, and
therefore households are willing to pay a higher price for securities that are positively correlated with
the shocks to aggregate uncertainty. This feature is consistent with empirical evidence in Bali, Brown,
1 In an extension, I show that my main results hold under more general cost function specifications.
2 Specifically, firm managers acquire signals about aggregate and firm-specific shocks subject to an entropy constraint. Hence, firms face a trade-off: paying more attention to aggregate conditions requires paying less attention to idiosyncratic conditions. Aggregate signals can be interpreted as macroeconomic data about aggregate shocks that affect future cashflows of all firms, and idiosyncratic firm-level signals as firm-level data that forecasts the future profitability of firms and is independent of aggregate shocks.
3 As firms invest less time to learn about their idiosyncratic shocks, the input decisions of high- and low-productivity firms are similar and, hence, the degree of misallocation will be higher.
4 See, for example, Foster, Haltiwanger, and Krizan (2000).
[Figure 1 about here: the elasticity of firm investment to its TFP over the business cycle, 1960–2020.]
and Tang (2014); Schürhoff and Ziegler (2011); and others.5 Perhaps surprisingly, my model implies
that the risk price of shocks to common idiosyncratic uncertainty is positive, since it is positively
correlated with output growth.6 This result is in fact consistent with the evidence provided by Driessen,
Maenhout, and Vilkov (2009), who show that an option trading strategy that replicates a payoff
proportional to the rise in the average idiosyncratic volatility across firms earns high risk-adjusted
returns.7 In addition, it is consistent with several novel empirical facts in the real economy that I
document: the level of idiosyncratic uncertainty is positively correlated with both the level of output
as well as the degree of resource reallocation in the economy. These observations provide support for
the proposed mechanism.
My model provides a potential explanation for the excess comovement puzzle documented in Christiano and Fitzgerald (1998) and Rebelo (2005): sectoral inputs (investment and labor) comove strongly
with each other, even though the comovement in sectoral productivity (TFP) is very weak. Traditional
RBC models have difficulty explaining this fact. My model generates excess comovement in inputs because, when firm managers acquire information about the aggregate economy, they effectively learn from a noisy public signal and hence make correlated decisions.8 Furthermore, increases
5
Campbell, Giglio, Polk, and Turley (2012), extending the earlier work of Campbell (1992, 1993), estimate market
variance innovations based on a vector auto-regressive approach, and find a negative market variance risk premium in
the cross-section of equity portfolios.
6
Previous researchers (Herskovic, Kelly, Lustig, and Van Nieuwerburgh (2014); Schürhoff and Ziegler (2011)) documented that idiosyncratic uncertainty has a strong factor structure. Given this evidence, I assume that idiosyncratic
uncertainty across firms is driven by an underlying state variable.
7
In particular, the strategy involves selling index straddles and buying individual straddles and stocks in order to hedge
individual variance risk and stock market risk. This portfolio earns large excess returns. The result is also consistent
with the evidence in Schürhoff and Ziegler (2011), who construct model-free variance swaps and find that systematic
variance risk exhibits a negative price of risk, whereas common shocks to the variances of idiosyncratic returns carry a
large positive risk premium.
8
The common error in the public signals increases the comovement beyond what is justified by productivity shocks.
The error in the public signal can be interpreted as measurement error in macroeconomic statistics or noise in common
information sources such as the financial press. For an alternative justification of these noise shocks, refer to Lorenzoni (2009)
in aggregate uncertainty lead to higher input comovement across sectors; an increase in idiosyncratic
uncertainty decreases comovement.9 For intuition, consider an increase in idiosyncratic uncertainty.
As idiosyncratic uncertainty increases, firm managers shift their attention to learn more about their
idiosyncratic shocks and, hence, learn less about aggregate shocks. As all firms learn less about aggregate shocks, the co-movement of their inputs decreases. I show that this prediction is indeed consistent
with the data.10 First, I document that, consistent with the model, sectoral comovement of inputs is
highest in recessions. Second, I use several proxies for either aggregate or idiosyncratic uncertainty
and find that the comovement of each sector's inputs with aggregate inputs increases with aggregate
uncertainty but decreases with each sector's idiosyncratic uncertainty.11
The same mechanism that generates comovement in inputs also delivers substantial time-variation
in comovement of market betas across firms. Empirically, there is substantial evidence that equity
betas fluctuate over time.12 Yet, little is known about the source of this variation, either theoretically
or empirically. My model's prediction is that conditional betas should display large time variation
and that their cross-sectional dispersion decreases with aggregate uncertainty and increases with idiosyncratic uncertainty.13 My model can thus explain the empirical finding of Fama and French (1997),
who document that the market risk of industry portfolios fluctuates considerably over time. I complement
their finding by documenting that a one standard deviation increase in aggregate uncertainty is
associated with a 0.4 standard deviation reduction in the dispersion of market betas, whereas a one standard deviation increase in idiosyncratic uncertainty is associated with a 1.6 standard deviation increase
in the dispersion of betas.
One of the assumptions of the baseline model is that firm managers do not learn from prices in the
stock market. A large literature in corporate finance and macroeconomics focuses on the sensitivity
of firms' investment to mispricing in the stock market.14 One conclusion from this literature is that
investment responds only moderately to mispricing in the stock market or that the stock market is a
and Angeletos and La'O (2013). Further, another source that impacts the expectations held by all market participants is
the noise in stock prices. In Appendix C, I introduce a stock market in my setup. I allow investors and firms to learn from
equilibrium prices and update their beliefs accordingly. Non-fundamental shocks that affect stock prices (for instance,
endowment or noise-trader shocks, in the spirit of NREE models) will impact the expectations held by all market
participants.
9
This result is a direct consequence of endogenous learning. In an economy with exogenous information (i.e., signals
with constant precisions), the co-movement of inputs does not change with idiosyncratic uncertainty.
10
I test the prediction using sector-level KLEM data compiled by Dale Jorgenson. The database combines industry
data from the US Bureau of Labor Statistics (BLS) and the US Bureau of Economic Analysis (BEA). For each sector,
the dataset contains information on the value and the price of four inputs (capital, labor, energy, and materials) and the
value and price of output.
11
As a proxy for aggregate uncertainty, I use the variable constructed by Jurado, Ludvigson, and Ng (2013), with the
VIX for robustness. As a proxy for idiosyncratic uncertainty, I use the variable constructed by Bloom, Floetotto,
Jaimovich, Saporta-Eksten, and Terry (2012), with idiosyncratic industry uncertainty constructed from stock
returns data for robustness (see Campbell, Lettau, Malkiel, and Xu (2001)).
12
Direct evidence is provided by Bollerslev, Engle, and Wooldridge (1988); Jagannathan and Wang (1996); Lewellen
and Nagel (2006); Bali and Engle (2010); Fama and French (1997) and Engle (2014) who find significant time-series
variation in the conditional betas of equity portfolios.
13
In an economy with exogenous information (i.e., signals with constant precisions), dispersion in betas increases with
aggregate uncertainty.
14
Some representative papers in this area are Baker, Stein, and Wurgler (2002), Gilchrist, Himmelberg, and Huberman
(2005), Polk and Sapienza (2009) and Bond, Edmans, and Goldstein (2011).
sideshow with respect to the real economy (Morck, Shleifer, Vishny, Shapiro, and Poterba (1990)).15
In Section 5, I relax this constraint and introduce trading in an aggregate index where firm managers
can trade based on their aggregate information and learn from the equilibrium price. Since all firm
managers learn from the stock market, any noise (non-fundamentals) that moves stock prices impacts
the expectations held by all firm managers. This merely adds to the common error and strengthens my
effects: my results depend mainly on the correlation of beliefs across firm managers
and not on any information asymmetry between firm managers and investors.
Last, I explore the welfare implications of information acquisition. In my model, the information
processing capacity of a firm manager is effectively a factor of production just like labor and capital. I
thus examine the degree to which it is efficiently allocated. I find that, as long as the cost of acquiring
information is convex, the equilibrium information capacity chosen will be lower than the socially
efficient capacity. Two factors drive this result. First, since firms are local monopolists, they underinvest in both physical resources and information.16 Second, if managers can learn from financial
markets, this leads to a free-rider problem in acquiring information about aggregate shocks.
My paper connects to several strands of the literature. First, it relates to a vast theoretical
literature that studies the mechanisms through which uncertainty shocks impact the aggregate
economy (see Bloom (2013) for an extensive survey). In contrast to the papers in this literature,
I allow managers to affect the posterior uncertainty facing their firm through learning. My model
delivers an endogenous response of output to changes in uncertainty that operates through changes in
the degree of resource misallocation. Motivated by the assumption that the aggregate and idiosyncratic
uncertainties are positively correlated, previous researchers (e.g., Bloom et al. (2012)) assume that one
state variable drives both uncertainties and find that a shock to the state variable can generate a drop in
output. However, in this paper, I document that the two uncertainties have opposite business cycle
implications.
Second, it is closely related to the literature on endogenous information acquisition and rational
inattention (Sims (2003)). Closest to my paper, Maćkowiak and Wiederholt (2009) study optimal inattention in firms' pricing decisions. They assume that information frictions have only nominal bite
and that all real decisions adjust under the true state of nature. By contrast, I study the
more realistic scenario in which the information friction has real bite. Endogenous information acquisition has been studied in various settings. Van Nieuwerburgh and Veldkamp (2009, 2010) study the information acquisition problem of financial investors and show that
portfolio under-diversification might arise endogenously with information acquisition. Kacperczyk,
Van Nieuwerburgh, and Veldkamp (2014) study the information acquisition problem of a fund manager
15
To the extent that prices reflect information not otherwise available from firms' internal sources, stock markets
provide firms with valuable information and guide real activity. However, in a recent paper, David, Hopenhayn, and
Venkateswaran (2014) find that learning from stock prices is at best only a small part of total learning at the firm level,
even in a well-functioning financial market like the US. Thus, the contribution of financial markets to overall allocative
efficiency and aggregate performance through this channel is quite limited. This is primarily due to the high levels of
noise in market prices, making them relatively poor signals of fundamentals. In contrast, a significant amount of learning
occurs from private sources, i.e., those internal to the firm.
16
Market power introduces a constant distortion to the average level of economic activity. It affects the wedge between
private and social value of information, and hence firms invest less in information acquisition compared to the socially
efficient choice.
who can learn about several assets and show that fund managers optimally choose to process information about aggregate shocks in recessions and idiosyncratic shocks in booms. I contribute to this
literature by showing how the interaction between endogenous information acquisition and uncertainty
shocks leads to endogenous business cycles.
Third, my work is related to the literature on the pricing of aggregate and idiosyncratic volatility.
Representative agent models have explored the role of aggregate consumption growth volatility for
explaining a host of asset pricing stylized facts. In such models, the representative agent is willing
to sacrifice a portion of her expected returns for insurance against a rise in aggregate volatility, but
she does not seek to hedge against idiosyncratic volatility, which is fully diversifiable.17 Empirically,
there is mixed evidence on whether idiosyncratic volatility is priced positively or negatively. Herskovic
et al. (2014) argue that idiosyncratic volatility is priced negatively because markets are incomplete and
households face more labor income risk when idiosyncratic volatility increases. In my model, I abstract
away from this effect and show that idiosyncratic volatility is priced positively, which is consistent with
evidence in Schürhoff and Ziegler (2011) and others.
Fourth, my work is related to the recent literature in macroeconomics on animal spirits as
drivers of the business cycle (Angeletos and La'O (2013)). Angeletos and La'O (2010) show that
introducing common error shocks into the RBC model can generate zero (or even negative) correlation
between output and employment, a moment that RBC models have a tough time matching. Lorenzoni
(2009) claims that common error shocks can drive business cycles. On the empirical side, recent
work by Angeletos, Collard, and Dellas (2014) shows that confidence shocks can account for the bulk
of the observed business-cycle fluctuations. My contribution is to show that common error shocks also
help us explain the excess comovement puzzle.
Last, my paper also relates to the growing literature on the aggregate implications of misallocated
resources, for example, Hsieh and Klenow (2009) and David et al. (2014). Close to my paper, David
et al. (2014) study resource misallocation in an economy with imperfect (and exogenous) information.
However, they do so in a deterministic model. In addition to endogenizing the information acquisition
decision, my model allows for stochastic fluctuations in uncertainty that lead to endogenous fluctuations in aggregate productivity. I also propose a new measure of resource misallocation that is a unique
implication of my model, namely, the elasticity of firms' investment to their TFP. On the welfare front,
this paper is related to Colombo, Femminis, and Pavan (2014), who relate the (in)efficiency in the
acquisition of information to the (in)efficiency in the use of information and explain why efficiency in
the use is no guarantee of efficiency in the acquisition. My model highlights two potential channels
that lead to inefficiency in the acquisition of information.
Layout: Section 2 introduces the model and solves the information acquisition problem. Section 3
studies the implications of endogenous learning. Section 4 provides the empirical evidence. In Section
5, I discuss some extensions. Section 6 studies efficiency and Section 7 concludes. Proofs for the main
results and some extensions can be found in the Appendix.
17
As explained by Campbell (1993), aggregate volatility is a priced state variable provided that the agent has a
preference for early or late resolution of uncertainty.
2 Model
In this section, I build a general equilibrium RBC model to investigate the link between uncertainty
shocks, firms' information acquisition, and their real decisions. In order to focus on the particular
role of learning, I work with a simple framework in which costly information acquisition is the only
friction. My point of departure is a standard general equilibrium model of dispersed information
along the lines of Angeletos and La'O (2010). The model adds three features to this benchmark.
First, firm managers have imperfect information not only about aggregate shocks, but also about their
firm-specific (idiosyncratic) shocks. Second, uncertainty is time-varying, so the model includes shocks
to both the level of technology (first moment) and its variance (second moment) at both the aggregate and
idiosyncratic levels. Third, and most importantly, firm managers can learn about either aggregate or
firm-specific shocks subject to a constraint on their information processing capability.
2.1 Setup
Time is discrete and periods are indexed by t \in \{0, 1, 2, \dots\}. A continuum of firms of fixed measure
one, indexed by i, produce intermediate goods using only labor according to18

Y_{it} = A_t Z_{it} N_{it}^{\alpha}, \qquad \alpha \le 1    (1)
where N_{it} denotes labor employed by the firm. Each firm's productivity is the product of two separate
processes: an aggregate component (A_t) and an idiosyncratic component (Z_{it}). The aggregate and
idiosyncratic components of the firm's productivity follow AR(1) processes:

\log A_t = \rho_a \log A_{t-1} + \varepsilon_{at}
\log Z_{it} = \rho_z \log Z_{it-1} + \varepsilon_{zit}

where I assume \varepsilon_{zit} is independent across firms and time with \varepsilon_{zit} \sim N(-\tfrac{1}{2}\sigma^2_{z,t-1}, \sigma^2_{z,t-1}), and \varepsilon_{at} is independent over time with \varepsilon_{at} \sim N(-\tfrac{1}{2}\sigma^2_{a,t-1}, \sigma^2_{a,t-1}). I allow the variances of the innovations, \sigma^2_{a,t-1} \equiv 1/\tau_{a,t-1} and \sigma^2_{z,t-1} \equiv 1/\tau_{z,t-1}, to vary over time, generating periods of low and high aggregate and idiosyncratic uncertainty. I assume that firms learn in advance that
the distribution of shocks from which they will draw in the next period is changing. This captures the
notion of uncertainty that firms face about future business conditions.
The intermediate goods are bundled to produce the single final good using a standard CES aggregator

Y_t = \left( \int Y_{it}^{\frac{\theta-1}{\theta}} \, di \right)^{\frac{\theta}{\theta-1}}
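Given this aggregator, the final-good producer's optimality condition delivers the isoelastic demand curve for each intermediate good that appears in the stage-3 equilibrium below; a sketch of the standard derivation (nothing here is specific to this paper beyond the CES form):

```latex
% Final-good producer: choose {Y_it} to maximize profits
%   P_t Y_t - \int P_{it} Y_{it}\, di
% subject to Y_t = ( \int Y_{it}^{(\theta-1)/\theta} di )^{\theta/(\theta-1)}.
% First-order condition with respect to Y_{it}:
P_t \,\frac{\partial Y_t}{\partial Y_{it}}
  = P_t\, Y_t^{1/\theta}\, Y_{it}^{-1/\theta} = P_{it}
\quad\Longrightarrow\quad
Y_{it} = Y_t \left(\frac{P_{it}}{P_t}\right)^{-\theta}.
```

The parameter \theta is thus the (constant) elasticity of substitution across intermediate goods, and each firm faces a downward-sloping demand curve, which is the source of the market power discussed in the welfare analysis.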
Stage 1: Managers allocate their attention knowing I^1_{it} = \{\tau^a_{t-1}, \tau^z_{t-1}, A_{t-1}, Z_{i,t-1}\}.
Stage 2: Managers receive signals. Workers go to their respective firms, labor decisions are made, and
wages adjust so that the labor market clears. At this point, workers and firm managers have the same
information regarding future TFP shocks. Denote their information set I^2_{it}. Workers return home and
the economy transitions into stage 3.
Stage 3: Production occurs. All information is revealed. Commodities and asset markets open.
Prices clear these markets. Consumption takes place.
Figure 2: The timeline of the economy
This figure illustrates the timeline of the economy:
[Timeline: at t-1, Stage 1 with I^1_{it} = \{\tau^a_{t-1}, \tau^z_{t-1}, A_{t-1}, Z_{i,t-1}\}; Stage 2 with I^2_{it} = I^1_{it} \cup \{s_{iat}, s_{izt}\}; Stage 3.]
Assumptions: Asset markets and goods markets operate only in stage 3, when information is
homogeneous. This guarantees that asset prices do not convey any information. Moreover, because
my economy admits a representative consumer, allowing households to trade risky assets in stage 3
would not affect any of the results. In Section 5, I allow firm managers to trade on their aggregate
information and learn from equilibrium prices in stage 2 and show that the analysis remains similar.
Households: There is a representative household, consisting of a consumer and a continuum of
workers with preferences:
U = E_0 \sum_{t=0}^{\infty} \beta^t \left[ U(C_t) - \int_i V(N_{it}) \, di \right]

where i \in [0, 1] indexes firm i, C_t represents consumption of the final good in period t, and N_{it} is the labor
effort of the worker who works for firm i. I assume
U(C) = \frac{C^{1-\gamma}}{1-\gamma} \qquad \text{and} \qquad V(N) = \frac{N^{1+1/\nu}}{1+1/\nu}

where \gamma > 0 parametrizes the income elasticity of labor supply and also the coefficient of relative risk
aversion, and \nu parametrizes the Frisch elasticity of labor supply. Assume that all idiosyncratic risk is
insurable. The representative household owns all the firms in the economy and its budget constraint
is given by
C_t + B_{t+1} = \int_i \Pi_{it} \, di + \int_i W_{it} N_{it} \, di + R_t B_t

where \Pi_{it} denotes the profits from firm i, W_{it} denotes the period-t wage of firm i, R_t is the period-t
nominal gross rate of return on the riskless bond, and B_t is the amount of bonds held in period t.
Firms: Since firms choose inputs in stage 2 of every time period without any adjustment costs,
the problem of each firm is essentially static. The firm's realized profit is given by

\Pi_{it} = P_{it} Y_{it} - W_{it} N_{it}.
In stage 2, each firm's objective is to maximize the representative household's valuation of its profits,
i.e.,

E_{it}\left[ U'(C_t) \, \Pi_{it} \right].
Market clearing: The representative household consumes the total output produced in the
economy: C_t = Y_t, since there is no capital. In the labor market, wages adjust to equate labor
demanded and supplied. In the product market, prices adjust to equate demand and supply of goods,
i.e., C_{it} = Y_{it}.
Stochasticity and information: There are two types of shocks: aggregate TFP shocks and idiosyncratic TFP shocks. I represent the aggregate state of the economy by the history \theta^t \equiv (\theta_0, \dots, \theta_t)
of the exogenous random variable \theta_t, which includes the aggregate shock and the aggregate and idiosyncratic uncertainties. As defined earlier, I^1_{it} and I^2_{it} denote the information sets of firm i at the beginning of
stages 1 and 2 of period t, respectively.
2.2 Equilibrium
Definition 1. An equilibrium consists of an employment strategy N(I^2_{it}), a production strategy Y_{it}(I^2_{it}, \theta^t),
a wage function W(I^2_{it}), an aggregate output function Y(\theta^t), a price function P(\theta^t), a consumption
strategy C(\theta^t), and signal precisions \{\tau_{as}, \tau_{zs}\}(I^1_{it}) such that the following are true:
(i) Stage 1: Firms acquire information to maximize the net present value of all future cashflows,
subject to an information processing constraint/cost.
(ii) Stage 2: Representative household and all firms are at their respective optima given their
information set.
(iii) Stage 3: Commodity and asset prices are determined such that the respective markets clear.
I solve for the equilibrium backwards. In stage 3, the optimal demand for intermediate good i is
given by

Y_{it} = Y_t \left( \frac{P_{it}}{P_t} \right)^{-\theta} \quad \Longleftrightarrow \quad \frac{P_{it}}{P_t} = \left( \frac{Y_{it}}{Y_t} \right)^{-1/\theta}

where P_{it} denotes the price of good i and P_t denotes the price of the final good. The final good is the
numeraire, so P_t = 1. Revenue of firm i is given by
P_{it} Y_{it} = Y_t^{1/\theta} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} (N_{it,d})^{\alpha_1}, \qquad \text{where } \alpha_1 \equiv \alpha\left(1 - \tfrac{1}{\theta}\right)

and the subscript d indicates labor demanded by each firm given wages. The firm chooses labor demand to maximize

E_{it}\left[ U'(C_t) \left( Y_t^{1/\theta} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} (N_{it,d})^{\alpha_1} - W_{it} N_{it,d} \right) \right]    (2)
where the expectation is taken with respect to the firm manager's information set I^2_{it}. The first-order
condition for the firm's demand for labor is given by

\alpha_1 E_{it}\left[ U'(C_t) Y_t^{1/\theta} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} \right] (N_{it,d})^{\alpha_1 - 1} = E_{it}\left[ U'(C_t) W_{it} \right].
In stage 2, since workers who work for firm i share the same information set as the firm, the
optimal labor supply (N_{it,s}) solves the first-order condition:

E_{it}\left[ U'(C_t) W_{it} \right] = V'(N_{it,s}).
Imposing market clearing in the labor market (N_{it,d} = N_{it,s} = N_{it}) implies that the equilibrium is
pinned down by the following condition:

\underbrace{\alpha_1 E_{it}\left[ Y_t^{1/\theta - \gamma} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} \right] N_{it}^{\alpha_1 - 1}}_{\text{Marginal Benefit}} = \underbrace{N_{it}^{1/\nu}}_{\text{Marginal Disutility}}    (3)
This condition equates the private cost and benefit of effort for each firm. The right-hand side
is simply the marginal disutility of an extra unit of labor, and the left-hand side is the product of the
marginal utility of consuming an extra unit of good i and the marginal product of labor.
Lemma 1. In stage 2, the equilibrium level of labor is the solution to the following fixed-point problem:

n_{it} = \text{const} + \frac{\left(\tfrac{1}{\theta} - \gamma\right) E_{it}[\log Y_t] + \left(1 - \tfrac{1}{\theta}\right) E_{it}[\log A_t + \log Z_{it}]}{\tfrac{1}{\nu} + 1 - \alpha_1}    (4)
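To see where this fixed point comes from, take logs of the stage-2 optimality condition (3). The sketch below assumes the functional forms U(C) = C^{1-\gamma}/(1-\gamma) and V(N) = N^{1+1/\nu}/(1+1/\nu) together with market clearing C_t = Y_t; lowercase letters denote logs, and const collects constants and log-normal (Jensen) correction terms:

```latex
% Logs of (3):
%   \alpha_1 E_{it}[ Y_t^{1/\theta-\gamma} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} ]
%     N_{it}^{\alpha_1-1} = N_{it}^{1/\nu}
\log\alpha_1
  + \left(\tfrac{1}{\theta}-\gamma\right) E_{it}[y_t]
  + \left(1-\tfrac{1}{\theta}\right) E_{it}[a_t + z_{it}]
  + (\alpha_1 - 1)\, n_{it} + \text{const}
  = \tfrac{1}{\nu}\, n_{it}
% Collecting the n_{it} terms gives the best response in the lemma:
n_{it} = \frac{\log\alpha_1 + \text{const}
  + \left(\tfrac{1}{\theta}-\gamma\right) E_{it}[y_t]
  + \left(1-\tfrac{1}{\theta}\right) E_{it}[a_t + z_{it}]}
  {\tfrac{1}{\nu} + 1 - \alpha_1}
```

Note that N_{it} is chosen on the firm's information set and so can be pulled outside the conditional expectation, which is what makes the log-linearization exact up to the Jensen terms.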
The above condition is just the log-linear transformation of (3). Each firm's input decision determines its output, which in turn determines aggregate output. The expectation of aggregate
output enters the firm's labor decision. This best-response condition is similar to the best response
in the abstract class of models (beauty-contest games) studied in Morris and Shin (2002) and Angeletos
and Pavan (2007): an agent's best response is a linear function of its expectations of the fundamentals
and of an aggregate variable. The economy features strategic complementarity when 1/\theta - \gamma > 0 and
strategic substitutability when 1/\theta - \gamma < 0.
Aggregate output then satisfies

Y_t = \frac{1}{P_t} \int P_{it} Y_{it} \, di = Y_t^{1/\theta} \int A_t^{1-1/\theta} Z_{it}^{1-1/\theta} \left( \alpha_1 E_{it}\!\left[ Y_t^{1/\theta - \gamma} A_t^{1-1/\theta} Z_{it}^{1-1/\theta} \right] \right)^{\frac{\alpha_1}{1/\nu + 1 - \alpha_1}} di.
(5)
To complete the characterization of the firm's problem, and therefore the production-side equilibrium in the economy, I need to spell out the firm's information set. I defer the discussion to the next
subsection, where I endogenize firms' information and, for now, conjecture that all firms have the same
signal precisions, which I will later show to be true.
Specifically, I assume that the signals about fundamentals are of the form:

Assumption 1. The signals available to firm manager i are of the form:

s_{iat} = a_t + \xi_t + \epsilon_{iat}, \qquad s_{izt} = z_{it} + \epsilon_{izt},    (6)

where \xi_t \sim N(0, 1/\tau_\xi) is a common noise term, and \epsilon_{iat} \sim N(0, 1/\tau_{as}) and \epsilon_{izt} \sim N(0, 1/\tau_{zs}) are idiosyncratic noise terms arising from
rational inattention.
This assumption formalizes the idea that paying attention to aggregate conditions and paying
attention to idiosyncratic conditions are separate activities. For example, attending to the current
state of monetary policy is a separate activity from attending to firm-specific productivity. The
signal about aggregate shocks can be interpreted as public sources of information about the state of
the economy, in which case the correlated error \xi_t may reflect, for example, measurement error in
macroeconomic statistics, while the idiosyncratic error (\epsilon_{iat}) may be interpreted as the byproduct of
limited attention. Alternatively, \xi_t can be interpreted as a proxy for the noise shocks studied in Lorenzoni
(2009) or the confidence shocks studied in Angeletos and La'O (2013).
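The Bayesian updating induced by signals of this kind is standard Gaussian precision-weighting; a minimal numerical sketch (all numbers below are hypothetical, chosen only for illustration):

```python
# Gaussian learning sketch: a prior x ~ N(mu0, 1/tau0) combined with a
# signal s = x + noise, noise ~ N(0, 1/tau_s), yields a Gaussian posterior
# whose precision is the sum of the two. Numbers are hypothetical.

def update(mu0, tau0, signal, tau_s):
    """Posterior (mean, precision) after observing one Gaussian signal."""
    tau1 = tau0 + tau_s
    mu1 = (tau0 * mu0 + tau_s * signal) / tau1
    return mu1, tau1

# For the composite aggregate signal s_iat = a_t + xi_t + eps_iat, the
# relevant signal precision combines the two noises, which add in variance:
tau_xi, tau_as = 800.0, 800.0
tau_signal = 1.0 / (1.0 / tau_xi + 1.0 / tau_as)   # = 400 with these numbers

# Manager's prior on the aggregate shock, updated with one aggregate signal:
mu, tau = update(mu0=0.0, tau0=100.0, signal=0.02, tau_s=tau_signal)
# Posterior mean is pulled toward the signal in proportion to its precision:
# mu = (400 / 500) * 0.02 = 0.016, tau = 500.
```

The same arithmetic shows why the common noise \xi_t matters: even a manager with unbounded private attention (tau_as large) faces an effective precision capped at tau_xi, so the common error never washes out of beliefs.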
I will look for a symmetric equilibrium in which all firms acquire the same signal precisions.
I next conjecture, and later verify, that the equilibrium labor chosen, n_{it}, is linear in the firm's
information set I^2_{it} = \{a_{t-1}, z_{it-1}, s_{iat}, s_{izt}\},

n_{it} = \beta_0 + \beta_1 a_{t-1} + \beta_2 z_{i,t-1} + \beta_3 s_{iat} + \beta_4 s_{izt},    (7)

and that aggregate output y_t is linear in the state vector \theta_t, which collects a_{t-1}, the aggregate
shock a_t, the common noise \xi_t, and the uncertainty states,

y_t = \delta_0 + \delta_1 a_{t-1} + \delta_2 a_t + \delta_3 \xi_t,    (8)

where the \beta and \delta coefficients (which may depend on the uncertainty states) are determined in
equilibrium. Output thus depends on the aggregate TFP shock (a_t) as well as the common noise term \xi_t. A key feature contributing to this tractability is that aggregate output remains log-normal as a result of the Law
of Large Numbers.
2.3
Until now, I have solved for the firms' optimal input decisions given their signals, i.e., the stage 2
equilibrium. In this subsection, I endogenize their signals, i.e., solve their stage 1 problem. I assume
that the firm manager has access to a variety of information sources and has to decide what to pay
attention to, subject to an information flow constraint. Processing information is modeled as receiving
noisy signals about the fundamentals. Firm managers choose the precisions of their signals, \tau_{as} and
\tau_{zs}, subject to a learning constraint.
Two Learning Technologies
Formulating a problem with information choice requires a learning technology. Which learning technology is appropriate depends on the type of data agents are acquiring.19 Since agents are learning
about future productivity, which is subjective, rational inattention effectively captures the learning
constraint. Recall that the agent's optimal action (from (4)) is a function of both the fundamentals and
aggregate output, which, in turn, is driven by both the fundamentals and the correlated error. So an agent
wants to learn not only about the fundamentals, but also about the common error in the aggregate
signals. Given this, the inattention constraint needs to be modified for my information structure.
There are two ways of writing the constraint:

1. Approximate constraint: If the agent only learns about fundamentals, his constraint can be written
as

    I(at ; siat) + I(zit ; sizt) ≤ κ,   with λ1 = I(at ; siat), λ2 = I(zit ; sizt), and λ1, λ2 ≥ 0.   (9)

2. Exact constraint: The agent tries to extract a linear combination of the aggregate shock and the
common noise, φat + ψεt, from his signal siat, i.e.,

    I(φat + ψεt ; siat) + I(zit ; sizt) ≤ κ,   and λ1, λ2 ≥ 0,   (10)

where I(x; y) denotes the mutual information between random variables x and y, and the coefficients
φ and ψ are equilibrium objects defined in the appendix.

The information constraint in the second approach is endogenous, since the coefficients φ and ψ
are equilibrium objects. In the first approach, the agent incurs a cost only in extracting information about
the fundamentals. In both constraints, λ1 denotes the information flow allotted to learning about aggregate
conditions and λ2 denotes the information flow allotted to learning about idiosyncratic conditions.
19
For general information technologies, refer to Hellwig, Kohls, and Veldkamp (2012).
In the appendix, I show that the information flow constraint is always binding. This implies that firms
face a trade-off: attending more carefully to aggregate conditions requires attending less carefully to
idiosyncratic conditions. This substitutability in learning is a crucial assumption and drives many of the
results in this paper.
The second constraint I impose on agents' learning is the no-forgetting constraint, which ensures
that the chosen precisions are always non-negative. It prevents an agent from erasing prior
information to make room to gather new information in another dimension.
Information choice problem
I now solve the firm manager's information acquisition problem (i.e., the stage 1 problem). Substituting
the optimal labor choice (7) into (2), the net present value can be rewritten as Eit[U′(Ct) Xit], where
Xit is a power function of consumption Ct, aggregate productivity At, and idiosyncratic productivity
Zit whose exponents are reported in the appendix. The manager allocates attention between aggregate
and idiosyncratic conditions by maximizing the expected payoff subject to the information constraint,
i.e.,

    max_{λ1, λ2}  E[ Eit[ U′(Ct) Xit ] ]   (11)

subject to the signal structure

    siat = at + εt + εiat,
    sizt = zit + εizt,

and either (9) or (10). The outer expectation operator in (11) is the expectation under the information
set I¹it, whereas the inner expectation is under the information set I²it. The solution to this problem
is given below:
Proposition 2. The optimal attention is given by

    λ1 = κ                          if x ≤ 2^(−2κ),
    λ1 = κ/2 − (1/4) log2(x)        if 2^(−2κ) < x < 2^(2κ),   (12)
    λ1 = 0                          if x ≥ 2^(2κ),

where x is the product of two terms: the relative uncertainty of idiosyncratic versus aggregate
conditions (σz²/σa²) and their relative importance in the manager's payoff. Under the exact
constraint (10), the relative-importance term also depends on the equilibrium coefficients φ and ψ and
on the variance of the common error; the exact expressions are reported in the appendix.
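The allocation rule in (12) is easy to evaluate numerically. The following is a small sketch (Python); the capacity κ = 1.5 and the values of x are illustrative, and the piecewise form follows the standard water-filling solution:

```python
import math

def lambda1(x: float, kappa: float) -> float:
    """Attention allotted to aggregate conditions under total capacity kappa (bits)."""
    if x <= 2.0 ** (-2.0 * kappa):   # aggregate conditions dominate: full attention
        return kappa
    if x >= 2.0 ** (2.0 * kappa):    # idiosyncratic conditions dominate: no attention
        return 0.0
    return kappa / 2.0 - 0.25 * math.log2(x)   # interior allocation

kappa = 1.5
print(lambda1(0.01, kappa))   # x below 2**(-2*kappa) = 0.125, so the corner: 1.5
print(lambda1(1.0, kappa))    # symmetric case: half the capacity, 0.75
```

As x rises (idiosyncratic conditions become relatively more uncertain or more important), attention shifts smoothly away from the aggregate signal until the no-forgetting corner binds.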
0.55
z=0.1
z=0.11
0.5
z=0.11
z=0.12
0.45
z=0.12
0.3
0.35
z=0.1
0.4
0.35
0.25
0.3
0.2
0.25
0.6
0.7
0.8
0.9
0.6
0.7
0.8
0.9
The optimal attention depends on two objects: (1) relative uncertainty and (2) relative importance.
When aggregate conditions are more variable than idiosyncratic conditions, agents pay more attention
to aggregate conditions. Relative importance is an endogenous object and depends on the equilibrium
information acquired, because φ and ψ are equilibrium objects. This implies that we need to solve a
fixed-point problem to determine the exact precisions acquired in equilibrium. I first show that the
fixed-point problem has a unique solution.
Lemma 2. The fixed point given by

    λ1 = κ/2 − (1/4) log2( σz² / ((1 − φ)σa + (1 − ψ)σε)² )

has a unique solution.
An obvious hurdle in testing models with endogenous learning is that we cannot directly observe the
variables included in firm managers' information sets. In this section, I test the model indirectly: I link
the information choice (solved in the previous section) to testable patterns in the data.
3.1
Comovement of inputs
In the model, firm managers choose inputs to maximize the NPV of the firm conditional on their
information set. Note that (7) gives the optimal labor input chosen by the firm. By definition,
idiosyncratic TFP shocks are uncorrelated across firms, which implies that the comovement of
inputs across firms is given by:

    Cov(nit, njt) = γa² (σa² + σε²),   (13)

where γa (an endogenous object) is the weight managers place on the aggregate signal, σa² is aggregate
uncertainty, and σε² is the variance of the common error in beliefs. Equation (13) implies that the
forces causing inputs to co-move are aggregate uncertainty, the common error in beliefs, and the
endogenous object γa. It is easy to show that γa increases as agents learn more about aggregate shocks.
Observe that, if the common error term is sufficiently noisy, we can get excess comovement in inputs
not warranted by comovement in TFP.
Benchmark economy: In the spirit of the existing literature (Angeletos and La'O (2010)), the natural
benchmark economy is one in which learning is exogenous (i.e., one in which signal precisions
are constant and independent of the uncertainty faced by the agents).20 I will refer to this specification
as the benchmark economy and derive predictions that help distinguish it from an economy with
endogenous learning.
Theorem 2. 1. If the common error is sufficiently noisy, comovement(inputs) > comovement(TFP).
2. In an economy with endogenous learning, the comovement of inputs across sectors increases with
aggregate uncertainty and decreases with idiosyncratic uncertainty. This result does not hold in the
benchmark economy.
Sectoral comovement of inputs can be higher than comovement of TFP because of the common
error in beliefs. This helps explain the excess comovement puzzle: sectoral inputs (investment and
labor) comove strongly with each other, even though the comovement in sectoral productivity (TFP) is
very weak. The common error term could be the result of measurement error in macroeconomic
statistics, sender-specific noise shocks (Veldkamp and Wolfers (2007)), animal spirits/sentiments
(Angeletos and La'O (2013)), or expectation shocks (Lorenzoni (2009)).
Part 2 of the theorem states that, with endogenous learning, the comovement of inputs across sectors
increases with aggregate uncertainty and decreases with idiosyncratic uncertainty. As aggregate
uncertainty increases, firms learn more about aggregate shocks and γa increases, so the comovement
of inputs increases with aggregate uncertainty. This part of the result holds even without endogenous
learning (i.e., in the benchmark economy), as can be seen from expression (13). On the other hand,
the comovement of inputs decreases with idiosyncratic uncertainty. As idiosyncratic uncertainty
increases, firms allocate less capacity to learning about aggregate shocks, which decreases γa. As all
firms learn less about aggregate shocks, the comovement of inputs across firms decreases. This is not
true in the benchmark economy: there, as idiosyncratic uncertainty increases, the comovement of
inputs does not change. This is evident from (13), which depends on aggregate uncertainty and
aggregate signal precisions and is independent of idiosyncratic uncertainty.
Figure 4 plots the comovement of inputs versus aggregate uncertainty (left plot) and idiosyncratic
uncertainty (right plot) in the benchmark economy (solid line) and in an economy with endogenous
learning (dotted line). In the left plot, note that the results are qualitatively the same in both
economies. In the right plot, note that the comovement of inputs does not change with idiosyncratic
uncertainty in the benchmark economy but decreases with idiosyncratic uncertainty in the economy with
endogenous learning. This leads to my first testable prediction:
Prediction 1: Comovement of inputs across sectors increases with aggregate uncertainty and
decreases with idiosyncratic uncertainty.
20
Another possible benchmark model is one in which the agents' allocation of capacity is independent of the uncertainty
they face. All my results hold for this definition of the benchmark as well.
[Figure 4: Covariance of inputs versus aggregate uncertainty (left) and idiosyncratic uncertainty
(right), for the benchmark economy ("Baseline") and the economy with endogenous learning.]

3.2
In this section, I will study the aggregate and asset pricing implications of uncertainty shocks and
endogenous learning.
Recall that the preferences of the representative household are given by

    U = E [ Σ_{t=0}^∞ β^t ( U(Ct) − ∫i V(Nit) di ) ],

subject to the per-period budget constraint

    Pt Ct + Bt+1 + ∫i Vit θi,t+1 di ≤ ∫i (Vit + Πit) θi,t di + ∫i Wit Nit di + Rt Bt,

where Vit denotes the ex-dividend value of firm i in period t, θit denotes the ownership share of firm i in
period t, and Πit denotes the dividend paid by firm i in period t. Market clearing in the asset markets
is given by θit = 1 for all i, t. The first-order condition for the inter-temporal consumption allocation
problem yields

    Et[ (St+1/St) Ri,t+1 ] = 1,

where Ri,t+1 = (Πi,t+1 + Vi,t+1)/Vit. This is a standard consumption-CAPM expression and has to hold
for each asset i, where St+1/St is the stochastic discount factor (SDF), given by

    log(St+1/St) = log β − γ log(Ct+1/Ct),

and Ri,t+1 is the one-period return from holding firm i's share from period t to period t + 1.
Solving for aggregate output, unconditional (log) aggregate productivity can be written as

    ā0 = const0 + const1 · (posterior uncertainty about aggregate conditions)
               + const2 · (posterior uncertainty about idiosyncratic conditions),   (14)

where the endogenous coefficients are reported in the appendix. Uncertainty shocks affect output only
through their effect on ā0, the unconditional aggregate productivity.
In my economy, there are four fundamental shocks:
the first-moment macro TFP shock (at),
the second-moment idiosyncratic uncertainty shock (uz,t),
the second-moment aggregate uncertainty shock (ua,t),
the first-moment common noise shock (εt).
Each of them affects the consumption of the representative agent. To understand their risk premia, I
investigate how much households care about each risk.
Theorem 3. Log-linearizing consumption growth around the steady state, the stochastic discount factor
in my economy can be approximately written as

    log(St+1/St) ≈ const − b1 at+1 − b2 εt+1 − b3 uz,t + b4 ua,t.   (15)

The price of risk of a generic shock z is

    λz = −covt( log(St+1/St), zt+1 ).
The price of risk associated with z depends on how the state price of consumption S is correlated with
the shock z. If the marginal utility of wealth, and, hence, the state price of consumption, is lower
following an increase in z, then this shock will carry a positive risk premium. Since households attach
a lower value to these states, they are willing to pay a lower price for a security whose payoffs are
concentrated in them, or equivalently they demand a positive risk premium. Conversely, if the state
price of consumption is higher following a positive shock to z, households are willing to pay a higher
price for securities that are positively correlated with z, and thus the risk premium is negative. I next
describe how each of the shocks is priced in equilibrium.
Macro TFP shock: The price of risk of the productivity shock (at), b1, is positive, which implies
that households demand a positive risk premium to invest in securities that are positively correlated
with the aggregate productivity shock.
Common error shock: The price of risk of the common error shock is positive if and only if θ < 1.
This result is due to general equilibrium forces. If θ > 1, an increase in noise leads to higher
expected aggregate income, which discourages labor supply (from (7)) and raises real wages; this
depresses firm profits, production, and employment. Since all firms now invest less, output and, hence,
consumption are lower.21 This increases the marginal utility of wealth, so the shock is priced negatively.
The opposite is true when θ < 1. Macroeconomists interpret the common error shock as a sentiment
shock or animal spirits, as in Angeletos and La'O (2013). My model shows that these sentiment shocks
are priced, and the risk premium associated with them is negative if and only if θ > 1.
Idiosyncratic uncertainty shock: From equation (15), changes in idiosyncratic uncertainty positively
affect consumption growth, and, hence, the price of risk of the idiosyncratic uncertainty shock is
positive. My model links rational inattention at the firm level to resource misallocation and, hence, to
aggregate productivity. In the model, firm managers choose inputs under limited information about
their fundamentals. This informational friction leads to a misallocation of resources across firms in
an ex-post sense, reducing output. As firms learn more about their idiosyncratic shocks, this
information friction shrinks: resources are allocated more efficiently, which leads to higher output. In an
economy with endogenous learning, as idiosyncratic uncertainty increases, firms shift their attention
toward firm-specific (idiosyncratic) shocks, which decreases the extent of misallocation
in the economy and increases output. This increases the consumption of the representative household
and, hence, decreases the marginal utility of consumption. Therefore, idiosyncratic uncertainty is
pro-cyclical and has a positive price of risk. Empirically, there is mixed evidence on whether
idiosyncratic uncertainty/volatility is priced positively or negatively. Schürhoff and Ziegler
(2011) show that common shocks to idiosyncratic volatility are positively priced, consistent with my
theory. In the next section, I provide more empirical evidence consistent with the proposed channel.
Aggregate uncertainty shock: The price of risk of the aggregate uncertainty shock is negative. As
aggregate uncertainty increases, firms learn more about aggregate shocks and less about idiosyncratic
shocks. This leads to an inefficient allocation of resources and, hence, decreases aggregate productivity.
Total output and, hence, consumption will be lower, which increases the marginal
utility of wealth; therefore, aggregate uncertainty is priced negatively. This is consistent with
the evidence in Bali et al. (2014) and others. Bloom, Bond, and Van Reenen (2007) show, using
vector autoregression (VAR) estimations, that aggregate uncertainty shocks have a large real impact,
generating a substantial drop and rebound in output, consumption, and employment over the following
6 months. Thus economic uncertainty is a relevant state variable affecting future consumption and,
therefore, should be priced. My theory provides a justification for why aggregate uncertainty should
be negatively priced.
In the macroeconomics literature, researchers have assumed that aggregate and idiosyncratic uncertainty
shocks are driven by a single state variable, motivated by the observation that the two are
positively correlated in the data. They showed that a shock to that state variable can cause a fall in
output. I relax the assumption of one state variable driving both uncertainties and find that the two
have opposite implications for output.
Recessions: As noted earlier, recessions are periods of high risk aversion and high aggregate
uncertainty. In the model, both of these effects lead firm managers to learn more about aggregate shocks
and less about idiosyncratic shocks. This implies that misallocation in the economy rises and,
hence, output falls.
3.2.1
In this subsection, I study the market beta implications of managers' endogenous learning behavior.
Empirically, there is substantial evidence that individual assets' betas fluctuate over time. Yet little
is known about the source of this variation, either theoretically or empirically. The next result speaks
directly to that. In my model, as mentioned earlier, the consumption CAPM holds but the market CAPM
does not, since the representative household's wealth comprises not only stocks but also human
capital. Even though the market CAPM does not hold in my economy, I can define the market beta of
security i as the regression coefficient of the return of security i on the market return, just as an
econometrician does when computing market betas.
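In this regression sense, the measured market beta is just the OLS slope of security returns on market returns. The following is a minimal sketch (Python); the return series are fabricated for illustration:

```python
def beta(asset: list, market: list) -> float:
    """OLS slope of asset returns on market returns: Cov(R_i, R_m) / Var(R_m)."""
    n = len(asset)
    ma, mm = sum(asset) / n, sum(market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(asset, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

r_m = [0.01, -0.02, 0.03, 0.00, 0.02]   # hypothetical market returns
r_i = [0.02, -0.04, 0.06, 0.00, 0.04]   # a security that moves twice as much
print(beta(r_i, r_m))                    # -> 2.0
```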
[Figure 5: The timeline of the economy and risk weights: information acquisition of firm i → input
decisions → comovement with the aggregate market → cash flows → risk weights (βi).]
Market betas are derived from the covariance of a firm's cash flows (and discount rates) with market
cash flows (and market discount rates). Because a representative investor exists, an asset i that pays
Di,t is priced at time t − 1 (Vi,t−1) according to

    Vi,t−1 U′(Ct−1) = Et−1[ (Di,t + Vi,t) U′(Ct) ].

The price depends on the firm's idiosyncratic state, the aggregate state, and the two uncertainty states,
i.e., the price can be written as Vi,t−1 = V(Zit−1, At−1, σ^A_{t−1}, σ^Z_{t−1}). The return is defined as

    Rit = (Di,t + Vi,t) / Vi,t−1.
The market beta of security i is βit = Cov(Rit, Rmt) / Var(Rmt).
For now, assume that the idiosyncratic shocks are not persistent. This assumption implies that the
prices of all securities will be the same, since shocks in each period are independent of past shocks. This
is an extreme assumption, made only for tractability. Since firms solve a static problem in
each period, betas are derived from the covariance of the firm's cash flow with the cash flow of the
market:22

    βit = Covt−1(Rit, Rmt) / Vart−1(Rmt) = Cov(Dit, D̃t) / Var(D̃t),

where D̃t denotes the market dividends. Assume that firms pay all their revenues as dividends, so that
Dit equals firm i's revenue, a power function of aggregate output Yt, aggregate productivity At, and
idiosyncratic productivity Zit (the exponents are reported in the appendix).
Substituting dividends into the expression for market beta, I get the following result:
Proposition 3. The market beta of firm i is given by

    βi(εia, εiz) = (η1i f1 + η2i f2) / (f1 + f2),   (16)

where εia and εiz denote the idiosyncratic signal errors in the aggregate and idiosyncratic signals of firm i;
η1i and η2i are functions of the signal errors with mean 1, and all the variables are defined in the appendix.
From the above expression, first note that the average beta across all firms is one, since η1i and η2i are
functions with mean one. The market beta of firm i is thus a function of the errors in its aggregate
and idiosyncratic signals. From (16), it is straightforward to compute the dispersion of betas.
Definition 2. The dispersion of betas across firms is given by

    Disp(β) = ∫ (βi − 1)² di.

Dispersion is calculated as the extent to which betas differ from their mean of one.23
Proposition 4. In the model, the dispersion of market betas across firms is given by

    Disp(β) = f( γa² σ²εa + γz² σ²εz ),

where f is an increasing function, and σ²εa and σ²εz are the variances of the errors in the aggregate and
idiosyncratic signals.
As robustness, I also computed the beta of each security with respect to the wealth portfolio. Results remain
qualitatively the same.
23
In the empirical section, I compute it as the weighted average dispersion of betas around the mean. In the model, the
weights are equal because the price of all firms is the same at the beginning of every period.
Note that γa and γz denote the weights managers put on the signals siat and sizt, respectively, in
the optimal input choice (from (7)). The dispersion of market betas across firms is driven by the
dispersion of beliefs about aggregate and idiosyncratic shocks. As the dispersion of beliefs increases,
the dispersion of betas increases.
Theorem 4.
1. Without endogenous learning, the dispersion of market betas increases with both aggregate and
idiosyncratic uncertainty.
2. With endogenous learning and substantial common noise (i.e., σε sufficiently large), the dispersion
of market betas decreases with aggregate uncertainty and increases with idiosyncratic uncertainty.
I can prove this result analytically for a special case of the parameters; numerically, it holds more
generally. The theorem states that, in an economy in which agents are endowed with signals of constant
precision (the benchmark economy), the dispersion of betas increases with both types of uncertainty. This
is because, as uncertainty increases, Bayesian agents put more weight on the signals they receive.
Because the signal realizations are heterogeneous across firms, the resulting posterior beliefs become
more different from each other, which increases the dispersion of beliefs and, hence, the dispersion
of market betas. This is true for both aggregate and idiosyncratic uncertainty. So, in the benchmark
economy, the dispersion of betas increases with both aggregate and idiosyncratic uncertainty.
On the other hand, in an economy with endogenous learning, the dispersion of betas decreases with
aggregate uncertainty and increases with idiosyncratic uncertainty if the common error is sufficiently
noisy. Recall that this condition is exactly what is required to generate the excess comovement in inputs
observed in the data. Higher aggregate uncertainty implies that firm managers' priors are uninformative, and
endogenous learning implies that managers devote more attention to learning about aggregate conditions.
Combining these two effects, Bayesian learning implies that managers put more weight on the
signals received. If there is sufficient common noise in the aggregate signals, managers' signals
become increasingly similar with more learning. Because managers weigh these similar signals
more heavily, their resulting posterior beliefs become more similar to each other. This convergence in beliefs
generates similar real decisions and cash flows, and hence the dispersion of market betas across firms
decreases. This leads to my second testable prediction:
3.3
In this subsection, I develop a proxy for misallocation: the elasticity of firm investment to its TFP.
Recall that the firm's optimal choice of inputs is given by

    log Inputsit = γ0 + γa1 at−1 + γz1 zi,t−1 + γa siat + γz sizt.   (17)

A strong prediction of the model is that γz increases as managers learn more about their idiosyncratic
shocks. Higher idiosyncratic uncertainty leads to more learning about idiosyncratic shocks, which
increases γz and decreases the extent of misallocation in the economy, so aggregate productivity
increases. Hence, a high γz is a proxy for low resource misallocation.
Imagine an economist with cross-sectional firm-level data on investment I, capital K, and productivity
(TFP). Suppose the economist estimates, by ordinary least squares (OLS), the regression

    log(I/K)i,t = αt + βt tfpi,t+1 + εi,t,

where αt and βt are unknown coefficients and εi,t is an error term that accounts for the fact that the
right-hand-side variables do not perfectly predict the log investment rate.
Suppose we estimate the regression every period. Comparing the regression equation with (17),
I claim that βt captures the effect of γz. This is because the intercept at each instant, αt, captures the
effect of aggregate shocks and constants, so any time variation in βt corresponds to time variation in γz.
In particular, a higher βt is associated with firms learning more about idiosyncratic shocks and hence
implies less misallocation of resources, which leads to higher aggregate productivity. Figure 1 plots
the time series of βt and aggregate productivity. The correlation between the two series is 0.6 and
statistically significant, consistent with my theory.
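The period-by-period cross-sectional regression underlying the βt series can be sketched as follows (Python); the firm-level numbers are fabricated for illustration:

```python
def cross_section_slope(tfp: list, log_ik: list) -> float:
    """OLS slope beta_t from log(I/K)_{i,t} = alpha_t + beta_t * tfp + error."""
    n = len(tfp)
    mx, my = sum(tfp) / n, sum(log_ik) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(tfp, log_ik))
    sxx = sum((x - mx) ** 2 for x in tfp)
    return sxy / sxx

# One fabricated cross section: investment rates move one-for-two with TFP.
tfp_t = [-0.2, -0.1, 0.0, 0.1, 0.2]
log_ik_t = [-2.1, -2.05, -2.0, -1.95, -1.9]
print(cross_section_slope(tfp_t, log_ik_t))   # about 0.5
```

Repeating this estimation year by year yields the time series of βt whose correlation with aggregate productivity is examined in Figure 1.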
4 Empirical Analysis
In this section, I first describe how I empirically construct the uncertainty measures. Volatility in
Zit leads to cross-sectional dispersion in firm performance (productivity shocks, sales,
stock market returns, etc.), while volatility in At induces higher variability in aggregate variables like
GDP growth and the S&P 500 index.24 I construct proxies for idiosyncratic uncertainty using
cross-sectional dispersion measures, and for aggregate uncertainty using the volatility of aggregate
variables like GDP, the VIX, etc.
I conduct all my analysis at the sector/industry level. Given that idiosyncratic uncertainty is
defined through cross-sectional measures, it is easier to measure at the sectoral level than at the firm
level. Moreover, betas estimated at the industry level are more stable than betas estimated at the firm
level. In principle, similar tests could be conducted at the firm level.
Idiosyncratic Uncertainty over the Business cycle
In this subsection, I describe how I measure idiosyncratic uncertainty. I use two different methods:
1. Using stock returns: I use the cross-sectional dispersion in realized excess returns (relative to the
CAPM) of industry portfolios to measure idiosyncratic uncertainty at the industry level. This reflects the
volatility of news about industry performance. This is the method adopted by Campbell et al. (2001);
for more details on the estimation, refer to their paper.
2. Using Census & ASM data: I first calculate establishment-level TFP (zj,t) using the standard
approach of Foster et al. (2000). I then define TFP shocks (ej,t) as the residual from the following
regression:
24
The term Zit accommodates two interpretations: the productivity of intermediate-good producer i or a firm-specific
demand shifter. I am not able to distinguish between these two interpretations in my theory or empirical work.
4.1
Comovement in inputs
In this subsection, I describe how I test the real-side hypotheses of my model. I use KLEM annual
data from Dale Jorgenson and collaborators from 1949 to 2005.25 The data comprise 35 industries
that cover the entire non-farm, non-mining private economy. This database provides factor inputs and
outputs, along with their prices. My goal in this section is to explore sectoral input dynamics and
examine how they relate to the aggregate and idiosyncratic uncertainty faced by firms. I first show
that there is excess co-variation in the inputs used across sectors, which is not driven by co-variation in
productivity. Second, I show that the co-variation in inputs across sectors increases with aggregate
25
The Jorgenson KLEM (Capital, Labor, Energy, and Materials) database combines industry data from the
US Bureau of Labor Statistics (BLS) and the US Bureau of Economic Analysis (BEA). One advantage of using the KLEM
data is that it covers the entire economy, unlike the Compustat database.
uncertainty and decreases with idiosyncratic uncertainty, consistent with the first prediction of the
model.
Decomposition of aggregate volatility:
I now decompose the aggregate variance of TFP and of the inputs of capital, labor, and other materials
into sectoral variances and covariances across sectors. Let χs,t denote the variable of interest in
sector s at time t, and let ωs,t be the share of sector s's sales in aggregate sales, so that the
aggregate series is χt = Σs ωs,t χs,t. Let V[χ]t+5,t−4 denote the variance of {χt−4, . . . , χt, . . . , χt+5}
for any generic variable, and Cov([χs]t+5,t−4, [χj]t+5,t−4) the covariance between the corresponding
ten-year windows, with window means computed as averages over the ten observations. For simplicity,
suppose that ωs,t = ωs for all sectors s and all years t. Then V[χ]t+5,t−4 can be written as follows:

    V[χ]t+5,t−4 = Σs ωs² V[χs]t+5,t−4 + Σs Σj≠s ωs ωj Cov([χs]t+5,t−4, [χj]t+5,t−4),   (18)

where the first term is the variance component and the second is the covariance component.
Equation (18) shows how I decompose the aggregate variance into sectoral variances and covariances
across sectors. I then perform this decomposition for the aggregate inputs used and the aggregate TFP
series. To construct TFP at the sector level, I use the method of Basu et al. (2006), who developed a
purified measure of sectoral total factor productivity (TFP): a measure of the Solow residual,
constructed to take account of non-constant returns to scale in industry production functions, imperfect
competition, and varying utilization of labor and capital inputs.
Figure 6 plots the decomposition of total variance into the covariance component and the sectoral
variance for TFP (left plot) and inputs (right plot) over time. The yellow region corresponds to the
covariance component and the rosy-brown region to the sectoral-level variance. The covariance region
accounts for most of the total variation for inputs (right plot), but not for TFP (left plot). This implies
that there is excess comovement in inputs not justified by comovement in TFP.
In the data, 86% of the aggregate variance of total inputs used is due to co-variation across sectors,
while the corresponding proportion for aggregate TFP is only 15%. In the model, the variance of the
common error component has to be sufficiently high to justify this kind of excess comovement in inputs.
Without the common error, the co-variation in inputs used should be approximately equal to the
co-variation in TFP across sectors.
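The decomposition in (18) is mechanical and can be checked on any panel of sectoral series. The following is a compact sketch (Python); the two sectors, their equal constant sales shares, and the series values are fabricated:

```python
def decompose(shares: list, panel: list):
    """Split Var(aggregate) into a sectoral-variance part and a cross-sector covariance part."""
    T = len(panel[0])

    def mean(x):
        return sum(x) / len(x)

    def cov(x, y):
        mx, my = mean(x), mean(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / T

    var_part = sum(w ** 2 * cov(s, s) for w, s in zip(shares, panel))
    cov_part = sum(
        shares[i] * shares[j] * cov(panel[i], panel[j])
        for i in range(len(panel)) for j in range(len(panel)) if i != j
    )
    return var_part, cov_part

shares = [0.5, 0.5]
panel = [[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0]]   # two fabricated sectoral series
v, c = decompose(shares, panel)

# The two components sum to the variance of the aggregate series, as in (18).
agg = [sum(w * s[t] for w, s in zip(shares, panel)) for t in range(4)]
mean_agg = sum(agg) / len(agg)
var_agg = sum((x - mean_agg) ** 2 for x in agg) / len(agg)
assert abs(v + c - var_agg) < 1e-12
```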
[Figure 6] This figure plots the decomposition of the total variance of TFP and of inputs used into
sectoral variance and covariance components, as in (18), over 1950 to 1990. The left plot corresponds
to the TFP series and the right one to the inputs used.
Conclusion 1. Sectoral inputs comove strongly with each other, even though the comovement in sectoral
TFP is very weak.
I next turn to a time-series analysis of the comovement in inputs used. To test my hypothesis, I
regress the co-variation of inputs on aggregate and idiosyncratic uncertainty:

    Covtsec = α + βi σi,t + βa σa,t + εt,

where σi,t and σa,t denote idiosyncratic and aggregate uncertainty, respectively.
Results are presented in table 2. Given that I only have 30 yearly observations, I do not have the power
to estimate the coefficients precisely. To gain robustness, I use different proxies for idiosyncratic and
aggregate uncertainty. The hypothesis I test is that the co-variation of inputs across sectors increases
with aggregate uncertainty and decreases with idiosyncratic uncertainty.
[Table 2 about here.]
I estimate βi to be negative and βa to be positive in all the specifications. Even though the estimates
are not statistically significant in some specifications, they are consistently of the same sign across
all of them. If the measurement error were correlated across proxies, the result might be driven by
common measurement error; but since the proxies I use come from very different data sources (one
from real data and the other from financial market data), it is highly unlikely that the measurement
error is correlated across variables.
Conclusion 2. The covariance of inputs across sectors increases with aggregate uncertainty and decreases
with idiosyncratic uncertainty.
To gain robustness, I repeat the analysis using panel data, where the dependent variable is the
co-variation of the inputs used by each sector with the aggregate inputs used, and the independent
variables are sector-level idiosyncratic uncertainty and aggregate uncertainty. This specification not
only has more power, but also allows me to soak up sector-level variation by including sector fixed
effects.
In theory, the covariance of the inputs of sector s with the aggregate inputs is given by

    Cov(ns, n̄) = γa,s γa (σa² + σε²),

where γa,s is sector s's weight on the aggregate signal. The same prediction holds: the comovement of
each sector's inputs with aggregate inputs increases with aggregate uncertainty and decreases with the
sector's idiosyncratic uncertainty.
To test this, I use the idiosyncratic volatility constructed at the sectoral level. Next, I construct
the sector-specific co-variation measure, Covts, defined as

    Covts = Σj≠s Cov([χs]t+5,t−4, [χj]t+5,t−4).
Time fixed effects absorb all of the time-series variation, including aggregate uncertainty.
Results are reported in table 3. I estimate a negative βi and a positive βa in all the specifications,
and the estimates remain significant even when the standard errors are corrected for auto-correlation
of the independent variable (Newey-West with 5 lags).
[Table 3 about here.]
Interpretation of column (2): Fix a sector A. If the idiosyncratic uncertainty of sector A increases
from time t1 to t2, the inputs of sector A co-move less with aggregate inputs at t2 than at t1, and
vice versa. Interpretation of column (3): Fix a time period t and consider two sectors A and B. If the
idiosyncratic uncertainty of sector A is higher than that of sector B, the comovement of sector A's
inputs with aggregate inputs will be lower than that of sector B's. The result is robust to various
definitions of idiosyncratic uncertainty.
4.2
My theory predicts that idiosyncratic uncertainty is pro-cyclical and hence carries a positive price of risk, whereas aggregate uncertainty is counter-cyclical and hence carries a negative price of risk. Empirically, Schürhoff and Ziegler (2011) document that common shocks to idiosyncratic volatility are positively priced while systematic variance risk is negatively priced, consistent with my theory. In this subsection, I document some evidence for the proposed channel.
Given that the consumption CAPM holds in my economy, a shock is positively priced only if it is positively correlated with consumption growth. In table 4, I document that aggregate uncertainty is counter-cyclical whereas idiosyncratic uncertainty is pro-cyclical, providing evidence for the consumption CAPM channel. Since I am interested in the cyclical properties of output and consumption, it is important to detrend the series, as the raw series are non-stationary. I use the Hodrick-Prescott filter described in Hodrick and Prescott (1997) to extract the cyclical component. I deflate all series to 2009 dollars using the CPI from the BLS to remove any effects of variation in nominal prices. Standard errors corrected for heteroskedasticity and autocorrelation are reported in the table.
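The detrending step can be sketched with the closed-form solution of the Hodrick-Prescott program; the smoothing parameter is an assumption here (the paper does not report its choice; 6.25 is a common value for annual data, 1600 for quarterly):

```python
import numpy as np

def hp_filter(y, lam=6.25):
    """Hodrick-Prescott filter: minimize sum (y - tau)^2 + lam * sum (d2 tau)^2.
    Closed form: tau = (I + lam * D'D)^{-1} y, with D the second-difference
    matrix. Returns (trend, cycle)."""
    y = np.asarray(y, float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return trend, y - trend
```

A purely linear series has zero second differences, so the filter returns it unchanged and the cyclical component is zero.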
[Table 4 about here.]
To provide more support for the proposed channel, I examine how capital reallocation changes with aggregate and idiosyncratic uncertainty. In theory, as idiosyncratic uncertainty increases, firms learn more about idiosyncratic shocks, which leads to more efficient reallocation of resources and hence higher productivity. For this channel to operate, capital reallocation should increase with idiosyncratic uncertainty. Conversely, as aggregate uncertainty increases, firms learn less about idiosyncratic shocks, and hence there should be less reallocation of resources, i.e., reallocation should decrease with aggregate uncertainty. I next test this hypothesis.
I measure the amount of reallocation in annual Compustat data as the sum of acquisitions and sales of property, plant and equipment, and focus on the cyclical properties of this series. This is similar to the measure used by Eisfeldt and Rampini (2006). I use the HP filter to detrend the series. Regression results are provided in table 5. I find that reallocation increases with idiosyncratic uncertainty, consistent with my channel, whereas reallocation does not decrease with aggregate uncertainty. Note that recessions are also accompanied by fire sales and forced liquidations, which have the opposite effect on reallocation.
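A sketch of the reallocation measure; the Compustat-style field names (`aqc` for acquisitions, `sppe` for sales of PP&E) and the scaling by total assets are illustrative assumptions:

```python
import numpy as np

def reallocation_series(aqc, sppe, total_assets):
    """Eisfeldt-Rampini-style reallocation: acquisitions plus sales of PP&E,
    summed across firms each year and scaled by total assets.
    Inputs are (T, N) arrays (years x firms); missing values treated as zero."""
    firm_realloc = np.nan_to_num(aqc) + np.nan_to_num(sppe)
    return firm_realloc.sum(axis=1) / total_assets.sum(axis=1)
```

The resulting annual series would then be HP-detrended as above before entering the regressions.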
[Table 5 about here.]
4.3 Dispersion of market betas
In this subsection, I test the second prediction of the model. I use returns for 49 industries from the Ken French data library for the period 1926-2014. First, I estimate the market beta for each industry using daily returns, with a rolling one-month step and a 12-month estimation window. Following Dimson (1979), I include both current and lagged market returns in the regressions, estimating beta as the sum of the slopes on all lags. I include four lags of market returns, imposing the constraint that lags 2-4 have the same slope to reduce the number of parameters:

R_{i,t} = α_i + β_{i,0} R_{M,t} + β_{i,1} R_{M,t−1} + β_{i,2} (R_{M,t−2} + R_{M,t−3} + R_{M,t−4}) + ε_{i,t}.

The market beta is then estimated as β_i = β_{i,0} + β_{i,1} + β_{i,2}. I then compute the dispersion in CAPM betas across the 49 industry portfolios.
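The constrained Dimson regression for a single estimation window can be sketched as follows (plain OLS on aligned lags; the rolling-window bookkeeping is omitted):

```python
import numpy as np

def dimson_beta(r_i, r_m):
    """Constrained Dimson regression:
    R_it = a + b0*RM_t + b1*RM_{t-1} + b2*(RM_{t-2}+RM_{t-3}+RM_{t-4}) + e.
    Returns beta = b0 + b1 + b2, the sum of the slopes on all lags."""
    r_i, r_m = np.asarray(r_i, float), np.asarray(r_m, float)
    T = len(r_i)
    X = np.column_stack([
        np.ones(T - 4),
        r_m[4:],                           # current market return
        r_m[3:-1],                         # lag 1
        r_m[2:-2] + r_m[1:-3] + r_m[:-4],  # lags 2-4, common slope
    ])
    b, *_ = np.linalg.lstsq(X, r_i[4:], rcond=None)
    return b[1] + b[2] + b[3]
```

When returns are perfectly synchronous the lag slopes are zero and the estimate reduces to the ordinary market beta.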
First, I show that CAPM betas are indeed time varying. Figure 7(a) plots the dispersion of CAPM betas across industries over time. Figure 7(b) plots the aggregate and idiosyncratic uncertainty series over time, with major historical events indicated. Figure 8 plots the dispersion of betas along with the aggregate (a) and idiosyncratic (b) uncertainty series. The plots show that the correlation between aggregate uncertainty and the dispersion of betas is significantly negative (-0.21***), while the correlation between idiosyncratic uncertainty and the dispersion of betas is significantly positive (0.23***). I next add controls in a regression and show that the results remain significant.
[Table 6 about here.]
[Figure 7 about here: panel (a) plots the dispersion of CAPM betas across industries over time; panel (b) plots the macro and industry uncertainty series, annotated with historical events (Vietnam buildup, Franklin National, monetary cycle, OPEC II, Black Monday, Gulf War I, Asian crisis, tech bubble, credit crunch).]

Figure 8: Plot of the dispersion of CAPM betas with aggregate and idiosyncratic uncertainty. The left plot shows the dispersion of betas and the aggregate uncertainty series over time; the right plot shows the dispersion of betas and the idiosyncratic uncertainty series over time.
In table 6, I present regression results of the dispersion of market betas on the uncertainty proxies. Aggregate uncertainty is negatively significant, both economically and statistically: a one-standard-deviation increase in aggregate uncertainty decreases the dispersion of market betas by 0.02 to 0.03 across specifications. Idiosyncratic uncertainty is also economically and statistically significant: a one-standard-deviation increase in idiosyncratic uncertainty increases the dispersion of market betas by 0.08 to 0.1 across specifications. Both results are consistent with my theory, and they are robust to alternative proxies for uncertainty and alternative ways of estimating betas.26 Note that the R-squared in the regression is also high, which implies that uncertainty has a first-order effect on the time variation in betas.

Conclusion 3. Dispersion in market betas across industries decreases with aggregate uncertainty and increases with idiosyncratic uncertainty.
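The two ingredients of this test, the cross-sectional dispersion of betas and the effect of a one-standard-deviation move in an uncertainty proxy (the slope on the standardized regressor), can be sketched as follows; both helpers are illustrative:

```python
import numpy as np

def dispersion(betas):
    """Cross-sectional standard deviation of industry betas in each period.
    betas: (T, N) array, periods x industries."""
    return np.std(betas, axis=1, ddof=1)

def std_coef(y, x):
    """Slope of y on standardized x: the response of y to a one-s.d.
    increase in x, as in the table 6 magnitudes."""
    xs = (x - x.mean()) / x.std(ddof=1)
    return np.cov(xs, y, ddof=1)[0, 1] / np.var(xs, ddof=1)
```

For a linear relation y = a + b*x, the standardized slope equals b times the standard deviation of x, which is the "one-standard-deviation" magnitude reported in the text.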
Extensions
5.1 Learning from financial markets
One of the assumptions of the baseline model is that firm managers do not learn from financial markets; i.e., I shut down learning from financial markets in stage 2, when managers make their real decisions. In appendix C, I introduce an aggregate market in which firm managers can trade on their aggregate information and learn from equilibrium prices. Anything that moves stock prices affects the expectations held by all market participants. In this sense, noise in the financial market serves as a common error and affects the beliefs of all managers. The implications of endogenous learning remain qualitatively the same. Recall that for the dispersion-in-market-betas implications to hold, we need a substantial common error. Since noise in the financial market adds to the common noise, the results are amplified by learning from the financial market. This is because my results depend on the correlation of beliefs across managers, not on any information asymmetry between firm managers and investors.

I also show that learning from the financial market makes the information acquisition choice inefficient. This is because of a free-rider problem among firm managers: they can learn from financial markets for free, so each manager confers a positive externality on others and thereby has less incentive to learn about aggregate shocks. This leads to excess learning about idiosyncratic shocks.
5.2
Variable utilization
In the baseline model, I assumed that input choices are made under imperfect information and cannot be changed once the true state is realized. In this extension, I relax this assumption and allow firm managers to choose the utilization of the inputs once the state is realized. To do this, assume that L_it = N_it h_it, where L_it denotes total labor input, N_it denotes the number of workers chosen under imperfect information in stage 2, and h_it denotes the level of effort per worker, chosen once the true state is realized. I change the household utility function to accommodate variable effort:

U = E [ Σ_{t=0}^{∞} β^t ( U(C_t) − ∫_i N_it V(h_it) di ) ]
26 As robustness checks: 1. I estimated market betas controlling for the FF3 factors; 2. I estimated market betas using the dynamic conditional correlation (DCC) method proposed by Engle (2002); 3. I used a 6-month rolling window instead of 12 months; 4. To ensure that my results are not driven by estimation error in the betas, I also included the average estimation error as a control in the regression and found that the results remain significant.
where I assume

U(C) = C^{1−γ} / (1−γ)   and   V(h) = 1 + h^{1+ν} / (1+ν).
I solve the model in appendix B. The results remain qualitatively the same.
Efficiency
In this section, I explore the welfare implications of rational inattention and information acquisition. The information processing capacity of a firm manager is effectively a factor of production, just like labor and capital; the only difference is the way it enters the production function. In this section, I examine whether this new factor of production, capacity, is allocated efficiently between learning about aggregate and idiosyncratic productivity.
The welfare criterion I adopt is the ex-ante expected utility of the representative household. There are two related issues regarding welfare: 1. (in)efficiency in the use of information; 2. (in)efficiency in acquiring information. First, I check whether the efficient use of information coincides with the equilibrium use of information. Second, I examine whether the equilibrium acquisition of information coincides with the efficient acquisition of information.

I consider a constrained efficiency concept that permits the planner to choose any resource-feasible allocation that respects the segregation of information in the economy, by which I mean that the planner cannot base the production and employment choices of firms and workers on the private information of other firms.
Planner's use of information: Choose an employment strategy N(I_it^2) and an aggregate output function Y(·) so as to maximize

E [ U(Y_t) − ∫_i V(N_it) di ]

subject to Y_it = A_t Z_it N_it and Y_t = ∫ Y_it di.
The problem has a simple interpretation: the first term is the utility of consumption for the representative household; the second term is the disutility of labor for the typical worker in a given firm, and the corresponding integral is the overall disutility of labor for the representative household. The solution to the planner's problem is pinned down by the following first-order condition:

E_it [ C_t^{1/η − γ} A_t^{1−1/η} Z_it^{1−1/η} N_it^{(1−1/η)−1} ] = N_it^ν.
Comparing this equation to the equilibrium labor allocation (see equation (3)), we see that there is a wedge between the equilibrium and efficient use of information. The wedge arises because of monopoly power: firms internalize that their production decisions affect their prices next period and hence invest and hire less than the optimal level. Alternatively, if firms behaved as price takers, there would be no wedge between the equilibrium and efficient use of information. In the absence of monopoly distortions, the equilibrium use of information is efficient, no matter the information structure. As for the complementarity/substitutability, its origin is in preferences and technologies, not in any type of market inefficiency, guaranteeing that private motives in coordinating economic activity are perfectly aligned with social motives.
Efficient acquisition of information: Next, I solve the planner's problem to study how the planner allocates the finite capacity between learning about aggregate and idiosyncratic shocks. I distinguish two scenarios. In the first, the planner can control the way agents use the information they acquire. In the second, the planner is unable to change the way agents use their available information, and I ask what allocation of capacity maximizes welfare when information is used according to the equilibrium rule.
Under efficient use, we know that the optimal labor allocation solves

E_it [ C_t^{1/η − γ} A_t^{1−1/η} Z_it^{1−1/η} N_it^{(1−1/η)−1} ] = N_it^ν.

The planner then chooses the capacity allocation to maximize

E [ U(Y_t) − ∫_i V(N_it) di ]

subject to

n_it = log N_it = φ_0 + φ_1 a_{t−1} + φ_a s_{iat} + φ_z s_{izt},
Y_it = A_t Z_it N_it,   Y_t = ∫ Y_it di,

and the capacity constraint

(1/2) log_2( σ_a²/σ_{s,a}² + 1 ) + (1/2) log_2( σ_z²/σ_{s,z}² + 1 ) ≤ κ.
Conjecture that y_t = β_0 + β_a a_t + β_1 a_{t−1} + σ_ε ε_t, where the coefficients depend on the information acquired by agents. Maximizing the planner's utility is then the same as maximizing

E_t [ Y_t^{1−γ}/(1−γ) − (1/(1+ν)) ∫_i (E_it[X_it])^{1+ν} di ].
Proposition 5. If the planner can dictate how agents use their information, the information acquired in equilibrium coincides with the efficient information acquisition.
Under the equilibrium use of information, labor instead solves

E_it [ C_t^{1/η − γ} (1 − 1/η) A_t^{1−1/η} Z_it^{1−1/η} N_it^{(1−1/η)−1} ] = N_it^ν.    (19)
Given this, the planner solves the problem: choose a strategy κ_1 : R_+ × R_+ → [0, κ] so as to maximize

E [ U(Y_t) − ∫_i V(N_it) di ]

subject to

n_it = log N_it = φ_0 + φ_1 a_{t−1} + φ_a s_{iat} + φ_z s_{izt},
Y_it = A_t Z_it N_it,   Y_t = ∫ Y_it di,

and

(1/2) log_2( σ_a²/σ_{s,a}² + 1 ) + (1/2) log_2( σ_z²/σ_{s,z}² + 1 ) ≤ κ.

Note that the coefficients remain exactly the same as in equilibrium.
Proposition 6. Even if the use of information is not efficient, the acquisition of information is efficient.

The intuition is simple. The equilibrium use of information does not change the relative importance of aggregate and idiosyncratic shocks; it only changes the total marginal value of learning. If information processing capacity is fixed, the inefficient equilibrium use of information does not affect the efficiency of the equilibrium acquisition of information. This leads to my first main result on efficiency:

Theorem 5. If managers have an exogenous capacity constraint, the acquisition of information is always efficient, even though the use of information can be inefficient.
Next, suppose agents incur a convex cost of capacity and are allowed to choose capacity endogenously based on uncertainty. Then the equilibrium acquisition of information is inefficient: agents acquire less information in equilibrium than under the socially efficient choice. The relative allocation of capacity between learning about aggregate and idiosyncratic shocks, however, remains the same as in the efficient allocation.

Proposition 7. If managers incur a convex cost C(κ) to process information, then the equilibrium capacity chosen is lower than the efficient capacity, i.e., κ^e < κ^s.

The intuition is simple. Market power introduces a constant distortion to the average level of economic activity. It widens the wedge between the private and social value of information, and hence firms' equilibrium acquisition of information is lower than the socially efficient choice. The main message is that monopoly power reduces the value of information to firms.
Conclusion

In this paper, I show that endogenizing the information choice of firm managers is a fruitful approach to understanding input behavior and asset prices. I develop a tractable general equilibrium model that uses an observable state variable, the state of the business cycle (i.e., aggregate and idiosyncratic uncertainty), to predict information choices and link those choices to testable patterns in the data. I show that firm managers optimally choose to learn less about aggregate shocks when idiosyncratic uncertainty is higher, and vice versa. This mechanism has two main implications. First, endogenous learning helps explain the patterns of comovement of inputs and market betas in the cross section of firms. Second, I link rational inattention at the firm level to resource misallocation and, hence, to aggregate productivity and output. Through this channel, I show that common idiosyncratic uncertainty is pro-cyclical and hence has a positive price of risk, while aggregate uncertainty is counter-cyclical and hence has a negative price of risk.

There are several promising directions for future research. In my modeling approach, I have aimed to strike a balance between realism and transparency of the economic forces at play. In doing so, I have made a couple of admittedly extreme assumptions. For example, the investment choice is modeled as static. Similarly, the learning problem is also static, with perfect revelation at the end of the period, implying that firms are able to quickly correct their past errors. These assumptions limit my ability to do a full-fledged calibration to match moments. Relaxing them is conceptually straightforward but involves substantial computational challenges. Also, in this paper I assume that uncertainty shocks are exogenous, like first-moment shocks. If uncertainty is endogenous, one can envision a propagation and amplification mechanism. This will be part of future work.

Another important direction for future work is a unified theory of information acquisition by both firm managers and investors. Both learn about future shocks, and they can also learn from each other. Anything that moves stock prices affects the expectations held by all market participants and firm managers, and investors can also learn from firm managers' investment decisions. Solving their joint problem will be an interesting and complicated task, which I leave to future work.
References

Angeletos, G.-M., F. Collard, and H. Dellas (2014). Quantifying confidence. Technical report, National Bureau of Economic Research.

Angeletos, G.-M. and J. La'O (2010). Noisy business cycles. In NBER Macroeconomics Annual 2009, Volume 24, pp. 319-378. University of Chicago Press.

Angeletos, G.-M. and J. La'O (2013). Sentiments. Econometrica 81(2), 739-779.

Angeletos, G.-M. and A. Pavan (2007). Efficient use of information and social value of information. Econometrica 75(4), 1103-1142.

Baker, M., J. C. Stein, and J. Wurgler (2002). When does the market matter? Stock prices and the investment of equity-dependent firms. Technical report, National Bureau of Economic Research.

Bali, T. G., S. Brown, and Y. Tang (2014). Macroeconomic uncertainty and expected stock returns. Georgetown McDonough School of Business Research Paper (2407279).

Bali, T. G. and R. F. Engle (2010). The intertemporal capital asset pricing model with dynamic conditional correlations. Journal of Monetary Economics 57(4), 377-390.

Basu, S., J. G. Fernald, and M. S. Kimball (2006). Are technology improvements contractionary? American Economic Review 96(5), 1418-1448.

Bloom, N. (2013). Fluctuations in uncertainty. Technical report, National Bureau of Economic Research.

Bloom, N., S. Bond, and J. Van Reenen (2007). Uncertainty and investment dynamics. The Review of Economic Studies 74(2), 391-415.

Bloom, N., M. Floetotto, N. Jaimovich, I. Saporta-Eksten, and S. J. Terry (2012). Really uncertain business cycles. Technical report, National Bureau of Economic Research.

Bollerslev, T., R. F. Engle, and J. M. Wooldridge (1988). A capital asset pricing model with time-varying covariances. The Journal of Political Economy, 116-131.

Bond, P., A. Edmans, and I. Goldstein (2011). The real effects of financial markets. Technical report, National Bureau of Economic Research.

Campbell, J. Y. (1992). Intertemporal asset pricing without consumption data. Technical report, National Bureau of Economic Research.

Campbell, J. Y. (1993). Understanding risk and return. Technical report, National Bureau of Economic Research.

Campbell, J. Y., S. Giglio, C. Polk, and R. Turley (2012). An intertemporal CAPM with stochastic volatility. Technical report, National Bureau of Economic Research.

Campbell, J. Y., M. Lettau, B. G. Malkiel, and Y. Xu (2001). Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk. Journal of Finance 56(1).

Christiano, L. J. and T. J. Fitzgerald (1998). The business cycle: it's still a puzzle. Economic Perspectives, Federal Reserve Bank of Chicago 22, 56-83.

Colombo, L., G. Femminis, and A. Pavan (2014). Information acquisition and welfare. The Review of Economic Studies, rdu015.

David, J. M., H. A. Hopenhayn, and V. Venkateswaran (2014). Information, misallocation and aggregate productivity. Technical report, National Bureau of Economic Research.

Driessen, J., P. J. Maenhout, and G. Vilkov (2009). The price of correlation risk: Evidence from equity options. The Journal of Finance 64(3), 1377-1406.

Eisfeldt, A. L. and A. A. Rampini (2006). Capital reallocation and liquidity. Journal of Monetary Economics 53(3), 369-399.

Engle, R. F. (2014). Dynamic conditional beta. Available at SSRN 2404020.

Fama, E. F. and K. R. French (1997). Industry costs of equity. Journal of Financial Economics 43(2), 153-193.

Foster, L., J. Haltiwanger, and C. Krizan (2000). Aggregate productivity growth: Lessons from microeconomic evidence.

Gilchrist, S., C. P. Himmelberg, and G. Huberman (2005). Do stock price bubbles influence corporate investment? Journal of Monetary Economics 52(4), 805-827.

Hellwig, C., S. Kohls, and L. Veldkamp (2012). Information choice technologies. The American Economic Review 102(3), 35-40.

Hellwig, M. F. (1980). On the aggregation of information in competitive markets. Journal of Economic Theory 22(3), 477-498.

Herskovic, B., B. T. Kelly, H. Lustig, and S. Van Nieuwerburgh (2014). The common factor in idiosyncratic volatility: Quantitative asset pricing implications. Technical report, National Bureau of Economic Research.

Hsieh, C.-T. and P. J. Klenow (2009). Misallocation and manufacturing TFP in China and India. The Quarterly Journal of Economics 124(4), 1403-1448.

Jagannathan, R. and Z. Wang (1996). The conditional CAPM and the cross-section of expected returns. Journal of Finance, 3-53.

Jurado, K., S. C. Ludvigson, and S. Ng (2013). Measuring uncertainty. Technical report, National Bureau of Economic Research.

Kacperczyk, M. T., S. Van Nieuwerburgh, and L. Veldkamp (2014). A rational theory of mutual funds' attention allocation.

Lewellen, J. and S. Nagel (2006). The conditional CAPM does not explain asset-pricing anomalies. Journal of Financial Economics 82(2), 289-314.

Lorenzoni, G. (2009). A theory of demand shocks. American Economic Review 99(5), 2050-2084.

Maćkowiak, B. and M. Wiederholt (2009). Optimal sticky prices under rational inattention. The American Economic Review 99(3), 769-803.

Morck, R., A. Shleifer, R. W. Vishny, M. Shapiro, and J. M. Poterba (1990). The stock market and investment: is the market a sideshow? Brookings Papers on Economic Activity, 157-215.

Morris, S. and H. S. Shin (2002). Social value of public information. The American Economic Review 92(5), 1521-1534.

Olley, G. S. and A. Pakes (1992). The dynamics of productivity in the telecommunications equipment industry. Technical report, National Bureau of Economic Research.

Polk, C. and P. Sapienza (2009). The stock market and corporate investment: A test of catering theory. Review of Financial Studies 22(1), 187-217.

Rebelo, S. (2005). Real business cycle models: past, present and future. The Scandinavian Journal of Economics 107(2), 217-238.

Schürhoff, N. and A. Ziegler (2011). Variance risk, financial intermediation, and the cross-section of expected option returns.

Sims, C. A. (2003). Implications of rational inattention. Journal of Monetary Economics 50(3), 665-690.

Van Nieuwerburgh, S. and L. Veldkamp (2009). Information immobility and the home bias puzzle. The Journal of Finance 64(3), 1187-1215.

Van Nieuwerburgh, S. and L. Veldkamp (2010). Information acquisition and under-diversification. Review of Economic Studies 77(2), 779-805.

Veldkamp, L. and J. Wolfers (2007). Aggregate shocks or aggregate information? Costly information and business cycle comovement. Journal of Monetary Economics 54, 37-55.
Appendix: Proofs
For all the proofs in the appendix, I assume that there are two kinds of uncertainty: learnable and unlearnable. The superscripts l and ul denote the learnable and unlearnable components, respectively.
Proof of Proposition 1. Conjecture that y_t = β_0 + β_{a,l} a_t^l + β_{a,ul} a_t^{ul} + β_1 a_{t−1} + σ_ε ε_t, where we need to solve for the coefficients. Substituting the conjecture into the firm's optimality condition expresses x_it as a log-linear function of the lagged aggregate state, the current aggregate shocks, the firm's idiosyncratic productivity z_it, and the aggregate noise:

x_it = ψ_0 + ψ_1 a_{t−1} + ψ_a (a_t^l + a_t^{ul}) + ψ_z z_it + ψ_ε ε_t,

where the ψ coefficients are linear combinations of the β's. Given Gaussian priors and signals, posterior beliefs about the learnable components are normal, with means proportional to the signals:

a_t^l | I_it ~ N( r_a s_{iat}, 1/(τ_a^l + τ_a^s) ),   where r_a = τ_a^s / (τ_a^l + τ_a^s),
z_it^l | I_it ~ N( r_z s_{izt}, 1/(τ_z^l + τ_z^s) ),   where r_z = τ_z^s / (τ_z^l + τ_z^s),

with τ denoting precisions and the superscript s denoting the signal. This implies

E_it(x_it) = φ_0 + φ_1 a_{t−1} + φ_z ρ_z z_{it−1} + ψ_z r_z s_{izt} + ψ_1 s_{iat}

and

Var_it(x_it) = ψ_a² [ 1/(τ_a^l + τ_a^s) + 1/τ_a^{ul} ] + ψ_z² [ 1/(τ_z^l + τ_z^s) + 1/τ_z^{ul} ] + ψ_ε²/τ_ε,
where the coefficients are endogenous and will be solved for in equilibrium. I next evaluate the integral in 5. Before doing so, I highlight a property of log-normal distributions that is used repeatedly in this appendix. When a variable X is log-normal with ln X ~ N(x̄, σ²), then, for any α ∈ R, we have

E[X^α] = exp( α x̄ + (1/2) α² σ² ) = (E[X])^α exp( (1/2) α (α − 1) σ² ).

I use this property repeatedly in the derivations that follow, for various X and α.
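This moment identity is easy to verify numerically; the following Monte Carlo check (parameter values arbitrary) confirms both forms:

```python
import numpy as np

# Check E[X^alpha] = exp(alpha*mu + 0.5*alpha^2*sigma^2) for log-normal X
# with ln X ~ N(mu, sigma^2), against a Monte Carlo average.
rng = np.random.default_rng(0)
mu, sigma, alpha = 0.2, 0.5, 3.0
x = np.exp(rng.normal(mu, sigma, size=2_000_000))
mc = (x ** alpha).mean()                                  # simulated moment
exact = np.exp(alpha * mu + 0.5 * alpha**2 * sigma**2)    # closed form
rel_err = abs(mc - exact) / exact
```

The second form of the identity follows by writing E[X] = exp(mu + sigma^2/2) and factoring.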
The firm's idiosyncratic productivity z_it and the conditional expectation E_it(x_it) are jointly normal conditional on the aggregate state, with a covariance matrix determined by the prior precisions and the equilibrium signal weights. Using the log-normal moment property above to evaluate the cross-sectional integral of exp{ θ_z z_it + θ_x [ E_it(x_it) + (1/2) Var_it(x_it) ] }, aggregating across firms, and matching the result with the conjectured form of y_t yields a fixed-point system in the coefficients (β_0, β_{a,l}, β_{a,ul}, β_1, σ_ε). Solving this system pins down the endogenous coefficients; in particular, the loading on the learnable aggregate shock takes the form

β_{a,l} = (1 − γ) r_a / [ ν + 1 − (1 − γ)(r + r_a) ],

and the resulting labor policy is log-linear in the state and the signals,

n_it = const + c_1 a_{t−1} + c_2 z_{it−1} + c_3 s_{iat} + c_4 s_{izt},

where c_3 = (1 − γ) r_a / [ ν + 1 − (1 − γ)(r + r_a) ] is increasing in the attention paid to the aggregate shock and c_4 is proportional to r_z.
Proof of Proposition 2:
The objective function can be written, up to constants, as minus the conditional variance term

−(1 + ν)/2 · Var_it( x_it | a_{t−1}, z_{it−1} ),

with Var_it(x_it) as derived in the proof of Proposition 1. The agent has fixed capacity and must allocate attention optimally. The terms involving the unlearnable components and the aggregate noise do not depend on the attention choice, so the problem reduces to

min_{κ_1 ∈ [0, κ]}  ψ_a² σ_{a,l}² 2^{−2κ_1} + ψ_z² σ_{z,l}² 2^{−2(κ − κ_1)},

where κ_1 is the capacity allocated to the aggregate shock and a Gaussian signal of capacity κ_1 scales the posterior variance of the corresponding shock by 2^{−2κ_1}. Here, I use the law of total variance to go from the first expression to the second.

Case 1. Suppose the information processing constraint is as given in 9. The first-order condition then yields the water-filling solution

κ_1 = κ                                                       if ψ_a² σ_{a,l}² / (ψ_z² σ_{z,l}²) ≥ 2^{2κ},
κ_1 = κ/2 + (1/4) log_2[ ψ_a² σ_{a,l}² / (ψ_z² σ_{z,l}²) ]    if the ratio lies in (2^{−2κ}, 2^{2κ}),      (21)
κ_1 = 0                                                       otherwise.

With the alternative weighting of the aggregate term implied by the second specification of the loss, the same derivation yields the analogous rule, equation (22), with ψ_a² σ_{a,l}² replaced by the corresponding aggregate weight.
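The allocation rule above has the standard water-filling form. The following sketch implements it, with `ratio` denoting the relative marginal value of learning about the aggregate shock (notation mine):

```python
import numpy as np

def allocate_capacity(ratio, kappa):
    """Split total capacity kappa between two independent Gaussian shocks.
    ratio: relative marginal value of learning about shock 1
    (e.g. psi_a^2 * var_a / (psi_z^2 * var_z)). Corner solutions when the
    ratio is extreme; interior water-filling otherwise."""
    lo, hi = 2.0 ** (-2 * kappa), 2.0 ** (2 * kappa)
    if ratio >= hi:
        k1 = kappa                           # all capacity to shock 1
    elif ratio <= lo:
        k1 = 0.0                             # all capacity to shock 2
    else:
        k1 = kappa / 2 + 0.25 * np.log2(ratio)
    return k1, kappa - k1
```

With equally valuable shocks the capacity is split evenly, and as one shock becomes relatively more uncertain, attention shifts continuously toward it until a corner is reached.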
Proof of Lemma 2:
The fixed point is given by the solution to

κ_1 = κ/2 + (1/4) log_2[ ψ_a(κ_1)² σ_{a,l}² / (ψ_z² σ_{z,l}²) ],

where ψ_a depends on κ_1 through the equilibrium coefficients, in particular through r_a and r. Substituting the equilibrium expressions and rearranging yields a single equation in κ_1 of the form F(κ_1) = 0. A sufficient condition for this equation to have a unique solution is that F'(κ_1) has the same sign for all κ_1 ∈ (0, κ). Differentiating with respect to κ_1, the resulting expression is monotonic in κ_1, so it suffices to verify its sign at the boundaries. It is easy to verify that the expression is positive at both boundaries, since it is positive in the limits κ_1 → 0 and κ_1 → κ; monotonicity then delivers uniqueness.
Proof of Theorem 2:
Part 1: In the limiting case, comovement in inputs is of similar magnitude to the comovement in TFP; for finite parameter values, comovement in inputs is higher than comovement in productivity.
Part 2:

Comovement = σ_a² [ (1 − γ) r_a / (ν + 1 − (1 − γ)(r + r_a)) ]².

Without endogenous learning, this object does not change with idiosyncratic uncertainty. With endogenous learning, as industry uncertainty increases, agents pay less attention to aggregate conditions, which implies that r_a and r decrease. Differentiating, the comovement is increasing in r_a and r, and both fall as σ_z rises; hence comovement decreases with idiosyncratic uncertainty. The sign follows from agents substituting learning away from aggregate conditions as idiosyncratic uncertainty increases.
Proof of Proposition 3:
In the static model,
i =
Cov(Di , D)
V ar(D)
40
1
1
Ni1
Ni1+
Ei [U 0 (Y )]
Y A1 Zi
y
1 V
+z a+ log 1 + 2
zi z +Ei (xi )
Ge
2
2
vari (y)
1
2
2
He
In my model,
Ei (y) = 0 + Ei al l + = 0 +
al
al
1
1
1
+
+ s
a1
al
1
sia = 0 + 1 sia
+ ) 1 + 1 ] sia
1+
e(1+)0 +0 +(1+) log 1 + 2 V ari (xi ) 2 vari (y)
{z
}e
{z
1 2 1
1 s
a
eyt e 2
1 (1+)2 2 r 2
+2
z z
|
=
{z
e1 (
+ )
l l
ul ul
0 a
+a
+
+ 1s
1
z
3 e1 (
+ )
V ar(D)
l l
V ar e0 ea
ul ul
+a
+
3 e1 (
l l
ul ul
a
+a
+ t
2e0 3 Cov(e
=
2
20 + la
a
+ 1
+
ul
2e
3 e
2
1
l
2a
2
a
l
a
2
a
l
2a
, e1 (
+ )
1
ul
2a
+ 21
1 a
l
a
ul ul
+a
+
1
l
a
+ 1
+ 1
2
1
1
l
a
= e0 e 2a
E D
+ 2 +
1
ul
2a
41
2
1
3 e
1
2a
+ 21
+ 1
+ 23 V ar e1 (
2
1
+ 23 e
l l
= e20 V ar ea
+ 1
ul
+ 2 +
+ )
+ )
E e0 + log 1 + 2 V ari (xi ) e +z a ezi z +(1 sia +z rz siz ) e1 sia +(1+)z rz siz
E e1 (
0 + log 1 +
V ari (xi )
2
e
e
2
a
2a
+ 2 +
3 e
1
ul
2a
(1 )2
2a
( )2
( 1 a +z +1 )2 + (z rz +z )2 +
l
2a
l
2z
+1
2
+
1
ul
2a
2
z
ul
2z
e1 ia +z rz iz
( )2
( )2
( )2
1 ra ia + 21 + 21 21s +(1+)z rz iz
a
+z rz iz
2 r2
(1+)2 z
z
s
2z
(z rz )2
s
2z
( )2
+ 21
+1
{z
2 r2
(1+)2 z
z
l
2z
+ 2 + 0
2 r2
(1+)2 z
z
+
s
2z
(1 )2
s
2a
1 ia
e|
+ 21 +
2 (rz +1)2
+ z 2
z
2 r2
(1+)2 z
z
+
2z
(1 )2
s
2a
e
0
2
ari (xi )+ 2a
a
(1 )2
l
2a
l
+z +1 l +ul +z ul
i +(z rz +z )i +
1 + + r
z z iz
1 ia
0
1 ia +(1+)z rz iz
0 + log 1 +
V
2
+1 ia +z rz iz +
( )2
1 ia +(1+)z rz iz 21s
a
e
|
2 r2
(1+)2 z
z
s
2z
{z
where 1i and 2i are random variables of mean 1 and they depend on the realization of signal errors.
Finally,
E(Di D|ia , iz ) = E
l l + ul ul +
0 +a
a
+z
l +ul
l +
ul ul +
l l +a
e 1
e 0e a
3
=E
1 ia
l l +ul +
e 0e a
e
20 +
=e
(1 )2
(z rz )2
s +z rz iz
s
2a
2z
1 l +
l l +ul +
e 0e a
3 e
(2 )2
(2a )2
+ 4
+
2
l
ul
2a
2a
(21 )2
1 + +3 e
e 0 3 e
l +
a
1
2
1 + 1
2
l
2a
1 sia +(1+)z rz siz
3 e
l +t
e
1 ia +(1+)z rz iz
2 r2
(1+)2 z
(1 )2
z
s
s
2a
2z
2 e 0 3 e
1 + 1 +( + )2 1
1
2
l
ul
2a
2a
l +
a
1
2
1 + 1 +( + )2 1
1
2
l
ul
2a
2a
This implies that beta of a firm with signal errors {iat ; izt } is given by
(iat ; izt ) =
where f1
f2
2
1
23 e
1
l
a
20 + la + + 1
ul
a
a
e
+ 1
2
1
1
l
a
+ 1
+ + 1
ul
!
1
3 e
+
+
o + a l 1 + 2 1
1 3 e
+
+
o + a l 1 + 2 1 + 1ul
2
2
e
e
a 1
l
a
+ 1
a 1
l
a
+ 1
1
and
1
where 1i and 2i are random variables with mean 1 and f1 and f2 depend on the prior uncertainty and equilibrium
signal precisions.
42
The cross-sectional dispersion of betas is then
\[
\mathrm{Disp}(\beta) \;=\; \int \beta_i^2\,di - \Big(\int \beta_i\,di\Big)^2
\;=\; \frac{1}{(f_1+f_2)^2}\Big[ f_1^2\,\mathrm{Var}(\eta_{1i}) + f_2^2\,\mathrm{Var}(\eta_{2i}) + 2 f_1 f_2\,\mathrm{Cov}(\eta_{1i},\eta_{2i}) \Big],
\]
where the moments of \eta_{1i} and \eta_{2i} are log-normal functions of the signal-error variances: \mathrm{Var}(\eta_{1i}) is increasing in (1-\lambda)^2/\tau_a^s and (\beta_z r_z)^2/\tau_z^s, and the covariance term carries the precision of the common noise, \tau_e. Differentiating with respect to aggregate uncertainty and collecting terms, the derivative splits into a positive part (the direct effect through (1-\lambda)^2/\tau_a^s) and a negative part (the common-noise effect), so its sign depends on whether \tau_e is sufficiently small.
From the above expression, note that, without common noise, the dispersion of CAPM betas increases with aggregate uncertainty. But with a sufficiently large common-noise term (\tau_e small), the dispersion of CAPM betas decreases with aggregate uncertainty.
Theorem 3: We can write the SDF as
\[
\log \frac{S_{t+1}}{S_t} \;=\; \log\beta \;-\; \gamma\,\log\frac{C_{t+1}}{C_t}.
\]
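With CRRA utility, the log SDF is linear in log consumption growth. A minimal numeric check (the parameter values below are illustrative, not the paper's calibration):

```python
import math

beta, gamma = 0.98, 2.0  # discount factor and risk aversion (illustrative)

def log_sdf(c_now: float, c_next: float) -> float:
    """log(S_{t+1}/S_t) = log(beta) - gamma * (log C_{t+1} - log C_t)."""
    return math.log(beta) - gamma * (math.log(c_next) - math.log(c_now))

# The level SDF is beta * (C_{t+1}/C_t)^(-gamma); both routes must agree.
direct = beta * (1.03 / 1.00) ** (-gamma)
assert abs(math.exp(log_sdf(1.00, 1.03)) - direct) < 1e-12
```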
In the baseline model, I assumed that input choices are made under imperfect information and are not allowed to change once the true state is realized. In this extension, I assume that firm managers can choose the utilization of the inputs once the state is realized. I change the household utility function to account for labor utilization:
\[
U \;=\; E\sum_{t=0}^{\infty}\beta^t\Big[ U(C_t) - \sum_i N_{it}\, V(h_{it}) \Big],
\]
where N_{it} and h_{it} denote the labor hiring and labor utilization by firm i at time t. N_{it} is measurable with respect to I_{it}, but h_{it} is measurable with respect to s_t. I assume
\[
U(C) \;=\; \frac{C^{1-\gamma}}{1-\gamma} \qquad\text{and}\qquad V(h) \;=\; \frac{h^{1+\nu}}{1+\nu}.
\]
I repeat the same steps as in the main paper. Firm revenues/profits are given by
\[
\Pi_{it} \;=\; P_{it} Y_{it} - P_t W_{it} N_{it}
\;=\; P_t\, Y_{it}^{1-\frac{1}{\theta}}\, Y_t^{\frac{1}{\theta}} - P_t W_{it} N_{it},
\]
where Y_{it} = A_t Z_{it} L_{it}^{1-\alpha} and L_{it} = N_{it} h_{it}. So, in stage 2, firm i's objective is to choose N_{it} to maximize
\[
E_{it}\Big[ U'(C_t)\Big( Y_t^{\frac{1}{\theta}} \big(A_t Z_{it}\big)^{1-\frac{1}{\theta}} (N_{it} h_{it})^{(1-\alpha)(1-\frac{1}{\theta})} - W_{it}(h_{it})\, N_{it} \Big)\Big].
\]
In stage 2 (the hiring choice, made under imperfect information):
\[
E_{it}\Big[ C_t^{-\gamma}\Big( (1-\tfrac{1}{\theta})(1-\alpha)\, Y_t^{\frac{1}{\theta}} (A_t Z_{it})^{1-\frac{1}{\theta}}\, h_{it}^{(1-\alpha)(1-\frac{1}{\theta})} N_{it}^{(1-\alpha)(1-\frac{1}{\theta})-1} - W_{it}(h_{it}) \Big)\Big] \;=\; 0.
\]
In stage 3 (the utilization choice, made after the state is realized):
\[
C_t^{-\gamma}\,(1-\tfrac{1}{\theta})(1-\alpha)\, Y_t^{\frac{1}{\theta}} (A_t Z_{it})^{1-\frac{1}{\theta}}\, h_{it}^{(1-\alpha)(1-\frac{1}{\theta})-1} N_{it}^{(1-\alpha)(1-\frac{1}{\theta})}
\;=\; C_t^{-\gamma}\, W'_{it}(h_{it})\, N_{it}.
\]
To get intuition for this, note that, for any h, C_t^{-\gamma} W_{it} gives the marginal benefit to the household associated with effort h. This has to be positive for the worker to prefer working over leisure, and the net surplus has to be non-positive for labor supply to be finite. Since labor demand is positive for the optimal h and zero otherwise,
\[
C_t^{-\gamma}\, W_{it} \;=\; \frac{h_{it}^{1+\nu}}{1+\nu}
\]
for the optimal h.
These conditions can be rewritten as
\[
C_t^{-\gamma}\, Y_t^{\frac{1}{\theta}} (A_t Z_{it})^{1-\frac{1}{\theta}}\, (1-\tfrac{1}{\theta})(1-\alpha)\, h_{it}^{(1-\alpha)(1-\frac{1}{\theta})}\, N_{it}^{(1-\alpha)(1-\frac{1}{\theta})-1}
\;=\; E_{it}\Big[\frac{h_{it}^{1+\nu}}{1+\nu}\Big], \tag{24}
\]
\[
E_{it}\Big[\frac{h_{it}^{1+\nu}}{1+\nu}\Big] \;=\; \frac{E_{it}\big[h_{it}^{1+\nu}\big]}{1+\nu}. \tag{25}
\]
Let X_{it} = C_t^{-\gamma} Y_t^{\frac{1}{\theta}} (A_t Z_{it})^{1-\frac{1}{\theta}}, and define the composite exponent
\[
\chi \;\equiv\; \frac{\nu+1}{\nu+1-(1-\alpha)\big(1-\tfrac{1}{\theta}\big)}.
\]
Then the optimal utilization satisfies h_{it}^{\nu+1} \propto X_{it}\, N_{it}^{(1-\alpha)(1-\frac{1}{\theta})-1}, the hiring choice satisfies
\[
\log N_{it}^{d} \;\propto\; \log E_{it}\big[ X_{it}^{\chi} \big],
\]
and firm output satisfies, up to constants,
\[
\log Y_{it} \;=\; \chi\,\log (A_t Z_{it}) + (1-\alpha)\,\log N_{it} + \text{const.}
\]
+ 1 1
Aggregation: note that aggregate revenue equals aggregate output. In logs, collecting the exponents,
\[
y_t \;=\; \text{const.} + \chi\, a_t + \log \int Z_{it}^{\chi}\,\big(E_{it}[X_{it}]\big)^{(\cdot)}\, di,
\]
and, writing x_{it} = \log X_{it} in terms of the shocks,
\[
x_{it} \;=\; \phi_0 + \phi_1 a_{t-1} + \phi_{al}\, l_t + \phi_{ul}\, u^l_t + \phi_z\, z_{it} + \phi_e\, e_t,
\]
with the loadings pinned down below by matching coefficients.
Let W_{it} = \big( C_t^{-\gamma} A_t^{1-\frac{1}{\theta}} Z_{it}^{1-\frac{1}{\theta}} \big)^{1+\nu}, so that
\[
w_{it} \;=\; (1+\nu)\, x_{it} \;=\; (1+\nu)\big( \phi_0 + \phi_1 a_{t-1} + \phi_{ul} u^l_t + \phi_z z_{it} + \phi_e e_t \big).
\]
Conditional on the firm's information set, the posteriors are normal:
\[
a_t \mid I_{it} \;\sim\; N\big( r_a\, s_{iat},\; \cdot \big), \qquad r_a \;=\; \frac{\tau_a^s}{\tau_a^l + \tau_a^s},
\]
\[
z_{it} \mid I_{it} \;\sim\; N\big( r_z\, s_{izt},\; (\tau_z^l + \tau_z^s)^{-1} \big), \qquad r_z \;=\; \frac{\tau_z^s}{\tau_z^l + \tau_z^s},
\]
and, for the common component \lambda^l l_t + e_t,
\[
\lambda^l l_t + e_t \mid I_{it} \;\sim\; N\big( \iota_1 \iota_2\, s_{iat},\; \cdot \big),
\]
where \iota_1 and \iota_2 are the precision weights on the common component and the private signal, functions of (\lambda^l, \tau_e, \tau_a^l, \tau_a^s). This implies
\[
\mathrm{Var}_{it}(w_{it}) \;=\; (1+\nu)^2\Big[ \sigma_{ul}^2 + \beta_z^2\Big(\frac{1}{\tau_z^s}+\frac{1}{\tau_z^l}\Big) + \frac{(1-\lambda)^2}{\tau_a^s} + (\text{terms in } \tau_a^s, \tau_e) \Big].
\]
The firm idiosyncratic productivity z_{it} and E_{it}(w_{it}) are jointly normally distributed given (l_t, e_t):
\[
\begin{pmatrix} z_{it} \\ E_{it}(w_{it}) \end{pmatrix} \Big| \, l_t, e_t \;\sim\; N\!\left( \begin{pmatrix} 0 \\ (1+\nu)\big(\phi_0 + \phi_1 a_{t-1} + \cdots\big) \end{pmatrix},\;
\begin{pmatrix} \frac{1}{\tau_z^l} & \frac{(1+\nu)\beta_z r_z}{\tau_z^l} \\[4pt] \frac{(1+\nu)\beta_z r_z}{\tau_z^l} & \frac{(1+\nu)^2(1-\lambda)^2}{\tau_a^s} + (1+\nu)^2\beta_z^2 r_z^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_z^s}\Big) \end{pmatrix} \right). \tag{26}
\]
\[
\log \int Z_{it}^{1-\frac{1}{\theta}}\, \big(E_{it}[W_{it}]\big)^{\frac{1-\frac{1}{\theta}}{1+\nu}}\, di
\;=\; \log \int \exp\Big( \big(1-\tfrac{1}{\theta}\big)\Big[ z_{it} + \frac{E_{it}\, w_{it} + \tfrac12 \mathrm{Var}_{it}(w_{it})}{1+\nu} \Big]\Big)\, di \;+\; \text{const.},
\]
which, using the joint normality in (26), evaluates to an exponential-quadratic in (l_t, e_t).
Substituting the above expression into the expression for output and comparing the corresponding coefficients:
Matching the coefficient on l_t gives
\[
\phi_{al} \;=\; \frac{\big(1-\tfrac{1}{\theta}\big)(\nu+1)}{\nu+1-(1-\alpha)\big(1-\tfrac{1}{\theta}\big)}\,\big(\lambda_1 r_a + \lambda^l r\big)\,(\cdots),
\]
and matching the coefficients on a_{t-1} and u^l_t gives a second condition of the same form in \phi_1. Together these are two equations in the two unknown loadings.
We can solve these two equations in two unknowns, and there is a unique solution. I don't report the coefficients here. So, in stage 1, firm i's objective is to maximize
\[
E_{it}\Big[ U'(C_t)\Big( Y_t^{\frac{1}{\theta}} (A_t Z_{it})^{1-\frac{1}{\theta}} (N_{it} h_{it})^{(1-\alpha)(1-\frac{1}{\theta})} - W_{it}(h_{it})\, N_{it} \Big)\Big]
\;\propto\; E_{it}\big[ X_{it}^{1+\nu} \big]\, N_{it}^{d},
\]
after substituting the stage-2 and stage-3 optimality conditions.
Agents maximize
\[
\arg\max\; E\Big[ \exp\Big( \tfrac{1}{(1-\frac{1}{\theta})(1+\nu)}\,(1+\nu)\big(\phi_0 + \phi_1 a_{t-1} + \beta_z \rho_z z_{it-1} + \beta_z r_z s_{izt} + \phi_1 s_{iat}\big)\Big)\,\Big|\; a_{t-1}, z_{it-1}\Big],
\]
where the expectation is over the signal realizations. Evaluating the log-normal expectation, the objective is proportional to
\[
\exp\Big( \frac{(1+\nu)^2}{2\big(1-\frac{1}{\theta}\big)^2(1+\nu)^2}\Big[ (\beta_z r_z)^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_z^s}\Big) + (1-\lambda)^2\Big(\frac{1}{\tau_a^l}+\frac{1}{\tau_a^s}\Big)\Big]\Big).
\]
The agent has a fixed capacity and has to allocate attention optimally. For this case, we can rewrite the above expression as
\[
\arg\max_{\tau_a^s,\,\tau_z^s}\;\; (1-\lambda)^2\Big(\frac{1}{\tau_a^l}+\frac{1}{\tau_a^s}\Big) \;+\; (\beta_z r_z)^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_z^s}\Big), \tag{27}
\]
which, substituting the posterior gains r_a and r_z, can be written in terms of the posterior variances of the aggregate and idiosyncratic components, \tag{28}
\[
\min_{\tau_a^s,\,\tau_z^s}\;\; (1-\lambda)^2\,\mathrm{Var}_{it}(a_t) \;+\; \beta_z^2\,\mathrm{Var}_{it}(z_{it}). \tag{29}
\]
As in the main paper, the capacity constraint is
\[
\frac12 \log_2\Big(\frac{\tau_a^s + \tau_a^l\,(\cdot)}{\tau_a^l\,(\cdot)}\Big) \;+\; \frac12 \log_2\Big(\frac{\tau_z^s + \tau_z^l}{\tau_z^l}\Big) \;\le\; \kappa,
\]
and the solution for the share of capacity allocated to the aggregate component is
\[
\lambda_1 \;=\;
\begin{cases}
0 & \text{if } x \le 2^{-2\kappa},\\[2pt]
\dfrac12 + \dfrac{1}{4\kappa}\log_2(x) & \text{if } x \in \big(2^{-2\kappa},\, 2^{2\kappa}\big),\\[4pt]
1 & \text{if } x > 2^{2\kappa},
\end{cases} \tag{30}
\]
where x is the ratio of the marginal value of aggregate information to that of idiosyncratic information, a function of (1-\lambda)^2, \beta_z^2, and the prior precisions \tau_a^l and \tau_z^l,
and all the coefficients are endogenously determined; they can be solved for as a fixed point. We can show that this fixed point has a unique solution.
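The fixed point can be computed by simple damped iteration. The mapping below is purely illustrative — `best_response` and its parameters are stand-ins, not the paper's actual coefficient map — but it shows the standard scheme one would use:

```python
def solve_fixed_point(best_response, x0, damp=0.5, tol=1e-10, max_iter=10_000):
    """Damped fixed-point iteration: x_{k+1} = (1-damp)*x_k + damp*T(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - damp) * x + damp * best_response(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point did not converge")

# Illustrative contraction: T(x) = 0.5*x + 1 has the unique fixed point x* = 2.
assert abs(solve_fixed_point(lambda x: 0.5 * x + 1, x0=0.0) - 2.0) < 1e-8
```

Damping is not needed for a contraction like this one, but it stabilizes the iteration when the equilibrium map is only locally well-behaved.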
Total labor chosen by each firm is given by
Total labor chosen by each firm is given by
\[
\log \mathrm{Inputs}_{it} \;=\; \text{const.} + \frac{1+\nu}{\nu+1-(1-\alpha)\big(1-\frac{1}{\theta}\big)}\Big[ \phi_1 a_{t-1} + \phi_{ul} u^l_t + \phi_z z_{it} + \phi_e e_t + \beta_z r_z\, s_{izt} + \phi_1\, s_{iat} \Big],
\]
so the cross-firm covariance of inputs is
\[
\mathrm{Cov}_t \;=\; \Big(\frac{\phi_{ul}}{\nu+1-(1-\alpha)(1-\frac{1}{\theta})}\Big)^2 \frac{1}{\tau_{ul}} \;+\; \Big(\frac{\phi_1}{\nu+1-(1-\alpha)(1-\frac{1}{\theta})}\Big)^2 \Big(\frac{1}{\tau_a^l} + \frac{1}{\tau_e}\Big).
\]
We can prove that the covariance of inputs increases with aggregate uncertainty and decreases with idiosyncratic uncertainty.
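The mechanical part of this comparative static — holding attention fixed, inputs comove more when the common aggregate shock is more volatile — can be illustrated with a small simulation. All numbers here (loadings, noise scales) are hypothetical, chosen only to make the point:

```python
import random

def input_covariance(sigma_a: float, weight_a: float, n: int = 20000) -> float:
    """Sample covariance between two firms' (stylized) log inputs when each
    loads with `weight_a` on a common aggregate shock of s.d. `sigma_a`, plus
    independent idiosyncratic noise. The population value is
    weight_a**2 * sigma_a**2. Purely illustrative -- not the paper's model."""
    rng = random.Random(0)
    x, y = [], []
    for _ in range(n):
        agg = rng.gauss(0.0, sigma_a)                  # common aggregate shock
        x.append(weight_a * agg + rng.gauss(0.0, 0.5))
        y.append(weight_a * agg + rng.gauss(0.0, 0.5))
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Holding the loading fixed, higher aggregate uncertainty raises comovement;
# in the model, endogenous attention reallocation amplifies this further.
assert input_covariance(2.0, 0.8) > input_covariance(1.0, 0.8)
```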
One of the issues with my analysis in the paper is that firm managers are constrained not to learn from asset markets, because asset markets are shut down in stage 2 when managers make real decisions. In this appendix, I relax this assumption and develop a model of asset markets with dispersed private information in a macroeconomic setting, where firm managers also learn from financial prices when making their investment decisions. I derive a tractable equilibrium that has a feedback loop between investor trading behavior and firm real investment.28
One reason firm managers don't want to learn from asset markets is when it is costly to learn from asset markets (a rational-inattention setting) and the manager is indifferent between learning from the asset market and from other sources of information (the way I modeled it in the main body of the paper). In a rational-inattention setting, the cost of processing information depends only on the prior and posterior uncertainty, and the agent is indifferent between different sources of information if the decrease in uncertainty is the same. Moreover, since I assumed that there is only one source of macro information (θ_t + e_t), asset markets convey the same information as the public source of information and, hence, the manager is truly indifferent in this setting.
If learning from asset markets is free, then the analysis is more interesting, since the price signal itself is endogenous. Before I go into the details, I will make some assumptions:
Assumption 1: I only model the aggregate stock market and do not model individual stocks. Since I am modeling firm managers' aggregation of information, it is illegal to trade on firm-specific information, and I assume that managers don't indulge in insider trading. So they only trade based on the aggregate information they have.
Assumption 2: Assume that a random fraction of firm managers are hit by a participation shock, which is orthogonal to all cash-flow shocks and affects their ability to participate in the stock market. This is similar to the noisy-supply assumption in Grossman and Stiglitz (1980).
The stock market. My specific model structure in this subsection draws heavily from recent work by Albagli, Hellwig, and Tsyvinski (2011a) and Albagli, Hellwig, and Tsyvinski (2011b). For the aggregate market, there is a unit measure of outstanding stock or equity, representing a claim on the market's dividends. These claims are traded by all the agents — imperfectly informed firm managers — except for the ones hit by the participation shock.
Every period, each manager decides whether or not to purchase up to a single unit of the aggregate stock at the current market price q_t. This assumption is standard in the literature. Assume that a fraction Φ(w_t) of agents are not allowed to trade each period, where w_t ∼ N(0, 1/τ_w) is i.i.d. and Φ denotes the standard normal CDF. This convenient transformation ensures that the total demand of these traders is positive and less than one, the total supply.
28
Even in this economy, q-theory doesn't work because the manager still has different beliefs about the firm's cash flows than the market.
Each manager observes a private signal s_{iat} = \theta_t + e_t + \epsilon_{iat}, where
\[
\epsilon_{iat} \;\sim\; N\big(0,\; 1/\tau_a^s\big). \tag{31}
\]
They also see the current stock price q_t or, equivalently, place limit orders conditional on q_t. My assumption is that the random variables ε_{iat} and w_t are independent of the fundamental θ_t and the common noise in the firms' private signal, e_t. Aggregating the demand of both traders and noise traders, the market-clearing condition is
\[
\int d(s_{iat}, q_t)\, dF(s_{iat} \mid \theta_t + e_t) \;=\; 1 - \Phi(w_t),
\]
where d(s_{iat}, q_t) ∈ [0, 1] is the demand of investor i and F is the conditional distribution of investors' private signals. The expected payoff to investor i from purchasing the stock is given by
\[
E_{it}[\pi_t] \;=\; \int \pi(\theta_t, e_t, P_t)\, dH(\theta_t, e_t \mid s_{iat}, q_t).
\]
The term π(·) denotes the expected current dividends of the stock as a function of fundamentals and the stock price. It is a function of the stock price because the price enters the firm's information set and, through that, influences firm decisions. Since I only model the trading of the index, I assume that agents trade on the cash flows of the index. The distribution H is investor i's posterior over fundamentals. Note that each firm is maximizing its expectation of the payoff,
\[
E_{it}\big[ U'(C_t)\, D_t \big];
\]
since each firm manager is infinitesimal, their trading gains won't affect aggregate consumption C_t and, hence, the firm manager can be thought of as risk neutral. This implies:
\[
d(s_{iat}, q_t) \;=\;
\begin{cases}
1 & \text{if } E_{it}[\pi_t] > q_t,\\
[0,1] & \text{if } E_{it}[\pi_t] = q_t,\\
0 & \text{if } E_{it}[\pi_t] < q_t.
\end{cases}
\]
An investor purchases the maximum quantity allowed (one share) when the expected payoff (conditional on her information) strictly exceeds the price, does not purchase any shares when the expected payoff is strictly less than the price, and is indifferent when the two are equal.
A REE is then a set of functions for prices q_t, expected payoffs π, investor decision rules d(·), and firms' decisions such that, for any history of shocks, all agents behave optimally and market clearing sets the prices.
I conjecture that equilibrium outcomes have the following two properties: (1) trading decisions of investors are characterized by a threshold rule, i.e., there is a signal s*_t such that only investors observing signals higher than s*_t choose to buy; and (2) the market price is an invertible function of s*_t.
Aggregating the demand decisions of all investors, market clearing then implies
\[
1 - \Phi\Big( \frac{s^*_t - \theta_t - e_t}{\sigma_v} \Big) \;=\; 1 - \Phi(w_t)
\quad\Longrightarrow\quad
s^*_t \;=\; \theta_t + e_t + \sigma_v\, w_t,
\]
where, with a slight abuse of notation, I replace q_t with its informational content s^*_t.
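The inversion of the market-clearing condition can be sketched numerically. The parameter values below are illustrative, and `threshold_signal` is a hypothetical helper, not part of the paper:

```python
from statistics import NormalDist

phi = NormalDist()  # standard normal CDF / quantile

def threshold_signal(theta: float, e: float, sigma_s: float, w: float) -> float:
    """Marginal investor's signal from the market-clearing condition
        1 - Phi((s* - theta - e)/sigma_s) = 1 - Phi(w):
    the mass of investors whose private signal exceeds s* must equal the
    residual supply. Inverting the (strictly monotone) normal CDF gives
    s* = theta + e + sigma_s * w."""
    return theta + e + sigma_s * w

theta, e, sigma_s, w = 1.0, 0.2, 0.5, 0.3
s_star = threshold_signal(theta, e, sigma_s, w)

demand = 1 - phi.cdf((s_star - theta - e) / sigma_s)  # buyers: signals above s*
supply = 1 - phi.cdf(w)                               # traders not shocked out
assert abs(demand - supply) < 1e-12
```

Because the map from w to s* is affine and invertible, observing the price (equivalently s*) is observing θ + e up to the noise σ_s·w — which is exactly why the price is a valid extra signal for managers.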
I will look for a symmetric equilibrium in which all the firms acquire the same signal precisions, i.e., they acquire signals s_{iat} = \theta_t + e_t + \epsilon_{iat} and s_{izt} = z_{it} + \epsilon_{izt}, with \epsilon_{iat} \sim N(0, 1/\tau_a^s) and \epsilon_{izt} \sim N(0, 1/\tau_z^s). Note that the price signal is s_{pt} = \theta_t + e_t + (\cdot)\, w_t, with w_t \sim N(0, 1/\tau_w). Let I_{it} = \{a^{t-1}, s_{iat}, s_{pt}\}. The posterior over the aggregate component given I_{it} is normal, with mean a precision-weighted combination of s_{iat} and s_{pt} and with variance determined by (\tau_a^l, \tau_a^s, \tau_w, \tau_e). This implies
\[
E_{it}(x_{it}) \;=\; \phi_0 + \phi_1 a_{t-1} + \beta_z r_z\, s_{izt} + \phi_a\, s_{iat} + \phi_p\, s_{pt},
\qquad
\mathrm{Var}_{it}(x_{it}) \;=\; \beta_z^2\Big(\frac{1}{\tau_z^s}+\frac{1}{\tau_z^l}\Big) + (\cdots),
\]
where the coefficients are endogenous and will be solved in equilibrium. We know that E_{it}(X_{it}) = \exp\big( E_{it}(x_{it}) + \tfrac12 \mathrm{Var}_{it}(x_{it}) \big).
The firm idiosyncratic productivity z_{it} and E_{it}(x_{it}) are jointly distributed as
\[
\begin{pmatrix} z_{it} \\ E_{it}(x_{it}) \end{pmatrix} \Big|\; \theta_t, e_t, w_t \;\sim\; N\!\left( \begin{pmatrix} 0 \\ \phi_0 + \phi_1 a_{t-1} + \phi_a(\theta_t + e_t) + \phi_p\, s_{pt} \end{pmatrix},\;
\begin{pmatrix} \frac{1}{\tau_z^l} & \frac{\beta_z r_z}{\tau_z^l} \\[4pt] \frac{\beta_z r_z}{\tau_z^l} & \frac{\phi_a^2}{\tau_a^s} + \beta_z^2 r_z^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_z^s}\Big) \end{pmatrix} \right). \tag{32}
\]
\[
\log \int Z_{it}^{1-\frac{1}{\theta}}\, \big(E_{it}[X_{it}]\big)^{(\cdot)}\, di
\;=\; \log \int \exp\Big( z_{it}\big(1-\tfrac{1}{\theta}\big) + (\cdot)\Big[ E_{it}\, x_{it} + \tfrac12 \mathrm{Var}_{it}(x_{it}) \Big]\Big)\, di \;+\; \frac{\sigma_z^2}{2\,(1-\rho_z^2)}\,(\cdots),
\]
which again evaluates to an exponential-quadratic in the aggregate shocks.
Substituting the above expression into the output equation and comparing the corresponding coefficients:
Matching the coefficients on \theta_t, a_{t-1}, and w_{p,t} gives conditions of the form
\[
\phi_a + \phi_p \;=\; (\cdot), \qquad \phi_1 \;=\; (\cdot), \qquad \phi_p \;=\; (\cdot)\,\phi_w,
\]
with each right-hand side a known function of the precisions (\tau_a^l, \tau_a^s, \tau_w, \tau_e) and the model parameters.
This implies \phi = 1 + (\cdot). We can solve these three equations in three unknowns, and there is a unique solution. The solution expresses (\phi_a, \phi_p, \phi_w) as ratios of polynomials in \tau_a^s, \tau_w, and (\theta - 1).
In stage 1, firm i's objective is
\[
E_{it}\Big[ U'(C_t)\Big( (\cdot)\, A_t^{1-\frac{1}{\theta}} Z_{it}^{1-\frac{1}{\theta}} N_{it}^{1-\alpha} - W_{it}\, N_{it} \Big)\Big] \;\propto\; E_{it}\big[ X_{it} \big]^{\frac{\nu+1}{\nu+1-(\cdot)}}.
\]
Before solving the information-acquisition problem, it is useful to rewrite the updating formula. Here, I assume that firm i chooses a signal of precision \tau_a^s and all other firms choose signals of precision \tau_{a,-1}^s.
The posterior over the aggregate component given I_{it} is again normal, with mean a precision-weighted combination of s_{iat} and s_{pt} — the weights now depend on both \tau_a^s and \tau_{a,-1}^s, since the informativeness of the price is determined by the other firms' choices — and with variance a known function of (\tau_a^l, \tau_a^s, \tau_{a,-1}^s, \tau_w, \tau_e). The stage-1 objective then becomes
\[
\frac{(1+\nu)^2}{2}\Big[ (\beta_z r_{z,i})^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_{z,i}^s}\Big) + \frac{(\phi_{a,i}+\phi_{p,i})^2}{(\cdot)} + \frac{\phi_{a,i}^2}{\tau_{a,i}^s} + \frac{\phi_{p,i}^2}{\tau_w\,(\cdot)} \Big] + (\cdots),
\]
where the expectation over signal realizations has been evaluated using log-normality.
The agent has a fixed capacity and has to allocate attention optimally. For this case, we can rewrite the above expression as
\[
\arg\max_{\tau_{z,i}^s,\,\tau_{a,i}^s}\;\; (\beta_z r_{z,i})^2\Big(\frac{1}{\tau_z^l}+\frac{1}{\tau_{z,i}^s}\Big) \;+\; \big(\phi_{a,i}+\phi_{p,i}\big)^2\, \mathrm{Var}\big(\theta_t + e_t \,\big|\, s_{iat}, s_{pt}\big) \;+\; (\cdots),
\]
where the conditional variance of the aggregate component is decreasing in both the firm's own precision \tau_{a,i}^s and the price informativeness, which is governed by \tau_{a,-1}^s and \tau_w.
Here, I use the law of total variance to go from the first equation to the second. As in the main paper, the constraint is given by
\[
\frac12 \log_2\Big(\frac{\tau_a^s + (\cdot)}{(\cdot)}\Big) \;+\; \frac12 \log_2\Big(\frac{\tau_z^s + \tau_z^l}{\tau_z^l}\Big) \;\le\; \kappa,
\]
and the solution of the problem takes the same water-filling form as before:
\[
\lambda_1 \;=\;
\begin{cases}
0 & \text{if } x \le 2^{-2\kappa},\\[2pt]
\dfrac12 + \dfrac{1}{4\kappa}\log_2(x) & \text{if } x \in \big(2^{-2\kappa},\, 2^{2\kappa}\big),\\[4pt]
1 & \text{if } x > 2^{2\kappa},
\end{cases} \tag{33}
\]
where x is the ratio of the marginal value of aggregate to idiosyncratic information, now adjusted for the free information contained in the price (a function of \tau_a^l, \tau_e, \tau_w, and the other firms' precision \tau_{a,-1}^s).
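The piecewise allocation rule can be sketched directly. The interior coefficient 1/(4κ) follows the form suggested by the displays in (30) and (33) together with the corner conditions (it is the unique linear interior rule that hits 0 and 1 exactly at the stated thresholds); treat it as a reconstruction, not the paper's verbatim formula:

```python
import math

def aggregate_attention(x: float, kappa: float) -> float:
    """Water-filling split of a capacity kappa (in bits) between the aggregate
    and idiosyncratic components: the interior solution is linear in log2 of
    the relative marginal value x, with corner solutions at 0 and 1."""
    if x <= 2.0 ** (-2 * kappa):
        return 0.0  # all capacity to the idiosyncratic component
    if x >= 2.0 ** (2 * kappa):
        return 1.0  # all capacity to the aggregate component
    return 0.5 + math.log2(x) / (4 * kappa)

kappa = 1.0
assert aggregate_attention(1.0, kappa) == 0.5           # symmetric case
assert aggregate_attention(2.0 ** (-3), kappa) == 0.0   # corner: ignore aggregate
assert aggregate_attention(2.0 ** 3, kappa) == 1.0      # corner: ignore idiosyncratic
```

Note the allocation is continuous at both thresholds: the interior rule evaluates to exactly 0 at x = 2^(−2κ) and exactly 1 at x = 2^(2κ).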
Note that the labor choice of each firm is given by
\[
n_{it} \;=\; \text{const.} + \frac{1}{\nu+1-(\cdot)}\Big[ \phi_1\, a_{t-1} + \beta_z r_z\, s_{izt} + \phi_a\, s_{iat} + \phi_p\, s_{pt} \Big],
\]
so the cross-firm covariance of inputs is
\[
\mathrm{Cov}(\mathrm{Inputs}) \;=\; \Big(\frac{\phi_a + \phi_p}{\nu+1-(\cdot)}\Big)^2\Big(\frac{1}{\tau_a^l}+\frac{1}{\tau_e}\Big) \;+\; \Big(\frac{\phi_p}{\nu+1-(\cdot)}\Big)^2 \frac{1}{\tau_w\, \tau_a^s\,(\cdot)}.
\]
Note that the above is a function of just the aggregate signals. In an economy without endogenous learning, the comovement of inputs does not vary with idiosyncratic signals. In an economy with endogenous learning, the co-movement of inputs decreases with idiosyncratic uncertainty. Conclusion 1 states that most of the variation in aggregate inputs is driven by the covariance of inputs across sectors, but most of the variance of aggregate TFP is driven by the variance of sectors. Comovement in inputs is higher not only because of the common error component, but also because of the noise in prices, which is part of everyone's beliefs and, hence, adds to the comovement in inputs.
Planner's problem:
As in the paper, the use of information will remain the same, since the result in the paper is independent of the signal structure. But the acquisition of information will be inefficient, because firm managers impose a positive externality on each other in that they can learn from financial markets freely. Because of this, firm managers pay less attention to aggregate shocks and more attention to idiosyncratic shocks than the social planner would.
Correlations between the uncertainty proxies:

                        Aggregate (Real)  Aggregate (Mkt.)  Idiosyncratic (Mkt.)  Idiosyncratic (Real)
Aggregate (Mkt.)             0.342***
Idiosyncratic (Mkt.)         0.169***         0.447***
Idiosyncratic (Real)         0.135***         0.230***          0.512***
Panel A: Inputs
The sample used is yearly data for 35 sectors, used to estimate the covariance of inputs. The dependent variable is the covariation of inputs across sectors (refer to equation 18). The independent variables are aggregate uncertainty and idiosyncratic uncertainty. I use various proxies for each uncertainty ("Market" refers to proxies constructed using market data; "Real" refers to proxies constructed using real data; refer to the variable definitions for more information on each proxy). Different columns correspond to various permutations of proxies.

                     (1)        (2)        (3)        (4)
                   0.575***   0.656***   0.234     -0.135
                  (0.156)    (0.215)    (0.168)    (0.168)
                  -0.054    -0.634***   0.000     -0.438*
                  (0.148)    (0.197)    (0.160)    (0.243)
                  -0.129     -0.138                0.043
                  (0.152)    (0.210)              (0.252)
                                                   0.164
                                                  (0.265)
Observations          32         20         39         20
Adjusted R2        0.273      0.398      0.004      0.070
Panel B: Outputs
The dependent variable is the covariation of output across sectors (refer to equation 18). The independent variables are aggregate uncertainty and idiosyncratic uncertainty. I use various proxies for each uncertainty ("Market" refers to proxies constructed using market data; "Real" refers to proxies constructed using real data; refer to the variable definitions for more information on each proxy). Different columns correspond to various permutations of proxies.

                     (1)        (2)        (3)        (4)
                   0.569***   0.503**    0.178     -0.180
                  (0.159)    (0.230)    (0.169)    (0.169)
                  -0.115    -0.562**   -0.000     -0.360
                  (0.150)    (0.211)    (0.161)    (0.234)
                  -0.158     -0.037               -0.136
                  (0.155)    (0.224)              (0.243)
                                                   0.290
                                                  (0.255)
Observations          32         20         39         20
Adjusted R2        0.260      0.268     -0.006      0.078
The sample used is yearly panel data for 35 sectors, used to estimate the covariance of inputs. The dependent variable is the covariation of inputs of sector s with aggregate inputs. The independent variables are aggregate uncertainty and the idiosyncratic uncertainty of sector s. Different columns correspond to various specifications of fixed effects.

                               (1)           (2)           (3)
                            Covariance    Covariance    Covariance
                               b/se          b/se          b/se
Idiosyncratic uncertainty   -0.00171***   -0.00315***   -0.00137***
                            (0.00019)     (0.00098)     (0.00018)
Aggregate uncertainty        0.00056***    0.00066***
                            (0.00020)     (0.00017)
Fixed effects                  None
Adjusted R2                   0.150         0.145         0.138
The sample used is quarterly data on GDP from FRED (Federal Reserve Economic Data). The dependent variable is the cyclical component of the GDP series. I filter the series using the Hodrick and Prescott (1997) (HP) filter. The independent variables are aggregate uncertainty and idiosyncratic uncertainty. I use various proxies for each uncertainty ("Market" refers to proxies constructed using market data; "Real" refers to proxies constructed using real data; refer to the variable definitions for more information on each proxy). Different columns correspond to various permutations of proxies. The standard errors reported are Newey–West standard errors with 20 lags.

                     (1)        (2)        (3)        (4)
                  -0.183*    -0.139    -0.389***   0.931***
                  (0.106)    (0.102)   (0.092)    (0.235)
                   0.598**    0.019     0.246***   0.210**
                  (0.242)    (0.112)   (0.086)    (0.094)
                   0.167*     0.018               -0.240***
                  (0.097)    (0.074)              (0.084)
                                                   0.072
                                                  (0.074)
Observations         174        152        150        148
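The Newey–West standard errors used throughout these tables rest on a Bartlett-kernel long-run variance. A minimal stdlib-only sketch of that quantity (not a full regression routine, and not the code used for the tables):

```python
import random

def newey_west_lrv(u, lags):
    """Bartlett-kernel (Newey-West) long-run variance of a demeaned series u:
    gamma_0 + 2 * sum_{j<=lags} (1 - j/(lags+1)) * gamma_j, with gamma_j the
    j-th sample autocovariance. This is the quantity that replaces gamma_0
    when computing HAC standard errors."""
    n = len(u)
    def gamma(j):
        return sum(u[t] * u[t - j] for t in range(j, n)) / n
    lrv = gamma(0)
    for j in range(1, lags + 1):
        lrv += 2 * (1 - j / (lags + 1)) * gamma(j)
    return lrv

# For a persistent (AR(1)) series, the long-run variance exceeds the naive
# variance gamma_0 -- which is why HAC corrections matter for these series.
rng = random.Random(1)
u, prev = [], 0.0
for _ in range(2000):
    prev = 0.8 * prev + rng.gauss(0.0, 1.0)
    u.append(prev)
m = sum(u) / len(u)
u = [x - m for x in u]
assert newey_west_lrv(u, lags=20) > sum(x * x for x in u) / len(u)
```

In practice one would use a library HAC implementation rather than hand-rolling this; the sketch only shows where the "20 lags" enters.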
Panel B: Consumption
The dependent variable is the cyclical component of the consumption series. I filter the series using the Hodrick and Prescott (1997) (HP) filter. The independent variables are aggregate uncertainty and idiosyncratic uncertainty. I use various proxies for each uncertainty ("Market" refers to proxies constructed using market data; "Real" refers to proxies constructed using real data; refer to the variable definitions for more information on each proxy). Different columns correspond to various permutations of proxies. The standard errors reported are Newey–West standard errors with 20 lags.

                     (1)        (2)        (3)        (4)
                  -0.271***  -0.230**  -0.426***   0.989***
                  (0.098)    (0.102)   (0.090)    (0.204)
                   0.757***   0.020     0.235***   0.249**
                  (0.229)    (0.133)   (0.082)    (0.106)
                   0.178*     0.048               -0.409***
                  (0.097)    (0.089)              (0.096)
                                                   0.093
                                                  (0.084)
Observations         218        152        248        148
The sample used is annual data from Compustat. I measure the amount of reallocation as the sum of acquisitions and sales of property, plant, and equipment (PPE). The dependent variable is the cyclical component of the reallocation series. I filter the series using the Hodrick and Prescott (1997) (HP) filter. The first columns use the sum of sales of PPE and acquisitions as the dependent variable, the next columns use sales of PPE, and the last columns use acquisitions as the dependent variable. The independent variables are aggregate uncertainty and idiosyncratic uncertainty. I use various proxies for each uncertainty ("Market" refers to proxies constructed using market data; "Real" refers to proxies constructed using real data; refer to the variable definitions for more information on each proxy). Different columns correspond to various permutations of proxies. The standard errors reported are Newey–West standard errors with 10 lags.

                   (1)       (2)       (3)       (4)       (5)       (6)
                   apc       apc       ppesc     ppesc     acqc      acqc
                  0.014     0.233*    0.004     0.114*    0.027     0.285*
                 (0.037)   (0.119)   (0.021)   (0.060)   (0.052)   (0.167)
                  0.255***  0.027     0.195***  0.078**   0.265**  -0.004
                 (0.088)   (0.067)   (0.050)   (0.034)   (0.122)   (0.095)
                  0.059     0.068     0.047*    0.035     0.058     0.081
                 (0.042)   (0.044)   (0.024)   (0.022)   (0.058)   (0.062)
Observations        44        38        44        38        44        38
Adjusted R2       0.147     0.153     0.250     0.384     0.080     0.071
                   (1)         (2)         (3)         (4)
                std_beta    std_beta    std_beta    std_beta
                   b/se        b/se        b/se        b/se
nber_rec          0.008     -0.019***               -0.030***
                 (0.008)    (0.003)                 (0.003)
                                         0.081***    0.109***
                                        (0.007)     (0.009)
p/d ratio                                           -0.036*
                                                    (0.020)
Constant          0.320***   0.321***    0.341***    0.363***
                 (0.003)    (0.003)     (0.003)     (0.009)
Observations        651        651         651         648
Adjusted R2      -0.000      0.059       0.158       0.288
                   (1)         (2)         (3)         (4)
                std_beta    std_beta    std_beta    std_beta
                   b/se        b/se        b/se        b/se
nber_rec          0.002     -0.010***               -0.015***
                 (0.009)    (0.003)                 (0.003)
                                         0.021***    0.023***
                                        (0.003)     (0.003)
Constant          0.310***   0.313***    0.310***    0.315***
                 (0.004)    (0.003)     (0.003)     (0.003)
Observations        453        453         453         453
Adjusted R2      -0.002      0.017       0.088       0.128