To cite this article: Lawrence A. Boland FRSC (2016) Econometrics and equilibrium models,
Review of Political Economy, 28:3, 438-447, DOI: 10.1080/09538259.2016.1154757
ABSTRACT: This paper examines problems facing builders of econometric models, which
typically take the form of equilibrium models. Some of these problems were
recognized by Trygve Haavelmo decades ago. And as Aris Spanos
has recently discussed, the problems are often the result of what
we say in our textbooks. Some problems have to do with what we
mean by econometric parameters and others with how we use
probabilities.
1. Introduction
By the 1960s, most economics departments in North America had begun adding econometrics
to the list of required courses, particularly in their graduate programs. After all, the Federal
Reserve was hiring econometricians to help with elaborate computer models that started
becoming available. By the mid-1980s, things started turning around and the Federal
Reserve was under pressure to start laying off some of its econometrics staff. Supposedly
the motivation had nothing to do with the performance of such models but with the sheer
budgetary cost for the Federal government of the day.
The message never seemed to get back to the graduate programs, which continued to
turn out sophisticated econometric theorists as well as applied econometricians. The
increasing availability of all kinds of social and economic data gave applied econometri-
cians a lot to work on. Today, econometric model-building in macroeconomics has
found a fruitful field to plow, particularly with the widespread use of Dynamic Stochastic
General Equilibrium (DSGE) models, despite the growing criticism of such models as well
as criticism of the extent to which equilibrium-based structural econometric models are
capable of dealing with any questions of dynamics and uncertainty.
Today, any empirical question is thought to be a subject that should be addressed with
econometrics. In Rational Econometric Man, Edward Nell and Karim Errouaki (2013)
present a welcome and timely case for the view that econometrics and econometric
model-building may not be the magic tools to solve all empirical questions despite what
many seem to have thought they were in the 1960s.
Some think that if there is a problem with econometrics it is with what is taught in
econometrics textbooks rather than with econometrics itself—particularly if econometric
theory is properly understood when it comes to how to deal with available data. In this
paper I will go further and argue that a main problem with econometric models (such
as the DSGE models) is that the typical structural econometric models are all equilibrium
models, which are incapable of dealing with real-world economies.
much more difficult than it was thought to be by earlier economists, and that no solution of it
which claims to be simple can be true. (Marshall 1920, pp. 426, 423)
Let us pick out a subgroup of individuals from the total group of N, such that, for each
member of this subgroup, the factors x are identically the same. When, nevertheless, the
quantities y for the members of this subgroup are different, it means that the decisions of
the individuals, even after fixing the values of xl, x2, … xn, are still to some extent uncertain.
The individuals do not all act alike. When we assume that [a general shift] has, for each fixed
set of values of the variables x, a certain probability distribution, we accept the parameters (or
some more general properties) of these distributions as certain additional characteristics of
the theoretical model itself. These parameters (or properties) describe the structure of the
model just as much as do the systematic influences of xl, x2 … xn upon y. Such random
elements are not merely some superficial additions ‘for statistical purposes’ … . Consider a
group of families of equal size and composition. Let r be family income, and let x be
family spending, during a certain period of time. Assume all prices constant and the same
for all families during this period. Still, among those families who have the same income,
the amount spent, x, will vary from one family to the other, because of a great many neglected
factors. Let us assume that the spending habits of an infinite population of such families could
be described by the following stochastic equation:
loge x = k loge r + k0 + ε. (k and k0 = constants) (1)
First, let us imagine that we could, somehow, remove the forces which cause the discrepancies
ε. In this hypothetical population all families with the same r would act alike, and we
should have

loge x = k loge r + k0. (2)
Secondly, let the ‘errors’ ε remain in the scheme, but consider only the average or expected
consumption for those families who have the same income r … .
E(x|r) means: Expected value of x, given r.
Therefore, what the average family in the scheme (1) does is not necessarily the same as what
the families would all do if they acted alike.
It is particularly important to be aware of the difference between these two types of relations
when we want to perform algebraic operations within stochastic equation systems. For
instance, from (2) we derive

x = e^(k0) r^k. (3)

But from E(loge x|r) = k loge r + k0 we do not get E(x|r) = e^(k0) r^k. Therefore, when we perform
such operations, we must keep in mind that we are using the hypothetical ‘if-there-were-no-
errors’ scheme, and not the ‘expected-value’ scheme. Confusion on this point … arises in par-
ticular when we have a system of stochastic equations and apply algebraic elimination pro-
cesses to the corresponding ‘expected-value’ equations. The usual mistake here is that we
identify the expected values of a variable in one equation with the expected values of the
same variable in another equation. This may lead to nonsensical results. (Haavelmo 1944,
pp. 51, 57–9, original emphasis)
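Haavelmo's warning can be checked numerically. The sketch below is my own illustration, not from the monograph; the values of k, k0 and the error standard deviation sigma are arbitrary assumptions. It simulates spending for many families sharing the same income r and shows that the expected value E(x|r) differs from the ‘if-there-were-no-errors’ value e^(k0) r^k:

```python
import math
import random

# Haavelmo's consumption scheme: loge x = k*loge r + k0 + eps, eps ~ N(0, sigma^2).
# k, k0 and sigma are arbitrary illustrative assumptions, not values from the text.
k, k0, sigma = 0.8, 0.5, 0.6
r = 100.0  # a fixed family income

random.seed(42)
n = 200_000
# Simulate spending x for n families that all have the same income r.
xs = [math.exp(k * math.log(r) + k0 + random.gauss(0.0, sigma)) for _ in range(n)]

no_error_value = math.exp(k0) * r ** k  # the 'if-there-were-no-errors' value e^(k0) r^k
expected_value = sum(xs) / n            # Monte Carlo estimate of E(x | r)
# For normally distributed errors the exact gap is the Jensen factor exp(sigma^2 / 2).
exact_expected = no_error_value * math.exp(sigma ** 2 / 2)

print(no_error_value, expected_value, exact_expected)
```

With normal errors the two schemes differ by the factor e^(sigma^2/2), which is precisely the kind of discrepancy Haavelmo warns will contaminate algebraic elimination applied to ‘expected-value’ equations.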
Book VI. There is nothing in Book V or in the principles textbooks about the difficult
questions of income distribution or diversity in general. Marshall did discuss such pro-
blems in his Book VI, which economists tend to ignore. But Book V is almost entirely
about the necessary conditions of a long-run equilibrium, such as the condition that in
all markets every economic actor is maximizing his or her utility, and all firms are max-
imizing profits. It implicitly assumes that the number of buyers and sellers is large enough
to ensure that everyone is a price-taker. This means that firms will produce the level of
output that makes marginal cost equal to price, and that economic profits are zero.
Modern textbooks typically discuss equilibria where producers are not perfectly competi-
tive price-takers, but rarely do these textbooks point out that the resulting long-run equi-
libria are implausible since not all of the necessary conditions can be met. And
microeconomics textbooks rarely consider how the distribution patterns of preferences
and resource ownership affect economic outcomes, unlike Marshall, who addressed
such issues in his Book VI. Instead, what Economics 101 students learn about are equili-
brium models.
As to Trygve Haavelmo (1944, Section Thirteen), 70 years ago he examined the limits to
building stochastic models and showed how statistical parameter estimation can easily
yield questionable results. The primary difficulty is that if, for example, one were
merely to substitute the econometrically measured values of the parameters into the
non-stochastic equations representing, say, the demand and supply curves, one could
not legitimately treat the price P and the quantity Q as algebraic solutions to a set of
non-stochastic simultaneous equations. The reason is that the rules of algebraic manipu-
lation of any equation require that the variables have exact values, i.e., be non-stochastic.
But given unavoidable observation errors, statistics would provide only averages with a
probability of observational error over specified ranges.
Judging by what one finds in many econometrics textbooks, one would get the idea that
the sole purpose for estimating the values of the parameters of a stochastic equilibrium-
based structural model is to be able to engage in algebraic manipulations using the
non-stochastic version of the model. In my reading of what Haavelmo was explaining
in his Section Thirteen, this is exactly what he says yields meaningless results. Perhaps
like the failure of microeconomics textbook writers to read Marshall’s Book VI, writers
of econometrics textbooks have not read Haavelmo’s Section Thirteen.
In any case, let us look at what happens when textbook writers either misread Marshall
or overlook important parts of his book, or neglect to read the whole of Haavelmo’s (1944)
monograph.
Consider again Marshall’s ‘imaginary world’. One question that might occur to some is: If,
like Marshall, we build a model within which we have assumed all market participants are
price-taking maximizers and we assume all participating firms are making zero profit,
have we implicitly assumed that all production functions are homogeneous of degree
one, i.e., exhibit constant returns to scale, at least in the vicinity of the point of equili-
brium? I am not only going to argue the affirmative but will also note that on occasion
this is done without recognizing that one is building such a model.
When building a model and assuming all the necessary conditions for an equilibrium
model are satisfied, one has in effect built an equilibrium model without explicitly claiming
so.1 Apart from the price-taker and maximization assumptions of the model, one has a
choice between making an assumption about the nature and status of the production
(or total cost) functions or making an assumption about the going level of profit.
Obviously, assuming the total (excess) profit is zero fulfills the remaining necessary con-
dition. To see that this also assures that the firm is producing at a point on the production
function where it is locally linearly homogeneous, consider the typical price-taking firm as
in Figure 1.
Textbooks note that at point A, marginal cost (MC) equals the given price since the firm
is assumed to be maximizing profit; and at that point the price equals average cost (AC)
1. Before going on, we need to call attention to the fact that textbooks are routinely misleading about necessary and
sufficient conditions. For an elementary example, consider the theory of the firm. We say that a necessary condition for profit
maximization is that marginal revenue equals marginal cost, so that the slope of the profit function is zero. In order to
distinguish a maximum from a minimum, it is further necessary that the second derivative of the profit function be
negative at the point of optimization. Unfortunately, textbooks sometimes identify this second-order diminishing-margin
condition as the sufficient condition for maximization. But there is no separate sufficient condition: the diminishing-margin
condition is simply another necessary condition of maximization. We should say instead that the conjunction of the two
necessary conditions is the sufficient condition.
since we assume total profit is zero. But it must also be noted that at point A marginal
cost equals average cost, which is a necessary condition for local linear homogeneity. We
could have assumed that the firm is producing where the production function is locally
linear-homogeneous, which means marginal cost is equal to average cost, instead of
assuming that the firm is maximizing profit whenever total profit is zero.
Any model of the price-taking firm in which it is assumed that the production function
exhibits constant returns to scale (CRS) assures that average cost equals marginal cost. For
a long time, many empirical model builders would assume for convenience that the pro-
duction function can be represented by a Cobb-Douglas production function. By defi-
nition this function has the following form where Q represents output, K represents the
level of capital (machines) used and L represents the amount of labor:
Q = L^α K^(1−α)
No matter the value of α, this is a CRS production function and this means the firm is
producing at its point A. If it is further assumed that the firm is a price-taker and total
profit is zero, then the firm is maximizing profit whether or not maximization is assumed!
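The claim that constant returns to scale forces marginal cost to equal average cost can be verified with a small numerical check. This is my own sketch; the values of α, the wage w and the rental rate of capital r are arbitrary assumptions. For Q = L^α K^(1−α), the cost-minimizing total cost is linear in Q, so AC = MC at every output level:

```python
import math

# Cobb-Douglas with constant returns: Q = L^alpha * K^(1-alpha).
# alpha, wage w and rental rate r are arbitrary illustrative assumptions.
alpha, w, r = 0.6, 2.0, 3.0

def min_cost(Q):
    """Cheapest way to produce Q: scan over labor L, back out the required K."""
    best = float("inf")
    L = 1e-3
    while L < 1e3:
        K = (Q / L ** alpha) ** (1.0 / (1.0 - alpha))  # K needed to reach Q given L
        best = min(best, w * L + r * K)
        L *= 1.001  # fine geometric grid over labor
    return best

Q = 10.0
h = 1e-4
ac = min_cost(Q) / Q                        # average cost
mc = (min_cost(Q + h) - min_cost(Q)) / h    # marginal cost (finite difference)

# Closed-form unit cost for a CRS Cobb-Douglas technology:
unit = (w / alpha) ** alpha * (r / (1.0 - alpha)) ** (1.0 - alpha)

print(ac, mc, unit)  # all approximately equal
```

Because total cost is proportional to output under CRS, the average and marginal cost curves coincide as a single horizontal line, which is exactly the ‘point A’ property the text describes.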
While all of this is very elementary, it is interesting that one can build a model thinking
one has not assumed CRS but use the model to make an argument that holds only if CRS is
presumed. In this regard, I explicitly point to Armen Alchian’s (1950) famous article on
‘Uncertainty, Evolution and Economic Theory’. In the 1940s many articles were published
that questioned the realism of the assumption that firms are knowingly maximizing.
Alchian only questioned whether the firm was knowingly maximizing. His argument
was based on a simple matter of the survival of the fittest. If a firm survives competition
and is making zero excess profit, it must be maximizing even though that might not have
been its explicit aim. Alchian thus relies on the conjunction of long-run zero profit and
price-taking, but he must implicitly assume the production function exhibits CRS to be
able to conclude that the firm is maximizing (knowingly or otherwise).
My point is that the explicit or implicit assumption of a CRS production function
makes it far too easy to implicitly build and apply an equilibrium model. Most often
this is just what models such as the DSGE model implicitly are doing. And econometric
models that implicitly provide all of the necessary conditions for an attainment of an equi-
librium are conveniently providing the basis for the presumption that the estimated
parameters are equilibrium parameters which are ergodic (i.e., constant over time) so long
as the equilibrium status is maintained.
very simple but very important point … . If there are no standard parameters in economics,
they are at best local approximations applying only to a given time and place … . Many
modern econometricians share this view and would be ready to accept that the parameters
of an economic model are not great constants, but are instead useful simplifications in
trying to make sense of the world … . As a result, the inappropriate pursuit of quantification
in economic models would reduce, rather than increase, their value in economic analysis.
(Nell and Errouaki 2013, pp. 355–356)
In Causality in Economics, John Hicks (1979, p. 39) specifically called into question what
parameters in economic models mean and observed that they are in no way the same kind
of entities as the parameters we find in the natural sciences: ‘One aspect of the difference
between the sciences and economics has … to be noted. The sciences are full of measure-
ments which, over a wide field of application, can be regarded as constants—the absolute
zero of temperature is -273° centigrade, the number of chromosomes in the human zygote
is forty-six, and so on—but there are no such constants in economics.’ Some might note
that there have always been constants if we allow for those created by fiat such as a
legislated price of gold. For many decades the price for the USA was fixed at $35 per
ounce. But of course, the fixity was short-lived—hence not ergodic. In the 19th century
it was thought that business cycles were quite regular with a constant length of about
10 years. William Stanley Jevons even hypothesized that they match the frequency of
sunspot occurrences. But, it was subsequently established that their frequencies did not
actually match.
As Hicks (1979, p. 55) pointed out, economists can simply assume that some of the par-
ameters represent exogenous constants, but the parameters of an economist’s behavioral
equations do not have the ironclad constancy of the parameters encountered in the natural
sciences. Despite this, economic model-builders continue to employ econometric esti-
mation techniques that require the parameters to be ergodic.2
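The point about ergodicity can be made concrete with a toy regression. This is my own sketch, not anything from Hicks; the drifting-parameter setup and all numbers are assumptions chosen for illustration. If the true slope drifts over the sample, ordinary least squares still reports a single ‘constant’ slope, roughly the time average, thereby hiding the very non-constancy that matters:

```python
import random

random.seed(0)

# A toy behavioral equation y_t = b_t * x_t + noise, where the 'parameter'
# b_t drifts from 0.5 to 1.5 over the sample -- i.e., it is not a constant.
T = 1000
xs, ys = [], []
for t in range(T):
    b_t = 0.5 + 1.0 * t / (T - 1)      # slowly drifting slope
    x = random.uniform(0.0, 10.0)
    y = b_t * x + random.gauss(0.0, 0.5)
    xs.append(x)
    ys.append(y)

# OLS slope through the origin: sum(x*y) / sum(x*x).
b_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(b_hat)  # a single number near 1.0, the time average of b_t
# OLS delivers one 'parameter' even though no single slope generated the data.
```

Nothing in the fitted output warns the user that the estimated slope corresponds to no behavioral relation that actually held at any date in the sample.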
Aris Spanos has published many articles criticizing what is taught in econometrics text-
books. One major concern is that textbooks are not sufficiently concerned with the ques-
tion of whether the data to which econometricians apply their various estimation
techniques are statistically adequate (see Boland 2014, Chapter 10). In his 2008 Palgrave
article on ‘Statistics and Economics’, Spanos ties much of the problem to the
2. P.A.V. Swamy (1970) has proposed a way to deal with non-ergodic parameters in econometric models, but not much has
been done with his proposal judging by today’s econometrics textbooks.
textbooks’ failure to appreciate what Haavelmo actually said in the 1944 monograph.
Spanos (1989) is particularly concerned with what he calls the Fisher-Neyman-Pearson
‘approach to statistical inference, point estimation, hypothesis testing and interval esti-
mation’ (p. 878).
According to Spanos (1989), modern textbook econometrics has evolved not from Haa-
velmo, but from the work of Jan Tinbergen, who narrowly defined the purpose of econo-
metrics to be about the quantification or measurement of theoretical relationships.
Interestingly, this is not what Ragnar Frisch, the first editor of Econometrica, thought
was the purpose of the Econometric Society or its journal. The purpose was the
advancement of economic theory by connecting it to statistics and mathematics. Spanos (ibid.)
says that apart from some probabilistic language, Tinbergen’s econometric methodology
‘has little in common with the methodology in Haavelmo’s (1944) monograph, commonly
acknowledged as having founded modern econometrics’ (p. 406). Moreover, Haavelmo’s
(1944) methodology ‘includes several important elements which have either been
discarded or never fully integrated within the textbook approach’ (p. 405). As I noted
above, Haavelmo’s probabilities approach put into question econometric models designed
only to measure the parameters of theoretical relationships built into those models.
In this methodological perspective, with which I concur, probabilities have their place
but may not be relevant to some important economic relationships, particularly those
involving fundamental uncertainties.
Early in their training, economists are introduced to probabilities. Rarely are they told
when they might not be relevant. It is common for them to be told that one can place a
probability on any uncertain future event (such as the value of the equilibrium price at
which a product will be sold five years from now), despite that being a singular event.
As Hicks (1979) pointed out:
There are two theories of probability—or it may be better to say there are two concepts of
probability, for the mathematical structures that have been raised, on the basis of the one
concept and on the other, seem largely to correspond. They are (1) the frequency theory
and (2) the axiomatic theory, to give them their usual names. It is the frequency theory
which has become orthodox; most modern works on statistical mathematics take it as
their starting point … . According to the frequency theory, probability is a property of
random experiments. ‘Whenever we say that the probability of an event with respect to an
experiment is equal to P’ we mean that ‘in a long series of repetitions of the experiment, it
is practically certain that the frequency of E will be approximately equal to P’. (pp. 105–106)
A singular event is by definition not something ‘in a long series of repetitions’—it just
happens once. I like to pose the following challenge to my students: ‘Suppose somebody
should happen to jump unaided off the CN Tower. What is the probability that he will
survive the fall?’ They usually say they do not know, to which I point out that the answer
is 0 or 1, since it is a singular event. Answering with a probability between 0 and 1 makes
no sense.
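The frequency-theory definition Hicks quotes is easy to exhibit in a few lines. This is my own sketch; the event probability of 0.3 is an arbitrary assumption. Repetition is what gives the number P its meaning; strike out the long series and nothing is left for a relative frequency to converge to:

```python
import random

random.seed(1)

p = 0.3          # probability of the event E in one run of the experiment (assumed)
trials = 100_000

# 'In a long series of repetitions of the experiment, it is practically certain
# that the frequency of E will be approximately equal to P.'
successes = sum(1 for _ in range(trials) if random.random() < p)
frequency = successes / trials
print(frequency)  # close to 0.3

# A singular event corresponds to a single run: its observed 'frequency'
# can only be 0 or 1, which is all the frequency theory can say about it.
single_run = 1 if random.random() < p else 0
print(single_run)
```

The contrast between the two printed values is the CN Tower point in miniature: the long series yields a stable fraction, while the one-off run yields only 0 or 1.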
As Knight, Keynes and Davidson explained, one can attribute a probability to matters of
risk. The probability of rolling a 10 with a pair of dice can be calculated if you know combi-
natorial mathematics. But if one is asking about some uncertain singular future event, there
are no combinatorial mathematics to deal with that, nor is there any way to perform a
random experiment. Probabilities are simply irrelevant when it comes to uncertainty.
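The dice calculation mentioned above is a pure counting exercise, which is precisely why it is a matter of risk rather than uncertainty. A minimal sketch enumerating the 36 equally likely outcomes of a pair of fair dice:

```python
from itertools import product

# All 36 equally likely outcomes for a pair of fair dice.
outcomes = list(product(range(1, 7), repeat=2))
tens = [o for o in outcomes if sum(o) == 10]   # (4, 6), (5, 5), (6, 4)

probability = len(tens) / len(outcomes)
print(len(tens), len(outcomes), probability)   # 3 outcomes out of 36, i.e. 1/12
```

No comparable enumeration exists for an uncertain singular future event, which is the distinction Knight, Keynes and Davidson insisted upon.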
This raises the question of what the probabilities in structural econometric equilibrium
models represent. Even more mysterious is the issue of econometric estimation of the par-
ameters of a macroeconomic equilibrium model. One can, of course, say they are in the
model to deal with the uncertainty concerning the observations used to perform the esti-
mation procedures. Are the observation errors due to measurement problems as, perhaps,
Tinbergen’s methodology might prescribe? Or are they the result of errors made in con-
structing the model? However you interpret them for the purpose of applying the econo-
metric model to existing data, perhaps in order to construct some policy recommendation
to advise a government, one could see them as merely a matter of risk. In my view, this
may be the main virtue of constructing such econometric models because the government
agent can make a decision based on the cost and risk of making a mistake given the size of
the error term—much as one might assess the risk of betting on whether to put a particular
amount of money on the roulette ball ending on red given the known probabilities for that
possible result.
But another question should be asked about estimating the parameters of a macroeco-
nomic equilibrium model. To what extent does the estimation of the parameters depend
on the supposition that the data correspond to a macroeconomic equilibrium? Hardly
anyone seems concerned with such a question, particularly economists building DSGE
models. Usually such models not only presume equilibrium states, they also deal with
matters of an uncertain future. Given that textbook econometric estimation since Haavel-
mo’s intervention in 1944 is now explicitly based on probabilistic stochastic models, and
recognizing that such estimations are appropriate only for ergodic parameters, it is diffi-
cult to see how textbook econometric estimation techniques are relevant let alone
plausible.
Disclosure statement
No potential conflict of interest was reported by the author.
References
Alchian, A. 1950. ‘Uncertainty, Evolution and Economic Theory.’ Journal of Political Economy 58:
211–221.
Boland, L. 2014. Model Building in Economics: Its Purposes and Limitations. New York: Cambridge
University Press.
Davidson, P. 1996. ‘Reality and Economic Theory.’ Journal of Post Keynesian Economics 18: 479–
508.
Haavelmo, T. 1943. ‘The Statistical Implications of a System of Simultaneous Equations.’
Econometrica 11: 1–12.
Haavelmo, T. 1944. ‘The Probability Approach in Econometrics.’ Econometrica 12 (Supplement):
iii–115.
Hicks, J. 1979. Causality in Economics. Oxford: Basil Blackwell.