
REVIEW OF POLITICAL ECONOMY, 2016
VOL. 28, NO. 3, 438–447
http://dx.doi.org/10.1080/09538259.2016.1154757

Econometrics and equilibrium models


Lawrence A. Boland, FRSC
Department of Economics, Simon Fraser University, Burnaby, BC, Canada

ABSTRACT
In Rational Econometric Man, Edward Nell and Karim Errouaki present a welcome and
timely case for the view that econometrics and econometric model-building may not be
the magic tools to solve all empirical questions despite what many seem to have thought
they were in the 1960s. Here I examine some possible problems with econometric models
that have to do with their usually taking the form of equilibrium models. Some of these
problems were recognized by Trygve Haavelmo decades ago. And as Aris Spanos has
recently discussed, the problems are often the result of what we say in our textbooks.
Some problems have to do with what we mean by econometric parameters and others
with how we use probabilities.

ARTICLE HISTORY
Received 4 December 2014
Accepted 26 March 2015

KEYWORDS
Econometrics; equilibrium; models; parameters; probabilities

1. Introduction
By the 1960s, most economics departments in North America began adding econometrics
to the list of required courses, particularly in their graduate programs. After all, the Federal
Reserve was hiring econometricians to help with elaborate computer models that started
becoming available. By the mid-1980s, things started turning around and the Federal
Reserve was under pressure to start laying off some of its econometrics staff. Supposedly
the motivation had nothing to do with the performance of such models but with the sheer
budgetary cost for the Federal government of the day.
The message never seemed to get back to the graduate programs, which continued to
turn out sophisticated econometrics theorists as well as applied econometricians. The
increasing availability of all kinds of social and economic data gave applied econometri-
cians a lot to work on. Today, econometric model-building in macroeconomics has
found a fruitful field to plow, particularly with the widespread use of Dynamic Stochastic
General Equilibrium (DSGE) models, despite the growing criticism of such models as well
as criticism of the extent to which equilibrium-based structural econometric models are
capable of dealing with any questions of dynamics and uncertainty.
Today, any empirical question is thought to be a subject that should be addressed with
econometrics. In Rational Econometric Man, Edward Nell and Karim Errouaki (2013)
present a welcome and timely case for the view that econometrics and econometric
model-building may not be the magic tools to solve all empirical questions despite what
many seem to have thought they were in the 1960s.

CONTACT Lawrence A. Boland boland@sfu.ca; www.sfu.ca/~boland



Some think that if there is a problem with econometrics it is with what is taught in
econometrics textbooks rather than with econometrics itself—particularly if econometric
theory is properly understood when it comes to how to deal with available data. In this
paper I will go further and argue that a main problem with econometric models (such
as the DSGE models) is that the typical structural econometric models are all equilibrium
models, which are incapable of dealing with real-world economies.

2. Marshall and Haavelmo: unread or misread


Let me begin with two pertinent quotations, a brief one from Alfred Marshall and a more
extensive passage from Trygve Haavelmo:
We may now leave the imaginary world, in which everyone owns the capital that aids him in
his work; and return to our own, where the relations of labour and capital play a great part in
the problem of distribution … . It has now become certain that the problem of distribution is
much more difficult than it was thought to be by earlier economists, and that no solution of it
which claims to be simple can be true. (Marshall 1920, pp. 426, 423)
Let us pick out a subgroup of individuals from the total group of N, such that, for each
member of this subgroup, the factors x are identically the same. When, nevertheless, the
quantities y for the members of this subgroup are different, it means that the decisions of
the individuals, even after fixing the values of x1, x2, …, xn, are still to some extent uncertain.
The individuals do not all act alike. When we assume that [a general shift] has, for each fixed
set of values of the variables x, a certain probability distribution, we accept the parameters (or
some more general properties) of these distributions as certain additional characteristics of
the theoretical model itself. These parameters (or properties) describe the structure of the
model just as much as do the systematic influences of x1, x2, …, xn upon y. Such random
elements are not merely some superficial additions ‘for statistical purposes’ … . Consider a
group of families of equal size and composition. Let r be family income, and let x be
family spending, during a certain period of time. Assume all prices constant and the same
for all families during this period. Still, among those families who have the same income,
the amount spent, x, will vary from one family to the other, because of a great many neglected
factors. Let us assume that the spending habits of an infinite population of such families could
be described by the following stochastic equation:
loge x = k loge r + k0 + ε. (k and k0 = constants) (1)
First, let us imagine that we could, somehow, remove the forces which cause the discrepancies
ε. In this hypothetical population all families with the same r would act alike, and we should
have
loge x = k loge r + k0. (2)
Secondly, let the ‘errors’ ε remain in the scheme, but consider only the average or expected
consumption for those families who have the same income r … .
E(x|r) means: Expected value of x, given r.
Therefore, what the average family in the scheme (1) does is not necessarily the same as what
the families would all do if they acted alike.
It is particularly important to be aware of the difference between these two types of relations
when we want to perform algebraic operations within stochastic equation systems. For
instance, from the theoretical scheme (2) we may derive

x = e^(k0) · r^k. (3)

But from E(loge x | r) = k loge r + k0 we do not get E(x | r) = e^(k0) · r^k. Therefore, when we perform
such operations, we must keep in mind that we are using the hypothetical ‘if-there-were-no-
errors’ scheme, and not the ‘expected-value’ scheme. Confusion on this point … arises in par-
ticular when we have a system of stochastic equations and apply algebraic elimination pro-
cesses to the corresponding ‘expected-value’ equations. The usual mistake here is that we
identify the expected values of a variable in one equation with the expected values of the
same variable in another equation. This may lead to nonsensical results. (Haavelmo 1944,
pp. 51, 57–9, original emphasis)
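Haavelmo's warning is, in modern terms, an instance of Jensen's inequality, and it is easy to
see numerically. The following minimal Python sketch simulates his family-spending example;
the values chosen for k, k0 and the error distribution are illustrative assumptions of mine,
not Haavelmo's. It shows that the ‘expected-value’ scheme E(x | r) does not coincide with the
‘if-there-were-no-errors’ value e^(k0) · r^k:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) values for Haavelmo's scheme (1): log x = k*log r + k0 + eps
k, k0 = 0.8, 0.5
r = 100.0                                # one fixed family income
eps = rng.normal(0.0, 0.5, 1_000_000)    # the 'great many neglected factors'

x = np.exp(k * np.log(r) + k0 + eps)     # spending of families with income r

no_errors = np.exp(k0) * r**k            # scheme (3): the 'if-there-were-no-errors' value
print(no_errors)                         # ~ 65.6
print(x.mean())                          # ~ 74.4: E(x|r) is larger, since E(e^eps) > 1
```

The gap is systematic, not sampling noise: removing ε and then exponentiating is simply not
the same operation as exponentiating and then averaging.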

It always seems interesting to me that microeconomics principles textbooks are mostly
based on the ‘imaginary world’ that Marshall described in Book V of his Principles of Econ-
omics, and which, in the quoted passage, he proposes to leave behind as he moves on to
Book VI. There is nothing in Book V or in the principles textbooks about the difficult
questions of income distribution or diversity in general. Marshall did discuss such pro-
blems in his Book VI, which economists tend to ignore. But Book V is almost entirely
about the necessary conditions of a long-run equilibrium, such as the condition that in
all markets every economic actor is maximizing his or her utility, and all firms are max-
imizing profits. It implicitly assumes that the number of buyers and sellers is large enough
to ensure that everyone is a price-taker. This means that firms will produce the level of
output that makes marginal cost equal to price, and that economic profits are zero.
Modern textbooks typically discuss equilibria where producers are not perfectly competi-
tive price-takers, but rarely do these textbooks point out that the resulting long-run equi-
libria are implausible since not all of the necessary conditions can be met. And
microeconomics textbooks rarely consider how the distribution patterns of preferences
and resource ownership affect economic outcomes, unlike Marshall, who addressed
such issues in his Book VI. Instead, what Economics 101 students learn about are equili-
brium models.
As to Trygve Haavelmo (1944, Section Thirteen), 70 years ago he examined the limits to
building stochastic models and showed how statistical parameter estimation can easily
yield questionable results. The primary difficulty is that if, for example, one were
merely to substitute the econometrically measured values of the parameters into the
non-stochastic equations representing, say, the demand and supply curves, one could
not legitimately treat the price P and the quantity Q as algebraic solutions to a set of
non-stochastic simultaneous equations. The reason is that the rules of algebraic manipu-
lation of any equation require that the variables have exact values, i.e., be non-stochastic.
But given unavoidable observation errors, statistics would provide only averages with a
probability of observational error over specified ranges.
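Haavelmo's complaint can be illustrated with a small Monte Carlo sketch of a demand-and-
supply system. All functional forms and parameter values below are my own illustrative
assumptions, not Haavelmo's. Solving the system algebraically with the error terms set to
their expected values of zero yields a ‘price’ that differs systematically from the average
of the prices the stochastic system actually generates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# An assumed log-linear market:
#   demand: log Q = a - b*log P + u
#   supply: log Q = c + d*log P + v
a, b, c, d = 5.0, 1.0, 1.0, 1.5
u = rng.normal(0.0, 0.4, n)
v = rng.normal(0.0, 0.4, n)

# The stochastic system solved draw by draw:
log_p = (a - c + u - v) / (b + d)

# The 'if-there-were-no-errors' solution: set u = v = 0 and manipulate algebraically
p_no_errors = np.exp((a - c) / (b + d))

print(p_no_errors)            # ~ 4.95
print(np.exp(log_p).mean())   # noticeably larger: treating the equations as exact misstates E(P)
```

The estimated parameters may be perfectly good averages, but algebra performed on the
deterministic skeleton of the model does not deliver the expected values of its variables.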
Judging by what one finds in many econometrics textbooks, one would get the idea that
the sole purpose for estimating the values of the parameters of a stochastic equilibrium-
based structural model is to be able to engage in algebraic manipulations using the
non-stochastic version of the model. In my reading of what Haavelmo was explaining
in his Section Thirteen, this is exactly what he says yields meaningless results. Perhaps
like the failure of microeconomics textbook writers to read Marshall’s Book VI, writers
of econometrics textbooks have not read Haavelmo’s Section Thirteen.

In any case, let us look at what happens when textbook writers either misread Marshall
or overlook important parts of his book, or neglect to read the whole of Haavelmo’s (1944)
monograph.

3. Obtaining Marshallian equilibria through the back door


Equilibrium is described as ‘the end of an economic process’; the story is usually told of a
group of individuals each with an ‘endowment’ of ready-made goods or of productive
capacity of some specific kind. By trading and retrading in a market, each ends up with a
selection of goods that he prefers to those that he started with. If we interpret this as an his-
torical process, it implies that, in the period of past time leading to ‘today’, equilibrium was
not established. Why are the conditions that led to a non-equilibrium position ‘today’ not
going to be present in the future? (Robinson 1974, p. 12)
As a matter of logic, Joan Robinson’s criticism of the nature and role of equilibrium in the
neo-Walrasian program cannot withstand close scrutiny. Assailing the logic of hard-core
propositions, as she did, is an exercise based on misunderstanding. As an attempt to
gather adherents to a competing research program, Robinson’s moralizing is explicable.
The ultimate issue of course is the relative progressivity of the neo-Walrasian and post-Key-
nesian research programs. What is required to judge the issue is a comparative appraisal of
those two competing research programs. It is a bit troubling that no such appraisal has as yet
been attempted. (Weintraub 1985, p. 149)

Consider again Marshall’s ‘imaginary world’. One question that might occur to some is: If,
like Marshall, we build a model within which we have assumed all market participants are
price-taking maximizers and we assume all participating firms are making zero profit,
have we implicitly assumed that all production functions are homogeneous of degree
one, i.e., exhibit constant returns to scale, at least in the vicinity of the point of equili-
brium? I am not only going to argue the affirmative but will also note that on occasion
this is done without recognizing that one is building such a model.
When building a model and assuming all the necessary conditions for an equilibrium
model are satisfied, one has in effect built an equilibrium model without explicitly claiming
so.1 Apart from the price-taker and maximization assumptions of the model, one has a
choice between making an assumption about the nature and status of the production
(or total cost functions) or making an assumption about the going level of profit.
Obviously, assuming the total (excess) profit is zero fulfills the remaining necessary con-
dition. To see that this also assures that the firm is producing at a point on the production
function where it is locally linearly homogeneous, consider the typical price-taking firm as
in Figure 1.
Textbooks note that at point A, marginal cost (MC) equals the given price since the firm
is assumed to be maximizing profit; and at that point the price equals average cost (AC)

1 Before going on, we need to call attention to the fact that textbooks are routinely misleading about necessary and suffi-
cient conditions. For an elementary example, consider the theory of the firm. We say that a necessary condition for profit
maximization is that marginal revenue equals marginal cost, so that the slope of the profit function is zero. In order to
distinguish a maximum from a minimum, it is further necessary that the second derivative of the profit function be nega-
tive at the point of optimization. Unfortunately, textbooks sometimes identify this second-order diminishing-margin con-
dition as the sufficient condition for maximization. But there is no single sufficient condition: the diminishing-margin
condition is also a necessary condition of maximization. We should say that the conjunction of the two necessary con-
ditions is the sufficient condition.

Figure 1. The equilibrium of the perfectly competitive firm


Note: MC = marginal cost; AC = average cost.

since we assume total profit is zero. But it must also be noted that at point A it is also true
that marginal cost equals average cost, which is a necessary condition for local linear
homogeneity. Instead of assuming the firm is maximizing profit whenever total profit is
zero, we could have assumed that the firm is producing where the production function is
locally linear-homogeneous, which means marginal cost equals average cost.
Any model of the price-taking firm in which it is assumed that the production function
exhibits constant returns to scale (CRS) assures that average cost equals marginal cost. For
a long time, many empirical model builders would assume for convenience that the pro-
duction function can be represented by a Cobb-Douglas production function. By defi-
nition this function has the following form where Q represents output, K represents the
level of capital (machines) used and L represents the amount of labor:

Q = L^α · K^(1−α)

No matter the value of α, this is a CRS production function and this means the firm is
producing at its point A. If it is further assumed that the firm is a price-taker and total
profit is zero, then the firm is maximizing profit whether or not maximization is assumed!
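The arithmetic behind point A is easy to verify. The following sketch, with input prices and a
value of α that are purely illustrative assumptions of mine, computes the least cost of
producing various outputs with a CRS Cobb-Douglas technology; the minimized cost turns out
to be proportional to Q, so average cost never varies with scale and therefore equals
marginal cost:

```python
from scipy.optimize import minimize_scalar

alpha, w, r = 0.3, 2.0, 1.0   # illustrative (assumed) output elasticity and input prices

def min_cost(Q):
    """Least cost of producing Q with the CRS technology Q = L**alpha * K**(1 - alpha)."""
    def cost(L):
        # once L is chosen, K is pinned down by the output constraint
        K = (Q / L**alpha) ** (1.0 / (1.0 - alpha))
        return w * L + r * K
    return minimize_scalar(cost, bounds=(1e-6, 100.0 * Q), method='bounded').fun

for Q in (1.0, 2.0, 4.0, 8.0):
    print(Q, min_cost(Q) / Q)   # the same average cost at every scale, so AC = MC
```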
While all of this is very elementary, it is interesting that one can build a model thinking
one has not assumed CRS but use the model to make an argument that holds only if CRS is
presumed. In this regard, I explicitly point to Armen Alchian’s (1950) famous article on
‘Uncertainty, Evolution and Economic Theory’. In the 1940s many articles were published
that questioned the realism of the assumption that firms are knowingly maximizing.
Alchian only questioned whether the firm was knowingly maximizing. His argument
was based on a simple matter of the survival of the fittest. If a firm survives competition
and is making zero excess profit, it must be maximizing even though that might not have
been its explicit aim. Alchian thus relies on the conjunction of long-run zero profit and
price-taking, but he must implicitly assume the production function exhibits CRS to be
able to conclude that the firm is maximizing (knowingly or otherwise).
My point is that the explicit or implicit assumption of a CRS production function
makes it far too easy to implicitly build and apply an equilibrium model. Most often
this is just what models such as the DSGE model implicitly are doing. And econometric
models that implicitly provide all of the necessary conditions for an attainment of an equi-
librium are conveniently providing the basis for the presumption that the estimated

parameters are equilibrium parameters which are ergodic (i.e., constant over time) so long
as the equilibrium status is maintained.

4. What do the parameters of econometric models represent?


Let us now consider the parameters we might find in econometric models. We all learn in
Economics 101 about the Keynesian aggregate consumption function that the econo-
metrics theorist Peter Kennedy (1985) discussed in his popular econometrics textbook:
Associated with any explanatory relationship are unknown constants, called parameters,
which tie the relevant variables into an equation. For example, the relationship between con-
sumption [C] and income [Y ] could be specified as
C = β1 + β2Y + ε
where β1 and β2 are parameters characterizing this [stochastic] consumption function. Econ-
omists are often keenly interested in learning the values of these unknown parameters. (p. 3)
We all learn to refer to the β2 in Kennedy’s equation as the ‘marginal propensity to
consume’ and understand that it has a value between 0 and 1. This parameter supposedly
says that for each dollar of income, the consumer spends the fraction β2 of it.
Typically, one would take an equation such as the consumption function, make
observations of income and consumption at numerous points in time and then use
these observations to estimate the values of the two parameters, usually by means of
ordinary least squares. A question could be raised as to whether the estimated value
of the parameter β2 still represents the psychologically given marginal propensity to
consume. For it to be even possible, it must be assumed that the marginal propensity
to consume is ergodic. I suspect that is what Keynes had in mind but, nevertheless,
it is just an assumption, and an assumption that is necessary for econometric
estimation.
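To make the textbook procedure concrete, here is a minimal sketch of that estimation with
simulated data; the ‘true’ values of β1 and β2 and the error distribution are assumptions of
mine. Ordinary least squares recovers the marginal propensity to consume here only because
the data-generating process holds it fixed, i.e., ergodic, across all observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 'observations' under an assumed ergodic marginal propensity to consume:
beta1, beta2 = 50.0, 0.8                              # 'true' parameters, constant throughout
Y = rng.uniform(200.0, 1000.0, 200)                   # observed income
C = beta1 + beta2 * Y + rng.normal(0.0, 10.0, 200)    # observed consumption

# Ordinary least squares for C = b1 + b2*Y + e:
X = np.column_stack([np.ones_like(Y), Y])
b1_hat, b2_hat = np.linalg.lstsq(X, C, rcond=None)[0]
print(b1_hat, b2_hat)   # close to 50 and 0.8 -- but only because beta2 never changed
```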
This matter of interpreting a parameter such as β2 in a behavioral equation is at least
plausible, even if it is problematic. But in many macro models, some parameters exist only
for the sake of mathematical convenience and may not plausibly represent anything. For
example, the presence of β1 in the Keynesian consumption function reflects Keynes’s
assumption that the level of consumption is related to the level of income in a linear
manner. Had he assumed the two variables were related in a non-linear manner, say quad-
ratic, another parameter such as β3 would be needed and it is difficult to think of anything
plausible that β3 could represent.

5. Economics parameters vs. physics parameters


If economics is to achieve real usefulness as a science, then we have to learn as much as we
can about the size of economic parameters … . [T]hese parameters may not be like the great
constants of the physical sciences. The constancy of parameters in economics depends on the
stability of the structure of the socio-economic system. In chemistry and physics the objective
is to measure parameters and constants, and once that has been done it is done for good. In
economics, these parameters and constants may change over time … . Keynes believed that
any parameters that might be measured in a particular study at a particular time would not
apply to the economy in the future … . In effect, Keynes is saying that even if econometrics
can turn nonexperimental data into parameter estimates, they cannot be ‘real’ parameters—a
very simple but very important point … . If there are no standard parameters in economics,
they are at best local approximations applying only to a given time and place … . Many
modern econometricians share this view and would be ready to accept that the parameters
of an economic model are not great constants, but are instead useful simplifications in
trying to make sense of the world … . As a result, the inappropriate pursuit of quantification
in economic models would reduce, rather than increase, their value in economic analysis.
(Nell and Errouaki 2013, pp. 355–356)

In Causality in Economics, John Hicks (1979, p. 39) specifically called into question what
parameters in economic models mean and observed that they are in no way the same kind
of entities as the parameters we find in the natural sciences: ‘One aspect of the difference
between the sciences and economics has … to be noted. The sciences are full of measure-
ments which, over a wide field of application, can be regarded as constants—the absolute
zero of temperature is -273° centigrade, the number of chromosomes in the human zygote
is forty-six, and so on—but there are no such constants in economics.’ Some might note
that there have always been constants if we allow for those created by fiat such as a legis-
lated price of gold. For many decades the price for the USA was fixed at $35 per
ounce. But of course, even that fixity eventually ended—hence not ergodic. In the 19th century
it was thought that business cycles were quite regular with a constant length of about
10 years. William Stanley Jevons even hypothesized that they match the frequency of
sunspot occurrences. But, it was subsequently established that their frequencies did not
actually match.
As Hicks (1979, p. 55) pointed out, economists can simply assume that some of the par-
ameters represent exogenous constants, but the parameters of an economist’s behavioral
equations do not have the ironclad constancy of the parameters encountered in the natural
sciences. Despite this, economic model-builders continue to employ econometric esti-
mation techniques that require the parameters to be ergodic.2
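The stakes are easy to exhibit. In the following sketch the ‘true’ marginal propensity to
consume drifts over time, a deliberately non-ergodic data-generating process of my own
construction. Rolling-window least squares then yields a different ‘constant’ in every
window, so each estimate is at best the kind of local approximation Nell and Errouaki
describe:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300

# An assumed non-ergodic process: the marginal propensity to consume drifts from 0.9 to 0.6
beta2 = np.linspace(0.9, 0.6, T)
Y = rng.uniform(200.0, 1000.0, T)
C = 50.0 + beta2 * Y + rng.normal(0.0, 10.0, T)

def ols_slope(y_obs, c_obs):
    X = np.column_stack([np.ones_like(y_obs), y_obs])
    return np.linalg.lstsq(X, c_obs, rcond=None)[0][1]

# Estimated 'constant' over successive 100-observation windows:
for start in (0, 100, 200):
    w = slice(start, start + 100)
    print(start, ols_slope(Y[w], C[w]))   # the estimate falls window by window
```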

6. Econometrics guided by textbooks


Haavelmo was awarded the Nobel Prize in Economics in 1989 … ‘for his clarification of the
probability theory foundations of econometrics and his analyses of simultaneous economic
structures’. A glance through the current traditional textbooks in econometrics reveals that
most of them do not mention Haavelmo, and the few that do … only cite Haavelmo
(1943) for his contribution in ‘solving’ the technical problem of least-squares bias arising
when estimating simultaneous equations … . Moreover, the textbooks that do mention Haa-
velmo do not credit him for [his clarification of the probability theory foundations of econo-
metrics], despite their widespread use of probability theory. … Therefore, to the extent that
these textbooks articulate the current conventional wisdom, his numerous methodological
ideas and insights pertaining to econometric modeling, especially Haavelmo (1944), are
largely ignored in current practice. (Spanos 2014, p. 2)

Aris Spanos has published many articles criticizing what is taught in econometrics text-
books. One major concern is that textbooks are not sufficiently concerned with the ques-
tion of whether the data to which econometricians apply their various estimation
techniques are statistically adequate (see Boland 2014, Chapter 10). In his 2008 Palgrave
article on ‘Statistics and Economics’, Aris Spanos ties much of the problem to the
2 P.A.V. Swamy (1970) has proposed a way to deal with non-ergodic parameters in econometric models, but not much has
been done with his proposal judging by today’s econometrics textbooks.

textbooks’ failure to appreciate what Haavelmo actually said in the 1944 monograph.
Spanos (2008) is particularly concerned with what he calls the Fisher-Neyman-Pearson
‘approach to statistical inference, point estimation, hypothesis testing and interval esti-
mation’ (p. 878).
According to Spanos (1989), modern textbook econometrics has evolved not from Haa-
velmo, but from the work of Jan Tinbergen, who narrowly defined the purpose of econo-
metrics to be about the quantification or measurement of theoretical relationships.
Interestingly, this is not what Ragnar Frisch, the first editor of Econometrica, thought
was the purpose of the Econometrics Society or its journal. The purpose was the advance-
ment of economic theory by connecting it to statistics and mathematics. Spanos (ibid.)
says that apart from some probabilistic language, Tinbergen’s econometric methodology
‘has little in common with the methodology in Haavelmo’s (1944) monograph, commonly
acknowledged as having founded modern econometrics’ (p. 406). Moreover, Haavelmo’s
(1944) methodology ‘includes several important elements which have either been dis-
carded or never fully integrated within the textbook approach’ (p. 405). As I noted
above, Haavelmo’s probabilities approach put into question econometric models designed
only to measure the parameters of theoretical relationships built into those models.
In this methodological perspective, with which I concur, probabilities have their place
but may not be relevant to some important economic relationships, particularly those
involving fundamental uncertainties.

7. When are probabilities relevant?


Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from
which it has never been properly separated. (Knight 1921, p. 19)
[F]acts and expectations were assumed to be given in a definite and calculable form; and
risks, of which, tho admitted, not much notice was taken, were supposed to be capable of
an exact actuarial computation. The calculus of probability … was supposed to be capable
of reducing uncertainty to the same calculable status as that of certainty itself … . (Keynes
1937, pp. 212–213)
To make statistically reliable forecasts of the future, agents need to obtain and analyze sample
data from the future. Since that is impossible, the assumption of a predetermined-ergodic-
reality permits the modeler to assert that sampling from past and present market data is
the same thing as obtaining a sample from the future. Ergodicity implies that future outcomes
are merely the statistical shadow of past and current market signals. Presuming ergodic con-
ditions reduces the modeler’s problem to explaining how and at what cost agents obtain and
process existing data (in the form of ‘price signals’). … The epistemological problem facing
every economic decision maker is to determine whether (a) the phenomena involved are cur-
rently governed by probabilities that can be presumed ergodic—at least for the relevant
future, or (b) nonergodic circumstances are involved … . It is only the latter case where
important policy decisions need to be made. (Davidson 1996, pp. 480, 501)

Early in their training, economists are introduced to probabilities. Rarely are they told
when they might not be relevant. It is common for them to be told that one can place a
probability on any uncertain future event (such as the value of the equilibrium price at
which a product will be sold five years from now), despite that being a singular event.
As Hicks (1979) pointed out:

There are two theories of probability—or it may be better to say there are two concepts of
probability, for the mathematical structures that have been raised, on the basis of the one
concept and on the other, seem largely to correspond. They are (1) the frequency theory
and (2) the axiomatic theory, to give them their usual names. It is the frequency theory
which has become orthodox; most modern works on statistical mathematics take it as
their starting point … . According to the frequency theory, probability is a property of
random experiments. ‘Whenever we say that the probability of an event with respect to an
experiment is equal to P’ we mean that ‘in a long series of repetitions of the experiment, it
is practically certain that the frequency of E will be approximately equal to P’. (pp. 105–106)

A singular event is by definition not something ‘in a long series of repetitions’—it just
happens once. I like to pose the following challenge to my students: ‘Suppose somebody
should happen to jump unaided off the CN Tower. What is the probability that he will
survive the fall?’ They usually say they do not know, to which I point out that the answer
is 0 or 1, since it is a singular event. Answering with a probability between 0 and 1 makes
no sense.
As Knight, Keynes and Davidson explained, one can attribute a probability to matters of
risk. The probability of rolling a 10 with a pair of dice can be calculated if you know combi-
natorial mathematics. But if one is asking about some uncertain singular future event, there
are no combinatorial mathematics to deal with that, nor is there any way to perform a
random experiment. Probabilities are simply irrelevant when it comes to uncertainty.
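The contrast is stark when put in computational terms: the dice case can be settled by
enumeration, while no enumeration exists for a singular event. A minimal sketch:

```python
from itertools import product

# All 36 equally likely outcomes of rolling a pair of fair dice
outcomes = list(product(range(1, 7), repeat=2))
tens = [o for o in outcomes if sum(o) == 10]   # (4, 6), (5, 5) and (6, 4)

print(len(tens), '/', len(outcomes))           # 3 / 36, i.e. 1/12
```

No comparable sample space can be written down for the one-off jump from the CN Tower,
which is exactly why a probability strictly between 0 and 1 makes no sense there.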
This raises the question of what the probabilities in structural econometric equilibrium
models represent. Even more mysterious is the issue of econometric estimation of the par-
ameters of a macroeconomic equilibrium model. One can, of course, say they are in the
model to deal with the uncertainty concerning the observations used to perform the esti-
mation procedures. Are the observation errors due to measurement problems as, perhaps,
Tinbergen’s methodology might prescribe? Or are they the result of errors made in con-
structing the model? However you interpret them for the purpose of applying the econo-
metric model to existing data, perhaps in order to construct some policy recommendation
to advise a government, one could see them as merely a matter of risk. In my view, this
may be the main virtue of constructing such econometric models because the government
agent can make a decision based on the cost and risk of making a mistake given the size of
the error term—much as one might assess the risk of betting on whether to put a particular
amount of money on the roulette ball ending on red given the known probabilities for that
possible result.
But another question should be asked about estimating the parameters of a macroeco-
nomic equilibrium model. To what extent does the estimation of the parameters depend
on the supposition that the data correspond to a macroeconomic equilibrium? Hardly
anyone seems concerned with such a question, particularly economists building DSGE
models. Usually such models not only presume equilibrium states, they also deal with
matters of an uncertain future. Given that textbook econometric estimation since Haavel-
mo’s intervention in 1944 is now explicitly based on probabilistic stochastic models, and
recognizing that such estimations are appropriate only for ergodic parameters, it is diffi-
cult to see how textbook econometric estimation techniques are relevant let alone
plausible.

Disclosure statement
No potential conflict of interest was reported by the author.

References
Alchian, A. 1950. ‘Uncertainty, Evolution and Economic Theory.’ Journal of Political Economy 58:
211–221.
Boland, L. 2014. Model Building in Economics: Its Purposes and Limitations. New York: Cambridge
University Press.
Davidson, P. 1996. ‘Reality and Economic Theory.’ Journal of Post Keynesian Economics 18: 479–
508.
Haavelmo, T. 1943. ‘The Statistical Implications of a System of Simultaneous Equations.’
Econometrica 11: 1–12.
Haavelmo, T. 1944. ‘The Probability Approach in Econometrics.’ Econometrica 12 (Supplement):
iii–115.
Hicks, J. 1979. Causality in Economics. Oxford: Basil Blackwell.
Kennedy, P. 1985. A Guide to Econometrics. 2nd ed. Oxford: Basil Blackwell.
Keynes, J. M. 1937. ‘The General Theory of Employment.’ Quarterly Journal of Economics 51:
209–223.
Knight, F. 1921. Risk, Uncertainty and Profit. Boston: Houghton Mifflin.
Marshall, A. 1920. Principles of Economics. 8th ed. London: Macmillan.
Nell, E., and K. Errouaki. 2013. Rational Econometric Man: Transforming Structural Econometrics.
Cheltenham: Edward Elgar.
Robinson, J. 1974. History Versus Equilibrium, Thames Papers in Political Economy. London:
Thames Polytechnic.
Spanos, A. 1989. ‘On Rereading Haavelmo: A Retrospective View of Econometric Modeling.’
Econometric Theory 5: 405–429.
Spanos, A. 2008. ‘Statistics and Economics.’ In The New Palgrave Dictionary of Economics. 2nd
ed. Vol. 7. London: Palgrave Macmillan.
Spanos, A. 2014. Revisiting Haavelmo’s Structural Econometrics: Bridging the Gap between Theory
and Data. http://www.econ.vt.edu/directory/spanos/spanos9.pdf.
Swamy, P. A. V. 1970. ‘Efficient Inference in a Random Coefficient Regression Model.’
Econometrica 38: 311–323.
Weintraub, E. R. 1985. ‘Joan Robinson’s Critique of Equilibrium: An Appraisal.’ American
Economic Review, Papers and Proceedings 75: 146–149.
