Documenti di Didattica
Documenti di Professioni
Documenti di Cultura
SCIENTIFIC
RESEARCH IN
ECONOMICS
The Alpha-Beta
Method
Adolfo Figueroa
Rules for Scientific Research in Economics
Adolfo Figueroa
Why has the growth of scientific knowledge in the social sciences pro-
ceeded at a rate that is slower than that of the natural sciences? The
basic reason seems to rest upon the differences in the complexity of the
reality they study. Compared to the natural sciences, the social sciences
seek to explain the functioning of the social world, which is a much
more complex world than the physical world. As biologist Edward
Wilson pointed out:
Everyone knows that the social sciences are hypercomplex. They are inher-
ently far more difficult than physics and chemistry, and as a result they, not
physics and chemistry, should be called the hard sciences (1998, p. 183).
[This] book may hold some interest for the reader who is curious about
the methodology of the social sciences…[I]n a hard, exact science [as phys-
ics] a practitioner does not really have to know much about methodology.
Indeed, even if he is a definitely misguided methodologist, the subject
itself has a self-cleansing property which renders harmless his aberrations.
By contrast, a scholar in economics who is fundamentally confused con-
cerning [methodology] may spend a lifetime shadow-boxing with reality.
These rules are scarcely used today, which is reflected in the fact that no
economic theory has been eliminated so far, and thus we observe the coex-
istence of the same economic theories (classical, neoclassical, Keynesian,
and others) over time, with the consequent lack of Darwinian competition
of theories. Scientific progress is the result of such evolutionary competi-
tion. Therefore, the book seeks to contribute to the scientific progress
of economics by proposing the use of the alpha-beta method, a method
designed for the evolutionary progress of economics.
The book is primarily addressed to students of economics at advanced
undergraduate and graduate levels. Students in the other social sciences
may also find it useful for fostering the growth of interdisciplinary
research within the social sciences. Even students of the natural
sciences may benefit from the book by learning the differences between
the rules of scientific research in their own sciences and those in the
social sciences. This understanding will prepare economists, physicists,
and biologists to work on interdisciplinary research projects, such as the
relations between economic growth and the degradation of the biophysical
environment, which is certainly one of the fundamental problems of our time.
ACKNOWLEDGMENTS
Parts of this book have been taught in economics courses at the Social
Science School and in the epistemology course in the Doctorate in
Business Administration at CENTRUM Graduate Business School, both
at Pontifical Catholic University of Peru, and at the Universities of Notre
Dame, Texas at Austin, and Wisconsin at Madison, where I have been
Visiting Professor. I would like to thank the students in these courses for
their valuable comments and questions about my proposal of the Alpha-Beta
Method.
I am also grateful to the three anonymous reviewers appointed by
Palgrave Macmillan. Their comments and suggestions on my manuscript
were very useful in making revisions and producing the book. Sarah Lawrence,
the Economics & Finance Editor of Palgrave Macmillan, has been most
helpful in guiding the book project through the review process.
I am immensely grateful to my current institution, CENTRUM
Graduate Business School, Pontifical Catholic University of Peru, and
to its Director, Fernando D’Alessio, for providing me with great support
for the preparation of this book.
CONTENTS
1 Science Is Epistemology
10 Conclusions
Bibliography
Index
Chapter 1
Science Is Epistemology
Factual science (S) is a set of relations (R) between material objects X and
material objects Y, which are established according to criterion (L).
S = {R(X, Y) / L}    (1.1)
S = {R(X, Y) / L}
L = {T(A) / B}    (1.2)
B = {T′(A′) / B′}
…
The first line of system Eq. (1.2) just repeats the definition of factual sci-
ence. The second says that criterion L is logically justified by deriving it from
the theory of knowledge T, which includes a set of assumptions A, given the
set of assumptions B that is able to justify A. The set B constitutes the meta-
assumptions, the assumptions underlying the set of assumptions A. The set
B is logically unavoidable, for the set A needs justification (e.g., why do
I assume that there is heaven? Because I assume there is God. And why do I
assume that there is God? Because…, etc.). Therefore, the set B needs a
logical justification by using another theory T′, which now contains assumptions
A′, which in turn are based on meta-assumptions B′, and so on. Hence, we
would need to determine the assumptions of the assumptions of the assump-
tions. This algorithm leads us to the logical problem of infinite regress.
The logical problem of infinite regress is a torment in science. A classical
anecdote is worth telling at this point (adapted from Hawking 1996, p. 2):
–– “What you have told us is rubbish. The world is really a flat plate
supported on the back of a giant tortoise.”
Table 1.2 displays the scientific research rules that can be logically
derived from the assumptions of Popperian epistemology. Rule (a) is
self-explanatory. Rule (b) indicates that the criterion of demarcation is
falsification. A proposition is not scientific if it is not empirically falsifiable.
A falsifiable proposition is one that can, in principle, be shown empirically false. Under
the falsification principle, the presumption is that the proposition is false
so that its testing becomes a necessity; that is, the proposition is presumed
false until proved otherwise. If the presumption were that the proposition
is true, or that it could be false, then the testing would become discre-
tionary; the proposition would be presumed true until proved otherwise.
false theories and thus to generate the progress of science. In this sense,
we may say that Popperian epistemology leads to the construction of a
critical science.
The assumptions of the Popperian epistemology are consistent with
the general principles of epistemology, established as meta-assumptions in
Table 1.1. They are clearly consistent with principles (i) and (ii), that is, the
Popperian epistemology implies rules to discover the functioning of the
real world, assuming that this real world is knowable. Referring to prin-
ciples (iii) and (iv), the Popperian epistemology proposes the logic of sci-
entific knowledge based on deductive logic and falsification as the principle
of demarcation. Therefore, regarding system Eq. (1.2) above, the scientific
rules (L) have been derived from the set of assumptions of the Popperian
epistemology (set A), for a given set of meta-assumptions (set B).
and only if, they can be reduced to an abstract process analysis. This is the
process epistemology of Georgescu-Roegen (1971, Chap. IX), which will be
summarized in this section.
Conceptually, a process refers to a series of activities carried out in the
real world, having a boundary, a purpose, and a given duration; further-
more, those activities can be repeated period after period. The farming
process of production, for example, includes many activities with a given
duration (say, a six-month season); its purpose is, say, to produce potatoes,
and it can be repeated year after year. The factory process of production
also includes many activities, but with a shorter duration, say, an hour;
its purpose is, say, to produce shirts, and it can be repeated day after day.
The process epistemology makes the following assumptions:
First, the complex real world can be ordered in the form of a process, with
given boundaries through which input–output elements cross, and a given
duration, which can be repeated period after period. This ordering is
taxonomic. Second, the complex real world thus ordered can be transformed into
a simpler, abstract world by constructing a scientific theory. This is the
principle of abstraction. By transforming the complex real world into an abstract
world by means of a scientific theory, we can reach a scientific explanation
of that complex real world.
We can explain and understand a complex real world if, and only if, it is
reducible to a simpler and abstract world in the form of an abstract process,
by means of a scientific theory, which is also falsifiable; such scientific theory
exists and can be discovered.
Abstract In this chapter, a set of rules for scientific research, which is called
the alpha-beta method, is logically derived from the composite epistemol-
ogy. This method makes the composite epistemology operational. Alpha
propositions constitute the primary set of assumptions of an economic the-
ory, by which the complex real world is transformed into a simple, abstract
world; beta propositions are logically derived from alpha and are, by
construction, empirically falsifiable. Alpha propositions are unobservable,
but beta propositions are observable. Thus, the economic theory is falsifiable through beta
propositions. Beta propositions also show the causality relations implied by
the theory: the effect of exogenous variables upon endogenous variables.
The principles of the alpha-beta method will constitute the rules for scien-
tific research in economics in later chapters.
The highly complex social world will be subject to scientific knowledge if,
first, it is reducible to an abstract process, as indicated by Georgescu-
Roegen’s epistemology, and second, if the scientific theory is falsifiable,
which comes from Popperian epistemology. As shown in the previous chapter,
the two epistemologies are not contradictory and can be combined into a
single epistemology. To make this composite epistemology operational,
this chapter derives a particular research method, containing a practical set
of rules for scientific research, which is called the alpha-beta method.
The debate about the applicability of Popperian epistemology in
economics centers on the claim that economic theories “are rarely falsifiable,” as shown in
Therefore, beta propositions are observable and refutable, and thus they
can be utilized to falsify the theory. This is consistent with the Popperian
epistemology.
Alpha propositions are chosen somewhat arbitrarily, as said earlier.
However, they are subject to some logical constraints: they must be
unobservable and non-tautological. The unobservability condition is required
because alpha propositions refer to the underlying forces in the workings
of the observed world. Furthermore, alpha propositions that are
non-tautological will be able to generate beta propositions, which are both
observable and refutable.
Unfalsifiable propositions are unobservable or, if observable, they are
tautologies in the sense given to this term in logic: propositions that are
always true. As examples of propositions that are unfalsifiable, consider
the following:
Take note that beta propositions are observable and refutable, even
though they are derived from alpha propositions, which are unobservable.
This paradox is only apparent: alpha propositions are free from tautologies;
moreover, alpha propositions specify the endogenous variables (Y)
and exogenous variables (X) of the abstract process, which are observable,
and beta propositions refer to the empirical relations between X and Y. If
beta propositions cannot be derived from a theory, this “theory” is actu-
ally not a theory; it is a tautology, useless for scientific knowledge. To take
the example shown above: the statement “people act according to their
desires” is not an alpha proposition, for no beta proposition can be
logically derived from it. It follows that the alpha-beta method rules out any
possibility of protecting scientific theories from elimination, because beta
propositions are falsifiable. This is so by logical construction.
Although subject to some logical constraints, the set of alpha proposi-
tions is established somewhat arbitrarily. However, this presents no major
problem for falsification because the theory is not given forever. On the
contrary, a theory is initially established as part of an algorithm, a
trial-and-error process, the aim of which is to reach a valid theory by
eliminating the false ones. If the initial theory fails, a new set of assumptions is
established to form a new theory, and a new abstract world is thus
constructed. If this second abstract world does not closely resemble the real
world, the theory fails and is abandoned, a new set of assumptions
is established, and so on. A valid or good theory is one that has
constructed a simple abstract world, in the form of an abstract process,
that closely resembles the complex real world.
Under the alpha-beta method, the valid theory is found by a trial-and-
error process in which we witness the funerals of some theories. The beta
propositions derived logically from the alpha propositions are observable,
falsifiable, and mortal. This is consistent with the Darwinian evolutionary
principle of scientific progress. Hence, what the set of assumptions of a
theory needs is not justification but empirical falsification: testing it
against the facts of the real world using the beta propositions.
Beta propositions thus have the following properties:
In this case, the conclusion follows logically from the premises, but it is
empirically false. Capitalism is characterized by the existence of unemploy-
ment. The reason for failure falls upon the premises, particularly, on the
assumption about the motivation of capitalists, which is proved wrong:
capitalists do not seek to maximize employment (but, say, seek to maximize
profits).
A theory fails when its abstract world is not a good approximation
of the real world, that is, when it has made the wrong assumptions about
what the essential factors of the economic process are. If, in spite of the
abstraction, the constructed simple abstract world closely resembles the
complex real world, the theory constitutes a good approximation to the real
world. The abstract world resembles the real world; accordingly, we say the
theory explains reality. This is then a valid theory.
To be sure, in the alpha-beta method, submitting a theory to the process
of falsification has the following logic. Because the theory is in principle
false (it is an abstraction of the real world!), it must be proven that it
is not false. If the theory were in principle true, there would be no need
to prove that it is, or the proof would be discretionary. Compare this
with a judiciary court, in which the individual is in principle innocent
of a crime (legal rights) and it must be proven that he or she is guilty;
the falsification principle says instead that the individual, the theory in
this case, is in principle guilty, and it must be proven that it is not.
Therefore, if the theory is found true, in spite of the expectation that it
was false, then the theory is a good one. The concept of falsification is
also similar to the idea that an honest person is one who, having had the
opportunity to commit a crime, did not do it; if the person never had the
chance, we cannot say.
From the example of the theory “Figure F is a square,” shown earlier,
it is clear that falsification through beta propositions implies that the alpha
proposition cannot be proven true; it can only be proven false. Why? This is
so because the same beta propositions could be derived from another set of
alpha propositions. It may be the case that there is no one-to-one relation
between alpha propositions and beta propositions. Alpha implies beta, but
beta may not imply alpha. If the Figure F is a square, then it follows that
the two diagonals must be equal. However, if the two d iagonals are equal,
it does not follow that Figure F is a square; it could be a rectangle.
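The asymmetry can be checked directly in a few lines: the alpha proposition (“F is a square”) implies the beta proposition (“the diagonals are equal”), but a non-square rectangle passes the same test. This sketch is illustrative only; the coordinates and the helper name are mine, not the book’s.

```python
import math

def diagonal_lengths(corners):
    """Lengths of the two diagonals of a quadrilateral given as four
    (x, y) corners listed in order."""
    a, b, c, d = corners
    return math.dist(a, c), math.dist(b, d)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
rectangle = [(0, 0), (3, 0), (3, 1), (0, 1)]

d1, d2 = diagonal_lengths(square)
print(math.isclose(d1, d2))      # True: alpha implies beta

d1, d2 = diagonal_lengths(rectangle)
print(math.isclose(d1, d2))      # True: beta holds, yet F is not a square
```

Both figures pass the equal-diagonals test, so the test can refute the alpha proposition but can never prove it.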
This simple example shows another property of the alpha-beta method.
If all beta propositions of the theory coincide with reality, the theory is
not refuted by the available facts; if at least one beta proposition fails, the
theory fails to explain the reality. If the two diagonals are not equal, it follows
that the theory fails: Figure F cannot be a square.
Consider the case in which there is a one-to-one relation between alpha
and beta propositions. Let the theory say, “People seek to kill their creditors
when repayment is unviable.” Individual B is suspected of individual
C’s death because B was a debtor of C. Suppose only one fingerprint was
found at the scene of the crime. If the fingerprint is that of B, then he is
the killer; if it is not, then he is not the killer. This is so because fingerprints
are personal. The same conclusion would follow from DNA tests. In the
previous example, by contrast, the fact of equal diagonals does not belong to the
square figure only. In the social sciences, we deal with aggregates; therefore,
there can be no “fingerprint” variables from which to draw
definite conclusions, as in the case of individual people, and the relevant example
is that of the “Figure F is a square” theory.
Logically, therefore, scientific theories in the social sciences cannot be
proven true; they can only be corroborated. To be sure, here “corroboration”
means consistency, not truth. It also means assessing how far the
theory has been able to prove its fitness to survive by standing up to tests.
How many wars has the theory survived? How far has the theory been
corroborated?
In sum, a scientific theory is a logical artifice to attain scientific
knowledge. A scientific theory allows us to construct an abstract world that
is intended to resemble the complex real world. If there is no theory,
there is no possibility of scientific knowledge. However, how accurate
is the theory’s approximation to the real world? The theory needs
empirical confrontation with reality. The prior set of assumptions needs
posterior empirical falsification. The reason behind falsification is that the
assumptions of the scientific theory were established arbitrarily (for there
is no other way). If in this confrontation theory and reality are inconsistent,
the theory fails, not reality; that is, the arbitrary selection of its
assumptions is proved wrong.
The rules for scientific research in economics derived from the compos-
ite epistemology, shown in Chap. 1, can now be restated in terms of the
alpha-beta method, as follows:
1. The rule that scientific theory is needed for explaining a complex real
world is given by constructing the set of alpha propositions.
2. The rule that falsification is the criterion of demarcation is given by the
beta propositions, derived logically from the set of alpha propositions.
α1 ⇒ β1 → [β1 ≈ b]
If β1 = b, α1 is consistent with facts and explains reality.
If β1 ≠ b, α1 does not explain reality and is refuted by facts. Then,
α2 ⇒ β2 → [β2 ≈ b]
If … (the algorithm is continued)
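This trial-and-error algorithm can be sketched as a loop over candidate theories: each alpha derives a beta proposition, which is confronted with the facts b. The theory names, the predicted signs, and the data set below are hypothetical illustrations, not the author’s examples.

```python
def falsify(theories, b):
    """Return the first theory whose beta proposition matches the facts b,
    eliminating (refuting) every candidate tried before it."""
    for name, beta in theories:
        if beta == b:
            return name                       # consistent with facts
        print(f"{name} is refuted by facts")  # the funeral of a theory
    return None                               # every candidate eliminated

# Each beta proposition is stated here as the predicted sign of an
# observed statistical association.
theories = [("alpha_1", "negative"), ("alpha_2", "positive")]
b = "positive"                 # the sign observed in data set b
print(falsify(theories, b))    # alpha_1 is refuted; alpha_2 survives
```

If every candidate fails, the loop returns nothing and a new set of alpha propositions must be established, exactly as the algorithm above continues.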
Proposition beta 1:  Y1 = F(X1, X2, X3)
Proposition beta 2:  Y2 = G(X1, X2, X3)

      X1   X2   X3
Y1    +    +    ?
Y2    −    0    +
The matrix shown in Table 2.2 may also be called the causality matrix.
Thus, an increase in the exogenous variable X1, holding the values
of the other two exogenous variables (X2 and X3) fixed, will cause an
increase in the value of the endogenous variable Y1 and a fall in Y2. Hence,
the functions F and G show the causality relations of the theory. These are
the reduced-form equations of the scientific theory.
Falsification can now be analytically defined as follows: A theory fails
if one of the signs of the matrix is different from the sign of the observed
statistical associations between the corresponding variables. This is a suf-
ficient condition to have a theory refuted by facts. It should be clear that
the cell in which the effect is undetermined cannot be used to refute the
theory.
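The sufficient condition can be written down directly: a theory is refuted if any predicted sign in its causality matrix differs from the observed sign of the corresponding statistical association, while cells marked “?” (undetermined effects) cannot refute it. The predicted signs below follow the pattern of Table 2.2; the observed matrices are hypothetical.

```python
predicted = {"Y1": ["+", "+", "?"],   # effects of X1, X2, X3 on Y1
             "Y2": ["-", "0", "+"]}   # effects of X1, X2, X3 on Y2

def refuted(predicted, observed):
    """One wrong sign suffices to refute the theory; '?' cells are skipped."""
    for var, signs in predicted.items():
        for pred, obs in zip(signs, observed[var]):
            if pred != "?" and pred != obs:
                return True
    return False

observed_ok  = {"Y1": ["+", "+", "-"], "Y2": ["-", "0", "+"]}
observed_bad = {"Y1": ["-", "+", "+"], "Y2": ["-", "0", "+"]}
print(refuted(predicted, observed_ok))    # False: the theory survives
print(refuted(predicted, observed_bad))   # True: refuted by facts
```

Note that the first observed matrix disagrees with the theory only in the undetermined cell, so it cannot refute the theory.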
In the case of falsifying several theories at the same time, given data set
b, some theories will be false and some will be consistent. Those theories
that survive the entire process of falsification will become the corroborated
theories (not the verified or true theories), whereas the theories that fail
are eliminated. The corroborated theory will reign until new information,
new statistical testing methods, or a new superior theory appears. A theory
is superior to the others if it derives the same beta propositions as the oth-
ers, but in addition derives other beta propositions that are consistent with
facts, which the other theories cannot. A theory is thus superior to others
when it can explain the same facts that the others can and some additional
facts that the others cannot.
From the alpha-beta method, it also follows that data alone cannot
explain real phenomena. Data alone—data set b—can show statisti-
cal association or correlation between empirically defined variables, but
that is not causality. Causality refers to relations between exogenous and
endogenous variables, which can only be defined by the assumptions of
a scientific theory. There is no logical route from statistical association or
correlation to theory and then to causality (no matter how sophisticated
the statistical testing is).
Consider the following usual claim: “To establish causality, let data
speak for themselves.” This statement is logically false. Facts can never
speak for themselves because there is no logical route from facts to scien-
tific theory and causality. That route would imply using inductive logic:
from a set of observations, we can discover what factors are underly-
ing the observed phenomena; that is, from facts b we go to scientific
theory α. Popperian epistemology assumes that such logic does not exist.
Popperian epistemology assumes that scientific knowledge goes from
alpha propositions to beta propositions, which are falsified against facts b
• Flows refer to the elements that either enter into the production pro-
cess (material inputs) or come out from the process (material output
and waste);
• Funds refer to those elements that enter and come out (machines
and workers).
• Natural resources, which include renewable (biological) and non-
renewable (minerals).
• Initial conditions, the initial structure of society, such as capital per
worker, technological level, assets inequality, and institutions.
The stocks of machines and men are seen as fund factors: they enter and
come out of the process maintaining their productive capacity intact in
every period. They constitute funds of services because they participate in
production by providing their services. No piece of a machine and no flesh
of a worker is expected to be found in, say, the shirts produced in a factory.
Machines and men are the agents that transform input flows (cotton
and oil energy from natural resources) into output flows (shirts).
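The flow/fund distinction can be stated operationally: flows either enter or leave the process, while funds enter and leave every period with their capacity intact. The class below and the shirt-factory entries are my illustrative sketch of this taxonomy, following the text’s example, not a construct from the book.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    enters: bool   # crosses the boundary into the process
    exits: bool    # crosses the boundary out of the process

    @property
    def kind(self) -> str:
        # Funds enter AND leave each period; flows do only one of the two.
        return "fund" if self.enters and self.exits else "flow"

factory = [
    Element("cotton", enters=True, exits=False),    # material input flow
    Element("shirts", enters=False, exits=True),    # material output flow
    Element("waste", enters=False, exits=True),     # output flow
    Element("machines", enters=True, exits=True),   # fund of services
    Element("workers", enters=True, exits=True),    # fund of services
]
for e in factory:
    print(e.name, e.kind)
```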
Table 3.1 represents the economic process under the assumptions of
the E-theory, which assumes a simple reproduction process: production of
goods is repeated at the same scale period after period. Society is endowed
with given stocks of machines, workers, and natural resources; other ini-
tial conditions include wealth inequality or power structure, institutions,
and technology. Under a simple reproduction process, total output level
must be repeated period after period. This implies that the initial stocks of
machines and workers must remain unchanged. Hence, total output is net
of depreciation costs of the machines and subsistence wages of workers.
Table 3.1  The economic process as a simple reproduction process

Entering the process                     Leaving the process

Initial stocks
  K-Machines                             K-Machines
  L-Workers                              L-Workers
  R-Renewable resources                  R-Renewable resources
  N-Non-renewable resources (minerals)   N − n Non-renewable resources
  Institutions

Flows
  m-Material inputs from renewable       Goods: Q
    resources
  n-Material inputs from mineral         Income Inequality: D
    resources                            Waste/pollution

Social factors
  δ-Power structure                      δ-Power structure
According to the nature of repetition, the economic process can take a
static, dynamic, or evolutionary form. These types correspond to different
equilibrium situations in the economic process, for what is repeated
is the equilibrium situation. As indicated earlier (Chap. 1), the concept of
equilibrium refers to the solution of the social interactions. The concept
of equilibrium can be stated as follows:
top of a bowl that is placed upside down). The assumption of stable equi-
librium makes the economic process self-regulated. Thus, a change in an
exogenous variable will imply that the endogenous variable is now out of
equilibrium and then it will move to the new equilibrium spontaneously;
hence, changes in the exogenous variables will generate causality:
quantitative changes of the endogenous variables in definite directions.
Figure 3.1, panel (a), illustrates a static process. The vertical axis mea-
sures an endogenous variable Y and the horizontal axis measures time.
Suppose there is only one exogenous variable (X1). Given the value of the
exogenous variable (say, X1 = 10), the value of the endogenous variable
remains fixed period after period at the level OA. If, for some reason, the
value of the endogenous variable is outside equilibrium, as at point “a”,
the system will tend to restore equilibrium spontaneously; that is, the
system is stable.
Now suppose the exogenous variable increases (say, to X1 = 20 ) at
period t′ and the new equilibrium is at the level B; hence, the value of Y is
now outside equilibrium, which will tend to be restored by moving spon-
taneously from point A′ to point B, because equilibrium is stable. The new
equilibrium will be repeated period after period at the level BB′, as long as
the exogenous variable remains fixed at the new level. Then we have been
able to generate a causality relation, the effect of changes in the exogenous
variable upon the endogenous variable, which in this case is positive: the
higher the value of X, the higher the value of Y.
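The static process of panel (a) can be sketched numerically: the equilibrium level of Y depends on X1, and a stable adjustment rule moves Y back toward equilibrium whenever it is displaced. The linear equilibrium function and the adjustment speed are illustrative assumptions, not the author’s.

```python
def equilibrium(x1):
    """Equilibrium level of Y for a given X1 (e.g., X1 = 10 gives level OA)."""
    return 0.5 * x1

def adjust(y, x1, speed=0.5):
    """Stable equilibrium: Y closes a fraction of the gap each period."""
    return y + speed * (equilibrium(x1) - y)

y = equilibrium(10)          # level A, the old equilibrium (Y = 5)
x1 = 20                      # the exogenous variable rises at period t'
for _ in range(30):          # Y moves spontaneously from A' toward B
    y = adjust(y, x1)
print(round(y, 4))           # 10.0: the new equilibrium level B
```

The positive causality relation appears directly: doubling X1 doubles the equilibrium value of Y, and stability guarantees that the new value is actually reached.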
In a dynamic economic process, the corresponding dynamic equilib-
rium implies a particular trajectory over time of the endogenous variables,
as long as the values of the exogenous variables remain fixed. In Table 3.1,
a dynamic process would imply an endogenous change in some of the
initial conditions. Consider an increase in the stock of machines as an outcome
of the economic process; that is, part of total net output is allocated to
increasing machines and part to conspicuous consumption. Investment in
machines is thus endogenous, and so is the stock of machines over time.
The other initial conditions are assumed to remain unchanged and consti-
tute the exogenous variables as well.
The simplest way to understand a dynamic equilibrium is as a sequence
of static equilibrium situations. Therefore, if the static equilibrium in
each period is stable, the dynamic equilibrium will be stable as well. From
any situation outside the equilibrium trajectory, there will be spontane-
ous forces that move the endogenous variable back to the equilibrium
trajectory.
[Figure 3.1  Three types of economic processes. Panel (a), static: Y stays at level A for X1 = 10 and, after period t′, moves from A′ to the new equilibrium level BB′ for X1 = 20. Panel (b), dynamic: Y follows the rising equilibrium trajectory CE for X1 = 10 and, after period t′, moves from C′ through D′ to the new trajectory DF for X1 = 20. Panel (c), evolutionary: Y follows its trajectory (for X1 = 10 and X2 = 0.5) up to the threshold value Y*, reached at period T*, where the trajectory breaks down.]
grow over time. If for some reason the value of the endogenous variable
were located at point a′, it would move spontaneously back to the equi-
librium trajectory. Now suppose the exogenous variable increases (say, to
X1 = 20 ) at period t′. The new equilibrium trajectory is given by the curve
DF, but now the initial value of the endogenous variable at point C′ is out
of equilibrium. Since the equilibrium is stable, then point C′ will move
spontaneously to the new equilibrium trajectory, curve DF. Assume that
the move is not instantaneous, but takes time; then the trajectory of transi-
tion is given by the segment C′D′, which is called transition dynamics.
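The transition dynamics of panel (b) can be sketched the same way as the static case: the equilibrium is now a growing trajectory, and a stable adjustment moves Y from the old trajectory (point C′) toward the new one (curve DF) over several periods. The growth rate and adjustment speed below are illustrative assumptions, not the author’s numbers.

```python
def trajectory(x1, t):
    """Dynamic equilibrium: the level grows 2% per period for a given X1."""
    return 0.5 * x1 * (1.02 ** t)

t_prime = 10                       # period at which X1 rises from 10 to 20
y = trajectory(10, t_prime)        # point C': still on the old trajectory
for t in range(t_prime, t_prime + 40):
    y += 0.5 * (trajectory(20, t + 1) - y)   # transition dynamics C'D'

t_end = t_prime + 40
# After the transition, Y ends up far closer to the new trajectory DF
# than to the old one:
print(abs(y - trajectory(20, t_end)) < abs(y - trajectory(10, t_end)))  # True
```

Because the equilibrium target keeps growing, the adjustment never quite closes the gap in finite time, but stability ensures Y tracks the new trajectory, which is all the causality relation requires.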
In the dynamic system, as we can see, a change in the exogenous vari-
able will have the effect of shifting the equilibrium trajectory of the endog-
enous variable to another level, along which it will continue to change
over time. The causality relation is thus established. As in the static system,
the effects of production upon depletion and pollution are just ignored.
Therefore, both static and dynamic economic processes, which include
the assumption of stable equilibrium, generate causality relations. The
effects of changes in the exogenous variables upon the endogenous variables of
the theory are known; these are the beta propositions of the theory. When
applied to economics, it is a property of the alpha-beta method that beta
propositions show causality relations in static and dynamic processes.
The endogenous variables in a static or dynamic process can be repeated
forever. There are no limits to the repetition. They can then be called
mechanical processes. It is clear that a mechanical economic process ignores
the problem of eventual depletion of non-renewable natural resources and
also the effect of waste/pollution on the biophysical environment of the
Earth, which is the home of the human species.
Consider now a non-mechanical economic process. If the economic
process is viewed as subject to qualitative changes as it is repeated, then
we have an evolutionary economic process. In this case, the assumption is
that, as the economic process is repeated period after period, qualitative
changes will also take place in it, which will eventually set limits to the
repetition of the endogenous variables; a threshold value of the endogenous
variable will then exist. Before the threshold value, the
endogenous variable moves along a particular trajectory, for given values
of the exogenous variables, as in a dynamic equilibrium; once the thresh-
old value is reached, the trajectory breaks down. There will be a change in
the process itself. This change is called regime switching in the economic
process. A new exogenous variable (an innovation) will appear and a new
set of relations within the process will appear, leading to a new trajectory
to repetition will occur sooner or later. When the limit is reached, there
is a regime switching toward a new process, qualitatively different, which
will also be repeated for finite periods, and so on. An evolutionary process is
thus a well-defined process, in which qualitative changes occur over time.
In fact, it is through an economic theory that assumes an evolutionary process
that economics can explain social changes, qualitative changes in human
societies.
[Figure  Panel (a), deterministic: the relation Y = F(X) maps the values X1, X2, X3 of the exogenous variable into the equilibrium values Y1, Y2, Y3. Panel (b), stochastic: the relation Y = G(X) maps X1, X2 into the mean values Y1, Y2 of the endogenous variable, with variations around each mean.]
repeated period after period, with variations around the mean (the points
of different size indicate the distribution of those variations). If X = X2,
then the mean value of Y will be higher, which will be repeated period after
period, with variations around it. Therefore, there is a positive causal
relation between the mean values of Y and the values of X, that is, Y = G(X).
Dynamic economic processes can also be assumed to be deterministic or
stochastic. The deterministic process implies a function in which the
equilibrium value of the endogenous variable Y depends upon the value of the
exogenous variable X and, for a given value of X, upon the passage of time
t; that is, the causality relation is Y = F(X, t). For a stochastic
dynamic process, the causality relation is between the equilibrium values of
the endogenous variable, now represented by the mean value, and the values
of the exogenous variable and time t, that is, Y = G(X, t).
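The deterministic/stochastic distinction can be sketched numerically: in the deterministic case the equilibrium value of Y is an exact function of X, while in the stochastic case Y varies around a mean that depends on X. The functional forms and the noise level below are illustrative assumptions.

```python
import random

def F(x):
    """Deterministic relation: an exact equilibrium value."""
    return 2.0 * x

def G(x, rng):
    """Stochastic relation: Y varies around the mean value 2X."""
    return 2.0 * x + rng.gauss(0.0, 0.1)

rng = random.Random(0)
draws = [G(10, rng) for _ in range(10_000)]
mean_y = sum(draws) / len(draws)

print(F(10))                        # 20.0, exactly, every period
print(abs(mean_y - 20.0) < 0.05)    # True: the mean tracks the causal part
```

In both cases the causality relation is the same systematic mapping from X to Y; the stochastic process merely adds variations around the mean, which is why falsification there must compare mean values rather than exact values.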
Evolutionary economic processes have a temporal dynamic segment,
before the regime switching point has been attained. Therefore, an evolu-
tionary economic process can also be assumed as deterministic or stochas-
tic, depending on whether the temporal dynamic segment of the process
is deterministic or stochastic. The causality relation in the first case would
be Y = F(X, T), where T < T*, whereas in the stochastic form, the mean
that the alpha-beta method is, in principle, suitable for the construction
and growth of scientific knowledge in economics.
A note on empirical facts seems necessary. The alpha-beta method
implies that facts in economics refer to the actual behavior of people, to
what people do (observable). To be sure, facts cannot refer to what people
say about what they do, as in surveys. People’s answers on surveys need
not reflect their true behavior. Facts cannot refer to controlled experiments either. Experimental economics uses laboratory methods, in which data
are collected from people placed in a human laboratory. This method is
similar to the Skinner box that is used to study animal behavior. There is
here an implicit theory, which assumes that the behavior of people in the
lab (an artificial world) is the same as in the real world, that the behavior
of people inside the “Skinner box” is the same outside the box, as in the
case of animal behavior. Therefore, there is the need to test this theory in
the first place, which is unviable, for it will be based on opinions, not on
behavior alone.
Economics is still a non-experimental science. Therefore, facts must
come from observations of the real world, from “natural experiments.”
In this regard, economics is very much like astronomy. These very impor-
tant issues of measurement in economics are discussed in more detail in
Chaps. 6, 7, and 9.
How does the alpha-beta method work in economics? Some particular
traits of the alpha-beta method, when applied to economics, will be shown
in this chapter.
α0 (1). The scarcity postulate. Societies face the economic problem: whereas
the quantity of goods desired in society is unlimited, the maximum flow
of goods that society can produce is limited. The latter assumes that,
given the technological knowledge in society and its factor endowments
(machines, workers, and natural resources), the maximum flow of goods
per unit of time that society can produce is also given and the total out-
put that can ever be produced with the given stock of non-renewable
resources is given as well.
α0 (2). The institutional postulate. In order to solve the economic problem,
societies seek to establish a particular institution, that is, a set of rules
and organizations, which regulate the social relations in the economic
process of production and distribution.
α0 (3). The rationality postulate. Social actors act guided by their motiva-
tions, which are shaped by the institutions of the society in which they
live. Social actors have means to seek their objectives. The assumption
of rationality means that there is consistency between objectives and
means in people’s behavior.
of the democratic political system. Hence, markets and democracy are the
basic institutions that characterize capitalism. In such an institutional context,
it will be rational for social actors to act guided by the motivation of ego-
ism and self-interest, which will dominate the motivation of altruism.
There are several economic theories that seek to explain the capitalist
society. The most important ones are the neoclassical theory, the classical
theory, and the effective demand theory. Regarding the abstract process
diagram, they differ in their assumptions about the exogenous variables
and the mechanisms that explain production and distribution. These theo-
ries also differ in the assumptions about the endogenous variables or out-
comes of the economic process. Recently the theory of bio-economics has
been developed, which assumes that the biophysical degradation is part of
the outcome of the economic process (as shown in Table 3.1, Chap. 3).
The comparative analysis of these economic theories is beyond the scope
of this book.
Economics is a social science in two senses. First, its objective is the study
of human societies; second, its theoretical propositions apply to aggregates,
not to individuals. The latter sense needs some elaboration.
Because economics studies complex realities, it must use the abstraction
method. This implies making assumptions on which elements of the eco-
nomic process are important and which are not (and may thus be ignored).
Therefore, economic theory cannot explain every individual case, but only
the general features of reality. Due to the use of abstraction in the genera-
tion of a theory, the empirical test must be statistical, that is, about the
relations between averages of the endogenous and exogenous variables.
For example, an economic theory may explain the general behavior of
a group of capitalist countries, but not necessarily the behavior of every
country. Similarly, an economic theory may explain the general behav-
ior of a group of investors but not that of every individual investor. The
observation that person X smokes but does not have cancer does not
refute the theory that predicts smoking causes cancer, which in general
may be empirically consistent. Likewise, the observation that an individual
with primary schooling makes more income than does another individ-
ual with graduate studies does not refute the theory that more schooling
causes higher incomes, which in general may be empirically consistent. It is, therefore, entirely possible for there to be societies, markets, or social actors that are exceptions to the predictions of the theory. In this case, we will have realities without theory.
If a single social actor is important in the entire economic process (the
government of a country or the monopolist in a particular market), eco-
nomics may not be able to explain that single behavior. To put a theory
about this behavior to a statistical test, many governments and many
monopolists would have to be observed, but, all the same, there might be
some governments or monopolists whose behavior does not correspond
to that of the theory. Hence, it is logically possible to have social actors
that are exceptions to the theory and we can then write
It may also happen that there is more than one economic theory that
explains a social reality. The relationships between alpha and beta propositions
When the alpha propositions of a theory are too general, beta propositions
can hardly be derived from them. For example, the alpha proposition
“entrepreneurs seek to maximize profits” is too general to generate beta
propositions. The behavior of entrepreneurs will more likely depend upon
the market structure in which they operate; that is, the behavior will be
different under monopoly compared to perfect competition because in
the first case the entrepreneur will be price-maker, whereas in the lat-
ter he or she will be price-taker (market price is exogenous). In order to
derive a beta proposition from an economic theory, a specific social situation is needed, in which the social context and constraints under which
social actors operate must be determined. The term “social situation”
comes from Popper (1976, 1985). Whether the process can be seen as a
static, dynamic, or evolutionary process is also part of the social situation
definition.
When the particular social situation in which the economic process
takes place is unobservable (e.g., market power might be unobservable),
the only way to take into account that context in the economic theory is
by introducing assumptions about it. In this case, additional assumptions,
called auxiliary assumptions, are necessary to construct abstract processes
that make the theory operational, that is, able to generate beta propositions.
The term “auxiliary” implies that the assumptions of the economic theory
are primary assumptions. The set of auxiliary assumptions that define a
particular social context gives rise to a model of the scientific theory. A theo-
retical model then includes these two subsets of assumptions, such that the
auxiliary assumptions do not contradict the primary assumptions. A model
is thus a logical system.
Because there are different types of social context, and each context is
constructed using a set of auxiliary assumptions, then for each social con-
text there will exist a particular model, and the set of all possible models
comprise the theory. Thus, an economic theory can be seen as a family of
models. The set of alpha propositions constitutes the core of the family.
Beta propositions can now be derived from each model by deductive logic.
The theory will now be subject to the process of falsification through the
beta propositions of its models. The property of beta propositions shown
above (Chap. 2) also applies here: a beta proposition of the model represents the reduced-form relations of the process, the relations between the endogenous and exogenous variables.
The alpha-beta method now operates as follows. Let α1 and α2 repre-
sent two models of theory α. The falsification algorithm is now applied
to the models. If the beta propositions of the first model are not refuted
by empirical data, the model can be accepted, and then the theory can be
accepted; if they are refuted by empirical data, the model is rejected, but
the theory is not because there is a second model to be put into a test. For
the theory to be rejected, all models of the family must fail. If all models
fail, the theory fails, it is eliminated, and a new theory is needed. This
algorithm requires that the number of models be finite; that is, it requires
that the theory should generate only a limited number of models.
Table 4.1 shows the falsification algorithm of the alpha-beta method
applied to economic theory α, having n models. Given theory α, a set of
α − (A′) → α′ ⇒ β′ → [β′ ≈ b]
If β′ = b, then model α′ explains reality and so does α.
If β′ ≠ b, then model α′ fails to explain reality; then
α − (A″) → α″ ⇒ β″ → [β″ ≈ b]
If … (the algorithm is followed)
α − (Aⁿ) → αⁿ ⇒ βⁿ → [βⁿ ≈ b]
If βⁿ ≠ b, then model αⁿ fails to explain reality, and so does α. Then, a new theory is constructed and the algorithm continues.
the long run and then to the very-long-run models. Hence, the mechanical
and evolutionary processes can also be seen as models of different runs.
The most important types of abstract economic processes that an eco-
nomic theory may take include the following categories:
Categories (b) and (c) come from the previous sections of this chapter.
Category (a) comes from static, dynamic, and evolutionary processes
(Fig. 3.1, Chap. 3). A theoretical model results from selecting one type
of process from each category. For example, the combination short run,
partial equilibrium, and market structure of perfect competition constitute
a model of a given economic theory. From this example, it follows that
the combination of types of processes leads to a finite number of models.
However, they are inconsistent with each other; that is, they both cannot
be true. The world of quantum physics operates with disorder, whereas
the world of relativity with order. How could the order in the large bod-
ies be the result of disorder in the small bodies? The two good partial
theories of physics do not lead to a good unified theory (more on this in
Chap. 9 below).
In order to explain the real social world, an economic theory must
constitute a logical system, and thus must lead to unity of knowledge, to
a unified theory. A single social reality must have unity of knowledge to be
understood. For example, in the neoclassical theory, partial equilibrium
models assume that labor markets are Walrasian and thus predict equi-
librium with full employment (microeconomics textbooks), but general
equilibrium models (macroeconomics textbooks) predict that equilibrium
is with unemployment. Both cannot be true and the neoclassical theory
cannot explain the observed unemployment in the real world.
Another example. A feature of the capitalist system is the coexistence
and persistence of few rich and many poor countries, usually called the
First World and the Third World. After 200 years of capitalism and its
continuous globalization, this persistent inequality is a paradox. To solve
this paradox, we need partial theories able to explain the First World and
the Third World taken separately, and then a unified growth theory that
should be able to explain the capitalist system taken as a whole. A proposal
of such unified theory can be found in Figueroa (2015).
Therefore, unity of knowledge is one of the fundamental epistemo-
logical requirements of science. Incoherent and fragmentary knowledge
does not imply scientific knowledge. A unified theory does. The alpha-
beta method, by construction, ensures that objective in economics. Thus,
the set of auxiliary assumptions must be consistent with the set of primary assumptions of the theory; the set of assumptions of partial equilibrium models must be consistent with that of the general equilibrium model, and the
set of assumptions of a short-run model must be consistent with that of a
long-run model.
In sum, this chapter has shown the particular features of the alpha-beta
method when applied in economics. The rules of scientific research in
this field have thus been established. The next two chapters are devoted to solving operational problems in statistical testing when dealing with the
falsification of economic theories under the alpha-beta method.
Chapter 5
of testing hypotheses without scientific theory. They will deal with the
logic of the underlying statistical testing methods and their consistency
with the alpha-beta method, including the epistemological problems that
may arise.
(a) About the distribution of the variable under study in the parent popu-
lation from which samples are drawn: It assumes that the variable has
a normal distribution in the parent population.
(b) About the mechanism to select the samples: It assumes that samples
are drawn by random mechanisms.
Total income and the number of households are common to both distri-
butions. However, the distribution B is normal or symmetric, whereas C
is non-normal or asymmetric. The distribution of a variable in the parent
population is usually unknown. It is often unobservable. We estimate the
values of these variables from samples drawn from the population. But
then we need to make assumptions about the distribution of the parent
population to derive the properties of the sample values.
Consider firstly the distribution B shown above. This corresponds to
the parent population, which can also be represented in the form of a
frequency distribution, as shown in Table 5.1. The variable household
income has a normal distribution or symmetric distribution, as shown in
the second column. The mean is equal to 280 / 8 = 35 and standard devia-
tion (S.D.) is 15.
From the parent population distribution we can derive the set of all
possible sample values of size n under the assumption that the mechanism
of selection is random. For the sake of simplicity consider n = 2 . Then the
set of all possible sample outcomes will be equal to 8² = 64. Suppose that
Table 5.2 Distribution of sample means for n = 2 drawn from population B

Mean income   Frequency   Probability
10            1           1/64
15            2           2/64
20            5           5/64
25            8           8/64
30            10          10/64
35            12          12/64
40            10          10/64
45            8           8/64
50            5           5/64
55            2           2/64
60            1           1/64
Total         64          1
Mean 35
S.D. 10.61
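The full enumeration behind Table 5.2 can be computed directly. Table 5.1 is not reproduced in this excerpt, so the population values below are an assumption, chosen to match its summary statistics (eight incomes summing to 280, mean 35, S.D. 15); they reproduce the frequencies of Table 5.2:

```python
from itertools import product
from collections import Counter
import math

# Population B: eight household incomes summing to 280 (mean 35, S.D. 15).
# These values are an assumption consistent with the text's summary figures.
population_B = [10, 20, 30, 30, 40, 40, 50, 60]

# All 8^2 = 64 ordered samples of size n = 2 (drawn with replacement)
means = [(a + b) / 2 for a, b in product(population_B, repeat=2)]

distribution = Counter(means)          # frequency of each sample mean
grand_mean = sum(means) / len(means)   # mean of the sampling distribution
sd = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / len(means))

print(sorted(distribution.items()))    # the frequencies of Table 5.2
print(grand_mean)                      # 35.0 = the population mean
print(round(sd, 2))                    # 10.61 = 15 / sqrt(2)
```

The computed standard deviation of the sample means equals σ/√n of Theorem 5.1, as the table reports.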
Theorem 5.1 If
(a) Variable Y has in the parent population a normal distribution, with
mean μ and standard deviation σ, usually expressed as Y ~ N ( µ , σ ) ;
(b) Samples of size n are drawn by a random mechanism;
Then
The distribution of the sample mean Ȳ also has a normal distribution, with mean μ and standard deviation equal to σ/√n; that is, Ȳ ~ N(μ, σ/√n).
Given the particular assumptions (a) and (b), there is a particular rela-
tion between sample and parent population values. The mean of the sample
distribution of means will be equal to the mean of the parent population.
The larger the sample size is, the more accurate the sample estimate of the
population mean will be, because the standard error will become smaller.
An implication of Theorem 5.1 is that we can define the statistic z as
follows
z = (m − μ)/(σ/√n), such that z ~ N(0, 1)   (5.1)

where m is the sample mean. The random variable z has a normal distribution with mean equal to zero and standard deviation equal to one, and is known as the standardized normal distribution (because its parameters are 0 and 1).
Equation (5.1) can then be utilized to accept or reject the hypoth-
esis that we select to accept or reject statistically, which is called the null
hypothesis. The logic of using the null hypothesis in statistical testing is
the following: From the operational point of view, it is easier to accept or
reject the null hypothesis, which also implies rejecting or accepting the
alternative hypothesis. This is a logical artifice. For example, if we were
interested in testing whether a coin is unfair, it would be much easier
to test that it is fair (the null hypothesis, for which we have the random
distribution); if the null hypothesis is rejected (accepted), then we accept
(reject) the alternative hypothesis that it is unfair.
Let the null hypothesis be μ = 0 and the alternative hypothesis be μ ≠ 0. Given the criterion that odds of 1:20 (p-value = 0.05) mark the deviation attributable to pure chance, it can be shown that the confidence interval for z lies between −1.96 and 1.96, that is, approximately two standard deviations around the mean. If the observed value of m lies beyond these threshold values, we reject the null hypothesis. Why?
The observed value of m is too far from zero to be attributed to chance
alone; that is, the probability of pure chance is too low (p-value < 0.05 )
and there must exist something else originating this outcome; thus, we
accept the alternative hypothesis. If the observed value of m lies within
the threshold values, we accept the null hypothesis and thus reject the
alternative hypothesis. Why? The observed value of m is not too far from
zero and can be attributed to chance alone; that is, the probability of pure
chance is high (p-value > 0.05 ).
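The decision rule based on Eq. (5.1) can be sketched as follows; the sample figures are invented for illustration:

```python
import math

def z_test(m, mu0, sigma, n, critical=1.96):
    """Two-tailed z test of H0: mu = mu0 at the 5% level, per Eq. (5.1)."""
    z = (m - mu0) / (sigma / math.sqrt(n))
    return z, abs(z) > critical   # True => reject H0, accept the alternative

# Illustrative numbers: sample mean 0.9, H0: mu = 0, sigma = 2, n = 25
z, reject = z_test(0.9, 0.0, 2.0, 25)
print(round(z, 2), reject)   # 2.25 True: too far from zero to be chance
```

With a sample mean of 0.2 instead, z = 0.5 falls within the thresholds and the null hypothesis is accepted.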
There are several qualifications to the results presented here. Equation (5.1)
has a correction when the population is finite; also it becomes t-distribution
when the sample size is smaller than 30. Under some conditions the observed
standard deviation from the sample can substitute the unknown standard
deviation of the population. The level of significance is often better applied to one-tailed testing only. Those problems are dealt with in standard textbooks of
statistical methods. However, what was important here was to show the logic
underlying the parametric statistical testing.
Table 5.3 Frequency distribution of income in the population C

Income   Frequency   Total income
10       2           20
15       2           30
30       2           60
50       1           50
120      1           120
Total    8           280
Mean 35
S.D. 34.55
Table 5.4 Distribution of sample means for n = 2 drawn from population C

Sample mean   Frequency   Probability
10.0          4           0.06
12.5          8           0.13
15.0          4           0.06
20.0          8           0.13
22.5          8           0.13
30.0          8           0.13
32.5          4           0.06
40.0          4           0.06
50.0          1           0.02
65.0          4           0.06
67.5          4           0.06
75.0          4           0.06
85.0          2           0.03
120.0         1           0.02
Total         64          1.00
Mean 35.00
S.D. 24.43
As the sample size increases, the distribution of sample means not only
reduces its standard deviation, but the sample distribution becomes more
symmetric, and around n = 30 , it becomes almost symmetric. Then we can
state the second fundamental theorem of statistical inference as follows:
Then
The distribution of the sample mean W̄ has approximately a normal distribution, with mean equal to the mean of the parent population and standard deviation equal to the standard deviation of the parent population divided by the square root of the sample size n; that is, W̄ ~ N(ε, γ/√n).
Given the particular assumptions (a) and (b) of Theorem 5.2, the
relations between the sample values and the corresponding values of the
parent population are similar to those shown in Theorem 5.1. Equation
(5.1) also applies in this case, but provided the sample size n is sufficiently
large ( n > 30 ) .
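Theorem 5.2 can be illustrated with the skewed population C of Table 5.3. The simulation below is a sketch (the seed and the number of draws are arbitrary choices): sample means for n = 30 center on the population mean with standard error γ/√n even though the population is asymmetric:

```python
import random, math

random.seed(42)

# Population C from Table 5.3: mean 35, S.D. 34.55, clearly asymmetric
population_C = [10, 10, 15, 15, 30, 30, 50, 120]

def sample_means(n, draws=20_000):
    """Means of `draws` random samples of size n, drawn with replacement."""
    return [sum(random.choices(population_C, k=n)) / n for _ in range(draws)]

means = sample_means(30)
m = sum(means) / len(means)
sd = math.sqrt(sum((x - m) ** 2 for x in means) / len(means))

print(round(m, 1))    # close to the population mean 35
print(round(sd, 1))   # close to 34.55 / sqrt(30), about 6.3
```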
Theorem 5.3 If
(a) Variable Y has normal distribution in each of two parent populations;
that is Y1 ~ N ( µ1 ,σ1 ) and Y2 ~ N ( µ2 ,σ2 ) ;
(b) Samples are independent from each other and are drawn from each
population with sizes n1 and n2 by a random mechanism;
Then
The variable ( Y1 − Y2 ) , the difference between the means, is a random
variable that has a normal distribution with the mean equal to the
difference of the two population means and the standard deviation
equal to the square root of the sum of the variances of the two parent
populations each divided by its corresponding sample size, that is
(Ȳ₁ − Ȳ₂) ~ N(μ₁ − μ₂, √(σ₁²/n₁ + σ₂²/n₂))
The assumption “independent sampling” refers to the requirement that
the selection of one sample is not affected by the selection of the other.
The theorem is not true if the samples refer, for example, to “before and
after” situations. The implication of the theorem is that we can redefine
the statistic z of Eq. (5.1) and apply it for testing the null hypothesis that
the difference is zero.
A beta proposition of a model of an economic theory needs to be trans-
formed into a statistically testable hypothesis; thus, the beta proposition
may be called the beta hypothesis or simply β-hypothesis. This is to distin-
guish it from an empirical hypothesis that has no theory, which will be called
H-hypothesis (to be discussed later on, Chap. 7). Now suppose the beta
proposition states that the means of both populations are unequal. Then
the null hypothesis can be stated as ( µ1 − µ 2 ) = 0 , whereas the alternative
hypothesis is now the β-hypothesis ( µ1 − µ 2 ) ≠ 0 . If the observed sample
means difference is too far from zero to be attributed to pure chance, that is, if the probability that this deviation is due to pure chance is too low (p-value < 5%), then the null hypothesis is rejected and the β-hypothesis is accepted;
hence, the model is accepted, so is the theory. If in contrast the observed
sample means difference is not too far from zero so that the difference can
be attributed to pure chance (p-value > 5% ), then the null hypothesis is
accepted, and the β-hypothesis is rejected, which implies that the model is
rejected, but the theory is not, for another model can be constructed and
submitted to the statistical test. Remember that the number of models of an
economic theory is finite.
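Theorem 5.3 turns this into a computable test. The two samples below (means, standard deviations, and sizes) are invented for illustration:

```python
import math

def two_sample_z(m1, m2, s1, s2, n1, n2, critical=1.96):
    """z test of H0: mu1 - mu2 = 0 against the beta-hypothesis mu1 != mu2,
    using the standard error from Theorem 5.3."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    z = (m1 - m2) / se
    return z, abs(z) > critical   # True => accept the beta-hypothesis

# Illustrative samples: means 40 and 35, S.D.s 10 and 12, sizes 100 and 80
z, accept_beta = two_sample_z(40, 35, 10, 12, 100, 80)
print(round(z, 2), accept_beta)   # 2.99 True
```

Had the sample means been 36 and 35 instead, z ≈ 0.6 and the null hypothesis of equal means would be accepted.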
It could happen that the β-hypothesis states that the means of both pop-
ulations are equal. In this case, the null hypothesis is also the β-hypothesis
and accepting (rejecting) the null hypothesis also implies accepting (reject-
ing) the β-hypothesis.
Chapter 6
β: Y = F(X)  [predicted sign of X: +]   (6.1)

β-hypothesis:
Y = F(X) = μy/x = β₀ + βX, β > 0   (6.2)
This equation is just the representation of the general beta proposition Eq.
(6.1) now by a linear equation, in which the sign ( + ) of the coefficient of
X indicates the prediction of the static model. Moreover, the variable X has
fixed values and the variable Y is stochastic; hence, Eq. (6.2) can be seen as
the representation of the mean value of Y for each value of X in the parent
population, indicated by the conditional mean μy/x. Furthermore, for each
value of X, the variable Y has a normal distribution with a mean value that
Y = b₀ + bX + e   (6.3)
[Fig. 6.2: the regression line, showing for an observation (Xj, Yj) the decomposition (Yj − Ȳ) = (Yj − Ŷj) + (Ŷj − Ȳ)]
β-hypothesis:
Y = F(X₁, X₂) = β₀ + β₁X₁ + β₂X₂, β₁ > 0, β₂ < 0   (6.4)

Sample estimate:
Y = b₀ + b₁X₁ + b₂X₂ + e   (6.5)
Equation (6.4) indicates the additional assumption made to test the theo-
retical model: the causality relation is linear in the parent population. The
directions of causality derived from the theory show that the effect of
changes in the value of X1 (holding constant the value of the other exog-
enous variable) is positive, whereas the effect of changes in the value of X2
(holding constant the value of the other exogenous variable) is negative.
Equation (6.5) represents the sample estimate of the regression coeffi-
cients from a random sample drawn from the parent population.
We could follow the algorithm and represent a causality equation with
three or more exogenous variables. But we should remember that a theo-
retical model must have few exogenous variables; otherwise it would not
represent an abstract world. A theoretical model with many exogenous
variables is useless to understand the real world (as a map to the scale 1:1).
We can hardly understand the mechanisms by which so many exogenous
variables generate the values of the endogenous variables. If the real world
cannot be explained with few variables, then we have to admit that that
particular reality is unexplainable; it may be unknowable, which contra-
dicts one of the meta-assumptions of the theory of knowledge (Table 1.1,
Chap. 1).
The causality relation between sample and parent population distribu-
tions in regression analysis can be established by the following theorem:
Theorem 6.1 If
(a) There exists a linear relation in the parent population between Y and
X, where the endogenous variable Y has a normal distribution, with
conditional mean and constant standard deviation σ for all values of
the exogenous variable (homoscedasticity assumption);
(b) The sample of size n is drawn by a random mechanism;
(c) The regression coefficients from the sample, as in Eq. (6.5), are esti-
mated by the method of least squares;
Then
Each regression coefficient bj is a random variable that has a normal
distribution with mean βj and standard deviation that depends upon the standard deviation of the population (σ) corrected by a factor that depends on the sample variability of the exogenous variable Xj and the sample size, that is,
bj ~ N(βj, σ/√Sxxj), where Sxxj ≡ ∑Xj² − (∑Xj)²/n
The term Sxxj measures the total squared variation of Xj around its sample mean; hence, it is related (but not equal) to the sample variance of Xj.
An implication of this theorem is that we can define the statistic t as
follows
t = (bj − βj)/(s/√Sxxj)   (6.6)
If the overall test result is that both regression coefficients are posi-
tive and negative, respectively, and are statistically different from zero,
then β-hypothesis is accepted and thus the theoretical model is accepted;
otherwise, the theoretical model is rejected. It is sufficient for the model
to fail in one of its predictions (one regression coefficient) to be rejected.
The matrix of causality shown in Table 2.2, Chap. 2, illustrated this rule.
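The test of the predicted signs of Eq. (6.4) through the sample estimate of Eq. (6.5) can be sketched as follows. The data-generating coefficients and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Invented data consistent with the beta-hypothesis of Eq. (6.4):
# Y responds positively to X1 and negatively to X2 (coefficients assumed).
X1 = rng.uniform(0, 10, n)
X2 = rng.uniform(0, 10, n)
Y = 5.0 + 2.0 * X1 - 1.5 * X2 + rng.normal(0, 2.0, n)

# Least-squares estimate of Eq. (6.5): Y = b0 + b1*X1 + b2*X2 + e
X = np.column_stack([np.ones(n), X1, X2])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)

# t statistics for each coefficient (Eq. 6.6 with beta_j = 0)
resid = Y - X @ b
s2 = resid @ resid / (n - X.shape[1])          # residual variance
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
t = b / se

# The model is accepted only if b1 > 0, b2 < 0, and both are significant;
# failing one prediction is sufficient to reject it.
accept = bool(b[1] > 0 and t[1] > 1.96 and b[2] < 0 and t[2] < -1.96)
print(np.round(b, 2), np.round(t, 1), accept)
```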
If the model fails, what can be said about the theory? We cannot con-
clude that the theory fails, as shown in Chap. 4. We have to test the other
models of the theory. If all models fail, then we can conclude that the
theory fails. Of course, if no models were needed to derive beta proposi-
tions from the theory, then there would be just one β-hypothesis to test,
the results of which will allow us either to accept or to reject the theory.
As shown above, the assumptions of regression analysis include that
the relation of the variables involved in the parent population is linear.
Rejection of the model implies rejecting the existence of a linear beta
proposition only. The beta proposition in the parent population could be
non-linear.
How can we test a non-linear relation with regression analysis? If the
beta proposition is monotonically increasing or decreasing, then a non-
linear relation can be transformed into linear relations of the logarith-
mic values of the variables; hence, all the properties of regression analysis
shown above will also apply to the linear regression using logarithmic val-
ues. The beta proposition shown in Eq. (6.2) can then take the following
β-hypothesis form:
β‐hypothesis:
Y = F(X) = AX^β   (6.2a)
log Y = log A + β log X   (6.2b)
Y′ = F(X′) = μy′/x′ = A′ + βX′   (6.2c)
Equation (6.2c) is still linear (where the primes indicate logarithms). The
advantage is that with this mathematical artifice now we have a non-linear
relation in natural numbers, as represented in Eq. (6.2a), which can also
be tested with regression analysis. Therefore, the assumption of linearity
can take the form of natural functions or logarithmic functions.
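The logarithmic transformation can be checked numerically. The power function below (A = 2, β = 0.5) is an illustrative assumption; regressing log Y on log X recovers both parameters, as Eq. (6.2b) implies:

```python
import math

# Data generated exactly from Y = A * X^beta with A = 2, beta = 0.5
X = [1, 2, 4, 8, 16]
Y = [2 * x ** 0.5 for x in X]

# Regress log Y on log X by least squares: log Y = log A + beta * log X
lx = [math.log(x) for x in X]
ly = [math.log(y) for y in Y]
n = len(X)
mx, my = sum(lx) / n, sum(ly) / n
Sxx = sum((u - mx) ** 2 for u in lx)
Sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
beta_hat = Sxy / Sxx
A_hat = math.exp(my - beta_hat * mx)

print(round(beta_hat, 3), round(A_hat, 3))   # recovers 0.5 and 2.0
```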
If the test fails for Eq. (6.2), which leads to rejecting the β-hypothesis
of linear causality between Y and X, then we may proceed to test Eq.
Correlation Coefficients
Causality can also be measured by the correlation coefficient, for it is
derived from the regression line. Again, consider the case of just one exog-
enous variable X. As can be seen in the regression line in Fig. 6.2, for a
given value of Xj there will be an observed value of Yj. The deviation of the
observed value of Yj from the mean value of Y is necessarily equal to the
sum of two components: the difference between the observed value of Yj
and the value of Y on the regression line (Ŷj) plus the difference between
this and the mean value of Y ( Y ).
Therefore, for any observed value of Yj, we can write the following
identity:
(Yj − Ȳ) ≡ (Yj − Ŷj) + (Ŷj − Ȳ)   (6.7)
∑(Yj − Ȳ)² ≡ ∑(Yj − Ŷj)² + ∑(Ŷj − Ȳ)²   (6.8)
r² = ∑(Ŷj − Ȳ)² / ∑(Yj − Ȳ)²   (6.9)
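The decomposition behind Eqs. (6.7)–(6.9) can be verified numerically for any fitted regression line; the small data set below is invented:

```python
# Small invented data set
X = [1, 2, 3, 4, 5]
Y = [2.1, 3.9, 6.2, 8.1, 9.7]
n = len(X)

# Least-squares line: Yhat = b0 + b*X
mx, my = sum(X) / n, sum(Y) / n
b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx)**2 for x in X)
b0 = my - b * mx
Yhat = [b0 + b * x for x in X]

# Eq. (6.8): total variation = unexplained + explained
total = sum((y - my) ** 2 for y in Y)
unexplained = sum((y - yh) ** 2 for y, yh in zip(Y, Yhat))
explained = sum((yh - my) ** 2 for yh in Yhat)

# Eq. (6.9): r^2 is the explained share of the total variation
r2 = explained / total
print(round(total, 2), round(unexplained, 2), round(explained, 2))
print(round(r2, 4))   # 0.9967
```

The identity (6.8) holds exactly (up to floating-point rounding) for a least-squares line with an intercept, which is why r² always lies between 0 and 1.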
β: Y = G(X₁, X₂; t)  [predicted signs: X₁ +, X₂ −, t +]   (6.10)
because this exogenous variable has a positive sign. Changes in the exog-
enous variable X2, holding constant the value of the exogenous variable
X1, will shift the trajectory downward because this exogenous variable has
a negative sign.
The prediction of the theoretical dynamic model presented in Eq.
(6.10) constitutes a beta proposition. To apply regression analysis, it can
be transformed into a linear relation in the parent population, with t as
another exogenous variable, and obtain a beta-hypothesis. This is the
hypothesis that can be submitted to a statistical test by using regression
analysis, just as in the static model.
β-hypothesis:
Y = H(X₁; T)  [predicted signs: X₁ +, T +]   (6.11)
Subject to: T < T* and Y < Y*
If T = T*, then Y = Y*
T* = f(X₁)
Given the value of the exogenous variables X1, the endogenous vari-
able Y will increase over time along a given trajectory; at time T = T* ,
the variable Y reaches the threshold value Y*, and the temporal dynamic
equilibrium breaks down. Hence, T* is the breakdown period; that is, at
period T*, a regime switching takes place and the process itself changes.
The threshold value Y* is unobservable. In the case that the exogenous variable X1 changes (increases), the trajectory of the endogenous variable
will shift (upward) and thus the value of T* will also change (say, fall);
accordingly, the process will break down at different time (sooner). The
value of T* is thus endogenous.
Time T is now historical time, irreversible time, with past, present, and
future (as in the example of the cup that fell from the table and became
broken cup). This is different from mechanical time t (as in the example of
the pendulum). The critical statistical test of an evolutionary model is about
the breakdown of the temporary dynamic equilibrium, which is falsifiable.
In order to use regression analysis to falsify the evolutionary model, the
function (6.11) can be represented in linear form as follows:
β-hypothesis:
Y = β0 + β1 X1 + β2 T, T < T* , β1 > 0, β2 > 0 (6.12)
Sample estimate:
Y = b 0 + b1 X1 + b 2 T + e (6.12a)
The sample of size n comes from k groups, such that ni is the sample size of the i-th group. The sample observations are ordered from the lowest value to the highest as if they were a single sample. The term Ri is the sum of the ranking numbers assigned to the ni values of the i-th group. On the
statistical inference, we have the following theorem:
Theorem 6.2 If
(a) Samples from k groups of identical parent populations are drawn
independently, with sample size n ≥ 5 for each group;
(b) The mechanism of selection in each group is random;
μ1 = μ2 = … = μk    (6.14)
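The ranking procedure just described (pool the k samples, rank all n observations jointly, and sum the ranks Ri of each group) is the setup of the Kruskal-Wallis test. The test statistic itself is not reproduced in the fragment above, so the sketch below uses the standard textbook H formula, which is an assumption on our part; data are hypothetical and ties are ignored.

```python
# Sketch of the rank-sum setup described above: pool the k samples, rank the
# n pooled observations, and sum the ranks R[i] of each group i. The H
# statistic is the standard Kruskal-Wallis formula (assumed, not shown in the
# text); ties are ignored and the data are hypothetical.

def kruskal_wallis_h(groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    R = [0.0] * len(groups)               # R[i]: sum of ranks of group i
    for rank, (_, gi) in enumerate(pooled, start=1):
        R[gi] += rank
    h = 12.0 / (n * (n + 1)) * sum(
        Ri ** 2 / len(g) for Ri, g in zip(R, groups)) - 3.0 * (n + 1)
    return h, R

groups = [[6.4, 6.8, 7.2, 8.3, 8.4],      # group 1 (n1 = 5)
          [2.5, 3.7, 5.4, 5.9, 6.1],      # group 2 (n2 = 5)
          [1.3, 4.1, 4.9, 5.2, 7.9]]      # group 3 (n3 = 5)
h, R = kruskal_wallis_h(groups)
print(round(h, 2), R)  # 7.26 [62.0, 29.0, 29.0]
```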
rs = 1 − 6(∑d²) / [n(n² − 1)]    (6.15)
The sample consists of n pairs of two variables, V and W. The observed
values of V are ordered from the lowest to the highest; the same ordering
is made for the values of W. The difference in the ranking for each pair is
equal to d, which is squared (d²) and then summed.
On the statistical inference, we have the following theorem:
Theorem 6.3 If
(a) The correlation between two parent populations is ρ = 0 ;
(b) Samples consisting of n pairs are drawn from these parent populations
by using a random mechanism;
z = (rs − 0) / (1/√(n − 1)), such that z ~ N(0, 1)    (6.16)
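Equations (6.15) and (6.16) can be sketched numerically. The data below are hypothetical and ties are ignored for simplicity.

```python
import math

# Sketch of Eqs. (6.15)-(6.16): Spearman's rank correlation computed from the
# rank differences d of n pairs (V, W), and the large-sample z statistic
# under the null rho = 0. Hypothetical data; ties ignored.

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rs(v, w):
    rv, rw = ranks(v), ranks(w)
    n = len(v)
    d2 = sum((a - b) ** 2 for a, b in zip(rv, rw))   # sum of d^2
    return 1 - 6 * d2 / (n * (n ** 2 - 1))           # Eq. (6.15)

V = [1.2, 2.5, 3.1, 4.8, 5.0, 6.3, 7.7, 8.1, 9.4, 10.2]
W = [0.8, 2.0, 2.9, 5.1, 4.7, 6.0, 8.2, 7.9, 9.9, 11.0]
rs = spearman_rs(V, W)
z = (rs - 0) / (1 / math.sqrt(len(V) - 1))   # Eq. (6.16): z = rs * sqrt(n-1)
print(round(rs, 4), round(z, 3))  # 0.9758 2.927
```

Since z exceeds the 5 percent critical value of 1.96, the null hypothesis ρ = 0 would be rejected for these (hypothetical) data.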
Falsifying Economic Theories (II) 87
In statistical testing under the alpha-beta method, the opposite of the
term false is not true, but consistent.
If statistical analysis shows consistency between beta propositions of a
theoretical model and empirical data, the abstract society constructed by
the theory is a good approximation of the real-world society under study;
otherwise, the model fails. If all models fail, then the theory fails to explain
this reality.
According to the alpha-beta method, causality requires a scientific the-
ory. The reason is simple: Causality is the relation between endogenous
and exogenous variables, which can only be established by a scientific the-
ory. No scientific theory, no causality. We should then notice that if the
statistical testing leads us to accept a theory, then the causality relations
proposed by the theory are also accepted.
We should also note that statistical rejection or acceptance of the beta-
hypothesis is not symmetric. Rejection is definitive, but acceptance is pro-
visional. We accept it just because there is no reason to reject it now;
hence, this logic for statistical acceptance may be called the principle of
insufficient reason. This logic is consistent with the falsification principle,
according to which the rejection of a scientific theory is definitive, but
its acceptance is provisional until a new empirical data set or a superior
theory appears.
The example of the theory “Figure F is a square,” shown above, illus-
trates this principle. If the two diagonals are not equal (a necessary con-
dition), we reject the proposition that F is a square; however, if the two
diagonals are equal, there is no reason to reject the proposition, and we
may accept it, but provisionally, for there are more tests to be performed,
such as whether all sides are equal. Truth is elusive in the social sciences,
for we have no way to test for the necessary and sufficient conditions
on the validity of a theory to explain reality, but just for the necessary
conditions.
Science Is Measurement
“Science is measurement” is a commonly accepted principle. This should be
read as a necessary condition in the alpha-beta method, for measurement
alone cannot lead to scientific knowledge. However, the question of iden-
tifying observable variables is not a simple task in economics. A criterion
to determine measurability is needed.
β-hypothesis:
Y = F(X1, X2) = β0 + β1X1 + β2X2, such that X2 = 0 or 1, with positive effects (+, +)    (6.17)
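The role of the dummy variable in (6.17) can be made concrete: a qualitative characteristic enters the regression as X2 ∈ {0, 1} and shifts the intercept by β2. The coefficient values in this sketch are hypothetical.

```python
# Sketch of Eq. (6.17): a qualitative characteristic enters the regression as
# a dummy variable X2 in {0, 1}, shifting the intercept by beta2.
# Hypothetical coefficient values.

b0, b1, b2 = 2.0, 0.5, 1.5

def predict(x1, x2):
    assert x2 in (0, 1), "X2 is a dummy variable: it must be 0 or 1"
    return b0 + b1 * x1 + b2 * x2

# For the same X1, the predicted gap between the two groups is exactly beta2.
gap = predict(4.0, 1) - predict(4.0, 0)
print(gap)  # 1.5
```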
In the last two chapters, we have shown the connections between the
alpha-beta method and the theory of statistical testing. Statistical testing
methods are logically derived from statistics, a formal science. There are
two statistical theories: parametric and non-parametric. The first makes
assumptions about the distribution of a variable in the parent population
from which samples are drawn. The assumptions include normal distribu-
tion of the variable, homoscedasticity, and so on. The second makes no
such assumptions. However, both theories have in common the assump-
tion that sample selection has been made through a random mechanism.
Statistical testing instruments are derived from each theory, and thus we
have parametric and non-parametric testing instruments.
A new epistemological problem now appears in the falsification process
of an economic theory. If the parametric testing instrument is applied and
the theoretical model under consideration is rejected, the origin of the
failure may not be attributed only to the assumptions of the economic
theory but also to the assumptions of the testing instrument. If the non-
parametric testing is applied, and the theoretical model is rejected, the
origin of the failure can be attributed to the assumptions of the economic
theory only, unless there are reasons to doubt that the sample was random.
Note that this problem is not about errors of measurement. Certainly,
the instruments of measurement of variables could be faulty, as indicated
earlier. The problem at hand refers to the testing instruments themselves,
which may be faulty, when some of the assumptions or requirements of
the testing instrument are not met. Consider the case of medical tests,
which very often require that the patient be fasting; hence, if the patient
does not comply with this requirement, the test will be invalid.
The same problem appears with the statistical testing instruments that
make some assumptions about reality. Suppose the beta-hypothesis (β > 0)
is to be tested with the parametric instrument of regression analysis, which
assumes the sample was drawn from a parent population that is normally
distributed in the real world and that the causality relation is linear (in
natural or logarithmic numbers). The falsification is now about the two
sets of assumptions: the one that is underlying the beta proposition and
the other that is underlying the statistical testing instrument. Therefore, if
the beta-hypothesis is rejected, we will not know which of the two types
of assumptions has failed.
On the other hand, if the beta-hypothesis is accepted, we know that the
joint assumptions have passed the test. Because the set of assumptions about
the statistical inference is independent of the set of assumptions underlying
the beta proposition, we can say that the latter has passed the test too.
Another problem of applying falsification to economic theories origi-
nates in the instruments of measurement of variables. First, the nature
of empirical variables is such that most variables (endogenous and exog-
enous) are socially constructed variables, as was shown earlier. Socially con-
structed facts are not easy to measure.
Second, the instruments of measurement of the economic process are not
as developed as in physics. Production and distribution in most cases are still
measured by applying surveys to firms and households and by using gov-
α ⇒ β → (α, τ, λ): [b ≈ β]    (6.18)
testing instruments that are contained in the statistical theory (τ), and
those of the measurement of empirical variables (λ). In the case of rejec-
tion, the source of the failure may come from either set of assumptions
or from all three, but there is no way to identify them. We call this the
identification problem. In the case of acceptance, all sets of assumptions are
accepted; moreover, the theory is accepted because the other two assump-
tions were made independently. In any case, the researcher should report
the ways in which the identification problem might affect the falsification
results.
Regarding the influence of the statistical testing instrument, we know
that, compared to parametric statistics, non-parametric statistics assumes
only that the sample drawn from the parent population is random. The
conclusion we may draw from this comparison is that parametric statistics
is based on a theory that contains more restrictive assumptions about the
origin of the empirical data utilized. The more restrictive the assumptions
of a statistical theory, the less likely it is that the sample data satisfy
those conditions; hence, testing instruments that rely on a large set of
assumptions are less powerful for testing scientific theories than those
that do not.
Can the problem of identification be resolved by testing the assump-
tions of the statistics utilized in the testing of the scientific theory? No, it
cannot. First, statistics is a formal science, not a factual science. No beta
propositions can be derived from a statistical theory to test the empirical
validity of a statistic. On the other hand, testing the assumptions—for
instance, whether a sample comes from a normally distributed population—would
require another statistic to do the testing, which in turn would come from
another theorem, with other assumptions, which would have to be tested, which
in turn would require another statistic, and so on. We would fall into the
infinite regress problem, unless the final test were non-parametric.
Not only in economics, but also in the social sciences in general, scien-
tific research is carried out under imperfect knowledge about the charac-
teristics of the parent population and that of the sample drawn from that
population. Non-parametric statistical testing instruments are therefore
relatively more powerful than parametric instruments for the falsification
of an economic theory, when both testing instruments compete in per-
forming the test.
This conclusion holds true even acknowledging two relative disadvan-
tages of non-parametric methods. First, a non-parametric method is less
efficient (requires a larger sample for the same statistical error) than the
parametric method that it can replace; second, more assumptions imply
more precision on the level of significance. These are the usual arguments
against the use of non-parametric statistics. However, these two relative
disadvantages of non-parametric methods are more than compensated for by
their advantage of requiring less restrictive conditions about the
generation of the empirical data utilized to perform the test.
Under the conditions of falsification presented in Eq. (6.18), progress
of scientific knowledge in economics and the social sciences will come
from superior new statistical theories that are non-parametric and from
innovations in the instruments of measurement of variables. These inno-
vations will make falsification more powerful, as false theories could be
identified and eliminated and replaced by better theories. The Darwinian
evolutionary selection of theories will work more effectively. Thus, eco-
nomics will show more rapid scientific progress.
Chapter 7
The alpha-beta research method has been derived from the compos-
ite epistemology, the combination of the epistemologies of Nicholas
Georgescu-Roegen and Karl Popper. However, they are not the only
known epistemologies. There are others, which will be examined in this
chapter. Research methods will be derived from them, if possible, and
compared to the alpha-beta method.
Deductivist Epistemology
Deductivism assumes that scientific knowledge can be reached by logical
deduction alone. Hence, the derived rule for scientific research would say
that scientific knowledge must be established by pure thought rather
than by reference to empirical observation; that is, alpha and beta proposi-
tions alone are conducive to scientific knowledge, where beta propositions
are derived from alpha by using deductive logic. Thus, the derivation of
beta propositions is treated as a problem of solving a theorem, and once
it is solved, the method is complete. Economics is thus seen as a formal
science, not as a factual science.
Compared to the alpha-beta method, the rule derived from deductiv-
ist epistemology comprises only the alpha propositions and the corre-
sponding beta propositions. The step of submitting the beta proposition
to the falsification process is ignored. Thus, untested beta propositions
are taken as causality relations. The knowledge so generated is not
error-free. We already know that a logically correct proposition can be
empirically false.
Consider the following economic theory. The alpha proposition says
that the short-run level of output in a capitalist society is given by the
expenditure behavior of social actors, not by the behavior of producers of
goods. A model of this theory would derive the following beta proposition:
if government expenditure increases, the level of output of society will
increase too. The beta proposition has been derived from alpha using logical
deduction. Is this sufficient to have scientific knowledge? No. As shown
by the alpha-beta method, the beta proposition is logically correct but it
could be empirically false when tested statistically because the assumptions
contained in the theory were arbitrary.
Paul Samuelson’s classic book Foundations of Economic Analysis
(1947) is the best example of deductivist epistemology. The book goes
as far as to derive meaningful theorems from economic theories, which
correspond to what we have called here beta propositions. Empirical refu-
tation is ignored. (This is a paradox in a book that starts by claiming the
significance of epistemology in economics.) Even today, most textbooks
of economics use deductivist epistemology. In the language of alpha-beta
method, they derive beta propositions from an economic theory, and then
go immediately to applications to public policies. No falsification of the
theory is ever presented or even suggested.
The demarcation principle of deductivist epistemology is the theory
itself. The criterion of scientific knowledge is what the theory says. If
The Alpha-Beta Method and Other Methods 101
someone presents facts that refute the predictions of a theory, the proposi-
tion of this person will be disqualified on the grounds that it is contrary to
the theory. The person will be treated as ignorant in scientific knowledge.
Moreover, if reality and theory are found inconsistent with each other,
then “reality must be wrong.” Why? Because the theory and its empiri-
cal predictions are logically correct! The debate about the superiority of
theory A against theory B is also based on deductivism: the criterion of
knowledge utilized is the theory itself.
The risk of error in scientific knowledge using deductivist epistemology
is therefore enormous. We never know whether the empirical predictions
of a theory are consistent or inconsistent with empirical facts. The causal-
ity relations are hypotheses and they could just be empirically false.
Actually, the logic of deduction—not the deductivist epistemology—
plays a major role in the alpha-beta method. From the assumptions of
a scientific theory, beta propositions are derived by using the logic of
deduction. But that is just part of the alpha-beta method, which includes
the falsification process.
In sum, deductivist epistemology has to be abandoned. It belongs
to the formal science, not to factual sciences. It cannot lead to scientific
knowledge in the factual sciences.
Inductivist Epistemology
Inductivist epistemology as a theory of knowledge assumes that scientific
knowledge can be reached by empirical observations alone. No scientific
theory is needed. The derived rule of inductivist epistemology says that
scientific knowledge requires empirical observation; it is a necessary and
sufficient condition. Inductivism is just the opposite of deductivism.
A distinction needs to be made between inductivist epistemology and
inductive logic. Consider the following examples of syllogism (premises
and conclusion):
Deductive logic:
All men are mortal
Socrates is a man
Therefore, Socrates is mortal
Inductive logic:
We have observed that some poor countries are tropical
Then all tropical countries are probably poor
I:  m → M / P
P:  m′ = (m, n) → M′ / P′
P′: m′′ = (m, n, r) → M′′ / P′′
The principle of induction (I) says that from a particular observation (m)
we can reach a general statement (M), the justification of which is an
inductive rule (P). How do we establish P? By applying another induction,
which is to be found in the observation itself (m′), which now includes in
the observation a new element (n), from which we can generalize to state-
ment M′, the justification of which is an inductive rule (P′), and so on.
Thus, the inductive rule is just another induction; moreover, the algorithm
leads inevitably to the logical problem of infinite regress.
Consider the following example to illustrate the problem of induction:
If the initial statement (m) refers not to a single country but to a group
of countries, the conclusions would be formally the same. As we can see,
the observations themselves cannot give us a logical justification for draw-
ing conclusions beyond those experiences. In particular, no underlying
factors in the workings of the real world will ever appear with inductivism.
Under inductive logic, no definite conclusion can be derived from the
premises. The conclusion cannot be taken as true, but only as probable, as
indicated earlier. “If some degree of probability is going to be assigned to
statements based on inductive inference, this will have to be justified by a
new principle of induction, appropriately modified. And this new principle
in its turn will have to be justified, and so on” (Popper 1968, p. 30). The
logical problem of infinite regress is back again.
If it were just a matter of having more observations, how far would we
have to repeat our observations to draw a general conclusion? How can we
justify the conclusion? This is Popper’s answer: “[A]ny conclusion drawn
in this way may always turn out to be false: no matter how many instances
of white swans we may have observed, this does not justify the conclusion
that all swans are white” (Popper 1968, p. 27).
The problem of induction has no solution; that is, there is no such thing
as inductive logic. As philosopher Susan Haack summarized, “According
to Popper, we have known since Hume that induction is unjustifiable;
there cannot be inductive logic” (Haack 2003, p. 35).
In sum, there are logical problems with inductivism. First, the principle
of induction would be another induction, for the principle of induction
must be a universal statement in its turn. This leads to the logical problem
of infinite regress, which denies the existence of inductive logic. Second,
inductivist epistemology rests on the assumption that there exists
inductive logic, which does not exist. Therefore, inductivist epistemology
must be abandoned because it does not fulfill two requirements of the
meta-theory of knowledge.
I think (like you, by the way) that theory cannot be fabricated out of the
results of observation, but that it can only be invented (Popper 1968,
p. 458)
b ⇏ α
to the population from which samples were drawn. This is viable because
there exists statistical theory, which gives logic to the inference. This
rule may be called the statistical inference research method. It is an
empirical research method, not a scientific research method, as the
alpha-beta method is.
Statistical inference is the logic of empirical knowledge (not of scientific
knowledge); thus, the criterion of knowledge is the discovery of empirical
regularities, based on statistical testing of an empirical hypothesis that is not
derived logically from a theory, for such theory is unavailable. This empiri-
cal hypothesis may be called H-hypothesis. The logical justification of this
hypothesis is not a scientific theory, but an intuition of the researcher.
The H-hypothesis can be subject to statistical testing using parametric or
non-parametric statistics. To illustrate the method, regression analysis will be
presented here. Notice that under this method the regression linear equa-
tion does not come from scientific theory, from a beta proposition (it is not
β-hypothesis), but just from an empirical hypothesis without theory. Then
H-hypothesis:
Y = F(X1, X2) = β0 + β1X1 + β2X2, β1 > 0, β2 < 0    (7.1)

Sample estimate:

Y = b0 + b1X1 + b2X2 + e    (7.2)
The existence of statistical correlation does not imply the existence of causal-
ity among the variables involved.
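A minimal numerical illustration of this warning (data and functional forms are hypothetical): two variables that merely share a common driver Z show a strong correlation even though neither causes the other.

```python
import math

# Two variables that merely share a common driver Z are strongly correlated
# even though neither causes the other. All data are hypothetical.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

Z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]                    # common driver
V = [2 * z + 0.3 * ((-1) ** i) for i, z in enumerate(Z)]        # driven by Z
W = [5 - z + 0.2 * ((-1) ** (i + 1)) for i, z in enumerate(Z)]  # also driven by Z

r = pearson_r(V, W)
print(round(r, 3))  # close to -1, yet V does not cause W: Z drives both
```

Only a theory that designates endogenous and exogenous variables can turn such a correlation into a causality claim, which is the point of the passage above.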
people do (behavior, observable) not on what people say about what they do
or what their motivations for doing things are.
The conclusion is that interpretive epistemology as it is presented in the
literature has no logic that can lead us to scientific knowledge, to causality
relations. As shown above, there is no such thing as inductive logic; hence,
there is no logical route from observations to theory and explanation. The
so-called grounded theory method mentioned in qualitative research text-
books is a misnomer: Scientific theory cannot be logically derived from
fieldwork observations, no matter how in-depth the fieldwork is. This problem
is similar to that of inductivism: No matter how many white swans you can
observe, there is no logical justification for the conclusion that all swans
are white.
In brief, the interpretive method is not an epistemology because it fails
to meet two rules of the meta-theory of knowledge, as shown in Table 1.1,
Chap. 1: it provides neither a logic of scientific knowledge nor a
demarcation rule between scientific and non-scientific knowledge.
However, the interpretive research method can be very useful for the
progress of scientific knowledge as exploratory research. In the first
stages of the construction of scientific knowledge, when nothing is known
about the research question, it can help the researcher gain insights into
how the world under study works. This will be the case
if the descriptive knowledge constructed from the fieldwork observations
is followed by the construction of an empirical hypothesis (H-hypothesis),
which goes beyond the anecdotal observation. This cannot be the result of
logical derivation, but of intuition. The empirical hypothesis can later on
be subject to statistical testing and thus to the beginning of the construc-
tion of empirical regularities on the workings of society.
On the other hand, the descriptive knowledge obtained from the field-
work observations and its consistency with the ways people believe and
reason about their daily life—their intuitive knowledge—should help the
researcher to gain insights for the construction of a theory of their behav-
ior. Again, the theory cannot be derived logically, but must be invented,
by trial and error. The interpretation should lead the researcher to the
construction of the motivations underlying the observed behavior of peo-
ple in an abstract way, selecting only what appears to be the essential fac-
tors, that is, proposing a theoretical hypothesis.
In theoretical physics, rocks fall not because they wish to, but because
there are external forces (gravity) at work; and yet the behavior of rocks
can be explained. People have wills, purposes, feelings, wants, and needs.
But this fact does not mean that people’s behavior cannot be explained, or
that the explanation is possible only if people report on their own behavior.
For example, consider the following economic theory: In a capitalist
society, consumers seek to maximize their individual utility functions, which
reflect their needs and wants, given their real incomes. This is an abstract,
unobservable proposition (alpha proposition). Nevertheless, the following
observable proposition can be derived from the abstract proposition: the
higher the price of a good, the smaller the quantities of the good that will be
bought in the market (beta proposition). This follows because the theory
implies that consumers will have incentives to substitute this good for oth-
ers that satisfy the same necessities. The alpha-beta method has the
property of transforming unobservable propositions (alpha) into observable
propositions (beta). After falsification, the theory may be rejected or accepted; if
accepted, we have a theory that is able to explain the behavior of consumers.
According to Popper (1993), both an amoeba and the physicist Albert
Einstein use theories to understand the real world in which they live. The
difference is just the epistemology: the theory that the amoeba uses is
based on instincts, whereas in the case of Einstein, the theory to explain
the behavior of physical bodies is based on logic, on epistemology.
In the case of the social sciences, some particularities are present. People
need not know Newton’s gravity theory to predict that if someone jumps
through a window, he or she will end up on the ground. This is common-
sense knowledge. But people will probably need more than intuition to
know how the solar system works.
Similarly, people need not know demand–supply theory to predict that
higher scarcity of a good will increase its price, as in the case that the price
of potato will increase if there is a landslide that interrupts the road con-
necting the town with the farming areas. But people will probably require
more than intuition to understand the aggregate: how the market system
works, and what factors underlie the observed degree of income inequality
and the level of unemployment in society.
In sum, interpretive epistemology is rather a pre-scientific research
method. It can contribute to the progress of economics when nothing
is known about a research question. Fieldwork observation and its
interpretation do not by themselves constitute scientific knowledge, but
they belong to the realm of scientific research when the researcher collects
information seeking to discover new questions and propose empirical or
theoretical hypotheses with which to start the algorithm that leads to scientific
knowledge.
Before going into empirical research, the researcher must have a research
question. What is a research question? It is an interrogative sentence that
identifies a tension between propositions found in the literature review.
The tension may take the form of a puzzle, paradox, controversy, or vacuum,
which the researcher intends to resolve.
The choice of a research method is not a matter of personal taste or
preference. It is an objective matter. On the criteria of theory availability
and data availability, which come from the literature review, four methods can
be distinguished. These are shown in Table 7.1.
When both theory and data are available to solve the research question,
the researcher is placed in cell (1), and the research will be devoted to test
statistically the beta propositions of the theory, that is, to test a theoretically
based hypothesis (β-hypothesis). Cell (2) shows the case in which data
need to be constructed in order to submit the theory to falsification; thus,
the research will eventually move to cell (1). These methods constitute
scientific research or basic research, and thus the alpha-beta method provides
the rules to follow.
When a data set is available but a theory is not, cell (3), the researcher
will be able to use the statistical inference method. It seeks to test sta-
tistically a hypothesis that has no theoretical foundation, which is called
H-hypothesis. If it is accepted, the correlations obtained may then be used
as empirical regularities in need of theory to explain the phenomenon:
Why is it that there exists correlation between the variables included in
H-hypothesis? What factors might underlie the observed correlation?
Because there is no logical route from correlations to theory, the needed
theory would have to be invented, that is, constructed by trial and error.
The task is to find an alpha proposition from which a beta proposition can
be derived, such that H = b = β is obtained. If such theory is found, the
researcher who started in cell (3) will have moved to cell (1).
If both theory and data are unavailable, the case of cell (4), exploratory
research is the only logical possibility. The researcher may use the inter-
pretive research method. The research method is based on case studies,
in which each case and its context are carefully designed, and participatory
observation and fieldwork are carried out to collect primary qualitative and
quantitative data to produce descriptive knowledge. From the interpreta-
tion of the fieldwork data, the researcher is expected to gain insights to
propose theoretical or empirical hypotheses.
The aim of an exploratory research is to explore new research questions
and then propose new hypotheses, either an empirical H-hypothesis or the
first trial for alpha propositions. Therefore, from cell (4), the researcher
could move to either cell (2) or cell (3) and then ultimately will reach cell
(1). Exploratory research then constitutes the very first stage of scientific
research about a new research question, about which nothing is known.
It should be noted that the usual separation of researchers into quantita-
tive/qualitative or into theoretical/empirical has no logical justification, as
shown in Table 7.1. As to the first separation, cell (1) includes qualitative
elements, which are contained in the alpha propositions. Statistical testing
in cells (1), (2), and (3) uses basically quantitative data, but it can include
qualitative data (ordinal variables), socially constructed variables, as well
as “dummy” variables. Cell (4) collects primarily qualitative data, but
quantitative data are necessary as well, if hypotheses are to be generated.
As to the second separation, researchers in cells (1) and (2) will clearly
be required to do both theoretical and empirical work. In cell (3), the
work is mostly empirical, but it ends proposing a theoretical hypothesis
in order to move to cell (1). Finally, in cell (4), the work starts as theory-
free research, but the researcher will have to conclude with theoretical or
empirical hypotheses, and ultimately move to cell (1).
Table 7.1 is also helpful to place into perspective the so-called case
studies. Case study is a common method utilized in social research. The
question is whether it has epistemological justification. Because there can
be realities without theory, we can say that it is justifiable to place the case
study method in cell (1). The case study may be constituted by a sample
of particular firms, households, countries, and so on, that is, of those social
groups that are outliers or exceptions to the scientific theory or were never
part of a sample. Then the mean and variance of the variables involved can
be calculated from this sample.
Alternatively, the sample may be constituted by one social group only
(individuals, firms, households, or countries), for which observations over
time are possible, allowing us to calculate the mean and variance of the
variables over time. Therefore, a case study can produce a scatter diagram
of variables (similar to the scatter diagram presented in Fig. 6.2) from
variations over time for a particular social group. It should be clear that a
case study that produces a single observation (not a scatter diagram, but
one point only) is useless. The statistical value of one observation is nil!
A case study can be applied to address the fallacy of ontological
universalism, which is the belief that a relation that is true in one place
or time must also be true in any other place or time (to be discussed in
Chap. 8 below). Case studies can address two types of problems about
generalization, as follows:
(a) To test statistically the validity of a theory for a particular social real-
ity. The question is whether this reality behaves as the theory says.
For example: Theory T was found valid in country C1. Is theory
T also valid in country C2? This research question would correspond
to either cell (1) or cell (2), depending upon the availability of
empirical data.
(b) To test statistically the validity of an H-hypothesis for a particular social
reality. The question is whether the H-hypothesis, which was accepted
in country C1, can also be accepted in country C2. This research
question would correspond to cell (3).
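Case (a) can be given a hedged Python sketch (the data points and country labels are invented for illustration): estimate, by ordinary least squares, the relation that theory T predicts for a sample drawn from a second country, and inspect the sign and statistical significance of the slope.

```python
import math

# Hypothetical sample from "country C2": pairs (X, Y); all numbers invented.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8),
        (6.0, 12.2), (7.0, 13.9), (8.0, 16.1)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

sxx = sum((x - mean_x) ** 2 for x, _ in data)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in data)

slope = sxy / sxx                  # OLS estimate of the effect of X on Y
intercept = mean_y - slope * mean_x

# Residual variance and standard error of the slope
residuals = [y - (intercept + slope * x) for x, y in data]
s2 = sum(e ** 2 for e in residuals) / (n - 2)
se_slope = math.sqrt(s2 / sxx)

t_stat = slope / se_slope          # compare against a t-table, df = n - 2
print(f"slope = {slope:.3f}, t = {t_stat:.1f}")
```

A slope with the predicted sign and a significant t-statistic means the C2 sample fails to refute the theory; an insignificant or wrong-signed slope rejects the theory for that particular reality.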
The conclusion that emerges from Table 7.1 is that all empirical research
methods contribute to scientific knowledge, although in different forms
and stages. All methods are complementary. Research that appears in cell
(1) produces scientific knowledge. But the other three cells contribute to
the possibility of reaching cell (1) at some point of the algorithm. They
contribute to the growth of scientific knowledge by supplying the needed
dataset to falsify a theory, or by constructing a set of empirical regularities
that will call for a theory to explain them, or by providing insights to
generate new theoretical or empirical hypotheses on new questions that seek
to push the frontier of scientific knowledge. Even case studies have their
place. The different empirical research methods play different roles in the
construction of scientific knowledge and thus offer different outputs from
research. All of them are important in the growth of scientific knowledge.
The empirical research methods presented in Table 7.1 are applicable
to economics and the social sciences. Each discipline can operate in any
of the cells, depending on the theory and data availability. There is no
epistemological justification to argue that economics is quantitative and
the other social sciences are qualitative, or that economics is theoretical
and the other social sciences are empirical, which would imply a separation
of disciplines by empirical research methods, that is, by cells. As shown
earlier, the construction and progress of scientific knowledge in the social
sciences requires the researchers to engage in theoretical-empirical and
quantitative-qualitative research.
Table 7.1 also shows that the epistemology of economics and the other
social sciences assumes that there exists a difference between what people
actually do (behavior) and what they say they do; moreover, scientific
knowledge comes from behavior (facts), not from what people say they
do and why. Scientific theories in the social sciences can therefore be
subject to falsification only on the basis of hard data, that is, observations
about human behavior; falsification using soft data (perceptions, opinions,
beliefs that people state in interviews) is unviable. Progress of scientific
knowledge requires both hard and soft data, as indicated in Table 7.1.
In some stages, as in cell (4), researchers will act as ethologists of their
own species and will collect mostly soft data. In the other cells, researchers
will use hard data.
This is another reason to support the claim made before: The social
world is much more complex than the physical world (and biological world
as well!) and, therefore, social sciences need to be more epistemology-
intensive compared to the natural sciences. The alpha-beta method is
indeed a very involved method, much more so than the scientific research
methods utilized in the natural sciences, as will be shown below, in Chap. 9.
Before that, the next chapter deals with some important fallacies that have
been uncovered by the rules of the alpha-beta method.
Chapter 8
Fallacies of Composition
Fallacies of composition are logical errors of inference that go from the
parts to the whole, or vice versa.
The Fallacy of Aggregation “What is true for the individual must be true
for the aggregation of individuals.”
The Fallacy of Division “What is true for the aggregate is also true for the
individuals of the aggregate.”
Example
“Water is liquid; then all its constitutive elements (hydrogen and oxygen)
are liquid substances.”
The error in these two types of fallacies comes from ignoring the effect
of the interactions among individual elements of an aggregate. Therefore,
what is true for the part need not be true for the whole, and vice versa.
Economics is a social science. Its objective is to explain the functioning
of human societies. Human societies are made of individuals. Economics
analyzes individual behavior just as a logical artifice, as a method to
construct the aggregate. The risk of falling into fallacy of composition
problems is therefore very high in economics.
Examples
“If a farmer has a good harvest, he will become richer; then if all farmers
have good harvests, they will all become richer” (fallacy of aggregation: the
market price will fall if all farmers produce more).
“If national income increases, then the incomes of all individuals increase”
(fallacy of division: the reason is obvious).
“If a borrower cannot repay the loan to a bank, he or she has a problem;
if all borrowers cannot repay, they all have a problem” (fallacy of
aggregation: if all borrowers cannot repay, the banks have a problem).
“If an individual acts seeking his or her own interest, he or she will reach
the objective; if all individuals act seeking their own interest, the group
will attain its objectives” (fallacy of aggregation: problems of congestion
and negative externalities may appear in the interactions). The behavior of
individuals may result in social wellbeing or in social disaster, depending on
the form of their interactions. This is one of the most debated propositions
in economics about individual freedom and social wellbeing.
“If a theory explains reality, it explains all the components of that reality.”
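The farmer example above can be made concrete with a minimal numerical sketch (the linear demand curve and all quantities are hypothetical): when one farmer alone doubles output, the market price barely moves and his revenue roughly doubles; when every farmer doubles output, the price falls so far that each farmer’s revenue drops.

```python
# Hypothetical linear inverse demand: p = 100 - 0.45 * Q (all numbers invented).
def price(total_quantity: float) -> float:
    return max(0.0, 100 - 0.45 * total_quantity)

n_farmers = 100
q_each = 1.0                                  # initial harvest per farmer
p0 = price(n_farmers * q_each)                # initial market price
revenue_each_before = p0 * q_each

# One farmer alone doubles the harvest: total supply barely changes,
# so the price is nearly unchanged and that farmer's revenue nearly doubles.
p_one = price((n_farmers - 1) * q_each + 2 * q_each)
revenue_lucky_farmer = p_one * 2 * q_each

# Every farmer doubles the harvest: the price collapses,
# and each farmer's revenue FALLS despite the larger output.
p_all = price(n_farmers * 2 * q_each)
revenue_each_after = p_all * 2 * q_each

print(revenue_each_before, revenue_lucky_farmer, revenue_each_after)
```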
Fallacies of Causality
Causality relations are the fundamental product of scientific knowledge.
To know what causes what is the result of having a scientific theory, empir-
ical predictions of the theory, and survival from falsification, as shown
in the alpha-beta method. However, there are ways in which causality is
stated as a fallacy.
The Fallacy of Spurious Correlations “If fact A occurs at the same time
as fact B, then A is the cause of B.”
This fallacy is also known as Cum hoc, ergo propter hoc (simultaneously
with that, then because of that). There is no logical justification to draw
the conclusion of causality from the observation that facts occur simulta-
neously. The use of the alpha-beta method may reveal that a third factor
(C) is the cause of occurrence of A and B.
Examples
“There is a correlation between (A) increases in ice cream sales and (B)
increases in drowning deaths; hence, A causes B.” Consider a third factor
(C), the weather: it is summertime, so people buy more ice cream and also go
to the beaches in larger numbers, which increases drowning deaths. Then C
may cause both A and B; thus, A does not cause B.
A study found the following correlation: “(A) children sleeping with the
lights on are correlated with (B) children suffering from myopia. Then the
study concluded: (A) causes (B).” Another study found that there is a third
factor causing both: (C) the myopia of the children’s parents. The theory is
that myopia is mostly inherited; hence, myopic parents tend to leave the lights
on in their children’s bedrooms. Then C causes both A and B; thus, A does
not cause B.
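The confounding mechanism in these examples can be sketched as a small simulation (a hedged illustration with an invented data-generating process, not real data): a third factor C drives both A and B, producing a strong overall correlation that disappears once C is held fixed.

```python
import random
import statistics

random.seed(1)

def pearson(u, v):
    """Population Pearson correlation of two equal-length lists."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / (statistics.pstdev(u) * statistics.pstdev(v) * len(u))

# Invented data-generating process: the weather C (summer = 1) drives BOTH
# A (ice cream sales) and B (drownings); A never enters the equation for B.
days = 400
C = [1 if i % 2 == 0 else 0 for i in range(days)]
A = [50 + 30 * c + random.gauss(0, 5) for c in C]  # sales rise in summer
B = [2 + 3 * c + random.gauss(0, 1) for c in C]    # drownings rise in summer

print("overall corr(A, B) =", round(pearson(A, B), 2))

# Conditioning on C removes the association: within summer days only,
# A and B are (nearly) uncorrelated.
A_summer = [a for a, c in zip(A, C) if c == 1]
B_summer = [b for b, c in zip(B, C) if c == 1]
print("corr(A, B | summer) =", round(pearson(A_summer, B_summer), 2))
```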
The Fallacy of Sequence “If fact B occurs after fact A, then A is the cause
of B.”
This fallacy is also known as the problem of post hoc, ergo propter hoc
(after that, then because of that). It originates in spurious correlation.
Example
“(A) After divorce, (B) women tend to have a higher cancer incidence; hence,
A causes B.” A third factor may cause both: age. Divorced women are older,
and older women are more likely to develop cancer than younger women.
Aging may lie behind both cancer and divorce; that is, divorce does not
cause cancer.
“If theory and reality do not coincide, reality is wrong, for theory being
logically constructed cannot be wrong.”
“To understand the real world, all you need is empirical data.”
The alpha-beta method shows that there is a logical route from theory
to facts, but there is no logical route from facts to theory and explanation.
The alpha-beta method is able to prove that both deductivism and induc-
tivism fail to provide a logic of scientific knowledge; that is, they are truly
fallacies.
“If an economic theory does not explain everything, it does not explain
anything.”
“If an economic theory explains a reality, then it explains all aspects of that
reality.”
Fallacy of Forecasting
From a corroborated theory, consider the following beta proposition (the
plus sign over X indicates a positive effect of X on Y):

β: Y = F(X⁺)  (8.1)
It says that the value of the endogenous variable Y depends upon the
value of the exogenous variable X. Therefore, the value of the endog-
enous variable Y in a particular period of the future will depend upon the
value the exogenous variable will take in that period. Which value will the
exogenous variable X take in the future? The theory cannot determine the
value of the exogenous variable for the future; hence, it cannot predict
the future value of the endogenous variable. The future value of the exog-
enous variable X can only be guessed. With the value of the exogenous
variable so determined, the future value of the endogenous variable Y can
be predicted. Hence, forecasting is a conditional prediction, conditional
on the correct guessing of the future value of the exogenous variable X.
The forecast of the value of an endogenous variable could turn out to be
false ex post, but that does not refute the validity of the theory. The theory
is one that has been corroborated through falsification. It was the guess of
the future value of the exogenous variable that failed.
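A minimal numerical sketch of this point (the functional form and all values are invented): the corroborated relation predicts Y exactly from X, yet the forecast fails ex post because the guessed value of X, not the theory, was wrong.

```python
# Hypothetical corroborated beta proposition: Y = F(X) = 10 + 2 * X.
def F(x: float) -> float:
    return 10 + 2 * x

# Step 1 of forecasting: guess the future value of the exogenous variable X.
x_guessed = 5.0
y_forecast = F(x_guessed)                 # conditional prediction

# Ex post, X took another value; the theory itself still holds exactly.
x_realized = 8.0
y_realized = F(x_realized)

forecast_error = y_realized - y_forecast  # error due to the bad guess of X
theory_error = y_realized - F(x_realized) # zero: the theory is not refuted
print(forecast_error, theory_error)
```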
The problem also appears in the case of dynamic processes. From a
valid theory, consider the following beta proposition:
β: Y = F(X⁺, t⁺)  (8.2)
It says that the trajectory of the endogenous variable Y depends upon the
time period, for a given value of the exogenous variable X. Therefore, the
theory predicts a particular trajectory, in which the value Y = Y′ will
occur at the particular period t′ in the future. This will be true if, and only
if, the value of X remains unchanged at period t′. But the theory cannot
determine that because X is exogenously determined. Then the forecast Y′
could fail ex post because at period t′ the variable X took another value.
Finally, consider an evolutionary process. From a corroborated evolu-
tionary theory, consider the temporal dynamic equilibrium before regime
switching, which for simplicity is given by Eq. (8.2), where t = T. Because
the trajectory is fully determined for a given value of the exogenous vari-
able X = X′, the value of the endogenous variable Y = Y′ can be
forecast for period T = T′ < T*. However, this forecast could fail because
X may take a value other than X′ at period T′.
Chapter 9
Comparing Economics and Natural Sciences
Abstract How does the alpha-beta method compare with the methods
applied in the natural sciences? In order to make this comparison ana-
lytical, this chapter presents the main theories of physics and evolutionary
biology, together with the methods for accepting or rejecting them. The
conclusion is that the alpha-beta method is not applicable to physics, but
it is to evolutionary biology. The alpha-beta method is thus appropriate for
sciences dealing with hypercomplex realities, such as economics and biol-
ogy. Therefore, economics is, like physics, a science; however, economics
is not a science like physics; economics is more like biology. This chapter is
not a digression; on the contrary, epistemological comparisons have heuristic
value in science.
Throughout the book, we have assumed that the social world is more complex
than the physical world. The rules of the alpha-beta research method have
been developed to deal with the social world and are thus applicable to
economics and the social sciences in general. How does the alpha-beta
method compare with the methods applied in the natural sciences?
This comparison is introduced in the book with the idea that our
understanding of the epistemology of economics will be improved if we
try to compare the alpha-beta method with that of the natural sciences.
Therefore, this chapter should not be seen as a digression; on the contrary,
comparison has a heuristic value; it is an aid to learning. The comparison
will be limited to the fields of physics and evolutionary biology.
Physics
Theoretical physicists have recently written scientific books about the
state of knowledge attained in physics that are addressed also to the general
audience. The most popular is the one written by Cambridge University
professor Stephen Hawking. Based on his book A Brief History of Time
(1996 edition), the structure of the discipline in terms of theoretical and
empirical propositions together with the method followed will be pre-
sented here.
The first propositions made in physics were empirical hypotheses that
were not associated with any scientific theory in the sense the term has been
used in the alpha-beta method. This type of empirical hypothesis has
been called H-hypothesis here. Two of the most important were:
H(1) ≠ O(1): Not everything in space orbits around the earth (Galileo)
H(2) ≠ O(2): Planets follow elliptical paths around the sun (Kepler)
Gravity Theory
The most famous theory of physics is gravity theory, which was proposed
by Isaac Newton in 1687. It can be stated as follows:
ϕ1: The universe is a static system, in which bodies attract each other
with a force that is stronger the more massive the bodies are and the closer
they are to each other.
There are two empirical refutations, which are enough to reject the
theory. The question now is to find another theory that predicts proposi-
tion γ1(1), but not the others.
Special Relativity Theory
ϕ2: Time and space are not absolute categories; their behavior depends upon
the relative motion of the observer.
γ2(1) = O(3): Light indeed seems to travel faster than anything else.
Some simple examples can be mentioned: in a thunderstorm, one sees the
lightning before hearing the thunder; in a baseball stadium, when a batter
hits the ball, one sees the ball being hit before hearing the sound.
γ2(2) ≈ O(?): No information is available.
γ2(3) = O(4): The Michelson-Morley experiment (p. 39).
γ2(4) = O(5): The explosion of atomic bombs.
General Relativity Theory
ϕ3: Space-time is not flat (as had been previously assumed): it is curved by
the distribution of mass and energy in it. Bodies always follow straight lines
in four-dimensional space-time, but they appear to us to move along curved
paths in our three-dimensional space (p. 40).
γ3(1) = O(6): Orbits measured by radar agree with theory. “In fact, the
orbits predicted by general relativity theory are almost exactly the same as those
predicted by the Newtonian theory of gravity γ1(1) = γ3(1). However, in
the case of the planet Mercury, which, being the nearest planet to the sun, feels
the strongest gravitational effects, general relativity theory predicts a small
deviation from the Newtonian predictions. This effect was noticed before 1915,
and served as one of the confirmations of Einstein’s theory” (pp. 40–42).
γ3(2) = O(7): Observing an eclipse from West Africa in 1919, a British
expedition showed that light was indeed deflected by the sun. The light
deflection has been accurately confirmed by a number of later observations
(p. 42).
γ3(3) = O(8): Experiments have corroborated this prediction (p. 43).
γ3(4) = O(9): Hubble’s finding in 1929 that the universe is expanding;
the distance between the galaxies is growing over time.
Quantum Theory
The general theory of relativity deals with the large objects of the universe.
Phenomena on extremely small scales are studied by quantum mechanics
theory. The use of better-quality “microscopes” (not similar to regular
microscopes, because these instruments allow us to see molecules, atoms,
and smaller particles, and because some of them are very big instruments)
has generated more knowledge from empirical observations.
For a long time, it was assumed that matter is composed of atoms. The
atom was the ultimate unobservable element in physics and assumptions
were made about its composition and behavior. By the early 1930s, atoms
became observable. We now know that atoms are constituted by a nucleus
(containing protons and neutrons) and electrons. In turn, what are these
elements made of? By 1968, it was observed that the nucleus is made of
even smaller elements, called quarks. What are quarks and electrons made
of? Today, they are still unobservable. Some physicists assume that the
ultimate element is something called string.
Quantum theory can be stated as follows:
ϕ4: Particles do not have exactly defined positions and speeds. The universe
is an uncertain place, governed by chance, when examined at smaller and
smaller distances and shorter and shorter time scales (the subatomic world).
γ4(1): Observations about the position and velocity of objects in the subatomic
world are uncertain. This is Heisenberg’s uncertainty principle. This principle
says that “one can never be exactly sure of both the position and the velocity
of a particle; the more accurately one knows the one, the less accurately one
can know the other” (p. 243).
Evolutionary Biology
Biologists have recently written scientific books for the general audience.
The introductory book by Ernst Mayr (1997), the late Harvard University
professor, is one of the most popular. Other authors include Casti (2001),
Smith (2002), and Pasternak (2003). These works will be used to present
the structure of this discipline here.
There are two fields in biology that are quite different in the nature of
knowledge. Functional biology—molecular biology—is more like phys-
ics. The other part is evolutionary biology, which studies the interactions
between populations. This is the field that will be compared with the social
sciences. According to Ernst Mayr, biology is, like physics, a science,
but biology is not a science like physics; biology is an autonomous science
(Mayr 1997, pp. 32–33).
The scope of evolutionary biology is the study of the interactions
between vast numbers of organisms, each of them in itself of enormous
complexity. This is similar to the complex social world studied by econom-
ics. Therefore, in principle, the alpha-beta method would be applicable
to evolutionary biology. The rule of scientific knowledge would be the
construction of abstract worlds, or theories, to explain the behavior of
these organisms and the biological world. The empirical predictions of the
theories would then be confronted against empirical facts.
Given the complexity of the biological world, scientific theories may
need to be presented in the form of theoretical models. Indeed, math-
ematical models are utilized in biology. The reason is that equations—the
reduced form equations—will allow the biologist to predict behavior. The
prediction is, however, mostly qualitative. Precise numerical fit is usually
too much to hope for, because in any model so much is left out. What is
the justification for leaving out of a model something that surely affects the
outcome? Biologist John Maynard Smith explains as follows: first, if it
were important, the model would not give the right predictions, even quali-
tatively; second, if we try to put everything into a model, it becomes useless.
So, in biology, only rather simple models are useful. The price that biolo-
gists must pay for this simplification is the lack of quantitative accuracy in
their predictions (Smith, 2002, pp. 196–197).
Evolutionary biology can be represented as an evolutionary process.
The repetition of the process is non-mechanical, but subject to qualitative
changes as the process is repeated. Endogenous and exogenous variables
are also distinguishable. The exogenous variables refer mostly to different
biophysical environments or niches. Because an evolutionary process can
be represented by a sequence of dynamic processes, in evolutionary biol-
ogy the endogenous variables will show regime switches over time even if
the exogenous variables remain fixed.
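The idea of regime switching under fixed exogenous variables can be sketched with a toy simulation (both dynamic rules and all numbers are invented purely for illustration): the exogenous variable X never changes, yet the law of motion of the endogenous variable Y switches at period T_star.

```python
# Toy evolutionary process: X is fixed, yet the dynamic rule governing
# the endogenous variable Y switches at period T_star (regime switch).
X = 2.0
T_star = 10

def step(y: float, t: int) -> float:
    if t < T_star:
        return y + 0.5 * X    # regime 1: constant increments
    return 0.9 * y + X        # regime 2: convergence to a new steady state

Y = [1.0]
for t in range(30):
    Y.append(step(Y[-1], t))

# Before T_star, Y grows by the same amount each period; after T_star,
# the increments shrink as Y approaches the new steady state.
print(Y[T_star] - Y[T_star - 1], Y[-1] - Y[-2])
```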
The theory of biological evolution can then be restated in terms of
alpha-beta propositions.
Ontological Universalism
The next comparison between economics and the natural sciences refers to
the question of the ontological universalism—the assumption of one unitary
world. Economics considers the world society as its “universe.” In physics
and biology, the relevant comparable universe is the planet Earth.
The deviations between Einstein’s theories and Newton’s theory are
extremely small in the slow-velocity world we humans typically inhabit. If
you throw a baseball, Newton’s and Einstein’s theories can be used to predict
where it will land; the answers will be different, but the difference
will be so slight that it is beyond our capacity to detect it experimen-
tally (Greene 2003, p. 76). Newton’s theory is not valid for all space and
time, but Einstein’s is. In the limited world of the planet Earth, however,
Newton’s and Einstein’s theories can both be right. Moreover, when consid-
ering the reality of the planet Earth alone, Newton’s theory is finer and
simpler than Einstein’s.
If the planet Earth could be separated into segments using any cri-
terion, these segments would not be different in the sense that gravity
theory would explain the relationships between objects, no matter how
those segments were created. Atoms would behave exactly in the same
manner everywhere in the planet Earth; there would not exist a different
physics for the South (poor regions) and for the North (rich regions). The
principle of ontological universalism in the planet Earth then applies in
physics and gravity is the general theory.
In evolutionary biology, the planet Earth can be separated into differ-
ent biophysical environments, which is the main exogenous variable. In
each environment, living organisms are expected to behave differently. We
know that plants behave differently in the tropics compared to the temper-
ate zones, or in mountains compared to sea-level environments. Separate
biological realities imply the construction of partial theories to explain
those specific realities. The question of the unity of knowledge and the need
for a unified theory then arise in biology.
For example, plant physiology is different in the Andes compared to
the American plains. Why? In the temperate zones, plants have adapted to
an environment in which major changes in temperature take place between
seasons over the year. In the Andes, plants had to adapt to an environ-
ment in which major changes in temperature take place within a single day.
In both zones, however, the theory of photosynthesis applies. Therefore,
photosynthesis is the unified theory of plant behavior.
The data set so constructed measures not only behavior, but also social actors’
self-declarations and opinions, which very likely introduce significant biases
and distortions into the data, according to the incentives of the informa-
tion suppliers.
A simple H-hypothesis that can be put forward here is that prog-
ress in physics is due essentially to innovations in the instruments of
measurement. Think of the Hubble Space Telescope of today compared to
the telescope Galileo utilized in 1609. Telescopes, microscopes, and spec-
troscopes all have gone through continuous progress and sophistication,
which have certainly made falsification of theories more decisive and thus
made scientific progress more rapid in physics. This hypothesis is different
from Kuhn’s, in which paradigm changes depend mostly on sociological
factors, such as the behavior of the community of scientists (Kuhn 1970).
Such a process of innovation in measurement instruments has not hap-
pened in economics. Production and distribution are still measured using
imperfect instruments, which have not shown significant innovations. The
popular GDP (gross domestic product) is based partly on hard data (actual
output) and partly on soft data (responses of social actors to question-
naires about sales, inventories, employment, incomes, etc.). However, the
soft data collection implies a more fundamental problem: the attempt to
collect data is an interference and changes people’s behavior in unknown
directions. This is, in part, similar to the Heisenberg principle. As shown
earlier, the Heisenberg principle, also known as the uncertainty principle,
refers to a measurement problem in physics. According to this principle,
one can never be exactly sure of both the position and velocity of a particle
because the instrument used for measuring (to shine light on the particle)
will disturb the particle behavior; that is, the particle will change its posi-
tion or velocity in a way that cannot be predicted.
In economics, in addition to the Heisenberg problem, data collection
based on what people say has another uncertainty problem, which rests on
the fact that social actors can supply the real data or lie, according to the
incentives they face; thus, the quality of measurement of the economic
process is greatly affected, in unknown directions as well. Soft data in eco-
nomics is thus endogenous, as it depends upon the incentive system facing
social actors when they supply information. Consider the problem of
measuring corporate profits, when firms can decide how much profit to
declare and in which country to declare it, depending on the incentives
created by national corporate tax legislation.
Notes
1. Another example: According to gravity theory, if the sun exploded, the
earth would instantaneously suffer a departure from its usual elliptic orbit.
However, according to special relativity theory this effect would not hap-
pen, for no information can be transmitted faster than the light speed;
hence, this effect would not be felt on the earth instantaneously. It would
take eight minutes, which is the time light takes to travel from the sun to the
earth (Greene 2003, p. 354).
2. Space-time is the four-dimensional description of the universe, uniting the
three space dimensions and the single time dimension.
Chapter 10
Conclusions
Abstract The social sciences are more complex than physics, the exemplar
of sciences. Therefore, economics and the other social sciences require
a more sophisticated epistemology than physics. The book proposes a
composite epistemology, the combination of the epistemologies of Karl
Popper and Nicholas Georgescu-Roegen, to meet this epistemological
need. Then, the alpha-beta method is logically derived
from the composite epistemology and contains operational rules for sci-
entific research, the use of which will lead to a Darwinian competition
of economic theories. By construction, economics can now be seen as a
critical science. The use of the alpha-beta method will lead to enhancing
the quality of learning, teaching, and research in economics. This is the
expected contribution of the book.
The social world is much more complex than the physical world. Therefore,
economics requires a more sophisticated epistemology than physics. This
book has proposed the alpha-beta method, which contains a set of rules
for scientific research in economics. The method is constructed in such a
way that economic theories are necessarily falsifiable. Therefore, the usual
claim that economic theories are rarely falsifiable, which has led to the
coexistence of many economic theories, can now be ruled out. The sci-
entific progress of economics is now viable through the Darwinian evolu-
tionary process, in which selection of good theories and elimination of bad
theories can be practiced.
The alpha-beta method has been derived from the composite epistemol-
ogy, the combination of the epistemologies of Nicholas Georgescu-Roegen
and Karl Popper. The Popperian epistemology alone could rarely lead to
falsification in economics. It was necessary to solve the problem of trans-
forming a complex real world into an abstract and simpler world by means
of a scientific theory, which generates at the same time causality relations.
The process epistemology of Georgescu-Roegen has been used to solve
this problem. Therefore, the alpha-beta method has epistemological jus-
tification, and under this method economic theory will be, by con-
struction, always falsifiable. The development of the alpha-beta method is
presented as the main contribution of this book.
Scientific knowledge seeks to explain and understand reality and it seeks
to be error-free knowledge. The aim of economics is to establish causality
relations—what causes what. The alpha-beta method allows us to reach
this objective in economics. Alpha propositions constitute the assump-
tions of the scientific theory, the construction of the abstract world, which
intends to be a good approximation of the complex real world. Beta prop-
ositions are derived from the alpha propositions, show causality relations,
and are, by construction, empirically falsifiable. Therefore, we have an
epistemological criterion to accept or reject economic theories. Science is
epistemology. Hence, the alpha-beta method is a scientific research method.
The rules of the alpha-beta method can be summarized as follows:
(A) The complex real world must be transformed into a simpler abstract
world by means of a scientific theory. A scientific theory is a set of
assumptions, which are called the alpha propositions; thus, a scientific
theory is a set of alpha propositions. The alpha propositions are
unobservable, as they refer to the underlying factors operating in the
workings of the real world. Thus, a scientific theory seeks to uncover
the underlying factors in the social facts that we observe.
(B) The alpha propositions must be able to generate, by logical deduc-
tion, observable propositions, which are called beta propositions.
Beta propositions are, by logical construction, empirically falsifiable
or refutable. Moreover, beta propositions contain the causality rela-
tions between the exogenous and endogenous variables established
by the scientific theory.
(C) Beta propositions are tested statistically against facts of the reality
under study. If facts refute the beta propositions, then the scientific
theory is rejected as a good approximation of this reality; if facts
cannot refute them, the theory is accepted, provisionally, as a good
approximation of it.
Only with a scientific theory and its falsifiable beta propositions can we
speak about the logic of scientific knowledge; that is, science is the
logical way, the rational way, to establish causality. The statement “Science
is measurement” is incomplete, for measurements alone, facts alone, are
not conducive to causality.
The alpha-beta method is conducive to scientific knowledge because it
is a scientific research method; however, its application requires the avail-
ability of both a theory and a data set. When theory or data is not available,
other empirical research methods can be applied, which are conducive to
pre-scientific knowledge. The research strategies to go from pre-scientific
knowledge to scientific knowledge are also presented in the book.
Although the book has dealt with economics, it has also shown that
the application of the alpha-beta method can be extended to the other
social sciences. The basic reason is epistemological: Any research ques-
tion about the complex social world can be answered by reducing it to
an abstract and simpler social world, which leads to the use of the alpha-
beta method.
Regarding the comparison of economics with the natural sciences, the
alpha-beta method is not applicable to physics. However, it is applicable
to evolutionary biology. This is so because human societies, seen as human
species, are instances of biological species. The social world and the bio-
logical world are thus similar—complex realities. This is not the case for
physics, which thus uses other rules for the falsification of scientific theo-
ries. Hence, the book concludes that economics is, like physics, a science,
but economics is not a science like physics. Economics is more like evolu-
tionary biology. For one thing, the principle of ontological universalism is
valid in physics, but it is not in economics and biology.
As to the question posed at the beginning of the book—why the growth
of scientific knowledge in the social sciences has proceeded at a lower rate
than in physics—the book has shown that the answer lies in the nature of
these sciences. The social world is much more complex than the physi-
cal world; hence, the social sciences are more complex than physics, the
exemplar of sciences. For one thing, the atom is homogeneous, but people
are diverse. Scientific knowledge in the social sciences is therefore more
demanding on epistemology, more epistemology-intensive, than in physics.
The implication is that a more complex reality requires a more complex
epistemology. The alpha-beta method is constructed to meet this
epistemological need.
The book has shown that although the principle of falsification is
applicable to economics, it is a very involved task. First, facts in economics
F
failure vs. error of scientific theory, 24
falsification in economics
    applicable only under alpha-beta method, 64
    justified by composite epistemology, 146
    See also Popperian epistemology not applicable in economics
Figueroa, A., 61, 141
forecasting, the fallacy of, 125–7
Freund, J., 85

G
general equilibrium model, 57–9, 61
Georgescu-Roegen, N., 10, 16, 28, 31, 92, 99, 118, 138, 146
Georgescu-Roegen’s process epistemology. See process epistemology
Greene, B., 140, 143n2

H
Haack, S., 103
Hawking, S., 4, 130, 131

I
identification problem/problem of identification
    in rejecting beta-hypothesis, 72
    in rejecting H-hypothesis, 106
immortal theory (pseudo theory), 53, 92, 95, 143
inductive logic, 7, 27, 101–4, 110
inductivism, 101, 103, 104, 110, 112, 122–3
initial conditions as initial structure of society, 31, 33, 50, 51
initial inequality postulate, 51
initial resource endowment postulate, 51
institutional postulate, 50
interpretive epistemology
    assumptions, 108
    and exploratory research, 110
    limitations, 108
interpretive research method, 108–113

K
Kuhn, T., 92, 142
M
markets and democracy, 31, 33, 34, 52
    as fundamental institutions of capitalism, 31, 33, 52
Mayr, E., 135, 136
McCloskey, D.N., 78
measurement of social facts
    Georgescu-Roegen criterion, 92–3
    science is measurement, 88–93
    Searle’s criterion, 89–92
mechanical processes, 37, 38, 40
mechanical time (t), 38, 84
mechanical vs. historical time, 38, 84
methodology. See epistemology
microeconomic equilibrium model, 58
misplaced concreteness, fallacy of, 123–5
models
    concept, 34, 56
    economic theory as family of models, 55–7
    make theories falsifiable, 53
    need of auxiliary assumptions, 56
    types of: partial vs. general equilibrium, 57–9; short run vs. long run, 59–60; static vs.

O
ontological universalism, the fallacy of, 123
ontological universalism, the problem of
    in economics, 48–52
    in physics, 48–52
ordinal variable
    as dummy variable in regression analysis, 93, 113
    qualitative, 92, 93, 113

P
Pagan, A., 87
parametric statistics or statistical theory, 64, 65, 74, 84, 95, 96, 105
    assumptions, 64, 65, 74, 95, 96, 149
partial correlation coefficient, 81–2
partial equilibrium model, 58, 61
Pasternak, C., 135
physics and economics. See economics and physics
Popper, K., 7–9, 28, 50, 55, 99, 102–4, 111, 117–18, 138, 146
T
testing economic theories. See under alpha-beta method; falsification in economics
theory of everything, fallacy of, 123
theory of evolution by natural selection, 136
theory of knowledge
    definition, 3, 6, 7
    as formal science (logic), 6
    meta-assumptions, 5, 6, 77

V
verified vs. corroborated theories, 27

W
weak-cardinal variable, 92–3
wealth inequality. See power structure
Wilson, E., 139

Z
Ziliak, S., 78