
RULES FOR SCIENTIFIC RESEARCH IN ECONOMICS
The Alpha-Beta Method

Adolfo Figueroa
Pontifical Catholic University of Peru
Lima, Peru

ISBN 978-3-319-30541-7
ISBN 978-3-319-30542-4 (eBook)
DOI 10.1007/978-3-319-30542-4

Library of Congress Control Number: 2016944657

© The Editor(s) (if applicable) and The Author(s) 2016


This work is subject to copyright. All rights are solely and exclusively licensed by the
Publisher, whether the whole or part of the material is concerned, specifically the rights of
translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on
microfilms or in any other physical way, and transmission or information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are
exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Palgrave Macmillan imprint is published by Springer Nature


The registered company is Springer International Publishing AG Switzerland
To Nicholas Georgescu-Roegen, my teacher, in memoriam
PREFACE

Why has the growth of scientific knowledge in the social sciences proceeded at a rate that is slower than that of the natural sciences? The basic reason seems to rest upon the differences in the complexity of the reality they study. Compared to the natural sciences, the social sciences seek to explain the functioning of the social world, which is a much more complex world than the physical world. As biologist Edward Wilson pointed out:

Everyone knows that the social sciences are hypercomplex. They are inherently far more difficult than physics and chemistry, and as a result they, not physics and chemistry, should be called the hard sciences. (1998, p. 183)

Methodology deals with the problem of how to construct scientific knowledge. Is the understanding of the social world more demanding on methodology than understanding the physical world? Economist Paul Samuelson argued in his classic book Foundations of Economic Analysis that indeed this is the case:

[This] book may hold some interest for the reader who is curious about the methodology of the social sciences…[I]n a hard, exact science [as physics] a practitioner does not really have to know much about methodology. Indeed, even if he is a definitely misguided methodologist, the subject itself has a self-cleansing property which renders harmless his aberrations. By contrast, a scholar in economics who is fundamentally confused concerning [methodology] may spend a lifetime shadow-boxing with reality. In a sense, therefore, in order to earn his daily bread as a fruitful contributor to knowledge, the practitioner of an intermediately hard science like economics must come to terms with methodological problems. (1947, pp. viii–ix)

Paraphrasing both Wilson and Samuelson, the researcher's good command of methodology is more critical for producing scientific knowledge in the highly complex sciences (the social sciences) than in the less complex sciences (the natural sciences). Therefore, the answer to the question posed above seems to be that the difference lies in methodology. The development of the social sciences requires using better methodology, and using it more intensively. This book intends to contribute to that development.
Methodology is also called epistemology (from the Greek episteme, knowledge). Epistemology or methodology is usually presented as part of philosophy of science. In this view, epistemology is a branch of philosophy that seeks to scrutinize the philosophical problems that arise in the practice of science, such as epistemological, metaphysical, and ethical problems. Philosophy of economics is the particular field that deals with philosophical problems in economics, as economists practice it. To be sure, this book is not about philosophy of economics. There are good recent books that show the state of this discipline (e.g. Reiss 2013).

The approach followed in this book will be different. It will correspond to the view of epistemology as the theory of knowledge—the logic of scientific knowledge. Then epistemology will be seen as part of the formal science of logic, not of philosophy. Indeed, some textbooks of logic now deal with the logic of scientific knowledge (e.g. Hurley 2008).
The book will show practical rules for the construction and growth of scientific knowledge in economics, which will be derived logically from a particular theory of knowledge or epistemology. No such rules exist currently in economics; that is, economists follow a diversity of rules, derived from a diversity of epistemologies or having no epistemological justification. The intended contribution of the book is then normative: what rules of scientific research ought economists to follow. This view of epistemology is more natural for working scientists, who are epistemology users rather than makers.

The epistemology proposed by Karl Popper (1968) will be adopted in this book. This is one of the most popular epistemologies in the literature. It essentially says that theory is required for scientific knowledge, but this theory must be empirically falsifiable or refutable; thus, good theories will prevail and bad theories will be eliminated, as in a Darwinian competition. Scientific progress will result from this competition.
However, Popperian falsification epistemology is also the most debated. Many authors have argued that Popperian epistemology is not applicable in economics. The arguments are clearly summarized in the Stanford Encyclopedia of Philosophy by Daniel Hausman (2013), a leading philosopher of economics. They are:

1. Economic theories are rarely falsifiable.
2. When they are, they are rarely submitted to testing.
3. When they fail the test, they are rarely repudiated.

Consequently, we can understand why in economics we observe that no theory is ever eliminated and that progress in scientific knowledge is relatively limited, in spite of large amounts of research work.

Problems (2) and (3) refer to what economists do and why. These are not within the scope of this book. Problem (1) is the subject of this book. The challenge is how to make Popperian epistemology applicable and operational in economics. Can we logically derive from Popperian epistemology a set of practical rules for scientific research in economics? As the book will show, this derivation is subject to the transformation of a complex social world into a simple abstract world. Popperian epistemology might be suitable for physics, but whether it is so for economics, a science dealing with a complex world, is another question. In fact, problem (1) has to do with the complexity of the social world.
How can a complex reality, such as the social world, be made knowable? The late Vanderbilt University professor of economics, Nicholas Georgescu-Roegen (1971), proposed a solution to this problem and developed the process epistemology. Georgescu-Roegen is mostly known as the founder of bio-economics, an economic school different from standard economics, but his contribution to epistemology is less known.
Consider now combining the epistemologies of Popper and Georgescu-Roegen into a single one, as they do not contradict each other. Call this combination the composite epistemology. Then, as will be shown in this book, a set of rules for scientific research in economics can be derived from the composite epistemology. This set of rules will thus constitute a scientific research method, as it will have epistemological justification or logical foundations. This will be called the alpha-beta method. This method intends to solve the falsification problem in economics, the problem that "economic theories are rarely falsifiable"—problem (1) of Popperian epistemology, cited above. The alpha-beta method is a scientific research method that ensures that economic theories are always falsifiable. Thus, the alpha-beta method is not another name for a known method, but a truly new scientific research method, the application of which should contribute to scientific progress in economics. The book is thus intended to be problem-solving.
Economics is a social science. However, this definition of economics is not always accepted and the term social science is usually reserved for sociology, anthropology, and political science. Although scientific rules are derived for economics only, the book will show that extensions to the other social sciences are nearly straightforward. This procedure means that economics is presented as an example of the social sciences, not as the exemplar.

Differences in the complexity of the social world compared to the physical world must be reflected in the different epistemologies social sciences and natural sciences use. The book presents a comparison between these epistemologies, just to better understand the epistemology of economics and the other social sciences.
Therefore, this book is concerned with the problem of how sciences ought to seek scientific knowledge, not with what scientists actually do. The common proposition "Science is what scientists do" ignores this distinction. Therefore, this book deals with the question of how scientific research in economics ought to operate. The question of what economists actually do and why is outside the scope of this book, for the answer would require a scientific theory to explain that behavior. The book takes the epistemologies of Popper and Georgescu-Roegen as given, and deals with the problem of deriving logically from them a set of practical rules for scientific research in economics.
The book includes 10 chapters. Chapters 1, 2, 3 and 4 deal with the construction of the alpha-beta method and its application to economics. Chapters 5 and 6 show the logic of statistical testing of economic theories under the particular alpha-beta method. Chapter 7 compares the alpha-beta method with other empirical research methods. Chapter 8 discusses the most common fallacies found in economics that are uncovered by the alpha-beta method. Chapter 9 compares the epistemologies of natural sciences and economics in the light of the alpha-beta method. Chapter 10 presents the conclusions of the book.
In sum, the objective of this book is to present a set of rules for scientific research in economics, which are contained in the alpha-beta method. These rules are scarcely used today, which is reflected in the fact that no economic theory has been eliminated so far, and thus we observe the coexistence of the same economic theories (classical, neoclassical, Keynesian, and others) over time, with the consequent lack of Darwinian competition of theories. Scientific progress is the result of such evolutionary competition. Therefore, the book seeks to contribute to the scientific progress of economics by proposing the use of the alpha-beta method, a method designed for the evolutionary progress of economics.
The book is primarily addressed to students of economics at advanced undergraduate and graduate levels. Students in the other social sciences may also find it useful in the task of increasing the growth of interdisciplinary research within the social sciences. Even students of the natural sciences may benefit from the book by learning how the rules of scientific research in their own sciences differ from those of the social sciences. This understanding will prepare economists, physicists, and biologists to work in interdisciplinary research projects, such as the relations between economic growth and degradation of the biophysical environment, which is, certainly, one of the fundamental problems of our time.
ACKNOWLEDGMENTS

Parts of this book have been taught in economics courses at the Social Science School and in the epistemology course in the Doctorate in Business Administration at CENTRUM Graduate Business School, both at Pontifical Catholic University of Peru, and at the Universities of Notre Dame, Texas at Austin, and Wisconsin at Madison, where I have been Visiting Professor. I would like to thank the students in these courses for their valuable comments and questions about my proposal of the alpha-beta method.

I am also grateful to the three anonymous reviewers appointed by Palgrave Macmillan. Their comments and suggestions on my manuscript were very useful in making revisions and producing the book. Sarah Lawrence, the Economics & Finance Editor of Palgrave Macmillan, has been most helpful in guiding the book project through the review process.

I am immensely grateful to my current institution, CENTRUM Graduate Business School, Pontifical Catholic University of Peru, and to its Director Fernando D'Alessio, for providing me with great support for the preparation of this book.

CONTENTS

1 Science Is Epistemology 1

2 Alpha-Beta: A Scientific Research Method 15

3 The Economic Process 29

4 The Alpha-Beta Method in Economics 47

5 Falsifying Economic Theories (I) 63

6 Falsifying Economic Theories (II) 73

7 The Alpha-Beta Method and Other Methods 99

8 Fallacies in Scientific Argumentation 117

9 Comparing Economics and Natural Sciences 129


10 Conclusions 145

Bibliography 151

Index 153
LIST OF FIGURES

Fig. 1.1 Diagrammatic representation of an abstract process 12
Fig. 3.1 Types of economic processes: static, dynamic, and evolutionary 36
Fig. 3.2 Deterministic and stochastic static processes 42
Fig. 6.1 Assumptions of regression analysis 75
Fig. 6.2 Breakdown of the variation of Yj into two components 76
LIST OF TABLES

Table 1.1 Meta-assumptions of the theory of knowledge 6
Table 1.2 Scientific research rules derived from Popperian epistemology 8
Table 2.1 The alpha-beta method 23
Table 2.2 Matrix of beta propositions or matrix of causality 26
Table 3.1 Economic process according to E-theory 32
Table 4.1 The alpha-beta method in economics 56
Table 5.1 Frequency distribution of income in the population B 67
Table 5.2 Distribution of sample means for n = 2 drawn from population B 67
Table 5.3 Frequency distribution of income in the population C 70
Table 5.4 Distribution of sample means for n = 2 drawn from population C 70
Table 6.1 Kinds of reality based on Searle's classification 89
Table 7.1 Research methods: scientific and empirical 112
Chapter 1

Science Is Epistemology

Abstract  What is the criterion to accept or reject propositions about the social reality as scientific? We need rules for that, which must have some rationality, some logic. This logic is called epistemology. Science is epistemology. What is the epistemology of economics? The answer is still debated. The use of the falsification epistemology of Karl Popper in economics has been questioned. This chapter presents this epistemology and analyzes the reasons for its shortcomings. Then the chapter introduces the process epistemology of Nicholas Georgescu-Roegen, which deals with complex realities, and shows that the two epistemologies are complementary and thus can be combined into a single composite epistemology. The composite epistemology is now applicable to sciences dealing with complex realities, such as those studied by economics.

Scientific knowledge seeks to establish relations between objects. The objects can be mental or physical. Formal sciences study the relations between mental objects, whereas factual sciences study the relations between material objects. Mathematics and logic are examples of formal science; physics and economics are instances of factual sciences.

Scientific knowledge takes the form of propositions that intend to be error-free. Scientific knowledge is therefore a particular type of human knowledge. What would be the criterion to accept or reject a proposition as scientific? It depends upon the type of science. In the formal sciences, the criterion seems to be rather straightforward: The relations established must be free of internal logical contradictions, as in a mathematical theorem.
In the factual sciences, by contrast, the criteria are more involved. As will be shown in this book, factual science propositions are based on formal science propositions; that is, the propositions of a factual science must also be free of internal logical contradictions. However, this rule constitutes just a necessary condition, for the propositions must also be confronted against real-world data.
Scientific knowledge in the factual sciences can be defined as the set
of propositions about the existence of relations between material objects
together with the explanations about the reasons for the existence of such
relationships. Therefore, it seeks to determine causality relations: what
causes what and why. It also seeks to be error-free knowledge, as said above.
We can think of several criteria to accept or reject a proposition in the
factual science. Common sense is the most frequent criterion utilized in
everyday life. Common sense refers to human intuition, which is a strong
force in human knowledge. Intuition is the natural method of human
knowledge.
The assumption taken in this book is that intuitive knowledge is subject to substantial errors. Intuitive knowledge is based on human perceptions, which can be deceiving. Galileo's proposition that the Earth spins on its axis and orbits around the sun was not generally accepted for a long time (even up to now) because it contradicted intuitive knowledge: People cannot feel the Earth spinning and what they can see is rather that the sun is going around the Earth. The same can be said about today's climate change because the greenhouse gases are invisible to human eyes. Intuitive knowledge is thus the primitive form of human knowledge.
As said earlier, science seeks to produce error-free human knowledge. Therefore, human knowledge in the form of scientific knowledge requires the use of a scientific method, which must be learned. Science thus has to do with method. The criteria for accepting or rejecting propositions as scientific in the factual sciences—the scientific method—need to be constructed. This construction is the task of epistemology.

The Role of Epistemology in Scientific Knowledge


In this book, epistemology is viewed as the field that studies the logic of scientific knowledge in the factual sciences. Epistemology sees scientific knowledge as fundamentally problematic and in need of justification, of proof, of validation, of foundation, of legitimation. Therefore, the objective of epistemology is to investigate the validity of scientific knowledge. For this we need a criterion to determine whether and when scientific knowledge is valid. This criterion cannot be based on facts, for they are the objective of having a criterion; thus, the criterion can only be established logically. Scientific knowledge must have a logic, a rationality, established by a set of assumptions. Therefore, the criterion is given by a theory of knowledge, which as any theory is a set of assumptions that constitute a logical system.
Epistemology will thus be seen as theory of knowledge, as a logical system. In this book, the concept of theory will be applied to the logic of scientific knowledge as well as to the scientific knowledge itself. Consequently, two very useful definitions in parallel are needed at the very beginning:

Theory of knowledge is the set of assumptions that gives us a logical criterion to determine the validity of scientific knowledge, from which a set of rules for scientific research can be derived. The set of assumptions constitutes a logical system, free of internal contradictions.

Scientific theory is the set of assumptions about the essential underlying factors operating in the observed functioning of the real world, from which empirically testable propositions can be logically derived. The set of assumptions constitutes a logical system, free of internal contradictions.

Any factual science needs to solve the criterion of knowledge before doing its work because this question cannot be solved within the factual science. The logical impossibility of obtaining the criterion from within the factual science is relatively easy to prove. Let S represent any factual science. Then:

Factual science (S) is a set of relations (R) between material objects X and material objects Y, which are established according to criterion (L).

This proposition can be represented as follows:

S = {R(X, Y) / L}    (1.1)

How would L be determined? If L were part of S, then L would be established through the relations between physical objects, that is, relations between atoms (physical world) or between people (social world); however, this leads us to the logical problem of circular reasoning because we need L precisely to explain the relations between atoms or between people.
The criterion L will thus have to be determined outside the factual science. How? The alternative is to go to the formal science, in particular to the science of logic. The criterion L is now justified by a logical system. This logical system is precisely the theory of knowledge (T), which as any theory is a set of assumptions (A). Then we can write:

S = {R(X, Y) / L}
L = {T(A) / B}    (1.2)
B = {T′(A′) / B′}
…

The first line of system Eq. (1.2) just repeats the definition of factual science. The second says that criterion L is logically justified by deriving it from the theory of knowledge T, which includes a set of assumptions A, given the set of assumptions B that is able to justify A. The set B constitutes the meta-assumptions, the assumptions underlying the set of assumptions A. The set B is logically unavoidable, for the set A needs justification. (For example: Why do I assume that there is heaven? Because I assume there is God. Why do I assume that there is God? Because…, and so on.) Therefore, the set B needs a logical justification by using another theory T′, which now contains assumptions A′, which in turn are based on meta-assumptions B′, and so on. Hence, we would need to determine the assumptions of the assumptions of the assumptions. This algorithm leads us to the logical problem of infinite regress.
The logical problem of infinite regress is a torment in science. A classical anecdote is worth telling at this point (adapted from Hawking 1996, p. 2):

An old person challenged the explanation of the universe given by an astronomer in a public lecture by saying:

–– "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise."

The scientist gave a superior smile before replying:

–– "What is the tortoise standing on?"

–– "You're very clever, young man, very clever," said the old person. "But it is turtles all the way down."
How could science escape from the infinite regress problem? This is a
classical problem, the solution of which goes back to Aristotle’s “unmoved
mover.” Everything that is in motion is moved by something else, but
there cannot be an infinite series of moved movers. Thus, we must assume
that there exists an unmoved mover.
In order to construct scientific knowledge, we need an unmoved mover, an initial point, established as axiom, without justification, just to be able to start playing the scientific game, which includes eventually revising the initial point, and changing it if necessary. The scientific game includes the use of an algorithm, that is, a procedure for solving a problem by trial and error, in a finite number of steps, which frequently involves repetition of an operation. Thus, the initial point is not established forever; it is only a logical artifice. A walker who does not know the route to his desired destination had better start walking in some direction and find the route by trial and error, rather than stay paralyzed.
In the system Eq. (1.2) above, the only way to avoid the infinite regress problem in the theory of knowledge is by starting with the meta-assumption B as given, and thus ignoring the third line and the rest. Then the set of assumptions B will constitute the foundation or pillar of the theory of knowledge T, which in turn will be the foundation or pillar of the criterion L, which we can use to construct scientific knowledge. The infinite regress problem is thus circumvented and we are able to walk.

The role of the theory of knowledge in the growth of scientific knowledge is to derive scientific rules that minimize logical errors in the task of accepting or rejecting propositions that are intended to be scientific knowledge. The theory of knowledge needs foundations, that is, meta-assumptions. Consider that the meta-assumptions B of the current theories of knowledge include those listed in Table 1.1.
As shown earlier, these meta-assumptions need no justification. (Please
do not try to justify them! We need to move on.) Thus, this initial set of
assumptions constitutes just the beginning of an algorithm to find the best
set of assumptions. Given these initial or fundamental assumptions, we
have a rule to follow: Any particular theory of knowledge will have to be
logically consistent with these four general principles.

Table 1.1  Meta-assumptions of the theory of knowledge

(i) Reality is knowable. It might not be obvious to everyone that this proposition is needed, but reality could be unknowable to us.
(ii) Scientific knowledge about reality is not revealed to us; it is discovered by us.
(iii) Discovery requires procedures or rules that are based on a single logical system, which implies unity of knowledge of a given reality; moreover, there exists such a logical system.
(iv) There exists a demarcation between scientific knowledge and non-scientific knowledge.

In Table 1.1, assumption (i) implies that we may fail to understand a reality because it is unknowable. Examples may include chaotic systems (weather), rare events (earthquakes), and ancient civilizations where facts are limited. Assumption (ii) in turn implies that research is needed to attain scientific knowledge. According to assumption (iii), a theory of knowledge seeks to provide science with a logical foundation or justification, that is, with a rationality. Therefore, discovery cannot appear "out of the blue." Accidental discoveries are not "accidental," but part of a constructed logical system; otherwise, they could hardly be understood as discoveries. According to assumption (iv), a theory of knowledge must have a rule that enables us to separate scientific knowledge from pseudo-knowledge in order to have error-free knowledge.
Theory of knowledge is a set of assumptions that constitute a logical system; that is, the assumptions cannot contradict each other. Thus, theory of knowledge can be seen as part of logic, that is, as a formal science. Factual sciences and formal sciences thus interact: theory of knowledge (constructed in the formal science of logic) is needed in factual sciences. Any theory of knowledge has a particular set of assumptions that justify rules of scientific knowledge, in which the assumptions are all consistent with the meta-assumptions presented in Table 1.1.
It should be clear from the outset that a theory of knowledge is a normative theory. It says what the rules of scientific knowledge ought to be. Therefore, a theory of knowledge cannot seek to explain what scientists do. These are not epistemological questions; they are scientific research questions in themselves, equivalent to researching why financial investors choose a particular portfolio to allocate their funds. The answer to both questions (the behavior of scientists and that of investors) will come from a factual science. The usual sentence "science is what scientists do" cannot constitute a scientific rule because it is inconsistent with the meta-assumptions shown in Table 1.1.

Just to be clear on definitions:

• Epistemology is sometimes called methodology, as it deals with the procedure (the "how" question) to attain scientific knowledge.
• Epistemology is also called theory of knowledge, as it deals with the logic of scientific knowledge.

Therefore, the three terms—epistemology, methodology, and theory of knowledge—can be considered synonymous and will be used interchangeably in this book. However, a possible confusion may arise with the use of the category "theory," which may refer to either the theory of knowledge or to the scientific theory. In order to avoid this possible confusion, the book will use the term "epistemology" or "methodology" rather than "theory of knowledge" whenever the risk of confusion should appear.

The Assumptions of Popperian Epistemology


This section will present the theory of knowledge developed by Karl Popper (1968, 1993). Popperian epistemology includes the following set of assumptions:

First, scientific knowledge can only be attained by using hypothetic-deductive logic, which implies the construction of scientific theories. Scientific theories are needed to explain the real world. Second, the scientific theory is empirically falsifiable or refutable. Third, the logical route for scientific knowledge can only go from theory to testing it against facts; in contrast, there is no logical route from facts to scientific theory, for it would require inductive logic, which does not exist.

Table 1.2 displays the scientific research rules that can be logically derived from the assumptions of Popperian epistemology. Rule (a) is self-explanatory. Rule (b) indicates that the criterion of demarcation is falsification. A proposition is not scientific if it is not empirically falsifiable. A falsifiable proposition is one that could, in principle, be shown to be empirically false. Under the falsification principle, the presumption is that the proposition is false so that its testing becomes a necessity; that is, the proposition is presumed false until proved otherwise. If the presumption were that the proposition is true, or that it could be false, then the testing would become discretionary; the proposition would be presumed true until proved otherwise.

Table 1.2  Scientific research rules derived from Popperian epistemology

(a) Scientific theory is required to explain the real world: no scientific theory, no explanation.
(b) Falsification is the criterion of demarcation. A scientific theory must be falsifiable. In order to be falsifiable, a scientific theory must contain a set of assumptions that constitute a logically correct system, from which empirically falsifiable propositions can be logically derived.
(c) If the empirical predictions are refuted by the reality, the scientific theory is rejected; if they are not, the theory is accepted. A scientific theory cannot be proven true; it can only be proven false, which implies that a scientific theory cannot be verified, but only corroborated. Rejecting a scientific theory is definite, but accepting it is provisional, until new data or a superior theory appears; hence, scientific progress is a Darwinian evolutionary process in which scientific theories compete and false theories are eliminated.

Through the falsification principle, science is protected from including
untested propositions within its domain.
Rule (c) indicates the criterion to accept or reject a scientific theory. It
implies that the opposite of the sentence “the theory is false” is not “the
theory is true,” but “the theory is consistent with facts” because there may
exist another theory able to explain the same reality. This rule can be illus-
trated with a simple example. Consider a theory that states, “Figure F is
a square” (suppose Figure F is unobservable). By definition, a square is a
rectangle with all four sides equal. If these characteristics are taken as the
assumptions of the theory, then the following empirical proposition can be
logically derived: the two diagonals must be equal. If empirical evidence
on the diagonals becomes available and they are not equal, Figure F cannot
be a square. The theory has been refuted by facts. However, if empirically
the diagonals are equal, we can only say that the prediction has been cor-
roborated; we cannot say that we have verified that F is a square, for the
figure could be a rectangle.
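The testing logic of this example can be sketched in code (an illustrative toy; the function name and the numbers are invented):

```python
def diagonals_equal(diagonal_1: float, diagonal_2: float) -> bool:
    """Beta proposition derived from the theory 'Figure F is a square':
    the two diagonals of F must be equal."""
    return diagonal_1 == diagonal_2

# If the observed diagonals differ, the theory is refuted: F is not a square.
assert not diagonals_equal(10.0, 12.0)

# If they are equal, the theory is only corroborated, not verified:
# a non-square rectangle also has equal diagonals.
assert diagonals_equal(10.0, 10.0)
```

Note that a passing test eliminates nothing; only a failing test is decisive.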
Therefore, the Popperian criterion to accept a proposition as scientific
knowledge is not based on theory alone or on empirical data alone; it is
rather based on the empirical refutation of theories, on the elimination of
false theories. Falsification leads us to an evolutionary (in the Darwinian
sense) scientific knowledge. “The evolution of scientific knowledge is, in
the main, the evolution of better and better [scientific] theories. This is a
Darwinian process. The theories become better adapted through natural
selection: they give us better and better information about reality. (They
get nearer and nearer to the truth)” (Popper 1993, p. 338). In sum, the
logic of scientific knowledge is this: falsification is the way to eliminate
false theories and thus to generate the progress of science. In this sense,
we may say that Popperian epistemology leads to the construction of a
critical science.
The assumptions of the Popperian epistemology are consistent with
the general principles of epistemology, established as meta-assumptions in
Table 1.1. They are clearly consistent with principles (i) and (ii), that is, the
Popperian epistemology implies rules to discover the functioning of the
real world, assuming that this real world is knowable. Referring to prin-
ciples (iii) and (iv), the Popperian epistemology proposes the logic of sci-
entific knowledge based on deductive logic and falsification as the principle
of demarcation. Therefore, regarding system Eq. (1.2) above, the scientific
rules (L) have been derived from the set of assumptions of the Popperian
epistemology (set A), for a given set of meta-assumptions (set B).

The Assumptions of Georgescu-Roegen's Epistemology

Social sciences seek to explain the functioning of human societies. We
may say that human societies constitute highly complex realities. At first
glance, the social world seems to be a more complex reality than the physi-
cal world. The notion of complexity refers to the large number and the
heterogeneity of the elements that constitute the particular reality under
study, and to the multiple factors that shape the relations between those
elements. Human diversity together with the multiplicity of human inter-
actions makes human societies intricate realities; moreover, the individuals
that make up human society are not identical, unlike atoms in the
physical world. Human society is a highly complex system because many
individuals interact and individuals themselves are complex systems.
The problem that concerns us now is to find the proper epistemology
for the social sciences. The Popperian epistemology presented above gives
us general scientific rules. According to Popper, these rules are applicable
to the natural and social sciences, for these types of sciences differ in scope,
not much in method (Popper 1976). However, the use of Popperian
epistemology in the social sciences is something that needs logical justi-
fication. To this end, this book will show, firstly, that social sciences and
physics indeed differ in scope, but, and contrary to Popper’s statement,
that they also differ in method.
How can a complex social reality be subject to scientific knowledge? It
will now be shown that complex realities are subject to scientific knowledge if,
and only if, they can be reduced to an abstract process analysis. This is the
process epistemology of Georgescu-Roegen (1971, Chap. IX), which will be
summarized in this section.
Conceptually, a process refers to a series of activities carried out in the
real world, having a boundary, a purpose, and a given duration; further-
more, those activities can be repeated period after period. The farming
process of production, for example, includes many activities having a given
duration (say, seasonality of six months), the purpose of which is, say, to
produce potatoes, which can be repeated year after year. The factory pro-
cess of production also includes many activities, but with a shorter dura-
tion, say, an hour, the purpose of which is, say, to produce shirts, which
can be repeated day after day.
The process epistemology makes the following assumptions:

First, the complex real world can be ordered in the form of a process, with
given boundaries through which input–output elements cross, and given
duration, which can be repeated period after period. This ordering is taxo-
nomic. Second, the complex real world thus ordered can be transformed into
a simpler, abstract world by constructing a scientific theory. This is the prin-
ciple of abstraction. By transforming the complex real world into an abstract
world, by means of a scientific theory, we can reach a scientific explanation
of that complex real world.

On the boundary of the process and the input–output elements, the
first assumption implies that we are able to separate those elements that
come from outside and enter into it—called the exogenous elements—from
those that come out from inside the process—the endogenous elements.
All the elements that participate in a process have thus been classified as
endogenous, exogenous, or underlying mechanisms. This is just a taxo-
nomic ordering of a process. Therefore, the first assumption says that the
complex real world can be represented in the form of a process.
The second assumption says the complex social reality can have a scien-
tific explanation if it is reducible to an abstract process, a simpler abstract
world, by means of a scientific theory, which assumes what the essential
elements of the process are. This is the well-known abstraction method.
Certainly, to present the complete list of the elements of a process would
be equivalent to constructing a map to the scale 1:1. As in the case of the
map, a complex reality cannot be understood at this scale of representation.
In its abstract, theoretical form, the complex reality is represented by
a map at a higher scale.

Although a process would include observable and unobservable
elements, the abstract process will select only those that are observable or
measurable. Call those elements that are observable endogenous variables
and exogenous variables. In order to explain the changes in the
endogenous variables, the object of the research, the scientific theory selects
only the essential exogenous variables and the most important underly-
ing mechanisms (unobservable) by which the endogenous and exogenous
variables are connected. The use of abstraction or the use of scientific the-
ory implies that some elements of the real-world process must be ignored.
The process must be represented at higher scales, as in maps. In sum, this
is how a complex real social world can be transformed into an abstract
world, into an abstract process, in which only the supposedly important
elements of the process are included, and the rest are just ignored.
How do we decide which elements are important in a process and
which are not? How is an abstract process constructed? The construc-
tion of an abstract process is made through the introduction of a scien-
tific theory, which is a set of assumptions, as was defined earlier. Hence,
the assumptions of the scientific theory will determine the endogenous
variables, the exogenous variables that are important in the process, and
the underlying mechanisms that are also important. A scientific theory
is, therefore, a logical artifice by which a complex real world can be
transformed into a simple abstract world. The assumption of the process
epistemology is that by constructing the abstract world, by means of a sci-
entific theory, we will be able to explain and understand the complex real
world: We will know the determinants of the endogenous variables and
also the causality relations, namely, the relations between endogenous
and exogenous variables.
Figure 1.1 depicts the diagrammatic representation of an abstract pro-
cess. The segment t0 − t1 represents the duration of the process, which is
going to be repeated period after period; X is the set of exogenous variables,
and Y is the set of endogenous variables. The shaded area indicates the
underlying mechanism by which X and Y are connected. What happens
inside the process is not observable, as indicated by the shaded area in
the figure. If it were, the interior of the process would be considered as
another process in itself, with other endogenous and exogenous variables
and other mechanism; the latter mechanism would also be observable and
then constitute another process, and so on. Thus, we would arrive at the
logical problem of an infinite regress. We may avoid this trap by making
assumptions about the mechanism and maintaining it fixed. Ultimately,
there must be something hidden beneath the things we observe. Science
seeks to unravel those underlying elements.

Fig. 1.1  Diagrammatic representation of an abstract process
The scientific theory must also include assumptions about how the
abstract process operates. The social relations taking place within the
mechanism constitute the structural relations. These social interactions
must have a solution, which will be repeated period after period. Call this
solution the equilibrium conditions. The outcome of the abstract process
showing the relations between endogenous and exogenous variables—
more precisely, the endogenous variables as a function of the exogenous
variables alone—constitutes the reduced form relations.
The reduced form relations may be represented as the following equa-
tion: Y = F(X), where Y and X are vectors. In this equation, the exog-
enous variables X are the ultimate factors in the abstract process that
determine the values of the endogenous variables Y, after all internal rela-
tions or structural relations have been taken into account. The structural
equations show only the proximate factors that affect the endogenous vari-
ables Y. Moreover, according to the reduced form equation, changes in
the exogenous variables will cause changes in the endogenous variables.
Therefore, the reduced form equations may be called the causality rela-
tions of the scientific theory.
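As an illustration, the reduced form may be sketched as a function from exogenous to endogenous variables. The variable names and linear coefficients below are arbitrary placeholders, not part of any actual economic theory:

```python
def reduced_form(technology: float, labor: float) -> dict:
    """Reduced form Y = F(X): the endogenous variables as a function of
    the exogenous variables alone, after the structural (internal)
    relations have been solved out. Coefficients are placeholders."""
    return {
        "output": 2.0 * technology + 0.5 * labor,  # endogenous variable 1
        "wage": 1.5 * technology - 0.1 * labor,    # endogenous variable 2
    }

# Causality relation: a change in an exogenous variable (technology)
# causes a change in the endogenous variables.
before = reduced_form(technology=1.0, labor=10.0)
after = reduced_form(technology=2.0, labor=10.0)
assert after["output"] > before["output"]
```

The structural equations, by contrast, would appear inside the function body; only the composed mapping from X to Y is exposed.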
The rules of scientific research that are logically derived from
Georgescu-Roegen's epistemology include:

1. Construct an abstract process to represent the complex social world
with the help of a scientific theory;
2. Select a particular type of abstract process according to the nature of
the process repetition (static, dynamic, or evolutionary, to be shown in
Chap. 3);
3. Submit the reduced form equation of the scientific theory to empirical
test.

Combining the Two Epistemologies into One


Comparing the set of assumptions of Popper’s epistemology and Georgescu-
Roegen’s epistemology, we can see that they do not contradict each other;
thus, they are complementary and can be combined into a single epistemol-
ogy. Call this combination the composite epistemology. To the need of scien-
tific theory and the principle of falsification of Popperian epistemology, the
process epistemology adds the principle of abstraction, the need of scientific
theory for the particular purpose of reducing the complex real world into a
simpler, abstract world so as to be understood in terms of endogenous and
exogenous variables, and underlying mechanisms; moreover, whether the
abstract world is a good approximation to the real complex world is resolved
by using the falsification principle.
Therefore, the composite epistemology assumes the following:

We can explain and understand a complex real world if, and only if, it is
reducible to a simpler and abstract world in the form of an abstract process,
by means of a scientific theory, which is also falsifiable; such scientific theory
exists and can be discovered.

Comparing the rules that were derived from Georgescu-Roegen's
epistemology with those derived from Popperian epistemology (presented
in Table 1.2), we can see that rule (1) and rule (a) are consistent with each
other. Scientific theory is needed to understand the real world in both
epistemologies. However, process epistemology is more precise in that the
role of the scientific theory is clearly established: the set of assumptions by
which a complex real social world is transformed into an abstract process.
Rule (2) is absent in Popperian epistemology, but it does not contradict it.
Rules (b) and (c), falsification as the demarcation principle, are absent in
process epistemology, but they can be introduced into rule (3), as the two
epistemologies complement each other.
It is clear that both epistemologies are complementary, for the two sets
of assumptions do not contradict each other; thus, they can be seen as a
single logical system, from which a single set of rules for scientific research
can be derived. This set of rules is obtained just by consolidating the
comparisons made above. The derived rules are the following:

1. Construct an abstract process to represent the complex social world by
means of a scientific theory;

2. Select a particular type of abstract process according to the nature of
the process repetition (static, dynamic, or evolutionary, to be shown in
Chap. 3);
3. Submit the scientific theory to the falsification process.

It should be clear that the composite epistemology is also logically
consistent with the assumptions of the meta-theory B, which was presented in
Table 1.1. The reason is that each epistemology taken separately complies
with that consistency, as was proven earlier, and that the assumptions of
the composite epistemology are just the elementary aggregation of the
assumptions of both epistemologies, for they are complementary.
It should be noted that the composite epistemology is now applicable
to complex realities, such as those studied by economics and the social sci-
ences in general. We have given Popper’s epistemology the needed logic to
be applicable to complex realities by adding the principles of
Georgescu-Roegen's epistemology. This is the most significant finding of this chapter.
However, the derived set of research rules is still too general. In order
to make them operational, a set of more specific research rules will have
to be developed. This calls for a scientific research method, containing the
rules logically derived from the composite epistemology in a more practi-
cal way, which will be called the alpha-beta method. This is the subject of
the next chapter.
Chapter 2

Alpha-Beta: A Scientific Research Method

Abstract  In this chapter, a set of rules for scientific research, which is called
the alpha-beta method, is logically derived from the composite epistemol-
ogy. This method makes the composite epistemology operational. Alpha
propositions constitute the primary set of assumptions of an economic the-
ory, by which the complex real world is transformed into a simple, abstract
world; beta propositions are logically derived from alpha and are, by con-
struction, empirically falsifiable. Alpha propositions are unobservable but
beta propositions are observable. Thus, the economic theory is falsifiable through beta
propositions. Beta propositions also show the causality relations implied by
the theory: the effect of exogenous variables upon endogenous variables.
The principles of the alpha-beta method will constitute the rules for scien-
tific research in economics in later chapters.

The highly complex social world will be subject to scientific knowledge if,
firstly, it is reducible to an abstract process, as indicated by
Georgescu-Roegen's epistemology; secondly, if the scientific theory is falsifiable, which
comes from Popperian epistemology. As shown in the previous chapter,
both epistemologies are not contradictory and can be combined into a
single epistemology. To make this composite epistemology operational,
this chapter derives a particular research method, containing a practical set
of rules for scientific research, which is called the alpha-beta method.
The debate about the applicability of Popperian epistemology in
economics is that economic theories “are rarely falsifiable,” as shown in

© The Editor(s) (if applicable) and The Author(s) 2016 15


A. Figueroa, Rules for Scientific Research in Economics,
DOI 10.1007/978-3-319-30542-4_2
16  A. Figueroa

the preface. We need a method to deal with the problem of falsification in


economics. The objective of the alpha-beta method is precisely to ensure
that economic theories are constructed in such a way that they are always
falsifiable. Therefore, the alpha-beta method is not just another name for
a known research method; it is truly a new scientific research method, the
application of which should contribute to the growth of the science of
economics.

The Alpha and Beta Propositions


It will help to introduce the following concept, with which Georgescu-Roegen
presents the structure of scientific knowledge as a logically ordered
system, as follows:

In terms of the logical ordering of its propositions, any particular field of
knowledge can be separated into two classes: alpha and beta, such that
every beta proposition follows logically from …alpha propositions and no
alpha proposition follows from some other alpha propositions. (Georgescu-Roegen
1971, p. 26)

The task before us is to apply this definition to the composite
epistemology and particularly to make it consistent with the principle of falsification.
Let  alpha propositions constitute the foundation or primary assump-
tions of the scientific theory and beta propositions the empirical predic-
tions of the theory. The assumptions of a theory seek to construct an
abstract world to make the complex world understandable. Because the
social world is too complex to understand, abstraction must be applied,
which implies ignoring the variables that are supposedly unessential and
retaining only those that are supposedly essential. This is the role of a
scientific theory. Hence, the objective of the theory is to construct an
abstract world that resembles best the real complex world. This is consis-
tent with Georgescu-Roegen’s epistemology.
What are the logical requirements for a proposition to be considered
an alpha proposition?
Looking back to the abstract process diagram (Fig. 1.1, Chap. 1), it was
clear that there were observable and unobservable elements. Alpha
propositions refer to the latter and beta propositions to the former. Alpha
propositions are the assumptions of the scientific theory and must deal
with the mechanisms or forces that connect the endogenous and exogenous
variables. Therefore, alpha propositions refer to the set of assumptions
about the underlying factors operating in the relationships between the
endogenous and the exogenous variables. Alpha propositions are unob-
servable, but they must be non-tautological because they need to generate
beta propositions, which should be observable and falsifiable. The set of
alpha propositions must constitute a logical system, free of internal con-
tradictions. This is just the definition of scientific theory presented earlier.
Hence, a scientific theory is a set of alpha propositions.
Can the assumptions of a scientific theory be logically derived from
empirical observation? No, they cannot. The main reason is that the the-
ory precisely seeks to explain those observations, so it cannot assume what
it intends to explain. Alpha propositions intend to discover the essential
factors that lie beneath the observed facts; therefore, the mechanisms
contained in alpha propositions are unobservable. What we can get from
reality by empirical observation is a description of it, not an abstraction.
The listing of all elements one observes in the real world cannot discover
by itself the essential and nonessential variables. As will be demonstrated
later on (Chap. 7), there is no logical route from empirical observations
to scientific theory.
How are then the assumptions of a scientific theory chosen? Not by
empirical observations. Do these assumptions need justification? No, they
do not. The assumptions are in the nature of axioms; they do not need
logical justification. The reason has to do with logical arguments: If the set
of assumptions needed justification, another set of assumptions to justify
them would be needed, which in turn would need another set to justify
the latter, and so on; hence, we would end up in the logical problem
of infinite regress. The assumptions of a scientific theory are, to some
extent, chosen arbitrarily. Therefore, the need to test the theory becomes
a requirement for scientific knowledge.
Beta propositions are derived from alpha propositions by logical deduc-
tion and make the theory comply with the testing requirement. Beta prop-
ositions are, by construction, observable and refutable because they refer
to the relations between endogenous and exogenous variables, which are
observable. Then the logical relations between alpha and beta proposi-
tions are as follows:

(a) If alpha is true, then beta must be true.
(b) If beta is false, then alpha must be false.
(c) If beta is true, then alpha is corroborated.
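These three relations amount to the truth table of material implication (alpha implies beta), which can be checked mechanically; a minimal sketch:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: 'if p then q' fails only when p is true and q is false."""
    return (not p) or q

for alpha in (True, False):
    for beta in (True, False):
        if implies(alpha, beta):
            if alpha:
                assert beta        # (a) alpha true -> beta true
            if not beta:
                assert not alpha   # (b) beta false -> alpha false (modus tollens)

# (c) A true beta does not pin down alpha: both assignments below are
# consistent with 'alpha implies beta', so a true beta only corroborates alpha.
assert implies(True, True) and implies(False, True)
```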

Therefore, beta propositions are observable and refutable, and thus they
can be utilized to falsify the theory. This is consistent with the Popperian
epistemology.
Alpha propositions are chosen somewhat arbitrarily, as said earlier.
However, they are subject to some logical constraints: they must be unob-
servable and non-tautological. The condition of unobservability is required
because alpha propositions refer to the underlying forces in the workings
of the observed world. Furthermore, alpha propositions that are
non-tautological will be able to generate beta propositions, which are both
observable and refutable.
Unfalsifiable propositions are unobservable or, if observable, they are
tautologies in the sense given to this term in logic: propositions that are
always true. As examples of propositions that are unfalsifiable, consider
the following:

“Men die when God so wishes”
“If you have faith in this medicine, you will get well”
“It will rain or not rain here tomorrow”

The first example is unfalsifiable because God's wishes are
unobservable; hence, a person is alive because God so wishes and when he dies
it is just because God so wanted. The proposition will never fail. The
second is also unfalsifiable because if the person complains that he is not
getting well, he or she can be told, “You had no faith in this medicine.”
This proposition will never fail because faith is unobservable. The third is
tautological because it includes all possible outcomes. Thus, tautological
propositions are unfalsifiable, useless for scientific knowledge, for they can
never fail.
Consider now the statement “people act according to their desires.” It is
unobservable but tautological, and thus unfalsifiable. This statement will
always be true because whatever people do will always reflect their desires;
hence, it cannot be an alpha proposition and no beta proposition can be
derived from it. However, the statement “people act guided by the moti-
vation of egoism” (not of altruism) is unobservable and non-tautological,
and thus qualifies to be an alpha proposition. A beta proposition can logi-
cally be derived from it. For example, selfish motivations imply free-riding
behavior toward public goods; therefore, people must be compelled (through
taxes) to finance public goods (parks and bridges). This empirical
proposition could in principle be false; thus, it is a beta proposition.

Take note that beta propositions are observable and refutable, even
though they are derived from alpha propositions, which are unobservable.
This paradox is only apparent because alpha propositions are free from
tautologies; moreover, alpha propositions assume the endogenous variables (Y)
and exogenous variables (X) of the abstract process, which are observable,
and beta propositions refer to the empirical relations between X and Y. If
beta propositions cannot be derived from a theory, this “theory” is actu-
ally not a theory; it is a tautology, useless for scientific knowledge. To take
the example shown above: the statement “people act according to their
desires” is not an alpha proposition, for no beta proposition can be logi-
cally derived from it. It follows that the alpha-beta method eliminates any
possibility of protecting scientific theories from elimination because beta
propositions are falsifiable. This is so by logical construction.
Although subject to some logical constraints, the set of alpha proposi-
tions is established somewhat arbitrarily. However, this presents no major
problem for falsification because the theory is not given forever. On the
contrary, a theory is initially established as part of an algorithm, of a
trial-and-error process, the aim of which is to reach a valid theory by
eliminating the false ones. If the initial theory fails, a new set of assumptions is
established to form a new theory, and a new abstract world is thus con-
structed. If this second abstract world does not resemble well the real
world, the theory fails and is abandoned, and a new set of assumptions
is established, and so on. A valid or good theory is the one that has con-
structed a simple abstract world—in the form of abstract process—that
resembles well the complex real world.
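The trial-and-error process described above amounts to an elimination algorithm over candidate theories. In the sketch below, the theories, their beta propositions (written as predicates over observed facts), and the data are all invented for illustration:

```python
def survives(beta_propositions, facts) -> bool:
    """A theory survives testing only if every one of its beta
    propositions is consistent with the facts; a single failure
    eliminates the theory."""
    return all(beta(facts) for beta in beta_propositions)

facts = {"X": 2.0, "Y": 4.0}  # one observation of the exogenous and endogenous variables
candidate_theories = {
    "T1: Y = X + 1": [lambda f: f["Y"] == f["X"] + 1],   # refuted by the data
    "T2: Y = 2X":    [lambda f: f["Y"] == 2 * f["X"]],   # corroborated
    "T3: Y = X**2":  [lambda f: f["Y"] == f["X"] ** 2],  # also corroborated
}
surviving = [name for name, betas in candidate_theories.items()
             if survives(betas, facts)]
# T1 is eliminated for good; T2 and T3 are accepted only provisionally,
# until new data (say, X = 3) discriminates between them.
assert surviving == ["T2: Y = 2X", "T3: Y = X**2"]
```

That more than one theory can survive the same facts is exactly the point made earlier: corroboration is consistency, not truth.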
Under the alpha-beta method, the valid theory is found by a
trial-and-error process, in which we witness the funerals of some theories. The beta
propositions derived logically from the alpha propositions are observable,
falsifiable, and mortal. This is consistent with the Darwinian evolutionary
principle of scientific progress. Hence, what the set of assumptions of a
theory needs is not justification; what it needs is empirical falsification,
testing it against the facts of the real world using the beta propositions.
Beta propositions thus have the following properties:

• Beta propositions show the falsifiable empirical predictions of a
scientific theory. The reason is that beta propositions represent the
reduced form relations of the abstract process: the relations between
the exogenous and endogenous variables that the theory assumes.
Hence, beta propositions are logically derived from the theory and
are observable and refutable; if beta propositions are not consistent
with facts, then the theory fails and is rejected; if the beta propositions
are consistent with facts, then the theory is accepted.
• Beta propositions also predict causality relations: changes in the
exogenous variables (X) will cause changes in the endogenous
variables (Y), which again are observable and falsifiable, that is,
Y = F(X), from the process diagram (Fig. 1.1, Chap. 1). Therefore,
causality requires a theory, that is, no theory, no causality.

Because beta propositions indicate causality relations, for each
endogenous variable of the theory there will exist a causality relationship; hence,
there will be as many causality relations or beta propositions as there are
endogenous variables (variables the theory seeks to explain) in the theo-
retical system.
According to the alpha-beta method, if the abstract world constructed
by the theory is a good approximation of the real world, we should
observe in the real world what the beta propositions say. Although a beta
proposition is logically correct—it is the reduced form equation of the
theoretical system—it may be empirically false. The reason is that the set
of assumptions contained in the alpha propositions was selected somewhat
arbitrarily. Falsification of a scientific theory is thus a logical necessity.
In order to illustrate the principle that logically correct propositions may
be empirically false, consider the following syllogism:

All men are immortal
Socrates is a man
Then, Socrates is immortal

The conclusion follows logically from the premises, but it is empirically
false. The reason falls upon the first premise, which is empirically false. In
the alpha-beta method, by contrast, the premises (the assumptions) are
unobservable and they may be false in the sense that the underlying forces
of the workings of the real world are not those assumed by the theory;
then the logically correct proposition may be empirically false. Consider
the following example:

Capitalist firms seek to maximize employment
Workers seeking jobs are fixed in number
Then, the capitalist system operates with full employment

In this case, the conclusion follows logically from the premises, but it is
empirically false. Capitalism is characterized by the existence of unemploy-
ment. The reason for failure falls upon the premises, particularly on the
assumption about the motivation of capitalists, which is proved wrong:
capitalists do not seek to maximize employment (but, say, seek to maximize
profits).
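The distinction between logical validity and empirical truth in these syllogisms can be restated as a toy computation (the premise values are, of course, stipulated):

```python
def syllogism(premise_1: bool, premise_2: bool) -> bool:
    """Valid deduction: whenever both premises hold, the conclusion holds.
    The validity of the inference is independent of whether the premises
    are empirically true."""
    return premise_1 and premise_2

# Within the logical system, assuming both premises, the conclusion follows.
assert syllogism(True, True) is True

# Empirically, the first premise ('all men are immortal') is false, so the
# logically valid conclusion is not supported by the facts.
premise_1_empirically_true = False
assert syllogism(premise_1_empirically_true, True) is False
```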
A theory will fail because the abstract world is not a good approxima-
tion of the real world; it has made the wrong assumptions about what the
essential factors of the economic process are. If, in spite of the abstraction,
the so-constructed simple abstract world resembles well the complex real
world, the theory constitutes a good approximation to the real world. The
abstract world resembles the real world; accordingly, we say the theory
explains the reality. Then this is a valid theory.
To be sure, in the alpha-beta method, submitting a theory to the pro-
cess of falsification has the following logic. Because the theory is in prin-
ciple false (it is an abstraction of the real world!), it must be proven that it
is not false. If the theory were in principle true, there would be no need
to prove that it is, or the proof would be discretionary. Compare this with a
court of law, in which the individual is in principle innocent of a crime (a
legal right) and it must be proven that he or she is guilty; the falsification
principle says that the individual, the theory in this case, is in principle
guilty, and it must be proven that it is not. Therefore, if the theory is found
true, in spite of the expectation that it was false, then the theory is a good
one. Falsification is also similar to the idea that an honest person is one
who, having had the opportunity to commit a crime, did not do it; if the
person never had the chance, we cannot say.
From the example of the theory “Figure F is a square,” shown earlier,
it is clear that falsification through beta propositions implies that the alpha
proposition cannot be proven true; it can only be proven false. Why? This is
so because the same beta propositions could be derived from another set of
alpha propositions. It may be the case that there is no one-to-one relation
between alpha propositions and beta propositions. Alpha implies beta, but
beta may not imply alpha. If Figure F is a square, then it follows that
the two diagonals must be equal. However, if the two diagonals are equal,
it does not follow that Figure F is a square; it could be a rectangle.
This simple example shows another property of the alpha-beta method.
If all beta propositions of the theory coincide with reality, the theory is
not refuted by the available facts; if at least one beta proposition fails, the
22  A. Figueroa

theory fails to explain the reality. If the two diagonals are not equal, it follows
that the theory fails: Figure F cannot be a square.
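This one-way implication can be illustrated with a small sketch (the code and figures are invented for illustration, not taken from the book): the beta proposition "equal diagonals" follows from the alpha proposition "Figure F is a square," yet a rectangle also passes the diagonal test, so passing the test can corroborate the theory but never verify it.

```python
from math import dist

def diagonal_lengths(vertices):
    """Lengths of the two diagonals of a quadrilateral [A, B, C, D]."""
    a, b, c, d = vertices
    return dist(a, c), dist(b, d)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
rectangle = [(0, 0), (6, 0), (6, 2), (0, 2)]

# Beta proposition derived from "F is a square": equal diagonals.
d1, d2 = diagonal_lengths(square)
assert d1 == d2                 # the square passes the beta test

# But equal diagonals do not prove a square: the rectangle passes too,
# so the beta test can refute alpha, never verify it.
d1, d2 = diagonal_lengths(rectangle)
assert d1 == d2
```

If, instead, the diagonals had turned out unequal, the alpha proposition would have been refuted outright, which is exactly the asymmetry the text describes.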
Consider the case in which there is a one-to-one relation between alpha
and beta propositions. Let the theory say, “People seek to kill their credi-
tors when repayment is unviable.” Individual B is suspected of individual
C’s death because B was C’s debtor. Suppose only one fingerprint was
found at the crime scene. If the fingerprint is that of B, then he is
the killer; if it is not, then he is not the killer. This is so because fingerprints
are personal. The same conclusion would follow from DNA tests. In the
previous example, by contrast, the fact of equal diagonals does not belong to
the square figure alone. In the social sciences, we deal with aggregates;
therefore, there is no “fingerprint” variable from which to draw
definitive conclusions as in the case of individuals; the relevant example
is that of the “Figure F is a square” theory.
Logically, therefore, scientific theories in the social sciences cannot be
proven true; they can only be corroborated. To be sure, here “corrobo-
ration” means consistency, not truth. It also means assessing how far the
theory has been able to prove its fitness to survive by standing up to tests.
How many wars has the theory survived? How far has the theory been
corroborated?
In sum, scientific theory is a logical artifice to attain scientific knowl-
edge. A scientific theory allows us to construct an abstract world that
intends to resemble well the complex real world. If there is no theory,
there is no possibility of scientific knowledge. However, how accurate
is the approximation of the theory to the real world? The theory needs
empirical confrontation against reality. The prior set of assumptions needs
posterior empirical falsification. The reason behind falsification is that the
assumptions of the scientific theory were established arbitrarily (for there
is no other way). If in this confrontation theory and reality are inconsistent,
the theory fails, not reality; that is, the arbitrary selection of its assumptions
is proved wrong.
The rules for scientific research in economics derived from the compos-
ite epistemology, shown in Chap. 1, can now be restated in terms of the
alpha-beta method, as follows:

1. The rule that scientific theory is needed for explaining a complex real
world is given by constructing the set of alpha propositions.
2. The rule that falsification is the criterion of demarcation is given by the
beta propositions, derived logically from the set of alpha propositions.
   The rule of rejection-acceptance of a scientific theory is given by the
   iterations of alpha-beta propositions, eliminating false theories until
   the valid one is found.
3. The use of abstraction implies that the beta propositions need not fit all
   empirical cases, as there will be exceptions; hence, falsification requires
   statistical testing. The alpha-beta method as an algorithm is shown in
   Table 2.1 below.

Table 2.1 The alpha-beta method

α1 ⇒ β1 → [β1 ≈ b]
   If β1 = b, α1 is consistent with facts and explains reality
   If β1 ≠ b, α1 does not explain reality and is refuted by facts. Then,
α2 ⇒ β2 → [β2 ≈ b]
   If … (the algorithm is continued)

Therefore, the alpha-beta method constitutes a logical system for constructing
scientific theories of complex realities and submitting those theories to the
process of falsification. This is a scientific research method.
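The algorithm of Table 2.1 can be sketched in code. This is a hedged illustration with invented names and data: each candidate theory pairs its alpha propositions with testable beta propositions (predicates over the data set b); theories refuted by the facts are eliminated, and the first surviving theory is accepted provisionally.

```python
def alpha_beta(theories, data):
    """Iterate over candidate theories; discard each one refuted by the
    facts, and provisionally accept the first theory whose beta
    propositions are all consistent with the data."""
    for name, beta_propositions in theories:
        if all(beta(data) for beta in beta_propositions):
            return name          # corroborated: accepted provisionally
        # refuted: at least one beta proposition failed; try the next alpha
    return None                  # every candidate theory was refuted

# Toy data set b and two rival theories about it.
b = [2, 4, 6, 8]
theories = [
    ("alpha_1", [lambda d: sum(d) < 0]),            # refuted by facts
    ("alpha_2", [lambda d: all(x > 0 for x in d),   # consistent
                 lambda d: sum(d) == 20]),
]
print(alpha_beta(theories, b))   # -> alpha_2
```

The returned theory reigns only until new data or a superior theory appears, which in this sketch would simply mean calling the function again with an updated list.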

The Workings of the Alpha-Beta Method


According to the alpha-beta method, alpha propositions are not observ-
able and thus cannot directly be subject to empirical refutation; however,
they can be refuted indirectly, through beta propositions. The beta proposi-
tions are utilized to seek refutation of the alpha propositions, which make assumptions
to transform the complex real world into a simpler, abstract world. The
principle of abstraction is contained in the alpha propositions. Logically,
therefore, a beta proposition can fit only the general or typical cases of the
real world. Due to the use of abstraction, it may not fit all the observed
cases and exceptions may exist. Therefore, the refutation of a theory needs
to be based on statistical testing; the relationships between the average
values of the endogenous and exogenous variables are the critical ones.
This is another scientific rule of the alpha-beta method.
A single empirical observation that contradicts a beta proposition is
insufficient to refute the theory, for the statistical value of one observation
is nil. That observation could just correspond to a statistical error, a devia-
tion from the average by pure chance. By comparison, a single counter-­
example is sufficient to invalidate a theorem in mathematics, but it is not
sufficient to refute a scientific theory. The empirical proposition “smoking
causes cancer” cannot be refuted by finding someone who smokes but has
no cancer, as this individual can be the exception. Accordingly, a distinction
must be made between error of a theory and failure of a theory.
The continuous confrontation between theory and empirical data is
the basic property of the alpha-beta method. From funeral to funeral of
theories (false theories are eliminated and good theories take their place),
science makes progress.
Table 2.1 depicts the scientific research rules of the alpha-beta method.
From the set of alpha propositions α1, the set of beta propositions β1 is
logically derived (indicated by the double arrow). The set β1 must then
be subject to the operational procedure of statistical testing (indicated by
the single arrow). While the double arrow indicates logical deduction, the
single arrow indicates operational procedure, or the task to be performed.
Statistical testing of the theory implies seeking a statistical conformity
between beta propositions and the available set of statistical associations
between endogenous and exogenous variables, the set b. This search for sta-
tistical conformity is indicated by the double-swung dash symbol ( ≈ ) , which
means investigating for “approximately equal to.” If statistically (not
mathematically) β1 = b , then α1 is consistent with reality, facts do not
refute the theory; therefore, there is no reason to reject the theory at
this stage of the research, so we may accept it, although provisionally,
until new empirical evidence or new theories appear. If β1 ≠ b , then real-
ity refutes the theory α1, and another theory α2 should be developed; thus
the algorithm is continued.
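The statistical (not mathematical) reading of β1 = b can be sketched as follows; the data and the helper function are invented for illustration. The beta proposition predicts a positive association between X and Y, and one deviant observation does not overturn a clearly positive estimated association.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient, computed by hand."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Data set b: mostly increasing in X, with one exception at x = 5.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 4.2, 4.8, 3.0, 6.1, 7.2, 7.9]

r = pearson_r(xs, ys)
predicted_sign = +1               # the beta proposition: Y rises with X
consistent = (r > 0) == (predicted_sign > 0)
print(round(r, 2), consistent)    # r is positive despite the exception
</imports>```

In actual research the sign check would be backed by a significance test on r; the point here is only that refutation rests on the average relationship, not on any single observation.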
It should be noted that in the alpha-beta method facts can refute a
theory, but facts cannot verify a theory. The opposite of the conclusion
“the theory is false” is not the “theory is true,” but “the theory is con-
sistent with facts.” When facts have not been able to refute the theory,
we say that “the theory is consistent with facts” or “the theory has been
corroborated”, and we accept the theory provisionally, until new superior
theory or new empirical facts appear.
As in the case of the social world, the biological world may also be con-
sidered as a highly complex reality. Indeed, human societies are biological
species. Hence, the alpha-beta method is also applicable to biology.
An example of application of the alpha-beta method to biology is the
following:
α Plants seek to maximize the reception of solar energy.
β Then, plants will position their leaves in a particular distribution so
   as to maximize exposure to the sun: each leaf collects its share of sun,
   interfering the least with other leaves.
b We observe that tree leaves form a canopy, a near-continuous ceiling.
Then, b = β . We can conclude that α is a valid scientific theory that explains
plant behavior.

The alpha proposition is the scientific theory. It is an assumption about
the underlying forces operating in the functioning of the real world; thus,
it is unobservable and non-tautological. The beta proposition is derived by
deductive logic from the alpha. The term b indicates the statement about
facts. The last row indicates that because beta proposition and b coincide,
the assumption of the theory cannot be rejected; fact b does not refute the
theory. (If the distribution of tree leaves had shown no canopy, then the
theory would have failed.) Therefore, the theory explains the behavior of
plants and why the leaves of trees form canopies, that is, the why-question
is answered.
As this elementary biological example illustrates, a scientific theory is
unobservable and is submitted to the refutation process indirectly, through
its beta propositions. For one thing, in this case it is unviable to do the
direct refutation of a theory by asking the trees what their motivations
are; it is also unnecessary. It is a principle of the alpha-beta method that
unobservable propositions can be transformed into observable ones—a
scientific theory as set of alpha propositions can be transformed into beta
propositions. A good scientific theory has empirical implications over the
real world, which can be tested against facts. The theory is tested indirectly.
In the social sciences, the same principle of the alpha-beta method
applies. Even though, in contrast with plants, we may ask people about
their motivation, it is unviable and unnecessary for falsification. It is unvi-
able because beta propositions refer to observable propositions, to human
behavior, to what people do—not to what people say they do. It is
unnecessary because we can make assumptions about people’s motivations
(the alpha proposition, unobservable), which can be transformed into
an observable proposition, which can then be confronted against facts. If
facts refute the beta proposition, we know that the alpha proposition (the
theory) is false. If they coincide, the theory is consistent with facts and then
we may accept it provisionally. Whatever people’s real motivations are, they
are equivalent to what the corroborated theory says.
Table 2.2 Matrix of beta propositions or matrix of causality

                        Exogenous variables
Endogenous variables    X1      X2      X3

Y1                      +       +       ?
Y2                      −       0       +

If the theory is accepted, it follows that its assumption on motivations is
a good approximation of what the real motivations of individuals are; it is as
if people did what the alpha propositions say. If the scientific theory fails,
people act guided by motivations other than those established by the alpha
proposition. Therefore, people’s motivations, the forces underlying their
behavior, can be discovered through the alpha-beta method. To know these
motivations by asking people directly is unnecessary and, more importantly,
unreliable. People know how to lie and may decide to lie because they feel
embarrassed to confess their true motivations (say, seeking money above all).
It should also be noted that failure of a single beta proposition is suf-
ficient for refuting a scientific theory. Therefore, a theory is valid if, and
only if, none of its beta propositions fails. Table 2.2 illustrates this prop-
erty of the alpha-beta method. Let the scientific theory have two endog-
enous variables (Y1 and Y2) and three exogenous variables (X1, X2, and X3).
Then the beta propositions can be represented in matrix form. The effect
of changes in the exogenous variables upon Y1 is given by the first row of
the table: the effects are positive, positive, and undetermined; similarly,
the signs of the second row indicate the effect of changes in the exogenous
variables upon Y2: negative, no-effect or neutral, and positive.
The two endogenous variables of the theory give rise to two beta prop-
ositions, one for each endogenous variable. Each row of the matrix shows
the corresponding beta proposition. Thus, we can write
                          +   +   ?
Proposition beta 1:  Y1 = F(X1, X2, X3)

                          −   0   +
Proposition beta 2:  Y2 = G(X1, X2, X3)

The relation between each endogenous variable and the exogenous
variables is represented by the functions F and G. The signs on top of each
exogenous variable indicate the direction of the effect of changes in the
exogenous variables upon changes in each endogenous variable.
The matrix shown in Table 2.2 may also be called the causality matrix.
Thus, an increase in the exogenous variable X1, holding the values of the
other two exogenous variables (X2 and X3) fixed, will cause an
increase in the value of the endogenous variable Y1 and a fall in Y2. Hence,
the functions F and G show the causality relations of the theory. These are
the reduced form equations of the scientific theory.
Falsification can now be analytically defined as follows: A theory fails
if one of the signs of the matrix is different from the sign of the observed
statistical associations between the corresponding variables. This is a suf-
ficient condition to have a theory refuted by facts. It should be clear that
the cell in which the effect is undetermined cannot be used to refute the
theory.
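This sign-matching rule can be sketched in code. The matrices below are hypothetical, using the signs of Table 2.2: the theory is refuted as soon as one determinate predicted sign disagrees with the sign of the observed statistical association, while the undetermined "?" cell is excluded from the test.

```python
# Predicted causality matrix (rows: endogenous, columns: exogenous).
PREDICTED = {
    "Y1": {"X1": "+", "X2": "+", "X3": "?"},
    "Y2": {"X1": "-", "X2": "0", "X3": "+"},
}

def refuted(predicted, observed):
    """observed maps the same cells to estimated signs '+', '-', '0'.
    One mismatched determinate sign is sufficient to refute the theory;
    '?' cells cannot be used to refute it."""
    for y, row in predicted.items():
        for x, sign in row.items():
            if sign != "?" and sign != observed[y][x]:
                return True     # one failed beta proposition suffices
    return False

# Hypothetical estimated signs: the X3 cell of Y1 differs, but that
# effect is undetermined in the theory, so it cannot refute it.
observed = {
    "Y1": {"X1": "+", "X2": "+", "X3": "-"},
    "Y2": {"X1": "-", "X2": "0", "X3": "+"},
}
print(refuted(PREDICTED, observed))   # -> False: theory is corroborated
```

Flipping any determinate cell of `observed` (say, making the X1 effect on Y2 positive) would make `refuted` return True, illustrating that a single failed beta proposition eliminates the theory.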
In the case of falsifying several theories at the same time, given data set
b, some theories will be false and some will be consistent. Those theories
that survive the entire process of falsification will become the corroborated
theories (not the verified or true theories), whereas the theories that fail
are eliminated. The corroborated theory will reign until new information,
new statistical testing methods, or a new superior theory appears. A theory
is superior to the others if it derives the same beta propositions as the oth-
ers, but in addition derives other beta propositions that are consistent with
facts, which the other theories cannot. A theory is thus superior to others
when it can explain the same facts that the others can and some additional
facts that the others cannot.
From the alpha-beta method, it also follows that data alone cannot
explain real phenomena. Data alone—data set b—can show statisti-
cal association or correlation between empirically defined variables, but
that is not causality. Causality refers to relations between exogenous and
endogenous variables, which can only be defined by the assumptions of
a scientific theory. There is no logical route from statistical association or
correlation to theory and then to causality (no matter how sophisticated
the statistical testing is).
Consider the following usual claim: “To establish causality, let data
speak for themselves.” This statement is logically false. Facts can never
speak for themselves because there is no logical route from facts to scien-
tific theory and causality. That route would imply using inductive logic:
from a set of observations, we can discover what factors are underly-
ing the observed phenomena; that is, from facts b we go to scientific
theory α. Popperian epistemology assumes that such logic does not exist.
Popperian epistemology assumes that scientific knowledge goes from
alpha propositions to beta propositions, which are falsified against facts b
(more on inductive epistemology in Chap. 7 below). Causality requires
a scientific theory because exogenous and endogenous variables can only
come from a theory. Variables are not intrinsically endogenous or exog-
enous. Reality can only be understood in the light of a valid scientific
theory.
This chapter has shown that the alpha-beta method is in accord with
the composite epistemology. This method follows hypothetico-deductive
logic; it also follows the demarcation principle of falsification: any propo-
sition that in principle cannot be falsified is outside the realm of science.
Hence, the alpha-beta method is a particular method within the Popperian
methodology, consistent with the general principles of Popperian episte-
mology. Moreover, the alpha-beta method is also consistent with
Georgescu-Roegen’s epistemology, for it uses abstract processes, the
epistemology suited to studying highly complex realities,
such as the social reality. This chapter has also shown the operational
scientific research rules that are contained in the alpha-beta method, as
displayed in Table 2.1 above.
Up to now, we have shown that the composite epistemology, the com-
bination of the epistemologies of Georgescu-Roegen and Popper into a
single epistemology, is applicable to economics and the social sciences.
Moreover, the alpha-beta method has been derived from the composite
epistemology. We now have practical scientific research rules to construct
and falsify scientific theories and thus ensure the progress of the social sci-
ences. Science is epistemology. The alpha-beta method will be applied to
economics in the next two chapters.
Chapter 3

The Economic Process

Abstract  The process epistemology of Georgescu-Roegen is one of the
logical pillars of economics. Its application makes economics a science and
a social science. This chapter is devoted, firstly, to understanding the
concept of economic process and, secondly, to making it operational
by distinguishing different types of economic processes. In particular,
special attention is given to the distinction between mechanical processes
(which include static and dynamic) and evolutionary processes, and then
to deterministic and stochastic processes. Finally, the process epistemology
is extended to the other social sciences.

Social sciences seek to explain the functioning of human societies. The
interpersonal relationships of individuals as members of society constitute
the set of social relations in society. The elements that give rise to social
relations are several and lead to different social sciences. Thus, the con-
necting elements that cement social relations include goods in economics,
government in political science, culture in anthropology, and organiza-
tions in sociology.
In order to explain the social world—a highly complex world—the
social sciences need to represent social reality at a high level of abstraction
in the form of abstract processes. Empirical regularities are essential in
each social science, as those are the phenomena to be explained. Because
society is the sum of individuals, or qualitatively more than the sum, the
behavior of individuals and the aggregate behavior constitute the endog-
enous variables in the social sciences. The exogenous variables and the
underlying mechanisms connecting endogenous and exogenous variables
are established by the assumptions of the corresponding scientific theory.
The underlying mechanisms include the motivations that guide the behav-
ior of people; these motivations constitute the forces that give movement
to the social relations.
Economics is a social science. It may be defined as the science that studies
a particular process in the complex social world: the economic process.
The endogenous variables of this
social groups in human societies. The exogenous variables depend upon
the scientific economic theory utilized. This chapter presents the most
significant traits of economic process as an abstract process.

The Structure of the Economic Process


The scope of economics can be presented as an abstract process. Figure 1.1,
Chap. 1 can be used for that purpose. The basic endogenous variables
therefore include the quantity of goods produced per unit of time and
the degree of inequality in the distribution of those goods between social
groups. Production and distribution are repeated period after period. They
are the endogenous variables in any economic theory that seeks to explain
the economic process. Any economic theory that seeks to explain produc-
tion and distribution in a particular type of human society must make
assumptions about the boundaries of the economic process, the exog-
enous variables, and the mechanisms underlying the process.
In the case of a single and isolated individual, the economic process is
reduced to the production of goods alone; the distribution problem does
not exist. The use of Robinson Crusoe as the metaphor to study econom-
ics in some textbooks is thus a distortion of the nature of the economic
process. In human societies, both production and distribution are social
activities; that is, production of goods and its distribution between social
groups is the result of the social interactions. In sum, by using the concept
of economic process, economics is clearly placed in the domain of science
and in that of the social sciences.
In order to illustrate the structure of the economic process in the case
of a capitalist society, the book will use an invented economic theory of
capitalism, called the E-theory. The assumptions of this economic theory
include the following. Capitalism is a class society, in which the initial
distribution of economic, political, and cultural assets is concentrated
in a small group of people, the capitalists. Capital goods are subject to
private property. Workers have free labor (non-slaves) and can sell it in labor
markets. Markets and democracy constitute the fundamental institutions
of capitalism, which establish the rules and organizations under which
capitalism operates. In such context, people act guided by the motivation
of egoism. In this abstract capitalist society, people exchange goods under
the rules of market exchange (voluntarily) and public goods are supplied
by the government under democratic rules. Politicians run the govern-
ment and also act guided by egoism or self-interest.
As suggested by economist Nicholas Georgescu-Roegen (1971), an
analytical way to see the economic process is as an input-output system, in
which four categories are distinguished as follows:

• Flows refer to the elements that either enter into the production pro-
cess (material inputs) or come out from the process (material output
and waste);
• Funds refer to those elements that both enter and come out (machines
and workers);
• Natural resources include renewable (biological) and non-renewable
(mineral) resources;
• Initial conditions refer to the initial structure of society, such as capital
per worker, technological level, asset inequality, and institutions.

The stock of machines and men are seen as fund factors: they enter and
come out of the process maintaining their productive capacity intact in
every period. They constitute funds of services because they participate in
the production, providing their services. No piece of machine and no flesh
of workers are expected to be found in, say, the shirts produced in a fac-
tory. Machines and men are the agents that transform input flows (cotton
and oil energy from natural resources) into output flows (shirts).
Table  3.1 represents the economic process under the assumptions of
the E-theory, which assumes a simple reproduction process: production of
goods is repeated at the same scale period after period. Society is endowed
with given stocks of machines, workers, and natural resources; other ini-
tial conditions include wealth inequality or power structure, institutions,
and technology. Under a simple reproduction process, total output level
must be repeated period after period. This implies that the initial stocks of
machines and workers must remain unchanged. Hence, total output is net
of depreciation costs of the machines and subsistence wages of workers.
Table 3.1 Economic process according to E-theory

Inputs                                         Mechanisms     Outputs

Initial stocks
  K-Machines                                                  K-Machines
  L-Workers                                                   L-Workers
  R-Renewable resources                                       R-Renewable
  N-Non-renewable resources (minerals)                        N − n Non-renewable
Flows                                          Institutions
  m-material inputs from renewable resources                  Goods: Q
  n-material inputs from mineral resources                    Income inequality: D
                                                              Waste/pollution
Social factors
  δ-Power structure                                           δ-Power structure

Regarding renewable natural resources, the quantity utilized in production
(m) should be equal to or lower than the biological reproduction of the
natural resource, to maintain its stock constant. In contrast, the quantities
of non-renewable resources utilized in production (n) will imply a con-
tinuous decline by this amount of the initial stock. With the exception of
non-renewable natural resources, everything can be produced and repro-
duced in the economic process.
Machines and workers participate in production as services, as they are
a fund of services, whereas natural resources participate as material inputs.
In a factory process, say, producing cars in a certain number per year,
machines and men are maintained fixed; that is, depreciation and wages
are part of the production and reproduction costs; the stock of renewable
resources is also maintained fixed. However, the costs of the quantity of
minerals utilized per car (as material inputs and as energy) include only the
production costs of extracting minerals, not the reproduction costs of the
mineral stock itself. The quantity of mineral utilized is forgone forever; it
can be used only once; it cannot be replaced because the mineral depos-
its on the crust of the Earth are fixed, and thus the economic process is
subject to the depletion of minerals. Moreover, the use of energy from min-
erals causes pollution, which is not included in the cost of production of
goods either.
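A simple numerical sketch (all figures invented) can illustrate how depletion of the non-renewable stock bounds the simple reproduction process: output Q is repeated at the same scale each period, while the mineral stock N falls by the material input n every period, until production can no longer be repeated.

```python
def simple_reproduction(N0, n, Q, periods):
    """Yield (period, output, remaining mineral stock) until N runs out."""
    N = N0
    for t in range(1, periods + 1):
        if N < n:
            break                  # depletion halts the process
        N -= n                     # minerals are used once, gone forever
        yield t, Q, N

for t, q, remaining in simple_reproduction(N0=100, n=30, Q=50, periods=10):
    print(t, q, remaining)
# The same output level (50) repeats only while minerals last: periods
# 1-3 run; in period 4 the remaining stock (10) is below n, so the
# process stops even though machines and workers are intact.
```

Machines, workers, and renewable resources are held constant in this sketch, matching the assumption that only the non-renewable stock cannot be reproduced.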
Notice that waste and pollution are also the outcome of the production
process. However, they are not the objective of the production process.
These side effects of the production process are inevitable due to the
biophysical nature of production, which uses material inputs as matter and
energy, and are thus subject to the laws of physics, known as the laws of
thermodynamics.
According to the Law of Conservation of Matter and Energy, mat-
ter and energy cannot be created or destroyed, only rearranged. Thus,
matter and energy not incorporated in the produced goods will become
waste. For example, in the production of a table, a piece of wood is
transformed into a table and waste (sawdust). The oil used as energy
source in the production of the table will be transformed part into
mechanical work and part into dissipated energy that is dumped into the
atmosphere, which is also waste. According to the Entropy Law, as rear-
rangement takes place, the only change in the biophysical environment
is of a qualitative nature: the waste generates pollution of the air, soil, and
water, and thus degrades the biophysical environment, the ecological
niche of the human species.
Therefore, the flow of output in the production process includes goods,
the objective of the human activity, but it also includes depletion and
waste/pollution, which cause degradation of the human biophysical envi-
ronment. These side effects are, according to these laws of thermodynam-
ics, continuous and irrevocable in the production process. Consequently,
there are limits to the reproduction of the same scale of production;
that is, even the same output level in society cannot be repeated period
after period forever. The limits will come from either depletion of natural
resources or from pollution, whichever comes first. Those limits will be
more stringent under an expanded reproduction process, when the level of
output grows endogenously over time.
The exogenous variables in the simple reproduction process include
the initial conditions or the initial structure of society, which remains
unchanged. Power structure is also considered an exogenous variable.
This variable refers to the initial inequality in the distribution of economic
assets (machines and human capital) and political entitlements among indi-
viduals in a class society. The endogenous variables include total output,
degree of income inequality, waste/pollution, and depletion of mineral
resources. The essential social mechanisms by which exogenous variables
affect endogenous variables in the economic process diagram include
institutions. Societies need rules and norms to function. The institutions
of capitalism include private property, market exchange, and democracy,
which shape the incentive system that guides the behavior of people under
capitalism, for under these institutions selfish and egoistic individual
motivations will prevail.
According to the alpha-beta method, any economic theory of capitalism
needs to assume a set of alpha propositions and then derive from them
the beta propositions, which should be confronted against the empirical
regularities of capitalist societies. The alpha propositions of E-theory have
already been established here: the underlying mechanism in the economic
process is given by institutions—by the workings of market and democracy
systems. However, the beta propositions are left unresolved, for deriving
them requires the use of models, as will be explained in the next chapter.
The falsification process of these models will determine whether real-world
capitalist societies operate as if they were the abstract capitalist society
constructed by the E-theory.

Economic Processes: Mechanical and Evolutionary

According to the nature of repetition, the economic process can take the
form of static, dynamic, or evolutionary. Those types correspond to dif-
ferent equilibrium situations in the economic process, for what is repeated
is the equilibrium situation. As indicated earlier (Chap. 1), the concept of
equilibrium refers to the solution of the social interactions. The concept
of equilibrium can be stated as follows:

An economic process is said to be in equilibrium if, and only if, no social
actor has both the power and the incentive to change the outcome of the
production and distribution process.

In a static economic process, the corresponding static equilibrium
implies the repetition of the same values of the endogenous variables,
period after period, as long as the values of the exogenous variables remain
unchanged. Static economic process is another name for simple reproduc-
tion process. Therefore, production of the quantity of net output can be
repeated at the same scale period after period forever, as long as non-­
renewable natural resources are assumed to be abundant and pollution is
still harmless to human health.
Static equilibrium may be stable or unstable. It is stable when the value
of the endogenous variable spontaneously restores its equilibrium posi-
tion whenever it falls out of equilibrium (the classical metaphor is a ball
inside a bowl); otherwise, the equilibrium is said to be unstable (a ball on
top of a bowl that is placed upside down). The assumption of stable equi-
librium makes the economic process self-regulated. Thus, a change in an
exogenous variable will imply that the endogenous variable is now out of
equilibrium and then it will move to the new equilibrium spontaneously;
hence, changes in the exogenous variables will generate causality, quanti-
tative changes of the endogenous variables in definite directions.
Figure 3.1, panel (a), illustrates a static process. The vertical axis mea-
sures an endogenous variable Y and the horizontal axis measures time.
Suppose there is only one exogenous variable (X1). Given the value of the
exogenous variable (say, X1 = 10 ), the value of the endogenous variable
remains fixed period after period at the level OA. If for some reason, the
value of the endogenous variable is outside equilibrium, as at point “a”,
the system will tend to restore equilibrium spontaneously; that is, the sys-
tem is stable.
Now suppose the exogenous variable increases (say, to X1 = 20 ) at
period t′ and the new equilibrium is at the level B; hence, the value of Y is
now outside equilibrium, which will tend to be restored by moving spon-
taneously from point A′ to point B, because equilibrium is stable. The new
equilibrium will be repeated period after period at the level BB′, as long as
the exogenous variable remains fixed at the new level. Then we have been
able to generate a causality relation, the effect of changes in the exogenous
variable upon the endogenous variable, which in this case is positive: the
higher the value of X, the higher the value of Y.
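The workings of a stable static equilibrium can be sketched in a few lines of code. Everything here is an illustrative assumption rather than part of the theory: the equilibrium function, the partial-adjustment rule, and the parameter values.

```python
# Stable static process: the endogenous variable Y adjusts toward the
# equilibrium level implied by the exogenous variable X.
def equilibrium(x):
    # Illustrative assumption: the equilibrium level of Y is proportional to X.
    return 2.0 * x

def simulate(x_path, y0, speed=0.5):
    """Partial-adjustment rule: each period, Y closes a fixed fraction of
    the gap between its current value and the equilibrium level."""
    y, path = y0, []
    for x in x_path:
        y = y + speed * (equilibrium(x) - y)
        path.append(y)
    return path

# X stays at 10 for five periods, then rises to 20 (as in panel (a) of Fig. 3.1).
xs = [10] * 5 + [20] * 10
ys = simulate(xs, y0=20.0)
```

Under these assumed values, Y sits at its old equilibrium level while X = 10 and, once X rises to 20, converges spontaneously toward the new level, which is exactly what the stability assumption asserts.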
In a dynamic economic process, the corresponding dynamic equilib-
rium implies a particular trajectory over time of the endogenous variables,
as long as the values of the exogenous variables remain fixed. In Table 3.1,
a dynamic process would imply an endogenous change in some of the ini-
tial conditions. Consider an increase in the stock of machines as outcome
of the economic process; that is, part of total net output is allocated to
increase machines and part to conspicuous consumption. Investment in
machines is thus endogenous and so is the stock of machines over time.
The other initial conditions are assumed to remain unchanged and consti-
tute the exogenous variables as well.
The simplest way to understand a dynamic equilibrium is as a sequence
of static equilibrium situations. Therefore, if the static equilibrium in
each period is stable, the dynamic equilibrium will be stable as well. From
any situation outside the equilibrium trajectory, there will be spontane-
ous forces that move the endogenous variable back to the equilibrium
trajectory.
Fig. 3.1  Types of economic processes: static, dynamic, and evolutionary [figure not reproduced: panels (a) Static, (b) Dynamic, and (c) Evolutionary plot the endogenous variable Y against time, for the values of the exogenous variables indicated in the text]

Figure 3.1, panel (b), illustrates the concept of dynamic equilibrium. Given the value of the exogenous variable (say, X1 = 10), the equilibrium
trajectory of the endogenous variable Y is given by the curve CE, which will
grow over time. If for some reason the value of the endogenous variable
were located at point a′, it would move spontaneously back to the equi-
librium trajectory. Now suppose the exogenous variable increases (say, to
X1 = 20 ) at period t′. The new equilibrium trajectory is given by the curve
DF, but now the initial value of the endogenous variable at point C′ is out
of equilibrium. Since the equilibrium is stable, then point C′ will move
spontaneously to the new equilibrium trajectory, curve DF. Assume that
the move is not instantaneous, but takes time; then the trajectory of transi-
tion is given by the segment C′D′, which is called transition dynamics.
In the dynamic system, as we can see, a change in the exogenous vari-
able will have the effect of shifting the equilibrium trajectory of the endog-
enous variable to another level, along which it will continue to change
over time. The causality relation is thus established. As in the static system,
the effects of production upon depletion and pollution are just ignored.
Therefore, both static and dynamic economic processes, which include
the assumption of stable equilibrium, generate causality relations. The
effects of changes in exogenous variables upon endogenous variables of the theory are known; these are the beta propositions of the theory. When
applied to economics, it is a property of the alpha-beta method that beta
propositions show causality relations in static and dynamic processes.
The endogenous variables in a static or dynamic process can be repeated
forever. There are no limits to the repetition. They can then be called
mechanical processes. It is clear that a mechanical economic process ignores
the problem of eventual depletion of non-renewable natural resources and
also the effect of waste/pollution on the biophysical environment of the
Earth, which is home of the human species.
Consider now a non-mechanical economic process. If the economic
process is viewed as subject to qualitative changes as it is repeated, then
we have an evolutionary economic process. In this case, the assumption is
that, as the economic process is repeated period after period, qualitative
changes will also take place in the economic process, which will eventually
set limits to the repetition of the endogenous variables; then a threshold value of the endogenous variable will exist. Before the threshold value, the
endogenous variable moves along a particular trajectory, for given values
of the exogenous variables, as in a dynamic equilibrium; once the thresh-
old value is reached, the trajectory breaks down. There will be a change in
the process itself. This change is called regime switching in the economic
process. A new exogenous variable (an innovation) will appear and a new
set of relations within the process will appear, leading to a new trajectory
of the endogenous variable. Under an evolutionary process, the existence of a dynamic equilibrium is just a mirage, as it is only temporary.
Figure 3.1c illustrates an evolutionary economic process. For a given
value of the exogenous variable X1 (say, X1 = 10 ), the curve EE′ shows a
given trajectory of the endogenous variable Y over time. Because qualita-
tive changes take place in the process, variable Y cannot continue increas-
ing forever. The level Y* represents the threshold value, which is reached
at period T*. When the threshold value is reached, the dynamic process
breaks down. Assume a new process, with new exogenous variable X2 and
new mechanisms (innovations), then the trajectory of the endogenous
variable Y shifts down to the new trajectory FF′ (say with X 2 = 0.5 ). The
regime switching takes place at period T = T*. It is clear that the “dynamic equilibrium” EE′ does not exist; it is a mirage, for it is only temporary, as said above.
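The regime-switching logic of an evolutionary process can be sketched as follows. The growth rates, the threshold, and the post-switch regime are all illustrative assumptions, chosen only to mimic the break from trajectory EE′ to FF′:

```python
# Evolutionary process: a dynamic trajectory holds only until the
# endogenous variable reaches the threshold Y*, where the regime switches.
def evolutionary_path(y0, y_star, periods):
    """Illustrative assumptions: 5% growth per period before the
    threshold; at the threshold, a regime switch drops Y to half its
    level, and growth continues at a slower 1% rate."""
    y, switched, path = y0, False, []
    for _ in range(periods):
        if not switched and y >= y_star:
            switched = True
            y = 0.5 * y      # the break from trajectory EE' to FF'
        y = y * (1.01 if switched else 1.05)
        path.append(y)
    return path, switched

path, switched = evolutionary_path(y0=100.0, y_star=200.0, periods=40)
```

Once the threshold is crossed, the old trajectory is never recovered: the process is Time dependent, and the pre-switch “dynamic equilibrium” is only temporary.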
Consequently, two categories of time must be distinguished. The evolu-
tionary process, represented in Fig. 3.1c, uses Time T in the horizontal axis,
which means historical time, with past, present, and future. The trajectory
FF′ is not reversible, as it cannot return to trajectory EE′. Time T moves in
only one direction. The behavior of society (regarding variable Y) is Time
dependent; that is, history matters.
This is contrary to static and dynamic processes, represented in Fig. 3.1a
and b, using time t in the horizontal axis, which refers to mechanical time.
In the mechanical process, the assumption is that society behavior (regard-
ing variable Y) is independent of Time. Hence, the mechanical process
allows, in the static process, the value of the endogenous variable to go
from A to B and return to A, if the exogenous variable increases and then
returns to its original value.
Similarly, in the dynamic process, the trajectory DF returns to trajectory CE if the exogenous variable increases and then returns to its original value. If an event alters the equilibrium to a new situation, then the world
moves back to its initial equilibrium once the event fades out. Thus the
event leaves absolutely no mark, no history, on the economic process.
The use of time t, therefore, assumes complete reversibility of time, just as
the mechanics in physics (just like the pendulum, which has no history). An
implication of the reversibility assumption is that the economic process can
be repeated forever. This is impossible in an evolutionary process, in which
change occurs in periods of historical Time T, with threshold values that
lead to qualitative changes in the structure of society, as shown above.
To be sure, we have defined three types of economic processes in terms of the type of equilibrium conditions. An economic process is called a static
process when the equilibrium values of the endogenous variables Y are
repeated period after period as long as the values of the exogenous
variables X remain unchanged (static equilibrium). Therefore, if the values
of one or more exogenous variables change once, the values of the endog-
enous variables will change and the new values will be repeated period
after period. An example is an industrial society in which factories produce the same quantity of output every day. If one exogenous variable changed, say, the international price of energy (oil), then factories would adjust to the new price and produce another output level; thus the factory process would move to a new equilibrium output, which would be repeated period after period.
An economic process is called dynamic process when the equilibrium
values of the endogenous variables Y move along a given trajectory due
to the passage of time t alone, as long as the values of the exogenous vari-
ables remain unchanged (dynamic equilibrium). Therefore, if one of the
exogenous variables changes once, the values of the endogenous variables
will be shifted to another trajectory and will move along it over time.
An example is the process of capital accumulation. Suppose society saves
a fixed proportion of the annual net output, which increases the stock of
capital for the next period. Higher capital leads to more net output in the
next year, of which the fixed proportion is saved, increasing even more the
stock of capital, which leads to further increase of net output in the sub-
sequent period, and so on. Hence, output will grow endogenously period
after period, for a given saving rate, which is the exogenous variable. If the
saving rate increases, then the net output will grow along another trajec-
tory over time.
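The capital accumulation example can be sketched as a simple difference equation. The production function, the parameter values, and the function names below are illustrative assumptions, not the author's formal model:

```python
# Dynamic process: a fixed saving rate s (the exogenous variable) makes
# the capital stock, and hence net output, endogenous over time.
def growth_path(s, k0, periods, a=0.3):
    """Illustrative assumptions: net output Y = K**a, and the saved
    share s of output is added to the capital stock each period."""
    k, path = k0, []
    for _ in range(periods):
        y = k ** a           # production from the current capital stock
        k = k + s * y        # capital accumulation out of saving
        path.append(y)
    return path

low = growth_path(s=0.1, k0=1.0, periods=30)
high = growth_path(s=0.2, k0=1.0, periods=30)
```

Output grows period after period for a given saving rate, and a higher saving rate shifts the trajectory upward from the second period on, as with curves CE and DF in panel (b) of Fig. 3.1.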
An economic process is called an evolutionary process when the equilibrium values of the endogenous variables cannot be repeated forever, for the process breaks down at some point and switches to another process,
one that is qualitatively different. The regime switching occurs at particu-
lar threshold values of the endogenous variables.
Consider the following examples of evolutionary process. A society
shows a temporary dynamic equilibrium in which total income of society (Y) grows along segment EE′ in Fig. 3.1c, but it is accompanied
by increases in the degree of inequality. As Y increases, inequality also
does, which in turn leads to more and more acute social conflicts, which
ultimately end in a distributive crisis, with social disorder and social
revolution. The production process collapses and a new process (with new rules of income distribution) takes place in which output Y switches
to a new trajectory and society changes. The output growth with rising
inequality of society has a limit, which is given by the existence of social
limits for inequality tolerance.
As another example, consider growth accompanied by environmental
degradation. Let again EE′ represent the growth of total output in the
world society (Y). As output growth takes place, the rate of degradation
(pollution) of the environment rises. When pollution reaches a level that
is not tolerable and causes harm to human health, intense social conflict will emerge over how to control the environmental damage, which will eventually reach social disorder in the form of an environmental crisis. Because the human species can survive only in a particular biophysical environment, there are limits to the pollution it can tolerate, which sets limits to
the survival of human societies and thus to output growth. The eco-
nomic process collapses and a new process (under new rules of pro-
duction and distribution) takes place, in which output Y will switch to
another trajectory, at a lower level, say, along the curve FF′, and human
society changes.
An evolutionary economic theory predicts that the dynamic equilibrium cannot last forever. There exist qualitative changes that accompany
the quantitative changes; moreover, those qualitative changes have limits
of social tolerance; hence, once the threshold of social tolerance is reached,
the economic process cannot continue as before. In the evolutionary process, static or dynamic equilibrium is a mirage, for it is only temporary. Beta
propositions—and the corresponding causality relations—can be derived
from each segment of dynamic equilibrium of the evolutionary model. In
Fig. 3.1c, along the segment EE′, the exogenous variable X1 has remained
constant. If this exogenous variable changes before period T*, the effect
will be to move the dynamic equilibrium to another trajectory, which will
imply a change in the value of period T*. The value of the threshold Y*
will remain constant, but it will be reached much sooner (or much later, depending upon the effect of the exogenous variable).
In sum, according to alpha-beta method, the economic process implies
repetition of production and distribution. There are two types of economic
processes: mechanical and evolutionary. Mechanical processes can take the
form of static or dynamic, in which the assumption is that the economic
process can be repeated without limits; that is, the limits to the repetition
are just ignored. Evolutionary processes, in contrast, assume that the limits to repetition will occur sooner or later. When the limit is reached, there
is a regime switching toward a new process, qualitatively different, which
will also be repeated for finite periods, and so on. Evolutionary process is
thus a well-defined process, in which qualitative changes occur over time.
In fact, it is through an economic theory that assumes evolutionary process
that economics can explain social changes, qualitative changes in human
societies.

Economic Processes: Deterministic and Stochastic


The static economic process has been seen as the relation between exog-
enous and endogenous variables in which the values of the endogenous
variables will be repeated period after period as long as the values of the
exogenous variables remain unchanged. This is also a representation of a
deterministic economic process.
However, the static economic process can also be seen as subject to ran-
dom variations around the equilibrium value of the endogenous variables.
In this case, we have a stochastic economic process. Then the relations in the
economic process would be different: if the exogenous variables remain
unchanged, the values of the endogenous variables will take the equilib-
rium value on average, but with variations around that value.
An example of the stochastic process is the agricultural production pro-
cess. The same quantity of inputs will generate different values of output
each year, depending upon the weather; that is, output is a stochastic vari-
able: it shows variations around the equilibrium value; therefore, the same
mean value of output, with a variance, will be repeated year after year. By
contrast, in the factory production process, output could be deterministic;
that is, given the same quantity of inputs, the same value of output will be
repeated day after day.
Figure 3.2 illustrates a static economic process that takes the form of
deterministic or stochastic. Panel (a) refers to a deterministic process.
Given the value of the exogenous variable X, say X = X1, the endogenous variable Y will take the value Y1 period after period. If the exogenous
variable increases, say X = X 2 , the endogenous variable will now take the
value of Y2 period after period, and so on. Therefore, there is a positive
causal relation given by the deterministic function Y = F ( X ) .
Panel (b) depicts the case of a stochastic process. Given X = X1, the value of the endogenous variable will take the equilibrium value, which now refers to the same mean value of Y (represented by Ȳ), which will be
Fig. 3.2  Deterministic and stochastic static processes [figure not reproduced: panel (a) plots the deterministic function Y = F(X); panel (b) plots the stochastic relation Ȳ = G(X), with observations scattered around each mean]
repeated period after period, with variations around the mean (the points of different size indicate the distribution of those variations). If X = X2, then the mean value of Y will be higher, which will be repeated period after period, with variations around it. Therefore, there is a positive causal relation between the mean values of Y and the values of X, that is, Ȳ = G(X).
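A stochastic static process of this kind can be sketched by adding random variation around the mean implied by the function G. The linear form of G and the noise scale are illustrative assumptions:

```python
import random

# Stochastic static process: for a fixed exogenous X, the endogenous
# variable fluctuates around a constant mean; causality links X to the mean.
def draw_outputs(x, periods, seed=0):
    """Illustrative assumptions: mean output G(X) = 3*X, with additive
    Gaussian noise standing in for chance and for the non-essential
    variables excluded by the theory."""
    rng = random.Random(seed)
    mean = 3.0 * x
    return [mean + rng.gauss(0.0, 1.0) for _ in range(periods)]

def avg(values):
    return sum(values) / len(values)

low = draw_outputs(x=10, periods=2000)
high = draw_outputs(x=20, periods=2000)
```

Individual observations vary period after period, but the sample means line up with G(X): under these assumed values, raising X from 10 to 20 raises the mean of Y from about 30 to about 60, which is the causal relation between X and the mean of Y.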
Dynamic economic processes can also be assumed to be deterministic or stochastic. The deterministic process implies a function in which the equilibrium value of the endogenous variable Y depends upon the values of the exogenous variable X and, for a given value of X, upon the passage of time t; that is, the causality relation is Y = F(X, t). For a stochastic dynamic process, the causality relation is between the equilibrium value of the endogenous variable, now represented by the mean value Ȳ, and the values of the exogenous variable and time t, that is, Ȳ = G(X, t).
Evolutionary economic processes have a temporary dynamic segment, before the regime-switching point has been attained. Therefore, an evolutionary economic process can also be assumed to be deterministic or stochastic, depending on whether the temporary dynamic segment of the process is deterministic or stochastic. The causality relation in the first case would be Y = F(X, T), where T < T*, whereas in the stochastic form the mean value Ȳ would be the endogenous variable.


If we are pursuing scientific knowledge, which implies the use of a sci-
entific theory to transform the real-world society into an abstract society,
then this abstract world can only take the form of a stochastic economic process. The causality relations are between the exogenous variables and the
average values of the endogenous variables because the variations around
the mean will reflect not only the effect of chance, but also the effect of
variables excluded by the theory as non-essential. Abstraction implies this
type of process, as was shown in Chap. 2. Hence, we will assume stochastic
economic processes in the rest of the book.

Extension to the Other Social Sciences


Social sciences deal with the functioning of human societies. In particular,
they study social relations in human societies, which refer to interpersonal
relations of individuals as members of society.
Why do social relations exist? What are the elements that connect
people in society? These elements are several and give rise to particular
social sciences. Thus, governance is the connecting element in political
science, culture in anthropology, organizations in sociology, and goods in economics.
Economics is thus a social science. However, this is not the usual
view. Economics is considered a separate science from social sciences;
hence, when people talk about social sciences, economics is not included.
However, as seen in the above definitions, economics studies a particular
form of social relations: the economic relations, in which the binding
element is the production and distribution of goods.
Social relations in general can be studied in the form of abstract pro-
cesses, as represented in diagram form in Fig. 1.1, Chap. 1. The endog-
enous variables are defined by each discipline and the exogenous variables
and the mechanisms underlying the relations between endogenous and
exogenous variables in the process may be established by the assumptions
of the corresponding theory. What variables enter into the process and
what variables come out from the process will be assumed by the theory.
The endogenous variables must be observable and subject to empirical
regularities, which will be the phenomena to be explained.
The concept of equilibrium in the process is also applicable to social
processes in general. Equilibrium is a social concept and is defined as a
situation in which no social actor—an individual, a social group, or an
organization—has the power and the incentive to change the outcome of
the process. Governments may have the power to change an outcome of
the political process (e.g., public education of high quality); however, if
they do not have the incentives, the situation will not change—say, public
education of high quality does not buy votes, compared to inaugurating
school buildings; or, say, people at risk of malaria have the incentives to ask for a better public health service, but if they do not have the power, government policies will not change.
From the concept of equilibrium, it should be clear that equilibrium does
not imply social optimum or elimination of social conflict. Social equilib-
rium may be a socially unfair or undesirable situation. Given the exogenous
variables under which individuals interact in the social process, equilibrium
is just the solution of those interactions. Those exogenous variables (such as
the initial power structure) may therefore lead to an equilibrium situation
that does not satisfy all social actors, but the losers cannot do much to change the solution. If the situation is repeated period after period, as long
as the exogenous variables remain unchanged, it is an equilibrium situation.
Whether this solution is a social optimum would require an ethical or nor-
mative theory (e.g., Pareto optimality, Rawlsian justice).
If the concept of equilibrium is applicable to all social processes, then the concepts of static, dynamic, and evolutionary processes will be too.
Similarly, the categories of deterministic and stochastic process will also
be applicable.
Once a social process is represented as an abstract process by means of
a scientific theory, changes in the exogenous variables will cause changes
in the values of the endogenous variables; hence, causality relations will
be determined. The theoretical assumptions constitute alpha propositions
and the causality relations are the beta propositions, which by construc-
tion are empirically falsifiable. In sum, the alpha-beta method seems appli-
cable to economics and to the social sciences in general.
The concept of economic process is fundamental not only to set the
scope of economics and that of the other social sciences, but also to set
their appropriate epistemology. How does the alpha-beta method operate
in economics? This question will be resolved in the next chapter.
Chapter 4

The Alpha-Beta Method in Economics

Abstract  How does the alpha-beta method work in economics? This method uncovers some particular problems in attaining scientific knowledge in economics and then provides the solutions. The problem of ontological universalism: no economic theory can explain all types of human societies. Causality can only come from economic theory, which must assume the existence of equilibrium in the economic process. Realities without theory are another problem. An economic theory is a family of models; thus, falsification works through models. The problem of time: time in economics has several meanings, which in turn have implications for falsification. The problem of unity of knowledge: good partial theories may not lead to a good unified theory. In this chapter, therefore, the alpha-beta method is applied to solve these problems.

Economics is a social science. Its scope is to explain the economic process in human societies. To achieve this objective, economics needs to be a
theoretical science. It should be able to transform a complex real world
into an abstract world by using a scientific theory, an economic theory.
A scientific economic theory is, by definition, a set of alpha propositions
from which beta propositions must be derived. Thus, the economic theory
would be, by construction, falsifiable. If the theory is not refuted by facts,
then we can say that the abstract world is a good approximation to the real
world; hence, the economic theory may be accepted. This is just to say that the alpha-beta method is, in principle, suitable for the construction
and growth of scientific knowledge in economics.
A note on empirical facts seems necessary. The alpha-beta method
implies that facts in economics refer to the actual behavior of people, to
what people do (observable). To be sure, facts cannot refer to what people
say about what they do, as in surveys. People’s answers on surveys need
not reflect their true behavior. Facts cannot refer to controlled experiments either. Experimental economics uses laboratory methods, in which data
are collected from people placed in a human laboratory. This method is
similar to the Skinner box that is used to study animal behavior. There is
here an implicit theory, which assumes that the behavior of people in the
lab (an artificial world) is the same as in the real world, that the behavior
of people inside the “Skinner box” is the same outside the box, as in the
case of animal behavior. Therefore, there is the need to test this theory in
the first place, which is unviable, for it will be based on opinions, not on
behavior alone.
Economics is still a non-experimental science. Therefore, facts must
come from observations of the real world, from “natural experiments.”
In this regard, economics is very much like astronomy. These very impor-
tant issues of measurement in economics are discussed in more detail in
Chaps. 6, 7, and 9.
How does the alpha-beta method work in economics? Some particular
traits of the alpha-beta method, when applied to economics, will be shown
in this chapter.

The Problem of Ontological Universalism


Physics is usually considered the exemplar of factual sciences. A character-
istic of physics is that it assumes ontological universalism; that is, there is
only one physical world to be explained. Therefore, a theory of physics of
the Earth is supposed to be valid for any place and time. The behavior of
atoms is the same everywhere, for example, in rich and poor countries of
today.
Human societies, in contrast, differ in time and space. Economics
needs to assume what the essential factors separating types of societies are.
Consider the following assumption: the essential factors are institutions and
technology, which jointly matter in shaping social relations. This assump-
tion was proposed earlier, in constructing E-theory (Table 3.1, Chap. 3).
In the economic literature, it is common to distinguish the following types
of human societies: primitive collectivism, feudalism, capitalism, and communism. Indeed, these societies differ in their institutions and technology. These societies would then have a corresponding economic theory.
Accepting this typology, economics could not generate a single theory that
is valid for every type of human society, independent of place and time. As a
social science, economics cannot assume ontological universalism.
The other extreme position that economic theory is valid only for each
particular society cannot be assumed either. As shown earlier (Chap. 2), a
scientific theory implies the use of abstraction and thus it cannot explain
the behavior of every individual in a given society but the behavior of
groups of individuals; similarly, a scientific economic theory cannot explain
every individual society but groups of societies. This is a property of the
alpha-beta method.
Consider the following intermediate position. Any economic theory
must have two types of assumptions on the economic process: one is
universal, common to all human societies, and the other is specific to par-
ticular types of societies. This position intends to give unity of knowledge
to economics, which is a requirement of the meta-theory of knowledge
(Table 1.1, Chap. 1). Fragmented knowledge with partial theories that are
able to explain parts of a reality may not be able to explain the reality taken
as a whole. The rule that scientific theory is required to explain reality
implies one single theory for a single reality; there may be partial theories,
but they must constitute a unified theory. This intermediate position also
makes economics a historical science. As an illustration about the nature of
this position, a set of universal assumptions is proposed now.
Economics deals with social relations in which goods constitute the
cement that connects people. Economics assumes that human societies are
subject to the problem of scarcity: Human needs are unlimited, whereas the
production possibilities of goods that can satisfy those needs are limited.
Human societies are not nirvana. This is a general assumption. Particular
economic theories will specify what the determinants of scarcity are in par-
ticular types of societies.
Economics assumes that in order to solve the economic problem of
scarcity, societies establish the institutions under which the economic
process must operate. These institutions include both organizations and
the rules of the economic game (North 1990). As in the football game, in
which rules and organizations (clubs and associations) are needed, the
economic process needs organizations, that is, social actors that carry out
the economic activity; it also needs a set of rules under which these agents
will interact. These rules may be formal or informal. Economic theories must then assume what the essential components of institutions are in
particular types of societies.
Another universal assumption needed is about the motivations of social
actors. The motivation underlying the observed behavior of social actors
is called rationality. In order to generate beta propositions, economics
needs to assume that social actors are rational, which has a precise mean-
ing: They act guided by a particular motivation, and thus use the means
at their disposal consistently with the objectives they seek. Moreover, they
act subject to the institutional rules, for their actions must be consistent
with the institutional rules, that is, appropriate to the social context. On
the other hand, the rationality assumption is needed to give the social
actors the animation, the force, or the motion in the workings of societies (Popper 1976). Finally, among universal assumptions, economics needs to make assumptions about those initial conditions, the initial
structure (history) of societies that are the essential factors to understand
the current economic process and the evolution of societies.
The universal alpha propositions of economics will be called “postu-
lates” to indicate that they are assumptions about the foundations of eco-
nomics, but not as rigid as axioms. They play an important role in the
construction of unity of scientific knowledge in economics. They are sum-
marized as follows:

α0 (1). The scarcity postulate. Societies face the economic problem: whereas
the quantity of goods desired in society is unlimited, the maximum flow
of goods that society can produce is limited. The latter assumes that,
given the technological knowledge in society and its factor endowments
(machines, workers, and natural resources), the maximum flow of goods
per unit of time that society can produce is also given and the total out-
put that can ever be produced with the given stock of non-renewable
resources is given as well.
α0 (2). The institutional postulate. In order to solve the economic problem,
societies seek to establish a particular institution, that is, a set of rules
and organizations, which regulate the social relations in the economic
process of production and distribution.
α0 (3). The rationality postulate. Social actors act guided by their motiva-
tions, which are shaped by the institutions of the society in which they
live. Social actors have means to seek their objectives. The assumption
of rationality means that there is consistency between objectives and
means in people’s behavior.
The Alpha-Beta Method in Economics  51

α0 (4). The initial resource endowment postulate. In order to produce and
distribute goods, societies are endowed with economic resources, such
as workers, machines, land and other natural resources, together with
technology. Differences in population size relative to the other resources
make societies underpopulated or overpopulated, which implies high-
labor-productivity and low-labor-productivity societies, which in turn
shape the particular production and distribution process of societies.
α0 (5). The initial inequality postulate. Individuals participate in the eco-
nomic process, endowed not only with economic assets (such as skills,
capital, and land), but also with social entitlements (such as human,
political, and cultural rights). Differences in the degree of inequality in
the distribution of these assets and entitlements among individuals also
shape the production and distribution process of societies.

This set of alpha propositions constitutes a logical system, as the postulates
are logically consistent with each other. Any human society operates under
this set of assumptions. The subscript zero indicates that it refers to the
abstract universal human society.
These assumptions are by nature very general. Differences in the con-
tent of this set of assumptions will define different types of human societ-
ies. Therefore, different human societies can be defined according to their
differences in institutions, technology, resource endowments, and initial
inequality, which correspond to the last four assumptions stated above,
which in turn refer to the initial conditions, the initial structure of society.
Any economic theory that seeks to explain production and distribution
in specific types of human societies will consist of society-specific alpha
propositions, such that they are logically consistent with the universal
postulates. The specific alpha propositions of the theory should not con-
tradict the universal assumptions. If economics aims to be a unified
science (producing unity of knowledge about the social world, instead of
fragmentary and incoherent knowledge), this is a logical requirement, as
established by the meta-theory of knowledge (Table 1.1, Chap. 1). It also
follows that the derived beta propositions will also be society-specific.
The particular type of society that is the most significant in
our times is the capitalist society. Consider the following definition of
capitalism. The institutions that characterize this type of human society
are private property of physical economic resources (physical capital,
financial capital, land); exchange of private goods under the norms of
market exchange; workers being free to exchange their labor power in
labor markets; and provisions of public goods decided under the norms

of the democratic political system. Hence, markets and democracy are the
basic institutions that characterize capitalism. In such an institutional
context, it will be rational for social actors to act guided by the motivation
of egoism and self-interest, which will dominate the motivation of altruism.
There are several economic theories that seek to explain the capitalist
society. The most important ones are the neoclassical theory, the classical
theory, and the effective demand theory. Regarding the abstract process
diagram, they differ in their assumptions about the exogenous variables
and the mechanisms that explain production and distribution. These theo-
ries also differ in the assumptions about the endogenous variables or out-
comes of the economic process. Recently, the theory of bio-economics has
been developed, which assumes that biophysical degradation is part of
the outcome of the economic process (as shown in Table 3.1, Chap. 3).
The comparative analysis of these economic theories is beyond the scope
of this book.

The Concept of Equilibrium Is Fundamental for Falsification

In the economic process, the values of production and distribution will be
repeated period after period. However, there are different ways in which
the economic process is repeated: static, dynamic, and evolutionary, as
shown above (Chap. 3). Hence, when applied to economics, the diagram-
matic representation of a process, as shown in Fig. 1.1, Chap. 1, can be
understood either as a static process or as a dynamic process. In the first
case, the value of Y will change if, and only if, X changes; in the dynamic
case, Y will move along a given trajectory over time, even if X remains
fixed, just with the passage of time, and will shift to another trajectory if,
and only if, X changes. In the evolutionary process, Y changes not only
because X changes, but because the process (the society) itself changes.
Causality relations were also presented in the previous chapter for
static, dynamic, and evolutionary processes (Fig.  3.1, Chap. 3). It was
shown there that causality relations refer to changes in equilibrium; that
is, the existence of equilibrium is fundamental. We know that causality
relations constitute beta propositions as well. Therefore, it follows that the
existence of equilibrium is fundamental for the falsification of economic
theories.
In any economic theory, the derived beta propositions must be observ-
able; otherwise, the theory would not be falsifiable. This is to say that

unobservable elements of the theory (such as people’s expectations,
propensities, preferences, or threshold values) cannot be included in beta
propositions as either endogenous or exogenous variables. If they are
included among the exogenous variables, the theory becomes immortal: any
change that refutes the predicted effect of an exogenous variable can be
saved by assuming that expectations or any similar unobservable category
has also changed. Thus, unobservable elements will become the protective
belt of the theory against refutation. If these unobservable variables are
needed in the theory, they should be placed among structural equations
(as dependent upon an observable variable, by assumption); however, the
reduced-form equation, the beta proposition, should include only observ-
able variables.
For static and dynamic processes, the conditions of equilibrium may be
observable or unobservable. If observable, the endogenous variables are
bound to take values in a certain range under any equilib-
rium situation. In the example of the biological theory of plants, shown
earlier, the beta proposition actually refers to the observable equilibrium
condition. If the equilibrium condition is unobservable, beta propositions
need to be derived from it and then submitted to the falsification of the
theory. Consider the equilibrium condition of the standard consumer
theory: The consumer’s marginal utility of each good must be propor-
tional to the market prices of consumption goods. Marginal utilities are
unobservable. However, the alpha-beta method would allow us to derive
the following observable proposition: the quantity bought in the market
of any consumption good decreases when its price increases (negative rela-
tion), which is empirically refutable.
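The logic of confronting this observable beta proposition with data can be sketched in code. The following is only a hypothetical illustration with synthetic numbers (none of the figures come from the book): it estimates the price–quantity relation by ordinary least squares and checks the predicted negative sign.

```python
import random
import statistics

random.seed(0)

# Synthetic market observations (hypothetical numbers): quantity bought
# falls with price, plus random disturbances.
price = [random.uniform(1.0, 10.0) for _ in range(200)]
quantity = [100.0 - 5.0 * p + random.gauss(0.0, 8.0) for p in price]

# Ordinary least squares slope of quantity on price.
mp = statistics.fmean(price)
mq = statistics.fmean(quantity)
slope = (sum((p - mp) * (q - mq) for p, q in zip(price, quantity))
         / sum((p - mp) ** 2 for p in price))

# The beta proposition predicts a negative relation; an estimated
# non-negative slope (with the appropriate significance test) would
# count as evidence against the theory.
print(f"estimated slope: {slope:.2f}")
print("sign consistent with the beta proposition:", slope < 0)
```

In an actual falsification exercise, the simple sign check would be replaced by a statistical test on the slope, of the kind discussed in Chapter 5.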
Therefore, from a scientific theory two types of empirical predictions
can be derived:

• Observable equilibrium conditions
• Relations between endogenous and exogenous variables

Both predictions will be defined as beta propositions. They are derived
from the assumption of the existence of equilibrium, and of stable
equilibrium, in the economic process. They both make the theoretical
model falsifiable or refutable. Hence, the assumption of the existence
of equilibrium in a scientific theory is not a protective belt of the the-
ory against falsification; on the contrary, it increases its chance of being
falsified.

Falsification Implies the Existence of Realities Without Theory

Economics is a social science in two senses. First, its objective is the study
of human societies; second, its theoretical propositions apply to aggregates,
not to individuals. The latter sense needs some elaboration.
Because economics studies complex realities, it must use the abstraction
method. This implies making assumptions about which elements of the eco-
nomic process are important and which are not (and may thus be ignored).
Therefore, economic theory cannot explain every individual case, but only
the general features of reality. Due to the use of abstraction in the genera-
tion of a theory, the empirical test must be statistical, that is, about the
relations between averages of the endogenous and exogenous variables.
For example, an economic theory may explain the general behavior of
a group of capitalist countries, but not necessarily the behavior of every
country. Similarly, an economic theory may explain the general behav-
ior of a group of investors but not that of every individual investor. The
observation that person X smokes but does not have cancer does not
refute the theory that predicts smoking causes cancer, which in general
may be empirically consistent. Likewise, the observation that an individual
with primary schooling makes more income than does another individ-
ual with graduate studies does not refute the theory that more schooling
causes higher incomes, which in general may be empirically consistent. It
is, therefore, very likely that there will be societies, markets, or social actors
that are exceptions to the predictions of the theory. In this case, we will
have realities without theory.
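A small simulation can illustrate why such counterexamples do not refute a statistical prediction. The sketch below uses invented figures (the schooling–income numbers are hypothetical): individual exceptions appear, yet the group averages preserve the predicted relation.

```python
import random
import statistics

random.seed(1)

# Hypothetical incomes: schooling raises income on average, but
# individual variation is large.
primary = [20000 + random.gauss(0, 8000) for _ in range(1000)]
graduate = [35000 + random.gauss(0, 8000) for _ in range(1000)]

# Individual "counterexamples": a primary-schooling person earning more
# than a randomly paired graduate-schooling person.
exceptions = sum(p > g for p, g in zip(primary, graduate))

print(f"counterexamples in 1000 pairs: {exceptions}")
print(f"mean income, primary:  {statistics.fmean(primary):,.0f}")
print(f"mean income, graduate: {statistics.fmean(graduate):,.0f}")
# Exceptions exist, yet the averages show the predicted relation:
# the theory is tested on aggregates, not on individuals.
```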
If a single social actor is important in the entire economic process (the
government of a country or the monopolist in a particular market), eco-
nomics may not be able to explain that single behavior. To put a theory
about this behavior to a statistical test, many governments and many
monopolists would have to be observed, but, all the same, there might be
some governments or monopolists whose behavior does not correspond
to that of the theory. Hence, it is logically possible to have social actors
that are exceptions to the theory and we can then write

In economics, it is logically possible to have realities without theory. This is
just the implication of using the alpha-beta method.

It may also happen that there is more than one economic theory that
explains a social reality. The relationships between alpha and beta propositions

need not be one-to-one. In physics, there is a classical problem of this
sort: light is assumed in some theories to be a wave, but in others a
particle. (Similarly, the planet Earth may be treated as a globe, but in
some cases as a plane, especially over short distances.) The aspects of light
(or Earth) that we observe vary with the circumstances.
Does a particular society behave as the theory predicts? This is another
empirical question. Here the empirical test is on the individual society, not
on the theory. All the same, theory is needed and the test must be statisti-
cal. A large sample of observations on the behavior of the individual coun-
try is needed, for just one observation may represent the exception, not
the rule. If, in this particular case, the statistical test fails, the conclusion is
that the country under study does not function as the theory assumes. The
theory itself is not put into question.
Finally, and just for the sake of completeness, if an economic theory is
refuted by empirical facts, then it is a theory without reality. Therefore, it
is a logical possibility to have a reality without theory and a theory without
reality. In the latter case, the theory should be eliminated.

An Economic Theory Is a Family of Models

When the alpha propositions of a theory are too general, beta propositions
can hardly be derived from them. For example, the alpha proposition
“entrepreneurs seek to maximize profits” is too general to generate beta
propositions. The behavior of entrepreneurs will more likely depend upon
the market structure in which they operate; that is, the behavior will be
different under monopoly compared to perfect competition because in
the first case the entrepreneur will be price-maker, whereas in the lat-
ter he or she will be price-taker (market price is exogenous). In order to
derive a beta proposition from an economic theory, a specific social situ-
ation is needed, in which the social context and constraints under which
social actors operate must be determined. The term “social situation”
comes from Popper (1976, 1985). Whether the process can be seen as a
static, dynamic, or evolutionary process is also part of the social situation
definition.
When the particular social situation in which the economic process
takes place is unobservable (e.g., market power might be unobservable),
the only way to take into account that context in the economic theory is
by introducing assumptions about it. In this case, additional assumptions,
called auxiliary assumptions, are necessary to construct abstract processes
that make the theory operational, that is, able to generate beta propositions.

The term “auxiliary” implies that the assumptions of the economic theory
are primary assumptions. The set of auxiliary assumptions that define a
particular social context gives rise to a model of the scientific theory. A theo-
retical model then includes these two subsets of assumptions, such that the
auxiliary assumptions do not contradict the primary assumptions. A model
is thus a logical system.
Because there are different types of social context, and each context is
constructed using a set of auxiliary assumptions, then for each social con-
text there will exist a particular model, and the set of all possible models
comprises the theory. Thus, an economic theory can be seen as a family of
models. The set of alpha propositions constitutes the core of the family.
Beta propositions can now be derived from each model by deductive logic.
The theory will now be subject to the process of falsification through the
beta propositions of its models. The property of beta propositions shown
above (Chap. 2) also applies here: A beta proposition of the model repre-
sents the reduced-form relations of the process, the relations between the
endogenous and exogenous variables.
The alpha-beta method now operates as follows. Let α1 and α2 repre-
sent two models of theory α. The falsification algorithm is now applied
to the models. If the beta propositions of the first model are not refuted
by empirical data, the model can be accepted, and then the theory can be
accepted; if they are refuted by empirical data, the model is rejected, but
the theory is not, because there is a second model to be put to the test. For
the theory to be rejected, all models of the family must fail. If all models
fail, the theory fails, it is eliminated, and a new theory is needed. This
algorithm requires that the number of models be finite; that is, it requires
that the theory should generate only a limited number of models.
Table  4.1 shows the falsification algorithm of the alpha-beta method
applied to economic theory α, having n models. Given theory α, a set of

Table 4.1 The alpha-beta method in economics

α − (A′) → α′ ⇒ β′ → [β′ ≈ b]
   If β′ = b, then model α′ explains reality and so does α
   If β′ ≠ b, then model α′ fails to explain reality; then
α − (A″) → α″ ⇒ β″ → [β″ ≈ b]
   If … (the algorithm is followed)
α − (Aⁿ) → αⁿ ⇒ βⁿ → [βⁿ ≈ b]
   If βⁿ ≠ b, then model αⁿ fails to explain reality, and so does α. Then, a new theory is constructed and the algorithm continues

consistent auxiliary assumptions A′ is included to construct the model α′,
from which the set of empirical predictions β′ is derived, which is then
tested against the empirical data set b. If the model α′ does not
fail the statistical test, it can be accepted, and so is the theory; if it does
fail, then the model α′ is rejected, but the theory is not. Using auxiliary
assumption A″, another model α″ is constructed and submitted to the fal-
sification process; if it fails, then another model is constructed, and so on,
until model αn. If the n models fail, then the theory fails, it is eliminated,
and a new theory is constructed; thus, the algorithm continues.
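The algorithm just described can be rendered schematically in code. This is only a sketch: the function test_against_data stands in for the whole statistical testing step, and the model labels are invented for illustration.

```python
def falsify(theory_models, test_against_data):
    """Falsification algorithm over a finite family of models.

    theory_models: models built from the theory's alpha propositions
    plus auxiliary assumptions A', A'', ..., A^n.
    test_against_data(model): True when the model's beta propositions
    are not refuted by the empirical data set b.
    """
    for model in theory_models:
        if test_against_data(model):
            return f"model {model} is accepted, and so is the theory"
    return "all models failed: the theory is eliminated; construct a new theory"


# Hypothetical usage: a theory with three models, of which only the
# third survives the statistical test.
print(falsify(["alpha'", "alpha''", "alpha'''"], lambda m: m == "alpha'''"))
```

Note that the loop terminates only because the family of models is finite, which is exactly the condition the text imposes on an economic theory.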
It should be noted that the use of auxiliary assumptions does not consti-
tute a protective belt of a theory against refutation, as some epistemologists
have suggested (the so-called Duhem-Quine problem). On the contrary,
as can be seen in Table 4.1, the use of models is a method to increase the
degree of falsifiability of an economic theory, subject to the only condition
that the number of possible models should be finite. Therefore, we can
write more precisely: an economic theory is a finite family of models.
How can we ensure that the number of models of a theory is
finite? Each model represents a possible social situation. Therefore, a
finite number of models implies a finite number of possible social situations
in which social actors interact in the economic process. As we already know,
a complex real world is transformed into a simple abstract world by means of
a scientific theory; an economic theory does this in economics. The social context or social
situation refers to the different forms in which the abstract economic process
can be constructed. Therefore, in order to be falsifiable, the theory needs to
assume that the set of possible social situations is finite.
For example, an economic theory of markets must consider the dif-
ferent forms that market structure may take. They can vary from many
buyers and sellers (perfect competition) to one buyer and one seller (bilat-
eral monopoly). We could imagine an infinite number of social situations
between these extremes. In order to be falsifiable, this economic theory
must assume only a few situations, such as perfect competition, oligopoly,
oligopsony, monopoly, monopsony, and bilateral monopoly.

Partial Equilibrium and General Equilibrium Models

According to the nature of the repetition, economic processes can be static,
dynamic, or evolutionary, as shown above (Chap. 3). Therefore, we may
have static, dynamic, or evolutionary models. According to the breadth

of its boundaries, economic processes can be defined by narrow or wide
boundaries in the social relations; hence, we may have partial equilibrium
and general equilibrium models.
Social actors constitute the unit of analysis of any economic theory. In
a capitalist society, social actors include capitalists, politicians, and work-
ers. The process whose boundaries refer to these social actors, taken
as individuals or organizations, may be called a group equilibrium model.
Endogenous variables will then refer to the behavior of these social actors
in response to changes in the exogenous variables. As we know, the use
of abstraction implies that an economic theory cannot seek to explain the
behavior of every social actor. However, the theory may study the group
by constructing the abstract representative agent of the group; this
construction is usually called a microeconomic equilibrium model. It is just
a logical artifice to understand the behavior of a group of social actors.
Intergroup equilibrium refers to the interactions between social actors
in some particular domain of the economic process. The examples include
relations between consumers and producers in a particular market of
goods, or between employers and workers in a particular labor market,
or between government and taxpayers. The economic process so defined
leads to partial equilibrium models. Under these models, the abstract eco-
nomic process presents the most elementary social interactions in society.
General equilibrium models refer to the entire economic process, to the
interrelations between all social actors in all relevant domains of the entire
economic process. It is the abstract representation of the society under
study. The study of production and distribution in society corresponds to
a general equilibrium model. A general equilibrium model comprises the
interactions of all markets and the interactions between citizens and the
government. Any economic theory is presented by necessity as a general
equilibrium model, for economics is a social science. It can have partial
equilibrium models as a matter of convenience, just as a logical artifice to
construct the general equilibrium model.
As the abstract process moves from partial equilibrium to general equi-
librium, endogenous variables will increase in number and exogenous vari-
ables will decline. The reason is that some exogenous variables in partial
equilibrium will need to be explained in the general equilibrium and thus
will become endogenous. The general equilibrium model is able to explain
more features of the real world with fewer exogenous variables. We may
call this the principle of increasing endogenization of variables as the level
of models is more aggregated. Hence, partial equilibrium is just a logical

artifice to understand parts of the economic process. For example, the
prices and quantities of equilibrium in a given market can be explained by
society’s income level and its distribution; however, in the general
equilibrium model, income level and its distribution in society become
endogenous,
which may be explained by exogenous variables, such as international
prices or international terms of trade.

Short-Run and Long-Run Models


The abstract economic process can also be constructed to represent
different logical situations regarding the adjustment of the variables made
by social actors. This includes short-run and long-run models. A short-­
run model assumes limited capacity of social actors in the adjustment of
variables, whereas the long-run model assumes that those adjustments are
more flexible. For example, the short-run model may assume that capital-
ist firms operate with a given capital stock (exogenously given), whereas
the long-run model may assume that capitalists choose endogenously
the stock of capital they want to have by investing. Hence, compared to
the short-run model, a long-run model assumes fewer exogenous vari-
ables and a higher number of endogenous variables. The reason is that
social actors are able to make adjustments in some order: some can be
done immediately, while others will require more time; therefore, some
variables that were exogenous in the short run will become endogenous
in the long run. It follows that there is, again, an increasing endogeniza-
tion of variables in the economic process as we move from short-run to
long-­run models.
It should be noted that short-run and long-run categories refer to
logical time, not to chronological time (months, years, decades). Short run
and long run correspond to a logical distinction: differences in the degree
of adjustments in the variables of the economic process made by social
actors. However, logical time and chronological time may be somehow
related. More things could be subject to adjustment in a year than in a
month.
Analytically, however, it is more useful to connect the logical time to
the type of economic process, mechanical or evolutionary, studied in the
previous chapter. A short-run model relates more naturally to the static
process, a long-run model to the dynamic process, and a very-long-run
model to the evolutionary process. Indeed, exogenous variables decrease
in number, as they become endogenous, as we move from the short run to

the long run and then to the very-long-run models. Hence, the mechanical
and evolutionary processes can also be seen as models of different runs.
The most important types of abstract economic processes that an eco-
nomic theory may take include the following categories:

(a) Short run, long run, very long run
(b) Group equilibrium, partial equilibrium, general equilibrium
(c) Market structure: perfect competition, oligopoly, monopoly, and a
few others.

Categories (b) and (c) come from the previous sections of this chapter.
Category (a) comes from static, dynamic, and evolutionary processes
(Fig. 3.1, Chap. 3). A theoretical model results from selecting one type
of process from each category. For example, the combination short run,
partial equilibrium, and the market structure of perfect competition
constitutes a model of a given economic theory. From this example, it follows that
the combination of types of processes leads to a finite number of models.
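Under the assumption that each category offers a finite menu of choices, the family of models is simply the Cartesian product of the categories, and is therefore finite. A minimal sketch (category labels taken from the lists above; the enumeration itself is an illustration, not the book's):

```python
import itertools

runs = ["short run", "long run", "very long run"]
equilibria = ["group", "partial", "general"]
market_structures = ["perfect competition", "oligopoly", "oligopsony",
                     "monopoly", "monopsony", "bilateral monopoly"]

# A model selects one type of process from each category, so the family
# of models is the finite Cartesian product of the categories.
models = list(itertools.product(runs, equilibria, market_structures))

print(len(models))   # 3 * 3 * 6 = 54 possible models
print(models[0])     # ('short run', 'group', 'perfect competition')
```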

Unity of Knowledge in Economics


According to the alpha-beta method, scientific knowledge needs a scien-
tific theory. No theory, no scientific knowledge. A scientific theory allows
us to transform the complex real world into a simple abstract world, which
makes it viable to explain and understand that complex world. Whether
the abstract world is a good approximation of the complex real world is
resolved by the falsification of the scientific theory. However, the abstract
process that represents the real world process can take several forms, which
implies different models of the same scientific theory.
The risk now is that the explanations given by the different models
might be contradictory. The short-run model may be able to explain the
reality and the long-run model may also be able to explain the reality, but
they cannot both be true. Similarly, the partial equilibrium can explain the
real world, the general equilibrium can also explain the reality, but these
models might result in contradictory predictions. Each model can explain
reality taken separately, but they cannot both explain the reality taken as a
whole. We have the problem of unity of knowledge.
On this problem of unity of knowledge, the example of physics is very
illustrative. As we know, quantum physics is able to explain the subatomic
world, whereas relativity physics is able to explain the world of large bodies.

However, they are inconsistent with each other; that is, they cannot both
be true. The world of quantum physics operates with disorder, whereas
the world of relativity with order. How could the order in the large bod-
ies be the result of disorder in the small bodies? The two good partial
theories of physics do not lead to a good unified theory (more on this in
Chap. 9 below).
In order to explain the real social world, an economic theory must
constitute a logical system, and thus must lead to unity of knowledge, to
a unified theory. A single social reality must have unity of knowledge to be
understood. For example, in the neoclassical theory, partial equilibrium
models assume that labor markets are Walrasian and thus predict equi-
librium with full employment (microeconomics textbooks), but general
equilibrium models (macroeconomics textbooks) predict that equilibrium
is with unemployment. Both cannot be true and the neoclassical theory
cannot explain the observed unemployment in the real world.
Another example. A feature of the capitalist system is the coexistence
and persistence of few rich and many poor countries, usually called the
First World and the Third World. After 200 years of capitalism and its
continuous globalization, this persistent inequality is a paradox. To solve
this paradox, we need partial theories able to explain the First World and
the Third World taken separately, and then a unified growth theory that
should be able to explain the capitalist system taken as a whole. A proposal
of such unified theory can be found in Figueroa (2015).
Therefore, unity of knowledge is one of the fundamental epistemo-
logical requirements of science. Incoherent and fragmentary knowledge
does not imply scientific knowledge. A unified theory does. The alpha-­
beta method, by construction, ensures this objective in economics. Thus,
the set of auxiliary assumptions must be consistent with the set of primary
assumptions of the theory; the set of assumptions of partial equilibrium models
must be consistent with that of the general equilibrium model, and the
set of assumptions of a short-run model must be consistent with that of a
long-run model.
In sum, this chapter has shown the particular features of the alpha-beta
method when applied in economics. The rules of scientific research in
this field have thus been established. The next two chapters are devoted
to solving operational problems in statistical testing when dealing with the
falsification of economic theories under the alpha-beta method.
Chapter 5

Falsifying Economic Theories (I)

Abstract  The use of abstraction implies that a good economic theory
cannot fit all cases of the social world, but just the general cases; that is,
there will be exceptions. A counter-example cannot be used to refute an
economic theory. Therefore, falsification of economic theories must be
statistical. This and the next chapter deal with the nature of statistical testing
in the particular framework of the alpha-beta method: How is statisti-
cal testing applied to falsification of economic theories? In this chapter,
the foundations of parametric testing instruments are presented, making
explicit the assumptions of the underlying statistical theories. Then, the
falsification of an economic theory through its beta propositions that
predict mean differences is shown.

An economic theory is an abstract construction of the real world; hence,
the empirical predictions derived from such a theory cannot fit every case
of the real world, but only in general, in average values. Therefore, falsi-
fication of an economic theory requires statistical testing. This testing is
not a mechanical application of statistical methods or econometric meth-
ods, however. Several epistemological problems arise in this endeavor.
This and the next chapters discuss the nature of statistical testing under
the alpha-­beta method. To be sure, these chapters are not about statistical
methods alone, which are covered in standard statistics textbooks; more-
over, standard statistical methods are usually presented in the framework

© The Editor(s) (if applicable) and The Author(s) 2016 63


A. Figueroa, Rules for Scientific Research in Economics,
DOI 10.1007/978-3-319-30542-4_5
64  A. Figueroa

of testing hypotheses without scientific theory. They will deal with the
logic of the underlying statistical testing methods and their consistency
with the alpha-­beta method, including the epistemological problems that
may arise.

Statistical Testing under the Alpha-Beta Method


Testing hypotheses following the alpha-beta method, in which the hypoth-
esis to be tested is derived from a scientific theory, implies integrating
statistics and epistemology. In particular, in the falsification process, sta-
tistical testing introduces new assumptions, those of the statistical infer-
ence principles, which are additional to those of the scientific theory being
tested. New epistemological problems thus arise. When a scientific theory
is refuted by facts, how can we distinguish a failure that comes from
the assumptions of the scientific theory from a failure of the assumptions
of the statistical instruments utilized in the test? The nature of fal-
sification in economics refers to this problem, which may be called the
problem of identification.
According to the alpha-beta method, the falsification problem consists
of seeking to refute the conformity between β, the empirical prediction of
a theoretical model, and the empirical data set b. The logic of statistical
testing comes from statistical theory. Statistics is a formal science; it is a
branch of mathematics.
The statistical conformity, which we represent by β = b , refers to the
universe of the study, which is called parent population. However, we can
hardly know the true empirical data b in the parent population. What we
use in our observations are samples drawn from the parent population,
and from those observations we seek to estimate the true values of the
parent population. What is the relation between the sample and the par-
ent population values of a variable? The answer is given by constructing a
logical system based on assumptions, which is known as statistical theory
or statistical inference theory.
Statistical inference theory is constructed upon particular assumptions.
Parametric statistical theory makes assumptions about the characteristics
of the parent population from which samples are drawn; by contrast, non-­
parametric statistics makes no such assumptions. Both statistical theories
are now presented in their most elementary form.
Falsifying Economic Theories (I)  65

Parametric Statistical Testing


We can hardly know the distribution of a parent population; it is unobserv-
able. We estimate the values of a parent population from samples taken from
it, and we make this estimation possible by making assumptions about the
distribution of the parent population from which the sample was drawn. For
biological variables, the standard assumption is a normal distribution (e.g.,
the height of adults in a community). For socioeconomic variables, a normal
distribution is assumed in some cases and a non-normal one in others, as in
the case of household income distribution.
Parametric statistical theory makes a set of assumptions about the
relationship between the characteristics of the parent population and those
of the sample drawn from it. They are:

(a) About the distribution of the variable under study in the parent popu-
lation from which samples are drawn: It assumes that the variable has
a normal distribution in the parent population.
(b) About the mechanism used to select the samples: It assumes that samples
are drawn by a random mechanism.

The statistical theory of inference can now be applied to the alpha-beta
method. In scientific theoretical models, as we already know from Chap. 3,
endogenous variables can be deterministic or stochastic. An endogenous
variable is deterministic when it has a constant equilibrium value, whenever
the exogenous variables remain unchanged in static processes, or when its
equilibrium trajectory is fixed, whenever the exogenous variables remain
unchanged in dynamic processes. An endogenous variable is stochastic when
it has random variations around its equilibrium value in static processes, or
when it has random variations around its equilibrium trajectory in dynamic
processes.
According to the alpha-beta research method, in which the method
of abstraction is used, the basic endogenous variables in the economic
process—production and distribution—must be stochastic. In a static
process, the observed levels of output of a society must vary around its
equilibrium value; the observed degrees of income inequality of a society
must also vary around its equilibrium value. These variations are consistent
with the nature of economic theory. Economic theory makes abstractions,
leaving aside many other factors that are considered unimportant in the
economic process; hence, endogenous variables must be stochastic, subject
to variability, which originates in the effect of those factors that the
theory has just ignored.
The endogenous variable of an economic theory could have in the par-
ent population a normal or non-normal distribution. This distinction is
very important for the logic of statistical testing of economic theories, so
it is worth explaining the logic of statistical testing under the alpha-beta
method.

Sample and Population Relationships


Take the variable “household income” as an example. Consider the fol-
lowing income distributions for eight households, which constitute the
parent populations:

B: [10, 20, 30, 30, 40, 40, 50, 60]
Total income = 280, Mean = 35, Median = 35

C: [10, 10, 15, 15, 30, 30, 50, 120]
Total income = 280, Mean = 35, Median = 22.5

Total income and the number of households are common to both distri-
butions. However, the distribution B is normal or symmetric, whereas C
is non-normal or asymmetric. The distribution of a variable in the parent
population is usually unknown. It is often unobservable. We estimate the
values of these variables from samples drawn from the population. But
then we need to make assumptions about the distribution of the parent
population to derive the properties of the sample values.
Consider firstly the distribution B shown above. This corresponds to
the parent population, which can also be represented in the form of a
frequency distribution, as shown in Table  5.1. The variable household
income has a normal distribution or symmetric distribution, as shown in
the second column. The mean is equal to 280 / 8 = 35 and standard devia-
tion (S.D.) is 15.
From the parent population distribution we can derive the set of all
possible sample values of size n under the assumption that the mechanism
of selection is random. For the sake of simplicity, consider n = 2. Then the
set of all possible sample outcomes will be equal to 8² = 64. Suppose that

Table 5.1  Frequency distribution of income in the population B

Income    Frequency    Total income
10        1            10
20        1            20
30        2            60
40        2            80
50        1            50
60        1            60
Total     8            280
Mean      35
S.D.      15

Table 5.2  Distribution of sample means for n = 2 drawn from population B

Mean income    Frequency    Probability
10             1            1/64
15             2            2/64
20             5            5/64
25             8            8/64
30             10           10/64
35             12           12/64
40             10           10/64
45             8            8/64
50             5            5/64
55             2            2/64
60             1            1/64
Total          64           1
Mean           35
S.D.           10.61

the mechanism of random selection is to have eight balls marked from 1 to
8 in a dark urn, from which one ball is drawn, returned to the urn, and a
second ball is drawn; each of the 64 ordered outcomes is then equally likely.
This mechanism will generate a sample distribution of means, as shown
in Table 5.2. The sample distribution of means also has a symmetric dis-
tribution as can be seen in the second column. The probability of the
outcome of each mean value is shown in the last column. The sample
distribution of means has a mean value that is equal to the weighted aver-
age of each mean outcome, weighted by its probability, which results in
the value of 35, which is just equal to the mean of the parent popula-
tion. The calculated standard deviation is equal to 10.61, which is smaller
than that of the parent population; actually it is equal to the standard
deviation of the parent population divided by the square root of 2, that is,
15 / √ 2 = 15 / 1.4142 = 10.61 .

If we drew a sample of size two and got a mean household income


of 15, could we accept the hypothesis that this sample comes from the
parent population that has a normal distribution with mean equal to 35
and standard deviation equal to 15? This particular sample value has a
chance of 2 in 64, or 1 in 32. The criterion to accept or reject a
hypothesis in statistics is well established: an error of up to 1:20 is accept-
able as a sample deviation due to pure chance; that is, the threshold of
probability (p) due to pure chance is p = 0.05 or 5%. This probability is
called the p-value. Since 1/32 ≈ 0.031 is below this threshold, the
hypothesis would be rejected.
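The arithmetic of this example can be checked by brute force. The following sketch (in Python, using only the standard library) enumerates the 64 equally likely ordered samples of size two from population B, recovers the sampling distribution summarized in Table 5.2, and applies the 1:20 criterion to a sample mean of 15:

```python
from itertools import product
from math import sqrt

# Population B from the text: eight household incomes
B = [10, 20, 30, 30, 40, 40, 50, 60]

# All 8^2 = 64 equally likely ordered samples of size n = 2
means = [(x + y) / 2 for x, y in product(B, repeat=2)]

mean_of_means = sum(means) / len(means)
sd_of_means = sqrt(sum((m - mean_of_means) ** 2 for m in means) / len(means))
print(mean_of_means)          # 35.0, the mean of the parent population
print(round(sd_of_means, 2))  # 10.61, i.e. 15 / sqrt(2)

# Probability of observing a sample mean of exactly 15: 2 cases in 64
p = means.count(15) / len(means)
print(p, p < 0.05)            # 0.03125 True -> the hypothesis is rejected
```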
Note that the concept of probability must be well defined. Consider
the following definition: Probabilities are known when we know the mecha-
nism by which the distribution of events is generated. Tossing a coin or
drawing from a dark urn are examples of physical mechanisms generating
the events, from which we can derive probabilities.
If the mechanism were unknown, we could not establish the relative fre-
quency distribution and, thus, probabilities could not be determined. A
simple observation of a relative frequency, without knowing the generat-
ing mechanism, is not an objective measure of probability, according to
this definition.
From the household income example, we can now state one of the
most important theorems of parametric statistical inference:

Theorem 5.1  If
(a) Variable Y has in the parent population a normal distribution, with
mean μ and standard deviation σ, usually expressed as Y ~ N(μ, σ);
(b) Samples of size n are drawn by a random mechanism;
Then
The distribution of the sample mean Ȳ also has a normal distribution,
with mean μ and standard deviation equal to σ/√n, that is, Ȳ ~ N(μ, σ/√n).

Given the particular assumptions (a) and (b), there is a particular rela-
tion between sample and parent population values. The mean of the sample
distribution of means will be equal to the mean of the parent population.
The larger the sample size is, the more accurate the sample estimate of the
population mean will be, because the standard error will become smaller.
An implication of Theorem 5.1 is that we can define the statistic z as
follows


z = (m − μ)/(σ/√n), such that z ~ N(0, 1)   (5.1)

where m is the sample mean. The random variable z has a normal distribu-
tion with mean equal to zero and standard deviation equal to one, and is
known as the standardized normal distribution (because its parameters
are 0 and 1).
Equation (5.1) can then be utilized to accept or reject the hypothesis
that we select for statistical testing, which is called the null hypothesis.
The logic of using the null hypothesis in statistical testing is the following:
from the operational point of view, it is easier to accept or reject the null
hypothesis, which in turn implies rejecting or accepting the alternative
hypothesis. This is a logical artifice. For example, if we were interested in
testing whether a coin is unfair, it would be much easier to test that it is
fair (the null hypothesis, for which we know the sampling distribution); if
the null hypothesis is rejected (accepted), then we accept (reject) the
alternative hypothesis that it is unfair.
Let the null hypothesis refer to the value μ = 0 and the alternative
hypothesis be μ ≠ 0. Given the criterion that 1:20 (p-value = 0.05) is
the deviation due to pure chance, it can be shown that the confidence
interval lies between 1.96 and −1.96 of the value of z, that is, approxi-
mately two standard deviations from the mean. If the observed value of
m lies beyond these threshold values, we reject the null hypothesis. Why?
The observed value of m is too far from zero to be attributed to chance
alone; that is, the probability of pure chance is too low (p-value < 0.05)
and there must exist something else generating this outcome; thus, we
accept the alternative hypothesis. If the observed value of m lies within
the threshold values, we accept the null hypothesis and thus reject the
alternative hypothesis. Why? The observed value of m is not too far from
zero and can be attributed to chance alone; that is, the probability of pure
chance is high (p-value > 0.05).
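This decision rule can be sketched as a small function based on Eq. (5.1); the function name and the sample figures below are hypothetical, chosen only to illustrate a two-tailed test at the 5% level:

```python
from math import sqrt

def z_test(m, mu, sigma, n, critical=1.96):
    """Two-tailed z test of the null hypothesis that the population mean
    equals mu, at the 5% significance level (illustrative sketch)."""
    z = (m - mu) / (sigma / sqrt(n))
    return z, abs(z) <= critical  # True -> accept the null hypothesis

# Hypothetical sample: n = 36 observations with mean 38, drawn from a
# population assumed under the null hypothesis to be N(35, 15)
z, accept_null = z_test(m=38, mu=35, sigma=15, n=36)
print(round(z, 2), accept_null)  # 1.2 True -> deviation attributable to chance
```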
There are several qualifications to the results presented here. Equation (5.1)
has a correction when the population is finite; moreover, the statistic follows
the t-distribution when the sample size is smaller than 30. Under some
conditions, the observed standard deviation from the sample can substitute
for the unknown standard deviation of the population. The level of
significance may also be better applied as a one-tailed test. These problems
are dealt with in standard textbooks of statistical methods. What was
important here was to show the logic underlying parametric statistical testing.

Table 5.3  Frequency distribution of income in the population C

Income    Frequency    Total income
10        2            20
15        2            30
30        2            60
50        1            50
120       1            120
Total     8            280
Mean      35
S.D.      34.55

Table 5.4  Distribution of sample means for n = 2 drawn from population C

Sample mean    Frequency    Probability
10.0           4            0.06
12.5           8            0.13
15.0           4            0.06
20.0           8            0.13
22.5           8            0.13
30.0           8            0.13
32.5           4            0.06
40.0           4            0.06
50.0           1            0.02
65.0           4            0.06
67.5           4            0.06
75.0           4            0.06
85.0           2            0.03
120.0          1            0.02
Total          64           1.00
Mean           35.00
S.D.           24.43

Now consider the example presented in distribution C, the frequency
distribution of which is presented in Table 5.3. The parent population
distribution is asymmetric. The mean is 35 and the standard deviation
(S.D.) is 34.55. The set of all sample outcomes of size n = 2 will be 64.
The frequency distribution of the sample means is presented in Table 5.4.
Like the parent population, this distribution is also asymmetric or non-normal,
with mean equal to 35, standard deviation equal to 24.43, and other
parameters for asymmetry and kurtosis. The sample mean is equal to the
population mean, but the standard deviation is smaller than that of the
population; actually, it is equal to the standard deviation of the population
divided by the square root of 2, that is, 34.55/√2 = 24.43.

As the sample size increases, the distribution of sample means not only
reduces its standard deviation, but the sample distribution becomes more
symmetric, and around n = 30 , it becomes almost symmetric. Then we can
state the second fundamental theorem of statistical inference as follows:

Theorem 5.2: Central Limit Theorem  If
(a) Variable W has a non-normal distribution in the parent population,
with mean equal to ε and standard deviation equal to γ;
(b) Samples of large size (n > 30) are drawn by a random mechanism;
Then
The distribution of the sample mean W̄ has approximately a normal
distribution, with mean equal to the mean of the parent population and
standard deviation equal to the standard deviation of the parent population
divided by the square root of the sample size n, that is, W̄ ~ N(ε, γ/√n).

Given the particular assumptions (a) and (b) of Theorem 5.2, the
relations between the sample values and the corresponding values of the
parent population are similar to those shown in Theorem 5.1. Equation
(5.1) also applies in this case, provided the sample size n is sufficiently
large (n > 30).
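The theorem can be illustrated by simulation. The sketch below repeatedly draws random samples of size n = 36 (with replacement) from the skewed population C of the text; the seed and the number of trials are arbitrary choices made for the illustration:

```python
import random
from math import sqrt

random.seed(1)

# Skewed parent population C: mean 35, standard deviation about 34.55
C = [10, 10, 15, 15, 30, 30, 50, 120]

n, trials = 36, 20000
sample_means = [sum(random.choices(C, k=n)) / n for _ in range(trials)]

mean = sum(sample_means) / trials
sd = sqrt(sum((m - mean) ** 2 for m in sample_means) / trials)
print(round(mean, 1))  # close to 35, the parent population mean
print(round(sd, 1))    # close to 34.55 / sqrt(36) = 5.8
# A histogram of sample_means would look approximately normal, even though
# the parent population is strongly asymmetric.
```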

Beta Propositions: Testing Mean Differences


Up to now, we have shown the logic underlying parametric statistical
testing. Now this logic is applied under the alpha-beta research method. Beta
propositions may refer to differences in the means of two or more variables.
Mean income differences between populations can be predicted from a
theoretical model as observable equilibrium conditions. Let the mean values
of two populations be μ1 and μ2.
In order to test the mean differences from the observed differences in
samples, we need a statistic. It is derived from the following theorem:

Theorem 5.3  If
(a) Variable Y has a normal distribution in each of two parent populations;
that is, Y1 ~ N(μ1, σ1) and Y2 ~ N(μ2, σ2);
(b) Samples are independent from each other and are drawn from each
population with sizes n1 and n2 by a random mechanism;
Then
The variable (Ȳ1 − Ȳ2), the difference between the sample means, is a
random variable that has a normal distribution with mean equal to the
difference of the two population means and standard deviation equal to the
square root of the sum of the variances of the two parent populations, each
divided by its corresponding sample size, that is,

(Ȳ1 − Ȳ2) ~ N(μ1 − μ2, √(σ1²/n1 + σ2²/n2))
The assumption “independent sampling” refers to the requirement that
the selection of one sample is not affected by the selection of the other.
The theorem is not true if the samples refer, for example, to “before and
after” situations. The implication of the theorem is that we can redefine
the statistic z of Eq. (5.1) and apply it for testing the null hypothesis that
the difference is zero.
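The implied test for a difference of means can be sketched as follows; the function name and all sample figures are hypothetical:

```python
from math import sqrt

def mean_difference_z(m1, m2, s1, s2, n1, n2, critical=1.96):
    """z statistic for the null hypothesis mu1 - mu2 = 0, based on two
    independent random samples (as in Theorem 5.3); illustrative sketch."""
    standard_error = sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    z = (m1 - m2) / standard_error
    return z, abs(z) > critical  # True -> reject the null hypothesis

# Hypothetical sample results for two parent populations
z, reject_null = mean_difference_z(m1=42, m2=35, s1=15, s2=12, n1=50, n2=50)
print(round(z, 2), reject_null)  # 2.58 True -> the means differ
```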
A beta proposition of a model of an economic theory needs to be trans-
formed into a statistically testable hypothesis; thus, the beta proposition
may be called the beta hypothesis or simply β-hypothesis. This is to distin-
guish it from an empirical hypothesis that has no theory, which will be called
H-hypothesis (to be discussed later on, Chap. 7). Now suppose the beta
proposition states that the means of both populations are unequal. Then
the null hypothesis can be stated as ( µ1 − µ 2 ) = 0 , whereas the alternative
hypothesis is now the β-hypothesis ( µ1 − µ 2 ) ≠ 0 . If the observed sample
means difference is too far from zero to be attributed to pure chance; that is,
if the probability that this deviation due to pure chance is too low (p-value
< 5% ), then the null hypothesis is rejected and the β-hypothesis is accepted;
hence, the model is accepted, so is the theory. If in contrast the observed
sample means difference is not too far from zero so that the difference can
be attributed to pure chance (p-value > 5% ), then the null hypothesis is
accepted, and the β-hypothesis is rejected, which implies that the model is
rejected, but the theory is not, for another model can be constructed and
submitted to the statistical test. Remember that the number of models of an
economic theory is finite.
It could happen that the β-hypothesis states that the means of both pop-
ulations are equal. In this case, the null hypothesis is also the β-hypothesis
and accepting (rejecting) the null hypothesis also implies accepting (reject-
ing) the β-hypothesis.
Chapter 6

Falsifying Economic Theories (II)

Abstract  This chapter, firstly, continues with the analysis of parametric


testing instruments, now applied to testing causality relations derived from
an economic theory. The instrument of regression analysis is presented,
making explicit the assumptions of the underlying statistical theory. Then,
it is applied to test mechanical and evolutionary models. Secondly, the
foundations of non-parametric instruments are developed. The testing
instruments presented include one for mean differences and the other
for causality relations. Thirdly, the criteria for making economic variables
measurable, and the implicit assumptions, are discussed. The chapter ends
with a discussion about the nature of falsification in economics, in which
three sets of assumptions are involved, those of economic theory, statisti-
cal theory, and empirical measurement, giving rise to the identification
problem.

Beta propositions also constitute causality relations and as such predict


empirical relations between the endogenous and exogenous variables of
a scientific theory. These relations must be submitted to the falsification
process, via statistical testing. Causality is established by beta propositions
from static, dynamic, and evolutionary models of the scientific theory. We
need to consider these cases separately. We start with static and dynamic
models, which are mechanical models.


Testing Causality Relations: Mechanical Models

Regression Analysis: Static Models


In static models, the values of the endogenous variables remain constant
as long as the values of exogenous variables remain unchanged; hence,
changes in exogenous variables cause changes in the endogenous variables
in particular directions. As shown above (Chap. 4), the beta proposition,
as causality, represents the reduced form relations of the theoretical model.
For the sake of simplicity, consider a theoretical model with single
endogenous and exogenous variables. Then the beta proposition can be
written as follows:

β : Y = F(X⁺)   (6.1)

Changes in the exogenous variable X cause changes in the values of the


endogenous variable Y, such that the endogenous variable moves in the
same direction as that of the exogenous (indicated by the sign + on top of
X). Therefore, the equilibrium values of the endogenous variables depend
upon the values of the exogenous variables. This is what Eq. (6.1) says.
How can this beta proposition be submitted to the statistical test? The
most common statistical method is called regression analysis. This is a sta-
tistical theory that rests on four assumptions. The first assumption is that
the relation in the parent population is linear. Then we must transform
the beta proposition into a regression line and thus into a β-hypothesis,
as follows:

β-hypothesis: Y = F(X) = μy/x = β0 + βX,  β > 0   (6.2)

This equation is just the representation of the general beta proposition Eq.
(6.1) now by a linear equation, in which the sign ( + ) of the coefficient of
X indicates the prediction of the static model. Moreover, the variable X has
fixed values and the variable Y is stochastic; hence, Eq. (6.2) can be seen as
the representation of the mean value of Y for each value of X in the parent
population, indicated by the conditional mean μy/x. Furthermore, for each
value of X, the variable Y has a normal distribution with a mean value that
is equal to the conditional mean and a constant standard deviation along
all values of X (the homoscedasticity assumption). These assumptions are
represented in Fig. 6.1, which is a standard graph in the literature.

Fig. 6.1  Assumptions of regression analysis
The second assumption is that a sample of size n is drawn by a random
mechanism from the parent population that is represented in Fig. 6.1; that
is, a sample is drawn for each fixed value of X. The sample estimate of Eq.
(6.2) is done by estimating the regression line using the method of least
squares, which assures that the line goes through the middle of the set of
points representing the statistical data (see Fig. 6.2 below). This is the third
assumption. Define the estimated regression line from a sample as follows:

Y = b0 + bX + e   (6.3)


Fig. 6.2  Breakdown of the variation of Yj into two components:
(Yj − Ȳ) = (Yj − Ŷj) + (Ŷj − Ȳ)

The coefficient b is called the regression coefficient. According to Theorem


6.1 (see below), for a given sample size n, there will be a sample distribu-
tion of the regression coefficient b, when considering all possible samples,
with mean equal to β and a constant standard deviation.
Consider now a theoretical model with two exogenous variables X1 and
X2. Suppose the beta proposition states the following causality relation:
the effects of the exogenous variables upon the endogenous variable are
positive and negative, respectively. Then, under the logic of regression
analysis, we must write this relation as the following β-hypothesis:

β-hypothesis: Y = F(X1, X2) = β0 + β1X1 + β2X2,  β1 > 0, β2 < 0   (6.4)

Sample estimate: Y = b0 + b1X1 + b2X2 + e   (6.5)

Equation (6.4) indicates the additional assumption made to test the theo-
retical model: the causality relation is linear in the parent population. The

directions of causality derived from the theory show that the effect of
changes in the value of X1 (holding constant the value of the other exog-
enous variable) is positive, whereas the effect of changes in the value of X2
(holding constant the value of the other exogenous variable) is negative.
Equation (6.5) represents the sample estimate of the regression coeffi-
cients from a random sample drawn from the parent population.
We could follow the algorithm and represent a causality equation with
three or more exogenous variables. But we should remember that a theo-
retical model must have few exogenous variables; otherwise, it would not
represent an abstract world. A theoretical model with many exogenous
variables is useless for understanding the real world (like a map drawn at
the scale 1:1).
We can hardly understand the mechanisms by which so many exogenous
variables generate the values of the endogenous variables. If the real world
cannot be explained with few variables, then we have to admit that that
particular reality is unexplainable; it may be unknowable, which contra-
dicts one of the meta-assumptions of the theory of knowledge (Table 1.1,
Chap. 1).
The relation between sample and parent population distributions in
regression analysis can be established by the following theorem:

Theorem 6.1  If
(a) There exists a linear relation in the parent population between Y and
X, where the endogenous variable Y has a normal distribution, with
conditional mean and constant standard deviation σ for all values of
the exogenous variable (homoscedasticity assumption);
(b) The sample of size n is drawn by a random mechanism;
(c) The regression coefficients from the sample, as in Eq. (6.5), are esti-
mated by the method of least squares;
Then
Each regression coefficient bj is a random variable that has a normal
distribution with mean βj and standard deviation that depends upon the
standard deviation of the population (σ) corrected by a factor that depends
on the sample variability of the exogenous variable Xj and the sample
size, that is,

bj ~ N(βj, σ/√Sxxj), where Sxxj ≡ ΣXj² − (ΣXj)²/n


The term Sxxj measures the total squared variation of Xj from its mean;
hence, it is related (but not equal) to the sample variance of Xj. An
implication of this theorem is that we can define the statistic t as follows:

t = (bj − βj)/(s/√Sxxj)   (6.6)

where bj is the sample regression coefficient and s is the sample estimate
of σ. The random variable t has Student's t-distribution, which is similar
to the standard normal distribution. Therefore, this t-distribution is the
statistic to be utilized to test the null hypothesis βj = 0.
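The least-squares estimate and the t statistic of Eq. (6.6) can be computed directly from a data set. The numbers below are hypothetical, invented only to illustrate the procedure for one exogenous variable:

```python
from math import sqrt

# Hypothetical observations of the exogenous X and the endogenous Y
X = [1, 2, 3, 4, 5, 6, 7, 8]
Y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9]

n = len(X)
Sxx = sum(x * x for x in X) - sum(X) ** 2 / n
Sxy = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n

b = Sxy / Sxx                      # least-squares regression coefficient
b0 = sum(Y) / n - b * sum(X) / n   # intercept

# Residual standard error s, with n - 2 degrees of freedom
residuals = [y - (b0 + b * x) for x, y in zip(X, Y)]
s = sqrt(sum(e * e for e in residuals) / (n - 2))

# t statistic for the null hypothesis beta = 0
t = (b - 0) / (s / sqrt(Sxx))
print(round(b, 2), round(t, 1))  # positive slope, t far beyond 1.96 -> reject the null
```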
An economic theory can then be submitted to the falsification process
using regression analysis. The beta proposition derived from a theoretical
model was represented by Eq. (6.4) above, in which the assumption that
the causality relations are linear has been added. The theoretical model
predicts only the sign of βj, such as β1 > 0 for the exogenous variable X1
in the equation. In this case the null hypothesis would be that β1 = 0 and
the alternative hypothesis β1 > 0 . If the estimated coefficient b1 is nega-
tive, the model is simply rejected. If it is positive but not sufficiently far
from zero, so that the probability that the deviation of the observed value
from zero due to pure chance is large (p-value > 5% ), the null hypothesis
is accepted and then the alternative hypothesis is rejected, and thus the
model is rejected. If b1 is positive and sufficiently far from zero, so that this
difference cannot be attributed to chance alone (p-value < 5% ), then the
null hypothesis is rejected and then the alternative hypothesis is accepted,
and thus the model is accepted. The same procedure is applied to testing
the effect of the exogenous variable X2.
A note of caution is in order. The null hypothesis is that the size of the
effect of the exogenous variable is zero. The rejection of the null hypoth-
esis implies that the size of the effect is statistically different from zero,
and thus the model has been accepted. However, the test does not tell us
whether the size of the effect is large or small. The effect of the exogenous
variable may be statistically different from zero, but the size of this effect
may be empirically small. (The slope may depend on the units of measure,
so the size effect could be arbitrary.) The statistical significance test cannot
answer this question. A discussion about the possible misuses of statistical
significance is presented in Ziliak and McCloskey (2008).

If the overall test result is that both regression coefficients are posi-
tive and negative, respectively, and are statistically different from zero,
then β-hypothesis is accepted and thus the theoretical model is accepted;
otherwise, the theoretical model is rejected. It is sufficient for the model
to fail in one of its predictions (one regression coefficient) to be rejected.
The matrix of causality shown in Table 2.2, Chap. 2, illustrated this rule.
If the model fails, what can be said about the theory? We cannot con-
clude that the theory fails, as shown in Chap. 4. We have to test the other
models of the theory. If all models fail, then we can conclude that the
theory fails. Of course, if no models were needed to derive beta proposi-
tions from the theory, then there would be just one β-hypothesis to test,
the results of which will allow us either to accept or to reject the theory.
As shown above, the assumptions of regression analysis include that
the relation of the variables involved in the parent population is linear.
Rejection of the model implies rejecting the existence of a linear beta
proposition only. The beta proposition in the parent population could be
non-linear.
How can we test a non-linear relation with regression analysis? If the
beta proposition is monotonically increasing or decreasing, then a non-­
linear relation can be transformed into linear relations of the logarith-
mic values of the variables; hence, all the properties of regression analysis
shown above will also apply to the linear regression using logarithmic val-
ues. The beta proposition shown in Eq. (6.2) can then take the following
β-hypothesis form:

β-hypothesis: Y = F(X) = AX^β   (6.2a)

log Y = log A + β log X   (6.2b)

Y′ = F(X′) = μy′/x′ = A′ + βX′   (6.2c)


Equation (6.2c) is still linear (where the primes indicate logarithms, and
A′ = log A). The advantage is that, with this mathematical artifice, we now
have a non-linear relation in natural units, as represented in Eq. (6.2a),
which can also be tested with regression analysis. Therefore, the assumption
of linearity can take the form of natural functions or logarithmic functions.
If the test fails for Eq. (6.2), which leads to rejecting the β-hypothesis
of linear causality between Y and X, then we may proceed to test Eq.

(6.2b). If the β-hypothesis is accepted, then we have a monotonic non-­


linear causality between Y and X. If both forms are rejected, then we reject
the β-hypothesis. It may be that the causality relation takes other forms
of non-linear functions that are non-monotonic (such as cyclical relations,
with ups and downs), but we cannot use regression analysis for that.
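The logarithmic transformation of Eqs. (6.2a)-(6.2c) can be sketched in the same way; the data are hypothetical, generated to mimic a monotonic non-linear relation of the form Y = AX^β:

```python
from math import log

# Hypothetical data following approximately Y = A * X^beta
X = [1, 2, 4, 8, 16, 32]
Y = [3.1, 4.4, 6.0, 8.6, 12.2, 16.9]

# Transform to logarithms: log Y = log A + beta * log X is linear
lx = [log(x) for x in X]
ly = [log(y) for y in Y]

n = len(lx)
Sxx = sum(u * u for u in lx) - sum(lx) ** 2 / n
Sxy = sum(u * v for u, v in zip(lx, ly)) - sum(lx) * sum(ly) / n

beta = Sxy / Sxx                         # estimated exponent (elasticity)
logA = sum(ly) / n - beta * sum(lx) / n  # estimated log A
print(round(beta, 2))  # close to 0.5 for these numbers
```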

Correlation Coefficients
Causality can also be measured by the correlation coefficient, for it is
derived from the regression line. Again, consider the case of just one exog-
enous variable X. As can be seen in the regression line in Fig. 6.2, for a
given value of Xj there will be an observed value of Yj. The deviation of the
observed value of Yj from the mean value of Y is necessarily equal to the
sum of two components: the difference between the observed value of Yj
and the value of Y on the regression line (Ŷj), plus the difference between
this and the mean value of Y (Ȳ).
Therefore, for any observed value of Yj, we can write the following
identity:

(Yj − Ȳ) ≡ (Yj − Ŷj) + (Ŷj − Ȳ)   (6.7)

It can be shown, by simple algebraic manipulation, that we can obtain
the following identity for the n values of Y:

Σ(Yj − Ȳ)² ≡ Σ(Yj − Ŷj)² + Σ(Ŷj − Ȳ)²   (6.8)

This relation shows that the total variation of Y is necessarily equal to the
variation due to the regression line plus the residual variation.
The coefficient of determination (r²) is defined as follows:

r² = Σ(Ŷj − Ȳ)² / Σ(Yj − Ȳ)²   (6.9)

The numerator shows the part of total variations of Y that is attributable


to the statistical relation between endogenous and exogenous variables.
The denominator is just equal to total variations of Y. Therefore, the ratio
measures the proportion of total variations of Y that can be attributed

to its statistical relation with the exogenous variables. If r² = 1, then the
observed values of Y are all located on the regression line; if r² = 0, then
the variation attributable to the exogenous variables is nil, which implies
that the observed values of Y are located around a horizontal line, which
shows the mean value of Y as the regression line.
The term r = √r² is called the correlation coefficient. It can clearly take
positive or negative values and thus measures the direction of the observed
association between the variables: positive or negative correlation.
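To make these definitions concrete, the identity (6.8) and the definition of r² in (6.9) can be checked numerically. The following sketch is illustrative only; the data and the helper `simple_ols` are invented for this example, not taken from the text.

```python
# Check Eq. (6.8): total variation = regression variation + residual variation,
# and compute r^2 (Eq. 6.9) and r from a one-regressor least-squares fit.

def simple_ols(x, y):
    """Least-squares fit of y = a + b*x; returns the pair (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

x = [1, 2, 3, 4, 5, 6, 7, 8]                      # illustrative data
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]

a, b = simple_ols(x, y)
my = sum(y) / len(y)
y_hat = [a + b * xi for xi in x]                  # values on the regression line

total = sum((yi - my) ** 2 for yi in y)           # left-hand side of (6.8)
regression = sum((yh - my) ** 2 for yh in y_hat)  # variation due to the line
residual = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))

r2 = regression / total                           # Eq. (6.9)
r = r2 ** 0.5 if b > 0 else -(r2 ** 0.5)          # r carries the sign of b

assert abs(total - (regression + residual)) < 1e-8   # identity (6.8) holds
```

If the points lie exactly on the line, the residual variation is zero and r² = 1; if the fitted line is horizontal, the regression variation is zero and r² = 0, matching the two limiting cases described above.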
The measure of the coefficient of determination has been derived from
regression analysis, and so has the correlation coefficient. Therefore, there are
definite relations between them. Firstly, in the theoretical model that has only
one exogenous variable, the correlation coefficient (r) indicates the same cau-
sality direction of the relation between the endogenous and the exogenous
variables given by the regression coefficient (b): If the latter is positive, the
former is also positive; if the latter is negative, the former is also negative.
Secondly, the correlation coefficient of a sample is also a random vari-
able and can thus be submitted to a statistical test. The inference of the
sample r to the population correlation coefficient (usually represented by
the symbol ρ) requires a theorem, in which the assumptions will be very
similar to those utilized in the regression analysis. However, when the
interest is in the effect of exogenous variables upon the endogenous
variable, it is sufficient to perform the statistical test of significance on the
regression coefficient in the regression analysis, for that result will give us
the same information as doing the test on the statistical significance of the
observed correlation coefficient.
In other words, for the case of one exogenous variable, testing the
null hypothesis ρ = 0 is equivalent to testing the null hypothesis β = 0 .
Hence, the conclusion “there is no correlation” or “there is correlation”
can be used even though the statistical test has been applied to the signifi-
cance of the coefficient of regression, not to the coefficient of correlation.
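This equivalence can be illustrated numerically: in the one-regressor case, the t-statistic of the slope b equals the t-statistic of the sample correlation r. A sketch with invented data (neither the data nor the tolerance are from the text):

```python
# For one exogenous variable, the test of beta = 0 and the test of rho = 0
# coincide: b / se(b) equals r * sqrt((n - 2) / (1 - r^2)).

x = [1, 2, 3, 4, 5, 6, 7, 8]                      # illustrative data
y = [2.0, 3.1, 3.9, 5.2, 5.8, 7.1, 7.9, 9.2]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

b = sxy / sxx                                     # slope estimate
a = my - b * mx
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
se_b = (sse / (n - 2) / sxx) ** 0.5               # standard error of b
t_slope = b / se_b                                # t-statistic for beta = 0

r = sxy / (sxx * syy) ** 0.5                      # sample correlation
t_corr = r * ((n - 2) / (1 - r * r)) ** 0.5       # t-statistic for rho = 0

assert abs(t_slope - t_corr) < 1e-6               # the two tests coincide
```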
In the case that the theoretical model includes two or more exogenous
variables, the resulting correlation coefficient is called the multiple cor-
relation coefficient (usually represented by R). The term R² has the same
meaning as r². But in this case there will be a statistical relation between
the endogenous variable and each particular exogenous variable, holding
constant the others, which is called the partial correlation coefficient. The
partial correlation coefficient shows the same causality direction (positive
or negative) that is given by the corresponding regression coefficient;
hence, if a regression coefficient is positive, the corresponding partial
correlation coefficient will also be positive; if a regression coefficient is
negative, the corresponding partial correlation coefficient will also be negative.
Moreover, testing the regression coefficients is sufficient to accept or
reject the theoretical model under analysis; when the regression coefficient
is significant, we may say that there exists statistical correlation.
So far, regression analysis has been applied to a beta-hypothesis, which
is the linear form equation of a beta proposition, which in turn has been
derived from a theoretical model; hence, in this case we are testing sta-
tistical relations between endogenous and exogenous variables within a
theoretical model. When this is the case, the interpretation of the esti-
mated coefficient of determination can be stated as follows: 100r² percent
of the observed total variation of the endogenous variable
is caused by its relation with the exogenous variables. The emphasis on caused is very
important. When the regression analysis is applied to an H-hypothesis,
these properties do not hold true, as will be shown below in Chap. 7.
The empirical predictions of the theory are falsified using regression
analysis. If it passes the statistical test, the prediction is corroborated, and
the theory is accepted. Therefore, when regression analysis is applied under
the alpha-beta method, the existence of statistical correlation implies cau-
sality, and the absence of statistical correlation implies lack of causality
between the exogenous and endogenous variables of the theoretical model.

Regression Analysis: Dynamic Models


The reduced form equation or beta proposition of a dynamic theoretical
model can be written as follows:

         +   −   +
β: Y = G(X1, X2; t)   (6.10)

The equilibrium value of the endogenous variable Y depends upon the
passage of time (t), given the values of the exogenous variables X1 and X2.
The sign above the variable t indicates the direction of the trajectory; for
example, the positive sign in Eq. (6.10) indicates that the model predicts
a trajectory that is an upward sloping curve over time. This trajectory rep-
resents the dynamic equilibrium.
As to the effect of the exogenous variables upon dynamic equilibrium,
we can see that changes in the exogenous variable X1, holding constant
the value of the exogenous variable X2, will shift the trajectory upward
because this exogenous variable has a positive sign. Changes in the exog-
enous variable X2, holding constant the value of the exogenous variable
X1, will shift the trajectory downward because this exogenous variable has
a negative sign.
The prediction of the theoretical dynamic model presented in Eq.
(6.10) constitutes a beta proposition. To apply regression analysis, it can
be transformed into a linear relation in the parent population, with t as
another exogenous variable, to obtain a beta-hypothesis. This is the
hypothesis that can be submitted to a statistical test by using regression
analysis, just as in the static model.

Testing Evolutionary Theoretical Models


We already know that qualitative changes are embedded in evolution-
ary processes. The example of a theoretical evolutionary model shown in
Fig. 3.1c, Chap. 3, can be represented by the following beta proposition:

β-hypothesis:
      +   +
Y = H(X1; T)   (6.11)

Subject to T < T* and Y < Y*
If T = T*, then Y = Y*
T* = f(X1)

Given the value of the exogenous variable X1, the endogenous vari-
able Y will increase over time along a given trajectory; at time T = T* ,
the variable Y reaches the threshold value Y*, and the temporal dynamic
equilibrium breaks down. Hence, T* is the breakdown period; that is, at
period T*, a regime switching takes place and the process itself changes.
The threshold value Y* is unobservable. In the case that the exogenous
variable X1 changes (increases), the trajectory of the endogenous variable
will shift (upward) and thus the value of T* will also change (say, fall);
accordingly, the process will break down at a different time (sooner). The
value of T* is thus endogenous.

Time T is now historical time, irreversible time, with past, present, and
future (as in the example of the cup that fell from the table and became
broken cup). This is different from mechanical time t (as in the example of
the pendulum). The critical statistical test of an evolutionary model is about
the breakdown of the temporary dynamic equilibrium, which is falsifiable.
In order to use regression analysis to falsify the evolutionary model, the
function (6.11) can be represented in linear form as follows:

β-hypothesis:
Y = β0 + β1 X1 + β2 T, T < T* , β1 > 0, β2 > 0 (6.12)

Sample estimate:
Y = b 0 + b1 X1 + b 2 T + e (6.12a)

The value of time T* is determined empirically by the passage of time and
by the increase in the exogenous variable X1; hence, the regime switch-
ing, the breakdown of the trajectory, will arrive at some finite time. What
is to be refuted is the existence of the breakdown, the hypothesis that
the dynamic process repeats for a finite period only. Falsification of
the evolutionary model is more involved than in the case of mechanical
models. If the breakdown has not happened, the theoretical model can be
saved by the argument that the threshold value Y* (unobservable) has not
been reached yet; when it does happen, the model is accepted because that
is what was predicted. The model would seem to be immortal. However,
that is not the case: the model is falsifiable. It will be refuted if coefficient b1
or coefficient b2 fails to be positive, as Eq. (6.12) predicts both to be positive.

Non-parametric Statistical Testing


Statistics is a formal science. It is able to produce logical relationships,
in the form of theorems, between sample estimates and the values of
the parent populations from which the sample was drawn. The theorems
contain sets of assumptions. As we have seen in parametric statistics, the
statistical testing instruments with which statistical tests can be carried out
are derived from those sets of assumptions.
Whether economic data satisfy the assumptions of the parametric statis-
tical instruments is something we do not know. Whether the sample data
come from a parent population that has a normal distribution is usually
unknown; also whether the sample was selected by a random mechanism is
often unknown. Although more theorems could tell us with greater precision
the necessary and sufficient conditions to construct statistical testing instru-
ments, they will not help us much in this regard, for we will remain in doubt
as to whether the economic data were constructed satisfying those assump-
tions. The problem cannot be solved within statistics because no empirical
testing of the theorems is viable, for statistics is a formal, not a factual science.
Another type of statistical testing instrument, called non-parametric
statistics, is based on statistical theory with less restrictive assumptions.
The only basic assumption is that sample selection must be made through
random mechanisms. Theorems that give logical support to these instru-
ments are also needed. The logic underlying non-parametric statistics is
presented now. Only two statistical instruments will be presented here
(taken from Freund and Simon 1992). They should be enough to understand
the logic, the principles, of non-parametric testing.
Mean differences can also be tested with non-parametric statistics. The
null hypothesis that the k samples come from the same parent population
can be tested using the H-test, which is also called the Kruskal-Wallis test. The
logic goes as follows. Define a random variable H as follows:

H = 12 / n ( n + 1)  Σ R 2i / n i  − 3 ( n + 1) , i = 1, 2,…, k (6.13)


The total sample size is n, drawn from k groups, such that ni is the sample
size of group i. The sample observations are ordered from the lowest
value to the highest as if they were a single sample. The term Ri is the sum
of the ranking numbers assigned to the ni values of group i. On the
statistical inference, we have the following theorem:

Theorem 6.2  If
(a) Samples from k groups of identical parent populations are drawn
independently, with sample size n ≥ 5 for each group;
(b) The mechanism of selection in each group is random;

Then The sample distribution of H is approximately a chi-square distribu-
tion with k − 1 degrees of freedom.
The implication of this theorem is that, with such simple assumptions,
we have a statistical testing instrument for the following null hypothesis:

μ1 = μ2 = … = μk   (6.14)

With this instrument, economic theories can be submitted to the falsifica-
tion process. If the null hypothesis is rejected, then the k parent popula-
tions from which the samples were drawn are not identical; hence, we
accept the alternative hypothesis that the means of the parent populations
are not identical.
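The H-statistic of Eq. (6.13) is easy to compute by hand. The sketch below (invented, tie-free data; `kruskal_wallis_h` is a name coined here) ranks the pooled observations, applies the formula, and compares H against the standard 5% chi-square critical value for k − 1 = 2 degrees of freedom:

```python
# Kruskal-Wallis H-test, Eq. (6.13): H = [12 / n(n+1)] * sum(R_i^2 / n_i) - 3(n+1).

def kruskal_wallis_h(groups):
    """Compute H from k groups of observations; assumes no tied values."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank 1 = lowest value
    n = len(pooled)
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)  # sum R_i^2 / n_i
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

groups = [[1, 3, 5, 7, 9],        # three illustrative groups, n_i = 5 each
          [2, 4, 6, 8, 10],
          [11, 12, 13, 14, 15]]
H = kruskal_wallis_h(groups)      # = 9.5 for these data

CHI2_5PCT_2DF = 5.991             # standard 5% critical value, 2 degrees of freedom
reject_null = H > CHI2_5PCT_2DF   # True: the parent populations are not identical
```

Here the third group clearly dominates the ranking, H exceeds the critical value, and the null hypothesis of identical parent populations is rejected.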
Correlation coefficients can also be tested with non-parametric sta-
tistics. The test is called the Spearman’s rank correlation coefficient. The
method transforms the observed data set of two variables into ranking
numbers for each variable. Then it defines the coefficient rs as follows:


rs = 1 − 6(∑d²) / n(n² − 1)   (6.15)

The sample size consists of n pairs of two variables, V and W. The observed
values of V are ordered from the lowest to the highest values; the same
ordering is made for the values of W.  The difference in the ranking for
each pair is equal to d, which is transformed into d2 and then added up.
On the statistical inference, we have the following theorem:

Theorem 6.3  If
(a) The correlation between two parent populations is ρ = 0 ;
(b) A sample consisting of n pairs is drawn from these parent popula-
tions by using a random mechanism;

Then The sample distribution of the correlation coefficient rs has mean
equal to zero and standard deviation equal to 1/√(n − 1).

An implication of this theorem is that the statistic z, which has a stan-
dardized normal distribution, can be defined as follows:


z = (rs − 0) / (1/√(n − 1)), such that z ~ N(0, 1)   (6.16)
Again, with such simple assumptions, we have a statistical testing instru-
ment. Suppose the theoretical model predicts ρ > 0. The null hypoth-
esis can be stated as ρ = 0 . If the sample value of rs is such that the null
hypothesis is rejected, then we reject the hypothesis that the two parent
populations from which the samples were drawn are uncorrelated; there-
fore, we accept the alternative hypothesis that the two parent populations
are positively correlated, and the theoretical model is accepted. If the null
hypothesis is accepted, then we reject the alternative hypothesis; hence,
the theoretical model is also rejected.
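Spearman's coefficient in Eq. (6.15) and the z statistic of Eq. (6.16) can be sketched in the same way (invented, tie-free data; 1.96 is the usual 5% two-tailed critical value of N(0, 1)):

```python
# Spearman's rank correlation, Eq. (6.15): r_s = 1 - 6*sum(d^2) / n(n^2 - 1).

def spearman_rs(v, w):
    """r_s from two equal-length sequences; assumes no tied values."""
    def ranks(seq):
        order = sorted(range(len(seq)), key=lambda i: seq[i])
        r = [0] * len(seq)
        for pos, i in enumerate(order):
            r[i] = pos + 1                 # rank 1 = lowest value
        return r
    rv, rw = ranks(v), ranks(w)
    n = len(v)
    d2 = sum((a - b) ** 2 for a, b in zip(rv, rw))
    return 1 - 6 * d2 / (n * (n * n - 1))

v = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]        # illustrative pairs
w = [2, 1, 3, 4, 5, 6, 7, 8, 10, 9]        # nearly the same ordering

rs = spearman_rs(v, w)                     # = 1 - 6*4/990 ≈ 0.9758
n = len(v)
z = (rs - 0) / (1 / (n - 1) ** 0.5)        # Eq. (6.16)
reject_null = z > 1.96                     # True: reject rho = 0, accept rho > 0
```

With only two adjacent pairs swapped, the rankings are almost identical, rs is close to 1, and z lies far in the rejection region, so the hypothesis of uncorrelated parent populations is rejected.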
In these two theorems of non-parametric testing, we can see that no
assumptions are made about the characteristics of the distribution of the
parent population from which the sample was drawn. By contrast to para-
metric testing, the only basic assumption in non-parametric testing is that
the sample was selected by a random mechanism.
Some final comments comparing parametric and non-parametric test-
ing are in order. First, parametric and non-parametric tests constitute, in
some cases, alternative ways of testing the same null hypothesis. For example, as shown
above, the null hypothesis that the mean values of several parent popula-
tions are identical can be tested using the non-parametric H-test, which
has a chi-square distribution; the parametric alternative is to use the F-test.
The null hypothesis that the correlation coefficient between two parent
populations is zero can be tested using the non-parametric Spearman-test;
the parametric alternative is to use the z-test. Multiple regression analysis
can be carried out with parametric and non-parametric methods due to
the new developments in non-parametric econometrics (Pagan and Ullah
1999; Li and Racine 2006).
Second, more complex statistical tests have been created, especially in
the field of econometrics (statistics applied to economics). The logical
principles of statistical testing presented here apply all the same: The
more complex a statistical test is, the more assumptions its derivation
must contain, and thus the weaker the results of the testing will be, because it
is less likely that the empirical data will satisfy those assumptions.
In sum, regarding the acceptance or rejection of a scientific theory,
the alpha-beta research method leads to the following conclusions.
The assertion that a scientific theory “explains” reality has a very pre-
cise meaning: Its beta propositions are statistically consistent with the
empirical data of the reality under study. On the other hand, if empirical
data refute the beta propositions, the theory is simply false. In statistical
testing under the alpha-beta method, the opposite term to false is not
true, but consistent.
If statistical analysis shows consistency between beta propositions of a
theoretical model and empirical data, the abstract society constructed by
the theory is a good approximation of the real-world society under study;
otherwise, the model fails. If all models fail, then the theory fails to explain
this reality.
According to the alpha-beta method, causality requires a scientific the-
ory. The reason is simple: Causality is the relation between endogenous
and exogenous variables, which can only be established by a scientific the-
ory. No scientific theory, no causality. We should then notice that if the
statistical testing leads us to accept a theory, then the causality relations
proposed by the theory are also accepted.
We should also note that statistical rejection or acceptance of the beta-­
hypothesis is not symmetric. Rejection is definitive, but acceptance is pro-
visional. We accept it just because there is no reason to reject it now;
hence, this logic for statistical acceptance may be called the principle of
insufficient reason. This logic is consistent with the falsification principle,
according to which the rejection of a scientific theory is definitive, but its
acceptance is provisional until a new empirical data set or a superior theory
appears.
The example of the theory “Figure F is a square,” shown above, illus-
trates this principle. If the two diagonals are not equal (a necessary con-
dition), we reject the proposition that F is a square; however, if the two
diagonals are equal, there is no reason to reject the proposition, and we
may accept it, but provisionally, for there are more tests to be performed,
such as whether all sides are equal. Truth is elusive in the social sciences,
for we have no way to test for the necessary and sufficient conditions
on the validity of a theory to explain reality, but just for the necessary
conditions.

Science Is Measurement
“Science is measurement” is a commonly accepted principle. This should be
read as a necessary condition in the alpha-beta method, for measurement
alone cannot lead to scientific knowledge. However, the question of iden-
tifying observable variables is not a simple task in economics. A criterion
to determine measurability is needed.
Table 6.1  Kinds of reality based on Searle’s classification

                           Cognitive (knowledge)
Ontological (existence)    Objective (positive)             Subjective (normative)
Objective (physical)       (1) This paper is thin           (2) I hate thin paper
Subjective (mental)        (3) Money supply has increased   (4) I love money

Searle’s Criterion: Introduction of Socially Constructed Variables


Philosopher John Searle (1995) has proposed a criterion of measurabil-
ity. Propositions about facts can be classified as ontological and cognitive;
in addition, each category can be divided into objective and subjective.
Hence, a two by two matrix can be constructed, as shown in Table 6.1.
Propositions that refer to physical objects are ontologically objective
and those that refer to mental constructions are ontologically subjective
(from Latin ontologia, existence). Things also have a meaning in terms of
knowledge, a cognitive sense. Therefore, propositions that refer to things
that have a meaning without a viewpoint are cognitive objective and with
a viewpoint are cognitive subjective.
Therefore, a proposition is either cognitive subjective, whenever its truth
or falsity is not a simple matter of fact but depends on the viewpoint of the
person, or cognitive objective whenever its truth or falsity is independent
from anybody’s viewpoint. The first is a normative proposition and the
second is a positive proposition. On the other hand, a proposition is either
ontological subjective, whenever the mode of existence of objects depends
on being felt by individuals, or ontological objective, whenever the mode
of existence is independent of any perceiver or any mental state. The first
refers to a mental situation and the second refers to physical objects.
Any proposition about a feature of reality can be placed in a cell of the
matrix and will have two senses, one is ontological and the other is cogni-
tive. Consider the examples shown in Table 6.1. Measures of a piece of
paper are ontologically objective and cognitively objective, as in cell (1).
Feelings about a piece of paper will be cognitively subjective and onto-
logically objective, as in cell (2). Money (a piece of paper) is ontologically
subjective because money is a socially constructed fact and thus socially
accepted as a means of exchange; it is also cognitively objective (a piece of
paper) because no viewpoint is needed to recognize a ten-dollar bill, as in
cell (3). Propositions referring to feelings about money are ontologically
subjective and cognitively subjective, as in cell (4).
Cell (1) corresponds to the standard category of measurable objects or
observable variables. Observable categories that are utilized in physics and
biology correspond to this cell only. This category is also utilized in eco-
nomics, as in total agricultural output. However, cell (3) is also measurable,
and an observable variable, in economics. Positive statements can be made
about money, such as “the quantity of money supply has increased in the
economy this month.” Hence, the two cells (1) and (3) are considered
facts—measurable or observable—in economics. Economics also deals
with socially constructed facts. This is the major difference with physics
and biology, which increases the relative complexity of economics.
Money is the best example of a socially constructed variable. There are
monies that are not accepted in every country of the world. Not everyone
in the world can recognize a Peruvian bill of ten soles, but most people
will be able to recognize and accept a US bill of twenty dollars; however,
everybody can recognize the size of a piece of paper (larger or smaller).
Ethnicity is another example of a socially constructed variable. Take
the case of race. Race as skin color is both ontologically and cognitively
objective, and thus belongs to cell (1), just as a piece of paper. Race as
a marker of ethnicity, the social meaning of skin color, is ontologically
subjective but cognitively objective, and thus belongs to cell (3), just as
money. Color is a physical characteristic of objects. People’s skin color
is, however, something more than color: it has a social meaning; hence,
race serves as a social marker to identify social groups. Therefore, ethnic-
ity is also a socially constructed category and thus is observable, such as
“unemployment rates are higher among black workers than among white
workers.” Language is another social marker of ethnicity.
As stated above, money is physically a piece of paper, but it is also a
socially constructed fact; paper as money is ontologically subjective, but
cognitively objective; that is, physical paper lies beneath the concept of
money. The same parallel can be made about race: race is like money, skin
color is like paper. Physical facts like skin colors and phenotypes (biologi-
cal facts) lie underneath the concept of ethnicity (a socially constructed
fact). Certainly, biology books do not study categories such as ethnicity. In
a given society, everyone knows what the value of a bill of money is; simi-
larly, everyone knows what the ethnicity of a person is; everybody knows
who is who in society. If this were not the case, ethnicity or race as a social
problem could hardly exist.
One could say that the progress of the natural sciences is, to a greater
extent, due to the nature of the facts they utilize in their measurement;
it is also due to the innovations in measurement instruments. Telescopes,
microscopes, and spectroscopes have all gone through continuous progress
and sophistication. It seems that changes in paradigms in physics have
mostly come from innovations in the instruments of measurement. A new
instrument leads to new observations, which can falsify and dethrone a
theoretical paradigm. This is a different hypothesis from the one proposed by
Kuhn (1970), who said that new political and social ideas were the factors
responsible for changes in paradigms.
Progress in measurement is harder to achieve in economics. First, empir-
ical variables are much more complex to construct in economics than in
physics and biology because most variables (endogenous and exogenous)
are socially constructed facts. Money, poverty, inequality, social class, eth-
nicity, market power, democracy are all important variables in economics
and they all are socially constructed facts.
Second, the instruments of measurement are not as developed as in the
natural sciences. Production and distribution are still measured by apply-
ing surveys to firms and households and by using government statistics.
Consider an economic theory that assumes that households, firms, and
governments act guided by the motivation of self-interest, which must also
be present when supplying information. In particular, when participating
in the production of a public good, such as information, the incentives
of households and firms are not to supply the true information, but the
“politically correct” or “culturally correct” information. If we accept the
theory that governments act guided by the motivation of maximization of
votes, then this motivation creates incentives to supply and produce not
the true information but the “politically correct” information.
In multicultural and hierarchical societies, such as those of the Third
World, the problem may be more acute because each culture has its
socially constructed reality, which makes it more difficult to give uniform
content and meaning to aggregate variables. For instance, socially con-
structed facts must be difficult to measure by asking people to declare
the “facts.” This is particularly the problem of measurement that exists in
human societies. This problem goes beyond the intention of people to tell
the truth or not.
Collecting information from social actors directly also poses
another problem. When information is obtained by asking people
about their incomes or by observing their behavior, the researcher is actu-
ally disturbing reality. As in the quantum theory of physics, in economics
“one cannot observe the state of the world without disturbing it.”
No telescopes or microscopes have been invented to observe human
economic behavior without disturbing it. Hence, in economics,
and contrary to physics, paradigms have not been challenged by innovations
in measurement instruments. Problems of measurement explain, at least in
part, why economic theories tend to be immortal. If economics does not
appear as scientific as physics or biology, it is, in part, due to the problem of
measurement. Statistical testing alone cannot solve this problem.

Cardinal, Ordinal, and Weak Cardinal Variables: Georgescu-Roegen’s Criterion
The distinctions made by economist Nicholas Georgescu-Roegen (1971)
on measurable variables include the quantity and quality characteristics of
objects. Three types of measures are defined as follows:

• Cardinal variable: Absolute zero value exists and differences in
magnitudes are measurable, as in flow of output (tons of steel produced
per year).
• Weak-cardinal variable: Absolute zero value does not exist; hence,
the origin is arbitrary. But once the origin is determined, the vari-
able is cardinally measurable, as in temperature (Centigrade or
Fahrenheit), chronological time (BC/AD).
• Ordinal variable: Only ranking or order is measurable, but dif-
ferences in ranking are not measurable, as in qualitative variables:
“friendly” (high/moderate/low), “democracy” (strong/weak).

Cardinal measures of objects refer to quantities, making abstraction
of any qualitative difference. They reflect a particular physical property of
objects. Cardinal numbers are countable and can be added and subtracted.
They can be measured along a horizontal or vertical axis in a graph. A
weak-cardinal number operates as a cardinal number. Measures of differ-
ences in time and temperature can be placed on a horizontal or vertical
axis, with an arbitrary origin.
Ordinal measure refers to positions in a ranking or ordering, as they
deal with qualitative differences. These numbers cannot be added or sub-
tracted, as they refer to uncountable categories. They cannot be measured
along a horizontal or vertical axis of a graph. However, ordinal variables,
transformed into ranking numbers, can be associated with cardinal and
weak-cardinal numbers and can thus be subject to quantitative analysis.
Under the alpha-beta method, consider the following beta proposition
derived from a scientific theory to be tested using regression analysis as a

β-hypothesis:
      +   +
Y = F(X1, X2) = β0 + β1X1 + β2X2, such that X2 = 0 or 1   (6.17)

Let X1 be a cardinal variable and X2 an ordinal variable. Suppose the
endogenous variable Y measures market wage rates and the exogenous
variable X1 is education (measured by cardinal numbers, years of edu-
cation), whereas X2 is quality of school of graduation (either public or
private). Hence X2 would be measured by ranking numbers; say, public
school is equal to 0 and private school is equal to 1. Usually the qualita-
tive variable X2 is called a “dummy variable.” Equation (6.17) can then be
submitted to falsification using regression analysis.
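A minimal sketch of how a beta-hypothesis like (6.17) is estimated: ordinary least squares on a cardinal regressor (years of education) and a 0/1 dummy (school type). All names and numbers here are hypothetical, and the wages are constructed to satisfy Y = 2 + 0.5·X1 + 1.0·X2 exactly, so the estimates should recover those coefficients.

```python
# OLS with an intercept, a cardinal regressor, and a dummy regressor,
# solving the normal equations (X'X)b = X'y by Gaussian elimination.

def ols(X, y):
    """Return the least-squares coefficients for design matrix X (list of rows)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    a = [xtx[i][:] + [xty[i]] for i in range(k)]   # augmented matrix
    for c in range(k):                             # elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, k):
            f = a[r][c] / a[c][c]
            for j in range(c, k + 1):
                a[r][j] -= f * a[c][j]
    beta = [0.0] * k
    for c in reversed(range(k)):                   # back substitution
        beta[c] = (a[c][k] - sum(a[c][j] * beta[j] for j in range(c + 1, k))) / a[c][c]
    return beta

educ    = [8, 10, 12, 14, 16, 12, 10, 16]          # X1: years of education
private = [0, 0, 0, 0, 1, 1, 1, 1]                 # X2: dummy, 1 = private school
wage    = [2 + 0.5 * e + 1.0 * p for e, p in zip(educ, private)]

X = [[1.0, e, p] for e, p in zip(educ, private)]   # intercept, X1, X2
b0, b1, b2 = ols(X, wage)                          # close to 2.0, 0.5, 1.0
```

The dummy enters the regression like any other regressor; its coefficient b2 measures the shift in the wage level associated with graduating from a private school, holding education constant.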
Two conclusions from this section are relevant for the alpha-beta method
in economics. First, measurable variables include not only physical objects
(piece of paper), but also socially constructed variables (money). Second,
measurable variables include not only cardinal variables, but also weak-car-
dinal and ordinal variables. Quantitative research is viable using the three
types of variables. The three can operate as endogenous or exogenous vari-
ables. The use of both socially constructed variables and ordinal variables
in economics does not constitute a limitation for the falsification of eco-
nomic theories, but the measurement of the variables is more involved.

The Nature of Statistical Falsification of Economic Theories

In the last two chapters, we have shown the connections between the
alpha-beta method and the theory of statistical testing. Statistical testing
methods are logically derived from statistics, a formal science. There are
two statistical theories: parametric and non-parametric. The first makes
assumptions about the distribution of a variable in the parent population
from which samples are drawn. The assumptions include normal distribu-
tion of the variable, homoscedasticity, and so on. The second makes no
such assumptions. However, both theories have in common the assump-
tion that sample selection has been made through a random mechanism.
Statistical testing instruments are derived from each theory and then we
have parametric and non-parametric testing instruments.
A new epistemological problem now appears in the falsification process
of an economic theory. If the parametric testing instrument is applied and
the theoretical model under consideration is rejected, the origin of the
failure may not be attributed only to the assumptions of the economic
theory but also to the assumptions of the testing instrument. If the non-­
parametric testing is applied, and the theoretical model is rejected, the
origin of the failure can be attributed to the assumptions of the economic
theory only, unless there are reasons to doubt that the sample was random.
Note that this problem is not about errors of measurement. Certainly,
the instruments of measurement of variables could be faulty, as indicated
earlier. The problem at hand refers to the testing instruments themselves,
which may be faulty, when some of the assumptions or requirements of
the testing instrument are not met. Consider the case of medical tests,
which very often require that the patient be fasting; hence, if the patient
does not comply with this requirement, the test will be invalid.
The same problem appears with the statistical testing instruments that
make some assumptions about reality. Suppose the beta-hypothesis (β > 0)
will be tested with the parametric instrument of regression analysis, which
assumes the sample was drawn from a parent population that is normally
distributed in the real world and that the causality relation is linear (in
natural or logarithmic numbers). The falsification is now about the two
sets of assumptions: the one that is underlying the beta proposition and
the other that is underlying the statistical testing instrument. Therefore, if
the beta-hypothesis is rejected, we will not know which of the two types
of assumptions have failed.
On the other hand, if the beta-hypothesis is accepted, we know that the
joint assumptions have passed the test. Because the set of assumptions about
the statistical inference is independent of the set of assumptions underlying
the beta proposition, we can say that the latter has passed the test too.
Another problem of applying falsification to economic theories origi-
nates in the instruments of measurement of variables. First, the nature
of empirical variables is such that most variables (endogenous and exog-
enous) are socially constructed variables, as was shown earlier. Socially con-
structed facts are not easy to measure.
Second, the instruments of measurement of the economic process are not
as developed as in physics. Production and distribution in most cases are still
measured by applying surveys to firms and households and by using government statistics. We know that households and firms in capitalist societies act guided by the motivation of self-interest, which must also be present
when supplying information, which is a public good. When participating
in the production of a public good, such as information, the incentives of
households and firms might not be to supply information at all; if they do supply information, it may not be the true information, but the “politically correct”
information. Governments may also act guided by the motivation of self-
interest: the maximization of votes, which creates incentives to supply and
produce not the true information but the “politically correct” information.
Economics relies on natural experiments for empirical data, rather than
on controlled experiments. Economics is much like astronomy. However, no
telescopes have been invented to observe the economic process of produc-
tion and distribution. Hence, in economics, and contrary to physics, eco-
nomic theories have not been challenged by innovations in measurement
instruments. One could also say that economics is much like ethology, when
seen as the study of human behavior under natural conditions. However, no
instruments similar to those used by ethologists have been developed in eco-
nomics. Instead, people's self-declarations and opinions still constitute the bulk
of economic data. Problems of measurement explain, at least in part, why
economic theories tend to be immortal. In sum, an economic theory may
fail because the assumptions about the measurement of variables are faulty.
The following representation will summarize the nature of falsification
in economics using the alpha-beta method. Falsification of a scientific the-
ory is made through beta propositions, which are the empirical predictions
of the alpha propositions of the theory. Then there is the procedure to
submit these predictions to statistical testing against the empirical data set, with the use of a statistical instrument, which is built on a set of
assumptions τ of statistical theory. Finally, the data set b is built on a set
of assumptions λ about measurements and reliability of variables. Hence,
we may rewrite the scientific rules in the alpha-beta method, as shown in
Table 2.1, Chap. 2, as a more complex set of rules as follows:

α ⇒ β → (α, τ, λ): [b ≈ β] (6.18)

If b ≠ β, then the set of assumptions (α, τ, λ) is rejected.
If b = β, then the set of assumptions (α, τ, λ) is accepted.

The vector (α, τ, λ) indicates that falsification in economics includes three sets of assumptions: those of the scientific theory (α), those of the testing instruments that are contained in the statistical theory (τ), and
those of the measurement of empirical variables (λ). In the case of rejec-
tion, the source of the failure may come from either set of assumptions
or from all three, but there is no way to identify them. We call this the
identification problem. In the case of acceptance, all sets of assumptions are
accepted; moreover, the theory is accepted because the other two assump-
tions were made independently. In any case, the researcher should report
the ways in which the identification problem might affect the falsification
results.
Regarding the influence of the statistical testing instrument, we know that, compared to parametric statistics, non-parametric statistics assumes only the condition that the sample drawn from the parent population should be random. The conclusion we may draw from this comparison is that parametric statistics is based on a theory that contains more restrictive assumptions about the origin of the empirical data utilized. The more restrictive the assumptions of a statistical theory, the less likely it is that the sample data utilized satisfy those conditions; hence, testing instruments that rely on a large set of assumptions are less powerful for testing scientific theories than those that do not.
Can the problem of identification be resolved by testing the assump-
tions of the statistics utilized in the testing of the scientific theory? No, it
cannot. First, statistics is a formal science, not a factual science. No beta
propositions can be derived from a statistical theory to test the empirical
validity of a statistics. On the other hand, testing the assumptions—for
instance, whether a sample comes from a normal distribution popula-
tion—would require another statistics to do the testing, which in turn
would come from another theorem, with other assumptions, which would
have to be tested, which in turn would require another statistics, and so
on. We would fall into the infinite regress problem, unless the final test
was non-parametric.
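As an illustration of why this pre-testing route fails, consider checking normality before applying a parametric test (a hypothetical Python sketch; scipy and the simulated data are my own assumptions). The check is itself a statistical test, with its own significance level and its own possible errors, so it cannot close the regress:

```python
# Sketch: pre-testing a parametric assumption is itself a statistical test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=100)  # clearly non-normal data

# Shapiro-Wilk null hypothesis: the sample comes from a normal population.
w_stat, p_value = stats.shapiro(sample)

if p_value < 0.05:
    print("Normality rejected: the parametric test's assumptions are in doubt.")
else:
    print("Normality not rejected -- but this verdict is itself probabilistic.")
```

Whatever the verdict, it rests on the assumptions of the Shapiro-Wilk test, which would in turn need testing, and so on.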
Not only in economics, but also in the social sciences in general, scien-
tific research is carried out under imperfect knowledge about the charac-
teristics of the parent population and that of the sample drawn from that
population. Non-parametric statistical testing instruments are therefore
relatively more powerful than parametric instruments for the falsification
of an economic theory, when both testing instruments compete in per-
forming the test.
This conclusion holds true even acknowledging two relative disadvan-
tages of non-parametric methods. First, a non-parametric method is less
efficient (requires a larger sample for the same statistical error) than the
parametric method that it can replace; second, more assumptions imply
more precision on the level of significance. These are the usual arguments
against the use of non-parametric statistics. However, these two relative
disadvantages of non-parametric methods are more than compensated by
their advantage of requiring less restrictive conditions about the genera-
tion of the empirical data utilized to perform the test.
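The efficiency trade-off can be sketched by simulation (an illustrative Python exercise under assumed conditions, not an argument from the text): when the parametric assumptions do hold, the t-test tends to reject a false null hypothesis slightly more often than the Mann-Whitney test at the same sample size.

```python
# Monte Carlo sketch: rejection rates of a parametric vs a non-parametric
# test when the normality assumption actually holds (scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n, shift = 1000, 25, 0.5
t_rejects = mw_rejects = 0

for _ in range(n_trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        t_rejects += 1
    if stats.mannwhitneyu(a, b).pvalue < 0.05:
        mw_rejects += 1

print(f"t-test rejection rate (approx. power):       {t_rejects / n_trials:.2f}")
print(f"Mann-Whitney rejection rate (approx. power): {mw_rejects / n_trials:.2f}")
```

The gap is small; when the parametric assumptions are in doubt, as the text argues, it is more than compensated by the weaker conditions the non-parametric instrument imposes on the data.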
Under the conditions of falsification presented in Eq. (6.18), progress
of scientific knowledge in economics and the social sciences will come
from superior new statistical theories that are non-parametric and from
innovations in the instruments of measurement of variables. These inno-
vations will make falsification more powerful, as false theories could be
identified and eliminated and replaced by better theories. The Darwinian
evolutionary selection of theories will work more effectively. Thus, eco-
nomics will show more rapid scientific progress.
Chapter 7

The Alpha-Beta Method and Other Methods

Abstract  The alpha-beta method is a scientific research method that has been logically derived from the composite epistemology, the combination
of the epistemologies of Nicholas Georgescu-Roegen and Karl Popper.
However, they are not the only known epistemologies. There are others, which are examined in this chapter: deductivism, inductivism, and the interpretive approach. However, scientific research methods cannot be logically derived from each of them. Interpretive epistemology is justified only as an exploratory research method. Statistical inference as an empirical research method is also logically justified. Finally, practical rules to choose a research method are
established. It is shown that although the alpha-beta method is the only
scientific one, the others (statistical empirical method and interpretive
exploratory method) play an important role in the transition from pre-scientific to scientific knowledge in economics.

The alpha-beta research method has been derived from the compos-
ite epistemology, the combination of the epistemologies of Nicholas
Georgescu-Roegen and Karl Popper. However, they are not the only
known epistemologies. There are others, which will be examined in this
chapter. Research methods will be derived, where possible, and compared to the alpha-beta method.

Deductivist Epistemology
Deductivism assumes that scientific knowledge can be reached by logical
deduction alone. Hence, the derived rule for scientific research would say
that scientific knowledge must be established by pure thought rather
than by reference to empirical observation; that is, alpha and beta proposi-
tions alone are conducive to scientific knowledge, where beta propositions
are derived from alpha by using deductive logic. Thus, the derivation of
a beta proposition is treated as a problem of solving a theorem, and once it is solved, the method is completed. Economics is thus seen as a formal science, not as a factual science.
Compared to the alpha-beta method, the rule derived from deductiv-
ist epistemology comprises only the alpha propositions and the corre-
sponding beta propositions. The step of submitting the beta proposition
to the falsification process is ignored. Thus, the untested beta proposition is taken as a causality relation. The knowledge so generated is not
error-free. We already know that a logically correct proposition can be
empirically false.
Consider the following economic theory. The alpha proposition says
that the short-run level of output in a capitalist society is given by the
expenditure behavior of social actors, not by the behavior of producers of
goods. A model of this theory would derive the following beta proposi-
tion: if the government expenditure increases, the level of output of society
will too. The beta proposition has been derived from alpha using logical
deduction. Is this sufficient to have scientific knowledge? No. As shown
by the alpha-beta method, the beta proposition is logically correct but it
could be empirically false when tested statistically because the assumptions
contained in the theory were arbitrary.
Paul Samuelson’s classical book Foundations of Economic Analysis
(1947) is the best example of deductivist epistemology. The book goes
as far as to derive meaningful theorems from economic theories, which
correspond to what we have called here beta propositions. Empirical refu-
tation is ignored. (This is a paradox in a book that starts by claiming the
significance of epistemology in economics.) Even today, most textbooks
of economics use deductivist epistemology. In the language of alpha-beta
method, they derive beta propositions from an economic theory, and then
go immediately to applications to public policies. No falsification of the
theory is ever presented or even suggested.
The demarcation principle of deductivist epistemology is the theory
itself. The criterion of scientific knowledge is what the theory says. If
someone presents facts that refute the predictions of a theory, the proposi-
tion of this person will be disqualified on the grounds that it is contrary to
the theory. The person will be treated as ignorant in scientific knowledge.
Moreover, if reality and theory are found inconsistent with each other,
then “reality must be wrong.” Why? Because the theory and its empiri-
cal predictions are logically correct! The debate about the superiority of
theory A against theory B is also based on deductivism: the criterion of
knowledge utilized is the theory itself.
The risk of error in scientific knowledge using deductivist epistemology
is therefore enormous. We never know whether the empirical predictions
of a theory are consistent or inconsistent with empirical facts. The causal-
ity relations are hypotheses and they could just be empirically false.
Actually, the logic of deduction—not the deductivist epistemology—
plays a major role in the alpha-beta method. From the assumptions of
a scientific theory, beta propositions are derived by using the logic of
deduction. But that is just part of the alpha-beta method, which includes
the falsification process.
In sum, deductivist epistemology has to be abandoned. It belongs
to the formal science, not to factual sciences. It cannot lead to scientific
knowledge in the factual sciences.

Inductivist Epistemology
Inductivist epistemology as a theory of knowledge assumes that scientific
knowledge can be reached by empirical observations alone. No scientific
theory is needed. The derived rule of inductivist epistemology says that
scientific knowledge requires empirical observation; it is a necessary and
sufficient condition. Inductivism is just the opposite of deductivism.
A distinction needs to be made between inductivist epistemology and
inductive logic. Consider the following examples of syllogism (premises
and conclusion):

Deductive logic:
All men are mortal
Socrates is a man
Therefore, Socrates is mortal

Under deductive logic, every argument involves an inferential claim: the
conclusion follows necessarily from the premises.

Inductive logic:
We have observed that some poor countries are tropical
Then all tropical countries are probably poor

As we can see, under inductive logic, no definite conclusion can be derived
from the premises. The conclusion cannot be taken as true, but only as
probable.
In standard textbooks of logic, we find the following statement about
inductive logic: “In general, inductive arguments are such that the con-
tent of the conclusion is in some way intended to ‘go beyond’ the con-
tent of the premises” (Hurley 2008, p. 36). Nevertheless, no principle of
logic exists that justifies going beyond observations; that is, nothing in the
observations themselves is found that can afford us a reason for drawing
conclusions beyond those experiences.
Inductive logic refers to inductive inference: the logical passage from
particular statements, such as accounts of observations or experiments, to
universal statements, such as hypotheses or theories. However, from the
strict logical point of view, are we justified in making such inference?
The question whether inductive inferences are logically justified is
known as the problem of induction. To solve this problem, to have induc-
tive inference logically justified, a principle of induction must be estab-
lished; but in order to justify the principle of induction, we should have
to assume an inductive principle of higher order; and so on. “Thus, to
attempt to base the principle of induction on experience breaks down,
since it must lead to an infinite regress” (Popper 1968, p. 29).
The problem of induction can be represented as follows:

I: m → M / P
P: m′ = (m, n) → M′ / P′
P′: m″ = (m, n, r) → M″ / P″

The principle of induction (I) says that from a particular observation (m)
we can reach a general statement (M), the justification of which is an
inductive rule (P). How do we establish P? By applying another induction,
which is to be found in the observation itself (m′), which now includes in
the observation a new element (n), from which we can generalize to state-
ment M′, the justification of which is an inductive rule (P′), and so on.
Thus, the inductive rule is just another induction; moreover, the algorithm
leads inevitably to the logical problem of infinite regress.
Consider the following example to illustrate the problem of induction:

I: m: We observe that country C is tropical and poor
M: Tropical countries are poor (why?)
P: m′: We observe that country C is tropical, suffers from malaria, and is poor
M′: Tropical countries suffer from malaria and are poor (why?)
P′: And so on.

If the initial statement (m) refers not to a single country but to a group
of countries, the conclusions would be formally the same. As we can see,
the observations themselves cannot give us a logical justification for draw-
ing conclusions beyond those experiences. In particular, no underlying
factors in the workings of the real world will ever appear with inductivism.
Under inductive logic, no definite conclusion can be derived from the
premises. The conclusion cannot be taken as true, but only as probable, as
indicated earlier. “If some degree of probability is going to be assigned to
statements based on inductive inference, this will have to be justified by a
new principle of induction, appropriately modified. And this new principle
in its turn will have to be justified, and so on” (Popper 1968, p. 30). The
logical problem of infinite regress is back again.
If it were just a matter of having more observations, how many times would we have to repeat our observations to draw a general conclusion? How can we justify the conclusion? This is Popper’s answer: “[A]ny conclusion drawn
in this way may always turn out to be false: no matter how many instances
of white swans we may have observed, this does not justify the conclusion
that all swans are white” (Popper 1968, p. 27).
The problem of induction has no solution; that is, there is no such thing
as inductive logic. As philosopher Susan Haack summarized, “According
to Popper, we have known since Hume that induction is unjustifiable;
there cannot be inductive logic” (Haack 2003, p. 35).
In sum, there are logical problems with inductivism. First, the principle
of induction would be another induction, for the principle of induction
must be a universal statement in its turn. This leads to the logical problem
of infinite regress, which denies the existence of inductive logic. Second,
inductivist epistemology rests on the assumption that inductive logic exists, which it does not. Therefore, inductivist epistemology must be
abandoned because it does not fulfill two requirements of the meta-theory of knowledge, shown in Table 1.1, Chap. 1. Inductivism provides no logic of scientific knowledge; there is no logical route from
facts to explanation and causality relations of the real world, and, there-
fore, a demarcation rule between scientific knowledge and non-scientific
knowledge cannot be provided either.

Statistical Inference as Empirical Research Method
Now we consider an empirical research method in which we can logically
justify the conclusion that an empirical observation can go beyond the con-
tent of the observation. This is given by the logic of statistical inference.
As shown above, in Chaps. 5 and 6, from a sample drawn from a parent
population, a logical inference can be made about the population. Hence,
empirical regularities or empirical laws in the real world can be constructed
by using the logic of statistical inference. In terms of the alpha-beta method,
statistical inference implies constructing the empirical set of facts b.
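In a Python sketch (the numbers and library calls are my own illustrative assumptions), the logic of statistical inference licenses exactly this kind of statement about the parent population from a random sample, such as a confidence interval for the population mean:

```python
# Sketch: inference from a random sample to its parent population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
population_mean = 10.0                       # unknown in actual practice
sample = rng.normal(population_mean, 2.0, size=200)

# 95% confidence interval for the population mean (t-based).
mean = sample.mean()
sem = stats.sem(sample)
low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

print(f"sample mean: {mean:.2f}")
print(f"95% CI for the population mean: ({low:.2f}, {high:.2f})")
```

The interval is a statement about the population, justified by statistical theory; what it does not and cannot deliver is an explanation of why the population has that mean.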
Suppose that an empirical law b has been found using the logic of statistical inference. Can we know why such a law exists? Do we have an explanation? Do we have the causality? The answers are no, for that would
require inductive logic, which does not exist. There is no logical route
from data to theory because there is no such thing as inductive logic, as
shown above. We are back in the problem of induction.
Albert Einstein wrote the following in a letter to Karl Popper:

I think (like you, by the way) that theory cannot be fabricated out of the
results of observation, but that it can only be invented. (Popper 1968,
p. 458)

In terms of the alpha-beta method, the proposition that there is no logical route from observations to theory can be represented as follows:

b ⇏ α

Statistical inference cannot solve the problem of induction. However,
it is useful in the task of establishing empirical regularities, which plays a
significant role in the construction of scientific knowledge because that
regularity will call for an explanation, for a scientific theory. In this respect,
statistical inference refers to the inference from observations of samples
to the population from which samples were drawn. This is viable because
there exists statistical theory, which gives logic to the inference. This
rule may be called the statistical inference research method. It is an empirical
research method, not a scientific research method, as alpha-beta is.
Statistical inference is the logic of empirical knowledge (not of scientific
knowledge); thus, the criterion of knowledge is the discovery of empirical
regularities, based on statistical testing of an empirical hypothesis that is not
derived logically from a theory, for such theory is unavailable. This empiri-
cal hypothesis may be called H-hypothesis. The logical justification of this
hypothesis is not a scientific theory, but an intuition of the researcher.
The H-hypothesis can be subjected to statistical testing using parametric or
non-parametric statistics. To illustrate the method, regression analysis will be
presented here. Notice that under this method the linear regression equation does not come from a scientific theory, from a beta proposition (it is not
β-hypothesis), but just from an empirical hypothesis without theory. Then

H-hypothesis:

Y = F(X1, X2) = β0 + β1X1 + β2X2,  β1 > 0, β2 < 0 (7.1)

Sample estimate:

Y = b0 + b1X1 + b2X2 + e (7.2)

In this case, the terms endogenous and exogenous variables cannot be used because they pertain to a theory. The variables must change names: Y is now called the dependent variable and X1 and X2 are called independent variables. Equation (7.1) introduces the additional assumption that the relation between the dependent and independent variables is linear in the
parent population. Equation (7.2) shows the estimates of the regression
coefficients from a sample drawn from the parent population.
In the testing of H-hypothesis, the null hypothesis is that the regression
coefficients are each equal to zero. The alternative hypothesis is that the
regression coefficients have the signs of the H-hypothesis. If the sample
estimate of each of the regression coefficients shows a p-value that is larger
than 5%, then there exists a high probability that the observed deviation
is just a random effect; hence, the null hypothesis is accepted and, conse-
quently, the H-hypothesis is rejected. (Note that this conclusion follows even if only one of the regression coefficients fails the test.)
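A hedged sketch of this testing procedure in Python (the synthetic data and the helper code are my own assumptions, not the book's): estimate the regression by ordinary least squares, compute a p-value for each coefficient, and check the hypothesized signs.

```python
# Sketch: testing an H-hypothesis (beta1 > 0, beta2 < 0) by OLS
# on synthetic data (numpy and scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), x1, x2])        # design matrix
coef = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS estimates
resid = y - X @ coef
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = coef / se
p_values = 2 * stats.t.sf(np.abs(t_stats), dof)  # two-sided p-values

# The H-hypothesis is accepted only if every coefficient is significant
# AND carries the hypothesized sign.
accepted = (p_values[1] < 0.05 and coef[1] > 0 and
            p_values[2] < 0.05 and coef[2] < 0)
print("H-hypothesis accepted:", accepted)
```

Even when the hypothesis is accepted, the method licenses only a statement about correlation in the parent population, not about causality, as argued below.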
The failure of the H-hypothesis is due in part to the hypothesis itself
and in part to the assumptions of the regression analysis, a parametric test,
and to measurement problems of the variables. We also face the identification problem here because these parts are not distinguishable. The
next step could be to test the hypothesis using the logarithms of the vari-
ables to escape from the linear assumption. If the conclusion prevails, then
H-hypothesis must be rejected: there is no such statistical correlation in
the parent population.
If the null hypothesis is rejected (p-value < 5% for each regression
coefficient), and the signs of the coefficients bj are the expected ones, then
the H-hypothesis is accepted. There exists such statistical correlation in
the parent population.
Note that in testing the H-hypothesis, the interpretation of the esti-
mated coefficient of determination (r²) will be the following: it is the percentage (100r²) of the observed total variation in the dependent variable that can be attributed to the statistical association with the independent
variables. We cannot say that it is “the proportion of the observed total
variation in the dependent variable that is caused by the variations in the
independent variable” (as it usually appears in standard statistical text-
books), for there is no causality here. Causality requires a scientific theory
and testing a β-hypothesis, where endogenous and exogenous variables
are well established by the scientific theory.
Consequently, if the H-hypothesis is accepted in the regression test,
we cannot say that there exists causality among the variables involved in
the parent population. To be sure, the reason is that causality requires a
scientific theory, which in this case is unknown. Moreover, there is no
logical route to go from H-hypothesis to the scientific theory; hence, even
if the H-hypothesis were taken as a possible beta proposition, we could
not logically derive an alpha proposition from it. In the case of testing
H-hypothesis, therefore, we can write

The existence of statistical correlation does not imply the existence of causal-
ity among the variables involved.

On the other hand, if the H-hypothesis is rejected, we cannot conclude that causality is absent; that is, the absence of correlation does not imply the absence of causality. The reason is the same: the scientific theory and the derived beta propositions are unknown. The H-hypothesis could
just be part of a beta proposition once a scientific theory becomes available.
The logic of the statistical inference method is to construct empirical regularities. For this purpose, the H-hypothesis must be tested repeatedly
using different samples of the parent population.
We can therefore conclude as follows:

• In the framework of a theoretical model, in which a β-hypothesis
has been submitted to statistical testing using regression analysis, the
existence of statistical correlation corroborates the causality relations
predicted by the scientific theory, whereas lack of correlation implies
empirical failure of the theory and thus absence of causality.
• By contrast, in the framework of the statistical inference method,
in which an empirical H-hypothesis, a hypothesis with no theory,
has been submitted to statistical test using regression analysis, the
existence of statistical correlation does not imply causality, whereas
the lack of statistical correlation does not imply absence of causal-
ity. Under this research method, no inference upon causality can
be made from regression analysis. This conclusion holds true no matter how sophisticated the statistical or econometric method applied in the regression analysis is: the bottom line is still correlation testing, in one form or another. The reason is that
theory is unknown and theory cannot be derived logically from the
regression results. (More on causality fallacies will be shown in the
next chapter.)
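The point that correlation without theory licenses no causal claim can be illustrated with a classic sketch (Python; the simulated series are my own illustrative assumptions): two trending series generated independently of each other, with no causal link whatsoever, will show a strong sample correlation.

```python
# Sketch: two causally unrelated trending series are highly correlated.
import numpy as np

rng = np.random.default_rng(5)
n = 500
series_a = np.cumsum(1.0 + rng.normal(size=n))   # independent upward drift A
series_b = np.cumsum(1.0 + rng.normal(size=n))   # independent upward drift B

r = np.corrcoef(series_a, series_b)[0, 1]
print(f"sample correlation of two unrelated series: {r:.2f}")
```

The correlation here is real but spurious: it reflects a shared trend, not a causal relation, which is exactly why a scientific theory is needed before causality can be inferred.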

It follows that the statistical inference research method is not conducive
to scientific knowledge. It cannot establish causality relations. Its logic is
the construction of empirical regularities. It leads to a description of the
real world. Therefore, it does not constitute a logic of scientific knowl-
edge, an epistemology.
The statistical inference method is, then, a pre-scientific method. It
cannot provide scientific knowledge by itself. But it plays a role in reach-
ing scientific knowledge when no theory is available. Statistical inference
method and alpha-beta method can be seen as complementary in one par-
ticular sense. If the empirical regularities coming from the statistical infer-
ence research method are subsequently placed into an alpha-beta method,
which is a scientific research method, then the set of empirical relations
b will be already known, and alpha-beta propositions would have to be
invented (as Einstein said) to explain that set of facts. Whenever the sta-
tistical inference research method is accompanied by proposals of a theory
that could explain the observed statistical relations—developed by trial and error, not by logic—then it belongs to the realm of scientific research.
In sum, although there is no logical route from empirical data to the-
ory, empirical regularities could be transformed into scientific knowledge,
not by deductive logic, but by trial-and-error procedure, until a theory is
invented. Similarly, a theory per se is not scientific knowledge, but it can
become scientific knowledge if the set of empirical predictions that are
derived by logical deduction (beta propositions) are found consistent with
empirical facts using statistical testing.

Interpretive Research Method
The interpretive research method assumes that social science is different
from natural sciences because humans have free will and have feelings.
The rules of scientific knowledge in the social sciences should then be
different from those of the natural sciences. In physics, the researcher cannot ask matter questions about its motivations and feelings. In biology, the researcher studying a community of apes is an ethologist, who cannot ask questions either and uses the method of observation at a distance, so as not to disturb the apes' behavior. In the social sciences, according to the interpretive method, the researcher can and should ask the people under study questions about their feelings, beliefs, and motivations, and could then better understand human behavior. The social sciences now seem simpler than the natural sciences.
The interpretive method is justified in the literature by saying that sci-
entific knowledge in the social sciences can only be achieved through interpretation of the behavior of people, for human actions have purposes and
these must be made meaningful and intelligible, which requires interpretation of what people do (Rosenberg 2008, Chap. 1). The interpretive epistemology is also called hermeneutics, for it deals with the interpretation of human behavior, which is somewhat similar to the interpretation
of texts.
How would the interpretive method operate? The standard answer is
that participant observation and field research are the appropriate method.
Ask informants about the underlying motivations and feelings that guide
their actions through the direct and detailed observation of people in their
natural settings in order to arrive at the understanding of how people
create and maintain their social worlds. This method is rich in detailed
description and limited in abstraction (Neuman 2003, p. 76).
Therefore, interpretive epistemology would reach valid results when
the description is so accurate that it makes sense to those being studied
and if it allows others to understand or enter into the reality of those
being studied. “The theory or description is accurate if the researcher con-
veys a deep understanding of the way others reason, feel, and see things”
(Neuman 2003, p. 79).
If full description, not abstraction, is the fundamental way of study-
ing social reality, interpretive epistemology can hardly lead to scientific
knowledge. Causality relations cannot be obtained through this method
either. On the other hand, this method is applied to some social actors in
particular settings, but not to the entire society. The inference from this
observation to the aggregate would be unviable, as it would need induc-
tive logic, which does not exist. There is no logical route from empirical
observation to theory, as shown above. In addition, the same reality can
be interpreted in several ways, depending upon the researcher; hence,
knowledge could not be impersonal, which is a requirement for scientific
knowledge.
Another problem is whether the data set collected through this method
is reliable. The presence of a researcher may distort the behavior of people.
This is similar to Heisenberg’s uncertainty principle in physics: the closer you get to measuring a particle, the more it changes location or velocity in ways that cannot be predicted (see Chap. 9 below).
As to the declarations of informants about their behavior and its under-
lying reasons, they may not correspond to the truth. People know how to lie; people also know of the existence of politically or culturally “correct” answers. For example, to the question “Do you seek to maximize profits?”
a businessperson may answer by saying that he does not, that “he seeks
to maximize employment, which is a social need.” It is unclear what the
incentives people have to tell or not to tell the truth in interviews, which
in itself is a research question. The logical problem of continuous regress
in collecting information thus also appears here.
The other problem is about how much people know—consciously—about their society and even about their own behavior and real motives. Some people play football and kick the ball with dexterity without solving the complex physical equations involved, without even knowing mathematics; they solve the problem intuitively. Similarly, people may be solving the complex
problems of their social life intuitively. Research would then seek to dis-
cover the underlying motivations of the social actors. Scientific knowledge
intends to be error-free, which implies relying on facts, that is, on what
110  A. Figueroa

people do (behavior, observable), not on what people say about what they do or what their motivations for doing things are.
The conclusion is that interpretive epistemology as it is presented in the
literature has no logic that can lead us to scientific knowledge, to causality
relations. As shown above, there is no such thing as inductive logic; hence,
there is no logical route from observations to theory and explanation. The
so-called grounded theory method mentioned in qualitative research text-
books is a misnomer: Scientific theory cannot be logically derived from
fieldwork observations, no matter how in-depth the fieldwork is. This problem
is similar to that of inductivism: No matter how many white swans you can
observe, there is no logical justification for the conclusion that all swans
are white.
In brief, the interpretive method is not an epistemology because it fails to meet two rules of the meta-theory of knowledge, as shown in Table 1.1, Chap. 1: it provides neither a logic of scientific knowledge nor a demarcation rule between scientific and non-scientific knowledge.
However, the interpretive research method can be very useful in the progress of scientific knowledge as exploratory research. In the first stages of the construction of scientific knowledge, when nothing is known about the research question, it can help the researcher gain insights into how the world under study works. This will be the case
if the descriptive knowledge constructed from the fieldwork observations
is followed by the construction of an empirical hypothesis (H-hypothesis),
which goes beyond the anecdotal observation. This cannot be the result of
logical derivation, but of intuition. The empirical hypothesis can later on
be subject to statistical testing and thus to the beginning of the construc-
tion of empirical regularities on the workings of society.
On the other hand, the descriptive knowledge obtained from the field-
work observations and its consistency with the ways people believe and
reason about their daily life—their intuitive knowledge—should help the
researcher to gain insights for the construction of a theory of their behav-
ior. Again, the theory cannot be derived logically, but must be invented,
by trial and error. The interpretation should lead the researcher to the
construction of the motivations underlying the observed behavior of peo-
ple in an abstract way, selecting only what appears to be the essential fac-
tors, that is, proposing a theoretical hypothesis.
In theoretical physics, rocks fall not because they wish to, but because there are external forces (gravity) at work; yet the behavior of rocks can be explained. People have wills, purposes, feelings, wants and needs.
But this fact does not mean that people’s behavior cannot be explained or that explanation is only possible if people make declarations about their behavior.
For example, consider the following economic theory: In a capitalist
society, consumers seek to maximize their individual utility functions, which
reflect their needs and wants, given their real incomes. This is an abstract,
unobservable proposition (alpha proposition). Nevertheless, the following
observable proposition can be derived from the abstract proposition: the
higher the price of a good, the smaller the quantities of the good that will be
bought in the market (beta proposition). This follows because the theory
implies that consumers will have incentives to substitute this good for others that satisfy the same necessities. The alpha-beta method has the property of transforming unobservable propositions (alpha) into observable propositions (beta). After falsification, the theory may be rejected or accepted; if
accepted, we have a theory that is able to explain the behavior of consumers.
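The passage above can be illustrated with a small numerical sketch (not from the book): assuming an arbitrary Cobb-Douglas utility function and a linear budget constraint, the unobservable alpha proposition (utility maximization) yields the observable beta proposition (quantity demanded falls as price rises). All functional forms and numbers are hypothetical, chosen only for illustration.

```python
# Hypothetical illustration of the alpha-to-beta derivation: an unobservable
# alpha proposition (utility maximization under a budget constraint) yields
# an observable beta proposition (quantity demanded falls as price rises).
# The Cobb-Douglas utility u(x, y) = x**0.5 * y**0.5 and all numbers are
# arbitrary assumptions for this sketch.

def demanded_quantity(price, income=100.0, other_price=1.0, steps=10_000):
    """Grid-search the utility-maximizing quantity of good x."""
    best_x, best_u = 0.0, float("-inf")
    for i in range(1, steps):
        x = (income / price) * i / steps        # candidate quantity of x
        y = (income - price * x) / other_price  # remaining budget buys y
        u = x ** 0.5 * y ** 0.5                 # Cobb-Douglas utility
        if u > best_u:
            best_x, best_u = x, u
    return best_x

# Beta proposition: the higher the price, the smaller the quantity bought.
assert demanded_quantity(price=4.0) < demanded_quantity(price=2.0)
```

For this particular utility function the exact demand is x* = income / (2 · price), which the grid search approximates; the falsifiable, observable content is only the negative price-quantity relation, not the unobservable utility function itself.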
According to Popper (1993), both an amoeba and the physicist Albert Einstein use theories to understand the real world in which they live. The difference is just the epistemology: the theory that the amoeba uses is based on instincts, whereas in the case of Einstein, the theory to explain the behavior of physical bodies is based on logic, on epistemology.
In the case of the social sciences, some particularities are present. People
need not know Newton’s gravity theory to predict that if someone jumps
through a window, he or she will end up on the ground. This is common-­
sense knowledge. But people will probably need more than intuition to
know how the solar system works.
Similarly, people need not know demand–supply theory to predict that higher scarcity of a good will increase its price, as when the price of potatoes increases because a landslide interrupts the road connecting the town with the farming areas. But people will probably require
more than intuition to understand the aggregate: how the market system
works, and what factors underlie the observed degree of income inequality and the level of unemployment in society.
In sum, interpretive epistemology is rather a pre-scientific research
method. It can contribute to the progress of economics when nothing
is known about a research question. Fieldwork observation and its interpretation by themselves do not constitute scientific knowledge, but they belong to the realm of scientific research when the researcher collects information seeking to discover new questions and propose empirical or theoretical hypotheses with which to start the algorithm that leads to scientific
knowledge.
Up to now, this chapter has presented three well-known epistemologies:
deductivism, inductivism, and interpretive. The conclusion is that these
epistemologies do not comply with the requirements of the meta-theory
of knowledge, as shown in Table 1.1, Chap. 1. Therefore, they must be
abandoned as epistemologies. However, interpretive epistemology can be seen as an empirical research method. In this category, we have included the
statistical inference method. These research methods provide pre-scientific
knowledge. Finally, we have also shown that these two empirical research
methods play a significant role in the progress of scientific knowledge,
for they complement the alpha-beta method, which is a scientific research
method. This principle of complementarity will be developed further in the
next section.

Research Methods: Scientific and Empirical

Before going into empirical research, the researcher must have a research question. What is a research question? It is an interrogative sentence that identifies, from the literature review, a tension between propositions. The tension may take the form of a puzzle, paradox, controversy, or vacuum, which the researcher intends to solve.
The choice of a research method is not a matter of personal taste or preference; it is an objective matter. On the criteria of theory availability and data availability, which come from the literature review, four methods can be distinguished. These are shown in Table 7.1.
When both theory and data are available to solve the research question, the researcher is placed in cell (1), and the research will be devoted to testing statistically the beta propositions of the theory, that is, to testing a theoretically based hypothesis (β-hypothesis).

Table 7.1 Research methods: scientific and empirical

Theory (α)     Dataset available                Dataset unavailable

Available      (1) Statistical testing of β     (2) Construct data and test β
               Alpha-beta method                Alpha-beta method
               Explanatory                      Explanatory
Unavailable    (3) Statistical testing of H     (4) Exploratory
               Statistical inference method     Interpretive method
               Empirical regularities           Descriptive
               Proposal of α                    Proposal of H or α

Cell (2) shows the case in which data
need to be constructed in order to submit the theory to falsification; thus,
the research will eventually move to cell (1). These methods constitute
scientific research or basic research, and thus the alpha-beta method provides the rules to follow.
When a data set is available but theory is not, cell (3), the researcher
will be able to use the statistical inference method. It seeks to test sta-
tistically a hypothesis that has no theoretical foundation, which is called
H-hypothesis. If it is accepted, the correlations obtained may then be used
as empirical regularities in need of theory to explain the phenomenon:
Why is it that there exists correlation between the variables included in
H-hypothesis? What factors might underlie the observed correlation?
Because there is no logical route from correlations to theory, the needed theory would have to be invented, that is, constructed by trial and error.
The task is to find an alpha proposition from which a beta proposition can
be derived, such that H = b = β is obtained. If such theory is found, the
researcher who started in cell (3) will have moved to cell (1).
If both theory and data are unavailable, the case of cell (4), exploratory
research is the only logical possibility. The researcher may use the inter-
pretive research method. The research method is based on case studies,
in which each case and its context is carefully designed, and participatory
observation and fieldwork are followed to collect primary qualitative and
quantitative data to produce descriptive knowledge. From the interpreta-
tion of the fieldwork data, the researcher is expected to gain insights to
propose theoretical or empirical hypotheses.
The aim of an exploratory research is to explore new research questions
and then propose new hypotheses, either an empirical H-hypothesis or the
first trial for alpha propositions. Therefore, from cell (4), the researcher
could move to either cell (2) or cell (3) and then ultimately will reach cell
(1). Exploratory research then constitutes the very first stage of scientific
research about a new research question, about which nothing is known.
It should be noted that the usual separation of researchers into quantita-
tive/qualitative or into theoretical/empirical has no logical justification, as
shown in Table 7.1. As to the first separation, cell (1) includes qualitative
elements, which are contained in the alpha propositions. Statistical testing
in cells (1), (2), and (3) uses basically quantitative data, but it can include
qualitative data (ordinal variables), socially constructed variables, as well
as “dummy” variables. Cell (4) collects primarily qualitative data, but
quantitative data are necessary as well, if hypotheses are to be generated.
As to the second separation, researchers in cells (1) and (2) will clearly be required to do both theoretical and empirical work. In cell (3), the work is mostly empirical, but it ends by proposing a theoretical hypothesis in order to move to cell (1). Finally, in cell (4), the work starts as theory-free research, but the researcher will have to conclude with theoretical or empirical hypotheses, and ultimately move to cell (1).
Table  7.1 is also helpful to place into perspective the so-called case
studies. Case study is a common method utilized in social research. The
question is whether it has epistemological justification. Because there can
be realities without theory, we can say that it is justifiable to place the case
study method in cell (1). The case study may be constituted by a sample
of particular firms, households, countries, and so on, that is, of those social
groups that are outliers or exceptions to the scientific theory or were never
part of a sample. Then the mean and variance of the variables involved can
be calculated from this sample.
Alternatively, the sample may be constituted by one social group only
(individuals, firms, households, or countries), for which observations over time are possible, allowing us to calculate the mean and variance of the
variables over time. Therefore, a case study can produce a scatter diagram
of variables (similar to the scatter diagram presented in Fig. 6.2) from
variations over time for a particular social group. It should be clear that a
case study that produces a single observation (not a scatter diagram, but
one point only) is useless. The statistical value of one observation is nil!
A case study can be applied to address the fallacy of ontological universalism, which is the belief that a relation that is true in one place or time must also be true in any other place or time (to be discussed in Chap. 8 below). Case studies can be summarized by two types of problems about generalization, as follows:

(a) To test statistically the validity of a theory for a particular social real-
ity. The question is whether this reality behaves as the theory says.
For example: Theory T was found valid in country C1. Is theory T also valid in country C2? This research question would correspond
to either cell (1) or cell (2), depending upon the availability of
empirical data.
(b) To test statistically the validity of an H-hypothesis for a particular social
reality. The question is whether the H-hypothesis, which was accepted
in country C1, can also be accepted in country C2. This research
question would correspond to cell (3).
The conclusion that emerges from Table 7.1 is that all empirical research
methods contribute to scientific knowledge, although in different forms
and stages. All methods are complementary. Research that appears in cell
(1) produces scientific knowledge. But the other three cells contribute to
the possibility of reaching cell (1) at some point of the algorithm. They
contribute to the growth of scientific knowledge by supplying the needed
dataset to falsify a theory, or by constructing a set of empirical regularities
that will call for a theory to explain them, or by providing insights to generate new theoretical or empirical hypotheses on new questions that seek
to push the frontier of scientific knowledge. Even case studies have their
place. The different empirical research methods play different roles in the
construction of scientific knowledge and thus offer different outputs from
research. All of them are important in the growth of scientific knowledge.
The empirical research methods presented in Table 7.1 are applicable
to economics and the social sciences. Each discipline can operate in any
of the cells, depending on the theory and data availability. There is no
epistemological justification to argue that economics is quantitative and
the other social sciences are qualitative, or that economics is theoretical
and the other social sciences are empirical, which would imply a separation
of disciplines by empirical research methods, that is, by cells. As shown
earlier, the construction and progress of scientific knowledge in the social
sciences requires the researchers to engage in theoretical-empirical and
quantitative-qualitative research.
Table 7.1 also shows that the epistemology of economics and the other social sciences assumes that there exists a difference between what people actually do (behavior) and what they say they do; moreover, scientific knowledge comes from behavior (facts), not from what people say they do and why. Then scientific theories in the social sciences can be subject to falsification only on the basis of hard data: observations about human
behavior; hence, falsification using soft data (perceptions, opinions, beliefs
that people state in interviews) will be unviable. Progress of scientific
knowledge requires both hard and soft data, as indicated in Table 7.1.
In some stages, as in cell (4), researchers will act as ethologists of their
own species and will collect mostly soft data. In the other cells, researchers
will use hard data.
This is another reason to support the claim made before: The social
world is much more complex than the physical world (and biological world
as well!) and, therefore, social sciences need to be more epistemology-intensive compared to the natural sciences. The alpha-beta method is
indeed a very involved method, much more so than the scientific research methods utilized in the natural sciences, as will be shown below in Chap. 9.
Before that, the next chapter deals with some important fallacies that have
been uncovered by the rules of the alpha-beta method.
Chapter 8

Fallacies in Scientific Argumentation

Abstract  Fallacy is defined in logic science as an argument that appears to be correct, but is logically incorrect. This chapter deals with fallacies
in economics. Logical argumentations about scientific knowledge in eco-
nomics may constitute fallacies. The alpha-beta method is a logical system;
therefore, it can help us uncover the fallacies in economic arguments. The
most relevant fallacies about the economic process are discussed here, such
as fallacies of composition; fallacies of causality, including the popular fal-
lacy that the existence of statistical correlation implies causality; fallacies of
deductivism and inductivism; the fallacy of ontological universalism; the
fallacy of misplaced concreteness, and the fallacy of forecasting.

Logic is a formal science that studies the quality of arguments so as to evaluate whether arguments are correct or incorrect. Fallacy is defined in
logic science as an argument that appears to be correct, but it is logically
incorrect. Fallacies therefore are usually presented in textbooks of logic.
This chapter deals with fallacies too, but they are seen in the particular
perspective of the alpha-beta method. Logical argumentations about sci-
entific knowledge in economics may constitute fallacies. To be sure, the
fallacies presented and discussed here are those that are exposed to us by
using the alpha-beta method. This is another advantage of this method,
which results from the fact that the method is derived from the composite
epistemology, the combination of the epistemologies of Karl Popper and Nicholas Georgescu-Roegen. The objective of this chapter is to present only those fallacies that are more common or relevant in economics.

© The Editor(s) (if applicable) and The Author(s) 2016
A. Figueroa, Rules for Scientific Research in Economics,
DOI 10.1007/978-3-319-30542-4_8

Fallacies of Composition
Fallacies of composition refer to the logical errors of inference, going from
the parts to the whole or vice versa.

The Fallacy of Aggregation  “What is true for the individual must be true
for the aggregation of individuals.”

Examples

“Hydrogen and oxygen are gases; then water is also a gas”


“In a group of people watching a parade, if one person tiptoes, then he can see the parade better; hence, all people should tiptoe to see better”

The Fallacy of Division  “What is true for the aggregate is also true for the
individuals of the aggregate.”

Example

“Water is liquid; then all its constitutive elements (hydrogen and oxygen)
are liquid substances”

The error in these two types of fallacies comes from ignoring the effect
of the interactions among individual elements of an aggregate. Therefore,
what is true for the part need not be true for the whole, and vice versa.
Economics is a social science. Its objective is to explain the functioning
of human societies. Human societies are made of individuals. Economics
analyzes individual behavior just as a logical artifice, as a method to
construct the aggregate. The risk of falling into fallacy of composition
problems is therefore very high in economics.

Examples

“If a farmer has a good harvest, he will become richer; then if all farmers have good harvests, they will all become richer” (fallacy of aggregation: the market price will fall if all farmers produce more).
“If national income increases, then the incomes of all individuals increase”
(fallacy of division: the reason is obvious).
“If a borrower cannot repay the loan to a bank, he or she has a problem. If all borrowers cannot repay, they all have a problem” (fallacy of aggregation: if all borrowers cannot repay, the banks have a problem).
“If an individual acts seeking his or her own interest, he or she will reach the objective; if all individuals act seeking their own interest, the group will attain its objectives” (fallacy of aggregation: problems of congestion and negative externalities may appear in the interactions). From the behavior of individuals may result social wellbeing or social disaster, depending on the form of their interactions. This is one of the most debated propositions in economics about individual freedom and social wellbeing.
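The farmers example above can be sketched numerically (an illustration, not from the book); the linear inverse-demand function, the number of farmers, and all parameter values are arbitrary assumptions.

```python
# Hypothetical sketch of the aggregation fallacy: what is true for one
# farmer (a bigger harvest raises his revenue) is false for all farmers
# together, because the market price falls when total supply rises.
# The inverse demand curve and all numbers are arbitrary assumptions.

N_FARMERS = 100

def market_price(total_supply):
    """Assumed linear inverse demand: price falls as total supply rises."""
    return max(0.0, 20.0 - 0.0008 * total_supply)

def revenue(own_harvest, others_harvest):
    """Revenue of one farmer, given everyone else's harvest."""
    total = own_harvest + (N_FARMERS - 1) * others_harvest
    return market_price(total) * own_harvest

baseline = revenue(100, 100)      # every farmer harvests 100 units
one_doubles = revenue(200, 100)   # one farmer doubles, others unchanged
all_double = revenue(200, 200)    # every farmer doubles

assert one_doubles > baseline     # true for the individual...
assert all_double < baseline      # ...false for the aggregate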

The fallacy problem also applies to economic theory. Consider the following proposition:

“If a theory explains reality, it explains all the components of that reality.”

Therefore, if a theory is able to explain the behavior of a group of consumers, then it also explains the behavior of every consumer of the group;
if it explains the behavior of the capitalist system, then it also explains the
behavior of every capitalist country. This fallacy ignores the consequences
of using abstraction as the method to construct theories. Because of
abstraction, the theory explains only general relations or average relations,
but it cannot explain the behavior of every member of the group. Due to
the use of abstraction, there will be exceptions to the general relation.

Fallacies of Causality
Causality relations are the fundamental product of scientific knowledge.
To know what causes what is the result of having a scientific theory, empir-
ical predictions of the theory, and survival from falsification, as shown
in the alpha-beta method. However, there are ways in which causality is stated as a fallacy.

The Fallacy of Statistical Causality  “If A is correlated with B, then A causes B.”

The alpha-beta method indicates a clear relationship between causality (a beta proposition derived from a theory) and statistical correlation: the
latter serves to reject or accept the former. Then only under alpha-beta
method—cell (1) in Table 7.1—causality implies correlation and correla-
tion implies causality.
However, it is common to read in the literature that from statistical testing—testing an H-hypothesis in cell (3), Table 7.1—the researcher jumps from
correlation to causality. This is a fallacy. Correlation is just the description
that there is a statistical association of occurrence between observed vari-
ables. The fallacy originates in the use of inductivist epistemology: “from a
set of observations, a generalization logically follows.” However, we have
already seen that there is no logical justification for such inference; that is,
there is no logical route from observation to explanation, from correlation to
alpha propositions, and thus there is no route from correlation to causality.
The confusion also originates in the use of the word “explanation.” The existence of statistical correlation indicates that the variations of the dependent variable are associated with the variations of the independent variable. However, in the usual statement about correlation, the word “association” is wrongly replaced by “explanation,” and then the statement reads “variations of the dependent variable are explained by the variations of the independent variables.” This statement is not only false; it is misleading.
In the case of correlation between two variables Y and X, where r(y/x) means that X is the independent variable and Y the dependent one, and r is statistically significant, the fallacy can be easily shown as follows:

If r(y/x) > 0 implies X causes Y,

but r(y/x) > 0 also implies r(x/y) > 0, which in turn implies Y causes X.

Therefore, it is clear that the existence of statistical correlation between Y and X leaves causality undetermined.
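The symmetry of r can be verified directly with a minimal sketch (not from the book; the data are arbitrary illustrative numbers):

```python
# The Pearson correlation coefficient is symmetric: r(y/x) = r(x/y).
# Correlation alone therefore cannot determine the direction of causality.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

x = [1.0, 2.0, 3.0, 4.0, 5.0]     # arbitrary illustrative data
y = [2.1, 3.9, 6.2, 8.1, 9.8]     # roughly y = 2x

# Swapping "dependent" and "independent" leaves r unchanged.
assert pearson_r(x, y) == pearson_r(y, x)
```

The data alone thus give no ground for choosing which variable causes the other; that choice comes from the beta proposition derived from a theory.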

The Fallacy of Identification  “If A occurs together with B, then A causes B.”
Example  “(A) Whenever you see firemen, (B) there is fire; therefore, A
causes B.”

It is more likely that the causality goes in the opposite direction: B causes A. This fallacy originates from the error in considering that facts
per se can tell us what causes what, what is endogenous and exogenous.
Causality is shown by the beta proposition, which is derived from alpha
propositions of a scientific theory.
The Fallacy of Spurious Correlations  “If fact A occurs at the same time as fact B, then A is the cause of B.”
This fallacy is also known as Cum hoc, ergo propter hoc (simultaneously
with that, then because of that). There is no logical justification to draw
the conclusion of causality from the observation that facts occur simulta-
neously. The use of the alpha-beta method may reveal that a third factor
(C) is the cause of occurrence of A and B.

Examples

“There is correlation between (A) increases in ice cream sales and (B) increases in drowning deaths; hence, A causes B.” Consider a third factor, (C) the weather: it is summer time. Hence people buy more ice cream and go to the beaches in larger numbers, which increases deaths. Then C may cause both A and B; thus, A does not cause B.
A study found the following correlation: “(A) children sleeping with the lights on are correlated with (B) children suffering from myopia. Then the study concluded: (A) causes (B).” Another study found that there is a third factor causing both: (C) the myopia of the children’s parents. The theory is that myopia is mostly inherited; hence, myopic parents tend to leave the lights on in their children’s bedrooms. Then C causes both A and B; thus, A does not cause B.
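Spurious correlation is easy to reproduce in a simulation (an illustrative sketch, not from the book): a hidden third factor C drives both A and B, which never influence each other. The distributions and all parameters are arbitrary assumptions.

```python
# Hypothetical simulation of the spurious correlation fallacy: A and B do
# not cause each other, yet they are strongly correlated because a third
# factor C (e.g. the weather) drives both.

import random

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
    var_x = sum((a - mean_x) ** 2 for a in xs)
    var_y = sum((b - mean_y) ** 2 for b in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(42)
c = [random.gauss(0.0, 1.0) for _ in range(5000)]    # confounder C
a = [ci + random.gauss(0.0, 0.5) for ci in c]        # A caused by C only
b = [ci + random.gauss(0.0, 0.5) for ci in c]        # B caused by C only

# A and B are strongly correlated although neither causes the other.
assert pearson_r(a, b) > 0.6
```

Only a theory in which C appears as the common cause, with its beta propositions submitted to falsification, can disentangle this structure; the correlation between A and B by itself cannot.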

The Fallacy of Sequence  “If fact B occurs after fact A, then A is the cause
of B.”

This fallacy is also known as the problem of post hoc, ergo propter hoc
(after that, then because of that). Spurious correlation originates this
fallacy.

Example

“(A) After divorce, (B) women tend to have a higher cancer incidence; hence, A causes B.” A third factor may cause both: age. After divorce, women will be older, and older women are more likely to develop cancer than younger women. Aging may cause both cancer and divorce; that is, divorce does not cause cancer.

There is a test in econometrics called “the Granger test of causality.” If data alone could tell us what is endogenous and what is exogenous, science would be unnecessary! There cannot be such a test. Actually, “test of
causality” is a misnomer. What this test does is something different: it tests
whether two variables have a sequential pattern of occurrence. It should then be called the Granger test of sequence.
The fallacies of causality are in large part due to the existence of the
spurious correlation problem. The alpha-beta method eliminates this
problem and the fallacy.

Fallacies of Deductivism and Inductivism

Consider the following statement:

“If theory and reality do not coincide, reality is wrong, for theory being
logically constructed cannot be wrong.”

This fallacy is related to the deductivist epistemology, in which the demarcation principle is the theory itself. A logically correct system (a theory) may be empirically false, as the alpha-beta method shows. The reason is that the assumptions of the theory are arbitrary. Nevertheless, this fallacy is very common; it wrongly assumes the superiority of theory over facts.
On the fallacy of inductivism, consider the following statement:

“To understand the real world all what you need is empirical data.”

This fallacy is related to the inductivist epistemology. As shown earlier, this epistemology assumes that there exists a logical route from observations to theory and causality; moreover, it attributes superiority to
observation over theory. Some popular statements that reflect this fallacy
include the following:

“Don’t talk to me about theory, but about facts;”
“Let data speak for themselves;”
“This study shows scientific empirical evidence;”
“This research shows statistical correlations among variables we have stud-
ied, which implies that there exists causality among them;”
“The empirical results found in country C are applicable to all the countries
of the region.”

The alpha-beta method shows that there is a logical route from theory to facts, but there is no logical route from facts to theory and explanation.
The alpha-beta method is able to prove that both deductivism and induc-
tivism do not provide a logic of scientific knowledge; that is, they are truly
not epistemologies. Therefore, any statement based on these epistemologies will lead to fallacies.

Fallacy of Ontological Universalism


Consider the following statement:

“If a theory explains one country, then it explains all countries”

This proposition assumes the principle of ontological universalism. It applies to physics: a theory of physics applies to every place on Earth.
The physical reality is the same everywhere because the atom behaves
identically everywhere. The proposition is a fallacy in economics because
social realities are not necessarily homogeneous. The principle does not
apply to biology either. The behavior of plants is not identical everywhere.
Plant physiology is different in the Andes compared to the US geography.

Fallacy of Theory of Everything


Consider the following statement:

“If an economic theory does not explain everything, it does not explain
anything.”

Economics can construct scientific theories that lead to unity of knowledge in a society. This implies constructing a unified theory. A good unified theory is able to explain the society taken by parts and also taken as a whole.
However, a unified theory does not imply a theory of everything, because in order for any economic theory to explain reality it must have exogenous variables, which by definition cannot be explained. Causality needs exogenous variables. In the social sciences there cannot be a theory of everything. This possibility exists in physics because no exogenous variables are needed to explain facts (more on these will come later in Chap. 9).

Fallacy of the Misplaced Concreteness


Consider the following statement:

“If an economic theory explains a reality, then it explains all aspects of that
reality.”

Scientific knowledge is abstract knowledge. In order to be understood,
the complex social world has been transformed into an abstract, simpler
world by means of a scientific theory, in which only the essential factors are
taken into account and the rest are ignored. Like a map, the abstract
world is a representation of reality, not at the scale 1:1, but at a higher
scale. If the theory is accepted, if it has survived the falsification test, then
it is a good approximation of the real world. However, the theory is not
the real world, as the map is not the territory. The fallacy comes from this
confusion. (The name of the fallacy is borrowed here from the term used
by philosopher Alfred North Whitehead, who referred to it as “mistaking
the abstract for the concrete.”)
In the abstract world, the boundaries of the economic process are
clearly delineated, which implies a particular set of endogenous variables
and exogenous variables. The causality relations, therefore, refer to the
relations between these two sets of variables. It would be a mistake to ask
the theory about causality relations among some other set of variables and,
because the theory has no answer, to conclude that the theory fails.
The fallacy of misplaced concreteness appears mostly when the scien-
tific theory is applied to solve practical problems. Consider the following
cases. First, it would be a mistake to apply public policies to the agricultural
sector of a society based on a theoretical model that assumes a single good
as the endogenous variable, in which the role of productive sectors such as
agriculture has been ignored. This is a case of misplaced concreteness in
the use of the theory. A model in which agricultural output is an endogenous
variable would be needed for this public policy purpose.
Second, the application of a theory involves more than shifting the
equilibrium situation in a diagram as the result of a change in an exogenous
variable of the theoretical model. This happens in the abstract world.
Application in the real world requires introducing some elements of reality
to put the theory into action, such as the timing, the channels, the
organizations to be in charge, and so on. This is the engineering of the
science. Similar problems appear in the natural sciences. To construct a
building, the theory of gravity will be applied, but additional practical
problems must also be solved: the timing and provision of bricks, cement,
and workers, the financing, and so on. This is why people ask a civil
engineer, not a physicist, to build their houses.
From a scientific theory that has survived the falsification process, we
are able to obtain empirically valid causality relations, which show the
means to attain given objectives. We could then say that there is nothing
more practical than a good theory. However, this conclusion does not refer
to how to apply the theory in practice, which calls for the engineering of
the science. Thus, when dealing with problem-solving tasks, science and
engineering must be distinguished, as they play different roles. Science
and engineering are clearly separated by university degrees in the natu-
ral sciences (physicist and civil engineer, biologist and agronomist), but
this is not the case in economics. Formally all economists have obtained
the same university degree. However, the difference can be established by
their research activities: Scientists are those who conduct basic research,
for whom the alpha-beta method should be helpful; engineers conduct
applied research, taking as given a particular economic theory, which
becomes their paradigm.

Fallacy of Forecasting

Consider the following statement:

“If economics cannot forecast correctly, then it is not a science.”

It is common to believe that the social sciences can forecast future
events, which implies that they can explain the future! In reality, the social
sciences can only explain the past. The reason is simple: falsification of a
scientific theory can only be made against observed facts.
At an international conference held in 1985, this very debate took place.
An economist was challenged by a biologist to tell the audience what the
world prices of cereals would be in the year 2000. The economist said that
he could not tell. Then the biologist told the audience that this answer
was consistent with his view that economics is not science. The economist
reacted and asked the biologist to tell the audience what the world biomass
of anchovies would be in the year 2000. The biologist responded by saying
that he could not tell, but he would be able to say what factors the answer
depended upon. The economist said that that was exactly what he wanted
to say for the case of cereal prices: he could not forecast the prices, but he
could tell what the determinants of prices were.
A forecast is a prophecy or guess about the future values of variables. In
economics, prediction is not forecast; prediction refers to the beta
proposition: the value of the endogenous variables is known if the values of
the exogenous variables are known. This is a conditional prediction.
Consider a static process. From an accepted theory, consider the
following beta proposition:

β: Y = F(X⁺)  (8.1)

It says that the value of the endogenous variable Y depends upon the
value of the exogenous variable X.  Therefore, the value of the endog-
enous variable Y in a particular period of the future will depend upon the
value the exogenous variable will take in that period. Which value will the
exogenous variable X take in the future? The theory cannot determine the
value of the exogenous variable for the future; hence, it cannot predict
the future value of the endogenous variable. The future value of the exog-
enous variable X could only be guessed. With the value of the exogenous
variable so determined, the future value of the endogenous variable Y could
be predicted. Hence, forecasting is a conditional prediction, conditional
on correctly guessing the future value of the exogenous variable X.
The forecast of the value of an endogenous variable could turn out to be
false ex post, but that does not refute the validity of the theory. The theory
is one that has been corroborated through falsification. It was the guess
about the exogenous variable that failed.
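The logic of the static case can be sketched in a few lines of code. The linear form F and all numbers below are hypothetical, chosen only to illustrate how a forecast can fail ex post while the beta proposition itself survives:

```python
# Beta proposition (8.1): Y = F(X), with a positive effect of X on Y.
# The linear form and the numbers are hypothetical, for illustration only.
def F(X, a=10.0, b=2.0):
    """Conditional prediction: the value of Y implied by a given X."""
    return a + b * X

X_guess = 5.0          # the theory cannot supply this value; it is a guess
forecast = F(X_guess)  # forecast = conditional prediction + guessed X

X_actual = 8.0         # ex post, the exogenous variable took another value
outcome = F(X_actual)

# The forecast failed (forecast != outcome), yet the conditional
# prediction holds: given the actual X, the theory reproduces the outcome.
# What failed was the guess about X, not the theory.
```

With the guess X = 5 the forecast is 20; with the realized X = 8 the outcome is 26. The discrepancy refutes the guess, not the beta proposition.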
The problem also appears in the case of dynamic processes. From a
valid theory, consider the following beta proposition:

β: Y = F(X⁺, t⁺)  (8.2)

It says that the trajectory of the endogenous variable Y depends upon the
time period, for a given value of the exogenous variable X. Therefore, the
theory predicts a particular trajectory, in which the value Y = Y′ will
occur at the particular period t′ in the future. This will be true if, and only
if, the value of X remains unchanged at period t′. But the theory cannot
determine that, because X is exogenously determined. Then the forecast Y′
could fail ex post because at period t′ the variable X took another value.
Finally, consider an evolutionary process. From a corroborated evolutionary
theory, consider the temporal dynamic equilibrium before regime
switching, which for simplicity is given by Eq. (8.2), where t = T. Because
the trajectory is fully determined for a given value of the exogenous
variable X = X′, the value of the endogenous variable Y = Y′ can be
forecast for period T = T′ < T*. However, this forecast could fail because
X′ may have changed to another value by period T′.
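The trajectory case makes the same point; the functional form below is again a hypothetical stand-in for Eq. (8.2):

```python
# Beta proposition (8.2): Y = F(X, t), a trajectory for a fixed X.
# The functional form is hypothetical, for illustration only.
def trajectory(X, t):
    """Value of Y at period t along the path implied by a constant X."""
    return X * (1.0 + 0.1 * t)

X_prime, t_prime = 4.0, 5
forecast = trajectory(X_prime, t_prime)  # holds only if X = X' still at t'

X_at_t_prime = 6.0                       # X shifted exogenously before t'
outcome = trajectory(X_at_t_prime, t_prime)
# The trajectory forecast fails ex post, yet the theory still gives the
# correct outcome once the realized value of X is plugged in.
```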

The conclusion is that a good economic theory is unable to determine
the value of the endogenous variables at a particular period in the future
because it cannot determine the value of the exogenous variables at that
period. This is true for static, dynamic, and evolutionary models.
Economics can explain the past, but, from that knowledge, it cannot
forecast the future. There is no epistemological justification for this step.
A scientific theory generates causality, which implies that changes in the
exogenous variables cause changes in the endogenous variables, but the
future value of the exogenous variables cannot be determined from the
theory. Even if the exogenous variables were endogenized, the new theory
could not solve the problem, as another exogenous variable would have
to be introduced in the new theory, the value of which will not be known
in the future.
The well-known problem of forecasting the weather is attributed to the
chaotic nature of the weather system. This is true, but it is not the whole story. Even in
non-chaotic systems, like the economic process, there exists the problem of
forecasting. Even in physics, the problem of forecasting is there. Consider
the debate about whether after the “big bang” the forces of gravity will bring
the universe to the “big crunch” in the future. This problem of forecasting
shows the limits of scientific knowledge. Attempts to know the future lead
to guesswork. Futurology is outside the realm of science.
Chapter 9

Comparing Economics and Natural Sciences

Abstract  How does the alpha-beta method compare with the methods
applied in the natural sciences? In order to make this comparison analytical,
this chapter presents the main theories of physics and evolutionary
biology, together with the methods used for accepting or rejecting them. The
conclusion is that the alpha-beta method is not applicable to physics, but it
is applicable to evolutionary biology. The alpha-beta method is thus
appropriate for sciences dealing with hypercomplex realities, such as
economics and biology. Therefore, economics is, like physics, a science;
however, economics is not a science like physics; economics is more like
biology. This chapter is not a digression; on the contrary, epistemological
comparisons have heuristic value in science.

Throughout the book, we have assumed that the social world is more complex
than the physical world. The rules of the alpha-beta research method have
been developed to deal with the social world and are thus applicable to
economics and the social sciences in general. How does the alpha-beta
method compare with the methods applied in the natural sciences?
This comparison is introduced in the book with the idea that our
understanding of the epistemology of economics will be improved if we
try to compare the alpha-beta method with that of the natural sciences.
Therefore, this chapter should not be seen as a digression; on the contrary,
comparison has a heuristic value; it is an aid to learning. The comparison
will be limited to the fields of physics and evolutionary biology.


Physics
Theoretical physicists have recently written scientific books about the
state of knowledge attained in physics that are addressed to a broad audience
as well. The most popular is the one written by Cambridge University
professor Stephen Hawking. Based on his book A Brief History of Time
(1996 edition), the structure of the discipline in terms of theoretical and
empirical propositions, together with the method followed, will be
presented here.
The first propositions made in physics were empirical hypotheses that
were not associated with any scientific theory in the sense in which the term
has been used in the alpha-beta method. This type of empirical hypothesis
has been called an H-hypothesis here. Two of the most important were:

H (1): The earth is the center of the universe (Ptolemy, second century A.D.).


H (2): The sun is the center of the universe, and the earth and other planets
move in circular orbits around the sun (Copernicus, c. 1514).

The invention of the telescope (1609) was sufficient to disprove both
empirical hypotheses. Two new empirical observations emerged with the
help of this instrument. This type of empirical observation, which does not
originate from statistical correlations, will be called here an observation (O).
As will be explained later on, observations of this type correspond to
equilibrium situations of bodies. The observations that refuted the empirical
hypotheses mentioned above were:

H(1) ≠ O(1): Not everything in space orbits around the earth (Galileo).
H(2) ≠ O(2): Planets follow elliptical paths around the sun (Kepler).

Gravity Theory
The most famous theory of physics is gravity theory, which was proposed
by Isaac Newton in 1687. It can be stated as follows:

ϕ1: The universe is a static system, in which bodies attract each other with
a force that is stronger the more massive the bodies are and the closer
they are to each other.

Empirical predictions can be logically derived from theory ϕ1. Call
them propositions γ1; that is, ϕ1 implies γ1. They are:

γ1 (1): Planets follow elliptical paths around the sun.


γ1 (2): Light is a particle and, like a cannon ball, can be caught up if the
observer runs fast enough.
γ1 (3): Attraction among objects operates instantaneously, over both short
and vast distances.

Are these empirical hypotheses consistent with reality? The following
empirical observations (O) can serve to seek falsification of this theory:

γ1(1) = O(2): Kepler’s empirical observation corroborates the first empirical
prediction.
γ1(2) ≠ O(3): Nothing can travel faster than light (Hawking 1996,
p. 32). Experimental physicists have shown that light travels at 670 million
miles per hour (or 300,000 kilometers per second), regardless of the velocity
of the observer.
γ1(3) ≠ O(3): According to the theory, the gravitational force travels with
infinite velocity, which contradicts observation O(3).

There are two empirical refutations, which are enough to reject the
theory. The question now is to find another theory that predicts proposi-
tion γ1(1), but not the others.

Special Relativity Theory


Albert Einstein developed two theories in the early 1900s. The first one
is special relativity theory. This theory can be stated as follows:

ϕ2: Time and space are not absolute categories; their behavior depends upon
the relative motion of the observer.

The empirical predictions of theory ϕ2 are the following:

γ2 (1): No object can travel faster than the speed of light.


γ2  (2): Relativity becomes more significant when observers move at veloci-
ties closer to light speed. Standing on Earth, all observers see the same
motion of objects because speed differences among observers are too small.
If one observer were in the outer space traveling at velocities closer to light
speed, he would see the motion of objects differently compared to what an
observer standing on Earth would see.
γ2 (3): The speed of light appears the same to all observers (p. 39).

γ2 (4): E = mc², where E is energy, m is mass, and the constant c is the speed
of light. This is considered to be the most famous equation in the history
of science.

The implicit assumption of gravity theory was that time is absolute.


In special relativity theory, it is assumed that each observer has his own
measure of time as if recorded by a clock he carried; that is, clocks carried
by different observers would not agree if they were moving at different
speeds. The speed of light, however, appears the same to every observer
no matter how fast he is moving.
The results of empirical refutation of relativity theory can be shown as
follows:

γ2(1) = O(3): Light does indeed seem to travel faster than anything else.
Some simple examples can be mentioned. In a thunderstorm, one sees the
lightning before hearing the thunder. In a baseball stadium, when a batter
hits the ball, one sees the ball being hit before hearing the sound.
γ2(2) ≈ O(?): No information is available.
γ2(3) = O(4): The Michelson-Morley experiment (p. 39).
γ2(4) = O(5): The explosion of atomic bombs.

Thus, available facts do not refute special relativity theory.


Gravity theory and special relativity theory are in conflict regarding the
speed of light; thus, they both cannot be true. “[Gravity theory] said that
objects attracted each other with a force that depended on the distance
between them. This meant that if one moved one of the objects, the force
on the other one would change instantaneously. Or in other words, gravi-
tational effects should travel with infinite velocity, instead of at or below
the speed of light” (p. 39).1 Thus, in terms of γ propositions, we get
γ2(1) ≠ γ1(2). The empirical observation O(3), that nothing travels faster
than light, refutes gravity theory, but not special relativity theory.

General Relativity Theory


Einstein’s second theory is general relativity, which can be stated as follows:

ϕ3: Gravitation results from distortions in space-time geometry.2 Gravity is
not a force like other forces, but is a consequence of the fact that space-time
is not flat (as had been previously assumed): it is curved by the distribution
of mass and energy in it. Bodies always follow straight lines in four-dimensional
space-time, but they appear to us to move along curved paths in our
three-dimensional space (p. 40).

The empirical predictions of the theory ϕ3 are the following:

γ3(1): Planets follow elliptical paths around the sun (p. 40).


γ3(2): Light should be bent by gravitational force. For example, light cones
near the sun would be slightly bent inwards on account of the mass of the
sun (p. 42).
γ3(3): Time should appear to run slower near a massive body like the Earth
(p. 43).
γ3(4): The universe had a beginning of time, a big bang singularity (ibid,
p. 44).
γ3(5): Because the speed of the universe’s expansion is higher than the pull
of gravity, the universe is expanding.

On the empirical refutation of the predictions of this theory, the
observations are:

γ3(1) = O(6): Orbits measured by radar agree with the theory. “In fact, the
orbits predicted by general relativity theory are almost exactly the same as those
predicted by the Newtonian theory of gravity [γ1(1) = γ3(1)]. However, in
the case of planet Mercury, which, being the nearest planet to the sun, feels the
strongest gravitational effect, general relativity theory predicts a small deviation
from Newtonian predictions. This effect was noticed before 1915, and served
as one of the confirmations of Einstein’s theory” (pp. 40–42).
γ3(2) = O(7): Observing an eclipse from West Africa in 1919, a British
expedition showed that light was indeed deflected by the sun. The light
deflection has been accurately confirmed by a number of later observations
(p. 42).
γ3(3) = O(8): Experiments have corroborated this prediction (p. 43).
γ3(4) = O(9): Hubble’s finding in 1929 that the universe is expanding.
γ3(5) = O(9): Hubble’s finding in 1929 that the universe is expanding;
the distance between the galaxies is growing over time.

Hence, available empirical observations do not refute the predictions of
general relativity theory.

Quantum Theory
The general theory of relativity deals with the large objects of the universe.
Phenomena on extremely small scales are studied by quantum mechanics
theory. The use of better-quality “microscopes” (not similar to regular
microscopes, because these instruments allow us to see molecules, atoms,
and smaller particles, and because some of them are very large instruments)
has generated more empirical observations.
For a long time, it was assumed that matter is composed of atoms. The
atom was the ultimate unobservable element in physics and assumptions
were made about its composition and behavior. By the early 1930s, atoms
became observable. We now know that atoms are constituted by a nucleus
(containing proton and neutron) and electrons. In turn, what are these
elements made of? By 1968, it was observed that the nucleus is made of
even smaller elements, called quarks. What are quarks and electrons made
of? Today, they are still unobservable. Some physicists assume that the
ultimate element is something called a string.
Quantum theory can be stated as follows:

ϕ4: Particles do not have exactly defined positions and speeds. The universe
is an uncertain place, governed by chance, when examined in smaller and
smaller distances and shorter and shorter time scales (the subatomic world).

An empirical prediction of theory ϕ4 is the following:

γ4(1): Observations about the position and velocity of objects in the subatomic
world are uncertain. This is Heisenberg’s uncertainty principle. This principle
says that “one can never be exactly sure of both the position and the velocity
of a particle; the more accurately one knows the one, the less accurately one
can know the other” (p. 243).
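Hawking’s formulation quoted above is qualitative. The standard quantitative form of the principle, which is not given in the text, bounds the product of the two uncertainties by the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant; the more accurately the position is known (smaller Δx), the larger Δp must be.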

Quantum mechanics does not predict a single definite outcome from
an observation, but a number of possible outcomes, and tells us how likely
each of these is; therefore, it introduces an unavoidable element of
unpredictability or randomness into physics (p. 73).
As to the empirical refutation of the theory, we have the following
statement:

γ4(1) = O(9): It agrees perfectly with experiments (p. 73).



The Search for the Theory of Everything


According to the general theory of relativity, the universe is a smooth
spatial geometry on macroscopic scales, but according to quantum
mechanics, it is a chaotic arena on microscopic scales. Can these two
theories both be valid? “Today scientists describe the universe in terms of two basic
partial theories—the general theory of relativity and quantum mechan-
ics…. Unfortunately, however, these two theories are known to be incon-
sistent with each other—they cannot both be correct” (p. 18). The smooth
spatial geometry of general relativity breaks down on short distance scales
due to the violent fluctuations of particles of quantum mechanics.
How could this happen? How is it that there is order at the large scale
but disorder at the small scale? Can order be generated out of disorder?
How could this contradiction be solved? A unified theory of the forces of
nature is needed, forces that are independent of the scale of objects, from
the small distances within the atomic world to the largest distances in the
vast universe. This is the most challenging question in today’s physics: to
find a unified theory of physics, the theory of everything.

Evolutionary Biology
Biologists too have recently written scientific books for a broad audience.
The introductory book by Ernst Mayr (1997), the late Harvard University
professor, is one of the most popular. Other authors include Casti (2001),
Smith (2002), and Pasternak (2003). These works will be used here to present
the structure of the discipline.
There are two fields in biology that are quite different in the nature of
their knowledge. Functional biology—molecular biology—is more like
physics. The other field is evolutionary biology, which studies the interactions
between populations. This is the field that will be compared with the social
sciences. According to Ernst Mayr, biology is, like physics, a science,
but biology is not a science like physics; biology is an autonomous science
(Mayr 1997, pp. 32–33).
The scope of evolutionary biology is the study of the interactions
between vast numbers of organisms, each of them in itself of enormous
complexity. This is similar to the complex social world studied by econom-
ics. Therefore, in principle, the alpha-beta method would be applicable
to evolutionary biology. The rule of scientific knowledge would be the
construction of abstract worlds, or theories, to explain the behavior of
these organisms and the biological world. The empirical predictions of the
theories would then be confronted with empirical facts.
Given the complexity of the biological world, scientific theories may
need to be presented in the form of theoretical models. Indeed, math-
ematical models are utilized in biology. The reason is that equations—the
reduced form equations—will allow the biologist to predict behavior. The
prediction is, however, mostly qualitative. Precise numerical fit is usually
too much to hope for because in any model so much is left out. What is
the justification to leave out of a model something that surely affects the
outcome? Biologist John Maynard Smith explains as follows: First, if it
were important, the model would not give the right predictions, even
qualitatively; second, if we try to put everything into a model, it will be useless.
So, in biology, only rather simple models are useful. The price that biologists
must pay for this simplification is the lack of quantitative accuracy in
their predictions (Smith 2002, pp. 196–197).
Evolutionary biology can be represented as an evolutionary process.
The repetition of the process is non-mechanical, but subject to qualitative
changes as the process is repeated. Endogenous and exogenous variables
are also distinguishable. The exogenous variables refer mostly to different
biophysical environments or niches. Because an evolutionary process can
be represented by a sequence of dynamic processes, in evolutionary biol-
ogy the endogenous variables will show regime switches over time even if
the exogenous variables remain fixed.
The theory of biological evolution can then be restated in terms of
alpha-beta propositions, as follows:

α: Theory of evolution by natural selection. Evolutionary change occurs


because certain characteristics of individuals are better suited to the current
environmental circumstances of a species than are others. The mechanism
is natural selection, which includes genetic variation and fitness for survival
and reproduction (Mayr 1997, pp. 38–39, 188–189).

In any given environment, therefore, some kinds of individuals are


more likely to survive and reproduce than others. Evolution implies
qualitative changes in the composition of individuals in the population.
This is a theory of individual selection, not of species (Smith 2002,
p. 198).
The empirical predictions that can be derived from this theory include
the following:

β(1) Hypothesis of heredity. Offspring constitute a random sample of the


characteristics of their parents. Innate characteristics of individuals are
inherited, but acquired characteristics are not. When individuals repro-
duce, they pass their characteristics on to their offspring; the result is a
population consisting of individuals with characteristics that make for and
against survival.
β(2) Hypothesis of the common descent. Living organisms have evolved from
a common descent. In terms of their origin, living organisms constitute a
single family tree. Animals and plants of more recent geological periods are
descendants of those from older geological periods. There is a historical
sequence in the existence of living organisms.
β(3) Hypothesis of the survival of the fittest. Changes in the physical environ-
ment will cause a non-random elimination of some individuals. Those types
of individuals that have the fittest traits in the competition for the scarce
resources of the environment will prevail. This is the Darwinian
competition.
β(4) Hypothesis of the multiplication of species. The same species living in dif-
ferent physical environments will become different species. Separations by a
mountain range or an arm of a sea or ecological discontinuity will generate
this result. Alternatively, the same species in different physical environments
will show different traits.
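The selection mechanism stated in the alpha proposition and hypotheses β(1) and β(3) can be illustrated with a toy simulation. The survival probabilities, population size, and number of generations below are hypothetical, chosen only to show the qualitative prediction: non-random elimination shifts the composition of the population toward the fitter type:

```python
import random

random.seed(42)  # make the toy run reproducible

# Two heritable types with hypothetical survival probabilities:
# β(1): offspring inherit the parent's type; β(3): non-random elimination.
SURVIVAL = {"fit": 0.9, "unfit": 0.5}

def next_generation(population, size=1000):
    """Non-random elimination of individuals, then reproduction with heredity."""
    survivors = [ind for ind in population if random.random() < SURVIVAL[ind]]
    return [random.choice(survivors) for _ in range(size)]

population = ["fit"] * 500 + ["unfit"] * 500
for _ in range(10):
    population = next_generation(population)

share_fit = population.count("fit") / len(population)
# Selection acts on individuals, yet the observable result is a qualitative
# change in the composition of the population: the fitter type predominates.
assert share_fit > 0.8
```

Note that nothing in the sketch selects at the level of the species; only individual survival probabilities differ, which is the point of the individual-selection reading of the theory.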

Empirical evidence does not seem to refute these predictions. For
instance, regarding beta proposition (1), biologist John Maynard Smith
states: “The theory that the mechanism of evolution is natural selection
could be disproved if it could be shown that offspring do not resemble
their parents or that acquired characteristics are often inherited” (Smith
2002, p. 211). On beta proposition (2), the observed sequence of fossils
found in the Earth’s strata is consistent with the prediction. The theory
would have been refuted if fossil elephants and giraffes had been found in
strata from the early Cretaceous period (the age of dinosaurs).

Comparing Natural Sciences and Social Sciences


Comparisons about differences in epistemology between natural sci-
ences and social sciences can be made on three accounts: the existence
of exogenous variables, the ontological universalism, and measurement
problems.

The Existence of Exogenous Variables


In physics, endogenous variables and the underlying mechanisms connect-
ing them can be identified, but exogenous variables cannot. It seems that
there is nothing from outside the physical world that can enter into the
system, change independently (exogenously), and be able to move the uni-
verse from one equilibrium situation to another. There are no exogenous
variables in physics. Therefore, the physical world cannot be represented in
the form of an abstract process diagram, as shown in Fig. 1.1, Chap. 1.
However, scientific theories in physics can generate empirical predic-
tions, as shown earlier. It should be noted that these predictions refer to
equilibrium conditions of the bodies, be this static, dynamic, or evolu-
tionary equilibrium. The relationships between material objects are about
equilibrium conditions, which become physical laws because the relation-
ships do not depend upon the values of exogenous variables, for there are
no exogenous variables that could change them. Therefore, theoretical
physics can even predict the future, at least for those processes that are
mechanical. This is contrary to the limitation of economics to forecast
the future due to the existence of exogenous variables, as shown above
(Chap. 8).
The set of propositions (α, β) used in economics has a different content
relative to the set of propositions (ϕ, γ) that were used here for physics. Beta
propositions refer to the effect of exogenous variables upon endogenous
variables, which is not the case with gamma propositions. Even the nature
of empirical falsification is different. In physics, the expression γ ≈ O refers
to the question of consistency between the predictions of the theory about
equilibrium conditions alone and a set of empirical observations; by
contrast, in economics, β ≈ b refers mostly to the consistency between facts
and the predictions of the theory about causality relations, the effect of
changes in exogenous variables upon changes in endogenous variables.
Because scientific theories in economics and physics are both falsifiable,
we may conclude that Popperian epistemology is applicable to both
sciences. However, these sciences use different epistemologies. The rules of
scientific research in physics follow directly from Popperian epistemology.
In the case of economics, it was necessary to develop the alpha-beta
method, containing the rules of scientific research, which was derived
from the composite epistemology, a combination of the epistemologies of
Karl Popper and Nicholas Georgescu-Roegen. Popperian epistemology
alone was not sufficient.

Indeed, the use of a method based directly on Popperian epistemology
would explain the relatively rapid growth of knowledge in physics.
Falsificationism in physics is simpler, and measurement instruments are
more accurate, and ever more so, which has meant scientific progress through
the funeral of some theories. For example, as shown above, the gravity
theory of Newton reigned in physics for two centuries, until Einstein’s
special relativity theory appeared in 1905. Falsification in economics is more
complex than in physics. The alpha-beta method ensures that by construc-
tion economic theories are falsifiable; however, the nature of falsification in
economics calls for attention to disturbing factors in the statistical testing
stage, as shown above (Chap. 7).
A theory of everything is possible in physics, but not in economics. Since
the economic process has boundaries, the existence of exogenous vari-
ables is unavoidable; moreover, exogenous variables cannot be explained
without falling into the logical problem of infinite regress. If an
exogenous variable is to be explained, a new economic theory would be
needed, which will have to assume a new exogenous variable; if this latter
exogenous variable is to be explained, another theory would be needed,
which will have to assume another exogenous variable, and so on. Hence,
not everything can be explained in economics. Due to the existence of
exogenous variables, economics is a science, but not like physics. It is a
different science, a more complex one.
In evolutionary biology, it is clear that the relationships among living
organisms may be represented in process form, as in the case of human
societies. This similarity should not be surprising, for the human species is part
of the biological world. The conditions for reduction to an abstract
process are met: there is repetition in these relationships, and there are
exogenous variables, which affect the endogenous variables and thus
the equilibrium situation of these relationships. The alpha-beta method is
therefore fully applicable to the study of evolutionary biology. Economics
is more like evolutionary biology.
In sum, the alpha-beta method developed in this book provides us with
rules for scientific research in the study of complex realities. Therefore, the
alpha-beta method is applicable to economics and evolutionary biology,
for human societies are biological species. Compared to physics, these are
complex sciences, as they deal with complex social and biological worlds.
It is then understandable why the biologist Edward Wilson includes
physics and chemistry, but not biology, in the group of less complex
natural sciences, as shown in the preface.
140  A. Figueroa

Ontological Universalism
The next comparison between economics and the natural sciences refers to
the question of ontological universalism—the assumption of one unitary
world. Economics considers the world society as its “universe.” In physics
and biology, the relevant comparable universe is the planet Earth.
The deviations between Einstein’s theories and Newton’s theory are
extremely small in the slow-velocity world we humans typically inhabit. If
you throw a baseball, Newton's and Einstein's theories can both be used to
predict where it will land; the answers will be different, but the difference
will be so slight that it is beyond our capacity to detect it experimen-
tally (Greene 2003, p. 76). Newton's theory is not valid for all space and
time, but Einstein's is. In the limited world of the planet Earth, however,
Newton's and Einstein's theories can both be right. Moreover, when consid-
ering the reality of the planet Earth alone, Newton's theory is finer and
simpler than Einstein's.
If the planet Earth could be separated into segments using any cri-
terion, these segments would not be different in the sense that gravity
theory would explain the relationships between objects, no matter how
those segments were created. Atoms would behave exactly in the same
manner everywhere in the planet Earth; there would not exist a different
physics for the South (poor regions) and for the North (rich regions). The
principle of ontological universalism then applies to physics on planet
Earth, and gravity theory is the general theory.
In evolutionary biology, the planet Earth can be separated into differ-
ent biophysical environments, which constitute the main exogenous variable. In
each environment, living organisms are expected to behave differently. We
know that plants behave differently in the tropics compared to the temper-
ate zones, or in mountains compared to sea-level environments. Separate
biological realities imply the construction of partial theories to explain
those specific realities. The question of the unity of knowledge and the need
for a unified theory then arise in biology.
For example, plant physiology is different in the Andes compared to
the American plains. Why? In the temperate zones, plants have adapted to
an environment in which major changes in temperature take place between
seasons over the year. In the Andes, plants had to adapt to an environ-
ment in which major changes in temperature take place within a single day.
In both zones, however, the theory of photosynthesis applies. Therefore,
photosynthesis is the unified theory of plant behavior.
In economics, the planet Earth can be separated into different social
environment spaces. It can also be separated into different time spaces.
The economic process takes place in different social and biophysical envi-
ronments. It is expected that societies behave differently according to
their specific environments, as proposed above (Chap. 4). People adapt
to these environments. Social environments can change exogenously, due
to foundational and re-foundational shocks, such as a conquest or a war, or
endogenously, due to evolutionary changes. Therefore, partial theories will
be needed to explain societies living in particular biophysical and social
environments and thus a unified theory will also be needed to explain
the functioning of a type of human society taken as a whole. An exam-
ple would be to study the capitalist system of today using partial theories
to explain the functioning of the rich countries and the poor countries
taken separately and then to explain the capitalist system as a whole, which
would require a unified theory (e.g. Figueroa 2015).
The ontology of universalism—one world reality and one theory—is
applicable to physics, but not to biology or economics. Economics is,
again, more like biology than physics.

The Nature of Measurement


The progress of physics relative to other sciences is, to a large extent,
due to the nature of the physical world, as said above; however, it is also
due to the innovations in measurement instruments. Most of the growth
of knowledge in physics has possibly come from innovations in the instru-
ments of measurement. New and better measurement instruments lead to
new facts, which can help in the falsification process of scientific theories,
as shown earlier. Possibly the same holds for measurement
instruments in evolutionary biology.
Improvements in measurement instruments are harder to achieve in
economics. First, most variables (endogenous and exogenous) are socially
constructed facts, as was shown above (Chap. 6). Money, real income,
poverty, inequality, unemployment, social class, ethnicity, market power,
and democracy are important variables in the economic process of capitalist
societies and yet they all are socially constructed facts. Second, the instru-
ments of measurement are not as developed as in the natural sciences.
Production and distribution in capitalist societies are still measured by
applying surveys to firms and households and by using government offi-
cial statistics, not by direct measure of the economic process. The data
set so constructed measures not only behavior, but also social actors'
self-declarations and opinions, which very likely introduce significant bias
and distortion relative to the true data, depending on the incentives of the
information suppliers.
A simple H-hypothesis that can be put forward here is that prog-
ress in physics is due essentially to the innovations in the instruments of
measurement. Think of the Hubble Space Telescope of today compared to
the telescope Galileo utilized in 1609. Telescopes, microscopes, and spec-
troscopes all have gone through continuous progress and sophistication,
which have certainly made falsification of theories more decisive and thus
made scientific progress more rapid in physics. This hypothesis is different
from Kuhn's, in which paradigm changes depend mostly on sociological
factors, such as the behavior of the community of scientists (Kuhn 1970).
Such a process of innovation in measurement instruments has not hap-
pened in economics. Production and distribution are still measured using
imperfect instruments, which have not shown significant innovations. The
popular GDP (gross domestic product) is based partly on hard data (actual
output) and partly on soft data (responses of social actors to question-
naires about sales, inventories, employment, incomes, etc.). However, the
soft data collection implies a more fundamental problem: the attempt to
collect data is an interference and changes people’s behavior in unknown
directions. This is, in part, similar to the Heisenberg principle. As shown
earlier, the Heisenberg principle, also known as the uncertainty principle,
refers to a measurement problem in physics. According to this principle,
one can never be exactly sure of both the position and velocity of a particle
because the instrument used for measuring (to shine light on the particle)
will disturb the particle behavior; that is, the particle will change its posi-
tion or velocity in a way that cannot be predicted.
In economics, in addition to the Heisenberg problem, data collection
based on what people say has another uncertainty problem, which rests on
the fact that social actors can supply the real data or lie, according to the
incentives they face; thus, the quality of measurement of the economic
process is greatly affected in unknown directions as well. Soft data in eco-
nomics is thus endogenous, as it depends upon the incentive system facing
social actors to supply information. Consider the problem of measuring
corporate profits, when firms can decide how much profit to declare and
in which country to declare it, depending on the incentives created by the
corporate tax legislation of different countries.
As said before, economics relies on natural experiments for empirical
data, rather than on controlled experiments. On this problem, economics
is like astronomy. However, no telescopes have been invented to observe
the economic process of production and distribution. Hence, in econom-
ics, and contrary to physics, economic theories have not been challenged
by innovations in measurement instruments. Problems of measurement
explain, at least in part, why economic theories tend to be immortal. If
production and distribution of societies could be measured through sat-
ellite instruments, for example, economics would be able to make more
rapid progress in scientific knowledge. This conclusion also applies to the
social sciences in general.
Statistical analysis techniques in economics (econometrics) have gone
through significant progress, but those sophisticated techniques are still
applied to very imperfectly constructed databases. Sophisticated econo-
metrics cannot substitute for poor-quality data. As the saying goes in
statistical testing: "garbage in, garbage out." Scientific progress in economics will
thus depend to a large extent upon the innovations in the instruments of
measurement of the economic process.
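The force of "garbage in, garbage out" can be seen in a minimal simulation (the data and all numbers below are hypothetical, constructed only for illustration): when a regressor is measured with error, as survey-based data typically are, the estimated effect is biased toward zero, no matter how large the sample or how sophisticated the estimator.

```python
# A minimal sketch of classical measurement error ("garbage in, garbage out"):
# noise in a regressor attenuates the estimated effect toward zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)             # the variable we would like to measure
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)   # true effect of x on y is 2.0

def ols_slope(x, y):
    """Slope of the least-squares line of y on x: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

slope_clean = ols_slope(x_true, y)

# Survey-style data: x observed with error (misreporting, recall error, etc.),
# here with noise variance equal to the signal variance.
x_noisy = x_true + rng.normal(0.0, 1.0, n)
slope_noisy = ols_slope(x_noisy, y)

# Expected attenuation factor: var(x) / (var(x) + var(noise)) = 1/2.
print(f"slope with clean data: {slope_clean:.2f}")   # close to 2.0
print(f"slope with noisy data: {slope_noisy:.2f}")   # close to 1.0
```

With equal signal and noise variances, the expected estimate is half the true effect; a larger sample only makes the biased estimate more precise. Better econometrics cannot undo this; only better measurement can.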
In sum, if the growth of scientific knowledge in economics and the
social sciences does not appear as great as in physics or biology, it is due, in
part, to the complex nature of the social world. This complexity implies
greater epistemological challenges, including the problem of falsi-
fying scientific theories using imperfect measurement instruments.
The alpha-beta method is a contribution to ensuring falsification of scientific
theories in economics, but economics must still deal with the problem of
measurement.

Notes
1. Another example: According to gravity theory, if the sun exploded, the
earth would instantaneously suffer a departure from its usual elliptic orbit.
However, according to special relativity theory this effect would not hap-
pen instantaneously, for no information can be transmitted faster than the
speed of light; hence, the effect would not be felt on the earth at once. It would
take eight minutes, which is the time light takes to travel from the sun to the
earth (Greene 2003, p. 354).
2. Space-time is the four-dimensional description of the universe, uniting the
three space dimensions and the single time dimension.
CHAPTER 10

Conclusions

Abstract The social sciences are more complex than physics, the exemplar
of sciences. Therefore, economics and the other social sciences require
a more sophisticated epistemology than physics. The book proposes a
composite epistemology, the combination of the epistemologies of Karl
Popper and Nicholas Georgescu-Roegen, to meet this epistemological
need of economics. Then, the alpha-beta method is logically derived
from the composite epistemology and contains operational rules for sci-
entific research, the use of which will lead to a Darwinian competition
of economic theories. By construction, economics can now be seen as a
critical science. The use of the alpha-beta method will lead to enhancing
the quality of learning, teaching, and research in economics. This is the
expected contribution of the book.

The social world is much more complex than the physical world. Therefore,
economics requires a more sophisticated epistemology than physics. This
book has proposed the alpha-beta method, which contains a set of rules
for scientific research in economics. The method is constructed in such a
way that economic theories are necessarily falsifiable. Therefore, the usual
claim that economic theories are rarely falsifiable, which has led to the
coexistence of many economic theories, can now be ruled out. The sci-
entific progress of economics is now viable through the Darwinian evolu-
tionary process, in which selection of good theories and elimination of bad
theories can be practiced.


The alpha-beta method has been derived from the composite epistemol-
ogy, the combination of the epistemologies of Nicholas Georgescu-Roegen
and Karl Popper. The Popperian epistemology alone could rarely lead to
falsification in economics. It was necessary to solve the problem of trans-
forming a complex real world into an abstract and simpler world by means
of a scientific theory, which generates at the same time causality relations.
The process epistemology of Georgescu-Roegen has been used to solve
this problem. Therefore, the alpha-beta method has epistemological jus-
tification, and under this method economic theories will be, by con-
struction, always falsifiable. The development of the alpha-beta method is
presented as the main contribution of this book.
Scientific knowledge seeks to explain and understand reality and it seeks
to be error-free knowledge. The aim of economics is to establish causality
relations—what causes what. The alpha-beta method allows us to reach
this objective in economics. Alpha propositions constitute the assump-
tions of the scientific theory, the construction of the abstract world, which
intends to be a good approximation of the complex real world. Beta prop-
ositions are derived from alpha propositions, show causality relations, and
are, by construction, empirically falsifiable. Therefore, we have an episte-
mological criterion to accept or reject economic theories. Science is epis-
temology. Hence, the alpha-beta method is a scientific research method.
The rules of the alpha-beta method can be summarized as follows:

(A) The complex real world must be transformed into a simpler abstract
world by means of a scientific theory. A scientific theory is a set of
assumptions, which are called the alpha propositions; thus, a scientific
theory is a set of alpha propositions. The alpha propositions are
unobservable, as they refer to the underlying factors operating in the
workings of the real world. Thus, a scientific theory seeks to uncover
the underlying factors in the social facts that we observe.
(B) The alpha propositions must be able to generate, by logical deduc-
tion, observable propositions, which are called beta propositions.
Beta propositions are, by logical construction, empirically falsifiable
or refutable. Moreover, beta propositions contain the causality rela-
tions between the exogenous and endogenous variables established
by the scientific theory.
(C) Beta propositions are tested statistically against facts of the reality
under study. If facts refute the beta propositions, then the scientific
theory is rejected as a good approximation of this reality. If facts cannot
refute the beta propositions, the scientific theory is accepted as a
good approximation of reality; therefore, the theory explains this
reality and the causality relations of the theory have been empirically
corroborated.
(D) The alpha-beta method applies to economics. An economic theory is
a set of alpha propositions. Beta propositions are tested statistically
against empirical data and thus make economic theories always falsifi-
able. Accepting or rejecting economic theories constitutes a complex
endeavor, as the testing involves not only the assumptions of the eco-
nomic theory, but also includes assumptions about both statistical
instruments and measurement of facts.
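Rule (C), the statistical testing step, can be sketched in code. Everything in the sketch is a hypothetical illustration: the variables, the data-generating assumption, and the choice of a Spearman rank test are examples, not prescriptions of the method itself.

```python
# A sketch of testing a beta proposition. Suppose a theory predicts that an
# exogenous variable x raises an endogenous variable y (a positive causal sign).
# The predicted sign is tested with a Spearman rank correlation, which does not
# assume normally distributed data.
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 500
x = rng.exponential(1.0, n)               # non-normal exogenous variable
y = 0.8 * x + rng.exponential(1.0, n)     # a "world" consistent with the theory

rho = spearman_rho(x, y)
print(f"Spearman rho: {rho:.2f}")

# Crude check on the predicted sign: under independence, rho has a standard
# error of roughly 1/sqrt(n - 1), so a rho several standard errors above zero
# means the data fail to refute the beta proposition.
z = rho * np.sqrt(n - 1)
print("beta proposition refuted" if z < 2.0 else "beta proposition not refuted")
```

A non-parametric statistic is used deliberately here: as argued later in this chapter, economic data need not satisfy the assumptions of parametric statistics, and parametric tests may then reject theories mistakenly.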

In economics, when the economic theory is accepted for a given reality,
it is so only for this reality. The theory may fail in explaining another reality.
The statement “If an economic theory explains a single reality, then it
explains every reality” is a fallacy. Ontological universalism does not apply
to human societies. Even in the case that the economic theory explains
reality, it is accepted only provisionally, until new facts, new testing instru-
ments, or new superior theories appear. The reason is that, through falsi-
fication, scientific theories can be proven to be false, but not to be true,
only to be consistent with facts.
According to the alpha-beta method, scientific theories are constructed
to be destroyed, not to be protected. This is consistent with the prin-
ciple of falsification. Those theories that survive the falsification process
are the good ones and are accepted, whereas the bad ones are eliminated.
Scientific theories operate under the Darwinian evolutionary type of com-
petition, which leads to the progress of science. The use of the alpha-beta
method should lead to the progress of economics. The alpha-beta method
makes economics a critical science, for the criteria to accept or reject theo-
ries have epistemological justification.
The idea that explanation, understanding, and causality of the real
world can be obtained without scientific theory, by using statistical or
econometric analysis alone, is a logical impossibility. The statement "the
existence of statistical correlation implies causality” is a fallacy. This is so
no matter how sophisticated the statistical or econometric analysis is or
how large the sample size is. The underlying reason is that there is no
logical route from facts to scientific theory and causality. The only logical
route is the one from scientific theory to causality, which constitutes the
logical justification of the alpha-beta method. It is in this sense that we
148 A. FIGUEROA

can speak about the logic of scientific knowledge; that is, science is the
logical way, the rational way, to establish causality. The statement "Science
is measurement” is incomplete, for measurements alone, facts alone, are
not conducive to causality.
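The fallacy can be made concrete with a small simulation (the data are hypothetical): a hidden common cause makes two variables strongly correlated even though neither causes the other, and no sample size removes the spurious association.

```python
# Correlation without causation: a hidden variable z drives both a and b,
# so a and b are strongly correlated although there is no causal link between them.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

z = rng.normal(size=n)        # unobserved common cause
a = z + rng.normal(size=n)    # a responds to z, not to b
b = z + rng.normal(size=n)    # b responds to z, not to a

r = np.corrcoef(a, b)[0, 1]
print(f"corr(a, b): {r:.2f}")  # close to 0.5 despite no causal relation a <-> b
```

A regression of b on a would "find" a significant coefficient in any sample size; only a scientific theory that models the common cause z can tell this correlation apart from a causal relation.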
The alpha-beta method is conducive to scientific knowledge because it
is a scientific research method; however, its application requires the avail-
ability of both theory and data set. When theory or data is not available,
other empirical research methods can be applied, which are conducive to
pre-scientific knowledge. The research strategies to go from pre-scientific
knowledge to scientific knowledge are also presented in the book.
Although the book has dealt with economics, it has also shown that
the application of the alpha-beta method can be extended to the other
social sciences. The basic reason is epistemological: Any research ques-
tion about the complex social world can be answered by reducing it to
an abstract and simpler social world, which leads to the use of the alpha-
beta method.
Regarding the comparison of economics with the natural sciences, the
alpha-beta method is not applicable to physics. However, it is applicable
to evolutionary biology. This is so because human societies, seen as human
species, are instances of biological species. The social world and the bio-
logical world are thus similar—complex realities. This is not the case for
physics, which thus uses other rules for the falsification of scientific theo-
ries. Hence, the book concludes that economics is, like physics, a science,
but economics is not a science like physics. Economics is more like evolu-
tionary biology. For one thing, the principle of ontological universalism is
valid in physics, but it is not in economics and biology.
As to the question posed at the beginning of the book—why the growth
of scientific knowledge in the social sciences has proceeded at a lower rate
than in physics—the book has shown that the answer lies in the nature of
these sciences. The social world is much more complex than the physi-
cal world; hence, the social sciences are more complex than physics, the
exemplar of sciences. For one thing, the atom is homogeneous, but people
are diverse. Scientific knowledge in the social sciences is therefore more
demanding on epistemology, more epistemology-intensive, than in phys-
ics. The implication is that a more complex reality would require a more
complex epistemology. The alpha-beta method is constructed to meet
this epistemological need.
The book has shown that although the principle of falsification is
applicable to economics, it is a very involved task. First, facts in economics
include physical categories and socially constructed categories, which are
not easy to measure. Second, the measurement instruments are imper-
fect. Falsification in economics refers to observed facts about the behav-
ior of people. However, most economic data are collected using surveys,
interviewing people. Economic data so collected refer to opinions of
people, who report what they do, which need not be what they in fact
do. Instruments similar to telescopes or microscopes do not exist in
economics. Finally, the way economic data are generated does not neces-
sarily conform to the assumptions of parametric statistics. Therefore, the
use of parametric statistics in the falsification process of scientific theories
may lead to the mistaken rejection of scientific theories. The use of
non-parametric statistics is safer in the falsification process, but its domain
of testing is more limited. The alpha-beta method thus provides ways
to deal with the problems of accepting or rejecting economic theories.
The problems of factual data quality and measurement instruments do
not appear in physics. On the contrary, the discovery of new measurement
instruments seems to underlie the observed progress of physics. If the
growth of scientific knowledge in economics and the social sciences in gen-
eral does not appear as great as in physics, it is, in part, due to the complex
nature of the social world, and in part due to the problem of testing sci-
entific theories using imperfect statistical and measurement instruments.
Therefore, progress in economics will come from the introduction of inno-
vations in epistemology, such as using the alpha-beta method more inten-
sively or developing superior new methods, new non-parametric statistical
testing instruments, and new measurement instruments.
It is a fact that much of the research work in economics and the social
sciences in general does not follow the scientific research rules developed
in this book. If economists had followed these rules, we would not have
ended up with the current coexistence of all possible economic theories
developed over time. Economic theories have not been subject to the
Darwinian evolutionary competition. This fact is certainly unfortunate
and may indeed explain the relatively low growth of scientific knowledge
in economics and the social sciences in general; however, and precisely
because of this fact, the potential for more rapid progress of scientific
knowledge in economics is very high. The use of the alpha-beta method
will lead to enhancing the quality of learning, teaching, and research in
economics. This is the expected contribution of the book.
BIBLIOGRAPHY

Casti, J. (2001). Paradigms regained: A further exploration in the mysteries of modern
science. New York: Perennial.
Figueroa, A. (2015). Growth, employment, inequality, and the environment: Unity
of knowledge in economics. New York: Palgrave Macmillan.
Freund, J., & Simon, G. (1992). Modern elementary statistics. Englewood Cliffs:
Prentice Hall.
Georgescu-Roegen, N. (1971). The entropy law and the economic process.
Cambridge, MA: Harvard University Press.
Greene, B. (2003). The elegant universe. New York: Vintage Books.
Haack, S. (2003). Defending science-within reason: Between scientism and cynicism.
Amherst: Prometheus Books.
Hausman, D. (2013). Philosophy of economics. The Stanford Encyclopedia of
Philosophy. http://plato.stanford.edu/archives/win2013/entries/economics/
Hawking, S. (1996). A brief history of time: Updated and expanded edition.
New York: Bantam Books.
Hurley, P. (2008). A concise introduction to logic. Boston: Cengage Learning.
Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago: Chicago
University Press.
Li, Q., & Racine, J. S. (2006). Nonparametric econometrics: Theory and practice.
Princeton: Princeton University Press.
Mayr, E. (1997). This is biology. Cambridge, MA: Harvard University Press.
Neuman, L. (2003). Social research methods (5th ed.). Boston: Pearson Education.
North, D. (1990). Institutions, institutional change, and economic performance.
Cambridge, UK: Cambridge University Press.
Pagan, A., & Ullah, A. (1999). Nonparametric econometrics. Cambridge, UK:
Cambridge University Press.

Pasternak, C. (2003). Quest: The essence of humanity. Chichester: Wiley.
Popper, K. (1968). The logic of scientific discovery. London: Routledge.
Popper, K. (1976). The logic of the social sciences. In T. W. Adorno et al. (Eds.),
The positivist dispute in German sociology (pp. 87–104). New York: Harper &
Row.
Popper, K. (1985). The rationality principle. In D. Miller (Ed.), Popper selections
(pp. 357–365). Princeton: Princeton University Press.
Popper, K. (1993). Evolutionary epistemology. In M.  Goodman & R.  Snyder
(Eds.), Contemporary readings in epistemology (pp.  338–350). Englewood
Cliffs: Prentice Hall.
Reiss, J. (2013). Philosophy of economics: A contemporary introduction. New York:
Routledge.
Rosenberg, A. (2008). Philosophy of social science (3rd ed.). Boulder: Westview
Press.
Samuelson, P. (1947). Foundations of economic analysis (enlarged ed., 1983).
Cambridge, MA: Harvard University Press.
Searle, J. (1995). The construction of social reality. New York: The Free Press.
Smith, J. M. (2002). Equations of life. In G. Farmelo (Ed.), It must be beautiful:
Great equations of modern science (pp. 193–211). London: Granta Books.
Wilson, E. (1998). Consilience: The unity of knowledge. New York: Alfred Knopf.
Ziliak, S., & McCloskey, D.  N. (2008). The cult of statistical significance: How
standard error costs us jobs, justice, and lives. Ann Arbor: University of Michigan
Press.
INDEX

A assumptions, 95–6; justified by


abstract economic process, composite epistemology, 28;
construction of, 57, 59 and measurement instruments,
abstraction method 149; more complex than in
in alpha-beta method, 54 physics, 139; and Popper
in process epistemology, 10 epistemology, 139, 146; and
accepting or rejecting economic problem of identification, 64;
theories and the alpha beta- and realities without theory,
method, 147, 149 54–5
alpha-beta method set of rules for scientific research,
concept, 23–5, 34, 107, 111, 112, 117, 13, 15, 145
122–3, 125, 139, 143, 146–9 alpha propositions
derived from composite concept, 16
epistemology, 14, 22–3, 28, 99, non-tautological and unobservable,
117, 138, 146 17, 18, 25
in economics; economic theory primary assumptions of scientific
falsifiable by construction, 47; theory, 16
falsification through beta assumptions of scientific theory. See
propositions, 56; identification alpha propositions
problem, 96; method for auxiliary assumptions and model
accepting or rejecting economic construction, 55–7, 61
theories, 147, 149; resolves
Popperian epistemology
limitations, 138 B
falsification in economics under; beta-hypothesis, 72, 74, 76, 79, 80,
includes three sets of 82–4, 88, 93, 94, 105–7, 113

© The Editor(s) (if applicable) and The Author(s) 2016 153


A. Figueroa, Rules for Scientific Research in Economics,
DOI 10.1007/978-3-319-30542-4
154 INDEX

beta propositions needs scientific theory, 12, 27, 28,


causality relations, 20, 27, 37, 40, 88, 106, 107, 147
45, 52, 73, 76, 78, 88, 94, relations, 2, 11, 12, 15, 20, 27, 35,
100, 101, 119, 138, 146 37, 40, 43, 45, 52, 73–88, 94,
empirically falsifiable or refutable, 100, 101, 104, 107, 109, 110,
45, 146 119, 124, 138, 146, 147
empirical predictions of scientific statistical correlation, 119–20
theory, 19–20 See also beta propositions
logically derived from alpha chronological time, 59, 92
propositions, 19, 22, 24 composite epistemology
relations between exogenous and applicable to highly complex
endogenous variables, 26, 27 realities, 14
bio-economics, 52 assumptions, 13, 14, 146
biology and economics. See economics combining epistemologies of Popper
and biology and Georgescu-Roegen, 13–14,
biophysical environment in biology, 28, 99, 117–18
136, 140, 141 consistent with meta-assumptions of
biophysical environment in economic theory of knowledge, 6, 77
process operational via alpha-beta method,
degradation of (by human activity), 14, 15, 28
33 composition, fallacy of, 118
ecological niche of human species, 33 correlation coefficient
and evolutionary process, 38–41, derived from the regression line, 80
45, 136 as random variable, 81
and mechanical process, 37, 38, and regression coefficient, 81, 82
40, 59 in Spearman test, 86, 87
corroborated vs. verified theories, 27
critical science and Popperian
C epistemology, 9, 145
capitalism critical science, economics as, 147
definition, 21, 51 and alpha-beta method, 147
economic theory of, 30, 34
fundamental institutions of, 31,
33, 52 D
cardinal variable, 92–3 Darwinian competition for scarce
Casti, J., 135 resources in biology, 137
causality Darwinian competition of scientific
causality matrix, 27 theories
different from statistical correlation, and alpha-beta method, 149
119–20 in economics and social sciences, 97
fallacy of, 119 and evolutionary epistemology, 8,
matrix of beta propositions, 26, 27 145, 147, 149
INDEX 155

deductive logic
  in composite epistemology, 9, 100, 101
  in falsification epistemology, 9
deductivism, 100, 101, 112, 122–3
deterministic economic process, 41
dynamic economic process, 35, 37, 43
dynamic equilibrium
  concept of, 36–37
  temporal in evolutionary process, 38–40, 84, 126
  and time t, 39, 82
dynamic model, 73, 82–3
dynamic process, 35, 37–9, 43, 52, 53, 59, 65, 84, 126, 136

E
econometrics, 63, 87, 107, 121, 143, 147
economic process, concept, 30
economic process, equilibrium of
  not normative concept, 44
  social equilibrium concept, 44
  stability, 37, 53
economic process, structure of
  reduced-form relations, 56
  structural relations, 12
economic process, types of
  deterministic/stochastic, 41–3
  evolutionary, 37, 38, 43
  mechanical; dynamic, 82–4; static, 74–80
economics and biology
  epistemological differences/similarities, 135–7
  ontological universalism, 49–52, 140–1
economics and physics
  differences in measurement instruments, 91, 92, 95, 141–3, 149
  epistemological differences/similarities, 9, 138
  the need of unified theory, 61, 123, 135, 140, 141
economics as critical science and alpha-beta method, 47–61
economics is a social science, 43–5
economics, the science of
  epistemology-intensive science, 115, 148
  factual science, 1, 85, 100
  non-experimental science, 48
  social science, 14, 28, 30, 44–45, 47, 49, 54, 58, 66, 96, 97, 111, 115, 118, 123, 125, 129, 135, 149
  theoretical science, 47
economic theory
  empirical predictions as beta propositions, 53, 57, 95
  as family of models, 55–7
  as set of alpha propositions, 21, 22, 34, 47, 51, 56, 146, 147
Einstein against inductivism (for theory invention from data), 107
Einstein and letter to Popper, 104
Einstein's theories in physics, 139, 140
empirical hypotheses, types of
  beta-hypothesis, derived from scientific theory, 130
  H-hypothesis, not derived from scientific theory, 130
empirical predictions of scientific theory. See beta propositions
empirical research methods
  interpretive research method, 108–12
  statistical inference method, 104–8
epistemology
  as logic of scientific knowledge, 2–7, 107
  as rules of scientific research, 7, 8, 12, 138
  as theory of knowledge, 3–7, 101
equilibrium
  definition, 34, 35, 44, 45
  fundamental for falsification, 52–3
error vs. failure of scientific theory, 24
evolutionary economic process, 37, 38, 43
evolutionary epistemology. See Darwinian competition of scientific theories
evolutionary model, 40, 57, 73, 83, 84, 127
evolutionary process, 8, 38–41, 45, 52, 55, 59, 60, 83, 126, 136, 145
expanded reproduction process, 33

F
failure vs. error of scientific theory, 24
falsification in economics
  applicable only under alpha-beta method, 64
  justified by composite epistemology, 146
  See also Popperian epistemology not applicable in economics
Figueroa, A., 61, 141
forecasting, the fallacy of, 125–7
Freund, J., 85

G
general equilibrium model, 57–9, 61
Georgescu-Roegen, N., 10, 16, 28, 31, 92, 99, 118, 138, 146
Georgescu-Roegen's process epistemology. See process epistemology
Greene, B., 140, 143n2

H
Haack, S., 103
Hawking, S., 4, 130, 131
Heisenberg's uncertainty principle
  in economics (data collection), 142
  in physics, 109, 142
hermeneutics. See interpretive epistemology
H-hypothesis
  no relation with causality, 106
  not derived from theory, 105 (see also beta-hypothesis)
  testable using parametric or non-parametric statistics, 105
historical time (T), 38, 84
historical vs. mechanical time, 38, 84
Hurley, P., 102

I
identification problem/problem of identification
  in rejecting beta-hypothesis, 72
  in rejecting H-hypothesis, 106
immortal theory (pseudo theory), 53, 92, 95, 143
inductive logic, 7, 27, 101–4, 110
inductivism, 101, 103, 104, 110, 112, 122–3
initial conditions as initial structure of society, 31, 33, 50, 51
initial inequality postulate, 51
initial resource endowment postulate, 51
institutional postulate, 50
interpretive epistemology
  assumptions, 108
  and exploratory research, 110
  limitations, 108
  interpretive research method, 108–113

K
Kuhn, T., 92, 142
L
labor market, 31, 51, 58, 61
laws of thermodynamics and economic process
  Entropy Law, 33
  Law of Conservation of Matter and Energy, 33
Li, Q., 87
logical time, 59
logic, science of
  and epistemology, 16
  formal science, 1, 4, 117
  and theory of knowledge, 6
long run model, 59–61

M
markets and democracy, 31, 33, 34, 52
  as fundamental institutions of capitalism, 31, 33, 52
Mayr, E., 135, 136
McCloskey, D.N., 78
measurement of social facts
  Georgescu-Roegen criterion, 92–3
  science is measurement, 88–93
  Searle's criterion, 89–92
mechanical processes, 37, 38, 40
mechanical time (t), 38, 84
mechanical vs. historical time, 38, 84
methodology. See epistemology
microeconomic equilibrium model, 58
misplaced concreteness, fallacy of, 123–5
models
  concept, 34, 56
  economic theory as family of models, 55–7
  make theories falsifiable, 53
  need of auxiliary assumptions, 56
  types of; partial vs. general equilibrium, 57–9; short run vs. long run, 59–60; static vs. dynamic vs. evolutionary, 74–80, 82–84
multiple correlation coefficient, 81

N
natural resources in economic process, 31–3
Newton's theory in physics, 140
Neuman, L., 108, 109
non-parametric statistics, 84–8, 149
  assumptions, 85, 96
North, D., 49

O
ontological universalism, the fallacy of, 123
ontological universalism, the problem of
  in economics, 48–52
  in physics, 48–52
ordinal variable
  as dummy variable in regression analysis, 93, 113
  qualitative, 92, 93, 113

P
Pagan, A., 87
parametric statistics or statistical theory, 64, 65, 74, 84, 95, 96, 105
  assumptions, 64, 65, 74, 95, 96, 149
partial correlation coefficient, 81–2
partial equilibrium model, 58, 61
Pasternak, C., 135
physics and economics. See economics and physics
Popper, K., 7–9, 28, 50, 55, 99, 102–4, 111, 117–18, 138, 146
Popperian epistemology
  assumptions, 7–9
  consistency with meta-assumptions of theory of knowledge, 5, 6
  in economics, 15–16, 138, 146
  scientific research rules or principles, 7, 8, 28, 138
Popper's falsification epistemology. See Popperian epistemology
power structure in economic process, 33, 44
primary assumptions of scientific theory. See alpha propositions
principle of increasing endogenization, 58
principle of insufficient reason, 88
problem of induction, 102–3
process epistemology
  assumptions, 10, 11, 13
  consistency with meta-assumptions of theory of knowledge, 5, 6
  rules or principles, 13
  use of abstraction in studying complex realities, 9–10
proximate factors and structural equations, 12, 53

R
Racine, J. S., 87
rationality postulate, 50
rationality, the assumption of, 3, 50
realities without theory, 54–5, 114
reduced form relations
  as beta propositions, 19–20, 56, 74
  as causality relations, 12, 20, 74
regime switching, 37–9, 41, 43, 83, 84, 126
regression analysis
  assumptions, 75, 79, 105
  in dynamic models, 82–3
  in evolutionary models, 83–84
  in static models, 74–80
regression coefficient, 76–9, 81, 82, 105, 106
research methods
  pre-scientific; interpretive method, 108–112; statistical inference method, 104–8
  scientific (alpha-beta), 112–116
Rosenberg, A., 108

S
sample and population relationships, 66–71
Samuelson, P., 100
scarcity postulate, 50
sciences, types of
  complex vs. hypercomplex, 139
  formal vs. factual, 1–2, 4, 6
  social sciences vs. natural sciences, 108, 115, 137–143
scientific knowledge
  error-free knowledge, 2, 6, 146
  need of epistemology, 1–14, 107
  vs. non-scientific knowledge, 6, 104, 110
scientific theory
  definition, 3, 17
  need of epistemology, 7, 8, 10–13, 16, 64, 146
Searle, J., 89
Searle's criterion of measuring reality, 89–92
self-regulated economic process, 35
short run model, 59–60
Simon, G., 85
simple reproduction process, 31, 33, 34
Smith, J. M., 135–7
social group equilibrium model, 44
socially constructed variables, 89–94, 113
  concept, 89
  See also Searle's criterion
social sciences
  epistemology-intensive, 115, 148
  hypercomplex sciences, 139
social situation or social context, 50, 55–7
static economic process, 34, 41
static equilibrium, 34, 35, 39
static model, 74–80, 83
static process, 35, 38, 39, 52, 59, 65, 125
statistical association of variables
  as correlation coefficient, 80–82
  different from causality, 27, 106, 120
statistical correlation and causality, 82, 106, 107, 119, 120, 122, 130, 147
statistical inference research method, 104–8
statistics, science of
  and econometrics, 143, 147
  and epistemology, 64
  formal science, 64, 84, 93, 96
stochastic economic process, 41, 43
structural relations, 12

T
testing economic theories. See alpha-beta method; falsification in economics
theory of everything, fallacy of, 123
theory of evolution by natural selection, 136
theory of knowledge
  definition, 3, 6, 7
  as formal science (logic), 6
  meta-assumptions, 5, 6, 77
  as normative science, 6
  vs. "science is what scientists do", 6
time, concepts of
  chronological vs. logical (short and long run), 59
  mechanical (t) vs. historical (T), 38, 84
transition dynamics, 37

U
Ullah, A., 87
ultimate factors and reduced form equations, 12, 20, 27, 53, 82, 136
unified theory
  in biology, 140
  of capitalism, 51, 61, 141
  in economics, 49, 61, 123
  and partial theories, 49, 141
  in physics, 135
  of plant behavior, 140
  and theory of everything, 123, 135
unity of knowledge as epistemological requirement of science, 60–1

V
verified vs. corroborated theories, 27

W
weak-cardinal variable, 92–3
wealth inequality. See power structure
Wilson, E., 139

Z
Ziliak, S., 78
