
Jørgen Vitting Andersen

Andrzej Nowak

An Introduction
to Socio-Finance
Springer Complexity
Springer Complexity is an interdisciplinary program publishing the best research and
academic-level teaching on both fundamental and applied aspects of complex systems –
cutting across all traditional disciplines of the natural and life sciences, engineering,
economics, medicine, neuroscience, social and computer science.
Complex Systems are systems that comprise many interacting parts with the ability to
generate a new quality of macroscopic collective behavior, the manifestations of which are
the spontaneous formation of distinctive temporal, spatial or functional structures. Models
of such systems can be successfully mapped onto quite diverse “real-life” situations like
the climate, the coherent emission of light from lasers, chemical reaction-diffusion systems,
biological cellular networks, the dynamics of stock markets and of the internet, earthquake
statistics and prediction, freeway traffic, the human brain, or the formation of opinions in
social systems, to name just some of the popular applications.
Although their scope and methodologies overlap somewhat, one can distinguish the
following main concepts and tools: self-organization, nonlinear dynamics, synergetics,
turbulence, dynamical systems, catastrophes, instabilities, stochastic processes, chaos, graphs
and networks, cellular automata, adaptive systems, genetic algorithms and computational
intelligence.
The three major book publication platforms of the Springer Complexity program are the
monograph series “Understanding Complex Systems” focusing on the various applications
of complexity, the “Springer Series in Synergetics”, which is devoted to the quantitative
theoretical and methodological foundations, and the “SpringerBriefs in Complexity” which
are concise and topical working reports, case-studies, surveys, essays and lecture notes of
relevance to the field. In addition to the books in these three core series, the program also
incorporates individual titles ranging from textbooks to major reference works.

Editorial and Programme Advisory Board


Henry Abarbanel, Institute for Nonlinear Science, University of California, San Diego, USA
Dan Braha, New England Complex Systems Institute and University of Massachusetts Dartmouth, USA
Péter Érdi, Center for Complex Systems Studies, Kalamazoo College, USA and Hungarian Academy
of Sciences, Budapest, Hungary
Karl Friston, Institute of Cognitive Neuroscience, University College London, London, UK
Hermann Haken, Center of Synergetics, University of Stuttgart, Stuttgart, Germany
Viktor Jirsa, Centre National de la Recherche Scientifique (CNRS), Université de la Méditerranée, Marseille,
France
Janusz Kacprzyk, System Research, Polish Academy of Sciences, Warsaw, Poland
Kunihiko Kaneko, Research Center for Complex Systems Biology, The University of Tokyo, Tokyo, Japan
Scott Kelso, Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA
Markus Kirkilionis, Mathematics Institute and Centre for Complex Systems, University of Warwick,
Coventry, UK
Jürgen Kurths, Nonlinear Dynamics Group, University of Potsdam, Potsdam, Germany
Andrzej Nowak, Department of Psychology, Warsaw University, Poland
Linda Reichl, Center for Complex Quantum Systems, University of Texas, Austin, USA
Peter Schuster, Theoretical Chemistry and Structural Biology, University of Vienna, Vienna, Austria
Frank Schweitzer, System Design, ETH Zurich, Zurich, Switzerland
Didier Sornette, Entrepreneurial Risk, ETH Zurich, Zurich, Switzerland
Stefan Thurner, Section for Science of Complex Systems, Medical University of Vienna, Vienna, Austria
Jørgen Vitting Andersen
Andrzej Nowak

An Introduction
to Socio-Finance

Jørgen Vitting Andersen
CNRS, Centre d’Economie de la Sorbonne
University of Paris 1
Paris, France

Andrzej Nowak
Faculty of Psychology
University of Warsaw
Warsaw, Poland

ISBN 978-3-642-41943-0 ISBN 978-3-642-41944-7 (eBook)


DOI 10.1007/978-3-642-41944-7
Springer Heidelberg New York Dordrecht London

Library of Congress Control Number: 2013956621

© Springer-Verlag Berlin Heidelberg 2013


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered
and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of
this publication or parts thereof is permitted only under the provisions of the Copyright Law of the
Publisher’s location, in its current version, and permission for use must always be obtained from Springer.
Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations
are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Cover image: Image by Yellow Dog Productions

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

Socio-Finance

The word “socio-finance” used in the title of this book is meant as a description
that catches the underlying nature of price formation in financial markets. Since
the term as such does not exist, let us be more precise and define socio-finance to
mean that price formation in financial markets is a sociological phenomenon that
relates individual decision-making to the emergent social level. We will consider
two different levels of this sociological influence. At the first level, socio-finance
considers how price formation results from the social dynamics of interacting
individuals. Interaction occurs either through the price or by direct communication.
An example could be a day trader in a stock market, where the decision choice of
when to enter/exit a position depends on the price trajectories created by other day
traders. At the second level, socio-finance considers how price formation results
from the social dynamics between groups of individuals. An example could be how
one financial market uses the outcome of another financial market in order to know
how to price an asset properly. In this book, models of both levels of socio-finance
will be presented, and it will be shown how complexity theory provides the tools to
understand such phenomena.

Social Media and the Stock Market

We live in a world of social media. At the time of writing, Facebook has just
had its initial public offering (IPO) on the stock market under the scrutiny of
media broadcasters the world over. Its less than successful market debut, with sharp
declines in the price over the days that followed, has raised new questions about how
to determine the proper price of a company, in this case even before it has been
quoted for the first time. Just as it would have been impossible 30 years ago to
foresee anything of the Internet-based world we live in today, so it seems impossible
to know what awaits us in terms of social media and the Internet: What is the future
“Google”, “Facebook” or
“Microsoft” going to look like?


Without trying to offer any clues as to where we are heading, we still think it is
safe to say that globalization in conjunction with the appearance of the Internet
and social media will introduce a new era for the stock market and the way in
which prices are determined in the market. Even for people not interested in the
financial markets, it can sometimes be hard to go to a restaurant and enjoy a meal
without overhearing a television announcement about the latest price movements in
the markets. With the omnipresence of news not only from the stock market but also
from other markets like commodity markets (e.g., oil, gold, wheat), one might think
that this factor should also play a role in the way professional traders think about
financial markets and how to price assets. Not so.
The main theories taught at the universities, and still used in the finance industry
(banks, pension funds, insurance companies, hedge funds), date back to the 1960s
and 1970s when the core foundations of traditional finance were laid out. These
theories still dominate the academic world of finance, and, to a lesser but still
important extent, they are what is used by the industry even today. As will be seen
shortly, the collective and dynamic formation of opinion on the proper price of a
stock does not play any role in such theories. Instead, pricing of an asset happens
through decision-making represented via a prototype individual reacting to “quasi-
static” information. In the main part of this book, we will argue for the introduction
of new tools that take into account the dynamic and social nature of information
sharing, not only on the level of individuals but also, as will be seen in Chap. 7,
between different markets worldwide.
As we are living in an ever changing and dynamic world, it seems only natural
(and prudent) to develop new tools to capture its changing complexity. Such a
need would be particularly clear in the case where events occur that go beyond
the understanding of the set of tools we currently possess. The 2008 subprime
crisis seems to have taken the world by surprise, and clearly the event was not on
the cards as far as standard theory was concerned. At least this is the impression
you get by listening to the words of the former Federal Reserve chairman, Alan
Greenspan, who during the market turmoil of 2008 admitted being in a “state of
shocked disbelief” that the free markets could reveal such flaws [76]. The current
European debt crisis, which could be considered as a continuation of the United
States 2008 subprime mortgage and lending crises, seems to point to a similar lack
of tools for understanding and tackling such problems.

The Market as a Voting Machine

The quote “In the short run, the market is a voting machine, but in the long run it is a
weighing machine” [59] is attributed to the American economist Benjamin Graham
(1894–1976). Graham himself used the quote to argue that investors should use
the so-called fundamental value investment approach and concentrate on accurately
analyzing the worth of a given financial asset. That is, he suggested ignoring the
short run “voting machine” aspect and instead concentrating on the long run, where
somehow the “weighing machine” of the market would ensure that we end up with
a price corresponding to the true worth of an asset. This sounds simple in principle.
But even if this was how the markets worked, how exactly can one assess the true
worth of an asset before it is determined by the weighing machine of the market?
Moreover, assuming that we are able to determine the true worth of an asset, what
should we do if the market somehow goes off track for a longer period of time than
expected, as is typically seen, for example, during speculative financial bubbles?
In this book we will be concerned with what can best be characterized as
the “voting machine” part of Graham’s quote. More precisely, it will be argued
throughout this book that price formation in financial markets is a sociological
phenomenon and that complexity theory is the tool required to understand this
phenomenon. The reader will be introduced shortly to both complexity theory and
its relationship to sociology. However, let us begin by defining what we mean by the
term “financial market.” We define a financial market to be a place where people
can buy and sell a broad range of assets such as currencies, financial securities
(e.g., stocks and bonds), commodities (e.g., crude oil, natural gas), metals (like gold,
silver, copper), and agricultural goods (e.g., corn, wheat, cattle). We would like to
emphasize this definition since it places humans as the main actors. Either directly or
indirectly (through programs made by humans in computer trading), there is always
a human making decisions behind every trade made in a market. This may sound
trivial, and indeed it is something either quickly forgotten or not emphasized in most
books on financial markets. In this book it will instead be the main focus.
Human decisions are almost always made in the social context of other individu-
als. The effect of social context can take different forms. An individual may ask oth-
ers for advice, he or she may ask others for information, and several individuals may
discuss different companies, stocks, and investment options, creating the so-called
“shared reality” that guides their individual decisions. Individuals can influence each
other, not only in cognitive but also in emotional ways. Multiple influences in a
social group may lead to euphoria or fear that can influence individual decisions
in a coordinated way. Individuals can also be influenced by observing each other’s
behavior and its consequences [14]. It has been demonstrated [18] that copying the
behavior of others is one of the prevailing mechanisms in decisions to purchase.
All of these mechanisms are social in nature. The link with sociology then
comes naturally when one considers the financial market as a “voting machine”
and the price as the outcome of the vote. It is of critical importance, however, that
the outcome of the social mechanisms is not equivalent to the sum of individual
decisions. The fact that individuals influence each other makes the outcome of
the group process very different from the counterfactual outcome of individuals
making their decisions in isolation from each other. In this book we will argue
that social process shapes financial markets in a more pronounced way than the
individual features of decision-making. Thus, although psychological processes
influence individual decisions, direct and indirect influences between individuals
play an even more important role.
Clearly, the “election” going on in a financial market is not democratic: those
entering the market with the most money will move prices the most. Furthermore,
the pool of voters, the market participants, is something that changes over time. Such
dynamics and the impact it can have on the pricing in markets is little understood
and rarely discussed in the general literature on finance. Finally, the “election” is
ongoing, so that the outcome at a given instant of time reflects the latest beliefs of
the market participants.
Our main emphasis in this book will be to point out that the way prices are
discovered by the market is a sociological process, where the decisions made by a
given market participant depend on the decisions made previously by other market
participants. This can happen in one of the two following ways:
• Market participants make a decision to buy or sell based on the past/present
price value of an asset. This is the case for market participants who trade using
technical analysis of past price movements, as well as fundamentalists who enter
the market whenever they think the market is over- or undervalued.
• Through communication with other market participants, a decision is made
which triggers a trade.
Actually, the only situation where price dynamics is not determined through a social
process is in direct reaction to headline news concerning interest rate decisions
made by central banks, earnings announcements for stocks, or other global news
announcements considered relevant for financial markets. One can, for example,
think of political news announcements, natural disasters such as earthquakes, or the
sudden onset of human disasters such as war and terrorism. As soon as a given
announcement is made public and the markets have seen the initial reaction to it,
the price dynamics becomes a social process of deciding how to react to the initial
reaction of the market. Some market participants will see the initial reaction as an
overreaction, and, in the case of bad news, a potentially good buying opportunity,
whereas other market participants may instead see the initial reaction as part of a
longer-term deviation from the present price trend, and hence as a good time to sell.

Neo, The Matrix, and the Market: A Complexity View

Imagine for a moment that we could be like the hero Neo in the cult film The
Matrix and observe the world in slow, slow, slow motion. We would then be able
to distinguish and observe the chronology of orders to sell and buy assets as they
arrived electronically at the world’s different stock exchanges.
Maybe we would see orders arriving from a trader of a major investment bank
who wanted to take a large position on the Standard & Poor’s 500 (S&P 500) index
before the opening of the markets in the United States. We could see how this
order changes the electronic order book of the S&P 500 index, and we could follow
how specially designed supercomputers from several banks and hedge funds would
immediately update their limit buy and sell orders in the order book of the S&P 500
index to try and profit from the impact.
Then perhaps we would see a trading Japanese housewife pressing the enter
button and making her trade of the day by submitting an order to convert her entire
holdings of US dollars (USD) into Japanese yen (JPY). The small blip this would
create on the JPY/USD rate could be the tipping point for a trader in a proprietary
hedge fund in London, who would subsequently begin to get rid of large holdings of
US dollars, converting them into what is considered to be a more secure holding of
Japanese yen. The resulting impact and the beginning of the decline in the JPY/USD
rate would be spotted by traders in a Russian bank. As a flight to safety measure
and having long hesitated to reduce the percentage of stocks in their portfolio, they
would then begin to sell out stock holdings worldwide.
Meanwhile, around a table in Washington D.C., eight men and four women
would go over the latest available information on the US economy and employment
data. Their decision on whether to change the federal fund rates and the following
explanation in a communique would hours later result in a huge storm of trading
activity. However, this activity would be dwarfed by the beginning of a stock market
fall a few weeks later. But was it the decision of the trading Japanese housewife to
convert her dollars into yen, the meeting in Washington D.C., or the Russian
bank’s stock sellout that would later mark the beginning of a decade-long decline in
the stock markets?

Outline and Purpose of the Book

The purpose of this book is threefold. First, we give a short but broad introduction
to the standard economic theory of financial markets. This should enable the reader
to understand the traditional way of thinking, illustrated by examples. Secondly, the
reader will be introduced to the concepts of behavioral finance and a psychologically
defined view of financial markets. Finally, complexity theory and models which
take into account behavioral decision-making will be introduced as a tool to give
new insights into price formation in financial markets. The main part of the book is
written to be accessible to a broad audience. More specific and quantitative
explanations are given in grey boxes.

Acknowledgements

Jørgen Vitting Andersen wishes to thank the following people. Heartfelt thanks to
Jasmine, Anna, and Mia for their support. A special thanks to Didier Sornette for
having introduced me to the field of econophysics and for our many collaborations.
Both authors would like to thank Barbara Piper for her outstanding editing, and
Lisa Tognon for her nice pencil illustrations. Finally, the authors would like to thank
the following colleagues for insightful discussions: Lucia Bellenzier, Serge Galam,
Dominique Guégan, Stephen Hansen, Justin Leroux, Sebastian Martinez, Michel
Miniconi, Lael Parrott, Philippe de Peretti, Giulia Rotundo, Magda Roszczynska-
Kurasinska, Sorin Solomon, and Maxences Soumaré.

Paris, France Jørgen Vitting Andersen


Warsaw, Poland Andrzej Nowak
September 2013
Contents

1 The Traditional Approach to Finance
  1.1 Introduction
  1.2 Rational Expectations Theory and the Efficient Market Hypothesis
  1.3 Pricing of Stock Markets and Excess Volatility
  1.4 Markowitz’s Portfolio Theory
  1.5 The Capital Asset Pricing Model
  1.6 Have Your Cake and Eat It: Creating a Non-Markowitz Portfolio
  1.7 Critics of the Traditional Viewpoint

2 Behavioral Finance
  2.1 Introduction
  2.2 Cognitive Processes: The Individual Level
    2.2.1 Motives
    2.2.2 Emotions
    2.2.3 Self-Structure
    2.2.4 Biases
  2.3 Prospect Theory
  2.4 Pricing Stocks with Yardsticks and Sentiments
    2.4.1 Introduction
    2.4.2 Theory of Pricing Stocks by Yardsticks and Sentiments
    2.4.3 Discussion
  2.5 Sticky Price Dynamics: Anchoring and Other Irrational Beliefs Used in Decision Making
    2.5.1 Appendix: Quantitative Description of the Trading Algorithm
  2.6 ‘Man on the Moon’ Experiments of Behavioral Finance
  2.7 Social Processes Underlying Market Dynamics

3 Financial Markets as Interacting Individuals: Price Formation from Models of Complexity
  3.1 Introduction
  3.2 Chaos Theory and Financial Markets
  3.3 The Symphony of the Market
  3.4 Agent-Based Modeling: Search for Universality Classes in Finance
  3.5 The El Farol Bar Game and the Minority Game
  3.6 Some Results for the Minority Game
  3.7 The $-Game
  3.8 A Scientific Approach to Finance
  3.9 Taking the Temperature of the Market: Predicting Big Price Swings

4 A Psychological Galilean Principle for Price Movements: Fundamental Framework for Technical Analysis
  4.1 Introduction
  4.2 Dimensional Analysis
  4.3 A Simple Quantitative Framework for Technical Analysis
  4.4 Applications

5 Catching Animal Spirits: Using Complexity Theory to Detect Speculative Moments of the Markets
  5.1 Introduction
  5.2 Rational Expectations Bubbles
  5.3 Going Beyond Rational Expectations Bubbles
  5.4 The Idea of Decoupling
  5.5 The Formalism of Decoupling
  5.6 Decoupling in Computer Simulations and in Real Market Data
  5.7 Using Decoupling to Detect Speculative Price Movements
    5.7.1 Monte Carlo Simulations Applied to $G Computer Simulations
    5.7.2 Experiments on Human Subjects and Monte Carlo Simulations Applied to Data Generated by Human Traders
    5.7.3 Experimental Manipulation
  5.8 Summary

6 Social Framing Creating Bull Markets of the Past: Growth Theory of Financial Markets
  6.1 Introduction
  6.2 The State of the Market
  6.3 Long Term Growth of Financial Markets
  6.4 How Big Is the Investment Level of a Given Stock Market?
  6.5 Impact of Short Selling on Pricing in Financial Markets

7 Complexity Theory and Systemic Risk in the World’s Financial Markets
  7.1 Introduction
  7.2 Systemic Risk: Tearing a Piece of Paper Apart
  7.3 Systemic Risk in Finance
  7.4 Self-Organized Criticality
  7.5 Two States of the World’s Financial Markets: The Sand Pile and the Quicksand
    7.5.1 News and the Markets
    7.5.2 Change Blindness and Large Market Movements
    7.5.3 Price-Quakes of the Markets
    7.5.4 Price-Quakes in the Worldwide Network of Financial Markets
    7.5.5 Discussion

8 Communication and the Stock Market
  8.1 Introduction
  8.2 A Model of Communication and Its Impact on Market Prices

9 References

Index
1 The Traditional Approach to Finance

1.1 Introduction

Before we begin our general discussion of socio-finance, we will first make a detour
and take a closer look at the core ideas of modern finance. Even though the concepts
and methods of modern finance will not be used in the chapters that follow, we
think it is important to be aware of the abstract methods and the way of thinking
that people apply in traditional finance. As with any field of research, it is useful to
understand the vocabulary and the main ideas in order to participate in discussions.
While it is by far the dominant trend in academia, the situation is more diffuse in the
industry, where it is often important to be pragmatic and go beyond the limitations of
the traditional ideas of finance. In particular, the exponential growth of hedge funds
over the last couple of decades seems to have spearheaded new and practical ideas
about the functioning of financial markets. Many of these ideas have been inspired
by complexity theory, which we will discuss later in this book.
The following sections in this chapter are meant as a short but general intro-
duction to the traditional theory of financial markets. The idea is to introduce the
reader to key concepts such as rational expectations and no arbitrage, which are the
building blocks that underpin modern portfolio and pricing theories like the Capital
Asset Pricing Model (CAPM), Arbitrage Pricing Theory (APT), and versions thereof.
The modern portfolio theory of Markowitz and the CAPM will be explained briefly.
We will try to make the explanations simple but concise, and focus on explaining
the core ideas rather than the formal framework. During our explanations we will
also point out some of the shortcomings of these theories.

1.2 Rational Expectations Theory and the Efficient Market Hypothesis

Rational expectations theory and the efficient market theory were developed in
the 1960s and became widely accepted and well established in the 1970s. By the
1980s, however, questions about the validity of the theories had already surfaced

with the discovery of a succession of anomalies, most notably the evidence for
excess volatility of returns [125], which we shall explain in a moment. Eventually,
competing theories arose in the 1990s, most notably in the field of behavioral
finance, which led to more recent theories such as feedback and herding, as well as
models of interaction between smart money and ordinary investors. We will return
to this in the next chapter. For the moment, however, we need to understand the
foundations that underpin the traditional view of finance.
It is probably not an exaggeration to say that the theory of rational expectations
really is at the very core, hiding deep down in the framework of most theories used
in finance since the mid-1970s. In the following, we shall therefore take some time
to introduce the idea, but also to try to digest its implications. The idea is very
simple and elegant, and has very deep implications. Unfortunately, it is also very
unrealistic – but we shall come back to that later on.
In order to plan for the future, people try to make predictions about what
awaits them. This human activity is perhaps especially pronounced in the fields
of economics and finance, where professionals in the industry and decision-makers
in the public sector try to make predictions about variables like the future growth
rate of a country (GDP), future unemployment rates, interest rates, and so on.
Rational expectations theory assumes that the way people make predictions is done
in a ... well, rational manner. For financial markets, the predictions are about what
should be the value of a given asset. The box below describes the formalism of
rational expectations theory when it is used to price a given asset. Readers
not interested in formalism can, without loss of comprehension, skip the more
quantitative explanations given in boxes throughout the book.

Let us think about the situation where people at time $t-1$ use all available
information $I_{t-1}$ to try to predict what should be the price of an asset $P_t$
after the next time step, i.e., at time t. Rational expectations theory [101] then says
that the price $P_t$ that rational people will forecast ex-ante (before the facts,
i.e., before you can look up the price of the asset in the Financial Times) will
only deviate randomly from its fundamental value $P_t^*$:

$$ P_t = P_t^* + \epsilon_t , \qquad (1.1) $$

where $\epsilon_t$ is a random term which has expectation value zero and is
independent of $P^*$. What is meant by ‘fundamental value’ will become clear in a
moment, when we describe the fundamental value of a stock, but for now just
think of it as the true or fair price of an asset. Rational expectations theory
therefore assumes that the price in a market is not necessarily equal to its
fundamental price, but that the deviations are random. Put differently,
deviations from perfect prediction are random. When you take the expectation
value of (1.1), you find that on average the price of a market is indeed equal
to its fundamental value:

$$ E(P_t \mid I_{t-1}) = P_t^* , \qquad (1.2) $$

where $E$ denotes the expectation value. What is often confusing in
presentations of rational expectations theory is how the price dynamics enters
into all this; for a discussion of this see, e.g., [93]. Equation (1.2) does not say
how prices change, so where does the description of the price dynamics come
into play? This is important. It is implicitly expressed in (1.2) through an
additional assumption: prices only change due to information ‘shocks’,
where new information suddenly changes the fundamental value $P^*$, and
the price of the market then follows the change in the fundamental value
through (1.1). The announcement of a change in the interest rate by the
Federal Reserve is a clear example of this, but in principle, any new information
relevant to a change in the fundamental value could in turn lead to a change in
the market price through (1.1). Therefore, in the rational expectations picture,
the price of the market is at any instant in ‘equilibrium’. Prices in such a
picture only change because new information turns up that changes the
fundamental value and therefore also the market price.
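To make the mechanics of this grey box concrete, here is a minimal simulation sketch in Python. It is our own illustration, not part of the standard literature, and the shock probability and noise sizes are invented: the fundamental value jumps only when a random news ‘shock’ arrives, the market price deviates from it by the zero-mean noise of (1.1), and the average deviation vanishes, as (1.2) requires.

    import random

    random.seed(1)

    P_star = 100.0      # fundamental value: changes only when news arrives
    deviations = []

    for t in range(10_000):
        if random.random() < 0.01:            # a rare news 'shock' arrives...
            P_star += random.gauss(0.0, 5.0)  # ...and shifts the fundamental value
        eps = random.gauss(0.0, 1.0)          # zero-mean error term of Eq. (1.1)
        P_t = P_star + eps                    # market price: P_t = P*_t + eps_t
        deviations.append(P_t - P_star)       # Eq. (1.2): averages to zero

    print("mean deviation from fundamental:", sum(deviations) / len(deviations))

Note that between shocks the simulated price is, apart from the noise term, completely still. As discussed later in this chapter, this is exactly where the rational expectations picture clashes with the incessant activity of real markets.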

Rational expectations are also used in the reasoning behind the ‘efficient market
hypothesis’: if the price of an asset has not factored in all information, then arbitrage
opportunities exist and rational traders will therefore drive the price back to its
fundamental value. In the strongest version of the efficient market hypothesis, all
such arbitrage possibilities have been exploited, meaning that at all times the price
in financial markets equals its fundamental value [119].
Having introduced the idea of rational expectations, let us look at how economists
use the idea in practice. To illustrate the way of thinking, let us quote what has
become known as the ‘Lucas critique’, due to the economist Robert Lucas [87]:
Given that the structure of an econometric model consists of economic agents, and that
optimal decision rules vary systematically with changes in the structure of series relevant
to the decision-maker, it follows that any change in policy will systematically alter the
structure of econometric models.

Put simply, since people (in economic models) are rational, they will anticipate
the effect of economic policy-making and thereby render it useless. The usual
example economists take is the Phillips curve, where historical data show a negative
correlation between inflation and unemployment. The Central Bank could, for
example, decide to try and use the relationship from the Phillips curve to lower
the unemployment rate simply by raising inflation. They can easily do this
by lowering interest rates or by other measures that will put more money into
circulation. However, since people are rational, they see what the Central Bank
is doing and will then raise their expectations about inflation. The expansionary
effect of the increased money supply will therefore be paid out as higher salaries
(because inflation expectations have risen) rather than being used to employ
more people.
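The logic of this example can be compressed into the textbook expectations-augmented Phillips curve. The notation below is ours, a standard classroom illustration rather than anything taken from [87]:

$$ u_t = u_n - a\,(\pi_t - \pi^e_t) , \qquad a > 0 , $$

where $u_t$ is unemployment, $u_n$ its ‘natural’ rate, $\pi_t$ realized inflation, and $\pi^e_t$ expected inflation. Under rational expectations an announced policy contains no surprise, so $\pi^e_t = \pi_t$ and hence $u_t = u_n$: the historical correlation disappears the moment policy tries to exploit it.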
The essence of Lucas’s critique is therefore that we need to understand how
an individual reacts to economic policy-making. Note that this asks for a relation
between microeconomics or individual behavior, in terms of rational agents, and
macroeconomics, in terms of economic policy-making. Such models are called
dynamic stochastic general equilibrium models, abbreviated to DSGE models.
Needless to say, present-day DSGE models are much more sophisticated than
the simple example mentioned above, including for example the more realistic
assumption of adaptive rational expectations.
Still, the important question is how far one can go in capturing reality with
the strong assumption of always having rational expectations as one of the central
building blocks. In our opinion, it is better to come to terms with the fact that
the nature of human beings, while fascinating, is far from the perfect assumptions
implicitly made under the rational expectations hypothesis. Take, for example, the
more extreme moments of the financial markets, where violent price swings seem
to be happening over certain clustered moments in time (‘volatility clustering’ is the
technical term). Here we seem to be very far from rationality.
There is actually another, a priori unexpected field, quite different from
economics, where it turns out that rational expectations theory can be used in
practice, and where, as we shall show, the tool works perfectly well: the prediction
of failure in a material! Take a piece of paper and begin to punch holes into it.
Then, as will be shown in Chap. 7, rational expectations theory is the perfect tool
for estimating the density of holes at which your piece of paper will break. In
particular, this example gives a clear demonstration of how the method of rational
expectations can give successively better predictions by updating
the prediction as the information set changes. However, as we shall see, this only
works because it is crystal clear what is meant by information in that case, whereas
this is ambiguous in the pricing of assets.
In hindsight, it may not be so surprising that the logical rational expectation
theory works out so nicely in a problem of physics, whereas it certainly can present
difficulties when applied to a field where human emotions play a role. Since this
book is about financial markets, the example from materials science will only be
discussed briefly, mainly as a prelude to the important topic of systemic risks.
However, we think it gives yet another example of how fruitful cross-disciplinary
approaches can be when serving as input in unexpected fields. It also gives a fresh look at an
old tool used beyond its traditional discipline.
When it comes to the way humans actually behave, the assumptions of rational
expectations are clearly wrong. There are many reasons for this, but let us take the
simplest first. According to the rational expectations hypothesis, you should only
observe prices move in the market whenever there is news that can be said to change
the fundamental value of an asset. If we take stock prices, as will be discussed below,
this can only be whenever there is either news related to earnings of a given stock
or news which can be said to be relevant for interest rate decisions. Even for those
not interested in financial markets, it is hard not to notice the media coverage, with
often colorful descriptions of the yo-yoing movements of the markets from day to
day. Clearly, not all these movements can be due to new information relevant for
interest rates or earnings. If this still does not sound convincing, then look at the
hectic activity that occurs in stock markets on a timescale of hours, minutes, or
even seconds. The rational expectations hypothesis assumes that the price dynamics
is stone dead, i.e., no price changes whenever there is no news relevant to the
fundamental price.
In order to elaborate on this obvious puzzle, economists then either assume that
people are risk averse (something which is not included in the rational expectations
hypothesis) or that the ‘excess’ price movements are caused by ‘noise traders’, i.e.,
traders who are not rational and whose erratic behavior will then be arbitraged by
rational traders, who on average correctly predict what the ‘true’ price should be.
Another explanation as to why market participants do not all follow the rational
expectations hypothesis concerns the ‘cost’ of gathering information. Theoretically,
from the rational expectations hypothesis, you can only correctly assess what the
fundamental value should be if you have access to ‘all available information’, but
what does this actually mean?
Since we will argue for a sociological role in price formation in financial markets,
our main objection is, however, another reservation concerning rational expecta-
tions, one that was not mentioned above. Rational expectations theory assumes
that there is only one type of market participant trading in the market, the rational
one, who acts according to the rational expectations hypothesis. In physics, such an
assumption is called a mean field theory. For example, one assumes that all the atoms
in an iron rod act in the same way on average. Similarly, rational expectations
theory assumes that all people in the market act in the same rational way, on
average. If we take some guidance from physics, it turns out that we often go wrong
when trying to understand complex problems with a simple mean field theory.
We are not by any means suggesting that one should think of financial markets as
an ensemble of ‘human atoms’. On the contrary, we believe that such an approach
will ultimately fail, precisely due to the differing behavior of people, a feature that
is not easily captured using a simple formula to represent each individual. Applying
a mean field type theory to something as complex as the financial markets clearly
imposes restrictions on our understanding. We would argue that the complexity of
the problem can only be properly understood by incorporating enough factors into
the theory at the outset.
However, it should be mentioned that there are many economic models where
humans (or ‘agents’) are allowed to deviate from the rational expectations hypoth-
esis. Generally, most such models tend to adopt a minimal departure from rational
expectations. If the fluctuations in an object are important, and that seems to be
the case for financial markets, which are renowned for their volatility, then using
a mean field theory with minor adjustments makes it difficult to obtain a proper
understanding of the complexity of the problem, including the fluctuations. In such
cases, it seems better to start from scratch, as it were, by determining the important
variables and the relevant constraints for the given problem.
What seems clear is that there should be no obvious arbitrage possibilities in
financial markets, at least not on a permanent basis. The no-arbitrage idea, which
is another cornerstone of traditional finance, therefore seems a much more natural
starting point for any financial theory than the rational expectations hypothesis. The
question, however, is whether ‘pockets of predictability’ could exist over certain
periods of time, especially perhaps in the more extreme moments of large price
movements. We will return to this question in Chap. 3. There we will also discuss
psychology in terms of ‘sentiments’ and ‘anchoring’, and the way it plays a role in
shaping collective decisions by market participants, thereby rendering some market
movements non-random. Finally, let us mention that, almost by definition, when it
comes to stock markets, one would expect rational expectation models to work best
during ‘calm’ periods, where there could be greater hope of rationality. However,
financial markets often go through ‘stormy’ periods, where it seems much less
logical to expect people to demonstrate rational behavior.
We will come back to such a situation in Chap. 5, where we will take a different
approach in order to see whether speculative behavior created through social
phenomena could create events such as speculative bubbles and crashes. It will be
shown how dynamic consensus-making in price formation opens up the pathway for
speculative bubbles and crashes. After the 2008 US credit crisis, and the 2010–2011
European debt crisis, there has been much debate on the research and tools used
to understand the dangers connected to systemic risks. But it is difficult to see how
rational expectation theory could play a determining role in helping us to understand
such situations.
Economists are well aware of most of the shortcomings mentioned above. Still,
a large majority adhere to the assumptions of rational expectations in order to
explain pricing in financial markets using DSGE models. Why is this? One reason
could simply be tradition and the resultant inertia. Once a school of thought has
been established in academia, this brings along with it a set of journals, and,
in order to publish in those journals and gain recognition (and eventually gain
tenure for hopeful young candidates), this often means you need to formulate your
work according to the current school of thought. So, applying rational expectations
theory to the question of why people use rational expectations theory: as long as a
majority of people are doing so, it can be hard to change, if only because of career
opportunities. Another reason, which is difficult to argue against, is the simple fact
that rational expectations theory allows you to utilize a mathematical framework,
and you can immediately begin to do calculations (see, for example, Sect. 5.2).
It should also be noted that the fields of economics and finance are very
traditional ones, since they deal with wealth and money. Let us say that you are
a portfolio manager. You are probably more easily excused from potential blame
if you lose your client’s money in a violent downturn of the markets and you can
show that you have stuck to what everybody else was doing. The blame will then
be placed on the unusual market behavior, and not on the theories you used in your
portfolio management (everybody else was doing the same thing...). However, if
you devise a new theory for doing portfolio management and you lose your client’s
money under the very same violent downturn of the markets, you can be almost
certain that your theories will be to blame, not the market.
There is no real tradition for interdisciplinary work in economics and
finance [67]. This may be because anyone already having a PhD in the field will have
invested a great deal of time in one very specialized area, so very few candidates
with a PhD ever think about applying for a position in another field, such as atomic
physics, for example, and probably with good reason. This might sound obvious
to most readers, but we would like to suggest that it ought not to be so obvious.
Economics and finance are very challenging fields and very enriching intellectually,
simply because they are so complex and difficult to grasp. Usually, a challenging
environment is one where one can make the most progress in terms of research.
For example, the Apollo space missions in the 1960s and 1970s not only placed a
man on the Moon, but led to tremendous spin-offs in many other fields. The authors
believe that problems in economics and finance could serve a similar purpose, but in
order to do so, the fields need to open up and exploit interdisciplinary approaches.
Whether this is possible or not remains to be seen.

1.3 Pricing of Stock Markets and Excess Volatility

Having introduced the general idea of a fundamental price in the last section, let
us give the definition for the pricing of a stock. As we have seen, the idea behind
rational expectations was that people try to estimate what should be the fundamental
price $P_t^*$ of an asset at a given time t. When the asset is a stock, then we have to try
and guess/estimate all future cash flows for a given stock. However, since we need
to know the value of future cash flow in terms of what it is worth today (we price
the value of the stock today), we then need to discount these future cash flows to the
present day value of money. This should then be the fair value of the stock at time t.
The formal way of expressing this idea is given in the box below.

Let us first note that formally one can write (1.2) in terms of the future payout
of dividends as

$$ P_t^* = \sum_{\tau=t+1}^{\infty} E\left[ \frac{D(\tau)}{[1+r(\tau)]^{\tau-t}} \,\middle|\, I_{t-1} \right] , \qquad (1.3) $$

where $D(\tau)$ is the cash flow (dividend) at time $\tau$, the factor
$[1+r(\tau)]^{\tau-t}$ is the discount factor transforming money at time $\tau$
into present-value money at time t, and $r(\tau)$ is the interest rate at time $\tau$.
For those who do not feel comfortable with mathematical expressions, let
us describe in words the meaning of (1.3). If you want to apply this formula
to find out the fundamental price of, say, IBM at time t, the formula says
that you have to calculate a sum of terms. Each term has a numerator and
a denominator. The numerator term is the most important. Here you should
use all available information at time t  1 to estimate the future dividends
of the IBM stock. This is what is meant by the numerator terms in (1.3).
The denominator is a term discounting the value of future money to the
present day value. Below we will explain in words how to calculate both the
numerator and denominator terms, in case you want to have a go at estimating
the fundamental value of a stock using this formula.
Let us first discuss how to calculate the numerator in each term. Doing the
job of a stock analyst, you try to gather all possible information about IBM
from the company itself, newspapers, journals, and what other stock analysts
write, to get an estimate of the growth prospects for the sale of IBM products
from time t to time $t+1$. Note that this in itself sounds complicated, and
indeed it is complicated. As a matter of fact, it is not even very well defined.
Which information is relevant and which is not? Which sources can one trust
as important information and which not? Finally the growth prospects of
the company will be influenced by the general state of the economy, adding
further complications to this exercise. It is not therefore an overstatement to
say that getting a precise prediction of the earnings over the next time step,
from t to $t+1$, is a tough call.
And even after obtaining this estimate you are not done. In (1.3), the $\sum$
sign implies a sum. Indeed, this summation sign tells you that, as well as
estimating the earnings of the company in the time period from t to $t+1$
[once you have an estimate of the earnings the company will make, you can
then estimate the dividends, which is what is expressed in (1.3)], you must
then also estimate the dividends in the next time step from time $t+1$ to
$t+2$, in the time step following that from $t+2$ to $t+3$, and so on and so
forth right up to ... infinity. Mathematically, it is easy to place the infinity
sign $\infty$ on top of the sum sign and get on with the calculations. But from a
practical point of view, what could it mean to try to estimate earnings ‘up until
infinity’? Economists would probably say that it means ‘in the long run’, but
as the British economist John Maynard Keynes once noted “in the long run
we are all dead”. The further into the future you try to estimate things, the
less accurate they will be, adding further complexity to the business of giving
a reliable estimate of future dividends.
Now consider the denominator terms. These are discounting factors which
you need to calculate because money loses its value over time. Let us say that
you use the description given above and you manage to come up with the
estimate that the dividends the company will pay at the end of the period from
t to $t+1$ will be $10. Then you have to ‘discount’ these $10, since $10 at time

$t+1$ is not worth the same as $10 at time t. Just think about inflation: for the
$10 you receive in dividends at time $t+1$, you can buy a little less compared
to what you could have bought for $10 at time t. In (1.3), this is taken into
account by dividing the $10 by a factor $1+r(t+1)$, where $r(t+1)$ is the
interest rate given at time $t+1$. So if, for example, you had a high interest rate
of 10 %, then the $10 you would get paid in dividends at time $t+1$ would only
be worth about $9 in the value of money at time t. Note, however, that in order to
discount the different terms properly you also need to know what the interest
rates will be in the future, another problem in itself.
Having both D and r varying over time considerably complicates future
estimates of the expression on the right-hand side of (1.3), so often in the
literature the interest rate is treated as a constant, with little discussion. One
therefore ends up with the final expression, which in the standard literature is
called the fundamental value:

$$ P_t^* = \sum_{\tau=t+1}^{\infty} \delta^{\tau-t}\, E\left[ D(\tau) \,\middle|\, I_{t-1} \right] , \qquad (1.4) $$

with $\delta \equiv 1/(1+r)$. Formally the solution (1.3) can be seen as a special
case, namely the solution without rational expectation bubbles. This will be
explained in more detail in Sect. 5.2.
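To get a feeling for (1.4) in practice, the following Python sketch (ours, with invented numbers) truncates the infinite sum at a long but finite horizon, assuming a constant interest rate r and dividends growing at a constant rate g < r. Under these assumptions the sum has the well-known closed form D/(r-g), the Gordon growth formula, which the truncated sum should reproduce:

    def fundamental_value(D_next, r, g, horizon=10_000):
        # Truncated version of Eq. (1.4): sum of discounted expected dividends,
        # assuming a constant interest rate r and dividend growth rate g < r.
        delta = 1.0 / (1.0 + r)           # discount factor, delta = 1/(1+r)
        value, dividend = 0.0, D_next
        for k in range(1, horizon + 1):
            value += delta**k * dividend  # one term of the sum in Eq. (1.4)
            dividend *= 1.0 + g           # next period's (assumed) dividend
        return value

    # Invented example: $10 expected dividend, 10 % interest, 2 % dividend growth.
    print(fundamental_value(10.0, r=0.10, g=0.02))  # approx. 125.0
    print(10.0 / (0.10 - 0.02))                     # Gordon closed form: 125.0

The sketch also makes the warning in the grey box tangible: the answer is driven entirely by what one assumes about dividends ‘up until infinity’, and small changes in g or r move the estimated fundamental value considerably.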

To sum up then, the procedure for estimating the fundamental price of a stock
according to classical financial thinking has been given above in terms of an
evaluation of a numerator and a denominator describing future cash flows and
discounting factors in time (i.e., interest rates), respectively. Most important for
the following discussion is the idea according to this way of thinking that the
fundamental price of a stock can only depend on two variables: interest rates and
dividends. According to the classical way of thinking in finance, any factors not
related to these two variables should not matter in determining the pricing of a stock.
Having described in detail how to estimate the fundamental price of a stock, it
is interesting to note that this very procedure can be applied, not just to predict the
proper pricing of a stock today, but also to check the pricing of a stock in the past.
Take for example the stock price of Coca Cola at the beginning of the 1900s. We
are now in a situation where we know the cash flow attributed to Coca Cola after the
year 1900. We can also look up and determine what the interest rates were for the
‘future’ of the year 1900. One can thus apply (1.3) using the dividends and interest
rates that are all known since then, plug them into the formula, and then get an
estimate for what should have been the fundamental value of the Coca Cola stock
at the beginning of the 1900s. We say ‘estimate’ since, as mentioned previously, the
pricing formula sums to infinity, and we are not there yet! Checking the assumptions
behind (1.3) in this way was the idea of the economist Robert Shiller, and it was
one of the reasons Shiller was awarded the Nobel Prize in Economics in 2013. In a
moment, we will present the derivation behind his idea, which led to what is now
known by the technical term 'excess volatility' [48, 125].
Shiller discovered a peculiar thing when he tried to calculate the fundamental
value of an asset in the past by using the dividend and interest data known from
that point up to the present. He found that the fundamental price (derived
according to the rational expectations hypothesis) fluctuated much less than the
real market price observed over the same period. He coined the name 'excess
volatility' to highlight the 'strange' excess of volatility seen in real market
prices compared to what one would expect according to the rational expectations
hypothesis. The notion of excess volatility has been a hotly debated topic ever since,
mainly because of the difficulty in estimating the dividends ‘up to infinity’. We will
not get into this debate, which is technical in nature [49], but will instead give a short
formal derivation of Shiller’s brilliant idea of checking (1.3) by using real market
data.

Note first that P_t^* in (1.3) is not known at time t, but has to be forecast as
described above in order to assign a value to the price P_t of the asset at time t.
Ex post (that is, after the fact, i.e., once you can read the price P_t^* in the
journals), it therefore follows from (1.1) that

P_t^* = P_t + U_t ,   (1.5)

where U_t is a forecast error. In this equation, P_t is known at time t (we can
look up its value in the newspapers), whereas U_t and P_t^* are not. When
making the forecast at time t, one assumes that one has access to all
available information. Since the forecast is assumed to be optimal, U_t must
therefore be uncorrelated with any information variable at time t. P_t, however,
is itself known information at time t, so P_t and U_t must be uncorrelated. But
this means that

\sigma^2(P_t^*) = \sigma^2(P_t) + \sigma^2(U_t) ,   (1.6)

where \sigma^2 denotes the variance (the standard deviation squared) of a variable.
Since the variance of a variable is always positive, this leads to

\sigma^2(P_t^*) > \sigma^2(P_t) .   (1.7)

Equation (1.7) provides a handle for testing the rational expectations theory
in (1.3).

To summarize Shiller's idea, we use historical price data to construct the fun-
damental price P_t^* from the actual dividends paid from time t up to the present
time, discounted by the actual interest rates, and then compare the variance of the
fundamental value P_t^* with the variance of the quoted price P_t over the same time
period [48]. This was applied to the historical data of the Standard & Poor's
Composite Stock Price Index from 1871 to 2002. The evidence clearly showed that (1.7)
was violated, giving rise to a remarkable amount of controversy within the
academic community. This so-called excess volatility is still one of the oft-debated
'puzzles' in finance.
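The decomposition (1.6) and the bound (1.7) are easy to check numerically. The
following toy simulation makes no attempt to reproduce Shiller's actual data
analysis; it simply draws synthetic rational forecasts and independent forecast
errors and verifies both relations:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
P = rng.normal(100.0, 5.0, n)      # rational forecasts, known at time t
U = rng.normal(0.0, 3.0, n)        # forecast errors, uncorrelated with P
P_star = P + U                     # ex-post fundamental value, as in (1.5)

print(np.var(P_star), np.var(P) + np.var(U))   # (1.6): agree up to sampling noise
print(np.var(P_star) > np.var(P))              # (1.7): prints True
# In real market data, Shiller found the opposite ordering: the quoted
# price fluctuates more than the reconstructed fundamental value.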

1.4 Markowitz's Portfolio Theory

Another cornerstone of economic theory and practice is that returns above what is
called the riskless rate come with increased risks. The idea is that if you want to
have a higher return than what you could get by depositing your money in the bank,
then there is a price to pay in terms of taking a higher risk with your money. This
is the basis of Markowitz's portfolio theory [92] and of the Capital Asset Pricing
Model (CAPM) [96], both of which were Nobel Prize winning theories. A simple
way of saying this is that investors want to be compensated for taking risk. That is,
when investors open a financial position, they want to earn a return high enough to
make them comfortable with the level of risk that they are thereby assuming. This
widely accepted principle undoubtedly has some backing from the human psyche:
nothing is free (or put differently, you have to work harder to get more). But as we
shall see in Sect. 1.6, there are actually ways to overcome the constraint imposed by
this principle – the key to doing so lies in examining what is meant by ‘risk’.
In order to see how Markowitz's portfolio theory works in practice, let us consider
the pleasant idea that one day you are unexpectedly given a million dollars. Let us
also imagine that, having resisted spending the money for some time, you make what
the economists would call a ‘sound’ decision to invest the money. But where should
you invest it, and how? Had this question been posed 10 or 20 years ago, the answer
would have been easy, since you would probably just have turned to your family
or friends and heard stories about investing a small amount in the stock market and
getting wealthy by doing so. Today, of course, we live in a different era. But funnily
enough, most economic advisors would probably tell you today, like then, to invest
a large portion of your money in the stock market. So let us say that you decide to
invest 60 % of your million dollars in the stock market and the remaining 40 % in
bonds, since they are usually considered to be a safer investment than stocks. Still
you have to make a decision about exactly which stocks to buy using your $600,000.
You could, of course, make your own stock selections, using the idea behind the
theory of rational expectations described in Sect. 1.2. That is, you begin to select
your stocks using the rationale which tries to estimate how the different companies
will perform in the future. There is a whole industry claiming to be experts on this,
so given this fact you might feel slightly uneasy regarding your abilities to pick the
right ‘winners’ in the pool of stocks. This is the moment in the decision process
where Markowitz's portfolio theory enters the scene. It becomes very useful,
since it tells you how to choose your stocks automatically, without having to try to
predict the performance of each individual stock. The beauty of portfolio theory is
[Figure 1.1: a plot of RETURN (vertical axis) against RISK (horizontal axis).
The attainable portfolios fill a cone-shaped region whose boundary connects the
points a, b, and c; also marked are the tangency portfolio t, a sample portfolio
g, and the risk-free rate R_f on the return axis.]

Fig. 1.1 Return versus risk in Markowitz's approach

that, since most assets do not move up and down in perfect synchrony, you can lower
your risk by spreading out your investments over several assets. As a general
tendency, even on a bad day when many stocks are down, some will be up. This
dampens the risk of holding a portfolio of stocks, compared to placing all your
bets on one or a few stocks. It happens from time to time that a given company
makes bad decisions and its stock loses more than half of its value over the
period of a year, whereas this rarely happens for the stock market as a whole.
As the old proverb goes, it is a bad policy to put all your eggs in one basket:
dropping the basket will break all the eggs, whereas by placing the eggs in
different baskets, one takes advantage of the fact that the baskets are unlikely
all to fall at the same time.
To explain Markowitz's portfolio theory, let us first take a look at the risk–return
plot of Fig. 1.1. In a moment, we will describe how to use this figure, but let us
first come back to the $600,000 that you decided to invest in the stock market. We
suggest that you first get an intuition of how the magic of portfolio theory works by
simply constructing lots of different portfolios and then comparing the results. The
procedure is as follows. To simplify the calculations let us take a stock market with
relatively few stocks, such as the Dow Jones Industrial Average, which contains
only 30 stocks. You then create a portfolio of Dow Jones stocks by allocating parts
of your $600,000 and buying a certain number of shares of each stock.
Let us take one simple example where you put the same weight on each stock,
therefore buying $20,000 worth of each stock. What you want to know is how this
specific portfolio would have behaved, let us say, over the past year. This means that
you now have to look up historical data of how each stock performed daily over the
past year. Nowadays, you can get this data on various websites without having to
pay any fee. Let us just give one example of where you can find this data:

www.finance.yahoo.com/q/cp?s=^DJI+Components
With this data for each stock on your computer, you can calculate the daily return of
this specific portfolio by adding up the returns that you would have obtained from
each stock. Since you have placed the same amount of money in each stock, each
stock has the same weight of 20,000/600,000 = 1/30 in your portfolio. Therefore
you calculate the daily return of this specific portfolio by taking 1/30 times the daily
return of the first stock, adding 1/30 times the daily return of the second stock, and so
on, adding up the contributions from each of the stocks in your portfolio. This tells
you how your portfolio performed over 1 day. If you then add up the daily returns of
your portfolio throughout the year, you get the yearly return of your portfolio, which
is what is plotted on the vertical axis in Fig. 1.1. The performance of this specific
portfolio corresponds to one point on this plot.
Having found the ordinate value of the point for your portfolio, you then need
to calculate the risk of your portfolio over that year (the abscissa value of your
portfolio). That is, you should calculate how big the daily fluctuations of your
portfolio were. In Markowitz's portfolio theory, the risk is given by the standard
deviation. Note that, although this is one measure, it is not the only one. In Sect. 1.6,
we will come back to the size of the daily fluctuations in your portfolio. The
greater the fluctuations in this daily value, the greater the standard deviation. Most
software programs that handle data packages, such as Excel, for example, have
built-in functions to calculate the standard deviation of a time series, otherwise
it requires just a couple of lines in a program to calculate it. Having both the
return on your portfolio after 1 year, which we can call re_1, and the risk
your portfolio experienced during that period, which we can call ri_1, we can plot
a point g = (ri_1, re_1) (indicated by a cross) on the risk–return performance diagram
of Fig. 1.1.
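The whole computation takes only a few lines in, say, Python. In the sketch
below, a synthetic array of daily returns stands in for the real Dow Jones data
you would download; all numbers are invented for illustration:

import numpy as np

rng = np.random.default_rng(1)
# 250 trading days x 30 stocks of simple daily returns (synthetic stand-in)
daily_returns = rng.normal(0.0005, 0.012, size=(250, 30))

weights = np.full(30, 1 / 30)              # $20,000 in each of the 30 stocks
portfolio_daily = daily_returns @ weights  # daily returns of the portfolio
re1 = portfolio_daily.sum()                # yearly return (sum of daily returns)
ri1 = portfolio_daily.std()                # risk: standard deviation of daily returns

print(ri1, re1)                            # the point g = (ri_1, re_1) of Fig. 1.1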
Now repeat the procedure described above, but choosing different weights in
your portfolio. That is, instead of buying for $20,000 of each stock, buy, for
example, $10,000 of the first stock, $30,000 of the second stock, and so on until
you again have allocated your $600,000 among the different stocks. You now have a
portfolio with different amounts of each stock. Calculating the return and risk of this
portfolio, you get a new point (ri_2, re_2) in the risk–return performance diagram of
Fig. 1.1. You can then keep on changing the allocation of your $600,000 by buying
different amounts of different stocks, and thereby create new portfolios with yet
different weights, giving additional points in the diagram of Fig. 1.1.
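Scanning many different allocations is just as mechanical, as in this sketch
(again with synthetic returns standing in for real data):

import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.012, size=(250, 30))  # synthetic stand-in

points = []
for _ in range(2000):
    w = rng.random(30)
    w /= w.sum()                       # normalization: the weights sum to 1
    r = daily_returns @ w
    points.append((r.std(), r.sum()))  # (risk, return) of this allocation
# A scatter plot of these points traces out the cone-shaped region of
# Fig. 1.1, whose upper-left boundary is the efficient frontier.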
Having spent a good deal of time (!) examining these different portfolios and
calculating their performance in terms of data points (ri_n, re_n), a pattern will begin to
appear: all the points will be found to lie inside a cone like the one shown in Fig. 1.1
(see the solid line connecting the three points a, b, and c). An illustration of how this
looks for a real portfolio construction can be seen in Fig. 1.2. This figure actually
shows you two cones, corresponding to two different risk measures – something we
Fig. 1.2 Efficient frontiers of a portfolio. Efficient frontiers for a three-asset portfolio composed
of Chevron stocks, Exxon stocks, and Malaysian currency (ringgit). The solid line shows the
theoretical Markowitz mean-variance efficient frontier, while the dotted line shows the theoretical
mean-(higher moment) efficient frontier, assuming no correlations between the assets. The plus
signs (respectively, open circles) correspond to the empirical mean-variance [respectively, mean-
(higher moment)] portfolios constructed by scanning the weights w1 (Chevron), w2 (Exxon),
and w3 (Malaysian ringgit) in the interval [0, 1] by steps of 0.02, while still implementing the
normalization condition \sum_i w_i = 1. Both families define a set of accessible portfolios excluding
any 'short' positions, and the frontier of each domain defines the corresponding empirical frontier
(The figure is taken from [2])

will come back to in Sect. 1.6. For the moment, just consider the plus signs plotted
in Fig. 1.2, which correspond to the Markowitz case.
Here then comes Markowitz's insight: among the many different portfolios you
have created with a given risk, i.e., the many plus signs in Fig. 1.2, you will naturally
seek the portfolio with the highest return. Or equivalently, for a given return, you
will only want to consider the portfolio that has the smallest risk. Markowitz's
portfolio theory therefore tells you that, among all the possible portfolios that you
can create, you should only pick those that lie on the line between the
points b and a in Fig. 1.1. This line is called the efficient frontier, since no other
attainable portfolio offers a higher return at the same level of risk. The idea is then
that, depending on the risk an investor is willing to take, a rational investor will
always choose the portfolio composition corresponding to the point on the
efficient frontier at this given risk.
So to sum up, in Markowitz's model investors are risk averse, and when choosing
among portfolios, they only take into account the mean and the variance of their one-
period investment. Portfolios are chosen such that (1) the variance is minimized for
a given return and (2) the return is maximized for a given variance. Therefore the
Markowitz approach is often called the standard 'mean-variance' approach.
Another concept that comes out of the Markowitz approach is the distinction
between different kinds of risk, generally referred to as specific risk and
systematic risk. The former describes the risk associated with each individual
asset. In the Markowitz approach, this risk can be reduced through diversification,
and this individual asset risk is therefore often called idiosyncratic risk. However,
for a given fixed return, the optimal, i.e., minimal, risk of the portfolio as a whole
cannot be reduced further, so this risk in the 'system' of your portfolio is said to be
'systematic'.
Finally, let us mention one last result related to Markowitz's portfolio theory: the
two-fund separation theorem. This result shows that one can construct any optimal
portfolio, i.e., a portfolio which lies on the efficient frontier, from two or more
given portfolios that are also efficient. Apart from the theoretical implications of
this theorem, which provides another handle to test the theory, it also has a practical
application, namely, that it can be less costly in terms of transaction costs to hold
a certain ratio of two funds, instead of having to purchase a large number of assets
individually.
In practice, one does not need to carry out the exercise described above to find the
optimal portfolio for a given risk, since Markowitz's portfolio theory provides
formulas for the corresponding weights. The formulas will not be given
here, but can be found in most textbooks on traditional finance. Given the description
above, it seems obvious that one should use Markowitz's theory whenever one wants
to create a portfolio of one's own. After all, who would not aim to choose an optimal
portfolio giving the largest possible return for a given level of risk?
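To give at least a flavour of these formulas, one standard special case is easy
to state and compute: the global minimum-variance portfolio, whose weights are
w = C^{-1} 1 / (1' C^{-1} 1), where C is the covariance matrix of the asset
returns and 1 a vector of ones. A sketch with a synthetic four-asset return
history (the data is invented for illustration):

import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.012, size=(250, 4))  # synthetic return history
C = np.cov(returns, rowvar=False)                   # sample covariance matrix

ones = np.ones(C.shape[0])
w = np.linalg.solve(C, ones)                        # C^{-1} 1
w /= ones @ w                                       # normalize: weights sum to 1

print(w, w.sum())   # the global minimum-variance weights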
As usual, the caveat lies in the assumptions behind the theory. Markowitz's
portfolio theory considers investments over one time period: an investor selects
a portfolio at time t that produces a return at time t + 1. It is implicitly assumed
that the future will be like the past. Expressed more explicitly, it is assumed
that the probability distribution function of the returns on the assets in the portfolio
does not change between times t and t + 1, and that the correlations between the
different stocks remain constant. But this clearly does not hold in the real world.
One can elaborate on this by introducing economic models that project how the
future will behave, creating scenarios for how stock returns may change, and then
applying Markowitz's method to these projections, but this raises new
problems about the method itself. Worse, even with perfect knowledge about
the future, the method can fail disastrously when it comes to preventing big risks
affecting one's portfolio. As we shall show by means of a practical example in
Sect. 1.6, even with perfect knowledge of the future probability distributions of
the stocks, using Markowitz's portfolio method one can end up in a situation
where one takes much greater risks and obtains a much smaller return compared to
other methods! We will suggest instead trying to manage large risks, ignoring small
and intermediate risks, and we shall thus show that one can actually simultaneously
gain in returns. Considered from this standpoint, Markowitz's method thus sounds
less than optimal. This point will be explained in detail in Sect. 1.6.

1.5 The Capital Asset Pricing Model

Sharpe [122] and Lintner [86] added an assumption to the Markowitz model. They
assumed borrowing/lending at a risk-free rate which is the same for all investors
and does not depend on the amount borrowed or lent. As will be shown below, this
together with the assumption of rational expectations of investors led to a formula
expressing how to price individual assets in terms of the performance of the market
in general.
Consider once again Fig. 1.1, which shows the risk versus the return of a
portfolio. As we saw in the last section, the curve traced out by connecting points
a, b, and c encloses all possible portfolios created by all possible combinations of
weights in a portfolio. Without risk-free borrowing or lending, Markowitz showed
that only the frontier ba describes portfolios that are mean-variance efficient, since
only portfolios above b along ba maximize the expected return given a fixed
expected variance of the return. We shall use this figure to derive the CAPM in
the box below.
The CAPM is famous, since it provides a tool (an equation) to find the proper
price of a stock. It does so by taking into account the way the market is priced and
the way the stock is correlated to the price movements of the market. Basically, a
stock that moves in exact synchrony with the market should earn the same expected
return as the market. Stocks that are more volatile than the market should be priced
differently. The idea of pricing a stock in accordance with the way the market
behaves is something we will come back to in Chap. 2. However, in that case, it
is the ‘sentiment’ of a stock with respect to a general ‘sentiment’ of the market that
plays a role, relatively speaking.

The introduction of borrowing/lending at a risk-free rate by Lintner and
Sharpe turns the efficient set into a straight line. If all funds are invested in
the risk-free asset, i.e., all funds are loaned at the risk-free rate R_f, one gets a
portfolio at the point R_f in Fig. 1.1, i.e., a portfolio with zero variance and a
risk-free rate of return R_f. If instead one progressively shifted a fraction
1 - x of the funds into the risky assets, one would move along the line joining
R_f and g, where the point g corresponds to investing 100 % in a portfolio of
risky assets. Points to
the right of g correspond to investing more than 100 % by borrowing at the
risk-free rate. Formally, this can be written

R_p = x R_f + (1 - x) R_g ,   (1.8)

E(R_p) = x R_f + (1 - x) E(R_g) ,   (1.9)

\sigma(R_p) = (1 - x) \sigma(R_g) .   (1.10)
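To make (1.8)–(1.10) concrete, here is a small numeric sketch; the risk-free
rate and the statistics of the risky portfolio g are assumed values, not taken
from any data:

import numpy as np

Rf, E_Rg, sigma_g = 0.02, 0.08, 0.15  # assumed risk-free rate and risky stats

for x in np.linspace(1.0, -0.5, 4):   # x < 0 means borrowing at the risk-free rate
    E_Rp = x * Rf + (1 - x) * E_Rg    # expected return, eq. (1.9)
    sigma_p = (1 - x) * sigma_g       # risk, eq. (1.10)
    print(round(x, 2), round(E_Rp, 4), round(sigma_p, 4))
# All the points (sigma_p, E_Rp) lie on the straight line through (0, Rf)
# and (sigma_g, E_Rg) in the risk-return plane of Fig. 1.1.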

But the portfolio g in Fig. 1.1 is not mean-variance efficient. In order to get a
mean-variance-efficient portfolio with the greatest return per unit of risk-free
borrowed money (the largest slope of the line), one swings the line from R_f
up and to the left as far as possible, which gives the tangency portfolio t.
At this point enters the assumption that all investors in the market
use rational expectations. Since all investors agree completely about the
distribution of returns, they all see the same optimal solution and combine the
same risky tangency portfolio t with risk-free lending/borrowing. Because all
investors have the same portfolio t, this has to be the value-weighted market
portfolio. That is, each asset’s weight in the market portfolio M must be given
by the total market value of all outstanding units of the asset divided by the
total market value of all risky assets. In short, the tangency portfolio is the
market portfolio, t = M.
The important point in Sharpe and Lintner's argument is that knowing
t = M in turn gives a benchmark for the returns of each individual asset.
Since an asset which has the same time series of returns as the market portfolio
M should give exactly the same return as the market, one gets

E(R_i) = E(R_f) + [ E(R_M) - E(R_f) ] \beta_i ,   i = 1, ..., N ,   (1.11)

\beta_i = COV(R_i, R_M) / \sigma^2(R_M) ,   (1.12)

where COV(R_i, R_M) is the covariance of the return of asset i with the market
return, and \sigma^2(R_M) is the variance of the market return. An asset whose
return series exactly follows that of the general market has covariance with the
market return equal to the market variance, COV(R_i, R_M) = \sigma^2(R_M), so
from (1.12) its beta is \beta_i = 1, and (1.11) then gives E(R_i) = E(R_M): the
asset earns the same expected return as the market.

Equations (1.11) and (1.12) constitute the famous CAPM relation, which allows one to
price an asset, i.e., determine E(R_i), from knowledge of a certain 'market portfolio',
the correlation of the stock with this market portfolio, and the risk-free return R_f.
The market portfolio should in
principle include not just all traded financial assets, but also consumer durables, real
estate, and human capital, features that are impossible to estimate in practice. Even
using a more limited view of the ‘market portfolio’ as just financial assets would
still mean including all quoted assets worldwide, and this is another exercise that is
hardly feasible in practice.
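In practice, estimating the beta of (1.12) from two return series is
straightforward. In this sketch a synthetic 'market' series is used in place of
real data, and the asset is constructed with a true beta of 1.3 so that the
estimate can be checked:

import numpy as np

rng = np.random.default_rng(3)
r_market = rng.normal(0.0003, 0.010, 1000)              # proxy market returns (synthetic)
r_asset = 1.3 * r_market + rng.normal(0, 0.008, 1000)   # asset with true beta 1.3

beta = np.cov(r_asset, r_market)[0, 1] / np.var(r_market, ddof=1)  # eq. (1.12)
print(beta)   # close to 1.3, up to sampling noise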
Instead, a typical choice in the financial literature is just to take the US common
stocks, but this is strictly speaking not a ‘market portfolio’ in the CAPM sense.
It should therefore be noted [137] that the CAPM can be seen as describing an
idealized self-consistent market, resulting from a convergence to an equilibrium
point in which all agents optimize à la Markowitz. In doing so, they shape the
structure of the statistical properties of the returns through their collective decisions.
Conversely, the statistical structure of the stock returns (captured by expected
returns and the covariance matrix) shapes the investors' decisions. This is a chicken-
and-egg situation: which came first? In this sense the CAPM can be seen as a
theory about the self-consistent fixed point reached by a self-organizing dynamics.
For complexity researchers, this is an important point, because the CAPM can be
viewed as an example of how self-organization and self-consistency also underpin
some of the most important pillars of financial economic theory.
When we begin to discuss ‘sentiments’ in the next chapter, we will derive
another pricing formula, somewhat similar to (1.12). In that case, the question is
how such ‘sentiments’, or possibly insider-trading, can influence the pricing of
stocks. However, the pricing formula will in that case follow from a different line of
argument.

1.6 Have Your Cake and Eat It: Creating a Non-Markowitz Portfolio

Having gone through a lot of theoretical arguments in the previous sections,
it is time to get practical. The idea is to gain some intuition about portfolio
construction by showing examples from empirical data. At the same time, this
will give us the opportunity to highlight some of the problems with the Markowitz
portfolio selection method, problems that are not discussed in most textbooks on
finance.
In particular, we will take a closer look at what is meant by 'risk' in the
Markowitz approach. Taking the variance of the returns of the assets as a measure
of risk makes perfectly good sense when the probability distribution function is a
normal distribution, since the fluctuations around the mean are then completely
described by a single parameter, the variance. However, as we shall see, the problem
is that empirical data is, in general, not normally distributed. Actually, more often
than not, one sees probability distributions with 'fat tails' [111], meaning that large
and/or extreme events are more likely to occur than the normal distribution assumes.
This is problematic since the Markowitz procedure is designed to 'tame' events that
are not extreme. When the probability distribution functions are fat-tailed, the
variance does not give a good description of rare events, because it does not describe
what happens out in the tail, so to speak. As we shall see, the Markowitz optimization
procedure can then lead to an optimization that only controls the small and
intermediate events, but does not take into account the large events occurring out
in the tails.
Fig. 1.3 Making a portfolio of just two assets. The central square gives an illustration of an
empirical bivariate probability distribution of daily returns made from two assets. The two assets
are the Chevron stock and the Malaysian currency (ringgit) in US dollars, sampled over almost
three decades from 1 January 1971 to 1 October 1998. Each point in the central square corresponds
to a daily joint event (r_ringgit, r_chv) of given returns on the Malaysian ringgit and the Chevron stock.
For clarity, only a quarter of the data points are represented. Notice that we need to know the
bivariate probability distribution if we want to construct a portfolio from the two assets, since
in order to know the return/risk of the portfolio, we need to know how the two assets move
jointly on a given day. Only in the special case where the price movements of the two assets
are completely uncorrelated can we determine the behavior of the portfolio directly from the two
marginal distributions. The time series of the two assets can be seen at the top (Malaysian ringgit)
and on the side (Chevron stock). The empirical marginal distributions constructed from the time
series can be seen below (Malaysian ringgit) and to the left (Chevron stock) of the time series
represented by circles. The best Gaussian fits to the marginal distributions are represented by
fat solid lines, while thin solid lines represent best fits with a modified Weibull distribution (for
more information see [2]). It is clear that the marginal distributions are not well described by
the Gaussian distributions due to the ‘fat tails’ seen for large positive/negative returns. This is in
particular the case for the Malaysian ringgit

To illustrate the problem encountered by the Markowitz portfolio in 'taming'
extreme events, we consider the simplest possible example of a portfolio, containing
just two assets. Figure 1.3 illustrates the daily time series of the two assets: the
Malaysian currency (ringgit) shown at the top of the plot and the Chevron stock
shown on the top right.
[Figure 1.4: three panels against time. Top: daily returns of the Markowitz
portfolio (weight w1 = 0.095). Middle: daily returns of the HM portfolio
(weight w1 = 0.38). Bottom: the cumulative wealth of the two portfolios.]

Fig. 1.4 Having your cake and eating it. The uppermost plot shows the daily variations in the
returns of a Markowitz portfolio, whereas the middle plot shows the daily variations of the return
of the HM portfolio for the same two assets (Chevron stock and Malaysian currency). As can be
seen by comparing these two plots, the HM portfolio avoids taking the big risks seen in the top plot
for the Markowitz portfolio. The price to pay is accepting more intermediate risks, as seen from the
generally noisier structure of the HM return fluctuations in the middle plot. However, apart
from taking fewer big risks than in the Markowitz approach, there is another appealing reward, viz.,
a larger return. This can be seen from the bottom plot, which gives the cumulative returns of the
two portfolios. The HM portfolio (with weight w1 = 0.38) gains almost three times as much as
would be obtained by the Markowitz method. For further explanation of the method, see [2]

Given the two time series, one can then construct the (non-normalized)
empirical probability distribution function of their daily returns, shown beneath each
time series by circles in a lin–log plot. The best Gaussian fit is shown as a thick solid
line. It can be seen that the empirical data does not fit the Gaussian description at
all, particularly for the Ringgit currency, with some particularly disturbing events
having returns in excess of ±10 %. These are clearly events one would like to tame
and/or control in any portfolio construction, but as we shall see, since the Markowitz
approach only focuses on the variance, such events are not properly accounted for.
The large square in the middle of the plot is the empirical joint distribution of the
two assets, where each point corresponds to a joint event (that is, a pair of returns)
occurring on the given day.
Figure 1.4 shows the daily fluctuations of the two different portfolio construc-
tions (two upper plots), along with their performance (lower plot). The uppermost
plot shows the daily fluctuations of the Markowitz portfolio versus time, while
the middle plot shows the daily fluctuations of the higher order moments (HM)
portfolio. Note that the Markowitz portfolio does a good job of limiting the
fluctuations of the portfolio most of the time, but there is a price to pay, as
seen by the big spikes (positive as well as negative) that occur from time to time. In
contrast, the HM portfolio looks noisier since it tends to take more intermediate-
size risks. However, the HM portfolio largely avoids the spikes suffered by the
Markowitz portfolio. This can be seen by looking at the horizontal dotted lines in
the daily return plots, which mark the maximum returns sampled for the
HM portfolio. Notice that the daily returns of the Markowitz portfolio exceed these
bounds.
To sum up then, minimizing small risks can in some cases lead to a dangerous
increase in large risks [2]. Furthermore, the cumulative wealth of the Markowitz
portfolio is drastically inferior to that accrued by the HM portfolio: at the end
of the time period, the Markowitz portfolio has gained twofold whereas the HM
portfolio has gained sixfold. In other words, you can have your cake and eat it too,
simultaneously decreasing the large risks and increasing the profit! This example
illustrates how misleading it can be to focus on the variance as a suitable measure
of risk, and highlights the limitations of standard portfolio optimization techniques.
Not only do they fail to provide a suitable quantification of the really dangerous
market moves, but in addition they miss important profit opportunities.
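The contrast between the two notions of risk can be illustrated by a toy
computation. The sketch below is not the actual HM procedure of [2]; it merely
compares, for two synthetic assets (one fat-tailed, one Gaussian), the weight
chosen when 'risk' means the variance with the weight chosen when 'risk' means
the fourth moment, which penalizes large excursions far more heavily:

import numpy as np

rng = np.random.default_rng(4)
n = 20_000
a = 0.01 * rng.standard_t(df=5, size=n)   # fat-tailed asset (Student-t returns)
b = rng.normal(0.0, 0.012, size=n)        # thin-tailed (Gaussian) asset

def risk(w, moment):
    """Central moment of the returns of the portfolio w*a + (1-w)*b."""
    r = w * a + (1 - w) * b
    return np.mean((r - r.mean()) ** moment)

ws = np.linspace(0, 1, 101)
w_var = ws[np.argmin([risk(w, 2) for w in ws])]   # variance-minimizing weight
w_hm = ws[np.argmin([risk(w, 4) for w in ws])]    # fourth-moment-minimizing weight
print(w_var, w_hm)   # the two criteria generally select different portfolios:
                     # the fourth moment shuns the fat-tailed asset more strongly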

1.7 Critics of the Traditional Viewpoint

We have discussed here various shortcomings in the assumption of rational
expectations, as well as some of the problems within the framework of the
CAPM and Markowitz's portfolio theory. One additional shortcoming was
discussed in the last section and concerned the notion of risk, which in the
Markowitz and CAPM picture is described by the standard deviation of the
return. This is a one-dimensional measure, whereas the fundamental decision
problem involves in principle an infinite number of dimensions, given only
by full knowledge of the probability distribution of the portfolio returns,
as we discussed in the last section. In the last chapter of the book, we
will discuss yet another caveat in controlling risks, one which also lies beyond
the control of the Markowitz framework: systemic risks, which describe the
simultaneous failure of an entire system, as seen for example during 'runs' on a
bank.
There is no shortage of criticisms of the foundations we have been discussing so
far in this chapter, in particular concerning the core assumptions they make. Without
offering a detailed discussion on such criticism, let us instead refer to a particularly
clear example of it in the following excerpt from an article by Philip Ball (a long-
standing editor for Nature), published in the Financial Times of 29 October 2006.
It is worth noting that the article was published well before the 2008 worldwide
financial crisis. We give the floor to Philip Ball:
Baroque Fantasies of a Peculiar Science

by Philip Ball, published in the Financial Times of 29 October 2006

It is easy to mock economic theory. Any fool can see that the world of
neoclassical economics, which dominates the academic field today, is a gross
caricature in which every trader or company acts in the same self-interested
way, rational, cool, omniscient. The theory has not foreseen a single stock
market crash and has evidently failed to make the world any fairer or more
pleasant.
The usual defense is that you have to start somewhere. But mainstream
economists no longer consider their core theory to be a start. The tenets are so
firmly embedded that economists who think it is time to move beyond them
are cold-shouldered. It is a rigid dogma. To challenge these ideas is to invite
blank stares of incomprehension. You might as well be telling a physicist that
gravity does not exist.
That is disturbing because these things matter. Neoclassical idiocies per-
suaded many economists that market forces would create a robust post-Soviet
economy in Russia (corrupt gangster economies do not exist in neoclassical
theory). Neoclassical ideas favouring unfettered market forces may determine
whether Britain adopts the euro, how we run our schools, hospitals and
welfare system. If mainstream economic theory is fundamentally flawed, we
are no better than doctors diagnosing with astrology.
Neoclassical economics asserts two things. First, in a free market, compe-
tition establishes a price equilibrium that is perfectly efficient: demand equals
supply and no resources are squandered. Second, in equilibrium no one can
be made better off without making someone else worse off.
The conclusions are a snug fit with rightwing convictions. So it is
tempting to infer that the dominance of neoclassical theory has political
origins. But while it has justified many rightwing policies, the truth goes
deeper. Economics arose in the eighteenth century in a climate of Newtonian
mechanistic science, with its belief in forces in balance. And the foundations
of neoclassical theory were laid when scientists were exploring the notion of
thermodynamic equilibrium. Economics borrowed wrong ideas from physics,
and is now reluctant to give them up.
This error does not make neoclassical economic theory simple. Far from
it. It is one of the most mathematically complicated subjects among the
‘sciences’, as difficult as quantum physics. That is part of the problem: it is
such an elaborate contrivance that there is too much at stake to abandon it.
It is almost impossible to talk about economics today without endorsing its
myths. Take the business cycle: there is no business cycle in any meaningful
sense. In every other scientific discipline, a cycle is something that repeats

periodically. Yet there is no absolute evidence for periodicity in economic
fluctuations. Prices sometimes rise and sometimes fall. That is not a cycle;
it is noise. Yet talk of cycles has led economists to hallucinate all kinds of
fictitious oscillations in economic markets. Meanwhile, the Nobel-winning
neoclassical theory of the so-called business cycle ‘explains’ it by blaming
events outside the market. This salvages the precious idea of equilibrium, and
thus of market efficiency. Analysts talk of market ‘corrections’, as though
there is some ideal state that it is trying to attain. But in reality the market is
intrinsically prone to leap and lurch.
One can go through economic theory systematically demolishing all the
cherished principles that students learn: the Phillips curve relating unemploy-
ment and inflation, the efficient market hypothesis, even the classic X-shaped
intersections of supply and demand curves. Paul Ormerod, author of The
Death of Economics, argues that one of the most limiting assumptions of
neoclassical theory is that agent behaviour is fixed: people in markets pursue
a single goal regardless of what others do. The only way one person can
influence another’s choices is via the indirect effect of trading on prices. Yet
it is abundantly clear that herding – irrational, copycat buying and selling –
provokes market fluctuations.
There are ways of dealing with the variety and irrationality of real agents
in economic theory. But not in mainstream economics journals, because the
models defy neoclassical assumptions.
There is no other ‘science’ in such a peculiar state. A demonstrably false
conceptual core is sustained by inertia alone. This core, ‘the Citadel’, remains
impregnable while its adherents fashion an increasingly baroque fantasy. As
Alan Kirman, a progressive economist, said: “No amount of attention to the
walls will prevent the Citadel from being empty.”
2 Behavioral Finance

2.1 Introduction

The aim of this chapter will be to introduce the reader to models of financial markets
inspired by psychology. The chapter can in a sense be seen as a prelude to the
rest of the book, but with the caveat that it only treats individual, or averaged
individual, behavior without taking into account collective (sociological) effects on
price formation in the financial markets. Collective effects and complexity models
describing financial market price formation will then be introduced in the following
chapters.
The origins of behavioral finance can be traced back to questions about the
validity of the assumption of rational expectations in decision-making and the theory
based upon it, which surfaced in the 1980s. Criticisms were notably raised by a
succession of discoveries of anomalies, and in particular the evidence of excess
volatility of returns described in Sect. 1.3. Since the reported excess volatility raised
some of the very first empirically founded questions relating to the efficient market
theory, it became a subject of heated and continuing academic dispute, but the details
of these controversies will not be presented to the reader in this book.
The field of decision-making itself became a research topic in the field of
psychology in the 1950s, through the work of Ward Edwards, and also Herbert A.
Simon [127], who introduced the concept of decision-making based on bounded
rationality. However, it was not until the work of Daniel Kahneman and Amos
Tversky that results from cognitive psychology found their way into economics and
finance. In this chapter, we shall give an overview of the kind of irrational beliefs
that commonly enter into decision-making. We shall also discuss the implications
for financial markets of the Nobel Prize winning prospect theory due to Kahneman
and Tversky.
After introducing the reader to various cognitive processes and human biases,
we return to the general problem of the way human emotional and cognitive biases
impact pricing in financial markets. One issue in particular is, if such human
biases occur in the context of pricing in financial markets, how will they manifest


themselves and how can one detect this? Typically, biases are put forward as
postulates in the field of behavioral finance, and have been tested in well-controlled
laboratory experiments. However, the impact they might have in financial markets
is still very much disputed. Critics typically argue that experimentally observed
behavior is not applicable to market situations, since learning and competition will
ensure at least a close approximation of rational behavior [46]. To probe what could
be the impact and signature of behavioral biases in financial markets, it is therefore
important to suggest tests and tools that apply directly to financial market data.

2.2 Cognitive Processes: The Individual Level

Almost everyone would agree that decisions made by humans differ from the predic-
tions of rational models. Opinions differ, however, as to whether these differences
can be treated as noise that will cancel out in a higher level of description, or whether
the difference is so fundamental that standard economic theory cannot be treated as
an adequate model of human economic behavior. The difference exists on both the
individual and the social level. The line of research in cognitive psychology that
resulted in prospect theory, which we will explain shortly, clearly pointed out how
individual decision-makers differed from the rational model.
Individual decisions and judgments differ from the prescriptive models in many
ways. These differences involve cognitive processes, motivation, emotion, self-
structure, and personality factors. The decisions of individuals also strongly depend
on how the alternatives are framed, rather than just being based on objective
probabilities and outcomes. Individuals use heuristics to estimate probabilities
rather than deriving them from the frequencies of occurrence of events. The
availability heuristic, for example, describes a process whereby the decision-maker
estimates the probability of events on the basis of how easily those events can be
recalled from memory.
Other challenges to the view that humans are rational decision-makers come
from cognitive psychology. The results of many experiments show that humans do
not use the rules of mathematics and logic as suggested by decision theory, but
instead base their judgments on schemas. This leads to a dependence of information
processing on context. Child street vendors in Brazil, for example, are perfectly
capable of performing arithmetic operations using money in the context of selling,
but are unable to repeat the same operations on abstract numbers in a laboratory
situation [23]. Wason’s card selection task provides another example of this [70].
Imagine that you are working in a company producing games. Your task is to
check whether the cards follow the rules of the game. For a certain game that the
company produces all the cards must follow the rule: If a card has a vowel on one
side, then it must have an even number on the other side. You know that all the cards
have a number marked on one side and a letter on the other. In front of you there are
four cards. Marked on their upper sides are the following:

E K 5 4
Which cards do you have to turn over to make sure that none of them
violates the rule? The correct answer is 'E' and '5'. This is because, if there were
an odd number, say 3, on the other side of E (which is a vowel), this card would
violate the rule. But the rule would also be violated if there were a vowel, say A, on
the other side of 5, since the rule dictates that a vowel on one side implies an even
number on the other. Nothing on the other side of K could violate the rule, since the
rule does not state what should be on the other side of consonants. By the same
reasoning, the card with 4 cannot possibly violate the rule. This puzzle is quite
difficult, and even people trained in mathematics often have problems finding the
correct answer: in the original study, fewer than 10 % of participants solved the
puzzle correctly, and the vast majority of people make mistakes [147]. This
is because people do not usually use the rules of formal logic in their reasoning,
and because this example does not resemble any problem they have encountered
before.
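The logic of the task is easy to make explicit in code. The representation of
the cards below is of course ours, not part of the original study:

def must_turn(visible):
    """Could the hidden side of this card reveal a rule violation?
    Rule: a vowel on one side implies an even number on the other."""
    if visible in "AEIOU":        # vowel showing: the hidden number must be even
        return True
    if visible.isdigit() and int(visible) % 2 == 1:
        return True               # odd number showing: the hidden letter must not be a vowel
    return False                  # consonants and even numbers cannot violate the rule

print([card for card in ["E", "K", "5", "4"] if must_turn(card)])   # ['E', '5']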
Now consider the following puzzle. Imagine you work in a bar. The rule is
that, if a customer drinks alcohol, that customer must be over 18 years old. The
bar is dark and you see someone’s hand holding a glass of whiskey and another’s
holding a Coca Cola, but you do not clearly see the faces associated with the hands.
You also see two other individuals, one with the face of someone who looks about
16 years old and another with a face that looks over 40 years old. Which of the four
individuals do you need to check in order to know how old they are or what they are
drinking? The answer is straightforward and usually people have no problems with
an answer: one should check the age of the person drinking whiskey and determine
what the 16 year old is drinking. Logically, the card game and the bar example are
equivalent. The reason why the bar example is so much easier is that it follows a
permission schema, a schema that is often used in reasoning in everyday life.
The conclusion we can draw is that, even in problems that have clear formal
solutions, people usually use schemas. So to solve problems and make decisions,
people are often likely to search for the most relevant schema and use that to arrive
at the answer. It follows that, if more than one schema is available, the choice of
schema is likely to influence decisions. This effect is often called framing [141].
Therefore, the context in which the decision is made can influence the choice of
schema, and the chosen schema will be used as the tool for decision-making.
Another set of challenges to the view of humans as rational decision-makers
comes from personality psychology and social psychology. Human decision-making
is shaped by various motives. It also depends on emotions.

2.2.1 Motives

In economics it is assumed that humans maximize their outcomes. These outcomes
are usually defined in terms of the expected value that would result from the choice.
When making a decision, however, individuals often try to maximize more than one
outcome. For example, when making money, individuals may also be trying to win
approval or make new friends. Choices that seem irrational from the perspective
of one goal may turn out to be rational if we take all the goals into consideration.
It is also the case that decisions that satisfy all the goals of an individual will be
preferred and made faster than decisions that satisfy only some goals but frustrate
other goals [78].
Economic theory assumes self-interest. Psychological research has revealed that
people’s choices may be governed by different motives and value orientations [61,
97]. Although prevalence of self-interest is quite common, individuals are often
oriented toward cooperation, trying to maximize both their own outcomes and those
of a partner. Another frequent motive is competition, where individuals, rather than
trying to maximize their own gain, attempt to maximize the difference between their
own outcome and the outcome of a partner. In opposition to this is an egalitarian
orientation, where individuals try to keep their own outcomes and the outcomes
of the partner even. Value orientation depends on personality and other individual
characteristics of the decision-maker and the relation between the decision-maker
and the partner, but also the situation, cultural considerations, and the nature of the
decision.

2.2.2 Emotions

Human decisions and judgments are also strongly shaped by emotions. Common
sense analysis of market dynamics is often made in terms of the dominance of
fear or greed. Many lines of research in psychology have proven that emotions
influence memory, information processing, judgments, and decisions. Positive
emotions, for example, facilitate action and risky decision-making, while negative
emotions prompt individuals to refrain from acting and encourage safety-seeking.
The emotional congruency effect describes the tendency to recall positive memories
when individuals experience positive emotions, and to recall negative memories
when in a negative mood.
There is also evidence both from psychology [35,107,154] and neurophysiology
that decisions are often made on the basis of emotions, while cognitive processes
serve to justify the decisions.

2.2.3 Self-Structure

Self-structure is the largest and most powerful of psychological structures. It
encodes all self-relevant information and performs regulatory functions with respect
to other psychological structures. It is also a strong source of motivation. Two
motives dominate the regulatory functions of self-structure. The self-enhancement
motive drives the tendency for positive self-evaluation. Conforming to social norms,
seeking approval, striving to win against the competition, or trying to look attractive
all stem from the self-enhancement motive.
People also have a tendency to confirm their view of themselves. This tendency
is described as self-verification. If someone believes that he or she cares about the
environment, he or she will likely donate to environmental causes. If someone
believes he or she has no talent for math, this person is likely to do poorly in the
subject. In other words, people are likely to engage in actions congruent with the
beliefs they have about themselves.
Beliefs concerning the self may originate in feedback from others. This process
is called labeling. If others perceive an individual as a risky player, for example, the
individual will be more likely to take risks. What is interesting is that people have
the tendency to construct beliefs of themselves on the basis of observing their own
actions. In effect, if individuals for some reason engage in a behavior, e.g., trading,
this may lead to the development of a self-view as a trader, and this, in turn, may
encourage them to engage in further trading.
The two modes of functioning of self-structure are called promotion and
prevention [63]. In the promotion mode, individuals are oriented toward achieve-
ment. They are seeking situations where they can succeed, be noticed, win a
competition, etc. Their emotions oscillate between the happiness of success and
the sadness or anger associated with failure. In the prevention mode, individuals
aim to minimize losses. Everything happening around them is perceived
as a potential threat. Individuals have the tendency to refrain from action since an
action could potentially result in negative consequences. Their behavior is aimed
at self-protection. Their emotions vary between anxiety and relaxation. Whether an
individual is in the promotion or prevention mode depends to some extent on the
individual’s own general characteristics, but also on the situation and their current
mood. Factors that are highly valued in promotion mode may have a negative effect
in prevention mode.
Economic theory assumes that rational individuals take into account all the
relevant information. Research in psychology shows that the decision-making
process consists of different stages. Up until a certain moment, individuals are
open to all incoming information. When they arrive at a decision, however, their
information processing changes qualitatively. They selectively seek information that
supports their decision, and avoid information that contradicts their decision. They
are no longer open to arguments. This phenomenon is called cognitive closure.
Individuals differ in the strength of their tendency for cognitive closure [78].
Individuals characterized by a high tendency for cognitive closure form opinions
relatively soon, after hearing a small amount of information. From this moment they
concentrate on defending the opinion they have already formed. Individuals with a
low tendency for cognitive closure do not form fixed opinions for a relatively long
time. They are open to new incoming information and arguments, and are ready to
adjust their opinions.
The tendency for cognitive closure also depends on the situation. Situations in
which it is difficult to concentrate, for example, with a lot of noise, encourage
early cognitive closure. Cognitive overload, i.e., trying to perform multiple mental
operations at the same time, also tends to accelerate cognitive closure. Time pressure
has similar effects. In summary, although information may be available, some
individuals under some circumstances are likely to ignore information that is
contrary to an opinion they have already formed.
Deviations from rationality on the individual level are not necessarily
incompatible with existing theories in economics. Although each individual's
decisions are to some extent irrational, it may be that these deviations from
rationality can be treated as errors or noise; while this may be quite pronounced
at the individual level, the errors of many individuals taken together at the group
level could cancel out, provided that the individual errors are not correlated, i.e.,
provided that they are independent of each other.
If the errors of different individuals are correlated, as for example in the case of
false information influencing a large proportion of traders, individual errors could
add up, rather than cancel. In this case the predictions of the rational model may
still be corrected by adjusting the outcome in a way that reflects the influence of the
external factor. Social processes occurring among individuals can make the outcome
of a group process very different from the sum of the outcomes of individual
processes. Processes occurring at the group level cannot be reduced to processes
occurring at the individual level.

2.2.4 Biases

In the following, we introduce the most commonly reported and empirically verified
human biases. The list of biases given here is by no means meant to be exhaustive
and we will only give a rather crude introduction to the topic.

Framing. Framing refers to the cognitive bias in which people make different
decisions depending on the way a problem is presented to them. If one frames the
same question in different ways, people will give different answers, even though the
underlying question is the same [74]. The example in [141]
gives a clear illustration of this bias. In a thought experiment, participants were
offered two different solutions to save lives in a situation where 600 people had
contracted a deadly disease. In the first solution, labelled A, they were offered two
choices:
• Save 200 people's lives out of the 600.
• Save all 600 people's lives with a 33 % chance, with a 66 % chance of saving no one.
In the second solution, B, the participants were offered exactly the same scenario but
described differently:
• Let 400 die, but save 200.
• With a 33 % chance you save all 600 lives, but with a 66 % chance all 600 people
will die.
Given the way the problem is presented in solution A, people will in general opt for
the first choice, because the second seems risky. However, in solution B, people will
instead opt for the second choice. In this description, it is the hope of saving people
that makes the second choice attractive. Notice that the two solutions A and B are
the same but framed differently, through a different use of words.
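Indeed, a short expected-value computation confirms that the two framings are
numerically identical (taking the '33 %' chance to be exactly one third):

from fractions import Fraction

p = Fraction(1, 3)              # the '33 %' chance, taken as exactly one third
A1 = 200                        # solution A, first choice: 200 saved for sure
A2 = p * 600 + (1 - p) * 0      # solution A, second choice
B1 = 600 - 400                  # solution B, first choice: 400 die, so 200 are saved
B2 = p * 600 + (1 - p) * 0      # solution B, second choice
print(A1, A2, B1, B2)           # 200 200 200 200: only the wording differs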
Overconfidence. Extensive evidence shows that people are overconfident in their


judgments. Two different but related forms appear:
• The confidence intervals that people assign to their estimates of quantities are
usually far too narrow. A typical example is the case where people try to estimate
the level of a given stock market index a year from today.
• People are very poor when it comes to actually estimating probabilities. We will
discuss this point in more detail below, when discussing prospect theory.

Law of Small Numbers. This judgmental bias arises because individuals assume
that the characteristics of a population can be estimated from a small number of
observations or data points. According to the law of large numbers, the distribution
of the mean of a large sample of independent observations of a random variable
is concentrated around the true mean, with a variance that goes to zero as the
sample size increases. In contrast, the law of small numbers in psychology [73]
describes the bias people exhibit when they neglect the fact that the variance of the
sample mean is larger for a small sample than for a large one. One manifestation
of this bias is the belief that a small and a large hospital are equally likely to
record a daily birth rate of girls well in excess of 50 %. Because the variance of
the daily proportion of girls is greater in a small hospital, such a day is actually
more likely to occur at a small hospital than at a large one.
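
To see the arithmetic behind the hospital example, here is a minimal sketch using an exact binomial model. The hospital sizes (15 and 45 births per day) and the 60 % threshold are illustrative assumptions in the spirit of the classic Kahneman and Tversky version of the problem, not numbers taken from the text.

```python
from math import comb

def prob_fraction_exceeds(n_births: int, threshold: float, p_girl: float = 0.5) -> float:
    """P(fraction of girls > threshold) for n_births independent births (binomial model)."""
    k_min = int(n_births * threshold) + 1   # smallest count strictly above the threshold
    return sum(comb(n_births, k) * p_girl**k * (1 - p_girl)**(n_births - k)
               for k in range(k_min, n_births + 1))

# Illustrative sizes: 15 births/day in the small hospital, 45 in the large one.
print(prob_fraction_exceeds(15, 0.6))   # ~0.15: such days are fairly common
print(prob_fraction_exceeds(45, 0.6))   # ~0.07: and much rarer in the large hospital
```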

Self-Attribution Bias and Hindsight Bias. Overconfidence may in part stem from
two other biases, self-attribution bias and hindsight bias:
• Self-attribution refers to people’s tendency to ascribe any success they have in
some activity to their own talent, while blaming failure on bad luck.
• Hindsight bias refers to people’s tendency to believe, after an event has occurred,
that they predicted it before it actually happened.

Optimism and Wishful Thinking. Most people display an unrealistically rosy
view of their abilities and prospects. Typically, over 90 % of those surveyed think
they are better than the average in skills such as driving (whereas a survey in an
unbiased population should give 50 %). People also display a systematic planning
fallacy: they predict that tasks such as writing a book will be completed much sooner
than they actually are, and that the book will be understood by the reader much more
than may actually be the case!

Belief Perseverance. There is much evidence that once people have formed an
opinion, they have the tendency to cling to it too tightly and for too long. In this
respect two different effects appear to be relevant. The first effect is that people are
reluctant to search for evidence that contradicts their own beliefs. Secondly, even if
they actually do find such evidence, they treat it with excessive skepticism.

Anchoring. Anchoring is a term used in psychology to describe the common
human tendency to rely too heavily on, or to anchor onto, one piece of (often
irrelevant) information when making decisions. In a later section of this chapter, we
will come back to the term in more detail, and give a specific recipe for identifying
anchoring in financial markets.

2.3 Prospect Theory

The following description follows the text that accompanied the 2002 Nobel Prize
in Economics awarded to Daniel Kahneman. We will, however, try to concentrate
on the essence of the theory, rather than give a general overview. As we will see,
the core idea behind prospect theory comprises three elements which are all new
with respect to standard economic theory. However, before we introduce these three
new elements, we first give a formal representation in the box below, in order to
understand the differences. The reader who is not interested in the formal description
can jump directly to the explanation given after the box.

When it comes to human decision-making, standard economic theory relies
heavily on the idea that each decision-maker tries to maximize her or his
utility. If we call the utility function $u$, then such a function is defined on a
set of possible outcomes $X = \{x_1, x_2, \ldots, x_N\}$. Assume for simplicity that
the decision-maker has to choose between two different actions $a$ and $b$. Let
$p_i$ be the probability of $x_i$, which results in the wealth $w_i$, under the action
$a$, and $q_i$ the probability of the same outcome and wealth, but instead under
the action $b$. Then classical economic theory gives the following criterion for
choosing $a$ over $b$:

$$\sum_i p_i\, u\big(w_i(x_i)\big) > \sum_i q_i\, u\big(w_i(x_i)\big) \,. \tag{2.1}$$

The inequality (2.1) says that a rational decision-maker will assign probabilities
to different random events and then choose the action which maximizes
the expected value of her or his utility.

In contrast to this, prospect theory assumes three differences, which
we first illustrate quantitatively and then explain below. In short, the three
differences are (i) $w_i \to \Delta w_i$, (ii) $u \to v$, and (iii) $q_i \to \pi(q_i)$, so that instead
of (2.1) prospect theory suggests:

$$\sum_i \pi(p_i)\, v\big(\Delta w_i(x_i)\big) > \sum_i \pi(q_i)\, v\big(\Delta w_i(x_i)\big) \,. \tag{2.2}$$

Fig. 2.1 Value assigned to gains and losses according to prospect theory. The figure illustrates loss aversion, since a small loss is assigned a much higher negative value than the positive value assigned to a gain of the same size. (Figure taken from [73])

Here follows the reasoning behind the changes imposed in prospect theory
compared to standard economic theory:
(i) $w_i \to \Delta w_i$. It is not the absolute level of wealth $w$ that matters in decision-making,
but rather the wealth relative to a given reference level $w_0$.
In plain words, the stress you feel when investing $1 million in a
given project/financial market (expressed via the utility function) would be
experienced very differently by a wealthy person like, say, Bill Gates. The
reference point is often the decision-maker’s current level of wealth which
then gives the status quo for decision-making. Another example illustrating this
point is in salary negotiations for people changing jobs. In that case the former
salary will often serve as the reference point in the negotiations. However,
the reference level could also be some aspirational level that a subject strives
to obtain.
(ii) $u \to v$. People are loss averse. This is illustrated in Fig. 2.1, which shows that
the value that people assign to gains versus losses is not symmetric. As can be
seen from this figure, even the slightest loss is something people really dislike,
whereas a gain of the same size is not valued nearly as highly.
This difference is often phrased by saying that the value function has a 'kink'
for small losses. Interestingly, it has recently been pointed out that financial
markets themselves react in such a risk averse manner, with the correlations
between stocks in an ascending market being different from the correlations
between stocks in a descending market. For a discussion on this point, see,
e.g., [13, 114].
(iii) $q_i \to \pi(q_i)$. The last part of prospect theory relates to the fact that people have
problems assigning the proper probability to events, often placing too high a
weight on events that are highly unlikely to occur and placing too little weight
on events that are very likely to occur. This is illustrated in Fig. 2.2. We only
have to think about lottery tickets: people keep on buying them even though
their chance of winning the jackpot is basically nil.
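
To make the contrast between the criteria (2.1) and (2.2) concrete, the sketch below evaluates a sure gain against a risky gamble of equal expected value under both criteria. The functional forms and parameters of the value function $v$ and the weighting function $\pi$ ($\alpha = 0.88$, $\lambda = 2.25$, $\gamma = 0.61$) follow one common parameterization proposed by Tversky and Kahneman; they are used here purely for illustration and are not fixed by the text.

```python
def v(dw, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss dw relative to the reference point:
    concave for gains, convex and steeper (loss aversion) for losses."""
    return dw**alpha if dw >= 0 else -lam * (-dw)**alpha

def pi(p, gamma=0.61):
    """Probability weighting: overweights small probabilities, underweights large ones."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def expected_utility(prospect):   # criterion (2.1) with the linear utility u(w) = w
    return sum(p * dw for p, dw in prospect)

def prospect_value(prospect):     # criterion (2.2)
    return sum(pi(p) * v(dw) for p, dw in prospect)

# A sure gain versus a gamble with the same expectation:
sure   = [(1.0, 100.0)]
gamble = [(0.5, 200.0), (0.5, 0.0)]
print(expected_utility(sure), expected_utility(gamble))   # 100.0 vs 100.0: equivalent
print(prospect_value(sure), prospect_value(gamble))       # ~57.6 vs ~44.6: sure gain wins
```

Repeating the comparison with losses instead of gains reverses the preference, reproducing the risk-seeking behavior in the loss domain that drives the second choice in framing B above.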

Fig. 2.2 Decision weight as a function of probability. The figure illustrates the fact that people assign too high a weight to events that are very unlikely to occur and too little weight to events that are almost sure to occur. (Figure taken from [73])

In our view what is particularly appealing about prospect theory is that it can
actually be checked, and it can make predictions about how people will behave
in different situations, which can then be used to cross-check the theory. In other
words, this is an example of a falsifiable theory, something we will return to and
explain in Chap. 4. Let us just finish this section by mentioning that the three
assumptions of prospect theory have indeed been checked in many experiments and
under various decision-making conditions.

2.4 Pricing Stocks with Yardsticks and Sentiments

Having discussed the way certain biases can influence human decision-making, it
is time to suggest in practical terms how biases could weigh on the way stocks find
their prices in the market. The aim in this section will thus be to ‘quantify’ human
sentiments and show how, in certain cases, they can influence the pricing of a given
stock. We will suggest the somewhat surprising result that the pricing of a given
stock can be expressed in terms of the general ‘sentiment’ of the market. This is a
very similar situation to one we discussed in Sect. 1.5, except that in that case the
pricing of a given stock was expressed in terms of, not the sentiment, but the general
performance of the market. It turns out that the formula we find for the pricing of a
given stock in terms of sentiment has a very similar structure to that of the CAPM
model found in Sect. 1.5.
Human decision-making by professionals trading daily in the stock market can
be a daunting task. It includes decisions about whether to keep on investing or to
exit a market subject to huge price swings, and how to price in news or rumors
attributed to a specific stock. The question then arises as to how professional
traders, who specialize in the daily buying and selling of large amounts of a
given stock, know how to price it properly on a given day. Here we
introduce the idea that people use heuristics, or rules of thumb, with reference to
certain ‘yardsticks’ derived from the performance of the other stocks in a stock
index. Under- or over-performance with respect to such a yardstick then signifies
a generally negative or positive sentiment of market participants towards a given
stock. Using the empirical data from the Dow Jones Industrial Average, stocks can
be shown to have daily performances with a clear tendency to cluster around the
measures introduced by these yardsticks. We illustrate how sentiments, most likely
due to insider information, can influence the performance of a given stock over a
period of months, and in one case years.

2.4.1 Introduction

One of the founders of behavioral finance, D. Kahneman (Shefrin [123]), once
pointed out how media coverage of financial markets tends to depict them with
the traits of a stereotypical individual. Indeed, the media often describe financial
markets with attributes like “thoughts, beliefs, moods and sometimes stormy
emotions. The main characteristic of the market is extreme nervousness. It is full
of hope one moment and full of anxiety the next day.” One way to get a first
quantification of the sentiment of the market is to probe the sentiments of its
investors. Studying sentiments of consumers/investors and their impact on markets
has become an increasingly important topic [146]. Several sentiment indices of
investors/consumers already exist, and some have now been recorded over a time
span of a few decades. The Michigan Consumer Sentiment index, published
monthly by the University of Michigan and Thomson Reuters, is probably the one
which has the largest direct impact on markets when published. The natural question
then arises as to whether it is possible to predict market movements by considering
the sentiments of consumers/investors.
Fisher and Statman [45] made a study of tactical asset allocation from data
describing the sentiment of a heterogeneous group (large, medium, small) of
investors. The main idea in [45] was to look for indicators for future stock returns
based on the diversity of sentiments. The study found that the sentiments of different
groups do not move in lockstep, and that sentiments for the groups of large and
small investors could be used as contrary indicators for future S&P 500 returns.
However, more recent research [135] on investor sentiment expressed in the media
(as measured from the daily content of a Wall Street Journal column) seems to point
in the opposite direction, with high media pessimism predicting downward pressure
on market prices. Such results are more in line with theoretical models of noise
and liquidity traders [38, 39]. Other studies [8] claim very little predictability of
stock returns using computational linguistics to extract sentiments on 1.5 million
Internet message boards posted on Yahoo! Finance and Raging Bull. However, in
that study it was shown that disagreement induces trading and also that message
posting activity correlates with volatility of the market.
Common to all the aforementioned studies is the aim to predict global market
movements from sentiments obtained either from surveys or from internet message
boards. In the following, we propose instead to obtain a sentiment-related pricing
for a given asset by expressing the sentiment of a given stock relative to the market.

This is similar to the principle of the Capital Asset Pricing Model (CAPM) which we
met in Sect. 1.5, since it relates the price of a given asset to the price of the market,
instead of trying to give the absolute price level of the asset/market. Put differently,
we will in the following introduce a method that does not estimate the impact that
a given 'absolute' level of sentiment can have on the market, but instead introduces
a sentiment of an asset relative to the sentiment of the general market, whatever the
absolute (positive/negative) sentiment of the general market. As we shall illustrate,
this gives rise to a pricing formula for a given stock relative to the general market,
much like the CAPM, but now with the relative sentiment of the stock to the market
included in the pricing.

2.4.2 Theory of Pricing Stocks by Yardsticks and Sentiments

In the following we consider how traders find the appropriate price level of a given
stock on a daily basis. One could for example have in mind traders that specialize in
a given stock and actively follow its price movements so as to determine opportune
moments either to buy certain amounts or instead to sell as part of a larger order. The
question is, what influences the decision-making for traders as to when to enter and
when to exit the market? As we saw in Chap. 1, according to the standard economic
view, only expectations about future earning/dividends and future interest rate levels
should matter in the pricing of a given stock. Looking at the often very big price
swings during earnings or interest rate announcements, this part clearly seems to
play a major role, at least in some specific instances. But what about other times
when there is no news which can be said to be relevant for future earnings/interest
rates? The fluctuations seen in daily stock prices simply cannot be explained by
new information related to these two factors, nor by risk aversion, so why do
stock prices fluctuate so much, and how do traders navigate the often rough seas
of fluctuations?
Here we take a heuristic point of view and argue that traders need some rules
of thumb, or as we prefer to say, yardsticks, in order to know how to position
themselves. In the box below we will derive a new pricing formula for stocks based
on the relative ‘performance’ of a stock with respect to the ‘performance’ of the
market. The performance we suggest is basically the ratio of the return of a stock to
its risk, where we quantify risk in terms of how volatile a stock is. The main idea is
that professionals, possibly with insider information, may have a bias with respect
to the stock they trade, and this bias will make their trade weigh on the performance
of a stock relative to the general market.

A first rough estimate for a trader would obviously be the returns of the other
stocks in a given stock index. Let $s_i^I(t)$ be the daily return of stock $i$ belonging
to index $I$ at time $t$, and $R_i^I(t)$ the return of the remaining $N-1$ stocks in the
index $I$ at time $t$. We emphasize the exclusion of the contribution of stock $i$ in
$R_i^I$ in order to avoid any self-impact, which would amount to assuming that
the price of a stock rises because it rises. We use a capital letter to denote the
index and a lower case letter to denote a specific stock $i$. Using the average
return of the other stocks as a first crude yardstick, one would then conclude
that traders of stock $i$ would price the stock according to

$$s_i^I(t) \approx R_i^I(t) \equiv \frac{1}{N-1} \sum_{j \neq i} s_j^I(t) \,. \tag{2.3}$$

A powerful tool that is often used in physics to check the validity of an
equation is dimensional analysis, i.e., checking that the quantities on each side
of the equation have the same dimensions. We will give a more detailed
explanation of the method in Chap. 5. By the same token, an expression should
be independent of the units used. Since (2.3) expresses a relationship between
returns, i.e., quantities given as percentage increments, it is already
dimensionless. However, we argue that there is a mentally relevant 'unit' in
play, namely the size of a typical daily fluctuation of a given stock. Such a
mental 'unit of fluctuation' is created by the memory of traders who closely
follow the past performance of a given stock. Dividing both sides of (2.3)
by the size of a typical fluctuation would therefore be one way to ensure
independence from such units. Taking the standard deviation as the measure,
the renormalized (2.3) takes the form

$$\frac{s_i^I(t)}{\sqrt{\langle \sigma^2(s_i^I) \rangle_T}} = \frac{R_i^I(t)}{\sqrt{\langle \sigma^2(R_i^I) \rangle_T}} \,, \tag{2.4}$$

where $\sigma^2 \equiv \langle X^2 \rangle - \langle X \rangle^2$ denotes the variance of the variable $X$,
and $\langle \cdot \rangle_T$ denotes an average over a given window of size $T$.
As we will show in a moment, (2.4) turns out to be a good approximation
for most stocks over daily time periods. There are, however, strong and
persistent deviations. In the following we will define stocks for
which (2.4) holds on average as neutral with respect to the sentiment traders
have on the given stock. Deviations from this relation will similarly be used as
a measure, positive or negative, of how biased a sentiment traders have on the given stock. More
precisely, the sentiment $\alpha_i^I$ of a given stock $i$ is defined as

$$\alpha_i^I(t) = \frac{s_i^I(t)}{\sqrt{\langle \sigma^2(s_i^I) \rangle_T}} - \frac{R_i^I(t)}{\sqrt{\langle \sigma^2(R_i^I) \rangle_T}} \,. \tag{2.5}$$

We emphasize that the sentiment is defined with respect to the other stocks
in the index, which serve as the neutral reference. The ratio of a stock’s
(excess) return to its standard deviation tells us something about its perfor-
mance, or reward-to-variability ratio, also called the Sharpe ratio in finance.
Therefore, (2.5) attributes a positive (resp. negative) bias/sentiment to a stock,
$\alpha_i^I > 0$ (resp. $\alpha_i^I < 0$), when the Sharpe ratio of the stock exceeds
(resp. falls below) the Sharpe ratio of the sum of the other stocks in the
index [134].
Rewriting (2.5), the pricing of stock $i$ can now be given in terms of a
renormalized performance of the other stocks in the index as well as a possible
bias:

$$s_i^I(t) = \sqrt{\langle \sigma^2(s_i^I) \rangle_T}\; \alpha_i^I(t) + \frac{\sqrt{\langle \sigma^2(s_i^I) \rangle_T}}{\sqrt{\langle \sigma^2(R_i^I) \rangle_T}}\, R_i^I(t) \,. \tag{2.6}$$

As a first check of (2.6), we take the expectation value $E(\cdot)$ of (2.6) by
averaging over all stocks listed on the index, and then average over time (daily
returns). A priori, over long periods of time, one would expect to find as many
positively biased as negatively biased stocks in an index composed of many
stocks. Using this assumption, the term in $\alpha_i^I$ disappears by symmetry. One
obtains

$$E(s_i^I) = \frac{E(R_i^I)}{\sqrt{\langle \sigma^2(R_i^I) \rangle_T}} \sqrt{\langle \sigma^2(s^I) \rangle_T} \,. \tag{2.7}$$

Equation (2.7) is very similar in structure to the Capital Asset Pricing Model
(CAPM) in finance [150], discussed in Sect. 1.5:

$$\frac{E(s_i^I) - R_f}{\beta_i} = E(R^I) - R_f \,, \qquad \beta_i = \frac{\mathrm{Cov}(s_i^I, R^I)}{\sigma^2(R^I)} \,, \tag{2.8}$$

where $R_f$ in (2.8) is the risk-free return which, since we consider daily
returns, will be taken equal to 0 in the following:

$$E(s_i^I) = \frac{\mathrm{Cov}(s_i^I, R^I)}{\sigma^2(R^I)}\, E(R^I) \,. \tag{2.9}$$

The main difference between the CAPM in the form (2.9) and our hypothesis (2.7)
is that we stress the use of standard deviations in the pricing formula, rather than
the covariance between the stock return and the index return on the right-hand
side of (2.9). Furthermore, we argue that the covariance between a given stock

Fig. 2.3 Testing different hypotheses. Data showing pricing according to the CAPM hypothesis (left) and the sentiment hypothesis (right); in both panels the vertical axis is the daily return $s_i$, while the horizontal axes show the CAPM prediction $[\mathrm{Cov}(s_i, R_{-i})/\sigma^2(R_{-i})]\, R_{-i}$ (left) and the yardstick prediction $(\sigma_{s_i}/\sigma_{R_{-i}})\, R_{-i}$ (right). If the data follows one of these hypotheses, one should see it spread evenly around the diagonal (fat solid line) in the corresponding plot. This seems to be the case for the sentiment pricing, but not for the CAPM pricing, where data points are 'tilted' with respect to the diagonal. The plot on the left shows the CAPM hypothesis (2.9) using the daily returns of the Dow Jones Industrial index over the period 3 January 2000 to 20 June 2008. The plot on the right illustrates our hypothesis (2.7) using the same data set. Each point corresponds to a daily return $s_i$ of a given stock $i$

and the index is not a very stable measure over time, in contrast to the variance
of a given stock. One cause of instability in the covariance could for example
be sudden ‘shocks’ in terms of specific good or bad news for a given company.
After such a shock, we postulate that the covariance between the stock and the
index changes, whereas the stock’s variance remains the same, but with a change in
relative performance. The pricing formula that we obtain is reminiscent of the so-
called capital allocation line in finance. This expresses the return of a portfolio that
is composed of a certain percentage of the market portfolio, but with the remainder
invested in a risk-free asset. However, the capital allocation line only expresses the
return of this specific portfolio, whereas our expression is supposed to hold true for
each individual asset.
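
To illustrate how the sentiment measure might be computed in practice, here is a minimal sketch that implements (2.5) on a matrix of daily returns and accumulates the sentiment over time, as plotted in Fig. 2.4. The window length of $T = 100$ days, the synthetic data, and the function names are illustrative assumptions.

```python
import numpy as np

def sentiment(returns: np.ndarray, i: int, T: int = 100) -> np.ndarray:
    """Sentiment alpha_i(t) of stock i as defined in Eq. (2.5).

    returns : array of shape (n_days, n_stocks) holding daily returns.
    T       : window (in days) for the variance estimates (assumed value).
    """
    n_days, n_stocks = returns.shape
    s_i = returns[:, i]
    # Return of the remaining N-1 stocks: stock i is excluded to avoid self-impact.
    R_i = (returns.sum(axis=1) - s_i) / (n_stocks - 1)

    alpha = np.full(n_days, np.nan)
    for t in range(T, n_days):
        sig_s = returns[t - T:t, i].std()   # sqrt of the windowed variance of s_i
        sig_R = R_i[t - T:t].std()          # sqrt of the windowed variance of R_i
        alpha[t] = s_i[t] / sig_s - R_i[t] / sig_R
    return alpha

# Usage with synthetic data; the cumulative sum is what Fig. 2.4 plots in red:
rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0, 0.01, size=(600, 30))
cumulative_sentiment = np.nancumsum(sentiment(fake_returns, i=0))
```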
The data points in Fig. 2.3 are used to contrast the CAPM hypothesis and our
hypothesis using daily returns of the Dow Jones Industrial Average over almost a
decade of data [136]. A perfect fit of the data to the two equations would in each
case lie on the green diagonal. The data for CAPM appear tilted with respect to the
diagonal, whereas the data concerning our hypothesis appear to be symmetrically
distributed around the diagonal, which is what one would expect if the data

Fig. 2.4 Extracting sentiment biases in stocks. The four rows of panels correspond to the stocks CIT, CAT, UNT, and CSCO, plotted against time in days. Left: Cumulative returns of the four different stocks in blue over a given period (see below) compared with the return of the general market index (Dow) in green over the same period. Right: Corresponding cumulative bias in sentiment for the given stock, plotted in red. The fact that a constant decline in the cumulative sentiment can be observed over a certain period (for Citibank stock over the whole period) indicates the underperformance of the stock with respect to the general market. The data for Citibank, Caterpillar, and United Technologies Corporation are for the time period 3 January 2000 to 20 June 2008, and the data for Cisco are for the time period 1 January 2009 to 2 June 2011

obeyed (2.7) on average. The fact that the cloud of data points is symmetrically
scattered around the diagonal gives the first evidence in support of (2.7).
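
For readers who want to reproduce a plot in the style of Fig. 2.3, the following sketch computes, for a chosen stock, the two quantities plotted on the horizontal axes, i.e., the CAPM prediction from (2.9) and the yardstick prediction from (2.7); scattering the daily returns $s_i$ against each of them gives the two panels. The helper name and the equal-weight index construction are our own assumptions.

```python
import numpy as np

def fig23_predictions(returns: np.ndarray, i: int):
    """Daily CAPM and yardstick predictions for stock i (cf. Fig. 2.3).

    Returns two arrays: [Cov(s_i, R_-i)/Var(R_-i)] R_-i and [Std(s_i)/Std(R_-i)] R_-i.
    """
    s_i = returns[:, i]
    R = (returns.sum(axis=1) - s_i) / (returns.shape[1] - 1)   # index without stock i
    beta = np.cov(s_i, R, bias=True)[0, 1] / R.var()           # CAPM slope, Eq. (2.9)
    ratio = s_i.std() / R.std()                                # yardstick slope, Eq. (2.7)
    return beta * R, ratio * R
```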
The sentiment $\alpha$ in (2.5) was introduced as a behavioral trait, and as such we
would expect to see its effect on a long time scale of at least the order of weeks
or months. Figure 2.4 shows the cumulative sentiment for four different stocks,
Citibank, Caterpillar, and United Technologies Corporation over the period from
3 January 2000 to 20 June 2008, and Cisco over the time period from 1 January 2009 to
2 June 2011. The plots to the left show in green the return of the Dow Jones and in
blue the given stock over the given time period.
The case of the Citibank stock is particularly striking, with a constant negative
sentiment seen in the continuous decline of the cumulative sentiment curve of
Fig. 2.4, corresponding to a constant underperformance over the whole period. It
should be noted that the data was chosen in order to have both a declining general
market, as happens over the first half of the period shown, and also a rising overall
market, as happens over the rest of the time period chosen. It is remarkable that the
sentiment of the Citibank stock remains constant regardless of whether the general
trend is bullish or bearish.

Fig. 2.5 Probability distribution function of the sentiment variable $\alpha_i$, shown on a logarithmic probability scale as a function of the sentiment value $\alpha$

Similarly, it should be noted that the general sentiment for Caterpillar and United
Technologies Corporation was neutral in the declining market, but then became
distinctly negative over the last 3 or 4 months of the time series when the general
market was bullish. The price history for Cisco Systems tells a similar story. The
only difference here is the two big jumps occurring after 350 and 400 days. These
two particular events took place on 11 November 2010 and on 10 February 2011. On
11 November 2010, the price dropped because of a bad report for the third quarter
earnings. This gave rise to a loss of confidence by investors who were expecting a
sign of recovery after a couple of hard months. On 10 February 2011, Cisco Systems
announced a drop in their earnings (down 18 %) together with a downward revision
(7 %) of sales of their core product.
It is worth noting that the decline in cumulative sentiment took place before
the two events: prior to 11 November 2010, there was a long slow descent of the
cumulative sentiment (implying a constant negative sentiment), and after
11 November 2010 the descent continued. This could be taken as evidence that some
investors with insider knowledge were aware of the problems of the company, which
were revealed to the public only on the two aforementioned days.
Figure 2.5 shows the probability distribution function of the sentiment variable
$\alpha$ defined in (2.5), obtained by sampling, using the daily returns of all stocks of the
Dow Jones Industrial Average over the period 3 January 2000 to 20 June 2008. As
can be seen from Fig. 2.5, the distribution appears to be exponential for
both positive and negative sentiments. One notes that the empirical distribution is
symmetric with respect to the sign of the sentiment, something which was implicitly
assumed in deriving (2.7) from (2.6).
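
A reader wishing to repeat this check on other data can histogram the sentiment values and fit the logarithm of the probability separately on each side of zero: exponential tails then show up as straight lines, and symmetry as slopes of opposite sign. A minimal sketch, fed here with a synthetic symmetric Laplace sample standing in for the empirical series:

```python
import numpy as np

def tail_slopes(alpha, bins=41):
    """Fit log-probability versus alpha on each side of zero; roughly linear fits
    indicate exponential tails, and neg ~ -pos indicates the symmetry assumed
    in deriving (2.7) from (2.6)."""
    a = np.asarray(alpha)
    a = a[~np.isnan(a)]
    counts, edges = np.histogram(a, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    x, logp = centers[keep], np.log(counts[keep])
    neg_slope = np.polyfit(x[x < 0], logp[x < 0], 1)[0]
    pos_slope = np.polyfit(x[x > 0], logp[x > 0], 1)[0]
    return neg_slope, pos_slope

# Synthetic stand-in with exponential tails by construction (Laplace, scale 0.02):
alpha = np.random.default_rng(1).laplace(0.0, 0.02, size=50_000)
print(tail_slopes(alpha))   # slopes close to +1/0.02 and -1/0.02
```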

2.4.3 Discussion

To sum up, we have pointed out the importance of a measure of the sentiment of a
given stock relative to its peers. The idea is that people use heuristics, which one
might call rules of thumb or yardsticks, obtained from the performance of the other
stocks in a stock index. Under- or over-performance with respect to a yardstick then
signifies a general negative or positive sentiment of the market participants towards a
given stock. We have introduced a quantitative method to check such an assumption.
The bias created in such cases does not necessarily have a psychological origin,
but could be due to insider information. Insiders with superior information about
the state of a company reduce or increase their stock holdings, gradually causing a
persistent bias over time. The introduction of a measure for the relative sentiment of
a stock has allowed us to come up with a pricing formula for stocks very similar in
structure to the CAPM model. Using empirical data from the Dow Jones Industrial
Average, stocks are shown to have daily performances with a clear tendency to
cluster around the measures introduced by the yardsticks, in accordance with our
pricing formula.

2.5 Sticky Price Dynamics: Anchoring and Other Irrational Beliefs Used in Decision Making

The last section gave an example of how to quantify price formation in financial
markets due to a psychological phenomenon caused by people using heuristics to
price a stock relative to an index. We will continue along similar lines in this section
and look at how irrational beliefs can influence the pricing of stocks. We will then
introduce a tool to detect and quantify this. The aim is to take seriously some of the
ideas coming from behavioral finance, but also to introduce quantitative methods
which can provide us with tests. The tests should be directly applicable to financial
market data in order to verify the presence or otherwise of such irrational human
beliefs in the markets.
Even though we limit ourselves in this chapter to the impact of psychological
factors, the approach should be seen as more general, suggesting a way to introduce
quantitative methods from psychology and sociology that will shed light on the
impacts of such factors on the pricing of assets. The ideas coming from behavioral
finance should not therefore be seen as limiting, but rather as a starting point for
tackling the more general question of the impacts of sociological and psychological
factors on the markets.
We end this chapter by giving yet another example of how to extract information
at the individual level of a representative agent trading in the market, before

Fig. 2.6 How to make particles move in a flashing ratchet and how to use this idea in a trading
algorithm. A flashing ratchet switches between an on state and an off state. In the on state,
Brownian particles concentrate around the potential minima. In the off state, the particles undergo
one-dimensional isotropic diffusion. Periodic or stochastic switching between the two states
induces a particle current in the positive x direction. As explained in the text, a patient trader
can use ‘sticky’ price movements and wait for a favorable moment to enter the market, just as the
particles in the figure have to wait for the ratchet to be on to make a ‘favorable’ or positive move
(The figure is taken from [85])

discussing ways to study effects at the group (sociological) level in the next chapter.
It is not immediately obvious that the way motors work in biology could be relevant
to financial trading. In certain situations, this will nevertheless be our claim. More
specifically, we will use insights into the way biological motors work, to suggest
ways to detect the presence in financial markets of a specific human behavioral trait
known as anchoring.
To understand the idea, consider Fig. 2.6, which shows an example of a so-called
flashing ratchet. The flashing ratchet has been suggested as a mechanism that
provides some biological molecules with motors allowing them to move and operate
on a micro- to nanoscopic scale. The ratchet flashes, say randomly, between on
and off, and in doing so induces a drift of the red particles, as illustrated in
the figure. The basic mechanism behind this is brought about by the asymmetric
potential of the ratchet, which pulls the particles to the right whenever the ratchet
is on. When the ratchet is off, the particles diffuse freely without any average drift.
Another way of looking at the same problem is to say that the combined system of
the ratchet and particles acts as if particles were influenced by a motor. Looking at
it this way, a particle just waits, and whenever there is an opportune fluctuation (the
ratchet is on), it exploits this fluctuation to move.
The reason for discussing this mechanism is the a priori surprising suggestion
of using this same mechanism in a trading algorithm. This idea, proposed by
Lionel Gil [57], gives yet another startling example of how cross-disciplinary
ideas can bring unexpected new insights into a problem where no one would have
expected a link. We will describe the idea in detail shortly, but first we need to
discuss the behavioral trait known as anchoring, introduced by Kahneman and
Tversky in the 1970s.

Anchoring is a term used in psychology to describe the common human
tendency of relying too heavily on (being anchored to) one piece of often irrelevant
information when making decisions. One of the first observations of anchoring was
reported in a now classic experiment by Tversky and Kahneman [140]. Two test
groups were shown to give different mean estimates of the percentage of African
nations in the United Nations depending on a completely unrelated (and randomly
generated) number suggested by the experimenters to the two groups. Evidence for
the human tendency to anchor has since been reported in many completely different
domains. For example, customers show inertia when it comes to switching from a
given brand [152]. In this case it is the old brand’s price that acts as an anchor. Other
evidence comes from studies of online auctions [41]. People have a tendency to bid
more for an item the higher the ‘buy-now’ price. Anchoring has also been shown
in connection with property prices [102]. In this case it was shown that a subject’s
appraisal depends on an arbitrarily posted house price.
In the context of financial markets, anchoring has been observed via the so-called
disposition effect [62, 75, 104, 124], which is the tendency to sell assets that have
gained value and keep assets that have lost value. In that case the buying price
acts as an anchor. This is different from the anchoring discussed in the following,
where a recent price level acts as an anchor. As noted in [148], conclusive tests
using real market data are usually difficult because investors’ expectations, as
well as individual decisions, cannot be controlled or easily observed. However, in
experimental security trading, subjects have been observed to sell winners and keep
losers [148].
The main premise in the following is that situations occur in financial markets
that can lead to ‘sticky’ price dynamics, and that this can be detected via the flashing
ratchet method. One can think of a variety of circumstances that could create such
‘sticky’ price dynamics. The European Monetary System (EMS) gives a particularly
clear example. In this case a monetary policy induces bands over which currencies
were allowed to fluctuate with respect to one and another, illustrating a policy-
caused mechanism for such ‘sticky’ price movements in financial markets [57].
As we shall see shortly, another possible situation leading to ‘sticky’ price
movements in equity markets is when market participants actively follow the price
over days or weeks, thereby creating a subjective reference (anchor) and memory of
when an asset is ‘cheap’ or ‘expensive’. Several studies on the persistence of human
memory have reported sleep as well as post-training wakefulness before sleep to
play an important role in the offline processing and consolidation of memory [109].
It therefore makes sense to think that conscious as well as unconscious mental
processes influence the judgment of those who specialize in active trading on a
day-to-day basis.
The idea behind the flashing ratchet method can now be outlined as follows.
We assume that anchoring is present in financial markets because of ‘sticky’
price movements, with recent prices used by market participants as an anchor in
determining whether an asset is over- or undervalued. Such irrational behavior
would in principle open up arbitrage possibilities for speculators buying when an
asset is perceived to be undervalued and selling when it is overvalued. As will be

Table 2.1 The four different configurations $x_1, x_2, x_3, x_4$ that can occur when trading two assets in the 'sticky price' algorithm. Quasi-static price levels of the two assets are denoted by $\bar{A}_1$ and $\bar{A}_2$, and the fluctuations around those levels are $dA_1$ and $dA_2$, respectively

Configuration | Asset 1                   | Asset 2
$x_1$         | $A_1 = \bar{A}_1 + dA_1$  | $A_2 = \bar{A}_2 - dA_2$
$x_2$         | $A_1 = \bar{A}_1 + dA_1$  | $A_2 = \bar{A}_2 + dA_2$
$x_3$         | $A_1 = \bar{A}_1 - dA_1$  | $A_2 = \bar{A}_2 - dA_2$
$x_4$         | $A_1 = \bar{A}_1 - dA_1$  | $A_2 = \bar{A}_2 + dA_2$

seen, just as the biological motors make a move when a favorable fluctuation occurs,
so a patient trader can wait and act when the right price fluctuation occurs.
Table 2.1 illustrates how the flashing ratchet algorithm works in the simplest
case with only $N = 2$ different assets. For the interested reader, a complete
analytical derivation of the problem is given for this simple case in the appendix
in Sect. 2.5.1. The table illustrates the different configurations that can arise with
two assets, assuming price fluctuations around quasi-static price levels given by $\bar{A}_1$
for asset 1 and $\bar{A}_2$ for asset 2. The solution of the method can easily be generalized
to an arbitrary number of assets and with price levels which, instead of remaining
constant, vary slowly over time. To simplify things, we will only consider the case
where a trader always takes the long position for one asset. The general case for
short only or short plus long positions is easy to figure out. It will be useful to keep
Table 2.1 in mind in the following.
Here is how the flashing ratchet idea works. Assume that you are long on, say,
asset 1, bought at an earlier time. Let us also assume for simplicity
that the share of asset 1 was bought at the reference price level given by $\bar{A}_1$. As
illustrated in Table 2.1, four different situations can occur, corresponding to the four
different configurations x1 ; x2 ; x3 ; x4 , but your reaction to the four different cases
should be different, as we shall illustrate. Consider first that, e.g., configuration x3
occurs, that is, we have a fluctuation where the price $A_1$ of asset 1 is below the
quasi-static price level $\bar{A}_1$ by an amount $dA_1$, and that simultaneously the situation
is similar for asset 2. This is not a profitable situation for you, so if configuration x3
occurs and you are long on asset 1, you accept your temporary loss and do nothing.
A patient trader would instead wait till configuration x1 occurs, since in this case
asset 1 is overvalued and asset 2 is undervalued, giving the opportunity to close the
long position on asset 1 and open a new long position on asset 2. Now being long on
asset 2, a patient trader would have to wait until configuration x4 comes up, closing
the long position on asset 2 and opening a new long position on asset 1, and so on
and so forth.
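
In code, the rule just described reduces to a few lines. In the sketch below the quasi-static levels $\bar{A}_i$ are estimated by an $m$-day moving average of past prices, one of the simple choices discussed further below; the memory $m = 5$ and the long-only bookkeeping are illustrative assumptions.

```python
import numpy as np

def ratchet_positions(prices: np.ndarray, m: int = 5) -> np.ndarray:
    """Long-only flashing-ratchet rule for two assets (cf. Table 2.1).

    prices : array of shape (n_days, 2) with the two asset prices.
    m      : memory, in days, of the moving average estimating the levels A_bar.
    Returns the index (0 or 1) of the asset held at the end of each day.
    """
    n_days = prices.shape[0]
    held = 0                               # start long on asset 1
    positions = np.zeros(n_days, dtype=int)
    for t in range(m, n_days):
        A_bar = prices[t - m:t].mean(axis=0)   # quasi-static price levels
        dA = prices[t] - A_bar                 # fluctuations around them
        if dA[0] > 0 and dA[1] < 0:            # configuration x1:
            held = 1                           # asset 1 overvalued, go long asset 2
        elif dA[0] < 0 and dA[1] > 0:          # configuration x4:
            held = 0                           # asset 2 overvalued, go long asset 1
        # configurations x2 and x3: do nothing and keep the current position
        positions[t] = held
    return positions
```

Note that the rule only trades in configurations $x_1$ and $x_4$; in $x_2$ and $x_3$ the patient trader simply waits, just as the particles in Fig. 2.6 wait for the ratchet to switch on.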
In principle, this sounds like a very simple algorithm for statistical arbitrage, but
in order to be able to settle the question as to whether such a strategy is profitable,
and evaluate how risky it would be, one would need to know three things:
• The probability of finding the price of the asset at a given value $A$. The
method assumes the price evolution to be 'sticky', i.e., quasi-stationary, so that
at any given time $t$ a market participant has a notion of the likelihood of a certain
price value $A$ at that very moment of time. It would be natural for this probability
to change over time, but to keep the discussion simple, it will be assumed in the
following to be time-independent. Generalization to a time-dependent probability
distribution is straightforward. In short, one needs to know the price probability
distribution function $P(A)$.
• The probability $P(A \to B)$ of going from one quasi-stationary price $A$ to another
quasi-stationary price $B$.
• The transaction costs $C$.
The method is described in detail in Sect. 2.5.1 and quantifies the circumstances
under which the presence of anchoring in financial markets would be detectable in
terms of a profitable investment strategy that speculates on this particular cognitive
bias. The method will first be illustrated in the case where an exact solution is
available, before applying the algorithm to real financial data.
To see how the method works in a controlled setup, we assume a fictitious market
with quasi-stationary price levels $\bar{A}_i$, and for simplicity take $\bar{A}_1 = \bar{A}_2 = 1.0$.
Assume further that the price fluctuations have only two possible values, $dA_1 =
dA_2 = \pm 0.11$, with equal probability for the plus and minus signs. Note that Table 2.1
gives a very simple schematic representation of the problem, which is by its very
nature probabilistic. In general, anyone observing the market will see a fluctuating
time series of prices with no a priori knowledge of the underlying probability
distributions governing the price dynamics. As mentioned before, one can also
imagine markets in which price probability distributions change slowly over time.
In order to use the flashing ratchet algorithm, a market observer then has to estimate
the quasi-static levels $\bar{A}_i$. There are various ways to do so. One of the simplest is
just to take the average of the prices over the last $m$ days.
The solid line in Fig. 2.7 shows the steady state analytical solution given by the
average return of the algorithm. The figure shows the performance of the algorithm
(circles) as a function of time. After some initial fluctuations in performance, the
algorithm is indeed seen to converge to the theoretical average return value. The
general performance of the algorithm in terms of the expected average return and
the expected average risk (taken here as the standard deviation of the return) can
be found by using (2.14)–(2.16), given in Sect. 2.5.1. To see how the numerical
flashing ratchet algorithm checks against these expressions, the average return and
average volatility are shown in Fig. 2.8 against one of the model parameters, viz.,
$dA_1$, for the simple case of a Bernoulli price distribution, for which the analytical results are given by (2.18) and (2.19).
It should be noted that the results of [57] and those presented here derive from
an effect also mentioned in [130], which appears as soon as the price exhibits mean

Fig. 2.7 Return of trading algorithm. Averaged return $r(t)$ as a function of time $t$ in the trading algorithm that exploits sticky price dynamics. We test the algorithm against the simplest possible case, where the price fluctuates by $\pm dA$ around a given price level $\bar{A}$. Circles represent the result obtained by using the algorithm (2.16) with $\bar{A}_1 = \bar{A}_2 = 1.0$ and $dA_1 = dA_2 = 0.11$, and memory $m = 5$. The solid line represents the analytical expression (2.18)

reversal. However, in [130], just one asset was considered, and no calculation was
done with respect to risk and return for a portfolio. Apart from the policy-induced
case of the EMS, there is no a priori reason to expect real market data to show truly
‘quasi-static’ behavior on longer time scales of months or years. Problems relating
to such long term drifts in the price anchor $\bar{A}_i$ were noted in [57]. When $\bar{A}_i$ is
time-dependent, the assumptions made in Sect. 2.5.1 are no longer valid, and the return
of the algorithm was then shown to vanish [57].
In order to get round this difficulty, one solution might be to generalize the
formalism to time-dependent probability distributions. Here, however, we suggest
another approach. In principle it would be difficult to correct for the drift of a single
asset or a small set of assets. But using instead a market index which is composed
of a portfolio of assets, it is then possible to modify the algorithm in such a way that
it is always market neutral, regardless of any drift in the portfolio of N assets.
Figure 2.9 gives an example of how this might work out. The figure shows the
market neutral algorithm applied to real market data of the Dow Jones stock index,
as well as the CAC40 stock index. The first half of the time period for the Dow
Jones index was used in-sample to determine the best choice among three values of
the parameter m D 5; 10; 15 days. The algorithm was then applied out-of-sample to
the second half of the time period for the Dow Jones index, and over the whole time
period for the CAC40 stock index. Since we are interested in looking at any possible

Fig. 2.8 Return and volatility of trading algorithm. Averaged return $r$ (circles) and averaged volatility $\sigma$ (crosses) versus price fluctuations $dA_1$. The data points were obtained in the steady state using the flashing ratchet algorithm (2.16) with memory $m = 5$ to estimate the quasi-static price levels $\bar{A}_i$, from which the classification was made according to Table 2.1. The random price time series (with randomness stemming from the sign of $dA_i$) were generated with fixed values $\bar{A}_1 = \bar{A}_2 = 1$ and $dA_2 = 0.11$. Solid lines represent the analytical results (2.18) and (2.19)

impacts coming from human anchoring, it seems justified a priori to use only daily
or weekly data since the higher the frequency of trading (say seconds or minutes),
the more computer dominated the trading becomes. It therefore seems reasonable
to probe only three possible values corresponding to 1, 2, or 3 weeks in-sample. In
looking for arbitrage possibilities, weekly data was used so as to avoid the
impact of transaction costs due to over-frequent trading.
Another reason for restricting to weekly data relates to the main claim put
forward according to which market participants, by actively following the price,
thereby create a subjective reference (anchor) and memory of when an asset is
‘cheap’ or ‘expensive’. The out-of-sample profit from the market-neutral trading
algorithm (with transaction costs taken into account) on the CAC40 index, as well
as the second period performance on the Dow Jones index, gives evidence that
anchoring does indeed sometimes play a non-negligible role on the weekly price
fixing of the Dow Jones and CAC40 stock markets, and reconfirms the claim in
[57] that the policy imposed by the European Monetary System leads to arbitrage
possibilities. The results also give yet another illustration of the difficulty in proving
market efficiency by only considering lower order correlations in past price time
series.

Fig. 2.9 Trading algorithm applied to market data. Total cumulative return $r(n)$ of the portfolio for the market neutral sticky price algorithm applied to daily price data of the Dow Jones stock index (dotted line) and the CAC40 stock index (solid line) over the period 3 January 2000 to 2 May 2006. The first half of the time period for the Dow Jones index was used in-sample to determine the best choice among three values of the parameter $m = 5, 10, 15$ days. The second half of the time period for the Dow Jones index as well as the full period for the CAC40 index were done out-of-sample with $m = 10$. A trading cost of 0.1 % was included for each transaction

2.5.1 Appendix: Quantitative Description of the Trading Algorithm

Assume that, at every time step $t$, an agent uses a fixed amount of his wealth
to hold a long position on one out of $N$ assets. For simplicity, $N = 2$ will be
assumed in the following, but the arguments can be extended to arbitrary $N$.
Assume furthermore that the probability distribution functions (PDFs) of the
prices of the two assets, viz., $P_1(A_1)$ and $P_2(A_2)$, are stationary distributions.
Instead of the usual assumption of a random walk for the returns, short term
anchoring of prices at quasi-static price levels is imposed. No specific shape
is assumed, and the assets may or may not be correlated, but any correlation
is irrelevant for the following arguments. As noted in [57], the assumption of
short term stationary prices can arise due to price reversal dynamics caused,
e.g., by monetary policies. As will be argued and tested in the following, short
term 'stationary' prices can also be created by short term human memory of
when an asset was 'cheap' or 'expensive'.

Consider any instantaneous fluctuation of the prices $(A_1, A_2)$ around their
quasi-static price levels $(\bar{A}_1, \bar{A}_2)$. Classifying the $2^N$ different cases according
to whether $A_i < \bar{A}_i$ or $A_i > \bar{A}_i$, one has (for $N = 2$) the four different
configurations $x_i$ shown in Table 2.1. In the steady state, the probability
flux into a given configuration $x_i$ equals the probability flux out of that
configuration:

$$\sum_j P(x_j)\, P(x_j \to x_i) = \sum_j P(x_i)\, P(x_i \to x_j) \,. \tag{2.10}$$

In the steady state, the average return $R_{\mathrm{av}}$ per unit time is then given by

$$R_{\mathrm{av}} = \sum_{i=1}^{4} \sum_{j=1}^{4} P(x_i)\, P(x_i \to x_j)\, r_{\mathrm{av}}(x_i \to x_j) \,, \tag{2.11}$$

where $r_{\mathrm{av}}(x_i \to x_j)$ is the average return gained/lost in the transition $x_i \to x_j$.
For each configuration $x_i$, one is assumed to hold a long position on either
asset 1 or asset 2. Let $s = i$ be a state variable indicating that one is long on
asset $i$. Then

$$r_{\mathrm{av}}(x_i \to x_j) = P(s{=}1 \mid x_i)\, r_{\mathrm{av}}(x_i \to x_j \mid s{=}1)
+ P(s{=}2 \mid x_i)\, r_{\mathrm{av}}(x_i \to x_j \mid s{=}2) \,, \tag{2.12}$$

where $P(s{=}i \mid x_j)$ denotes the probability of holding asset $i$ given that
one is in configuration $x_j$, while $r_{\mathrm{av}}(x_i \to x_j \mid s{=}k)$ denotes the average
return in the steady state from holding asset $k$ when there is a transition from
configuration $x_i$ to $x_j$, given by

$$r_{\mathrm{av}}(x_i \to x_j \mid s{=}k) = \int\!\!\int dA_k\, dA'_k\; P(A'_k \mid x_i)\, P(A_k \mid x_j)\, \ln\frac{A'_k}{A_k} \,, \tag{2.13}$$

with $P(A_k \mid x_i)$ the conditional probability of getting the price $A_k$ given that
one is in configuration $x_i$. For example, knowing that one is in configuration
$x_1$, one has

$$P(A_2 \mid x_1) = \frac{P(A_2)\,\Theta(-A_2)}{\displaystyle\int_{-\infty}^{0} P(A'_2)\, dA'_2} \,,$$

where $\Theta$ is the Heaviside function and the price $A_2$ is here measured relative
to the quasi-static level $\bar{A}_2$, so that configuration $x_1$ corresponds to $A_2 < 0$.
Using (2.11)–(2.13), the general expression for the average return gained
by the algorithm takes the form

$$R_{\mathrm{av}} = \sum_{i=1}^{4} \sum_{j=1}^{4} \sum_{s=1}^{2} P(x_i)\, P(x_i \to x_j)\, P(s \mid x_i)
\int\!\!\int dA_k\, dA_l\; P(A_l \mid x_i)\, P(A_k \mid x_j)\, \ln\frac{A_l}{A_k} \,. \tag{2.14}$$

The corresponding risk, measured by the average standard deviation of the
return, is given by

$$\sigma^2 = \big\langle (r - R_{\mathrm{av}})^2 \big\rangle
= \sum_{i=1}^{4} \sum_{j=1}^{4} \sum_{s=1}^{2} P(x_i)\, P(x_i \to x_j)\, P(s \mid x_i)
\int\!\!\int dA_k\, dA_l\; P(A_l \mid x_i)\, P(A_k \mid x_j) \left( \ln\frac{A_l}{A_k} \right)^2 - R_{\mathrm{av}}^2 \,. \tag{2.15}$$

The 'trick' with this algorithm consists in breaking the symmetry by always
choosing $P(s \mid x_i)$ according to the following rules:

$$P(s{=}1 \mid x_1) = 0\,, \quad P(s{=}2 \mid x_1) = 1\,, \quad P(s{=}1 \mid x_2) = P(s{=}2 \mid x_2) = 1/2\,,$$
$$P(s{=}1 \mid x_3) = P(s{=}2 \mid x_3) = 1/2\,, \quad P(s{=}1 \mid x_4) = 1\,, \quad P(s{=}2 \mid x_4) = 0\,. \tag{2.16}$$

That is, if not already long, one always takes a long position on asset 2
(resp. 1) whenever configuration x1 (resp. x4 ) occurs, since the asset is
undervalued in this case. Likewise, if one is long on asset 1 (resp. 2) whenever
configuration x1 (resp. x4 ) occurs, one sells that asset, since it is overvalued.
To illustrate the algorithm, consider the simplest case where $P(A_i)$ takes
only the two values $\bar{A}_i \pm dA_i$, with equal probability 1/2. Inserting

$$P(A_1 \mid x_1) = \delta\big(A_1 - (\bar{A}_1 + dA_1)\big)\,, \qquad P(A_1 \mid x_2) = \delta\big(A_1 - (\bar{A}_1 + dA_1)\big)\,,$$
$$P(A_1 \mid x_3) = \delta\big(A_1 - (\bar{A}_1 - dA_1)\big)\,, \qquad P(A_1 \mid x_4) = \delta\big(A_1 - (\bar{A}_1 - dA_1)\big)\,,$$
$$P(A_2 \mid x_1) = \delta\big(A_2 - (\bar{A}_2 - dA_2)\big)\,, \qquad P(A_2 \mid x_2) = \delta\big(A_2 - (\bar{A}_2 + dA_2)\big)\,, \tag{2.17}$$
$$P(A_2 \mid x_3) = \delta\big(A_2 - (\bar{A}_2 - dA_2)\big)\,, \qquad P(A_2 \mid x_4) = \delta\big(A_2 - (\bar{A}_2 + dA_2)\big)\,,$$
and $P(x_i) = P(x_i \to x_j) = 1/4$ into (2.14), one gets the average return

$$R_{\mathrm{av}}^{\bar{A}_i \pm dA_i} = \frac{1}{8} \left[ \ln\frac{\bar{A}_1 + dA_1}{\bar{A}_1 - dA_1} + \ln\frac{\bar{A}_2 + dA_2}{\bar{A}_2 - dA_2} \right] \,, \tag{2.18}$$

with variance

$$\big( \sigma_{\mathrm{av}}^{\bar{A}_i \pm dA_i} \big)^2 = \frac{15}{64} \ln^2\frac{\bar{A}_1 + dA_1}{\bar{A}_1 - dA_1}
+ \frac{15}{64} \ln^2\frac{\bar{A}_2 + dA_2}{\bar{A}_2 - dA_2}
- \frac{1}{32} \ln\frac{\bar{A}_2 + dA_2}{\bar{A}_2 - dA_2} \ln\frac{\bar{A}_1 + dA_1}{\bar{A}_1 - dA_1} \,. \tag{2.19}$$
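
As a quick cross-check of (2.18), one can compare its value for the parameters of Fig. 2.7 with a brute-force simulation of the rule (2.16) under symmetric Bernoulli price fluctuations. The sketch below is a minimal version of such a check; the simplified bookkeeping (exactly known levels $\bar{A}_i$, no transaction costs, one long position at a time) is assumed throughout.

```python
import numpy as np

def analytic_return(A1, dA1, A2, dA2):
    """Average return per time step from Eq. (2.18)."""
    L1 = np.log((A1 + dA1) / (A1 - dA1))
    L2 = np.log((A2 + dA2) / (A2 - dA2))
    return (L1 + L2) / 8.0

def simulated_return(n_steps=200_000, A=(1.0, 1.0), dA=(0.11, 0.11), seed=0):
    """Monte Carlo of the two-asset rule (2.16) under Bernoulli price fluctuations."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n_steps, 2))   # independent +/- dA per asset
    prices = np.asarray(A) + signs * np.asarray(dA)
    held, log_ret = 0, 0.0
    for t in range(1, n_steps):
        # Return accrued over [t-1, t] on the asset held during that interval:
        log_ret += np.log(prices[t, held] / prices[t - 1, held])
        if signs[t, 0] > 0 and signs[t, 1] < 0:     # x1: switch the long to asset 2
            held = 1
        elif signs[t, 0] < 0 and signs[t, 1] > 0:   # x4: switch the long to asset 1
            held = 0
    return log_ret / n_steps

print(analytic_return(1.0, 0.11, 1.0, 0.11))   # steady-state prediction of (2.18)
print(simulated_return())                      # should agree to within sampling error
```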

2.6 ‘Man on the Moon’ Experiments of Behavioral Finance

The important insight that we get over and again from behavioral finance is that
people are not perfect beings who analyze information and react to it in an optimal
manner. Applying this to financial markets, the question is, however, how far do such
insights help us understand behavioral impacts on the process of price formation?
The reader may have noticed a surprising similarity between the way decision-
making is considered in the rational expectations approach discussed in Chap. 1 and
the way we have so far discussed behavioral finance in this chapter. In both cases
one considers how an individual investor would react to new incoming information,
but always in a context where the person is completely isolated, i.e., not interacting
with any outsiders, in order to consider his or her decision.
This is particularly clear in the case of behavioral finance, where decision-
making is often presented in the form of a questionnaire: given various conditions
and choices, how would a given person react? To carry out such questionnaire
experiments, the person could just as well be on the moon as sitting at a computer
terminal or standing on a trading floor. The title of this section was meant to stress
this fact, but also to point out some of the challenges we still need to tackle.
Only then will we get a more realistic description of decision-making and a better
understanding of financial markets and the way pricing takes place.
It should be noted that prospect theory is no different with respect to such
‘man on the moon’ experiments: the theory deals with human decision-making ‘in
isolation’ and has been corroborated by experiments here on the earth, but could
just as well have been checked in a capsule on the moon. The same criticism
applies to the theories presented in Sects. 2.4 and 2.5, which also only considered
an averaged individual response to information. But financial markets are not
isolated objects, which can be understood by assigning typical human decision-
making to a representative (average) agent. In short, both the core assumptions
of traditional finance (like the dynamic stochastic general equilibrium models
of Sect. 1.1) and the core hypothesis in behavioral finance consider the average
individual response of humans in a static or quasi-static setting, whereas price
formation in financial markets is clearly a collective phenomenon with dynamic
price formation.
Even more intriguing, due to feedback loops, prices may depend on the past price
evolution as well as on future expectations. But a description of such a scenario has
so far escaped a firm theoretical understanding. Events such as the plunge taken
by the British pound in 1992 under the short selling of George Soros, or the
sudden one-day drop of more than 20 % in the 1987 crash, or even the more recent 'flash crash'
of 2010, all seem like classical examples where a dynamic and collective framework
would be needed to provide an adequate explanation.

2.7 Social Processes Underlying Market Dynamics

Social mechanisms underlying non-rational individual choices have received much
less attention than the decision mechanisms operating at the level of the individual
[51]. As described above, understanding social mechanisms behind the non-
rationality of human choices is crucial to understanding the dynamics of financial
markets. The dynamics of financial markets is in fact social dynamics. It is driven
by social processes and influenced by social factors.
One of the crucial assumptions of social sciences is that individuals react to
their own understanding of events and situations rather than to objective reality
[94]. Humans actively construct their own representation of reality using their
perception and knowledge, but also by consulting with others, especially when
they are uncertain. The information and influence received from others shape the
understanding of the individual. An important function of human interactions is to
construct social representations [99], which are also called shared reality. Social
representation for groups is an analog of cognitive representations for individuals.
Social representation is inter-subjective: it is a collective view of the world, a mutual
understanding and evaluation of situations, objects, and events. Coherent group
action is possible if the members of the group share their understanding of the
situation, evaluation of objects and events, goals and action plans.
Shared reality thus provides the platform for common action. Humans often
treat social representations as objective reality. In their book The Social Construction
of Reality, Berger and Luckmann argue that almost all social institutions (like,
for example, marriage) are created in social interactions and need to be supported
by social interactions if they are to exist [17]. In this view, the value of different
stocks in financial markets can be understood as a social representation created
in interactions among market participants. The value can be negotiated directly in
discussions among investors and their advisors or indirectly by observing others
buying and selling financial products, while their actions are reflected in the
movements of the stock prices.
Shared reality often takes the form of shared narratives [34]. Narration may
be understood as a ‘meta-code’ on the basis of which ‘transcultural messages
about the nature of a shared reality can be transmitted'. People tend to store and
communicate their knowledge in the form of narratives [15]. In psychology, it has
been proposed that scripts, or schematic stories, are the natural representation of
knowledge [121]. Narratives are stories that have a beginning, a body, and an end.
They have a plot. Narratives describe temporal sequences of actions performed
by agents and events and the consequences of actions. Agents have roles, try to
achieve goals, and have intentions and plans. Narratives are the instruments by
which individuals understand the world in which they live. They link actions to
their consequences, they tell agents how to structure their experience and how
to evaluate other agents, events, and objects. Narratives are also the basis of
prediction. They tell us about the consequences of actions and about typical courses
of events. Narratives also tell individuals how others are likely to behave. In terms
of narratives, individuals convey economic knowledge, for example, about how
economic growth occurs [117].
In a society many narratives usually coexist. Individuals’ interpretation of reality
and their expectations depend on the narrative they adopt. Individuals tell stories to
others in the social structure they belong to. A narrative is more likely to be adopted
if those the individual interacts with also adopt it, or if it reflects a narrative scheme
that already exists in society. People attach their personal narratives to the narratives
of the groups they belong to. They also change existing narratives and create new
ones, usually in a social process, in interaction with others.
Individuals and organizations in positions of power and authority
create narratives purposefully in a top–down process. For example, national banks
may offer an official interpretation of the financial situation. The intention of such
narratives is to impose an interpretation of events in a financial market and to control
the market dynamics. Interacting individuals also socially construct narratives in a
bottom–up process on the basis of personal experiences [113]. Narratives shared
by a social group are created by integration of stories describing the individual
experiences of the actors. Shared narratives allow individuals to establish what is
common in their experience, find coherent explanations of observed events, and
coordinate their decisions and actions. Individuals also construct personal narratives
that instantiate common group narratives.
As we shall argue in Chaps. 3 and 8, financial markets are in fact complex
systems where the global dynamics emerges from interactions among individuals.
Different types of interactions drive the dynamics of the system. Individuals try to
understand the situation and the trends in financial markets. Both their attempts to
understand and their decisions collectively create an emergent shared reality which
is also called a social representation. This shared reality implies the commonality
of the individuals’ inner states, views, opinions, and evaluations. Achieving such
commonality is motivated, i.e., individuals want to achieve a view of reality that is
congruent with the views of others. It also involves the experience of a successful
connection to other people’s inner states [42].
The agreed-upon facts, their interpretations, and relations between those facts, as
well as their evaluation and emotional connotations, become the basis for investment
decisions in financial markets. Because they are shared by a number of individuals,
they result in coordinated decisions by a group of people. The communications
concern both the global understanding of the market situation and trends and
interpretations of recent events, such as rapid changes in indexes, or news published
in print or online.
If the decisions of large enough groups of people become synchronized, they can
influence the dynamics of financial markets. Individual, independent decisions of
investors result in a Brownian (i.e., random) motion of stock prices, where small
changes are prevalent and large changes are relatively rare. Coordinated decisions
stemming from communication among larger groups of people result in price
dynamics that resembles Lévy flights, a form of dynamics in which the direction
of changes is random but their sizes are drawn from a heavy-tailed distribution,
making very large changes relatively frequent. The dynamics of prices observed
in real markets lies somewhere between Brownian motion and Lévy flights, which suggests
that the decisions of individual investors result partly from communication with
others, partly from independent decision-making [31].
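To make the contrast concrete, here is a minimal numerical sketch (our own illustration, not from the book; using a Student-t distribution with few degrees of freedom as a stand-in for heavy-tailed, Lévy-like increments is our assumption):

    import numpy as np

    # Compare Gaussian (Brownian-like) increments with heavy-tailed
    # (Levy-like) increments, approximated here by a Student-t law.
    rng = np.random.default_rng(42)
    n = 100_000
    gaussian = rng.standard_normal(n)            # independent decisions
    heavy = rng.standard_t(df=3, size=n)         # coordinated, heavy-tailed moves

    for name, steps in [("Gaussian", gaussian), ("heavy-tailed", heavy)]:
        sigma = steps.std()
        big = np.mean(np.abs(steps) > 5 * sigma)  # frequency of 5-sigma moves
        print(f"{name:12s}: fraction of |move| > 5 sigma = {big:.2e}")

In the Gaussian series, moves beyond five standard deviations are essentially absent, whereas in the heavy-tailed series they occur thousands of times more often; it is this excess of large moves that the Lévy-flight picture captures.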
Social representation of financial markets is created through different types of
communication between heterogeneous agents. The dynamics of financial markets
depends to a large extent on the decisions of the largest players, mainly govern-
ments, who decide which currencies to buy and which to sell. To minimize the
impact of these high-volume transactions on financial markets, they are spread over
longer time spans. Other high-volume players are banks. When the experts working
for banks think they know which currency will go up and which will go down, these
banks engage in trading. The volume of trading by individual investors is usually
much smaller. The decisions of an individual investor usually have a negligible
effect on the market. If, however, a larger number of investors synchronize in their
decisions, they can influence the market in a significant way. Social representation
of the market, also called shared reality, provides one of the main ways in which a
large number of investors are likely to synchronize.
Investors communicate with other investors and with financial advisors. Advisors
also frequently communicate with other advisors to gain better understanding of
what is currently happening in financial markets. They also communicate with
other people in their social networks such as their families and friends. Some
of the communications are face-to-face discussions in dyads or groups, and such
interactions are also often mediated by telephone or Internet. An important medium
for the creation of shared reality is Internet discussion groups. Here both advisors
and investors can interact with strangers, and this extends the creation of a shared
reality. Media, in particular specialized papers and journals, also play important
roles in the creation of shared reality.
The meso-level also plays an important role in creating the shared reality of
financial markets. The social system in which shared reality is created has a
structure. There are many entities between the level of individuals and the
whole system. These entities may have formal or informal structure. Circles of
friends, discussion groups, alliances, etc., underlie the formation of clusters in the
social network. Individuals who belong to the same cluster will usually make the
same decisions, and their actions are likely to be coordinated. Understanding the
meso-level of financial markets may represent a significant step in understanding
the dynamics of the markets.
Survey research in Poland with 30 investors and 30 advisors gives a more
detailed view of the role of communication in market decision-making. In this study,
57 % of respondents answered that they sometimes make decisions on the basis
of information from others, and 10 % responded that they do so often or in every
case. Communication is especially important when confronted with novelty: 93 %
of respondents answered that, at least sometimes, they seek information from others
when something new is happening in the market. Social communication appears
also to be an important way of dealing with contradictory information: 79 % at least
sometimes communicate with others to check contradictory information.
Although the frequency of communication does not differ significantly between
investors and advisors, the information received from others has more impact on
decision-making by investors than by advisors. Investors admitted more often than
advisors that they take into account information from others when making
decisions, and more often change their decisions as a consequence of information
received from others. Public media (press, TV, and radio) were indicated as the most
significant sources of information. Internet portals and direct communication with
others were also indicated as important sources. Internet discussion groups were
indicated to be a much less important source of information.
Interacting individuals create shared reality in almost any domain of social life.
What is specific to shared reality in the market context is that individuals also
interact indirectly through indexes. The activity of an individual is reflected in
price movements. Other individuals observe these price movements. By reacting
to changes in prices with their decisions to buy or sell, individuals further affect
prices. Price dynamics of markets is thus driven by massive feedback loops where
the decisions of individuals drive price changes that affect the decisions of other
individuals.
Strategies of individuals play a crucial role in determining how current patterns of
changes affect further changes, i.e., in establishing the nature of the feedback loops
in financial markets. Beliefs about market dynamics underlie individual strategies.
The strategies of players may directly influence their decisions, or they may be
implemented by computer programs that are increasingly used for trading. The
observed dynamics of the market may either reflect a dominant strategy among the
players, or emerge from a combination of different strategies. Common features
of strategies are likely to cause large swings in the market if the pattern of price
changes that is present in a large proportion of strategies occurs in the market.
This is due to synchronization of the large number of decisions in the market.
Synchronization may be especially pronounced when the decisions are made by
computer programs that use the same rules. Technical analysis is used in an attempt
to predict price changes following a specific pattern of changes. Interestingly, since
technical analysis is used as the basis for making financial decisions, it may become
the basis for strategies of individuals and affect the dynamics of the market.
The most dramatic changes in the market are bubbles and crashes. They occur
when individual investment strategies form a positive feedback loop: when the
decision to buy by some individuals facilitates buying by other individuals, or the
decision to sell facilitates the decision by other individuals to sell. In other words,
when rising prices result in a robust decision by a large proportion of individuals to
buy, or conversely falling prices result in a decision to sell. Such positive feedback
loops may be caused either by cognitive mechanisms based on beliefs and mental
models concerning market dynamics, or by emotions, where positive emotions
caused by rising value lead to the decision to buy, and negative emotions result
in the decision to sell.
The positive feedback loop resulting in bubbles and crashes can also be produced
by the fact that, in general, people believe that the price of the asset is what other
people believe the price is. As some individuals buy, the value grows. This
makes others believe that the value is growing, which reinforces the growth belief of the
original group. From this perspective, the need for closure [78], i.e., the tendency to
hold already formed beliefs, even when faced with contradictory facts, is likely to
result in especially pronounced bubbles and crashes, because it makes individuals
insensitive to smaller changes in market trends which contradict their beliefs about
the direction of market dynamics.
Individual investment strategies are likely to be based on the broader schemas
held by individuals. Traditionally, higher stock prices are associated with wealth
and profit. Falling stock prices are associated with bankruptcy. There is also a
widespread belief that in the long run stock prices rise. This general schema, shared
by many market players, is consistent with players taking the long position, which
results in market growth. But modern financial instruments allow players to profit
equally well from rising and falling stock prices. As an increasing number of
players, and especially the big players, become aware of the fact that profits can
easily be made on falling stocks, and thus increasingly use the short strategy, the
shared reality of market players who associated profit with market growth will begin
to change. As discussed above, shared beliefs shape market dynamics. If the shared
reality of market players no longer associates profits with rising stocks, the general
tendency of markets to grow may no longer be valid.
The general idea expressed in this book is that social dynamics drives financial
markets. From this perspective, one can also consider systemic risks, i.e., the risk
of collapse of the entire financial system. The resilience of the financial system
is in large part due to the relative independence of the different institutions. If
some institutions fall, those remaining can assure functionality of the entire system
and create conditions for the fallen institutions to recover, or for new financial
institutions to be created in their place. It would require a very large portion of
financial institutions to fail for the entire financial system to collapse. The situation
changes dramatically when the institutions become strongly interdependent, so
that the failure of one institution implies the failure of several others. Financial
mechanisms that may underlie this scenario are relatively clear. Since banks borrow
money from other banks, the failure of a bank may have a cascade effect. Social
mechanisms may have similar effects.
Information spreads through social networks and results in the failure of financial
institutions. As discussed by Christakis and Fowler [31], in September 2009,
individuals stood in long queues to withdraw their money from Northern Rock,
simply because other people were doing the same. As a consequence, the bank had
to close its operations for a day and borrow money from the Bank of England.
Many other financial institutions were affected. A short scare experienced by a small
number of individuals and spread through social networks can cause a widespread
panic that affects several institutions.
Globalization of media, the rapid growth of the Internet, and progress in mobile
communication result in increasing numbers of people sharing the same informa-
tion, beliefs, and emotions. People are then also more likely to synchronize their
decisions. As a result, financial markets lose degrees of freedom. For example, a
cascade of failures of several banks may result from a spreading collapse of trust,
rather than from lack of financial resources. If the decision-makers in a bank no
longer trust that other banks wanting to borrow money will be able to pay it back,
they may refuse to lend, even though sufficient funds may in fact be available.
Such a collapse of trust can spread through social mechanisms. If the shared
reality is such that, in general, no one is sure that the financial institutions will
return borrowed money, financial activities may slow down to a level that would
not be sufficient for the survival of many institutions. In assessing the risk of
systemic failure, it is important to take into account both the financial and the social
mechanisms that result in a loss of independence of financial institutions and the
globalization of the system [90].
3 Financial Markets as Interacting Individuals: Price Formation from Models of Complexity

3.1 Introduction

In this chapter we embark upon what could be referred to as investment decision-
making in a complex environment to stress the difference with the 'man on the
moon’ scenario of Sect. 2.6. Decision-making in a complex environment refers to
the case where the investment decision of an investor is directly influenced by the
outcome of actions taken by other decision makers. A day trader in a stock market
is one such example, since the choice of when to enter/exit a position depends
on the price trajectories created by other day traders. In order to gain a better
understanding of how the aggregation of individual behavior can give rise to
measurable effects in a population in general, and in financial markets in
particular, the way forward would therefore seem to be to model specific human
traits on a microscale, and to study the emergence of dynamics with observable
or even predictable effects
on a macroscale. This chapter will introduce some of the financial market models
which are complexity-oriented. They will be used later in the book to explore price
dynamics.

3.2 Chaos Theory and Financial Markets

Having dealt in the last two chapters with the traditional view of finance and
then the more recent behavioral finance view which takes its starting point in
individual investor psychology, we now return to the main theme of this book,
that is, price formation in financial markets seen as a sociological phenomenon.
Having discussed in Chap. 1 the main building block of financial models, the
rational expectation hypothesis, one might ask whether it is possible to do better
using more realistic assumptions. One reason why macroeconomists continue to
use rational expectations in their models is perhaps the lack of established and
convincing alternatives, or maybe simply the fear of the explosion of complexity
in modeling a more accurate representation. If one should not or does not use the
sharply defined hypothesis of rational expectations, then what should one use?

In situations like this it is often a good idea to attack the problem head on, even
when one does not have a clear idea at the outset of how to solve it. One of the
main issues with rational expectations theory is the assumption that all investors
act alike. That is, they analyze the market in a cool and comprehensive manner,
using all available information relevant for the future cash flows
of a given asset. This description not only sounds far from reality, it actually is far
from reality, and most economists would probably agree. Let us instead take the
completely opposite view. Assume for a moment, in a thought experiment, that we
know all investors, their wealth, and their investment strategies independently of
prevailing market conditions. Let us also assume that we are somehow able to enter
this huge amount of information into a fast supercomputer. Will we then be able to
understand and predict future market movements?
This line of thinking goes back to the tradition of the eighteenth century French
scientist Pierre Simon de Laplace, who wrote in his book A Philosophical Essay on
Probabilities (1814):
We may regard the present state of the universe as the effect of the past and cause of the
future. An intellect which at any given moment knew all the forces that animate nature
and the mutual positions of the beings that compose it, if this intellect was vast enough
to submit the data to analysis, could condense into a single formula the movement of the
greatest bodies of the universe and that of the lightest atom; for such an intellect nothing
could be uncertain and the future just like the past would be present before its eyes.

One century later, however, a compatriot of Laplace, Henri Poincaré, would discover
that just having the equations is not enough: we also need to know the initial
conditions (i.e., the initial positions and velocities of the constituents) of a system
in exact detail, since the slightest change in the initial conditions can give
completely different results, even with the same equations. With the invention of
computers and the growth in computing power during the 1970s, it became possible
to explore this idea broadly, and it now goes by the name of chaos theory.
Many readers may recognize chaos theory through the work of the meteorologist
Edward Lorenz, who famously asked whether the "flap of a butterfly's wings in
Brazil sets off a tornado in Texas".
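A minimal numerical sketch (our own illustration, not from the book) makes this point concrete, using the chaotic logistic map as a toy deterministic system:

    # Two trajectories of the chaotic logistic map x -> 4x(1 - x), started
    # from initial conditions differing by only one part in a million.
    x, y = 0.400000, 0.400001
    for step in range(1, 51):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")

Although both trajectories obey exactly the same deterministic equation, after a few dozen iterations they are completely decorrelated: determinism in the equations does not by itself deliver predictability in practice.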
Let us then step back and reconsider the question posed two paragraphs above
concerning the predictability of future market movements. We can summarize
the situation as follows: according to rational expectation theory as used in
the traditional approach to finance, financial market movements are random and
consequently impossible to predict, and that is all there is to it! Laplace on
the other hand proposes the exact opposite scenario, with everything in this
universe described by determinism. This includes everything from the motions of
atoms at the smallest scale up to the motions of galaxies at the largest. And in-
between these two extremes, the same goes for all other activity including the
movement of prices in a stock market. Even though Laplace’s considerations are
more philosophical than practical, the idea is that everything could ultimately be
understood deterministically, if only we could grasp the correct underlying variables
governing all the forces in nature and analyze the positions of all the entities that
compose it.
Poincaré agreed with the determinism in the equations, but pointed out that in
practice a deterministic understanding of the world would forever remain beyond
our reach, since changing the initial conditions of the equations changes their
outcome, and we will never be able to know the initial conditions of our universe,
specifying exactly how it started out. Needless to say, this has not dissuaded people
from trying to understand market movements using chaos theory, assuming some
underlying variables governing cycles of price formation in financial markets,
much like the business cycles governing the economy. The goal in that case is
to determine a so-called strange attractor, whose dimension gives the number of
relevant variables.
The principle of a parsimonious description of a system in terms of some few
underlying variables is the hallmark of most theories in the field of physics, and
it has proven successful when imported to other domains, such as biophysics and
geophysics. It is probably fair to say that the jury is still out when it comes
to the possibility of gaining insight by describing financial markets or economic
developments in terms of chaos theory. Rather than looking for a few equations
governing the relevant variables, the complexity approach to be presented in the
following stresses the impact that each constituent can have on a system. By treating
each constituent of a system, there is a sense in which complexity theory actually
appears more ‘chaotic’ than chaos theory, in that the latter, instead of considering
all the constituents, only considers a few relevant variables of a given system. If we
return to the question of price formation in a financial market, each constituent is an
entity, be it a computer or a human, that contributes to forming the price by trading.
The question then is how to find order in the apparent ‘cacophony’ raised by the
voices of the constituents.

3.3 The Symphony of the Market

In order to visualize the complexity models that we will introduce in the following
sections and chapters, let us try to draw an analogy between the way prices are
formed in a financial market and the production of a symphony by an orchestra.
One can think of each market participant as a musician in such an orchestra, playing
different tunes (trading frequencies and directions of trade) with different degrees
of loudness (amounts of cash behind a trade). The maestros conducting the market
participants would be played by central bankers worldwide. The question is then
whether we are somehow able to decode the ‘symphony’ of price movements played
by such an orchestra.
Even though the analogy might sound a bit far-fetched, it does illustrate the
collective behavior of a system where humans have to perform an act which depends
on the outcome of what other people are doing. Just as different traders use different
trading strategies, so the different musicians in an orchestra have different scores
to guide them as they play. A score tells the musician how to play, but he/she has
to wait for the outcome of his/her colleagues’ actions to play the instrument in the
right manner. The situation is similar for trading decisions based on a trading strategy used
by a trader. The latter never acts in complete isolation, but makes his/her decisions
depending on the ‘tunes’ played by the market. Trend-following strategies depend
on the momentum of the market, whereas fundamental value strategies depend on
the price level of the market, buying when the price has become undervalued and
selling when the price has become overvalued. In addition, anticipation of how
future tunes will be played matters, thanks to new policy-making by central bankers
or new economic statistics.
The complex but fascinating way the market symphony is created becomes
clearer when we consider its hierarchical structure. At the first level of the hierarchy,
we can imagine each given stock and its price movements being played by a local
orchestra of individual traders. This local tune (i.e., price movements of the stock)
is not made in complete isolation, but depends (partly) on the way other local
orchestras (other stocks) play their melodies. Going up one level, we can then
consider the tune of a stock created by the aggregate action of traders, as if it were now
itself produced by a single instrument, but playing in a national orchestra composed
of all stocks in an index. Adding up the contributions of the local orchestras
creates what one could call a national anthem, illustrated by a given country’s stock
index.
As we shall see in Chap. 7, where we introduce such a hierarchical model, the
hierarchical structure does not stop at the level of a given country, but goes on to
a super-hierarchical structure where a global symphony is played by a worldwide
orchestra of all nations together. As we shall show in Chap. 7, each national
orchestra does not play its anthem in isolation, but has to wait for the outcome
of the other orchestras in order to know how to play its national anthem properly.

3.4 Agent-Based Modeling: Search for Universality Classes in Finance

Having presented this picture in which the diversity of human decision-making
somehow collectively creates a symphony of price movements, we have to ask how
one might formalize such a picture. A first step in this direction is to use agent-
based models. These are constructed precisely in order to allow for diversity when
it comes to decision-making.
The concept of agent-based models goes back to the 1940s, when it arose from
ideas put forward by John von Neumann and Stanislaw Ulam. They were interested
in self-replicating systems and came up with the idea of cellular automata. Just
like cellular automata, agent-based models cover a very broad range of mainly
computational models that simulate the actions and interactions of some set of
autonomous entities called agents. Just from this description, the link to complex
systems should be clear. In the context of finance and economics, the great
advantage of using agent-based models is that they provide a handle for introducing
behavioral traits at the microlevel of each agent, and then studying the way such traits
and rules of interaction among the agents can generate complex behavior at the
system level.
Originating from the field of physics, several types of agent-based models have
been proposed to describe the dynamics of pricing in financial markets. Typically,
these kinds of models consider a financial market as composed of agents that use
strategies to buy or sell shares. Strategic decisions by the agents could, for example,
be based on estimation of fundamental values and/or the latest price movements of
the market if using technical analysis. The agents are adaptive in the sense that they
adapt to the price movements of the market by choosing the most successful strategy
at each time step. In this perspective, such models offer a ground-breaking view of
financial markets, since the agents are boundedly rational, and in some cases exact
solutions can be obtained using techniques borrowed from physics.
Before we go into the specifics of the different agent-based models, let us begin
with a word of caution. It is easy to fall into the trap of considering each model as
a complete description of a given financial market. But it is clear that no model can
ever completely explain all the detailed price dynamics of a financial market, just as
no map can ever show all the details of a location. Nonetheless, maps are useful to
gain an overview, and our claim in the following is that we need models to guide us
through the ‘wilderness’ of price dynamics in financial markets.
We would rather emphasize a view in which the models serve as probes and
handles, used to get a successively better description of what one could call the
generic behavior of financial markets. Generic behavior could, for example, be
the point of identifying certain moments where the markets are framed, a point
we shall elaborate in Chap. 6. And we would stress realism as the main advantage
of complexity models compared to the traditional rational expectation models: we
know for certain the starting point cannot be true under the rational expectation
hypothesis, whereas the much weaker assumptions in complexity models can serve
as an almost exact starting point in terms of realism.
At the lowest level of description, a complexity model of a financial market could
for example be:
A financial market is a place where a number of actors trade, using different strategies.

Even though this statement does not say a lot, it is hard to dispute, and can serve as
a foundation to build upon, as will be explained in the next section. The concern of
the traditional economist is, however, that this will never get us very far, since the
next refinement in such a description can be taken in an infinite number of different
directions.
Interestingly, physics faced what appear at first glance to be somewhat similar
concerns back in the 1950s–1970s. The question then was: How much detail is really
needed to describe a given system properly? This question is also relevant in the
present context, since if a detailed description of the price dynamics of a financial
market is needed with all its interwoven facets, then there will not be much hope of
getting insight into financial markets using the rather simple models we introduce
in the following sections. However, it is notable that the problems encountered in
physics at that time eventually led to the development of tools now used in different
areas of physics, and which go under the general name of renormalization group
theory.
Back then, there were concerns in the domain of particle physics (atomic physics)
that, in order to understand the interactions of the forces in nature, one needed a
detailed description of the forces described at a certain length scale. It turns out,
for example, that an electron, which looks like a singular solid object when viewed
from afar, looks very different as one zooms in and views from smaller and smaller
distances. As one decreases the distance (which means going to higher energies in
collisions), an electron appears to be composed of self-similar copies of yet other
electrons, together with positrons and photons. In some ways, this is similar to a
stock index, in the sense that it is not just a stock index, but has a life of its own
in terms of the stocks which constitute it. Likewise, the forces that act between
different stock indices look different when compared to the forces acting between
the individual stocks.
This development of renormalization group theory was the result of successive
contributions, first by Stueckelberg and Petermann [133], followed by M. Gell-
Mann and F.E. Low [55], then R. Feynman, J. Schwinger, and S.-I. Tomonaga (who
won the Nobel Prize in physics in 1965 for their contribution), who showed how
details depending on a given scale were irrelevant. A deeper understanding of this
problem came from a different branch of physics, condensed matter physics, when
L.P. Kadanoff and K.G. Wilson [72,149] introduced the technique to understand the
general nature of phase transitions. This led to yet another Nobel Prize in physics,
illustrating how important the technique has become in modern physics.
This lesson from physics tells us that insight can be gained as one studies how
forces change when varying the typical scale of a phenomenon. The question is
whether a similar insight can be used to understand how market ‘forces’ change
when, instead of looking at correlations between different stocks in an index,
one goes to a different scale and considers the forces acting between different
stock indices, with currency and commodity indices adding further complexity
to the picture. In a similar vein, it would be interesting to obtain a theoretical
understanding of how market forces are modified by changing the time scale in
question, relating events at the time scale of seconds and minutes to what happens
at the hourly, daily, and monthly time scales. We mention this partly because we
find it appealing to use such an analogy to gain insight into market forces, but more
importantly because renormalization group theory explains the concept known as
universality in physics.
It turns out that, despite their apparently different nature, phase transitions seen
for example in magnetic systems and alloyed materials, not to mention superfluid
and superconducting transitions, have similar critical exponents, meaning that their
behavior near these transitions is identical. The phase transitions of these very
different systems can therefore all be described by the same set of relevant variables,
whence, as mentioned two paragraphs ago, a detailed description for each system
is irrelevant to understanding the physics governing the transitions. It should be
mentioned that the notion of universality classes also extends to systems that are
out of equilibrium. In that case it has been shown how domain growth of growing
crystals, or growth of oxygen domains in high-temperature superconductors, are
again determined by a small set of relevant variables (like the dimension of the
system, and whether the so-called order parameter is conserved or not). Given the
successes of renormalization group theory, physicists are therefore trained to seek
the relevant variables of a system, knowing that ultimately the details should not
matter. It is through such spectacles that the models introduced in the following
sections should be viewed: as primitive models of financial markets which, despite
their simplicity, are intended as probes to search for the relevant variables in
financial markets.

3.5 The El Farol Bar Game and the Minority Game

The El Farol bar game was invented by the economist Brian Arthur [9] and first
published in 1994. In this game, N people have to decide independently each week
whether to show up at their favorite bar, where popular live folk music is played
once a week. The problem, however, is that the bar has a limited number of chairs
$L < N/2$. Now since each person will only be happy if seated, the people try to
use the previous week’s attendance to predict future attendance. If a person predicts
that the number of people that will attend the bar is greater than L, that person will
stay at home. The relevant thing to note is that the El Farol game describes adaptive
behavior, since the audience in the bar uses past data to predict future attendance.
One reason for discussing this model is that it gives a clear example of a situation
in which rational expectations theory cannot be used to solve the problem. Suppose
a rational expectations prediction machine exists and that all the agents possess a
copy of it. If for example the machine predicted that a number larger than L would
attend the bar, nobody would show up, thereby negating the prediction of the rational
expectations prediction machine.
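A minimal simulation sketch (our own illustration; the moving-average predictors and all parameter values are our choices, not taken from [9]) conveys the adaptive flavor of the game:

    import random

    # N people, L chairs (L < N/2); each agent predicts next week's attendance
    # as the average over its own lookback window of past weeks.
    N, L, WEEKS = 100, 40, 30
    random.seed(1)
    lookback = [random.randint(1, 5) for _ in range(N)]
    attendance = [N // 2]                      # seed the attendance history

    for week in range(WEEKS):
        going = 0
        for k in lookback:
            recent = attendance[-k:]
            predicted = sum(recent) / len(recent)
            if predicted <= L:                 # expects to find a free chair
                going += 1
        attendance.append(going)

    print(attendance[-10:])

After some initial overshooting, attendance tends to settle near the capacity L: no single rational forecast organizes the outcome, but the mixed ecology of simple predictors does.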
Inspired by the El Farol bar game, the minority game (MG) was introduced in
1997 by Yi-Cheng Zhang and Damien Challet as an agent-based model to study
market price dynamics [138]. The model was designed according to a leading
principle in physics: in order to solve a complex problem, one should first identify
essential factors at the expense of trying to describe all aspects in detail. The
minority game should therefore be considered as a kind of minimal model of a
financial market.
Specifically, the minority game is described by just three parameters:
• The number $N$ of agents (market participants).
• The memory $m_i$ of agent $i$, representing the number of past days used by the
agent to decide whether to buy or sell an asset. $m_i$ is therefore the length of the
signal (see Table 3.1).
• The number $s_i$ of strategies held by agent $i$.
We assume in the following that all agents use the same memory $m_i \equiv m$ and the
same number of strategies $s_i \equiv s$. The minority game is so named because agents
are rewarded whenever their decision is in the minority. We will explain this point
in detail below.
In order to quantify the technical analysis, MG agents use lookup tables
representing the last m directional moves of the market. Representing an upward
movement of the market by 1 and a downward movement by 0, a strategy can be
represented as a lookup table like the one given in Table 3.1, where $m = 3$.

Table 3.1 Example of an $m = 3$ strategy table used by agents in the minority
game. The left column of the table shows the $2^m$ possible price histories of
upward (1) and downward (0) movements of the market over the past $m$ time
steps. The right column then assigns the action prescribed by this particular
strategy at the next time step ($+1$ for buy, $-1$ for sell)

Price history   Prediction
000             +1
001             -1
010             -1
011             +1
100             -1
101             +1
110             +1
111             -1

A strategy tells
the agent what to do whatever the market behavior was in the past. If the market
went down over the last 3 days, the strategy represented in Table 3.1 tells the agent
that now is a good moment to buy ($000 \to +1$ in the table). If instead the market went
down over the last 2 days and then up today, the same strategy tells the agent that
now is a good moment to sell ($001 \to -1$ in the table).
Despite the apparent simplicity of the model, its complexity is revealed when
one considers the total number of possible strategies. Representing just the up and
down movements, there are already $2^m$ possible price histories when one takes into
account the last $m$ days. Since for each possible price history, i.e., for each entry in
the left column of Table 3.1, a strategy gives a recommendation of what to do, i.e.,
buy $= +1$ or sell $= -1$, the total number of strategies is $S = 2^{2^m}$. Even for
the relatively short time period of, say, 2 weeks, that is, ten trading days,
$S = 2^{1024}$, a number which exceeds $10^{300}$. To grasp the magnitude of such
a number, note that the total number of elementary particles in the universe is
'only' supposed to be around $10^{90}$. So in a toy financial market where traders
only need to make decisions based
on the directional moves of the market over the last 2 weeks, without taking into
account their magnitude, they would have far more choices than there are particles
in the universe! If nothing else, this exercise shows that it is little wonder that
people can be overwhelmed when making decisions about when to invest. Trading
is complex!
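The counting is easy to verify directly, using Python's exact integer arithmetic (a trivial illustration of the argument above):

    # Number of strategies S = 2^(2^m) for a memory of m days.
    for m in (3, 10):
        histories = 2 ** m                     # possible price histories
        digits = len(str(2 ** histories))      # decimal digits of S
        print(f"m = {m:2d}: {histories:4d} histories, S has {digits} digits")

For $m = 3$ this gives $S = 256$, while for $m = 10$ the number $S$ has 309 decimal digits, i.e., it indeed exceeds $10^{300}$.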
Naturally, the possible set of different price histories for a given fixed value
of the memory m always stays the same, i.e., the left-hand column of Table 3.1
is the same for all possible strategies. What characterizes a strategy is therefore
what recommendation it makes given this constant set of different price histories.
That is, what characterizes a strategy is just given by the right-hand column of
Table 3.1. Therefore, a strategy is represented formally by a binary vector with $2^m$
components. As mentioned above, there are $S = 2^{2^m}$ such vectors. Many of these
vectors are, however, very similar in the trading strategy they recommend. Take for
example Table 3.1 and change just the last entry, so that instead of recommending
to sell if the last 3 days were up, it would then recommend to buy. The two
strategies are represented by the vectors $v_1 = (1, -1, -1, 1, -1, 1, 1, -1)$ and
$v_2 = (1, -1, -1, 1, -1, 1, 1, 1)$, respectively. Clearly, these two vectors, or trading
strategies, are highly correlated. This remark was elaborated upon in [26], where
it was shown that a qualitative understanding of the model can be obtained from a
much smaller set of just $2^m$ independent vectors (or strategies) instead of the total
set of $S = 2^{2^m}$ strategies. So to get a qualitative description of the minority game,
one only needs a small subset of the total number of strategies.
It turns out that a qualitative understanding of the MG for a given fixed value of $s$
can be obtained from just one parameter $\alpha$ [120], given by the ratio of the size of the
set of uncorrelated strategies to the total number of strategies attributed to the pool
of $N$ agents, viz., $\alpha \equiv 2^m/N$. However, for a general value of $s$, it is more natural
to define the control parameter instead as $\alpha' \equiv 2^m/(sN)$ [27], since this parameter
gives the ratio of the size of the set of uncorrelated strategies to the total number
of strategies for any value of $s$. The distinction between the two definitions is not
important for the results discussed in the following, since $s$ is taken as constant,
so we will stick to the notation used in the literature, i.e., $\alpha \equiv 2^m/N$. We will
limit ourselves to a very brief introduction to the MG, but readers interested in
the model can find more information in [138] or at the website
www.unifr.ch/econophysics/mgame/show, which is specifically dedicated to research on the MG.
The price dynamics of the model enters in a very special way, and a word of
caution is necessary since the details given in the following turn out to be absolutely
crucial for understanding the model. Figure 3.1 gives a schematic representation
of how the model works. At each time step, the agents update the score of their
different strategies according to the MG score, which is determined by the ability
of a strategy to predict what will be the minority action of the ensemble of agents.
For a formal description of the payoff function, see the box below. Given the actual
history of the market, each agent chooses the best performing strategy from the set
of s available strategies. The best strategy is then used to decide whether to buy or
sell an asset and place an order. All orders are then gathered together, and this leads
to a new price movement of the market: up if the majority chooses to buy, and down
if the majority chooses to sell.

The performance of the $i$th agent's $j$th strategy is determined via its payoff
function $f_{i,j}^{\mathrm{MG}}(t)$, which in the MG is updated according to

$$\delta f_{i,j}^{\mathrm{MG}}(t) = -a_i^j(t)A(t) = -a_i^j(t)\sum_{k=1}^{N} a_k^{*}(t) \;, \qquad (3.1)$$

where $a_i^j(t)$ denotes the action of the $i$th agent's $j$th strategy. The latter is
found in the strategy's lookup table as the right-hand column entry corresponding
to the price history that occurred at time $t$. Let us take the example where,
over the last 3 days, the market first went up, then up again, and today finally
went down, so that the price history was $(110)$. If the $i$th agent's $j$th strategy
was the one shown in Table 3.1, then we find $a_i^j = +1$. $A(t) = \sum_k a_k^{*}(t)$ is
the sum of the actual orders placed by the agents, i.e., the order balance. The
asterisk on $a_k$ indicates that agent $k$ uses the best strategy at time $t$. If the
action of strategy $j$ of agent $i$ has the opposite sign to the majority given by
$A(t)$, we see from (3.1) that this strategy gains.

The name of the model should now be clear, since the payoff function says that,
every time a given strategy takes the opposite position from that taken by the
majority, it gains, whereas if it takes the same position as the majority, it loses.
The gain or loss in the payoff function is proportional to the cumulative action of
the agents.
It is important to note that there is nonlinearity in this model. This nonlinearity is
created by the fact that agents try to cope with market changes by using the optimal
strategy out of the available pool of s strategies at each time step. Therefore as the
market changes, the optimal strategies of the agents change, and this in turn leads
to changes in the market (see Fig. 3.1). This illustrates a highly nonlinear and non-
trivial temporal feedback loop in the model, and it is argued that this is the way
things take place when people trade in real markets. However, it has been shown
in [24] that it was the sharing of the same information that matters for the dynamics
of the MG. That is, if the agents used a randomly generated price history, instead of
the one generated by their own decision-making, the properties of the model would
basically remain unchanged compared to the case where the agents use their own
generated price history. This is not the case for the game we present in Sect. 3.7,
where the agents’ own feedback on the price history is crucial.
The fact that people try out optimal strategies is known in psychology. An
individual is likely to have several, possibly conflicting, heuristic rules for decision-
making [139]. They may be treated as decision schemas which specify strategies for
making decisions in specific circumstances. Based on their experience, individuals
are likely to choose the decision schema that gives the best fit to the current
situation.

3.6 Some Results for the Minority Game

In the most basic version of the MG, the agents can either buy or sell a unit of
stock at each time step and they are assumed to have an unlimited amount of money
and stock. The dynamics of the market return $r(t)$ at time $t$ is a function of the
difference between the number of market participants who buy and the number who
sell. Therefore the larger the order imbalance, the larger the upward or downward
movement of the price [28, 30, 112].

Fig. 3.1 Representation of price dynamics in the MG. Agents first update the scores
of all their $s$ strategies, depending on their ability to predict the minority action
of the agents in the last time step. After the scores have been updated, each agent
chooses the strategy which now has the highest score. Depending on what the price
history happens to be at that moment, this strategy determines which action a given
agent will take. Finally, the sum of the actions of all agents determines the next
price move: if positive, the price moves up; if negative, the price moves down.
(Figure taken from [69])

Specifically,

$$r(t) = \log\frac{P(t)}{P(t-1)} = \frac{1}{\lambda}\sum_{k=1}^{N} a_k^{*}(t-1/2) \;, \qquad (3.2)$$

where $\lambda$ is a constant describing the liquidity or market depth. The appearance
of $t-1/2$ in the argument of $a_k^{*}(t-1/2)$ models the fact that the decisions
for actions made by the agents take place between the announcement of the
price at the two times $t-1$ and $t$ (see Fig. 3.1). In the MG literature, however,
this fact is not usually stressed and one finds an abuse of notation, apparently
evaluating $r$ and $a_k^{*}$ at the same time $t$. What is meant is nevertheless that
the action is taken first and then the price is announced. The sequence of
events is clearly illustrated in Fig. 3.1. Having made this point clear, we will
for simplicity stick to the standard abuse of notation and evaluate $r$ and $a_k^{*}$ at
the same instant of time.

Fig. 3.2 Volatility $\sigma^2/N$ as a function of the control parameter $\alpha$ in the
MG. Surprisingly, agents can enter a state where the volatility of the market is
smaller than the volatility of a system where the agents trade randomly. The point
for which the volatility is a minimum defines a 'phase transition' between a
symmetrical phase where markets are unpredictable (efficient) and an asymmetrical
phase where a certain degree of predictability exists (see also Fig. 3.3) (Figure
taken from [153])

Figures 3.2 and 3.3 illustrate the main results for the minority game. They are taken
from [153] and show the behavior of the model for different values of N . The figures
are constructed by creating different games with different memories m, and thereby
varying the control parameter $\alpha = 2^m/N$.
Figure 3.2 shows the volatility of the model as a function of $\alpha$, plotted
on the x-axis. The first thing to notice is the data 'collapse' shown by the overlap
of the different curves for different values of $N$. This tells us that $\alpha$ is indeed the
relevant parameter to describe the model. Intuitively, this makes sense because $\alpha$
gives us the ratio between the number of uncorrelated strategies in play and the total
number of strategies that the $N$ traders adopt.
When this ratio is small, most agents adopt very closely related strategies. The
small $\alpha$ case can be understood in terms of 'crowd–anti-crowd theory' [69]. It turns
out in this case that two groups of agents form, holding strategies with opposite
predictions. This gives rise to huge fluctuations, as shown by the large value of $\sigma$ in
Fig. 3.2. On the other hand, for large values of $\alpha$, it becomes less and less likely that
any two agents will adopt the same optimal strategy. In this case, the MG becomes a
random game and the variance per agent asymptotically approaches the value 1, as
would happen in a game where agents sold or bought randomly at each time step.

Fig. 3.3 Predictability of price moves in the MG: predictability parameter $H/N$
versus control parameter $\alpha$. As can be seen from the figure, there are two
'phases'. (i) Small $\alpha$: price movements are unpredictable. (ii) Large $\alpha$:
prices are no longer created randomly. This means that information is left in the
price movements, and this gives rise to a certain predictability (Figure taken
from [153])

Most interesting is the fact that, for $\alpha \approx 0.34$, agents somehow organize
themselves into a state where the market has lower volatility compared to the case
where they trade completely randomly. The minimum separates the state of the
system into two different phases. This can be better understood by considering
Fig. 3.3, which shows the predictability parameter $H$ of the system. Figure 3.3
shows that two different 'phases' exist in this model: (i) the crowded symmetric
phase for small values of $\alpha$, where the market is unpredictable, as shown by the
predictability parameter $H = 0$ in the lower plot, and (ii) the uncrowded phase,
where on the contrary the actions of the agents leave information which can be traded
on, as shown by the predictability parameter $H > 0$.
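To make the update cycle of Fig. 3.1 and the payoff (3.1) concrete, here is a minimal simulation sketch (our own illustration; the parameter values are arbitrary, and no attempt is made at the careful averaging behind Figs. 3.2 and 3.3):

    import numpy as np

    # A bare-bones minority game: N agents, memory m, s strategies each.
    N, m, s, T = 301, 3, 2, 2000
    H = 2 ** m                                  # number of possible histories
    rng = np.random.default_rng(0)

    # Each strategy is one possible right-hand column of Table 3.1:
    # an action (+1 buy, -1 sell) for each of the 2^m price histories.
    strategies = rng.choice([-1, 1], size=(N, s, H))
    scores = np.zeros((N, s))                   # payoffs of all strategies
    history = rng.integers(0, H)                # m-bit history as an integer
    imbalances = []

    for t in range(T):
        best = scores.argmax(axis=1)            # each agent picks its best strategy
        actions = strategies[np.arange(N), best, history]
        A = actions.sum()                       # order imbalance A(t)
        imbalances.append(A)
        # Payoff update (3.1): strategies opposite to the majority gain.
        scores -= strategies[:, :, history] * A
        up = 1 if A > 0 else 0                  # majority buying moves price up
        history = ((history << 1) | up) % H     # slide the m-bit window

    print(f"alpha = {H / N:.3f}, sigma^2/N = {np.var(imbalances) / N:.2f}")

Rerunning the sketch with increasing $m$ at fixed $N$ moves $\alpha = 2^m/N$ across the phase diagram, and the printed volatility should trace out the U-shaped curve of Fig. 3.2, dipping below the random-trading benchmark of 1 around $\alpha \approx 0.34$.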
For a longer discussion on the minority game and literature on the model, we
refer the reader to the website: www.unifr.ch/econophysics/mgame/show.

3.7 The $-Game

The main problem with the minority game described in the last section is of course
that it is hard to imagine people trading in real markets just because they would like
to be in the minority! People trade either because they try to make profit or because
they need to hedge their positions. But even when just trying to hedge a position,
traders will of course try to do so at the best possible price. This elementary fact is
not taken into account in the minority game, and was the main criticism that led to
another agent-based model called the $-game ($G), introduced to stress the fact that
people trade to make profit [144]. This point will now be explained further.
Actually, it is sometimes advantageous to be on the minority side of trades,
especially when one enters a new position. This is because, when opening a position,
it is better to be opposite to the rest of the market in order to get a favorable price: if
most people buy but few people (including oneself) sell, the imbalance will push the
price up, ensuring that one has sold at a favorable price, and vice versa when entering
a position by buying an asset. However, in the time that follows after entering a
position, whether long or short, one no longer wants one’s bet to be a minority bet.
Let us say we bought some gold today. We then want this bet to be on the majority
side following the purchase, since then the imbalance created by the majority of
trades pushes the price up in our favor. The best strategy in terms of profitability
is therefore not always to target the minority but to shift opportunistically between
the minority and the majority. The payoff function that takes into account these
considerations was proposed in [144].

The payoff function in the $G is given by

$$\delta f_{i,j}^{\$\mathrm{G}}(t) = a_i^j(t-1)\sum_{k=1}^{N} a_k^{*}(t) = \lambda\, a_i^j(t-1)\, r(t) \;. \qquad (3.3)$$

Notice that the payoff function now depends upon two different times. To
understand this point, imagine we only have access to the daily close of a
given market and decide to invest in this market at time $t-1$, entering a
position determined by how the market closes at that instant. Then it is not
until the day after, knowing how the market closed, i.e., at time $t$, that we will
know whether the decision made the day before was a wise one or not.
This is especially clear from (3.3), where one can see the link between the
decision made on day $t-1$, given by $a_i^j(t-1)$, and the return of the market
between day $t-1$ and day $t$, given by $r(t)$. So a strategy gains or loses the
return of the market over the following time step, depending on whether it
was right or wrong in predicting the market movement. Therefore, in the $G,
agents correspond to speculators trying to profit from predicting the direction
of price change.
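The contrast between (3.1) and (3.3) fits into two lines of code. In the following sketch (our own notation), a_t and a_prev denote a strategy's actions at times $t$ and $t-1$, and A_t the order imbalance $A(t)$:

    def mg_score_update(a_t, A_t):
        """MG payoff (3.1): gain when on the minority side at time t."""
        return -a_t * A_t

    def dollar_game_score_update(a_prev, A_t):
        """$G payoff (3.3): yesterday's position gains when today's move goes its way."""
        return a_prev * A_t

The sign flip and the one-step time shift are the whole formal difference, yet, as described below, they change the emergent dynamics completely.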

One should note that the payoffs in both the MG (3.1) and the $G (3.3) are also
assigned to strategies that are not active. This means that all strategies except
the optimal one at a given time $t$ accumulate a 'virtual' return, since their
contributions are not counted in the order imbalance given by the term
$\sum_{k=1}^{N} a_k^{*}(t)$. The lack of this self-impact has
been shown to have important theoretical consequences, but here we note that, from
a practical point of view, it makes good sense for agents to assign a virtual return to
their strategies. This is the reality in practice when investors try out new arbitrage
ideas in a financial market. They first use price data as information in the search for
arbitrage possibilities, without actually entering the market [151].

Fig. 3.4 Good at predicting the minority, bad at earning money. The figure shows
the best (dotted line) and worst (solid line) performing agent for a minority game
with $N = 501$ agents, memory $m = 10$, and $s = 10$ strategies per agent. Upper:
Payoff function (3.1). Lower: Wealth (Figure taken from [144])
As we shall see, the emergent properties of the price dynamics and the wealth of
agents are strikingly different from those found in the MG. Most remarkably, the
upper plot in Fig. 3.4 shows that the best performing MG agent [i.e., the one optimal
according to (3.1)] performs consistently badly in terms of wealth, while the worst
MG agent performs relatively well in terms of wealth. This is a clear illustration of
the fact that a minority strategy will perform poorly in a market where agents
compete to make money. In contrast, the payoff (3.3) of the $G agents matches by
definition the wealth of the agents. However, this does not exclude the potential
usefulness of MG strategies in certain situations, in particular for identifying
extremes, as will be illustrated in Sect. 3.9.
As in the MG, the dynamics of the $G contains nonlinear feedback, thought to
be an essential factor in real markets because each agent uses his/her best strategy
at every time step. This attribute makes agent-based models highly nonlinear and in

general unsolvable. As the market changes, the best strategies of the agents change,
and as the strategies of the agents change, they thereby change the market. As shown
in [118], the Nash equilibrium for the $G without constraints is given by Keynes’
beauty contest, where it becomes profitable for subjects to guess the actions of the
other participants, and the optimal state is one for which all subjects cooperate and
take the same decision (either to buy or to sell). A plain way of saying the same
thing is that, in the $G without any constraints on the amount of money or stocks
the agents can hold, bubbles are generated spontaneously!
This intrinsic bubble-generating property is specific to the $G and completely
absent in the MG, since agents seeking the minority generate mean-reverting
behavior. As will be shown in Chap. 5, we can use this intrinsic tendency of the $G
to create speculation to propose a detection method for bubbles in real markets.
We will discuss this in more detail later. For the moment, we just
note that, because of the constant buying of the agents, the price in the bubble state
deviates exponentially in time from the fundamental value (assumed constant) of
an asset. All subjects profit from further price increases or decreases in the bubble
state, but it requires coordination among the subjects to enter and remain in such a
state.

3.8 A Scientific Approach to Finance

We would like to advocate what could best be called a scientific approach to finance.
Scientific because we propose experiments to go hand in hand with theory, an
approach that comes from the hard sciences, but which is rare in traditional finance.
It has been argued that the reason for the success of the hard sciences is precisely the
link between proposed theories/models and tests of those theories via experiments.
This lies at the heart of the philosophy of science suggested by the Austrian-born
and British-educated philosopher Karl Popper (1902–1994). He introduced the idea
that a scientific theory can never be completely verified, no matter how many times
experimental testing confirms the assumptions of the theory, but that if there is a
single case in which the theory is shown not to hold experimentally, the theory is
thereby falsified.
The term ‘falsifiable’ introduced by Popper simply means that if a theory is
wrong one should be able to show this by experiments. By the same logic, this also
means that if somebody introduces a theory that cannot be falsified, then it is not a
scientific theory. One could mention the efficient market hypothesis as an example
of a theory that can never be falsified by experiments, so that in Popper’s view it
is not a scientific theory. Popper cites falsifiable theories as the root of the progress
of scientific knowledge over time. If his ideas could be generalized to finance, it
would imply that falsifiable theories are the best candidates for making progress in
this field.
Having stressed the need for experiments in accordance with Popper, let us
mention that experimental finance took a big step forward when the Nobel Prize
in economics was awarded to Vernon Smith in 2002. His most significant work

was concerned with market mechanisms and tests for different forms of auction.
However, by far the major part of the experimental work in finance (including work
by Vernon Smith) has considered human rationality and the ability of markets to
find the right price close to an equilibrium setting. In contrast to this approach, we
have seen how behavioral finance adopts a much more realistic description of the
way decision-making actually takes place in financial markets.
It would thus seem natural to incorporate the insights gained from behavioral
finance and apply them to experiments done on financial markets. Interestingly, very
little effort has been made in this direction. The main reason is perhaps that the
majority of research in behavioral finance, including prospect theory, is concerned
with how individual decision-making takes place in a static setting, whereas
price-setting in financial markets is clearly a collective and dynamic phenomenon.
Ultimately, one should of course view the actual pricing that takes place in a
given financial market as one big experiment performed in real time! So if we have
a model for a financial market, like for example the minority game or the $-game,
we should be able to falsify at least part of our model against real market data.
An example of this approach will be shown in Chap. 5. Most important, however,
is the following minimal requirement, or test, for any model claiming to describe
price dynamics in financial markets: assign values to the parameters of the model
randomly, and generate a history of price time series according to the model with
these parameter values. Now imagine that we do not know the parameters used to
generate the price time series and carry out a search to try to determine their values.
If this is not possible, then we will certainly never be able to estimate the parameters
describing any real time series of financial data.
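As a sketch of what such a test might look like in code, the following Python fragment generates a 'black-box' series from hidden parameters and then recovers them by a brute-force grid search. The generator, the grid, and the shared random seed are all our own illustrative simplifications; in a genuine blind test the strategy realization would also be unknown, as discussed in the next section.

```python
import numpy as np

def simulate(N, m, s, seed, T=200):
    """Toy minority-game price generator (an illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    strategies = rng.choice([-1, 1], size=(N, s, 2 ** m))
    scores = np.zeros((N, s))
    history, imbalances = 0, []
    for _ in range(T):
        acts = strategies[:, :, history]
        A = acts[np.arange(N), scores.argmax(axis=1)].sum()
        scores += -acts * A                       # minority-game scoring
        imbalances.append(A)
        history = ((history << 1) | int(A < 0)) % (2 ** m)
    return np.cumsum(imbalances)                  # price ~ cumulative imbalance

hidden = dict(N=51, m=3, s=2, seed=7)             # parameters to be recovered
target = simulate(**hidden)

# Grid search: the candidate reproducing the series best is our estimate.
candidates = [(N, m, s) for N in (31, 51, 71) for m in (2, 3, 4) for s in (2, 3)]
best = min(candidates, key=lambda p: np.abs(simulate(*p, seed=7) - target).sum())
print("recovered (N, m, s):", best)               # matches the hidden values
```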

3.9 Taking the Temperature of the Market: Predicting Big Price Swings

Just as one can get information on the internal state of water by inserting a
thermometer into the liquid, applying agent-based models to real financial market
data and looking at how the agents react through their decision-making offers a
way to probe the internal 'state' of the market. The hope would
be to find a new way of characterizing markets by getting general information about
whether the market is in a ‘hot’ or ‘cold’ state, for example. In the following we shall
discuss ways to implement this idea in practice and we shall illustrate how sudden
large swings in markets seem to have precursors that can be used in an attempt to
predict such swings.
As mentioned in the last section, before making any claims that agent-based
models are applicable to extract information on real market data, it is clear that
a minimal requirement must first be met. Imagine therefore that we are given a
'black-box' time series generated by a multi-agent game for fixed unknown values
of the parameter set (N, m, s), as well as an unknown specific realization of the
initial strategy choices assigned to the agents in the game. Doing reverse
engineering, a minimum requirement would be that, using the black-box time series
as a blind test, one should be able to extract the parameter values used to create this
time series. If one cannot estimate the values of the parameters of a theoretically
defined model, it would be useless to spend time on real market data, which,
needless to say, are not created according to some theoretical framework.
[Figure: forecast corridor for H(t) versus timestep (4600–4630), showing the mean and the 75 % and 95 % quantiles of the forecast density, with the past to the left and the future to the right of the prediction point]

Fig. 3.5 Predicting big price swings in a financial market model. Comparison between the forecast density function and the realized time series H(t) for a typical large movement. The large, well-defined movement is correctly predicted (Figure taken from [80])
A reverse engineering test for the minority game was proposed in [80]. A priori
it seems an almost impossible task to try to estimate the parameters in a model
that has an astronomically large number of different initial conditions, given by the
$2^{2^m}$ different strategies. Nonetheless, it was shown in [80] how it is possible to
perform this task for a moderate total number of strategies, i.e., for a sufficiently
small memory used by the agents in the MG. This is indeed very good news since,
as shown in [80], it is possible, at least in the case of the MG, to reverse engineer and
find the parameters of a black-box-generated time series without knowing in detail
the theoretical setup of the model! Similarly, the possibility of reverse engineering
was later also found to hold for moderate values of m in the case of the $G [145].
Let us end this chapter with an appetizer for what follows in Chap. 5. The study
in [80] gives an example of how large changes can be predicted in a multi-agent
population described by the MG. The remarkable fact is that the predictability
actually increases prior to a large change. The authors show how predictable
corridors appear at certain time periods, a typical example of which is given in
Fig. 3.5. In Chap. 5, we identify the mechanism which gives rise to such large market
swings.
4 A Psychological Galilean Principle for Price Movements: Fundamental Framework for Technical Analysis

4.1 Introduction

At the beginning of the last century, the multi-talented Danish artist and poet Storm
Petersen wrote: “Det er svært at spå – især om fremtiden” [43]. For those not fluent in Danish,
this can be translated as: “Prediction is difficult – especially about the future.” That
is certainly true, so at the very least one should not make the task even more difficult
by trying to predict what is impossible to predict. When we have a framework that
can generate predictions about the future, one approach is to insist that there are
certain things that the future should not depend on. If we know what the future
cannot depend on, this in turn can help put some limits on the theory that we use to
predict the future. This will be the philosophy we apply in this chapter, in which we
will be using technical analysis to predict the future.
In a nutshell, technical analysis is concerned with issuing future predictions
about asset prices on the basis of past data. In the last chapter, we saw concrete
examples of how price formation can result from the social dynamics of interacting
individuals. These examples employed agent-based models in which agents use
technical analysis of past price movements when deciding whether to buy or sell
assets. The interaction between the individuals (agents) occurs through the price.
Their collective decisions create new price movements, which in turn lead to further
technical analysis decisions by each individual, creating yet other collective price
movements, and so on. It is easy to extend such models to include fundamental
value analysis.
This is something we will come back to in Chap. 6, but we will first make a
detour. In this chapter, we will not study the underlying origin of the price dynamics,
but instead take the prices as static and given. We will then
search for some very general and simple principles to characterize the fundamental
framework underpinning any technical analysis. The discussion below will follow
[5] and propose the concept of a ‘psychological Galilean principle’, according to
which investors perceive so-called market forces only when the trends change.


We will try to draw an analogy with something we all experience when we drive
a car. In particular, we will show how to characterize financial markets in terms of
large or small ‘price velocities’, that is, markets with very strong or weak trends,
and in terms of large or small ‘price accelerations’, that is, where the direction of
the market is changing rapidly or slowly. Using such terminology suggests a kind of
qualitative equivalence between price dynamics and Newton's first law of classical
motion, according to which a velocity remains constant unless forces exert their
influence to cause acceleration or deceleration.
Let us also stress that, from a practical point of view, technical analysis is
often used by practitioners as an alternative to fundamental analysis (described in
Chap. 1) when they need to determine the proper price of an asset. Although it is
considered largely a pseudo-science in academic finance, we try to
take the practitioners’ point of view seriously, and suggest a quantitative framework
in which basic elements of technical analysis can be understood [16, 47, 50]. It is
hard to believe that there are not at least some elements of technical analysis taking
place within a large fraction of the trades animating a financial market. It does
not seem credible to imagine somebody who, for whatever reason, buys or sells
shares and does so without having first looked up the performance of those shares
over some period of time. Whatever the reason for deciding to buy some shares of,
say, McDonald’s, would one buy them without knowing anything at all about their
performance over the last few days or weeks or months?
It therefore seems appropriate, whether one believes in technical analysis or not,
to try and shed more light on the issue. It should also be mentioned that, even
though technical analysis may not be mainstream in finance, there is also a growing
amount of literature describing anomalies that are simply too striking to be ignored.
Most notable among these is the momentum effect, where investors profit from
buying the stocks that performed best over the past year and selling (short) the worst
performing stocks of the same period. This may be one of the simplest examples of
technical analysis. For recent evidence, see [40, 98].
For example, in 2010, European investors would have gained more than 12 % by
holding the best performing stocks of 2009 rather than the worst performing stocks
of 2009 [40, 98]. Since the effect has been documented to hold in certain markets
for over a century [40, 98], this raises other intriguing questions about the dynamics of
financial markets on a longer time horizon. Could this momentum effect, together
with other trend-following effects emphasized by technical analysis techniques, be
the ultimate source of speculative bubbles? If so this could perhaps carry whole
economies off track. We will return to this question and analyze it in more detail in
Chaps. 6 and 7.
Technical analysis comes in many forms, including not only the study of the
relationship between past and future prices, but also the analysis of the impact from
trading volume and other variables relevant to trading (e.g., measures of confidence)
on future prices. However, a general feature of technical analysis is that of price
trends and price deviations from trends. Technical analysis that detects and tries to
profit from a trend is called a trend-following technique, whereas technical analysis
that detects deviation from a trend, or oscillations around a trend (say, oscillations

around a given support level), is called a contrarian technique. In the next section,
we will introduce the idea of dimensional analysis (a tool mainly used in physics)
as a first step toward gaining insight into a complex problem. The idea will be to
get an intuitive understanding of a framework for technical analysis based on first
principles, using this technique of dimensional analysis.
The plan of this chapter is as follows. First we introduce the idea of
dimensional analysis and show how this will help us eliminate some extraneous
scenarios for the future. The approach will also introduce three parameters that
appear naturally when insisting on a framework for technical analysis derived
from first principles. By definition, any description of the future will always be
probabilistic. Still, we can gain a better understanding of what awaits us in the future
by imposing a deterministic constraint, in which we aim to say for sure what the
future should not depend on.

4.2 Dimensional Analysis

Before we embark on a discussion of dimensional analysis, we should mention
that technical analysis seeks predictions of the future, and this means formulating
a ‘law’ relating past prices to future prices. Dimensional analysis can help us in
this venture since it dictates which variables such a law can depend on. We will
show in a moment how dimensional analysis helps us to determine Kepler’s third
law for planetary motion. The idea of dimensional analysis will then be applied to
characterize different ‘phases’ in financial markets.
Mass, length, and time are all examples of basic physical dimensions that
characterize a quantity. The dimension of a physical quantity is given by a
combination of such basic dimensions. We will use the notation of capital letters
to denote the dimension of a quantity. The velocity v of a car, for example, has
dimensions of length/time, denoted by L/T, since we measure how fast we drive by
measuring the distance traveled over a given time. The dimension of mass of an
object will be denoted by M.
The reader may ask why we are suddenly concerned with the dimensions of
quantities. One reason is this: it turns out that, when faced with a very complex
problem in physics, a powerful first check of the validity of an idea is just to
see whether the dimensions are the same on the left- and right-hand sides of the
equation that describes our hypothesis. Interestingly, this principle is little known,
and certainly rarely used, in the field of economics or finance. So taking another
lesson from physics, we will show in the following how the idea of dimensional
analysis can give new insights into the conditions of prediction when one uses
technical analysis to forecast future price movements.
But to begin with, we need to understand how the method works and the best way
is to exemplify by applying this technique to an old problem in physics, namely,
Kepler’s third law. This was published by Kepler in 1619 and describes the time it
takes for a given planet to make one round trip around the sun:

The square of the orbital period of a planet is directly proportional to the cube of the
semi-major axis of its orbit.

In the box below, we will show how the mere fact of insisting that both sides of an
equation should have the same dimensions will lead us to Kepler’s third law.

Expressed in terms of an equation, the law is

$$P^2 \propto a^3 \;. \qquad (4.1)$$

It builds upon Kepler's first law, which says that the orbit of every planet
around the sun is an ellipse with semi-major axis $a$ and semi-minor axis $b$.
$P$ in (4.1) is the time it takes for a planet to make one revolution around the
sun. $P$ therefore has the dimension of time T.

In order to calculate $P$ using the trick of dimensional analysis, one first has
to consider which variables could determine $P$. It seems reasonable to assume
that the mass of the sun $M$, the gravitational constant $G$, and the radius of the
orbit $a$ should be the relevant variables determining the period of revolution $P$.
So the assumption is that $P$ is somehow proportional to powers of these three
quantities, i.e.,

$$P \propto M^{a} G^{b} a^{c} \;, \qquad (4.2)$$

with a dimensionless constant of proportionality. The dimensions of the
gravitational constant $G$ are $[\mathrm{L}^3\,\mathrm{M}^{-1}\,\mathrm{T}^{-2}]$. Dimensional analysis now uses the
constraint that the dimensions of the left- and right-hand sides of (4.2) must
be the same. Since $P$ is expressed in units of time, the same must be true
for the term on the right. Equating the dimensions on either side of (4.2), we
obtain

$$\mathrm{T} = \mathrm{M}^{a}\,[\mathrm{L}^3\,\mathrm{M}^{-1}\,\mathrm{T}^{-2}]^{b}\,\mathrm{L}^{c} = \mathrm{M}^{a-b}\,\mathrm{L}^{3b+c}\,\mathrm{T}^{-2b} \;. \qquad (4.3)$$

The constraint of dimensional analysis is that, since the left-hand side has
dimensions of time, so should the right-hand side. This can only be true if all
mass terms on the right-hand side cancel out, i.e., if

$$a = b \;. \qquad (4.4)$$

Similarly, all length terms also have to cancel out on the right-hand side
of (4.3), giving

$$3b = -c \;. \qquad (4.5)$$

And finally, the dimension of time on the right-hand side of (4.3) should match
the one on the left-hand side, giving

$$1 = -2b \;. \qquad (4.6)$$

To sum up, using dimensional analysis, we find

$$a = b = -1/2 \;, \quad c = 3/2 \;. \qquad (4.7)$$

Inserting this result into (4.2), it now reads

$$P \propto M^{-1/2}\, G^{-1/2}\, a^{3/2} \;. \qquad (4.8)$$

Since $M$ and $G$ are constant, this yields Kepler's third law.
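The bookkeeping of exponents in (4.3)–(4.7) can also be left to a computer algebra system. Here is a short sketch using Python's sympy library, where the variable names simply mirror the exponents above:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# Matching the dimensions on both sides of (4.3): T^1 = M^(a-b) L^(3b+c) T^(-2b)
solution = sp.solve(
    [sp.Eq(a - b, 0),      # mass exponents must cancel
     sp.Eq(3*b + c, 0),    # length exponents must cancel
     sp.Eq(-2*b, 1)],      # time exponent must equal 1
    (a, b, c),
)
print(solution)            # {a: -1/2, b: -1/2, c: 3/2}, as in (4.7)
```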

4.3 A Simple Quantitative Framework for Technical Analysis

Having seen how dimensional analysis can be used to derive Kepler’s third law, we
now turn our attention to its use in obtaining a general quantitative framework for
technical analysis. We would like to begin from first principles and determine
general properties that should apply to all technical analysis techniques. We
prefer to give some general guiding principles rather than going into the details of
each specific method. The disadvantage of doing so is, of course, that the following
presentation may appear simplistic, but we prefer the gain in generality, even if
parts of what follows may seem obvious to some readers.
As mentioned at the beginning of this chapter, technical analysis can in principle
include many other variables than just price data, but in the following we will
focus on price data alone, because it is the simplest and also the classic case of
technical analysis. So the aim here is to use past price data to predict future price data.
We will concentrate on the simple case in which prices over some past time interval
exhibit a trend (a ‘price velocity’) and perhaps also a deviation from such a trend
(a ‘price acceleration’). So with this definition of technical analysis, how can we
make progress and say something general about this method using dimensional
analysis?
A first remark is that dimensional analysis exploits the fact that a physical
law must be independent of the units used. Meters and feet are both examples of
length units so they both have the dimension of L. A conversion factor, which is

[Figure: four panels (a)–(d) showing the Google stock price from 25/1/2011 to 14/2/2011, in US $ versus time in days (a), US $ versus time in hours (b), euros versus time in days (c), and US $ versus time-translated days (d), each with a fitted curve and a marked prediction]

Fig. 4.1 Technical analysis: predicting the future should not depend on units. Example of a stock price (Google) and the invariance of the prediction of future stock prices under change of units. Predicting the future stock price by simple technical analysis (solid lines), the prediction should be invariant under change of time units [compare (a) and (b)], as well as under change of currency [compare (a) and (c)] and time translation [compare (a) and (d)]

dimensionless, then relates two different units, e.g., the conversion factor between
feet and meters is 0.3048, since 1 foot = 0.3048 m. Returning to financial markets,
the units of the price of a stock can for example be US $. Therefore, when it comes
to predictions issued by a technical analysis instrument, an obvious requirement is
that changing the units of a stock price should not influence the prediction.
This point is illustrated in Fig. 4.1, which shows the stock price (in circles) of
Google over a 3 week period in four different plots. The solid lines are examples of
fits to the data set of past price values, which then enable prediction of future price
values of the stock. For more detail, see the Appendix to this chapter. The squares
in each of the plots give an example of a prediction 3 days ahead. The comparison
between plots (a) and (c) illustrates that the prediction issued should not depend on
which currency units one uses. Whether the stock is quoted in US $, British pounds,
yen, or euros should not change our perception about where the price is heading.
Another requirement is that the unit of time should not matter for our prediction.
Whether we use hours or days to describe our prediction 3 days (or 72 h) ahead,
the outcome should, of course, be the same. This is seen by comparing plots (a)
and (b). Finally our prediction should also be translation invariant with respect to
time, since the prediction from our technical analysis should clearly not depend on
whether we define our data set over the time interval [1, 15] as in the figure, or over
any arbitrarily translated interval, say [1 + δ, 15 + δ]. This point can be seen by
comparing plots (a) and (d), where δ = 5 was used to obtain the latter.
To summarize, the requirements for any technical analysis prediction technique
should be:
1. Predictions should be invariant under changes of currency units.
2. Predictions should be invariant under changes of time units.
3. Predictions should be translation invariant with respect to time.
Notice also that predictions are not translation invariant with respect to money
units. Adding, say, a constant of $10 to each stock price in a time series would change
the returns of that time series and thereby change the predictions compared to the
original price time series.
very straightforward indeed, but insisting that they are fulfilled, just like insisting
that the dimensions were the same on both sides of the equation for Kepler’s third
law, does lead to new insight, as can be seen from the derivation presented in the
Appendix to this chapter.
Requiring that any prediction rule from technical analysis must obey the three
requirements (1), (2), and (3), and considering only technical analysis techniques
which include trends of prices and deviation from these trends, we show in the
Appendix how a simple general quantitative framework for technical analysis results
in the following three dimensionless quantities that can be used in a characterization
of different market ‘phases’. We introduce the parameters first, then describe them
in more detail after a general discussion:
• $T_N \equiv (v/p_0)\,t_N$, the dimensionless time interval of the learning period, i.e., of the past price data.
• $T \equiv (v/p_0)\,\delta t$, the dimensionless time horizon for prediction in the future.
• $F \equiv v^2/(p_0 a)$, the Froude number.
We introduce the dimensionless time in the box below.

As shown in the Appendix, in order to have a prediction rule which obeys the
three requirements (1)–(3), the time $t$ should be made dimensionless. This is
done by defining a new dimensionless time $\tau$ by

$$\tau \equiv \frac{v}{p_0}\,t \;, \qquad (4.9)$$

where $v$ is the 'price velocity', expressing how fast prices change over the
learning interval, although it is perhaps more easily thought of as the trend
of the data over the learning period, and $p_0$ is the price value at the beginning of
the learning period. To see what this definition means in practice, discretize (4.9):

$$\tau \equiv \frac{\delta p/\delta t}{p_0}\,t = \frac{p_{t_N} - p_{t_0}}{t_N - t_0}\,\frac{t}{p_0} \;. \qquad (4.10)$$

Here we use the notation that the learning period of the technical analysis
is the interval $[t_0, t_N]$. Actually, it is sufficient just to use $t_N$ to denote the
interval, because the requirement of time translation invariance means that
$t_0$ can be chosen arbitrarily [compare plots (a) and (d) in Fig. 4.1]. In the
following, we thus choose $t_0 \equiv 0$. Therefore, (4.10) becomes

$$\tau = \frac{p_{t_N} - p_0}{p_0}\,\frac{t}{t_N} \equiv R\,\frac{t}{t_N} \;, \qquad (4.11)$$

where $R$ is the return of the market over the learning interval.

As shown in the box above, dimensional analysis tells us that the time scale it
makes sense to adopt in technical analysis would use units of the return made over
the learning interval. In other words, instead of thinking of physical time in terms
of minutes, hours, or days, the above dimensional analysis suggests considering
a ‘financial’ time that expresses the time needed to produce the return over the
learning interval in relative terms.
This makes sense from a psychological point of view, since humans tend to
focus on how big a fraction of a given task has already been completed or needs
to be completed, rather than the precise amount of physical time it will take to
finish the task. It has been shown that humans have two different systems for
representing numbers: an approximate system that allows for rough approximation
of quantities and a verbal system capable of representing numbers exactly. These
two systems interact. In the process of mental calibration, the approximate system
can be calibrated mentally by comparing the judgments made by this system to other
sources of judgments. For example, it has been shown that information addressed to
the verbal system by giving exact numbers in addition to showing a large number
of dots strongly influenced the judgments made on the basis of the approximate
system.
It can therefore be argued that in the context of financial decisions the approx-
imate system of judgments, which is the intuitive system used to make decisions,
is mentally calibrated according to market volatility. This implies that, if over the
learning period the market went up by a given percentage $x$, but then after the
learning period the market went down over several days, the 'financial' time $\tau$
went backward, even though the physical time went forward. Only when the market
went back, at some time in the future, to the same level as right after the learning
period would we be back to the same 'financial' time as right after the learning
period.

If we now go back to the three parameters which were found to be relevant
in the characterization of technical analysis, we see that the two time parameters
$T_N$ and $T$ should be understood in terms of 'financial' time, as described above.
Using (4.11), $T_N$ can simply be seen as the return of the market over the learning
interval, whereas $T$ is the fraction $\alpha \equiv \delta t/t_N$ of the return of the market $R$ in the
(future) prediction interval, where $\delta t$ is how far out in the future (in real time) we
issue a prediction. Less trivial is the third dimensionless parameter, the so-called
Froude number, defined by

$$F \equiv \frac{v^2}{p_0 a} \;, \qquad (4.12)$$
where $a$ is the price acceleration over the learning period. The definition of the
Froude number comes from hydrodynamics, where it is used to determine the
resistance of an object moving through water. In the case of a ship sailing on water,
$v$ is the velocity of the ship, $a$ the acceleration due to gravity, and $p_0$ the length
of the ship at the water line. $\sqrt{p_0 a}$ is then the characteristic propagation velocity of
water waves, so the Froude number can be thought of as analogous to the Mach
number, which is probably better known.
Therefore, if we were to introduce a terminology for the financial markets, we
could think of $\sqrt{p_0 a}$ as the 'speed of sound' of the market, i.e., the characteristic
velocity of the medium (here, the market) taken over the learning interval, while $v$,
appearing squared in the numerator, expresses how much prices actually move per
unit of time. Then we note the following:
• F > 1 corresponds to a supercritical flow of prices in the medium/market. This
is a fast/rapid flow.
• F < 1 corresponds to a subcritical flow of prices in the medium/market. This is
a slow/tranquil flow.
• F = 1 corresponds to the critical flow of prices in the medium/market. This is
flow at the borderline between fast and slow.

4.4 Applications

As we saw in the last section, we can place some constraints on a framework that
tries to predict the future by insisting that there are certain things out there in
the future that do not depend on the way we represent our price time series data.
Ultimately, however, any description of the future will by definition always be
probabilistic. Still, as we have seen, we can gain a better understanding of what the
future has to bring by imposing deterministic constraints. Having a data set of, for example,
daily prices of a stock expressed in US $, then what the future holds for this stock
should not depend upon choosing a different time or currency frame, i.e., looking at
the same data but expressed in hours instead of days or euros instead of yen. Using
the price time series data as the only information relevant for the future, the specific
time we define as the origin of the data set should not matter for future prices.

As we saw in the last section, insisting on such straightforward requirements
led us to conclude that, in any technical analysis which uses only the trend (price
velocity) and the deviation from the trend (price acceleration), there are only three
dimensionless variables that matter: the Froude number $F$, the dimensionless length
of the learning interval $T_N$, and the dimensionless time horizon $T$, expressing a
moment out in the future at which we try to predict the price of a given asset. Having
these three variables, we can now classify the different market phases.
Taking the length of the learning interval $T_N$ as a fixed parameter in the
following, there are four possible sign combinations of the parameters $T$ and $F$:
• Super bull (T > 0, F > 0), corresponding to positive price velocity and positive acceleration. Such a regime is depicted by the pictograph c.
• Balanced bull (T > 0, F < 0), corresponding to positive price velocity and negative acceleration. Such a regime is depicted by the pictograph d.
• Super bear (T < 0, F < 0), corresponding to negative price velocity and negative acceleration. Such a regime is depicted by the pictograph e.
• Balanced bear (T < 0, F > 0), corresponding to negative price velocity and positive acceleration. Such a regime is depicted by the pictograph b.
Here we have adopted the pictographs from [5].
Let us try to see how the above characterization works in practice by giving an
example taken from [5]. Figure 4.2 illustrates the different market phases for two
time series of currencies, the US $ in British pounds and in Swiss francs. The upper
plots are the currency time series. In order to characterize the different market phases,
we first need to fix the length of the learning interval $t_N$, which in turn determines
the dimensionless parameter $T_N$. In Fig. 4.2, $t_N$ = 3 weeks = 15 trading days
was used as the learning interval. The plots illustrate the case where we predict the
movement of the market $\delta t$ = 5 days after the 15 day learning period. The middle
plots then show the Froude number (4.16) versus the dimensionless prediction
horizon $T = (v/p_0)\,\delta t$. The four quadrants, sampled clockwise, correspond to the
four regimes c, d, e, and b defined above.
The quantities (F, T) were determined using a 'running window' of 15 days
throughout the entire data set. In each such 15 day interval of data, a mean-square
fit of a second-order regression polynomial was performed (see the Appendix). This
gives an estimate of the price velocity $v$ and the price acceleration $a$, thereby
determining (F, T). Reference to a running window means that one uses the data
over the first 15 day interval [1, 15] to get an estimate of (F, T), then takes one
step forward in time and uses the interval [2, 16] to get the estimate of the parameter
set in this period, followed by the period [3, 17], and so on, thus creating the cloud
of points seen in the middle plot of Fig. 4.2.
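A minimal Python sketch of this running-window procedure might look as follows; the window length, prediction horizon, and synthetic test data are our own illustrative choices, with the Froude number formed as in (4.16):

```python
import numpy as np

def phase_parameters(prices, t_N=15, dt=5):
    """Rolling quadratic fit p(t) = A0 + A1*t + A2*t**2 over t_N-day windows,
    returning the Froude number F = A1**2/(A2*A0) and the dimensionless
    horizon T = (A1/A0)*dt for each window."""
    t = np.arange(t_N)
    out = []
    for start in range(len(prices) - t_N + 1):
        A2, A1, A0 = np.polyfit(t, prices[start:start + t_N], deg=2)
        out.append((A1**2 / (A2 * A0), (A1 / A0) * dt))
    return np.array(out)                      # columns: (F, T)

# Illustration on synthetic data:
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(300)))
FT = phase_parameters(prices)
super_bull = (FT[:, 0] > 0) & (FT[:, 1] > 0)  # positive velocity and acceleration
print(f"fraction of super bull windows: {super_bull.mean():.2f}")
```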
When we try to issue a prediction, the interesting part is obviously what exactly
it is we predict, but equally important is the confidence we have in our prediction.
One way to get a measure of confidence is to look at the signal-to-noise ratio. Here
the signal is given by how big a return we predict at time $T$ in the future. In principle,
the larger the predicted return, the more confidence we should have in knowing
which direction the market will move. But since market noise is also an issue, the
predicted return should be weighed against the noise in our signal.

Fig. 4.2 Characterizing price flows in markets in terms of a fundamental 'Mach' parameter (called
the Froude parameter in the text). US dollar in British pounds and in Swiss francs from 4 January
1971 until 19 May 1999. The plots at the top show the time series of the prices. Middle plots show
the Froude number defined in (4.16) as a function of the reduced prediction horizon $T \equiv (A_1/A_0)\,\delta t$,
where $\delta t$ is fixed at 5 days. Plots at the bottom show the number of realizations of each of the six
relevant patterns as a function of the threshold Rth for the predicted amplitude of the price move.
Symbols are p1 ≡ c+ (crosses), p3 ≡ d+ (plus signs), p4 ≡ d− (small circles), p6 ≡ e− (dots),
p7 ≡ b+ (squares), and p8 ≡ b− (diamonds). Dotted, dashed, and continuous lines delineate
domains of different predicted return signs (see text) (Figure taken from [5])

A natural measure of noise is the standard deviation of the fit of the price by
a parabola (see Appendix). Therefore as a measure of confidence related to each
prediction, it is natural to choose the ratio of the predicted return to the standard
deviation of the fit to the market data. This quantity is called Rth in the following.
The notation reminds us that it will only be natural to believe in a prediction for
some sufficiently large threshold value Rth .
Returning to the classifications mentioned above, the interesting question for a
practitioner is what to expect in the future, given that we characterize the market
today as being in one of the four super/balanced bull/bear phases. A first crude
prediction is then simply to say whether we should expect the market to go up or
down from where we stand today. Let us therefore list the different possibilities
and introduce notation for the resulting patterns:

• p1 ≡ c+: super bull predicting a positive return,
• p2 ≡ c−: super bull predicting a negative return (impossible within the present framework),
• p3 ≡ d+: balanced bull predicting a positive return,
• p4 ≡ d−: balanced bull predicting a negative return,
• p5 ≡ e+: super bear predicting a positive return (impossible within the present framework),
• p6 ≡ e−: super bear predicting a negative return,
• p7 ≡ b+: balanced bear predicting a positive return,
• p8 ≡ b−: balanced bear predicting a negative return.
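Given the quantities already computed, a small Python helper (our own illustrative naming) makes this classification mechanical:

```python
def pattern(F, T, predicted_return):
    """Map the signs of (T, F) and of the predicted return to the patterns above."""
    phase = {(True, True): 'c', (True, False): 'd',
             (False, False): 'e', (False, True): 'b'}[(T > 0, F > 0)]
    return phase + ('+' if predicted_return > 0 else '-')

print(pattern(F=2.0, T=0.3, predicted_return=0.01))   # 'c+', i.e., pattern p1
```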
The lower plots of Fig. 4.2 show the frequency of the different market patterns listed
above as a function of the confidence measure Rth. A general feature for both
currencies is the apparent exponential decay of the frequency of each pattern as a
function of the confidence we require before accepting it as valid. The faster decay of
the patterns 'balanced bear predicting a positive return' and 'balanced bull predicting
a negative return' for both currencies reflects the predominance of trend-following
patterns in the markets.
Finally, let us illustrate some examples of out-of-sample predictions on different
equity markets. Figure 4.3 gives the success rate for three different technical analysis
strategies as a function of Rth. The figure compares three strategies:
• The average approximant $S^*$ (for the full definition, see [5]), represented by crosses.
• A trend-following strategy, corresponding to the linear approximation $S_0^1$ [see (4.20) in the Appendix], represented by open squares.
• The bare parabolic parameterization, including both a trend and a deviation from the trend [see (4.21) in the Appendix], represented by open circles.
Let us first mention that coin tossing would give a fraction of successful predictions
equal on average to 0.5. However, one could argue that, since there is often an overall
trend in a given asset market, and in particular in stock markets, this could create a
bias. We will discuss this in detail in Chap. 7 and try to explain why stock markets
in particular are likely to have an overall trend. But for the time being we just take
this as a fact.
The question for markets which show such a bias is then: Is it just this overall
trend that one is able to predict? Put differently, a ‘dumb’ strategy predicting ‘up’
at every time step would appear like a very clever strategy in a market with an
overall upward trend, giving a success rate often much higher than 50 %. One way
to estimate the effect of such a bias is to do bootstrapping. In general, this just
refers to a self-sustained process that proceeds without external help: we do not
need to invoke any new theory, but instead look at the data in different ways to
gain more insight. Bootstrapping is often used in problems of
statistics, like the one we have in front of us now, by using resampling methods.
Specifically, we would like to assess the significance of the success rate we obtain
for our predictions without taking into account the effects of a general trend. One
way to proceed is to take the original return data of the market and reshuffle it,
thereby destroying any correlations in the data, but keeping the same overall trend
in the data. The total return (i.e., the trend) for the shuffled market data is the same

[Figure: six panels (SP500, DOW, NAS, and NIK, FTS, DAX), each showing the success rate P1(Rth) in the upper part and the frequency f(Rth) of predictions, on a logarithmic scale, in the lower part, both as functions of Rth]

Fig. 4.3 Super bull case. Success rate as a function of Rth. The figure compares three strategies:
(i) the average approximant $S^*$ (see the definition in [5]), represented by crosses; (ii) a trend-following
strategy, corresponding to the linear approximation $S_0^1$ of (4.20), represented by open squares; (iii)
the bare parabolic parameterization (4.21), represented by open circles. The continuous (resp.
dotted) line corresponds to the 90 % (resp. 99 %) confidence level, i.e., 100 (resp. 10) out of 1,000
surrogate data sets would give a fraction of successful predictions outside the domain bounded
by the two continuous (resp. dotted) lines. These confidence limits are only strictly valid for
the average approximant $S^*$, which has been applied to 1,000 surrogate time series obtained by
reshuffling the daily returns at random (Figure taken from [5])

as for the non-shuffled market data. Sampling statistics on how one predicts on the
reshuffled market data enables one to place limits on the method predicting on real
(non-shuffled) data.
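In code, such a bootstrap might be sketched as follows; the trend-following prediction rule is our own toy example, standing in for the approximant strategies of [5]:

```python
import numpy as np

def surrogate_band(returns, n_surrogates=1000, quantiles=(0.05, 0.95), seed=2):
    """Success-rate band of a toy trend-following rule on reshuffled returns.

    Reshuffling destroys correlations but preserves the overall trend, so the
    band bounds what the trend bias alone can explain."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_surrogates):
        shuffled = rng.permutation(returns)
        guesses = np.sign(shuffled[:-1])          # predict: next move = last move
        rates.append(np.mean(np.sign(shuffled[1:]) == guesses))
    return np.quantile(rates, quantiles)          # e.g., a 90 % confidence band

# Illustration on synthetic returns with an upward bias:
rng = np.random.default_rng(0)
returns = 0.0005 + 0.01 * rng.standard_normal(2000)
print("90% band for the surrogate success rate:", surrogate_band(returns))
```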
Figure 4.3 was generated from 1,000 surrogate time series for each market by
randomly reshuffling the daily returns. On each of the 1,000 surrogate time series, a
prediction was made and the success rate measured as a function of Rth , just as for
the original (non-shuffled) time series. The 90 % (continuous line) and 99 % (dotted
line) confidence levels are plotted in the figure. They are defined by the fact that
900 (resp. 990) among the 1,000 surrogate time series gave a success rate in the
interior of the domain bounded by the continuous (resp. dotted) line. It should be
noted, however, that the confidence levels were only calculated for the approximant
strategy.

Appendix

The essence of all technical analysis is to use part of a past price time series to
describe future price movements. In order to capture this general property, we
first introduce the following parametric description of the price movements
over a given time interval $t_0 \le t \le t_{\mathrm{present}}$:

$$S_0 = A_0 + A_1 t + A_2 t^2 \;, \quad t_0 \le t \le t_{\mathrm{present}} \;, \qquad (4.13)$$

where $S_0$ can be thought of as the simplest representation of a price time series
that captures a price level, a trend, and a deviation from the trend in a series
expansion of the price. In such a representation, the three constants $A_0$, $A_1$,
and $A_2$ describe the price level at $t = 0$, the 'price velocity' or trend over the
given interval, and an acceleration or deviation from the trend, respectively.

Now using dimensional analysis on (4.13), one notes that, since the left-hand
side has the dimensions of money [Mo], so must each of the terms on
the right-hand side. Note that we use the designation [Mo] to distinguish from
[M], which was used earlier to describe the mass of an object. An example of
units of money is then US $. The dimension of $A_0$ is therefore [Mo], while
$A_1$ has dimension [Mo/T], and $A_2$ has dimension [Mo/T²].

As illustrated in Fig. 4.1, we want our prediction from technical analysis
to be independent of the time unit we use, which is a problem with the
expression (4.13), since the coefficients $A_i$ change when the unit of time
changes. More precisely, if the unit of time changes by a factor $k$, viz.,
$t \to t' = kt$, then the coefficients change accordingly, viz., $A_i \to A_i' = A_i/k^i$.
The expression (4.13) can be made independent of the time units by
introducing a dimensionless time

$$\tau \equiv \frac{A_1}{A_0}\,t \;. \qquad (4.14)$$

Now expressing (4.13) in terms of this dimensionless time $\tau$, one gets

$$S_0(\tau, F) = A_0\left(1 + \tau + F^{-1}\tau^2\right) \;. \qquad (4.15)$$

The new expression $F$ is the so-called Froude number, described from the
physics point of view in Sect. 4.3. We have

$$F \equiv \frac{A_1^2}{A_2 A_0} \;. \qquad (4.16)$$

Using the fact that $A_i \to A_i' = A_i/k^i$ under a change of time units, it is easy
to see that both $\tau$ in (4.14) and the Froude number $F$ in (4.16) are invariant
under a change of time units. The expression (4.15) is therefore invariant
under changes of time units.

Since we also want our prediction from technical analysis to be independent
of the money unit (see Fig. 4.1), we introduce the prediction in terms of
future returns:

$$R_0^{\mathrm{prediction}}(\tau, F) = \log\frac{S_0(\tau, F)}{S_0(\tau_{\mathrm{present}}, F)} \qquad (4.17)$$

$$= \log\frac{A_0\left(1 + \tau + F^{-1}\tau^2\right)}{A_0\left(1 + \tau_{\mathrm{present}} + F^{-1}\tau_{\mathrm{present}}^2\right)} \;, \qquad (4.18)$$

where $\tau > \tau_{\mathrm{present}}$.


The last requirement for our technical analysis prediction scheme was time
translation invariance, since the origin of the learning interval should not
matter [compare plots (a) and (d) in Fig. 4.1]. This requirement means that it
is only the length of the learning interval that matters. So the learning interval
can be represented by just one parameter, namely $t_{\mathrm{present}}$, by always choosing
it between 0 and $t_{\mathrm{present}}$.

To summarize, we are looking for a general technical analysis expression
which uses a certain learning interval $[0, t_{\mathrm{present}}]$ of past price data. The
simplest possible expression which takes into account both a price trend and a
first order deviation from this trend is given by (4.13). The problem, however,
is that this expression is not invariant, either under changes of time units,
or under changes of money units, as it should be in a general prediction
scheme (see Fig. 4.1). This remark led us to introduce a dimensionless time
$\tau$ in (4.9) and a dimensionless constant $F$ in (4.16) which can be used
to characterize different market phases. One can therefore view a general
technical analysis description derived from first principles, taking into account
the requirements (1), (2), and (3) obtained from dimensional analysis in
Sect. 4.3, as a series of approximations:

$$S_0^0 = A_0 \;, \qquad (4.19)$$

$$S_0^1 = A_0(1 + \tau) \;, \qquad (4.20)$$

$$S_0^2 = A_0\left(1 + \tau + F^{-1}\tau^2\right) \;. \qquad (4.21)$$

Equation (4.19) describes technical analysis using just the fundamental price
value, while (4.20) describes such analysis using the fundamental price value
plus a trend, and (4.21) describes technical analysis using the fundamental
price value plus a trend and a deviation from the trend.
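As a quick numerical sanity check of the invariance claims above, the following few lines of Python verify that $\tau$ and $F$ are unchanged when the time unit is rescaled by a factor $k$ (the numerical values are arbitrary):

```python
import numpy as np

A0, A1, A2, t, k = 100.0, 0.8, -0.02, 12.0, 24.0   # arbitrary values; k = days -> hours
tau = (A1 / A0) * t                                # dimensionless time (4.14)
F = A1**2 / (A2 * A0)                              # Froude number (4.16)

# Under t -> k*t the coefficients transform as A_i -> A_i / k**i :
A1k, A2k, tk = A1 / k, A2 / k**2, k * t
assert np.isclose((A1k / A0) * tk, tau)            # tau is invariant
assert np.isclose(A1k**2 / (A2k * A0), F)          # F is invariant
print("tau and F are invariant under the rescaling")
```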
5 Catching Animal Spirits: Using Complexity Theory to Detect Speculative Moments of the Markets

5.1 Introduction

At the very height of the international credit crisis and during the near collapse
of the Icelandic banking system at the end of 2008 and the beginning of 2009, the
Icelandic politician Johanna Sigurdardottir attracted voters by promising to “end the
era of testosterone”. The businessman of the year in 2008 in Iceland was a woman,
and after Sigurdardottir became prime minister by winning the elections in February
2009, half of her ministers were women. Furthermore, the male CEOs of two of the
country's three largest banks were replaced by women. If anything, this Icelandic
tale shows that there is a perception, on both the public and the political side, that
risky behavior is an attribute deeply rooted in male decision-making, especially
during crises, and that it can be avoided by including more women in the decision
process.
Politically as well as economically, it therefore seems important to develop new
research directions and tools to give a better understanding of the outcome of
collective decision-making that can lead to the formation of financial speculative
bubbles and ensuing financial crises. This should tell us whether there is some truth
in such a viewpoint. Only when we have a better understanding of how collective
risky behavior emerges can we begin to contemplate predicting such behavior, with
the ultimate goal of preventing financial crises in the future [1, 129].
In fact, research already exists which shows a link between testosterone and risky
behavior in a financial context [32], but at the level of individuals. In [32] it was
shown how higher and lower testosterone levels in a group of traders on a London
trading floor led to higher and lower returns, respectively, for the traders involved.
As explained in [32], the higher returns were only obtained by the traders through
taking more risks, thereby showing a clear link between testosterone levels in
traders and risky behavior at the individual level.
In order to understand risky behavior at the market level in general, and
collectively risky behavior seen during the formation of speculative bubbles and
market crashes in particular, we need new tools that address the way heterogeneous
risky behavior at the individual level can lead to exuberant behavior at the level


of the market. To identify the general signatures of situations where heterogeneous
individual risk-taking leads to the collective risk-taking seen at the market level, we
will describe a number of experiments and introduce a formalism to understand the
formation of such collective risky behavior. Special emphasis will be devoted to
detecting speculative herding and understanding how it forms.
Everyday references to such phenomena as speculative financial bubbles and
crashes often talk about turbulent periods, using labels like ‘erratic’ or ‘disorder’
as characteristics of such periods. Contrary to such a notion, it will be demonstrated
in this chapter that, during crisis, order rather than disorder is the hallmark of the
phenomenon, as is illustrated by complete agreement in trading decisions generated
dynamically in the pricing process. We will discuss a new technique based on agent-
based simulations that gives a robust measure of detachment of trading choices
created by feedback and predicts the onset of speculative moments in experiments
with human subjects.
A word of caution must be given here, since doing experiments on financial
markets is often met with a great deal of skepticism, if not outright hostility,
and simply brushed off as not relevant to what happens in real financial markets.
To argue that this is not a productive stand, one could cite the situation faced
by astrophysicists. They are in a somewhat similar situation to people observing
financial market price dynamics from afar. Just as astrophysicists can only observe
the evolution of the universe and not manipulate stars and galaxies to probe how
the universe functions, we will never be able to do experiments on the scale of what
happens in real markets. However, astrophysicists have been able to develop very
precise ideas and testable models of the expansion of the universe by using
experiments here on earth to investigate forces at the fundamental levels of matter.
Needless to say, when we do experiments on financial markets, we need to
demonstrate that they are relevant to what happens in real markets. At this stage
it is probably fair to say that the research on understanding price formation as a
collective and dynamically created process is still at its inception. But given what is
at stake, we will argue that doing experiments in this direction is a risk worth taking.
However, before discussing experiments on financial markets, we will make a short
detour and consult traditional thinking on anomalous moments in the markets, in
this case, the creation of bubbles.

5.2 Rational Expectations Bubbles

A priori the two terms ‘rational expectations’ and ‘bubbles’ appear incompatible.
However, as we shall see below, the surprising fact is that rational expectations can
in fact lead to bubbles [3, 131]. Here we briefly discuss the framework leading to
rational expectations bubbles for the reader interested in the traditional financial
thinking on this question. This section can easily be skipped by readers who are
either already familiar with the topic or already skeptical about the somewhat
unrealistic constraints imposed by rational expectations.

The basic assumption of rational expectations is that, in an efficient market,
the expectation value of the return $R_{t+1}$ of a given asset should equal the
interest rate $r$ taken over the same period of time:

$$E(R_{t+1} \mid I_t) = r \;, \qquad (5.1)$$

where $E$ denotes the expectation value, conditioned on all available
information $I_t$. The time $t$ refers to the moment where one tries to evaluate
the return $R_{t+1}$ over the next time period. The return of an asset can in turn be
written in terms of market gains/losses plus the cash flow from dividends $D_{t+1}$
over one time period:

$$R_{t+1} = \frac{P_{t+1} - P_t + D_{t+1}}{P_t} = \frac{P_{t+1}}{P_t} + \frac{D_{t+1}}{P_t} - 1 \;. \qquad (5.2)$$

The price $P_t$ is known at time $t$, but the two quantities $P_{t+1}$ and $D_{t+1}$ are not
known at time $t$ and need to be estimated. Therefore, inserting (5.2) in (5.1),
one gets

$$\frac{E(P_{t+1} \mid I_t)}{P_t} + \frac{E(D_{t+1} \mid I_t)}{P_t} = r + 1 \;, \qquad (5.3)$$

$$P_t (r + 1) = E(P_{t+1} \mid I_t) + E(D_{t+1} \mid I_t) \;, \qquad (5.4)$$

or

$$P_t = \gamma E(P_{t+1} \mid I_t) + \gamma E(D_{t+1} \mid I_t) \;, \qquad (5.5)$$

where $\gamma \equiv 1/(1 + r)$. Equation (5.5) can be solved recursively. Writing (5.5)
for $t + 1$,

$$P_{t+1} = \gamma E(P_{t+2} \mid I_{t+1}) + \gamma E(D_{t+2} \mid I_{t+1}) \;, \qquad (5.6)$$

inserting (5.6) in (5.5), and using

$$E\big(E(X \mid I_{t+1}) \mid I_t\big) = E(X \mid I_t) \;, \qquad (5.7)$$

one gets

$$P_t = \gamma^2 E(P_{t+2} \mid I_t) + \gamma^2 E(D_{t+2} \mid I_t) + \gamma E(D_{t+1} \mid I_t) \;. \qquad (5.8)$$

Equation (5.7) follows since rational expectations have to remain unchanged
for all time: altering an expectation is costless, so if it were expected to change
in the future, rationality would require a revision now. Iterating (5.8) $n$ times, one
ends up with the solution

$$P_t = \gamma^n E(P_{t+n} \mid I_t) + \sum_{i=1}^{n} \gamma^i E(D_{t+i} \mid I_t) \;. \qquad (5.9)$$

The rational bubble term is the first term on the right-hand side of (5.9), viz.,
$B_t = \gamma^n E(P_{t+n} \mid I_t)$, which has the property that $B_{t+1} = (1 + r)B_t + \epsilon_{t+1}$,
with $\epsilon_t$ a noise term and $E(\epsilon_{t+1} \mid I_t) = 0$. The equation for $B_t$ means that it
grows exponentially in time. Therefore the structure of the solution (5.9)
is that of a fundamental price term, viz., $P_t^f = \sum_{i=1}^{n} \gamma^i E(D_{t+i} \mid I_t)$, plus a
bubble term that can pop up somehow unexpectedly. This illustrates that, even
within the rational expectations framework, the price can deviate from the
fundamental price $P_t^f$.
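To see what such a rational bubble looks like, here is a minimal Python simulation of the bubble dynamics $B_{t+1} = (1+r)B_t + \epsilon_{t+1}$ on top of a constant fundamental price; all numerical values are our own arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
r, n_steps = 0.001, 2000
P_fund = 100.0                        # fundamental price, assumed constant here
B = np.empty(n_steps)
B[0] = 0.1                            # a tiny initial bubble component

for t in range(n_steps - 1):
    # E(eps_{t+1} | I_t) = 0, so the bubble grows exponentially on average.
    B[t + 1] = (1 + r) * B[t] + 0.01 * rng.standard_normal()

price = P_fund + B                    # the structure of the solution (5.9)
print(f"bubble component after {n_steps} steps: {B[-1]:.2f}")
```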

5.3 Going Beyond Rational Expectations Bubbles

For many, the ongoing global financial crisis has most likely served as a wake-up
call, renewing doubts about the ability of present formal models to capture the more
disruptive phenomena, such as the formation of speculative bubbles and the
subsequent crashes seen in the markets. Many practitioners believe such failure is
due to a variable missing from the models, the subjective human response, which
evades mathematical description.
Exactly how one should take such human responses into account is debatable, and this is probably among the main reasons why frameworks like the rational expectations theory presented in the last section and in Chap. 1 have remained at the very core of mainstream financial theory over the last few decades. Nevertheless, prospect theory as discussed in Chap. 2 gives us a clue about how to quantify human responses at the individual level. Prospect theory also gives a very clear illustration of the advantage of having a formal theory from which one can understand and check a given hypothesis about human behavior.
Having a formal framework makes it easier to formulate questions in a precise
manner, and sometimes the very fact of quantifying a postulate in a more rigorous
manner can in itself open the way for additional questions which would have been
difficult to formulate without such a framework. This is indeed the case in the
natural sciences, where many discoveries are made through an analytical framework
supported by the use of mathematics. It therefore seems natural to insist on a more
formal description of how the effects of human decision-making can determine price
dynamics in financial markets.
It is important to remind ourselves that the theory of rational expectations bubbles
presented in the last section is a static theory where prices are at equilibrium at
any moment of time. This view implies that, at each time step, humans can use all
available information and then calculate what the price of an asset should be. As

the pricing formulas in Chap. 1 showed, only two variables matter in the rational
expectation view: dividends and interest rates. So the idea is that, every time new
information (either directly or indirectly) related to those variables enters the market
place, this information should then give rise to new price changes. Therefore the
rational expectations view is based on a notion of ‘quasi-static’ markets, where new
price changes are due only to fresh incoming information about those two variables.
Still, the hectic activity one can observe just by looking at the computer screens
at any stock exchange seems to indicate a completely different story, where many
other factors than interest rates and earnings/dividends ratios are relevant to the
price of an asset. Naturally, interest rate or earnings announcements are important moments for the pricing of an asset, if for no other reason than that traders expect other traders to put emphasis on them. Big changes in prices often occur at such moments, but even positive earnings announcements are sometimes met with a decline in prices after an initial positive reaction. In those cases, people say that the news had already been 'priced in'. This suggests that people use something more than just interest rate and earnings expectations as a yardstick for how to price an asset.
In the following we propose to look at pricing as a ‘social phenomenon’.
As we mentioned in Chap. 2, humans treat socially created shared reality as
objective reality. Pricing thus happens in a social consensus-building process, where the formation of a dynamic collective opinion ensures that the right level of pricing is reflected in the past and present prices that people use. Specifically, we suggest the
view that pricing takes place in financial markets on two different time scales, short
and long, with resulting fast and slow price dynamics:
• A slow price dynamics that reflects the general state of the economy. In particular,
we have in mind market participants acting on macroeconomic news, such as
monthly unemployment statistics and interest rate decisions made by the US
Federal Reserve Board. We also include external ‘shocks’ like the Russian default
crisis and the terrorist attack of 11 September 2001 to this list. These are moments
where major changes in consensus are being probed.
• A fast price dynamics where market participants act according to the general
consensus made on the long time scale (slow dynamics). The fast price dynamics
opens up the possibility for a ‘framing effect’ which we will discuss in the
following.
In Chap. 7, we will give additional evidence for this view, pointing out the separation
of time scales that exists between the two different price dynamics. As we shall
see, this has implications for pricing when looking at the level of the whole world
economy/financial system.
Assessing appropriate levels for the price of an asset is therefore more a question of finding the consensus of other market participants, with different price levels serving to test that consensus. This view also gives an explanation for the various 'support
levels’ that market participants constantly probe in order to know what should be
the relevant price for a given asset. Under such a notion, market participants do not
necessarily have a clear idea about what the exact equilibrium value or fundamental
value of the asset should be, but rather relate a given price level to past values of

the price, much as has already been discussed in Sect. 2.7. In the following, we will
argue that a proper understanding of speculative moments in the markets, including
such moments as are seen during the formation of speculative bubbles and crashes,
requires a dynamic description that goes beyond simple equilibrium arguments. The
main idea we will present is that speculative moments are generated as a result of
the investment strategies of humans which dynamically create situations in which
the decision-maker is subject to the effects of ‘framing’. We will use the term
‘decoupling’ later on to describe this specific state.
We propose to consider the market place as an environment where people use
different trading strategies in order to adapt to the price movements of the market.
In the following, we only consider price movements of the markets over such short
time horizons that changes in macroeconomic variables can be neglected. That is,
we consider only the fast price dynamics mentioned above. We will suggest a bridge
between the empirical study of market ‘mood’ emergence out of individual behavior
and a mathematical framework used to capture, explain, and predict it. More precisely, we will provide a method by which the empirically observed soft human decision heuristics that underlie financial decision-making can be encapsulated in formal agent-based rules. This method enables us to detect periods of deterministic market evolution and to predict (in controlled experiments) future states of the market.
Our key observation is that the convergence of humans to a particular mood (fear
or euphoria) in certain situations corresponds to the decoupling of the agents’ rules
observed in simulations, whereupon the decisions of the agents become independent
of or decoupled from the latest inputs of the market. This describes a process
in which individuals begin to ignore new facts. Such a description suggests that
the dramatic moments often seen during bubbles and crashes observed in real
financial markets could result from the reasonable, albeit subjective, behavior of
each individual. As we show in experiments involving human subjects, the detection
of decoupling indicates herding on a locked position and allows us to predict the
onset of speculative moments. We will show how an agent-based model, the $-game,
thus provides a formalism for reproducing the emergence of collective macroscopic
trends, and possibly a wider range of social macroscopic disruptions.
Research on human decision-making [58, 78, 83] indicates that the rules may
change dramatically during the course of decision-making. In the initial stages of
the decision process, models of rational investors open to new information in certain
cases correctly describe the strategies of the decision-makers. However, it is often
observed that, once the decision is made, investors’ minds close with respect to
fresh incoming information, and all information processing is aimed at supporting
the decision that was already made. The decisions of an individual experiencing
cognitive closure [78] in this sense become decoupled from incoming information.
We hypothesize in the following that when the majority of investors experience
cognitive closure, the market dynamics changes dramatically. Investors become
locked in their positions, and their decision heuristics are immune to disconfirming
information. Investors at such moments deviate significantly from any model of
rational decision-making. They become incapable of reacting to alarm signals, and
this allows the momentary creation of ‘mini’ bubbles and crashes that elude any

rational decision criteria. Moreover, as we will show, the detection of cognitive


closure in a high percentage of investors allows for prediction of such speculative
moments before they are actually seen in the price dynamics of the markets.
The main steps illustrating these ideas are as follows:
1. We first introduce the idea that markets become predictable for certain periods
of time. In the following, such times will be called pockets of predictability,
since they are not necessarily long periods, but can be created instantaneously
and afterwards also disappear almost instantaneously. As we shall show, the
main mechanism behind such pockets of predictability is that, at certain times,
investors happen to use investment strategies which ‘decouple’ from market price
movements. That is, their investment strategy has a certain general but simplified
property making it reminiscent of a rule of thumb. We will use the notion of
decoupling to characterize times when this kind of investment strategy takes
effect.
2. We then demonstrate the existence of such pockets of predictability in the price
movements of market games using agent-based simulations.
3. By slaving agent-based computer simulations to real financial market data
and using the technique of decoupling, we find evidence for such pockets of
predictability (called prediction days) in the real market data.
4. Having verified the presence of pockets of predictability in computer simulations
and real data, we then generalize the idea of decoupling to the detection of
speculative behavior. Using an agent-based model, we demonstrate:
• First, that decoupling is a sufficient mechanism leading to speculative formation of price 'runs',
• Secondly, that the detection of decoupling allows one to predict such speculative behavior in the market game before it occurs.
5. In experiments with human participants, we show that investors investing in
similar setups to the $-game follow the dynamics predicted by decoupling during
the moments when they collectively create a mini bubble or negative bubble.
6. Using artificial agents acting on data generated by subjects in experiments, we
show that:
• Certain speculative moments created by investors can be explained by the
phenomenon of decoupling, and
• Such moments can be predicted with high probability using the above, by
the detection of a large number of decoupled strategies, i.e., a criterion that
indicates that a large number of agents have undergone cognitive closure.

5.4 The Idea of Decoupling

We remind the reader that, in the minority game [138] and the $-game, the strategy
of an agent is represented by a reference table that specifies the conditions leading
to one out of two possible decisions, either to buy or to sell an asset. A strategy is
represented by vectors corresponding to a particular pattern of ups and downs in the
recent history of the market price, and it contains a decision to buy or to sell if such

a sequence occurs. The reference table consists of $2^m$ binary vectors of length $m$ (the length of memory). An example is shown in Table 5.1.

Table 5.1 Decision table for a strategy that uses the m = 2 most recent time steps

Signal   Action
00       +1
01       −1
10       +1
11       +1
Based on the actual history of the market, each agent chooses the dominant strategy from a set of $s$ available strategies, and uses it to make a decision. The dominant strategy in the $-game is the one that would have produced the highest profit based on the history of the market so far. In contrast, the dominant strategy in the minority game is the one that would have produced the highest success rate for taking the minority action at every time step, based on the history of the market so far.
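To make the bookkeeping concrete, the following Python sketch, entirely our own illustration, encodes the reference-table strategy of Table 5.1 and scores it the way the $-game scores strategies, namely by the profit the strategy would have earned on the observed sequence of price directions.

```python
# Reference-table strategy of Table 5.1 (m = 2): history of directions -> action.
strategy = {(0, 0): +1, (0, 1): -1, (1, 0): +1, (1, 1): +1}

def dollar_game_score(strategy, moves, m=2):
    """$-game style score: the action taken after seeing the last m price
    directions is rewarded by the direction of the move that follows."""
    score = 0
    for t in range(m, len(moves)):
        history = tuple(moves[t - m:t])   # the m most recent price directions
        action = strategy[history]        # +1 = buy, -1 = sell
        move = +1 if moves[t] == 1 else -1
        score += action * move            # profit when the bet matches the move
    return score

# Example history of downs (0) and ups (1).
print(dollar_game_score(strategy, [0, 1, 1, 0, 1, 1, 1, 0]))
```

A minority-game score would instead add a point whenever the recommended action ended up on the minority side at that time step.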
The optimal state in the $-game is the one in which all agents cooperate and take
the same decision (either buy or sell). This is a Nash equilibrium for the $G given
by Keynes’ beauty contest, where it becomes profitable for the agents to guess the
actions of the other participants and mimic their decisions. In what follows, we will
describe decoupling in the context of the $G, but it should be noted that this is a
general property of agent-based games which use lookup tables like Table 5.1.
To understand the following, it is important to note that the optimal state of the
$G without any constraints is the solution in which the price deviates exponentially
in time from the fundamental value of the asset, enabling all agents to profit from
constant price increases or decreases in a bubble or negative-bubble state. However,
finding the optimal solution in the $G requires coordination among the agents if they
are to enter and remain in such states. Note that this coordination is not driven by an
intentionally coordinated behavior of all agents; rather, it emerges from independent
decisions of the majority of agents who choose their optimal strategies from the full
set of strategies. These optimal strategies presented in the reference tables happen
to lead to the same action, which on an aggregate level is seen as synchronization.
The question is whether this mathematical formalism can adequately describe the
process of human decision-making.
At first glance, agents’ strategies are very different from what we know about
human decision heuristics [140]. Decision heuristics provide rules for human
decision-making [73]. They are expressed in terms of verbally (or rather propositionally [115]) formulated conditional rules. Clearly, humans are cognitively incapable of precisely representing the many vectors and exact sequences of market dynamics needed to formulate and evaluate the strategies. However, the reverse formalization of human decision heuristics by lookup tables is simple: any conditional rule of human reasoning can be represented in a lookup table. To accept the notion that agents' strategies represent human decision heuristics, we just need to

assume that each agent’s strategy depicts in an algorithmic way the implementation
of a decision heuristic which for humans would be specified in a higher level
language.
In this vein, cognitive closure in market players may be interpreted as meaning
that they set their minds on what will happen in the more distant future, regardless
of what happens in the near future. In terms of decision heuristics, after observing
certain patterns of market dynamics, investors may come to the conclusion that
the market trend is set and, furthermore, that temporary market reversals are not
indicative of the real market trend. For example, if the market player judges that
the market trend is up, then the increase in price serves as a confirmation of the
expected trend and the decision is to buy. If the price drops, this may be perceived
as a momentary deviation from the governing trend that signals an imminent correction, so the decision is also to buy. In terms of agents' strategies, this may
be translated as decoupling of agents’ strategies, an idea that will be crucial for
understanding how speculative periods form in the markets.
As long as the majority of investors are reacting to incoming information,
the market dynamics is unpredictable. If, however, a large enough proportion of
investors make their decisions about the direction in which the market will evolve
regardless of what happens next, the market may become temporarily predictable,
since investors are in fact locked into their decision, and decisions are temporarily
decoupled from information concerning the market. A prolonged locked-in decision
by investors to buy results in mini bubbles. Locking the decision on selling will
result in short term market crashes.
Some strategies represented by reference tables have a unique property: the
actions that they recommend are decoupled from the incoming information. A
decoupling of the strategy means that the different patterns of market history lead
to the same decision (e.g., buy), regardless of whether the market went up or down
in the previous time step. As we will show, the main interest in the mechanism of
decoupling is that it provides a way to predict the formation of speculative moments
before they are visible in the price data.

5.5 The Formalism of Decoupling

The simplest example of decoupling in agent-based models is the case in which an agent uses a strategy like the one presented in Table 5.1, but with the action column consisting only of +1s. In this case, the strategy is trivially decoupled because, no matter what the price history, this strategy will always recommend buying. In the notation used in [145], such a strategy would be referred to as decoupled for an infinite number of time steps, conditioned on any price history. In plain words, if somebody ends up holding a strategy having only +1s in the decision column, such a strategy will make the same decision to buy at every time step, independently of what goes on in the price history of the market. It should be noted that the probability of an agent possessing such a strategy is very small, viz., $2^{-2^m}$, since $2^{2^m}$ is the total number of strategies.
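This counting is easy to verify by brute force for small memory. The sketch below, our own illustration, enumerates all $2^{2^m} = 16$ strategies for $m = 2$ and confirms that exactly two of them, always-buy and always-sell, recommend the same action for every history.

```python
from itertools import product

m = 2
histories = list(product([0, 1], repeat=m))        # the 2**m possible histories

# One strategy = one choice of action (+1/-1) per history: 2**(2**m) in total.
strategies = [dict(zip(histories, actions))
              for actions in product([+1, -1], repeat=len(histories))]
print(len(strategies))    # 16 for m = 2

constant = [s for s in strategies if len(set(s.values())) == 1]
print(len(constant))      # 2: the always-buy and the always-sell strategy
```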

The strategy presented in Table 5.1 is one time step decoupled, conditioned on the price history being $\mu = (01)$ at time $t$, because whether the market at time $t+1$ went up, viz., $(01) \to (11)$, or down, viz., $(01) \to (10)$, the strategy recommends buying at time $t+2$ in both cases, that is, buying is recommended for both $(11)$ and $(10)$. In plain words, every time we see an occurrence of the price history where the market first went down (0), then up (1), we know for sure what action this strategy will recommend in two time steps. To see this, imagine that the market following the down–up movement were to go down (0). Then the updated price history to be used by the strategy at the next time step would be up–down, i.e., (10), and here the strategy recommends to buy. If instead the market following the down–up movement were to go up (1), the updated price history at the next time step would be up–up, i.e., (11), in which case the strategy also recommends to buy. So this means that, whatever happens in the time step after the down–up movement of the market, we know for sure that following this time step the strategy will always recommend to buy. Likewise, the same strategy is seen to be one time step decoupled, conditioned on the price history $(11)$, since, independently of the next market movement at time $t+1$, the strategy will always recommend buying at time $t+2$.

In a game with only one agent and only one strategy, such as the one in Table 5.1, we could therefore know with certainty what the agent would do at time $t+2$ if the price history at time $t$ was either $(01)$ or $(11)$, independently of the price movement at time $t+1$.
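The walkthrough above is mechanical enough to automate. The following sketch, again our own illustration, checks for each possible history whether the Table 5.1 strategy is one time step decoupled: the two histories that can follow at time $t+1$ must map to the same recommendation at time $t+2$.

```python
strategy = {(0, 0): +1, (0, 1): -1, (1, 0): +1, (1, 1): +1}   # Table 5.1

def one_step_decoupled(strategy, history):
    """True if, given `history` at time t, the strategy's recommendation at
    t+2 is the same whether the market moves down (0) or up (1) at t+1."""
    next_histories = [history[1:] + (move,) for move in (0, 1)]
    return len({strategy[h] for h in next_histories}) == 1

for history in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(history, one_step_decoupled(strategy, history))
# decoupled exactly for (0, 1) and (1, 1), as argued in the text
```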
As discussed above, we have seen how the strategies of agents can sometimes
lead to momentary pockets of predictability regarding the action a given agent will
take in the near future. But even knowing for sure what one or even several agents
will do does not mean that we necessarily know what will happen at the market level.
To know for sure how the market will behave, we need to encounter a situation in
which not only a majority of agents are decoupled, but in which that majority of
agents are decoupled in the same direction. The formalism corresponding to such a
condition is given in the box below.

To see how such a condition can arise, we introduce the following formalism. We call a strategy decoupled if, conditioned on having a given price history at time $t$, we do not need to know the price movement at the next time step $t+1$ in order to determine what the strategy will recommend at time $t+2$. On the other hand, if we need to know the price movement at $t+1$ in order to determine what the strategy will recommend at time $t+2$, we say that such a strategy is coupled to the price time series.

At any time $t$, one can therefore ascribe the actions of agents to two contributions, one from coupled strategies and one from decoupled strategies:

$A(t) = A_{\rm coupled}^{(t)} + A_{\rm decoupled}^{(t)} .$   (5.10)

The condition for certain predictability one time step ahead is therefore

$\bigl| A_{\rm decoupled}^{(t)}(t+2) \bigr| > N/2 ,$   (5.11)

because in that case we know that, given the price history at time $t$, the sign of the price movement at time $t+2$ will be determined by the sign of $A_{\rm decoupled}^{(t)}(t+2)$ regardless of what happens at time $t+1$.
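In simulation code, condition (5.11) amounts to summing the predetermined actions of the decoupled agents and comparing the magnitude with $N/2$. A hedged sketch, reusing one_step_decoupled from the example above and assuming each agent acts on a single currently optimal strategy:

```python
def a_decoupled(optimal_strategies, history):
    """A_decoupled^(t)(t+2): the summed, already determined t+2 actions of
    all agents whose current optimal strategy is decoupled given `history`."""
    total = 0
    for strategy in optimal_strategies:            # one strategy per agent
        if one_step_decoupled(strategy, history):
            total += strategy[history[1:] + (0,)]  # either branch gives the same action
    return total

def prediction_day(optimal_strategies, history):
    """Condition (5.11): the decoupled block alone fixes the sign of the
    aggregate action at t+2, whatever happens at t+1."""
    N = len(optimal_strategies)
    return abs(a_decoupled(optimal_strategies, history)) > N / 2
```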

5.6 Decoupling in Computer Simulations and in Real Market Data

A priori, it is highly nontrivial whether one should ever find the condition for
decoupling to be fulfilled at any instant of time, even in computer simulations
of agent-based models. As shown in [145], if the agents in the MG and $G play
their strategies randomly, the condition is never fulfilled. When the agents choose
their strategies randomly, there is no feedback between the price movements of the
market and the decision-making of the agents, so it is important to note that in this
case we cannot use decoupling to predict what agents will do next.
The way decoupling arises must therefore be related to feedback and dynamics of the pricing, and this must somehow require the optimal strategies of agents to be attracted to regions in the phase space of strategies which contain decoupled strategies. In the $G, the natural candidates for attractors are the two most trivial strategies, with actions either all +1 or all −1. But since it is very unlikely that an agent will possess these two strategies, an attractor would necessarily have to consist of regions in the phase space of strategies that are highly correlated with such strategies. In the MG, it seems even less obvious that decoupling should ever take place, since there seem to be no natural attractors for decoupling in that game.
Interestingly, even the MG exhibits predictable behavior, as can be seen from Fig. 5.1. Each dot in the figure shows the value of $A_{\rm decoupled}$ versus time. Given the parameter values of the MG used in the simulations, the condition (5.11) means that, whenever the $A_{\rm decoupled}$ value becomes larger than +50 or smaller than −50, we can predict for sure the direction of the market two time steps ahead, regardless of the market move one time step ahead. As can be seen from the figure, most of the dots lie within the interval $[-50, 50]$, and for such events we have no predictive power. However, the dots enclosed by a circle illustrate prediction days given by the condition (5.11), which means that, standing at time $t$, one can predict for sure the outcome of the market at time $t+2$, regardless of which direction the market takes at time $t+1$. Crosses in the figure correspond to events where two or more consecutive price movements can be predicted ahead of time.
At first, it might seem like a somewhat theoretical exercise to be able to
predict ahead of time how agents in a computer simulation will behave. One could

[Figure 5.1 appears here: a scatter plot of $A_{\rm decoupled}$ (y-axis, −100 to 100) versus time $t$ in arbitrary units (x-axis, 0–2000).]

Fig. 5.1 Decoupling in the MG. $A_{\rm decoupled}$, defined in (5.10), as a function of time for the MG with $N = 101$, $s = 12$, $m = 3$. Circles indicate one-step prediction days, defined by the condition (5.11). Note that a prediction day implies prediction with certainty of the price direction of the market two time steps ahead. Crosses correspond to the subset of days with two or more consecutive one-step prediction days (Figure taken from [145])

argue that, since the setup of the computer program is entirely determined by the
parameters of the simulations, all information is already encoded in the program and
the only thing needed to predict two time steps ahead would be to let the program
run two additional steps and then see what happened. However, this remark misses
the point concerning the more interesting situations of practical applications where
simulations of agent-based models are slaved to real market data, as explained in
Chap. 4. This corresponds to considering real predictions in real time.
When one uses real market data as input to computer simulations of the agent-based models, one encounters the situation of knowing, for example, today's market close. Observing a moment of decoupling in the computer simulations, given today's close, then means that we know for sure what the agent-based model will predict, not for tomorrow, but for the day after tomorrow. In that case there is no need to wait and see how the markets close tomorrow in order to make the prediction tomorrow – it can already be made today. In this respect the
mechanism of decoupling can now be seen as a natural candidate for understanding
and defining the process that leads to the observed ‘big swings’ in the market, as
defined in Chap. 3.
To see how the method of decoupling works when applied to real market data,
consider Fig. 5.2, which shows as an example the NASDAQ Composite price history
(thick dashed line) as a function of time in units of days over a half-year period.

[Figure 5.2 appears here: normalised price (y-axis, 0.85–1.25) versus time in days (x-axis, 0–120).]

Fig. 5.2 Finding pockets of predictability in real market data. Thick dashed line: NASDAQ Composite price history as a function of time (days). Thin solid lines: predicted trajectories obtained from third-party games. The first 61 days are in-sample and are used to calibrate 10 third-party games. Days 62–123 are out of sample. The third-party games do a poor job of predicting the out-of-sample prices of the NASDAQ Composite index as a whole, but Table 5.2 shows that they successfully predict specific pockets of predictability associated with the forecast prediction days (Figure taken from [145])

The sample period shown was chosen so as to have no apparent market direction
over the first half of the period that was used as in-sample. As described in Chap. 3,
when ‘slaving’ a game to real market data, one uses the last m price directions of the
real market data as input to the $G agents in the computer simulations. The agents
therefore adjust their optimal strategies according to the real price history. In this
way the $A_{\rm decoupled}$ defined in (5.10) can be calculated from the optimal strategies of the agents, which are determined dynamically via the price history of the real data.
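Schematically, the slaving step can be pictured as follows. This is our own sketch: best_strategy stands in for the agent's rescoring of its $s$ strategies on the real history and is a hypothetical method, not published code.

```python
def last_m_directions(closes, m):
    """Binary history from real closing prices: 1 for an up-day, 0 otherwise."""
    recent = closes[-(m + 1):]                # m + 1 prices give m directions
    return tuple(1 if b > a else 0 for a, b in zip(recent, recent[1:]))

def slave_step(agents, closes, m):
    """One update: agents reselect their optimal strategies on the real price
    history, and the decoupled aggregate is evaluated on that same history."""
    history = last_m_directions(closes, m)
    optimal = [agent.best_strategy(closes) for agent in agents]  # assumed API
    return a_decoupled(optimal, history)
```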
The data of the in-sample period was first used to fix the parameters of the
$-games which could best describe the NASDAQ Composite over the in-sample
period. The $-games that best fitted the NASDAQ data in-sample were found using
genetic algorithms that explored the three different parameters of the $-games as
well as different initial strategies attributed to the agents in the game. Having
fixed the parameters of the $G that supposedly give the best representation of the
NASDAQ data in-sample, the remaining half of the data set was then used out-of-
sample.
The ten thin solid lines in Fig. 5.2 show third-party $-games obtained in this
manner. The third-party $-games were all constructed with the same optimal
parameters found in the genetic algorithm search, but each game had agents using

Table 5.2 Out-of-sample success rate % (second row) using different thresholds for the predicted global decoupled action (first row) of the third-party $-games calibrated to the NASDAQ Composite index. $N_b$ (third row) is the number of days from $t = 62$ to 123 which have a predicted global decoupled action $|A_{\rm decoupled}|$ greater than the value indicated in the first row

|A|   0     0.5   1     1.5   2     2.5   3     3.5   4     4.5
%     53    61    67    65    82    70    67    67    100   100
N_b   62    49    39    23    17    10    6     3     2     1

different initial realizations of their strategies, attributed to them at the beginning of the time period. Note that the ten thin solid lines all closely follow the real market data (thick dashed line) over the in-sample period. This illustrates the fact that different games with the same parameters, all slaved to the same input data (here the NASDAQ Composite), perform similarly, despite having agents with different pools of strategies assigned to them at the beginning of each game. Given the size of the pool of strategies (remember there are $2^{2^m}$ possibilities), this may seem a surprising result at first glance. It constitutes the fundamental reason why one can use reverse engineering to find the parameters of a given time series generated by an agent-based model, as shown in [80]. See the discussion of this in Chap. 3.
Of course, the best way to try out a method is to test it on real data and see how it works. In order to see whether predictions could be made using the idea of decoupling, the ten third-party games were fed with the NASDAQ price history over the second half (the out-of-sample period), and predictions were issued at each close of a given day on which a prediction day was detected. It turns out that using just the majority decision of the third-party games does a poor job of predicting the out-of-sample prices of the NASDAQ Composite index, while Table 5.2 shows that they do predict specific pockets of predictability associated with the forecast prediction days. As can be seen from the table, the larger the threshold for prediction (measured by the parameter $A_{\rm decoupled}$), the greater the success rate in predicting the direction of price movements on the NASDAQ Composite. The most important point to note is that the success rate increases with the amplitude of $A_{\rm decoupled}$: the larger the value observed for $A_{\rm decoupled}$, the more confident one should be in the prediction. However, a larger value also comes at a price, since there are fewer such events, meaning worse statistics.
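The entries of Table 5.2 follow from a simple tabulation. Given a record of predicted decoupled actions and realized market directions, the success rate at each threshold can be computed as in the sketch below (ours; the inputs are placeholders, not the NASDAQ series).

```python
import numpy as np

def success_by_threshold(A_dec, realized, thresholds):
    """For each threshold: hit rate of sign(A_decoupled) against the realized
    direction, over the N_b days with |A_decoupled| above the threshold."""
    A_dec = np.asarray(A_dec, dtype=float)
    realized = np.asarray(realized, dtype=float)
    rows = []
    for thr in thresholds:
        mask = np.abs(A_dec) > thr
        n_b = int(mask.sum())
        pct = 100 * np.mean(np.sign(A_dec[mask]) == np.sign(realized[mask])) if n_b else float("nan")
        rows.append((thr, pct, n_b))
    return rows   # (threshold, success %, N_b): one row per column of Table 5.2
```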

5.7 Using Decoupling to Detect Speculative Price Movements

Before taking on the difficult task of identifying speculative short term movements
in real financial market data, it seems appropriate to have a laboratory in which
to understand the basic causes of such speculative bubbles. Only when we have
a testing ground to build our understanding through the testing of hypotheses can
we hope to discover which ingredients are necessary and sufficient for humans to
engage in the kind of collective exuberance seen during speculative bubbles and
crashes.

Using computer simulations is one way we can exercise our skills through
modeling of such collective phenomena, and thereby develop tools to detect and
eventually predict these events. Bubble after bubble can be generated on a computer,
and this allows us to gain insight into their generic nature. This knowledge can then
be used to devise experiments by which one can check the computer simulations
against real human behavior. Again the advantage of doing experiments with
humans in the laboratory is that we can change conditions over and over, getting
insight into what really matters in the creation of speculative price movements. The
disadvantage when it comes to real markets is that we only have one realization of
the markets, the one we see right now on the computer screen or hear about in the
news. We cannot go back in time and change conditions and see what would then
have happened if things had been different. However, on the computer as well as in
the laboratory, we do have this luxury, so let us use it.
The interesting part of the $-game is that it has an inherent capacity to generate
bubbles. We clearly need models like this if we want to make ‘laboratory’ studies
of bubbles/crashes on the computer. Take any standard $-game with arbitrary
parameter values, introduce no constraints on the amount of stocks and money each
agent can possess, let it run on a computer, and one will soon see a price bubble
created in the simulations. Of course this does not sound like a good general model
of real financial markets, but it does look like what one can sometimes observe
during the ‘runs’ of euphoria observed in a typical bull market. We can see this right
now in the commodity markets where, e.g., the gold/silver and crude oil markets
seem to engage regularly in bullish runs. Introducing real life constraints such as
the amount of money/stocks each agent/market participant can hold will make the
price dynamics in the game correspondingly more realistic and avoid speculative
price formation. However, the sensitivity of the $G to create speculation/bubbles
is something that can be exploited in the detection of real life speculative price
movements, as will be shown shortly.
Before applying agent-based simulations to data generated in experiments made
by humans, we need to know what the general intrinsic properties of the $G are
with respect to the creation of bubbles and negative bubbles. In particular, the
idea will be to understand how the price dynamics generated by decoupling could
be the mechanism responsible for such short term bubbles and negative bubbles.
Recall that the idea behind decoupling is that, at some moments in time, traders are
subject to ‘framing’, i.e., they begin to ignore new information about the evolving
price history of the market. We have seen how that could give rise to pockets of
predictability on particular days, but can one extend this reasoning to detect the
onset of a speculative price bubble before it is actually seen in the price history?

5.7.1 Monte Carlo Simulations Applied to $G Computer Simulations

When confronting an unknown situation in a model, e.g., in physics or finance, one solution is to generate many different scenarios by slightly changing the parameters

of the model and then see how the model responds to such changes. Looking at
the characteristics of each realization of a given scenario then teaches us something
about the persistence of the model under changing conditions. A general name has
been given to such procedures: they are known as Monte Carlo simulations. This
is because the method is driven by generating a lot of random numbers, just as the
spinning roulette wheel in the casino of Monte Carlo generates a string of random
integers.
In the following we use Monte Carlo (MC) simulations to generate a large
number of different $-games on a computer, so that a new random number of
strategies is assigned to the agents in each game (the same for all agents in each
specific game). In addition, for each different game, new strategies are assigned
randomly to the agents. The MC simulations should then give insight into the
robustness of the way bubbles are formed, regardless of how many strategies the
agents adopt and regardless of which pool of strategies each agent happens to use.
This insight will be useful in the next section, when we discuss the experiments with
human traders and their decision-making in a very similar setup to the MC computer
simulations of the $-game.
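Schematically, the Monte Carlo layer can be organized as below; this is a sketch under our own naming, where run_dollar_game stands in for a full $-game implementation and is hypothetical.

```python
import random

def monte_carlo_bubbles(L, N, m, max_s=16, seed=0):
    """Run L independent $-games with fixed N and m. In each game, a random
    number of strategies s is drawn (the same s for every agent in that game)
    and fresh random strategies are dealt; the onset times t_b are collected."""
    rng = random.Random(seed)
    onsets = []
    for _ in range(L):
        s = rng.randint(2, max_s)                       # new random s per game
        game = run_dollar_game(N=N, m=m, s=s, rng=rng)  # hypothetical helper
        onsets.append(game.t_b)                         # time the bubble run began
    return onsets
```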
In the experimental setup with humans, we are in a situation where we can
directly impose two out of the three parameters of the $G. We can control the
memory m used by the subjects simply by only allowing them access to the last
m directions of the price generated by their common action. In addition, the number
of subjects involved in the experiments corresponds to the number of agents N
used in the MC simulations. The only difference in the decision-making between
the realization of the experiments and the simulations of the agent-based model is
then due to the strategies. In the $G, each agent has $s$ strategies, as specified in a lookup table like the one in Table 5.1, drawn from the total pool of $2^{2^m}$ strategies
before the game starts. In contrast, the human subjects are just told to trade in such
a way as to optimize their profit. Comparing the outcome of the experiments with
genuine properties of the $G with the same m; N therefore gives us insight into
human decision-making in financial market games as modeled by the $G.
The idea then is to try to predict a speculative moment or bubble in the making,
and as much in advance as possible. The advantage of using market games like the
$G and the MG is that it is easy to give a precise definition of a speculative bubble or
negative bubble: whenever the last m price movements were all positive or negative,
we are in a bubble or negative bubble, respectively. In this state, we know for sure
the game is trapped and cannot leave that state. If the majority of the strategies in use, i.e., the optimal strategies, recommend buying when in the bubble state, then continued buying can only reinforce the selection of those optimal strategies.
The definition of a bubble state as one in which all $m$ previous price moves have the same sign should be considered as the zeroth-order target, namely to predict the time at which this state occurs. One could call this time $t_b'$. However, in the following, we shall consider another target that is more difficult to predict in advance, namely the time from which all subsequent price movements have the same sign. We call this moment in time $t_b$. To illustrate the difference between the definitions of $t_b'$ and $t_b$, consider a game with memory $m = 3$ and price history $\mu = (001)$ at time $t$. Standing at time $t$, this means that 3 days ago the market went down, 2 days ago down, and yesterday up. If the market keeps on going up following that price history, then the run of same-sign moves began with yesterday's rise, i.e., $t_b = t - 1$. However, it is not until $t + 2$ that the price history becomes $\mu = (111)$, so only at that moment do we know that $t_b' = t + 2$. In other words, $t_b$ always occurs before $t_b'$, or more precisely $t_b' = t_b + m$, which explains why $t_b$ is the more difficult of the two targets to predict.
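Both targets are easy to read off a recorded series of price directions after the fact. The sketch below, our own illustration, returns the index at which the final same-sign run starts and the first index at which the $m$ most recent moves are all equal.

```python
def bubble_times(moves, m):
    """moves: list of +1/-1 price directions ending in a run of equal signs.
    Returns (index where the final same-sign run starts, first index at
    which the m most recent moves are all equal)."""
    start = len(moves) - 1
    while start > 0 and moves[start - 1] == moves[-1]:
        start -= 1                      # walk back to the start of the final run
    first_uniform = next(t for t in range(m - 1, len(moves))
                         if len(set(moves[t - m + 1:t + 1])) == 1)
    return start, first_uniform

print(bubble_times([-1, -1, +1, +1, +1, +1], m=3))   # -> (2, 4)
```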
As mentioned previously, simulating the $G without any constraints on the
amount of stocks or money available to the agents, the latter will eventually enter a
bubble or a negative-bubble state. However, we do not know exactly when this will
happen. Different realizations of the pool of initial strategies assigned to the agents
will mean that the time tb for the onset of the bubble is random, even for different
simulations of the game that use the same parameter values for .N; s; m/. Still, we
would like to be able to predict tb for any given realization of a game before the
bubble is realized in a given simulation. How can we do this?
We begin by launching our bubble laboratory on the computer [118]. Each of the
MC simulations started out on the computer corresponds to running a $-game with
given parameters. We let each game run until either a bubble or a negative bubble is
created. Recall from the above that a bubble is defined as m consecutive increases
in the market, and a negative bubble is defined as m consecutive decreases in the
market. Hence, at every time step, consider all decoupled strategies that the group
of agents use, i.e., strategies that are presently optimal, and divide them into two
groups according to which price direction they are decoupled along. In order to get
an idea of the average behavior of the game with respect to bubble formation, we
mix data from the generation of the L different bubbles, defining a tb for each of
them, and then plot the average percentage of decoupled strategies along the two
directions of decoupling.
To establish the extent to which the decoupling of agents' strategies can act as an early predictor for bubbles or negative bubbles, we consider for each of the decoupled bubbles/negative bubbles, as a function of time $t - t_b$, the percentage $\rho^{+\rm bub}_{\rm decoupled}$ of decoupled optimal strategies used by agents at a given moment that recommend a decision along the direction of the bubble/negative bubble, as well as the percentage $\rho^{-\rm bub}_{\rm decoupled}$ of decoupled optimal strategies with decisions in disagreement with the direction of the bubble/negative bubble. The time $t_b$ of the onset of a bubble/negative bubble is defined as the moment at which the price begins to constantly increase or decrease. Plotting the percentage of decoupled strategies versus time, one obtains plots like those in Fig. 5.3. In each of the two plots, the x-axis represents the time $t - t_b$ elapsed from the moment $t_b$ when a bubble/negative bubble was created in a given game.
Figure 5.3 illustrates two different MC simulations, with $m = 3$ and $m = 6$. In each simulation, $L = 50,000$ different $-games were run on a computer. Each of the $L$ games was run with the same parameters $N = 11$ and $m = 3$ (top plot) or $m = 6$ (bottom plot) as used in the experiments with human subjects to be presented in the next section. Since we would like to understand human decision-making by having humans play similar games, we let the number of strategies $s$ be chosen randomly in each of the $L$ MC simulations shown in Fig. 5.3. This should therefore

[Figure 5.3 appears here: two panels showing the percentage of decoupled strategies (y-axis) versus $t - t_b$ (x-axis, −20 to 20), for $m = 3$ (upper panel) and $m = 6$ (lower panel).]

Fig. 5.3 Decoupling and the onset of predictability. Splitting in the agents' use of different optimal decoupled strategies (solid and dotted lines), indicating the onset of a speculative bias in computer simulations of the $-game. The splitting as a function of time allows one to predict the presence of a speculative bias before it can be seen in the price history generated by the agents. Solid lines indicate the percentage $\rho^{+\rm bub}_{\rm decoupled}$ of optimal decoupled strategies used by agents which, at a given moment, recommend a trading decision along the direction of the bubble/negative bubble. Dotted lines indicate the percentage $\rho^{-\rm bub}_{\rm decoupled}$ of decoupled optimal strategies with trading decisions in disagreement with the direction of the bubble/negative bubble. Time is normalized in such a way that the moment when a speculative bias begins (defined as the moment at which the price begins to constantly increase or decrease) corresponds to $t = t_b$. It is therefore not before $t = t_b + m$, i.e., only after observing m consecutive price increases/decreases, that a collective speculative bias can be defined ex post from the price time series. Vertical dashed lines indicate this moment for memory lengths $m = 3$ (upper plot) and $m = 6$ (lower plot). The observation that a split between the solid and dotted lines occurs before the onset of a bubble (indicated by $t_b$) means that prediction of biased price movements in the game is possible before they are actually visible in the prices

bring out the essential features of the game, i.e., those that are independent of the
number of strategies assigned to each agent and independent of the pool of different
strategies assigned to the agents, since these two quantities are averaged over in the
MC simulations. In this way we can directly compare the human decision-making in
the experiments and the simulations of the agent-based models illustrated in Fig. 5.3.
The experiment with human subjects will be described and interpreted in the next
section.
The hallmark of the plots presented in Fig. 5.3 is the clear splitting of the decoupled strategies a long time before the onset of the bubble. Notice that the units are chosen so that each of the $L$ bubbles happens at the same time $t - t_b = 0$, as indicated on the x-axis. The figure shows that prediction is possible in these market games: by observing the split in the decoupled strategies, we know a long time in advance what will happen before it actually shows up in the price history. In both cases, a small split is already observed at the very beginning of the time series shown, so we already know 20 time units in advance, before it can be formally confirmed, that a bubble or negative bubble is in the making.
Another feature to notice in the figure is the fluctuations in $\rho^{\pm\rm bub}_{\rm decoupled}$ versus time. Far away from the onset of the bubble or negative bubble, the agents' choices of strategies are stable. At least this is how it looks when averaging over the $L$ different bubble/negative-bubble trajectories, since the percentage of decoupled strategies remains almost constant. However, as we approach the generation of the bubble or negative bubble, characteristic fluctuations begin to show up in $\rho^{\pm\rm bub}_{\rm decoupled}$. It is interesting to note that we can understand the trading leading to such phenomena by looking at a completely different field, in particular, by studying what happens during phase transitions in materials. Here we give a short introduction to the idea of phase transitions before returning to the interpretation of Fig. 5.3.
When an iron bar is heated to a high enough temperature, it possesses no magnetization, but if it is then left to cool below the so-called Curie temperature, magnetization will begin to appear. This illustrates the concept of a transition from a high temperature phase of the iron bar, which is symmetric, to a low temperature phase of the bar, which is asymmetric. The high temperature phase is symmetric because the microscopic magnetization inside the bar has no preferred direction, so taken as a whole the bar is not magnetized. This changes as the temperature is lowered: at a sufficiently low temperature, called the critical temperature $T_C$, the microscopic magnetic domains throughout the bar suddenly begin to align to produce a macroscopic magnetization of the bar as a whole.

In physics one talks about the symmetry being broken at the temperature $T_C$, when going from the high temperature symmetric phase to the low temperature asymmetric phase. The phase transition in the iron bar is illustrated by the two plots in Fig. 5.4. The upper plot shows the specific heat as a function of temperature. The specific heat is closely related to the magnetic spin susceptibility, which is an example of a response function. Loosely speaking, the divergence of the specific heat at $T_C$ indicates that the susceptibility of the material diverges as the temperature approaches $T_C$, whereupon huge fluctuations of the microscopic magnetization set in.
As a simple picture, one can imagine the microscopic magnetization as made up of atoms having a spin in one of two directions, say spin up with microscopic magnetization $m = +1$ and spin down with magnetization $m = -1$. The total magnitude $|M|$ of the macroscopic magnetization can then be obtained by adding all microscopic contributions from the atoms whose spins have $m = +1$ magnetization and from the remaining atoms whose spins have $m = -1$ magnetization. The lower plot in Fig. 5.4 shows how the macroscopic magnetization $|M|$ of the bar is zero in the high temperature phase of the material, while the bar becomes magnetic, with $|M| > 0$, as the temperature is lowered below $T_C$.

Fig. 5.4 Illustration of a phase transition. Examples of singularities in thermodynamic functions at second-order phase transitions in an iron bar. (a) Specific heat C at an order–disorder transition. (b) Magnitude $|M|$ of the spontaneous magnetization at a magnetic phase transition (Figure taken from [91])

In the case of the iron bar, the magnitude of the magnetization is called the order parameter of the phase transition. It can be defined in terms of the densities $m_+$ and $m_-$ of the up- and down-spin microscopic magnetization by

$M = \dfrac{m_+ - m_-}{m_+ + m_-} .$

The concept of order parameter is very general and characterizes the onset of
phase transitions in many different systems in nature. As we have seen for a
ferromagnetic system, the order parameter is the net magnetization, while for
solid/liquid transitions, it is the density. More generally, symmetry-breaking phase
transitions cover a very broad range of interesting phenomena from liquid crystals
to superconductors, and they even play an important role for understanding the ratio
of matter to antimatter in cosmology.
Returning to Fig. 5.3, a similar phase transition can now be understood to take place through the way agents trade. Time in this case corresponds to temperature in the physical example, with the equivalent of the high temperature phase corresponding to the situation in which the agents have not yet chosen whether to create a bubble or a negative bubble. The low temperature phase corresponds to the situation where the agents have entered the bubble/negative-bubble state, i.e., times $t > t_b$. The time $t_b$ itself is then the counterpart of the critical temperature. The microscopic magnetization in the physical system corresponds to decoupled trading strategies, either decoupled along ($m = +1$) the direction of the bubble/negative bubble or against it ($m = -1$).

The order parameter $M_{\rm bubble}$ for the trading system is defined similarly to the magnetic order parameter for the iron bar:

$M_{\rm bubble} = \dfrac{\rho^{+\rm bub}_{\rm decoupled} - \rho^{-\rm bub}_{\rm decoupled}}{\rho^{+\rm bub}_{\rm decoupled} + \rho^{-\rm bub}_{\rm decoupled}} .$

Except for the small split between $\rho^{+\rm bub}_{\rm decoupled}$ and $\rho^{-\rm bub}_{\rm decoupled}$ seen at early times, this quantity would be 0 in what corresponds to the high temperature phase, i.e., small $t$, and nonzero in what corresponds to the low temperature phase, i.e., large $t$. Furthermore, it would have the same shape as the order parameter in Fig. 5.4. The fluctuations in $\rho^{+\rm bub}_{\rm decoupled}$ and $\rho^{-\rm bub}_{\rm decoupled}$ just before $t_b$ correspond to the increased susceptibility as the phase transition is approached.
Let us summarize the main findings from Fig. 5.3. To begin with, it gives us a first confirmation that the method of decoupling can be used in a predictive manner in market games to establish short term bubble/negative-bubble formation before it is actually visible in the price history. The plots presented in Fig. 5.3 clearly illustrate that, before the onset of the bubble or negative bubble, there is a splitting of the optimal strategies used by the agents, and this can be used as a predictor for the onset $t_b$. It is remarkable that a clear split is observed for $m = 6$ even 20 time steps before $t_b$, and it should be noted that, up until $t_b + m - 1$, any predictability is nontrivial, because only at time $t_b + m$ do the agents encounter $m$ consecutive pluses or minuses. Drawing the analogy with phase transitions in physical systems, we have also noted that the creation of bubbles in the computer market games corresponds in a well-defined manner to the kind of symmetry breaking seen in ferromagnets. In the next chapter, we will elaborate further on this argument and apply it to the long term growth and stability of financial markets.

5.7.2 Experiments on Human Subjects and Monte Carlo Simulations Applied to Data Generated by Human Traders

Before discussing experiments with market games as presented above, let us take
a moment to talk about the usefulness of doing experiments in the first place. In
particular, let us recall Karl Popper’s view, introduced in Chap. 3. According to
Popper, the reason for the obvious progress of scientific knowledge is related to
a kind of natural selection between scientific ideas, very much as seen in biological
evolution. In order to carry out selection on something, one needs a criterion for
selection, and Popper’s idea was to use what he called the falsifiability of a given
theory. Note that ‘falsifiable’ does not mean that a given theory is actually false, but
rather that if it were false, this could be shown by experiment. Popper argued that
scientific theories cannot be proven in general, only tested indirectly. This means
that no number of positive outcomes of a given experiment can actually confirm
a scientific theory. The best we can hope for is to not falsify a theory. In terms
of natural selection between competing theories, the fact that certain theories are

falsified then leads to an evolutionary process for theories that have not been falsified
so far. The real problem, however, comes when theories are not falsifiable, since they
then become mere axioms.
From our point of view, one of the main problems in finance is the absence
of anything like Popper’s approach as described above. For example, the efficient
market hypothesis is an example of a theory that is not falsifiable. On the other
hand, the CAPM mentioned in Chap. 1 is in fact a falsifiable theory, and most data
actually disconfirm it. Taking Popper as a reference, in order to make progress,
it is therefore important to have experiments with which to falsify theories. But the
argument against doing experiments in finance is often that the conditions one might
impose on human subjects in the laboratory have nothing to do with the conditions
experienced by traders in real markets. This is of course true, in the sense that one
can never recreate in laboratory experiments something which will ever come close
to approximating the complexity seen in real markets. But maybe that is not what
one needs to aim for.
If we look at prospect theory as described in Chap. 2, this is a clear example
of a hypothesis that can be falsified by doing experiments outside the domain of
real financial markets. Still, prospect theory is now used directly on market-related
issues by both practitioners and people in the academic world, giving them a new
framework in which to formulate and possibly understand certain problems. We
therefore propose prospect theory as an enlightening example of how experiments
and the development of theory can go hand in hand when performing experiments
in the laboratory rather than in the real market.
For some reason, the tradition of doing experiments is more widespread in economics than in finance. Still, experiments done by Vernon Smith, for example, have been applied to probe traditional financial market theories. Somewhat surprising to us is the apparent lack of experimentation in the field of behavioral finance. We would like to see more such experiments in a field one could call experimental behavioral finance (EBF). This is to distinguish it from the more
common experiments which only focus on traditional core theories without taking
into account any behavioral aspects of decision-making. Experimentation has the
clear advantage that, whenever a new claim is made about some generic behavior
in human decision-making, others can check the results by redoing the same
experiments in their laboratory.
Having presented the computer simulations in the last section, we now describe
experiments carried out with human subjects [65]. One of the main questions we
would like to answer is whether it is possible to capture human decision-making in
financial market games in the laboratory, as described by the theory of artificial
market games in general and the decoupling of trading strategies in particular.
Specifically, we would like to see whether the connection between decoupling
of strategies and the emergence of speculative periods (bubbles and/or negative
bubbles) that was discovered for artificial agents is reproduced in laboratory
experiments with human subjects.
Several experiments with humans have been performed [118], the aim being to
make the trading environment resemble as closely as possible the setup encountered

in the market games ($G, MG) described in the previous sections. We recall the
three parameters used in these market games: the number N of market participants,
the memory m that each market participant uses for decision-making, and the
number s of strategies used by each market participant. The number of human
subjects corresponds directly to the variable N in the games. In the experiments,
each human subject had access to the last m price movements, represented by a
string of pluses when the price increased and minuses when the price decreased,
to bet on whether the market would rise or fall in the following time step. This is
identical to the information used for decision-making by each agent in the market
games. Also as in the market games, the cumulative result of the subjects’ actions
generates the subsequent movement in the price history.
The only real difference between the market game setup and the human trading
experiments therefore enters through the strategies. Instead of having a fixed number
of strategies, the parameter s in the market games, the subjects were just told to try
to optimize their gain in the best possible manner. The idea is that humans will in
many ways use similar heuristics to those described by the strategies in the market
games.
Before doing an experiment on trading, it is a good idea to know how long it
has to run in order to test one’s working hypothesis. To get a rough idea, it is again
useful to turn to Monte Carlo simulations of the $G. It turns out that the average time
$\langle t_b \rangle$ it takes for the agents in the computer simulations to create a bubble/negative
bubble scales as $\langle t_b \rangle \propto 2^m$. That is, the average time to create a bubble or negative
bubble does not depend on how many agents take part in the game, nor on how
many strategies they hold. The only thing that matters turns out to be the information
content in the bit string of price history, as given by $2^m$.
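To make the scaling claim concrete, here is a minimal Monte Carlo sketch in Python of a $G-style simulation that measures the average onset time of a run of $m$ identical price moves. All parameter values, the tie-breaking, and the scoring details are illustrative assumptions of ours rather than the exact setup of [118]; the point is only to see the onset time grow roughly like $2^m$.

import numpy as np

def mean_bubble_onset(N=11, m=3, s=2, runs=200, t_max=5000, seed=0):
    """Average time <t_b> until the last m price moves share the same sign
    (the bubble / negative-bubble onset) in a stripped-down $G simulation."""
    rng = np.random.default_rng(seed)
    onsets = []
    for _ in range(runs):
        # one random +/-1 prediction per strategy and per m-bit history,
        # the history being encoded as an integer in [0, 2**m)
        tables = rng.choice([-1, 1], size=(N, s, 2 ** m))
        scores = np.zeros((N, s))
        hist = int(rng.integers(0, 2 ** m))   # random initial history
        prev_hist = None
        moves = []                            # signs of past price moves
        for t in range(t_max):
            best = scores.argmax(axis=1)      # each agent's best strategy so far
            actions = tables[np.arange(N), best, hist]
            A = actions.sum()                 # aggregate order imbalance
            move = 1 if A >= 0 else -1
            # $-game payoff: the action prescribed one step earlier is
            # rewarded by the price move it helped to produce now
            if prev_hist is not None:
                scores += tables[:, :, prev_hist] * A
            prev_hist = hist
            hist = ((hist << 1) | (move > 0)) % (2 ** m)
            moves.append(move)
            if len(moves) >= m and len(set(moves[-m:])) == 1:
                onsets.append(t)
                break
    return float(np.mean(onsets))

for m in (2, 3, 4, 5):
    print(m, 2 ** m, round(mean_bubble_onset(m=m), 1))

Under these assumptions, increasing the memory by one bit roughly doubles the measured onset time, while changing $N$ or $s$ has little systematic effect, in line with the $\langle t_b \rangle \propto 2^m$ scaling quoted above.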
Another useful piece of information to probe in the experiments would be how
often the bubble or negative-bubble state was created due to decoupled strategies.
Here it turns out that, for $m = 3$ (one of the values used in the experiments), Monte
Carlo simulations found [118] that an average of 57 % of the bubbles or negative
bubbles entered a state of decoupling, whereas for $m = 6$ (another value used in the
experiments), the figure was 54 %. This shows that, while decoupling is sufficient
to produce synchronization, it is not a necessary condition for it.
As shown in [118], out of eight experiments performed, the subjects in seven
cases managed to synchronize into a short term speculative bubble/negative-bubble
state, regardless of the length of the given history. It was not entirely obvious a
priori that the human subjects would perform according to anticipations suggested
by game theory, so we consider this as an initial sign of encouragement that one
can actually gain insight into human decision-making from such simple market
games. The fact that the results of the market game simulations and the dynamics of
human groups making investment decisions are very similar may indicate that the
complex dynamics observed on the level of human groups is an emergent group-
level consequence of simple rules of individual decision-making, rather than a
manifestation of very complex rules of individual decision-making.
Interestingly, in six cases the human participants synchronized to create bubbles,
and in one case, they were able to create a negative bubble. There is no game-
theoretical explanation for such an asymmetry between the creation of bubbles as
opposed to negative bubbles, so the question here is whether a ‘social norm’ might
play a role, since completely independently of the setting of the experiments, human
subjects implicitly expect (for historical reasons) a stock market to be something
that grows over time. For a more detailed discussion about the asymmetry between
growing and decaying markets, see the next chapter. The observed asymmetry is
very likely the result of individual beliefs concerning the way one makes profits in
financial markets. Although profits can be made either when prices are increasing
or when they are decreasing, profit-making is generally associated with financial
booms rather than financial collapse. More generally, it is safe to say that these
results confirm the hypothesis that subjects are capable of coordinating to achieve a
market behavior that is optimal for their own gains, i.e., a monotonic series of either
constant buying or constant selling.
As a next step, one can attempt to establish the relevance of decoupling for the
emergence of bubbles/negative bubbles in real laboratory experiments. To achieve
this goal, we first carry out the experiment in which the human subjects trade
according to the setup of the $G as described above. We then use the price histories
generated by the human subjects as input to $G Monte Carlo (MC) simulations. That
is, instead of letting the agents’ own actions generate the price history, as in ordinary
simulations of the game, the agents are fed the price history generated by the
humans. The human price history is thus used to determine which of the agents’
strategies are optimal. In this way, one can consider the actions of the agents in the
MC computer simulations as slaved to the price history of the humans, which also
gives us a handle on how to define decoupling, namely by looking at the decoupling
of the agents’ strategies in the computer simulations.
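To indicate how this slaving can be implemented, the following Python sketch (in the same spirit as the one in the previous section, and again with all population sizes and scoring details being our own illustrative assumptions) scores a population of random strategies against an externally supplied sequence of price moves and records, at each step, the fraction of agents whose currently optimal strategy is decoupled, i.e., prescribes the same action whichever way the next price move goes.

import numpy as np

def decoupled_fraction(moves, N=501, s=10, m=3, seed=1):
    """Slave $G agents to an external sequence of +/-1 price moves (e.g.
    generated by human subjects). For each time step, return the fraction
    of agents whose best strategy is decoupled along the up direction and
    along the down direction, respectively."""
    rng = np.random.default_rng(seed)
    tables = rng.choice([-1, 1], size=(N, s, 2 ** m))
    scores = np.zeros((N, s))
    hist, prev_hist = 0, None
    frac_up, frac_down = [], []
    for move in moves:                        # move = +1 or -1, from the humans
        best = scores.argmax(axis=1)
        h_up = ((hist << 1) | 1) % (2 ** m)   # history if the next move is up
        h_dn = (hist << 1) % (2 ** m)         # history if the next move is down
        a_up = tables[np.arange(N), best, h_up]
        a_dn = tables[np.arange(N), best, h_dn]
        decoupled = a_up == a_dn              # same prescribed action either way
        frac_up.append(float(np.mean(decoupled & (a_up == 1))))
        frac_down.append(float(np.mean(decoupled & (a_up == -1))))
        # only now is the human move revealed: update the strategy scores
        # ($-game payoff) and the bit-string history
        if prev_hist is not None:
            scores += tables[:, :, prev_hist] * move
        prev_hist = hist
        hist = ((hist << 1) | (move > 0)) % (2 ** m)
    return frac_up, frac_down

A rise of one of the two fractions above 50 % before $t_b + m$ is then the signature of decoupling along the corresponding direction, matching the criterion used below.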
In [118], it was found that, in four out of seven experiments in which the
subjects were synchronized (that is, 57 % of the experiments), the creation of the
bubble/negative bubble could be explained by decoupling, i.e., by having more
than 50 % of agents choose to use strategies that are decoupled along the direction
of the bubble/negative bubble. It should be noted that this result is consistent
with simulations where the price was the result of artificial agent behaviors. As
mentioned above, in those cases, decoupling was responsible for synchronization in
around 54–57 % of cases.
Figure 5.5 gives an illustration of the price histories generated by humans in three
experiments. The circles illustrate the evolution of the price. In all three experiments
the price was normalized to unity at the beginning of the experiment. The first two
plots give examples of the creation of a bubble, while the last plot illustrates a
negative bubble. The time axis $t - t_b$ was chosen so that the onset of the bubble/negative
bubble happened at time zero. In all three cases, the dashed vertical lines show
the time $t_b + m$ when the presence of a bubble/negative bubble becomes evident
from the price history itself. This is the time when the last $m$ directions of the price
movements had the same sign (positive for up and negative for down). The solid
lines illustrate the percentage of optimal decoupled strategies along the direction of
the bubble (first two plots) or negative bubble (last plot), whereas the dotted lines
are the optimal decoupled strategies along the opposite direction.

Fig. 5.5 Speculative biases in price movements (indicated by circles) in experiments with human
subjects. The onset of speculative bias was subsequently detected in Monte Carlo simulations with
agents of the $-game trading on the price data generated by humans. The rather sharp transition in
the splitting of the solid and dotted lines over time (for definitions of the different lines, see Fig. 5.3)
can be used to mark the onset of the speculative bias before it is visible in the price history. Vertical
axis: price (circles) and percentage of decoupled strategies (lines); horizontal axis: time relative to
the bubble onset $t_b$. The memory lengths used in the experiments with human subjects were
$m = 3$ (left and middle plots) and $m = 6$ (right plot)
For experiments in which synchronization is due to decoupling, one can see a
clear split, before $t_b + m$, between the percentage of strategies decoupled along the
bubble direction and the percentage decoupled against it. As seen in Fig. 5.5a, even when
a bubble is created very rapidly (with small $m = 3$), we see such a split. However, as
expected, this split becomes clearer over a longer time period for the longer memory
$m = 6$, as seen in Fig. 5.5c. In this case the subjects traded in a descending market
over a longer period before the final synchronization occurred. Such a condition
resembles features seen in real markets, with a typical run up or down before the
first stages of a bubble/negative bubble set in. This should give us confidence in
applying our method to real market data.
Interestingly, in one experiment the subjects did not manage to find the optimal
state of synchronization (see Fig. 5.6). While creating this market, the subjects
could not find the optimal solution because their dominant strategy, which could
be described as a ‘return to the mean’, did not allow it. This strategy simply dictates
that, when the price keeps increasing over (approximately) 5–6 time steps in a row,
start selling, and when the price keeps decreasing over (approximately) 3–4 time
steps in a row, start buying.

Fig. 5.6 Illustration of a biased price evolution (indicated by circles) leading to moments of
predictability in an experiment with human subjects. This is the only experiment in which a clear
bubble/negative bubble was not created. The price data generated in this experiment were used as
input to agent-based Monte Carlo (MC) simulations of the $G. Dashed lines indicate the percentage
of optimal strategies in the Monte Carlo simulation decoupled along the direction of the price
increase, whereas dotted lines indicate the percentage of optimal strategies against the direction of
the price increase. The clear peaks in the dashed lines predict a price increase before it can be seen
in the experiment. Vertical axis: decoupling (lines) and cumulative return (circles); horizontal axis:
time step $t$.

Such a strategy prevents synchronization into a bubble
state and shows that different solutions to the game can be reached depending on
the set of strategies available to the player population [100]. It is remarkable that
decoupling was still found at certain moments in time with very high confidence.
Applying Monte Carlo simulations and using the moments at which decoupling
occurred at a 98 % rate or higher (at those moments, only 2 games out of 100
would not show decoupling), a stunning 87 % success rate was found for predicting
a unit move of the market two time steps in advance. It is important to note that no
free parameters were fitted in making such predictions, which were fully out-of-sample results.
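As an indication of how such a threshold rule might look in code, the sketch below builds on the hypothetical decoupled_fraction routine given earlier: it re-runs the slaved simulation with many independent random strategy draws and issues a directional prediction two time steps ahead whenever at least 98 % of the runs agree on a decoupled direction. The way we aggregate across runs and the 50 % per-run criterion are our own guesses at the procedure; [118] should be consulted for the exact protocol.

import numpy as np

def predict_two_ahead(moves, n_games=100, thresh=0.98, **kw):
    """Out-of-sample threshold rule: a single game is called 'decoupled up'
    ('down') at time t when more than half of its agents' optimal strategies
    are decoupled along that direction; a unit move two steps ahead is
    predicted when at least `thresh` of the games agree."""
    T = len(moves)
    up_votes = np.zeros(T)
    dn_votes = np.zeros(T)
    for k in range(n_games):
        f_up, f_dn = decoupled_fraction(moves, seed=k, **kw)
        up_votes += np.array(f_up) > 0.5
        dn_votes += np.array(f_dn) > 0.5
    predictions = {}
    for t in range(T - 2):
        if up_votes[t] / n_games >= thresh:
            predictions[t + 2] = +1      # predict an up move at t + 2
        elif dn_votes[t] / n_games >= thresh:
            predictions[t + 2] = -1      # predict a down move at t + 2
    return predictions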

5.7.3 Experimental Manipulation

Our main hypothesis is that speculative moments can be triggered by the emer-
gence of coordination between investors. This coordination is associated with the
collective decoupling of investors’ strategies: the strategy set enters and remains
as a whole in a state that is independent of the price movement of the following
time step. This can be compared with a magnetic phase transition in which,
once the elementary magnets are aligned at very low temperatures, the total
magnetization can no longer be affected by exogenous or endogenous fluctuations.
In the neighborhood of the phase transition, small (even microscopic) changes in
the external or internal parameters can switch the macroscopic system between
macroscopically orthogonal states (up-magnetized, down-magnetized, or split).
To investigate this possibility, experiments were performed in which a false
feedback was introduced once the subjects had reached the speculative state. The
introduction of a false feedback meant that subjects saw a response opposite to what
should have resulted from their actions. If the subjects were truly decoupled, false
feedback should not influence their actions, i.e., they should continue to buy (sell)
one time step ahead regardless of the change.
As expected, in two out of four experiments in which a false feedback was
introduced, subjects did not react to the price change following the manipulation.
In these two experiments, subjects entered a clear decoupled state before being
manipulated, and the percentages of agents that chose to use strategies that were
decoupled along the direction of the bubble/crash were 75 and 65 %. In the two
remaining experiments, one step before the first manipulation was introduced,
subjects were in a state diagnosed by the MC simulations as a state with 48 or
50 % of decoupled strategies predicting up and the remaining strategies coupled and
predicting down. In such a state, one should expect the subjects to be undecided
and prone to change. This assumption was confirmed by another experiment, in
which subjects reacted to disconfirming information and switched to selling instead
of continuing to buy.

5.8 Summary

Let us summarize this chapter. We have produced an analytical description of how
agent-based simulations can capture, explain, and predict the market ‘moods’ that
underlie speculative moments in financial markets:
• Applying the method to computer simulations, we illustrated how short term
bubbles/negative bubbles can be diagnosed in terms of two sets of decoupled
strategies with opposite actions (buy or sell). This analysis enabled us to clearly
predict a bubble/negative bubble before it took place in the simulations.
• Applying the same method to trading data generated by subjects, we illustrated
how the market moods of the subjects can be captured and used to correctly
predict which action they will take and also when they have entered a state
leading to a bubble/negative bubble.
We showed that the process of cognitive closure is one of the primary driving factors
in the creation of market bubbles and negative bubbles:
• As long as the majority of investors are reacting to incoming information, the
market dynamics is unpredictable.
• If, however, a sufficiently high proportion of investors make a decision about
how the direction of the market will evolve regardless of what happens next, the
market may become temporarily predictable, because the investors are in fact
locked into their decisions, and these decisions are temporarily decoupled from
information concerning the market.
Since the purpose of our study is to consider conditions leading to the creation of
bubbles/negative bubbles, it is important to note that the agents in the computer
simulations as well as the subjects in the laboratory experiments had access to an
unlimited amount of cash (in case they wanted to go long) and stocks (in case
they wanted to go short). Placing constraints on the amount of cash and/or stocks
would naturally cause bubbles/negative bubbles to burst, something that would be
an interesting study in itself.
6 Social Framing Creating Bull Markets of the Past: Growth Theory of Financial Markets

6.1 Introduction

Could it be that the long term growth of the stock markets that we saw over the last
century was just the reflection of some kind of social consensus, arising because
people are subject to social framing, in this case the belief that the stock market
was a good place to invest their money because, in their experience, the market
would in the long term always go up – even after correcting for inflation? If so,
could it also be that something has changed compared to the last century, with
several factors now challenging such a notion? In short, this will be the central
issue in this chapter.
If we take as evidence the so-called equity premium puzzle, i.e., the empirical
observation that, for the US markets, the returns on stocks were much higher than
those on government bonds in every decade of the last century, then something
special does indeed seem to have happened on the stock markets. Financial markets
are not, however, isolated entities that can be put under a ‘market microscope’ and
studied without any interference, at least not when we consider very long time
scales, as we shall in this chapter. Looking at long time scales, such as years or even
decades, the link between the financial markets and the economy obviously begins
to play a dominant role in their performance. However, this link, which partly goes
under the name of the wealth effect and which we will explain further below, is not
a clear-cut relationship. Before going into the details of how to take this link into
account, we introduce the idea that a market can be in a certain ‘state’ that one can
think of as a metaphor for social framing.

6.2 The State of the Market

In order to introduce the reader to the notion that a given system can be in a ‘state’
which is either symmetric, or has its symmetry broken, let us begin with a simple
experiment that is easy to perform. Take a pen, place it upright on a table with

Fig. 6.1 Symmetry breaking for a falling pen. (a) The pen is held vertically on a table, with
no other constraints. (b) When the pen falls, it chooses an arbitrary direction. (c) The pen is held
vertically between two bricks. (d) When the pen falls, only two directions are available to it. Market
forces ensure that the pen falls in one direction (the direction indicating the bull market). Drawings
made by Lisa Tognon

your fingertip supporting it at an angle as close to 90° to the table as possible
(see Fig. 6.1a). Then gently remove your fingertip and observe the pen fall to the
table in an a priori random direction (Fig. 6.1b). If you redo the experiment the pen
will probably fall in a different direction, maybe because the pen was held slightly
differently in your second try. When the pen falls, the pull of gravity takes it into
one out of infinitely many directions.
In physics, such a phenomenon is called symmetry breaking, because a certain
symmetry is broken compared with the initial state of the pen. That initial state is
symmetric in the sense that each direction for the final state of the pen is equally
probable. Once the symmetry is broken, the pen remains in that given state, i.e.,
one does not observe the pen first fall in one direction then flip back up to choose
another direction. To get out of a state of broken symmetry, a force is required, in
this case to lift the pen back up to its symmetric vertical direction.
For many systems, symmetry breaking is related to a phase transition. Take
for example the case of the iron bar we discussed in the last chapter. At low
temperatures, it is in a magnetic phase. When heated above the so-called Curie
temperature $T_C$, the bar transforms into another state (or phase) where it loses its
magnetization. In that case the low temperature phase (the magnetic phase) is said
to be in a state of broken symmetry since the atomic spins (which can be thought of
as miniature magnets) all have a fixed direction, just like the falling pen. The fact that
for low temperatures all (or rather, almost all) atomic spins of the iron bar point in
the same direction is what creates the macroscopic magnetization. On the other hand
for high temperatures – the case without macroscopic magnetization – the material
is in a symmetric state, since overall the magnetic spins of the atoms point in random
directions.
At the temperature $T_C$, the state of the bar is right at the borderline between a
symmetric state and a state with a broken symmetry. This is when something special
happens. The fluctuations of the orientations of the atomic magnetic spins become
enormous, correlations become long-ranged, and the susceptibility of the material to
magnetization increases. In fact, it is right at the borderline between an ordered and a
disordered state. In the next chapter we will describe other systems which somehow
organize into a similar state right at the tipping point between order and chaos, with
the difference that their dynamics somehow attracts the system into such a critical
state on its own. Such systems do not need anything like temperature changes to
enter a critical state, whence the term self-organized criticality to describe such
cases.
There is a growing number of researchers, in particular with a background in
physics, who view financial markets as being right at the borderline between order
and chaos. The following description can be seen as another argument for such a
viewpoint. But let us first recapitulate. In the case of the magnetic iron bar, it is the
temperature that changes the material from a symmetric state (high temperatures)
to a state with a broken symmetry (low temperatures). For the falling pen, it was
the gravitational force (perhaps initiated by a small movement of your finger) that
caused the pen to move from a symmetric state to a state of broken symmetry.
Now consider a similar experiment, but placing the pen between two bricks as
shown in Fig. 6.1c. Instead of a continuous symmetry, the final state of the pen is
given by one of two states: it can fall to the right or to the left. Let us use this picture
to discuss the topic of this chapter, namely the sustainability of long term growth
of stock markets. We shall try to take seriously the viewpoint and intuition of many
practitioners, according to which the state of a stock market is either a bull or a bear
market. Let us take the position of the pen as a picture representing the state of a
given stock market at a given instant of time. Gravitational forces would then give
a very poor description of a market, since that force would make the pen fall all
the way down either to the right or the left – it would never get stuck in-between.
These two extremes would correspond to heaven or hell on earth since the state of
the market would be either a pure bull or a pure bear market, going only one way
every day: up or down, depending on its state.
Needless to say, reality is more complicated. A better representation of the state
of, say, the US stock market would therefore be to place the pen as in Fig. 6.1d
which is in-between the two extremes, but firmly to the right, indicating that we are
presently in a bull market state. The further to the right the pen, the more bullish
the market. As long as the pen is not completely tilted to the right, there is still
the possibility of a temporary bear market phase. Just as one can encounter water
droplets in the vapour phase of water, so a bullish market state can contain
periods with bearish market characteristics. When we characterize a market
as bullish, it simply means that the probability of finding the market in an increasing
mode is greater than the probability of finding it going down.
Here and in the remainder of this chapter, we will use a long term perspective. In
other words, we shall typically talk about stock market behavior over many years,
if not decades in time. Moving the pen to the left from the position indicated in
Fig. 6.1d would then characterize a market becoming gradually less and less bullish
and more and more bearish. This would continue until the pen was directly in
the middle, in which case the market could be characterized as neither bullish nor
bearish, but somewhere in-between. At that point one should encounter rollercoaster
markets with sky-rocketing volatility, since the market is right at the borderline
between a bullish and a bearish state.
The claim in the following is that, so far in human history, we have only
experienced a true long term bull market, but with intermittent bear markets. That
would correspond to the pen being in the position shown in Fig. 6.1d. Notice that
in this part of the figure, we have replaced the gravitational force by market forces.
The topic of this chapter will be to describe exactly what market forces hold the
market in check.
Let us suggest at the outset that the presence of a majority of market players
belonging to long-only institutions seems to have played a crucial role in keeping the
market in a long term bullish state of the kind symbolised in Fig. 6.1d. Throughout
history, there has been a strong bias, with the majority of market players taking only
long positions in the stock markets and an almost complete absence of short-selling
market participants. We shall argue in the next few sections that this has led to a
broken symmetry, similar to what happens when a small initial hesitation of your
fingertip introduces a directional bias, after which gravity pulls the pen down onto
the table.
However, in order to see more clearly how such a bias comes about in the context
of stock markets, we need to discuss the so-called wealth effect, which describes
feedback between the stock market and the real economy. The combination of the
bias of long-only players and the wealth effect is then argued to create the observed
long term growth of the stock market. The most dominant market player is of course
the Federal Reserve. As one definition of the wealth effect, and also to stress the
importance of the relation between the performance of the stock market and the
economy, consider the following quote from Mr. Bernanke in The Economist [21]:
Higher stock prices will boost consumer wealth and help increase confidence, which can
also spur spending. Increased spending will lead to higher incomes and profits that, in a
virtuous circle, will further support economic expansion.

One finds the same philosophy behind the so-called Greenspan put, which is the
idea that the central bank stands as the insurer of the stock market. It gives yet
another reason for the claim that, at least when it comes to US equity markets, we
have only experienced a true long term bullish market state. However, as we shall
see, the pendulum (or rather the pen) may be swinging back right now due to the
appearance of hedge funds, a relatively new phenomenon whose impact has only
begun to be felt since the end of the last century.
6.3 Long Term Growth of Financial Markets

Growth theory in the field of economics has a long history, going back to the
classic growth theory formulated in the eighteenth century by David Hume and
Adam Smith who were trying to understand how productive capacity could lead to
the wealth of nations. In the twentieth century, Joseph Schumpeter introduced the
idea that entrepreneurship could force creative destruction in industries by creating
new products and business models, thereby ensuring a dynamics that would lead
to long term economic growth. Some of the first analytical attempts to understand
long term economic growth were made in the 1950s by Robert Solow and Trevor
Swan, looking at the relationship between investment, labor time, capital goods, and
output. The idea of using growth theory to try to understand the long term outlook
is therefore familiar to economists, but interestingly enough, very little research has
been done on the financial markets themselves to understand the foundations of
market growth.
Taking the most basic view, all one would need to ensure that financial markets
keep on going up is money – lots of money. Where the money should come from is
a different story, to which we shall return, but in principle as long as enough money
flows into the markets, the volume of buyer-initiated transactions will exceed
the volume of seller-initiated transactions and the markets will continue to go up.
At a short time scale, this is not the situation in real markets, which tend to look
more like a random walk. But as we go to longer and longer time scales, a pattern
appears. The dashed line in Fig. 6.2 shows the S&P 500 index over the last 140 years.
Looking at the market over this time span, there is clearly only one direction: up!
One reason for markets not behaving like a random walk at such long time scales,
and instead showing a clear upward drift, is simply inflation. So to keep up with the
natural tendency of money to lose value over time, financial markets have to grow
at least at the same pace as inflation.
Still this does not explain the upward drift of the S&P 500 index in Fig. 6.2. If
one compensates for inflation, as shown by the solid line in Fig. 6.2, the ‘real’ value
of the S&P 500 index still shows a clear upward drift when we look at the trend
over these long time scales. The most natural interpretation for the upward drift in
the ‘real’ value of the S&P 500 index is the great economic expansion in the US
over the last one and a half centuries. Still, the clear upward drift we see does raise
some questions.
A natural question is whether investing in stocks was really as profitable as the
steady increase in Fig. 6.2 seems to indicate. Mehra and Prescott [95] answered
that question in 1985 by showing how investment in stocks would give an average
annual return (coming from increasing stock prices as shown in Fig. 6.2, as well as
from payment of dividends, which are not shown in Fig. 6.2) in clear excess of the
return from US bonds – in fact, by several percent, depending on the time period
chosen. They referred to such an excess as the equity premium puzzle. Even though
this effect is called a puzzle in academic circles, it is taken as a matter of fact in the
banking culture. You just have to ask your local bank advisor about the best way to
invest your retirement funds and you will almost certainly get the standard answer,
recommending a portfolio with a large ratio (say two or three) of stocks to bonds.

Fig. 6.2 Stock market performance index since 1871. The dashed line is the S&P 500 index, while
the solid line represents the same data, but corrected for inflation. The lower plot shows the same
data with a logarithmic scale on the ordinate
This strategy worked perfectly in the past century. But before giving in to something
which appears rather like magic, and despite the fact that it has always worked in
the past, it would be reassuring to have a clearer idea of why it worked, and also
to know whether anything has changed compared to the past. In the following, we
shall suggest answers to both questions.
The idea is to see what it takes in terms of cash to ensure the long term stable
growth of a given stock market. Money entering the stock markets is the basic
ingredient for further growth. Without new money being invested, the growth of
the market stalls. So in the following we will use cash availability to pinpoint the
condition for growth. In order to understand the problem of how to keep a stock
market growing, however, one has to look beyond the stock market and take into
account the state of the whole economy. This is where things become interesting,
but also very complicated! What is the relationship between the performance of the
stock market and the performance of the economy itself? It is hard to give a very
precise answer, but as we mentioned in the introduction to this chapter, the fact that
central banks firmly believe there is a relationship is in itself an argument for taking
it seriously. At the same time, it is also an argument for trying to understand things
in more qualitative terms. The essence of the Greenspan put is that, as long as we
can avoid big market losses and keep the stock market going, this in turn will keep
the economy going. But is that true?
It should be noted that the Greenspan put has been in place now for quite an
extended period of time, over which both the economy and the stock markets have
experienced solid growth. In general, it refers to the monetary policy of the United
States Federal Reserve which has pumped liquidity back into the market after every
major crisis since the 1980s, including the 1987 stock market crash, the burst of the
internet bubble in 2000, and more recently, the 2008 credit risk crisis. During each
such crisis, the Fed has among other things lowered the Fed Funds rate, which is the
interest rate that banks charge each other for loans. The effect of this is an increase
of liquidity, part of which then goes back into the markets.
Interestingly, our findings in the following will show that, given sufficient
liquidity, long term stable growth of the stock market and the economy is possible,
whereas insufficient liquidity makes the markets stall. This could therefore be taken
as an argument in favour of the Greenspan put. However, as we shall see, there
is an underlying assumption here that the markets have a bias, or to use the term
introduced in the last section, that they have broken symmetry, with the majority
of investors being long on the markets, aiming for long term profit from their
investment. As soon as this symmetry begins to be restored by introducing more
and more short market participants into the pool of investors, then market growth
will stall, or worse, decline. Perversely, we also find a bizarre situation where, in an
increasing pool of investors with a negative view on the market, the Greenspan put
can actually increase the rate at which the market declines. This is something that
is clearly not intended by policy-makers running the Federal Reserve.
In order to see how long term growth of the stock market can come about, let us
try to write down a balance equation for the cash available to a pool of archetypal
investors in the last century. Here ‘pool of investors’ should be taken in the broad
sense, but what we have in mind is the balance sheet for a typical pension fund or
major bank investor, making investments over the long term. Is it possible for such a
pool of archetypal investors, by their aggregate action, to keep on pushing the stock
market up to their own (long term) benefit? If so, we would thereby also get one
possible explanation for the long term growth of the markets observed over the last
century. Let us see.
We first write down the balance equation for the cash available to such a pool of
investors, assuming constant market growth. We will then check afterwards whether
such a constant growth is in fact possible. We first give the equation, then discuss
the different terms (for a formal description and solution of the problem, see the
Appendix at the end of this chapter):

change in cash per time unit
= income from investment of cash at interest rate $r(t)$
+ income from dividends $d(t)$ of a growing number of shares $n(t)$
+ cash flow from other sources
− cost of buying new shares at a given stock price $P(t)$.

This equation accounts for the expenditures as well as the income an investor would
have in building a lifelong portfolio of stocks, buying new shares at a steady rate and
thereby ensuring constant growth of the stock market. Expenditures include buying
new shares at an increasing price as the market grows. Income includes dividend
payments from an increasing reserve of shares, as well as interest payments from
the part of the cash not placed in stocks. The term ‘cash flow from other sources’
is just meant to describe all additional inflow/outflow of money into the pool of
investors. We would like to determine the rate $\alpha$ at which the pool of investors can
ensure market growth by buying new shares.
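For readers who prefer symbols, one compact way of writing this balance equation, broadly consistent with the formal treatment in the Appendix referred to above (whose exact notation may differ), is

$$\frac{dC}{dt} = r(t)\,C(t) + d(t)\,n(t) + F(t) - P(t)\,\frac{dn}{dt}\,, \qquad P(t) = P_0\,e^{\alpha t}\,,$$

where $F(t)$ is our label for the ‘cash flow from other sources’ term, $dn/dt$ is the rate at which new shares are bought, and the second relation expresses the assumed constant market growth at rate $\alpha$.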
Let us first discuss the simplest case, leaving out the term ‘cash flow from other
sources’. We will consider money inflow from ‘other sources’ in the next section in
order to try to estimate the relative level of investment in a market, but first we should
try to understand how long term market growth can come about. The argument for
the Federal Reserve to lower the Fed Funds rate was that lower interest rates would
ease access to money, part of which should find its way into the stock market and
result in rising prices there. This in turn is assumed to lead to higher consumer
confidence and wealth, thereby also triggering higher consumer spending. Higher
consumer spending then means higher earnings for companies, thereby giving rise to
higher dividends for the stock of a given company. As a measure of the wealth effect,
we therefore take as proxy the extent to which dividends follow the market price.
Using just one variable, i.e., dividends, to gauge the general state of the economy
is clearly a very rough description of something as complex as the economy as a
whole. Still, since we are interested here in the impact of the economy on the stock
market and vice versa, dividends seem to be the relevant variable linking the two.
In the scenario we described above, it turns out that the crucial variable
concerning market growth is precisely the extent to which dividends follow the
price of the market. Using the balance equation, it was shown in [143] how market
growth at any rate higher than or equal to the interest rate was impossible if the
dividends did not follow the evolution of the price. On the other hand, for the case
where dividends were an increasing function of the market price, super-interest rate
growth was indeed found to be possible, i.e., faster than the rate given by the interest
rate. Therefore, the first thing we need to understand is what the empirical data have
to tell us about the way dividends perform as a function of the stock market price.

Fig. 6.3 Dividend as a function of the S&P 500 index value over the period from 1 January 1871 to
31 December 2009 (data taken from www.econ.yale.edu/~shiller/data.htm). Solid lines represent
linear and square-root growth of dividends versus the index value
Figure 6.3 shows some empirical data clarifying this issue for the US market.
The figure shows how dividends performed as a function of the S&P 500 index over
the last century and a half. As can be seen, there was a clear linear relationship
between price and dividends from the very beginning of the time period (small
prices implying small dividends) up to index values of a few hundred,
corresponding to the end of the 1980s and the beginning of the
1990s, where a sublinear relationship between dividends and prices took over.
Considering the analysis of our balance equation, such a change in behavior is of
immense importance, as we shall discuss in the following. We shall argue that some
fundamental change occurred in the markets to make the crossover from a linear to
a sublinear relationship between dividends and prices, and we shall give our own
suggestion as to what made the markets change. First, however, let us take a look at
a graphical solution of the cash balance equation given above to see how constant
growth can come about.
Fig. 6.4 Will a pool of investors have enough money to push the stock market up at a given growth
rate $\alpha$? The thick solid line (curve a) shows the price of the stock market as a function of time. This
straight line in the linear-logarithmic plot means that the price grows exponentially with time $t$.
The growth corresponds to an annualized growth rate of $\alpha = 20\,\%$. The dotted line (curve b) shows
the evolution of the cash $C(t)$ for the case $\alpha = 0.2$, $r = 0.1$, $C_0 = 10$, and $d = 0.02$, where
$C_0$ is the initial amount of cash available to the investors and $d$ is the dividend which is assumed,
through the wealth effect, to follow the price evolution (for a full definition of the model, see the
Appendix to this chapter). Curve c is similar to curve b, but with $d = 0.08$. Curve d is similar to
curve c, but with $C_0 = 5$

Figure 6.4 shows some formal solutions to the balance equation, assuming the
linear relationship between dividends and prices, as was indeed observed over the
first century of data shown in Fig. 6.3. The solutions illustrate the evolution of the
investors’ cash supply (not their total wealth, which includes another part coming
from the value of their shareholding) and its dependence on different variables,
since we would like to know whether continued market growth is possible given the
amount of money available to buy new shares. The thick solid line (a) represents
the price of the market assuming a constant market growth rate $\alpha$. The straight line
for the price function in the linear-logarithmic plot means that the price grows
exponentially with time $t$.
Note that this is not the solution of our balance equation for the cash of the
investors given above. It is the growth of the stock market whose sustainability
we would like to probe, given the investors’ cash solution. The cash solutions
themselves are shown as the three thin lines illustrating different growth conditions
for the market. In order to find out whether each given condition can enable the
chosen market growth, i.e., the thick solid line, the cash solution at any time has to
be above the thick solid line. If it is not, this means that there is no more money
available to keep on buying shares, and consequently the growth is not sustainable
and must stop.
Curve b shows the money available in the case with initial dividends at 2 %. The
fact that the dotted line goes below the value of the market (after approximately
3,500 days) means that the solution is not stable and market growth will stop since
there is not enough money available to buy new shares. If the initial dividend is
instead increased to 8 %, as shown by curve c, the investors have enough money to ensure continued
market growth, since this curve remains at all times above the market price needed
to buy new shares, ensuring that there is always sufficient cash to keep on buying
shares and thereby push the market up at the given rate. Finally, with the same initial
dividends of 8 % but a smaller initial amount of money available to the investors, as
shown in curve d, they are no longer able to sustain the growth of the market. This is
shown by the fact that the money available to them falls below the market price after
approximately 2,800 days. It therefore seems important to have sufficient liquidity
to ensure stable long term growth of the stock market, and likewise of the economy.
This follows because of the steady increase in the dividends as time goes by, since
we have assumed a linear relationship between price and dividends.
Since the solutions in Fig. 6.4 only illustrate the evolution of the cash part of
the wealth, even a significant drop in the share price at the end of the time period
considered, caused, say, by a large number of investors deciding to sell all their shares, would
still have made the investments over the period highly lucrative for case c. This is
because of the wealth effect which, through increasing dividend payments, ensures
a high amount of cash holding at the end of the given time period.
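As a quick way to explore such solutions numerically, the short Python sketch below integrates the cash balance equation under the same assumptions as in Fig. 6.4 (dividends proportional to price, shares bought at a constant rate, exponentially growing price) and reports when, if ever, the cash falls below the price. The purchase rate, the initial price, and the time unit are illustrative choices of ours, so the output is qualitative and not a reproduction of the figure.

import numpy as np

def cash_path(alpha=0.2, r=0.1, C0=10.0, d=0.02,
              P0=1.0, buy_rate=0.1, dt=0.01, T=40.0):
    """Euler integration of dC/dt = r*C + d*P(t)*n(t) - P(t)*dn/dt,
    with P(t) = P0*exp(alpha*t) (constant market growth), dividends
    proportional to price (the linear wealth-effect case), and shares
    bought at the constant rate dn/dt = buy_rate. Returns the first time
    at which C(t) < P(t), or None if the cash never runs out."""
    steps = int(T / dt)
    t = np.arange(steps) * dt
    P = P0 * np.exp(alpha * t)
    C = np.empty(steps)
    C[0], n = C0, 0.0
    for k in range(steps - 1):
        dC = r * C[k] + d * P[k] * n - P[k] * buy_rate
        C[k + 1] = C[k] + dC * dt
        n += buy_rate * dt
    below = np.flatnonzero(C < P)
    return t[below[0]] if below.size else None

# compare the situations of curves b, c, and d of Fig. 6.4
for C0, d in ((10.0, 0.02), (10.0, 0.08), (5.0, 0.08)):
    print(C0, d, cash_path(C0=C0, d=d))

With these (assumed) parameter values, the low-dividend case and the low-initial-cash case both see the cash fall below the price after a finite time, while the well-capitalized high-dividend case sustains the growth throughout, qualitatively reproducing the behavior of curves b, c, and d.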
Now similar considerations can be applied to the more morbid case of a pool of
investors who are negative on the market and try to keep pushing it down to their
own benefit. Such a pool of investors with a negative view on the market would
correspond to having a pool of short-sellers [29, 110]. Using the terminology of
Sect. 6.2, this would correspond to a group of investors breaking the symmetry of
the market, but downward, i.e., the long term bear market case. In the history of the stock
market, we have never experienced this case. Indeed, looking at Fig. 6.2 for the US
market, we have never seen several decades of long term decline of the stock market.
This is certainly positive, and should be in the interests of all. But just because we
have never seen such a long term downward trend, does not mean that it would be
impossible. By studying this hypothetical scenario, we could shed some light on
whether it could actually happen, and at the very least, understand the mechanisms
that could lead to such a case, and perhaps even prevent it, should it ever be seen to
unfold.
In terms of our balance equation for the growth of the stock market, such a
doomsday scenario can now be analyzed in a similar way, but by taking the growth
rate $\alpha$ and the dividends to be negative. The dividends should be taken negative,
since a short-selling investor has to pay dividends to the owner from whom the
stock was borrowed. Then instead of the condition $C(t) > P(t)$, which says that
there should at any instant of time be enough money to keep on buying new shares,
one would have the condition $W(t) > |n|P(t)$ for the short-selling case, i.e., the
wealth of a typical short-selling investor should be sufficient for an investor to be
Fig. 6.5 Will a pool of investors have enough money to push the stock market down at a given
rate $\alpha$? (a) Price $P(t)$ as a function of time $t$ for $\alpha = -0.2$. (b) $|n|P(t)$ for the case
$\alpha = -0.2$, $r = 0.1$, $C_0 = 10$, and $d = -0.08$. (c) Wealth $W(t)$ as a solution of (6.1) and (6.7)
for the parameters of curve b. (d) Same as curve b except for the signs of $\alpha$ and $d$, i.e.,
$\alpha = 0.2$ and $d = 0.08$

able to buy back at any time all the shares that were borrowed (and sold short) at an
earlier time.
Figure 6.5 illustrates the solution of the cash equation for this case. Notice that
the thick solid line (a) representing the market price is now tilted downwards,
indicating a negative long term growth of the market. Curve b in Fig. 6.5 illustrates
jnjP .t/ instead of the cash. As mentioned above, we should compare this term to
the wealth in order to know whether it would be possible for the pool of investors
to keep on pushing the market down to their own profit. Curve c shows the wealth.
Since curve c lies above curve b at any time, this shows that it is indeed feasible
for the group of short-selling investors to keep pushing the markets down for their
own profit. Interestingly, if we now compare the wealth of the pool of short-selling
investors to a pool of investors who instead decide to go long (changing the signs on
the growth rate ˛ and the dividends d ) by comparing curves c and d, we see that it is
slightly more advantageous to push the market up, instead of trying to push it down.

6.4 How Big Is the Investment Level of a Given Stock Market?

In the last section, we assembled ideas about which ingredients are important for
the growth of a stock market: enough liquidity, a pool of biased investors being
long on the market, and finally the importance of the wealth effect, ensuring that
dividends follow the price of the market. The approach we presented was self-
consistent in the sense that, given the factors mentioned above, we just looked at
whether it would be possible to observe a certain given growth. There was no need
for any external sources of money to help sustain the growth, and nor was it assumed
that money would be taken out of the pool required to ensure market growth. In that
sense, what we considered was a closed system. Real markets are obviously much
more complicated, with investors who occasionally take their profit/loss in the stock
market and invest it elsewhere, e.g., in commodities or the housing market, which
might look like more opportune investment choices at the time. This observation
demands a description where the stock market is considered as an open system in
which the sources of funding available for investment in the stock market vary over
time.
Since the level of money available to be invested in the stock market at any given
time depends on the decision-making of each and every market participant, this is
a quantity that is hard to estimate reliably. The best method here may be to make
a statistical estimate, and one way to do this is to look at the total money supply
available in the economy at a particular moment in time. This is a well defined
quantity, since weekly statistics are released by the Federal Reserve (see, e.g., their
website www.federalreserve.gov/releases/h6/about.htm). It comes in four different
categories, denoted $M_i$ with $i = 0, 1, 2, 3$. The $M_i$ are like a system of Russian
dolls, with $M_0$ contained in $M_1$, $M_1$ contained in $M_2$, and $M_2$ contained in $M_3$.
$M_0$ denotes the total of all physical money (i.e., bank notes) plus accounts at the
central bank that can be exchanged for physical money, $M_1$ denotes $M_0$ plus the
amount of money held in checking or currency accounts, $M_2$ denotes $M_1$ plus savings
accounts and retail money market mutual funds, and $M_3$ denotes $M_2$ plus deposits
of eurodollars and repurchase agreements. Since March 2006, the Federal Reserve
no longer collects statistics on $M_3$.
At the beginning of this chapter, we placed emphasis on the state of a given
market, e.g., a long term bullish state. The notion of a state is not meant to imply
something static, but rather is intended to capture the mood of the market and its
participants at a given instant in time. One could easily envision the situation where
one encounters an almost ‘pure’ bullish state which then changes over time into
something more and more bear-like (even though the claim in the last section was
that we have never experienced a truly long term bear market). One question is then:
If we are in an almost pure bullish state, can we tell when the market has peaked?
Are there any indicators one could use to detect such a peak?
We saw in the last section how conditions without sufficient liquidity can lead
to the end of market growth, so it is natural to suggest looking at the amount of
money available to further supply market growth. Using this quantity as an indicator
would, however, require knowledge of the absolute amount of liquidity available to
be invested in the stock market, something which seems hard to estimate. Another
possibility is to try to use the relative level of investment with respect to the market
at some earlier time. In fact, we can use the balance equation for the cash supply to
relate the level of cash invested in a given stock market at a given time $t$ to some
future level at a time $t' > t$. For example, given that one knew that the stock market
reached its peak 5 years ago, assuming that the level of investment was close to
100 % at that point, how big a percentage is invested now at time $t'$?

Fig. 6.6 How big is the ‘level’ of investment in the stock market? Solid line: US S&P 500 index
over the time period 07 May 1993 to 17 October 2003, using weekly data. Unit of time: 1 week.
Dashed line: level of cash invested. The S&P 500 index has been normalized so as to compare with
the level of cash invested. The percentage of the money supply was chosen so that the maximum of
the S&P 500 index (which occurred in March 2000) corresponds to 1 (i.e., 100 % invested) (Figure
taken from [142])
Figure 6.6 gives one illustration of the level of relative investment, using the
top of the stock market as a reference to define subsequent levels of investment.
Defining the top of the market as full investment, the dashed line defines the level
of investment determined from the cash balance equation (see the Appendix for
more details). Again the idea is that cash is the ‘fuel’ required for further stock
market growth, so that, without sufficient cash available for investment, growth
stalls. As can be seen from the dashed line in Fig. 6.6, following the March 2000
peak of the S&P 500, the level of cash invested in the market had declined by 50 %
within 3 years of the peak. Taking this as a very rough guideline for the
possibility of further expansion of the market after 2003, we see that liquidity
did not appear to be a problem blocking further expansion, even though the
availability of cash does not necessarily imply that the market will expand. With
hindsight, the market did expand from the low level of cash invested around the
year 2003.
6.5 Impact of Short Selling on Pricing in Financial Markets

The motivation for the exercise in Sect. 6.3 was mainly to see whether it is possible
to understand some of the elements leading to the long term growth of the market
which is so clearly visible when we look at Fig. 6.2. The main message from the
simple analysis of the growth equation is that ‘super-interest’ market growth, i.e.,
with $\alpha > r$, is possible under certain conditions, most notably when there is enough
liquidity and there is spillover to the economy in terms of dividends that keep up
with rising market prices. The linear relationship between dividends and prices
witnessed in the US markets for over a century therefore ensured suitable conditions
for super-interest growth.
This gives us one explanation for the equity premium puzzle, but another
underlying assumption was that investors break the symmetry, with the majority
being long on the market. And as we also saw, by changing the symmetry breaking
to favor short-selling investors, we could end up in the hitherto unknown and
certainly scary scenario of a continuous decline in the markets, where short-selling
investors profit and thereby continue to push the markets down, with even bigger
resulting profits for themselves, and so on and so forth.
When faced with a complex problem, it is reassuring to have equations express-
ing relationships between different variables that are thought of as key to the
problem. What comes out of the equations in terms of solutions can introduce new
intuition and guidelines, and very often gives a new angle on the problem, where-
upon new questions are raised. Ultimately, however, it should be the comparison
with experimental data that determines our level of confidence in a model, and
whether refinements or outright rejection are warranted.
Unlike many problems encountered in physics, for example, problems in social
sciences are often characterized by a large number of variables required to describe
a given system. This is certainly the case for the problems addressed in this chapter,
even though we gave a ‘minimal’ description and attempted to focus on what we
thought was most relevant to the problem at hand. The term mean-field analysis is
then used to characterize such an approach, since we try to focus on the ‘average’
properties of investor decision-making on a long time scale, in this case leading to
one explanation for the growth of a given stock market. Ignoring details is often a
fruitful first step toward gaining some initial insight. However, in a problem with a high level
of complexity, additional insight eventually demands a description with an increased
level of complexity, if only to check that the initial simpler assumptions are indeed
sound enough to use in a general description.
Looking at a financial market as a complex system, it would then be interesting
to assess the extent to which specific investment decisions at the individual level,
such as the decision by an investor to go short on the market, could impact price
dynamics on the level of the market. With respect to the impact of short selling in
financial markets, we would like to point to the advent of hedge funds as a possible
relevant factor. Hedge funds are a relatively new phenomenon, even when looking
only at the modern history of financial markets. We will argue that the arrival of
hedge funds has had a profound effect on the functioning of the financial markets,
and that this arises from the very definition of the way they work.
In contrast to more traditional investment funds like pension funds, or most
investment accounts managed by banks or insurance companies, hedge funds
typically target an absolute return as performance, and not a benchmark given by
a stock index. The fact that a hedge fund typically maintains a hedged portfolio
in a broad range of assets such as equities, bonds, and commodities makes it
natural to demand an absolute performance from a hedge fund, in contrast to mutual
funds which often specialize in creating portfolios by picking among the stocks
making up an index. One distinguishing feature between hedge funds and traditional
institutional investors is therefore that hedge funds target absolute returns, whereas
institutional investors target returns relative to an index. This aspect will be very
important for the following discussion. An additional major difference that makes
hedge funds stand out is that most investment banks and mutual funds are restrained by law from entering short positions on stocks. No such restraints are imposed on
hedge funds, which on the contrary use a variety of investment strategies to profit
from downturns as well as from upswings of various markets.
Using the tools of complexity science, the agent-based model introduced in Chap. 3 allows us to consider the impact of short-selling investors, like hedge funds, by having a heterogeneous group of investors that trade in a market. In this way, we
can probe the impact of a group of hedge funds with different investment strategies
trading alongside a larger pool of traditional institutional investors restrained to use
long-only strategies. There are clearly many ways to implement such a complex and heterogeneous scenario; here we shall just give an appetizer to show how this can be done, and look at a general outcome in a statistical sense via scenario analysis.
Figure 6.7 illustrates a simulation study of an agent-based model which introduces different percentages of short-selling investors who are not restrained to long-only positions, but can also go short on the market. The figure represents specific simulations done with fixed values of interest rate and dividends (corresponding to the values used in Fig. 6.3) and for specific parameter values of
the $-game. Note, however, that the results discussed in the following have been
confirmed for a broad range of values around the parameters used to construct
Fig. 6.7 [143].
In order to have a reference, curve a corresponds to the case where all agents
are long-only, i.e., there are no short-selling agents. Curves b and c then show
market behavior when 20 and 40 %, respectively, of market participants are allowed
to take long as well as short positions (the remaining 80 and 60 %, respectively,
of the agents are then only allowed to take long positions). The thick solid line
gives a typical price history for each of the three mentioned cases. Thin dotted
lines illustrate average market behavior when starting out the $-game simulations
with the same parameters, except that different randomly chosen trading strategies
are assigned to the agents. The thin dotted lines represent the 5, 50, and 95 %
Fig. 6.7 Impact of an increasing population of short sellers in the market. Thick solid line: price P(t) from one configuration of the $-game. Thin dotted lines represent average properties of such market simulations with the 5, 50, and 95 % quantiles from bottom to top, respectively. (a) Market without short-selling agents. (b) Case where 20 % of agents are allowed to take both long and short positions. (c) Case where 40 % of agents are allowed to take both long and short positions (Figure taken from [143])

quantiles (from bottom to top), i.e., at every time t, out of the 1,000 different initial
configurations, only 50 got below the 5 % quantile line, 500 got below the 50 %
quantile, and 950 below the 95 % quantile.
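To make the quantile construction concrete, the following sketch mimics the procedure with a plain biased random walk standing in for the $-game. The number of runs matches the 1,000 configurations mentioned above, but the random-walk model itself and its drift and volatility values are illustrative assumptions, not the parameters behind Fig. 6.7.

import numpy as np

# Illustrative stand-in for the $-game: 1,000 price paths that differ only
# in their random numbers, and the 5, 50, and 95 % quantile bands of the
# resulting prices (drift and volatility values are assumptions)
rng = np.random.default_rng(42)
n_runs, T, drift, vol = 1000, 500, 5e-4, 1e-2
log_p = np.cumsum(drift + vol * rng.standard_normal((n_runs, T)), axis=1)
bands = np.quantile(np.exp(log_p), [0.05, 0.50, 0.95], axis=0)
# At every time t, 50 of the 1,000 runs lie below bands[0], 500 below
# bands[1], and 950 below bands[2], just as for the dotted lines above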
As can be seen from the figure, in the case with long-only agents trading in the market (curve a), the price is on average tilted towards positive returns. This result
confirms the analysis of the cash balance equation in Sect. 6.2. Therefore, when
introducing heterogeneity by looking at long term growth of a market where traders
have different trading strategies, we still find that collectively and on average, a
heterogeneous group of traders are able to keep pushing the market up to their own
profit. It should be noted that the computer simulations in Fig. 6.7 were carried
out under the same assumptions as the analysis of the cash equation, i.e., it was
assumed that the dividends followed the price of the market via the wealth effect.
As discussed when we studied the cash balance equation, this is important because
an agent without sufficient cash would not be able to enter a position, while long-
only agents without shares would be inactive if their best strategy showed a sell
signal.
Even more remarkable is the fact that, when we compare the case without
short-selling agents represented by curve a to the cases with short-selling agents
represented by curves b and c, we observe three different consequences of introducing these short-selling agents:
1. The average long term return of the market decreases for a stock market with short-selling agents. This is seen by looking at the 50 % quantile which, at the end of the time series, is lower for a market with short-selling agents than for a market without short-selling agents.
2. The average volatility of the market increases for a stock market with short-
selling agents compared to the case without short-selling agents. This is seen by
the increased gap in the return between the 5 and 95 % quantiles, which is larger
for a market with short-selling agents than for one without.
3. The effects described in (1) and (2) increase when the percentage of short-selling
agents is increased. This is seen from the fact that the effects described in (1) and
(2) are more pronounced in curve c than in curve b.
In this chapter, we have been arguing that, since their very origin and up until very
recently, financial markets have been in a state of broken symmetry which favored
long term growth rather than contraction. This broken symmetry, as manifested
by a long term bull phase, can be ascribed to the historically almost complete
dominance of the financial markets by long-only players, which, combined with
the wealth effect, ensured long term market growth. We have also argued that
we may have entered a new and as yet unknown market regime owing to the
arrival of new market players in the form of hedge funds, which obey different
market constraints. A natural question then arises as to whether the exponential growth of hedge funds the world has witnessed since the beginning of the twenty-first century could have an impact on the long term trends of financial
markets.
It is probably too early to give a clear answer to this question, given the fact
that the hedge fund industry is still a relatively new and rather poorly understood
phenomenon. However, the rapid growth of this industry, together with the fact that it has become more and more common for traditional institutional investors to invest directly in hedge funds, which thereby indirectly exposes them to short positions, is a new development which raises questions. Given the recent market
crisis, we should at the very least demand increasing awareness of this industry and
the impact it could have on financial markets. The fact that we have seen a temporary
ban on short-selling on, e.g., the most exposed bank shares during the 2008 credit
crisis, a ban enforced by the US Securities and Exchange Commission (SEC), and
the discussion in 2009 of the introduction of an uptick rule that disallows short
selling of securities except on an uptick, suggests that the authorities are aware that
short selling could have damaging effects.
In our own investigations in this section, two predictions came out concerning
the impact of the new hedge fund players in the markets: over long time horizons,
one should see an increase in volatility and, more importantly, an increase in the
probability of periods where the market is bearish. If, as is often speculated, one
has a spillover from the financial markets into the economy through the so-called
wealth effect, the introduction of an accelerating number of hedge funds since the
beginning of this century opens up the possibility of dire consequences for the
future.
Appendix

Deriving the Growth Equation for Financial Markets

The creation of the bull market we saw over the twentieth century will in the
following be considered from the point of view of a growth phenomenon. The
wealth effect and the investor bias on long positions will be shown to allow for
a sustainable long term growth of financial markets, and can thus be identified
as important ingredients needed to create an equity premium. It should be
noted that the model presented in the following is an exogenous model, taking
into account the impact of the actual growth of the real economy by assuming
that dividends follow stock market prices. An empirical justification for such
an assumption can, for example, be seen in Fig. 6.3.
In order to study how the century long bull market of the twentieth century
could have come about, consider for a moment the common action of all
investors in a given stock market. Over time, this would mean a fluctuating
pool of investors with different levels of wealth, entering and exiting the
market at different times, depending on the different moments that each
investor found opportune for investing in the market. In reality, such a pool
of different investors would in itself imply, and require, a description leading
to highly fluctuating markets. However, taking a step back and looking at
the long term trends only, it is interesting to ask how the aggregate action of
investors could lead to the long term bull market witnessed throughout the last
century.
Let us call W(t) the total aggregate wealth of the pool of investors who hold a given number n(t) of market shares [22] in a stock market at time t, so that

W(t) = n(t)P(t) + C(t) ,    (6.1)

where P(t) is the price of a market share and C(t) the cash possessed by the
aggregate pool of investors at time t. Since we are considering the dynamics
of financial markets over several decades, the question that will be posed
in the following is whether it is possible for long term investors to keep on
accumulating shares and profiting from dividends on the shares while holding
an increasing amount of shares as time goes by. The prototype investor here
would be an investor making lifetime investment in a pension fund. At each
time step, the aggregate pool of investors buy new market shares with the
result of pushing up the price of the market, thereby creating an excess
demand for market shares, denoted by A(t). This gives rise to the following equation for the return r(t) of the market [20, 44]:

r(t) ≡ ln P(t + 1) − ln P(t) = A(t)/λ ,    (6.2)

where λ is the liquidity of the market. The fact that the price goes in the direction of the sign of the order imbalance A(t) is intuitively clear and well-documented [25, 28, 64, 79, 112]. Since we are interested in the long term sustainability of investors pushing up the market to their own benefit, fluctuations will be ignored and A(t) will be taken as constant, A(t) ≡ A.
Then we have

r(t) ≡ d ln P(t)/dt = A/λ ,    P(t) = e^{At/λ} ,    (6.3)

and

dn(t)/dt = A ,    n(t) = At .    (6.4)
Besides expenses to keep on buying shares, the pool of investors get an income d(t) from dividends on the shares they are already holding, and r(t) from interest rates on the cash supply C(t). This implies the following balance equation for the cash supply of the pool of investors as a function of time:

dC(t)/dt = −(dn/dt) P(t) + C(t)r(t) + n(t)d(t) + C_flow(t, r(t), d(t), P(t), …)
         = −A e^{At/λ} + C(t)r(t) + At d(t) + C_flow(t, r(t), d(t), P(t), …) ,    (6.5)
where the term C_flow(t, r(t), d(t), P(t), …) in (6.5) is just meant to describe all additional inflow/outflow of money into the pool of investors, and as indicated can be thought of as depending on time, dividends, interest rates, the price of the market, and many other factors, such as tax cuts, etc.
It is preferable to express (6.5) in terms of the growth rate of the financial market, viz., α ≡ A/λ, and the cash in terms of the market liquidity, C̄ ≡ C/λ. For a general description of the solutions to (6.5), see [143]. In the following, C_flow(t, r(t), d(t), P(t), …) ≡ 0. Assuming a wealth effect, dividends will be taken to grow in proportion to the price, that is, d(t)/d_0 = P(t)/P_0, whence (6.5) becomes

dC̄(t)/dt = −α e^{αt} + C̄(t)r(t) + α t e^{αt} d_0 ,    (6.6)
dt

with solution [143]

C̄(t) = α e^{αt} [ (t d_0 − 1)/(α − r) − d_0/(α − r)^2 ] + e^{rt} [ (α^2 − rα + α d_0)/(α − r)^2 + C̄_0 ] .    (6.7)
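As a consistency check on (6.7), one can integrate (6.6) numerically and compare with the closed form. The sketch below uses a simple forward Euler scheme with a constant interest rate r; all parameter values are arbitrary choices for illustration.

import numpy as np

alpha, r, d0, C0 = 0.05, 0.02, 0.03, 1.0    # arbitrary illustrative values

def cbar_exact(t):
    # Closed-form solution (6.7) with initial condition C(0) = C0
    return (alpha * np.exp(alpha * t)
            * ((t * d0 - 1.0) / (alpha - r) - d0 / (alpha - r) ** 2)
            + np.exp(r * t)
            * ((alpha ** 2 - r * alpha + alpha * d0) / (alpha - r) ** 2 + C0))

# Forward Euler integration of the cash balance equation (6.6)
T, dt = 50.0, 1e-3
c, t = C0, 0.0
for _ in range(int(T / dt)):
    dc = -alpha * np.exp(alpha * t) + c * r + alpha * t * np.exp(alpha * t) * d0
    c, t = c + dt * dc, t + dt

print(f"Euler: C({T:.0f}) = {c:.4f}")
print(f"Exact: C({T:.0f}) = {cbar_exact(T):.4f}")    # the two should agree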

Estimating Investment Levels in a Given Stock Market

Equation (6.6) can be used to relate the level of cash invested in a given stock market at a given time t to some future level at time t′ > t. For example, the question might be: given that one knew that the stock market reached its peak say 5 years ago, and assuming the level of investment was close to 100 % at that point, then how big a percentage is invested right now at time t′?
Let I(t) be the total amount of money invested in the stock from time 0 to time t. Then

I(t) = ∫_0^t A(t′)P(t′) dt′ = λ [P(t) − P(0)] .    (6.8)

The idea is to assume that, over time, a constant percentage of the total money
supply in circulation will find its way into the stock market, so that
M(t) = K exp( R_{M3}(t) t ) ,

where R_{M3}(t) is the percentage increase/decrease in the M3(t) money supply (see the Federal Reserve website at www.federalreserve.gov/ for a precise definition), and K is the percentage of the money supply which is invested in stocks, taken to be constant here. The flow term arising from variations in the money supply M3 therefore takes the form C_flow^{M3}(t) = M(t) − M(t − 1). In discrete form, (6.6) becomes

C̄(t) − C̄(t − 1) = −[I(t) − I(t − 1)] + I(t)d(t) + [M(t) − M(t − 1)] .    (6.9)
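As a minimal sketch of the bookkeeping in (6.9), the recursion can be iterated directly; the series chosen below for I(t), d(t), and M(t) are made-up placeholders, not empirical inputs.

import numpy as np

T = 120                           # number of time steps (e.g., months)
t = np.arange(T)
I = 1.0 + 0.01 * t                # hypothetical invested amount I(t)
d = np.full(T, 0.003)             # hypothetical dividend yield d(t)
M = 10.0 * np.exp(0.002 * t)      # hypothetical money supply term K*exp(R*t)

C = np.zeros(T)                   # cash level, starting from zero
for k in range(1, T):
    # Discrete cash balance (6.9): money spent on shares leaves the cash
    # pool, while dividends and the money supply inflow replenish it
    C[k] = C[k - 1] - (I[k] - I[k - 1]) + I[k] * d[k] + (M[k] - M[k - 1])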
7 Complexity Theory and Systemic Risk in the World's Financial Markets

7.1 Introduction

In this penultimate chapter of the book, we take a top view of the financial markets,
looking now at the network of the world’s stock exchanges and suggesting that
there is a sociological phenomenon behind the way price formation propagates
across markets. However, the social process of price formation comes with a
psychological twist, since the claim will be that ‘change blindness’ could at times
play an important part in such a process, with traders ignoring small price changes,
but jumping into the market whenever a large price change occurs.
We introduce a complexity model to catch this idea of socio-finance dominated
price formation between groups of individuals. But in this case, each group of
individuals will represent the entire financial market of a given country. The most
interesting part of this kind of nonlinear dynamics is the creation of memory effects
via price ‘stresses’ and ‘strains’, but viewed at the system level. We will use the
analogy of earthquakes to make this point, with a slow buildup of stresses/strains
in the markets (we will come back to their precise definition) followed by a quick
release, much as one sees in domino cascades. The technical term for such a picture
is self-organized criticality, and we shall give a brief explanation of the origins of
this theory. From a practical point of view the most interesting implication, however,
is the possibility of memory effects, which suggests that the making of a financial
crisis is something that takes time to build up. At the same time, this also highlights
some of the pathways to situations one would wish to avoid, i.e., those that could
lead to financial crisis at the system level.
This book is about finance, but the next section will nonetheless discuss a
problem that belongs to a different area, namely the rupture of materials as seen
in the field of materials science. The problem illustrates one case of systemic risk,
which is the possibility of failure of the system as a whole, in this case the breaking
of a material. The reason we jump to this topic and start talking about materials is
not because we expect the reader to be particularly interested in damaged materials,
but rather because we would like to point out some of the advantages of thinking

Fig. 7.1 Making holes in a piece of paper. (a) 20 % holes, (b) 40 % holes, and (c) 60 % holes, at
which point the piece of paper is broken, since a single cluster of neighboring holes spans from
one side to the other. (d) 20 % holes as in configuration (a), but with a greater likelihood of rupture

‘out of the box’, i.e., applying solutions from one field to problems that have arisen
in another.

7.2 Systemic Risk: Tearing a Piece of Paper Apart

Suppose we take a piece of paper and begin to punch holes in it at random locations.
Figure 7.1a shows an example of what the sheet of paper might look like after we have punched away 20 % of it. Now one question is: How many holes do we have to
punch in the paper before it will be separated into two pieces? This type of problem,
even though it might appear to be irrelevant and somewhat simplistic, is actually of
general interest in the field of materials failure, where one would like to know at
which exact moment a piece of material or a system is at risk of complete failure.
Here we discuss the somewhat surprising result that ideas from finance can be
used to tell when systemic risk will occur, not in finance, but in material failure.
After first showing how to apply financial theory to understanding systemic risks
in material failure, we will then return to the important problem of systemic risks
in finance, but instead of applying ideas from finance, we shall apply ideas from
physics in order to gain insight into the likelihood of systemic risk in the world’s
financial system of stock exchanges.
As a more dramatic example than the piece of paper, one could think of Fig. 7.1
as illustrating an important material part of an aircraft whose lifetime must be
assessed by the maintenance team. The holes could for example correspond to
microscopic cracks in the material. The engineers would like to assess the density p_c of microcracks at which a cluster becomes likely to span right across from one side to the other. This is the point when the material would no longer be safe for use in the aircraft. Looking at Fig. 7.1b, in which the system has 40 % damage, this point has not yet been reached, while at 60 % damage as in Fig. 7.1c, a spanning cluster of damage separates the system into two parts. However, the problem of knowing exactly when a system will fail is probabilistic in nature, since the breaking point p_c for a given sample will depend on the specific random realization of the different holes or cracks. This is the point illustrated by Fig. 7.1d.
We can use methods from physics to tell us something about how a system will
behave on average. It can be shown that infinitely large systems will on average have a systemic failure at p_c^∞ = 0.5927460 ± 0.0000005 [155], and the correction to p_c^∞ is also well known for a given finite system of size L × L. However,
if one is sitting in an airplane, it will probably be small comfort to know that an
aeronautical engineer can say something about the general state of the materials in a
fleet of aircraft. What one would like is to be sure that they have a pretty good idea
of the state of the plane in which one is actually flying.
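The average breaking density quoted above is easy to reproduce numerically. The sketch below is a standard site-percolation experiment written as a minimal illustration, not the algorithm used in [155]: holes are punched one at a time at random sites of an L × L sheet, and the density at which the holes first span from top to bottom is recorded.

import numpy as np
from scipy.ndimage import label

def breaking_density(L, rng):
    # Punch holes one at a time and return the hole density at which a
    # cluster of holes first spans the sheet from the top row to the bottom
    holes = np.zeros((L, L), dtype=bool)
    for k, site in enumerate(rng.permutation(L * L), start=1):
        holes[site // L, site % L] = True
        clusters, _ = label(holes)     # 4-connected clusters of holes
        if (set(clusters[0]) & set(clusters[-1])) - {0}:
            return k / (L * L)         # some cluster touches both edges
    return 1.0

rng = np.random.default_rng(0)
pc_samples = [breaking_density(20, rng) for _ in range(200)]
print(f"L = 20: mean p_c = {np.mean(pc_samples):.3f}, "
      f"spread = {np.std(pc_samples):.3f}")
# For L -> infinity the mean approaches 0.5927..., but single samples scatter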
Physics is in general very good at describing situations where one wants to know
some average or general state of a system, but it is less adequate when it comes to
describing specific realizations of a system. Loosely speaking, the main tools we
have from physics have been developed to describe and understand how a system
will behave in general, and not to understand the specific path a given system will
take. In finance, however, nobody will take you seriously if you begin to discuss and
take into account all the possible paths the stock market could have taken in the past.
After all, who cares? Without excluding the possibility of living in a multiverse, we
are somehow stuck with the realization of the world’s events that we have actually
experienced.
Naturally, people in finance are very much aware of this fact, and their tools have
been made to cope with precisely this situation. It is not therefore surprising that
a mindset taken from finance might help us solve the problem of predicting more
precisely at which moment the paper will be torn apart in Fig. 7.1. To get an idea,
consider Fig. 7.1d, which shows a piece of paper with 20 % holes in it, just like
Fig. 7.1a. Still, comparing the two samples, it appears that Fig. 7.1d is much closer
to global failure, since there is a path of holes that almost spans the system. Not
surprisingly, it turns out that Fig. 7.1d is closer to a global rupture [4, 132].
Similarly to the core idea in finance, where rational expectations assumes that, in
the pricing of an asset, all available information is incorporated in the price of the
asset, one should use all available information to judge when the piece of paper will
be torn apart. That is, one should apply the rational expectations idea from finance
and use all available information about the state of the paper to predict its failure.
To be more precise, let us call R_t the fundamental value for the rupture time (that is, the time when the material actually breaks), estimated at time t, i.e., estimated before the material breaks, so that t < R_t. Put another way, R_t corresponds to the optimal prediction we have for the rupture time when we look at the system at time t. Then rational expectations theory allows us to calculate R_t from

R_t = E(R_t | I_{t−1}) .    (7.1)

Note that (7.1) has exactly the same structure as the rational expectation equation
for pricing of financial assets, viz., (1.2), as introduced in Chap. 1. The nice thing
about (7.1) is that it is perfectly clear what is meant by information and how this can
be used in an optimal way to predict Rt , whereas this was not clear for (1.2).
In the case of the piece of paper, the information I_{t−1} could simply be the fact of knowing that, for a given percentage x of holes, the system has not yet broken. The mere fact that, having punched say 50 % holes, the paper still has not broken gives us new information which we should use to predict when it will actually break. If the system has not broken at 50 %, this means that the most likely density of holes at which it will break has changed compared to the value of 0.59, which was the most likely density for failure before any holes had been made in the paper [4, 132]. The information that the paper has not broken with 50 % holes in it means that it is more likely to break at a higher density of holes than the p_c^∞ given above.
This fact is illustrated in Fig. 7.2, which shows the probability distribution
function for global failure to occur at pc (circles). We see there that this probability
distribution changes when we condition on systems that have not yet failed for
different densities of holes made in the paper (crosses, dots, squares). The more
detailed information one has about the system, the better the chances of predicting
its failure. Using the extent of the largest cluster of holes is another way of obtaining
more detailed information. Figure 7.3 illustrates how the rational expectation
theory for material fracture expressed by (7.1) does a very good job in using
new information, whenever a new hole is made, to predict when global failure of
the system will set in. In the more general case of systems where the damage is
temporally and spatially correlated, such as fiber bundles breaking under stress,
(7.1) can be expressed via a set of equations which calculate the probability for
different scenarios. As one would expect, the more detailed information one has
access to (e.g., concerning temporally and spatially correlated damage), the more
significantly one can increase the predictability for global failure [4, 132].
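In the uncorrelated case of the punched paper, (7.1) boils down to a conditional average over the configurations that are still intact. A minimal sketch, assuming one has a sample of simulated breaking densities (for instance from the percolation sketch in the previous section):

import numpy as np

def predicted_pc(samples, p):
    # Rational expectations estimate (7.1): given that the sheet has
    # survived hole density p, predict the breaking density as the mean
    # over the sampled configurations that also survived p
    survivors = samples[samples > p]
    return survivors.mean() if survivors.size else p

# Stand-in sample of breaking densities (a rough placeholder distribution);
# in practice one would use the values from an actual percolation simulation
rng = np.random.default_rng(1)
samples = np.clip(rng.normal(0.59, 0.03, 5000), 0.0, 1.0)

for p in (0.0, 0.50, 0.53, 0.55):
    print(f"survived p = {p:.2f} -> "
          f"predicted p_c = {predicted_pc(samples, p):.3f}")
# The prediction shifts upward as more holes are survived, mirroring Fig. 7.2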
To sum up, every time we punch a new hole in the piece of paper, new
information is revealed about the system and it can be used to gain a better prediction
for when the system will fail. In this sense, the rational expectations scheme used
in finance seems much better suited to deal with material failure than with the
actions of real humans, maybe because materials correspond well to the idealized
Fig. 7.2 The probability for when failure will happen changes as we punch more and more holes in the paper. The figure shows the probability for failure as a function of the density of holes at which the sample breaks, also called the percolation threshold p_c. Circles: Unconditional probability distribution function (PDF) P_L(p_c) as a function of the percolation threshold p_c (in percent) for L = 20. This is the PDF that tells us the probability of failure before we have made any holes in the sample. Crosses: Conditional PDF P_L(p_c | p = 0.5), conditioned on those systems which have not failed for a fixed occupation density p = 0.5. This is the PDF that tells us the probability for failure after we have made 50 % holes in the sample and the sample still has not broken. Dots: Conditional PDF P_L(p_c | p = 0.53), conditioned on those systems which have not failed for a fixed occupation density p = 0.53. Squares: Conditional PDF P_L(p_c | p = 0.55), conditioned on those systems which have not failed for a fixed occupation density p = 0.55 (Figure taken from [4])

assumptions made under rational expectations, whereas we have seen that this is not
the case with humans.

7.3 Systemic Risk in Finance

Systemic risk refers to the risk of collapse of an entire system. In a financial context,
systemic risk can occur for example when the financial stress of major banks spreads
through ripple effects to other banks that have acted as counterparties in common
transactions with the banks in trouble. This topic has been widely debated after
excessive risk-taking by major financial institutions pushed the world’s financial
system into what many considered a state of near systemic failure in 2008. For
example, the IMF in its Global Financial Stability Report (2009) acknowledged the
Fig. 7.3 Punching holes in a piece of paper and using tools from finance to tell when it will break. Error (percentage) in predicting the rupture time R_t [as calculated from (7.1)] as a function of time t (The figure is taken from [4]). Different symbols correspond to different cases of systems that break (see [4] for the precise definition). Using traditional averaging methods from physics, the error in predicting the rupture of a given system would be constant at 4.66 %. Using instead the rational expectations approach formulated in (7.1), the prediction error decreases significantly as time goes by, giving an increasingly better prediction for when the material will actually break apart

lack of proper tools and research on the topic, and used one out of its three chapters
to report on the detection of systemic risk [68]. It is very important to note that this
kind of risk is completely absent in the approach of modern portfolio theory
used by a large majority of institutional investors and banks, since only the risk of
individual assets or portfolios of assets is considered, not the risk concerning the
financial system as a whole.
In this context, the following can be considered as yet another starting point for
critics of the Markowitz portfolio theory and the CAPM described in Chap. 2. For
a general review article on the topic, see [37], which points to one of the main
challenges in the following words:
Although the increase in theoretical, empirical, and policy analyses of financial instability
has been substantial, practically all writings share – in our view – the following limitation:
while ‘systemic risk’ is now widely accepted as the fundamental underlying concept for the
study of financial instability and possible policy responses, most work so far tackles one
or several aspects of that risk, and there is no clear understanding of the overall concept of
systemic risk and the linkages between its different facets.

Since complexity theory describes emergent behavior at the system level, it clearly
appears to be the appropriate tool to study such a phenomenon as systemic risk.
In the following, this will be illustrated by a concrete example applied to financial
stresses in the world’s network of stock exchanges. In the typical literature on
financial systemic risk, one usually deals with bank contagion [88], looking at
exposures in which default by one bank would render other banks insolvent. Some
studies extend such a view to include also macroeconomic fundamentals, giving
an additional macro perspective on systemic risk in the banking sector. Yet other
studies take a broader look at the financial system, studying contagion in financial
markets, but typically one market at a time [19, 37, 114]. On the other hand, very
few studies have been made about systemic risk at the largest possible scale, i.e.,
the world as a whole. See, however, [10, 36, 52, 66, 77], which include some very
interesting studies of the world’s financial markets.
We will argue here that systemic risks are not isolated within individual countries.
This should seem obvious, given the buzzword ‘economic globalization’ which has
defined a trend in the public arena over the last couple of decades. As we have
argued many times, whenever there is some sort of interaction among elements in a
system, one rarely gets a good understanding by looking at elements in isolation.
We therefore suggest a bird’s-eye view when estimating global risks, but stress
the relevance for individual countries, by looking at the world’s network of stock
exchanges instead of focusing on individual markets. In particular, we shall suggest
new approaches to understanding the risk of contagion, where the transmission of
financial shocks can propagate across countries.
As with the piece of paper where the ongoing punching of holes steadily pushed
it toward global failure, we will introduce a model in which ‘damage’ is gradually
created by large price movements spreading through the world network of financial
markets. ‘Failure’ then occurs when the extent of the damage reaches some critical
level, and is represented in terms of ‘avalanches’ of price movements sweeping
round the globe. The trigger for such large price movements may be endogenous,
or it may be exogenous through some kind of local or global shock. Both situations
can trigger a domino effect of price movements in the world system of financial
markets.
While inspired by psychological tests on human reactions to large changes, the
model has its origin in a nonlinear dynamical model for earthquakes. Using pricing
mechanisms as prescribed in standard finance, the model serves as a prototype for
combining psychological and sociological thinking with standard pricing concepts
in finance and nonlinear response models in physics. In order to understand the
idea behind the model, we must first discuss the notion of self-organized criticality,
introduced by the Danish physicist Per Bak and coworkers [11, 12] at the end of the
1980s and the beginning of the 1990s.

7.4 Self-Organized Criticality

One often encounters a situation where a system is continuously perturbed by some
(external or internal) disturbance, and the question then arises as to how the system
will react to such perturbations. For example, one can think of the speed of global
Fig. 7.4 The sand pile. An example of a self-organized critical (SOC) phenomenon. (Figure taken
from [11])

internet traffic as a function of the continuous level of spam mail that circulates and
the frequency of attacks by hackers on internet routers. Another example is plate
tectonics, where the plates are continuously perturbed by slow collisions with other
plates. From time to time, the response is a release of built up stresses in the form
of an earthquake. Or think of the climate of the earth. We still do not know exactly
how the world climate will respond to the release of man-made CO2 gas.
In 1987, Bak, Tang, and Wiesenfeld (BTW) [12] introduced a model that has
in many respects become a prototype for describing such systems, i.e., systems
in which a slow buildup of stresses is subsequently released in a sudden burst of
avalanches. Their general idea was illustrated by a sand pile. Imagine ourselves
at the beach on a nice warm day, and let us say that, in the absence of any other
tempting activities, we begin to make a pile of sand by dropping grains of sand,
one by one, as illustrated in Fig. 7.4. Initially, we see a heap of sand being formed
with a small slope, but as we keep on dropping grains of sand, the pile grows in
size, and so does the slope of the pile. During this process we notice that, at the
beginning, dropping a grain of sand does not have any other effect than increasing
the slope locally at the place where we dropped the grain. However, as more and
more sand is dropped, we begin to notice that dropping a grain of sand can lead to a
redistribution of the sand. This is initiated by the grain setting off small avalanches
of nearby grains of sand, which then tumble down the pile.
When we finally end up with something that looks like the sand pile in Fig. 7.4,
we have reached a point where the slope of the pile cannot get any steeper. Then
instead of increasing the slope, the system responds by releasing more and more
avalanches of sand. By tumbling down the pile, these will in turn set off other
grains, which then also tumble down and set off even more avalanches of grains,
into what develops into a cascade of sand grains falling down the slope. In this
state the system is said to be critical, because the action of dropping a grain has
unpredictable consequences which range from nothing happening (the grain sticks)
to a huge avalanche of grains perturbing a large part of the pile. The formalism that
Bak and coworkers introduced to model such a scenario is described below.

In order to model this along the lines suggested in the BTW paper, we think of a discrete description of the sand pile, describing its state by the height h(x, y) of a column of sand located at the coordinates (x, y), where h, x, y are all integers. A grain is added to the pile by randomly choosing a site (x, y) and increasing h by one unit. The BTW model assumes that the local slope of the pile described by z(x, y) can never exceed a threshold value z_c. If this situation arises when we add a grain of sand at site (x, y), sand at the site will tumble and be redistributed to the neighboring sites (x ± 1, y), (x, y ± 1). Describing the sand pile in terms of the local slope, the BTW algorithm can be formulated as follows.
Add one grain of sand:

z(x, y) → z(x, y) + 1 .    (7.2)

If the slope is greater than the critical slope, i.e., if z(x, y) > z_c, then redistribute the sand:

z(x, y) → z(x, y) − 4 ,    (7.3)
z(x ± 1, y) → z(x ± 1, y) + 1 ,    (7.4)
z(x, y ± 1) → z(x, y ± 1) + 1 .    (7.5)

When the condition z(x, y) > z_c is fulfilled, the neighboring sites (x ± 1, y) and (x, y ± 1) can in turn become critical when they receive a grain of sand from the site (x, y), and so they also redistribute sand to their neighbors according to (7.3)–(7.5), and so on, with the end result that the initial perturbation at site (x, y) creates an avalanche of tumbling sand.
One of the main findings by BTW was that the probability P(s) of creating an avalanche of size s followed a power law distribution:

P(s) ∝ s^{−α} .    (7.6)

Power laws like (7.6) are characteristic of systems that undergo critical phase
transitions. BTW therefore coined the term ‘self-organized criticality’ (SOC)
to describe a class of systems which somehow enters a critical state on its
own, without the need for any external tuning parameter, like temperature, for
instance.
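A compact implementation of the algorithm (7.2)–(7.5) is sketched below; the grid size, number of grains, and the threshold z_c = 3 are common textbook choices made here for illustration, not values prescribed in [12].

import numpy as np

def btw_avalanches(L=30, grains=20000, zc=3, seed=0):
    # Minimal BTW sandpile on an L x L grid with open boundaries;
    # returns the size (number of topplings) of each triggered avalanche
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(grains):
        x, y = rng.integers(L, size=2)
        z[x, y] += 1                                  # add a grain, Eq. (7.2)
        size, unstable = 0, [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if z[i, j] <= zc:
                continue
            z[i, j] -= 4                              # topple, Eq. (7.3)
            size += 1
            for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= a < L and 0 <= b < L:         # edge grains fall off
                    z[a, b] += 1                      # Eqs. (7.4)-(7.5)
                    unstable.append((a, b))
        sizes.append(size)
    return np.array(sizes)

sizes = btw_avalanches()
# A log-log histogram of the nonzero sizes approximates the power law (7.6)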

Notice that despite the name ‘critical’, according to (7.6), the most likely outcome
of adding a grain of sand is actually that nothing at all will happen, i.e., an avalanche
of size 1. The main message we learn from self-organized critical (SOC) systems
is that we cannot predict exactly when a big event will happen, only that big events
will eventually happen, again according to (7.6). If this sounds familiar, it should, because this is the case for earthquakes as described by the famous Gutenberg–Richter law – yet another example of a power law distribution. It is still not clear
what might be necessary and sufficient conditions for a system to enter a SOC state
with avalanche dynamics described by (7.6) all by itself, as it were, without any
tuning parameters. Per Bak stressed the highly nonlinear dynamics that results from
the threshold condition z > z_c with subsequent execution of (7.2)–(7.5). According
to Bak, this makes it impossible to use a standard differential equation approach to
describe such systems.
Let us summarize the main characteristics of a SOC system. This will be
important for the topic in the next section, where we introduce a SOC model of
the world’s stock exchanges:
• Separation of Time Scales. The separation of time scales happens because
there is a slow variable gradually adding stresses to the system, followed by
sudden releases of such stresses, modelled by a fast variable describing the
avalanches propagating through the system. If we consider the sand pile, the
addition of sand grains is the variable acting on the slow time scale, while
the triggering of avalanches happens at a much faster time scale. A similar
description is found for earthquakes, where stresses build up over decades as
the tectonic plates slowly slide on top of one another, to be released suddenly
in the phenomena we associate with earthquakes. In the next section, we shall
see how financial markets can be understood in a similar picture, with a slow
variable corresponding to the economic growth/contraction that slowly induces
‘stresses’ across markets, while the fast variables are the collective moves of the
markets occurring simultaneously worldwide, especially during volatile periods
on the markets.
• No External Driving Force Needed to Reach Criticality. The term ‘self-
organized’ in the SOC abbreviation is meant to stress this fact. Before the
introduction of SOC systems, physicists always thought that an external driving
force like temperature or pressure would be necessary for a system to enter the
critical state seen during phase transitions. The BTW picture of SOC was an
eye-opener for many physicists, who began to look for systems in nature that
could organize into a critical state on their own, characterized by power law
distributions of events. Apart from the sand pile, the most obvious candidates for
such a self-organization are earthquakes, where stresses introduced by sliding
tectonic plates over millions of years manage to drive the earth’s crust into a
critical state, with power law distributions arising in the Gutenberg–Richter law
for earthquakes. In the next section, the claim will be that, on a worldwide scale,
the financial markets could be in a similar ‘critical state’, driven mainly by the
slow ‘stresses’ of expanding or contracting economies.
• Memory Effects at the System Level. Looking at Fig. 7.4, the shape of the sand
pile is created by the repeated action of sand being added and then tumbling
down, and as such it is a reflection of past actions. The way the system will
respond to a new perturbation (the dropping of a grain of sand) depends on how
the system was created, so the system possesses a memory created collectively
by its constituents (the grains of sand). In terms of financial markets, the claim
in the next section will be that a similar memory exists for the network of stock
exchanges around the world, and that we can only understand how the system
will react to new perturbations from a detailed understanding of the scenario that
led to the present ‘state’ of this network.
Not long after the original paper introduced the idea of self-organized criticality
in a sand pile, another paper was published, suggesting how earthquakes could be
seen as generated by SOC dynamics [105]. In short, the idea presented in [105]
was to consider a tectonic system as being composed of blocks that can move
independently of each other in such a way that, when a block moves, it influences
the force on neighboring blocks as if they were connected by springs [81]. Stresses
build up on the blocks for two reasons:
1. The tectonic plate movement in [105] was meant to describe a subduction zone
where the relative movement of an upper tectonic plate squeezing a lower-
lying tectonic plate induces stresses on the latter. The slow dynamics was taken
into account by a small but steady increase in the force acting on each of the
blocks describing the lower-lying tectonic plates. Since stick and slip motion
was assumed, no block would move until the block with the highest stress on it
hit a threshold force F_c for sliding.
2. As for the sand pile dynamics, a block sliding due to the stress would then
redistribute its stress to neighboring blocks, giving rise to similar dynamics to the
sand pile as described in (7.3)–(7.5), but with one important change: the force on
the sliding block would not be conserved in the redistribution.
The model in [105] thus became the first known SOC model that would still
exhibit criticality (power law distributions of event sizes) with variables that were
not conserved. Similarly, as we shall see, the model of the world stock exchange
network is an example of a SOC model where the variables are not conserved.
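The essence of this non-conserved stress redistribution can be sketched in a few lines, in the spirit of the model in [105] (often called the OFC model); the conservation parameter c and the drive increment below are illustrative assumptions, not values from [105].

import numpy as np

def ofc_step(F, Fc=1.0, c=0.2, dF=1e-3):
    # One driving step of an OFC-like spring-block model: load every block
    # slowly, then let any block whose force exceeds Fc slide, passing a
    # fraction c of its force to each of its 4 neighbours; with c < 0.25
    # the force is NOT conserved.  Returns the avalanche size.
    F += dF                                   # slow tectonic loading
    size = 0
    while True:
        over = np.argwhere(F > Fc)
        if len(over) == 0:
            return size
        for i, j in over:
            f, F[i, j] = F[i, j], 0.0         # the block slides and relaxes
            size += 1
            for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 0 <= a < F.shape[0] and 0 <= b < F.shape[1]:
                    F[a, b] += c * f          # non-conserved redistribution

F = np.random.default_rng(0).uniform(0.0, 1.0, (30, 30))
sizes = [ofc_step(F) for _ in range(50000)]   # most steps trigger nothing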
7.5 Two States of the World’s Financial Markets: The Sand Pile
and the Quicksand

Just like the sand pile described in the last section, financial markets are continu-
ously perturbed by external events or news that influence the pricing of the markets.
And just as we saw how avalanches could be created and spread in the sand pile, it
appears that what happens in one market can sometimes propagate across countries,
especially during crises in which one often sees days where the world markets
are unanimously red or black. In the following we suggest a link between the
dynamics observed in SOC systems and the price dynamics seen in the network of
the world’s financial markets. Specifically, we will introduce a model where tremor
price dynamics (TPD) of ‘price-quakes’ is being created in part due to a worldwide
tendency by traders to react disproportionately to big price changes occurring in
other countries.

7.5.1 News and the Markets

If you live in the sunny south of France, you just need to take your car and go for a
drive to know what is going on in financial markets on the other side of the planet.
That is, if while driving you tune into the radio station France Info, which brings you
continuous live news coverage, they will tell you the opening and closing numbers
of the most important markets in the world as they happen. They will also give you
more detailed information and analysis of the French domestic stock market, the
CAC40, and provide you with information if, for example, a major company in the
CAC40 makes new acquisitions. And of course France Info is not alone in such
coverage. Rather, it is the trend among news providers, whether they communicate
by radio, television, or the worldwide web, to inform about the performance of the
main stock markets on the planet, reaching an ever broader public around the world,
and often in real time.
Nowadays we are of course used to such a ubiquitous and instantaneous presence
of market data in the public sphere, but we just need to go back to the days before
the internet to realize that something important has changed in the communication
of market data to the public. Even though it now seems hard to imagine life without
the World Wide Web, it should be noted that the first web server outside Europe was
set up in 1992! It was simply beyond the reach of most people to follow the crash
of 19 October 1987 live in the way that absolutely everybody could follow the flash
crash on 6 May 2010. This poses intriguing questions as to the impact of this changed mode of communication on the markets.
It has been shown in computer simulations that the processes of social influence
lead to the development of clusters of like-minded individuals [103]. The size and
shape of the clusters depend on the structure of communication between individuals
[82]. Clusters of similar opinions tend to develop in areas of dense communication
between individuals. The local pattern of communication results in clusters of
relatively small size. With growth of the communication network, the clusters
of similar opinions also grow in size. Global patterns of communication thus tend to
produce unification of opinions. Clearly, the extension of communications networks
due to the development of new information and communication technologies will
result in globalization of opinions. It is also the case that exposure to specific
information prompts the individual to use this information in decision-making.
Listening to financial news on the car radio can be relaxing or interesting, but a
more tense atmosphere pervades the trading floors where not only lots of money but
also careers are at risk. It would be strange if the bouts of irrationality sometimes
observed in the general public were never to affect these highly trained professionals
when they are trading. Since they are the ones that supposedly have a real impact
on the way the markets are moving, it would be interesting to look more seriously
at whether herding could be part of their decision-making behavior and to see how
that might imprint on the dynamics of price formation.

7.5.2 Change Blindness and Large Market Movements

Let us try to get a first idea why large movements in the world’s stock exchanges
should play a special role, and also try to understand the role they may play in
clustering of market movements [126]. We can do this by calculating the conditional
probability that the daily return of a given country’s stock market, viz.,

R = log [ p(t_close) / p(t_close − 1) ] ,
has the same sign as the daily return of the world market [6]. This is shown in
Fig. 7.5 (left). Note that the return of the world market is calculated without taking
into account the given country’s return in order to exclude any artificial self-impact.
As can be seen from the figure, on days when there is little overall price
movement in the stock markets worldwide, a given country’s price movement is
not correlated with the general worldwide price movements, since the probability
that it moves in the same direction as the world index is close to 0.5, i.e., random.
However, as we can also see from Fig. 7.5 (left), on days with big price movements
worldwide, it is very likely that a given country’s stock market will follow with
a price movement in the same direction as the rest of the world. And the larger
the price movement worldwide, the stronger the tendency. Put another way, on
days with small price movements worldwide, there is little coherence among
different countries’ performances, but clustering or herding is seen on days with
large movements worldwide, with essentially all stock markets moving in the same
upward or downward direction.
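The conditional probability in Fig. 7.5 (left) can be estimated directly from daily index returns, as sketched below. The return and capitalization arrays are assumed inputs; the synthetic data at the end, with a common worldwide factor added to every country, merely stands in for real index data.

import numpy as np

def same_sign_probability(R, K, i, bins):
    # Probability that country i's daily return has the same sign as the
    # capitalization-weighted world return R_m computed WITHOUT country i,
    # conditioned on the size of R_m.  R is a (days x countries) array of
    # returns and K the vector of index capitalizations (assumed inputs).
    others = [j for j in range(R.shape[1]) if j != i]
    Rm = R[:, others] @ K[others] / K[others].sum()
    same = np.sign(R[:, i]) == np.sign(Rm)
    which = np.digitize(Rm, bins)
    return np.array([same[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

rng = np.random.default_rng(3)
R = 0.01 * rng.standard_normal((2500, 24)) \
    + 0.02 * rng.standard_normal((2500, 1))    # common world factor
K = rng.uniform(0.1, 10.0, 24)
probs = same_sign_probability(R, K, i=0, bins=np.linspace(-0.06, 0.04, 11))
# probs rises toward 1 in the extreme bins, reproducing the shape of Fig. 7.5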
This is in agreement with experiments made in psychology which have shown
that humans react disproportionately to big changes, a phenomenon called change
blindness, since only large changes are taken into account, while small changes
go unnoticed [71, 84, 116]. Change blindness has been reported in laboratory
experiments, even when participants are actively looking for changes. When small
rapid changes occur to photographs, observers often miss these changes, provided
Fig. 7.5 How a given stock market follows the price movements of other stock markets. Left: Conditional probability that the daily return R_i of a given country's stock market index has the same sign as the world market return defined by R_m ≡ Σ_{j≠i} K_j R_j / Σ_{j≠i} K_j, where K_j is the capitalization of the jth country's index. Right: Conditional probability that the close–open return R_i of a given country's stock market index following a US open–close has the same sign as the US open–close return (+ European markets, ○ Asian markets)

that the change is made during a saccade [60], a flashed blank screen [108], a blink
[116], or some other visual disruption [106]. For a review on change blindness, see,
e.g., [128].

7.5.3 Price-Quakes of the Markets

The empirical data in Fig. 7.5 (left) suggests that a threshold-like price dynamics
is created by traders, in which (i) small price changes worldwide go unnoticed but
(ii) large price changes worldwide are used to some extent in the local pricing of
a given country’s stock market. According to Per Bak, such threshold dynamics
is the hallmark of SOC, and it suggests the intriguing possibility of using SOC
techniques to gain insight into pricing in the world’s stock markets at the system
level. This is the core idea behind the tremor price dynamics (TPD) model which
is defined briefly below and described in more detail in the Appendix. The TPD
model considers pricing in the world’s stock markets as a collective phenomenon
in which traders take into account large movements or large cumulative movements
(say, over several days) in other countries as a yardstick in the pricing of a domestic
stock market.
We shall be interested here in understanding how pricing takes place at two of the
most important moments during the day: the open and close of a stock market. These
are the two moments when trading volume typically peaks, exceeded otherwise only at certain times on days when there are important releases of economic
news. Taking trading volume as proxy for importance, it therefore makes sense to
focus on the close and opening as the moments where news gets priced in.
The idea behind the TPD model is as follows. Imagine a trader who, at the
opening of the Tokyo stock exchange, tries to price in new worldwide information
coming from the other stock exchanges about what has happened since the markets
last closed in Tokyo. It seems natural to assume that she/he does so by taking
into account both the release of local economic news in Japan (news that has
come out since the previous day’s close) and also by seeking out news about how
other markets have performed since the markets closed in Tokyo. Because of time
zone differences, new information at the opening in Tokyo would therefore include
the price difference between the open and close the day before for the European
and American markets. To take into account new information from the Australian
market, however, this would include the price difference between the close of the
day before and the open the same day, since this market is the first market to open
worldwide, and opens before the Japanese markets.
We now postulate a universal behavioral mechanism for the pricing by traders,
evaluating two different terms in order to know the return R_i of the market i since the last close or open. The formal definition of the pricing of the model which
explains the two terms is shown in the box.

The return function is given by

R_i(t) = (1/N_i) Σ_{j≠i}^N α_ij Θ( |R_j^cum(t − 1)| − R_C ) R_j^cum(t − 1) β_ij + ε_i(t) .    (7.7)

Equation (7.7) describes the essential part of the TPD model. A more detailed
description is given in the Appendix.

According to (7.7), traders evaluate two things in order to know how to find the
proper return of the market at the open or close. These two contributions are:
• Local economic news described by the ε_i term in (7.7) since the last close or opening. This is news that is only relevant for stock exchange i, such as decisions made by the central bank about a given country's interest rates, or for example, statistics about unemployment rates, or other economic news for country i, like the GDP. If we now think about the analogy with the sand pile model, ε_i should be considered as the slow variable, with trends on a long time scale of months or years. The effect of ε_i is to describe perturbations made to stock exchange i, just as we saw how dropping a grain of sand in (7.2) would perturb the sand pile.
Fig. 7.6 Price-quakes. An illustration of how the threshold effect in the TPD model (7.7) can lead
to price-quakes, in this case with Japan as the epicenter (Courtesy of Lucia Bellenzier)

• Big cumulative changes from other stock exchanges, weighted in terms of their importance (in terms of capitalization, described by the α_ij term) and their relatedness (in terms of geographical position, representing, e.g., overlap of common economic affairs or importance as trading partners, described by the β_ij term). This is described by the first term in (7.7), i.e., the long expression given by the sum. Since the Heaviside Θ-function takes the value 1 whenever its argument is greater than zero and zero otherwise, it describes the same threshold dynamics that we saw in the sand pile model. Whenever the cumulative return of a foreign stock exchange j exceeds the threshold R_C, traders at stock exchange i get a contribution to R_i from exchange j, just as critical sand piles would redistribute their grains to neighboring sites via (7.4)–(7.5). The dynamics described by the sum in (7.7) therefore corresponds to the fast variable, with avalanches releasing stresses built up by the slow (economic) variables mentioned above.
Figure 7.6 illustrates the idea behind the model with one example of how price
movements can propagate across the globe.
The reasoning behind (7.7) goes as follows. At the opening/close of a given stock exchange i, traders price in new internal economic news since the last close/opening via the ε_i term. The first term in (7.7) describes the fact that traders look up what has happened in other stock markets worldwide, but it is only when a sufficiently large (possibly cumulative) price move happens in another stock exchange j that it has an influence on the stock exchange i. The use of the Heaviside Θ-function ensures that the first term in (7.7) is zero for small cumulative moves of stock exchange j, i.e., in this case, stock exchange i does not feel any influence from stock exchange j. However, the pricing of stock exchange i receives the contribution α_ij R_j^cum β_ij when a sufficiently large cumulative move (> R_C) occurs at stock exchange j.

The two coefficients α_ij, β_ij (explained further in the Appendix) describe how big an influence a price move of stock index j can have on the given stock index i.
It is important to note that α_ij is asymmetric, i.e., α_ij ≠ α_ji, since the impact of a big price movement of stock index i on another stock index j is not generally the same as the impact of the same big price movement of stock index j on stock index i. It should also be mentioned that, when a big (possibly cumulative) price move of stock exchange j has had an impact on stock exchange i, it gets priced in. The 'stress' due to the large cumulative move of stock exchange j is thereby released and R_j^cum is set equal to zero, just as overcritical sites in the sand pile became stable after redistribution of sand grains.
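To make the mechanism concrete, here is a minimal one-step sketch of the update (7.7). The choice of N_i, the use of the absolute value in the threshold, and the release rule are our simplified reading of the model; the full specification is given in the Appendix.

import numpy as np

def tpd_step(R_cum, alpha, beta, R_C, eps):
    # One update of the tremor price dynamics (7.7): alpha and beta are
    # N x N impact matrices, R_C the threshold, eps the local news shocks.
    # Only foreign cumulative moves exceeding R_C in magnitude contribute,
    # and such moves are 'priced in' (reset to zero) once they propagate.
    N = len(R_cum)
    big = np.abs(R_cum) > R_C              # Heaviside threshold in (7.7)
    R = np.empty(N)
    for i in range(N):
        mask = big.copy()
        mask[i] = False                    # an exchange has no self-impact
        n_i = max(mask.sum(), 1)           # simple choice for N_i (assumption)
        R[i] = (alpha[i] * beta[i] * R_cum)[mask].sum() / n_i + eps[i]
    R_cum = np.where(big, 0.0, R_cum) + R  # release the priced-in stress
    return R, R_cum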
We would like to point to the similarity between the structure of (7.7) and the
capital asset pricing model, since (7.7) predicts how an individual stock exchange
should be priced in terms of the performance of the global market of exchanges,
but with human behavioral characteristics included in the pricing. Each stock index
is composed of a given number of stocks. As such, each index, or block, can itself
be thought of as a spring–block system (for the definition of such a system, see
the Appendix), where each block in the sub-spring–block system now represents a
given stock. This opens the possibility for a hierarchical description of the world’s
stock exchanges, where the stress on a given stock can either influence another stock
in another index directly, or indirectly through its influence on its index, and it can
influence other indices and hence other stocks worldwide.
The processing of news is a key building block in the model. However, news
will not always influence one stock index directly, but can instead influence single
stocks, sectors, or groups of stocks, possibly in different markets and at the same
time. Therefore, the model may be viewed as a kind of factor model. Idiosyncratic
shocks may have almost no effect on the index, since they may to a certain extent
average out, but other factors (interest rates, oil prices, labor markets) with an impact
on many stocks may have a noticeable effect on the index, possibly on all indices.
As a result, a large movement of an index is likely to stem from the impact of an
important factor, which is then also likely to have an impact on stocks in other
markets. The model can be thought of as filtering for large index movements, which
of course may happen jointly in many markets, because they are caused by the same
factor.
Yet another interpretation of (7.7) is to view the world’s financial system as a
set of coupled oscillators. The oscillations happen because stresses are gradually
built up and then released (see the equations in the Appendix). Each stock exchange
can therefore be seen as oscillating with a given frequency, and this oscillation can
in turn influence the frequencies of the other oscillators in the system, leading to
synchronization effects.

To recapitulate, the TPD considers two terms to have importance in the pricing
of the stocks on the index i of a given country at the moment of the opening
or close:

• Internal economic news, released since the last close or opening, only
relevant for the specific index i, and
• Large external (possibly cumulative) price movements of index j since the
last close (or opening) having an impact on index i.

The model assumes that the impact α_ij that a given country j can have on
another country i is determined by the relative value of the capitalizations
K_i, K_j of the two indices, and in fact by an expression of the form

$$\alpha_{ij} = 1 - \exp\!\left(-\frac{K_j}{K_i\,\gamma}\right).$$

A large value of γ, i.e., γ ≫ 1, then corresponds to a network of the world's
indices that is dominated by the index with the largest capitalization K_max,
since then α_ij ≈ K_j/(K_i γ), i.e., proportional to the capitalization K_j of the
influencing index. At the present time, this is the US financial market, so
choosing γ large corresponds to the case where pricing in any country only
takes into account the movements of the US markets as external information.
On the other hand, a small value of γ, i.e., γ ≪ 1, corresponds to a network of
indices with equal strengths, since α_ij becomes independent of i, j.

In addition, the TPD assumes that countries which are geographically close
also have larger economic interdependence, as described by the coefficient

$$\beta_{ij} = \exp\!\left(-\frac{|z_i - z_j|}{\lambda}\right),$$

where |z_i − z_j| is the time zone difference between countries i and j and λ
gives the scale over which this interdependence declines. Small λ, i.e., λ ≪ 1,
then corresponds to a world where only indices in the same time zone are
relevant for the pricing, whereas large λ, i.e., λ ≫ 1, describes a global
influence in the pricing that is independent of time zone differences. The
structure of the TPD pricing formula is similar to the capital asset pricing
model, since it predicts how an individual stock exchange should be priced
in terms of the performance of the global market of exchanges. The difference
is that human behavioral characteristics are now included in the pricing.
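As an illustration of these two coefficients, the following sketch (ours, not the authors' code; the parameter defaults anticipate the values γ = 0.8 and λ = 20 estimated in the Appendix, and the example capitalizations and time zones are hypothetical) builds the full coupling matrices from capitalizations and time zones:

```python
import numpy as np

def coupling_matrices(K, z, gamma=0.8, lam=20.0):
    """alpha[i, j] = 1 - exp(-K_j / (K_i * gamma)): impact of j on i grows
    with j's relative capitalization. beta[i, j] = exp(-|z_i - z_j| / lam):
    impact decays with the time zone difference on the scale lam."""
    K, z = np.asarray(K, float), np.asarray(z, float)
    alpha = 1.0 - np.exp(-K[None, :] / (K[:, None] * gamma))
    beta = np.exp(-np.abs(z[:, None] - z[None, :]) / lam)
    return alpha, beta

# three hypothetical exchanges: capitalizations (arbitrary units) and
# time zones (hours relative to GMT)
alpha, beta = coupling_matrices(K=[15.0, 4.0, 3.0], z=[-5.0, 0.0, 9.0])
```

Note that alpha is asymmetric (α_ij ≠ α_ji, as stressed earlier), while beta is symmetric in i and j.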

One can check the specific assumption in the TPD that large movements of large
capital indices should have a particular impact on smaller capital indices. The open–
close return of the US stock market gives a clear case in which to check for such a
large-move impact. Since the Asian markets close before the opening of the US
markets, they can only price in this information when they open on the following day.
Any large US open–close move should therefore have a clear impact on the
following close–open of the Asian markets. In contrast, the European markets are
still open when the US market opens in the morning, so the European markets
have access to part of the history of the open–close of the US markets. A
large US open–close move would therefore still be expected to have an impact
on the following close–open of the European markets, but less than for the Asian
markets, since part of the US move would already be priced in when the European
markets closed. Since the opening of the Asian markets could by itself influence the
opening of the European markets, this could also distort the impact coming from the
US markets. Figure 7.5 (right) illustrates again the crucial part of the assumption in
our model that large moves are indeed special and have an impact across markets.
As expected, this effect is seen more clearly for the Asian markets than for the
European markets.

7.5.4 Price-Quakes in the Worldwide Network of Financial Markets

By analogy with earthquakes, we now introduce a measure for the strength of
'seismic' activity in the world network of stock exchanges. To do this, we suggest
considering each stock exchange as a seismograph which can at any moment in
time measure the amplitude of the 'wave of stress' imposed on it by large price
movements on other stock exchanges worldwide.

The relevant quantity is R_i^transfer, given by

$$R_i^{\mathrm{transfer}}(t) = \frac{1}{N_i} \sum_{j \neq i} \alpha_{ij}\, \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right) R_j^{\mathrm{cum}}(t-1)\, \beta_{ij}\,, \qquad (7.8)$$

which describes the part of the return of a given stock index i that is attributed
to large movements of other stock indices. This is just the first term in (7.7).

The global ‘seismic’ activity on the world’s stock exchanges can then be determined
at any moment as an average of the values recorded on each of the seismographs
worldwide, which leads to the definition
X
A.t/  Ritransfer .t/ :
i

Using A(t) defined in this way, one can investigate whether such activity could be
used to characterize special periods with high 'tremor' activity on the world's stock
exchanges.
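In code, the seismograph reading (7.8) and the global activity A(t) might be computed as follows (a sketch under our own conventions, reusing coupling matrices of the form built in the earlier sketch; the threshold default is the fitted value quoted later):

```python
import numpy as np

def seismic_activity(R_cum, alpha, beta, R_C=0.03):
    """Eq. (7.8) for each exchange i, plus the global activity A(t).
    R_cum holds the current cumulative returns of the N exchanges."""
    R_cum = np.asarray(R_cum, float)
    N = len(R_cum)
    R_transfer = np.zeros(N)
    for i in range(N):
        # indices j above threshold: the Heaviside condition in Eq. (7.8)
        active = [j for j in range(N) if j != i and R_cum[j] > R_C]
        if active:  # the 1/N_i prefactor makes this an average
            R_transfer[i] = np.mean(
                [alpha[i, j] * R_cum[j] * beta[i, j] for j in active])
    return R_transfer, R_transfer.sum()  # A(t) = sum_i R_i^transfer(t)
```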
Before one can use (7.8) to determine 'tremor' activity, one must first estimate
the parameters of the model. A description of how to do so can be found in the
Appendix. Let us mention that, given the optimal parameters for the model, we
predicted the sign of the open or close for each stock exchange using the sign
of R_i^transfer, which describes the part of the return of a given stock index i that is
attributed to large movements on other stock indices. Using a total of 58,244 events,
we found a very convincing 63 % success rate in predicting the sign of the return of
the open or close of a given stock exchange by back-testing.

Fig. 7.7 Seismographic activity of price-quakes A(t) (thin solid line), together with the world
return index, normalized according to the capitalization of the different stock indices (thick solid
line), plotted over time (days)
Figure 7.7 shows recordings of the ‘tremor’ activity in the world’s stock
exchanges. It resembles seismograph recordings of tectonic plate movements. The
large event tail of the probability distribution function of this activity exhibits the
familiar power law behavior, as seen for the seismic activity of earthquakes [6]. As
mentioned in [11], memory effects at the system level are generated dynamically
in SOC systems. Only when a SOC system has entered a steady state does the
system exhibit long-range correlations with power law events. In this sense the large
event tail of the probability distribution function signals the presence of memory
effects and a steady state of the global network of stock exchanges. Most notable
is a striking tendency for large ‘tremor’ activity during down periods of the market.
That is, the collective response of the worldwide network of stock exchanges seems
to be stronger, with larger ‘price-quakes’ (positive as well as negative), when the
world is in a bear market phase, as compared to when it is in a bull market
phase.

7.5.5 Discussion

We have introduced a new model for pricing on the world’s stock exchanges
that uses ideas from finance [6], physics, and psychology. The model is an
extended version of the Burridge–Knopoff model originally introduced to describe
tectonic plate movements. We have used an analogy with earthquakes to get a new
understanding of the buildup and release of stress in the world network of stock
exchanges, and we have introduced a measure that correctly captures the enhanced
activity of price movements, observed especially during bear markets.
In this sense our measure of ‘seismic activity’ gives yet another measure for
assessing phases of systemic risk, much like the principal components analysis
measure of [19] and the index cohesive force of [77]. However, it would also
be interesting to use our model to investigate ‘tipping points’, using scenario
analysis to determine particularly dangerous moments of contagion in the financial
system. Nonlinearity enters the model as the human behavioral tendency to react
disproportionately to big changes. As predicted, such a nonlinear response was
observed in the impact of pricing from one country to another. The nonlinear
response allows a classification of price movements on a given stock index as either
exogenously generated due to economic news specific to the country in question, or
endogenously created by the ensemble of the world’s stock exchanges reacting as a
complex system. This approach could shed new light on the risk of systemic failure
when large financial price-quakes propagate worldwide [6].

Appendix

At time t, a trader on a given stock exchange i estimates the price P_i(t) of
the index as P_i(t) = P_i(t−1) exp[R_i(t)], where R_i(t) is the return of stock
exchange i between times t−1 and t, as given by

$$R_i(t) = \frac{1}{N_i} \sum_{j \neq i} \alpha_{ij}\, \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right) R_j^{\mathrm{cum}}(t-1)\, \beta_{ij} + \eta_i(t)\,, \qquad (7.9)$$

$$R_j^{\mathrm{cum}}(t) = \left[1 - \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right)\right] R_j^{\mathrm{cum}}(t-1) + R_j(t)\,, \qquad (7.10)$$

$$N_i = \sum_{j \neq i} \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right), \quad \alpha_{ij} = 1 - \exp\!\left(-\frac{K_j}{K_i\,\gamma}\right), \quad \beta_{ij} = \exp\!\left(-\frac{|z_i - z_j|}{\lambda}\right), \qquad (7.11)$$

and N is the total number of stock exchanges. The second term in (7.9),
η_i, represents internal economic news only relevant for the specific index
i, whereas the first term in (7.9) describes external news, with large price
movements on index j having an impact on index i. Here t stands for the time
of the close (open) of exchange i, while t−1 is the time of the last known
information (close or open) of exchange j, as known at time t. The coefficient
α_ij describes the influence of stock index j on stock index i in
terms of the relative value of the capitalizations K_i, K_j of the two indices.
A large value of γ (γ ≫ 1) then corresponds to a network of the world's
indices with dominance of the index with the largest capitalization K_max.
At the present time this is the US financial market, so choosing γ large
corresponds to the case where pricing in any country only takes into account
the movements of the US markets as external information. In contrast, a small
value of γ (γ ≪ 1) corresponds to a network of indices with equal strengths,
since α_ij becomes independent of i, j.

In addition, we assume that countries that are geographically close also
have greater economic interdependence, as described by the coefficient β_ij,
where |z_i − z_j| is the time zone difference between countries i, j. The quantity
λ gives the scale over which this interdependence declines. Small λ (λ ≪ 1)
then corresponds to a world where only indices in the same time zone are
relevant for the pricing, whereas large λ (λ ≫ 1) describes a global influence
in the pricing, regardless of the time zone difference.

The structure of (7.9) is similar to the capital asset pricing model, since
it predicts how an individual stock index should be priced in terms of the
performance of the global market of exchanges, but with human behavioral
characteristics included in the pricing. The five parameters of the model are:
the total number of stock exchanges N, the return threshold R_C, the time scale
λ of the impact across time zones, the scale γ of the impact from capitalization,
and the standard deviation σ of the noise term η_i.

The function θ is the Heaviside step function, so only when the cumulative
return R_j^cum of index j exceeds the threshold R_C does index j have a
possible impact on the pricing of index i (depending on α_ij, β_ij). The factor
1/N_i in (7.9) means that index i takes into account an average impact among
the indices j that fulfill the condition imposed by the Heaviside function.
Equation (7.10) includes the key assumption in finance that, when new
information arrives, it gets priced in, i.e., incorporated into the price of the
index. That is, after the information that R_j^cum > R_C has come out and had
its impact on index i, this information is deleted, i.e., R_j^cum → 0. It should
be noted, however, that memory effects are present in the model, since it is
the cumulative 'stress' that determines when a block 'slips'. In self-organized
critical (SOC) systems, memory is known to be an essential ingredient for the
criticality of the system [11].
Formally, (7.9)–(7.11) describe a two-dimensional Burridge–Knopoff (BK)
model of tectonic plate motion [81, 105]. It can be seen as an extension of
the 2D Olami–Feder–Christensen (OFC) model [105]: here each block is
connected to all other blocks with i, j-dependent coupling constants
C_ij = α_ij β_ij, whereas in the OFC model each block is only connected to its
four neighbors and has only three (x, y, z-dependent) coupling constants. In
addition, in our model, out-of-plane stresses are introduced randomly (in both
sign and magnitude) via η_i for each block, in contrast to the constant (same
sign) pull of the OFC model. Equations (7.9)–(7.10) thus provide an
interesting perspective on the world's financial system as a complex system
with self-organizing dynamics, and possibly avalanche dynamics similar to
what is observed for earthquakes.
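To convey how (7.9)–(7.10) build up and release stress, here is a minimal simulation sketch (our own simplification, not the authors' code: all exchanges are updated synchronously and the news term η_i is drawn i.i.d. Gaussian, with the parameter values quoted below):

```python
import numpy as np

def simulate_tpd(alpha, beta, T=2000, R_C=0.03, sigma=0.0006**0.5, seed=0):
    """Synchronous sketch of Eqs. (7.9)-(7.10) for N coupled exchanges."""
    rng = np.random.default_rng(seed)
    N = alpha.shape[0]
    R_cum = np.zeros(N)
    returns = []
    for t in range(T):
        R = rng.normal(0.0, sigma, size=N)  # news term eta_i of Eq. (7.9)
        for i in range(N):
            active = [j for j in range(N) if j != i and R_cum[j] > R_C]
            if active:  # averaged threshold contribution of Eq. (7.9)
                R[i] += np.mean([alpha[i, j] * R_cum[j] * beta[i, j]
                                 for j in active])
        # Eq. (7.10): released 'stress' is priced in and reset to zero,
        # then the new return is added to the cumulative return
        R_cum = np.where(R_cum > R_C, 0.0, R_cum) + R
        returns.append(R)
    return np.array(returns)
```

Long quiet stretches in which the R_j^cum slowly accumulate are punctuated by steps in which one exchange crossing the threshold triggers contributions to the others, the analogue of an avalanche in the sand pile.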
The parameters of the model can be estimated by maximum likelihood
analysis (see below). As an additional check on our assumptions in (7.9)–
(7.11), we have constructed the difference

$$\eta_i = R_i(t) - \frac{1}{N-1} \sum_{j \neq i} \alpha_{ij}\, \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right) R_j^{\mathrm{cum}}(t-1)\, \beta_{ij}$$

from the empirical data of 24 of the world's leading stock exchanges, using
daily data since the year 2000. According to (7.9), this difference should have
a Gaussian distribution. We found the optimal parameters to be

$$\gamma = 0.8\,, \qquad \lambda = 20.0\,, \qquad R_C = 0.03\,, \qquad \sigma^2 = 0.0006\,.$$

Figure 7.8 shows that, for these parameter choices, our definition of price
movements due to external (random) news does indeed fit a normal distribution.
The values obtained for the optimal parameters suggest a fairly 'global'
network of stock exchanges, with a significant influence on pricing across time
zones, while pricing is not dominated solely by the index with the highest
capitalization. In principle, this seems to be in good agreement with expectations.
Furthermore, the value of R_C is consistent with the estimate one can obtain
independently by visual inspection of Fig. 7.5. Maximum likelihood estimation
can be used to slave either γ or σ to the remaining three parameters [6].
Slaving γ to R_C, λ, and σ, one finds:

$$C_i(t) \equiv \sum_{j \neq i} \frac{K_j}{K_i}\, \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right) R_j^{\mathrm{cum}}(t-1)\, e^{-(t_i - t_j)/\lambda}\,, \qquad (7.12)$$

$$C_i'(t) \equiv \sum_{j \neq i} \left(\frac{K_j}{K_i}\right)^{\!2} \theta\!\left(R_j^{\mathrm{cum}}(t-1) - R_C\right) R_j^{\mathrm{cum}}(t-1)\, e^{-(t_i - t_j)/\lambda}\,, \qquad (7.13)$$

$$\frac{1}{\gamma} = \frac{\displaystyle\sum_{t=1}^{T} \sum_{i=1}^{N} \frac{1}{N_i}\left(R_i^{\mathrm{data}} - \eta_i(t)\right) C_i(t-1)}{\displaystyle\sum_{t=1}^{T} \sum_{i=1}^{N} \left[\frac{1}{N_i^2}\, C_i(t-1)^2 - \left(\eta_i(t) - R_i^{\mathrm{data}}\right) C_i'(t-1)\right]}\,. \qquad (7.14)$$

Fig. 7.8 Impact of change blindness on market prices. Circles: observed returns R_i. Plus signs:
the term R_i^transfer arising due to change blindness. Squares: the difference η_i ≡ R_i − R_i^transfer
which, according to (7.9)–(7.11), should be Gaussian distributed. Solid line: Gaussian distribution.
The number of events is plotted against the returns, on a logarithmic scale
8 Communication and the Stock Market

8.1 Introduction

Throughout this book we have described how interaction between market par-
ticipants can occur through the price, with choices by market participants that
depend on the price trajectories created by other traders. In the last chapter, we
also discussed how price formation results from the social dynamics between
large groups of individuals, where one financial market uses the outcome of other
financial markets in order to decide how to price an asset properly.
As we have suggested, the social dynamics of communicating individuals can
also be important with respect to the fixing of price levels in financial markets.
The reason is that human decisions are almost always made in the social context
of other individuals. To elaborate further on this, let us go back to the quote from
Graham [59] in the first chapter: “In the short run, the market is a voting machine,
but in the long run it is a weighing machine.” Graham intended to argue that
investors should use a fundamental value investment approach and concentrate on
accurately analyzing the worth of a given financial asset. His idea was to ignore
the short run ‘voting machine’ part, and instead concentrate on the long run, where
somehow the ‘weighing machine’ of the market would ensure that we end up with a
price corresponding to the true worth of the asset. The interesting part of Graham’s
quote, however, is the allusion to human decision-making and its impact on the
markets. But Graham does not refer to the way the decision-making process actually
takes place, and this will be our focus in the following.
In situations of uncertainty, people often consult others to get more information
and thereby a better understanding of the situation. This is particularly true of the
financial markets, where market participants consult the media or other colleagues
to get an idea of the origin behind price movements, or to assess the impact a given
piece of information could have on the markets. An individual may ask others for
advice or information, and several individuals may discuss different companies,
stocks, and investment options, creating a so-called shared reality that guides
their individual decisions. As mentioned in the first chapter, copying the behavior
of others is one of the predominant mechanisms in decisions to purchase [18].
Communication and opinion formation among market participants therefore seem
to be an important ingredient in any financial market. But how can one quantify such
a scenario? In the following, we will briefly introduce a model that does exactly this.

8.2 A Model of Communication and Its Impact on Market Prices

Consider a population of market participants, shown schematically in Fig. 8.1A,
where for simplicity we imagine that people have just two different opinions on
the market, which we can characterize as either bullish or bearish. Figure 8.1A
represents the opinions of the participants at the beginning of a given day. During
the day people meet in subgroups, as shown in Fig. 8.1B, to update their view of
the market. Figure 8.1C illustrates the case where a majority opinion in a given
subgroup manages to polarize the opinion of the group by changing the opinion of
those who had an opinion belonging to the minority. More realistically, one could
assume that there will be a certain probability for a majority opinion to prevail, or
even that under certain conditions a minority could persuade a part of the majority
to change their opinion.
The influence of the financial market on decision-making can now be included
in a natural way by letting the strength of persuasion depend on how the market has
performed since the last meeting of the market participants. The idea is that, if for
example the market had a dramatic downturn at the close yesterday, then in meetings
the next morning, those with a bearish view will be more likely to convince even a
bullish majority of their point of view. In the formal description below, this is taken
into account by letting the transition probabilities for a change of opinion, i.e., the
probabilities of transitions like that from Fig. 8.1B to Fig. 8.1C, depend on the
market return over the last period. A formal description of the model is given in
the box below.
Figure 8.2a gives one illustration of the link between the bullishness of market
participants obtained through communication (thin solid line) and market prices
(thick solid line). Let us mention also that the model can reproduce the so-called
stylized facts of financial markets [33], which are clustering of volatility (Fig. 8.2b),
fat-tailed returns (Fig. 8.2c), no arbitrage possibilities, i.e., zero autocorrelations of
returns (Fig. 8.2d), and long memory effects in volatility, i.e., nonzero autocorrela-
tions of volatility (Fig. 8.2d).

In the following, we proceed along the lines of the so-called Galam model
of opinion formation [53, 54], assuming a population of agents with two
opinions. Since we consider the case where the agents correspond to market
participants communicating their view on the market, we call the two opinions
'bullish' and 'bearish'. Letting B(t) denote the proportion of bullishness in
a population at time t, the proportion of bearishness is then 1 − B(t).


Fig. 8.1 Changing the bullishness in a population via communication in subgroups. (A) At the
beginning of a given day t, there is a certain percentage B(t) of bullishness (in the illustration,
50 % of the population are bullish, shown as black circles). (B) During the day, communication
takes place in random subgroups of different sizes. (C) Extreme case of complete polarization
created by a majority rule in opinion: communication in groups of different sizes leads to a
majority consensus in each group. (D) At the end of the day, the sentiment of the population as
a whole has changed due to communication (in the illustration, 45 % are now bullish). The link
between communication and pricing is made by allowing the price to depend on the change of
bullishness, and also letting the probability of a transition like (B) → (C) depend on the previous
day's market performance. This is formulated in (8.1)–(8.5). (Figure taken from [7])

At the beginning of each day, random groups of agents are formed. For a given group
of size k with j agents feeling bullish and k − j bearish, we let m_{k,j} denote
the transition probability for all k members to adopt the bullish opinion as a
result of their meeting. After one update, the probability of finding an agent
with a bullish view can therefore be written

$$B(t+1) = \sum_{j=0}^{k} m_{k,j}(t)\, C_j^k\, B(t)^j \left[1 - B(t)\right]^{k-j}, \qquad (8.1)$$

where

$$C_j^k \equiv \frac{k!}{j!\,(k-j)!}$$

Fig. 8.2 Reproducing the 'stylized facts'. (a) Example of how a change of bullishness B(t)
in a population (thin line) can have an impact on prices P(t) (thick solid line). (b) Volatility
clustering as a function of time. (c) 'Fat-tailed' returns, following a power law of slope α = −3.
(d) No arbitrage possibilities, i.e., zero autocorrelations of returns (thin solid line), and long time
memory effects in volatility, i.e., nonzero autocorrelations of volatility versus the time lag τ (thick
solid line). Parameter values are λ = 1.1, σ₀ = 0.01, α = 400, β = 0.001, μ = 0. (Figure taken
from [7])

are the binomial coefficients. Notice that the transition probabilities m_{k,j}(t)
depend on time, since we assume that they change as the market performance
changes (see below).
As illustrated in Fig. 8.1, communication takes place in groups of different
sizes. Taking the sum over the different groups, (8.1) generalizes to

$$B(t+1) = \sum_{k=1}^{L} a_k \sum_{j=0}^{k} m_{k,j}(t)\, C_j^k\, B(t)^j \left[1 - B(t)\right]^{k-j}, \qquad (8.2)$$

with

$$\sum_{k=1}^{L} a_k = 1\,, \qquad a_k \equiv \frac{1}{L}\,,$$
where L is the size of the largest group and the condition a_k ≡ 1/L expresses
the assumption of the same impact across groups.
The link between communication and its impact on the markets is taken
into account by assuming that the price return r(t) changes whenever there
is a change in the bullishness. The idea is that the bullishness itself is not
the relevant factor determining how prices will change. Those feeling bullish
would naturally already hold long positions on the market. Rather, when
people change their opinion, say becoming more negative about the market,
or less bullish, this will increase their tendency to sell. Assuming the return to
be proportional to the percentage change in bullishness as well as economic
news, the return r(t) is given by

$$r(t) = \frac{B(t) - B(t-1)}{B(t)} + \eta(t)\,, \qquad (8.3)$$

where η(t) represents daily economic news and is assumed to be Gaussian
distributed with mean μ = 0 and a standard deviation that varies as a function
of time depending on the changes in sentiment:

$$\sigma(t) = \sigma_0 \exp\!\left(\frac{\bigl|B(t) - B(t-1)\bigr|/B(t)}{\beta}\right). \qquad (8.4)$$

Finally, the feedback from financial market behavior to sentiment is obtained
via

$$m_{k,j}(t) = m_{k,j}(t-1) \exp\!\left(\frac{r(t)}{\alpha}\right), \qquad m_{k,j}(t=0) \equiv \frac{j}{k}\,, \qquad (8.5)$$

where the condition m_{k,j}(t = 0) ≡ j/k describes initially unbiased behavior.
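Putting (8.1)–(8.5) together, a minimal Monte Carlo sketch of one run of the model might look as follows (our own illustration, not the authors' code: the uniform group-size distribution, the synchronous update, and the clipping of m_{k,j} to [0, 1] are our assumptions; the parameter values are those of Fig. 8.2):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def one_day(B, m, L=6, alpha=400.0, beta=0.001, sigma0=0.01):
    """One synchronous update of Eqs. (8.2)-(8.5); m[k][j] is the
    probability that a group of size k with j bulls ends all-bullish."""
    # Eq. (8.2) with a_k = 1/L: expected bullishness after the meetings
    B_new = np.mean([
        sum(m[k][j] * math.comb(k, j) * B**j * (1 - B)**(k - j)
            for j in range(k + 1))
        for k in range(1, L + 1)])
    # Eq. (8.4): news volatility grows with the relative sentiment change
    sigma = sigma0 * math.exp(abs(B_new - B) / B_new / beta)
    # Eq. (8.3): return = relative change of bullishness + Gaussian news
    r = (B_new - B) / B_new + rng.normal(0.0, sigma)
    # Eq. (8.5): market feedback on the persuasion probabilities
    # (clipped to [0, 1] so that they remain probabilities)
    for k in m:
        for j in m[k]:
            m[k][j] = min(1.0, max(0.0, m[k][j] * math.exp(r / alpha)))
    return B_new, r

L = 6
m = {k: {j: j / k for j in range(k + 1)} for k in range(1, L + 1)}  # unbiased
B, prices = 0.5, [1.0]
for day in range(250):
    B, r = one_day(B, m, L=L)
    prices.append(prices[-1] * math.exp(r))  # price P(t) = P(t-1) exp[r(t)]
```

A down day makes all persuasion probabilities drift bearish via (8.5), which lowers B(t+1), which in turn feeds back into the return: this loop is what produces the volatility clustering of Fig. 8.2b.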

The main emphasis throughout this book has been to point out that the way prices are
discovered by the market is a sociological process. Price discovery and investment
decisions have been shown to happen in the following ways:
• Either market participants make a decision to buy or sell based on the past/present
price value of an asset, or
• Through communication with other market participants, a decision is made
which triggers a trade.
We have coined the term ‘socio-finance’ to encompass such a description.
9 References

1. G.A. Akerlof, R.J. Shiller, Animal Spirits: How Human Psychology Drives the Economy and
Why It Matters for Global Capitalism (Princeton University Press, Princeton, 2009)
2. J.V. Andersen, D. Sornette, Have your cake and eat it too: increasing returns while lowering
large risks! J. Risk Financ. 2, 70 (Spring 2001). See also D. Sornette, J.V. Andersen,
P. Simonetti, Minimizing volatility increases large risks. Int. J. Theor. Appl. Financ. 3(3),
523–535 (2000). For a general review of the method, see D. Sornette, P. Simonetti, J.V.
Andersen, φq-field theory for portfolio optimisation: 'Fat tails' and non-linear correlations.
Phys. Rep. 335(2), 19–92 (2000). For a quantitative treatment of the method, see [89]
3. J.V. Andersen, D. Sornette, Fearless versus fearful speculative financial bubbles. Physica A
337, 565 (2004)
4. J.V. Andersen, D. Sornette, Predicting failure using conditioning on damage history: demon-
stration on percolation and hierarchical fiber bundles. Phys. Rev. E 72, 056124 (2005). The
article can be retrieved from the site arXiv.org/abs/cond-mat/0508424
5. J.V. Andersen, S. Gluzman, D. Sornette, Fundamental framework for technical analysis of
market prices. Eur. Phys. J. B 14, 579–601 (2000)
6. J.V. Andersen, A. Nowak, G. Rotundo, L. Parrot, S. Martinez, Price-quakes shaking
the world’s stock exchanges. PLoS ONE 6(11), e26472 (2011). doi:10.1371/journal.pone.
0026472. Available from www.plosone/article/info and arxiv.org/abs/0912.3771
7. J.V. Andersen, S. Galam, V. Ioannis, P. Dellaportas, A socio-financial pricing model of
communication. PLoS ONE (2013, submitted)
8. W. Antweiler, M.Z. Frank, Is all that talk just noise? The information content of internet stock
message boards. J. Financ. 59(3), 1259–1294 (2004)
9. B. Arthur, Bounded rationality and inductive behavior (the El Farol problem). Am. Econ. Rev.
Pap. Proc. 84, 406 (1994)
10. K.-H. Bae, G.A. Karolyi, R.M. Stulz, A new approach to measuring financial contagion. Rev.
Financ. Stud. 16(3), 717–763 (2003)
11. P. Bak, How Nature Works: The Science of Self-Organized Criticality (Copernicus Press,
New York, 1996)
12. P. Bak, C. Tang, K. Wiesenfeld, Self-organized criticality: an explanation of 1/f noise. Phys.
Rev. Lett. 59, 381–384 (1987). doi:10.1103
13. E. Balogh, I. Simonsen, B. Nagy, Z. Neda, Persistent collective trends in stock markets. Phys.
Rev. E 82, 066113 (2010)
14. A. Bandura, Social Learning Theory (Prentice Hall, Englewood Cliffs, 1977)
15. R.H. Bates, Analytic Narratives (Princeton University Press, Princeton, 1998)
16. R.J. Bauer, J.R. Dahlquist, Technical Market Indicators, Analysis and Performance (Wiley,
New York, 1999)

17. P.L. Berger, T. Luckmann, Social Construction of Reality: A Treatise in the Sociology of
Knowledge (Anchor Books, Garden City, 1966)
18. S. Bikhchandani, D. Hirshleifer, I. Welch, Learning from the behavior of others: conformity,
fads, and informational cascades. J. Econ. Perspect. 12(3), 151–170 (1998)
19. M. Billio, G. Mila, A.W. Lo, L. Pelizzon, Measuring systemic risk in finance and insurance
sectors. MIT Sloan research paper no. 4774-10, Mar 2010. Available at SSRN: http://ssrn.
com/abstract=1571277
20. J.-P. Bouchaud, R. Cont, A Langevin approach to stock market fluctuations and crashes. Eur.
Phys. J. B 6, 543–550 (1998)
21. Buttonwood – Betting on Ben. The Economist, 19 Feb 2011, p. 83
22. By a market share is understood a portfolio of stocks from which the financial market index
is composed
23. T.N. Carracher, D.W. Carracher, A.D. Schliemann, Mathematics in the streets and at school.
Br. J. Dev. Psychol. 3, 21–29 (1985)
24. A. Cavagna, Phys. Rev. E 59, R3783 (1999)
25. D. Challet, R. Stinchcombe, Analyzing and modelling 1+1d markets. Physica A 300,
285–299 (2001)
26. D. Challet, Y.-C. Zhang, On the minority game: analytical and numerical studies. Physica A
256, 514 (1998)
27. D. Challet, Y.-C. Zhang, On the minority game: analytical and numerical studies. Physica
A 256, 514 (1998); Y.-C. Zhang, Modeling market mechanism with evolutionary games.
Europhys. News 29, 51 (1998)
28. L.K.C. Chan, J. Lakonishok, Institutional trades and intraday stock price behavior. J. Financ.
Econ. 33, 173 (1995)
29. H. Chen, V. Singal, Role of speculative short sales in price formation: case of the weekend
effect. J. Financ. 58, 685–706 (2003)
30. T. Chordia, R. Roll, A. Subrahmanyam, J. Financ. Econ. 65, 111 (2002)
31. N.A. Christakis, J.H. Fowler, Connected: The Surprising Power of Our Social Networks and
How They Shape Our Lives (Little Brown & Company, New York, 2009)
32. J.M. Coates, J. Herbert, Endogenous steroids and financial risk taking on a London
trading floor. Proc. Natl. Acad. Sci. 105(16), 6167–6172 (2008); J.M. Coates, M. Gurnell,
A. Rustichini, Second-to-fourth digit ratio predicts success among high-frequency financial
traders. Proc. Natl. Acad. Sci. 106(2), 623–628 (2009)
33. R. Cont, Empirical properties of asset returns: stylized facts and statistical issues. Quant.
Financ. 1(2), 223–235 (2001)
34. B. Czarniawska-Joerges, Narratives in Social Science Research (Sage, London/Thousand
Oaks, 2004)
35. A. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Conscious-
ness NY, US. (Harvest Books, New York, 2000)
36. B. DasGupta, L. Kaligounder, Global stability of financial networks against contagion:
measure, evaluation, and implications (2012). arXiv:1208.3789
37. O. De Bandt, P. Hartmann, Systemic risk: a survey. European central bank working paper
no. 35, Nov 2000. Available at SSRN: http://ssrn.com/abstract=258430
38. J.B. De Long, A. Schleifer, L.H. Summers, R.J. Waldmann, Noise trader risk in financial
markets. J. Pol. Econ. 98, 703–738 (1990)
39. J.B. De Long, A. Schleifer, L.H. Summers, R.J. Waldmann, The survival of noise traders in
financial markets. J. Bus. 64, 1–19 (1991)
40. E. Dimson, P. Marsh, M. Staunton, Credit Suisse Global Investment Returns Sourcebook 2010
(London Business School, Zurich, 2010)
41. A. Dodonova, Y. Khoroshilov, Anchoring and transaction utility: evidence from online
auctions. Appl. Econ. Lett. 11, 307 (2004)
42. G. Echterhoff, T. Higgins, J.M. Levine, Shared reality: experiencing commonality with
others’ inner states about the world. Perspect. Psychol. Sci. 4, 496–521 (2009)
43. Even though Storm Petersen used the phrase, it is not entirely sure who actually invented
it. See, for example, http://politiken.dk/pol_oplys/ECE122209/hvem-sagde-det-er-svaert-at-
spaa---isaer-om-fremtiden/
44. J.D. Farmer, Market force, ecology and evolution. Ind. Corp. Change 11(5), 895–953 (2002)
45. K.L. Fisher, M. Statman, Investor sentiment and stock returns. Financ. Anal. J. 56(2), 16–23
(2000)
46. For a central book on experimental economics, see for example, K. Binmore, Does Game
Theory Work? The Bargaining Challenge. Economic Learning and Social Evolution (MIT,
Cambridge, 2007)
47. For a large collection of free technical analysis materials, see http://decisionpoint.com/
48. For a review, see R.J. Shiller, From efficient market theory to behavioral finance, http://papers.
ssrn.com/abstract_id=349660
49. For discussion, see for example, S.F. Leroy, Excess volatility. UCSB working paper (2005).
Available at http://www.econ.ucsb.edu/~sleroy/downloads/excess.pdf
50. For some academic literature on technical analysis, see, e.g., J.A. Murphy, Futures fund
performance: a test of the effectiveness of technical analysis. J. Futures Mark. 6, 175–185
(Summer 1986); W. Brock, J. Lakonishok, B. LeBaron, Simple technical trading rules and
the stochastic properties of stock returns. J. Financ. 47, 1731–1764 (1992)
51. For some articles discussing such topics, see D. MacKenzie, Long-term capital management
and the sociology of arbitrage. Econ. Soc. 32(3), 349–380 (2003); D. Beunza, I. Hardie,
D. MacKenzie, A price is a social thing: towards a material sociology of arbitrage. Organ.
Stud. 27, 721 (2006); S.A. Ross, Neoclassical finance, alternative finance and the closed end
fund puzzle. Eur. Financ. Manage. 8(2), 129–137 (2002)
52. L. Gagnon, G.A. Karolyi, Price and volatility transmission across borders. Working paper
series 2006-5, Ohio State University, Charles A. Dice Center for Research in Financial
Economics. Available at http://ideas.repec.org/p/ecl/ohidic/2006-5.html
53. S. Galam, Local dynamics vs. social mechanisms: a unifying frame. Europhys. Lett. 70(6),
705–711 (2005)
54. S. Galam, A Physicist’s Modeling of Psycho-Political Phenomena (Springer, Berlin, 2012)
55. M. Gell-Mann, F.E. Low, Phys. Rev. 95(5), 1300 (1954)
56. I. Giardina, J.-P. Bouchaud, M. Mézard, Physica A 299, 28 (2001)
57. L. Gil, A simple algorithm based on fluctuations to play the market. Fluct. Noise Lett. 7(4),
L405–L418 (2007)
58. P.M. Gollwitzer, J.A. Bargh (eds.), The Psychology of Action. Linking Cognition and
Motivation to Behavior (Guilford Press, New York, 1996)
59. B. Graham (1894–1976) was an American economist who is quoted as saying that In the short
run, the market is a voting machine, but in the long run it is a weighing machine. He was a
mentor to investors like, e.g., Warren Buffett, especially because of the second part of this
statement, but in this book we shall place emphasis on the notion of the financial market as a
voting machine
60. J. Grimes, On the failure to detect changes in scenes across saccades, in Perception, ed. by
K. Akins. Vancouver Studies in Cognitive Science, vol. 2 (Oxford University Press, New
York, 1996), pp. 89–110
61. J.L. Grzelak, M. Poppe, Z. Czwartosz, A. Nowak, Numerical trap. A new look at outcome
representation in studies on choice behaviour. Eur. J. Soc. Psychol. 18(2), 143–159 (1988)
62. K. Heilmann, V. Laeger, A. Oehler, The disposition effect: evidence about the investor’s
aversion to realize losses, in Proceedings of the 25th Annual Colloquium, Wien (IAREP, 2000)
63. E.T. Higgins, Promotion and prevention: regulatory focus as a motivational principle. Adv.
Exp. Soc. Psychol. 30, 1–46 (1998)
64. R.W. Holthausen, R.W. Leftwich, D. Mayers, The effect of large block transactions on
security prices: a cross-sectional analysis. J. Financ. Econ. 19, 237 (1987)
65. C.H. Hommes, Complexity, evolution and learning: a simple story of heterogeneous expecta-
tions and some empirical and experimental validation. Technical report, CeNDEF, University
of Amsterdam, 2007
66. K. Hou, G.A. Karolyi, B.C. Kho, What factors drive global stock returns. Working paper
series 2006-9, Ohio State University, Charles A. Dice Center for Research in Financial
Economics. Available at http://ideas.repec.org/p/ecl/ohidic/2006-9.html
67. However, the field of econo-physics provides an exception. To find out more about this field,
we suggest the following: J.P. Bouchaud, M. Potters, Theory of Financial Risks (Cambridge
University Press, Cambridge/New York, 2000); N.F. Johnson, P. Jefferies, P.M. Hui, Financial
Market Complexity (Oxford University Press, Oxford/New York, 2003); R.N. Mantegna,
H.E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance
(Cambridge University Press, Cambridge/New York, 2000); J.L. McCauley, Dynamics of
Markets: Econophysics and Finance (Cambridge University Press, Cambridge/New York,
2004); B.M. Roehner, Patterns of Speculation: A Study in Observational Econophysics (Cam-
bridge University Press, Cambridge/New York, 2002); D. Sornette, Why Stock Markets Crash
(Critical Events in Complex Financial Systems) (Princeton University Press, Princeton, 2003)
68. International Monetary Fund Global Financial Stability Report, Apr 2009. Available at http://
www.imf.org/external/pubs/ft/gfsr/2009/01/pdf/text.pdf
69. N.F. Johnson, M. Hart, P. Hui, Crowd effects and volatility in markets with competing agents.
Physica A 269, 1–8 (1999); M. Hart, P. Jefferies, P. Hui, Crowd–anti-crowd theory of the
minority game. Physica A 298, 537–544 (2000)
70. P.N. Johnson-Laird, Mental Models: Towards a Cognitive Science of Language, Inference,
and Consciousness, vol. 6 (Harvard University Press, Cambridge, 1983)
71. R.H. Jones, D.H. Crowell, L.E. Kapuniai, Change detection model for serially correlated data.
Psychol. Bull. 71(5), 352–358 (1969)
72. L.P. Kadanoff, Scaling laws for Ising models near T_c. Physics 2, 263 (1966)
73. D. Kahneman, A. Tversky, Prospect theory: an analysis of decision-making under risk.
Econometrica 47(2), 263–292 (1979)
74. D. Kahneman, A. Tversky (eds.), Choices, Values, and Frames (Cambridge University Press,
Cambridge, 2000)
75. Y. Khoroshilov, A. Dodonova, Buying winners while holding on to losers: an experimental
study of investors’ behavior. Econ. Bull. 7(8), 1–8 (2007)
76. B. Knowlton, M.M. Grynbaum, Greenspan shocked that free markets are flawed. New York
Times, 23 Oct 2008
77. M. Kritzman, Y. Li, S. Page, R. Rigobon, Principal components as a measure of systemic
risk. MIT Sloan research paper no. 4774-10, July 2010. Available at SSRN: http://ssrn.com/
abstract=1633027
78. A. Kruglanski, Motivated closing of the mind. Psychol. Rev. 103, 263–283 (1996)
79. J. Lakonishok, A. Shleifer, R. Thaler, R.W. Vishny, The impact of institutional trading on
stock price. J. Financ. Econ. 32, 23 (1991)
80. D. Lamper, S.D. Howison, N.F. Johnson, Predictability of large future changes in a competi-
tive evolving population. Phys. Rev. Lett. 88, 017902 (2002)
81. K.-T. Leung, J. Möller, J.V. Andersen, Generalization of a two-dimensional Burridge–
Knopoff model of earthquakes. J. Phys. I 7, 423 (1997)
82. M. Lewenstein, A. Nowak, B. Latané, Statistical mechanics of social impact. Phys. Rev. A
45(2), 763 (1992); A. Nowak, M. Lewenstein, P. Frejlak, Dynamics of public opinion and
social change, in Chaos and Order in Nature and Theory, ed. by R. Hegselman, U. Miller
(Helbin, Vienna, 1996), pp. 54–78; A. Nowak, B. Latane, M. Lewenstein, Social dilemmas
exist in space, in Social Dilemmas and Cooperation, ed. by U. Schulz, W. Albers, U. Mueller
(Springer, Berlin/Heidelberg, 1994), pp. 269–289
83. K. Lewin, Resolving Social Conflicts and Field Theory in Social Science (American Psychol-
ogy Association, Washington, D.C., 1997)
84. T.D. Lewin, N. Momen, S.B. Drifdahl, D.J. Simons, Change blindness, the metacognitive
error of estimating change detection ability. Vision 7(1–3), 397–413 (2000)
85. H. Linke, M.T. Downton, M. Zuchermann, Chaos 15, 026111 (2005)
86. J. Lintner, The valuation of risk assets and selection of risky investments in stock portfolios
and capital budgets. Rev. Econ. Stat. 47(1), 13–37 (1965)
87. R. Lucas, Econometric policy evaluation: a critique, in The Philips Curve and Labor Markets,
ed. by K. Brunner, A. Meltzer. Carnegie-Rochester Conference Series on Public Policy, vol.
1 (American Elsevier, New York, 1976), pp. 19–46. ISBN:0444110070
88. K. Lund-Jensen, Monitoring systemic risk based on dynamic thresholds. IMF working paper
no. 12/159, June 2012. Available at SSRN: http://ssrn.com/abstract=2127539
89. Y. Malevergne, D. Sornette, Multi-moments method for portfolio management: generalized
capital asset pricing model in homogeneous and heterogeneous markets, in Multi-moment
Asset Allocation and Pricing Models, ed. by B. Maillet, E. Jurczenko (Wiley, Chichester/
Hoboken, 2006), pp. 165–193; Y. Malevergne, D. Sornette, Higher-moment portfolio theory
(capitalizing on behavioral anomalies of stock markets). J. Portf. Manage. 31(4), 49–55
(2005); Y. Malevergne, D. Sornette, Multivariate Weibull distributions for asset returns
I. Financ. Lett. 2(6), 16–32 (2004); Y. Malevergne, D. Sornette, High-order moments and
cumulants of multivariate Weibull asset return distributions: analytical theory and empirical
tests II. Financ. Lett. 3(1), 54–63 (2004)
90. A. Malinowski, Communication as a factor in investment decision-making. Masters thesis (in
Polish), Advanced School for Social Sciences and Humanities, 2010
91. H.J. Maris, L.P. Kadanoff, Teaching the renormalization group. Am. J. Phys. 46(6), 653–
657 (1978)
92. H. Markovitz, Portfolio Selection: Efficient Diversification of Investments (Wiley,
New York, 1959)
93. J.L. McCauley, K.E. Bassler, G.H. Gunaratne, Detrending data and the efficient market
hypothesis. Physica A 37, 202 (2008); K.E. Bassler, J.L. McCauley, G.H. Gunaratne,
Nonstationary increments, scaling distributions and variable diffusion processes in financial
markets. Proc. Natl. Acad. Sci. 104, 17297 (2007)
94. G.H. Mead, Mind, Self, and Society (University of Chicago Press, Chicago, 1934)
95. R. Mehra, E.C. Prescott, The equity premium: a puzzle. J. Monet. Econ. 15(2), 145–161
(1985)
96. R.C. Merton, Continuous-Time Finance (Blackwell, Cambridge, 1990)
97. D.M. Messick, C.G. McClintock, Motivational bases of choice in experimental games. J. Exp.
Soc. Psychol. 4(1), 1–25 (1968)
98. Momentum in financial markets: why Newton was wrong. The Economist, 8 Jan 2011, pp. 69–
70
99. S. Moscovici, The phenomenon of social representations. Soc. Represent. 3, 69 (1984)
100. B.I. Murstein, Regression to the mean: one of the most neglected but important concepts in
the stock market. J. Behav. Financ. 4, 234–237 (2003)
101. J.F. Muth, Rational expectations and the theory of price movements (1961); reprinted in
The New Classical Macroeconomics, Vol. 1. International Library of Critical Writings in
Economics, vol. 19 (Elgar, Aldershot, 1992), pp. 3–23
102. G.B. Northcraft, M.A. Neale, Experts, amateurs, and real estate: an anchoring and adjustment
perspective on property pricing decisions. Organ. Behav. Hum. Decis. Process. 39, 84 (1987)
103. A. Nowak, J. Szamrej, B. Latane, From private attitude to public opinion: a dynamic theory
of social impact. Psychol. Rev. 97, 362–376 (1990)
104. T. Odean, Are investors reluctant to realize their losses? J. Financ. 53, 1775–1798 (1998)
105. Z. Olami, H.J.S. Feder, K. Christensen, Self-organized criticality in a continuous, non-
conservative cellular automaton modeling earthquakes. Phys. Rev. Lett. 68, 1244–1247
(1992)
106. J.K. O’Regan, R.A. Rensink, J.J. Clark, Change-blindness as a result of mudsplashes. Nature
398(6722), 34 (1999)
107. J. Panksepp, Affective Neuroscience: The Foundations of Human and Animal Emotions, vol.
4 (Oxford University Press, New York/Oxford, 2004)
108. H. Pashler, Familiarity and visual change detection. Percept. Psychophys. 44(4), 369–
378 (1988)
109. P. Peigneux, P. Orban, E. Balteau, C. Degueldre, A. Luxen, Offline persistence of memory-
related cerebral activity during active wakefulness. PLoS Biol. 4(4), e100. doi:10.1371/
journal.pbio.0040100 (2006)
110. H.D. Platt, A fuller theory of short selling. J. Asset Manage. 5(1), 49–63 (2002)
111. V. Plerou, P. Gopikrishnan, L. Amaral, M. Meyer, H.E. Stanley, Scaling of the distribution
of fluctuations of financial market indices. Phys. Rev. E 60, 6519–6529 (1999); X. Gabaix,
P. Gopikrishnan, V. Plerou, H.E. Stanley, A theory of power law distributions in financial
market fluctuations. Nature 423, 267–270 (2003)
112. V. Plerou, P. Gopikrishnan, X. Gabaix, H.E. Stanley, Quantifying stock-price response to
demand fluctuations: a theory of the cubic laws of financial activity. Phys. Rev. E 66, 027104
(2002)
113. D.E. Polkinghorne, Narrative and self-concept. J. Narrat. Life Hist. 1(2), 135–153 (1991)
114. T. Preis, D.Y. Kennett, H.E. Stanley, D. Helbing, E. Ben-Jacob, Quantifying the behav-
ior of stock correlations under market stress. Sci. Rep. 2, 752. doi:10.1038/srep00752
(2012)
115. Z.W. Pylyshun, What the mind’s eye tells the mind’s brain: a critique of mental imagery.
Psychol. Bull. 80, 1–24 (1973)
116. R.A. Rensink, Change detection. Annu. Rev. Psychol. 53, 245–277 (2002)
117. D. Rodrik, In Search of Prosperity: Analytic Narratives on Economic Growth (Princeton
University Press, Princeton, 2003)
118. M. Roszczynska, A. Nowak, D. Kamieniarz, S. Solomon, J. Vitting Andersen, Short and
long term investor synchronization caused by decoupling. PLoS ONE 7, e50700 (2012).
doi:10.1371/journal.pone.0050700. The article can be retrieved from the website http://dx.
plos.org/10.1371/journal.pone.0050700
119. P.A. Samuelson, Proof that properly anticipated prices fluctuate randomly. Ind. Manage. Rev.
6(2), 41–49 (1965); E.F. Fama, Efficient capital markets: a review of theory and empirical
work. J. Financ. 25, 383–417 (1970); P.A. Samuelson, Collected Scientific Papers (MIT,
Cambridge, 1972)
120. R. Savit et al., Phys. Rev. Lett. 82, 2203 (1999); D. Challet, M. Marsili, Phys. Rev. E 62, 1862
(2000)
121. R. Schank, R. Abelson, Scripts, Plans, Goals, and Understanding (Lawrence Erlbaum
Associates, Hillsdale, 1977)
122. W.F. Sharpe, Capital asset prices: a theory of market equilibrium under conditions of risk.
J. Financ. 19(3), 425–442 (1964)
123. H. Shefrin, A Behavioral Approach to Asset Pricing. Academic Press Advanced Finance
Series, Chap. 15 (Academic/Elsevier, Amsterdam/Boston, 2008)
124. H.M. Shefrin, M. Statman, The disposition to sell winners too early and ride losers too long.
J. Financ. 40, 777 (1985)
125. R.J. Shiller, Do stock prices move too much to be justified by subsequent changes in
dividends. Am. Econ. Rev. 71, 421–436 (1981); S.D. LeRoy, R.D. Porter, Stock price
volatility: tests based on implied variance bounds. Econometrica 49, 97–113 (1981)
126. Similar results have been found for individual stocks of a given stock market. See, e.g.,
F. Longin, B. Solnik, Is the correlation in international equity returns constant 1960–1990.
J. Int. Money Financ. 14, 3–26 (1995); P. Cizeau, M. Potters, J.-P. Bouchaud, Correlation
structure of extreme stock returns. Quant. Financ. 1, 217–222 (2001)
127. H.A. Simon, A behavioral model of rational choice. Q. J. Econ. 69(1), 99–118 (1955)
128. D.J. Simons, C.F. Chabris, T. Schnur, Evidence for preserved representations in change
blindness. Conscious. Cogn. 11, 78–97 (2002)
129. D. Sornette, Predictability of catastrophic events: material rupture, earthquakes, turbulence,
financial crashes and human birth. Proc. Natl. Acad. Sci. USA 99(SUPP1), 2522–2529
(2002); K. Ide, D. Sornette, Oscillatory finite-time singularities in finance, population and
rupture. Physica A 307(1–2), 63–106 (2002); D. Sornette, K. Ide, Theory of self-similar
oscillatory finite-time singularities in finance, population and rupture. Int. J. Mod. Phys. C
14(3), 267–275 (2002); D. Sornette, W.-X. Zhou, Quant. Financ. 2, 468 (2002); D. Sornette,
W.-X. Zhou, Evidence of fueling of the 2000 new economy bubble by foreign foreign capital
inflow: implications for the future of the US economy and its stock market. Physica A 332,
412–440 (2004); D. Sornette, W.-X. Zhou, Predictability of large future changes in major
financial indices. Int. J. Forecast. 22, 153–168 (2006)
130. D. Sornette, J.V. Andersen, Increments of uncorrelated time series can be predicted with a
universal 75 % probability of success. Int. J. Mod. Phys. C 11(4), 713–720 (2000)
131. D. Sornette, J.V. Andersen, A nonlinear super-exponential rational model of speculative
financial bubbles. Int. Mod. Phys. C 13(2), 171–187 (2001)
132. D. Sornette, J.V. Andersen, Optimal prediction of time-to-failure from information revealed
by damage. Europhys. Lett. E 74(5), 778 (2006). The article can be retrieved from the site
arXiv.org/abs/cond-mat/0511134
133. E.C.G. Stueckelberg, A. Petermann, Helv. Phys. Acta 26, 499 (1953)
134. Taking the risk-free return used in the usual definition of the Sharpe ratio equal to 0
135. P. Tetlock, Giving content to investor sentiment: the role of media in the stock market.
J. Financ. LXII(3), 1139–1168 (2007)
136. The data used to construct Fig. 2.3 was taken from 1/1/2000 to 20/6/2008
137. The following important point of view was pointed out to the authors by D. Sornette
138. The minority game was introduced in the two papers: D. Challet, Y.-C. Zhang, Emergence
of cooperation and organization in an evolutionary game. Physica A 246, 407–418 (1997);
D. Challet, Y.-C. Zhang, On the minority game: analytical and numerical studies. Physica
A 256, (1998); see also the book D. Challet, M. Marsili, Y.-C. Zhang, Minority Games:
Interacting Agents in Financial Markets (Oxford University Press, Oxford, 2004)
139. A. Tversky, D. Kahneman, Subjective probability: a judgment of representativeness. Cogn.
Psychol. 2, 430–454 (1972)
140. A. Tversky, D. Kahneman, Judgment under uncertainty: heuristics and biases. Science 185,
1124 (1974)
141. A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice. Science
211(4481), 453–458 (1981)
142. J. Vitting Andersen, Estimating the level of cash invested in financial markets. Physica A 344,
168–173 (2004)
143. J. Vitting Andersen, Could short selling make financial markets tumble? Int. J. Theor. Appl.
Financ. 8(4), 509–521 (2005)
144. J. Vitting Andersen, D. Sornette, The $-game. Eur. Phys. J. B 31, 141 (2003). A similar rule
for updating scores was introduced in [56]
145. J. Vitting Andersen, D. Sornette, A mechanism for pockets of predictability in complex
adaptive systems. Europhys. Lett. 70, 697 (2006)
146. F.A. Wang, Overconfidence, investor sentiment and evolution. J. Financ. Intermed. 10, 138–
170 (2001)
147. P.C. Wason, D. Shapiro, Natural and contrived experience in a reasoning problem. Q. J. Exp.
Psychol. 23, 63–71 (1971)
148. M. Weber, C.F. Camerer, The disposition effect in securities trading: an experimental analysis.
J. Econ. Behav. Organ. 33, 167 (1998)
149. K.G. Wilson, The renormalization group: critical phenomena and the Kondo problem. Rev.
Mod. Phys. 47(4), 773 (1975)
150. With the risk-free return equal to 0
151. J. Wohlmutt, J. Vitting Andersen, Modelling financial markets with agents competing on
different time scales and with different amounts of information. Physica A 363, 459 (2006)
152. G. Ye, Inertia equity: the impact of anchoring price in brand switching (2004). SSRN-
id548862
153. C.H. Yeung, Y.-C. Zhang, Minority games. in Encyclopedia of Complexity and Systems
Science (Springer, New York/London, 2009), pp. 5588–5604
154. R.B. Zajonc, Feeling and thinking: preferences need no inferences. Am. Psychol. 35(2), 151
(1980)
155. R.M. Ziff, Phys. Rev. Lett. 69, 2670 (1992)
Index

Agent-based models, 62–65, 94 Card selection, 26


decoupling, 101 Cash availability, 125, 127–134
minority game, 65–71 balance equation, 128–133, 137, 139–141
Monte Carlo simulation, 107–113 Cellular automata, 62
nonlinearity of, 73 Challet, D., 65
for short selling, 136 Change blindness, 143, 155–156
simulation of, 99, 103, 104, 136 Chaos theory, 60, 61
Anchoring, 6, 31, 43, 44, 47 Clustering, 144–146, 154, 155
in financial markets, 44, 46 of market movements, 155
Arithmetic, 26 of volatility, 168, 170
Arthur, B., 65 Cognitive closure, 29, 98, 99, 101
Astrophysics, 94 Cognitive processes, 26–32, 57
Collective risk-taking, 94
Communication, viii, 55, 56, 154, 167–171
Bak, P., 149, 152, 156 Competition, 28
Balanced bear market, 86 Complexity theory, vii, 18, 61, 135, 136, 143,
Balanced bull market, 86 148
Ball, P., 22 financial market, 63
Bank contagion, 149, 163 Computer simulation, 99, 103–107
Belief perseverance, 31 $-game, 107–113
Bias, 25, 30–32, 38, 40, 42, 117, 118, 124 Monte Carlo, 107–118
anchoring, 31 of short selling, 136
belief perseverance, 31 of social processes, 154
framing, 30 Condensed matter physics, 64
hindsight, 31 Cooperation, 28, 100
law of small numbers, 31 Coordination, 118
long-only, 127, 139 Coupled oscillators, 159
optimism, 31 Covariance, 17, 18, 38
overconfidence, 30 instability of, 39
and price formation, 34 Credit crisis, vi, 6, 93, 138
self-attribution, 31 Cross-disciplinary research, 4, 7
Biological motors, 43, 45 Crowd–anti-crowd theory, 70
Bounded rationality, 25, 63 Curie temperature, 111, 122
Brownian motion, 43, 55
Burridge–Knopoff model, 163, 165
Decision-making process, vi, 29, 32, 52, 98,
155
Capital asset pricing model (CAPM), 16–18, behavioral aspects of, 114, 167
36, 38, 39, 148, 159, 160 collective, 93
falsifiability of, 114 heuristic rules, 68, 98, 100

J.V. Andersen and A. Nowak, An Introduction to Socio-Finance, 181


DOI 10.1007/978-3-642-41944-7, © Springer-Verlag Berlin Heidelberg 2013
182 Index

Decoupling, 98–120
   in agent-based models, 101
Determinism, 60, 85
Dimensional analysis, 37, 79–81
   for technical analysis, 81–85, 90–91
Disposition effect, 44
Dividends, 7–8, 125, 128–131, 133, 135, 137, 139, 140
   negative, 131
$-game, 71–74, 98, 99, 105–106, 136
   computer simulation, 107–113
   payoff function, 72
   price dynamics, 73
   speculative bubbles, 107
   strategy, 99
DSGE models, 4, 53

Earthquakes, 143, 149, 152, 153, 162, 163
Edwards, E., 25
Efficient market hypothesis, 3
   non-falsifiability of, 114
El Farol bar game, 65
Emotions, 28, 57
Equilibrium, 3, 64, 96
   DSGE models, 4
   Nash, 74, 100
Equity premium puzzle, 121, 125, 135, 139
European debt crisis, vi, 6
European Monetary System (EMS), 44
Experimental behavioral finance, 114
Experimental finance, 74, 94, 99, 114
   with humans, 113–119

Facebook, v
Falsifiability, 74, 113
Fat tails, 18, 19, 162, 168, 170
Fear, vii, 98
Federal Reserve Board, vi, 97, 127, 128, 133, 141
Feedback loops, 56, 57, 103, 124, 171
   in minority game, 68
   nonlinear, 73
Financial market, definition of, vii
Financial time, 84
Flash crash, 53, 154
Flashing ratchet, 43
   algorithm, 45–46, 48
Fluctuations, 37, 45, 111, 123, 139, 140
Framing, 30, 97, 98, 121
Froude number, 83, 85, 86, 90
Fundamental price, 2, 7–11, 91, 97, 100, 146

Genetic algorithm, 105
Globalization, vi, 58, 149
Graham, B., vi, 167
Greenspan, A., vi
Greenspan put, 124, 127
Growth theory, 125–132
   super-interest rate, 129, 135
Gutenberg–Richter law, 152, 153

Hedge funds, 136, 138
Herding, 98, 155
Hierarchy, 62
Hindsight, 31
Human behavior, vii, 4, 5, 25–32, 52
   accounting for, 96, 160
   collective, 61, 75, 97, 118
   irrational, 155
   risky, 93
   speculative, 99
Hume, D., 125

Index cohesive force, 163
Inflation, 9, 125
Initial conditions, 60
Interest rates, 7, 9, 95, 128
International Monetary Fund (IMF), 147
Iron bar, 112, 122
Irrational beliefs, 42, 53, 155

Kahneman, D., 25, 32, 35, 44
Kepler’s third law, 79–81
Keynesian beauty contest, 74, 100
Keynes, J.M., 8

Labeling, 29
Laplace, P.S., 60
Law of small numbers, 31
Lévy flights, 55
Liquidity, 69, 127, 131–135, 140
Logic, 27
Lorenz, E., 60
Loss aversion, 33
Lucas critique, 3

Magnetization, 111–112, 119, 122
Market depth, 69
Market forces, 64, 77, 124
Market phases, 86–89
Market portfolio, 17, 18
Markowitz portfolio theory, 11–16
   criticism of, 148
Material rupture, 4, 145, 146, 148
Materials science, 143
Mean field theory, 5, 135
Mean-variance approach, 15, 17
Media coverage, vi, 5, 58, 154–155
Memory effects, 143, 153, 162, 168, 170
Meso-level, 55
Michigan Consumer Sentiment index, 35
Minority game, 65–71
   complexity of, 66
   control parameter, 70
   decoupling in, 103
   limitations of, 71
   lookup table, 65
   phase transition, 70, 71
   predictability in, 70, 71, 103
   price dynamics, 69
   reverse engineering test, 76
   strategy, 65–68, 103
   volatility, 70
Momentum effect, 78
Monte Carlo simulations, 107–118
Motives, 27

Narratives, 54
NASDAQ Composite, 104–106
Nash equilibrium, 74, 100
Natural selection, 113
Neoclassical economic theory, 22
Newton’s first law, 78
Noise traders, 5
Nonlinearity, 73, 143, 149, 152, 163

Olami–Feder–Christensen (OFC) model, 165
Opinion formation, 168
Optimism, 31
Order, 94
   parameter, 65, 113
Overconfidence, 30

Particle physics, 64
Pension funds, 136
Percolation threshold, 147
Petersen, S., 77
Phase transition, 64, 111–113, 119, 122
   critical, 152
   in minority game, 70
Poincaré, H., 60
Popper, K., 74, 113
Portfolio management, 6
   Markowitz theory, 11–16
   non-Markowitz, 18–21
Predictability, 6, 35, 60, 76, 77, 109, 120
   in $-game, 106, 110
   of failure, 146
   in minority game, 70, 71, 103
   pockets of, 99, 102
   of speculative moments, 101, 108, 118
   in self-organized critical systems, 152
Prediction days, 99, 104–106
Price acceleration, 78, 81, 86
Price dynamics, 3, 56, 63
   avalanche of movements, 149, 158, 165
   impact of communication, 168–171
   in $-game, 73
   fast, 97, 152, 158
   in minority game, 69
   slow, 97, 152, 157
   as social phenomenon, 97, 143, 167
   sticky, 42–52
   threshold-like, 156
   tremor, 154, 156–166
Price-quakes, 156–166
   measure of amplitude, 161–163
Price stresses, 143, 158, 159, 161
Price trends, viii, 78, 81, 86
Price velocity, 78, 81, 86
Principal components analysis, 163
Probability, 33, 34, 46, 47, 60, 85
   of avalanche, 151
   of changing opinion, 168, 169
   conditional, 146, 147, 155, 156
   of failure, 145–147
   power law distribution, 151, 152, 162
Prospect theory, 25, 26, 32–34, 75, 96
   falsifiability of, 114
Psychology, 6, 25–32, 42, 149
   anchoring, 44
   change blindness, 143, 155–156
   cognitive closure, 29, 98, 99, 101
   cooperation, 28
   emotions, 28, 57
   financial time, 84
   Galilean principle, 77
   labeling, 29
   motives, 27
   narratives, 54
   prevention mode, 29
   promotion mode, 29
   schemas, 27
   self-enhancement, 28
   self-interest, 28
   self-structure, 28–30
   self-verification, 28
   and strategy, 68

Rational expectations theory, 2–3, 59, 145, 146, 148
   bubbles, 9, 94–96
   failure of, 65
   for material failure, 146
Relative investment level, 133, 134, 141
Relative wealth, 33
Renormalization, 37
   group, 63, 64
Reward-to-variability ratio, 38
Risk, 11–19, 21, 36, 46
   aversion, 33
   specific, 15
   systematic, 15
   systemic, 21, 143–153
Risk-free rate, 11, 16, 17, 38
Russian default crisis, 97

Sand pile, 150–152
Schema, 27
Schumpeter, J., 125
Securities and Exchange Commission (SEC), 138
Self-attribution, 31
Self-consistency, 18, 133
Self-enhancement, 28
Self-interest, 28
Self-organized criticality, 123, 149–153, 165
   absence of external force, 152
   BTW paper, 150, 151
   memory effects, 153, 162
   sand pile, 150–152
   time scales, 152
Self-structure, 28–30
Self-verification, 28
Sentiments, 6, 16, 18, 34, 35, 37, 40–42, 169, 171
   and pricing, 39
Shared reality, vi, 53–58, 97
Sharpe ratio, 38
Shiller, R., 10
Short selling, 131, 132, 135–138
   ban on, 138
   uptick rule, 138
   and volatility, 138
Sigurdardottir, J., 93
Simon, H.A., 25
Smith, A., 125
Smith, V., 74, 114
Social media, v
Social processes, vi–viii, 30, 53–58, 121, 167
   clusters of opinion, 154
Socio-finance, definition of, v, 171
Sociology, v, vii, 5, 25, 43, 53, 59, 143, 149
Solow, R., 125
Soros, G., 53
Spanning cluster, 145
Speculative bubbles, vii, 6, 57, 74, 78, 93, 94, 96, 98, 106–119
Spillover, 135, 138
S&P 500 index, 125, 129, 134
State of the market, 75, 121–124
Strategy, 46, 56, 57, 63, 98, 99
   decoupled, 102, 110, 117
   in $-game, 99
   long-only, 124, 136–139
   in minority game, 65–68, 103
   short selling, 135–138
Stress and strain, 143, 150, 152, 153
Super bear market, 86
Super bull market, 86
Super-interest rate growth, 129, 135
Swan, T., 125
Symmetry breaking, 111–113, 121–124, 127, 135, 138
Synchronization, 55, 56, 115–117

Tangency portfolio, 17
Technical analysis, viii, 77–91
   contrarian, 79
   dimensionless time, 83, 86, 90
   financial time, 84
   independence from units, 81, 82, 90
   learning interval, 83, 84, 86, 91
   market phases, 86–89
   momentum effect, 78
   as pseudo-science, 78
   time translation invariance, 82, 91
   trend-following, 78
Tectonic plates, 152, 153, 163, 165
Testosterone, 93
Trading algorithm, 43, 47–52
Tremor price dynamics (TPD), 154, 156–166
Trust, 58
Tversky, A., 25, 44
Two-fund separation theorem, 15
Ulam, S., 62
Universality, 64
Uptick rule, 138

Volatility, 36, 46, 48, 84, 124
   clustering, 4, 168, 170
   excess, 10–11, 25
   in minority game, 70
   and short selling, 138
Von Neumann, J., 62

Wealth effect, 121, 124, 128, 130–132, 138, 139

Yardsticks, 34, 36, 37, 156

Zhang, Y.C., 65
